
How uv got so fast | Andrew Nesbitt

uv isn't fast because it's written in Rust—it's fast because it dropped 15 years of backwards compatibility and leveraged Python packaging standards that only became possible in 2022-2023.

· software engineering
Summary

• The real breakthrough was PEP 658 (2022) putting static metadata in PyPI's API—before that, pip had to download and execute setup.py just to discover dependencies
• uv deliberately drops legacy support: no .egg files, no pip.conf, no system Python installs, ignores python<4.0 upper bounds that are always wrong anyway
• Most speed wins don't need Rust: parallel downloads, HTTP range requests for wheel metadata, global cache with hardlinks, and the PubGrub resolver could all be implemented in pip today
• Rust helps with zero-copy deserialization and lock-free concurrency, but the architectural decisions matter more than the language
• pip can't adopt these optimizations because 15 years of backwards compatibility takes precedence over speed
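To see how little of this depends on language choice, the "global cache with hardlinks" item can be sketched in plain Python. The function and directory layout here are illustrative, not uv's actual cache structure:

```python
import os
import shutil
from pathlib import Path

def install_from_cache(cache_dir: Path, site_packages: Path) -> None:
    """Link every cached file into an environment instead of copying it.

    A hardlink shares the same inode as the cached file, so a package
    already present in the global cache costs near-zero disk space and
    time to "install" into yet another virtual environment.
    """
    for src in cache_dir.rglob("*"):
        if src.is_file():
            dest = site_packages / src.relative_to(cache_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            try:
                os.link(src, dest)        # instant: no file data is copied
            except OSError:
                shutil.copy2(src, dest)   # cache on a different filesystem
```

`os.link` only works within a single filesystem, hence the copy fallback; that is also why tools that do this keep the cache on the same volume as the environments when they can.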

The common explanation for uv's speed—"it's written in Rust"—misses the actual story. For years, Python packaging required executing arbitrary code (setup.py) just to discover what dependencies a package needed, creating a chicken-and-egg problem: you couldn't know a package's dependencies without running its setup.py, but you couldn't safely run setup.py without first installing its build dependencies. A series of PEPs from 2016-2022 (518, 517, 621, 658) gradually fixed this by introducing pyproject.toml, separating build frontends from backends, and finally putting static metadata directly in PyPI's API. PEP 658 went live in May 2023; uv launched February 2024. The tool could only exist because the infrastructure finally supported it.
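Concretely, PEP 658 lets an index advertise a standalone metadata file alongside each distribution in the Simple API (the JSON form comes from PEP 691, and the `core-metadata` key name from PEP 714). A minimal sketch of what a resolver does with that, using a hand-written, illustrative response — the project name and URLs are made up:

```python
# A trimmed, illustrative PEP 691-style JSON response for one project.
# Real responses come from an index's Simple API when requested with
# the header "Accept: application/vnd.pypi.simple.v1+json".
sample_response = {
    "name": "example-pkg",
    "files": [
        {
            "filename": "example_pkg-1.0-py3-none-any.whl",
            "url": "https://files.example.org/example_pkg-1.0-py3-none-any.whl",
            # PEP 658/714: metadata is served as a separate file
            "core-metadata": {"sha256": "abc123"},
        },
        {
            "filename": "example_pkg-0.9.tar.gz",
            "url": "https://files.example.org/example_pkg-0.9.tar.gz",
            # sdist: dependencies are only known after running a build
            "core-metadata": False,
        },
    ],
}

def metadata_urls(response: dict) -> list[str]:
    """Return URLs of the standalone metadata files (file URL + '.metadata').

    When "core-metadata" is truthy, a resolver can fetch just the few-KB
    METADATA file instead of downloading (or building) the whole archive.
    """
    return [
        f["url"] + ".metadata"
        for f in response["files"]
        if f.get("core-metadata")
    ]
```

This is why the PEP matters for resolution speed: the entire dependency graph can be explored by fetching tiny metadata files rather than full archives.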

uv's speed comes primarily from elimination—every code path you don't have is one you don't wait for. It drops .egg support, ignores pip.conf entirely, skips bytecode compilation by default, requires virtual environments, and enforces stricter spec compliance. It ignores python<4.0 upper bounds (which are defensive, not predictive) and uses first-index-wins for multiple package sources. Each dropped feature is code pip still executes. More importantly, most of uv's optimizations don't require Rust: HTTP range requests to read wheel metadata without full downloads, parallel downloads, global caching with hardlinks, and the PubGrub resolver could all be implemented in pip today. The Rust-specific wins—zero-copy deserialization with rkyv, lock-free concurrency, no interpreter startup overhead—are real but secondary.
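The range-request trick works because wheels are zip archives, and zip keeps its table of contents (the central directory) at the end of the file, with the small `.dist-info/METADATA` member typically written near the end as well. A sketch under those assumptions — `fetch_range` is a stand-in for an HTTP GET with a `Range: bytes=start-end` header, and uv's real implementation differs:

```python
import io
import zipfile

def wheel_metadata_from_tail(fetch_range, file_size: int, tail: int = 65536) -> str:
    """Extract *.dist-info/METADATA from a remote wheel with one ranged read.

    Fetches only the last `tail` bytes of the wheel. zipfile tolerates the
    missing prefix: it re-bases member offsets from the end-of-central-
    directory record, so any member whose data lies inside the tail can be
    read without downloading the rest of the archive.
    """
    start = max(0, file_size - tail)
    blob = fetch_range(start, file_size - 1)
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        name = next(n for n in zf.namelist()
                    if n.endswith(".dist-info/METADATA"))
        return zf.read(name).decode()
```

On a real index the file size would come from a HEAD request's Content-Length, and a resolver would fall back to a larger ranged read or a full download if METADATA landed outside the tail. Nothing here needs Rust; it needs an HTTP client that sends a `Range` header.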

The lesson extends beyond Python: static metadata beats code execution for dependency discovery, and backwards compatibility is expensive. Cargo and npm have operated with static metadata from the start. pip could be faster without rewriting in Rust, but won't because preserving compatibility with 15 years of edge cases takes precedence. uv proves that starting fresh with modern assumptions beats incremental optimization of legacy systems.