What's the problem this feature will solve?

Creating a new virtual environment in a modern Python project can be quite slow, sometimes on the order of tens of seconds even on very high-end hardware, once you have a lot of dependencies.

It also takes up a lot of space: my ~/.virtualenvs/ is almost 3 gigabytes, and this is a relatively new machine; that isn't even counting my ~/.local/pipx, which is another 434M. (My ~/Library/Caches/pip is only 256M, and presumably all those virtualenvs contain multiple full, uncompressed copies of it!)

Describe the solution you'd like

Rather than unpacking and duplicating all the data in wheels, pip could store the cache unpacked, so all the files are already on the filesystem, and then clone them into place on copy-on-write filesystems rather than copying them. While there may be other bottlenecks, this would also reduce disk usage by an order of magnitude. Given that platforms generally use shared memory maps for shared object files, if it's done right this could additionally reduce the memory footprint of Python interpreters in different virtualenvs with large C extensions loaded.

Alternative Solutions

You could get a similar reduction effect by setting up an import hook, using zipimport, or doing some kind of .pth file shenanigans, but I feel like those all have significant drawbacks.
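The copy-on-write "clone into place" idea can be sketched in a few lines. This is a minimal illustration, not pip's actual implementation: it assumes Linux and uses the `FICLONE` ioctl (the mechanism behind `cp --reflink`), falling back to an ordinary copy on filesystems or platforms that don't support reflinks. The function name `clone_into_place` and the ioctl constant are spelled out here for demonstration purposes.

```python
import fcntl
import os
import shutil
import tempfile

# FICLONE ioctl request number on Linux, _IOW(0x94, 9, int).
# On macOS/APFS the analogous primitive would be clonefile().
FICLONE = 0x40049409


def clone_into_place(src: str, dst: str) -> None:
    """Reflink src to dst when the filesystem supports it, else copy."""
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        try:
            # Shares the underlying data blocks: O(1) in file size,
            # and no extra disk usage until one side is modified.
            fcntl.ioctl(fdst.fileno(), FICLONE, fsrc.fileno())
        except OSError:
            # Not a copy-on-write filesystem (e.g. ext4): plain copy.
            shutil.copyfileobj(fsrc, fdst)


# Tiny demonstration: the clone is byte-identical to the source.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "cached-wheel-member.so")
dst = os.path.join(workdir, "site-packages-copy.so")
with open(src, "wb") as f:
    f.write(b"wheel data" * 100)
clone_into_place(src, dst)
print(open(dst, "rb").read() == b"wheel data" * 100)
```

Either way the destination ends up with the right bytes; on btrfs, XFS, or ZFS the ioctl path succeeds and the blocks are shared until modified, which is what would make thousands of duplicated site-packages files nearly free.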
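For context on the zipimport alternative mentioned above, here is a small self-contained sketch of what it looks like: a single zip archive on `sys.path` can be imported from directly, without ever unpacking it, so one shared archive could in principle serve many environments. The archive path and module name (`deps.zip`, `greet`) are invented for the example.

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny archive containing one module.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "deps.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("greet.py", "def hello():\n    return 'hi from the zip'\n")

# Putting the archive on sys.path activates the built-in zipimport
# hook; the module is read out of the zip rather than from loose files.
sys.path.insert(0, archive)
import greet

print(greet.hello())
```

This shows the appeal (one file, no duplication) but also hints at the drawbacks: C extensions can't be loaded from a zip, and many packages assume their data files exist as real paths on disk.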