Add per-requirement --no-deps option support in requirements.txt #10837
base: main
Conversation
Closed and re-opened to enable CI runs.
FYI Conda is a generic package installer, not a Python package installer; e.g. you can install Rust, Node.js, or OpenSSL with it. That means it cannot enforce that packages ship standard Python metadata; it is up to the package author to follow best practices when building a Python package. Their documentation lays out correctly how to do this, but obviously mistakes can be made. That is to say, it will almost certainly not be a bug in conda; rather, the individual package needs to fix it.
No, there is no bug. The package ecosystems are simply not aligned (certain artifacts are provided by different packages).
Nevertheless, pip works with Python's standard metadata. If you want pip to recognise a package that is installed on your system, it needs to have that metadata, and we don't consider "so that we can use pip with packages that don't follow the Python standards" as a compelling argument in favour of a pip feature.
Let's say I have the following situation: package
I've ended up here following some thread from StackExchange around this, and just wanted to contribute a concrete example. I've a team that have been using

I suppose it could be argued (by purists?) that the people packaging
I've encountered similar issues to those mentioned above, and more. If I were to categorize the scenarios, they would fall into these buckets:

A) Indicating that certain direct dependencies (and their transitives) are expected to be already "provided" - that is, a situation where one wants to build and test code locally with everything installed, but at the point of runtime elsewhere, some of the dependencies may already be provided by the environment. This occurs in situations such as pyspark, notebook kernels, or AWS Lambda layers, where some dependencies are already installed. In this scenario, I would want an installer to warn me or fail if the dependencies were expected to be provided but were not provided when the installer was run.

My understanding of this PR is that most are talking about B? I'm not sure that

Scenario A

For this scenario, I still want the resolver to "solve" the puzzle of the dependencies and to be able to read the metadata of whatever environment it will install into. I would consider it a bug if at the end of the installation process my environment was inconsistent with the instructions that I provided to the installer. Here I still want the resolver to run fully, but the installer to run partially and error if the resulting environment is deemed to be broken.

Scenario B

This is intentionally installing a possibly broken environment. For this, I think it's important to make it clear to the user that they are removing guard-rails. Here, I think the intention is for the resolver to run fully, but the installer should only run partially. The difference here is that it shouldn't be considered an error if the resulting environment is deemed to be broken?

Scenario C

This seems different to

Scenario D

Here, I am saying that I've previously created a valid installation plan according to a resolver and "frozen" or "locked" it. I now want to run the installation plan elsewhere and just run installation. I would like an error at the end if the resulting environment is deemed to be broken.

Question
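As an aside, the "warn or fail if expected-provided dependencies are missing" check from Scenario A can be sketched outside pip today, assuming the provided packages expose standard Python metadata. This is a hypothetical helper, not anything this PR implements; `importlib.metadata` is standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def check_provided(names):
    """Return the subset of expected-provided distributions that are missing."""
    missing = []
    for name in names:
        try:
            version(name)  # raises if no installed distribution matches this name
        except PackageNotFoundError:
            missing.append(name)
    return missing

# Hypothetical list of packages the runtime environment is expected to provide
expected = ["pip", "definitely-not-installed-anywhere"]
# Print any expected-provided packages that are missing from this environment
print(check_provided(expected))
```

For the "error at the end if the environment is broken" part of Scenarios A and D, pip already offers `pip check`, which reports packages with missing or incompatible dependencies after the fact.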
My scenario is more like A, though your description has many details that do not match.
using workaround that would be better with pypa/pip#9948 or pypa/pip#10837
Rather than a bug, I'd assess that Conda is being consistent with pip by deferring to pip's behavior. Conda similarly provides a YAML format that includes an optional

Moreover, I think the argument being made here for not merging this PR would be equally valid for Conda not changing its implementation (i.e., should Conda undertake reimplementing its
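For readers unfamiliar with it, the Conda format being referenced here is the `environment.yml` file, which supports an optional `pip:` subsection whose entries are handed off to pip. A minimal sketch (the environment and package names are illustrative):

```yaml
name: example-env
dependencies:
  - python=3.11
  - pip
  - pip:                   # optional subsection delegated to pip
      - some-pypi-package  # illustrative name
```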
Seems like this has stalled, so trying to add some new thoughts. I can see the tension between some desiring the "quick fix" for easy Dockerfiles & co., and others concerned about making it too easy for the unwary to fall off the edge of the proverbial cliff. If "Scenario B" is indeed the dominant use case now, we could perhaps be a lot more specific with a slightly different option to make it less likely to do something unexpected. So, maybe we are

The other part of the conversation (which I don't fully understand) is the difference between the installer and the resolver. Taking a guess at these, I'm guessing that the resolver might penetrate underneath matplotlib and plotly and find other dependencies that would still be resolved (and installed?). I don't really like this option, but it would be workable (there may just be a few more I need to add to my explicit

Might this help resolve the tension?
Hi, I would like to add a use case this PR would fix that I believe has not been mentioned above or in #9948. OpenCV maintains 4 different versions of their software (see 3 of them here: https://github.com/opencv/opencv-python?tab=readme-ov-file#installation-and-usage). Unfortunately, they do not officially conflict, in that I can install multiple versions (opencv-python and opencv-contrib-python, for example) in the same environment, and

This issue arose for me while working on Freemocap. We have multiple different ML libraries we use (for example, MediaPipe and YOLO) and there is no standard for which OpenCV distribution is used (nor is it feasible to ask every ML library to agree on one). Our goal would be to install the superset opencv-contrib-python, which contains everything needed to satisfy every other OpenCV distribution. Adding this option to requirements.txt would allow us to keep simple install instructions for our users, and would remove this conflict as a barrier to distributing our software as an application through PyApp.

The OpenCV discussion around this hasn't pointed towards any work on fixing the issue, for example:
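If it lands, the option this PR proposes would let that be expressed directly in the requirements file. A sketch of how it might look (the exact syntax is what this PR defines; the package names follow the comment above, and which line carries the flag depends on whose metadata drags in the conflicting distribution):

```
# requirements.txt (hypothetical usage of the proposed per-requirement flag)
mediapipe --no-deps        # skip its pinned OpenCV distribution
opencv-contrib-python      # install the superset build explicitly
ultralytics
```

Note the trade-off raised elsewhere in this thread: skipping a package's dependencies skips all of them, so its non-OpenCV requirements would then need to be listed explicitly as well.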
@pfmoore @groodt you seem to be against merging this PR, but you never proposed alternative advice for the described situation.
Don't mix package managers, basically. Pip and conda use different metadata, so this sort of situation can arise. I'm sorry if that's not the answer you hoped for, but it's the best I can offer. You could try asking conda to include standard Python packaging metadata for their native library packages, but I suspect they would say (quite reasonably) that Python metadata isn't designed for native libraries (correct, it isn't) and so they don't plan on doing so.
To be clear, it is not about conda per se. The artifacts could very well be provided by pacman, scoop, spack, etc. But I guess the answer is clear: there is currently no solution.
This PR would really help us over in https://github.com/freemocap/freemocap We have two dependencies, one of which includes

The end result is that my users get BOTH versions of

There is a quick fix, which is to uninstall both versions after the initial

Here's a link to the hacky pop-up nonsense we implemented to handle this issue - https://github.com/freemocap/freemocap/blob/256f8d89ea332b255ff6f41e96e4892595f8319b/freemocap/gui/qt/widgets/opencv_conflict_dialog.py lol

Please merge this, would really appreciate it, thaaanks! (See @philipqueen's comment above for details - #10837 (comment))
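The "quick fix" described reads as something like the following sketch (assuming the two OpenCV distributions named in @philipqueen's comment above are the conflicting pair):

```shell
# Initial install pulls in both OpenCV builds via transitive dependencies
pip install freemocap

# Remove both conflicting builds, then reinstall only the superset build
pip uninstall -y opencv-python opencv-contrib-python
pip install opencv-contrib-python
```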
Is there a blocker for this PR? This would also really help us.
Since there's a massive discussion around this, FYI the project this really bites is PyTorch. Each project has to include it as a dependency if it wants to be plug-and-play, but in about 70% of cases this results in a different version being demanded. So by now it must be tens of thousands of people who have wrecked their environment's PyTorch install by installing something that didn't make pip happy, which leads to a gigabyte or two of downloading the wrong version and minutes of installing (and reinstalling).
@pradyunsg |
Closes #9948