[7.0.x] Rebuild for openssl3 & aws-sdk-cpp 1.9.379 #873
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
Force-pushed from b6d631f to aa4c70e.
Blocked on conda-forge/aws-sdk-cpp-feedstock#577 (or unblocking aws-sdk-cpp 1.9)
Force-pushed from 1d419d8 to 80eb635.
@conda-forge-admin, please rerender
Force-pushed from 0ba93bb to 62d9795.
Force-pushed from 0482b97 to b9316cf.
warning otherwise:
```
WARNING (pyarrow,lib/python3.10/site-packages/pyarrow/plasma-store-server): Needed DSO lib/libgflags.so.2.2 found in ['gflags']
WARNING (pyarrow,lib/python3.10/site-packages/pyarrow/plasma-store-server): .. but ['gflags'] not in reqs/run, (i.e. it is overlinking) (likely) or a missing dependency (less likely)
```
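For context, the usual way to address this kind of overlinking warning is to declare the library explicitly in the recipe. A minimal sketch of what that could look like in recipe/meta.yaml, assuming the fix is simply to list gflags for the affected output (section placement is illustrative, not necessarily the exact change made here):

```yaml
requirements:
  host:
    - gflags   # provides libgflags.so.2.2 at build time
  run:
    - gflags   # declared at runtime so the linked DSO is no longer flagged as overlinking
```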
remove options not supported upstream for 7.0.x that got backported accidentally (they are no-ops, but still)
…nda-forge-pinning 2022.12.09.20.17.28
it doesn't build on windows, fails without much info:
```
[75/110] Building CXX object src\arrow\flight\sql\CMakeFiles\arrow_flight_sql_shared.dir\Unity\unity_0_cxx.cxx.obj
FAILED: src/arrow/flight/sql/CMakeFiles/arrow_flight_sql_shared.dir/Unity/unity_0_cxx.cxx.obj
```
Force-pushed from b9316cf to 16fd0e8.
recipe/meta.yaml (outdated)
```diff
-    - cudatoolkit >=9.2                               # [cuda_compiler_version != "None"]
+    - cudatoolkit >={{ cuda_compiler_version_min }}   # [cuda_compiler_version != "None"]
```
This was intentional since Arrow only uses the CUDA driver library
Yes, that's why it's set to the minimum? I.e. on aarch it's 11, and cuda_compiler_version_min takes care of that on a cf level?
Or can we be sure that something compiled with CUDA 10.2 is still compatible with CUDA 9?
It doesn't use nvcc or any of the libraries in the cudatoolkit package. It only uses the CUDA driver library, libcuda, which is installed and versioned separately.
I believe this was added because cudatoolkit has a __cuda virtual package dependency to ensure that the driver version sufficiently supports the toolkit.
I'm not sure if there was a reason that we didn't use __cuda directly here. Regardless, if there is a reason I'd be more comfortable handling that in a follow up PR instead of this PR.
Could be because that virtual package wasn't around yet at the time when arrow added CUDA support (not sure exactly about the timing)... So let's just use __cuda?
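For illustration only, using the virtual package directly might look roughly like this in recipe/meta.yaml (a sketch; the version floor and exact selector are assumptions, not something this PR changes):

```yaml
requirements:
  run:
    # hypothetical: constrain the driver via the __cuda virtual package
    # instead of pinning cudatoolkit; the floor of 9.2 is illustrative
    - __cuda >=9.2  # [cuda_compiler_version != "None"]
```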
I'm not sure. It may have been weird solver behavior at the time or something else. Regardless, could we defer changing this to a follow up PR and just leave this as cudatoolkit >=9.2 for now?
Which is why I put in a lot of effort to make it reviewable per-commit, and pinged several times for people to do so. So I don't agree that this is surprising, but well, we're all very busy and things slip through the cracks.
In any case, I don't see any harm done here - aside from the fact that CUDA<10.2 is positively ancient, we can just loosen the pins again and voilà, packages are installable again even for those on CUDA 9.2.
I understand you put a lot of effort into that PR and appreciate it.
I'm only suggesting that when we have unrelated changes in PRs in the future, it would help to comment on them to identify the change and raise it for discussion. Could you please do this in the future?
I'm not trying to discuss this specific change; I agree with you that it is better handled in an issue.
> Could you please do this in the future?
Adding pings for the feedstock-team so far has been almost entirely pointless, so adding more is hardly going to help. Again, there were clean-ups necessary all over the place for the maintenance branches (e.g. the different bindings and components were not in sync with upstream, or across platforms, etc.; just look at the commit history per branch).
To be clear: I'm more than happy to have others participate in discussions and maintenance, and keep pinging in the hopes that this happens. Barring that, changes will be developed on main, and for big changes (like #875) I will leave ample time to ask for feedback. However, I will continue to backport minor cleanups to the maintenance branches. In this particular instance I may have misjudged a minuscule edge case (that's easy to fix afterwards too), but the principle is sound. If people want to review, review on main, or simply participate on the feedstock.
Still, in an effort towards maximum transparency, I noted even the backports in #918:
> I have prepared the following PRs, which also backport some clean-ups from #875 (and making sure we enable all bindings/libs that are available for the respective version):
So all in all, I think this is making a mountain out of a molehill, and it's verging on disrespectful to insist on more communication after (almost) all previous attempts have gone by unanswered. It's a two-way street.
@h-vetinari I truly appreciate all of the work you've been doing in helping to move this and other feedstocks forward. I was sick for ~3 weeks recently and truly out of commission and then was on vacation celebrating my 1st anniversary with my wife, so I've been MIA for quite a while and am still playing catch up.
I'm also perfectly happy to move forward as is here. With CUDA 12.0 having been released recently being back 3 major versions feels more than generous enough.
> I was sick for ~3 weeks recently and truly out of commission and then was on vacation celebrating my 1st anniversary with my wife, so I've been MIA for quite a while and am still playing catch up.
Sorry to hear - hope you're feeling better now, and congrats on the anniversary 🥳
> With CUDA 12.0 having been released recently being back 3 major versions feels more than generous enough.
Not sure how long it'll be until we support 12 in conda-forge, but in any case, this should now be compatible with 9.2 again (so 9, 10, 11). Not sure how many people are still on drivers more than 3 years old; I mainly wanted to avoid broken metadata (because we're using 10.2 as minimum now, and loosening restrictions after the fact is always easier than tightening them).
Force-pushed from 754d016 to f5e3901.
Force-pushed from f5e3901 to 2e6f0b4.
Hi! This is the friendly conda-forge automerge bot! I considered the following status checks when analyzing this PR:
Thus the PR was passing and merged! Have a great day!
This PR has been triggered in an effort to update openssl3.
Notes and instructions for merging this PR:
Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself, don't close this PR until your rebuild has been merged.
Closes #892
If this PR was opened in error or needs to be updated please add the `bot-rerun` label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase `@conda-forge-admin, please rerun bot` in a PR comment to have the `conda-forge-admin` add it for you.

This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/autotick-bot/actions/runs/3324531783, please use this URL for debugging.