Fix handling of prepared collections #38
base: develop
Conversation
I also noticed that sets weren't working as expected, since they, too, passed the wrong column type when serializing their children. The same goes for tuples. However, tuples may also contain mixed data types, and the two test cases I added for tuples are failing. Let me know if you have any ideas why that might be! Thanks.
(And similarly for the …)
Hello! Thanks for your contribution. Have you tried using BigInt instead of Python's integer? There are extra types which you can use: https://github.com/Intreecom/scyllapy#extra-types

Agree about floats.
Yeah. I'm only using prepared queries in my app to avoid having to transform my query parameters into ScyllaPy's extra types, and I noticed that a few things did not work, since the container type, rather than the type of the contained value, was passed when converting them. I believe this is on the right path? Automatic type conversion would indeed only be possible with prepared queries, so I'm only testing those. The logic for handling tuples would still need to be fixed to pass the prepared-collections test, since ScyllaDB currently complains about getting 2-3 bytes instead of the expected 4 bytes for an …. I unfortunately don't have much time to work on this at the moment, so I would appreciate any ideas you might have to finalize this and perhaps make it a bit cleaner. Thanks!
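The byte-count complaint can be illustrated with plain CQL-style framing (this is just a sketch, not ScyllaPy code): the CQL binary protocol serializes an `int` as a fixed 4-byte big-endian value and a `smallint` as 2 bytes, so a value packed with the wrong width produces a payload the server rejects.

```python
import struct

# CQL int: 4-byte big-endian two's complement.
as_int = struct.pack(">i", 7)
# CQL smallint: 2 bytes. A code path that picks the wrong
# width produces a payload ScyllaDB rejects for an int column.
as_smallint = struct.pack(">h", 7)

print(len(as_int), len(as_smallint))  # → 4 2
```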
Will see what I can do to fix that.
I experienced some issues regarding type marshaling of maps, such as `map<int, bigint>`. After digging through the code, I identified the issue: if `column_type` is a `Map`—such as `Map(Int, BigInt)` in this case—the `PyDict` case of `py_to_value` would pass along `ColumnType::Map` as the `column_type` to the recursive call, and since the default match in the `PyInt` case was int32, it would incorrectly pass an int32 to a bigint field.

To make issues like these simpler to debug in the future, I also refactored the `PyInt` and `PyFloat` cases not to choose an implicit default but instead return a `BindingError` if an unsupported type were to be bound.
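For illustration, the behavior described above can be modeled in Python. The actual code is Rust inside ScyllaPy; names like `py_to_value`, `ColumnType`, and `BindingError` are borrowed from the discussion, and this toy model is a simplification, not the real implementation:

```python
import struct

class BindingError(Exception):
    """Raised when a Python value cannot be bound to the target CQL type."""

class ColumnType:
    """Toy stand-in for the driver's column-type enum."""
    def __init__(self, name, *inner):
        self.name = name    # e.g. "int", "bigint", "map"
        self.inner = inner  # element types for collections

def py_to_value(value, column_type):
    """Simplified model of the conversion logic."""
    if isinstance(value, dict):
        if column_type.name != "map":
            raise BindingError(f"cannot bind dict to {column_type.name}")
        key_type, value_type = column_type.inner
        # The fix: recurse with the *inner* key/value types,
        # not with the Map column type itself.
        return {py_to_value(k, key_type): py_to_value(v, value_type)
                for k, v in value.items()}
    if isinstance(value, int):
        # The fix: no implicit int32 default; unsupported
        # targets raise instead of silently mis-sizing.
        if column_type.name == "int":
            return struct.pack(">i", value)
        if column_type.name == "bigint":
            return struct.pack(">q", value)
        raise BindingError(f"cannot bind int to {column_type.name}")
    raise BindingError(f"unsupported value {value!r}")

# map<int, bigint>: keys serialize to 4 bytes, values to 8 bytes.
result = py_to_value({1: 2},
                     ColumnType("map", ColumnType("int"), ColumnType("bigint")))
```

With the pre-fix behavior (always recursing with the outer `Map` type and defaulting ints to int32), the value `2` above would have been packed as 4 bytes and rejected by the `bigint` field.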