Replies: 2 comments 5 replies
-
Hello. Please explain more. What databases are you using? What do you mean when you say "even though they are the same column the decimal point is not deterministic"? Usually such issues occur due to different precision between databases, and so the minimal precision needs to be adjusted accordingly for the diff to be accurate.
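To illustrate what I mean by adjusting to the minimal precision, here is a small sketch (plain Python, not reladiff's actual normalization code): an exact comparison of two doubles that represent the same logical value can fail, while rounding both sides to a shared precision makes them agree.

```python
# Two representations of the "same" value, e.g. computed on different engines.
a = 0.1 + 0.2   # 0.30000000000000004 under IEEE 754 double arithmetic
b = 0.3

# Exact comparison reports a spurious difference.
assert a != b

# Normalizing both sides to the minimal shared precision removes it.
precision = 6
assert round(a, precision) == round(b, precision)
```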
-
Thanks for the quick reply @erezsh! In short, I am only using Trino, but I am using reladiff to check whether the output of a specific ETL pipeline is deterministic (for example, across different Spark versions). The issue is that with doubles, the decimal precision produced by the pipeline varies slightly between runs (as expected). I tried using formatting and rounding in the normalize_number function, but the diff still reports differences. So I was thinking that maybe I could add some double tolerance to the diff itself, but I haven't figured out a good way to do that. Cheers for the quick reply, I really appreciate it!
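The kind of tolerance I have in mind would look something like this (a hypothetical sketch, not reladiff's API — the function name and defaults are made up): instead of comparing normalized string representations exactly, treat two doubles as equal when they agree within a relative/absolute tolerance.

```python
import math

def doubles_equal(x: float, y: float,
                  rel_tol: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    """Compare two doubles within a tolerance rather than exactly.

    Hypothetical helper: rel_tol bounds the difference relative to the
    larger magnitude; abs_tol handles values near zero.
    """
    return math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)

# Values differing only in trailing decimal noise compare equal...
assert doubles_equal(1.0000000001, 1.0000000002, rel_tol=1e-6)
# ...while genuinely different values still diff.
assert not doubles_equal(1.0, 1.1, rel_tol=1e-6)
```

The catch for a hash-based diff is that a tolerance is not transitive, so it cannot simply be folded into per-row checksums; it would have to apply at the stage where individual mismatched rows are compared.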
-
Hi! I am currently trying out reladiff, and it looks like a really nice project! However, I am running into a somewhat curious issue: my diff fails when comparing doubles. Even though the values come from the same column, the decimal precision is not deterministic, so the comparison reports differences. I tried introducing rounding as part of normalize_number, but, as predicted, it still fails. What is the best way to approach this problem, and where could a tolerance be introduced in the code to make it work for this use case?
Cheers!