
[Don't Merge] Issue 258 - Update to rank and fix operator #260

Closed
wants to merge 4 commits

Conversation

clausherther
Contributor

Issue this PR Addresses/Closes

Closes #258

Summary of Changes

Simplified the example to explain the suggestion made here

Why Do We Need These Changes

Reviewers

@clausherther

@@ -22,6 +22,8 @@
     row_condition
 ) %}

+{% set data_type = dbt.type_numeric() if not data_type else data_type %}
clausherther (Contributor Author)

FYI, Snowflake won't correctly compare a float in the database to a value cast to decimal (the default for the test), so it makes more sense to use the built-in types across the test to make sure the comparisons actually work.
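For reference, a minimal sketch of the pattern this change introduces; the final `cast(...)` line is only illustrative of how the defaulted type would be used, not the test's actual code, and `column_name`/`value` are hypothetical names:

```sql
{# Fall back to dbt's cross-database numeric type when the caller doesn't pass one #}
{% set data_type = dbt.type_numeric() if not data_type else data_type %}

{# Hypothetical usage: render the same built-in type on both sides of the comparison #}
cast({{ column_name }} as {{ data_type }}) = cast({{ value }} as {{ data_type }})
```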

Contributor

From the official documentation and my testing, inserting floating-point data will create a number column type with the smallest precision and scale that fits the values. E.g., when creating a table with initial data, if a column contains [0, 1, 0.5], the assigned type will be number(2,1), so no information is lost at that point. However, casting that to a "numeric", "decimal", or "number" type without explicit precision and scale will result in rounding, and casting it to any "float" or "double" type has the usual accuracy problems.

When comparing, Snowflake will happily attempt to compare any numeric types, with the usual issues from floats:

| comparison | result |
| --- | --- |
| `cast(0.5 as number(2,1)) = cast(0.5 as numeric(28,2))` | TRUE |
| `cast(0.5 as float) = cast(0.5 as numeric(28,2))` | TRUE |
| `cast(0.5 as float) = cast(0.5 as numeric)` | FALSE |
| `0.1 + 0.2 = 0.3` | TRUE |
| `(cast(0.1 as float) + cast(0.2 as float)) = cast(0.3 as float)` | FALSE |
| `cast(3.1415926535897932384626433832795028841 as number(38,37)) = cast(3.141592653589793 as float)` | TRUE |

TL;DR: It makes sense to use the built-in types everywhere to avoid db-engine-specific issues.
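A minimal Snowflake SQL sketch to reproduce most of the comparisons above (the column aliases are illustrative; the results are as reported in this thread):

```sql
select
    cast(0.5 as number(2,1)) = cast(0.5 as numeric(28,2))          as num_vs_scaled_numeric,    -- TRUE
    cast(0.5 as float)       = cast(0.5 as numeric(28,2))          as float_vs_scaled_numeric,  -- TRUE
    cast(0.5 as float)       = cast(0.5 as numeric)                as float_vs_default_numeric, -- FALSE
    0.1 + 0.2 = 0.3                                                as exact_decimal_sum,        -- TRUE
    (cast(0.1 as float) + cast(0.2 as float)) = cast(0.3 as float) as float_sum                 -- FALSE
;
```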

@@ -13,8 +13,8 @@ union all
 select
     2 as idx,
     '2020-10-22' as date_col,
-    1 as col_numeric_a,
-    0 as col_numeric_b,
+    cast(1 as {{ dbt.type_numeric() }}) as col_numeric_a,
Contributor

Is there any database engine that wouldn't automatically cast every subsequent value on insert when the first value has an explicit type? (It doesn't hurt to cast them all; I'm just curious and couldn't find the information online.)
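For illustration, the pattern the change adopts in the seed data: cast the values explicitly in every branch of the union rather than relying on the first branch to set the column types. The first branch's values here are made up for the sketch; only the second branch appears in the diff above, and it is truncated before col_numeric_b.

```sql
select
    1 as idx,
    '2020-10-21' as date_col,
    cast(0 as {{ dbt.type_numeric() }}) as col_numeric_a,  -- illustrative value
    cast(1 as {{ dbt.type_numeric() }}) as col_numeric_b   -- illustrative value
union all
select
    2 as idx,
    '2020-10-22' as date_col,
    cast(1 as {{ dbt.type_numeric() }}) as col_numeric_a,
    cast(0 as {{ dbt.type_numeric() }}) as col_numeric_b   -- shown for symmetry; truncated in the diff above
```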

clausherther deleted the fix/issue-258-claus branch on September 1, 2023, 15:45

Successfully merging this pull request may close these issues.

[BUG] expect_column_most_common_value_to_be_in_set handling of ties