light processing on over threshold errors, and keep track of record count #330

Merged
merged 2 commits from pq_val_test into parquet_validator on Jan 28, 2025

Conversation

lchen-2101 (Collaborator)

No description provided.

@lchen-2101 requested a review from @jcadam14 on January 28, 2025 at 00:41

Coverage report

File                                               Lines missing (new stmts)
src/regtech_data_validator/validation_results.py
src/regtech_data_validator/validator.py            133-134, 294, 315-316
Project Total

This report was generated by python-coverage-comment-action

jcadam14 (Contributor) left a comment

Looks good, just a few comments to discuss.

@@ -199,7 +205,6 @@ def validate_batch_csv(
)
yield results

print("Processing other logic errors")
for validation_results, _ in validate_chunks(
jcadam14 (Contributor)
I say we take out any of the validate stuff that isn't lf/parquet, because I think we've proven the batch CSV approach isn't the way to go. Either here or in a new story.

lchen-2101 (Collaborator, Author)
Let's deal with it in another story. I think I replicated parts of that code and abstracted it out to what we have now for lazyframes; the thought is to still be able to do CLI CSV, but it would then fall into using the lazyframe batch processing, as in the sketch below.
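A minimal sketch of that thought, assuming polars; the helper name and chunk size are hypothetical, not from this PR. The CLI CSV path would scan the file lazily and hand chunks to the same lazyframe batch validation:

import polars as pl

CHUNK_SIZE = 50_000  # hypothetical batch size

def iter_csv_chunks(path: str):
    # Scan lazily so the whole CSV is never materialized at once;
    # infer_schema_length=0 reads every column as a string.
    lf = pl.scan_csv(path, infer_schema_length=0)
    total = lf.select(pl.len()).collect().item()  # record count, as tracked in this PR
    for offset in range(0, total, CHUNK_SIZE):
        # Each collected chunk would feed the existing lazyframe validation.
        yield lf.slice(offset, CHUNK_SIZE).collect()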

validation_results = format_findings(
validation_results, schema.name.value, checks
)

error_counts, warning_counts = get_scope_counts(validation_results)
results = ValidationResults(
record_count=df.height,
error_counts=error_counts,
jcadam14 (Contributor)
What is the intended use of this?

lchen-2101 (Collaborator, Author)
I'm gonna guess you meant to comment on the record_count part. It's so the aggregator knows the total records of the submitted file, so we don't do that processing in the API, lessening the load there, and truly just have the API upload the file and nothing else.
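A hedged sketch of that flow; the dataclass shape beyond the fields shown in the diff is an assumption. The validator computes record_count from df.height once, and downstream consumers read it instead of re-scanning the file:

from dataclasses import dataclass, field

import polars as pl

@dataclass
class ValidationResults:
    # record_count lets the aggregator report the submitted file's size
    # without the API ever counting rows itself.
    record_count: int
    error_counts: dict = field(default_factory=dict)    # assumed type
    warning_counts: dict = field(default_factory=dict)  # assumed type

df = pl.DataFrame({"uid": ["A", "B", "C"]})
results = ValidationResults(record_count=df.height)
assert results.record_count == 3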

register_schema = get_register_schema(context)
-validation_results = validate(register_schema, pl.DataFrame({"uid": all_uids}), 0)
+validation_results = validate(register_schema, pl.DataFrame({"uid": all_uids}), 0, True)
jcadam14 (Contributor)
I know this will probably never happen, but say a bank submits an SBLAR with over 1 million entries and they accidentally use the same UID for every row. We'd end up with register errors beyond our max limit, as well as moving on to processing other logic errors beyond the max limit. It doesn't need to happen here, but I think we should look at treating these the same.

lchen-2101 (Collaborator, Author)
Maybe, although this part isn't running on the submitted lf/df; it runs on the information we already gathered during the previous pass (syntax errors), so it's a lot less intensive, and it currently doesn't count towards the limit.
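A minimal sketch of why this pass is light, using the polars calls implied by the snippet above; max_errors is a hypothetical cap echoing the limit discussed. The register check only groups the UID column collected earlier, and its findings could be truncated the same way the other errors are:

import polars as pl

all_uids = ["UID1", "UID1", "UID2"]  # gathered during the earlier syntax-error pass
max_errors = 1_000_000               # hypothetical cap

duplicates = (
    pl.DataFrame({"uid": all_uids})
    .group_by("uid")
    .len()                      # -> columns: uid, len
    .filter(pl.col("len") > 1)  # UIDs used on more than one row are register errors
    .head(max_errors)           # would treat these the same as other over-threshold errors
)
print(duplicates)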

@lchen-2101 lchen-2101 merged commit a341b4a into parquet_validator Jan 28, 2025
2 of 4 checks passed
@lchen-2101 lchen-2101 deleted the pq_val_test branch January 28, 2025 16:10