
Optimize the memory footprint during postprocessing #149

Open
wenyikuang opened this issue Mar 22, 2024 · 1 comment
@wenyikuang (Collaborator)
Why?

Right now it takes more than 200 GB of memory to run the sightglass postprocessing in generate_metadata, which makes it buggy and painful to run. In the long term, memory usage will grow as O(n) with the data size if we keep loading the whole dataset into memory to edit it.

That isn't necessary.

How?
Probably by using the lazy loading offered by polars.
This will probably need:
Pruning the logic to an MVP protocol, then rewriting the indexing/loading logic, and adding the features back. (A sketch of the lazy-loading approach is below.)
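
A minimal sketch of the lazy-loading idea with polars, assuming the results live in parquet files; the file pattern and column names here are hypothetical placeholders, not the actual sightglass schema:

```python
import polars as pl

# Build a lazy query plan instead of reading the whole dataset into memory.
# "results_up*.parquet" and the column names are hypothetical placeholders.
lazy_results = (
    pl.scan_parquet("results_up*.parquet")  # nothing is loaded yet
    .filter(pl.col("completed_status") == "Success")
    .select(["building_id", "upgrade", "end_use", "value"])
)

# Stream the result straight to disk so the full table never sits in RAM.
lazy_results.sink_parquet("metadata_subset.parquet")
```

With this shape, peak memory is bounded by the batches polars streams through rather than by the total dataset size.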

Restriction:

  • Don't introduce sweeping changes or shift the tech stack.

When:
Before the next release.

Target:
Hopefully by the next release we can finish the work on a normal PC (~100 GB RAM).

@wenyikuang wenyikuang self-assigned this Mar 22, 2024
@wenyikuang wenyikuang added the bug Something isn't working label Mar 22, 2024
@asparke2 (Member) commented Apr 9, 2024

Ideally, if each upgrade can be processed sequentially, it should use < 32 GB of RAM per upgrade and therefore be runnable on any team member's machine.
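
A minimal sketch of that sequential, one-upgrade-at-a-time idea using polars streaming; the upgrade count, file names, and aggregation are illustrative only:

```python
import polars as pl

# Illustrative only: handle one upgrade at a time so peak memory is bounded
# by a single upgrade's data rather than the whole run.
for upgrade_id in range(11):  # hypothetical number of upgrades
    (
        pl.scan_parquet(f"results_up{upgrade_id:02d}.parquet")
        .group_by("building_id")
        .agg(pl.col("value").sum())
        .sink_parquet(f"postprocessed_up{upgrade_id:02d}.parquet")
    )
```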
