Conversation

@Zumzigzoo commented Feb 26, 2023

Not sure how well this will work in production, since this implementation trades a few seconds in __init__ for faster predict calls.

torch.jit.script and torch.jit.optimize_for_inference take a couple of seconds to run. As long as autotagger instances aren't repeatedly created and destroyed, inference should be around 2.5x faster than the current implementation.

TODO: remove the dependency on fastai's PILImage in app.py and autotag (edit: done)
