Optimize model loading with cache
QubitPi committed Oct 6, 2024
1 parent 5b00b76 commit bc36beb
Showing 1 changed file with 1 addition and 1 deletion.
app.py (2 changes: 1 addition & 1 deletion)
@@ -44,7 +44,7 @@ def load_lottieurl(url: str):
###### ➠ If you want to translate the subtitles to English, select the task as "Translate"
###### I recommend starting with the base model and then experimenting with the larger models, the small and medium models often work well. """)


@st.cache_resource
def change_model(current_size, size):
    if current_size != size:
        loaded_model = whisper.load_model(size)
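
For context, below is a minimal, self-contained sketch of the caching pattern this commit applies. The diff is truncated, so everything beyond the visible lines (the imports shown for completeness, the return value, and the unchanged-size branch) is an assumption for illustration rather than the repository's actual code:

import streamlit as st
import whisper


@st.cache_resource
def change_model(current_size, size):
    # st.cache_resource keeps the returned object in a cache keyed by the
    # arguments, so reruns of the Streamlit script with the same size reuse
    # the already-loaded Whisper model instead of reading the weights again.
    if current_size != size:
        loaded_model = whisper.load_model(size)
        return loaded_model
    return None  # assumed: the caller keeps its existing model when the size is unchanged

In Streamlit's current API, st.cache_resource is the recommended way to cache unserializable global resources such as ML models, which is why it suits a Whisper model loader better than reloading the weights on every script rerun.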
