Improve CPU and GPU Dockerfiles, resulting in much smaller images
Add option: save network stream to local file while transcribing
fix: limit CPU usage for VAD onnxruntime inference session by setting OMP_NUM_THREADS
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
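A minimal sketch of the idea behind this fix, assuming the env-var approach the commit title names: OpenMP sizes its thread pool when the runtime initialises, so the cap must be set at process start-up, before onnxruntime is imported.

```python
import os

# Hedged sketch: cap OpenMP threads before onnxruntime creates its
# inference session; setting this after the runtime has initialised
# its thread pool has no effect.
os.environ["OMP_NUM_THREADS"] = "1"
```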
Add support for RTSP stream
Signed-off-by: makaveli10 <suryanvineet47@gmail.com>
Make writing audio frames optional
- Use a thread lock around the model in single model mode
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
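The thread-lock change above can be sketched as follows. The class and method names here are hypothetical, not the project's actual API; the point is that in single-model mode all clients share one model instance, so inference is serialised with a `threading.Lock`.

```python
import threading

class SharedModel:
    """Hypothetical sketch: every client shares one model instance,
    so concurrent inference calls are serialised with a lock."""

    def __init__(self, model):
        self.model = model
        self._lock = threading.Lock()

    def transcribe(self, audio):
        # Only one client thread may run inference at a time.
        with self._lock:
            return self.model(audio)
```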
Expose the SRT file location of the transcription client
Update TensorRT-LLM to v0.9.0
Fix spelling of "detection" in README.md.
Single model mode
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Integrate live translation
The help text for `--max_connection_time` is incorrect; it appears to be a copy-paste mistake from `--cache_path`.
fix(run_server.py): help text for max_connection_time argument
Previously, the server only accepted local file paths for custom Faster Whisper models. This change allows passing HuggingFace repo IDs which are automatically downloaded and converted to CTranslate2 format by the backend if not already in CTranslate2 format. Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Feat: support HuggingFace model IDs for faster_whisper_custom_model_path.
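A minimal sketch of the dispatch this change describes; the helper name is hypothetical. A string that exists on disk is used as a local model directory, while anything else is treated as a HuggingFace repo ID to download (and convert to CTranslate2 format if needed, as the commit body states).

```python
import os

def resolve_custom_model(path_or_repo_id: str) -> str:
    """Hypothetical helper: decide whether a custom-model argument
    is a local path or a HuggingFace repo ID."""
    if os.path.exists(path_or_repo_id):
        return "local"
    # e.g. "openai/whisper-small": download, then convert to
    # CTranslate2 format if it is not already in that format.
    return "huggingface"
```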
Specify command to run client script. Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
README.md: add instructions for running client
Add `--enable-timestamps` option to the `run_client.py` script to print transcribed text with timestamps. Sample output with translation enabled:

```
[0.000 -> 7.440] And so, my fellow Americans, ask not what your country can do for you.
[7.440 -> 10.300] Ask what you can do for your country.
TRANSLATION to fr:
[0.000 -> 7.440] Et donc, mes camarades américains, ne demandez pas ce que votre pays peut faire pour vous.
[7.440 -> 10.300] Demandez ce que vous pouvez faire pour votre pays.
```

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
Enable timestamps for transcribed text
feat: update to support faster whisper 1.2.0
Resolves pkg_resources missing during wheel build Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Bump openai-whisper version to 20250625.
Replace hardcoded [-4:] truncation with a configurable display_segments parameter (default: 4) in both Client and TranscriptionClient classes. Fixes #377
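The change above amounts to replacing a fixed slice with a parameter; a minimal sketch, with a hypothetical helper name standing in for the client-side logic:

```python
def tail_segments(segments, display_segments=4):
    # Previously hardcoded as segments[-4:]; now configurable
    # via the display_segments parameter (default: 4).
    return segments[-display_segments:]
```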
Add cross-client GPU batch inference for faster_whisper backend
When VAD removes all speech from an audio chunk, transcriber.transcribe() returns (None, info). Calling list(None) raises TypeError. The _process_multi path already handles this case; this aligns _process_single to match.
Fix NoneType crash in _process_single when VAD filters all audio
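A sketch of the guard this fix describes; the function name is hypothetical, but the failure mode is as stated above: `transcribe()` returns `(None, info)` when VAD removes all speech, and `list(None)` raises `TypeError`.

```python
def process_single(transcriber, audio_chunk):
    """Hypothetical sketch: guard against segments being None
    before materialising them with list()."""
    segments, _info = transcriber.transcribe(audio_chunk)
    if segments is None:
        # VAD filtered out all speech: nothing to decode.
        return []
    return list(segments)
```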
feat: make display_segments configurable in Client/TranscriptionClient
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Expose __version__ in package root and update dependencies in setup.py
Fix crash when no --files provided; use microphone input instead