Commit 8cc10ef

Format & README
ionic-bond committed May 24, 2024
1 parent 69ef55f commit 8cc10ef
Showing 4 changed files with 11 additions and 11 deletions.
README.md (4 changes: 2 additions & 2 deletions)
@@ -63,7 +63,7 @@
2. [**Install CUDA 11 on your system.**](https://developer.nvidia.com/cuda-11-8-0-download-archive) (Faster-Whisper is not compatible with CUDA 12 for now).
3. [**Install cuDNN to your CUDA dir**](https://developer.nvidia.com/cuda-downloads) if you want to use **Faster-Whisper**.
4. [**Install PyTorch (with CUDA) to your Python.**](https://pytorch.org/get-started/locally/)
- 5. [**Create a Google API key**](https://aistudio.google.com/app/apikey) if you want to use **Gemini API** for translation. (Recommend, Free 60 requests / minute)
+ 5. [**Create a Google API key**](https://aistudio.google.com/app/apikey) if you want to use **Gemini API** for translation. (Free 15 requests / minute)
6. [**Create an OpenAI API key**](https://platform.openai.com/api-keys) if you want to use **Whisper API** for transcription or **GPT API** for translation.
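
A quick way to confirm the CUDA and PyTorch prerequisites from steps 2-4 are satisfied (a minimal check, assuming PyTorch is already installed; it is not part of the README itself):

```
# Prints the PyTorch version, the CUDA version it was built against, and whether a GPU is visible.
import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```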

**If you are on Windows, you also need to:**
@@ -76,7 +76,7 @@
**Install the release version from PyPI (Recommended):**

```
- pip install stream-translator-gpt
+ pip install stream-translator-gpt -U
stream-translator-gpt
```
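
The `-U` added to the install command is pip's shorthand for `--upgrade`, so rerunning it also moves an existing install up to the latest PyPI release. A standard way to check what ended up installed (plain pip commands, shown for illustration):

```
pip install --upgrade stream-translator-gpt   # long form of -U
pip show stream-translator-gpt                # reports the installed version
```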

README_CN.md (4 changes: 2 additions & 2 deletions)
@@ -61,7 +61,7 @@ flowchart LR
2. [**Install CUDA 11 on your system.**](https://developer.nvidia.com/cuda-11-8-0-download-archive) (Faster-Whisper may have compatibility issues with CUDA 12)
3. If you want to use **Faster-Whisper**, you need to [**install cuDNN into your CUDA directory**](https://developer.nvidia.com/cuda-downloads)
4. [**Install PyTorch (with CUDA) for your Python.**](https://pytorch.org/get-started/locally/)
- 5. If you want to use the **Gemini API** for translation, you need to [**create a Google API key**](https://aistudio.google.com/app/apikey). (Recommended; 60 free requests per minute)
+ 5. If you want to use the **Gemini API** for translation, you need to [**create a Google API key**](https://aistudio.google.com/app/apikey). (15 free requests per minute)
6. If you want to use the **Whisper API** for transcription or the **GPT API** for translation, you need to [**create an OpenAI API key**](https://platform.openai.com/api-keys)

**If you are a Windows user, you also need to:**
@@ -74,7 +74,7 @@ flowchart LR
**Install the stable release from PyPI (Recommended):**

```
- pip install stream-translator-gpt
+ pip install stream-translator-gpt -U
stream-translator-gpt
```

README_PyPI.md (4 changes: 2 additions & 2 deletions)
@@ -16,7 +16,7 @@
2. [**Install CUDA 11 on your system.**](https://developer.nvidia.com/cuda-11-8-0-download-archive) (Faster-Whisper is not compatible with CUDA 12 for now).
3. [**Install cuDNN to your CUDA dir**](https://developer.nvidia.com/cuda-downloads) if you want to use **Faster-Whisper**.
4. [**Install PyTorch (with CUDA) to your Python.**](https://pytorch.org/get-started/locally/)
- 5. [**Create a Google API key**](https://aistudio.google.com/app/apikey) if you want to use **Gemini API** for translation. (Recommend, Free 60 requests / minute)
+ 5. [**Create a Google API key**](https://aistudio.google.com/app/apikey) if you want to use **Gemini API** for translation. (Free 15 requests / minute)
6. [**Create an OpenAI API key**](https://platform.openai.com/api-keys) if you want to use **Whisper API** for transcription or **GPT API** for translation.

**If you are on Windows, you also need to:**
@@ -29,7 +29,7 @@
**Install the release version from PyPI (Recommended):**

```
- pip install stream-translator-gpt
+ pip install stream-translator-gpt -U
stream-translator-gpt
```

stream_translator_gpt/llm_translator.py (10 changes: 5 additions & 5 deletions)
@@ -14,7 +14,7 @@
from .common import TranslationTask, LoopWorkerBase


- def parse_json_completion(completion):
+ def _parse_json_completion(completion):
pattern = re.compile(r'\{.*}', re.DOTALL)
json_match = pattern.search(completion)
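
The hunk above cuts off before the helper's return path. A minimal sketch of how such a JSON-extraction helper can be completed, assuming the model is prompted to answer with a JSON object and that the raw completion is returned when parsing fails; the `translation` key is a hypothetical field name, not taken from the repository:

```
import json
import re


def _parse_json_completion(completion):
    # Grab the outermost {...} block from the model output; re.DOTALL lets '.' span newlines.
    pattern = re.compile(r'\{.*\}', re.DOTALL)
    json_match = pattern.search(completion)
    if json_match is None:
        # No JSON object found: fall back to the raw completion text.
        return completion
    try:
        parsed = json.loads(json_match.group(0))
    except json.JSONDecodeError:
        return completion
    # 'translation' is used here for illustration only; the real key is set by the prompt.
    return parsed.get('translation', completion)
```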

@@ -80,7 +80,7 @@ def _translate_by_gpt(self, translation_task: TranslationTask):
messages=messages,
)

- translation_task.translated_text = parse_json_completion(
+ translation_task.translated_text = _parse_json_completion(
completion.choices[0].message.content)
except (APITimeoutError, APIConnectionError) as e:
print(e)
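
For context, `_translate_by_gpt` follows the shape of the OpenAI chat-completions client shown in this hunk. A self-contained sketch of that call pattern, with a placeholder model name and prompt rather than the repository's own:

```
from openai import OpenAI, APIConnectionError, APITimeoutError

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [
    {'role': 'system', 'content': 'Translate the user text to English. Reply as JSON: {"translation": "..."}'},
    {'role': 'user', 'content': 'Bonjour tout le monde'},
]
try:
    completion = client.chat.completions.create(model='gpt-4o-mini', messages=messages)
    print(completion.choices[0].message.content)
except (APITimeoutError, APIConnectionError) as e:
    print(e)
```
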
@@ -100,7 +100,7 @@ def _gpt_to_gemini(gpt_messages: list):
gemini_messages.append(gemini_message)
return gemini_messages

- def _translate_gy_gemini(self, translation_task: TranslationTask):
+ def _translate_by_gemini(self, translation_task: TranslationTask):
# https://ai.google.dev/tutorials/python_quickstart
client = genai.GenerativeModel(self.model)
messages = self._gpt_to_gemini(self.history_messages)
@@ -117,7 +117,7 @@ def _translate_gy_gemini(self, translation_task: TranslationTask):
response = client.generate_content(messages,
generation_config=config,
safety_settings=safety_settings)
- translation_task.translated_text = parse_json_completion(response.text)
+ translation_task.translated_text = _parse_json_completion(response.text)
except (ValueError, InternalServerError) as e:
print(e)
return
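
Similarly, the renamed `_translate_by_gemini` drives the `google.generativeai` client. A minimal call with that library looks roughly like this; the API key and model name are placeholders, and the repository's generation config and safety settings are omitted:

```
import google.generativeai as genai

genai.configure(api_key='YOUR_GOOGLE_API_KEY')    # placeholder key
client = genai.GenerativeModel('gemini-1.0-pro')  # placeholder model name
messages = [{'role': 'user', 'parts': ['Translate to English: Bonjour tout le monde']}]
response = client.generate_content(messages)
print(response.text)
```
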
@@ -128,7 +128,7 @@ def translate(self, translation_task: TranslationTask):
if self.llm_type == self.LLM_TYPE.GPT:
self._translate_by_gpt(translation_task)
elif self.llm_type == self.LLM_TYPE.GEMINI:
- self._translate_gy_gemini(translation_task)
+ self._translate_by_gemini(translation_task)
else:
raise ValueError('Unknown LLM type: {}'.format(self.llm_type))
