
Improve OpenAI Integration #34

Merged
oleander merged 19 commits into main from pr/6-openai on Feb 8, 2025

Conversation

@oleander (Owner) commented Feb 8, 2025

Enhances OpenAI integration and model handling.

Changes:

  • Update OpenAI API integration
  • Improve model selection and token management
  • Enhance prompt handling and generation
  • Update model-related configurations

Git AI Test and others added 16 commits on February 8, 2025 at 08:37
Add `profile` module and tracing dependency

Based only on the changes visible in the diff, this commit:
- Adds a new `profile` module in `src/lib.rs`.
- Inserts a new macro in the `profile` module in `src/profile.rs` (see the sketch after this list).
- Adds a new dependency, `tracing`, in `Cargo.toml`.
- Updates the version from `0.2.56` to `0.2.57` in `Cargo.lock`.
- Adds the new `tracing` dependency to the dependencies list in `Cargo.lock`.
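
The macro itself is not shown on this page; the following is a minimal sketch of what a tracing-based profiling macro could look like, where the name `profile!` and its shape are assumptions rather than the actual contents of `src/profile.rs`:

```rust
// Hypothetical profiling macro built on the `tracing` crate; the real
// macro in src/profile.rs may differ.
#[macro_export]
macro_rules! profile {
    ($name:expr) => {
        // Enter a span that stays recorded (with timing, if a subscriber
        // captures it) until the end of the enclosing scope.
        let _span = tracing::info_span!("profile", name = $name).entered();
    };
}
```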
Add Ollama client implementation for model interactions

- Introduce `OllamaClient` for handling requests to the Ollama API.
- Implement `OllamaClientTrait` with methods for generating responses and checking model availability (a sketch follows this list).
- Create a new `client.rs` module for request/response handling that integrates with both Ollama and OpenAI models.
- Modify `Cargo.toml` to include the `parking_lot` dependency for better synchronization.
- Set up a structured way to format prompts and parse JSON responses from the Ollama API, with robust error handling and logging.
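
Based on that description, the trait might have looked roughly like the sketch below; the method names, signatures, and the use of `async_trait` and `anyhow` are assumptions, since the original `client.rs` is not visible here:

```rust
use anyhow::Result;
use async_trait::async_trait;

// Hypothetical shape of the trait described in the commit above.
#[async_trait]
pub trait OllamaClientTrait {
    /// Send a prompt to the Ollama API and return the generated text.
    async fn generate(&self, model: &str, prompt: &str) -> Result<String>;

    /// Check whether the given model is available locally.
    async fn is_available(&self, model: &str) -> Result<bool>;
}
```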
Remove OllamaClient implementation and references from the codebase

Delete the `ollama.rs` file, which contained the `OllamaClient` struct and its associated trait, and update `client.rs` to remove all dependencies on and usages of `OllamaClient` while keeping support for OpenAI models. This cleanup removes unnecessary components and streamlines model availability checks.
Refactor `truncate_to_fit` function for improved token handling

Adjust the line retention strategy in `truncate_to_fit` based on the attempt count, progressively shrinking the output across retries so that the final result fits within the specified maximum token limit. If all truncation attempts fail, return a minimal version of the content, or a truncation message if the content is still too large.
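
That strategy reads roughly like the following sketch. The real function (its signature appears later in the review) also takes a `Model` for tokenization; the word-based token counter here is a stand-in assumption:

```rust
// Illustrative retry-based truncation; not the actual implementation.
fn truncate_to_fit(text: &str, max_tokens: usize) -> String {
    // Stand-in token counter: whitespace-separated words.
    let count = |s: &str| s.split_whitespace().count();

    if count(text) <= max_tokens {
        return text.to_string();
    }

    let lines: Vec<&str> = text.lines().collect();
    // Keep a shrinking fraction of the lines on each attempt:
    // 1/2, then 1/4, 1/6, 1/8.
    for attempt in 1..=4 {
        let keep = (lines.len() / (attempt * 2)).max(1);
        let candidate = lines[..keep].join("\n");
        if count(&candidate) <= max_tokens {
            return candidate;
        }
    }

    // Every attempt was still too large: fall back to a notice.
    "[content truncated: input too large for the token limit]".to_string()
}
```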
```
Rename function `get_instruction_token_count` to `create_commit_request` in `commit.rs`

Remove the `prompt`, `file_context`, `author`, and `date` parameters from `generate_commit_message` in `openai.rs`
```
Update prompt file by adding 30 new lines of content

Based only on the changes visible in the diff, this commit adds a significant amount of new content to the prompt.md file.
Remove 'assert_matches' feature flag from hook module

Based only on the changes visible in the diff, this commit removes the '#![feature(assert_matches)]' line from the prompt.md file.
Remove lines 43-42 from prompt.md

---

    Add two lines at the end of prompt.md for additional context

---

    Add two lines at the end of rust.yml for unspecified purpose
"Update Cargo.toml, commit.rs, Cargo.lock, and prompt.md files to incorporate 'mustache' dependency and enhance commit request creation"
@oleander requested a review from Copilot on February 8, 2025 at 14:02


Copilot reviewed 6 out of 10 changed files in this pull request and generated no comments.

Files not reviewed (4)
  • .cursorignore: Language not supported
  • src/model.rs: Evaluated as low risk
  • src/profile.rs: Evaluated as low risk
  • tests/patch_test.rs: Evaluated as low risk
Comments suppressed due to low confidence (2)

src/openai.rs:27

  • The function generate_commit_message should have test coverage to ensure it handles various diff inputs correctly.
pub async fn generate_commit_message(diff: &str) -> Result<String> {
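
A sketch of the kind of coverage the review asks for might look like this; it assumes a tokio test runtime and a configured API key, making it an integration test rather than a pure unit test:

```rust
// Hypothetical test; the diff literal and test name are illustrative.
#[tokio::test]
async fn generates_a_message_for_a_simple_diff() {
    let diff = "diff --git a/foo.rs b/foo.rs\n+fn foo() {}\n";
    let message = generate_commit_message(diff).await.expect("generation failed");
    assert!(!message.trim().is_empty());
}
```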

src/openai.rs:33

  • The function truncate_to_fit should have test coverage to ensure it handles edge cases, such as empty inputs and very large inputs, correctly.
fn truncate_to_fit(text: &str, max_tokens: usize, model: &Model) -> Result<String> {
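
Edge-case coverage along the lines the review suggests could look like the sketch below; the `Model::GPT4` variant is an assumption, so substitute whatever variants the crate's `Model` enum actually defines:

```rust
// Hypothetical tests for the signature shown above.
#[test]
fn handles_empty_and_oversized_input() {
    let model = Model::GPT4; // assumed variant

    // Empty input should pass through unchanged.
    let out = truncate_to_fit("", 100, &model).unwrap();
    assert!(out.is_empty());

    // Very large input must come back non-empty and within budget.
    let big = "line of text\n".repeat(10_000);
    let out = truncate_to_fit(&big, 100, &model).unwrap();
    assert!(!out.is_empty());
}
```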
@oleander changed the title from "feat: Improve OpenAI Integration" to "Improve OpenAI Integration" on Feb 8, 2025
@oleander merged commit a2a9ad2 into main on Feb 8, 2025
5 checks passed
@oleander deleted the pr/6-openai branch on February 8, 2025 at 14:21
oleander added a commit that referenced this pull request on Feb 8, 2025
* feat: OpenAI Integration changes

* Add `profile` module and tracing dependency

Based only on the changes visible in the diff, this commit:
- Adds a new `profile` module in `src/lib.rs`.
- Inserts a new macro in the `profile` module in `src/profile.rs`.
- Adds a new dependency, `tracing`, in `Cargo.toml`.
- Updates the version from `0.2.56` to `0.2.57` in `Cargo.lock`.
- Adds the new `tracing` dependency to the dependencies list in `Cargo.lock`.

* Add Ollama client implementation for model interactions

- Introduce `OllamaClient` for handling requests to the Ollama API.
- Implement `OllamaClientTrait` with methods for generating responses and checking model availability.
- Create a new `client.rs` module for request/response handling that integrates with both Ollama and OpenAI models.
- Modify `Cargo.toml` to include the `parking_lot` dependency for better synchronization.
- Set up a structured way to format prompts and parse JSON responses from the Ollama API, with robust error handling and logging.

* Remove OllamaClient implementation and references from the codebase.

Delete the `ollama.rs` file, which contained the `OllamaClient` struct and its associated trait, and update `client.rs` to remove all dependencies on and usages of `OllamaClient` while keeping support for OpenAI models. This cleanup removes unnecessary components and streamlines model availability checks.

* Refactor `truncate_to_fit` function for improved token handling

Adjust the line retention strategy in `truncate_to_fit` based on the attempt count, progressively shrinking the output across retries so that the final result fits within the specified maximum token limit. If all truncation attempts fail, return a minimal version of the content, or a truncation message if the content is still too large.

* Fix redundant line in truncate_to_fit function by removing duplicate halving of current_size

* Add .cursorignore file to exclude target, tmp, .DS_Store, and .git from cursor operations.

* ```
Rename function `get_instruction_token_count` to `create_commit_request` in `commit.rs`

Remove the `prompt`, `file_context`, `author`, and `date` parameters from `generate_commit_message` in `openai.rs`
```

* Update prompt file by adding 30 new lines of content

- Based only on the changes visible in the diff, this commit adds a significant amount of new content to the prompt.md file.

* Remove 'assert_matches' feature flag from hook module

Based only on the changes visible in the diff, this commit removes the '#![feature(assert_matches)]' line from the prompt.md file.

* Remove lines 43-42 from prompt.md

---

    Add two lines at the end of prompt.md for additional context

---

    Add two lines at the end of rust.yml for unspecified purpose

* Remove unused 'mod common;' declaration from patch test file

* "Update Cargo.toml, commit.rs, Cargo.lock, and prompt.md files to incorporate 'mustache' dependency and enhance commit request creation"

* "Refactor commit.rs to use mustache for template rendering and error handling"

* Remove commit message scoring and prompt optimization functions in openai.rs

* Update the 'get_instruction_template' and 'create_commit_request' functions in commit.rs and modify prompt.md

* Remove prompt instruction lines from resources/prompt.md file

---------

Co-authored-by: Git AI Test <test@example.com>
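
For context on the mustache-related commits above, rendering a prompt template with the `mustache` crate might look roughly like this; the template text and the `diff` variable are illustrative assumptions, not the project's actual prompt:

```rust
use std::collections::HashMap;

// Hypothetical prompt rendering with the `mustache` crate.
fn render_prompt(diff: &str) -> Result<String, mustache::Error> {
    let template = mustache::compile_str(
        "Write a commit message for the following diff:\n\n{{diff}}",
    )?;

    let mut data = HashMap::new();
    data.insert("diff", diff);
    template.render_to_string(&data)
}
```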