Releases · szczyglis-dev/py-gpt
2.4.17
- Refactored kernel and render events.
- Added multiple formatted extra outputs from plugins.
- Added a 'Reset (clear)' option in the context list for clearing the current context.
- Improved handling of agent step outputs.
- Fixed the audio button.
- Improved the stop event.
- Other small fixes.
2.4.16
- Code Interpreter and Files I/O input/output are now displayed in the chat window with syntax highlighting.
- Refactored plugins structure.
- Fix: command execution in Chat with Files mode when no index is selected.
- Fix: missing translations in Agent mode.
2.4.15
- Vision analysis added to all modes.
- Added commands for the Vision plugin: image analysis from an attachment, screenshot analysis, and camera image analysis, all performed in the background in every mode, without switching to vision mode.
- Improved execution of multiple commands at once.
- Improved integration with IPython and extended instructions for the model.
- Other fixes and improvements.
2.4.14
- Added a 'Loop / evaluate' mode for Llama-index agents, allowing them to work autonomously in a loop: the agent evaluates the correctness of received results on a percentage scale from 0% to 100% and automatically continues and corrects responses that fall below the expected score (see the sketch after this entry).
- Updated layout: mode and model lists replaced with comboboxes.
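A minimal sketch of the loop/evaluate idea, assuming hypothetical helpers: run_agent and evaluate_score are illustrative stand-ins, not part of the py-gpt API.

```python
import random

# Hypothetical stand-ins for an agent call and a 0-100 self-evaluation;
# they only exist here to make the sketch runnable.
def run_agent(task: str, feedback: str) -> str:
    return f"answer to: {task} ({feedback or 'first attempt'})"

def evaluate_score(task: str, result: str) -> int:
    return random.randint(0, 100)

def loop_until_accepted(task: str, threshold: int = 75, max_steps: int = 5) -> str:
    """Re-run the agent until its self-evaluated score reaches the threshold."""
    feedback, result = "", ""
    for _ in range(max_steps):
        result = run_agent(task, feedback)
        score = evaluate_score(task, result)   # percentage scale, 0-100
        if score >= threshold:
            break                              # accepted, stop looping
        feedback = f"Previous answer scored {score}%; correct and improve it."
    return result

print(loop_until_accepted("Summarize the changelog"))
```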
2.4.13
- Introduced Code Interpreter (v2) with IPython: enables session state retention on a kernel and building on previous computations, allowing for a Jupyter-like workflow that is useful for iterative development and data analysis. Supports magic commands such as !pip install <package_name> for package installation within the session. (currently in beta)
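As an illustration of the kernel-state idea (not py-gpt's internal wiring), a plain in-process IPython shell already keeps variables alive between executed "cells", assuming IPython is installed:

```python
from IPython.core.interactiveshell import InteractiveShell

# A single IPython shell retains state between run_cell() calls,
# which is the behaviour a Jupyter-like workflow builds on.
shell = InteractiveShell.instance()
shell.run_cell("data = [1, 2, 3]")          # first "cell": define a variable
result = shell.run_cell("sum(data) * 10")   # later "cell": reuse the state
print(result.result)                         # -> 60
```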
2.4.12
- Added httpx-socks to the dependencies, enabling the use of SOCKS proxies.
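For reference, this is roughly what the dependency enables in plain httpx-socks terms; a generic sketch, not py-gpt's proxy configuration, and the proxy URL is a placeholder.

```python
import httpx
from httpx_socks import SyncProxyTransport

# Route an httpx client through a SOCKS5 proxy; the address below is a
# placeholder for whatever proxy you actually run.
transport = SyncProxyTransport.from_url("socks5://127.0.0.1:1080")
with httpx.Client(transport=transport) as client:
    response = client.get("https://example.com")
    print(response.status_code)
```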
2.4.11
- Added a new Llama-index agent: OpenAI Assistant.
- Added proxy settings to the configuration dialog.
- Added more models to the Agent (Llama-index) mode.
- Improved the agents/presets dialog window.
- Disabled the "OpenAI API KEY is not set" Llama-index error when using a local model in Chat with Files mode. You can now use local embeddings, such as Llama3 via Ollama, and use local models without any warnings.
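A hedged sketch of the local setup this change unblocks, using the llama-index Ollama integrations; the package and model names are examples, and py-gpt wires this up through its own configuration rather than user code.

```python
# Requires: pip install llama-index-llms-ollama llama-index-embeddings-ollama
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Use a local model and local embeddings served by Ollama, so no OpenAI
# API key is needed; "llama3" and "nomic-embed-text" are example model names.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")
```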
2.4.10
- Added support for Llama Index agents. (beta)
- Introduced a new working mode: Agent (Llama Index).
- Added 3 Llama Index agents: OpenAI Agent, ReAct Agent, and Structured Planner Agent (see the sketch below).
- Fixed: passing an embed model to vector stores on indexing.
- Fixed: python3-tk error in snap version.
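As a rough illustration of what a Llama Index ReAct agent looks like outside of py-gpt (the tool, model name, and verbosity are example choices; py-gpt constructs its agents internally):

```python
# Requires: pip install llama-index  (plus an OpenAI API key for this example)
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# A ReAct agent reasons step by step and calls the registered tool as needed.
agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-4o-mini"),
    verbose=True,
)
print(agent.chat("What is 12 multiplied by 7?"))
```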
2.4.9
- Mouse movement has been moved to the PyAutoGUI library for seamless operation (see the sketch below).
- Bridge calls have been moved to an asynchronous thread.
- Fixed the DALL-E call in Image mode.
- Speed improvements and optimizations.
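For context, the PyAutoGUI calls behind the mouse-movement change look roughly like this; a generic example of the library, not py-gpt's plugin code, and the coordinates are arbitrary.

```python
import pyautogui

# Move the cursor smoothly to absolute screen coordinates, then click;
# the duration argument produces the gradual, natural-looking movement.
pyautogui.moveTo(400, 300, duration=0.5)
pyautogui.click()
```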
2.4.8
- Fix: restoring order of notepads.
- Fix: mouse move offset in Mouse and Keyboard plugin.