Local LLMs into main (#1351)
* fix

* fixing the toolkit config in iteration workflow

* List file s3 fix (#1076)

* List File S3

* Unit Test Added

---------

Co-authored-by: Taran <97586318+Tarraann@users.noreply.github.com>

* workflow changes

* minor seed file fix

* frontend fixes (#1079)

* fixed api_bug (#1080)

* add one condition (#1082)

* api fix (#1087)

* Tools error fix (#1093)

* webhooks frontend + api calls almost complete

* Tool-LTM(Updated) (#1039)

* Toolkit configuration fix (#1102)

* webhooks complete frontend

* schedule agent fix (#1104)

Co-authored-by: Rounak Bhatia <rounak@contlo.com>

* Models superagi (#936)

Models Superagi

* Models superagi (#1108)

Bug Fixes

* Changes for no receiver address

* made changes to github helper

* Models superagi (#1112)

* Models superagi (#1117)

* Models fixes (#1118)

* Models Frontend Changes

* Models Frontend Changes

* Models Frontend Changes

* \n bug resolved (#1122)

* PDF and DOCX support in Write File - Feature Improvement, close #548 (#928)

* Added functions to write various file types and a file handler too

* FileManager updated to handle and save HTMLs.

* adding PDF + DOCX support to save images

* Added Wkhtmltopdf package installation run commands in docker

* Added get_all_responses feature for extracting the response for particular tools

* Added Image embedding feature; this will extract and embed the images generated by the agent during the run

* renaming functions and refactoring

* renaming functions and refactoring

* Update Dockerfile

* removing unused classmethods

* Finding generated images and attached files in the write tool. Images are fetched in order to be embedded in the respective file type.

* Adding the filename and paths to the Resource Manager S3 storage

* Code Cleanup

* added logger: Fix for the failing TEST

---------

Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>

* Revert "PDF and DOCX support in Write File - Feature Improvement, close #548 (#928)" (#1124)

This reverts commit 8b01357.

* expose port

* latest safetensors breaking in macs (#1134)

* Changes in save template  (#1120)

* fix

* changes in save template

* changes in save template

* bug fixes

* changes in save template

* changes in save template

---------

Co-authored-by: Rounak Bhatia <rounak@contlo.com>
Co-authored-by: namansleeps <mandhani12@gmail.com>

* Main to dev sync (#1139)

* Models fixes (#1126)

* Models Frontend Changes

* Models Frontend Changes

* Models Frontend Changes

* Backend Compatibility for New/Existing users on local

* DEV api key requirements

* removing print statements

* removing print statements

* removing print statements

* removing print statements

* backend compatibility

* backend compatibility

* backend compatibility

* added filters in the webhooks

* fix

* added filters in the webhooks

* Models fixes (#1145)

Fixes related to Models Feature

* Jira Bug Fix

* Jira Bug Fix 2.0

* Jira Bug Fix 3.0

* added filters in the webhooks

* Models fixes (#1147)

Model Feature Fixes

* Bug fix model redirection (#1148)

Bug Fix - Model URL Redirection

* added tool config for dalle

* removed model dependency on dalle tool

* Remove hardcoded creds

* fixed env error

* removed refactoring from main

* removed refactoring

* removed refactoring

* handled error

* stop agent from executing if model is not found (#1156)

* entity details (#1158)

* Metric frontend (#1152)

* added filters in webhooks

* added filters in webhooks

* minor changes

* webhooks complete

* minor changes for PR

* minor changes for PR

* Publish agent template to marketplace (#1106)


* publish agent to marketplace

---------

Co-authored-by: Rounak Bhatia <rounak@contlo.com>
Co-authored-by: namansleeps <mandhani12@gmail.com>

* added filters in webhooks

* added filters in webhooks

* resolving conflicts

* added filters in the webhooks

* lint issue fixed

* bug fix of prev PR

* fix for new run and edit agent

* error handling

* added filters in webhooks

* fix for knowledge search tool

* Docker digitalocean deployment

* changed branch name

* added filters in the webhooks

* changes

* removed region

* added button

* change in branch

* added filters in the webhooks

* Update conftest.py

* Added filters in the webhooks (#1140)

* webhooks frontend + web hooks with filters

---------

Co-authored-by: namansleeps <mandhani12@gmail.com>
Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>

* Models calls logs dev (#1174)

Call logs organisation level bug fix

* models scroll fix, format of log timestamp fix, adding of loader to models, toolkit metrics dropdown bug fixed, publish agent dropdown bug (#1171)

* Update app.yaml (#1179)

* fixes related to webhooks

* fixes for webhooks

* Fixes for webhooks (#1181)



* fixes for webhooks

---------

Co-authored-by: namansleeps <mandhani12@gmail.com>
Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>

* bugs by qa (#1178)

* Fix for schedule agent (#1184)

Co-authored-by: Tarraann <jot.taran15522@gmail.com>

* Entity fix (#1185)

* fixes for webhooks

* fixes for webhooks

* fix added for index state (#1188)

* fix added for index state

* Update KnowledgeTemplate.js

---------

Co-authored-by: Tarraann <jot.taran15522@gmail.com>

* API bug fixes for SDK (#1189)

* fix api's for sdk

* removed unused imports

---------

Co-authored-by: jagtarcontlo <123375045+jagtarcontlo@users.noreply.github.com>

* Main to dev sync v12 (#1193)

sync back to dev

---------
Co-authored-by: Taran <97586318+Tarraann@users.noreply.github.com>
Co-authored-by: TransformerOptimus <muknrq@gmail.com>
Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>
Co-authored-by: Maverick-F35 <138012351+Maverick-F35@users.noreply.github.com>
Co-authored-by: BoundlessAsura <122777244+boundless-asura@users.noreply.github.com>
Co-authored-by: Akshat Jain <92881074+Akki-jain@users.noreply.github.com>
Co-authored-by: sayan1101 <139119661+sayan1101@users.noreply.github.com>
Co-authored-by: Rounak Bhatia <f20201807@goa.bits-pilani.ac.in>
Co-authored-by: Rounak Bhatia <rounak@contlo.com>
Co-authored-by: Kalki <97698934+jedan2506@users.noreply.github.com>
Co-authored-by: Tarraann <jot.taran15522@gmail.com>
Co-authored-by: rakesh-krishna-a-s <akrishna@contlo.com>
Co-authored-by: Captain Levi <123375045+CaptainLevi0007@users.noreply.github.com>
Co-authored-by: andrew-kelly-neutralaiz <128111428+andrew-kelly-neutralaiz@users.noreply.github.com>
Co-authored-by: James Wakelim <james.wakelim@neutralaiz.com>

* added button

* GitHub pull request tools (#1190)

* adding github review tools

* cleanup and adding code review prompt

* fixing comments

* PDF and DOCX support in Write File - Feature Improvement, close #548 (#1125)

Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>
Co-authored-by: Abhijeet <129729795+luciferlinx101@users.noreply.github.com>

* minor documentation fix

* Design bugs (#1199)

* fetching token limit from db

* Revert "PDF and DOCX support in Write File - Feature Improvement, close #548 (#1125)" (#1202)

This reverts commit 26f6a1d.

* Unit Test Fix (#1203)

* adding of docs and and discord link correction (#1205)

* openai error handling

* error_handling

* api call only when agent is running

* Feature : Wait block for agent workflow (#1186)

Agent Wait Block Step

* minor changes (#1213)

Co-authored-by: Jagtar Saggu <jagtarsaggu@Jagtars-MacBook-Pro.local>

* error handling

* error handling

* error handling

* error handling

* error handling

* fix

* fix

* fix

* error handling

* models changes (#1207)

Model related frontend changes.

* error handling

* models marketplace changes (#1219)

Co-authored-by: Abhijeet <129729795+luciferlinx101@users.noreply.github.com>

* minor changes

* error handling

* error handling

* removing single quotes (#1224)

Co-authored-by: namansleeps <mandhani12@gmail.com>

* apm changes (#1222)

APM Bug Fixes

* list tool fix

* list tool fix

* PR CHANGES

* entity fix for dev (#1230)

Entity Fix for Dev

* frontend changes (#1231)

* read_tool_fix

* fix

* waiting block frontend (#1233)

Waiting Block Changes and Frontend Addition

* Dev Fixes (#1242)

* read tool fix

* Maintaining dev (#1244)

Dev Fix

* added logs (#1246)

* error_handling fix (#1247)

Error handling fix.

* Feature first login src (#1241)

Adding source in user database for analytics.

* apollo NoneType bug fix (#1238)

Bug Fix

* Mixpanel integration (#1256)

Mix Panel

* Models Marketplace bug fix for dev (#1266)

* Fix 1257 dev (#1269)

Bug Fix

* add cache layer (#1275)

Added package caching for github actions workflow

* Fix api dev (#1283)

* save other config to agent_execution config

* add config

* mixpanel changes (#1285)

* rename error_handling.py to error_handler.py (#1287)

* Analytics login (#1258)

* calendar issues fixed

* append file tool bug fixed (#1294)



Co-authored-by: Tarraann <jot.taran15522@gmail.com>

* adding cookie in access token (#1301)

* local_llms

* local_llms

* local_llms

* local_llms

* local_llms

* fixes

* models error fixed (#1308)

* local_llms

* local_llms

* local_llms

* local_llms

* local_llms

* frontend_changes

* local_llms

* local_llms

* local_llms

* local_llms

* local_llms_frontend

* fixes

* fixes

* fixes

* fixes

* merged main into local_llm_final

* merged main into local_llm_final

* local llms

---------

Co-authored-by: TransformerOptimus <muknrq@gmail.com>
Co-authored-by: Abhijeet <129729795+luciferlinx101@users.noreply.github.com>
Co-authored-by: Taran <97586318+Tarraann@users.noreply.github.com>
Co-authored-by: Fluder-Paradyne <121793617+Fluder-Paradyne@users.noreply.github.com>
Co-authored-by: Maverick-F35 <138012351+Maverick-F35@users.noreply.github.com>
Co-authored-by: namansleeps <mandhani12@gmail.com>
Co-authored-by: Aditya Sharma <138581531+AdityaSharma13064@users.noreply.github.com>
Co-authored-by: Sayan Samanta <139119661+sayan1101@users.noreply.github.com>
Co-authored-by: Kalki <97698934+jedan2506@users.noreply.github.com>
Co-authored-by: Tarraann <jot.taran15522@gmail.com>
Co-authored-by: Arkajit Datta <61142632+Arkajit-Datta@users.noreply.github.com>
Co-authored-by: rakesh-krishna-a-s <akrishna@contlo.com>
Co-authored-by: jagtarcontlo <123375045+jagtarcontlo@users.noreply.github.com>
Co-authored-by: Captain Levi <123375045+CaptainLevi0007@users.noreply.github.com>
Co-authored-by: namansleeps <122260931+namansleeps@users.noreply.github.com>
Co-authored-by: I’m <133493246+TransformerOptimus@users.noreply.github.com>
Co-authored-by: Jagtar Saggu <jagtarsaggu@Jagtars-MacBook-Pro.local>
Co-authored-by: Ubuntu <ubuntu@ip-10-14-2-4.ec2.internal>
19 people authored Oct 25, 2023
1 parent 862a701 commit b96fdab
Showing 10 changed files with 198 additions and 2 deletions.
2 changes: 1 addition & 1 deletion gui/pages/Content/APM/ApmDashboard.js
@@ -76,7 +76,7 @@ export default function ApmDashboard() {
const fetchData = async () => {
try {
const [metricsResponse, agentsResponse, activeRunsResponse, toolsUsageResponse] = await Promise.all([getMetrics(), getAllAgents(), getActiveRuns(), getToolsUsage()]);
const models = ['gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4-32k', 'google-palm-bison-001'];
const models = ['gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4-32k', 'google-palm-bison-001', 'replicate-llama13b-v2-chat'];

assignDefaultDataPerModel(metricsResponse.data.agent_details.model_metrics, models);
assignDefaultDataPerModel(metricsResponse.data.tokens_details.model_metrics, models);
1 change: 1 addition & 0 deletions gui/pages/_app.js
@@ -61,6 +61,7 @@ export default function App() {
});
}


const installFromMarketplace = () => {
const toolkitName = localStorage.getItem('toolkit_to_install') || null;
const agentTemplateId = localStorage.getItem('agent_to_install') || null;
11 changes: 10 additions & 1 deletion main.py
@@ -50,6 +50,7 @@
from superagi.llms.replicate import Replicate
from superagi.llms.hugging_face import HuggingFace
from superagi.models.agent_template import AgentTemplate
from superagi.models.models_config import ModelsConfig
from superagi.models.organisation import Organisation
from superagi.models.types.login_request import LoginRequest
from superagi.models.types.validate_llm_api_key_request import ValidateAPIKeyRequest
@@ -215,6 +216,13 @@ def register_toolkit_for_master_organisation():
            Organisation.id == marketplace_organisation_id).first()
        if marketplace_organisation is not None:
            register_marketplace_toolkits(session, marketplace_organisation)

    def local_llm_model_config():
        existing_models_config = session.query(ModelsConfig).filter(ModelsConfig.org_id == default_user.organisation_id, ModelsConfig.provider == 'Local LLM').first()
        if existing_models_config is None:
            models_config = ModelsConfig(org_id=default_user.organisation_id, provider='Local LLM', api_key="EMPTY")
            session.add(models_config)
            session.commit()

    IterationWorkflowSeed.build_single_step_agent(session)
    IterationWorkflowSeed.build_task_based_agents(session)
@@ -238,7 +246,8 @@ def register_toolkit_for_master_organisation():
    # AgentWorkflowSeed.doc_search_and_code(session)
    # AgentWorkflowSeed.build_research_email_workflow(session)
    replace_old_iteration_workflows(session)

    local_llm_model_config()

    if env != "PROD":
        register_toolkit_for_all_organisation()
    else:
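
The seeding step above means a fresh startup creates a 'Local LLM' provider row for the default organisation. A minimal sketch of what that guarantees, reusing the same session and default_user objects that local_llm_model_config() closes over inside the startup hook (not part of this diff):

# Hypothetical check, reusing the startup hook's session and default_user.
from superagi.models.models_config import ModelsConfig

local_provider = (session.query(ModelsConfig)
                  .filter(ModelsConfig.org_id == default_user.organisation_id,
                          ModelsConfig.provider == 'Local LLM')
                  .first())
assert local_provider is not None and local_provider.api_key == "EMPTY"  # seeded once; idempotent on restart
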
28 changes: 28 additions & 0 deletions migrations/versions/9270eb5a8475_local_llms.py
@@ -0,0 +1,28 @@
"""local_llms
Revision ID: 9270eb5a8475
Revises: 3867bb00a495
Create Date: 2023-10-04 09:26:33.865424
"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '9270eb5a8475'
down_revision = '3867bb00a495'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('models', sa.Column('context_length', sa.Integer(), nullable=True))
    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('models', 'context_length')
    # ### end Alembic commands ###
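
The new context_length column is nullable, so existing rows in the models table are untouched until a value is set. Applying the revision goes through Alembic as usual; a programmatic sketch, assuming the repository's standard alembic.ini is present (not part of this diff):

# Hypothetical sketch, equivalent to running "alembic upgrade head" from the repo root.
from alembic import command
from alembic.config import Config

command.upgrade(Config("alembic.ini"), "9270eb5a8475")  # or "head" for the latest revision
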
1 change: 1 addition & 0 deletions requirements.txt
Expand Up @@ -158,3 +158,4 @@ google-generativeai==0.1.0
unstructured==0.8.1
ai21==1.2.6
typing-extensions==4.5.0
llama_cpp_python==0.2.7
38 changes: 38 additions & 0 deletions superagi/helper/llm_loader.py
@@ -0,0 +1,38 @@
from llama_cpp import Llama
from llama_cpp import LlamaGrammar
from superagi.config.config import get_config
from superagi.lib.logger import logger


class LLMLoader:
    _instance = None
    _model = None
    _grammar = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(LLMLoader, cls).__new__(cls)
        return cls._instance

    def __init__(self, context_length):
        self.context_length = context_length

    @property
    def model(self):
        if self._model is None:
            try:
                self._model = Llama(
                    model_path="/app/local_model_path", n_ctx=self.context_length)
            except Exception as e:
                logger.error(e)
        return self._model

    @property
    def grammar(self):
        if self._grammar is None:
            try:
                self._grammar = LlamaGrammar.from_file(
                    "superagi/llms/grammar/json.gbnf")
            except Exception as e:
                logger.error(e)
        return self._grammar
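
A usage sketch for the loader above (not part of this diff); it assumes a GGUF model file is mounted at /app/local_model_path, the path LLMLoader hard-codes:

# Hypothetical usage sketch of LLMLoader.
from superagi.helper.llm_loader import LLMLoader

loader_a = LLMLoader(context_length=4096)
loader_b = LLMLoader(context_length=4096)
assert loader_a is loader_b        # __new__ always hands back the same singleton instance

llm = loader_a.model               # lazily constructs Llama(model_path="/app/local_model_path", ...)
grammar = loader_a.grammar         # lazily parses superagi/llms/grammar/json.gbnf
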
1 change: 1 addition & 0 deletions superagi/jobs/agent_executor.py
@@ -1,6 +1,7 @@
from datetime import datetime, timedelta

from sqlalchemy.orm import sessionmaker
from superagi.llms.local_llm import LocalLLM

import superagi.worker
from superagi.agent.agent_iteration_step_handler import AgentIterationStepHandler
25 changes: 25 additions & 0 deletions superagi/llms/grammar/json.gbnf
@@ -0,0 +1,25 @@
root ::= object
value ::= object | array | string | number | ("true" | "false" | "null") ws

object ::=
  "{" ws (
    string ":" ws value
    ("," ws string ":" ws value)*
  )? "}" ws

array ::=
  "[" ws (
    value
    ("," ws value)*
  )? "]" ws

string ::=
  "\"" (
    [^"\\] |
    "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) # escapes
  )* "\"" ws

number ::= ("-"? ([0-9] | [1-9] [0-9]*)) ("." [0-9]+)? ([eE] [-+]? [0-9]+)? ws

# Optional space: by convention, applied in this grammar after literal chars when allowed
ws ::= ([ \t\n] ws)?
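
This grammar constrains sampling so the model can only emit syntactically valid JSON, which is what lets the agent parse responses from a local model reliably. A standalone sketch with llama-cpp-python, using illustrative paths and prompts (not part of this diff):

# Hypothetical sketch of grammar-constrained chat completion with llama-cpp-python.
import json
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_file("superagi/llms/grammar/json.gbnf")
llm = Llama(model_path="/app/local_model_path", n_ctx=4096)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe yourself as a JSON object."}],
    grammar=grammar, max_tokens=256)
parsed = json.loads(output["choices"][0]["message"]["content"])  # parses as long as generation was not truncated
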
1 change: 1 addition & 0 deletions superagi/llms/llm_model_factory.py
@@ -1,4 +1,5 @@
from superagi.llms.google_palm import GooglePalm
from superagi.llms.local_llm import LocalLLM
from superagi.llms.openai import OpenAi
from superagi.llms.replicate import Replicate
from superagi.llms.hugging_face import HuggingFace
92 changes: 92 additions & 0 deletions superagi/llms/local_llm.py
@@ -0,0 +1,92 @@
from superagi.config.config import get_config
from superagi.lib.logger import logger
from superagi.llms.base_llm import BaseLlm
from superagi.helper.llm_loader import LLMLoader


class LocalLLM(BaseLlm):
    def __init__(self, temperature=0.6, max_tokens=get_config("MAX_MODEL_TOKEN_LIMIT"), top_p=1,
                 frequency_penalty=0,
                 presence_penalty=0, number_of_results=1, model=None, api_key='EMPTY', context_length=4096):
        """
        Args:
            model (str): The model.
            temperature (float): The temperature.
            max_tokens (int): The maximum number of tokens.
            top_p (float): The top p.
            frequency_penalty (float): The frequency penalty.
            presence_penalty (float): The presence penalty.
            number_of_results (int): The number of results.
        """
        self.model = model
        self.api_key = api_key
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.top_p = top_p
        self.frequency_penalty = frequency_penalty
        self.presence_penalty = presence_penalty
        self.number_of_results = number_of_results
        self.context_length = context_length

        llm_loader = LLMLoader(self.context_length)
        self.llm_model = llm_loader.model
        self.llm_grammar = llm_loader.grammar

    def chat_completion(self, messages, max_tokens=get_config("MAX_MODEL_TOKEN_LIMIT")):
        """
        Call the chat completion.
        Args:
            messages (list): The messages.
            max_tokens (int): The maximum number of tokens.
        Returns:
            dict: The response.
        """
        try:
            if self.llm_model is None or self.llm_grammar is None:
                logger.error("Model not found.")
                return {"error": "Model loading error", "message": "Model not found. Please check your model path and try again."}
            else:
                response = self.llm_model.create_chat_completion(messages=messages, functions=None, function_call=None, temperature=self.temperature, top_p=self.top_p,
                                                                 max_tokens=int(max_tokens), presence_penalty=self.presence_penalty, frequency_penalty=self.frequency_penalty, grammar=self.llm_grammar)
                content = response["choices"][0]["message"]["content"]
                logger.info(content)
                return {"response": response, "content": content}

        except Exception as exception:
            logger.info("Exception:", exception)
            return {"error": "ERROR", "message": "Error: "+str(exception)}

    def get_source(self):
        """
        Get the source.
        Returns:
            str: The source.
        """
        return "Local LLM"

    def get_api_key(self):
        """
        Returns:
            str: The API key.
        """
        return self.api_key

    def get_model(self):
        """
        Returns:
            str: The model.
        """
        return self.model

    def get_models(self):
        """
        Returns:
            list: The models.
        """
        return self.model

    def verify_access_key(self, api_key):
        return True
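
A usage sketch for LocalLLM (not part of this diff); it assumes a model is reachable through LLMLoader as above and that MAX_MODEL_TOKEN_LIMIT is set in the config:

# Hypothetical usage sketch of LocalLLM.chat_completion; the model name is a placeholder.
from superagi.llms.local_llm import LocalLLM

llm = LocalLLM(model="local-model", context_length=4096)
result = llm.chat_completion(
    messages=[{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Return a JSON object with a 'thoughts' key."}],
    max_tokens=512)
if "error" in result:
    print(result["message"])       # model missing or failed to load
else:
    print(result["content"])       # grammar-constrained JSON string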
