Conversation

@kevintruong

No description provided.

@kevintruong (Author)

Hi @JackHopkins,
This is a WIP for Bedrock support.
I'm testing it on my side and it's working, but I need to align with you to make good progress on the MR.

How can I contact you? (The Discord invite is invalid.)

Thanks

@dnlkwak (Collaborator) commented Nov 21, 2023 via email

@JackHopkins (Contributor) left a comment


Can you have a look at these comments and let me know what you think?

Additionally, it'd be really helpful to have some automated tests for Bedrock (possibly mocking out the Bedrock client) so we can ensure that we don't break anything as we continue development.
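
As a starting point, such a test could stand in a fake runtime client with unittest.mock. The sketch below assumes the response shape used elsewhere in this PR (a JSON body exposing a "completion" field); all names are illustrative:

import io
import json
from unittest import mock

def test_invoke_model_completion_is_parsed():
    # Stand in for the boto3 bedrock-runtime client so no AWS call is made.
    fake_body = io.BytesIO(json.dumps({"completion": "Hello"}).encode())
    fake_client = mock.Mock()
    fake_client.invoke_model.return_value = {"body": fake_body}

    response = fake_client.invoke_model(
        body=json.dumps({"prompt": "Human: hi\n\nAssistant:"}),
        modelId="anthropic.claude-instant-v1",
        contentType="application/json",
        accept="application/json")
    completion = json.loads(response.get("body").read().decode())["completion"]

    assert completion == "Hello"
    fake_client.invoke_model.assert_called_once()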

if ("error" in response and
"code" in response["error"] and
response["error"]["code"] == 'invalid_api_key'):
raise Exception(f"The supplied OpenAI API key {self.api_key} is invalid")

Please change to "Anthropic API key"
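
i.e. something like:

    raise Exception(f"The supplied Anthropic API key {self.api_key} is invalid")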

            continue

        if not choice:
            raise Exception("OpenAI API failed to generate a response")

Same again here.

            modelId=model,
            contentType="application/json",
            accept="application/json")
        return json.loads(response.get('body').read().decode())['completion']

We shouldn't be returning this here - we should loop the counter below in the case where the bedrock_runtime throws an exception.
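
One way to structure that (a sketch only - the helper name and retry policy are illustrative, not part of this PR):

import json
import time
import botocore.exceptions

def invoke_with_retries(bedrock_runtime, body, model, max_retries=3):
    # Loop the retry counter instead of returning from inside the try block.
    for attempt in range(max_retries):
        try:
            response = bedrock_runtime.invoke_model(
                body=body,
                modelId=model,
                contentType="application/json",
                accept="application/json")
            return json.loads(response.get('body').read().decode())['completion']
        except botocore.exceptions.ClientError:
            time.sleep(2 ** attempt)  # simple backoff before the next attempt
    raise Exception("Bedrock API failed to generate a response")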

        return 'anthropic.claude-instant-v1'

    elif api_model_name == 'openai':
        return ""

Let me double-check whether this works.

"Content-Type": "application/json",
}
response = requests.post(
OPENAI_URL, headers=openai_headers, json=params, timeout=50

We should replace this with the invoke_model you have above.
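
For illustration, the swap might look like this (names taken from the surrounding diff; a sketch, not final code):

        # replaces: response = requests.post(OPENAI_URL, headers=openai_headers, json=params, timeout=50)
        response = self.bedrock_runtime.invoke_model(
            body=json.dumps(params),
            modelId=model,
            contentType="application/json",
            accept="application/json")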

        training_file_id = response["id"]
        # submit the finetuning job
        try:
            finetuning_response = openai.FineTuningJob.create(training_file=training_file_id, model="gpt-3.5-turbo",

We need to figure out a way to call the logic that triggers a finetune in the relevant LLM provider class (i.e. OpenAI, Bedrock, etc.) - this is not the right place for this logic to live now that we are adding support for more providers.
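
One possible shape, with hypothetical names - each provider owns its finetuning logic behind a shared interface:

from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def submit_finetune_job(self, training_file_id: str) -> str:
        """Start a finetune with this provider and return the job id."""

class OpenAIProvider(LLMProvider):
    def submit_finetune_job(self, training_file_id: str) -> str:
        import openai
        # Same call as in the diff above, just relocated into the provider.
        job = openai.FineTuningJob.create(training_file=training_file_id,
                                          model="gpt-3.5-turbo")
        return job["id"]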

from monkey_patch.language_models.llm_api_abc import LLM_Api
import os

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

This can be removed.

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
import requests

AWS_REGION = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")

We should instantiate all envvars / bedrock specific logic inside the init of the Bedrock class - catching and reraising any thrown exceptions.
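
A rough sketch of that structure (class and attribute names assumed from this PR's diff):

import os
import boto3

class Bedrock:
    def __init__(self):
        # All Bedrock-specific envvars and client setup live in the class.
        self.aws_region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
        try:
            self.bedrock_runtime = boto3.client("bedrock-runtime",
                                                region_name=self.aws_region)
        except Exception as e:
            raise Exception(f"Failed to initialise the Bedrock client: {e}") from e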

from monkey_patch.utils import approximate_token_count


class ApiModelFactory:

Can we please have this factory inside its own file.

logging.basicConfig(level=ALIGN_LEVEL_NUM)
logger = logger_factory(__name__)
language_modeler = LanguageModel()
language_modeler = LanguageModel(api_model=os.getenv('API_MODEL'))

We should add the API_MODEL envvar to .example.env to make it obvious that it is required.
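
For example, .example.env could gain an entry like this (the value is illustrative):

    # LLM provider to use, e.g. "openai" or "bedrock"
    API_MODEL=openai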
