Hi @alexrudall,
Your tutorial is helping me develop a similar feature.
One problem I found is that the current code hits the database for every chunk it receives from the OpenAI stream.
Here is my solution:
```ruby
class GetAiResponseJob < ActiveJob::Base
  # ...

  private

  def call_openai(chat:)
    OpenAI::Client.new.chat(
      parameters: {
        model: 'gpt-3.5-turbo',
        messages: Message.for_openai(chat.messages),
        temperature: 0.7,
        stream: stream_proc(chat:),
        n: 1 # Are you sure you need that `RESPONSES_PER_MESSAGE` complication in a tutorial?
      }
    )
    @message.save! # This way, we hit the DB only twice: when @message is created, and when it's updated here.
  end

  def create_message(chat:)
    message = chat.messages.create(role: 'assistant', content: '', response_number: 0)
    message.broadcast_created
    message
  end

  def stream_proc(chat:)
    @message = create_message(chat:)
    buffer = ''
    proc do |chunk, _bytesize|
      new_content = chunk.dig('choices', 0, 'delta', 'content')
      if new_content
        buffer += new_content
        @message.content = buffer # This way we don't hit the database on every chunk
        @message.broadcast_updated # but we can still call `broadcast_updated`
      end
    end
  end
end
```
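To show the buffering idea in isolation, here is a minimal sketch with no Rails or OpenAI involved. `FakeMessage` and the chunk hashes are illustrative stand-ins for the ActiveRecord model and the OpenAI delta format:

```ruby
# Stand-in for an ActiveRecord model: counts how many times we "hit the DB".
class FakeMessage
  attr_accessor :content
  attr_reader :saves

  def initialize
    @content = ''
    @saves = 0
  end

  def save!
    @saves += 1
  end
end

# Build a stream handler that accumulates chunks in an in-memory buffer
# and only assigns to the message object, never saving per chunk.
def stream_proc(message)
  buffer = ''
  proc do |chunk, _bytesize|
    new_content = chunk.dig('choices', 0, 'delta', 'content')
    if new_content
      buffer += new_content
      message.content = buffer # in-memory only; a broadcast could happen here
    end
  end
end

message = FakeMessage.new
handler = stream_proc(message)

# Simulate a stream of OpenAI-style delta chunks.
[
  { 'choices' => [{ 'delta' => { 'content' => 'Hello' } }] },
  { 'choices' => [{ 'delta' => { 'content' => ', world' } }] },
  { 'choices' => [{ 'delta' => {} }] } # final chunk carries no content
].each { |chunk| handler.call(chunk, 0) }

message.save! # a single DB write once the stream finishes
```

After processing all chunks, `message.content` is the full concatenated text and `save!` has run exactly once, regardless of how many chunks arrived.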
If you like this idea, I can prepare a PR 🙂