Hi, I am looking for a reference architecture for setting up LlamaDeploy for enterprise use. We want to build a no-code setup with a UI experience layer that lets users create, evaluate, and deploy agents for various use cases. We have already enabled a vector store, secure connections to OpenAI/AWS Bedrock models, Mem0 for long-term memory, a Redis chat store, Langfuse for observability, an enterprise CI/CD setup, etc., but I am looking for a scalable, user-friendly approach to connecting these different infrastructure components for creating and deploying agents. Are there any reference architectures people can share, or high-level ideas on how they have implemented this for their enterprise?
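For concreteness, here is the kind of glue layer I have in mind: a declarative agent spec that a no-code UI could emit, resolved against registries of the pre-provisioned infra components. This is a minimal sketch of a generic registry/factory pattern, not LlamaDeploy's actual API; all names (`AgentSpec`, `build_agent`, the registries) are hypothetical, and the factories return stubs where a real deployment would return configured clients (a Bedrock model wrapper, a Mem0 client, etc.).

```python
# Hypothetical sketch: compose pre-provisioned infra components behind small
# registries so a UI layer can assemble agents declaratively.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Registries map a component name (as chosen in the UI) to a factory function.
LLM_BACKENDS: Dict[str, Callable[[], Any]] = {}
MEMORY_BACKENDS: Dict[str, Callable[[], Any]] = {}

def register(registry: Dict[str, Callable[[], Any]], name: str):
    """Decorator that records a factory under `name` in the given registry."""
    def wrap(factory: Callable[[], Any]):
        registry[name] = factory
        return factory
    return wrap

# Stubs standing in for real configured clients.
@register(LLM_BACKENDS, "openai")
def _openai_llm():
    return {"kind": "llm", "provider": "openai"}

@register(MEMORY_BACKENDS, "mem0")
def _mem0_memory():
    return {"kind": "memory", "provider": "mem0"}

@dataclass
class AgentSpec:
    """Declarative agent definition a no-code UI could emit as JSON/YAML."""
    name: str
    llm: str
    memory: str
    tools: List[str] = field(default_factory=list)

def build_agent(spec: AgentSpec) -> Dict[str, Any]:
    """Resolve a spec against the registries into a concrete agent config."""
    return {
        "name": spec.name,
        "llm": LLM_BACKENDS[spec.llm](),
        "memory": MEMORY_BACKENDS[spec.memory](),
        "tools": spec.tools,
    }

agent = build_agent(AgentSpec(name="support-bot", llm="openai", memory="mem0"))
print(agent["llm"]["provider"])  # openai
```

The appeal of this shape is that the UI only ever deals with names and specs, while the registries are the single place where new backends (Bedrock, Redis chat store, Langfuse callbacks) get wired in. Is this roughly how others have structured it, or is there a better-established pattern?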