Fine-tuning is a cost-efficient way of preparing a model for specialized tasks: it reduces both the required training time and the amount of training data. Because open-source pre-trained models are freely available, we do not need to perform full training from scratch every time we create a model.
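As a rough illustration of how little code this takes, here is a minimal fine-tuning sketch using Hugging Face Transformers and the Trainer API. The model and dataset names (distilbert-base-uncased, a small slice of IMDB) are illustrative choices, not tied to any of the projects described below.

```python
# Minimal fine-tuning sketch: start from an open-source pre-trained model and
# adapt it to a small task-specific dataset instead of training from scratch.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small slice of a public dataset keeps training time and cost low.
dataset = load_dataset("imdb", split="train[:1000]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```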
Agatha is an NL2SQL-driven expense tracker API that leverages Llama 3.3, served by Groq, for fast data extraction and its NL2SQL workflow. It lets users either add transaction data to a SQL database or ask questions about their expenses, all via natural language.
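The sketch below shows the kind of NL2SQL call this workflow implies, using the Groq Python client. The table schema, prompt wording, and model ID are assumptions for illustration, not taken from the Agatha codebase.

```python
# Sketch of an NL2SQL request against Llama 3.3 on Groq: translate a
# natural-language question about expenses into a SQL query.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Hypothetical expenses schema used only for this example.
SCHEMA = "expenses(id INTEGER, amount REAL, category TEXT, spent_at DATE)"

def question_to_sql(question: str) -> str:
    """Ask the model to turn a natural-language question into a single SQL query."""
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the user's question into one SQL query against "
                    f"this schema: {SCHEMA}. Return only the SQL."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(question_to_sql("How much did I spend on groceries last month?"))
```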
This project uses the open-source model Mixtral 8x7B Instruct, deployed on Amazon SageMaker or invoked via the Amazon Bedrock API, to let users chat with their database in natural language without writing any code or SQL queries.
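For the Bedrock path, a minimal invocation with boto3 might look like the following. The model ID and request body follow Bedrock's Mistral-family convention; treat them, along with the example schema and prompt, as assumptions rather than the project's exact code.

```python
# Sketch of invoking Mixtral 8x7B Instruct on Amazon Bedrock to answer a
# natural-language question about a database with a generated SQL query.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_database(question: str, schema: str) -> str:
    """Have Mixtral 8x7B Instruct produce a SQL query for the given question."""
    prompt = (
        f"<s>[INST] Given the schema {schema}, write one SQL query that answers: "
        f"{question}. Return only the SQL. [/INST]"
    )
    response = bedrock.invoke_model(
        modelId="mistral.mixtral-8x7b-instruct-v0:1",
        body=json.dumps({"prompt": prompt, "max_tokens": 256, "temperature": 0.0}),
    )
    body = json.loads(response["body"].read())
    return body["outputs"][0]["text"].strip()

print(ask_database(
    "What were total sales by region in 2023?",
    "sales(region TEXT, amount REAL, sold_at DATE)",
))
```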
Data Neuron is a framework for building text-to-SQL applications with an easily maintainable semantic layer. Whether you're creating customer-facing chatbots, internal Slack bots for analytics, or other data-driven applications, Data Neuron provides the tools to make your data accessible through natural language.