A Motia-based agent that provides insights about other agents using RAG (Retrieval Augmented Generation).
This agent serves as a meta-assistant that helps users understand and gain insights about other AI agents. It uses:
- Pinecone Vector Database for storing and retrieving agent data, entries, and insights
- OpenAI for generating contextually relevant responses
- RAG Pattern to provide accurate information based on real agent performance data
- Interactive Chat Interface: Users can ask questions about agent performance
- Performance Analysis: Get insights on why agents succeed or fail
- Improvement Suggestions: Receive data-backed suggestions to improve agent performance
- Knowledge Base Access: Query historical entries, inputs, outputs, and evaluations
- Contextual Understanding: The agent understands the context of your questions
- Powerful Chat Interface: Interact with your agent data through natural language
- Pinecone Vector Database: Efficient storage and retrieval of agent interactions
- OpenAI Integration: Generate insights using advanced language models
- LLM-powered Filter Extraction: Natural language understanding of date and metadata filters
- Date and Metadata Filtering: Query for specific time periods like "yesterday" or filter by environments
- Detailed Analytics: Get insights about agent performance across various dimensions
- Handit Tracing: Comprehensive tracing and monitoring of AI operations
- Clone this repository
- Install dependencies:

  ```bash
  pnpm install
  ```

- Set up your environment variables:

  ```bash
  cp .env.example .env
  # Edit the .env file with your actual API keys
  ```
Before using the agent, you need to upload your data to Pinecone:
- Go to the scripts directory:

  ```bash
  cd scripts
  ```

- Install script dependencies:

  ```bash
  npm install
  ```

- Configure script environment:

  ```bash
  cp .env.example .env
  # Edit the .env file with your API keys and CSV paths
  ```

- Verify your environment and connections:

  ```bash
  npm run verify
  ```

  This will check that your API keys, connections, and index are properly set up.

- Run the data upload script:

  ```bash
  npm start
  ```

  If you encounter TypeScript errors:

  ```bash
  npm run start:ignore-types
  ```
For more details on data upload, see the Script README.
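At a high level, the upload script pairs each record with an embedding and writes it into the vector index. The loop below is a minimal sketch of that idea with the Pinecone index and embedding call injected as plain interfaces; the type shapes and names here are illustrative assumptions, not the script's actual API.

```typescript
// Hedged sketch of the upload flow: embed each entry, then upsert the
// resulting vector. Dependencies are injected so the shape is clear
// without tying this to exact SDK signatures (assumptions).
type Embedder = (text: string) => Promise<number[]>;

type VectorIndex = {
  upsert: (
    vectors: { id: string; values: number[]; metadata: Record<string, string> }[],
  ) => Promise<void>;
};

export async function uploadEntries(
  index: VectorIndex,
  embed: Embedder,
  entries: { id: string; text: string }[],
): Promise<number> {
  let uploaded = 0;
  for (const entry of entries) {
    // Turn the raw text into a vector, then store it with its metadata.
    const values = await embed(entry.text);
    await index.upsert([{ id: entry.id, values, metadata: { text: entry.text } }]);
    uploaded += 1;
  }
  return uploaded;
}
```

In the real script the `embed` function would call OpenAI's embeddings endpoint and `index` would be a Pinecone index handle; injecting them also makes the loop easy to test without network access.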
The agent follows an event-driven architecture with the following steps:
- Message API: Receives user queries via a REST API
- Preprocess Message: Optional preprocessing of user messages (traced with Handit)
- LLM Filter Extraction: Uses AI to extract date and metadata filters from natural language (traced with Handit)
- Pinecone RAG Retrieval: Queries Pinecone for relevant context using extracted filters (traced with Handit)
- OpenAI Generation: Uses retrieved context and OpenAI to generate friendly, accessible responses (traced with Handit)
- Response Handler: Handles the final response, stores conversation history
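One step in this event-driven pipeline can be sketched as a Motia-style event step: a `config` export declaring what the step subscribes to and emits, plus a `handler` that does the work and emits to the next topic. The topic names and fields below are illustrative assumptions, not the project's actual step definitions.

```typescript
// Hypothetical sketch of a preprocessing step in the pipeline.
// Topic names, field names, and the flow-ID scheme are assumptions.
type EmitFn = (event: { topic: string; data: Record<string, unknown> }) => Promise<void>;

export const config = {
  type: 'event',
  name: 'preprocess-message',
  subscribes: ['message-received'],
  emits: ['message-preprocessed'],
};

export const handler = async (
  input: { message: string; _handidFlowId?: string },
  ctx: { emit: EmitFn },
) => {
  // Generate a flow ID at the start of a conversation, reuse it afterwards.
  const flowId = input._handidFlowId ?? `flow-${Date.now()}`;
  const cleaned = input.message.trim();
  await ctx.emit({
    topic: 'message-preprocessed',
    data: { message: cleaned, _handidFlowId: flowId },
  });
};
```

Each later step follows the same pattern, subscribing to the previous step's topic and forwarding the flow ID along with its output.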
This agent integrates with Handit for comprehensive monitoring and tracing of AI operations. Handit provides:
- End-to-end tracing of AI agent workflows
- Performance monitoring for each processing step
- Visualization of trace data and performance metrics
- Debug insights for identifying bottlenecks and errors
The following components are traced with Handit:
- Preprocessing: Message preparation (ID: `metaAgentK40-toolpreprocessp6`)
- Filter Extraction: LLM-based extraction of date and metadata filters (ID: `metaAgentK40-llmFiltersrv`)
- RAG Retrieval: Vector search in Pinecone (ID: `metaAgentK40-toolragRetrievvz`)
- Response Generation: OpenAI-based response generation (ID: `metaAgentK40-responseGecp`)
To see an example of how a complete workflow is traced with Handit, check `examples/traced-agent.ts`.
The Handit configuration is stored in `utils/handit-tracing.ts`. You can adjust the tracing IDs and behavior by modifying this file.
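Judging from the node IDs listed above, `agentsTrackingConfig` in `utils/handit-tracing.ts` is presumably a map from step names to Handit node IDs. The exact key names in this sketch are assumptions; only the ID values come from the list above.

```typescript
// Assumed shape of agentsTrackingConfig: step name -> Handit node ID.
// Key names are illustrative; the ID strings match the traced components
// documented above.
export const agentsTrackingConfig = {
  preprocess: 'metaAgentK40-toolpreprocessp6',
  filterExtraction: 'metaAgentK40-llmFiltersrv',
  ragRetrieval: 'metaAgentK40-toolragRetrievvz',
  responseGeneration: 'metaAgentK40-responseGecp',
} as const;
```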
The agent now integrates Handit tracing with Motia's flow ID to enable end-to-end monitoring and visualization of each conversation. Associating every traced operation with a single flow ID lets Handit understand the relationships between operations and visualize the complete workflow.
- Flow ID Generation: When a new conversation starts, a unique flow ID is generated in the preprocess-message step. This ID is then passed through all subsequent steps as `_handidFlowId`.
- Tracing with Flow ID: Each operation is traced using the `traceWithHandit` function, which accepts the agent node ID, the flow ID, and the callback function to be traced. The flow ID is passed as the `externalId` parameter to the Handit SDK.
- End of Flow Tracing: At the end of the flow, `flow-completion.step.ts` calls `endAgentTracing` with the flow ID to properly close the tracing session.
The tracing flow follows this sequence:
- Preprocess Message: Generates a flow ID and starts tracing
- Filter Extraction: Continues tracing using the flow ID
- Pinecone Retrieval: Traces context retrieval with the flow ID
- OpenAI Generation: Traces response generation with the flow ID
- Flow Completion: Ends the tracing session using the flow ID
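Conceptually, `traceWithHandit` is a higher-order function: it takes a node ID, a flow ID, and a callback, and returns a traced version of that callback. The sketch below only logs timings instead of calling the Handit SDK, so its internals are assumptions; only the signature mirrors the usage described in this document.

```typescript
// Minimal sketch of a traceWithHandit-style wrapper (hypothetical
// internals, not the actual Handit SDK integration). It returns a
// function that runs the callback and records success/failure plus
// duration under the given node and flow IDs.
type Traceable<A extends unknown[], R> = (...args: A) => Promise<R>;

export function traceWithHandit<A extends unknown[], R>(
  nodeId: string,
  flowId: string,
  fn: Traceable<A, R>,
): Traceable<A, R> {
  return async (...args: A): Promise<R> => {
    const startedAt = Date.now();
    try {
      const result = await fn(...args);
      // The real integration would report to Handit with the flow ID
      // as externalId; here we only log the outcome.
      console.log(`[trace] ${nodeId} flow=${flowId} ok in ${Date.now() - startedAt}ms`);
      return result;
    } catch (err) {
      console.log(`[trace] ${nodeId} flow=${flowId} failed`, err);
      throw err;
    }
  };
}
```

Because the wrapper preserves the callback's arguments and return value, steps can trace their core logic without changing how callers invoke it.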
To add tracing to a new operation:
- Import the necessary functions:

  ```typescript
  import { agentsTrackingConfig, traceWithHandit } from '../utils/handit-tracing'
  ```

- Extract the flow ID from the input:

  ```typescript
  const flowId = input._handidFlowId
  ```

- Wrap your function with the tracing function:

  ```typescript
  const tracedFunction = traceWithHandit(
    agentsTrackingConfig.yourAgentNodeId,
    flowId,
    yourFunction
  )

  // Then call the traced function
  const result = await tracedFunction(...args)
  ```

- Pass the flow ID to the next step:

  ```typescript
  await emit({
    topic: 'next-step',
    data: {
      // Your data
      _handidFlowId: flowId, // Pass the flow ID
    },
  })
  ```

After running the agent, you can view the traces in the Handit dashboard. The traces will be grouped by flow ID, allowing you to see the complete flow of operations for each conversation.
To start the agent: