- Project Overview
- System Architecture
- Deployment
- Frontend Implementation
- Backend Implementation
- Langflow Integration
The goal of this project is to build an analytics module that analyzes engagement data from mock social media accounts, using Langflow for workflow orchestration and DataStax Astra DB for storage and retrieval.
- DataStax Astra DB: Database for efficient data storage and retrieval
- Langflow: Workflow creation and LLM integration
- React.js: Frontend framework
- Node.js: Proxy backend
- Google Gemini: AI-powered insights generation
- Real-time social media analytics
- Gemini-powered insights generation
- AI-powered chatbot
- Custom metric tracking
- Post performance analysis
- Engagement metrics calculation
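The last two features depend on an engagement-rate calculation. A minimal sketch, assuming the common interactions-over-reach convention (the module's exact formula may differ):

```javascript
// Hypothetical engagement-rate calculation for a mock post.
// The interactions / reach formula is a common convention, not
// necessarily the exact metric this module computes.
function engagementRate({ likes, comments, shares, reach }) {
  if (!reach) return 0;                       // guard against divide-by-zero
  return (likes + comments + shares) / reach; // interactions per viewer
}

// Example: a post seen by 1,000 accounts with 200 total interactions
const rate = engagementRate({ likes: 120, comments: 30, shares: 50, reach: 1000 });
```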
The system is designed with a microservices-based architecture consisting of:
- Frontend: Built using React.js for a dynamic and interactive user interface.
- Backend: A Node.js proxy server that handles API requests and WebSocket connections.
- Database: DataStax Astra DB for managing data storage and retrieval efficiently.
- AI Integration: Langflow for creating workflows and integrating with Google Generative AI.
- Production URL: https://super-rookie-assignment-social-ai.vercel.app
- 🎥 YouTube Video: https://www.youtube.com/watch?v=HvGaHkaEa80
- Platform: Vercel (Frontend) | Render (Backend)
- Status: ✅ Active
- Frontend: React.js hosted on Vercel
- Database: DataStax Astra DB
- AI Integration: Langflow and Google Gemini
The backend is built using Node.js and acts as a proxy server to manage API requests and WebSocket connections.
- Unique requestId assignment
- Connection tracking
- Real-time data streaming
- Error handling
- Chat Endpoint
- Handles client requests
- Forwards to Langflow API
- Streams responses
- Error management
Langflow is used to create an end-to-end AI-powered flow that transforms raw data into actionable insights using:
- Google Generative AI
- NVIDIA Embeddings
- DataStax Astra DB
- The file upload serves as the primary data source.
- Any CSV file with relevant data can be used.
- The uploaded file is divided into smaller chunks for easier processing.
- Chunk Size: 300 characters
- Chunk Overlap: 50 characters
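With those settings, the splitting step can be sketched as a simple character-based splitter (a simplified stand-in for the flow's actual split component):

```javascript
// Sketch of character-based chunking with the settings above:
// 300-character chunks, 50 characters of overlap between neighbors.
function splitIntoChunks(text, chunkSize = 300, overlap = 50) {
  const chunks = [];
  const step = chunkSize - overlap; // each chunk starts 250 chars after the last
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached
  }
  return chunks;
}
```

The overlap means the last 50 characters of each chunk reappear at the start of the next one, so sentences cut at a boundary still appear whole in at least one chunk.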
- The processed chunks are stored in Astra DB for efficient data retrieval.
- Text chunks are converted into vector representations using NVIDIA's Embedding Models.
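To illustrate why vector representations enable retrieval, here is a toy cosine-similarity ranking: chunks whose vectors score highest against the query vector are returned as context. The vectors below are illustrative values, not real NVIDIA embeddings:

```javascript
// Cosine similarity: 1 for identical directions, 0 for orthogonal ones.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunk vectors against a query vector and keep the top k.
function topK(queryVec, chunkVecs, k = 2) {
  return chunkVecs
    .map((entry) => ({ ...entry, score: cosineSimilarity(queryVec, entry.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In the real flow, Astra DB performs this nearest-neighbor search over the stored NVIDIA embeddings.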
- A structured prompt is created to guide the AI response.
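A hypothetical prompt builder showing how retrieved chunks and the user's question could be combined; the wording is illustrative, not the flow's actual prompt:

```javascript
// Illustrative structured prompt: retrieved chunks become numbered
// context entries, followed by the user's question.
function buildPrompt(question, retrievedChunks) {
  return [
    "You are an analytics assistant for social media engagement data.",
    "Answer using ONLY the context below.",
    "",
    "Context:",
    ...retrievedChunks.map((chunk, i) => `[${i + 1}] ${chunk}`),
    "",
    `Question: ${question}`,
  ].join("\n");
}
```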
- The Google Generative AI component generates insightful content.
- Model used: gemini-1.5-pro
- The generated content is displayed in a chat format.