ResShare is a decentralized file-sharing application that lets users securely store, share, and manage their files using ResilientDB technology. The application now features an AI-powered chatbot that can answer questions about your uploaded documents using Retrieval-Augmented Generation (RAG).
- User Authentication (Sign up, Login, Logout)
- File Upload and Download
- Folder Creation and Management
- File Sharing between Users
- Secure File Storage using IPFS
- User Account Management
- Intelligent Document Q&A: Ask questions about your uploaded files
- Multi-format Support: Works with PDF, DOCX, and TXT files
- Semantic Search: Finds relevant information across all your documents
- Privacy-First: Your data stays isolated and secure
- Source Attribution: See which documents were used to answer your questions
- Frontend: React.js with Material-UI
- Backend: Python Flask
- Storage: ResilientDB for Metadata storage and IPFS for File Storage
- Authentication: Session-based authentication
- AI/ML:
- Sentence Transformers for embeddings
- FAISS for vector search
- Google Gemini (optional) or local models for response generation
- LangChain for text processing
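The embedding and vector-search layers can be illustrated with a deliberately simplified sketch: a toy bag-of-characters embedding and a brute-force cosine-similarity search stand in for the real sentence-transformer model and FAISS index, but the shape of the operation is the same.

```python
import math

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy deterministic bag-of-characters 'embedding', unit-normalized.
    A real deployment would call a sentence-transformer model instead."""
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def search(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the query embedding --
    the same operation FAISS performs at scale with an ANN index."""
    q = embed(query)
    def score(chunk: str) -> float:
        return sum(a * b for a, b in zip(q, embed(chunk)))
    return sorted(corpus, key=score, reverse=True)[:k]
```

In the real pipeline the corpus entries are document chunks and the index is persisted per user; this sketch only shows the retrieval math.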
- Python 3.8+
- Node.js 16+ and npm
- IPFS daemon running locally
- (Optional) Gemini API key for enhanced AI responses
- Clone the repository:
```bash
git clone https://github.com/NoBugInMyCode/ResShareDeployable.git
cd ResShareDeployable
```
- Install backend dependencies:
```bash
pip install -r requirements.txt
```
- Install frontend dependencies:
```bash
cd frontend
npm install
cd ..
```
- Set up AI features (optional but recommended):
```bash
# For enhanced AI responses, set your Gemini API key
export GOOGLE_API_KEY="your-gemini-api-key-here"
```
- Start the IPFS daemon:
```bash
ipfs daemon
```

To install IPFS Cluster Service, please refer to this link.
- Start the backend server:
```bash
python app.py
```
- Start the frontend development server:
```bash
cd frontend
npm start
```

The application will be available at:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5000
- Upload Documents: Upload PDF, DOCX, or TXT files through the normal file upload process
- Navigate to AI Chat: Click the "AI Chat" button in the navigation bar
- Ask Questions: Type questions about your documents and get intelligent responses
- "What are the main findings in my research paper?"
- "Summarize the key points from my meeting notes"
- "What does my contract say about payment terms?"
- "Find information about project deadlines"
- PDF: Research papers, reports, contracts
- DOCX: Word documents, meeting notes, proposals
- TXT: Plain text files, code documentation, notes
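A minimal extractor dispatch for these formats might look like the sketch below. The PDF and DOCX branches assume the PyPDF2 and python-docx packages named in the architecture section are installed; TXT is read directly as UTF-8.

```python
import os

def extract_text(path: str) -> str:
    """Route a file to the appropriate text extractor based on extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".txt":
        with open(path, encoding="utf-8") as f:
            return f.read()
    if ext == ".pdf":
        from PyPDF2 import PdfReader  # assumes PyPDF2 is installed
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if ext == ".docx":
        from docx import Document  # assumes python-docx is installed
        return "\n".join(p.text for p in Document(path).paragraphs)
    raise ValueError(f"Unsupported file type: {ext}")
```

Raising on unknown extensions keeps unsupported uploads out of the AI index while leaving normal file storage unaffected.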
- Smart Chunking: Documents are intelligently split into semantic chunks
- Vector Search: Uses advanced embeddings to find relevant content
- Source Attribution: Shows which files and sections were used for answers
- Privacy Preserved: Each user's AI data is completely isolated
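The smart-chunking step above can be sketched as follows; this is a simplified splitter in the spirit of LangChain's RecursiveCharacterTextSplitter, not the library call itself, and the size/overlap defaults are illustrative.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, preferring to break at
    paragraph, then sentence, then word boundaries."""
    separators = ["\n\n", ". ", " "]
    chunks, start = [], 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        if end < len(text):
            # back up to the nearest natural boundary inside the window
            for sep in separators:
                cut = text.rfind(sep, start, end)
                if cut > start:
                    end = cut + len(sep)
                    break
        chunks.append(text[start:end].strip())
        # advance with overlap, always moving forward to guarantee termination
        start = max(end - overlap, start + 1) if end < len(text) else end
    return [c for c in chunks if c]
```

The overlap keeps a sentence that straddles a boundary retrievable from at least one chunk.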
The AI chatbot works out of the box with:
- Local sentence transformer models for embeddings
- FAISS for fast vector search
- Simple extractive responses
For higher quality responses, set up Gemini:
```bash
export GOOGLE_API_KEY="your-gemini-api-key-here"
```
- Create a new account using the sign-up feature
- Log in to your account
- Create folders to organize your files
- Upload files to your folders
- Share files with other users
- Download shared files from other users
- Upload Text Documents: Upload PDF, DOCX, or TXT files
- Wait for Processing: Files are automatically processed for AI search
- Ask Questions: Use the AI Chat interface to query your documents
- Get Intelligent Answers: Receive responses with source attribution
- Explore Knowledge Base: View statistics about your indexed documents
```
File Upload → Text Extraction → Chunking → Embedding → Vector DB Storage
                                                              ↓
User Query → Query Embedding → Vector Search → Context → LLM → Response
```
- Text Extractors: PDF (PyPDF2), DOCX (python-docx), TXT (UTF-8)
- Chunking: LangChain RecursiveCharacterTextSplitter
- Embeddings: Google Gemini API (gemini-embedding-001) with configurable dimensions (768/1536/3072)
- Vector DB: FAISS with per-user isolation
- LLM: Gemini 2.5 Flash (optional) or extractive fallback
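When no Gemini key is configured, the extractive fallback mentioned above can be as simple as returning the best-matching retrieved sentences verbatim. A hedged sketch, assuming retrieval has already produced `(chunk_text, similarity_score)` pairs:

```python
def extractive_answer(query: str, chunks: list[tuple[str, float]],
                      max_sentences: int = 2) -> str:
    """Pick sentences from retrieved chunks that share the most words
    with the query -- a crude stand-in for an LLM-generated answer."""
    query_words = set(query.lower().split())
    scored = []
    for text, score in chunks:
        for sent in text.split(". "):
            overlap = len(query_words & set(sent.lower().split()))
            if overlap:
                # weight word overlap by the chunk's retrieval score
                scored.append((overlap * score, sent.strip()))
    scored.sort(key=lambda s: -s[0])
    picked = [s for _, s in scored[:max_sentences]]
    return ". ".join(picked) + "." if picked else "No relevant passage found."
```

Because the answer is stitched from the chunks themselves, source attribution falls out for free: each picked sentence already carries its originating file.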