This repository contains a collection of applications demonstrating different use cases of Generative AI. Each project is self-contained within its own directory.
## Build a Chatbot for Your Data

A customizable AI-powered chatbot that answers questions based on your own data. This application lets you upload a PDF document and query its contents through a natural-language web interface.
Location: build_chatbot_for_your_data/
Key Technologies:
- Backend: Python, Flask, LangChain
- LLM Integration: IBM Watsonx (default), Hugging Face (alternative)
- Embeddings: Hugging Face Instruct Embeddings, Sentence Transformers
- Vector Store: ChromaDB
- Frontend: HTML, CSS, JavaScript (jQuery)
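Under the hood this is the standard retrieval-augmented generation (RAG) flow: the PDF is split into chunks, the chunks are embedded and stored in ChromaDB, and at question time the most relevant chunks are retrieved and passed to the LLM. A self-contained toy sketch of the retrieval step — a bag-of-words similarity stands in for the real embeddings and vector store, which this sketch does not use:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for the app's Hugging Face embeddings:
    # a bag-of-words term-frequency vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, question, k=1):
    # Stand-in for the ChromaDB similarity search: rank chunks
    # by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "The warranty covers parts and labour for two years.",
    "Refunds are processed within 14 business days.",
    "The device ships with a USB-C charging cable.",
]
print(retrieve(chunks, "How long does the warranty last?")[0])
# -> The warranty covers parts and labour for two years.
```

In the real app the retrieved chunks are inserted into the LLM prompt as context before the question is answered.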
1. Navigate to the project directory:

   ```bash
   cd build_chatbot_for_your_data
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure credentials: update `worker.py` with your credentials for IBM Watsonx, or uncomment and configure the Hugging Face Hub section.

4. Run the application:

   ```bash
   python server.py
   ```

5. Open your web browser and navigate to `http://0.0.0.0:8000`.

6. Upload a PDF file and start asking questions about its content.
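Before opening the browser, you can smoke-test that the Flask server came up. A small stdlib checker (port 8000 assumed from the app's default):

```python
import urllib.error
import urllib.request

def server_is_up(url, timeout=2):
    """Return True if an HTTP server answers at `url` (any status code)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, just not with a 2xx status
    except (urllib.error.URLError, OSError):
        return False

# Prints True once `python server.py` is listening on port 8000.
print(server_is_up("http://localhost:8000"))
```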
You can also run the application using Docker:

```bash
cd build_chatbot_for_your_data
docker build -t chatbot-data .
docker run -p 8000:8000 chatbot-data
```

## Software Development Chatbot

A lightweight client-server chatbot that demonstrates a direct integration with the OpenAI API. It is a simple Node.js application that serves a static frontend and proxies chat requests to the OpenAI API.
Location: software-dev-chatbot/
Key Technologies:
- Backend: Node.js, Express.js
- API: OpenAI `gpt-3.5-turbo-1106`
- Frontend: HTML, CSS, JavaScript
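The proxying boils down to wrapping each chat turn in an OpenAI Chat Completions request body. A sketch of what that payload looks like — the exact shape of `server.js`, and the system prompt here, are assumptions:

```javascript
// Build the JSON body the server forwards to OpenAI's
// /v1/chat/completions endpoint (sketch; server.js may differ).
function buildChatRequest(userMessage, history = []) {
  return {
    model: "gpt-3.5-turbo-1106", // model named in this README
    messages: [
      // Hypothetical system prompt for illustration:
      { role: "system", content: "You are a helpful software-development assistant." },
      ...history, // prior { role, content } turns
      { role: "user", content: userMessage },
    ],
  };
}

const body = buildChatRequest("How do I squash git commits?");
console.log(JSON.stringify(body, null, 2));
// The server then POSTs this with the header:
//   Authorization: Bearer ${process.env.OPENAI_API_KEY}
```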
1. Navigate to the project directory:

   ```bash
   cd software-dev-chatbot
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Set your OpenAI API key as an environment variable:

   ```bash
   export OPENAI_API_KEY='your-openai-api-key'
   ```

4. Start the server:

   ```bash
   node server.js
   ```

5. Open your web browser and go to `http://localhost:3000`.
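The server presumably reads the key set in step 3 from `process.env.OPENAI_API_KEY`. A minimal fail-fast startup guard along these lines (illustrative; not necessarily how `server.js` does it):

```javascript
// Fail fast at startup if the API key from step 3 is missing.
function requireApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error(
      "OPENAI_API_KEY is not set; run: export OPENAI_API_KEY='your-openai-api-key'"
    );
  }
  return key;
}

// Example:
// const apiKey = requireApiKey(); // throws if the variable is unset
```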
