Semantic FAQ Chatbot 🚀

An intelligent FAQ-based chatbot built with SentenceTransformers. It combines semantic similarity and keyword matching to provide accurate answers to user queries. The chatbot supports API integration, fine-tuning, logging, and automatic FAQ updates.


✨ Features

  • Hybrid Search: Combines semantic similarity (SentenceTransformers) with keyword matching; see the sketch after this list.
  • Fallback Mechanism: Returns a default response if similarity is too low.
  • Embedding Storage: Saves embeddings with Pickle for faster startup.
  • FastAPI Integration: Provides REST API endpoints for chatbot interaction.
  • Automatic Logging: Stores all queries and responses in CSV logs.
  • FAQ Correction: Dynamically update chatbot answers.
  • Fine-tuning: Retrain the model on custom FAQ data.
  • Evaluation with BERTScore: Measure model accuracy before and after fine-tuning.
  • Dataset: Uses the Ecommerce FAQ Dataset.
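
A minimal sketch of how such a hybrid score can be combined is shown below. The model name, the 0.7/0.3 weighting, the 0.45 fallback threshold, and the tiny in-line FAQ are illustrative assumptions, not values taken from this repository:

from sentence_transformers import SentenceTransformer, util

# Assumed model and data; the repository loads its FAQ from ecommerce_faq.csv instead.
model = SentenceTransformer("all-MiniLM-L6-v2")
faq = [
    {"question": "What are your payment methods?",
     "answer": "We accept credit cards and PayPal."},
    {"question": "How can I track my order?",
     "answer": "Use the tracking link sent to your email."},
]
faq_embeddings = model.encode([item["question"] for item in faq], convert_to_tensor=True)

def keyword_overlap(query: str, question: str) -> float:
    # Fraction of the FAQ question's words that also appear in the user query.
    q_words = set(query.lower().split())
    f_words = set(question.lower().split())
    return len(q_words & f_words) / max(len(f_words), 1)

def answer(query: str, threshold: float = 0.45) -> str:
    query_emb = model.encode(query, convert_to_tensor=True)
    semantic = util.cos_sim(query_emb, faq_embeddings)[0]  # cosine similarity per FAQ entry
    scores = [0.7 * float(semantic[i]) + 0.3 * keyword_overlap(query, item["question"])
              for i, item in enumerate(faq)]
    best = max(range(len(faq)), key=lambda i: scores[i])
    if scores[best] < threshold:  # fallback when no entry is a good match
        return "Sorry, I don't have an answer for that yet."
    return faq[best]["answer"]

For example, answer("Which payment options do you support?") should resolve to the payment-methods entry even though the wording differs from the stored question.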

📦 Requirements

Install dependencies:

pip install pandas sentence-transformers torch datasets scikit-learn fastapi uvicorn bert-score

โš™๏ธ Usage

1. Run FastAPI server

uvicorn your_file_name:app --reload

The server will start at: http://127.0.0.1:8000

2. API Endpoints

  • Ask the chatbot:
POST /chat
{
  "query": "What are your payment methods?"
}
  • Correct a chatbot response:
POST /correct
{
  "question": "What are your payment methods?",
  "correct_answer": "Payment is available only via credit card."
}
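
The repository's FastAPI module already defines these routes; the sketch below shows one way /chat, /correct, and the CSV logging could fit together on top of the retrieval sketch from the Features section. The request models, the log format, and the update logic here are assumptions for illustration:

import csv
from datetime import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    query: str

class CorrectionRequest(BaseModel):
    question: str
    correct_answer: str

def log_interaction(query: str, response: str, path: str = "chat_logs.csv") -> None:
    # Append every query/response pair to the CSV log.
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), query, response])

@app.post("/chat")
def chat(req: ChatRequest):
    response = answer(req.query)  # hybrid retrieval function from the Features sketch
    log_interaction(req.query, response)
    return {"answer": response}

@app.post("/correct")
def correct(req: CorrectionRequest):
    global faq_embeddings
    # Update the stored answer for a known question, or add a new entry and re-encode.
    for item in faq:
        if item["question"].lower() == req.question.lower():
            item["answer"] = req.correct_answer
            return {"status": "updated"}
    faq.append({"question": req.question, "answer": req.correct_answer})
    faq_embeddings = model.encode([item["question"] for item in faq], convert_to_tensor=True)
    return {"status": "added"}

Because both request bodies are plain JSON, any HTTP client works; FastAPI's auto-generated docs at http://127.0.0.1:8000/docs are a quick way to try the two routes interactively.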

🧠 Training & Evaluation

  • Evaluate baseline accuracy: Uses BERTScore to measure model performance on test data.

  • Fine-tuning the model (see the sketch after this list):

fine_tune(df, epochs=2, batch_size=8)
  • Re-evaluate after fine-tuning: BERTScore is run again to compare performance before/after training.
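
The fine_tune helper and the BERTScore comparison are implemented in the repository; the following is a rough sketch of how they could look with sentence-transformers and bert-score, assuming a DataFrame with question and answer columns and reusing model and answer() from the earlier sketches:

from bert_score import score
from sentence_transformers import InputExample, losses
from torch.utils.data import DataLoader

def fine_tune(df, epochs=2, batch_size=8):
    # Train on (question, answer) pairs with a ranking loss, then save the result.
    examples = [InputExample(texts=[row["question"], row["answer"]]) for _, row in df.iterrows()]
    loader = DataLoader(examples, shuffle=True, batch_size=batch_size)
    loss = losses.MultipleNegativesRankingLoss(model)
    model.fit(train_objectives=[(loader, loss)], epochs=epochs, show_progress_bar=True)
    model.save("fine_tuned_model/")

def evaluate(test_queries, reference_answers):
    # BERTScore F1 between the chatbot's answers and the reference answers.
    predictions = [answer(q) for q in test_queries]
    _, _, f1 = score(predictions, reference_answers, lang="en")
    return float(f1.mean())

Calling evaluate on the same held-out split before and after fine_tune gives the before/after comparison described above.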

📊 Project Structure

  • ecommerce_faq.csv → Main FAQ dataset
  • faq_embeddings.pkl → Pre-computed embeddings
  • chat_logs.csv → Query/response logs
  • fine_tuned_model/ → Saved fine-tuned model
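
The faq_embeddings.pkl cache is what lets the app skip re-encoding the full FAQ on every start. A minimal caching pattern consistent with the file names above, reusing model from the earlier sketch (the question column name is an assumption):

import os
import pickle
import pandas as pd

def load_or_build_embeddings(csv_path="ecommerce_faq.csv", cache_path="faq_embeddings.pkl"):
    # Reuse cached embeddings when present; otherwise encode the FAQ questions and cache them.
    df = pd.read_csv(csv_path)
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            embeddings = pickle.load(f)
    else:
        embeddings = model.encode(df["question"].tolist(), convert_to_tensor=True)
        with open(cache_path, "wb") as f:
            pickle.dump(embeddings, f)
    return df, embeddings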

🔮 Next Steps

  • Add a user-friendly interface (web or Telegram).
  • Add multilingual support (English + Persian).
  • Integrate a database (SQLite, MongoDB).
  • Speed up retrieval with FAISS or Milvus.
