Interview Question Creator is a powerful tool designed to streamline and enhance interview preparation by generating domain-specific interview questions and answers. This application leverages AI-powered models and custom processing techniques to create comprehensive question sets across technical, theoretical, and behavioral domains. Users can upload a PDF of relevant materials, from which tailored interview questions are generated, refined, and formatted for download.
This tool uses the Mistral large language model via LangChain, a library that simplifies working with advanced language models and enables efficient pipelines for both question generation and refinement. Generated questions go through a prompt-engineered refinement pass so that they cover diverse aspects of interview readiness, from technical knowledge to problem-solving and interpersonal skills. The refine technique is especially valuable here: it iterates on the initial question set with additional context from the input material, sharpening each question's clarity and depth and keeping the overall set well-rounded and relevant to the interview topic.
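As a rough illustration, such a refine-style chain can be assembled with LangChain's Mistral integration. This is a minimal sketch; the prompt wording and parameters are illustrative, not the project's exact ones:

```python
# Minimal sketch of a refine-style question-generation chain.
# Assumes the langchain-mistralai integration; prompts are illustrative.
from langchain_mistralai import ChatMistralAI
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain

llm = ChatMistralAI(model="mistral-large-latest", temperature=0.3)

question_prompt = PromptTemplate.from_template(
    "You are an expert interview coach. Based on the text below, draft "
    "interview questions covering technical, theoretical, and behavioral "
    "aspects:\n\n{text}\n\nQUESTIONS:"
)

refine_prompt = PromptTemplate.from_template(
    "You have drafted these interview questions:\n{existing_answer}\n\n"
    "Refine them for clarity and depth using the additional context below, "
    "keeping the set diverse and well-rounded.\n\n{text}\n\nREFINED QUESTIONS:"
)

# chain_type="refine" iterates over the document chunks, improving the
# question set with each additional piece of context.
chain = load_summarize_chain(
    llm=llm,
    chain_type="refine",
    question_prompt=question_prompt,
    refine_prompt=refine_prompt,
)
```

Calling `chain.invoke({"input_documents": chunks})` on the chunked PDF then yields the refined question set.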
Once questions and answers are generated, they are formatted in a DOCX file with custom styling for readability and clarity. This includes applying Markdown-inspired styling, such as bullet points, headers, and bold text, to create a professional and accessible document.
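A minimal sketch of what that export step can look like with python-docx (the `write_qa_docx` helper and its styling are illustrative; the project's actual formatting may differ):

```python
# Minimal sketch of the DOCX export using python-docx.
from docx import Document

def write_qa_docx(qa_pairs, path="interview_questions.docx"):
    doc = Document()
    doc.add_heading("Interview Questions & Answers", level=1)
    for i, (question, answer) in enumerate(qa_pairs, start=1):
        # Bold question as a sub-heading, answer as a bullet point.
        para = doc.add_paragraph()
        para.add_run(f"Q{i}. {question}").bold = True
        doc.add_paragraph(answer, style="List Bullet")
    doc.save(path)

write_qa_docx([("What is FAISS?",
                "A library for efficient similarity search over dense vectors.")])
```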
On the frontend, a simple and intuitive HTML, CSS, and JavaScript interface allows users to upload PDFs, specify the number of questions, and download the final document. JavaScript manages user interactions, including file upload, server communication, status polling, and download link display, providing a smooth experience throughout.
Since the server has a 1-minute response timeout and question generation can exceed this time limit, the app is designed to handle tasks asynchronously using Celery and Redis. Redis serves as both the task broker and results backend, while Celery enables background task management, allowing requests to be processed reliably and progress updates to be provided in real-time. Finally, the application is deployed on an AWS EC2 instance, providing flexibility, control, and scalability for end users who need on-demand access to the service.
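The moving parts can be wired together along these lines; a minimal sketch assuming Redis on localhost, with hypothetical endpoint and task names:

```python
# Minimal sketch of the async task flow with FastAPI, Celery, and Redis.
from celery import Celery
from fastapi import FastAPI

app = FastAPI()
celery = Celery(
    "app",
    broker="redis://localhost:6379/0",   # Redis as the task broker
    backend="redis://localhost:6379/0",  # Redis as the results backend
)

@celery.task
def generate_questions(pdf_path: str, num_questions: int) -> dict:
    # The long-running generation runs here, in a Celery worker process,
    # so the web server's 1-minute response timeout never applies.
    ...  # load the PDF, run the LangChain pipeline, write the DOCX
    return {"docx_path": "output/interview_questions.docx"}

@app.post("/analyze")
def analyze(pdf_path: str, num_questions: int = 10) -> dict:
    # Enqueue the task and return immediately with an id the frontend can poll.
    task = generate_questions.delay(pdf_path, num_questions)
    return {"task_id": task.id}

@app.get("/status/{task_id}")
def status(task_id: str) -> dict:
    # The frontend polls this endpoint until state == "SUCCESS".
    result = celery.AsyncResult(task_id)
    return {"state": result.state,
            "result": result.result if result.ready() else None}
```

With this layout, `celery -A app.celery worker` picks up the `celery` object defined in `app.py`, which matches the worker command used later in this README.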
- Language: Python
- Framework: LangChain
- Backend: FastAPI
- Model: Mistral Large Language Model (via LangChain)
- Database: FAISS (vector storage for embeddings)
- Message Queue: Redis
- Task Management: Celery
- Frontend: HTML, CSS, JavaScript
- Deployment: AWS EC2
- Version Control: GitHub
The code is written in Python 3.10.15. If you don't have Python installed, you can find it here. If you are using an older version of Python, upgrade to Python 3.10 or later, and make sure you have the latest version of pip.
git clone https://github.com/jatin-12-2002/Interview_Question_Creator
cd Interview_Question_Creator
conda create -p env python=3.10 -y
source activate ./env
pip install -r requirements.txt
- Create a .env file in the project directory.
- Define the necessary environment variables such as database connection strings, API keys, etc.
- Your .env file should have these variables:
MISTRAL_API_KEY=""
HF_TOKEN=""
- My .env file is here
sudo apt-get update
sudo apt-get install redis-server
sudo service redis-server start
redis-cli ping
celery -A app.celery worker --loglevel=info
uvicorn app:app --host 0.0.0.0 --port 8080 --workers 2
http://localhost:8080/
You can download the sample output given by the model from here
Use t2.large or larger instances only, as this is a Generative AI project using LLMs.
INFORMATION: sudo apt-get update and sudo apt update both update the package index on a Debian-based system like Ubuntu, but they differ slightly in the tools they use and their functionality:
- sudo apt-get update: uses apt-get, the traditional command-line interface for the APT package management system.
- sudo apt update: uses apt, a newer, more user-friendly command-line interface for the APT package management system.
Step 6.2: Update the package index and install the required packages:
sudo apt update -y
sudo apt install git nginx -y
sudo apt install git curl unzip tar make sudo vim wget -y
git clone https://github.com/jatin-12-2002/Interview_Question_Creator
cd Interview_Question_Creator
touch .env
vi .env
MISTRAL_API_KEY=""
HF_TOKEN=""
cat .env
sudo apt install python3-pip
Step 6.11: Install the dependencies from requirements.txt. The --break-system-packages flag in pip lets you override the externally-managed-environment error and install Python packages system-wide.
pip3 install -r requirements.txt
OR
pip3 install -r requirements.txt --break-system-packages
Step 6.12: Test the Application with Uvicorn. Verify the app is working by visiting http://your-ec2-public-ip:8080
uvicorn app:app --host 0.0.0.0 --port 8080
Step 6.13: Configure Nginx as a Reverse Proxy. Set up Nginx to forward requests to Uvicorn. Open the Nginx configuration file:
sudo nano /etc/nginx/sites-available/default
server {
listen 80;
server_name your-ec2-public-ip;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
sudo systemctl restart nginx
Step 6.16: Set Up Uvicorn as a Background Service. To keep Uvicorn running after you log out, set it up as a systemd service. Create a systemd service file:
sudo nano /etc/systemd/system/uvicorn.service
[Unit]
Description=Uvicorn instance to serve FastAPI app
After=network.target
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/Interview_Question_Creator
ExecStart=/usr/local/bin/uvicorn app:app --host 0.0.0.0 --port 8080
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl start uvicorn
sudo systemctl enable uvicorn
sudo apt-get update
sudo apt-get install redis-server
sudo service redis-server start
redis-cli ping
celery -A app.celery worker --loglevel=info
- Go to the Security tab of your EC2 instance.
- Click on the security group.
- Configure your inbound rules with the following values:
- Port 8080, source 0.0.0.0/0 (anywhere), protocol TCP
- Port 6379 (Redis), source 0.0.0.0/0 (anywhere), protocol TCP. For production, it is safer to restrict Redis access to the instance itself or its security group rather than 0.0.0.0/0.
uvicorn app:app --host 0.0.0.0 --port 8080
http://your-ec2-public-ip:8080
If you encounter an error like code 400 while opening https://your-ec2-public-ip:8080, just run it with http instead of https.
Check that your app is accessible through http://your-ec2-public-ip. Nginx will handle incoming requests and proxy them to Uvicorn.
This setup makes your app production-ready by using Nginx and Uvicorn for stability, performance, and scalability. You can continue to scale by increasing Uvicorn workers or adding load balancing if traffic grows.
- The Interview Question Creator uses a RAG (Retrieval-Augmented Generation) approach, combining retrieval-based techniques with generative AI to produce highly relevant, domain-specific questions and answers. By utilizing FAISS as the vector database, the app retrieves contextually similar information from user-uploaded documents, ensuring that generated questions are precise and tailored to specific topics, covering technical, behavioral, and theoretical aspects effectively (see the retrieval sketch after this list).
- The Mistral model and its embeddings play a crucial role in semantic understanding, creating embeddings that accurately capture the nuances of the uploaded content. This allows the app to build contextual embeddings, aligning the generated questions closely with the specific themes and topics of each document.
- Integrating LangChain for prompt engineering and question refinement provides users with high-quality, diversified question sets. The iterative refinement process ensures each question is clear, focused, and balanced across multiple interview dimensions.
- Asynchronous task management with Celery and Redis enables efficient handling of longer processing times, allowing the application to manage complex question generation tasks within real-world server constraints, providing status updates to users throughout the process.
- AWS EC2 deployment ensures scalability and stability, giving users consistent, on-demand access to the service. Combined with an intuitive frontend interface, this solution offers a seamless, interactive experience for custom interview preparation.
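As a closing illustration of the retrieval side described above, here is a minimal sketch assuming LangChain's FAISS wrapper and Mistral's mistral-embed model (the chunk sizes and k value are illustrative):

```python
# Minimal sketch of the RAG retrieval step: chunk, embed, index, retrieve.
from langchain_mistralai import MistralAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

def build_retriever(raw_text: str):
    # Split the uploaded PDF text into overlapping chunks.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_text(raw_text)

    # Embed the chunks and index them in FAISS for similarity search.
    embeddings = MistralAIEmbeddings(model="mistral-embed")
    store = FAISS.from_texts(chunks, embeddings)

    # Return the top-4 most similar chunks for a given query.
    return store.as_retriever(search_kwargs={"k": 4})
```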