A multi-agent technical interview system that conducts structured Excel interviews with automated evaluation, built on LangGraph and Google Gemini AI.
- Python: 3.11 or higher
- Operating System: Windows, macOS, or Linux
- Memory: Minimum 4GB RAM (8GB recommended)
- Storage: At least 1GB free space
- Google Gemini API Key: Required for LLM evaluation and question generation
- Get your API key from Google AI Studio
- Free tier available with usage limits
1. Install uv (if not already installed):

   ```bash
   # On macOS and Linux
   curl -LsSf https://astral.sh/uv/install.sh | sh

   # On Windows
   powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
   ```
2. Clone the repository:

   ```bash
   git clone <repository-url>
   cd excel-interview-agent
   ```
3. Install dependencies:

   ```bash
   uv sync
   ```
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd excel-interview-agent
   ```
2. Create and activate a virtual environment:

   ```bash
   python -m venv venv

   # On Windows
   venv\Scripts\activate

   # On macOS/Linux
   source venv/bin/activate
   ```
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
1. Create and activate a conda environment:

   ```bash
   conda create -n excel-interview python=3.11
   conda activate excel-interview
   ```
2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
1. Copy the environment template:

   ```bash
   cp env.example .env
   ```
2. Edit the `.env` file with your configuration:

   ```ini
   # Required: Google Gemini API Key
   GOOGLE_API_KEY=your_google_api_key_here

   # Optional: Customize model settings
   GOOGLE_MODEL=gemini-1.5-flash
   TEMPERATURE=0.1
   ```
3. Get a Google Gemini API key:
   - Visit Google AI Studio
   - Sign in with your Google account
   - Create a new API key
   - Copy the key into your `.env` file
4. Verify the API key:
   - Ensure the key is valid and has the appropriate permissions
   - Check your API usage limits in Google AI Studio
Using uv:

```bash
uv run python app.py
```

Using pip:

```bash
python app.py
```
The application supports several command-line options:

```text
python app.py [OPTIONS]

Options:
  --mock              Use mock evaluator instead of LLM (for testing)
  --port PORT         Port to run the Gradio app on (default: 7860)
  --host HOST         Host to run the Gradio app on (default: 127.0.0.1)
  --share             Create a public link for the Gradio app
  --log-level LEVEL   Logging level: DEBUG, INFO, WARNING, ERROR (default: INFO)
```
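These flags map naturally onto `argparse`. The following is a sketch of how `app.py` might define them, reconstructed from the option list above rather than taken from the actual source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented CLI; names and defaults come from the
    # options table, but this is not necessarily app.py's exact code.
    p = argparse.ArgumentParser(prog="app.py")
    p.add_argument("--mock", action="store_true",
                   help="Use mock evaluator instead of LLM (for testing)")
    p.add_argument("--port", type=int, default=7860,
                   help="Port to run the Gradio app on")
    p.add_argument("--host", default="127.0.0.1",
                   help="Host to run the Gradio app on")
    p.add_argument("--share", action="store_true",
                   help="Create a public link for the Gradio app")
    p.add_argument("--log-level", default="INFO",
                   choices=["DEBUG", "INFO", "WARNING", "ERROR"],
                   help="Logging level")
    return p
```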
Local development:

```bash
python app.py --log-level DEBUG
```

Public sharing:

```bash
python app.py --share --host 0.0.0.0
```

Custom port:

```bash
python app.py --port 8080
```

Mock mode (no API key required):

```bash
python app.py --mock
```
Once started, the application will be available at:
- Local: http://127.0.0.1:7860
- Network: http://[your-ip]:7860 (if using --host 0.0.0.0)
- Public: A shareable link will be provided (if using --share)
1. Start the interview:
   - Check the consent checkbox
   - Optionally enable LLM evaluation
   - Click "Start Interview"

2. Conduct the interview:
   - Respond to questions in the chat interface
   - Use "Submit Response" to send your answers
   - Use "End Interview Early" if needed

3. View the results:
   - Click "Get Report" after completing the interview
   - Download a PDF report using "Download PDF Report"
```text
excel-interview-agent/
├── src/
│   ├── interview_engine/         # Core interview logic
│   │   ├── engine.py             # Main interview engine
│   │   ├── evaluator.py          # LLM evaluation logic
│   │   ├── question_generator.py # Dynamic question generation
│   │   ├── reporter.py           # Report generation
│   │   ├── persistence.py        # Session persistence
│   │   └── models.py             # Data models
│   └── ui/
│       └── gradio_app.py         # Gradio web interface
├── app.py                        # Main application entry point
├── pyproject.toml                # Project configuration
├── requirements.txt              # Python dependencies
└── env.example                   # Environment template
```
Install development dependencies:

```bash
# Using uv
uv sync --dev

# Using pip (if a requirements-dev.txt is provided)
pip install -r requirements-dev.txt
```
```bash
# Run all tests
python -m pytest

# Run with coverage
python -m pytest --cov=src
```
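Tests follow the usual pytest conventions. The example below shows the shape of such a test with a toy helper standing in for real project code; the file name and function under test are illustrative, not part of the repository:

```python
# tests/test_smoke.py -- illustrative example, not an actual repo file.

def normalize_score(raw: float, max_raw: float = 10.0) -> float:
    """Toy helper: clamp a raw score into the [0, 1] range."""
    return max(0.0, min(1.0, raw / max_raw))

def test_normalize_score_clamps():
    # pytest discovers functions named test_* and runs their assertions
    assert normalize_score(5.0) == 0.5
    assert normalize_score(15.0) == 1.0   # clamped high
    assert normalize_score(-3.0) == 0.0   # clamped low
```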
1. API Key Not Working

```text
Error: Failed to start interview: Invalid API key
```

- Verify that your Google API key is correct
- Check that the API key has the proper permissions
- Ensure you are not hitting rate limits
2. Port Already in Use

```text
Error: Port 7860 is already in use
```

- Use a different port:

  ```bash
  python app.py --port 8080
  ```

- Kill the process using the port:

  ```bash
  lsof -ti:7860 | xargs kill -9
  ```
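Alternatively, a small Python helper (illustrative, not part of the project) can locate a free port to pass to `--port`:

```python
import socket

def find_free_port(start: int = 7860, limit: int = 50) -> int:
    """Scan upward from `start` for a port the OS will let us bind."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))  # succeeds only if the port is free
                return port
            except OSError:
                continue  # port busy; try the next one
    raise RuntimeError(f"no free port in [{start}, {start + limit})")
```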
3. Module Import Errors

```text
ModuleNotFoundError: No module named 'src'
```

- Ensure you are running from the project root directory
- Activate your virtual environment
- Reinstall dependencies
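If the error persists, a workaround sketch (assuming you launch from the project root) is to put that directory on `sys.path` before importing, e.g. at the top of a script:

```python
import sys
from pathlib import Path

# Workaround sketch: make `src` importable by putting the project root
# (here assumed to be the current working directory) on sys.path.
project_root = str(Path.cwd())
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```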
4. Gradio Interface Not Loading

- Check that all dependencies are installed correctly
- Try running with `--log-level DEBUG` for more information
- Ensure your firewall allows the port
5. PDF Generation Issues

```text
Error generating PDF report
```

- Check that reportlab is installed correctly
- Ensure you have write permissions in the sessions directory
- Try running with elevated permissions if needed
Run with debug logging for detailed information:

```bash
python app.py --log-level DEBUG
```

Application logs are saved to `interview_app.log` in the project directory.
| Variable | Required | Default | Description |
|---|---|---|---|
| `GOOGLE_API_KEY` | Yes | - | Google Gemini API key |
| `GOOGLE_MODEL` | No | `gemini-1.5-flash` | Model to use for generation |
| `TEMPERATURE` | No | `0.1` | Model temperature setting |
| Option | Type | Default | Description |
|---|---|---|---|
| `--mock` | Flag | `False` | Use mock evaluator |
| `--port` | Integer | `7860` | Server port |
| `--host` | String | `127.0.0.1` | Server host |
| `--share` | Flag | `False` | Create public link |
| `--log-level` | String | `INFO` | Logging level |
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes
4. Run the tests: `python -m pytest`
5. Commit your changes: `git commit -m "Add feature"`
6. Push the branch: `git push origin feature-name`
7. Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions:
- Check the troubleshooting section above
- Search existing issues in the repository
- Create a new issue with detailed information
- Include logs and error messages when reporting bugs
To update the application:

```bash
git pull origin main
uv sync   # or: pip install -r requirements.txt
```
Note: This application requires an active internet connection for LLM evaluation and question generation.