# AI Interview Coach
An intelligent desktop app built with Python, Tkinter, and AI models that analyzes your speech, posture, eye contact, and sentiment, either in real time or from recorded videos, to help you improve your interview performance.
## Features
- **Live Interview Mode**: Record via webcam and mic with real-time subtitles and live sentiment color feedback (green, red, white).
- **Recorded Video Analysis**: Upload any pre-recorded interview video to get a complete AI-generated report.
- **Smart Report Dashboard**: Displays:
  - Posture Score
  - Eye Contact Score
  - Speech Sentiment Score
  - Overall Interview Confidence
- **Posture & Eye Tracking**: Uses MediaPipe to analyze body alignment and gaze direction.
- **Speech Sentiment Analysis**: Powered by Vosk ASR and HuggingFace Transformers to evaluate positivity and clarity of tone.
- **Dynamic UI**: Modern Tkinter interface with dark mode, a clean layout, and vivid live feedback colors.
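The live sentiment color feedback can be sketched as a simple mapping from the sentiment model's output label to a Tkinter color name. This is an illustrative sketch, not the app's actual code; the function name and label set are assumptions based on the typical labels a HuggingFace sentiment pipeline emits:

```python
def subtitle_color(label: str) -> str:
    """Map a sentiment label (e.g. from a HuggingFace pipeline) to a subtitle color."""
    mapping = {
        "POSITIVE": "green",
        "NEGATIVE": "red",
        "NEUTRAL": "white",
    }
    # Fall back to neutral white for any unrecognized label.
    return mapping.get(label.upper(), "white")

print(subtitle_color("positive"))  # green
print(subtitle_color("negative"))  # red
```

In the live UI, this color would be applied to the subtitle text widget each time a new transcript chunk is scored.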
## Tech Stack

| Layer | Technologies Used |
| --- | --- |
| Frontend (GUI) | Tkinter, Pillow (PIL), Matplotlib |
| Speech Processing | Vosk (offline ASR), SoundDevice |
| Sentiment Analysis | Transformers (BERT / RoBERTa), Torch |
| Posture Detection | OpenCV, MediaPipe |
| Video Handling | MoviePy |
| Backend Logic | Python 3.12, Threading, JSON |
| Visualization | Matplotlib (pie charts), Tkinter Canvas |
| Packaging | requirements.txt for reproducible setup |

## Installation

### 1. Clone the Repository

```bash
git clone https://github.com/nihals007/AI-Interview-Coach.git
cd AI-Interview-Coach
```
### 2. Create & Activate a Virtual Environment

```bash
python -m venv venv
venv\Scripts\activate        # Windows
# source venv/bin/activate   # macOS / Linux
```
### 3. Install Dependencies

```bash
pip install -r requirements.txt
```
### 4. Run the App

```bash
python app.py
```
## Folder Structure

```
AI-Interview-Coach1/
├── app.py                  # Main Tkinter application
├── models/
│   ├── posture_model.py    # Body posture and eye-contact analyzer
│   ├── speech_model.py     # Speech recording & transcription logic
│   └── sentiment_model.py  # Text sentiment analyzer (Transformers)
├── requirements.txt        # All dependencies
├── snapshot.jpg            # Auto-generated snapshot (from last test)
├── audio.wav               # Temporary recorded audio file
└── .gitignore              # Keeps venv and cache files out of Git
```
## How It Works
### Live Mode

1. Opens the webcam and mic, recording video and audio.
2. Uses Vosk for real-time speech recognition.
3. Runs live sentiment detection; subtitle colors: Positive (green) | Neutral (white) | Negative (red).
4. Analyzes facial alignment (eye contact) and posture with MediaPipe.
5. On stop, generates a detailed report with a pie chart and transcript.
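The overall confidence figure in the generated report is consistent with a simple average of the three component scores. A minimal sketch, assuming equal weighting (the actual weighting used by the app is not stated in this README):

```python
def overall_confidence(posture: float, eye_contact: float, speech: float) -> int:
    """Combine the three component scores (0-100) into one confidence score.

    Equal weights are an assumption for illustration.
    """
    return round((posture + eye_contact + speech) / 3)

# Scores from the example report below: posture 82, eye contact 88, speech 79.
print(overall_confidence(82, 88, 79))  # 83
```

Unequal weights (e.g. emphasizing speech sentiment) would be a one-line change to the averaging step.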
### Recorded Video Mode

1. Upload any .mp4 / .avi / .mov file.
2. Extracts the audio, then runs transcription, sentiment analysis, and posture frame sampling.
3. Creates a full AI feedback report with realistic scoring.
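Posture frame sampling on a recorded video can be sketched as picking evenly spaced frame indices rather than analyzing every frame, which keeps MediaPipe inference cheap on long clips. The function name and the one-sample-per-second default are illustrative assumptions, not the app's actual code:

```python
def sample_frame_indices(total_frames: int, fps: float,
                         samples_per_second: float = 1.0) -> list[int]:
    """Return evenly spaced frame indices, taking roughly
    `samples_per_second` frames per second of video."""
    # Step between sampled frames, at least 1 so short clips still yield frames.
    step = max(1, round(fps / samples_per_second))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second, yields 10 frames.
print(len(sample_frame_indices(300, 30.0)))  # 10
```

Each sampled index would then be passed to the posture analyzer, and the per-frame scores averaged into the report's posture metric.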
## Example Report

| Metric | Score | Description |
| --- | --- | --- |
| Posture | 82% | Slight slouch but stable presence |
| Eye Contact | 88% | Mostly maintained eye contact |
| Speech | 79% | Positive and articulate tone |
| Overall | 83% | Great confidence! Minor posture improvement needed |

## Screenshots
(Add your screenshots here before presenting.)
- Live Mode with subtitles
- Recorded video upload screen
- AI-generated report window
## Contributors
- Team Interview Architects
- AI models by open-source communities (HuggingFace, Vosk, MediaPipe)
## Future Enhancements
- Browser-based version using Flask + React
- Emotion recognition via facial analysis
- Voice modulation and clarity scoring
- Resume-based question simulation