
# 🧠 AI Interview Coach

An intelligent desktop app built with Python, Tkinter, and AI models that analyzes your speech, posture, eye contact, and sentiment in real time or from recorded videos, helping you improve your interview performance.

## 🚀 Features

- ✅ 🎙 **Live Interview Mode** – Record via webcam + mic with real-time subtitles and live sentiment color feedback (green, red, white).
- ✅ 📂 **Recorded Video Analysis** – Upload any pre-recorded interview video to get a complete AI-generated report.
- ✅ 📊 **Smart Report Dashboard** – Displays:
  - Posture Score
  - Eye Contact Score
  - Speech Sentiment Score
  - Overall Interview Confidence
- ✅ 🧍 **Posture & Eye Tracking** – Uses MediaPipe to analyze body alignment and gaze direction.
- ✅ 💬 **Speech Sentiment Analysis** – Powered by Vosk ASR + HuggingFace Transformers to evaluate positivity and clarity in tone.
- ✅ 🎨 **Beautiful Dynamic UI** – Modern Tkinter interface with dark mode, smooth layout, and vivid live feedback colors.
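The posture scoring idea can be illustrated with a small geometric sketch. This is a simplified, hypothetical version (the real `posture_model.py` works from MediaPipe landmarks and may score differently): it treats posture as shoulder-line tilt, where level shoulders score highest.

```python
import math

def posture_score(l_shoulder, r_shoulder):
    """Score shoulder alignment from two (x, y) landmark points.

    A level shoulder line (0 degrees of tilt) scores 100; the score
    drops linearly to 0 at a 30-degree tilt or worse. Hypothetical
    thresholds, for illustration only.
    """
    dx = r_shoulder[0] - l_shoulder[0]
    dy = r_shoulder[1] - l_shoulder[1]
    tilt_deg = abs(math.degrees(math.atan2(dy, dx)))
    tilt_deg = min(tilt_deg, 180 - tilt_deg)  # direction-agnostic tilt
    return round(max(0.0, 100.0 * (1 - tilt_deg / 30.0)), 1)

# Level shoulders score 100; a slight slouch lowers the score.
print(posture_score((0.35, 0.50), (0.65, 0.50)))  # 100.0
print(posture_score((0.35, 0.50), (0.65, 0.55)))
```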

πŸ› οΈ Tech Stack Layer Technologies Used Frontend (GUI) Tkinter, Pillow (PIL), Matplotlib Speech Processing Vosk (Offline ASR), SoundDevice Sentiment Analysis Transformers (BERT / RoBERTa), Torch Posture Detection OpenCV, MediaPipe Video Handling MoviePy Backend Logic Python 3.12, Threading, JSON Visualization Matplotlib (Pie Charts), Tkinter Canvas Packaging requirements.txt for reproducible setup βš™οΈ Installation 1️⃣ Clone the Repository git clone https://github.com/nihals007/AI-Interview-Coach.git cd AI-Interview-Coach1

### 2️⃣ Create & Activate a Virtual Environment

```bash
python -m venv venv
venv\Scripts\activate        # Windows
# source venv/bin/activate   # macOS / Linux
```

### 3️⃣ Install Dependencies

```bash
pip install -r requirements.txt
```

### 4️⃣ Run the App

```bash
python app.py
```

## 🧩 Folder Structure

```
AI-Interview-Coach/
│
├── app.py                  # Main Tkinter Application
├── models/
│   ├── posture_model.py    # Body posture and eye-contact analyzer
│   ├── speech_model.py     # Speech recording & transcription logic
│   └── sentiment_model.py  # Text sentiment analyzer (Transformers)
│
├── requirements.txt        # All dependencies
├── snapshot.jpg            # Auto-generated snapshot (from last test)
├── audio.wav               # Temporary recorded audio file
└── .gitignore              # Keeps venv and cache files out of Git
```

## 💡 How It Works

**Live Mode:**

1. Opens webcam & mic, records video and audio.
2. Uses Vosk for real-time speech recognition.
3. Runs live sentiment detection → subtitle colors: 🟢 Positive | ⚪ Neutral | 🔴 Negative.
4. Analyzes facial alignment (eye contact) and posture with MediaPipe.
5. On stop → generates a detailed report with pie chart + transcript.
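The live subtitle coloring above boils down to a small mapping from a sentiment prediction to a Tkinter color name. A minimal sketch, assuming the sentiment model returns a POSITIVE/NEGATIVE label with a confidence score (the function name and threshold are illustrative, not the app's actual code):

```python
def subtitle_color(label, score, neutral_threshold=0.75):
    """Map a sentiment prediction to a subtitle color.

    Low-confidence predictions are treated as neutral (white),
    mirroring the green/white/red live feedback described above.
    The 0.75 threshold is an assumption for illustration.
    """
    if score < neutral_threshold:
        return "white"   # ⚪ Neutral
    if label == "POSITIVE":
        return "green"   # 🟢 Positive
    return "red"         # 🔴 Negative

print(subtitle_color("POSITIVE", 0.98))  # green
print(subtitle_color("NEGATIVE", 0.91))  # red
print(subtitle_color("NEGATIVE", 0.55))  # white
```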

**Recorded Video Mode:**

1. Upload any `.mp4` / `.avi` / `.mov` file.
2. Extracts audio, then runs transcription + sentiment + posture frame sampling.
3. Creates a full AI feedback report with realistic scoring.
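The posture frame sampling mentioned above can be sketched as picking evenly spaced frames rather than analyzing every one. A minimal sketch (the function name and one-frame-per-second rate are assumptions, not the app's actual code):

```python
def sample_frame_indices(total_frames, fps, every_seconds=1.0):
    """Pick evenly spaced frame indices for posture analysis.

    Analyzing roughly one frame per second keeps the recorded-video
    pass fast while still tracking posture across the whole clip.
    """
    step = max(1, int(round(fps * every_seconds)))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second -> 10 frames.
print(sample_frame_indices(300, 30))
# [0, 30, 60, 90, 120, 150, 180, 210, 240, 270]
```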

## 📊 Example Report

| Metric | Score | Description |
| --- | --- | --- |
| 🧍 Posture | 82% | Slight slouch but stable presence |
| 👀 Eye Contact | 88% | Mostly maintained eye contact |
| 💬 Speech | 79% | Positive and articulate tone |
| 🎯 Overall | 83% | Great confidence! Minor posture improvement needed |

## 📸 Screenshots

(Add your screenshots here once you present)

- 🟢 Live Mode with subtitles
- 📂 Recorded video upload screen
- 📊 AI-generated report window
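For reference, the Overall figure in the example report is consistent with a plain average of the three metric scores: (82 + 88 + 79) / 3 = 83. A minimal sketch of that equal-weight combination (an assumption; the app may weight the metrics differently):

```python
def overall_confidence(posture, eye_contact, speech):
    """Combine the three metric scores into one overall figure.

    Equal-weight average, rounded to a whole percent. The real
    app's weighting is not documented here and may differ.
    """
    return round((posture + eye_contact + speech) / 3)

print(overall_confidence(82, 88, 79))  # 83
```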

## 👩‍💻 Contributors

- Team Interview Architects
- AI models by open-source communities (HuggingFace, Vosk, MediaPipe)

## 🧰 Future Enhancements

- Browser-based version using Flask + React
- Emotion recognition via facial analysis
- Voice modulation + clarity score
- Resume-based question simulation
