This project implements an automated evaluation system for handwritten subjective answers. Developed as part of academic coursework, it streamlines manual grading by comparing student answers against predefined key answers and generating an accuracy assessment.
Traditional subjective answer evaluation is manual, time-consuming, and prone to human bias. Teachers spend hours reading and comparing handwritten answers against the key. This system aims to digitize and automate the evaluation process, ensuring consistency and faster grading.
✔️ Handwritten Answer Recognition
Uses Google Cloud Vision API to extract text from scanned handwritten answer images.
✔️ Natural Language Processing Evaluation
Compares extracted text with key answers using NLP techniques to compute similarity scores.
✔️ Accuracy Assessment
Generates an accuracy percentage representing how closely the student's answer matches the expected key answer.
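The similarity-and-accuracy idea above can be sketched with a simple token-frequency cosine similarity. This is a minimal stand-in for the project's NLP pipeline, not the actual `Subjective.py` implementation, which may use `nltk` differently:

```python
import math
import re
from collections import Counter

def cosine_similarity(student_answer: str, key_answer: str) -> float:
    """Token-frequency cosine similarity between two answers, in [0.0, 1.0]."""
    tokenize = lambda text: Counter(re.findall(r"[a-z']+", text.lower()))
    a, b = tokenize(student_answer), tokenize(key_answer)
    dot = sum(a[token] * b[token] for token in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def accuracy_percentage(student_answer: str, key_answer: str) -> float:
    """Express the similarity score as the accuracy percentage shown to the grader."""
    return round(cosine_similarity(student_answer, key_answer) * 100, 2)
```

An identical answer scores 100.0, a fully disjoint answer scores 0.0, and partial overlap falls in between.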
- Programming Language: Python
- APIs & Libraries: Google Cloud Vision API, NLP libraries (`nltk`), `pandas`, `numpy`
- Other Tools: Batch script for execution (`run.bat`)
- Clone this repository:

  ```bash
  git clone https://github.com/vijay-atla/subjective-answer-evaluation.git
  ```

- Navigate to the project directory:

  ```bash
  cd subjective-answer-evaluation/
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up Google Cloud Vision credentials:
  - Create a Google Cloud project.
  - Enable the Vision API.
  - Download `credentials.json` and set your environment variable:

    ```bash
    export GOOGLE_APPLICATION_CREDENTIALS="path/to/credentials.json"
    ```

  - For Windows PowerShell:

    ```powershell
    setx GOOGLE_APPLICATION_CREDENTIALS "path\to\credentials.json"
    ```

- Run the application:

  ```bash
  python Subjective.py
  ```

- Alternatively, run the batch execution file:

  ```bat
  run.bat
  ```
- Author: Vijay Atla
- Year Developed: 2021
- This project was developed for educational purposes. Feel free to fork and customize for learning, research, and academic demonstrations.