
Sakaai-Simulator

LMS-style AI quiz generator + evaluator inspired by Sakai. Upload materials, generate questions, assess responses. Simulates real quiz workflows.


📑 Table of Contents

  1. Overview
  2. Features & Endpoints
  3. Tech Stack
  4. Getting Started
  5. Configuration
  6. Usage Examples
  7. Frontend Integration
  8. Logging & Monitoring
  9. Testing
  10. Deployment
  11. Acknowledgements

Overview

Sakaai-Simulator is a custom-built backend API that supports:

  • Quiz Generation & Conversion
      • Generate new quizzes from text, notes, or uploaded docs
      • Convert existing quizzes (DOCX/TXT) into a validated JSON schema

  • Subjective Answer Evaluation
      • Score essays & fill-in-the-blank responses (0–10 scale)

  • Feedback Collection
      • Collect user feedback via a modal and persist to Google Sheets

While the frontend will emulate Sakai’s quiz-taking and grading UI, this backend focuses on AI-driven logic, strict validation, and robust observability.


Features & Endpoints

1. Health Check

GET /health

Returns the current status of the Sakaai Simulator along with configuration details from environment variables.

Example Response:

{
  "status": "healthy",
  "message": "Sakaai Simulator is up and running.",
  "config": {
    "max_file_size": 5,
    "max_number_of_questions_per_generation": 30,
    "max_requests_per_day": 5,
    "feedback_questions": [
      "What frustrated you the most while using this?",
      "Was anything confusing, broken, or just... not it?",
      "Was there anything that worked well for you?",
      "If you could change one thing about this app, what would it be?",
      "Any final thoughts, rants, or feedback you wish we’d asked for?"
    ]
  }
}
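
For a quick smoke test from Python (a minimal sketch using the requests library; the base URL assumes the local uvicorn setup from Getting Started):

import requests

resp = requests.get("http://localhost:8000/health", timeout=5)
resp.raise_for_status()
# Pull one of the configuration values shown in the example response
print(resp.json()["config"]["max_requests_per_day"])  # e.g. 5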

2. Quiz Generation / Conversion

POST /generate (rate-limited, default 5/day)

Form Data (multipart/form-data):

Field                         Type      Req.  Description
request_id                    string    Yes   Client-generated UUID for tracing
user_additional_instructions  string    Yes   Main prompt
topic                         string    No    High-level subject
quiz_type                     string[]  No    mcq, sata, tf, fitb, essay
num_questions                 integer   No    Desired count
options_per_question          integer   No    Choices per MCQ/SATA
answer_required               boolean   Yes   Include answer field
explanation_required          boolean   Yes   Include explanation field
file_intent                   string    No    study_material or existing_quiz
file                          file      No    .txt, .docx, .pdf (server extracts text only)

Success (200)

{
  "model_used": "...",
  "inference_time": 1.23,
  "question_count": 10,
  "attempt_number": 1,
  "token_usage": {
    "prompt_tokens": 2046,
    "completion_tokens": 446,
    "total_tokens": 2492
  },
  "quizzes": [
    /* MCQ/SATA/TF/FITB/Essay items */
  ]
}

Errors

  • 400 Bad Request
  • 413 Payload Too Large
  • 429 Rate Limit Exceeded ({ retry_after: <sec until midnight UTC> })
  • 502/503 Internal or model errors
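
Putting the field table and the error contract together, a minimal Python client sketch (field names come from the table above; notes.txt and the retry handling are illustrative assumptions):

import uuid
import requests

form = {
    "request_id": str(uuid.uuid4()),
    "user_additional_instructions": "Explain photosynthesis in 5 MCQs",
    "quiz_type": '["mcq"]',
    "num_questions": 5,
    "answer_required": "true",
    "explanation_required": "true",
    "file_intent": "study_material",
}
# Optional upload; the server extracts text from .txt/.docx/.pdf only
files = {"file": ("notes.txt", open("notes.txt", "rb"), "text/plain")}

resp = requests.post("http://localhost:8000/generate", data=form, files=files)
if resp.status_code == 429:
    # Per the error contract, retry_after counts down to midnight UTC
    print("Rate limited; retry in", resp.json()["retry_after"], "seconds")
else:
    resp.raise_for_status()
    body = resp.json()
    print(body["question_count"], "questions from", body["model_used"])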

3. Subjective Evaluation

POST /evaluate

JSON body:

{
  "question": {
    /* EssayQuiz or FITBKeywordQuiz */
  },
  "user_answer": "..."
}

Success (200):

{
  "keyword": 8.0,
  "similarity": 9.5,
  "readability": 8.0,
  "structure": 10.0,
  "final_score": 8.88,
  "time_taken": 0.12,
  "word_count": 45,
  "character_count": 256
}
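
The service computes these component scores internally; the exact formula is not part of the API. Still, since the Tech Stack below lists RapidFuzz and textstat, a hedged sketch of how two of the components could be derived (the rescaling and the plain average are illustrative assumptions, not the real weighting):

from rapidfuzz import fuzz  # token-based fuzzy similarity, returns 0-100
import textstat             # readability metrics such as Flesch reading ease

def sketch_component_scores(reference: str, answer: str) -> dict:
    # Rescale the 0-100 fuzzy ratio to the API's 0-10 band
    similarity = fuzz.token_set_ratio(reference, answer) / 10
    # Clamp Flesch reading ease into 0-100, then rescale to 0-10
    readability = max(0.0, min(textstat.flesch_reading_ease(answer), 100.0)) / 10
    # The real final_score blends four components; a plain average stands in here
    final_score = round((similarity + readability) / 2, 2)
    return {"similarity": similarity, "readability": readability, "final_score": final_score}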

4. Feedback Submission

POST /feedback

JSON body:

{
  "requestId": "",
  "answers": [
    "What feature made you smile?",
    "What annoyed you most?",
    "Which question felt off?",
    "If you could zap one part, what would it be?",
    "Any final thoughts or rants?"
  ]
}

Success (200):

{ "status": "ok", "message": "Feedback recorded" }

Tech Stack

  • FastAPI + SlowAPI (async, rate-limited)
  • Groq + LangChain (LLM orchestration)
  • Pydantic (schema validation)
  • Unstructured (file → text parsing)
  • RapidFuzz, NLTK, textstat (subjective scoring)
  • gspread + Google Service Accounts (feedback storage)

Getting Started

  1. Clone

    git clone https://github.com/Programming-Sai/Sakaai-Simulator.git
    cd Sakaai-Simulator
  2. Virtual Env

    python -m venv venv
    source venv/bin/activate   # macOS/Linux
    venv\Scripts\activate      # Windows
  3. Install

    pip install -r requirements.txt
  4. Configure ➡️ see Configuration

  5. Run

    uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
  6. API Docs ➡️ http://localhost:8000/docs


Configuration

Create a .env:

GROQ_API_KEY=your_groq_key
MAX_REQUEST_PER_DAILY=5/day
MAX_NUM_QUESTIONS=30

MAX_FILE_SIZE_MB=2
MAX_TOKENS=10000

GOOGLE_SERVICE_ACCOUNT_INFO='{"type":"service_account", … }'
FEEDBACK_SHEET_ID=your_sheet_id
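
How the app reads these values is an implementation detail; a typical pattern would be something like the sketch below (python-dotenv is an assumption here, not a confirmed dependency):

import os

from dotenv import load_dotenv  # assumption: python-dotenv loads the .env file

load_dotenv()

GROQ_API_KEY = os.environ["GROQ_API_KEY"]                  # required, no default
MAX_NUM_QUESTIONS = int(os.getenv("MAX_NUM_QUESTIONS", "30"))
MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_SIZE_MB", "2"))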

Usage Examples

  • Generate Quiz (cURL):

    curl -X POST http://localhost:8000/generate \
      -F request_id="$(uuidgen)" \
      -F user_additional_instructions="Explain photosynthesis in 5 MCQs" \
      -F quiz_type='["mcq"]' \
      -F num_questions=5
  • Evaluate Essay:

    curl -X POST http://localhost:8000/evaluate \
      -H "Content-Type: application/json" \
      -d '{"question":{…},"user_answer":"…"}'
  • Submit Feedback:

    curl -X POST http://localhost:8000/feedback \
      -H "Content-Type: application/json" \
      -d '{"request_id":"ID-ONE","answers":[…]}'

Frontend Integration

Your React/Vue/… app will:

  1. POST /generate ➡️ Render quiz UI
  2. Collect answers ➡️ POST /evaluate per question
  3. Show scores & explanations
  4. Popup modal for feedback ➡️ POST /feedback

Use request_id (UUID) to correlate logs, evaluations, and feedback.
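
That sequence compresses into a few lines from any HTTP client; a Python sketch (payloads abbreviated, and the essay quiz_type is chosen so every generated item is eligible for /evaluate):

import uuid
import requests

BASE = "http://localhost:8000"
request_id = str(uuid.uuid4())  # one UUID correlates generate, evaluate, feedback

quiz = requests.post(f"{BASE}/generate", data={
    "request_id": request_id,
    "user_additional_instructions": "3 short essay questions on photosynthesis",
    "quiz_type": '["essay"]',
    "answer_required": "true",
    "explanation_required": "true",
}).json()

for item in quiz["quizzes"]:
    # Each subjective answer is scored individually
    scores = requests.post(f"{BASE}/evaluate",
                           json={"question": item, "user_answer": "..."}).json()
    print(scores["final_score"])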


Logging & Monitoring

All requests are logged as structured JSON (including request_id, model metadata, token counts, and timings). Rate-limit hits include a retry_after value that counts down to midnight UTC. Logs can optionally be persisted to Google Sheets for analytics.
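
Emitting one JSON object per log line is what makes this work; a sketch of the pattern (field names mirror the /generate response, but the actual log schema is the app's own):

import json
import logging
import time

logger = logging.getLogger("sakaai")

def log_generation(request_id: str, model: str, total_tokens: int, elapsed: float) -> None:
    # One JSON object per line: easy to grep locally, easy to ingest elsewhere
    logger.info(json.dumps({
        "request_id": request_id,
        "model_used": model,
        "total_tokens": total_tokens,
        "inference_time": elapsed,
        "timestamp": time.time(),
    }))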


Testing

  • Unit: pytest
  • Integration: test /generate, /evaluate, /feedback via Swagger or HTTP client.
  • Edge Cases: large files, invalid schemas, rate limits, model failures.
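
A minimal unit test against the health endpoint, for example (a sketch that assumes the app object is importable from app.main, matching the uvicorn command in Getting Started):

from fastapi.testclient import TestClient

from app.main import app  # assumption: matches the uvicorn app.main:app target

client = TestClient(app)

def test_health_reports_healthy():
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json()["status"] == "healthy"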

Deployment

  • Dockerize the app, or deploy directly with uvicorn behind a reverse proxy.
  • Set env vars in your hosting platform (Render, Heroku, AWS, etc.).
  • Monitor logs for structured entries.

Acknowledgements

Powered by Groq, LangChain, FastAPI, Unstructured, RapidFuzz, and the Google Sheets API.
