2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
*__pycache__/
*.env
82 changes: 58 additions & 24 deletions EXPLANATION.md
@@ -1,35 +1,69 @@
# LearnLite Quiz Workflow
This repository contains an agent-based system that creates a dynamic learning and quizzing experience. A pipeline of interconnected agents plans a curriculum, generates course content, delivers it as an instructor, creates and conducts a quiz, and finally evaluates the user's responses.

## 1. Features
1. **Curriculum Planning**: Automatically generates a learning curriculum based on a given topic.
2. **Course Content Generation**: Creates comprehensive course material for the planned curriculum.
3. **Interactive Instruction**: An instructor agent delivers the course content in an engaging, lesson-like format.
4. **Quiz Generation**: Develops multiple-choice quizzes with questions, options, and correct answers.
5. **Quiz Conduction**: Guides the user through the quiz, collects answers, and stores them.
6. **Automated Evaluation**: Compares user answers against correct ones, provides correctness feedback, individual question reviews, and a total score summary.
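To make the hand-off between steps 4–6 concrete, the generated quiz is stored in `state['quiz']` with the shape sketched below, mirroring the `QuizMakerOutput` schema in `learnLite/agent.py`. The sample question text and the `validate_quiz` helper are illustrative only, not part of the repository:

```python
# Shape of the quiz dict produced by the QuizMaker agent: a 'quiz' key
# holding a list of question dicts with 'question', 'options', 'answer'.
sample_quiz = {
    "quiz": [
        {
            "question": "What does ADK stand for?",
            "options": ["Agent Development Kit", "Advanced Data Kernel"],
            "answer": "Agent Development Kit",
        }
    ]
}

def validate_quiz(state_quiz):
    """Hypothetical structural check: every item has the three keys
    and its answer appears among its options."""
    items = state_quiz.get("quiz", [])
    return all(
        {"question", "options", "answer"} <= set(item)
        and item["answer"] in item["options"]
        for item in items
    )
```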

## 2. Key Agents

The system is composed of the following sequential agents:

- **CurriculumPlanner**: Plans the learning curriculum for a specified topic.
- **CourseContentCreator**: Generates detailed course content based on the curriculum.
- **Instructor**: Presents the generated course content as an interactive lesson.
- **QuizMaker**: Creates a quiz with questions, options, and correct answers from the course content.
- **QuizConductor**: Manages the quiz flow, prompts the user for answers, and collects their responses.
- **QuizEvaluator**: Assesses the user's performance on the quiz, providing scores and detailed feedback.
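Conceptually, each agent writes its result into shared session state under its `output_key`, and the next agent in the sequence reads it from there. A framework-free sketch of that hand-off (the stand-in functions below are illustrative, not the real LLM-backed agents):

```python
# Each "agent" reads from and writes to a shared state dict, keyed by
# its output_key -- the same hand-off SequentialAgent performs in agent.py.
def curriculum_planner(state):
    # writes under output_key "curriculum"
    state["curriculum"] = f"3-topic plan for {state['topic']}"

def course_content_creator(state):
    # reads the previous agent's output, writes "course_content"
    state["course_content"] = f"Lesson built from: {state['curriculum']}"

pipeline = [curriculum_planner, course_content_creator]

state = {"topic": "google agent development kit"}
for agent in pipeline:
    agent(state)
```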

## 3. Setup and Installation
### Prerequisites
- Python 3.9+
- google-adk and google-generativeai (see `requirements.txt`)
- python-dotenv (for loading API keys from a `.env` file)
- pydantic
- json_repair


### Installation
1. Clone the repository:
```
git clone <your-repository-url>
cd <your-repository-name>
```

2. Install dependencies:
```
pip install -r requirements.txt
```

3. Create a `.env` file in the project root with the following environment variables:
```
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=<your-api-key>
GOOGLE_GENAI_MODEL="gemini-2.0-flash"
```
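`learnLite/agent.py` reads `GOOGLE_GENAI_MODEL` via python-dotenv and falls back to `gemini-2.0-flash` when the variable is unset. The lookup logic is roughly as follows (`resolve_model` is a hypothetical helper shown for illustration):

```python
import os

def resolve_model(env=None):
    """Return the configured model name, defaulting the same way
    the try/except around load_dotenv() in learnLite/agent.py does."""
    env = os.environ if env is None else env
    return env.get("GOOGLE_GENAI_MODEL", "gemini-2.0-flash")
```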

### How to run
1. To run the LearnLite Quiz Workflow, run the following command from `<your-directory>/fourcast_agents`:
```
adk web
```
2. Select `learnLite` from the dropdown on the left and enter a topic and level in the chat box, e.g. `google agent development kit, Beginner` (or any topic of your interest).

3. The CourseContentCreator and Instructor agents work together to conduct a class. You are then prompted to take a short quiz; enter your answers in the terminal running `adk web`.

4. Once all the questions are answered, the QuizConductor stores your responses and triggers the QuizEvaluator, which provides a score and a per-question review.
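The QuizEvaluator awards 1 point per correct answer, as its instruction specifies. A minimal sketch of that scoring rule (the case-insensitive string comparison here is an assumption of this sketch; the repository delegates the actual comparison to the LLM):

```python
def score_quiz(user_answers, actual_answers):
    """1 point per match; returns (total score, per-question booleans)."""
    results = [u.strip().lower() == a.strip().lower()
               for u, a in zip(user_answers, actual_answers)]
    return sum(results), results
```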

### Sample Outputs
#### Snippet of Instructor's Class Session
![sample instructor session](images/snippet_of_instructor.png)
#### Snippet of Quiz Conduction Loop
![sample quiz conduction loop (in terminal)](images/sample_quiz_conduction.png)
#### Snippet of Final Evaluation Score and Report
![sample Evaluation output](images/evaluation_output.png)
Binary file added images/evaluation_output.png
Binary file added images/sample_quiz_conduction.png
Binary file added images/snippet_of_instructor.png
1 change: 1 addition & 0 deletions learnLite/__init__.py
@@ -0,0 +1 @@
from . import agent
174 changes: 174 additions & 0 deletions learnLite/agent.py
@@ -0,0 +1,174 @@
import os
import sys
from json_repair import repair_json
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "../"))
try:
    from dotenv import load_dotenv
    load_dotenv()

    MODEL_NAME = os.environ.get("GOOGLE_GENAI_MODEL", "gemini-2.0-flash")
except ImportError:
    print("Warning: python-dotenv not installed. Ensure API key is set")
    MODEL_NAME = "gemini-2.0-flash"

from google.adk.agents import LlmAgent, BaseAgent, SequentialAgent
from google.adk.agents.invocation_context import InvocationContext
from typing import AsyncGenerator
import asyncio
import json
from pydantic import BaseModel
from typing import List
from google.adk.events import Event, EventActions

from learnLite.instructions import (
CURRICULUM_PLANNER_INSTRUCTION,
QUIZ_MAKER_INSTRUCTION,
COURSE_CONTENT_GENERATOR_INSTRUCTION
)

class QuizItem(BaseModel):
    question: str
    options: List[str]
    answer: str

class QuizMakerOutput(BaseModel):
    quiz: List[QuizItem]

class ItemReview(BaseModel):
    question: str
    user_answer: str
    correct_answer: str
    correctness: str  # e.g., "correct", "incorrect"
    review: str  # e.g., "Good job!", "Try to review the topic again."

class EvaluationOutput(BaseModel):
    score: int
    review: List[ItemReview]

# 1. Agent to plan curriculum for a given topic
curriculum_planner_agent = LlmAgent(
    name="CurriculumPlanner",
    model=MODEL_NAME,
    instruction=CURRICULUM_PLANNER_INSTRUCTION,
    output_key="curriculum"
)

# 2. Agent to create course content
course_content_creator = LlmAgent(
    name="CourseContentCreator",
    model=MODEL_NAME,
    instruction=COURSE_CONTENT_GENERATOR_INSTRUCTION,
    output_key="course_content"
)

# 2.2. Agent to act as instructor
instructor_agent = LlmAgent(
    name="Instructor",
    model=MODEL_NAME,
    instruction="""You are an instructor. Deliver the course content provided in state['course_content']. Present it in an engaging manner, as if you are teaching a class. Use clear subheadings and examples to make the content easy to understand. The output should be structured as a lesson.""",
    output_key="instructor_output"
)

# class InstructorAgent(BaseAgent):
#     async def _run_async_impl(self, ctx: InvocationContext) -> AsyncGenerator:
#         course_content = ctx.session.state.get("course_content", "")
#         print("=======================COURSE CONTENT=====================")
#         print(course_content)
#         print("-----------------------------------------------------------")
#         # Simulate the instructor delivering the content
#         print("Instructor: Let's start the lesson!")
#         print(course_content)
#         yield  # End of instructor session

# instructor_agent = InstructorAgent(name="Instructor")

# 3. Agent to generate questions and answers
quiz_maker = LlmAgent(
    name="QuizMaker",
    model=MODEL_NAME,
    instruction=QUIZ_MAKER_INSTRUCTION,
    output_key="quiz",
    output_schema=QuizMakerOutput
)

# 4. Custom agent to conduct the quiz
class QuizConductorAgent(BaseAgent):
    async def _run_async_impl(self, ctx: InvocationContext) -> AsyncGenerator:
        quiz = ctx.session.state.get("quiz", {})
        print("=======================QUIZ=====================")
        print(quiz)
        print("----------------------------------")
        # # Correct the quiz_str to json format
        # quiz_repair = repair_json(quiz_str)
        # quiz = json.loads(quiz_repair)
        # Simulate the quiz conduction
        print("Quiz Conductor: Let's start the quiz!")
        print("Please answer the following questions:")
        user_answers = []
        actual_answers = []
        questions = []

        for idx, qa in enumerate(quiz['quiz']):
            print(f"Question {idx+1}: {qa['question']}")
            print(f"Options {idx+1}: {qa['options']}")
            questions.append(qa['question'])
            actual_answers.append(qa['answer'])
            user_answer = input("Your answer: ")
            user_answers.append(user_answer)

        ctx.session.state["user_answers"] = user_answers
        ctx.session.state["actual_answers"] = actual_answers
        ctx.session.state["questions"] = questions
        print("User's Answers-->", ctx.session.state["user_answers"])
        quiz_is_done = True
        yield Event(
            author=self.name,
            # content={"text": "Quiz Conductor Finished Job"},
            actions=EventActions(escalate=quiz_is_done)
        )  # End of quiz

quiz_conductor_agent = QuizConductorAgent(name="QuizConductor")

# 5. Agent to evaluate responses
evaluation_agent = LlmAgent(
    name="QuizEvaluator",
    model=MODEL_NAME,
    instruction=(
        """Compare each user answer in state['user_answers'] to the correct answer in state['actual_answers'] for each question within state['questions'].
For each, show the question, the user's response, the correct response, correctness and a brief review.
Give a score of 1 for each correct answer and 0 for each incorrect answer and add them up to generate the final score.
Summarize the total score in state['score'] and provide a review in state['review'].
"""
    ),
    output_key="evaluation_output",
    output_schema=EvaluationOutput  # Schema for the structured evaluation output
)


# 4. Compose the workflow
learnlite_workflow = SequentialAgent(
    name="LearnLiteQuizWorkflow",
    sub_agents=[
        curriculum_planner_agent,  # 1. Plan curriculum
        course_content_creator,    # 2. Create course content
        instructor_agent,          # 2.2. Act as instructor
        quiz_maker,                # 3. Generate questions
        quiz_conductor_agent,      # 4. Conduct quiz
        evaluation_agent           # 5. Evaluate
    ]
)

root_agent = learnlite_workflow

# async def main():
#     topic = input("What topic do you want to be quizzed on? ")
#     state = {"topic": topic}
#     await learnlite_workflow.run(state)
#     print(f"Curriculum: {state.get('curriculum')}")
#     print(f"Score: {state.get('score')}")
#     print(f"Review: {state.get('review')}")

# if __name__ == "__main__":
#     asyncio.run(main())
22 changes: 22 additions & 0 deletions learnLite/instructions.py
@@ -0,0 +1,22 @@
# Instruction for Curriculum Planner Agent
CURRICULUM_PLANNER_INSTRUCTION = """
You are the Curriculum Planner Agent. Your task is to plan a curriculum for a given topic with not more than 3 topics per curriculum.
Input:
- Topic: The topic of the curriculum
- Level: The level of the curriculum (Beginner, Intermediate, Advanced)

Ensure that the curriculum generated can be finished within 10 minutes. Your output will then be used by a Content Creator to create course content and by a Quiz Maker Agent to create a quiz with up to 5 multiple-choice questions for the curriculum.
"""

# Instruction for Course Content Generator Agent
COURSE_CONTENT_GENERATOR_INSTRUCTION = """
Create comprehensive course content as per state['curriculum'] which can be finished by the user in 10 minutes. Include key concepts, learning objectives, and suggested resources and activities. The output should be structured as a lesson conducted by an Instructor, with clear subheadings and interesting examples, and stored in state['course_content'].
"""


# Instruction for Quiz Maker Agent
QUIZ_MAKER_INSTRUCTION = """
You are the Quiz Maker Agent. Your task is to create a quiz with up to 5 multiple-choice questions along with the correct answers, using the instructor's course content within state['course_content']. Store the response in state['quiz'] as a dict with key 'quiz' whose value is a list of dictionaries with keys 'question', 'options' and 'answer'.
Input:
- Curriculum: The curriculum to create a quiz for
"""
2 changes: 2 additions & 0 deletions requirements.txt
@@ -0,0 +1,2 @@
google-adk
google-generativeai