Description
Track
Reasoning Agents (Azure AI Foundry)
Project Name
CertFlow: A Microsoft Certification Study Planner
GitHub Username
Repository URL
https://github.com/HillPhelmuth/AgentsLeagueReasoningAgents
Project Description
This project is a multi-agent Microsoft Certification Study Assistant built with the Microsoft Agent Framework (.NET). It guides learners from initial interest in a topic to exam readiness by combining reasoning agents, structured workflows, and programmatic access to Microsoft Learn resources.
The system accepts a student's goals, available study hours, and target duration, then orchestrates a preparation workflow powered by specialized agents. A learning-path curator discovers relevant certifications, exams, and modules through the Microsoft Learn Catalog API and MCP tools. A study-plan generator produces a milestone-based weekly schedule with daily learning sessions and direct resource links. An engagement agent creates personalized reminder messages aligned to each study session and schedules them through Azure services for delivery.
Once preparation is complete, a readiness assessment agent generates a structured multiple-choice evaluation tailored to the learner's curated path. Based on results, the workflow either recommends next certification steps or loops back into additional preparation.
The project explores both single-agent and multi-agent execution strategies, comparing reliability, reasoning quality, and orchestration complexity through dataset-driven evaluation. Optional enrichment toolsets integrate community study resources such as flashcards, practice questions, and technical discussions to enhance learning context.
Designed as a practical demonstration of reasoning agents in real applications, the solution highlights agent orchestration, tool integration, evaluation pipelines, and human-in-the-loop learning workflows within a production-style Azure architecture.
Demo Video or Screenshots
Live Demo: https://agent-league-mslearn-helper.azurewebsites.net
Screenshots:
Primary Programming Language
C#/.NET
Key Technologies Used
Frameworks & SDKs
- Microsoft Agent Framework (.NET)
- ASP.NET Core / Blazor Server
- Azure Functions (Isolated Worker)
- .NET (C#)
Microsoft Services & Azure Components
- Azure App Service
- Azure Communication Services Email
- Azure Service Bus (Queue-based scheduling)
- Azure Cosmos DB
- Microsoft Learn Catalog REST API
- Microsoft Learn MCP Server
- Azure OpenAI models via Foundry Project
Submission Type
Individual
Team Members
No response
Submission Requirements
- My project meets the track-specific challenge requirements
- My repository includes a comprehensive README.md with setup instructions
- My code does not contain hardcoded API keys or secrets
- I have included demo materials (video or screenshots)
- My project is my own work with proper attribution for any third-party code
- I agree to the Code of Conduct
- I have read and agree to the Disclaimer
- My submission does NOT contain any confidential, proprietary, or sensitive information
- I confirm I have the rights to submit this content and grant the necessary licenses
Quick Setup Summary
Running Locally - Basic Setup
- Clone the repo
- Set the following user secrets in AgentsLeagueReasoningAgents.Demo:

```json
{
  "AzureOpenAI:ApiKey": "<api-key>",
  "AzureOpenAI:DeploymentName": "gpt-4.1 (or similar)",
  "AzureOpenAI:Endpoint": "<azure OpenAI or AI Foundry Endpoint>",
  "ConnectionStrings:ReminderDb": "<cosmos-db-connection-string>"
}
```

- Run it!

```shell
dotnet restore
dotnet run --project .\AgentsLeagueReasoningAgents.Demo\AgentsLeagueReasoningAgents.Demo.csproj
```

Open the local URL, enter topics/email/hours/weeks on `/`, run the preparation workflow, then navigate to `/assessment` when ready.
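If you prefer the CLI to editing `secrets.json` by hand, the same values can be set with the .NET Secret Manager tool. This is a sketch assuming the project layout above; the key names mirror the JSON block, and all values are placeholders:

```shell
# Run from the repo root; user-secrets commands target the demo project
cd AgentsLeagueReasoningAgents.Demo

# One-time: adds a UserSecretsId to the .csproj if not already present
dotnet user-secrets init

# Set each secret (replace placeholder values with your own)
dotnet user-secrets set "AzureOpenAI:ApiKey" "<api-key>"
dotnet user-secrets set "AzureOpenAI:DeploymentName" "gpt-4.1"
dotnet user-secrets set "AzureOpenAI:Endpoint" "<azure OpenAI or AI Foundry Endpoint>"
dotnet user-secrets set "ConnectionStrings:ReminderDb" "<cosmos-db-connection-string>"

# Verify what is stored
dotnet user-secrets list
```

Secrets stored this way live outside the repo (under the user profile), which keeps API keys out of source control.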
Technical Highlights
Agent Architecture - One of the most significant technical decisions was designing the preparation workflow to run in both single-agent and multi-agent modes using the same toolset and data contracts. This allowed direct comparison of reliability, reasoning behavior, and orchestration complexity through a dataset-driven evaluation pipeline. The results demonstrated how reducing inter-agent serialization boundaries improved end-to-end success rates while preserving output quality, providing practical insight into when multi-agent architectures add value versus unnecessary failure surface.
Using the Microsoft Learn Platform API - Another key highlight is exposing AITool functions built around the Microsoft Learn Catalog API rather than relying only on the MCP server (which is great, but not designed for this sort of use case).
Production-style workflow design - Structured JSON contracts enforce clean handoffs between agents, while Azure Service Bus, Cosmos DB, and Azure Communication Services enable asynchronous scheduling and delivery of engagement reminders. Human-in-the-loop checkpoints allow learners to control progression into assessment, reflecting real-world learning workflows.
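As an illustration only (the field names are hypothetical, not taken from the repository), a structured contract handed from the study-plan generator to the engagement agent might look like:

```json
{
  "planId": "az-204-week-3",
  "studentEmail": "learner@example.com",
  "sessions": [
    {
      "startTimeUtc": "2025-06-02T18:00:00Z",
      "durationMinutes": 60,
      "module": "Implement Azure Functions",
      "resourceUrl": "https://learn.microsoft.com/training/modules/..."
    }
  ]
}
```

Pinning timestamps to ISO-8601 UTC in the contract is one way to avoid the kind of datetime serialization edge cases that dataset-driven evaluation tends to surface.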
Evals - The implementation includes an evaluation runner with LLM-as-judge scoring across multiple quality metrics. Treating agent workflows as testable systems rather than static prompts helped surface real runtime issues, such as datetime serialization edge cases and transient failures, shaping architectural decisions and improving overall robustness.
Challenges & Learnings
Handling complex structured outputs and tool function definitions - Testing exposed several edge cases (e.g., DateTime formatting adherence in structured output) and required balancing tool-parameter complexity against tool count per agent when adapting the MS Learn Platform API into an LLM-friendly toolset. These issues reinforced the importance of thoughtful full-context engineering rather than relying solely on prompt engineering.
Contact Information
Country/Region
United States