This project involves creating a multi-agent workflow that automates the processing of multi-page resumes using Large Language Models (LLMs). The system is designed to read resumes, extract key entities, validate them, and incorporate human feedback at each stage to ensure accuracy and completeness. The final output is a JSON file containing all validated entities.
- Resume Reading: Process multi-page resumes in various formats (e.g., PDF, DOCX).
- Entity Extraction: Extract key entities such as personal information, education, work experience, and skills.
- Entity Validation: Validate extracted entities and initiate correction if issues are detected.
- Human Feedback Loop: Allow human intervention at each stage for feedback and updates.
- JSON Output: Compile validated entities into a predefined JSON format.
- Monitoring: Uses LangGraph and LangSmith to monitor and visualize LLM calls within the agents.
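The pipeline above can be sketched, in simplified form, as a chain of agent functions passing a shared state dictionary. In the real project these agents are wired together as LangGraph nodes and make LLM calls; the function names, stub logic, and exact entity fields below are illustrative assumptions based on the feature list.

```python
import json

def reader_agent(state):
    # Read the resume text (stubbed here; the project loads PDF/DOCX via LangChain loaders).
    state["resume_text"] = state.get("resume_text", "")
    return state

def extractor_agent(state):
    # Extract key entities (stubbed here; the project uses an LLM call via Groq).
    state["entities"] = {
        "personal_info": {"name": "Jane Doe"},
        "education": [],
        "work_experience": [],
        "skills": [],
    }
    return state

def validator_agent(state):
    # Validate that all required entity sections are present.
    required = {"personal_info", "education", "work_experience", "skills"}
    state["valid"] = required.issubset(state["entities"])
    return state

def run_pipeline(resume_text):
    # Run the agents in sequence and compile the validated entities into JSON.
    state = {"resume_text": resume_text}
    for agent in (reader_agent, extractor_agent, validator_agent):
        state = agent(state)
    return json.dumps(state["entities"], indent=2)

print(run_pipeline("...resume text..."))
```

In the actual workflow, the human feedback step sits between each agent and the next, and the final JSON follows the project's predefined schema.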
- Python
- Groq - An AI inference platform optimized for large language models, offering high performance and efficiency. The "mixtral-8x7b-32768" LLM is used through Groq.
- LangChain - Used for document loading and LLM integration
- LangGraph - Used for creating multi-agent workflows
- LangSmith - Used for monitoring and visualizing LLM calls, tokens, and other LLM parameters
- Git - Version control
- pylint - Ensuring code quality
- Streamlit - User interface creation (IN PROGRESS)
- Clone the repository:

  git clone https://github.com/mayurd8862/Multi-Agent-Workflow-for-Resume-Processing.git
  cd Multi-Agent-Workflow-for-Resume-Processing
- Create a virtual environment and activate it:

  python -m venv myenv
  myenv\Scripts\activate
- Install the required packages:

  pip install -r requirements.txt
- Set up environment variables:

  Create a .env file in the root directory of the project and add the following API keys to it:

  LANGCHAIN_API_KEY = your_langchain_api_key
  GROQ_API_KEY = your_groq_api_key
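For reference, a minimal sketch of how those keys could be read from the .env file at startup. The project may instead rely on a library such as python-dotenv; this hand-rolled helper is purely illustrative.

```python
import os

def load_dotenv_minimal(path=".env"):
    # Minimal .env loader: reads KEY = value lines into os.environ,
    # skipping blank lines and comments, without overwriting existing values.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_dotenv_minimal()`, the keys are available via `os.environ["GROQ_API_KEY"]` and `os.environ["LANGCHAIN_API_KEY"]`.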
- Add the resume file location and run main.py:

  python main.py
- Human feedback functionality has been added to the extractor agent and the validation agent.
- When either of these agents produces its output, an input prompt is activated where we can provide feedback and suggestions to the system.
- If feedback is provided, the respective agent runs once more with the human feedback incorporated and returns an updated output.
- If there is no feedback to add, simply press ENTER.
- After pressing ENTER, the output is passed to the next agent.
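The feedback loop described above can be sketched as a small wrapper around an agent call. The agent/state shapes and the `human_feedback` key are assumptions for illustration; the real project implements this inside its LangGraph workflow.

```python
def run_with_feedback(agent, state, ask=input):
    # Run an agent, show its output, and optionally re-run it once with human feedback.
    output = agent(state)
    feedback = ask("Add feedback (press ENTER to accept): ")
    if feedback.strip():
        # Re-run the same agent a single time with the feedback attached to the state.
        state = dict(state, human_feedback=feedback)
        output = agent(state)
    return output
```

Passing `ask` as a parameter (defaulting to the built-in `input`) keeps the loop testable without a live prompt.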
LangSmith is a powerful platform designed to provide comprehensive visibility and control over your Large Language Model (LLM) calls within your agents.
- Login to LangSmith: Visit the LangSmith website and enter your credentials to log in to your account.
- Go to Projects: Once logged in, navigate to the "Projects" section of the LangSmith dashboard. Locate the project(s) that you have integrated with LangSmith. You can identify them by their names or descriptions.
- Monitor LLM Call Flows: Click on a specific project to view its details, then go to the "Runs" section to analyze LLM calls and their outputs.
- Visualize: In the "Monitoring" section, you should see a visualization of the LLM call flows within that project. The visualization displays the sequence of LLM calls, their relationships, and any dependencies between them.
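For runs to show up in the LangSmith dashboard at all, tracing must be enabled before any LLM calls execute. One common way is via LangSmith's documented environment variables; the project name below is illustrative.

```python
import os

# Enable LangSmith tracing before any LLM calls are made.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# Project name under which runs appear in the dashboard (illustrative value).
os.environ["LANGCHAIN_PROJECT"] = "resume-processing"
# LANGCHAIN_API_KEY itself is read from the .env file created during setup.
```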
For any questions or suggestions, feel free to reach out:
- Email: mayur.dabade21@vit.edu
- GitHub: mayurd8862
- LinkedIn: https://www.linkedin.com/in/mayur-dabade-b527a9230