A smart, personalized cover letter generator that leverages the power of Ollama's local LLM models to create tailored cover letters based on your resume and job descriptions.
- Smart Resume Processing: Automatically extracts and cleans text from PDF, DOCX, and TXT files
- Multiple Input Methods: Upload files or paste text directly
- Local LLM Integration: Uses Ollama API for privacy-focused, local processing
- Real-time Personalization: Interactive chat interface for further customization
- Professional Formatting: Generates email-formatted cover letters with proper structure
- Typing Animation: Smooth display of generated content
- Session Management: Caches processed files to avoid reprocessing
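The session caching mentioned above can be sketched as a lookup keyed by a hash of the file's contents; the real app keeps results in Streamlit's session state, but the idea is the same (the function and cache names here are illustrative, not from the codebase):

```python
import hashlib

# Illustrative in-memory cache; the app itself uses st.session_state.
_cache = {}

def get_or_process(file_bytes, process):
    """Return cached text for this file, running `process` only once per content."""
    key = hashlib.sha256(file_bytes).hexdigest()
    if key not in _cache:
        _cache[key] = process(file_bytes)
    return _cache[key]
```

Re-uploading the same file then skips extraction entirely, which is what keeps repeated generations fast.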
- Frontend: Streamlit
- LLM: Ollama API with Gemma:2b model
- PDF Processing: PyMuPDF (fitz), custom extraction module
- Document Processing: python-docx
- Backend: Python 3.7+
1. Ollama Installation: Install Ollama on your system

   ```shell
   # Visit https://ollama.ai to download and install
   ```

2. Model Setup: Pull the Gemma:2b model

   ```shell
   ollama pull gemma:2b
   ```

3. Python Dependencies: Install required packages

   ```shell
   pip install -r requirements.txt
   ```
1. Clone the repository

   ```shell
   git clone <your-repository-url>
   cd cover-letter-generator
   ```

2. Install dependencies

   ```shell
   pip install streamlit requests PyMuPDF python-docx
   ```

3. Start the Ollama service

   ```shell
   ollama serve
   ```

4. Run the application

   ```shell
   streamlit run letter_creator.py
   ```
```
cover-letter-generator/
├── letter_creator.py    # Main Streamlit application
├── extract_resume.py    # PDF text extraction utilities
├── resume_cleaner.py    # Resume text cleaning and processing
├── requirements.txt     # Python dependencies
└── README.md            # Project documentation
```
- Upload your resume file (PDF, DOCX, or TXT)
- Or paste your resume text directly in the text area
- Upload the job description file (PDF, DOCX, or TXT)
- Or paste the job description text directly
- Click "π Generate Cover Letter"
- Wait for the AI to process and generate your personalized cover letter
- Use the chat interface to request modifications
- Examples: "Make it more formal", "Add emphasis on teamwork", "Make it shorter"
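A follow-up request like the examples above can be turned into a revision prompt by pairing the current draft with the user's instruction. This is a minimal sketch; the helper name and wording are illustrative, not the app's actual prompt:

```python
def build_revision_prompt(cover_letter, instruction):
    """Combine the current draft with a user instruction into a follow-up prompt."""
    return (
        "Here is a draft cover letter:\n\n"
        f"{cover_letter}\n\n"
        f"Revise it according to this instruction: {instruction}\n"
        "Return only the revised letter."
    )
```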
You can modify the Ollama configuration in letter_creator.py:

```python
# Default Ollama API endpoint
Ollama_API_URL = "http://localhost:11434/api/generate"

# Model configuration
MODEL_NAME = "gemma:2b"  # Change to your preferred model

# Generation parameters
"temperature": 0.7,  # Creativity level (0.0-1.0)
"top_p": 0.9,        # Diversity control (0.0-1.0)
```

Supported file formats:

- PDF: Processed using PyMuPDF with fallback extraction
- DOCX: Microsoft Word documents
- TXT: Plain text files
Creates a comprehensive prompt for the LLM with specific instructions for:
- Professional tone and format
- Technology matching between resume and job description
- Enthusiasm for learning new technologies
- Concise, email-format output
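A prompt combining those instructions with both inputs might look like the sketch below. The exact wording lives in letter_creator.py; this version is only an assumed approximation:

```python
def build_prompt(resume_text, job_text):
    """Assemble a generation prompt from the resume and job description."""
    return (
        "You are a professional cover letter writer.\n"
        "Write a concise, email-format cover letter in a professional tone.\n"
        "Match technologies from the resume to the job description and "
        "express enthusiasm for learning any that are missing.\n\n"
        f"Resume:\n{resume_text}\n\n"
        f"Job description:\n{job_text}\n"
    )
```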
Handles multiple file formats with intelligent processing:
- PDF extraction with cleanup
- DOCX paragraph extraction
- TXT direct reading
- Resume cleaning for type 0 (resume files)
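The dispatch-and-clean flow above can be sketched as follows. This is a dependency-free approximation: the PDF and DOCX branches are stubbed out because they would call PyMuPDF and python-docx, and the function names are illustrative rather than taken from the codebase:

```python
import re

def clean_resume(text):
    """Minimal cleaning sketch: collapse runs of whitespace, drop blank lines."""
    lines = [re.sub(r"[ \t]+", " ", ln).strip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if ln)

def process_upload(name, data, file_type=0):
    """Dispatch on extension; clean the text only for resumes (type 0)."""
    if name.lower().endswith(".txt"):
        text = data.decode("utf-8", errors="replace")
    else:
        # Real PDF/DOCX handling uses fitz / python-docx (omitted here).
        raise ValueError("PDF/DOCX handling requires fitz / python-docx")
    return clean_resume(text) if file_type == 0 else text
```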
Communicates with the Ollama API:
- Sends structured requests
- Handles response parsing
- Handles errors and reports them to the user
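The request/response shape of Ollama's `/api/generate` endpoint looks like the sketch below. The app uses the `requests` library; this version uses the standard library's `urllib` to stay self-contained, and the helper names are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default endpoint

def build_payload(prompt, model="gemma:2b"):
    """Request body for Ollama's /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def parse_reply(body):
    """Extract the generated text from Ollama's JSON reply."""
    return json.loads(body)["response"]

def generate(prompt):
    """Blocking call to the local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_reply(resp.read())
```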
Creates an engaging user experience:
- Typing animation effect
- Professional styling
- Session state management
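The typing animation can be implemented as a generator that yields progressively longer prefixes, with Streamlit re-rendering a placeholder on each yield. A minimal word-by-word sketch (the actual app may chunk differently):

```python
import time

def type_out(text, delay=0.02):
    """Yield progressively longer prefixes of `text`, word by word."""
    words = text.split()
    for i in range(1, len(words) + 1):
        yield " ".join(words[:i])
        time.sleep(delay)
```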
- Sidebar Navigation: Clean file upload and text input areas
- Real-time Feedback: Success/error messages and loading spinners
- Responsive Design: Professional styling with centered layout
- Interactive Chat: Post-generation customization interface
- Local Processing: All data processing happens locally using Ollama
- No Data Storage: Files are processed in memory, not saved permanently
- Session-based: Data exists only during your session
1. Ollama Connection Error
   - Ensure Ollama is running:

     ```shell
     ollama serve
     ```

   - Check if the model is installed:

     ```shell
     ollama list
     ```

2. File Processing Issues
   - Ensure files are not corrupted
   - Check file size limitations
   - Verify file format compatibility

3. Performance Issues
   - Consider using a more powerful model
   - Adjust the temperature and top_p parameters
   - Ensure sufficient system resources
- Model Selection: Gemma:2b is efficient; consider larger models for better quality
- File Size: Smaller files process faster
- Prompt Engineering: The built-in prompt is optimized for best results
1. Fork the repository
2. Create a feature branch:

   ```shell
   git checkout -b feature-name
   ```

3. Commit changes:

   ```shell
   git commit -m 'Add feature'
   ```

4. Push to the branch:

   ```shell
   git push origin feature-name
   ```

5. Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for providing local LLM capabilities
- Streamlit for the amazing web framework
- PyMuPDF for PDF processing
If you encounter any issues or have questions:
- Check the troubleshooting section
- Review Ollama documentation
- Open an issue in the repository
Made with ❤️ by No0Bitah