Applying to jobs on LinkedIn is tedious, time-consuming, and repetitive. Professionals spend hours:
- Manually searching for relevant job postings
- Reading job descriptions to find matches
- Customizing resumes for each application
- Filling out LinkedIn forms repeatedly
- Tracking which jobs they've applied to
DevHire automates this entire workflow, enabling job seekers to apply to dozens of relevant positions in minutes instead of hours, with AI-powered resume customization tailored to each job.
DevHire is a full-stack AI automation platform that:
- Extracts Your Profile → uses a Chrome extension to capture LinkedIn credentials, cookies, and browser fingerprint
- Parses Your Resume → analyzes your resume with Google Gemini AI to extract skills, experience, and qualifications
- Searches for Jobs → scrapes LinkedIn with Playwright (headless browser), using job titles and keywords parsed from your resume
- Tailors Resumes → uses Google Gemini to generate a job-specific resume for each application, rendered from LaTeX to PDF
- Auto-Applies → programmatically fills out LinkedIn's Easy Apply forms and submits applications with the tailored resumes
- Tracks Progress → provides real-time progress tracking and application history in the web dashboard
Result: submit 50+ tailored applications in the time it used to take to submit 5.
```
┌──────────────────────────────────────────────────────┐
│ Frontend (Next.js 15 + React 19)                     │
│ Dashboard, Login, Job Selection, Apply Flow          │
│ Supabase Auth + Prisma ORM                           │
└──────────────────────────────────────────────────────┘
                      ↓ (API calls)
┌──────────────────────────────────────────────────────┐
│ Backend (Python FastAPI + Uvicorn)                   │
│ • /get-jobs      → Parse resume & scrape LinkedIn    │
│ • /apply-jobs    → Apply with tailored resumes       │
│ • /tailor        → AI resume customization (Gemini)  │
│ • /store-cookie  → Receive auth from extension       │
└──────────────────────────────────────────────────────┘
                      ↓ (Browser control)
┌──────────────────────────────────────────────────────┐
│ Chrome Extension (Manifest V3)                       │
│ • Captures LinkedIn cookies & localStorage           │
│ • Sends credentials to backend                       │
│ • Injects content scripts for data collection       │
└──────────────────────────────────────────────────────┘
                      ↓
┌──────────────────────────────────────────────────────┐
│ Agents (Async Python Tasks)                          │
│ • Scraper Agent  → LinkedIn job search (Playwright)  │
│ • Parser Agent   → Resume analysis (Gemini AI)       │
│ • Tailor Agent   → Resume generation (Gemini)        │
│ • Apply Agent    → Form filling & submission         │
└──────────────────────────────────────────────────────┘
                      ↓
┌──────────────────────────────────────────────────────┐
│ Databases                                            │
│ • PostgreSQL (Prisma ORM) → Users, applied jobs      │
│ • Supabase → Authentication & auth helpers           │
└──────────────────────────────────────────────────────┘
```
| Component | Technology | Purpose |
|---|---|---|
| Frontend | Next.js 15, React 19, TypeScript | User dashboard, job browsing, application tracking |
| Backend | Python FastAPI, Uvicorn | Job scraping, resume tailoring, form automation |
| AI Engine | Google Gemini 2.5 Flash | Resume parsing, job-resume matching, tailored resume generation |
| Browser Automation | Playwright (async) | LinkedIn login, job scraping, form filling |
| Auth | Supabase + JWT | User registration, login, session management |
| Database | PostgreSQL + Prisma | User profiles, applied jobs, resume URLs |
| Extension | Chrome Manifest V3 | Cookie/fingerprint capture, credential sync |
```
DevHire/
├── my-fe/                          # Next.js Frontend
│   ├── app/
│   │   ├── Components/
│   │   │   ├── Home.tsx            # Landing page
│   │   │   ├── Jobs.tsx            # Job search & listing
│   │   │   ├── Apply.tsx           # Application progress tracker
│   │   │   ├── Tailor_resume.tsx   # Resume tailoring UI
│   │   │   ├── Login.tsx           # Authentication
│   │   │   └── JobCards.tsx        # Job card display
│   │   ├── api/                    # Backend API routes (Next.js)
│   │   ├── Jobs/                   # Job pages
│   │   ├── apply/                  # Application flow pages
│   │   └── utiles/
│   │       ├── agentsCall.ts       # API calls to Python backend
│   │       ├── supabaseClient.ts   # Supabase initialization
│   │       └── getUserData.ts      # User profile management
│   ├── prisma/
│   │   └── schema.prisma           # Database schema (PostgreSQL)
│   └── package.json
│
├── backend_python/                 # Python FastAPI Backend
│   ├── main/
│   │   ├── main.py                 # FastAPI app setup
│   │   └── routes/
│   │       ├── list_jobs.py        # POST /get-jobs endpoint
│   │       ├── apply_jobs.py       # POST /apply-jobs endpoint
│   │       ├── get_resume.py       # POST /tailor endpoint
│   │       ├── cookie_receiver.py  # POST /store-cookie endpoint
│   │       └── progress_route.py   # Progress tracking
│   ├── agents/
│   │   ├── scraper_agent.py        # LinkedIn job scraper
│   │   ├── scraper_agent_optimized.py  # Optimized scraper with Gemini extraction
│   │   ├── apply_agent.py          # Form filling & submission
│   │   ├── tailor.py               # Resume AI customization
│   │   └── parse_agent.py          # Resume parsing
│   ├── database/
│   │   ├── db_engine.py            # SQLAlchemy connection
│   │   └── SchemaModel.py          # User model (SQLAlchemy)
│   ├── config.py                   # API keys & environment variables
│   ├── requirements.txt            # Python dependencies
│   └── run.sh                      # Startup script
│
├── extension/                      # Chrome Extension
│   ├── manifest.json               # Extension configuration
│   ├── background.js               # Service worker (cookie/storage sync)
│   └── content.js                  # Content script (page injection)
│
└── prisma/                         # Prisma schema (shared)
    └── schema.prisma               # Database models
```
**1. Authentication**

1. User logs in with LinkedIn on the DevHire web app
2. The Chrome extension captures LinkedIn cookies & localStorage
3. The extension sends the data to the backend (`/store-cookie`)
4. Backend stores encrypted credentials for the session
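The receiving endpoint isn't shown in this README; below is a hypothetical minimal sketch of `/store-cookie` in FastAPI. The payload field names are illustrative assumptions, not the repo's actual schema. Keeping the store in process memory matches the ephemeral-credentials policy described under Security:

```python
# Hypothetical sketch of the /store-cookie endpoint; field names are
# illustrative assumptions, not the repo's actual schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# In-memory, per-session store: credentials stay ephemeral, never persisted.
_session_cookies: dict[str, dict] = {}

class CookiePayload(BaseModel):
    user_id: str
    cookies: dict[str, str]            # e.g. {"li_at": "..."} captured by the extension
    local_storage: dict[str, str] = {}

@app.post("/store-cookie")
async def store_cookie(payload: CookiePayload):
    # Keep credentials only for the lifetime of the process/session.
    _session_cookies[payload.user_id] = {
        "cookies": payload.cookies,
        "local_storage": payload.local_storage,
    }
    return {"status": "ok"}
```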
**2. Resume Parsing**

1. User uploads a resume (PDF/DOCX)
2. Backend extracts text using PyMuPDF + OCR (pytesseract)
3. Gemini AI parses the resume → extracts job titles, skills, and experience
4. Output: `["Full Stack Developer", "Backend Engineer"]` + `["Python", "React", "PostgreSQL"]`
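For illustration, a hedged sketch of the Gemini parsing call; the actual prompt and response handling live in `parse_agent.py` and may differ:

```python
# Illustrative sketch of the parsing step; the prompt wording and JSON
# shape here are assumptions, not the repo's exact implementation.
import json
import google.generativeai as genai

genai.configure(api_key="your_gemini_api_key")
model = genai.GenerativeModel("gemini-2.5-flash")

def parse_resume(resume_text: str) -> dict:
    prompt = (
        "Extract job titles and technical skills from this resume. "
        'Reply with JSON only: {"titles": [...], "skills": [...]}\n\n'
        + resume_text
    )
    response = model.generate_content(prompt)
    text = response.text
    # Gemini may wrap the JSON in markdown fences; keep only the braces.
    raw = text[text.find("{") : text.rfind("}") + 1]
    return json.loads(raw)
```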
**3. Job Search**

1. Frontend calls `POST /get-jobs` with:
   - `user_id`, `password`, `resume_url`
2. Backend uses Playwright to:
   - Log in to LinkedIn with the credentials
   - Search for each parsed job title
   - Scrape job listings (title, company, description, link)
3. Jobs are returned to the frontend, where the user selects 10-50 of them
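A minimal sketch of the search loop with async Playwright; the selectors are illustrative guesses, since LinkedIn's markup changes often and the real `scraper_agent.py` also handles login, cookies, and pagination:

```python
# Minimal sketch of the scraping approach; selectors are illustrative.
import asyncio
from playwright.async_api import async_playwright

async def scrape_jobs(keyword: str, limit: int = 25) -> list[dict]:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(f"https://www.linkedin.com/jobs/search/?keywords={keyword}")
        cards = await page.query_selector_all("div.job-search-card")
        jobs = []
        for card in cards[:limit]:
            title = await card.query_selector("h3")
            link = await card.query_selector("a")
            jobs.append({
                "title": (await title.inner_text()).strip() if title else "",
                "link": await link.get_attribute("href") if link else "",
            })
        await browser.close()
        return jobs

if __name__ == "__main__":
    print(asyncio.run(scrape_jobs("Backend Engineer")))
```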
**4. Resume Tailoring**

1. For each selected job, the frontend calls `POST /tailor` with:
   - `original_resume_url`, `job_description`
2. Backend:
   - Downloads the original resume PDF
   - Sends it to Gemini: "Tailor this resume for this JD"
   - Gemini generates LaTeX highlighting the relevant skills
   - The LaTeX is rendered to PDF
   - The PDF is converted to Base64
3. Base64 PDFs are sent to the frontend and stored in sessionStorage
**5. Auto-Apply**

1. Frontend calls `POST /apply-jobs` with:
   - `jobs: [{job_url, job_description}]`
   - `user_id`, `password`, `resume_url`
2. Backend applies in parallel batches (15 jobs per batch), as sketched below:
   - Opens the LinkedIn Easy Apply modal via Playwright
   - Detects form fields (experience, skills, resume upload)
   - Fills the fields with tailored answers and uploads the tailored PDF
   - Clicks Submit
3. Real-time progress updates via the `/progress` endpoint
4. Applications are tracked in PostgreSQL (`User.applied_jobs`)
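A minimal sketch of that batching pattern, using `asyncio.Semaphore` to cap concurrency at 15 simultaneous applications (the real orchestration in `apply_agent.py` may differ):

```python
# Sketch of the batching pattern described above; apply_to_job is a stub.
import asyncio

BATCH_SIZE = 15

async def apply_to_job(job: dict) -> bool:
    ...  # open Easy Apply, fill fields, upload resume, submit (see Apply Agent below)

async def apply_all(jobs: list[dict]) -> list[bool]:
    sem = asyncio.Semaphore(BATCH_SIZE)

    async def guarded(job: dict) -> bool:
        async with sem:
            return await apply_to_job(job)

    # return_exceptions=True so one failed application doesn't abort the rest.
    results = await asyncio.gather(*(guarded(j) for j in jobs), return_exceptions=True)
    return [r is True for r in results]
```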
**Frontend**

- Framework: Next.js 15.4.6 with App Router
- Language: TypeScript + React 19
- Styling: Tailwind CSS 4
- State Management: Redux Toolkit
- HTTP: Axios for API calls
- Auth: Supabase Auth + JWT
- Database ORM: Prisma Client
- Animations: Framer Motion
- UI Feedback: React Hot Toast (notifications)
- PDF Handling: PDF.js for rendering
**Backend**

- Framework: FastAPI (async web server)
- Async Runtime: Uvicorn (ASGI)
- Browser Automation: Playwright (async headless browser)
- Database: PostgreSQL + SQLAlchemy ORM
- AI: Google Gemini 2.5 Flash (resume parsing, tailoring)
- PDF Processing: PyMuPDF (fitz), pdf2image, pytesseract (OCR)
- Resume Generation: FPDF, python-docx
- HTTP: aiohttp for async requests
- Auth: browser-cookie3 for session cookies
**Database Schema**

```prisma
model User {
  id           String   @id @default(uuid())
  email        String   @unique
  name         String?
  resume_url   String?
  applied_jobs String[] // Array of LinkedIn job URLs applied to
}
```

**Chrome Extension**

- Manifest: V3 (latest standard)
- Service Worker: background.js (cookie sync every 2 minutes)
- Content Script: content.js (data injection)
- Capabilities: cookie capture, localStorage/sessionStorage access, browser fingerprint collection
**Prerequisites**

- Node.js ≥ 20
- Python 3.9+
- PostgreSQL database (local or cloud)
- Google API key (Gemini access)
- Supabase project (optional, for auth)
- Chrome browser (for the extension & Playwright)
**1. Frontend setup**

```bash
cd my-fe
npm install
npx prisma generate   # Generate Prisma Client
```

Create `.env.local`:

```env
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_key
NEXT_PUBLIC_API_URL=http://localhost:8000
DATABASE_URL=postgresql://user:password@localhost:5432/devhire
```

**2. Backend setup**

```bash
cd backend_python
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install -r requirements.txt
playwright install   # Download browser binaries
```

Create `config.py`:

```python
GOOGLE_API = "your_gemini_api_key"
LINKEDIN_ID = "your_linkedin_email"
LINKEDIN_PASSWORD = "your_linkedin_password"
```

**3. Run the database migration**

```bash
cd my-fe
npx prisma migrate dev --name init
```

**4. Start the backend**

```bash
cd backend_python
python -m uvicorn main.main:app --reload --host 0.0.0.0 --port 8000
```

**5. Start the frontend**

```bash
cd my-fe
npm run dev
```

Open http://localhost:3000 in your browser.

**6. Load the Chrome extension**

1. Open `chrome://extensions/`
2. Enable "Developer mode"
3. Click "Load unpacked"
4. Select the `DevHire/extension/` folder
**Parser Agent**

Input: resume PDF

Process:
- Extract text with OCR
- Send to Gemini: "Extract job titles and technical skills"
- Parse the response

Output: `["Full Stack Developer", "Backend Engineer"]` + `["Python", "React", "PostgreSQL", ...]`
**Tailor Agent**

Input: original resume + job description

Process:

```python
prompt = f"""
Given this resume:
{resume_text}

And this job description:
{job_description}

Generate a tailored resume in LaTeX that:
1. Highlights relevant skills
2. Reorders experience by relevance
3. Emphasizes matching technologies
"""
response = gemini_model.generate_content(prompt)
# Response is LaTeX → rendered to PDF → Base64
```

Output: customized PDF resume (Base64 encoded)
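The rendering helper itself isn't shown here; a minimal sketch of the LaTeX → PDF → Base64 step, assuming `pdflatex` is installed and on PATH:

```python
# Minimal sketch of the render step; the repo's actual helper may differ.
import base64
import subprocess
import tempfile
from pathlib import Path

def latex_to_base64_pdf(latex_source: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        tex_path = Path(tmp) / "resume.tex"
        tex_path.write_text(latex_source)
        # Compile quietly; check=True raises if the LaTeX fails to build.
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", "-output-directory", tmp, str(tex_path)],
            check=True,
            capture_output=True,
        )
        pdf_bytes = (Path(tmp) / "resume.pdf").read_bytes()
    return base64.b64encode(pdf_bytes).decode("ascii")
```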
**Apply Agent**

Example flow:

```python
# 1. Playwright opens LinkedIn Easy Apply
await page.click('button:has-text("Easy Apply")')

# 2. Detect form fields
experience_field = await page.query_selector('input[name="experience"]')
skills_field = await page.query_selector('input[name="skills"]')

# 3. Fill with AI-suggested answers
await experience_field.fill("5 years in Full Stack Development")
await skills_field.fill("Python, React, PostgreSQL")

# 4. Upload tailored resume
resume_input = await page.query_selector('input[type="file"]')
await resume_input.set_input_files(tailored_resume_path)

# 5. Submit
await page.click('button:has-text("Submit application")')
```

| Metric | Value |
|---|---|
| Jobs Applied Per Hour | 50-100 (vs. 5-10 manual) |
| Resume Customization Time | <5 seconds per job (Gemini) |
| Job Scraping Speed | 20-30 jobs/minute |
| End-to-End Application Time | 10-15 minutes for 50 jobs |
| Success Rate (Easy Apply) | ~85% (varies by form complexity) |
- LinkedIn credentials encrypted with AES-256 (see the sketch after this list)
- Session-based authentication (JWT)
- Credentials are NOT stored in the database (ephemeral per session)
- CORS middleware restricts requests to authorized origins
- Supabase Auth for user verification
- PostgreSQL for encrypted data at rest
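Illustrative only: one way the in-memory AES-256 encryption could look, using AES-GCM from the `cryptography` package (the repo's actual encryption code isn't shown in this README):

```python
# Illustrative AES-256-GCM helpers; not the repo's actual implementation.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # per-session key, never persisted
aesgcm = AESGCM(key)

def encrypt(plaintext: str) -> bytes:
    nonce = os.urandom(12)  # unique 96-bit nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext.encode(), None)

def decrypt(blob: bytes) -> str:
    return aesgcm.decrypt(blob[:12], blob[12:], None).decode()
```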
- Uses the official LinkedIn API where available
- Respects robots.txt and rate limits
- Stealth-mode Playwright that mimics human behavior (sketched below)
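As a sketch of what "mimics human behavior" can mean in practice, randomized pauses between Playwright actions (the project's actual stealth settings may differ):

```python
# Illustrative randomized delay between browser actions.
import asyncio
import random

async def human_pause(min_s: float = 0.8, max_s: float = 2.5) -> None:
    await asyncio.sleep(random.uniform(min_s, max_s))

# Usage inside the apply flow:
#   await page.click('button:has-text("Easy Apply")')
#   await human_pause()
```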
**Module errors in the frontend**
Solution: Ensure `package.json` has `"type": "module"`.

**Playwright can't find browser binaries**
Solution: Run `playwright install` after `pip install`.

**Gemini rate limits during tailoring**
Solution: Add delays between tailoring requests (5-10 second intervals).

**LinkedIn login fails**
Solution:
- Update `LINKEDIN_ID` and `LINKEDIN_PASSWORD` in `config.py`
- Check whether LinkedIn has changed its login flow (inspect with DevTools)
- Ensure 2FA is disabled, or use an app password

**Resume tailoring fails**
Solution: Retry with a simpler job description or increase the context length in the Gemini prompt.
| Endpoint | Method | Purpose |
|---|---|---|
| `/get-jobs` | POST | Parse resume & fetch matching jobs from LinkedIn |
| `/apply-jobs` | POST | Apply to selected jobs with tailored resumes |
| `/tailor` | POST | Generate an AI-tailored resume for a job |
| `/store-cookie` | POST | Receive LinkedIn cookies from the extension |
| `/progress` | GET | Real-time application progress tracking |
# Parse resume & get jobs
curl -X POST http://localhost:8000/get-jobs \
-H "Content-Type: application/json" \
-d '{
"user_id": "user123",
"file_url": "https://..../resume.pdf",
"password": "linkedin_password"
}'
# Tailor resume for a job
curl -X POST http://localhost:8000/tailor \
-H "Content-Type: application/json" \
-d '{
"job_desc": "We are looking for a Full Stack Developer...",
"resume_url": "https://..../resume.pdf"
}'
# Apply to jobs
curl -X POST http://localhost:8000/apply-jobs \
-H "Content-Type: application/json" \
-d '{
"user_id": "user123",
"password": "linkedin_password",
"resume_url": "https://..../resume.pdf",
"jobs": [
{
"job_url": "https://linkedin.com/jobs/123456",
"job_description": "Full Stack Developer..."
}
]
}'-
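The examples above don't cover `/progress`; here is a hypothetical Python polling loop (the query parameter and response fields are illustrative, so check `progress_route.py` for the actual shape):

```python
# Hypothetical polling loop for /progress; field names are assumptions.
import time

import requests

def wait_for_completion(user_id: str, base_url: str = "http://localhost:8000") -> dict:
    while True:
        data = requests.get(f"{base_url}/progress", params={"user_id": user_id}).json()
        print(f"Applied {data.get('completed', 0)}/{data.get('total', 0)}")
        if data.get("done"):
            return data
        time.sleep(2)
```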
To get familiar with the codebase:

- **Async Python in Backend**
  - Read: `backend_python/agents/scraper_agent_optimized.py` (async Playwright usage)
  - Learn: how Playwright async contexts handle multiple browser sessions
- **Resume Tailoring Logic**
  - Read: `backend_python/agents/tailor.py` (Gemini integration)
  - Learn: PDF text extraction, LaTeX generation, batch processing
- **Frontend State Management**
  - Read: `my-fe/app/Components/Jobs.tsx` & `Apply.tsx`
  - Learn: Redux + sessionStorage for multi-step flows
- **Database Models**
  - Read: `my-fe/prisma/schema.prisma` & `backend_python/database/SchemaModel.py`
  - Learn: how Prisma ORM mirrors the SQLAlchemy models
- Create a feature branch: `git checkout -b feature/your-feature`
- Make changes and test locally
- Commit with clear messages: `git commit -m "Add resume parsing optimization"`
- Push and create a Pull Request
This project is licensed under the MIT License; see the LICENSE file for details.
- Support for multiple job boards (Indeed, Glassdoor, Dice)
- Resume versioning & A/B testing
- Salary negotiation insights
- Interview prep with AI coaching
- Email notification tracking
- Mobile app (React Native)
- API marketplace for third-party integrations
For issues, questions, or feature requests:
- Open an issue on GitHub
- Email: support@devhire.dev
- Discord: Join our community
Made with ❤️ to help developers land their dream jobs faster.