
# DevHire – AI-Powered LinkedIn Job Application Automation

## 🎯 The Problem It Solves

Applying to jobs on LinkedIn is tedious, time-consuming, and repetitive. Professionals spend hours:

- Manually searching for relevant job postings
- Reading job descriptions to find matches
- Customizing resumes for each application
- Filling out LinkedIn forms repeatedly
- Tracking which jobs they've applied to

DevHire automates this entire workflow, enabling job seekers to apply to dozens of relevant positions in minutes instead of hours, with AI-powered resume customization tailored to each job.


## 🚀 What DevHire Does

DevHire is a full-stack AI automation platform that:

1. **Extracts Your Profile** – Uses a Chrome extension to capture LinkedIn credentials, cookies, and browser fingerprint
2. **Parses Your Resume** – Analyzes your resume using Google Gemini AI to extract skills, experience, and qualifications
3. **Searches for Jobs** – Scrapes LinkedIn using Playwright (headless browser) based on job titles and keywords parsed from your resume
4. **Tailors Resumes** – Uses Google Gemini to dynamically generate job-specific resumes for each application, with LaTeX rendering to PDF
5. **Auto-Applies** – Programmatically fills out LinkedIn's Easy Apply forms and submits applications with tailored resumes
6. **Tracks Progress** – Provides real-time progress tracking and application history in the web dashboard

**Result:** Apply to 50+ tailored job applications in the time it used to take to apply to 5.


πŸ—οΈ Architecture Overview

```text
┌──────────────────────────────────────────────────────┐
│        Frontend (Next.js 15 + React 19)              │
│  ✨ Dashboard, Login, Job Selection, Apply Flow      │
│         Supabase Auth + Prisma ORM                   │
└──────────────────────────────────────────────────────┘
                        ↕ (API calls)
┌──────────────────────────────────────────────────────┐
│    Backend (Python FastAPI + Uvicorn)                │
│  • /get-jobs – Parse resume & scrape LinkedIn        │
│  • /apply-jobs – Apply with tailored resumes         │
│  • /tailor – AI resume customization (Gemini)        │
│  • /store-cookie – Receive auth from extension       │
└──────────────────────────────────────────────────────┘
                        ↕ (Browser control)
┌──────────────────────────────────────────────────────┐
│   Chrome Extension (Manifest V3)                     │
│  • Captures LinkedIn cookies & localStorage          │
│  • Sends credentials to backend                      │
│  • Injects content scripts for data collection       │
└──────────────────────────────────────────────────────┘
                        ↕
┌──────────────────────────────────────────────────────┐
│   Agents (Async Python Tasks)                        │
│  • Scraper Agent – LinkedIn job search (Playwright)  │
│  • Parser Agent – Resume analysis (Gemini AI)        │
│  • Tailor Agent – Resume generation (Gemini)         │
│  • Apply Agent – Form filling & submission           │
└──────────────────────────────────────────────────────┘
                        ↕
┌──────────────────────────────────────────────────────┐
│   Databases                                          │
│  • PostgreSQL (Prisma ORM) – Users, applied jobs     │
│  • Supabase – Authentication & Auth helpers          │
└──────────────────────────────────────────────────────┘
```

### Key Components

| Component | Technology | Purpose |
|-----------|------------|---------|
| Frontend | Next.js 15, React 19, TypeScript | User dashboard, job browsing, application tracking |
| Backend | Python FastAPI, Uvicorn | Job scraping, resume tailoring, form automation |
| AI Engine | Google Gemini 2.5 Flash | Resume parsing, job–resume matching, tailored resume generation |
| Browser Automation | Playwright (async) | LinkedIn login, job scraping, form filling |
| Auth | Supabase + JWT | User registration, login, session management |
| Database | PostgreSQL + Prisma | User profiles, applied jobs, resume URLs |
| Extension | Chrome Manifest V3 | Cookie/fingerprint capture, credential sync |

πŸ“ Project Structure

```text
DevHire/
├── my-fe/                          # Next.js Frontend
│   ├── app/
│   │   ├── Components/
│   │   │   ├── Home.tsx           # Landing page
│   │   │   ├── Jobs.tsx           # Job search & listing
│   │   │   ├── Apply.tsx          # Application progress tracker
│   │   │   ├── Tailor_resume.tsx  # Resume tailoring UI
│   │   │   ├── Login.tsx          # Authentication
│   │   │   └── JobCards.tsx       # Job card display
│   │   ├── api/                   # Backend API routes (Next.js)
│   │   ├── Jobs/                  # Job pages
│   │   ├── apply/                 # Application flow pages
│   │   └── utiles/
│   │       ├── agentsCall.ts      # API calls to Python backend
│   │       ├── supabaseClient.ts  # Supabase initialization
│   │       └── getUserData.ts     # User profile management
│   ├── prisma/
│   │   └── schema.prisma          # Database schema (PostgreSQL)
│   └── package.json
│
├── backend_python/                # Python FastAPI Backend
│   ├── main/
│   │   ├── main.py                # FastAPI app setup
│   │   └── routes/
│   │       ├── list_jobs.py       # POST /get-jobs endpoint
│   │       ├── apply_jobs.py      # POST /apply-jobs endpoint
│   │       ├── get_resume.py      # POST /tailor endpoint
│   │       ├── cookie_receiver.py # POST /store-cookie endpoint
│   │       └── progress_route.py  # Progress tracking
│   ├── agents/
│   │   ├── scraper_agent.py             # LinkedIn job scraper
│   │   ├── scraper_agent_optimized.py   # Optimized scraper with Gemini extraction
│   │   ├── apply_agent.py               # Form filling & submission
│   │   ├── tailor.py                    # Resume AI customization
│   │   └── parse_agent.py               # Resume parsing
│   ├── database/
│   │   ├── db_engine.py           # SQLAlchemy connection
│   │   └── SchemaModel.py         # User model (SQLAlchemy)
│   ├── config.py                  # API keys & environment variables
│   ├── requirements.txt           # Python dependencies
│   └── run.sh                     # Startup script
│
├── extension/                     # Chrome Extension
│   ├── manifest.json              # Extension configuration
│   ├── background.js              # Service worker (cookie/storage sync)
│   └── content.js                 # Content script (page injection)
│
└── prisma/                        # Prisma schema (shared)
    └── schema.prisma              # Database models
```

## 🔄 User Journey & Data Flow

### Step 1: User Authenticates

1. User logs in to the DevHire web app with LinkedIn
2. The Chrome extension captures LinkedIn cookies & localStorage
3. The extension sends the data to the backend (`/store-cookie`)
4. The backend stores encrypted credentials for the session

### Step 2: Resume Upload & Parsing

1. User uploads a resume (PDF/DOCX)
2. Backend extracts the text using PyMuPDF + OCR (pytesseract) (a minimal extraction sketch follows this list)
3. Gemini AI parses the resume and extracts job titles, skills, and experience
4. Output: `["Full Stack Developer", "Backend Engineer"]` + `["Python", "React", "PostgreSQL"]`
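
A minimal sketch of that extraction step, assuming PyMuPDF handles digitally generated PDFs and pytesseract covers scanned ones; the function name and OCR-fallback threshold are illustrative, not the actual `parse_agent.py` code:

```python
import fitz                                  # PyMuPDF
import pytesseract
from pdf2image import convert_from_path

def extract_resume_text(pdf_path: str) -> str:
    """Pull text straight from the PDF; fall back to OCR for scanned pages."""
    doc = fitz.open(pdf_path)
    text = "".join(page.get_text() for page in doc)
    if len(text.strip()) > 100:              # embedded text found, skip OCR
        return text
    images = convert_from_path(pdf_path)     # rasterize pages for pytesseract
    return "\n".join(pytesseract.image_to_string(img) for img in images)
```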

### Step 3: Job Search

1. Frontend calls `POST /get-jobs` with:
   - `user_id`, `password`, `resume_url`
2. Backend uses Playwright to:
   - Log in to LinkedIn with the credentials
   - Search for each parsed job title
   - Scrape the job listings (title, company, description, link); a minimal scraper sketch follows this list
3. Jobs are returned to the frontend, where the user selects 10–50 of them
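
A minimal async Playwright sketch of that search loop. The search URL pattern and the `div.job-card-container` selector are assumptions for illustration (LinkedIn changes its markup often), not necessarily what `scraper_agent.py` uses:

```python
from urllib.parse import quote
from playwright.async_api import async_playwright

async def search_jobs(titles: list[str]) -> list[dict]:
    jobs = []
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        for title in titles:
            # Assumed URL pattern; the real scraper may navigate differently
            await page.goto(f"https://www.linkedin.com/jobs/search/?keywords={quote(title)}")
            for card in await page.query_selector_all("div.job-card-container"):
                link = await card.query_selector("a")
                jobs.append({
                    "title": (await card.inner_text()).splitlines()[0],
                    "link": await link.get_attribute("href") if link else None,
                })
        await browser.close()
    return jobs
```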

### Step 4: Resume Tailoring (Per Job)

1. For each selected job, the frontend calls `POST /tailor` with:
   - `original_resume_url`, `job_description`
2. Backend:
   - Downloads the original resume PDF
   - Sends it to Gemini: "Tailor this resume for this JD"
   - Gemini generates LaTeX that highlights the relevant skills
   - The LaTeX is rendered to PDF, and the PDF is converted to Base64 (see the rendering sketch below)
3. Base64 PDFs are sent to the frontend and stored in sessionStorage
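
A hedged sketch of that render step, assuming a `pdflatex` binary on PATH; `tailor.py` may use a different LaTeX toolchain:

```python
import base64
import subprocess
import tempfile
from pathlib import Path

def latex_to_base64_pdf(latex_source: str) -> str:
    """Compile Gemini's LaTeX output and return the PDF as Base64."""
    with tempfile.TemporaryDirectory() as tmp:
        (Path(tmp) / "resume.tex").write_text(latex_source)
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", "resume.tex"],
            cwd=tmp, check=True, capture_output=True,
        )
        pdf_bytes = (Path(tmp) / "resume.pdf").read_bytes()
    return base64.b64encode(pdf_bytes).decode("ascii")
```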

### Step 5: Batch Application

1. Frontend calls `POST /apply-jobs` with:
   - `jobs: [{job_url, job_description}]`
   - `user_id`, `password`, `resume_url`
2. Backend applies in parallel batches (15 jobs per batch; see the batching sketch below):
   - Opens the LinkedIn Easy Apply modal via Playwright
   - Detects the form fields (experience, skills, resume upload)
   - Fills the fields with tailored answers and uploads the tailored PDF
   - Clicks Submit
3. Real-time progress updates are served via the `/progress` endpoint
4. Applications are tracked in PostgreSQL (`User.applied_jobs`)
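
A minimal sketch of the batching pattern using `asyncio.gather` over chunks of 15; `apply_to_job` here is a placeholder for the real `apply_agent.py` flow (see Feature 3 below):

```python
import asyncio

BATCH_SIZE = 15  # jobs applied concurrently per batch, per the figure above

async def apply_to_job(job: dict) -> bool:
    # Placeholder for the Playwright Easy Apply flow shown in Feature 3
    await asyncio.sleep(0)
    return True

async def apply_in_batches(jobs: list[dict]) -> list[bool]:
    results: list[bool] = []
    for i in range(0, len(jobs), BATCH_SIZE):
        batch = jobs[i : i + BATCH_SIZE]
        results += await asyncio.gather(*(apply_to_job(j) for j in batch))
    return results
```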

## 🛠️ Tech Stack Deep Dive

### Frontend (Next.js)

- **Framework:** Next.js 15.4.6 with App Router
- **Language:** TypeScript + React 19
- **Styling:** Tailwind CSS 4
- **State Management:** Redux Toolkit
- **HTTP:** Axios for API calls
- **Auth:** Supabase Auth + JWT
- **Database ORM:** Prisma Client
- **Animations:** Framer Motion
- **UI Feedback:** React Hot Toast (notifications)
- **PDF Handling:** PDF.js for rendering

### Backend (Python)

- **Framework:** FastAPI (async web server)
- **Async Runtime:** Uvicorn (ASGI)
- **Browser Automation:** Playwright (async headless browser)
- **Database:** PostgreSQL + SQLAlchemy ORM
- **AI:** Google Gemini 2.5 Flash (resume parsing, tailoring)
- **PDF Processing:** PyMuPDF (fitz), pdf2image, pytesseract (OCR)
- **Resume Generation:** FPDF, python-docx
- **HTTP:** aiohttp for async requests
- **Auth:** browser-cookie3 for session cookies

### Database (PostgreSQL + Prisma)

```prisma
model User {
  id           String   @id @default(uuid())
  email        String   @unique
  name         String?
  resume_url   String?
  applied_jobs String[] // Array of LinkedIn job URLs applied to
}
```
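
The backend reads the same table through SQLAlchemy (`database/SchemaModel.py`). A hedged sketch of what that mirror can look like; column names follow the Prisma schema above, but the actual model may be declared differently:

```python
import uuid
from sqlalchemy import String
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "User"  # Prisma's default table name for `model User`

    id: Mapped[str] = mapped_column(String, primary_key=True,
                                    default=lambda: str(uuid.uuid4()))
    email: Mapped[str] = mapped_column(String, unique=True)
    name: Mapped[str | None] = mapped_column(String)
    resume_url: Mapped[str | None] = mapped_column(String)
    applied_jobs: Mapped[list[str]] = mapped_column(ARRAY(String), default=list)
```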

### Chrome Extension

- **Manifest:** V3 (latest standard)
- **Service Worker:** `background.js` (cookie sync every 2 minutes)
- **Content Script:** `content.js` (data injection)
- **Capabilities:** Cookie capture, localStorage/sessionStorage access, browser fingerprint collection

## 🚀 Getting Started

### Prerequisites

- Node.js ≥ 20
- Python 3.9+
- PostgreSQL database (local or cloud)
- Google API key (Gemini access)
- Supabase project (optional, for auth)
- Chrome browser (for the extension & Playwright)

### Environment Setup

#### 1. Clone & Install Frontend

```bash
cd my-fe
npm install
npx prisma generate  # Generate Prisma Client
```

Create `.env.local`:

```env
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_key
NEXT_PUBLIC_API_URL=http://localhost:8000
DATABASE_URL=postgresql://user:password@localhost:5432/devhire
```

#### 2. Install Backend Dependencies

```bash
cd backend_python
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
playwright install  # Download browser binaries
```

Create `config.py`:

```python
GOOGLE_API = "your_gemini_api_key"
LINKEDIN_ID = "your_linkedin_email"
LINKEDIN_PASSWORD = "your_linkedin_password"
```

#### 3. Set Up the Database

```bash
cd my-fe
npx prisma migrate dev --name init
```

#### 4. Run Backend

```bash
cd backend_python
python -m uvicorn main.main:app --reload --host 0.0.0.0 --port 8000
```

#### 5. Run Frontend

```bash
cd my-fe
npm run dev
```

Open http://localhost:3000 in your browser.

#### 6. Load Chrome Extension

1. Open `chrome://extensions/`
2. Enable "Developer mode"
3. Click "Load unpacked"
4. Select the `DevHire/extension/` folder

## 🔑 Key Features & Examples

### Feature 1: Smart Job Parsing

**Input:** Resume PDF

**Process:**

- Extract text with OCR
- Send to Gemini: "Extract job titles and technical skills"
- Parse the response (a minimal sketch follows)

**Output:** `["Full Stack Developer", "Backend Engineer"]` + `["Python", "React", "PostgreSQL", ...]`
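
A hedged sketch of that call using the `google-generativeai` package; the prompt wording and JSON shape are illustrative, not necessarily what `parse_agent.py` sends:

```python
import json
import google.generativeai as genai

genai.configure(api_key="your_gemini_api_key")   # GOOGLE_API from config.py
model = genai.GenerativeModel("gemini-2.5-flash")

def parse_resume(resume_text: str) -> dict:
    prompt = (
        "Extract job titles and technical skills from this resume. "
        'Reply with JSON only: {"titles": [...], "skills": [...]}\n\n'
        + resume_text
    )
    raw = model.generate_content(prompt).text.strip()
    if raw.startswith("```"):                    # Gemini often fences its JSON
        raw = raw.strip("`").removeprefix("json").strip()
    return json.loads(raw)
```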

### Feature 2: Intelligent Resume Tailoring

**Input:** Original resume + job description

**Process:**

```python
prompt = f"""
Given this resume:
{resume_text}

And this job description:
{job_description}

Generate a tailored resume in LaTeX that:
1. Highlights relevant skills
2. Reorders experience by relevance
3. Emphasizes matching technologies
"""
response = gemini_model.generate_content(prompt)
# Response is LaTeX → rendered to PDF → Base64
```

**Output:** Customized PDF resume (Base64 encoded)

### Feature 3: Automated Job Application

**Example flow:**

```python
# 1. Playwright opens LinkedIn Easy Apply
await page.click('button:has-text("Easy Apply")')

# 2. Detect form fields
experience_field = await page.query_selector('input[name="experience"]')
skills_field = await page.query_selector('input[name="skills"]')

# 3. Fill with AI-suggested answers
await experience_field.fill("5 years in Full Stack Development")
await skills_field.fill("Python, React, PostgreSQL")

# 4. Upload tailored resume
resume_input = await page.query_selector('input[type="file"]')
await resume_input.set_input_files(tailored_resume_path)

# 5. Submit
await page.click('button:has-text("Submit application")')
```

## 📊 Performance & Metrics

| Metric | Value |
|--------|-------|
| Jobs applied per hour | 50–100 (vs. 5–10 manual) |
| Resume customization time | <5 seconds per job (Gemini) |
| Job scraping speed | 20–30 jobs/minute |
| End-to-end application time | 10–15 minutes for 50 jobs |
| Success rate (Easy Apply) | ~85% (varies by form complexity) |

## 🔒 Security & Privacy

### Credential Handling

- LinkedIn credentials encrypted with AES-256 (see the sketch below)
- Session-based authentication (JWT)
- Credentials are NOT stored in the database (ephemeral per session)
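
A minimal sketch of what session-scoped AES-256 can look like, using the `cryptography` package's AES-GCM mode; this illustrates the stated design, not the repo's actual implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SESSION_KEY = AESGCM.generate_key(bit_length=256)  # held in memory only

def encrypt_credential(plaintext: str) -> bytes:
    nonce = os.urandom(12)                         # unique nonce per message
    return nonce + AESGCM(SESSION_KEY).encrypt(nonce, plaintext.encode(), None)

def decrypt_credential(blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(SESSION_KEY).decrypt(nonce, ciphertext, None).decode()
```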

### Data Protection

- CORS middleware restricts requests to authorized origins
- Supabase Auth for user verification
- PostgreSQL with encryption at rest

### LinkedIn Compliance

- Uses the official LinkedIn API where available
- Respects robots.txt and rate limits
- Stealth-mode Playwright (mimics human behavior)

πŸ› Troubleshooting

"Cannot use import statement outside a module"

Solution: Ensure package.json has "type": "module"

"Playwright browser download failed"

Solution: Run playwright install after pip install

"Gemini API rate limit exceeded"

Solution: Add delays between tailoring requests (5-10 second intervals)
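
One way to add that spacing, sketched with a broad retry; the exact exception the Gemini SDK raises on a 429 varies by version, so it is caught generically here:

```python
import asyncio

async def tailor_with_backoff(tailor_fn, job_description: str, retries: int = 3):
    """Call an async tailoring function with pacing and rate-limit retries."""
    for attempt in range(retries):
        try:
            result = await tailor_fn(job_description)
            await asyncio.sleep(7)            # keep requests ~5-10 s apart
            return result
        except Exception:                     # e.g. a 429 rate-limit response
            await asyncio.sleep(10 * (attempt + 1))
    raise RuntimeError("Gemini rate limit: retries exhausted")
```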

"LinkedIn login failing"

Solution:

  • Update LINKEDIN_ID and LINKEDIN_PASSWORD in config.py
  • Check if LinkedIn has changed login flow (inspect with DevTools)
  • Ensure 2FA is disabled or use app passwords

"Resume tailoring returns LaTeX errors"

Solution: Retry with simpler job description or increase context length in Gemini prompt


## 📚 API Endpoints

### Frontend Calls Backend

| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/get-jobs` | POST | Parse resume & fetch matching jobs from LinkedIn |
| `/apply-jobs` | POST | Apply to selected jobs with tailored resumes |
| `/tailor` | POST | Generate an AI-tailored resume for a job |
| `/store-cookie` | POST | Receive LinkedIn cookies from the extension |
| `/progress` | GET | Real-time application progress tracking (polling sketch below) |
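
A hedged polling sketch for `/progress`; the response fields (`completed`, `total`, `done`) are assumptions for illustration, so check `progress_route.py` for the actual shape:

```python
import time
import requests

def wait_for_applications(base_url: str = "http://localhost:8000") -> None:
    while True:
        status = requests.get(f"{base_url}/progress").json()
        print(f"applied {status.get('completed')}/{status.get('total')}")
        if status.get("done"):
            break
        time.sleep(2)
```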

### Example Requests

```bash
# Parse resume & get jobs
curl -X POST http://localhost:8000/get-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "file_url": "https://..../resume.pdf",
    "password": "linkedin_password"
  }'

# Tailor resume for a job
curl -X POST http://localhost:8000/tailor \
  -H "Content-Type: application/json" \
  -d '{
    "job_desc": "We are looking for a Full Stack Developer...",
    "resume_url": "https://..../resume.pdf"
  }'

# Apply to jobs
curl -X POST http://localhost:8000/apply-jobs \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "password": "linkedin_password",
    "resume_url": "https://..../resume.pdf",
    "jobs": [
      {
        "job_url": "https://linkedin.com/jobs/123456",
        "job_description": "Full Stack Developer..."
      }
    ]
  }'
```

## 🎓 Learning Resources

### Understanding Key Workflows

1. **Async Python in the Backend**
   - Read: `backend_python/agents/scraper_agent_optimized.py` (async Playwright usage)
   - Learn: how Playwright async contexts handle multiple browser sessions
2. **Resume Tailoring Logic**
   - Read: `backend_python/agents/tailor.py` (Gemini integration)
   - Learn: PDF text extraction, LaTeX generation, batch processing
3. **Frontend State Management**
   - Read: `my-fe/app/Components/Jobs.tsx` & `Apply.tsx`
   - Learn: Redux + sessionStorage for multi-step flows
4. **Database Models**
   - Read: `my-fe/prisma/schema.prisma` & `backend_python/database/SchemaModel.py`
   - Learn: how the Prisma schema mirrors the SQLAlchemy model

## 🤝 Contributing

1. Create a feature branch: `git checkout -b feature/your-feature`
2. Make changes and test locally
3. Commit with clear messages: `git commit -m "Add resume parsing optimization"`
4. Push and create a Pull Request

πŸ“ License

This project is licensed under the MIT License – see LICENSE file for details.


## 🎉 Future Enhancements

- Support for multiple job boards (Indeed, Glassdoor, Dice)
- Resume versioning & A/B testing
- Salary negotiation insights
- Interview prep with AI coaching
- Email notification tracking
- Mobile app (React Native)
- API marketplace for third-party integrations

## 📧 Contact & Support

For issues, questions, or feature requests, please open an issue on the repository.


Made with ❤️ to help developers land their dream jobs faster.