
JobCruncher


Group 20

Juggling multiple assignments, quizzes, projects, and presentations while scrambling to meet deadlines every week? Feel like you have no time to watch your favorite series or your sports team play, let alone search for job postings on a day-to-day basis? Here comes JobCruncher.

JobCruncher is an online job-scraping and analysis tool that lets users filter jobs posted on LinkedIn based on their interests. LinkedIn is an employment-oriented online service used primarily for professional networking and career development; it lets job seekers post their CVs and employers post openings, which makes it an ideal site to scrape job details from.

So, leave the tedious, monotonous task of looking up job postings to JobCruncher, which not only surfaces the jobs posted every day but also helps filter the results to your liking.

So why use JobCruncher instead?

Project 1 Video

https://www.youtube.com/watch?v=_ASFR0DymiU&ab_channel=TejasPrabhu

Project 2 Video

Video_GRP20_SE22.mp4

Unlike many other job portals, JobCruncher is a simple, lightweight online tool that gives users clear information about the jobs posted on LinkedIn and lets them fine-tune the results.

Further, it provides insights about the job postings, and because the scraper runs every day, the user always sees the most recent listings.
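For a sense of how filtered results could be pulled from the scraped data, here is a minimal sketch using pymongo. The collection and field names (`jobs`, `title`, `location`, `posted_date`) are illustrative assumptions for this example, not necessarily the project's actual schema.

    # Minimal sketch (illustrative only): query recent scraped jobs from MongoDB.
    # The collection and field names below are assumptions, not the project's schema.
    from datetime import datetime, timedelta
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    jobs = client["jobcruncher"]["jobs"]

    # Keep only postings from the last day that match the user's interest.
    cutoff = datetime.utcnow() - timedelta(days=1)
    query = {
        "title": {"$regex": "data engineer", "$options": "i"},  # case-insensitive match
        "location": {"$regex": "Raleigh", "$options": "i"},
        "posted_date": {"$gte": cutoff},
    }

    for job in jobs.find(query).limit(10):
        print(job["title"], "-", job.get("company", "N/A"))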

Deployment and Scalability

The Job Analyzer application can be deployed on any cloud provider such as AWS, GCP, or Azure using the Docker image built from the Dockerfile. We have created Kubernetes deployment, service, and route YAML files so the application can be accessed publicly. As the number of users grows from 100 to 1,000 to 10,000 and beyond, we increase the number of container instances. A Global Traffic Manager (GTM) load-balances user requests across data centers through Local Traffic Managers (LTMs) using Nginx. In the cloud, an HA proxy/service distributes each request to the container with the least load. In the backend, MongoDB is deployed across data centers and replicates data asynchronously using a multi-leader architecture. With this architecture we can accommodate every user request without degrading application performance, and we will use active-active (A:A) deployments to increase availability.

Installation

Check INSTALL.md for installation instructions for Python, VS Code and MongoDB

To get started with the project:

  • Clone the repo

     git clone https://github.com/subodh30/Job-Analyzer.git
    
    
  • Set up a virtual environment

    pip install virtualenv
    cd Job-Analyzer
    virtualenv env
    .\env\Scripts\activate.bat
    
  • Install the required libraries

      pip install -r requirements.txt
    
    
  • Run the Flask app with the command below and you are good to go (a quick sanity check is sketched after this list)

      flask --app src.app run
    
    
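Once the server is running, a quick check like the one below can confirm it responds. It assumes Flask's default development address (http://127.0.0.1:5000) and that the `requests` library is available; adjust the URL if you run the app on a different host or port.

    # Sanity-check sketch: confirm the local Flask server answers requests.
    # Assumes the default development address; adjust host/port if needed.
    import requests

    resp = requests.get("http://127.0.0.1:5000/", timeout=5)
    print(resp.status_code)  # a 2xx status means the app is serving pages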

Application Preview:

Search Page

Result Page

Filtering the results

Tech Stack used for the development of this project

  • Python
  • MongoDB
  • Flask
  • Selenium
  • Pytest

Project documentation

The docs folder contains all necessary documents and documentation for the project.

Code Coverage


File                   Coverage
src/scraper.py         61.34%
test/test_flask.py     100.00%
test/test_scraper.py   100.00%
src/app.py             100.00%

Future Scope:

As the job market grows rapidly every year, JobCruncher has to keep pace, which means shedding many of the overheads in the current process.

Phase 2:

  1. Deploying on AWS – The main idea is to make JobCruncher serverless. Removing the need for a local server and pushing to the cloud improves usability. AWS Lambda, S3, CloudWatch, and SNS can be used to schedule jobs every X hours to scrape job listings from each employment-oriented site (a rough sketch of such a handler appears after this list).

  2. User Profile – Adding user profiles to JobCruncher makes it possible to extract key attributes from a user's information and match the scraped jobs against them.

  3. Features from Resume – The user can upload a resume/CV and cover letter. Using text analysis, we can extract key features such as technical skills, projects, experience, and desired job position, and tailor the job search to the user’s needs.

  4. Notification System – Since every user will have a unique profile, a notification system can be set up to notify users of new job postings.

  5. Chatbot Integration – A convenience feature: an easy-to-use chatbot that provides information about, and access to, the features JobCruncher offers.
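To make the scheduling idea in item 1 concrete, here is a rough sketch of what a scheduled AWS Lambda handler could look like. None of this exists in the current codebase: scrape_jobs() is a hypothetical placeholder for the scraping logic, and the SNS topic ARN is a dummy value.

    # Rough sketch (future scope, not implemented): a Lambda handler that a
    # CloudWatch/EventBridge schedule could invoke every X hours.
    # scrape_jobs() and the SNS topic ARN are hypothetical placeholders.
    import boto3

    def scrape_jobs():
        """Placeholder for the scraping logic (e.g., the existing Selenium scraper)."""
        return []

    def lambda_handler(event, context):
        jobs = scrape_jobs()
        # Notify subscribers that a fresh batch of postings is available.
        sns = boto3.client("sns")
        sns.publish(
            TopicArn="arn:aws:sns:REGION:ACCOUNT_ID:jobcruncher-updates",
            Message=f"Scraped {len(jobs)} new job postings.",
        )
        return {"statusCode": 200, "count": len(jobs)}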

Roadmap

We have a lot planned for the future! Completed tasks and future enhancements can be found here

Contributors

Thanks goes to these wonderful people.


Ameya Vaichalkar

Kunal Patil

Rohan Shiveshwarkar

Subodh Gujar

Yash Sonar
