
Splitting and classifying documents from a PDF or image containing five classes of documents (Aadhar card, PAN, etc.), followed by information retrieval from each document.


Document-Classification-and-Data-Extraction

Table of Contents
  1. About the project
  2. Salient Features
  3. Tech stack used
  4. Methodology
  5. Data Description
  6. Data Preprocessing
  7. Document Classification Model
  8. Final Model and Results
  9. Information extraction model
  10. Team

About the project

We propose a model that can recognise the set of documents contained in a PDF or image made up of multiple documents. To accomplish this, the input PDF is split into individual pages, a CNN model classifies each page into the appropriate document category, and each document's data is then extracted using OCR (optical character recognition). The pipeline is proposed for five document classes, including voter ID, driver's licence, PAN, and Aadhar. Except for the front and back of the same document, the input PDF must contain a single document per page. Our classification model obtained an accuracy of 0.7342 on the training set and 0.7736 on the validation set, with a training loss of 0.6923 and a validation loss of 0.8340.
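As a rough illustration of this pipeline (not the repository's actual script), the split-classify-extract flow could look like the sketch below. The model file name, class list, and image size are assumptions, and pdf2image, TensorFlow/Keras, and pytesseract are assumed as the supporting libraries.

```python
# Hypothetical end-to-end sketch: split a PDF into pages, classify each page
# with a trained CNN, then run OCR on it. Names (classifier.h5, CLASSES,
# image size) are illustrative assumptions, not the repository's actual code.
import numpy as np
import pytesseract
from pdf2image import convert_from_path
from tensorflow.keras.models import load_model

CLASSES = ["aadhar", "pan", "voter_id", "driving_licence", "passport"]  # assumed class order
IMG_SIZE = (224, 224)                                                   # assumed input size

model = load_model("classifier.h5")  # assumed path to the trained CNN

def classify_and_extract(pdf_path):
    results = []
    for page in convert_from_path(pdf_path):          # one PIL image per page
        x = np.array(page.resize(IMG_SIZE)) / 255.0   # same preprocessing as training
        probs = model.predict(x[np.newaxis, ...])[0]
        label = CLASSES[int(np.argmax(probs))]
        text = pytesseract.image_to_string(page)      # OCR on the full page
        results.append({"class": label, "text": text})
    return results
```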

Salient Features

  • Hyperparameter tuning
  • Regularization (early stopping)
  • Document splitting

Tech stack used

  • Models: CNN and OCR
  • Framework: Keras

Methodology


Data Description

When we began searching for an appropriate dataset, we observed that there is no publicly available dataset of identity documents, as they hold sensitive and personal information. We did come across a dataset on Kaggle consisting of six folders: Aadhar Card, PAN Card, Voter ID, single-page Gas Bill, Passport, and Driver's License. We added a few more images to each folder; these were our own documents that we manually scanned, with the rest coming from Google Images. Five of these document classes are the ones we classify and extract information from.

Data Preprocessing

Before model training, we applied horizontal and vertical data augmentation using random flips. This further increased the size and diversity of the dataset. The categorical values of the labels column were converted to numerical values using one-hot encoding.
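As a rough illustration (not the repository's actual script), the random flips and one-hot labels could be produced with Keras utilities; the directory layout, image size, and split fraction below are assumptions.

```python
# Illustrative preprocessing sketch (assumed directory layout and image size).
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

# Random horizontal and vertical flips augment the scanned-document images.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    horizontal_flip=True,
    vertical_flip=True,
    validation_split=0.2,          # assumed train/validation split
)

train_gen = datagen.flow_from_directory(
    "dataset/",                    # assumed folder-per-class layout
    target_size=(224, 224),        # assumed input size
    class_mode="categorical",      # labels come out one-hot encoded
    subset="training",
)

# Equivalent manual one-hot encoding for integer class labels:
labels = np.array([0, 2, 1, 4, 3])
one_hot = to_categorical(labels, num_classes=5)
```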

Document Classification Model


Various hyperparameters like the number of layers, neurons in each layer, number of filters, kernel size, value of p in dropout layers, number of epochs, batch size, etc. were changed until satisfactory training and validation accuracy was achieved.
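A minimal sketch of a CNN of this kind in Keras, with a dropout layer and early stopping as the regularizer; the layer counts, filter sizes, dropout rate, and patience below are placeholder values, not the tuned ones from this project.

```python
# Illustrative Keras CNN with dropout and early stopping; the specific
# layer/filter/dropout values are placeholders, not the repository's tuned ones.
from tensorflow.keras import layers, models, callbacks

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # p in the dropout layer
    layers.Dense(5, activation="softmax"),    # five document classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # matches one-hot labels
              metrics=["accuracy"])

# Early stopping halts training once the validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)

# model.fit(train_gen, validation_data=val_gen, epochs=30, callbacks=[early_stop])
```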


Final Model and Results


Information extraction model

OCR is performed on each classified page image to extract its text; a sketch of this step follows.

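A minimal sketch of the OCR step, assuming pytesseract (Tesseract) as the OCR engine and OpenCV for pre-processing; grayscaling and thresholding are common pre-processing steps, but the exact steps and field parsing used in this project may differ.

```python
# Illustrative OCR sketch using OpenCV pre-processing and pytesseract.
import cv2
import pytesseract

def extract_text(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                        # 1. grayscale
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)      # 2. binarise
    return pytesseract.image_to_string(thresh)                          # 3. run OCR

# Example: pull the raw text of a classified page, then parse fields such as
# name, date of birth, or ID number downstream (e.g. with regular expressions).
# print(extract_text("page_1.png"))
```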

Team
