LCSD Captcha Solver

Description

This captcha solver uses an attention-based OCR (AOCR) deep learning model, served with TensorFlow Serving, to solve captchas on the Online Booking system of the Leisure and Cultural Services Department (LCSD) of the Hong Kong SAR Government.

Prerequisites

  - Docker
  - Docker Compose

Installation

To build the TensorFlow image with AOCR (Attention OCR) support, run the following command:

docker build -t tf-aocr:v1 .

To start the TensorFlow Serving containers, run the following command:

docker-compose up -d
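For reference, a docker-compose.yml along the following lines matches the service names, ports, and model names used in the Usage section below. This is a minimal sketch, not the file shipped with this repository; the serving image, model paths, and Compose version are assumptions.

version: "3"
services:
  tf-aocr-lcsd-4:
    # Serves the 4-character model; host port 9000 maps to TensorFlow Serving's REST port 8501.
    image: tensorflow/serving
    environment:
      - MODEL_NAME=lcsd-captcha-4
    ports:
      - "9000:8501"
    volumes:
      - ./models/lcsd-captcha-4:/models/lcsd-captcha-4
  tf-aocr-lcsd-5:
    # Serves the 5-character model; host port 9001 maps to TensorFlow Serving's REST port 8501.
    image: tensorflow/serving
    environment:
      - MODEL_NAME=lcsd-captcha-5
    ports:
      - "9001:8501"
    volumes:
      - ./models/lcsd-captcha-5:/models/lcsd-captcha-5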

Usage

Captcha Solving Prediction API

  1. Download the image from the LCSD Online Booking system.
  2. Dilate the captcha image to make the characters more distinguishable.
  3. Convert the captcha image to grayscale.
  4. Crop the captcha image to 79x32 pixels for a 4-character captcha, or 92x32 pixels for a 5-character captcha.
  5. Post the captcha image to the TensorFlow Serving prediction API:

For a 4-character captcha:
http://localhost:9000/v1/models/lcsd-captcha-4:predict

For a 5-character captcha:
http://localhost:9001/v1/models/lcsd-captcha-5:predict

For a 4-character captcha, from another container on the same Docker Compose network:
http://tf-aocr-lcsd-4:8501/v1/models/lcsd-captcha-4:predict

For a 5-character captcha, from another container on the same Docker Compose network:
http://tf-aocr-lcsd-5:8501/v1/models/lcsd-captcha-5:predict
  6. Get the prediction result from the response (see the Python sketch below).
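The sketch below illustrates steps 2 to 6 in Python using OpenCV and the requests library. The dilation kernel, the use of resize to reach the target dimensions, the signature name, and the input/output tensor names ("input", "outputs") are assumptions based on typical attention-ocr exports and may need to be adjusted to match the models served here.

import base64

import cv2
import numpy as np
import requests

# Endpoint for 4-character captchas; use port 9001 and lcsd-captcha-5 for 5-character ones.
ENDPOINT = "http://localhost:9000/v1/models/lcsd-captcha-4:predict"


def solve_captcha(path, endpoint=ENDPOINT, size=(79, 32)):
    image = cv2.imread(path)

    # Step 2: dilate so the characters are more distinguishable (kernel size is a guess).
    image = cv2.dilate(image, np.ones((2, 2), np.uint8), iterations=1)

    # Step 3: convert to grayscale.
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Step 4: bring the image to 79x32 (pass size=(92, 32) for 5-character captchas).
    image = cv2.resize(image, size)

    # Step 5: post the PNG-encoded image bytes to TensorFlow Serving.
    _, png = cv2.imencode(".png", image)
    payload = {
        "signature_name": "serving_default",  # assumed signature name
        "inputs": {"input": {"b64": base64.b64encode(png.tobytes()).decode()}},
    }
    response = requests.post(endpoint, json=payload, timeout=10)
    response.raise_for_status()

    # Step 6: the prediction is under "outputs"; the exact key depends on how the model was exported.
    return response.json()["outputs"]

Calling solve_captcha("captcha.png") should then return the served model's prediction for the downloaded image.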

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a pull request

Acknowledgements

This project is based on a model by Qi Guo and Yuntian Deng.
You can find the original model in the da03/Attention-OCR repository.
The TensorFlow version of the model is available in the @emedvedev/attention-ocr repository.

References