This recipe shows you how to use the Cloud Vision API together with the Google Translate API, using Cloud Pub/Sub as a message bus. Where applicable, replace [PROJECT-ID] with your Cloud Platform project ID.
- An image containing text in any language is uploaded to Cloud Storage
- A Cloud Function is triggered, which uses the Vision API to extract the text and the Translate API to detect its language
- For each target language except the language of the detected text, a message is published to the translate topic
- For the target language that matches the detected text, translation is bypassed and a message is published directly to the save topic
- A Cloud Function is triggered, uses the Translate API to translate the message, and publishes the translation to the save topic
- A Cloud Function is triggered and saves the text to Cloud Storage
- The translated text from the original source image is downloaded
- Follow the Cloud Functions quickstart guide to set up Cloud Functions for your project

- Enable the Vision API and the Translate API

- Clone this repository

        cd ~/
        git clone https://github.com/jasonpolites/gcf-recipes.git
        cd gcf-recipes
- Create a file called `translate_apikey.json` and copy the Translate API key into this file as a String

        echo "\"[YOUR API KEY]\"" > ocr/app/translate_apikey.json
- Run the setup for the ocr sample:

        npm install
        node setup install ocr [PROJECT-ID]
- Upload a sample image

        gsutil cp ocr/samples/sample_ch.jpg gs://[PROJECT-ID]-gcf-samples-ocr-in/
- Watch the logs to make sure the executions have completed

        gcloud alpha functions get-logs --limit 100
- Pull the extracted text from the bucket and pipe it to standard out

        gsutil cat gs://[PROJECT-ID]-gcf-samples-ocr-out/sample_ch_to_en.txt
This recipe comes with a suite of unit tests. To run the tests locally, just use `npm test`:

    npm install
    npm test

The tests will also produce code coverage reports, written to the `/coverage` directory. After running the tests, you can view coverage with:

    open coverage/lcov-report/index.html