
status: inactive

This repo has been deprecated. All new work can be found in the corresponding language's repo:

C# / .NET | Go | Java | Node.js | PHP | Python | Ruby

Note: The Android and iOS samples haven't been moved to the main Android and iOS sample repos yet.


Google Cloud Vision API examples

This repo contains some Google Cloud Vision API examples.

The samples are organized by language and mobile platform.

Language Examples

Landmark Detection Using Google Cloud Storage

This sample identifies a landmark within an image stored on Google Cloud Storage.
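For reference, here is a minimal sketch of that pattern using the google-cloud-vision Python client. The bucket and object names are placeholders, and it assumes Application Default Credentials are already configured.

```python
# Minimal sketch: landmark detection on an image already in Cloud Storage.
# The gs:// URI below is a placeholder; substitute your own bucket/object.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Point the API at the Cloud Storage object instead of uploading bytes.
image = vision.Image(
    source=vision.ImageSource(gcs_image_uri="gs://my-bucket/eiffel-tower.jpg")
)

response = client.landmark_detection(image=image)
for landmark in response.landmark_annotations:
    print(landmark.description, landmark.score)
```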

Face Detection

See the face detection tutorial in the docs.
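As a rough illustration of what the tutorial covers, the snippet below runs face detection on a local image with the Python client; the filename is a placeholder.

```python
# Minimal sketch: face detection on a local image file (placeholder name).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("face-demo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Each annotation includes a bounding polygon and likelihood scores.
    print(face.bounding_poly, face.joy_likelihood)
```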

Label Detection

See the label detection tutorial in the docs.
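The equivalent call for labels looks like this; again a hedged Python sketch with a placeholder filename, not the tutorial's exact code.

```python
# Minimal sketch: label detection on a local image file (placeholder name).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("dog.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```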

Label Tagging Using Kubernetes

Awwvision is a Kubernetes and Cloud Vision API sample that uses the Vision API to classify (label) images from Reddit's /r/aww subreddit, and display the labelled results in a web application.
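Awwvision's worker step boils down to labeling an image and recording the result for the web frontend. The sketch below is an illustrative Python version of that step, not the sample's actual code; the Redis host and key layout are hypothetical.

```python
# Illustrative worker step in the spirit of Awwvision: label an image by
# URL and record it in Redis. Host and key names here are hypothetical.
import redis
from google.cloud import vision

client = vision.ImageAnnotatorClient()
store = redis.Redis(host="localhost", port=6379)

def classify(image_url: str) -> None:
    # The Vision API can fetch a publicly accessible URL itself.
    image = vision.Image(source=vision.ImageSource(image_uri=image_url))
    response = client.label_detection(image=image)
    if response.label_annotations:
        top = response.label_annotations[0]
        # Group image URLs by their top label so a webapp can display them.
        store.sadd(f"label:{top.description.lower()}", image_url)
```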

Text Detection Using the Vision API

This sample uses TEXT_DETECTION Vision API requests to build an inverted index from the stemmed words found in the images, and stores that index in a Redis database. The resulting index can be queried to find images that match a given set of words, and to list text that was found in each matching image.

For stopword removal and stemming, the Python example uses the nltk (Natural Language Toolkit) library; the Java example uses the OpenNLP library.
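The core of that indexing step might look like the following Python sketch. It is not the sample's actual code: the Redis key layout and the token cleanup are simplified, the GCS URI is a placeholder, and it assumes nltk's stopword corpus has been fetched with `nltk.download("stopwords")`.

```python
# Illustrative indexing step: extract text from an image, stem the words,
# and build an inverted index (word -> images) in Redis.
import redis
from google.cloud import vision
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

client = vision.ImageAnnotatorClient()
store = redis.Redis(host="localhost", port=6379)
stemmer = PorterStemmer()
stops = set(stopwords.words("english"))

def index_image(gcs_uri: str) -> None:
    image = vision.Image(source=vision.ImageSource(gcs_image_uri=gcs_uri))
    response = client.text_detection(image=image)
    if not response.text_annotations:
        return
    # The first annotation carries the full text found in the image.
    text = response.text_annotations[0].description
    for raw in text.split():
        word = raw.strip(".,;:!?\"'()").lower()
        if word and word not in stops:
            # Inverted index: stemmed word -> set of images containing it.
            store.sadd(f"word:{stemmer.stem(word)}", gcs_uri)
    # Keep the raw text so queries can show what was found in each match.
    store.set(f"text:{gcs_uri}", text)
```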

Mobile Platform Examples

Image Detection Using Android Device Photos

This simple single-activity sample shows you how to make a call to the Cloud Vision API with an image picked from your device's gallery.

Image Detection Using iOS Device Photos

The Swift and Objective-C versions of this app use the Vision API to run label and face detection on an image from the device's photo library. The resulting labels and face metadata from the API response are displayed in the UI.

Check out the Swift or Objective-C READMEs for specific getting started instructions.