
Restructure README to align with documentation (#153)
* Restructure README to align with documentation

* Moving things around

* Link to inferences / getting started docs

* Separate notes
pappacena authored Oct 26, 2023
1 parent de3c395 commit f3cf02b
Showing 1 changed file (README.md) with 68 additions and 67 deletions.
@@ -11,100 +11,101 @@
</p>

# LandingLens Python SDK
The LandingLens Python SDK contains the LandingLens development library and examples that show how to integrate your app with LandingLens in a variety of scenarios. The examples cover different model types, image acquisition sources, and post-processing techniques.


## Documentation

- [Landing AI Python Library Quick Start Guide](https://landing-ai.github.io/landingai-python/)
- [Landing AI Python Library API Reference](https://landing-ai.github.io/landingai-python/api/common/)
- [Landing AI Python Library Changelog](https://landing-ai.github.io/landingai-python/changelog/)
- [Landing AI Support Center](https://support.landing.ai/)
- [LandingLens Walk-Through Video](https://www.youtube.com/watch?v=779kvo2dxb4)


## Quick Start

### Install
First, install the Landing AI Python library:

```bash
pip install landingai
```


### Acquire Your First Images

After installing the Landing AI Python library, you can start acquiring images from one of many image sources.

For example, from a single image file:

```py
from landingai.pipeline.frameset import Frame

frame = Frame.from_image("/path/to/your/image.jpg")
frame.resize(width=512, height=512)
frame.save_image("/tmp/resized-image.png")
```
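When only one target dimension is given, the other can be derived from the image's aspect ratio. Here is a minimal sketch of that arithmetic; `fit_size` is a hypothetical helper for illustration, not part of the `landingai` API:

```python
from typing import Optional, Tuple

# Hypothetical helper (not part of the landingai API): compute a target
# size, deriving any missing dimension from the original aspect ratio.
def fit_size(orig_w: int, orig_h: int,
             width: Optional[int] = None,
             height: Optional[int] = None) -> Tuple[int, int]:
    if width is not None and height is not None:
        return width, height              # both given: use them as-is
    if width is not None:
        return width, round(orig_h * width / orig_w)
    if height is not None:
        return round(orig_w * height / orig_h), height
    return orig_w, orig_h                 # neither given: keep original size

# A 1920x1080 frame fit to width=512 keeps the 16:9 ratio:
print(fit_size(1920, 1080, width=512))  # -> (512, 288)
```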

You can also extract frames from your webcam. For example:

```py
from landingai.pipeline.image_source import Webcam

with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512, height=512)
        frame.save_image("/tmp/webcam-image.png")
```
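Note that `fps=0.5` means at most one frame every two seconds. The pacing idea can be sketched in plain Python; this is illustrative only (the real `Webcam` class handles pacing internally), and the injectable `clock`/`sleep` parameters exist just to make the sketch deterministic to test:

```python
import time
from typing import Callable, Iterable, Iterator, TypeVar

T = TypeVar("T")

# Illustrative rate limiter: yield items at most `fps` times per second.
def throttle(frames: Iterable[T], fps: float,
             clock: Callable[[], float] = time.monotonic,
             sleep: Callable[[float], None] = time.sleep) -> Iterator[T]:
    interval = 1.0 / fps          # fps=0.5 -> one frame every 2 seconds
    next_time = clock()
    for frame in frames:
        delay = next_time - clock()
        if delay > 0:
            sleep(delay)          # wait until the next capture slot opens
        next_time = clock() + interval
        yield frame
```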
To learn how to acquire images from more sources, go to [Image Acquisition](https://landing-ai.github.io/landingai-python/image-acquisition/image-acquisition/).


### Run Inference

If you have deployed a computer vision model in LandingLens, you can use this library to send images to that model for inference.

For example, let's say we've created and deployed a model in LandingLens that detects coffee mugs. Now, we'll use the code below to extract images (frames) from a webcam and run inference on those images.

> [!NOTE]
> If you don't have a LandingLens account, create one [here](https://app.landing.ai/). You will need to get an "endpoint ID" and "API key" from LandingLens in order to run inferences. Check out our [Running Inferences / Getting Started](https://landing-ai.github.io/landingai-python/inferences/getting-started/) guide.

> [!NOTE]
> Learn how to use LandingLens from our [Support Center](https://support.landing.ai/landinglens/en) and [Video Tutorial Library](https://support.landing.ai/docs/landinglens-workflow-2). Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).

```py
from landingai.pipeline.image_source import Webcam
from landingai.predict import Predictor

predictor = Predictor(
    endpoint_id="abcdef01-abcd-abcd-abcd-01234567890",
    api_key="land_sk_xxxxxx",
)
with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512)
        frame.run_predict(predictor=predictor)
        frame.overlay_predictions()
        if "coffee-mug" in frame.predictions:
            frame.save_image("/tmp/latest-webcam-image.png", include_predictions=True)
```
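The `"coffee-mug" in frame.predictions` check above filters frames by detected label. That kind of label filtering can be sketched with plain data classes; `Prediction`, `label_name`, and `score` here are illustrative names under assumed semantics, not the exact `landingai` types:

```python
from dataclasses import dataclass
from typing import List

# Illustrative stand-in for a detection result (not the landingai class).
@dataclass
class Prediction:
    label_name: str
    score: float

# Return True if any prediction matches the label at or above min_score.
def contains_label(predictions: List[Prediction], label: str,
                   min_score: float = 0.0) -> bool:
    return any(p.label_name == label and p.score >= min_score
               for p in predictions)

preds = [Prediction("coffee-mug", 0.92), Prediction("spoon", 0.41)]
print(contains_label(preds, "coffee-mug"))            # True
print(contains_label(preds, "spoon", min_score=0.5))  # False
```

Raising `min_score` is a common way to ignore low-confidence detections before saving or alerting.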


## Examples

We've provided some examples in Jupyter Notebooks to focus on ease of use, and some examples in Python apps to provide a more robust and complete experience.

<!-- Generated using https://www.tablesgenerator.com/markdown_tables -->

| Example | Description | Type |
|---|---|---|
| [Poker Card Suit Identification](https://github.com/landing-ai/landingai-python/blob/main/examples/webcam-collab-notebook/webcam-collab-notebook.ipynb) | This notebook shows how to use an object detection model from LandingLens to detect suits on playing cards. A webcam is used to take photos of playing cards. | Jupyter Notebook [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/landing-ai/landingai-python/blob/main/examples/webcam-collab-notebook/webcam-collab-notebook.ipynb)|
| [Door Monitoring for Home Automation](https://github.com/landing-ai/landingai-python/blob/main/examples/rtsp-capture-notebook/rtsp-capture.ipynb) | This notebook shows how to use an object detection model from LandingLens to detect whether a door is open or closed. An RTSP camera is used to acquire images. | Jupyter Notebook [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/landing-ai/landingai-python/blob/main/examples/rtsp-capture-notebook/rtsp-capture.ipynb) |
| [Satellite Images and Post-Processing](https://github.com/landing-ai/landingai-python/tree/main/examples/post-processings/farmland-coverage/farmland-coverage.ipynb) | This notebook shows how to use a Visual Prompting model from LandingLens to identify different objects in satellite images. The notebook includes post-processing scripts that calculate the percentage of ground cover that each object takes up. | Jupyter Notebook [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/landing-ai/landingai-python/blob/main/examples/post-processings/farmland-coverage/farmland-coverage.ipynb) |
| [License Plate Detection and Recognition](https://github.com/landing-ai/landingai-python/tree/main/examples/license-plate-ocr-notebook/license_plate_ocr.ipynb) | This notebook shows how to extract frames from a video file and use an object detection model and OCR from LandingLens to identify and recognize different license plates. | Jupyter Notebook [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/landing-ai/landingai-python/blob/main/examples/license-plate-ocr-notebook/license_plate_ocr.ipynb) |
| [Streaming Video](https://github.com/landing-ai/landingai-python/tree/main/examples/capture-service) | This application shows how to continuously run inference on images extracted from a streaming RTSP video camera feed. | Python application |


## Run Examples Locally

All the examples in this repo can be run locally.
