
Commit 08baaf4 (parent: deb93c6)

Run code formatting

Fix isort generated folder

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

File tree: 6 files changed, +53 / −19 lines


.isort.cfg (1 addition, 1 deletion)

@@ -7,4 +7,4 @@ import_heading_thirdparty=Third Party
 import_heading_firstparty=First Party
 import_heading_localfolder=Local
 known_firstparty=alog,aconfig,caikit,import_tracker
-known_localfolder=caikit_computer_vision,tests
+known_localfolder=caikit_computer_vision,tests,generated
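The effect of this one-line change is that isort now sorts `generated` into the local-folder group (under the `# Local` heading) rather than treating it as third-party, which is what the import-heading changes in the other files reflect. A minimal sketch of that classification logic, using the `known_*` values from this config; the `STDLIB` set and the `classify` function are illustrative stand-ins, not isort's real resolver:

```python
# Toy version of isort's module classification, mirroring this repo's
# .isort.cfg. The known_* sets come from the diff above; STDLIB is a
# tiny stand-in for the real standard-library list.
KNOWN_FIRSTPARTY = {"alog", "aconfig", "caikit", "import_tracker"}
KNOWN_LOCALFOLDER = {"caikit_computer_vision", "tests", "generated"}
STDLIB = {"os", "io", "shutil"}

def classify(module: str) -> str:
    top = module.split(".")[0]  # classify by top-level package name
    if top in STDLIB:
        return "Standard"
    if top in KNOWN_FIRSTPARTY:
        return "First Party"
    if top in KNOWN_LOCALFOLDER:
        return "Local"
    return "Third Party"

print(classify("generated.ccv"))  # Local (Third Party before this commit)
print(classify("PIL"))            # Third Party
```

With `generated` absent from `known_localfolder`, the fallback branch fires and the imports get filed (and commented) as `# Third Party`, which is exactly the mislabeling the rest of this commit cleans up.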

examples/runtime/run_train_and_inference.py (3 additions, 3 deletions)

@@ -30,7 +30,7 @@
 
 # pylint: disable=no-name-in-module,import-error
 try:
-    # Third Party
+    # Local
     from generated import (
         computervisionservice_pb2_grpc,
         computervisiontrainingservice_pb2_grpc,
@@ -41,7 +41,7 @@
 # The location of these imported message types depends on the version of Caikit
 # that we are using.
 try:
-    # Third Party
+    # Local
     from generated.caikit_data_model.caikit_computer_vision import (
         flatchannel_pb2,
         flatimage_pb2,
@@ -58,7 +58,7 @@
     IS_LEGACY = False
 except ModuleNotFoundError:
     # older versions of Caikit / py to proto create a flat proto structure
-    # Third Party
+    # Local
    from generated import objectdetectiontaskrequest_pb2
    from generated import (
        objectdetectiontasktransformersobjectdetectortrainrequest_pb2 as odt_request_pb2,

examples/text_to_image/README.md (28 additions, 4 deletions)

@@ -1,82 +1,101 @@
-# Text To Image (SDXL)
+# Getting Started With Text To Image
+
 This directory provides guidance for running text to image inference, along with a few useful scripts for getting started.
 
 ## Task and Module Overview
+
 The text to image task has only one required parameter, the input text, and produces a `caikit_computer_vision.data_model.CaptionedImage` in response, which wraps the provided input text as well as the generated image.
 
 Currently there are two modules for text to image:
+
 - `caikit_computer_vision.modules.text_to_image.TTIStub` - A simple stub which produces a blue image of the requested height and width at inference. This module is purely used for testing purposes.
 
 - `caikit_computer_vision.modules.text_to_image.SDXL` - A module implementing text to image via SDXL.
 
 This document will help you get started with both at the library & runtime level, ending with a sample gRPC client that can be used to hit models running in a Caikit runtime container.
 
 ## Building the Environment
+
 The easiest way to get started is to build a virtual environment in the root directory of this repo. Make sure the root of this project is on the `PYTHONPATH` so that `caikit_computer_vision` is findable.
 
 To install the project:
+
 ```bash
 python3 -m venv venv
 source venv/bin/activate
 pip install .
 ```
 
 Note that if you prefer running in Docker, you can build an image as you normally would and mount things into a running container:
+
 ```bash
 docker build -t caikit-computer-vision:latest .
 ```
 
 ## Creating the Models
+
 For the remainder of this demo, commands are intended to be run from this directory. First, we will be creating our models & runtime config in a directory named `caikit`, which is convenient for running locally or mounting into a container.
 
 Copy the runtime config from the root of this project into the `caikit` directory:
+
 ```bash
 mkdir -p caikit/models
 cp ../../runtime_config.yaml caikit/runtime_config.yaml
 ```
 
 Next, create your models:
+
 ```bash
 python create_tti_models.py
 ```
 
 This will create two models:
+
 1. The stub model, at `caikit/models/stub_model`
 2. The SDXL turbo model, at `caikit/models/sdxl_turbo_model`
 
 Note that the names of these directories will be their model IDs in caikit runtime.
 
 ## Running Local Inference / API Overview
+
 The text to image API is simple.
 
 ### Stub Module
+
 For the stub module, we take an input prompt, a height, and a width, and create a blue image of the specified height and width.
+
 ```python
 run(
-    inputs: str,
-    height: int,
+    inputs: str,
+    height: int,
     width: int
 ) -> CaptionedImage:
 ```
 
 Example using the stub model created above:
+
 ```python
 >>> import caikit_computer_vision, caikit
 >>> stub_model = caikit.load("caikit/models/stub_model")
>>> res = stub_model.run("This is a text", height=512, width=512)
 ```
 
 The resulting object holds the provided input text under `.caption`:
+
 ```python
 >>> res.caption
 'This is a text'
 ```
+
 And the image bytes, stored as PNG, under `.output.image_data`:
+
 ```python
 >>> res.output.image_data
 b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02\x00\x00\x00\x02\x00 ...
 ```
+
 Note that the `output` object is a `Caikit` image backed by PIL. If you need a handle to it, you can call `as_pil()` to get a handle to the PIL object as shown below.
+
 ```
 >>> pil_im = res.output.as_pil()
 >>> type(pil_im)
@@ -86,7 +105,9 @@ Note that the `output` object is a `Caikit` image backed by PIL. If you need a h
 Grabbing a handle to the PIL image and then calling `.save()` on the result is the easiest way to save the image to disk.
 
 ### SDXL Module
+
 The SDXL module's signature is similar to the stub's, with some additional options.
+
 ```python
 run(
     inputs: str,
@@ -109,16 +130,18 @@ The `image_format` arg follows the same conventions as PIL and controls the form
 >>> res = stub_model.run("A golden retriever puppy sitting in a grassy field", height=512, width=512, num_steps=2, image_format="jpeg")
 ```
 
-
 ## Inference Through Runtime
+
 To write a client, you'll need to export the proto files to compile. To do so, run `python export_protos.py`; this will use the runtime config you previously copied to create a new directory called `protos`, containing the exported data model / task protos from caikit runtime.
 
 Then, to compile them, you can do something like the following; note that you may need to `pip install grpcio-tools` if it's not present in your environment, since it's not a dependency of `caikit_computer_vision`:
+
 ```bash
 python -m grpc_tools.protoc -I protos --python_out=generated --grpc_python_out=generated protos/*.proto
 ```
 
 In general, you will want to run Caikit Runtime in a Docker container. The easiest way to do this is to mount the `caikit` directory with your models into the container as shown below.
+
 ```bash
 docker run -e CONFIG_FILES=/caikit/runtime_config.yaml \
     -v $PWD/caikit/:/caikit \
@@ -129,5 +152,6 @@ docker run -e CONFIG_FILES=/caikit/runtime_config.yaml \
 Then, you can hit it with a gRPC client using your compiled protobufs. A full example of inference via gRPC client calling both models can be found in `sample_client.py`.
 
 Running `python sample_client.py` should produce two images:
+
 - `stub_response_image.png` - blue image generated from the stub module
 - `turbo_response_image.png` - picture of a golden retriever in a field generated by SDXL turbo
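As a side note on the README's `.output.image_data` example: the leading bytes `b'\x89PNG\r\n\x1a\n'` are the standard PNG file signature, so the returned bytes can be sanity-checked with the stdlib alone. A small sketch; the `image_data` value here is fabricated stand-in bytes, not real model output:

```python
# The eight-byte PNG signature, per the PNG specification; the README's
# example output starts with exactly these bytes.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(data: bytes) -> bool:
    """Return True if the byte string begins with the PNG signature."""
    return data.startswith(PNG_SIGNATURE)

# Stand-in for res.output.image_data (real bytes come from a caikit model).
image_data = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR" + b"\x00" * 16
print(looks_like_png(image_data))  # True
```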

examples/text_to_image/create_tti_models.py (12 additions, 2 deletions)

@@ -1,8 +1,11 @@
 """Creates and exports SDXL Turbo as a caikit module.
 """
-from caikit_computer_vision.modules.text_to_image import TTIStub
+# Standard
 import os
 
+# Local
+from caikit_computer_vision.modules.text_to_image import TTIStub
+
 SCRIPT_DIR = os.path.dirname(__file__)
 MODELS_DIR = os.path.join(SCRIPT_DIR, "caikit", "models")
 STUB_MODEL_PATH = os.path.join(MODELS_DIR, "stub_model")
@@ -15,13 +18,17 @@
 model.save(STUB_MODEL_PATH)
 
 
+# Third Party
 ### Make the model for SDXL turbo
 import diffusers
+
+# Local
 from caikit_computer_vision.modules.text_to_image import SDXL
 
 ### Download the model for SDXL turbo...
 sdxl_model = SDXL.bootstrap("stabilityai/sdxl-turbo")
 sdxl_model.save(SDXL_TURBO_MODEL_PATH)
+# Standard
 # There appears to be a bug in the way that sharded safetensors are reloaded into the
 # pipeline from diffusers, and there ALSO appears to be a bug where passing the max
 # safetensor shard size to diffusers on a pipeline doesn't work as expected.
@@ -30,11 +37,14 @@
 # the sharded u-net, and reexport it as one file. By default the
 # max shard size is 10GB, and the turbo unet is barely larger than 10.
 from shutil import rmtree
+
 unet_path = os.path.join(SDXL_TURBO_MODEL_PATH, "sdxl_model", "unet")
 try:
     diffusers.UNet2DConditionModel.from_pretrained(unet_path)
 except RuntimeError:
-    print("Unable to reload turbo u-net due to sharding issues; reexporting as single file")
+    print(
+        "Unable to reload turbo u-net due to sharding issues; reexporting as single file"
+    )
     rmtree(unet_path)
     sdxl_model.pipeline.unet.save_pretrained(unet_path, max_shard_size="12GB")

examples/text_to_image/export_protos.py (4 additions, 4 deletions)

@@ -7,9 +7,9 @@
 from caikit.runtime.dump_services import dump_grpc_services
 import caikit
 
-SCRIPT_DIR=os.path.dirname(__file__)
-PROTO_EXPORT_DIR=os.path.join(SCRIPT_DIR, "protos")
-RUNTIME_CONFIG_PATH=os.path.join(SCRIPT_DIR, "caikit", "runtime_config.yaml")
+SCRIPT_DIR = os.path.dirname(__file__)
+PROTO_EXPORT_DIR = os.path.join(SCRIPT_DIR, "protos")
+RUNTIME_CONFIG_PATH = os.path.join(SCRIPT_DIR, "caikit", "runtime_config.yaml")
 
 if os.path.isdir(PROTO_EXPORT_DIR):
     rmtree(PROTO_EXPORT_DIR)
@@ -27,4 +27,4 @@
     k: v for k, v in grpc_service_dumper_kwargs.items() if k in expected_grpc_params
 }
 dump_grpc_services(**grpc_service_dumper_kwargs)
-# NOTE: If you need an http client for inference, use `dump_http_services` from caikit instead.
+# NOTE: If you need an http client for inference, use `dump_http_services` from caikit instead.
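The hunk above preserves a compatibility pattern worth noting: the script filters its kwargs down to the parameters the target function actually accepts, so it keeps working as the callee's signature changes across Caikit versions. A sketch of that pattern; `dump` here is a stand-in for `caikit.runtime.dump_services.dump_grpc_services`, and its parameter names are illustrative, not Caikit's real signature:

```python
# Pass only the kwargs the callee accepts, discovered via introspection.
import inspect

def dump(output_dir, write_modules_file=False):
    # Stand-in for dump_grpc_services; just echoes what it received.
    return (output_dir, write_modules_file)

kwargs = {"output_dir": "protos", "write_modules_file": True, "legacy_flag": 1}
expected = set(inspect.signature(dump).parameters)
filtered = {k: v for k, v in kwargs.items() if k in expected}
print(dump(**filtered))  # ('protos', True) -- unknown 'legacy_flag' dropped
```

This is the same dict comprehension the script uses (`if k in expected_grpc_params`), just with the expected-parameter set computed from the live signature.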

examples/text_to_image/sample_client.py (5 additions, 5 deletions)

@@ -1,13 +1,13 @@
+# Standard
 import io
 
-from generated import (
-    computervisionservice_pb2_grpc,
-)
-from generated.ccv import texttoimagetaskrequest_pb2
-
+# Third Party
 from PIL import Image
 import grpc
 
+# Local
+from generated import computervisionservice_pb2_grpc
+from generated.ccv import texttoimagetaskrequest_pb2
 
 # Setup the client
 port = 8085
