# Getting Started With Text To Image

This directory provides guidance for running text to image inference, as well as a few useful scripts for getting started.

## Task and Module Overview
The text to image task has only one required parameter, the input text, and produces a `caikit_computer_vision.data_model.CaptionedImage` in response, which wraps the provided input text, as well as the generated image.
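
To make the response shape concrete, here is a minimal mock of the described fields. This is illustrative only; the real class is `caikit_computer_vision.data_model.CaptionedImage`, and its actual attribute names may differ:

```python
from dataclasses import dataclass

# Illustrative mock only -- NOT the real CaptionedImage class.
# Field names here are assumptions; consult the caikit_computer_vision
# data model for the actual attributes.
@dataclass
class CaptionedImageSketch:
    caption: str   # echoes the input text back to the caller
    output: bytes  # stands in for the generated image payload

result = CaptionedImageSketch(caption="A blue square", output=b"")
print(result.caption)  # -> A blue square
```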

Currently there are two modules for text to image:

- `caikit_computer_vision.modules.text_to_image.TTIStub` - A simple stub which produces a blue image of the requested height and width at inference. This module is used purely for testing.
- `caikit_computer_vision.modules.text_to_image.SDXL` - A module implementing text to image via SDXL.
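
As a rough sketch of the stub's described behavior (assuming a solid blue RGB image of the requested size; the module's actual internals are not shown in this document):

```python
# Conceptual sketch only: the stub is described as producing a blue
# image of the requested height and width. Here we build the raw RGB
# pixel buffer; the real module presumably wraps this in an image type.
def make_blue_rgb(height: int, width: int) -> bytes:
    blue_pixel = bytes([0, 0, 255])  # one RGB pixel: full blue
    return blue_pixel * (height * width)

buf = make_blue_rgb(2, 3)
print(len(buf))  # 2 * 3 pixels * 3 bytes per pixel = 18
```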

This document will help you get started with both at the library & runtime level, ending with a sample gRPC client that can be used to hit models running in a Caikit runtime container.

## Building the Environment
The easiest way to get started is to build a virtual environment in the root directory of this repo. Make sure the root of this project is on the `PYTHONPATH` so that `caikit_computer_vision` is findable.
To install the project:
```bash
python3 -m venv venv
source venv/bin/activate
pip install .
```
Note that if you prefer running in Docker, you can build an image as you normally would, and mount things into a running container:
```bash
docker build -t caikit-computer-vision:latest .
```
## Creating the Models
For the remainder of this demo, commands are intended to be run from this directory. First, we will be creating our models & runtime config in a directory named `caikit`, which is convenient for running locally or mounting into a container.
Copy the runtime config from the root of this project into the `caikit` directory.
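
The copy step above can be sketched with Python's standard library. The paths are assumptions based on the described layout (a `runtime_config.yaml` at the repo root, with commands run from this directory):

```python
import shutil
from pathlib import Path

def prepare_model_dir(repo_root: Path, workdir: Path) -> Path:
    """Create <workdir>/caikit and copy the runtime config into it.

    Paths are illustrative assumptions; adjust them to your checkout.
    """
    caikit_dir = workdir / "caikit"
    caikit_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(repo_root / "runtime_config.yaml",
                caikit_dir / "runtime_config.yaml")
    return caikit_dir
```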

Note that the `output` object is a `Caikit` image backed by PIL. If you need a handle to the underlying PIL object, you can call `as_pil()` as shown below.
```
>>> pil_im = res.output.as_pil()
>>> type(pil_im)
```

Grabbing a handle to the PIL image and then `.save()` on the result is the easiest way to save the image to disk.
### SDXL Module

The SDXL module's signature is similar to the stub's, with some additional options.

```python
run(
92
113
inputs: str,
@@ -109,16 +130,18 @@ The `image_format` arg follows the same conventions as PIL and controls the form
109
130
>>> res = stub_model.run("A golden retriever puppy sitting in a grassy field", height=512, width=512, num_steps=2, image_format="jpeg")
```
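
Since `image_format` follows PIL's naming conventions, a small client-side check can catch typos before a round-trip to the model. The helper below is purely illustrative (it is not part of `caikit_computer_vision`), and the accepted set is an assumption; PIL itself supports many more formats:

```python
# Hypothetical helper -- not part of caikit_computer_vision.
# The format set is an assumption for illustration only.
_KNOWN_FORMATS = {"png", "jpeg", "bmp", "gif", "tiff", "webp"}

def check_image_format(fmt: str) -> str:
    """Normalize a PIL-style format name, raising on unknown values."""
    normalized = fmt.lower().strip()
    if normalized == "jpg":  # common alias; PIL spells it "jpeg"
        normalized = "jpeg"
    if normalized not in _KNOWN_FORMATS:
        raise ValueError(f"Unsupported image format: {fmt!r}")
    return normalized

print(check_image_format("JPG"))  # -> jpeg
```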
## Inference Through Runtime
To write a client, you'll need to export the proto files to compile. To do so, run `python export_protos.py`; this will use the runtime file you had previously copied to create a new directory called `protos`, containing the exported data model / task protos from caikit runtime.

Then compile them with `grpcio-tools`; note that you may need to `pip install grpcio-tools` if it's not present in your environment, since it's not a dependency of `caikit_computer_vision`.

In general, you will want to run Caikit Runtime in a Docker container. The easiest way to do this is to mount the `caikit` directory with your models into the container as shown below.
```bash
docker run -e CONFIG_FILES=/caikit/runtime_config.yaml \
    -v $PWD/caikit/:/caikit \
    ...
```

Then, you can hit it with a gRPC client using your compiled protobufs. A full example of inference via gRPC client calling both models can be found in `sample_client.py`.
130
153
131
154
Running `python sample_client.py` should produce two images.

- `stub_response_image.png` - blue image generated from the stub module
- `turbo_response_image.png` - picture of a golden retriever in a field generated by SDXL turbo