Commit 610d39a

feat(server): add web server and docker container
1 parent ec7ec24 commit 610d39a

6 files changed: +168 -0 lines changed

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@

# Adapted from https://docs.github.com/en/actions/tutorials/publishing-packages/publishing-docker-images
name: Publish a Docker image

# Configures this workflow to run every time a new release is created in the repository.
on:
  release:
    types: [ created ]

# Defines two custom environment variables for the workflow. These are used for the Container registry domain, and a name for the Docker image that this workflow builds.
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

# There is a single job in this workflow. It's configured to run on the latest available version of Ubuntu.
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    # Sets the permissions granted to the `GITHUB_TOKEN` for the actions in this job.
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      # Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      # This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
      # It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see [Usage](https://github.com/docker/build-push-action#usage) in the README of the `docker/build-push-action` repository.
      # It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      # This step generates an artifact attestation for the image, which is an unforgeable statement about where and how it was built. It increases supply chain security for people who consume the image. For more information, see [Using artifact attestations to establish provenance for builds](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds).
      - name: Generate artifact attestation
        uses: actions/attest-build-provenance@v2
        with:
          subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          subject-digest: ${{ steps.push.outputs.digest }}
          push-to-registry: true

Dockerfile

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
FROM python:3.12-slim

# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True

# Set up local workdir and dependencies
WORKDIR /app

# Install python dependencies.
ADD ./pyproject.toml ./pyproject.toml
RUN mkdir -p sign_language_segmentation/src/utils
RUN touch README.md
RUN pip install --no-cache-dir ".[server]"

# Copy local code to the container image.
COPY ./sign_language_segmentation ./sign_language_segmentation

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 sign_language_segmentation.server:app
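
The gunicorn flags baked into the CMD above can also be expressed as a config file. The sketch below is not part of this commit; it assumes a hypothetical gunicorn.conf.py placed in the image's working directory (gunicorn loads ./gunicorn.conf.py automatically) and illustrates the comment about matching workers to the available CPU cores:

```python
# gunicorn.conf.py -- hypothetical, not part of this commit.
# Same tuning as the Dockerfile CMD, but with workers scaled to the CPU count.
import multiprocessing
import os

bind = f":{os.environ.get('PORT', '8080')}"  # same $PORT convention as the Dockerfile
workers = multiprocessing.cpu_count()        # one worker per available core
threads = 8                                  # threads per worker, as in the CMD
timeout = 0                                  # disable worker timeouts so Cloud Run handles scaling
```

With such a file copied into /app, the CMD could be reduced to `exec gunicorn sign_language_segmentation.server:app`.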

README.md

Lines changed: 14 additions & 0 deletions
@@ -18,6 +18,20 @@ To create an ELAN file with sign and sentence segments:
pose_to_segments --pose="sign.pose" --elan="sign.eaf" [--video="sign.mp4"]
```

### Web Server on Docker

```bash
docker build -t segmentation .

docker run --rm -p 9876:8080 -e PORT=8080 \
    -v $(pwd)/sign_language_segmentation/tests:/mnt/examples \
    segmentation

curl -X POST http://localhost:9876/ \
    -H "Content-Type: application/json" \
    -d '{"input": "/mnt/examples/example.pose", "output": "/mnt/examples/example.eaf"}'
```

---

## Main Idea
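
As a Python alternative to the curl call in the README section above, the same request can be made with the standard library. This is an illustrative sketch, not part of the commit; it assumes the container is already running as described and that the example paths exist under the mounted volume:

```python
# Hypothetical client-side sketch (not part of this commit): reproduces the
# README's curl request against the running container.
import json
import urllib.request

payload = {
    "input": "/mnt/examples/example.pose",   # paths as seen inside the container
    "output": "/mnt/examples/example.eaf",
}
req = urllib.request.Request(
    "http://localhost:9876/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as response:
    print(response.status, json.loads(response.read()))
```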

pyproject.toml

Lines changed: 6 additions & 0 deletions
@@ -27,6 +27,12 @@ dev = [
    "pandas"
]

server = [
    "Flask",
    "Werkzeug",
    "gunicorn",
]

[tool.yapf]
based_on_style = "google"
column_limit = 120

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
import os
import traceback
from pathlib import Path

from flask import Flask, request, abort, make_response, jsonify
from pose_format import Pose

from sign_language_segmentation.bin import segment_pose

app = Flask(__name__)


def resolve_path(uri: str):
    # Map gs:// URIs to the gcsfuse mount point, or return as-is
    return uri.replace("gs://", "/mnt/")


@app.errorhandler(Exception)
def handle_exception(e):
    print("Exception", e)
    traceback.print_exc()

    code = e.code if hasattr(e, "code") else 500
    message = str(e)
    print("HTTP exception", code, message)

    return make_response(jsonify(message=message, code=code), code)


@app.route("/", methods=['POST'])
def pose_segmentation():
    # Get request parameters
    body = request.get_json()
    for param in ['input', 'output']:
        if param not in body:
            abort(make_response(jsonify(message=f"Missing `{param}` body property"), 400))

    # Check if output file already exists
    output_file_path = Path(resolve_path(body["output"]))
    if output_file_path.exists():
        return make_response(jsonify(message="Output file already exists", path=body["output"]), 208)

    # Check if input file exists at all
    pose_file_path = Path(resolve_path(body["input"]))
    if not pose_file_path.exists():
        raise Exception("File does not exist")

    with pose_file_path.open("rb") as f:
        pose = Pose.read(f)

    eaf, tiers = segment_pose(pose)

    output_file_path.parent.mkdir(parents=True, exist_ok=True)
    print("Saving .eaf to disk ...")
    eaf.to_file(output_file_path)

    return make_response(jsonify(
        message="Pose segmentation completed successfully",
        path=body["output"],
    ), 200)


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
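
To exercise the endpoint without Docker or gunicorn, Flask's built-in test client can drive the app directly. This is an editorial sketch, not part of the commit; it assumes the module above is importable as sign_language_segmentation.server (the target of the Dockerfile's gunicorn CMD), and the input/output paths are placeholders to replace with a real .pose file and a writable output location:

```python
# Hypothetical smoke test, not part of this commit.
from sign_language_segmentation.server import app  # assumed module path

client = app.test_client()

# Placeholder paths; resolve_path() passes non-gs:// paths through unchanged.
response = client.post("/", json={
    "input": "sign_language_segmentation/tests/example.pose",
    "output": "/tmp/example.eaf",
})
print(response.status_code, response.get_json())
```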
Binary file not shown (1.16 MB).
