
licence-plate-triton-server-ensemble

A Triton Python backend that implements pre-processing, post-processing, and other custom logic in Python. The stack used in this repository includes YOLOv8, ONNX, EasyOCR, Triton Inference Server, OpenCV (cv2), MinIO, Docker, and Kubernetes (K8s), all deployed on NVIDIA K80 GPUs with CUDA 11.4.
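To illustrate the kind of pre-processing a Python-backend step in this pipeline might perform, here is a minimal sketch that converts a raw image into the NCHW float tensor a YOLOv8 ONNX model typically expects. Function and parameter names are illustrative, not taken from this repository; a real pipeline would normally use cv2.resize with letterbox padding rather than the dependency-free nearest-neighbour resize shown here.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Convert an HxWx3 uint8 image to a (1, 3, size, size) float32 tensor.

    Nearest-neighbour resize via index arrays keeps the sketch free of
    OpenCV; values are scaled to [0, 1] and the layout changed HWC -> CHW.
    """
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size            # source row for each output row
    cols = np.arange(size) * w // size            # source col for each output col
    resized = image[rows[:, None], cols[None, :]] # (size, size, 3)
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[None, ...]                         # add batch dimension
```

In the Triton Python backend, this logic would live inside a model's `execute()` method, reading the input tensor from the request and returning the normalized tensor for the downstream YOLOv8 step.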

Inference

Start the Triton server with the model repository mounted:

```shell
docker run --gpus=all --rm --shm-size=10g \
  -p8000:8000 -p8001:8001 -p8002:8002 \
  -v $(pwd)/model_repository:/models \
  rushai/licence-plate-triton-server-ensemble:21.09-py3 \
  tritonserver --model-repository=/models
```
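An ensemble like this is wired together in Triton through a `config.pbtxt` with `platform: "ensemble"`, which routes tensors between the pre-processing, detection, and OCR steps without leaving the server. The sketch below shows the general shape of such a config; all model and tensor names are illustrative assumptions, not the actual names used in this repository's `model_repository`.

```protobuf
name: "ensemble_licence_plate"
platform: "ensemble"
max_batch_size: 1
input [ { name: "IMAGE", data_type: TYPE_UINT8, dims: [ -1, -1, 3 ] } ]
output [ { name: "PLATE_TEXT", data_type: TYPE_STRING, dims: [ -1 ] } ]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"        # Python backend: resize / normalize
      model_version: -1
      input_map { key: "RAW_IMAGE" value: "IMAGE" }
      output_map { key: "PREPROCESSED" value: "preprocessed_image" }
    },
    {
      model_name: "yolov8_onnx"       # ONNX Runtime backend: plate detection
      model_version: -1
      input_map { key: "images" value: "preprocessed_image" }
      output_map { key: "output0" value: "detections" }
    },
    {
      model_name: "postprocess_ocr"   # Python backend: crop plates, run EasyOCR
      model_version: -1
      input_map { key: "DETECTIONS" value: "detections" }
      output_map { key: "TEXT" value: "PLATE_TEXT" }
    }
  ]
}
```

Each step's `input_map`/`output_map` connects a sub-model's tensor names to the ensemble-level tensor names, so a single client request flows through all three models server-side.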


License

Apache-2.0 (see the LICENSE file).
