The main objective of this repo is to run traccc as-a-Service. Getting this working includes creating three main components:

- a shared library of `traccc`, with a standalone version containing the essential pieces of the code
- a custom backend that uses the standalone version above to launch the Triton server
- a client to send data to the server
A minimal description of how to build a working version is given below. Each subdirectory of this project contains a README with more information.
The beginnings of this work are based on a CPU version developed by Haoran Zhao; the original repo can be found here. This CPU version has been incorporated into other branches of this work, such as `odd_traccc_v0.10.0`, but is omitted here for clarity.
Simply clone the repository with

```bash
git clone --recurse-submodules git@github.com:milescb/traccc-aaS.git
```
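If you already cloned without `--recurse-submodules`, the submodules can be fetched afterwards with standard git:

```bash
git submodule update --init --recursive
```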
A Docker image built for the Triton server can be found at `docexoty/tritonserver:latest`. To run this, do

```bash
shifter --module=gpu --image=docexoty/tritonserver:latest
```
or use your favorite docker application and mount the appropriate directories.
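For plain Docker, a minimal sketch of such a command follows; the GPU flag and the mount paths are assumptions and should be adapted to your setup:

```bash
# Hypothetical example: adjust mounts to wherever your data and install live
docker run --rm -it --gpus all \
    -v /path/to/data:/data \
    -v /path/to/install:/install \
    docexoty/tritonserver:latest
```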
To run out of the box, an installation of `traccc` and the backend can be found at `/global/cfs/projectdirs/m3443/data/traccc-aaS/software/prod/ver_09152024/install`. To set up the environment, run the Docker image, then set the following environment variables:
```bash
export DATADIR=/global/cfs/projectdirs/m3443/data/traccc-aaS/data
export INSTALLDIR=/global/cfs/projectdirs/m3443/data/traccc-aaS/software/prod/ver_09152024/install
export PATH=$INSTALLDIR/bin:$PATH
export LD_LIBRARY_PATH=$INSTALLDIR/lib:$LD_LIBRARY_PATH
```
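As an optional sanity check, verify that the `tritonserver` binary is visible on the updated `PATH`:

```bash
which tritonserver
```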
Then the server can be launched with

```bash
tritonserver --model-repository=$INSTALLDIR/models
```
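Once up, the server's readiness can be checked over Triton's standard HTTP API (served on port 8000 by default); a 200 response indicates the server and its models are ready:

```bash
curl -v localhost:8000/v2/health/ready
```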
Once the server is launched, run the model via:

```bash
cd client && python TracccTritionClient.py
```
More information can be found in the `client` directory.
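Before running the client, it can be useful to confirm which models the server has loaded. Triton's standard HTTP API exposes a model repository index for this; the model names in the output are what the client must request:

```bash
curl -X POST localhost:8000/v2/repository/index
```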
To build the backend yourself, first enter the Docker container and set the environment variables as documented above. Then run
```bash
cd backend/traccc-gpu && mkdir build install && cd build
cmake -B . -S ../ \
    -DCMAKE_INSTALL_PREFIX=../install/
cmake --build . --target install -- -j20
```
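After the install step, the backend shared library should sit under the install prefix. A quick way to check (the `libtriton_*.so` naming follows Triton's backend convention; the exact library name here is an assumption):

```bash
find ../install -name 'libtriton_*.so'
```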
Then, the server can be launched as above:

```bash
tritonserver --model-repository=../../models
```