This repository contains a Docker Compose configuration for a client application that interacts with a machine learning inference service. It includes services for the main application, inference, and postprocessing.
- Playnode:
  - PlayAI node for pulling tasks and registering the node
  - Port: 3000
  - To run the service on a multi-node setup, change `HOST_NAME=localhost` (without any quotes) in the `docker-compose.yml` file
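As an illustration, the `HOST_NAME` variable would typically sit in the playnode service's `environment` section. The excerpt below is a hypothetical sketch, not the repository's actual file — check your `docker-compose.yml` for the real layout:

```yaml
# Hypothetical excerpt of docker-compose.yml — the actual file may differ.
services:
  playnode:
    ports:
      - "3000:3000"
    environment:
      # For multi-node setups, replace localhost with the host's reachable hostname or IP
      - HOST_NAME=localhost
```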
- Torchserve:
  - Inference service for running machine learning models
  - Ports: 8083, 8084, and 8085
- Postprocessing:
  - Other ML tasks
  - Port: 8080
More details about the variables and services are available in the `docker-compose.yml` file.
You can either use the installer by running the command below:
```shell
curl -sL https://playinstaller.playai.network -o playinstaller.sh && chmod +x playinstaller.sh && ./playinstaller.sh
```
Or clone the repository using the following command:
```shell
git clone https://github.com/PlayAINetwork/PlayAI-Node-Client.git
```
This project requires specific environment variables to be set up. Follow these steps:
- Create a file named `.env.schema` in the root directory of your project.
- Add the following content to the `.env.schema` file:

  ```
  NODE_WALLET_ADDRESS: string
  NODE_SIGNER_KEY: string
  NODE_TOKEN_IDS: string (list of int)
  ```
- Replace the placeholders with your actual values. `NODE_TOKEN_IDS` can take multiple values, for example `'[1,2,3]'`. For example:

  ```
  NODE_WALLET_ADDRESS='0x0000000000000000000000000000000000000000'
  NODE_SIGNER_KEY='0x00000000000000000000000000000000000000000000000000000000000'
  NODE_TOKEN_IDS='[0]'
  ```
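A quick sanity check of the file before starting the node can catch formatting mistakes early. This is a minimal sketch, assuming the file uses `KEY='value'` lines exactly as in the example above; it is not part of the repository:

```shell
# Minimal sketch (assumption: .env.schema uses KEY='value' lines as in the example above).
# Checks that the wallet address and token-id list are well formed.
ENV_FILE=".env.schema"
if grep -Eq "^NODE_WALLET_ADDRESS='0x[0-9a-fA-F]{40}'" "$ENV_FILE" 2>/dev/null &&
   grep -Eq "^NODE_TOKEN_IDS='\[[0-9]+(,[0-9]+)*\]'" "$ENV_FILE" 2>/dev/null; then
  echo "env file looks valid"
else
  echo "env file is missing or malformed"
fi
```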
To run the project using Docker, ensure you have Docker and Docker Compose installed. The commands below pull the necessary images and start the containers as defined in the `docker-compose.yml` file; check `docker-compose.yml` for any additional configuration or services.

- Run the following commands to start the services:
  ```shell
  docker-compose pull
  docker-compose up
  ```
- Access the services:
  - playnode: http://localhost:3000
  - torchserve: http://localhost:8083 and http://localhost:8084
  - postprocessing: http://localhost:8080
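Once the stack is up, a simple loop can confirm each port is reachable. This is a hedged sketch using the default ports above; the root path is an assumption, so individual services may answer on different endpoints:

```shell
# Quick reachability check for the default service ports (3000, 8083, 8084, 8080).
# Assumption: services respond on "/" — adjust paths if a service uses another endpoint.
for port in 3000 8083 8084 8080; do
  curl -sf -o /dev/null "http://localhost:$port" \
    && echo "port $port: up" \
    || echo "port $port: not responding"
done
```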