- Install the Qt app.
- Initiate additional services.
- Mock external services.
- Manage the database.
- CNC worker.
- Run tests.
Before using the app for the first time, you should run:
# Clone this project
$ git clone --recurse-submodules https://github.com/Leandro-Bertoluzzi/cnc-local-app
# 1. Access the repository
$ cd cnc-local-app
# 2. Set up your Python environment
# Option 1: If you use Conda
$ conda env create -f conda/environment-dev.yml
$ conda activate cnc-local-app-dev
# Option 2: If you use venv and pip
$ python -m venv env-dev
$ source env-dev/bin/activate
$ pip install -r requirements-dev.txt
# 3. Copy and configure the .env file (an example sketch follows below)
$ cp .env.example .env
# 4. Ask git to stop tracking configuration files
$ git update-index --assume-unchanged config.ini
Take into account that activating the virtual environment with venv and pip (step 2, option 2) is slightly different on Windows:
$ python -m venv env-dev
$ .\env-dev\Scripts\activate
$ pip install -r requirements-dev.txt
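For reference, the .env file copied in step 3 holds the app's environment configuration. A minimal sketch could look like this (SERIAL_PORT is mentioned later in this guide; the other variable names are assumptions, so check .env.example for the actual keys):
# Example .env sketch (the authoritative key list is in .env.example)
SERIAL_PORT=/dev/ttyUSB0
# Assumed names for the database and Celery broker settings:
DB_URL=postgresql://user:password@localhost:5432/cnc_db
CELERY_BROKER_URL=redis://localhost:6379/0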
Once you have installed all dependencies and created the Python environment, every time you want to start the app you must run:
# 1. Activate your Python environment
# Option 1: If you use Conda
$ conda activate cnc-local-app-dev
# Option 2: If you use venv and pip
$ source env-dev/bin/activate
# 2. Start the app with auto-reload
$ watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- python main.py
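The watchmedo command used above is provided by the watchdog package, which is presumably already included in the dev requirements; if the command is not found, you can install it with:
$ pip install "watchdog[watchmedo]"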
In case you don't have (or don't want to use) a DB and a message broker for Celery, you can start a containerized version of both, plus an adminer instance, via docker compose.
# 1. Run Docker to start the DB, adminer,
# the CNC worker and the Message broker (Redis)
$ docker compose up -d
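For orientation, a rough sketch of the kind of services docker-compose.yml defines follows; the image versions, credentials and database engine shown here are assumptions, so refer to the repository's actual file:
# Sketch of a docker-compose.yml with the services named above (not the real file)
services:
  db:
    image: postgres        # assumed engine; check the actual compose file
    environment:
      POSTGRES_PASSWORD: changeme
  adminer:
    image: adminer
    ports:
      - "8080:8080"
  redis:
    image: redis
  worker:
    build: ./core/worker   # the CNC worker, with the device mounted on Linux
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"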
In addition, you can add a mocked version of the GRBL device, which runs the GRBL simulator.
$ docker compose -f docker-compose.yml -f docker-compose.test.yml up
Update your environment to use a virtual port:
SERIAL_PORT=/dev/ttyUSBFAKE
Initiate the virtual port inside the worker's container:
$ docker exec -it cnc-admin-worker /bin/bash simport.sh
To see your database, you can either use the adminer container, which serves a web admin UI at http://localhost:8080 when running docker-compose.yml; or connect to it with a client like DBeaver.
You can also manage database migrations by using the following commands inside the core folder.
- Apply all migrations:
$ alembic upgrade head
- Revert all migrations:
$ alembic downgrade base
- Seed DB with initial data:
$ python seeder.py
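When you change the database models, Alembic's standard autogenerate command can also create a new migration for you (the message below is just an example):
$ alembic revision --autogenerate -m "describe your change"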
More info about Alembic usage here.
The CNC worker should start automatically when running docker compose up, with certain conditions:
- It only works with Docker CE without Docker Desktop, because the latter can't mount devices. You can view a discussion about it here.
- Therefore, and given that devices in Windows work in a completely different way (there is no /dev folder), you won't be able to run the worker service on Windows. For that reason, on Windows you'll have to comment out the worker service in docker-compose.yml and follow the steps in Start the Celery worker manually (Windows).
In case you don't use Docker or just want to run the worker manually (by commenting out the worker service in docker-compose.yml), you can follow the next steps.
# 1. Move to worker folder
$ cd core/worker
# 2. Start Celery's worker server
$ celery --app tasks worker --loglevel=INFO --logfile=logs/celery.log
Optionally, if you are going to make changes in the worker's code and want to see them in real time, you can start the Celery worker with auto-reload.
# 1. Move to worker folder
$ cd core/worker
# 2. Start Celery's worker server with auto-reload
$ watchmedo auto-restart --directory=./ --pattern=*.py -- celery --app tasks worker --loglevel=INFO --logfile=logs/celery.log
NOTE: You also have to update the value of PROJECT_ROOT in config.py.
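For orientation, here is a minimal sketch of what a tasks module wired to the Redis broker could look like; the task shown and the broker URL are assumptions, and the real implementation lives in core/worker:
# tasks.py (sketch only, not the repository's actual worker code)
from celery import Celery

# Assumes Redis is reachable on localhost:6379, as in the docker compose setup
app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")

@app.task
def execute_gcode(file_path: str) -> str:
    # A real task would stream the G-code file to the GRBL device
    # over the configured SERIAL_PORT.
    return f"Executed {file_path}"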
Due to a known problem with Celery's default pool (prefork), starting the worker on Windows is not as straightforward. To do so, we have to explicitly tell Celery to use another pool. You can read more about this issue here.
- solo: The solo pool is a single-threaded execution pool. It executes incoming tasks in the same process and thread as the worker.
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=solo
- threads: The threads in the threads pool type are managed directly by the operating system kernel. As long as Python's ThreadPoolExecutor supports Windows threads, this pool type will work on Windows.
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=threads
- gevent: The gevent package officially supports Windows, so it remains a suitable option for IO-bound task processing on Windows. The downside is that you have to install it first.
# 1. Install gevent
# Option 1: If you use Conda
$ conda install -c anaconda gevent
# Option 2: If you use pip
$ pip install gevent
# 2. Start Celery's worker server
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=gevent
NOTE: You also have to update the value of PROJECT_ROOT in config.py.
Run the unit tests with:
$ pytest -s
The coverage report is available in the /htmlcov folder.
You can also check code style and type annotations:
$ flake8
$ mypy .
You can also run all tests together by using the following command:
$ make tests
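The make tests target presumably chains the individual checks shown above; a minimal sketch of such a target (the repository's actual Makefile may differ) is:
# Makefile sketch (assumed; see the repository's Makefile for the real recipe)
tests:
	pytest -s
	flake8
	mypy .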