Copyright (C) 2021 Politecnico di Torino, Italy. SPDX-License-Identifier: Apache-2.0. See LICENSE file for details.
Authors: Alessio Burrello, Daniele Jahier Pagliari, Matteo Risso, Simone Benatti, Enrico Macii, Luca Benini, Massimo Poncino
If you use Q-PPG in your experiments, please make sure to cite our paper:

    @ARTICLE{burrello2021qppg,
      author={Burrello, Alessio and Jahier Pagliari, Daniele and Risso, Matteo and Benatti, Simone and Macii, Enrico and Benini, Luca and Poncino, Massimo},
      journal={IEEE Transactions on Biomedical Circuits and Systems},
      title={Q-PPG: Energy-Efficient PPG-based Heart Rate Monitoring on Wearable Devices},
      year={2021},
      pages={1-1},
      doi={10.1109/TBCAS.2021.3122017}}
Heart Rate (HR) monitoring is increasingly performed in wrist-worn devices using low-cost photoplethysmography (PPG) sensors. However, Motion Artifacts (MAs) affect the performance of PPG-based HR tracking. This is typically addressed by coupling the PPG signal with acceleration measurements from an inertial sensor. Unfortunately, most standard approaches of this kind rely on hand-tuned parameters, which impair their generalization capabilities and their applicability to real data in the field. In contrast, methods based on deep learning, despite their better generalization, are considered too complex to deploy on wearable devices. In this work, we tackle these limitations, proposing a design space exploration methodology to automatically generate a rich family of deep Temporal Convolutional Networks (TCNs) for HR monitoring, all derived from a single "seed" model. Our flow involves two Neural Architecture Search (NAS) tools and a hardware-friendly quantizer, whose combination yields highly accurate and extremely lightweight models. When tested on the PPG-Dalia dataset, our most accurate model sets a new state of the art in Mean Absolute Error (MAE). Furthermore, we deploy our TCNs on an embedded platform featuring an STM32WB55 microcontroller, demonstrating their suitability for real-time execution. Our most accurate quantized network achieves an MAE of 4.41 Beats Per Minute (BPM), with an energy consumption of 47.65 mJ and a memory footprint of 412 kB. At the same time, the smallest network that obtains an MAE < 8 BPM, among those generated by our flow, has a memory footprint of 1.9 kB and consumes just 1.7 mJ per inference.
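The actual network topologies are generated automatically by the NAS flow described below. Purely as an illustrative sketch of the kind of building block involved (not the paper's seed model; every layer size here is an invented example), a dilated temporal-convolution block in PyTorch might look like this:

```python
import torch
import torch.nn as nn

class TempConvBlock(nn.Module):
    """Minimal dilated temporal-convolution block (illustrative only).

    Channel counts and dilation are placeholders, NOT the Q-PPG seed model.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        # 'same'-style padding so the output keeps the input length
        pad = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              dilation=dilation, padding=pad)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, time)
        return self.act(self.bn(self.conv(x)))

# Example: a 4-channel window (PPG + 3-axis accelerometer) of 256 samples
x = torch.randn(1, 4, 256)
y = TempConvBlock(4, 32)(x)
print(y.shape)  # torch.Size([1, 32, 256])
```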
Required packages:
- Python 3.6+
- Tensorflow 2.4.0
- Torch 1.9.0
- Scikit-learn 0.24.2
- Scikit-image 0.17.2
- Pandas 1.1.5
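Assuming a standard pip-based environment (an assumption; any equivalent environment manager works), the pinned versions above can be installed with:

```shell
pip install tensorflow==2.4.0 torch==1.9.0 scikit-learn==0.24.2 \
            scikit-image==0.17.2 pandas==1.1.5
```

Note that MorphNet is not on this list and must be installed separately (see the notes under step 1 below).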
In order to reproduce the flow described in the paper, two main steps are required:

1. Architecture search, simply run:
   - MorphNet search: `python architecture_search/pit_mn.py --root <> --NAS <MN-Size or MN-Flops> --strength <> --threshold <>`
   - PIT search: `python architecture_search/pit_mn.py --root <> --NAS <PIT> --learned_ch <x y z ...> --strength <> --warmup <>`

   A concrete example invocation is sketched after the notes below.
   N.B.:
   - The path passed as `--root <>` should contain the dataset folder.
   - The learned architecture description files can be found inside the directory `<root>/saved_models_<NAS>/`.
   - You need to properly install MorphNet from the official repository (https://github.com/google-research/morph-net).
   - In `--learned_ch <x y z ...>`, x, y, and z represent the numbers of channels found by the MorphNet search.
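For concreteness, a hypothetical end-to-end architecture search could look as follows; every concrete value (dataset path, strength, threshold, warmup epochs, and channel counts) is an invented placeholder, not a setting from the paper:

```shell
# Step 1a: MorphNet size-driven search (all values below are hypothetical)
python architecture_search/pit_mn.py --root ./data --NAS MN-Size \
    --strength 1e-4 --threshold 0.5

# Step 1b: PIT search, seeded with the channel counts learned by MorphNet
# (32 64 128 are placeholders; read the real ones from
#  <root>/saved_models_MN-Size/)
python architecture_search/pit_mn.py --root ./data --NAS PIT \
    --learned_ch 32 64 128 --strength 1e-4 --warmup 20
```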
2. Precision search, simply run:
   - Mixed-precision search: `source precision_search/launch_MN_PIT_mix.sh`
   - Static quantization: `source precision_search/launch_MN_PIT_all.sh <architecture_name> <n_bits>`

   See the example after this list.
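For example, to statically quantize one of the learned architectures to a uniform bit-width (the architecture name and bit-width below are hypothetical placeholders, not names produced by the flow):

```shell
# Static quantization of a single learned architecture to a fixed bit-width
# ("my_learned_tcn" and 8 are invented example values)
source precision_search/launch_MN_PIT_all.sh my_learned_tcn 8
```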
Q-PPG is released under Apache 2.0, see the LICENSE file in the root of this repository for details.