ABeL is a 3D end-to-end framework for creating, simulating and evaluating indoor and outdoor positioning algorithms. Through the use of NVIDIA Sionna's differentiable ray tracer, a realistic representation of a signal's electromagnetic propagation effects can be achieved, while a faithful recreation of the environment can be attained by using Blender. ABeL streamlines the usage of both, while also providing tools for implementing, testing and benchmarking positioning algorithms.
ABeL is structured into multiple modules, which can be used and modified independently of each other. The general workflow, from the scenario definition to the benchmarking of the positioning algorithms, can then be described in relation to the modules as follows.
- Scenario Definition and Parametrization: Create a model of the environment by importing OSM (Open Street Map) data via Blosm directly into Blender. Assign materials to the imported objects and then define the materials' electromagnetic properties in radioMaterialsConfig.yaml. Furthermore, define the transceiver and signal properties in sionnaRayTracingConfig.yaml.
- Signal Generation and Preprocessing: Load the scenario and generate signal propagation data. Then, either preprocess and save the data for later use or visualize it with the provided or custom plots.
- Algorithm Testing and Benchmarking: Prepare the data for the positioning algorithms, creating an interface that ensures the testing and benchmarking setup is the same for all algorithms. Then implement your positioning algorithms and benchmark them with the tools provided, which are consistent for all algorithms.
In the remainder of this document, we'll explain how to install the tools necessary for running ABeL as well as ABeL itself. We'll mainly focus on reproducing the benchmark from our paper, while still discussing all parts of the framework's setup and functionality, where necessary by referring to external resources.
Figure 1: ABeL is structured into multiple independent modules for 1. the creation of scenarios, 2. the simulation of the signal propagation in the scenario defined in the preceding step and 3. the evaluation of the algorithms, while ensuring strict comparability.
The installation process will be shown for Ubuntu 25.04, but should work similarly for other distributions. If you're on Windows, you may want to use the WSL (Windows Subsystem for Linux); alternatively, the framework should also work with minor changes on a Windows machine. Furthermore, we strongly recommend using a GPU (Graphics Processing Unit), as the compute time is significantly longer when using the CPU (Central Processing Unit).
Before we begin, we want to explain why and how we use Docker for ABeL's implementation. The "why" results from two reasons: 1. the packages and tools used for creating ABeL are heavily interdependent, meaning updating the wrong Python or system package may lead to difficult-to-understand error messages, resulting in 2. a not-so-beginner-friendly simulation environment. With Docker, instead, we can (nearly) guarantee that if ABeL runs on our system, it will also run on yours.
Regarding the "how", take a look at the following figure. First, let's talk about what docker is and how to use it. You
may already be familiar with concept of VMs (Virtual Machines), which are quite similar to Docker, terminology-wise.
You begin with source code, here ABeL which you can download via git, which on itself cannot be used to create or
run a VM. Instead, you'll create an image or .iso file first, by compiling or "building" the source code, which is
exactly what we do with the docker buildx build command. Following, you would insert this .iso image into your VM
and run it, creating a disposable OS (Operating System). Nearly the identical thing is also done with dockers
docker run command, which creates a Docker container. The difference mainly is, that a VMs fully emulates an OS and
all its hardware components, while Docker containers only occupy an "insolated space" on the hosts machine.
Figure 2: Overview of ABeL's creation and usage as well as its interactions with the host system.
We are currently in the process of migrating from Sionna version 0.19.2 to the full release version. The latter uses a Mitsuba version not compatible with Sionna 0.19.2. Furthermore, the "Mitsuba for Blender" add-on also uses the currently installed Mitsuba version and is dependent on the Blender version. If you don't want to use Docker for running ABeL, but choose to install ABeL directly on the host system, you may want to use the linked "Mitsuba for Blender" version and install the matching Blender version with the following command.
sudo snap install blender --channel=3.6lts/stable --classic
The remaining setup is the same as described below.
The first step is downloading Blender itself. This can either be done directly from the Blender website or by using the following command.
sudo snap install blender --classic
Then, download Blosm and Mitsuba for Blender directly from the linked sites.
For Blosm, you'll need to provide your e-mail address, to which a download link will be sent. Following, open Blender and go to Edit > Preferences... > Add-ons. Choose Install from Disk..., select blosm.zip and mitsuba-blender.zip and activate the add-ons; for further information see also Blosm's installation guide.
The setup of Docker follows the installation guide and post-installation guide on Docker's documentation site. First, make sure that all old installations of Docker are removed before carrying on.
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; \
do sudo apt-get remove $pkg; done
Next, add Docker's repository to apt by using the following commands.
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Then, Docker can be installed by executing the following.
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Try running Docker's hello-world container. If you don't receive any errors, the installation was successful.
sudo docker run hello-world
Now, let's set up Docker in such a way that you don't need to use sudo when using it (rootless mode). First, we create a new user group called docker, then we add the current user to it.
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
If you want Docker to run automatically when the system starts, you may use the following command.
sudo systemctl enable docker containerd
The NVIDIA Container Toolkit does not allow arbitrary combinations of drivers and CUDA versions with the following setup. We recommend installing one of the two driver and CUDA configurations mentioned below.
1. NVIDIA driver 535.247.01 with CUDA 12.2
2. NVIDIA driver 570.169.00 with CUDA 12.8
You can also take a look at oddmario's NVIDIA-Ubuntu-Driver-Guide for information on how to install a custom NVIDIA driver.
The setup of NVIDIA's Container Toolkit follows the official main installation guide and post-installation guide.
The toolkit allows the Docker containers to access the host's GPU, as shown in Figure 2, and is mandatory if you want to use an NVIDIA GPU for simulating the signal propagation. First, you'll need to add the toolkit's apt repository to apt itself.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o \
/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
Then update your apt repository and install the current version of NVIDIA's container toolkit.
sudo apt update && sudo apt install -y nvidia-container-toolkit
Now we need to allow Docker to use the NVIDIA container runtime by entering the following command and restarting the Docker daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Finally, we need to allow Docker to use the NVIDIA container runtime when running Docker in rootless mode, i.e. when we don't use sudo before the Docker commands.
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
sudo systemctl restart docker
sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
First, create a folder in a directory of your choice and clone this repository by running the following command.
git clone https://github.com/RobotVisionHKA/ABeL.git <path/to/your/ABeL/directory>
If you successfully finished the Docker installation and configuration in the preceding section, you should be able to create a Docker image by typing the following. You'll need to repeat this step whenever you change the contents of ABeL itself in some way, e.g. when adding or modifying a Python script.
docker buildx build --tag abel <path/to/your/ABeL/directory>
Before we can run and use the container, we must first download the archive containing the scenario. We can either do this by directly following the link or by using the shell.
pathToDataDirectory="<path/to/your/data/directory>"
wget -P $pathToDataDirectory \
https://bwsyncandshare.kit.edu/s/pYPjmC4rsSkGwyT/download/ABeL_HKA.tar.gz
tar --strip-components=1 -xzf "${pathToDataDirectory}/ABeL_HKA.tar.gz" -C $pathToDataDirectory \
&& rm "${pathToDataDirectory}/ABeL_HKA.tar.gz"If the Nvidia Container Toolkit was successfully installed, we
should be able to run the docker container by using the following shell command. It will bind the directory, to which
you downloaded and extracted the ABeL_HKA,tar.gz file, to the directory ABeL/data inside the container. In other
words, with ABeL/data and <path/to/your/data/directory/> the container can access the host system and the other way
around.
docker run -it --privileged --rm --gpus all \
--mount type=bind,source=<absolute/path/to/your/data/directory>,target=/ABeL/data \
abel:latest
With the Docker container up and running, you can visit the next section explaining how to execute the examples and create your own scripts and scenarios. If you want to exit the ABeL container, just type exit. One last notice: you may run into the problem that the Docker container's output folder is owned by root, meaning that you can neither change nor delete any files. To regain ownership of the folder, just type the following.
sudo chown -cR $USER <path/to/your/data/directory>
After running ABeL's Docker container, you'll generally start in a bash environment allowing for maximum flexibility. You may copy files from and to the data folder shared with the host system, manipulate the project files with nano or use any other command available in the (standard) shell.
Regarding the framework itself, the following sections will explain 1. how to use the examples included in the framework, 2. how you can define your own scenarios with Blender, as well as 3. how to implement your own custom positioning algorithms.
The examples can be found in ABeL's project folder /ABeL and can be run with python3 <pythonFileName>.py; you may either call a script directly by prepending the project folder or first move into it with cd and then use the command shown above.
- example_dataGeneration: A simple interactive Python script, allowing for the generation of signal propagation data depending on the scenario defined in the shared data folder.
- example_dataEvaluation: A simple interactive Python script, allowing for the evaluation of a dataset containing signal propagation data.
- example_paperIPIN: A Python script which, in conjunction with the data set ABeL_HKA.tar.gz, reproduces the data used in the 2025 IPIN paper "ABeL: A Customizable Open-Source Framework for Evaluating 3D Terrestrial Positioning Algorithms". Warning: generating the data with the simulation properties used in the paper may take several hours. If you want to reduce the simulation time needed while still receiving comparable results, you may reduce the max_depth or timeResolution parameters, as sketched below.
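Purely as an illustration of such a reduction (the exact key names and their location in config/sionnaRayTracingConfig.yaml are assumptions, so check the shipped config first), the parameters could be lowered programmatically like this:
import yaml

configPath = "config/sionnaRayTracingConfig.yaml"
with open(configPath) as f:
    config = yaml.safe_load(f)

# Hypothetical keys: fewer ray-tracing bounces and a coarser time grid
# shorten the simulation at the cost of some fidelity.
config["max_depth"] = 3
config["timeResolution"] = 0.5

with open(configPath, "w") as f:
    yaml.safe_dump(config, f)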
In this chapter we'll only describe the general workflow of using Blender with Blosm. For a more in-depth explanation, we refer you to the Sionna Blender introduction.
To import OSM data, first choose the Blosm option on the vertical ribbon on the right side. You can either visually select the part of the map you want to import by using the select option or import it directly by inputting the maximum and minimum longitude and latitude. Then, under settings, choose which terrain objects you want to import and make sure the Import as single object option is not selected. Now you can add and/or modify the object materials such that they match your defined radio materials (the materials you defined in the radioMaterialsConfig.yaml). The easiest way to do this is by selecting the display mode Blender File (directly next to the search bar). This allows you to change materials based on the objects' type, e.g. of all streets, roads etc.
When you have finished creating your scene, you can export the meshes and .xml file needed for ABeL by choosing
File > Export > Mitsuba (.xml). Then, select Export IDs, Forward > Y Forward and Up > Z Up. Create a new folder
and use the Mitsuba Export button.
Figure 3: View of the scene used in the 2025 IPIN paper in Blender. On the right side, the menu for the OSM import via Blosm is also shown.
We generally recommend using the template template_algorithmsImplementation.py for implementing your own custom positioning algorithms, or at least using it as a point of reference.
from typing import TypeAlias, Union
from Localization.utils.dataPreparation import DataPreparation
from globalUtils.decorators import iterateMethod, propertySetter
from Localization.vizsualizationFunctions.precisionMetrics import LocalizationPrecisionMetrics
t_position: TypeAlias = tuple[float, float, float]
class PositioningAlgorithm(DataPreparation, LocalizationPrecisionMetrics):
def __init__(self) -> None:
DataPreparation.__init__(self)
LocalizationPrecisionMetrics.__init__(self)
self.localizedReceiverPositions: Union[list[t_position],
list[list[t_position]]] = []
def setupPositioning(self,
pathToRayTracingData: str,
pathToRayTracingConfig: str = "config/sionnaRayTracingConfig.yaml",
*args) -> None:
# Load and prepare data for positioning algorithm
self.somePropData = ...
@propertySetter(propertyToSet="localizedReceiverPositions")
@iterateMethod(iteratorName="_localizationData")
def runPositioning(self, *args, **kwargs) -> list[t_position]:
currentTrajectoryIdx = kwargs["_localizationDataIndex"]
positions = self.__algorithm(propagationData=self.somePropData[currentTrajectoryIdx])
return positions
@staticmethod
def __algorithm(propagationData, *args) -> list[t_position]:
pass
Use the setupPositioning method to import and prepare the data with the inherited DataPreparation class, ensuring that all algorithms receive the same propagation data. Then, use or modify the data for your own algorithm; e.g. in the case of TDoA (Time Difference of Arrival), subtract the ToAs (Times of Arrival) from each other.
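As a toy illustration of such a preparation step (a sketch with hypothetical names, not ABeL's actual API), converting ToAs into TDoAs relative to a reference transmitter could look as follows.
import numpy as np

def toasToTdoas(toas: np.ndarray, referenceIdx: int = 0) -> np.ndarray:
    # toas: arrival times in seconds, one entry per transmitter; the TDoA of
    # each transmitter is taken relative to the chosen reference transmitter.
    return toas - toas[referenceIdx]

toas = np.array([1.2e-7, 1.5e-7, 1.9e-7])
tdoas = toasToTdoas(toas)            # [0.0, 3.0e-8, 7.0e-8]
rangeDiffs = tdoas * 299_792_458.0   # range differences in metres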
Next, implement your algorithm in the method __algorithm. You'll only need to implement the algorithm for a single simulation time step, but for all receivers. Return your calculated receiver positions as a list, for example as in the sketch below.
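To give an idea of what __algorithm could contain, here is a minimal sketch of a weighted-centroid estimator for one time step; the layout of propagationData is an assumption made purely for illustration and must be adapted to the data you prepared in setupPositioning.
import numpy as np

t_position = tuple[float, float, float]

def algorithm(propagationData: list[dict]) -> list[t_position]:
    # Assumed layout: one dictionary per receiver at the current time step,
    # with "anchorPositions" as a (numAnchors, 3) array and "powers" as a
    # (numAnchors,) array of received linear powers used as weights.
    positions = []
    for receiver in propagationData:
        weights = receiver["powers"] / receiver["powers"].sum()
        estimate = (weights[:, None] * receiver["anchorPositions"]).sum(axis=0)
        positions.append(tuple(estimate))
    return positions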
Finally, use the runPositioning method to run your positioning algorithm. Pay attention to the iterateMethod decorator, which is used to automatically iterate over all simulation time steps. This means that the positioning algorithm defined in the __algorithm method will automatically be called for every simulation time step, so you only need to implement the algorithm for a single time step, reducing the dimensionality by one.
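Conceptually, a decorator like iterateMethod could be implemented along the following lines (a simplified sketch; ABeL's actual implementation may differ): it calls the wrapped method once per element of the named iterator attribute and passes the current index via the keyword arguments, matching the kwargs["_localizationDataIndex"] access in the template.
import functools

def iterateMethod(iteratorName: str):
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            # Call the wrapped method once per element of self.<iteratorName>
            # and pass the current index, e.g. "_localizationDataIndex".
            results = []
            for idx in range(len(getattr(self, iteratorName))):
                kwargs[f"{iteratorName}Index"] = idx
                results.append(method(self, *args, **kwargs))
            return results
        return wrapper
    return decorator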
After running the positioning algorithm, use the inherited LocalizationPrecisionMetrics class to evaluate your algorithm. You may also implement your own performance metrics, as sketched below.
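A custom metric can be as simple as the following sketch, here the root-mean-square error over all time steps (the array layout is an assumption for illustration).
import numpy as np

def rootMeanSquareError(estimated: np.ndarray, groundTruth: np.ndarray) -> float:
    # estimated, groundTruth: (numTimeSteps, 3) arrays of receiver positions;
    # the error is the Euclidean distance per time step, averaged as an RMSE.
    squaredDistances = np.sum((estimated - groundTruth) ** 2, axis=1)
    return float(np.sqrt(np.mean(squaredDistances)))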
To actually run your own positioning algorithm, copy it into the folder you downloaded the framework to, rebuild the image and create a Docker container as described in the subsection "Running ABeL with Docker".
ABeL is licensed under GPL v3; a disclaimer can be found below, and the license is available in full in the repository itself.
Copyright (C) 2025 Simon Huh
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
If you want to use this software as part of your work or create a project based on it, please cite it as follows.
@software{ABeL,
author={Simon Huh},
title={ABeL},
url={https://github.com/SimonHuh/ABeL},
version={1.0.0},
date={2025},
}
The framework was also presented at IPIN 2025 (International Conference on Indoor Positioning and Navigation). If you want to publish a paper based on this work, you may also directly cite the conference paper.
@conference{
}