chore: fix pre-commit config and adapt code to make all checks pass #162

Merged
merged 25 commits on Jun 26, 2024

Commits
8a09f5e
chore: fix pre-commit versions and clang-format hook failure
mojomex Jun 6, 2024
c442c8c
chore(docs): fix documentation pre-commit errors
mojomex Jun 6, 2024
084f172
chore: fix pre-commit and compiler warnings in all code files modifie…
mojomex Jun 6, 2024
2973053
chore(yamllint): ignore yaml document start in clang-format config as…
mojomex Jun 6, 2024
2bdae1d
chore: add copyright and license headers
mojomex Jun 6, 2024
83e9a85
chore(pre-commit): allow multiple documents in YAML files to make .cl…
mojomex Jun 6, 2024
cfe3d0c
chore: make pre-commit pass for parameter schema code
mojomex Jun 7, 2024
66783f5
chore: add copyright and license to all source files
mojomex Jun 7, 2024
f080524
chore: implement pre-commit suggestions in all CPP/HPP files
mojomex Jun 7, 2024
d633414
chore: whitespace changes in non-cpp files to satisfy pre-commit
mojomex Jun 7, 2024
7adf53e
chore: flake8 changes to satisfy pre-commit
mojomex Jun 7, 2024
fa95f61
fix: allow implicit conversion to status types again
mojomex Jun 7, 2024
9cfe040
chore: clean up imports
mojomex Jun 7, 2024
6748ca9
chore: add override/inline where necessary
mojomex Jun 7, 2024
3f0f35a
chore(nebula_ros): remove obsolete wrapper base types
mojomex Jun 7, 2024
3032f0b
chore: move nolint comments to be robust to formatting linebreaks
mojomex Jun 7, 2024
ff3a6a6
chore(velodyne): fix indentation in velodyne calibration files to sat…
mojomex Jun 7, 2024
b4aa564
chore(hesai): fix decoder test config yaml quotations
mojomex Jun 7, 2024
1bc62e3
chore: whitespace changes
mojomex Jun 7, 2024
592f88d
chore: remove empty, un-parseable local.cspell.json
mojomex Jun 7, 2024
950dce5
chore(prettier): ignore yaml and json files as they are handled by ya…
mojomex Jun 7, 2024
f79f42f
chore: make pre-commit pass on new files after #146
mojomex Jun 7, 2024
cf44e62
chore: update cspell to pass spell-check
mojomex Jun 7, 2024
da0bb68
chore: rename contributing.md docs page to make failing link check pass
mojomex Jun 7, 2024
dbada66
Merge remote-tracking branch 'origin/develop' into pre-commit-fixes
mojomex Jun 26, 2024
3 changes: 3 additions & 0 deletions .clang-format
Original file line number Diff line number Diff line change
@@ -45,3 +45,6 @@ IncludeCategories:
- Regex: '".*"'
Priority: 1
CaseSensitive: true
---
Language: Json
BasedOnStyle: llvm
2 changes: 2 additions & 0 deletions .cspell.json
@@ -19,6 +19,7 @@
"gptp",
"Helios",
"Hesai",
"horiz",
"Idat",
"ipaddr",
"manc",
@@ -50,6 +51,7 @@
"vccint",
"Vccint",
"Vdat",
"Wbitwise",
"XT",
"XTM",
"yukkysaito"
1 change: 0 additions & 1 deletion .github/workflows/build-and-test-differential.yaml
@@ -54,4 +54,3 @@ jobs:
fail_ci_if_error: false
verbose: true
flags: differential

2 changes: 1 addition & 1 deletion .github/workflows/json-schema-check.yaml
@@ -36,4 +36,4 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Dummy step
run: echo "No relevant changes, passing check"
run: echo "No relevant changes, passing check"
6 changes: 5 additions & 1 deletion .markdown-link-check.json
@@ -1,5 +1,9 @@
{
"aliveStatusCodes": [200, 206, 403],
"aliveStatusCodes": [
200,
206,
403
],
"ignorePatterns": [
{
"pattern": "^http://localhost"
3 changes: 2 additions & 1 deletion .pre-commit-config.yaml
@@ -11,6 +11,7 @@ repos:
- id: check-toml
- id: check-xml
- id: check-yaml
args: [--allow-multiple-documents]
- id: detect-private-key
- id: end-of-file-fixer
- id: mixed-line-ending
@@ -24,7 +25,7 @@
args: [-c, .markdownlint.yaml, --fix]

- repo: https://github.com/pre-commit/mirrors-prettier
rev: v4.0.0-alpha.8
rev: v3.1.0
hooks:
- id: prettier

3 changes: 2 additions & 1 deletion .prettierignore
@@ -1,2 +1,3 @@
*.param.yaml
*.yaml
*.rviz
*.json
9 changes: 6 additions & 3 deletions .yamllint.yaml
@@ -1,8 +1,5 @@
extends: default

ignore: |
*.param.yaml

rules:
braces:
level: error
@@ -13,10 +10,16 @@ rules:
document-start:
level: error
present: false # Don't need document start markers
ignore:
- .clang-format # Needs '---' between languages
line-length: disable # Delegate to Prettier
truthy:
level: error
check-keys: false # To allow 'on' of GitHub Actions
quoted-strings:
level: error
required: only-when-needed # To keep consistent style
indentation:
spaces: consistent
indent-sequences: true
check-multi-line-strings: false
10 changes: 8 additions & 2 deletions README.md
@@ -1,24 +1,28 @@
# Nebula

## Welcome to Nebula, the universal sensor driver

Nebula is a sensor driver platform that is designed to provide a unified framework for as wide a variety of devices as possible.
While it primarily targets Ethernet-based LiDAR sensors, it aims to be easily extendable to support new sensors and interfaces.
While it primarily targets Ethernet-based LiDAR sensors, it aims to be easily extendable to support new sensors and interfaces.
Nebula works with ROS 2 and is the recommended sensor driver for the [Autoware](https://autoware.org/) project.

## Documentation

We recommend you get started with the [Nebula Documentation](https://tier4.github.io/nebula/).
Here you will find information about the background of the project, how to install and use with ROS 2, and also how to add new sensors to the Nebula driver.

- [About Nebula](https://tier4.github.io/nebula/about)
- [Design](https://tier4.github.io/nebula/design)
- [Supported Sensors](https://tier4.github.io/nebula/supported_sensors)
- [Installation](https://tier4.github.io/nebula/installation)
- [Launching with ROS 2](https://tier4.github.io/nebula/usage)
- [Parameters](https://tier4.github.io/nebula/parameters)
- [Point cloud types](https://tier4.github.io/nebula/point_types)
- [Contributing](https://tier4.github.io/nebula/contributing)
- [Contributing](https://tier4.github.io/nebula/contribute)
- [Tutorials](https://tier4.github.io/nebula/tutorials)

## Quick start

Nebula builds with ROS 2 Galactic and Humble.

> **Note**
@@ -39,13 +43,15 @@ rosdep install --from-paths src --ignore-src -y -r
# Build Nebula
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
```

To launch Nebula as a ROS 2 node with default parameters for your sensor model:

```bash
ros2 launch nebula_ros *sensor_vendor_name*_launch_all_hw.xml sensor_model:=*sensor_model_name*
```

For example, for a Hesai Pandar40P sensor:

```bash
ros2 launch nebula_ros hesai_launch_all_hw.xml sensor_model:=Pandar40P
```
4 changes: 0 additions & 4 deletions docs/about.md
@@ -1,7 +1,3 @@
# About Nebula

WIP, please check back soon!




3 changes: 3 additions & 0 deletions docs/contribute.md
@@ -0,0 +1,3 @@
# Contributing to Nebula

WIP - please check back soon!
3 changes: 0 additions & 3 deletions docs/contributing.md

This file was deleted.

78 changes: 42 additions & 36 deletions docs/hesai_decoder_design.md
@@ -11,17 +11,19 @@ This way, runtime overhead for this generalization is `0`.
### Packet formats

For all handled Hesai sensors, the packet structure follows this rough format:

1. (optional) header: static sensor info and feature flags
2. body: point data
3. tail and other appendices: timestamp, operation mode info

### Decoding steps

For all handled Hesai sensors, decoding a packet follows these steps:

```python
def unpack(packet):
parse_and_validate(packet)
# return group: one (single-return) or more (multi-return)
# return group: one (single-return) or more (multi-return)
# blocks that belong to the same azimuth
for return_group in packet:
if is_start_of_new_scan(return_group):
@@ -40,17 +42,18 @@ def decode(return_group):
append to pointcloud
```

The steps marked with __*__ are model-specific:
The steps marked with **\*** are model-specific:

* angle correction
* timing correction
* return type assignment
- angle correction
- timing correction
- return type assignment

### Angle correction

There are two approaches between all the supported sensors:
* Calibration file based
* Correction file based (currently only used by AT128)

- Calibration file based
- Correction file based (currently only used by AT128)

For both approaches, sin/cos lookup tables can be computed.
However, the resolution and calculation of these tables is different.
@@ -60,7 +63,7 @@ However, the resolution and calculation of these tables is different.
For each laser channel, a fixed elevation angle and azimuth angle offset are defined in the calibration file.
Thus, sin/cos for elevation are only a function of the laser channel (not dependent on azimuth) while those for azimuth are a function of azimuth AND elevation.

Lookup tables for elevation can thus be sized with `n_channels`, yielding a maximum size of
Lookup tables for elevation can thus be sized with `n_channels`, yielding a maximum size of
`128 * sizeof(float) = 512B` each.

For azimuth, the size is `n_channels * n_azimuths = n_channels * 360 * azimuth_resolution <= 128 * 36000`.
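As a rough sketch of the sizing argument above (function and variable names are illustrative, not Nebula API):

```python
def lut_sizes_bytes(n_channels, azimuth_resolution_deg, sizeof_float=4):
    """Worst-case lookup-table sizes for calibration-based angle correction."""
    # Elevation: one entry per laser channel, independent of azimuth.
    elevation_bytes = n_channels * sizeof_float
    # Azimuth: one entry per (channel, azimuth step) pair.
    n_azimuths = round(360 / azimuth_resolution_deg)
    azimuth_bytes = n_channels * n_azimuths * sizeof_float
    return elevation_bytes, azimuth_bytes

# 128 channels at 0.01 deg azimuth resolution:
# 512 B per elevation table, 128 * 36000 entries per azimuth table.
elevation_bytes, azimuth_bytes = lut_sizes_bytes(128, 0.01)
```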
@@ -94,9 +97,10 @@ While there is a wide range of different supported return modes (e.g. single (fi
Differences only arise in multi-return (dual or triple) in the output order of the returns, and in the handling of some returns being duplicates (e.g. in dual(first, strongest), the first return coincides with the strongest one).

Here is an exhaustive list of differences:
* For Dual (First, Last) `0x3B`, 128E3X, 128E4X and XT32 reverse the output order (Last, First)
* For Dual (Last, Strongest) `0x39`, all sensors except XT32M place the second strongest return in the even block if last == strongest
* For Dual (First, Strongest) `0x3c`, the same as for `0x39` holds.

- For Dual (First, Last) `0x3B`, 128E3X, 128E4X and XT32 reverse the output order (Last, First)
- For Dual (Last, Strongest) `0x39`, all sensors except XT32M place the second strongest return in the even block if last == strongest
- For Dual (First, Strongest) `0x3c`, the same as for `0x39` holds.

For all other return modes, duplicate points are output if the two returns coincide.
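The coincidence check in the special-cased modes can be sketched as follows (simplified; the dictionaries stand in for decoded return blocks, and the field names are made up for illustration):

```python
def merge_return_pair(first, second):
    """Emit one point if the two returns of a dual-return pair coincide,
    otherwise emit both. This is a sketch of the duplicate handling
    described above, not the actual Nebula implementation."""
    if first["distance"] == second["distance"] and first["intensity"] == second["intensity"]:
        return [first]  # coinciding returns collapse to a single point
    return [first, second]

# A coinciding pair yields one point, a distinct pair yields two:
coinciding = merge_return_pair({"distance": 10.0, "intensity": 50},
                               {"distance": 10.0, "intensity": 50})
distinct = merge_return_pair({"distance": 10.0, "intensity": 50},
                             {"distance": 12.0, "intensity": 30})
```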

@@ -119,9 +123,10 @@ Return mode handling has a default implementation that is supplemented by additi
### `AngleCorrector`

The angle corrector has three main tasks:
* compute corrected azimuth/elevation for given azimuth and channel
* implement `hasScanCompleted()` logic that decides where one scan ends and the next starts
* compute and provide lookup tables for sin/cos/etc.

- compute corrected azimuth/elevation for given azimuth and channel
- implement `hasScanCompleted()` logic that decides where one scan ends and the next starts
- compute and provide lookup tables for sin/cos/etc.

The two angle correction types are calibration-based and correction-based. In both approaches, a file from the sensor is used to extract the angle correction for each azimuth/channel.
For all approaches, cos/sin lookup tables in the appropriate size are generated (see requirements section above).
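One simple way to express the scan-completion check is wrap-around detection (a minimal sketch assuming a 360° spinning sensor; the real logic also accounts for the configured scan phase):

```python
def has_scan_completed(prev_azimuth_deg, current_azimuth_deg):
    # The scan boundary is reached when the azimuth counter wraps
    # around zero between consecutive return groups (e.g. 359.9 -> 0.1).
    return current_azimuth_deg < prev_azimuth_deg
```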
@@ -133,9 +138,10 @@ It is a template class taking a sensor type `SensorT` from which packet type, an
Thus, this unified decoder is an almost zero-cost abstraction.

Its tasks are:
* parsing an incoming packet
* managing decode/output point buffers
* converting all points in the packet using the sensor-specific functions of `SensorT` where necessary

- parsing an incoming packet
- managing decode/output point buffers
- converting all points in the packet using the sensor-specific functions of `SensorT` where necessary

`HesaiDecoder<SensorT>` is a subclass of the existing `HesaiScanDecoder` to allow all template instantiations to be assigned to variables of the supertype.
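A Python stand-in for this parameterization (in the actual C++ code `SensorT` is a template parameter, so the indirection below compiles away; all names here are illustrative):

```python
class DummySensor:
    """Stand-in for a SensorT: parses a 'packet' and corrects angles."""

    def parse(self, packet):
        return packet["units"]

    def correct_angles(self, unit):
        # Apply a fixed per-channel offset (real sensors use calibration data).
        return unit["azimuth"] + 0.5, unit["elevation"] - 0.25


class HesaiDecoder:
    """Generic decoder parameterized by a sensor object.

    Mirrors the structure described above: parse the packet, convert each
    unit with the sensor-specific functions, append to the point buffer."""

    def __init__(self, sensor):
        self.sensor = sensor
        self.cloud = []

    def unpack(self, packet):
        for unit in self.sensor.parse(packet):
            azimuth, elevation = self.sensor.correct_angles(unit)
            self.cloud.append((azimuth, elevation, unit["distance"]))
        return len(self.cloud)


decoder = HesaiDecoder(DummySensor())
n_points = decoder.unpack(
    {"units": [{"azimuth": 100.0, "elevation": 1.0, "distance": 12.3}]})
```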

@@ -144,32 +150,32 @@ Its tasks are:
To support a new sensor model, first familiarize with the already implemented decoders.
Then, consult the new sensor's datasheet and identify the following parameters:

| Parameter | Chapter | Possible values | Notes |
|-|-|-|-|
| Header format | 3.1 | `Header12B`, `Header8B`, ... | `Header12B` is the standard and comprises the UDP pre-header and header (6+6B) mentioned in the data sheets |
| Blocks per packet | 3.1 | `2`, `6`, `10`, ... | |
| Number of channels | 3.1 | `32`, `40`, `64`, ... | |
| Unit format | 3.1 | `Unit3B`, `Unit4B`, ... | |
| Angle correction | App. 3 | `CALIBRATION`, `CORRECTION`, ... | The datasheet usually specifies whether a calibration/correction file is used |
| Timing correction | App. 2 | | There is usually a block and channel component. These come in the form of formulas/lookup tables. For most sensors, these depend on return mode and for some, features like high resolution mode, alternate firing etc. might change the timing |
| Return type handling | 3.1 | | Return modes are handled identically for most sensors but some re-order the returns or replace returns if there are duplicates |
| Bytes per second | 1.4 | | |
| Lowest supported frequency | 1.4 | `5 Hz`, `10 Hz`, ... | |

| Chapter | Full title |
|-|-|
|1.4| Introduction > Specifications|
|3.1| Data Structure > Point Cloud Data Packet|
|App. 2| Absolute Time of Point Cloud Data|
|App. 3| Angle Correction|
| Parameter | Chapter | Possible values | Notes |
| -------------------------- | ------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Header format | 3.1 | `Header12B`, `Header8B`, ... | `Header12B` is the standard and comprises the UDP pre-header and header (6+6B) mentioned in the data sheets |
| Blocks per packet | 3.1 | `2`, `6`, `10`, ... | |
| Number of channels | 3.1 | `32`, `40`, `64`, ... | |
| Unit format | 3.1 | `Unit3B`, `Unit4B`, ... | |
| Angle correction | App. 3 | `CALIBRATION`, `CORRECTION`, ... | The datasheet usually specifies whether a calibration/correction file is used |
| Timing correction | App. 2 | | There is usually a block and channel component. These come in the form of formulas/lookup tables. For most sensors, these depend on return mode and for some, features like high resolution mode, alternate firing etc. might change the timing |
| Return type handling | 3.1 | | Return modes are handled identically for most sensors but some re-order the returns or replace returns if there are duplicates |
| Bytes per second | 1.4 | | |
| Lowest supported frequency | 1.4 | `5 Hz`, `10 Hz`, ... | |

| Chapter | Full title |
| ------- | ---------------------------------------- |
| 1.4 | Introduction > Specifications |
| 3.1 | Data Structure > Point Cloud Data Packet |
| App. 2 | Absolute Time of Point Cloud Data |
| App. 3 | Angle Correction |

With this information, create a `PacketMySensor` struct and `SensorMySensor` class.
Reuse already-defined structs as much as possible (c.f. `Packet128E3X` and `Packet128E4X`).

Implement timing correction in `SensorMySensor` and define the class constants `float MIN_RANGE`,
`float MAX_RANGE` and `size_t MAX_SCAN_BUFFER_POINTS`.
The former two are used for filtering out too-close and too-far away points while the latter is used to
allocate pointcloud buffers.
allocate pointcloud buffers.
Set `MAX_SCAN_BUFFER_POINTS = bytes_per_second / lowest_supported_frequency` from the parameters found above.
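For example, the buffer-size rule can be checked with hypothetical datasheet values (not taken from any real sensor):

```python
def max_scan_buffer_points(bytes_per_second, lowest_supported_frequency_hz):
    # Bytes received during one slowest-frequency scan. Since each point
    # occupies at least one byte on the wire, this is a safe upper bound
    # on the number of points per scan.
    return bytes_per_second // lowest_supported_frequency_hz

# A hypothetical sensor streaming 3,000,000 B/s with a 5 Hz minimum scan rate:
buffer_points = max_scan_buffer_points(3_000_000, 5)  # 600,000
```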

If there are any non-standard features your sensor has, implement them as generically as possible to allow for future sensors to re-use your code.
20 changes: 13 additions & 7 deletions docs/index.md
@@ -1,9 +1,11 @@
# Welcome to the Nebula documentation

Welcome to the Nebula documentation. Here you will find information about the background of the project, how to install and use with ROS 2, and also how to add new sensors to the Nebula driver.

# About Nebula
## About Nebula

Nebula is a sensor driver platform that is designed to provide a unified framework for as wide a variety of devices as possible.
While it primarily targets Ethernet-based LiDAR sensors, it aims to be easily extendable to support new sensors and interfaces.
While it primarily targets Ethernet-based LiDAR sensors, it aims to be easily extendable to support new sensors and interfaces.
Nebula works with ROS 2 and is the recommended sensor driver for the [Autoware](https://autoware.org/) project. The project aims to provide:

- A universal sensor driver
@@ -15,18 +17,22 @@ Nebula works with ROS 2 and is the recommended sensor driver for the [Autoware](

For more information, please refer to [About Nebula](about.md).

# Getting started
## Getting started

- [Installation](installation.md)
- [Launching with ROS 2](usage.md)

# Nebula architecture
## Nebula architecture

- [Design](design.md)
- [Parameters](parameters.md)
- [Point cloud types](point_types.md)

# Supported sensors
## Supported sensors

- [Supported sensors](supported_sensors.md)

# Development
## Development

- [Tutorials](tutorials.md)
- [Contributing](contributing.md)
- [Contributing](contribute.md)
6 changes: 4 additions & 2 deletions docs/installation.md
@@ -1,11 +1,12 @@
# Installing Nebula

## Requirements

Nebula requires ROS 2 (Galactic or Humble) to build the ROS 2 wrapper.
Please see the [ROS 2 documentation](https://docs.ros.org/en/humble/index.html) for how to install.


## Getting the source and building

> **Note**
>
> A [TCP enabled version of ROS' Transport Driver](https://github.com/mojomex/transport_drivers/tree/mutable-buffer-in-udp-callback) is required to use Nebula.
@@ -28,6 +29,7 @@ colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
## Testing your build

To run Nebula unit tests:

```bash
colcon test --event-handlers console_cohesion+ --packages-above nebula_common
```
@@ -36,4 +38,4 @@ Show results:

```bash
colcon test-result --all
```
```
2 changes: 1 addition & 1 deletion docs/parameters.md
@@ -118,4 +118,4 @@ Parameters shared by all supported models:
| min_range | double | 0.3 | meters, >= 0.3 | Minimum point range published |
| max_range | double | 300.0 | meters, <= 300.0 | Maximum point range published |
| cloud_min_angle | uint16 | 0 | degrees [0, 360] | FoV start angle |
| cloud_max_angle | uint16 | 359 | degrees [0, 360] | FoV end angle |
| cloud_max_angle | uint16 | 359 | degrees [0, 360] | FoV end angle |
2 changes: 1 addition & 1 deletion docs/point_types.md
@@ -53,4 +53,4 @@ These definitions can be found in the `nebula_common/include/point_types.hpp`.
| `return type` | `uint8` | | Contains the laser return type according to the sensor configuration. |
| `azimuth` | `float` | `degrees` | Contains the azimuth of the current point. |
| `distance` | `float` | `m` | Contains the distance from the sensor origin to this echo on the XY plane. |
| `timestamp` | `float` | `ns` | Contains the relative time to the triggered scan time.
| `timestamp` | `float` | `ns` | Contains the relative time to the triggered scan time. |