cleaned house, took out the trash
admercs committed Feb 7, 2024
1 parent 87e6e1f commit 92d20c7
Showing 74 changed files with 925 additions and 1,082 deletions.
8 changes: 5 additions & 3 deletions docs/CHANGELOG.md
@@ -1,14 +1,16 @@
# What's new
# Change Log

Below is summarized list of important changes. This does not include minor/less important changes or bug fixes or documentation update. This list updated every few months. For complete detailed changes, please review [commit history](https://github.com/nervosys/AutonomySim/commits/main).
Below is a summarized list of important changes. This does not include minor changes, bug fixes, or documentation updates. This list is updated every few months. For complete, detailed changes, please review the project [commit history](https://github.com/nervosys/AutonomySim/commits/main).

### February 2024
* AutonomySim relaunched as a clean-slate repository.
* Major documentation update.
* Trimmed the fat to go light and fast.

### October 2023
* AutonomySim fork created.
* Migration from Batch/Command to PowerShell.
* Support dropped for Unity, Gazebo, and ROS1 to better focus on Unreal Engine 5.
* Major project reorganization begun.

### Jan 2022
@@ -188,7 +190,7 @@ Below is summarized list of important changes. This does not include minor/less
### November, 2018
* Added Weather Effects and [APIs](apis.md#weather-apis)
* Added [Time of Day API](apis.md#time-of-day-api)
* An experimental integration of [AutonomySim on Unity](https://github.com/nervosys/AutonomySim/tree/main/Unity) is now available. Learn more in [Unity blog post](https://blogs.unity3d.com/2018/11/14/AutonomySim-on-unity-experiment-with-autonomous-vehicle-simulation).
* An experimental integration of [AutonomySim on Unity](https://github.com/nervosys/AutonomySim/tree/master/Unity) is now available. Learn more in [Unity blog post](https://blogs.unity3d.com/2018/11/14/AutonomySim-on-unity-experiment-with-autonomous-vehicle-simulation).
* [New environments](https://github.com/nervosys/AutonomySim/releases/tag/v1.2.1): Forest, Plains (windmill farm), TalkingHeads (human head simulation), TrapCam (animal detection via camera)
* Highly efficient [NoDisplay view mode](settings.md#viewmode) to turn off main screen rendering so you can capture images at high rate
* [Enable/disable sensors](https://github.com/nervosys/AutonomySim/pull/1479) via settings
8 changes: 4 additions & 4 deletions docs/SUPPORT.md
@@ -1,7 +1,7 @@
# Support

We highly recommend to take a look at source code and contribute to the project. Due to large number of incoming feature request we may not be able to get to your request in your desired timeframe. So please [contribute](CONTRIBUTING.md) :).
We highly recommend that users look at the source code and contribute to the project. Due to the large number of incoming feature requests, we may not be able to get to your request in your desired timeframe. So, please [contribute](CONTRIBUTING.md):

* [Ask in Discussions](https://github.com/nervosys/AutonomySim/discussions)
* [File GitHub Issue](https://github.com/nervosys/AutonomySim/issues)
* [Join AutonomySim Facebook Group](https://www.facebook.com/groups/1225832467530667/)
* [File GitHub Issues](https://github.com/nervosys/AutonomySim/issues)
* [Join the Discussions](https://github.com/nervosys/AutonomySim/discussions)
* [Join the Discord](https://discord.gg/x84JXYje)
11 changes: 4 additions & 7 deletions docs/adding_new_apis.md → docs/adding_apis.md
@@ -1,6 +1,8 @@
# Adding New APIs

Adding new APIs requires modifying the source code. Much of the changes are mechanical and required for various levels of abstractions that AutonomySim supports. The main files required to be modified are described below along with some commits and PRs for demonstration. Specific sections of the PRs or commits might be linked in some places, but it'll be helpful to have a look at the entire diff to get a better sense of the workflow. Also, don't hesitate in opening an issue or a draft PR also if unsure about how to go about making changes or to get feedback.
Adding new `AutonomySim` APIs requires modifying the source code. These changes are required for the various levels of abstraction that `AutonomySim` supports. The primary files that need to be modified are described below, along with demonstration commits and pull requests (PRs). Specific sections of PRs or commits might be linked in some places, but it'll be helpful to look at the entire `diff` to get a sense of the workflow.

Do not hesitate to open an issue or a draft PR if you are unsure about how to make changes or to get feedback.

## Implementing the API

@@ -33,19 +35,14 @@ The APIs use [msgpack-rpc protocol](https://github.com/msgpack-rpc/msgpack-rpc)
To add the RPC code to call the new API, follow the steps below, using the implementations of other APIs in these files as a guide.

1. Add an RPC handler in the server which calls your implemented method in [RpcLibServerBase.cpp](https://github.com/nervosys/AutonomySim/blob/main/AutonomyLib/src/api/RpcLibServerBase.cpp). Vehicle-specific APIs are in their respective vehicle subfolder.

2. Add the C++ client API method in [RpcLibClientBase.cpp](https://github.com/nervosys/AutonomySim/blob/main/AutonomyLib/src/api/RpcLibClientBase.cpp).

3. Add the Python client API method in [client.py](https://github.com/nervosys/AutonomySim/blob/main/PythonClient/AutonomySim/client.py). If needed, add or modify a structure definition in [types.py](https://github.com/nervosys/AutonomySim/blob/main/PythonClient/AutonomySim/types.py). A minimal sketch of this step follows the list.
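
A minimal sketch of step 3, assuming the client wraps the msgpack-rpc connection as `self.client` (as the existing methods in `client.py` do); the API name `simGetWindSpeed` is purely hypothetical:

```python
# PythonClient/AutonomySim/client.py -- hypothetical new API method
def simGetWindSpeed(self, vehicle_name=""):
    """Forward the call to the matching RPC handler added in RpcLibServerBase.cpp."""
    return self.client.call("simGetWindSpeed", vehicle_name)
```

If the call returns a structure rather than a plain value, mirror it with a class in `types.py` and convert the returned msgpack data before handing it back to the caller.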

## Testing

Testing is required to ensure that the API works as expected. For this, you will need the source-built AutonomySim and the Blocks environment. In addition, if you are using the Python APIs, you will need to use the `AutonomySim` package from source rather than the PyPI package. Two ways of using the package from source are described below:

1. Use [setup_path.py](https://github.com/nervosys/AutonomySim/blob/main/PythonClient/multirotor/setup_path.py). It will setup the path such that the local AutonomySim module is used instead of the pip installed package. This is the method used in many of the scripts since the user doesn't need to do anything other than run the script.
Place your example script in one of the folders inside `PythonClient` like `multirotor`, `car`, etc. You can also create one to keep things separate, and copy the `setup_path.py` file from another folder.
Add `import setup_path` before `import AutonomySim` in your files. Now the latest main API (or any branch currently checked out) will be used.

1. Use [setup_path.py](https://github.com/nervosys/AutonomySim/blob/main/PythonClient/multirotor/setup_path.py). It will set up the path such that the local AutonomySim module is used instead of the pip-installed package. This is the method used in many of the scripts, since the user doesn't need to do anything other than run the script. Place your example script in one of the folders inside `PythonClient`, such as `multirotor` or `car`. You can also create a new folder to keep things separate and copy the `setup_path.py` file from another folder. Add `import setup_path` before `import AutonomySim` in your files. Now the latest main API (or whichever branch is currently checked out) will be used.
2. Use a [local project pip install](https://pip.pypa.io/en/stable/cli/pip_install/#local-project-installs). A regular install creates a copy of the current source and uses it, whereas an editable install (`pip install -e .` from inside the `PythonClient` folder) picks up changes whenever the Python API files change. An editable install is beneficial when working on several branches or when the API is not finalized.

It is recommended to use a virtual environment for Python packaging, so as not to break any existing setup.
Expand Down
48 changes: 20 additions & 28 deletions docs/autonomysim_apis.md → docs/apis.md
@@ -1,29 +1,22 @@
---
title: "AutonomySim APIs"
keywords: introduction faq
tags: [getting_started, introduction]
sidebar: autonomysim_sidebar
permalink: autonomysim_getting_started.html
summary: These brief instructions will help you get started with the simulator. The other topics in this help provide additional information and detail about working with other aspects of this platform.
---
# APIs

## Introduction

AutonomySim exposes APIs so you can interact with vehicle in the simulation programmatically. You can use these APIs to retrieve images, get state, control the vehicle and so on.
`AutonomySim` exposes application programming interfaces (APIs) that enable you to interact with vehicles in the simulation programmatically. You can use these APIs to retrieve images, get vehicle state, control the vehicle, and so on.

## Python Quickstart

If you want to use Python to call AutonomySim APIs, we recommend using Anaconda with Python 3.5 or later; however, some code may also work with Python 2.7 ([help us](CONTRIBUTING.md) improve compatibility!).

First install this package:

```
```shell
pip install msgpack-rpc-python
```

You can either get AutonomySim binaries from [releases](https://github.com/nervosys/AutonomySim/releases) or compile them from source ([Windows](build_windows.md), [Linux](build_linux.md)). Once you can run AutonomySim, choose Car as the vehicle, then navigate to the `PythonClient\car\` folder and run:

```
```shell
python hello_car.py
```

@@ -33,7 +26,7 @@ If you are using Visual Studio 2019 then just open AutonomySim.sln, set PythonCl

You can also install the `AutonomySim` package simply by running:

```
```shell
pip install AutonomySim
```
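
Once the package is installed and the simulator is running, a minimal connection check looks like the sketch below (it assumes the fork keeps the `CarClient`, `confirmConnection`, and `enableApiControl` names used by the example scripts):

```python
import AutonomySim

# Connect to a running simulator with a Car vehicle (default host and port)
client = AutonomySim.CarClient()
client.confirmConnection()

# Take control of the vehicle via the API
client.enableApiControl(True)
print("API control enabled:", client.isApiControlEnabled())
```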

@@ -137,29 +130,28 @@ for response in responses:
* `simPrintLogMessage`: Prints the specified message in the simulator's window. If `message_param` is also supplied, it is printed next to the message. In that case, if this API is called again with the same message value but a different `message_param`, the previous line is overwritten with the new one (instead of the API creating a new line on the display). For example, `simPrintLogMessage("Iteration: ", to_string(i))` keeps updating the same line on the display when the API is called with different values of `i`. The valid values of the severity parameter are 0 to 3 inclusive, corresponding to different colors.
* `simGetObjectPose`, `simSetObjectPose`: Gets and sets the pose of the specified object in the Unreal environment. Here, an object means an "actor" in Unreal terminology. Objects are searched for by tag as well as by name. Please note that the names shown in the UE Editor are *auto-generated* in each run and are not permanent. So, if you want to refer to an actor by name, you must change its auto-generated name in the UE Editor. Alternatively, you can add a tag to an actor by clicking on it in the Unreal Editor, going to the [Tags property](https://answers.unrealengine.com/questions/543807/whats-the-difference-between-tag-and-tag.html), clicking the "+" sign, and adding a string value. If multiple actors have the same tag, the first match is returned. If no matches are found, a NaN pose is returned. The returned pose is in NED coordinates, in SI units, in the world frame. For `simSetObjectPose`, the specified actor must have [Mobility](https://docs.unrealengine.com/en-us/Engine/Actors/Mobility) set to Movable, otherwise you will get undefined behavior. `simSetObjectPose` has a `teleport` parameter, which means the object is [moved through other objects](https://www.unrealengine.com/en-US/blog/moving-physical-objects) in its way; it returns true if the move was successful. A short usage sketch of both APIs follows this list.
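
Both APIs in a short sketch (it assumes the Python client mirrors the usual `Pose`/`Vector3r` types with `position.x_val` fields; the actor name `Cone_1` is illustrative):

```python
# Overwrite the same log line on each iteration; severity 0-3 selects the color
for i in range(10):
    client.simPrintLogMessage("Iteration: ", str(i), 0)

# Read and teleport an actor tagged or named "Cone_1" in the Unreal scene
pose = client.simGetObjectPose("Cone_1")
pose.position.x_val += 5.0                     # NED world frame, SI units
client.simSetObjectPose("Cone_1", pose, True)  # teleport=True moves it through obstacles
```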

### Image / Computer Vision APIs
### Image/Computer Vision APIs

AutonomySim offers comprehensive images APIs to retrieve synchronized images from multiple cameras along with ground truth including depth, disparity, surface normals and vision. You can set the resolution, FOV, motion blur etc parameters in [settings.json](settings.md). There is also API for detecting collision state. See also [complete code](https://github.com/nervosys/AutonomySim/tree/main/Examples/DataCollection/StereoImageGenerator.hpp) that generates specified number of stereo images and ground truth depth with normalization to camera plane, computation of disparity image and saving it to [pfm format](pfm.md).
AutonomySim offers comprehensive image APIs to retrieve synchronized images from multiple cameras, along with ground truth including depth, disparity, surface normals, and vision. You can set parameters such as resolution, FOV, and motion blur in [settings.json](settings.md). There is also an API for detecting the collision state. See also the [complete code](https://github.com/nervosys/AutonomySim/tree/master/Examples/DataCollection/StereoImageGenerator.hpp) that generates a specified number of stereo images and ground-truth depth, with normalization to the camera plane, computation of the disparity image, and saving to [pfm format](pfm.md).

More on [image APIs and Computer Vision mode](image_apis.md).
For vision problems that can benefit from domain randomization, there is also an [object retexturing API](retexturing.md), which can be used in supported scenes.
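
A minimal image-retrieval sketch (assuming the fork keeps the AirSim-style `ImageRequest`/`ImageType` client types; camera `"0"` is the default front-center camera):

```python
import AutonomySim

client = AutonomySim.MultirotorClient()
client.confirmConnection()

# Request an uncompressed scene image and a depth-visualization image from camera "0"
responses = client.simGetImages([
    AutonomySim.ImageRequest("0", AutonomySim.ImageType.Scene, False, False),
    AutonomySim.ImageRequest("0", AutonomySim.ImageType.DepthVis),
])
for response in responses:
    print("image size:", response.width, "x", response.height)
```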

### Pause and Continue APIs

AutonomySim allows to pause and continue the simulation through `pause(is_paused)` API. To pause the simulation call `pause(True)` and to continue the simulation call `pause(False)`. You may have scenario, especially while using reinforcement learning, to run the simulation for specified amount of time and then automatically pause. While simulation is paused, you may then do some expensive computation, send a new command and then again run the simulation for specified amount of time. This can be achieved by API `continueForTime(seconds)`. This API runs the simulation for the specified number of seconds and then pauses the simulation. For example usage, please see [pause_continue_car.py](https://github.com/nervosys/AutonomySim/tree/main/PythonClient//car/pause_continue_car.py) and [pause_continue_drone.py](https://github.com/nervosys/AutonomySim/tree/main/PythonClient//multirotor/pause_continue_drone.py).

AutonomySim allows you to pause and continue the simulation through the `pause(is_paused)` API. To pause the simulation, call `pause(True)`; to continue, call `pause(False)`. You may have a scenario, especially when using reinforcement learning, where you run the simulation for a specified amount of time and then automatically pause it. While the simulation is paused, you may do some expensive computation, send a new command, and then run the simulation again for a specified amount of time. This can be achieved with the `continueForTime(seconds)` API, which runs the simulation for the specified number of seconds and then pauses it. For example usage, please see [pause_continue_car.py](https://github.com/nervosys/AutonomySim/tree/master/PythonClient//car/pause_continue_car.py) and [pause_continue_drone.py](https://github.com/nervosys/AutonomySim/tree/master/PythonClient//multirotor/pause_continue_drone.py).
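
A typical pause/compute/continue loop might look like the sketch below (method names follow the text above; `compute_next_action` and `apply_action` are placeholders for your own code, and the exact names exposed by the Python client may differ slightly):

```python
client.pause(True)               # freeze the simulation
action = compute_next_action()   # placeholder: expensive computation while paused
apply_action(client, action)     # placeholder: send a new command
client.continueForTime(2.0)      # run for 2 simulated seconds, then pause again
client.pause(False)              # resume free-running simulation
```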

### Collision API

The collision information can be obtained using the `simGetCollisionInfo` API. This call returns a struct containing not only whether a collision occurred, but also the collision position, surface normal, penetration depth, and so on.
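
For example (the field names `has_collided`, `position`, `normal`, and `penetration_depth` follow the description above and are illustrative):

```python
collision = client.simGetCollisionInfo()
if collision.has_collided:
    print("Collision at", collision.position,
          "surface normal:", collision.normal,
          "penetration depth:", collision.penetration_depth)
```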

### Time of Day API
### Time-of-day API

AutonomySim assumes there exists a sky sphere of class `EngineSky/BP_Sky_Sphere` in your environment with an [ADirectionalLight actor](https://github.com/nervosys/AutonomySim/blob/v1.4.0-linux/Unreal/Plugins/AutonomySim/Source/SimMode/SimModeBase.cpp#L224). By default, the position of the sun in the scene doesn't move with time. You can use the [settings](settings.md#timeofday) to set the latitude, longitude, date, and time that AutonomySim uses to compute the position of the sun in the scene.

You can also use the following API call to set the sun position according to a given date and time:

```
```python
simSetTimeOfDay(self, is_enabled, start_datetime = "", is_start_datetime_dst = False, celestial_clock_speed = 1, update_interval_secs = 60, move_sun = True)
```
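
For example, the sketch below places the sun for noon on a given date and then advances it at real-time speed, updating its position every 60 seconds (the datetime string format is an assumption; check the client documentation):

```python
client.simSetTimeOfDay(True, start_datetime="2024-02-07 12:00:00",
                       is_start_datetime_dst=False, celestial_clock_speed=1,
                       update_interval_secs=60, move_sun=True)
```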

@@ -176,18 +168,18 @@ Sim world extent, in the form of a vector of two GeoPoints, can be retrieved usi

By default, all weather effects are disabled. To enable weather effects, first call:

```
```python
simEnableWeather(True)
```

Various weather effects can be enabled using the `simSetWeatherParameter` method, which takes a `WeatherParameter`, for example:

```
```python
client.simSetWeatherParameter(AutonomySim.WeatherParameter.Rain, 0.25)
```

The second parameter value ranges from 0 to 1. The first parameter provides the following options:

```
```python
class WeatherParameter:
    Rain = 0
    Roadwetness = 1
    # ... (remaining values elided in this view) ...
    Fog = 7
```

Please note that `Roadwetness`, `RoadSnow` and `RoadLeaf` effects requires adding [materials](https://github.com/nervosys/AutonomySim/tree/main/Unreal/Plugins/AutonomySim/Content/Weather/WeatherFX) to your scene.
Please note that the `Roadwetness`, `RoadSnow`, and `RoadLeaf` effects require adding [materials](https://github.com/nervosys/AutonomySim/tree/master/Unreal/Plugins/AutonomySim/Content/Weather/WeatherFX) to your scene.

Please see [example code](https://github.com/nervosys/AutonomySim/blob/main/PythonClient/environment/weather.py) for more details.
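
Putting the calls together, a short end-to-end sketch (the intensity values are illustrative):

```python
client.simEnableWeather(True)                                           # turn the weather system on
client.simSetWeatherParameter(AutonomySim.WeatherParameter.Rain, 0.25)
client.simSetWeatherParameter(AutonomySim.WeatherParameter.Fog, 0.1)

# ... drive, fly, or collect data under the chosen weather ...

client.simSetWeatherParameter(AutonomySim.WeatherParameter.Rain, 0.0)   # clear one effect
client.simEnableWeather(False)                                          # disable all weather effects
```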

### Recording APIs

The recording APIs can be used to start recording data programmatically. The data to be recorded can be specified using [settings](settings.md#recording). To start recording, use:

```
```python
client.startRecording()
```
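
A matching stop call ends the recording. The sketch below assumes the usual `stopRecording` and `isRecording` counterparts exist in the Python client; verify the exact names in `client.py`:

```python
client.startRecording()
# ... drive or fly while the data selected in the Recording settings is written to disk ...
print("recording active:", client.isRecording())   # assumed helper
client.stopRecording()
```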

@@ -221,7 +213,7 @@ Note that this will only save the data as specfied in the settings. For full fre

Wind can be changed during simulation using `simSetWind()`. Wind is specified in the world frame, in NED direction, with values in m/s.

E.g. To set 20m/s wind in North (forward) direction -
For example, to set a 20 m/s wind in the north (forward) direction:

```python
# Set wind to (20,0,0) in NED (forward direction)
@@ -325,9 +317,9 @@ See the [Adding New APIs](adding_new_apis.md) page
## References and Examples

* [C++ API Examples](apis_cpp.md)
* [Car Examples](https://github.com/nervosys/AutonomySim/tree/main/PythonClient//car)
* [Multirotor Examples](https://github.com/nervosys/AutonomySim/tree/main/PythonClient//multirotor)
* [Computer Vision Examples](https://github.com/nervosys/AutonomySim/tree/main/PythonClient//computer_vision)
* [Car Examples](https://github.com/nervosys/AutonomySim/tree/master/PythonClient//car)
* [Multirotor Examples](https://github.com/nervosys/AutonomySim/tree/master/PythonClient//multirotor)
* [Computer Vision Examples](https://github.com/nervosys/AutonomySim/tree/master/PythonClient//computer_vision)
* [Move on Path](https://github.com/nervosys/AutonomySim/wiki/moveOnPath-demo) demo showing video of fast multirotor flight through Modular Neighborhood environment
* [Building a Hexacopter](https://github.com/nervosys/AutonomySim/wiki/hexacopter)
* [Building Point Clouds](https://github.com/nervosys/AutonomySim/wiki/Point-Clouds)
@@ -350,7 +342,7 @@ We recommend [Anaconda](https://www.anaconda.com/download/) to get Python tools

You can install OpenCV using:

```
```shell
conda install opencv
pip install opencv-python
```
