ClockPublisher
allows the publication of the simulation time from the clock operating within AWSIM. The current time is retrieved from a TimeSource
object via the SimulatorROS2Node
. AWSIM provides a convenient method for selecting the appropriate time source type as well as the flexibility to implement custom TimeSources
tailored to specific user requirements.
To enable the publication of the current time during simulation execution, ClockPublisher
must be included as a component within the scene. Moreover, to allow the TimeSource
to be set or changed, the TimeSourceSelector
object must also be present in the active scene.
The desired TimeSource
can be selected in two ways:
- The TimeSource type can be conveniently chosen directly from the editor interface.
- The TimeSource type can be specified in the JSON configuration file via the TimeSource field. The supported values for this field can be found in the list of available time sources in the "String Value for JSON Config" column.

| Type | String Value for JSON Config | Driven by | Start Value | Affected by Time Scale | Remarks |
|---|---|---|---|---|---|
| UNITY | unity | UnityEngine.Time | 0 | yes | |
| SS2 | ss2 | externally | depends on external source | no | used by the scenario simulator v2 |
| DOTNET_SYSTEM | system | System.DateTime | UNIX epoch | yes | starts with UNIX epoch time and progresses with System.DateTime scaled by AWSIM time scale |
| DOTNET_SIMULATION | simulation | System.DateTime | 0 | yes | starts with zero value and progresses with System.DateTime scaled by AWSIM time scale |
| ROS2 | ros2 | ROS2.Clock | UNIX epoch (by default) | no | uses ROS 2 time |
The ClockPublisher
operates within a dedicated thread called the 'Clock' thread. This design choice offers significant advantages by freeing the publishing process from the constraints imposed by fixed update limits. As a result, ClockPublisher
is able to consistently publish time at a high rate, ensuring stability and accuracy.
Running the clock publisher in a dedicated thread introduces the challenge of accessing shared resources from different threads. In our case, the Main Thread and the Clock Thread compete for TimeSource
resources. The diagram below illustrates this concurrent behaviour, with two distinct threads vying for access to the TimeSource
:
Given multiple sensors, each with its own publishing frequency, alongside a clock running at 100Hz, there is a notable competition for TimeSource
resources. In such cases, it becomes imperative for the TimeSource
class to be thread-safe.
The TimeSource
synchronization mechanism employs a mutex to lock the necessary resource for the current thread. The sequence of actions undertaken each time the GetTime()
method is called involves locking the mutex, reading and updating the current time value, and releasing the lock.
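As an illustration of this locking scheme, here is a minimal sketch (class and member names are assumptions for illustration, not the actual AWSIM implementation) of a time source whose GetTime() is guarded by a lock, together with a dedicated clock thread that calls it at roughly 100 Hz:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Illustrative only: a time source whose state is protected by a lock so that the
// main thread (sensors) and the clock thread can call GetTime() concurrently.
public class LockedTimeSource
{
    private readonly object lockObject = new object();
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();

    public void GetTime(out int seconds, out uint nanoseconds)
    {
        lock (lockObject)                               // acquire the mutex
        {
            double t = stopwatch.Elapsed.TotalSeconds;  // read the underlying clock
            seconds = (int)Math.Floor(t);
            nanoseconds = (uint)((t - seconds) * 1e9);
        }                                               // lock released on exit
    }
}

public static class ClockThreadSketch
{
    // A dedicated "Clock" thread reading time at roughly 100 Hz,
    // independently of Unity's fixed update rate.
    public static Thread Start(LockedTimeSource timeSource)
    {
        var thread = new Thread(() =>
        {
            while (true)
            {
                timeSource.GetTime(out int sec, out uint nsec);
                Console.WriteLine($"clock: {sec}.{nsec:D9}");  // stand-in for publishing /clock
                Thread.Sleep(10);                              // ~100 Hz
            }
        })
        { IsBackground = true };
        thread.Start();
        return thread;
    }
}
```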
There are two additional classes used to synchronise the UnityEngine TimeAsDouble
and TimeScale
values between threads:
TimeScaleProvider
: facilitates the synchronisation of the simulation time scale value across threads,TimeAsDoubleProvider
: provides access to the UnityEngine TimeAsDouble
to the threads other than the main thread.Environment
is an object that contains all the elements visible on the scene along with components that affect how they are rendered.
+It contains several objects aggregating static environment objects in terms of their type.
+Moreover, it contains elements responsible for controlling random traffic.
Own Environment prefab
+If you would like to develop your own prefab Environment
for AWSIM, we encourage you to read this tutorial.
AutowareSimulation scene
+If you would like to see how Environment
with random traffic works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation
scene described in this section.
Prefab Environment
is also used to create a point cloud (*.pcd
file) needed to locate the EgoVehicle
in the simulated AWSIM scene.
+The point cloud is created using the RGL
plugin and then used in Autoware.
+We encourage you to familiarize yourself with an example scene of creating a point cloud - described here.
Create PointCloud (*.pcd file)
+If you would like to learn how to create a point cloud in AWSIM using Environment
prefab, we encourage you to read this tutorial.
The architecture of an Environment
- with dependencies between components - is presented on the following diagram.
Prefabs can be found under the following paths:

| Name | Description | Path |
|---|---|---|
| Nishishinjuku | Only stationary visual elements, no traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku.prefab |
| Nishishinjuku RandomTraffic | Stationary visual elements along with random traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku RandomTraffic.prefab |
| Nishishinjuku Traffic | Stationary visual elements along with non-random traffic | Assets/AWSIM/Prefabs/Environments/Nishishinjuku Traffic.prefab |
+
Environment prefab
+Due to the similarity of the above prefabs, this section focuses on prefab Nishishinjuku RandomTraffic
.
+The exact differences between Nishishinjuku RandomTraffic
and Nishishinjuku Traffic
will be described in the future.
Environment name
+In order to standardize the documentation, the name Environment
will be used in this section as the equivalent of the prefab named Nishishinjuku RandomTraffic
.
Nishishinjuku RandomTraffic
prefab has the following content:
As you can see it contains:
+SJK*
objects - which are aggregators for visual models.RandomTrafficSimulator
, TrafficIntersections
, TrafficLanes
, StopLines
- which are responsible for random traffic of NPCVehicles
.NPCPedestrians
- which is an aggregator of NPCPedestrian
prefabs added to the scene.Volume
, Directional Light
- which are components that affect the appearance of objects on the scene.All of these objects are described below in this section.
+Nishishinjuku RandomTraffic
prefab contains many visual elements which are described here.
Nishishinjuku RandomTraffic
prefab is added to the Environment
object - between which there is rotation about the Oy
axis by 90 degrees.
+This rotation is added because of the differences in coordinate alignments between the Nishishinjuku RandomTraffic
prefab objects (which have been modeled as *.fbx
files) and the specifics of the GridZone definition (more on this is described here).
Object Environment
is added to AutowareSimulation
which is added directly to the main parent of the scene - there are no transformations between these objects.
Nishishinjuku RandomTraffic
(Environment
) prefab contains only one component:
In order to enable the movement of vehicles around the environment, additional layers have been added to the project: Ground
and Vehicle
.
All objects that are acting as a ground for NPCVehicles
and EgoVehicle
to move on have been added to Ground
layer - they cannot pass through each other and should collide for the physics engine to calculate their interactions.
For this purpose, NPCVehicles
and EgoVehicle
have been added to the Vehicle
layer.
In the project physics settings, it is ensured that collisions between objects in the Vehicle
layer are disabled (this applies to EgoVehicle
and NPCVehicles
- they do not collide with each other).
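In AWSIM this is configured through the Layer Collision Matrix in the project physics settings; the snippet below is only an illustration of the equivalent effect expressed in code (it is not part of AWSIM):

```csharp
using UnityEngine;

// Illustrative only: disable collisions between objects on the "Vehicle" layer,
// mirroring what the Layer Collision Matrix in Project Settings > Physics does.
// EgoVehicle and NPCVehicles share this layer, so they will not collide with each other.
public class VehicleLayerCollisionSetup : MonoBehaviour
{
    private void Awake()
    {
        int vehicleLayer = LayerMask.NameToLayer("Vehicle");
        Physics.IgnoreLayerCollision(vehicleLayer, vehicleLayer, true);
    }
}
```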
Due to the specificity of the use of RandomTrafficSimulator
, TrafficIntersections
, TrafficLanes
, StopLines
objects, they have been described in a separate section Traffic Components - where all the elements necessary in simulated random traffic are presented.
The visual elements have been loaded and organized using the *.fbx
files which can be found under the path:
Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_optimized/Models/*
+
Environment
prefab contains several objects aggregating stationary visual elements of space by their category:
SJK01_P01
- contains all objects constituting the ground of the environment, these are roads and green fields - each of them contains a MeshColliders
and layer set as Ground
to ensure collisions with NPCVehicles
and EgoVehicle
.
+
SJK01_P02
- contains all road surface markings on roads added to the environment.
+The objects of this group do not have MeshColliders
and their layer is Default
.
SJK01_P03
- contains all the vertical poles added to the environment, such as lamp posts, road signs and traffic light poles.
+Only TrafficLight
poles and PedestrianLight
poles have MeshCollider
added.
+The layer for all objects is Default
.
SJK01_P04
- contains all barriers added to the environment, such as barriers by sidewalks.
+The objects of this group do not have MeshColliders
and their layer is Default
.
SJK01_P05
- contains all greenery added to the environment, such as trees, shrubs, fragments of greenery next to buildings.
+The objects of this group do not have MeshColliders
and their layer is Default
.
SJK01_P06
- contains all buildings added to the environment.
+Objects of this category also have a MeshCollider
added, but their layer is Default
.
Scene Manager
+For models (visual elements) added to the prefab to work properly with the LidarSensor
sensor using RGL
, make sure that the SceneManager
component is added to the scene - more about it is described in this section.
In the scene containing Nishishinjuku RandomTraffic
prefab Scene Manager (script) is added as a component to the AutowareSimulation
object containing the Environment
.
TrafficLights
are a stationary visual element belonging to the SJK01_P03
group.
+The lights are divided into two types, the classic TrafficLights
used by vehicles at intersections and the PedestrianLights
found at crosswalks.
Classic traffic lights are aggregated at object TrafficLightA01_Root01_ALL_GP01
while lights used by pedestrians are aggregated at object TrafficLightB01_Root01_All_GP01
.
TrafficLights
and PedestrianLights
are developed using models available in the form of *.fbx
files, which can be found under the following path:
+Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Models/*
+
TrafficLights
lights, outside their housing, always contain 3 signaling light sources of different colors - from left to right: green, yellow, red.
+Optionally, they can have additional sources of signaling the ability to drive in a specific direction in the form of one or three signaling arrows.
In the environment there are many classic lights with different signaling configurations. +However, each contains:
+SJK01_P03
).Mesh
of the object.Mesh
, including its geometry, textures, and materials, giving it a visual appearance in the scene.Mesh
.An important element that is configured in the TrafficLights
object are the materials in the Mesh Renderer
component.
+Material with index 0 always applies to the housing of the lights.
Subsequent elements 1-6 correspond to successive slots of light sources (round luminous objects) - starting from the upper left corner of the object and moving to the right, then down to the bottom row and back to the left corner.
+These indexes are used in script Traffic Light (script) - described here.
Materials for lighting slots that are assigned in Mesh Renderer
can be found in the following path:
+Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Materials/*
PedestrianLights
lights, outside their housing, always contain 2 signaling light sources of different colors - red on top and green on the bottom.
In the environment there are many pedestrian lights - they have the same components as classic TrafficLights
, but the main difference is the configuration of their materials.
An important element that is configured in the PedestrianLights
object are the materials in the Mesh Renderer
component.
+Material with index 0 always applies to the housing of the lights.
+Subsequent elements 1-2 correspond to successive slots of light sources (round luminous objects) - starting from top to bottom.
+These indexes are used in script Traffic Light (script) - described here.
+Materials for lighting slots that are assigned in Mesh Renderer
can be found in the following path:
+Assets/AWSIM/Externals/Nishishinjuku/Nishishinjuku_opimized/Models/TrafficLights/Materials/*
Volume
is a GameObject with a Volume component, which is used in the High Definition Render Pipeline (HDRP).
+It defines a set of scene settings and properties.
+It can be either global, affecting the entire scene, or local, influencing specific areas within the scene.
+Volumes are used to interpolate between different property values based on the Camera's position, allowing for dynamic changes to environment settings such as fog color, density, and other visual effects.
In the case of the Nishishinjuku RandomTraffic
prefab, the volume works in global mode and has a Volume profile loaded.
+This volume profile has a structure that overrides the default properties of Volume related to the following components: Fog, Shadows, Ambient Occlusion, Visual Environment, HDRI Sky.
+It can be found in the following path:
+Assets/AWSIM/Prefabs/Environments/Nishishinjuku/Volume Profile.asset
Directional Light
is a GameObject with a Light
component which is used in the High Definition Render Pipeline (HDRP).
+It controls the shape, color, and intensity of the light.
+It also controls whether or not the light casts shadows in scene, as well as more advanced settings.
In the case of the Nishishinjuku RandomTraffic
prefab, a Directional
type light is added.
+It creates effects that are similar to sunlight in scene.
+This light illuminates all GameObjects in the scene as if the light rays are parallel and always from the same direction.
+Directional
light disregards the distance between the Light itself and the target, so the light does not diminish with distance.
+The strength of the Light (Intensity
) is set to 73123.09 Lux
.
+In addition, a Shadow Map
with a resolution of 4096
is enabled, which is updated in Every Frame
of the simulation.
+The transform of the Directional Light
object is set in such a way that it shines on the environment almost vertically from above.
NPCPedestrians
is an aggregating object for NPCPedestrian
objects placed in the environment.
+Prefab Nishishinjuku RandomTraffic
has 7 NPCPedestrian
(humanElegant
) prefabs defined in selected places.
+More about this NPCPedestrian
prefab you can read in this section.
Environment (script) contains the information about how a simulated Environment
is positioned in the real world.
+That means it describes what is the real world position of a simulated Environment
.
AWSIM uses part of a Military Grid Reference System (MGRS).
+To understand this topic, you only need to know, that using MGRS you can specify distinct parts of the globe with different accuracy.
+For AWSIM the chosen accuracy is a 100x100 km square.
+Such a square is identified with a unique code like 54SUE
(for more information on Grid Zone please see this page).
Inside this Grid Zone the exact location is specified with the offset calculated from the bottom-left corner of the Grid Zone.
+You can interpret the Grid Zone as a local coordinate system in which you position the Environment
.
In the Nishishinjuku RandomTraffic
prefab, the simulated Environment
is positioned in the Grid Zone 54SUE
.
The offset is equal to 81655.73
meters in the Ox
axis, 50137.43
meters in the Oy
axis and 42.49998
meters in the Oz
axis.
+In addition to this shift, it is also necessary to rotate the Environment in the scene by 90
degrees about the Oy
axis - this is ensured by the transform in the prefab object.
This means that the 3D models were created in reference to this exact point and because of that the 3D models of Environment
align perfectly with the data from Lanelet2.
The essence of Environment (script)
+The Environment (script) configuration is necessary at the moment of loading data from Lanelet2.
+Internally it shifts the elements from Lanelet2 by the given offset so that they align with the Environment
that is located at the local origin with no offset.
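As a rough illustration of that shift (a simplified sketch, not the actual AWSIM code), a Lanelet2 point expressed in MGRS-local coordinates can be brought into the Environment's local frame by subtracting the configured offset; the ROS-to-Unity axis conversion and the 90-degree rotation mentioned above are deliberately left out:

```csharp
// Simplified sketch only: shift a Lanelet2 point, given in MGRS-local coordinates
// (local_x, local_y, elevation), by the Environment's configured MGRS offset so that
// it aligns with an Environment placed at the local origin. Axis conversion between
// the ROS and Unity coordinate systems is intentionally omitted here.
public static class LaneletOffsetSketch
{
    public static (double x, double y, double z) ToEnvironmentFrame(
        double localX, double localY, double elevation,
        double offsetX, double offsetY, double offsetZ)
    {
        return (localX - offsetX, localY - offsetY, elevation - offsetZ);
    }
}

// With the Nishishinjuku offset (81655.73, 50137.43, 42.49998) and a node with
// local_x = 81596.1357, local_y = 50194.0803, ele = 34.137 (the example node shown
// in the Lanelet2 section), this yields roughly (-59.59, 56.65, -8.36) metres
// relative to the Environment origin.
```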
Environment is an important part of a Scene in AWSIM.
+Every aspect of the simulated surrounding world needs to be included in the Environment
prefab - in this section you will learn how to develop it.
+However, first Lanelet2 needs to be developed along with 3D models of the world, which will be the main elements of this prefab.
Tip
+If you want to learn more about the Environment at AWSIM, please visit this page.
Before you start creating Lanelet2, we encourage you to read the documentation to find out what Lanelet2 is all about. Lanelet2 can be created using VectorMapBuilder (VMB
) based on the PCD obtained from real-life LiDAR sensor.
When working with the VMB
, it is necessary to ensure the most accurate mapping of the road situation using the available elements.
+Especially important are TrafficLanes
created in VMB
as connected Road Nodes
and StopLines
created in VMB as Road Surface Stoplines
.
Lanelet2 positioning
+Lanelet2 should be created in MGRS coordinates of the real place you are recreating. +Please position your Lanelet2 relative to the origin (bottom left corner) of the MGRS Grid Zone with the 100 km Square ID in which the location lays. More details can be read here.
You can think of the Grid Zone as a local coordinate system. Instead of making the global (0,0) point (the crossing of the Equator and the Prime Meridian) our coordinate system origin, we take a closer one. The MGRS Grid Zone with 100 km Square ID code designates a 100x100 km square on the map and we take its bottom left corner as our local origin.
+Example
+Lets examine one node from an example Lanelet2 map:
+<node id="4" lat="35.68855194431519" lon="139.69142711058254">
+ <tag k="mgrs_code" v="54SUE815501"/>
+ <tag k="local_x" v="81596.1357"/>
+ <tag k="local_y" v="50194.0803"/>
+ <tag k="ele" v="34.137"/>
+</node>
+
The node with id="4"
position is described as absolute coordinates given in the <node>
.
In this example the coordinates are as follows: lat="35.68855194431519" and lon="139.69142711058254".
It is also described as local transformation defined as a translation relative to the origin of the MGRS Grid Zone with 100 km Square ID (bottom left corner).
+The MGRS Grid Zone designation with 100 km Square ID in this case is equal to 54SUE
.
+In this example the offset in the X axis is as follows k="local_x" v="81596.1357"
+and the offset in the Y axis is as follows k="local_y" v="50194.0803"
.
Note that elevation information is also included.
+You can create 3D
models of an Environment
as you wish.
+It is advised however, to prepare the models in form of .fbx
files.
+Additionally you should include materials and textures in separate directories.
+Many models are delivered in this format.
+This file format allows you to import models into Unity with materials and replace materials while importing. You can learn more about it here.
You can see a .fbx
model added and modified on the fly in the example of this section.
To improve the simulation performance of a scene containing your Environment
prefab, please keep in mind some of these tips when creating 3D models:
Prefer many smaller models over a few big ones.
In general it is beneficial for performance to make one small mesh of an object like a tree and reuse it on the scene by placing many prefabs, instead of making one giant mesh containing all trees in the given scene. It is beneficial even in situations when you are not reusing the meshes. Let's say you have a city with many buildings - and every one of those buildings is different - it is still advised to model those buildings individually and make them separate GameObjects.
+Choose texture resolution appropriately.
+Always have in mind what is the target usage of your texture. +Avoid making a high resolution texture for a small object or the one that will always be far away from the camera. +This way you can save some computing power by not calculating the details that will not be seen because of the screen resolution.
+Practical advice
+You can follow these simple rules when deciding on texture quality (texel density):
+(optional) Add animation.
Add animations to the correct objects. If some elements in the 3D model are interactive, they should be divided into separate parts.
+What's more, consider these tips related directly to the use of 3D models in AWSIM:
+GameObject
. Also, each light in the traffic light should be split into separate materials.In this part, you will learn how to create a Environment
prefab - that is, develop a GameObject containing all the necessary elements and save it as a prefab.
In this section we will add roads, buildings, greenery, signs, road markings etc. to our scene.
+Most often your models will be saved in the .fbx
format.
+If so, you can customize the materials in the imported model just before importing it.
+Sometimes it is necessary as models come with placeholder materials.
+You can either
In order to add 3D models from the .fbx
file to the Scene please do the following steps:
Example
+An example video of the full process of importing a model, changing the materials, saving new model as a prefab and importing the new prefab. +
+When creating a complex Environment with many elements you should group them appropriately in the Hierarchy view. +This depends on the individual style you like more, but it is a good practice to add all repeating elements into one common Object. +E.g. all identical traffic lights grouped in TrafficLights Object. +The same can be done with trees, buildings, signs etc. +You can group Objects as you like.
+Object hierarchy
+When adding elements to the Environment that are part of the static world (like 3D models of buildings, traffic lights etc.) it is good practice to collect them in one parent GameObject called Map
or something similar.
By doing this you can set a transformation of the parent GameObject Map
to adjust the world pose in reference to e.g. loaded objects from Lanelet2.
+
Remember to unpack
+Please remember to unpack all Object added into the scene.
+If you don't they will change materials together with the .fbx
model file as demonstrated in the example below.
This is unwanted behavior. +When you import a model and change some materials, but leave the rest default and don't unpack the model, then your instances of this model on the scene may change when you change the original fbx model settings.
+See the example below to visualize what is the problem.
+In this example we will
+Watch what happens, the instance on the Scene changes the materials together with the model. +This only happens if you don't unpack the model.
+ +Example Environment after adding 3D models
+After completing this step you should have an Environment
Object that looks similar to the one presented below.
The Environment
with 3D models can look similar to the one presented below.
Add an Environment Script
as component in the Environment
object (see the last example in section before).
+It does not change the appearance of the Environment, but is necessary for the simulation to work correctly.
Click on the Add Component button in the Environment
object.
Search for Environment
and select it.
Set the MGRS
to the offset of your Environment as explained in this section.
Info
+Due to the differences between VectorMapBuilder and Unity, it may be necessary to set the transform of the Environment
object.
+The transform in Environment
should be set in such a way that the TrafficLanes
match the modeled roads. Most often it is necessary to set the positive 90
degree rotation over Y
axis.
This step should be done after importing items from lanelet2. Only then will you know whether the Environment is misaligned with the items from lanelet2.
+ +Create a new child Object of the Environment and name it Directional Light
.
Click Add Component
button, search for Light
and select it.
Change light Type to Directional
.
Now you can configure the directional light as you wish. E.g. change the intensity or orientation.
+ +Tip
+For more details on lighting check out official Unity documentation.
+Example Environment after adding Directional Light
+ +Create a new child object of the Environment and name it Volume
.
Click Add Component
search for Volume
and select it.
Change the Profile to Volume Profile
and wait for changes to take effect.
Now you can configure the Volume individually as you wish.
+Tip
+For more details on volumes checkout official Unity documentation.
+Example Environment after adding Volume
+ +Make NPCPedestrians
parent object.
Open Assets/AWSIM/Prefabs/NPCs/Pedestrians
in Project view and drag a humanElegant
into the NPCPedestrians
parent object.
Click Add Component
in the humanElegant object and search for Simple Pedestrian Walker Controller
Script and select it.
This is a simple Script that makes the pedestrian walk straight and turn around indefinitely. +You can configure pedestrian behavior with 2 parameters.
+Tip
+The Simple Pedestrian Walker Controller
Script is best suited to be used on pavements.
Finally position the NPCPedestrian
on the scene where you want it to start walking.
Warning
+Remember to set correct orientation, as the NPCPedestrian
will walk straight from the starting position with the starting orientation.
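For intuition, a walker of this kind can be sketched as follows (a simplified sketch with assumed parameter names Duration and Speed, not the actual AWSIM script):

```csharp
using UnityEngine;

// Illustrative sketch of a "walk straight, then turn around" pedestrian controller.
public class StraightWalkerSketch : MonoBehaviour
{
    [SerializeField] private float duration = 10f;  // seconds walked before turning around
    [SerializeField] private float speed = 1.2f;    // walking speed in m/s

    private float elapsed;

    private void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed >= duration)
        {
            transform.Rotate(0f, 180f, 0f);  // turn around and walk back
            elapsed = 0f;
        }
        // Moves along the local forward axis, i.e. straight from the starting orientation.
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}
```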
Example Environment after adding NPC Pedestrians
+ +After doing all the previous steps and having your Environment finished you can save it to prefab format.
+Assets/AWSIM/Prefabs/Environments
).Environment
Object into the Project view.Success
+Once you've added the Environment
, you need to add and configure TrafficLights
.
+For details please visit this tutorial.
To add a Random Traffic to your scene you need the Random Traffic Simulator Script.
+Create a new Game Object as a child of Environment
and call it RandomTrafficSimulator
.
Click a button Add Component
in the Inspector
to add a script.
A small window should pop-up.
+Search for RandomTrafficSimulator
script and add it by double clicking it or by pressing enter.
After clicking on the newly created RandomTrafficSimulator
object in the Scene tree you should see something like this in the Inspector
view.
Random Traffic Simulator, as the name suggests, generates traffic based on random numbers. +To replicate situations you can set a specific seed.
+You can also set Vehicle Layer Mask
and Ground Layer Mask
.
+It is important to set these layers correctly, as they are a base for vehicle physics.
If set incorrectly, the vehicles may fall through the ground into infinity.
Random Traffic Simulator Script moves, spawns and despawns vehicles based on the configuration. These settings can be adjusted to your preference.
+Setting Max Vehicle Count.
+This parameter sets a limit on how many vehicles can be added to the scene at one time.
+NPC Prefabs
+These are models of vehicles that should be spawned on the scene, to add NPC Prefabs please follow these steps:
To do this, click on the "+" sign and, in the new list element at the bottom, click on the small icon on the right to select a prefab.
+ +Change to the Assets
tab in the small windows that popped-up.
Search for the Vehicle prefab you want to add, e.g. Hatchback
.
Available NPC prefabs are shown in the NPC Vehicle section.
+Control NPC Vehicle spawning
Random Traffic Simulator Script will randomly select one prefab from the Npc Prefabs
list every time when there are not enough vehicles on the scene (the number of vehicles on the scene is smaller than the number specified in the Max Vehicle Count
field).
You can control the odds of selecting one vehicle prefab over another by adding more than one instance of the same prefab to this list.
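The effect of duplicate entries on the selection odds can be illustrated with a uniform random pick from the list (a sketch under that assumption, not the actual Random Traffic Simulator code):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class NpcPrefabSelectionSketch
{
    // Uniformly picks one entry, so a prefab listed twice is twice as likely to be chosen.
    public static GameObject PickPrefab(List<GameObject> npcPrefabs)
    {
        int index = Random.Range(0, npcPrefabs.Count);  // upper bound is exclusive for ints
        return npcPrefabs[index];
    }
}

// Example: a list containing { Hatchback, Hatchback, Truck } spawns a Hatchback
// about two thirds of the time and a Truck about one third of the time.
```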
+Spawnable lanes are the lanes on which new vehicles can be spawned by the Random Traffic Simulator Script. +Best practice is to use beginnings of the lanes on the edges of the map as spawnable lanes.
+Warning
+Make sure you have a lanelet added into your scene. +The full tutorial on this topic can be found here.
+Adding spawnable lanes is similar to Adding NPC Prefabs.
+Add an element to the Spawnable Lanes
list by clicking on the "+" symbol or by selecting number of lanes directly.
Now you can click on the small icon on the right of the list element and select a Traffic Lane you are interested in.
+ +Unfortunately all Traffic Lanes have the same names so it can be difficult to know which one to use. +Alternatively you can do the following to add a traffic lane by visually selecting it in the editor:
+Lock RandomTrafficSimulator
in the Inspector view.
Select the Traffic Lane you are interested in on the Scene and as it gets highlighted in the Hierarchy view you can now drag and drop this Traffic Lane into the appropriate list element.
+ +The last thing to configure is the behavior of NPCVehicles. +You can specify acceleration rate of vehicles and three values of deceleration.
+ +Acceleration
+This value is used for every acceleration the vehicle performs (after stop line or traffic lights).
+Deceleration
+This deceleration value is used for most casual traffic situations like slowing down before stop line.
+Sudden Deceleration
+This deceleration rate is used for emergency situations - when using standard deceleration rate is not enough to prevent some accident from happening (e.g. vehicle on the intersection didn't give way when it was supposed to).
+Absolute Deceleration
+This deceleration rate is a last resort for preventing a crash from happening. +When no other deceleration is enough to prevent an accident this value is used. +This should be set to the highest value achievable by a vehicle.
+Question
+This configuration is common for all vehicles managed by the Random Traffic Simulator Script
.
Success
+The last thing that needs to be done for RandomTraffic
to work properly is to add intersections with traffic lights and configure their sequences. Details here.
Every TrafficIntersection
on the scene needs to be added as a GameObject.
+Best practice is to create a parent object TrafficIntersections
and add all instances of TrafficIntersection
as its children.
+You can do this the same as with Random Traffic Simulator.
Traffic Lights configuration
+Before performing this step, check all TrafficLights
for correct configuration and make sure that TrafficLights
have added scripts. If you want to learn how to add and configure it check out this tutorial.
TrafficIntersection
needs to be marked with a box collider.
+First click on the Add Component
button.
In the window that popped up search for Box Collider
and select it.
Then set the position and orientation and size of the Box Collider.
+You can do this by manipulating Box Collider properties Center
and Size
in the Inspector view.
Traffic Intersection Box Collider guidelines
+When adding a Box Collider marking your Traffic Intersection please make sure that
+Click on the Add Component
button.
In the window that popped up search for Traffic Intersection
and select it.
You need to set a proper Collider Mask in order for the script to work.
+ +Traffic Light Groups are groups of traffic lights that are synchronized, meaning they light up with the same color and pattern at all times.
+Traffic lights are divided into groups to simplify the process of creating a lighting sequence. +By default you will see 4 Traffic Light Groups, you can add and remove them to suit your needs.
+First choose from the drop-down menu called Group
the Traffic Light Group name you want to assign to your Traffic Light Group.
+
+Then add as many Traffic Lights as you want your group to have. +From the drop-down menu select the Traffic Lights you want to add.
++
+Select Traffic Lights visually
+If you have a lot of Traffic Lights it can be challenging to add them from the list. +You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.
+Lighting Sequences is a list of commands based on which the Traffic Lights will operate on an intersection. +The elements in the Lighting Sequences list are changes (commands) that will be executed on the Traffic Light Groups.
+Group Lighting Order should be interpreted as a command (or order) given to all Traffic Lights in selected Traffic Light Group. +In Group Lighting Orders you can set different traffic light status for every Traffic Light Group (in separate elements). +Lighting sequences list is processed in an infinite loop.
It should be noted that changes applied to one Traffic Light Group will remain the same until the next Group Lighting Order is given to this Traffic Light Group. This means that if in one Group Lighting Order no command is sent to a Traffic Light Group, then this Group will retain its current lighting pattern (color, bulb and status).
+For every Lighting Sequences Element you have to specify the following
+Interval Sec
+This is the time for which the sequence should wait until executing next order, so how long this state will be active.
+For every element in Group Lighting Orders there needs to be specified
+List of orders (Bulb Data)
+In other words - what bulbs should be turned on, their color and pattern.
+SOLID_OFF
is necessary only when you want to turn the Traffic Light completely off, meaning no bulb will light up)Note
+When applying the change to a Traffic Light
+This means it is only necessary to supply the data about what bulbs should be turned on. +E.g. you don't have to turn off a red bulb when turning on the green one.
+Warning
+The first Element in the Lighting Sequences (in most cases) should contain bulb data for every Traffic Light Group. +Traffic Light Groups not specified in the first Element will not light up at the beginning of the scene.
+Lets consider the following lighting sequence element.
+ +In the Lighting Sequence Element 5 we tell all Traffic Lights in the Vehicle Traffic Light Group 2 to light up their Green Bulb with the color Green and status Solid On which means that they will be turned on all the time. +We also implicitly tell them to turn all other Bulbs off.
At the same time we tell all Traffic Lights in the Pedestrian Traffic Light Group 2 to do the very same thing.
+This state will be active for the next 15 seconds, and after that Traffic Intersection will move to the next Element in the Sequence.
+Now lets consider the following Lighting Sequences Element 6.
+ +Here we order the Traffic Lights in the Pedestrian Traffic Light Group 2 to light up their Green Bulb with the color Green and status Flashing. +We also implicitly tell them to turn all other bulbs off, which were already off from the implicit change in Element 5, so this effectively does nothing.
+Note that Lighting Sequences Element 6 has no orders for Vehicle Traffic Light Group 2. +This means that Traffic Lights in the Vehicle Traffic Light Group 2 will hold on to their earlier orders.
This state will be active for 5 seconds, which means that Traffic Lights in the Vehicle Traffic Light Group 2 will be lighting solid green for a total of 20 seconds.
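To summarize the structure described above, a lighting sequence element can be pictured as the following data shape (a hypothetical sketch for intuition only, not the actual AWSIM types):

```csharp
using System.Collections.Generic;

// Hypothetical data shape of one Lighting Sequences element, for intuition only.
public enum BulbType { GREEN_BULB, YELLOW_BULB, RED_BULB }
public enum BulbStatus { SOLID_ON, SOLID_OFF, FLASHING }

public class GroupLightingOrder
{
    public string group;                                        // e.g. "Vehicle Traffic Light Group 2"
    public List<(BulbType bulb, BulbStatus status)> bulbData;   // bulbs to turn on; unlisted bulbs go off
}

public class LightingSequenceElement
{
    public float intervalSec;                 // how long this state stays active
    public List<GroupLightingOrder> orders;   // groups not listed here keep their previous state
}
```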
+To test how your Traffic Intersection behaves simply run the Scene as shown here (but don't launch Autoware).
+To take a better look at the Traffic Lights you can change to the Scene view by pressing ctrl + 1
- now you can move the camera freely (to go back to the Game view simply press ctrl + 2
).
As the time passes you can examine whether your Traffic Intersection is configured correctly.
+ + + + + + + + + + + + + +To add RandomTraffic
to the Environment
, it is necessary to load elements from the lanelet2.
+As a result of loading, TrafficLanes
and StopLines
will be added to the scene. Details of these components can be found here.
Warning
+Before following this tutorial make sure you have added an Environment Script and set a proper MGRS
offset position. This position is used when loading elements from the lanelet2!
Click on the AWSIM
button in the top menu of the Unity editor and navigate to AWSIM -> Random Traffic -> Load Lanelet
.
In the window that pops-up select your osm file, change some Waypoint Settings to suit your needs and click Load
.
Waypoint Settings explanation
Traffic Lanes and Stop Lines should appear in the Hierarchy view.
+If they appear somewhere else in your Hierarchy tree, then move them into the Environment
object.
The Traffic Lanes that were loaded should be configured according to the road situation. The aspects you can configure are described below.
+Right of way
The right of way has to be configured so that Vehicles know how to behave in traffic. To configure this please visit the dedicated section in Add a Traffic Lane. After you have set the right of way for all traffic lanes please follow this final step.
+Stop Line
+Assuming you have all Stop Lines loaded from the lanelet you have to add them to the Traffic Lanes. +For detailed instruction please visit a dedicated section in Add a Traffic Lane.
+If - for any reason - you don't have all the Stop Lines added, please follow this dedicated section.
+If you want to test your Traffic Lanes you have to try running a Random Traffic. +To verify one particular Traffic Lane or Traffic Lane connection you can make a new spawnable lane next to the Traffic Lane you want to test. +This way you can be sure NPCVehicles will start driving on the Traffic Lane you are interested in at the beginning.
+When something goes wrong when loading data from lanelet2 or you just want to add another StopLine manually please do the following
+Add a new GameObject StopLine in the StopLines parent object.
+ +Add a StopLine Script by clicking 'Add Component' and searching for Stop Line
.
Example
+So far your Stop Line should look like the following
+ +Set the position of points Element 0
and Element 1
.
+These Elements are the two end points of a Stop Line.
+The Stop Line will span between these points.
You don't need to set any data in the 'Transform' section as it is not used anyway.
+StopLine coordinate system
+Please note that the Stop Line Script operates in the global coordinate system. +The transformations of StopLine Object and its parent Objects won't affect the Stop Line.
+In this example you can see that the Position of the Game Object does not affect the position and orientation of the Stop Line.
+For a Game Object in the center of the coordinate system.
The Stop Line is in the specified position.
+ +However with the Game Object shifted in X axis.
+ +The Stop Line stays in the same position as before, not affected by any transformations happening to the Game Object.
+ +Select whether there is a Stop Sign.
+Select the Has Stop Sign
tick-box confirming that this Stop Line has a Stop Sign.
+The Stop Sign can be either vertical or horizontal.
Select from the drop-down menu the Traffic Light that is on the Traffic Intersection and is facing the vehicle that would be driving on the Traffic Lane connected with the Stop Line you are configuring.
+In other words select the right Traffic Light for the Lane on which your Stop Line is placed.
+ +Select Traffic Lights visually
+If you have a lot of Traffic Lights it can be challenging to add them from the list. +You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.
+Every Stop Line has to be connected to a Traffic Lane. +This is done in the Traffic Lane configuration. +For this reason please check the Traffic Lane section for more details.
It is possible that something may go wrong when reading a lanelet2 and you need to add an additional Traffic Lane, or you may just want to add one. To add a Traffic Lane manually please follow the steps below.
+Add a new Game Object called TrafficLane
into the TrafficLanes
parent Object.
Click the 'Add Component' button and search for the Traffic lane
script and select it.
Example
+So far your Traffic Lane should look like the following.
+ +Now we will configure the 'Waypoints' list.
+This list is an ordered list of nest points defining the Traffic Lane.
+When you want to add a waypoint to a Traffic Lane just click on the +
button or specify the number of waypoints in the numeric field to the right of the 'Waypoints' identifier.
The order of elements on this list determines how waypoints are connected.
+ +Traffic Lane coordinate system
Please note that the Traffic Lane waypoints are located in the global coordinate system; any transformations set on the Game Object or its parent Objects will be ignored.
+This behavior is the same as with the Stop Line. +You can see the example provided in the Stop Line tutorial.
+General advice
You also need to select the Turn Direction. This field describes what the vehicles traveling on this Traffic Lane are doing in reference to other Traffic Lanes. You need to select whether the vehicles are
+STRAIGHT
)RIGHT
)LEFT
You need to add all Traffic Lanes that have their beginning at the end of this Traffic Lane into the Next Lanes list. In other words, these are the lanes the vehicle can choose from when deciding where to drive (e.g. driving straight or turning left with a choice of two different Traffic Lanes).
+To do this click the +
sign in the Next Lanes list and in the element that appeared select the correct Traffic Lane.
Lets consider the following Traffic Intersection.
+ +In this example we will consider the Traffic Lane driving from the bottom of the screen and turning right. +After finishing driving in this Traffic Lane the vehicle has a choice of 4 different Traffic Lanes each turning into different lane on the parallel road.
+All 4 Traffic Lanes are connected to the considered Traffic Lane. +This situation is reflected in the Traffic Lane configuration shown below.
+ +Select Traffic Lanes visually
+If you have a lot of Traffic Lanes it can be challenging to add them from the list. +You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.
+Traffic Lane has to have previous Traffic Lanes configured. +This is done in the exact same way as configuring next lanes which was shown in the previous step. +Please do the same, but add Traffic Lanes that are before the configured one instead of the ones after into the Prev Lanes list.
+Now we will configure the Right Of Way Lanes. +The Right Of Way Lanes is a list of Traffic Lanes that have a priority over the configured one. +The process of adding the Right Of Way Lanes is the same as with adding Next Lanes. +For this reason we ask you to see the aforementioned step for detailed description on how to do this (the only difference is that you add Traffic Lanes to the Right Of Way Lanes list).
+In this example lets consider the Traffic Lane highlighted in blue from the Traffic Intersection below.
This Traffic Lane has to give way to all Traffic Lanes highlighted in yellow. This means all of the yellow Traffic Lanes have to be added to the 'Right Of Way Lanes' list, which is reflected in the configuration shown below.
+ +Adding a Stop Line is necessary only when at the end of the configured Traffic Lane the Stop Line is present. +If so, please select the correct Stop Line from the drop-down list.
+ +Select Stop Line visually
+If you have a lot of Stop Lines it can be challenging to add them from the list. +You can select them visually from the Scene the same as you had selected Traffic Lanes in the Random Traffic Simulator.
+In the field called Speed Limit
simply write the speed limit that is in effect on the configured Traffic Lane.
To make the Right Of Ways list you configured earlier take effect simply click the 'Set RightOfWays' button.
+ + + + + + + + + + + + + + +To add TrafficLights
into your Environment
follow steps below.
Tip
+In the Environment
you are creating there will most likely be many TrafficLights
that should look and work the same way.
+To simplify the process of creating an environment it is advised to create one TrafficLight
of each type with this tutorial and then save them as prefabs that you will be able to reuse.
Into your Map
object in the Hierarchy view add a new Child Object and name it appropriately.
Click on the Add Component
button.
Search for Mesh filter
and select it by clicking on it.
For each TrafficLight
specify the mesh you want to use.
The same way as above search for Mesh Renderer
and select it.
Now you need to specify individual component materials.
+For example in the Traffic.Lights.001
mesh there are four sub-meshes that need their individual materials.
To specify a material click on the selection button on Materials
element and search for the material you want to use and select it.
Repeat this process until you specify all materials.
+When you add one material more than there are sub-meshes you will see this warning.
+Then just remove the last material and the TrafficLight
is prepared.
Info
+Different material for every bulb is necessary for the color changing behavior that we expect from traffic lights.
+Even though in most cases you will use the same material for every Bulb
, having them as different elements is necessary.
+Please only use models of TrafficLights
that have different Materials Elements
for every Bulb
.
Materials order
+When specifying materials remember the order in which they are used in the mesh.
+Especially remember what Materials Elements
are associated with every Bulb
in the TrafficLight
.
+This information will be needed later.
Example
+In the case of Traffic.Lights.001
the bulb materials are ordered starting from the left side with index 1 and increasing to the right.
The same way as above search for Mesh Collider
and select it.
+Collider may not seem useful, as the TrafficLight
in many cases will be out of reach of vehicles.
+It is however used for LiDAR simulation, so it is advised to always add colliders to Objects that should be detected by LiDARs.
Finally after configuring all visual aspects of the TrafficLight
you can position it in the environment.
+Do this by dragging a TrafficLight
with a square representing a plane or with an arrow representing one axis.
The Traffic Light
Script will enable you to control how the TrafficLight
lights up and create sequences.
Click on Add Component
, search for the Traffic Light
script and select it.
You should see the Bulb Emission config
already configured. These are the colors that will be used to light up the Bulbs in TrafficLight
. You may adjust them to suit your needs.
You will have to specify Bulb material config
, in which you should add elements with fields:
Bulb Type
- One of the predefined Bulb types that describes the Bulb (its color and pattern).
Material Index
- Index of the material that you want to be associated with the Bulb Type. This is where you need to use the knowledge from earlier where we said you have to remember what Materials Element corresponds to which bulb sub-mesh.
Bulb configuration example
+Here we specify an element Type as RED_BULB
and associate it with Material that has an index 3.
+This will result in associating the right most bulb with the name RED_BULB
.
+This information will be of use to us when specifying TrafficLights
sequences.
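To illustrate what such a mapping enables, below is a hypothetical sketch (not the actual AWSIM Traffic Light script) of turning one bulb on by driving the emissive color of the material at the configured index:

```csharp
using UnityEngine;

// Hypothetical sketch: turn a single bulb on by setting the emissive color of the
// material slot configured for that bulb (e.g. index 3 for RED_BULB in the example above).
public class BulbEmissionSketch : MonoBehaviour
{
    [SerializeField] private int redBulbMaterialIndex = 3;
    [SerializeField] private Color redEmission = Color.red;

    private void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();
        // meshRenderer.materials returns per-instance copies, so only this traffic light changes.
        Material bulb = meshRenderer.materials[redBulbMaterialIndex];
        // "_EmissiveColor" is the emissive property used by HDRP/Lit materials.
        bulb.SetColor("_EmissiveColor", redEmission * 10f);  // the multiplier acts as intensity
    }
}
```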
Success
+Once you have added TrafficLights
to your Environment
, you can start configuring RandomTraffic
which will add moving vehicles to it! Details here.
This section
+This section is still under development!
+PointCloudMapper
is a tool for a vehicle based point cloud mapping in a simulation environment.
+It is very useful when you need a point cloud based on some location, but don't have the possibility to physically map the real place.
+Instead you can map the simulated environment.
To properly perform the mapping, make sure you have the following files downloaded and configured:
+*.osm
file)3D model map of the area
+How to obtain a map
You can obtain the 3D model of the area by using an Environment
prefab prepared for AWSIM or by creating your own.
You can learn how to create your own Environment
prefab in this tutorial.
Configured in-simulation vehicle object with sensors attached (only the LiDAR is necessary)
+ +Vehicle model
+For the sake of creating a PCD the vehicle model doesn't have to be accurate. +It will be just a carrier for LiDAR. +The model can even be a simple box as shown earlier in this tutorial. +Make sure it is not visible to the LiDAR, so it does not break the sensor readings.
+Drag and drop an OSM file into Unity project.
+ +OSM file will be imported as OsmDataContainer
.
For mapping an Environment
prefab is needed.
+The easiest way is to create a new Scene and import the Environment
prefab into it.
+Details on how to do this can be found on this tutorial page.
Create a Vehicle
GameObject in the Hierarchy view.
Add vehicle model by adding a Geometry
Object as a child of Vehicle
and adding all visual elements as children.
Visual elements
+You can learn how to add visual elements and required components like Mesh Filter or Mesh Renderer in this tutorial.
+Add a Camera component for enhanced visuals by adding a Main Camera
Object as a child of Vehicle
Object and attaching a Camera
Component to it.
Add a Main Camera
Object.
Add a Camera
Component by clicking 'Add Component' button, searching for it and selecting it.
Change the Transform
for an even better visual experience.
Camera preview
+Observe how the Camera preview changes when adjusting the transformation.
+This part of the tutorial shows how to add a LiDAR sensor using RGL.
+RGL Scene Manager
+Please make sure that RGLSceneManager
is added to the scene.
+For more details and instruction how to do it please visit this tutorial page.
Create an empty Sensors
GameObject as a child of the Vehicle
Object.
Create a Lidar
GameObject as a child of the Sensors
Object.
Attach Lidar Sensor (script) to previously created Lidar
Object by clicking on the 'Add Component' button, searching for the script and selecting it.
Point Cloud Visualization
+Please note that Point Cloud Visualization (script) will be added automatically with the Lidar Sensor (script).
+Configure LiDAR pattern, e.g. by selecting one of the available presets.
+Example Lidar Sensor configuration
+ +Gaussian noise
+Gaussian noise should be disabled to achieve a more accurate map.
+Attach RGL Mapping Adapter (script) to previously created Lidar
Object by clicking on the 'Add Component' button, searching for the script and selecting it.
Configure RGL Mapping Adapter
- e.g. set Leaf Size
for filtering.
Example RGL Mapping Adapter configuration
+ +Downsampling
+Please note that downsampling is applied on the single LiDAR scans only. If you would like to filter merged scans use the external tool described below.
+Leaf Size
to Point Cloud Data (PCD) generationDownsampling aims to reduce PCD size which for large point clouds may achieve gigabytes in exchange for map details. It is essential to find the best balance between the size and acceptable details level.
+A small Leaf Size
results in a more detailed PCD, while a large Leaf Size
could result in excessive filtering such that objects like buildings are not recorded in the PCD.
In the following examples, it can be observed that when a Leaf Size
is 1.0, point cloud is very detailed.
+When a Leaf Size
is 100.0, buildings are filtered out and results in an empty PCD.
+A Leaf Size
of 10.0 results in a reasonable PCD in the given example.
| Leaf Size = 1.0 | Leaf Size = 10.0 | Leaf Size = 100.0 |
|---|---|---|
| (image) | (image) | (image) |
Create a PointCloudMapper
GameObject in the Hierarchy view.
Attach Point Cloud Mapper
script to previously created Point Cloud Mapper
Object by clicking on the 'Add Component' button, searching for the script and selecting it.
Configure the Point Cloud Mapper
fields:
Osm Container
- the OSM file you imported earlierWorld Origin
- MGRS position of the origin of the scene
World Origin coordinate system
+Use ROS coordinate system for World Origin, not Unity.
+Capture Location Interval
- Distance between consecutive capture points along lanelet centerline
Output Pcd File Path
- Output relative path from Assets
folderTarget Vehicle
- The vehicle you want to use for point cloud capturing that you created earlierExample Point Cloud Mapper configuration
+ +Lanelet visualization
+
+It is recommended to disable Lanelet Visualizer by setting Material
to None
and Width
equal to zero. Rendered Lanelet is not ignored by the LiDAR so it would be captured in the PCD.
Capture Location Interval
to PCD generationIf the Capture Location Interval
is too large, it could result in a sparse PCD where some regions of the map are captured well but other regions aren't captured at all.
In the below example, Leaf Size
of 0.2 was used. Please note that using a different combination of leaf size
and Capture Location Interval
may result in a different PCD.
| Capture Location Interval = 6 | Capture Location Interval = 20 | Capture Location Interval = 100 |
|---|---|---|
| (image) | (image) | (image) |
If you play simulation with a scene prepared with the steps above, PointCloudMapper
will automatically start mapping.
+The vehicle will warp along centerlines by intervals of CaptureLocationInterval
and capture point cloud data.
+PCD file will be written when you stop your scene or all locations in the route are captured.
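Conceptually, this warp-and-capture loop can be sketched as follows (an illustrative sketch only, not the actual PointCloudMapper implementation):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Conceptual sketch of the warp-and-capture loop: the vehicle is teleported along
// the lanelet centerline at a fixed interval and a LiDAR capture is requested at each pose.
public class MappingLoopSketch : MonoBehaviour
{
    public Transform targetVehicle;
    public float captureLocationInterval = 6f;  // metres between capture poses

    public void CaptureAlong(List<Vector3> centerlinePoints)
    {
        float distanceSinceCapture = captureLocationInterval;  // force a capture at the first point
        for (int i = 1; i < centerlinePoints.Count; i++)
        {
            distanceSinceCapture += Vector3.Distance(centerlinePoints[i - 1], centerlinePoints[i]);
            if (distanceSinceCapture >= captureLocationInterval)
            {
                targetVehicle.position = centerlinePoints[i];  // warp, do not drive
                RequestLidarCapture();                         // merged into the output PCD
                distanceSinceCapture = 0f;
            }
        }
        // In the real tool the PCD file is written when the scene is stopped
        // or when all locations in the route have been captured.
    }

    private void RequestLidarCapture()
    {
        // Placeholder: the real implementation triggers an RGL LiDAR capture here.
    }
}
```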
If the Vehicle stops moving for a longer time and you see the following message in the bottom left corner, you can safely stop the scene.
+ +The Point cloud *.pcd
file is saved to the location you specified in the Point Cloud Mapper.
Install required tool
+The tool (DownsampleLargePCD
) required for PCD conversion can be found under the link. README contains building instruction and usage.
The generated PCD file is typically too large. Therefore you need to downsample it. Also, it should be converted to ASCII format because Autoware
accepts only this format. PointCloudMapper
returns PCD in binary format.
DownsampleLargePCD
tool.Use this tool to downsample and save PCD in ASCII format. +
./DownsampleLargePCD -in <PATH_TO_INPUT_PCD> -out <PATH_TO_OUTPUT_PCD> -leaf 0.2,0.2,0.2
+
For example, if the input PCD is named in_cloud.pcd
and output PCD is to be named out_cloud.pcd
the command will be:
+ ./DownsampleLargePCD -in in_cloud.pcd -out out_cloud.pcd -leaf 0.2,0.2,0.2
+
To save the output PCD in binary format instead, add the -binary 1
option.Your PCD is ready to use.
+Converting PCD format without downsampling
+If you don't want to downsample your PCD you can convert PCD file to ASCII format with pcl_convert_pcd_ascii_binary
tool. This tool is available in the pcl-tools
package and can be installed on Ubuntu with the following command:
+
sudo apt install pcl-tools
+
pcl_convert_pcd_ascii_binary <PATH_TO_INPUT_PCD> <PATH_TO_OUTPUT_PCD> 0
+
To verify your PCD you can launch the Autoware with the PCD file specified.
+Copy your PCD from the AWSIM project directory to the Autoware map directory.
+cp <PATH_TO_PCD_FILE> <PATH_TO_AUTOWARE_MAP>/
+
Source the ROS and Autoware
+source /opt/ros/humble/setup.bash
+source <PATH_TO_AUTOWARE>/install/setup.bash
+
Launch the planning simulation with the map directory path (map_path
) and PCD file (pointcloud_map_file
) specified.
PCD file location
+The PCD file needs to be located in the Autoware map directory and as a pointcloud_map_file
parameter you only supply the file name, not the path.
Absolute path
+When launching Autoware never use ~/
to specify the home directory.
Either write the full absolute path or use the $HOME
environment variable.
ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit map_path:=<ABSOLUTE_PATH_TO_AUTOWARE_MAP> pointcloud_map_file:=<PCD_FILE_NAME>
+
Wait for the Autoware to finish loading and inspect the PCD visually given the Effect of Leaf Size and Effect of Capture Location Interval.
+ +PointCloudMapping.unity
is a sample scene for PointCloudMapper
showcase. It requires setup of OSM data and 3D model map of the area according to the steps above.
Sample Mapping Scene
+In this example you can see a correctly configured Point Cloud Mapping Scene.
Lanelet Bounds Visualizer is a Unity Editor extension allowing the user to load the left and right bounds of a Lanelet into the Unity scene.
+The lanelet bounds load process can be performed by opening AWSIM -> Visualize -> Load Lanelet Bounds
at the top toolbar of Unity Editor.
A window shown below will pop up. Select your Osm Data Container
to specify which OSM data to load the Lanelet from.
The user can select whether to load the raw Lanelet or to adjust the resolution of the Lanelet by specifying the waypoint settings.
+To load the raw Lanelet, simply click the Load Raw Lanelet
button.
If the user wishes to change the resolution of the Lanelet, adjust the parameters of the Waypoint Settings
as described below, and click the Load with Waypoint Settings
button.
Resolution
: resolution of resampling. Lower values provide better accuracy at the cost of processing time.
Min Delta Length
: minimum length (m) between adjacent points.
Min Delta Angle
: minimum angle (deg) between adjacent edges. Lowering this value produces a smoother curve.
Once the Lanelet is successfully loaded, Lanelet bounds will be generated as a new GameObject named LaneletBounds.
To visualize the LaneletBounds
, make sure Gizmos is turned on and select the LaneletBounds
GameObject.
+
Generally speaking, visualizing Lanelet Bounds will result in a very laggy simulation. Therefore, it is recommended to hide the LaneletBounds
GameObject when not used. The lag of the simulation becomes worse as you set the resolution of the Lanelet Bounds higher, so it is also recommended to set the resolution within a reasonable range.
It is also important to note that no matter how high you set the resolution to be, it will not be any better than the original Lanelet (i.e. the raw data). Rather, the computational load will increase and the simulation will become more laggy. If the user wishes to get the highest quality of Lanelet Bounds, it is recommended to use the Load Raw Lanelet
button.
In short, Waypoint Setting
parameters should be thought of as a way to decrease the resolution relative to the original Lanelet, reducing the computational load and thus the lag of the simulation.
Higher Resolution | +Raw Lanelet | +Lower Resolution | +
---|---|---|
+ | + | + |
+Simulating smoke in AWSIM may be useful when one wants to simulate exhaust gases from vehicles, smoke from emergency flare, etc.
+In Unity, it is common to use a Particle System to simulate smoke. However, smoke simulated by a Particle System cannot be sensed by RGL in AWSIM, although in reality smoke is detected by LiDAR.
+Smoke Simulator
was developed to simulate smoke that can be detected by RGL in Unity.
+Smoke Simulator
works by instantiating many small cubic GameObjects called Smoke Particles
and allows each particle to be detected by RGL.
This document describes how to use the Smoke Simulator
.
1. Create an empty GameObject.
2. Attach SmokeGenerator.cs
to the previously created GameObject.
3. Adjust the parameters of the SmokeGenerator
as described below:
Max Particle
: Specifies the maximum number of particles created by the Smoke Generator
Particle Range Radius
: Specifies the radius of a circle, centered at the GameObject, which defines the region in which Smoke Particles
are generated inParticle Size
: Specifies the edge of a Smoke Particle
Average Lifetime
: Specifies the average lifetime of a Smoke Particle
Variation Lifetime
: Specifies the variation of lifetime of Smoke Particles
.
The lifetime of a Smoke Particle
is calculated as follows:
`lifetime` = `Average Lifetime` + Random.Range(-`Variation Lifetime`, `Variation Lifetime`)
+
+Physics
: These parameters can be adjusted to specify the behavior of the smoke particles.
Initial Plane Velocity
: Specifies the velocity of a SmokeParticle
in the x-z planeInitial Vertical Velocity
: Specifies the velocity of a SmokeParticle
in the vertical directionPlane Acceleration
: Specifies the acceleration of a SmokeParticle
in the x-z planeVertical Acceleration
: Specifies the acceleration of a SmokeParticle
in the vertical direction4.(Optional): You may also specify the Material
of Smoke Particles
. If this field is unspecified, a default material is used
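For reference, the lifetime formula above can be written out in C# as a small sketch. This is illustrative only; the class and method names below are made up and this is not the actual SmokeGenerator code:

using UnityEngine;

// Illustrative sketch: how a particle lifetime is derived from the
// Average Lifetime and Variation Lifetime parameters described above.
public static class SmokeLifetimeSketch
{
    public static float ComputeLifetime(float averageLifetime, float variationLifetime)
    {
        // lifetime = Average Lifetime + Random.Range(-Variation Lifetime, Variation Lifetime)
        return averageLifetime + Random.Range(-variationLifetime, variationLifetime);
    }
}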
V2I
is a component that simulates the V2I communication protocol, which allows the exchange of data between vehicles and road infrastructure. In the current version of AWSIM, the V2I
component publishes information about traffic lights.
Load items from lanelet2 following the instruction
+Verify if Traffic Light Lanelet ID
component has been added to Traffic Light
game objects.
+
Verify if WayID
and RelationID
has been correctly assigned. You can use Vector Map Builder as presented below
+
Traffic Light Lanelet ID
component (alternatively)If for some reason, Traffic Light Lanelet ID
component is not added to the Traffic Light
object, you can add it manually as follows.
Add component manually
+ +Fill Way ID
+ +Fill Relation ID
+ ++ +
+Name | +Type | +Description | +
---|---|---|
Output Hz | +int | +Topic publication frequency | +
Ego Vehicle Transform | +transform | +Ego Vehicle object transform | +
Ego Distance To Traffic Signals | +double | +Maximum distance between Traffic Light and Ego | +
Traffic Signal ID | enum | Selects whether the traffic_signal_id field in the message is filled with the Relation ID or the Way ID |
+
Traffic Signals Topic | +string | +Topic name | +
Note
+V2I feature can be used as Traffic Light ground truth information, and for that usage Way ID
is supposed to be selected.
If you want to use a custom message in AWSIM, you need to generate the appropriate files; to do this you have to build ROS2ForUnity
yourself - please follow the steps below. Remember to start with the prerequisites though.
ROS2ForUnity role
+For a better understanding of the role of ROS2ForUnity
and the messages used, we encourage you to read this section.
custom_msgs
+In order to simplify this tutorial, the name of the package containing the custom message is assumed to be custom_msgs
- remember to replace it with the name of your package.
ROS2ForUnity
depends on a ros2cs - a C# .NET library for ROS2.
+This library is already included so you don't need to install it, but there are a few prerequisites that must be resolved first.
Please select your system and resolve all prerequisites:
+ros2cs
prerequisites for Ubuntuhumble
and is located in /opt/ros/humble
~/custom_msgs
or is hosted on git repository.bash
shell ros2cs
prerequisites for Windows
Question
+Tests are not working ('charmap'
codec can't decode byte) on Windows - look at troubleshooting here
ROS2 version is humble
and is located in C:\ros2_humble
C:\custom_msgs
or is hosted on git repository.powershell
shellClone ROS2ForUnity
repository by executing the command:
git clone https://github.com/RobotecAI/ros2-for-unity ~/ros2-for-unity
+
Warning
+The cloned ROS 2 For Unity
repository must be located in the home directory ~/
.
git clone https://github.com/RobotecAI/ros2-for-unity C:\ros2-for-unity
+
Warning
+The cloned ROS 2 For Unity
repository must be located directly under C:\
.
Pull dependent repositories by executing the commands:
+cd ~/ros2-for-unity
+. /opt/ros/humble/setup.bash
+./pull_repositories.sh
+
cd C:\ros2-for-unity
+C:\ros2_humble\local_setup.ps1
+.\pull_repositories.ps1
+
custom_msgs
packageThe method to add a custom package to build depends on where it is located. The package can be on your local machine or just be hosted on a git repository.
+Please, choose the appropriate option and follow the instructions.
Copy the custom_msgs
package containing the custom message to the src/ros2cs/custom_messages
directory
cp -r ~/custom_msgs ~/ros2-for-unity/src/ros2cs/custom_messages/
+
Copy-Item 'C:\custom_msgs' -Destination 'C:\ros2-for-unity\src\ros2cs\custom_messages' -Recurse
+
ros2-for-unity/ros2_for_unity_custom_messages.repos
file in editor.Modify the contents of the file shown below, uncomment and set:
+<package_name>
- to your package name - so in this case custom_msgs
,<repo_url>
- to repository address, <repo_branch>
- to desired branch.
+repositories:
+# src/ros2cs/custom_messages/<package_name>:
+# type: git
+# url: <repo_url>
+# version: <repo_branch>
+
Example
+Below is an example of a file configured to pull 2 packages (custom_msgs
,autoware_auto_msgs
) of messages hosted on a git repository.
+
# NOTE: Use this file if you want to build with custom messages that reside in a separate remote repo.
+# NOTE: use the following format
+
+repositories:
+ src/ros2cs/custom_messages/custom_msgs:
+ type: git
+ url: https://github.com/tier4/custom_msgs.git
+ version: main
+ src/ros2cs/custom_messages/autoware_auto_msgs:
+ type: git
+ url: https://github.com/tier4/autoware_auto_msgs.git
+ version: tier4/main
+
Now pull the repositories again (also the custom_msgs
package repository)
cd ~/ros2-for-unity
+./pull_repositories.sh
+
cd C:\ros2-for-unity
+.\pull_repositories.ps1
+
Build ROS2ForUnity
with custom message packages using the following commands:
cd ~/ros2-for-unity
+./build.sh --standalone
+
cd C:\ros2-for-unity
+.\build.ps1 -standalone
+
custom_msgs
to AWSIMNew ROS2ForUnity
build, which you just made in step 3, contains multiple libraries that already exist in the AWSIM.
+To install custom_msgs
and not copy all other unnecessary files, you should get the custom_msgs
related libraries only.
You can find them in the following directories and simply copy them to the analogous directories in the AWSIM/Assets/Ros2ForUnity
folder, or use the script described here.
ros2-for-unity/install/asset/Ros2ForUnity/Plugins
which names matches custom_msgs_*
ros2-for-unity/install/asset/Ros2ForUnity/Plugins/Linux/x86_64/
which names matches libcustom_msgs_*
ros2-for-unity/install/asset/Ros2ForUnity/Plugins
which names matches custom_msgs_*
ros2-for-unity/install/asset/Ros2ForUnity/Plugins/Windows/x86_64/
which names matches custom_msgs_*
To automate the process, you can use a script that copies all files related to your custom_msgs
package.
copy_custom_msgs.sh
in directory ~/ros2-for-unity/
and paste the following content into it.
+#!/bin/bash
+echo "CUSTOM_MSGS_PACKAGE_NAME: $1"
+echo "AWSIM_DIR_PATH: $2"
+find ./install/asset/Ros2ForUnity/Plugins -maxdepth 1 -name "$1*" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins \;
+find ./install/asset/Ros2ForUnity/Plugins/Linux/x86_64 -maxdepth 1 -name "lib$1*" -type f -exec cp {} $2/Assets/Ros2ForUnity/Plugins/Linux/x86_64 \;
+
chmod a+x copy_custom_msgs.sh
+
Run the script with two arguments: +
./copy_custom_msgs.sh <CUSTOM_MSGS_PACKAGE_NAME> <AWSIM_DIR_PATH>
+
<CUSTOM_MSGS_PACKAGE_NAME>
- the first one which is the name of the package with messages - in this case custom_msgs
,
<AWSIM_DIR_PATH>
- the second which is the path to the cloned AWSIM repository.Example
+./copy_custom_msgs.sh custom_msgs ~/unity/AWSIM/
+
To automate the process, you can use the following commands after changing:
+<CUSTOM_MSGS_PACKAGE_NAME>
- the name of your package with messages - in this case custom_msgs
,<AWSIM_DIR_PATH>
- to path to the cloned AWSIM repository
+Get-ChildItem C:\ros2-for-unity\install\asset\Ros2ForUnity\Plugins\* -Include @('<CUSTOM_MSGS_PACKAGE_NAME>*') | Copy-Item -Destination <AWSIM_DIR_PATH>\Assets\Ros2ForUnity\Plugins
+Get-ChildItem C:\ros2-for-unity\install\asset\Ros2ForUnity\Plugins\Windows\x86_64\* -Include @('<CUSTOM_MSGS_PACKAGE_NAME>*') | Copy-Item -Destination <AWSIM_DIR_PATH>\Assets\Ros2ForUnity\Plugins\Windows\x86_64
+
Example
+Get-ChildItem C:\ros2-for-unity\install\asset\Ros2ForUnity\Plugins\* -Include @('custom_msgs*') | Copy-Item -Destination C:\unity\AWSIM\Assets\Ros2ForUnity\Plugins
+Get-ChildItem C:\ros2-for-unity\install\asset\Ros2ForUnity\Plugins\Windows\x86_64\* -Include @('custom_msgs*') | Copy-Item -Destination C:\unity\AWSIM\Assets\Ros2ForUnity\Plugins\Windows\x86_64
+
Make sure that the package files custom_msgs
have been properly copied to the AWSIM/Assets/Ros2ForUnity
.
+Then try to create a message object as described in this section and check in the console of Unity Editor if it compiles without errors.
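A simple way to do this check is to reference one of the generated types in a small test script and confirm that the project still compiles and runs. The type name below is a placeholder - use a message that actually exists in your custom_msgs package:

using UnityEngine;

namespace AWSIM
{
    public class CustomMsgCompileCheck : MonoBehaviour
    {
        void Start()
        {
            // "MyCustomMessage" is hypothetical - replace it with a type generated from your package.
            var msg = new custom_msgs.msg.MyCustomMessage();
            Debug.Log($"Custom message instance created: {msg.GetType().FullName}");
        }
    }
}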
Ros2ForUnity (R2FU
) module is a communication solution that effectively connects Unity and the ROS2 ecosystem, maintaining a strong integration.
+Unlike other solutions, it doesn't rely on bridging communication but rather utilizes the ROS2 middleware stack (specifically the rcl
layer and below), enabling the inclusion of ROS2 nodes within Unity simulations.
R2FU
is used in AWSIM for many reasons.
+First of all, because it offers high-performance integration between Unity and ROS2, with improved throughput and lower latencies compared to bridging solutions.
+It provides real ROS2 functionality for simulation entities in Unity, supports standard and custom messages, and includes convenient abstractions and tools, all wrapped as a Unity asset.
+For a detailed description, please see README.
This asset can be prepared in two flavours:
+By default, asset R2FU
in AWSIM is prepared in standalone mode.
Warning
+To avoid internal conflicts between the standalone libraries and the sourced ones, ROS2 shouldn't be sourced before running AWSIM or the Unity Editor.
+Can't see topics
+There are no errors but I can't see topics published by R2FU
Try to stop it forcefully (pkill -9 ros2_daemon
) and restart (ros2 daemon start
).
Describing the concept of using R2FU
in AWSIM, we distinguish:
RuntimeInitializeOnLoadMethod
mark.The SimulatorROS2Node implementation, thanks to the use of R2FU
, allows you to add communication via ROS2 to any Unity component.
+For example, we can receive control commands from any other ROS2 node and publish the current state of Ego, such as its position in the environment.
Simulation time
+If you want to use system time (ROS2 time) instead of Unity time, use ROS2TimeSource
instead of UnityTimeSource
in the SimulatorROS2Node
class.
Ros2ForUnity
asset contains:
*.dll
and *.so
files).
+In addition to the necessary libraries, here are the libraries created as a result of generation the types of ROS2 messages that are used in communication.R2FU
in Unity - details below.R2FU
in AWSIM, provide abstractions of a single main Node and simplify the interface - details below.
+These scripts are not in the library itself, but directly in the directory Assets/AWSIM/Scripts/ROS/**
.ROS2UnityCore
- the principal class for handling ROS2 nodes and executables.
+Spins and executes actions (e.g. clock, sensor publish triggers) in a dedicated thread.ROS2UnityComponent
- ROS2UnityCore
adapted to work as a Unity component.ROS2Node
- a class representing a ROS2 node, it should be constructed through ROS2UnityComponent
class, which also handles spinning.ROS2ForUnity
- an internal class responsible for handling checking, proper initialization and shutdown of ROS2 communication,ROS2ListenerExample
- an example class provided for testing of basic ROS2->Unity communication.ROS2TalkerExample
- an example class provided for testing of basic Unity->ROS2 communication.ROS2PerformanceTest
- an example class provided for performance testing of ROS2<->Unity communication.Sensor
- an abstract base class for ROS2-enabled sensor.Transformations
- a set of transformation functions between coordinate systems of Unity and ROS2.PostInstall
- an internal class responsible for installing R2FU
metadata files.Time
scripts - a set of classes that provide the ability to use different time sources:ROS2Clock
- ROS2 clock class that for interfacing between a time source (Unity or ROS2 system time) and ROS2 messages.ROS2TimeSource
- acquires ROS2 time (system time by default).UnityTimeSource
- acquires Unity time.DotnetTimeSource
- acquires Unity DateTime
based clock that has resolution increased using Stopwatch
.ITimeSource
- interface for general time extraction from any source.TimeUtils
- utils for time conversion.Additionally, in order to adapt AWSIM to the use of R2FU
, the following scripts are used:
SimulatorROS2Node
- it is a class that is directly responsible for AWSIM<->ROS2 communication.ClockPublisher
- allows the publication of the simulation time from the clock running in the SimulatorROS2Node.
+It must be added as a component to the scene in order to publish the current time when the scene is run.
QoSSettings
- it is the equivalent of ROS2 QoS, which allows to specify the QoS for subscribers and publishers in AWSIM.
+It uses the QualityOfServiceProfile
implementation from the Ros2cs library.
ROS2Utility
- it is a class with utils that allow, for example, to convert positions in the ROS2 coordinate system to the AWSIM coordinate system.DiagnosticsManager
- prints diagnostics for desired elements described in *.yaml
config file.The basic ROS2 msgs types that are supported in AWSIM by default include:
+std_msgs
.geometry_msgs
,sensor_msgs
,nav_msgs
,diagnostic_msgs
,builtin_interfaces
,action_msgs
,rosgraph_msgs
,test_msgs
.In order for the message package to be used in Unity, its *.dll
and *.so
libraries must be generated using R2FU
.
Custom message
+If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.
+Each message type is composed of other types - which can also be a complex type.
+All of them are based on built-in C# types.
+The most common built-in types in messages are bool
, int
, double
and string
.
+These types have their communication equivalents using ROS2.
A good example of a complex type that is added to other complex types in order to specify a reference - in the form of a timestamp and a frame - is std_msgs/Header. +This message has the following form:
+builtin_interfaces/msg/Time stamp
+string frame_id
+
ROS2 directive
+In order to work with ROS2 in Unity, remember to add the directive using ROS2;
at the top of the file to import types from this namespace.
The simplest way to create an object of Header
type is:
var header = new std_msgs.msg.Header()
+{
+ Frame_id = "map"
+}
+
It is not required to define the value of each field.
+As you can see, it creates an object, filling only frame_id
field - and left the field of complex builtin_interfaces/msg/Time
type initialized by default.
+Time is an important element of any message, how to fill it is written here.
As you might have noticed in the previous example, a ROS2 message in Unity is just a structure containing the same fields - keep the same names and types. +Access to its fields for reading and filling is the same as for any C# structure.
+var header2 = new std_msgs.msg.Header();
+header2.Frame_id = "map";
+header2.Stamp.Sec = 1234567;
+Debug.Log($"StampSec: {header2.Stamp.Sec} and Frame: {header2.Frame_id}");
+
Field names
+There is one always-present difference in field names. +The first letter of each message field in Unity is always uppercase - even if the base ROS2 message from which it is generated is lowercase.
+In order to complete the time field of the Header
message, we recommend the following methods in AWSIM:
When the message has no Header
but only the Time
type:
var header2 = new std_msgs.msg.Header();
+header2.Stamp = SimulatorROS2Node.GetCurrentRosTime();
+
When the message has a Header
- like for example autoware_auto_vehicle_msgs/VelocityReport:
velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()
+{
+ Header = new std_msgs.msg.Header()
+ {
+ Frame_id = "map",
+ }
+};
+var velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;
+SimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);
+
These methods allow to fill the Time
field in the message object with the simulation time - from ROS2Clock
Some message types contain an array of some type.
+An example of such a message is nav_msgs/Path
, which has a PoseStamped
array.
+In order to fill such an array, you must first create a List<T>
, fill it and then convert it to a raw array.
var posesList = new List<geometry_msgs.msg.PoseStamped>();
+for(int i=0; i<=5;++i)
+{
+ var poseStampedMsg = new geometry_msgs.msg.PoseStamped();
+ poseStampedMsg.Pose.Position.X = i;
+ poseStampedMsg.Pose.Position.Y = 5-i;
+ var poseStampedMsgHeader = poseStampedMsg as MessageWithHeader;
+ SimulatorROS2Node.UpdateROSTimestamp(ref poseStampedMsgHeader);
+ posesList.Add(poseStampedMsg);
+}
+var pathMsg = new nav_msgs.msg.Path(){Poses=posesList.ToArray()};
+var pathMsgHeader = pathMsg as MessageWithHeader;
+SimulatorROS2Node.UpdateROSTimestamp(ref pathMsgHeader);
+// pathMsg is ready
+
In order to publish messages, a publisher object must be created.
+The static method CreatePublisher
of the SimulatorROS2Node
makes it easy.
+You must specify the type of message, the topic on which it will be published and the QoS profile.
+Below is an example of autoware_auto_vehicle_msgs.msg.VelocityReport
type message publication with a frequency of 30Hz
on /vehicle/status/velocity_status
topic, the QoS profile is (Reliability=Reliable, Durability=Volatile, History=Keep last, Depth=1
):
using UnityEngine;
+using ROS2;
+
+namespace AWSIM
+{
+ public class VehicleReportRos2Publisher : MonoBehaviour
+ {
+ float timer = 0;
+ int publishHz = 30;
+ QoSSettings qosSettings = new QoSSettings()
+ {
+ ReliabilityPolicy = ReliabilityPolicy.QOS_POLICY_RELIABILITY_RELIABLE,
+ DurabilityPolicy = DurabilityPolicy.QOS_POLICY_DURABILITY_VOLATILE,
+ HistoryPolicy = HistoryPolicy.QOS_POLICY_HISTORY_KEEP_LAST,
+ Depth = 1,
+ };
+ string velocityReportTopic = "/vehicle/status/velocity_status";
+ autoware_auto_vehicle_msgs.msg.VelocityReport velocityReportMsg;
+ IPublisher<autoware_auto_vehicle_msgs.msg.VelocityReport> velocityReportPublisher;
+
+ void Start()
+ {
+ // Create a message object and fill in the constant fields
+ velocityReportMsg = new autoware_auto_vehicle_msgs.msg.VelocityReport()
+ {
+ Header = new std_msgs.msg.Header()
+ {
+ Frame_id = "map",
+ }
+ };
+
+ // Create publisher with specific topic and QoS profile
+ velocityReportPublisher = SimulatorROS2Node.CreatePublisher<autoware_auto_vehicle_msgs.msg.VelocityReport>(velocityReportTopic, qosSettings.GetQoSProfile());
+ }
+
+ bool NeedToPublish()
+ {
+ timer += Time.deltaTime;
+ var interval = 1.0f / publishHz;
+ interval -= 0.00001f;
+ if (timer < interval)
+ return false;
+ timer = 0;
+ return true;
+ }
+
+ void FixedUpdate()
+ {
+ // Provide publications with a given frequency
+ if (NeedToPublish())
+ {
+ // Fill in non-constant fields
+ velocityReportMsg.Longitudinal_velocity = 1.00f;
+ velocityReportMsg.Lateral_velocity = 0.00f;
+ velocityReportMsg.Heading_rate = 0.00f;
+
+ // Update Stamp
+ var velocityReportMsgHeader = velocityReportMsg as MessageWithHeader;
+ SimulatorROS2Node.UpdateROSTimestamp(ref velocityReportMsgHeader);
+
+ // Publish
+ velocityReportPublisher.Publish(velocityReportMsg);
+ }
+ }
+ }
+}
+
The above example demonstrates the implementation of the 'publish'
method within the FixedUpdate
Unity event method. However, this approach has certain limitations. The maximum output frequency is directly tied to the current value of Fixed TimeStep
specified in the Project Settings
. Considering that AWSIM targets a frame rate of 60 frames per second (FPS), the current Fixed TimeStep
is set to 1/60 s. This imposes a 60Hz limit on the publish rate of any sensor implemented within the FixedUpdate
method. If a higher output frequency is necessary, an alternative implementation must be considered or adjustments made to the Fixed TimeStep
setting in the Editor->Project Settings->Time
.
The table provided below presents a list of sensors along with examples of topics that are constrained by the Fixed TimeStep
limitation.
Object | +Topic | +
---|---|
GNSS Sensor | +/sensing/gnss/pose | +
IMU Sensor | +/sensing/imu/tamagawa/imu_raw | +
Traffic Camera | +/sensing/camera/traffic_light/image_raw | +
Pose Sensor | +/awsim/ground_truth/vehicle/pose | +
OdometrySensor | +/awsim/ground_truth/localization/kinematic_state | +
LIDAR | +/sensing/lidar/top/pointcloud_raw | +
Vehicle Status | +/vehicle/status/velocity_status | +
If the sensor or any other publishing object within AWSIM does not have any direct correlation with physics (i.e., does not require synchronization with physics), it can be implemented without using the FixedUpdate
method. Consequently, this bypasses the upper limit imposed by the Fixed TimeStep
.
The table presented below shows a list of objects that are not constrained by the Fixed TimeStep
limitation.
Object | +Topic | +
---|---|
Clock | +/clock | +
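For completeness, a publishing object that must exceed the Fixed TimeStep limit can run its loop on a dedicated thread instead of a Unity event method. The sketch below is only an illustration: the class, topic and rate are made up, and it assumes that a publisher created via SimulatorROS2Node can be used safely from a background thread - verify this against the current implementation before relying on it.

using System.Threading;
using UnityEngine;
using ROS2;

namespace AWSIM
{
    // Sketch of publishing from a dedicated thread, so the output rate is not bound
    // to FixedUpdate or the rendering frame rate.
    public class HighRatePublisherExample : MonoBehaviour
    {
        IPublisher<std_msgs.msg.Bool> publisher;
        Thread publishThread;
        volatile bool running;

        void Start()
        {
            // Topic name and message type are illustrative only.
            publisher = SimulatorROS2Node.CreatePublisher<std_msgs.msg.Bool>(
                "/example/high_rate_flag", new QoSSettings().GetQoSProfile());

            running = true;
            publishThread = new Thread(() =>
            {
                var msg = new std_msgs.msg.Bool() { Data = true };
                while (running)
                {
                    publisher.Publish(msg);
                    Thread.Sleep(10); // roughly 100Hz, independent of the Fixed TimeStep
                }
            });
            publishThread.Start();
        }

        void OnDestroy()
        {
            running = false;
            publishThread?.Join();
        }
    }
}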
In order to subscribe messages, a subscriber object must be created.
+The static method CreateSubscription
of the SimulatorROS2Node
makes it easy.
+You must specify the type of message, the topic from which it will be subscribed and the QoS profile.
+In addition, the callback must be defined, which will be called when the message is received - in particular, it can be defined as a lambda expression.
+Below is an example of std_msgs.msg.Bool
type message subscription on /vehicle/is_vehicle_stopped
topic, the QoS profile is “system default”
:
+
using UnityEngine;
+using ROS2;
+
+namespace AWSIM
+{
+ public class VehicleStoppedSubscriber : MonoBehaviour
+ {
+ QoSSettings qosSettings = new QoSSettings();
+ string isVehicleStoppedTopic = "/vehicle/is_vehicle_stopped";
+ bool isVehicleStopped = false;
+ ISubscription<std_msgs.msg.Bool> isVehicleStoppedSubscriber;
+
+ void Start()
+ {
+ isVehicleStoppedSubscriber = SimulatorROS2Node.CreateSubscription<std_msgs.msg.Bool>(isVehicleStoppedTopic, VehicleStoppedCallback, qosSettings.GetQoSProfile());
+ }
+
+ void VehicleStoppedCallback(std_msgs.msg.Bool msg)
+ {
+ isVehicleStopped = msg.Data;
+ }
+
+ void OnDestroy()
+ {
+ SimulatorROS2Node.RemoveSubscription<std_msgs.msg.Bool>(isVehicleStoppedSubscriber);
+ }
+ }
+}
+
The following is a summary of the ROS2 topics that the AWSIM node subscribes to and publishes on.
+Ros2ForUnity
+AWSIM works with ROS2 thanks to the use of Ros2ForUnity
- read the details here.
+If you want to generate a custom message to allow it to be used in AWSIM please read this tutorial.
Category | +Topic | +Message type | +frame_id |
+Hz |
+QoS |
+
---|---|---|---|---|---|
Control |
++ | + | + | + | + |
Ackermann Control | +/control/command/control_cmd |
+autoware_auto_control_msgs/AckermannControlCommand |
+- | +60 |
+Reliable ,TransientLocal ,KeepLast/1 |
+
Gear | +/control/command/gear_cmd |
+autoware_auto_vehicle_msgs/GearCommand |
+- | +10 |
+Reliable ,TransientLocal ,KeepLast/1 |
+
Turn Indicators | +/control/command/turn_indicators_cmd |
+autoware_auto_vehicle_msgs/TurnIndicatorsCommand |
+- | +10 |
+Reliable ,TransientLocal ,KeepLast/1 |
+
Hazard Lights | +/control/command/hazard_lights_cmd |
+autoware_auto_vehicle_msgs/HazardLightsCommand |
+- | +10 |
+Reliable ,TransientLocal ,KeepLast/1 |
+
Emergency | +/control/command/emergency_cmd |
+tier4_vehicle_msgs/msg/VehicleEmergencyStamped |
+- | +60 |
+Reliable ,TransientLocal ,KeepLast/1 |
+
Control mode |
++ | + | + | + | + |
Engage | +/vehicle/engage |
+autoware_auto_vehicle_msgs/Engage |
+- | +- | +Reliable ,TransientLocal ,KeepLast/1 |
+
Category | +Topic | +Message type | +frame_id |
+Hz |
+QoS |
+
---|---|---|---|---|---|
Clock |
++ | + | + | + | + |
Clock | +/clock |
+rosgraph_msgs/Clock |
+- | +100 |
+Best effort ,Volatile ,Keep last/1 |
+
Sensors |
++ | + | + | + | + |
Camera | +/sensing/camera/traffic_light/camera_info |
+sensor_msgs/CameraInfo |
+traffic_light_left_camera/camera_link |
+10 |
+Best effort ,Volatile ,Keep last/1 |
+
Camera | +/sensing/camera/traffic_light/image_raw |
+sensor_msgs/Image |
+traffic_light_left_camera/camera_link |
+10 |
+Best effort ,Volatile ,Keep last/1 |
+
GNSS | +/sensing/gnss/pose |
+geometry_msgs/Pose |
+gnss_link |
+1 |
+Reliable ,Volatile ,Keep last/1 |
+
GNSS | +/sensing/gnss/pose_with_covariance |
+geometry_msgs/PoseWithCovarianceStamped |
+gnss_link |
+1 |
+Reliable ,Volatile ,Keep last/1 |
+
IMU | +/sensing/imu/tamagawa/imu_raw |
+sensor_msgs/Imu |
+tamagawa/imu_link |
+30 |
+Reliable ,Volatile ,Keep last/1000 |
+
Top LiDAR | +/sensing/lidar/top/pointcloud_raw |
+sensor_msgs/PointCloud2 |
+sensor_kit_base_link |
+10 |
+Best effort ,Volatile ,Keep last/5 |
+
Top LiDAR | +/sensing/lidar/top/pointcloud_raw_ex |
+sensor_msgs/PointCloud2 |
+sensor_kit_base_link |
+10 |
+Best effort ,Volatile ,Keep last/5 |
+
Vehicle Status |
++ | + | + | + | + |
Velocity | +/vehicle/status/velocity_status |
+autoware_auto_vehicle_msgs/VelocityReport |
+base_link |
+30 |
+Reliable ,Volatile ,Keep last/1 |
+
Steering | +/vehicle/status/steering_status |
+autoware_auto_vehicle_msgs/SteeringReport |
+- | +30 |
+Reliable ,Volatile ,Keep last/1 |
+
Control Mode | +/vehicle/status/control_mode |
+autoware_auto_vehicle_msgs/ControlModeReport |
+- | +30 |
+Reliable ,Volatile ,Keep last/1 |
+
Gear | +/vehicle/status/gear_status |
+autoware_auto_vehicle_msgs/GearReport |
+- | +30 |
+Reliable ,Volatile ,Keep last/1 |
+
Turn Indicators | +/vehicle/status/turn_indicators_status |
+autoware_auto_vehicle_msgs/TurnIndicatorsReport |
+- | +30 |
+Reliable ,Volatile ,Keep last/1 |
+
Hazard Lights | +/vehicle/status/hazard_lights_status |
+autoware_auto_vehicle_msgs/HazardLightsReport |
+- | +30 |
+Reliable ,Volatile ,Keep last/1 |
+
Ground Truth |
++ | + | + | + | + |
Pose | +/awsim/ground_truth/vehicle/pose |
+geometry_msgs/PoseStamped |
+base_link |
+100 |
+Reliable ,Volatile ,Keep last/1 |
+
This tutorial describes:
+- how to modify scenario to work with AWSIM
+- how to prepare the AWSIM scene to work with scenario_simulator_v2
To prepare the scenario to work with AWSIM add model3d
field to entity specification
It is utilized as an asset key to identify the proper prefab.
Adjust the parameters of the configured vehicle to match the entity parameters in AWSIM as closely as required. The bounding box in particular is crucial for validating collisions correctly.
+AWSIM currently supports the following asset key values.
+The list can be extended if required. Appropriate values should be added to the asset key list in the ScenarioSimulatorConnector
component and the vehicle parameters in scenario simulator should match them.
model3d | boundingbox size (m) | wheel base (m) | front tread (m) | rear tread (m) | tire diameter (m) | max steer (deg) |
---|---|---|---|---|---|---|
lexus_rx450h | +width : 1.920 height : 1.700 length : 4.890 |
+2.105 | +1.640 | +1.630 | +0.766 | +35 | +
model3d | boundingbox size (m) | wheel base (m) | front tread (m) | rear tread (m) | tire diameter (m) | max steer (deg) |
---|---|---|---|---|---|---|
taxi | +width : 1.695 height : 1.515 length : 4.590 |
+2.680 | +1.460 | +1.400 | +0.635 | +35 | +
truck_2t | +width : 1.695 height : 1.960 length : 4.685 |
+2.490 | +1.395 | +1.240 | +0.673 | +40 | +
hatchback | +width : 1.695 height 1.515 length : 3.940 |
+2.550 | +1.480 | +1.475 | +0.600 | +35 | +
van | +width : 1.880 height : 2.285 length : 4.695 |
+2.570 | +1.655 | +1.650 | +0.600 | +35 | +
small_car | +width : 1.475 height 1.800 length : 3.395 |
+2.520 | +1.305 | +1.305 | +0.557 | +35 | +
model3d | +boundingbox size (m) | +
---|---|
human | +width : 0.400 height : 1.800 length : 0.300 |
+
model3d | +boundingbox size (m) | +
---|---|
sign_board | +width : 0.31 height : 0.58 length : 0.21 |
+
The vast majority of features supported by scenario_simulator_v2
are supported with AWSIM as well. Currently supported features are described in the scenario_simulator_v2's documentation.
Features which are not supported when connected with AWSIM are listed below.
+attach_*_sensor
pointcloudPublishingDelay
isClairvoyant
detectedObjectPublishingDelay
detectedObjectPositionStandardDeviation
detectedObjectMissingProbability
randomSeed
If those features are crucial for the scenario's execution, the scenario might not work properly.
+scenario_simulator_v2
EgoVehicle
object in the sceneTimeScaleSettingsUI
, VehicleSettingsUI
and TrafficSettingsUI
from Canvas
-> RightScrollView
-> Viewport
-> Content
ClockPublisher
objectScenarioSimulatorConnector
prefab to the scene - located in Assets/ScenarioSimulatorConnector
ScenarioSimulatorConnector
- most likely Main Camera
object from the sceneScenarioSimulatorConnector
- most likely VehicleInformationUI
object from the scenescenario_simulator_v2
Below you can find instructions on how to setup the scenario execution using scenario_simulator_v2
with AWSIM run from Unity Editor as a simulator
+The instruction assumes using the Ubuntu OS.
Build Autoware by following "Build Autoware with scenario_simulator_v2
" section from the scenario simulator and AWSIM quick start guide
Follow Setup Unity Project tutorial
+Assets/AWSIM/Scenes/Main
directoryPlay
button placed at the top section of Editor.scenario_test_runner
.
+ source install/setup.bash
+ros2 launch scenario_test_runner scenario_test_runner.launch.py \
+architecture_type:=awf/universe record:=false \
+scenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml' \
+sensor_model:=awsim_sensor_kit vehicle_model:=sample_vehicle \
+launch_simple_sensor_simulator:=false autoware_launch_file:="e2e_simulator.launch.xml" \
+initialize_duration:=260 port:=8080
+
This scenario controls traffic signals in the scene based on OpenSCENARIO. It can be used to verify whether the traffic light recognition pipeline works well in Autoware.
+ros2 launch scenario_test_runner scenario_test_runner.launch.py \
+architecture_type:=awf/universe record:=false \
+scenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_conventional_traffic_lights.yaml' \
+sensor_model:=awsim_sensor_kit vehicle_model:=sample_vehicle \
+launch_simple_sensor_simulator:=false autoware_launch_file:="e2e_simulator.launch.xml" \
+initialize_duration:=260 port:=8080
+
This scenario publishes V2I traffic signals information based on OpenSCENARIO. It can be used to verify Autoware responds to V2I traffic lights information correctly.
+ros2 launch scenario_test_runner scenario_test_runner.launch.py \
+architecture_type:=awf/universe record:=false \
+scenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim_v2i_traffic_lights.yaml' \
+sensor_model:=awsim_sensor_kit vehicle_model:=sample_vehicle \
+launch_simple_sensor_simulator:=false autoware_launch_file:="e2e_simulator.launch.xml" \
+initialize_duration:=260 port:=8080
+
CameraSensor
is a component that simulates an RGB camera.
+Autonomous vehicles can be equipped with many cameras used for various purposes.
+In the current version of AWSIM, the camera is used primarily to provide the image to the traffic light recognition module in Autoware.
Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/CameraSensor.prefab
+
The mentioned single CameraSensor
has its own frame traffic_light_left_camera/camera_link
in which its data is published.
+The sensor prefab is added to this frame.
+The traffic_light_left_camera/camera_link
link is added to the base_link
object located in the URDF
.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
CameraSensorHolder (script) allows the sequential rendering of multiple camera sensors.
+To utilize it, each CameraSensor
object should be attached as a child object of the CameraSensorHolder
.
+
Camera Sensors
- a collection of camera sensors used for renderingPublish Hz
- the frequency at which camera rendering, image processing and callbacks are executedRender In Queue
- camera sensors rendering sequence type: in queue (one after another) or all at the same frameFor the CameraSensor
to work properly, the GameObject to which the scripts are added must also have:
TrafficLights recognition
+In case of problems with the recognition of traffic lights in Autoware, it may help to increase the image resolution and focal length of the camera in AWSIM.
+Camera settings
+If you would like to adjust the image captured by the camera, we encourage you to read this manual.
+The CameraSensor
functionality is split into two scripts:
CameraSensor
output as Image and CameraInfo messages type published on a specific ROS2 topics.Scripts can be found under the following path:
+Assets/AWSIM/Scripts/Sensors/CameraSensor/*
+
In the same location there are also *.compute
files containing used ComputeShaders
.
Camera Sensor (script) is a core camera sensor component.
+It is responsible for applying OpenCV distortion and encoding to BGR8 format.
+The distortion model is assumed to be Plumb Bob.
+The script renders the image from the camera to Texture2D
and transforms it using the distortion parameters.
+This image is displayed in the GUI and further processed to obtain the list of bytes in BGR8 format on the script output.
The script uses two ComputeShaders
, they are located in the same location as the scripts:
CameraDistortion
- to correct the image using the camera distortion parameters,RosImageShader
- to encode two pixels color (bgr8 - 3 bytes) into one (uint32 - 4 bytes) in order to produce ROS Image BGR8 buffer.
API | +type | +feature | +
---|---|---|
DoRender | void | Renders the Unity camera, applies OpenCV distortion to the rendered image, and updates output data. |
Output Hz
- frequency of output calculation and callback (default: 10Hz
)Show
- if camera image should be show on GUI (default: true
)Scale
- scale of reducing the image from the camera, 1
- will give an image of real size, 2
- twice smaller, etc. (default: 4
)X Axis
- position of the upper left corner of the displayed image in the X axis, 0
is the left edge (default: 0
)Y Axis
- position of the upper left corner of the displayed image in the Y axis, 0
is the upper edge (default: 0
)Width
- image width (default: 1920
)Height
- image height (default: 1080
)K1, K2, P1, P2, K3
- camera distortion coefficients for Plum Bob model0, 0, 0, 0, 0
)Camera Object
- reference to the basic Camera component (default: None
)Distortion Shader
- reference to ComputeShader asset about Distortion Shader functionality (default: None
)Ros Image Shader
- reference to ComputeShader asset about Ros Image Shader functionality
+(default: None
)The sensor computation output format is presented below:
+Category | +Type | +Description | +
---|---|---|
ImageDataBuffer | +byte[ ] | +Buffer with image data. | +
CameraParameters | +CameraParameters | +Set of the camera parameters. | +
Converts the data output from CameraSensor
to ROS2 Image
+and CameraInfo type messages and publishes them.
+The conversion and publication is performed using the Publish(CameraSensor.OutputData outputData)
method,
+which is the callback
triggered by Camera Sensor (script) for the current output.
Due to the fact that the entire image is always published, the ROI
field of the message is always filled with zeros.
+The script also ensures that binning
is assumed to be zero and the rectification matrix is the identity matrix.
Warning
+The script uses the camera parameters set in the CameraSensor script - remember to configure them depending on the camera you are using.
+Image Topic
- the ROS2 topic on which the Image
message is published"/sensing/camera/traffic_light/image_raw"
)Camera Info Topic
- the ROS2 topic on which the CameraInfo
message is published"/sensing/camera/traffic_light/camera_info"
)Frame id
- frame in which data is published, used in Header
"traffic_light_left_camera/camera_link"
)Qos Settings
- Quality of service profile used in the publicationBest effort
, Volatile
, Keep last
, 1
)10Hz
Best effort
, Volatile
, Keep last/1
Category | +Topic | +Message type | +frame_id |
+
---|---|---|---|
Camera info | +/sensing/camera/traffic_light/camera_info |
+sensor_msgs/CameraInfo |
+traffic_light_left_camera/camera_link |
+
Camera image | +/sensing/camera/traffic_light/image_raw |
+sensor_msgs/Image |
+traffic_light_left_camera/camera_link |
+
GnssSensor
is a component which simulates the position of vehicle computed by the Global Navigation Satellite System based on the transformation of the GameObject to which this component is attached.
+The GnssSensor
outputs the position in the MGRS coordinate system.
Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/GnssSensor.prefab
+
GnssSensor
has its own frame gnss_link
in which its data is published.
+The sensor prefab is added to this frame.
+The gnss_link
frame is added to the sensor_kit_base_link
in the base_link
object located in the URDF
.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
The GnssSensor
functionality is split into two components:
GnssSensor
output as PoseStamped
and PoseWithCovarianceStamped published on a specific ROS2 topics.Scripts can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/Gnss/*
+
This is the main script in which all calculations are performed:
+callback
is called (which can be assigned externally).Output Hz
- frequency of output calculation and callback (default: 100Hz
)Category | +Type | +Description | +
---|---|---|
Position | +Vector3 | +Position in the MGRS coordinate system. | +
Converts the data output from GnssSensor
to ROS2 PoseStamped
and PoseWithCovarianceStamped messages.
+These messages are published on two separate topics for each type.
+The conversion and publication is performed using the Publish(GnssSensor.OutputData outputData)
method, which is the callback
triggered by Gnss Sensor (script) for the current output update.
Covariance matrix
+The row-major representation of the 6x6 covariance matrix is filled with 0
and does not change during the script run.
Pose Topic
- the ROS2 topic on which the message PoseStamped
type is published"/sensing/gnss/pose"
)Pose With Covariance Stamped Topic
- the ROS2 topic on which the message PoseWithCovarianceStamped type is published"/sensing/gnss/pose_with_covariance"
)Frame id
- frame in which data are published, used in Header
"gnss_link"
)Qos Settings
- Quality of service profile used in the publication"system_default"
: Reliable
, Volatile
, Keep last
, 1
)1Hz
Reliable
, Volatile
, Keep last/1
Category | +Topic | +Message type | +frame_id |
+
---|---|---|---|
Pose | +/sensing/gnss/pose |
+geometry_msgs/Pose |
+gnss_link |
+
Pose with Covariance | +/sensing/gnss/pose_with_covariance |
+geometry_msgs/PoseWithCovarianceStamped |
+gnss_link |
+
IMUSensor
is a component that simulates an IMU (Inertial Measurement Unit) sensor.
+Measures acceleration (\({m}/{s^2}\)) and angular velocity (\({rad}/{s}\)) based on the transformation of the GameObject to which this component is attached.
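The basic idea can be illustrated with the rough sketch below, which derives both quantities from the transform between consecutive fixed updates. It is only an illustration (the class is made up, gravity and noise are ignored) and not the actual IMUSensor implementation:

using UnityEngine;

// Rough illustration: numerical differentiation of the transform to obtain
// linear acceleration (m/s^2) and angular velocity (rad/s) in the sensor frame.
public class SimpleImuSketch : MonoBehaviour
{
    Vector3 lastPosition;
    Vector3 lastVelocity;
    Quaternion lastRotation;

    void Start()
    {
        lastPosition = transform.position;
        lastRotation = transform.rotation;
    }

    void FixedUpdate()
    {
        float dt = Time.fixedDeltaTime;

        // Linear acceleration from twice-differentiated position (gravity handling omitted).
        var velocity = (transform.position - lastPosition) / dt;
        var linearAcceleration = transform.InverseTransformDirection(velocity - lastVelocity) / dt;

        // Angular velocity from the rotation change since the previous fixed update.
        var deltaRotation = Quaternion.Inverse(lastRotation) * transform.rotation;
        deltaRotation.ToAngleAxis(out float angleDeg, out Vector3 axis);
        var angularVelocity = axis * (angleDeg * Mathf.Deg2Rad / dt);

        // In a real sensor these values would populate the output data for publishing.
        lastPosition = transform.position;
        lastVelocity = velocity;
        lastRotation = transform.rotation;
    }
}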
Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/IMUSensor.prefab
+
IMUSensor
has its own frame tamagawa/imu_link
in which its data is published.
+The sensor prefab is added to this frame.
+The tamagawa/imu_link
link is added to the sensor_kit_base_link
in the base_link
object located in the URDF
.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
The IMUSensor
functionality is split into two scripts:
IMUSensor
output as Imu message type published on a specific ROS2 topics.Scripts can be found under the following path:
+Assets/AWSIM/Scripts/Sensors/Imu/*
+
This is the main script in which all calculations are performed:
+Warning
+If the angular velocity about any axis is NaN
(infinite), then angular velocity is published as vector zero.
Output Hz
- frequency of output calculation and callback (default: 30Hz
)Category | +Type | +Description | +
---|---|---|
LinearAcceleration | +Vector3 | +Measured acceleration (m/s^2) | +
AngularVelocity | +Vector3 | +Measured angular velocity (rad/s) | +
Converts the data output from IMUSensor
to ROS2 Imu type message and publishes it.
+The conversion and publication is performed using the Publish(IMUSensor.OutputData outputData)
method, which is the callback
triggered by IMU Sensor (script) for the current output.
Warning
+In each 3x3 covariance matrices the row-major representation is filled with 0
and does not change during the script run.
+In addition, the field orientation
is assumed to be {1,0,0,0}
and also does not change.
Topic
- the ROS2 topic on which the message is published"/sensing/imu/tamagawa/imu_raw"
)Frame id
- frame in which data is published, used in Header
tamagawa/imu_link"
)Qos Settings
- Quality of service profile used in the publicationReliable
, Volatile
, Keep last
, 1000
)30Hz
Reliable
, Volatile
, Keep last/1000
Category | +Topic | +Message type | +frame_id |
+
---|---|---|---|
IMU data | +/sensing/imu/tamagawa/imu_raw |
+sensor_msgs/Imu |
+tamagawa/imu_link |
+
RGLUnityPlugin
(RGL
) comes with a number of the most popular LiDARs model definitions and ready-to-use prefabs. However, there is a way to create your custom LiDAR. This section describes how to add a new LiDAR model that works with RGL
, then create a prefab for it and add it to the scene.
Supported LiDARs
+Not all lidar types are supported by RGL
. Unfortunately, in the case of MEMs
LiDARs, there is a non-repetitive phenomenon - for this reason, the current implementation is not able to reproduce their work.
The example shows the addition of a LiDAR named NewLidarModel
.
To add a new LiDAR model, perform the following steps:
+Navigate to Assets/RGLUnityPlugin/Scripts/LidarModels
.
Add its name to the LidarModels.cs
at the end of the enumeration. The order of enums must not be changed to keep existing prefabs working.
Now, it is time to define the laser (also called a channel) distribution of the LiDAR.
+Info
+If your LiDAR:
+- has a uniform laser distribution
+- has the equal range for all of the lasers
+- fire all of the rays (beams) at the same time
+
+You can skip this step and use our helper method to generate a simple uniform laser array definition (more information in the next step).
+Laser distribution is represented by LaserArray
consists of:
centerOfMeasurementLinearOffsetMm
- 3D translation from the game object's origin to LiDAR's origin. Preview in 2D:
focalDistanceMm
- Distance from the sensor center to the focal point where all laser beams intersect.
lasers
- array of lasers (channels) with a number of parameters:
horizontalAngularOffsetDeg
- horizontal angle offset of the laser (Azimuth)verticalAngularOffsetDeg
- vertical angle offset of the laser (Elevation)verticalLinearOffsetMm
- vertical offset of the laser (translation from origin)ringId
- Id of the ring (in most cases laser Id)timeOffset
- time offset of the laser firing in milliseconds (with reference to the first laser in the array)minRange
- minimum range of the laser (set if lasers have different ranges)maxRange
- maximum range of the laser (set if lasers have different ranges)To define a new laser distribution create a new class in the LaserArrayLibrary.cs
LaserArray
with the definition.In this example, NewLidarModel
laser distribution consists of 5 lasers with
- elevations: 15, 10, 0, -10, -15 degrees
+- azimuths: 1.4, -1.4, 1.4, -1.4, 1.4 degrees
+- ring Ids: 1, 2, 3, 4, 5
+- time offsets: 0, 0.01, 0.02, 0.03, 0.04 milliseconds
+- an equal range that will be defined later
+
+Coordinate system
+Keep in mind that Unity has a left-handed coordinate system, while most of the LiDAR's manuals use a right-handed coordinate system. In that case, reverse sign of the values of the angles.
+The last step is to create a LiDAR configuration by adding an entry to LidarConfigurationLibrary.cs
Add a new item to the ByModel
dictionary that collects LiDAR model enumerations with their BaseLidarConfiguration
choosing one of the implementations:
UniformRangeLidarConfiguration
- lidar configuration for uniformly distributed rays along the horizontal axis with a uniform range for all the rays (it contains minRange
and maxRange
parameters additionally)LaserBasedRangeLidarConfiguration
- lidar configuration for uniformly distributed rays along the horizontal axis with ranges retrieved from lasers descriptionLidarConfiguration.cs
like:HesaiAT128LidarConfiguration
HesaiQT128C2XLidarConfiguration
HesaiPandar128E4XLidarConfiguration
Lidar configuration parameters descrition
+Please refer to this section for the detailed description of all configuration parameters.
+Done. New LiDAR preset should be available via Unity Inspector.
+ +Frame rate of the LiDAR can be set in the Automatic Capture Hz
parameter.
Note: In the real-world LiDARs, frame rate affects horizontal resolution. Current implementation separates these two parameters. Keep in mind to change it manually.
+LidarSensor.cs
to created object.Model Preset
field, check if the configuration loads correctly. You can now customize it however you like.PointCloudVisualization.cs
for visualization purposes.RglLidarPublisher.cs
script to created object.SceneManager
) or use one of the existing sample scenes.Add the prepared LiDAR prefab by drag the prefab file and drop it into a scene.
+ +A LiDAR GameObject should be instantiated automatically
+ +Now you can run the scene and check how your LiDAR works.
+Success
+We encourage you to develop a vehicle using the new LiDAR you have added - learn how to do this here.
+LidarSensor
is the component that simulates the LiDAR (Light Detection and Ranging) sensor.
+LiDAR works by emitting laser beams that bounce off objects in the environment, and then measuring the time it takes for the reflected beams to return, allowing the sensor to create a 3D map of the surroundings.
+This data is used for object detection, localization, and mapping.
LiDAR in an autonomous vehicle can be used for many purposes. +The ones mounted on the top of autonomous vehicles are primarily used
+LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes, enabling safe maneuvers such as lane changing or turning.
+LidarSensor
component is a part of RGLUnityPlugin
that integrates the external RobotecGPULidar (RGL
) library with Unity. RGL
also allows to provide additional information about objects, more about it here.
Use RGL in your scene
+If you want to use RGL
in your scene, make sure the scene has an SceneManager
component added and all objects meet the usage requirements.
RGL default scenes
+If you would like to see how LidarSensor
works using RGL
or run some tests, we encourage you to familiarize yourself with the RGL
test scenes section.
Supported LiDARs
+The current scripts implementation allows you to configure the prefab for any mechanical LiDAR. +You can read about how to do it here. +MEMS-based LiDARs due to their different design are not yet fully supported.
+Prefabs can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/RobotecGPULidars/*
+
The table of available prefabs can be found below:
+LiDAR | +Path | +Appearance | +
---|---|---|
HESAI Pandar40P | +HesaiPandar40P.prefab |
++ |
HESAI PandarQT64 | +HesaiPandarQT64.prefab |
++ |
HESAI PandarXT32 | +HesaiPandarXT32.prefab |
++ |
HESAI QT128C2X | +HesaiQT128C2X.prefab |
++ |
HESAI Pandar128E4X | +HesaiPandar128E4X.prefab |
++ |
HESAI AT128 E2X | +HesaiAT128E2X.prefab |
++ |
Ouster OS1-64 | +OusterOS1-64.prefab |
++ |
Velodyne VLP-16 | +VelodyneVLP16.prefab |
++ |
Velodyne VLC-32C | +VelodyneVLP32C.prefab |
++ |
Velodyne VLS-128-AP | +VelodyneVLS128.prefab |
++ |
LidarSensor
is configured in default vehicle EgoVehicle
prefab.
+It is added to URDF
object as a child of sensor_kit_base_link
.
+LidarSensor
placed in this way does not have its own frame, and the data is published relative to sensor_kit_base_link
.
+More details about the location of the sensors in the vehicle can be found here.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
Additional LiDARs
+For a LiDAR placed on the left side, right side or rear, an additional link should be defined.
+The LiDAR sensor simulation functionality is split into three components:
+Moreover, the scripts use Resources
to provide configuration for prefabs of supported lidar models:
These are elements of the RGLUnityPlugin
, you can read more here.
This is the main component that creates the RGL
node pipeline for the LiDAR simulation.
+The pipeline consists of:
Automatic Capture Hz
- the rate of sensor processing (default: 10Hz
)Model Preset
- allows selecting one of the built-in LiDAR models (default: RangeMeter
)Return Type
- allows selecting multi-return mode (note: this requires more computation). Modes other than "not divergent" require positive beam divergence.Apply Distance Gaussian Noise
- enable/disable distance Gaussian noise (default: true
)Apply Angular Gaussian Noise
- enable/disable angular Gaussian noise (default: true
)Apply Velocity Distortion
- enable/disable velocity distortion (default: false
)Configuration:
+Laser Array
- geometry description of lidar's array of lasers, should be prepared on the basis of the manual for a given model of LiDAR (default: loaded from LaserArrayLibrary
)Horizontal Resolution
- the horiontal resolution of laser array firingsMin H Angle
- minimum horizontal angle, left (default: 0
)Max H Angle
- maximum horizontal angle, right (default: 0
)Laser Array Cycle Time
- time between two consecutive firings of the whole laser array in milliseconds (default: 0
); used for velocity distortion feature.Horizontal Beam Divergence
- represents horizontal deviation of photons from a single beam emitted by a LiDAR sensor (in degrees);Vertical Beam Divergence
- represents vertical deviation of photons from a single beam emitted by a LiDAR sensor (in degrees);Angular Noise Type
- angular noise typeRay Based
)Angular Noise St Dev
- angular noise standard deviation in degree0.05729578
)Angular Noise Mean
- angular noise mean in degrees0
)Distance Noise St Dev Base
- distance noise standard deviation base in meters0.02
)Distance Noise Rise Per Meter
- distance noise standard deviation rise per meter0
)Distance Noise Mean
- distance noise mean in meters0
)Output Restriction Params:
+Apply Restriction
- enable/disable fault injection (default: false
)Rectangular Restriction Masks
- list of rectangular masks used for output restriction; each mask is represented via ranges of angles in horizontal and vertical dimensionsEnable Periodic Restriction
- change mode from static to periodic (default: false
)Restriction Period
- time of whole period in secondsRestriction Duty Rate
- rate of time with masked outputEnable Restriction Randomizer
- enable/disable random periodic mode (default: false
)Min Random Period
- lower bound of time period in seconds used in random modeMax Random Period
- upper bound of time period in seconds used in random modeAdditional options (available for some Lidar Model Preset)
+Min Range
- minimum range of the sensor (if not avaiable, the range is different for each laser in Laser Array
)Max Range
- maximum range of the sensor (if not avaiable, the range is different for each laser in Laser Array
)High Resolution Mode Enabled
- whether to activate high resolution mode (available for Hesai Pandar 128E4X
LiDAR model)LidarSensor
provides public methods to extend this pipeline with additional RGL
nodes.
+In this way, other components can request point cloud processing operations and receive data in the desired format.
Example of how to get XYZ point cloud data:
+RGLNodeSequence
with RGL node to yield XYZ field and connect it to LidarSensor
:
+ rglOutSubgraph = new RGLNodeSequence().AddNodePointsYield("OUT_XYZ", RGLField.XYZ_F32);
+lidarSensor = GetComponent<LidarSensor>();
+lidarSensor.ConnectToWorldFrame(rglOutSubgraph); // you can also connect to Lidar frame using ConnectToLidarFrame
+// You can add a callback to receive a notification when new data is ready
+lidarSensor.onNewData += HandleLidarDataMethod;
+
RGLNodeSequence
call GetResultData
:
+ Vector3[] xyz = new Vector3[0];
+rglOutSubgraph.GetResultData<Vector3>(ref xyz);
+
RglLidarPublisher
extends the main RGL
pipeline created in LidarSensor
with RGL
nodes that produce point clouds in specific format and publish them to the ROS2 topic.
+Thanks to the ROS2 integration with RGL
, point clouds can be published directly from the native library.
+RGL
creates ROS2 node named /RobotecGPULidar
with publishers generated by RGL
nodes.
Currently, RglLidarPublisher
implements ROS2 publishers for two message types:
PointCloud2
message allows publishing point clouds with different points attributes (described by fields
parameter). In order to easily select different frequently used field sets RglLidarPublisher
has several field presets defined:
Preset | +Description | +Fields | +
---|---|---|
Pcl 24 | +24-byte point cloud format used by Autoware | +XYZ_VEC3_F32, PADDING_32, INTENSITY_F32, RING_ID_U16, PADDING_16 | +
PointXYZIRCEADT | +PointXYZIRCEADT format used by Autoware | +XYZ_VEC3_F32, INTENSITY_U8, RETURN_TYPE_U8, RING_ID_U16, ELEVATION_F32, AZIMUTH_F32, DISTANCE_F32, TIME_STAMP_U32 | +
Pcl 48 | +48-byte extended version point cloud format used by Autoware (legacy) | +XYZ_VEC3_F32, PADDING_32, INTENSITY_F32, RING_ID_U16, PADDING_16, AZIMUTH_F32, DISTANCE_F32, RETURN_TYPE_U8, PADDING_8, PADDING_16, PADDING_32, TIME_STAMP_F64 | +
ML Instance Segmentation | +Machine learning format for instance/semantic segmentation tasks | +XYZ_VEC3_F32, ENTITY_ID_I32, INTENSITY_F32 | +
Radar Smart Micro | +Format used in Radar Smart Micro | +XYZ_VEC3_F32, RADIAL_SPEED_F32, POWER_F32, RCS_F32, NOISE_F32, SNR_F32 | +
Custom | +Empty format that allows the user to define its fieldsets | ++ |
PointXYZIRCEADT format
+For a better understanding of the PointXYZIRCEADT format, we encourage you to familiarize yourself with the point cloud pre-processing process in Autoware, which is described here.
Frame ID - frame in which data are published, used in Header (default: "world")
Qos - Quality of service profile used in the publication
- Reliability Policy - Reliability policy (default: Best effort)
- Durability Policy - Durability policy (default: Volatile)
- History Policy - History policy (default: Keep last)
- History Depth - History depth. If history policy is Keep all, depth is ignored. (default: 5)
Point Cloud 2 Publishers - List of sensor_msgs/PointCloud2 message publishers
- Topic - Topic name to publish on
- Publish - If false, publishing will be stopped
- Fields Preset - allows selecting one of the pre-defined fieldsets (choose Custom to define your own)
- Fields - List of fields to be present in the message
Radar Scan Publishers - List of radar_msgs/RadarScan message publishers
- Topic - Topic name to publish on
- Publish - If false, publishing will be stopped
Elements configurable in simulation runtime
+Once the simulation starts, only the Publish
flag is handled. All publishers are initialized at simulation startup; updating their parameters at runtime is not supported, and any changes to the publishing configuration are ignored.
10Hz
Best effort, Volatile, Keep last/5
Category | +Topic | +Message type | +frame_id |
+
---|---|---|---|
PointCloud 24-byte format | +/lidar/pointcloud |
+sensor_msgs/PointCloud2 |
+world |
+
PointXYZIRCEADT format | +/lidar/pointcloud_ex |
+sensor_msgs/PointCloud2 |
+world |
+
A component visualizing a point cloud obtained from RGL
in the form of a Vector3
list as colored points in the Unity scene.
+Based on the defined color table, it colors the points depending on the height at which they are located.
The obtained points are displayed as the vertices of a mesh, and their coloring is possible thanks to the use of the PointCloudMaterial material, which can be found in the following path:
Assets/RGLUnityPlugin/Resources/PointCloudMaterial.mat
+
Point Cloud Visualization
preview:
Point Shape - the shape of the displayed points (default: Box)
Point Size - the size of the displayed points (default: 0.05)
Colors - color list used depending on height (6 colors: red, orange, yellow, green, blue, violet)
Auto Compute Coloring Heights - automatic calculation of height limits for the list of colors (default: false)
Min Coloring Height - minimum height value from which color matching is performed, below this value all points have the first color from the list (default: 0)
Max Coloring Height - maximum height value from which color matching is performed, above this value all points have the last color from the list (default: 20)
To ensure the publication of the information described in this section, GameObjects must be adjusted accordingly. This tutorial describes how to do it.
+RGL Unity Plugin
allows assigning an Intensity Texture
to GameObjects to produce a point cloud containing information about the intensity of lidar ray hits. It can be used to distinguish different levels of an object's reflectivity.
Point cloud containing intensity is published on the ROS2 topic via RglLidarPublisher
component. The intensity value is stored in the intensity
field of the sensor_msgs/PointCloud2
message.
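Intensity can also be retrieved programmatically through the same public pipeline API as in the XYZ example above. The lines below are a hedged variation of that example, not code from the AWSIM sources; they assume an RGLField.INTENSITY_F32 value corresponding to the INTENSITY_F32 field listed in the presets and a float output array.
// Hypothetical variation of the XYZ example: yield per-point intensity instead of positions.
RGLNodeSequence intensitySubgraph = new RGLNodeSequence().AddNodePointsYield("OUT_INTENSITY", RGLField.INTENSITY_F32);
lidarSensor.ConnectToWorldFrame(intensitySubgraph); // lidarSensor obtained as in the example above

float[] intensity = new float[0];
intensitySubgraph.GetResultData<float>(ref intensity);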
RGL Unity Plugin
allows assigning an ID to GameObjects to produce a point cloud containing information about hit objects. It can be used for instance/semantic segmentation tasks. This tutorial describes how to do it.
LidarInstanceSegmentationDemo
+If you would like to see how LidarInstanceSegmentationDemo
works using RGL
or run some tests, we encourage you to familiarize yourself with this section.
A point cloud containing hit object IDs is published on the ROS2 topic via the RglLidarPublisher component. The publisher for such a point cloud format is not added by default. Add a new PointCloud2 publisher with the ML Instance Segmentation fields preset or create your own format by selecting the Custom preset (remember to add the ENTITY_ID_I32 field, which holds object IDs).
The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented that saves a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format: the instance ID (assigned by the SemanticCategory component) and the GameObject name.
To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:
The dictionary mapping file will be saved at the end of the simulation.
+Describes LiDAR faults modeled as a set of rectangular masks obstructing part of the rays.
+Example set of parameters for output restriction resulting in one rectangular mask obstructing rays:
Robotec GPU Lidar (RGL
) is an open source high performance lidar simulator running on CUDA-enabled GPUs.
+It is a cross-platform solution compatible with both Windows and Linux operating systems.
+RGL
utilizes RTX
cores for acceleration, whenever they are accessible.
RGL
is used in AWSIM for performance reasons.
+Thanks to it, it is possible to perform a large number of calculations using the GPU, which is extremely helpful due to the size of the scenes.
+AWSIM is integrated with RGL
out-of-the-box - using RGLUnityPlugin
asset.
Warning
+If you want to use RGL
in your scene, make sure the scene has an RGLSceneManager
component added and all objects meet the usage requirements.
Describing the concept of using RGL
in AWSIM, we distinguish:
Mesh - a handle to the on-GPU data of the 3D model of objects that in AWSIM are provided in the form of Mesh Filter component.
+RGLUnityPlugin
supports two types of meshes: static (rendered by Mesh Renderer) and animated (rendered by Skinned Mesh Renderer).
+Static meshes could be shared between Entities.
Entity - represents a 3D object on the scene with its position and rotation. +It consists of a lightweight reference to a Mesh and a transformation matrix of the object.
+Scene - a location where raytracing occurs.
+It is a set of entities uploaded by the SceneManager
script to the RGL Native Library.
Node - performs specific operations such as setting rays for raytracing, transforming rays, performing raytracing, and manipulating output formats.
+In AWSIM, the main sequence of RGL
nodes that simulates LiDAR is created in the LidarSensor
script.
+Other scripts usually create nodes to get requested output or preprocess point cloud, and then connect those nodes to the LidarSensor
.
Graph - a collection of connected Nodes that can be run to calculate results. +It allows users to customize functionality and output format by adding or removing Nodes.
+Producing a point cloud is based on the use of a Scene containing Entities with Meshes, and placing an Ego Entity with LiDAR sensor that creates a Graph describing ray pattern and performing raytracing.
+In subsequent frames of the simulation, SceneManager
synchronizes the scene between Unity and RGL
, and LiDAR sensor updates rays pose on the scene and triggers Graph to perform raytracing and format desired output.
RGLUnityPlugin
asset contains:
*.dll
and *.so
files).RGL
in the Unity - details below.SceneManager
- responsible for syncing the scene between Unity and RGL
.LidarSensor
- provide lidar configuration and create RGL
pipeline to simulate lidar.RadarSensor
- provide radar configuration and create RGL
pipeline to simulate radar.PointCloudVisualization
- visualize point cloud on the Unity scene.IntensityTexture
- adds slot for Intensity Texture ID
to the GameObjectSemanticCategory
- adds category ID to the GameObjectRGLDebugger
- provides configuration for Native RGL
debug tools (logging and tape).LidarModels
- enumeration with supported LiDARs models.LidarConfiguration
- top-level configuration class, horizontal ranges, distance range, laser array.LidarConfigurationLibrary
- provides a number of pre-defined LidarConfigurations
.LaserArray
- definition of a (vertical) array of lasers.LaserArrayLibrary
- provides a number of pre-defined LaserArrays
.Laser
- describes offsets of a single laser within a LaserArray
.LidarNoiseParams
- describes a LiDAR noise that can be simulatedLidarOutputRestrictions
- Describes LiDAR faults modeled as a set of rectangular masks obstructing part of the raysRadarModels
- enumeration with supported radar models.RadarConfiguration
- top-level configuration class, horizontal ranges, distance range, radar parameters.LidarConfigurationLibrary
- provides a number of pre-defined RadarConfigurations
.RadarNoiseParams
- describes a radar noise that can be simulatedLowLevelWrappers
scripts - provides some convenience code to call Native RGL
functions.Utilities
scripts - miscellaneous utilities to make rest of the code clearer.Each scene needs SceneManager
component to synchronize models between Unity and RGL
.
+On every frame, it detects changes in the Unity's scene and propagates the changes to native RGL
code.
+When necessary, it obtains 3D models from GameObjects on the scene, and when they are no longer needed, it removes them.
Three different strategies to interact with in-simulation 3D models are implemented.
+SceneManager
uses one of the following policies to construct the scene in RGL
:
Only Colliders
- data is computed based on the colliders only, which are geometrical primitives or simplified Meshes.
+This is the fastest option, but will produce less accurate results, especially for the animated entities.Regular Meshes And Colliders Instead Of Skinned
- data is computed based on the regular meshes for static Entities (with MeshRenderers
component) and the colliders for animated Entities (with SkinnedMeshRenderer
component).
+This improves accuracy for static Entities with a negligible additional performance cost.Regular Meshes And Skinned Meshes
- uses regular meshes for both static and animated Entities.
+This incurs additional performance, but produces the most realistic results.Mesh Source Strategy | +Static Entity | +Animated Entity (NPC) | +
---|---|---|
Only Colliders |
+Collider | +Collider | +
Regular Meshes And Colliders Instead Of Skinned |
+Regular Mesh | +Collider | +
Regular Meshes And Skinned Meshes |
+Regular Mesh | +Regular Mesh | +
Mesh source can be changed in the SceneManager
script properties:
Performance
+SceneManager
performance depends on mesh source option selected.
Objects, to be detectable by RGL
, must fulfill the following requirements:
Collider
, Mesh Renderer
, or Skinned Mesh Renderer
- it depends on SceneManager
mesh source parameter.Be readable from CPU-accessible memory - it can be achieved using the Read/Write Enabled
checkbox in mesh settings.
Readable objects
+Primitive Objects are readable by default.
+Example
+The activated Readable option in the mesh should look like this.
+ +RGL Unity Plugin
allows to:
Intensity Texture
to the GameObjects to produce a point cloud containing information about the lidar ray intensity of hit. It can be used to distinguish different levels of an object's reflectivity. To enable reading material information, add IntensityTexture
component to every GameObject
that is expected to have non-default intensity values.
After that desired texture has to be inserted into the Intensity Texture
slot.
The texture has to be in R8
format. That means 8bit
in the red channel (255
possible values).
When the texture is assigned, the intensity values will be read from the texture and added to the point cloud if and only if the mesh component in the GameObject
has a set of properly created texture coordinates.
The expected number of texture coordinates must equal the number of vertices in the mesh. The number of indices is not relevant. Otherwise, the texture will not be read properly.
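A small validation helper can catch meshes that would silently fail these requirements. The sketch below is a hypothetical utility, not part of AWSIM; it only uses standard Unity APIs (Mesh.isReadable, Mesh.uv, Mesh.vertexCount) to flag meshes that are not CPU-readable or whose texture coordinate count does not match the vertex count.
using UnityEngine;

// Hypothetical helper: logs meshes that cannot provide usable intensity texture coordinates.
public static class IntensityTextureMeshCheck
{
    public static void CheckAllMeshFilters()
    {
        foreach (MeshFilter filter in Object.FindObjectsOfType<MeshFilter>())
        {
            Mesh mesh = filter.sharedMesh;
            if (mesh == null) continue;

            if (!mesh.isReadable)
                Debug.LogWarning($"{filter.name}: mesh is not Read/Write enabled", filter);
            else if (mesh.uv.Length != mesh.vertexCount)
                Debug.LogWarning($"{filter.name}: UV count ({mesh.uv.Length}) differs from vertex count ({mesh.vertexCount})", filter);
        }
    }
}
Calling CheckAllMeshFilters() once from an editor or startup script prints a warning for every offending mesh.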
+To enable segmentation, add SemanticCategory
component to every GameObject that is expected to have a distinct ID. All meshes that belong to a given object will inherit its ID.
+ID inheritance mechanism allows IDs to be overwritten for individual meshes/objects.
+This solution also enables the creation of coarse categories (e.g., Pedestrians
, Vehicles
)
Example
+SemanticCategory
component is assigned to the Taxi
GameObject. All meshes in the Taxi
GameObject will have the same instance ID as Taxi
:*
+
Example
+The driver has its own SemanticCategory
component, so his instance ID will differ from the rest of the meshes:
+
Example
+SemanticCategory
component is assigned to the Vehicles
GameObject that contains all of the cars on the scene:
+
The resulting simulation data contains only the IDs of objects, without their human-readable names. To facilitate the interpretation of such data, a function has been implemented that saves a file with a dictionary mapping instance IDs to GameObject names. It writes pairs of values in the yaml format: the instance ID (assigned by the SemanticCategory component) and the GameObject name.
To enable saving the dictionary mapping, set the output file path in the Semantic Category Dictionary File property of the Scene Manager component:
The dictionary mapping file will be saved at the end of the simulation.
The RadarSensor
component simulates a radar sensor that detects objects in the environment.
+Real-world radar sensors work by emitting radio waves and detecting the waves that are reflected back from objects in
+the environment.
RadarSensor
implements a simplified model of wave propagation and reflection using GPU-accelerated ray casting and
+post-processing
+to obtain radar-specific information such as radial (aka doppler)
+speed, RCS,
+power level, noise level and signal-to-noise ratio.
RadarSensor
component is a part of RGLUnityPlugin
that integrates the external
+RobotecGPULidar (RGL
) library with Unity.
Use RGL in your scene
+If you want to use RGL
in your scene, make sure the scene has
+an SceneManager
component added and all objects meet
+the usage requirements.
Prefabs can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/RobotecGPULidars/*
+Radar | +Path | +Appearance | +
---|---|---|
SmartmicroDRVEGRD169 | +SmartmicroDRVEGRD169.prefab |
++ |
The Radar sensor simulation functionality is split into three components:
+This is the component that creates the RGL
node pipeline for the radar simulation.
+The pipeline consists of:
Automatic Capture Hz
- the rate of sensor processingModel Preset
- allows selecting one of the built-in radar modelsMin Azimuth Angle
- minimum azimuth angle (in degrees)Max Azimuth Angle
- maximum azimuth angle (in degrees)Min Elevation Angle
- minimum elevation angle (in degrees)Max Elevation Angle
- maximum elevation angle (in degrees)Frequency
- frequency of the wave propagation by the radar (in GHz)Power Transmitted
- power transmitted by the radar (in dBm)Cumulative Device Gain
- gain of the radar's antennas and any other gains of the device (in dBi)Received Noise Mean
- mean of the received noise (in dB)Received Noise St Dev
- standard deviation of the received noise (in dB)Begin Distance
- begin of the distance interval where the following parameters are used (in meters)End Distance
- end of the distance interval where the following parameters are used (in meters)Distance Separation Threshold
- minimum distance between two points to be considered as separate detections (in meters)Radial Speed Seperation Threshold
- minimum radial speed difference between two points to be considered as separate detections (in meters per seconds)Azimuth Separation Threshold
- minimum azimuth difference between two points to be considered as separate detections (in degrees)RadarSensor
provides public methods to extend this pipeline with additional RGL
nodes.
+In this way, other components can request point cloud processing operations and receive data in the desired format.
Example of how to get XYZ point cloud data:
+RGLNodeSequence
with RGL node to yield XYZ field and connect it to RadarSensor
:
+ rglOutSubgraph = new RGLNodeSequence().AddNodePointsYield("OUT_XYZ", RGLField.XYZ_F32);
+radarSensor = GetComponent<RadarSensor>();
+radarSensor.ConnectToWorldFrame(rglOutSubgraph); // you can also connect to radar frame using ConnectToRadarFrame
+// You can add a callback to receive a notification when new data is ready
+radarSensor.onNewData += HandleRadarDataMethod;
+
RGLNodeSequence
call GetResultData
:
+ Vector3[] xyz = new Vector3[0];
+rglOutSubgraph.GetResultData<Vector3>(ref xyz);
+
RadarSensor uses RglLidarPublisher
for publishing two types of ROS 2 messages:
The content of these messages is presented in the table below.
+Message type | +Data which the message has | +Comment | +
---|---|---|
PointCloud2 | +Position Radial speed Power RCS Noise SNR |
+Calculated in Radar node from RGL | +
RadarScan | +Range Azimuth Elevation Radial speed Amplitude |
+Calculated in Radar node from RGL | +
On the screenshot below (scene RadarSceneDevelopSample
) radar detections are shown as blue boxes.
VehicleStatusSensor
is a component that is designed to aggregate information about the current state of the vehicle.
+It aggregates information about:
AUTONOMOUS
or MANUAL
.DRIVE
or REVERSE
.0.1745
(10°).DISABLE
or ENABLE_LEFT
.DISABLE
or ENABLE
.{0.2, 0.0, 0.0}
.Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/VehicleStatusSensor.prefab
+
This sensor is added directly to the URDF link in the EgoVehicle
prefab.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
All features are implemented within the Vehicle Report Ros2 Publisher (script) which can be found under the following path:
+Assets/AWSIM/Prefabs/Sensors/*
+
The script is responsible for updating and publishing each of the aggregated data on a separate topic. +Therefore, it has 6 publishers publishing the appropriate type of message with a constant frequency - one common for all data.
+* Report Topic
- topic on which suitable type of information is publishedPublish Hz
- frequency of publications on each topic30Hz
)Frame ID
- frame in which data is published, used in Header
base_link
)QoS
- Quality of service profile used in the publication"system_default"
: Reliable
, Volatile
, Keep last/1
)Vehicle
- the object from which all published data are readNone
)Vehicle configuration
+An important element of the script configuration that must be set is the scene Object (Vehicle
).
+It will be used for reading all the data needed.
+The appropriate EgoVehicle
object should be selected.
If you can't select the right object, make sure it's set up correctly - all the required scripts must have been added for EgoVehicle
.
30Hz
Reliable, Volatile, Keep last/1
Category | +Topic | +Message type | +frame_id |
+
---|---|---|---|
Control mode | +/vehicle/status/control_mode |
+autoware_auto_vehicle_msgs/ControlModeReport |
+- | +
Gear status | +/vehicle/status/gear_status |
+autoware_auto_vehicle_msgs/GearReport |
+- | +
Steering status | +/vehicle/status/steering_status |
+autoware_auto_vehicle_msgs/SteeringReport |
+- | +
Turn indicators status | +/vehicle/status/turn_indicators_status |
+autoware_auto_vehicle_msgs/TurnIndicatorsReport |
+- | +
Hazard lights status | +/vehicle/status/hazard_lights_status |
+autoware_auto_vehicle_msgs/HazardLightsReport |
+- | +
Velocity status | +/vehicle/status/velocity_status |
+autoware_auto_vehicle_msgs/VelocityReport |
+base_link |
+
NPCPedestrian
is an object that simulates a human standing or moving on the scene.
+It can move cyclically in any chosen place thanks to the available scripts.
+Traffic light tracking will be implemented in the future.
Sample scene
+If you would like to see how NPCPedestrian
works or run some tests, we encourage you to familiarize yourself with the NPCPedestrianSample
default scene described in this section.
Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/NPCs/Pedestrians/humanElegant.prefab
+
Prefab is developed using models available in the form of *.fbx
file.
+From this file, the visual elements of the model, Animator
and LOD
were loaded.
+The Animator
and LOD
are added as components of the main-parent GameObject in prefab, while the visual elements of the model are added as its children.
*.fbx
file can be found under the following path:
Assets/AWSIM/Models/NPCs/Pedestrians/Human/humanElegant.fbx
+
NPCPedestrian
prefab has the following content:
The ReferencePoint
is used by the NPC Pedestrian (script) described here.
Pedestrians implemented in the scene are usually added in one aggregating object - in this case it is NPCPedestrians
.
+This object is added to the Environment
prefab.
There are several components responsible for the full functionality of NPCPedestrian
:
Scripts can be found under the following path:
+Assets/AWSIM/Scripts/NPCs/Pedestrians/*
+
Rigidbody
ensures that the object is controlled by the physics engine.
+In order to connect the animation to the object, the Is Kinematic
option must be enabled.
+By setting Is Kinematic
, each NPCPedestrian
object will have no physical interaction with other objects - it will not react to a vehicle that hits it.
+The Use Gravity
should be turned off - the correct position of the pedestrian in relation to the ground is ensured by the NPC Pedestrian (script).
+In addition, Interpolate
should be turned on to ensure the physics engine's effects are smoothed out.
LOD
provides dependence of the level of detail of the object depending on the ratio of the GameObject’s screen space height to the total screen height.
+The pedestrian model has two object groups: suffixed LOD0
and LOD1
.
+LOD0
objects are much more detailed than LOD1
- they have many more vertices in the Meshes.
+Displaying complex meshes requires more performance, so if the GameObject is a small part of the screen, less complex LOD1
objects are used.
In the case of the NPCPedestrian
prefab, if its object is less than 25% of the height of the screen then objects with the LOD1
suffix are used.
+For values less than 1% the object is culled.
Animator
component provides animation assignments to a GameObject in the scene.
+It uses a developed Controller
which defines which animation clips to use and controls when and how to blend and transition between them.
The AnimationController
for humans should have the two float parameters for proper transitions.
+Transitions between animation clips are made depending on the values of these parameters:
moveSpeed
- pedestrian movement speed in \({m}/{s}\),rotateSpeed
- pedestrian rotation speed in \({rad}/{s}\).Developed controller can be found in the following path:
+Assets/AWSIM/Models/NPCs/Pedestrians/Human/Human.controller
Walking to running transition
+The example shows the state of walking and then transitions to running as a result of exceeding the condition \(\mathrm{moveSpeed} > 1.6\)
+ +The script takes the Rigidbody
and Animator
components and combines them in such a way that the actual animation depends on the movement of Rigidbody
.
+It provides inputs that allow the pedestrian to move - change its position and orientation.
+In addition, the ReferencePoint
point is used to ensure that the pedestrian follows the ground plane correctly.
Ray Cast Max Distance
- ray-cast max distance for locating the ground.Ray Cast Origin Offset
- upward offset of the ray-cast origin from the GameObject local origin for locating the ground.Category | +Type | +Description | +
---|---|---|
SetPosition | +Vector3 | +Move the NPCPedestrian so that the reference point is at the specified coordinates. |
+
SetRotation | +Vector3 | +Rotate the NPCPedestrian so that the orientation of the reference point becomes the specified one. |
+
Simple Pedestrian Walker Controller is a script that allows the pedestrian to cyclically move back and forth along a straight line.
+One-way motion is performed with a fixed time as parameter Duration
and a constant linear velocity as parameter Speed
.
+The script obviously uses the NPCPedestrian
controls provided by the NPC Pedestrian (script) inputs.
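Custom movement logic can drive the same documented inputs from your own script. The sketch below is a hedged illustration only - the NPCPedestrian type, its namespace and the exact SetPosition/SetRotation signatures are assumed from the Public API table above - and it simply pushes a pedestrian forward at a constant speed, similar in spirit to the walker controller.
using UnityEngine;
using AWSIM; // assumption: namespace that contains the NPC Pedestrian (script) class

// Hypothetical controller: moves an NPCPedestrian forward along its initial facing direction.
public class StraightLineWalker : MonoBehaviour
{
    [SerializeField] private NPCPedestrian npcPedestrian; // assumed component name
    [SerializeField] private float speed = 1.0f;          // movement speed in m/s

    private Vector3 position;
    private Vector3 rotation;

    void Start()
    {
        position = transform.position;
        rotation = transform.rotation.eulerAngles;
    }

    void FixedUpdate()
    {
        // Advance the target pose and hand it to the NPC Pedestrian inputs.
        position += Quaternion.Euler(rotation) * Vector3.forward * speed * Time.fixedDeltaTime;
        npcPedestrian.SetPosition(position);
        npcPedestrian.SetRotation(rotation);
    }
}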
Pedestrian walking on the sidewalk
+ +Collider
is an optional pedestrian component.
+By default, NPCPedestrian
doesn't have this component added, It can be added if you want to detect a collision, e.g. with an EgoVehicle
.
+There are several types of colliders, choose the right one and configure it for your own requirements.
Capsule Collider
+An example of a CapsuleCollider
that covers almost the entire pedestrian.
NPCVehicle
is a non-playable object that simulates a vehicle that is stationary or moving around the scene.
+It can move on roads, more specifically TrafficLanes
, thanks to the use of TrafficSimulator
- which you can read more about here.
+Vehicles moving on the scene take into account each other - avoiding collisions, follow traffic lights and have an implemented mechanism of yielding the right of way.
Sample scene
+If you would like to see how NPCVehicle
works or run some tests, we encourage you to familiarize yourself with the NPCVehicleSample
default scene described in this section.
Ego Vehicle
+If you are interested in the most important vehicle on the scene - Ego Vehicle
, we encourage you to read this section.
Prefabs can be found under the following path:
+Assets/AWSIM/Prefabs/NPCs/Vehicles/*
+
The table shows the available prefabs of the vehicles:
++ | Hatchback | +SmallCar | +Taxi | +Truck | +Van | +
---|---|---|---|---|---|
Appearance | ++ | + | + | + | + |
Prefab | +Hatchback.prefab |
+SmallCar.prefab |
+Taxi-64.prefab |
+Truck_2t.prefab |
+Van.prefab |
+
NPCVehicle
prefab has the following content:
As you can see, it consists of 2 parents for GameObjects: Visuals
- aggregating visual elements, Colliders
- aggregating colliders and single object CoM
.
+ All objects are described in the sections below.
Prefabs are developed using models available in the form of *.fbx
files.
+For each vehicle, the visuals elements and LOD
were loaded from the appropriate *.fbx
file.
+The LOD
is always added as components of the main-parent GameObject in prefab, while the visual elements of the model are aggregated and added in object Visuals
.
*.fbx
file for each vehicle is located in the appropriate Models
directory for the vehicle under the following path:
Assets/AWSIM/Models/NPCs/Vehicles/<vehicle_name>/Models/<vehicle_name>.fbx
+
As you can see, the additional visual element is Driver
.
It was also loaded from the *.fbx
file which can be found under the following path:
Assets/AWSIM/Models/NPCs/Vehicles/Driver/Model/Driver.fbx
+
Vehicle fbx
+The content of a sample *.fbx
file is presented below, all elements except Collider
have been added to the prefab as visual elements of the vehicle.
+Collider
is used as the Mesh source for the Mesh Collider
in the BodyCollider
object.
.
+The default scene does not have vehicles implemented in fixed places, but they are spawned by RandomTrafficSimulator
which is located in the Environment
prefab.
+Therefore, before starting the simulation, no NPCVehicle
object is on the scene.
When you run the simulation, you can see objects appearing as children of RandomTrafficSimulator
:
In each NPCVehicle
prefab, the local coordinate system of the vehicle (main prefab link) should be defined in the axis of the rear wheels projected onto the ground - in the middle of the distance between them.
+This aspect holds significance when characterizing the dynamics of the object, as it provides convenience in terms of describing its motion and control.
+
There are several components responsible for the full functionality of NPCVehicle
:
Script can be found under the following path:
+Assets/AWSIM/Scripts/NPCs/Vehicles
+
CoM
(Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody
.
+The NPC Vehicle (script) is responsible for its assignment.
+This measure should be defined in accordance with reality.
+Most often, the center of mass of the vehicle is located in its center, at the height of its wheel axis - as shown below.
+
Colliders are used to ensure collision between objects.
+In NPCVehicle
, the main BodyCollider
collider and Wheels Colliders
colliders for each wheel were added.
BodyCollider
is a vehicle Object responsible for ensuring collision with other objects.
+Additionally it can be used to detect these collisions.
+The MeshCollider
uses a Mesh of an Object to build its Collider
.
+The Mesh for the BodyCollider
was also loaded from the *.fbx
file similarly to the visual elements.
WheelsColliders
are an essential element from the point of view of driving vehicles on the road.
+They are the only ones that have contact with the roads and it is important that they are properly configured.
+Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.
To prevent inspector entry for WheelCollider
the WheelColliderConfig
has been developed.
+It ensures that friction is set to 0 and only wheel suspension and collisions are enabled.
Wheel Collider Config
+For a better understanding of the meaning of WheelCollider
we encourage you to read this manual.
LOD
provides dependence of the level of detail of the object depending on the ratio of the GameObject’s screen space height to the total screen height.
+Vehicle models have only one LOD0
group, therefore there is no reduction in model complexity when it does not occupy a large part of the screen.
+It is only culled when it occupies less than 2% of the height.
Rigidbody
ensures that the object is controlled by the physics engine.
+The Mass
of the vehicle should approximate its actual weight.
+In order for the vehicle to physically interact with other objects - react to collisions, Is Kinematic
must be turned off.
+The Use Gravity
should be turned on - to ensure the correct behavior of the body during movement.
+In addition, Interpolate
should be turned on to ensure the physics engine's effects are smoothed out.
The script takes the Rigidbody
and provides inputs that allow the NPCVehicle
to move.
+Script inputs give the ability to set the position and orientation of the vehicle, taking into account the effects of suspension and gravity.
+In addition, the script uses the CoM
link reference to assign the center of mass of the vehicle to the Rigidbody
.
Script inputs are used by RandomTrafficSimulator
, which controls the vehicles on the scene - it is described here.
Category | +Type | +Description | +
---|---|---|
SetPosition | +Vector3 | +Move the NPCVehicle so that its x, z coordinates are same as the specified coordinates. Pitch and roll are determined by physical operations that take effects of suspension and gravity into account. |
+
SetRotation | +Vector3 | +Rotate the NPCVehicle so that its yaw becomes equal to the specified one. Vertical movement is determined by physical operations that take effects of suspension and gravity into account. |
+
Visual Object Root
is a reference to the parent aggregating visuals, it can be used to disable the appearance of visual elements of the NPCVehicle
in the scene.
Whereas Bounds
Represents an axis aligned bounding box of the NPCVehicle
.
+It is used primarily to detect collisions between vehicles in the event of spawning, yielding and others.
+Moreover, vehicle bounds are displayed by Gizmos.
The settings of the remaining elements, i.e. the Axle
and the Lights
, are described here and here.
No Gizmo visualization
+If you don't see Gizmo's visual elements, remember to turn them on.
+ +This part of the settings is responsible for the proper connection of visual elements with the collider for each wheel - described earlier.
+The objects configured in this section are used to control the vehicle - its wheel speed and steering angle, which are calculated based on the input values.
+Correct configuration is very important from the point of view of the NPCVehicle
movement on the road.
This part of the settings is related to the configuration of materials emission - used when a specific lighting is activated.
+There are 3 types of lights: Brake
, Left Turn Signal
and Right Turn Signal
.
+Each of the lights has its visual equivalent in the form of a Mesh.
+In the case of NPCVehicle
all of the lights are included in the Body
object Mesh, which has many materials - including those related to lights.
For each type of light, the appropriate Material Index
(equivalent of element index in mesh) and Lighting Color
are assigned - yellow for Turn Signals
, red for Break
.
Lighting Intensity
values are also configured - the greater the value, the more light will be emitted.
+This value is related to Lighting Exposure Weight
parameter that is an exposure weight - the lower the value, the more light is emitted.
The brake light is switched on depending on the speed of the NPCVehicle
, while RandomTrafficSimulator
is responsible for switching the turn signals on and off.
This document describes the steps to properly configure RandomTrafficSimulator
in your environment.
The 3D map model should be added to the scene. Please make sure that the Environment
component with appropriate mgrsOffsetPosition
is attached to the root GameObject.
+
Please attach TrafficLight
component to all traffic light GameObjects placed on scene.
+
The lanelet load process can be performed by opening AWSIM -> Random Traffic -> Load Lanelet
at the top toolbar of Unity Editor.
+
You should be prompted with a similar window to the one presented below. Please adjust the parameters for the loading process if needed.
+ +Waypoint settings affect the density and accuracy of the generated waypoints. The parameters are described below:
+To generate the Lanelet2 map representation in your simulation, please click the Load
button. Environment components should be generated and placed as child objects of the Environment
GameObject. You can check their visual representation by clicking consecutive elements in the scene hierarchy.
To annotate intersection please, add an empty GameObject named TrafficIntersections
at the same level as the TrafficLanes
GameObject.
For each intersection repeat the following steps:
+TrafficIntersection
as a child object of the TrafficIntersections
object.TrafficIntersection
component to it.BoxCollider
as a component of GameObject. It's size and position should cover the whole intersection. This is used for detecting vehicles in the intersection.TrafficLightGroups
. Each group is controlled to have different signals, so facing traffic lights should be added to the same group. These groupings are used in traffic signal control.For the vehicles to operate properly it is needed to annotate the right of way of TrafficLane
manually on intersections without traffic lights.
To set the right of way, please:
+Set RightOfWays
button to give the lane priority over other lanes.
+For each right turn lane that yields to the opposite straight or left turn lane, a stop line needs to be defined near the center of the intersection.
+
+If there is no visible stop line, a StopLine
component should be added to the scene, near the center of the intersection and associated with TrafficLane
.
To make the yielding rules work properly, it is necessary to catagorize the TrafficLanes
.
+The ones that belong to an intersection have the IntersectionLane
variable set to true.
To automate the assignment of the corresponding IntersectionLane
to each TrafficLane
, the script AssignIntersectionTrafficLanes
can be used.
+
Environment
object).TrafficLanesObjectsParent
GameObject, which contains all TrafficLanes
objects.Check the log to see if all operations were completed: +
+As a result, the names of TrafficLane
objects should have prefixes with sequential numbers and TrafficLane
at intersections should be marked. TrafficLanes
with IntersectionLane
set to True are displayed by Gizmos in green color, if IntersectionLane
is False their color is white.
+
+
Once all the components are ready, the simulation can be run. +Check carefully if the vehicles are moving around the map correctly. +For each intersection, review the settings of the relevant components if vehicles are unable to proceed.
+ + + + + + + + + + + + + +The RandomTrafficSimulator
simulates city traffic with respect to all traffic rules. The system allows for random selection of car models and the paths they follow. It also allows adding static vehicles in the simulation.
The random traffic system consists of the following components:
+RandomTrafficSimulator
: manages lifecycle of NPCs and simulates NPC behaviours.TrafficLane
, TrafficIntersection
and StopLine
: represent traffic entitiesNPCVehicle
: vehicle models (NPCs) controlled by RandomTrafficSimulator
The following section describes Unity Editor components settings.
+Parameter | +Description | +
---|---|
General Settings | ++ |
Seed | +Seed value for random generator | +
Ego Vehicle | +Transform of ego vehicle | +
Vehicle Layer Mask | +LayerMask that masks only vehicle(NPC and ego) colliders | +
Ground Layer Mask | +LayerMask that masks only ground colliders of the map | +
NPC Vehicle Settings | ++ |
Max Vehicle Count | +Maximum number of NPC vehicles to be spawned in simulation | +
NPC Prefabs | +Prefabs representing controlled vehicles. They must have NPCVehicle component attached. |
+
Spawnable Lanes | +TrafficLane components where NPC vehicles can be spawned during traffic simulation |
+
Vehicle Config | +Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking |
+
Debug | ++ |
Show Gizmos | +Enable the checkbox to show editor gizmos that visualize behaviours of NPCs | +
Gizmos are useful for checking current behavior of NPCs and its causes. +Gizmos have a high computational load so please disable them if the simulation is laggy. +
+ + + + + + + + + + + + + +The RandomTrafficSimulator
assumes that there are 10 phases of yielding priority:
RandomTrafficYielding scene
+If you would like to see how RandomTrafficSimulator
with yielding rules works or run some tests, we encourage you to familiarize yourself with the RandomTrafficYielding
scene described in this section.
NONE
- state in which it is only checked if a vehicle is approaching the intersection. If yes, a transition to state ENTERING_INTERSECTION
is made.
ENTERING_INTERSECTION
- state in which it is checked if any of the situations LANES_RULES_ENTERING_INTERSECTION
, LEFT_HAND_RULE_ENTERING_INTERSECTION
, INTERSECTION_BLOCKED
occur, if yes the state of the vehicle is changed to one matching the situation - to determine if the vehicle must yield priority. If none of these situations occur only the entry into the intersection will result in a transition to AT_INTERSECTION
.
AT_INTERSECTION
- state in which it is checked if any of the situations LANES_RULES_AT_INTERSECTION
, LEFT_HAND_RULE_AT_INTERSECTION
, FORCING_PRIORITY
occur, if yes the state of the vehicle is changed to one matching the situation - to determine if the vehicle must yield priority. If none of these situations occur only leaving the intersection will result in a transition to NONE
.
INTERSECTION_BLOCKED
- when vehicle A is approaching the intersection, it yields priority to vehicle B, which should yield priority, but is forcing it - this refers to a situation in which vehicle B has entered the intersection and has already passed its stop point vehicle B isn’t going to stop but has to leave the intersection. Until now, vehicle A has continued to pass through the intersection without taking vehicle B into account, now it is checking if any vehicle is forcing priority (vehicle A has INTERSECTION_BLOCKED
state).
+
(vehicle A is red car with blue sphere, B is the white car to which it points)
+
LEFT_HAND_RULE_ENTERING_INTERSECTION
- vehicle A, before entering the intersection where the traffic lights are off, yields priority to vehicles (ex. B) that are approaching to the intersection and are on the left side of vehicle A.
+Until now, situations in which the lights are off were not handled. If a vehicle didn't have a red light and was going straight - it just entered the intersection. Now vehicle A checks if the vehicles on the left (ex. B) have a red light, if not it yields them priority.
+
(vehicle A is truck car with gray sphere, B is the white car to which it points)
+
LEFT_HAND_RULE_AT_INTERSECTION
- when vehicle A is already at the intersection, yields priority to vehicles (ex. B) that are also at the intersection and are on its left side - in cases where no other yielding rules are resolved between them (i.e. there are no RightOfWayLanes
between them).
+
(vehicle A is red, B is white)
+
LANES_RULES_ENTERING_INTERSECTION
- when vehicle B intends to turn left and is approaching at the intersection where it needs to yield to vehicle A which is going straight ahead, then it goes to state LANES_RULES_ENTERING_INTERSECTION
. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those which are approaching it (at a distance of less than minimumDistanceToIntersection
to the intersection).
+
(vehicle B is truck with yellow sphere, A the white car to which it points)
+
LANES_RULES_AT_INTERSECTION
- when vehicle B intends to turn right and is already at the intersection where it needs to yield to vehicle A which is approaching the intersection, then it goes to state LANES_RULES_AT_INTERSECTION
. The introduced changes take into account that a vehicle approaching the intersection considers not only the vehicles at the intersection but also those which are approaching it (at a distance of less than minimumDistanceToIntersection
to the intersection).
+(vehicle B is car with red sphere, A the white car to which it points)
+
FORCING_PRIORITY
- state in which some vehicle B should yield priority to a vehicle A but doesn't - for some reason, most likely it could be some unusual situation in which all other rules have failed. Then vehicle A which is at intersection yields priority to a vehicle that is forcing priority. In such a situation, vehicle A transitions to state FORCING_PRIORITY
. It is very rare to achieve this state, but it does happen.
NONE
, ENTERING_INTERSECTION
or AT_INTERSECTION
.LEFT_HAND_RULE_ENTERING_INTERSECTION
.LEFT_HAND_RULE_AT_INTERSECTION
.LANES_RULES_ENTERING_INTERSECTION
. LANES_RULES_AT_INTERSECTION
.INTERSECTION_BLOCKED
- when the turning vehicle begins to yield, then the blue sphere disappears. However, if the turning vehicle continues to turn (does not yield because it has passed the stopping point), then the vehicle going straight stops before the intersection and allows the turning vehicle to leave the intersection.FORCING_PRIORITY
.This section
+This section is still under development!
+This is a section that describes in detail all components related to simulated traffic in the Environment
prefab.
The random traffic system consists of the following components:
+It is a top level interface meant to be used on the Unity scene.
+TrafficManager
runs all elements needed for a successful traffic simulation.
+This component manages all TrafficSimulators
so they don't work against each other.
+It gives you the possibility to configure the TrafficSimulators
.
TrafficSimulator
Technically it is not a component, it is crucial to understand what it is and what it does in order to correctly configure the TrafficManager
.
+TrafficSimulator
manages NPCVehicles
spawning.
+There can be many TrafficSimulators
on the scene.
+They are added and configured in the TrafficManager
component.
+Every TrafficSimulator
manages some part of the traffic it is responsible for - meaning it has spawned the NPCVehicles
and set their configuration.
RandomTrafficSimulator
- spawns and controls NPCVehicles
driving randomlyRouteTrafficSimulator
- spawns and controls NPCVehicles
driving on a defined routeTrafficSimulator inaccessibility
+It is not possible to get direct access to the TrafficSimulator
.
+It should be added and configured through the TrafficManager
component.
TrafficLane
, TrafficIntersection
and StopLine
These components represent traffic entities. +They are used to control and manage the traffic with respect to traffic rules and current road situation.
+The vehicle models (NPCs) spawned by one of the TrafficSimulators
.
+They are spawned according to the TrafficSimulator
configuration and either drive around the map randomly (when spawned by a RandomTrafficSimulator
) or follow the predefined path (when spawned by a RouteTrafficSimulator
).
+NPCVehicles
are managed by one central knowledge base.
The process of spawning a NPCVehicle
and its later behavior control is presented on the following sequence diagram.
Sequence Diagram Composition
+Please note that the diagram composition has been simplified to the level of GameObjects and chosen elements of the GameObjects for the purpose of improving readability.
+Lanelet2 is a library created for handling a map focused on automated driving.
+It also supports ROS and ROS2 natively.
+In AWSIM Lanelet2 is used for reading and handling a map of all roads.
+Specifically it does contain all TrafficLanes
and StopLines
.
+You may also see us referring to the actual map data file (*.osm
) as a Lanelet2.
Lanelet2 official page
+If you want to learn more we encourage to visit the official project page.
+Nomenclature
+Please note that
+NPCVehicles
randomly andare named RandomTrafficSimulator
.
+Keep this in mind when reading the following page - so you don't get confused.
RandomTrafficSimulator
simulates traffic with respect to all traffic rules. The system allows for random selection of car models and the paths they follow. It also allows adding static vehicles in the simulation.
The RandomTrafficSimulator
consists of several GameObjects.
RandomTrafficSimulator
- this is an Object consisting of a Traffic Manager (script).TrafficIntersections
- this is a parent Object for all TrafficIntersections
.TrafficLanes
- this is a parent Object for all TrafficLanes
.StopLines
- this is a parent Object for all StopLines
.RandomTrafficSimulator
only has one component: Traffic Manager (script) which is described below.
Traffic Manager (script) is responsible for all of top level management of the NPCVehicles
.
+It manages spawning of NPCVehicles
on TrafficLanes
.
TrafficManager
uses the concept of TrafficSimulators
.
+One TrafficSimulator
is responsible for managing its set of NPCVehicles
.
+Every TrafficSimulator
spawns its own NPCVehicles
independently.
+The vehicles spawned by one TrafficSimulator
do respect its configuration.
+TrafficSimulators
can be interpreted as NPCVehicle
spawners with different configurations each.
+Many different TrafficSimulators
can be added to the TrafficManager
.
If a random mode is selected (RandomTrafficSimulator
) then NPCVehicles
will spawn in random places (from the selected list) and drive in random directions.
+To be able to reproduce the behavior of the RandomTrafficSimulator
a Seed
can be specified - which is used for the pseudo-random numbers generation.
TrafficManager
script also configures all of the spawned NPCVehicles
, so that they all have common parameters
Acceleration
- the acceleration used by the vehicles at all times when accelerating.Deceleration
- the value of deceleration used in ordinary situations.Sudden Deceleration
- deceleration used when standard Deceleration
is not sufficient to avoid accident.Absolute Deceleration
- value of deceleration used when no other deceleration allows to avoid the accident.The Vehicle Layer Mask
and Ground Layer Mask
are used to make sure all vehicles can correctly interact with the ground to guarantee simulation accuracy.
Max Vehicle Count
specifies how many NPCVehicles
can be present on the scene at once.
+When the number of NPCVehicles
on the scene is equal to this value the RandomTrafficSimulator
stops spawning new vehicles until some existing vehicles drive away and disappear.
The EgoVehicle
field provides the information about Ego vehicle used for correct behavior ofNPCVehicles
when interacting with Ego.
Show Gizmos
checkbox specifies whether the Gizmos visualization should be displayed when running the simulation.
Show Yielding Phase
checkbox specifies whether yielding phases should be displayed by Gizmos - in the form of spheres above vehicles, details in the Markings section.
Show Obstacle Checking
checkbox specifies whether obstacle checking should be displayed by Gizmos - in the form of boxes in front of vehicles
Show Spawn Points
checkbox specifies whether spawn points should be displayed by Gizmos - in the form of flat cuboids on roads.
Gizmos performance
+Gizmos have a high computational load. +Enabling them may cause the simulation to lag.
+As mentioned earlier - TrafficManager
may contain multiple TrafficSimulators
.
+The two available variants of TrafficSimulator
are described below
TrafficSimulators
should be interpreted as spawning configurations for some group of NPCVehicles
on the scene.
When using RandomTrafficSimulator
the NPCVehicle
prefabs (NPC Prefabs) can be chosen as well as Spawnable Lanes.
+The later are the only TrafficLanes
on which the NPCVehicles
can spawn.
+Upon spawning one of the Spawnabe Lanes is chosen and - given the vehicle limits are not reached - one random NPCVehicle from the Npc prefabs list is spawned on that lane.
+After spawning, the NPCVehicle takes a random route until it drives out of the map - then it is destroyed.
The Maximum Spawns
field specifies how many Vehicles should be spawned before this TrafficSimulator
stops working.
+Set to 0
to disable this restriction.
When using Route traffic Simulator
the NPCVehicle
prefabs (NPC Prefabs) as well as Route can be chosen.
+The later is an ordered list of TrafficLanes
that all spawned vehicles will drive on.
+Given the vehicle limit is not reached - the RouteTrafficSimulator
will spawn one of the Npc Prefabs chosen randomly on the first Route element (Element 0
).
+After the first vehicle drives off the next one will spawn according to the configuration.
+It is important for all Route elements to be connected and to be arranged in order of appearance on the map.
+The NPCVehicle disappears after completing the Route.
The Maximum Spawns
field specifies how many Vehicles should be spawned before this TrafficSimulator
stops working.
+Set to 0
to disable this restriction.
Parameter | +Description | +
---|---|
General Settings | ++ |
Seed | +Seed value for random generator | +
Ego Vehicle | +Transform of ego vehicle | +
Vehicle Layer Mask | +LayerMask that masks only vehicle(NPC and ego) colliders | +
Ground Layer Mask | +LayerMask that masks only ground colliders of the map | +
Culling Distance | +Distance at which NPCs are culled relative to EgoVehicle | +
Culling Hz | +Culling operation cycle | +
NPCVehicle Settings | ++ |
Max Vehicle Count | +Maximum number of NPC vehicles to be spawned in simulation | +
NPC Prefabs | +Prefabs representing controlled vehicles. They must have NPCVehicle component attached. |
+
Spawnable Lanes | +TrafficLane components where NPC vehicles can be spawned during traffic simulation |
+
Vehicle Config | +Parameters for NPC vehicle controlSudden Deceleration is a deceleration related to emergency braking |
+
Debug | ++ |
Show Gizmos | +Enable the checkbox to show editor gizmos that visualize behaviours of NPCs | +
Traffic Light (script) is a component added to every TrafficLight
on the scene.
+It is responsible for configuring the TrafficLight
behavior - the bulbs and their colors.
The Renderer
filed points to the renderer that should be configured - in this case it is always a TrafficLight
renderer.
Bulbs Emission Config
is a list describing available colors for this Traffic Light.
+Every element of this list configures the following
Bulb Color
- the name of the configured color that will be used to reference this colorColor
- the actual color with which a bulb should light upIntensity
- the intensity of the colorExposure Weight
- how bright should the color be when lighting upThe Bulb Material Config
is a list of available bulbs in a given Traffic Light.
+Every element describes a different bulb.
+Every bulb has the following aspects configured
Bulb Type
- the name that will be usd to reference the configured bulbMaterial Index
- The index of a material of the configured bulb.
+ This is an index of a sub-mesh of the configured bulb in the Traffic Light mesh.
+ The material indices are described in detail here and here.TrafficIntersection
is a representation of a road intersection.
+It consists of several components.
+TrafficIntersection
is used in the Scene
for managing TrafficLights
.
+All Traffic Lights present on one Traffic Intersection
must be synchronized - this is why the logic of TrafficLight
operation is included in the TrafficIntersection
.
Every TrafficIntersection
has its own GameObject and is added as a child of the aggregate TrafficIntersections
Object.
+TrafficIntersections
are elements of an Environment
, so they should be placed as children of an appropriate Environment
Object.
TrafficIntersection
has the following components:
Every TrafficIntersection
contains a Box Collider element.
+It needs to accurately cover the whole area of the TrafficIntersection
.
+Box Collider - together with the Traffic Intersection (script) - is used for detecting vehicles entering the TrafficIntersection
.
Traffic Intersection (script) is used for controlling all TrafficLights
on a given intersection.
+The Collider Mask
field is a mask on which all Vehicle Colliders are present.
+It - together with Box Collider - is used for keeping track of how many Vehicles are currently present on the Traffic Intersection.
+The Traffic Light Groups
and Lighting Sequences
are described below.
Traffic Light Group
is a collection of all Traffic Lights
that are in the same state at all times.
+This includes all redundant Traffic Lights
shining in one direction as well as the ones in the opposite direction.
+In other words - as long as two Traffic Lights
indicate exactly the same thing they should be added to the same Traffic Light Group
.
+This grouping simplifies the creation of Lighting Sequences
.
Lighting Sequences
is the field in which the whole intersection Traffic Lights
logic is defined.
+It consists of many different Elements.
+Each Element is a collection of Orders that should take an effect for the period of time specified in the Interval Sec
field.
+Lighting Sequences
Elements are executed sequentially, in order of definition and looped - after the last element sequence goes back to the first element.
The Group Lighting Orders
field defines which Traffic Light Groups
should change their state and how.
+For every Group Lighting Orders
Element the Traffic Lights Group
is specified with the exact description of the goal state for all Traffic Lights in that group - which bulb should light up and with what color.
One Lighting Sequences
Element has many Group Lighting Orders
, which means that for one period of time many different orders can be given.
+E.g. when Traffic Lights
in one direction change color to green - Traffic Lights
in the parallel direction change color to red.
Traffic Light state persistance
+If in the given Lighting Sequences
Element no order is given to some Traffic Light Group - this Group will keep its current state.
+When the next Lighting Sequences
Element activates - the given Traffic Light Group
will remain in an unchanged state.
Description | +Editor | +
+ Traffic Lights in Pedestrian Group 1 change color to flashing green. + Other Groups keep their current state. + This state lasts for 5 seconds. + |
+ + |
+ Traffic Lights in Pedestrian Group 1 change color to solid red. + Other Groups keep their current state. + This state lasts for 1 second. + |
+ + |
+ Traffic Lights in Vehicle Group 1 change color to solid yellow. + Other Groups keep their current state. + This state lasts for 5 seconds. + |
+ + |
+ Traffic Lights in Vehicle Group 1 change color to solid red. + Other Groups keep their current state. + This state lasts for 3 seconds. + |
+ + |
+ Traffic Lights in Vehicle Group 2 change color to solid green. + Traffic Lights in Pedestrian Group 2 change color to solid green. + Other Groups keep their current state. + This state lasts for 15 seconds. + |
+ + |
+ Traffic Lights in Pedestrian Group 2 change color to flashing green. + Other Groups keep their current state. + This state lasts for 5 seconds. + |
+ + |
+ Traffic Lights in Pedestrian Group 2 change color to solid red. + Other Groups keep their current state. + This state lasts for 1 second. + |
+ + |
+ Traffic Lights in Vehicle Group 2 change color to solid yellow. + Other Groups keep their current state. + This state lasts for 5 seconds. + |
+ + |
+ Traffic Lights in Vehicle Group 2 change color to solid red. + Other Groups keep their current state. + This state lasts for 3 second. + Sequence loops back to the first element of the list. + |
+ + |
TrafficLane
is a representation of a short road segment.
+It consists of several waypoints that are connected by straight lines.
+TrafficLanes
are used as a base for a RandomTrafficSimulator.
+They allow NPCVehicles
to drive on the specific lanes on the road and perform different maneuvers with respect to the traffic rules.
+TrafficLanes create a network of drivable roads when connected.
Every TrafficLane
has its own GameObject and is added as a child of the aggregate TrafficLanes
Object.
+TrafficLanes
are an element of an Environment
, so they should be placed as children of an appropriate Environment
Object.
TrafficLanes
can be imported from the lanelet2 *.osm
file.
TrafficLane
consists of an Object containing Traffic Lane (script).
TrafficLane
has a transformation property - as every Object in Unity - however it is not used in any way.
+All details are configured in the Traffic Lane (script), the information in Object transformation is ignored.
Traffic Lane (script) defines the TrafficLane
structure.
+The Waypoints
field is an ordered list of points that - when connected with straight lines - create a TrafficLane
.
Traffic Lane (script) coordinate system
+Waypoints
are defined in the Environment
coordinate system, the transformation of GameObject is ignored.
Turn Direction
field contains information on what is the direction of this TrafficLane
- whether it is a right or left turn or straight road.
Traffic lanes are connected using Next Lanes
and Prev Lanes
fields.
+This way individual TrafficLanes
can create a connected road network.
+One Traffic Lane can have many Next Lanes
and Prev Lanes
.
+This represents the situation of multiple lanes connecting to one or one lane splitting into many - e.g. the possibility to turn and to drive straight.
Right Of Way Lanes are described below.
+Every TrafficLane
has to have a Stop Line
field configured when the Stop Line is present on the end of the TrafficLane
.
+Additionally the Speed Limit
field contains the highest allowed speed on given TrafficLane
.
Right Of Way Lanes
is a collection of TrafficLanes
.
+Vehicle moving on the given TrafficLane
has to give way to all vehicles moving on every Right Of Way Lane
.
+It is determined based on basic traffic rules.
+Setting Right Of Way Lanes
allows RandomTrafficSimulator
to manage all NPCVehicles
so they follow traffic rules and drive safely.
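As a rough illustration of the data a single lane carries and how lanes link into a road network, a hypothetical Python sketch is shown below; the field names mirror the Traffic Lane (script) fields described above, but the types and example values are invented for this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Waypoint = Tuple[float, float, float]  # point in the Environment coordinate system

@dataclass
class TrafficLane:
    waypoints: List[Waypoint]                 # ordered points connected by straight lines
    turn_direction: str = "STRAIGHT"          # "STRAIGHT", "LEFT" or "RIGHT"
    next_lanes: List["TrafficLane"] = field(default_factory=list)
    prev_lanes: List["TrafficLane"] = field(default_factory=list)
    right_of_way_lanes: List["TrafficLane"] = field(default_factory=list)
    stop_line: Optional[str] = None           # name of the StopLine at the end, if any
    speed_limit: float = 13.9                 # highest allowed speed (m/s), example value

# Two lanes joined end-to-end form a tiny network: a vehicle finishing `straight`
# may continue onto `right_turn`, which must yield to the lanes in right_of_way_lanes.
straight = TrafficLane(waypoints=[(0, 0, 0), (0, 0, 20)])
right_turn = TrafficLane(waypoints=[(0, 0, 20), (5, 0, 25)], turn_direction="RIGHT",
                         prev_lanes=[straight], right_of_way_lanes=[straight])
straight.next_lanes.append(right_turn)
```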
In the Unity editor - when a TrafficLane
is selected - aside from the selected TrafficLane
highlighted in blue, all Right Of Way Lanes
are highlighted in yellow.
Right Of Way Lanes Sample - details
+The selected TrafficLane
(blue) is a right turn on an intersection.
+This means, that before turning right the vehicle must give way to all vehicles driving from ahead - the ones driving straight as well as the ones turning left.
+This can be observed as TrafficLanes
highlighted in yellow.
StopLine
is a representation of a place on the road where vehicles giving way to other vehicles should stop and wait.
+They allow RandomTrafficSimulator
to manage NPCVehicles
in a safe and correct way - according to the traffic rules.
+All possible locations where a vehicle can stop in order to give way to other vehicles - that are enforced by an infrastructure, this does not include regular lane changing - need to be marked with StopLines
.
Every StopLine
has its own GameObject and is added as a child of the aggregate StopLines
Object.
+Stop Lines are an element of an Environment
, so they should be placed as children of an appropriate Environment
Object.
StopLines
can be imported from the lanelet2 *.osm
file.
StopLine
consists of an Object containing Stop Line (script).
Stop Line has a transformation property - as every Object in Unity - however it is not used in any way.
+All details are configured in the Stop Line (script), the information in Object transformation is ignored.
+Stop Line (script) defines StopLine
configuration.
+The Points
field is an ordered list of points that - when connected - create a StopLine
.
+The list of points should always have two elements that create a straight StopLine
.
Stop Line (script) coordinate system
+Points
are defined in the Environment coordinate system, the transformation of GameObject is ignored.
The Has Stop Sign
field contains information whether the configured StopLine
has a corresponding StopSign
on the scene.
Every Stop Line needs to have a Traffic Light
field configured with the corresponding Traffic Light
.
+This information allows the RandomTrafficSimulator
to manage the NPCVehicles
in such a way that they respect the Traffic Lights and behave on the Traffic Intersections
correctly.
Gizmos are an in-simulation visualization showing current and future moves of the NPCVehicles
.
+They are useful for checking current behavior of NPCs and its causes.
+On the Scene they are visible as cuboid contours indicating which TrafficLanes will be taken by each vehicle in the near future.
Gizmos computing
+Gizmos have a high computational load. +Please disable them if the simulation is laggy.
+Ego Vehicle Component
+In this tutorial we will create a new EgoVehicle
.
+To learn more about what an EgoVehicle
is in AWSIM please visit Ego Vehicle description page.
Add a child Object to the Simulation called EgoVehicle
.
While having a newly created EgoVehicle
Object selected, in the Inspector view click on the 'Add Component' button, search for Rigidbody
and select it.
Configure Mass and Drag with the correct values for your Vehicle.
+ +Configure Interpolation and Collision Detection.
For a detailed explanation of how to add visual elements to your Vehicle check out this dedicated tutorial.
+To add a center of mass to your vehicle you have to add a CoM
child Object to the EgoVehicle
Object (the same as in steps before).
Then just set the position of the CoM
Object in the Inspector view to represent real-world center of mass of the Vehicle.
The best way is to obtain the Center of Mass information from your Vehicle's documentation.
+However, if this is not possible, you can try to estimate the Center of Mass of your vehicle.
+Best practice is to set the estimated Center of Mass as follows
+Note: This will vary very much depending on your Vehicle construction. +For the best possible result please follow the Vehicle specifications.
+Add a new Object called Reflection Probe
as a child to the EgoVehicle
Object.
Click on the 'Add Component' button, in the window that pops up search for Reflection Probe
and select it.
Note
+Please note that with Reflection Probe
an HD Additional Reflection Data
Script should be added automatically as well.
Configure the Reflection Probe
as you wish.
Example Configuration
+Below you can see an example configuration of the Reflection Probe
.
For a detailed explanation how to add colliders to your Vehicle check out this dedicated tutorial.
+You will most certainly want to add some sensors to your EgoVehicle
.
+First you need to create a parent Object for all those sensors called URDF
.
+To do this we will add a child Object URDF
to the EgoVehicle
Object.
This Object will be used as a base for all sensors we will add later.
+To be able to control your EgoVehicle
you need a Vehicle
Script.
Add the Vehicle
Script to the EgoVehicle
Object.
Configure the Vehicle
Script Axle Settings and Center Of Mass Transform.
Testing
+It is not possible to test this Script alone, but you can test the following
+ +If components listed above work correctly this means the Vehicle
Script works correctly too.
You can control your EgoVehicle
in the simulation manually with just one Script called Vehicle Keyboard Input
.
If you want to add it just click the 'Add Component' button on the EgoVehicle
Object and search for Vehicle Keyboard Input
Script and select it.
For a visual indication of a Vehicle status you will need a Vehicle Visual Effect
Script.
+To add and configure it follow the steps below.
Add a Vehicle Visual Effect
Script by clicking 'Add Component' button, searching for it and selecting it.
Configure the lights.
+Note
+In this step we will configure only Brake Lights
, but you should repeat this for every Light.
+The process is almost the same for all Lights - just change the mesh renderer and lighting settings according to your preference.
After configuring Vehicle Visual Effect
Script it is advised to test whether everything works as expected.
Make sure you have a Vehicle Keyboard Input
Script added and that it is enabled.
If your scene does not have any models yet please turn the gravity off in Rigidbody
configuration so that the Vehicle does not fall down into infinity.
Start the simulation.
+ +Test the Turn Signals.
+You can control the Turn Signals with a Vehicle Keyboard Input
Script.
+Activate the Turn Signals with one of the following keys
1
- Left Turn Signal2
- Right Turn Signal3
- Hazard Lights4
- Turn Off all SignalsTest the Lights.
+You can control the lights by "driving" the Vehicle using Vehicle Keyboard Input
Script.
+Although if you have an empty Environment like in this tutorial the Vehicle won't actually drive.
To test Brake Lights change the gear to Drive by pressing D
on the keyboard and activate braking by holding arrow down
.
To test the Reverse Light change the gear to Reverse by pressing R
on the keyboard.
+The Reverse Light should turn on right away.
Camera tip
+If you have not configured a camera or configured it in such a way that you can't see the Vehicle well you can still test most of the lights by changing views.
+ctrl + 1
- now you can move the camera freelyctrl + 2
Please note that this method won't work for testing Brake Lights, as for them to work you need to keep the arrow down
button pressed all the time.
For controlling your Vehicle with autonomous driving software (e.g. Autoware) you need a Vehicle Ros Input
Script.
Disable Vehicle Keyboard Input
Script
If you have added a Vehicle Keyboard Input
Script in your Vehicle please disable it when using the Vehicle Ros Input
Script.
Not doing so will lead to the vehicle receiving two different inputs which will cause many problems.
+ +Add it to the EgoVehicle
Object by clicking on the 'Add Component' button, searching for it and selecting it.
The Script is configured to work with Autoware by default, but you can change the topics and Quality of Service settings as you wish.
+Note
+The Vehicle
should be configured correctly, but if you have many Vehicles or something goes wrong, please select the right Vehicle in the Vehicle
field by clicking on the small arrow icon and choosing the right item from the list.
The best way to test the Vehicle Ros Input Script is to run Autoware.
+For a detailed explanation how to add sensors to your Vehicle check out this dedicated tutorial.
+First you will have to save the Vehicle you created as a prefab, to easily add it later to different Scenes.
+Assets/AWSIM/Prefabs/Vehicles
)After that, you can add the Vehicle you created to different Scenes by dragging it from Vehicles directory to the Hierarchy of different Scenes.
+ + + + + + + + + + + + + +Next you need to add Colliders to your Vehicle. +To do this follow the steps below.
+Add a child Object called Colliders
to the EgoVehicle
Object.
Shift parent Object Colliders
accordingly as in earlier steps where we shifted Models
.
Add a child Object Collider
to the Colliders
Object.
Add a Mesh Collider
component to the Collider
Object by clicking on the 'Add Component' button in the Inspector view and searching for it.
Click on the arrow in mesh selection field and from the pop-up window select your collider mesh.
+ Next click on the check-box called Convex
, by now your collider mesh should be visible in the editor.
Add a child Object Wheels
to the Colliders
Object.
Note
+In this tutorial we will add only one wheel collider, but you should repeat the step for all 4 wheels. +That is, follow the instructions that follow this message for every wheel your Vehicle has.
+Add a child Object FrontLeftWheel
to the Wheels
Object.
Add a Wheel Collider
component to the FrontLeftWheel
Object by clicking 'Add Component' and searching for it.
Add a Wheel
Script to the FrontLeftWheel
Object by clicking 'Add Component' and searching for it.
Drag FrontLeftWheel
Object from the WheelVisuals
to the Wheel Visual Transform
field.
Add a Wheel Collider Config
Script to the FrontLeftWheel
Object by clicking 'Add Component' and searching for it.
Configure the Wheel Collider Config
Script so that the Vehicle behaves as you wish.
Set the Transform of FrontLeftWheel
Object to match the visuals of your Vehicle.
Successful configuration
+If you have done everything right your Colliders
Object should look similar to the one following.
There are a number of different sensors available in AWSIM.
+Below we present a list of sensors with links to their individual pages.
Best practice is to replicate the ROS sensor transformation tree in Unity using Objects.
Please note that Unity uses a less common left-handed coordinate system.
+Please keep this in mind while defining transformations.
+More details about right-handed and left-handed systems can be found here.
+To simplify the conversion process always remember that any point in ROS coordinate system (x, y, z)
has an equivalent in the Unity coordinate system being (-y, z, x)
.
The same can be done with the rotation.
+ROS orientation described with roll, pitch and yaw (r, p, y)
can be translated to Unity Rotation as follows (p, -y, -r)
.
Unit conversion
+Please remember to convert the rotation units.
+ROS uses radians and Unity uses degrees.
+The conversion from radians (rad
) to degrees (deg
) is as follows.
deg = rad * 180 / PI
+
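A small Python sketch of these conversion rules is shown below, reusing the base_link to sensor_kit_base_link transform from the example later in this tutorial; the helper function names are ours, not part of AWSIM.

```python
import math

def ros_to_unity_position(x, y, z):
    """ROS (x, y, z) -> Unity (-y, z, x)."""
    return (-y, z, x)

def ros_to_unity_rotation(roll, pitch, yaw):
    """ROS RPY in radians -> Unity rotation in degrees: (p, -y, -r)."""
    return tuple(math.degrees(v) for v in (pitch, -yaw, -roll))

# Example: base_link -> sensor_kit_base_link transform from a sensor calibration file.
print(ros_to_unity_position(0.9, 0.0, 2.0))          # -> (-0.0, 2.0, 0.9)
print(ros_to_unity_rotation(-0.001, 0.015, -0.0364)) # -> (~0.859, ~2.086, ~0.057)
```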
URDF
+Before following this tutorial please make sure you have an URDF
Object like the one shown in this section.
First we will have to add a base_link
which is the root of all transformations.
Add a base_link
Object as a child to the URDF
Object.
base_link
transformation
Please remember to set an appropriate transformation of the base_link
Object so that it is identical as the base_link
used in ROS in reference to the Vehicle.
This is very important, as a mistake here will result in all subsequent sensors being misplaced.
+Inside the base_link
we will represent all transformations contained in the ROS transformations tree.
You will have to check your Vehicle specific configuration. +You can do this in many ways, for example:
+Check the ROS specific .yaml
parameter files containing information about each transformation.
Example
+Here we see an example .yaml
file containing transformation from the base_link
to the sensor_kit_base_link
:
base_link:
+ sensor_kit_base_link:
+ x: 0.9
+ y: 0.0
+ z: 2.0
+ roll: -0.001
+ pitch: 0.015
+ yaw: -0.0364
+
Check the values with ROS command line tools (for more information on these please visit official ROS 2 documentation).
+You can run a command like the following to check a transformation between two frames.
+ros2 run tf2_ros tf2_echo [source_frame] [target_frame]
+
Example
+Here we see an example command with the output containing transformation from the base_link
to the sensor_kit_base_link
(note that the line after $
sign is the executed command):
$ ros2 run tf2_ros tf2_echo base_link sensor_kit_base_link
+[INFO] [1686654712.339110702] [tf2_echo]: Waiting for transform base_link -> sensor_kit_base_link: Invalid frame ID "base_link" passed to canTransform argument target_frame - frame does not exist
+At time 0.0
+- Translation: [0.900, 0.000, 2.000]
+- Rotation: in Quaternion [-0.000, 0.008, -0.018, 1.000]
+- Rotation: in RPY (radian) [-0.001, 0.015, -0.036]
+- Rotation: in RPY (degree) [-0.057, 0.859, -2.086]
+- Matrix:
+0.999 0.036 0.015 0.900
+-0.036 0.999 0.000 0.000
+-0.015 -0.001 1.000 2.000
+0.000 0.000 0.000 1.000
+
Note
+In this step we will only add one sensor link. +You will have to repeat this step for every sensor you want to add to your Vehicle.
+Let's say we want to add a LiDAR that is facing right.
+We have the following configuration files.
+base_link:
+ sensor_kit_base_link:
+ x: 0.9
+ y: 0.0
+ z: 2.0
+ roll: -0.001
+ pitch: 0.015
+ yaw: -0.0364
+
sensor_kit_base_link:
+ velodyne_right_base_link:
+ x: 0.0
+ y: -0.56362
+ z: -0.30555
+ roll: -0.01
+ pitch: 0.71
+ yaw: -1.580
+
We can clearly see the structure of transformation tree. +The transformations are as follows.
+base_link -> sensor_kit_base_link -> velodyne_right_base_link
+
We need to start adding these transformation from the root of the tree.
+We will start with the sensor_kit_base_link
, as the base_link
already exists in our tree.
The first step is to add an Object named the same as the transformation frame (sensor_kit_base_link
).
Next we have to convert the transformation from ROS standard to the Unity standard. + This is done with the formulas show in this section.
+The result of conversion of the coordinate systems and units is shown below.
+Position:
+(0.9, 0.0, 2.0) -> (0.0, 2.0, 0.9)
+Rotation:
+(-0.001, 0.015, -0.0364) -> (0.8594, 2.0856, 0.0573)
+
The resulting sensor_kit_base_link
Object transformation is shown below.
Now the same has to be done with the velodyne_right_base_link
.
Add transformation Object (velodyne_right_base_link
).
Info
+Remember to correctly set the child Object, in this case we use sensor_kit_base_link
as a child, because this is what the .yaml
file says.
Convert the transformation into Unity coordinate system.
+The correct transformation is shown below.
+Position:
+(0, -0.56362, -0.30555) -> (0.56362, -0.30555, 0)
+Rotation:
+(-0.01, 0.71, -1.580) -> (40.68, 90.5273, 0.573)
+
The final velodyne_right_base_link
Object transformation is shown below.
Success
+If you have done everything right, after adding all of the sensor links your URDF
Object tree should look something like the one following.
After adding links for all sensors you need to add the actual sensors into your Vehicle.
+Sensor position
+Please keep in mind that we have created the sensor links in order to have accurate transformations for all of the sensors.
+This implies that the Sensor Object itself cannot have any transformation of its own.
+If one of your Sensors, after adding it to the scene, is mispositioned, check whether the transformation is set to identity (position and rotation are zeros).
+When adding sensors almost all of them will have some common fields.
+Frame Id
+Frame Id is the name of frame of reference against which the received data will be interpreted by the autonomous driving software stack.
+Remember that the Frame Id must exist internally in the ROS transformations tree.
+Topics
+Topics are names of broadcasting channels. +You can set the names of topics as you like and the data from sensors will be broadcasted on these topics.
+Remember to configure your receiving end to listen on the same topics as broadcasting ones.
+Quality Of Service settings (QOS settings)
+Quality of service settings allow you to configure the behavior of the source node while broadcasting the sensor data. +You can adjust these settings to suit your needs.
+To add a Vehicle Status Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the URDF
Object.
Assets/AWSIM/Prefabs/Sensors
+
Next in the Inspector View select your Vehicle.
+ +In this example you can see what a valid message from the Vehicle Status Sensor can look like.
+$ ros2 topic echo --once /vehicle/status/velocity_status
+header:
+ stamp:
+ sec: 17
+ nanosec: 709999604
+ frame_id: base_link
+longitudinal_velocity: 0.004912620410323143
+lateral_velocity: -0.005416259169578552
+heading_rate: 0.006338323466479778
+---
+
Scene Manager
+Before continuing with this tutorial please check out a dedicated one focused on Scene Manager.
+To add a LiDAR to your Vehicle you will have to drag a model of the LiDAR to the link tree you have created in the earlier step.
+You can use the predefined RGL LiDAR models or any other LiDAR models.
+In this tutorial we will be using RGL VelodyneVLP16
LiDAR model.
Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.
+Assets/AWSIM/Prefabs/Sensors/RobotecGPULidars
+
LiDAR noise configuration
+The LiDAR Sensor in simulation returns perfect result data.
+This is not an accurate representation of the real world.
+The LiDAR Sensor addresses this issue by applying simulated noise to the output data.
+You can configure the noise parameters in the Inspector View under Configuration -> Noise Params
fields.
You can optionally remove the noise simulation by unchecking the Apply Distance/Angular Gaussian Noise
.
You can also change the ranges of the LiDAR detection.
+ +There is also a possibility to configure the visualization of the Point Cloud generated by the LiDAR. +E.g. change the hit-point shape and size.
+ +In this example you can see what a valid message from the LiDAR Sensor can look like.
+$ ros2 topic echo --once /lidar/pointcloud
+header:
+ stamp:
+ sec: 20
+ nanosec: 589999539
+ frame_id: world
+height: 1
+width: 14603
+fields:
+- name: x
+ offset: 0
+ datatype: 7
+ count: 1
+- name: y
+ offset: 4
+ datatype: 7
+ count: 1
+- name: z
+ offset: 8
+ datatype: 7
+ count: 1
+- name: intensity
+ offset: 16
+ datatype: 7
+ count: 1
+- name: ring
+ offset: 20
+ datatype: 4
+ count: 1
+is_bigendian: false
+point_step: 24
+row_step: 350472
+data:
+- 156
+- 218
+- 183
+- 62
+- 0
+- 189
+- 167
+- 187
+- 32
+- 58
+- 173
+- 189
+- 0
+- 0
+- 0
+- 0
+- 0
+- 0
+- 200
+- 66
+- 1
+- 0
+- 0
+- 0
+- 198
+- 129
+- 28
+- 63
+- 0
+- 6
+- 230
+- 58
+- 128
+- 184
+- 93
+- 61
+- 0
+- 0
+- 0
+- 0
+- 0
+- 0
+- 200
+- 66
+- 9
+- 0
+- 0
+- 0
+- 92
+- 2
+- 194
+- 62
+- 0
+- 141
+- 42
+- 187
+- 128
+- 89
+- 139
+- 189
+- 0
+- 0
+- 0
+- 0
+- 0
+- 0
+- 200
+- 66
+- 2
+- 0
+- 0
+- 0
+- 187
+- 168
+- 42
+- 63
+- 0
+- 159
+- 175
+- 59
+- 160
+- 243
+- 185
+- 61
+- 0
+- 0
+- 0
+- 0
+- 0
+- 0
+- 200
+- 66
+- 10
+- 0
+- 0
+- 0
+- 119
+- 186
+- 204
+- 62
+- 0
+- 254
+- 23
+- 59
+- 128
+- 143
+- 41
+- 189
+- 0
+- 0
+- 0
+- 0
+- 0
+- 0
+- 200
+- 66
+- 3
+- 0
+- 0
+- 0
+- 65
+- 241
+- 59
+- 63
+- 128
+- 0
+- 252
+- 187
+- '...'
+is_dense: true
+---
+
To add an IMU to your Vehicle you will have to drag a model of the IMU to the link tree you have created in the earlier step.
+You can use the provided or your own IMU Sensor. +In this tutorial we will be using IMU Sensor provided with AWSIM.
+Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.
+Assets/AWSIM/Prefabs/Sensors
+
In this example you can see what a valid message from the IMU Sensor can look like.
+$ ros2 topic echo --once /sensing/imu/tamagawa/imu_raw
+header:
+ stamp:
+ sec: 20
+ nanosec: 589999539
+ frame_id: tamagawa/imu_link
+orientation:
+ x: 0.0
+ y: 0.0
+ z: 0.0
+ w: 1.0
+orientation_covariance:
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+angular_velocity:
+ x: 0.014335081912577152
+ y: 0.008947336114943027
+ z: -0.008393825963139534
+angular_velocity_covariance:
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+linear_acceleration:
+ x: 0.006333829835057259
+ y: -0.005533283110707998
+ z: -0.0018753920448943973
+linear_acceleration_covariance:
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+- 0.0
+---
+
To add a GNSS Sensor to your Vehicle you will have to drag a model of the GNSS to the link tree you have created in the earlier step.
+You can use the provided or your own GNSS Sensor. +In this tutorial we will be using GNSS Sensor provided with AWSIM.
+Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.
+Assets/AWSIM/Prefabs/Sensors
+
In this example you can see what a valid message from the GNSS Sensor can look like.
+$ ros2 topic echo --once /sensing/gnss/pose
+header:
+ stamp:
+ sec: 8
+ nanosec: 989999799
+ frame_id: gnss_link
+pose:
+ position:
+ x: 81656.765625
+ y: 50137.5859375
+ z: 44.60169219970703
+ orientation:
+ x: 0.0
+ y: 0.0
+ z: 0.0
+ w: 0.0
+---
+
To add a Camera Sensor to your Vehicle you will have to drag a model of the Camera to the link tree you have created in the earlier step.
+Simply locate the following directory in the Project view and drag the prefab into the designated sensor link.
+Assets/AWSIM/Prefabs/Sensors
+
You can configure some aspects of the Camera to your liking.
+E.g. you can set the field of view (fov) of the camera by changing the Field of View
field or manipulating the physical camera parameters like Focal Length
.
The important thing is to configure the Camera Sensor
Script correctly.
Always check whether the correct Camera Object
is selected and make sure that Distortion Shader
and Ros Image Shader
are selected.
Example Camera Sensor Script configuration
+ +You can add the live Camera preview onto the Scene.
+To do this select the Show
checkbox.
+Additionally you can change how the preview is displayed.
+Change the Scale
value to control the size of the preview (how many times smaller the preview will be compared to the actual screen size).
Move the preview on the screen by changing the X Axis
and Y Axis
values on the Image On Gui
section.
Camera preview example
+ +Testing camera with traffic light recognition
+You can test the Camera Sensor traffic light recognition by positioning the vehicle on the Unity Scene in such a way that on the Camera preview you can see the traffic lights.
+Remember to lock the Inspector view on Camera Object before dragging the whole Vehicle - this way you can see the preview while moving the vehicle.
+ + +Run the Scene the same as on this page.
+Launch only the Autoware like on this page.
+By default you should see the preview of traffic light recognition visualization in the bottom left corner of Autoware.
+Traffic lights recognition example in Autoware
+ +In this example you can see what a valid message from the Camera Sensor can look like.
+$ ros2 topic echo --once /sensing/camera/traffic_light/image_raw
+header:
+ stamp:
+ sec: 14
+ nanosec: 619999673
+ frame_id: traffic_light_left_camera/camera_optical_link
+height: 1080
+width: 1920
+encoding: bgr8
+is_bigendian: 0
+step: 5760
+data:
+- 145
+- 126
+- 106
+- 145
+- 126
+- 106
+- 145
+- 126
+- 106
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 105
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 104
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 126
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 103
+- 145
+- 124
+- 101
+- 145
+- 124
+- 101
+- 145
+- 124
+- 101
+- 145
+- 124
+- 101
+- 145
+- 123
+- 101
+- 145
+- 123
+- 101
+- 145
+- 123
+- 101
+- 145
+- 123
+- '...'
+---
+
To add a Pose Sensor to your Vehicle simply locate the following directory in the Project view and drag a prefab of this Sensor into the base_link
Object.
Assets/AWSIM/Prefabs/Sensors
+
In this example you can see what a valid message from the Pose Sensor can look like.
+$ ros2 topic echo --once /awsim/ground_truth/vehicle/pose
+header:
+ stamp:
+ sec: 5
+ nanosec: 389999879
+ frame_id: base_link
+pose:
+ position:
+ x: 81655.7578125
+ y: 50137.3515625
+ z: 42.8094367980957
+ orientation:
+ x: -0.03631274029612541
+ y: 0.0392342209815979
+ z: 0.02319677732884884
+ w: 0.9983005523681641
+---
+
You can test whether the Sensor works correctly in several ways.
+Check whether the configuration is correct.
+In terminal source ROS with the following line (only if you haven't done so already).
+source /opt/ros/humble/setup.bash
+
Check the details about the topic that your Sensor is broadcasting to with the following command.
+ros2 topic info -v <topic_name>
+
Example
+In this example we can see that the message is broadcasted by AWSIM and nobody is listening. +We can also examine the Quality of Service settings.
+$ ros2 topic info -v /awsim/ground_truth/vehicle/pose
+Type: geometry_msgs/msg/PoseStamped
+
+Publisher count: 1
+
+Node name: AWSIM
+Node namespace: /
+Topic type: geometry_msgs/msg/PoseStamped
+Endpoint type: PUBLISHER
+GID: 01.10.13.11.98.7a.b1.2a.ee.a3.5a.11.00.00.07.03.00.00.00.00.00.00.00.00
+QoS profile:
+ Reliability: RELIABLE
+ History (Depth): KEEP_LAST (1)
+ Durability: VOLATILE
+ Lifespan: Infinite
+ Deadline: Infinite
+ Liveliness: AUTOMATIC
+ Liveliness lease duration: Infinite
+
+Subscription count: 0
+
Check whether correct information is broadcasted.
+In terminal source ROS with the following line (only if you haven't done so already).
+source /opt/ros/humble/setup.bash
+
View one transmitted message.
+ros2 topic echo --once <topic_name>
+
Example
+In this example we can see the Vehicles location at the moment of executing the command.
+NOTE: The position and orientation are relative to the frame in the header/frame_id
field (base_link
in this example).
$ ros2 topic echo --once /awsim/ground_truth/vehicle/pose
+header:
+ stamp:
+ sec: 46
+ nanosec: 959998950
+ frame_id: base_link
+pose:
+ position:
+ x: 81655.7265625
+ y: 50137.4296875
+ z: 42.53997802734375
+ orientation:
+ x: 0.0
+ y: -9.313260163068549e-10
+ z: -6.36646204504876e-12
+ w: 1.0
+---
+
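If you prefer to check the data programmatically instead of with the CLI, a minimal rclpy subscriber can be used; the sketch below assumes the ground-truth pose topic and the QoS profile shown in the examples above (RELIABLE, KEEP_LAST with depth 1, VOLATILE), and the node name is arbitrary.

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from geometry_msgs.msg import PoseStamped


class PoseEcho(Node):
    def __init__(self):
        super().__init__("awsim_pose_echo")
        # Match the QoS profile reported by `ros2 topic info -v` above.
        qos = QoSProfile(
            reliability=ReliabilityPolicy.RELIABLE,
            history=HistoryPolicy.KEEP_LAST,
            depth=1,
            durability=DurabilityPolicy.VOLATILE,
        )
        self.create_subscription(
            PoseStamped, "/awsim/ground_truth/vehicle/pose", self.on_pose, qos)

    def on_pose(self, msg: PoseStamped):
        p = msg.pose.position
        self.get_logger().info(
            f"{msg.header.frame_id}: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}")


def main():
    rclpy.init()
    rclpy.spin(PoseEcho())


if __name__ == "__main__":
    main()
```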
Your EgoVehicle
needs many individual visual parts.
+Below we will add all needed visual elements.
First in EgoVehicle
Object add a child Object called Models
.
Inside Models
Object we will add all visual models of our EgoVehicle
.
First you will need to add a Body of your Vehicle.
+It will contain many parts, so first lets create a Body
parent Object.
Next we will need to add Car Body
+Add a child Object BodyCar
to the Body
Object.
To the BodyCar
Object add a Mesh Filter.
Click on the 'Add Component' button, search for Mesh Filter
and select it.
+Next search for mesh of your vehicle and select it in the Mesh
field.
To the BodyCar
Object add a Mesh Renderer.
Click on the 'Add Component' button, search for Mesh Renderer
and select it
Specify Materials.
+You need to specify what materials will be used for rendering your EgoVehicle
model.
+Do this by adding elements to the Materials
list and selecting the materials you wish to use as shown below.
Add as many materials as your model has sub-meshes.
+Tip
+When you add too many materials, meaning there will be no sub-meshes to apply these materials to, you will see this warning. +In such a case please remove materials until this warning disappears.
+ +In this step we will add the following parts
+Info
+It may seem like all of the elements above can be parts of the Body
mesh, but it is important for these parts to be separate, because we need to be able to make them interactive (e.g. flashing turn signals).
Another good reason for having different meshes for Vehicle parts is that you may have a Vehicle model, but for the simulation you need to add e.g. a roof rack with sensors - which can be achieved by adding more meshes.
+Note
+We will illustrate this step only for the Brake Light, but you should repeat this step of the tutorial for each element of the list above.
+Add a child Object to the Body
Object.
Add a Mesh Filter and select the mesh (the same as in section before).
+ + +Add a Mesh Renderer and select the materials (the same as in section before).
+ + +In this step we will add individual visuals for every wheel. +This process is very similar to the one before.
+Add a child Object to the Models
Object called WheelVisuals
.
Note
+In this tutorial we will add only one wheel, but you should repeat the step for all 4 wheels. +That is, follow the instructions that follow this message for every wheel your Vehicle has.
+Add a child Object to the WheelVisuals
Object called FrontLeftWheel
.
Add a child Object to the FrontLeftWheel
Object called WheelFrontL
.
+ This Object will contain the actual wheel part.
Add a Mesh Filter and select the wheel mesh.
+ + +Add a Mesh Renderer and select the wheel materials.
+ + + +Repeat the steps before to add Breaks.
+The same way you have added the WheelFrontL
Object now add the WheelFrontLBreaks
.
+Naturally you will have to adjust the mesh and materials used as they will be different for the brakes than for the wheel.
Your final break configuration should look similar to the one following.
+ +Set the FrontLeftWheel
parent Object transformation to position the wheel in correct place.
Successful configuration
+If you have done everything right your WheelVisuals
Object should look similar to the one following.
The last step to correctly configure Vehicle models is to shift them so that the EgoVehicle
origin is in the center of the fixed axis.
This means you need to shift the whole Models
Object accordingly (change the position fields in transformation).
Tip
+Add a dummy
Object as a child to the EgoVehicle
Object (the same as in steps before) so it is located in the origin of the EgoVehicle
.
Now move Models
around relative to the dummy
- change position in the Inspector view.
+The dummy
will help you see when the fixed axis (in case of the Lexus from example it is the rear axis) is aligned with origin of EgoVehicle
.
In the end delete the dummy
Object as it is no longer needed.
By attaching a GroundSlipMutiplier.cs
script to a collider (trigger), you can change the slip of the vehicle within the range of that collider.
Assets\AWSIM\Scenes\Samples\VehicleSlipSample.unity
EgoVehicle
is a playable object that simulates a vehicle that can autonomously move around the scene.
+It has components (scripts) that make it possible to control it by keyboard or by Autoware (using ROS2 communication). Moreover, it provides sensory data needed for self-localization in space and detection of objects in the surrounding environment.
The default prefab EgoVehicle
was developed using a Lexus RX450h 2015 vehicle model with a configured sample sensor kit.
Own EgoVehicle prefab
+If you would like to develop your own EgoVehicle
prefab, we encourage you to read this tutorial.
This vehicle model was created for Autoware simulation, and assuming that Autoware has already created a gas pedal map, +this vehicle model uses acceleration as an input value. +It has the following features:
+WheelColliders
).*.fbx
) as road surface for vehicle driving, gradient resistance.AutowareSimulation
+If you would like to see how EgoVehicle
works or run some tests, we encourage you to familiarize yourself with the AutowareSimulation
scene described in this section.
Parameter | +Value | +Unit | +
---|---|---|
Mass | +\(1500\) | +\(kg\) | +
Wheel base | +\(2.5\) | +\(m\) | +
Tread width | +\(Ft = 1.8; Rr = 1.8\) | +\(m\) | +
Center of Mass position | +\(x = 0; y = 0.5; z = 0\) | +\(m\) | +
Moment of inertia | +\(\mathrm{yaw} = 2000; \mathrm{roll} = 2000; \mathrm{pitch} = 700\) | +\(kg \cdot m^2\) | +
Spring rate | \(Ft = 55000; Rr = 48000\) | \(\frac{N}{m}\) |
Damper rate | \(Ft = 3000; Rr = 2500\) | \(\frac{N \cdot s}{m}\) |
Suspension stroke | +\(Ft = 0.2; Rr = 0.2\) | +\(m\) | +
Wheel radius | +\(0.365\) | +\(m\) | +
Vehicle inertia
+In general, measuring the moment of inertia is not easy, and past papers published by NHTSA are helpful.
+Measured Vehicle Inertial Parameters - NHTSA 1998
Prefab can be found under the following path:
+Assets/AWSIM/Prefabs/NPCs/Vehicles/Lexus RX450h 2015 Sample Sensor.prefab
+
EgoVehicle name
+In order to standardize the documentation, the name EgoVehicle
will be used in this section as the equivalent of the prefab named Lexus RX450h 2015 2015 Sample Sensor
.
EgoVehicle
prefab has the following content:
As you can see, it consists of 3 parents for GameObjects:
+Models
- aggregating visual elements,Colliders
- aggregating colliders,URDF
- aggregating sensors,CoM
and Reflection Probe
.All objects are described in the sections below.
+Prefab is developed using models available in the form of *.fbx
files.
+The visual elements have been loaded from the appropriate *.fbx
file and are aggregated and added in object Models
.
*.fbx
file for Lexus RX450h 2015 is located under the following path:
Assets/AWSIM/Models/Vehicles/Lexus RX450h 2015.fbx
+
Models
object has the following content:
As you can see, the additional visual element is XX1 Sensor Kit
.
It was also loaded from the *.fbx
file which can be found under the following path:
Assets/AWSIM/Models/Sensors/XX1 Sensor Kit.fbx
+
Lexus RX450h 2015.fbx
+The content of a sample *.fbx
file is presented below, all elements except Collider
have been added to the prefab as visual elements of the vehicle.
+Collider
is used as the Mesh source for the Mesh Collider
in the BodyCollider
object.
The default scene contains a single Lexus RX450h 2015 Sample Sensor
prefab that is added as a child of the EgoVehicle
GameObject.
In EgoVehicle
prefab, the local coordinate system of the vehicle (main prefab link) should be defined in the axis of the rear wheels projected onto the ground - in the middle of the distance between them.
+This matters when characterizing the dynamics of the object, as it makes describing its motion and control more convenient.
+
+There are several components responsible for the full functionality of Vehicle
:
Scripts can be found under the following path:
+Assets/AWSIM/Scripts/Vehicles/*
+
The EgoVehicle
architecture - with dependencies - is presented on the following diagram.
The communication between EgoVehicle
components is presented on two different diagrams - a flow diagram and a sequence diagram.
The flow diagram presents a flow of information between the EgoVehicle
components.
The sequence diagram provides a deeper insight in how the communication is structured and what are the steps taken by each component. +Some tasks performed by the elements are presented for clarification.
+ +Sequence diagram
+Please keep in mind, that Autoware message callbacks and the update loop present on the sequence diagram are executed independently and concurrently. +One thing they have in common are resources - the Vehicle (script).
+CoM
(Center of Mass) is an additional link that is defined to set the center of mass in the Rigidbody
.
+The Vehicle (script) is responsible for its assignment.
+This measure should be defined in accordance with reality.
+Most often, the center of mass of the vehicle is located in its center, at the height of its wheel axis - as shown below.
+
+Colliders are used to ensure collision between objects.
+In EgoVehicle
, the main Collider
and the colliders in the Wheels
GameObject
for each wheel were added.
Colliders
object has the following content:
Collider
is a vehicle object responsible for ensuring collision with other objects.
+Additionally, it can be used to detect these collisions.
+The MeshCollider
takes a Mesh of object and builds its Collider
based on it.
+The Mesh for the Collider
was also loaded from the *.fbx
file similarly to the visual elements.
WheelsColliders
are an essential elements from the point of view of driving vehicles on the road.
+They are the only ones that have contact with the roads and it is important that they are properly configured.
+Each vehicle, apart from the visual elements related to the wheels, should also have 4 colliders - one for each wheel.
Wheel (script) provides a reference to the collider and visual object for the particular wheel. +Thanks to this, the Vehicle (script) has the ability to perform certain actions on each of the wheels, such as:
+update the steering angle in WheelCollider
,
update the visual part of the wheel depending on the speed and angle of the turn,
+update the wheel contact information stored in the WheelHit
object,
update the force exerted by the tire forward and sideways depending on the acceleration (including cancellation of skidding),
+ensure setting the tire sleep (it is impossible to put Rigidbody
to sleep, but putting all wheels to sleep allows getting closer to this effect).
Wheel Collider Config (script) has been developed to prevent inspector entry for WheelCollider
. It ensures that friction is set to 0 and that only wheel suspension and collisions are enabled.
Wheel Collider Config
+For a better understanding of the meaning of WheelCollider
we encourage you to read this manual.
Rigidbody
ensures that the object is controlled by the physics engine.
+The Mass
of the vehicle should approximate its actual weight.
+In order for the vehicle to physically interact with other objects - react to collisions, Is Kinematic
must be turned off.
+The Use Gravity
should be turned on - to ensure the correct behavior of the body during movement.
+In addition, Interpolate
should be turned on to ensure the physics engine's effects are smoothed out.
Reflection Probe
is added to EgoVehicle
prefab to simulate realistic reflections in a scene.
+It is a component that captures and stores information about the surrounding environment and uses that information to generate accurate reflections on objects in real-time.
+The values in the component are set as default.
HD Additional Reflection Data (script) is additional component used to store settings for HDRP's reflection probes and is added automatically.
+URDF
(Unified Robot Description Format) is equivalent to the simplified URDF
format used in ROS2.
+This format allows defining the positions of all sensors of the vehicle in relation to its local coordinate system.
+URDF
is built using multiple GameObjects as children appropriately transformed with relation to its parent.
A detailed description of the URDF
structure and sensors added to prefab Lexus RX450h 2015
is available in this section.
Vehicle (script) provides inputs that allow the EgoVehicle
to move.
+Script inputs provide the ability to set the acceleration of the vehicle and the steering angle of its wheels, taking into account the effects of suspension and gravity.
+It also provides an input to set the gear in the gearbox and to control the turn signals.
+Script inputs can be set by one of the following scripts: Vehicle Ros Input (script) or Vehicle Keyboard Input (script).
The script performs several steps periodically:
+PARKING
and the vehicle is stopped (its speed and acceleration are below the set thresholds), it puts the vehicle (Rigidbody
) and its wheels (Wheel (script)) to sleep,DRIVE
and REVERSE
gear.The script uses the CoM
link reference to assign the center of mass of the vehicle to the Rigidbody
.
+In addition, Use inertia
allows defining the inertia
tensor for component Rigidbody
- by default it is disabled.
Physics Settings
- allows to set values used to control vehicle physics:
Sleep Velocity Threshold
- velocity threshold used to put vehicle to sleep,Sleep Time Threshold
- time threshold for which the vehicle must not exceed the Sleep Velocity Threshold
, have contact with the ground and a set acceleration input equal to zero,Skidding Cancel Rate
- coefficient that determines the rate at which skidding is canceled, affects the anti-skid force of the wheels - the higher the value, the faster the cancellation of the skid.Axles Settings
contains references to (Wheel (script)) scripts to control each wheel.
+Thanks to them, the Vehicle (script) is able to set their steering angle and accelerations.
Input Settings
- allows to set limits for values on script input:
Max Steer Angle Input
- maximum value of acceleration set by the script (also negative),Max Acceleration Input
- maximum steering angle of the wheels set by the script (also negative).Inputs
- are only used as a preview of the currently set values in the script input:
Category | +Type | +Description | +
---|---|---|
AccelerationInput | +float | +Acceleration input (m/s^2). On the plane, output the force that will result in this acceleration. On a slope, it is affected by the slope resistance, so it does not match the input. | +
SteerAngleInput | +float | +Vehicle steering input (degree). Negative steers left, positive right | +
AutomaticShiftInput | +enumeration | +Vehicle gear shift input (AT). Values: PARKING , REVERSE , NEUTRAL , DRIVE . |
+
SignalInput | +enumeration | +Vehicle turn signal input. Values: NONE , LEFT , RIGHT , HAZARD . |
+
Category | +Type | +Description | +
---|---|---|
LocalAcceleration | +Vector3 | +Acceleration(m/s^2) in the local coordinate system of the vehicle | +
Speed | +float | +Vehicle speed (m/s). | +
SteerAngle | +float | +Vehicle steering angle (degree). | +
Signal | +enumeration | +Vehicle turn signal. | +
Velocity | +Vector3 | +Vehicle velocity (m/s) | +
LocalVelocity | +Vector3 | +Vehicle local velocity (m/s) | +
AngularVelocity | +Vector3 | +Vehicle angular velocity (rad/s) | +
The acceleration or deceleration of the vehicle is determined by AutomaticShiftInput
and AccelerationInput
.
+The vehicle will not move in the opposite direction of the (DRIVE
or REVERSE
) input.
Example
+Sample vehicle behaves:
+Sample 1 - vehicle will accelerate with input values (gradient resistance is received).
+AutomaticShiftInput = DRIVE
+Speed = Any value
+AccelerationInput > 0
+
Sample 2 - vehicle will decelerate (like a brake).
+AutomaticShiftInput = DRIVE
+Speed > 0
+AccelerationInput < 0
+
Sample 3 - vehicle will continuously stop.
+AutomaticShiftInput = DRIVE
+Speed <= 0
+AccelerationInput < 0
+
Vehicle Ros (script) is responsible for subscribing to messages that are vehicle control commands. +The values read from the message are set on the inputs of the Vehicle (script) script.
+The concept for vehicle dynamics is suitable for Autoware's autoware_auto_control_msgs/AckermannControlCommand
and autoware_auto_vehicle_msgs/GearCommand
messages interface usage.
+The script sets gear, steering angle of wheels and acceleration of the vehicle (read from the aforementioned messages) to the Vehicle (script) input.
+In the case of VehicleEmergencyStamped message it sets the absolute acceleration equal to 0.
+In addition, also through Vehicle (script), the appropriate lights are turned on and off depending on TurnIndicatorsCommand and HazardLightsCommand messages.
* Command Topic
- topic on which suitable type of information is subscribed (default: listed in the table below)QoS
- Quality of service profile used in the publication (default assumed as "system_default"
: Reliable
, TransientLocal
, Keep last/1
)Vehicle
- reference to a script in the vehicle object where the subscribed values are to be set (default: None
)Reliable
, TransientLocal
, KeepLast/1
Category | +Topic | +Message type | +Frequency (Autoware dependent) | +
---|---|---|---|
TurnIndicatorsCommand | +/control/command/turn_indicators_cmd |
+autoware_auto_vehicle_msgs/TurnIndicatorsCommand |
+10 |
+
HazardLightsCommand | +/control/command/hazard_lights_cmd |
+autoware_auto_vehicle_msgs/HazardLightsCommand |
+10 |
+
AckermannControlCommand | +/control/command/control_cmd |
+autoware_auto_control_msgs/AckermannControlCommand |
+60 |
+
GearCommand | +/control/command/gear_cmd |
+autoware_auto_vehicle_msgs/GearCommand |
+10 |
+
VehicleEmergencyStamped | +/control/command/emergency_cmd |
+tier4_vehicle_msgs/msg/VehicleEmergencyStamped |
+60 |
+
ROS2 Topics
+If you would like to know all the topics used in communication Autoware with AWSIM, we encourage you to familiarize yourself with this section
+Vehicle Keyboard (script) allows EgoVehicle
to be controlled by the keyboard.
+Thanks to this, it is possible to switch on the appropriate gear of the gearbox, turn the lights on/off, set the acceleration and steering of the wheels.
+It's all set in the Vehicle (script) of the object assigned in the Vehicle
field.
+The table below shows the available control options.
Button | +Option | +
---|---|
d |
+Switch to move forward (drive gear) | +
r |
+Switch to move backwards (reverse gear) | +
n |
+Switch to neutral | +
p |
+Switch to parking gear | +
UP ARROW |
+Forward acceleration | +
DOWN ARROW |
+Reverse acceleration (decelerate) | +
LEFT/RIGHT ARROW |
+Turning | +
1 |
+Turn left blinker on (right off) | +
2 |
+Turn right blinker on (left off) | +
3 |
+Turn on hazard lights | +
4 |
+Turn off blinker or hazard lights | +
WASD
+Controlling the movement of the vehicle with WASD
as the equivalent of arrow keys is acceptable, but remember that the d
button engages the drive gear.
Max Acceleration
- maximum value of acceleration set by the script (also negative)Max Steer Angle
- maximum steering angle of the wheels set by the script (also negative)Value limits
+Max Acceleration
and Max Steer Angle
values greater than those set in the Vehicle (script) are limited by the script itself - they will not be exceeded.
This part of the settings is related to the configuration of the emission of materials when a specific lighting is activated.
+There are 4 types of lights: Brake
, Left Turn Signal
, Right Turn Signal
and Reverse
.
+Each of the lights has its visual equivalent in the form of a Mesh.
+In the case of EgoVehicle
, each light type has its own GameObject which contains the Mesh assigned.
For each type of light, the appropriate Material Index
(equivalent of element index in mesh) and Lighting Color
are assigned - yellow for Turn Signals
, red for Break
and white for Reverse
.
Lighting Intensity
values are also configured - the greater the value, the more light will be emitted.
+This value is related to Lighting Exposure Weight
parameter, which is an exposure weight - the lower the value, the more light is emitted.
All types of lighting are switched on and off depending on the values obtained from the Vehicle (script) of the vehicle, which is assigned in the Vehicle
field.
Turn Signal Timer Interval Sec
- time interval for flashing lights - value \(0.5\) means that the light will be on for \(0.5s\), then it will be turned off for \(0.5s\) and turned on again.0.5
)The FollowCamera
component is designed to track a specified target object within the scene. It is attached to the main camera and maintains a defined distance and height from the target. Additionally, it offers the flexibility of custom rotation around the target as an optional feature.
Target
- the transform component of the tracked objectDistance
- base distance between the camera and the target objectOffset
- lateral offset of the camera positionHeight
- base height of the camera above the target objectHeightMultiplier
- camera height multiplierRotateAroundModeToggle
- toggle key between rotate around mode and default follow modeRotateAroundSensitivity
- mouse movement sensitivity for camera rotation around the targetHeightAdjustmentSensitivity
- mouse movement sensitivity for camera height adjustmentZoomSensitivity
- mouse scroll wheel sensitivity for camera zoomInvertHorzAxis
- invert horizontal mouse movementInvertVertAxis
- invert vertical mouse movementInvertScrollWheel
- invert mouse scroll wheelMaxHeight
- maximum value of camera heightMinDistance
- minimum value of camera distance to target objectMaxDistance
- maximum value of camera distance to target objectCamera rotation around the target can be activated by pressing the RotateAroundModeToggle
key (default 'C'
key). In this mode, the user can manually adjust the camera view at run-time using the mouse. To deactivate the Rotate Around mode, press the RotateAroundModeToggle
key once more.
In the Rotate Around Mode camera view can be controlled as follows:
+hold left shift + mouse movement
: to adjust the camera position,hold left shift + mouse scroll wheel
: to zoom in or out of the camera view,left shift + left arrow
: to set left camera view,left shift + right arrow
: to set right camera view,left shift + up arrow
: to set front camera view,left shift + down arrow
: to set back camera view.An optional prefab featuring a UI panel, located at Assets/Prefabs/UI/MainCameraView.prefab
, can be used to showcase a user guide. To integrate this prefab into the scene, drag and drop it beneath the Canvas object. This prefab displays instructions on how to adjust the camera view whenever the Rotate Around Mode is activated.
This section describes the placement of sensors in EgoVehicle
on the example of a Lexus RX450h 2015 Sample Sensor
prefab.
URDF
(Unified Robot Description Format) is equivalent to the simplified URDF format used in ROS2.
+This format allows defining the positions of all sensors of the vehicle in relation to its main parent prefab coordinate system.
URDF
is added directly to the main parent of the prefab and there are no transforms between these objects.
+It is built using multiple GameObjects as children appropriately transformed with relation to its parent.
The transforms in the URDF
object are defined using the data from the sensor kit documentation used in the vehicle.
+Such data can be obtained from sensor kit packages for Autoware, for example: awsim_sensor_kit_launch
- it is used in the AWSIM compatible version of Autoware.
+This package contains a description of transforms between coordinate systems (frames) in the form of *.yaml
files: sensors_calibration and sensor_kit_calibration.
In the first file, the transform of the sensor kit frame (sensor_kit_base_link
) relative to the local vehicle frame (base_link
) is defined.
+In Unity, this transform is defined in the object Sensor Kit
.
+While the second file contains a definition of the transformations of all sensors with respect to the sensor kit - they are described in the Sensor Kit
subsections.
Transformations
+Please note that the transformation Objects are intended to be a direct reflection of frames existing in ROS2.
+All frame Objects are defined as children of base_link
and consist of nothing but a transformation - analogical to the one present in ROS2 (keep in mind the coordinate system conversion).
+The sensor Objects are added to the transformation Object with no transformation of their own.
Coordinate system conventions
+Unity uses a left-handed convention for its coordinate system, while the ROS2 uses a right-handed convention. +For this reason, you should remember to perform conversions to get the correct transforms.
+Base Link
(frame named base_link
) is the formalized local coordinate system in URDF
.
+All sensors that publish data specified in some frame present in Autoware are defined in relation to base_link
.
+It is a standard practice in ROS, that base_link
is a parent transformation of the whole robot and all robot parts are defined in some relation to the base_link
.
If any device publishes data in the base_link
frame - it is added as a direct child, with no additional transformation intermediate Object (PoseSensor
is an example).
+However, if this device has its own frame, it is added as a child to its frame Object - which provides an additional transformation.
+The final transformation can consist of many intermediate transformation Objects.
+The frame Objects are added to the base_link
(GnssSensor
and its parent gnss_link
are an example).
Sensor Kit
(frame named sensor_kit_base_link
) is a set of objects that consists of all simulated sensors that are physically present in an autonomous vehicle and have their own coordinate system (frame).
+This set of sensors has its own frame sensor_kit_base_link
that is relative to the base_link
.
In the Lexus RX450h 2015 Sample Sensor
prefab, it is added to the base_link
GameObject with an appropriately defined transformation.
+It acts as an intermediate frame GameObject.
+Sensor Kit
is located on the top of the vehicle, so it is significantly shifted about the Oy
and Oz
axes.
+Sensors can be defined directly in this Object (VelodyneVLP16
is an example), or have their own transformation Object added on top of the sensor_kit_base_link
(like GnssSensor
mentioned in the base_link
section).
The sensors described in this subsection are defined in relation to the sensor_kit_base_link
frame.
LidarSensor
is the component that simulates the LiDAR (Light Detection and Ranging) sensor.
+The LiDARs mounted on the top of autonomous vehicles are primarily used to scan the environment for localization in space and for detection and identification of obstacles.
+LiDARs placed on the left and right sides of the vehicle are mainly used to monitor the traffic lane and detect vehicles moving in adjacent lanes.
+A detailed description of this sensor is available in this section.
Lexus RX450h 2015 Sample Sensor
prefab has one VelodyneVLP16
prefab sensor configured on the top of the vehicle, mainly used for location in space, but also for object recognition.
+Since the top LiDAR publishes data directly in the sensor_kit_base_link
frame, the prefab is added directly to it - there is no transform.
+The other two remaining LiDARs are defined, but disabled - they do not provide data from space (but you can enable them!).
IMUSensor
is a component that simulates an IMU (Inertial Measurement Unit) sensor.
+It measures acceleration and angular velocity of the EgoVehicle
.
+A detailed description of this sensor is available in this section.
Lexus RX450h 2015 Sample Sensor
has one such sensor located on the top of the vehicle.
+It is added to an Object tamagawa/imu_link
that matches its frame_id
and contains its transform with respect to sensor_kit_base_link
.
+This transformation has no translation, but only rotation around the Oy
and Oz
axes.
+The transform is defined in such a way that its axis Oy
points downwards - in accordance with the gravity vector.
GnssSensor
is a component which simulates the position of vehicle computed by the Global Navigation Satellite.
+A detailed description of this sensor is available in this section.
Lexus RX450h 2015 Sample Sensor
prefab has one such sensor located on top of the vehicle.
+It is added to an Object gnss_link
that matches its frame_id
and contains its transform with respect to sensor_kit_base_link
.
+The frame is slightly moved back along the Oy
and Oz
axes.
CameraSensor
is a component that simulates an RGB camera.
+Autonomous vehicles can be equipped with many cameras used for various purposes.
+A detailed description of this sensor is available in this section.
Lexus RX450h 2015 Sample Sensor
prefab has one camera, positioned on top of the vehicle in such a way that the camera's field of view provides an image including traffic lights - the status of which must be recognized by Autoware.
+It is added to an Object traffic_light_left_camera/camera_link
that matches its frame_id
and contains its transform with respect to sensor_kit_base_link
.
PoseSensor
is a component which provides access to the current position and rotation of the EgoVehicle
- is added as a ground truth.
The position and orientation of EgoVehicle
are defined as the pose of the frame base_link
in the global frame, so this Object is added directly as its child without a transform.
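As a rough sketch of this idea (not the actual PoseSensor implementation), the ground-truth pose can simply be read from the base_link Transform; the class and field names below are assumptions for illustration:
using UnityEngine;

public class GroundTruthPoseExample : MonoBehaviour
{
    // Assumed reference to the base_link Transform of the EgoVehicle.
    [SerializeField] private Transform baseLink;

    private void FixedUpdate()
    {
        // The pose of the vehicle is the pose of base_link expressed in the global (world) frame.
        Vector3 position = baseLink.position;
        Quaternion rotation = baseLink.rotation;
        Debug.Log($"base_link position: {position}, rotation: {rotation.eulerAngles}");
    }
}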
VehicleStatusSensor
is a component that is designed to aggregate information about the current state of the EgoVehicle
, such as the active control mode, vehicle speed, steering of its wheels, or turn signal status.
+A detailed description of this sensor is available in this section.
This Object is not strictly related to any frame; however, it is treated as a sensor and is therefore added to the URDF
.
English/日本語 OK
+e-mail : takatoki.makino@tier4.jp
+twitter : https://twitter.com/mackierx111
This document uses Material for MkDocs.
+1 Install Material for MkDocs
.
+
$ pip install mkdocs-material
+
2 Build and serve the document locally.
$ cd AWSIM
+$ mkdocs serve
+INFO - Building documentation...
+INFO - Cleaning site directory
+INFO - Documentation built in 0.16 seconds
+INFO - [03:13:22] Watching paths for changes: 'docs', 'mkdocs.yml'
+INFO - [03:13:22] Serving on http://127.0.0.1:8000/
+
3 Access http://127.0.0.1:8000/
with a web browser.
For further reference see Material for MkDocs - Getting started.
+Use the following /docs
directory and mkdocs.yml
for new documentation files.
+
AWSIM
+├─ docs/ // markdown and image file for each document.
+└─ mkdocs.yml // mkdocs config.
+
AWSIM
+└─ docs/ // Root of all documents
+ └─ DeveloperGuide // Category
+ └─ Documentation // Root of each document
+ ├─ index.md // Markdown file
+ └─ image_0.png // Images used in markdown file
+
When docs are pushed to the main branch, they are deployed to GitHub Pages using GitHub Actions. See also Material for MkDocs - Publishing your site
Everyone is welcome!
+Do not open issues for general support questions, as we want to keep GitHub issues for confirmed bug reports. Instead, open a discussion in the Q&A category. The troubleshooting pages at AWSIM and at Autoware will also be helpful.
+Before you post an issue, please search the Issues and the Discussions Q&A category to check whether it is already a known issue.
+This page explains how to create an issue from a repository.
+If you find a new bug, please create an issue here.
+If you propose a new feature or have an idea, please create an issue here.
+If you plan to contribute to AWSIM, please create an issue here.
+If you have an idea to improve the simulation, you can submit a pull request. +The following process should be followed:
Create a branch (feature/***) from the main branch.
Create a pull request to the main branch.
+ + + + + + + + + + + + + + +AWSIM License applies to tier4/AWSIM repositories and all content contained in the Releases.
+*.cs
*.compute
*.xml
)*.fbx
*.pcd
*.osm
*.png
*.anim
*.unitypackage
*.x86_64
)**********************************************************************************
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2022 TIER IV, Inc.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
**********************************************************************************
+
+Attribution-NonCommercial 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") is not a law firm and
+does not provide legal services or legal advice. Distribution of
+Creative Commons public licenses does not create a lawyer-client or
+other relationship. Creative Commons makes its licenses and related
+information available on an "as-is" basis. Creative Commons gives no
+warranties regarding its licenses, any material licensed under their
+terms and conditions, or any related information. Creative Commons
+disclaims all liability for damages resulting from their use to the
+fullest extent possible.
+
+Using Creative Commons Public Licenses
+
+Creative Commons public licenses provide a standard set of terms and
+conditions that creators and other rights holders may use to share
+original works of authorship and other material subject to copyright
+and certain other rights specified in the public license below. The
+following considerations are for informational purposes only, are not
+exhaustive, and do not form part of our licenses.
+
+ Considerations for licensors: Our public licenses are
+ intended for use by those authorized to give the public
+ permission to use material in ways otherwise restricted by
+ copyright and certain other rights. Our licenses are
+ irrevocable. Licensors should read and understand the terms
+ and conditions of the license they choose before applying it.
+ Licensors should also secure all rights necessary before
+ applying our licenses so that the public can reuse the
+ material as expected. Licensors should clearly mark any
+ material not subject to the license. This includes other CC-
+ licensed material, or material used under an exception or
+ limitation to copyright. More considerations for licensors:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ Considerations for the public: By using one of our public
+ licenses, a licensor grants the public permission to use the
+ licensed material under specified terms and conditions. If
+ the licensor's permission is not necessary for any reason--for
+ example, because of any applicable exception or limitation to
+ copyright--then that use is not regulated by the license. Our
+ licenses grant only permissions under copyright and certain
+ other rights that a licensor has authority to grant. Use of
+ the licensed material may still be restricted for other
+ reasons, including because others have copyright or other
+ rights in the material. A licensor may make special requests,
+ such as asking that all changes be marked or described.
+ Although not required by our licenses, you are encouraged to
+ respect those requests where reasonable. More considerations
+ for the public:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-NonCommercial 4.0 International Public
+License
+
+By exercising the Licensed Rights (defined below), You accept and agree
+to be bound by the terms and conditions of this Creative Commons
+Attribution-NonCommercial 4.0 International Public License ("Public
+License"). To the extent this Public License may be interpreted as a
+contract, You are granted the Licensed Rights in consideration of Your
+acceptance of these terms and conditions, and the Licensor grants You
+such rights in consideration of benefits the Licensor receives from
+making the Licensed Material available under these terms and
+conditions.
+
+
+Section 1 -- Definitions.
+
+ a. Adapted Material means material subject to Copyright and Similar
+ Rights that is derived from or based upon the Licensed Material
+ and in which the Licensed Material is translated, altered,
+ arranged, transformed, or otherwise modified in a manner requiring
+ permission under the Copyright and Similar Rights held by the
+ Licensor. For purposes of this Public License, where the Licensed
+ Material is a musical work, performance, or sound recording,
+ Adapted Material is always produced where the Licensed Material is
+ synched in timed relation with a moving image.
+
+ b. Adapter's License means the license You apply to Your Copyright
+ and Similar Rights in Your contributions to Adapted Material in
+ accordance with the terms and conditions of this Public License.
+
+ c. Copyright and Similar Rights means copyright and/or similar rights
+ closely related to copyright including, without limitation,
+ performance, broadcast, sound recording, and Sui Generis Database
+ Rights, without regard to how the rights are labeled or
+ categorized. For purposes of this Public License, the rights
+ specified in Section 2(b)(1)-(2) are not Copyright and Similar
+ Rights.
+ d. Effective Technological Measures means those measures that, in the
+ absence of proper authority, may not be circumvented under laws
+ fulfilling obligations under Article 11 of the WIPO Copyright
+ Treaty adopted on December 20, 1996, and/or similar international
+ agreements.
+
+ e. Exceptions and Limitations means fair use, fair dealing, and/or
+ any other exception or limitation to Copyright and Similar Rights
+ that applies to Your use of the Licensed Material.
+
+ f. Licensed Material means the artistic or literary work, database,
+ or other material to which the Licensor applied this Public
+ License.
+
+ g. Licensed Rights means the rights granted to You subject to the
+ terms and conditions of this Public License, which are limited to
+ all Copyright and Similar Rights that apply to Your use of the
+ Licensed Material and that the Licensor has authority to license.
+
+ h. Licensor means the individual(s) or entity(ies) granting rights
+ under this Public License.
+
+ i. NonCommercial means not primarily intended for or directed towards
+ commercial advantage or monetary compensation. For purposes of
+ this Public License, the exchange of the Licensed Material for
+ other material subject to Copyright and Similar Rights by digital
+ file-sharing or similar means is NonCommercial provided there is
+ no payment of monetary compensation in connection with the
+ exchange.
+
+ j. Share means to provide material to the public by any means or
+ process that requires permission under the Licensed Rights, such
+ as reproduction, public display, public performance, distribution,
+ dissemination, communication, or importation, and to make material
+ available to the public including in ways that members of the
+ public may access the material from a place and at a time
+ individually chosen by them.
+
+ k. Sui Generis Database Rights means rights other than copyright
+ resulting from Directive 96/9/EC of the European Parliament and of
+ the Council of 11 March 1996 on the legal protection of databases,
+ as amended and/or succeeded, as well as other essentially
+ equivalent rights anywhere in the world.
+
+ l. You means the individual or entity exercising the Licensed Rights
+ under this Public License. Your has a corresponding meaning.
+
+
+Section 2 -- Scope.
+
+ a. License grant.
+
+ 1. Subject to the terms and conditions of this Public License,
+ the Licensor hereby grants You a worldwide, royalty-free,
+ non-sublicensable, non-exclusive, irrevocable license to
+ exercise the Licensed Rights in the Licensed Material to:
+
+ a. reproduce and Share the Licensed Material, in whole or
+ in part, for NonCommercial purposes only; and
+
+ b. produce, reproduce, and Share Adapted Material for
+ NonCommercial purposes only.
+
+ 2. Exceptions and Limitations. For the avoidance of doubt, where
+ Exceptions and Limitations apply to Your use, this Public
+ License does not apply, and You do not need to comply with
+ its terms and conditions.
+
+ 3. Term. The term of this Public License is specified in Section
+ 6(a).
+
+ 4. Media and formats; technical modifications allowed. The
+ Licensor authorizes You to exercise the Licensed Rights in
+ all media and formats whether now known or hereafter created,
+ and to make technical modifications necessary to do so. The
+ Licensor waives and/or agrees not to assert any right or
+ authority to forbid You from making technical modifications
+ necessary to exercise the Licensed Rights, including
+ technical modifications necessary to circumvent Effective
+ Technological Measures. For purposes of this Public License,
+ simply making modifications authorized by this Section 2(a)
+ (4) never produces Adapted Material.
+
+ 5. Downstream recipients.
+
+ a. Offer from the Licensor -- Licensed Material. Every
+ recipient of the Licensed Material automatically
+ receives an offer from the Licensor to exercise the
+ Licensed Rights under the terms and conditions of this
+ Public License.
+
+ b. No downstream restrictions. You may not offer or impose
+ any additional or different terms or conditions on, or
+ apply any Effective Technological Measures to, the
+ Licensed Material if doing so restricts exercise of the
+ Licensed Rights by any recipient of the Licensed
+ Material.
+
+ 6. No endorsement. Nothing in this Public License constitutes or
+ may be construed as permission to assert or imply that You
+ are, or that Your use of the Licensed Material is, connected
+ with, or sponsored, endorsed, or granted official status by,
+ the Licensor or others designated to receive attribution as
+ provided in Section 3(a)(1)(A)(i).
+
+ b. Other rights.
+
+ 1. Moral rights, such as the right of integrity, are not
+ licensed under this Public License, nor are publicity,
+ privacy, and/or other similar personality rights; however, to
+ the extent possible, the Licensor waives and/or agrees not to
+ assert any such rights held by the Licensor to the limited
+ extent necessary to allow You to exercise the Licensed
+ Rights, but not otherwise.
+
+ 2. Patent and trademark rights are not licensed under this
+ Public License.
+
+ 3. To the extent possible, the Licensor waives any right to
+ collect royalties from You for the exercise of the Licensed
+ Rights, whether directly or through a collecting society
+ under any voluntary or waivable statutory or compulsory
+ licensing scheme. In all other cases the Licensor expressly
+ reserves any right to collect such royalties, including when
+ the Licensed Material is used other than for NonCommercial
+ purposes.
+
+
+Section 3 -- License Conditions.
+
+Your exercise of the Licensed Rights is expressly made subject to the
+following conditions.
+
+ a. Attribution.
+
+ 1. If You Share the Licensed Material (including in modified
+ form), You must:
+
+ a. retain the following if it is supplied by the Licensor
+ with the Licensed Material:
+
+ i. identification of the creator(s) of the Licensed
+ Material and any others designated to receive
+ attribution, in any reasonable manner requested by
+ the Licensor (including by pseudonym if
+ designated);
+
+ ii. a copyright notice;
+
+ iii. a notice that refers to this Public License;
+
+ iv. a notice that refers to the disclaimer of
+ warranties;
+
+ v. a URI or hyperlink to the Licensed Material to the
+ extent reasonably practicable;
+
+ b. indicate if You modified the Licensed Material and
+ retain an indication of any previous modifications; and
+
+ c. indicate the Licensed Material is licensed under this
+ Public License, and include the text of, or the URI or
+ hyperlink to, this Public License.
+
+ 2. You may satisfy the conditions in Section 3(a)(1) in any
+ reasonable manner based on the medium, means, and context in
+ which You Share the Licensed Material. For example, it may be
+ reasonable to satisfy the conditions by providing a URI or
+ hyperlink to a resource that includes the required
+ information.
+
+ 3. If requested by the Licensor, You must remove any of the
+ information required by Section 3(a)(1)(A) to the extent
+ reasonably practicable.
+
+ 4. If You Share Adapted Material You produce, the Adapter's
+ License You apply must not prevent recipients of the Adapted
+ Material from complying with this Public License.
+
+
+Section 4 -- Sui Generis Database Rights.
+
+Where the Licensed Rights include Sui Generis Database Rights that
+apply to Your use of the Licensed Material:
+
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right
+ to extract, reuse, reproduce, and Share all or a substantial
+ portion of the contents of the database for NonCommercial purposes
+ only;
+
+ b. if You include all or a substantial portion of the database
+ contents in a database in which You have Sui Generis Database
+ Rights, then the database in which You have Sui Generis Database
+ Rights (but not its individual contents) is Adapted Material; and
+
+ c. You must comply with the conditions in Section 3(a) if You Share
+ all or a substantial portion of the contents of the database.
+
+For the avoidance of doubt, this Section 4 supplements and does not
+replace Your obligations under this Public License where the Licensed
+Rights include other Copyright and Similar Rights.
+
+
+Section 5 -- Disclaimer of Warranties and Limitation of Liability.
+
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
+ EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
+ AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
+ ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
+ IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
+ WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
+ PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
+ ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
+ KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
+ ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
+
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
+ TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
+ NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
+ INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
+ COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
+ USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
+ ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
+ DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
+ IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
+
+ c. The disclaimer of warranties and limitation of liability provided
+ above shall be interpreted in a manner that, to the extent
+ possible, most closely approximates an absolute disclaimer and
+ waiver of all liability.
+
+
+Section 6 -- Term and Termination.
+
+ a. This Public License applies for the term of the Copyright and
+ Similar Rights licensed here. However, if You fail to comply with
+ this Public License, then Your rights under this Public License
+ terminate automatically.
+
+ b. Where Your right to use the Licensed Material has terminated under
+ Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided
+ it is cured within 30 days of Your discovery of the
+ violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any
+ right the Licensor may have to seek remedies for Your violations
+ of this Public License.
+
+ c. For the avoidance of doubt, the Licensor may also offer the
+ Licensed Material under separate terms or conditions or stop
+ distributing the Licensed Material at any time; however, doing so
+ will not terminate this Public License.
+
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
+ License.
+
+
+Section 7 -- Other Terms and Conditions.
+
+ a. The Licensor shall not be bound by any additional or different
+ terms or conditions communicated by You unless expressly agreed.
+
+ b. Any arrangements, understandings, or agreements regarding the
+ Licensed Material not stated herein are separate from and
+ independent of the terms and conditions of this Public License.
+
+
+Section 8 -- Interpretation.
+
+ a. For the avoidance of doubt, this Public License does not, and
+ shall not be interpreted to, reduce, limit, restrict, or impose
+ conditions on any use of the Licensed Material that could lawfully
+ be made without permission under this Public License.
+
+ b. To the extent possible, if any provision of this Public License is
+ deemed unenforceable, it shall be automatically reformed to the
+ minimum extent necessary to make it enforceable. If the provision
+ cannot be reformed, it shall be severed from this Public License
+ without affecting the enforceability of the remaining terms and
+ conditions.
+
+ c. No term or condition of this Public License will be waived and no
+ failure to comply consented to unless expressly agreed to by the
+ Licensor.
+
+ d. Nothing in this Public License constitutes or may be interpreted
+ as a limitation upon, or waiver of, any privileges and immunities
+ that apply to the Licensor or You, including from the legal
+ processes of any jurisdiction or authority.
+
+=======================================================================
+
+Creative Commons is not a party to its public
+licenses. Notwithstanding, Creative Commons may elect to apply one of
+its public licenses to material it publishes and in those instances
+will be considered the “Licensor.” The text of the Creative Commons
+public licenses is dedicated to the public domain under the CC0 Public
+Domain Dedication. Except for the limited purpose of indicating that
+material is shared under a Creative Commons public license or as
+otherwise permitted by the Creative Commons policies published at
+creativecommons.org/policies, Creative Commons does not authorize the
+use of the trademark "Creative Commons" or any other trademark or logo
+of Creative Commons without its prior written consent including,
+without limitation, in connection with any unauthorized modifications
+to any of its public licenses or any other arrangements,
+understandings, or agreements concerning use of licensed material. For
+the avoidance of doubt, this paragraph does not form part of the
+public licenses.
+
+Creative Commons may be contacted at creativecommons.org
+
Follow the steps in: +https://code.visualstudio.com/docs/setup/linux
+# Install the keys and repository
+sudo apt-get install wget gpg
+wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg
+sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
+sudo sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list'
+rm -f packages.microsoft.gpg
+
+# Then update the package cache and install the package using:
+sudo apt install apt-transport-https
+sudo apt update
+sudo apt install code
+
Follow the steps in: +https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#register-the-microsoft-package-repository
+# Get Ubuntu version
+declare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '"'; fi)
+
+# Download Microsoft signing key and repository
+wget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
+
+# Install Microsoft signing key and repository
+sudo dpkg -i packages-microsoft-prod.deb
+
+# Clean up
+rm packages-microsoft-prod.deb
+
+# Update packages
+sudo apt update
+
+sudo apt install dotnet-sdk-8.0
+
Follow the steps in:
+++Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter. +-
+ext install ms-dotnettools.csdevkit
+Repeat for: +-ext install VisualStudioToolsForUnity.vstuc
+-ext install ms-dotnettools.csharp
Edit
-> Preferences
-> External Tools
-> External Script Editor
Visual Studio Code
Browse
and navigate and select /usr/bin/code
It should all be configured now.
+You can either open up a script by double clicking in the Project window in Unity or by opening up the project in VS Code:
+- Assets
-> Open C# Project
Syntax highlighting and CTRL-click navigation should work out of the box.
+For more advanced features such as debugging, check the Unity Development with VS Code Documentation.
+In the AWSIM project, the package Visual Studio Editor is already installed to satisfy the requirement from the Unity for Visual Studio Code extension.
+ + + + + + + + + + + + + +This document describes the most common errors encountered when working with AWSIm or autoware.
+Trouble | +Solution | +
---|---|
Massive output of Plugins errors | +git clone the AWSIM repository again |
+
error : RuntimeError: error not set, at C:\ci\ws\src\ros2\rcl\rcl\src\rcl\node.c:262 |
+Set up environment variables and config around ROS2 correctly. For example: - Environment variables - cyclonedds_config.xml |
+
$ ros2 topic list is not displayed |
+- your machine ROS_DOMAIN_ID is different- ROS2 is not sourced |
+
Using AWSIM on Windows and Autoware on Ubuntu. $ ros2 topic list is not displayed. |
+Allow the communication in Windows Firewall | +
self-driving stops in the middle of the road. | +Check if your map data is correct (PointCloud, VectorMap, 3D fbx models) | +
Connecting AWSIM and Autoware results in bad network | +Make ros local host-only. Include the following in the .bashrc (The password will be requested at terminal startup after OS startup.) export ROS_LOCALHOST_ONLY=1 export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp if [ ! -e /tmp/cycloneDDS_configured ]; then sudo sysctl -w net.core.rmem_max=2147483647 sudo ip link set lo multicast on touch /tmp/cycloneDDS_configured fi |
+
Lidar (colored pointcloud on RViz ) does not show. | +Reduce processing load by following command. This can only be applied to Autoware's awsim-stable branch. cd <path_to_your_autoware_folder> wget "https://drive.google.com/uc?export=download&id=11mkwfg-OaXIp3Z5c3R58Pob3butKwE1Z" -O patch.sh bash patch.sh && rm patch.sh |
+
Error when starting AWSIM binary. segmentation fault (core dumped) |
+- Check if yourNvidia drivers or Vulkan API are installed correctly - When building binary please pay attantion whether the Graphic Jobs option in Player Settings is disabled. It should be disabled since it may produce segmentation fault errors. Please check forum for more details. |
+
Initial pose does not match automatically. | +Set initial pose manually. |
+
Unity crashes and check the log for the cause of the error. | +Editor log file location Windows : C:\Users\username\AppData\Local\Unity\Editor\Editor.log Linux : ~/.config/unity3d/.Editor.log Player log file location Windows : C:\Users\username\AppData\LocalLow\CompanyName\ProductName\output_log.txt Linux : ~/.config/unity3d/CompanyName/ProductName/Player.log See also : Unity Documentation - Log Files |
+
Safe mode dialog appears when starting UnityEditor. or error : No usable version of libssl was found |
+1. download libssl $ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb 2. install sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.11_amd64.deb |
+
(Windows) Unity Editor's error:Plugins: Failed to load 'Assets/RGLUnityPlugin/Plugins/Windows/x86_64/RobotecGPULidar.dll' because one or more of its dependencies could not be loaded. |
+Install Microsoft Visual C++ Redistributable packages for Visual Studio 2015, 2017, 2019, and 2022 (X64 Architecture) | +
(Windows) Built-binary or Unity Editor freeze when simulation started | +Update/Install latest NIC(NetworkInterfaceCard) drivers for your PC. Especially, if you can find latest drivers provided by chip vendors for the interfaces (not by Microsoft), we recommend vendors' drivers. |
+
Below you can find instructions on how to set up the self-driving demo of the AWSIM simulation controlled by Autoware. The instructions assume the use of the Ubuntu OS.
+ +The simulation provided in the AWSIM demo is configured as follows:
+AWSIM Demo Settings | ++ |
---|---|
Vehicle | +Lexus RX 450h | +
Environment | +Japan Tokyo Nishishinjuku | +
Sensors | +Gnss * 1 IMU * 1 LiDAR * 1 Traffic camera * 1 |
+
Traffic | +Randomized traffic | +
ROS2 | +humble | +
Please make sure that your machine meets the following requirements in order to run the simulation correctly:
+Required PC Specs | ++ |
---|---|
OS | +Ubuntu 22.04 | +
CPU | +6cores and 12thread or higher | +
GPU | +RTX2080Ti or higher | +
Nvidia Driver (Windows) | +>=472.50 | +
Nvidia Driver (Ubuntu 22) | +>=515.43.04 | +
The simulation is based on the appropriate network setting, which allows for trouble-free communication of the AWSIM simulation with the Autoware software.
+To apply required localhost settings please add the following lines to ~/.bashrc
file:
if [ ! -e /tmp/cycloneDDS_configured ]; then
+ sudo sysctl -w net.core.rmem_max=2147483647
+ sudo ip link set lo multicast on
+ touch /tmp/cycloneDDS_configured
+fi
+
and these lines to ~/.profile
or in either of files: ~/.bash_profile
or ~/.bash_login
:
export ROS_LOCALHOST_ONLY=1
+export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
+
Warning
+A system restart is required for these changes to work.
+To run the simulator, please follow the steps below.
+Install Nvidia GPU driver (Skip if already installed).
+sudo add-apt-repository ppa:graphics-drivers/ppa
+sudo apt update
+
Install the recommended version of the driver. +
sudo ubuntu-drivers autoinstall
+
Warning
+Currently, there are cases where the Nvidia driver version is too high, resulting in Segmentation fault. In that case, please lower the Nvidia driver version (525 is recommended.)
+Reboot your machine to make the installed driver detected by the system. +
sudo reboot
+
nvidia-smi
command is available and outputs summary similar to the one presented below.
+$ nvidia-smi
+Fri Oct 14 01:41:05 2022
++-----------------------------------------------------------------------------+
+| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
+|-------------------------------+----------------------+----------------------+
+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+| | | MIG M. |
+|===============================+======================+======================|
+| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
+| 37% 31C P8 30W / 250W | 188MiB / 11264MiB | 3% Default |
+| | | N/A |
++-------------------------------+----------------------+----------------------+
+
++-----------------------------------------------------------------------------+
+| Processes: |
+| GPU GI CI PID Type Process name GPU Memory |
+| ID ID Usage |
+|=============================================================================|
+| 0 N/A N/A 1151 G /usr/lib/xorg/Xorg 133MiB |
+| 0 N/A N/A 1470 G /usr/bin/gnome-shell 45MiB |
++-----------------------------------------------------------------------------+
+
Install Vulkan Graphics Library (Skip if already installed).
+sudo apt update
+
sudo apt install libvulkan1
+
Download and Run AWSIM Demo binary.
+Download AWSIM_v1.2.3.zip
.
Unzip the downloaded file.
+Make the AWSIM.x86_64
file executable.
Rightclick the AWSIM.x86_64
file and check the Execute
checkbox
or execute the command below.
+chmod +x <path to AWSIM folder>/AWSIM.x86_64
+
Launch AWSIM.x86_64
.
+
./<path to AWSIM folder>/AWSIM.x86_64
+
Warning
+It may take some time for the application to start the so please wait until image similar to the one presented below is visible in your application window.
+In order to configure and run the Autoware software with the AWSIM demo, please:
+Download map files (pcd, osm)
and unzip them.
Clone Autoware and move to the directory. +
git clone https://github.com/autowarefoundation/autoware.git
+cd autoware
+
awsim-stable
. NOTE: The latest main
branch is for ROS 2 humble.
+git checkout awsim-stable
+
./setup-dev-env.sh
+
src
directory and clone external dependent repositories into it.
+mkdir src
+vcs import src < autoware.repos
+
source /opt/ros/humble/setup.bash
+rosdep update
+rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
+
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-w"
+
Launch Autoware.
+Warning
+<your mapfile location>
must be changed arbitrarily. When specifying the path the ~
operator cannot be used - please specify absolute full path.
source install/setup.bash
+ros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit map_path:=<your mapfile location>
+
Launch AWSIM and Autoware according to the steps described earlier in this document. +
+The Autoware will automatically set its pose estimation as presented below. +
+Set the navigation goal for the vehicle. + +
+Optionally, you can define an intermediate point through which the vehicle will travel on its way to the destination. + +The generated path can be seen on the image below. +
+Enable self-driving.
+To make the vehicle start navigating please engage it's operation using the command below.
+cd autoware
+source install/setup.bash
+ros2 topic pub /autoware/engage autoware_auto_vehicle_msgs/msg/Engage '{engage: True}' -1
+
The self-driving simulation demo has been successfully launched!
+In case of any problems with running the sample AWSIM binary with Autoware, start with checking our Troubleshooting page with the most common problems.
+Info
+It is advised to checkout the Quick Start Demo tutorial before reading this section.
+This page is a tutorial for setting up a AWSIM Unity project.
+Set the ROS 2 middleware and the localhost only mode in ~/.profile
(or, in ~/.bash_profile
or ~/bash_login
if either of those exists) file:
+
export ROS_LOCALHOST_ONLY=1
+export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
+
Warning
+A system restart is required for these changes to work.
+Set the system optimizations by adding this code to the very bottom of your ~/.bashrc
file:
+
if [ ! -e /tmp/cycloneDDS_configured ]; then
+ sudo sysctl -w net.core.rmem_max=2147483647
+ sudo ip link set lo multicast on
+ touch /tmp/cycloneDDS_configured
+fi
+
Info
+As a result, each time you run the terminal (bash
prompt), your OS will be configured for the best ROS 2 performance. Make sure you open your terminal at least one before running any instance of AWSIM (or Editor running the AWSIM).
AWSIM comes with a standalone flavor of Ros2ForUnity
. This means that, to avoid internal conflicts between different ROS 2 versions, you shouldn't run the Editor or AWSIM binary with ROS 2 sourced.
Warning
+Do not run the AWSIM, Unity Hub, or the Editor with ROS 2 sourced.
+~/.bashrc
or ~/.profile
. Make sure it is not obscuring your working environment:~/.profile
.~/.profile
and ~/.bashrc
.Info
+AWSIM's Unity version is currently 2021.1.7f1
+Follow the steps below to install Unity on your machine:
+UnityHub.AppImage
.
+Install Unity 2021.1.7f1 via UnityHub.
+UnityHub.AppImage
is download and execute the following command:
+./UnityHub.AppImage
+
At this point, your Unity installation process should have started.
+Copy link address
) and add it as a argument for Unity Hub app. An example command:
+./UnityHub.AppImage unityhub://2021.1.7f1/d91830b65d9b
+
After successful installation the version will be available under the Installs
tab in Unity Hub.
+
To open the Unity AWSIM project in Unity Editor:
+Make sure you have the AWSIM repository cloned and ROS 2 is not sourced. +
git clone git@github.com:tier4/AWSIM.git
+
Launch UnityHub. +
./UnityHub.AppImage
+
Info
+If you are launching the Unity Hub from the Ubuntu applications menu (without the terminal), make sure that system optimizations are set. To be sure, run the terminal at least once before running the Unity Hub. This will apply the OS settings.
+Open the project in UnityHub
+Click the Open
button
+
Navigate the directory where the AWSIM repository was cloned to +
+The project should be added to Projects
tab in Unity Hub. To launch the project in Unity Editor simply click the AWSIM
item
+
The project is now ready to use +
+Enter the AWSIM directory (make sure ROS 2 is not sourced). +
cd AWSIM
+
If your Unity Editor is in default location, run the project using the editor command. +
~/Unity/Hub/Editor/2021.1.7f1/Editor/Unity -projectPath .
+
Info
+If your Unity Editor is installed in different location, please adjust the path accordingly.
+Warning
+If you get the safe mode dialog when starting UnityEditor, you may need to install openssl.
+$ wget http://security.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu5.13_amd64.deb
To properly run and use AWSIM project in Unity it is required to download map package which is not included in the repository.
+Download and import Japan_Tokyo_Nishishinjuku.unitypackage_v2
In Unity Editor, from the menu bar at the top, select Assets -> Import Package -> Custom Package...
and navigate the Japan_Tokyo_Nishishinjuku.unitypackage_v2
file.
+
+
Nishishinjuku
package has been successfully imported under Assets/AWSIM/Externals/
directory.
+Info
+The Externals directory is added to the .gitignore
because the map has a large file size and should not be directly uploaded to the repository.
The following steps describe how to run the demo in Unity Editor:
+AutowareSimulation.unity
scene placed under Assets/AWSIM/Scenes/Main
directoryPlay
button placed at the top section of Editor.
+
+Warning
+Running AWSIM with scenario_simulator_v2 is still a prototype, so stable running is not guaranteed.
+Below you can find instructions on how to setup the OpenSCENARIO execution using scenario_simulator_v2 with AWSIM as a simulator +The instruction assumes using the Ubuntu OS.
+Follow Setup Unity Project tutorial
+scenario_simulator_v2
In order to configure the Autoware software with the AWSIM demo, please:
+git clone git@github.com:RobotecAI/autoware-1.git
+cd autoware
+
awsim-ss2-stable
branch
+ git checkout awsim-ss2-stable
+
./setup-dev-env.sh
+
src
directory and clone external dependent repositories into it.
+ mkdir src
+vcs import src < autoware.repos
+vcs import src < simulator.repos
+
Download shinjuku_map.zip
+archive
Unzip it to src/simulator
directory
+
unzip <Download directory>/shinjuku_map.zip -d src/simulator
+
source /opt/ros/humble/setup.bash
+rosdep update
+rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
+
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-w"
+
Download AWSIM_v1.2.2_ss2.zip
& Run
+archive
Launch scenario_test_runner
.
+
source install/setup.bash
+ros2 launch scenario_test_runner scenario_test_runner.launch.py \
+architecture_type:=awf/universe record:=false \
+scenario:='$(find-pkg-share scenario_test_runner)/scenario/sample_awsim.yaml' \
+sensor_model:=awsim_sensor_kit vehicle_model:=sample_vehicle \
+launch_simple_sensor_simulator:=false autoware_launch_file:="e2e_simulator.launch.xml" \
+initialize_duration:=260 port:=8080
+
In case of problems, make sure that the regular demo work well with the Autoware built above. Follow the troubleshooting page there if necessary.
+AWSIM is an open-source simulator made with Unity for autonomous driving research and development. +It is developed for self-driving software like Autoware. This simulator aims to bridge the gap between the virtual and real worlds, enabling users to train and evaluate their autonomous systems in a safe and controlled environment before deploying them on real vehicles. It provides a realistic virtual environment for training, testing, and evaluating various aspects of autonomous driving systems.
+AWSIM simulates a variety of real-world scenarios, with accurate physics and sensor models. It offers a wide range of sensors, such as: Cameras, GNSS, IMU and LiDARs, allowing developers to simulate their autonomous vehicle's interactions with the environment accurately. The simulator also models dynamic objects, such as pedestrians, other vehicles, and traffic lights, making it possible to study interactions and decision-making in complex traffic scenarios. This enables the testing and evaluation of perception, planning, and control algorithms under different sensor configurations and scenarios.
+AWSIM supports a flexible and modular architecture, making it easy to customize and extend its capabilities. Users can modify the current or add a new environment with their own assets and traffic rules to create custom scenarios to suit their specific research needs. This allows for the development and testing of advanced algorithms in diverse driving conditions.
+Because AWSIM was developed mainly to work with Autoware, it supports:
+Prerequisites
+You can read more about the prerequisites and running AWSIM here.
+Connection with Autoware
+Introduction about how the connection between AWSIM and Autoware works can be read here.
+The main objectives of AWSIM are to facilitate research and development in autonomous driving, enable benchmarking of algorithms and systems, and foster collaboration and knowledge exchange within the autonomous driving community. By providing a realistic and accessible platform, AWSIM aims to accelerate the progress and innovation in the field of autonomous driving.
+To describe the architecture of AWSIM, first of all, it is necessary to mention the Scene
. It contains all the objects occurring in the simulation of a specific scenario and their configurations. The default AWSIM scene that is developed to work with Autoware is called AutowareSimulation.
In the scene we can distinguish basics components such like MainCamera
, ClockPublisher
, EventSystem
and Canvas
. A detailed description of the scene and its components can be found here.
Besides the elements mentioned above, the scene contains two more, very important and complex components: Environment
and EgoVehicle
- described below.
Environment
is a component that contains all Visual Elements
that simulate the environment in the scene and those that provide control over them. It also contains two components Directional Light
and Volume
, which ensure suitable lighting for Visual Elements
and simulate weather conditions. A detailed description of these components can be found here.
In addition to Visual Elements
such as buildings or greenery, it contains the entire architecture responsible for traffic. The traffic involves NPCVehicles
that are spawned in the simulation by TrafficSimulator
- using traffic components. A quick overview of the traffic components is provided below, however, you can read their detailed description here.
NPCPedestrians
are also Environment
components, but they are not controlled by TrafficSimulator
. They have added scripts that control their movement - you can read more details here.
TrafficLanes
and StopLines
are elements loaded into Environment
from Lanelet2.
+TrafficLanes
have defined cross-references in such a way as to create routes along the traffic lanes. In addition, each TrafficLane
present at the intersection has specific conditions for yielding priority. TrafficSimulator
uses TrafficLanes
to spawn NPCVehicles
and ensure their movement along these lanes. If some TrafficLanes
ends just before the intersection, then it has a reference to StopLine
. Each StopLine
at the intersection with TrafficLights
has reference to the nearest TrafficLight
. TrafficLights
belong to one of the visual element groups and provide an interface to control visual elements that simulate traffic light sources (bulbs). A single TrafficIntersection
is responsible for controlling all TrafficLights
at one intersection.
+Detailed description of mentioned components is in this section.
EgoVehicle
is a component responsible for simulating an autonomous vehicle moving around the scene. It includes:
Models
and Reflection Probe
components related to its visual appearance. Colliders
providing collisions and the ability to move on roads.Sensors
providing data related to the state of the vehicle, including its position and speed in Environment
and the state of its surroundings.Vehicle
component that simulates dynamics, controls **Wheel
and is responsible for ensuring their movement.Vehicle Ros Input
and Vehicle Keyboard Input
components that have a reference to the Vehicle
object and set control commands in it.Vehicle Visual Effect
provides an interface for Vehicle
to control the lighting.A detailed description of EgoVehicle
and its components mentioned above can be found here. The sensor placement on EgoVehicle
used in the default scene is described here. Details about each of the individual sensors are available in the following sections: Pose
, GNSS
, LiDAR
, IMU
, Camera
, Vehicle Status
.
In AWSIM, the sensors' publishing methods are triggered from the FixedUpdate
function and the output frequency is controlled by:
time += Time.deltaTime;
+var interval = 1.0f / OutputHz;
+interval -= 0.00001f; // Allow for accuracy errors.
+if (time < interval)
+ return;
+time = 0;
+
Since this code runs within the FixedUpdate method, it's essential to note that Time.deltaTime
is equal to Fixed Timestep
, as stated in the Unity Time.deltaTime documentation. Consequently, with each invocation of FixedUpdate, the time
variable in the sensor script will increment by a constant value of Fixed Timestep
, independent of the actual passage of real-time. Additionally, as outlined in the Unity documentation, the FixedUpdate
method might execute multiple times before the Update
method is called, resulting in extremely small time intervals between successive FixedUpdate calls. The diagram below illustrates the mechanism of invoking the FixedUpdate event function.
During each frame (game tick) the following actions are performed:
+Delta Time
is calculated as the difference between the current frame and the previous frame,Delta Time
is added to the Time
(regular time),Fixed Time
(physics time) is behind the Time
,Time
and Fixed Time
is equal to or greater then Fixed Timestep
, the Fixed Update
event function is invoked,FixedUpdate
function were called, the Fixed Timestep
is added to the Fixed Time
,Fixed Time
is behind the Time
,FixedUpdate
function is called again,Time
and the Fixed Time
is smaller than the Fixed Timestep
, the Update
method is called, followed be scene rendering and other Unity event functions.As a consequence, this engine feature may result in unexpected behavior when FPS (Frames Per Second) are unstable or under certain combinations of FPS, Fixed Timestep, and sensor OutputHz
+In case of low frame rates, it is advisable to reduce the Time Scale
of the simulation. The Time Scale value impacts simulation time, which refers to the time that is simulated within the model and might or might not progress at the same rate as real-time. Therefore, by reducing the time scale, the progression of simulation time slows down, allowing the simulation more time to perform its tasks.
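As a minimal sketch of this advice, the time scale can be lowered from a script using Unity's standard Time.timeScale property. The component name and the 0.5 value below are only illustrative assumptions; in AWSIM the time scale is normally set through the simulation configuration rather than a custom component:

```csharp
using UnityEngine;

// Hypothetical helper component for illustration only.
public class TimeScaleExample : MonoBehaviour
{
    [Range(0.0f, 1.0f)]
    public float timeScale = 0.5f; // example value for low frame rate situations

    void Start()
    {
        // Slows down the progression of simulation time relative to real time.
        Time.timeScale = timeScale;
    }
}
```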
Autoware is an open-source software platform specifically designed for autonomous driving applications. It was created to provide a comprehensive framework for developing and testing autonomous vehicle systems. Autoware offers a collection of modules and libraries that assist in various tasks related to perception, planning, and control, making it easier for researchers and developers to build autonomous driving systems.
+The primary purpose of Autoware is to enable the development of self-driving technologies by providing a robust and flexible platform. It aims to accelerate the research and deployment of autonomous vehicles by offering a ready-to-use software stack. Autoware focuses on urban driving scenarios and supports various sensors such as LiDAR, Radars, and Cameras, allowing for perception of the vehicle's surroundings.
+Autoware can be used with AWSIM for several reasons. Firstly, simulators like AWSIM provide a cost-effective and safe environment for testing and validating autonomous driving algorithms before deploying them on real vehicles. Autoware's integration with a simulator allows developers to evaluate and fine-tune their algorithms without the risk of real-world accidents or damage.
+Additionally, simulators enable developers to recreate complex driving scenarios, including difficult conditions or rare events, which may be difficult to replicate with such high fidelity in real-world testing. Autoware's compatibility with AWSIM allows seamless integration between the software and the simulated vehicle, enabling comprehensive testing and validation of autonomous driving capabilities. By utilizing a simulator, Autoware can be extensively tested under various scenarios to ensure its robustness and reliability.
+Connection with Autoware
+An introduction to how the connection between AWSIM and Autoware works can be read here.
+In terms of architecture, Autoware follows a modular approach. It consists of multiple independent modules that communicate with each other through ROS 2. This modular structure allows users to select and combine different modules based on their specific needs and requirements. The software stack comprises multiple components, including perception, localization, planning, and control modules. Here's a brief overview of each module:
+Sensing - acquires data from different sensors mounted on the autonomous vehicle, such as LiDARs, GNSS, IMU and cameras. It pre-processes the received data so that relevant information about the surrounding environment can later be extracted by the Perception module, and information about the vehicle location by the Localization module. +More details here.
+Perception - performs advanced processing of sensor data (LiDARs, cameras) to extract meaningful information about the surrounding environment. It performs tasks like object detection (other vehicles, pedestrians), lane detection, and traffic lights recognition. More details here.
+Localization - fuses data from the Sensing module, such as LiDAR, GNSS, IMU, and odometry data, to accurately estimate the vehicle's position and orientation. More details here.
+Planning - generates a safe and feasible trajectory for the autonomous vehicle based on the information gathered from Perception and Localization. It also takes into account various factors from Map like traffic rules and road conditions. More details here.
+Control - executes the planned trajectory by sending commands to the vehicle's actuators, such as steering, throttle, and braking. It ensures that the vehicle follows the desired trajectory while maintaining safety and stability. More details here.
+Vehicle Interface - is a crucial component that enables communication and interaction between Autoware software system and a vehicle. It facilitates the exchange of control signals and vehicle information necessary for autonomous driving operations. The vehicle interface ensures that Autoware can send commands to the vehicle, such as acceleration, braking, and steering, while also receiving real-time data from the vehicle, such as speed, position, and sensors data. It acts as a bridge, allowing Autoware to seamlessly interface with the specific characteristics and requirements of the vehicle it is operating with. More details here.
+Map - the map module creates and maintains a representation of the environment in which the autonomous vehicle operates. It combines data from Lanelet2 (*.osm
) and PointCloud (*.pcd
) to generate a detailed map. The map contains information about road geometries, lane markings, traffic lights, rules, and other relevant features. Map serves as a crucial reference for planning and decision-making processes. More details here.
Autoware is a powerful open-source software platform for autonomous driving. Its modular architecture, including perception, localization, planning, and control modules, provides a comprehensive framework for developing self-driving vehicles. Autoware combined with AWSIM simulator provides safe testing, validation, and optimization of autonomous driving algorithms in diverse scenarios.
+Run with Autoware
+If you would like to know how to run AWSIM with Autoware, we encourage you to read this section.
+The combination of Autoware and AWSIM provides the opportunity to check the correctness of the vehicle's behavior in various traffic situations. Below are presented some typical features provided by this combination. Moreover, examples of detecting several bad behaviors are included.
+Driving straight through an intersection with priority
+ +Turning at the intersection
+ +Stopping at a red light
+ +Driving on a green light
+ +Stopping at yellow light
+ +Still driving at yellow light (only when it is too late to stop)
+ +Yield right-of-way when turning right
+ +Following the vehicles ahead
+ +Stopping behind the vehicles ahead
+ +Cutting-in to a different traffic lane
+ +Giving right of way to a pedestrian crossing at a red light
+ +Giving way to a pedestrian crossing beyond a crosswalk
+ +Incorrect and dangerous execution of a lane change
+ +Too late detection of a pedestrian entering the roadway
+ +The combination of AWSIM with Autoware is possible thanks to Vehicle Interface and Sensing modules of Autoware architecture. The component responsible for ensuring connection with these modules from the AWSIM side is EgoVehicle
. It has been adapted to the Autoware architecture and provides ROS2 topic-based communication. However, the other essential component is ClockPublisher
, which provides simulation time for Autoware - also published on the topic - more details here.
EgoVehicle
component provides the publication of the current vehicle status through a script working within Vehicle Status
. It provides real-time information such as: current speed, current steering of the wheels or current states of lights - these are outputs from AWSIM.
On the other hand, Vehicle Ros Input
is responsible for providing the values of the outputs from Autoware. It subscribes to the current commands related to the given acceleration, gearbox gear or control of the specified lights.
Execution of the received commands is possible thanks to Vehicle, which sets the appropriate accelerations on the Wheels and controls the visual elements of the vehicle.
The remaining data delivered from AWSIM to Autoware is sensor data, which provides information about the current state of the surrounding environment and the measurements necessary to accurately estimate the EgoVehicle position.
More about EgoVehicle
and its scripts is described in this section.
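To make this command flow concrete, a minimal, hedged sketch of the input pattern is shown below: the ROS 2 callback only caches the latest command, and FixedUpdate applies it to the vehicle. The topic name, message type and CreateSubscription placeholder are illustrative assumptions, not AWSIM's actual interface:

```csharp
using System;
using UnityEngine;

// Hedged sketch of the subscribe-and-apply pattern used by the vehicle input scripts.
// "CreateSubscription" stands in for whichever ROS 2 helper the project exposes.
public class CommandInputSketch : MonoBehaviour
{
    // Hypothetical stand-in for a generated ROS 2 message type.
    public struct AccelerationCommand { public float Acceleration; }

    float latestAcceleration; // cached value, possibly written by a ROS 2 executor thread

    void Start()
    {
        CreateSubscription<AccelerationCommand>("/control/command/acceleration",
            msg => latestAcceleration = msg.Acceleration);
    }

    void FixedUpdate()
    {
        // Here the cached value would be forwarded to the Vehicle component,
        // e.g. vehicle.AccelerationInput = latestAcceleration;
    }

    // Placeholder so the sketch is self-contained; a real implementation would
    // delegate to the project's ROS 2 node wrapper.
    void CreateSubscription<T>(string topic, Action<T> callback) { }
}
```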
Below is a simplified sequence diagram of the information exchange between AWSIM and Autoware. As you can see, the first essential information published from AWSIM is Clock
- the simulation time. Next, EgoVehicle
is spawned and the first sensor data are published, which are used in the process of automatic position initialization on the Autoware side. At the same time, the simulation on the AWSIM side is updated.
Next in the diagram is the main information update loop, in which:

- sensor data are published by EgoVehicle, which are taken into account in the processes carried out in Autoware,
- the outputs received from Autoware are applied and the EgoVehicle update is performed,
- the current status of EgoVehicle
is published.The order of information exchange presented in the diagram is a simplification. The exchange of information takes place through the publish-subscribe model and each data is sent with a predefined frequency.
+ + + + + + + + + + + + + + +Scenario Simulator v2 (SS2) is a scenario testing framework specifically developed for Autoware, an open-source self-driving software platform. It serves as a tool for Autoware developers to conveniently create and execute scenarios across different simulators.
+The primary goal of SS2 is to provide Autoware developers with an efficient means of writing scenarios once and then executing them in multiple simulators. By offering support for different simulators and scenario description formats, the framework ensures flexibility and compatibility.
+The default scenario format in this framework is TIER IV Scenario Format version 2.0. The scenario defined on this format is converted by scenario_test_runner
to openSCENARIO
format, which is then interpreted by openscenario_interpreter
. Based on this interpretation, traffic_simulator
simulates traffic flow in an urban area. Each NPC has a behavior tree and executes commands from the scenario.
The framework uses ZeroMQ Inter-Process communication for seamless interaction between the simulator and the traffic_simulator
. To ensure synchronous operation of the simulators, SS2 utilizes the Request/Reply sockets provided by ZeroMQ and exchanges binarized data through Protocol Buffers. This enables the simulators to run in a synchronized manner, enhancing the accuracy and reliability of scenario testing.
QuickStart Scenario simulator v2 with Autoware
+If you would like to see how SS2 works with Autoware using default build-in simulator - simple_sensor_simulator
(without running AWSIM) - we encourage you to read this tutorial.
AWSIM scene architecture used in combination with SS2 changes considerably compared to the default scene. Here traffic_simulator
from SS2 replaces TrafficSimulator
implementation in AWSIM - for this reason it and its StopLines
, TrafficLanes
and TrafficIntersection
components are removed. Also, NPCPedestrian
and NPCVehicles
are not added as aggregators of NPCs in Environment
.
Instead, their counterparts are added in ScenarioSimulatorConnector
object that is responsible for spawning Entities
of the scenario. Entity
can be: Pedestrian
, Vehicle
, MiscObject
and Ego
.
+EgoEntity
is the equivalent of EgoVehicle
- which is also removed from the default scene. However, it has the same components - it still communicates with Autoware as described here. So it can be considered that EgoVehicle
has not changed and NPCPedestrians
and NPCVehicles
are now controlled directly by the SS2.
A detailed description of the SS2 architecture is available here. A description of the communication via ROS2 between SS2 and Autoware can be found here.
+In the sequence diagram, the part responsible for AWSIM communication with Autoware also remained unchanged. The description available here is the valid description of the reference shown in the diagram below.
+Communication between SS2 and AWSIM takes place via Request-Response messages, and is as follows:
+EgoEntity
(with sensors) is spawned in the configuration defined in the scenario.Entities
(NPCs) defined in the scenario are spawned, the scenario may contain any number of each Entity
type, it may not contain them at all or it may also be any combination of the available ones.EgoEntity
is updated - SS2 gets its status, and then every other Entity
is updated - the status of each NPCs is set according to the scenario. Next, the simulation frame is updated - here the communication between Autoware and AWSIM takes place. The last step of the loop is to update the traffic light state.Entities
spawned on the scene are despawned (including EgoEnity
) Documentation of the commands used in the sequence is available here.
+ + + + + + + + + + + + + + +AWSIM has the following directory structure. Mostly they are grouped by file type.
+AWSIM // root directory.
+ │
+ │
+ ├─Assets // Unity project Assets directory.
+ │ │ // Place external libraries
+ │ │ // under this directory.
+ │ │ // (e.g. RGLUnityPlugin, ROS2ForUnity, etc..)
+ │ │
+ │ │
+ │ ├─AWSIM // Includes assets directly related to AWSIM
+ │ | // (Scripts, Prefabs etc.)
+ │ │ │
+ │ │ │
+ │ │ ├─Externals // Place for large files or
+ │ │ | // external project dependencies
+ │ │ | // (e.g. Ninshinjuku map asset).
+ │ │ │ // The directory is added to `.gitignore`
+ │ │ │
+ │ │ ├─HDRPDefaultResources // Unity HDRP default assets.
+ │ │ │
+ │ │ ├─Materials // Materials used commonly in Project.
+ │ │ │
+ │ │ ├─Models // 3D models
+ │ │ │ │ // Textures and materials for 3D models
+ │ │ │ │ // are also included.
+ │ │ │ │
+ │ │ │ └─<3D Model> // Directory of each 3D model.
+ │ │ │ │
+ │ │ │ │
+ │ │ │ ├─Materials // Materials used in 3D model.
+ │ │ │ │
+ │ │ │ │
+ │ │ │ └─Textures // Textures used in 3D model.
+ │ │ │
+ │ │ │
+ │ │ ├─Prefabs // Prefabs not dependent on a specific scene.
+ │ │ │
+ │ │ ├─Scenes // Scenes
+ │ │ │ │ // Includes scene-specific scripts, etc.
+ │ │ │ │
+ │ │ │ │
+ │ │ │ ├─Main // Scenes used in the simulation.
+ │ │ │ │
+ │ │ │ │
+ │ │ │ └─Samples // Sample Scenes showcasing components.
+ │ │ │
+ │ │ │
+ │ │ └─Scripts // C# scripts.
+ │ │
+ │ │
+ │ ├─RGLUnityPlugin // Robotec GPU LiDAR external Library.
+ │ │ // see: https://github.com/RobotecAI/RobotecGPULidar
+ │ │
+ │ │
+ │ └─Ros2ForUnity // ROS2 communication external Library.
+ │ // see: https://github.com/RobotecAI/ros2-for-unity
+ │
+ ├─Packages // Unity automatically generated directories.
+ ├─ProjectSettings //
+ ├─UserSettings //
+ │
+ │
+ └─docs // AWSIM documentation. Generated using mkdocs.
+ // see: https://www.mkdocs.org/
+
List of external libraries used in AWSIM.
+Library | +Usage | +URL | +
---|---|---|
ros2-for-unity | +ROS2 communication | +https://github.com/RobotecAI/ros2-for-unity | +
Robtoec-GPU-LiDAR | +LiDAR simulation | +https://github.com/RobotecAI/RobotecGPULidar | +
The document presents the rules of branching adopted in the AWSIM development process.
+branch | +explain | +
---|---|
main | +Stable branch. Contains all the latest releases. | +
feature/*** | +Feature implementation branch created from main . After implementation, it is merged into main . |
+
gh-pages | +Documentation hosted on GitHub pages. | +
feature/***
branch from main
.feature/***
branch.feature/***
branch to main
branch. Merge after review.
key | +feature | +
---|---|
D | +Change drive gear. | +
P | +Parking gear. | +
R | +Reverse gear. | +
N | +Neutral gear. | +
1 | +Left turn signal. | +
2 | +Right turn signal. | +
3 | +Hazard. | +
4 | +Turn signal off. | +
Up arrow | +Accelerate. | +
Left arrow | +Steering (Left). | +
Right arrow | +Steering (Right). | +
Down arrow | +Braking. | +
key | +feature | +
---|---|
C | +Camera rotation on/off toggle. | +
Mouse drag | +Rotate Camera angle. | +
Mouse wheel | +Zoom in/out of camera. | +
By default, AWSIM is a standard windowed application. This is core functionality and essential for running the simulation manually on a local PC with an attached display, but it starts to be problematic for CI and testing on the cloud instances.
+Unity provides a few options for running binaries headless, but all of them have two things in common:
+Since AWSIM requires GPU for sensor simulation, it is required to use 3rd party utilities. The best utility for that is Xvfb.
+Headless with Xvfb is only supported on Ubuntu
+Xvfb, which stands for "X Virtual Framebuffer", is a display server implementing the X11 display server protocol. It enables the running of graphical applications without the need for a physical display by creating a virtual frame buffer in memory.
+This tool is transparent to AWSIM and the pipeline in which it runs, and can be conveniently used to mimic headless functionality.
+Xvfb is a standard Ubuntu package and can be installed with apt
:
sudo apt update
+sudo apt install xvfb
+
To run the AWSIM binary, all that is needed is to prefix the command with xvfb-run
:
xvfb-run ./AWSIM.x86_64
+
Please note that xvfb-run
comes with multiple options, like choosing the virtual screen size or color palette. Please see the manual for all the options.
In the AWSIM Unity project there is one main scene (AutowareSimulation) and several additional ones that can be helpful during development. +This section describes the purpose of each scene in the project.
++
+The AutowareSimulation
scene is the main scene that is designed to run the AWSIM simulation together with Autoware.
+It allows for effortless operation, just run this scene, run Autoware with the correct map file and everything should work right out of the box.
The PointCloudMapping
is a scene that is designed to create a point cloud using the Unity world.
+Using the RGLUnityPlugin and prefab Environment
- on which there are models with Meshes
- we are able to obtain a *.pcd
file of the simulated world.
Scene SensorConfig
was developed to perform a quick test of sensors added to the EgoVehicle
prefab.
+Replace the Lexus
prefab with a vehicle prefab you developed and check whether all data that should be published is present, whether it is on the appropriate topics and whether the data is correct.
The NPCVehicleSample
was developed to conduct a quick test of the developed vehicle.
+Replace the taxi prefab with a vehicle prefab you developed (EgoVehicle
or NPCVehicle
) and check whether the basic things are configured correctly.
+The description of how to develop your own vehicle and add it to the project is in this section.
The NPCPedestrianSample
was developed to conduct a quick test of the developed pedestrian.
+Replace the NPC prefab in NPC Pedestrian Test
script with a prefab you developed and check whether the basic things are configured correctly.
The TrafficIntersectionSample
was developed to conduct a quick test of the developed traffic intersection.
+Replace the intersection configuration with your own and check whether it works correctly.
+You can add additional groups of lights and create much larger, more complex sequences.
+A description of how to configure your own traffic intersection is in this section.
The TrafficLightSample
was developed to conduct a quick test of a developed traffic lights model in cooperation with the script controlling it.
+Replace the lights and configuration with your own and check whether it works correctly.
The RandomTrafficYielding
was developed to conduct a tests of a developed yielding rules at the single intersection.
The RandomTrafficYielding
was developed to conduct a tests of a developed yielding rules with multiple vehicles moving around the entire environment.
+
The scenes described below are used for tests related to the external library RGLUnityPlugin
(RGL
) - you can read more about it in this section.
The scene LidarSceneDevelopSample
can be used as a complete, minimalistic example of how to setup RGL
.
+It contains RGLSceneManager
component, four lidars, and an environment composed of floor and walls.
The scene LidarSkinnedStress
can be used to test the performance of RGL
.
+E.g. how performance is affected when using Regular Meshes
compared to Skinned Meshes
.
+The scene contains a large number of animated models that require meshes to be updated every frame, thus requiring more resources (CPU and data exchange with GPU).
The scene LidarDisablingTest
can be used to test RGL
performance with similar objects but with different configurations.
+It allows you to check whether RGL
works correctly when various components that can be sources of Meshes
are disabled (Colliders
, Regular Meshes
, Skinned Meshes
, ...).
The LidarInstanceSegmentationDemo
is a demo scene for instance segmentation feature. It contains a set of GameObjects with ID assigned and sample lidar that publishes output to the ROS2 topic. The GameObjects are grouped to present different methods to assign IDs.
To run demo scene:
+Assets/AWSIM/Scenes/Samples/LidarInstanceSegmentationDemo.unity
rviz2
rviz2
as follows:world
,lidar/instance_id
,QoS
as in the screen above.enitity_id
,Autocompute
intensity and set min to 0
and max to 50
.The scene RadarSceneDevelopSample
demonstrates the use of the RadarSensor
component along with LiDARSensor
.
+LiDAR hit points are shown as small red points, and radar hit points are shown as bigger blue boxes.
The AWSIM simulator utilizes the Unity Test Framework (UTS) for comprehensive testing procedures.
+The Unity Test Framework is a testing framework provided by Unity Technologies for testing Unity applications and games. It is primarily used for automated testing of code written in C# within Unity projects. The framework allows developers to write test scripts to verify the behavior of their code, including unit tests for individual components and integration tests for multiple components working together.
+It provides various assertion methods to check expected outcomes and supports organizing tests into test suites for better management. The UTS is integrated directly into the Unity Editor, making it convenient for developers to run tests and analyze results within their development environment.
+UTS uses an integration of NUnit library and looks for tests inside any assembly that references NUnit (detailed info can be found here). AWSIM assembly is explicitely defined inside the project, so it can be referencered correctly by the test module.
+ +The framework enables testing code in both Edit Mode
and Play Mode
.
+ | Play Mode | +Edit Mode | +
---|---|---|
Environment | +Run within the Unity Editor and require entering the play mode. | +Run within the Unity Editor, without entering the play mode. | +
Speed | +Can be more complex and take more time to set up and execute due to the need for scene loading and game simulation. | +They tend to be faster to execute compared to play mode tests since they don't require scene loading or game simulation. | +
Use Cases | +They allow testing components that rely on Unity's runtime environment, such as physics, animations, or MonoBehaviour lifecycle methods. Play mode tests also provide a more realistic testing environment by simulating gameplay and interactions between game objects. | +Are suitable for testing editor scripts, pure C# logic, or components that don't rely heavily on Unity's runtime environment. | +
In summary, Edit Mode
tests are well-suited for testing isolated code components and editor-related functionality, while Play Mode
tests are more appropriate for integration testing within the runtime environment of a Unity application or game.
Test
to the test class. (ex. ImuSensorTest
)To access the Unity Test Framework in the Unity Editor, open the Test Runner
window; go to Window > General > Test Runner
.
In the Test Runner window, you'll see two tabs: Play Mode
and Edit Mode
. Choose the appropriate mode depending on the type of tests you want to run:
Play Mode
for AWSIM integration tests that require the Unity runtime environment.Edit Mode
for AWSIM unit tests that don't require entering play mode.After selecting the desired test mode, click the Run All
button to execute all AWSIM tests.
Run Selected
button.As the tests run, the Test Runner window will display the progress and results. Green checkmarks indicate passed tests, while red crosses indicate failed tests.
+Info
+In Play Mode
, Unity will load test scenes during the process.
All tests should pass.
+Info
+You can click on individual tests to view more details, including any log messages or assertions that failed.
+Vehicle ROS2 Input Test:
+GearChanging
: Using the ROS 2 interface - changing gear from PARK
to DRIVE
and moving forward, then changing gear from DRIVE
to PARK
and REVERSE
and moving backward. Checking the expected vehicle positions.TurningLeft
: Using the ROS 2 interface - moving the Ego vehicle and turning left. Check if the goal position is valid.TurningRight
: Using the ROS 2 interface - moving the Ego vehicle and turning right. Check if the goal position is valid.Sensors Test:
+GNSS
: Using the ROS 2 interface - verified the number of generated messages based on the sensor frequency.IMU
: Using the ROS 2 interface - verified the number of generated messages based on the sensor frequency.LiDAR
: Using the ROS 2 interface - verified the number of generated messages based on the sensor frequency.Radar
: Using the ROS 2 interface - verified the number of generated messages based on the sensor frequency.Traffic Test:
+Despawn
: Assuring that the NPCs despawn correctly when reaching the end of the lanelet.RandomTrafficSpawn
: Checking the correctness of spawning multiple NPCs on multiple lanes.SeedSpawn
: Determinism of spawning different NPC prefabs based on different seed value.Ego:
+GearsInput
: Gear shift translations between ROS 2 and Unity interface.TurnSignalInput
: Turn signal translations between ROS 2 and Unity interface.Sensors:
+GNSS
: GNSS data checks for various MGRS offsets and sensor positions.IMU
: IMU data check for gravity parameter set to true or false.Traffic:
+TrafficPrimitives
: Stop line center point, traffic lane positions and traffic lane with stop line interface.