add dataset replay function #212

Open · wants to merge 3 commits into base: main
92 changes: 92 additions & 0 deletions README_replay.md
@@ -0,0 +1,92 @@

OpenCDA Replay
==============
### Motivations:

* The open-source dataset [OPV2V](https://mobility-lab.seas.ucla.edu/opv2v/) offers comprehensive simulation data, e.g., RGB images, lidar point clouds, and bounding boxes

* The **OpenCOOD** framework provides a dataset replay function and allows users to add sensors to vehicles and road-side units to generate images and point clouds, but offers **no bounding box generation**
* The **OpenCDA** framework can generate bounding boxes using ML detection libraries, but **has no dataset replay function**
* This work extends OpenCDA by integrating the replay function from OpenCOOD

### Remarks

- Carla provides numerous [pre-made maps](https://carla.readthedocs.io/en/latest/core_map/) for public benchmarking; I chose 'Town05' for its urban environment, which better resembles my use cases. More maps can be included by adjusting line 90 in opencda_replay.py:
```python
target_towns = ['Town05']
```

- My project primarily focuses on analyzing lidar placement for road-side units (RSUs), thus **only one vehicle actor** is used for simplicity.

After each simulation in Carla, each actor's recorded data is stored under a randomly assigned actor ID:
- Vehicle actors have *positive* IDs
- RSU actors have *negative* IDs (following the formulation of V2XSet)

Note that the original OPV2V dataset contains recordings of multiple (i.e., >1) vehicle actors; the one with the smallest *positive* ID is chosen as the ego vehicle, while the others are treated as background vehicles (see the sketch after this list)

- To study RSU lidar configurations, I needed to record and replay the dataset in a completely consistent environment while adjusting only the lidar placement, i.e., replicating the same traffic conditions with the same cars and the same trajectories. I added the 'base_dataset' mode to enable this.
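
A minimal sketch of the ID convention above (the helper below is hypothetical, not part of the PR):

```python
# Split recorded actor IDs according to the convention above:
# vehicle actors have positive IDs, RSUs negative ones (V2XSet convention);
# the smallest positive ID is taken as the ego vehicle.
def split_actors(actor_ids):
    vehicles = sorted(i for i in actor_ids if i > 0)
    rsus = [i for i in actor_ids if i < 0]
    ego, background = vehicles[0], vehicles[1:]
    return ego, background, rsus

ego, background, rsus = split_actors([650, -1, 641, -2, 703])
print(ego, background, rsus)  # 641 [650, 703] [-1, -2]
```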

### Workflows:

Prerequisite: run the Carla simulator with 'make launch' and start the simulation by pressing Play -> 'Simulate'

1. Generate bird's eye view images of each scene
```bash
python opencda_replay.py -t logreplay_bev -r ROOT_DIR -o OUTPUT_DIR_BEV
```

Example (scene: test/2021_08_23_12_58_19):
<img src="./docs/md_files_replay/bev_000163_camera0.png" alt="BEV" width="500">


2. Select desired intersections and log lidar locations for analysis

- Specifically, lidars in the Carla simulator require position arguments (x, y, z, pitch, yaw, roll), which need to be recorded manually
- *Efforts attempted*: I wrote scripts to extract traffic-light information and use it to automatically assign lidar coordinates. However, in the pre-defined Carla maps, some intersections with traffic lights may have no traffic at all (i.e., useless to record), while other intersections without traffic lights are crowded. Therefore, for Carla's provided maps, manual inspection is needed

```bash
python utils/print_spectactor_coord.py
```

This script (credit: [@damascenodiego/print_spectactor_coord.py](https://gist.github.com/damascenodiego/c53ae6960f1ebdcc25d41384392b6500)) prints the spectator's current coordinates in real time while the user roams around the simulation world

Example:

<img src="./docs/md_files_replay/rsu.png" alt="rsu" width="500">

See example results at 'OpenCDA/opencda/scenario_testing/utils/rsu.json'
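
For reference, the linked gist boils down to polling the spectator's transform through the standard Carla Python API; a minimal re-sketch (host, port, and polling interval are assumptions):

```python
import time
import carla

# Connect to a running Carla server (host/port assumed to be the defaults).
client = carla.Client('localhost', 2000)
client.set_timeout(5.0)
world = client.get_world()

# The spectator is the free-roaming camera of the simulator window; its
# transform provides exactly the (x, y, z, pitch, yaw, roll) needed for RSU lidars.
spectator = world.get_spectator()
while True:
    t = spectator.get_transform()
    print(f'x={t.location.x:.2f} y={t.location.y:.2f} z={t.location.z:.2f} '
          f'pitch={t.rotation.pitch:.2f} yaw={t.rotation.yaw:.2f} roll={t.rotation.roll:.2f}')
    time.sleep(0.5)
```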

3. Replay the original OPV2V dataset to generate the base dataset

```bash
python opencda_replay.py -t logreplay -b -r ROOT_DIR -o OUTPUT_DIR_BASE
# -b / --base_dataset: flag indicating that the original OPV2V dataset is being replayed to generate the base dataset
```

Example: pitch = -20

<img src="./docs/md_files_replay/base_-20_000069_camera0.png" alt="base" width="500">

See the automation bash script for reference:
> OpenCDA/automate_opencda.sh

4. Replay the base dataset to generate variant datasets

Examples of 'variant datasets' include different lidar placements with varying **(x, y, z) coordinates** and **pitch angles**

\* Currently, **only the pitch angle** can be varied (see line 75 in opencda_replay.py); other adjustment parameters remain to be extended

```bash
python opencda_replay.py -t logreplay -r OUTPUT_DIR_BASE -o OUTPUT_DIR_VAR
```
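
Under the hood, a pitch variation amounts to spawning the RSU lidar with a different carla.Rotation; a minimal sketch (coordinates and blueprint settings are illustrative, not the PR's exact mechanism):

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Illustrative RSU position: (x, y, z) stay fixed, only the pitch varies.
x, y, z = 100.0, 50.0, 6.0
blueprint = world.get_blueprint_library().find('sensor.lidar.ray_cast')

for pitch in (0, -10, -20, -30, -40, -50):
    spawn_point = carla.Transform(carla.Location(x=x, y=y, z=z),
                                  carla.Rotation(pitch=pitch))
    lidar = world.spawn_actor(blueprint, spawn_point)  # RSU lidar: no parent actor
    # ... replay the scene and dump point clouds here ...
    lidar.destroy()
```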

Example: pitch = -40

<img src="./docs/md_files_replay/var_-40_000069_camera0.png" alt="variant" width="500">

Note: the original dataset **did not record background vehicle information** (e.g., car model and color). Therefore, we first generate the base dataset; this information then becomes available for extensive replication
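
For orientation, the per-frame metadata written by data_dumper.py appears to follow the OPV2V yaml format; a rough sketch (field names as in data_dumper.py, values and the actor ID are illustrative assumptions):

```python
# Hypothetical sketch of one dumped frame's yaml content:
dump_yml = {
    'lidar_pose': [100.0, 50.0, 6.0, 0.0, 90.0, -20.0],  # x, y, z, roll, yaw, pitch
    'vehicles': {
        641: {                             # positive Carla actor ID -> vehicle
            'angle': [0.0, 90.0, 0.0],     # roll, yaw, pitch
            'center': [0.0, 0.0, 0.9],     # bounding-box center offset
            'extent': [2.45, 1.06, 0.75],  # bounding-box half extents
            'location': [120.3, 48.7, 0.03],
            'speed': 25.2,
        },
    },
}
```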
46 changes: 46 additions & 0 deletions automate_opencda.sh
@@ -0,0 +1,46 @@
#!/bin/bash
pitch=("0" "-10" "-20" "-30" "-40" "-50")
replay=("train" "validate" "test")
output=("train" "train" "test")
lidar="livox"
height="6"
#
# replay base for bird-eyed view (BEV) preview
#
root="/media/hdd1/opv2v/"
outp="/media/hdd1/opv2v/opencda_dump"
p=${pitch[0]}  # assumption: pick one pitch suffix for the BEV preview input (p was left unset in the original script)
for ((i=0; i<${#replay[@]}; i++))
do
python opencda_replay.py -t logreplay_bev -r ${root}/${replay[$i]}_${lidar}_${height}_${p} -o ${outp}/${output[$i]}_bev_-90
done
# #
# # replay original for base
# #
# root="/media/hdd1/opv2v/"
# outp="/media/hdd1/opv2v/opencda_dump"
# for p in ${pitch[*]}
# do
# for ((i=0; i<${#replay[@]}; i++))
# do
# python opencda_replay.py -t logreplay -r ${root}/${replay[$i]}_${lidar}_${height}_${p} -o ${outp}/${output[$i]}_${lidar}_${height}_${p}
# done
# done
# #
# # replay base for variations
# #
# root="/media/hdd1/opv2v/opencda_dump"
# outp="/media/hdd1/opv2v/opencda_dump"
# for p in ${pitch[*]}
# do
# for ((i=0; i<${#replay[@]}; i++))
# do
# python opencda_replay.py -t logreplay_metric -r ${root}/${replay[$i]}_${lidar}_${height}_${p} -o ${outp}/metric_${output[$i]}_${lidar}_${height}_${p}
# done
# done
# #
# # compute metrics for each scenes in test set
# #
# outp="/media/hdd1/opv2v/opencda_dump"
# for p in ${pitch[*]}
# do
# python ComputeDNUC.py --dir ${outp}/metric_test_${lidar}_${height}_${p}
# done
Binary file added docs/md_files_replay/base_-20_000069_camera0.png
Binary file added docs/md_files_replay/bev_000163_camera0.png
Binary file added docs/md_files_replay/metric.png
Binary file added docs/md_files_replay/rsu.png
Binary file added docs/md_files_replay/var_-30_000069_camera0.png
Binary file added docs/md_files_replay/var_-40_000069_camera0.png
129 changes: 78 additions & 51 deletions opencda/core/common/data_dumper.py
@@ -15,58 +15,19 @@
from opencda.core.common.misc import get_speed
from opencda.core.sensing.perception import sensor_transformation as st
from opencda.scenario_testing.utils.yaml_utils import save_yaml
import carla


class DataDumper(object):
"""
Data dumper class to save data in local disk.

Parameters
----------
perception_manager : opencda object
The perception manager contains rgb camera data and lidar data.

vehicle_id : int
The carla.Vehicle id.

save_time : str
The timestamp at the beginning of the simulation.

Attributes
----------
rgb_camera : list
A list of opencda.CameraSensor that containing all rgb sensor data
of the managed vehicle.

lidar : opencda object
The lidar manager from perception manager.

save_parent_folder : str
The parent folder to save all data related to a specific vehicle.

count : int
Used to count how many steps have been executed. We dump data
every 10 steps.

"""

def __init__(self,
perception_manager,
vehicle_id,
save_time):
save_path):

self.rgb_camera = perception_manager.rgb_camera
self.lidar = perception_manager.lidar

self.save_time = save_time
self.vehicle_id = vehicle_id

current_path = os.path.dirname(os.path.realpath(__file__))
self.save_parent_folder = \
os.path.join(current_path,
'../../../data_dumping',
save_time,
str(self.vehicle_id))
self.save_parent_folder = save_path

if not os.path.exists(self.save_parent_folder):
os.makedirs(self.save_parent_folder)
@@ -91,51 +52,107 @@ def run_step(self,
behavior_agent : opencda object
OpenCDA behavior agent.
"""
self.count += 1
# self.count += 1 ###

# we ignore the first 60 steps
if self.count < 60:
return

# 10hz
if self.count % 2 != 0:
return
# if self.count % 2 != 0:
# return

# print('saving', self.count)
self.save_rgb_image(self.count)
# self.save_lidar_points()
self.save_yaml_file(perception_manager,
localization_manager,
behavior_agent,
self.count)
self.save_lidar_points(self.count)

def save_rgb_image(self, count):
"""
Save camera rgb images to disk.
"""
if not self.rgb_camera:
return

for (i, camera) in enumerate(self.rgb_camera):

frame = camera.frame
image = camera.image

image_name = '%06d' % count + '_' + 'camera%d' % i + '.png'

cv2.imwrite(os.path.join(self.save_parent_folder, image_name),
image)

# print('saved', self.perception_manager.id, os.path.join(self.save_parent_folder, image_name))###debug
# break # to save front only

# def reverse_transform(self, points, p, y, r):

# tran = carla.Transform(carla.Rotation(pitch=p, yaw=y, roll=r))
# rot = tran.get_matrix()[:3, :3]
# return (rot @ points.T).T

# def reverse_transform(self, points, trans):

# x, y, z = trans.location.x, trans.location.y, trans.location.z
# p, y, r = trans.rotation.pitch, trans.rotation.yaw, trans.rotation.roll
# trans = carla.Transform(carla.Location(x,y,z), carla.Rotation(0,y,r))
# rot = np.array(trans.get_matrix())

# points = np.append(points, np.ones((points.shape[0], 1)), axis=1)
# points = (rot @ points.T).T

# return points[:, :-1]

def save_lidar_points(self):
def reverse_transform(self, points, trans):
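# Revert only the lidar's pitch so the dumped points end up in an
# unpitched sensor frame; yaw and roll are deliberately left unchanged.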

x, y, z = trans.location.x, trans.location.y, trans.location.z
pitch, yaw, roll = trans.rotation.pitch, trans.rotation.yaw, trans.rotation.roll  # renamed to avoid shadowing y
# trans = carla.Transform(carla.Location(x,y,z), carla.Rotation(pitch,yaw,roll))
trans = carla.Transform(carla.Location(0,0,0), carla.Rotation(pitch,0,0))
rot = np.linalg.inv(trans.get_matrix())
# rot = trans.get_matrix()

points = np.append(points, np.ones((points.shape[0], 1)), axis=1)
points = (rot @ points.T).T

return points[:, :-1]

def save_lidar_points(self, count):
"""
Save 3D lidar points to disk.
"""
point_cloud = self.lidar.data
frame = self.lidar.frame
frame = count # self.lidar.frame

point_xyz = point_cloud[:, :-1]
# point_xyz[:, 1] = -point_xyz[:, 1] # horizontal flip
point_intensity = point_cloud[:, -1]
point_intensity = np.c_[
point_intensity,
np.zeros_like(point_intensity),
np.zeros_like(point_intensity)
]

# x_rot = y_rot = z_rot = 0
roll_rot = pitch_rot = yaw_rot = 0
if self.lidar.spawn_point.rotation.pitch != 0:
# y_rot = np.radians(-self.lidar.spawn_point.rotation.pitch)
pitch_rot = -self.lidar.spawn_point.rotation.pitch
# if self.lidar.spawn_point.rotation.yaw != 0:
# z_rot = np.radians(-self.lidar.spawn_point.rotation.yaw)
# yaw_rot = -self.lidar.spawn_point.rotation.yaw

if roll_rot != 0 or pitch_rot != 0 or yaw_rot != 0:
# rot = o3d.geometry.get_rotation_matrix_from_axis_angle([x_rot, y_rot, z_rot])
# point_xyz = (rot @ point_xyz.T).T
# o3d_pcd.rotate(rot)
# point_xyz = self.reverse_transform(point_xyz, pitch_rot, yaw_rot, roll_rot)
pass
# point_xyz = self.reverse_transform(point_xyz, self.lidar.spawn_point)

o3d_pcd = o3d.geometry.PointCloud()
o3d_pcd.points = o3d.utility.Vector3dVector(point_xyz)
o3d_pcd.colors = o3d.utility.Vector3dVector(point_intensity)
Expand All @@ -146,6 +163,7 @@ def save_lidar_points(self):
pcd_name),
pointcloud=o3d_pcd,
write_ascii=True)
# print('saved', self.perception_manager.id, os.path.join(self.save_parent_folder,pcd_name))###debug

def save_yaml_file(self,
perception_manager,
@@ -175,9 +193,16 @@ def save_yaml_file(self,
# dump obstacle vehicles first
objects = perception_manager.objects
vehicle_list = objects['vehicles']
# print('\nperception_manager vehicles', vehicle_list)

for veh in vehicle_list:
if perception_manager.id == veh.carla_id:
print('perception_manager.id', perception_manager.id)
print('veh.carla_id', veh.carla_id)
continue ###

veh_carla_id = veh.carla_id
# veh_carla_id = str(veh.attributes['role_name'])
veh_pos = veh.get_transform()
veh_bbx = veh.bounding_box
veh_speed = get_speed(veh)
Expand All @@ -204,6 +229,7 @@ def save_yaml_file(self,
}})

dump_yml.update({'vehicles': vehicle_dict})
# print('perception_manager vehicle_dict', vehicle_dict)

# dump ego pose and speed, if vehicle does not exist, then it is
# a rsu(road side unit).
@@ -238,7 +264,7 @@ def save_yaml_file(self,
lidar_transformation.rotation.roll,
lidar_transformation.rotation.yaw,
lidar_transformation.rotation.pitch]})

# print('dump_yml lidar pose', lidar_transformation);print(dump_yml['lidar_pose'])
# dump camera sensor coordinates under world coordinate system
for (i, camera) in enumerate(self.rgb_camera):
camera_param = {}
@@ -292,6 +318,7 @@ def save_yaml_file(self,
yml_name)

save_yaml(dump_yml, save_path)
# print('saved', self.perception_manager.id, save_path)###debug

@staticmethod
def matrix2list(matrix):