diff --git a/article.md b/article.md new file mode 100644 index 0000000..66abf9f --- /dev/null +++ b/article.md @@ -0,0 +1,255 @@ +With policy-controlled robots, it's always difficult to ensure that an updated policy can still solve all the challenges the previous model could. This is where automated testing becomes important. In this post, we implement tests for a Unitree Go2 robot controlled by an RL policy to execute waypoint missions. + +We will focus only on testing here, but you can find the full project source at [https://github.com/art-e-fact/go2-example](https://github.com/art-e-fact/go2-example). Feel free to jump right in. We tried to make spinning up the demo and getting the robot walking as quick and simple as possible. + +TODO: video here + +The stack we will use: + +- **dora-rs** is a Rust-based robotics framework. Its simplicity, low latency, and ease of integration make it a great choice for many robotics projects. +- **Isaac Sim** allows us to accurately simulate the robot in photo-realistic environments and run the same RL policy in simulation and on the real robot. +- **pytest** combines well with the Python API of dora-rs. +- **Artefacts** is perfect for test automation and for reviewing test results with our teammates. + +# Unit testing nodes + +## Simple input-output test + +_See the full source_ [_here_](https://github.com/art-e-fact/go2-example/blob/aab4c57dcb5c86753f30870908a15104bd399780/nodes/policy_controller/tests/test_policy_controller.py) + +First, we want to know whether the policy will calculate the next action without error. + +The `policy` node takes: + +- High-level navigation commands (x, y velocity, and rotation) +- Observations from the robot (velocity, joint positions, etc.) +- Clock time, which tells the node when the next action should be generated + +And returns the low-level joint targets for the robot.
+ + + +This is what a test case looks like: + +```python +def test_generates_commands(): + """Check if the policy runs and generates joint commands.""" + with patch("policy_controller.main.Node") as MockNode: + mock_node_instance = MagicMock() + MockNode.return_value = mock_node_instance + + # Create mock inputs + command_2d = Twist2D(linear_x=0.5, linear_y=0.0, angular_z=0.0) + observations = Observations( + lin_vel=np.zeros(3), + ang_vel=np.zeros(3), + gravity=np.array([0.0, 0.0, -9.81]), + joint_positions=np.zeros(12), + joint_velocities=np.zeros(12), + height_scan=np.zeros(154), + ) + clock = Timestamp.now() + + # The mocked node will yield these events when iterated. + mock_node_instance.__iter__.return_value = [ + {"type": "INPUT", "id": "command_2d", "value": command_2d.to_arrow()}, + {"type": "INPUT", "id": "clock", "value": clock.to_arrow()}, + { + "type": "INPUT", + "id": "observations", + "value": observations.to_arrow(), + }, + ] + + from policy_controller.main import main + + main() + + # Check that send_output was called with joint_commands + mock_node_instance.send_output.assert_called() + args, _ = mock_node_instance.send_output.call_args + assert args[0] == "joint_commands" + joint_commands = JointCommands.from_arrow(args[1]) + assert joint_commands.positions is not None + assert len(joint_commands.positions) == 12 +``` + + + +The most important bits: + +- We mock dora-rs `Node` with a `MagicMock` so we can send inputs directly to the node, and assert that it outputs the correct values. +- The input data can be almost anything as long as it's in the correct format. We will only test the shape of the output. +- We can imitate the input data flow by mocking the node's iter values in `mock_node_instance.__iter__.return_value` +- After the mock is ready, we can import the policy node and execute its `main()` function. +- The policy node will call the `send_output` function on the mocked dora-rs node. 
This allows us to assert that the expected output was generated using generic pytest assertion tools. +- We only validate the format of the output. The integration tests will validate if the robot is behaving correctly. + + + +## Testing multistep exchanges between nodes + +_See the full source_ [_here_](https://github.com/art-e-fact/go2-example/blob/aab4c57dcb5c86753f30870908a15104bd399780/nodes/navigator/tests/test_navigator.py) + +The `navigation` node takes: + +- The pose of the robot. For this demo, we use the ground truth data from the simulator. +- The waypoint positions that the robot should reach. +- A tick message to trigger generating a new output. + +And the output will be the high-level navigation command that the `policy` node will consume. + +```python +def test_navigator_logic_stateful(): + """Test if sends command output after every tick input.""" + # A queue to send events to the node + event_queue = queue.Queue() + + with patch("navigator.main.Node") as MockNode: + mock_node_instance = MagicMock() + MockNode.return_value = mock_node_instance + mock_node_instance.__iter__.return_value = iter(event_queue.get, None) + + from navigator.main import main + + # Run the main function in a separate thread + main_thread = threading.Thread(target=main, daemon=True) + main_thread.start() + + # Set initial robot pose and waypoints + robot_pose = Transform.from_position_and_quaternion( + np.array([0.0, 0.0, 0.0]), + np.array([1.0, 0.0, 0.0, 0.0]), # scalar-first (w, x, y, z) + ) + waypoints = WaypointList( + [ + Waypoint( + transform=Transform.from_position_and_quaternion( + np.array([10.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0]) + ), + status=WaypointStatus.ACTIVE, + ) + ] + ) + event_queue.put( + {"type": "INPUT", "id": "robot_pose", "value": robot_pose.to_arrow()} + ) + event_queue.put( + {"type": "INPUT", "id": "waypoints", "value": waypoints.to_arrow()} + ) + + # Send a tick to trigger a command calculation + event_queue.put({"type": "INPUT", "id": 
"tick", "value": pa.array([])}) + + threading.Event().wait(0.1) # Wait for async operations + mock_node_instance.send_output.assert_called_once() + args, _ = mock_node_instance.send_output.call_args + assert args[0] == "command_2d" + command_output = Twist2D.from_arrow(args[1]) + assert command_output.linear_x > 0, ( + f"Expected to go towards X direction, got {command_output}" + ) + assert command_output.angular_z == 0.0, ( + f"Expected to go straight, got {command_output}" + ) + + # Reset mock to check for the next call + mock_node_instance.send_output.reset_mock() + + # Set a new robot pose + new_robot_pose = Transform.from_position_and_quaternion( + np.array([10.0, 10.0, 0.0]), + np.array([0.707, 0.0, 0.0, 0.707]), # Rotated 90 degrees (facing +y) + ) + event_queue.put( + {"type": "INPUT", "id": "robot_pose", "value": new_robot_pose.to_arrow()} + ) + + # Send another tick + event_queue.put({"type": "INPUT", "id": "tick", "value": pa.array([])}) + + # Check the new output + threading.Event().wait(0.1) # Wait for async operations + mock_node_instance.send_output.assert_called_once() + args, _ = mock_node_instance.send_output.call_args + assert args[0] == "command_2d" + command_output = Twist2D.from_arrow(args[1]) + assert command_output.angular_z > 0.0, f"Expected to turn, got {command_output}" + + event_queue.put(None) # Signal the iterator to end + main_thread.join(timeout=1) # Wait for the thread to finish +``` + +The main differences compared to the one-step test: + +- The node runs in a separate thread. This allows us to let the node spin and wait for inputs as it would during normal execution. +- The inputs are passed to a `Queue` which is shared with the node. 
We can use it as the mocked input stream by converting it into an iterator with ``iter(event_queue.get, None)`` +- To wait for the node to finish generating the output inside the thread, we wait in the test with `threading.Event().wait(0.1)` +- After validating the first output, we continue sending new inputs and testing the newly generated outputs. +- Instead of asserting the exact output values, we only check if the node sends the robot in the correct general direction. + +# Integration testing + +Dora is flexible about how we start the processes implementing the nodes. For example, we can use `pytest` to start one of our nodes. This allows us to output messages to the rest of the system inside test-cases and validate the received inputs. + +The `tester` node is implemented as a pytest test file. For the full source, see [`test_waypoints_poses.py`](https://github.com/art-e-fact/go2-example/blob/49b654743439f872332630a8627bdcae776cd85e/nodes/tester/tester/test_waypoints_poses.py) and [`conftest.py`](https://github.com/art-e-fact/go2-example/blob/49b654743439f872332630a8627bdcae776cd85e/nodes/tester/tester/conftest.py) . Here we will only discuss the key parts: + +- The dora-rs node is provided to the tests as a fixture. This allows us to reuse the same node between test-cases. + +```python +@pytest.fixture(scope="session") +def session_node(): + """Create a Dora node for the full test session.""" + node = TestNode() + yield node + # Send the stop signal to tear down the rest of the nodes + node.send_output("stop", pa.array([True])) +``` + +- The simulation is executed at different speeds on faster and slower machines. To make the test timeouts hardware-agnostic, we need to use simulation time in the tests. 
See the full implementation in [`conftest.py`](https://github.com/art-e-fact/go2-example/blob/49b654743439f872332630a8627bdcae776cd85e/nodes/tester/tester/conftest.py) + +```python +@pytest.fixture(scope="function") +def node(request, session_node: TestNode): + """Provide TestNode that will reset the timeout before and after each test. + + Use `@pytest.mark.clock_timeout(seconds)` to set timeout per test. + """ + # Reset timeout before and after each test + session_node.reset_timeout() + # Try to read the clock_timeout marker from the test function + clock_timeout = request.node.get_closest_marker("clock_timeout") + if clock_timeout: + session_node.set_timeout(clock_timeout.args[0]) + yield session_node + session_node.reset_timeout() +``` + +- Isaac Sim can take a long time to start. Instead of restarting it for each scenario, we send a `load_scene` message to the `simulation` node. This will trigger the `simulation` node to reset the world and load a new scene in seconds. + +```python +node.send_output( + "load_scene", msgs.SceneInfo(name=scene, difficulty=difficulty).to_arrow() +) +``` + +We can use the built-in parametrization feature of pytest to validate the robot behavior in different configurations. + +```python +@pytest.mark.parametrize("difficulty", [0.1, 0.7, 1.1]) +@pytest.mark.clock_timeout(30) +def test_completes_waypoint_mission_with_variable_height_steps(node, difficulty: float): + """Test that the waypoint mission completes successfully. + + The pyramid steps height is configured via difficulty. 
+ """ + run_waypoint_mission_test(node, scene="generated_pyramid", difficulty=difficulty) + + +@pytest.mark.parametrize("scene", ["rail_blocks", "stone_stairs", "excavator"]) +@pytest.mark.clock_timeout(50) +def test_completes_waypoint_mission_in_photo_realistic_env(node, scene: str): + """Test that the waypoint mission completes successfully.""" + run_waypoint_mission_test(node, scene, difficulty=1.0) + +``` \ No newline at end of file diff --git a/isaac-rerun-logger/README.md b/isaac-rerun-logger/README.md new file mode 100644 index 0000000..e69de29 diff --git a/isaac-rerun-logger/pyproject.toml b/isaac-rerun-logger/pyproject.toml new file mode 100644 index 0000000..bd6bf94 --- /dev/null +++ b/isaac-rerun-logger/pyproject.toml @@ -0,0 +1,23 @@ +[project] +name = "isaac-rerun-logger" +version = "0.1.0" +description = "Add your description here" +readme = "README.md" +authors = [ + { name = "Andras Polgar", email = "azazdeaz@gmail.com" } +] +requires-python = ">=3.11" +dependencies = [ + "pillow>=11.2.1", + "rerun-sdk>=0.23.1", + "usd-core>=25.11", +] + +[build-system] +requires = ["uv_build>=0.9.7,<0.10.0"] +build-backend = "uv_build" + +[dependency-groups] +dev = [ + "types-usd>=24.5.2", +] diff --git a/isaac-rerun-logger/src/isaac_rerun_logger/__init__.py b/isaac-rerun-logger/src/isaac_rerun_logger/__init__.py new file mode 100644 index 0000000..d03b447 --- /dev/null +++ b/isaac-rerun-logger/src/isaac_rerun_logger/__init__.py @@ -0,0 +1,596 @@ +from datetime import datetime, timedelta +import os +from pathlib import Path + +import numpy as np +import rerun as rr +from PIL import Image +from pxr import Gf, Sdf, Usd, UsdGeom, UsdShade + + +class UsdRerunLogger: + """Visualize USD stages in Rerun.""" + + def __init__(self, stage: Usd.Stage): + self.stage = None + self._logger_id = None + self._logged_meshes = set() # Track which meshes we've already logged + self._last_transforms = {} # Track last logged transforms for change detection + + def initialize( + self, + 
stage: Usd.Stage, + logger_id: str = "isaac_rerun_logger", + spawn=True, + save_path: Path | None = None, + ): + """Initialize the Rerun logger with a USD stage.""" + self.stop() + self.stage = stage + + # Add random postfix to logger ID to avoid conflicts + self._logger_id = f"{logger_id}_{np.random.randint(10000)}" + rr.init(self._logger_id, spawn=spawn) + if save_path is not None: + # Ensure directory exists + save_path.parent.mkdir(parents=True, exist_ok=True) + rr.save(save_path) + self._logged_meshes = set() # Track which meshes we've already logged + self._last_transforms: dict[ + str, Gf.Matrix4d + ] = {} # Track last logged transforms for change detection + + def stop(self): + """Stop the Rerun logger.""" + if self._logger_id is not None: + rr.disconnect() + self._logger_id = None + self.stage = None + self._logged_meshes.clear() + self._last_transforms.clear() + + def set_time( + self, + sequence: int | None = None, + duration: int | float | timedelta | np.timedelta64 | None = None, + timestamp: int | float | datetime | np.datetime64 | None = None, + ) -> None: + """ + Set the current time of the Rerun logger. + + Used for all subsequent logging on the same thread, until the next call to + [`rerun.set_time`][], [`rerun.reset_time`][] or [`rerun.disable_timeline`][]. + + For example: `set_time("frame_nr", sequence=frame_nr)`. + + There is no requirement of monotonicity. You can move the time backwards if you like. + + You are expected to set exactly ONE of the arguments `sequence`, `duration`, or `timestamp`. + You may NOT change the type of a timeline, so if you use `duration` for a specific timeline, + you must only use `duration` for that timeline going forward. + + Parameters + ---------- + sequence: + Used for sequential indices, like `frame_nr`. + Must be an integer. + duration: + Used for relative times, like `time_since_start`. + Must either be in seconds, a [`datetime.timedelta`][], or [`numpy.timedelta64`][]. 
+ For nanosecond precision, use `numpy.timedelta64(nanoseconds, 'ns')`. + timestamp: + Used for absolute time indices, like `capture_time`. + Must either be in seconds since Unix epoch, a [`datetime.datetime`][], or [`numpy.datetime64`][]. + For nanosecond precision, use `numpy.datetime64(nanoseconds, 'ns')`. + + """ + rr.set_time( + "clock", + sequence=sequence, + duration=duration, + timestamp=timestamp, + ) + + def log_stage(self): + """ + Log the entire USD stage to Rerun. + """ + if self.stage is None: + print("Warning: USD stage is not initialized.") + return + + # Traverse all prims in the stage + current_paths = set() + # Using Usd.TraverseInstanceProxies to traverse into instanceable prims (references) + predicate = Usd.TraverseInstanceProxies(Usd.PrimDefaultPredicate) + for prim in self.stage.Traverse(predicate): + # Skip guides + if prim.GetAttribute("purpose").Get() == UsdGeom.Tokens.guide: + continue + + entity_path = str(prim.GetPath()) + current_paths.add(entity_path) + + # Log transforms for all Xformable prims + if prim.IsA(UsdGeom.Xformable): + self._log_transform(prim, entity_path) + + # Log mesh geometry (only once per unique mesh) + if prim.IsA(UsdGeom.Mesh): + mesh_path = str(prim.GetPath()) + if mesh_path not in self._logged_meshes: + self._log_mesh(prim, entity_path) + self._logged_meshes.add(mesh_path) + + if prim.IsA(UsdGeom.Cube): + cube_path = str(prim.GetPath()) + if cube_path not in self._logged_meshes: + self._log_cube(prim, entity_path) + self._logged_meshes.add(cube_path) + + # Clear the logged paths that are no longer present in the stage + for path in list(self._last_transforms.keys()): + if path not in current_paths: + rr.log(path, rr.Clear.flat()) + del self._last_transforms[path] + + def _log_transform(self, prim: Usd.Prim, entity_path: str): + """Log the transform of an Xformable prim.""" + if not prim.IsA(UsdGeom.Xformable): + return + + # Get the local transformation + xformable = UsdGeom.Xformable(prim) + transform_matrix: 
Gf.Matrix4d = xformable.GetLocalTransformation() + + if ( + entity_path in self._last_transforms + and self._last_transforms[entity_path] == transform_matrix + ): + return + + self._last_transforms[entity_path] = transform_matrix + + transform = Gf.Transform(transform_matrix) + + quaternion = transform.GetRotation().GetQuat() + + # Log the transform to Rerun + rr.log( + entity_path, + rr.Transform3D( + translation=transform.GetTranslation(), + quaternion=(*quaternion.GetImaginary(), quaternion.GetReal()), + scale=transform.GetScale(), + ), + ) + + def _log_mesh(self, prim: Usd.Prim, entity_path: str): + """Log mesh geometry to Rerun.""" + mesh = UsdGeom.Mesh(prim) + + # Get vertex positions + points_attr = mesh.GetPointsAttr() + if not points_attr: + return + vertices = np.array(points_attr.Get()) + + # Get face vertex indices + face_vertex_indices_attr = mesh.GetFaceVertexIndicesAttr() + face_vertex_counts_attr = mesh.GetFaceVertexCountsAttr() + + if not face_vertex_indices_attr or not face_vertex_counts_attr: + rr.log(entity_path, rr.Points3D(positions=vertices), static=True) + return + + face_vertex_indices = np.array(face_vertex_indices_attr.Get()) + face_vertex_counts = np.array(face_vertex_counts_attr.Get()) + + if face_vertex_indices is None or face_vertex_counts is None: + rr.log(entity_path, rr.Points3D(positions=vertices), static=True) + return + + # --- Handle UVs --- + # Use UsdGeom.PrimvarsAPI to handle indexed vs non-indexed primvars correctly + primvars_api = UsdGeom.PrimvarsAPI(prim) + st_primvar = primvars_api.GetPrimvar("st") + + texcoords = None + st_interpolation = "constant" + + if st_primvar: + st_interpolation = st_primvar.GetInterpolation() + + # Get the data, resolving indices if present + st_data = st_primvar.Get() + st_indices = st_primvar.GetIndices() + + if st_data is not None: + st_data = np.array(st_data) + if st_indices: + st_indices = np.array(st_indices) + texcoords = st_data[st_indices] + else: + texcoords = st_data + + # --- 
Handle Normals --- + normals_attr = mesh.GetNormalsAttr() + normals = None + normals_interpolation = "constant" + if normals_attr.HasValue(): + normals = np.array(normals_attr.Get()) + normals_interpolation = normals_attr.GetMetadata("interpolation") + + # --- Flattening Logic --- + # If UVs or Normals are face-varying, we must flatten the mesh to a triangle soup + should_flatten = (st_interpolation == "faceVarying") or ( + normals_interpolation == "faceVarying" + ) + + # Fallback: if texcoords length matches face_vertex_indices length, treat as face-varying + # (This handles cases where metadata might be missing or ambiguous but data shape is clear) + if ( + texcoords is not None + and len(texcoords) == len(face_vertex_indices) + and len(texcoords) != len(vertices) + ): + should_flatten = True + + triangles_list = None + + # Map for subsets: face_index -> list of triangle_indices + face_to_triangle_indices = [[] for _ in range(len(face_vertex_counts))] + current_triangle_index = 0 + + if should_flatten: + # Flatten positions: Create a new vertex for every face corner + vertices = vertices[face_vertex_indices] + + # Flatten normals if they are vertex-interpolated + if normals is not None: + if normals_interpolation == "vertex": + normals = normals[face_vertex_indices] + # if faceVarying, normals should already match face_vertex_indices length + + # Flatten UVs if they are vertex-interpolated + if texcoords is not None: + if st_interpolation == "vertex": + texcoords = texcoords[face_vertex_indices] + # if faceVarying, texcoords should already match face_vertex_indices length + + # Generate trivial triangles (0,1,2), (3,4,5)... + # But we must respect the polygon counts (3, 4, etc.) + triangles = [] + idx = 0 + for face_idx, count in enumerate(face_vertex_counts): + # The vertices for this face are at indices [idx, idx+1, ... 
idx+count-1] in our new arrays + if count == 3: + triangles.extend([idx, idx + 1, idx + 2]) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + elif count == 4: + triangles.extend([idx, idx + 1, idx + 2]) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + + triangles.extend([idx, idx + 2, idx + 3]) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + else: + # Fan triangulation + for i in range(1, count - 1): + triangles.extend([idx, idx + i, idx + i + 1]) + face_to_triangle_indices[face_idx].append( + current_triangle_index + ) + current_triangle_index += 1 + idx += count + + triangles_list = np.array(triangles, dtype=np.uint32).reshape(-1, 3) + + else: + # Standard indexed mesh path (shared vertices) + triangles = [] + idx = 0 + for face_idx, count in enumerate(face_vertex_counts): + if count == 3: + triangles.extend( + [ + face_vertex_indices[idx], + face_vertex_indices[idx + 1], + face_vertex_indices[idx + 2], + ] + ) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + elif count == 4: + triangles.extend( + [ + face_vertex_indices[idx], + face_vertex_indices[idx + 1], + face_vertex_indices[idx + 2], + ] + ) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + + triangles.extend( + [ + face_vertex_indices[idx], + face_vertex_indices[idx + 2], + face_vertex_indices[idx + 3], + ] + ) + face_to_triangle_indices[face_idx].append(current_triangle_index) + current_triangle_index += 1 + else: + for i in range(1, count - 1): + triangles.extend( + [ + face_vertex_indices[idx], + face_vertex_indices[idx + i], + face_vertex_indices[idx + i + 1], + ] + ) + face_to_triangle_indices[face_idx].append( + current_triangle_index + ) + current_triangle_index += 1 + idx += count + + triangles_list = np.array(triangles, dtype=np.uint32).reshape(-1, 
3) + + # --- Material and Texture Handling --- + texture_buffer = None + + subsets = UsdGeom.Subset.GetAllGeomSubsets(mesh) + if subsets: + for subset in subsets: + if subset.GetElementTypeAttr().Get() != UsdGeom.Tokens.face: + print( + "Warning: Unsupported subset element type:", + subset.GetElementTypeAttr().Get(), + ) + continue + + # Rearrange the mesh data to only include the subset + included_faces = subset.GetIndicesAttr().Get() + if not included_faces: + continue + + # Collect all triangles for these faces + subset_triangle_indices = [] + for face_idx in included_faces: + if face_idx < len(face_to_triangle_indices): + subset_triangle_indices.extend( + face_to_triangle_indices[face_idx] + ) + + if not subset_triangle_indices: + continue + + subset_triangles = triangles_list[subset_triangle_indices] + + # TODO: Remove unused vertices + + texture_path = self._get_image_texture_path(subset.GetPrim()) + texture_buffer = self._load_texture(texture_path) + + self._log_mesh_data( + str(subset.GetPath()), + vertices, + np.array(subset_triangles), + normals, + texcoords, + texture_buffer, + ) + + else: + texture_path = self._get_image_texture_path(prim) + texture_buffer = self._load_texture(texture_path) + + self._log_mesh_data( + entity_path, + vertices, + triangles_list, + normals, + texcoords, + texture_buffer, + ) + + def _log_mesh_data( + self, + entity_path: str, + vertices: np.ndarray, + triangles_list: np.ndarray, + normals: np.ndarray = None, + texcoords: np.ndarray = None, + texture_buffer: np.ndarray = None, + albedo_factor: tuple = None, + ): + rr.log( + entity_path, + rr.Mesh3D( + vertex_positions=vertices, + triangle_indices=triangles_list, + vertex_normals=normals, + vertex_texcoords=texcoords, + albedo_texture=texture_buffer, + albedo_factor=albedo_factor, + ), + static=True, + ) + + def clear_logged_meshes(self): + """Clear the cache of logged meshes, allowing them to be logged again.""" + self._logged_meshes.clear() + + def 
_get_image_texture_path(self, prim: Usd.Prim): + """ + Get material color or texture path. + Returns: texture_path or None + """ + binding_api = UsdShade.MaterialBindingAPI(prim) + material: UsdShade.Material = binding_api.ComputeBoundMaterial()[0] + if not material: + print(f"No material found for prim {prim.GetPath()}.") + return None + + shader: UsdShade.Shader = material.ComputeSurfaceSource()[0] + if not shader: + print("No surface shader found.") + mdl_surface = material.GetOutput("mdl:surface") + if mdl_surface and mdl_surface.HasConnectedSource(): + source, sourceName, sourceType = mdl_surface.GetConnectedSource() + shader = UsdShade.Shader(source) + else: + return None + + implementation_source = shader.GetImplementationSource() + + if ( + implementation_source == "id" + and shader.GetIdAttr().Get() == "UsdPreviewSurface" + ): + diffuse_color = shader.GetInput("diffuseColor") + + diffuse_color_source: UsdShade.ConnectableAPI = ( + diffuse_color.GetConnectedSource()[0] + ) + + diffuse_color_source_file = diffuse_color_source.GetInput("file") + diffuse_color_source_file_path = diffuse_color_source_file.Get() + + if not diffuse_color_source_file_path or not isinstance( + diffuse_color_source_file_path, Sdf.AssetPath + ): + print("Diffuse color source is not a valid texture file path.") + return None + + return diffuse_color_source_file_path.resolvedPath + + elif ( + implementation_source == UsdShade.Tokens.sourceAsset + and shader.GetPrim() + .GetAttribute("info:mdl:sourceAsset:subIdentifier") + .Get() + == "OmniPBR" + ): + diffuse_texture = shader.GetInput("diffuse_texture") + if not diffuse_texture: + print("No diffuse_texture input found in OmniPBR shader.") + print( + "Shader inputs:", [inp.GetBaseName() for inp in shader.GetInputs()] + ) + return None + print(diffuse_texture.GetConnectedSource()) + diffuse_texture_source = diffuse_texture.GetConnectedSource() + if not diffuse_texture_source: + print("No connected source for diffuse_texture.") +
return None + diffuse_texture_source, input_name, _ = diffuse_texture_source + diffuse_texture_source_file = diffuse_texture_source.GetInput( + input_name + ).Get() + if not diffuse_texture_source_file or not isinstance( + diffuse_texture_source_file, Sdf.AssetPath + ): + print("Diffuse texture source is not a valid texture file path.") + return None + return diffuse_texture_source_file.resolvedPath + + elif ( + implementation_source == UsdShade.Tokens.sourceAsset + and shader.GetPrim() + .GetAttribute("info:mdl:sourceAsset:subIdentifier") + .Get() + == "gltf_material" + ): + diffuse_texture = shader.GetInput("base_color_texture") + print(diffuse_texture.GetConnectedSource()) + diffuse_texture_source = diffuse_texture.GetConnectedSource()[0] + diffuse_texture_source_file: Sdf.AssetPath = ( + diffuse_texture_source.GetInput("texture").Get() + ) + if not diffuse_texture_source_file or not isinstance( + diffuse_texture_source_file, Sdf.AssetPath + ): + print("Diffuse texture source is not a valid texture file path.") + return None + return diffuse_texture_source_file.resolvedPath + + else: + print(f"Unsupported shader type: {shader.GetIdAttr().Get()}") + return None + + def _load_texture(self, texture_path): + """Load texture from path.""" + if not texture_path: + return None + try: + # Resolve path relative to stage + if not os.path.isabs(texture_path): + stage_path = self.stage.GetRootLayer().realPath + if stage_path: + texture_path = os.path.join( + os.path.dirname(stage_path), texture_path + ) + + if not os.path.exists(texture_path): + print(f"Warning: Texture file does not exist: {texture_path}") + return None + + with Image.open(texture_path) as img: + img = img.convert("RGB") # Ensure 3 channels + # mirror the image vertically and horizontally + img = img.transpose(Image.FLIP_TOP_BOTTOM) + # img = img.transpose(Image.FLIP_LEFT_RIGHT) + img_data = np.array(img) + return img_data + + except Exception as e: + print(f"Failed to load texture {texture_path}: {e}") + 
return None + + def _log_cube(self, prim: Usd.Prim, entity_path: str): + """Log a cube prim as a Rerun box.""" + cube = UsdGeom.Cube(prim) + size_attr = cube.GetSizeAttr() + size = size_attr.Get() if size_attr else 2.0 + half_size = size / 2.0 + + color = None + display_color_attr = cube.GetDisplayColorAttr() + if display_color_attr and display_color_attr.HasValue(): + colors = display_color_attr.Get() + if colors and len(colors) > 0: + c = colors[0] + color = (int(c[0] * 255), int(c[1] * 255), int(c[2] * 255)) + + rr.log( + entity_path, + rr.Boxes3D( + half_sizes=[half_size, half_size, half_size], + colors=color, + fill_mode="solid", + ), + static=True, + ) + + +if __name__ == "__main__": + test_usds = [ + # "/home/azazdeaz/repos/art/go2-example/assets/rail_blocks/rail_blocks.usd", + # "/home/azazdeaz/repos/art/go2-example/assets/excavator_scan/excavator.usd", + # "/home/azazdeaz/repos/art/go2-example/assets/stone_stairs/stone_stairs_f.usd", + # "/home/azazdeaz/repos/art/go2-example/isaac-rerun-logger/assets/dex_cube_instanceable.usd", + # "/home/azazdeaz/repos/art/go2-example/isaac-rerun-logger/assets/Collected_dex_cube_instanceable/dex_cube_instanceable.usda", + # "/home/azazdeaz/repos/art/go2-example/isaac-rerun-logger/assets/simpleShading.usda", + # "/home/azazdeaz/repos/art/go2-example/isaac-rerun-logger/assets/Collected_block_letter/block_letter_flat.usda", + "/home/azazdeaz/repos/art/go2-example/isaac-rerun-logger/assets/Collected_go2-piamid/go2-piamid.usda", + ] + for usd_path in test_usds: + print(f"\n\n\n>> Logging USD stage: {usd_path}") + stage = Usd.Stage.Open(usd_path) + logger = UsdRerunLogger(stage) + logger.initialize(stage) + logger.log_stage() diff --git a/isaac-rerun-logger/src/isaac_rerun_logger/py.typed b/isaac-rerun-logger/src/isaac_rerun_logger/py.typed new file mode 100644 index 0000000..e69de29 diff --git a/nodes/simulation/simulation/go2_scene.py b/nodes/simulation/simulation/go2_scene.py index b65152b..5793da1 100644 ---
a/nodes/simulation/simulation/go2_scene.py +++ b/nodes/simulation/simulation/go2_scene.py @@ -11,6 +11,7 @@ from omni.isaac.core.utils.prims import get_prim_at_path from isaacsim.core.prims import SingleXFormPrim from pxr import Gf, Sdf, UsdGeom +from isaac_rerun_logger import UsdRerunLogger from simulation.camera_manager import CameraManager @@ -175,6 +176,7 @@ def __init__( window_size=200, update_interval=100 ) # 1-second window for 200Hz, update every 100 steps self.log_rtf = False + self._simulation_time = 0.0 def initialize(self): self.world = World( @@ -184,6 +186,8 @@ def initialize(self): ) self.world.reset() + self.rerun_logger = UsdRerunLogger(self.world.stage) + self.go2 = Go2Policy( prim_path=self.robot_path, name="Go2", @@ -210,6 +214,8 @@ def _handle_shutdown(self, signum, frame): def initialize_scene(self): self._is_initializing = True + self.rerun_logger.stop() + demo_config = add_assets_to_world(self.current_scene, self.difficulty) self.world.reset() @@ -252,10 +258,15 @@ def initialize_scene(self): self.camera_manager.initialize() self.camera_manager.link_waypoint_mission(self.waypoint_mission) + run_dir = OUTPUT_DIR / f"{self.current_scene.name}_{int(self.difficulty * 100)}" + + self.rerun_logger.initialize( + self.world.stage, spawn=False, save_path=run_dir / "simulation.rrd" + ) + # TODO: find a better way to separate artefacts outputs per test run self.camera_manager.start_writers( - output_dir=OUTPUT_DIR - / f"{self.current_scene.name}_{int(self.difficulty * 100)}", + output_dir=run_dir, ) def load_next_scene(self): @@ -283,6 +294,8 @@ def reload_scene(self): self.load_scene(self.current_scene) def on_physics_step(self, step_size) -> None: + self._simulation_time += step_size + rtf = self._rtf_calculator.step(step_size) if rtf is not None and self.log_rtf: print(f"Real-Time Factor (RTF): {rtf:.2f}") @@ -327,6 +340,9 @@ def step(self): return self.world.step(render=True) + if self.rerun_logger: + 
self.rerun_logger.set_time(duration=self._simulation_time) + self.rerun_logger.log_stage() self.follow_camera.update() self.waypoint_mission.update() diff --git a/pyproject.toml b/pyproject.toml index 8e6d4e7..2d4138d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -23,11 +23,16 @@ dataflow = { workspace = true } teleop = { workspace = true } [tool.uv.workspace] -members = ["nodes/*", "msgs", "dataflow"] +members = [ + "nodes/*", + "msgs", + "dataflow", + "isaac-rerun-logger", +] [dependency-groups] dev = [ "ruff >=0.14.3", "pre-commit", -] \ No newline at end of file +] diff --git a/uv.lock b/uv.lock index c44076e..cb4c31e 100644 --- a/uv.lock +++ b/uv.lock @@ -12,6 +12,7 @@ resolution-markers = [ members = [ "dataflow", "go2-example", + "isaac-rerun-logger", "msgs", "navigator", "policy-controller", @@ -818,6 +819,31 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" }, ] +[[package]] +name = "isaac-rerun-logger" +version = "0.1.0" +source = { editable = "isaac-rerun-logger" } +dependencies = [ + { name = "pillow" }, + { name = "rerun-sdk" }, + { name = "usd-core" }, +] + +[package.dev-dependencies] +dev = [ + { name = "types-usd" }, +] + +[package.metadata] +requires-dist = [ + { name = "pillow", specifier = ">=11.2.1" }, + { name = "rerun-sdk", specifier = ">=0.23.1" }, + { name = "usd-core", specifier = ">=25.11" }, +] + +[package.metadata.requires-dev] +dev = [{ name = "types-usd", specifier = ">=24.5.2" }] + [[package]] name = "isaacsim" version = "5.0.0.0" @@ -2281,6 +2307,25 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/6f/bb/5deac77a9af870143c684ab46a7934038a53eb4aa975bc0687ed6ca2c610/requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = 
"sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5", size = 23892, upload-time = "2022-01-29T18:52:22.279Z" }, ] +[[package]] +name = "rerun-sdk" +version = "0.23.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "attrs" }, + { name = "numpy" }, + { name = "pillow" }, + { name = "pyarrow" }, + { name = "typing-extensions" }, +] +wheels = [ + { url = "https://files.pythonhosted.org/packages/dd/6e/a125f4fe2de3269f443b7cb65d465ffd37a836a2dac7e4318e21239d78c8/rerun_sdk-0.23.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:fe06d21cfcf4d84a9396f421d4779efabec7e9674d232a2c552c8a91d871c375", size = 66094053, upload-time = "2025-04-25T13:15:48.669Z" }, + { url = "https://files.pythonhosted.org/packages/55/f6/b6d13322b05dc77bd9a0127e98155c2b7ee987a236fd4d331eed2e547a90/rerun_sdk-0.23.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:823ae87bfa644e06fb70bada08a83690dd23d9824a013947f80a22c6731bdc0d", size = 62047843, upload-time = "2025-04-25T13:15:54.48Z" }, + { url = "https://files.pythonhosted.org/packages/a5/7f/6a7422cb727e14a65b55b0089988eeea8d0532c429397a863e6ba395554a/rerun_sdk-0.23.1-cp39-abi3-manylinux_2_31_aarch64.whl", hash = "sha256:dc5129f8744f71249bf45558c853422c51ef39b6b5eea0ea1f602c6049ce732f", size = 68214509, upload-time = "2025-04-25T13:15:59.339Z" }, + { url = "https://files.pythonhosted.org/packages/4f/86/3aee9eadbfe55188a2c7d739378545b4319772a4d3b165e8d3fc598fa630/rerun_sdk-0.23.1-cp39-abi3-manylinux_2_31_x86_64.whl", hash = "sha256:ee0d0e17df0e08be13b77cc74884c5d8ba8edb39b6f5a60dc2429d39033d90f6", size = 71442196, upload-time = "2025-04-25T13:16:04.405Z" }, + { url = "https://files.pythonhosted.org/packages/a7/ba/028bd382e2ae21e6643cec25f423285dbc6b328ce56d55727b4101ef9443/rerun_sdk-0.23.1-cp39-abi3-win_amd64.whl", hash = "sha256:d4273db55b56310b053a2de6bf5927a8692cf65f4d234c6e6928fb24ed8a960d", size = 57583198, upload-time = "2025-04-25T13:16:08.905Z" }, +] + [[package]] name = "rich" version = 
"14.2.0" @@ -2784,6 +2829,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/78/64/7713ffe4b5983314e9d436a90d5bd4f63b6054e2aca783a3cfc44cb95bbf/typer-0.20.0-py3-none-any.whl", hash = "sha256:5b463df6793ec1dca6213a3cf4c0f03bc6e322ac5e16e13ddd622a889489784a", size = 47028, upload-time = "2025-10-20T17:03:47.617Z" }, ] +[[package]] +name = "types-usd" +version = "24.5.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/1e/2e/b27175e276886dece2660c527556981dbf9465b977826e8b602500113fed/types_usd-24.5.2.tar.gz", hash = "sha256:5da654fdc17dfb8ddf2b264c626e5236e6572b5359498c5ec634b5577abe86fa", size = 487745, upload-time = "2024-08-21T03:40:44.95Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a9/38/77e716ff072350c3ae98da836e26080ffaf72e5da41c301dad55d23d8897/types_usd-24.5.2-py2.py3-none-any.whl", hash = "sha256:e696c40f7aec7fa3f65f14d26345b9b747d9004616bcd9fb01c85d4bb52b3d68", size = 541467, upload-time = "2024-08-21T03:40:43.139Z" }, +] + [[package]] name = "typing-extensions" version = "4.12.2" @@ -2819,6 +2873,16 @@ socks = [ { name = "pysocks" }, ] +[[package]] +name = "usd-core" +version = "25.11" +source = { registry = "https://pypi.org/simple" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/da/9a/2a27b2ff16f8def2ef384a7deebb0e670c1016ea9e5ce7b0ebb14bb9ea93/usd_core-25.11-cp311-none-macosx_10_9_universal2.whl", hash = "sha256:3485b21e8319a74b860df96122fbf7c43a2018b76dd975d3855a5f88a5cdba67", size = 38455875, upload-time = "2025-10-24T19:01:15.247Z" }, + { url = "https://files.pythonhosted.org/packages/78/6a/fe4feefe867e92e3c657c38bc0041a2b39e035f7fdf37b4d9394c959871e/usd_core-25.11-cp311-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a971c76ee4470a318df0517109fa8302f4bfa4f42c1f26b011658bb2ae6b6fa4", size = 27516181, upload-time = "2025-10-24T19:01:18.508Z" }, + { url = 
"https://files.pythonhosted.org/packages/ef/12/50ab94ef969c4f9cfd08a9086ebd544e9fa3caadd57587e8851e612f2805/usd_core-25.11-cp311-none-win_amd64.whl", hash = "sha256:dff4cf30a36659ce8ddf35b8d013ee02516487174aafa806938c34a782cde89c", size = 13027051, upload-time = "2025-10-24T19:01:21.832Z" }, +] + [[package]] name = "uv" version = "0.9.7"