Merge branch 'main' into apoddubny/ci-updates
Dhoeller19 authored Oct 5, 2024
2 parents 0ea5a97 + 0ef582b commit 798e26a
Showing 31 changed files with 687 additions and 237 deletions.
1 change: 1 addition & 0 deletions CONTRIBUTORS.md
@@ -49,6 +49,7 @@ Guidelines for modifications:
* Johnson Sun
* Kaixi Bao
* Kourosh Darvish
+* Lionel Gulich
* Lorenz Wellhausen
* Masoud Moghani
* Michael Gussert
2 changes: 1 addition & 1 deletion docs/source/how-to/save_camera_output.rst
@@ -89,7 +89,7 @@ To run the accompanying script, execute the following command:
.. code-block:: bash
# Usage with saving and drawing
-./isaaclab.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --draw
+./isaaclab.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --draw --enable_cameras
# Usage with saving only in headless mode
./isaaclab.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --headless --enable_cameras
2 changes: 1 addition & 1 deletion docs/source/overview/teleop_imitation.rst
@@ -60,7 +60,7 @@ format.
# install python module (for robomimic)
./isaaclab.sh -i robomimic
# split data
-./isaaclab.sh -p source/standalone//workflows/robomimic/tools/split_train_val.py logs/robomimic/Isaac-Lift-Cube-Franka-IK-Rel-v0/hdf_dataset.hdf5 --ratio 0.2
+./isaaclab.sh -p source/standalone/workflows/robomimic/tools/split_train_val.py logs/robomimic/Isaac-Lift-Cube-Franka-IK-Rel-v0/hdf_dataset.hdf5 --ratio 0.2
3. Train a BC agent for ``Isaac-Lift-Cube-Franka-IK-Rel-v0`` with
`Robomimic <https://robomimic.github.io/>`__:
2 changes: 1 addition & 1 deletion docs/source/tutorials/04_sensors/add_sensors_on_robot.rst
@@ -174,7 +174,7 @@ Now that we have gone through the code, let's run the script and see the result:

.. code-block:: bash
-./isaaclab.sh -p source/standalone/tutorials/04_sensors/add_sensors_on_robot.py --num_envs 2
+./isaaclab.sh -p source/standalone/tutorials/04_sensors/add_sensors_on_robot.py --num_envs 2 --enable_cameras
This command should open a stage with a ground plane, lights, and two quadrupedal robots.
1 change: 1 addition & 0 deletions pyproject.toml
@@ -46,6 +46,7 @@ known_third_party = [
"omni.kit.*",
"warp",
"carb",
"Semantics",
]
# Imports from this repository
known_first_party = "omni.isaac.lab"
2 changes: 1 addition & 1 deletion source/extensions/omni.isaac.lab/config/extension.toml
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.24.13"
version = "0.24.19"

# Description
title = "Isaac Lab framework for Robot Learning"
56 changes: 50 additions & 6 deletions source/extensions/omni.isaac.lab/docs/CHANGELOG.rst
@@ -1,24 +1,68 @@
Changelog
---------

-0.22.15 (2024-09-20)
+0.24.19 (2024-10-05)
~~~~~~~~~~~~~~~~~~~~

Added
^^^^^

-* Added :meth:`grab_images` to be able to use images for an observation term in manager based environments
+* Added new functionalities to the FrameTransformer to make it more general. It is now possible to track:
+
+  * Target frames that aren't children of the source frame prim_path
+  * Target frames that are based upon the source frame prim_path
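
As an editorial illustration of the new tracking options, here is a minimal sketch of a FrameTransformer whose target frame is not a child of the source prim; the prim paths and frame name are hypothetical, not taken from this commit.

.. code-block:: python

from omni.isaac.lab.sensors import FrameTransformerCfg

# sketch: track an object pose relative to the robot base, even though the
# object prim does not live under the source frame's subtree
frame_tf_cfg = FrameTransformerCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",  # source frame
    target_frames=[
        FrameTransformerCfg.FrameCfg(prim_path="{ENV_REGEX_NS}/Object", name="object"),
    ],
)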


+0.24.18 (2024-10-04)
+~~~~~~~~~~~~~~~~~~~~
+
+Fixed
+^^^^^
+
+* Fixed parsing and application of the ``size`` parameter for :class:`~omni.isaac.lab.sim.spawn.GroundPlaneCfg` to
+  correctly scale the grid-based ground plane.
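
For context, a short sketch of how the ``size`` parameter reaches the spawner; the prim path and dimensions are illustrative.

.. code-block:: python

import omni.isaac.lab.sim as sim_utils

# spawn a 100 m x 100 m grid ground plane (sketch; path is illustrative)
cfg = sim_utils.GroundPlaneCfg(size=(100.0, 100.0))
cfg.func("/World/defaultGroundPlane", cfg)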


+0.24.17 (2024-10-04)
+~~~~~~~~~~~~~~~~~~~~
+
+Fixed
+^^^^^
+
+* Fixed the deprecation notice for using ``pxr.Semantics``. The corresponding modules now use the ``Semantics``
+  module directly.
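
In other words, the import style changed roughly as sketched below; this is an editorial illustration inferred from the entry above and the ``known_third_party`` change in ``pyproject.toml``.

.. code-block:: python

# deprecated access path that triggered the notice:
# from pxr import Semantics

# direct import now used by the corresponding modules:
import Semantics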


+0.24.16 (2024-10-03)
+~~~~~~~~~~~~~~~~~~~~
+
+Changed
+^^^^^^^
+
+* Renamed the observation function :meth:`grab_images` to :meth:`image` to follow the convention of noun-based naming.
+* Renamed the function :meth:`convert_perspective_depth_to_orthogonal_depth` to the shorter name
+  :meth:`omni.isaac.lab.utils.math.orthogonalize_perspective_depth`.


+0.24.15 (2024-09-20)
+~~~~~~~~~~~~~~~~~~~~
+
+Added
+^^^^^
+
+* Added :meth:`grab_images` to be able to use images for an observation term in manager-based environments.


0.24.14 (2024-09-20)
~~~~~~~~~~~~~~~~~~~~

Added
^^^^^

-* Added :meth:`convert_perspective_depth_to_orthogonal_depth`. :meth:`unproject_depth` assumes
-  that the input depth image is orthogonal. The new :meth:`convert_perspective_depth_to_orthogonal_depth`
-  can be used to convert a perspective depth image into an orthogonal depth image, so that the point cloud
-  can be unprojected correctly with :meth:`unproject_depth`.
+* Added the method :meth:`convert_perspective_depth_to_orthogonal_depth` to convert perspective depth
+  images to orthogonal depth images. This is useful for :meth:`~omni.isaac.lab.utils.math.unproject_depth`,
+  since it expects orthogonal depth images as inputs.
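
A hedged sketch of the intended pipeline, written with the post-rename name ``orthogonalize_perspective_depth`` (see 0.24.16 above); the tensor shapes and values are assumptions for illustration.

.. code-block:: python

import torch

import omni.isaac.lab.utils.math as math_utils

# illustrative inputs: one 240x320 perspective depth image and pinhole intrinsics
depth = torch.rand(1, 240, 320)
intrinsics = torch.tensor([[[200.0, 0.0, 160.0], [0.0, 200.0, 120.0], [0.0, 0.0, 1.0]]])

# convert perspective ("distance_to_camera") depth to orthogonal depth first,
# since unproject_depth assumes orthogonal depth as input
depth_ortho = math_utils.orthogonalize_perspective_depth(depth, intrinsics)
points = math_utils.unproject_depth(depth_ortho, intrinsics)  # per-pixel 3D points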


0.24.13 (2024-09-08)
~~~~~~~~~~~~~~~~~~~~
@@ -81,7 +81,7 @@ def __init__(self, cfg: DirectMARLEnvCfg, render_mode: str | None = None, **kwargs):

# set the seed for the environment
if self.cfg.seed is not None:
-    self.seed(self.cfg.seed)
+    self.cfg.seed = self.seed(self.cfg.seed)
else:
    carb.log_warn("Seed not set for the environment. The environment creation may not be deterministic.")
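
Editorial note: the new assignment relies on ``seed()`` returning the seed that was actually applied, so that ``cfg.seed`` stays in sync with it. Below is a minimal stand-in illustrating that contract; it is not the method from this repository.

.. code-block:: python

import random

import torch

def seed(value: int = -1) -> int:
    # a negative value means "pick a seed for me"; the seed actually used is returned
    if value < 0:
        value = random.randint(0, 2**31 - 1)
    torch.manual_seed(value)
    return value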

@@ -86,7 +86,7 @@ def __init__(self, cfg: DirectRLEnvCfg, render_mode: str | None = None, **kwargs):

# set the seed for the environment
if self.cfg.seed is not None:
-    self.seed(self.cfg.seed)
+    self.cfg.seed = self.seed(self.cfg.seed)
else:
    carb.log_warn("Seed not set for the environment. The environment creation may not be deterministic.")

@@ -76,7 +76,7 @@ def __init__(self, cfg: ManagerBasedEnvCfg):

# set the seed for the environment
if self.cfg.seed is not None:
-    self.seed(self.cfg.seed)
+    self.cfg.seed = self.seed(self.cfg.seed)
else:
    carb.log_warn("Seed not set for the environment. The environment creation may not be deterministic.")

@@ -182,38 +182,52 @@ def body_incoming_wrench(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
return link_incoming_forces.view(env.num_envs, -1)


-def grab_images(
+def image(
    env: ManagerBasedEnv,
    sensor_cfg: SceneEntityCfg = SceneEntityCfg("tiled_camera"),
    data_type: str = "rgb",
    convert_perspective_to_orthogonal: bool = False,
    normalize: bool = True,
) -> torch.Tensor:
-    """Grab all of the latest images of a specific datatype produced by a specific camera.
+    """Images of a specific datatype from the camera sensor.
+
+    If the flag :attr:`normalize` is True, post-processing of the images is performed based on their
+    data-types:
+
+    - "rgb": Scales the image to (0, 1) and subtracts with the mean of the current image batch.
+    - "depth" or "distance_to_camera" or "distance_to_plane": Replaces infinity values with zero.

    Args:
        env: The environment the cameras are placed within.
        sensor_cfg: The desired sensor to read from. Defaults to SceneEntityCfg("tiled_camera").
        data_type: The data type to pull from the desired camera. Defaults to "rgb".
-        convert_perspective_to_orthogonal: Whether to convert perspective
-            depth images to orthogonal depth images. Defaults to False.
-        normalize: Set to True to normalize images. Defaults to True.
+        convert_perspective_to_orthogonal: Whether to orthogonalize perspective depth images.
+            This is used only when the data type is "distance_to_camera". Defaults to False.
+        normalize: Whether to normalize the images. This depends on the selected data type.
+            Defaults to True.

    Returns:
-        The images produced at the last timestep
+        The images produced at the last time-step
    """
+    # extract the used quantities (to enable type-hinting)
    sensor: TiledCamera | Camera | RayCasterCamera = env.scene.sensors[sensor_cfg.name]
+
+    # obtain the input image
    images = sensor.data.output[data_type]
+
+    # depth image conversion
    if (data_type == "distance_to_camera") and convert_perspective_to_orthogonal:
-        images = math_utils.convert_perspective_depth_to_orthogonal_depth(images, sensor.data.intrinsic_matrices)
+        images = math_utils.orthogonalize_perspective_depth(images, sensor.data.intrinsic_matrices)
+
+    # rgb/depth image normalization
    if normalize:
        if data_type == "rgb":
-            images = images / 255
+            images = images.float() / 255.0
            mean_tensor = torch.mean(images, dim=(1, 2), keepdim=True)
            images -= mean_tensor
        elif "distance_to" in data_type or "depth" in data_type:
            images[images == float("inf")] = 0

    return images.clone()
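
To show how the renamed term might be wired up, here is a hedged sketch of an observation group using ``mdp.image``; the class name is illustrative, and the ``tiled_camera`` key must match a sensor defined in the scene config.

.. code-block:: python

import omni.isaac.lab.envs.mdp as mdp
from omni.isaac.lab.managers import ObservationGroupCfg, ObservationTermCfg, SceneEntityCfg
from omni.isaac.lab.utils import configclass

@configclass
class PolicyObsCfg(ObservationGroupCfg):
    # normalized RGB images from the scene's tiled camera
    rgb = ObservationTermCfg(
        func=mdp.image,
        params={"sensor_cfg": SceneEntityCfg("tiled_camera"), "data_type": "rgb"},
    )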

