diff --git a/docs/locale/zh_CN/LC_MESSAGES/index.po b/docs/locale/zh_CN/LC_MESSAGES/index.po index 84b68dd9ad..3e1039e9df 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/index.po +++ b/docs/locale/zh_CN/LC_MESSAGES/index.po @@ -6,7 +6,7 @@ msgid "" msgstr "" "Project-Id-Version: Isaac Lab 1.0.0\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-10-22 15:00+0800\n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: Ziqi Fan \n" "Language-Team: zh_CN \n" @@ -15,49 +15,53 @@ msgstr "" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=1; plural=0;\n" -"Generated-By: Babel 2.15.0\n" +"Generated-By: Babel 2.16.0\n" #: ../../index.rst:81 msgid "Getting Started" msgstr "开始" -#: ../../index.rst:2 ../../index.rst:90 +#: ../../index.rst:2 ../../index.rst:92 msgid "Overview" msgstr "概述" -#: ../../index.rst:103 +#: ../../index.rst:106 +msgid "Tiled Rendering" +msgstr "分块渲染" + +#: ../../index.rst:106 msgid "Features" msgstr "特点" -#: ../../index.rst:113 +#: ../../index.rst:115 msgid "Resources" msgstr "资源" -#: ../../index.rst:122 +#: ../../index.rst:124 msgid "Migration Guides" msgstr "迁移指南" -#: ../../index.rst:131 +#: ../../index.rst:133 msgid "Source API" msgstr "源码 API" -#: ../../index.rst:137 +#: ../../index.rst:139 msgid "References" msgstr "参考资料" -#: ../../index.rst:149 +#: ../../index.rst:152 msgid "GitHub" msgstr "GitHub" -#: ../../index.rst:149 +#: ../../index.rst:152 msgid "NVIDIA Isaac Sim" msgstr "NVIDIA Isaac Sim" -#: ../../index.rst:149 +#: ../../index.rst:152 msgid "NVIDIA PhysX" msgstr "NVIDIA PhysX" -#: ../../index.rst:149 +#: ../../index.rst:152 msgid "Project Links" msgstr "项目链接" @@ -123,7 +127,7 @@ msgid "" " domain randomization for improving robustness and adaptability, and support" " for running in the cloud." msgstr "" -"Isaac Lab 中提供的主要功能包括由 PhysX 提供的快速准确的物理仿真,用于矢量化渲染的分瓷渲染 " +"Isaac Lab 中提供的主要功能包括由 PhysX 提供的快速准确的物理仿真,用于矢量化渲染的分块渲染 " "API,用于改善鲁棒性和适应性的域随机化,以及支持在云端运行的功能。" #: ../../index.rst:32 @@ -211,18 +215,18 @@ msgstr "" msgid "Table of Contents" msgstr "目录" -#: ../../index.rst:158 +#: ../../index.rst:161 msgid "Indices and tables" msgstr "索引和表" -#: ../../index.rst:160 +#: ../../index.rst:163 msgid ":ref:`genindex`" msgstr ":ref:`genindex`" -#: ../../index.rst:161 +#: ../../index.rst:164 msgid ":ref:`modindex`" msgstr ":ref:`modindex`" -#: ../../index.rst:162 +#: ../../index.rst:165 msgid ":ref:`search`" msgstr ":ref:`search`" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/deployment/docker.po b/docs/locale/zh_CN/LC_MESSAGES/source/deployment/docker.po index f35aa87047..ea9db19b7a 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/deployment/docker.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/deployment/docker.po @@ -572,7 +572,7 @@ msgid "" "supported. Each of these middlewares can be `tuned`_ using their " "corresponding ``.xml`` files under ``docker/.ros``." msgstr "" -"容器默认使用 ``FastRTPS``,但也支持 ``CycloneDDS``。这些中间件中的每一个都可以通过其对应的 ``.xml`` 文件在 " +"容器默认使用 ``FastRTPS``,但也支持 ``CycloneDDS`` 。这些中间件中的每一个都可以通过其对应的 ``.xml`` 文件在 " "``docker/.ros`` 下进行 `tuned`_。" #: ../../source/deployment/docker.rst diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/features/hydra.po b/docs/locale/zh_CN/LC_MESSAGES/source/features/hydra.po index f4f4f06b4d..3663d3f9eb 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/features/hydra.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/features/hydra.po @@ -98,8 +98,8 @@ msgid "" "by the hydra arguments." 
msgstr "" "为了保持向后兼容性,并提供更友好的用户体验,我们保留了旧的 cli 参数形式 ``--param``,例如 " -"``--num_envs``、``--seed``、``--max_iterations``。这些参数优先于 hydra 参数,并将覆盖由 hydra " -"参数设置的值。" +"``--num_envs``、``--seed``、``--max_iterations`` 。这些参数优先于 hydra 参数,并将覆盖由 hydra" +" 参数设置的值。" #: ../../source/features/hydra.rst:62 msgid "Modifying advanced parameters" @@ -123,7 +123,8 @@ msgid "" "``env.observations.policy.joint_pos_rel.func=omni.isaac.lab.envs.mdp:joint_pos``." msgstr "" "我们可以修改 ``joint_pos_rel`` 以计算绝对位置,而不是相对位置,使用 " -"``env.observations.policy.joint_pos_rel.func=omni.isaac.lab.envs.mdp:joint_pos``。" +"``env.observations.policy.joint_pos_rel.func=omni.isaac.lab.envs.mdp:joint_pos``" +" 。" #: ../../source/features/hydra.rst:80 msgid "Setting parameters to None" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/features/tiled_rendering.po b/docs/locale/zh_CN/LC_MESSAGES/source/features/tiled_rendering.po index c23cf5491d..37e49da8dd 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/features/tiled_rendering.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/features/tiled_rendering.po @@ -19,7 +19,7 @@ msgstr "" #: ../../source/features/tiled_rendering.rst:2 msgid "Tiled-Camera Rendering" -msgstr "平铺相机渲染" +msgstr "分块相机渲染" #: ../../source/features/tiled_rendering.rst:8 msgid "This feature is only available from Isaac Sim version 4.2.0 onwards." @@ -31,7 +31,7 @@ msgid "" "memory resources, especially at larger resolutions. We recommend running 512" " cameras in the scene on RTX 4090 GPUs or similar." msgstr "" -"平铺渲染结合图像处理网络需要大量内存资源,尤其是在更大的分辨率下。我们建议在RTX 4090 GPUs或类似设备上在场景中运行512台摄像机。" +"分块渲染结合图像处理网络需要大量内存资源,尤其是在更大的分辨率下。我们建议在RTX 4090 GPUs或类似设备上在场景中运行512台摄像机。" #: ../../source/features/tiled_rendering.rst:14 msgid "" @@ -43,7 +43,7 @@ msgid "" "camera. This reduces the amount of time required for rendering and provides " "a more efficient API for working with vision data." msgstr "" -"平铺渲染API提供了一个矢量化接口,用于从摄像头传感器收集数据。这对需要视觉环节的强化学习环境非常有用。平铺渲染通过连接多个摄像头的相机输出并呈现为单个大图像,而不是由每个单独摄像头生成的多个较小图像。这减少了呈现所需的时间,并为处理视觉数据提供了更高效的API。" +"分块渲染API提供了一个矢量化接口,用于从相机传感器收集数据。这对需要视觉环节的强化学习环境非常有用。分块渲染通过连接多个相机的相机输出并呈现为单个大图像,而不是由每个单独相机生成的多个较小图像。这减少了呈现所需的时间,并为处理视觉数据提供了更高效的API。" #: ../../source/features/tiled_rendering.rst:21 msgid "" @@ -63,7 +63,7 @@ msgstr "" msgid "" "To access the tiled rendering interface, a :class:`~sensors.TiledCamera` " "object can be created and used to retrieve data from the cameras." -msgstr "要访问平铺渲染接口,可以创建一个 :class:`~sensors.TiledCamera` 对象,并用于从摄像头获取数据。" +msgstr "要访问分块渲染接口,可以创建一个 :class:`~sensors.TiledCamera` 对象,并用于从相机获取数据。" #: ../../source/features/tiled_rendering.rst:49 msgid "" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/estimate_how_many_cameras_can_run.po b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/estimate_how_many_cameras_can_run.po index 5eb1ba0529..8a150bb8f0 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/estimate_how_many_cameras_can_run.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/estimate_how_many_cameras_can_run.po @@ -30,9 +30,9 @@ msgid "" "characterize their relative performance at different parameters such as " "camera quantity, image dimensions, and data types." 
msgstr "" -"目前在Isaac Lab,有几种摄像头类型; " -"USD摄像头(标准)、平铺摄像头和光线投射摄像头。这些摄像头类型在功能和性能上有所不同。``benchmark_cameras.py`` " -"脚本可用于了解摄像头类型的差异,以及表征它们在不同参数(如摄像头数量、图像尺寸和数据类型)下的相对性能。" +"目前在Isaac Lab,有几种相机类型; " +"USD相机(标准)、分块相机和光线投射相机。这些相机类型在功能和性能上有所不同。``benchmark_cameras.py`` " +"脚本可用于了解相机类型的差异,以及表征它们在不同参数(如相机数量、图像尺寸和数据类型)下的相对性能。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:14 msgid "" @@ -42,7 +42,7 @@ msgid "" "of cameras one can realistically run, assuming that one wants to maximize " "the number of environments while minimizing step time." msgstr "" -"这个实用程序的目的是让用户能够轻松找到在满足用户场景要求的情况下性能最优的摄像头类型/参数。该实用程序还可以帮助估计用户可以实际运行的摄像头最大数量,假设用户想要最大化环境数量同时最小化步骤时间。" +"这个实用程序的目的是让用户能够轻松找到在满足用户场景要求的情况下性能最优的相机类型/参数。该实用程序还可以帮助估计用户可以实际运行的相机最大数量,假设用户想要最大化环境数量同时最小化步骤时间。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:19 msgid "" @@ -53,8 +53,8 @@ msgid "" "certain specified system resource utilization threshold (without training; " "taking zero actions at each timestep)." msgstr "" -"这个实用程序可以将摄像头注入到来自健身房注册表的现有任务中,这对于在特定场景中对摄像头进行基准测试可能很有用。此外,如果您安装了 ``pynvml`` " -",则可以让此实用程序自动查找可以在您的任务环境中运行的摄像头的最大数量,直到达到特定指定的系统资源利用阈值为止(不进行训练,在每个时间步骤上不采取任何行动)。" +"这个实用程序可以将相机注入到来自健身房注册表的现有任务中,这对于在特定场景中对相机进行基准测试可能很有用。此外,如果您安装了 ``pynvml`` " +",则可以让此实用程序自动查找可以在您的任务环境中运行的相机的最大数量,直到达到特定指定的系统资源利用阈值为止(不进行训练,在每个时间步骤上不采取任何行动)。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:26 msgid "" @@ -83,26 +83,26 @@ msgstr "要查看您可以使用此实用程序变化的所有可能参数。" msgid "" "See the command line parameters related to ``autotune`` for more information" " about automatically determining maximum camera count." -msgstr "请参阅与 ``autotune`` 相关的命令行参数,了解有关自动确定最大摄像头数量的更多信息。" +msgstr "请参阅与 ``autotune`` 相关的命令行参数,了解有关自动确定最大相机数量的更多信息。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:54 msgid "" "Compare Performance in Task Environments and Automatically Determine Task " "Max Camera Count" -msgstr "比较任务环境中的性能并自动确定任务最大摄像头数量" +msgstr "比较任务环境中的性能并自动确定任务最大相机数量" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:56 msgid "" "Currently, tiled cameras are the most performant camera that can handle " "multiple dynamic objects." -msgstr "目前,平铺摄像头是能够处理多个动态对象并且具有最佳性能的摄像头。" +msgstr "目前,分块相机是能够处理多个动态对象并且具有最佳性能的相机。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:58 msgid "" "For example, to see how your system could handle 100 tiled cameras in the " "cartpole environment, with 2 cameras per environment (so 50 environments " "total) only in RGB mode, run" -msgstr "例如,要查看您的系统如何在cartpole环境中处理100个平铺摄像头,每个环境中有2个摄像头(总共50个环境),只在RGB模式下运行。" +msgstr "例如,要查看您的系统如何在cartpole环境中处理100个分块相机,每个环境中有2个相机(总共50个环境),只在RGB模式下运行。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:69 msgid "" @@ -114,7 +114,7 @@ msgid "" "number of cameras you can run with cartpole, you could run:" msgstr "" "如果您已安装pynvml(``./isaaclab.sh -p -m pip install " -"pynvml``),您还可以找到在指定环境中运行的摄像头的最大数量,直到达到某个性能阈值(由最大CPU利用率百分比、最大RAM利用率百分比、最大GPU计算百分比和最大GPU内存百分比指定)。例如,要找出您可以用cartpole运行的摄像头的最大数量,您可以运行:" +"pynvml``),您还可以找到在指定环境中运行的相机的最大数量,直到达到某个性能阈值(由最大CPU利用率百分比、最大RAM利用率百分比、最大GPU计算百分比和最大GPU内存百分比指定)。例如,要找出您可以用cartpole运行的相机的最大数量,您可以运行:" " " #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:83 @@ -122,7 +122,7 @@ msgid "" "Autotune may lead to the program crashing, which means that it tried to run " "too many cameras at once. However, the max percentage utilization parameter " "is meant to prevent this from happening." 
-msgstr "自动调谐可能会导致程序崩溃,这意味着它试图一次运行太多摄像头。然而,最大百分比利用参数旨在阻止这种情况发生。" +msgstr "自动调谐可能会导致程序崩溃,这意味着它试图一次运行太多相机。然而,最大百分比利用参数旨在阻止这种情况发生。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:86 msgid "" @@ -132,7 +132,7 @@ msgid "" " so to get the total number of environments, divide the output camera count " "by the number of cameras per environment." msgstr "" -"基准测试的输出不包括训练网络的开销,因此考虑减少最大利用率百分比以考虑这种开销。最终输出的摄像头数量是针对所有摄像头的,因此要获取总环境数量,将输出的摄像头数量除以每个环境的摄像头数量。" +"基准测试的输出不包括训练网络的开销,因此考虑减少最大利用率百分比以考虑这种开销。最终输出的相机数量是针对所有相机的,因此要获取总环境数量,将输出的相机数量除以每个环境的相机数量。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:93 msgid "Compare Camera Type and Performance (Without a Specified Task)" @@ -142,7 +142,7 @@ msgstr "比较相机类型和性能(未指定任务情形下)" msgid "" "This tool can also asses performance without a task environment. For " "example, to view 100 random objects with 2 standard cameras, one could run" -msgstr "这个工具还可以在没有任务环境的情况下评估性能。例如,要查看通过两个标准摄像头查看100个随机物体,可以运行" +msgstr "这个工具还可以在没有任务环境的情况下评估性能。例如,要查看通过两个标准相机查看100个随机物体,可以运行" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:105 msgid "" @@ -161,7 +161,7 @@ msgstr "" msgid "" "If your system has a hard time handling the desired cameras, you can try the" " following" -msgstr "如果您的系统无法处理所需的摄像头,您可以尝试以下操作" +msgstr "如果您的系统无法处理所需的相机,您可以尝试以下操作" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:112 msgid "Switch to headless mode (supply ``--headless``)" @@ -173,19 +173,19 @@ msgstr "确保您使用的是GPU pipeline,而不是CPU!" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:114 msgid "If you aren't using Tiled Cameras, switch to Tiled Cameras" -msgstr "如果您没有使用平铺摄像头,请切换到平铺摄像头。" +msgstr "如果您没有使用分块相机,请切换到分块相机。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:115 msgid "Decrease camera resolution" -msgstr "减少摄像头分辨率" +msgstr "减少相机分辨率" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:116 msgid "Decrease how many data_types there are for each camera." -msgstr "减少每个摄像头的数据类型数量。" +msgstr "减少每个相机的数据类型数量。" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:117 msgid "Decrease the number of cameras" -msgstr "减少摄像头数量" +msgstr "减少相机数量" #: ../../source/how-to/estimate_how_many_cameras_can_run.rst:118 msgid "Decrease the number of objects in the scene" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/index.po b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/index.po index 8938b5f9bc..68fce79137 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/index.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/index.po @@ -78,21 +78,21 @@ msgstr "本指南解释了如何在每个环境中导入和配置不同的资产 #: ../../source/how-to/index.rst:51 msgid "Saving Camera Output" -msgstr "保存摄像头输出" +msgstr "保存相机输出" #: ../../source/how-to/index.rst:53 msgid "This guide explains how to save the camera output in Isaac Lab." -msgstr "本指南解释了如何在Isaac Lab中保存摄像头输出。" +msgstr "本指南解释了如何在Isaac Lab中保存相机输出。" #: ../../source/how-to/index.rst:61 msgid "Estimate How Many Cameras Can Run On Your Machine" -msgstr "估算机器可运行摄像头数量" +msgstr "估算机器可运行相机数量" #: ../../source/how-to/index.rst:63 msgid "" "This guide demonstrates how to estimate the number of cameras one can run on" " their machine under the desired parameters." 
-msgstr "本指南演示了如何在所需参数下估算一个人可以在其机器上运行多少摄像头。" +msgstr "本指南演示了如何在所需参数下估算一个人可以在其机器上运行多少相机。" #: ../../source/how-to/index.rst:72 msgid "Drawing Markers" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/record_video.po b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/record_video.po index 066de347df..eabf0b34b6 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/record_video.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/record_video.po @@ -68,4 +68,4 @@ msgid "" "``IsaacLab/logs////videos/train``." msgstr "" "录制的视频将保存在与训练检查点相同的目录中,路径为 " -"``IsaacLab/logs////videos/train``。" +"``IsaacLab/logs////videos/train`` 。" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/wrap_rl_env.po b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/wrap_rl_env.po index 5341d5cd2b..32146dad9e 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/how-to/wrap_rl_env.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/how-to/wrap_rl_env.po @@ -106,7 +106,7 @@ msgid "" "called ``\"/OmniverseKit_Persp\"``. The camera's pose and image resolution " "can be configured through the :class:`~envs.ViewerCfg` class." msgstr "" -"用于渲染的视口摄像头是场景中称为 ``\"/OmniverseKit_Persp\"`` 的默认摄像头。摄像头的姿势和图像分辨率可以通过 " +"用于渲染的视口相机是场景中称为 ``\"/OmniverseKit_Persp\"`` 的默认相机。相机的姿势和图像分辨率可以通过 " ":class:`~envs.ViewerCfg` 类进行配置。" #: ../../source/how-to/wrap_rl_env.rst diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/migration/migrating_from_orbit.po b/docs/locale/zh_CN/LC_MESSAGES/source/migration/migrating_from_orbit.po index 9531700af1..c6ff4ca685 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/migration/migrating_from_orbit.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/migration/migrating_from_orbit.po @@ -221,11 +221,11 @@ msgstr "``cuda``: 使用设备ID为``0``的GPU。" #: ../../source/migration/migrating_from_orbit.rst:100 msgid "" "``cuda:N``: Use GPU, where N is the device ID. For example, ``cuda:0``." -msgstr "``cuda:N``: 使用GPU,其中N是设备ID。例如,``cuda:0``。" +msgstr "``cuda:N``: 使用GPU,其中N是设备ID。例如,``cuda:0`` 。" #: ../../source/migration/migrating_from_orbit.rst:101 msgid "The default value is ``cuda:0``." -msgstr "默认值是 ``cuda:0``。" +msgstr "默认值是 ``cuda:0`` 。" #: ../../source/migration/migrating_from_orbit.rst:105 msgid "Offscreen rendering" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/developer-guide/vs_code.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/developer-guide/vs_code.po index 50a9c2c080..483016174d 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/overview/developer-guide/vs_code.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/developer-guide/vs_code.po @@ -46,7 +46,7 @@ msgid "" "``setup_python_env`` in the drop down menu." msgstr "" "运行 VSCode `Tasks `__ ,通过按下 " -"``Ctrl+Shift+P``,选择 ``Tasks: Run Task`` 并在下拉菜单中运行 ``setup_python_env``。" +"``Ctrl+Shift+P``,选择 ``Tasks: Run Task`` 并在下拉菜单中运行 ``setup_python_env`` 。" #: ../../source/overview/developer-guide/vs_code.rst msgid "VSCode Tasks" @@ -116,7 +116,7 @@ msgid "" msgstr "" "如果你想使用不同的 python 解释器(例如,从你的 conda 环境),你需要通过选择并激活你在 VSCode 左下角选择的 python " "解释器来更改使用的 python 解释器,或者打开命令面板(``Ctrl+Shift+P``)并选择 ``Python: Select " -"Interpreter``。" +"Interpreter`` 。" #: ../../source/overview/developer-guide/vs_code.rst:64 msgid "" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/camera.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/camera.po new file mode 100644 index 0000000000..def08c670b --- /dev/null +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/camera.po @@ -0,0 +1,459 @@ +# SOME DESCRIPTIVE TITLE. 
+# Copyright (C) 2022-2024, The Isaac Lab Project Developers. +# This file is distributed under the same license as the Isaac Lab package. +# FIRST AUTHOR , 2024. +msgid "" +msgstr "" +"Project-Id-Version: Isaac Lab 1.3.0\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: Ziqi Fan \n" +"Language-Team: zh_CN \n" +"Language: zh_CN\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"Generated-By: Babel 2.16.0\n" + +#: ../../source/overview/sensors/camera.rst:5 +msgid "Camera" +msgstr "相机" + +#: ../../source/overview/sensors/camera.rst:7 +msgid "" +"Camera sensors are uniquely defined by the use of the ``render_product``, a " +"structure for managing data generated by the rendering pipeline (images). " +"Isaac Lab provides the ability to fully control how these renderings are " +"created through camera parameters like focal length, pose, type, etc... and " +"what kind of data you want to render through the use of Annotators, allowing" +" you to record not only RGB, but also Instance segmentation, object pose, " +"object ID, etc..." +msgstr "" +"相机传感器通过使用 ``render_product`` 来独特地定义,这是一个用于管理渲染管道(图像)生成的数据的结构。Isaac Lab " +"提供了通过相机参数(如焦距、姿态、类型等)完全控制这些渲染如何创建的能力,并且通过使用 " +"Annotators,您可以控制希望渲染的数据类型,允许您不仅记录 RGB 数据,还包括实例分割、物体姿态、物体 ID 等数据。" + +#: ../../source/overview/sensors/camera.rst:9 +msgid "" +"Rendered images are unique among the supported data types in Isaac Lab due " +"to the inherently large bandwidth requirements for moving those data. A " +"single 800 x 600 image with 32-bit color (a single float per pixel) clocks " +"in at just under 2 MB. If we render at 60 fps and record every frame, that " +"camera needs to move 120 MB/s. Multiply this by the number of cameras in an " +"environment and environments in a simulation, and you can quickly see how " +"scaling a naive vectorization of camera data could lead to bandwidth " +"challenges. NVIDIA's Isaac Lab leverages our expertise in GPU hardware to " +"provide an API that specifically addresses these scaling challenges in the " +"rendering pipeline." +msgstr "" +"渲染的图像在 Isaac Lab 支持的数据类型中比较特殊,因为移动这些数据本身就需要很大的带宽。单个 800 x 600 像素、32 " +"位色彩(每个像素一个浮点数)的图像大小接近 2 MB。如果我们以每秒 60 帧的速度渲染并记录每一帧,那么该相机需要以 120 MB/s " +"的速度传输数据。将此值乘以环境中的相机数量和模拟中的环境数量,您可以快速看出,简单地向量化相机数据可能会导致带宽瓶颈。NVIDIA 的 Isaac" +" Lab 利用我们在 GPU 硬件方面的专业知识,提供了一个专门解决渲染管线中这些扩展挑战的 API。" + +#: ../../source/overview/sensors/camera.rst:12 +msgid "Tiled Rendering" +msgstr "分块渲染" + +#: ../../source/overview/sensors/camera.rst:16 +msgid "This feature is only available from Isaac Sim version 4.2.0 onwards." +msgstr "此功能仅适用于 Isaac Sim 版本 4.2.0 及更高版本。" + +#: ../../source/overview/sensors/camera.rst:18 +msgid "" +"Tiled rendering in combination with image processing networks require heavy " +"memory resources, especially at larger resolutions. We recommend running 512" +" cameras in the scene on RTX 4090 GPUs or similar." +msgstr "" +"分块渲染结合图像处理网络需要大量的内存资源,尤其是在更大分辨率下。我们建议在场景中使用 512 个相机,并配备 RTX 4090 GPU " +"或类似的硬件。" + +#: ../../source/overview/sensors/camera.rst:21 +msgid "" +"The Tiled Rendering APIs provide a vectorized interface for collecting data " +"from camera sensors. This is useful for reinforcement learning environments " +"where parallelization can be exploited to accelerate data collection and " +"thus the training loop. 
Tiled rendering works by using a single " +"``render_product`` for **all** clones of a single camera in the scene. The " +"desired dimensions of a single image and the number of environments are used" +" to compute a much larger ``render_product``, consisting of the tiled " +"individual renders from the separate clones of the camera. When all cameras " +"have populated their buffers the render product is \"completed\" and can be " +"moved around as a single, large image, dramatically reducing the overhead " +"for moving the data from the host to the device, for example. Only a single" +" call is used to synchronize the device data, instead of one call per " +"camera, and this is a big part of what makes the Tiled Rendering API more " +"efficient for working with vision data." +msgstr "" +"分块渲染 API " +"提供了一个向量化接口,用于从相机传感器收集数据。这对于强化学习环境非常有用,在这种环境中,可以利用并行化加速数据收集,从而加速训练循环。分块渲染通过使用一个单一的" +" ``render_product`` 来为场景中单个相机的 **所有** 克隆进行操作。单个图像的期望尺寸和环境的数量被用来计算一个更大的 " +"``render_product``,它由该相机各个克隆的分块独立渲染拼接组成。当所有相机都填充完其缓冲区后,渲染产品便 \"完成\" " +",可以作为一个单一的、大尺寸的图像进行移动,从而大幅减少(例如)将数据从主机传输到设备时的开销。只需使用一个调用来同步设备数据,而不是每个相机一个调用,这正是分块渲染" +" API 在处理视觉数据时更高效的一个重要原因。" + +#: ../../source/overview/sensors/camera.rst:23 +msgid "" +"Isaac Lab provides tiled rendering APIs for RGB, depth, along with other " +"annotators through the :class:`~sensors.TiledCamera` class. Configurations " +"for the tiled rendering APIs can be defined through the " +":class:`~sensors.TiledCameraCfg` class, specifying parameters such as the " +"regex expression for all camera paths, the transform for the cameras, the " +"desired data type, the type of cameras to add to the scene, and the camera " +"resolution." +msgstr "" +"Isaac Lab 提供了用于 RGB、深度和其他注释器的分块渲染 API,您可以通过 :class:`~sensors.TiledCamera` " +"类来使用这些 API。分块渲染 API 的配置可以通过 :class:`~sensors.TiledCameraCfg` " +"类定义,指定的参数包括所有相机路径的正则表达式、相机的变换、所需的数据类型、要添加到场景中的相机类型以及相机分辨率。" + +#: ../../source/overview/sensors/camera.rst:38 +msgid "" +"To access the tiled rendering interface, a :class:`~sensors.TiledCamera` " +"object can be created and used to retrieve data from the cameras." +msgstr "要访问分块渲染接口,可以创建一个 :class:`~sensors.TiledCamera` 对象,并用其从相机获取数据。" + +#: ../../source/overview/sensors/camera.rst:46 +msgid "" +"The returned data will be transformed into the shape (num_cameras, height, " +"width, num_channels), which can be used directly as observation for " +"reinforcement learning." +msgstr "" +"返回的数据将被转换为形状 (num_cameras, height, width, num_channels),可以直接作为强化学习的观察值使用。" + +#: ../../source/overview/sensors/camera.rst:48 +msgid "" +"When working with rendering, make sure to add the ``--enable_cameras`` " +"argument when launching the environment. For example:" +msgstr "在进行渲染时,确保在启动环境时添加 ``--enable_cameras`` 参数。例如:" + +#: ../../source/overview/sensors/camera.rst:56 +msgid "Annotators" +msgstr "标注器" + +#: ../../source/overview/sensors/camera.rst:58 +msgid "" +"Both :class:`~sensors.TiledCamera` and :class:`~sensors.Camera` classes " +"provide APIs for retrieving various types annotator data from replicator:" +msgstr "" +":class:`~sensors.TiledCamera` 和 :class:`~sensors.Camera` 这两个类都提供用于从 " +"replicator 检索各种类型标注数据的 API:" + +#: ../../source/overview/sensors/camera.rst:60 +msgid "``\"rgb\"``: A 3-channel rendered color image." +msgstr "``\"rgb\"``: 一个 3 通道渲染的彩色图像。" + +#: ../../source/overview/sensors/camera.rst:61 +msgid "``\"rgba\"``: A 4-channel rendered color image with alpha channel." 
+msgstr "``\"rgba\"``: 一个具有 alpha 通道的 4 通道渲染颜色图像。" + +#: ../../source/overview/sensors/camera.rst:62 +msgid "" +"``\"distance_to_camera\"``: An image containing the distance to camera " +"optical center." +msgstr "``\"distance_to_camera\"``: 包含到相机光学中心的距离的图像。" + +#: ../../source/overview/sensors/camera.rst:63 +msgid "" +"``\"distance_to_image_plane\"``: An image containing distances of 3D points " +"from camera plane along camera's z-axis." +msgstr "``\"distance_to_image_plane\"``: 一个包含从摄像机平面沿摄像机 z 轴到 3D 点的距离的图像。" + +#: ../../source/overview/sensors/camera.rst:64 +msgid "``\"depth\"``: The same as ``\"distance_to_image_plane\"``." +msgstr "``\"depth\"``: 与 ``\"distance_to_image_plane\"`` 相同。" + +#: ../../source/overview/sensors/camera.rst:65 +msgid "" +"``\"normals\"``: An image containing the local surface normal vectors at " +"each pixel." +msgstr "``\"normals\"``: 一个包含每个像素的局部表面法线向量的图像。" + +#: ../../source/overview/sensors/camera.rst:66 +msgid "" +"``\"motion_vectors\"``: An image containing the motion vector data at each " +"pixel." +msgstr "``\"motion_vectors\"``: 一个包含每个像素处运动矢量数据的图像。" + +#: ../../source/overview/sensors/camera.rst:67 +msgid "``\"semantic_segmentation\"``: The semantic segmentation data." +msgstr "``\"semantic_segmentation\"``: 语义分割数据。" + +#: ../../source/overview/sensors/camera.rst:68 +msgid "``\"instance_segmentation_fast\"``: The instance segmentation data." +msgstr "``\"instance_segmentation_fast\"``: 实例分割数据。" + +#: ../../source/overview/sensors/camera.rst:69 +msgid "``\"instance_id_segmentation_fast\"``: The instance id segmentation data." +msgstr "``\"instance_id_segmentation_fast\"``: 实例 ID 分割数据。" + +#: ../../source/overview/sensors/camera.rst:72 +msgid "RGB and RGBA" +msgstr "RGB 和 RGBA" + +#: ../../source/overview/sensors/camera.rst:-1 +msgid "A scene captured in RGB" +msgstr "以RGB格式捕获的场景" + +#: ../../source/overview/sensors/camera.rst:79 +msgid "" +"``rgb`` data type returns a 3-channel RGB colored image of type " +"``torch.uint8``, with dimension (B, H, W, 3)." +msgstr "``rgb`` 数据类型返回一个 3 通道 RGB 彩色图像,类型为 ``torch.uint8``,维度为 (B, H, W, 3)。" + +#: ../../source/overview/sensors/camera.rst:81 +msgid "" +"``rgba`` data type returns a 4-channel RGBA colored image of type " +"``torch.uint8``, with dimension (B, H, W, 4)." +msgstr "" +"``rgba`` 数据类型返回一个 4 通道 RGBA 彩色图像,类型为 ``torch.uint8``,维度为 (B, H, W, 4)。" + +#: ../../source/overview/sensors/camera.rst:83 +msgid "" +"To convert the ``torch.uint8`` data to ``torch.float32``, divide the buffer " +"by 255.0 to obtain a ``torch.float32`` buffer containing data from 0 to 1." +msgstr "" +"将 ``torch.uint8`` 数据转换为 ``torch.float32``,将缓冲区除以 255.0 以获得一个包含 0 到 1 之间数据的 " +"``torch.float32`` 缓冲区。" + +#: ../../source/overview/sensors/camera.rst:86 +msgid "Depth and Distances" +msgstr "深度和距离" + +#: ../../source/overview/sensors/camera.rst:93 +msgid "" +"``distance_to_camera`` returns a single-channel depth image with distance to" +" the camera optical center. The dimension for this annotator is (B, H, W, 1)" +" and has type ``torch.float32``." +msgstr "" +"``distance_to_camera`` 返回一个单通道深度图像,表示到相机光学中心的距离。此注释器的维度为 (B, H, W, 1),类型为 " +"``torch.float32`` 。" + +#: ../../source/overview/sensors/camera.rst:95 +msgid "" +"``distance_to_image_plane`` returns a single-channel depth image with " +"distances of 3D points from the camera plane along the camera's Z-axis. The " +"dimension for this annotator is (B, H, W, 1) and has type ``torch.float32``." 
+msgstr "" +"``distance_to_image_plane`` 返回一个单通道深度图像,表示 3D 点相对于相机平面沿相机 Z 轴的距离。该注释器的维度为 " +"(B, H, W, 1),类型为 ``torch.float32`` 。" + +#: ../../source/overview/sensors/camera.rst:97 +msgid "" +"``depth`` is provided as an alias for ``distance_to_image_plane`` and will " +"return the same data as the ``distance_to_image_plane`` annotator, with " +"dimension (B, H, W, 1) and type ``torch.float32``." +msgstr "" +"``depth`` 被作为 ``distance_to_image_plane`` 的别名,并将返回与 " +"``distance_to_image_plane`` 注释器相同的数据,维度为 (B, H, W, 1),类型为 ``torch.float32`` " +"。" + +#: ../../source/overview/sensors/camera.rst:100 +msgid "Normals" +msgstr "法线" + +#: ../../source/overview/sensors/camera.rst:107 +msgid "" +"``normals`` returns an image containing the local surface normal vectors at " +"each pixel. The buffer has dimension (B, H, W, 3), containing the (x, y, z) " +"information for each vector, and has data type ``torch.float32``." +msgstr "" +"``normals`` 返回一张包含每个像素局部表面法向量的图像。该缓冲区的维度为 (B, H, W, 3),包含每个向量的 (x, y, z) " +"信息,并且数据类型为 ``torch.float32`` 。" + +#: ../../source/overview/sensors/camera.rst:110 +msgid "Motion Vectors" +msgstr "运动向量" + +#: ../../source/overview/sensors/camera.rst:112 +msgid "" +"``motion_vectors`` returns the per-pixel motion vectors in image space, with" +" a 2D array of motion vectors representing the relative motion of a pixel in" +" the camera’s viewport between frames. The buffer has dimension (B, H, W, " +"2), representing x - the motion distance in the horizontal axis (image " +"width) with movement to the left of the image being positive and movement to" +" the right being negative and y - motion distance in the vertical axis " +"(image height) with movement towards the top of the image being positive and" +" movement to the bottom being negative. The data type is ``torch.float32``." +msgstr "" +"``motion_vectors`` 返回图像空间中的每个像素的运动向量,使用 2D 数组表示每个像素在相机视口中在帧之间的相对运动。缓冲区的维度为 " +"(B, H, W, 2),其中 x 表示水平方向(图像宽度)的运动距离,向左移动时为正,向右移动时为负;y " +"表示垂直方向(图像高度)的运动距离,向上移动时为正,向下移动时为负。数据类型为 ``torch.float32`` 。" + +#: ../../source/overview/sensors/camera.rst:115 +msgid "Semantic Segmentation" +msgstr "语义分割" + +#: ../../source/overview/sensors/camera.rst:122 +msgid "" +"``semantic_segmentation`` outputs semantic segmentation of each entity in " +"the camera’s viewport that has semantic labels. In addition to the image " +"buffer, an ``info`` dictionary can be retrieved with " +"``tiled_camera.data.info['semantic_segmentation']`` containing ID to labels " +"information." +msgstr "" +"``semantic_segmentation`` 输出相机视口中每个具有语义标签的实体的语义分割。除了图像缓冲区外,还可以通过 " +"``tiled_camera.data.info['semantic_segmentation']`` 获取一个 ``info`` 字典,该字典包含 " +"ID 到标签的信息。" + +#: ../../source/overview/sensors/camera.rst:124 +msgid "" +"If ``colorize_semantic_segmentation=True`` in the camera config, a 4-channel" +" RGBA image will be returned with dimension (B, H, W, 4) and type " +"``torch.uint8``. The info ``idToLabels`` dictionary will be the mapping from" +" color to semantic labels." +msgstr "" +"如果 ``colorize_semantic_segmentation=True`` 在相机配置中,返回的将是一个 4 通道的 RGBA 图像,维度为 " +"(B, H, W, 4),类型为 ``torch.uint8`` 。信息 ``idToLabels`` 字典将是从颜色到语义标签的映射。" + +#: ../../source/overview/sensors/camera.rst:126 +msgid "" +"If ``colorize_semantic_segmentation=False``, a buffer of dimension (B, H, W," +" 1) of type ``torch.int32`` will be returned, containing the semantic ID of " +"each pixel. The info ``idToLabels`` dictionary will be the mapping from " +"semantic ID to semantic labels." 
+msgstr "" +"如果 ``colorize_semantic_segmentation=False``,则会返回一个维度为 (B, H, W, 1) 且类型为 " +"``torch.int32`` 的缓冲区,包含每个像素的语义 ID。信息 ``idToLabels`` 字典将是从语义 ID 到语义标签的映射。" + +#: ../../source/overview/sensors/camera.rst:129 +msgid "Instance ID Segmentation" +msgstr "实例 ID 分割" + +#: ../../source/overview/sensors/camera.rst:136 +msgid "" +"``instance_id_segmentation_fast`` outputs instance ID segmentation of each " +"entity in the camera’s viewport. The instance ID is unique for each prim in " +"the scene with different paths. In addition to the image buffer, an ``info``" +" dictionary can be retrieved with " +"``tiled_camera.data.info['instance_id_segmentation_fast']`` containing ID to" +" labels information." +msgstr "" +"``instance_id_segmentation_fast`` 输出相机视口中每个实体的实例 ID 分割。每个场景中的 prim " +"拥有不同路径的唯一实例 ID。除了图像缓冲区外,还可以通过 " +"``tiled_camera.data.info['instance_id_segmentation_fast']`` 检索到一个 ``info`` " +"字典,其中包含 ID 到标签的映射信息。" + +#: ../../source/overview/sensors/camera.rst:138 +msgid "" +"The main difference between ``instance_id_segmentation_fast`` and " +"``instance_segmentation_fast`` are that instance segmentation annotator goes" +" down the hierarchy to the lowest level prim which has semantic labels, " +"where instance ID segmentation always goes down to the leaf prim." +msgstr "" +"``instance_id_segmentation_fast`` 和 ``instance_segmentation_fast`` " +"之间的主要区别在于,实例分割注释器会向下遍历层级,直到具有语义标签的最低级 prim,而实例 ID 分割则始终遍历到叶节点 prim。" + +#: ../../source/overview/sensors/camera.rst:140 +msgid "" +"If ``colorize_instance_id_segmentation=True`` in the camera config, a " +"4-channel RGBA image will be returned with dimension (B, H, W, 4) and type " +"``torch.uint8``. The info ``idToLabels`` dictionary will be the mapping from" +" color to USD prim path of that entity." +msgstr "" +"如果 ``colorize_instance_id_segmentation=True`` 在相机配置中,将返回一个 4 通道的 RGBA 图像,维度为" +" (B, H, W, 4),类型为 ``torch.uint8`` 。信息 ``idToLabels`` 字典将是颜色到该实体的 USD prim " +"路径的映射。" + +#: ../../source/overview/sensors/camera.rst:142 +msgid "" +"If ``colorize_instance_id_segmentation=False``, a buffer of dimension (B, H," +" W, 1) of type ``torch.int32`` will be returned, containing the instance ID " +"of each pixel. The info ``idToLabels`` dictionary will be the mapping from " +"instance ID to USD prim path of that entity." +msgstr "" +"如果 ``colorize_instance_id_segmentation=False``,将返回一个形状为 (B, H, W, 1) 的类型为 " +"``torch.int32`` 的缓冲区,包含每个像素的实例 ID。信息 ``idToLabels`` 字典将是从实例 ID 到该实体的 USD " +"prim 路径的映射。" + +#: ../../source/overview/sensors/camera.rst:145 +msgid "Instance Segmentation" +msgstr "实例分割" + +#: ../../source/overview/sensors/camera.rst:152 +msgid "" +"``instance_segmentation_fast`` outputs instance segmentation of each entity " +"in the camera’s viewport. In addition to the image buffer, an ``info`` " +"dictionary can be retrieved with " +"``tiled_camera.data.info['instance_segmentation_fast']`` containing ID to " +"labels and ID to semantic information." +msgstr "" +"``instance_segmentation_fast`` 输出相机视口中每个实体的实例分割。除了图像缓冲区,还可以通过 " +"``tiled_camera.data.info['instance_segmentation_fast']`` 获取 ``info`` 字典,其中包含" +" ID 到标签和 ID 到语义信息。" + +#: ../../source/overview/sensors/camera.rst:154 +msgid "" +"If ``colorize_instance_segmentation=True`` in the camera config, a 4-channel" +" RGBA image will be returned with dimension (B, H, W, 4) and type " +"``torch.uint8``." 
+msgstr "" +"如果 ``colorize_instance_segmentation=True`` 在相机配置中,则会返回一个 4 通道的 RGBA 图像,尺寸为 " +"(B, H, W, 4),类型为 ``torch.uint8`` 。" + +#: ../../source/overview/sensors/camera.rst:156 +msgid "" +"If ``colorize_instance_segmentation=False``, a buffer of dimension (B, H, W," +" 1) of type ``torch.int32`` will be returned, containing the instance ID of " +"each pixel." +msgstr "" +"如果 ``colorize_instance_segmentation=False``,则将返回一个维度为 (B, H, W, 1) 且类型为 " +"``torch.int32`` 的缓冲区,其中包含每个像素的实例 ID。" + +#: ../../source/overview/sensors/camera.rst:158 +msgid "" +"The info ``idToLabels`` dictionary will be the mapping from color to USD " +"prim path of that semantic entity. The info ``idToSemantics`` dictionary " +"will be the mapping from color to semantic labels of that semantic entity." +msgstr "" +"信息 ``idToLabels`` 字典将是从颜色到该语义实体的 USD prim 路径的映射。信息 ``idToSemantics`` " +"字典将是从颜色到该语义实体的语义标签的映射。" + +#: ../../source/overview/sensors/camera.rst:162 +msgid "Current Limitations" +msgstr "当前的限制" + +#: ../../source/overview/sensors/camera.rst:164 +msgid "" +"Due to current limitations in the renderer, we can have only **one** " +":class:`~sensors.TiledCamera` instance in the scene. For use cases that " +"require a setup with more than one camera, we can imitate the multi-camera " +"behavior by moving the location of the camera in between render calls in a " +"step." +msgstr "" +"由于当前渲染器的限制,我们在场景中只能有 **一个** :class:`~sensors.TiledCamera` " +"实例。对于需要多个相机的使用场景,我们可以通过在渲染调用之间移动相机的位置来模仿多相机的行为。" + +#: ../../source/overview/sensors/camera.rst:168 +msgid "" +"For example, in a stereo vision setup, the below snippet can be implemented:" +msgstr "例如,在立体视觉设置中,可以实现以下代码片段:" + +#: ../../source/overview/sensors/camera.rst:186 +msgid "" +"Note that this approach still limits the rendering resolution to be " +"identical for all cameras. Currently, there is no workaround to achieve " +"different resolution images using :class:`~sensors.TiledCamera`. The best " +"approach is to use the largest resolution out of all of the desired " +"resolutions and add additional scaling or cropping operations to the " +"rendered output as a post-processing step." +msgstr "" +"请注意,这种方法仍然将所有相机的渲染分辨率限制为相同。目前,没有解决方案可以使用 :class:`~sensors.TiledCamera` " +"实现不同分辨率的图像。最好的方法是使用所有期望分辨率中的最大分辨率,并在渲染输出中添加额外的缩放或裁剪操作,作为后期处理步骤。" + +#: ../../source/overview/sensors/camera.rst:190 +msgid "" +"In addition, there may be visible quality differences when comparing render " +"outputs of different numbers of environments. Currently, any combined " +"resolution that has a width less than 265 pixels or height less than 265 " +"will automatically switch to the DLAA anti-aliasing mode, which does not " +"perform up-sampling during anti-aliasing. For resolutions larger than 265 in" +" both width and height dimensions, we default to using the \"performance\" " +"DLSS mode for anti-aliasing for performance benefits. Anti-aliasing modes " +"and other rendering parameters can be specified in the " +":class:`~sim.RenderCfg`." 
+msgstr "" +"此外,在比较不同环境数量的渲染输出时,可能会出现明显的质量差异。目前,任何宽度小于 265 像素或高度小于 265 像素的合并分辨率,将自动切换到 " +"DLAA 抗锯齿模式,该模式在抗锯齿过程中不会进行上采样。对于宽度和高度都大于 265 的分辨率,我们默认使用 \"performance\" DLSS" +" 模式进行抗锯齿,以获得性能提升。抗锯齿模式和其他渲染参数可以在 :class:`~sim.RenderCfg` 中指定。" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/contact_sensor.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/contact_sensor.po new file mode 100644 index 0000000000..1774b2de3f --- /dev/null +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/contact_sensor.po @@ -0,0 +1,104 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) 2022-2024, The Isaac Lab Project Developers. +# This file is distributed under the same license as the Isaac Lab package. +# FIRST AUTHOR , 2024. +msgid "" +msgstr "" +"Project-Id-Version: Isaac Lab 1.3.0\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: Ziqi Fan \n" +"Language-Team: zh_CN \n" +"Language: zh_CN\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"Generated-By: Babel 2.16.0\n" + +#: ../../source/overview/sensors/contact_sensor.rst:4 +msgid "Contact Sensor" +msgstr "接触传感器" + +#: ../../source/overview/sensors/contact_sensor.rst:-1 +msgid "A contact sensor with filtering" +msgstr "带有过滤功能的接触传感器" + +#: ../../source/overview/sensors/contact_sensor.rst:11 +msgid "" +"The contact sensor is designed to return the net contact force acting on a " +"given ridgid body. The sensor is written to behave as a physical object, and" +" so the \"scope\" of the contact sensor is limited to the body (or bodies) " +"that defines it. There are multiple ways to define this scope, depending on " +"your need to filter the forces coming from the contact." +msgstr "" +"接触传感器的设计目的是返回作用于给定刚体的净接触力。该传感器被编写为表现得像一个物理对象,因此接触传感器的 \"范围\" " +"仅限于定义它的物体(或物体)。根据您过滤来自接触的力的需求,有多种方法可以定义此范围。" + +#: ../../source/overview/sensors/contact_sensor.rst:13 +msgid "" +"By default, the reported force is the total contact force, but your " +"application may only care about contact forces due to specific objects. " +"Retrieving contact forces from specific objects requires filtering, and this" +" can only be done in a \"many-to-one\" way. A multi-legged robot that needs " +"filterable contact information for its feet would require one sensor per " +"foot to be defined in the environment, but a robotic hand with contact " +"sensors on the tips of each finger can be defined with a single sensor." +msgstr "" +"默认情况下,报告的力是总接触力,但您的应用程序可能只关心特定物体产生的接触力。从特定物体中检索接触力需要过滤,并且这只能以 \"多对一\" " +"的方式完成。一个需要可过滤接触信息的多足机器人,必须在环境中为每个足部定义一个传感器,而一个在每个手指尖端都有接触传感器的机器人手,则可以通过一个传感器来定义。" + +#: ../../source/overview/sensors/contact_sensor.rst:15 +msgid "Consider a simple environment with an Anymal Quadruped and a block" +msgstr "考虑一个简单的环境,其中包含一个 Anymal Quadruped 和一个 block" + +#: ../../source/overview/sensors/contact_sensor.rst:71 +msgid "" +"We define the sensors on the feet of the robot in two different ways. The " +"front feet are independent sensors (one sensor body per foot) and the " +"\"Cube\" is placed under the left foot. The hind feet are defined as a " +"single sensor with multiple bodies." 
+msgstr "" +"我们以两种不同的方式定义机器人脚上的传感器。前脚是独立的传感器(每只脚一个传感器体),并且 \"Cube\" " +"被放置在左脚下方。后脚定义为一个传感器,包含多个传感器体。" + +#: ../../source/overview/sensors/contact_sensor.rst:73 +msgid "We can then run the scene and print the data from the sensors" +msgstr "我们可以然后运行场景并打印来自传感器的数据" + +#: ../../source/overview/sensors/contact_sensor.rst:100 +msgid "" +"Here, we print both the net contact force and the filtered force matrix for " +"each contact sensor defined in the scene. The front left and front right " +"feet report the following" +msgstr "在这里,我们打印出场景中每个接触传感器的净接触力和过滤后的力矩阵。前左脚和前右脚报告以下内容" + +#: ../../source/overview/sensors/contact_sensor.rst:123 +msgid "" +"Notice that even with filtering, both sensors report the net contact force " +"acting on the foot. However only the left foot has a non zero \"force " +"matrix\", because the right foot isn't standing on the filtered body, " +"``/World/envs/env_.*/Cube``. Now, checkout the data coming from the hind " +"feet!" +msgstr "" +"注意,即使进行过滤,两个传感器仍然报告作用在脚上的净接触力。然而,只有左脚具有非零的 \"力矩阵\" " +",因为右脚没有站在被过滤的物体上,``/World/envs/env_.*/Cube`` 。现在,查看来自后脚的数据!" + +#: ../../source/overview/sensors/contact_sensor.rst:138 +msgid "" +"In this case, the contact sensor has two bodies: the left and right hind " +"feet. When the force matrix is queried, the result is ``None`` because this" +" is a many body sensor, and presently Isaac Lab only supports \"many to " +"one\" contact force filtering. Unlike the single body contact sensor, the " +"reported force tensor has multiple entries, with each \"row\" corresponding " +"to the contact force on a single body of the sensor (matching the ordering " +"at construction)." +msgstr "" +"在这种情况下,接触传感器有两个主体:左侧和右侧后脚。当查询力矩阵时,结果是 ``None``,因为这是一个多主体传感器,而目前 Isaac Lab " +"仅支持 \"多对一\" 接触力过滤。与单主体接触传感器不同,报告的力张量具有多个条目,每一 \"行\" " +"对应于传感器单一主体上的接触力(与构造时的顺序匹配)。" + +#: ../../source/overview/sensors/contact_sensor.rst +msgid "Code for contact_sensor.py" +msgstr "contact_sensor.py 的代码" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/frame_transformer.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/frame_transformer.po new file mode 100644 index 0000000000..1dcb8e6aad --- /dev/null +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/frame_transformer.po @@ -0,0 +1,91 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) 2022-2024, The Isaac Lab Project Developers. +# This file is distributed under the same license as the Isaac Lab package. +# FIRST AUTHOR , 2024. +msgid "" +msgstr "" +"Project-Id-Version: Isaac Lab 1.3.0\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: Ziqi Fan \n" +"Language-Team: zh_CN \n" +"Language: zh_CN\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"Generated-By: Babel 2.16.0\n" + +#: ../../source/overview/sensors/frame_transformer.rst:4 +msgid "Frame Transformer" +msgstr "坐标系转换器" + +#: ../../source/overview/sensors/frame_transformer.rst:-1 +msgid "A diagram outlining the basic geometry of frame transformations" +msgstr "概述坐标系变换基本几何形状的图表" + +#: ../../source/overview/sensors/frame_transformer.rst:14 +msgid "" +"One of the most common operations that needs to be performed within a " +"physics simulation is the frame transformation: rewriting a vector or " +"quaternion in the basis of an arbitrary euclidean coordinate system. 
There " +"are many ways to accomplish this within Isaac and USD, but these methods can" +" be cumbersome to implement within Isaac Lab's GPU based simulation and " +"cloned environments. To mitigate this problem, we have designed the Frame " +"Transformer Sensor, that tracks and calculate the relative frame " +"transformations for rigid bodies of interest to the scene." +msgstr "" +"在物理仿真中,最常见的操作之一是坐标系转换:在任意欧几里得坐标系的基底下重写向量或四元数。虽然在 Isaac 和 USD " +"中有多种方法可以实现这一操作,但在 Isaac Lab 的基于 GPU 的仿真和克隆环境中实现这些方法可能会显得繁琐。为了解决这个问题,我们设计了 " +"坐标系转换传感器,它能够跟踪并计算场景中感兴趣的刚体的相对坐标系转换。" + +#: ../../source/overview/sensors/frame_transformer.rst:16 +msgid "" +"The sensory is minimally defined by a source frame and a list of target " +"frames. These definitions take the form of a prim path (for the source) and" +" list of regex capable prim paths the rigid bodies to be tracked (for the " +"targets)." +msgstr "" +"传感器的最小定义由一个源坐标系和一个目标坐标系列表组成。这些定义分别以基本路径(用于源)和支持正则表达式的基本路径列表(用于跟踪的刚体的目标)表示。" + +#: ../../source/overview/sensors/frame_transformer.rst:75 +msgid "We can now run the scene and query the sensor for data" +msgstr "我们现在可以运行场景并查询传感器以获取数据" + +#: ../../source/overview/sensors/frame_transformer.rst:101 +msgid "" +"Let's take a look at the result for tracking specific objects. First, we can" +" take a look at the data coming from the sensors on the feet" +msgstr "让我们来看看跟踪特定物体的结果。首先,我们可以查看来自脚部传感器的数据。" + +#: ../../source/overview/sensors/frame_transformer.rst:-1 +msgid "The frame transformer visualizer" +msgstr "坐标系转换器可视化工具" + +#: ../../source/overview/sensors/frame_transformer.rst:122 +msgid "" +"By activating the visualizer, we can see that the frames of the feet are " +"rotated \"upward\" slightly. We can also see the explicit relative " +"positions and rotations by querying the sensor for data, which returns these" +" values as a list with the same order as the tracked frames. This becomes " +"even more apparent if we examine the transforms specified by regex." +msgstr "" +"通过激活可视化工具,我们可以看到脚部的坐标系略微 \"向上\" " +"旋转。我们还可以通过查询传感器数据来查看明确的相对位置和旋转,这些数据以与跟踪坐标系相同顺序的列表形式返回。如果我们检查由正则表达式指定的变换,这一点变得更加明显。" + +#: ../../source/overview/sensors/frame_transformer.rst:151 +msgid "" +"Here, the sensor is tracking all rigid body children of ``Robot/base``, but " +"this expression is **inclusive**, meaning that the source body itself is " +"also a target. This can be seen both by examining the source and target " +"list, where ``base`` appears twice, and also in the returned data, where the" +" sensor returns the relative transform to itself, (0, 0, 0)." +msgstr "" +"在这里,传感器正在跟踪 ``Robot/base`` 的所有刚体子节点,但这个表达式是 **包含的** " +",意味着源身体本身也是一个目标。这可以通过检查源和目标列表来看,``base`` 出现了两次,也可以通过返回的数据看出,传感器返回相对于自身的变换(0," +" 0, 0)。" + +#: ../../source/overview/sensors/frame_transformer.rst +msgid "Code for frame_transformer_sensor.py" +msgstr "frame_transformer_sensor.py 的代码" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/index.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/index.po new file mode 100644 index 0000000000..d31fe57ff5 --- /dev/null +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/index.po @@ -0,0 +1,67 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) 2022-2024, The Isaac Lab Project Developers. +# This file is distributed under the same license as the Isaac Lab package. +# FIRST AUTHOR , 2024. 
+msgid "" +msgstr "" +"Project-Id-Version: Isaac Lab 1.3.0\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: Ziqi Fan \n" +"Language-Team: zh_CN \n" +"Language: zh_CN\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"Generated-By: Babel 2.16.0\n" + +#: ../../source/overview/sensors/index.rst:4 +msgid "Sensors" +msgstr "传感器" + +#: ../../source/overview/sensors/index.rst:6 +msgid "" +"In this section, we will overview the various sensor APIs provided by Isaac " +"Lab." +msgstr "在本节中,我们将概述 Isaac Lab 提供的各种传感器 API。" + +#: ../../source/overview/sensors/index.rst:8 +msgid "" +"Every sensor in Isaac Lab inherits from the ``SensorBase`` abstract class " +"that provides the core functionality inherent to all sensors, which is to " +"provide access to \"measurements\" of the scene. These measurements can take" +" many forms such as ray-casting results, camera rendered images, or even " +"simply ground truth data queried directly from the simulation (such as " +"poses). Whatever the data may be, we can think of the sensor as having a " +"buffer that is periodically updated with measurements by querying the scene." +" This ``update_period`` is defined in \"simulated\" seconds, meaning that " +"even if the flow of time in the simulation is dilated relative to the real " +"world, the sensor will update at the appropriate rate. The ``SensorBase`` is" +" also designed with vectorizability in mind, holding the buffers for all " +"copies of the sensor across cloned environments." +msgstr "" +"每个传感器在 Isaac Lab 中都继承自 ``SensorBase`` 抽象类,该类提供了所有传感器固有的核心功能,即提供对场景的 \"测量\" " +"数据的访问。这些测量数据可以有多种形式,如光线投射结果、相机渲染的图像,甚至是直接从仿真中查询的真实数据(例如位姿)。无论数据是什么,我们可以认为传感器具有一个缓冲区,该缓冲区通过查询场景定期更新测量数据。这个" +" ``update_period`` 是以 \"模拟\" " +"秒为单位定义的,这意味着即使仿真中的时间流逝相对于现实世界有所延迟,传感器也会以适当的速率更新。``SensorBase`` " +"还考虑到了向量化的设计,持有所有传感器在克隆环境中的副本的缓冲区。" + +#: ../../source/overview/sensors/index.rst:10 +msgid "" +"Updating the buffers is done by overriding the ``_update_buffers_impl`` " +"abstract method of the ``SensorBase`` class. On every time-step of the " +"simulation, ``dt``, all sensors are queried for an update. During this " +"query, the total time since the last update is incremented by ``dt`` for " +"every buffer managed by that particular sensor. If the total time is greater" +" than or equal to the ``update_period`` for a buffer, then that buffer is " +"flagged to be updated on the next query." +msgstr "" +"更新缓冲区是通过重写 ``_update_buffers_impl`` 抽象方法来完成的,该方法属于 ``SensorBase`` " +"类。在每个仿真时间步长 ``dt`` 中,所有传感器都会被查询以获取更新。在此查询过程中,每个由该传感器管理的缓冲区的总时间会通过 ``dt`` " +"递增。如果总时间大于或等于某个缓冲区的 ``update_period``,则该缓冲区会被标记为在下次查询时更新。" + +#: ../../source/overview/sensors/index.rst:12 +msgid "The following pages describe the available sensors in more detail:" +msgstr "以下页面更详细地描述了可用的传感器:" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/ray_caster.po b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/ray_caster.po new file mode 100644 index 0000000000..bc248f03f6 --- /dev/null +++ b/docs/locale/zh_CN/LC_MESSAGES/source/overview/sensors/ray_caster.po @@ -0,0 +1,120 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) 2022-2024, The Isaac Lab Project Developers. +# This file is distributed under the same license as the Isaac Lab package. +# FIRST AUTHOR , 2024. 
+msgid "" +msgstr "" +"Project-Id-Version: Isaac Lab 1.3.0\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2024-11-28 10:51+0800\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: Ziqi Fan \n" +"Language-Team: zh_CN \n" +"Language: zh_CN\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=utf-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Plural-Forms: nplurals=1; plural=0;\n" +"Generated-By: Babel 2.16.0\n" + +#: ../../source/overview/sensors/ray_caster.rst:4 +msgid "Ray Caster" +msgstr "光线投射" + +#: ../../source/overview/sensors/ray_caster.rst:-1 +msgid "A diagram outlining the basic geometry of frame transformations" +msgstr "一个概述框架变换基本几何的图示" + +#: ../../source/overview/sensors/ray_caster.rst:11 +msgid "" +"The Ray Caster sensor (and the ray caster camera) are similar to RTX based " +"rendering in that they both involve casting rays. The difference here is " +"that the rays cast by the Ray Caster sensor return strictly collision " +"information along the cast, and the direction of each individual ray can be " +"specified. They do not bounce, nor are they affected by things like " +"materials or opacity. For each ray specified by the sensor, a line is traced" +" along the path of the ray and the location of first collision with the " +"specified mesh is returned. This is the method used by some of our quadruped" +" examples to measure the local height field." +msgstr "" +"Ray Caster传感器(以及ray caster相机)与基于RTX的渲染类似,二者都涉及到射线投射。不同之处在于,Ray " +"Caster传感器投射的射线严格返回沿投射路径的碰撞信息,并且每条射线的方向可以被指定。它们不会反弹,也不受材质或不透明度等因素的影响。对于传感器指定的每条射线,会沿射线路径追踪一条线,并返回与指定网格的第一次碰撞位置。这是我们的一些四足动物示例中用于测量局部高度场的方法。" + +#: ../../source/overview/sensors/ray_caster.rst:13 +msgid "" +"To keep the sensor performant when there are many cloned environments, the " +"line tracing is done directly in `Warp `_. " +"This is the reason why specific meshes need to be identified to cast " +"against: that mesh data is loaded onto the device by warp when the sensor is" +" initialized. As a consequence, the current iteration of this sensor only " +"works for literally static meshes (meshes that *are not changed from the " +"defaults specified in their USD file*). This constraint will be removed in " +"future releases." +msgstr "" +"为了保持传感器在有多个克隆环境时的性能,线条追踪直接在 `Warp `_ " +"中完成。这就是为什么需要识别特定网格以进行碰撞的原因:当传感器初始化时,warp " +"会将这些网格数据加载到设备上。因此,当前版本的传感器仅适用于字面上的静态网格(即 *那些没有从其 USD 文件中指定的默认值发生变化的* " +"网格)。这个限制将在未来的版本中移除。" + +#: ../../source/overview/sensors/ray_caster.rst:15 +msgid "" +"Using a ray caster sensor requires a **pattern** and a parent xform to be " +"attached to. The pattern defines how the rays are cast, while the prim " +"properties defines the orientation and position of the sensor (additional " +"offsets can be specified for more exact placement). Isaac Lab supports a " +"number of ray casting pattern configurations, including a generic LIDAR and " +"grid pattern." +msgstr "" +"使用光线投射传感器需要附加一个 **模式** 和一个父级 xform。模式定义了光线如何投射,而 prim " +"属性定义了传感器的方向和位置(可以指定额外的偏移量以实现更精确的放置)。Isaac Lab 支持多种光线投射模式配置,包括通用的 LIDAR " +"和网格模式。" + +#: ../../source/overview/sensors/ray_caster.rst:54 +msgid "" +"Notice that the units on the pattern config is in degrees! Also, we enable " +"visualization here to explicitly show the pattern in the rendering, but this" +" is not required and should be disabled for performance tuning." 
+msgstr "注意,模式配置上的单位是以度为单位!另外,我们在这里启用了可视化功能,以便在渲染中明确显示模式,但这不是必需的,并且在性能调优时应该禁用。" + +#: ../../source/overview/sensors/ray_caster.rst:-1 +msgid "Lidar Pattern visualized" +msgstr "激光雷达模式可视化" + +#: ../../source/overview/sensors/ray_caster.rst:61 +msgid "" +"Querying the sensor for data can be done at simulation run time like any " +"other sensor." +msgstr "查询传感器的数据可以像其他传感器一样在仿真运行时进行。" + +#: ../../source/overview/sensors/ray_caster.rst:99 +msgid "" +"Here we can see the data returned by the sensor itself. Notice first that " +"there are 3 closed brackets at the beginning and the end: this is because " +"the data returned is batched by the number of sensors. The ray cast pattern " +"itself has also been flattened, and so the dimensions of the array are ``[N," +" B, 3]`` where ``N`` is the number of sensors, ``B`` is the number of cast " +"rays in the pattern, and 3 is the dimension of the casting space. Finally, " +"notice that the first several values in this casting pattern are the same: " +"this is because the lidar pattern is spherical and we have specified our FOV" +" to be hemispherical, which includes the poles. In this configuration, the " +"\"flattening pattern\" becomes apparent: the first 180 entries will be the " +"same because it's the bottom pole of this hemisphere, and there will be 180 " +"of them because our horizontal FOV is 180 degrees with a resolution of 1 " +"degree." +msgstr "" +"在这里,我们可以看到传感器本身返回的数据。首先请注意,开始和结束处有 3 " +"个闭括号:这是因为返回的数据是按传感器的数量进行分批的。射线投射模式本身也已经被压平,因此数组的维度是 ``[N, B, 3]``,其中 ``N`` " +"是传感器的数量,``B`` 是模式中投射的射线数量,3 是投射空间的维度。最后,请注意,这个投射模式中的前几个值是相同的:这是因为 lidar " +"模式是球形的,我们已将视场(FOV)指定为半球形,这包括了极点。在这种配置下, \"压平模式\" 变得显而易见:前 180 " +"个条目将是相同的,因为它是该半球的底极,并且会有 180 个条目,因为我们的水平视场是 180 度,分辨率为 1 度。" + +#: ../../source/overview/sensors/ray_caster.rst:101 +msgid "" +"You can use this script to experiment with pattern configurations and build " +"an intuition about how the data is stored by altering the ``triggered`` " +"variable on line 99." +msgstr "您可以使用此脚本来实验不同的模式配置,并通过修改第 99 行的 ``triggered`` 变量来建立对数据存储方式的直觉。" + +#: ../../source/overview/sensors/ray_caster.rst +msgid "Code for raycaster_sensor.py" +msgstr "raycaster_sensor.py 的代码" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/refs/issues.po b/docs/locale/zh_CN/LC_MESSAGES/source/refs/issues.po index 70bcd156f2..7fd0ff38b9 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/refs/issues.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/refs/issues.po @@ -39,7 +39,7 @@ msgid "" "clouds. This is a known issue which has to do with the way the PhysX and " "rendering engines work in Omniverse." msgstr "" -"重置环境时,某些资产和传感器的数据字段未更新。这些包括关节链中链接的姿势、摄像头图像、接触传感器读数和激光雷达点云。这是一个已知问题,与 " +"重置环境时,某些资产和传感器的数据字段未更新。这些包括关节链中链接的姿势、相机图像、接触传感器读数和激光雷达点云。这是一个已知问题,与 " "Omniverse 中 PhysX 和渲染引擎的工作方式有关。" #: ../../source/refs/issues.rst:16 @@ -66,7 +66,7 @@ msgid "" "that the sensor data is not updated immediately after a reset and it will " "hold outdated values." 
msgstr "" -"对于与 RTX 渲染相关的传感器(如摄像头),在设置传感器状态后,传感器数据不会立即更新。渲染引擎更新与模拟器的 ``step()`` " +"对于与 RTX 渲染相关的传感器(如相机),在设置传感器状态后,传感器数据不会立即更新。渲染引擎更新与模拟器的 ``step()`` " "调用捆绑在一起,仅在模拟向前步进时才会调用。这意味着传感器数据在重置后不会立即更新,它将保留过时的值。" #: ../../source/refs/issues.rst:29 @@ -100,7 +100,7 @@ msgstr "" #: ../../source/refs/issues.rst:46 msgid "Blank initial frames from the camera" -msgstr "摄像头前的空白初始帧" +msgstr "相机前的空白初始帧" #: ../../source/refs/issues.rst:48 msgid "" @@ -116,7 +116,7 @@ msgstr "" msgid "" "A hack to work around this is to add the following after initializing the " "camera sensor and setting its pose:" -msgstr "解决此问题的一个方法是在初始化摄像头传感器并设置姿势后添加以下内容: " +msgstr "解决此问题的一个方法是在初始化相机传感器并设置姿势后添加以下内容: " #: ../../source/refs/issues.rst:67 msgid "Using instanceable assets for markers" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/setup/faq.po b/docs/locale/zh_CN/LC_MESSAGES/source/setup/faq.po index 9e5c0aa394..f3efd5541d 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/setup/faq.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/setup/faq.po @@ -83,7 +83,7 @@ msgid "" msgstr "" "`Isaac Sim`_ 是一个构建在Omniverse之上的机器人模拟工具包,Omniverse是旨在统一复杂3D工作流的通用平台。Isaac " "Sim利用最新的图形和物理模拟技术,为机器人提供高保真度的模拟环境。它支持ROS/ROS2、各种传感器模拟、域随机化和合成数据创建工具。Isaac " -"Sim中的平铺渲染支持在环境中进行矢量化渲染,并支持使用 `Isaac Automator`_ " +"Sim中的分块渲染支持在环境中进行矢量化渲染,并支持使用 `Isaac Automator`_ " "在云中运行。总的来说,它是机器人学家的一个强大工具,是机器人模拟领域的一个重要进步。" #: ../../source/setup/faq.rst:36 diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/setup/install.po b/docs/locale/zh_CN/LC_MESSAGES/source/setup/install.po index a6b4a43302..1639adf67e 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/setup/install.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/setup/install.po @@ -2,8 +2,6 @@ # Copyright (C) 2022-2024, The Isaac Lab Project Developers. # This file is distributed under the same license as the Isaac Lab package. # FIRST AUTHOR , 2024. 
-# -#, fuzzy msgid "" msgstr "" "Project-Id-Version: Isaac Lab 1.3.0\n" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/setup/wechat.po b/docs/locale/zh_CN/LC_MESSAGES/source/setup/wechat.po index e4e1418fb3..f21d16afd1 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/setup/wechat.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/setup/wechat.po @@ -7,7 +7,7 @@ msgid "" msgstr "" "Project-Id-Version: Isaac Lab 1.2.0\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-11-21 16:00+0800\n" +"POT-Creation-Date: 2024-11-28 11:41+0800\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: Ziqi Fan \n" "Language: zh_CN\n" @@ -31,6 +31,6 @@ msgid "微信交流群二维码" msgstr "" #: ../../source/setup/wechat.rst:11 -msgid "更新日期:2024.11.21" +msgid "更新日期:2024.11.28" msgstr "" diff --git a/docs/locale/zh_CN/LC_MESSAGES/source/tutorials/01_assets/run_deformable_object.po b/docs/locale/zh_CN/LC_MESSAGES/source/tutorials/01_assets/run_deformable_object.po index 457d5e54d7..5ac722f9be 100644 --- a/docs/locale/zh_CN/LC_MESSAGES/source/tutorials/01_assets/run_deformable_object.po +++ b/docs/locale/zh_CN/LC_MESSAGES/source/tutorials/01_assets/run_deformable_object.po @@ -284,7 +284,7 @@ msgid "" "you can either close the window, or press ``Ctrl+C`` in the terminal" msgstr "" "这应该会打开一个包含地面、灯光和几个绿色立方体的场景。其中两个立方体必须从高度上落下并落在地面上。同时,另外两个立方体必须沿 `z` " -"轴移动。你应该会看到一个标记,显示位于立方体左下角的节点的运动学目标位置。要停止仿真,你可以关闭窗口,或在终端中按 ``Ctrl+C``。" +"轴移动。你应该会看到一个标记,显示位于立方体左下角的节点的运动学目标位置。要停止仿真,你可以关闭窗口,或在终端中按 ``Ctrl+C`` 。" #: ../../source/tutorials/01_assets/run_deformable_object.rst:-1 msgid "result of run_deformable_object.py" diff --git a/docs/source/_static/wechat-group2-1121.jpg b/docs/source/_static/wechat-group2-1121.jpg deleted file mode 100644 index b40b99cc1e..0000000000 Binary files a/docs/source/_static/wechat-group2-1121.jpg and /dev/null differ diff --git a/docs/source/_static/wechat-group2-1128.png b/docs/source/_static/wechat-group2-1128.png new file mode 100644 index 0000000000..e6d07a9755 Binary files /dev/null and b/docs/source/_static/wechat-group2-1128.png differ diff --git a/docs/source/setup/wechat.rst b/docs/source/setup/wechat.rst index 3410f54a7d..3a3a12d4c1 100644 --- a/docs/source/setup/wechat.rst +++ b/docs/source/setup/wechat.rst @@ -3,9 +3,9 @@ 一群已满,现开启二群。为保证群聊质量,进群后请按照 **单位-姓名或昵称-研究方向** 修改备注。 -.. figure:: ../_static/wechat-group2-1121.jpg +.. figure:: ../_static/wechat-group2-1128.png :width: 500px :align: center :alt: 微信交流群二维码 -更新日期:2024.11.21 \ No newline at end of file +更新日期:2024.11.28 \ No newline at end of file