DMControlWrapper
- torchrl.envs.DMControlWrapper(*args, **kwargs)[source]
DeepMind Control lab environment wrapper.
The DeepMind control library can be found here: https://github.com/deepmind/dm_control.
Paper: https://arxiv.org/abs/2006.12983
- Parameters:
env (dm_control.suite env) – the Task environment instance.
- Keyword Arguments:
from_pixels (bool, optional) – if True, an attempt to return the pixel observations from the environment will be made. By default, these observations are written under the "pixels" entry. Defaults to False (see the usage sketch after this list).
pixels_only (bool, optional) – if True, only the pixel observations will be returned (by default, under the "pixels" entry in the output tensordict). If False, observations (e.g., states) and pixels will be returned whenever from_pixels=True. Defaults to True.
frame_skip (int, optional) – if provided, indicates for how many steps the same action is to be repeated. The observation returned will be the last observation of the sequence, whereas the reward will be the sum of rewards across steps.
device (torch.device, optional) – if provided, the device on which the data is to be cast. Defaults to torch.device("cpu").
batch_size (torch.Size, optional) – the batch size of the environment. Should match the leading dimensions of all observations, done states, rewards, actions and infos. Defaults to torch.Size([]).
allow_done_after_reset (bool, optional) – if True, it is tolerated for the environment to be done immediately after reset() is called. Defaults to False.
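A minimal usage sketch combining several of these keyword arguments, assuming dm_control and torchrl are installed; the entry names it checks for ("pixels", "position", "velocity") follow the cheetah example further below:

>>> from dm_control import suite
>>> from torchrl.envs import DMControlWrapper
>>> # from_pixels=True adds rendered frames under "pixels"; pixels_only=False
>>> # keeps the state observations (here "position" and "velocity") alongside
>>> # them; frame_skip=2 repeats each action for two simulation steps.
>>> env = DMControlWrapper(
...     suite.load("cheetah", "run"),
...     from_pixels=True,
...     pixels_only=False,
...     frame_skip=2,
... )
>>> td = env.reset()
>>> print(sorted(td.keys()))  # expect "pixels" next to "position"/"velocity"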
- Variables:
available_envs (list) – a list of Tuple[str, List[str]] representing the available environment/task pairs.
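As a small illustrative sketch of that pairing structure (each domain name mapped to its list of task names), one can iterate over the attribute on a wrapped instance:

>>> from dm_control import suite
>>> from torchrl.envs import DMControlWrapper
>>> env = DMControlWrapper(suite.load("cheetah", "run"))
>>> for domain, tasks in env.available_envs:
...     print(f"{domain}: {tasks}")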
Examples
>>> from dm_control import suite
>>> from torchrl.envs import DMControlWrapper
>>> env = suite.load("cheetah", "run")
>>> env = DMControlWrapper(env,
...     from_pixels=True, frame_skip=4)
>>> td = env.rand_step()
>>> print(td)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([6]), device=cpu, dtype=torch.float64, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([240, 320, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                position: Tensor(shape=torch.Size([8]), device=cpu, dtype=torch.float64, is_shared=False),
                reward: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.float64, is_shared=False),
                terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                velocity: Tensor(shape=torch.Size([9]), device=cpu, dtype=torch.float64, is_shared=False)},
            batch_size=torch.Size([]),
            device=cpu,
            is_shared=False)},
    batch_size=torch.Size([]),
    device=cpu,
    is_shared=False)
>>> print(env.available_envs)
[('acrobot', ['swingup', 'swingup_sparse']), ('ball_in_cup', ['catch']), ('cartpole', ['balance', 'balance_sparse', 'swingup', 'swingup_sparse', 'three_poles', 'two_poles']), ('cheetah', ['run']), ('finger', ['spin', 'turn_easy', 'turn_hard']), ('fish', ['upright', 'swim']), ('hopper', ['stand', 'hop']), ('humanoid', ['stand', 'walk', 'run', 'run_pure_state']), ('manipulator', ['bring_ball', 'bring_peg', 'insert_ball', 'insert_peg']), ('pendulum', ['swingup']), ('point_mass', ['easy', 'hard']), ('reacher', ['easy', 'hard']), ('swimmer', ['swimmer6', 'swimmer15']), ('walker', ['stand', 'walk', 'run']), ('dog', ['fetch', 'run', 'stand', 'trot', 'walk']), ('humanoid_CMU', ['run', 'stand', 'walk']), ('lqr', ['lqr_2_1', 'lqr_6_2']), ('quadruped', ['escape', 'fetch', 'run', 'walk']), ('stacker', ['stack_2', 'stack_4'])]