ActionMask
- class torchrl.envs.transforms.ActionMask(action_key: NestedKey = 'action', mask_key: NestedKey = 'action_mask')
An adaptive action masker.
This transform reads the mask from the input tensordict after the step is executed and adapts the mask of the one-hot / categorical action spec accordingly (a minimal sketch of this spec-level masking follows the parameter list below).
Note
This transform will fail when used without an environment.
- Parameters:
  - action_key (NestedKey, optional) – the key where the action tensor can be found. Defaults to "action".
  - mask_key (NestedKey, optional) – the key where the action mask can be found. Defaults to "action_mask".
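To illustrate what adapting the action spec's mask means, here is a minimal sketch of spec-level masking using the update_mask() method that the discrete specs expose (the same method this transform calls internally); the mask values below are illustrative only:

>>> import torch
>>> from torchrl.data.tensor_specs import Categorical
>>> spec = Categorical(4)
>>> # illustrative mask: actions 1 and 3 are disallowed
>>> spec.update_mask(torch.tensor([True, False, True, False]))
>>> action = spec.rand()  # samples only from the allowed indices {0, 2}
>>> assert spec.is_in(action)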
Examples
>>> import torch
>>> from torchrl.data.tensor_specs import Categorical, Binary, Unbounded, Composite
>>> from torchrl.envs.transforms import ActionMask, TransformedEnv
>>> from torchrl.envs.common import EnvBase
>>> class MaskedEnv(EnvBase):
...     def __init__(self, *args, **kwargs):
...         super().__init__(*args, **kwargs)
...         self.action_spec = Categorical(4)
...         self.state_spec = Composite(action_mask=Binary(4, dtype=torch.bool))
...         self.observation_spec = Composite(obs=Unbounded(3))
...         self.reward_spec = Unbounded(1)
...
...     def _reset(self, tensordict=None):
...         td = self.observation_spec.rand()
...         td.update(torch.ones_like(self.state_spec.rand()))
...         return td
...
...     def _step(self, data):
...         td = self.observation_spec.rand()
...         mask = data.get("action_mask")
...         action = data.get("action")
...         # mask out the action that was just taken
...         mask = mask.scatter(-1, action.unsqueeze(-1), 0)
...
...         td.set("action_mask", mask)
...         td.set("reward", self.reward_spec.rand())
...         td.set("done", ~mask.any().view(1))
...         return td
...
...     def _set_seed(self, seed):
...         return seed
...
>>> torch.manual_seed(0)
>>> base_env = MaskedEnv()
>>> env = TransformedEnv(base_env, ActionMask())
>>> r = env.rollout(10)
>>> r["action_mask"]
tensor([[ True,  True,  True,  True],
        [ True,  True, False,  True],
        [ True,  True, False, False],
        [ True, False, False, False]])
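Since the transform keeps the mask of env.action_spec in sync with the latest "action_mask" entry, sampling random actions from the wrapped environment only ever draws currently valid actions. A short usage sketch continuing the example above, where rand_action() samples from the (masked) action spec:

>>> td = env.reset()
>>> td = env.rand_action(td)  # samples an action the current mask allows
>>> td = env.step(td)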