DistributionalQValueModule
- class torchrl.modules.tensordict_module.DistributionalQValueModule(*args, **kwargs)[source]
Distributional Q-value hook for Q-value policies.
This module processes a tensor containing action-value logits into its argmax component (i.e., the resulting greedy action), following a given action space (one-hot, binary or categorical). It works with both tensordicts and regular tensors.
The input action values are expected to be the result of a log-softmax operation.
For more details regarding distributional DQN, refer to "A Distributional Perspective on Reinforcement Learning", https://arxiv.org/pdf/1707.06887.pdf
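Conceptually, for a "categorical" action space the greedy action is obtained by exponentiating the log-softmax logits, taking the expectation of the support under each action's distribution, and selecting the argmax over actions. The snippet below is a minimal illustrative sketch of that computation, not the module's internal code; the shapes (3 atoms, 4 actions) are assumptions.
>>> import torch
>>> support = torch.tensor([-1.0, 0.0, 1.0])             # atom values of the distribution
>>> logits = torch.randn(3, 4).log_softmax(dim=-2)        # [atoms, actions] log-probabilities
>>> probs = logits.exp()                                   # per-action probability over the support
>>> expected_q = (probs * support.unsqueeze(-1)).sum(-2)   # expected value of each action
>>> greedy_action = expected_q.argmax(-1)                  # index of the greedy action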
- Parameters:
  - action_space (str, optional) – The action space. Must be one of "one-hot", "mult-one-hot", "binary" or "categorical". This argument is exclusive with spec, since spec conditions the action_space.
  - support (torch.Tensor) – Support of the action values.
  - action_value_key (str or tuple of str, optional) – The input key representing the action value. Defaults to "action_value".
  - action_mask_key (str or tuple of str, optional) – The input key representing the action mask. Defaults to "None" (equivalent to no masking).
  - out_keys (list of str or tuple of str, optional) – The output keys representing the action and the action value. Defaults to ["action", "action_value"].
  - var_nums (int, optional) – If action_space = "mult-one-hot", this value represents the cardinality of each action component.
  - spec (TensorSpec, optional) – If provided, the spec of the action (and/or other outputs). This is exclusive with action_space, as the spec conditions the action space.
  - safe (bool) – If True, the value of the output is checked against the input spec. Out-of-domain sampling can occur because of exploration policies or numerical under/overflow issues. If this value is out of bounds, it is projected back onto the desired space using the TensorSpec.project method. Defaults to False.
Examples
>>> from tensordict import TensorDict
>>> torch.manual_seed(0)
>>> action_space = "categorical"
>>> action_value_key = "my_action_value"
>>> support = torch.tensor([-1, 0.0, 1.0]) # the action value is between -1 and 1
>>> actor = DistributionalQValueModule(action_space, support=support, action_value_key=action_value_key)
>>> # This module works with both tensordict and regular tensors:
>>> value = torch.full((3, 4), -100)
>>> # the first bin (-1) of the first action is high: there's a high chance that it has a low value
>>> value[0, 0] = 0
>>> # the second bin (0) of the second action is high: there's a high chance that it has an intermediate value
>>> value[1, 1] = 0
>>> # the third bin (1) of the third action is high: there's a high chance that it has a high value
>>> value[2, 2] = 0
>>> actor(my_action_value=value)
(tensor(2), tensor([[   0, -100, -100, -100],
        [-100,    0, -100, -100],
        [-100, -100,    0, -100]]))
>>> actor(value)
(tensor(2), tensor([[   0, -100, -100, -100],
        [-100,    0, -100, -100],
        [-100, -100,    0, -100]]))
>>> actor(TensorDict({action_value_key: value}, []))
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int64, is_shared=False),
        my_action_value: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.int64, is_shared=False)},
    batch_size=torch.Size([]),
    device=None,
    is_shared=False)
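In practice, the module is typically chained after a value network that outputs log-softmax logits over the support for each action (shape [..., atoms, actions], as in the example above). The sketch below is an assumed usage pattern: the 8-dimensional observation, the network architecture and the number of atoms/actions are illustrative choices, not part of the documented API.
>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule, TensorDictSequential
>>> from torchrl.modules.tensordict_module import DistributionalQValueModule
>>> n_atoms, n_actions = 51, 4
>>> support = torch.linspace(-10.0, 10.0, n_atoms)
>>> # hypothetical value network: 8-dim observation -> [n_atoms, n_actions] log-probabilities
>>> net = torch.nn.Sequential(
...     torch.nn.Linear(8, n_atoms * n_actions),
...     torch.nn.Unflatten(-1, (n_atoms, n_actions)),
...     torch.nn.LogSoftmax(dim=-2),   # log-softmax over the atoms dimension
... )
>>> value_net = TensorDictModule(net, in_keys=["observation"], out_keys=["action_value"])
>>> qvalue = DistributionalQValueModule(action_space="categorical", support=support)
>>> actor = TensorDictSequential(value_net, qvalue)
>>> td = actor(TensorDict({"observation": torch.randn(8)}, []))
>>> td["action"]  # greedy action chosen from the distributional action values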