QMixer
- class torchrl.modules.QMixer(state_shape: Union[Tuple[int, ...], Size], mixing_embed_dim: int, n_agents: int, device: Union[device, str, int])[source]
QMix mixer.
Mixes the local Q values of the agents into a single global Q value via a monotonic hyper-network whose parameters are obtained from the global state. From the paper https://arxiv.org/abs/1803.11485 .
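For intuition, here is a minimal sketch of the mixing computation described in the paper. All class and attribute names below are hypothetical illustrations, not torchrl internals; the key point is that monotonicity of the global value in each local Q value is obtained by taking the absolute value of the hyper-network outputs that serve as mixing weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyQMix(nn.Module):
    """Illustrative mixer: the global state parameterizes how local Q values are mixed."""

    def __init__(self, state_dim: int, n_agents: int, embed_dim: int):
        super().__init__()
        # Hyper-networks: map the global state to mixing weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_v = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )
        self.n_agents, self.embed_dim = n_agents, embed_dim

    def forward(self, local_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # local_qs: (B, n_agents, 1), state: (B, state_dim)
        qs = local_qs.transpose(1, 2)  # (B, 1, n_agents)
        # abs() keeps the weights non-negative, so Q_tot is monotonic in each Q_i.
        w1 = torch.abs(self.hyper_w1(state)).view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(qs, w1) + b1)  # (B, 1, embed_dim)
        w2 = torch.abs(self.hyper_w2(state)).view(-1, self.embed_dim, 1)
        v = self.hyper_v(state).view(-1, 1, 1)  # state-dependent bias
        return (torch.bmm(hidden, w2) + v).squeeze(-1)  # (B, 1)

QMixer exposes this idea behind the constructor arguments documented below.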
It transforms the local value of each agent's chosen action, a tensor of shape (*B, self.n_agents, 1), into a global value of shape (*B, 1). Used with torchrl.objectives.QMixerLoss. See examples/multiagent/qmix_vdn.py for an example.
- Parameters:
state_shape (tuple or torch.Size) – the shape of the state (excluding any leading batch dimensions).
mixing_embed_dim (int) – the size of the mixing embedding dimension.
n_agents (int) – number of agents.
device (str or torch.device) – the torch device of the network.
Examples
>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.modules.models.multiagent import QMixer
>>> n_agents = 4
>>> qmix = TensorDictModule(
...     module=QMixer(
...         state_shape=(64, 64, 3),
...         mixing_embed_dim=32,
...         n_agents=n_agents,
...         device="cpu",
...     ),
...     in_keys=[("agents", "chosen_action_value"), "state"],
...     out_keys=["chosen_action_value"],
... )
>>> td = TensorDict({"agents": TensorDict({"chosen_action_value": torch.zeros(32, n_agents, 1)}, [32, n_agents]), "state": torch.zeros(32, 64, 64, 3)}, [32])
>>> td
TensorDict(
    fields={
        agents: TensorDict(
            fields={
                chosen_action_value: Tensor(shape=torch.Size([32, 4, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
            batch_size=torch.Size([32, 4]),
            device=None,
            is_shared=False),
        state: Tensor(shape=torch.Size([32, 64, 64, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([32]),
    device=None,
    is_shared=False)
>>> qmix(td)
TensorDict(
    fields={
        agents: TensorDict(
            fields={
                chosen_action_value: Tensor(shape=torch.Size([32, 4, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
            batch_size=torch.Size([32, 4]),
            device=None,
            is_shared=False),
        chosen_action_value: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.float32, is_shared=False),
        state: Tensor(shape=torch.Size([32, 64, 64, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([32]),
    device=None,
    is_shared=False)
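As a quick sanity check, not part of the torchrl docs, the monotonicity guarantee can be verified numerically: the gradient of the global value with respect to every local Q value should be non-negative. A minimal sketch, assuming the module can be called positionally with (chosen_action_value, state) as the in_keys above suggest:

import torch
from torchrl.modules.models.multiagent import QMixer

n_agents = 4
mixer = QMixer(
    state_shape=(64, 64, 3),
    mixing_embed_dim=32,
    n_agents=n_agents,
    device="cpu",
)
local_q = torch.randn(32, n_agents, 1, requires_grad=True)
state = torch.randn(32, 64, 64, 3)
global_q = mixer(local_q, state)  # shape (32, 1)
global_q.sum().backward()
# Monotonic mixing: d Q_tot / d Q_i >= 0 for every agent i.
assert (local_q.grad >= 0).all()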