
DecisionTransformer

class torchrl.modules.DecisionTransformer(state_dim, action_dim, config: dict | DTConfig = None)[source]

Online Decision Transformer.

Described in https://arxiv.org/abs/2202.05607.

If the user does not provide a specific configuration, the transformer uses the following default configuration to build the GPT2 model:

default_config = {
    "n_embd": 256,
    "n_layer": 4,
    "n_head": 4,
    "n_inner": 1024,
    "activation": "relu",
    "n_positions": 1024,
    "resid_pdrop": 0.1,
    "attn_pdrop": 0.1,
}
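As a plain-Python sketch, a partial user config can be thought of as being overlaid on these defaults (the helper name `merge_config` is hypothetical; torchrl performs the equivalent merge internally when building the model):

```python
# Defaults as listed above.
default_config = {
    "n_embd": 256,
    "n_layer": 4,
    "n_head": 4,
    "n_inner": 1024,
    "activation": "relu",
    "n_positions": 1024,
    "resid_pdrop": 0.1,
    "attn_pdrop": 0.1,
}

def merge_config(user_config=None):
    """Overlay user-supplied keys on the default configuration.

    Hypothetical helper illustrating the dict-over-defaults behavior.
    """
    merged = dict(default_config)
    if user_config:
        merged.update(user_config)
    return merged

config = merge_config({"n_embd": 128})
print(config["n_embd"], config["n_layer"])  # 128 4
```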

Parameters:
  • state_dim (int) – dimension of the state space

  • action_dim (int) – dimension of the action space

  • config (DTConfig or dict, optional) – transformer architecture configuration, used to build the GPT2Config for the transformer. Defaults to default_config.

Examples

>>> config = DecisionTransformer.default_config()
>>> config.n_embd = 128
>>> print(config)
DTConfig(n_embd: 128, n_layer: 4, n_head: 4, n_inner: 1024, activation: relu, n_positions: 1024, resid_pdrop: 0.1, attn_pdrop: 0.1)
>>> # alternatively
>>> config = DecisionTransformer.DTConfig(n_embd=128)
>>> model = DecisionTransformer(state_dim=4, action_dim=2, config=config)
>>> batch_size = [3, 32]
>>> length = 10
>>> observation = torch.randn(*batch_size, length, 4)
>>> action = torch.randn(*batch_size, length, 2)
>>> return_to_go = torch.randn(*batch_size, length, 1)
>>> output = model(observation, action, return_to_go)
>>> output.shape
torch.Size([3, 32, 10, 128])
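The output above is a tensor of hidden states with trailing dimension n_embd. A minimal sketch (not torchrl's actual actor implementation) of turning those hidden states into actions is to project them with a linear head, as Decision-Transformer-style policies typically do:

```python
import torch
from torch import nn

n_embd, action_dim = 128, 2          # matches the example above
action_head = nn.Linear(n_embd, action_dim)

# Stands in for `output` from the example above.
hidden = torch.randn(3, 32, 10, n_embd)
# Squash the projected actions into [-1, 1] with tanh.
actions = torch.tanh(action_head(hidden))
print(actions.shape)                 # torch.Size([3, 32, 10, 2])
```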
class DTConfig(n_embd: Any = 256, n_layer: Any = 4, n_head: Any = 4, n_inner: Any = 1024, activation: Any = 'relu', n_positions: Any = 1024, resid_pdrop: Any = 0.1, attn_pdrop: Any = 0.1)[source]

Default configuration for DecisionTransformer.

forward(observation: Tensor, action: Tensor, return_to_go: Tensor)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
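A small illustration of this note, using a generic nn.Module rather than DecisionTransformer itself: calling the module instance runs registered hooks, while calling .forward() directly silently skips them.

```python
import torch
from torch import nn

calls = []
layer = nn.Linear(4, 2)
# Record every hooked forward pass.
layer.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.randn(1, 4)
layer(x)          # hook fires
layer.forward(x)  # hook is silently skipped
print(calls)      # ['hook']
```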
