OpenMLExperienceReplay
- class torchrl.data.datasets.OpenMLExperienceReplay(name: str, batch_size: int, root: Path | None = None, sampler: Sampler | None = None, writer: Writer | None = None, collate_fn: Callable | None = None, pin_memory: bool = False, prefetch: int | None = None, transform: 'Transform' | None = None)
An experience replay for OpenML data.
This class provides an easy entry point for public datasets. See "Dua, D. and Graff, C. (2017) UCI Machine Learning Repository. http://archive.ics.uci.edu/ml"
The data format follows the TED convention.
The data is accessed via scikit-learn. Make sure sklearn and pandas are installed before retrieving the data:
$ pip install scikit-learn pandas -U
- Parameters:
name (str) – the following datasets are supported: "adult_num", "adult_onehot", "mushroom_num", "mushroom_onehot", "covertype", "shuttle" and "magic".
batch_size (int) – the batch size used during sampling.
sampler (Sampler, optional) – the sampler to be used. If none is provided, a default RandomSampler() will be used.
writer (Writer, optional) – the writer to be used. If none is provided, a default ImmutableDatasetWriter will be used.
collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s)/outputs. Used when using batched loading from a map-style dataset.
pin_memory (bool) – whether pin_memory() should be called on the rb samples.
prefetch (int, optional) – number of next batches to be prefetched using multithreading.
transform (Transform, optional) – Transform to be executed when sample() is called. To chain transforms, use the Compose class.
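Example
A minimal usage sketch (not part of the original entry): it assumes network access for the first download, and the exact keys of the sampled tensordict depend on the chosen dataset.
>>> from torchrl.data.datasets import OpenMLExperienceReplay
>>> data = OpenMLExperienceReplay("adult_num", batch_size=32)  # downloads and caches on first use
>>> batch = data.sample()  # a tensordict whose batch size is [32]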
- add(data: TensorDictBase) → int
Add a single element to the replay buffer.
- Parameters:
data (Any) – data to be added to the replay buffer
- Returns:
index where the data lives in the replay buffer.
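A minimal sketch (assumed setup, not part of the original entry). Dataset buffers such as this one default to an ImmutableDatasetWriter that is meant to reject writes, so the sketch uses a plain ReplayBuffer:
>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import LazyMemmapStorage, ReplayBuffer
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10), batch_size=4)
>>> index = rb.add(TensorDict({"a": torch.zeros(3)}, []))  # one batch-less element; returns its index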
- append_transform(transform: Transform, *, invert: bool = False) → ReplayBuffer
Appends a transform at the end.
Transforms are applied in order when sample is called.
- Parameters:
transform (Transform) – the transform to be appended
- Keyword Arguments:
invert (bool, optional) – if True, the transform will be inverted (forward calls will be called during writing and inverse calls during reading). Defaults to False.
Example
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10), batch_size=4)
>>> data = TensorDict({"a": torch.zeros(10)}, [10])
>>> def t(data):
...     data += 1
...     return data
>>> rb.append_transform(t, invert=True)
>>> rb.extend(data)
>>> assert (data == 1).all()
- abstract property data_path: Path
Path to the dataset, including the split.
- abstract property data_path_root: Path
Path to the dataset root.
- delete()
Deletes a dataset storage from disk.
- dumps(path)
Saves the replay buffer on disk at the specified path.
- Parameters:
path (Path or str) – path where to save the replay buffer.
Example
>>> import tempfile
>>> import tqdm
>>> from torchrl.data import LazyMemmapStorage, TensorDictReplayBuffer
>>> from torchrl.data.replay_buffers.samplers import PrioritizedSampler, RandomSampler
>>> import torch
>>> from tensordict import TensorDict
>>> # Build and populate the replay buffer
>>> S = 1_000_000
>>> sampler = PrioritizedSampler(S, 1.1, 1.0)
>>> # sampler = RandomSampler()
>>> storage = LazyMemmapStorage(S)
>>> rb = TensorDictReplayBuffer(storage=storage, sampler=sampler)
>>>
>>> for _ in tqdm.tqdm(range(100)):
...     td = TensorDict({"obs": torch.randn(100, 3, 4), "next": {"obs": torch.randn(100, 3, 4)}, "td_error": torch.rand(100)}, [100])
...     rb.extend(td)
...     sample = rb.sample(32)
...     rb.update_tensordict_priority(sample)
>>> # save and load the buffer
>>> with tempfile.TemporaryDirectory() as tmpdir:
...     rb.dumps(tmpdir)
...
...     sampler = PrioritizedSampler(S, 1.1, 1.0)
...     # sampler = RandomSampler()
...     storage = LazyMemmapStorage(S)
...     rb_load = TensorDictReplayBuffer(storage=storage, sampler=sampler)
...     rb_load.loads(tmpdir)
...     assert len(rb) == len(rb_load)
- empty()
Empties the replay buffer and resets the cursor to 0.
- extend(tensordicts: TensorDictBase) → Tensor
Extends the replay buffer with one or more elements contained in an iterable.
If present, the inverse transforms will be called.
- Parameters:
data (iterable) – collection of data to be added to the replay buffer.
- Returns:
Indices of the data added to the replay buffer.
Warning
extend() can have an ambiguous signature when dealing with lists of values, which should be interpreted either as a PyTree (in which case all elements in the list will be put in a slice in the stored PyTree in the storage) or as a list of values to add one at a time. To solve this, TorchRL makes a clear-cut distinction between list and tuple: a tuple will be viewed as a PyTree, whereas a list (at the root level) will be interpreted as a stack of values to add one at a time to the buffer. For ListStorage instances, only unbound elements can be provided (no PyTrees).
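A minimal sketch of the batched case (assumed setup, not part of the original entry; as above, a plain ReplayBuffer is used because the dataset's default writer rejects writes):
>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import LazyMemmapStorage, ReplayBuffer
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(100), batch_size=4)
>>> td = TensorDict({"a": torch.zeros(10)}, [10])  # a stack of 10 elements
>>> indices = rb.extend(td)  # writes all 10 elements and returns their indices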
- insert_transform(index: int, transform: Transform, *, invert: bool = False) → ReplayBuffer
Inserts a transform.
Transforms are executed in order when sample is called.
- Parameters:
index (int) – the position where to insert the transform.
transform (Transform) – the transform to be inserted
- Keyword Arguments:
invert (bool, optional) – if True, the transform will be inverted (forward calls will be called during writing and inverse calls during reading). Defaults to False.
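A minimal sketch (assumed setup, not part of the original entry); inserting at index 0 makes this transform run before any previously appended one when sampling:
>>> from torchrl.envs import RenameTransform
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10), batch_size=4)
>>> rb.insert_transform(0, RenameTransform(in_keys=["a"], out_keys=["b"]))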
- loads(path)
Loads a replay buffer state at the given path.
The buffer should have matching components and should have been saved using dumps().
- Parameters:
path (Path or str) – path where the replay buffer was saved.
See dumps() for more info.
- preprocess(fn: Callable[[TensorDictBase], TensorDictBase], dim: int = 0, num_workers: int | None = None, *, chunksize: int | None = None, num_chunks: int | None = None, pool: mp.Pool | None = None, generator: torch.Generator | None = None, max_tasks_per_child: int | None = None, worker_threads: int = 1, index_with_generator: bool = False, pbar: bool = False, mp_start_method: str | None = None, num_frames: int | None = None, dest: str | Path) → TensorStorage
Preprocesses a dataset and returns a new storage with the formatted data.
The data transform must be unitary (work on a single sample of the dataset).
Args and Keyword Args are forwarded to map().
The dataset can subsequently be deleted using delete().
- Keyword Arguments:
dest (path or equivalent) – a path to the location of the new dataset.
num_frames (int, optional) – if provided, only the first num_frames will be transformed. This is useful to debug the transform at first.
- Returns:
A new storage to be used within a ReplayBuffer instance.
Example
>>> from torchrl.data.datasets import MinariExperienceReplay
>>>
>>> data = MinariExperienceReplay(
...     list(MinariExperienceReplay.available_datasets)[0],
...     batch_size=32
... )
>>> print(data)
MinariExperienceReplay(
    storages=TensorStorage(TensorDict(
        fields={
            action: MemoryMappedTensor(shape=torch.Size([1000000, 8]), device=cpu, dtype=torch.float32, is_shared=True),
            episode: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.int64, is_shared=True),
            info: TensorDict(
                fields={
                    distance_from_origin: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    forward_reward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    qpos: MemoryMappedTensor(shape=torch.Size([1000000, 15]), device=cpu, dtype=torch.float64, is_shared=True),
                    qvel: MemoryMappedTensor(shape=torch.Size([1000000, 14]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_ctrl: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_forward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_survive: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    success: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.bool, is_shared=True),
                    x_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    x_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    y_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    y_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False),
            next: TensorDict(
                fields={
                    done: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                    info: TensorDict(
                        fields={
                            distance_from_origin: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            forward_reward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            qpos: MemoryMappedTensor(shape=torch.Size([1000000, 15]), device=cpu, dtype=torch.float64, is_shared=True),
                            qvel: MemoryMappedTensor(shape=torch.Size([1000000, 14]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_ctrl: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_forward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_survive: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            success: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.bool, is_shared=True),
                            x_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            x_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            y_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            y_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True)},
                        batch_size=torch.Size([1000000]),
                        device=cpu,
                        is_shared=False),
                    observation: TensorDict(
                        fields={
                            achieved_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            desired_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            observation: MemoryMappedTensor(shape=torch.Size([1000000, 27]), device=cpu, dtype=torch.float64, is_shared=True)},
                        batch_size=torch.Size([1000000]),
                        device=cpu,
                        is_shared=False),
                    reward: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.float64, is_shared=True),
                    terminated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                    truncated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False),
            observation: TensorDict(
                fields={
                    achieved_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    desired_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    observation: MemoryMappedTensor(shape=torch.Size([1000000, 27]), device=cpu, dtype=torch.float64, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False)},
        batch_size=torch.Size([1000000]),
        device=cpu,
        is_shared=False)),
    samplers=RandomSampler,
    writers=ImmutableDatasetWriter(),
    batch_size=32,
    transform=Compose(
    ),
    collate_fn=<function _collate_id at 0x120e21dc0>)
>>> from torchrl.envs import CatTensors, Compose
>>> from tempfile import TemporaryDirectory
>>>
>>> cat_tensors = CatTensors(
...     in_keys=[("observation", "observation"), ("observation", "achieved_goal"),
...              ("observation", "desired_goal")],
...     out_key="obs"
... )
>>> cat_next_tensors = CatTensors(
...     in_keys=[("next", "observation", "observation"),
...              ("next", "observation", "achieved_goal"),
...              ("next", "observation", "desired_goal")],
...     out_key=("next", "obs")
... )
>>> t = Compose(cat_tensors, cat_next_tensors)
>>>
>>> def func(td):
...     td = td.select(
...         "action",
...         "episode",
...         ("next", "done"),
...         ("next", "observation"),
...         ("next", "reward"),
...         ("next", "terminated"),
...         ("next", "truncated"),
...         "observation"
...     )
...     td = t(td)
...     return td
>>> with TemporaryDirectory() as tmpdir:
...     new_storage = data.preprocess(func, num_workers=4, pbar=True, mp_start_method="fork", dest=tmpdir)
...     rb = ReplayBuffer(storage=new_storage)
...     print(rb)
ReplayBuffer(
    storage=TensorStorage(
        data=TensorDict(
            fields={
                action: MemoryMappedTensor(shape=torch.Size([1000000, 8]), device=cpu, dtype=torch.float32, is_shared=True),
                episode: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.int64, is_shared=True),
                next: TensorDict(
                    fields={
                        done: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                        obs: MemoryMappedTensor(shape=torch.Size([1000000, 31]), device=cpu, dtype=torch.float64, is_shared=True),
                        observation: TensorDict(
                            fields={
                            },
                            batch_size=torch.Size([1000000]),
                            device=cpu,
                            is_shared=False),
                        reward: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.float64, is_shared=True),
                        terminated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                        truncated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True)},
                    batch_size=torch.Size([1000000]),
                    device=cpu,
                    is_shared=False),
                obs: MemoryMappedTensor(shape=torch.Size([1000000, 31]), device=cpu, dtype=torch.float64, is_shared=True),
                observation: TensorDict(
                    fields={
                    },
                    batch_size=torch.Size([1000000]),
                    device=cpu,
                    is_shared=False)},
            batch_size=torch.Size([1000000]),
            device=cpu,
            is_shared=False),
        shape=torch.Size([1000000]),
        len=1000000,
        max_size=1000000),
    sampler=RandomSampler(),
    writer=RoundRobinWriter(cursor=0, full_storage=True),
    batch_size=None,
    collate_fn=<function _collate_id at 0x168406fc0>)
- register_load_hook(hook: Callable[[Any], Any])
Registers a load hook for the storage.
Note
Hooks are currently not serialized when saving a replay buffer: they must be manually re-initialized every time the buffer is created.
- register_save_hook(hook: Callable[[Any], Any])
Registers a save hook for the storage.
Note
Hooks are currently not serialized when saving a replay buffer: they must be manually re-initialized every time the buffer is created.
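A minimal sketch (assumed semantics, not part of the original entry): per the Callable[[Any], Any] signature, a hook receives the storage contents and returns a possibly transformed value, so an identity hook is the safest illustration:
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10))
>>> def hook(data):
...     return data  # e.g. cast to a serializable form; identity here
>>> rb.register_save_hook(hook)
>>> rb.register_load_hook(hook)  # re-register on every newly created buffer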
- sample(batch_size: Optional[int] = None, return_info: bool = False, include_info: Optional[bool] = None) → TensorDictBase
Samples a batch of data from the replay buffer.
Uses Sampler to sample indices, and retrieves them from Storage.
- Parameters:
batch_size (int, optional) – size of data to be collected. If none is provided, this method will sample a batch-size as indicated by the sampler.
return_info (bool) – whether to return info. If True, the result is a tuple (data, info). If False, the result is the data.
- Returns:
A tensordict containing a batch of data selected in the replay buffer. A tuple containing this tensordict and info if the return_info flag is set to True.
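A minimal sketch (not part of the original entry), assuming the dataset buffer built in the class-level sketch above:
>>> batch = data.sample()  # uses the batch_size passed at construction
>>> batch, info = data.sample(16, return_info=True)  # override the batch size and also get sampler info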
- set_storage(storage: Storage, collate_fn: Optional[Callable] = None)
Sets a new storage in the replay buffer and returns the previous storage.
- Parameters:
storage (Storage) – the new storage for the buffer.
collate_fn (callable, optional) – if provided, the collate_fn is set to this value. Otherwise it is reset to a default value.
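A minimal sketch (assumed setup, not part of the original entry):
>>> from torchrl.data import LazyMemmapStorage, ReplayBuffer
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10))
>>> prev_storage = rb.set_storage(LazyMemmapStorage(100))  # swap in a larger storage; the old one is returned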
- property write_count
The total number of items written so far in the buffer through the add and extend methods.
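A minimal sketch (assumed setup, not part of the original entry), relying on the "total written" semantics stated above: the counter keeps growing once the buffer wraps around, unlike len():
>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import LazyMemmapStorage, ReplayBuffer
>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10))
>>> rb.extend(TensorDict({"a": torch.zeros(25)}, [25]))
>>> print(len(rb), rb.write_count)  # 10 25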