
Video API

Note

Try this in Colab, or go to the end to download the full example code.

This example illustrates some of the APIs that torchvision offers for videos, together with an example of how to build datasets and more.

1. Introduction: building a new video object and examining the properties

First, we select a video to test the object with. For convenience, we use a video from the kinetics400 dataset. To create the reader, we need to define the path and the stream we want to use.

Chosen video statistics:

  • WUzgd7C1pWA.mp4
    • source
      • kinetics-400

    • video
      • H-264

      • MPEG-4 AVC (part 10) (avc1)

      • fps: 29.97

    • audio
      • MPEG AAC audio (mp4a)

      • sample rate: 48 kHz

import torch
import torchvision
from torchvision.datasets.utils import download_url
torchvision.set_video_backend("video_reader")

# Download the sample video
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true",
    ".",
    "WUzgd7C1pWA.mp4"
)
video_path = "./WUzgd7C1pWA.mp4"

Streams are defined in a similar fashion to torch devices. We encode them as strings in the form stream_type:stream_id, where stream_type is a string and stream_id is a long int. The constructor also accepts passing a stream_type only, in which case the stream is auto-discovered. Firstly, let's get the metadata for our particular video:

stream = "video"
video = torchvision.io.VideoReader(video_path, stream)
video.get_metadata()
{'video': {'duration': [10.9109], 'fps': [29.97002997002997]}, 'audio': {'duration': [10.9], 'framerate': [48000.0]}, 'subtitles': {'duration': []}, 'cc': {'duration': []}}

Here we can see that the video has two streams - a video and an audio stream. Currently available stream types include ['video', 'audio']. Each descriptor consists of two parts: the stream type (e.g. 'video') and a unique stream id (determined by the video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only the stream type is passed, the decoder auto-detects the first stream of that type and returns it.
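
If a container held multiple streams of the same type, an explicit descriptor could be passed instead. As a minimal sketch (assuming the first video stream of this file has id 0):

# Select a stream explicitly in the "stream_type:stream_id" form.
# The id 0 is an assumption here; actual ids depend on the encoding.
video_explicit = torchvision.io.VideoReader(video_path, "video:0")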

Let's read all the frames from the video stream. By default, the return value of next(video_reader) is a dict containing the following fields.

The return fields are (a quick inspection is sketched after this list):

  • data: containing a torch.tensor with the decoded frame

  • pts: containing a float timestamp (in seconds) of this particular frame
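
As a quick check of this structure, a minimal sketch that pulls a single frame and inspects both fields:

# Rewind, grab one frame, and inspect the returned dict.
video.seek(0)
first_frame = next(video)
print(first_frame['data'].shape)  # the frame as a torch.Tensor
print(first_frame['pts'])         # its timestamp in seconds
video.seek(0)  # rewind so the reads below start from the beginning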

metadata = video.get_metadata()
video.set_current_stream("audio")

frames = []  # we are going to save the frames here.
ptss = []  # pts is a presentation timestamp in seconds (float) of each frame
for frame in video:
    frames.append(frame['data'])
    ptss.append(frame['pts'])

print("PTS for first five frames ", ptss[:5])
print("Total number of frames: ", len(frames))
approx_nf = metadata['audio']['duration'][0] * metadata['audio']['framerate'][0]
print("Approx total number of datapoints we can expect: ", approx_nf)
print("Read data size: ", frames[0].size(0) * len(frames))
PTS for first five frames  [0.0, 0.021332999999999998, 0.042667, 0.064, 0.08533299999999999]
Total number of frames:  511
Approx total number of datapoints we can expect:  523200.0
Read data size:  523264

But what if we only want to read a certain time segment of the video? That can be done quite easily using the combination of our seek function, and the fact that each call to next returns the presentation timestamp of the returned frame in seconds.
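
Spelled out as a plain loop, a minimal sketch of reading ten frames starting at the 2-second mark could look like this:

# Plain-loop sketch: seek to the 2-second mark, then read frames
# one by one until ten of them have been collected.
video.set_current_stream("video")
video.seek(2)
frames = []
for frame in video:
    frames.append(frame['data'])
    if len(frames) == 10:
        break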

Given that our implementation relies on python iterators, we can leverage itertools to simplify the process and make it more pythonic.

For example, if we wanted to read ten frames from the second second:

import itertools
video.set_current_stream("video")

frames = []  # we are going to save the frames here.

# We seek into the second second of the video and use islice to get 10 frames
for frame in itertools.islice(video.seek(2), 10):
    frames.append(frame['data'])

print("Total number of frames: ", len(frames))
Total number of frames:  10

Or if we wanted to read from the 2nd to the 5th second: we seek into the second second of the video, then use itertools.takewhile to get the correct number of frames.

video.set_current_stream("video")
frames = []  # we are going to save the frames here.
video = video.seek(2)

for frame in itertools.takewhile(lambda x: x['pts'] <= 5, video):
    frames.append(frame['data'])

print("Total number of frames: ", len(frames))
approx_nf = (5 - 2) * video.get_metadata()['video']['fps'][0]
print("We can expect approx: ", approx_nf)
print("Tensor size: ", frames[0].size())
Total number of frames:  90
We can expect approx:  89.91008991008991
Tensor size:  torch.Size([3, 256, 340])

2. Building a sample read_video function

We can use the methods above to build a read video function that follows the same API as the existing read_video function.

def example_read_video(video_object, start=0, end=None, read_video=True, read_audio=True):
    if end is None:
        end = float("inf")
    if end < start:
        raise ValueError(
            "end time should be larger than start time, got "
            f"start time={start} and end time={end}"
        )

    video_frames = torch.empty(0)
    video_pts = []
    if read_video:
        video_object.set_current_stream("video")
        frames = []
        for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):
            frames.append(frame['data'])
            video_pts.append(frame['pts'])
        if len(frames) > 0:
            video_frames = torch.stack(frames, 0)

    audio_frames = torch.empty(0)
    audio_pts = []
    if read_audio:
        video_object.set_current_stream("audio")
        frames = []
        for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):
            frames.append(frame['data'])
            audio_pts.append(frame['pts'])
        if len(frames) > 0:
            audio_frames = torch.cat(frames, 0)

    return video_frames, audio_frames, (video_pts, audio_pts), video_object.get_metadata()


# Total number of frames should be 327 for video and 523264 datapoints for audio
vf, af, info, meta = example_read_video(video)
print(vf.size(), af.size())
torch.Size([327, 3, 256, 340]) torch.Size([523264, 1])
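
The start and end arguments follow the same pts-based semantics as the seek snippets above; as a sketch, reading only the segment between seconds 1 and 3:

# Read both streams for the 1s-3s segment only (values chosen for illustration).
vf, af, info, meta = example_read_video(video, start=1, end=3)
print(vf.size(), af.size())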

3. Building an example randomly sampled dataset (can be applied to the training dataset of kinetics400)

Great, now we can apply the same principles to build an example dataset. We suggest trying an iterable dataset for this purpose. Here we build an example dataset that reads a randomly selected clip of frames from each video.

Make an example dataset:

import os
os.makedirs("./dataset", exist_ok=True)
os.makedirs("./dataset/1", exist_ok=True)
os.makedirs("./dataset/2", exist_ok=True)

Download the videos:

from torchvision.datasets.utils import download_url
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true",
    "./dataset/1", "WUzgd7C1pWA.mp4"
)
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi?raw=true",
    "./dataset/1",
    "RATRACE_wave_f_nm_np1_fr_goo_37.avi"
)
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/SOX5yA1l24A.mp4?raw=true",
    "./dataset/2",
    "SOX5yA1l24A.mp4"
)
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi?raw=true",
    "./dataset/2",
    "v_SoccerJuggling_g23_c01.avi"
)
download_url(
    "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi?raw=true",
    "./dataset/2",
    "v_SoccerJuggling_g24_c01.avi"
)

Housekeeping and utilities:

import os
import random

from torchvision.datasets.folder import make_dataset
from torchvision import transforms as t


def _find_classes(dir):
    classes = [d.name for d in os.scandir(dir) if d.is_dir()]
    classes.sort()
    class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
    return classes, class_to_idx


def get_samples(root, extensions=(".mp4", ".avi")):
    _, class_to_idx = _find_classes(root)
    return make_dataset(root, class_to_idx, extensions=extensions)
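
Since make_dataset returns a list of (path, class_index) tuples, a quick sketch of what get_samples yields on the directory tree built above:

# Each sample is a (filepath, class_index) tuple.
samples = get_samples("./dataset")
print(samples[:2])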

We are going to define the dataset and some basic arguments. We assume the structure of the FolderDataset, and add the following parameters:

  • clip_len: length of a clip in frames

  • frame_transform: transform for every frame individually

  • video_transform: transform on a video sequence

Note

We actually add an epoch size, since using the IterableDataset class allows us to naturally oversample clips or images from each video if needed.

class RandomDataset(torch.utils.data.IterableDataset):
    def __init__(self, root, epoch_size=None, frame_transform=None, video_transform=None, clip_len=16):
        super().__init__()

        self.samples = get_samples(root)

        # Allow for temporal jittering
        if epoch_size is None:
            epoch_size = len(self.samples)
        self.epoch_size = epoch_size

        self.clip_len = clip_len
        self.frame_transform = frame_transform
        self.video_transform = video_transform

    def __iter__(self):
        for i in range(self.epoch_size):
            # Get random sample
            path, target = random.choice(self.samples)
            # Get video object
            vid = torchvision.io.VideoReader(path, "video")
            metadata = vid.get_metadata()
            video_frames = []  # video frame buffer

            # Seek and return frames
            max_seek = metadata["video"]['duration'][0] - (self.clip_len / metadata["video"]['fps'][0])
            start = random.uniform(0., max_seek)
            for frame in itertools.islice(vid.seek(start), self.clip_len):
                video_frames.append(self.frame_transform(frame['data']))
                current_pts = frame['pts']
            # Stack it into a tensor
            video = torch.stack(video_frames, 0)
            if self.video_transform:
                video = self.video_transform(video)
            output = {
                'path': path,
                'video': video,
                'target': target,
                'start': start,
                'end': current_pts}
            yield output

Given a path to a folder structure of videos, i.e.:

  • dataset
    • class 1
      • file 0

      • file 1

    • class 2
      • file 0

      • file 1

we can generate a dataloader and test the dataset.

transforms = [t.Resize((112, 112))]
frame_transform = t.Compose(transforms)

dataset = RandomDataset("./dataset", epoch_size=None, frame_transform=frame_transform)
from torch.utils.data import DataLoader
loader = DataLoader(dataset, batch_size=12)
data = {"video": [], 'start': [], 'end': [], 'tensorsize': []}
for batch in loader:
    for i in range(len(batch['path'])):
        data['video'].append(batch['path'][i])
        data['start'].append(batch['start'][i].item())
        data['end'].append(batch['end'][i].item())
        data['tensorsize'].append(batch['video'][i].size())
print(data)
{'video': ['./dataset/2/v_SoccerJuggling_g23_c01.avi', './dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi', './dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi', './dataset/2/v_SoccerJuggling_g24_c01.avi'], 'start': [0.5363459567786094, 1.5255444179343287, 4.661538505419801, 0.791575600359427, 0.4954620502570422], 'end': [1.067733, 2.033333, 5.1718329999999995, 1.3, 1.001], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}

4. Data Visualization

Example of visualized video:

import matplotlib.pyplot as plt

plt.figure(figsize=(12, 12))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(batch["video"][0, i, ...].permute(1, 2, 0))
    plt.axis("off")
[Figure: a 4x4 grid of the sampled video frames]

Cleanup the video and dataset:

import os
import shutil
os.remove("./WUzgd7C1pWA.mp4")
shutil.rmtree("./dataset")

Total running time of the script: (0 minutes 4.790 seconds)
