Training with PyTorch¶
Created On: Nov 30, 2021 | Last Updated: May 31, 2023 | Last Verified: Nov 05, 2024
Follow along with the video below or on youtube.
Introduction¶
In past videos, we've discussed and demonstrated:
Building models with the neural network layers and functions of the torch.nn module
The mechanics of automated gradient computation, which is central to gradient-based model training
Using TensorBoard to visualize training progress and other activities
In this video, we'll be adding some new tools to your inventory:
We'll get familiar with the dataset and dataloader abstractions, and how they ease the process of feeding data to your model during a training loop
We'll discuss specific loss functions and when to use them
We'll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function
Finally, we'll pull all of these together and see a full PyTorch training loop in action.
Dataset and DataLoader¶
The Dataset and DataLoader classes encapsulate the process of pulling your data from storage and exposing it to your training loop in batches.
The Dataset is responsible for accessing and processing single instances of data.
The DataLoader pulls instances of data from the Dataset (either automatically or with a sampler that you define), collects them in batches, and returns them for consumption by your training loop. The DataLoader works with all kinds of datasets, regardless of the type of data they contain.
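To make that division of labor concrete, here is a minimal sketch of a custom map-style Dataset wrapping in-memory tensors. The class name and data are hypothetical illustrations; the Fashion-MNIST dataset used in this tutorial ships with its own Dataset implementation.

import torch
from torch.utils.data import Dataset, DataLoader

class InMemoryDataset(Dataset):  # hypothetical illustration class
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        # The DataLoader uses this to know how many instances are available
        return len(self.labels)

    def __getitem__(self, idx):
        # Access and return a single (input, label) instance
        return self.features[idx], self.labels[idx]

ds = InMemoryDataset(torch.rand(100, 1, 28, 28), torch.randint(0, 10, (100,)))
loader = DataLoader(ds, batch_size=4, shuffle=True)  # shuffled batches of 4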
For this tutorial, we'll be using the Fashion-MNIST dataset provided by TorchVision. We use torchvision.transforms.Normalize() to zero-center and normalize the distribution of the image tile content, and download both training and validation data splits.
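Normalize((0.5,), (0.5,)) computes output = (input - mean) / std per channel, so pixel values in [0, 1] (as produced by ToTensor()) are mapped to [-1, 1]. A quick sanity check of that arithmetic:

import torch
import torchvision.transforms as transforms

norm = transforms.Normalize((0.5,), (0.5,))
x = torch.tensor([[[0.0, 0.5, 1.0]]])  # one channel, values in [0, 1]
print(norm(x))                         # tensor([[[-1., 0., 1.]]])

With that in mind, let's set up our transforms, datasets, and data loaders: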
import torch
import torchvision
import torchvision.transforms as transforms
# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
from datetime import datetime
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Create datasets for training & validation, download if necessary
training_set = torchvision.datasets.FashionMNIST('./data', train=True, transform=transform, download=True)
validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)
# Create data loaders for our datasets; shuffle for training, not for validation
training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)
# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')
# Report split sizes
print('Training set has {} instances'.format(len(training_set)))
print('Validation set has {} instances'.format(len(validation_set)))
Training set has 60000 instances
Validation set has 10000 instances
As always, let's visualize the data as a sanity check:
import matplotlib.pyplot as plt
import numpy as np
# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))
dataiter = iter(training_loader)
images, labels = next(dataiter)
# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)
print(' '.join(classes[labels[j]] for j in range(4)))

Sandal Sneaker Coat Sneaker
The Model¶
The model we'll use in this example is a variant of LeNet-5 - it should be familiar if you've watched the previous videos in this series.
import torch.nn as nn
import torch.nn.functional as F
# PyTorch models inherit from torch.nn.Module
class GarmentClassifier(nn.Module):
    def __init__(self):
        super(GarmentClassifier, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
model = GarmentClassifier()
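As a quick optional sanity check, we can pass a dummy batch through the untrained model and confirm that it emits one logit for each of the 10 classes:

with torch.no_grad():
    dummy = torch.rand(1, 1, 28, 28)  # one single-channel 28x28 image
    print(model(dummy).shape)         # torch.Size([1, 10])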
Loss Function¶
For this example, we'll be using a cross-entropy loss. For demonstration purposes, we'll create batches of dummy output and label values, run them through the loss function, and examine the result.
loss_fn = torch.nn.CrossEntropyLoss()
# NB: Loss functions expect data in batches, so we're creating batches of 4
# Represents the model's confidence in each of the 10 classes for a given input
dummy_outputs = torch.rand(4, 10)
# Represents the correct class among the 10 being tested
dummy_labels = torch.tensor([1, 5, 3, 7])
print(dummy_outputs)
print(dummy_labels)
loss = loss_fn(dummy_outputs, dummy_labels)
print('Total loss for this batch: {}'.format(loss.item()))
tensor([[0.7026, 0.1489, 0.0065, 0.6841, 0.4166, 0.3980, 0.9849, 0.6701, 0.4601,
0.8599],
[0.7461, 0.3920, 0.9978, 0.0354, 0.9843, 0.0312, 0.5989, 0.2888, 0.8170,
0.4150],
[0.8408, 0.5368, 0.0059, 0.8931, 0.3942, 0.7349, 0.5500, 0.0074, 0.0554,
0.1537],
[0.7282, 0.8755, 0.3649, 0.4566, 0.8796, 0.2390, 0.9865, 0.7549, 0.9105,
0.5427]])
tensor([1, 5, 3, 7])
Total loss for this batch: 2.428950071334839
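As an aside, torch.nn.CrossEntropyLoss combines a log-softmax with a negative log-likelihood loss, so the same value can be reproduced manually; a minimal check on the dummy batch above:

import torch.nn.functional as F

manual_loss = F.nll_loss(F.log_softmax(dummy_outputs, dim=1), dummy_labels)
print(manual_loss.item())  # matches loss.item() above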
Optimizer¶
For this example, we'll be using simple stochastic gradient descent with momentum.
It can be instructive to try some variations on this optimization scheme:
Learning rate determines the size of the steps the optimizer takes. What does a different learning rate do to your training results, in terms of accuracy and time to converge?
Momentum nudges the optimizer in the direction of strongest gradient over multiple steps. What does changing this value do to your results?
Try some different optimization algorithms, such as averaged SGD, Adagrad, or Adam. How do your results differ?
# Optimizers specified in the torch.optim package
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
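If you'd like to try the variations suggested above, swapping in a different optimizer is a one-line change. ASGD (averaged SGD), Adagrad, and Adam are all available in torch.optim; the learning rates below are plausible starting points rather than tuned values:

# optimizer = torch.optim.ASGD(model.parameters(), lr=0.001)
# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)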
The Training Loop¶
Below, we have a function that performs one training epoch. It enumerates data from the DataLoader, and on each pass of the loop does the following:
Gets a batch of training data from the DataLoader
Zeros the optimizer's gradients
Performs an inference - that is, gets predictions from the model for an input batch
Calculates the loss for that set of predictions vs. the labels on the dataset
Calculates the backward gradients over the learning weights
Tells the optimizer to perform one learning step - that is, adjust the model's learning weights based on the observed gradients for this batch, according to the optimization algorithm we chose
It reports on the loss for every 1000 batches.
Finally, it reports the average per-batch loss for the last 1000 batches, for comparison with a validation run
def train_one_epoch(epoch_index, tb_writer):
    running_loss = 0.
    last_loss = 0.

    # Here, we use enumerate(training_loader) instead of
    # iter(training_loader) so that we can track the batch
    # index and do some intra-epoch reporting
    for i, data in enumerate(training_loader):
        # Every data instance is an input + label pair
        inputs, labels = data

        # Zero your gradients for every batch!
        optimizer.zero_grad()

        # Make predictions for this batch
        outputs = model(inputs)

        # Compute the loss and its gradients
        loss = loss_fn(outputs, labels)
        loss.backward()

        # Adjust learning weights
        optimizer.step()

        # Gather data and report
        running_loss += loss.item()
        if i % 1000 == 999:
            last_loss = running_loss / 1000 # loss per batch
            print('  batch {} loss: {}'.format(i + 1, last_loss))
            tb_x = epoch_index * len(training_loader) + i + 1
            tb_writer.add_scalar('Loss/train', last_loss, tb_x)
            running_loss = 0.

    return last_loss
Per-Epoch Activity¶
There are a couple of things we'll want to do once per epoch:
Perform validation by checking our relative loss on a set of data that was not used for training, and report this
Save a copy of the model
Here, we'll do our reporting in TensorBoard. This will require going to the command line to start TensorBoard, and opening it in another browser tab.
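If TensorBoard isn't already running, it can be started from a shell in the directory containing the runs/ folder created below (this assumes the tensorboard package is installed):

tensorboard --logdir=runs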
# Initializing in a separate cell so we can easily add more epochs to the same run
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
writer = SummaryWriter('runs/fashion_trainer_{}'.format(timestamp))
epoch_number = 0
EPOCHS = 5
best_vloss = 1_000_000.
for epoch in range(EPOCHS):
    print('EPOCH {}:'.format(epoch_number + 1))

    # Make sure gradient tracking is on, and do a pass over the data
    model.train(True)
    avg_loss = train_one_epoch(epoch_number, writer)

    running_vloss = 0.0
    # Set the model to evaluation mode, disabling dropout and using population
    # statistics for batch normalization.
    model.eval()

    # Disable gradient computation and reduce memory consumption.
    with torch.no_grad():
        for i, vdata in enumerate(validation_loader):
            vinputs, vlabels = vdata
            voutputs = model(vinputs)
            vloss = loss_fn(voutputs, vlabels)
            running_vloss += vloss

    avg_vloss = running_vloss / (i + 1)
    print('LOSS train {} valid {}'.format(avg_loss, avg_vloss))

    # Log the running loss averaged per batch
    # for both training and validation
    writer.add_scalars('Training vs. Validation Loss',
                    { 'Training' : avg_loss, 'Validation' : avg_vloss },
                    epoch_number + 1)
    writer.flush()

    # Track best performance, and save the model's state
    if avg_vloss < best_vloss:
        best_vloss = avg_vloss
        model_path = 'model_{}_{}'.format(timestamp, epoch_number)
        torch.save(model.state_dict(), model_path)

    epoch_number += 1
EPOCH 1:
batch 1000 loss: 1.6334228584356607
batch 2000 loss: 0.8325267538074403
batch 3000 loss: 0.7359380583595484
batch 4000 loss: 0.6198329215242994
batch 5000 loss: 0.6000315657821484
batch 6000 loss: 0.555109024874866
batch 7000 loss: 0.5260250487388112
batch 8000 loss: 0.4973462742221891
batch 9000 loss: 0.4781935699362075
batch 10000 loss: 0.47880298678041433
batch 11000 loss: 0.45598648857555235
batch 12000 loss: 0.4327470133750467
batch 13000 loss: 0.41800182418141046
batch 14000 loss: 0.4115047634313814
batch 15000 loss: 0.4211296908891527
LOSS train 0.4211296908891527 valid 0.414460688829422
EPOCH 2:
batch 1000 loss: 0.3879808729066281
batch 2000 loss: 0.35912817339546743
batch 3000 loss: 0.38074520684120944
batch 4000 loss: 0.3614532373107213
batch 5000 loss: 0.36850082185724753
batch 6000 loss: 0.3703581801643886
batch 7000 loss: 0.38547042514081115
batch 8000 loss: 0.37846584360170527
batch 9000 loss: 0.3341486988377292
batch 10000 loss: 0.3433013284947956
batch 11000 loss: 0.35607743899174965
batch 12000 loss: 0.3499939931873523
batch 13000 loss: 0.33874178926000603
batch 14000 loss: 0.35130289171106416
batch 15000 loss: 0.3394507191307202
LOSS train 0.3394507191307202 valid 0.3581162691116333
EPOCH 3:
batch 1000 loss: 0.3319729989422485
batch 2000 loss: 0.29558994361863006
batch 3000 loss: 0.3107374766407593
batch 4000 loss: 0.3298987646112146
batch 5000 loss: 0.30858693152241906
batch 6000 loss: 0.33916381367447684
batch 7000 loss: 0.3105102765217889
batch 8000 loss: 0.3011080777524912
batch 9000 loss: 0.3142058177240979
batch 10000 loss: 0.31458891937109
batch 11000 loss: 0.31527258940579483
batch 12000 loss: 0.31501667268342864
batch 13000 loss: 0.3011875962628328
batch 14000 loss: 0.30012811454350596
batch 15000 loss: 0.31833117976446373
LOSS train 0.31833117976446373 valid 0.3307691514492035
EPOCH 4:
batch 1000 loss: 0.2786161053752294
batch 2000 loss: 0.27965198021690596
batch 3000 loss: 0.28595415444140965
batch 4000 loss: 0.292985666413857
batch 5000 loss: 0.3069892351147719
batch 6000 loss: 0.29902250939945224
batch 7000 loss: 0.2863366014406201
batch 8000 loss: 0.2655441066541243
batch 9000 loss: 0.3045048695363293
batch 10000 loss: 0.27626545656517554
batch 11000 loss: 0.2808379335970967
batch 12000 loss: 0.29241049340573955
batch 13000 loss: 0.28030834131941446
batch 14000 loss: 0.2983542350126445
batch 15000 loss: 0.3009556676162611
LOSS train 0.3009556676162611 valid 0.41686952114105225
EPOCH 5:
batch 1000 loss: 0.2614263167564495
batch 2000 loss: 0.2587047562422049
batch 3000 loss: 0.2642477260621345
batch 4000 loss: 0.2825975873669813
batch 5000 loss: 0.26987933717705165
batch 6000 loss: 0.2759250026817317
batch 7000 loss: 0.26055969463163275
batch 8000 loss: 0.29164007206353565
batch 9000 loss: 0.2893096504513578
batch 10000 loss: 0.2486029507305684
batch 11000 loss: 0.2732803234480907
batch 12000 loss: 0.27927226484491985
batch 13000 loss: 0.2686819267635074
batch 14000 loss: 0.24746483912148323
batch 15000 loss: 0.27903492261294194
LOSS train 0.27903492261294194 valid 0.31206756830215454
To load a saved version of the model:
saved_model = GarmentClassifier()
saved_model.load_state_dict(torch.load(PATH))  # PATH is the file path of a saved checkpoint, such as the model_path used above
Once you've loaded the model, it's ready for whatever you need it for - more training, inference, or analysis.
Note that if your model has constructor parameters that affect model structure, you'll need to provide them and configure the model identically to the state in which it was saved.
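For example, a minimal inference sketch with the reloaded model, reusing the last validation batch (vinputs) left over from the training loop above:

saved_model.eval()                 # disable dropout; use batchnorm running stats
with torch.no_grad():
    logits = saved_model(vinputs)  # forward pass on a batch of images
    print(logits.argmax(dim=1))    # predicted class indices for the batch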
Other Resources¶
Docs on the data utilities, including Dataset and DataLoader, at pytorch.org
A note on the use of pinned memory for GPU training
Documentation on the datasets available in TorchVision, TorchText, and TorchAudio
Documentation on the loss functions available in PyTorch
Documentation on the torch.optim package, which includes optimizers and related tools, such as learning rate scheduling
A detailed tutorial on saving and loading models
The Tutorials section of pytorch.org contains tutorials on a broad variety of training tasks, including classification in different domains, generative adversarial networks, reinforcement learning, and more
Total running time of the script: (5 minutes 35.279 seconds)