
Learning PyTorch with Examples

Created On: Mar 24, 2017 | Last Updated: Jan 21, 2025 | Last Verified: Nov 05, 2024

Author: Justin Johnson

Note

This is one of our older PyTorch tutorials. You can view our latest beginner content in Learn the Basics.

This tutorial introduces the fundamental concepts of PyTorch through self-contained examples.

At its core, PyTorch provides two main features:

  • An n-dimensional Tensor, similar to numpy but that can run on GPUs

  • Automatic differentiation for building and training neural networks

We will use the problem of fitting \(y=\sin(x)\) with a third order polynomial as our running example. The network will have four parameters, and will be trained with gradient descent to fit random data by minimizing the Euclidean distance between the network output and the true output.
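
For reference, the model and loss that the first two examples below minimize are

\[
\hat{y} = a + b x + c x^2 + d x^3, \qquad L = \sum_i \left(\hat{y}_i - y_i\right)^2,
\]

so the hand-written gradients in their code follow directly from the chain rule, for example \(\frac{\partial L}{\partial a} = \sum_i 2(\hat{y}_i - y_i)\) and \(\frac{\partial L}{\partial d} = \sum_i 2(\hat{y}_i - y_i)\,x_i^3\).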

Note

You can browse the individual examples at the end of this page.

Tensors

Warm-up: numpy

Before introducing PyTorch, we will first implement the network using numpy.

Numpy provides an n-dimensional array object, and many functions for manipulating these arrays. Numpy is a generic framework for scientific computing; it does not know anything about computation graphs, deep learning, or gradients. However, we can easily use numpy to fit a third order polynomial to a sine function by manually implementing the forward and backward passes through the network using numpy operations:

# -*- coding: utf-8 -*-
import numpy as np
import math

# Create random input and output data
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)

# Randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    # y = a + b x + c x^2 + d x^3
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')

PyTorch: Tensors

Numpy is a great framework, but it cannot utilize GPUs to accelerate its numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater, so unfortunately numpy will not be enough for modern deep learning.

Here we introduce the most fundamental PyTorch concept: the Tensor. A PyTorch Tensor is conceptually identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they are also useful as a generic tool for scientific computing.

Also unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations. To run a PyTorch Tensor on GPU, you simply need to specify the correct device.
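
For example, a minimal standalone sketch (not part of the example below) of creating a Tensor directly on a GPU, falling back to the CPU when no CUDA device is available:

import torch

# Pick a device, then create Tensors on it; fall back to the CPU if no GPU is present.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, 3, device=device)   # created directly on the chosen device
y = x.to("cpu")                        # .to() moves an existing Tensor between devices
print(x.device, y.device)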

Here we use PyTorch Tensors to fit a third order polynomial to the sine function. Like the numpy example above, we need to manually implement the forward and backward passes through the network:

# -*- coding: utf-8 -*-

import torch
import math


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d


print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Autograd

PyTorch: Tensors and autograd

In the above examples, we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but can quickly get very hairy for large complex networks.

Thankfully, we can use automatic differentiation to automate the computation of backward passes in neural networks. The autograd package in PyTorch provides exactly this functionality. When using autograd, the forward pass of your network will define a computational graph; nodes in the graph will be Tensors, and edges will be functions that produce output Tensors from input Tensors. Backpropagating through this graph then allows you to easily compute gradients.

This sounds complicated, but it is pretty simple to use in practice. Each Tensor represents a node in a computational graph. If x is a Tensor that has x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value.
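
As a tiny standalone sketch (not part of the original example), suppose x holds the value 2.0 and we compute the scalar L = x ** 3; after calling L.backward(), x.grad holds dL/dx = 3 * x ** 2 = 12:

import torch

# Mark x as requiring gradients so autograd tracks operations on it.
x = torch.tensor(2.0, requires_grad=True)
L = x ** 3          # forward pass builds the computational graph
L.backward()        # backward pass populates x.grad
print(x.grad)       # tensor(12.)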

Here we use PyTorch Tensors and autograd to implement our fitting a sine wave with a third order polynomial example; now we no longer need to manually implement the backward pass through the network:

# -*- coding: utf-8 -*-
import torch
import math

# We want to be able to train our model on an `accelerator <https://pytorch.dev.org.tw/docs/stable/torch.html#accelerators>`__
# such as CUDA, MPS, MTIA, or XPU. If the current accelerator is available, we will use it. Otherwise, we use the CPU.

dtype = torch.float
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
print(f"Using {device} device")
torch.set_default_device(device)

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), dtype=dtype, requires_grad=True)
b = torch.randn((), dtype=dtype, requires_grad=True)
c = torch.randn((), dtype=dtype, requires_grad=True)
d = torch.randn((), dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

PyTorch: Defining new autograd functions

Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.

In PyTorch we can easily define our own autograd operator by defining a subclass of torch.autograd.Function and implementing the forward and backward functions. We can then use our new autograd operator by constructing an instance and calling it like a function, passing Tensors containing input data.

In this example we define our model as \(y=a+b P_3(c+dx)\) instead of \(y=a+bx+cx^2+dx^3\), where \(P_3(x)=\frac{1}{2}\left(5x^3-3x\right)\) is the Legendre polynomial of degree three. We write our own custom autograd function for computing the forward and backward of \(P_3\), and use it to implement our model:
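
For reference, the backward function below applies the chain rule through \(P_3\) using its derivative

\[
P_3'(x) = \frac{3}{2}\left(5x^2 - 1\right),
\]

so backward returns grad_output multiplied by \(P_3'\) evaluated at the input saved during the forward pass.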

# -*- coding: utf-8 -*-
import torch
import math


class LegendrePolynomial3(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input ** 2 - 1)


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For this example, we need
# 4 weights: y = a + b * P3(c + d * x), these weights need to be initialized
# not too far from the correct result to ensure convergence.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)

learning_rate = 5e-6
for t in range(2000):
    # To apply our Function, we use Function.apply method. We alias this as 'P3'.
    P3 = LegendrePolynomial3.apply

    # Forward pass: compute predicted y using operations; we compute
    # P3 using our custom autograd operation.
    y_pred = a + b * P3(c + d * x)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')

nn module

PyTorch: nn

Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however, for large neural networks raw autograd can be a bit too low-level.

When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters which will be optimized during learning.

In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.

In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.
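
As a small illustration (a sketch, separate from the example below), an nn.Linear module holds its weight and bias as internal Tensors and maps an input Tensor to an output Tensor when called like a function:

import torch

layer = torch.nn.Linear(3, 1)
print(layer.weight.shape, layer.bias.shape)  # torch.Size([1, 3]) torch.Size([1])

out = layer(torch.randn(2000, 3))            # calling the Module runs its forward pass
print(out.shape)                             # torch.Size([2000, 1])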

In this example we use the nn package to implement our polynomial model network:

# -*- coding: utf-8 -*-
import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,); for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3).

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')

PyTorch: optim

Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters with torch.no_grad(). This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers like AdaGrad, RMSProp, Adam, and others.

The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.
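
As a standalone sketch (not part of the tutorial code, using toy data made up for illustration), switching to a different algorithm such as Adam only changes the optimizer construction; the zero_grad() / backward() / step() pattern in the training loop stays the same:

import torch

model = torch.nn.Linear(3, 1)
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # e.g. Adam instead of RMSprop

xx, y = torch.randn(16, 3), torch.randn(16, 1)             # toy data for the sketch
for t in range(5):
    optimizer.zero_grad()            # clear accumulated gradients
    loss = loss_fn(model(xx), y)     # forward pass and loss
    loss.backward()                  # compute gradients
    optimizer.step()                 # update the model parameters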

In this example we will use the nn package to define our model as before, but we will optimize the model using the RMSprop algorithm provided by the optim package:

# -*- coding: utf-8 -*-
import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Prepare the input tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use RMSprop; the optim package contains many other
# optimization algorithms. The first argument to the RMSprop constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-3
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(xx)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()


linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')

PyTorch: Custom nn Modules

Sometimes you will want to specify models that are more complex than a sequence of existing Modules; for these cases you can define your own Modules by subclassing nn.Module and defining a forward which receives input Tensors and produces output Tensors using other modules or other autograd operations on Tensors.

In this example we implement our third order polynomial as a custom Module subclass:

# -*- coding: utf-8 -*-
import torch
import math


class Polynomial3(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate four parameters and assign them as
        member parameters.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3

    def string(self):
        """
        Just like any class in Python, you can also define custom method on PyTorch modules
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3'


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = Polynomial3()

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters (defined 
# with torch.nn.Parameter) which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
for t in range(2000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')

PyTorch: Control Flow + Weight Sharing

As an example of dynamic graphs and weight sharing, we implement a very strange model: a third-to-fifth order polynomial that on each forward pass chooses a random number between 3 and 5 and uses that many orders, reusing the same weight multiple times to compute the fourth and fifth order terms.

For this model we can use normal Python flow control to implement the loop, and we can implement weight sharing by simply reusing the same parameter multiple times when defining the forward pass.

We can easily implement this model as a Module subclass:

# -*- coding: utf-8 -*-
import random
import torch
import math


class DynamicNet(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate five parameters and assign them as members.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))
        self.e = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 4 or 5
        and reuse the e parameter to compute the contribution of these orders.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same parameter many
        times when defining a computational graph.
        """
        y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
        for exp in range(4, random.randint(4, 6)):
            y = y + self.e * x ** exp
        return y

    def string(self):
        """
        Just like any class in Python, you can also define custom method on PyTorch modules
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3 + {self.e.item()} x^4 ? + {self.e.item()} x^5 ?'


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = DynamicNet()

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.9)
for t in range(30000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 2000 == 1999:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')

Examples

You can browse the above examples here.
