Getting Started on Intel GPU
Hardware Prerequisites
Supported OS | Verified Hardware
---|---
Linux | Intel® Client GPU / Intel® Data Center GPU Max Series
Windows | Intel® Client GPU
WSL2 (experimental) | Intel® Client GPU
Intel GPU support (prototype) is ready in PyTorch* 2.6 for Intel® Client GPUs and Intel® Data Center GPU Max Series on both Linux and Windows. It brings Intel GPUs and the SYCL* software stack into the official PyTorch stack, providing a consistent user experience across more AI application scenarios.
Software Prerequisites
To use PyTorch on Intel GPUs, you need to install the Intel GPU driver first. For installation guidance, visit Intel GPU Driver Installation.
The Intel GPU driver is sufficient for binary installation, while building from source requires both the Intel GPU driver and Intel® Deep Learning Essentials. Please refer to PyTorch Installation Prerequisites for Intel GPUs for more information.
Installation
Binaries
Now that we have the Intel GPU driver installed, use the following commands to install pytorch, torchvision, and torchaudio on Linux.
For preview wheels:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/test/xpu
For nightly wheels:
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
From Source
Now that we have the Intel GPU driver and Intel® Deep Learning Essentials installed, follow the guides below to build pytorch, torchvision, and torchaudio from source.
Build torch from source: refer to PyTorch Installation from source.
Build torchvision from source: refer to Torchvision Installation from source.
Build torchaudio from source: refer to Torchaudio Installation from source.
Check Availability for Intel GPU
To check whether your Intel GPU is available, you would typically use the following code:
import torch
torch.xpu.is_available() # torch.xpu is the API for Intel GPU support
If the output is False, double-check the driver installation for your Intel GPU.
Minimum Code Change
If you are migrating code from cuda, change references from cuda to xpu. For example:
# CUDA CODE
tensor = torch.tensor([1.0, 2.0]).to("cuda")
# CODE for Intel GPU
tensor = torch.tensor([1.0, 2.0]).to("xpu")
The following points outline the support and limitations for PyTorch with Intel GPU:
Both training and inference workflows are supported.
Both eager mode and torch.compile are supported.
Data types such as FP32, BF16, FP16, and automatic mixed precision (AMP) are supported.
Examples
This section contains usage examples for both inference and training workflows.
Inference Examples
Here are a few inference workflow examples.
Inference with FP32
import torch
import torchvision.models as models

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)

model = model.to("xpu")
data = data.to("xpu")

with torch.no_grad():
    model(data)

print("Execution finished")
Inference with AMP
import torch
import torchvision.models as models

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)

model = model.to("xpu")
data = data.to("xpu")

with torch.no_grad():
    # set dtype=torch.bfloat16 for BF16
    with torch.autocast(device_type="xpu", dtype=torch.float16, enabled=True):
        model(data)

print("Execution finished")
Inference with torch.compile
import torch
import torchvision.models as models
import time

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)
ITERS = 10

model = model.to("xpu")
data = data.to("xpu")

for i in range(ITERS):
    start = time.time()
    with torch.no_grad():
        model(data)
        torch.xpu.synchronize()
    end = time.time()
    print(f"Inference time before torch.compile for iteration {i}: {(end-start)*1000} ms")

model = torch.compile(model)
for i in range(ITERS):
    start = time.time()
    with torch.no_grad():
        model(data)
        torch.xpu.synchronize()
    end = time.time()
    print(f"Inference time after torch.compile for iteration {i}: {(end-start)*1000} ms")

print("Execution finished")
Training Examples
Here are a few training workflow examples.
Train with FP32
import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")

print("Initiating training")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    if (batch_idx + 1) % 10 == 0:
        iteration_loss = loss.item()
        print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")

torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")
Train with AMP
import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"
use_amp = True

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
scaler = torch.amp.GradScaler(enabled=use_amp)
model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")

print("Initiating training")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    # set dtype=torch.bfloat16 for BF16
    with torch.autocast(device_type="xpu", dtype=torch.float16, enabled=use_amp):
        output = model(data)
        loss = criterion(output, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
    if (batch_idx + 1) % 10 == 0:
        iteration_loss = loss.item()
        print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")

torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")
Train with torch.compile
import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")
model = torch.compile(model)

print("Initiating training with torch.compile")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    if (batch_idx + 1) % 10 == 0:
        iteration_loss = loss.item()
        print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")

torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")