Multi-GPU parallel training in PyTorch can be implemented with either the torch.nn.DataParallel module or the torch.nn.parallel.DistributedDataParallel module. The steps for each approach are shown below.
torch.nn.DataParallel:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Build the model
model = nn.Sequential(
    nn.Linear(10, 100),
    nn.ReLU(),
    nn.Linear(100, 1)
)

# Move the model to GPU and wrap it so each batch is split across all visible GPUs
model = nn.DataParallel(model).cuda()

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Build the data loader (dataset is assumed to be defined elsewhere)
train_loader = DataLoader(dataset, batch_size=64, shuffle=True)

# Training loop (num_epochs is assumed to be defined elsewhere)
for epoch in range(num_epochs):
    for inputs, targets in train_loader:
        inputs, targets = inputs.cuda(), targets.cuda()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
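One caveat worth noting: nn.DataParallel wraps the original model, so its parameters live under the wrapper's module attribute. A minimal sketch of saving and reloading the unwrapped weights, assuming the file name dp_model.pt is just a placeholder:

# Save the underlying model's weights rather than the DataParallel wrapper's
torch.save(model.module.state_dict(), 'dp_model.pt')

# Reload into a plain (unwrapped) copy of the same architecture
state = torch.load('dp_model.pt')
plain_model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 1))
plain_model.load_state_dict(state)

Saving model.module keeps the checkpoint usable later without wrapping the model in DataParallel again.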
torch.nn.parallel.DistributedDataParallel:

import os
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
import torch.distributed as dist

# Initialize the process group (one process per GPU)
dist.init_process_group(backend='nccl')

# Bind this process to its own GPU
local_rank = int(os.environ['LOCAL_RANK'])
torch.cuda.set_device(local_rank)

# Build the model and move it to this process's GPU
model = nn.Sequential(
    nn.Linear(10, 100),
    nn.ReLU(),
    nn.Linear(100, 1)
).cuda(local_rank)

# Wrap the model so gradients are synchronized across processes
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Build the data loader; DistributedSampler gives each process a distinct shard
# (dataset is assumed to be defined elsewhere)
sampler = DistributedSampler(dataset)
train_loader = DataLoader(dataset, batch_size=64, sampler=sampler)

# Training loop (num_epochs is assumed to be defined elsewhere)
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reshuffle the shards differently each epoch
    for inputs, targets in train_loader:
        inputs, targets = inputs.cuda(local_rank), targets.cuda(local_rank)
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
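A script like the one above is typically started with torchrun, which spawns one process per GPU and sets the LOCAL_RANK environment variable read in the code; for example, on a single machine with 4 GPUs (the script name train_ddp.py is just a placeholder):

# launch one process per GPU on a single node (train_ddp.py is a hypothetical script name)
torchrun --nproc_per_node=4 train_ddp.py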
The two examples above show how to implement multi-GPU parallel training in PyTorch with torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel. DataParallel is simpler to set up (a single process on a single machine), but DistributedDataParallel is generally the recommended choice even on one machine, since each process drives its own GPU and avoids the per-step model replication and Python-level overhead of DataParallel; choose whichever fits your setup.