Time-series forecasting and sequence generation in PyTorch are typically done with a recurrent neural network (RNN) or a long short-term memory (LSTM) network. The following is a basic example showing how to use PyTorch for time-series forecasting and sequence generation:
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
# Prepare the input sequence
input_sequence = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# Prepare the target sequence (each target is twice the corresponding input)
output_sequence = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
# Convert the data to PyTorch tensors
input_sequence = torch.from_numpy(input_sequence).float()
output_sequence = torch.from_numpy(output_sequence).float()
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Reshape the 1-D input to (batch=1, seq_len, features=1), as expected by nn.RNN
        out, _ = self.rnn(x.unsqueeze(0).unsqueeze(2))
        out = self.fc(out)
        return out
# Instantiate the model
model = RNN(1, 128, 1)
# Define the loss function
criterion = nn.MSELoss()
# Define the optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train the model
num_epochs = 1000
for epoch in range(num_epochs):
    optimizer.zero_grad()
    output = model(input_sequence)
    # Squeeze the (1, 10, 1) output down to (10,) so it matches the target's shape
    loss = criterion(output.squeeze(), output_sequence)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 100 == 0:
        print(f'Epoch {epoch+1}, Loss: {loss.item()}')
# Perform time-series forecasting: predict the value that follows the training range
input_sequence_test = torch.tensor([11]).float()
predicted_output = model(input_sequence_test)
# Perform sequence generation: feed each prediction back in as the next input
generated_sequence = []
input_sequence_gen = torch.tensor([11]).float()
for i in range(10):
    output = model(input_sequence_gen)
    generated_sequence.append(output.item())
    # The output has shape (1, 1, 1); flatten it back to a 1-D tensor for the next step
    input_sequence_gen = output.detach().reshape(1)
print("Predicted output: ", predicted_output.item())
print("Generated sequence: ", generated_sequence)
The example above is deliberately simple and only demonstrates the basic workflow for time-series forecasting and sequence generation with PyTorch. In real applications, you will likely need to adapt and tune it to the requirements of your specific problem.
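For instance, real time-series data is usually turned into overlapping input windows rather than the one-to-one toy mapping used above. Here is a minimal sketch of that preparation step, where the window length and the example series are arbitrary assumptions chosen only for illustration:
def make_windows(series, window_size):
    # Build (input window, next value) pairs from a 1-D series
    inputs, targets = [], []
    for i in range(len(series) - window_size):
        inputs.append(series[i:i + window_size])
        targets.append(series[i + window_size])
    return (torch.from_numpy(np.array(inputs)).float(),
            torch.from_numpy(np.array(targets)).float())

series = np.sin(np.linspace(0, 10, 100))  # an arbitrary example series
X, y = make_windows(series, window_size=5)
# X has shape (95, 5) and y has shape (95,); unsqueeze X to (95, 5, 1) before feeding an RNN
Each row of X is a short history and each entry of y is the value the model should predict next, which is how recurrent models are typically trained on real sequences.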