RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target'

I'm new here, please bear with me.

I'm running into a problem when computing the loss of my neural network. I don't understand why the program expects a Long object, since all of my tensors are in float form. I've looked at threads with a similar error where the solution was to cast the tensors to float instead of long, but that doesn't apply in my case because all of my data is already float when it is passed to the network.

Here is my code:

# Dataloader
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader

class LoadInfo(Dataset):
    def __init__(self, prediction, indicator):
        self.prediction = prediction
        self.indicator = indicator
    def __len__(self):
        return len(self.prediction)
    def __getitem__(self, idx):
        data = torch.tensor(self.indicator.iloc[idx, :],dtype=torch.float)
        data = torch.unsqueeze(data, 0)
        label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.float)
        sample = {'data': data, 'label': label}
        return sample

# Trainloader
test_train = LoadInfo(train_label, train_indicators)
trainloader = DataLoader(test_train, batch_size=64,shuffle=True, num_workers=1,pin_memory=True)

# The Network
class NetDense2(nn.Module):

    def __init__(self):
        super(NetDense2, self).__init__()
        self.rnn1 = nn.RNN(11, 100, 3)
        self.rnn2 = nn.RNN(100, 500, 3)
        self.fc1 = nn.Linear(500, 100)
        self.fc2 = nn.Linear(100, 20)
        self.fc3 = nn.Linear(20, 3)

    def forward(self, x):
        x1, h1 = self.rnn1(x)
        x2, h2 = self.rnn2(x1)
        x = F.relu(self.fc1(x2))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Allocate / Transfer to GPU
dense2 = NetDense2()
dense2.cuda()

# Optimizer
import torch.optim as optim
criterion = nn.CrossEntropyLoss()                                 # specify the loss function
optimizer = optim.SGD(dense2.parameters(), lr=0.001, momentum=0.9,weight_decay=0.001)

# Training
dense2.train()
loss_memory = []
for epoch in range(50):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, samp in enumerate(trainloader):
        # get the inputs
        ins = samp['data']
        targets = samp['label']
        tmp = []
        tmp = torch.squeeze(targets.float())
        ins, targets = ins.cuda(),  tmp.cuda()
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = dense2(ins)
        loss = criterion(outputs, targets)     # The loss
        loss.backward()
        optimizer.step()
        # keep track of loss
        running_loss += loss.data.item()

I get the error from the line loss = criterion(outputs, targets) above.

Originally posted by davetherock, licensed under CC BY-SA 4.0.

2 Answers

According to the documentation on the PyTorch website and the official example, the target passed to nn.CrossEntropyLoss() should be of torch.long dtype:

# official example
import torch
import torch.nn as nn
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

# if you replace torch.long above with dtype=torch.float, you will get this error

output = loss(input, target)
output.backward()
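
For instance, rebuilding the target with the default float dtype reproduces the same failure (a minimal sketch added here for illustration):

# a minimal sketch: the same loss call fails when the target is float
bad_target = torch.empty(3).random_(5)   # dtype defaults to torch.float32
# loss(input, bad_target)                # RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target'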

Update this line in your code to:

label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.long) #updated torch.float to torch.long
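
Note that nn.CrossEntropyLoss also expects each target to be a class index in [0, C-1] (0, 1 or 2 for this three-class network), not a one-hot vector. If self.prediction happens to store one-hot rows (an assumption about this dataset, not something stated in the question), the index can be recovered with argmax, e.g. a minimal sketch:

label = torch.tensor(self.prediction.iloc[idx, :].values).argmax().long()  # hypothetical: class index 0..2, dtype torch.long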

Originally posted by Mughees, licensed under CC BY-SA 4.0.

A simple fix that worked for me was replacing

loss = criterion(outputs, targets)

with

loss = criterion(outputs, targets.long())
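
Equivalently, the cast can be applied where the batch is moved to the GPU in the training loop above (a minimal sketch, not the poster's exact code):

ins = samp['data'].cuda()
targets = torch.squeeze(samp['label']).long().cuda()  # cast targets to torch.long before the loss
outputs = dense2(ins)
loss = criterion(outputs, targets)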

Originally posted by homelessmathaddict, licensed under CC BY-SA 4.0.
