RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Published: 2024-04-11 12:01

Background: a PyTorch training script that used to train fine, with nothing changed, suddenly started throwing this error. This post records the debugging process.

The code that triggers the error:

loss_qf = L1loss(QF, qf)                  # first L1 loss term
loss_img = L1loss(img_E, img_H_tensor)    # second L1 loss term
loss_train = loss_img + 0.1 * loss_qf     # weighted combined loss
print(loss_train)
optimizer.zero_grad()
loss_train.backward()                     # the RuntimeError is raised here
optimizer.step()
scheduler.step()

The error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 512, 16, 16]], which is output 0 of AddBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
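As the hint suggests, torch.autograd.set_detect_anomaly(True) makes the backward error carry a traceback of the forward op that produced the offending tensor. Below is a minimal standalone sketch, not my network: the saved tensor here belongs to SigmoidBackward0 rather than AddBackward0, but the failure mechanism is the same.

import torch
import torch.nn.functional as F

# Enable anomaly detection only while debugging: it slows training,
# but the backward error then includes a forward-pass traceback
# pointing at the in-place op.
torch.autograd.set_detect_anomaly(True)

a = torch.randn(4, requires_grad=True)
b = torch.sigmoid(a)           # SigmoidBackward0 saves its output b for backward
c = F.relu(b, inplace=True)    # in-place ReLU overwrites b, bumping its version counter
loss = c.sum()
loss.backward()                # RuntimeError: ... is at version 1; expected version 0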

Solutions found by searching:

1. When two losses each call backward() separately, parameters saved for the second backward get invalidated. But that error differs from mine: the linked thread involves TBackward and two losses, while I have only one loss and the error names AddBackward0. This method didn't apply (see the sketch after the link).

https://discuss.pytorch.org/t/solved-pytorch1-5-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/90256
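For reference, the fix for that two-loss pattern is to finish every backward() before calling optimizer.step(), since step() updates the parameters in place. A sketch with hypothetical model and loss names, not my setup:

import torch
import torch.nn as nn

# Hypothetical two-loss training step; the point is the ordering:
# combine the losses (or call both backwards) before step(),
# so the in-place parameter update happens last.
model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(16, 8)

out = model(x)
loss1 = out.mean()
loss2 = out.pow(2).mean()

optimizer.zero_grad()
(loss1 + loss2).backward()   # single backward over the combined loss
optimizer.step()             # in-place parameter update happens last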

2. Change nn.ReLU(inplace=True) in the network to inplace=False, and convert every in-place operation to an out-of-place one, e.g. replace x += 1 with y = x + 1 (sketch below). This didn't work either.
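The conversion from step 2 looks like this, purely illustrative since it did not fix my case:

import torch
import torch.nn as nn

relu = nn.ReLU(inplace=False)      # instead of nn.ReLU(inplace=True)

x = torch.ones(3, requires_grad=True)
h = relu(x)
y = h + 1                          # instead of h += 1: allocates a new tensor,
                                   # so h's version counter stays at 0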

3. Change the PyTorch version. Switching from the original 1.11.0 to 1.9.1 produced the same error; downgrading to 1.10.0 changed the error to ReluBackward0. I then retried step 2, deleted inplace=True from nn.ReLU(inplace=True), and the program ran!!!

https://github.com/pytorch/pytorch/issues/24853

Summary: this turned out to be a PyTorch version issue, a bit of black magic. If you are backpropagating two losses, method 1 should solve it; if the error names ReluBackward0, use method 2; if you still can't find the problem, try switching PyTorch versions.
