one of the variables needed for gradient computation has been modified by an inplace operation

Published: 2024-02-28 15:01

import torch
import torch.optim

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)
f = (x**2).sum()                      # graph built once, outside the loop; autograd saves x to compute df/dx = 2x
for i in range(100):
    optimizer.zero_grad()
    f.backward(retain_graph=True)     # fails on the second iteration: x no longer matches the saved version
    optimizer.step()                  # updates x in place, bumping its version counter
    print(i, x.grad, f)

This code raises the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
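As the hint at the end of the message suggests, enabling anomaly detection makes PyTorch also print the traceback of the forward call that created the failing node, which helps locate the culprit. A minimal sketch against the same script (the loop is shortened to two iterations just to trigger the error):

import torch
import torch.optim

# With anomaly detection on, the RuntimeError during backward() is accompanied
# by the forward traceback of the problematic op (here the x ** 2 built
# outside the loop).
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)
f = (x ** 2).sum()
for i in range(2):
    optimizer.zero_grad()
    f.backward(retain_graph=True)   # raises on the second iteration
    optimizer.step()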

This happens because:

When PyTorch traces a computation, it builds a computational graph. Here f = x[0]^2 + x[1]^2, and the partial derivatives depend on the tensor x being optimized, e.g. df/dx[0] = 2*x[0], so autograd saves x in the graph. The optimizer's step() then updates x in place, which bumps x's version counter; when backward() runs again on the same graph, the saved x no longer matches the version recorded when the graph was built, and the error above is raised.
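A minimal sketch of the mechanism (the private attribute x._version is used purely for illustration, and a manual no_grad update stands in for optimizer.step()):

import torch

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
f = (x ** 2).sum()          # the graph saves x so it can compute df/dx = 2x
print(x._version)           # 0 -- the version recorded when x was saved

with torch.no_grad():
    x -= 0.1 * (2 * x)      # in-place update, same effect as optimizer.step()
print(x._version)           # 1 -- no longer matches the saved version

try:
    f.backward()            # backward needs the saved x and detects the mismatch
except RuntimeError as e:
    print(e)                # "... modified by an inplace operation ..."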

Solution:

Rebuild the graph after each parameter update, i.e. move the forward computation of f inside the loop, as follows:

import torch
import torch.optim

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)

for i in range(100):
    f = (x ** 2).sum()      # forward pass inside the loop: a fresh graph sees the updated x
    optimizer.zero_grad()
    f.backward(retain_graph=True)
    optimizer.step()
    print(i, x.grad, f)
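Two side notes on the fixed version: since a fresh graph is built every iteration, retain_graph=True is no longer strictly necessary; and with lr=0.1 each step performs x <- x - 0.1*2x = 0.8*x, so x, its gradient 2x, and f = x[0]^2 + x[1]^2 all shrink geometrically toward 0, which is what the printed values should show.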

As a further check: if the graph does not need to save the tensor being updated (i.e. the gradient does not depend on its current value), then building the graph outside the loop also works, for example:

import torch
import torch.optim

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)
f = (x*2).sum()     # df/dx is the constant 2, so autograd does not need to save x

for i in range(100):
    optimizer.zero_grad()
    f.backward(retain_graph=True)
    optimizer.step()
    print(i, x.grad, f)

Here df/dx[0] = df/dx[1] = 2, a constant, so nothing saved in the graph depends on the current value of x; no matter how many times the parameters are updated, the graph is unaffected and the program runs without error. (Note that f itself is computed only once, so the printed f keeps its initial value.)
