I want to call backward multiple times in PyTorch, but I get the following error and cannot train the neural network successfully:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
[torch.FloatTensor [32, 2]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead.
Hint: the backtrace further above shows the operation that failed to compute its gradient.
The variable in question was changed in there or anywhere later. Good luck!
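If I understand the message, some tensor that autograd saved during the forward pass is being modified in place before the second backward() runs. Here is a stripped-down toy example of my own (not my actual model, which is below) that seems to reproduce the same failure:

import torch

# Toy two-layer network: autograd saves the second layer's weight during
# the forward pass, because it is needed to backpropagate into the first layer.
net = torch.nn.Sequential(torch.nn.Linear(2, 4), torch.nn.Linear(4, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

out = net(torch.randn(8, 2))
loss1 = out[:, 0].pow(2).mean()
loss2 = out[:, 1].pow(2).mean()

loss1.backward(retain_graph=True)
opt.step()        # in-place parameter update bumps the saved weight's version counter
loss2.backward()  # RuntimeError: the saved weight is now at a newer version than expected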
I have tried to resolve this error by reading several discuss.pytorch.org threads and the article below, but without success. What should I do?
https://nieznanm.medium.com/runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-85d0d207623
from sklearn.utils.extmath import cartesian
import torch
import numpy as np
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from tqdm.notebook import trange
from torchvision import transforms
# Training grid and initial-condition points.
x_1 = np.arange(0, 2 + 1e-2, 1e-5)
x_2 = np.arange(0, 1 + 5e-2, 5e-2)
product_train = cartesian((x_1, x_2))
product_initial_condition0 = cartesian((np.array([0.]), x_2))
product_initial_condition1 = cartesian((np.array([2e-6]), x_2))

device = "cuda" if torch.cuda.is_available() else "cpu"
x1 = torch.tensor(product_train[:, 0].astype(np.float32), requires_grad=True).to(device)
x2 = torch.tensor(product_train[:, 1].astype(np.float32)).to(device)
initial_condition0 = torch.tensor(product_initial_condition0.astype(np.float32)).to(device)
initial_condition0_output = torch.tensor(product_initial_condition0[:, 1].astype(np.float32)).to(device)
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = nn.Linear(2, 32)
        self.linear2 = nn.Linear(32, 2)

    def forward(self, x1, x2):
        inputs = torch.stack([x1, x2], dim=1)
        x = self.linear1(inputs)
        x = torch.sigmoid(x)
        x = self.linear2(x)
        return x
NN = Model()
NN.to(device)
def f(x1, x2, NN):
    # One loss per output column, both against the same target.
    initial_condition_Phi0 = NN(initial_condition0[:, 0], initial_condition0[:, 1])
    loss1 = ms_error(initial_condition_Phi0[:, 0], initial_condition0_output)
    loss2 = ms_error(initial_condition_Phi0[:, 1], initial_condition0_output)
    return loss1, loss2
def train(NN, optimizer, iteration):
    with trange(iteration) as t:
        torch.autograd.set_detect_anomaly(True)
        for i in t:
            optimizer.zero_grad()
            a1, a2 = f(x1, x2, NN)
            a1.backward(retain_graph=True)
            optimizer.step()
            optimizer.zero_grad()
            a2.backward()  # fails here: the retained graph still references the pre-step weights
            optimizer.step()
            t.set_postfix(a1=a1.item(), a2=a2.item())
    return NN
iteration = 1000
optimizer = optim.Adam(NN.parameters(), lr=0.001)
ms_error = nn.MSELoss()
NN = train(NN, optimizer, iteration)
This neural network has two inputs and two outputs, and I want each output to learn to produce a given target value. However, I want to train the two outputs separately rather than at the same time, which is why I call backward() twice per iteration. How can I make this work?
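From what I have read, one suggested fix is to recompute the forward pass after the first optimizer step, so that the second backward() runs on a graph built from the updated weights; another is to combine both losses into a single backward call. A sketch of the first approach, reusing the names from my code above (I have not verified this is the intended solution):

def train(NN, optimizer, iteration):
    with trange(iteration) as t:
        for i in t:
            # Update on the first output's loss.
            optimizer.zero_grad()
            a1, _ = f(x1, x2, NN)
            a1.backward()
            optimizer.step()

            # Rebuild the graph with the updated weights before the
            # second backward, instead of retaining and reusing the old one.
            optimizer.zero_grad()
            _, a2 = f(x1, x2, NN)
            a2.backward()
            optimizer.step()

            t.set_postfix(a1=a1.item(), a2=a2.item())
    return NN

The alternative, if the two updates do not need to be sequential, would be a single (a1 + a2).backward() followed by one optimizer.step().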