The following model returns the error:
TypeError: forward() missing 1 required positional argument: 'indices'
I've gone through many online examples, and they all look similar to my code. My max-pool layer returns both the output and the indices for the unpool layer. Any ideas on what's wrong?
class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.MaxPool2d(2, stride=1, return_indices=True))
        self.decoder = nn.Sequential(
            nn.MaxUnpool2d(2, stride=1))

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
Similar to the question here, the solution is to pull the MaxUnpool2d layer out of the decoder and pass its required arguments explicitly. Modules inside nn.Sequential can only hand a single value from one layer to the next, so the indices produced by the pooling layer never reach the unpool layer.
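To see why the error occurs, here is a minimal sketch (the tensor shape is illustrative): calling MaxUnpool2d with both arguments works, whereas inside nn.Sequential only the pooled tensor would be forwarded, which reproduces the TypeError from the question.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 3, 32, 32)
# With return_indices=True, the pool returns a (output, indices) tuple.
pooled, indices = pool(x)

# Passing both arguments explicitly restores the original spatial size:
restored = unpool(pooled, indices)
print(restored.shape)  # torch.Size([1, 3, 32, 32])

# Inside nn.Sequential the call effectively becomes unpool(pooled),
# which raises: TypeError: forward() missing 1 required positional
# argument: 'indices'
```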
class SimpleConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        # input: batch x 3 x 32 x 32 -> output: batch x 16 x 16 x 16
        self.encoder = nn.Sequential(
            nn.MaxPool2d(2, stride=2, return_indices=True))
        self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0)
        self.decoder = nn.Sequential()

    def forward(self, x):
        encoded, indices = self.encoder(x)
        out = self.unpool(encoded, indices)
        out = self.decoder(out)
        return (out, encoded)
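A runnable sketch of this pattern, with the convolution layers omitted so only the pool/unpool wiring is shown (with no conv layers the channel count stays at 3; the real model's conv stack would change it):

```python
import torch
import torch.nn as nn

class SimpleConvAE(nn.Module):
    """Minimal sketch: conv layers omitted, keeping only the pool/unpool wiring."""
    def __init__(self):
        super().__init__()
        # The pooling layer must be LAST in this Sequential, so that the
        # (output, indices) tuple it returns is what the Sequential returns.
        self.encoder = nn.Sequential(
            nn.MaxPool2d(2, stride=2, return_indices=True))
        # Kept outside the decoder Sequential so it can receive two arguments.
        self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0)
        self.decoder = nn.Sequential()  # identity here; real model adds conv layers

    def forward(self, x):
        encoded, indices = self.encoder(x)
        out = self.unpool(encoded, indices)
        out = self.decoder(out)
        return out, encoded

model = SimpleConvAE()
x = torch.randn(4, 3, 32, 32)
out, encoded = model(x)
print(encoded.shape)  # torch.Size([4, 3, 16, 16])
print(out.shape)      # torch.Size([4, 3, 32, 32])
```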
I wrapped both the MaxPool2d and the MaxUnpool2d into custom classes, so now I can use them in Sequential. It is probably not the most elegant solution, but it does seem to work:
class MaxPool2dIndexExtractor(nn.MaxPool2d):
    def __init__(self, segnetlite, kern, stride):
        super().__init__(kern, stride, return_indices=True)
        self.segnetlite = [segnetlite]

    def forward(self, x):
        output, indices = super().forward(x)
        self.segnetlite[0].pool_indices += [indices]
        return output

class MaxUnpool2dIndexConsumer(nn.MaxUnpool2d):
    def __init__(self, segnetlite, kern, stride):
        super().__init__(kern, stride)
        self.segnetlite = [segnetlite]

    def forward(self, x):
        indices = self.segnetlite[0].pool_indices.pop()
        return super().forward(x, indices)
These classes take an additional parameter on construction: an object that they store the indices in and later retrieve them from. In my case I called it segnetlite (just because of my use case; you can call it whatever you want) and made sure it had an empty list as a pool_indices attribute. You could also adapt this code to use a list directly.

Something important to be aware of: if segnetlite is a module and you save it to self.segnetlite directly, PyTorch will enter infinite recursion on train(), because it would walk in circles while trying to list all the nested modules. This is why I have stored the segnetlite object inside a plain list here.
The way I am using these classes is inside some Sequentials. First I have one that does some downscaling; later I have a different one that does some upscaling. In the second Sequential, the unpooling layers correspond to the pooling layers in the first one, but in reverse order. This is why MaxUnpool2dIndexConsumer pop()s from the back of the list.
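A minimal end-to-end sketch of how these wrappers might be wired up (the TinySegNet name, layer counts, and input shape are illustrative, not from the original answer):

```python
import torch
import torch.nn as nn

class MaxPool2dIndexExtractor(nn.MaxPool2d):
    def __init__(self, owner, kern, stride):
        super().__init__(kern, stride, return_indices=True)
        self.owner = [owner]  # list wrapper avoids recursive module registration

    def forward(self, x):
        output, indices = super().forward(x)
        self.owner[0].pool_indices.append(indices)
        return output

class MaxUnpool2dIndexConsumer(nn.MaxUnpool2d):
    def __init__(self, owner, kern, stride):
        super().__init__(kern, stride)
        self.owner = [owner]

    def forward(self, x):
        # Pop from the back: unpools run in reverse order of the pools.
        indices = self.owner[0].pool_indices.pop()
        return super().forward(x, indices)

class TinySegNet(nn.Module):  # illustrative model name
    def __init__(self):
        super().__init__()
        self.pool_indices = []
        self.down = nn.Sequential(
            MaxPool2dIndexExtractor(self, 2, 2),   # 32 -> 16
            MaxPool2dIndexExtractor(self, 2, 2))   # 16 -> 8
        self.up = nn.Sequential(
            MaxUnpool2dIndexConsumer(self, 2, 2),  # 8 -> 16
            MaxUnpool2dIndexConsumer(self, 2, 2))  # 16 -> 32

    def forward(self, x):
        self.pool_indices.clear()  # reset the index stack each forward pass
        return self.up(self.down(x))

net = TinySegNet()
x = torch.randn(1, 3, 32, 32)
out = net(x)
print(out.shape)  # torch.Size([1, 3, 32, 32])
```

Note that the index list is cleared at the start of every forward pass; otherwise stale indices from a previous batch would accumulate.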