My answer assumes __init__ was a typo and it should be forward. Let me know if that is not the case and I'll delete it.
  
import torch
from torch import nn
class SimpleModel(nn.Module):
  def __init__(self, with_relu=False):
    super(SimpleModel, self).__init__()
    self.fc1 = nn.Sequential(nn.Linear(3, 10), nn.ReLU(inplace=True)) if with_relu else nn.Linear(3, 10)
    self.fc2 = nn.Linear(10, 3)
  def forward(self, x):
    x = self.fc1(x)
    print(torch.min(x))  # just to show you ReLU is working...
    return self.fc2(x)
# Model without ReLU
net_without_relu = SimpleModel(with_relu=False)
print(net_without_relu)
# Model with ReLU
net_with_relu = SimpleModel(with_relu=True)
print(net_with_relu)
# random input data
x = torch.randn((5, 3))
print(x)
# we expect it to print something < 0
output1 = net_without_relu(x)
# we expect it to print 0.
output2 = net_with_relu(x)
You can check this code running on Colab: https://colab.research.google.com/drive/1W3Dh4_KPd3iABx5FSzZm3tilm6tnJh0v
To apply ReLU inline, as you tried with:
x = nn.ReLU(self.fc1(x))
you can use the functional API:
from torch.nn import functional as F
# ...
x = F.relu(self.fc1(x))
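Putting it together, here is a minimal sketch (the class name is mine) of the same model with ReLU applied functionally inside forward rather than stored as a module:

import torch
from torch import nn
from torch.nn import functional as F

class SimpleModelFunctional(nn.Module):
    def __init__(self):
        super(SimpleModelFunctional, self).__init__()
        self.fc1 = nn.Linear(3, 10)   # no ReLU module stored here
        self.fc2 = nn.Linear(10, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))       # activation applied as a function
        return self.fc2(x)

This behaves the same as net_with_relu above; the difference is only whether ReLU appears as a module in the printed structure or as a call inside forward.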
You should not call tensor operations like view in __init__.
__init__ should hold your structure.
For instance, this is copied from AlexNet's __init__:
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
Your forward method, however, may contain reshaping, calculations, and function calls.
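For instance, a minimal sketch (the names are mine, not from the question) where __init__ only declares structure and forward does the reshaping and computation:

import torch
from torch import nn
from torch.nn import functional as F

class FlattenNet(nn.Module):
    def __init__(self):
        super(FlattenNet, self).__init__()
        self.fc = nn.Linear(12, 4)    # structure only: no view, no math here

    def forward(self, x):
        x = x.view(x.size(0), -1)     # reshaping happens in forward
        return F.relu(self.fc(x))     # calculations and functions too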
nn.Sequential should be part of __init__, as in AlexNet:
class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        x = self.classifier(x)
        return x
Then you can use the class attributes self.features and self.classifier in forward.
Note: this is the old AlexNet implementation from PyTorch 0.4, but it is fairly simple and the logic is the same.
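For comparison, there is also a third common pattern: define nn.ReLU as an attribute in __init__ (so it is part of the structure) and call it in forward. A minimal sketch (the names are mine):

import torch
from torch import nn

class TwoLayerNet(nn.Module):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        self.fc1 = nn.Linear(3, 10)
        self.relu = nn.ReLU()         # module declared in __init__ (structure)
        self.fc2 = nn.Linear(10, 3)

    def forward(self, x):
        x = self.relu(self.fc1(x))    # ...and applied in forward (computation)
        return self.fc2(x)

All three variants compute the same thing; the choice mostly affects how the model prints and whether the activation shows up in model.modules().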