
I'm trying to work through the PyTorch tutorial "Training a Classifier". When trying to debug this part of the code:

import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

I get this error message :

Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "D:\Anaconda\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
    dataiter = iter(trainloader)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
    w.start()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Traceback (most recent call last):
  File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
    dataiter = iter(trainloader)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
    w.start()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

All the previous lines in the tutorial are working perfectly.

Does anyone know how to solve this, please? Thanks a lot in advance.

The error happens because, on Windows, this DataLoader cannot start worker processes with num_workers greater than 0 the way the script is written. Look at where trainloader comes from:

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

We need to change num_workers to 0, like this:

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)

Every DataLoader in the script needs the same change.
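
Alternatively, the error message itself points at the standard Windows idiom: keep num_workers greater than 0, but move the code that creates and iterates the DataLoader behind an if __name__ == '__main__': guard, so the spawned worker processes can re-import the script without re-running it. Below is a minimal sketch along those lines; the main() function name is my own, and the CIFAR-10 setup is assumed from the tutorial rather than copied from the question.

import torch
import torchvision
import torchvision.transforms as transforms

def main():
    # Same CIFAR-10 setup as in the tutorial (repeated so the sketch is self-contained).
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    # num_workers=2 works on Windows as long as the workers are only started
    # from inside this guarded entry point.
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    dataiter = iter(trainloader)
    images, labels = next(dataiter)   # equivalent to dataiter.next() in the tutorial
    print(images.shape, labels.shape)

if __name__ == '__main__':
    # On Windows, multiprocessing spawns workers by re-importing this module;
    # the guard keeps them from re-executing the loading code at import time.
    main()

Everything else in the script that touches the DataLoader (including the training loop later in the tutorial) should live inside the same guarded entry point.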

I cannot use multi-worker data loading on Windows with the CPU version of PyTorch. Maybe you can try the GPU version. You can also run it in a Jupyter notebook. – 吕泓瑾 Jan 21, 2020 at 5:54

For anyone facing this issue, in my case it was related to memory overflow. I simply ran out of RAM.

In my case I was striding long audio and passing NumPy arrays as arguments to a map function, which blew the memory usage past 32 GB of RAM and caused the error.

Some possible solutions:

  • reduce memory usage, e.g. by passing lightweight references (file paths, offsets) to the workers instead of large arrays; see the sketch after this list
  • increase swap size
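
To illustrate the first point: if the blow-up comes from handing big NumPy arrays to whatever map function spawns the workers, one way to cut memory usage is to pass only a reference (file path plus offset) and load the data inside the worker. This is a hypothetical sketch using multiprocessing.Pool.map and a made-up long_audio.npy file, not the original code.

import numpy as np
from multiprocessing import Pool

def process_chunk(task):
    # Each task is a (path, start, length) reference, not the audio itself.
    path, start, length = task
    audio = np.load(path, mmap_mode='r')           # memory-map instead of loading fully
    chunk = np.asarray(audio[start:start + length])
    return float(np.abs(chunk).mean())             # return a small summary, not the array

if __name__ == '__main__':
    # Hypothetical job: ten one-second chunks of a long recording stored as long_audio.npy.
    tasks = [('long_audio.npy', start, 16000) for start in range(0, 160000, 16000)]
    with Pool(processes=4) as pool:
        results = pool.map(process_chunk, tasks)
    print(results)

Because only small tuples and small return values cross the process boundary, the parent never holds all the chunks in memory at once.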