TypeError: can't pickle Environment objects when num_workers > 0 for LSUN #689

Open

ArtjomUEA opened this issue Dec 17, 2018 · 15 comments

@ArtjomUEA

The program fails to create an iterator for a DataLoader when the dataset is LSUN and the number of workers is greater than zero. I do not get this error when working with other datasets, so I suspect it is caused by lmdb. I am running Windows 10 with CUDA 10.

Code:

import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms

dataset = dset.LSUN(root='D:/bedroom_train_lmdb', classes=['bedroom_train'],
                    transform=transforms.Compose([
                        transforms.Resize((64, 64)),
                        transforms.ToTensor(),
                        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                    ]))

dataloader = torch.utils.data.DataLoader(dataset, batch_size=128,
                                         shuffle=True, num_workers=4)

for data in dataloader:
    print(data)

Error:

Traceback (most recent call last):
  File "C:/Users/x/.PyCharm2018.3/config/scratches/scratch.py", line 15, in <module>
    for data in dataloader:
  File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
    return _DataLoaderIter(self)
  File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
    w.start()
  File "C:\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle Environment objects
@fmassa (Member) commented Dec 17, 2018

This seems to be a Windows-specific issue.
But note that even if we address this particular issue (I have no idea how to do it though), you would probably hit another issue further on, which is #619

@Santiago810

This issue also appears on Linux; the root cause is that an opened lmdb Environment cannot be pickled.
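
The root cause can be reproduced outside of torch entirely (a minimal sketch; the path and map size below are illustrative):

import pickle
import lmdb

env = lmdb.open('/tmp/example_lmdb', map_size=1 << 20)  # illustrative path/size
pickle.dumps(env)  # raises: TypeError: can't pickle Environment objects

With num_workers > 0 and the spawn start method, the DataLoader pickles the dataset to send it to each worker, and any Environment held on the dataset triggers exactly this error.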

@IsaacBerman

@Santiago810 Do you know how to diagnose the issue of an un-pickleable lmdb env?

@gebrahimi91

I have the same issue with the DataLoader even when I am not using an LMDB dataset.

@fmassa (Member) commented Feb 14, 2020

I think this is a limitation of LMDB in Python (and of LSUN, which uses LMDB internally), and unfortunately there is not much we can do on the torchvision side.

@4knahs commented Apr 29, 2020

I implemented my own LMDB dataset and had the same issue when using LMDB with num_workers > 0 and torch multiprocessing set to spawn.

It is very similar to this project's LSUN implementation; in my case the issue was with this line:

https://github.com/pytorch/vision/blob/master/torchvision/datasets/lsun.py#L18

With fork it works fine, but with spawn the dataset object gets pickled, and it holds a self.env attribute that is an lmdb Environment.

The workaround: use the environment in __init__ only for setup and then discard the reference; open it again in __getitem__ and cache the reference on the instance.

@fmassa (Member) commented May 4, 2020

@4knahs if you think you could send a PR fixing the LSUN implementation it would be great!

@ruotianluo commented Jul 4, 2020

I saw a solution somewhere else that adds __getstate__ and __setstate__:

    def __getstate__(self):
        # Copy the dict so the live object is not mutated,
        # then drop the unpicklable transaction handle.
        state = self.__dict__.copy()
        state["db_txn"] = None
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        # Reopen the environment in the unpickling process
        # (assumes `import os, lmdb` at module level).
        env = lmdb.open(self.db_path, subdir=os.path.isdir(self.db_path),
                        readonly=True, lock=False,
                        readahead=False, meminit=False,
                        map_size=1099511627776 * 2)
        self.db_txn = env.begin(write=False)

This also doesn't save self.env; instead of pickling the txn, it is set to None before pickling and recreated on unpickling.

@Santiago810

Solution: open the lmdb environment in the worker_init_fn of torch.utils.data.DataLoader, as sketched below.
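
A minimal sketch of that approach (the class, parameter names, and key layout are illustrative, not from torchvision): the dataset stores only the path, and each worker opens its own environment in worker_init_fn, so no Environment object is ever pickled.

import lmdb
import torch.utils.data

class LMDBDataset(torch.utils.data.Dataset):
    def __init__(self, lmdb_dir, length):
        self.lmdb_dir = lmdb_dir
        self.length = length        # number of samples, known up front
        self.txn = None             # opened per process, never pickled

    def open_lmdb(self):
        env = lmdb.open(self.lmdb_dir, readonly=True, lock=False)
        self.txn = env.begin(buffers=True)

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        if self.txn is None:        # also covers the num_workers=0 case
            self.open_lmdb()
        return self.txn.get(str(index).encode())  # decode as needed

def worker_init_fn(worker_id):
    # Each worker process gets its own copy of the dataset; open LMDB there.
    torch.utils.data.get_worker_info().dataset.open_lmdb()

dataset = LMDBDataset('D:/bedroom_train_lmdb', length=1000)  # placeholders
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True,
                                         num_workers=4,
                                         worker_init_fn=worker_init_fn)

On Windows (spawn), the DataLoader construction and the iteration loop should additionally live under an if __name__ == '__main__': guard.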

@RSKothari

Could you elaborate or give an example @Santiago810 ?

@airsplay

A possible solution is similar to the one for HDF5:

  1. Do not open lmdb inside __init__
  2. Open the lmdb at the first data iteration.

Here is an illustration:

import lmdb
import torch.utils.data

class LMDBDataset(torch.utils.data.Dataset):
    def __init__(self, lmdb_dir):
        """Do not open lmdb here!! Only store the path."""
        self.lmdb_dir = lmdb_dir

    def open_lmdb(self):
        self.env = lmdb.open(self.lmdb_dir, readonly=True, create=False)
        self.txn = self.env.begin(buffers=True)

    def __getitem__(self, item: int):
        if not hasattr(self, 'txn'):
            self.open_lmdb()
        # Then do anything you want with env/txn here.

Explanation
The multi-processing actually happens when you create the data iterator (e.g., when the for datum in dataloader: loop starts):
https://github.com/pytorch/pytorch/blob/461014d54b3981c8fa6617f90ff7b7df51ab1e85/torch/utils/data/dataloader.py#L712-L720
In short, it creates multiple processes that "copy" the state of the current process. This copy involves pickling the LMDB Env, which causes the issue. In our solution, we open it at the first data iteration, so each subprocess gets its own dedicated lmdb file object.

@neillbyrne

Thank you @airsplay. Excellent solution. You just saved me about a month's work!!!

@thecml commented May 27, 2021

> A possible solution is similar to the one for HDF5: do not open lmdb inside __init__; open it at the first data iteration. […]

Hi airsplay,

This solution works fine; however, I'm struggling to find a way to set the self.size property on the Dataset without loading the database in __init__ beforehand. I cannot instantiate the torch.utils.data.DataLoader without making sure that __len__ returns a valid value. Right now I save the number of samples in a metadata file and load that manually, but is there a smarter way to do this?

@neillbyrne commented May 27, 2021

@thecml you can open an LMDB environment in __init__; just be sure to close it within __init__. So open it, assign a size variable that __len__ returns, and then close it, as in the sketch below.
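
A minimal sketch of that pattern (the class name is illustrative):

import lmdb
import torch.utils.data

class LMDBDataset(torch.utils.data.Dataset):
    def __init__(self, lmdb_dir):
        self.lmdb_dir = lmdb_dir
        # Open temporarily just to read the entry count, then close,
        # so no Environment handle is left on self to be pickled.
        env = lmdb.open(lmdb_dir, readonly=True, create=False)
        with env.begin() as txn:
            self.length = txn.stat()['entries']
        env.close()

    def __len__(self):
        return self.length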

@thecml commented May 28, 2021

> @thecml you can open an LMDB environment in __init__; just be sure to close it within __init__. […]

Excellent, thanks!
