
FP.run_from_iterable() gives an "incomplete pickle" error when calculating the fingerprint of a protein-ligand complex from a docking simulation. How to solve this error? #103

Closed
ravishankar1307 opened this issue Dec 22, 2022 · 3 comments · Fixed by #128
Labels: bug (Something isn't working)

@ravishankar1307

No description provided.

cbouy (Member) commented Jan 8, 2023

Hi @ravishankar1307,

Could you provide the prolif script that produces the error? It's hard to tell where the error might come from just from this description. Also, which version of prolif are you using?

Cédric
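
For reference, the installed versions can be checked with a short snippet like this one (using the packages' standard import names):

import prolif
import MDAnalysis
import rdkit

# print the version of each package from the environment where the error occurs
print("prolif", prolif.__version__)
print("MDAnalysis", MDAnalysis.__version__)
print("rdkit", rdkit.__version__)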

metma99 commented Jan 31, 2023

Hello Cédric,
I get the same issue if I use your docking example here:
https://github.com/chemosim-lab/ProLIF/blob/master/docs/notebooks/how-to.ipynb

I did a fresh install of prolif on a Mac:
prolif 0+unknown, MDAnalysis 2.4.2, rdkit 2022.09.4

Markus

P.S. I am posting the complete error message:

RuntimeError Traceback (most recent call last)
Cell In[3], line 6
4 # generate fingerprint
5 fp = plf.Fingerprint()
----> 6 fp.run_from_iterable(lig_suppl, prot)
7 df = fp.to_dataframe()
8 df

File ~/opt/anaconda3/envs/prolif/lib/python3.11/site-packages/prolif/fingerprint.py:593, in Fingerprint.run_from_iterable(self, lig_iterable, prot_mol, residues, progress, n_jobs)
591 raise ValueError("n_jobs must be > 0 or None")
592 if n_jobs != 1:
--> 593 return self._run_iter_parallel(
594 lig_iterable=lig_iterable,
595 prot_mol=prot_mol,
596 residues=residues,
597 progress=progress,
598 n_jobs=n_jobs,
599 )
601 iterator = tqdm(lig_iterable) if progress else lig_iterable
602 if residues == "all":

File ~/opt/anaconda3/envs/prolif/lib/python3.11/site-packages/prolif/fingerprint.py:628, in Fingerprint._run_iter_parallel(self, lig_iterable, prot_mol, residues, progress, n_jobs)
625 if residues == "all":
626 residues = prot_mol.residues.keys()
--> 628 with mp.Pool(
629 n_jobs,
630 initializer=declare_shared_objs_for_mol,
631 initargs=(self, prot_mol, residues),
632 ) as pool:
633 results = []
634 for data in tqdm(
635 pool.imap_unordered(process_mol, suppl),
636 total=total,
637 disable=not progress,
638 ):

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/context.py:119, in BaseContext.Pool(self, processes, initializer, initargs, maxtasksperchild)
117 '''Returns a process pool object'''
118 from .pool import Pool
--> 119 return Pool(processes, initializer, initargs, maxtasksperchild,
120 context=self.get_context())

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/pool.py:215, in Pool.__init__(self, processes, initializer, initargs, maxtasksperchild, context)
213 self._processes = processes
214 try:
--> 215 self._repopulate_pool()
216 except Exception:
217 for p in self._pool:

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/pool.py:306, in Pool._repopulate_pool(self)
305 def _repopulate_pool(self):
--> 306 return self._repopulate_pool_static(self._ctx, self.Process,
307 self._processes,
308 self._pool, self._inqueue,
309 self._outqueue, self._initializer,
310 self._initargs,
311 self._maxtasksperchild,
312 self._wrap_exception)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/pool.py:329, in Pool._repopulate_pool_static(ctx, Process, processes, pool, inqueue, outqueue, initializer, initargs, maxtasksperchild, wrap_exception)
327 w.name = w.name.replace('Process', 'PoolWorker')
328 w.daemon = True
--> 329 w.start()
330 pool.append(w)
331 util.debug('added worker')

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/process.py:121, in BaseProcess.start(self)
118 assert not _current_process._config.get('daemon'), \
119        'daemonic processes are not allowed to have children'
120 _cleanup()
--> 121 self._popen = self._Popen(self)
122 self._sentinel = self._popen.sentinel
123 # Avoid a refcycle if the target function holds an indirect
124 # reference to the process object (see bpo-30775)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/context.py:288, in SpawnProcess._Popen(process_obj)
285 @staticmethod
286 def _Popen(process_obj):
287 from .popen_spawn_posix import Popen
--> 288 return Popen(process_obj)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/popen_spawn_posix.py:32, in Popen.__init__(self, process_obj)
30 def __init__(self, process_obj):
31 self._fds = []
---> 32 super().__init__(process_obj)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/popen_fork.py:19, in Popen.__init__(self, process_obj)
17 self.returncode = None
18 self.finalizer = None
---> 19 self._launch(process_obj)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/popen_spawn_posix.py:47, in Popen._launch(self, process_obj)
45 try:
46 reduction.dump(prep_data, fp)
---> 47 reduction.dump(process_obj, fp)
48 finally:
49 set_spawning_popen(None)

File ~/opt/anaconda3/envs/prolif/lib/python3.11/multiprocessing/reduction.py:60, in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)

RuntimeError: Incomplete pickle support (getstate_manages_dict not set)
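
For context, the failing cell corresponds to the docking example in the how-to notebook linked above; a minimal sketch of it (the file paths here are placeholders) looks like this:

import MDAnalysis as mda
import prolif as plf

# load the protein prepared beforehand ("protein.pdb" is a placeholder path)
u = mda.Universe("protein.pdb")
prot = plf.Molecule.from_mda(u.select_atoms("protein"))

# load the docking poses ("ligands.sdf" is a placeholder path)
lig_suppl = plf.sdf_supplier("ligands.sdf")

# generate fingerprint
fp = plf.Fingerprint()
fp.run_from_iterable(lig_suppl, prot)
df = fp.to_dataframe()
df

As the traceback shows, the error is raised from fp.run_from_iterable, which by default creates a multiprocessing pool and has to pickle the Fingerprint and protein objects to pass them to the worker processes.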

cbouy (Member) commented Feb 2, 2023

Hello Markus,

Can you try using fp.run_from_iterable(lig_suppl, prot, n_jobs=1) instead and see if it helps?
This will disable multiprocessing, so it is not really a solution, but it should work as a temporary workaround until I can find some time to work on this.
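
Applied to the example above, the workaround looks like this:

fp = plf.Fingerprint()
# n_jobs=1 takes the serial code path, so the multiprocessing pool
# (and the pickling step that fails) is skipped entirely
fp.run_from_iterable(lig_suppl, prot, n_jobs=1)
df = fp.to_dataframe()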

cbouy added the bug label Feb 2, 2023
cbouy linked a pull request Apr 30, 2023 that will close this issue