otf2ttf hangs on Windows if the system has 64 or more threads #1420

Closed
be5invis opened this issue Oct 20, 2021 · 5 comments · Fixed by #1421
Comments

@be5invis
Contributor

be5invis commented Oct 20, 2021

> otf2ttf .\SourceSerif4-Regular.otf
Exception in thread Thread-1:
Traceback (most recent call last):
  File "c:\program files\python39\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "c:\program files\python39\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "c:\program files\python39\lib\multiprocessing\pool.py", line 519, in _handle_workers
    cls._wait_for_updates(current_sentinels, change_notifier)
  File "c:\program files\python39\lib\multiprocessing\pool.py", line 499, in _wait_for_updates
    wait(sentinels, timeout=timeout)
  File "c:\program files\python39\lib\multiprocessing\connection.py", line 884, in wait
    ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
  File "c:\program files\python39\lib\multiprocessing\connection.py", line 816, in _exhaustive_wait
    res = _winapi.WaitForMultipleObjects(L, False, timeout)
ValueError: need at most 63 handles, got a sequence of length 66

The process then hangs, producing no TTF.
Tested on an AMD Threadripper 3970X (64-thread) system.

@josh-hadley
Collaborator

@be5invis Are you able to test this under Python 3.8 or Python 3.7? I suspect there's some relation to this Python issue: https://bugs.python.org/issue26903, possibly a regression in Python 3.9.

We might be able to work around this in otf2ttf and other AFDKO tools that use multiprocessing.Pool by limiting the number of processes/workers under Windows (AFAICT this is a Windows-specific limitation).
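
For illustration, a minimal sketch (plain Python, not actual otf2ttf code; the helper name is made up) of the kind of Windows-specific cap being suggested here, keeping the pool below the 63-handle limit that _winapi.WaitForMultipleObjects enforces:

import multiprocessing
import os
import sys

def _capped_pool_size(requested=None):
    # Hypothetical helper: Windows' WaitForMultipleObjects accepts at most
    # 64 handles (63 usable by multiprocessing's wait), so keep the pool
    # comfortably below that; 61 matches the cap concurrent.futures uses.
    cpus = requested or os.cpu_count() or 1
    return min(cpus, 61) if sys.platform == "win32" else cpus

if __name__ == "__main__":
    with multiprocessing.Pool(processes=_capped_pool_size()) as pool:
        print(pool.map(abs, range(8)))  # trivial smoke test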

@be5invis
Contributor Author

@josh-hadley I think that issue is unrelated. WaitForMultipleObjects has a hard limit of 64 objects, so you may need to throttle the threads / child processes that otf2ttf starts.

@josh-hadley
Collaborator

@be5invis unfortunately we don't have a comparable system available to reproduce this or test a patch, so it's unlikely we'll initiate a fix. If you want to attempt something that gets it working for your setup, we'd love to review it and get it incorporated if applicable.

@be5invis
Contributor Author

Alternatively, is there an option to disable parallelism? If so, at least this problem could be worked around.

@josh-hadley
Collaborator

josh-hadley commented Oct 21, 2021

> do we have options to disable parallelism?

It doesn't look like there's an option in the tool itself currently, but it seems like a good option to add. As before, if you wanted to make an attempt at a patch to speed this along, we would be happy to review and incorporate if applicable.
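
As a rough sketch of what such an escape hatch could look like (the jobs parameter and run_one callable are hypothetical, not otf2ttf's actual interface): when parallelism is disabled, skip the Pool entirely, so the Windows handle limit never comes into play.

import multiprocessing

def convert_all(paths, run_one, jobs=None):
    # Hypothetical wrapper: jobs == 1 means "no parallelism".
    if jobs == 1:
        # Serial fallback: no Pool is created, so no WaitForMultipleObjects.
        return [run_one(p) for p in paths]
    # Otherwise fan out as before; run_one must be picklable (a top-level
    # function) for Pool.map to work on Windows.
    with multiprocessing.Pool(processes=jobs) as pool:
        return pool.map(run_one, paths)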

It's important to note that this is an issue in the Windows implementation of Python's multiprocessing, not any fault of otf2ttf. With a little tinkering, I was able to reproduce it by explicitly requesting processes >= 64 (in a plain Python REPL on a VM with only 2 cores, completely independent of otf2ttf):

>>> import multiprocessing
>>> pool = multiprocessing.Pool(processes=100)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Program Files\Python39\lib\threading.py", line 950, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python39\lib\threading.py", line 888, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Program Files\Python39\lib\multiprocessing\pool.py", line 519, in _handle_workers
    cls._wait_for_updates(current_sentinels, change_notifier)
  File "C:\Program Files\Python39\lib\multiprocessing\pool.py", line 499, in _wait_for_updates
    wait(sentinels, timeout=timeout)
  File "C:\Program Files\Python39\lib\multiprocessing\connection.py", line 884, in wait
    ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
  File "C:\Program Files\Python39\lib\multiprocessing\connection.py", line 816, in _exhaustive_wait
    res = _winapi.WaitForMultipleObjects(L, False, timeout)
ValueError: need at most 63 handles, got a sequence of length 102

Some reading suggests that this is rooted in a Windows limitation, MAXIMUM_WAIT_OBJECTS. This is mentioned in the report I linked to above, which is triggered a bit differently (from concurrent.futures), but ends with the same ValueError in _winapi.WaitForMultipleObjects called from multiprocessing.connection. The issue was addressed in concurrent.futures (by capping the worker count on Windows), but not in multiprocessing.Pool.
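
As I understand it, the concurrent.futures fix is simply an up-front check: ProcessPoolExecutor on Windows refuses more than 61 workers, while multiprocessing.Pool does no such validation and only fails later in its worker-handler thread. A quick way to see the asymmetry (behavior observed with Python 3.9 on Windows):

import sys
from concurrent.futures import ProcessPoolExecutor

if sys.platform == "win32":
    try:
        ProcessPoolExecutor(max_workers=100)  # rejected before any workers spawn
    except ValueError as exc:
        print("concurrent.futures:", exc)     # "max_workers must be <= 61"

# multiprocessing.Pool performs no equivalent check; the failure only shows up
# later in its handler thread, as in the traceback above:
#     multiprocessing.Pool(processes=100)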

So again, while we might be able to hack at the tool to get around this, the root cause is actually in Python on Windows, not in this tool or its use of multiprocessing.Pool. A quick search of open Python issues didn't turn anything up at first, so I was going to file a bug, but it turns out there is already an issue open on this: https://bugs.python.org/issue45077
