
Massive delay before test actually starts when using Modelsim #735

Closed
anro7 opened this issue Sep 2, 2021 · 15 comments · Fixed by #745

Comments

@anro7

anro7 commented Sep 2, 2021

Hello,

I am using VUnit with the ModelSim Intel free edition. Ever since I started using VUnit, I have experienced a massive delay (several minutes) between launching a test and the test actually starting. What is the reason for this wait time? When I don't use VUnit, the test starts immediately, so I am sure this has nothing to do with the ModelSim free edition's code size limitations.

@LarsAsplund
Collaborator

What is your ModelSim version?

What happens if you take one of the provided examples and run that? For example, https://github.com/VUnit/vunit/blob/master/examples/vhdl/user_guide/run.py. Still the same delay?

It takes about 3 seconds to run all tests for me when running

Model Technology ModelSim - Intel FPGA Edition vcom 2020.3 Compiler 2020.07 Jul 22 2020

@anro7
Author

anro7 commented Sep 3, 2021

I am using ModelSim Intel FPGA Starter Edition 2020.3. The delay seems to be proportional to the total code size and/or the number of tests/configs that the testbench has.

I will try to run your example and check the delay there.

@skaupper

I experience pretty similar behaviour. I don't know exactly when it started, but currently a clean simulation run takes several minutes to start.

My ModelSim version is:
Model Technology ModelSim DE vsim 2020.4 Simulator 2020.10 Oct 13 2020

Calling the runner script with cProfile shows that each call to add_source_file takes about 0.65 s, for a cumulative time of about 140 s just for adding source files. The compile step itself takes another ~70 s. The project I tested this on has about 100 source files (including VHDL and Verilog files).

         24807002 function calls (24433308 primitive calls) in 201.408 seconds

   Ordered by: cumulative time
   List reduced from 1704 to 20 due to restriction <20>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 351631/1   94.486    0.000  211.403  211.403 {built-in method builtins.exec}
        1    0.000    0.000  211.403  211.403 run_vunit_tests.py:1(<module>)
      221    0.003    0.000  143.022    0.647 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/library.py:204(add_source_file)
      221    0.005    0.000  142.760    0.646 /home/ky/.local/lib/python3.6/site-packages/vunit/project.py:112(add_source_file)
        1    0.002    0.002  141.858  141.858 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/__init__.py:243(add_source_files_from_csv)
       15    0.000    0.000  137.688    9.179 /home/ky/.local/lib/python3.6/site-packages/vunit/source_file.py:127(__init__)
       15    0.139    0.009  137.671    9.178 /home/ky/.local/lib/python3.6/site-packages/vunit/source_file.py:157(parse)
       15    0.003    0.000  137.531    9.169 /home/ky/.local/lib/python3.6/site-packages/vunit/parsing/verilog/parser.py:59(parse)
       19    0.000    0.000  134.838    7.097 /home/ky/.local/lib/python3.6/site-packages/vunit/parsing/verilog/tokenizer.py:128(tokenize)
       19    1.307    0.069  134.838    7.097 /home/ky/.local/lib/python3.6/site-packages/vunit/parsing/tokenizer.py:66(tokenize)
   351505    1.261    0.000  132.919    0.000 /home/ky/.local/lib/python3.6/site-packages/vunit/parsing/tokenizer.py:16(Token)
   351522    6.299    0.000  131.223    0.000 /usr/lib/python3.6/collections/__init__.py:357(namedtuple)
        1    0.000    0.000   68.250   68.250 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/__init__.py:717(main)
        1    0.000    0.000   68.250   68.250 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/__init__.py:755(_main)
        1    0.000    0.000   68.250   68.250 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/__init__.py:906(_main_compile_only)
        1    0.000    0.000   68.249   68.249 /home/ky/.local/lib/python3.6/site-packages/vunit/ui/__init__.py:937(_compile)
        1    0.000    0.000   68.249   68.249 /home/ky/.local/lib/python3.6/site-packages/vunit/sim_if/__init__.py:199(compile_project)
        1    0.010    0.010   67.285   67.285 /home/ky/.local/lib/python3.6/site-packages/vunit/sim_if/__init__.py:259(compile_source_files)
      221    0.019    0.000   66.993    0.303 /home/ky/.local/lib/python3.6/site-packages/vunit/sim_if/__init__.py:226(_compile_source_file)
      221    0.004    0.000   66.864    0.303 /home/ky/.local/lib/python3.6/site-packages/vunit/sim_if/__init__.py:363(check_output)

I have attached the whole profiling report for the python -m cProfile -o profile.txt run_vunit_tests.py --clean --compile call I used: vunit_delay.pstats
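For anyone who wants to reproduce this kind of analysis, here is a minimal sketch (file and function names are illustrative, not from the VUnit code base) using only the standard cProfile/pstats modules, mirroring the report above:

```python
import cProfile
import pstats

def work():
    # Stand-in for the runner script being profiled
    return sum(i * i for i in range(100_000))

# Collect profiling data and dump it to a file, comparable to
# running `python -m cProfile -o profile.out script.py`
profiler = cProfile.Profile()
profiler.runcall(work)
profiler.dump_stats("profile.out")

# Load the stats file and print the 20 costliest entries by cumulative time,
# which is the ordering shown in the report above
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)
```

Sorting by "cumulative" (rather than "tottime") is what surfaces wrappers like add_source_file whose cost is buried in callees such as the tokenizer.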

@skaupper

Further tests with different VUnit versions showed that version 4.5.0 introduced a massive slowdown.

VUnit version   Time to add all sources
4.5.0           133s
4.4.0           8.8s
4.3.0           8.5s
4.2.0           8.4s
4.1.0           8.2s

Tests with git bisect revealed that the relevant change occurred in commit 5141f7c, in the file tokenizer.py.

Basically reverting this change in v4.5.0 brings the delay down to previous values.

+TokenType = collections.namedtuple("Token", ["kind", "value", "location"])
+
 def Token(kind, value="", location=None):  # pylint: disable=invalid-name
-    return collections.namedtuple("Token", ["kind", "value", "location"])(
-        kind, value, location
-    )
+    return TokenType(kind, value, location)

@umarcor
Member

umarcor commented Sep 23, 2021

@skaupper, does performance improve if you move the TokenType = ... statement into the function (right before the return statement)?

@skaupper

When moving it back into the function, the time increases again to about 2 min.
The profiling report above lists roughly 350k calls to Token.

A minimal test making 400k calls to namedtuple shows about the same result:

import collections

for i in range(400000):
    token = collections.namedtuple("Token", ["kind", "value", "location"])
$ time python test.py 
python test.py  143,78s user 0,27s system 98% cpu 2:26,64 total

To resolve this issue, it seems that repeated calls to namedtuple should be avoided.
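The difference can be sketched as follows (the names token_slow and token_fast are mine, not VUnit's): collections.namedtuple builds a whole new class on every call, so hoisting the class creation to module scope and only instantiating it per token removes that cost:

```python
import collections
import timeit

# The fix: build the namedtuple class once, at module scope
TokenType = collections.namedtuple("Token", ["kind", "value", "location"])

def token_fast(kind, value="", location=None):
    # Only instantiates the already-built class (cheap)
    return TokenType(kind, value, location)

def token_slow(kind, value="", location=None):
    # v4.5.0 behaviour: constructs a brand-new class on every call (expensive)
    return collections.namedtuple("Token", ["kind", "value", "location"])(
        kind, value, location
    )

# Both variants produce tuples with identical field values
assert token_fast("ID", "foo") == token_slow("ID", "foo")

n = 10_000
slow = timeit.timeit(lambda: token_slow("ID", "foo"), number=n)
fast = timeit.timeit(lambda: token_fast("ID", "foo"), number=n)
print(f"slow: {slow:.3f}s  fast: {fast:.3f}s  ratio: {slow / fast:.0f}x")
```

On CPython the slow variant is typically around two orders of magnitude slower, which is consistent with the ~350k tokenizer calls dominating the profile above.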

@felixn
Contributor

felixn commented Sep 23, 2021

Thanks for the deep dive, @skaupper !
We've seen exactly the same issue (a several-minute delay before test execution actually starts), but I haven't had a chance to investigate.
👍
For me, fixing this has high priority.
@LarsAsplund: are you ok with reverting the changes to the namedtuple?

@eschmidscs
Contributor

With the change proposed, we go down from 3min 55s to 50s.
Thanks a lot for the investigations!

@LarsAsplund
Collaborator

@felixn Yes, this can be reverted. It was done to suppress some new warnings from pylint. If there is no other way we can always create an exception for that warning. Will you do a PR?

@umarcor
Member

umarcor commented Sep 23, 2021

When moving it back into the function the time increases again to about 2min.

@skaupper, my bad. I misinterpreted the - and + signs in your diff block.

@umarcor
Member

umarcor commented Sep 23, 2021

Can you all please confirm that #745 fixes this issue? Also, do you need us to release a new version including the fix, or are you ok with consuming master?

umarcor added a commit to dbhi/vunit that referenced this issue Sep 23, 2021
@LarsAsplund
Collaborator

@umarcor Maybe this deserves a patch release 4.5.1?

@umarcor
Member

umarcor commented Sep 24, 2021

@umarcor Maybe this deserves a patch release 4.5.1?

I think we can release 4.6.0. There are some changes since May (v4.5.0...master), including "vunit/ui/common: make glob search recursive by default".

However, I'd like to have #743 (and the follow-up) merged first, so that CI is green. That should be just syntax changes. Would you mind reviewing and merging?

@LarsAsplund
Collaborator

@umarcor It's still not complete, right?

@umarcor
Member

umarcor commented Sep 25, 2021

That PR is partial, but complete. I.e., it does not solve all the warnings, but the ones that are in the PR are correct (I think) and ready to merge. Then, I have branch https://github.com/dbhi/vunit/tree/fix-lint-fstrings, which includes all the remaining fixes related to f-strings. There are three other new warnings (related to open and encoding) which I did not address yet: https://github.com/dbhi/vunit/runs/3699806503?check_suite_focus=true#step:5:106.

Precisely, fix-lint-fstrings contains the fixes in the sim_if sources which I cannot test myself. That's why I left them out from #743. Hence, we can merge #743 and I can immediately create a new PR, as long as we have someone who can run the tests locally.

More precisely, #743 includes some fixes to string formatting in tools/*.py sources. Those sources are used for creating releases. Therefore, I prefer to have that merged and then create a release. That's because, if I broke anything, I can see and fix it now, instead of waiting probably some months until the next release. In some months, I might not remember all the details.
