
add sim_opt 'ghdl.elab_e' to run 'ghdl -e' instead of 'ghdl --elab-run --no-run' #467

Merged
merged 7 commits into from
Apr 16, 2019

Conversation

umarcor
Member

@umarcor umarcor commented Apr 8, 2019

Close #466.

The fix I suggest is to modify https://github.com/VUnit/vunit/blob/master/vunit/ghdl_interface.py#L227 so that --elab-run is replaced with -e, instead of adding --no-run. This will probably require modifying _get_sim_command too, in order to ignore sim arguments when elaborate_only is set.
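As a rough illustration of the suggested change (not the actual ghdl_interface.py code; the branch structure, the variable names and the use of the ghdl.sim_flags option here are assumptions):

```python
# Hypothetical sketch of the command construction for elaborate-only runs.
# 'cmd', 'config' and 'elaborate_only' are placeholder names.
if elaborate_only:
    cmd += ["-e"]          # build the binary only, do not execute it
else:
    cmd += ["--elab-run"]  # elaborate and run in one step
    # Runtime-only arguments (sim flags, generics, assert level) are
    # only meaningful in this branch.
    cmd += config.sim_options.get("ghdl.sim_flags", [])
```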

EDIT

Since VUnit expects GHDL to actually evaluate some flags and generics during elaboration-only, the first commit in this PR breaks test_artificial_elaborate_only. So, in the second commit the tests are changed, and all of them are expected to pass now. I don't know if this makes sense. Probably, some of these tests should be removed (config.sim_options, config.generics and config.vhdl_assert_stop_level are ignored now). However, I am unsure, because they might be used with other simulators.

@kraigher
Collaborator

kraigher commented Apr 8, 2019

The point of the acceptance test is to define the behaviour of VUnit as a black box. It should work for any simulator we support. Unfortunately we are only able to run the test on Travis CI for GHDL.

The elab fail test checks that an error at the elaboration stage is caught when running with the --elaborate flag. This worked previously, and it seems your change has broken it. If we accept that the test is changed, it means the --elaborate flag is broken.

@umarcor
Member Author

umarcor commented Apr 8, 2019

I think that the issue is whether those tests were/are meaningful in the context of 'elaboration only'. The fact that --elaborate did translate to --elab-run --no-run meant that GHDL accepted multiple arguments that are valid for -r but not for -e (e.g., assertion level or generics). None of them were meaningful in the elaboration phase. Then, the binary was executed, and it was at this stage that the remaining args were checked. Last, the simulation would exit before the first cycle, because of --no-run.

I believe that the acceptance tests which were expected to fail before but do not fail now have never been testing elaboration options, but runtime arguments. They do not fail now because those args are not passed anymore, as they do not belong there. Therefore, I suggest moving some of the tests in test_artificial_elaborate_only to a new set named test_artificial_no_run. For these, GHDL should be executed with sim option -r --no-run. This would allow three different testing stages, which I believe is the desired setup:

  • Arguments which are only valid for elaboration.
  • Arguments which are only valid for simulation, and which are evaluated before the simulation starts.
  • Arguments which are only valid for simulation, and which are evaluated/used at any point.

@kraigher
Collaborator

kraigher commented Apr 8, 2019

The reason the elab_fail test bench fails is because it has a generic for which no value is specified.
Generics are typically evaluated at elaboration time in simulators. Other simulators do this check at elaboration when loading the top level. This is the use case that the --elaborate flag is for, to run a fast check to ensure the design elaborates. If we change this for GHDL it would be inconsistent with other simulators so it makes me reluctant to change it.

Currently VUnit offers two stages of running a test bench, elaborate only or full test run. What would be the stage in-between, expressed in terms of the VHDL standard? Run only the first delta cycle? Is this what -r --no-run does? Also if we add a new stage it needs to be implemented for all simulators so it becomes more work.

There are two ways to solve this on the user-side:

  1. Do not run any checks when the '--no-run' argument is found in the argv
  2. Detect --elaborate flag in run.py and do not substitute GHDL main in this case

Maybe 1. is the best method.
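As a minimal run.py sketch of option 2 (the library, sources, test bench name and the use of ghdl.elab_flags to link a custom main are placeholder assumptions, not code from this PR):

```python
# Option 2 sketch: only substitute the GHDL main (external C wrapper)
# when --elaborate was not requested on the command line.
import sys
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("src/*.vhd")

if "--elaborate" not in sys.argv:
    # Link the custom C main only for real simulation runs; the exact
    # flags are project specific and shown here only as an example.
    lib.test_bench("tb_example").set_sim_option(
        "ghdl.elab_flags", ["-Wl,main.o"]
    )

vu.main()
```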

@umarcor
Member Author

umarcor commented Apr 8, 2019

The reason the elab_fail test bench fails is because it has a generic for which no value is specified.
Generics are typically evaluated at elaboration time in simulators. Other simulators do this check at elaboration when loading the top level. This is the use case that the --elaborate flag is for, to run a fast check to ensure the design elaborates. If we change this for GHDL it would be inconsistent with other simulators so it makes me reluctant to change it.

I did not know that other simulators do check generics at elaboration. I wonder why GHDL is not consistent.

Is there any requirement for this PR to be accepted other than "GHDL should evaluate generics at -e"? If this is the single issue, I would suggest politely asking @tgingold to add generics as an elaboration option, as long as it is OK with the standard.

Currently VUnit offers two stages of running a test bench, elaborate only or full test run. What would be the stage in-between, expressed in terms of the VHDL standard? Run only the first delta cycle? Is this what -r --no-run does?

When the top level of the binary is GHDL, the two existing options behave as expected: --elaborate generates the binary and the default executes it. The fact that --elaborate does not only do that, but also executes the design until cycle 0, is negligible, because nothing is done before or after the simulation. It helps keep consistency, as you explained.

However, when the simulation is wrapped, there is no option in VUnit to just check whether the design can be compiled and to build the binary. --compile will analyse all the VHDL sources and generate the objects. --elaborate will create the binary and then execute it.

I believe that, in terms of the VHDL standard, this stage is the output of the elaboration. Ideally, I do not want to run until the first delta cycle, I don't want to execute the binary at all.

Also if we add a new stage it needs to be implemented for all simulators so it becomes more work.

If there is no requirement other than evaluating generics during elaboration, this PR will eventually be valid, so ignore the following paragraphs. Otherwise, if --elaborate is to be kept as --elab-run --no-run:

I don't think that the new option needs to be added for all simulators. This is only required for the simulators that allow building an executable binary which includes some VHDL sources/objects and some C sources/objects. What do you think about --build as the name for the new option?

On the one hand, this is only supported by GHDL for now, so the implementation in other simulators would be an error stating that the argument is not supported.

On the other hand, although some other simulators supported by VUnit allow combining VHDL and C sources (e.g. https://gitlab.com/reds-public/fss), I don't know if any of them allows generating an independent executable binary. If that is not the case, this would always be a GHDL-only flag.

There are two ways to solve this on the user-side:
1. Do not run any checks when the '--no-run' argument is found in the argv
2. Detect --elaborate flag in run.py and do not substitute GHDL main in this case

I'd rather call ghdl -e or gcc explicitly from run.py. Almost all the compilation objects are located in vunit_out/ghdl/libraries, so it is not difficult to generate the command. What is misleading to me is the execution result when VUnit's --elaborate is used with GHDL.
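As a minimal sketch of that idea (the standard revision, the library path pattern and the top-level entity are placeholders):

```python
# Build the binary explicitly with 'ghdl -e', reusing the libraries that
# VUnit already compiled under vunit_out/ghdl/libraries.
import glob
import subprocess

cmd = ["ghdl", "-e", "--std=08"]
cmd += ["-P" + path for path in glob.glob("vunit_out/ghdl/libraries/*")]
cmd += ["tb_example"]  # placeholder top-level entity
subprocess.check_call(cmd)
```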

@kraigher
Collaborator

kraigher commented Apr 9, 2019

Elaboration, in my understanding, means executing all code that results in compile-time constants, such as the choice in an if-generate statement, the bit width of a port or signal, etc. This is what most simulators do when loading the design. This is different from executing the first delta cycles of all processes. It is also one of the earliest stages of a synthesis tool. A synthesis tool will never execute a process, but it will execute all code that results in constant values such as bit widths and initial values. This is what I would expect to happen when I run "ghdl -e", if -e means elaborate.

It might be that today we are doing too much in the VUnit elaboration stage for GHDL, if the first delta cycle is also executed (equivalent to running "run 0 ns" in other simulators). However, I would rather err on this side than have fewer checks for --elaborate. Thus I would prefer to keep elaborate the way it is now. If in the future GHDL adds more elaboration into its "-e" flag, maybe we can switch to using it.

My thinking is still that this may be best fixed outside of VUnit, by simply disabling the post-simulation check in the user binary if '--no-run' was in ARGV. Conceptually it is a simple operation, and it does not require VUnit to add any GHDL-specific --build flag or reduce the amount of verification provided by --elaborate. Conceptually I am not totally against adding a --build flag; I am just wondering whether it is premature to do that at this point. VUnit should not stand in the way of users doing anything they want, and it does not in this case; that does not mean VUnit should have every possible wish built into it when it can be solved on the user side.

@umarcor
Member Author

umarcor commented Apr 10, 2019

This is what I would expect to happen when I run "ghdl -e", if -e means elaborate.

After a more careful reading of the docs, it seems that -e means 'elaborate', but it is not a full elaboration according to the VHDL standard. Some of the elaboration tasks are postponed to runtime:

https://ghdl.readthedocs.io/en/latest/using/InvokingGHDL.html#elaboration-e
Re-analyzes all the configurations, entities, architectures and package declarations, and creates the default configurations and the default binding indications according to the LRM rules. It also generates the list of object files required for the executable. Then, it links all these files with the runtime library. The actual elaboration is performed at runtime.

If the GCC/LLVM backend was enabled during the compilation of GHDL, the elaboration command creates an executable containing the code of the VHDL sources, the elaboration code and simulation code to execute a design hierarchy.

Therefore, it seems that there is no alternative other than running --elab-run, if you want to be as compatible as possible with other simulators.

It might be that today we are doing too much in the VUnit elaboration stage for GHDL, if the first delta cycle is also executed (equivalent to running "run 0 ns" in other simulators). However, I would rather err on this side than have fewer checks for --elaborate. Thus I would prefer to keep elaborate the way it is now. If in the future GHDL adds more elaboration into its "-e" flag, maybe we can switch to using it.

Yes, I was considering keeping this issue open just in case. However, it is not likely that enough features will be added to the elaboration command in the near future. Therefore, we can close this either now or after we are done with the discussion.

My thinking is still that this may be best fixed outside of VUnit, by simply disabling the post-simulation check in the user binary if '--no-run' was in ARGV. Conceptually it is a simple operation, and it does not require VUnit to add any GHDL-specific --build flag or reduce the amount of verification provided by --elaborate.

It is possible to do it in several ways that do not require modification of the application (executing ghdl -e, gcc, llvm...). So, I think that this is an issue with the build tool, not related to the code base of the example/design. Certainly, I am pushing VUnit to the limit as a 'build tool for designs written in VHDL', because that's not the main objective. Hence, I understand why you are reluctant to add --build or even to support this build stage in general.

Conceptually I am not totally against adding a --build flag; I am just wondering whether it is premature to do that at this point. VUnit should not stand in the way of users doing anything they want, and it does not in this case; that does not mean VUnit should have every possible wish built into it when it can be solved on the user side.

I'll try to check if I can retrieve the simulation command generated in GHDLInterface from the run.py file. This would allow executing ghdl -e without modifying the VUnit sources, while using the built-in features to generate the list of args.

@umarcor
Member Author

umarcor commented Apr 11, 2019

Conceptually I am not totally against adding a --build flag; I am just wondering whether it is premature to do that at this point. VUnit should not stand in the way of users doing anything they want, and it does not in this case; that does not mean VUnit should have every possible wish built into it when it can be solved on the user side.

I'll try to check if I can retrieve the simulation command generated in GHDLInterface from the run.py file. This would allow executing ghdl -e without modifying the VUnit sources, while using the built-in features to generate the list of args.

I think I can achieve this by adding a --dry-run option to VUnit. This prints all the commands, but none of them is actually executed. Hence, it can be used for debugging/learning and also as a workaround for this issue. See https://github.com/dbhi/vunit/blob/feat-dry-run/examples/vhdl/array_axis_vcs/run.py#L41-L58.

However, I need to retrieve the logs, in order to manipulate them and generate the build commands in the run.py. @kraigher, what is the suggested approach to do so? Is it compulsory to read test_name_to_path_mapping.txt and use custom logic to parse it and find the output.txt corresponding to each testbench?

@kraigher
Collaborator

If you want to know the output path for a test you need to use the test_name_to_path_mapping.txt file. We could probably add this information to the --export-json file as well.

Regarding the issue at hand, if you really want --elaborate to be -e instead of --elab-run --no-run I think the best solution with a small impact is just to add a sim_option such as 'ghdl_elaborate_e' which can be True/False and the default False which is the current behaviour. I think having a dry run, then parsing a log to modify some command sounds like a fragile solution.

@umarcor umarcor force-pushed the fix-ghdl-e branch 2 times, most recently from 3dea5ac to 9cdb9c9 on April 14, 2019 06:02
@umarcor
Member Author

umarcor commented Apr 14, 2019

If you want to know the output path for a test you need to use the test_name_to_path_mapping.txt file. We could probably add this information to the --export-json file as well.

Is it possible to read test_name_to_path_mapping.txt directly as some Python type (dict or list)?

Regarding the issue at hand, if you really want --elaborate to be -e instead of --elab-run --no-run I think the best solution with a small impact is just to add a sim_option such as 'ghdl_elaborate_e' which can be True/False and the default False which is the current behaviour. I think having a dry run, then parsing a log to modify some command sounds like a fragile solution.

It would be useful to retrieve the simulation command with all the arguments, just as it would be executed by VUnit. However, the use case for that is far beyond the scope of VUnit. So, it is ok to just be able to build the binary without explicitly seeing the command.

I implemented the approach you proposed (a ghdl.elab_e sim option) and I updated this PR. Using ui.set_sim_option("ghdl.elab_e", True) in the run script will generate a binary, but it won't be executed. I didn't add any test yet. Which section does it belong to? Is it an acceptance test?
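For reference, a minimal run.py sketch of that usage (library and source names are placeholders):

```python
# With 'ghdl.elab_e' enabled, the GHDL binary is generated but not executed
# (the behaviour at this revision of the PR, before it was restricted to
# --elaborate).
from vunit import VUnit

ui = VUnit.from_argv()
lib = ui.add_library("lib")
lib.add_source_files("src/*.vhd")

ui.set_sim_option("ghdl.elab_e", True)

ui.main()
```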

@kraigher
Collaborator

If you want to know the output path for a test you need to use the test_name_to_path_mapping.txt file. We could probably add this information to the --export-json file as well.

Is it possible to read test_name_to_path_mapping.txt directly as some Python type (dict or list)?

The file was intended for human use. The format is folder_name SPACE test_name.
I do not recommend that you use it. It is better we add this to --export-json instead.

It would be useful to retrieve the simulation command with all the arguments, just as it would be executed by VUnit. However, the use case for that is far beyond the scope of VUnit. So, it is ok to just be able to build the binary without explicitly seeing the command.

Currently the commands are logged if you use --log-level=DEBUG for human use. If an external tool needs this information we need to add it to --export-json.

I implemented the approach you proposed (a ghdl.elab_e sim option) and I updated this PR. Using ui.set_sim_option("ghdl.elab_e", True) in the run script will generate a binary, but it won't be executed. I didn't add any test yet. Which section does it belong to? Is it an acceptance test?

I think this should be a unit test of ghdl_interface.py. It is too complex to have an acceptance test for it; the ROI is not worth it.

@kraigher
Collaborator

Reviewing your change, it seems I was misunderstood. I meant the sim_option would change the behaviour of --elaborate for GHDL. It would not affect the behaviour without --elaborate.

@umarcor
Member Author

umarcor commented Apr 14, 2019

Is it possible to read test_name_to_path_mapping.txt directly as some Python type (dict or list)?

The file was intended for human use. The format is folder_name SPACE test_name.
I do not recommend that you use it. It is better we add this to --export-json instead.

IMHO, there should be a simple and reliable programmatic procedure to retrieve the output/log of a test. I don't really mind if I need to set some stdout in the Python script, or get it as a return field from vu.main(), or parse some output file. I am trying to process test_name_to_path_mapping.txt just because I didn't find a better way to 'see' the content of 'output.txt'.

About adding it to --export-json, although useful, I'd rather avoid it for this specific use case. Using JSON is a better approach when it is to be used from Python, Go, etc. But with Bash or C I find it easier not to require a JSON parsing library. The current format, as you described it, is straightforward, as one only needs to search for the first space.
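For example, a minimal sketch of that parsing in Python (the location of the mapping file and of output.txt is an assumption about the output directory layout):

```python
# Read test_name_to_path_mapping.txt into a dict: test name -> output.txt path.
# Each line is 'folder_name SPACE test_name', so split at the first space.
from os.path import join

output_root = join("vunit_out", "test_output")  # assumed location
mapping = {}
with open(join(output_root, "test_name_to_path_mapping.txt")) as fptr:
    for line in fptr:
        folder, _, test_name = line.strip().partition(" ")
        mapping[test_name] = join(output_root, folder, "output.txt")
```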

It would be useful to retrieve the simulation command with all the arguments, just as it would be executed by VUnit. However, the use case for that is far beyond the scope of VUnit. So, it is ok to just be able to build the binary without explicitly seeing the command.

Currently the commands are logged if you use --log-level=DEBUG for human use. If an external tool needs this information we need to add it to --export-json.

The purpose of this external tool is to provide the objects produced by ghdl -a to GCC through an external build procedure (make, cmake, etc.). Therefore, it does not require the command itself, but some of the arguments used for ghdl -e (--std, -P, -Wl, etc.). However, for this use case it is reasonable to use JSON. I will check which fields/info are missing in --export-json after we are done with some of the PRs.

I implemented the approach you proposed (a ghdl.elab_e sim option) and I updated this PR. Using ui.set_sim_option("ghdl.elab_e", True) in the run script will generate a binary, but it won't be executed. I didn't add any test yet. Which section does it belong to? Is it an acceptance test?

I think this should be a unit test of ghdl_interface.py. It is too complex to have an acceptance test for it; the ROI is not worth it.

I will add it there.

Reviewing your change, it seems I was misunderstood. I meant the sim_option would change the behaviour of --elaborate for GHDL. It would not affect the behaviour without --elaborate.

I will adapt it.

@umarcor
Member Author

umarcor commented Apr 14, 2019

I modified the PR to let ghdl.elab_e be a suboption for --elaborate, so that non-elaborate behaviour is not affected.

I also added a test, test_elaborate_e_project, to vunit/test/test_ghdl_interface.py. However, the test is failing and I don't know why. It is complaining with No such file or directory: 'prefix/ghdl': 'prefix/ghdl'; but that should be ignored thanks to mock.patch, shouldn't it?

@kraigher
Collaborator

You are mocking the wrong function. The failure occurs at:

vunit/ghdl_interface.py:253: in simulate
    proc = Process(cmd)
vunit/ostools.py:131: in __init__
    preexec_fn=os.setpgrp)  # pylint: disable=no-member

An alternative to mocking is separating the code that generates the command from the code that executes the command and only test the code that generates the command.

@umarcor
Member Author

umarcor commented Apr 14, 2019

An alternative to mocking is separating the code that generates the command from the code that executes the command and only test the code that generates the command.

I did that now. GHDLInterface._get_command is tested directly.

@umarcor umarcor changed the title fix(GHDL): --elaborate should -e, not --elab-run fix(GHDL): add sim_opt 'ghdl.elab_e' to run 'ghdl -e' instead of 'ghdl --elab-run --no-run' Apr 14, 2019
@umarcor umarcor changed the title fix(GHDL): add sim_opt 'ghdl.elab_e' to run 'ghdl -e' instead of 'ghdl --elab-run --no-run' add sim_opt 'ghdl.elab_e' to run 'ghdl -e' instead of 'ghdl --elab-run --no-run' Apr 14, 2019
@kraigher
Collaborator

The AppVeyor (Windows tests) fail because you use a hardcoded POSIX path with a forward slash / in your unit test, whereas on Windows it will become a backslash \. You should use os.path.join in the unit test to get a platform-agnostic path.
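For example (a minimal sketch; 'prefix' is just the placeholder used in the failing test):

```python
# Platform-agnostic construction of the same path, instead of the
# hardcoded POSIX string "prefix/ghdl".
from os.path import join

ghdl_executable = join("prefix", "ghdl")  # 'prefix/ghdl' on POSIX, 'prefix\ghdl' on Windows
```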

@umarcor
Member Author

umarcor commented Apr 16, 2019

The AppVeyor (Windows tests) fail because you use a hardcoded POSIX path with a forward slash / in your unit test, whereas on Windows it will become a backslash \. You should use os.path.join in the unit test to get a platform-agnostic path.

Thanks! It should be fixed now.
