Caliper Support for test cases #61
Comments
I think the command-line-arg item would be problematic. It would require codes to use a fixed command-line argument name for providing a Caliper configuration string, and a fixed path for the spot file location. For example, some codes at LLNL use '--caliper=', but another code uses '--performance='. The code in the example above wants to store the spot output in the log directory under a custom name that ATS doesn't know about. Another code may want to put all the .cali files in one central directory, etc. I think the code will need to build the Caliper configuration itself; that's not something ATS can do in a portable/general manner. Can ATS set some environment variables for the test executable to pick up?
These would need to be provided in the environment for each test, but that would allow the test executable to query them and construct the Caliper config string with an appropriately named file identifying which test it came from (like Stephanie's example). Maybe an implementation for this would be ATS inserting something like this before the executable?
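A minimal sketch of the env-variable idea above. The variable names `ATS_TEST_NAME` and `ATS_LOG_DIRECTORY` are hypothetical, not existing ATS features; the point is that ATS exports per-test values and the test executable assembles its own Caliper config string from them:

```python
import os

# Hypothetical environment variables that ATS could export per test.
# ATS_TEST_NAME / ATS_LOG_DIRECTORY are illustrative names, not real ATS keys.
def build_caliper_config():
    test_name = os.environ.get("ATS_TEST_NAME", "unknown_test")
    log_dir = os.environ.get("ATS_LOG_DIRECTORY", ".")
    cali_file = os.path.join(log_dir, test_name + ".cali")
    # The executable, not ATS, decides the final Caliper config string.
    return "spot(output=%s)" % cali_file

print(build_caliper_config())
```

This keeps the argument name (`--caliper=`, `--performance=`, or none at all) entirely up to the application, which is the portability concern raised above.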
Note on the example from Stephanie (the `#ATS:if checkGlue("caliper"):` block): the above code creates a unique Caliper file name each time. In order for SPOT to be able to recognize a test across different runs, the file name needs to match (I think).
An alternative to Stephanie's implementation is to use the command-line-argument option and add the Caliper arg there instead. The executable being run has an embedded Python parser, so any arguments to the executable come first, followed by the Python script, followed by any arguments to the Python script.
Shawn wondering if we could have an easy method for appending/prepending a clas option on specific suites. Talked with Aaron and wondered about something like `suite_list("performance").clas_append("--caliper 'runtime-report,spot(output=%s)'" % (log.directory + "/" + test.name + ".cali"))`. That syntax is wonky, but the point is: could ATS give projects a method for appending an arbitrary string, built up from test directory, test name, and label type info, evaluated at run time and added to the clas string? The 'by suite' part means it would only apply to some suites, not all. In this instance, only suites marked as 'performance', but it could be 'caliper' or whatever.
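A sketch of what such a per-suite `clas_append` might look like. Everything here (`Suite`, `clas_append`, the `{log_dir}`/`{test_name}` placeholders) is a hypothetical API, not part of ATS; the key design point is storing a template and filling it in per test at run time:

```python
# Hypothetical per-suite clas_append API (not an existing ATS feature).
class Suite:
    def __init__(self, name):
        self.name = name
        self.clas_templates = []

    def clas_append(self, template):
        # Store a template; placeholders are resolved per test at run time.
        self.clas_templates.append(template)

    def render_clas(self, base_clas, log_dir, test_name):
        # Fill in the per-test values and append to the existing clas string.
        extra = " ".join(t.format(log_dir=log_dir, test_name=test_name)
                         for t in self.clas_templates)
        return (base_clas + " " + extra).strip()

perf = Suite("performance")
perf.clas_append("--caliper 'spot(output={log_dir}/{test_name}.cali)'")
print(perf.render_clas("steps=5", "/logs", "TestCase2D"))
```

Because the template is attached to a suite object, only tests in that suite (e.g. 'performance') would pick it up, matching the 'by suite' intent above.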
@daboehme Please help iron out Caliper config options for this. I think for the test suite we want the basics (with hierarchical data for the tools), but enabling forwarding NVTX etc. as appropriate.
One code would actually need the caliper arg inserted at the front of the clas, not appended (a clas_insert(foo)?). Specifically, something with a very lightweight C++ main around an embedded Python parser. Their ATS tests usually look like: They actually need the '--caliper values' arg inserted before the Python script in the command line. I'll note that my example with clas is working quite well, but I could see how it might be tedious duplicating those lines across lots of tests.
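The insert-vs-append distinction above can be sketched as follows. `insert_caliper_arg` is a hypothetical helper, not ATS code; it shows the argument-ordering constraint for an executable with an embedded Python parser (executable args first, then the script, then script args):

```python
# Hypothetical helper illustrating clas_insert-style placement (not ATS API).
def insert_caliper_arg(argv, caliper_args):
    # Place the Caliper option immediately after the executable, ahead of
    # the Python script and the script's own arguments.
    return [argv[0]] + caliper_args + argv[1:]

cmd = ["./mycode", "driver.py", "steps=5"]
print(insert_caliper_arg(cmd, ["--caliper", "spot(output=run.cali)"]))
```

A plain `clas_append` would put the Caliper option after `steps=5`, where the embedded parser would treat it as a script argument rather than an executable argument.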
@pearce8 Sure. So far it seems this is mostly focused on how the config string is passed to Caliper, which is up to the application and the test framework. For hierarchical timing information the "spot" config is fine (or "runtime-report" for a human-readable version); for NVTX you can use the "nvtx" config.
Bumping this issue, as we have a new developer on the project. I do think we should have an easier method for specifying how Caliper-generated data is stored. There will be some variation between projects, but let's see what we can do here.
Hi, let me know if you need any help with the Caliper configurations. Generally, it's a good idea to record all the relevant metadata with Adiak. That way it's stored in the .cali files and can be accessed from Hatchet, Thicket, SPOT, and the Python .cali reader, and you don't need to rely on encoding everything in the file/directory names.
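An illustration (plain Python, not the Adiak API) of why recorded metadata beats encoding information in file names: once each run carries a `test_name` metadata record, the .cali file name itself can be arbitrary, even a uuid, and runs of the same test are still matched by metadata:

```python
# Stand-in for metadata that Adiak would record inside each .cali file;
# the field names here are illustrative.
runs = [
    {"file": "a1b2.cali", "test_name": "TestCase2D", "date": "2023-01-02"},
    {"file": "c3d4.cali", "test_name": "TestCase2D", "date": "2023-01-09"},
    {"file": "e5f6.cali", "test_name": "TestCase3D", "date": "2023-01-09"},
]

def runs_for_test(runs, test_name):
    # Group historical runs of the same test by the recorded metadata key,
    # ignoring the (possibly unique) file names entirely.
    return [r["file"] for r in runs if r["test_name"] == test_name]

print(runs_for_test(runs, "TestCase2D"))
```

This also addresses the earlier concern about uuid-based file names breaking SPOT's run matching: the stable identity lives in the metadata, not the path.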
Excellent, thanks @daboehme. We'll likely reach out to you on this task for suggestions.
Standardized use of ATS for Caliper runs is still a useful goal. This will take a lot of interaction with folks working on performance-monitoring tools in general, as there is a lot of churn here, and the danger is implementing a solution that is already obsolete or not useful. Before any coding happens, a cross-team discussion to come up with suggestions would be the place to start.
Adding Caliper support to the test suites that use ATS requires updates to each existing test case, which can be onerous. Since all codes wanting Caliper output need to do this, we'd like it to be an option in ATS.
Current implementation using ATS introspection:
```python
#ATS:if checkGlue("caliper"):
#ATS: name = "TestCase2D" + str( uuid4() )
#ATS: outputdirectory=log.directory+'/caliperoutput/'+name+'/'+name+'.cali'
#ATS: myExe = manager.options.executable + ''' --caliper "spot(output=%s)" '''%outputdirectory
#ATS: t = test(executable=myExe, clas="%s radGroups=16 steps=5 useBC=False meshtype=polygonalrz runDirBaseName=%s" % (SELF,name), nn=1, np=ndomains, nt=nthreads, ngpu=ngpus, suite="threads", label='caliper 80a 16g rz FP regression')
```
What the option would do is: