
Caliper Support for test cases #61

Open
stephdempsey opened this issue Aug 27, 2021 · 11 comments

@stephdempsey

Adding caliper support to the test suites that use ATS requires updates to each existing test case, which can be onerous. Since all codes that want caliper output need to do this, we'd like it to be an option in ATS.

Current implementation using ATS introspection:
#ATS:if checkGlue("caliper"):
#ATS: name = "TestCase2D" + str( uuid4() )
#ATS: outputdirectory=log.directory+'/caliperoutput/'+name+'/'+name+'.cali'
#ATS: myExe = manager.options.executable + ''' --caliper "spot(output=%s)" '''%outputdirectory
#ATS: t = test(executable=myExe, clas="%s radGroups=16 steps=5 useBC=False meshtype=polygonalrz runDirBaseName=%s" % (SELF,name), nn=1, np=ndomains, nt=nthreads, ngpu=ngpus, suite="threads", label='caliper 80a 16g rz FP regression')

What the option would do is:

  • If the caliper option is enabled, append --caliper spot(output=) to the test executable. Note that this didn't work as expected when passed as a clas; it only worked when inserted into the executable variable (myExe above).
  • The test's output directory needs to be unique to the test object so that, after introspection, each individual test has its own directory. For example, a caliper dir could be saved with the same name as the test log directory but with extension cal or something like that, perhaps at the same level, so that caliper data is preserved while run logs can still be cleaned up. We don't want to have to keep everything, just the caliper data.
  • That naming scheme would also be ideal for archiving caliper data for visualization, making it easier to point SPOT at the cumulative runs for an individual test case.
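The bullets above could be sketched as a small helper on the ATS side. This is purely illustrative: build_caliper_exe and its arguments are hypothetical names, not actual ATS API; the logic just mirrors the #ATS introspection lines quoted earlier.

```python
# Hypothetical sketch of what an ATS-side "caliper" option might do per test.
# All names here are illustrative, not real ATS API.
import os
import uuid


def build_caliper_exe(base_executable, log_directory, test_basename):
    """Wrap a test executable with a Caliper spot() config whose output
    path is unique to this test object, as in the #ATS example above."""
    # Unique name per test object, as in the uuid4() trick above.
    name = test_basename + str(uuid.uuid4())
    outputdirectory = os.path.join(log_directory, "caliperoutput",
                                   name, name + ".cali")
    # The flag goes on the executable itself, not in clas (see note above).
    exe = '%s --caliper "spot(output=%s)"' % (base_executable, outputdirectory)
    return exe, name


exe, name = build_caliper_exe("/path/to/myCode", "/logs", "TestCase2D")
```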
@aaroncblack

aaroncblack commented Aug 27, 2021

I think the command line arg item would be problematic. That would require codes to use a fixed command line argument name for providing a caliper configuration string, and fixed path for the spot file location.

For example, some codes at LLNL use '--caliper=', but another code uses '--performance='

The code in the example above wants to store the spot output in the log directory under a custom name that ATS doesn't know about. Another code may want to put all the cali files in one central directory, etc.

I think the code will need to build the caliper configuration itself, that's not something ATS can do in a portable/general manner.

Can ATS set some env variables for the test executable to pick up?

  • test name
  • test label
  • log directory

These would need to be provided in the environment for each test, but that would allow the test executable to query them and construct the caliper config string with an appropriately named file identifying which test it came from (like Stephanie's example).

Maybe an implementation for this would be ATS inserting something like this before the executable?
"env ATS_TEST_NAME=FOO ATS_TEST_LABEL=BAR ATS_LOG_DIR=BLAH myExecutable. "

@aaroncblack

Note on the example from Stephanie:

#ATS:if checkGlue("caliper"):
#ATS: name = "TestCase2D" + str( uuid4() )
#ATS: outputdirectory=log.directory+'/caliperoutput/'+name+'/'+name+'.cali'
#ATS: myExe = manager.options.executable + ''' --caliper "spot(output=%s)" '''%outputdirectory
#ATS: t = test(executable=myExe, clas="%s radGroups=16 steps=5 useBC=False meshtype=polygonalrz runDirBaseName=%s" % (SELF,name), nn=1, np=ndomains, nt=nthreads, ngpu=ngpus, suite="threads", label='caliper 80a 16g rz FP regression')

The above code creates a unique caliper file name each time. In order for SPOT to be able to recognize a test across different runs, the file name needs to match (I think).

@aaroncblack

An alternate implementation to Stephanie's is to use the command line argument option and add the caliper arg there instead. The executable being run has an embedded python parser, so any arguments to the executable come first, followed by the python script, followed by any arguments to the python script.
testname = "MyProblem"
variant = '80a_16g_xyz_FP_GPU_Old_GTA'
caliper_output = log.directory + "/" + testname + variant + ".cali"
test(clas="--caliper 'runtime-report,spot(output=%s)' MyProblem.py a=16 b=False meshtype=polyhedral" % caliper_output, np=4, nt=10, ngpu=1, level=20, suite="threads", label=variant)

@dawson6

dawson6 commented Aug 31, 2021

Shawn is wondering if we could have an easy method for appending/prepending a clas option on specific suites. Talked with Aaron and wondered if

suite_list("performance").clas_append("--caliper 'runtime-report,spot(output=%s)'" % log.directory + "/" + test.name +".cali"

That syntax is wonky, but the point is: could ATS give projects a method for appending an arbitrary string, built up from test directory, test name, and label type info, evaluated at run time and added to the clas string.

The 'by suite' part is such that it would only apply to some suites, not all. In this instance, only suites marked as 'performance', but it could be 'caliper' or whatever.
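The proposed behavior could be mocked up as below. None of this exists in ATS today: FakeTest, clas_append, and the %(name)s-style placeholders are all hypothetical, shown only to illustrate per-suite appending with per-test substitution at run time.

```python
# Hypothetical sketch of a per-suite clas_append with placeholders resolved
# per test at run time. Not real ATS API; illustrative only.
class FakeTest:
    def __init__(self, name, suite, clas):
        self.name = name
        self.suite = suite
        self.clas = clas


def clas_append(tests, suite, template, log_directory):
    """Append 'template' to the clas of every test in 'suite', filling in
    per-test fields (name, log directory) when the append happens."""
    for t in tests:
        if t.suite == suite:
            t.clas += " " + template % {"logdir": log_directory,
                                        "name": t.name}


tests = [FakeTest("prob1", "performance", "a=1"),
         FakeTest("prob2", "smoke", "a=2")]
clas_append(tests, "performance",
            "--caliper 'runtime-report,spot(output=%(logdir)s/%(name)s.cali)'",
            "/logs")
```

Only the test in the 'performance' suite gets the caliper argument; the 'smoke' test's clas is left untouched.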

@pearce8

pearce8 commented Sep 1, 2021

@daboehme Please help iron out Caliper config options for this. I think for the test suite we want the basics (with hierarchical data for the tools) - but enabling forwarding NVTX etc. as appropriate.

@aaroncblack

aaroncblack commented Sep 1, 2021

One code would actually need the caliper arg inserted at the front of the clas, not appended (a clas_insert(foo)?).

Specifically, something with a very lightweight C++ main around an embedded python parser. Their ATS tests usually look like:
test("my_python_script", "arg1 arg2 arg3")
or
test(SELF, "arg1 arg2 arg3")

They actually need the '--caliper values' arg inserted before the python script in the command line.

I'll note that my example with clas is working quite well, but I could see how it might be tedious duplicating those lines across lots of tests.
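A clas_insert for the embedded-parser case could be as simple as the sketch below. clas_insert is a hypothetical helper, not ATS API; the point is only that the caliper argument must land ahead of the python script name in the final command line.

```python
# Sketch of a clas_insert: place the caliper argument at the front of clas so
# it precedes the python script ("exe --caliper ... script.py script_args"),
# as embedded-python-parser codes require. Hypothetical helper, not ATS API.
def clas_insert(clas, caliper_arg):
    """Return clas with caliper_arg at the front, ahead of the python
    script and its arguments."""
    return caliper_arg + " " + clas


clas = clas_insert("my_python_script arg1 arg2 arg3",
                   "--caliper 'spot(output=/logs/my.cali)'")
```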

@daboehme

daboehme commented Sep 1, 2021

> @daboehme Please help iron out Caliper config options for this. I think for the test suite we want the basics (with hierarchical data for the tools) - but enabling forwarding NVTX etc. as appropriate.

@pearce8 Sure. So far it seems this is mostly focused on how the config string is passed to Caliper, which is up to the application and the test framework. For hierarchical timing information the "spot" config is fine (or "runtime-report" for a human-readable version); for NVTX you can use the "nvtx" config.

@dawson6

dawson6 commented Mar 20, 2023

Bumping this issue, as we have a new developer on the project. I do think we should have an easier method for specifying how caliper generated data can be stored. There will be some variation between projects, but let's see what we can do here.

@daboehme

Hi, let me know if you need any help with the Caliper configurations.

The spot config now has an outdir option in addition to output to specify an output directory name. Maybe it helps here.

Generally, it's a good idea to record all the relevant metadata with Adiak. That way it's stored in the .cali files and can be accessed from Hatchet, Thicket, SPOT, and the Python .cali reader, so you don't need to rely on encoding everything in the file/directory names.
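Reading that metadata back could look roughly like this. This assumes the caliper-reader package from PyPI and its documented read_caliper_contents function; the file path and the "test_name" metadata key are hypothetical.

```python
# Sketch: pulling Adiak-recorded metadata out of a .cali file with the
# caliper-reader package (pip install caliper-reader). The metadata key
# "test_name" is hypothetical; whatever the code records via Adiak lands
# in the file's globals.
def read_test_name(cali_file):
    import caliperreader
    # read_caliper_contents returns (records, globals); Adiak metadata is
    # in the globals dict, so tests can be identified without encoding
    # everything in file or directory names.
    records, globals_ = caliperreader.read_caliper_contents(cali_file)
    return globals_.get("test_name")
```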

@dawson6

dawson6 commented Mar 21, 2023

Excellent, thanks @daboehme we'll likely reach out to you on this task for suggestions

@dawson6

dawson6 commented Aug 16, 2023

Standardized use of ATS for Caliper runs is still a useful goal.
Wondering if Misha could work with two projects (Healy's, Stephanie's, Shawn's, etc.) who have codes with caliper capability to define what it really means to run 'caliper testing'. Define a way to do this which would still allow for project flexibility as needed to specify the command line or env vars projects need to put out the caliper data files.

This will take a lot of interaction with folks working on performance monitoring tools in general as there is a lot of churn here, and the danger is implementing a solution that is already obsolete or is not useful.

Before any coding could happen, just a cross team discussion to come up with suggestions would be a place to start.
