
[TASK] Allow functions in samplers/optimizers to be implemented in the same python module #2301

Merged: 9 commits into devel, Apr 24, 2024

Conversation

alfoa (Collaborator) commented Apr 16, 2024


Pull Request Description

What issue does this change request address? (Use "#" before the issue to link it, i.e., #42.)

Closes #2300

What are the significant changes in functionality due to this change request?

This PR allows functions used by Samplers/Optimizers to be implemented in the same Python module.
The old approach (a single method named ``evaluate``) is still available, but a new approach is added in which a method named after the function can be implemented instead, so several functions can coexist in one module.

@wangcj05 @mandd @idaholab/raven-team This is a modification I did in my local version and I thought it could be useful. Feel free to close/discard this feature/PR.


For Change Control Board: Change Request Review

The following review must be completed by an authorized member of the Change Control Board.

  • 1. Review all computer code.
  • 2. If any changes occur to the input syntax, there must be an accompanying change to the user manual and xsd schema. If the input syntax change deprecates existing input files, a conversion script needs to be added (see Conversion Scripts).
  • 3. Make sure the Python code and commenting standards are respected (camelBack, etc.) - see the wiki for details.
  • 4. Automated Tests should pass, including run_tests, pylint, manual building and xsd tests. If there are changes to Simulation.py or JobHandler.py the qsub tests must pass.
  • 5. If significant functionality is added, there must be tests added to check this. Tests should cover all possible options. Multiple short tests are preferred over one large test. If new development on the internal JobHandler parallel system is performed, a cluster test must be added setting, in XML block, the node <internalParallel> to True.
  • 6. If the change modifies or adds a requirement or a requirement based test case, the Change Control Board's Chair or designee also needs to approve the change. The requirements and the requirements test shall be in sync.
  • 7. The merge request must reference an issue. If the issue is closed, the issue close checklist shall be done.
  • 8. If an analytic test is changed/added, is the analytic documentation updated/added?
  • 9. If any test used as a basis for documentation examples (currently found in raven/tests/framework/user_guide and raven/docs/workshop) have been changed, the associated documentation must be reviewed and assured the text matches the example.

@moosebuild

Job Mingw Test on f85cedd : invalidated by @alfoa

@moosebuild

Job Mingw Test on f85cedd : invalidated by @aalfonsi

alfoa (Collaborator, Author) commented Apr 19, 2024

@wangcj05 @mandd @joshua-cogliati-inl the windows machine does not seem to work

@moosebuild

Job Mingw Test on f85cedd : invalidated by @wangcj05

@moosebuild

Job Mingw Test on f85cedd : invalidated by @joshua-cogliati-inl

failed in fetch

@moosebuild

Job Mingw Test on f85cedd : invalidated by @alfoa

fetch error

wangcj05 (Collaborator)

@alfoa FYI, our HPC is under maintenance right now. This outage will last until May 6. Please let me know if you need to merge this PR before that.

alfoa (Collaborator, Author) commented Apr 22, 2024

> @alfoa FYI, our HPC is under maintenance right now. This outage will last until May 6. Please let me know if you need to merge this PR before that.

@wangcj05 No problem for me. I can use my local branch for now. Let me know if you have comments in the meantime (I can address them offline).

Thanks a lot

wangcj05 (Collaborator) left a review comment

@alfoa I have some comments for you to consider.

Comment on lines +157 to +166
```diff
+      fPointer = namedtuple("func", ['methodName', 'instance'])
+      mName = 'evaluate'
+      # check if the correct method is present
-      if "evaluate" not in self.funcDict[key].availableMethods():
-        self.raiseAnError(IOError, f'Function {self.funcDict[key].name} does not contain a method named "evaluate". It must be present if this needs to be used in a Sampler!')
+      if val not in initDict['Functions'][val].availableMethods():
+        if "evaluate" not in initDict['Functions'][val].availableMethods():
+          self.raiseAnError(IOError, f'Function {initDict["Functions"][val].name} does contain neither a method named "{val}" nor "evaluate". '
+                                     'It must be present if this needs to be used in a Sampler!')
+      else:
+        mName = val
+      self.funcDict[key] = fPointer(mName, initDict['Functions'][val])
```
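The method-selection logic in this hunk can be exercised in isolation. The sketch below mimics its behavior with stand-ins: ``FakeFunction``, ``resolveMethod``, and the sample method lists are hypothetical names invented for this demo; only the namedtuple shape and the fallback rule come from the PR.

```python
from collections import namedtuple

# Same namedtuple shape as in the PR: (methodName, instance)
fPointer = namedtuple("func", ['methodName', 'instance'])

class FakeFunction:
  """Hypothetical stand-in for a RAVEN Function entity (demo only)."""
  def __init__(self, name, methods):
    self.name = name
    self._methods = methods

  def availableMethods(self):
    return self._methods

def resolveMethod(val, func):
  """Pick the method named after the function if present, else fall back
  to the legacy 'evaluate'; error out if neither exists."""
  mName = 'evaluate'
  if val not in func.availableMethods():
    if 'evaluate' not in func.availableMethods():
      raise IOError(f'Function {func.name} contains neither a method named '
                    f'"{val}" nor "evaluate"!')
  else:
    mName = val
  return fPointer(mName, func)

# module defines a method named like the function -> that method is used
print(resolveMethod('constrain', FakeFunction('f1', ['constrain', 'evaluate'])).methodName)  # constrain
# only the legacy method exists -> fall back to 'evaluate'
print(resolveMethod('constrain', FakeFunction('f2', ['evaluate'])).methodName)  # evaluate
```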
wangcj05 (Collaborator):

Can these lines be handled in the Sampler base class?

alfoa (Collaborator, Author):

Unfortunately, it seems that the CustomSampler and the EnsembleForward are "special" cases that take care of their functions directly. This implementation did not introduce that "exception"; it was already like that.

Comment on lines -170 to +180
```diff
-      self.funcDict[key] = availableFunc[val]
+      fPointer = namedtuple("func", ['methodName', 'instance'])
+      mName = 'evaluate'
+      # check if the correct method is present
-      if "evaluate" not in self.funcDict[key].availableMethods():
-        self.raiseAnError(IOError, f'Function {self.funcDict[key].name} does not contain a method named "evaluate". It must be present if this needs to be used in a Sampler!')
+      if val not in availableFunc[val].availableMethods():
+        if "evaluate" not in availableFunc[val].availableMethods():
+          self.raiseAnError(IOError, f'Function {availableFunc[val].name} does contain neither a method named "{val}" nor "evaluate". '
+                                     'It must be present if this needs to be used in a Sampler!')
+      else:
+        mName = val
+      self.funcDict[key] = fPointer(mName, availableFunc[val])
```
wangcj05 (Collaborator):

Can these lines be handled in the Sampler base class?

alfoa (Collaborator, Author):

Unfortunately, it seems that the CustomSampler and the EnsembleForward are "special" cases that take care of their functions directly. This implementation did not introduce that "exception"; it was already like that.

@moosebuild

Job Mingw Test on f85cedd : invalidated by @joshua-cogliati-inl

failed in fetch

wangcj05 (Collaborator) left a review comment

Changes are good.

wangcj05 (Collaborator)

Checklist is good, PR can be merged.

@wangcj05 wangcj05 merged commit b56970b into devel Apr 24, 2024
12 checks passed
@wangcj05 wangcj05 deleted the alfoa/functionsInSamplersOptimizers branch April 24, 2024 14:50
wangcj05 (Collaborator)

@alfoa Thanks for your contribution, I have merged your PR.
