Unify the node and dataflow versioning API #734
Conversation
👍 Looks good to me!
- Performed an incremental review on 248f191
- Looked at 93 lines of code in 1 file
- Took 1 minute and 37 seconds to review

More info
- Skipped 0 files when reviewing.
- Skipped posting 1 additional comment because it didn't meet the confidence threshold of 50%.
1. tests/test_driver_export.py:109
   - Assessed confidence: 0%
   - Comment: Good job on updating the test to dynamically set the `version` attribute of each node in `EXPECTED_JSON`. This ensures that the test will pass regardless of the Python version used.
   - Reasoning: The test `test_export_execution` has been updated to reflect the changes in the PR. The `version` attribute of each node in `EXPECTED_JSON` is now dynamically set to the version of the corresponding node in the graph. This is a good change: since the `version` attribute depends on the Python version, the test will now pass regardless of the Python version used. I don't see any issues with this change.
Workflow ID: wflow_9buXnjTWmOPF5dWT
Not what you expected? You can customize the content of the reviews using rules. Learn more here.
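The dynamic-version approach the review praises can be sketched roughly as follows. Note that `EXPECTED_JSON`, `fake_version()`, and the node names here are hypothetical stand-ins for illustration, not the project's actual test fixtures:

```python
import hashlib

# Hypothetical expected-output fixture whose version fields cannot be
# hard-coded, because node versions vary with the Python version.
EXPECTED_JSON = {
    "nodes": [
        {"name": "a", "version": None},
        {"name": "b", "version": None},
    ]
}


def fake_version(name: str) -> str:
    # Stand-in for a source-hash-based node version.
    return hashlib.sha256(name.encode()).hexdigest()[:8]


# What the exporter under test might actually produce.
actual = {
    "nodes": [
        {"name": "a", "version": fake_version("a")},
        {"name": "b", "version": fake_version("b")},
    ]
}

# Copy each node's computed version into the expectation, so the
# comparison only exercises the fields the test actually controls.
for expected_node, actual_node in zip(EXPECTED_JSON["nodes"], actual["nodes"]):
    expected_node["version"] = actual_node["version"]

assert EXPECTED_JSON == actual
```

The trade-off, flagged in the next review comment, is that this mutates a module-level fixture in place.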
👍 Looks good to me!
- Performed an incremental review on a60db44
- Looked at 21 lines of code in 1 file
- Took 2 minutes and 23 seconds to review

More info
- Skipped 0 files when reviewing.
- Skipped posting 1 additional comment because it didn't meet the confidence threshold of 50%.
1. tests/test_driver_export.py:107
   - Assessed confidence: 100%
   - Grade: 40%
   - Comment: Consider creating a deep copy of `EXPECTED_JSON` within the test and modifying that instead of the global variable, to avoid side effects.
   - Reasoning: The test `test_export_execution` modifies the global `EXPECTED_JSON` variable on each run. This could lead to unexpected results if multiple tests use this variable or if the test is run multiple times in the same session. It would be better to create a deep copy of `EXPECTED_JSON` within the test and modify that instead.
Workflow ID: wflow_2Otbj2mKo61mztca
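A minimal illustration of the suggested fix, using `copy.deepcopy` so the module-level fixture is never mutated. The fixture and test names are hypothetical, not the repository's actual test code:

```python
import copy

# Hypothetical module-level fixture shared across tests.
EXPECTED_JSON = {"nodes": [{"name": "a", "version": None}]}


def test_export_execution_sketch():
    # Deep-copy the fixture so per-test adjustments stay local.
    expected = copy.deepcopy(EXPECTED_JSON)
    expected["nodes"][0]["version"] = "abc123"

    # ... compare `expected` against the exporter output here ...

    # The global fixture is untouched, so other tests (and reruns
    # in the same session) see it in its original state.
    assert EXPECTED_JSON["nodes"][0]["version"] is None


test_export_execution_sketch()
```

A shallow `copy.copy` would not be enough here, since the nested node dicts would still be shared with the global fixture.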
A few thoughts, but overall good
Motivation described in #728 and relates to #175.

Changes
- Moved `hash_source_code()` and the utility `remove_docs_and_comments()` from `graph_utils` to `graph_types`
- Added to `graph_types.HamiltonNode` and `graph_types.HamiltonGraph` a `.version` property
- Updated `hamilton.plugins.h_experiments.hook.ExperimentTracker` to use the `HamiltonGraph.version` implementation, aligning it with the CLI and DiskCache

How I tested this

Notes
- `hash_source_code()` can be used directly on functions/Callables or on `hamilton.node.Node`

Checklist
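Since the PR centers on versioning nodes by hashing their source, a rough sketch of what a source-hash-based version could look like may help. `hash_source_sketch()` below is a hypothetical stand-in, not Hamilton's actual `hash_source_code()` implementation:

```python
import hashlib


def hash_source_sketch(source: str) -> str:
    """Return a stable digest of source text.

    Hypothetical stand-in for a source-code hash: normalizes
    incidental whitespace so only meaningful edits change the hash
    (and hence the node/dataflow version).
    """
    normalized = "\n".join(line.rstrip() for line in source.strip().splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()


v1 = hash_source_sketch("def my_node() -> int:\n    return 1\n")
v2 = hash_source_sketch("def my_node() -> int:\n    return 1   \n")
assert v1 == v2  # trailing whitespace does not change the version
```

A graph-level version, as exposed by the new `.version` property described above, could then be derived by hashing the sorted node versions together, so any node change propagates to the dataflow version.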