
[v2compat] finalize pipeline run MLMD schema #5669

Closed
10 of 11 tasks
Bobgy opened this issue May 18, 2021 · 6 comments · Fixed by #6054


Bobgy commented May 18, 2021

TODOs:

  • Design, figure out the exact list of items we need to change (size/M)

Known work items:

Bobgy changed the title Verify MLMD state of v2 compatible mode pipelines → Finalize MLMD spec of v2 compatible mode pipelines May 18, 2021

Bobgy commented May 18, 2021

/assign @capri-xiyue


Bobgy commented May 18, 2021

b/187556737

Bobgy changed the title Finalize MLMD spec of v2 compatible mode pipelines → Finalize MLMD schema of v2 compatible mode pipelines May 18, 2021
Bobgy mentioned this issue May 18, 2021 (1 task)

Bobgy commented May 21, 2021

One problem I found from #5668 concerns calls like:

metrics.log_metric('accuracy', (result*100.0))

For now, we log a custom property called "accuracy" in the artifact, but if we do not know in advance which metric names are logged, we cannot know which custom properties correspond to scalar metrics.

Therefore, it seems we need more structured info for metrics.
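
To make the ambiguity concrete, here is a minimal sketch of how a logged scalar metric can end up as a plain MLMD custom property sitting next to unrelated metadata; the property names and values are illustrative assumptions, not the finalized schema.

```python
from ml_metadata.proto import metadata_store_pb2

# Illustrative sketch (not the finalized schema): each logged metric becomes
# an artifact custom property next to unrelated metadata.
artifact = metadata_store_pb2.Artifact()
artifact.custom_properties['accuracy'].double_value = 92.0           # a scalar metric
artifact.custom_properties['display_name'].string_value = 'metrics'  # not a metric

# A reader that does not know the metric names in advance cannot tell
# which custom properties are scalar metrics and which are other metadata.
for name, value in artifact.custom_properties.items():
    print(name, value)
```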

Bobgy mentioned this issue May 21, 2021 (5 tasks)
zijianjoy (Collaborator) commented:

> For now, we log a custom property called "accuracy" in the artifact, but if we do not know in advance which metric names are logged, we cannot know which custom properties correspond to scalar metrics.
> Therefore, it seems we need more structured info for metrics.

The SDK allows the user to make any number of log_metric calls, so scalar metrics are stored as a dictionary rather than a single value.

See this SDK implementation: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py#L112-L119

In order to visualize these scalar metrics, we need to create a table: one column for the metric name, another column for the metric value.
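
As a rough sketch of that table, assuming scalar metrics land in a flat name-to-value dictionary on the Metrics artifact (per the log_metric implementation linked above), the flattening could look like this; the metric names and values are made up for illustration.

```python
# Hypothetical metadata written by repeated log_metric calls.
metrics_metadata = {'accuracy': 92.0, 'auc': 0.87}

# One row per metric: (metric name, metric value).
rows = [(name, value) for name, value in sorted(metrics_metadata.items())]
for name, value in rows:
    print(f'{name}\t{value}')
```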

zijianjoy (Collaborator) commented:

But we also need to distinguish confidenceMetrics and confusionMatrix from scalar metrics. Otherwise they will appear in the same dictionary, and we would need to filter them explicitly: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/test_data/expected_io_types_classification_metrics.json
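
For illustration, a sketch of that filtering, assuming the key names from the linked JSON (confidenceMetrics, confusionMatrix) and treating the remaining numeric entries as scalar metrics; this is not the finalized schema.

```python
# Keys that hold structured (non-scalar) metrics in a ClassificationMetrics
# artifact's metadata, per the linked example JSON; assumed here, not finalized.
STRUCTURED_KEYS = {'confidenceMetrics', 'confusionMatrix'}

def split_metrics(metadata: dict):
    """Separate scalar metrics from structured metrics by key."""
    scalars = {k: v for k, v in metadata.items()
               if k not in STRUCTURED_KEYS and isinstance(v, (int, float))}
    structured = {k: v for k, v in metadata.items() if k in STRUCTURED_KEYS}
    return scalars, structured
```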

Bobgy added the size/XL label May 24, 2021

Bobgy commented May 24, 2021

Needs further work item breakdown

Bobgy assigned Bobgy and unassigned capri-xiyue May 24, 2021
Bobgy changed the title Finalize MLMD schema of v2 compatible mode pipelines → [v2compat] finalize pipeline run MLMD schema Jun 4, 2021
google-oss-robot pushed a commit that referenced this issue Jul 20, 2021
* chore(v2): standardize MLMD data model

* change context type to system namespace

* update sdk snapshots

* fix go v2 tests

* update

* update v2 compat snapshots

* fix all samples

* fix must specify pipeline root

* add artifact display name

* add UI rendering of new fields

* fix sample tests

* let ui read artifact and execution names consistently

* fix samples

* fix frontend tests

* fix sample test

* fix last sample

* address feedback