
feat: Support result upload for XL benchmarks #225

Merged (3 commits into main, Dec 4, 2024)

Conversation

jstlaurent (Contributor)

Changelogs

  • Update the upload_results client method to use the new V2 API endpoint in the Hub
  • Tighten types in Pydantic models to reflect the data structure once parsing is complete
  • Update the client's base request method to return the httpx response, so that headers are accessible

Linked issues

  • Closes [PRO-175]

Checklist:

  • Was this PR discussed in an issue? It is recommended to first discuss a new feature in a GitHub issue before opening a PR.
  • Add tests to cover the fixed bug(s) or the newly introduced feature(s) (if appropriate).
  • Update the API documentation if a new function is added, or an existing one is deleted.
  • Write concise and explanatory changelogs above.
  • If possible, assign one of the following labels to the PR: feature, fix, chore, documentation or test (or ask a maintainer to do it for you).

This PR updates the client to use the new V2 API endpoint for results, which will enable result upload support for the upcoming V2 benchmarks. Because a result in the client holds only its related benchmark's identifier, not an object reference, the same endpoint must accommodate both V1 and V2 benchmarks.
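A minimal sketch of the constraint described above (the class and endpoint path here are hypothetical, not the actual Polaris API): because the result carries only a benchmark identifier, the upload path cannot branch on benchmark version on the client side.

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkResult:
    # Only the identifier is stored, not a typed V1/V2 benchmark object.
    benchmark_artifact_id: str  # e.g. "my-org/my-benchmark"
    results: dict = field(default_factory=dict)


def upload_url(result: BenchmarkResult) -> str:
    # Single endpoint for both V1 and V2 benchmarks; the Hub resolves
    # the identifier to the right benchmark version server-side.
    return f"/v2/results/{result.benchmark_artifact_id}"
```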

I also took the opportunity to tighten the typing in some Pydantic models. We had initially typed our models liberally, to show what could be accepted as input. While that still holds, in practice our models normalize these inputs to a single representation, and the updated typing reflects that normalized form.
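The pattern is roughly the following (a sketch with a hypothetical model and field, not the actual Polaris models): a field previously typed as a union to accept multiple input shapes can be re-typed to the single normalized form, with a `mode="before"` validator doing the coercion.

```python
from pydantic import BaseModel, field_validator


class ResultRecord(BaseModel):
    # Previously this might have been typed `str | list[str]` to show both
    # accepted inputs; since the validator always normalizes to a list,
    # the tighter type matches what the model actually holds.
    metrics: list[str]

    @field_validator("metrics", mode="before")
    @classmethod
    def _coerce_to_list(cls, v):
        # Wrap a bare string so both input shapes end up as a list.
        return [v] if isinstance(v, str) else v
```

A single string input is still accepted, but the parsed model always exposes a list.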

The new V2 result endpoint now returns a Content-Location response header containing the URL of the result on the Hub. To leverage this, I exposed the full response object to the calling functions, instead of only the JSON body.

@jstlaurent jstlaurent added the enhancement New feature or request label Nov 30, 2024
@jstlaurent jstlaurent requested a review from cwognum as a code owner November 30, 2024 02:44
@jstlaurent jstlaurent added the feature Annotates any PR that adds new features; Used in the release process label Nov 30, 2024
@jstlaurent jstlaurent self-assigned this Nov 30, 2024
cwognum (Collaborator) left a comment:

Thanks @jstlaurent! 🙏

(Three review threads on polaris/evaluate/_results.py, all resolved)
mercuryseries (Contributor) left a comment:

Looks good to me, @jstlaurent! Thanks.

(Review thread on polaris/evaluate/_results.py, resolved)
Co-authored-by: Honoré Hounwanou <mercuryseries@gmail.com>
@jstlaurent jstlaurent merged commit bddbe9d into main Dec 4, 2024
10 checks passed
@jstlaurent jstlaurent deleted the feat/xl-results branch December 4, 2024 20:42