Add [ClearlyDefined] service #6944
Conversation
This pull request introduces 1 alert when merging df54dc1 into 22995e4 - view on LGTM.com new alerts:
This pull request introduces 1 alert when merging bccf927 into 22995e4 - view on LGTM.com new alerts:
Thank you for the PR. More context and content would've been helpful to get a better picture of the objective/motivation, including the questions asked in our issue template for new badges.
I've gone ahead and taken a pass through this anyway, and left several questions and requested changes inline below that will need to be resolved before we could potentially move ahead.
This pull request introduces 1 alert when merging aae9b82 into 22995e4 - view on LGTM.com new alerts:
Thanks for the quick and thorough feedback! I had used the default PR template and missed that there was a special one for new services. I've updated the description accordingly and will dig through the linting and testing now.
I apologize, but I can't figure out why it complains about "prettier". It runs in my VS Code and shows no issues in the files for this PR. Also, where do we store the JSON results for test API calls, or are those generated?
The CI check runs prettier using the version of prettier that's specified in the project manifest. I'm not quite positive what you meant with this comment, but it sounds like you're referring to the Prettier plugin for VS Code? What do you see when you run the same script in your local workspace?
I don't understand the question. Are you referring to the mocha test results, or the JSON response that an integration/service test receives when calling the running server? You should be able to view the test results within the Circle CI job, e.g. https://app.circleci.com/pipelines/github/badges/shields/8507/workflows/d089bb9b-7a26-4b14-8944-917e8e5ff493/jobs/154701

And if your question is referring to the latter, we don't persist the API responses generated during test runs anywhere on disk. As of 7f608c1 the service tests are missing the required |
The second case exists in the ClearlyDefined data; it is rated 0, so we get a valid JSON response. It's essentially the same as the first case, so we could remove it. I was expecting a 404 for the third case as well, but it returns JSON with an error code.
So I grabbed the ref for this PR and have been working with it locally, and one of the things that concerns me is how long the ClearlyDefined API takes to respond, at least on the first call. The badges our service provides are primarily displayed in places like GitHub readmes, which enforce a fairly narrow round-trip window for badges (and any fetched image content) to render. If the entire request/response workflow doesn't complete within ~3-4 seconds, GitHub won't display the result. It seems like the initial request for a given package revision can take north of 10 seconds, at least in my anecdotal experience. Do you know if it's performing the entire analysis on demand when it receives the request? If so, I don't think this badge is going to be viable.
Hmm. My current understanding is that the analysis happens when repos are harvested by ClearlyDefined. The API call shouldn't take long, but then who knows how many people have actually tested this. I will request details from the CD folks. Let's put this PR into draft until we have clarity, because I agree it needs to be reasonably fast.
The API now responds in 0.5 to 1.5 seconds. Looks OK in my tests (from Canada).
Thanks for following up; it does indeed seem much snappier now. The error behavior is still a bit of a concern for me. Here are my observations, let me know if you expect and/or see anything different:
That's not necessarily a blocker, but I'd suggest it's a non-ideal response paradigm for an API, and it complicates our error handling and presentation to users. We could handle the 500 cases and set the message to be something like |
No response yet on the upstream issue: clearlydefined/service#870
Thanks so much for following up on that! I suggest we hold off for a bit to see what comes out of that, but if there's no news in the near future we can go ahead and proceed. |
I'm tracking along with the upstream discussion, and while it's good to see an active dialog, I'm also getting the sense that it's not likely to change any time soon. We definitely wouldn't want to introduce an extra, sequential call chain for our purposes, so I think we'll be stuck with the existing behavior as-is. What do you think?
It certainly will take a while to sort this out on their side, especially since it could be an incompatible change. Let's assume it stays as-is. The 200 cases are missing certain parts of the JSON, so I can check for that and return "not found" for the badge. I'm not sure whether I can handle the 500s or if that requires a change somewhere else.
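To make the plan above concrete, here is a hypothetical sketch (not the actual Shields implementation) of how the observed responses could be mapped to a badge message: a 500 becomes an explicit error, a 200 body missing the score becomes "not found", and a present score is rendered as "score/100". The field name `scores.effective` and the message strings are assumptions for illustration.

```javascript
// Hypothetical response-to-message mapping; field names and messages are assumptions.
function badgeMessage(statusCode, body) {
  if (statusCode === 500) {
    // The API currently answers 500 rather than 404 for some bad coordinates.
    return 'malformed or unknown component'
  }
  const score = body && body.scores && body.scores.effective
  if (typeof score !== 'number') {
    // Some 200 responses omit the score for components that were never harvested.
    return 'not found'
  }
  return `${score}/100`
}

console.log(badgeMessage(200, { scores: { effective: 88 } })) // prints "88/100"
console.log(badgeMessage(200, {})) // prints "not found"
console.log(badgeMessage(500, null)) // prints "malformed or unknown component"
```

Note that a legitimate score of 0 (the second test case discussed earlier) still passes the `typeof` check and renders as "0/100" rather than "not found".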
Co-authored-by: Caleb Cartwright <calebcartwright@users.noreply.github.com>
One last item on the new error handling scenario, then we'll be good to merge! Thanks again for all your work and for sticking with this even though it had to spawn some upstream threads.
I think we got it. Thanks for your help!
Excellent, thank you!
Awesome. I don't see it on https://shields.io/category/analysis yet, but I guess there's some follow-up on your end to take it to production.
Correct. We do automatically deploy merged changes to our staging environment (https://shields-staging.herokuapp.com/), but we don't auto-deploy straight to prod. We typically deploy to prod at least once a week, so odds are pretty high this will be live by Monday, if not earlier.
Woohoo! Works well. 😄 It was a great learning experience for me. Thanks for your help.
Description
This service is for displaying the overall score of repositories as defined by ClearlyDefined. ClearlyDefined is part of the Open Source Initiative, and on a mission to help FOSS projects thrive by being, well, clearly defined. For more details visit
https://clearlydefined.io/about
The badge will display the overall score out of 100 for a given repository, describing how well its licenses are defined. Since it is essentially an analysis of license metadata, it has been placed under the "Analysis" category.
Example of a repository on ClearlyDefined:
https://clearlydefined.io/definitions/npm/npmjs/-/jquery/3.4.1
The badge will display the top right number on the screenshot using the "score/100" format. For the given example, the badge will show "Score: 88/100".
Corresponding API call:
https://api.clearlydefined.io/definitions/npm/npmjs/-/jquery/3.4.1
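The URL structure above can be sketched as follows. This is an illustrative sketch, not the service's actual code: the helper names are invented, the coordinate order (type/provider/namespace/name/revision) is inferred from the example URLs, and `scores.effective` is assumed to be the field behind the number shown in the ClearlyDefined UI.

```javascript
// Hypothetical helpers; names and the scores.effective field are assumptions.
function definitionUrl({ type, provider, namespace, name, revision }) {
  // ClearlyDefined uses "-" as a placeholder for an empty namespace.
  const ns = namespace || '-'
  return `https://api.clearlydefined.io/definitions/${type}/${provider}/${ns}/${name}/${revision}`
}

function formatScore(definition) {
  return `${definition.scores.effective}/100`
}

const coords = { type: 'npm', provider: 'npmjs', name: 'jquery', revision: '3.4.1' }
console.log(definitionUrl(coords))
// prints "https://api.clearlydefined.io/definitions/npm/npmjs/-/jquery/3.4.1"
console.log(formatScore({ scores: { effective: 88 } })) // prints "88/100"
```

With the "Score" label prepended, the second value corresponds to the "Score: 88/100" badge text described above.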
Data
The public API is documented via Swagger UI and does not require an API key:
https://api.clearlydefined.io/api-docs/
Motivation
The badge will promote the ClearlyDefined initiative and help bring more clarity around licenses in the open source community. You can find more details at https://docs.clearlydefined.io/.