[PROPOSAL] Code quality analysis #307
Comments
+1 on any kind of static analysis tooling
I have a few questions related to that.
If your questions are about Codacy, then my arguments are below. But if you are talking about static analysis in general, there is a lot to describe, since there are different tools and different approaches. Personally, when I analyzed the options, two tools came out on top. Why Codacy?
In general, I am open to any other tools that are suggested; I am ready to dig deeper and do the integration.
Does it scan the code quality of the PR or of the whole repo? I am generally okay with adding it and seeing how it works, but I don't think it makes sense in the current state of this repo, as we first need to handle #65. I want to avoid contributors starting to fix quality alerts now, before we change the whole lib, so they don't waste their time on code that will change anyway. Besides, we could also replace the current linter with https://github.com/golangci/golangci-lint-action.
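For reference, a minimal sketch of what a workflow using golangci-lint-action could look like (the workflow name, Go version, and action/linter versions are assumptions, not this repo's actual configuration):

```yaml
# Hypothetical lint workflow using golangci-lint-action; versions are examples only.
name: Lint
on: [push, pull_request]
jobs:
  golangci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.20'
      - uses: golangci/golangci-lint-action@v3
        with:
          # pins the golangci-lint binary the action runs
          version: v1.54
```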
I am interested in helping with this issue :)
Thank you @tannerjones4075! Looking forward to your contributions :)
It seems that this issue should be broken into several issues:
I can start with the linter if that is the priority, and then move on to security scanning.
After defining what Code Quality Objectives we want, we can create sub-tasks / issues. Two ideas:
Version 3.7.0 is the version of the workflow action and v1.53.3 is the actual golangci-lint version; the latest linter version is therefore v1.54.2.
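To make that distinction concrete, a hedged sketch of the relevant step (tags taken from the comment above, not necessarily the repo's current config):

```yaml
# The tag after "@" pins the workflow action itself (3.7.0 above), while the
# `version` input pins the golangci-lint binary it downloads and runs.
- uses: golangci/golangci-lint-action@v3.7.0
  with:
    version: v1.54.2
```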
For SonarQube we could leverage SonarCloud. This would allow us to create a workflow and implement SonarQube through a GitHub Action. For test coverage I came across this. We could have a report generated locally and when a PR is created. govulncheck: what do we want the output to be? Do we want a report to be generated? That would require a token with write access.
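On the govulncheck question: run as a plain step, it only writes findings to the job log and fails the step when a vulnerability is reported, so no write-scoped token is needed unless we also publish a report. A minimal sketch (job name and Go version are assumptions):

```yaml
# Hypothetical govulncheck job; output goes to the job log, and a non-zero
# exit code fails the build when vulnerabilities are found.
govulncheck:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-go@v4
      with:
        go-version: '1.20'
    - run: go install golang.org/x/vuln/cmd/govulncheck@latest
    - run: govulncheck ./...
```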
It looks like SonarQube has been suggested and researched before. The conclusion was to update the current Go linter, which has been done.
Removed the govulncheck workflow, as it reports vulnerabilities in imported libs that we have no control over and that can only be fixed by updating the Go version. As we are a lib and don't build anything, users need to update their own Go version.
What/Why
What are you proposing?
Analyze overall code quality in the automated pipeline.
What problems are you trying to solve?
When opening a pull request, contributors want the proposed code checked automatically, so they can fix reported issues and add missing test coverage.
Are there any security considerations?
I haven't researched this in depth. If there are security concerns, let's discuss them.
Are there any breaking changes to the API
No, this is purely a convenience for developers.
Why should it be built? Any reason not to?
Describe the value that this feature will bring to the OpenSearch community, as well as what impact it has if it isn't built, or new risks if it is. Highlight opportunities for additional research.
What will it take to execute?
We should integrate a static code quality analyzer such as SonarQube.
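As a starting point, a SonarCloud scan workflow could look roughly like this (the action reference, secret names, and the assumption that a sonar-project.properties file exists should all be checked against SonarSource's current documentation):

```yaml
# Sketch of a SonarCloud analysis workflow; SONAR_TOKEN is assumed to be a repo secret.
name: SonarCloud
on: [push, pull_request]
jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0   # full history improves analysis of changed code
      - uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```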