New: Ensure code coverage is met #651
Conversation
Force-pushed from a129335 to 2a4dccc
- This will ensure the desired code coverage is met for the project
- The coverage threshold is set to 70 to start off with; this setting is in the Makefile
- The coveragethreshold.json is an override for packages which have different coverage needs from the global coverage threshold
- The code coverage tool uses the standard go tool

Signed-off-by: naveensrinivasan <[email protected]>
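The setup described in the commit message (a global threshold in the Makefile, with per-package overrides in coveragethreshold.json) might be wired up roughly as below. This is a hedged sketch only: the JSON layout, the package path, and the override value are illustrative assumptions, not taken from the actual PR.

```shell
# Hypothetical sketch: start from a global threshold, then let a
# per-package override file (coveragethreshold.json) raise or lower it.
# The layout {"pkg/path": N} is an assumption for illustration.
global_threshold=70
overrides='{"pkg/hard-to-test": 50}'

pkg="pkg/hard-to-test"
# Extract the override for $pkg, if any (naive parse for the sketch;
# a real implementation would use a proper JSON parser such as jq)
override=$(printf '%s' "$overrides" | sed -n "s|.*\"$pkg\": *\([0-9]*\).*|\1|p")
threshold=${override:-$global_threshold}
echo "threshold for ${pkg}: ${threshold}%"
```

With no entry for a package in the override file, the global 70% would apply unchanged.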
Force-pushed from 2a4dccc to cff9cd8
Force-pushed from e5e6330 to 723bdba
Signed-off-by: naveensrinivasan <[email protected]>
Force-pushed from 723bdba to 5c9e9d5
I like the idea of improving test coverage, but I have some high-level questions:
- A coverage percentage is not very actionable for developers. Would it not be better to provide more actionable information in a PR for both reviewers and PR author? For example, annotation / information about functions added in a PR that are not hit by unit tests, that would be really useful to start with.
- The config file is going to go out-of-sync as soon as we add files, etc.
- Should this feature live in an external GHA rather than in this repo?
- Making this blocking on PRs is probably just going to lead to the PR author reducing the threshold, since it's hard to tell which lines / functions are not tested.
@@ -0,0 +1,34 @@
# Go Coverage tool
why is the folder called "hack"?
A coverage percentage is not very actionable for developers. Would it not be better to provide more actionable information in a PR for both reviewers and PR author? For example, annotation / information about functions added in a PR that are not hit by unit tests, that would be really useful to start with.
That is much harder, and existing tools like VS Code and the Go compiler can already do that.
The config file is going to go out-of-sync as soon as we add files, etc.
Yes, if we don't meet 70% coverage.
Should this feature live in an external GHA rather than in this repo?
As a dev, I would like to run the checks locally, and this gives that option.
Making this blocking on PRs is probably just going to lead to the PR author reducing the threshold, since it's hard to tell which lines / functions are not tested.
The goal is to check the code coverage without using external dependencies. Line-level coverage is already provided by VS Code and the Go compiler; duplicating that isn't the design of this tool, which only checks that the coverage threshold is met. High-quality tests are critical for having confidence.
Right now, we are flying blind without knowing what the coverage of the project is. Any time new code gets added or updated, neither the project nor the PR reviewer is aware of its test coverage.
What would be your suggestion to address this concern? Something like codecov would also suffice. Do you think we should enable an online tool?
As a dev, I would like to run the checks locally, and this gives that option
Any reason why running go test locally does not suffice?
What would be your suggestion to address this concern? Something like codecov would also suffice. Do you think we should enable an online tool?
If codecov does it and they don't require write permissions, that sounds like an easy win, no?
If codecov does it and they don't require write permissions, that sounds like an easy win, no?
@bobcallaway Do you have thoughts on codecov or similar other online tools?
One thing we can do is open an HTML report of code coverage per file using the native Go tools: generate the coverage profile with go test, then run go tool cover. We can make this a non-gating CI job that allows reviewers (who ultimately should decide what can/can't be testable) to see whether newly added code was tested.
The CI job can also add a comment documenting the coverage % in each pkg. If all of this can be done without new code to maintain, that would be great.
e.g.
go test -v -coverprofile cover.out ./YOUR_CODE_FOLDER/...
go tool cover -html=cover.out -o cover.html
open cover.html
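The non-gating threshold check discussed above could be sketched on top of these commands by parsing the "total:" line that go tool cover -func prints. This is a minimal sketch: the sample -func output and the 70% threshold below are illustrative assumptions standing in for a real cover.out, not the project's actual configuration.

```shell
# Hypothetical coverage gate: extract the total coverage percentage from
# `go tool cover -func cover.out`-style output and compare it to a threshold.
# The sample text below stands in for real `go tool cover -func` output.
sample_func_output='github.com/org/repo/pkg/a.go:10:	Foo	80.0%
github.com/org/repo/pkg/b.go:20:	Bar	60.0%
total:	(statements)	70.0%'

threshold=70
# The total line's last field is e.g. "70.0%"; strip the "%" sign
pct=$(printf '%s\n' "$sample_func_output" | awk '/^total:/ {sub(/%/, "", $NF); print $NF}')
# awk handles the floating-point comparison
if awk -v p="$pct" -v t="$threshold" 'BEGIN { exit !(p >= t) }'; then
  echo "coverage ${pct}% meets threshold ${threshold}%"
else
  echo "coverage ${pct}% below threshold ${threshold}%"
fi
```

In a real CI job, the same check would run after go test -coverprofile, and a failure could be reported as a comment rather than a blocking status.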
A high code coverage percentage does not guarantee high quality in the test coverage
More important than the percentage of lines covered is human judgment over the actual lines of code (and behaviors) that aren’t being covered
Qualitative feedback is nice. Imposing a threshold does not help, IMO. I've seen this in Scorecard: when the coverage goes down in a PR, we just let the PR go through because that's the best we can do. Over time we don't even look at it because it's just noise and not actionable.
Having a general view of code coverage is great. Having it in a (blocking) PR as a percentage number provides no benefits and is not actionable.
OK cool! I understand if you have strong opinions on this 👍. I will close this PR.
I'm not sure I understand your argument @laurentsimon - we just let the PR go through because that's the best we can do
- why not ask the contributor to add unit test coverage for what they're proposing, so as not to erode the baseline percentage?
- I think having a floor is ok. Maybe somewhere around 50% minimum (though the 60% Google number is also fine)? But not necessarily any targets.
- We should strive to ask for tests on PRs when appropriate.
I can go either way on making this a blocking pre-submit; I agree there are merits to both approaches.
I'm not sure I understand your argument @laurentsimon -
we just let the PR go through because that's the best we can do
- why not ask the contributor to add unit test coverage for what they're proposing, so as not to erode the baseline percentage?
I think the biggest problem would be a change that doesn't necessarily merit unit tests, or where unit tests are especially hard to write; maintaining an arbitrary coverage baseline could make that particular PR overly expensive. We also have e2e tests that might cover certain scenarios better than unit tests.
A better rule might be that we need to hit a baseline coverage % before we do a release?
A better rule might be that we need to hit a baseline coverage % before we do a release?
This is a good thing to enforce on "pre-release" -- if we do this it would be good to have a report generated per PR because accumulating tests at the end is a lot harder than per-change.
why not ask the contributor to add unit test coverage for what they're proposing to not erode the baseline percentage?
I think the problem was that a per-file number doesn't give the reviewer or author enough actionable guidance. I still like the idea of a generated HTML report to help the reviewer evaluate test coverage of any added lines.