VM Tooling Analysis and general housekeeping #723

Closed · 5 of 10 tasks · evertonfraga opened this issue Apr 17, 2020 · 15 comments
@evertonfraga (Contributor) commented Apr 17, 2020

GitHub checks are a great way to keep our code quality up and to add safeguards against common errors and security issues. The checks should be reliable, and that requires some extra work.

Tests

I'll skip the reasoning about the importance of tests 😛. But we have some improvements to make:

  • Drop node 8 from tests, as it reached EOL in December 2019.
  • Include node 14, which will reach the Active state in May 2020.
  • VM tests are quite heavy, but even so we should aim to run them on all supported node versions (see the matrix sketch below).
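A minimal sketch of what the resulting matrix could look like (assuming a GitHub Actions matrix strategy; the exact version list and job names are illustrative):

on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # node 8 dropped (EOL December 2019), node 14 added
        node-version: [10, 12, 14]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci && npm test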

Code coverage

The tool fits our needs; its flags feature is what makes it viable in our monorepo setup. But we need to master it, meaning we should prevent false negatives like this.
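For reference, a rough sketch of how per-package flags can be declared, assuming the tool is Codecov (the thread doesn't name it; flag and path names are illustrative):

# codecov.yml
flags:
  vm:
    paths:
      - packages/vm/
  blockchain:
    paths:
      - packages/blockchain/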

Code checks

We currently use a combination of Prettier + TSLint to enforce code formatting and to catch maintainability and functionality errors. The shared rules are defined in the ethereumjs-config-tslint package.

  • Migration from TSLint to ESLint. @ryanio has started taking care of this.
  • Remove LGTM for Python analysis.

@ryanio and I share the perception that we have a ton of checks; still, we should try our best to keep things simple and organized.

Documentation

We should have more descriptive documentation. @ryanio has pointed to TypeDoc Library mode as an alternative for producing better docs.

  • Improve the documentation format
  • Improve descriptions and examples for method executions
  • Implement a check preventing outdated documentation

--

This issue will be updated.

@evertonfraga (Contributor, Author) commented

To change the following behavior, we can set fail-fast to false.

On GitHub Actions, if one job fails, GitHub Actions halts the remaining jobs in the same workflow to save resources. So if linting is a separate job in the same workflow and it fails, tests will fail too.

https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategyfail-fast
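Sketch of where the flag lives, per the workflow syntax docs linked above (matrix values are illustrative):

jobs:
  test:
    strategy:
      # keep the remaining matrix jobs running even if one fails
      fail-fast: false
      matrix:
        node-version: [10, 12, 14]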

@holgerd77 (Member) commented

  • VM tests are quite heavy, but even so we should aim to run them on all supported node versions.

I am a bit sceptical; at least we shouldn't overshoot here on the CI usage side. I rather had the impression that we should eventually scale down a bit again after the latest CI runs, to not overstress the infrastructure.

Maybe running on a weekly basis might be a good compromise here if we decide to do so.

@evertonfraga (Contributor, Author) commented

@holgerd77 I am not a fan of having too many checks either, but we also can't cover fewer node versions for the VM package than for its siblings.

I think the weekly run is a good balance!

I will build up a list of optimization opportunities for our CI, because there are plenty of them around. With the upcoming ethashjs addition, we must have a plan to keep the CI checks at a manageable level.

@alcuadrado (Member) commented

I think another interesting compromise is to only run everything on PRs that prepare releases.

There's no clear way to detect that from within GH Actions, but checking if package.jsons are being modified is a good over-approximation.
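A rough sketch of that over-approximation as a workflow trigger (paths are illustrative):

on:
  pull_request:
    # release PRs bump versions, so package.json changes approximate them
    paths:
      - 'package.json'
      - 'packages/*/package.json'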

@evertonfraga (Contributor, Author) commented Jun 2, 2020

@alcuadrado that's a good route. Adding 2 wei to this: we can instead run all jobs for created tags. To play it even safer, we can create rc tags that do not need to land on npm. So if there's any churn, we don't burn a production tag.
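Sketch of a tag-triggered run (the tag pattern is illustrative):

on:
  push:
    tags:
      # any tag triggers the full matrix; rc tags (e.g. 2.0.0-rc) never land on npm
      - '*'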

@holgerd77 (Member) commented

@alcuadrado I would also regard this as a good automated compromise here. @evertonfraga running on tags would happen later than the package.json-modification run - and therefore eventually already too late, after a release has been published - or am I missing something here?

So with the current level of understanding, I would rather tend towards the package.json-change runs?

@evertonfraga (Contributor, Author) commented

@holgerd77

or am I missing something here?

That's what I explained in my previous comment. Git tags (e.g. 2.0.0-rc) that do not need to be published to npm can be used, so we don't burn production tags.

@holgerd77 (Member) commented

@evertonfraga ah, ok, half-read that, but then we would be forced to do RC releases every time on all packages just for the sake of a CI improvement? 🤔 Wouldn't this package.json solution be a lot easier to handle and a lot more automated?

@evertonfraga (Contributor, Author) commented

I have another contribution to make here, after thinking more about this yesterday. I just did the analysis below to validate it.

Most of the CI runs are made within PRs. And there are on average 6 commits per PR (not counting commits that get rebased and force-pushed).

This is the current strategy:

PR commits:          4 node versions per package
PR merged on master: 4 node versions per package

If we change it to:

PR commits:          1 node version per package
PR merged on master: 4 node versions per package

... we would have far fewer CI runs on average, and if there's any incompatibility with any supported Node version, we'd know right after it's merged.
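One way to wire that up is with two hypothetical workflow files (version lists are illustrative):

# test-pr.yml: every PR push runs a single node version per package
on: pull_request
jobs:
  test:
    strategy:
      matrix:
        node-version: [12]

# test-master.yml: the full matrix runs once the PR lands on master
on:
  push:
    branches: [master]
jobs:
  test:
    strategy:
      matrix:
        node-version: [10, 12, 13, 14]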

In this real-life example, considering the last 10 PRs, we'd have 65% fewer CI runs.

[table/chart: CI runs for the last 10 PRs, current vs. proposed strategy]

@holgerd77 (Member) commented

@evertonfraga That's a great analysis, thanks! 👍 😄 You can't count all commits I guess, since only pushes count for CI runs, and it won't get pushed on every single commit I guess? Or is this taken into account? But even so, the overall calculation/reasoning seems to remain valid to some extent.

@evertonfraga (Contributor, Author) commented Jun 3, 2020

ah, ok, half-read that, but then we would be forced to do RC releases every time on all packages just for the sake of a CI improvement? 🤔 Wouldn't be this package.json solution be a lot easier to handle and a lot more automated?

Our CI setup does not filter tags by pattern, so it would take only a single tag to trigger all jobs :)

The package.json route can be a little tricky. If there's any code change after the version is bumped, the automation won't help us. So for that case I personally would rather have a manual step than an automated one that is not 100% reliable.

But I agree that git tags weren't ultimately made for that :)


You can't count all commits I guess, since only pushes count for CI runs and it won't get pushed on every single commit I guess?

Yes, you are right: I counted the commits the first time, not the CI runs. Updating it leads to a 25% decrease in the number of CI runs, which is still a significant result. Here's the updated table+chart:

[updated table and chart: CI runs per PR]

@evertonfraga (Contributor, Author) commented

Stats on files changed in the latest 20 active PRs:

[chart: files changed in the latest 20 active PRs, by package]

Nearly half of the PRs made changes to /packages/vm subpaths only. This chart will help me outline the comeback of cascade testing, with a simpler approach.
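A sketch of how that path scoping could look for the VM package (hypothetical workflow file; the cascade logic would build on top of this):

# vm.yml: run VM tests only when the VM package is touched
on:
  pull_request:
    paths:
      - 'packages/vm/**'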

@holgerd77 (Member) commented

Yes, I would support concentrating CI runs somewhat more on the VM, since otherwise it gets a bit hard to keep track of things. Also, since GitHub Actions is not yet as reliable as desired, there are relatively often cases of failing tests due to CI unreliability, and this increases with the number of jobs to be run.

@evertonfraga (Contributor, Author) commented

I think another interesting compromise is to only run everything on PRs that prepare releases.

There's no clear way to detect that from within GH Actions, but checking if package.jsons are being modified is a good over-approximation.

@alcuadrado if we keep creating release PRs with the same branch convention, this could work:

on:
  push:
    branches:
      - 'release/*'
  pull_request:
    # note: for pull_request this filters on the base branch of the PR
    branches:
      - 'release/*'

@holgerd77 (Member) commented

Outdated, will close.
