
Promote KubeVirt to a CNCF incubating project #96

Closed
49 tasks
mazzystr opened this issue Apr 20, 2021 · 12 comments · Fixed by #111
Labels
kind/enhancement triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@mazzystr
Contributor

mazzystr commented Apr 20, 2021

/kind enhancement

Per Josh Berkus of Red Hat's OSPO (Open Source Program Office), it is time to begin the process of getting the KubeVirt project promoted to an incubating project under the CNCF.

CNCF Technical Oversight Committee (TOC) Due Diligence doc is here

Where to start

  • make sure you're clear on what the TOC Principles, the project proposal process, the graduation criteria, and the desired cloud native properties are. The project sponsor (a member of the TOC) should have assisted in crafting the proposal to explain why it's a good fit for the CNCF. If anything's unclear to you, reach out to the project sponsor or, failing that, the TOC mailing list for advice.
  • make sure you've read, in detail, the relevant project proposal. This will usually be in the form of an open pull request. Consider holding off on commenting on the PR until you've completed the next three steps.
  • take a look at some previous submissions (both successful and unsuccessful) to help calibrate your expectations.
  • Verify that all of the basic project proposal requirements have been provided.
  • do as much reading up as you need to (and consult with experts in the specific field) in order to familiarize yourself with the technology landscape in the immediate vicinity of the project (and don't only use the proposal and that project's documentation as a guide in this regard).
  • at this point you should have a very clear technical idea of what exactly the project actually does and does not do, roughly how it compares with and differs from similar projects in its technology area, and/or a set of unanswered questions in those regards.
  • go through the graduation criteria and for each item, decide for yourself whether or not you have enough info to make a strong, informed call on that item.
    ** If so, write it down, with motivation.
    ** If not, jot down what information you feel you're missing.
    ** Also take note of what unanswered questions the community might have posted in the PR review that you consider to be critically important.

Some example questions that will ideally need clear answers

Most of these should be covered in the project proposal document. The due diligence exercise involves validating any claims made there, verifying adequate coverage of the topics, and possibly summarizing the detail where necessary.
Technical

  • An architectural, design and feature overview should be available. (example, example)
  • What are the primary target cloud-native use cases?
  • Which of those:
    ** Can be accomplished now.
    ** Can be accomplished with reasonable additional effort (and are ideally already on the project roadmap).
    ** Are in-scope but beyond the current roadmap.
    ** Are out of scope.
  • What are the current performance, scalability and resource consumption bounds of the software? Have these been explicitly tested? Are they appropriate given the intended usage (e.g. agent-per-node or agent-per-container need to be lightweight, etc)?
  • What exactly are the failure modes? Are they well understood? Have they been tested? Do they form part of continuous integration testing? Are they appropriate given the intended usage (e.g. cluster-wide shared services need to fail gracefully etc)?
  • What trade-offs have been made regarding performance, scalability, complexity, reliability, security etc? Are these trade-offs explicit or implicit? Why? Are they appropriate given the intended usage? Are they user-tunable?
  • What are the most important holes? No High-Availability? No flow control? Inadequate integration points?
  • Code quality. Does it look good, bad, or mediocre to you (based on a spot review)? How thorough are the code reviews? Substance over form. Are there explicit coding guidelines for the project?
  • Dependencies. What external dependencies exist, do they seem justified?
  • What is the release model? Versioning scheme? Evidence of stability or otherwise of past stable released versions?
  • What is the CI/CD status? Do explicit code coverage metrics exist? If not, what is the subjective adequacy of automated testing? Do different levels of tests exist (e.g. unit, integration, interface, end-to-end), or is there only partial coverage in this regard? Why?
  • What licensing restrictions apply? Again, CNCF staff will handle the full legal due diligence.
  • What are the recommended operational models? Specifically, how is it operated in a cloud-native environment, such as on Kubernetes?
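For the CI/CD and code-coverage questions above, a quick spot check is possible from the test output itself. KubeVirt is a Go project, and `go test -cover ./...` prints a per-package coverage summary; the sketch below averages those percentages. The package names and figures are hypothetical sample data standing in for a real CI log, not actual KubeVirt results:

```shell
# Average the per-package coverage figures from `go test -cover ./...` output.
# The here-doc lines are hypothetical sample output in the standard Go test
# summary format; in practice you would pipe a real test run into awk instead.
awk '/coverage:/ {
  for (i = 1; i <= NF; i++)
    if ($i == "coverage:") { v = $(i+1); sub("%", "", v); sum += v; n++ }
} END { printf "average coverage: %.1f%%\n", sum / n }' <<'EOF'
ok  kubevirt.io/sample/pkg/a  0.1s  coverage: 60.0% of statements
ok  kubevirt.io/sample/pkg/b  0.2s  coverage: 80.0% of statements
EOF
# → average coverage: 70.0%
```

A single averaged number is only a rough signal; per-package breakdowns matter more when looking for untested areas.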

Project

The key high-level questions that the voting TOC members will be looking to have answered are (from the graduation criteria):

  • Do we believe this is a growing, thriving project with committed contributors?
  • Is it aligned with CNCF's values and mission?
  • Do we believe it could eventually meet the graduation criteria?
  • Should it start at the sandbox level or incubation level?

Some details that might inform the above include:

  • Does the project have a sound, documented process for source control, issue tracking, release management etc.
  • Does it have a documented process for adding committers?
  • Does it have a documented governance model of any kind?
  • Does it have committers from multiple organizations?
  • Does it have a code of conduct?
  • Does it have a license? Which one? Does it have a CLA or DCO? Are the licenses of its dependencies compatible with their usage and CNCF policies? CNCF staff will handle the full legal due diligence.
  • What is the general quality of informal communication around the project (slack, github issues, PR reviews, technical blog posts, etc)?
  • How much time does the core team commit to the project?
  • How big is the team? Who funds them? Why? How much? For how long?
  • Who are the clear leaders? Are there any areas lacking clear leadership? Testing? Release? Documentation? These roles sometimes go unfilled.
  • What is the rate of ongoing contributions to the project (typically in the form of merged commits)?
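One hedged way to measure that contribution rate, assuming a local clone of the repository, is to bucket merged commits by month. The aggregation step is demonstrated on sample dates so it can run without a clone; in practice you would pipe `git log --merges --since="1 year ago" --date=format:'%Y-%m' --pretty='%ad'` into it instead:

```shell
# Count items per month from a stream of YYYY-MM dates (one per line), e.g.
# the output of:
#   git log --merges --since="1 year ago" --date=format:'%Y-%m' --pretty='%ad'
# (--merges counts merge commits; a squash-merge workflow would need plain
# commits on the default branch instead.)
count_by_month() {
  sort | uniq -c | awk '{print $2, $1}' | sort
}

# Demonstration on sample dates (hypothetical data, not real KubeVirt history):
printf '2021-04\n2021-05\n2021-04\n' | count_by_month
# → 2021-04 2
# → 2021-05 1
```

Tools like devstats or CNCF's own dashboards give a richer picture, but this kind of quick tally is enough to sanity-check claims in the proposal.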

Users / See #110

  • Who uses the project? Get a few in-depth references from 2-4 of them who actually know and understand it.
  • What do real users consider to be its strengths and weaknesses? Any concrete examples of these?
  • Perception vs Reality: Is there lots of buzz, but the software is flaky/untested/unused? Does it have a bad reputation for some flaw that has already been addressed?

Contributor experience

Besides the core team, how active is the surrounding community? Bug reports? Assistance to newcomers? Blog posts etc.

  • Is it easy to contribute to the project as an external contributor? If not, what are the main obstacles?
  • Are there any especially difficult personalities to deal with? How is this done? Is it a problem?
  • Getting interviews with 2-3 external contributors is advisable for the DD process, from both the community and technical perspectives. It can help to identify technical depth in areas like extensibility, API design, and general code architecture.
  • For more in-depth review of the contributor experience, consulting with sig-contributor-strategy is always a good idea.

Context

  • What is the origin and history of the project?
  • Where does it fit in the market and technical ecosystem?
  • Is it growing or shrinking in that space? Is that space growing or shrinking?
  • How necessary is it? What do people who don't use this project do? Why exactly is that not adequate, and in what situations?
  • Clearly compare and contrast with peers in this space. A summary matrix often helps. Beware of comparisons that are too superficial to be useful, or might have been manipulated so as to favor some projects over others. Most balanced comparisons will include both strengths and weaknesses, require significant detailed research, and usually there is no hands-down winner. Be suspicious if there appears to be one.

Other advice

  • Bring in other people (e.g. from your company) who might be more familiar with a particular area than you are, to assist where needed. Even if you know the area, additional perspectives from experts are usually valuable.
  • Conduct as much of the investigation in public as is practical. For example, favor explicit comments on the submission PR over private emails, phone calls etc. By all means conduct whatever communication might be necessary to do a thorough job, but always try to summarize these discussions in the PR so that others can follow along.
  • Explicitly disclose any vested interest or potential conflict of interest that you, the project sponsor, the project champion, or any of the reviewers have in the project. If this creates any significant concerns regarding impartiality, it's usually best for those parties to recuse themselves from the submission and its evaluation.
  • Fact-check where necessary. If an answer you get to a question doesn't smell right, check the underlying data, or get a second/third... opinion.
@mazzystr mazzystr added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Apr 20, 2021
@jberkus
Contributor

jberkus commented Apr 21, 2021

So our original DD is here: https://github.com/cncf/toc/blob/main/proposals/sandbox/kubevirt.adoc

We start with that and update it for the current status of KubeVirt.

@mazzystr
Contributor Author

Sent introduction/petition email to Nutanix, Emporia State U, Nearby Computing, Canadian Centre for Cyber Security

@mazzystr
Contributor Author

mazzystr commented Jul 8, 2021

https://docs.google.com/presentation/d/1eIyMHqr_ygktQwGz-U9MXEwmIQ8oa0dBRMWdPrEXWH0/edit#slide=id.ge3e23850b8_0_14

Adopters ... Reach out via mailing list, twitter, set heading in #virtualization. Check SUSE presentation from KubeVirt Summit for contact. Provide Diane with SUSE contact (Vasiliy Ulyanov)

  • sahibinden.com, appneta.com, slb.com, ciena.com, stackpath.com, packet.com, arm, Ateme
  • Add Platform9. Ask them to ask their users to add themselves as adopters
  • Kubermatic .... Ask them to ask their users to add themselves as adopters
  • SAP ... Ask them to ask their users to add themselves as adopters

Monthly / Core maintainer meeting

  • Review stats to find new maintainers

Weekly Meeting

  • Encourage users to participate in meeting. Post notice to Slack

Reach out to Chinese mailing list

  • Provide CNCF infrastructure

@mazzystr
Contributor Author

Due Diligence items now documented in the following doc per Alena ....

@mazzystr mazzystr reopened this Oct 26, 2021
@mazzystr
Contributor Author

This is still in progress. Reopening the issue.

More progress on Alena's document ...

  • Filled in community health

  • Emailed Vasiliy / SUSE to get their opinion on community pros/cons

  • Conversation with Howard Zhang of ARM about an interview and website advertisement

  • In the process of contacting Jeff Applewhite / STACKPATH regarding their opinion on community pros/cons and website advertisement

@mazzystr
Contributor Author

Added note from Vasiliy / SUSE

@mazzystr
Contributor Author

I spoke with someone I know at STACKPATH. They are reviewing their internal procedures for logo advertisement with Open Source projects as well as community participation. We are asked to stand by and wait for their review.

@mazzystr mazzystr changed the title Process to promote KubeVirt to a CNCF incubating project Promote KubeVirt to a CNCF incubating project Nov 24, 2021
@kubevirt-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2022
@mazzystr
Contributor Author

mazzystr commented Feb 22, 2022 via email

@kubevirt-bot kubevirt-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2022
@mazzystr
Contributor Author

@jberkus This issue can be closed complete, correct?

@jberkus
Contributor

jberkus commented Feb 25, 2022

Oh, yes.

/close

@kubevirt-bot

@jberkus: Closing this issue.

In response to this:

Oh, yes.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
