
Changes needed to use upstream Che on che.openshift.io #13265

Closed
l0rd opened this issue Apr 30, 2019 · 8 comments
Labels
area/hosted-che kind/epic A long-lived, PM-driven feature request. Must include a checklist of items that must be completed. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@l0rd
Contributor

l0rd commented Apr 30, 2019

Motivations

  • Users should be able to deploy upstream Che as it is in any k8s/OpenShift cluster, without extra customizations
  • Maintaining 2 codebases (che and rh-che) is time-consuming from a dev/test point of view

The goal

Reuse the same codebase for upstream che and rh-che

How are we going to achieve that

We should consider making some changes to the following upstream areas (@ibuziuk please help, I am pretty sure I am forgetting something):


Note 1: This is not a Che 7 GA task and should not be included in current sprints. Let's first evaluate how much work is needed and then prioritize it.

Note 2: We should not take into consideration Che 6 legacy components (e.g. stacks) but only Che 7 ones

Note 3: To deploy upstream Che on openshift.io we will need changes on the openshift.io platform side as well, but this issue is not about those. This issue is about the changes on the Che side.

@l0rd l0rd changed the title Make changes to be able to use upstream Che for che.openshif.io Changes needed to use upstream Che on che.openshift.io Apr 30, 2019
@l0rd l0rd added team/osio kind/epic A long-lived, PM-driven feature request. Must include a checklist of items that must be completed. labels May 1, 2019
@ibuziuk
Member

ibuziuk commented May 2, 2019

In order to have a more productive discussion, please find a simplified schematic component diagram of che.openshift.io:

[image: simplified schematic component diagram of che.openshift.io]

Challenges:

@l0rd
Contributor Author

l0rd commented May 2, 2019

@ibuziuk good points on telemetry and stacks. I understand the rest as well, but this issue is about being able to unify rh-che and upstream Che. It's not about replacing fabric8-proxy or fabric8-auth with new Che capabilities.

In particular:

  • It's not about making Che multi-cluster ready: we should, if needed, do the necessary changes in upstream Che to make it compatible with fabric8-proxy
  • It's not about bypassing fabric8-tenant: we should, if needed, do the necessary changes in upstream Che to make it compatible with fabric8-auth
  • It's not about replacing fabric8-tenant provisioning features
  • It's not about replacing unleash toggle

These may be further steps, but for now the goal is really to unify upstream Che and rh-che. Does that make sense?

@ibuziuk
Member

ibuziuk commented May 2, 2019

@l0rd this does make sense. So, if I understand it correctly, the idea is that Che would be deployed on dsaas straight from the upstream repo (similar to how che-plugin-registry is deployed as-is to dsaas via https://github.com/eclipse/che-plugin-registry/tree/master/openshift).
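For reference, deploying a component straight from an upstream repo's OpenShift templates, the way che-plugin-registry does, looks roughly like the sketch below. This is a hedged illustration: the template filename and parameter names are assumptions, so check the repo's openshift/ directory for the actual values.

```shell
# Sketch: deploy a component to an OpenShift cluster directly from the
# upstream repo, using its bundled OpenShift template.
# Template path and parameter names below are illustrative assumptions.
git clone https://github.com/eclipse/che-plugin-registry.git
cd che-plugin-registry

# Process the template with environment-specific parameters and apply it.
oc process -f openshift/che-plugin-registry.yml \
  -p IMAGE="quay.io/eclipse/che-plugin-registry" \
  -p IMAGE_TAG="latest" \
  | oc apply -f -
```

The appeal of this model is that the deployed artifact is exactly the upstream one; only template parameters differ per environment.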

@amisevsk
Contributor

I'm not so sure about unifying rh-che (with its openshift.io-specific changes) with the general upstream. For example, making Che support fabric8-oso-proxy means adding code related to our specific use case (the proxy does not follow a standard API). Unless we can make a case that fabric8-proxy is the way to run Che multicluster, it should not be included.

Rh-che has been made to fit into the specific environment we have on openshift.io; while adding these changes to upstream Che may make development easier for those of us working on rh-che, it also adds a fair bit of code that would be entirely irrelevant in the general case.

I don't believe downstream-specific modifications belong in the main Che repo. Che should not privilege our specific deployment requirements over others.

@l0rd
Contributor Author

l0rd commented May 10, 2019

@amisevsk I agree with you. This is not about altering upstream Che to fit osio, but rather about:

  1. making osio as standard as possible
  2. improving upstream Che so that it works perfectly, as it is, in a production k8s environment

Even if this issue is about the second point, we need to work on the first point as well.

@ibuziuk
Member

ibuziuk commented Jun 27, 2019

Areas that would be easy to move:

Areas that would be problematic to move to upstream:

  • e2e registration flow (OSIO specific, but could be switched via configuration)
  • fabric8-multi-tenant-manager
    • usage of OSIO specific DTO like UserCheTenantData
    • Fabric8AuthServiceClient - GitHub Account linking
    • Fabric8WorkspaceEnvironmentProvider getting the SA token
    • GuessedSubject related to SA token usage
    • unleash toggle usage

[1] https://github.com/redhat-developer/rh-che/blob/master/plugins/fabric8-multi-tenant-manager/src/main/java/com/redhat/che/multitenant/tenantdata/UserCheTenantData.java
[2] https://github.com/redhat-developer/rh-che/blob/master/plugins/fabric8-multi-tenant-manager/src/main/java/com/redhat/che/multitenant/Fabric8AuthServiceClient.java
[3] https://github.com/redhat-developer/rh-che/blob/master/plugins/fabric8-multi-tenant-manager/src/main/java/com/redhat/che/multitenant/Fabric8WorkspaceEnvironmentProvider.java
[4] https://github.com/redhat-developer/rh-che/blob/master/plugins/fabric8-multi-tenant-manager/src/main/java/com/redhat/che/multitenant/GuessedSubject.java
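The "switched via configuration" idea mentioned for the registration flow could in principle apply to the multi-tenant pieces too: upstream ships a default implementation behind an interface, and a downstream build like rh-che registers its Fabric8-specific one under a configuration key. A minimal sketch of that pattern follows; all class and method names here are hypothetical illustrations, not actual Che APIs.

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a configuration-switched provider; not actual Che code.
// Upstream ships DefaultEnvironmentProvider; a downstream build (e.g. rh-che)
// would register its Fabric8-specific implementation under another key.
public class EnvironmentProviderSelector {

    public interface EnvironmentProvider {
        /** Resolve the namespace a user's workspaces should run in. */
        String namespaceFor(String userId);
    }

    /** Upstream default: one namespace per user, derived from the user id. */
    public static class DefaultEnvironmentProvider implements EnvironmentProvider {
        @Override
        public String namespaceFor(String userId) {
            return "che-" + userId;
        }
    }

    /**
     * Pick an implementation by configuration key, falling back to the
     * upstream default when no downstream provider is registered.
     */
    public static EnvironmentProvider select(
            Map<String, Supplier<EnvironmentProvider>> registry, String configuredKey) {
        return registry.getOrDefault(configuredKey, DefaultEnvironmentProvider::new).get();
    }
}
```

With this shape, the OSIO-specific classes listed above would stay in the rh-che repo and only the selection seam would live upstream.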

@l0rd l0rd added this to the 7.x milestone Jul 1, 2019
@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2021
@che-bot
Contributor

che-bot commented Feb 4, 2021

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@ibuziuk
Member

ibuziuk commented Feb 4, 2021

I believe we can close this issue as won't fix.

@ibuziuk ibuziuk closed this as completed Feb 4, 2021