Cluster Up/Down document is missing #21553
@mfojtik @derekwaynecarr This is the second BZ/issue I've seen today about this doc missing; people are expecting it to be there. Should we put in a tombstone or something?
...after digging, I found it was removed on 14 Nov with this commit: f1206f2#diff-e3e2a9bfd88566b05001b02a3f51d286. The intention was to favor …, and so the question is whether it is, or will be in the future, possible to start a local OKD cluster without virtualization?
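For context, the removed document described the `oc cluster up` workflow from OKD 3.x, which ran an all-in-one cluster in containers with no VM. A rough sketch of that workflow (flag names are from the 3.x `oc` client; treat this as illustrative of the old process, not current guidance):

```shell
# Start a local all-in-one OpenShift 3.x cluster in containers
# (requires Docker and a 3.x oc client; no virtualization needed).
oc cluster up --base-dir="$HOME/okd" --public-hostname=127.0.0.1

# Log in as the default developer user created by cluster up.
oc login -u developer -p developer https://127.0.0.1:8443

# Tear the local cluster down again.
oc cluster down
```

This is exactly the workflow that has no direct 4.x equivalent, which is why the doc's removal without a pointer is confusing.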
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
/remove-lifecycle stale
It would be nice if OpenShift / OKD 4.x supported local development without requiring virtualization, AWS deployment, etc., like it was possible with OKD 3.x.
I'm just starting to explore OpenShift and I've run into a bunch of confusion and inconsistent documentation. How do I start a local cluster? Do I use …
The best documentation for OKD is often found on archive.org - thanks.
If this is what Red Hat expects people to do, then they should expect an exodus of users.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
/remove-lifecycle stale
I'm working through the O'Reilly book, DevOps with OpenShift. It guides readers through the oc cluster up process and references the subject document (cluster_up_down.md). If you don't want people to use that process any more, that's fine, but please provide SOME document called cluster_up_down.md so that people can learn about the new process when they come here looking for the old one. As things stand, one must do a bit of research just to find this issue, and eventually this issue will go away. Thanks.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting If this issue is safe to close now please do so with /lifecycle stale |
My research update: for local usage of OCP 4.x, the only option is CRC, which embeds a VM: https://code-ready.github.io/crc/ (on Ubuntu it requires customization: crc-org/crc#917 (comment)). It requires spawning a VM and needs an additional ~10 GB of RAM just to run, plus 27 GB of storage, but it is full OpenShift with operators, dashboard, etc.

If you need something smaller, try minikube with the "none" driver (i.e. no VM) or microk8s, but these differ from OpenShift a lot. If you need a stack similar to OCP (Kubernetes with the cri-o, podman, buildah stack), run minikube with cri-o as the container engine; however, it is challenging to run (and minikube warns that this is an untested configuration permutation). I was able to run it with tons of workarounds: https://github.com/matihost/learning/blob/master/k8s/minikube/start-minikube.sh - with …
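The options above can be sketched as follows (command names are from the crc and minikube CLIs; exact flags vary across versions, so treat this as a starting point rather than a verified recipe):

```shell
# Option 1: CRC - full OpenShift 4.x, but inside a VM.
crc setup            # one-time host configuration
crc start            # asks for a Red Hat pull secret on first run

# Option 2: minikube with cri-o - closer to the OCP runtime
# stack, but plain Kubernetes, not OpenShift.
minikube start --container-runtime=cri-o

# Option 3: minikube with the "none" driver - no VM at all,
# Linux only, runs components directly on the host as root.
sudo minikube start --driver=none
```

Only option 1 gives you actual OpenShift; options 2 and 3 trade fidelity for a smaller footprint, as described above.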
You can find the …
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[A cluster up/down document that was available at https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md has gone missing]
Version
[provide output of the openshift version or oc version command]
Steps To Reproduce
Current Result
The document is missing.
Expected Result
The document should be available, or should at least point to its new location.