
oc-cluster destroy failed for reason "Permission denied". #77

Open
warmchang opened this issue Jun 30, 2017 · 2 comments


@warmchang

Running oc-cluster destroy fails with "Permission denied":

[root@appab-myproject ~]# oc-cluster list
# Using client for origin v1.5.1
Profiles:
- example
- workshop
- workshop2
[root@appab-myproject ~]# oc-cluster status
# Using client for origin v1.5.1
no cluster running
[root@appab-myproject ~]# oc-cluster destroy workshop2
# Using client for origin v1.5.1
Are you sure you want to destroy cluster with profile <workshop2> (y/n)? y
Removing profile workshop2
Removing /root/.oc/profiles/workshop2

/usr/bin/rm: cannot remove '/profiles/workshop2': Permission denied
Removing .kubeconfig profiles
error: cannot delete cluster workshop2, not in /root/.kube/config
error: cannot delete context workshop2, not in /root/.kube/config
Removing 0 images built with this cluster
[root@appab-myproject ~]#
[root@appab-myproject ~]# oc-cluster list
# Using client for origin v1.5.1
Profiles:
- example
- workshop
- workshop2
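Notably, the failing rm targets /profiles/workshop2 even though the script printed /root/.oc/profiles/workshop2 one line earlier, which looks like a variable expanding to an empty string somewhere in the script. A minimal sketch of that suspected failure mode, plus a defensive guard; the variable names here are assumptions, not the actual oc-cluster code:

```shell
#!/bin/sh
# Simulate the suspected bug: if the base-directory variable is empty,
# the rm target collapses to an absolute path directly under /.
OC_PROFILES_BASE=""                    # assumed name; simulate empty/unset
PROFILE="workshop2"
TARGET="${OC_PROFILES_BASE}/profiles/${PROFILE}"
echo "rm target: ${TARGET}"            # -> /profiles/workshop2, as in the log

# A guard like this would refuse to delete anything outside $HOME:
case "${TARGET}" in
  "${HOME}"/*) rm -rf "${TARGET}" ;;
  *) echo "refusing to remove suspicious path: ${TARGET}" >&2 ;;
esac
```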

The docker version & OS version:

[root@appab-myproject ~]# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-28.git1398f24.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      1398f24/1.12.6
 Built:           Fri May 26 17:28:18 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-28.git1398f24.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      1398f24/1.12.6
 Built:           Fri May 26 17:28:18 2017
 OS/Arch:         linux/amd64
[root@appab-myproject ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[root@appab-myproject ~]#

Any advice on how to resolve this? Thanks.

@warmchang

After checking the script, I found an inconsistency between /root/.oc/profiles/ and /root/.kube/config:

[root@appab-myproject ~]# ll /root/.oc/profiles/
total 0
drwxr-xr-x. 7 root root 124 Jun 30 12:40 example
drwxr-xr-x. 7 root root 124 Jun 30 13:28 workshop
drwxr-xr-x. 7 root root 124 Jun 30 13:50 workshop2

/root/.kube/config has no entry for workshop2.

This was caused by a forced shutdown before "oc-cluster up workshop2" finished executing.
Deleting the workshop2 folder and rerunning oc-cluster up puts everything back in order.
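This kind of mismatch can be detected mechanically. A minimal sketch, assuming the default locations from the listings above; grepping the raw kubeconfig for context names is a simplification of parsing it properly (e.g. with oc config get-contexts):

```shell
#!/bin/sh
# Report profile directories that have no matching context in kubeconfig.
find_orphan_profiles() {
  profiles_dir="$1"
  kubeconfig="$2"
  for dir in "${profiles_dir}"/*/; do
    [ -d "$dir" ] || continue
    profile=$(basename "$dir")
    # Context entries appear in kubeconfig as "name: <profile>".
    if ! grep -q "name: ${profile}\$" "${kubeconfig}" 2>/dev/null; then
      echo "orphan profile (no kubeconfig context): ${profile}"
    fi
  done
}

# Example invocation against the default locations:
find_orphan_profiles "$HOME/.oc/profiles" "$HOME/.kube/config"
```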

@warmchang

Rerunning oc-cluster up in the state described above shows the following error:

[root@appab-myproject ~]# oc-cluster up workshop2
# Using client for origin v1.5.1
[INFO] Running a previously created cluster
oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /root/.oc/profiles/workshop2/data --host-config-dir /root/.oc/profiles/workshop2/config --host-pv-dir /root/.oc/profiles/workshop2/pv --use-existing-config -e TZ=CST
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
   WARNING: Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients.
-- Checking type of volume mount ...
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using 192.168.10.130 as the server IP
-- Starting OpenShift container ...
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ...
   OpenShift server started.
   The server is accessible via web console at:
       https://127.0.0.1:8443

   To login as administrator:
       oc login -u system:admin

-- Permissions on profile dir fixed
error: no context exists with the name: "workshop2".
[root@appab-myproject ~]# oc-cluster list

Could the script fix the error above automatically?
For example, by adjusting the order of the two commands: run "${OC_BINARY} adm config use-context xxx" first and check the xxx context:

  • if context xxx is OK, then run "oc cluster up ...";
  • if context xxx is wrong, there is no need to run "oc cluster up ..."; just print the error info and quit.


Alternatively:
If context xxx is wrong but the folder /root/.oc/profiles/xxx exists, we could back it up (to /root/.oc/profiles/xxx_backup) and recreate xxx, just as when making a new profile.
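Both options could be combined roughly like this; a sketch only, with assumed paths and function names, not the actual oc-cluster code:

```shell
#!/bin/sh
# Proposed recovery flow: verify the context before reusing a profile;
# if the context is missing but a stale profile folder exists, move the
# folder aside and treat the profile as brand new. `oc` is assumed to be
# on PATH.
recover_profile() {
  profile="$1"
  profiles_dir="$2"
  if oc config use-context "${profile}" >/dev/null 2>&1; then
    echo "context ${profile} is OK, safe to run: oc cluster up ..."
  elif [ -d "${profiles_dir}/${profile}" ]; then
    mv "${profiles_dir}/${profile}" "${profiles_dir}/${profile}_backup"
    echo "stale profile backed up to ${profiles_dir}/${profile}_backup"
  else
    echo "no context and no profile directory for ${profile}" >&2
    return 1
  fi
}

# Example invocation:
# recover_profile workshop2 "$HOME/.oc/profiles"
```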

What do you think? I'd like to hear feedback from the contributors. Thanks.
