CDK and Openshift #8

Closed
7 tasks done
helio-frota opened this issue Jun 27, 2016 · 8 comments

helio-frota commented Jun 27, 2016

  • Setup
  • Openshift Origin
  • Explore [overview] REST API
  • How much control did you have over the version of node you used?
  • What about ongoing development?
  • What is the workflow like?
  • What is required to see and test changes locally?

helio-frota commented Jun 27, 2016

Setup

Installation using RHEL 7


Based on (with minor changes): the Container Development Kit installation guide

Registration required: https://developers.redhat.com

Virtualization support must be enabled in the BIOS
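
One way to confirm hardware virtualization is available (vmx for Intel, svm for AMD):

grep -E 'vmx|svm' /proc/cpuinfo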

Download and install RHEL 7 and create at least one non-root user


As root #

subscription-manager repos --enable rhel-server-rhscl-7-rpms
subscription-manager repos --enable rhel-7-server-optional-rpms
yum-config-manager --add-repo=http://mirror.centos.org/centos-7/7/sclo/x86_64/sclo/
echo "gpgcheck=0" >> /etc/yum.repos.d/mirror.centos.org_centos-7_7_sclo_x86_64_sclo_.repo

Kill packagekitd if it is holding the yum lock:

ps aux | grep packagekitd
kill -9 [ packagekitd PID ]
yum -y update
yum groupinstall -y "Virtualization Host"
systemctl start libvirtd
systemctl enable libvirtd

The Vagrant version must be pinned to 1.7.4: sclo-vagrant1-vagrant-1.7.4

yum install sclo-vagrant1-vagrant-1.7.4 sclo-vagrant1-vagrant-libvirt sclo-vagrant1-vagrant-libvirt-doc sclo-vagrant1-vagrant-registration
cp /opt/rh/sclo-vagrant1/root/usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.32/polkit/10-vagrant-libvirt.rules /etc/polkit-1/rules.d/
systemctl restart libvirtd
systemctl restart polkit
usermod -a -G vagrant non-root-user-here

It is a good idea to log all users out to make sure the group addition takes effect:

[hf@localhost ~]$ id 
uid=1000(hf) gid=1000(hf) groups=1000(hf),10(wheel),982(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[hf@localhost ~]$ cat /etc/group | grep vagrant
vagrant:x:982:hf
[hf@localhost ~]$

As user $

Download 2 files:

  1. cdk-2.1.0.zip
  2. rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
unzip Downloads/cdk-2.1.0.zip
cd cdk/plugins/
scl enable sclo-vagrant1 bash
vagrant global-status
vagrant plugin install ./vagrant-registration-1.2.2.gem ./vagrant-service-manager-1.1.0.gem ./vagrant-sshfs-1.1.0.gem
vagrant box add --name cdkv2 rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
vagrant box list
cd Downloads/cdk/components/rhel/rhel-ose
vagrant up

After a reboot, you can bring it back up like this:

scl enable sclo-vagrant1 bash
cd Downloads/cdk/components/rhel/rhel-ose/
vagrant up
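
To check whether the VM is already running before bringing it up again, Vagrant's status commands help:

vagrant status
vagrant global-status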

If OpenShift did not start, re-run provisioning:

vagrant provision

Example output from a run:

[hf@localhost ~]$ scl enable sclo-vagrant1 bash
[hf@localhost ~]$ cd Downloads/cdk/components/rhel/rhel-ose/
[hf@localhost rhel-ose]$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Creating shared folders metadata...
==> default: Registering box with vagrant-registration...
    default: Would you like to register the system now (default: yes)? [y|n]n
==> default: Copying TLS certificates to /home/hf/Downloads/cdk/components/rhel/rhel-ose/.vagrant/machines/default/libvirt/docker
==> default: Rsyncing folder: /home/hf/Downloads/cdk/components/rhel/rhel-ose/ => /vagrant
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
[hf@localhost rhel-ose]$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Successfully started and provisioned VM with 2 cores and 3072 MB of memory.
==> default: To modify the number of cores and/or available memory set the environment variables
==> default: VM_CPU respectively VM_MEMORY.
==> default: You can now access the OpenShift console on: https://10.1.2.2:8443/console
==> default: To use OpenShift CLI, run:
==> default: $ vagrant ssh
==> default: $ oc login 10.1.2.2:8443
==> default: Configured users are (<username>/<password>):
==> default: openshift-dev/devel
==> default: admin/admin
==> default: If you have the oc client library on your host, you can also login from your host.
[hf@localhost rhel-ose]$ 

helio-frota commented Jul 5, 2016

Explore [overview] REST API

Example of getting the auth token:

oc login https://10.1.2.2:8443
oc whoami -t
ks0K_rCQlv23UudZGFCIA7gYj2cr1nDDIo_D4-c3JQs

API: https://10.1.2.2:8443/swaggerapi/api/v1

I think a client could be written against this API.
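
For example, a minimal sketch of an authenticated call with that token (listing namespaces via the standard Kubernetes API; -k skips TLS verification for the self-signed certificate):

curl -k -H "Authorization: Bearer $(oc whoami -t)" https://10.1.2.2:8443/api/v1/namespaces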

Getting the auth token on Travis:
https://travis-ci.org/helio-frota/poctravis/builds/142560434#L265

helio-frota commented Jul 5, 2016

Openshift Origin

Fedora 24 setup:

wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.2/openshift-origin-server-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
tar xvzf openshift-origin-server-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
sudo dnf install docker
sudo systemctl start docker
sudo systemctl enable docker
sudo ./openshift start
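
Once the server is up, a quick sanity check (assuming the default port; the /healthz endpoint should return "ok"):

curl -k https://localhost:8443/healthz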

Or run it in Docker:
https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container

Console: https://<external_host_ip>:8443/console/ (use the external host IP, not localhost)

Script to start OpenShift Origin and get a token on Travis CI:
https://github.com/helio-frota/poctravis/blob/master/test.sh

After logging in, there are no images or templates available to create new apps.
Would it be good to explore this and try Origin instead of CDK?

helio-frota changed the title from CDK Openshift to CDK and Openshift on Jul 6, 2016

sebastienblanc commented

When I started playing with CDK, I used this document written by @burrsutter. It uses a Node.js sample app, and I recommend it for discovering CDK (and Kubernetes).

helio-frota commented

@sebastienblanc thanks for sharing!

helio-frota commented

How much control did you have over the version of node you used?

I think this is the easiest way to run any version of Node with OpenShift v3:
https://hub.docker.com/r/ryanj/centos7-s2i-nodejs/

oc new-app ryanj/centos7-s2i-nodejs:RELEASE~REPO_URL

Example I used:

oc new-app ryanj/centos7-s2i-nodejs:4.4.6~https://github.com/helio-frota/ost.git
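
To follow the build this kicks off (a sketch, assuming the build config takes the repo name, ost):

oc logs -f bc/ost
oc status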

helio-frota commented Jul 11, 2016

What about ongoing development?
What is the workflow like?
What is required to see and test changes locally?

  1. Register http://www.ultrahook.com
  2. gem install ultrahook
  3. echo "api_key: your_api_key_here" > ~/.ultrahook
  4. Get the webhook URL of your app on OpenShift and register it, for example:
ultrahook github https://10.1.2.2:8443/oapi/v1/namespaces/foo/buildconfigs/ost/webhooks/1EgA6Quakh-nM9zTy2ym/github

Go to GitHub and add a webhook using your registered ultrahook URL.

Full article


After this, each push to GitHub triggers a build inside the local OpenShift.
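
To verify, list the builds (or trigger one manually) with the oc CLI; a sketch, assuming the build config is named ost as above:

oc get builds
oc start-build ost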

Create a route (click Create Route and save) and check the changes locally using the generated URL. Example:
http://ost-foo.rhel-cdk.10.1.2.2.xip.io/
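
The same route can also be created from the CLI; a sketch, assuming the service created by oc new-app is named ost:

oc expose svc/ost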

App used for these tests:
https://github.com/helio-frota/ost
