Update Container registry scenario #1267

Open · wants to merge 12 commits into base: master
@@ -7,7 +7,7 @@ But if you want to look at what's going on under the hood, you can follow the in

To access the OpenShift console:

- * Click on the _Dashboard_ tab in the workshop dashboard. You will be presented with the OpenShift login screen.
+ * Click on the _Console_ tab in the workshop dashboard. You will be presented with the OpenShift login screen.

![Web Console Login](../../assets/ai-machine-learning/prometheus-api-client/03-openshift-login-page.png)

@@ -18,6 +18,12 @@ To access the OpenShift console:

Once you have logged in, you should be shown the list of projects you have access to. A project called `myproject` is where this workshop is deployed.

You should be able to see the deployment in the OpenShift Web Console by switching over to the **Developer** perspective of the OpenShift Web Console. Change from **Administrator** to **Developer** from the drop-down as shown below:

![Web Console Developer](../../assets/middleware/pipelines/web-console-developer.png)

Make sure you are on the `myproject` project by selecting it from the projects list or **Project** dropdown menu.

* In this project you should be able to see two different applications that have been deployed. <br>
![Web Console Project](../../assets/ai-machine-learning/prometheus-api-client/03-openshift-console-page.png)

18 changes: 9 additions & 9 deletions ai-machine-learning/prometheus-api-client/assets/setup.sh
@@ -1,4 +1,3 @@
set +x
curl -LO https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > /dev/null 2>&1
oc login -u developer -p developer --certificate-authority=lets-encrypt-x3-cross-signed.pem --insecure-skip-tls-verify=true > /dev/null 2>&1

@@ -9,25 +8,26 @@ oc process -f ./generate-metrics.yaml | oc apply -n myproject -f - --as system:admin
# set up Prometheus
oc process -f ./deploy-prometheus.yaml | oc apply -n myproject -f - --as system:admin

- curl https://raw.githubusercontent.com/jupyter-on-openshift/jupyter-notebooks/2.4.0/templates/notebook-deployer.json | sed -e 's/"Redirect"/"Allow"/' | oc apply -f - -n myproject
+ curl https://raw.githubusercontent.com/openshift-katacoda/ai-machine-learning-jupyter-notebooks/2.4.0/templates/notebook-deployer.json | sed -e 's/"Redirect"/"Allow"/' | oc apply -f - -n myproject

# set up Notebooks
oc process -f ./notebook-imagestream.yaml | oc apply -f - -n myproject
# Workaround permission issue in container image quay.io/hveeradh/prometheus-anomaly-detection-workshop
sleep 10
oc login -u admin -p admin --certificate-authority=lets-encrypt-x3-cross-signed.pem --insecure-skip-tls-verify=true > /dev/null 2>&1
oc adm policy add-scc-to-user anyuid -z default -n myproject
# Deploy Notebooks
oc process notebook-deployer -p APPLICATION_NAME=prometheus-anomaly-detection-workshop -p NOTEBOOK_IMAGE=prometheus-anomaly-detection-workshop:prometheus-api-client-katacoda -p NOTEBOOK_PASSWORD=secret | oc apply -f - -n myproject
oc login -u developer -p developer --certificate-authority=lets-encrypt-x3-cross-signed.pem --insecure-skip-tls-verify=true > /dev/null 2>&1

clear
echo -e "Waiting for metrics data to be generated... (This might take a couple minutes)"
until [ "$(oc get job prometheus-generate-data -o jsonpath='{.status.succeeded}' -n myproject)" = "1" ];
do
sleep 10
done
oc adm policy remove-scc-from-user anyuid -z default -n myproject
clear
# set up Prometheus
echo -e "Metric data generated, setting up Prometheus"
oc rollout latest prometheus-demo


oc logs bc/prometheus-anomaly-detection-workshop -f
clear
echo -e "The environment should be ready in a few seconds"
echo -e "The url to access the Jupyter Notebooks is: \n https://$(oc get route prometheus-anomaly-detection-workshop -o jsonpath='{.spec.host}' -n myproject) \n\n"
echo -e "Prometheus Console is available at: \n http://$(oc get route prometheus-demo-route -o jsonpath='{.spec.host}' -n myproject)"
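The wait loop in setup.sh above polls `oc get job ... -o jsonpath='{.status.succeeded}'` every ten seconds until the job reports success. The same pattern can be sketched without a cluster by stubbing the status check (the `job_succeeded` function below is a hypothetical stand-in for the `oc get job` call):

```shell
#!/bin/sh
# Stand-in for: oc get job prometheus-generate-data -o jsonpath='{.status.succeeded}' -n myproject
# Reports success on the third call, simulating a job that finishes after a while.
ATTEMPTS=0
job_succeeded() {
    ATTEMPTS=$((ATTEMPTS + 1))
    [ "$ATTEMPTS" -ge 3 ]
}

# Same shape as the script's wait loop; the real script sleeps 10s per iteration.
until job_succeeded; do
    sleep 0
done
echo "Job reported success after $ATTEMPTS checks"
```

Polling the job object this way avoids watching pod logs directly and survives pod restarts, since `.status.succeeded` is maintained by the job controller.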
6 changes: 2 additions & 4 deletions ai-machine-learning/prometheus-api-client/index.json
@@ -12,10 +12,8 @@
},
"environment": {
  "showdashboard": true,
-  "dashboard": "Dashboard",
-  "uilayout": "terminal",
-  "hideintro": false,
-  "hidefinish": false
+  "dashboards": [{"name": "Console", "href": "https://console-openshift-console-[[HOST_SUBDOMAIN]]-443-[[KATACODA_HOST]].environments.katacoda.com"}],
+  "uilayout": "terminal-iframe"
},
"details": {
"steps": [{
2 changes: 1 addition & 1 deletion ai-machine-learning/prometheus-api-client/intro.md
@@ -8,7 +8,7 @@ Once the environment is ready to be used, you will see the links to access it in

### The Environment

- During this tutorial you will be using a hosted OpenShift 4.2 environment that is created just for you. This environment is not shared with other users of the system. <br>
+ During this tutorial you will be using a hosted OpenShift 4.7 environment that is created just for you. This environment is not shared with other users of the system. <br>
Your environment will only be active for a one hour period. Keep this in mind before embarking on getting through the content. <br>
Each time you start this tutorial, a new environment will be created on the fly.

@@ -1,16 +1,9 @@
In this lab, we are going to focus on how [Container Engines](https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/#h.6yt1ex5wfo3l) cache [Repositories](https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/#h.20722ydfjdj8) on the container host. There is a little-known and little-understood fact: whenever you pull a container image, each layer is cached locally and mapped into a shared filesystem - typically overlay2 or devicemapper. This has a few implications. First, it means that caching a container image locally has historically been a root operation. Second, if you pull an image, or commit a new layer with a password in it, anybody on the system can see it, even if you never push it to a registry server.

Let's start with a quick look at Docker and Podman, to show the difference in storage:

``docker info 2>&1 | grep -E 'Storage | Root'``{{execute}}
Now, let's take a look at the Podman container engine. It pulls OCI compliant, docker compatible images:

Notice what driver it's using and that it's storing container images in /var/lib/docker:

``tree /var/lib/docker/``{{execute}}

Now, let's take a look at a different container engine called podman. It pulls the same OCI compliant, docker compatible images, but uses different drivers and storage on the system:

- ``podman info | grep -A3 Graph``{{execute}}
+ ``podman info | grep -A4 graphRoot``{{execute}}

First, you might be asking yourself, [what the heck is d_type?](https://linuxer.pro/2017/03/what-is-d_type-and-why-docker-overlayfs-need-it/). Long story short, it's a filesystem option that must be supported for overlay2 to work properly as a backing store for container images and running containers. Now, take a look at the actual storage being used by Podman:

@@ -19,7 +12,7 @@ First, you might be asking yourself, [what the heck is d_type?](https://linuxer.
Now, pull an image and verify that the files are just mapped right into the filesystem:

``podman pull registry.access.redhat.com/ubi7/ubi
- cat $(find /var/lib/containers/storage | grep redhat-release | tail -n 1)``{{execute}}
+ cat $(find /var/lib/containers/storage | grep /etc/redhat-release | tail -n 1)``{{execute}}
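The `find | grep | tail` pipeline above works because image layers are just ordinary files and directories on disk. The same trick can be exercised against a throwaway directory tree that mimics two layers (the paths and file contents below are made up for illustration; a real store lives under /var/lib/containers/storage):

```shell
#!/bin/sh
# Build a fake layer store: two layers, each shipping etc/redhat-release.
store=$(mktemp -d)
mkdir -p "$store/layer1/etc" "$store/layer2/etc"
echo "Red Hat Enterprise Linux release 7.9 (fake)" > "$store/layer1/etc/redhat-release"
echo "Red Hat Enterprise Linux release 8.4 (fake)" > "$store/layer2/etc/redhat-release"

# Same shape as the lab's pipeline; sort makes tail -n 1 deterministic here.
release_file=$(find "$store" | grep /etc/redhat-release | sort | tail -n 1)
release=$(cat "$release_file")
echo "$release"   # → Red Hat Enterprise Linux release 8.4 (fake)
rm -rf "$store"
```

Grepping for the full `/etc/redhat-release` path (as the updated lab does) avoids false matches on layer directories that merely contain "redhat-release" somewhere in their names.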

With both Docker and Podman, as well as most other container engines on the planet, image layers are mapped one for one to some kind of storage, be it thinp snapshots with devicemapper, or directories with overlay2.
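The "mapped one for one" claim can be illustrated without any container engine: stack two plain directories the way overlay2 stacks layers. This sketch uses hypothetical paths and a naive `cp`-based merge; real engines use the kernel overlay filesystem rather than copying:

```shell
#!/bin/sh
# Two fake image layers: the upper layer overrides a file from the lower one,
# mimicking how overlay2 resolves the same path across stacked layers.
work=$(mktemp -d)
mkdir -p "$work/lower/etc" "$work/upper/etc" "$work/merged"
echo "from-lower" > "$work/lower/etc/os-release"
echo "from-upper" > "$work/upper/etc/os-release"

# Naive merge: copy the lower layer first, then the upper, so upper wins.
cp -R "$work/lower/." "$work/merged/"
cp -R "$work/upper/." "$work/merged/"

merged_view=$(cat "$work/merged/etc/os-release")
echo "$merged_view"   # → from-upper
rm -rf "$work"
```

Overlay2 achieves the same result lazily at mount time (lowerdir/upperdir), which is why pulling a layer you already have is free: the shared files are referenced in place, not copied.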
