- Docker
- skipper https://github.com/stratoscale/skipper
- minikube (for tests)
- kubectl
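A quick way to confirm the prerequisites are available on your PATH (these are the tools' standard version/help flags, nothing project-specific):

```shell
docker --version
skipper --help
minikube version
kubectl version --client
```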
To push your build target to a Docker registry, you first need to change the default target:
- Create a quay.io or Docker Hub account if you don't already have one. These instructions refer to quay.io; Docker Hub is similar.
- Create a repository called `assisted-service`.
- Make sure you have your `~/.docker/config.json` file set up to point to your account. For quay.io, you can go to quay.io -> User Settings, and click "Generate Encrypted Password" under "Docker CLI Password".
- Login to quay.io using `docker login quay.io`.
- Export the `SERVICE` environment variable to your Docker registry, and pass a tag of your choice, e.g., "test":

  ```shell
  export SERVICE=quay.io/<username>/assisted-service:<tag>
  ```
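For example, with a hypothetical quay.io username `jdoe` and the tag "test" (both placeholders), the full sequence looks like:

```shell
docker login quay.io
export SERVICE=quay.io/jdoe/assisted-service:test
```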
For the first build of the build container, run:

```shell
skipper build assisted-service-build
skipper make all
```
After every change in the API (`swagger.yaml`), the code should be regenerated and the build must pass:

```shell
skipper make generate-from-swagger
```
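A typical API-change loop combines the commands above; reviewing the diff before committing is just a suggestion (where the generated code lives depends on your checkout):

```shell
skipper make generate-from-swagger
git status --short    # review the regenerated files
skipper make all      # make sure the build still passes
```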
- Run minikube on your system.
- Deploy the services:

  ```shell
  skipper make deploy-test
  ```

Run the subsystem tests:

```shell
skipper make test
```

To run only subsystem tests matching a given focus, e.g. "versions":

```shell
skipper make test FOCUS=versions
```

Run the unit tests:

```shell
skipper make unit-test
```

To run unit tests for a specific package:

```shell
skipper make unit-test TEST=./internal/host
```

To run only unit tests matching a given focus, e.g. "cluster":

```shell
skipper make unit-test FOCUS=cluster
```
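If tests fail to start, a quick check that the test deployment is actually up can help (using the "assisted-installer" namespace referenced elsewhere in this document):

```shell
kubectl get pods --namespace assisted-installer
```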
If you are making changes and don't want to deploy everything again, you can simply run this command:

```shell
skipper make update && kubectl get pod --namespace assisted-installer -o name | grep assisted-service | xargs kubectl delete --namespace assisted-installer
```

It will build and push a new image of the service to your Docker registry, then delete the service pod from minikube; the deployment will handle the update and pull the new image to start the service again.
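For repeated use, the one-liner above can be wrapped in a small shell function (a sketch; the function name is made up for illustration):

```shell
# Hypothetical helper wrapping the rebuild-and-restart loop described above.
redeploy_service() {
  skipper make update &&
    kubectl get pod --namespace assisted-installer -o name |
    grep assisted-service |
    xargs kubectl delete --namespace assisted-installer
}
```

Then just run `redeploy_service` after each change.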
The deployment is a system deployment: it contains all the components the service needs for its operations to work (where implemented), such as an S3 service (Scality) and a DB, and it will use the image generator to create the images in the deployed S3 and create the relevant bucket in S3.

```shell
skipper make deploy-all
```
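Once the pods are up, you can follow the service logs; this assumes the deployment is named `assisted-service`, matching the pod name grepped for above (verify with `kubectl get deployments`):

```shell
kubectl logs --namespace assisted-installer deployment/assisted-service --follow
```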
Besides the default minikube deployment, the service supports deployment to an OpenShift cluster using ingress as the access point to the service:

```shell
skipper make deploy-all TARGET=oc-ingress
```
This deployment option has multiple optional parameters that should be used if you are not the admin of the cluster:

- `APPLY_NAMESPACE` - True by default. The script will try to deploy the "assisted-installer" namespace; if you are not the admin of the cluster or don't have permissions for this operation, you may skip namespace deployment.
- `INGRESS_DOMAIN` - By default, the deployment script will try to get the domain prefix from the OpenShift ingress controller. If you don't have access to it, you may specify the domain yourself, for example: `apps.ocp.prod.psi.redhat.com`
To set the parameters, simply add them at the end of the command, for example:

```shell
skipper make deploy-all TARGET=oc-ingress APPLY_NAMESPACE=False INGRESS_DOMAIN=apps.ocp.prod.psi.redhat.com
```
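If you do have cluster access, the ingress domain can usually be read from the default ingress controller (a standard OpenShift resource; verify on your cluster):

```shell
oc get ingresscontroller default --namespace openshift-ingress-operator \
  --output jsonpath='{.status.domain}'
```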
Note: All deployment configurations are under the `deploy` directory in case more detailed configuration is required.
This service supports optional UI deployment:

```shell
skipper make deploy-ui
```

- In case you are using podman, run the above command without `skipper`.
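That is, the podman equivalent is simply:

```shell
make deploy-ui
```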
For OpenShift users, see the service deployment options on the OpenShift platform.
This will allow you to deploy Prometheus and Grafana already integrated with the Assisted Installer:

- On Minikube:

  ```shell
  # Step by step
  make deploy-olm
  make deploy-prometheus
  make deploy-grafana

  # Or just all-in
  make deploy-monitoring
  ```

- On OpenShift:

  ```shell
  # Step by step
  make deploy-prometheus TARGET=oc-ingress APPLY_NAMESPACE=false
  make deploy-grafana TARGET=oc-ingress APPLY_NAMESPACE=false

  # Or just all-in
  make deploy-monitoring TARGET=oc-ingress APPLY_NAMESPACE=false
  ```
NOTE: To expose the monitoring UIs on your local environment, you can follow these steps:

```shell
kubectl config set-context $(kubectl config current-context) --namespace assisted-installer

# To expose Prometheus
kubectl port-forward svc/prometheus-k8s 9090:9090

# To expose Grafana
kubectl port-forward svc/grafana 3000:3000
```

Now you just need to access http://127.0.0.1:3000 for your Grafana deployment or http://127.0.0.1:9090 for Prometheus.
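To keep both UIs reachable at once, the same port-forwards can run in the background (a convenience sketch built from the commands above):

```shell
kubectl port-forward svc/prometheus-k8s 9090:9090 &
kubectl port-forward svc/grafana 3000:3000 &

# Stop them later with:
kill %1 %2
```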
This feature is for internal usage and is not recommended for external users. This option selects the required tag to be used for each dependency. If `deploy-all` uses a new tag, the update is done automatically and there is no need to restart or roll out any deployment.
Deploy images according to the manifest:

```shell
skipper make deploy-all DEPLOY_MANIFEST_PATH=./assisted-installer.yaml
```

Deploy images according to the manifest in the assisted-installer-deployment repo (requires a git tag/branch/hash):

```shell
skipper make deploy-all DEPLOY_MANIFEST_TAG=master
```

Deploy all the images with the same tag. The tag is not validated, so you need to make sure it actually exists:

```shell
skipper make deploy-all DEPLOY_TAG=<tag>
```

The default tag is `latest`.
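For example, pinning every image to a single release tag (the tag `v1.0.0` is a placeholder; remember that its existence is not validated):

```shell
skipper make deploy-all DEPLOY_TAG=v1.0.0
```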
The assisted service can also be deployed without using a Kubernetes cluster. In this scenario the service and associated components are deployed onto your local host as a pod using Podman.
This type of deployment requires a different container image that combines components that are used to generate the installer ISO and configuration files. First build the image:
```shell
export SERVICE=quay.io/<your-org>/assisted-service:latest
make build-onprem
```
To deploy, update `SERVICE_BASE_URL` in the `onprem-environment` file to match the hostname or IP address of your host. For example, if your IP address is 192.168.122.2, then `SERVICE_BASE_URL` would be set to http://192.168.122.2:8090. Port 8090 is the assisted-service API port.
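One way to make that edit (a sketch; any editor works, and the sed invocation assumes the variable already appears in the file):

```shell
sed -i 's|^SERVICE_BASE_URL=.*|SERVICE_BASE_URL=http://192.168.122.2:8090|' onprem-environment
```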
Then deploy the containers:

```shell
make deploy-onprem
```

Check that all containers are up and running:

```shell
podman ps -a
```
The UI will be available at:

```
http://<host-ip-address>:8080
```
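A quick sanity check against the service API itself (the REST path is an assumption based on the service's v1 API; adjust if yours differs):

```shell
curl http://<host-ip-address>:8090/api/assisted-install/v1/clusters
```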
To remove the containers:

```shell
make clean-onprem
```

To run the subsystem tests:

```shell
make test-onprem
```
A document that can assist troubleshooting: link
- https://github.com/oshercc/coreos_installation_iso

  Image in charge of generating the Fedora CoreOS image used to install the host with the relevant ignition file.

  The image is uploaded to the deployed S3 under the name template "installer-image-".
- https://github.com/openshift/assisted-ignition-generator

  Image in charge of generating the following installation files:

  - kubeconfig
  - bootstrap.ign
  - master.ign
  - worker.ign
  - metadata.json
  - kubeadmin-password

  Files are uploaded to the deployed S3 under the name template "/".