
How to enable scl in PodTemplate with maven and nodejs? #582

Closed
arnaud-deprez opened this issue Apr 22, 2018 · 14 comments

@arnaud-deprez
Contributor

arnaud-deprez commented Apr 22, 2018

Hi,

For some of our projects, we are using JHipster.

The idea is to embed an Angular application into a Spring Boot fat jar.
To build this, we first need to build the Angular application and then build the Spring Boot application with the result of the former build.

So basically I need the capabilities of both the maven and nodejs Jenkins slaves.

My idea was to create a PodTemplate config that inherits from maven and adds a nodejs container, such as:

apiVersion: v1
kind: Template
metadata:
  labels:
    template: jenkins-slave-gradle-nodejs-template
    role: jenkins-slave
  name: jenkins-slave-gradle-nodejs
objects:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      role: jenkins-slave
    name: jenkins-slave-maven-nodejs
  data:
    nodejs: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom>maven</inheritFrom>
        <name>maven-nodejs</name>
        <instanceCap>10</instanceCap>
        <idleMinutes>15</idleMinutes>
        <label>maven-nodejs</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <volumes/>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>nodejs</name>
            <image>${IMAGE}</image>
            <privileged>false</privileged>
            <alwaysPullImage>true</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>cat</args>
            <ttyEnabled>true</ttyEnabled>
            <resourceRequestCpu>100m</resourceRequestCpu>
            <resourceRequestMemory>128Mi</resourceRequestMemory>
            <resourceLimitCpu>2000m</resourceLimitCpu>
            <resourceLimitMemory>512Mi</resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>MAVEN_MIRROR_URL</key>
            <value>${MAVEN_MIRROR_URL}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>MAVEN_PUBLISH_SNAPSHOT_URL</key>
            <value>${MAVEN_PUBLISH_SNAPSHOT_URL}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>MAVEN_PUBLISH_URL</key>
            <value>${MAVEN_PUBLISH_URL}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>MAVEN_PUBLISH_USERNAME</key>
            <value>${MAVEN_PUBLISH_USERNAME}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>MAVEN_PUBLISH_PASSWORD</key>
            <value>${MAVEN_PUBLISH_PASSWORD}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>CI</key>
            <value>true</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>NPM_MIRROR_URL</key>
            <value>${NPM_MIRROR_URL}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>NPM_PUBLISH_URL</key>
            <value>${NPM_PUBLISH_URL}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
          <org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
            <key>NPM_PUBLISH_TOKEN</key>
            <value>${NPM_PUBLISH_TOKEN}</value>
          </org.csanchez.jenkins.plugins.kubernetes.PodEnvVar>
        </envVars>
        <annotations/>
        <imagePullSecrets/>
        <nodeProperties/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
parameters:
- name: MAVEN_MIRROR_URL
  description: "Maven mirror url from where jenkins will download artifacts"
  required: true
  value: "http://nexus3:8081/repository/maven-public"
- name: MAVEN_PUBLISH_SNAPSHOT_URL
  description: "Maven repository url to where jenkins will upload snapshot artifacts"
  required: true
  value: "http://nexus3:8081/repository/maven-snapshots"
- name: MAVEN_PUBLISH_URL
  description: "Maven repository url to where jenkins will upload release artifacts"
  required: true
  value: "http://nexus3:8081/repository/maven-releases"
- name: MAVEN_PUBLISH_USERNAME
  description: "Username used when uploading artifacts"
  required: true
  value: "jenkins"
- name: MAVEN_PUBLISH_PASSWORD
  description: "Password used when uploading artifacts"
  required: true
  value: "jenkins"
- name: NPM_MIRROR_URL
  description: "Npm mirror url from where jenkins will download packages"
  required: true
  value: "http://nexus3:8081/repository/npm-public/"
- name: NPM_PUBLISH_URL
  description: "Npm repository url to where jenkins will upload packages"
  required: true
  value: "http://nexus3:8081/repository/npm-releases/"
- name: NPM_PUBLISH_TOKEN
  description: "Npm token used when uploading artifacts"
  required: true
  value: "NpmToken.b5505337-ffb2-3fac-8b3a-fcd81b8bb8fb"
- name: IMAGE
  description: |-
    Docker image reference of the node slave.
    You can use imagestreamtag:<namespace>/<imagestream>:<tag> if you want to use
    an imagestreamtag as a reference for this
  required: true
  value: "openshift/jenkins-slave-base-centos7:v3.9"

Then I tried that setup with this pipeline:

pipeline { 
  agent none 
  stages {
    stage('test maven-nodejs') {
      agent { label 'maven-nodejs' }
      steps {
        echo 'Hello from maven-nodejs slave'
        sh 'mvn -version'
        sh 'gradle -version'
        script {
            container('nodejs') {
              echo 'In container nodejs...'
              sh 'node --version'
              sh 'npm --version'
            }
        }
      }
    }
  }
}

But it fails with:

[Pipeline] container
[Pipeline] {
[Pipeline] echo
In container nodejs...
[Pipeline] sh
[cicd-staging-slave-test-pipeline] Running shell script
+ node --version
/tmp/workspace/cicd-staging/cicd-staging-slave-test-pipeline@tmp/durable-b46ed3da/script.sh: line 2: node: command not found

For some reason, it seems /usr/local/bin/scl_enable is not sourced in the second container.

Any idea how to solve this?

Thanks,

@bparees
Contributor

bparees commented Apr 22, 2018

You would need to construct an image that actually contains the nodejs+maven binaries. The inheritFrom you used only affects the slave pod configuration; it has no effect on the actual image you're running.

So you would need to extend the maven image and add all the nodejs packages, or vice versa.

And you'd need to make sure the scl enablement script[1] included in the image enables both the nodejs and maven packages.

[1] https://github.com/openshift/jenkins/blob/master/slave-maven/contrib/bin/scl_enable
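For illustration, a combined image's scl enablement script might look like the following minimal sketch, modeled on the linked slave-maven script. The collection names (rh-maven35, rh-nodejs8), the guard, and the final echo are assumptions for demonstration, not taken from the actual images:

```shell
#!/bin/bash
# Hypothetical combined scl_enable for a maven+nodejs image.
# Collection names below are placeholders, not confirmed by this repo.
unset BASH_ENV PROMPT_COMMAND ENV
# Guarded so this sketch is a harmless no-op on systems without Software Collections:
if command -v scl_source >/dev/null 2>&1; then
    source scl_source enable rh-maven35 rh-nodejs8
fi
echo "scl_enable sketch finished"
```

The image would then point BASH_ENV, ENV, and PROMPT_COMMAND at this script (as the existing agent images do), so every shell the agent spawns enables both collections.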

@arnaud-deprez
Contributor Author

Thanks for your reply @bparees

I know the inherit I did is for the pod configuration; my intent was actually to inherit from maven and add node as a sidecar container, like the multi-container example shown in kubernetes-plugin. This also simplifies maintenance, since we can reuse these two containers (alone, together, or with other containers if needed).

IMHO, it seems a good idea to reuse containers instead of building new ones that are just combinations of these.

Unless you tell me it's a terrible idea to use sidecar containers to run Jenkins pipelines for reasons I'm not aware of, I think we should support it, shouldn't we?

The only blocking point is the scl enablement, which is, as far as I know, specific to CentOS/RHEL images.
Is there really no way to enable scl in a sidecar container?

@bparees
Contributor

bparees commented Apr 22, 2018 via email

@arnaud-deprez
Contributor Author

Sure, it works, but this is not convenient, as I have to do it for each command (see below):

pipeline { 
  agent none 
  stages {
    stage('test maven-nodejs') {
      agent { label 'maven-nodejs' }
      steps {
        echo 'Hello from maven-nodejs slave'
        sh 'mvn -version'
        sh 'gradle -version'
        container('nodejs') {
          echo 'In container nodejs...'
          sh 'source /usr/local/bin/scl_enable && node --version'
          sh 'source /usr/local/bin/scl_enable && npm --version'
        }
      }
    }
  }
}

Also, it prints some warnings for each command:

[Pipeline] sh
[cicd-staging-slave-test-pipeline] Running shell script
+ source /usr/local/bin/scl_enable
++ unset BASH_ENV PROMPT_COMMAND ENV
++ source scl_source enable rh-nodejs8
+++ _scl_source_help='Usage: source scl_source <action> [<collection> ...]

Don'\''t use this script outside of SCL scriptlets!

Options:
    -h, --help    display this help and exit'
+++ '[' 2 -eq 0 -o enable = -h -o enable = --help ']'
+++ '[' -z '' ']'
+++ _recursion=false
+++ '[' -z '' ']'
+++ _scl_scriptlet_name=enable
+++ shift 1
+++ '[' -z '' ']'
+++ _scl_dir=/etc/scl/conf
+++ '[' '!' -e /etc/scl/conf ']'
+++ for arg in '"$@"'
+++ _scl_prefix_file=/etc/scl/conf/rh-nodejs8
++++ cat /etc/scl/conf/rh-nodejs8
+++ _scl_prefix=/opt/rh
+++ '[' 0 -ne 0 ']'
+++ /usr/bin/scl_enabled rh-nodejs8
+++ '[' 1 -ne 0 ']'
+++ _scls+=($arg)
+++ _scl_prefixes+=($_scl_prefix)
+++ '[' false == false ']'
+++ _i=0
+++ _recursion=true
+++ '[' 0 -lt 1 ']'
+++ _scl_scriptlet_path=/opt/rh/rh-nodejs8/enable
+++ source /opt/rh/rh-nodejs8/enable
++++ export PATH=/opt/rh/rh-nodejs8/root/usr/bin:/home/jenkins/node_modules/.bin/:/home/jenkins/.npm-global/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++++ PATH=/opt/rh/rh-nodejs8/root/usr/bin:/home/jenkins/node_modules/.bin/:/home/jenkins/.npm-global/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++++ export LD_LIBRARY_PATH=/opt/rh/rh-nodejs8/root/usr/lib64
++++ LD_LIBRARY_PATH=/opt/rh/rh-nodejs8/root/usr/lib64
++++ export PYTHONPATH=/opt/rh/rh-nodejs8/root/usr/lib/python2.7/site-packages
++++ PYTHONPATH=/opt/rh/rh-nodejs8/root/usr/lib/python2.7/site-packages
++++ export MANPATH=/opt/rh/rh-nodejs8/root/usr/share/man:
++++ MANPATH=/opt/rh/rh-nodejs8/root/usr/share/man:
+++ '[' 0 -ne 0 ']'
+++ export 'X_SCLS=rh-nodejs8 '
+++ X_SCLS='rh-nodejs8 '
+++ _i=1
+++ '[' 1 -lt 1 ']'
+++ _scls=()
+++ _scl_prefixes=()
+++ _scl_scriptlet_name=
+++ _recursion=false
+ node --version
v8.9.4

Isn't there a better way?
It would be nice if developers did not need to bother with scl enablement.

@bparees
Contributor

bparees commented Apr 23, 2018

Sorry, I don't think it is possible. The mechanism the kubernetes plugin uses to invoke commands in the sidecar container does not give us a way to ensure the scl packages are enabled within that execution context, as far as I am aware. We are only able to control the scl enablement within the primary container.

The plugin effectively execs into the sidecar container, so the scl environment the image normally sets up is not going to be available, and I don't see a way for us to control that behavior.

If manual scl enablement of the commands is a problem for you, I suggest you build your own nodejs image using a different nodejs packaging.

@bparees bparees closed this as completed Apr 23, 2018
@arnaud-deprez
Contributor Author

OK, thank you very much for your support!

@arnaud-deprez
Contributor Author

Hi,

FYI, we've found a way to solve it by adding this line to configure-slave in each slave:

#!/bin/bash

echo "source /usr/local/bin/scl_enable" >> /tmp/.bashrc

Explanation:
The Kubernetes plugin runs the equivalent of kubectl exec in the specified sidecar container when you use:

//...
container('nodejs') {
    sh "env"
    sh "node --version"
}

By inspecting the env variables, it appears the plugin also sets the HOME variable to /tmp, which is the working directory.
So by putting a .bashrc file in /tmp that sources the scl_enable script, we also get scl enablement in the sidecar containers.
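The mechanism can be demonstrated outside of Kubernetes: an interactive bash reads $HOME/.bashrc, so anything appended there is visible to the commands that shell runs. This is a minimal sketch with a stand-in variable instead of the real scl_enable script, not the plugin's actual invocation:

```shell
#!/bin/sh
# Simulate the trick: give the shell a throwaway HOME containing a .bashrc,
# the way the plugin ends up with HOME=/tmp inside the sidecar container.
tmp_home=$(mktemp -d)
# Stand-in for: echo "source /usr/local/bin/scl_enable" >> /tmp/.bashrc
echo 'export DEMO_SCL_ENABLED=yes' > "$tmp_home/.bashrc"
# An interactive bash sources $HOME/.bashrc, so the variable is visible:
HOME="$tmp_home" bash -i -c 'echo "DEMO_SCL_ENABLED=$DEMO_SCL_ENABLED"' 2>/dev/null
rm -rf "$tmp_home"
```

Note this only works if the shell the plugin spawns actually reads the startup file; a plain non-interactive `sh -c` would not.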

It might be interesting to at least document this, and perhaps add it by default in each slave/agent image so users can compose them.

WDYT @bparees @gabemontero ?

@grdryn
Member

grdryn commented May 2, 2018

I don't work on this repo, but for the projects that I work on, we've got our own images based on the structure of the ones in this repo. We've done the following, which might be of use, if I understand the scenario correctly!

In the configure-slave script for the base image, we've added source scl_source enable ${ENABLED_COLLECTIONS} like this, then in the Dockerfile for that base image we set an environment variable with the collections that it has like this.

Then in images that are based on that one, we extend the collections listed in that variable like this.
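A sketch of that pattern follows; the collection names and the exact shape of the ENABLED_COLLECTIONS variable are assumptions based on the description above, and the scl_source call is guarded so the sketch also runs where Software Collections are absent:

```shell
#!/bin/sh
# Base image would set, e.g.:   ENV ENABLED_COLLECTIONS="rh-maven35"
# A derived image would extend: ENV ENABLED_COLLECTIONS="$ENABLED_COLLECTIONS rh-nodejs8"
ENABLED_COLLECTIONS="rh-maven35 rh-nodejs8"

# In the configure-slave script, enable everything the image declares:
if command -v scl_source >/dev/null 2>&1; then
    . scl_source enable ${ENABLED_COLLECTIONS}
fi
echo "enabled collections: ${ENABLED_COLLECTIONS}"
```

The appeal of this design is that a derived image only appends to one variable instead of rewriting the enablement script.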

@bparees
Contributor

bparees commented May 2, 2018

@arnaud-deprez

I would have expected this:
https://github.com/openshift/jenkins/blob/master/agent-nodejs-8/Dockerfile#L8-L10

to accomplish the same thing as creating a .bashrc file, so I'm surprised creating the file works, but I have no particular objections to adding that file to our slave/agent images that need scl enablement. Would you mind submitting a PR?

If you want to make it look more like what @grdryn suggested for flexibility/extension, that would be OK too.

@bparees bparees reopened this May 2, 2018
@arnaud-deprez
Contributor Author

@bparees Ok, I will send a PR for that.

@bparees the problem in the image is that it unsets these variables https://github.com/openshift/jenkins/blob/master/agent-nodejs-8/contrib/bin/scl_enable#L2 so when I enter a new bash shell, the script is not sourced again. That's why I needed to create the .bashrc file.

I'm not sure images from @grdryn can be used in sidecar containers, can they?

I like @grdryn's idea of defining the scl collections in an environment variable, so users can just inherit these images and optionally append other collections to that variable.

Stay tuned :-)

@arnaud-deprez
Contributor Author

Hi,

Just to give some feedback: our trick worked in OpenShift 3.6 with kubernetes-plugin 0.12, but since OpenShift 3.9 and kubernetes-plugin 1.2 or higher, it does not work anymore...

So I will give up on scl and build my own slaves without it, as it is really a pain and doesn't make sense in a container world where we want things to be immutable, including the PATH and the shell.

Thanks for your help.

@grdryn
Member

grdryn commented May 10, 2018

@bparees let me know if you think having my solution would be beneficial in your images. I'm not sure if it is or not, but happy to make a PR if so.

@bparees
Contributor

bparees commented May 10, 2018

@grdryn I think that's fairly similar to what we already do (though ours is less directly extensible):

https://github.com/openshift/jenkins/blob/master/agent-nodejs-8/Dockerfile#L8-L10
https://github.com/openshift/jenkins/blob/master/agent-nodejs-8/contrib/bin/scl_enable

I don't think directly invoking the scl_enable script in the configure-slave script would help in @arnaud-deprez's case, since he's effectively exec'ing into the container and gets a new shell environment where the configure-slave script has not been run.

But if I've missed something and it would help, I'm certainly open to the PR.

@arnaud-deprez
Contributor Author

Yep, @bparees got my point, and I've looked at your solution @grdryn; it is very close to what we currently have.

I tried a mix of what we had and what you did @grdryn here: https://github.com/arnaud-deprez/jenkins/commit/c4180feb6271db478e1aa4488d817c003a334d46.

The trick here https://github.com/arnaud-deprez/jenkins/blob/c4180feb6271db478e1aa4488d817c003a334d46/slave-base/contrib/bin/run-jnlp-client#L37-L42 did work with older versions of kubernetes-plugin but not with 1.2 and higher.
I tried with .profile and .bashrc, with the same result.
