FAQ
The simplest way is to create a Jenkinsfile in your Git repository and create a multi-branch pipeline job in your Jenkins instance. See https://jenkins.io/doc/pipeline/tour/hello-world/ for more information. A simple Jenkinsfile is shown below. Note that the full list of available tool names can be found in the Tools (JDK, Maven, Ant) section.
pipeline {
  agent any
  tools {
    maven 'apache-maven-latest'
    jdk 'temurin-jdk17-latest'
  }
  options {
    timeout(time: 30, unit: 'MINUTES')
    disableConcurrentBuilds()
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
  }
  stages {
    stage('Build') {
      steps {
        sh '''
          java -version
          mvn -v
        '''
      }
    }
  }
  post {
    // send a mail on unsuccessful and fixed builds
    unsuccessful { // means unstable || failure || aborted
      emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!',
        body: '''Check console output at $BUILD_URL to view the results.''',
        recipientProviders: [culprits(), requestor()],
        to: 'other.recipient@domain.org'
    }
    fixed { // back to normal
      emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!',
        body: '''Check console output at $BUILD_URL to view the results.''',
        recipientProviders: [culprits(), requestor()],
        to: 'other.recipient@domain.org'
    }
  }
}
- In general, you can use a pre-built/custom docker image and Jenkins pipelines, see https://wiki.eclipse.org/Jenkins#How_do_I_run_my_build_in_a_custom_container.3F.
- If your project requires UI test-specific dependencies (e.g. metacity, mutter), you can try to use the ubuntu-latest pod template. The list of installed applications can be found here (it does not show all dependencies): https://github.com/eclipse-cbi/jiro-agents/blob/master/ubuntu/Dockerfile
- If it does not work, use a pre-built/custom Docker image.
For freestyle jobs, the label can be specified in the job configuration under "Restrict where this project can be run":
Example for pipeline jobs:
pipeline {
  agent {
    kubernetes {
      label 'ubuntu-latest'
    }
  }
  tools {
    maven 'apache-maven-latest'
    jdk 'temurin-jdk17-latest'
  }
  options {
    timeout(time: 30, unit: 'MINUTES')
    disableConcurrentBuilds()
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
  }
  stages {
    stage('Build') {
      steps {
        wrap([$class: 'Xvnc', takeScreenshot: false, useXauthority: true]) {
          sh 'mvn clean verify'
        }
      }
    }
  }
  post {
    //...
  }
}
To change the resources (CPU/memory) available to the build pod, you can override the requests and limits of the jnlp container in the pod template, e.g.:
pipeline {
  agent {
    kubernetes {
      inheritFrom 'ubuntu-latest'
      yaml """
spec:
  containers:
  - name: jnlp
    resources:
      limits:
        memory: "4Gi"
        cpu: "4000m"
      requests:
        memory: "4Gi"
        cpu: "2000m"
"""
    }
  }
  stages {
    stage('Main') {
      steps {
        sh 'hostname'
      }
    }
  }
}
You need to use a Jenkins pipeline to do so. Then you can specify a Kubernetes pod template. See an example below.
You can either use already existing "official" Docker images, for example the maven:<version>-alpine images, or create your own custom Docker image.
Important
Docker images need to be hosted on supported registries. We currently support: docker.io, quay.io and gcr.io
Example:
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
  - name: php
    image: php:7.2.10-alpine
    command:
    - cat
    tty: true
  - name: hugo
    image: eclipsecbi/hugo:0.110.0
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
        container('php') {
          sh 'php -version'
        }
        container('hugo') {
          sh 'hugo -version'
        }
      }
    }
  }
}
See the Kubernetes Jenkins plugin for more documentation.
For security reasons, you cannot do that. We run an infrastructure open to the internet, which potentially runs stuff from non-trusted code (e.g., PR) so we need to follow a strict policy to protect the common good.
More specifically, we run containers using an arbitrarily assigned user ID (e.g. 1000100000) in our OpenShift cluster. The group ID is always root (0) though. The security context constraints we use for running projects' containers are "restricted". You cannot change this level from your podTemplate.
Unfortunately, most images you can find on DockerHub (including official images) do not support running as an arbitrary user. Actually, most of them expect to run as root, which is definitely a bad practice.
OpenShift publishes guidelines with best practices about how to create Docker images. More specifically, see the section about how to support running with arbitrary user ID.
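For illustration, a minimal sketch of that guideline, to be run at image build time (e.g. in a Dockerfile RUN step): directories the build needs to write to are assigned to the root group and made group-writable, so any arbitrarily assigned UID running with GID 0 can use them. The /opt/app path below is just a placeholder.
# hypothetical example; /opt/app stands for whatever directory your image writes to
chgrp -R 0 /opt/app && \
chmod -R g=u /opt/app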
To test if an image is ready to be run with an arbitrarily assigned user ID, you can try to start it with the following command line:
$ docker run -it --rm -u $((1000100000 + RANDOM % 100000)):0 image/name:tag
You can use and integrate the Eclipse Foundation Jenkins shared library named jenkins-pipeline-library. This library provides a containerBuild function for building Docker images in the Eclipse Foundation infrastructure.
@Library('releng-pipeline') _

pipeline {
  agent any
  environment {
    HOME = "${env.WORKSPACE}"
  }
  stages {
    stage('build') {
      agent {
        kubernetes {
          yaml loadOverridableResource(
            libraryResource: 'org/eclipsefdn/container/agent.yml'
          )
        }
      }
      steps {
        container('containertools') {
          containerBuild(
            credentialsId: '<jenkins-credential-id>',
            name: 'docker.io/<namespace-name>/<container-name>',
            version: 'latest'
          )
        }
      }
    }
  }
}
You need to specify the tools persistent volume in your pod template, as shown below.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: custom-name
    image: my-custom-image:latest
    tty: true
    command:
    - cat
    volumeMounts:
    - name: tools
      mountPath: /opt/tools
  volumes:
  - name: tools
    persistentVolumeClaim:
      claimName: tools-claim-jiro-<project_shortname>
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('custom-name') {
          sh '/opt/tools/apache-maven/latest/bin/mvn -version'
        }
      }
    }
  }
}
Important
Do not forget to replace <project_shortname> in the claimName with your project's short name (e.g. tools-claim-jiro-cbi for the CBI project).
Due to recent changes in the Jenkins Kubernetes plugin, you need to specify an emptyDir volume for /home/jenkins if your build uses a directory like /home/jenkins/.ivy2 or /home/jenkins/.npm.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: custom-name
    image: my-custom-image:latest
    tty: true
    command:
    - cat
    volumeMounts:
    - mountPath: "/home/jenkins"
      name: "jenkins-home"
      readOnly: false
  volumes:
  - name: "jenkins-home"
    emptyDir: {}
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('custom-name') {
          sh 'mkdir -p /home/jenkins/foobar'
        }
      }
    }
  }
}
Note
We are not satisfied with this workaround and are actively looking for a more convenient way to let projects use custom containers without specifying a bunch of volume mounts.
You cannot just cp files to a folder. You need to do that with ssh and scp while connecting to projects-storage.eclipse.org. Therefore, SSH credentials need to be set up on the project's Jenkins instance. This is already set up by default for all instances on our infrastructure.
This service provides access to the Eclipse Foundation file server storage:
/home/data/httpd/download.eclipse.org
/home/data/httpd/archive.eclipse.org
Depending on how you run your build, the way you will use them is different. See the different cases below.
You need to activate the "SSH Agent" plugin in your job configuration and select the proper credentials genie.<projectname> (ssh://projects-storage.eclipse.org).
Then you can use the ssh, scp, rsync and sftp commands to deploy artifacts to the server, e.g.,
scp -o BatchMode=yes target/my_artifact.jar genie.<projectname>@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/<projectname>/
ssh -o BatchMode=yes genie.<projectname>@projects-storage.eclipse.org ls -al /home/data/httpd/download.eclipse.org/<projectname>/
rsync -a -e ssh <local_dir> genie.<projectname>@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/<projectname>/
It is possible to deploy build output from within Maven, using Maven Wagon and wagon-ssh-external. As the build environment uses an SSH agent, the Maven Wagon plugins must use the external SSH commands so that the agent is used.
If the build outputs are executables or a p2 update site rather than Maven artifacts, the standard Maven deploy needs to be disabled, e.g. with this in the appropriate profile in the parent pom.xml:
<plugin>
  <artifactId>maven-deploy-plugin</artifactId>
  <configuration>
    <skip>true</skip>
  </configuration>
</plugin>
Define some properties for the destination in parent/pom.xml:
<properties>
  <download-publish-path>/home/data/httpd/download.eclipse.org/[projectname]/snapshots/update-site</download-publish-path>
  <download-remote-publish-path>genie.[projectname]@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/[projectname]/snapshots/update-site</download-remote-publish-path>
</properties>
Define the Wagon transport in parent/pom.xml:
<build>
  <plugins>
    <plugin>
      ...
    </plugin>
  </plugins>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh-external</artifactId>
      <version>3.0.0</version>
    </extension>
  </extensions>
</build>
Do the actual upload during the deploy phase (be sure to include that phase in the Maven invocation).
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>wagon-maven-plugin</artifactId>
  <version>2.0.0</version>
  <executions>
    <execution>
      <id>prepare-publish</id>
      <phase>deploy</phase>
      <goals>
        <goal>sshexec</goal>
      </goals>
      <configuration>
        <url>scpexe://${download-remote-publish-path}</url>
        <commands>
          <command>rm -rf ${download-publish-path}/*</command>
        </commands>
      </configuration>
    </execution>
    <execution>
      <id>publish</id>
      <phase>deploy</phase>
      <goals>
        <goal>upload</goal>
      </goals>
      <configuration>
        <fromDir>target/repository</fromDir>
        <includes>*/**</includes>
        <url>scpexe://${download-remote-publish-path}</url>
        <toDir></toDir>
      </configuration>
    </execution>
  </executions>
</plugin>
This uses the sshexec goal to delete the old files and the upload goal to copy the new files. Note the */** pattern, which includes all directories.
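Since both executions are bound to the deploy phase, the Maven invocation has to run it. A minimal call might look like the following (the profile name is a placeholder, to be used only if you keep this configuration in a profile):
mvn clean deploy -P<your-deploy-profile>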
Note
<toDir></toDir> is relative to the path given in <url>. It does NOT affect the working directory for <commands>.
Be careful with paths and properties to ensure you upload to the correct place and do not delete the wrong thing.
Eclipse Memory Analyzer uses the above with Maven Wagon to deploy the snapshot nightly builds.
Note
According to https://gitlab.eclipse.org/eclipsefdn/helpdesk/-/issues/3452#note_1185902, version org.apache.maven.wagon:wagon-ssh-external:3.3.4 is broken; version 3.5.3 works.
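If you run into that issue, declaring the working version in the extension shown earlier should be enough (same coordinates, only the version changes):
<extension>
  <groupId>org.apache.maven.wagon</groupId>
  <artifactId>wagon-ssh-external</artifactId>
  <version>3.5.3</version>
</extension>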
pipeline {
  agent any
  stages {
    stage('stage 1') {
      ...
    }
    stage('Deploy') {
      steps {
        sshagent(['projects-storage.eclipse.org-bot-ssh']) {
          sh '''
            ssh -o BatchMode=yes genie.projectname@projects-storage.eclipse.org rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
            ssh -o BatchMode=yes genie.projectname@projects-storage.eclipse.org mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
            scp -o BatchMode=yes -r repository/target/repository/* genie.projectname@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/projectname/snapshots
          '''
        }
      }
    }
  }
}
Important
A 'jnlp' container is automatically added when a custom pod template is used, to ensure connectivity between the Jenkins master and the pod. If you want to deploy files to download.eclipse.org, you only need to specify the known-hosts volume for the jnlp container (as seen below) to avoid "Host key verification failed" errors.
pipeline {
  agent {
    kubernetes {
      label 'my-pod'
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
  - name: jnlp
    volumeMounts:
    - name: volume-known-hosts
      mountPath: /home/jenkins/.ssh
  volumes:
  - name: volume-known-hosts
    configMap:
      name: known-hosts
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh 'mvn clean verify'
        }
      }
    }
    stage('Deploy') {
      steps {
        container('jnlp') {
          sshagent(['projects-storage.eclipse.org-bot-ssh']) {
            sh '''
              ssh -o BatchMode=yes genie.projectname@projects-storage.eclipse.org rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
              ssh -o BatchMode=yes genie.projectname@projects-storage.eclipse.org mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
              scp -o BatchMode=yes -r repository/target/repository/* genie.projectname@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/projectname/snapshots
            '''
          }
        }
      }
    }
  }
}
Integrating SonarCloud into a project's CI builds is quite easy. Please open a HelpDesk issue for this and the releng/webmaster team will help you set this up.
We're setting this up with the webmaster's SonarCloud.io account, so there is no need to provide a SonarCloud token. To avoid leaking the token in the console log, we store it as secret text (Jenkins credentials).
We will configure the SonarQube Jenkins plugin to use SonarCloud to achieve a slightly better integration with Jenkins. For example, a link to SonarCloud will show up in the left menu of a job page.
For a freestyle job configuration, two things need to be done:
- Under "Build environment" enable "Use secret text(s) or file(s)", add "Secret text", name the variable "SONARCLOUD_TOKEN" and select the right credential (e.g. "Sonarcloud token").
- Either a shell build step or a Maven build step can be used to run the sonar goal with the right parameters:
mvn clean verify sonar:sonar -Dsonar.projectKey=<project-name> -Dsonar.organization=<organization> -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONARCLOUD_TOKEN}
For a pipeline job, the following needs to be added:
withCredentials([string(credentialsId: 'sonarcloud-token', variable: 'SONARCLOUD_TOKEN')]) {
  withSonarQubeEnv('SonarCloud.io') {
    sh 'mvn clean verify sonar:sonar -Dsonar.projectKey=<project-name> -Dsonar.organization=<organization> -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONARCLOUD_TOKEN}'
  }
}
Please note: <project-name> and <organization> should be replaced with the corresponding project name and organization.
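For reference, a minimal sketch of how this fragment could sit inside a declarative pipeline (the stage name is illustrative, not prescribed):
stage('SonarCloud analysis') {
  steps {
    withCredentials([string(credentialsId: 'sonarcloud-token', variable: 'SONARCLOUD_TOKEN')]) {
      withSonarQubeEnv('SonarCloud.io') {
        // SONAR_HOST_URL is provided by withSonarQubeEnv, SONARCLOUD_TOKEN by withCredentials
        sh 'mvn clean verify sonar:sonar -Dsonar.projectKey=<project-name> -Dsonar.organization=<organization> -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONARCLOUD_TOKEN}'
      }
    }
  }
}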
In general, we want to avoid handing out admin rights. In the spirit of "configuration as code", project members can submit pull requests to our Jiro GitHub repo to change the configuration of their CI instance, e.g. to add plugins. This allows better tracking of configuration changes and rollback in case of issues.
We understand that some projects heavily rely on their admin permissions. We will make sure to find an amicable solution in those cases.
The preferred way is to open a pull request in the Jiro GitHub repo. For example, to add a new plugin to the CBI instance, one would need to edit https://github.com/eclipse-cbi/jiro/blob/master/instances/technology.cbi/config.jsonnet and add the ID of the plugin to the plugins+ section. If the jenkins+/plugins+ section does not exist yet, it needs to be added as well.
Example:
{
  project+: {
    fullName: "technology.cbi",
    displayName: "Eclipse CBI",
  },
  jenkins+: {
    plugins+: [
      "jacoco",
    ],
  }
}
Before adding a plugin, please verify that it's not already listed in https://github.com/eclipse-cbi/jiro/wiki/Default-Jenkins-plugins.
The ID of a Jenkins plugin can be found here: https://plugins.jenkins.io/
If this sounds too complicated, you can also open a HelpDesk issue.
The preferred static website generator for building Eclipse project websites is Hugo. You should first put your Hugo sources in a dedicated Git repository, either at GitHub or https://gitlab.eclipse.org. If you don't have such a repository already, feel free to open a HelpDesk issue and the Eclipse IT team will create one for you.
Once your Hugo sources are in the proper repository, create a file named Jenkinsfile at the root of the repository with the following content (don't forget to specify the proper values for the PROJECT_NAME and PROJECT_BOT_NAME environment variables):
pipeline {
  agent {
    kubernetes {
      label 'hugo-agent'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hugo
  name: hugo-pod
spec:
  containers:
  - name: jnlp
    volumeMounts:
    - mountPath: /home/jenkins/.ssh
      name: volume-known-hosts
    env:
    - name: "HOME"
      value: "/home/jenkins"
  - name: hugo
    image: eclipsecbi/hugo:0.110.0
    command:
    - cat
    tty: true
  volumes:
  - configMap:
      name: known-hosts
    name: volume-known-hosts
"""
    }
  }
  environment {
    PROJECT_NAME = "<project_name>" // must be all lowercase.
    PROJECT_BOT_NAME = "<Project_name> Bot" // Capitalize the name
  }
  triggers {
    pollSCM('H/10 * * * *')
  }
  options {
    buildDiscarder(logRotator(numToKeepStr: '5'))
    checkoutToSubdirectory('hugo')
  }
  stages {
    stage('Checkout www repo') {
      steps {
        dir('www') {
          sshagent(['git.eclipse.org-bot-ssh']) {
            sh '''
              git clone ssh://genie.${PROJECT_NAME}@git.eclipse.org:29418/www.eclipse.org/${PROJECT_NAME}.git .
              git checkout ${BRANCH_NAME}
            '''
          }
        }
      }
    }
    stage('Build website (master) with Hugo') {
      when {
        branch 'master'
      }
      steps {
        container('hugo') {
          dir('hugo') {
            sh 'hugo -b https://www.eclipse.org/${PROJECT_NAME}/'
          }
        }
      }
    }
    stage('Build website (staging) with Hugo') {
      when {
        branch 'staging'
      }
      steps {
        container('hugo') {
          dir('hugo') {
            sh 'hugo -b https://staging.eclipse.org/${PROJECT_NAME}/'
          }
        }
      }
    }
    stage('Push to $env.BRANCH_NAME branch') {
      when {
        anyOf {
          branch "master"
          branch "staging"
        }
      }
      steps {
        sh 'rm -rf www/* && cp -Rvf hugo/public/* www/'
        dir('www') {
          sshagent(['git.eclipse.org-bot-ssh']) {
            sh '''
              git add -A
              if ! git diff --cached --exit-code; then
                echo "Changes have been detected, publishing to repo 'www.eclipse.org/${PROJECT_NAME}'"
                git config user.email "${PROJECT_NAME}-bot@eclipse.org"
                git config user.name "${PROJECT_BOT_NAME}"
                git commit -m "Website build ${JOB_NAME}-${BUILD_NUMBER}"
                git log --graph --abbrev-commit --date=relative -n 5
                git push origin HEAD:${BRANCH_NAME}
              else
                echo "No changes have been detected since last build, nothing to publish"
              fi
            '''
          }
        }
      }
    }
  }
}
Finally, you can create a multibranch pipeline job on your project's Jenkins instance. It will automatically be triggered on every new push to your Hugo source repository, build the website and push it to the target website repository. As mentioned above, the Eclipse Foundation website infrastructure will eventually pull the content of the latter and your website will be published and available on https://www.eclipse.dev/<project_name>.
If you don't have a Jenkins instance already, see CBI#Requesting_a_JIPP_instance for how to request one. If you need assistance with the process, please open a HelpDesk issue.
By default, Jenkins project configurations using the GitLab Branch Source plugin are set up with 'Trusted Members' as the default option.
Definition of 'trusted members': [Recommended] Discover merge requests from forked projects whose authors have Developer/Maintainer/Owner access levels in the origin project.
To accept contributions from contributors working from a forked project, the project should:
- Configure the CI project by changing 'Discover merge requests from forks' to 'Members'.
- The project lead should add the user to the list of collaborators in PMI.
- The contributor should change the forked project's visibility to public.