Many people are using containers to wrap their Spring Boot applications, and building containers is not a simple thing to do. This is a guide for developers of Spring Boot applications, and containers are not always a good abstraction for developers - they force you to learn about and think about very low level concerns - but you will on occasion be called on to create or use a container, so it pays to understand the building blocks. Here we aim to show you some of the choices you can make if you are faced with the prospect of needing to create your own container.
We will assume that you know how to create and build a basic Spring Boot application. If you don’t, go to one of the Getting Started Guides, for example the one on building a REST Service. Copy the code from there and practise with some of the ideas below.
Note: There is also a Getting Started Guide on Docker, which would also be a good starting point, but it doesn't cover the range of choices that we have here, or in as much detail.
A Spring Boot application is easy to convert into an executable JAR file. All the Getting Started Guides do this, and every app that you download from Spring Initializr has a build step to create an executable JAR. With Maven you run ./mvnw install, and with Gradle you run ./gradlew build. A basic Dockerfile to run that JAR would then look like this, at the top level of your project:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The JAR_FILE could be passed in as part of the docker command (it will be different for Maven and Gradle). E.g. for Maven:
$ docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .
and for Gradle:
$ docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .
Of course, once you have chosen a build system, you don't need the ARG - you can just hard code the jar location. E.g. for Maven:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Then we can simply build an image with
$ docker build -t myorg/myapp .
and run it like this:
$ docker run -p 8080:8080 myorg/myapp
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.2.RELEASE)
Nov 06, 2018 2:45:16 PM org.springframework.boot.StartupInfoLogger logStarting
INFO: Starting Application v0.1.0 on b8469cdc9b87 with PID 1 (/app.jar started by root in /)
Nov 06, 2018 2:45:16 PM org.springframework.boot.SpringApplication logStartupProfileInfo
...
If you want to poke around inside the image you can open a shell in it like this (the base image does not have bash):
$ docker run -ti --entrypoint /bin/sh myorg/myapp
/ # ls
app.jar dev home media proc run srv tmp var
bin etc lib mnt root sbin sys usr
/ #
Note: The alpine base container we used in the example does not have bash, so this is an ash shell. It has some of the features of bash but not all.
If you have a running container and you want to peek into it, you can do that with docker exec:
$ docker run --name myapp -ti --entrypoint /bin/sh myorg/myapp
$ docker exec -ti myapp /bin/sh
/ #
where myapp is the --name passed to the docker run command. If you didn't use --name then docker assigns a mnemonic name, which you can scrape from the output of docker ps. You could also use the sha identifier of the container instead of the name, also visible from docker ps.
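For example, one quick way to find the name (the ID and name shown here are just illustrative) is to list the running containers and then exec into the one you want:
$ docker ps --format "{{.ID}}  {{.Names}}"
b8469cdc9b87  goofy_mcclintock
$ docker exec -ti goofy_mcclintock /bin/sh
/ #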
The exec form of the Dockerfile ENTRYPOINT is used so that there is no shell wrapping the java process. The advantage is that the java process will respond to KILL signals sent to the container. In practice that means, for instance, that if you docker run your image locally, you can stop it with CTRL-C. If the command line gets a bit long you can extract it out into a shell script and COPY it into the image before you run it. Example:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY run.sh /run.sh
COPY target/*.jar app.jar
ENTRYPOINT ["run.sh"]
Remember to use exec java … to launch the java process (so it can handle the KILL signals):
run.sh
#!/bin/sh
exec java -jar /app.jar
Another interesting aspect of the entry point is whether or not you can inject environment variables into the java process at runtime. For example, suppose you want to have the option to add java command line options at runtime. You might try to do this:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","${JAVA_OPTS}","-jar","/app.jar"]
and
$ docker build -t myorg/myapp .
$ docker run -p 9000:9000 -e JAVA_OPTS=-Dserver.port=9000 myorg/myapp
This will fail because the ${} substitution requires a shell; the exec form doesn't use a shell to launch the process, so the options will not be applied. You can get round that by moving the entry point to a script (like the run.sh example above), or by explicitly creating a shell in the entry point. For example:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]
You can then launch this app with
$ docker run -p 8080:8080 -e "JAVA_OPTS=-Ddebug -Xmx128m" myorg/myapp
...
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.0.RELEASE)
...
2019-10-29 09:12:12.169 DEBUG 1 --- [ main] ConditionEvaluationReportLoggingListener :
============================
CONDITIONS EVALUATION REPORT
============================
...
(Showing parts of the full DEBUG output that is generated with -Ddebug by Spring Boot.)
Using an ENTRYPOINT with an explicit shell like the above means that you can pass environment variables into the java command, but so far you cannot also provide command line arguments to the Spring Boot application. This trick doesn't work to run the app on port 9000:
$ docker run -p 9000:9000 myorg/myapp --server.port=9000
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.0.RELEASE)
...
2019-10-29 09:20:19.718 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
The reason it didn't work is that the docker command arguments (the --server.port=9000 part) are passed to the entry point (sh), not to the java process which it launches. To fix that you need to add the command line from the CMD to the ENTRYPOINT:
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar ${0} ${@}"]
$ docker run -p 9000:9000 myorg/myapp --server.port=9000
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.0.RELEASE)
...
2019-10-29 09:30:19.751 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 9000
Note the use of ${0} for the "command" (in this case the first program argument) and ${@} for the "command arguments" (the rest of the program arguments). If you use a script for the entry point, then you don't need the ${0} (that would be /run.sh in the example above). Example:
run.sh
#!/bin/sh
exec java ${JAVA_OPTS} -jar /app.jar ${@}
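Assuming the image is rebuilt with that script as the entry point, you can then pass environment variables and program arguments together when you run it. For example (the values here are just an illustration):
$ docker run -p 9000:9000 -e "JAVA_OPTS=-Xmx128m" myorg/myapp --server.port=9000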
The docker configuration is very simple so far, and the generated image is not very efficient. The docker image has a single filesystem layer with the fat jar in it, and every change we make to the application code changes that layer, which might be 10MB or more (even as much as 50MB for some apps). We can improve on that by splitting the JAR up into multiple layers.
Notice that the base image in the example above is openjdk:8-jdk-alpine. The alpine images are smaller than the standard openjdk library images from Dockerhub. There is no official alpine image for Java 11 yet (AdoptOpenJDK had one for a while but it no longer appears on their Dockerhub page). You can also save about 20MB in the base image by using the "jre" label instead of "jdk". Not all apps work with a JRE (as opposed to a JDK), but most do, and indeed some organizations enforce a rule that every app has to, because of the risk of misuse of some of the JDK features (like compilation).
Another trick that could get you a smaller image is to use JLink, which is bundled with OpenJDK 11. JLink allows you to build a custom JRE distribution from a subset of modules in the full JDK, so you don't need a JRE or JDK in the base image. In principle this would get you a smaller total image size than using the openjdk official docker images. In practice, you won't (yet) be able to use the alpine base image with JDK 11, so your choice of base image will be limited and will probably result in a larger final image size. Also, a custom JRE in your own base image cannot be shared amongst other applications, since they would need different customizations. So you might have smaller images for all your applications, but they still take longer to start because they don't benefit from caching the JRE layer.
That last point highlights a really important concern for image builders: the goal is not necessarily always going to be to build the smallest image possible. Smaller images are generally a good idea because they take less time to upload and download, but only if none of the layers in them are already cached. Image registries are quite sophisticated these days and you can easily lose the benefit of those features by trying to be clever with the image construction. If you use common base layers, the total size of an image is less of a concern, and will probably become even less of one as the registries and platforms evolve. Having said that, it is still important, and useful, to try and optimize the layers in our application image, but the goal should always be to put the fastest changing stuff in the highest layers, and to share as many of the large, lower layers as possible with other applications.
A Spring Boot fat jar naturally has "layers" because of the way that the jar itself is packaged. If we unpack it first, it will already be divided into external and internal dependencies. To take advantage of that in the docker build, we need to unpack the jar first. For example (sticking with Maven, but the Gradle version is pretty similar):
$ mkdir target/dependency
$ (cd target/dependency; jar -xf ../*.jar)
$ docker build -t myorg/myapp .
with this Dockerfile
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
There are now 3 layers, with all the application resources in the later 2 layers. If the application dependencies don't change, then the first layer (from BOOT-INF/lib) will not change, so the build will be faster, and so will the startup of the container at runtime, as long as the base layers are already cached.
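For reference, the equivalent unpack step for a Gradle build would be something like this (a sketch, assuming the jar ends up in the default build/libs directory, and pointing the DEPENDENCY build arg at the unpacked location):
$ mkdir build/dependency
$ (cd build/dependency; jar -xf ../libs/*.jar)
$ docker build --build-arg DEPENDENCY=build/dependency -t myorg/myapp .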
Note: We used a hard-coded main application class hello.Application. This will probably be different for your application. You could parameterize it with another ARG if you wanted. You could also copy the Spring Boot fat JarLauncher into the image and use it to run the app - it would work and you wouldn't need to specify the main class, but it would be a bit slower on startup.
If you want to start your app as quickly as possible (most people do) there are some tweaks you might consider. Here are some ideas:
- Use the spring-context-indexer (link to docs). It's not going to add much for small apps, but every little helps.
- Don't use actuators if you can afford not to.
- Use Spring Boot 2.1 and Spring 5.1.
- Fix the location of the Spring Boot config file(s) with spring.config.location (command line argument or System property etc.).
- Switch off JMX - you probably don't need it in a container - with spring.jmx.enabled=false.
- Run the JVM with -noverify. Also consider -XX:TieredStopAtLevel=1 (that will slow down the JIT later at the expense of the saved startup time).
- Use the container memory hints for Java 8: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap. With Java 11 this is automatic by default.
Your app might not need a full CPU at runtime, but it will need multiple CPUs to start up as quickly as possible (at least 2, 4 are better). If you don't mind a slower startup you could throttle the CPUs down below 4. If you are forced to start with less than 4 CPUs it might help to set -Dspring.backgroundpreinitializer.ignore=true, since it prevents Spring Boot from creating a new thread that it probably won't be able to use (this works with Spring Boot 2.1.0 and above).
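Just as an illustration (assuming the JAVA_OPTS-aware entry point from earlier, with values you would tune for your own app), several of these tweaks can be combined on the docker command line:
$ docker run -p 8080:8080 --cpus=2 -e "JAVA_OPTS=-noverify -XX:TieredStopAtLevel=1 -Dspring.jmx.enabled=false -Dspring.backgroundpreinitializer.ignore=true" myorg/myapp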
The Dockerfile above assumed that the fat JAR was already built on the command line. You can also do that step in docker using a multi-stage build, copying the result from one image to another. Example, using Maven:
Dockerfile
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
The first image is labelled "build" and it is used to run Maven and build the fat jar, then unpack it. The unpacking could also be done by Maven or Gradle (this is the approach taken in the Getting Started Guide) - there really isn’t much difference, except that the build configuration would have to be edited and a plugin added.
Notice that the source code has been split into 4 layers. The later layers contain the build configuration and the source code for the app, and the earlier layers contain the build system itself (the Maven wrapper). This is a small optimization, and it also means that we don't have to copy the target directory to a docker image, even a temporary one used for the build.
Every build where the source code changes will be slow because the Maven cache has to be re-created in the first RUN section. But you have a completely standalone build that anyone can run to get your application running as long as they have docker. That can be quite useful in some environments, e.g. where you need to share your code with people who don't know Java.
Docker 18.06 comes with some "experimental" features, including a way to cache build dependencies. To switch them on you need a flag in the daemon (dockerd) and also an environment variable when you run the client, and then you can add a magic first line to your Dockerfile:
Dockerfile
# syntax=docker/dockerfile:experimental
and the RUN directive then accepts a new flag --mount. Here's a full example:
Dockerfile
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
Then run it:
$ DOCKER_BUILDKIT=1 docker build -t myorg/myapp .
...
=> /bin/sh -c ./mvnw install -DskipTests 5.7s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:3defa...
=> => naming to docker.io/myorg/myapp
With the experimental features you get a different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, once the cache is warm.
The Gradle version of this Dockerfile configuration is very similar:
Dockerfile
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine AS build
WORKDIR /workspace/app
COPY . /workspace/app
RUN --mount=type=cache,target=/root/.gradle ./gradlew clean build
RUN mkdir -p build/dependency && (cd build/dependency; jar -xf ../libs/*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/build/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
Note: While these features are in the experimental phase, the options for switching buildkit on and off depend on the version of docker that you are using. Check the documentation for the version you have (the example above is correct for docker 18.06).
Just as in classic VM-deployments, processes should not be run with root permissions. Instead the image should contain a non-root user that runs the app.
In a Dockerfile, this can be achieved by adding another layer that adds a (system) user and group, then setting it as the current user (instead of the default, root):
Dockerfile
FROM openjdk:8-jdk-alpine
RUN addgroup -S demo && adduser -S demo -G demo
USER demo
...
In case someone manages to break out of your app and run system commands inside the container, this will limit their capabilities (principle of least privilege).
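If you want to double check which user the container runs as, one way (just an illustration, assuming the image built from the snippet above) is to override the entry point:
$ docker run --rm --entrypoint id myorg/myapp
which should report the demo user and group rather than root.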
Note: Some of the further Dockerfile commands only work as root, so you may have to move the USER command further down (e.g. if you plan to install more packages into the container, which only works as root).
Note: Other approaches, not using a Dockerfile, might be more amenable. For instance, in the buildpack approach described later, most implementations will use a non-root user by default.
Another consideration is that the full JDK is probably not needed by most apps at runtime, so we can safely switch to the JRE base image, once we have a multi-stage build. So in the multi-stage build above we can use
Dockerfile
FROM openjdk:8-jre-alpine
...
for the final, runnable image. As mentioned above, this will also save a bit of space in the image, taken up by tools that are not needed at runtime.
If you don't want to call docker directly in your build, there is quite a rich set of plugins for Maven and Gradle that can do that work for you. Here are just a few.
With Spring Boot 2.3 you have the option of building an image from Maven or Gradle directly with Spring Boot. As long as you are already building a Spring Boot jar file, you only need to call the plugin directly. With Maven:
$ ./mvnw spring-boot:build-image
and with Gradle
$ ./gradlew bootBuildImage
It uses the local docker daemon (which therefore must be installed) but doesn't require a Dockerfile. The result is an image called docker.io/<group>/<artifact>:latest by default. You can modify the image name in Maven using
<project>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <image>
                        <name>myorg/demo</name>
                    </image>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
and in Gradle using
bootBuildImage {
    imageName = "myorg/demo"
}
The image is built using Cloud Native Buildpacks, where the default builder is optimized for a Spring Boot application (you can customize it but the defaults are useful). The image is layered efficiently, like in the examples above. It also uses the CF memory calculator to size the JVM at runtime based on the container resources available, so when you run the image you will see the memory calculator reporting its results:
$ docker run -p 8080:8080 myorg/demo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=86557K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx450018K (Head Room: 0%, Loaded Class Count: 12868, Thread Count: 250, Total Memory: 1073741824)
...
The Spotify Maven Plugin is a popular choice. It requires the application developer to write a Dockerfile and then runs docker for you, just as if you were doing it on the command line. There are some configuration options for the docker image tag and other stuff, but it keeps the docker knowledge in your application concentrated in a Dockerfile, which many people like.
For really basic usage it will work out of the box with no extra configuration:
$ mvn com.spotify:dockerfile-maven-plugin:build
...
[INFO] Building Docker context /home/dsyer/dev/demo/workspace/myapp
[INFO]
[INFO] Image will be built without a name
[INFO]
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.630 s
[INFO] Finished at: 2018-11-06T16:03:16+00:00
[INFO] Final Memory: 26M/595M
[INFO] ------------------------------------------------------------------------
That builds an anonymous docker image. We can tag it with docker on the command line now, or use Maven configuration to set it as the repository. Example (without changing the pom.xml):
$ mvn com.spotify:dockerfile-maven-plugin:build -Ddockerfile.repository=myorg/myapp
Or in the pom.xml:
pom.xml
<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.4.8</version>
            <configuration>
                <repository>myorg/${project.artifactId}</repository>
            </configuration>
        </plugin>
    </plugins>
</build>
The Palantir Gradle Plugin works with a Dockerfile, and it can also generate a Dockerfile for you; it then runs docker as if you were running it on the command line. First you need to import the plugin into your build.gradle:
build.gradle
buildscript {
    ...
    dependencies {
        ...
        classpath('gradle.plugin.com.palantir.gradle.docker:gradle-docker:0.13.0')
    }
}
and then finally you apply the plugin and call its task:
build.gradle
apply plugin: 'com.palantir.docker'

group = 'myorg'

bootJar {
    baseName = 'myapp'
    version = '0.1.0'
}

task unpack(type: Copy) {
    dependsOn bootJar
    from(zipTree(tasks.bootJar.outputs.files.singleFile))
    into("build/dependency")
}

docker {
    name "${project.group}/${bootJar.baseName}"
    copySpec.from(tasks.unpack.outputs).into("dependency")
    buildArgs(['DEPENDENCY': "dependency"])
}
In this example we have chosen to unpack the Spring Boot fat jar in a specific location in the build directory, which is the root for the docker build. Then the multi-layer (not multi-stage) Dockerfile from above will work.
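With that configuration in place you can then build the image by running the plugin's docker task (task names may vary slightly between plugin versions):
$ ./gradlew docker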
Spring Boot build plugins for Maven and Gradle can be used to create container images. The plugins create an OCI image (the same format as one created by docker build) using Cloud Native Buildpacks. You don't need a Dockerfile, but you do need a docker daemon, either locally (which is what you use when you build with docker) or remotely via the DOCKER_HOST environment variable.
Example with Maven (without changing the pom.xml):
$ ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp
and with Gradle:
$ ./gradlew bootBuildImage --imageName=myorg/myapp
The first build might take a long time because it has to download some container images, and the JDK, but subsequent builds will be fast.
If you run the image:
$ docker run -p 8080:8080 -t myorg/myapp
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=86381K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx450194K (Head Room: 0%, Loaded Class Count: 12837, Thread Count: 250, Total Memory: 1073741824)
....
2015-03-31 13:25:48.035 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2015-03-31 13:25:48.037 INFO 1 --- [ main] hello.Application
you will see it start up as normal. You might also notice that the JVM memory requirements were computed and set as command line options inside the container. This is the same memory calculation that has been in use in Cloud Foundry build packs for many years. It represents significant research into the best choices for a range of JVM applications, including but not limited to Spring Boot applications, and the results are usually much better than the default setting from the JVM. You can customize the command line options and override the memory calculator using environment variables.
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don't need docker to run it - it builds the image using the same standard output as you get from docker build but doesn't use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don't need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).
Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Example with Maven (without changing the pom.xml):
$ mvn com.google.cloud.tools:jib-maven-plugin:build -Dimage=myorg/myapp
To run the above command you will need to have permission to push to Dockerhub under the myorg repository prefix. If you have authenticated with docker on the command line, that will work from your local ~/.docker configuration. You can also set up a Maven "server" authentication in your ~/.m2/settings.xml (the id of the repository is significant):
settings.xml
<server>
    <id>registry.hub.docker.com</id>
    <username>myorg</username>
    <password>...</password>
</server>
There are other options, e.g. you can build locally against a docker daemon (like running docker on the command line), using the dockerBuild goal instead of build. Other container registries are also supported, and for each one you will need to set up local authentication via docker or Maven settings.
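For example, to build against a local docker daemon with the dockerBuild goal (same illustrative image name as above):
$ mvn com.google.cloud.tools:jib-maven-plugin:dockerBuild -Dimage=myorg/myapp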
The gradle plugin has similar features, once you have it in your build.gradle, e.g.
build.gradle
plugins {
    ...
    id 'com.google.cloud.tools.jib' version '1.8.0'
}
or in the older style used in the Getting Started Guides:
build.gradle
buildscript {
    repositories {
        maven {
            url "https://plugins.gradle.org/m2/"
        }
        mavenCentral()
    }
    dependencies {
        classpath('org.springframework.boot:spring-boot-gradle-plugin:2.2.1.RELEASE')
        classpath('com.google.cloud.tools.jib:com.google.cloud.tools.jib.gradle.plugin:1.8.0')
    }
}
and then you can build an image with
$ ./gradlew jib --image=myorg/myapp
As with the Maven build, if you have authenticated with docker on the command line, the image push will authenticate from your local ~/.docker configuration.
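The Gradle plugin also has a task for building against a local docker daemon instead of pushing, along the same lines as the Maven dockerBuild goal (a sketch, with the same illustrative image name):
$ ./gradlew jibDockerBuild --image=myorg/myapp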
Automation is part of every application lifecycle these days (or should be). The tools that people use to do the automation tend to be quite good at just invoking the build system from the source code. So if that gets you a docker image, and the environment in the build agents is sufficiently aligned with the developer's own environment, that might be good enough. Authenticating to the docker registry is likely to be the biggest challenge, but there are features in all the automation tools to help with that.
However, sometimes it is better to leave container creation completely to an automation layer, in which case the user’s code might not need to be polluted. Container creation is tricky, and developers sometimes don’t really care about it. If the user code is cleaner there is more chance that a different tool can "do the right thing", applying security fixes, optimizing caches etc. There are multiple options for automation and they will all come with some features related to containers these days. We are just going to look at a couple.
Concourse is a pipeline-based automation platform that can be used for CI and CD. It is heavily used inside Pivotal and the main authors of the project work there. Everything in Concourse is stateless and everything runs in a container, except the CLI. Since running containers is the main order of business for the automation pipelines, creating containers is well supported. The Docker Image Resource is responsible for keeping the output state of your build up to date, if it is a container image.
Here's an example pipeline that builds a docker image for the sample above, assuming it is in github at myorg/myapp, has a Dockerfile at the root, and has a build task declaration in src/main/ci/build.yml:
resources:
- name: myapp
  type: git
  source:
    uri: https://github.com/myorg/myapp.git
- name: myapp-image
  type: docker-image
  source:
    email: {{docker-hub-email}}
    username: {{docker-hub-username}}
    password: {{docker-hub-password}}
    repository: myorg/myapp
jobs:
- name: main
  plan:
  - task: build
    file: myapp/src/main/ci/build.yml
  - put: myapp-image
    params:
      build: myapp
The structure of a pipeline is very declarative: you define "resources" (which are either input or output or both), and "jobs" (which use and apply actions to resources). If any of the input resources changes a new build is triggered. If any of the output resources changes during a job, then it is updated.
The pipeline could be defined in a different place than the application source code. And for a generic build setup the task declarations could be centralized or externalized as well. This allows some separation of concerns between development and automation, if that’s the way you roll.
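To upload a pipeline like this to a Concourse installation you would use the fly CLI, for example (the target name and config file name here are placeholders):
$ fly -t mytarget set-pipeline -p myapp -c pipeline.yml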
Jenkins is another popular automation server. It has a huge range of features, but one that is the closest to the other automation samples here is the pipeline feature. Here's a Jenkinsfile that builds a Spring Boot project with Maven and then uses a Dockerfile to build an image and push it to a repository:
Jenkinsfile
node {
    checkout scm
    sh './mvnw -B -DskipTests clean package'
    docker.build("myorg/myapp").push()
}
For a (realistic) docker repository that needs authentication in the build server, you can add credentials to the docker object above using docker.withCredentials(…).
Note: The Spring Boot Maven and Gradle plugins use buildpacks in exactly the same way that the pack CLI does in the examples below. The main difference is that the plugins use docker to run the builds, whereas pack doesn't need to. The resulting images are identical given the same inputs.
Cloud Foundry has used containers internally for many years now, and part of the technology used to transform user code into containers is Build Packs, an idea originally borrowed from Heroku. The current generation of buildpacks (v2) generates generic binary output that is assembled into a container by the platform. The new generation of buildpacks (v3) is a collaboration between Heroku and other companies including Pivotal, and it builds container images directly and explicitly. This is very interesting for developers and operators. Developers don’t need to care so much about the details of how to build a container, but they can easily create one if they need to. Buildpacks also have lots of features for caching build results and dependencies, so often a buildpack will run much quicker than a native docker build. Operators can scan the containers to audit their contents and transform them to patch them for security updates. And you can run the buildpacks locally (e.g. on a developer machine, or in a CI service), or in a platform like Cloud Foundry.
The output from a buildpack lifecycle is a container image, but you don't need docker or a Dockerfile, so it's CI and automation friendly. The filesystem layers in the output image are controlled by the buildpack, and typically many optimizations will be made without the developer having to know or care about them. There is also an Application Binary Interface between the lower level layers, like the base image containing the operating system, and the upper layers, containing middleware and language specific dependencies. This makes it possible for a platform, like Cloud Foundry, to patch lower layers if there are security updates without affecting the integrity and functionality of the application.
To give you an idea of the features of a buildpack, here is an example using the Pack CLI from the command line (it would work with the sample app we have been using in this guide; there is no need for a Dockerfile or any special build configuration):
$ pack build myorg/myapp --builder=cloudfoundry/cnb:bionic --path=.
2018/11/07 09:54:48 Pulling builder image 'cloudfoundry/cnb:bionic' (use --no-pull flag to skip this step)
2018/11/07 09:54:49 Selected run image 'packs/run' from stack 'io.buildpacks.stacks.bionic'
2018/11/07 09:54:49 Pulling run image 'packs/run' (use --no-pull flag to skip this step)
*** DETECTING:
2018/11/07 09:54:52 Group: Cloud Foundry OpenJDK Buildpack: pass | Cloud Foundry Build System Buildpack: pass | Cloud Foundry JVM Application Buildpack: pass
*** ANALYZING: Reading information from previous image for possible re-use
*** BUILDING:
-----> Cloud Foundry OpenJDK Buildpack 1.0.0-BUILD-SNAPSHOT
-----> OpenJDK JDK 1.8.192: Reusing cached dependency
-----> OpenJDK JRE 1.8.192: Reusing cached launch layer
-----> Cloud Foundry Build System Buildpack 1.0.0-BUILD-SNAPSHOT
-----> Using Maven wrapper
Linking Maven Cache to /home/pack/.m2
-----> Building application
Running /workspace/app/mvnw -Dmaven.test.skip=true package
...
---> Running in e6c4a94240c2
---> 4f3a96a4f38c
---> 4f3a96a4f38c
Successfully built 4f3a96a4f38c
Successfully tagged myorg/myapp:latest
$ docker run -p 8080:8080 myorg/myapp
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.0.RELEASE)
2018-11-07 09:41:06.390 INFO 1 --- [main] hello.Application: Starting Application on 1989fb9a00a4 with PID 1 (/workspace/app/BOOT-INF/classes started by pack in /workspace/app)
...
The --builder is a docker image that runs the buildpack lifecycle - typically it would be a shared resource for all developers, or all developers on a single platform. You can set the default builder on the command line (creates a file in ~/.pack) and then omit that flag from subsequent builds.
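For example, with early versions of the pack CLI the command looked like this (check pack help for the exact syntax in your version):
$ pack set-default-builder cloudfoundry/cnb:bionic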
Note: The cloudfoundry/cnb:bionic builder also knows how to build an image from an executable jar file, so you can build using mvnw first and then point the --path to the jar file for the same result.
Another new project in the container and platform space is Knative. Knative is a lot of things, but if you are not familiar with it you can think of it as a building block for building a serverless platform. It is built on Kubernetes, so ultimately it consumes container images and turns them into applications or "services" on the platform. One of the main features it has, though, is the ability to consume source code and build the container for you, making it more developer and operator friendly. Knative Build is the component that does this and is itself a flexible platform for transforming user code into containers - you can do it in pretty much any way you like. Some templates are provided with common patterns like Maven and Gradle builds, and multi-stage docker builds using Kaniko. There is also a template that uses Buildpacks, which is very interesting for us, since buildpacks have always had good support for Spring Boot.
This guide has presented a lot of options for building container images for Spring Boot applications. All of them are completely valid choices, and it is now up to you to decide which one you need. Your first question should be "do I really need to build a container image?" If the answer is "yes" then your choices will likely be driven by efficiency and cacheability, and by separation of concerns. Do you want to insulate developers from needing to know too much about how container images are created? Do you want to make developers responsible for updating images when operating system and middleware vulnerabilities need to be patched? Or maybe developers need complete control over the whole process and they have all the tools and knowledge they need.