Home
"...good designs fit our needs so well that the design is invisible"
- Donald Norman, The Design of Everyday Things [1]
- A deployment pipeline is an automated version of a team's software development process - that is, the steps that transform code from your developers into working software in the hands of your customers. [2]
- More concretely, the steps for a team's specific pipeline might:
- validate code changes from developers
- then deploy the same code changes to a test environment
- then deploy the same code changes to a user-acceptance environment
- then deploy the same code changes to production so our customers can use it
Delivering code changes to production through a pipeline is now an everyday thing. Because pipelines are automated, team members merely have to push a button to initiate the process, which then completes within minutes. Deployment pipelines have given teams an unobstructed runway to deliver valuable software into the hands of their customers safely, reliably, quickly, and cheaply.
The efficiency boost provided by pipelines has been an incredible boon. As we continue to add automation to legacy applications, and continue to build new applications and APIs, we find ourselves writing new pipelines more often. But writing a pipeline from scratch is not trivial. Moreover, teams continue to build more reliability, safety, and speed into existing pipelines. These changes are also non-trivial, and often require a higher level of effort than we'd like. This is the very reason that embedded Release Engineers are highly sought after on teams: their deep understanding of new deployment technologies and practices can be leveraged to make complicated changes more quickly.
We've spent a lot of time thinking about how we might make writing and maintaining pipelines more trivial. How could we write pipelines in a way that makes the complex implementation invisible, so that what remains directly reflects our goal of delivering valuable software to our customers?
We experimented with this idea and came up with a reusable solution for terraform pipelines, available here: https://github.com/manheim/terraform-pipeline. Below, I'll talk through the specifics of how the terraform-pipeline library works, then discuss how this idea could be extended to other deployment tools and technology stacks.
Because we use Jenkins [3] and Jenkinsfiles [4] as our automation platform, we have the ability to import external libraries. terraform-pipeline was written as an external library, so we can import it with a single line:
@Library(['terraform-pipeline@v5.0']) _
Some scaffolding is still required, but we tried to keep it to a minimum. The magic under-the-hood is configured with:
Jenkinsfile.init(this)
Once done, our pipeline can now start to reflect our goal: "validate the code, deploy to a test environment, deploy to a user-acceptance environment, deploy to a production environment". Four lines of code reflect parts of that statement:
def validate = new TerraformValidateStage()
def deployQa = new TerraformEnvironmentStage('qa')
def deployUat = new TerraformEnvironmentStage('uat')
def deployProd = new TerraformEnvironmentStage('prod')
Above, we're using the terraform-pipeline library to create a Validate stage, which will verify code changes using terraform validate. Next, we create a stage that will deploy to our test environment; on my team we call our test environment 'qa' by convention. Deployments are done using the terraform plan and terraform apply commands. The next stage will deploy to a user-acceptance environment named 'uat' (User Acceptance Testing). Lastly, we create a stage that will deploy to our production environment, named 'prod'.
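To make the stages less magical, here's a rough sketch of what a single environment stage amounts to conceptually. This is an illustration of the underlying terraform commands, not the library's actual implementation - the stage names and flags below are assumptions for the sake of the example:

```groovy
// Conceptual sketch only -- TerraformEnvironmentStage's real implementation
// differs. The core idea: each environment stage runs terraform plan,
// then terraform apply, against the named environment.
node {
    stage('qa-plan') {
        sh 'terraform init'
        sh 'terraform plan -out=qa.tfplan'
    }
    stage('qa-apply') {
        sh 'terraform apply qa.tfplan'
    }
}
```

The library's value is that this boilerplate (and the error handling, approvals, and plugin hooks around it) stays invisible, leaving only the environment name in your Jenkinsfile.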
Other teams may have more or fewer environments, or they may use different naming conventions for their environments. The number of environments and their names are completely irrelevant to terraform-pipeline. Users of the terraform-pipeline library should feel free to create and name the environments that best reflect their own software development process.
At this point, we've defined our environments, but remember that our process should follow a specific order - we would never want to deploy to production without first validating our changes through a test environment. So our next few lines of code will ensure just that:
validate.then(deployQa)
.then(deployUat)
.then(deployProd)
Above, we express the order of our team's software development process: "validate the code changes, then deploy to qa, then deploy to uat, then deploy to prod", referencing the stages that we defined earlier. If another team's software development process dictates a different deployment order, the code above can simply be reordered to reflect that difference.
One last bit of scaffolding is necessary to make this all work:
.build()
From beginning to end, your complete pipeline code should now look something like this (10 lines):
@Library(['terraform-pipeline@v5.0']) _
Jenkinsfile.init(this)
def validate = new TerraformValidateStage()
def deployQa = new TerraformEnvironmentStage('qa')
def deployUat = new TerraformEnvironmentStage('uat')
def deployProd = new TerraformEnvironmentStage('prod')
validate.then(deployQa)
.then(deployUat)
.then(deployProd)
.build()
The code above will generate a fully functional (if bare-bones) Jenkins pipeline using terraform.
Jenkins offers an endless list of plugins that extend its functionality, and you'll likely want to take advantage of many of these features. terraform-pipeline was also written with a plugin architecture in mind, allowing you to extend its functionality as you see fit [5]. Some common functionality has already been predefined within the terraform-pipeline library itself. Predefined plugins include:
- AnsiColorPlugin - adds color to terraform plan and terraform apply output
- WithAwsPlugin - assumes IAM roles to perform deployments
- ParameterStoreBuildWrapper - manages configuration with AWS Parameter Store
Plugins can be enabled simply by calling their init() method, e.g.:
AnsiColorPlugin.init()
However, the intent of terraform-pipeline is to minimize distraction and keep our pipelines reflective of our goal - so directly configuring plugins in your pipeline this way is discouraged. Instead, pipeline plugins can be configured and reused easily across your projects by modifying a few of your existing lines of code [6].
@Library(['terraform-pipeline@v5.0', 'terraform-pipeline-customizations@v1.0']) _
Jenkinsfile.init(this, Customizations)
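The Customizations class referenced above lives in your second shared library. As a sketch (the class and method names here are illustrative assumptions - see the library's documentation [6] for the exact contract), it can be as simple as a class that enables your chosen plugins in one shared place:

```groovy
// Illustrative sketch of a Customizations class -- structure and names are
// assumptions, not the library's documented API. The idea: configure your
// plugins once, then reuse that configuration across every project's pipeline.
class Customizations {
    public static void init() {
        AnsiColorPlugin.init()  // colorize terraform plan/apply output
        WithAwsPlugin.init()    // assume IAM roles during deployments
    }
}
```

With this in place, every Jenkinsfile that passes Customizations to Jenkinsfile.init() picks up the same plugin configuration, without repeating it per project.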
A host of other improvements are planned for the library - like all of the software that our team writes, we'll continue to iterate on this solution and prioritize improvements in our backlog. These improvements are best tracked through the Issues link [7] of the library itself. Feel free to comment on existing Issues, or create new Issues for features that you might find helpful for your team. Feel free to fork the library to explore ideas of your own. Implementations for any of these Issues will always be welcome in the form of a Pull Request.
There are plenty of applications that use deployment tools other than terraform. The ideas applied to this library could just as easily be extended to create new libraries for any other deployment tool or technology stack. Imagine an elastic-beanstalk library, which provides a similar function to terraform, but using ElasticBeanstalk [8] instead. Imagine a maven library, which provides steps common to a java project (e.g., mvn clean, mvn test, mvn package, mvn install). The same could be said for a node-js library, ruby library, scala library, or any other language.
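For instance, a hypothetical maven-pipeline library (no such library exists in this article; the library and stage names below are invented purely for illustration) could mirror the exact same pattern:

```groovy
// Hypothetical maven-pipeline Jenkinsfile, mirroring terraform-pipeline's
// pattern. All library and stage names here are invented for illustration.
@Library(['maven-pipeline@v1.0']) _

Jenkinsfile.init(this)

def test    = new MavenTestStage()     // mvn test
def pack    = new MavenPackageStage()  // mvn package
def install = new MavenInstallStage()  // mvn install

test.then(pack)
    .then(install)
    .build()
```

The pipeline again reads as a direct statement of the team's process: test, then package, then install.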
Imagine pulling in stages and plugins from these various libraries, and composing the stages and plugins to create a pipeline that addresses the needs of your particular team, application, or technology stack.
The ideas presented here are not Jenkins-specific, and if our goal is to deliver valuable software into the hands of our customers, Jenkinsfiles are not the only way to do that. As teams explore other platforms for automation, I think it's worth spending some time thinking about how well those platforms let us achieve and express our goal. Just as we've built terraform-pipeline on top of Jenkinsfile to make it better reflect our goal, how might we accomplish the same thing with other automation platforms?
I'm eager to reiterate a central idea: our tools should reflect, as clearly as possible, the goal that we're trying to achieve. By applying this idea to the development of terraform-pipeline, we were able to make it cheap and easy to write new pipelines from scratch, and enable ourselves to easily modify and maintain those pipelines in the future.
The take-away that I do NOT want my readers to have: "pipelines are now so easy that developers do not have to think about them" - in fact, that would be the complete opposite of my intent. Instead, we've removed the unnecessarily complex scaffolding around our old pipelines that was causing a lot of distraction, so that all that's left better reflects our goal: transform code from your developers into working software for your customers. Developers still need to think about this because this is the very reason they write code each day.
Developers should be opinionated about their software development process, and should actively help shape and curate the steps taken to achieve that goal. Teams will not necessarily follow the same steps in this process, and for good reason - there's no such thing as a "right answer" for every team and every application. Moreover, teams are expected to continuously improve over time, and that improvement always entails change. The ideas and the library presented here should help enable teams to alter, improve, and experiment with their pipelines as easily and as readily as they do with their software development process, all in the service of our goal of providing valuable software to our customers.
1. https://en.wikipedia.org/wiki/The_Design_of_Everyday_Things
2. https://en.wikipedia.org/wiki/Continuous_delivery#Deployment_pipeline
3. https://jenkins.io/
4. https://jenkins.io/doc/book/pipeline/jenkinsfile/
5. https://github.com/manheim/terraform-pipeline#write-your-own-plugin
6. https://github.com/manheim/terraform-pipeline#drying-your-plugin-configuration
7. https://github.com/manheim/terraform-pipeline/issues
8. https://aws.amazon.com/elasticbeanstalk/