Spring Cloud Pipelines

Documentation Authors: Marcin Grzejszczak

Spring, Spring Boot and Spring Cloud are tools that allow developers to speed up the creation of new business features. It’s common knowledge, however, that a feature is only valuable once it’s in production. That’s why companies spend a lot of time and resources on building their own deployment pipelines.

This project tries to solve the following problems:

  • Creation of a common deployment pipeline

  • Propagation of good testing & deployment practices

  • Reduction of the time required to deploy a feature to production

A common way of running, configuring and deploying applications lowers support costs and time needed by new developers to blend in when they change projects.

The opinionated pipeline

This repo contains an opinionated pipeline to deploy a Spring Cloud based microservice. We’ve taken the following opinionated decisions:

  • usage of Spring Cloud, Spring Cloud Contract Stub Runner and Spring Cloud Eureka

  • application deployment to Cloud Foundry

  • For Maven:

    • usage of Maven Wrapper

    • artifacts deployment by ./mvnw clean deploy

    • stubrunner.ids property to retrieve list of collaborators for which stubs should be downloaded

    • running smoke tests on a deployed app via the smoke Maven profile

    • running end to end tests on a deployed app via the e2e Maven profile

  • For Gradle (in the github-analytics application check the gradle/pipeline.gradle file):

    • usage of the Gradle Wrapper

    • deploy task for artifacts deployment

    • running smoke tests on a deployed app via the smoke task

    • running end to end tests on a deployed app via the e2e task

    • groupId task to retrieve group id

    • artifactId task to retrieve artifact id

    • currentVersion task to retrieve the current version

    • stubIds task to retrieve list of collaborators for which stubs should be downloaded

This is the initial approach and it can easily be changed, since all the core deployment logic is implemented in the form of Bash scripts.
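For illustration, here is a rough sketch of how the pipeline calls into those build-tool hooks (the exact invocations live in the Bash scripts and may differ):

# Maven flavour
./mvnw clean deploy                # build, test and upload the artifacts
./mvnw clean install -Psmoke       # smoke tests against a deployed app
./mvnw clean install -Pe2e         # end to end tests against a deployed app

# Gradle flavour (tasks defined in gradle/pipeline.gradle)
./gradlew clean deploy             # artifact deployment
./gradlew smoke                    # smoke tests
./gradlew e2e                      # end to end tests
./gradlew groupId artifactId currentVersion stubIds -q   # metadata used by the scripts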

You can go through the setup of the demo environment for either Concourse or Jenkins.

Each of these setups is available under the concourse and jenkins folders respectively.

Introduction

In the following section we describe in more depth the rationale behind the presented opinionated pipeline. We go through each deployment step and describe it in detail.

Project setup

.
├── common
├── concourse
├── docs
└── jenkins

In the common folder you can find all the Bash scripts containing the pipeline logic. These scripts are reused by both Concourse and Jenkins pipelines.

In the concourse folder you can find all the necessary scripts and setup to run the Concourse demo.

In the docs section you have the whole documentation of the project.

In the jenkins folder you can find all the necessary scripts and setup to run the Jenkins demo.

How to use it?

This repository can be treated as a template for your pipeline. We provide an opinionated implementation that you can alter to suit your needs. The best approach to using it for your production projects is to download the Spring Cloud Pipelines repository as a ZIP, init a Git project in that folder and modify it as you wish.

The example below uses the code from the master branch. You can also use a tag, for example 1.0.0.M2; then the URL would look like this: https://github.com/spring-cloud/spring-cloud-pipelines/archive/1.0.0.M2.zip .

curl -LOk https://github.com/spring-cloud/spring-cloud-pipelines/archive/master.zip
unzip master.zip
cd spring-cloud-pipelines-master
git init
# modify the pipelines to suit your needs
git add .
git commit -m "Initial commit"
git remote add origin ${YOUR_REPOSITORY_URL}
git push origin master
Note
Why aren’t you simply cloning the repo? This is meant to be a seed for building new, versioned pipelines for you. You don’t want to have all of our history dragged along with you, do you?

The flow

Let’s take a look at the flow of the opinionated pipeline

flow concourse
Figure 1. Flow in Concourse
flow
Figure 2. Flow in Jenkins

We’ll first describe the overall concept behind the flow and then we’ll split it into pieces and describe every piece independently.

Environments

So that we’re on the same page, let’s define some common vocabulary. We distinguish 4 typical environments in terms of running the pipeline.

  • build

  • test

  • stage

  • prod

The build environment is a machine where the building of the application takes place. It’s a CI / CD tool worker.

Test is an environment where you can deploy an application in order to test it. It doesn’t resemble production and we can’t be sure of its state (which applications are deployed there and in which versions). It can be used by multiple teams at the same time.

Stage is an environment that does resemble production. Most likely applications are deployed there in versions that correspond to those deployed to production. Typically, databases there are filled with (obfuscated) production data. Most often this environment is a single one, shared between many teams. In other words, in order to run performance or user acceptance tests, you have to block the environment and wait until it is free.

Prod is a production environment where we want our tested applications to be deployed for our customers.

Tests

Unit tests - tests that are executed on the application during the build phase. No integrations with databases / HTTP server stubs etc. take place. Generally speaking, your application should have plenty of these to get fast feedback on whether your features work fine.

Integration tests - tests that are executed on the built application during the build phase. Integrations with in-memory databases / HTTP server stubs take place. According to the test pyramid, in most cases you should not have too many tests of this kind.

Smoke tests - tests that are executed on a deployed application. The concept of these tests is to check that the crucial parts of your application are working properly. If you have 100 features in your application but you gain most of your money from, say, 5 of them, then you could write smoke tests for those 5 features. As you can see, we’re talking about smoke tests of an application, not of the whole system. In our understanding, inside the opinionated pipeline these tests are executed against an application that is surrounded by stubs.

End to end tests - tests that are executed on a system composed of multiple applications. The idea of these tests is to check whether the tested feature works when the whole system is set up. Because it takes a lot of time, effort and resources to maintain such an environment, and because those tests are often unreliable (due to many moving pieces like the network, databases, etc.), you should have only a handful of them, and only for the critical parts of your business. Since production is the only true verifier of whether your feature works, some companies skip these tests entirely and move directly to deployment to production. When your system has KPI monitoring and alerting, you can react quickly when your deployed application misbehaves.

Performance testing - tests executed on an application or a set of applications to check whether your system can handle a big load of input. In the case of our opinionated pipeline, these tests can be executed either on test (against a stubbed environment) or on stage (against the whole system).

Testing against stubs

Before we go into details of the flow let’s take a look at the following example.

monolith
Figure 3. Two monolithic applications deployed for end to end testing

When you have only a handful of applications, performing end to end testing is beneficial. From the operations perspective it’s maintainable for a finite number of deployed instances. From the developer's perspective it’s nice to verify the whole flow in the system for a feature.

In the case of microservices, the scale starts to be a problem:

many microservices
Figure 4. Many microservices deployed in different versions

The questions arise:

  • Should I queue deployments of microservices on one testing environment or should I have an environment per microservice?

    • If I queue deployments, people will have to wait for hours to have their tests run - that can be a problem

  • To remove that issue I can have an environment per microservice

    • Who will pay the bills (imagine 100 microservices, each having its own environment)?

    • Who will support each of those environments?

    • Should we spawn a new environment each time we execute a new pipeline and then tear it down, or should we have them up and running the whole day?

  • In which versions should I deploy the dependent microservices - development or production versions?

    • If I have development versions then I can test my application against a feature that is not yet on production. That can lead to exceptions on production

    • If I test against production versions then I’ll never be able to test against a feature under development anytime before deployment to production.

One of the possibilities of tackling these problems is to…​ not do end to end tests.

stubbed dependencies
Figure 5. Execute tests on a deployed microservice on stubbed dependencies

If we stub out all the dependencies of our application then most of the problems presented above disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way the testing setup looks like this:

stubbed dependencies
Figure 6. We’re testing microservices in isolation

Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):

  • No need to deploy dependent services

  • The stubs used for the tests run on a deployed microservice are the same as those used during integration tests

  • Those stubs have been tested against the application that produces them (check Spring Cloud Contract for more information)

  • We don’t have many slow tests running on a deployed application - thus the pipeline gets executed much faster

  • We don’t have to queue deployments - we’re testing in isolation thus pipelines don’t interfere with each other

  • We don’t have to spawn virtual machines each time for deployment purposes

It brings however the following challenges:

  • No end to end tests before production - you don’t have the full certainty that a feature is working

  • The first time the applications talk to each other in a real way will be on production

Like every solution it has its benefits and drawbacks. The opinionated pipeline allows you to configure whether you want to follow this flow or not.

General view

The general view behind this deployment pipeline is to:

  • test the application in isolation

  • test the backwards compatibility of the application in order to roll it back if necessary

  • allow testing of the packaged app in a deployed environment

  • allow user acceptance tests / performance tests in a deployed environment

  • allow deployment to production

Obviously the pipeline could have been split into more steps, but it seems that all of the aforementioned actions fit nicely into our opinionated proposal.

Opinionated implementation

For demo purposes we provide a Docker Compose setup with Artifactory and Concourse / Jenkins tools. Regardless of the chosen CD application, for the pipeline to pass you need a Cloud Foundry instance (for example Pivotal Web Services or PCF Dev) and the infrastructure applications uploaded to the JAR-hosting repository (for the demo we provide Artifactory). The infrastructure applications are Eureka for Service Discovery and Stub Runner Boot for running Spring Cloud Contract stubs.

Tip
In the demos we show you how to first build the github-webhook project. That’s because github-analytics needs the stubs of github-webhook to pass its tests. Below you’ll find references to the github-analytics project, since it contains the more interesting pieces as far as testing is concerned.
Build
build
Figure 7. Build and upload artifacts

In this step we’re generating a version of the pipeline and then we’re publishing 2 artifacts to Artifactory / Nexus:

  • a fat jar of the application

  • a Spring Cloud Contract jar containing stubs of the application

During this phase we’re executing a Maven build using the Maven Wrapper or a Gradle build using the Gradle Wrapper, with unit and integration tests. We’re also tagging the repository in the dev/${version} format. That way, in each subsequent step of the pipeline we’re able to retrieve the tagged version, and we know exactly which version of the pipeline corresponds to which Git hash.
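A minimal sketch of this step (PIPELINE_VERSION stands for the generated pipeline version; the actual commands live in the Bash scripts and may differ):

./mvnw clean deploy                    # build with unit & integration tests, upload fat jar + stubs jar
git tag "dev/${PIPELINE_VERSION}"      # tag the revision so later steps can find it
git push origin "dev/${PIPELINE_VERSION}"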

Test
test
Figure 8. Smoke test and rollback test on test environment

Here we’re

  • starting a RabbitMQ service in Cloud Foundry

  • deploying Eureka infrastructure application to Cloud Foundry

  • downloading the fat jar from Nexus and uploading it to Cloud Foundry. We want the application to run in isolation (surrounded by stubs). Currently, due to port constraints in Cloud Foundry, we cannot run multiple stubbed HTTP services in the cloud. To work around this issue we’re running the application with the smoke Spring profile, in which you can stub out all HTTP calls to return a mocked response

  • if the application is using a database then it gets upgraded at this point via Flyway, Liquibase or any other tool once the application gets started

  • from the project’s Maven or Gradle build we’re extracting the stubrunner.ids property that contains the groupId:artifactId:version:classifier notation of all the dependent projects for which stubs should be downloaded.

  • then we’re uploading Stub Runner Boot and passing the extracted stubrunner.ids to it. That way we’ll have a running application in Cloud Foundry that will download all the necessary stubs for our application

  • from the checked out code we’re running the tests available under the smoke profile. In the case of the GitHub Analytics application we’re triggering a message from the GitHub Webhook application’s stub, which is sent via RabbitMQ to GitHub Analytics. Then we’re checking whether the message count has increased. You can check those tests here. (A rough sketch of this flow is shown after this list.)

  • once the tests pass we’re searching for the last production release. Once an application is deployed to production we tag it with a prod/${version} tag. If there is no such tag (there was no production release), no rollback tests are executed. If there was a production release, the tests will be executed.

  • assuming that there was a production release, we’re checking out the code corresponding to that release (we’re checking out the tag), downloading the appropriate fat jar and uploading it to Cloud Foundry. IMPORTANT: the old jar is running against the NEW version of the database.

  • we’re running the old smoke tests against the freshly deployed application surrounded by stubs. If those tests pass then we have a high probability that the application is backwards compatible

  • the default behaviour is that after all of those steps the user can manually click to deploy the application to a stage environment
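A rough sketch of the test-environment flow described above (the application.url / stubrunner.url property names and the jar path are assumptions used only for illustration, not the scripts’ exact contract):

# extract the collaborators for which stubs should be downloaded
STUBRUNNER_IDS="$(./mvnw -q org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate \
  -Dexpression=stubrunner.ids -DforceStdout)"
# upload Stub Runner Boot and tell it which stubs to fetch
cf push stubrunner -p stub-runner-boot.jar --no-start
cf set-env stubrunner STUBRUNNER_IDS "${STUBRUNNER_IDS}"
cf start stubrunner
# run the smoke tests from the checked out code against the deployed app
./mvnw clean install -Psmoke \
  -Dapplication.url="${APPLICATION_URL}" \
  -Dstubrunner.url="${STUBRUNNER_URL}"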

Stage
stage
Figure 9. End to end tests on stage environment

Here we’re

  • starting a RabbitMQ service in Cloud Foundry

  • deploying Eureka infrastructure application to Cloud Foundry

  • downloading the fat jar from Nexus and uploading it to Cloud Foundry.

Next we have a manual step in which:

  • from the checked out code we’re running the tests available under the e2e profile. In the case of the GitHub Analytics application we’re sending an HTTP message to GitHub Analytics’ endpoint. Then we’re checking whether the received message count has increased. You can check those tests here.

This step is manual by default because the stage environment is often shared between teams and some preparation of databases / infrastructure has to take place before running the tests. Ideally this step should be fully automatic.
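For instance, such an e2e run boils down to something like the following (the application.url property name is an assumption used for illustration):

./mvnw clean install -Pe2e -Dapplication.url="${APPLICATION_URL}"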

Prod
prod
Figure 10. Deployment to production

The step to deploy to production is manual but ideally it should be automatic.

Here we’re

  • starting a RabbitMQ service in Cloud Foundry (only for the demo to pass - you should provision the prod environment in a different way)

  • deploying Eureka infrastructure application to Cloud Foundry (only for the demo to pass - you should provision the prod environment in a different way)

  • tagging the Git repo with prod/${version} tag

  • downloading the fat jar from Nexus

  • we’re doing a Blue-Green deployment on Cloud Foundry (a sketch in cf CLI terms follows this list)

  • we’re renaming the current instance of the app e.g. fooService to fooService-venerable

  • we’re deploying the new instance of the app under the fooService name

  • now two instances of the same application are running on production

  • in the Complete switch over, which is a manual step:

  • we’re deleting the old instance

  • remember to run this step only after you have confirmed that both instances are working fine!
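Expressed as plain cf CLI commands, the blue-green switch described above could look roughly like this (fooService and the jar path are placeholders):

cf rename fooService fooService-venerable     # keep the old instance around
cf push fooService -p target/fooService.jar   # deploy the new version under the original name
# verify that both instances work fine, then complete the switch over:
cf delete fooService-venerable -f             # delete the old instance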

Concourse Pipeline

The repository contains an opinionated pipeline that will build and deploy the Github Webhook application.

All in all, the following projects take part in the whole microservice setup for this demo.

  • Github-Analytics - the app that has a REST endpoint and uses messaging. Our business application.

  • Github Webhook - project that emits messages that are used by Github Analytics. Our business application.

  • Eureka - simple Eureka Server. This is an infrastructure application.

  • Github Analytics Stub Runner Boot - Stub Runner Boot server to be used for tests with Github Analytics. Uses Eureka and Messaging. This is an infrastructure application.

Step by step

If you want to just run the demo as far as possible, use PCF Dev and Docker Compose and follow the steps below.

Below you can also find the optional steps that need to be taken when you want to customize the pipeline.

Fork repos

There are 4 apps that compose the pipeline.

You need to fork only the business applications (github-webhook and github-analytics), because only then will your user be able to tag the repo and push the tag.

Start Concourse and Artifactory

Concourse + Artifactory can be run locally. To do that just execute the start.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/concourse
./setup_docker_compose.sh
./start.sh 192.168.99.100

The setup_docker_compose.sh script should be executed only once, to allow the generation of keys.

The 192.168.99.100 param is an example of an external URL of Concourse (equal to the Docker Machine IP in this example).

Then Concourse will be running on port 8080 and Artifactory on port 8081.

Deploy the infra JARs to Artifactory

When Artifactory is running, just execute the tools/deploy-infra.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra.sh

As a result both eureka and stub runner repos will be cloned, built and uploaded to Artifactory.

Start PCF Dev

TIP: You can skip this step if you have CF installed and don’t want to use PCF Dev. The only thing you have to do is set up the spaces.

Warning
It’s more than likely that you’ll run out of resources when you reach stage step. Don’t worry! Keep calm and clear some apps from PCF Dev and continue.

You have to download and start PCF Dev. A link describing how to do it is available here.

The default credentials when using PCF Dev are:

username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io

You can start PCF Dev like this:

cf dev start

You’ll have to create 3 separate spaces (logging in with email admin, password admin):

cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org

cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper

You can also execute the ./tools/pcfdev-helper.sh setup-spaces to do this.

Setup the fly CLI

If you go to the Concourse website you should see something like this:

   

running concourse

   

You can click one of the icons (depending on your OS) to download fly, the Concourse CLI. Once you’ve downloaded it (and perhaps added it to your PATH) you can run:

fly --version

If fly is properly installed then it should print out the version.

Setup your credentials.yml

The repo comes with credentials-sample.yml, which is set up with sample data (most credentials are set to values applicable for PCF Dev). Copy this file to a new file called credentials.yml (the file is added to .gitignore so you don’t have to worry about pushing it with your passwords) and edit it as you wish. For our demo just set up:

  • app-url - url pointing to your forked github-webhook repo

  • github-private-key - your private key to clone / tag GitHub repos

  • repo-with-jars - the IP is set to the defaults for Docker Machine. You should update it to point to your setup

If you don’t have Docker Machine, just execute the ./whats_my_ip.sh script to get an external IP that you can set as repo-with-jars instead of the default Docker Machine IP.
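In other words, the preparation boils down to:

cp credentials-sample.yml credentials.yml
./whats_my_ip.sh        # only if you don't use Docker Machine - gives you the IP for repo-with-jars
# then edit credentials.yml: app-url, github-private-key, repo-with-jars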

Below you can see which environment variables are required by the scripts. In the right-hand column you can see the default values for PCF Dev that we set in credentials-sample.yml.

Property Name | Property Description | Default value
CF_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io
CF_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io
CF_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io
CF_TEST_ORG | Name of the org for the test env | pcfdev-org
CF_TEST_SPACE | Name of the space for the test env | pcfdev-space
CF_STAGE_ORG | Name of the org for the stage env | pcfdev-org
CF_STAGE_SPACE | Name of the space for the stage env | pcfdev-space
CF_PROD_ORG | Name of the org for the prod env | pcfdev-org
CF_PROD_SPACE | Name of the space for the prod env | pcfdev-space
REPO_WITH_JARS | URL to repo with the deployed jars | http://192.168.99.100:8081/artifactory/libs-release-local
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local
CF_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | (no default)

Build the pipeline

Log in (e.g. to Concourse running at 192.168.99.100; if you don’t provide any value, localhost is assumed) by executing this script (it assumes that fly is either on your PATH or in the same folder as the script):

./login.sh 192.168.99.100

Next run the command to create the pipeline.

./set_pipeline.sh

That will create a github-webhook pipeline under the docker alias, using the provided credentials.yml file. You can override these values, in exactly that order (e.g. ./set_pipeline.sh some-project another-target some-other-credentials.yml)

Run the github-webhook pipeline

   

concourse login
Step 1: Click Login

   

concourse team main
Step 2: Pick main team

   

concourse user pass
Step 3: Log in with concourse user and changeme password

   

concourse pipeline
Step 4: Your screen should look more or less like this

   

start pipeline
Step 5: Unpause the pipeline by clicking in the top left corner and then clicking the play button

   

generate version
Step 6: Click 'generate-version'

   

run pipeline
Step 7: Click + sign to start a new build

   

concourse pending
Step 8: The job is pending

   

job running
Step 9: Job is pending in the main screen

   

running pipeline
Step 10: Job is running in the main screen

FAQ

Can I use the pipeline for some other repos?

Sure! Just change the app-url in credentials.yml!

Will this work for ANY project out of the box?

Not really. This is an opinionated pipeline, which is why we made some opinionated decisions such as:

  • usage of Spring Cloud, Spring Cloud Contract Stub Runner and Spring Cloud Eureka

  • application deployment to Cloud Foundry

  • For Maven:

    • usage of Maven Wrapper

    • artifacts deployment by ./mvnw clean deploy

    • stubrunner.ids property to retrieve list of collaborators for which stubs should be downloaded

    • running smoke tests on a deployed app via the smoke Maven profile

    • running end to end tests on a deployed app via the e2e Maven profile

  • For Gradle (in the github-analytics application check the gradle/pipeline.gradle file):

    • usage of the Gradle Wrapper

    • deploy task for artifacts deployment

    • running smoke tests on a deployed app via the smoke task

    • running end to end tests on a deployed app via the e2e task

    • groupId task to retrieve group id

    • artifactId task to retrieve artifact id

    • currentVersion task to retrieve the current version

    • stubIds task to retrieve list of collaborators for which stubs should be downloaded

This is the initial approach that can be easily changed in the future.

Can I modify this to reuse in my project?

Sure! It’s open-source! The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you could change only the bash scripts without changing the whole pipeline. You can check out the scripts here.

I ran out of resources!!

When deploying the app to stage or prod you can get an Insufficient resources exception. The way to solve it is to kill some apps from the test / stage env. To achieve that, just call:

cf target -o pcfdev-org -s pcfdev-test
cf stop github-webhook
cf stop github-eureka
cf stop stubrunner

You can also execute ./tools/pcfdev-helper.sh kill-all-apps that will remove all demo-related apps deployed to PCF dev.

The rollback step fails due to missing JAR ?!

You must have pushed some tags and then removed the Artifactory volume that contained the corresponding JARs. To fix this, just remove the tags:

git tag -l | xargs -n 1 git push --delete origin

Can I see the output of a job from the terminal?

Yes! Assuming that the pipeline name is github-webhook and the job name is build-and-upload, you can run:

fly watch --job github-webhook/build-and-upload -t docker

I clicked the job and it’s constantly pending…​

Don’t worry…​ most likely you’ve just forgotten to click the play button to unpause the pipeline. Click the top left corner, expand the list of pipelines and click the play button next to github-webhook.

Another problem that might occur is that you need to have the version branch. Concourse will wait for the version branch to appear in your repo. So in order for the pipeline to start ensure that when doing some git operations you haven’t forgotten to create / copy the version branch too.

The route is already in use

If you play around with Jenkins / Concourse you might end up with the routes occupied

Using route github-webhook-test.local.pcfdev.io
Binding github-webhook-test.local.pcfdev.io to github-webhook...
FAILED
The route github-webhook-test.local.pcfdev.io is already in use.

Just delete the routes

yes | cf delete-route local.pcfdev.io -n github-webhook-test
yes | cf delete-route local.pcfdev.io -n github-eureka-test
yes | cf delete-route local.pcfdev.io -n stubrunner-test
yes | cf delete-route local.pcfdev.io -n github-webhook-stage
yes | cf delete-route local.pcfdev.io -n github-eureka-stage
yes | cf delete-route local.pcfdev.io -n github-webhook-prod
yes | cf delete-route local.pcfdev.io -n github-eureka-prod

You can also execute the ./tools/pcfdev-helper.sh delete-routes

I’m unauthorized to deploy infrastructure jars

Most likely you’ve forgotten to update your local settings.xml with the Artifactory’s setup. Check out this section of the docs and update your settings.xml.

Jenkins DSL Pipeline

The repository contains job definitions and an opinionated setup pipeline using the Jenkins Job DSL plugin. Those jobs form an empty pipeline and a sample, opinionated one that you can use in your company.

All in all, the following projects take part in the whole microservice setup for this demo.

  • Github-Analytics - the app that has a REST endpoint and uses messaging. Our business application.

  • Github Webhook - project that emits messages that are used by Github Analytics. Our business application.

  • Eureka - simple Eureka Server. This is an infrastructure application.

  • Github Analytics Stub Runner Boot - Stub Runner Boot server to be used for tests with Github Analytics. Uses Eureka and Messaging. This is an infrastructure application.

Project setup

.
├── declarative-pipeline
│   └── Jenkinsfile-sample.groovy
├── jobs
│   ├── jenkins_pipeline_empty.groovy
│   ├── jenkins_pipeline_jenkinsfile_empty.groovy
│   ├── jenkins_pipeline_sample.groovy
│   └── jenkins_pipeline_sample_view.groovy
├── seed
│   ├── gradle.properties
│   ├── init.groovy
│   ├── jenkins_pipeline.groovy
│   └── settings.xml
└── src
    ├── main
    └── test

In the declarative-pipeline folder you can find a definition of a Jenkinsfile-sample.groovy declarative pipeline. It’s used together with the Blue Ocean UI.

In the jobs folder you have all the seed jobs that will generate pipelines.

  • jenkins_pipeline_empty.groovy - is a template of a pipeline with empty steps using the Jenkins Job DSL plugin

  • jenkins_pipeline_jenkinsfile_empty.groovy - is a template of a pipeline with empty steps using the Pipeline plugin

  • jenkins_pipeline_sample.groovy - is an opinionated implementation using the Jenkins Job DSL plugin

  • jenkins_pipeline_sample_view.groovy - builds the views for the pipelines

In the seed folder you have the init.groovy file, which is executed when Jenkins starts. That way we can configure most of the Jenkins options for you (adding credentials, JDK, etc.). jenkins_pipeline.groovy contains the logic to build a seed job (that way you don’t even have to click that job - we generate it for you).

In the src folder you have the production and test classes needed for you to build your own pipeline. Currently we have only tests, because the whole logic resides in the jenkins_pipeline_sample file.

Step by step

This is a guide for the Jenkins Job DSL-based pipeline.

If you want to just run the demo as far as possible, use PCF Dev and Docker Compose and follow the steps below.

Below you can also find the optional steps that need to be taken when you want to customize the pipeline.

Fork repos

There are 4 apps that compose the pipeline.

You need to fork only the business applications (github-webhook and github-analytics), because only then will your user be able to tag the repo and push the tag.

Start Jenkins and Artifactory

Jenkins + Artifactory can be run locally. To do that, just execute the start.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/jenkins
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg

Then Jenkins will be running on port 8080 and Artifactory on port 8081. The provided parameters will be passed as env variables to the Jenkins VM and the credentials will be set for you, so you don’t have to do any manual work on the Jenkins side. In the above parameters, the third parameter can be yourForkedGithubOrg or yourGithubUsername. Also, the REPOS env variable will contain your GitHub org in which you have the forked repos.

Deploy the infra JARs to Artifactory

When Artifactory is running, just execute the tools/deploy-infra.sh script from this repo.

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
./tools/deploy-infra.sh

As a result both eureka and stub runner repos will be cloned, built and uploaded to Artifactory.

Start PCF Dev

Tip
You can skip this step if you have CF installed and don’t want to use PCF Dev. The only thing you have to do is set up the spaces.
Warning
It’s more than likely that you’ll run out of resources when you reach stage step. Don’t worry! Keep calm and clear some apps from PCF Dev and continue.

You have to download and start PCF Dev. A link describing how to do it is available here.

The default credentials when using PCF Dev are:

username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io

You can start PCF Dev like this:

cf dev start

You’ll have to create 3 separate spaces (logging in with email admin, password admin):

cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org

cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper

You can also execute the ./tools/pcfdev-helper.sh setup-spaces to do this.

Run the seed job

We already create the seed job for you, but you’ll have to run it. When you run it you have to provide some properties. By default we create a seed job that has all the possible properties as options, but you can delete most of them. If you set the properties as global env variables, you have to remove them from the seed.

Anyway, to run the demo just provide in the REPOS var the comma-separated list of URLs of the 2 aforementioned forks of github-webhook and github-analytics.
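For example, for forks living under a hypothetical yourForkedGithubOrg organization, the REPOS value could look like this:

REPOS=https://github.com/yourForkedGithubOrg/github-webhook,https://github.com/yourForkedGithubOrg/github-analytics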

   

seed click
Step 1: Click the 'jenkins-pipeline-seed' job

   

seed run
Step 2: Click the 'Build with parameters'

   

seed
Step 3: Provide the REPOS parameter with URLs of your forks (you’ll have more properties than the ones in the screenshot)

   

seed built
Step 4: This is what the results of the seed should look like

Run the github-webhook pipeline


   

seed views
Step 1: Click the 'github-webhook' view

   

pipeline run
Step 2: Run the pipeline

   

pipeline run props
Step 3: You can set some properties (just click 'Build' to proceed)

   

Important
Most likely your 1st build will suddenly hang for 10 minutes. If you rerun it again it should work after 2-3 minutes. My guess is that it’s related to Docker Compose so sorry for this unfortunate situation.
Important
If your build fails on the deploy previous version to stage due to missing jar, that means that you’ve forgotten to clear the tags in your repo. Typically that’s due to the fact that you’ve removed the Artifactory volume with deployed JAR whereas a tag in the repo is still pointing there. Check out this section on how to remove the tag.

   

pipeline manual
Step 4: Click the manual step to go to stage (remember to kill the apps on the test env first). To do this, click the ARROW next to the job name

   

Important
Most likely you will run out of memory so when reaching the stage environment it’s good to kill all apps on test. Check out the FAQ section for more details!

   

pipeline finished
Step 5: The full pipeline should look like this

   

Declarative pipeline & Blue Ocean

You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach.

The Blue Ocean UI is available under the blue/ URL. E.g. for Docker Machine based setup http://192.168.99.100:8080/blue.

   

blue 1
Step 1: Open Blue Ocean UI and click on github-webhook-declarative-pipeline

   

blue 2
Step 2: Your first run will look like this. Click Run button

   

blue 3
Step 3: Enter parameters required for the build and click run

   

blue 4
Step 4: A list of pipelines will be shown. Click your first run.

   

blue 5
Step 5: State if you want to go to production or not and click Proceed

   

blue 6
Step 6: The build is in progress…​

   

blue 7
Step 7: The pipeline is done!

   

Important
There is no possibility of restarting a pipeline from a specific stage after a failure. Please check out this issue for more information
Warning
Currently there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information.

Optional steps

None of the steps below are necessary to run the demo. They are needed only when you want to make some custom changes.

Deploying infra jars to a different location

It’s enough to set the ARTIFACTORY_URL environment variable before executing tools/deploy-infra.sh. Example for deploying to Artifactory at IP 192.168.99.100:

git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh

Setup settings.xml for Maven deployment

Tip
If you want to use the default connection to the Docker version of Artifactory you can skip this step

So that ./mvnw deploy works with Artifactory from Docker, we’re already copying the missing settings.xml file for you. It looks like this:

<server>
  <id>artifactory-local</id>
  <username>admin</username>
  <password>password</password>
</server>

If you want to use your own version of Artifactory / Nexus you have to update the file (it’s in seed/settings.xml).

Setup Jenkins env vars

If you only want to play around with the demo that we’ve prepared, you have to set ONE variable: the REPOS variable. That variable needs to consist of a comma-separated list of URLs of repositories containing business apps. So you should pass the URLs of your forked repos.

You can do it in the following ways:

  • globally via Jenkins global env vars (then when you run the seed that variable will be taken into consideration and proper pipelines will get built)

  • modify the seed job parameters (you’ll have to modify the seed job configuration and change the REPOS property)

  • provide the repos parameter when running the seed job

For the sake of simplicity let’s go with the last option.

Important
If you’re choosing the global envs, you HAVE to remove the other approach (e.g. if you set the global env for REPOS, please remove that property from the seed job).
Seed properties

Click on the seed job and pick Build with parameters. Then as presented in the screen below (you’ll have far more properties to set) just modify the REPOS property by providing the comma separated list of URLs to your forks. Whatever you set will be parsed by the seed job and passed to the generated Jenkins jobs.

Tip
This is very useful when the repos you want to build differ, e.g. they use different JDKs. Then some seeds can set the JDK_VERSION param to one Java installation and others to another one.

Example screen:

seed

In the screenshot we could parametrize the REPOS and REPO_WITH_JARS params.

Global envs
Important
This section is presented only for informational purposes - for the sake of demo you can skip it

You can add env vars (go to configure Jenkins → Global Properties) for the following properties (the defaults are for PCF Dev):

Example screen:

env vars
All env vars

The env vars that are used in all of the jobs are as follows:

Property Name | Property Description | Default value
CF_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io
CF_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io
CF_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io
CF_TEST_ORG | Name of the org for the test env | pcfdev-org
CF_TEST_SPACE | Name of the space for the test env | pcfdev-space
CF_STAGE_ORG | Name of the org for the stage env | pcfdev-org
CF_STAGE_SPACE | Name of the space for the stage env | pcfdev-space
CF_PROD_ORG | Name of the org for the prod env | pcfdev-org
CF_PROD_SPACE | Name of the space for the prod env | pcfdev-space
REPO_WITH_JARS | URL to repo with the deployed jars | http://artifactory:8081/artifactory/libs-release-local
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local
JDK_VERSION | The name of the JDK installation | jdk8
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION
GIT_EMAIL | The email used by Git to tag repo | email@example.com
GIT_NAME | The name used by Git to tag repo | Pivo Tal
CF_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | (no default)
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false
ROLLBACK_STEP_REQUIRED | Should rollback step be present | true
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true

Set Git email / user

Since our pipeline sets the Git user / name explicitly for the build step, you’d have to go to the Configure section of the build step and modify the Git name / email there. If you want to set it globally, you’ll have to remove that section from the build step and follow these steps to set it globally.

You can set Git email / user globally like this:

   

manage jenkins
Step 1: Click 'Manage Jenkins'

   

configure system
Step 2: Click 'Configure System'

   

git
Step 3: Fill out Git user information

   

Jenkins Credentials

In our scripts we reference the credentials via IDs. These are the defaults for the credentials:

Property Name | Property Description | Default value
GIT_CREDENTIAL_ID | Credential ID used to tag a git repo | git
CF_TEST_CREDENTIAL_ID | Credential ID for CF Test env access | cf-test
CF_STAGE_CREDENTIAL_ID | Credential ID for CF Stage env access | cf-stage
CF_PROD_CREDENTIAL_ID | Credential ID for CF Prod env access | cf-prod

If you already have a credential in your system (for example one used to tag a repo), you can use it by setting its ID as the value of the GIT_CREDENTIAL_ID property.

Add Jenkins credentials for GitHub

The scripts will need to access the credential in order to tag the repo.

You have to set credentials with id: git.

Below you can find instructions on how to set a credential (the example shows the cf-test credential, but remember to also provide the one with id git).

   

credentials system
Step 1: Click 'Credentials, System'

   

credentials global
Step 2: Click 'Global Credentials'

   

credentials add
Step 3: Click 'Add credentials'

   

credentials example
Step 4: Fill out the user / password and provide the credential ID (in this example cf-test)

   

Enable Groovy Token Macro Processing

We have scripted that for you, but if you needed to do it manually, this is how:

   

manage jenkins
Step 1: Click 'Manage Jenkins'

   

configure system
Step 2: Click 'Configure System'

   

groovy token
Step 3: Click 'Allow token macro processing'

Docker Image

If you would like to run the pre-configured Jenkins image somewhere other than your local machine, we have an image you can pull and use on DockerHub. The latest tag corresponds to the latest snapshot build. You can also find tags corresponding to stable releases that you can use as well.
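For example, to grab the latest snapshot build (the image name matches the one used in the release commands later in this document):

docker pull springcloud/spring-cloud-pipeline-jenkins:latest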

FAQ

Pipeline version contains ${PIPELINE_VERSION}

You can check the Jenkins logs and you’ll see

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
	Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
	to be injected as environment variables or
	`-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
	to whitelist specific parameter names, even though it represents a security breach

To fix it you have to do exactly what the warning suggests…​ Also ensure that the Groovy token macro processing checkbox is set.
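As an illustration only (assuming your Jenkins image is based on the official one, which honors JAVA_OPTS), the flag from the warning could be passed like this when starting the container:

docker run -p 8080:8080 \
  -e JAVA_OPTS="-Dhudson.model.ParametersAction.keepUndefinedParameters=true" \
  jenkins/jenkins:lts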

Pipeline version is not passed to the build

You can see that the Jenkins version is properly set, but in the build the version is still a snapshot and echo "${PIPELINE_VERSION}" doesn’t print anything.

You can check the Jenkins logs and you’ll see

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
	Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
	to be injected as environment variables or
	`-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
	to whitelist specific parameter names, even though it represents a security breach

To fix it you have to do exactly what the warning suggests…​

The build times out with pipeline.sh info

Docker Compose, Docker Compose, Docker Compose…​ The problem is that, for some reason, the execution of Java hangs - but only in Docker, randomly, and only the first time you try to execute the pipeline.

The solution is to run the pipeline again. If it suddenly, magically passes once, then it will pass for any subsequent build.

Another thing that you can try is to run it with plain Docker. Maybe that will help.

Can I use the pipeline for some other repos?

Sure! You can pass the REPOS variable with a comma-separated list of entries in project_name$project_url format. If you don’t provide the PROJECT_NAME, the repo name will be extracted and used as the name of the project.

E.g. for REPOS equal to:

https://github.com/spring-cloud-samples/github-analytics,https://github.com/spring-cloud-samples/github-webhook

will result in the creation of pipelines with root names github-analytics and github-webhook.

E.g. for REPOS equal to:

foo$https://github.com/spring-cloud-samples/github-analytics,bar$https://github.com/spring-cloud-samples/atom-feed

will result in the creation of pipelines with root names foo for github-analytics and bar for atom-feed.

Will this work for ANY project out of the box?

Not really. This is an opinionated pipeline, which is why we made some opinionated decisions such as:

  • usage of Spring Cloud, Spring Cloud Contract Stub Runner and Spring Cloud Eureka

  • application deployment to Cloud Foundry

  • For Maven:

    • usage of Maven Wrapper

    • artifacts deployment by ./mvnw clean deploy

    • stubrunner.ids property to retrieve list of collaborators for which stubs should be downloaded

    • running smoke tests on a deployed app via the smoke Maven profile

    • running end to end tests on a deployed app via the e2e Maven profile

  • For Gradle (in the github-analytics application check the gradle/pipeline.gradle file):

    • usage of the Gradle Wrapper

    • deploy task for artifacts deployment

    • running smoke tests on a deployed app via the smoke task

    • running end to end tests on a deployed app via the e2e task

    • groupId task to retrieve group id

    • artifactId task to retrieve artifact id

    • currentVersion task to retrieve the current version

    • stubIds task to retrieve list of collaborators for which stubs should be downloaded

This is the initial approach that can be easily changed in the future.

Can I modify this to reuse in my project?

Sure! It’s open-source! The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you could change only the bash scripts without changing the whole pipeline.

I ran out of resources!!

When deploying the app to stage or prod you can get an Insufficient resources exception. The way to solve it is to kill some apps from the test / stage env. To achieve that, just call:

cf target -o pcfdev-org -s pcfdev-test
cf stop github-webhook
cf stop github-eureka
cf stop stubrunner

You can also execute ./tools/pcfdev-helper.sh kill-all-apps that will remove all demo-related apps deployed to PCF dev.

The rollback step fails due to missing JAR ?!

You must have pushed some tags and then removed the Artifactory volume that contained the corresponding JARs. To fix this, just remove the tags:

git tag -l | xargs -n 1 git push --delete origin

I want to provide a different JDK version

  • by default we assume that you have jdk with id jdk8 configured

  • if you want a different one just override JDK_VERSION env var and point to the proper one

Tip
The docker image comes in with Java installed at /usr/lib/jvm/java-8-openjdk-amd64. You can go to Global Tools and create a JDK with jdk8 id and JAVA_HOME pointing to /usr/lib/jvm/java-8-openjdk-amd64

To change the default one just follow these steps:

   

manage jenkins
Step 1: Click 'Manage Jenkins'

   

global tool
Step 2: Click 'Global Tool'

   

jdk installation
Step 3: Click 'JDK Installations'

   

jdk
Step 4: Fill out JDK Installation with path to your JDK

   

And that’s it!

I want deployment to stage and prod to be automatic

No problem, just set the property / env var to true

  • AUTO_DEPLOY_TO_STAGE to automatically deploy to stage

  • AUTO_DEPLOY_TO_PROD to automatically deploy to prod

I can’t tag the repo!

When you get something like this:

19:01:44 stderr: remote: Invalid username or password.
19:01:44 fatal: Authentication failed for 'https://github.com/marcingrzejszczak/github-webhook/'
19:01:44
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1740)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1476)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:63)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$8.execute(CliGitAPIImpl.java:1816)
19:01:44 	at hudson.plugins.git.GitPublisher.perform(GitPublisher.java:295)
19:01:44 	at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
19:01:44 	at hudson.model.Build$BuildExecution.post2(Build.java:185)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665)
19:01:44 	at hudson.model.Run.execute(Run.java:1745)
19:01:44 	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
19:01:44 	at hudson.model.ResourceController.execute(ResourceController.java:98)
19:01:44 	at hudson.model.Executor.run(Executor.java:404)

most likely you’ve passed a wrong password. Check the credentials section on how to update your credentials.

Deploying to test / stage / prod fails - error finding space

If you receive a similar exception:

20:26:18 API endpoint:   https://api.local.pcfdev.io (API version: 2.58.0)
20:26:18 User:           user
20:26:18 Org:            pcfdev-org
20:26:18 Space:          No space targeted, use 'cf target -s SPACE'
20:26:18 FAILED
20:26:18 Error finding space pcfdev-test
20:26:18 Space pcfdev-test not found

It means that you’ve forgotten to create the spaces in your PCF Dev installation.

The route is already in use

If you play around with Jenkins / Concourse you might end up with the routes occupied

Using route github-webhook-test.local.pcfdev.io
Binding github-webhook-test.local.pcfdev.io to github-webhook...
FAILED
The route github-webhook-test.local.pcfdev.io is already in use.

Just delete the routes

yes | cf delete-route local.pcfdev.io -n github-webhook-test
yes | cf delete-route local.pcfdev.io -n github-eureka-test
yes | cf delete-route local.pcfdev.io -n stubrunner-test
yes | cf delete-route local.pcfdev.io -n github-webhook-stage
yes | cf delete-route local.pcfdev.io -n github-eureka-stage
yes | cf delete-route local.pcfdev.io -n github-webhook-prod
yes | cf delete-route local.pcfdev.io -n github-eureka-prod

You can also execute the ./tools/pcfdev-helper.sh delete-routes

I’m unauthorized to deploy infrastructure jars

Most likely you’ve forgotten to update your local settings.xml with the Artifactory’s setup. Check out this section of the docs and update your settings.xml.

Signing Artifacts

In some cases, when performing a release, it may be required that the artifacts be signed before pushing them to the repository. To do this you will need to import your GPG keys into the Docker image running Jenkins. This can be done by placing a file called public.key containing your public key and a file called private.key containing your private key in the seed directory. These keys will be imported by the init.groovy script that runs when Jenkins starts.
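A hedged example of producing those two files with GPG (the e-mail address is a placeholder for your key’s identity):

gpg --armor --export you@example.com > seed/public.key
gpg --armor --export-secret-keys you@example.com > seed/private.key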

How to build it

./gradlew clean build

Warning
The tests that are run only check whether your scripts compile.

How to work with Jenkins Job DSL plugin

Check out the tutorial. Provide the link to this repository in your Jenkins installation.

Warning
Remember that views can be overridden; that’s why the suggestion is to keep all the logic needed to build a view for a single project in one script (note that spring_cloud_views.groovy builds all the spring-cloud views).

How to build it

Build and test

You can execute

./gradlew clean build

to build and test the project.

Generate readme

To generate readme just run

./gradlew generateReadme

Releasing

Publishing A Docker Image

When doing a release you also need to push a Docker image to Dockerhub. From the project root, run the following commands replacing <version> with the version of the release.

docker login
docker build -t springcloud/spring-cloud-pipeline-jenkins:<version> ./jenkins
docker push springcloud/spring-cloud-pipeline-jenkins:<version>
