Continuous Deployment with Pipeline as Code – #2 Concourse

by Benjamin Lallement, DevOps specialist and member of the Gologic collective.

Series Goals

This series of articles explores various tools for pipeline-as-code and continuous deployment.

The goal for each article remains the same: check out source code from Git, compile a Java project with Maven, run the tests, then deploy the application to AWS Elastic Beanstalk.

Those steps will be written as code in a pipeline and executed with a CI/CD tool.

Each article is divided into several parts:

  • Installation and start-up of the CI/CD tool
  • Configuration of the CI/CD tool (if needed)
  • Coding the continuous deployment pipeline
  • Checking the deployment
  • A simple conclusion

If you want to run a pipeline, you will need:

  • Docker runtime to execute pipeline steps
  • an AWS Elastic Beanstalk environment, with an access key and secret, to deploy the application

Before starting, let’s define two key concepts: Continuous Deployment and Pipeline-As-Code.

What does “Continuous Deployment” mean?

Continuous Deployment is closely related to Continuous Integration and refers to the release into production of software that passes the automated tests.

“Essentially, it is the practice of releasing every good build to users”, explains Jez Humble, author of Continuous Delivery.

By adopting both Continuous Integration and Continuous Deployment, you not only reduce risks and catch bugs quickly, but also move rapidly to working software.

With low-risk releases, you can quickly adapt to business requirements and user needs. This allows for greater collaboration between ops and delivery, fueling real change within your organization, and turning your release process into a business advantage.

What does “Pipeline as Code” mean?

Teams are pushing for automation across their environments, including testing and development infrastructure.

Pipeline as code defines the deployment pipeline through code kept under version control, instead of configuring a running CI/CD tool through its UI.

Source code

The GitHub demo repository is here: Continuous Deployment Demo

Concourse

Goal

This second article covers Concourse. You’ll find the first article by clicking here: #1-Jenkins.

Concourse is a pipeline-based CI system written in Go. It is a very lightweight CI/CD engine built around a web UI and workers.
The specificity of Concourse lies in the fact that all jobs are completely isolated and are composed of 3 kinds of steps: get, task and put.

A pipeline is based on resources and jobs:

  • Resources are used to retrieve (get) inputs for jobs and to save (put) job results.
  • A job is a set of tasks required to complete a pipeline.

Jobs can be triggered from a GET step, for example by a commit in a version control system, by a new version of an application in an S3 bucket or Artifactory, or by any other available resource.
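
For illustration, here is a minimal sketch of that resource/job structure; all names and the repository URL are hypothetical, not part of the demo:

# Minimal get/task/put skeleton (illustrative names only)
resources:
  - name: my-code
    type: git
    source:
      uri: https://example.com/my/repo.git
jobs:
  - name: build
    plan:
    # GET step: fetch a version of the resource; trigger=true starts the job on new versions
    - get: my-code
      trigger: true
    # TASK step: run a script in a container, defined in an external task file
    - task: unit-tests
      file: my-code/ci/unit-tests.yml
    # PUT step: save a result to a resource (an S3 bucket, a registry, etc.)
    # - put: my-artifact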

Install and Run Concourse

Start Concourse with Docker (docker-compose) by following the instructions at: http://concourse.ci/docker-repository.html

Check that Concourse is running at localhost:8080 and log in with the docker-compose credentials (concourse:changeme).
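
A minimal start-up sketch, assuming the docker-compose.yml from the Concourse documentation has been saved in the current directory:

# Start the Concourse web node and worker in the background
docker-compose up -d
# Follow the logs until both containers are up
docker-compose logs -f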

Install Fly-cli

The fly tool is a command-line interface to Concourse. It is used for a number of tasks, from connecting to a shell in one of your build’s containers to uploading new pipeline configuration into a running Concourse. Install the binary from the Concourse downloads section.
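
A minimal install sketch for Linux, assuming the fly binary has been downloaded from that section:

# Make the binary executable and put it on the PATH
chmod +x fly
sudo mv fly /usr/local/bin/fly
# Verify the installation
fly --version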

Login to Concourse with Fly

First, Fly needs a target set in order to access a team on Concourse. So, to connect to Concourse with Fly:

fly -t demo login -c http://localhost:8080 -u concourse -p changeme

Concourse is running, Fly is connected to it. Ready to rock!
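
If the local Fly version ever drifts from the server version, it can be re-synchronized against the target:

fly -t demo sync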

Pipeline as code: let’s get started

Concourse uses a declarative-style pipeline rather than a scripted-style pipeline. The documentation is quite clear as long as the Concourse concepts of resources and jobs are understood.

In the project, open concourse.yml and let’s take a look!

This pipeline is written inline for demo purposes, but the tasks in a pipeline can be separated into files, as shown right after the listing below.

# Declare Concourse resources to use as IN/OUT 
resources:  
  # 'code' resource is a GIT resource used to check out the source code
  - name: code  
    type: git  
    source:  
      uri: https://gitlab.gologic.ca/gologic-technos/continuous-deployment.git
      branch: master
  # 'storage' resource is an S3 resource to store JAR between build and deploy since Concourse does not provide any internal storage tool  
  - name: storage
    type: s3
    source:
      # Name of the bucket in S3 account
      bucket: gologic-concourse-demo-bucket
      region_name: ca-central-1
      # filename of the application to read/write in S3 (check S3 resource documentation for parameters) 
      versioned_file: demo.jar
      # AWS Credentials are passed in command line on set-pipeline. Concourse can also use an external vault system to store credentials
      access_key_id: ((AWS_ACCESS_KEY_ID))
      secret_access_key: ((AWS_SECRET_ACCESS_KEY))
jobs:  
  # First job: Package Application as a JAR and Upload to S3 Bucket for storage
  - name: Build  
    plan:  
    # Check for new commit (trigger=true), 'code' refers to GIT resource
    - get: code  
      trigger: true  
    # Package and copy application to output 'build' folder
    - task: compile  
      config:
        # Use a docker image with Maven to build application
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: maven
        # 'code' folder contains the checked-out code
        inputs:
          - name: code
        # 'build' folder is used to store file for next PUT step after RUN step
        outputs: 
          - name: build
        caches:
          - path: code/.m2
        # RUN step allows inline command and FILE step allows use of external task file
        run:
          path: sh
          args:
          - -c
          - |
            mvn -f code/pom.xml package -Dmaven.repo.local=code/.m2
            cp code/target/demo-1.0.jar build/demo.jar
    # Upload build/demo.jar to S3 bucket, 'storage' refers to S3 Resource
    - put: storage
      params:
        file: build/demo.jar
  # Second job: Retrieve application from S3 Bucket and Deploy to AWS Beanstalk
  - name: Deploy
    plan:  
    # Download application from S3 bucket, 'storage' refers to S3 Resource
    - get: storage
      # Only if build job has passed
      passed:
        - Build
      trigger: true
    # Deploy to AWS using credentials 
    - task: deploy-aws  
      params:
        AWS_ACCESS_KEY_ID: ((AWS_ACCESS_KEY_ID))
        AWS_SECRET_ACCESS_KEY: ((AWS_SECRET_ACCESS_KEY))
      config:
        # Use a docker image with AWS eb-cli to init, create environment and deploy application 
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: chriscamicas/awscli-awsebcli
        inputs:
          - name: storage
        # Run a set of AWS eb commands to deploy application to AWS (Check for AWS Beanstalk logs to check for creation and deployment)
        run:
          path: sh
          args:
          - -c
          - |
            eb init continuous-deployment-demo -p "64bit Amazon Linux 2017.09 v2.6.4 running Java 8" --region "ca-central-1"
            eb create concourse-env --single
            eb deploy concourse-env
            eb status
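
As mentioned before the listing, the inline compile task could instead live in its own file; here is a sketch of the extracted task, with an illustrative path (code/ci/compile.yml):

# code/ci/compile.yml: the compile task extracted to a separate file
platform: linux
image_resource:
  type: docker-image
  source:
    repository: maven
inputs:
  - name: code
outputs:
  - name: build
run:
  path: sh
  args:
  - -c
  - |
    mvn -f code/pom.xml package -Dmaven.repo.local=code/.m2
    cp code/target/demo-1.0.jar build/demo.jar

The pipeline step then shrinks to a reference:

    - task: compile
      file: code/ci/compile.yml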

Now it’s time to define the pipeline in Concourse. Run the following Fly commands:

Set pipeline: 
fly -t demo set-pipeline -p demo -c concourse.yml --var "AWS_ACCESS_KEY_ID=MY_AWS_ACCESS_KEY" --var "AWS_SECRET_ACCESS_KEY=MY_AWS_SECRET_KEY"

Unpause the pipeline to allow execution and triggering:
fly -t demo unpause-pipeline -p demo

To execute a pipeline job from the command line instead of the web UI, run the Fly command:

fly -t demo trigger-job -j demo/Build --watch
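
Fly can also be used to follow executions from the command line:

# List recent builds for the target
fly -t demo builds
# Attach to the output of the latest Deploy build
fly -t demo watch -j demo/Deploy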

The pipeline (concourse.yml) lives with the application: it evolves at the same time as the application and is now under version control. Each code change will trigger a new pipeline run, rebuild the application, and redeploy it to AWS!

Unfortunately, a manual execution of the set-pipeline command with the Fly CLI is required for every change to the pipeline. A nice improvement would be to execute set-pipeline automatically when a change is pushed to version control (Git), in order to skip this manual operation, as sketched below.
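
Newer Concourse versions (5.8 and later) add a built-in set_pipeline step that covers exactly this case; a hedged sketch, assuming concourse.yml is committed in the repository checked out by the 'code' resource:

# Illustrative job: re-apply the pipeline whenever concourse.yml changes in Git
jobs:
  - name: update-pipeline
    plan:
    - get: code
      trigger: true
    # Credentials such as ((AWS_ACCESS_KEY_ID)) must then come from a credential
    # manager or a committed vars file rather than from the command line
    - set_pipeline: demo
      file: code/concourse.yml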

Conclusion

Concourse is an extremely lightweight tool that requires minimal infrastructure.

Concourse requires very few clicks to see pipeline results, and the Fly command line is quite useful for interacting with Concourse.

Concourse focuses only on pipelines. It is a stand-alone tool and can require a lot of external resources to run a pipeline. For a CI/CD orchestrator, built-in functions such as credential management or artifact storage would be useful.

Changes to the pipeline (concourse.yml) are not triggered by a commit or change. Every change must be applied with the Fly CLI using the set-pipeline command, which can lead to unversioned pipeline changes.

Concourse has no fine-grained authorization.

Concourse debugging can be difficult.
