Continuous Deployment with Pipeline as Code – #5 TravisCI

by Benjamin Lallement, DevOps specialist and member of the Gologic collective.

Series Goals

This article series explores different tools for pipeline as code and continuous deployment.

The goal for each article remains the same: check out the source code from Git, compile a Java project with Maven, run the tests, then deploy the application to AWS Elastic Beanstalk.

Those steps will be written as code in a pipeline and executed with a CI/CD tool.

Each article is divided into several parts:

  • Installation and startup of the CI/CD tool
  • Configuration of the CI/CD tool (if needed)
  • Coding the continuous deployment pipeline
  • Verifying the deployment
  • A short conclusion

If you want to run the pipeline yourself, you will need:

  • A Docker runtime to execute the pipeline steps.
  • An AWS Elastic Beanstalk environment, with an access key and secret, to deploy the application.

Before starting, let’s define two key concepts: continuous deployment and pipeline as code.

What does “Continuous Deployment” mean?

Continuous deployment is closely related to continuous integration and refers to the release into production of software that passes the automated tests.

“Essentially, it is the practice of releasing every good build to users”, explains Jez Humble, author of Continuous Delivery.

By adopting both continuous integration and continuous deployment, you not only reduce risks and catch bugs quickly, but also move rapidly to working software.

With low-risk releases, you can quickly adapt to business requirements and user needs. This allows for greater collaboration between ops and delivery, fueling real change in your organization, and turning your release process into a business advantage.

What does “Pipeline as Code” mean?

Teams are pushing for automation across their environments, including testing and their development infrastructure.

Pipeline as code means defining the deployment pipeline through code instead of configuring a running CI/CD tool.

Source code

The GitHub demo repository is here: Continuous Deployment Demo

TravisCI

Goal

Our fifth guinea pig is none other than TravisCI; you'll find the other articles by clicking here: #1-Jenkins, #2-Concourse, #3-GitLab, #4-CircleCI.

TravisCI is a free continuous integration tool. It provides an online service used to compile, test, and deploy the source code of the software being developed, particularly in connection with the GitHub source code hosting service. A command-line client is available to interact with the online service: https://github.com/travis-ci/travis.rb.

In this article, we present the configuration of a pipeline. A TravisCI pipeline is written as a descriptor in YAML format, stored in the project in a .travis.yml file.

TravisCI integrates very easily with a GitHub project. However, it cannot be used with a private repository, because the free tool is available in SaaS mode. Once the GitHub project is linked, the pipeline is triggered whenever the source code changes on a branch, whether the change is to the source itself or to the pipeline definition.

A TravisCI pipeline follows a cycle of pre-established tasks, keeping the steps of each pipeline clear and traceable. The basic cycles are: before_install, install, before_script, script, after_success or after_failure, and after_script (see the job lifecycle documentation).
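As a sketch, these cycles map onto a .travis.yml like this (the echo commands are placeholders for illustration only):

```yaml
language: java

# Each top-level key below is one of the TravisCI lifecycle cycles.
before_install:
  - echo "prepare the environment before dependencies are installed"
install:
  - mvn install -DskipTests=true -B    # resolve dependencies and build
before_script:
  - echo "last setup step before the main build"
script:
  - mvn test -B                        # the main build/test cycle
after_success:
  - echo "runs only if the script cycle succeeded"
after_script:
  - echo "always runs last, regardless of the build result"
```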

It is possible to have several distinct steps in the pipeline by defining jobs; each job must define its own sequence of cycles. As in other tools, the workspace can be kept from one cycle to another by passing options such as “skip_cleanup: true”, for example.

The number of supported languages and deployment platforms is impressive and makes this tool very versatile!

Configure a TravisCI project

Configuring a TravisCI project is very fast if your source code is on GitHub. First, log in with your GitHub account.

TravisCI automatically discovers your projects in GitHub.

Select the project in TravisCI. If your project already has a .travis.yml file, the pipeline will run immediately and display a build summary.

Management of environment variables

In order to deploy the application to AWS, the AWS credentials must be added to the project’s environment variables. In the “More Options” menu, click the project’s “Settings” item.

In the “Environment Variables” section, add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys with your AWS credentials.

These variables are now available as environment variables in all project tasks!
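The same variables can also be managed from the travis command-line client — a sketch, assuming the client is installed and you are logged in; values added with `travis env set` are hidden in the build logs by default:

```shell
# Install the command-line client (a Ruby gem) and log in via GitHub
gem install travis
travis login --org

# Add the AWS credentials as environment variables on the project
travis env set AWS_ACCESS_KEY_ID "your-access-key-id"
travis env set AWS_SECRET_ACCESS_KEY "your-secret-access-key"

# List the variables defined for the project
travis env list
```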

Now that the project is set up, let’s go to the pipeline creation stage!

Pipeline as code: let’s get started

TravisCI uses declarative pipelines in YAML, unlike scripted pipelines (see Jenkinsfile). TravisCI provides templates per programming language: for example, a pipeline declaring “language: java” will automatically look for a Maven build, which greatly simplifies configuration.
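For reference, with “language: java” and a pom.xml at the project root, the defaults applied by the Maven template are roughly equivalent to writing the following — a sketch based on the Travis documentation; the exact flags may vary by version:

```yaml
language: java

# Implicit install cycle: build and install dependencies, skipping tests
install:
  - mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V

# Implicit script cycle: run the tests
script:
  - mvn test -B
```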

In the example below, the pipeline uses “stages” to group its steps, but a much simpler pipeline can be set up quickly.

In the project sources, open the .travis.yml script and study the content:

jobs:
  include:
    # build-and-deploy stage
    - stage: build-and-deploy
      # Use the Java template
      language: java
      # cache: keep the local Maven repository between builds to improve build speed
      cache:
        directories:
          - $HOME/.m2
      # install phase: with a Maven project, packaging a jar is implicit
      # script phase: copy the artifact to a build folder to allow upload to the S3 bucket
      script:
        - mkdir build
        - cp target/demo-1.0.jar build/demo.jar
      # Run a set of AWS eb commands to prepare the Elastic Beanstalk application
      # (check the Beanstalk logs to verify creation and deployment)
      before_deploy:
        - export ELASTIC_BEANSTALK_ENV=node-server-${TRAVIS_BRANCH}
        - export ELASTIC_BEANSTALK_LABEL=git-$(git rev-parse --short --verify HEAD)
        - export ELASTIC_BEANSTALK_DESCRIPTION=https://github.com/sumn2u/node-server/tree/$(git rev-parse HEAD)
        - docker pull chriscamicas/awscli-awsebcli
        - docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY chriscamicas/awscli-awsebcli /bin/sh -c "eb init continuous-deployment-demo -p '64bit Amazon Linux 2017.09 v2.6.4 running Java 8' --region ca-central-1; eb create travisci-env --single || true; eb setenv SERVER_PORT=5000"
      after_deploy:
        - rm build/demo.jar
      # deploy the binary from the S3 bucket to Beanstalk
      deploy:
        - provider: elasticbeanstalk
          access_key_id: $AWS_ACCESS_KEY_ID
          secret_access_key: $AWS_SECRET_ACCESS_KEY
          zip_file: "build/demo.jar"
          region: "ca-central-1"
          app: "continuous-deployment-demo"
          env: "travisci-env"
          bucket_name: "travisci-bucket"
          skip_cleanup: true
          only_create_app_version: false

As soon as the .travis.yml file is added to the project, the “Build History” tab detects the change in GitHub and provides a summary of the current builds and status. The “Current” view displays the current build and logs to see the progress of the pipeline steps.

Conclusion

TravisCI is hosted in SaaS mode for easier maintenance and integration with GitHub.

The number of templates for programming languages and deployment platforms is impressive.

Very well integrated with GitHub, it is almost the default tool when a project is hosted on GitHub.

The command-line client’s interactive mode is convenient for debugging the online service.
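For example, a typical debugging session with the client might look like this — a sketch; all commands assume you are logged in and inside the project directory:

```shell
# Validate the .travis.yml syntax locally before pushing
travis lint .travis.yml

# Show the most recent builds for the repository
travis history --limit 5

# Stream the log of the latest build
travis logs

# Restart the last build without pushing a new commit
travis restart
```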

Pipelines with basic “build, deploy” steps are simple, but pipelines involving more complex steps, such as test runs, can be much harder to set up.

TravisCI offers an “On Premise” version, but it is reserved for companies, so we could not demonstrate it.
