Nona Blog

CircleCI Orbs

In its latest update, CircleCI has introduced an interesting addition: orbs.

What is an orb?

The best analogy I can think of for an orb is an npm package (or a library, to be more general). You import a package, and then its functionality is available for you to use in your file (or globally, depending on how you’ve installed it).
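
In config terms, the “import” looks something like this (a conceptual sketch only; the orb name and command below are placeholders, not a real orb):

orbs:
  # "import" the orb under a local name
  some-orb: some-namespace/some-orb@1.2.3

jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    steps:
      # use a command the orb exports, much like calling a library function
      - some-orb/some-command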

Why is this useful?

As with importing libraries, the usefulness lies in reducing the number of lines of code in your file. It also means you don’t have to write the functionality yourself and can instead use someone else’s, which in theory should be better optimised and better maintained, since it is looked after by open source developers.

Okay great, now that we have an understanding, let’s do an example to hammer the point home. I will assume you have a general understanding of CircleCI and how it works. We will be using the AWS S3 CircleCI orb.

First, create a .circleci folder with a file named config.yml; this is the file we will add all the code to.
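
Your repository should now look roughly like this (a sketch; the rest of the project layout is up to you):

.
├── .circleci
│   └── config.yml
└── ... (the rest of your project)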

We must specify the latest version of the CircleCI configuration at the top of the file, which is

version: 2.1

Now we include the AWS S3 orb.

version: 2.1 

orbs:   
  aws-s3: circleci/aws-s3@1.0.13

where aws-s3 is the name we will use to refer to the orb in this config, circleci/aws-s3 is the orb itself, and 1.0.13 is the version we are pinning to. Currently it doesn’t seem like you can use @latest (or something similar) to make sure you are always on the latest version. Therefore, it’s important to check regularly for newer versions and update when available if you need bug fixes or newer functionality.

We will run it inside an arbitrary Node container, defined as an executor, as follows:

version: 2.1

orbs:   
  aws-s3: circleci/aws-s3@1.0.13

executors:
  node-container:
    docker:
      - image: circleci/node:10.16.3-browsers

Now to use the orb (I will assume you have already added your AWS credentials to CircleCI):

version: 2.1

orbs:   
  aws-s3: circleci/aws-s3@1.0.13

executors:
  node-container:
    docker:
      - image: circleci/node:10.16.3-browsers

jobs:
  build:
    executor: node-container
    steps:
      - checkout
      - run: mkdir myBucket && echo "hello world" > myBucket/build_asset.txt
      - aws-s3/sync:
          from: myBucket
          to: 's3://my-s3-bucket/prefix'
          arguments: |
            --acl public-read --cache-control "max-age=86400"
          overwrite: true

Here I’m running a job (named build) with some steps, which will be executed inside the node-container executor. Let’s take a closer look at what is happening in the steps.

– checkout checks out the source code from the repo.

– run: mkdir myBucket && echo "hello world" > myBucket/build_asset.txt creates a folder called myBucket containing a file called build_asset.txt, which contains the text hello world.

- aws-s3/sync:
    from: myBucket
    to: 's3://my-s3-bucket/prefix'
    arguments: | 
      --acl public-read --cache-control "max-age=86400"
    overwrite: true

In this step we use the sync command of the aws-s3 orb (it wraps aws s3 sync, documented at https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html). We are syncing the newly created myBucket folder with the S3 bucket called my-s3-bucket. We pass arguments that allow public read access and set a cache TTL of 86400 seconds (24 hours). We have also set overwrite: true; looking at the orb source in the P.S. below, this passes the --delete flag to aws s3 sync, so on top of sync’s default behaviour of overwriting changed files with the same name, any files in the destination that no longer exist in myBucket will be removed.
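
For reference, based on the orb’s own sync command (shown in the P.S. at the end of this post), this step should expand to roughly the following CLI call (a sketch, not the exact output):

aws s3 sync \
  myBucket s3://my-s3-bucket/prefix --delete \
  --acl public-read --cache-control "max-age=86400"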

To summarize, my-s3-bucket will now mirror the contents of myBucket: the files will be publicly readable, cached with a TTL of 24 hours, and anything that was in the bucket but not in myBucket will have been removed.
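
One last note on running this config: when there is no workflows section, CircleCI falls back to running the job named build, which is why the job above is called build. If you prefer a different job name, add a small workflows section as well (a minimal sketch, assuming a job named deploy-assets):

workflows:
  deploy:
    jobs:
      - deploy-assets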

And that, in a nutshell, is how CircleCI orbs work: almost exactly like libraries.

You can find more info (other functionality, options, etc.) about the aws-s3 orb here.
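
As one example of those extra options: by default the orb picks up your credentials from the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION environment variables (you can see the defaults in the orb source below), and each step accepts overrides if you need to use a different set of credentials. Here is a sketch, with made-up environment variable names:

- aws-s3/sync:
    from: myBucket
    to: 's3://my-s3-bucket/prefix'
    # these environment variable names are illustrative; use whatever you have configured
    aws-access-key-id: AWS_ACCESS_KEY_ID_STAGING
    aws-secret-access-key: AWS_SECRET_ACCESS_KEY_STAGING
    aws-region: AWS_REGION_STAGING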

P.S. Here’s all the code we managed to avoid having to write:

# This code is licensed from CircleCI to the user under the MIT license. See
# https://circleci.com/orbs/registry/licensing for details.
version: 2.1

description: |
  A set of tools for working with Amazon S3. Requirements: bash
  Source: https://github.com/circleci-public/circleci-orbs

examples:
  basic_commands:
    description: "Examples uses aws s3 commands"
    usage:
      version: 2.1
      orbs:
        aws-s3: circleci/aws-s3@1.0.0
      jobs:
        build:
          docker:
            - image: circleci/python:2.7
          steps:
            - checkout
            - run: mkdir bucket && echo "lorem ipsum" > bucket/build_asset.txt
            - aws-s3/sync:
                from: bucket
                to: "s3://my-s3-bucket-name/prefix"
                arguments: |
                  --acl public-read \
                  --cache-control "max-age=86400"
                overwrite: true
            - aws-s3/copy:
                from: bucket/build_asset.txt
                to: "s3://my-s3-bucket-name"
                arguments: --dryrun
  override_credentials:
    description: "Examples uses aws s3 commands with credentials overriding"
    usage:
      version: 2.1
      orbs:
        aws-s3: circleci/aws-s3@1.0.0
      jobs:
        build:
          docker:
            - image: circleci/python:2.7
          steps:
            - checkout
            - run: mkdir bucket && echo "lorem ipsum" > bucket/build_asset.txt
            - aws-s3/sync:
                from: bucket
                to: "s3://my-s3-bucket-name/prefix"
                aws-access-key-id: AWS_ACCESS_KEY_ID_BLUE
                aws-secret-access-key: AWS_SECRET_ACCESS_KEY_BLUE
                aws-region: AWS_REGION_BLUE
                arguments: |
                  --acl public-read \
                  --cache-control "max-age=86400"
                overwrite: true
            - aws-s3/copy:
                from: bucket/build_asset.txt
                to: "s3://my-s3-bucket-name"
                arguments: --dryrun

orbs:
  aws-cli: circleci/aws-cli@0.1.13

commands:
  sync:
    description: "Syncs directories and S3 prefixes. https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html"
    parameters:
      from:
        type: string
        description: A local *directory* path to sync with S3
      to:
        type: string
        description: A URI to an S3 bucket, i.e. 's3://the-name-my-bucket'

      arguments:
        type: string
        default: ""
        description: >
          Optional additional arguments to pass to the `aws sync` command
          (e.g., `--acl public-read`). Note: if passing a multi-line value
          to this parameter, include `\` characters after each line, so the
          Bash shell can correctly interpret the entire command.
      overwrite:
        default: false
        type: boolean
      aws-access-key-id:
        type: env_var_name
        description: aws access key id override
        default: AWS_ACCESS_KEY_ID
      aws-secret-access-key:
        type: env_var_name
        description: aws secret access key override
        default: AWS_SECRET_ACCESS_KEY
      aws-region:
        type: env_var_name
        description: aws region override
        default: AWS_REGION
    steps:
      - aws-cli/install
      - aws-cli/configure:
          aws-access-key-id: << parameters.aws-access-key-id >>
          aws-secret-access-key: << parameters.aws-secret-access-key >>
          aws-region: << parameters.aws-region >>
      - deploy:
          name: S3 Sync
          command: |
            aws s3 sync \
              <<parameters.from>> <<parameters.to>><<#parameters.overwrite>> --delete<</parameters.overwrite>><<#parameters.arguments>> \
              <<parameters.arguments>><</parameters.arguments>>
  copy:
    description: "Copies a local file or S3 object to another location locally or in S3. https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html"
    parameters:
      from:
        type: string
        description: A local file or source s3 object
      to:
        type: string
        description: A local target or s3 destination
      arguments:
        description: If you wish to pass any additional arguments to the aws copy command (i.e. -sse)
        default: ''
        type: string
      aws-access-key-id:
        type: env_var_name
        description: aws access key id override
        default: AWS_ACCESS_KEY_ID
      aws-secret-access-key:
        type: env_var_name
        description: aws secret access key override
        default: AWS_SECRET_ACCESS_KEY
      aws-region:
        type: env_var_name
        description: aws region override
        default: AWS_REGION
    steps:
      - aws-cli/install
      - aws-cli/configure:
          aws-access-key-id: << parameters.aws-access-key-id >>
          aws-secret-access-key: << parameters.aws-secret-access-key >>
          aws-region: << parameters.aws-region >>
      - run:
          name: S3 Copy << parameters.from >> -> << parameters.to >>
          command: "aws s3 cp << parameters.from >> << parameters.to >><<# parameters.arguments >> << parameters.arguments >><</ parameters.arguments >>"

Looks like the entire gist is too long to embed, so here’s the link to access all the code: https://gist.github.com/DominicGBauer/35f2e2e941e200e377c95d6b6b72a294#file-aws-s3-yml (I do feel this helped my point, though 😛)

Dom Bauer

Junior Developer - Nona
