Edward Thomson

GitHub Actions Day 28: Repository Automation

December 28, 2019  •  2:47 PM

This is day 28 of my GitHub Actions Advent Calendar. If you want to see the whole list of tips as they're published, see the index.

Advent calendars usually run through Christmas, but I'm going to keep posting about GitHub Actions through the end of December. Consider it bonus content!

This month we've looked at a lot of different ways to build and test your code when a pull request is opened, or when a pull request is merged into the master branch. And we've looked at different ways to deploy your code to a package registry or to a cloud provider.

But GitHub Actions provides triggers for any operation that happens in your repository, not just the ones that start CI/CD workflows. Here are some simple examples that display information about the event and are a good basis to build on.

Issue Comment

The issue_comment event is triggered whenever someone adds a comment on an issue or a pull request. The payload provides information about the issue and the comment that was added.

Here's a workflow that uses jq to get the issue comment out of the payload.
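The embedded workflow isn't shown here; a minimal sketch of such a workflow might look like this (the job name is illustrative — the payload for the event is always available in the file pointed to by GITHUB_EVENT_PATH):

```yaml
on: issue_comment

jobs:
  show_comment:
    runs-on: ubuntu-latest
    steps:
    - name: Print the comment body
      run: jq --raw-output '.comment.body' "$GITHUB_EVENT_PATH"
```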

When this workflow runs it will print the comment.

(Screenshot: workflow run printing the issue comment)

Wiki Changed

The gollum event is triggered whenever someone changes the repository's wiki. The payload provides information about the wiki pages that were changed.

Here's a workflow that uses jq to get the wiki information out of the payload.
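The embedded workflow isn't shown here; a minimal sketch along those lines (the job name is illustrative) reads the gollum event payload from GITHUB_EVENT_PATH:

```yaml
on: gollum

jobs:
  show_pages:
    runs-on: ubuntu-latest
    steps:
    - name: Print the URL of the first changed wiki page
      run: jq --raw-output '.pages[0].html_url' "$GITHUB_EVENT_PATH"
```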

When this workflow runs it will print the URL to the first wiki page that was changed.

(Screenshot: workflow run printing the wiki page URL)

Scheduled

You don't have to run a workflow based on any activity in your repository at all; you can also run workflows on a schedule. When you use the schedule trigger, you can specify when the workflow should run using crontab(5) syntax.

on:
  schedule:
  - cron: '0 2 1 * *'

In this case, the workflow will run at 2:00 AM (UTC) on the first day of every month. cron syntax is tricky, so be sure to consult the documentation if you haven't worked with it before.

(Screenshot: scheduled workflow run)

Although GitHub Actions provides great functionality for CI/CD workflows, there are also triggers for almost every operation in your repository that you can use for automation. Over the last few days of the month, we'll look at a more concrete example.

GitHub Actions Day 27: Deploy to Cloud

December 27, 2019  •  2:47 PM

This is day 27 of my GitHub Actions Advent Calendar. If you want to see the whole list of tips as they're published, see the index.

Advent calendars usually run through Christmas, but I'm going to keep posting about GitHub Actions through the end of December. Consider it bonus content!

So far this month, we've looked at a lot of ways to build and test your software. And we've looked at a few ways to package up your software. But how do you actually get it running in your cloud service provider?

As my buddy Damian says, "friends don't let friends right-click publish". Instead, a best practice is to script your deployments so that they're reliable and reproducible.

Several cloud providers have created actions to help you deploy to their services as part of your workflow.

AWS

The AWS team has created several actions that will help you deploy. They've also created a sample workflow to show you how to push an image to Amazon Elastic Container Registry (ECR), update an Elastic Container Service (ECS) definition and then deploy that definition.

Azure

The Azure team has also been busy creating actions. And they also get the prize for creating the most complete set of starter workflows that you can use as examples and starting points. Want to deploy a serverless Function App? They've got examples for that. Deploying a database? Sure, they've got examples for that, too. Kubernetes? You bet. It seems that whatever you want to deploy to Azure, they have a sample workflow for it.

Google Cloud Platform

The Google Cloud team has also created actions to help you deploy to Google Cloud Platform. They've got an action to set up the Google Cloud SDK so that you can use the gcloud command line application to script your deploys. Better yet – if you need help, you can drop by their Slack.

And Anywhere Else

Of course, since it's easy to run scripts as a workflow, you can set up a workflow that will deploy wherever you want – whether that's using the AWS, Azure or Google Cloud actions to help you, or writing a workflow that scps your code up to your own servers.
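For instance, here's a sketch of a workflow that copies a build output to a server with scp. Everything in it is hypothetical – the hostname, paths and secret name are placeholders, and it assumes you've stored a private key with access to the server in a repository secret:

```yaml
on:
  push:
    branches:
    - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Copy the site to the server (hypothetical host and paths)
      run: |
        echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
        chmod 600 deploy_key
        scp -i deploy_key -o StrictHostKeyChecking=no -r site/ deploy@example.com:/var/www/site
```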

GitHub Actions Day 26: Self-Hosted Runners

December 26, 2019  •  2:47 PM

This is day 26 of my GitHub Actions Advent Calendar. If you want to see the whole list of tips as they're published, see the index.

Advent calendars usually run through Christmas, but I'm going to keep posting about GitHub Actions through the end of December. Consider it bonus content!

This month I've talked a lot about the software installed on the runners that GitHub provides for running your workflows, and how to install new software.

But what if you wanted to customize your own runner instead of using the ones that GitHub Actions provides? You can use the GitHub Actions self-hosted runner to run workflows on any infrastructure that you have, whether it's an on-premises machine or a runner that you configure in the cloud.

Being able to set up a self-hosted runner is important if you have very custom dependencies – some people still need to use software with heavyweight licensing requirements, like hardware dongles. Or you might want to run a build on an ARM device that you have, instead of on the GitHub Actions runners, which are amd64.

More commonly, you might want to talk to machines within your firewall to run tests against them. Or do a deployment step to servers within your firewall.

To set up a self-hosted runner, you first need to download the software to the machine you want to configure. Go to the Settings tab in your repository, then select Actions in the left-hand menu. There you can configure your self-hosted runners.

(Screenshot: self-hosted runners settings page)

Just click "Add Runner", and follow the instructions.

(Screenshot: runner setup instructions)

Once you've set up and started the self-hosted runner, it will start polling GitHub to look for workflow runs. You can configure a workflow to run on your self-hosted runner by setting runs-on to self-hosted. Here I have a simple workflow that will run on my laptop; it just runs uname -a.
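The workflow itself isn't shown here; a minimal version might look like this (the trigger and job name are illustrative):

```yaml
on: push

jobs:
  uname:
    runs-on: self-hosted
    steps:
    - run: uname -a
```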

When I trigger this workflow, you can see that it accepts the job and then runs it.

(Screenshot: runner accepting and executing the job)

And in GitHub itself, you can see the results of the run.

(Screenshot: workflow results on GitHub)

It's easy to get workflows running on the GitHub Actions runners, or on a self-hosted runner within your network. And the GitHub Actions self-hosted runner will poll GitHub so that you don't need a hole in your firewall to be able to run workflows on machines located inside your firewall.

GitHub Actions Day 25: Sparkle a Christmas Tree

December 25, 2019  •  2:47 PM

This is day 25 of my GitHub Actions Advent Calendar. If you want to see the whole list of tips as they're published, see the index.

Today's Christmas, which means I'll spend it with my family. But I also thought it would be a good time to highlight a neat – and holiday-themed – use of GitHub Actions.

My buddy Martin set up a workflow that will run whenever somebody stars his repository. When that happens, he'll use curl to hit an external API that his internet-connected Christmas tree lights listen for. So whenever somebody stars his repository, his lights light up!
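Stars are delivered to workflows as the watch event, so a sketch of a workflow along those lines might look like this – the lights API URL is hypothetical, stored here in a made-up repository secret:

```yaml
on: watch

jobs:
  sparkle:
    runs-on: ubuntu-latest
    steps:
    - name: Light up the tree (hypothetical endpoint)
      run: curl -X POST "${{ secrets.TREE_LIGHTS_URL }}"
```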

This is another inventive use of actions – listening for an event within the repository, and running a clever workflow.

Merry Christmas, Martin! I hope you get lots of lights on your tree today. And Merry Christmas to you, reader. Thanks for following along this month.

GitHub Actions Day 24: Caching Dependencies

December 24, 2019  •  2:47 PM

This is day 24 of my GitHub Actions Advent Calendar. If you want to see the whole list of tips as they're published, see the index.

Most software projects depend on a set of dependencies that need to be installed as part of the build and test workflows. If you're building a Node application, the first step is usually an npm install to download and install the dependencies. If you're building a .NET application, you'll install NuGet packages. And if you're building a Go application, you'll go get your dependencies.

But this initial step of downloading dependencies is expensive. By caching them, we can reduce this time spent setting up our workflow.

Basically, when the actions/cache action runs for the first time, at the beginning of our workflow, it will look for our dependency cache. Since this is the first run, it won't find it. Our npm install step will run as normal. But after the workflow is completed, the path that we specify will be stored in the cache.

Subsequent workflow runs will download that cache at the beginning of the run, meaning that our npm install step has everything that it needs, and doesn't need to spend time downloading.

The simplest setup is just to specify a cache key and the path to cache.

- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: npm-packages

However, this setup is a little too simplistic, because caches are shared across all the workflows for your repository. That means that if you had a cache for the npm packages in your master branch, and a cache for the npm packages in a maintenance branch, then you'd always have to download the packages that changed between those two branches.

That is to say: when the master branch build runs, it will store the packages that it uses in the cache. When the maintenance branch build runs, it will restore the cache of packages from the master branch build. Then npm install will need to download all the packages that aren't in the master branch but are in the maintenance branch.

Instead, you can tailor the cache to exactly what it's storing. The best way to do this is to use a key that identifies exactly what's being cached. You can take a hash of the file that identifies the dependencies you're installing – in this case, we're using npm, so we'll hash the package-lock.json. This will give us a cache key that is tailored to our packages. We'll actually have multiple caches, one for each distinct package-lock.json, so each branch will restore efficiently.

- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: npm-packages-${{ hashFiles('**/package-lock.json') }}

Okay, this is an improvement. But we still have a problem: since the key depends on the contents of package-lock.json, any time we change the dependencies at all, we invalidate the cache completely.

We can add one more key – in this case, the restore-keys – that can be used as a fuzzier match. It will match the prefixes of the cache keys. In this case, we could set the restore-keys to npm-packages-. If there's an exact match for the key, then that will be the cache that's restored. But on a cache miss, then it will look for the first cache with a key that starts with npm-packages-.

This means that npm install will have to download some dependencies, but probably not all of them. So it's a big improvement over the case when there's a total cache miss.

- uses: actions/cache@v1
  with:
    path: ~/.npm
    key: npm-packages-${{ hashFiles('**/package-lock.json') }}
    restore-keys: npm-packages-

Using the actions/cache action is a good way to reduce the time spent setting up your dependencies, and it works on a wide variety of platforms. So whether you're building a project with Node, .NET, Java, or another technology, it can speed up your build.