GitHub Actions orchestrates workflows, including CI/CD, on the GitHub platform. Jobs defined within GitHub Actions workflows are executed on runners. Since early November you can execute your jobs on self hosted runners as a beta feature. GitHub Actions self hosted runners can run on these operating systems: Windows, macOS and Linux. On Linux, four architectures are supported: x86, x64, ARM and ARM64. This post describes how to set up and run GitHub Actions self hosted runners on AWS with the Linux operating system on the x64 architecture.
Why self hosted runners?
Self hosted runners allow you to execute jobs within an environment that you shape. Jobs that require access to resources that are not publicly available, such as persistence stores or event systems, can be realized with self hosted runners; they would not be possible otherwise.
How to connect AWS based self hosted runners
Self hosted runners require only an HTTPS connection between the GitHub platform and the runner to function. However, in order to set up the runners on unix-like systems in AWS you need to run setup shell scripts that require SSH access – you can disable SSH access again after setup.
There are four main steps to achieve this:
- Spin up an AWS instance (t2.micro) with an AWS security group that allows incoming traffic via HTTPS and SSH:
- The second step requires switching context from the AWS console to your terminal. You need to establish an SSH connection to the created AWS instance, e.g.
ssh -vi your-pem-file.pem ec2-user@ip-address. Don't close the SSH connection in your terminal.
- The third step requires switching context from the terminal to the GitHub repository. You need repository admin permissions to access the Settings tab and navigate to the Actions section.
The Add runner button opens a dialog with instructions to create a self hosted runner:
Make sure you execute those steps in the terminal with the established SSH connection.
- The last step is performed in the AWS console. Change the security group of the AWS instance to allow only incoming HTTPS traffic, following the principle of least privilege:
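The last step can also be done from your terminal instead of the console. A sketch using the AWS CLI, assuming the HTTPS rule created during setup stays in place and only the SSH rule is revoked; the security group id is a placeholder:

```shell
# Hypothetical sketch: tighten the security group from the AWS CLI instead
# of the console. SECURITY_GROUP_ID is a placeholder for your group's id.
SECURITY_GROUP_ID="sg-0123456789abcdef0"

# revoke the SSH rule that was only needed during the runner setup;
# the HTTPS rule added earlier remains, matching least privilege
aws ec2 revoke-security-group-ingress \
  --group-id "$SECURITY_GROUP_ID" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
```

This requires AWS CLI credentials with EC2 permissions on the machine you run it from.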
Once all steps are performed, you can enjoy your runner in Actions section within the Settings tab:
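The instructions shown in the GitHub dialog boil down to commands along these lines. This is a sketch only: the exact download URL, version and registration token come from the dialog, so vX.Y.Z, OWNER/REPO and TOKEN below are placeholders:

```shell
# Sketch of the runner setup; URL, version and TOKEN are placeholders taken
# from the Add runner dialog in the repository's Settings > Actions section.
mkdir actions-runner && cd actions-runner

# download and unpack the Linux x64 runner release
curl -O -L https://github.com/actions/runner/releases/download/vX.Y.Z/actions-runner-linux-x64-X.Y.Z.tar.gz
tar xzf actions-runner-linux-x64-X.Y.Z.tar.gz

# register the runner against the repository, then start it
./config.sh --url https://github.com/OWNER/REPO --token TOKEN
./run.sh
```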
The self hosted runner setup as described so far allows you to execute GitHub Actions based jobs. Almost all GitHub Actions jobs I have experienced will git clone/pull the repository content and build/test/deploy the code. You need to install git on a plain Amazon Linux instance (based on AMI type 1), e.g. with:
sudo yum install git
In order to use a self hosted runner you have to adapt the job definition within the workflow definition:
...
jobs:
  build:
    # runs-on: ubuntu-18.04
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v1
...
The example above illustrates the transition from using an ubuntu based runner on the GitHub platform to a self-hosted runner.
The runs-on directive empowers you to run jobs on both self hosted runners and GitHub platform runners. To target self hosted runners by labels you would define a list like
runs-on: [self-hosted, linux, ARM64]. You'll find more details in use self-hosted runners in a workflow.
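Put together, a minimal workflow file targeting the x64 Linux runner from this post could look as follows. The workflow name and the job steps are illustrative assumptions, not prescribed by GitHub:

```shell
# Write a hypothetical minimal workflow that targets a self hosted Linux
# x64 runner via labels; name and steps are illustrative only.
mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
name: CI
on: push
jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v1
      - run: echo "building on a self hosted runner"
EOF

# show the label line that routes the job to the self hosted runner
grep runs-on .github/workflows/ci.yml
```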
A new commit in your repository could start a workflow execution. Jobs succeeding on your local runner will produce terminal output similar to:
A failure case would produce terminal output similar to:
Running the plain
./run.sh command (the last step of the GitHub dialog instructions to create a self hosted runner) as is will cause the process to be terminated when the SSH connection is closed.
One option to work around this is nohup, e.g.:
nohup ./run.sh &. That keeps GitHub Actions job execution running after the SSH connection has been terminated.
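The nohup pattern can be tried out safely with a throwaway stand-in before pointing it at the real runner; run.sh below is a stand-in script created for the demo, not the runner's script:

```shell
# Demonstrate the nohup pattern with a throwaway stand-in for run.sh:
# output is redirected to a log file so the process no longer depends
# on the terminal of the SSH session.
printf '#!/bin/sh\necho runner started\n' > run.sh
chmod +x run.sh

nohup ./run.sh > runner.log 2>&1 &
echo $! > runner.pid          # keep the PID so the process can be stopped later

wait "$(cat runner.pid)"      # wait only for this demo; normally you just log out
grep 'runner started' runner.log
```

Keeping the PID around lets you stop the runner later with kill "$(cat runner.pid)".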
For a permanent installation, one may start the runner via an init system. With Amazon Linux, which is used for the GitHub Actions self hosted runner in this context, that init system is upstart. To initialize the run.sh process via upstart, create a configuration file under /etc/init:
description "github self hosted runner initializer"
author "Sebastian"
start on (runlevel [345] and started network)
stop on (runlevel [!345] or stopping network)
exec sudo -u ec2-user /home/ec2-user/actions-runner/run.sh
This config makes sure the run.sh process comes back even after reboot.
However, the instructions above will not work on Amazon Linux 2, which uses systemd instead of upstart.
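On Amazon Linux 2 the equivalent would be a systemd unit. A sketch, assuming the runner lives under /home/ec2-user/actions-runner; the unit name and options are my assumptions, not from the runner documentation:

```shell
# Hypothetical systemd unit for Amazon Linux 2; paths assume the runner
# was set up in /home/ec2-user/actions-runner.
sudo tee /etc/systemd/system/actions-runner.service >/dev/null <<'EOF'
[Unit]
Description=GitHub Actions self hosted runner
After=network.target

[Service]
User=ec2-user
WorkingDirectory=/home/ec2-user/actions-runner
ExecStart=/home/ec2-user/actions-runner/run.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# reload units, then enable (survive reboot) and start the runner
sudo systemctl daemon-reload
sudo systemctl enable --now actions-runner
```

Restart=always gives the same come-back-after-failure behavior the upstart config provides on Amazon Linux.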
Running CI/CD workloads on AWS is a prime use case for spot instances.
The AWS console projected 72% savings when I set up a t2.micro spot instance to run a self hosted runner, compared to an on-demand t2.micro instance.
Spot instances can be taken away at any time. That has happened to me only a very few times, so I always strive to run CI/CD workloads in AWS as spot instances.
You can define Spot Instance Requests to make sure a new AWS spot instance comes up once an existing AWS spot instance disappears.
However, the new spot instance will not have the GitHub self hosted runner set up, regardless of the workaround described above. One option I see is to use user-data to Run Commands on AWS Linux Instance at Launch, which would have to include the self hosted runner setup and the installation of git. The self hosted runner setup includes config.sh with a token. This token is provided by GitHub and is possibly not known at AWS instance launch time. That is one issue of spot agnosticity for which I have not yet found a sustainable solution. Going forward, GitHub Actions will have an API to request a new self-hosted runner token on the fly.
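A user-data script along these lines could combine the pieces. The download URL, version, OWNER/REPO and TOKEN remain placeholders; the TOKEN in particular may not be known at launch time, which is exactly the open problem described above:

```shell
#!/bin/bash
# Hypothetical user-data sketch: install git and register the runner at
# instance launch. URL, version, OWNER/REPO and TOKEN are placeholders.
yum install -y git

# run the runner setup as the regular ec2-user, not as root
sudo -u ec2-user bash <<'EOF'
cd /home/ec2-user
mkdir -p actions-runner && cd actions-runner
curl -O -L https://github.com/actions/runner/releases/download/vX.Y.Z/actions-runner-linux-x64-X.Y.Z.tar.gz
tar xzf actions-runner-linux-x64-X.Y.Z.tar.gz
./config.sh --url https://github.com/OWNER/REPO --token TOKEN
nohup ./run.sh > runner.log 2>&1 &
EOF
```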
GitHub Actions self hosted runners are available as a beta release and allow you to run GitHub Actions based jobs in your custom environment, with your tools and resources available at runtime. AWS, like other cloud providers, allows you to spin up and prepare a self hosted runner within minutes, so you are good to check it out yourself.
Thanks to Johannes Nicolai and Sebastian Bator for their suggestions!