How I Built a Spot-Based Self-Hosted GitHub Runner (Because GitHub Said No to Arm64 in Private Repos)


April 20, 2025   ·   4 min read

When GitHub told me I couldn’t use Arm64 runners on private repositories, I did what any stubborn, self-respecting developer would do: I rolled my own. What followed was a whirlwind journey through AWS, EC2 networking arcana, IAM wizardry, Docker oddities, and more YAML than is healthy for one human.

In the end? I now have a fully working, on-demand, ephemeral, OIDC-authenticated, spot-priced GitHub Actions runner that builds Arm64 containers. Here’s how it went down.

⚠️ Heads up: This isn’t a step-by-step tutorial. It’s more of a postmortem vent log about all the weird little things I had to figure out along the way.


The Motivation

GitHub now supports Arm64 runners … but only for public repos. I needed them for a private project. I didn’t want to rewrite or abandon my CI pipeline, and spinning up an always-on runner was expensive and inelegant. I wanted:

  • Spot instances (because money)
  • Arm64 architecture for building Docker images (because that’s what my server runs on)
  • Spawn per CI job, destroy when done (ephemeral, stateless, self-cleaning infra)
  • GitHub Actions compatibility so I could keep using the workflow steps I already knew

The Tools

  • GitHub Actions for the workflow
  • AWS EC2 Spot Instances for cheap compute
  • Amazon Linux 2023 for the runner OS
  • machulav/ec2-github-runner to glue it all together
  • OIDC authentication for secure, short-lived AWS creds
  • Docker + BuildKit for container builds

The Buildout

Step 1: The Image

First, I needed an AMI. I used the stock Amazon Linux 2023 AMI:

ami-0aa56f6a386a2b5a5

No changes. I knew I could install everything I needed via user data.

Step 2: The Network Labyrinth

To launch EC2 instances, I had to understand VPCs, subnets, and CIDR blocks. I learned:

  • 10.0.0.0/26 gives 64 IPs (minus 5 for AWS use)
  • Subnets must not overlap
  • Public subnets need a route to an Internet Gateway
  • EC2 instances need a public IP and port 22 open for EC2 Instance Connect

Turns out, routing tables are harder than they look.
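
For anyone who would rather codify that wiring than click through the console, it boils down to something like this CloudFormation sketch (purely illustrative: the resource names and CIDR blocks are examples, not my actual values):

# Illustrative sketch of the "public subnet" wiring
Resources:
  RunnerVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/24
  RunnerSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref RunnerVpc
      CidrBlock: 10.0.0.0/26            # 64 addresses, 5 of them reserved by AWS
      MapPublicIpOnLaunch: true         # the runner needs a public IP
  RunnerIgw:
    Type: AWS::EC2::InternetGateway
  RunnerIgwAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref RunnerVpc
      InternetGatewayId: !Ref RunnerIgw
  RunnerRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref RunnerVpc
  RunnerDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RunnerRouteTable
      DestinationCidrBlock: 0.0.0.0/0   # this route is what makes the subnet "public"
      GatewayId: !Ref RunnerIgw
  RunnerRouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref RunnerSubnet
      RouteTableId: !Ref RunnerRouteTable
  RunnerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: SSH in for EC2 Instance Connect
      VpcId: !Ref RunnerVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0             # tighten to the EC2 Instance Connect range in practice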

Step 3: IAM, Roles, and Trust

To let GitHub spin up instances via OIDC, I:

  • Created a role with ec2:RunInstances, ec2:TerminateInstances, etc.
  • Granted iam:PassRole so the role could hand the runner instance its instance profile
  • Configured the trust relationship to accept tokens from token.actions.githubusercontent.com
  • Scoped the GitHub repo/org correctly in the OIDC condition

It took a few tries to get the policy scoping right.
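
The trust side of the role ends up looking roughly like the CloudFormation sketch below (the account ID, org/repo, and role name are placeholders, and the permission policy with ec2:RunInstances and friends is attached separately):

Resources:
  GithubActionsRunnerRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: github-actions-ec2-runner          # placeholder name
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com
            Condition:
              StringEquals:
                "token.actions.githubusercontent.com:aud": sts.amazonaws.com
              StringLike:
                # getting this subject scoping right was the part that took a few tries
                "token.actions.githubusercontent.com:sub": "repo:my-org/my-private-repo:*"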

Step 4: Pre-Runner Script and Dependencies

When the instance booted, a user-data script ran as root. It had to:

  • Install Docker, Git, and libicu (thanks .NET)
  • Start Docker (because enable != start)
  • Optionally preload the runner

sudo yum update -y && \
  sudo yum install -y docker git libicu && \
  sudo systemctl enable docker && \
  sudo systemctl start docker

This took a few tries to get right, since a missing libicu makes the runner crash without any useful error. I had to SSH into the failing instance and dig through the runner logs to figure that out.
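
Under the hood the script goes in through the pre-runner-script input of machulav/ec2-github-runner, roughly like this (a trimmed sketch: the instance type is just an example and the spot-request settings are left out):

- name: Start EC2 runner
  uses: machulav/ec2-github-runner@v2
  with:
    mode: start
    github-token: ${{ inputs.github_personal_access_token }}
    ec2-image-id: ami-0aa56f6a386a2b5a5
    ec2-instance-type: t4g.medium        # example Arm64 (Graviton) instance type
    subnet-id: ${{ inputs.subnet_id }}
    security-group-id: ${{ inputs.security_group_id }}
    pre-runner-script: |
      sudo yum update -y && \
      sudo yum install -y docker git libicu && \
      sudo systemctl enable docker && \
      sudo systemctl start docker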

Step 5: Spot Instances and the Mysterious Role

Spot support requires the AWSServiceRoleForEC2Spot service-linked role to exist in the account. Luckily, I already had it from a previous attempt.
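
Had it been missing, something like this CloudFormation resource (again, just a sketch) would have created it:

Resources:
  Ec2SpotServiceLinkedRole:
    Type: AWS::IAM::ServiceLinkedRole
    Properties:
      AWSServiceName: spot.amazonaws.com   # the service behind AWSServiceRoleForEC2Spot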

Step 6: Docker Buildx Problems

At first, Docker caching failed:

ERROR: This legacy service is shutting down...

Turns out, the self-hosted runner didn’t have a recent enough Buildx binary. I had to explicitly set this:

- uses: docker/setup-buildx-action@v3
  with:
    version: latest

Otherwise it used whatever Buildx version the instance happened to have, which didn't support the newer GitHub Actions cache backend.
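
For context, this is the kind of caching involved: the type=gha cache exporter on the build step, which in a sketch looks like this (the image tag and action version are placeholders):

- uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/arm64
    push: true
    tags: ghcr.io/example/my-app:latest   # placeholder image tag
    cache-from: type=gha                  # these two lines talk to GitHub's cache service
    cache-to: type=gha,mode=max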

Step 7: Workflow Architecture

I modularized everything. My reusable GitHub Action accepts:

  • aws_role
  • subnet_id
  • security_group_id
  • pre-runner-script
  • github_personal_access_token

I call start-self-hosted, run build, and call stop-self-hosted in a clean job chain. The runner lives for exactly one job.
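
Stripped to its skeleton, the chain looks something like this (the wrapper paths, outputs, and secret/variable names here are illustrative; only the overall shape and the inputs listed above reflect my actual setup):

name: build-arm64
on: push

permissions:
  id-token: write    # required for OIDC to AWS
  contents: read

jobs:
  start-runner:
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.start.outputs.label }}
      ec2-instance-id: ${{ steps.start.outputs.ec2-instance-id }}
    steps:
      - uses: actions/checkout@v4          # local wrapper actions need the repo checked out
      - id: start
        uses: ./.github/actions/start-self-hosted
        with:
          aws_role: ${{ vars.AWS_ROLE }}
          subnet_id: ${{ vars.SUBNET_ID }}
          security_group_id: ${{ vars.SECURITY_GROUP_ID }}
          pre-runner-script: ${{ vars.PRE_RUNNER_SCRIPT }}
          github_personal_access_token: ${{ secrets.GH_PAT }}

  build:
    needs: start-runner
    runs-on: ${{ needs.start-runner.outputs.label }}   # the ephemeral Arm64 spot instance
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
        with:
          version: latest
      # ... build and push the Arm64 image here ...

  stop-runner:
    needs: [start-runner, build]
    if: always()                                       # tear down even if the build failed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/stop-self-hosted
        with:
          aws_role: ${{ vars.AWS_ROLE }}
          github_personal_access_token: ${{ secrets.GH_PAT }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.ec2-instance-id }}

The if: always() on the stop job is what keeps a failed build from leaving a spot instance running, and billing, forever.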


The Result

✅ Runner spins up in ~1 minute
✅ Docker builds happen on Arm64
✅ Spot pricing keeps costs low
✅ I control the whole pipeline
✅ GitHub’s artificial limitation? Bypassed

Final Thoughts

Do I wish GitHub just let me use Arm64 on private repos? Absolutely.

But I don’t regret the journey. I learned a ton, and now I have a system that works exactly how I want.

Not faster. Not simpler. But definitely cheaper. And mine.

