April 20, 2025 · 4 min read
When GitHub told me I couldn’t use Arm64 runners on private repositories, I did what any stubborn, self-respecting developer would do: I rolled my own. What followed was a whirlwind journey through AWS, EC2 networking arcana, IAM wizardry, Docker oddities, and more YAML than is healthy for one human.
In the end? I now have a fully working, on-demand, ephemeral, OIDC-authenticated, spot-priced GitHub Actions runner that builds Arm64 containers. Here’s how it went down.
⚠️ Heads up: This isn’t a step-by-step tutorial. It’s more of a postmortem vent log about all the weird little things I had to figure out along the way.
GitHub now supports Arm64 runners … but only for public repos. I needed them for a private project. I didn’t want to rewrite or abandon my CI pipeline, and spinning up an always-on runner was expensive and inelegant. I wanted:

- On-demand runners that spin up only when a job needs them
- Ephemeral instances that terminate after a single job
- OIDC authentication instead of long-lived AWS keys
- Spot pricing to keep costs down
- Native Arm64 container builds
First, I needed an AMI. I used the stock Amazon Linux 2023 AMI: `ami-0aa56f6a386a2b5a5`. No changes; I knew I could install everything I needed via user data.
To launch EC2 instances, I had to understand VPCs, subnets, and CIDR blocks. I learned that `10.0.0.0/26` gives 64 IPs (a /26 leaves 6 host bits, so 2^6 = 64), minus the 5 that AWS reserves in every subnet. Turns out, routing tables are harder than they look.
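For reference, carving out a subnet like that with the AWS CLI looks roughly like this; the VPC ID and availability zone are placeholders, not my actual values:

```sh
# Create a /26 subnet (64 addresses, 59 usable) in an existing VPC.
# vpc-0123456789abcdef0 and eu-north-1a are placeholders.
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.0.0/26 \
  --availability-zone eu-north-1a
```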
To let GitHub spin up instances via OIDC, I:

- Created a role GitHub could assume, scoped to `ec2:RunInstances`, `ec2:TerminateInstances`, etc.
- Allowed `ec2:PassRole` so it could pass the runner instance profile
- Set up `token.actions.githubusercontent.com` as an OIDC identity provider in IAM

It took a few tries to get the policy scoping right; a sketch of the one-time setup is below.
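The account ID, repo, and role name here are placeholders, and the trust policy is a minimal sketch rather than my exact policy:

```sh
# One-time: register GitHub's OIDC provider with IAM. The thumbprint is
# the widely documented one for token.actions.githubusercontent.com;
# newer AWS validation may ignore it anyway.
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1

# Minimal trust policy: only workflows in one repo may assume the role.
# 123456789012 and my-org/my-repo are placeholders.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*"
      }
    }
  }]
}
EOF

aws iam create-role \
  --role-name github-runner-launcher \
  --assume-role-policy-document file://trust-policy.json
```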
When the instance booted, a user-data script ran as root. It had to:

- Install Docker, git, and `libicu` (thanks, .NET)
- Enable *and* start Docker (`enable` != `start`)
```sh
sudo yum update -y && \
sudo yum install -y docker git libicu && \
sudo systemctl enable docker && \
sudo systemctl start docker
```
This took a few tries to get right, since a missing `libicu` causes the runner to crash without telling you why. I had to SSH into the failing runner and check logs to figure it out.
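If you hit the same wall, the useful breadcrumbs were in the runner’s own diagnostic logs. Something like this, assuming the runner was unpacked into `/home/ec2-user/actions-runner` (that path is my guess at a typical layout, not gospel):

```sh
# The runner writes diagnostics to _diag inside its install directory.
ls /home/ec2-user/actions-runner/_diag/
tail -n 50 /home/ec2-user/actions-runner/_diag/Runner_*.log

# Also worth checking that Docker actually came up:
sudo systemctl status docker
```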
Spot support requires the `AWSServiceRoleForEC2Spot` service-linked role to exist. Luckily, I already had it from a previous attempt.
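If you don’t have it, the standard AWS CLI one-liner creates it:

```sh
# Create the service-linked role EC2 uses to request Spot Instances.
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```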
At first, Docker caching failed:

```
ERROR: This legacy service is shutting down...
```

Turns out, the self-hosted runner didn’t have a recent enough Buildx binary. I had to explicitly set this:

```yaml
- uses: docker/setup-buildx-action@v3
  with:
    version: latest
```

Otherwise, it used the instance’s default version, which lacked support for the latest GitHub cache APIs.
I modularized everything. My reusable GitHub Action accepts:

- `aws_role`
- `subnet_id`
- `security_group_id`
- `pre-runner-script`
- `github_personal_access_token`
I call `start-self-hosted`, run `build`, and call `stop-self-hosted` in a clean job chain. The runner lives for exactly one job.
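In sketch form, the chain looks like this. The `uses:` paths, variable names, and label plumbing are illustrative stand-ins for my actual action, not its real interface:

```yaml
jobs:
  start-self-hosted:
    runs-on: ubuntu-latest
    outputs:
      runner-label: ${{ steps.start.outputs.runner-label }}
    steps:
      - id: start
        # Hypothetical path; stands in for my reusable start action.
        uses: my-org/arm64-runner/start@v1
        with:
          aws_role: ${{ vars.AWS_ROLE }}
          subnet_id: ${{ vars.SUBNET_ID }}
          security_group_id: ${{ vars.SECURITY_GROUP_ID }}
          github_personal_access_token: ${{ secrets.RUNNER_PAT }}

  build:
    needs: start-self-hosted
    # The build job targets the freshly registered ephemeral runner.
    runs-on: ${{ needs.start-self-hosted.outputs.runner-label }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
        with:
          version: latest
      # ... build and push the Arm64 image here ...

  stop-self-hosted:
    needs: [start-self-hosted, build]
    if: always()  # tear the instance down even if the build fails
    runs-on: ubuntu-latest
    steps:
      # Hypothetical path; stands in for my reusable stop action.
      - uses: my-org/arm64-runner/stop@v1
        with:
          aws_role: ${{ vars.AWS_ROLE }}
          runner-label: ${{ needs.start-self-hosted.outputs.runner-label }}
```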
✅ Runner spins up in ~1 minute
✅ Docker builds happen on Arm64
✅ Spot pricing keeps costs low
✅ I control the whole pipeline
✅ GitHub’s artificial limitation? Bypassed
Do I wish GitHub just let me use Arm64 on private repos? Absolutely.
But I don’t regret the journey. I learned a ton, and now I have a system that works exactly how I want.
Not faster. Not simpler. But definitely cheaper. And mine.