GitHub Actions: Execute kubectl commands on a local cluster with a self-hosted runner and RBAC

In this article, we will see how to set up a self-hosted runner in Kubernetes for a GitHub repository and run kubectl commands from a GitHub Actions pipeline on the same cluster where the self-hosted runner is running.
What do we need to follow along:
- A local Kubernetes cluster (Minikube, kind, Docker Desktop's built-in Kubernetes, Rancher Desktop, or whatever you like). I must say I've been impressed by Rancher Desktop lately.
- kubectl and helm
- A GitHub account
- Basic Kubernetes and GitHub Actions knowledge
Alternatives:
There are, of course, other ways to run kubectl commands from GitHub Actions. For example, you can use a kubeconfig file. The kubeconfig file contains information about the cluster, including the API server endpoint, the cluster's CA certificate, and the credentials for a user or service account; it is what authenticates and authorizes kubectl against a specific Kubernetes cluster. The usual way to do this is to add the kubeconfig file content as a secret in GitHub.
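To make that concrete, here is a minimal sketch of the kubeconfig approach. It assumes a repository secret named KUBE_CONFIG that holds the kubeconfig contents; the secret name, workflow name, and job name are just placeholders, not something from this article's setup:

name: Deploy via kubeconfig secret
on:
  workflow_dispatch:
jobs:
  kubectl-with-kubeconfig:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/setup-kubectl@v4
      - name: Write kubeconfig from the KUBE_CONFIG secret
        run: |
          # Place the kubeconfig where kubectl looks for it by default
          mkdir -p "$HOME/.kube"
          echo "${{ secrets.KUBE_CONFIG }}" > "$HOME/.kube/config"
          chmod 600 "$HOME/.kube/config"
      - run: kubectl get pods

Note that this only works if the API server is reachable from the runner, which is exactly why a GitHub-hosted runner can't talk to a purely local cluster.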
However, when you use a self-hosted runner, the kubectl commands from the GitHub Actions pipeline run on the same cluster where your applications are running. Additionally, self-hosted runners can provide more flexibility, reliability, and scalability for complex and long-running tasks, like deploying your application to the same cluster, but with more control and visibility over the infrastructure.
Goal:
By the end of this article, you should be able to run kubectl commands in a self-hosted runner in your own cluster. We'll also set up Role-based access control (RBAC) to regulate the access to our cluster resources in our runners. In our example, we will only allow the "get" and "list" verbs on Pods.
1. Install Actions Runner Controller (ARC)
Execute the following command, which installs the Actions Runner Controller in the arc-systems namespace of your cluster:
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
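Before moving on, you can check that the controller came up; the exact pod name depends on the chart release, but you should see a running controller pod in the arc-systems namespace:

kubectl get pods -n arc-systems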
2. Configure allowed actions
We'll create an apps namespace where our "normal" apps should be deployed. We'll also create the arc-runners namespace manually; this is the namespace we're going to use for our self-hosted runners:
kubectl create namespace apps
kubectl create namespace arc-runners
We'll allow our runners certain actions only in this "apps" namespace.
Now, let's set up some Kubernetes resources.
First, we'll create a service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: arc-runners
You can think of this as an ID Card for a Pod.
We already said that the goal is to be able to execute only "get" and "list" on Pods in our cluster through our self-hosted runner, so we need to create a Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: apps
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
This is like a list of allowed actions: it only lets you get pod details and list pods in the "apps" namespace. It does not allow, for example, "delete", "update", or "watch".
Next, we need a Role Binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-pod-reader
  namespace: apps
subjects:
- kind: ServiceAccount
  name: pod-reader-sa
  namespace: arc-runners
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
This is like assigning an ID card to a specific set of permissions.
Let's summarize this real quick: our goal is to allow our runners to execute only kubectl get pods and kubectl get pod <POD_NAME> in the "apps" namespace. So we created a pod-reader role that defines get and list as the only allowed actions, then we created a ServiceAccount, which can hold roles, and finally we assigned the pod-reader role to the service account with a RoleBinding.
You can create the resources with the following:
kubectl apply -f service-account.yaml
kubectl apply -f role.yaml
kubectl apply -f role-binding.yaml
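Before wiring this up to any runners, you can sanity-check the RBAC setup with kubectl auth can-i, impersonating the service account (this assumes the service account lives in the arc-runners namespace, as created above):

# should answer "yes"
kubectl auth can-i list pods -n apps --as=system:serviceaccount:arc-runners:pod-reader-sa
# should answer "no"
kubectl auth can-i delete pods -n apps --as=system:serviceaccount:arc-runners:pod-reader-sa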
3. Authenticate ARC with personal access token (classic)
Go to Your Profile in GitHub -> Settings -> Developer Settings -> Personal Access Tokens -> Tokens (classic) and choose the access token scopes for ARC runners (we'll use repo in our example):
- Repository runners: repo
- Organization runners: admin:org

Save this token someplace safe - we'll use it in the next sections.
4. Create a GitHub repository
Alright, this one is pretty straightforward - you can simply follow the GitHub docs if you're having trouble here.
5. Configure runners
Now, let's configure a Runner Scale Set - this will be a collection of self-hosted runners designed to autoscale: automatically increasing or decreasing the number of runner pods in the Kubernetes cluster based on the number of queued GitHub Actions jobs.
Open your terminal and run the following command to install the runner scale set:
INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"
GITHUB_CONFIG_URL="https://github.com/github_username/repo_name"
GITHUB_PAT="<YOUR_GITHUB_TOKEN>"
SERVICE_ACCOUNT="pod-reader-sa"
helm upgrade --install "${INSTALLATION_NAME}" \
  --namespace "${NAMESPACE}" \
  --create-namespace \
  --set githubConfigUrl="${GITHUB_CONFIG_URL}" \
  --set githubConfigSecret.github_token="${GITHUB_PAT}" \
  --set template.spec.serviceAccountName="${SERVICE_ACCOUNT}" \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
- GITHUB_CONFIG_URL is the repository URL from Step 4 that you'll use to trigger the GitHub Actions workflows
- Replace <YOUR_GITHUB_TOKEN> with the token from Step 3 (see below for a variant that keeps the token off the helm command line)
- SERVICE_ACCOUNT is the name of the service account we created in Step 2
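If you'd rather not pass the PAT directly to helm, the scale set chart can also read it from a pre-created Kubernetes secret that contains a github_token key. Here's a sketch of that variant; the secret name pre-defined-secret is just an example, and the secret has to live in the same namespace as the scale set:

kubectl create secret generic pre-defined-secret \
  --namespace arc-runners \
  --from-literal=github_token="<YOUR_GITHUB_TOKEN>"

helm upgrade --install "${INSTALLATION_NAME}" \
  --namespace "${NAMESPACE}" \
  --create-namespace \
  --set githubConfigUrl="${GITHUB_CONFIG_URL}" \
  --set githubConfigSecret=pre-defined-secret \
  --set template.spec.serviceAccountName="${SERVICE_ACCOUNT}" \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set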
6. Run a GitHub Workflow on a self-hosted runner
Now is the time to see some action!
- Go to your GitHub repository and create a new file .github/workflows/test.yml
- Paste the following content there:
name: Actions Runner Controller Demo
on:
  workflow_dispatch:
jobs:
  GitHub-Actions-Self-Hosted-Runners:
    runs-on: arc-runner-set
    steps:
      - uses: azure/setup-kubectl@v4
        id: install
      - run: |
          echo "🎉 This job uses runner scale set runners!"
          kubectl get pods -n apps
This is a simple GitHub Workflow that we configure to run on our self-hosted runners. It uses the azure/setup-kubectl@v4 action to install a specific version of the kubectl binary on the runner and then runs the kubectl get pods -n apps command.
This command will be executed on our self-hosted runners!
It's important to note the runs-on part: here you must use the INSTALLATION_NAME from Step 5 (arc-runner-set).
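Once the file is committed, you can also dispatch this workflow from the terminal with the GitHub CLI, if you have it installed (a quick sketch; replace the repository placeholder with your own):

gh workflow run test.yml --repo github_username/repo_name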
Now we can go to Actions in GitHub and we'll be able to trigger the workflow:

And here is the result!

The output is "No resources found in apps namespace" because we haven't deployed any applications there, but the operation itself was successful!
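If you'd like the job to actually list something, you can drop a small test deployment into the apps namespace and re-run the workflow (nginx here is just an arbitrary example image):

kubectl create deployment hello --image=nginx -n apps
kubectl get pods -n apps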
If we, for example, tried to delete a pod from our pipeline, the action would fail with 403 Forbidden because of our RBAC setup in Step 2.
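For instance, a step like this added to the workflow above should be rejected by the API server, since the pod-reader role grants neither delete nor any other write verbs (the pod name is just a placeholder):

      - run: kubectl delete pod some-pod -n apps  # rejected: the service account may only get and list pods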
Conclusion
We managed to execute kubectl commands on our local cluster through a GitHub Actions workflow! We've essentially connected our local Kubernetes cluster to our CI/CD pipeline, which gives us great flexibility to automate and customize our cluster operations. With the help of Role-based access control (RBAC), we have also put sensible security boundaries in place and have fine-grained control over the actions our runners are allowed to execute.