Hello, I'm trying to migrate from the legacy ARC to the new gha-runner-scale-set. With the legacy ARC I had a RunnerDeployment and added multiple labels to it. Is it also possible to add multiple runner labels to a gha-runner-scale-set? Thanks for helping me! Timon
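For context, a legacy RunnerDeployment with multiple labels typically looked roughly like this (a minimal sketch; the name, repository and labels below are placeholders):

```yaml
# Legacy (summerwind) ARC: multiple labels on a RunnerDeployment.
# Name, repository and labels are placeholders for illustration.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo
      labels:
        - custom-linux
        - gpu
```

Jobs could then target any subset of those labels, e.g. `runs-on: [self-hosted, custom-linux, gpu]`.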
Labels are not supported and there are no plans to support them. It's already discussed in our documentation (https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#about-runner-scale-sets and https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#using-arc-runners-in-a-workflow): "Runner scale sets cannot be comprised of heterogeneous types of runners (different OSs, specs etc.). As such, each deployment should have a specific runner configuration and a unique name. If you want different types of runners, you need to configure different runner scale sets."
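For illustration, with runner scale sets a workflow job targets exactly one scale set by its installation name; a minimal sketch, assuming an installation named arc-runner-set-ubuntu:

```yaml
# Minimal workflow targeting one scale set by its installation name;
# "arc-runner-set-ubuntu" is an assumed name, not something from this thread.
name: build
on: push
jobs:
  build:
    runs-on: arc-runner-set-ubuntu   # single scale set name instead of a label list
    steps:
      - uses: actions/checkout@v4
      - run: make build
```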
The reason we opted for single labels on ARC (and are doing the same thing on the larger hosted runners as well) is to get us ready to improve the dev UX around 'why isn't my job starting'. When we had multiple labels, deterministic assignment of jobs to a 'scale set' wasn't possible. The Venn diagram of runners that matched labels vs. the labels I was looking for meant that jobs always just waited until 'something was free', and 'where' something would start was less clear. This conflicted with a frequently raised request: 'see the queue of jobs and when mine will start'. To mitigate this, we want to move to a world of more deterministic assignment of jobs to a scale set that takes jobs in a FIFO fashion. We decided the way to approach this was to move to single named scale sets/queues as a target. That way, when a job is assigned it 'goes to its queue and waits' rather than sitting in some meta queue of all jobs for all potential machines. I do appreciate that this results in a change to workflow files :( which is why we haven't "done away" with multiple labels at a system level yet. What we don't really want to do, if we can help it, is create an experience that is good for reviewing the queue on our hosted machines but sucks for self-hosted because no one moves off multi-label. Which lastly led to the decision: what if we used the new ARC as a bit of a forcing factor to help gently nudge people to the future, if they want to come over.
👋 I don't want to duplicate too much here; @tmehlinger put together quite a full and detailed breakdown at https://github.com/actions/actions-runner-controller/discussions/3340, so I have added some updates there following more discussions with @juliandunn and other folks :)
I am trying to use CodeQL with self-hosted runners. The documentation on GitHub's help pages says to add a label to the runner, but it seems like ARC does not support adding additional labels. Would I have to create a scale set just for CodeQL with a matching name?
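Not from this thread, but if this is about CodeQL default setup, which selects self-hosted runners by the code-scanning label, then with the new ARC the scale set name itself would presumably have to be code-scanning, since the name is the only targetable "label". A sketch of the relevant Helm values for the gha-runner-scale-set chart (the URL and secret name are placeholders):

```yaml
# Sketch of values.yaml overrides for the gha-runner-scale-set chart, assuming
# CodeQL default setup looks for runners carrying the "code-scanning" label.
# githubConfigUrl and githubConfigSecret below are placeholders.
githubConfigUrl: https://github.com/my-org
githubConfigSecret: arc-github-app-secret
runnerScaleSetName: code-scanning   # the scale set name is the only "label" jobs can target
```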
Gah - only 2 weeks ago I finally got past the hurdle of the git CLI being added to the runner container, and now this 🤯 Not even being able to use the self-hosted label is painful
There is not even an option in GitHub org settings to forbid using public runners. With such an option, going live with …
The decision here seems to contradict guidance GitHub gives for other parts of their product suite. See "Configuring self-hosted runners for code scanning in your enterprise": enabling GHAS code scanning on self-hosted runners requires you to set a label called "self-hosted". Between this thread and that document, am I to take it that you cannot use code scanning on self-hosted runners when using ARC? This seems a bit short-sighted. And before anyone suggests it: we are moving to EMU and removing access to public runners through that.
Hey everyone! I'm not a GitHubber, but the new ARC looks well designed with a deliberate constraint of one runner label, so that scheduling is simple and the runners are more scalable, thanks to the out-of-the-box support for "sub-controller sharding" via the listener-per-RunnerScaleSet architecture. In terms of infrastructure cost (I saw this mentioned in relation to the lack of multiple runner labels), listener pods cannot be scaled to zero but are relatively small, and you can still scale runner pods to zero with the new ARC.

Honestly speaking, I guess there could be a "potential" enhancement to the new ARC where listener processes are "consolidated" into one pod, so that you can run multiple RunnerScaleSets with a smaller footprint (mainly memory usage across the cluster). It would then look like there's a controller pod and a listener pod in the cluster and nothing else, the equivalents of the legacy ARC's controller and webhook server, respectively. Still, there's no need to have multiple labels per RunnerScaleSet. Put another way, listener-per-RunnerScaleSet is actually a killer feature to me, because you no longer need to deploy multiple ARC instances to scale horizontally. Could everyone please read my lengthy comment at actions/actions-runner-controller#3598 (comment) and share your thoughts?

I see requests for more queue visibility mentioned in various threads related to the runner label count limitation, so let me leave my opinion on that too: I would love it if GitHub Actions added more visibility into queue statuses and lengths. But again, multiple labels per runner scale set looks unnecessary to me.
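To make the scale-to-zero point concrete, here is a minimal sketch of the per-installation autoscaling values in the gha-runner-scale-set chart (the scale set name is made up); each installation keeps its small listener pod, while the runner pods go to zero when idle:

```yaml
# Per-installation values.yaml sketch: runner pods scale to zero while the
# (small) listener pod for the installation remains. Name is illustrative.
runnerScaleSetName: linux-amd64-large
minRunners: 0     # no runner pods while the queue is empty
maxRunners: 20    # cap for bursts
```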
So, to get this straight:
So we basically have the worst of both worlds, right? Or am I missing something?
I just want to call out that there's an absolute conflict in both the documentation and the UI: the docs say that label management is supported, yet it flat out isn't the case for large runners, and the one line saying so is deeply hidden in the wrong spot. And the reasoning feels insane, because no real reasoning is given; it's just something critical that doesn't work. It's absolutely maddening that we can't manage our labels, because the group of runners is defined by the query and perspective we look at them from. Using labels allows us to treat the same pool of resources as larger and lets us get our work done faster, because a set of 3 runners can belong to multiple distinct groups depending on the labels used. If a CPU supports multiple chipsets, or a runner can do different things, and I need specific things for testing, builds, etc., then labels let me make the same runner part of multiple different pools and respond more flexibly. So it honestly feels to me like GitHub is nerfing runners to intentionally make them less useful, because you want us to spend more money on more runners instead of letting us get our work done with what we currently have. Which, incidentally, has also been the result of all of this, because now people are required to spin up more runners and throw them explicitly into groups that can't be shared... so it feels like technical debt that's creating technical inflation and bigger invoices, but not actually helping developers, imho.
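For what it's worth, the overlapping-pools pattern described above is what legacy label matching allowed; a sketch with made-up labels:

```yaml
# Legacy multi-label matching: the same small pool could serve several
# different runs-on selections at once (labels below are illustrative only).
name: legacy-label-example
on: push
jobs:
  unit-tests:
    runs-on: [self-hosted, linux]          # any runner in the pool
    steps:
      - run: make test
  gpu-tests:
    runs-on: [self-hosted, linux, gpu]     # only the gpu-labelled subset
    steps:
      - run: make gpu-test
```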
Ok, only one label. But why does that label have to be locked to the name of the resources in K8s? K8s resources have different syntax constraints from GitHub labels. I'm trying to use a perfectly valid GitHub runner label, 'linux-x86_64' for instance, the same one I used for my non-ARC self-hosted runner.
This is especially problematic in the context of multi-platform reusable workflows. All consumers are required to name their scale sets identically or provide workflow input overrides (see the sketch below). Previously all we needed was "just tag your runners with this generic/free label"; now they need to design their infrastructure around this name. Which leads directly into the next arbitrary limitation of only 10 precious inputs on … There's very good value in using labels that are distinct from the runner's name for non-ARC self-hosted runners, and ARC nukes that value. Enterprise use requires a higher level of sophistication and flexibility, not just the lowest common denominator. GitHub is missing the mark here.
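A sketch of the "workflow input override" workaround mentioned above, assuming a reusable workflow with a made-up default scale set name (the underscore in linux-x86_64 would not be valid in a Kubernetes resource name, hence the hyphenated default):

```yaml
# Reusable workflow that takes the scale set name as an input so consumers who
# named their scale sets differently can override it. Names are illustrative.
on:
  workflow_call:
    inputs:
      runner:
        description: Runner scale set name to run on
        type: string
        default: linux-x86-64
jobs:
  build:
    runs-on: ${{ inputs.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: make build
```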