
Conversation

@thomasbreland (Contributor)

The number of jobs should scale with the runner CPU count, up to a point. The bottleneck becomes network / disk I/O at higher job counts, and 8 seems like a reasonable maximum based on my experience at this time.

Many of the private repos that I work on use GitHub-hosted runners with only 2 cores, and oversubscribing there can actually lead to slower runs.
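For concreteness, a tiny sketch of what that heuristic yields on a few runner sizes (the cap of 8 mirrors the value proposed in this PR):

```js
const cap = 8

// Job count for a given core count under the proposed heuristic.
const jobsFor = (cores) => Math.min(cores, cap)

console.log(jobsFor(2))  // 2  (small GitHub-hosted runner)
console.log(jobsFor(4))  // 4  (standard GitHub-hosted runner)
console.log(jobsFor(16)) // 8  (large runner; capped, since I/O dominates)
```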

@eregon (Member) commented Oct 26, 2025

#673 (comment)

So I think 4 was and might still be the fastest on the GH-hosted runners.
Could you benchmark that by doing some runs with various values for jobs?
I wouldn't want to make the install slower on GH-hosted runners.

@thomasbreland (Contributor, Author)

This is likely because most public runners have four cores; in that case the result would be the same: it would still use four jobs. Private default runners and paid larger runners do not all have the same number of CPU cores.

https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for--private-repositories

@eregon (Member) commented Oct 27, 2025

That page says Linux & Windows have 2 CPUs, macOS 3 or 4.
But let's see the number of jobs used in the CI directly, which probably includes hyperthreads.

bundler.js Outdated
}

// Number of jobs should scale with runner, up to a point
const jobs = Math.min(os.cpus().length, 8)
@eregon (Member)

Should it be os.availableParallelism() maybe? I found https://stackoverflow.com/a/77532237/388803
Also os.cpus().length doesn't seem to work on Windows, see CI results
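For reference, a minimal sketch of what the suggested switch could look like, with a fallback to os.cpus().length on older Node versions; the fallback is my own addition for illustration and not necessarily what the eventual commit does:

```js
const os = require('os')

// Prefer availableParallelism() where it exists (Node 18.14+ / 19.4+),
// fall back to cpus().length otherwise, and keep the cap of 8 because
// network / disk I/O becomes the bottleneck beyond that.
const cpuCount = typeof os.availableParallelism === 'function'
  ? os.availableParallelism()
  : os.cpus().length
const jobs = Math.min(cpuCount, 8)
console.log(`Using ${jobs} jobs`)
```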

@eregon (Member), Oct 27, 2025

Yes, it should be, according to the https://nodejs.org/api/os.html#oscpus docs. I pushed a commit to try os.availableParallelism()

@eregon (Member)

Mmh, that doesn't seem to work on Windows either :/

@eregon (Member)

Ah the error probably means it needs to be a string and not a number for CLI arguments
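For illustration, a minimal sketch of the kind of fix this implies, assuming the jobs value ends up in an argument array passed to @actions/exec; the bundle invocation here is hypothetical and not the action's actual call:

```js
const os = require('os')
const exec = require('@actions/exec')

async function installGems() {
  const jobs = Math.min(os.availableParallelism(), 8)
  // exec() expects string arguments; interpolating the number into a
  // template literal (or calling String(jobs)) avoids the Windows failure.
  await exec.exec('bundle', ['install', `--jobs=${jobs}`])
}
```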

@thomasbreland (Contributor, Author)

Good catch. I should know by now that Windows is particular about objects and types. I've been working mostly on Mac and Linux, and I figured the test runners would catch any issues anyway, but this is good to know.

@eregon (Member) commented Oct 27, 2025

> That page says Linux & Windows have 2 CPUs, macOS 3 or 4. But let's see the number of jobs used in the CI directly, which probably includes hyperthreads.

From https://github.com/ruby/setup-ruby/actions/runs/18838903560/job/53746296174?pr=822#step:3:59
ubuntu-22 & 24 & ubuntu-arm variants & macos-13 & macos-15-intel: 4 jobs
macos-14 & macos-15: 3 jobs

So it looks safe indeed and shouldn't be slower except maybe a little bit on macOS arm64 (3 instead of 4).

* As https://nodejs.org/api/os.html#oscpus says:
  os.cpus().length should not be used to calculate the amount of parallelism available to an application. Use os.availableParallelism() for this purpose.
@eregon (Member) left a comment

Thank you for the PR

@eregon merged commit d5126b9 into ruby:master on Oct 27, 2025
@thomasbreland (Contributor, Author)

> That page says Linux & Windows have 2 CPUs, macOS 3 or 4. But let's see the number of jobs used in the CI directly, which probably includes hyperthreads.
>
> From https://github.com/ruby/setup-ruby/actions/runs/18838903560/job/53746296174?pr=822#step:3:59
> ubuntu-22 & 24 & ubuntu-arm variants & macos-13 & macos-15-intel: 4 jobs
> macos-14 & macos-15: 3 jobs
>
> So it looks safe indeed and shouldn't be slower except maybe a little bit on macOS arm64 (3 instead of 4).

My bet is that it will still be marginally faster on average, because it will just use three jobs, but it won't be a huge difference overall. Using slightly more jobs than the number of cores does not always slow things down, but it can, because of context switching and overhead / higher memory usage. In general, if nothing else is running, matching the core count is a safe bet until you get into extreme parallelism. In cases where other services such as PostgreSQL or Redis are running on the same machine, it might make sense to use even fewer.
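Purely as an illustration of that last point (nothing like this is in the PR), a headroom heuristic could look like:

```js
const os = require('os')

// Hypothetical heuristic: reserve a core per sidecar service (e.g.
// PostgreSQL or Redis running on the same runner) before capping at 8.
function bundlerJobs(reservedCores = 0) {
  const cores = os.availableParallelism()
  return Math.max(1, Math.min(cores - reservedCores, 8))
}

console.log(bundlerJobs())  // use every core, capped at 8
console.log(bundlerJobs(2)) // leave two cores for PostgreSQL + Redis
```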
