Ref(CI): Reduce processor/ram used by jest test locally. #5055
Not sure if it's worth adding to the project, but here is the measuring script that I used:

```js
#!/usr/bin/env node
const { spawn } = require('child_process');
const os = require('os');

// System-wide memory in use (total minus free).
function getUsedMemory() {
  return os.totalmem() - os.freemem();
}

const startMem = getUsedMemory();
console.log(`Starting memory usage: ${(startMem / 1024 / 1024).toFixed(2)} MB`);

const startTime = Date.now();
let peakMemIncrease = 0;

// Run the test suite with a fixed worker count.
const jest = spawn('npx', ['jest', '--maxWorkers=8'], { stdio: 'inherit' });

// Sample memory once a second and track the peak increase over the baseline.
const memoryInterval = setInterval(() => {
  const currentUsed = getUsedMemory();
  const increase = currentUsed - startMem;
  if (increase > peakMemIncrease) {
    peakMemIncrease = increase;
  }
}, 1000);

jest.on('close', (code) => {
  clearInterval(memoryInterval);
  const endTime = Date.now();
  console.log(`\nTest duration: ${(endTime - startTime) / 1000}s`);
  console.log(`Peak memory increase: ${(peakMemIncrease / 1024 / 1024).toFixed(2)} MB`);
  process.exit(code);
});
```

All I had to do was place it in the core folder, play with the maxWorkers value, and replace the …
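One small convenience worth noting (a hypothetical variant, not part of the script above): the `spawn` line could read the worker count from the command line, e.g. `node measure.js 4`, so each measurement run doesn't require editing the file.

```js
// Hypothetical drop-in replacement for the spawn line in the script above:
// take the worker count from argv, defaulting to 8 to match the original.
const maxWorkers = process.argv[2] || '8';
const jest = spawn('npx', ['jest', `--maxWorkers=${maxWorkers}`], { stdio: 'inherit' });
```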
iOS (new) Performance metrics 🚀
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 534ba8c+dirty | 1225.00 ms | 1237.43 ms | 12.43 ms |
| 3e0a5f9+dirty | 1233.65 ms | 1239.10 ms | 5.45 ms |
| eb07ba3+dirty | 1214.49 ms | 1221.59 ms | 7.10 ms |
App size
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 534ba8c+dirty | 3.19 MiB | 4.38 MiB | 1.19 MiB |
| 3e0a5f9+dirty | 3.19 MiB | 4.38 MiB | 1.19 MiB |
| eb07ba3+dirty | 3.19 MiB | 4.38 MiB | 1.19 MiB |
iOS (legacy) Performance metrics 🚀
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 534ba8c+dirty | 1230.22 ms | 1231.18 ms | 0.96 ms |
| 3e0a5f9+dirty | 1226.94 ms | 1230.02 ms | 3.08 ms |
| eb07ba3+dirty | 1222.46 ms | 1220.37 ms | -2.08 ms |
App size
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 534ba8c+dirty | 2.63 MiB | 3.81 MiB | 1.18 MiB |
| 3e0a5f9+dirty | 2.63 MiB | 3.81 MiB | 1.18 MiB |
| eb07ba3+dirty | 2.63 MiB | 3.81 MiB | 1.18 MiB |
Thank you for looking at this and running the metrics 🙇
The changes LGTM 🎸
Running this locally on a MacBook M1 Pro / 32 GB, I didn't notice any difference, but I think we should experiment and see how this performs on CI.
> Not sure if it's worth adding to the project, but here is the measuring script that I used

Thank you for sharing this. I'm not sure if we should add it, but we can keep this PR as a reference.

It will only be noticeable on the latest Pro Max, where the CPU count is higher than 8 cores.
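For context, a quick way to see what different worker caps mean on a given machine (illustrative numbers only; the exact policy is in the diff, which isn't shown here). Jest's documented default for a single run is the number of available cores minus one:

```js
const os = require('os');

// Print what a few maxWorkers policies would allow on this machine.
const cores = os.cpus().length;
console.log(`cores: ${cores}`);
console.log(`default (cores - 1): ${cores - 1}`);
console.log(`25% cap: ${Math.max(Math.floor(cores * 0.25), 1)}`);
console.log(`hard cap at 8: ${Math.min(cores, 8)}`);
```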
Android (new) Performance metrics 🚀
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 3e0a5f9+dirty | 379.92 ms | 450.96 ms | 71.04 ms |
| eb07ba3+dirty | 419.49 ms | 482.12 ms | 62.63 ms |
| 534ba8c+dirty | 472.35 ms | 537.31 ms | 64.96 ms |
App size
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| 3e0a5f9+dirty | 7.15 MiB | 8.42 MiB | 1.27 MiB |
| eb07ba3+dirty | 7.15 MiB | 8.42 MiB | 1.27 MiB |
| 534ba8c+dirty | 7.15 MiB | 8.42 MiB | 1.27 MiB |
Android (legacy) Performance metrics 🚀
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| eb07ba3 | 470.04 ms | 473.35 ms | 3.31 ms |
| 534ba8c | 484.00 ms | 499.93 ms | 15.93 ms |
| 3e0a5f9 | 401.72 ms | 394.98 ms | -6.74 ms |
App size
| Revision | Plain | With Sentry | Diff |
|---|---|---|---|
| eb07ba3 | 17.75 MiB | 20.15 MiB | 2.41 MiB |
| 534ba8c | 17.75 MiB | 20.15 MiB | 2.41 MiB |
| 3e0a5f9 | 17.75 MiB | 20.15 MiB | 2.41 MiB |
These changes only impact local test runs, not the GitHub CI runs.
If your machine has many CPU cores, our tests will spawn too many Node processes, making the test run consume a lot of memory and CPU.
By limiting the number of workers to 25%, I was able to cut the local test time by 16% and reduce memory usage by 54% on a machine with 32 CPU cores and 32 GB of RAM. Assuming our GitHub Actions runners have 2 cores and only 8 GB of RAM, I expect those numbers to be even better there.
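For reference, Jest accepts percentage values for `maxWorkers`, so a cap like that can be expressed in the config (a minimal sketch; the actual file and value changed in this PR may differ):

```js
// jest.config.js — illustrative sketch, not necessarily the exact change in this PR.
module.exports = {
  // Use at most 25% of the available CPU cores for test workers,
  // instead of Jest's default of (available cores - 1).
  maxWorkers: '25%',
};
```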
Here is my locally measured data for the different worker counts:

- Workers: how many workers Jest is allowed to spawn
- Duration: time to complete the test run
- Memory Avg: the average memory increase during the tests, relative to the baseline measured before them
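The script shared above only reports the peak increase, so for the average I assume something like the following was added to the same 1-second sampling loop (a sketch against that script, not a confirmed part of it):

```js
// Possible extension of the sampling loop from the script above:
// accumulate samples so an average increase can be reported at the end.
let sampleSum = 0;
let sampleCount = 0;

const memoryInterval = setInterval(() => {
  const increase = getUsedMemory() - startMem;
  sampleSum += increase;
  sampleCount += 1;
  if (increase > peakMemIncrease) {
    peakMemIncrease = increase;
  }
}, 1000);

// And in the 'close' handler:
// const avgIncrease = sampleCount > 0 ? sampleSum / sampleCount : 0;
// console.log(`Average memory increase: ${(avgIncrease / 1024 / 1024).toFixed(2)} MB`);
```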
#skip-changelog.