refactor: make a standard API to run tests inside a worker #4441
Conversation
(force-pushed from 5eebc3b to b52461e)
The `runtime/workers` changes look good. I would imagine similar changes on the main thread's side would simplify things a lot.

Something like this should work on the main thread's side for a single instance of Tinypool. I didn't test it, but it might work as is:

```ts
import { Tinypool } from 'tinypool'

const pool = new Tinypool({
  filename: 'dist/worker.js',
  // min/max options here. Shared by all pools.
})

async function waitForEmptyQueue() {
  return new Promise<void>((resolve) => {
    if (pool.queueSize === 0) {
      return resolve()
    }
    pool.once('drain', resolve)
  })
}

const runningTasks: Promise<void>[] = []

// TODO handle vm ones too
async function runTests(files: string[], vitestPool: 'threads' | 'forks') {
  const runtime = vitestPool === 'threads' ? 'worker_threads' : 'child_process'
  await pool.recycleWorkers({ runtime })

  const data = {
    pool,
    files,
    // ... add more options here
  }

  // At this point there may be previous tasks running on a different runtime.
  // All tasks that are now pushed to the queue with `run` will start with the new runtime.
  // Note that `run` is not await'ed. An empty queue will indicate finished tasks.
  runningTasks.push(pool.run(data))

  await waitForEmptyQueue()
}

type TestFile = string

const testsByPool: ['threads' | 'forks', TestFile[]][] = [
  ['threads', ['... files ...']],
  ['forks', ['... files ...']],
]

// Run tests
for (const [pool, files] of testsByPool) {
  await runTests(files, pool)
}

// Wait for the last round's tasks to finish too
await Promise.all(runningTasks)
```
Is it fast though? What does this mean for a non-isolated environment? Does it mean we would need to wait to start another process/worker and then restart it again? Can we optimize it somehow? Maybe create half workers and half forks, and close/restart the other runtime (so, 2/2 -> 3/1) when they finish running? (That is, when there is a workers queue but forks are idling.)
Calling `recycleWorkers` [...] From Tinypool's API's perspective it's identical usage. When we want to change the runtime, [...]

Currently it's working like this: [...]

Explaining this would be easier with a whiteboard 😄. Hopefully this explains the situation a bit.
But if we call it 100 times at the same time, and isolation is disabled (so we only need 4 workers, for example), won't it just create new ones and destroy the previous ones instead of reusing them?
It will destroy workers and create new ones, yes. And that's expected. We are also using this mechanism when looping over environments: see `packages/vitest/src/node/pools/threads.ts`, lines 198 to 200 (commit 7f50229).
But we definitely should not call it 100 times. Instead, we should sort files by pools and their options, like it's done currently.
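For illustration, a minimal sketch of that grouping step (the helper name and input shape are hypothetical, not from this PR):

```ts
type PoolName = 'threads' | 'forks'

// Group test files by their pool so the runtime only needs to be switched
// (via recycleWorkers) once per group rather than once per file.
function groupByPool(files: Array<{ file: string; pool: PoolName }>): Map<PoolName, string[]> {
  const groups = new Map<PoolName, string[]>()
  for (const { file, pool } of files) {
    const group = groups.get(pool) ?? []
    group.push(file)
    groups.set(pool, group)
  }
  return groups
}

// Usage with the runTests() helper sketched earlier in this thread:
// for (const [pool, groupedFiles] of groupByPool(allFiles)) {
//   await runTests(groupedFiles, pool)
// }
```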
(force-pushed from b630c30 to f8ffa7b)
@AriPerkkio looks like vm now takes even more memory :(
I haven't checked the latest changes yet, but you can compare heap profiles by running with:

```ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "vmThreads",
    poolOptions: {
      vmThreads: {
        execArgv: ["--heap-prof", "--heap-prof-dir=./profiling"],
      },
    },
  },
});
```

Then open the files in Chrome dev-tools.
It looks like the vite-node test calls a mocked `process.exit`.
I've also run into this during development. Back then it caused some of those 100% CPU spikes and left zombie processes. Maybe we should add some handling for these cases:

```diff
  process.exit(1)
+ // This is run if exit was overwritten
+ throw new Error('Unable to exit since process.exit is overwritten')
```
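As a standalone illustration of that suggestion (a sketch of the same idea, not code from this PR):

```ts
// If a test replaced process.exit with a mock that doesn't actually exit,
// the call below falls through; throwing makes that visible instead of
// leaving a hung worker or zombie process behind.
function exitOrThrow(code: number): never {
  process.exit(code)
  // Only reached when process.exit was overwritten.
  throw new Error('Unable to exit since process.exit is overwritten')
}
```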
What kind of setup does reproducing this require? I'm seeing some memory increment on [...]
I don't know actually, but CI always fails here. I didn't start working on it locally yet.
(force-pushed from e9363b9 to 2ba2c00)
@AriPerkkio this is reproducible 100% on my local machine:
It's called because this code was removed:

```ts
const exit = process.exit

try {
  // run tests
} finally {
  process.exit = exit
}
```

After bringing it back, tests work correctly. 🤔 🤔 🤔
Tinypool uses first [...]. If overwriting [...]. Is this issue still reproducible easily? I'm interested to see where Node hangs when a process/thread is trying to exit in these cases. Might be able to capture that with a debugger. 🤔 I'll take a good look at this PR later this week and run some manual tests myself.
Yes, if you remove the `process.exit` restore shown above.
Need to add [...]
I wonder why we exit the whole process here instead of [...]: see `packages/vitest/src/node/pools/rpc.ts`, lines 8 to 11 (commit c946b1c).
Also need to check why Tinypool calls [...]. Here's a minimal repro for the zombie process:

```ts
import { fork } from "node:child_process";
import { fileURLToPath } from "node:url";

const filename = fileURLToPath(import.meta.url);
const isMainThread = !Boolean(process.env.CHILD_PROCESS);

if (isMainThread) {
  console.log("Running main");

  const subprocess = fork(filename, { env: { CHILD_PROCESS: "true" } });
  subprocess.unref();
  subprocess.on("message", console.log);

  await new Promise((r) => setTimeout(r, 3000));

  console.log("Exit main");
  process.exit();
} else {
  setInterval(() => console.log("Fork here"), 1000);
}
```
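For comparison, here is a variation of the repro above (an illustrative sketch, not a proposed change) where the parent explicitly terminates the unref'ed child on exit, which avoids the lingering process:

```ts
import { fork } from "node:child_process";
import { fileURLToPath } from "node:url";

if (!process.env.CHILD_PROCESS) {
  const child = fork(fileURLToPath(import.meta.url), {
    env: { CHILD_PROCESS: "true" },
  });
  // unref() alone lets the parent exit while the child keeps running;
  // killing the child in an 'exit' handler prevents the zombie.
  child.unref();
  process.on("exit", () => child.kill());

  console.log("Exit main");
  process.exit();
} else {
  setInterval(() => console.log("Fork here"), 1000);
}
```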
(force-pushed from f8fbb67 to ba5715e)
(force-pushed from 420d645 to 8619291)
Instead of having several entry points for each pool, Vitest now exposes a single entry point that accepts an object with a `worker` property, which should export an object with `getRpcOptions` and `runTests` methods.

- `getRpcOptions` should return an object that is passed down to `birpc`. This is primarily to establish communication between the threads (Vitest exposes `createForksRpcOptions` and `createThreadsRpcOptions` from the `vitest/workers` entry point).
- `runTests` receives a `WorkerGlobalState` object, which should be used to decide what tests to run and what the current config is (Vitest exposes `runVmTests` and `runBaseTests` methods from `vitest/workers`).

All pool workers are still bundled, but now they are located inside the `workers` folder:

- `vitest/dist/workers/forks.js`
- `vitest/dist/workers/threads.js`
- `vitest/dist/workers/vmForks.js`
- `vitest/dist/workers/vmThreads.js`

`projectFiles` on the Vitest instance doesn't exist anymore; instead, there is `distPath`, which can be used to resolve pool paths.

We don't consider this to be a breaking change because the Vitest API is in an experimental state and doesn't follow SemVer. The `pool` option is a public API, but it doesn't have to rely on the experimental Vitest API to work; its functionality was not changed.
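To make the shape of this API concrete, here is a rough sketch of what a worker module could look like. The import paths, type names, and exact signatures are assumptions based on the description above, not verified code from this PR:

```ts
// Hypothetical worker module; names and signatures are assumed.
import { createThreadsRpcOptions, runBaseTests } from 'vitest/workers'
import type { WorkerGlobalState } from 'vitest'

export default {
  // Returns the object passed down to birpc to establish communication
  // between this worker and the main thread.
  getRpcOptions: createThreadsRpcOptions,

  // Receives the WorkerGlobalState, which carries the files to run and the
  // current config; here it simply delegates to the built-in non-VM runner.
  runTests(state: WorkerGlobalState) {
    return runBaseTests(state)
  },
}
```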