Thanks @krafs! Currently this isn't an out-of-the-box feature. Automatic splitting of work would be a substantial piece of work, because the different processes/machines would need to communicate and send/receive module results to each other. But if you can manually work out an efficient way to split up your jobs, then that's possible.

You could pass some sort of environment variable into the program, e.g. `JobCategory=Test` on one agent and `JobCategory=Build` on another, and then either:

- at startup, check that environment variable and conditionally register modules, or
- in a module, override the `ShouldSkip` method and skip it based on a condition such as your environment variable.

Then in your GitHub Actions YAML file (or other build system), just create two or more separate jobs that run your modular pipeline program, each time passing in different variables. There's a sketch of both approaches below.

The only thing to consider is that if a module was run on another agent, then you won't be able to wait for it or get its results on a different agent.
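To make the two options concrete, here's a minimal C# sketch of the `JobCategory` pattern. The module names (`BuildModule`, `RunTestsModule`) are placeholders, and the ModularPipelines signatures used here (`PipelineHostBuilder.Create()`, `AddModule<T>()`, `ShouldSkip`, `SkipDecision`) are written from memory of the library's public API, so check them against the current docs; treat this as an illustration of the env-variable idea rather than a drop-in implementation.

```csharp
// Program.cs -- namespaces and exact signatures are assumptions; adjust to the
// version of ModularPipelines you're using.
using ModularPipelines.Context;
using ModularPipelines.Host;
using ModularPipelines.Modules;

// Option 1: at startup, register modules conditionally based on the environment
// variable the CI job passed in (JobCategory=Build or JobCategory=Test).
var jobCategory = Environment.GetEnvironmentVariable("JobCategory");

var builder = PipelineHostBuilder.Create();

if (jobCategory == "Build")
{
    builder.AddModule<BuildModule>();
}
else if (jobCategory == "Test")
{
    builder.AddModule<RunTestsModule>();
}

await builder.ExecutePipelineAsync();

public class BuildModule : Module<string>
{
    protected override async Task<string?> ExecuteAsync(IPipelineContext context, CancellationToken cancellationToken)
    {
        // ... run the build commands here ...
        await Task.Yield();
        return "build finished";
    }
}

// Option 2: register every module on every agent, but let the module skip itself
// on agents that weren't started with the matching JobCategory.
public class RunTestsModule : Module<string>
{
    protected override Task<SkipDecision> ShouldSkip(IPipelineContext context)
    {
        var isTestAgent = Environment.GetEnvironmentVariable("JobCategory") == "Test";
        // SkipDecision is assumed to convert implicitly from bool; if it doesn't,
        // return the library's explicit skip/continue values instead.
        return Task.FromResult<SkipDecision>(!isTestAgent);
    }

    protected override async Task<string?> ExecuteAsync(IPipelineContext context, CancellationToken cancellationToken)
    {
        // ... run the test commands here ...
        await Task.Yield();
        return "tests finished";
    }
}
```

Your build system then just runs this same program in two separate jobs, one with `JobCategory=Build` and one with `JobCategory=Test` set in the environment.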
Liking the idea of the project! Found it when researching how to implement this kind of app myself :)
Question:
Could ModularPipelines be made to distribute its parallelizable work across multiple agents, instead of running it all on one agent?