Releases: frankkopp/workerpool
Version 1.0.0
Requirements for Version 1.0
This implementation of a Worker Pool aims to meet these requirements.
- Configurable number of workers - OK
- Configurable size for job queue - OK
- Non-blocking until the job queue is full - OK
- Configurable if finished jobs should be queued or ignored - OK
- Stoppable (skip waiting jobs) - OK
- prevent adding of new jobs - OK
- completing all jobs already started - OK
- skipping all jobs not yet started - OK
- keep finished queue (channel) open - OK
- ignore multiple calls to stop - OK
- Closable (complete all waiting jobs) - OK
- prevent adding of new jobs - OK
- completing all jobs already started - OK
- start and complete all jobs already in the job queue - OK
- keep finished queue (channel) open - OK
- ignore multiple calls to close - OK
- be stoppable (skip all remaining queued jobs) - OK
- Allow queuing of jobs - OK
- if the job queue still has capacity return immediately - OK
- if the job queue is full, block the caller until a slot is free - OK
- if the job queue is closed, wake/unblock any callers who are blocked and return an error - OK
- If the queue is closed, return immediately with an error - OK
- Allow retrieving of finished jobs - OK
- Processes can retrieve finished jobs by polling the WorkerPool - OK
- This can be blocking or non-blocking - OK
- In case of non-blocking the response must either:
- return a finished job - OK
- return nil and a done signal that is true if the WorkerPool can never produce another finished job - OK
- E.g. if the job queue is closed but there are still jobs in progress ==> false
- E.g. if the job queue is closed and there are no more jobs in progress ==> true
- In case of blocking:
- wait until a finished job becomes available if the WorkerPool is still able to produce finished jobs - OK
- E.g. the job queue is not closed
- E.g. the job queue is closed but there are still jobs in progress
- unblock and return nil if the job queue is closed and there are no more jobs in progress - OK
Ideas for future versions:
- make Jobs interruptible - e.g. add interface function "Stop()"
Definition of a work package (Job)
- Implements an interface to be "runnable" (e.g. has func job.Run())
- The WorkerPool does not need any specific knowledge about the Job apart that it implements the interface
- If the work package produces a result or an error, it should be stored in the Job instance within the Run() function
- The Result can be stored in the work package struct itself
- this means we can't make copies of Jobs but need to use pointers to the work package instances
- this is somewhat tricky with Go and interfaces
Challenges so far:
- avoid busy polling loops - OK
- context.WithCancel was the solution.
- how to interrupt / wake up a read from a channel?
- using select and context with cancel (Done())
- WaitGroup does not give access to its counter - so a separate counter is necessary to see how many workers are still running
- no better solution found yet
- how to interrupt or even kill a running goroutine
- only with flags or channels - no other way found yet
- how to address a specific goroutine - like a pointer to a thread in C++
- no way found yet
- Using an interface for Jobs - it is easy to get confused when working with pointers and interfaces
- works, but still needs lots of careful attention