
Added Unsafe Methods for Group Processing #2

Open · wants to merge 7 commits into base: master

Conversation

@racerxdl commented Jun 4, 2018

I changed the library a bit to enable batch processing.

It doesn't change the behavior of the current methods, but I added a few new ones:

  • UnsafeAdd => Adds an item without locking
  • UnsafeNext => Gets the next item without locking
  • UnsafeLock => Locks the queue
  • UnsafeUnlock => Unlocks the queue
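
Roughly, the idea looks like this (a simplified sketch only, assuming the queue guards an internal slice with a sync.Mutex; the fields and package layout here are illustrative, not the exact code in this PR):

package fifo

import "sync"

// Queue is an illustrative FIFO guarded by a mutex.
type Queue struct {
    mu    sync.Mutex
    items []interface{}
}

// UnsafeLock / UnsafeUnlock expose the internal mutex so a caller can
// hold it across a whole batch of operations.
func (q *Queue) UnsafeLock()   { q.mu.Lock() }
func (q *Queue) UnsafeUnlock() { q.mu.Unlock() }

// UnsafeAdd appends an item without taking the lock; the caller is
// expected to hold it via UnsafeLock.
func (q *Queue) UnsafeAdd(item interface{}) {
    q.items = append(q.items, item)
}

// UnsafeNext removes and returns the first item without taking the
// lock; it returns nil when the queue is empty.
func (q *Queue) UnsafeNext() interface{} {
    if len(q.items) == 0 {
        return nil
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}

// Add is the existing locking variant, shown for contrast.
func (q *Queue) Add(item interface{}) {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.items = append(q.items, item)
}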

That might seem a bit odd, but I have a use case here. Suppose I have one routine that needs N samples from the FIFO, and another routine that puts V samples into it.

In the current version you would do:

func RoutineThatPuts() {
    // items is a slice with the V samples to enqueue
    for i := 0; i < V; i++ {
        fifo.Add(items[i])
    }
}

func RoutineThatGets() {
    myitems := make([]item, N)
    for i := 0; i < N; i++ {
        // type assertion, assuming Next returns interface{}
        myitems[i] = fifo.Next().(item)
    }
}

So two things happen here:

  1. Each iteration of the loop calls Lock/Unlock on the mutex.
  2. If the two operations run concurrently, their adds and removes interleave at the "same" time.

That certainly won't cause any data corruption, but it can hurt performance because of the mutex Lock() / Unlock() calls (which are fairly expensive according to pprof). With my changes I can do this:

func RoutineThatPuts() {
    fifo.UnsafeLock()
    defer fifo.UnsafeUnlock()
    for i := 0; i < V; i++ {
        fifo.UnsafeAdd(items[i])
    }
}

func RoutineThatGets() {
    myitems := make([]item, N)
    fifo.UnsafeLock()
    defer fifo.UnsafeUnlock()
    for i := 0; i < N; i++ {
        myitems[i] = fifo.UnsafeNext().(item)
    }
}

This way the loops run without locking/unlocking the FIFO on every iteration, and each routine finishes writing (or reading) its whole batch before the other one touches the queue.

It doesn't impact the current flow of Add / Next in any way, so this is just a new feature I would like to add to the official library, since it might be useful depending on the use case.

@GeertJohan (Member) commented Mar 24, 2019

Hmm, I can see the use case, but why not do this with:

  • AddList(items []interface{})
  • NextN(n int) []interface{}

That keeps the library simple to use and still covers the kind of operation you need.
It's actually even faster than the iteration solution: NextN(..) can slice off the required number of items in one operation, instead of making a function call n times, running the loop (for...), and appending each item separately to the myitems slice.
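
Something along these lines (a rough sketch only, reusing the illustrative Queue type sketched above with its mutex-guarded items slice; not a definitive implementation):

// AddList appends all items under a single lock acquisition.
func (q *Queue) AddList(items []interface{}) {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.items = append(q.items, items...)
}

// NextN removes and returns up to n items with one slice operation
// instead of n separate Next calls.
func (q *Queue) NextN(n int) []interface{} {
    q.mu.Lock()
    defer q.mu.Unlock()
    if n > len(q.items) {
        n = len(q.items)
    }
    out := make([]interface{}, n)
    copy(out, q.items[:n])
    q.items = q.items[n:]
    return out
}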

@racerxdl (Author)

Well, that's a good point. Doing it.

*	Created LockingFifo to avoid having a mutex locked when not needed
*	Added Peek call
*	Added AddList
*	Added NextN
@racerxdl (Author) commented Mar 24, 2019

OK, I added them. NextN can still be improved by slicing, though (I left a TODO).

I also did:

  • Removed Lock / Unlock from Queue and created a LockingQueue (which wraps the queue) for when someone needs a goroutine-safe queue (sketched below)
  • Added Peek (as suggested in the comments)
  • Added tests for NextN and AddList
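
Roughly, the new layout looks like this (a simplified sketch that replaces the earlier illustrative Queue; not the exact code in the commits):

// Queue now holds the items without any locking of its own; callers
// manage their own synchronization.
type Queue struct {
    items []interface{}
}

func (q *Queue) Add(item interface{}) {
    q.items = append(q.items, item)
}

func (q *Queue) Next() interface{} {
    if len(q.items) == 0 {
        return nil
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}

// Peek returns the next item without removing it.
func (q *Queue) Peek() interface{} {
    if len(q.items) == 0 {
        return nil
    }
    return q.items[0]
}

// LockingQueue wraps Queue with a sync.Mutex so it is safe for
// concurrent use by multiple goroutines.
type LockingQueue struct {
    mu sync.Mutex
    q  Queue
}

func (l *LockingQueue) Add(item interface{}) {
    l.mu.Lock()
    defer l.mu.Unlock()
    l.q.Add(item)
}

func (l *LockingQueue) Next() interface{} {
    l.mu.Lock()
    defer l.mu.Unlock()
    return l.q.Next()
}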

@racerxdl (Author)

Just added a Travis script for testing with golangci-lint and coveralls, based on my logging repo: https://github.com/quan-to/slog

Just enable this repo at https://coveralls.io and on Travis, and it should be good to go.

@racerxdl (Author)

OK, I just noticed that the default was supposed to be safe, not unsafe. So I changed Queue to UnsafeQueue and renamed LockingQueue to Queue. That way the default is safe for goroutines.

@GeertJohan (Member)

Can we remove the UnsafeQueue completely? It doesn't seem to add much over the normal Queue anymore.
