Logger types

The concept

There are two types of loggers: synchronous and asynchronous.

Synchronous

A synchronous logger processes log messages (checks constraints and exceptions, dispatches, and formats) in the same goroutine where the Trace/Debug/.../Critical func was called. Like any blocking operation, this affects application performance, but it preserves time order: the log record is produced by the time the Trace/Debug/.../Critical func returns.

Asynchronous

An asynchronous logger performs message creation and processing in two different goroutines: Trace/Debug/.../Critical put messages in a queue, and a consumer goroutine fetches them from the queue and performs the logging. There are different asynchronous behaviors based on the consuming algorithm (a minimal sketch of the underlying producer/consumer pattern follows the list below):

  • Async loop - processes the queue in a simple 'for' loop. The async loop logger doesn't block the caller goroutine and makes no pauses between consuming and processing messages. It might lead to high load when the queue gets long.
  • Async timer - processes queue messages only at timer ticks. This behavior is the most lightweight in terms of application load, because the frequency of logger operations is strictly limited by the interval parameter. However, the interval must be set reasonably. The main rule is that the average interval between produced messages should be greater than the average interval between consumed messages. If you disregard this rule, the logger accumulates too many messages in the queue; once the queue size exceeds its limit, the logger flushes the messages, causing a performance lag.
  • Async adaptive - acts like the async timer logger, but its timer interval depends on the current queue message count. This type of logger was created to avoid log message queue overflow: the more messages are in the queue, the faster they are fetched. It is demonstrated in seelog-examples/adaptive.
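
To make the producer/consumer idea concrete, here is a minimal self-contained sketch of an async-loop-style consumer. It is an illustration only, not seelog code: the channel, its buffer size, and the message type are assumptions.

package main

import "fmt"

// queue stands in for the logger's internal message queue
// (seelog's real queue and its limits are not shown here).
var queue = make(chan string, 100)

// consume runs in its own goroutine and processes messages in a simple
// 'for' loop, like the async loop behavior above. An async timer variant
// would instead handle one message per tick of a time.Ticker.
func consume(done chan<- struct{}) {
	for msg := range queue {
		fmt.Println("processed:", msg) // formatting and dispatching would happen here
	}
	close(done)
}

func main() {
	done := make(chan struct{})
	go consume(done)

	// The caller goroutine only enqueues messages; it doesn't block on
	// formatting or I/O as long as the queue has free slots.
	for i := 0; i < 5; i++ {
		queue <- fmt.Sprintf("message %d", i)
	}

	close(queue)
	<-done // wait until the consumer drains the queue
}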

Flush

We've just given the rule for tuning the async timer's interval: the average interval of message consuming should be less than the average interval of message producing.

The current log message limit in the queue is 10000.

But how do you know whether you have set the interval correctly and unwanted flushes don't happen? This is pretty simple: Seelog writes the following message to stdout (Seelog uses stdout to output its internal errors, messages, etc.): "Seelog queue overflow: more than 10000 messages in the queue". So if you are tuning your async timer interval and want to check that the queue is okay, watch for these flush messages from Seelog in your stdout.
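
To get a feel for when that warning would appear, here is a back-of-the-envelope sketch. The intervals are made-up figures and the calculation is a simplified model, not seelog's actual queue implementation.

package main

import (
	"fmt"
	"time"
)

func main() {
	const queueLimit = 10000 // the queue limit mentioned above

	// Assumed figures, for illustration only.
	produceEvery := 50 * time.Microsecond  // average interval between produced messages
	consumeEvery := 100 * time.Microsecond // average interval between consumed messages

	// If consuming is slower than producing, the backlog grows linearly.
	backlogPerSecond := int(time.Second/produceEvery) - int(time.Second/consumeEvery)
	fmt.Printf("backlog grows by %d messages per second\n", backlogPerSecond)

	if backlogPerSecond > 0 {
		overflowAfter := time.Duration(queueLimit/backlogPerSecond) * time.Second
		fmt.Printf("expect the overflow warning after roughly %v\n", overflowAfter)
	} else {
		fmt.Println("the consumer keeps up: no overflow expected")
	}
}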

Recommendations

Synchronous logger

First of all, weigh all the pros and cons of using a sync logger in production. Actually, we would recommend nothing but the async timer behavior for production. The use cases for the sync logger all belong to the development stage:

  • You use your log calls near other output funcs and want to see the output in the right order
  • You are debugging your app using log traces, so you want all the errors, panics, messages, etc. to appear in the order they occurred
  • You measure performance, and for correct benchmarks you want everything to be executed in one goroutine
  • Any other development case where the execution order is important or log processing in another goroutine is undesirable

NOTE: In most real cases you don't need synchronous behavior. It is intended for rare, specific cases; use it carefully and only during development.

Asynchronous loop logger

The most common logger type, and the default. Use it in most development cases; you can also use it in production if you are not generating tons of log messages inside performance-critical operations. In the latter case, you should try another option: the async timer behavior.

NOTE: In real life, once you follow our log level recommendations (Log levels), the difference between async loop and async timer won't be significant. Most likely, async loop can meet your regular logging demands.

Asynchronous timer logger

A special behavior for high-load production: it limits the frequency of log processing operations, making the logger's performance impact predictable (the best option for production). As said above, in 99% of normal cases you don't need it. But if your application is specific and you really need to limit the number of log operations per second, this is your choice.

NOTE: Keep in mind that the timer interval should be set so that the average log message processing frequency is higher than the average log producing frequency.
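
For a rough idea of what this looks like in practice, the sketch below loads an async timer logger with a 5 ms interval. The type and asyncinterval attribute names follow the seelog configuration reference as recalled here, and asyncinterval is assumed to be in nanoseconds; verify both against the Configuration page.

package main

import (
	"fmt"

	log "github.com/cihub/seelog"
)

// Assumed config: 5000000 ns = 5 ms, matching the demonstration below.
const asyncTimerConfig = `
<seelog type="asynctimer" asyncinterval="5000000">
	<outputs>
		<file path="log.log"/>
	</outputs>
</seelog>`

func main() {
	logger, err := log.LoggerFromConfigAsString(asyncTimerConfig)
	if err != nil {
		fmt.Println("config error:", err)
		return
	}
	if err := log.ReplaceLogger(logger); err != nil {
		fmt.Println("replace error:", err)
		return
	}
	defer log.Flush()

	// Assuming one queued message is handled per tick (which matches the
	// demonstration timestamps below), a 5 ms interval caps consumption at
	// roughly 200 messages per second, so the app must log less often than that.
	log.Debug("async timer logger is active")
}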

Demonstration

Take a look at the example: examples/types

The main test code here is:

start := time.Now()
for i := 0; i < 50; i += 2 {
	fmt.Printf("%d\n", i)  // even numbers go straight to the console
	log.Trace(longMessage) // a big message dispatched to the file output (see config below)
	log.Debugf("%d", i+1)  // odd numbers go to the console output
}
end := time.Now()
dur := end.Sub(start)
fmt.Printf("Test took %d ns\n", dur) // %d prints the duration as nanoseconds

The outputs configuration dispatches trace to a file and debug to the console:

<outputs>
	<filter levels="trace">
		<file path="log.log"/>
	</filter>
	<filter levels="debug">
		<console />
	</filter>
</outputs>

On the console we will only see the numbers printed by fmt.Printf and the numbers printed by log.Debugf. The Trace func just writes big log messages to a file, creating a slow operation that makes the difference between behaviors noticeable.
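
For completeness, here is a rough sketch of how such a comparison might be wired up: the same outputs section is wrapped in configs that differ only in the logger type, and each logger is installed with ReplaceLogger before running the timing loop. The type values and the asyncinterval attribute are assumptions based on the seelog configuration reference, not code taken from examples/types.

package main

import (
	"fmt"

	log "github.com/cihub/seelog"
)

// Shared <outputs> section, identical to the configuration shown above.
const outputs = `
<outputs>
	<filter levels="trace"><file path="log.log"/></filter>
	<filter levels="debug"><console/></filter>
</outputs>`

// One config per behavior; only the logger type changes.
var behaviors = []struct {
	name   string
	config string
}{
	{"Sync", `<seelog type="sync">` + outputs + `</seelog>`},
	{"Async loop", `<seelog type="asyncloop">` + outputs + `</seelog>`},
	{"Async timer", `<seelog type="asynctimer" asyncinterval="5000000">` + outputs + `</seelog>`},
}

func main() {
	for _, b := range behaviors {
		logger, err := log.LoggerFromConfigAsString(b.config)
		if err != nil {
			fmt.Println("config error:", err)
			continue
		}
		if err := log.ReplaceLogger(logger); err != nil {
			fmt.Println("replace error:", err)
			continue
		}

		fmt.Println(b.name, "test")
		// ... run the timing loop from the excerpt above here ...

		log.Flush() // make sure queued messages are written before switching loggers
	}
}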

Now let's see the difference in behaviors:

Sync logger

Output

Sync test
0
1329085917299504000 [Debug] 1
2
1329085917300137000 [Debug] 3
4
1329085917300469000 [Debug] 5
6
1329085917301213000 [Debug] 7
8
1329085917301566000 [Debug] 9
...
44
1329085917308022000 [Debug] 45
46
1329085917308602000 [Debug] 47
48
1329085917308803000 [Debug] 49
Test took 14024000 ns

Explanation:

  • Strict order (look at the number sequence)
  • Lower application logic performance (14024000 ns)
  • Log messages appear immediately

Async loop

Output

Async loop test
0
2
4
...
14
16
1329085917309751000 [Debug] 1
18
20
...
26
28
1329085917309993000 [Debug] 3
30
32
...
44
1329085917310296000 [Debug] 5
46
48
Test took 1984000 ns
1329085917311020000 [Debug] 7
1329085917311218000 [Debug] 9
1329085917312355000 [Debug] 11
...
1329085917318957000 [Debug] 47
1329085917319536000 [Debug] 49

Explanation:

  • No strict order
  • Higher application logic performance (1984000 ns)
  • Log messages appear with a small delay

Async timer (Interval = 5 milliseconds)

Output

Async timer test
0
2
4
6
8
10
12
14
...
46
48
Test took 525000 ns
1329085918316872000 [Debug] 1
1329085918327830000 [Debug] 3
1329085918338282000 [Debug] 5
....
1329085918529592000 [Debug] 41
1329085918540085000 [Debug] 43
1329085918550559000 [Debug] 45
1329085918561510000 [Debug] 47
1329085918571977000 [Debug] 49

Explanation:

  • No strict order
  • Highest application logic performance (525000 ns)
  • Most log messages appeared only after the test was already over