config.go · 159 lines (148 loc) · 5.78 KB
package sc

import (
	"time"
)

// CacheOption represents a single cache option.
// See other package-level functions which return CacheOption for more details.
type CacheOption func(c *cacheConfig)

type cacheConfig struct {
	enableStrictCoalescing bool
	backend                cacheBackendType
	capacity               int
	cleanupInterval        time.Duration
}

type cacheBackendType int

const (
	cacheBackendMap cacheBackendType = iota
	cacheBackendLRU
	cacheBackend2Q
)

func defaultConfig(ttl time.Duration) cacheConfig {
	cleanupInterval := 2 * ttl
	if ttl == 0 {
		cleanupInterval = 60 * time.Second
	}
	return cacheConfig{
		enableStrictCoalescing: false,
		backend:                cacheBackendMap,
		capacity:               0,
		cleanupInterval:        cleanupInterval,
	}
}
// WithMapBackend specifies to use the built-in map for storing cache items (the default).
//
// Note that this default map backend cannot be configured with a maximum number of items,
// so it holds all values in memory until expired values are cleaned up regularly
// at the interval specified by WithCleanupInterval.
//
// If your keys' cardinality is high and you would like to hard-limit the cache's memory usage,
// consider using another backend such as the LRU backend.
//
// The initial capacity needs to be non-negative.
func WithMapBackend(initialCapacity int) CacheOption {
	return func(c *cacheConfig) {
		c.backend = cacheBackendMap
		c.capacity = initialCapacity
	}
}
// WithLRUBackend specifies to use LRU for storing cache items.
// Capacity needs to be greater than 0.
func WithLRUBackend(capacity int) CacheOption {
	return func(c *cacheConfig) {
		c.backend = cacheBackendLRU
		c.capacity = capacity
	}
}
// With2QBackend specifies to use 2Q cache for storing cache items.
// Capacity needs to be greater than 0.
func With2QBackend(capacity int) CacheOption {
	return func(c *cacheConfig) {
		c.backend = cacheBackend2Q
		c.capacity = capacity
	}
}
// EnableStrictCoalescing enables the 'strict coalescing check' at a slight overhead. The check prevents
// Get() calls coming later in time from being coalesced with an already-stale response generated by an
// earlier Get() call.
//
// Ordinary cache users should not need this behavior.
//
// This is similar to 'automatically calling' (*Cache).Forget after a value expires, but differs in that
// it does not allow initiating a new request until the current one finishes or (*Cache).Forget is explicitly called.
//
// Using this option, one may construct a 'throttler' / 'coalescer' not only of get requests but also of update requests.
//
// This is a generalization of the so-called 'zero-time-cache', where the original zero-time-cache behavior is
// achievable with zero freshFor/ttl values.
// See also: https://qiita.com/methane/items/27ccaee5b989fb5fca72 (ja)
//
// ## Example with freshFor == 0 and ttl == 0
//
// The 1st Get() call returns the value from the first replaceFn call.
//
// The 2nd Get() call does NOT return the value from the first replaceFn call, since by the time the
// 2nd Get() call is made, the value from the first replaceFn call is already considered expired.
// Instead, the 2nd Get() call initiates the second replaceFn call and returns that value.
// Without the EnableStrictCoalescing option, the 2nd Get() call would share the value from the first replaceFn call.
//
// To immediately initiate the next replaceFn call without waiting for the previous replaceFn call to finish,
// use (*Cache).Forget or (*Cache).Purge.
//
// Similarly, the 3rd and 4th Get() calls do NOT return the value from the second replaceFn call, but instead
// initiate the third replaceFn call.
//
// With EnableStrictCoalescing:
//
//	Get() is called....: 1 2 3 4
//	returned value.....: 1 2 3 3
//	replaceFn is called: 1---->12---->23---->3
//
// Without EnableStrictCoalescing:
//
//	Get() is called....: 1 2 3 4
//	returned value.....: 1 1 2 2
//	replaceFn is called: 1---->1 2---->2
//
// ## Example with freshFor == 1s and ttl == 1s
//
// The 1st, 2nd, and 3rd Get() calls all return the value from the first replaceFn call, since the value is
// still considered fresh.
//
// The 4th and 5th calls do NOT return the value from the first replaceFn call, since by the time these calls
// are made, the value from the first replaceFn call is already considered expired.
// Instead, the 4th (and 5th) call initiates the second replaceFn call.
// Without the EnableStrictCoalescing option, the 4th call would share the value from the first replaceFn call,
// and the 5th Get() call would initiate the second replaceFn call.
//
// With EnableStrictCoalescing:
//
//	Elapsed time (s)...: 0 1 2
//	Get() is called....: 1 2 3 4 5
//	returned value.....: 1 1 1 2 2
//	replaceFn is called: 1------------>12------------>2
//
// Without EnableStrictCoalescing:
//
//	Elapsed time (s)...: 0 1 2
//	Get() is called....: 1 2 3 4 5
//	returned value.....: 1 1 1 1 2
//	replaceFn is called: 1------------>1 2------------>2
func EnableStrictCoalescing() CacheOption {
	return func(c *cacheConfig) {
		c.enableStrictCoalescing = true
	}
}
// WithCleanupInterval specifies the cleanup interval of expired items.
//
// Setting an interval of 0 (or a negative value) disables the cleaner.
// This means that if you use a non-evicting cache backend (that is, the default, built-in map backend),
// the cache keeps holding key-value pairs indefinitely.
// If key cardinality is very large, this leads to a memory leak.
//
// By default, the cleaner runs once every 2 * ttl (or every 60s if ttl == 0).
// Try tuning your cache size (and using a non-map backend) before tuning this option.
// Running the cleaner on a cache with many items may decrease throughput,
// since the cleaner has to acquire the lock to iterate through all items.
func WithCleanupInterval(interval time.Duration) CacheOption {
	return func(c *cacheConfig) {
		c.cleanupInterval = interval
	}
}