`oncecache` is a strongly-typed, concurrency-safe, context-aware, dependency-free, in-memory, on-demand Go object cache, focused on write-once, read-often ergonomics. The package also provides an event mechanism useful for logging, metrics, or propagating cache entries between overlapping composite caches.
`oncecache` is targeted at write-once, read-often situations, where the value corresponding to a key is expensive to compute or fetch, is likely to be read multiple times, and is not expected to change. The cache is not intended for general-purpose caching where values are frequently updated.
Add to your `go.mod` via `go get`:

```shell
go get github.com/neilotoole/oncecache
```
The basic theory of operation is that an `oncecache.Cache` is created with a function that returns the value corresponding to a key. When a key is requested from the cache, the cache checks if the value is already present. If not, the cache calls the provided function to compute the value, stores the value, and returns it. Subsequent requests for the same key return the cached value.
Here's a trivial example that caches computed Fibonacci numbers:
```go
func ExampleFibonacci() {
	// Ignore error handling for brevity.
	ctx := context.Background()
	c := oncecache.New[int, int](calcFibonacci)

	key := 6
	val, _ := c.Get(ctx, key) // Cache MISS - calcFibonacci(6) is invoked
	fmt.Println(key, val)

	val, _ = c.Get(ctx, key) // Cache HIT
	fmt.Println(key, val)

	key = 9
	val, _ = c.Get(ctx, key) // Cache MISS - calcFibonacci(9) is invoked
	fmt.Println(key, val)

	// Output:
	// 6 8
	// 6 8
	// 9 34
}

func calcFibonacci(ctx context.Context, n int) (val int, err error) {
	a, b, temp := 0, 1, 0 //nolint:wastedassign
	for i := 0; i < n && ctx.Err() == nil; i++ {
		temp = a
		a = b
		b = temp + a
	}
	if ctx.Err() != nil {
		return 0, ctx.Err()
	}
	return a, nil
}
```
`oncecache.Cache` provides typical operations to interact with the cache, such as `Delete`, `Has`, `Keys`, etc.
```go
func ExampleOperations() {
	// Ignore error handling for brevity.
	ctx := context.Background()
	c := oncecache.New[int, int](calcFibonacci)

	for key := 4; key < 7; key++ {
		val, _ := c.Get(ctx, key) // Prime the cache for keys 4, 5, 6
		fmt.Println(key, val)
	}

	keys := c.Keys() // Keys returns indeterminate order
	slices.Sort(keys)
	fmt.Println("Keys in cache:", keys)
	fmt.Println("Num entries:", c.Len())
	fmt.Println("Has key 2?", c.Has(2))

	c.Delete(ctx, 5)
	keys = c.Keys()
	slices.Sort(keys)
	fmt.Println("Keys in cache after Delete(5):", keys)

	// MaybeSet sets the value if the key is not already in the cache.
	didSet := c.MaybeSet(ctx, 4, 3, nil) // No-op: 4 already in cache
	fmt.Println("Did set 4?", didSet)
	didSet = c.MaybeSet(ctx, 7, 13, nil) // Cache write: 7 not in cache
	fmt.Println("Did set 7?", didSet)

	c.Clear(ctx) // Clear empties c, but it's still usable
	fmt.Println("Keys after cache clear:", c.Keys())

	// Close clears c and releases resources. Afterwards, c is unusable,
	// and operations on it may return an error.
	_ = c.Close()

	// Output:
	// 4 3
	// 5 5
	// 6 8
	// Keys in cache: [4 5 6]
	// Num entries: 3
	// Has key 2? false
	// Keys in cache after Delete(5): [4 6]
	// Did set 4? false
	// Did set 7? true
	// Keys after cache clear: []
}
```
When constructing a cache, you can provide callback functions that are invoked when cache events occur. Callbacks are useful for logging, metrics, or propagating cache entries between overlapping composite caches.
Here's an example that logs cache events:
```go
func main() {
	ctx := context.Background()
	log := slog.Default()
	c := oncecache.New[int, int](
		calcFibonacci,
		oncecache.Name("fibs"), // Name the cache for logging
		oncecache.Log(log, slog.LevelInfo, oncecache.OpFill, oncecache.OpEvict),
		oncecache.Log(log, slog.LevelDebug, oncecache.OpHit, oncecache.OpMiss),
	)

	_, _ = c.Get(ctx, 10) // Cache miss, and fill
	_, _ = c.Get(ctx, 10) // Cache hit
}
```
This would produce log output similar to:

```text
level=DEBUG msg="Cache event" ev.cache=fibs ev.op=miss ev.k=10
level=INFO msg="Cache event" ev.cache=fibs ev.op=fill ev.k=10 ev.v=55
level=DEBUG msg="Cache event" ev.cache=fibs ev.op=hit ev.k=10 ev.v=55
```
Note that `oncecache.Log` is a pre-canned functional option that writes events to a `slog.Logger`. For custom callbacks, you can use one of the (synchronous) `OnHit`, `OnMiss`, `OnFill` or `OnEvict` handlers, or the more generic `OnEvent` handler, which receives cache events on a channel. See `TestCallbacks` or `TestOnEventChan` in `oncecache_test.go` for more details, or take a look at the `hrsystem` example.
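The channel-based handler suits asynchronous consumers such as metrics pipelines. As a stdlib-only sketch of the consumption pattern (the `event` struct below is illustrative, not the library's event type):

```go
package main

import "fmt"

// event models a cache event like those delivered to a channel-based
// handler. The field names here are illustrative.
type event struct {
	cache string
	op    string
	key   int
	val   int
}

func main() {
	events := make(chan event, 8)
	done := make(chan struct{})

	// Consumer goroutine: in real code this might update metrics or logs.
	go func() {
		defer close(done)
		for ev := range events {
			fmt.Printf("cache=%s op=%s k=%d v=%d\n", ev.cache, ev.op, ev.key, ev.val)
		}
	}()

	// A cache would send events like these as operations occur.
	events <- event{cache: "fibs", op: "miss", key: 10}
	events <- event{cache: "fibs", op: "fill", key: 10, val: 55}
	close(events)
	<-done
}
```

Because the channel decouples the cache from the consumer, slow event processing doesn't block cache reads (up to the channel's buffer size).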
Consider a trivial HR system:
```mermaid
---
title: HR System
---
erDiagram
    Org ||--|{ Department : contains
    Org {
        string name
    }
    Department ||--|{ Employee : contains
    Department {
        string name
    }
    Employee {
        int id
        string name
        string role
    }
```
In Go, we might model this system as:
```go
type Org struct {
	Name        string        `json:"name"`
	Departments []*Department `json:"departments"`
}

type Department struct {
	Name  string      `json:"name"`
	Staff []*Employee `json:"staff"`
}

type Employee struct {
	Name string `json:"name"`
	Role string `json:"role"`
	ID   int    `json:"id"`
}

type HRSystem interface {
	GetOrg(ctx context.Context, org string) (*Org, error)
	GetDepartment(ctx context.Context, dept string) (*Department, error)
	GetEmployee(ctx context.Context, ID int) (*Employee, error)
}
```
Now, consider when `HRSystem.GetOrg` is called: it returns the entire constructed tree containing all `Department`s, which in turn contain all `Employee`s. We could use an `oncecache.Cache[string, *Org]` to cache the `Org` objects.

But, later, we might want to retrieve a single `Employee` via `HRSystem.GetEmployee(ctx, 1234)`. Typically, the `HRSystem` impl would fetch that `Employee` from the database, but note that the `Employee` is already present in the `Org` cache, as a child of a `Department` object.

We can use `oncecache` to propagate cache entries across composite caches.
```go
// NewHRCache wraps db with a caching layer.
func NewHRCache(log *slog.Logger, db HRSystem) *HRCache {
	c := &HRCache{
		log: log,
		db:  db,
	}
	c.orgs = oncecache.New[string, *Org](
		db.GetOrg,
		oncecache.OnFill(c.onFillOrg),
	)
	c.depts = oncecache.New[string, *Department](
		db.GetDepartment,
		oncecache.OnFill(c.onFillDept),
	)
	c.employees = oncecache.New[int, *Employee](
		db.GetEmployee,
	)
	return c
}
```
In the code above, the `NewHRCache` constructor adds `OnFill` event handlers to the `Org` and `Department` caches. Thus, a `Get` call to the `Org` cache will trigger `onFillOrg`:
```go
// onFillOrg is invoked by HRCache.orgs when that cache fills an [Org] value from
// the DB. This handler propagates values from the returned [Org] to the
// HRCache.depts cache.
func (c *HRCache) onFillOrg(ctx context.Context, _ string, org *Org, err error) {
	if err != nil {
		return
	}
	for _, dept := range org.Departments {
		// Filling a dept entry should in turn propagate to the employees cache.
		_ = c.depts.MaybeSet(ctx, dept.Name, dept, nil)
	}
}
```
When `onFillOrg` is invoked, it iterates over the `Department`s in the `Org` and calls `MaybeSet` on the `Department` cache. This in turn triggers `onFillDept`, which invokes `MaybeSet` on the `Employee` cache:
```go
// onFillDept is invoked by HRCache.depts when that cache fills a [Department]
// value from the DB. This handler propagates [Employee] values from the
// returned [Department] to the HRCache.employees cache.
func (c *HRCache) onFillDept(ctx context.Context, _ string, dept *Department, err error) {
	if err != nil {
		return
	}
	for _, emp := range dept.Staff {
		_ = c.employees.MaybeSet(ctx, emp.ID, emp, nil)
	}
}
```
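The net effect is that fetching an `Org` once seeds the child caches, so later employee lookups need no DB call. Here is a stdlib-only sketch of that propagation chain (the `cache` and `dept` types below are toy stand-ins, not the `hrsystem` code):

```go
package main

import "fmt"

// dept is a toy parent record whose children should be propagated.
type dept struct {
	name  string
	staff []int // employee IDs
}

// cache is a minimal map-backed cache with an onFill hook, modeling the
// OnFill-driven propagation described above.
type cache[K comparable, V any] struct {
	vals   map[K]V
	onFill func(K, V)
}

// set stores v under k if absent (MaybeSet semantics), firing onFill on a write.
func (c *cache[K, V]) set(k K, v V) {
	if _, ok := c.vals[k]; ok {
		return // no-op: key already present
	}
	c.vals[k] = v
	if c.onFill != nil {
		c.onFill(k, v)
	}
}

func main() {
	employees := &cache[int, int]{vals: map[int]int{}}
	depts := &cache[string, dept]{vals: map[string]dept{}}

	// Filling a dept entry seeds the employees cache with its children.
	depts.onFill = func(_ string, d dept) {
		for _, id := range d.staff {
			employees.set(id, id)
		}
	}

	depts.set("eng", dept{name: "eng", staff: []int{1234, 5678}})
	_, ok := employees.vals[1234]
	fmt.Println("employee 1234 cached:", ok) // employee 1234 cached: true
}
```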
`oncecache` currently lacks a TTL or reaper mechanism.