You might prefer a fixed-size cache rather than a TTL, so that your cache's memory use stays bounded. The logic is also simpler: a map provides the lookups, and a slice keeps insertion order so the oldest entry can be evicted. It's almost too short to be worth a library. Off the top of my head -- not run, but you'll get the idea:
type Payload struct {
	Key string // typical, but your key doesn't have to be a string; any valid map key type will work
	pos int    // where we are in Cache.Order
	Val string // change the type from string to store your data; you can add more fields after Val if you desire
}
type Cache struct {
	Map     map[string]*Payload
	Order   []*Payload
	MaxSize int
}
func NewCache(maxSize int) *Cache {
	return &Cache{
		Map:     make(map[string]*Payload),
		MaxSize: maxSize,
	}
}
func (c *Cache) Get(key string) *Payload {
	return c.Map[key]
}
func (c *Cache) Set(p *Payload) {
	if v, already := c.Map[p.Key]; already {
		// update logic; may not be needed if the key -> value mapping is immutable
		// remove any old payload stored under this same key
		c.Order = append(c.Order[:v.pos], c.Order[v.pos+1:]...)
		for i := v.pos; i < len(c.Order); i++ {
			c.Order[i].pos = i // everything after the removal shifted down by one
		}
	}
	// add the new payload at the end (newest position)
	p.pos = len(c.Order)
	c.Order = append(c.Order, p)
	c.Map[p.Key] = p
	// keep the cache size bounded
	if len(c.Order) > c.MaxSize {
		// evict the oldest
		kill := c.Order[0]
		delete(c.Map, kill.Key)
		c.Order = c.Order[1:]
		for i := range c.Order {
			c.Order[i].pos = i // re-number after the shift
		}
	}
}
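To make this concrete, here it is assembled into a self-contained, runnable file with a quick demonstration of eviction:

```go
package main

import "fmt"

type Payload struct {
	Key string
	pos int // index into Cache.Order
	Val string
}

type Cache struct {
	Map     map[string]*Payload
	Order   []*Payload
	MaxSize int
}

func NewCache(maxSize int) *Cache {
	return &Cache{Map: make(map[string]*Payload), MaxSize: maxSize}
}

func (c *Cache) Get(key string) *Payload {
	return c.Map[key]
}

func (c *Cache) Set(p *Payload) {
	if v, already := c.Map[p.Key]; already {
		// remove any old payload stored under this same key
		c.Order = append(c.Order[:v.pos], c.Order[v.pos+1:]...)
		for i := v.pos; i < len(c.Order); i++ {
			c.Order[i].pos = i // re-number shifted entries
		}
	}
	p.pos = len(c.Order)
	c.Order = append(c.Order, p)
	c.Map[p.Key] = p
	if len(c.Order) > c.MaxSize {
		kill := c.Order[0] // evict the oldest
		delete(c.Map, kill.Key)
		c.Order = c.Order[1:]
		for i := range c.Order {
			c.Order[i].pos = i
		}
	}
}

func main() {
	c := NewCache(2)
	c.Set(&Payload{Key: "a", Val: "1"})
	c.Set(&Payload{Key: "b", Val: "2"})
	c.Set(&Payload{Key: "c", Val: "3"}) // evicts "a", the oldest entry

	fmt.Println(c.Get("a")) // <nil>  -- evicted
	fmt.Println(c.Get("b").Val, c.Get("c").Val)
}
```

Note the eviction and removal re-number every later entry's pos, so Set is O(n) in the worst case; if that matters at your sizes, a container/list plus a map (the usual LRU layout) avoids it.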
If you really do need a Time To Live and are willing to let memory grow unbounded, then MaxSize would change from an int to a time.Duration, each Payload would record when it was stored, and the eviction condition would change from size-based to time.Since() based.
Also note that none of this is goroutine-safe. The simplest fix is to add a sync.Mutex (or sync.RWMutex, if reads dominate) to Cache and lock during Set/Get; sync.Map can also suit simple key/value caching under heavy read contention, though it won't help with the eviction bookkeeping here.
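A minimal sketch of the mutex approach, using a plain map to stand in for the cache so only the locking pattern is shown:

```go
package main

import (
	"fmt"
	"sync"
)

// SafeCache wraps map access in a sync.RWMutex: many concurrent
// readers may hold RLock at once, while Set takes the exclusive lock.
type SafeCache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewSafeCache() *SafeCache {
	return &SafeCache{m: make(map[string]string)}
}

func (c *SafeCache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *SafeCache) Set(key, val string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = val
}

func main() {
	c := NewSafeCache()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Set("k", "v")
			c.Get("k")
		}()
	}
	wg.Wait()
	v, ok := c.Get("k")
	fmt.Println(v, ok)
}
```

Run it with `go run -race` to confirm the locking is sound; the same pattern drops straight onto the fixed-size Cache's Get and Set.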