in-memory caching in golang

Rakesh K R

Dec 9, 2021, 10:59:32 AM
to golang-nuts
Hi,
In my application I need to look up the database frequently to fetch data (it is a read-intensive application), so I am planning to cache this data in memory with a TTL for expiry.
Can someone suggest some high-performance in-memory caching libraries that would suit this requirement?

Jason E. Aten

Dec 10, 2021, 2:08:16 AM
to golang-nuts
You might prefer a fixed-size cache rather than a TTL, so that the cache's memory footprint never grows too large. The logic is also simpler: just a map to provide the cache and a slice to constrain its size. It's almost too short to be a library; see below. Off the top of my head (not run), but you'll get the idea:

type Payload struct {
    Key string // typical, but your key doesn't have to be a string; any suitable map key type will work
    pos int    // index of this payload within Cache.Order
    Val string // change the type from string to store your data; add more fields after Val if you desire
}

type Cache struct {
    Map     map[string]*Payload
    Order   []*Payload // oldest entry first
    MaxSize int
}

func NewCache(maxSize int) *Cache {
    return &Cache{
        Map:     make(map[string]*Payload),
        MaxSize: maxSize,
    }
}

func (c *Cache) Get(key string) *Payload {
    return c.Map[key] // nil means a cache miss
}

func (c *Cache) Set(p *Payload) {
    if old, already := c.Map[p.Key]; already {
        // update logic; may not be needed if the key -> value mapping is immutable.
        // Remove any old payload stored under this same key and re-index what follows.
        c.Order = append(c.Order[:old.pos], c.Order[old.pos+1:]...)
        for i := old.pos; i < len(c.Order); i++ {
            c.Order[i].pos = i
        }
    }
    // add the new payload at the end (newest position)
    p.pos = len(c.Order)
    c.Order = append(c.Order, p)
    c.Map[p.Key] = p

    // keep the cache size bounded
    if len(c.Order) > c.MaxSize {
        // evict the oldest entry
        kill := c.Order[0]
        delete(c.Map, kill.Key)
        c.Order = c.Order[1:]
        for i := range c.Order {
            c.Order[i].pos = i
        }
    }
}
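
A quick usage sketch of the above (illustrative only):

c := NewCache(2)
c.Set(&Payload{Key: "a", Val: "1"})
c.Set(&Payload{Key: "b", Val: "2"})
c.Set(&Payload{Key: "c", Val: "3"}) // cache is full, so "a" (the oldest) is evicted
fmt.Println(c.Get("a") == nil)      // true: "a" now misses and must be reread from the DB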

If you really need a time-to-live and are willing to let memory grow without a hard bound, then MaxSize would change from an int to a time.Duration, each Payload would record when it was stored, and the eviction condition would change from being size based to being time.Since() based.
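
A minimal sketch of that variant, assuming a stored timestamp is added to Payload and a TTL field replaces MaxSize (these names are illustrative and, like the code above, untested):

// Payload gains:  stored time.Time   (set to time.Now() in Set)
// Cache swaps:    MaxSize int  ->  TTL time.Duration

// at the end of Set, instead of the size check:
for len(c.Order) > 0 && time.Since(c.Order[0].stored) > c.TTL {
    kill := c.Order[0]
    delete(c.Map, kill.Key)
    c.Order = c.Order[1:]
}
for i := range c.Order {
    c.Order[i].pos = i
}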

Also look at sync.Pool if you need goroutine safety. Obviously you can just add a sync.Mutex to Cache and lock during Set/Get, but for heavy contention sync.Pool can perform better.
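
A minimal sketch of the mutex route (the mu field is an illustrative addition to the Cache above):

type Cache struct {
    mu      sync.Mutex // guards Map and Order
    Map     map[string]*Payload
    Order   []*Payload
    MaxSize int
}

func (c *Cache) Get(key string) *Payload {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.Map[key]
}

// Set takes the same lock (c.mu.Lock(); defer c.mu.Unlock()) around the body shown earlier.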

Rick

Dec 10, 2021, 3:42:55 PM
to golang-nuts
Don't forget to think about cache coherency. Caching is more involved when multiple caching microservices are talking to the database: creates and updates require notifying all replicas so they can refresh their caches.
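
For instance, with the Cache sketched earlier, each replica could listen for write notifications over whatever transport you use (message bus, pub/sub, etc.) and drop the affected keys; the Delete helper and the channel below are illustrative additions, not part of that sketch:

// Delete removes a key so the next Get falls through to the database.
func (c *Cache) Delete(key string) {
    if old, ok := c.Map[key]; ok {
        delete(c.Map, key)
        c.Order = append(c.Order[:old.pos], c.Order[old.pos+1:]...)
        for i := old.pos; i < len(c.Order); i++ {
            c.Order[i].pos = i
        }
    }
}

// Each replica runs a loop like this, fed by the write notifications:
func invalidationLoop(c *Cache, keys <-chan string) {
    for key := range keys {
        c.Delete(key)
    }
}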

Rakesh K R

Mar 9, 2022, 10:55:27 AM
to golang-nuts
Thank you Rick and Jason for the pointers.