The usual pattern is to have a lock around the cache and then have each entry be a pointer to a struct that can itself be locked. The lookup is then:
    lock(cache)
    if cache is missing entry:
        cache[key] = new entry
    entry := cache[key]
    unlock(cache)
    lock(entry)
    if entry is not populated:
        entry.val = compute()
    val := entry.val
    unlock(entry)
    return val
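
Here is a minimal Go sketch of that pattern, assuming a string-keyed cache and a caller-supplied compute function; the names (Cache, entry, Get) are illustrative, not from the original post.

	package cache

	import "sync"

	type entry struct {
		mu       sync.Mutex
		computed bool
		val      interface{}
	}

	type Cache struct {
		mu sync.Mutex
		m  map[string]*entry
	}

	func New() *Cache {
		return &Cache{m: make(map[string]*entry)}
	}

	// Get returns the cached value for key, computing it on first lookup.
	// The cache lock is held only long enough to find or insert the entry;
	// the per-entry lock is held while computing, so lookups of different
	// keys do not block each other.
	func (c *Cache) Get(key string, compute func() interface{}) interface{} {
		c.mu.Lock()
		e, ok := c.m[key]
		if !ok {
			e = &entry{}
			c.m[key] = e
		}
		c.mu.Unlock()

		e.mu.Lock()
		if !e.computed {
			e.val = compute()
			e.computed = true
		}
		val := e.val
		e.mu.Unlock()
		return val
	}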
It is true that you can create a goroutine and channel for every entry instead of putting a mutex in the entry. But putting the mutex in the entry will be faster at run time and have less memory overhead (a channel already contains a mutex, so sizeof(mutex) < sizeof(channel) + sizeof(goroutine) for sure).
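
For contrast, a rough sketch of the goroutine-and-channel-per-entry approach being compared against (again with illustrative names): the first caller starts a goroutine to compute the value, and every caller waits on the entry's channel, so each entry pays for a channel and a goroutine instead of a single mutex.

	package cache

	import "sync"

	type chanEntry struct {
		ready chan struct{} // closed once val has been set
		val   interface{}
	}

	type ChanCache struct {
		mu sync.Mutex
		m  map[string]*chanEntry
	}

	func NewChanCache() *ChanCache {
		return &ChanCache{m: make(map[string]*chanEntry)}
	}

	// Get launches a goroutine to compute the value on first lookup; every
	// caller then blocks on the entry's channel rather than a per-entry mutex.
	func (c *ChanCache) Get(key string, compute func() interface{}) interface{} {
		c.mu.Lock()
		e, ok := c.m[key]
		if !ok {
			e = &chanEntry{ready: make(chan struct{})}
			c.m[key] = e
			go func() {
				e.val = compute()
				close(e.ready)
			}()
		}
		c.mu.Unlock()

		<-e.ready // closing the channel publishes e.val to all waiters
		return e.val
	}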
Russ