A major reason was that per-key expiration usually makes sense in large remote caches, like memcached, where many different types of entries coexist. In applications, a local cache is usually restricted to a single type, and details like loading are contextual. A single type is usually treated uniformly, so a policy like TTL can be fixed. The internal need for per-key expiration was therefore very low, as other storage layers at Google were more appropriate solutions. We also did not want to clutter the API with expiration assumptions, especially for a feature rarely requested internally.
That is what I recall. Others may have different memories.
The algorithm is a non-trivial problem to implement safely and efficiently. The most common approach is to rely on a maximumSize and hide expired entries until they are size evicted. This is fast and simple, but it creates a lot of pollution and requires a size constraint that expireAfterXXX does not otherwise imply. The other popular solution is to use a tree-like structure, such as a heap. This is more complex, the performance cost of reordering quickly adds up, and we wanted to rely strictly on amortized O(1) algorithms.
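To make the first approach concrete, here is a minimal sketch of the "hide until size-evicted" idea, with hypothetical names (this is not Guava's internal code): entries record a write time, reads treat stale entries as absent, but the memory is reclaimed only when the size bound pushes them out in LRU order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.LongSupplier;

final class LazyExpiringCache<K, V> {
  private record Stamped<U>(U value, long writeTime) {}

  private final long ttl;
  private final LongSupplier clock;  // injected so the example is deterministic
  private final Map<K, Stamped<V>> map;

  LazyExpiringCache(long ttl, int maxSize, LongSupplier clock) {
    this.ttl = ttl;
    this.clock = clock;
    // An access-ordered LinkedHashMap gives simple LRU size eviction.
    this.map = new LinkedHashMap<>(16, 0.75f, true) {
      @Override protected boolean removeEldestEntry(Map.Entry<K, Stamped<V>> eldest) {
        return size() > maxSize;
      }
    };
  }

  void put(K key, V value) {
    map.put(key, new Stamped<>(value, clock.getAsLong()));
  }

  V getIfPresent(K key) {
    Stamped<V> entry = map.get(key);
    if (entry == null) return null;
    if (clock.getAsLong() - entry.writeTime() >= ttl) {
      // Expired: hidden from the caller, yet still occupying a map slot
      // until size eviction removes it. This is the "pollution".
      return null;
    }
    return entry.value();
  }
}
```

Reads and writes stay O(1), but a burst of writes followed by silence leaves dead entries pinned in memory until new traffic evicts them, which is exactly the trade-off described above.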
The eventual solution that looked promising was a hierarchical timer wheel. There were not a lot of good examples to draw from. This data structure uses hashing to be amortized O(1): it hashes by time into a coarse bin and chains entries, like a hash table. As time elapses, a cursor sweeps the bins and cascades their entries down into finer bins until they expire. Thus one has N hash tables and a cheap cascading cost. This was eventually implemented in Caffeine's cache (a Guava-inspired library), though it is a rarely used feature.
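As a rough illustration of the idea (a toy two-level wheel with made-up parameters, not Caffeine's actual implementation): entries are hashed by expiration time into a bucket, and advancing the clock sweeps the fine wheel while cascading coarse buckets down when the cursor crosses a coarse boundary.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

final class TimerWheel<T> {
  private static final int WHEEL_SIZE = 64;           // buckets per level
  private static final int COARSE_SPAN = WHEEL_SIZE;  // fine ticks covered by one coarse bucket

  private static final class Entry<T> {
    final T value;
    final long expiresAt;
    Entry(T value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
  }

  private final List<Deque<Entry<T>>> fine = newWheel();
  private final List<Deque<Entry<T>>> coarse = newWheel();
  private long now;  // current tick

  private static <T> List<Deque<Entry<T>>> newWheel() {
    List<Deque<Entry<T>>> wheel = new ArrayList<>(WHEEL_SIZE);
    for (int i = 0; i < WHEEL_SIZE; i++) wheel.add(new ArrayDeque<>());
    return wheel;
  }

  void schedule(T value, long delayTicks) {
    if (delayTicks < 1 || delayTicks >= (long) COARSE_SPAN * WHEEL_SIZE) {
      throw new IllegalArgumentException("delay out of supported range");
    }
    add(new Entry<>(value, now + delayTicks));
  }

  // Hash by time: near deadlines go to the fine wheel, far ones to the coarse wheel.
  private void add(Entry<T> e) {
    long delay = e.expiresAt - now;
    if (delay < COARSE_SPAN) {
      fine.get((int) (e.expiresAt % WHEEL_SIZE)).add(e);
    } else {
      coarse.get((int) ((e.expiresAt / COARSE_SPAN) % WHEEL_SIZE)).add(e);
    }
  }

  // Advance the clock by the given number of ticks, returning expired values.
  // Each entry is touched at most twice (one cascade, one expiry): amortized O(1).
  List<T> advance(long ticks) {
    List<T> expired = new ArrayList<>();
    for (long i = 0; i < ticks; i++) {
      now++;
      if (now % COARSE_SPAN == 0) {
        // Cascade: re-add every entry in the coarse bucket the cursor reached;
        // their remaining delay is now under COARSE_SPAN, so they land in the fine wheel.
        Deque<Entry<T>> bucket = coarse.get((int) ((now / COARSE_SPAN) % WHEEL_SIZE));
        while (!bucket.isEmpty()) add(bucket.poll());
      }
      // Every entry in the fine bucket under the cursor has reached its deadline.
      Deque<Entry<T>> current = fine.get((int) (now % WHEEL_SIZE));
      while (!current.isEmpty()) expired.add(current.poll().value);
    }
    return expired;
  }
}
```

Unlike a heap, scheduling never reorders anything; the cost of an entry moving between levels is a constant-time unlink and re-hash, which is what makes the amortized O(1) bound possible.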