You can't read the cache directly; you can only read memory addresses. And since your virtual addresses are translated to different physical addresses than the victim's, your memory accesses are never going to hit the victim's exact cache line.
However, what they do is a fairly well-known trick. What makes the paper original is the setting: inside a VM, with new unknowns, more noise, and a CPU that is harder to force into doing what you want.
The process is as follows:
* determine how the cryptographic library is _likely_ to be laid out in memory. This is important.
* flood the instruction cache with specially laid-out code and measure how long that took
* if you can trick the VM manager into running your code on the same CPU as the victim, you can measure which cache lines had to be evicted, and which hadn't
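The fill-then-remeasure idea behind those last two steps (prime+probe) can be sketched with a toy simulation. The cache geometry here (64 sets, 64-byte lines, direct-mapped) is a made-up example, not taken from the paper, and a real attack measures refill latency with a cycle counter rather than counting misses:

```python
# Toy prime+probe against a hypothetical direct-mapped cache.
# In the real attack the attacker times each refill; here a cache
# miss plays the role of a "slow" access.

NUM_SETS = 64           # hypothetical geometry
LINE_SIZE = 64

def cache_set(addr):
    """Map an address to its cache set (index bits above the line offset)."""
    return (addr // LINE_SIZE) % NUM_SETS

class ToyCache:
    def __init__(self):
        self.sets = {}                          # set index -> cached tag

    def access(self, addr):
        """Access addr; return True if it was a cache miss."""
        s, tag = cache_set(addr), addr // (LINE_SIZE * NUM_SETS)
        miss = self.sets.get(s) != tag
        self.sets[s] = tag
        return miss

cache = ToyCache()
attacker_code = [s * LINE_SIZE for s in range(NUM_SETS)]   # one line per set

# Prime: fill every cache set with the attacker's own code.
for addr in attacker_code:
    cache.access(addr)

# Victim runs and touches addresses that alias sets 3, 7 and 42.
for addr in [(NUM_SETS + s) * LINE_SIZE for s in (3, 7, 42)]:
    cache.access(addr)

# Probe: re-access the attacker's code; misses reveal the victim's sets.
evicted = [cache_set(a) for a in attacker_code if cache.access(a)]
print(evicted)   # -> [3, 7, 42]
```

The probe loop is the whole trick: sets the victim never touched come back fast (still primed), and only the victim's sets show up slow.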
This is where I was wrong; I thought I'd need to reference the actual physical address, which is not possible in a VM. But remember that because of cache associativity, the cache set is determined by a few bits of the physical address, and _many_ physical addresses map to the same set. So, knowing which cache lines refill more slowly, you know which ones were evicted by the victim VM.
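That aliasing falls straight out of how the index bits are taken from the address. Again the geometry (64-byte lines, 64 sets) is a hypothetical example, not the paper's:

```python
# The set index is the run of address bits just above the line offset,
# so any two addresses that agree on those bits collide in the same set.
LINE_BITS = 6            # 64-byte lines (hypothetical geometry)
SET_BITS = 6             # 64 sets

def cache_set(phys_addr):
    return (phys_addr >> LINE_BITS) & ((1 << SET_BITS) - 1)

# These addresses differ only above the index bits...
colliding = [0x1040, 0x2040, 0x9040]
print([cache_set(a) for a in colliding])   # -> [1, 1, 1], all in set 1
```

So the attacker never needs the victim's physical address, only an address of their own that lands in the same set.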
By knowing which lines of code the victim executes, you can infer which branches in the cryptographic library the code took. If the branching in the algorithm depends on key material, you can work out the secret key by monitoring which branches the algorithm took.
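Square-and-multiply modular exponentiation is the classic example of such key-dependent branching (used here as an illustration, not necessarily the code targeted in the paper): the multiply branch runs only for 1-bits of the exponent, so a per-iteration trace of which branch executed spells out the key directly:

```python
def modexp_with_trace(base, exponent, modulus):
    """Left-to-right square-and-multiply; records the branch taken per bit."""
    result, trace = 1, []
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus        # always: square
        if bit == "1":
            result = (result * base) % modulus      # only for 1-bits: multiply
        trace.append(bit == "1")                    # branch taken == key bit
    return result, trace

result, trace = modexp_with_trace(7, 0b101101, 1009)
recovered = int("".join("1" if t else "0" for t in trace), 2)
print(recovered == 0b101101)    # the branch trace alone reveals the exponent
```

An attacker who can tell "square only" iterations from "square then multiply" iterations, via the cache sets each branch's code occupies, reads the exponent bit by bit.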
That's the sketch. There are lots of gotchas to look out for, which are outlined in the paper, and you can never know the key for sure, but this attack lets you drastically narrow the search space for the key.
I am not expert enough to judge how real the threat is outside the lab.