Hi,
As part of evaluating an upgrade of MongoDB (MMAPv1) from 2.6 to 3.2, we added a 3.0 secondary to our production replica set and started replaying read traffic to it using flashback. However, we noticed degraded performance compared to the 2.6 secondaries. The hardware is the same for all nodes in the cluster.
To illustrate:
[ mongo2.6 secondary ] ---------using flashback---------> [ mongo3.0 secondary ]
Both nodes are in the same replica set.
The QPS graph below shows spiky QPS, with the same pattern in disk I/O and page faults. We have run this setup for days, but it never normalises. The page faults suggest that the working set isn't fitting in memory (although it does on the 2.6 nodes). To test this theory, we capped the QPS and increased it gradually. The second part of the graphs shows the gradual increase with no degradation. So it's able to handle a reduced load (around 2k QPS in the graphs), but not the full load, which is around 5-6k QPS.
QPS: [graph]

Page faults: [graph]

Disk I/O: [graph]
This seems to confirm that the memory requirements for MMAPv1 have changed in 3.0, and that we will have to add more memory to get the same performance as 2.6. Is this expected behaviour? Is there any other possible explanation?
Thanks,
Raghu