There is no "disk-based replication" in Redis. If you transfer an RDB file over yourself, you have to restart the slave, which will then discard the RDB you copied. All replication in Redis goes directly from Redis to Redis. Yes, the slave saves the RDB to disk, and no, you can't disable that. It does this to avoid a large delay at the end when it needs to load the data in. If it loaded the RDB from memory you'd need roughly 2X the memory: one copy for the in-memory RDB it is loading from and one for the resulting data set. For smaller data sets that might be fine, but in your case that means 22GB or more of memory to load your 11GB.
That said, I am curious how much quicker a memory-only rdbLoad call would be. I doubt it is worth the memory cost at the larger end of the data-set size scale.
"Is there an option to stream data from master's memory to slave's memory, thereby eliminating the time taken to write to slave's disk before loading to memory?
If there is no option to stream data directly to slave's memory, is there a way I can speed up the process of loading the data from disk to memory? For our case it's 11GB of data on disk, which takes ~6mins to load."
How often are you doing this? You should be firing up the system once and letting Redis keep it in sync natively and naturally. If you're copying over the RDB file yourself, you're just spinning your wheels. If you are restarting your slave often, you will not be happy. If you're experiencing disconnects between the master and slave that force a full resync, you need to figure out why and fix that. Even on a dedicated GbE link it will take a couple of minutes to transfer 11GB+ of data.
So:
1. Set up M/S replication
2. Let it complete
3. Keep it running
4. Let the master keep the slave in sync as things change
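As a rough sketch, the setup above can be done either in the slave's config file or at runtime; the master hostname and port here are placeholders for your own:

```
# In the slave's redis.conf (hostname/port are placeholders):
slaveof 10.0.0.1 6379

# Or at runtime from the slave, via redis-cli:
#   SLAVEOF 10.0.0.1 6379
# Then verify sync progress with:
#   INFO replication
```

Once `INFO replication` shows `master_link_status:up`, leave it alone and the master will stream changes to the slave as they happen.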
If you don't care about actual persistence on the slave, set the `dir` directive to a memory-backed filesystem to speed it up. You'd still need the extra memory space for the RDB, of course. Alternatively, split the data across smaller Redis instances with either client-side sharding or Redis Cluster; each instance will then be smaller, reducing the problem. For example, splitting 11GB among 6 instances works out to under 2GB each, depending on how the data is distributed.
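For the `dir`-on-a-memory-filesystem option, a sketch might look like this (mount point and size are placeholders you'd pick for your box; remember the RDB itself will consume RAM here):

```
# Mount a tmpfs big enough for the RDB (size is a placeholder):
#   mount -t tmpfs -o size=16g tmpfs /mnt/redis-tmp

# Point the slave's working directory at it, in redis.conf:
dir /mnt/redis-tmp

# Or at runtime via redis-cli:
#   CONFIG SET dir /mnt/redis-tmp
```

Anything written there disappears on reboot, which is exactly the trade-off: you're giving up on-disk persistence for faster RDB handling during the sync.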
Cheers,
Bill