Hi,
Let's say we have a simple application that only issues "get" calls to Redis, and we find from redis-cli info commandstats that each get call takes 1 microsec (usec_per_call).
Considering that Redis executes commands on a single thread, is it fair to assume that one Redis instance will be able to serve at most 10^6 get calls per second and not more (1 sec <=> 10^6 microsecs)?
Now, let's say that the Redis server is asked to serve 1.5 * 10^6 requests per second. Is it fair to say that the instance will not be able to keep up, that commands will queue, clients will eventually start lagging, and we would need to switch to a cluster?
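For reference, this is roughly how we derive that ceiling from commandstats; a minimal redis-py sketch, where the host/port and the 1-usec fallback are our own assumptions:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # INFO commandstats gives cumulative calls, usec and usec_per_call per command
    stats = r.info("commandstats")
    usec_per_call = stats.get("cmdstat_get", {}).get("usec_per_call", 1.0)  # ~1 usec in our case

    # Upper bound if the single command-executing thread did nothing but run GETs:
    # 10^6 usec of wall clock per second / usec spent per call
    max_ops_per_sec = 1_000_000 / usec_per_call
    print("theoretical ceiling: %.0f get/s" % max_ops_per_sec)

    # Asking for 1.5 * 10^6 req/s against a ~10^6 req/s ceiling is what we mean by
    # "clients start lagging": commands queue up and observed latency grows.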
Similarly, redis-cli info reports "used_cpu_sys" and "used_cpu_user". We run redis-cli info all every 300 secs and push the output to Grafana. In some of the applications where we use a standalone Redis, the increase in "used_cpu_sys" + "used_cpu_user" between two samples (i.e. per 300-second window) is close to 200 secs. Can we assume that once this consistently reaches 300 (one full core busy for the whole window), we will be in the soup?
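This is the check we effectively run on those counters; a sketch assuming a single-sample script rather than our Grafana pipeline:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    WINDOW = 300  # seconds, same as our scrape interval

    def cpu_seconds():
        info = r.info()
        # cumulative CPU seconds consumed by the redis-server process since start
        return info["used_cpu_sys"] + info["used_cpu_user"]

    before = cpu_seconds()
    time.sleep(WINDOW)
    busy = cpu_seconds() - before    # CPU-seconds burned in this window
    print("%.0f CPU-seconds in %ds window (%.0f%% of one core)"
          % (busy, WINDOW, 100.0 * busy / WINDOW))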
Also, in one of the cases, we use a lot of Lua scripting, and some of the processing is done inside Lua. We can see that "used_cpu_sys" and "used_cpu_user" of those instances are high, and the usec value in cmdstat_evalsha is increasing and has reached 150+ seconds per 300-second window. Is it fair to say that we need to expand our cluster or reduce the processing done in Lua?
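Same per-window delta idea applied to the Lua path; a sketch, again with our own host/port and window:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    WINDOW = 300  # seconds

    def evalsha_usec():
        # cumulative microseconds spent executing EVALSHA, i.e. our Lua scripts
        return r.info("commandstats").get("cmdstat_evalsha", {}).get("usec", 0)

    before = evalsha_usec()
    time.sleep(WINDOW)
    lua_seconds = (evalsha_usec() - before) / 1_000_000.0

    # ~150s of EVALSHA time in a 300s window means the command thread spends
    # roughly half of its budget inside Lua alone
    print("EVALSHA: %.0fs per %ds window (%.0f%%)"
          % (lua_seconds, WINDOW, 100.0 * lua_seconds / WINDOW))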
Basically, we are trying to see whether a Redis instance will stop being able to serve more than a particular number of requests per second (for our usage pattern) and start lagging behind, and whether we can anticipate that from the growth of "used_cpu_sys" and "used_cpu_user" and of the usec values in the cmdstat_* entries. We could try to verify this in a test environment, but we felt it would require a lot of infrastructure to drive Redis to full CPU, so some expert advice would help.
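For completeness, the kind of cheap test we had in mind is the redis-benchmark tool that ships with Redis, using pipelining to push the server harder from a single box; the parameters below are just an example:

    redis-benchmark -h <redis-host> -p 6379 -t get -n 2000000 -c 100 -P 16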
Thanks
Tuco