Yes, the commands will be sent individually and may be processed interleaved with other clients' requests.
There is nothing magical about pipelining; it's entirely a client-side feature. We stream a set of requests to the server, it streams the responses back, and we defer parsing the results until we are ready. It simply lets us decide when we are going to wait for the server's responses. If we are issuing many commands in a row without needing the result of any of them, we can be much faster.
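To make the "client-side feature" point concrete, here is a rough sketch (my own illustration, not code from any Redis client library) of what a pipelining client does on the wire: it just concatenates RESP-encoded commands into one buffer, sends them in a single write, and reads the replies whenever it feels like it.

```python
def encode_command(*args):
    """Encode one command as a RESP array of bulk strings."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode() if isinstance(arg, str) else arg
        parts.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(parts)

# Three commands streamed back-to-back -- no waiting in between.
buffer = b"".join([
    encode_command("SET", "k1", "v1"),
    encode_command("SET", "k2", "v2"),
    encode_command("GET", "k1"),
])
# sock.sendall(buffer)  # then parse all three replies whenever we are ready
```

The server just sees a normal stream of commands arriving on the socket; nothing in the protocol marks them as "pipelined".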
Keep in mind this means the requests can be interleaved with those of other clients; use MULTI/EXEC as well if you want the commands processed as a single atomic unit.
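The difference is just in what gets sent: a transactional pipeline wraps the same command stream in MULTI ... EXEC, so the server queues the commands and applies them in one shot with no interleaving. A toy helper (my sketch, not any client library's API) to show the shape of it:

```python
def wrap_in_transaction(commands):
    """Wrap a batch of commands in MULTI/EXEC so the server runs them atomically."""
    return [("MULTI",)] + list(commands) + [("EXEC",)]

batch = [("SET", "k1", "v1"), ("INCR", "counter")]
transactional = wrap_in_transaction(batch)
```

This is essentially what clients like redis-py do for you when you ask for a transactional pipeline: the MULTI and EXEC commands are pipelined along with everything else.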
Sam
BTW: I believe this note from the pipelining page is a bit misleading. Isn't the queue/storage of the results effectively just the TCP/IP stream and socket buffers? It reads as if the Redis server is doing something special for pipelining.
IMPORTANT NOTE: while the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send very many commands with pipelining, it's better to send them in batches of a reasonable size, for instance 10k commands: read the replies, then send another 10k commands, and so forth. The speed will be nearly the same, but the additional memory used will be at most the amount needed to queue the replies for those 10k commands.
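Whatever the memory actually lives in, the batching the note recommends is just a loop like the following (my own sketch; `send_commands` and `read_replies` are hypothetical stand-ins for however your client writes commands and parses responses):

```python
def pipeline_in_batches(commands, send_commands, read_replies, batch_size=10_000):
    """Pipeline commands in fixed-size batches, draining replies between batches."""
    replies = []
    for i in range(0, len(commands), batch_size):
        batch = commands[i:i + batch_size]
        send_commands(batch)                       # stream the whole batch at once
        replies.extend(read_replies(len(batch)))   # read all replies before the next batch
    return replies
```

Bounding the batch size bounds how many unread replies can pile up at any moment, regardless of whether they sit in server memory or in socket buffers.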