Say all processes P1 to P11 hit line 4 at the "same time". P1 to P11 will all get nulls.
What you're saying is that the latency between the clients and Redis is high enough that Redis' command processing loop will perform all 11 GET keyname commands before any of the clients are done performing their IF/THEN/ELSE tests and sending their MULTI / INCR / EXPIRE / EXEC transactions back to Redis to increment the key.
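That check-then-act race can be demonstrated without a live Redis server. Below is a minimal sketch that stands in a plain Python dict for the Redis keyspace (the key name and limit are made up for illustration): all 11 clients run their GET before any of them runs its increment, so every one of them sees null and passes the limit check.

```python
store = {}        # stands in for the Redis keyspace
LIMIT = 10
key = "api:rate"  # hypothetical key name

# Phase 1: all 11 clients run "current = GET keyname" back to back,
# before any client has sent its transaction.
reads = [store.get(key) for _ in range(11)]
assert all(r is None for r in reads)  # every client sees a missing key

# Phase 2: each client, having passed its IF/THEN/ELSE test, sends its
# MULTI / INCR / EXPIRE / EXEC. INCR is modeled as a plain add here.
for _ in range(11):
    store[key] = store.get(key, 0) + 1

print(store[key])  # 11 -- one more call than the limit allowed
```

Each individual MULTI/EXEC block is atomic, but the GET that guards it is not part of the transaction, which is exactly the window the 11 clients all fall into.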
This is certainly possible, but consider what happens next: the key's value is incremented to 11 within a second, and the key will take 10 seconds after that to expire. During those 10 seconds no client will make API calls. So over an 11-second period, 11 API calls are made: all in the first second, followed by 10 idle seconds. The average of 11 calls in 11 seconds (1 call/second) is pretty close to the intended average of 10 calls in 10 seconds (1 call/second).
I don't consider this algorithm to be a great one, because the counter is only incremented, never decremented. With a steady rate of API calls, the counter will eventually exceed 10 and all API traffic will stop for 10 seconds. The lock itself coerces the clients into a burst/idle/burst/idle pattern of API calls rather than smoothing out the rate. However, when you read this example in context with the other ones on the INCR command page, it's clear that it was never intended to be production ready. It's the first of a series, showing how good locking and rate-limiting procedures can be more complex than you think.
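For comparison, the later examples on the INCR page move the check after the increment, so the counter itself is authoritative and no two clients can both believe they are under the limit. Here is a dict-based sketch of that increment-first idea (same simulated keyspace as before; in real Redis you would also EXPIRE the key when its value reaches 1):

```python
store = {}        # stands in for the Redis keyspace
LIMIT = 10
key = "api:rate"  # hypothetical key name

def limited_call():
    # INCR is atomic in Redis; a dict update stands in for it here.
    store[key] = store.get(key, 0) + 1
    # (a real implementation would EXPIRE the key on first increment)
    return store[key] <= LIMIT  # True -> perform the API call

results = [limited_call() for _ in range(11)]
print(results.count(True))  # 10 -- the 11th call is rejected
```

Note this only closes the read-then-write race; it does nothing about the burst/idle pattern, which is why the more elaborate variants further down that page exist.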