The lpOverlapped parameter must remain valid for the duration of the overlapped operation. If multiple I/O operations are simultaneously outstanding, each must reference a separate WSAOVERLAPPED structure.
Exposing the low-level, platform-specific details is an explicit non-goal. We instead limit ourselves to APIs we can support uniformly across platforms, and we implement them in terms of non-blocking I/O patterns.
Are you asking about read/write pairing or about multiple reads?
--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CABM%3Dn%2B2XgniEuMQhoKYy0ga1VpOX%2BxmCw3sBJC%3DPLfotU9QWWQ%40mail.gmail.com.
> Overlapped operations will complete later.
TL;DR:
1. On Windows, WSAEWOULDBLOCK means the same as EWOULDBLOCK on Linux: we are out of buffer and should try again. WSA_IO_PENDING, however, just means an overlapped I/O has started because the HW can't send fast enough.
2. Translating WSA_IO_PENDING to ERR_IO_PENDING prevents us from using the full send buffer. We're basically limited by how fast the HW can send a packet synchronously.
3. Worse, translating both WSA_IO_PENDING and WSAEWOULDBLOCK to net::ERR_IO_PENDING could cause upper-layer bugs like data loss or even deadlock.
4. Handling WSA_IO_PENDING separately doesn't really expose any OS-specific details. In fact, it makes the handling of WSAEWOULDBLOCK the same as on Linux.
5. The current translation unnecessarily lowers the overall send speed, as we're not overlapping multiple outstanding sends.
I got what you said. However, there is still one problem. On a POSIX system, a non-blocking UDP socket send just copies the buffer into the kernel until it reaches the limit, at which point EWOULDBLOCK is returned. On Windows, however, it seems that as long as we're not sending faster than the HW can drain, ERR_IO_PENDING is returned. At that point, my guess is that we haven't exhausted the send buffer we have.

In other words, why do we even do overlapped I/O on Windows? Why don't we just go back to the non-blocking one? At the least, we could get more packets on the way simultaneously.
POSIX also doesn't tell you whether each send succeeds in non-blocking mode. Why do we need to know that on Windows?
In the case of the TCP recv() implementation, it was a really easy and clean switch to go from overlapped to non-blocking, and it allowed us to remove a hack where Chrome was trying to pace the recv buffer sizes to match what it expected the TCP stack would deliver during slow start. The non-blocking mode also matches how the Linux and Mac stacks work, so it ends up being a lot cleaner in general. I didn't tackle send() or UDP because I had a very specific issue I was fixing, but it looks like it makes sense to transition over all the way.

The //net contract should be able to remain the same; it's just a cleaner implementation of the contract where the underlying APIs actually match the behavior that the //net interfaces expose. The sockets should allow sends to complete until the underlying kernel buffer is full and return WSAEWOULDBLOCK when it fills, which is pretty much what the consumers of //net would want. The fact that overlapped I/O has tighter time constraints and will return a pending write when there is still space in the kernel buffer, just because it couldn't get the data there immediately, causes a mismatch with the //net API. It would be interesting to see if it also manifests in the case of large TCP uploads.

That said, it's a pretty big change to a crazy-sensitive part of the stack with lots of potential for LSP and A/V issues. A good test case where the improvements can be tested in dev will help make sure the assumptions are right, but having some UMAs and Finch control so we can measure the actual impact is going to be critical if we make the change.