When a connection is closed, the buffers are automatically flushed. More info:
There are multiple buffers involved.
The first buffer is in the kernel/OS. Data arrives (via Ethernet, WiFi, etc.), the kernel inspects the packet headers, and then passes the packet to the socket code within the OS networking stack. The OS TCP stack then extracts the received data and stuffs it into a buffer. When the application layer invokes read(), what we're actually doing is reading from this internal kernel buffer.
There is another buffer within GCDAsyncSocket. One might wonder why this is needed. It has to do with the fact that issuing a read() call is actually an expensive operation. First, it has to do all the jazz necessary to call down into the kernel. Then the kernel has to look up the socket based on the file descriptor we pass it. Then it has to inspect the recv buffer (in a thread-safe manner), copy data out of the kernel buffer into our application buffer, update some pointers, blah, blah, blah. It's not the cheapest operation in the world. The problem is that many applications are written like this:
// Read 4 bytes from the socket to get the packet type
// (Inspect packet type, since each packet type has different length header.)
// Read 16 bytes from the socket to get the packet headers
// (Extract packet length from field in packet header)
// Read 32 bytes from the socket
This was a total of 3 reads from the socket. But all the data probably arrived in a single TCP packet, and could have been scooped up in a single read.
So am I compensating for user silliness? After all, the developer could have used a big buffer, done a big read, and then extracted the packet type, packet headers, and data from said buffer…
This is true. But I have a lot of experience writing networking code, and implementing various protocols. And trust me, after writing code as described above for the 400th time, one starts to wonder why there isn't a library somewhere that does this for us. After all, we want to focus on the code that makes our application special. Not write the same old buffering code over and over and over.
So CocoaAsyncSocket does the buffering, and buffer scanning for you. But there's actually more to it in the case of GCDAsyncSocket.
You see, in the "olden" days, one had no idea how much data was available in the buffer. There was a function you could call that would say either "yes, there's data available" or "no, there isn't". But even if you knew there was data, you didn't know how much. So one would simply guess, and allocate a buffer accordingly. But with GCD and the underlying kqueues, the OS tells us exactly how much data is available. CocoaAsyncSocket takes advantage of this information to optimize its buffer allocations.
So, I went on a bit of a tangent there. But, long story short, there are various buffers involved. CocoaAsyncSocket has its own internal buffer for performance reasons. All buffers are automatically flushed when the connection is closed.
You can accomplish this using tags. Before I begin, I want to quickly mention that I'm assuming you're familiar with the concepts presented here:
So your data stream is required to have some kind of structure, such that you can separate one request/response from the next. Thus, all that is required is for you to issue the proper read statements to get through the request/response you want to ignore, but use a tag (one of the parameters to the read method) that indicates the response should be ignored. Perhaps something like this:
#define TAG_IGNORE -1
and then in your socket:didReadData:withTag: method, you just do something like this at the very beginning:
if (tag == TAG_IGNORE) return;
-Robbie Hanson