I have been reading and re-reading Martin Fowler's architecture overview of the LMAX exchange[1]. It's still not clear to me how back pressure and errors were handled in this design.

For back pressure, I think it was accomplished by blocking when acquiring the next sequence number from the disruptor when it's completely full. I think that makes sense if this thing was using blocking I/O to read from the network. If it was using asynchronous I/O, it's not clear to me how back pressure was applied to new incoming connections. Can anyone provide some hindsight here?
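To make the blocking idea concrete, here is a minimal sketch of what I mean, using a bounded java.util.concurrent.ArrayBlockingQueue as a stand-in for the disruptor's ring buffer (this is not the actual Disruptor API): when the buffer is full, the producing thread stalls, and with blocking socket reads that stall would propagate back to the client via TCP flow control.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Backpressure via a bounded buffer, as an analogy for a full ring buffer.
// put() blocks when the buffer is full, so the network-reading thread
// stops pulling bytes off the socket until the consumer drains entries.
public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> ring = new ArrayBlockingQueue<>(2); // tiny capacity

        // Fill the buffer to capacity.
        ring.put("order-1");
        ring.put("order-2");

        // A plain put() would now block; offer() with a timeout makes the
        // backpressure observable without hanging the demo.
        boolean accepted = ring.offer("order-3", 100, TimeUnit.MILLISECONDS);
        System.out.println("accepted while full: " + accepted);  // false

        // Once the consumer drains an entry, the producer makes progress.
        ring.take();
        accepted = ring.offer("order-3", 100, TimeUnit.MILLISECONDS);
        System.out.println("accepted after drain: " + accepted); // true
    }
}
```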
For error handling: if the journal could not be written to, say because the disk was full, how did the design:
- Prevent the business logic from continuing?
- Allow resuming once disk space was cleared?
- Report the error back to the client?

Would putting something like a Netty channel inside a disruptor event be something that is recommended? I have a feeling it's not, hence why I'm asking for advice here.
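For the last point, here is roughly the pattern I'm asking about, sketched with a plain Consumer<String> standing in for the Netty channel (all names here are hypothetical, not anything from LMAX or Netty):

```java
import java.util.function.Consumer;

// Sketch of the pattern in question: each ring-buffer event carries a
// reference back to the connection that submitted it, so a downstream
// handler (e.g. the journaller) can answer the client directly on failure.
public class OrderEvent {
    String payload;                 // the order as read off the wire
    Consumer<String> replyChannel;  // stand-in for something like a Netty Channel

    public static void main(String[] args) {
        OrderEvent event = new OrderEvent();
        event.payload = "BUY 100 XYZ";
        event.replyChannel = reply -> System.out.println("to client: " + reply);

        // If journalling fails (disk full), reject through the same
        // reference instead of letting the business logic proceed.
        boolean journalOk = false; // pretend the disk was full
        if (!journalOk) {
            event.replyChannel.accept("REJECTED: journal unavailable");
        }
    }
}
```

My worry is whether holding a live connection reference inside a recycled ring-buffer slot like this is sane, or whether it leaks I/O concerns into the processing pipeline.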