Let's say I have two threads, T1 and T2.
Thread T1 makes a blocking write() call on a TCP socket S to send a large buffer of bytes B1. The buffer of bytes B1 is so large that (a) the write call blocks and (b) TCP has to use multiple segments to send the buffer.
Thread T2 also makes a blocking write() call on the same TCP socket S to send some other large buffer of bytes B2.
My question is this:
Does the implementation of TCP on UNIX guarantee that all the bytes of B1 will be sent before all the bytes of B2 (or vice versa)?
Or is it possible that TCP interleaves the contents of B1 and B2 (e.g. TCP sends a segment with B1 data, then a segment with B2 data, and then a segment with B1 data again)?
PS - I know it is not a good idea to do this. I'm trying to determine whether some code I did not write is correct.
It is always going to be bad if a send(2) (same as write(2)) on a TCP socket is not atomic: there is never a good reason to implement a non-atomic write. All versions of Unix and Windows attempt to keep the write atomic, but apparently very few provide a guarantee.
Linux is known to "usually" get this right, but it has a bug even in recent kernels: it attempts to lock the socket, yet under certain circumstances a memory allocation can fail and the write will be split up. See this IBM blog entry on sendmsg for details.
According to those tests, only AIX and Solaris completely passed a thread stress test. It is not known whether even those systems have failure cases that the tests simply did not uncover.
TL;DR: for the purpose of writing and debugging code, it's safe to assume atomicity, unless your target is a life support system.