I have a fairly powerful embedded Linux device that will collect data from various sockets/file descriptors using C. This data is to be parsed, buffered, and passed on to a TCP or UDP socket to be transferred elsewhere for long-term storage. This last step happens either when a sufficient amount of data has been acquired, or when some other event triggers it.
My question is: is there any reason not to buffer everything on the heap (as opposed to writing to/reading from some Linux file descriptor), given that
I don't quite see why you say "using the heap is counter-intuitive": millions of embedded routers and switches use the heap for store-and-forward queues, which I understand is similar to what you are doing.
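To make the store-and-forward idea concrete, here is a minimal sketch of a heap-backed accumulate-then-flush buffer. All names (`acq_buf`, `acq_buf_append`) are hypothetical, not from any real API; the flush callback is where you would call `send()` on your connected TCP/UDP socket:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical heap buffer: grows on demand, flushes past a threshold. */
typedef struct {
    unsigned char *data;   /* heap-allocated storage */
    size_t len;            /* bytes currently buffered */
    size_t cap;            /* allocated capacity */
    size_t flush_at;       /* threshold that triggers a flush */
    /* Callback, so the same buffer can flush to a socket, a file, ... */
    int (*flush)(const unsigned char *buf, size_t len, void *ctx);
    void *ctx;
} acq_buf;

int acq_buf_append(acq_buf *b, const void *src, size_t n)
{
    if (b->len + n > b->cap) {               /* grow geometrically */
        size_t ncap = b->cap ? b->cap * 2 : 4096;
        while (ncap < b->len + n)
            ncap *= 2;
        unsigned char *p = realloc(b->data, ncap);
        if (!p)
            return -1;                       /* out of memory: caller decides */
        b->data = p;
        b->cap = ncap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    if (b->len >= b->flush_at) {             /* threshold event: push downstream */
        if (b->flush(b->data, b->len, b->ctx) != 0)
            return -1;
        b->len = 0;                          /* keep the allocation for reuse */
    }
    return 0;
}
```

The callback keeps the buffering logic independent of the transport; an "other event" trigger would simply call `b->flush(...)` directly.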
It very much depends on the data you acquire. Anything that can be re-acquired after a power failure or other reset event of your device doesn't really need to go into permanent storage.
Data that is hard or impossible to re-acquire and thus valuable (sensor data, for example) you might want to push into a safe place where it is protected from resets and power-downs, however.
On the other hand, if your data is not segmented but rather stream-oriented, storing it to a file might be a lot easier. Also beware that out-of-memory conditions and heap memory leaks can be a real nuisance to debug on embedded systems.
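For the "safe place" case, a minimal sketch of appending a record to a file and forcing it to stable storage so it survives a reset. The helper name `persist_record` is made up for illustration; note that `fsync()` is what actually pushes the data past the kernel's caches:

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: append a record and force it to stable storage. */
int persist_record(const char *path, const void *rec, size_t n)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    ssize_t w = write(fd, rec, n);           /* short writes treated as failure here */
    /* fsync() flushes the kernel's write-back caches to the medium */
    int rc = (w == (ssize_t)n && fsync(fd) == 0) ? 0 : -1;
    if (close(fd) != 0)
        rc = -1;
    return rc;
}
```

An `fsync()` per record is expensive, so in practice you would batch records and sync once per batch, trading a little durability for throughput.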