I’ve devoted a large number of lines of C code to cleanup-labels/conditionals for failed memory allocation (indicated by the alloc family returning NULL). I was taught that this was a good practice so that, on memory failure, an appropriate error status could be flagged and the caller could potentially perform "graceful memory cleanup" and retry. I now have some doubts about this philosophy that I’m hoping to clear up.
I guess it’s possible that a caller could deallocate excessive buffer space or strip relational objects of their data, but I find the caller rarely has the capability (or is at the appropriate level of abstraction) to do so. Also, early-returning from the called function without side effects is often non-trivial.
I also just discovered the Linux OOM killer, which seems to make these efforts totally pointless on my primary development platform.
From the malloc(3) man page:

> By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer.
I figure there are probably other platforms out there that follow the same principle. Is there something pragmatic that makes checking for OOM conditions worthwhile?
Out-of-memory conditions can happen even on modern computers with lots of memory, if the user or system administrator restricts the memory available to a process (see ulimit), or if the operating system enforces per-user memory allocation limits. In pathological cases, address-space fragmentation can even make allocation failure fairly likely.
However, since dynamically allocated memory is pervasive in modern programs, for good reasons, handling out-of-memory errors cleanly becomes very hairy: the checks and recovery paths would have to be threaded through the entire code base, at a high cost in complexity.
I find that it is better to design the program so that it can crash at any time. For example, make sure data the user has created gets saved to disk continuously, even if the user does not explicitly save it. (See vi -r, for example.) This way, you can create a function to allocate memory that terminates the program if there is an error. Since your application is designed to handle crashes at any time, it’s OK to crash. The user will be surprised, but won’t lose (much) work.
The never-failing allocation function might be something like this (untested, uncompiled code, for demonstration purposes only):
Valerie Aurora’s article Crash-only software might be illuminating.