NULL being passed in is not the only problem. Dangling pointers, coding mistakes by other team members, and race conditions during initialisation can all contribute to this blowing up. Some argue (correctly) that it's best to let it blow up; others say assertions are the way to go. I'm of the latter ideology: make your debug build do assertions, but make release code crash-proof with NULL checks.
There are several comments here about the performance cost of a NULL check. Go and look at the disassembly. You should find that the check equates to a JZ, which at worst costs one clock cycle. Am I incorrect?
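You can check this yourself. A sketch of the kind of function to inspect (the exact instructions depend on compiler, flags, and ABI, so the assembly in the comment is illustrative only):

```c
#include <stddef.h>

/* With gcc or clang at -O2 on x86-64, the NULL guard typically
 * compiles to a single test plus one conditional jump, roughly:
 *     test  rdi, rdi
 *     je    .Lreturn_zero
 * Inspect it yourself with `gcc -O2 -S` or `objdump -d`. */
int safe_deref(const int *p)
{
    if (p == NULL)    /* the check under discussion */
        return 0;
    return *p;
}
```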
So, another comment below mentioned 10 million deletions per second. Ignoring the fact that you should be using a doubly linked list at that point (as the kernel does), on a single-threaded 2.4 GHz CPU, 10 million one-cycle opcodes (JZ) per second equates to a bit over 4 ms of CPU time per second. With branch prediction, I expect it to be far less.
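The arithmetic, as a sketch (using the worst-case one-cycle figure from above; real cost with branch prediction would be lower):

```c
/* Worst-case overhead of the checks, in milliseconds of CPU time
 * per second of wall-clock time: checks/sec * cycles/check / Hz. */
double check_overhead_ms(double checks_per_sec, double cycles_per_check,
                         double clock_hz)
{
    return checks_per_sec * cycles_per_check / clock_hz * 1000.0;
}
```

Plugging in 10 million checks/sec, 1 cycle each, at 2.4 GHz gives 10e6 / 2.4e9 ≈ 4.17 ms per second, i.e. well under half a percent of one core.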
So, the NULL check is almost free. Even on embedded systems, it'd still be a good idea to keep it in, to insulate yourself from flipped bits (e.g. solar flares or degrading memory) and coding mistakes elsewhere.
In these days of security research, I strongly advocate defensive code: it means that each function has had its edge cases thought about, which means the developer has spent time thinking about the stability of their program, which cannot be a bad thing. Don't make your code a deliberate point of exploitation.
This makes a big assumption: that the NULL, which is the result of a programmer error, can be recovered from. What would you do once you catch the NULL? Log that it happened, maybe with some debug information, then halt? I'd personally prefer a proper crash handler to do all of that for me.
I guess my question is: what's wrong with crashing, in the case of a real screwup where something has gone horribly wrong, assuming you have a proper crash handler?
It's one instruction in the generated code, and we can reasonably expect the branch predictor to do the right thing, but branches and error handling can inhibit vectorization, which carries a much higher performance cost. While in this particular case you probably aren't vectorizing much, it is not always true that error handling is almost free.
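As an illustrative sketch of that point: a per-element check inside a hot loop puts a branch in the loop body, which can keep the compiler from auto-vectorizing it, whereas hoisting the check out leaves a branch-free body. (Whether vectorization actually happens depends on the compiler and flags; compare the two with `-O3 -fopt-info-vec` on gcc.)

```c
#include <stddef.h>

/* Check repeated inside the loop: the per-iteration branch can
 * inhibit auto-vectorization of the summation. */
long sum_checked(const int *v, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (v == NULL)          /* redundant per-iteration check */
            return 0;
        s += v[i];
    }
    return s;
}

/* Check hoisted out: same safety, but the loop body is branch-free
 * and a much better vectorization candidate. */
long sum_hoisted(const int *v, size_t n)
{
    if (v == NULL)
        return 0;
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}
```

The lesson isn't "skip the check" but "keep error handling out of the innermost loop".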