Hacker News

I feel the same way about error handling, but I think it is more of a design issue than a language issue. Ideally, an application has a barrier that deals with anything from the outside world that can cause an error. Past that barrier, code can concentrate on the main goal, not errors.

Bertrand Meyer once said that exceptions are for cases where you can't tell whether an operation will succeed or not before trying it. Generally, that happens in I/O, system calls and input validation. The problem with a lot of error handling is that it moves beyond that realm and mixes with the logic of the system.
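That barrier idea can be sketched in a few lines. This is a minimal, hypothetical example (the names `parse_port` and the 65535 limit are my own choices, not from the comment above): validation happens once at the boundary where untrusted input arrives, and code past that point assumes a valid value.

```c
#include <ctype.h>
#include <stddef.h>

/* Hypothetical barrier function: validate untrusted text once, at the
 * boundary. Callers past the barrier receive a known-good port number
 * and need no further error handling for it. */
int parse_port(const char *s, int *out) {
    if (s == NULL || *s == '\0') return -1;     /* outside world: reject */
    long v = 0;
    for (; *s; s++) {
        if (!isdigit((unsigned char)*s)) return -1;
        v = v * 10 + (*s - '0');
        if (v > 65535) return -1;
    }
    *out = (int)v;                              /* inside the barrier: valid */
    return 0;
}
```

Everything below `parse_port` in the call stack can concentrate on the main goal, per Meyer's point: the operation whose success you can't predict (reading and validating input) is done exactly once.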



A compiler usually has three parts, called the front end, middle end, and back end. Errors can usually only happen in the front end (syntax errors, type checking) and rarely in the back end (not enough space for the output). During the middle end, the optimization phase, no errors can happen. However, the compiler itself is usually not perfect and might contain bugs, so some error handling is helpful for debugging purposes. Should we use different error-handling mechanisms for internal and external errors? For performance reasons, we could remove the internal error handling from release builds.

Another problem is distinguishing between inside and outside. A library usually does some input checking, because the caller is considered "outside". To remove this overhead, the library could document some restrictions and leave the checking to its users, which usually does not end well. For example, memcpy requires that src and dst must not overlap, but in practice this causes problems [0]. This only works if the programmer who writes the caller has the possibility and the skill to adapt the callee. In other words, there is no inside-outside difference for her.

[0] https://bugzilla.redhat.com/show_bug.cgi?id=638477#c38
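To make the memcpy restriction concrete: shifting a buffer's contents within the same buffer is an overlapping copy, so it is undefined behavior with memcpy and must use memmove, which the standard defines for overlap. A minimal sketch:

```c
#include <string.h>

/* Shift the first n bytes of buf right by one position. Source and
 * destination overlap, so memcpy(buf + 1, buf, n - 1) would be
 * undefined behavior here; memmove handles overlap correctly. */
void shift_right_one(char *buf, size_t n) {
    memmove(buf + 1, buf, n - 1);
}
```

The bug report above is exactly this trap: callers that relied on memcpy tolerating overlap broke when glibc switched to a faster implementation that exploited the documented restriction.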


> The problem with a lot of error handling is that it moves beyond that realm and mixes with the logic of the system.

What is the motivation for this? What are the perceived pains that programmers are trying to cure which are not, "cases where you can't tell whether an operation will succeed or not before trying it?"


Most often I think it's due to poor separation of concerns. The programmer is about to push an input down a stack of method calls several layers deep. There's the success condition, the failure condition, and then the exception, which captures unexpected behavior from the rat's nest of code he just called.


How often can this be addressed with orthogonal finite state machines? (For example: one which embodies whatever business process, and another that embodies errors and failures with IO and the network.)
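A minimal sketch of what "orthogonal" could mean here, with entirely illustrative names (none of this is from the thread): two state machines whose transition functions never mention each other's states, one for the business process and one for network failures.

```c
/* Two independent machines: the order workflow and the network-health
 * machine advance separately; neither transition function reads the
 * other machine's state. All names are hypothetical. */
typedef enum { ORDER_NEW, ORDER_PAID, ORDER_SHIPPED } order_state;
typedef enum { NET_OK, NET_RETRYING, NET_FAILED } net_state;

order_state order_step(order_state s) {
    switch (s) {
    case ORDER_NEW:  return ORDER_PAID;
    case ORDER_PAID: return ORDER_SHIPPED;
    default:         return s;          /* terminal state */
    }
}

net_state net_on_error(net_state s, int retries_left) {
    (void)s;  /* IO failures live entirely in this machine */
    return retries_left > 0 ? NET_RETRYING : NET_FAILED;
}
```

The appeal is that error handling stays out of `order_step` entirely; the open question from the comment is how often real systems decompose this cleanly.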



