Posted On: 2020-06-15
One common software programming mistake is to try to prevent Exceptions by any means necessary. As best I can tell, this is largely a product of misunderstanding: many developers only experience Exceptions in the context of error recovery, and, as such, they come to associate the Exception itself with errors. They blame the Exception, and erroneously believe that, by preventing Exceptions, they are preventing errors. In reality, however, Exceptions are merely messengers - they are not to be feared or shunned, but rather tools to be used to understand more about what's going on.
In software, an Exception is a construct that indicates that the program has entered an "exceptional" state and therefore should not continue to follow its ordinary behavior. In most (if not all) implementations, Exceptions contain important details about the state of the program, identifying not just what is exceptional about it but also where it occurred in the code (a stack trace) and additional details that may help in diagnosing an unexpected Exception (such as a message).
When a program enters an exceptional state, the program "throws" an Exception and other code is given the opportunity to react to it (to "catch" it). In most languages, a caught Exception is assumed to be handled (that is, resolved in a way that the code can proceed normally again) unless specified otherwise (such as by throwing its own Exception). Exceptions that are uncaught by the application will be handled by the framework or operating system - though that normally involves terminating the process (a "crash"). Thus, most developers aim to handle Exceptions in their application, to avoid users experiencing unexpected program crashes.
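These throw/catch/handle mechanics can be sketched in Python (the post's discussion is language-neutral; in Python "throw" is spelled `raise` and "catch" is spelled `except`, and the `withdraw` function here is a made-up example):

```python
def withdraw(balance, amount):
    if amount > balance:
        # Enter an exceptional state: normal flow cannot continue,
        # so throw (raise) an Exception carrying a message.
        raise ValueError("insufficient funds")
    return balance - amount

try:
    new_balance = withdraw(100, 250)
except ValueError as e:
    # Catching the Exception: here we handle it, so the program
    # can proceed normally - the balance simply stays unchanged.
    print(f"Handled: {e}")
    new_balance = 100
# Had we re-raised instead, and no caller caught it, the runtime
# would terminate the process (a "crash") and print the stack trace.
```

The same shape appears in C#'s `try`/`catch` and most other mainstream languages.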
Although Exceptions are not technically required to be a message about an error, many languages (including C#) strongly encourage developers to only throw Exceptions to indicate an error. Personally, this strong association between Exceptions and errors has been a part of both my own learning and the experiences I have had mentoring others - and it is a key part of the misunderstanding that prompted this post.
For developers who are focused on getting their code working (the "happy path" as it's called), Exceptions can be quite troublesome. When everything works as expected, Exceptions won't happen* - yet, the realities of testing code can often lead one into one of the many "unhappy paths" where Exceptions reside. If one spends too much time focused on such "happy path" development, one can come to see Exceptions as an enemy - a deviation from the desired path and an unwanted obstacle to development and testing.
Beyond fixating on the "happy path", Exceptions are also problematic from a testing and reliability standpoint. Many Exceptions come from places where the pristine, predictable world of the software intersects with the messy, unpredictable realities of hardware/IT (such as file access or out-of-memory errors). Accounting for each and every one of the possible paths requires a lot of attention to detail, and, because those paths depend on other systems, they can be difficult to accurately test on the developer's machine*.
Above all, however, the Null Reference Exception (NRE for short) is probably the biggest culprit for many developers' fear/distrust of Exceptions. Although the technical details of the Exception vary by language and/or implementation*, in general, a NRE occurs when a developer writes code in a way that assumes an object exists, but the run-time discovers that it does not. NREs are quite possibly the single most common type of Exception, as they can happen from a wide variety of causes: from unvalidated input, to mistakes in code flow, to unexpected outputs from dependencies. What's more, some languages (including C#) don't provide any information about which variable is null - further frustrating developers as they try to understand what caused the Exception.
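The post's NRE discussion is framed around C#; Python's closest analog is the `AttributeError` raised when code assumes an object exists but the run-time finds `None`. A small sketch (the `lookup_user` function and the data are hypothetical):

```python
def lookup_user(users, name):
    # Returns None when the name is absent - an "unexpected output
    # from a dependency", from the caller's point of view.
    return users.get(name)

users = {"ada": "ada@example.com"}

email = lookup_user(users, "grace")
try:
    domain = email.split("@")[1]  # assumes email exists - it doesn't
except AttributeError:
    # The message reads "'NoneType' object has no attribute 'split'" -
    # it names the type, not the variable, mirroring the post's
    # complaint about languages that don't say *which* value was null.
    domain = None
```

(C# later gained more helpful NRE messages, but as of this post's writing the complaint stands.)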
When a developer fears/distrusts Exceptions, they often end up writing code that duplicates Exceptions' behavior - often in ways that are less reliable or more complex. Perhaps the clearest example of how this can go awry is with the file system. If one fears file access Exceptions, one may try to avoid them by performing a variety of tests before attempting to access the file (does it exist, is it possible to write-lock it, etc.) All these tests are duplicates of work that is performed by the framework/library that is then actually used to access the file. Unfortunately, any attempt to validate a file before using it will be subject to a race condition: another process could come along and delete the file between the validation and the actual use. Thus, no amount of manual validation can possibly avoid Exceptions.
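Python even has names for these two styles: "look before you leap" (pre-validate everything) versus "easier to ask forgiveness than permission" (attempt the operation, then catch). A sketch of the file-access example, with hypothetical function names and paths:

```python
import os

# "Look before you leap": subject to the race condition described
# above - the file can vanish between the check and the open.
def read_config_lbyl(path):
    if os.path.exists(path):      # check...
        with open(path) as f:     # ...but the file may be gone by now
            return f.read()
    return None

# "Easier to ask forgiveness than permission": no gap to race against,
# because the open itself is the only authoritative test.
def read_config_eafp(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None
```

The second form is both shorter and more reliable - the Exception does the validation work that the manual checks merely duplicate.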
Another way that fear of Exceptions can lead one to go too far in avoiding them is by hiding (aka. "swallowing") all Exceptions. This occurs when a developer catches and "handles" all Exceptions, without actually doing anything to handle them. This is the programming equivalent of a supervisor ignoring all subordinates' complaints and simply telling them to "get back to work." Yes, if the complaints are frivolous then things will work out, but there are many kinds of problems (such as a building fire) that become far worse if they are hidden*.
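In code, swallowing looks deceptively tidy. A contrived sketch (the `save_record` function is hypothetical, hard-coded to fail so the two styles can be compared):

```python
import logging

def save_record(record):
    raise OSError("disk full")  # hypothetical always-failing operation

# Swallowing: the Exception is caught and discarded, and the caller is
# told "get back to work" - the disk-full problem silently worsens.
def save_swallowed(record):
    try:
        save_record(record)
    except Exception:
        pass
    return True  # claims success regardless - dangerous

# Minimally honest handling: record the message, report the failure.
def save_reported(record):
    try:
        save_record(record)
        return True
    except OSError as e:
        logging.error("save failed: %s", e)
        return False
```

The swallowing version never crashes - and that is exactly the problem, since nothing downstream learns that the data was never saved.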
Central to effectively dealing with Exceptions is to respect and understand that an Exception is a message about an error, not the error itself. Thus, hiding an Exception is not preventing the error; it is instead saying "I deem this message is not important." Likewise, one should not fear the messenger; rather, one should act appropriately according to the message: handling things that one knows how to handle, and escalating things that one does not. Finally, one should understand that such messages won't show up on the "happy path", and, in many cases, won't show up on a variety of "unhappy paths" either. Instead, one needs to be thoughtful about the design: anticipate and plan to test those paths which can result in such messages*.
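The "handle what you know, escalate what you don't" principle can be sketched as catching only the specific, anticipated Exception types and letting everything else propagate (again a hypothetical example, reading a port number from a config mapping):

```python
def load_port(config):
    try:
        return int(config["port"])
    except ValueError:
        # Known message: the value is malformed - handle it by
        # falling back to a sensible default port.
        return 8080
    except KeyError:
        # Known message: the key is missing - handle it the same way.
        return 8080
    # Anything else (e.g. config being None raises a TypeError) is a
    # message we don't know how to act on: it propagates to the
    # caller, escalating intact rather than being swallowed.
```

The deliberate absence of a bare `except:` is the point - unanticipated messages keep their stack trace and reach someone who can act on them.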
Hopefully, this post has been helpful for appreciating Exceptions. As messengers of errors, they tell developers what went wrong, where, and (sometimes) why. When considered as a part of the design, Exceptions are essential for making more robust, error-resistant code. When they are feared, however, the code can become bloated, less stable, and potentially even dangerous. Thus, it is by respecting Exceptions that one can best avoid errors.