Posted On: 2020-11-02
Whenever something goes wrong in a program, it's helpful to find a set of simple steps that can replicate the problem. Such steps may be provided by software testers*, but the responsibility for using replication steps to effectively solve the problem ultimately falls to programmers. In my time developing software, I've created something of an informal process that I use to make the most of such replication steps - and I thought I'd share that process with you in today's post.
Whenever I tackle an issue, I typically have some idea of what might be causing it*. In such situations I often apply the scientific method, treating that idea as my hypothesis and devising ways to test its validity. When my confidence is high, I will usually skip right to the code, attempting a fix and using the replication steps to verify that it succeeded. When my confidence is low, however, I prefer to begin by testing the replication steps themselves, in an effort to gain a better understanding of how the steps fit together.
When testing the steps themselves, I often start by trying to simplify them. If any steps seem extraneous, I test whether I can replicate the issue without them. Likewise, for any steps that seem overly specific, I try to make them more general, picking out details that seem irrelevant and trying variations to rule them out. Importantly, when the steps to replicate have come from someone else, I can often save time here by reaching out and talking with that person directly: in many cases, they have already run similar tests, and can provide insights and ideas that point me in the right direction.
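When the replication steps are scriptable, this kind of pruning can even be automated. Here's a rough sketch of the idea in Python; the `reproduces` check and the step names are invented for illustration, standing in for actually performing the steps and observing the bug:

```python
def minimize_steps(steps, reproduces):
    """Repeatedly drop any step that isn't needed to reproduce the issue."""
    changed = True
    while changed:
        changed = False
        for i in range(len(steps)):
            candidate = steps[:i] + steps[i + 1:]
            if reproduces(candidate):
                # The issue still occurs without this step, so discard it.
                steps = candidate
                changed = True
                break
    return steps


# Hypothetical scenario: the bug appears whenever text is pasted and saved.
def reproduces(steps):
    return "paste text" in steps and "save" in steps


minimal = minimize_steps(
    ["open file", "paste text", "scroll down", "save", "close"],
    reproduces,
)
print(minimal)  # ['paste text', 'save']
```

Even when the steps involve manual interaction, the same greedy "drop one step, retry" loop works fine by hand; automating it just makes the retries cheap.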
Once I have as simple a set of steps as possible, I then try to disprove my hypothesis, selectively removing or altering the steps my hypothesis requires in order to be true. Importantly, I keep competing hypotheses in mind here, picking the changes that stand the best chance of narrowing the possibilities to as few as possible.
Once I am confident in my hypothesis, I try to verify it by changing the code itself. I'll often do this in a temporary way, using runtime debugging features like the immediate window to test individual changes in isolation. Only once I've proven out a fix will I attempt a permanent change*.
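Prototyping a fix in isolation can be as simple as writing the candidate change as a standalone function and running it against the inputs from the replication steps before touching the real code. A hypothetical sketch, where `parse_quantity` and the inputs are invented for illustration:

```python
def parse_quantity(text):
    # Existing behavior; the hypothesis is that it crashes on blank input.
    return int(text)


def candidate_fix(text):
    # Proposed change: treat blank input as zero.
    return int(text) if text.strip() else 0


# Inputs drawn from the (hypothetical) replication steps.
replication_inputs = ["3", "", "7"]

# The original raises ValueError on "", while the candidate succeeds,
# supporting the hypothesis before any permanent edit is made.
results = [candidate_fix(value) for value in replication_inputs]
print(results)  # [3, 0, 7]
```

Only after the isolated version behaves as expected does it make sense to fold the change back into the codebase proper.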
Sometimes, I find that my hypotheses are whittled down to things I simply cannot test. Typically this is due to factors beyond my control*, but whenever I am faced with such a situation, I've found my way out by stepping away from my codebase and trying to replicate the issue in a whole new project - using as little (and as simple) code as possible.
Using this approach serves two purposes: firstly, if the issue is caused by code that I do not own, I can provide the minimal project as part of any bug reports I submit. Secondly, and more importantly, the effort of making a minimal project serves as a form of validating the hypothesis in its own right: more often than not, I find that the minimal project does not actually replicate the issue*. Whenever this happens, it's both inspiring and humbling - it forces me back to square one, all while teaching me things I'd never known before.
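A "minimal project" can sometimes shrink all the way down to a single short script. Here's a hypothetical one isolating a suspected floating-point accumulation issue from everything else an application might be doing (the surrounding bug scenario is invented, though the arithmetic behavior it demonstrates is real):

```python
# Minimal, self-contained reproduction of a suspected rounding issue:
# summing ten 0.1s does not yield exactly 1.0 in binary floating point.
total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```

If a script like this reproduces the behavior, it can go straight into a bug report; if it doesn't, that failure itself disproves the hypothesis and sends the investigation back to the larger codebase.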
Hopefully this has been a useful window into my own process for using replication steps to solve issues. As you can see, not only do I use the steps to validate fixes, but I also use them to hone my understanding of the issue at hand, and even as a template for whole new projects. Whether you're refining your own development process, or simply curious about how others work, I hope that this explanation of mine has been helpful.