[RETRACTED] Debuggers Make You A Bad Detective



AUTHOR’S NOTE SEPT. 11 2021: I no longer endorse this article.

Imagine you’re a detective, perhaps in the midst of Victorian-era London. You’re at the scene of a murder, a blood-splattered living room in a townhouse. Your plucky assistant, Blatson, immediately has an idea:

“Alright, we need to figure out who committed this foul act, and fast. Let us search for clues!”, he says. You agree, and split up to cover the room.

You start your search at the body, following a trail of blood into the kitchen. The trail stops at the dishwasher. Opening it, you see a single knife in one of the pockets. You take out your duster, and discover large fingerprints. The murder victim has small fingers, so the knife may have been the murder weapon.

“Blatson, look here!” you say excitedly. “I think I have a lead!”

“Oh thank goodness!” replies Blatson. “I was getting nowhere, even though I looked everywhere.”

“What do you mean?” you ask, before peeking your head out of the kitchen. You see Blatson standing in a small corner of the living room, bent over, staring at blank carpet.

“What the devil are you doing?” you ask Blatson.

“Oh, I was just looking through the carpet for stains. It was taking so long though, I’m glad I have an expert like you on the team!”

“But,” you reply, “I just started at the body and tried to find a common type of clue. Like a murder weapon, like this knife!” You point at said knife in your hand, Blatson staring in confusion.

“Ah, you must be one of those 10X detectives who can solve problems in your mind. Well, not all of us have your ‘gifts’, O wise one!” Blatson replies sarcastically. “But seriously, how was I supposed to notice the knife?”


Software development is a process of creating information. You learn about your tools, the environment, the product, the problems, the solutions. Programming is learning by and for making, while debugging is learning by and for fixing.

Debugging is basically tracking down the cause (or causes) of a problem in a piece of software, and then fixing the software to prevent the problem in the future. The key reason debuggers are bad at debugging is that they don’t actively encourage real understanding of code and data.

We can think of debugging as getting to the truth of some matter. You want to hit on the cause (or group of causes) for a given problem, so you can fix it. Anyone versed in Bayesian rationality, or the twists and turns of detective stories, can intuit what to do: narrow down the causes. Come up with hypotheses, test them, shift the probability mass of your expectations to a smaller and smaller percent of the total possibilities, until you have the actual answer. Eliminate the impossible, shift your confidence to the more likely.
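To put a rough number on how quickly that narrowing pays off (a simplified sketch, assuming each check cleanly rules out half of the remaining possibilities): the number of checks needed grows only logarithmically with the size of the search space.

```python
import math

def checks_to_isolate(n_possible_causes: int) -> int:
    """Idealized count of yes/no checks needed to corner one cause,
    assuming each check eliminates half of what remains."""
    return math.ceil(math.log2(n_possible_causes))

# Roughly 1,000 places a bug could hide -> about 10 well-chosen checks.
print(checks_to_isolate(1000))  # 10
```

Scanning every possibility one by one, Blatson-style, takes on the order of the full thousand looks instead.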

Debuggers don’t do any of that. At least, not nearly as well as print statements. In fact, they can make the whole process of debugging slow and unsophisticated.

A debugger shows you the state of your program at a given moment.

(EDIT 2020-06-14: detaro on Hacker News correctly notes that a debugger can usually do far more than single-stepping through code. Throughout the rest of the article, assume the term “debugger” refers specifically to a tool or tool component whose primary use is to walk step-by-step through code while showing the current program state.)

All the variables, states, etc. are at your fingertips. This should be great for finding problems, right? Not usually. The debugger does not help you judiciously guess what’s wrong with your code, or test whether a particular thing is broken or working as expected. Like a mystery with too many clues, you can spend time on dozens of red herrings, depending on what catches your fancy in the current program state. This was Blatson’s mistake above: scanning everything in-depth the first time around, rather than actually trying to explain the situation. Looking for bugs in a debugger is as undiscriminating and needlessly inefficient as looking for murder clues everywhere in the room. Instead, one should strive to make educated guesses, and test them.

To use another example: your car is making a strange noise, and you don’t know why. Which method would get the car fixed faster: checking a cross-section of the whole car for anything that looks suspicious? Or listening for where the sound is coming from, and then checking more thoroughly in that area?

To take the use of debuggers to an extreme, you could inspect raw memory and registers by hand. But that’s obviously inefficient and unhelpful for really finding the causes of problems. A debugger is, thankfully, a higher-level, more discriminating tool than that. You could say that reading through raw memory is analogous to programming in Assembly: you’ll understand everything, given infinite time. So using a debugger is like programming in C, and print statements are the evil lazy noob solution like Python, right?

Except… C is in basically the same speed ballpark as Assembly, while Python is far slower than either. There’s a real trade-off to working in a more abstract language like Python. But there’s no comparable advantage to working with a debugger, as opposed to actually thinking about the code and using print statements and unit tests. A program gets faster from memory management cleverness, but a developer does not fix problems faster by seeing every tiny possible cause all at once. Using abstractions and hypotheses and guesses is, in fact, the correct way to narrow down problems.

So the current smart way to debug is twofold: write unit tests, and use print statements. If you write two print statements that demonstrate the program working correctly up to a certain point, you could eliminate half the lines of code from your consideration as causes of the bug. Write more print statements, narrow down the potential cause areas, collect bits of information, and close in on the actual cause.
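As a minimal sketch of that print-statement bisection (the pipeline and its bug below are invented for illustration):

```python
# A toy order pipeline with a bug hiding somewhere inside it.
def apply_discount(subtotal):
    return subtotal - 10          # bug: should be a 10% discount, i.e. subtotal * 0.9

def add_tax(amount):
    return round(amount * 1.2, 2)

def process_order(prices):
    subtotal = sum(prices)
    print(f"DEBUG subtotal={subtotal}")      # prints 50: everything up to here checks out
    discounted = apply_discount(subtotal)
    print(f"DEBUG discounted={discounted}")  # prints 40, expected 45: the bug is between the two prints
    return add_tax(discounted)

print(process_order([20, 30]))
```

The two prints bracket the bug into apply_discount; once it’s fixed, a unit test like assert apply_discount(50) == 45 keeps it from quietly coming back.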

Do debuggers have any uses? Yes, but only in rare situations. Since a debugger is a kind of information firehose, it could only be expected to be useful when you have basically no concrete hypotheses to test. If you’re working in a highly unfamiliar programming environment, and the data flows are inscrutable, and you have no clue which line or file the bug is in… go ahead and use the debugger! Your mental model, your explanation of the situation, is vague and imprecise, so you may as well get some information to grab onto. Otherwise, you won’t be able to generate any comprehensible hypotheses at all!
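For concreteness, here is one way that exploratory, step-through use looks with Python’s built-in pdb (the function and data are placeholders; the point is pausing inside unfamiliar code to poke at its state):

```python
import pdb

def mystery_transform(records):
    # Unfamiliar code, inscrutable data: pause right where things stop making sense.
    cleaned = [r.strip().lower() for r in records]
    pdb.set_trace()  # at the (Pdb) prompt: `p cleaned` to inspect, `n` to step, `c` to continue
    return sorted(set(cleaned))

mystery_transform(["  Alpha", "beta ", "ALPHA"])
```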

Debuggers are most useful for such situations, where you don’t understand much of anything that’s happening (and can thus benefit from crude basic information, like the current program state). So in our detective story, there are still situations where Blatson could have been right to painstakingly check the entire carpet for clues! If the murder scene had been wiped down thoroughly, for instance, with no obvious traces left behind, it would then make sense to do a more thorough, debugger-like scan.

Even then, actual debuggers can still be quite dumb! They usually lack the control-flow analysis features of coverage testing, they tend to be as text-based and clunky as other programming tools, and it can be hard to filter down to the variables that matter. A debugger could be a valuable tool in the work of creating and eliminating hypotheses about bugs… but only if it can rise to the challenge, becoming more customizable to the intuitions and hypotheses of the programmer and the situation.

A somewhat good example of a debugger done right is the web inspector tool in your browser of choice. Information is grouped logically (network requests, page source), known-important information is made visually noticeable (colors and labels), and you can usually hover over things on the webpage itself to get info in the inspector. Debuggers ought to, at the very least, incorporate those advances, and in greater depth. But the debugger is still no substitute for unit testing, and can’t always handle the complex data-and-logic-following that print statements can.

I used to think I was an inferior programmer because I didn’t pay much attention to debuggers in CS class.[1] But after reading Daniel Lemire’s popular article on the subject, I realized that I wasn’t missing much. Programming greats like Linus Torvalds and Guido van Rossum have made professional-grade tools using non-debugger methods. Robert C. Martin’s criticism of debuggers was a key inspiration for mine. The idea of narrowing down bugs like investigating a hypothesis was grounded in earlier writings here and here. And, of course, the idea of thinking more precisely about problems (rather than just banging your head against them blindly) is nowhere near new, though some of the best explanations of it can be found in Yudkowsky’s writings.

I owe a great debt to all these people’s writings. They converge on the basic ideas I’ve tried to condense and make clear:

Debugging, and programming as a whole, are fundamentally about knowledge creation. Efficient knowledge creation requires homing in on a correct explanation, and quickly. Debuggers are the slowest, crudest way to build knowledge of a problem. Thus, debuggers are almost always the worst tools for debugging.


  1. I’m actually an inferior programmer because of my laziness and lack of interest in many problems/languages/tools. At least, as of this writing. ↩︎



