Back in ’87 when I started programming professionally, debugging was done by embedding printf statements in your C programs. Text printed with this method went to the console output on Unix systems (and was silently eaten by Windows unless you went through the necessary contortions to redirect it). You used this text for tracing control flow and printing out bits of state that you were interested in. So a typical debug cycle looked like this:
1. Think about the observed behavior and then analyze the code involved.
2. Form a hypothesis about what is causing the problem.
3. Add printf statements into the code that will either confirm or destroy the hypothesis.
4. Observe the output.
And it was good…
Then the gdb debugger rolled across the transom of my awareness, followed over the years by increasingly sophisticated interactive debuggers. I’ve tried them all, and I keep returning to printf-like tools for debugging. Perhaps it is simply mental inflexibility on my part, but I like to think I have good reasons.
First, consider steps #1 and #2 above: they are by far the most important in the cycle. I think it is absolutely crucial that the steps you take when debugging arise out of your understanding of the code involved. If they don’t, you are just throwing darts, and that’s why I dislike the approach of “stepping through the code” with the debugger. Let me batter you over the head with this again:
The steps you take when debugging arise out of your understanding of the code.
If you don’t already know what the code is doing and why, stepping through with the debugger isn’t going to enlighten you; it will just show you more (possibly buggy) behavior, from which you might infer the intended behavior, and you may infer incorrectly. If you do already know what the code is doing, then stepping through won’t help you either: barring compiler bugs, all you need to do is read the code to really understand it.
Second, printf doesn’t rely on debug builds — it works equally well with release builds. Further, if you build the right kind of debug/logging facility into your framework (more on that in the next post) you can have a variety of information at your fingertips to diagnose problems at any time, debug or release.
Third, debuggers are useless for diagnosing race conditions. Because running under a debugger changes the timing of operations, race conditions become extremely difficult to reproduce, and setting breakpoints only makes the problem worse. Perhaps there are debugger tricks I’m not aware of that help with the breakpoint problem, but the timing problem still holds. Especially now that threads and parallel processing are becoming more prominent, this is a crucial flaw.
Fourth, running in debug mode is slow. Launching the app takes way too long. I haven’t measured it, but I’m pretty sure that the time it takes to edit files to insert printf statements is more than made up for by the time savings when you launch the app (not to mention when you do anything else with it).
Do debuggers have advantages? Sure, and from time to time I’ll use them. But I always circle back to my first objection above: you need to really understand the code, and that is best and most efficiently done by reading and analyzing it.
I have a snarky (but talented) colleague who enjoys accusing me of using “bear skins and stone knives” instead of modern tools. But I tend to look at it more as a choice of using simple tools that will always work.