It made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.
At the same time, it’s clearly a product of a different era:
– single-process
– mostly synchronous code
– no real notion of concurrency or async
– dated UI and interaction model
Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.
Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.
I’m curious how others think about this:
– what ideas from DDD (or similar old tools) are still valuable?
– what would a “modern DDD” need to handle today’s software?
– do you think interactive debugging is still the right abstraction at all?
I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.
Sadly it's Windows-only for now, but they have plans to port it to other platforms.
- [0]: https://github.com/EpicGamesExt/raddebugger
It's weird for a company to explicitly say, "if you use this one operating system you can go F yourself, we don't want your money". (Note: this is not the same as saying "we only officially support Windows at this time, sorry". There's seething hatred in Sweeney's words.)
Valve does.
Of course the CPU side usually lacks semantics to automatically create data visualizations (while modern 3D APIs have enough context to figure out data dependencies and data formats), and that would be the "interesting" part to solve - e.g. how to tunnel richer debug information from the programming language to the debugger.
Also there's a middle ground of directly adding a realtime runtime debugging UI to applications via something like Dear Imgui (https://github.com/ocornut/imgui/) which is at least extremely popular in game development - and in this case it's trivial to provide the additional context since you basically develop the debugging system alongside the application.
PS: I'd also like a timeslider that "just works", e.g. travelling back to a previous state, taking snapshots and exploring different "state forks". And of course while at it, live editing / hot code reloading, so that there is no difference between development and debugging session, both merge into the same workflow.
We used to have the Apple OpenGL Profiler etc on macOS, but all of Apple's tools these days are only focused on Metal. We used to have RenderDoc on Linux, but it got left behind by the Wayland transition and doesn't work anymore. So I'm kinda lacking anything to debug OpenGL at the moment...
RenderDoc on Linux appears to work fine for X11 apps via XWayland btw (just tested here with the sokol-samples using the sokol-gfx GL backend: https://github.com/floooh/sokol-samples). Just another reason not to write a native Wayland backend and instead rely on the XWayland shim, I guess...
https://github.com/epasveer/seer
Interactive debugging is definitely useful when teaching, but teaching is obviously a different context. Seer isn't an educational tool, though, and I believe it holds up in other cases as well.
Also, rr is impressive in theory, although it has never worked on the codebases I've worked on.
https://www.youtube.com/watch?v=O-3gEsfEm0g
Casey also makes a good point here on why printf-debugging is still extremely popular.
I've worked in a company that, for all intents and purposes, had the same thing - single thread & multi process everything (i.e. process per core), asserts in prod (like why tf would you not), absurdly detailed in-memory ring buffer binary logs & good tooling to access them plus normal logs (journalctl), telemetry, graphing, etc.
So basically - it's about making your software debuggable and resilient in the first place. These two kind of go hand-in-hand, and absolutely don't have to cost you performance. They might even add performance, actually :P
If you're lucky enough to be able to code significant amounts with a modern agent (someone's paying, your task is amenable to it, etc) then you may experience development shifting (further) from "type in the code" to "express the concepts". Maybe you still write some code - but not as much.
What does this look like for debugging / understanding? There's a potential outcome of "AI just solves all the bugs" but I think it's reasonable to imagine that AI will be a (preferably helpful!) partner to a human developer who needs to debug.
My best guess is:
* The entities you manage are "investigations" (mapping onto agents)
* You interact primarily through some kind of rich chat (includes sensibly formatted code, data, etc)
* The primary artefact(s) of this workflow are not code but something more like "clues" / "evidence".
Managing all the theories and snippets of evidence is already core to debugging the old fashioned way. I think having agents in the loop gives us an opportunity to make that explicit part of the process (and then be able to assign agents to follow up gaps in the evidence, or investigate them yourself or get someone else to...).
The first is that it gives you unparalleled insight into the real stuff behind the scenes. You'll never stop learning new things about the machine with a debugger. But at a minimum, it will make you a much better programmer. With that newfound context, those convoluted, pesky programming guidelines will finally start to make sense.
The second is that print is an option only for a program you have the source code for. A debugger gives you observability and even control over practically any program, even one already in flight or one that's on a different machine altogether. Granted, it's hard to debug a binary program. But in most cases on Linux or BSD, that's only because the source code and the debugging symbols are too large to ship with the software. Most distros and BSDs actually make them available on demand using the debuginfod software. It's a powerful tool in the hands of anyone who wishes to tinker with it. But even without it, Linux gamers are known to ship coredumps to the developers when games crash. Debugging is the doorway to an entirely different world.
- Brian Kernighan
https://whitebox.systems/
Doesn't seem to meet all your desired features though.
Sophisticated live debuggers are great when you can use them but you have to be able to reproduce the bug under the debugger. Particularly in distributed systems, the hardest bugs aren't reproducible at all and there are multiple levels of difficulty below that before you get to ones that can be reliably reproduced under a live debugger, which are usually relatively easy. Not being able to use your most powerful tools on your hardest problems rather reduces their value. (Time travel debuggers do record/replay, which expands the set of problems you can use them on, but you still need to get the behaviour to happen while it's being recorded.)
[1] https://en.wikipedia.org/wiki/Time_travel_debugging
It definitely gets convoluted when it comes to memory obtained from the stack, but for heap allocations a debugger could trace the returns of the allocator APIs, use that as the starting point of some data's lifetime, then trace any access to that address and gather high-level info about the reader/writer.
Global variables should also be fairly trivial, as you just need to track memory accesses to their (fixed) addresses.
(Of course, further work is required to actually apply this.)
For variables on the stack, or in registers, though, you'll probably need heuristics that account for reuse of memory/variables, and maybe maintain a strong association with the thread this is happening in (for both the thread's allocated stack and the thread context), etc.
https://scrutinydebugger.com/
It's for embedded systems though, which is where I come from. In embedded we have this concept called instruction trace, where every instruction executed on the target gets sent over to the host. The host can then reconstruct part of what's been going on in the target system. But there's usually so much data that I've always assumed a live view is impractical, and I've only used it for offline debugging. Maybe that's not a correct assumption, though. I would love to see better observability in embedded systems.
Blows everything else out of the water.
https://pernos.co/ (I'm not affiliated with them in any way, just a happy customer)
Takes some effort to configure, but it beats "printf" (i.e. logging) in the end.
Most RE tools today will integrate a debugger (or talk to gdb).
I used it back at uni, in '98, and it really helped me understand debuggers. Afterwards, even using gdb made sense.
To add something constructive, this demo represents an amazing ideal of what debugging could be: https://www.youtube.com/watch?v=72y2EC5fkcE