On 9/19/2019 2:50 AM, Öö Tiib wrote:
> So what does detaching that debugger even mean?
In Windows, a process launches and has its own address space,
threads, open handles, etc. It runs in isolation: hardware and OS
protection mechanisms guard it against corruption from other
processes that may later crash or try to access its data.
A debugger is like a hand that reaches out to grab a tennis ball.
The hand wraps around the ball and holds onto it. When that hap-
pens, the debugger "attaches" to the process being debugged. The
OS then shares that process's internals with the debugger: it
gets access to the same memory, its handles, its threads, etc.
The debugger typically recognizes compile-time information from
something like a PDB (program database) created at compile + link
time. That PDB allows a correlation between a binary location in
memory (even through the loader's address-space randomization,
ASLR) and the source code which produced that machine code.
By making changes to the source code, re-compiling, and applying
those changes to the running binary image, many common edits
(adding code, altering code, deleting code, doing the same for
variables, etc.) take effect live; the in-memory image can be ex-
panded or contracted as needed by the debugger.
When the debugger is done making changes to the running binary
footprint in memory, it can detach. This is the same as taking
your hand off that tennis ball and letting it go. The process
that was being debugged is then released, left again to its own
encapsulated existence.
> The "stale versions"
> of functions are still present as part of original executable and
> the "vtable-like" hooking mechanism with "branching off places" are
> new goods owned by debugger. Avoiding detaching it does not sound
> like entirely policy-based decision.
They are committed into the running process's memory. They're
just not in the file on disk, though honestly I cannot see why
the OS + compiler + debugger couldn't coordinate an exchange
which allows the .exe on disk to be updated with the new changes.
I know in VS when you exit a process you've been working on, it
does an auto-link after terminating the process so it can update
the disk image. Seems an unnecessary step IMHO.
>> Edit-and-continue is, by far, the greatest developer asset I have.
>> In languages I code where I don't have edit-and-continue, it is
>> sorely missed.
>
> But that is totally orthogonal to need of detaching from such
> processes.
The advantage of edit-and-continue is the saved contextual state.
You can attach to a running process, suspend its execution where
it is, examine things, change things, inject new things, etc.,
and then resume from where you were. It saves the time of re-
loading the app, getting back to where you were, and re-selecting
the data you were working with, just to return to the point where
the algorithm you're debugging runs.
Using an edit-and-continue build reduces code performance by
about 20% or so. But the time it saves the developer more than
makes up for that. Besides, most applications on most CPUs today
are more than fast enough running at 80% of their full potential
speed; even 20% would probably be usable.
I have an old 2007 HP Pavilion desktop computer. It was getting
slow on Windows 7 with a dual-core 2.2 GHz CPU and SATA hard disk.
I upgraded to a quad-core 2.4 GHz CPU, 8 GB of RAM, and an SSD,
and it's like a new machine. I couldn't believe the difference.
That $110 investment (scavenged DRAM from another machine, a CPU
bought on eBay, a new SSD) resurrected the aging computer, and
I've been using it for even Java development in Eclipse without
any performance issues.
In any event, most people I know do not appreciate edit-and-con-
tinue. I find it difficult to code without it. I am 10x or more
productive with an edit-and-continue toolset because it allows me
to code as I go, testing bits as I go, which greatly helps me
overcome the errors my dyslexia introduces into my code.
--
Rick C. Hodgin