[net.unix] how does adb/dbx work?

hartley@uvm-cs.UUCP (Stephen J. Hartley) (07/31/85)

I was wondering if somebody could explain how adb (and dbx) executes programs.
Does it have a software simulator that interprets the instructions in the
object file of the program being debugged?  Or does it use the VAX hardware
to execute the program, say by hardware single-stepping?  I could trudge through
the code, but perhaps somebody already knows.  Thanks.
-- 
"If that's true, then I'm the Pope!"		Stephen J. Hartley
USENET:	decvax!dartvax!uvm-gen!uvm-cs!hartley	The University of Vermont
CSNET:	hartley%uvm@csnet-relay			(802) 656-3330

goldman@ittvax.ATC.ITT.UUCP (Ken Goldman) (08/09/85)

> I was wondering if somebody could explain how adb (and dbx) executes programs.
> Does it have a software simulator that interprets the instructions in the
> object file of the program being debugged?  Or does it use the VAX hardware
> to execute the program, say by hardware single-stepping. 

Maybe I can help with some PDP11 insight.  First, hardware single step
is out of the question in a multi-user system.  The way the PDP11
debuggers work is by replacing the instruction to be traced with a
trap instruction.  The program then runs at full speed until it hits the
trap, at which point control passes to the debugger and you can examine
registers, memory, and so on.
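
For concreteness, here is a rough sketch in C of the same breakpoint idea
on a modern Unix.  The ptrace(2) calls and the 0xCC (int3) opcode are
Linux/x86 assumptions, used purely for illustration; the PDP-11 debuggers
got the same effect by poking a BPT instruction into the program text.

    /* Sketch: plant a breakpoint in a child already being traced with
     * ptrace(2).  The original word is saved so the debugger can put it
     * back, back the PC up over the trap, and resume when it is hit. */
    #include <sys/ptrace.h>
    #include <sys/types.h>

    static long plant_breakpoint(pid_t pid, long addr)
    {
        long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, (void *)0);
        long patched = (orig & ~0xffL) | 0xCC;      /* int3 trap opcode */

        ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
        return orig;            /* caller restores this word at the trap */
    }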

The debugger I used was for assembly language.  I suppose that a symbolic
debugger understands the symbol tables and such.

guy@sun.uucp (Guy Harris) (08/12/85)

> > Or does it use the VAX hardware to execute the program, say by
> > hardware single-stepping. 

> Maybe I can help with some PDP11 insight.  First, hardware single step
> is out of the question in a multi-user system.

Depends on what you mean by "hardware single step" and "out of the
question".  The PDP-11, VAX, M68000, and several other processors have a
"trace bit" in the processor status {word|longword} which causes the
processor to trap at the end of the current instruction.  This is perfectly
usable in a multi-user system, considering the UNIX debuggers use it.
Running a program by single-stepping it is, however, slow as molasses in
January, whether single-user or multi-user (consider how many instructions
are executed by the kernel and the debugger per instruction executed by the
debuggee).
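
On a current Unix the trace bit is reached through ptrace(2).  A rough
sketch of the step loop, with Linux request names used purely for
illustration, shows where the time goes; PTRACE_SINGLESTEP asks the kernel
to set the trace bit in the child's saved status word, resume it, and stop
it again with SIGTRAP after one instruction.

    /* Sketch: single-step a traced child using the hardware trace bit.
     * Each debuggee instruction costs a trap into the kernel, a switch
     * to the debugger, and a switch back, which is why it is so slow. */
    #include <signal.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static long step_n(pid_t pid, long n)
    {
        int status;
        long steps = 0;

        while (steps < n) {
            if (ptrace(PTRACE_SINGLESTEP, pid, (void *)0, (void *)0) == -1)
                break;
            waitpid(pid, &status, 0);       /* sleep until the trace trap */
            if (!WIFSTOPPED(status) || WSTOPSIG(status) != SIGTRAP)
                break;                      /* exited, or a real signal */
            steps++;
        }
        return steps;
    }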

	Guy Harris

thomas@utah-gr.UUCP (Spencer W. Thomas) (08/12/85)

In article <456@ittvax.ATC.ITT.UUCP> goldman@ittvax.ATC.ITT.UUCP (Ken Goldman) writes:
>Maybe I can help with some PDP11 insight.  First, hardware single step
>is out of the question in a multi-user system.  The way the PDP11
>debuggers work is by replacing the instruction to be traced with a
>trap instruction.  

This is true for breakpoints.  On the other hand, if you are
"single-stepping" (on a PDP-11 or a Vax, at least), the debugger sets
the "trace" bit in the subprocess's Program Status Word (PSW).  This bit
causes a trace interrupt after the next instruction executes, returning
control to the debugger.  This way, the debugger doesn't need to know
all the possible places the program might end up after an instruction
executes (in order to stuff a trap at each possible location).
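
The plumbing that returns control to the debugger is the same for planted
traps and for the trace bit: the child asks to be traced before it execs,
and from then on every SIGTRAP stops it and wakes the debugger.  A rough
sketch, using the Linux flavor of ptrace(2) for illustration rather than
the actual 4.2BSD adb/dbx source:

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        pid_t pid;
        int status;

        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args]\n", argv[0]);
            return 1;
        }

        pid = fork();
        if (pid == 0) {                             /* the debuggee */
            ptrace(PTRACE_TRACEME, 0, (void *)0, (void *)0);
            execv(argv[1], &argv[1]);               /* stops at the exec trap */
            _exit(127);
        }

        waitpid(pid, &status, 0);                   /* child stopped at exec */
        /* ...plant breakpoints or set the trace bit here, then... */
        ptrace(PTRACE_CONT, pid, (void *)0, (void *)0);
        waitpid(pid, &status, 0);                   /* back here on the trap */
        return 0;
    }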

-- 
=Spencer   ({ihnp4,decvax}!utah-cs!thomas, thomas@utah-cs.ARPA)
	"You don't get to choose how you're going to die.  Or when.
	 You can only decide how you're going to live." Joan Baez

jdb@mordor.UUCP (John Bruner) (08/12/85)

> This is true for breakpoints.  On the other hand, if you are
> "single-stepping" (on a PDP-11 or a Vax, at least), the debugger sets
> the "trace" bit in the subprocess's Program Status Word (PSW).  This bit
> causes a trace interrupt after the next instruction executes, returning
> control to the debugger.  This way, the debugger doesn't need to know
> all the possible places the program might end up after an instruction is
> executed (so it could stuff a trap in each possible location).

The "trace" bit can also be used by an intrusive debugger (one which
is linked with the program), since a program can use the "signal"
system call to catch trace traps and can set the T bit in its own
PS.  I used this once to implement a PC address trace package.
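
A hypothetical sketch of the same trick on x86-64 Linux, where the
counterpart of the T bit is TF (bit 8 of EFLAGS): the program catches its
own SIGTRAP and sets the flag with a line of inline assembly.  This is
only an illustration of the technique, not the original PDP-11 package.

    #define _GNU_SOURCE             /* for REG_RIP / REG_EFL */
    #include <signal.h>
    #include <stdio.h>
    #include <ucontext.h>

    static volatile long steps;

    static void trap_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;

        (void)sig; (void)si;
        /* Log the PC of the next instruction to execute.  (fprintf is
         * not async-signal-safe; it is kept here for brevity only.) */
        fprintf(stderr, "pc = %#llx\n",
                (unsigned long long)uc->uc_mcontext.gregs[REG_RIP]);

        if (++steps >= 20)          /* after a few steps, clear TF and stop */
            uc->uc_mcontext.gregs[REG_EFL] &= ~0x100L;
    }

    int main(void)
    {
        struct sigaction sa;

        sa.sa_sigaction = trap_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGTRAP, &sa, NULL);  /* catch our own trace traps */

        /* Set TF in our own flags; every instruction after this traps. */
        __asm__ volatile ("pushfq; orq $0x100, (%%rsp); popfq" ::: "cc");

        for (volatile int i = 0; i < 100; i++)
            ;                           /* something to trace */
        return 0;
    }
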
-- 
  John Bruner (S-1 Project, Lawrence Livermore National Laboratory)
  MILNET: jdb@mordor [jdb@s1-c.ARPA]	(415) 422-0758
  UUCP: ...!ucbvax!dual!mordor!jdb 	...!seismo!mordor!jdb

herbie@watdcsu.UUCP (Herb Chong - DCS) (08/13/85)

In article <456@ittvax> goldman@ittvax (Ken Goldman) writes:
>Maybe I can help with some PDP11 insight.  First, hardware single step
>is out of the question in a multi-user system.  The way the PDP11
>debuggers work is by replacing the instruction to be traced with a
>trap instruction.  Then the software runs full speed until it hits the
>trap, at which point you enter the debug software.  At that point you
>can examine registers, memory, etc.

Just a side note here.  When using the PER debugger on an IBM VM/SP
system, the hardware PER facility is enabled.  You can specify an
address range in which virtual storage modification, instruction fetch,
or branching causes an interrupt.  The real-machine operating system,
CP, handles this interrupt and gives users the appearance of single-step
execution.  Our 4341s have supported about 25 users simultaneously
running PER on their programs while the machine carried another 80 or so
users.

Response was noticeably degraded, since an interrupt was generated for
every instruction executed within the address ranges of each user
running PER, and the virtual machine must run in 370 EC mode, which
requires faking empty page and segment table entries for the virtual
DAT that goes on (see how messy it is to have virtual machines).  EC
mode is required because the control registers that specify the PER
information are only available in EC mode; BC mode provides only what a
360 does, a PSW.  This is of course debugging at the assembly-language
level only, though you can use PER on anything.  In other words,
hardware single step is practical provided your context switching
includes switching the hardware control information.
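
For what it's worth, the check that the hardware (or CP on its behalf)
makes per event is conceptually simple.  A hypothetical sketch, with
made-up field names standing in for what the real facility keeps in the
S/370 control registers:

    #include <stdint.h>
    #include <stdio.h>

    enum per_event { PER_FETCH = 1, PER_STORE = 2, PER_BRANCH = 4 };

    struct per_range {              /* stands in for the PER control state */
        uint32_t start, end;        /* inclusive address range */
        unsigned events;            /* which event kinds are enabled */
    };

    /* Nonzero means this fetch/store/branch must raise a PER interrupt,
     * which CP then presents to the user as a single step. */
    static int per_hit(const struct per_range *r, enum per_event ev,
                       uint32_t addr)
    {
        return (r->events & ev) && addr >= r->start && addr <= r->end;
    }

    int main(void)
    {
        struct per_range r = { 0x20000, 0x20fff, PER_FETCH | PER_BRANCH };

        printf("%d\n", per_hit(&r, PER_FETCH, 0x20010));    /* 1: interrupt */
        printf("%d\n", per_hit(&r, PER_STORE, 0x20010));    /* 0: disabled */
        return 0;
    }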

Herb Chong...

I'm user-friendly -- I don't byte, I nybble....

UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!water!watdcsu!herbie
CSNET: herbie%watdcsu@waterloo.csnet
ARPA:  herbie%watdcsu%waterloo.csnet@csnet-relay.arpa
NETNORTH, BITNET, EARN: herbie@watdcs, herbie@watdcsu

smithrd@rtp47.UUCP (Randy D. Smith) (08/15/85)

In article <410@uvm-cs.UUCP> hartley@uvm-cs.UUCP (Stephen J. Hartley) writes:
>I was wondering if somebody could explain how adb (and dbx) executes programs.
>Does it have a software simulator that interprets the instructions in the
>object file of the program being debugged?  Or does it use the VAX hardware
>to execute the program, say by hardware single-stepping?...

dbx uses software simulation.  I recall finding code to use the hardware
trace-bit method, but it was never called from anywhere.  Instead, dbx
would determine where the next location would be from the current
machine state and the next instruction to be executed.  I think some of
that code was taken from adb verbatim, by the way, so I expect adb
behaves similarly.  It sure made my day when I found the trace bit
wasn't being used (the machine I ported it to had no trace bit).
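
In other words, the single step is faked by decoding the instruction at
the PC and figuring out from the live machine state where control goes
next (presumably so a temporary trap can be planted there).  A
hypothetical sketch of that next-address computation for a made-up,
fixed-width instruction set; the real VAX decode in dbx is far hairier:

    /* Sketch: compute the successor PC for a made-up 4-byte-wide
     * instruction set, using the live machine state to resolve a
     * conditional branch.  All names and the encoding are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    enum op { OP_ADD, OP_BRANCH, OP_BRANCH_IF_ZERO, OP_JUMP_REG };

    struct insn {
        enum op  op;
        int      reg;           /* tested register, or jump-target register */
        int32_t  offset;        /* branch displacement in bytes */
    };

    struct machine {
        uint32_t pc;
        uint32_t regs[8];
    };

    /* Where will execution go after the instruction 'i' at m->pc? */
    static uint32_t next_pc(const struct machine *m, struct insn i)
    {
        switch (i.op) {
        case OP_BRANCH:                     /* unconditional branch */
            return m->pc + i.offset;
        case OP_BRANCH_IF_ZERO:             /* resolved from live registers */
            return m->regs[i.reg] == 0 ? m->pc + i.offset : m->pc + 4;
        case OP_JUMP_REG:                   /* indirect jump */
            return m->regs[i.reg];
        default:                            /* straight-line code */
            return m->pc + 4;
        }
    }

    int main(void)
    {
        struct machine m = { 0x1000, { 0 } };
        struct insn b = { OP_BRANCH_IF_ZERO, 3, 64 };

        /* regs[3] is 0, so the branch is taken: next stop is 0x1040. */
        printf("%#x\n", next_pc(&m, b));
        return 0;
    }
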
-- 
				Randy D. Smith	(919) 248-6136
			   Data General, Research Triangle Park, NC
			 <the known world>!mcnc!rti-sel!rtp47!smithrd