[comp.sys.amiga] losing MIDI bytes

tas@mtuxo.UUCP (T.SKROBALA) (09/07/87)

I have an application which uses serial.device to read bursts of ~400 MIDI
"system exclusive" bytes, and I've had problems with occasional lost
bytes, even with SERF_RAD_BOOGIE set.  So I'm looking into the
possibility of writing my own serial device driver.  But I *still* seem
to lose bytes, even with a darn-near trivial interrupt handler.  Right
now I have code that looks something like this:

	#include <stdio.h>

	long byteCount ;

	main()
	{
	    byteCount = 0 ;
	    initMIDI( &byteCount ) ;	/* install RBF handler; a1 -> byteCount */
	    getchar() ;	/* wait until RETURN is hit */
	    printf( "%ld\n", byteCount ) ;
	    cleanupMIDI() ;
	}

and a serial "read buffer full" interrupt handler that is:

	_RBFHandler:
	    move.w	SERDATR(a0),d1		; Copy input byte/word to d1.
	    move.w	#INTF_RBF,INTREQ(a0)	; Clear interrupt.
	    add.l	#1,(a1)			; Inc count pointed to by a1.
	    rts

I have the 1.2 operating system, 2 floppy drives, and 2 Mbytes of Fast RAM.
In my tests, I do not have any background processes or other devices running
unless otherwise noted.  I have set the SERPER register to both 113 and 114
for 31250 bits per second.
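(If I have the Hardware Manual formula right, the exact NTSC value is
SERPER = 3579545/31250 - 1 = 113.5 or so, which is why both neighbors
seemed worth trying.)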

The above code, with no other processes running, will generally not lose
bytes.  But if I add a single process (popCLI) in the background, or if
I add about 15 microseconds worth of data collection code to the
interrupt handler, I start seeing data loss of 1 or more bytes per 20K
(the 20K always in bursts of 400 bytes, the bursts at close to the full
MIDI bandwidth of 3125 bytes per second).  When I lose a byte, I usually
see that the OVRUN bit has been set in the SERDATR register.

Is there a bug in my simple handler?  Or is there some system overhead
that occasionally locks out serial interrupts for a significant
percentage of the 300+ microseconds between MIDI data bytes?  If
the latter, is there a workaround?

Tom Skrobala  AT&T  mtuxo!tas  201-957-5446

P.S. I've had similar problems with a timer interrupt handler.  I start
losing 1 percent and more of my interrupts when I get above 2000
interrupts per second or so.  The timer is programmed to fire
repeatedly.  That interrupt handler simply decrements a counter and
sends a signal to the application process if the counter has gone down
to 0.
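
That handler amounts to something like the sketch below (mainTask, sigMask,
and RELOAD are stand-in names, not my actual code; Signal() is one of the
few Exec calls that is legal from interrupt code):

	#include <exec/types.h>
	#include <exec/tasks.h>

	#define RELOAD 10		/* hypothetical: signal every 10th tick */

	extern struct Task *mainTask ;	/* found with FindTask(NULL) at init */
	extern ULONG sigMask ;		/* from AllocSignal() at init */
	long countdown = RELOAD ;

	void TimerHandler()
	{
	    if( --countdown <= 0 )
	    {
		countdown = RELOAD ;
		Signal( mainTask, sigMask ) ;	/* wake the application */
	    }
	}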

wtm@neoucom.UUCP (Bill Mayhew) (09/09/87)

In reference to the article mentioning occasionally losing MIDI
bytes:

I was wondering about the practicality of putting Forbid() and
Permit() on either side of the part of your code that reads the
system-exclusive data dump.  If it is only a burst of 400 or so
bytes, it doesn't seem that unreasonable to lock out competing
processes for that amount of time, provided that you aren't doing
the dump very frequently.

True, it would be a kludge, but would probably be a little simpler
than writing your own serial device driver.

Bill
(wtm@neoucom.UUCP)

dillon@CORY.BERKELEY.EDU (Matt Dillon) (09/11/87)

>I was wondering about the practicality of putting Forbid() and
>Permit() on either side of the part of your code that reads the
>system-exclusive data dump.  If it is only a burst of 400 or so

	Not practical.
	
	Hell, if this is a *problem*, may I suggest somebody
write a program which Disable()'s interrupts, reads the bytes into a
pre-allocated RAM buffer (timestamping them with one of the hardware clocks),
then Enable()'s when it is through.

	Sure, it will freeze the machine for a while... while you're playing
your keyboard, but hey... now you have ~4 uS accuracy (tight wait loop on
serial data; loop exit -> immediate read of the hardware timers.  The 4 uS is
due to the fact that the serial data is asynchronous and can latch in
the middle of an instruction).  It takes about 300 uS per byte, I believe.

	(The above isn't entirely practical, but not entirely off the wall
	either)
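
	In rough outline it might look like the following sketch.  ReadBurst,
buf, stamp, and BURSTMAX are made-up names, CIA-B timer B is assumed to be
already free-running, and a real version would want a timeout in the wait
loop:

	#include <exec/types.h>
	#include <hardware/custom.h>
	#include <hardware/cia.h>
	#include <hardware/intbits.h>

	extern struct Custom custom ;	/* custom chips at 0xDFF000 */
	extern struct CIA ciab ;	/* CIA-B, read for timestamps */

	#define BURSTMAX 512		/* hypothetical burst limit */

	UBYTE buf[BURSTMAX] ;		/* received MIDI bytes */
	UWORD stamp[BURSTMAX] ;		/* CIA timer B snapshots */

	long ReadBurst()
	{
	    long n = 0 ;

	    Disable() ;			/* all interrupts off */
	    while( n < BURSTMAX )
	    {
		while( !(custom.intreqr & INTF_RBF) )
		    ;			/* tight wait on Receive Buffer Full */
		buf[n] = custom.serdatr & 0xFF ;	/* low 8 bits are the data */
		stamp[n] = (ciab.ciatbhi << 8) | ciab.ciatblo ;	/* rough timestamp */
		custom.intreq = INTF_RBF ;	/* clear the request bit */
		n++ ;
	    }
	    Enable() ;
	    return( n ) ;
	}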

				-Matt

page@ulowell.cs.ulowell.edu (Bob Page) (09/11/87)

wtm@neoucom.UUCP (Bill Mayhew) wrote:
>I was wondering about the practicality of putting Forbid() and
>Permit() on either side of the part of your code that reads the
>system-exclusive data dump.
 ...
>True, it would be a kludge, but would probably be a little simpler
>than writing your own serial device driver.

It won't work.  As soon as you ask the serial device to do the
transfer, the Forbid() goes away, and is restored to you when the
serial device finishes the I/O.

You can't use Forbid()/Permit() around a Wait() or any I/O stuff, since
it would disable multitasking... the device handlers would never get a
chance to run.  The Amiga OS protects against this potential system
lockup by temporarily forcing you to give up your Forbid() request.
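
That is, the kludge boils down to something like this (serReq standing in
for your already-initialized serial IORequest):

	#include <exec/types.h>
	#include <devices/serial.h>

	extern struct IOExtSer *serReq ;	/* hypothetical, already opened */

	void ReadDump()			/* hypothetical wrapper */
	{
	    Forbid() ;
	    /* DoIO() ends up calling Wait(), and Wait() inside a Forbid()
	       temporarily breaks the Forbid, so other tasks run anyway */
	    DoIO( (struct IORequest *)serReq ) ;
	    Permit() ;
	}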

Yes, this explanation is overly simplistic, but you get the point.

..Bob
-- 
Bob Page, U of Lowell CS Dept.   page@ulowell.{uucp,edu,csnet} 

cmcmanis%pepper@Sun.COM (Chuck McManis) (09/11/87)

In article <8709110105.AA27649@cory.Berkeley.EDU> (Matt Dillon) writes:
>	Sure, it will freeze the machine for a while... while you're playing
>your keyboard, but hey... now you have ~4 uS accuracy (tight wait loop on
>serial data; loop exit -> immediate read of the hardware timers.  The 4 uS is
>due to the fact that the serial data is asynchronous and can latch in
>the middle of an instruction).  It takes about 300 uS per byte, I believe.

Actually, the above is entirely practical.  You can use the hardware timer
associated with the serial device to timestamp your 'events' down to
microsecond resolution; however, the darn bytes take 320 uS to arrive,
and a full 'NOTE-ON' - 'NOTE-OFF' sequence is something like 10 bytes
anyway, so now you're down to about 3 milliseconds.  I think it may be a
question of thinking you're not fast enough when you really are.  Anyway,
I'm working on a Midi.device that will do all of the above and be
'easy' to use.  I'll post it when it works enough to be usable.

A note to the previous poster about problems with dropping bytes: the
thing to do is, after you read a byte from the serial port, check the
port again, because another byte may have arrived already.  In my
'critical' applications I wait in the interrupt loop for 1/2 a character
time before I return from the interrupt.  That way streams of characters,
like system-exclusive dumps, will be caught correctly.
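
In rough C, the tail of such a handler comes out something like the
following (midibuf, head, and SPIN_HALF_CHAR are made-up names, and
SPIN_HALF_CHAR is whatever loop count burns about half of the 320 uS
character time on your machine):

	#include <exec/types.h>
	#include <hardware/custom.h>
	#include <hardware/intbits.h>

	extern struct Custom custom ;

	#define SPIN_HALF_CHAR 50	/* tune this; ~160 uS worth of looping */

	UBYTE midibuf[1024] ;		/* simple buffer, no wraparound shown */
	int head ;

	void RBFHandler()
	{
	    int spin ;
	again:
	    midibuf[head++] = custom.serdatr & 0xFF ;	/* grab the byte */
	    custom.intreq = INTF_RBF ;			/* clear the request */
	    for( spin = 0 ; spin < SPIN_HALF_CHAR ; spin++ )
		if( custom.intreqr & INTF_RBF )
		    goto again ;	/* another byte is already waiting */
	}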


--Chuck McManis
uucp: {anywhere}!sun!cmcmanis   BIX: cmcmanis  ARPAnet: cmcmanis@sun.com
These opinions are my own and no one elses, but you knew that didn't you.

tas@mtuxo.UUCP (T.SKROBALA) (09/21/87)

Thanks to all who responded to my question about losing MIDI system-exclusive
bytes.  I didn't have any luck with Forbid()/Permit(), and I wasn't keen
on busy waiting with all interrupts locked out, and I was indeed using
SetIntVector(), so I tried Chuck McManis' suggestion, and it worked
beautifully.  Now my Read Buffer Full handler has assembly code at the
end that does something like the following:

    for( i = 0 ; i < 50 ; i++ )
    {
	if( custom.intreqr & INTF_RBF )	/* INTREQR is the readable mirror */
	    goto RBFHandler ; /* do it all again without leaving the handler */
    }
    return ;

Just checking the bit once isn't good enough: apparently you can still
get screwed from time to time if the bit gets set right after you leave
the handler.  The loop does cause a few (~300) uSeconds of wasted time
if there is indeed nothing coming down the pike, but the gain in integrity
is worth it.  (I may be able to trim a few iterations off the loop and
still maintain integrity, but I know that there's a point where I start
losing again.)

By the way, when I was struggling with the problem, I saw loss of bytes
even when the MIDI data was only coming in bursts of 6 bytes, so I
imagine that the problem could arise even with just ordinary NOTE_ON/
NOTE_OFF types of stuff.  Anyway, Chuck's solution works, even with vd0:
running (not that I ever suspected that, but some people did), and I
suspect that serial.device will never work reliably unless it, too, has
the fix put in.

Now, can someone tell me just *what* was running and causing me to
lose those bytes before?  There isn't much that runs at priority
higher than the serial read interrupt, and I only lost about 1 byte
in 15000, i.e. about 1 every 5 seconds.  What high priority CPU-hog
thing happens on a more or less idle Amiga once every 5 seconds?
Am I guaranteed that Chuck's solution will *always* work?

Tom Skrobala  AT&T  mtuxo!tas

higgin@cbmvax.UUCP (Paul Higginbottom SALES) (09/22/87)

in article <755@mtuxo.UUCP>, tas@mtuxo.UUCP (T.SKROBALA) says:
> What high priority CPU-hog
> thing happens on a more or less idle Amiga once every 5 seconds?

Maybe the disk drive clicking without a disk in it?

	Paul.

jesup@mizar.steinmetz (Randell Jesup) (09/22/87)

>in article <755@mtuxo.UUCP>, tas@mtuxo.UUCP (T.SKROBALA) says:
>> What high priority CPU-hog
>> thing happens on a more or less idle Amiga once every 5 seconds?

	Maybe VD0: is doing something off a timer interrupt (or VBLANK,
though I doubt that).  I'm fairly certain vd0: does some things off of
a timer (CleanRamDisk doesn't work if you type it while the delete is
running, but works if you wait a few seconds after the delete is done).
Vd0: does have a tendency to lock things out while, for example, verifying
that a mount survived.

[Note: these net addresses are going away ASAP.  Do not mail to them.]


	Randell Jesup  (Please use one of these paths for mail)
	sungod!jesup@steinmetz.UUCP (uunet!steinmetz!{sungod|crd}!jesup)
	jesup@ge-crd.ARPA

perry@well.UUCP (09/25/87)

Contrary to what others have said, there is no timer-based processing
going on in the ASDG recoverable ram disk driver.  The action of cleaning
out the ram disk and compacting is driven in two phases:

Every 128 Writes: cleaning and compaction
Every 256 Accesses (either write or read): compaction

Cleaning is defined as releasing tracks which are no longer in use.
Compaction is defined as checking the spread of the ram disk around your
memory; this may result in the RRD moving things around to make more chip
RAM available or to reduce fragmentation.
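
In pseudo-C, the trigger scheme described above amounts to the following
(writeCount, accessCount, Clean(), and Compact() are stand-in names, not
ASDG's actual internals):

	long writeCount, accessCount ;

	void OnAccess( isWrite )
	int isWrite ;
	{
	    if( isWrite && (++writeCount % 128) == 0 )
	    {
		Clean() ;	/* release tracks no longer in use */
		Compact() ;
	    }
	    if( (++accessCount % 256) == 0 )
		Compact() ;	/* reduce fragmentation / free chip RAM */
	}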

(These two features are often overlooked, but because of them the ASDG
RRD, while the first RRD ever made for the Amiga, is still in some ways
the best :-)

Perry S. Kivolowitz
ASDG Incorporated