pratt@mira.cs.nps.navy.mil (david pratt) (09/10/90)
Good Morning to all!

I hope you had a better weekend than I did.  My code (4 threads on an
Iris 4D/120 GTX running 3.3) gives me the following errors:

Bus Error
MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
MP Bus Error Register 0 (Read Address): 0x17300004
MP Bus Error Register 1: 0xA4FEB0
<Write Back Address 0xA4XXXX WRITE PIO Requestor:CPU id=0>

Bus Error
MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
MP Bus Error Register 0 (Read Address): 0x17302608
MP Bus Error Register 1: 0x1FBFEB0
<Write Back Address 0x1FBXXXX WRITE PIO Requestor:CPU id=0>

Bus Error
MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
MP Bus Error Register 0 (Read Address): 0x17300004
MP Bus Error Register 1: 0x78FEB0
<Write Back Address 0x78XXXX WRITE PIO Requestor:CPU id=0>

If it is a case of RTFM, please tell me what FM.  I have tried to track
down the errors to no avail.  Any help would be gratefully accepted.
--
Dave Pratt          pratt@cs.nps.navy.mil          (408) 646-2865
If the meek shall inherit the earth, I'm SOL!
These are my opinions; who knows what the Navy thinks.
jwag@moose.asd.sgi.com (Chris Wagner) (09/10/90)
In article <1378@mira.cs.nps.navy.mil>, pratt@mira.cs.nps.navy.mil (david pratt) writes:
> Good Morning to all!
>
> I hope you had a better weekend than I did.  My code (4 threads on an
> Iris 4D/120 GTX running 3.3) gives me the following errors:
>
> Bus Error
> MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
> MP Bus Error Register 0 (Read Address): 0x17300004
> MP Bus Error Register 1: 0xA4FEB0
> <Write Back Address 0xA4XXXX WRITE PIO Requestor:CPU id=0>
> Bus Error
> MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
> MP Bus Error Register 0 (Read Address): 0x17302608
> MP Bus Error Register 1: 0x1FBFEB0
> <Write Back Address 0x1FBXXXX WRITE PIO Requestor:CPU id=0>
> Bus Error
> MP Status Register: 0x580:id 0 FP MP Bus Error:IOEACK
> MP Bus Error Register 0 (Read Address): 0x17300004
> MP Bus Error Register 1: 0x78FEB0
> <Write Back Address 0x78XXXX WRITE PIO Requestor:CPU id=0>

No, there is no TFM here; this is, alas, a bug in 3.3 to do with large
shared-address processes.  The exact conditions under which it occurs are
difficult to explain, but a fix is in an upcoming maintenance release;
please contact your local rep for details.

Basically, if a large process (more than 7 distinct address spaces of
less than 2 MB each) shrinks its space, either by sbrk(-xx) or by
detaching, say, a mapped file, this problem can result.  Here each part
of your program (text, data, stack, graphics pipe, new_server shared
memory, mapped files, etc.) counts as a 'distinct space'.  So for a
4-thread process linked with shared gl and shared libc, your spaces will
be: text, data, stack1, stack2, stack3, stack4, gltext, gldata, libctext,
libcdata, grpipe, gr shdmem (clearly more than 7).  As long as you don't
shrink any of them, I believe you will not run into any problems.

Note: there is NOTHING wrong with your hardware; it is completely a
software-generated problem.

Chris Wagner
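
For anyone who wants to see the shape of the trigger Chris describes, here is
a minimal sketch: a share-group process that already holds many distinct
regions (text, data, one stack per sproc, shared gl, shared libc) and then
shrinks its address space.  The code is illustrative only and not from either
post; it assumes the usual IRIX sproc() and sbrk() interfaces, and the sizes
and loop counts are made up.  The workaround implied above is simply to skip
the shrink and let the space be reclaimed at exit.

/*
 * Illustrative sketch only.  Builds a 4-thread share group, grows the
 * data segment, then shrinks it again; the shrink (or detaching a
 * mapped file) is the step said to trigger the 3.3 bus errors.
 */
#include <sys/types.h>
#include <sys/prctl.h>      /* sproc(), PR_SADDR */
#include <unistd.h>         /* sbrk(), sleep() */
#include <stdio.h>

static void
worker(void *arg)
{
    /* Each sproc child gets its own stack: one more distinct region. */
    sleep(10);
}

int
main(void)
{
    int i;

    /* Three sproc children plus the parent give the 4 threads above. */
    for (i = 0; i < 3; i++)
        if (sproc(worker, PR_SADDR) < 0)
            perror("sproc");

    /* Grow the data segment by an arbitrary amount ... */
    if (sbrk(512 * 1024) == (void *)-1)
        perror("sbrk grow");

    /*
     * ... and here is the risky part under 3.3: shrinking it again.
     * Detaching a mapped file would hit the same path.  The safe
     * pattern is to leave the space alone until the process exits.
     */
    if (sbrk(-(512 * 1024)) == (void *)-1)
        perror("sbrk shrink");

    sleep(20);
    return 0;
}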