[comp.unix.i386] _UNIX_Today!_

steve@simon.UUCP (Steven E. Piette) (09/06/89)

In article <9468@chinet.chi.il.us>, randy@chinet.chi.il.us (Randy Suess) writes:
> In article <966@utoday.UUCP> greenber@utoday.UUCP (Ross M. Greenberg) writes:
> >This is me in an official capacity for UNIX Today!.
> >Based on what I'm reading here, I guess we blew the benchmarks on the MPE.  
> >I blew it.  I screwed up.  I screwed up in a major way.
> >Ross M. Greenberg
> >UNIX TODAY! Review Editor
> 
> 	I like this guy!
> 	Now, if I could only get a subscription....
> Randy Suess  randy@chinet.chi.il.us

It's nice to find someone in this industry who takes full responsibility
for their actions. We all make mistakes from time to time, but too many
hide from the heat when that time comes.

Thanks Ross for being honest with us.

Now if I only could figure out what happened to the subscription I had...
(only two copies ever showed up here from the beginning of Unix Today!)





-- 
Steven E. Piette                              Applied Computer Technology Inc.
UUCP: {smarthost}!simon!steve                             1750 Riverwood Drive
INET: steve@simon.CHI.IL.US or spiette@SUN.COM             Algonquin, IL 60102
-------------------------------------------------------------------------------

greenber@utoday.UUCP (Ross M. Greenberg) (09/07/89)

In article <357@simon.UUCP> steve@simon.UUCP (Steven E. Piette) writes:
> We all make mistakes from time to time, but too many
>hide from the heat when that time comes.
>
>Thanks Ross for being honest with us.
>

I figure it's easier to be honest than to try to cover up a mistake.

I must admit that the problems the MPE review caused have already changed
things here.  The review coming out in the next issue is on the ATT 6386E
unit.  The benchmarks we *were* going to run were
Whetstone/Dhrystone/IOstone on it, with one to five processes running.

Without a comparison, they'd be almost useless, right?  So, we set out to
do a comparative benchmark.  The only machine we had available was
a Compaq/386/16MHz running XENIX.  It would have been obvious to anyone
reading the review and seeing the benchmarks that we were
comparing apples and oranges (the 6386E is a 20Megger), but due to the
postings in *this* group, I've opted not to include the benchmarks as
stated.  Instead, when we can more reliably cross-check the hardware and
software (6386E & SVR3.2 v 6386E & XENIX v COMPAQ 386/20 & SVR3.2 v
COMPAQ 386/20 & XENIX), I'll include the benchmark data.

But, that brings up some interesting points.

As I've indicated, we're going to be doing some massive work in the
benchmarking area shortly.  Part of my responsibility is to figure out
what those benchmarks really need, and how to compare apples and oranges.
In the DOS world, it's easy.  In the UNIX world, not so easy.  So, I
figure I'll ask the same guys that gave me such a hard time (thanks for
volunteering! :-) ):  what do *you* think is important in a UNIX
benchmark?  How do I compare Machine A (with hardware and software
configuration of 'A') to Machine B with its H/W configuration of 'B'?

Do Whetstone/Dhrystone/IOStone cover enough?  Application testing
is important, obviously, but should it be *real* applications (limiting
the number of machines we can test) or pseudo-apps (take a 100,000 x 1024
byte file and sort it a coupla hundred times while adding up a given
field)?  Is it important to test with one or more processes doing the
same thing?  How about with one or more *users* doing the same thing?
Should the results be output in tabular format, graphical format, both?
Should we list all tested machines in each issue (doubtful:  each editorial
page is precious)?  Should a given machine's configured price be used
to figure out which other machines to compare its benchmark to?  And,
which price? The list price or the quantity 100 price?
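
To make the pseudo-app idea concrete, here is a minimal sketch of what I
mean.  The record count is scaled down and the field offset is arbitrary,
so treat it as a strawman, not anything we've settled on:

    /*
     * Pseudo-app benchmark sketch: sort a buffer of fixed-size records
     * repeatedly while adding up one field.  NREC is scaled down from
     * the 100,000 x 1024 figure above; all constants are placeholders.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define NREC   10000L        /* records in the "file" */
    #define RECSZ  1024          /* bytes per record */
    #define NPASS  10            /* "a coupla hundred times", scaled down */
    #define FIELD  8             /* arbitrary offset of the summed field */

    static int cmprec(const void *a, const void *b)
    {
        return memcmp(a, b, 16);         /* order by the leading bytes */
    }

    int main(void)
    {
        char *buf = malloc(NREC * RECSZ);
        long pass, i, sum = 0;
        clock_t t0, t1;

        if (buf == NULL) { perror("malloc"); return 1; }

        srand(1);                        /* fill with repeatable junk */
        for (i = 0; i < NREC * RECSZ; i++)
            buf[i] = rand() & 0xff;

        t0 = clock();
        for (pass = 0; pass < NPASS; pass++) {
            /* after the first pass the data is already sorted, so later
               passes time the best case; a real suite would re-shuffle */
            qsort(buf, NREC, RECSZ, cmprec);
            for (i = 0; i < NREC; i++)   /* "adding up a given field" */
                sum += (unsigned char)buf[i * RECSZ + FIELD];
        }
        t1 = clock();

        printf("%d passes: %.2f sec (field sum %ld)\n",
               NPASS, (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
        free(buf);
        return 0;
    }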

Setting up a benchmarking suite is going to be an ongoing process for
us.  I'd like to ask your help in figuring out what to do.
No promises, though: if we can't afford it in money or in time, we
can't implement it.  But, I *know* that I'm not a benchmarking expert! :-)

I'd be happy to collect mailgrams with people's ideas, but an open discussion
might prove a bit more interesting to all.

One thing:  flames regarding our *current* testing (or even compliments)
should be sent to me at the below address.  Until we get a biz.group set
up, let me not use the net to tell people what a great magazine we are? :-)

Anyway...just to lay one thing to rest.  Space permitting, we'll
be printing the MPE benchmarks versus a *current* version of Interactive
as soon as possible.  And the cartoon on our /etc page will probably show me
wiping a bunch of egg off my face.





>Now if I only could figure out what happened to the subscription I had...
>(only two copies ever showed up here from the beginning of Unix Today!)
>

Sigh.  Considering that our circ department is just now getting on-line, 
bear with us.  We're getting there.  Chances are you lost out when we
requalified.  Send mail to the below address and qualify again?

Ross

-- 
Ross M. Greenberg, Review Editor, UNIX Today!   greenber@utoday.UUCP
             594 Third Avenue, New York, New York, 10016
 Voice:(212)-889-6431 BIX: greenber  MCI: greenber   CIS: 72461,3212
  To subscribe, send mail to circ@utoday.UUCP with "Subject: Request"

pfrennin@altos86.Altos.COM (Peter Frenning) (09/07/89)

In article <971@utoday.UUCP> greenber@utoday.UUCP (Ross M. Greenberg) writes:
>
>In article <357@simon.UUCP> steve@simon.UUCP (Steven E. Piette) writes:
>> We all make mistakes from time to time, but too many
>>hide from the heat when that time comes.
>>
>>Thanks Ross for being honest with us.
>>
>Do Whetstone/Dhrystone/IOStone cover enough?  Application testing
>is important, obviously, but should it be *real* applications (limiting
>us.  I'd like to ask your help in figuring out what to do.
>No promises, though: if we can't afford it in money or in time, we
>can't implement it.  But, I *know* that I'm not a benchmarking expert! :-)
>
>I'd be happy to collect mailgrams with people's ideas, but an open discussion
>might prove a bit more interesting to all.
>
>Ross
--- lotsa stuff deleted ----

Since "real users" are doing boring stuff like spreadsheets, databases, word-
processing and similar things, I think what you need are benchmarks that more
closely reflects such activities in a multiuser environment. Two benchmarks 
comes to my mind: Neal Nelson's UNIX multiuser suite (commercial product, but
available at a reasonable price) and TP1(public domain, and what all the biggies
use for their benchmarks, operating system independant or reasonably so). 
I think that a combination of those come as close as possible to what we would
really like to see.


+-----------------------------------------------------------------------+
|      Peter Frenning, Altos Computer Systems, San Jose                 |
+--------------------+--------------------------------------------------+
| Who? Me?           | USENET:         pfrennin@altos.COM               |
| No way!            | 	       {sun,amdahl,elxsi}!altos86!pfrennin      |
| I wouldn't even    | VOICE:          (408) 496-6700                   |
| think of doing such| SNAILMAIL:      2641 Orchard Parkway             |
| thing. It must have|                 San Jose, CA 95134               |
| been somebody else!|                                                  |
|                    | FAX:            (408) 433-9335                   |
|                PF  |                                                  |
+--------------------+--------------------------------------------------+

danno@onm3b2.UUCP (dan notov) (09/07/89)

First, <9468@chinet.chi.il.us>, randy@chinet.chi.il.us (Randy Suess) writes:
>> 	I like this guy!
>> 	Now, if I could only get a subscription....
>
Then, <357@simon.UUCP> steve@simon.UUCP (Steven E. Piette) writes:
>Now if I only could figure out what happened to the subscription I had...
>(only two copies ever showed up here from the beginning of Unix Today!)

Finally, I write:

Same thing happened to my subscription.  I even sent in an application.  I
guess Fortune 1000 Types need not apply.  I guess I can go upstairs to the
account team & get their "comp" subscription.

danno
-- 
Daniel Notov			    uunet!onm3b2!danno
Ogilvy & Mather		      
New York, NY                 

greenber@utoday.UUCP (Ross M. Greenberg) (09/08/89)

In article <332@onm3b2.UUCP> danno@onm3b2.UUCP (dan notov) writes:
>
>Finally, I write:
>
>Same thing happened to my subscription.  I even sent in an application.  I
>guess Fortune 1000 Types need not apply.  I guess I can go upstairs to the
>account team & get their "comp" subscription.
>

Trying to be a good net citizen, could I request that all subscription problems
be sent to us (at the below address, with Subject: Problem) rather than
being posted?  If you cc: a copy to me, I'll make sure to follow up on
them...

Ross



-- 
Ross M. Greenberg, Review Editor, UNIX Today!   greenber@utoday.UUCP
             594 Third Avenue, New York, New York, 10016
 Voice:(212)-889-6431 BIX: greenber  MCI: greenber   CIS: 72461,3212
  To subscribe, send mail to circ@utoday.UUCP with "Subject: Request"

palowoda@fiver.UUCP (Bob Palowoda) (09/08/89)

From article <3526@altos86.Altos.COM>, by pfrennin@altos86.Altos.COM (Peter Frenning):
> In article <971@utoday.UUCP> greenber@utoday.UUCP (Ross M. Greenberg) writes:
>>
>>In article <357@simon.UUCP> steve@simon.UUCP (Steven E. Piette) writes:
> 
> Since "real users" are doing boring stuff like spreadsheets, databases, word-
> processing and similar things, I think what you need are benchmarks that more
> closely reflects such activities in a multiuser environment. Two benchmarks 

  This is a good idea.  What would be interesting is what happens to 
the benchmark numbers when, let's say, a user recalculates a spreadsheet,
rebuilds an index to a database, or better yet compiles a large program
in C.  This would at least give a vague idea of how the kernel and "one" 
application program affect system performance.

> come to mind: Neal Nelson's UNIX multiuser suite (a commercial product, but
> available at a reasonable price) and TP1 (public domain, and what all the
> biggies use for their benchmarks; operating system independent, or reasonably so).
> I think a combination of those comes as close as possible to what we would
> really like to see.
   
  Also, if you go this route, include an explanation of why the benchmarks
are performed this way: e.g., that a test is trying to simulate the tree
building of the first pass of a compiler, or how such and such company's
database disk I/O routines behave, etc.

  Also, if you can, include a list of the tunable parameters for the
kernel of each system you benchmark. 


---Bob


-- 
Bob Palowoda                                *Home of Fiver BBS*  login: bbs
Home {sun,dasiy}!ys2!fiver!palowoda         (415)-623-8809 1200/2400
Work {sun,pyramid,decwrl}!megatest!palowoda (415)-623-8806 1200/2400/9600/19200
Voice: (415)-623-7495                        Public access UNIX system 

david@psitech.UUCP (david Fridley) (09/08/89)

> volunteering! :-) ):  what do *you* think is important in a UNIX
> benchmark?  How do I compare Machine A (with hardware and software
> configuration of 'A') to Machine B with its H/W configuration of 'B'?
> 
I have a simple, easy to run, easy to understand benchmark that wants to be
run on UNIX machines.  For now let's call it the wait-for-echo test.  Set up
a task connected to one of the serial ports that will send out a character,
wait for it to be echoed, and then send out the next character of a 100K file.
Make the baud rate for the port something high, like 38400, but it needs
to be a rate that can be set up the same on all machines tested, so maybe
9600.  The serial port connector would connect the TX line to the RX line
(pin 2 to pin 3 on a 25-pin connector) and do whatever on the control lines.
The result of this test is a time to transfer the file, which can be
divided by the file size to yield the effective baud rate.  Another test would
be to run the wait-for-echo test on each of several serial ports at the same
time and report the average effective baud rate as a function of the number
of processes.
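
A minimal sketch of that sending task, assuming a POSIX-style termios
interface and an invented device name (neither is from david's posting),
might look like this:

    /*
     * Wait-for-echo benchmark sketch: send a byte, wait for the loopback
     * echo, repeat, then report the effective baud rate.  The device
     * name, file size, and speed are illustrative placeholders.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>
    #include <time.h>

    #define DEVICE   "/dev/tty00"  /* hypothetical port wired pin 2 to pin 3 */
    #define FILESIZE 102400L       /* the "100K file" */

    int main(void)
    {
        struct termios tio;
        unsigned char out, in;
        long i;
        time_t t0, t1;
        int fd = open(DEVICE, O_RDWR | O_NOCTTY);

        if (fd < 0) { perror(DEVICE); return 1; }

        /* raw 8-bit mode at 9600 baud; read() blocks for exactly 1 byte */
        tcgetattr(fd, &tio);
        tio.c_iflag = 0;
        tio.c_oflag = 0;
        tio.c_lflag = 0;
        tio.c_cflag = CS8 | CREAD | CLOCAL;
        tio.c_cc[VMIN]  = 1;
        tio.c_cc[VTIME] = 0;
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        time(&t0);
        for (i = 0; i < FILESIZE; i++) {
            out = (unsigned char)(i & 0xff);
            if (write(fd, &out, 1) != 1) { perror("write"); return 1; }
            if (read(fd, &in, 1) != 1)   { perror("read");  return 1; }
        }
        time(&t1);

        /* ~10 bit times per byte at 8N1 gives the effective baud rate */
        printf("%ld bytes in %ld sec = %.0f effective baud\n",
               FILESIZE, (long)(t1 - t0),
               t1 > t0 ? 10.0 * FILESIZE / (double)(t1 - t0) : 0.0);
        close(fd);
        return 0;
    }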

What does this show?  Well, first off, it shows that UNIX is not a real-time
operating system, because the results for this test are far lower than the real
baud rate.  It also shows things like the latency of the task-switching
facility and the raw speed of the machine.  The real feature of this test
is the reality of it: I know the difference between 9600 baud and 2400 baud.
Dhrystones and Whetstones are useful benchmarks, but most people do not
have a good idea how they translate into things that they work with.  Other
benchmarks could be how long it takes to compile the X Windows distribution,
or to back up your 320 megabyte hard disk onto diskettes.

-- 
david.
DISCLAIMER: If it's important have a backup.  If it ain't broke don't fix it.
Proceed at your own risk.  My oponions are MY own.  Spelling does not count.
My fondest dream is to leave this planet.

jr@frog.UUCP (John Richardson) (09/09/89)

In article <971@utoday.UUCP>, greenber@utoday.UUCP (Ross M. Greenberg) writes:
> 
>  Text about a _UNIX_TODAY!_ magazine benchmark, and a request for some
> input/discussion on benchmarks to use in the future.
>
>

   Well, for the past year, I have been dealing with measuring I/O performance
of various systems, disk drives, controllers, etc. in the UNIX/386 world.
This is for a product that is targeted at the OLTP database market, where 
I/O performance is critical.  I have written a short program that tries to
capture a simple model of the access pattern of a database running the infamous
'TP1' benchmark.  I have found that it is mostly random reads, 2K bytes
in size, across the disk.  Since multiple users are accessing the disk,
the benchmark also needs to run multiple processes to check the overall
throughput (or degradation in some cases).

  This benchmark takes arguments for the size of the drive being accessed,
the size of the read, and the number of concurrent processes accessing the
drive (a sketch of such a tester appears after the list below).  This has
led to some interesting observations:

1: In the DOS world, a lot of drives have good advertised access times based
   on accessing only 32 MB of the drive.  (i.e., an 80MB drive rated at 35ms
   has an access time of 21ms over only 32MB of the drive.)
   The benchmark can be run on the whole drive to wring out this trickery.

2: Caching drives or controllers can slow down performance if the firmware
   writers do not do it right.  This is because the cache search time
   takes away from the time before the hardware can be started, and this
   benchmark (and the application it's modeling) hits the drive cache
   VERY infrequently (the database has its own 2-4MB cache, so it WON'T ask
   for a block within 2MB of I/O of a previous request).
   Drives that do not have a performance penalty in this case start the seek
   on the hardware while searching the cache.  This makes up almost 4 ms of
   the time difference.

3: Some operating systems and/or device drivers have their overall through-
   put GO DOWN as users are added.  This means that if 1 process is run,
   and a given drive/controller gives 38 I/O's per second, running a second
   process knocks this down to a total throughput of 34 I/O's per second
   (17 per process).
   Adding processes makes it even lower.
   My theory is some contended-for shared resource, like a buffer used
   for DMA that crosses a page boundary on controllers that do not support
   byte-level scatter/gather.
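
As promised above, here is a guess at the shape of such a tester.  JR's
actual program isn't posted, so the argument handling and constants here
are illustrative only:

    /*
     * Random-read benchmark sketch: N processes each issue random 2K
     * reads across a device and report I/O's per second.  (Some raw
     * drivers also want the buffer block-aligned; ignored here.)
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <time.h>

    #define READSZ  2048        /* "mostly random reads of 2K bytes" */
    #define SECONDS 30          /* how long each process hammers the drive */

    int main(int argc, char **argv)
    {
        long drivesz;
        int nproc, p;

        if (argc != 4) {
            fprintf(stderr, "usage: %s device drive-size-bytes nprocs\n",
                    argv[0]);
            return 1;
        }
        drivesz = atol(argv[2]);
        nproc = atoi(argv[3]);

        for (p = 0; p < nproc; p++) {
            if (fork() == 0) {
                char buf[READSZ];
                long nblocks = drivesz / READSZ, ios = 0;
                time_t stop = time(NULL) + SECONDS;
                int fd = open(argv[1], O_RDONLY);

                if (fd < 0) { perror(argv[1]); _exit(1); }
                srand((unsigned)getpid());  /* distinct pattern per process */
                while (time(NULL) < stop) {
                    long blk = rand() % nblocks;     /* random 2K block */
                    lseek(fd, blk * (long)READSZ, SEEK_SET);
                    if (read(fd, buf, READSZ) != READSZ) {
                        perror("read");
                        break;
                    }
                    ios++;
                }
                printf("pid %d: %.1f I/O's per second\n",
                       (int)getpid(), (double)ios / SECONDS);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)      /* reap all the children */
            ;
        return 0;
    }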

  This is some information that I have found useful in evaluating systems,
controllers, and drives for OLTP applications.


					JR


John Richardson
Principal Engineer
Charles River Data Systems
983 Concord St. Framingham Mass 01701

karl@ficc.uu.net (Karl Lehenbauer) (09/13/89)

In article <127@psitech.UUCP>, david@psitech.UUCP (david Fridley) writes:
> ...For now let's call it the wait-for-echo test.

> What does this show?  Well, first off, it shows that UNIX is not a real-time
> operating system, because the results for this test are far lower than the real
> baud rate.  

I don't think that shows that Unix is not a realtime OS, although of course
Unix (System V) isn't one until V/4.0.

>It also shows things like the latency of the task-switching
> facility and the raw speed of the machine.  

Correct.  And it shows the overhead and latency of getting characters through
the interrupt handler and into a task.

>The real feature of this test
> is the reality of it: I know the difference between 9600 baud and 2400 baud.

A couple "unreal" things about it:  One, input in your test is guaranteed not
to overrun because you don't output a character to the port until you read
the previous one.  In "real life," reading from a Trailblazer or null-modem
to another computer at 9600 or 19200 bps, Unix systems all too often have
trouble reading that stuff without getting overruns.  Perhaps a test where
a second computer generates the data so the computer under test can have
underruns could be used.

Second, your benchmark (from the way you have described it) passes the 
characters around one at a time.  The overhead for processing characters
on a per-character basis is much higher than for making requests for a
number of characters with a defined limit and timeout.  These options have
been available in System V for a long time (Xenix, too).  They are set up
via the VMIN and VTIME fields (and some bits) of the termio structure
you diddle with ioctl.  The performance improvements come from fewer task
switches and maybe from the system being able to pass many characters into
your address space at once.  Hopefully uucico uses this.  Chuck Forsberg's
x-, y-, and zmodem programs certainly do.
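
For reference, a minimal sketch of that setup, using the System V termio
interface described above (the counts are illustrative and error checking
is elided):

    /* Counted-read setup sketch: leave canonical mode and let read()
     * return up to a bufferful per call, bounded by a count and timeout.
     */
    #include <termio.h>         /* System V / Xenix termio interface */
    #include <sys/ioctl.h>

    void set_counted_reads(int fd)
    {
        struct termio tio;

        ioctl(fd, TCGETA, &tio);
        tio.c_lflag &= ~ICANON;   /* the "some bits": leave canonical mode */
        tio.c_cc[VMIN]  = 64;     /* read() returns once 64 chars arrive... */
        tio.c_cc[VTIME] = 1;      /* ...or 0.1s passes after a char arrives */
        ioctl(fd, TCSETA, &tio);
    }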

> Other benchmarks could be how long it takes to ... back up your 320 megabyte 
> hard disk onto diskettes.

You must be a masochist.
-- 
-- uunet!ficc!karl	"Have you debugged your wolf today?"