[comp.simulation] SIMULATION DIGEST V9 N3

simulation@uflorida.cis.ufl.edu (Moderator: Paul Fishwick) (06/08/89)

Volume: 9, Issue: 3, Wed Jun  7 16:59:56 EDT 1989

+----------------+
| TODAY'S TOPICS |
+----------------+

(1) Re: Event Cancellation
(2) Point to Point Network Simulator
(3) TimeWarp

* Moderator: Paul Fishwick, Univ. of Florida
* Send topical mail to: simulation@uflorida.cis.ufl.edu
* Archives available via FTP to bikini.cis.ufl.edu, login as
  'anonymous', use your last name as the password, change
  directory to pub/simdigest.
* Simulation Tools available by doing above and changing the
  directory to pub/simdigest/tools.



-----------------------------------------------------------------------------

Return-Path: <popeye!srcnance@vtodie.cs.vt.edu>
Date: Tue, 6 Jun 89 08:53:54 edt
From: popeye!srcnance@vtodie.cs.vt.edu
To: vtodie!simulation@bikini.cis.ufl.edu
Subject: Event Cancellation as a Primitive
Cc: cmo@xanth.cs.odu.edu@vtodie.cs.vt.edu


This is in reply to Narain's inquiry about the primacy of event
cancellation.

My position is that event cancellation is not a true primitive
but IS a useful artifact of the event scheduling world view.
In our paper in CACM, Feb. 1985 ("A Specification Language to
Assist in Analysis of Discrete Event Simulation Models"), Mike
Overstreet and I included a cancel alarm, but in my opinion that
was done to avoid either the extra verbiage of explaining why it
was not needed or an argument with a referee.  Since it was not
fundamental to the subject of the paper, we took the easy route.

In DES, we can stipulate that the simulation executive (time
flow mechanism) has all the data needed "to reason" about what
the next imminent event will be.  The crux of the decision
problem, and the "cost" of advancing the model to that event,
rests with (1) the definition of "imminent event" and (2) the
amount of computation required to determine it.
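
For concreteness, here is a toy Python sketch (purely illustrative, not
from either paper) of such an executive, in which "cancel" is not a
primitive of the time flow mechanism at all but merely lazy deletion
layered on the future event list.  All names are invented:

    import heapq
    import itertools

    class EventExecutive:
        """Minimal event-scheduling time flow mechanism (illustrative only)."""

        def __init__(self):
            self._fel = []                  # future event list, kept as a heap
            self._ids = itertools.count()   # tie-breaker for equal times
            self._cancelled = set()
            self.now = 0.0

        def schedule(self, time, action):
            """Put an event notice on the future event list; return a handle."""
            handle = next(self._ids)
            heapq.heappush(self._fel, (time, handle, action))
            return handle

        def cancel(self, handle):
            """Not primitive here: just mark the notice for lazy deletion."""
            self._cancelled.add(handle)

        def run(self, until):
            """Repeatedly advance the clock to the next imminent event."""
            while self._fel:
                time, handle, action = heapq.heappop(self._fel)
                if handle in self._cancelled:
                    self._cancelled.discard(handle)
                    continue
                if time > until:
                    break
                self.now = time
                action()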

To understand my points here, you might want to refer to the
Management Science paper of Sept. 1971 ("On Time Flow Mechanisms
for Discrete System Simulation"), in which the patrolling repairman
model serves as a counterexample to the prevailing notions (at
that time) that next-event time advance was inherently superior to
fixed time incrementing (or, more globally, that event scheduling
always resulted in less model execution time than activity scan).
With the patrolling repairman model using event scheduling, the
modeler can take the simple approach of defining a machine failure
as the imminent event.  In this case a cancel is needed, because
the repairman may, on his way to the machine with the minimum
failure time, encounter a machine whose failure occurred later
than that minimum and stop to repair it instead.  However, if the
next imminent event is defined as the next repair of a machine,
then no cancel is needed, but the cost of determining the next
imminent event is much greater.
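
Using the toy executive sketched above, the first modeling choice might
look roughly like this (all times are invented purely for illustration):

    sim = EventExecutive()

    # Machine B has the minimum failure time (t=3.0); machine A fails later
    # (t=5.0) but lies earlier on the patrol route, so the repairman reaches
    # A at t=6.0 and would only have reached B at t=7.0.
    def begin_repair(machine):
        print("t=%.1f: begin repair of machine %s" % (sim.now, machine))

    trip_to_b = sim.schedule(7.0, lambda: begin_repair("B"))

    def stop_at_a():
        begin_repair("A")
        sim.cancel(trip_to_b)      # the provisional trip to B is abandoned

    sim.schedule(6.0, stop_at_a)
    sim.run(until=10.0)            # prints only the repair of machine A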

If you would like a reprint of the Management Science paper or
want to discuss it further, I would be happy to oblige.  Note
also the relationship between your question and the issue of
conservative versus optimistic protocols for parallel simulation.

Dick Nance
nance@vtopus.cs.vt.edu
       or
nance@vtvm1.bitnet




------------------------------

To: comp-simulation%hp4nl@uunet.UU.NET
Path: euraiv1!loggen
From: loggen@cs.eur.nl (R.P. Loggen)
Newsgroups: comp.simulation
Subject: Network simulation
Date: 6 Jun 89 17:00:26 GMT
Organization: Erasmus Univ.;EUR/EF/AIV; P.O.B. 1738; NL 3000 DR  Rotterdam


 Hello there readers,

 Here at Erasmus University, Holland, we are busy trying
 to build a *huge* WAN point-to-point network simulator
 in Smalltalk-80 (Sun/4).
 We are very much interested in hearing from people who are
 planning to start, or are already busy with, this or related
 subjects!

 Our goals are the following:

  1. Create a topology, with cost, flow, and traffic matrices given.

  2. Simulate the corresponding network structure with all kinds of
     (ISO) protocols.

  3. Try to optimize the network (minimizing cost and maximizing
     efficiency), based on information gathered in the simulation.

Eventually the nodes in the network will be processes with some
specified task, instead of just dumbly sending messages of some length.
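
As a point of comparison only, here is a rough Python sketch of the kind
of topology/cost structure described (the poster's work is in Smalltalk-80;
every class, name, and number below is invented):

    import heapq

    class Link:
        """Point-to-point link with a cost and a capacity (invented units)."""
        def __init__(self, dst, cost, capacity):
            self.dst, self.cost, self.capacity = dst, cost, capacity

    class Network:
        def __init__(self):
            self.adj = {}   # node -> list of outgoing Link objects

        def add_link(self, src, dst, cost, capacity):
            self.adj.setdefault(src, []).append(Link(dst, cost, capacity))
            self.adj.setdefault(dst, []).append(Link(src, cost, capacity))

        def cheapest_route(self, src, dst):
            """Dijkstra over link costs; a stand-in for the routing decisions
            a real protocol layer would make."""
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, node = heapq.heappop(heap)
                if node == dst:
                    break
                if d > dist.get(node, float("inf")):
                    continue
                for link in self.adj.get(node, []):
                    nd = d + link.cost
                    if nd < dist.get(link.dst, float("inf")):
                        dist[link.dst], prev[link.dst] = nd, node
                        heapq.heappush(heap, (nd, link.dst))
            path, node = [dst], dst          # walk back to recover the route
            while node != src:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), dist[dst]

    # Toy three-node topology with made-up per-link costs.
    net = Network()
    net.add_link("A", "B", cost=4.0, capacity=64)
    net.add_link("B", "C", cost=1.0, capacity=64)
    net.add_link("A", "C", cost=6.0, capacity=128)
    print(net.cheapest_route("A", "C"))      # (['A', 'B', 'C'], 5.0)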

It would be great to hear from you!


------------------------------

Date: Wed, 7 Jun 89 16:57:07 -0400
From: Paul Fishwick <fishwick@fish.cis.ufl.edu>
To: simulation@ufl.edu


[[EDITOR: This was copied directly from group comp.parallel -PAF]]


From uflorida!gatech!hubcap!reiher Wed Jun  7 16:56:42 EDT 1989
Article 739 of comp.parallel:
Path: uflorida!gatech!hubcap!reiher
From: reiher@amethyst.Jpl.Nasa.Gov (Peter Reiher)
Newsgroups: comp.parallel
Subject: Re: TimeWarp
Date: 7 Jun 89 20:16:38 GMT
Sender: fpst@hubcap.clemson.edu
Approved: parallel@hubcap.clemson.edu

In article <PJIJ-2815-4574@nasamail> awilkinson@nasamail.nasa.gov (ALLEN R. WILKINSON) writes:
>> [...]
>> Offhand, I don't actually know of a 3rd approach that fits this
>> general framework, unless one includes Jefferson's time-warp work (you
>> simulation users will want to find out about that, too!)
>
>    Jade Simulations International has developed a parallel simulation
>language based on David Jefferson's TimeWarp 

JPL has an implementation of Time Warp, as well.  Some of the things said 
about the Jade version seem a little misleading to me, so I'd like to clarify
some basic principles that apply to both the Jade and JPL systems.
. . .

>    Jefferson's TimeWarp is intended to solve the problems of synchronizing
>simulation time across multiprocessors in the operating system, rather
>than in the user's code.  
. . . 
>An incredibly brief description of TimeWarp follows. 
>    Basically, a simulation consists of entities which pass events
>between themselves, and take some action when an event is received.
>Each event has a time-stamp, which is when that event is to occur 
>in simulation time.  An entity acts on each event as soon as it is
>received, *with no attempt to verify that the event is in correct
>simulation time order*.  
>Briefly, the rationale is that if
>you verify the event order, you are wasting potential execution time
>and clogging your communication links with pointless queries.  
. . . 
>    "But, but, you've wasted processing time on event X!" stammer
>the distractors.  Well, not really.  If we had remained blocked until
>X-10 had arrived, that processing time was 'wasted' anyways.  If fact,
>the worst case of TimeWarp, where all events arrive out of order, is
>exactly equivalent to the 'conservative' synchronization method;
>where entities remain blocked until event order has been verified.

Well, not really.  There is overhead associated with doing things out of
order.  First, there is the overhead of the rollback mechanism.  Second,
if the incorrectly processed event sent out messages, any overhead effort
spent by the system on handling them is wasted work.  They may cause 
congestion of communications links, they may cause queues to fill 
unnecessarily, they may cause secondary incorrect processing at other objects,
etc.  So, sometimes, from a performance point of view, you would be better
off not doing this work optimistically.  You might be better off doing no
work at all, which is the underlying idea behind conservative mechanisms.
Generally speaking, in fact, if you magically knew that a piece of work would
be rolled back and discarded later, you would always be better off doing
nothing than incorrectly doing that work.
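
To make the trade-off concrete, here is a deliberately stripped-down Python
sketch of the optimistic discipline under discussion: process each event as
soon as it arrives, save state first, and roll back and re-execute when a
straggler shows up.  It is purely illustrative, not the JPL or Jade code,
and it omits anti-messages, GVT, and fossil collection entirely:

    import copy

    class OptimisticEntity:
        """Toy Time-Warp-style object: it processes events as they arrive
        and rolls back when a straggler (an event with an older timestamp
        than its local clock) shows up."""

        def __init__(self, state):
            self.state = state
            self.lvt = 0.0            # local virtual time
            self.processed = []       # (timestamp, event, state saved beforehand)

        def handle(self, timestamp, event):
            replay = self._rollback(timestamp) if timestamp < self.lvt else []
            self._execute(timestamp, event)
            for t, e in replay:       # re-execute the work that was undone
                self._execute(t, e)

        def _execute(self, timestamp, event):
            # Save state before processing so this event can be undone later.
            self.processed.append((timestamp, event, copy.deepcopy(self.state)))
            self.lvt = timestamp
            event(self.state)

        def _rollback(self, timestamp):
            """Discard every event processed at a later virtual time and hand
            it back for re-execution after the straggler."""
            undone = [p for p in self.processed if p[0] > timestamp]
            self.processed = [p for p in self.processed if p[0] <= timestamp]
            # Restore the state saved just before the earliest undone event.
            self.state = undone[0][2]
            self.lvt = self.processed[-1][0] if self.processed else 0.0
            return [(t, e) for t, e, _ in undone]

    entity = OptimisticEntity({"count": 0})
    bump = lambda state: state.update(count=state["count"] + 1)
    entity.handle(10.0, bump)
    entity.handle(20.0, bump)
    entity.handle(15.0, bump)             # straggler: the event at 20.0 is
                                          # rolled back and re-executed
    print(entity.state, entity.lvt)       # {'count': 3} 20.0

The execution at virtual time 20.0 that gets rolled back, together with the
state saving and bookkeeping around it, is the kind of overhead the preceding
paragraph is pointing at.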

While incorrectly done work may prove expensive, our experience with Time
Warp has shown that usually it is cheaper than running conservatively, for
the class of applications we are interested in.  We get good speedups despite 
rollbacks.  In fact, as more nodes are added to a computation, we typically 
get both more speedup and more rollbacks.  On the other hand, for a given 
number of nodes but different assignment of simulation objects to each node, 
typically the configuration that performs the least rollbacks gets better 
speedup.

One tempting approach to speeding up Time Warp systems is to try to distinguish
between "good optimism" and "bad optimism".  In other words, try to avoid
optimistically executing work that will eventually be rolled back.  The problem
is to separate out the bad optimism.  We've found no good way of determining
whether a given piece of work is likely or unlikely to be undone later.
Any attempts we've made to limit the optimism in our system have slowed the
system down, or, at best, not helped it.

One more point - this article stated that programs written in Sim++ for a
single processor will run, without change, on a parallel processor.  That's
probably true.  However, unless the application was written with parallelism
in mind, it may extract little or no extra speed from the extra nodes.  Our
project has seen several examples of such cases.  Sim++ may be far more
sophisticated than I think, but it would have to be very smart indeed to
parallelize an inherently sequential program.  My current reading of the
situation in optimistic computation is that it has not provided the ideal
of automatically extracting speedup from programs written for sequential
machines, and is not likely to in the near future, if ever.
			Peter Reiher
			reiher@amethyst.jpl.nasa.gov
			(DO NOT send to reiher@amethyst.uucp)
			. . . cit-vax!elroy!jato!jade!reiher





------------------------------




END OF SIMULATION DIGEST
************************