[comp.software-eng] Resource requirements for Ada development

gaabor@runx.oz.au (Gabor Rozman) (03/29/91)

Hello Networld,

This is my first posting, so I apologize in advance for any errors. I am an avid
reader of the discussions on planning projects, but I have not seen a discussion
of how to estimate the hardware requirements for a medium-to-large software
development.
Most of the work is aimed at establishing the size of the software and the manning
and scheduling of the project. That is fine for the bidding phase, but once
you have won the contract, you have to set up a software development environment.
How do you go about it? Are there any "rules" or accepted ratios (CPU power,
disk storage, etc.)?
I am particularly interested in whether anyone has tried to monitor resource
utilization during a medium-to-large project (300+ K Ada statements, counted as
terminating semicolons) on a SUN-4 network. What would be a good estimate for
MIPS/seat and Mbytes of disk/seat? What are the most influential factors for
reasonable turn-around time (over and above the usual Unix NFS issues)?
Where is the point of diminishing returns (if there is one), i.e. the point at
which adding more "iron" won't make any difference?

The specific situation I am facing: a SUN-4 network, about 80 software developers,
VADS for self-hosted and cross compilation, and a project of the above-mentioned size.

I am curious whether anyone has an idea of what would give us a responsive system,
with a not-too-long (I know it is relative) waiting time and a build (link) time
of less than a couple of days.
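
To make the question concrete, here is the kind of back-of-envelope arithmetic I
have in mind, written out as a small Ada procedure. The per-statement and per-seat
ratios below are pure placeholders; they are exactly the figures I am hoping
someone can supply from real measurements:

with Text_IO; use Text_IO;
procedure Sizing_Sketch is
   type Count is range 0 .. 100_000_000;
   package Count_IO is new Integer_IO (Count);

   -- Placeholder figures only; substitute measured ratios.
   Seats            : constant Count := 80;       -- developers on the network
   Statements       : constant Count := 300_000;  -- terminating semicolons
   KB_Per_Statement : constant Count := 4;        -- source + objects + VADS library (assumed)
   MIPS_Per_Seat    : constant Count := 3;        -- assumed CPU ratio

   Total_Disk_MB : constant Count := Statements * KB_Per_Statement / 1_000;
begin
   Put ("Total project disk, Mbyte . "); Count_IO.Put (Total_Disk_MB);         New_Line;
   Put ("Disk per seat, Mbyte ...... "); Count_IO.Put (Total_Disk_MB / Seats); New_Line;
   Put ("Aggregate CPU, MIPS ....... "); Count_IO.Put (MIPS_Per_Seat * Seats); New_Line;
end Sizing_Sketch;

If anyone has measured what KB_Per_Statement and MIPS_Per_Seat really look like on
a VADS/SUN-4 project of this size, those are precisely the numbers I am after.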

Language constructs to be avoided, work practices to be followed, a method for
estimating how to extend the network, or any other suggestions (general or
specific) are welcome.

I am following both the ada-info and the software-eng threads, so either postings or
direct e-mail is fine.  If I get enough information, I will summarize and post
it to both (splitting the answers according to relevance).

Thanks.   Gabor Rozman

-- 
+--------------------+-------------------------------------------------------+
| Gabor Rozman       | ACSnet, Ean, CSnet: gaabor@runxtsa.runx.oz.au         |
| 29/7-9 Gilbert St  | Arpa: gaabor%runxtsa.runx.oz.au@UUNET.UU.NET          |
| DOVER HEIGHTS      | Janet: gaabor%runxtsa.runx.oz.au@UK.AC.UKC            |
+--------------------+-------------------------------------------------------+

mcgregor@hemlock.Atherton.COM (Scott McGregor) (04/05/91)

In article <1991Mar29.025338.8221@runx.oz.au>, gaabor@runx.oz.au (Gabor
Rozman) writes:

> Most of the work is aimed at establishing the size of the software and the manning
> and scheduling of the project. That is fine for the bidding phase, but once
> you have won the contract, you have to set up a software development environment.
> How do you go about it? Are there any "rules" or accepted ratios (CPU power,
> disk storage, etc.)?

The "rules" vary GREATLY from company to company, and often even between
departments within large companies. Largely this is driven by the "commonly
accepted practices" local to your company, department or manager.  What
has been done in the past is a good guide to what can be done in the future
without the imposition of significant force. There is a large cultural
component to the way the right balance gets decided, but there are some
economic reasons as well.  For instance, the cost of capital purchases
varies greatly.  At a large hardware-producing company, its own hardware may
be perceived as inexpensive, and you can get lots of it. At a small start-up,
equipment can be dear, because it is hard to get loans when you have few
assets and are still losing money. Even within big companies, individual
managers may be good or poor at negotiating budgets, and may be gaining or
losing points based upon how tightly they control them.

In general, it is extremely hard to hit a point of resource utilization
where more computers won't be used.  The availability of increased
computing resources typically changes the nature of the problems attempted. 
I did a study for a major computer manufacturer with an R&D group working on
new chip technologies.  When their researchers had only a minicomputer, they
did a few analytical studies; when they had a mainframe, they started some
simulations; with a supercomputer, they contemplated full Monte Carlo
simulations.  So you see, added capacity changes the nature of the problem.
On the other hand, you may reach the knee of the perceived-value curve.
Someone might say, 'Gee, we'll just hire the *smartest* physicists; they can
use just a minicomputer and their innate "good guesses" and still do just
as well as the *moron* physicists they hire at XYZ, who have a Cray and just
use "brute force" techniques.'  You'll find this sort of instinctual valuing
usually overrides any other aspect of deciding the right solution BECAUSE
no one has really hit the drop-off point for extra power.  Similar effects
occur with disk space, memory, etc.

The company's approach to financing computers also affects the equation.  If a
company charges a group a flat rate whether they use the machine or not, that
tends to drive the group to use it to capacity (and often to increase their
capacity).  On the other hand, if the large fixed costs are allocated based
on usage, marginal use will fall, which raises the average cost to the
remaining users, causing other usage to be perceived as marginal, and
so on, until the computing power is very expensive for a few remaining
heavy-duty users.  Since mainframes and supercomputers are often charged
internally in this latter way, and workstations are largely charged in the
flat-rate way, the playing field is tilted toward workstations and PCs.
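
A toy illustration of that spiral, with completely made-up numbers (a hypothetical
fixed machine cost, twenty groups sharing it, and an assumed quarter of the
remaining groups bailing out after each price rise):

with Text_IO; use Text_IO;
procedure Chargeback_Sketch is
   package Int_IO is new Integer_IO (Integer);
   package Flt_IO is new Float_IO (Float);

   Fixed_Cost : constant Float := 100_000.0;  -- assumed annual cost of the shared machine
   Groups     : Integer := 20;                -- groups initially sharing it
   Charge     : Float;
begin
   while Groups > 0 loop
      Charge := Fixed_Cost / Float (Groups);
      Put ("groups"); Int_IO.Put (Groups, Width => 4);
      Put ("   charge per group"); Flt_IO.Put (Charge, Fore => 9, Aft => 2, Exp => 0);
      New_Line;
      -- Assume the marginal quarter of the groups find the new charge too
      -- high and move their work to flat-rate workstations instead.
      Groups := Groups - (Groups + 3) / 4;
   end loop;
end Chargeback_Sketch;

Each round the per-group charge rises, another slice of usage becomes "marginal"
and leaves, and the survivors end up paying for the whole machine: the tilt
toward workstations and PCs described above.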

I mention all this because many people remain mystified as to why these kinds
of changes happen; technology is not the only reason.  Management is a form of
chaotic dynamics: the future is sensitively dependent, in unforeseen ways, on
small changes in initial conditions, even when the same forces are consistently
in play.

Scott McGregor
Atherton Technology