aro@cs.aber.ac.uk (Andrew Ormsby) (10/24/90)
In article <1990Oct19.220747.5536@Neon.Stanford.EDU> craig@Neon.Stanford.EDU (Craig D. Chambers) writes:
> I agree wholeheartedly that anything that convinces programmers to use
> nicer languages like Smalltalk (or, even better, Self) would be a
> Great Thing, and better run-time performance is certainly one
> important factor.
I've often seen complaints about the performance of Smalltalk. To what
extent is performance really an obstacle to the adoption of Smalltalk
these days? My (very limited) experience is that Smalltalk seems to
produce much more responsive applications than C++/Interviews. Not a
fair comparison, I know, but I'd still be interested to see comments.
Andy Ormsby
aro@cs.aber.ac.uk
pkr@media01.UUCP (Peter Kriens) (10/25/90)
We have been building a number of applications in Smalltalk that involve mostly user interface processing. Though we would sometimes like a more responsive screen (we are running Smalltalk V286, VGA and a 386SX), the overall performance is really very good. Speed problems occur mostly when we try to do "massive" processing. For example, we have written a multiuser database in Smalltalk that works very nicely for searches and stores, but it becomes slow when we have to walk over all the records.

What I am trying to say is that "running code", i.e. code which does an awful lot of different things and reacts to the user, is perfect in Smalltalk. But the moment you start to handle thousands of "objects", the overhead sometimes becomes prohibitive. On the other hand, the first version of one of our programs took over 45 minutes to produce a certain report. Just by using better hardware and going from V86 to V286 we have brought this time down to 10 minutes.

Don't be fooled by claims that the overhead of Smalltalk is only 30 percent. That figure counts only the comparison between a message send and a procedure call. The difference between C and Smalltalk is that in Smalltalk each line is one or more message sends, while in C a lot of statements are expanded directly to op-codes.

Even though we realize the overhead is there, we have found that increasing hardware speed allows us to develop a LOT faster and produce much more stable code, which usually looks a lot nicer as well.

Peter Kriens
hp4nl!media01!pkr
cohill@vtserf.cc.vt.edu (Andrew M. Cohill) (10/26/90)
We have found that performance can be affected substantially by the way you define and use objects. We were working with a little database ourselves, and were initially pulling records out of the db and stuffing them into a set of nested arrays. What was passed was a single array, but within that array we had additional arrays or collections, and the receiver then unpacked those things. We only had a few fairly short strings in there, so it didn't seem like a big deal at the time. Response time was *slooooooow*. I went in and created a new class defined to hold those strings as instance variables; no more nested arrays. It went from 20-30 seconds to no visible delay--a big difference.

To make a long story short, we found that little things make a big difference. Finding this sort of thing out takes, I think, a lot of experience with Smalltalk if you want to run really big stuff.

--
| ...we have to look for routes of power our teachers never
| imagined, or were encouraged to avoid.        T. Pynchon
| Andy Cohill   703/231-7855   cohill@vtserf.cc.vt.edu   VPI&SU
klimas@iccgcc.decnet.ab.com (10/30/90)
In article <ARO.90Oct24150357@raphael.cs.aber.ac.uk>, aro@cs.aber.ac.uk (Andrew Ormsby) writes:
> In article <1990Oct19.220747.5536@Neon.Stanford.EDU> craig@Neon.Stanford.EDU (Craig D. Chambers) writes:
> > I agree wholeheartedly that anything that convinces programmers to use
> > nicer languages like Smalltalk (or, even better, Self) would be a
> > Great Thing, and better run-time performance is certainly one
> > important factor.
>
> I've often seen complaints about the performance of Smalltalk. To what
> extent is performance really an obstacle to the adoption of Smalltalk
> these days? My (very limited) experience is that Smalltalk seems to
> produce much more responsive applications than C++/Interviews. Not a
> fair comparison, I know, but I'd still be interested to see comments.

At the recent OOPSLA, I had the opportunity to discuss the performance of Smalltalk with several sources. The general rule of thumb is: the more OO your C++, the more like Smalltalk your performance. For example, a group in France has recently finished porting a fairly sizeable Smalltalk-80 simulation, one that makes use of the simulation classes, to C++, and got only a 10% performance improvement. During one of the panel discussions, folks from IBM said that the performance differences were not a significant factor anymore, and that in some cases they had actually seen Smalltalk execute more quickly than C++.
scp@acl.lanl.gov (Stephen C. Pope) (11/02/90)
on 25 Oct 90 09:07:29 GMT, pkr@media01.UUCP (Peter Kriens) said:

[ ... ]

Peter> Speed problems occur mostly when we try to do "massive" processing....

[ ... ]

Peter> What I am trying to say is that "running code", i.e. code which does an
Peter> awful lot of different things and reacts to the user, is perfect in
Peter> Smalltalk. But the moment you start to handle thousands of "objects",
Peter> the overhead sometimes becomes prohibitive.

[ ... ]

Peter> Don't be fooled by claims that the overhead of Smalltalk is only 30
Peter> percent. That figure counts only the comparison between a message
Peter> send and a procedure call. The difference between C and Smalltalk is
Peter> that in Smalltalk each line is one or more message sends, while in C
Peter> a lot of statements are expanded directly to op-codes.

Yes. Then consider the domain of scientific computing, where OOD/OOP have much to offer. Because scientific computations often model ``real-world physics'', the object model can be used to create very powerful and intuitive abstractions of real-world phenomena, leading to code which is significantly easier to understand (and play with) than the kind of design that leads to highly optimized (vectorized/parallelized) FORTRAN code, the incumbent with which it must in some sense compete.

The kinds of overhead implied by Smalltalk, although comparatively trivial when treating complex composite types of coarse granularity, are simply unacceptable when you want to do simple arithmetic operations on collections (arrays) of ``fundamental'' types such as floating-point values. In many scientific codes these "complex composites" are important as the means to achieve coherent and transparent design (via abstraction and encapsulation). They are, as such, the ``products'' of the design phase.
However, on the computational side, it is really the simple arithmetic operations on floats and such that matter; this is the side of the coin on which OOP has a way to go before it delivers something of interest to the average burner of supercomputer cycles. With a language such as C++, there is some hope that significant portions of the computational/numeric sides of scientific codes may get some treatment; even lacking a vectorizing C compiler, it is not difficult to encapsulate essential behavior within a class whose implementation is carefully crafted code (or even calls to FORTRAN routines). This is possible because the fundamental types are exactly those types which the underlying hardware supports (more or less) directly. If these so-called fundamental types are buried (in implementation! the abstraction of the program model can be whatever you like) underneath the baggage of message lookup and indirection, you're going to lose.

The existence of an OOL which directly supported the notion of data-parallelism (particularly if integrated fully with the OO model, not just for fundamental types) would go a long way toward addressing some of these very real efficiency issues. Lacking that, most will continue to work in FORTRAN, some will experiment with C++, and few will venture elsewhere.

Peter> Even though we realize that the overhead is there, we have found that
Peter> increasing hardware speed allows us to develop a LOT faster and
Peter> make much more stable code which also usually looks a lot nicer.

Unfortunately, the speedups made possible by pushing up the clock rate and such still don't come close to the speedups possible through vectorization and parallelization.
In the supercomputing world, it is typical to put person-years of effort into squeezing every last ounce of performance out of one piece of code and very expensive machinery; even with 10s of gigabytes of memory and 10s of thousands of processing elements, our machines are still too wimpy to do some very interesting and important work.

stephen pope
advanced computing lab, los alamos national laboratory
scp@acl.lanl.gov
timm@runxtsa.runx.oz.au (Tim Menzies) (11/05/90)
My remarks re Smalltalk performance relate to Smalltalk/V and V286 running on 286 and 386 PS/2s, ATs and Compaqs.

0) I've never not done things in Smalltalk due to speed issues. I usually say that brain beats brawn. Yes, it goes faster on a Cray, but a little thinking can go a long way toward speeding up a system.

1) The interface speed is impressive. The limiting factor on its performance is usually how fast I can move the mouse around.

2) The speed of the internal code is good, but. I usually tell people that it's fast to do thousands of things in Smalltalk, OK to do tens of thousands of things, and slow to do hundreds of thousands of things. For example:

    Time millisecondsToRun: [30000 timesRepeat: [true]]

takes a non-trivial amount of time. So, the thing that Smalltalk should be best at (simulation) is ironically the thing it is worst at. I tell people that if there is some massively computationally expensive process to be run, write it in "C", compile it to assembler, and load it into the Smalltalk environment using:

    Smalltalk loadPrimitives: 'foo.bin'.

3) Smalltalk/V goes very slow when its memory is full, just prior to a disc swap. I haven't had a swap yet in V286. The following code returns the amount of memory free; it's a good idea to monitor this number:

    Smalltalk unUsedMemory

One trick I've found is to leave strings on disc. Instead of loading lots and lots and lots of text, leave it in a disc file and store internally: textStart, textStop. The method "File position: textStart" works very fast. I've accessed a string in a four megabyte file in 60ms (not bad, considering that the latency time on that drive was 30ms).

4) Compaqs are faster than IBMs. The disc swaps on a Compaq are about twice as fast as on an IBM.

5) And on a final note: multi-tasking under V286 is surprisingly fast. Recently I wrote an "autosave" utility that forks a process at lowest priority that loops infinitely and saves the image every twenty minutes.
I was a bit reluctant to do this at first, since I thought it would slow everything else down. I've been running the thing constantly for four days now while I do my development work, and I haven't noticed a slowdown (or, surprise surprise, any idiosyncratic time-dependent processing errors).

--
 _--_|\   Tim Menzies (timm@runxtsa.oz)       "It's amazing how much 'mature
/      \  HiSoft Expert Systems Group,         wisdom' resembles being too
\_.--._/  2-6 Orion Rd, Lane Cove, NSW, 2066   tired."  - Lazarus Long
      v   02 9297729 (voice), 61 2 4280200 (fax)        (a.k.a. Bob Heinlein)