[comp.human-factors] adaptive user interfaces

kathy@cs.sfu.ca (Kathy Peters) (06/13/91)

I am interested in the concept of adaptive user interfaces - somehow
tailoring the user interface for individual skills and preferences.
(Rather than just having categories like 'naive', 'expert', etc.).

Adaptation could be as simple as setting up the preferred display colours,
or having dynamic menus (allowing the user to perform only certain things),
OR it could be giving the user more help if he or she is making more
mistakes (getting tired after a long day!) ....

Any thoughts on what things could be tailored for individuals (like
display characteristics or help text)? What individual characteristics
(name, age, authorization level, allowed tasks, vision and hearing
difficulties, ....) could be used to tailor the interface?


Kathy Peters
Simon Fraser University, Burnaby, Canada
kathy@cs.sfu.ca

sven@cs.widener.edu (Sven Heinicke) (06/17/91)

In article <1991Jun12.182221.10179@cs.sfu.ca> kathy@cs.sfu.ca (Kathy Peters) writes:
   I am interested in the concept of adaptive user interfaces - somehow
   tailoring the user interface for individual skills and preferences.
   (Rather than just having categories like 'naive', 'expert', etc.).

I think this all sounds like a great idea, but the kinds of
interfaces that most people like are those that they have used in the
past.  This would be fine too, but if anything gets written in a
format that conflicts with the interfaces of companies like Lotus,
Apple and some others, there might be a lawsuit brewing.

I do hope that whatever Kathy is working on comes through.  It
is just something to worry about.  I still get shocked sometimes when I
think that Lotus brought a company to court for using the same user
interface.

-- 
sven@cs.widener.edu                                  Widener CS system manager
Sven Mike Heinicke                                          and Student
(pssmheinicke@cyber.widener.edu (if you must))

dsy@psych.toronto.edu (Desiree Sy) (06/17/91)

>In article <1991Jun12.182221.10179@cs.sfu.ca> kathy@cs.sfu.ca (Kathy Peters) writes:
>   I am interested in the concept of adaptive user interfaces - somehow
>   tailoring the user interface for individual skills and preferences.
>   (Rather than just having categories like 'naive', 'expert', etc.).

Microsoft has done a little work on this. Users can adjust their
menus and keystroke combinations in Word 4.0. I love this feature,
and would be delighted if more applications would incorporate it.

-desiree

sasingh@rose.waterloo.edu (Sanjay Singh) (06/17/91)

I think user interfaces could be made more adaptive if some artificial
intelligence techniques were used to make using them more intuitive.

I am still new to AI, but I think natural language understanding and 
neural nets in VLSI could provide promise for making computers do what we
mean rather than what we say.

I believe Xerox PARC has been working on some 3-d type of GUI. It was on
the cover of Byte some time back.

dmorin@wpi.WPI.EDU (Duane D Morin) (06/17/91)

>>In article <1991Jun12.182221.10179@cs.sfu.ca> kathy@cs.sfu.ca (Kathy Peters) writes:
>>   I am interested in the concept of adaptive user interfaces - somehow
>>   tailoring the user interface for individual skills and preferences.
>>   (Rather than just having categories like 'naive', 'expert', etc.).

This issue has far-reaching ramifications when put to use in an educational
environment.  Many students have had little or no access to PCs while
in school (provided we hit a grade level early enough), and a system that
can be trained to suit their needs is often very well received.  For example,
a system that I recently designed kept a profile of each user: message level,
file being worked on, customized vocabulary, alias file... so that each user
appeared to be customizing the interface in their own way.  The system even
went as far as getting the person's first name out of the password file on
the network and asking whether they would like to be called by a certain
nickname (the chosen name was then stored in the profile as well, and used
in particular error messages and in the next startup greeting).
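The kind of per-user profile described above could be sketched roughly like this in Python.  The field names (`message_level`, `nickname`, etc.) are illustrative stand-ins, not the actual schema of that system:

```python
# Rough sketch of a per-user interface profile, loaded at login and
# saved on change.  Field names are invented for illustration.
import json

DEFAULT_PROFILE = {
    "message_level": "verbose",   # verbose prompts for novices, terse for experts
    "current_file": None,         # file the user was last working on
    "vocabulary": [],             # user's customized word list
    "aliases": {},                # user's command aliases
    "nickname": None,             # preferred name for greetings and errors
}

def load_profile(path):
    """Load a user's profile, falling back to defaults for a new user."""
    try:
        with open(path) as f:
            return dict(DEFAULT_PROFILE, **json.load(f))
    except FileNotFoundError:
        return dict(DEFAULT_PROFILE)

def save_profile(path, profile):
    """Persist the profile so customizations survive between sessions."""
    with open(path, "w") as f:
        json.dump(profile, f, indent=2)

def greeting(profile, full_name):
    """Greet the user by their stored nickname, if they chose one."""
    return "Good morning, %s." % (profile["nickname"] or full_name)
```

The point is simply that the "adaptive" state lives in a small per-user record that every part of the interface consults.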

AI is a wonderful thing, but at least in my area of work it needs to work
flawlessly if it is to be used at all.  For example, we designed a
spell-checking system to go with the internal vocabulary.  Now, this system
was good enough (and the vocabulary was small enough) that it almost always
guessed the correct word.  We wanted to use some simple AI to ASSUME that
the user wanted whichever word it guessed, and then go through and finish
up the sentence.  But, like I said, it ALMOST always worked.  We couldn't
assume anything.  So that idea never flew.
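The "ALMOST always" problem is the classic reason auto-correction needs a confidence gate: act silently only when the guess is very likely right, and ask otherwise.  A toy sketch using Python's standard difflib (the vocabulary and threshold here are invented for illustration):

```python
# Auto-accept a spelling correction only when its similarity score
# clears a threshold; otherwise flag it for the user to confirm.
import difflib

VOCAB = ["building", "warehouse", "robot", "structure", "report"]

def correct(word, vocab=VOCAB, auto_threshold=0.85):
    """Return (suggestion, auto) where auto means it is safe to assume."""
    matches = difflib.get_close_matches(word, vocab, n=1, cutoff=0.0)
    if not matches:
        return None, False
    best = matches[0]
    score = difflib.SequenceMatcher(None, word, best).ratio()
    return best, score >= auto_threshold
```

With a gate like this, the interface assumes only the near-certain cases and degrades to a question for the rest, instead of silently "finishing the sentence" with a wrong word.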

Duane Morin
Worcester Polytechnic Institute
Worcester, MA 01609-2208
dmorin@wpi.wpi.edu

No sig - I write these things by hand whenever I post!

mcgredo@prism.cs.orst.edu (Don McGregor) (06/17/91)

kathy@cs.sfu.ca (Kathy Peters) writes:
>   I am interested in the concept of adaptive user interfaces - somehow
>   tailoring the user interface for individual skills and preferences.
>   (Rather than just having categories like 'naive', 'expert', etc.).
>
  You might look up the article "An Architecture for Intelligent Interfaces:
  Outline of an Approach to Supporting Operators of Complex Systems",
  Rouse, Geddes and Curry, Human-Computer Interaction, 1987-88, pp. 87-
  122, and "Adaptive Aiding for Human/Computer Control", Rouse,
  Human Factors 1988, 30(4), 431-443.  

  Rouse's main interest is in complex systems and aircraft, particularly
  when the user is juggling several concurrent tasks. Interesting that
  you came up with the same term (that is, if you haven't already seen
  the articles :-). 

  Sorry, no more recent articles; haven't been keeping up in this area.

Don McGregor             | "I too seek the light, so long as it tastes  
mcgredo@prism.cs.orst.edu|  great and is not too filling."

azieba@trillium.waterloo.edu (Warren Baird) (06/18/91)

In article <1991Jun16.205355.12316@psych.toronto.edu> dsy@psych.toronto.edu (Desiree Sy) writes:
>
>Microsoft has done a little work on this. Users can adjust their
>menus and keystroke combinations in Word 4.0. I love this feature,
>and would be delighted if more applications would incorporate it.

Sun's old UI, SunView, provided a lot of user-definable parameters.
You could tell the UI where and how big you wanted your scrollbars, where
your icons went when you closed them, and many other things.  It also
provided a graphical defaults editor to change and save these
parameters.  OpenWindows, Sun's new UI, doesn't seem to provide quite
the same level of flexibility (although I admit that I haven't used it
nearly as much as I have SunView).

I certainly agree that user interfaces should allow the user to
customize them, but I'm not sure AI is the way to go.  It seems to
me that for most current interfaces, a list of parameters would be
sufficient, perhaps with a pretty/intelligent interface.
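The "list of parameters" approach can be sketched as nothing more than a flat key/value defaults file overlaid on built-in defaults, loosely in the spirit of SunView's defaults editor.  The parameter names below are made up for illustration:

```python
# Overlay a user's 'key: value' preference file onto built-in defaults.
# Parameter names are hypothetical, in the style of a defaults database.

DEFAULTS = {
    "scrollbar.side": "right",
    "scrollbar.width": "16",
    "icon.gravity": "bottom-left",
}

def parse_defaults(text, defaults=DEFAULTS):
    """Parse 'key: value' lines, skipping blanks and '#' comments."""
    prefs = dict(defaults)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        prefs[key.strip()] = value.strip()
    return prefs
```

A graphical editor then only needs to read and write this one table; the applications never have to know which values were customized.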

Warren


tom@elan.Elan.COM (Thomas Smith) (06/18/91)

From dsy@psych.toronto.edu (Desiree Sy):
>>In reply to kathy@cs.sfu.ca (Kathy Peters):
>>   I am interested in the concept of adaptive user interfaces - somehow
>>   tailoring the user interface for individual skills and preferences.
>>   (Rather than just having categories like 'naive', 'expert', etc.).
> 
> Microsoft has done a little work on this. Users can adjust their
> menus and keystroke combinations in Word 4.0. I love this feature,
> and would be delighted if more applications would incorporate it.

The problem with putting this kind of control strictly in the hands
of the users is that it assumes that
  a) the user knows what skill level he/she is at
  b) once a user attains a skill level in a particular area, he/she
     never regresses to a previous skill level

Regarding the first point, Microsoft's answer (as quoted above, anyway)
is merely a workaround.  In order for a user to configure a shortcut,
that user must be an "expert" in both the feature being shortcutted
and the shortcutting functionality itself.  That is usually too big a
jump for someone who until recently was just a beginner.

Regarding the second point, I'm sure many of you have gone on vacation
for a couple of weeks, only to come back and think "now what was the
combination of command line arguments that cpio takes for reading an
HP cartridge tape?"  (Isn't UNIX fun?)

In other words, not only is it incorrect to divide users into "naive" and
"expert" - it can be equally inadequate to categorize them based solely on
prior experience with a particular feature.

Just adding to the confusion...
    Thomas Smith
    Elan Computer Group, Inc.
    (415) 964-2200
    tom@elan.com, ...!{ames, uunet, hplabs}!elan!tom

mcgregor@hemlock.Atherton.COM (Scott McGregor) (06/18/91)

In article <1991Jun16.213531.8517@watdragon.waterloo.edu>,
sasingh@rose.waterloo.edu (Sanjay Singh) writes:
>I think user interfaces could be made more adaptive if some artificial
>intelligence techniques were used to make using them more intuitive.
>
>I am still new to AI, but I think natural language understanding and 
>neural nets in VLSI could provide promise for making computers do what we
>mean rather than what we say.
>
>I believe Xerox PARC has been working on some 3-d type of GUI. It was on
>the cover of Byte some time back.

You might be interested in the articles about software that anticipates
what you want to do, and adapts based upon past behavior.  I wrote such
an article on Prescient Agents for the HP Professional (May 1990), and
there was an interesting article on EAGER in the most recent
SIGCHI proceedings.  Neural nets and natural language understanding
aren't necessary to provide an improved intuitive interface--merely
paying attention to what people are doing and predicting what they
will do can offer a more adaptive, intuitive, "Radar O'Reilly"
anticipatory type of interface.  No doubt some of these interfaces will be
more attractive than others.
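The "pay attention and predict" idea can be illustrated with something far simpler than neural nets: count which command tends to follow which, and suggest the most frequent follower.  This is only a toy sketch, not the algorithm from either article mentioned above:

```python
# Predict the user's next command from bigram counts over their history.
from collections import Counter, defaultdict

class Anticipator:
    def __init__(self):
        self.bigrams = defaultdict(Counter)  # last command -> follower counts
        self.last = None

    def observe(self, command):
        """Record a command, remembering it as context for the next one."""
        if self.last is not None:
            self.bigrams[self.last][command] += 1
        self.last = command

    def predict(self):
        """Suggest the most frequent follower of the last command, if any."""
        followers = self.bigrams.get(self.last)
        if not followers:
            return None
        return followers.most_common(1)[0][0]
```

An interface built on this could pre-load the predicted tool or offer it as a one-keystroke default, while still letting the user do anything else.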

--Scott McGregor
Atherton Technology
mcgregor@atherton.com

mwtilden@watmath.waterloo.edu (Mark W. Tilden) (06/18/91)

About two years ago, I came up with what I think is a good human-factors
design for a Cyberspace Workdesk.  I posted it to the VR newsgroup, but alas
it was lost in the discussions of VR software parameters.  It needs work
and refining, but I still think it'd be great to build.  The story 
describes a computer workspace environment which is both forward and 
backward compatible (hardware and software) and has strong ergonomic 
considerations for the user (i.e., it won't mess your hair).  I also believe 
it to be fully feasible given the current level of technology.

For those local to Waterloo U. a sketchbook of the design is available 
via University mail.

This story not only details my design, but also brings to light many 
problems I see with current Cyberspace equipment.  I welcome any comments
or critiques.  I would also like to know if anyone is working on 
such a design somewhere.

Why a story?  Neuromancer complex, I guess.  Gibson was Canadian too.

and so...



            CYBERDESK: Take a Trip while still in your Chair


                          Dateline: 199X
                     Place: A Major Metropolis


"It still astonishes me" Jon mumbled to the ether of the empty elevator,
"that with a billion dollars of computers in this building, we 
still ride in an elevator controlled by a Whack-a-mole game."

The doors opened again to an empty hallway, closed, and then continued
on its errant flight in the direction opposite the one desired, leaving its
sole occupant wondering if maybe head-butting the button panel would 
give it a better indication of his intentions.  He was especially
frustrated because he wanted to get to his Desk badly and had come in 
early just to avoid this kind of problem.  God help the people during
rush times, he thought.

--

"Morning Mr. Simpson"

"Morning."

"A little late, aren't we?"

Jon stared at the stereotypical blue-haired receptionist, but was 
too tired from his 22 story stair-climb to respond as he'd like.  He 
trudged slowly to his office, wondering instead if elevator 
engineers could be tortured into eating poisoned gumbo.

Huh? Where did that negative thought come from?  Better get some coffee in.
One bitter glug later he was feeling more himself.

His office was a typical cubicle; window, trashcan, bookshelves, 
tasteful silkscreened lithographs on the walls (black velvet, no less), 
and of course, the Desk.  Jon plodded to the comfortable bucket 
seat and crashed in with a sigh.  The chair springs compensated 
for the load admirably.  After a minute, he opened his eyes 
and looked thoughtfully at his workspace.

The Desk took up most of one corner of the room and thankfully so.
The surface was squarish with twice the area of a normal office desk, 
inlaid with various use-specific arborite patterns, mostly 
obscured now by normal desk clutter.  The 33 inch inlaid color 
multisync and the high-res x-window terminal beside it were, of course,
plastered with 'urgent' post-it notes from various people.  The 
button-framed databall to the left of the keyboard even had 
a few on it.  Funny how they leave the CHUV (Circular Heads Up 
Viewscreen) and the keyboard unscathed, thought Jon; maybe because 
one is so familiar and the other so scary. 

Jon leaned forward, fitting the chair into the recessed left of the
desk which gave his arms maximum support on all sides.  A quick touch of
the keyboard and the x-screen came to life.  Login, password, an
electronic voice check and he was in.  The local processor was a 
sophisticated Unix box based on the old IRIS graphics system.  The
mainframe was mounted at the lower back of the desk with disk and
tape slots within easy reach.  It hummed and a fan was still audible, 
but this was, after all, one of the first systems on the market.  
The new systems fit entirely onto the CHUV chair now, thought Jon: 
quieter, faster, lighter.  Hopefully the price would come down so he 
could soon buy one for home use.  Of course, even getting a used one
for around $8,000 would be a steal.  Sell the car?  Divorce the wife?
Sell the kids for medical experiments?  Sure!

A quick read of mail and net news ("sci.virtual-worlds - 4837 unread
articles since yesterday. Read now? [ynq]".  To hell with cross-posting.
We gotta split that up into sub-groups.  I know, I'll post.) and he 
was ready to go to work.  He scanned the post-its while adjusting and 
locking the heavy chair into position.  Reaching blindly for the 
Desk's built-in mouse, he felt the buckle of paper under his palm.  
Lifting his hand, he noticed that the post-it had been placed 
stick-side up to catch him just like this.  He read:

'Review brunswick building 
 for plant modification!  
 Need report *yesterday*! 

                     Dan'

That man knows me far too well, but ok.  Jon cd'd to the appropriate 
directory under "ARCH-CAD", the architecture CAD software, and typed 
at the prompt:

'arcad Brunswick.bldg -Fi'

Fi: Full Interface

"Fi-fy-fo-fum" mused Jon, sitting back in his chair while reflecting
on the 'Full interface' option that most CAD packages featured
nowadays.  He really should 'alias' the commands to include it anyway,
but the power surge of typing those two letters gave him a pride he
didn't want to relinquish.  He felt a small pang of regret that 
many places still use the old 'terminal' style design systems and
will never know the feeling of typing those letters.  Too new, he
had heard people say.  What if it's just a passing craze?  Remember
what happened with CDs!

He selected and mounted the 'glove' option from the many in his 
right desk drawer (pen? Nah, too simple; only good for surface-mapping 
physical objects into the computer anyway.  3-D mouseball?  Like 
a regular mouse, but takes far too much effort to use.  There was the normal 
2-D mouse of course, but it was next to useless for what he was about
to do).  The 'glove' did not so much resemble the original datagloves from
the late eighties as a normal biking glove.   The open fingertips meant
that he could still touch-type with ease while offering him the best
feedback versatility while in Cyberspace.  The palm of the glove he knew 
bristled with sensors and a micro-power processor, but that was academic.
He longed for the 'sensor sphere' he had tried last month at Siggraph: just 
reach inside and it not only gave you full-finger control, but also full 
temperature and tactile feedback, and not just on the fingertips and 
palms!  He still remembered the feeling of reaching into a virtual
'puddle' and getting a surprisingly accurate sensation of flowing, cool 
liquid.  Amazing.  Too bad you couldn't type with it, but a single 
button press on the databall released it from your hands in less 
than a second.  Beta testing now.  Available next summer.  Backordered 
well into the next century.  Bah.

Oh well, enough dreaming.  He settled himself comfortably, put his 
left hand on the databall now within easy reach, returned the mouse
to its recessed hole to the far right of the desk and hit 'RET'.

Like a power car roof, the Circular Heads Up Viewscreen (CHUV) descended 
about his head like a 30" diameter goldfish bowl that was open at top 
and bottom.  Jon still had full view of his keyboard, part of his desk
and the ceiling (smart that, as he did suffer from minor claustrophobia,
most probably elevator-induced) but now around his head was a 15" high
plexiglass ribbon a uniform 15 inches from his nose.   Just above
the screen was a small array of square indicator LEDs.  When they
flashed green, Jon knew he could continue.

At the moment, the CHUV was transparent and he could see his office easily
through it.  If he turned his head he could just barely make out the
video projectors and stereo speaker assemblies behind his headrest.  As
he watched, the Follow-Restraint Arm (FRA) slowly lowered under its
own power to the desk surface.  Just before it touched, however, Jon
reached his right hand underneath it and the connector on the 
back of the 'glove' seated into the FRA with a click.  The FRA was mounted on
the back right of the CHUV assembly on a level with the operator's head,
allowing the operator full mobility without ever interfering with 
typing or even turning of manual pages.  The FRA was feather light,
being spring compensated.  Only the inertial mass of the small limiting
motor assemblies hindered free movement of his right wrist.  That was a
bonus more than a flaw however, as the additional inertia had prevented
him from making dangerous mistakes many a time.  Cyberspace was so
realistic nowadays it was sometimes difficult to remember that humans
were omnipotent and infinitely powerful there.

Once the CHUV had settled, Jon reached his left arm over to grab 
the databall.  An optical interrupter in the databall sleeve detected 
his arm's presence and immediately turned the polarized CHUV black.  
Instinctively, Jon reached out for the pressure switches under his 
feet which closed the window blinds and dimmed his office lights.  
He then put both feet on rigid pedals which did nothing but help 
prevent the usual vertigo that resulted when exploring someone else's 
CAD designs.

Suddenly, he was falling in dreamland.

Very nice, he thought, but he was immediately suspicious.  What are
they hiding?  Why the elaborate show?

The scene was a 220 degree panorama of beauteous fractal landscape
with repetitive but not untasteful fractal forests, complete with 
shadows in over 4000 shades and colors.  Funny how you get good
at that, Jon thought.  He could give a demographic of any animation he
was put through.  At the moment he was falling towards a small town
where the other buildings were only white cube shells, the building
of interest dressed up sharply with, Jon recognized, slightly 
brighter-than-real colorization.

Now he was really suspicious.

Jon pressed the 'pause' button on his databall, thinking "I'm going 
to need the glasses for this."  A simple gesture with his right hand
and a blue status window appeared, showing, among other things, that
the file would have no problems with the 3-D interpreter he'd
have to pipe it through to use the lightweight glasses, found amongst
the other junk in his top right drawer.  After putting them on, he
typed the command modifier to invoke the 3-D shell.  He had to
do this one-eyed; these particular alternately-polarizing glasses had
a bug in them as a result of being folded into a not-quite-empty coffee cup.

In a flash he was back in the simulation, now in glorious 3-D (when 
the 3-D system was on, the 70 Hz flicker rate was not even perceptible, 
and the effects fabulous).  He noticed that he still had no control
over his position from the databall and hoped that this wasn't just a
one-way demo.  

As Jon circled down towards the structure, the music started.
Stereo.  Well balanced, coming from both sides of the CHUV.  Bach, he
thought.  He was just about to nail down the opus when the voice 
kicked in.

"The Brunswick-Bullcock building is a fine example of mid-80's
engineering situated on the outskirts of..."

Jon listened to the polite banter for about 3 minutes while he was
taken for a guided tour of the structure both inside and out.  It was
the typical modified warehouse-now-officeblock.  He found he could 
use his databall to speed up, slow down or rewind the tour as
necessary.  When it was finished, the demo left him 'standing' in the
front walk with a nice leaves-in-wind view of the structure.  There
was a slight perceptive mismatch which kicked out as soon as the demo
finished and dropped him into 'iso-edit' mode.  Standard practice.  
Made the building appear comic-bookish real.  Jon's suspicion meter now 
pegged the needle in the red zone.

Jon was now looking at the building in isometric 3-d.  To simplify
things he reduced the color resolution and installed false color
shading to all distinct surfaces, making them easier to manipulate.
This was done by selecting options from circular command windows,
which he called up from the easiest of the three databall 
thumb buttons, and selected using his virtual disembodied 'hand'. 
This technique was almost blindingly fast in comparison to the 
plodding of the old 'drop-down' menus.  His 'hand' was now merely 
amplify-echoing the minuscule motions of his own right arm, which 
spent most of its time relaxing comfortably on the desk/chair armrest.

Using the databall again, Jon re-scaled himself so he was about three
stories high.  He selected 'semi-transparent' walls and was now set
to edit.  The databall was designed so that all translational
movements of his virtual 'body' could be performed with simple twists
and pushes.  The armrest which housed the optical interrupter (which
disabled the FRA when the arm was removed for typing) also held 
his left arm vertically rigid, so that all databall motions required 
no muscle effort from his upper arm and chest.  An improvement over 
the original databalls, which caused undue fatigue after long use.

The project called for turning this place into a robot-cell
manufacturing plant, the pre-planned 'blocks' for which were in a
separate file.  Jon called them in from the circular menus and 
tried a test fit.  The screen immediately rebounded with multiple
fire-regulation and structural-integrity warnings in glowing neon.
Jon disabled these temporarily while trying various options, picking
up entire rows of robot-assembly cells and forcing them into position.
Where his 'hand' hit a wall, the FRA immediately limited his motion
with a spongy resilience.  If he persisted, however, the action was
carried through.  Typically he then removed the offending wall, with
the option of immediately replacing it should it give an unsatisfactory 
result.

Phonecall.  A window opened to the far right of his vision with stat 
details of the caller.  Jon gave it a glance and flicked the 
'receiver' databall switch.

"Hello, Simpson here."

"You still in that thing?  Don't you know it's break time?"

"Really?" Jon cursed.  He'd forgotten to enable his visual bell
appointment calender that morning.  "Out in a minute, Paul."

Virtual space.  They should call it virtual time as well.

One 'pause and release' command later, the CHUV retracted and released
Jon's glove in less than 3 seconds.  He dropped his glasses on the desk
but didn't bother removing his glove.  It was stylish and,
coincidentally enough, matched his sweater.  Besides, he liked to
flaunt it periodically to prove that he was doing 'work' in here and
not just playing arcade games.  Interesting to think that that was one
of the main reasons he had managed to keep his position: he had
discipline enough to leave the games until later.  Unlike the last
Pilot, who had written super-elaborate program shells to cover the 
fact that he was just 'playing around' in a major way.

Coffee over.  Snide remark to the snarky Receptionist about whether she 
preferred rats or spiders in her Kentucky Fried.  Back to the mines.

Scene: just outside of Jon's office.  A vicious "AH-HA!!" is heard at
11:21AM and 4 seconds precisely.

Thirteen seconds later, a phone rings in Purchasing Supervisor Dan
Olsen's office.

"Dan Olsen."

"Dan.  It's Jon.  Listen, I've been playing (excuse me, Working) with 
the Brunswick building most of the morning.  I've got three viable
solutions which are fairly optimal and don't seem to violate too many 
regs..."  

"Great!  Whip it up in a demonstration video and I'll pick it up for
this aft's meeting."

"...but!"

"'But'?"

"It's a fudge-factory, Dan.  They've sent us bum data."

Four seconds of silence, then, "Say what?"

"Pop over here and I'll show you."

"Is it complicated?"

"Yes, but don't worry, I'll draw you lots of pretty pictures."

Dan paused for a second, making sure of the sarcastic intent.
"And your mother.  Ok, I'll be right there."

Dan entered Jon's office to see him still sitting with that 'thing' on
his head ("Around my head," as Jon continually reminded him.  Part
of Jon's standard lecture on the topic.  The Cyberdesk is a 
vast improvement over the cumbersome and oft-ridiculous Cyberhelmets 
that flared briefly in the field's infancy, he would explain.  Blind 
Pilots continually bumping into walls and hyper-reacting to real-life
stimuli.  People being paranoid about using them in offices because you
could never tell when someone was going to sneak up on you, even
inadvertently.  It got to the point that even the telephone going off
would result in periodic screams.  And how could you convince the 
general populace to accept something that basically made them look and
act like a village idiot, responding to things that just weren't
there?  How could someone like that share an office with anybody
without feeling self-conscious?  The Desk gives you freedom of movement
without violating the dignity of the Pilot while still maintaining
full forward and backward compatibility with...  Blah, blah, blah.  All
Dan knew was that to get complex analyses done fast, it was the best 
$70K investment they'd made this decade.  That is, so long as you got a
Pilot who didn't go game-happy.)

Dan was relieved to see that the CHUV was in transparent mode, so he 
would be talking to a face and not just a huge liquorice Lifesaver.
The CHUV was awash with pretty pictures, just as promised, hanging in
the air between them.  

"Grab a chair." said Jon.  Dan did, watching facinated as the images 
changed kolidascope fashion around Jons head.  Jon said that he was
certain he didn't work faster than normal, but to the outside observer
the speed of the flashing images was almost superhuman.  When asked, 
Jon usually responded "Good human interfacing." 

The swirl stopped.  An isographic framework image of the Brunswick 
building (with an imposed robot shop floor) appeared between Jon 
and Dan just to one side of the CHUV so both could see it easily.  
It rotated slowly as if it was on a horizontal turntable.  Jon 
used the desk mouse and moved a cursor to indicate the image center.

"This is the best design I found.  Everything checks out with room to
spare for future expansion.  Seventeen internal walls would have to 
be removed" the image highlighted wall positions. "But according to 
their specs, none of these are load-bearing walls.  The CAD check
daemons confirmed this and said it was all right."

"Yeah, so what's the problem?"

"Well, the whole thing screamed 'set-up' to me.  So after I did my
mods, which the ARCH-DAEMONS told me were perfectly valid, I did a
complete structural continuity check.  Took a bit of time, but it was
worth it."

"Yeah, and?"

"Insufficient Mass."

Dan stared blankly.

"Huh?"

Jon continued. "The difference between the given values and the sum 
total of the weight of all walls and floors, taking into consideration 
the structural techniques of the incept-era and the structural distortion 
of the building over time (plus, of course, fatigue on joint struts 
and other joining members) basically led to only one conclusion."

Smartass, thought Dan. "Which is?"

Jon made an invisible gesture and the image between them folded upon
itself with a satisfying, simulated crunch.

"They were clever" said Jon.  "They made only subtle changes to the
file so that any smart CAD package would be fooled into giving this
place the go-ahead to return it to a warehouse style.  I got
suspicious, however, when I could 'feel' that many walls had
subtly different thicknesses, even though they were used in very
self-similar circumstances.  I first thought it was limited-resolution 
problems with the Desk, but by scaling myself very small, I was able 
to see and confirm the differences.  The bottom line is that this 
place is an office block now because it's the only way of keeping the roof 
from falling in."

"No way to re-inforce it?"

"I'm no expert, but I would say it'd be cheaper to build an entire new
building.  And of course we can't install the robots without removing
the walls.  No inspector would pass it without a substantial bribe.
Besides, why would they go to such efforts to hide this if it
didn't hide deeper problems?  I'd do a bit of back-pedalling
if I were you."

Dan stared out the window for a minute.  "I'll get a site inspector to
make a 'surprise' visit to the place right after lunch.  I've got a
friend out there who owes me a favor.  In the meantime, put everything
you've just told me into a very, very legible video-tape to show to
the old-boys this aft, just in case."

Jon looked at his system clock.  11:37AM.  "I'll have it for you by
one, but don't forget, I'm saving up all these lunches you owe me."

"Put it on my tab." said Dan without humor, leaving quickly.  He had to
get to a phone.  He needed action before lunchtime.

---

By 12:40PM the seven-minute video was finished, complete with
narrative.  It could have been sooner, but he hated talking into a tape
without a formatted script.  He always stuttered like a fool when he
tried to ad-lib.  Mike-phobia.  The one bonus was that now the office 
was practically empty.  Just enough time to sneak in a game.

The directory flashed before him.  Something light, perhaps.  Pinball?
No.  Being smashed around by all those flippers in real time gave him
a headache.  He chose "asteroid_hunt".  Before selecting, he remembered 
to set a 'kill switch' for 12:55PM and to select 'low_volume' from the CHUV
options menu.  No sense in stirring suspicions in the natives.

Close curtains, dim lights, mount glove.  Let's party.

Lost among an asteroid belt.  You versus your opponent.  Object: nail
his ass before he nails yours.  Constantly around you, the 'chink, 
crunch' of colliding asteroids and maybe, if he's careless, the
tell-tale sounds of thrusters or a cocked plasma rifle.  An old game,
but a classic nonetheless.  Fifteen minutes pass like hours...

---

During the afternoon, Jon worked on a multi-page report, viewing four
full-sized pages of text simultaneously, shifting and moving paragraphs
with lightning speed.  As the CHUV visual screen was composed of four
carefully aligned projection color video displays, this was the
equivalent of 'dumb terminal' mode for the system.

Jon checked out another demo-VR package, this one allowing the
Pilot to manipulate individual molecules inside human cells
and see the results.  Educational, but bloody exhausting work
for little reward, Jon concluded.  Too many damn atoms inside a human
nucleus.  Somebody ought to do something about it.

Jon checked several disk-magazines.  The latest issue of 'Invisible
Touch' (the Cyberspace journal) had a test sample of the new 5000-fold
Barnsley video compression technique now under chip development.  When 
available, it would be possible to send live video via phone.  Ethernets
would no longer be restricted to the fifty-video-channel limit, and
everybody would be able to talk via visi-phones or real-time CHUV windows.
Great, thought Jon, I'll stick to my phone stats data any day.  There
are just some times when you don't want to be seen by, or see, who you're
talking to.  The breakthrough could mean a new market for read/write
CD players, however.

At the 2:30 break, Dan confided to him that the building was a white
elephant of incredible proportions, and that it was a damn good thing
that we caught it in time.

"We?" echoed Jon.

"Ok, ok.  But I don't have to tell you you're a genius.  Everybody
knows all your smarts are under your belt... in your Desk."

"Ha, bloody ha."

"Video went over great, by the way.  Nice collapsing structure with
flames and smoke.  Good sound too.  Thanks for leaving out the 
screaming bodies this time."

"That was a hospital.  This is a warehouse.  It just wouldn't have been
the same."

"Ha, bloody ha."

That afternoon he also reviewed a report while sitting under a tree in
the forest from this morning's demo, Bach this time replaced with
Motley Crue ballads.  He checked the stress durability of a
new plastic buttress support by whacking it with a heavy hammer at
different temperatures until it shattered.  Each strike reflected
multi-colored stress fractures and waves which seemed to concentrate
in one h-joint.  He videoed the entire session on 8mm tape and mailed
it off to the materials boys downstairs.  That should generate lots of
discussion, he thought.  He checked out the mass of VR icons that came
down the net, saving the ones that he had a passing interest in
and dumping the rest ("God, I swear that if I ever see another 
three-dimensional extension of the Julia set...!").  He took some of
the icons and attempted to texture-map them onto more interesting
structures.  As was usually the case, he wound up with an object that
looked just the way a car factory sounds.  Maybe someday we'll get
some artists to a Desk, he thought.  With a wince of disgust, he
'grabbed' the object and flung it over his left shoulder.  He didn't
have to look to know that it would slowly spiral down the maw of
Cygnus X-1: the black hole that he used as his electronic trash can.

An old idea, but a classic nonetheless.




Point being: I really think that gloves and a helmet alone just won't
cut it in the VR world.  Feedback and comfort are necessary or
it will not sell.

Discussion/comments welcome.  Please post. 

The above work is Copyrighted by me, just in case.


Is all.



-- 
Mark Tilden: _-_-_-__--__--_      /(glitch!)  M.F.C.F Hardware Design Lab.
-_-___       |              \  /\/            U of Waterloo. Ont. Can, N2L-3G1
     |__-_-_-|               \/               (519) - 885 - 1211 ext.2454,
"MY OPINIONS, YOU HEAR!? MINE! MINE! MINE! MINE! MINE! AH HAHAHAHAHAHAHAHAHA!!"

afzal@cui.unige.ch (Afzal Ballim) (06/18/91)

Just for information, there is a new Journal starting up this 
summer called "User Modelling and Adaptive User Interfaces."

It is published by Kluwer.


-Afzal

braudes@seas.gwu.edu (Bob Braudes) (06/18/91)

Another major issue with adaptive user interfaces, when working in the AI
domain, is determining when and how to change the UI.  Assume that the
interface controller has decided, through whatever means, that the user
is now "ready" for a new interface.  How is the change made?  You
probably don't want to surprise the user by doing it automatically;
imagine using your favorite program and suddenly finding that all of your
commands no longer work (obviously this is the extreme end of change, but
it shows the point).  Do you prompt the user in the middle of a session,
or wait until the next time the program is invoked?  And how is this
implemented on multi-user systems?  It appears to require some sort of logon.

I strongly believe in adaptive interfaces; however, there are a lot of 
basic issues which need to be addressed before they can become practical.

chalek@rosings.crd.ge.com (catherine chalek) (06/18/91)

In the discussions on adaptive user interfaces, there seems to be some
confusion between customizable interfaces and adaptive interfaces.  
It seems to me that adaptive interfaces are interfaces that change
themselves based upon whatever rules the developers coded.  The only
part that the user plays is perhaps to initiate the adaptation and
accept/reject it.  In a customizable interface, the user
is responsible for initiating the changes and specifying the changes
(selecting from among the possible changes).  What she/he can change
is of course limited to what the developers allow users to change.
In the first case, the user wouldn't have to have much proficiency with
the system.  In the second case, she/he probably would.
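The distinction can be pictured with a small sketch (all names and rules
here are invented for illustration): in the adaptive case the system
proposes a change from a developer-coded rule and the user's only role is
to accept or reject it; in the customizable case the user initiates the
change and picks from what the developers allow.

```python
# Hypothetical sketch -- names, settings and rules are invented.

ALLOWED_CHANGES = {"menu_style": ["full", "short"], "confirm": [True, False]}

class Interface:
    def __init__(self):
        self.settings = {"menu_style": "full", "confirm": True}

    # Customizable: the user initiates and specifies the change,
    # limited to what the developers allow.
    def customize(self, key, value):
        if key in ALLOWED_CHANGES and value in ALLOWED_CHANGES[key]:
            self.settings[key] = value
            return True
        return False

    # Adaptive: the system proposes a change from a developer-coded rule;
    # the user's only role is to accept or reject the proposal.
    def adapt(self, usage_stats, user_accepts):
        if usage_stats.get("menu_picks", 0) > 100:
            key, value = "menu_style", "short"
            if user_accepts((key, value)):
                self.settings[key] = value
```

Note how in `customize` the user needs to know what to ask for, while in
`adapt` no proficiency is required beyond answering yes or no.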

catherine

ncliffe@hfserver.hfnet.bt.co.uk (Nigel Cliffe) (06/19/91)

From article <3320@sparko.gwu.edu>, by braudes@seas.gwu.edu (Bob Braudes):
> Another major issues with adaptive user interfaces, when using the AI
> domain, is determining when and how to change the UI.  Assume that the
> interface controller had decided, through whatever means, that the user
> is now "ready" for a new interface.  How is the change made?  You
> probably don't want to surprise the user by doing it automatically;
> imagine using your favorit program and suddenly all of your commands
> no longer work (obviously this is at the extreme end of change, but is
> used to show the point.)  Do you prompt the user in the middle of a session,
> or wait until the next time the program is invoked?  How is this implemented on
> multi-user systems; it appears to require some sort of logon.
> 
> I strongly believe in adaptive interfaces; however, there are a lot of 
> basic issues which need to be addressed before they can become practical.

It might be useful to get hold of the UK Alvey reports on the Adaptive
Intelligent Dialogues (AID) project, which addressed the issues of
adaptation in a user interface.  The work was done about 4 or 5 years ago.
Unfortunately I don't have the papers, or the references.

(For those who don't know, the Alvey programme was a UK initiative to 
promote research in IT, by collaboration between academic and commercial
organisations).

- Nigel.
--
                     Nigel Cliffe, Human Factors, 
         BT Laboratories, Martlesham Heath, Ipswich, IP5 7RE, UK
         Email: ncliffe@hfnet.bt.co.uk    Tel: +44 (0)473 645275

rloon@praxis.cs.ruu.nl (Ronald van Loon) (06/19/91)

In <20689@crdgw1.crd.ge.com> chalek@rosings.crd.ge.com (catherine chalek) writes:


||In the discussions on adaptive user interfaces, there seems to be some
||confusion between customizable interfaces and adaptive interfaces.  
||It seems to me that adaptive interfaces are interfaces that change
||themselves based upon whatever rules the developers coded.  The only
||part that the user plays is to perhaps initiate the adaption and 
||accept/reject the adaption.  In a customizable interface, the user
||is responsible for initiating the changes and specifying the changes
||(selecting from among the possible changes).  What she/he can change
||is of course limited to what the developers allow users to change.
||In the first case, the user wouldn't have to have much proficiency with
||the system.  In the second case, she/he probably would.
||
||catherine

It seems to me that an adaptive interface is just a special case of
a customizable interface. Please correct me if I am wrong.

As an aside, I am working on an (object-oriented) UIMS in which
the interface configures itself based on a) settings used in other
-similar- programs, b) user defaults ("I want this kind of input
represented as this type of input-object") and c) developer defaults.
The user can indicate how the program input should be arranged on the
screen; this includes reordering and sizing, but also type-changing,
like turning a slider into a dial or an "edit-control", and hiding
(the object's value is set once and reused in subsequent runs of the
program).
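One plausible way to sketch that three-level cascade (the names are
hypothetical, and the precedence order is an assumption: user defaults
win over similar-program settings, which win over developer defaults):

```python
# Sketch of a three-level configuration cascade -- all names invented.
# Later sources override earlier ones via a shallow dictionary merge.

DEVELOPER_DEFAULTS = {"volume": {"widget": "slider", "value": 5}}

def configure(developer, similar_programs, user_defaults):
    config = dict(developer)            # c) developer defaults as the base
    for prog in similar_programs:       # a) settings used in similar programs
        config.update(prog)
    config.update(user_defaults)        # b) user defaults win
    return config

# The user has asked for volume as a dial rather than the default slider:
cfg = configure(DEVELOPER_DEFAULTS,
                [{"volume": {"widget": "slider", "value": 7}}],
                {"volume": {"widget": "dial", "value": 7}})
```

This is only the precedence logic; the type-changing described above would
then pick the widget class from the resolved `"widget"` entry.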

As you probably have noticed, I am strongly in favour of a user-
customizable interface.  After all, the developer is not the one
using the program; the user is.  He should be the one in control,
working the program as he wants it to work, not vice versa.

But that's just MHO.
-- 
Ronald van Loon (rloon@praxis.cs.ruu.nl)

"Howdy Folks, I'm Oedipus Tex, you may have heard of my brother Rex..."
- P.D.Q. Bach (1807-1742?) "Oedipus Tex"

haltraet@gondle.idt.unit.no (Hallvard Traetteberg) (06/19/91)

In article <20689@crdgw1.crd.ge.com> chalek@rosings.crd.ge.com (catherine chalek) writes:
>   In the discussions on adaptive user interfaces, there seems to be some
>   confusion between customizable interfaces and adaptive interfaces.
>   It seems to me that adaptive interfaces are interfaces that change
>   themselves based upon whatever rules the developers coded.  The only
>   part that the user plays is to perhaps initiate the adaption and 
>   accept/reject the adaption.  In a customizable interface, the user
>   is responsible for initiating the changes and specifying the changes
>   (selecting from among the possible changes).  What she/he can change
>   is of course limited to what the developers allow users to change.
>   In the first case, the user wouldn't have to have much proficiency with
>   the system.  In the second case, she/he probably would.

>   catherine

The distinction between user-modifiable and self-modifying programs, as
suggested above, is important.  Most people use "adaptive" for both.  I'm used
to saying "adaptable" for user-modifiable (customizable) and "adaptive" for
self-modifying.  (Actually, since I'm Norwegian, I'm not used to _saying_ it
but rather making the distinction :-).
--

                                                           - hal

dsr@stl.stc.co.uk (D.S.Riches) (06/19/91)

In the referenced article kathy@cs.sfu.ca (Kathy Peters) writes:
>I am interested in the concept of adaptive user interfaces - somehow
>tailoring the user interface for individual skills and preferences.
>(Rather than just having categories like 'naive', 'expert', etc.).
>
>Adaption could be as simple as setting up the preferred display colours,
>or having dynamic menus (allowing the user to perform only certain things),
>OR it could be giving the user more help if he or she is making more
>mistakes (getting tired after a long day!) ....
>
>Any thoughts on what things could be tailored for individuals (like
>display characteristics or help text)? What individual characteristics
>(name, age, authorization level, allowed tasks, vision and hearing
>difficulties, ....) could be used to tailor the interface?
>
>
>Kathy Peters
>Simon Fraser University, Burnaby, Canada
>kathy@cs.sfu.ca

In England there was an Alvey project which ran for 4 years which
researched Adaptive Intelligent Dialogues.  A book has since been
published about the findings of the research :-

	"Adaptive User Interfaces"
	Eds. D. Browne, P. Totterdell, M. Norman
	Computers and People Series,
	Academic Press

	ISBN 0-12-137755-5

   Dave Riches
   PSS:    David.S.Riches@stl.stc.co.uk (or dsr@stl.stc.co.uk)
   Smail:  BNR Europe Ltd (formerly STL), London Road,
	   Harlow, Essex. CM17 9NA.  England
   Phone:  +44 (0)279-429531 x2496
   Fax:	   +44 (0)279-454187

peterj@swanee.ee.uwa.oz.au (Peter E. Jones) (06/20/91)

ncliffe@hfserver.hfnet.bt.co.uk (Nigel Cliffe) writes:

>It might be useful to get hold of the UK Alvey reports on the Adaptive
>Intelligent Dialogues (AID) project. This project addressed the issues of
>adaption in a user interface. The work was done about 4 or 5 years ago.
>Unfortunately I don't have the papers, or the references.

>(For those who don't know, the Alvey programme was a UK initiative to 
>promote research in IT, by collaboration between academic and commercial
>organisations).

You can contact Mr. Paul Cooper, STC Technology, London Road, Harlow,
Essex, CM17 9NA, England. +44 279 29531 email: pac@stl.stc.co.uk.
He was the main architect for the project which completed in Sept 1988
after 4 years collaborative work involving STL, Data Logic, BTRL, the
Universities of Heriot Watt, Strathclyde, Hull and Essex.  He has a list
of papers and reports that are a) for the public, b) for Alvey club members.
There is also a video of the several adaptive exemplars in action.  The final
report is probably also available from the DTI, Joint Framework for Information
Technology, Kingsgate House, 66-74 Victoria Street, London SW1E 6SW, England.
+44 71-215-8308.

--
Peter Jones                             .-_!\   Phone: +61 9 380 3100
Electrical & Electronic Eng            /     \  Fax:   +61 9 380 1065
University of Western Australia --->>  *_.-._/
NEDLANDS, Western Australia 6009            o   E-mail: peterj@swanee.uwa.oz.au

toone@looney.Corp.Sun.COM (Nolan Toone) (06/21/91)

A lot has been said concerning adaptive vs. adaptable interfaces.  I think
my approach would be to use the adaptive approach to set the default action,
so that the user can choose it with little or no effort, but to ALLOW the
user to override it at will; the user should also be allowed to set the
defaults.
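That layering might be sketched like this (hypothetical names; the
adaptive default here is simply "most frequent past action", which is an
invented stand-in for whatever the adaptive layer computes):

```python
# Hypothetical sketch of the three layers: explicit override beats
# a user-set default, which beats the adaptively computed default.

def choose_action(history, user_override=None, user_default=None):
    if user_override is not None:       # explicit choice always wins
        return user_override
    if user_default is not None:        # user-set default comes next
        return user_default
    if history:                         # adaptive default: most frequent action
        return max(set(history), key=history.count)
    return "help"                       # invented fallback for a fresh user
```

For example, a user who mostly saves gets "save" offered for free, but can
still pin "open" as a default or override with anything at the moment of use.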

That's my humble opinion.


		Regards,

     /\
    \\ \	Nolan C. Toone, ISV Engineering
   \ \\ /	Sun Microsystems 
  / \/ / / 	MailStop PAL1-316
 / /   \//\ 	2550 Garcia Avenue
 \//\   / / 	Mountain View, California  94043
  / / /\ /  	
   / \\ \	Phone:  415-336-0391
    \ \\ 	EMail:	toone@Corp.Sun.Com
     \/

bernie@metapro.DIALix.oz.au (Bernd Felsche) (06/29/91)

In <4488@jethro.Corp.Sun.COM> toone@looney.Corp.Sun.COM (Nolan Toone) writes:

>Alot has been said concerning adaptive vs. adaptable interfaces. I think
>my approch would be to use the adaptive approch to set the default action
>and the user can choose it with little or no effort but ALLOW the user to override 
>it at will, but he user should also be allowed to set the defaults as well.

>That my humble opinion.

The trick (IMHO) with adaptive interfaces is to figure out when and
how to adapt.

Taking a leaf out of nature's book, it seems like the time to
change the interface's behaviour is at startup, not while the
application is running.  Having said that, I have to back off a little
and consider what happens with long-running applications, and of
course the nature of the application.

It is probably most natural that applications adapt at a very slow
rate, and never by more than one iteration (however that may be
defined) in each function.

Some of the things which come to mind in adaptive applications are the
presentation of error and status messages.  For example, initially a
user may be required to take some action (press a key or click on a
gadget) before the application continues.  Later, for the less-important
conditions, that may be downgraded to a time-delay of gradually
reducing magnitude.

When the user exits, interface adaptation parameters should be saved
for later re-use. On the next startup, it is intuitively appropriate
to downgrade the adaptation a little, and allow the user to come up to
speed again.
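A rough sketch of that scheme (all names, thresholds and numbers are
invented): a per-function "fluency" level rises by at most one step per
use, controls how status messages are acknowledged, is saved on exit,
and is knocked back one step on the next startup so the user can come up
to speed again.

```python
# Hypothetical sketch -- levels, thresholds and delays are invented.
import json

def use_function(levels, name):
    # Adapt slowly: never more than one step per use, capped at 10.
    levels[name] = min(levels.get(name, 0) + 1, 10)

def confirmation_delay(level):
    # Low fluency: require an explicit acknowledgement (None = wait for key).
    # Higher fluency: auto-dismiss after a gradually shrinking delay.
    return None if level < 3 else max(0.5, 5.0 - 0.5 * level)

def save(levels, path):
    # Save adaptation parameters on exit for later re-use.
    with open(path, "w") as f:
        json.dump(levels, f)

def load(path):
    # On startup, downgrade every level by one step.
    try:
        with open(path) as f:
            levels = json.load(f)
    except FileNotFoundError:
        return {}
    return {name: max(0, lvl - 1) for name, lvl in levels.items()}
```

The point of the decay in `load` is exactly the one made above: the
interface eases off between sessions rather than assuming yesterday's
fluency.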

Unlike machine tools, where adaptive control is relatively easy
because the controller knows the desired result, the adaptive user
interface has absolutely no idea of the target. In fact, the interface
is not the controller; the user is.  The interface is a tool used to
perform a function, which is probably another clue as to how adaptive
interfaces could adapt.

If the tool adapts to the work at hand, then the user will adapt to
the tool, seeking the easiest path to a solution, as long as it's not
obfuscated. The tool must therefore adapt to the type of work and how
the user perceives the interface. Using this, the interface can be
modified to present the most likely functions in a way which the user
can easily perceive and use them.

In terms of control systems, the situation is most interesting, with
two adaptive systems (hopefully) in synergy. I'd rather not sit down
and work out the mathematics, thank you. In situations like this, I
prefer to stand back and use some intuition to come up with a usable
"machine".

It could also be likened to two people building a brick wall.  The
first knows what he wants to do, but only by starting to excavate for
the foundations does the other know to dig as well.  Then when the
brick-laying starts, how does the second know to cart the bricks or
even lay them, or when and how much cement to mix?  If the second is
over-enthusiastic in some respect, he could mix too much cement or
build the wall too high.

Incessant nagging and "stupid" questions may lead to a very unfriendly
interface.
-- 
Bernd Felsche,                 _--_|\   #include <std/disclaimer.h>
Metapro Systems,              / sold \  Fax:   +61 9 472 3337
328 Albany Highway,           \_.--._/  Phone: +61 9 362 9355
Victoria Park,  Western Australia   v   Email: bernie@metapro.DIALix.oz.au