[comp.sys.next] One Step...

cory@gloom.UUCP (Cory Kempf) (12/28/88)

The following is a conceptualization of how I imagine that the next generation
(not the NeXT, but the REAL next generation) of user-interfaces will work...
nb: I had never heard of Rooms when I wrote my first article... As soon
as I get a chance, I am going to go look it up...

			-----------------

The Engineer enters his office in the morning and puts on his computer.
Almost instantly, the plain off-white office is transformed.  Now there is
a drawing table in front of him, a telephone, and all of the usual desk 
accoutrements.  He picks up his pen and begins to add those last finishing 
touches to the design for the Shuttle project.  Oops!  A mistake.  He turns
the pen upside down and uses the eraser.  Perfect.  The phone rings... it's
the boss... he wants a meeting in the small conference room.  (Good timing)

The Engineer presses the image of a map on the desk, and the navigator sub-
system comes up.  A map fades into existence on the top of the desk, but the
engineer decides not to teleport today...  He holds his hands in a position
as if he were holding a joystick... and a joystick fades into existence in 
his hands.  The desk fades from view as the chair moves forward out into 
the hall.  After a short journey, he finds himself at the entrance to the
small conference room.  He opens the door and is confronted by a blank black
wall with the word "Password:" in large gold letters at eye level.  The 
boss is such a stickler for security!  He forms his hands into typing position,
and a keyboard appears... He types in a code word, the computer counters, he 
types in another counter code, and, as the codes match, he is let in.  His 
boss has set up the place in his typical brass and glass style.  No
problem... a hand gesture, and his desktop appears in front of him...
the touch of a button, and the Interior Decorator module is
activated... a few moments later, the room (as far as the engineer can
see) is decorated in woodtones.  Once everyone arrives, the boss gives
the engineer the floor to present his proposal for the project... he
activates the modeling system, and once again the tabletop is
changed... instead of being made of wood (or glass, depending), it
now appears to be a runway, with a small model of the new Shuttle... 

Everyone's view now changes to be looking over the shoulder of the
pilot.  He slowly taxis it over to the launch rails, and calls the
tower for permission to launch... 
...Later, he pulls the top off of the shuttle to show the passenger and
cargo space.  

...The meeting is breaking up, they like the design overall... but 
they want a few minor adjustments... the Engineer calls up the
navigator sub-system, and (this time, being in more of a hurry) 
points to the button marked "My Office" at the bottom.  A fade to
black, then his office... 

Back in his office, he pulls open his file-cabinet, and walks down the
file-tree looking to see if he has the specs on that new engine system 
for the shuttle.  Damn... have to go to the library... Navigator...
touch the icon of the book with the letter 'E' on the cover... The
Engineering Library.  A small control panel appears floating off to
the side.  He touches a few buttons, and in front of him is a shelf
containing spec. sheets on new engines.  He selects a few and then
returns to his office.  After scanning the data, he decides it can't
be done, and begins to write a memo... He selects the image of a memo
from the desktop, and forms his hands as if he were about to start
typing... the typewriter appears with his finger over the home keys.
With the cursor over the "To:" field, he takes one hand off of the
keyboard and pulls down a menu (out of thin air) and selects the list
of people on the shuttle project.  He then types in the rest of the
memo.  After pulling the paper out, reading it one last time and
signing it, he sticks it in the out basket.  

Oh yeah... better let the boss know.  He grabs the phone, pulls a
hierarchical menu out of nowhere, and selects his boss.  A window 
appears with his boss in it...

+C
-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

cory@gloom.UUCP (Cory Kempf) (12/28/88)

The other article that I wrote gave a nice description of how I
envision the next generation of user interfaces working.  It was a nice
fluffy type article, but it was a bit long.  This one should be much
shorter.  

Like I said in the last one, I have not yet seen the article on Rooms
(I am going to look it up 'Real Soon Now!').  But first, I wanted to
throw another $2.00 into the pot (inflation, you know).  

There have been a lot of interesting points raised
since my first post on the idea of a new interface design... I wanted
to answer them from the perspective of the design that I proposed
(mostly in the hopes that someone will either implement it or give me
the bucks to do it... fat chance).  

On the subject of teleportation, of course there would be macros to
get you from point 'A' to point 'B' without going the long way... to
the point that you would never have to know just where you were
passing through to get to common points.  

On security, user verification is still necessary, although it can 
be implemented in a much better fashion (I hope) than the old password 
scheme.  One idea that I like is to have the system give you a phrase, 
and you speak it (or possibly complete it) out loud.  The computer would 
then identify you based on that (a bit harder to fake, I hope!).  
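A minimal sketch of how such a phrase challenge might be wired up, with the speaker identification itself abstracted away. All names and phrases below are invented; `voiceprint()` is a stand-in that hashes a speaker identity together with the phrase, where a real system would compare acoustic features.

```python
import hashlib

# Stand-in for acoustic speaker identification: hash speaker + phrase,
# so the expected response changes with every new phrase (unlike a
# fixed password, a recording of one response is useless against a
# different challenge).

def voiceprint(speaker, phrase):
    """Reduce a spoken phrase to a signature for a given speaker."""
    return hashlib.sha256(f"{speaker}:{phrase}".encode()).hexdigest()

def verify(claimed_user, phrase, response_signature):
    """Recompute what the enrolled speaker should 'sound like' saying
    this phrase and compare."""
    return response_signature == voiceprint(claimed_user, phrase)

# The system issues a fresh phrase; the user speaks it back.
sig = voiceprint("engineer", "a stitch in time saves nine")
assert verify("engineer", "a stitch in time saves nine", sig)
assert not verify("engineer", "some other phrase", sig)
```

The point of the per-phrase challenge is replay resistance: recording yesterday's response gains an attacker nothing against today's phrase.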

On control of resources, what would you normally do?  You would call
up the Master Control Panel and kill that person's access/usage/etc.
The result is that the owner of the resources has more privileges
than other users do.  

Filesystems (networked).  Some people have commented that they would
like to see the filesystems made transparent... That's OK for a LAN, 
but when you start to think in terms of WANs (like Banyan, for example), 
I think that the approach will soon break down... either you need to
have a map in your head of a large portion of the filetree (like a lot
of us have with Unix already) or something new needs to come about.  I
have seen some hardware designed to work on minis that has broken the
terabyte barrier... what's going to happen in 20 years?  How many
levels deep is the tree going to get?

On the need for the metaphor.  I see the idea of a metaphor'd GUI as
having the same relationship to csh as (C, Pascal, Fortran, etc.) do to
assembler... yes, you can do it in assembler, but why?  Compare
programming in HyperCard with writing a program to do the same thing.
Yes, the program will run faster than the stack, but to write the
program, you need to know a lot of other things.  Also, it will take
longer to write.  To create the stack will take MUCH less time, and
the result will be more flexible.  People who don't want to spend the time
writing a C program are creating stacks to do the same things.

+C

-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

bzs@Encore.COM (Barry Shein) (12/28/88)

Fun note but why do a password challenge when a retinal scan would
have been more secure? (please, no disgusting remarks about how to fool
a retinal scanner.)

	-Barry Shein, ||Encore||

irawan@apple.cis.ohio-state.edu (hindra irawan) (12/28/88)

Hm.......very interesting. Maybe a science fiction movie of this writing
would be more interesting 8-)

-hindra irawan-

cory@gloom.UUCP (Cory Kempf) (12/28/88)

In article <4498@xenna.Encore.COM> bzs@Encore.COM (Barry Shein) writes:
>
>Fun note but why do a password challenge when a retinal scan would
>have been more secure? (please, no disgusting remarks about how to fool
>a retinal scanner.)
>
Two reasons actually... first, I wasn't too sure about retinal
scans... the only place I have seen any refs. to them has been in SF
(haven't looked much though), so I didn't (and still don't) know how
practical they are for security.  

The second was that the engineer was actually doing an rlogin from his
workstation to a local mini.  Since the hardware to do the scanning
would necessarily have to be attached to the workstation, it would be
trivial to subvert (i.e., have it record the retinal image from the
authorized user and play it back to the verification program.  Login
simulators, anyone?).  Thus, while it could be safely used in the
example I gave, it probably wouldn't be used for user verification
over a network from an untrusted host (i.e., a workstation).

It does bring up a point... user verification over a network from
untrusted hosts.  But that is a thread that is better suited for
comp.security.  

+C
-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

oster@dewey.soe.berkeley.edu (David Phillip Oster) (12/29/88)

[...] you can use this hardware to simulate a 2-D display as big as you
want: wherever you look, there is more display.  So, you can still run X,
or vi.  Fun on a double-newspaper-page-size display.

mr@homxb.ATT.COM (mark) (12/29/88)

In article <30100@tut.cis.ohio-state.edu>, irawan@apple.cis.ohio-state.edu (hindra irawan) writes:
> Hm.......very interesting. Maybe a science fiction movie of this writing
> will be more interesting 8-)

Remember. 60 years ago nuclear weapons were science fiction.

> -hindra irawan-

mark
homxb!mr

tim@hoptoad.uucp (Tim Maroney) (12/29/88)

If you have teleportation, what do you need a space shuttle for?  If
you can't teleport all the way to orbit, isn't a spaceplane more likely
than a shuttle for orbital transport?
-- 
Tim Maroney, Consultant, Eclectic Software, sun!hoptoad!tim
"Religion flourishes in greater purity without than with the aid
 of government." -- James Madison

pds@quintus.uucp (Peter Schachte) (12/29/88)

In article <263@gloom.UUCP> cory@gloom.UUCP (Cory Kempf) writes:
>The following is a conceptualization of how I imagine that the next generation
>(not the NeXT, but the REAL next generation) of user-interfaces will work...

[ long scenario of how a really nice user interface might work ]

What, no voice input?  Seriously, most people can talk a lot faster
than they can type.  I'd substitute a microphone for the (pseudo)
keyboard.  Good speech recognition hardware can't be more than 5 or 10
years away, can it?
-Peter Schachte
pds@quintus.uucp
..!sun!quintus!pds

hyc@math.lsa.umich.edu (Howard Chu) (12/29/88)

In article <908@quintus.UUCP> pds@quintus.UUCP (Peter Schachte) writes:
>In article <263@gloom.UUCP> cory@gloom.UUCP (Cory Kempf) writes:
>[ long scenario of how a really nice user interface might work ]
>
>What, no voice input?  Seriously, most people can talk a lot faster
>than they can type.  I'd substitute a microphone for the (pseudo)
>keyboard.  Good speech recognition hardware can't be more than 5 or 10
>years away, can it?

Hmm.... I don't know about you, but more often than not, I'd rather
have a note pad and a pencil than anything else. It's usually pretty
difficult to get simple arithmetic done quickly on a computer keyboard,
particularly without a desk calculator type program always accessible.
You might be able to formulate an equation well enough to enunciate it
clearly, but more likely it's going to present lots of ambiguities that
a speech recognition system won't know how to handle. And, it's always
nice to be able to see things written out. (It's also kinda fun to
doodle when you're stuck somewhere, or just plain bored...)

Even when working with primarily plain text, I often find it preferable
to lay out the groundwork with paper and pencil, before committing it to
the rigidity of keyboarded text. Who knows... Maybe it's because I like
to jot stuff near the center of a page, and my text editors always force
me to start typing at line 1 column 1... I dunno. I think moving a pencil
across paper is faster than hitting an auto-repeating cursor key, or
dragging a silly mouse across a surface and clicking a button...

How about a nice, large, touch-sensitive pad hooked up to a handwriting
recognition system? Seems the perfect thing for a quick and easy input
device. (Easy to use, not easy to implement...) Although, on second
thought, a speech based system would also be very nice for boilerplate
stuff, like what is normally done by command line processors today.
("Open the pod bay doors, HAL. Copy .login to /usr/lib, HAL.") Gee,
maybe it wouldn't work so well after all. ("Egrep ay-zee-star-left-bracket-
paren-right-bracket star-dot-see, HAL.")
--
  /
 /_ , ,_.                      Howard Chu
/ /(_/(__                University of Michigan
    /           Computing Center          College of LS&A
   '              Unix Project          Information Systems

cory@gloom.UUCP (Cory Kempf) (12/29/88)

In article <6122@hoptoad.uucp> tim@hoptoad.UUCP (Tim Maroney) writes:
>If you have teleportation, what do you need a space shuttle for?

You entirely missed the point of the article... the Engineer *never
left his office*.  He was utilizing a network (in this case a wide
area network) to participate in a meeting of the people involved in a
project to design a new piece of hardware (the shuttle).  The people
involved were not necessarily in the same building or country or
continent, or even on the same planet!
>								  If
>you can't teleport all the way to orbit, isn't a spaceplane more likely
>than a shuttle for orbital transport?

You missed the part about 'rails' I suppose?

+C
-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

bet@dukeac.UUCP (Bennett Todd) (12/30/88)

In article <268@gloom.UUCP> cory@gloom.UUCP (Cory Kempf) writes:
>[...]
>It does bring up a point... user verification over a network from
>untrusted hosts.  But that is a thread that is better suited for
>comp.security.  

Every time I get concerned about user verification over an insecure network
from untrusted clients, I go back and reread kerberos/doc/dialogue; then I feel
better.  I think this one is a solved problem, at least in principle; applying
it is something that folks will or won't get around to, depending on their
perception of their needs.
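A toy flavor of the property the Kerberos dialogue illustrates: the secret never crosses the wire. This is a bare shared-key challenge-response sketch, not the actual Kerberos ticket exchange, and all key material below is invented.

```python
import hashlib
import hmac
import os

def make_challenge():
    """Server picks a fresh random nonce per login attempt."""
    return os.urandom(16)

def respond(shared_key, challenge):
    """Client proves knowledge of the key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def check(shared_key, challenge, response):
    """Server recomputes the expected response and compares."""
    return hmac.compare_digest(respond(shared_key, challenge), response)

key = b"engineer-secret"          # shared out of band, never transmitted
nonce = make_challenge()
assert check(key, nonce, respond(key, nonce))
assert not check(key, nonce, respond(b"wrong-key", nonce))
```

An eavesdropper on the wire sees only the nonce and a keyed digest of it; neither replays against the next login, because the nonce is fresh each time.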

-Bennett

bzs@Encore.COM (Barry Shein) (12/30/88)

>Good speech recognition hardware can't be more than 5 or 10
>years away, can it?
>-Peter Schachte

As far as I can tell it's only been 5 or 10 years away for the past
decade or so, I'd imagine that figure is still correct.

	-Barry Shein, ||Encore||

bzs@Encore.COM (Barry Shein) (12/30/88)

>Remember. 60 years ago nuclear weapons were science fiction.
>mark

So were anti-gravity machines.

	-B

malcolm@Apple.COM (Malcolm Slaney) (12/30/88)

In article <4524@xenna.Encore.COM> bzs@Encore.COM (Barry Shein) writes:
>>Good speech recognition hardware can't be more than 5 or 10
>>years away, can it?
>As far as I can tell it's only been 5 or 10 years away for the past
>decade or so, I'd imagine that figure is still correct.

It depends on what you mean by good speech recognition.  It can be argued that
speech recognition is here now.  It also can be argued that we have a long way
to go.

There are a lot of systems in the field that are used for inventory tracking 
(speaker dependent).  Dragon Systems is selling a system on PCs for doctors 
that lets them dictate medical reports (speaker dependent, isolated words).
Also, IBM is beta testing a system for speaker-independent isolated words for
medical offices and insurance companies.  Finally, Kai-Fu Lee and the gang
at CMU (and also now SRI and Lincoln Labs) have demonstrated a system
that does speaker-independent, continuous word recognition with a 90% correct
word rate.

You can buy a system today that connects to a Mac and lets you execute 
arbitrary commands based on spoken text.  Thus you can say "save" and have
the Command S key sent to the application.
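The spoken-command-to-keystroke mapping such a product implies can be sketched as a simple lookup table. The command names and key chords below are illustrative, not the actual product's tables.

```python
# Map a recognized spoken word to the key chord to inject into the
# running application (e.g. "save" -> Command-S on a Mac).

COMMANDS = {
    "save":  ("command", "s"),
    "print": ("command", "p"),
    "quit":  ("command", "q"),
}

def keystroke_for(spoken_word):
    """Return the key chord to send to the application, or None."""
    return COMMANDS.get(spoken_word.lower())

assert keystroke_for("Save") == ("command", "s")
assert keystroke_for("mumble") is None
```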

On the other hand....human-like speech recognition has been postulated to
take on the order of a teraflop by the IBM people.  The current problems
remaining to be solved (other than processor power) are a bunch of front-end 
issues (like noise, multiple speakers, speaker adaptation) AND incorporating
natural language so that homonyms and missing words can be filled in.

Drop me a note if you want references to any of this work.

							Malcolm

eric@snark.UUCP (Eric S. Raymond) (12/30/88)

In article <264@gloom.uucp>, cory@gloom.UUCP (Cory Kempf) writes:
> The other article that I wrote gave a nice discription of how I
> envision the next generation of userinterfaces to work.

Neat description, but I spot a major problem. With no tactile feedback on
the phantom keyboards, how are ya gonna type? Are you assuming something
like a dataglove that can generate pressure on the hand to simulate touch of
the virtual objects? If so, realize that that is a *very* hard problem just
from the mechanical-effector point of view.

Better we should be working on neural-interface devices, not for the sensory
side (that's a very very hard problem) but for the motor-affector side (which
is a relatively easy one). Screw keyboards; it's already known that you can
quickly biofeedback-train people to spark hair-thin electrodes attached to
individual muscle fibers in the balls of their thumbs. *This* is the interface
technology we should be investigating, looking for a non-intrusive version.

Perhaps the gurus of tomorrow will sit lotus-fashion in the midst of multi-
sensory samsaras of the kind you described, controlling everything through a
discreet little cable with a myoelectric sensor box on one end, placed next
to the skin.
-- 
      Eric S. Raymond                     (the mad mastermind of TMN-Netnews)
      Email: eric@snark.uu.net                       CompuServe: [72037,2306]
      Post: 22 S. Warren Avenue, Malvern, PA 19355      Phone: (215)-296-5718

cory@gloom.UUCP (Cory Kempf) (12/30/88)

In article <eZFm4#2QXrzQ=eric@snark.UUCP> eric@snark.UUCP (Eric S. Raymond) writes:
>In article <264@gloom.uucp>, cory@gloom.UUCP (Cory Kempf) writes:
>> The other article that I wrote gave a nice discription of how I
>> envision the next generation of userinterfaces to work.
>
>Neat description, but I spot a major problem. With no tactile feedback on
>the phantom keyboards, how are ya gonna type? Are you assuming something
>like a dataglove that can generate pressure on the hand to simulate touch of
>the virtual objects? If so, realize that that is a *very* hard problem just
>from the mechanical-effector point of view.

Uh, just what made you think that I was not planning on generating
feedback?  When I was in college, I started designing a project for an
electronics class that did just that... motion detection AND feedback.
The method that I came up with was (necessarily) a low-budget
approach, but the prof seemed to think that it was possible to
develop.  The version that I designed was a mitten approach for
simplicity, and didn't use stepper motors, in order to cut costs.  One
of these days (when I get rich), I would like a chance to build the
system the way it should be done... (I think one of the reasons that I
was able to design the system was that nobody thought to tell me that
it couldn't be done :-) )

+C


-- 
Cory ( "...Love is like Oxygen..." ) Kempf
UUCP: encore.com!gloom!cory
	"...it's a mistake in the making."	-KT

darin@laic.UUCP (Darin Johnson) (01/03/89)

In article <530@stag.math.lsa.umich.edu> hyc@math.lsa.umich.edu (Howard Chu) writes:
>Hmm.... I don't know about you, but more often than not, I'd rather
>have a note pad and a pencil than anything else. It's usually pretty
>difficult to get simple arithmetic done quickly on a computer keyboard,
>particularly without a desk calculator type program always accessible.

How about using a light pen to draw out numbers on top of each other, and
then when you draw the magic line under the group, they automatically
get added together (unless you put a -, x, or / next to them).  All the
advantages of paper and pen, except you don't have to do the actual
arithmetic.  If you're a stickler for detail, the computer can put in
the carry marks in your own handwriting.
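Once handwriting recognition has turned the pen strokes into tokens, the "magic line" itself is simple. This sketch assumes recognition is already done and handles only the addition/subtraction case (the x and / variants are omitted).

```python
# A recognized column of signed numbers; drawing the line underneath
# evaluates the column.

def magic_line(column):
    """Evaluate a recognized column of signed numbers, e.g. '12', '-5'."""
    return sum(int(token) for token in column)

assert magic_line(["12", "34", "-5"]) == 41
```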

-- 
Darin Johnson (leadsv!laic!darin@pyramid.pyramid.com)
	"You can't fight in here! This is the war room.."

darin@laic.UUCP (Darin Johnson) (01/03/89)

In article <23040@apple.Apple.COM> malcolm@Apple.COM (Malcolm Slaney) writes:
>On the other hand....human like speech recognition has been postulated to
>take on the order of a Tera Flop by the IBM people.  The current problems
>remaining to be solved (other than processor power) are a bunch of front end 
>issues (like noise, multiple speakers speaker adaptation) AND incorporating
>natural language so that homonyms and missing works can be filled in.

This is if you use the standard von Neumann architecture, standard
algorithms, etc.  I saw a setup a while back at UCSD that did speech
recognition using neural networks (P.D.P. for you purists).  Although I
never actually saw it run (only graphics output), it was supposed to be
able to 'decode' sentences in roughly 1/4 real time.  Presumably, this
was with ideal conditions, short sentences, etc.  With a hardware neural
net, real-time speech recognition is quite possible with less than a
super-computer.  You wouldn't even have to devote an entire machine room
to it.

-- 
Darin Johnson (leadsv!laic!darin@pyramid.pyramid.com)
	"You can't fight in here! This is the war room.."

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/04/89)

From article <397@laic.UUCP>, by darin@laic.UUCP (Darin Johnson):
" In article <23040@apple.Apple.COM> malcolm@Apple.COM (Malcolm Slaney) writes:
" >On the other hand....human like speech recognition has been postulated to
" >take on the order of a Tera Flop by the IBM people.  The current problems
" ...  Presumably, this
" was with ideal conditions, short sentences, etc.  With a hardware neural
" net, real time speach recognition is quite possible with less than a
" super-computer. ...

Extending recognition from ideal circumstances to human-like recognition
will require an understanding of human speech perception which does
not now exist.  New science is needed, not new technology.  Give
the IBM people a Tera Flop and watch them make bigga flop.

		Greg, lee@uhccux.uhcc.hawaii.edu

mr@homxb.ATT.COM (mark) (01/04/89)

In article <4525@xenna.Encore.COM>, bzs@Encore.COM (Barry Shein) writes:
> 
> >Remember. 60 years ago nuclear weapons were science fiction.
> >mark
> 
> So were anti-gravity machines.

What's the point ?

The statement above says that some of the things that used to
be science fiction are now real. (and some of the things that
are now science fiction will be real someday)

> 	-B

mark
homxb!mr

malcolm@Apple.COM (Malcolm Slaney) (01/04/89)

In article <397@laic.UUCP> darin@laic.UUCP (Darin Johnson) writes:
>>On the other hand....human like speech recognition has been postulated to
>>take on the order of a Tera Flop by the IBM people.  

>This is if you use the standard Von-Neumann architecture, standard
>algorithm's etc.  I saw a setup awhile back at UCSD that did speach
>recognition using neural networks (P.D.P. for you purists).  Although I
>never actually saw it run (only graphics output), it was supposed to be
>able to 'decode' sentences in roughly 1/4 real time.  Presumably, this
>was with ideal conditions, short sentences, etc.  

Boy, do I have some grant proposals I'd like *you* to review.  You've been
duped on this one.  Neural nets have been used to do speech recognition
but I haven't seen a neural net that performs nearly as well as the Hidden
Markov models that are now the state of the art.  

The neural net you saw probably accepted input symbols at a low rate (100 Hz).
Getting the data that the neural net needs as input (10 ms of audio signal
converted into a single vector in a low-dimensional space) is a pretty
hard problem.  They probably used LPC, and this completely ignores the issues
of speaker adaptation, multiple speakers, and some other very hard (unsolved) 
problems.
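The frame bookkeeping behind that 100 Hz symbol rate can be sketched as follows; one feature vector per 10 ms of audio. Feature extraction itself (LPC or otherwise) is stubbed out, and the sample rate below is an assumed figure.

```python
# Slice a signal into consecutive non-overlapping 10 ms frames; each
# frame would then be reduced to one feature vector.

def frame_indices(n_samples, sample_rate, frame_ms=10):
    """(start, end) sample indices for consecutive frames."""
    frame_len = int(sample_rate * frame_ms / 1000)
    return [(start, start + frame_len)
            for start in range(0, n_samples - frame_len + 1, frame_len)]

# One second of 8 kHz audio yields 100 frames of 80 samples each.
frames = frame_indices(8000, 8000)
assert len(frames) == 100
assert frames[0] == (0, 80)
```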

> With a hardware neural
>net, real time speach recognition is quite possible with less than a
>super-computer.  You wouldn't even have to devote an entire machine room
>to it.

Only if you count the neurons between your ears as a neural net.  It is 
important to realize that 80% or so word recognition rates are relatively 
easy to attain but cutting your error rate by a factor of two gets 
progressively harder.  This is the reason that speech recognition has been
so tantalizingly close for so long.
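The arithmetic behind "progressively harder": each factor-of-two cut closes half of the remaining gap to perfect accuracy, so the same relative improvement buys a smaller and smaller absolute gain. A quick sketch:

```python
# Word accuracy (percent) after halving the error rate n times,
# starting from a given error rate.

def accuracy_after(error_pct, halvings):
    for _ in range(halvings):
        error_pct /= 2
    return 100 - error_pct

assert accuracy_after(20, 1) == 90.0   # 80% accuracy -> 90%
assert accuracy_after(20, 3) == 97.5   # three factors of two
```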

The amazing thing about neural nets is that people have gotten some good
results without putting much knowledge (or understanding) into the model.
Whether neural nets (aka ignorance engineering) will do better than more
carefully crafted systems remains to be seen.

								Malcolm

diamond@csl.sony.JUNET (Norman Diamond) (01/06/89)

> >Good speech recognition hardware can't be more than 5 or 10
> >years away, can it?
> >-Peter Schachte

In article <4524@xenna.Encore.COM>, bzs@Encore.COM (Barry Shein) writes:

> As far as I can tell it's only been 5 or 10 years away for the past
> decade or so, I'd imagine that figure is still correct.

Well, in 1956, in an advertisement on the back cover of Scientific
American, speech recognition equipment was only 4 years away.
-- 
Norman Diamond, Sony Computer Science Lab (diamond%csl.sony.jp@relay.cs.net)
  The above opinions are my own.   |  Why are programmers criticized for
  If they're also your opinions,   |  re-inventing the wheel, when car
  you're infringing my copyright.  |  manufacturers are praised for it?

dickey@ssc-vax.UUCP (Frederick J Dickey) (01/07/89)

In article <263@gloom.UUCP>, cory@gloom.UUCP (Cory Kempf) writes:
> The following is a conceptualization of how I imagine that the next generation
> (not the NeXT, but the REAL next generation) of user-interfaces will work...
> 
> The Engineer enters his office in the morning and puts on his computer.
> Almost instantly, the plain off-white office is transformed.  Now there is
> a drawing table in front of him, a telephone, and all of the usual desk 
> accutraments.  He picks up his pen, and begins to add those last finishing 
> touches to the design for the Shuttle project.  Oops!  a mistake.  He turns
> the pen upside down and uses the eraser.  Perfect.  The phone rings... it's
> the boss... he wants a meeting in the small conference room.  (Good timing)

When I walk into my office (bay), reality comes up INSTANTLY. This is faster 
than ALMOST INSTANTLY, the speed of the interface of the future (IOTF). 
I pick up a pen, and I CAN WRITE WITH IT! The phone rings, and I CAN TALK ON IT!
All of this happens in REAL TIME! Incredible! Reality is faster than the
IOTF and it is a lot cheaper too. Why would I want the IOTF :-) ?

jeff@stormy.atmos.washington.edu (Jeff L. Bowden) (01/07/89)

In article <2462@ssc-vax.UUCP> dickey@ssc-vax.UUCP (Frederick J Dickey) writes:

>All of this happens in REAL TIME! Incredible! Reality is faster than the
>IOTF and it is a lot cheaper too. Why would I want the IOTF :-) ?

Because where reality leaves off, IOTF picks up.  The illusion of IOTF
exists primarily to make the human more comfortable using it.  It is less
constrained than reality in what it can do.
--
"...lies, damned lies, and heuristics."

wald-david@CS.YALE.EDU (david wald) (01/08/89)

In article <10098@socslgw.csl.sony.JUNET> diamond@csl.sony.JUNET (Norman
Diamond) writes:
>In article <4524@xenna.Encore.COM>, bzs@Encore.COM (Barry Shein) writes:
>>>Good speech recognition hardware can't be more than 5 or 10
>>>years away, can it?
>>>-Peter Schachte
>>
>> As far as I can tell it's only been 5 or 10 years away for the past
>> decade or so, I'd imagine that figure is still correct.
>
>Well, in 1956, in an advertisement on the back cover of Scientific
>American, speech recognition equipment was only 4 years away.

Oh, no!  It's getting farther away!  Run faster!


============================================================================
David Wald                                              wald-david@yale.UUCP
waldave@yalevm.bitnet                                 wald-david@cs.yale.edu
"A monk, a clone and a ferengi decide to go bowling together..."
============================================================================

maujt@warwick.ac.uk (Richard J Cox) (01/08/89)

In article <4498@xenna.Encore.COM> bzs@Encore.COM (Barry Shein) writes:
>Fun note but why do a password challenge when a retinal scan would
>have been more secure? (please, no disgusting remarks about how to fool
>a retinal scanner.)
>
>	-Barry Shein, ||Encore||

How about using some kind of DNA fingerprinting?  Take a small sample
of blood (ouch!) and check against that.  This would be almost impossible to fool.

/*--------------------------------------------------------------------------*/
JANET:  maujt@uk.ac.warwick.cu     BITNET:  maujt%uk.ac.warwick.cu@UKACRL
ARPA:   maujt@cu.warwick.ac.uk	   UUCP:    maujt%cu.warwick.ac.uk@ukc.uucp
Richard Cox, 84 St. Georges Rd, Coventry, CV1 2DL; UK PHONE: (0203) 520995

merlyn@intelob.biin.com (Randal L. Schwartz @ Stonehenge) (01/10/89)

In article <10098@socslgw.csl.sony.JUNET>, diamond@csl (Norman Diamond) writes:
| > >Good speech recognition hardware can't be more than 5 or 10
| > >years away, can it?
| > >-Peter Schachte
| 
| In article <4524@xenna.Encore.COM>, bzs@Encore.COM (Barry Shein) writes:
| 
| > As far as I can tell it's only been 5 or 10 years away for the past
| > decade or so, I'd imagine that figure is still correct.
| 
| Well, in 1956, in an advertisement on the back cover of Scientific
| American, speech recognition equipment was only 4 years away.

Maybe it is getting farther away? :-)

(How many diphthongs in NeXT?)
-- 
Randal L. Schwartz, Stonehenge Consulting Services (503)777-0095
on contract to BiiN (for now :-), Hillsboro, Oregon, USA.
<merlyn@intelob.intel.com> or ...!tektronix!inteloa[!intelob]!merlyn
HEADER ADDRESS MAY BE UNREPLYABLE if it says merlyn@intelob.biin.com ...
Standard disclaimer: I *am* my employer!

bzs@Encore.COM (Barry Shein) (01/10/89)

Re: need a teraflop to do human speech recognition...

Last time someone told me this, I assured them that if they could
cobble together something which would recognize human speech
accurately on a more conventional machine, but veerrry slowwly, I would
find them a few hundred million merely to speed it up, no problem.

That's not as smarmy a comment as it first sounds; graphics and
physics folks have been doing this successfully for years.  Putting
together fantastic but hideously slow algorithms which then justify
someone investing megabucks in hardware to speed them up is no major
problem, but you have to have the algorithm.

I smell hand-waving, or at least a box of pencils being sharpened
over and over again...

	-Barry Shein, ||Encore||

root@radar.UUCP (root) (01/16/89)

In article <69@poppy.warwick.ac.uk> maujt@warwick.ac.uk (Richard J Cox) writes:
>In article <4498@xenna.Encore.COM> bzs@Encore.COM (Barry Shein) writes:
>>Fun note but why do a password challenge when a retinal scan would
>>have been more secure? (please, no disgusting remarks about how to fool
>>a retinal scanner.)
>
>How about using some kind of DNA finger printing? - take a small sample
>of blood (ouch!) and check on this. This would be almost impossible to fool.

Be serious. Besides the invasiveness of the procedure, it's just too slow and
labor intensive using present technology.

Donn S. Fishbein, MD

dbell@maths.tcd.ie (Derek Bell) (01/18/89)

In article <69@poppy.warwick.ac.uk> maujt@warwick.ac.uk (Richard J Cox) writes:
>How about using some kind of DNA finger printing? - take a small sample
>of blood (ouch!) and check on this. This would be almost impossible to fool.

		Er, couldn't this be fooled by an identical twin?
Come to think of it, how much of the DNA would be scanned? DNA has an enormous
information "density". 

>/*--------------------------------------------------------------------------*/
>JANET:  maujt@uk.ac.warwick.cu     BITNET:  maujt%uk.ac.warwick.cu@UKACRL
>ARPA:   maujt@cu.warwick.ac.uk	   UUCP:    maujt%cu.warwick.ac.uk@ukc.uucp
>Richard Cox, 84 St. Georges Rd, Coventry, CV1 2DL; UK PHONE: (0203) 520995


-- 
			dbell@maths.tcd.ie
		If Basic is for backward children, and Pascal for naughty
schoolboys, then C is the language for consenting adults
				-	Brian Kernighan