[comp.ai.neural-nets] Hand Shape recognition

peters@ee.udel.edu (Shirley Peters) (05/30/91)

I'm looking for references and names of research done in the area of
Sign language recognition.  This could be hand shape recognition using a
glove sensing device, or video image recognition, or anything else that
will ultimately end up dealing with sign language.

Thanx in advance,
Shirley
-- 
+------------------------------------------------------------------------+
Shirley Peters                                       peters@dewey.udel.edu
                        I'd rather be sleeping!
+------------------------------------------------------------------------+

danr@autodesk.com (Dan Rosenfeld) (05/30/91)

peters@ee.udel.edu (Shirley Peters) writes:

>I'm looking for references and names of research done in the area of
>Sign language recognition.  This could be hand shape recognition using a
>glove sensing device, or video image recognition, or anything else that
>will ultimately end up dealing with sign language.

Jim(?) Kramer developed a system at Stanford which could recognize
finger-spelling.  I think his original goal was to develop a system
which could recognize ASL gestures more generally, but this was never
realized.

I believe his system used Bayesian methods for recognizing patterns of
joint angles from a sensing glove of his own design.
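
Just to give a flavour of the idea, here is a minimal sketch of that
sort of Bayesian joint-angle classifier, in Python.  This is only my
guess at the shape of it -- the Gaussian assumption, uniform priors,
and per-joint independence are mine, not necessarily Kramer's:

import math

def train(examples):
    # examples: {letter: [[joint angles ...], ...]}
    # fit one Gaussian (mean, variance) per joint per letter
    model = {}
    for letter, vecs in examples.items():
        n, d = len(vecs), len(vecs[0])
        means = [sum(v[j] for v in vecs) / n for j in range(d)]
        variances = [sum((v[j] - means[j]) ** 2 for v in vecs) / n + 1e-6
                     for j in range(d)]
        model[letter] = (means, variances)
    return model

def classify(model, angles):
    # pick the letter with the highest log-likelihood, assuming
    # independent joints and uniform priors over letters
    def log_lik(means, variances):
        return sum(-0.5 * math.log(2 * math.pi * s) - (a - m) ** 2 / (2 * s)
                   for a, m, s in zip(angles, means, variances))
    return max(model, key=lambda letter: log_lik(*model[letter]))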

Unfortunately, I don't have any contact information for Jim.  Can
anyone else out there help?

Dan Rosenfeld

danr@autodesk.com

tap@ai.toronto.edu (Tony Plate) (05/30/91)

In article <54953@nigel.ee.udel.edu> peters@ee.udel.edu (Shirley Peters) writes:
>I'm looking for references and names of research done in the area of
>Sign language recognition.  This could be hand shape recognition using a
>glove sensing device, or video image recognition, or anything else that
>will ultimately end up dealing with sign language.
>

Sidney Fels here at the University of Toronto is working
on a system called "Glove-Talk", which uses a neural net
to recognize hand configurations and trajectories (with
a dataglove). The system generates speech as output.
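
Very roughly, the mapping is of this shape.  This is a sketch under my
own assumptions -- the layer sizes, feature encoding, and the say()
stub below are invented for illustration, not details of Sid's system:

import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN, N_WORDS = 16, 20, 66    # invented sizes

W1 = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
W2 = rng.normal(0, 0.1, (N_WORDS, N_HIDDEN))

def recognize(features):
    # features: hand configuration + trajectory measurements from
    # the dataglove, already encoded as a single vector
    x = np.asarray(features, dtype=float)
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))       # logistic hidden layer
    z = W2 @ h
    p = np.exp(z - z.max())
    p /= p.sum()                              # softmax over word classes
    return int(np.argmax(p))

def say(word_index, vocabulary):
    # stand-in for driving the speech synthesizer
    print(vocabulary[word_index])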

Here's a reference:

@inproceedings{fels-hinton-90,
  key       = "fels",
  author    = "Fels, S.~S. and Hinton, G.~E.",
  title     = "Building adaptive interfaces with neural networks: The
               {G}love-{T}alk pilot study",
  booktitle = "Proceedings of the IFIP TC 13 Third International Conference
               on Human-Computer Interaction",
  publisher = "Elsevier",
  address   = "North Holland",
  pages     = "683--688",
  year      = "1990"
}

Tony Plate
-- 
---------------- Tony Plate ----------------------  tap@ai.utoronto.ca -----
Department of Computer Science, University of Toronto, 
10 King's College Road, Toronto, 
Ontario, CANADA M5S 1A4
----------------------------------------------------------------------------

benlarbi@sarcelle (Zeroual-Benlarbi) (05/30/91)

Hello,
	Can someone tell me how I can obtain membership in the
International Neural Network Society (INNS)?
--
Benlarbi Sayda
Departement Math-Informatique
Universite de SHERBROOKE
Sherbrooke, PQ, J1K 2R1
CANADA

benlarbi@DMI.USherb.CA

nlonginow@falcon.aamrl.wpafb.af.mil (05/31/91)

In article <54953@nigel.ee.udel.edu>, peters@ee.udel.edu (Shirley Peters) writes:
> I'm looking for references and names of research done in the area of
> Sign language recognition.  This could be hand shape recognition using a
> glove sensing device, or video image recognition, or anything else that
> will ultimately end up dealing with sign language.
> 
> Thanx in advance,
> Shirley
> -- 
> +------------------------------------------------------------------------+
> Shirley Peters                                       peters@dewey.udel.edu
>                         I'd rather be sleeping!
> +------------------------------------------------------------------------+


A guy by the name of Sidney Fels did this using neural nets, and he
appears to have had a lot of success.  The work goes by the name of
'Glove-Talk'; it is published <somewhere>, and he is at a university in
Canada (the University of Toronto, judging by the posting above).  He
takes the output from a DataGlove and converts it to words using a net.
There was a writeup on this (or another work) where this concept was
actually made into a product, with the output of the net driving a
speech synthesizer.  Supposedly it worked quite well.  Wish I could
tell you more.  Try the sci.virtual-worlds newsgroup; someone there
should know more.

Nick

russell@minster.york.ac.uk (05/31/91)

In article <54953@nigel.ee.udel.edu> peters@ee.udel.edu (Shirley Peters) writes:
>I'm looking for references and names of research done in the area of
>Sign language recognition.  This could be hand shape recognition using a
>glove sensing device, or video image recognition, or anything else that
>will ultimately end up dealing with sign language.
>
>Thanx in advance,
>Shirley

Try

Gestures and Neural Networks in Human-Computer Interaction.  IJCNN-91, Seattle.
To appear July 1991.  (R. Beale and A. D. N. Edwards)

Interpreting Gestural Input using Neural Networks.  IEE Colloquium on Neural
Nets in Human-Computer Interaction.  IEE Digest 1990/179.  (Russell Beale and
Alistair D. N.  Edwards)

(available from me if you have problems!)

In Brief
++++++++
This research used a Powerglove (a cheap DataGlove) and back-prop.  On
simulated data it showed excellent recognition results for a subset of
the American one-handed spelling language, and errors in the signing
gestures were coped with perfectly satisfactorily.  More work is
currently in progress.
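
In outline, the back-prop setup looks something like the toy sketch
below; the sizes, learning rate, and simulated-data generator here are
illustrative only, not the actual experimental setup:

import numpy as np

rng = np.random.default_rng(1)
D, H, K = 8, 12, 10      # glove inputs, hidden units, letters (invented)
W1 = rng.normal(0, 0.3, (H, D))
W2 = rng.normal(0, 0.3, (K, H))
PROTOS = rng.normal(0, 1, (K, D))   # one prototype hand shape per letter

def simulated_batch(n=32):
    # noisy joint-angle vectors scattered around each letter's prototype
    y = rng.integers(0, K, n)
    return PROTOS[y] + rng.normal(0, 0.2, (n, D)), y

for step in range(500):
    X, y = simulated_batch()
    Hid = np.tanh(X @ W1.T)                        # forward pass
    Z = Hid @ W2.T
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)              # softmax over letters
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0                 # d(loss)/dZ
    dW2 = G.T @ Hid / len(y)
    dW1 = ((G @ W2) * (1.0 - Hid ** 2)).T @ X / len(y)
    W2 -= 0.5 * dW2                                # back-prop updates
    W1 -= 0.5 * dW1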

Also note that other details of this work, and related stuff of potential
interest, can be found in

Pattern Recognition and Neural Networks in Human-Computer Interaction
Russell Beale and Janet Finlay, eds.  Ellis Horwood.  Available late
1991.

Also look at Fels and Hinton's Glove-Talk,

Fels, S. S. \& Hinton, G. E. (1990) Building Adaptive Interfaces with
Neural Networks: The Glove-Talk Pilot Study, in: Diaper, D., Gilmore,
D., Cockton, G. \& Shackel, B. (Eds.) {\em Human-Computer Interaction:
INTERACT '90}, Proceedings of the IFIP TC 13 Third International
Conference on Human-Computer Interaction, North-Holland, Oxford, pp.
683--688.
 
which uses a dataglove to produce speech via a customised language.

Gesture Recognition using Recurrent Neural Networks, Kouichi Murakami
and Hitomi Taguchi, CHI '91, pp. 237--242, 

reports work on posture (static)
and gesture (dynamic) sign recognition.
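
The recurrent idea, in outline, is to carry a hidden state across
glove frames so that the final state classifies the whole motion.  A
generic sketch follows; the sizes and structure are my guesses, not
Murakami and Taguchi's actual network:

import numpy as np

rng = np.random.default_rng(2)
D, H, K = 10, 16, 5      # frame features, state size, gesture classes
Wx = rng.normal(0, 0.2, (H, D))
Wh = rng.normal(0, 0.2, (H, H))
Wo = rng.normal(0, 0.2, (K, H))

def classify_sequence(frames):
    # frames: sequence of per-timestep feature vectors from the glove
    h = np.zeros(H)
    for x in frames:
        h = np.tanh(Wx @ np.asarray(x, float) + Wh @ h)  # state update
    return int(np.argmax(Wo @ h))   # gesture read off the final state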

Kramer, J. and Leifer, L. (1988) The talking glove: a communication aid
for deaf, deaf-blind, and non-vocal individuals, {\em Rehabilitation
Research and Development Center 1988 Progress Report}, Veterans
Administration, Palo Alto, California, pp.123--124.

use a dataglove as an aid for the disabled, while Pausch and Williams

Pausch R. and Williams, R. D. (1990) Tailor: Creating Custom User
Interfaces Based on Gesture, Computer Science Report No. TR-90-06,
Department of Computer Science, University of Virginia.

have taken a very different approach to using gestures to generate
speech. Instead of using sign language, their aim is for the
communicator to move his or her hand in motions that mimic the
movements of the tongue in natural speech.
Their objective is to use this technique as a means of communication
for people who cannot communicate vocally because of disabilities
such as cerebral palsy.

So far Pausch and Williams' results have been encouraging in that they
have managed to get subjects with cerebral palsy to consistently generate
suitable gesture curves.

The latter two use conventional pattern recognition techniques.
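
By way of illustration, one such conventional technique for gesture
curves is nearest-template matching of a resampled path.  The sketch
below is entirely generic -- it is not what either group actually
implemented:

import numpy as np

def resample(curve, n=32):
    # interpolate a (T, 2) hand path to n evenly spaced points
    curve = np.asarray(curve, dtype=float)
    t = np.linspace(0.0, 1.0, len(curve))
    ti = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(ti, t, curve[:, j])
                            for j in range(curve.shape[1])])

def nearest_template(curve, templates):
    # templates: {label: stored curve}; the label whose stored curve
    # has the smallest mean squared point-to-point distance wins
    q = resample(curve)
    return min(templates,
               key=lambda k: np.mean((resample(templates[k]) - q) ** 2))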

Hope this helps.

Russell.

____________________________________________________________
 Russell Beale, Advanced Computer Architecture Group,
 Dept. of Computer Science, University of York, Heslington,
 YORK. YO1 5DD.  UK.               Tel: [044] (904) 432771
				   Fax: [044] (904) 432767

 russell@uk.ac.york.minster			JANET
 russell%minster.york.ac.uk@nsfnet-relay.ac.uk  ARPA 
 ..!ukc!minster!russell				UUCP
 russell@minster.york.ac.uk			eab mail
____________________________________________________________