[sci.virtual-worlds] British VR company, VR protocols, and uses for Datagloves

D. Jenkins (jenkins@prcs3.decnet.philips.be) (10/03/90)

Spotted two items in British publications which may be of interest to the 
newsgroup.  Hope they're relevant.

Dean Jenkins

---------------------------<CUT HERE>-------------------------------------------

On September 27th I saw a couple of items in 'The Guardian' and the
'Electronics Times' (both UK publications) which may be of interest to
readers of sci.virtual-worlds.  I paraphrase them below:

--------------------------------------------------------------------------------

1. British VR company and VR protocols

[From: 'Insight' by Mike McLean, 'Electronics Times', September 27th 1990]

Apparently a British VR product was demonstrated in London on September 26th.

The product is Vision, created by a spin-off of INMOS known as Division.
Division are based in Bristol.

Vision uses INMOS Transputers in a 'massively parallel architecture'.
Intel i860 CPUs provide the floating-point power needed for stereo vision
applications.  The architecture is said to be designed to make it possible
to add virtual reality to existing applications as well as to create new
ones.

The Vision modules sit between a host system running the main application
and the VR peripherals.  The company's virtual environment system is based
on the X-windows client-server model.  It includes an object model
database, a high-level control interface to the application, and client
modules.  Some of the client modules handle object creation/updates and
the peripherals.
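[The article gives no detail of Division's actual protocol, so the
fragment below is only a minimal sketch of the general idea of such an
X-like client-server split: the application process sends small
object-update messages to a VR server which owns the object database and
drives the peripherals.  The struct layout, vr_move_object() and the fd
argument are all inventions for illustration, not Division's API.]

  /* Hypothetical sketch only; not Division's protocol.  One client
   * message asks the VR server to move an object it already holds
   * in its object model database. */

  #include <string.h>
  #include <unistd.h>

  /* One update message: move object 'id' to a new position. */
  struct vr_update {
      int   id;          /* object handle in the server's database */
      float pos[3];      /* new world-space position               */
      float rot[3];      /* new orientation (Euler angles)         */
  };

  /* Send an update down a connection to the VR server.  'fd' would
   * come from some earlier connection call (also hypothetical). */
  int vr_move_object(int fd, int id, float x, float y, float z)
  {
      struct vr_update msg;

      memset(&msg, 0, sizeof msg);
      msg.id = id;
      msg.pos[0] = x;
      msg.pos[1] = y;
      msg.pos[2] = z;

      /* A real protocol would define byte order and framing;
       * a raw struct write is enough to show the shape of it. */
      return write(fd, &msg, sizeof msg) == sizeof msg ? 0 : -1;
  }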

--------------------------------------------------------------------------------

2.  Uses for Datagloves

[From: 'Computer Guardian' in 'The Guardian' September 27th.]

Apparently a system called 'Glove-Talk' was discussed at the Human-Computer
Interaction conference held in Cambridge during August.

The system uses a VPL DataGlove to convert sign language into synthesised
speech.  Neural network software is used to decipher the gestures.  The
system has a vocabulary of 203 words based on 66 core words of American
Sign Language.  It is said that it 'almost works in real-time', with a
response in under 50ms for each gesture.
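[Again, the article says only that neural network software deciphers the
gestures, not how the network is built.  The fragment below is a minimal
sketch of the shape of the task, assuming a simple one-layer classifier:
map a vector of glove readings to one of the 203 vocabulary words, which
a speech synthesiser would then speak.  The sensor count, the weight
tables and classify_gesture() are hypothetical.]

  /* Hypothetical sketch; Glove-Talk's real network design is not
   * given in the article.  A trained one-layer net scores each word
   * against one sample of glove readings and the best score wins. */

  #define N_INPUTS  16   /* e.g. flex sensors plus hand position    */
  #define N_WORDS   203  /* vocabulary size quoted in the article   */

  /* Trained weights would come from a learning phase; zeros here. */
  static float weights[N_WORDS][N_INPUTS];
  static float bias[N_WORDS];

  /* Classify one glove sample: return the index of the word whose
   * output unit responds most strongly. */
  int classify_gesture(const float glove[N_INPUTS])
  {
      int w, i, best = 0;
      float act, best_act = -1e30f;

      for (w = 0; w < N_WORDS; w++) {
          act = bias[w];
          for (i = 0; i < N_INPUTS; i++)
              act += weights[w][i] * glove[i];
          if (act > best_act) {
              best_act = act;
              best = w;
          }
      }
      return best;   /* index to hand to the speech synthesiser */
  }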

The developer is Dr Sidney Fels of the University of Toronto. 

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
- D. Jenkins           !Standard Disclaimers Apply                             -
- PRCS Ltd             !INTERNET    jenkins@prcs3.decnet.philips.be            -
--------------------------------------------------------------------------------