[comp.robotics] 6 DOF Joysticks

gm26@prism.gatech.EDU (MCMURRAY,GARY V) (12/07/90)

I am interested in finding out information on the development of 6 DOF
joysticks by various people.  In particular, I am interested in the
manner that the motions of the joystick are converted into a motion for
the robot.  I know that JPL has been doing work in this area for many
years but I have not been able to locate any papers that explicitly
define this mapping process.  There is also a company called Kraft
Teleoperation (I believe that is the name) that has a commercially
available joystick.

Also, are the devices mentioned above generic in nature such that they
can be used to control any robot, or are they restricted to robots of
similar kinematic structure?

Thanks for any and all of your input!!!

Gary McMurray
Home of the "Number 1" Football Team???

-- 
MCMURRAY,GARY V
Georgia Institute of Technology, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gm26
Internet: gm26@prism.gatech.edu

smith@sctc.com (Rick Smith) (12/07/90)

gm26@prism.gatech.EDU (MCMURRAY,GARY V) writes:

>I am interested in finding out information on the development of 6 DOF
>joysticks by various people.  In particular, I am interested in the
>manner that the motions of the joystick are converted into a motion for
>the robot.

I spent a year or so working on a project involving this, and you connect
the components together like this:

off-the-shelf 6 DOF joystick ==> LOTS OF WORK ==> off-the-shelf robot

The LOTS OF WORK section is the implementation of your teleoperation control
scheme. We looked at several approaches, and it all depends on your joystick
and your robot's geometry. I don't think there's much out there in the way
of off-the-shelf teleoperation software that supports a range of joysticks
and robots. Usually the best you can do is buy a robot with a handheld
programming control that uses something approximating a joystick.
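
Schematically, the LOTS OF WORK box is just the middle of a servo loop
that reads the joystick, translates, and sends commands to the robot at
some fixed rate.  A minimal sketch in Python (the three callables are
hypothetical stand-ins for whatever your particular joystick and robot
controller actually provide):

    import time

    def teleop_loop(read_joystick, map_to_robot, send_command, period=0.02):
        # Read a 6 DOF input, translate it into a robot command, send it,
        # and repeat at roughly 50 Hz.  The translation step is where all
        # the real work (and all the geometry) lives.
        while True:
            pose6 = read_joystick()        # (x, y, z, roll, pitch, yaw)
            cmd = map_to_robot(pose6)      # the "LOTS OF WORK" box
            send_command(cmd)
            time.sleep(period)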

I assume that by "joystick" you mean anything that will map operator movements
into robot motions, and not just handles like they use on video games...

In the project I worked on, we planned to do teleoperation controlled by
DataGloves -- that's those bizarre gauntlets that measure finger flexes
and hand position/orientation. We were going to use hand displacement
(scaled by a gain signal from a foot pedal) to specify end effector
displacement. The project was axed before the robot was built, though we
had lots of fun with the DataGloves.
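
The mapping itself was going to be nothing deeper than scaled relative
displacement.  A rough sketch (my reconstruction, not project code; the
reference points are wherever the hand and end effector started, and
the gain comes from the foot pedal):

    import numpy as np

    def glove_to_end_effector(hand_pos, hand_ref, ee_ref, pedal_gain):
        # Hand displacement from its reference point, scaled by the pedal,
        # becomes an end-effector position setpoint.  All positions are
        # 3-vectors in meters; pedal_gain is dimensionless.
        displacement = np.asarray(hand_pos) - np.asarray(hand_ref)
        return np.asarray(ee_ref) + pedal_gain * displacement

    # e.g. a pedal gain of 0.25 turns a 20 cm hand motion into 5 cm at the tool
    print(glove_to_end_effector([0.2, 0.0, 0.0], [0.0, 0.0, 0.0],
                                [0.5, 0.0, 0.3], 0.25))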

On the other hand (sorry!) we had another project evaluating 6 DOF hand
controllers... I got the impression that the classic technique there was
to use displacement to specify a velocity vector for the robot's end effector
motion. Let go of the controller and motion stopped; exert some effort
and motion followed the direction/orientation you pushed.
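
In code, that rate-control convention is about this simple (a sketch,
assuming the controller gives you a centered 6-vector of displacements;
the deadband is what makes "let go and motion stops" actually work):

    import numpy as np

    def rate_control(stick, gain=0.05, deadband=0.02):
        # stick: 6-vector (x, y, z, roll, pitch, yaw), each roughly in [-1, 1].
        # Returns a commanded end-effector velocity (linear m/s, angular rad/s).
        stick = np.asarray(stick, dtype=float)
        stick[np.abs(stick) < deadband] = 0.0   # small offsets => no creep
        return gain * stick

    print(rate_control([0.5, 0.0, -0.1, 0.0, 0.0, 0.01]))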

Ideally, you want the kinematics of the joystick (well, hand controller) to
match that of the robot. Thus, the DataGlove is really best with a cartesian
robot, as are the 6 DOF generalizations of 2 DOF joysticks. With articulated
robots like Pumas, however, you have to worry about singularities in your
work envelope (e.g. places you can't quite reach). If you look
around, though, there IS some company that builds an articulated hand
controller designed to match the kinematic configuration of things like Pumas.
Sorry, but I don't remember the company name. I do remember that they cost
lots, though you save something in software development by avoiding the
singularity issue.
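
The software cost shows up when you turn that Cartesian velocity command
into joint rates for an articulated arm.  One common dodge (my sketch,
not anyone's product) is damped least-squares on the manipulator
Jacobian, which keeps the commanded joint speeds finite near a
singularity at the cost of a little tracking accuracy:

    import numpy as np

    def cartesian_to_joint_rates(J, v_cart, damping=0.05):
        # J is the 6 x n manipulator Jacobian, v_cart the 6-vector velocity
        # command.  A plain pseudo-inverse blows up near singularities; the
        # damping term bounds the result instead.
        JJt = J @ J.T                                          # 6 x 6
        return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(6), v_cart)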

Rick.
smith@sctc.com   Arden Hills, Minnesota

gerry@frc2.frc.ri.cmu.edu (Gerry Roston) (12/07/90)

In article <1990Dec6.232210.2638@sctc.com> smith@sctc.com (Rick Smith) writes:

   >I am interested in finding out information on the development of 6 DOF
   >joysticks by various people.  In particular, I am interested in the
   >manner that the motions of the joystick are converted into a motion for
   >the robot.

The work that was done at JPL was headed up by Dr. Tony Bejczy; the
last number I have for him is (818) 354-4568.  Or, try writing to him
at: 
	Jet Propulsion Laboratory, ms 198-330
	4800 Oak Grove Drive
	Pasadena, California, 91109

Another person you might try to track down is Bill Townsend, who
recently completed a PhD at MIT.  He has done extensive work in this
area and has produced some interesting "joystick" designs.

   Ideally, you want the kinematics of the joystick (well, hand controller) to
   match that of the robot. 

Actually, this statement is very far from the truth.  You want your
master arm, i.e. the joystick, to be constructed in such a fashion
that it is easily operable by the human; and you want the slave arm to
be designed to achieve the required task in the best possible fashion.
Connecting the two is a computer which performs the kinematic
translation required.  Furthermore, tests done at JPL and elsewhere
have shown that to perform meaningful tasks, force reflection is
required and that time delay from your sensors (cameras, force
sensors, etc.) will seriously degrade performance.
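
In other words, the computer in the middle does something like the
following sketch.  The forward and inverse kinematics routines are
placeholders for whatever the particular master and slave arms require;
the scale factor is what lets a small, comfortable master workspace
drive a larger (or differently placed) slave workspace:

    def kinematic_translation(master_joints, master_fk, slave_ik,
                              scale=1.0, offset=(0.0, 0.0, 0.0)):
        # Master joint readings -> Cartesian pose of the operator's handle ->
        # joint targets for the (kinematically different) slave arm.
        pos, orient = master_fk(master_joints)
        pos = [scale * p + o for p, o in zip(pos, offset)]
        return slave_ik(pos, orient)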

gerry
--
gerry roston, field robotics center
robotics institute, carnegie mellon university
pittsburgh, pennsylvania, 15213  (412) 268-6557
gerry@cs.cmu.edu

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (12/08/90)

In article <GERRY.90Dec7094126@onion.frc.ri.cmu.edu> gerry@frc2.frc.ri.cmu.edu (Gerry Roston) writes:
>In article <1990Dec6.232210.2638@sctc.com> smith@sctc.com (Rick Smith) writes:
>   Ideally, you want the kinematics of the joystick (well, hand controller) to
>   match that of the robot. 

> ... tests done at JPL and elsewhere
>have shown that to perform meaningful tasks, force reflection is
>required and that time delay from your sensors (cameras, force
>sensors, etc) will seriously degrade performance.

Terms like "seriously degraded" seriously degrade our appreciation of
some problems.  I mention this because I'm convinced that earth-based
remote control of, for example, a space station, would yield a huge
advantage in performance/cost payoff.  So the question is, what did
"tests done at JPL and elsewhere" really demonstrate?  

Consider that the internal sensor-brain-muscle roundtrip time of a
human is of the order of 1/5 second -- so that when your brain tries
to do anything in the outer world, you have a delay time of this order.
Now, suppose that the sensory-motor loop time of a remote control
system were, say, 1 second.  Then you could expect the human
performance time to increase six-fold, so that it would be "degraded"
by that much.  My question: has JPL or anyone else shown that
delays cause substantially more degradation than this?
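
To make the arithmetic explicit, this naive model just adds the
transmission delay to the human's internal loop time and takes the
ratio (a back-of-the-envelope sketch, not a claim about what JPL
measured):

    def slowdown(internal_loop=0.2, transmission_delay=1.0):
        # per-move time with the delay, relative to per-move time without it
        return (internal_loop + transmission_delay) / internal_loop

    print(slowdown())    # 6.0 -- the six-fold figure above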

smith@sndpit.dec.com (Willie Smith) (12/08/90)

In article [...], minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes...
>In article [...] gerry@frc2.frc.ri.cmu.edu (Gerry Roston) writes:
>>In article [...] smith@sctc.com (Rick Smith) writes:
>>   Ideally, you want the kinematics of the joystick (well, hand controller) to
>>   match that of the robot. 
> 
>> ... tests done at JPL and elsewhere
>>have shown that to perform meaningful tasks, force reflection is
>>required and that time delay from your sensors (cameras, force
>>sensors, etc) will seriously degrade performance.
> 
>Terms like "seriously degraded" seriously degrade our appreciation of
>some problems.  I mention this because I'm convinced that earth-based
>remote control of, for example, a space station, would yield a huge
>advantage in performance/cost payoff.  So the question is, what did
>"tests done at JPL and elsewhere" really demonstrate?  
> 
>Consider that the internal sensor-brain-muscle roundtrip time of a
>human is of the order of 1/5 second [...]

Another consideration is that humans can learn to predict and anticipate
control inputs to systems with long delays.  After about 1/2 hour driving
my (simulated) lunar teleoperated vehicle I've found I can do significantly
better than when I started, and the 'training time' gets shorter with 
repeated 'missions'.

While I don't doubt that force-feedback and other tightly-coupled systems
can get unstable when the delay approximates the human reaction time, more
loosely-coupled systems (joysticks and video feedback with Heads-Up-Display 
to show the operator where he's pointing his controls) allow the operator to
compensate for the delays.  Yes, precision tasks take longer, but nothing is
quite as bad as the expected "move, wait to see what happened, move again..."
case, even when running with a full 3-second lunar teleoperations delay.
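
The HUD trick amounts to dead-reckoning the vehicle forward from the
commands already sent during the round trip, and drawing that predicted
pose over the stale video.  Roughly (a sketch of the idea, not the
actual S-100 code; commands are assumed to be logged every dt seconds):

    from math import cos, sin

    def predicted_pose(x, y, heading, command_log, now, delay, dt=0.1):
        # Start from the pose the delayed video shows and integrate the
        # (timestamp, speed, turn_rate) commands the video hasn't caught
        # up with yet; the result is what gets drawn on the HUD.
        for t, speed, turn_rate in command_log:
            if now - delay <= t < now:
                heading += turn_rate * dt
                x += speed * cos(heading) * dt
                y += speed * sin(heading) * dt
        return x, y, heading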

In case anyone is interested, the vehicle in question is a modified RC 
truck with TV camera and transmitter, controlled by a Z80-based S-100 
machine, with the received TV signal routed through an Amiga with genlock 
for HUD.  You can indeed do lunar teleoperations research in your basement!
The phase II vehicle is based on lawn-tractor wheels and cordless drill 
motors, with a couple of onboard computers.  Lots more documentation 
available on request.

Willie Smith
smith@sndpit.enet.dec.com
smith%sndpit.enet.dec.com@decwrl.dec.com
{Usenet!Backbone}!decwrl!sndpit.enet.dec.com!smith

nagle@well.sf.ca.us (John Nagle) (12/09/90)

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) writes:

>Consider that the internal sensor-brain-muscle roudtrip time of a
>human is of the order of 1/5 second -- so that when your brain tries
>to do anythig in the outer world, you have a delay time of this order.

     Eye-hand control loops are of that order, but many purely tactile
control loops in the body are much faster.  The grasping reflex, which
maintains finger contact forces at a level sufficient to prevent slip,
operates in about 20ms.  Flight simulator designers have discovered
that update rates as high as 500Hz are required to make control
forces "feel right".  ("Flight Simulation", Rolfe and Staples, 
Cambridge University Press, 1986, section 4.10).

     A 1 sec delay in a tactile control loop thus represents roughly
a two-order-of-magnitude performance degradation.
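
Plugging those numbers into the same back-of-the-envelope ratio used a
couple of posts up makes the comparison concrete:

    # 1 s of added delay, measured against each loop's natural cycle time
    for loop in (0.2, 0.02):
        print(loop, (loop + 1.0) / loop)   # ~6x for the eye-hand loop,
                                           # ~51x for the 20 ms grasp reflex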

					John Nagle

lance@motcsd.csd.mot.com (lance.norskog) (12/11/90)

smith@sndpit.dec.com (Willie Smith) writes:


>Another consideration is that humans can learn to predict and anticipate
>control inputs to systems with long delays.  After about 1/2 hour driving
>my (simulated) lunar teleoperated vehicle I've found I can do significantly
>better than when I started, and the 'training time' gets shorter with 
>repeated 'missions'.

Indeed.  If the delay is constant, you can learn to live with it.  It's
variable delays that kill you.  You can check this out using a PC vs. a
time-shared mini.  You can train to very long rhythms.

Lance