[sci.virtual-worlds] Reading text in virtual reality?

jellinghaus-robert@CS.YALE.EDU (Rob Jellinghaus) (08/13/90)

The "office of the future" concept in virtual reality has been widely
discussed--put on your goggles and bang you're in the office.  But one
problem is going to have to be solved, and solved well, before the
virtual office exists:  the technology will have to support virtual
documents that you can _read_.

Consider the problems in rendering a screen, or a page, of text.  Not
only must the "eyephone" screens possess high enough resolution to let
people read virtual documents comfortably, for hours, but one must be
able to pick up virtual screens, move around them, etc., which means
text rendering must be very fast and very accurate.

Has anyone done any work on smooth, realistic rotation of text?  Almost
all the 3D graphics stuff I've ever seen specializes in blitting
lots of polygons with good shading effects, which doesn't seem very 
applicable to the special problems text presents.

Also, has anyone noticed the parallels between the discussions we've
been having here about virtual space and navigation therein, and the
work that's been done on hypertext information spaces?  In both contexts
there is a lot of stuff in the world, and you need to be able to know
where you are and where you want to be.  Maybe the two fields will
interbreed at some proximate date.


-- 
Rob Jellinghaus                | "Next time you see a lie being spread or a
jellinghaus-robert@CS.Yale.EDU |  bad decision being made out of sheer ignor-
ROBERTJ@{yalecs,yalevm}.BITNET |  ance, pause, and think of hypertext."
{everyone}!decvax!yale!robertj |     -- K. Eric Drexler, _Engines of Creation_

auric1@milton.u.washington.edu (Alan Stearns) (08/16/90)

In article <25797@cs.yale.edu> jellinghaus-robert@CS.YALE.EDU (Rob Jellinghaus) writes:
>The "office of the future" concept in virtual reality has been widely
>discussed--put on your goggles and bang you're in the office.  But one
>problem is going to have to be solved, and solved well, before the
>virtual office exists:  the technology will have to support virtual
>documents that you can _read_.

Don't worry too much about reading text in VR just yet.  We haven't
gotten to the point where the 2D representation of text I'm typing
now is suitable for reading for hours on end.  The virtual office is
at least as far away as the paperless office, which is remotely
feasible with today's technology but involves too many compromises to
be a widespread choice.

If it isn't easier to use a computer for a given task, then don't use
one.  Text display will have to get as good as paper before we make the
switch.
But even if we don't use a computer or VR for everything, they still 
have their place.

How about partial-immersion VR for the virtual office?  The VR
display is projected on a pair of see-through goggles so we can see
the real world around us, with VR objects superimposed on it.  The real
world could be cut off by hooding the goggles, or the VR could be turned
off so we could see only the real world.  In a meeting with another office
person, thousands of miles away, I see him superimposed in a chair in my 
office.  I can read real-world documents on my desk, look at virtual objects
he shows me, or read my computer screen (that he may be sending data to).
We can also put on our blinders to shut out the real world, perhaps to
traverse a data space or construct a presentation.

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (08/16/90)

jellinghaus-robert@CS.YALE.EDU (Rob Jellinghaus) writes:
>
>The "office of the future" concept in virtual reality has been widely
>discussed--put on your goggles and bang you're in the office.  But one
>problem is going to have to be solved, and solved well, before the
>virtual office exists:  the technology will have to support virtual
>documents that you can _read_.

This brought to mind another human ability, nearly magic, that will be
of _crucial_ importance for navigating virtual reality, and that seems
so far to have been neglected.  This is the "cocktail party" phenomenon:
in a crowd of fifty conversations, our ears can somehow pick out the
one of interest to us, and focus on it to the exclusion of all others.

Two items are of research interest: 1) how do we do it, i.e., what
parts of the signal are crucial to making this work, so that our
virtual reality generator can be designed to present them to us; and
2) how do we indicate our new focus of attention to our virtual
reality interface, so that we may use it for navigation, when at
present it is all done "in our heads".

The application is obvious: one can navigate a database along "fifty"
simultaneous threads until the one of interest is isolated, then branch
"fifty" times again to narrow the focus.

At a guess, this should be much faster than a visual interface to
textual/verbal information, if the effect can be replicated from the
real world into the virtual one.
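
To give a flavor of what the VR generator would have to present, here
is a toy C sketch, invented for this post, that mixes several mono
streams into a stereo pair using only interaural time and level
differences.  Real spatialization also needs the filtering the outer
ear performs; every name and constant below is a rough placeholder.

/* Toy sketch: place each mono stream at a virtual azimuth using only
 * interaural time and level differences.  Invented for illustration;
 * not a real spatial-audio implementation. */
#include <math.h>

#define RATE     44100     /* samples per second              */
#define HEAD_SEC 0.00066   /* ~maximum interaural delay (sec) */

/* Mix 'nsrc' mono streams of 'len' samples into stereo buffers,
 * placing stream k at azimuth az[k] (radians; 0 = straight ahead,
 * positive = listener's right). */
void mix_binaural(const double *src[], const double az[], int nsrc,
                  int len, double *outl, double *outr)
{
    int k, n;

    for (n = 0; n < len; n++)
        outl[n] = outr[n] = 0.0;

    for (k = 0; k < nsrc; k++) {
        /* a source on the right reaches the left ear later
         * and slightly attenuated */
        int itd = (int)(sin(az[k]) * HEAD_SEC * RATE);
        int dl = itd > 0 ?  itd : 0;   /* delay into left ear  */
        int dr = itd < 0 ? -itd : 0;   /* delay into right ear */
        double gl = (1.0 - 0.3 * sin(az[k])) / nsrc;
        double gr = (1.0 + 0.3 * sin(az[k])) / nsrc;

        for (n = 0; n < len; n++) {
            if (n >= dl) outl[n] += gl * src[k][n - dl];
            if (n >= dr) outr[n] += gr * src[k][n - dr];
        }
    }
}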

Besides, isn't that how it's done in all the sexy cyberpunk stories?  ;-)

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

brucec%phoebus.phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (08/17/90)

In article <25797@cs.yale.edu> jellinghaus-robert@CS.YALE.EDU (Rob Jellinghaus) writes:
> ...
> 
> Has anyone done any work on smooth, realistic rotation of text?  Almost
> all the 3D graphics stuff I've ever seen specializes in blitting
> lots of polygons with good shading effects, which doesn't seem very 
> applicable to the special problems text presents.
> 
Yes, a lot of work has been done.  This is essentially an anti-aliasing
problem, with image-processing sorts of solutions (a small illustrative
sketch follows the references).  See, for instance:

@Article{crow78,
  author  = "Crow, Frank C.",
  title   = "The Use of Grayscale for Improved Raster Display of Vectors
             and Characters",
  journal = "SIGGRAPH Proceedings",
  year    = 1978,
}
% The copy of the article I have doesn't have any publishing information.

@Article{wieman80,
  author  = "Wieman, Carl F. R.",
  title   = "Continuous Anti-Aliased Rotation and Zoom of Raster Images",
  journal = "SIGGRAPH Proceedings",
  year    = 1980,
}

@TechReport{warnock80,
  author      = "Warnock, John E.",
  title       = "The Display of Characters Using Gray Level Sample Arrays",
  institution = "Xerox PARC",
  number      = "CSL-80-6",
  year        = 1980,
}

There are some more recent papers, but I haven't gotten them out of the
boxes since the last move.  I seem to remember one by Maureen Stone of
Xerox PARC three or four years ago.
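
To make the anti-aliasing point concrete, here is a toy C sketch, not
taken from any of the papers above: rotate a grayscale glyph bitmap by
inverse mapping, averaging a 2x2 grid of subsamples per output pixel.
Sizes and names are invented; a production renderer would use a better
reconstruction filter.

/* Toy anti-aliased rotation by inverse mapping: for each output
 * pixel, average a 2x2 grid of subsamples taken from the unrotated
 * glyph bitmap. */
#include <math.h>

#define SRC_W 64
#define SRC_H 64
#define DST_W 64
#define DST_H 64

/* Point-sample the source glyph; outside the bitmap is background. */
static double src_sample(const unsigned char src[SRC_H][SRC_W],
                         double x, double y)
{
    int ix = (int)floor(x), iy = (int)floor(y);

    if (ix < 0 || iy < 0 || ix >= SRC_W || iy >= SRC_H)
        return 0.0;
    return src[iy][ix] / 255.0;
}

/* Rotate src by 'angle' radians about its center into dst. */
void rotate_glyph(const unsigned char src[SRC_H][SRC_W],
                  unsigned char dst[DST_H][DST_W], double angle)
{
    double c = cos(angle), s = sin(angle);
    int x, y, i, j;

    for (y = 0; y < DST_H; y++) {
        for (x = 0; x < DST_W; x++) {
            double sum = 0.0;

            for (j = 0; j < 2; j++) {
                for (i = 0; i < 2; i++) {
                    /* subsample center, relative to image center */
                    double dx = x + (i + 0.5) / 2.0 - DST_W / 2.0;
                    double dy = y + (j + 0.5) / 2.0 - DST_H / 2.0;
                    /* inverse-rotate back into source coordinates */
                    double sx =  c * dx + s * dy + SRC_W / 2.0;
                    double sy = -s * dx + c * dy + SRC_H / 2.0;

                    sum += src_sample(src, sx, sy);
                }
            }
            dst[y][x] = (unsigned char)(sum / 4.0 * 255.0 + 0.5);
        }
    }
}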

> Also, has anyone noticed the parallels between the discussions we've
> been having here about virtual space and navigation therein, and the
> work that's been done on hypertext information spaces?  In both contexts
> there is a lot of stuff in the world, and you need to be able to know
> where you are and where you want to be.  Maybe the two fields will
> interbreed at some proximate date.
> 

Yes, I noticed it because I recently spent a year building a hypertext
system for software documentation, and I had to spend quite a bit of time
both researching the literature and thinking about navigation.  The two
problems are really one; IMHO hypermedia will become VR as the number and
complexity of the input and output modalities increase.
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

beshers@division.cs.columbia.edu (Clifford Beshers) (08/18/90)

In article <BRUCEC.90Aug16120816@phoebus.phoebus.labs.tek.com> brucec%phoebus.phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) writes:

   > 
   > Has anyone done any work on smooth, realistic rotation of text?  Almost
   > all the 3D graphics stuff I've ever seen specializes in blitting
   > lots of polygons with good shading effects, which doesn't seem very 
   > applicable to the special problems text presents.
   > 
   Yes, a lot of work has been done.  This is essentially an anti-aliasing
   problem, with image-processing sorts of solutions.  See, for instance:

That addresses the issue of high-quality rendering of text, but
if you want to read about a VR system that has text in 3D, look
at the papers by Card, Mackinlay, and Robertson of Xerox PARC
(not necessarily in that order).  They have a system they call
the "cognitive co-processor" with a 3D Rooms flavor.  See ACM
User Interface Software and Technology (UIST) '89 and SIGGRAPH
'90.  There are blackboards on the walls with messages, etc., that
pop up to a "heads-up display", i.e., rotate from their 3D
position into the plane of the screen.  The SIGGRAPH paper was
about how to build controllers for steering toward a particular
object or piece of text that you would like to investigate.
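
As a toy illustration (my own sketch, not the PARC implementation),
the pop-up amounts to easing a panel's pose from the wall to a
screen-aligned heads-up position:

/* Ease a text panel from its place on a virtual wall to a
 * screen-aligned heads-up position as t runs from 0 to 1.
 * Names and the pose representation are invented for the example. */
typedef struct {
    double x, y, z;   /* position                    */
    double yaw;       /* rotation about the vertical */
} panel_pose;

panel_pose popup_pose(panel_pose wall, panel_pose hud, double t)
{
    double s = t * t * (3.0 - 2.0 * t);   /* smooth ease-in/ease-out */
    panel_pose p;

    p.x   = wall.x   + s * (hud.x   - wall.x);
    p.y   = wall.y   + s * (hud.y   - wall.y);
    p.z   = wall.z   + s * (hud.z   - wall.z);
    p.yaw = wall.yaw + s * (hud.yaw - wall.yaw);
    return p;
}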


--
-----------------------------------------------
Clifford Beshers
450 Computer Science Department
Columbia University
New York, NY 10027
beshers@cs.columbia.edu

clw%tornado.Berkeley.EDU@ucbvax.Berkeley.EDU (A Ghost in the Machine) (08/21/90)

In article <1990Aug16.134553.29297@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>
>This brought to mind another human ability, nearly magic, that will be
>of _crucial_ importance for navigating virtual reality, and that seems
>so far to have been neglected.  This is the "cocktail party" phenomenon:
>in a crowd of fifty conversations, our ears can somehow pick out the
>one of interest to us, and focus on it to the exclusion of all others.


	It is my understanding that this ability stems mostly from auditory
preprocessing in the brain:  the signals from both ears are compared 
(experiments show that relative amplitude, phase, and possibly timing
are taken into account), and sounds are sorted according to location.  The
shape of the ear eliminates front-to-back symmetry (again, an experimental
result).
	Thus it turns out that much of this ability is lost if the sensitivity
balance between ears changes (e.g., partial loss in one ear), even if the
actual hearing loss is quite small.  People with such minor hearing damage
do very poorly picking voices out of crowds or in noisy backgrounds, despite
having reasonably acute hearing.
	An excellent demonstration is to sit in a room with several
conversations going and tape them with a single mike.  Later, try to
pick out what each person
was saying from the monaural tape (not just the loudest, all of them).
That's what it would be like without binaural hardware and a dedicated 
preprocessor.
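
For the curious, a rough C sketch of the simplest piece of such a
preprocessor: estimate the interaural lag by cross-correlating the two
ear signals over a range of delays.  The amplitude and spectral cues
mentioned above are ignored, and the window sizes are invented.

#define NSAMP  512   /* samples per analysis window  */
#define MAXLAG  32   /* largest interaural lag tried */

/* Return the lag (in samples) at which the two signals line up best.
 * A positive result means the right ear's signal lags the left's,
 * i.e., the sound reached the left ear first. */
int interaural_lag(const double left[NSAMP], const double right[NSAMP])
{
    int lag, n, best_lag = 0;
    double best = -1e30;

    for (lag = -MAXLAG; lag <= MAXLAG; lag++) {
        double corr = 0.0;

        for (n = MAXLAG; n < NSAMP - MAXLAG; n++)
            corr += left[n] * right[n + lag];
        if (corr > best) {
            best = corr;
            best_lag = lag;
        }
    }
    return best_lag;
}

Mapping the winning lag to an approximate direction, and doing this per
candidate source, is where the real preprocessing work begins.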

	clw

mkant@a.gp.cs.cmu.edu (Mark Kantrowitz) (08/25/90)

In theory, 3D text should not be much harder than 2D text. Using an
intelligent scan conversion algorithm, first generate the orthogonal
projection of the outlines onto the plane of the "screen", then use
the algorithm to produce bitmaps. Of course, while the Bitstream and
Compugraphic algorithms could be modified to do this, neither would be
fast enough unless supported in hardware.

One could get good approximations by estimating visual point size,
rasterizing at that size, and then doing a small projection. This
would be much faster.
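
A sketch of that shortcut in C, with invented names: estimate the
apparent point size by perspective scaling, then snap to the nearest
precomputed rasterization so the remaining projective warp is small.

/* Assumes text roughly facing the viewer and simple pinhole
 * perspective; names are hypothetical. */
#include <math.h>

/* Apparent point size of text of nominal size 'pts' at distance
 * 'dist', for a display calibrated so text at 'ref_dist' shows at
 * its nominal size. */
double apparent_point_size(double pts, double dist, double ref_dist)
{
    return pts * ref_dist / dist;
}

/* Choose the nearest precomputed rasterization size, so the final
 * projective warp stays near the identity and cheap to apply. */
int nearest_cached_size(double apparent_pts, const int cached[],
                        int ncached)
{
    int i, best = cached[0];
    double best_err = fabs(apparent_pts - cached[0]);

    for (i = 1; i < ncached; i++) {
        double err = fabs(apparent_pts - cached[i]);
        if (err < best_err) {
            best_err = err;
            best = cached[i];
        }
    }
    return best;
}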

Ideally, one would need an intelligent scan conversion algorithm which
is independent of character orientation. Sadly, there is no such beast
at this time.  Naive scan conversion would fit the bill, but it looks
awful at resolutions around 100 dpi.  Boost screen resolution to
1000 dpi, and you'd have no problems; naive conversion also has the
benefit of being the fastest method.

--mark