[comp.graphics] Digital Holography

baker@csl.dl.nec.com (Larry Baker) (05/08/91)

Does anyone have any references for computer-generated holography?

Please mail responses if possible; I will summarise - I don't always have
time to peruse the net, and our groups expire quickly.

Thanks,

Larry
-- 
Larry Baker
NEC America C&C Software Laboratories, Irving (near Dallas), TX
baker@texas.csl.dl.nec.com  cs.utexas.edu!necssd!baker

will@rins.ryukoku.ac.jp (will) (05/09/91)

In article <1991May7.215514.6676@csl.dl.nec.com>, baker@csl.dl.nec.com (Larry Baker) writes:
>Does anyone have any references for computer-generated holography?
>

	The following is from previous postings on this subject:

	wiml@milton.acs.washington.edu:
   There was a "how to" article on this in the Apr-May 1990 issue of
Circuit Cellar Ink ('issue 14'). The author managed to generate holograms
by taking a picture of his VGA screen and photoreducing it, but it should
be possible to photoreduce, say, laser printer output and get better
results from having more dots ...
   Last time this topic came up (around Apr-May 1990) there was some
source code posted to do the calculation. I didn't save any of it, however.

	From halazar@media-lab.MEDIA.MIT.EDU Sat Jan 12 13:17:17 1991:
Taking a momentary break from the ol' thesis to answer this repeated
and nagging question, "What about those computer generated holograms,
anyway?", he dived in....

ALL 94% OF YOU EVER WANTED TO KNOW ABOUT COMPUTER GENERATED HOLOGRAMS

A hologram is a medium that records the direction and intensity of
light, in contrast to a photograph, which only records light's
intensity.  Typically, the holographic material (usually a high
resolution photosensitive emulsion) records an interference pattern
caused by the simultaneous exposure of two sources of coherent light:
one reflected from the object being imaged, the other directly from a
reference or carrier beam.  This interference pattern is such that if
the developed hologram is placed in the original reference beam, light
is diffracted or reflected in such a way that the original object
appears to float in space at its original location.  The spatial
relationship between the viewer and objects in the scene appears
identical in "real life" and in the hologram.  More complicated
holographic processes can produce white-light illuminable, even
multi-color holograms.

Computer generated holograms replace the objects in the scene with
synthetic objects.  Presently, two major types of computer generated
holograms exist.  The first, and the most difficult to produce, is
commonly called a CGH (computer generated hologram; yes, brace
yourself for confusion).  CGHs are made by calculating the
interference patterns to be recorded on the holographic plate by first
figuring out what part of the synthetic object is visible from what
part of the hologram, then summing the phase and amplitude of the
light that each part of the object reflects.  For interesting objects,
this calculation must be performed for many points on the hologram
because the spatial frequencies range from 100-1000 fringes per
millimeter.  Recording the information onto the holographic medium is
also a problem;  for CGH optical elements, for instance, the pattern
is often recorded using an electron beam writer.
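The brute-force summation described above can be sketched in a few lines. This is a hypothetical toy model, not code from any actual CGH system: for each sample on a 1-D strip of hologram, sum the complex amplitude from every object point, add a tilted plane-wave reference beam, and record the intensity. The wavelength, geometry, and names are all illustrative assumptions.

```python
# Toy brute-force CGH fringe calculation (1-D strip of plate).
# All parameters are illustrative, not from a real system.
import cmath
import math

WAVELENGTH = 633e-9              # HeNe laser, metres
K = 2 * math.pi / WAVELENGTH     # wavenumber

def fringe_intensity(x, object_points, ref_angle=0.1):
    """Recorded intensity at plate position x (plate lies along z=0)."""
    field = 0j
    for (ox, oz, amp) in object_points:
        r = math.hypot(x - ox, oz)                # distance to object point
        field += amp * cmath.exp(1j * K * r) / r  # spherical wavelet
    # off-axis plane-wave reference beam
    field += cmath.exp(1j * K * math.sin(ref_angle) * x)
    return abs(field) ** 2

# Fringes at up to 1000/mm demand sampling at roughly 2000 samples/mm
# (Nyquist), which is why whole plates are so expensive to compute.
point = [(0.0, 0.05, 1.0)]       # one point source 5 cm from the plate
strip = [fringe_intensity(i * 0.5e-6, point) for i in range(2000)]  # 1 mm
```

Note that each sample is independent of every other sample; the cost comes purely from the number of samples times the number of object points.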

Although computing fringe patterns may seem like the obvious way to
make computed holograms, the technique is impractical for large,
complicated, static images.  CGH is computationally viable for simple
or repetitive interference patterns, such as optical elements.
Computing fringe patterns is also useful for dynamic holography (or
holographic video).  In MIT's system, data from a memory store is
converted to an analog signal and used to modulate an acoustic signal
emitted from a transducer.  This transducer is coupled to an optical
crystal in which the sound waves form compression patterns capable of
diffracting light.  A small crystal can be used to "sweep out" a large
diffractive area.  The diffractive pattern in memory is a holographic
fringe pattern, currently computed at up to several frames per second
(for simple wireframe objects) using a 16K processor Connection
Machine 2.  The memory store is the CM2's framebuffer.  However, the
image size is still quite small (a 3x3x3 cm usable volume, updated at
40 Hz, I'd guess), and complicated objects take a long time to compute.

High quality synthesized display holograms are almost exclusively
produced using a technique known as holographic stereography.  If a
hologram is analogous to a window onto the original scene, then a
stereogram is a series of many small slit windows, each only big
enough horizontally to fit the pupil of the viewer's eye when the
viewer stands up next to the plate.  Instead of a view onto a 3D
scene, each little window has information about a single, 2D
projection of that scene.  These projections can be created using a
moving cinema camera or standard polygonal or raytracing computer
graphics program.  The different views are computed by moving the
camera, with its lens axis always perpendicular to the camera's
direction of travel, horizontally through the view zone.  A new image
of the scene is captured every pupil's width or so.  To make the
stereogram, these images are projected using laser light onto a
diffusion screen, and a vertical slit of a holographic plate is
exposed to the screen and to a reference beam.  The geometrical
relationship of the slit to the projection screen is the same as the
relationship between the camera and its plane of focus when the view
for that slit was captured.  So when the hologram is illuminated, a
viewer looking through the plate actually looks through two different
slits, and thus sees two different image perspectives, the same ones
that would have been seen were the viewer really looking at the
object.
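The capture geometry described above is easy to sketch: the camera slides horizontally through the view zone, grabbing one perspective roughly every pupil-width. The numbers below are illustrative guesses, not measurements from any actual rig.

```python
# Sketch of the stereogram capture path: one camera station per
# pupil-width across the view zone.  All figures are assumptions.

PUPIL_WIDTH_MM = 2.5   # assumed pupil diameter

def camera_stations(view_zone_width_mm, step_mm=PUPIL_WIDTH_MM):
    """Horizontal camera positions (in mm) across the view zone."""
    n_views = int(view_zone_width_mm / step_mm) + 1
    return [i * step_mm for i in range(n_views)]

# A ~25 cm wide view zone at one view per pupil-width gives on the
# order of 100 perspectives.
stations = camera_stations(250.0)
```

Each station produces one ordinary 2-D rendering, which is why standard polygonal or ray-tracing renderers slot straight into this pipeline.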

A second hologram, called a transfer hologram, is commonly used to
allow the viewer to stand some distance from the stereogram.  The
transfer hologram is actually a hologram of the slit hologram.  When
illuminated, the transfer hologram projects an image of the slits of
the master hologram out into space, so the viewer can easily step into
the master plane without suffering facial lacerations.  Because images
are only captured side to side, the stereogram exhibits only
horizontal parallax:  vertical viewer motion doesn't change the
appearance of the subject.

The holographic stereogram has a lot going for it.  The input
perspectives are relatively easy to produce using widely available
computer graphics techniques.  In general, interesting and realistic
graphics hacks look even more interesting and realistic in a
stereogram. Only about 100 perspective images need to be generated for
a standard 20x25cm (8x10") stereogram.  Transfer holograms can be
made in full, vibrant color, with a little work.  Size is almost
unlimited; with a little cleverness, a rig that would fit in a
suburban garage could crank out life size computer images of Miatas.
Fringe-pattern-type CGHs just aren't anywhere near as convenient,
useful, or satisfying, and won't be for quite a while.

But, sadly, only a handful of places in the world can make
stereograms, and even fewer know how.  Most of them are research
facilities, like our group.  The rest are usually involved in mass
production or commissioned work, so it's tough unless your images or
data are really cool.  A full, high quality stereogram lab costs about
$500,000.  And the holography market is hardly booming.  The
technology almost exists for a holographic printer computer
peripheral, which would open the world of low cost (couple dollars a
page), high quality 3D hardcopy to many more people, but no one wants
to put much money into it.  You'd think the 1 meter square computer
generated hubcaps in the basement would convince somebody....

So the short answer is, "No, it isn't hard to compute a holographic
image.  It's really hard, however, to make it into a hologram."
Unless you'd like to be a lab sponsor, that is.

                                        --Michael Halle
                                          Spatial Imaging Group
                                          MIT Media Lab
                                          mhalle@media-lab.media.mit.edu

	HOPE THIS HELPS.


                                        William Dee Rieken
                                        Researcher, Computer Visualization
                                        Faculty of Science and Technology
                                        Ryukoku University
                                        Seta, Otsu 520-21,
                                        Japan

                                        Tel: 0775-43-7418(direct)
                                        Fax: 0775-43-7749
                                        will@rins.ryukoku.ac.jp

rick@pangea.Stanford.EDU (Rick Ottolini) (05/09/91)

IMHO digital holography will be THE 3-D graphics technique of the future.
Current rendering techniques SIMULATE 3-D through the use of lighting
models, shape [perspective, stereo], and motion [fly-thru animation,
virtual reality animation].  Holography seeks to compute actual light waves
themselves.  I envision "Princess Leia" displays, floating images like that
of the help message in Star Wars.  This avoids the sensory sheaths the
VR people are using.
The mathematics of digital holography are fairly well known, but the computations
are expensive.  They are similar to other imaging mathematics such as my field
of seismic imaging.  Even with all kinds of shortcuts thrown in, it will take
billions to trillions of calculations per second to display interesting
holographic images.  With the computing speeds increasing an order of magnitude
every five years and no end in sight, we are talking about the early 21st
century for this capability.  The Popular Science article of last year equates
the complexity of MIT Media Lab holo-images with 2-D graphics on oscilloscopes 30
years ago.  So this technology is probably realizable in most readers' lifetimes.
As pointed out in an earlier posting, much work still has to be done on the
display hardware, that is, getting the numerical description of the light waves
converted into light.

tmb@ai.mit.edu (Thomas M. Breuel) (05/10/91)

In article <1991May9.153446.21742@leland.Stanford.EDU> rick@pangea.Stanford.EDU (Rick Ottolini) writes:

   IMHO digital holography will be THE 3-D graphics technique of the future.
   Current rendering techniques SIMULATE 3-D through the use of lighting
   models, shape [perspective, stereo], and motion [fly-thru animation,
   virtual reality animation].  Holography seeks to compute actual light waves
   themselves.  I envision "Princess Leia" displays, floating images like that
   of the help message in Star Wars.  This avoids the sensory sheaths the
   VR people are using.

Digital holography will probably eventually have its place. However,
holography is still bound by the laws of optics. If something comes in
between you and the holographic screen, the screen cannot project
beyond the obstacle. Likewise, the appearance of an object floating in
front of a hologram is only maintained if you are looking at the
hologram; you cannot have an object float "above", say, a table if you
are looking at it "from the side".

The requirements of virtual reality go further, and it remains to be
seen whether any practical solutions can be found that do not require
the user to wear goggles.

uselton@nas.nasa.gov (Samuel P. Uselton) (05/10/91)

In article <1991May9.153446.21742@leland.Stanford.EDU> rick@pangea.Stanford.EDU (Rick Ottolini) writes:
>IMHO digital holography will be THE 3-D graphics technique of the future.
>Current rendering techniques SIMULATE 3-D through the use of lighting
>models, shape [perspective, stereo], and motion [fly-thru animation,
>virtual reality animation].  Holography seeks to compute actual light waves
>themselves.  
	As a tray racer, I mean ray tracer, I've thought a bit on this,
	and have a little experience too.
>I envision "Princess Leia" displays, floating images like that
>of the help message in Star Wars.  This avoids the sensory sheaths the
>VR people are using.
>The mathematics of digital holography are fairly well known, but the computations
>are expensive.  
	Probably more than you realize.
>They are similar to other imaging mathematics such as my field
>of seismic imaging.  
	I've also consulted with "Big Oil".  Seismic imaging is more similar
	to image processing and scene analysis than to image generation
	techniques.  You HAVE the image, and are trying to guess the most
	likely scene which could have created it.   
>Even with all kinds of shortcuts thrown in, it will take
>billions to trillions of calculations per second to display interesting
>holographic images.  
	Current realistic image synthesis techniques can take from 100 million
	 to 1 billion operations per image.  Laser holography is AT LEAST
	a couple of orders of magnitude more.  And you still WANT the animation
	so add another one or two orders of magnitude.  I see trillions of
	operations per second as a LOWER bound on what it might take.
	The NAS project at NASA Ames regards pushing industry into producing
	a teraflops computer by the year 2000 as a "Grand Challenge" problem.
	It'll be quite a while longer before that capacity finds its way
	into workstations for the broad market.
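	The arithmetic behind these estimates is easy to check on the
	back of an envelope.  Plate size, fringe density, ops per
	sample, and scene complexity below are all rough assumptions,
	not measurements:

```python
# Back-of-envelope operation count for a brute-force CGH frame.
# Every figure here is an illustrative assumption.

def cgh_ops(plate_cm=10, fringes_per_mm=1000, object_points=10_000,
            ops_per_point=10):
    samples_per_mm = 2 * fringes_per_mm               # Nyquist rate
    samples = (plate_cm * 10 * samples_per_mm) ** 2   # full 2-D plate
    return samples * object_points * ops_per_point

# A 10x10 cm plate at 1000 fringes/mm is 4e10 samples; even a modest
# scene puts a single frame in the 1e15 range, so trillions of
# ops/sec for animation is indeed a lower bound.
total = cgh_ops()
```

	At 4e15 operations per frame, even a few frames per second
	lands well beyond a teraflops machine.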
>With the computing speeds increasing an order of magnitude
>every five years and no end in sight, 
	          ^^^^^^^^^^^^^^^^^^^
	There is a growing number of "experts" pointing out limits to current
	hardware techniques that we ARE rapidly approaching.  We need 
	BREAK-THROUGH improvements in technology, not just incremental
	improvements in the technology we have now.
>we are talking about the early 21st
>century for this capability.  
	Proof of concept maybe.  To get the image quality you want, the speed
	you'll want, and the cost to make it usable by someone other than
	national labs, I think it'll most likely be after 2030.
>The Popular Science article of last year equates
>the complexity of MIT Media Lab holo-images with 2-D graphics on oscilloscopes 30
>years ago.  So this technology is probably realizable in most readers' lifetimes.
	Some yes. Most? That depends as much on health technology as 
	anything.
>As pointed out in an earlier posting, much work still has to be done on the
>display hardware, that is, getting the numerical description of the light waves
>converted into light.
	That too.

Sam Uselton		uselton@nas.nasa.gov
employed by CSC		working for NASA (Ames)		speaking for myself

halazar@media-lab.media.mit.edu (Michael Halle) (05/11/91)

"Rendering with/like holography" (more correctly, rendering at the
light wave level) is a little like building macroscopic structures out
of individual molecules: sure, you could imagine doing it, and with
enough effort you could, but in the general case you might be able to
think of a better way.  In a similar way, natural phenomena can be
considered solely in terms of quantum effects, but classical mechanics
generally work pretty well and are much less painful.

What level of realism are you trying to achieve that would require
such accuracy?  It's a lot of work to go to just to spatially quantize
to x by y (by z?) pixels.  The display (and the eye itself) usually
define the reasonable limits.  Perceptual restrictions are *always* an
issue in display.

Sure, computing holograms can be expensive.  Holograms (usually)
contain more information than do two-dimensional images, so computing
them *should* take longer.  And the expense of the calculation is
proportional to the stupidity of the approach (and not linearly,
either), especially if you're a purist and insist on diffraction
limited three-dimensional images.  But if you make those tradeoffs
like they taught ya in engineerin' school and don't do more work than
you have to, computation time for 3-D images might be only, say, ten
times that for 2-D images.  And you might be able to actually make
holograms instead of dreaming about them.  (And a previous poster was
right; unless there's some physics that we don't know about, mid-air
projection of 3-D images, with no display material in front or behind,
is right out.)

Here's a little thought question that may shed some light on the
computation question:  What is the intrinsic information content of a
pure sine wave oscillating at an arbitrarily high frequency?  Think
about the relative costs of the different ways of producing such a
sine wave.  Would your answer differ if the signal were specified to
be analog or digital?


					Michael Halle
					Spatial Imaging Group
					MIT Media Laboratory
					mhalle@media-lab.media.mit.edu

rick@pangea.Stanford.EDU (Rick Ottolini) (05/11/91)

In article <1019.282B28FD@nwark.fidonet.org> Samuel.P..Uselton@p0.f13.n391.z1.fidonet.org (Samuel P. Uselton) writes:
>Newsgroups: comp.graphics
>
>In article <1991May9.153446.21742@leland.Stanford.EDU> rick@pangea.Stanford.EDU (Rick Ottolini) writes:
>>They are similar to other imaging mathematics such as my field
>>of seismic imaging.  
>	I've also consulted with "Big Oil".  Seismic imaging is more similar
>	to image processing and scene analysis than to image generation
>	techniques.  You HAVE the image, and are trying to guess the most
>	likely scene which could have created it.   

I represent Big Oil.  Seismics is my business and graphics an avocation.
The two disciplines use approximately the same universe of algorithms,
but in different proportions.  The cross-fertilization of ideas is fruitful.

>>Even with all kinds of shortcuts thrown in, it will take
>>billions to trillions of calculations per second to display interesting
>>holographic images.  
>	Current realistic image synthesis techniques can take from 100 million
>	 to 1 billion operations per image.  Laser holography is AT LEAST
>	a couple of orders of magnitude more.  And you still WANT the animation
>	so add another one or two orders of magnitude.  I see trillions of
>	operations per second as a LOWER bound on what it might take.
>	The NAS project at NASA Ames regards pushing industry into producing
>	a teraflops computer by the year 2000 as a "Grand Challenge" problem.
>	It'll be quite a while longer before that capacity finds its way
>	into workstations for the broad market.

A typical seismic imaging algorithm that took 100,000 seconds in the early 1970s
takes about a second these days.  Two orders of magnitude are due to smarter
algorithms and three orders to the fact that my desktop RS/6000 is
a thousand times faster than my old PDP-11/34.  These improvements will continue
for both seismics and graphics.

>>With the computing speeds increasing an order of magnitude
>>every five years and no end in sight, 
>	          ^^^^^^^^^^^^^^^^^^^
>	There is a growing number of "experts" pointing out limits to current
>	hardware techniques that we ARE rapidly approaching.  

I've heard this doom and gloom for the past 15 years and remain unconvinced.
"Breakthroughs" aren't always obvious when they start.  There is enough stirring 
in the pot now to keep us occupied for a long time.


eugene@nas.nasa.gov (Eugene N. Miya) (05/11/91)

In article <1991May9.153446.21742@leland.Stanford.EDU>
rick@pangea.Stanford.EDU (Rick Ottolini) writes:
>IMHO digital holography will be THE 3-D graphics technique of the future.
>Current rendering techniques SIMULATE 3-D through the use of lighting
>models, shape [perspective, stereo], and motion [fly-thru animation,
>virtual reality animation].  Holography seeks to compute actual light waves
>themselves.  
>I envision "Princess Leia" displays, floating images like that
>of the help message in Star Wars.  This avoids the sensory sheaths the
>VR people are using.

I just happened to glance this.  Two points.

1) I am a little bit disturbed by the special effects Star Wars image.
2) I don't think it will be "THE" but it will certainly be a powerful
technique.  I think part of the effectiveness is dependent upon the
audience (who pays the bucks and what their background expectations are).

This film (SW) seems to be THE image of what we think holography might be.
I hope not.  We have some fundamental human limitations with our
eyeballs.  Retinas are flat.  We need to look (no pun intended) at
what we want in 3-D (and 4-D).  Things like depth cues, superposition
information, etc.  But holography fundamentally does not get rid of problems
like hidden objects.  You can't see what's behind Leia; she obscures it.
That's not good.  You won't be able to see behind that 3-D rendering of
an oil reservoir without doing something else (head parallax, time
varying (or not) cross-sections, etc.).  This costs computation time,
storage, etc.

I still think we will need ball-and-stick models, computer generated CAD-type
3-D sculpture outputs, sounds, etc.  But holography might be helped if
optical benches and analog and digital optical computers were cheaper
and more available.  Computing one pixel or voxel at a time is inefficient.

I do think it's neat, we should fund it, and we should have lots of people
playing with holograms.  Hell, I have holograms sitting on my desk.
I just don't think it will be the end-all of computer graphics.

--eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  Resident Cynic, Rock of Ages Home for Retired Hackers
  {uunet,mailrus,other gateways}!ames!eugene

npw@eleazar.dartmouth.edu (Nicholas Wilt) (05/11/91)

In article <1991May10.165256.12414@nas.nasa.gov> uselton@nas.nasa.gov (Samuel P. Uselton) writes:
>In article <1991May9.153446.21742@leland.Stanford.EDU> rick@pangea.Stanford.EDU (Rick Ottolini) writes:
>>Even with all kinds of shortcuts thrown in, it will take
>>billions to trillions of calculations per second to display interesting
>>holographic images.  
>	Current realistic image synthesis techniques can take from 100 million
>	 to 1 billion operations per image.  Laser holography is AT LEAST
>	a couple of orders of magnitude more.  And you still WANT the animation
>	so add another one or two orders of magnitude.  I see trillions of
>	operations per second as a LOWER bound on what it might take.
>	The NAS project at NASA Ames regards pushing industry into producing
>	a teraflops computer by the year 2000 as a "Grand Challenge" problem.
>	It'll be quite a while longer before that capacity finds its way
>	into workstations for the broad market.
>>With the computing speeds increasing an order of magnitude
>>every five years and no end in sight, 
>	          ^^^^^^^^^^^^^^^^^^^
>	There is a growing number of "experts" pointing out limits to current
>	hardware techniques that we ARE rapidly approaching.  We need 
>	BREAK-THROUGH improvements in technology, not just incremental
>	improvements in the technology we have now.

What about massively parallel architectures?  If digital holography
techniques are as trivially parallelizable as ray tracing, then you don't
even need any bandwidth between nodes.

Sure there are issues (load balancing and stuff).  That's just software.
_Lots_ of people are working on better software for parallel architectures.
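The "no bandwidth between nodes" claim can be sketched directly: the value of each hologram sample depends only on the (read-only) scene description, never on neighboring samples, so the plate splits into chunks that workers compute with zero communication. The sequential loop below stands in for shipping each chunk to a separate node; all names and numbers are illustrative.

```python
# Sketch of the embarrassingly-parallel decomposition: each sample
# reads only its position and the shared scene, so chunks need no
# communication.  Names and figures are illustrative assumptions.
import math

def sample_value(x, scene):
    """Toy per-sample fringe computation; reads only x and the scene."""
    total = 0.0
    for (ox, amp) in scene:
        total += amp * math.cos(1000.0 * abs(x - ox))
    return total

def worker(chunk, scene):
    """One node's share of the plate: a list of sample positions."""
    return [sample_value(x, scene) for x in chunk]

scene = [(0.0, 1.0), (0.3, 0.5)]
samples = [i / 1000.0 for i in range(8000)]
n_nodes = 4
chunks = [samples[i::n_nodes] for i in range(n_nodes)]   # static split
partials = [worker(c, scene) for c in chunks]            # no inter-chunk data
```

Reassembling the result is a pure gather, which is exactly the property that makes ray tracing (and, plausibly, fringe computation) attractive on massively parallel machines.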

>>The Popular Science article of last year equates
>>the complexity of MIT Media Lab holo-images with 2-D graphics on oscilloscopes 30
>>years ago.  So this technology is probably realizable in most readers' lifetimes.
>	Some yes. Most? That depends as much on health technology as 
>	anything.

The hardware guys specialize in disproving statements like this.

>
>Sam Uselton		uselton@nas.nasa.gov
>employed by CSC		working for NASA (Ames)		speaking for myself

--Nick
  npw@eleazar.dartmouth.edu

eugene@nas.nasa.gov (Eugene N. Miya) (05/13/91)

In article <1991May11.152915.6488@dartvax.dartmouth.edu>
npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>In article <1991May10.165256.12414@nas.nasa.gov>
uselton@nas.nasa.gov (Samuel P. Uselton) writes:
>>In article <1991May9.153446.21742@leland.Stanford.EDU>
rick@pangea.Stanford.EDU (Rick Ottolini) writes:
>>>With the computing speeds increasing an order of magnitude
>>>every five years and no end in sight, 
>>	          ^^^^^^^^^^^^^^^^^^^
>>	There is a growing number of "experts" pointing out limits to current
>>	hardware techniques that we ARE rapidly approaching.  We need 
>>	BREAK-THROUGH improvements in technology, not just incremental
>>	improvements in the technology we have now.
>
>What about massively parallel architectures?  If digital holography
>techniques are as trivially parallelizable as ray tracing, then you don't
>even need any bandwidth between nodes.
>
>Sure there are issues (load balancing and stuff).  That's just software.
>_Lots_ of people are working on better software for parallel architectures.
>The hardware guys specialize in disproving statements like this.

You have three things working here.
1) Do not trivialize the software problem.  If the problem were simple,
it would have been solved in 1968.  It is similar to the "automatic programming"
problem of the 1950s.
2) Everybody has all these graphics codes in sequential languages --
the future dusty decks.
We are either going to have to throw out all of this code, and rewrite,
or have some awfully good software.
3) Parallel architectures are an O(n) (or at best O(n^2)) solution to
problems that many times have greater complexity.  This is why students
should take computing theory classes.

We can only rely on so many hardware improvements.  Got to remember we
compute in the physical (real, not virtual) world.  Start reading and
raising issues in comp.arch (although few architects read that group any more).
There are some ends in sight.

Those components (that software) which run fastest and most reliably
are those which aren't there.  --Gordon Bell

"It's the things that nobody knows anything about that we can discuss..."
--R.P. Feynman

--eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  Resident Cynic, Rock of Ages Home for Retired Hackers
  {uunet,mailrus,other gateways}!ames!eugene

aipdc@castle.ed.ac.uk (Paul Crowley) (05/13/91)

In article <1991May13.045426.7871@nas.nasa.gov> eugene@amelia.nas.nasa.gov (Eugene N. Miya) writes:
>3) Parallel architectures are an O(n) (or at best O(n^2)) solution to
>problems that many times have greater complexity.  This is why students
>should take computing theory classes.

Yeah, what we really need is quantum mechanical computers!

(Quantum mechanical computers do different bits of the calculation in
different eigenstates.  Essentially, they can fork every millisecond
without limit.  The hard part is combining the results at the end, since
it's probabilistic.  Theoretically possible it is; "feasible" wouldn't
be my choice of wording...)
                                         ____
\/ o\ Paul Crowley aipdc@castle.ed.ac.uk \  /
/\__/ Part straight. Part gay. All queer. \/

rick@pangea.Stanford.EDU (Rick Ottolini) (05/14/91)

In article <1991May10.235611.18365@nas.nasa.gov> eugene@amelia.nas.nasa.gov (Eugene N. Miya) writes:
>This film (Star Wars) seems to be THE image of what we think holography might be.
>I hope not.  We have some fundamental human limitations with our
>eye balls.  Retinas are flat.  We need to look (no pun intended) at
>what we want in 3-D (and 4-D).  Things like depth cues, superposition
>information, etc.  But it fundamentally does not get rid of problems
>like hidden objects.  You can't see what's behind Leia; she obscures it.
>That's not good.  You won't be able to see behind that 3-D rendering of
>an oil reservoir without doing something else (head parallax, time
>varying (or not) cross-sections, etc.)  This costs computation time,
>storage, etc.

I prefer to think of digital holography as a RENDERING method not a MODELING
method.  As rendering, we would compare DH to the current methods of lighting
models, etc., and the sensory sheaths of virtual reality.  IMHO DH may be better.
As to modeling, I am not implying that DH has to be "realism".  We can pull
apart objects, create fantasies that are physically unrealizable.

rick@pangea.Stanford.EDU (Rick Ottolini) (05/14/91)

I started inquiries into digital holography about 15 months ago because of a
project I wanted to try.  I wanted to make a "floating earth globe" holo
for the 20th anniversary of Earth Day.  I have the model data and a few spare
teraflops, but didn't know how easy it was to generate the hologram.

will@rins.ryukoku.ac.jp (will) (05/14/91)

In article <1019.282B28FD@nwark.fidonet.org>, Samuel.P..Uselton@p0.f13.n391.z1.fidonet.org (Samuel P. Uselton) writes:
>        The NAS project at NASA Ames regards pushing industry into producing
>        a teraflops computer by the year 2000 as a "Grand Challenge" problem.
>        It'll be quite a while longer before that capacity finds its way
>        into workstations for the broad market.
>

	From what I have seen and heard (from the people doing this research),
	this technology should be available to the government and big
	industries by 1997 (and maybe by 1995), and into the home a few years
	later.  As I have been told, the biggest problem now is not making
	teraflop computers; rather, manufacturing techniques must be updated
	for production, and quality control standards must be updated for
	mass production.  The reason cited was that these computers use
	components that require more advanced manufacturing technologies, and
	that current facilities must be redesigned to meet these requirements.

	A little side note: I was once told that another reason for the delay
	of the teraflop technologies was this:
		All of the corporations that make computers and hold these
		technologies cannot just release them at this time, even
		if they wanted to.  The reason is "Economics".  They and their
		customers have invested billions in current technology.  To make
		it absolutely obsolete overnight would destroy their companies.
		Not to forget that many of these manufacturers have warehouses
		full of new equipment ready to be sold.  Worth billions.

		This is one reason that companies do incremental scaling of
		computer technologies: to get as much money as possible with
		as small an investment as possible.  It's all "Economics".



							Will...

will@rins.ryukoku.ac.jp (will) (05/14/91)

In article <1991May10.235611.18365@nas.nasa.gov>, eugene@nas.nasa.gov (Eugene N. Miya) writes:
>You can't see what's behind Leia; she obscures it.
>That's not good.  You won't be able to see behind that 3-D rendering of
>an oil reservoir without doing something else (head parallax, time
>varying (or not) cross-sections, etc.)  This costs us computation time,
>storage, etc.
>
	Eugene, I completely disagree.  The fact that Leia is obscured is
	most important for graphics like scientific visualization, such
	as the oil reservoir problem.  If your computer can produce such
	an image, the extra costs of computation time and data storage
	won't make any difference.  Other algorithms, such as those for
	transparency, will handle the rest.
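For what it's worth, the transparency handling mentioned above usually comes down to the standard "over" compositing operator on premultiplied-alpha colors.  A minimal sketch (the function and the sample values are illustrative, not from any posted code):

```python
def over(front, back):
    """Composite premultiplied RGBA `front` over `back` (4-tuples)."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    # Each back component is attenuated by the front layer's coverage.
    return (fr + (1 - fa) * br,
            fg + (1 - fa) * bg,
            fb + (1 - fa) * bb,
            fa + (1 - fa) * ba)

red_half = (0.5, 0.0, 0.0, 0.5)   # 50%-transparent red, premultiplied
blue     = (0.0, 0.0, 1.0, 1.0)   # opaque blue
result = over(red_half, blue)      # -> (0.5, 0.0, 0.5, 1.0)
```

Applied front-to-back along each viewing ray, this is how a renderer lets you "see into" an otherwise obscuring volume like the oil reservoir.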

>I still think we will need ball-and-stick models, computer generated CAD-type
>3-D sculpture outputs, sounds, etc.  But, holography might be helped if
>optical benches and analog and digital optical computers were cheaper
>and more available.  Computing one pixel or voxel at a time is inefficient.

	Agreed, ball-and-stick models will always have their place.


>I just don't think it will be the end-all of computer graphics.

	Also agreed; I don't think that holograms will be the end of it.  There
	are so many ways that data must be shown for humans to get the most
	out of it.  Besides, once the hologram problems are solved, we will
	most likely, as is always the case, find new methods as good or
	better.  Research is never ending.  There is always a place for new
	ideas.


						Will....

mark@calvin..westford.ccur.com (Mark Thompson) (05/15/91)

In article <268@rins.ryukoku.ac.jp> will@rins.ryukoku.ac.jp (will) writes:
>	A little side note: I was once told that another reason for the delay
>	of the teraflop technologies was this:
>		All of the corporations that make computers and hold these
>		technologies cannot just release them at this time, even
>		if they wanted to.  The reason is "Economics".  They and their
>		customers have invested billions in current technology.  To make
>		it absolutely obsolete overnight would destroy their companies.
>		Not to forget that many of these manufacturers have warehouses
>		full of new equipment ready to be sold.  Worth billions.

I would believe this argument in a heartbeat for US auto manufacturers,
but I find it a little hard to swallow for high-tech computer companies.
Any company that could advance the state of the art by a few orders of
magnitude would do so instantly (provided it was cost effective), in the
hope of annihilating the competition and reaping massive profits.  High-tech
computer companies generally don't warehouse huge masses of systems because
of the cost and the volatility of the high-end market.  Today's Mega-Monster
Number Smasher is tomorrow's doorstop.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%
%      `       '        Mark Thompson                 CONCURRENT COMPUTER  %
% --==* RADIANT *==--   mark@westford.ccur.com        Principal Graphics   %
%      ' Image `        ...!uunet!masscomp!mark       Hardware Architect   %
%     Productions       (508)392-2480 (603)424-1829   & General Nuisance   %
%                                                                          %
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

falk@peregrine.Sun.COM (Ed Falk) (05/16/91)

In article <10210@castle.ed.ac.uk> aipdc@castle.ed.ac.uk (Paul Crowley) writes:
>
>Yeah, what we really need is quantum mechanical computers!
>
>(Quantum mechanical computers do different bits of the calculation in
>different eigenstates.  Essentially, they can fork every millisecond
>without limit.)

No way!  It would be a nightmare to program.  Ever try to draw an
eigenstate diagram?

		-ed falk, sun microsystems
		 sun!falk, falk@sun.com

In the future, somebody will quote Andy Warhol every 15 minutes.