[comp.graphics] Terrain Rendering

thaw@pixar.UUCP (Tom Williams) (07/14/89)

  > [Frank Vance writes]
  >(pages 6-7) Alliant features 4 images showing "typical" applications.
  >The one on the right, captioned "Mission Planning" shows an image of a
  >landscape, with valley, river, and mountains.  It has something of a fractal
  >character to it.

Hmmm, though I can't be absolutely positive, this sounds exactly like
an animation that we did when I worked at TASC (The Analytic Sciences
Corporation, Reading, MA). Later, we did some high-res images for marketing.

The animation was only partly done on an Alliant. All of the rendering 
and some image processing was done on PIXAR Image Computers.  The Alliant 
did all the heavy-duty IP, including some particularly nasty natural 
coloration.  (BTW, TASC provides mission planning software and analysis, 
BUT that picture was done for ABC's coverage of the Calgary Winter 
Olympics.)


  >1. Is this image just a fractal landscape, or does the image represent a	
  >"reconstruction" of a real-world location based on data points with 
  >elevation, etc.

The latter. The animation runs from the city of Calgary, along the
Bow River valley, into the Canadian Rockies, to finish at the site where
the downhill was held and the site where the cross-country events were
held. (Both endings were rendered and broadcast as separate animations.)


  >2. If the latter, how is the real landscape modeled?  As contour lines or
  >discrete points (or a grid)?  The detailing of the mountain (in particular) 
  >seems to imply either a very high level of detail for the model or some 
  >degree of fractal "interpolating" to add texture to the landscape.  

Ah well, that would be telling... The elevation was originally represented
as a sparse contour set.  Using adaptive heat-diffusion techniques,
the data was converted to a regular grid with 10m resolution.  The color
data was originally SPOT satellite data (10m res).  This was put through a 
custom natural-coloration algorithm and the results were fine-tuned.  
Then the two data sets, the natural color and the elevation, were registered.
The Alliant was great for this because the data set was immense, 5k x 5k 
pixels (actually it isn't so immense considering we used 32k x 32k 
images for other things).
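
For the curious, the diffusion trick is conceptually simple even if the
production version was adaptive and heavily tuned.  A minimal sketch (in
Python, with made-up names -- NOT the TASC code) of relaxing a regular grid
toward a set of fixed contour samples looks something like this:

    # Fill a regular elevation grid from sparse contour samples by iterative
    # diffusion: contour cells stay clamped, unknown cells relax toward the
    # average of their neighbors (a plain, non-adaptive Laplace relaxation).
    import numpy as np

    def contours_to_grid(seed, known, iterations=500):
        """seed: grid with contour elevations filled in; known: boolean mask."""
        grid = seed.copy()
        for _ in range(iterations):
            avg = 0.25 * (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                          np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
            grid = np.where(known, seed, avg)   # keep contour samples fixed
        return grid

An adaptive version would vary the neighborhood and the stopping criteria,
but clamped relaxation is the basic heat-diffusion idea.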


  >3. Assuming it is not just a fractal image, is this an application some one
  >markets?

TASC still markets terrain images and animations, but all of the technical 
people who worked on those projects have left. If you're interested in
seeing other results look at the February issue of National Geographic 
(Yellowstone fires), or ask ABC or TASC for a copy of the Calgary Animation. 
Hope this helps.

                                          -thaw-


----
Thomas Williams ..!{ucbvax|sun}!pixar!thaw

                         'For a long time I felt without style or grace
                          Wearing shoes with no socks in cold weather'
----

alves@castor.usc.edu (William Alves) (07/15/89)

> [Frank Vance writes]
>(pages 6-7) Alliant features 4 images showing "typical" applications.
>The one on the right, captioned "Mission Planning" shows an image of a
>landscape, with valley, river, and mountains.  It has something of a fractal
>character to it.

I haven't seen the image you mention, but on the Alliant in the USC Computer
Animation Lab I saw a 3-d image of Yosemite Valley. It was generated by
texture mapping a Land-Sat image of the region on top of a contour model.
Pretty darned impressive.

crum@lipari.usc.edu (Gary L. Crum) (07/19/89)

In article <4381@merlin.usc.edu> alves@castor.usc.edu (William Alves) writes:
   I haven't seen the image you mention, but on the Alliant in the USC Computer
   Animation Lab I saw a 3-d image of Yosemite Valley. It was generated by

Quick correction: The lab of Prof. Richard Weinberg (lecturer in
SIGGRAPH '89 workshop #13, on visualization) includes an Ardent Titan
computer, not an Alliant.  The Ardent is named Popeye -- ha ha.

There are Alliant FX/{1,40,80} computers on campus, however.

Gary

eugene@eos.UUCP (Eugene Miya) (07/19/89)

Sorry, I've not found the terrain rendering of things like Yosemite Valley
all that impressive.  It's getting there, but you have to learn to take
a closer look.  There is more detail and information in SOME of the original
images.  The surface mapping loses some critical details, and some shadows
get portrayed wrong (due to original image capture, etc.).  Among the better
renderings are LA the Movie and the aforementioned Calgary piece, but they
all still lack a few things here and there.

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "You trust the `reply' command with all those different mailers out there?"
  "If my mail does not reach you, please accept my apology."
  {ncar,decwrl,hplabs,uunet}!ames!eugene
  				Live free or die.

jcallen@esunix.UUCP (John Callen) (07/21/89)

Though I'm not familiar with exactly how the Alliant renderings were
generated, I have been involved in a similar project here at E&S (if you
think those images are impressive you should see the ESIG1000 real-time
demonstration where you fly over an imagery-derived database of Salt Lake
City!).  Anyway, the big question here is how terrain databases like
this one are generated.  There are three ways that I know of:

1. Digital Terrain Maps (DTM) or the Defense Mapping Agency's Digital Terrain
Elevation Data (DTED) - this provides the height fields, which can be
polygonalized and processed like any other polygon database.  The slick part
comes when you take digital imagery (like that available from satellites -
LANDSAT or Spot Image, or digitally scanned aerial photographs) and register
it with the terrain as texture maps (a rough sketch of this step follows the
list below).  JPL's "LA The Movie" does this using lots of VAX CPU hours.
E&S image generators do this texture mapping in real-time.

2. Photogrammetrically derived height fields - using a pair of stereo
images, either from satellites or aerial reconnaissance, and known camera
parameters (focal length, position, and orientation), it is possible to
reconstruct the 3-dimensional location of corresponding pixels in the
images.  Companies (like GeoSpectra in Ann Arbor, MI) make a living off
reconstructing 3-D data from imagery.  For your reference, you might want to
check out an issue of National Geographic on digitally modeling the
Himalayas - GeoSpectra did the photogrammetric reconstruction.  It is
impressive!  Once the height fields have been developed, they can be
polygonalized and texture mapped using the original imagery as texture data.

3. Voxels - actually a variation of #2, where the height fields can be
photogrammetrically derived, except that instead of the polygonalization
step, the height samples have a volumetric aspect to them.  The source
pixels in the imagery that were correlated for the generation of the height
values have their intensity values associated with the resulting height
volume.  Hughes' RealScene is a good example of this approach.
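
To make #1 a little more concrete, here's a hypothetical sketch (mine, not
anybody's real database format) of the polygonalization step: walk a regular
height grid, emit two triangles per grid cell, and hand each vertex a (u,v)
coordinate into the registered imagery:

    # Turn a regular height grid (e.g. DTED resampled to a constant spacing)
    # into triangles with per-vertex texture coordinates.  Names and the
    # 10m spacing are illustrative only.
    def polygonalize(heights, spacing=10.0):
        """heights[r][c] -> list of triangles; each vertex is (x, y, z, u, v)."""
        rows, cols = len(heights), len(heights[0])
        tris = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                def vert(i, j):
                    return (j * spacing, i * spacing, heights[i][j],
                            j / (cols - 1.0), i / (rows - 1.0))  # (u,v) in [0,1]
                a, b = vert(r, c), vert(r, c + 1)
                cc, d = vert(r + 1, c + 1), vert(r + 1, c)
                tris.append((a, b, cc))   # two triangles per grid cell
                tris.append((a, cc, d))
        return tris

A real image generator would of course add level-of-detail handling and much
better texture filtering on top of this, but that's the shape of the data.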

Each of these approaches has different benefits and drawbacks.  Anything
that is photogrammetrically based is VERY computationally intensive, as the
process is done on a per-pixel basis (with a correlation function).  Spot
satellite images come as roughly a 6Kx6K data set (remember, you need a pair
of them for reconstruction!), so we're talking LOTS of data.  Though I have
seen some crude PC-based photogrammetry systems, I haven't seen any
ambitious enough to work on large terrain areas.
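
Just to give a feel for why the per-pixel correlation is such a CPU hog,
here's an oversimplified, illustrative sketch: for each left-image pixel you
slide a window along the right image and keep the shift with the best
normalized correlation.  The window and search sizes are invented, and a
real system rectifies the images first and turns disparity into height via
the camera parameters:

    import numpy as np

    def best_disparity(left, right, row, col, half=3, max_d=64):
        # (2*half+1)-square patch around the left-image pixel
        patch = left[row-half:row+half+1, col-half:col+half+1].astype(float)
        patch -= patch.mean()
        best_score, best_d = -1.0, 0
        for d in range(max_d):
            if col - d - half < 0:
                break
            cand = right[row-half:row+half+1, col-d-half:col-d+half+1].astype(float)
            cand -= cand.mean()
            denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
            score = (patch * cand).sum() / denom if denom else 0.0
            if score > best_score:              # normalized cross-correlation
                best_score, best_d = score, d
        return best_d

Multiply that inner loop by every pixel of a 6Kx6K pair and you can see
where the CPU hours go.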

BTW, this has been a significant disappointment to me as far as
workstation capabilities go.  Surveying the marketplace, there are FEW
workstations out there capable not only of manipulating data sets
of this magnitude, but also of displaying extremely large texture-mapped
fields once they've been built!  It seems as though the industry is still stuck on
the ol' point-line-poly mentality with a few novel lighting models thrown in
for thrills.  Heaven help the serious earth resources applications (and you
could even throw in the medical imaging community) whose data types differ
so radically from point-line-poly approaches!  Maybe "Visualization" will
offer some promise for these applications ...

John Callen
Project Lead, Photo-Derived Modeling Project
Evans & Sutherland
Simulation Division
600 Komas Drive
Salt Lake City, UT  84108
801-582-5847

jcallen@esunix.UUCP (John Callen) (07/21/89)

In article <4393@eos.UUCP>, eugene@eos.UUCP (Eugene Miya) writes:
> Sorry, I've not found the terrain rendering of things like Yosemite Valley

If I'm not mistaken, the terrain information for Yosemite Valley was also
generated by GeoSpectra (in Ann Arbor, MI).  If anyone knows otherwise or
can confirm this, I'd appreciate it.

> all that impressive.  Its getting there, but you have to learn to take
> a closer look.  There is more detail and information in SOME of the original
> images.  

One of the problems with working with satellite imagery is the resolution of
the imagery.  Spot Image's best resolution is 10m panchromatic data.
That's pretty coarse if you're flying "in the weeds" as a lot of military
helicopter pilots want to do.  On the other hand, aerial photography has
incredible resolution, limited only by the actual grain of the film
(and they use DARN fine-grained film!).  Taking an aerial photograph and
scanning it with a high-resolution scanner can give you some truly awesome
digital source data.  Of course, it takes a lot more aerial photographs to
cover an area comparable to that covered by satellite imagery, so you end up
digitally mosaicking the pictures together - TEDIOUS.  Of course, I've
also seen texture mapping onto terrain that has rude sampling errors
resulting in blockiness that looks totally unnatural (kind of like a Seurat
painting done with square paint chips!).  Better sampling methods still have
problems when a texel covers a very large area of screen space - it looks
really blurred because there isn't any higher-frequency data to display.
Still and all, I'd rather build complex images using imagery-derived texture
maps than have to go in and hand-model all that information!
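
A quick back-of-the-envelope (with numbers I'm assuming, not anybody's spec)
shows how bad the texel-magnification problem gets down in the weeds:

    # Roughly how many screen pixels does one 10m Spot texel cover when you
    # look straight down from low altitude?  Simplified nadir-view geometry.
    import math
    altitude, fov_deg, screen_lines, texel_m = 100.0, 60.0, 1024, 10.0
    metres_visible = 2 * altitude * math.tan(math.radians(fov_deg / 2))
    pixels_per_metre = screen_lines / metres_visible
    print(texel_m * pixels_per_metre)   # roughly 89 pixels per 10m texel

One texel smeared across most of a hundred pixels is why it looks so
blurred -- there just isn't any higher-frequency data to show.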
 
> The surfacing mapping loses some critical details, some shadows
> get portrayed wrong (due to original image capture, etc).  

Yes, unfortunately artifacts from the original images can contribute some
really bogus visual cues to the display.  One of my favorites is imagery
that was taken when the sun angle was really low.  Shadows are exaggerated
and VERY noticeable.  This is one of the reasons why Spot Image does most of
their image gathering when the local time below is between 10AM and 1PM - the
shadows are really shortened.  Interestingly enough, I have seen
demonstrations of fairly intense false-shadowed images (where the shadows
are directly opposite the digital sun angle for a scene) that look pretty
good.  But now we're getting into the realm of visual perception and that's not
my area of expertise! ...
 
> Of the better
> renderings are LA the Movie, the aforementioned Calgary piece, but they
> all still lack a few things here and there.
> 

Yes, JPL's "LA, the Movie" is definitely a must see for those who haven't
already (it was distributed on one of the recent SIGGRAPH videos on
visualization).  Of course, bear in mind that generating that film loop took
lots of VAX CPU hours!  E&S's ESIG1000 high performance image generator does
that type of thing in real-time!  There's a great demonstration database
that flies you over and around the Salt Lake City valley ala "LA, the Movie"
except in real-time.  I sure hope we're showing a loop of it during the film
show at SIGGRAPH this year!

John Callen
Project Lead, Photo-Derived Modeling Project
Evans & Sutherland
Simulation Division
600 Komas Drive
Salt Lake City, UT  84108
801-582-5847