[comp.sys.mac.programmer] Saving area under dialog boxes

davidl@intelob.intel.com (David Levine) (04/28/89)

In newsgroup comp.sys.mac, in article <Apr.27.10.36.05.1989.11550@math.rutgers.edu> aberg@math.rutgers.edu (Hans Aberg) writes:
> >(Would it be a stupid idea to BitCopy the area where the dialogue box
> >will appear into an array, then display the dialogue box, remove it
> >when done and BitCopy the array back to the window? Would have saved
> >me another 2 minutes)
> 
> >>Not to argue with the other points (I don't have WingZ), but this is definitely
> >>a Terrible Bad Awful Idea, and the Apple Thought Police will lock you up if
> >>you do this.
> 
> The Finder and the earlier systems actually work like this: by
> temporarily saving the portion where a dialog box or menu pops up,
> and then copying it back again. In a multitasking environment this
> does not work, because some other application may decide to change
> the graphics in the area temporarily saved. This is in fact the
> reason why, in earlier systems, all processes would hang while the
> mouse button was pressed.

OK, I've been following this discussion for a while, and I don't
understand the problem.  Yes, it would be a Bad Idea to CopyBits from
the SCREEN to a buffer and then restore after the dialog goes away.
But the original poster seems to have been talking about the
application copying the affected area of ITS OWN WINDOW to a buffer
and then restoring ITS OWN WINDOW.  What could be wrong with that?  It
seems a fairly standard technique.  (Actually, an application with
complicated drawings should keep a copy of its window in an offscreen
bitmap at all times, and blit it to the screen whenever it's needed.
Right?)
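
(To make this concrete: in rough Toolbox C, I was picturing something
like the sketch below.  It's untested, and the names and the rowBytes
arithmetic are mine, not anything official.)

    /* Save the area of MY OWN window under the dialog, run the      */
    /* dialog, then restore.  Assumes the window is the current      */
    /* port; real code should check the NewPtr result.               */
    BitMap   saved;
    Rect     r;          /* assume this holds the dialog rectangle,  */
                         /* in window-local coordinates              */
    GrafPtr  myWindow;

    GetPort(&myWindow);
    saved.bounds   = r;
    saved.rowBytes = ((r.right - r.left + 15) / 16) * 2;   /* even   */
    saved.baseAddr = NewPtr((long) saved.rowBytes * (r.bottom - r.top));

    CopyBits(&myWindow->portBits, &saved, &r, &r, srcCopy, NULL);
    /* ... show and run the dialog ... */
    CopyBits(&saved, &myWindow->portBits, &r, &r, srcCopy, NULL);
    DisposPtr(saved.baseAddr);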

Warning: I'm coming at this from recent experience with the X Window
System.  I may be making invalid assumptions about the way that the
Mac works.  Please enlighten me!

            David D. Levine                BBBBBBBBB  IIII IIII NNN  NNNN TM
        Senior Technical Writer            BBBB  BBBB iiii iiii NNNN NNNN
                                           BBBBBBBBB  IIII IIII NNNNNNNNN
UUCP: ...[!uunet]!tektronix!biin!davidl    BBBB  BBBB IIII IIII NNNN NNNN
MX-Internet: <davidl@intelob.intel.com>    BBBBBBBBB  IIII IIII NNNN  NNN
ARPA: <@iwarp.intel.com:davidl@intelob.intel.com>

keith@Apple.COM (Keith Rollin) (04/29/89)

In article <DAVIDL.89Apr28100419@intelob.intel.com> davidl@intelob.intel.com (David Levine) writes:
>In newsgroup comp.sys.mac, in article <Apr.27.10.36.05.1989.11550@math.rutgers.edu> aberg@math.rutgers.edu (Hans Aberg) writes:
>> >(Would it be a stupid idea to BitCopy the area where the dialogue box
>> >will appear into an array, then display the dialogue box, remove it
>> >when done and BitCopy the array back to the window? Would have saved
>> >me another 2 minutes)
>> 
>> >>Not to argue with the other points (I don't have WingZ), but this is definitely
>> >>a Terrible Bad Awful Idea, and the Apple Thought Police will lock you up if
>> >>you do this.
>> 
>> The Finder and the earlier systems actually work like this: by
>> temporarily saving the portion where a dialog box or menu pops up,
>> and then copying it back again. In a multitasking environment this
>> does not work, because some other application may decide to change
>> the graphics in the area temporarily saved. This is in fact the
>> reason why, in earlier systems, all processes would hang while the
>> mouse button was pressed.
>
>OK, I've been following this discussion for a while, and I don't
>understand the problem.  Yes, it would be a Bad Idea to CopyBits from
>the SCREEN to a buffer and then restore after the dialog goes away.
>But the original poster seems to have been talking about the
>application copying the affected area of ITS OWN WINDOW to a buffer
>and then restoring ITS OWN WINDOW.  What could be wrong with that?  It
>seems a fairly standard technique.  (Actually, an application with
>complicated drawings should keep a copy of its window in an offscreen
>bitmap at all times, and blit it to the screen whenever it's needed.
>Right?)
>

Saving data from your own window into an offscreen buffer is not much of a 
problem _right now_. It should even work in the foreseeable future. However, I 
would recommend against it in favor of doing it the other way around: keep 
the master image in an offscreen buffer, and refresh from it exclusively.

One of the things that some people have been clamoring for on the Mac is
pre-emptive multitasking, rather than the GNE/WNE/EA switching that goes on
now. Imagine a scenario where this is actually implemented. The program in
the background is about to put up a dialog box or some such, and is stashing
the bits in the window away somewhere in memory. However, at the same time
(or close to it), the user is futzing around in the Finder, dragging icons
around. If the user drags the icon outline over the window that is being
cached into some offscreen buffer, that outline will get stored also. When
the dialog box is put away, the window contents, plus the icon outline, will
be put back.

A better approach is to keep your master image in an offscreen buffer in the
first place. The first advantage of this is that you won't have the
aforementioned problem. The second is that you can cache more than just
what appears in the window; by caching a full page or three, you can get
significant increases in scrolling speed. (The third advantage is that the
demand for RAM goes way up and Apple gets to sell more.) :-)
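
(In rough Toolbox C, the one-time setup for such a buffer looks
something like this.  It's a sketch with my own names and no error
handling, not a blessed DTS recipe.)

    /* Build an offscreen GrafPort to hold the master image.         */
    GrafPtr  offPort;
    BitMap   offBits;
    Rect     r;          /* (0, 0, width, height) -- as big as you   */
                         /* can afford, e.g. a full page or three    */

    offBits.bounds   = r;
    offBits.rowBytes = ((r.right - r.left + 15) / 16) * 2;
    offBits.baseAddr = NewPtr((long) offBits.rowBytes * (r.bottom - r.top));

    offPort = (GrafPtr) NewPtr(sizeof(GrafPort));
    OpenPort(offPort);             /* leaves offPort as current port */
    SetPortBits(&offBits);
    PortSize(r.right - r.left, r.bottom - r.top);
    RectRgn(offPort->visRgn, &r);
    ClipRect(&r);
    EraseRect(&r);

From then on you draw into offPort, and on update events you CopyBits
the visible part of offBits into the window.  Scrolling becomes a
single CopyBits from a different source rectangle.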


------------------------------------------------------------------------------
Keith Rollin  ---  Apple Computer, Inc.  ---  Developer Technical Support
INTERNET: keith@apple.com
    UUCP: {decwrl, hoptoad, nsc, sun, amdahl}!apple!keith
"Argue for your Apple, and sure enough, it's yours" - Keith Rollin, Contusions

ra_robert@gsbacd.uchicago.edu (04/30/89)

In article <29827@apple.Apple.COM>, keith@Apple.COM (Keith Rollin) writes...

> 
>Saving data from your own window into an offscreen buffer is not much of a 
>problem _right now_. It should even work in the foreseeable future. However, I 
>would recommend against it in favor of doing it the other way around: keep 
>the master image in an offscreen buffer, and refresh from it exclusively.
[...]


What about in a paint-type program?  I've been working on one, and when someone
"draws" on the screen I first draw it on the screen (at the simplest level,
with calls such as LineTo) and then blit it to my offscreen buffer (from which
I refresh my image) for permanent storage when the drawn object is done (i.e.,
on mouse-up).  Wouldn't there be times when you need to CopyBits from screen
memory to your offscreen buffer?  I suppose one _could_ draw to the offscreen
buffer and CopyBits it on screen constantly, but that sounds fairly awful in
terms of speed, etc.
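
(For reference, the mouse-up blit I'm doing now amounts to roughly the
following -- schematic C, with made-up names like objectRect:)

    /* The shape was drawn directly in the window; on mouse-up,      */
    /* copy the affected rectangle into the offscreen buffer.        */
    SetPort(myWindow);
    CopyBits(&myWindow->portBits, &offBits,
             &objectRect, &objectRect, srcCopy, NULL);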


Robert
------
ra_robert@gsbacd.uchicago.edu
------
generic disclaimer: all my opinions are mine

Marriott1@AppleLink.Apple.Com (Greggy) (04/30/89)

In article <2993@tank.uchicago.edu> ra_robert@gsbacd.uchicago.edu writes:
> What about in a paint-type program?  I've been working on one, and when
> someone "draws" on the screen I first draw it on the screen (at the
> simplest level, with calls such as LineTo) and then blit it to my
> offscreen buffer (from which I refresh my image) for permanent storage
> when the drawn object is done (i.e., on mouse-up).  Wouldn't there be
> times when you need to CopyBits from screen memory to your offscreen
> buffer?  I suppose one _could_ draw to the offscreen buffer and CopyBits
> it on screen constantly, but that sounds fairly awful in terms of
> speed, etc.

Microsoft Word caches its data this way when it scrolls.  The result is 
that if you have a floating window (such as Overtime, a clock that always 
floats above everything else), then it also gets cached!  When you scroll 
back down, you get old images of the clock.

Keeping an offscreen image to blit from still seems like the best idea to 
me.

> Robert
> ------
> ra_robert@gsbacd.uchicago.edu
> ------

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Greg Marriott               +                      AppleLink: Marriott1 +
+ Just Some Guy               +                                           +
+ "My phone is always busy"   +   Internet: Marriott1@AppleLink.Apple.Com +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Just in case disclaimers mean anything at all:                          +
+    What I say comes from my own twisted perception of the world         +
+    and does not represent the policy or opinion of Apple Computer, Inc. +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

keith@Apple.COM (Keith Rollin) (04/30/89)

In article <2993@tank.uchicago.edu> ra_robert@gsbacd.uchicago.edu writes:
>In article <29827@apple.Apple.COM>, keith@Apple.COM (Keith Rollin) writes...
>
>> 
>>Saving data from your own window into an offscreen buffer is not much of a 
>>problem _right now_. It should even work in the foreseeable future. However, I 
>>would recommend against it in favor of doing it the other way around: keep 
>>the master image in an offscreen buffer, and refresh from it exclusively.
>[...]
>
>
>What about in a paint-type program?  I've been working on one, and when someone
>"draws" on the screen I first draw it on the screen (at the simplest level,
>with calls such as LineTo) and then blit it to my offscreen buffer (from which
>I refresh my image) for permanent storage when the drawn object is done (i.e.,
>on mouse-up).  Wouldn't there be times when you need to CopyBits from screen
>memory to your offscreen buffer?  I suppose one _could_ draw to the offscreen
>buffer and CopyBits it on screen constantly, but that sounds fairly awful in
>terms of speed, etc.

I'm afraid that I don't see the problem here. Either you draw to the screen
and copy it to an offscreen buffer, or you draw to an offscreen buffer and
copy it to the screen.

With paint programs, there is another advantage to keeping your master image
in an offscreen buffer. By using two buffers, you can implement not only
flicker-free animation, but also undo.

Imagine the following setup, where you have a full-page picture stored in an
offscreen buffer, plus a temporary buffer and the image on screen:

   offscreen image            temporary buffer            image on screen
+-----------------------+  +-----------------------+  +-----------------------+
|$$$$$$$$$$$$$$$$$$$$$$$|  |                       |  |$$$$$$$$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   .$$$$$$             |  |$$$.$$$$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $.$$$$$             |  |$$$$.$$$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $$.$$$$             |  |$$$$$.$$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $$$.$$$             |  |$$$$$$.$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $$$$.$$             |  |$$$$$$$.$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $$$$$.$             |  |$$$$$$$$.$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |   $$$$$$.             |  |$$$$$$$$$.$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |                       |  |$$$$$$$$$$$$$$$$$$$$$$$|
|$$$$$$$$$$$$$$$$$$$$$$$|  |                       |  |$$$$$$$$$$$$$$$$$$$$$$$|
+-----------------------+  +-----------------------+  +-----------------------+

The offscreen image holds your picture up to that time. The temporary buffer
holds a copy of the affected background, along with the
line/rectangle/duodecahedron/whatever you are drawing at that time. This
combination is then blitted onto the screen. (Admittedly, this is slower than
what you are doing right now, but not always appreciably so. Degradation
really only comes into play when you are dealing with large amounts of data
in deep color modes.)

When the drawing is done, the buffers are left in this state until the next
drawing begins. Up until that time, the user can undo the drawing by
restoring the affected area from the left-hand (master) buffer.
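
(Schematically, one step of that loop looks like the following in
rough C.  The names are mine and this is only a sketch, not official
DTS code.)

    /* masterBits = offscreen image, tempBits/tempPort = temporary   */
    /* buffer, r = rectangle touched by the shape so far.            */
    CopyBits(&masterBits, &tempBits, &r, &r, srcCopy, NULL);
    SetPort(tempPort);
    FrameRect(&shapeRect);         /* or line/oval/duodecahedron...  */
    SetPort(myWindow);
    CopyBits(&tempBits, &myWindow->portBits, &r, &r, srcCopy, NULL);

    /* Undo, any time before the next drawing starts: the master     */
    /* still lacks the new shape, so just blit it back.              */
    CopyBits(&masterBits, &myWindow->portBits, &r, &r, srcCopy, NULL);

Committing the shape when the next drawing begins is then one more
blit, temp to master.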
------------------------------------------------------------------------------
Keith Rollin  ---  Apple Computer, Inc.  ---  Developer Technical Support
INTERNET: keith@apple.com
    UUCP: {decwrl, hoptoad, nsc, sun, amdahl}!apple!keith
"Argue for your Apple, and sure enough, it's yours" - Keith Rollin, Contusions

matthews@eleazar.dartmouth.edu (Jim Matthews) (05/01/89)

In article <29827@apple.Apple.COM> keith@Apple.COM (Keith Rollin) writes:
>In article <DAVIDL.89Apr28100419@intelob.intel.com> davidl@intelob.intel.com (David Levine) writes:

>>But the original poster seems to have been talking about the
>>application copying the affected area of ITS OWN WINDOW to a buffer
>>and then restoring ITS OWN WINDOW.  What could be wrong with that?  
>>
>
>Saving data from your own window into an offscreen buffer is not much of a 
>problem _right now_. It should even work in the foreseeable future.

There's (at least) one way in which it doesn't work right now.  I have a
program that saves the bits of a text-editing window as part of a scheme
for doing menus in windows.  I only save a 1-bit deep BitMap, since I
use old-style GrafPorts.  But some of my users have the Kolor cdev set
up to make their text selection some presumably gaudy color.  When I
restore the bits of the selected text the color information is lost,
and my users are upset.  I don't know whether to blame myself or the
architects of Color QuickDraw, but this is one way in which CQD is *not*
backwards compatible.

I understand that I can keep Kolor from putting color pixels in my
grafports somehow, but so far I haven't figured out how.  After all,
this bug is my only leverage for getting a color monitor :-).

Jim Matthews
Dartmouth Software Development

jamesm@sco.COM (James M. Moore) (05/02/89)

In article <29855@apple.Apple.COM> keith@Apple.COM (Keith Rollin) writes:
>>I refresh my image) for permanent storage when the drawn object is done (i.e.,
>>on mouse-up).  Wouldn't there be times when you need to CopyBits from screen
>>memory to your offscreen buffer?  I suppose one _could_ draw to the offscreen
>>buffer and CopyBits it on screen constantly, but that sounds fairly awful in
>>terms of speed, etc.
>
>I'm afraid that I don't see the problem here. Either you draw to the screen
>and copy it to an offscreen buffer, or you draw to an offscreen buffer and
>copy it to the screen.

But if/when context switching becomes preemptive, you can't be sure
that something like this won't happen:

	draw to screen
	** context switch - someone else puts another window on the screen,
		which erases part of your window
	** context switch - back to paint program
	update event generated for application window
	copy screenbits to buffer - we've just copied a blank screen.

In the case of the paint program, the speed hit for drawing offscreen
first probably wouldn't be that large.  Just pick a small section of
the drawing to copy back and forth.
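
(Something like this for the small section -- a sketch with my own
names; UnionRect finds the affected rectangle.)

    /* Blit only the rectangle the shape actually touched.           */
    Rect  dirty;

    UnionRect(&oldShapeRect, &newShapeRect, &dirty);
    InsetRect(&dirty, -1, -1);           /* allow for the pen size   */
    CopyBits(&offBits, &myWindow->portBits,
             &dirty, &dirty, srcCopy, NULL);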
-- 
** James Moore **
** Internet:  jamesm@sco.com **
** uucp:  {decvax!microsoft | uunet | ucbvax!ucscc | amd}!sco!jamesm **
** SCO has no clue about, nor any interest in, what I am writing now. **

alexis@ccnysci.UUCP (Alexis Rosen) (05/07/89)

In article <DAVIDL.89Apr28100419@intelob.intel.com> davidl@intelob.intel.com
(David Levine) writes:
>OK, I've been following this discussion for a while, and I don't
>understand the problem.  Yes, it would be a Bad Idea to CopyBits from
>the SCREEN to a buffer and then restore after the dialog goes away.
>But the original poster seems to have been talking about the
>application copying the affected area of ITS OWN WINDOW to a buffer
>and then restoring ITS OWN WINDOW.  What could be wrong with that?

Nothing at all, except that there's usually no cheap way to tell what part
of your window is really visible and what's covered (and what needs updating
already). Whatever's not current would need to be drawn the slow way anyway.

So the best solution (for speed) is very close to that: you draw everything
to an off-screen bitmap and then CopyBits (or CopyPixMap, under Color
QuickDraw) it to the screen. For example, FoxBase+/Mac does that, and so do
many other programs.

As is often the case, it's really a tradeoff between memory and speed.
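
(The refresh half of that scheme is just the standard update-event
dance.  A sketch, with my own names, assuming the offscreen bitmap's
bounds match the window's local coordinates:)

    /* Refresh the window from the offscreen master.  QuickDraw      */
    /* clips to the update region, so covered areas cost nothing.    */
    BeginUpdate(myWindow);
    SetPort(myWindow);
    CopyBits(&offBits, &myWindow->portBits,
             &myWindow->portRect, &myWindow->portRect, srcCopy, NULL);
    EndUpdate(myWindow);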

---
Alexis Rosen
alexis@ccnysci.{uucp,bitnet}
alexis@rascal.ics.utexas.edu  (last resort)

lsr@Apple.COM (Larry Rosenstein) (05/11/89)

In article <2993@tank.uchicago.edu> ra_robert@gsbacd.uchicago.edu writes:

> "draws" on the screen I first draw it on the screen (e.g. on the 
simplest level
> with calls such as LineTo)  and then blit it to my offscreen buffer 
(from which
> I refresh my image) for permanent storage when the drawn object is done 
(i.e,
> on MouseUp).  Wouldn't there be times when you need to CopyBits from 
screen

You can't do the blit only on the mouse-up and support automatic 
scrolling.  The user could draw a shape that is larger than the window, so 
part of the shape would be offscreen by the time you did the blit.

I think most painting programs draw offscreen and do one blit to put the 
image on screen.  If you try MacPaint, you will see that it draws the 
object as it will appear (rather than drawing an XOR outline), and if you 
shrink the object the original image appears.  The only way you can do 
this properly is if you do a CopyBits each time the user moves the mouse.  

Each of these operations will involve a small rectangle, and will be very 
fast.  I did this in my MacApp painting program, and it worked well in 
both B&W and color.

Larry Rosenstein, Apple Computer, Inc.
Object Specialist

Internet: lsr@Apple.com   UUCP: {nsc, sun}!apple!lsr
AppleLink: Rosenstein1