[comp.sys.amiga.tech] Files larger than available memory.

U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) (08/11/90)

G'day,

the discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
related issue.  Some criticism of Amiga software (such as word processors) is
related to the inability to deal with files/data larger than available memory.

Have developers dealt with this problem? If so how are they doing it?  If not
is it because they are waiting for VM?

I do assume this point has been discussed before. Please, rather than chastise
me for bringing up the problem again, perhaps someone could send me a summary.

I would have thought that this special (commercially important) case may have
led to the creation of special-purpose demand-paged memory scheme/s by
someone out there. Or would that hack (?) be too inefficient on non-MMU Amiga
systems? {Kind of answering my own question there eh? :-)}

Don't Macs run s/w that can deal with files larger than available memory?
I would presume such s/w could run on non-MMU based Macs.

{ I'm not really criticising here. I'm just wondering whether the Amiga s/w }
{ developers are (or will be) writing s/w that can deal with these questions. }

yours truly,
Lou Cavallo.

jet@karazm.math.uh.edu (J. Eric Townsend) (09/24/90)

In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>the discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
>related issue.  Some criticism of Amiga software (such as word processors) is
>related to the inability to deal with files/data larger than available memory.

What a coinkadink.  A friend of mine is taking a graduate software
engineering class where the project is to (tah-dah) write a text editor
that can deal with files larger than available memory (be it virtual or not).
Well, I don't know if they're going to "write" it or not, it *is*
a software engineering class.

It can be done.  It's somewhat bizarre, but it can be done.  Essentially,
you keep track of the changes to the file.  You change the file whenever
you reach one of a set of conditions:
-- size of changes > some default size
-- user is not doing anything (cycle stealing, of a sort :-)
-- user goes forward or backwards in the file some distance > x

The "problem" here is that disk speed affects text scrolling and searching
speed.  If you "^F" (page forward in vi), then the next page has to
be read from disk and then updated from the in-memory list of changes
to that page (if any changes exist).
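In C, the change-list idea sketched above might look roughly like this (all names, sizes, and the record layout are invented for illustration; this is not anyone's actual editor code):

```c
/* A minimal sketch of the change-list scheme: the file stays on disk,
 * edits are kept in memory as (offset, delete-count, inserted-text)
 * records, and a page is patched as it is read in. */
#include <stdio.h>
#include <string.h>

struct change {
    long offset;   /* where in the original file the edit applies */
    int  del;      /* bytes deleted from the original             */
    char ins[64];  /* text inserted (fixed-size for the sketch)   */
};

/* Apply the changes that fall inside this page to a buffer just read
 * from disk.  page_off is the page's offset in the file; returns the
 * new length of the page. */
int patch_page(char *page, int len, long page_off,
               const struct change *ch, int nch)
{
    for (int i = 0; i < nch; i++) {
        long rel = ch[i].offset - page_off;
        if (rel < 0 || rel + ch[i].del > len)
            continue;                       /* change not on this page */
        int inslen = (int)strlen(ch[i].ins);
        /* close up the deleted bytes, then splice in the insertion */
        memmove(page + rel + inslen, page + rel + ch[i].del,
                len - rel - ch[i].del);
        memcpy(page + rel, ch[i].ins, inslen);
        len += inslen - ch[i].del;
    }
    return len;
}
```

The "^F" case is then: read the page, call something like patch_page() on it, display the result.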

Any questions?
--
J. Eric Townsend -- University of Houston Dept. of Mathematics (713) 749-2120
Internet: jet@uh.edu
Bitnet: jet@UHOU
Skate UNIX(r)

d6b@psuecl.bitnet (09/24/90)

In article <924@ucsvc.ucs.unimelb.edu.au>, U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
> Have developers dealt with this problem? If so how are they doing it?  If not
> is it because they are waiting for VM?

Perhaps before we start another big discussion of this stuff I'd like to
question whether VM (virtual memory, not to be confused with virtual
machine or whatever) in an editor (to use your example) is needed. Right now
it's reasonably affordable to have a 9-megabyte Amiga. Do you edit really BIG
files? :-)

The largest file I've ever dealt with was 1.4 megabytes, and I thought that
was pretty big. It fits nicely in my 3 megabytes. So, it's not clear that
having some sort of VM built into an editor is terribly important. Whether or
not having VM in the OS is desirable is another matter. I don't need it
myself, but I suppose there are those that do (who are you??)

-- Dan Babcock

dailey@frith.uucp (Chris Dailey) (09/24/90)

In article <1990Sep23.174736.16118@lavaca.uh.edu> jet@karazm.math.uh.edu (J. Eric Townsend) writes:
>It can be done.  It's somewhat bizarre, but it can be done.  Essentially,
>you keep track of the changes to the file.  You change the file whenever
>you reach one of a set of conditions:
>-- size of changes > some default size
>-- user is not doing anything (cycle stealing, of a sort :-)
>-- user goes forward or backwards in the file some distance > x

>Any questions?

Why, yes.  What if the text to be inserted is greater than the space on
the disk?  There is no super-elegant way of doing this.  You'd have to
allocate more disk space, link it in.  Sounds like a lot of
system-dependent stuff to me.

>J. Eric Townsend -- University of Houston Dept. of Mathematics (713) 749-2120
--
Chris Dailey   dailey@(cpsin.cps|frith.egr).msu.edu
"Rise again, rise again/Though your heart it be broken and life about to
end./No matter what you've lost, be it a home, a love, a friend,/
Like the Mary Ellen Carter, rise again!" -- a song by the late Stan Rogers

jet@karazm.math.uh.edu (J. Eric Townsend) (09/25/90)

In article <1990Sep24.150432.25049@msuinfo.cl.msu.edu> dailey@frith.uucp (Chris Dailey) writes:
>Why, yes.  What if the text to be inserted is greater than the space on
>the disk?

You write out a new file of the correct size.  No problem.

> There is no super-elegant way of doing this.  You'd have to
>allocate more disk space, link it in.  Sounds like a lot of
>system-dependent stuff to me.

Any efficient editor will have to be system dependent to some degree.
I can't just take the source to vi(1) and recompile on the Amiga.
(If only I could...)
--
J. Eric Townsend -- University of Houston Dept. of Mathematics (713) 749-2120
Internet: jet@uh.edu
Bitnet: jet@UHOU
Skate UNIX(r)

hassinger@lmrc.uucp (Bob Hassinger) (09/25/90)

In article <1990Sep23.174736.16118@lavaca.uh.edu>, jet@karazm.math.uh.edu (J. Eric Townsend) writes:
> In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>>the discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
>>related issue.  Some criticism of Amiga software (such as word processors) is
>>related to the inability to deal with files/data larger than available memory.
> 
...
> 
> It can be done.  It's somewhat bizarre, but it can be done.  Essentially,
> you keep track of the changes to the file.  You change the file whenever
> you reach one of a set of conditions:
> -- size of changes > some default size
> -- user is not doing anything (cycle stealing, of a sort :-)
> -- user goes forward or backwards in the file some distance > x
...

It sure can be done!  In fact it has been done at least since the mid-60s on
DECtape/LINCtape based systems (no disk at all).  The classic editor for the
LINC-8 worked in this way - and quite well too, everything considered.

If you have virtual memory you just allocate as much as you need in a
conventional editor (ref: VMS EDT or TPU for example).  If you do not have
virtual memory you use this "scrolling editor" scheme that has been around on
machines FAR smaller and slower than the Amiga for 25 years.  There is a
classic paper (in one of the IEEE journals I think) on the LINC design.

Note too the LINC design was later successfully ported to OS/8 on the PDP-8 and
PDP-12 where the file system did not even support dynamic extension of files
and later yet to RT-11 on the PDP-11 (KED) where the file system was only a
little more cooperative.

If it could be done on a 4K LINC-8 with a single LINCtape, it certainly could
be done on a 512K Amiga with floppies that are several times bigger and many
times faster!

(Remember the famous quote about the fate of those who forget history...)

Bob Hassinger
hassinger@lmrc.UUCP

nsw@cbnewsm.att.com (Neil Weinstock) (09/25/90)

In article <1990Sep23.174736.16118@lavaca.uh.edu>, jet@karazm.math.uh.edu (J. Eric Townsend) writes:
[ ... ]
> The "problem" here is that disk speed affects text scrolling and searching
> speed.  If you "^F" (page forward in vi), then the next page has to
> be read from disk and then updated from the in-memory list of changes
> to that page (if any changes exist).

Of course, virtual memory is no different in this regard...

                                   - Neil

--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--
Neil Weinstock @ AT&T Bell Labs        //     What was sliced bread
att!edsel!nsw or nsw@edsel.att.com   \X/    the greatest thing since?

martens@dinghy.cis.ohio-state.edu (Jeff Martens) (09/26/90)

In article <1990Sep23.174736.16118@lavaca.uh.edu> jet@karazm.math.uh.edu (J. Eric Townsend) writes:

>In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>>the discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
>>related issue.  Some criticism of Amiga software (such as word processors) is
>>related to the inability to deal with files/data larger than available memory.
	[ ... ]

>It can be done.  

	[ ... ]

CP/M software, e.g., Wordstar, has been handling files bigger than
memory (typically 64k) for years.  Basically, you have to handle
paging yourself, which is inconvenient but not all that difficult.
--
-- Jeff (martens@cis.ohio-state.edu)
	Dan Quayle on education: "We're going to have the best educated
	American people in the world."  I wonder if he was also considering
	S. and C.  Americans, as well as Canadians and Mexicans?

daveh@cbmvax.commodore.com (Dave Haynie) (09/26/90)

In article <83986@tut.cis.ohio-state.edu> Jeff Martens <martens@cis.ohio-state.edu> writes:
>In article <1990Sep23.174736.16118@lavaca.uh.edu> jet@karazm.math.uh.edu (J. Eric Townsend) writes:

>>In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>>>the discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
>>>related issue.  Some criticism of Amiga software (such as word processors) is
>>>related to the inability to deal with files/data larger than available memory.

>CP/M software, e.g., Wordstar, has been handling files bigger than
>memory (typically 64k) for years.  Basically, you have to handle
>paging yourself, which is inconvenient but not all that difficult.

But of course, it made quite a bit more sense on CP/M systems.  When you have
to cram program and data into 64K, things like manual paging to disk, program
overlays, etc. make a lot of sense.  A basic Amiga these days has 1 Meg of RAM,
which is generally plenty for at least one wordprocessor or text editor and
a pretty large document.  And even with the application loaded, there's about
as much room in memory as on floppy disk.  Paging to hard disk would make 
more sense, but once you have a few megabytes of memory, it's extremely hard
to run out doing anything but the most memory intensive DTP kind of stuff,
or perhaps ray tracing, which for speed reasons might have problems with any
disk paging.

Not to say that it doesn't make any sense, just that it's not all that 
necessary in most Amiga applications, while you couldn't do anything serious
at all without it on a CP/M machine, and it's still quite important on many
MS-DOS applications, where you're still basically limited to 640K for both
program and data (you could of course add banked memory, like the CP/M
machines did, to help out a little).  What you really want is true virtual
memory, which makes the paging to disk transparent to every application.  But
you couldn't have that on every Amiga.  It should be possible to write a
swapping library that any application could easily use to swap data between
a disk file and a memory buffer.  It would be silly to have to create such a
mechanism more than once on the Amiga.

>-- Jeff (martens@cis.ohio-state.edu)

-- 
Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
	Standing on the shoulders of giants leaves me cold	-REM

mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) (09/26/90)

I've just started poking at this, because one of the changes to mg3
before the release is going to be ditching the linked list of lines in
favor of a buffer gap editor. Making the editor page to disk after
this is done would be simple, but it's not clear even that's worth the
trouble.

For those not familiar with it, the idea behind a buffer gap editor is
to keep all the text in one contiguous chunk, with a single gap where
editing is going on. Text is inserted into the gap; deleted text is
added to the gap, and the gap is shuffled around as needed (there's a
multiple-gap variation I haven't thought about yet, but will).
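For those who haven't seen one, a buffer gap can be sketched in a few dozen lines of C (this is an illustrative toy with a fixed-size array, not mg's actual code):

```c
/* Buffer-gap sketch: all text lives in one array with a gap at the
 * edit point.  Insertion writes into the gap, deletion would widen it,
 * and cursor motion slides the gap by copying the intervening text. */
#include <string.h>

#define GB_SIZE 1024

struct gapbuf {
    char text[GB_SIZE];
    int gap_start, gap_end;   /* gap occupies [gap_start, gap_end) */
};

void gb_init(struct gapbuf *g) { g->gap_start = 0; g->gap_end = GB_SIZE; }

/* Slide the gap so editing happens at text position pos. */
void gb_move(struct gapbuf *g, int pos)
{
    if (pos < g->gap_start) {
        int n = g->gap_start - pos;
        memmove(g->text + g->gap_end - n, g->text + pos, n);
        g->gap_start -= n;
        g->gap_end   -= n;
    } else if (pos > g->gap_start) {
        int n = pos - g->gap_start;
        memmove(g->text + g->gap_start, g->text + g->gap_end, n);
        g->gap_start += n;
        g->gap_end   += n;
    }
}

/* Insert s at the gap (deletion would simply widen the gap instead). */
void gb_insert(struct gapbuf *g, const char *s)
{
    while (*s && g->gap_start < g->gap_end)
        g->text[g->gap_start++] = *s++;
}

/* Copy the logical text (everything outside the gap) into out. */
int gb_text(const struct gapbuf *g, char *out)
{
    int len = g->gap_start + (GB_SIZE - g->gap_end);
    memcpy(out, g->text, g->gap_start);
    memcpy(out + g->gap_start, g->text + g->gap_end, GB_SIZE - g->gap_end);
    return len;
}
```

Note that gb_move is the only operation that copies text around, which is exactly why moving the gap in a large buffer is the expensive case.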

Without memory mapping of some kind, doing everything in memory requires
a contiguous chunk of memory as big as the file. This is clearly
unacceptable on the Amiga, especially for an editor like mg. The
solution is a "paged" buffer, where the file buffer is a linked list
of pages, each of a fixed size (or each with a gap...hmmmm).

Given that structure, the only thing that needs to happen to be able
to edit files larger than memory is choosing a place to store the
pages that aren't in memory at the time, and adding the simple code to
page things to/from disk.
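The paged-buffer layout might look something like this (a sketch only; the page size and names are made up, and the paging-to-disk part is left out):

```c
/* Paged buffer: the file is a linked list of fixed-size pages, each
 * partially full, so no single contiguous allocation is needed and
 * cold pages could later be written out to disk. */
#include <stdlib.h>

#define PAGE_SIZE 4096

struct page {
    struct page *next;
    int used;                 /* bytes of text actually in this page */
    char data[PAGE_SIZE];
};

/* Walk the list to find the page holding logical offset off;
 * *rel gets the offset within that page. */
struct page *find_page(struct page *head, long off, int *rel)
{
    struct page *p = head;
    while (p->next && off >= p->used) {
        off -= p->used;
        p = p->next;
    }
    *rel = (int)off;
    return p;
}
```

Keeping each page only partially full is what makes insertion cheap: a page that overflows is split in two rather than forcing the whole buffer to shuffle.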

The obvious "tmp" place is ram, which doesn't buy anything. Likewise,
paging to floppy doesn't buy much. I tend to agree with Dave - with a
1 meg nominal minimum machine, and those with hard disks probably
having more, there's not much reason to want to page to disk, and the
space/time spent on that code can probably be put to better use.

Anyone have any comments?

	<mike



--
Here's a song about absolutely nothing.			Mike Meyer
It's not about me, not about anyone else,		mwm@relay.pa.dec.com
Not about love, not about being young.			decwrl!mwm
Not about anything else, either.

mrush@csuchico.edu (Matt "C P." Rush) (09/26/90)

In article <1990Sep24.150432.25049@msuinfo.cl.msu.edu> dailey@frith.uucp (Chris Dailey) writes:
>In article <1990Sep23.174736.16118@lavaca.uh.edu> jet@karazm.math.uh.edu (J. Eric Townsend) writes:
>>It can be done.  It's somewhat bizarre, but it can be done.  Essentially,
>>you keep track of the changes to the file.  You change the file whenever
>>you reach one of a set of conditions:
>>-- size of changes > some default size
>>-- user is not doing anything (cycle stealing, of a sort :-)
>>-- user goes forward or backwards in the file some distance > x
>
>>Any questions?
>
>Why, yes.  What if the text to be inserted is greater than the space on
>the disk?  There is no super-elegant way of doing this.  You'd have to
>allocate more disk space, link it in.  Sounds like a lot of
>system-dependent stuff to me.

	It shouldn't be any more 'system-dependent' than if you try to insert
more stuff than you have main memory for.  I.e., when your I/O to disk fails,
you tell the user "sorry, mem/disk is full" and drop them OUT of insert
mode.
	If you're buffering the inserted text before writing it, then you
ought to keep track of how much disk space is available, but if you just
send it directly to disk (fgets(line, sizeof line, stdin); fputs(line, userfile);)
then there's no problem (other than speed, and the OS will hopefully buffer
it enough to be usable).

	-- Matt

    *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
    %    "I programmed three days        %      Beam me up, Scotty.      %
    %     And heard no human voices.     %     There's no Artificial     %
    %     But the hard disk sang."       %    Intelligence down here.    %
    %          -- Yoshiko                                                %
    %                            E-mail:  mrush@cscihp.ecst.csuchico.edu %
    *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*
     This is a SCHOOL!  Do you think they even CARE about MY opinions?!

andy@cbmvax.commodore.com (Andy Finkel) (09/26/90)

In article <MWM.90Sep25180852@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>I've just started poking at this, because one of the changes to mg3
>before the release is going to be ditching the linked list of lines in
>favor of a buffer gap editor. Making the editor page to disk after
>this is done would be simple, but it's not clear even that's worth the
>trouble.
>With memory mapping of some kind, doing everything in memory requires
>a contiguous chunk of memory as big as the file. This is clearly
>unacceptable on the Amiga, especially for an editor like mg. The
>solution is a "paged" buffer, where the file buffer is a linked list
>of pages, each of a fixed size (or each with a gap...hmmmm).
>
>The obvious "tmp" place is ram, which doesn't buy anything. Likewise,
>paging to floppy doesn't buy much. I tend to agree with Dave - with a
>1 meg nominal minimum machine, and those with hard disks probably
>having more, there's not much reason to want to page to disk, and the
>space/time spent on that code can probably be put to better use.
>
>Anyone have any comments?

Just a couple:  Most gap editors I'm familiar with have
two places where the user experiences speed hits

1)  when the editor is spooling text off to disk
and
2) when the gap is being shifted in a large file

Most of the implementations I've seen don't spend
enough time hiding these slowdowns from the user, so
they'd end up editing files no larger than 32K, for instance,
because otherwise the pauses as they scrolled and
edited drove them crazy.

And, the logical place to use as a temporary directory is :t;
if one doesn't exist, create it.  Edit, which can edit files
larger than available memory, uses a temp file in :t for this
purpose. 
>
>	<mike

		andy
-- 
andy finkel		{uunet|rutgers|amiga}!cbmvax!andy
Commodore-Amiga, Inc.

"If you don't open the manual, all features are undocumented."

Any expressed opinions are mine; but feel free to share.
I disclaim all responsibilities, all shapes, all sizes, all colors.

mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) (09/27/90)

In article <14669@cbmvax.commodore.com> andy@cbmvax.commodore.com (Andy Finkel) writes:
   Just a couple:  Most gap editors I'm familiar with have
   two places where the user experiences speed hits

   1)  when the editor is spooling text off to disk

That shouldn't be a problem. In fact, a buffer gap should be faster
than the linked list of lines that mg currently uses, as you replace
one write per line with two per buffer. That's one of the reasons for
wanting to make the change. Now, if you're talking about paging unused
pages to disk, and new pages back - that's a different problem, and
one I haven't looked into.

   2) when the gap is being shifted in a large file

That's the critical problem. That's why I like the idea of one gap per
buffer, with middlin-sized buffers (4 to 16K, say). That way, the
worst thing to have to move is a chunk of text of that size.

   editing files no larger than 32K, for instance,
   because otherwise the pauses as they scrolled and
   edited drove them crazy.

Stock buffer gap shouldn't have a problem scrolling (you don't move
the gap unless you edit text, so scrolling doesn't involve any buffer
movement). If it's paging, then there might be a problem.

   And, the logical place to use as a temporary directory is :t
   if one doesn't exist, create it.  Edit, which can edit files
   larger than memory available uses a temp file is :t for this
   purpose. 

Is that on the device the file is on, or on the device the text editor
was started from? And it's sort of silly if that device winds up
being ram:. Likewise, you have to deal with being able to write on
that file. Since mg isn't part of a commercial product, I'm tempted to
make this feature something you have to enable at compile time, and if
it's on, allow the user to give a file name for the disk buffer file.

	Thanx,
	<mike


--
And then up spoke his own dear wife,			Mike Meyer
Never heard to speak so free.				mwm@relay.pa.dec.com
"I'd rather a kiss from dead Matty's lips,		decwrl!mwm
Than you or your finery."

ridder@elvira.enet.dec.com (Hans Ridder) (09/27/90)

In article <MWM.90Sep25180852@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>I've just started poking at this, because one of the changes to mg3
>before the release is going to be ditching the linked list of lines in
>favor of a buffer gap editor.

Ahhh, finally!  That "line too long" stuff is bogus.  TECO used a buffer
gap, and I *think* GNU Emacs does too.  It's a great idea, the
performance is quite good.

>Making the editor page to disk after this is done would be simple, but
>it's not clear even that's worth the trouble.

But it would be nice to be able to edit large files, even when memory is
low.  After all, this is a multitasking machine.  I'd hate to have to
shut down everything just to edit a large file.

>With memory mapping of some kind, doing everything in memory requires
>a contiguous chunk of memory as big as the file. This is clearly
>unacceptable on the Amiga, especially for an editor like mg. The
>solution is a "paged" buffer, where the file buffer is a linked list
>of pages, each of a fixed size (or each with a gap...hmmmm).

I was thinking about this once, and I decided that I'd have one gap,
which would be copied between "pages".  When the gap gets too small, you
just insert a new "page" in the linked list, and when the gap is as big
as a page, you delete/remove it from the list.  I thought that having a
gap in each page would get messy rather quickly.  But, maybe not.

The one-buffer, one-gap scheme can move the gap anywhere in the
editing buffer in a single move (copy) operation.  However, the multiple
page buffer scheme requires multiple copy operations to move the gap to
any point in the buffer.  I think you could probably optimize this after
thinking about it a bit, and reasonably large pages, say 1K to 4K
(dynamic?), would help.

Perhaps if you came up with a heuristic as to when to compress multiple
gaps into one, then it would be reasonable.  I just have this vision of
many pages each with only a few bytes and a huge gap.  I'll stew on it.

>The obvious "tmp" place is ram, which doesn't buy anything. Likewise,
>paging to floppy doesn't buy much. I tend to agree with Dave - with a
>1 meg nominal minimum machine, and those with hard disks probably
>having more, there's not much reason to want to page to disk, and the
>space/time spent on that code can probably be put to better use.

I understand Dave's point, but I still think that even with a 1 meg
machine, you'll still be multitasking other things.  I sure like being
able to keep my file(s) in the editor while I run the compiler and
linker!  Sure, paging to a floppy won't be as nice as a hard disk, but
then that would be just another reason for a person to buy a hard disk.

>Anyone have any comments?

Nope, none.  :-)  Actually, sounds like a great idea!

>	<mike

-hans
------------------------------------------------------------------------
  Hans-Gabriel Ridder			Digital Equipment Corporation
  ridder@elvira.enet.dec.com		Customer Support Center
  ...decwrl!elvira.enet!ridder		Colorado Springs, CO

ifarqhar@sunc.mqcc.mq.oz.au (Ian Farquhar) (09/27/90)

In article <14646@cbmvax.commodore.com> daveh@cbmvax.commodore.com (Dave Haynie) writes:
>But of course, it made quite a bit more sense on CP/M systems.  When you have
>to cram program and data into 64K, things like manual paging to disk, program
>overlays, etc. make alot of sense.  A basic Amiga these days has 1 Meg of RAM,
>which is generally plenty for at least one wordprocessor or text editor and
>a pretty large document.  And even with the application loaded, there's about
>as much room in memory as on floppy disk.  Paging to hard disk would make 
>more sense, but once you have a few megabytes of memory, it's extremely hard
>to run out doing anything but the most memory intensive DTP kind of stuff,
>or perhaps ray tracing, which for speed reasons might have problems with any
>disk paging.

I disagree here.  I have to regularly edit largish files (4M or so)
automatically generated on UNIX workstations and such-like.  I have yet
to find an Amiga editor that can do this effectively, so I am forced to
take to good-olde WordStar running on an MS-DOS machine.  Some people
*do* need a text editor that can edit huge files in reasonable time.
And of course, let's not forget Parkinson's law of computer resource
usage :-)

Incidentally, Dr Jim Blinn (of the JPL Voyager Simulation fame) recently
wrote an article in IEEE Computer Graphics and Applications about the
effect of virtual memory and paging on graphics systems.  It's
interesting, and well worth seeking out.

--
Ian Farquhar                      Phone : 61 2 805-9403
Office of Computing Services      Fax   : 61 2 805-7433
Macquarie University  NSW  2109   Also  : 61 2 805-7205
Australia                         EMail : ifarqhar@suna.mqcc.mq.oz.au

dailey@cpsin2.cps.msu.edu (Chris Dailey) (09/27/90)

In article <83986@tut.cis.ohio-state.edu> Jeff Martens <martens@cis.ohio-state.edu> writes:
>In article <1990Sep23.174736.16118@lavaca.uh.edu> jet@karazm.math.uh.edu (J. Eric Townsend) writes:
>
>>In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>>>related issue.  Some criticism of Amiga software (such as word processors) is
>>>related to the inability to deal with files/data larger than available memory.
>CP/M software, e.g., Wordstar, has been handling files bigger than
>memory (typically 64k) for years.  Basically, you have to handle
>paging yourself, which is inconvenient but not all that difficult.

I've thought of a way of doing it, however.  Wordstar renames the
original file to a backup name, and then copies the file to another
work file (not exactly, but almost).  I do not know how it really
works, but one writing a word processor could have the file stored in
REVERSE.  Then available memory would be a window for what is in the
file.  You start out at the end of the file, which is the beginning of
the document.  If the user moves down in the document past what is in
memory, what is in memory is written to a work file IN REGULAR ORDER
and then more is read from the first work file, with the length of the
file shortened (what is now in the second work file does not need to
be in the first work file).  So, to visualise the document:

    |------------------|-----------|-----------------|
    | Second Workfile  |   Memory  |  First Workfile |
    |------------------|-----------|-----------------|

As you move your cursor (which points to somewhere in memory), you will
cause the 'Memory' window to move, meaning part of what is in memory
will be moved from memory to the Second Workfile and then part of the
First Workfile will be moved to memory.  So, as you move through the
document, you are in effect moving information from the First Workfile
to the second.

One nice thing about this approach is that, although you need free disk
space equal to the document you wish to edit, you do not use up more
disk space than the size of the document because (unless you are
inserting text) you are just moving information from one workfile to
another.  Also, you can do it by just having to append/remove stuff
from the ends of each file.
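The forward-motion step of that two-workfile scheme might be sketched like so (file handling simplified, and all names invented; it assumes the "after" workfile keeps the upcoming text at its tail, per the reverse-storage trick above, so the next bytes can be read off and logically truncated from the end):

```c
/* Sketch of sliding the window forward by n bytes: spill the window's
 * front to the "before" workfile (normal order), then refill the
 * window's tail from the end of the "after" workfile, shortening it.
 * Total disk use never exceeds the document size. */
#include <stdio.h>
#include <string.h>

long window_forward(char *win, long winlen, long n,
                    FILE *before, FILE *after, long *after_len)
{
    /* 1. spill the front of the in-memory window */
    fwrite(win, 1, n, before);
    memmove(win, win + n, winlen - n);

    /* 2. refill from the tail of the "after" workfile */
    long take = n < *after_len ? n : *after_len;
    fseek(after, *after_len - take, SEEK_SET);
    fread(win + winlen - n, 1, take, after);
    *after_len -= take;          /* logically truncate the tail */

    return winlen - n + take;    /* valid text now in the window */
}
```

Moving backward would be the mirror image: pull bytes off the end of "before" and push the window's tail onto "after".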

Is this the way things are actually done in real Word Processors?
(WordPerfect 5.(0|1)? Many others..)  Is it easy enough to
append/remove stuff from the ends of files like would be needed in this
approach?

I ask because I am doing some pre-thought on writing a word processor
(I won't have time until after I graduate) and am trying to figure out
some of the details now.

Sorry if this wasn't all coherent.

>-- Jeff (martens@cis.ohio-state.edu)
--
  /~\  Chris Dailey   (CPS Undergrad, SOC Lab Coord, AMIG user group Secretary)
 C oo  dailey@(cpsin1.cps|frith.egr).msu.edu         (make WP5.1 for the Amiga)
 _( ^)   "I am thankful for one leg.  To limp is no disgrace --
/   ~\    I may not be number one, but I can still run the race." -from B.C.

dailey@frith.uucp (Chris Dailey) (09/27/90)

And after reading the rest of the comments on this subject I see that I
wasn't too far off the mark.

--
Chris Dailey   dailey@(frith.egr|cpsin.cps).msu.edu
"Rise again, rise again/Though your heart it be broken and life about to
end./No matter what you've lost, be it a home, a love, a friend,/
Like the Mary Ellen Carter, rise again!" -- a song by the late Stan Rogers

eeh@public.BTR.COM (Eduardo E. Horvath eeh@btr.com) (09/28/90)

In article <MWM.90Sep26152421@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>In article <14669@cbmvax.commodore.com> andy@cbmvax.commodore.com (Andy Finkel) writes:

>   And, the logical place to use as a temporary directory is :t
>   if one doesn't exist, create it.  Edit, which can edit files
>   larger than memory available uses a temp file is :t for this
>   purpose. 

>Is that on the device the file is on, or on the device the text editor
>was started from? And it's sort of silly if you that device winds up
>being ram:. Likewise, you have to deal with being able to write on
>that file. Since mg isn't part of a commercial product, I'm tempted to
>make this feature something you have to enable at compile time, and if
>it's on, allow the user to give a file name for the disk buffer file.

	The logical place to put the temporary file is in t: (maybe that's
what Andy meant) so you can assign the temporary directory wherever you want:
RAM: for speed, or a HD partition for size.

	If you are planning to keep several pages of the file in RAM, how
do you decide the amount of memory to use?  What if the user wants to run
a memory hogging program (e.g. a C++ compiler) while still editing a file?
Two suggestions:

	1) Allow the user to (optionally) specify the number of buffers
		to use when launching mg.

	2) Provide a "flush out all buffers to disk" command in mg to 
		free up memory just before launching a memory hog application.


=========================================================================
Eduardo Horvath				eeh@btr.com
					..!{decwrl,mips,fernwood}!btr!eeh
	"Trust me, I know what I'm doing." - Sledge Hammer
=========================================================================

andy@cbmvax.commodore.com (Andy Finkel) (09/28/90)

In article <MWM.90Sep26152421@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>In article <14669@cbmvax.commodore.com> andy@cbmvax.commodore.com (Andy Finkel) writes:
>Is that on the device the file is on, or on the device the text editor
>was started from? And it's sort of silly if you that device winds up
>being ram:. Likewise, you have to deal with being able to write on
>that file. Since mg isn't part of a commercial product, I'm tempted to

It's :t relative to the current directory, actually.

I'd suggest :t relative to current directory, falling back to
:t on the same volume as the source file if there's not enough
space in the first choice volume.

This neatly avoids the ram: problem, since RAM: is always 100% full.

(actually, you could also try the same volume first, falling
 back to relative to the current directory; that would work out too)

			andy


>make this feature something you have to enable at compile time, and if
>it's on, allow the user to give a file name for the disk buffer file.

That wouldn't be that bad either.

>
>	Thanx,
>	<mike

		andy
-- 
andy finkel		{uunet|rutgers|amiga}!cbmvax!andy
Commodore-Amiga, Inc.

"Nothing is as permanent as a temporary kludge."

Any expressed opinions are mine; but feel free to share.
I disclaim all responsibilities, all shapes, all sizes, all colors.

UH2@psuvm.psu.edu (Lee Sailer) (09/29/90)

>        The logical place to put the temporary file is in t: (maybe that's
>what Andy meant) so you can assign the temporary directory wherever you want:

It would be OK if the temp file defaulted to T:, but I think it should be
flexibly respecifiable by the user, perhaps using an environment variable,
startup file, or keyboard command.

Perhaps my compiler would prefer that T: be in ram:, while my editor would like
to put its temp files elsewhere.

U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) (09/30/90)

G'day,

I'm not sure where to start with these responses to my original query. Dave's
reply seems close to what I was anticipating in the large.

DH> In article <14646@cbmvax.commodore.com>, daveh@cbmvax.commodore.com
DH> (Dave Haynie) writes: 

ME> In article <924@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au
ME> (Lou Cavallo) writes: 
ME> .. discussion re: Virtual Memory/MMU/Protection reminds me of a tangentially
ME> related issue. Some criticism of Amiga software (such as word processors) is
ME> related to the inability to deal with files/data larger than available
ME> memory. 

[...some discussion re Jeff Martens' points on CP/M s/w and paging deleted...]

Dave's reply:

> But of course, it made quite a bit more sense on CP/M systems.  When you have
> to cram program and data into 64K, things like manual paging to disk, program
> overlays, etc. make a lot of sense.  A basic Amiga these days has 1 Meg of RAM,
> which is generally plenty for at least one wordprocessor or text editor and
> a pretty large document.  And even with the application loaded, there's about
> as much room in memory as on floppy disk.  Paging to hard disk would make 
> more sense, but once you have a few megabytes of memory, it's extremely hard
> to run out doing anything but the most memory intensive DTP kind of stuff,
> or perhaps ray tracing, which for speed reasons might have problems with any
> disk paging.

I prompted this original note stream with a short query as I was hoping to learn
of any developments (and perhaps new products) in this area.

In my short note (for brevity) I didn't, I believe (I've forgotten), mention
that I was thinking purely in marketing terms. I agree that for everyday
`normal' size files few Amiga users will exceed their RAM allotment for
single-document processing. {But see my points later.}

> Not to say that it doesn't make any sense, just that it's not all that 
> necessary in most Amiga applications, while you couldn't do anything serious
> at all without it on a CP/M machine, and it's still quite important on many
> MS-DOS applications, where you're still basically limited to 640K for both
> program and data (you could of course add banked memory, like the CP/M
> machines did, to help out a little).  What you really want is true virtual
> memory, which makes the paging to disk transparent to every application.  But
> you couldn't have that on every Amiga.  It should be possible to write a
> swapping library that any application could easily use to swap data between
> a disk file and a memory buffer.  It would be silly to have to create such a
> mechanism more than once on the Amiga.

Yes, all MMU Amiga users want true virtual memory :-) and I was hoping that
someone would suggest that the effort of writing a swapping system (library)
should be done once only (thanks Dave).

Non-MMU Amiga owners, of which there are a few :-), may not see the benefits
of a virtual memory system in any future OS upgrades.  Amiga owners who are
limited by monetary resources may not be able to fix their problems by buying
RAM.  The heavy-duty image processors out there may have more H/D space than
RAM and want to do long-run animations or data analysis.  ( I admit this last
group would be best served if they had a virtual memory capability. )

Those multitasking fans among us out there {:-)} have come too close to GURU
time because a download used up too much memory while we were editing a file
at the same time as ... {familiar problem anyone :-)}.

I asked about (swapping) methods because I see them as a way to patch a
problem that users can have if they don't have enough physical memory and
their application cannot alloc the RAM required.

My stock A1000 with 414K usable { ugh, WB 1.3.2, Shell, Pop-some-thing-or-other
is necessary, make that 300K usable :-) } helps me to appreciate these things.

:-)

> Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
>    {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
> 	Standing on the shoulders of giants leaves me cold	-REM

PS: perhaps the question of technical merit is whether there are any other ways
    for Amiga 500 & 2000 owners of the future that will not have MMUs & virtual
    memory capability to process data when allocatable memory is too low ?

    As Dave points out the work should only be done once ...

    BTW isn't there a word processor out now that allows "virtual memory" a la
    memory swapping. Excellence 2.0 ? { The advertising said "virtual mem" not
    me :-) }.

yours truly,
Lou Cavallo.

U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) (09/30/90)

G'day,

MM> In article <MWM.90Sep25180852@raven.pa.dec.com>, mwm@raven.pa.dec.com
MM> (Mike (My Watch Has Windows) Meyer) writes: 

MM> I've just started poking at this, because one of the changes to mg3
MM> before the release is going to be ditching the linked list of lines in
MM> favor of a buffer gap editor. Making the editor page to disk after
MM> this is done would be simple, but it's not clear even that's worth the
MM> trouble.

I guess that is really a "marketing" question and not one of technical degree
(as you mention). This could be put to the mg3 user population ...

MM> [...your buffer gap editor ideas. Go for it Mike! :-)...]

MM> The obvious "tmp" place is ram, which doesn't buy anything. Likewise,
MM> paging to floppy doesn't buy much. I tend to agree with Dave - with a
MM> 1 meg nominal minimum machine, and those with hard disks probably
MM> having more, there's not much reason to want to page to disk, and the
MM> space/time spent on that code can probably be put to better use.

I prompted the original note stream, and I do admit it might be painful to
use memory paging on a minimal system. However, at this point in time the
only choice is to buy more RAM, and while I agree that is preferable, I
_believe_ that many users would like a choice.

MM> Anyone have any comments?

I'm in a marketing frame of mind re this problem (and so it is probably not a
c.s.a.t type of discussion now...).

I believe that I read that some (all) Mac s/ware for non MMU capable Macs can
page memory (I don't know first hand, sorry) and similarly for Windows 3.0. I
am not one of those "they have, we should have it" cry babies however. I just
think it makes sense.

My thought is that if s/w writers could choose to support a memory paging
system (Dave Haynie suggests a swapping library), then non-MMU Amiga owners
(who may never see virtual memory support in the Amiga OS) would have more
choice in s/w and RAM configurations.

I appreciate your point re: whether the space/time spent on the code would
be worth it to you.  I say go for whichever choices are most worthwhile.

However as I've hoped for a while (and Dave suggested in an earlier post) the
effort here should probably be done once only. If there were a (hypothetical)
swapping.library available would you put in hooks to use it?

MM> 	<mike

yours truly,
Lou Cavallo.

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/01/90)

Lou's postings subsequent to his original question raised the question of
a library to support software virtual memory.

Back in the old days of core memory, software virtual memory was all there
was available to evade the limits of dinky real memories, and was
occasionally implemented for both code and data. As soon as hardware assist
became available, however, software virtual memory was deprecated as "ungodly
slow" in comparison to the hardware solution, and essentially abandoned.

It's 1990, processors are faster, RAM is bigger, disks are faster, and the
folks writing code are either smarter or taking lots of advantage of a better
knowledge base.  Just how bad would software virtual memory have to be today?

We can already overlay code with our compilers, so it's probably okay to
think about data virtual memory only, and only for data explicitly allocated
there, which simplifies the task substantially.

Have we lost track of a useful technique that could be ported to our desktop
machines which are now much bigger than those old mainframes?

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

eeh@public.BTR.COM (Eduardo E. Horvath eeh@btr.com) (10/02/90)

In article <1089@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
>G'day,
[...]
>I believe that I read that some (all) Mac s/ware for non MMU capable Macs can
>page memory (I don't know first hand, sorry) and similarly for Windows 3.0. I
>am not one of those "they have, we should have it" cry babies however. I just
>think it makes sense.

	Macs and all versions of Windows have virtual memory run by software.
The technique they use is handles instead of pointers.  Handles are
basically pointers to pointers.  When you want a piece of memory, you ask the
OS for it, and the OS gives you a handle.  The handle points into a table of
pointers to the actual memory you receive.  Here's a picture:

	You Get		OS Table		Memory Blocks
	--------	--------		-------------
	handle ------->	Pointer ------------->	Your Block
			Pointer ------------->	Other Block
			NULL
			Pointer ------------->	Other Block
			.			.
			.			.

	To access memory you need to use double indirection.  This technique is
used because the OS is free to swap the pointers in the table at any time.
Since the OS runs only when you make an OS call, you only need to worry about
the memory being moved around after you call the OS.  This works fine for a
Mac or Windows because they don't really multitask.  On the Amiga, if you
tried this technique there is a high probability that the pointer would be
changed after your task had just finished loading the pointer, but before it
accessed the data.  This could be fixed either by surrounding _ALL_ accesses
to memory-managed data with Forbid()/Permit() calls or by only allowing
re-mapping when programs call an "O.K., you can remap my data" function.  The
second method will probably not provide any memory to other programs that
call AllocMem(), because there is no guarantee that the other VM tasks will
ever allow their memory to be remapped.  This is a nasty way to cause a
deadlock.

	Is S/W VM really worthwhile?

>yours truly,
>Lou Cavallo.


=========================================================================
Eduardo Horvath				eeh@btr.com
					..!{decwrl,mips,fernwood}!btr!eeh
	"Trust me, I know what I'm doing." - Sledge Hammer
=========================================================================

papa@pollux.usc.edu (Marco Papa) (10/02/90)

In article <523@public.BTR.COM> eeh@public.BTR.COM (Eduardo E. Horvath  eeh@btr.com) writes:
>	Mac's and All versions of Windows have virtual memory run by software.

The above statement is all false.  Macs do not have virtual memory, unless
you are running Unix (A/UX).  Mac OS 7.0, which has been delayed twice
already, *should* have virtual memory.  As for Windows, only Windows 3.0 has
virtual memory, and that feature is available only on 386-based machines.  No
XT, AT, or other 286-based machine can use virtual memory with Windows 3.0.

Virtual memory is not what you describe.  Pick up any book on operating systems
to find out what virtual memory is.

-- Marco
 
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"Xerox sues somebody for copying?" -- David Letterman
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

trantow@csd4.csd.uwm.edu (Jerry J Trantow) (10/02/90)

In article <27352@usc.edu> papa@pollux.usc.edu (Marco Papa) writes:
>In article <523@public.BTR.COM> eeh@public.BTR.COM (Eduardo E. Horvath  eeh@btr.com) writes:
>>	Mac's and All versions of Windows have virtual memory run by software.
>
>The above statement is all false.  Macs do not have virtual memory, unless you
>are running unix (a/ux).  The mac os 7.0, which has been delayed twice already,

While virtual memory may not be in the current Mac OS, you can certainly use
virtual memory with the addition of "Virtual" by Connectix.  We run Virtual
on most of the Macs in my department (it is a must-have for Mathematica).  I
have 14M of virtual memory on my MacIIci. (I know it has a MMU, but the SEs
don't and they run it too)  While my Amiga isn't nearly the memory hog of
a multi-finder Mac (my IIci system is 2936K) it sure would be nice to have
virtual memory.  I'm hoping when I buy my next machine (A3000) that it will
eventually have virtual.  Anyone know if Amax on a A3000 can run Virtual???

>Virtual memory is not what you describe.  Pick up any book on operating systems

>-- Marco

_____________________________________________________________________________
Jerry J. Trantow          | The concern for man and his destiny must always 
1560 A. East Irving Place | be the chief interest of all technical effort.
Milwaukee, Wi 53202-1460  | Never forget it among your diagrams and equations.
(414) 289-0503            |                               Albert Einstein
_____________________________________________________________________________

dillon@overload.Berkeley.CA.US (Matthew Dillon) (10/02/90)

    I've always envisioned a scheme like this:

    char **Lines;
    long *Storage;
    long NumLines;
    long MaxLines;

    Basically, Lines points to an array of pointers to each line in the
    file.  NumLines specifies the number of lines in the file and MaxLines
    specifies the maximum number of entries in the Lines & Storage arrays
    that have been allocated (determines when you have to reallocate Lines
    due to extending past the MaxLines limit).	Storage points to the
    file position in either the original file or a scratch file.

    Unmodified line, line not in memory
	Lines[i] = NULL;
	Storage[i] = seek-position-in-original-file

    Modified line, line not in memory
	Lines[i] = NULL;
	Storage[i] = 0x80000000 | seek-position-in-scratch-file

    Unmodified line, line in memory
	Lines[i] = <ptr-to-memory>
	Storage[i] = seek-position-in-original-file

    Modified line, line in memory
	Lines[i] = <ptr-to-memory>
	Storage[i] = 0x80000000 | seek-position-in-scratch-file

    In any case, all that would be required to remain in main memory is
    NumLines * 8 bytes.  You also get random access out of this scheme.
    Additionally, you could implement a locking system (lock line in memory)
    internally to make editor functions more efficient.

    The scratch file would slowly fill up with changes and, upon reaching
    a certain size, would be 'packed' by creating a new scratch file,
    copying pertinent entries, and deleting the old.

					-Matt

--

    Matthew Dillon	    dillon@Overload.Berkeley.CA.US
    891 Regal Rd.	    uunet.uu.net!overload!dillon
    Berkeley, Ca. 94708
    USA

eachus@linus.mitre.org (Robert I. Eachus) (10/02/90)

     A couple of comments on this issue.  First, don't use T: for
paging.  This has come to be the place to put small temporary files,
and most users have it assigned to RAM: or some other ram disk.  The
best idea would be to use a new name such as SWAP:.

     Second, I like the library idea, and doing it right should be
easy and highly portable.  Create a swap.device -- actually two, one
for machines with MMUs and one for machines without.  Programs would
be guaranteed that the most recently seeked-to page was valid.  The
swap device would have to write out all pages opened read/write when
it was necessary to flush them on the non-MMU version, but could check
the hardware flags on the MMU version.

     I lean strongly toward making SWAP: a normal file system, with
the normal protocol for using it being to delete the file instead of
closing it.  If the swap.device flushed pages after a user settable
interval of non-activity this would mean that editors, etc. could
provide automatic file backup with no effort.

     What do people think?  This unfortunately sounds like a project
for Dave Haynie or Bill Hawes who certainly have more to do than they
can handle.  (In particular we really need just one person writing
stuff which manages the page table entries, this MUST be compatible
with setcpu.)

--

					Robert I. Eachus

with STANDARD_DISCLAIMER;
use  STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...

daveh@cbmvax.commodore.com (Dave Haynie) (10/03/90)

In article <1089@ucsvc.ucs.unimelb.edu.au> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:

>I believe that I read that some (all) Mac s/ware for non MMU capable Macs can
>page memory (I don't know first hand, sorry) and similarly for Windows 3.0. I
>am not one of those "they have, we should have it" cry babies however. I just
>think it makes sense.

I'm not quite sure how much the Mac uses its tricks to swap out to disk, but
I did read up on what they do.  Basically, there's a memory allocation function
that returns a memory handle, which is essentially a pointer to a pointer to
a chunk of memory.  If you lock the handle, it doesn't move, but if you
unlock it, it can be relocated.  The main idea in the article I read (probably
in Dr. Dobbs or BYTE, at least a year or two ago) was that this mechanism 
prevented memory fragmentation by making chunks of allocated memory movable.

>My thoughts are that if all s/w writers could choose to support a mem. paging
>(Dave Haynie suggests a swapping library) system that nonMMU (hence virt. mem
>Amiga OS support..one day) Amiga owners would have more choice in s/w and RAM
>configurations.

There are quite a few systems that use or lend themselves to using a number
of memory blocks.  Like text editors and the like.  If you implemented some
kind of memory handle library for the Amiga, it could be based on a system of
handles.  A function call initializes a handle list/array with the path name
of a backup device, the optional maximum number of blocks in the array, 
optional maximum number of blocks in memory at one time, and the block size.  
Blocks initially point to nothing.  You obtain access to a block by locking 
it, and relinquish access by unlocking it.  No pointers are permitted to 
the contents of an unlocked block, though the block itself may be referenced
via a pointer.

While such a scheme isn't as nice as true virtual memory, it does have some
advantages.  It'll work on a plain 68000 system, in which the library calls
use normal AmigaDOS Read()/Write() to perform the swapping, manage a pool 
of real memory buffers, etc.  In an MMU based system without OS management of
the MMU, a real system could be implemented, with the library performing the
actual swap routines as blocks are locked and unlocked.  On an Amiga with
real virtual memory, a block is simply allocated as a chunk of MEMF_VIRTUAL
when it's first locked, freed when the block array is closed, and the block
free function becomes a no-op.

>Lou Cavallo.


-- 
Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
	Standing on the shoulders of giants leaves me cold	-REM

mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) (10/03/90)

In article <483@public.BTR.COM> eeh@public.BTR.COM (Eduardo E. Horvath  eeh@btr.com) writes:
	   The logical place to put the temporary file is in t: (maybe that's
   what Andy meant) so you can assign the temporary directory wherever you want:
   RAM: for speed, or a HD partition for size.

Let me see if I've got this straight: the stuff that the editor pages
out to disk because there's not enough ram, you want to put in RAM:
for speed? Methinks there's a problem here. If nothing else, it'd
probably take less memory and be faster just to leave the information
paged in.

	   1) Allow the user to (optionally) specify the number of buffers
		   to use when launching mg.

Sorry, the number of buffers is unlimited, and will stay that way.
Besides which, a single buffer can grow arbitrarily large.  Allowing
for a fixed amount of memory to be used might (just might) be
possible, but would either require 1) pre-allocating that much memory,
which isn't acceptable for editing small files, or 2) keeping a
running total of how much memory was allocated, which is liable to add
too much overhead. Since the point of paging to disk is to allow
editing of files larger than memory, adding other strange constraints
seems counterintuitive.

	   2) Provide a "flush out all buffers to disk" command in mg to 
		   free up memory just before launching a memory hog
		   application.

Not possible, mostly for portability reasons.

	<mike
--
He was your reason for living				Mike Meyer
So you once said					mwm@relay.pa.dec.com
Now your reason for living				decwrl!mwm
Has left you half dead

sysop@tlvx.UUCP (SysOp) (10/03/90)

In article <1088@ucsvc.ucs.unimelb.edu.au>, U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes:
> G'day,
....
> > you couldn't have that on every Amiga.  It should be possible to write a
> > swapping library that any application could easily use to swap data between
> > a disk file and a memory buffer.  It would be silly to have to create such a
> > mechanism more than once on the Amiga.
> 
> Yes all MMU Amiga users want true virtual memory :-) and I was hoping that some
> one would suggest that the effort of writing a swapping system (library) should
> be done once only (thanks Dave).
> 
> Non MMU Amiga owners of which there are a few :-) may not see the benefits of a
> virtual memory system in any future OS upgrades.   Amiga owners who are limited

Ok, here's an idea for those of us without an MMU.  Some of you might
call it a kludge, but basically, it's not my idea, so I won't take blame
for it.  ;-)

There's a library of routines for use on the clones that will allocate memory
from EMS, Extended, or disk, depending on what's available.  It will copy
data in from the storage into a buffer.  Your routines must go through
the library functions, which determine where the data is stored, and then
load it into a buffer.  Your routines get a pointer to this temporary buffer.
This pointer is basically invalid upon subsequent calls.

On the Amiga, this type of system could be used, but you would only allocate
from real memory, or if memory was low, from disk.  It really isn't that
hard.  It's sort of annoying having to do a double-dereference, but if you
know pointers anyway, it's not hard either.  The advantage is that you don't
need anything fancy, it'll work on anything.  (You could even swap to floppy.)
And, it's very easy to write such a library.

You would think it'd be slow, but using a particular library in my
experience, it worked quite well.  It does the job when "real" VM isn't
possible.
....
> Those multitasking fans of us out there {:-)} that have come too close too GURU
> time because a download used up too much memory while we were editing a file at
> the same time as ... {familiar problem anyone :-)}.

I sorta wonder, how would you know at what point to start swapping?  I guess
you could leave it a user-defined option (like, 50K, or 100K).  My guess is
you don't really know how low you can go, and you don't want to take it all,
or horrible nasty things happen.  (I hate filling up a RAM disk, and then
trying to delete things, but not having enough RAM to run the particular
delete program I'm using... :-)  Luckily, this has only happened to me once.)

....
> My stock A1000 with 414K usable { ugh, WB 1.3.2, Shell, Pop-some-thing-or-other
> is necessary, make that 300K usable :-) } helps me to appreciate these things.
> 
> :-)

Yeah, people say I'm crazy, but I bought a 2 meg expansion for my A1000, and to
this day, still haven't gotten a 2nd floppy drive.  Hey, it works for me.  :-)
I can always use the memory as a disk substitute, but can I use the disk
as memory?  Not really.  (I could still use a hard drive, VM or no. :-)
> 
....
> PS: perhaps the question of technical merit is whether there are any other ways
>     for Amiga 500 & 2000 owners of the future that will not have MMUs & virtual
>     memory capability to process data when allocatable memory is too low ?

(This prompted me to reply....)

>     As Dave points out the work should only be done once ...

But actually, I'm talking about something different (although in a way, it
may appear to be the same to the user); still, it is "a way... to process
data when allocatable memory is too low."

This could be a library that gets linked in, easily enough, but that would
end up as a copy bound into each executable.  However, the code to do this
sort of thing would not amount to much.  It's not "real" VM, but it can solve
some basic problems.  It cannot be used for code, only data, and only 
groups of data, no larger than your temporary "swap" space.

This idea can't really be applied to just any program (such as by somehow
placing routines in between the application and the OS), since you're working
out of a limited buffer, and the application's programmer must realize that
the pointer will be invalidated once other memory areas are overlaid.
So, you can't use this method with your favorite program, unless you have
the source for it and are able to modify it.

....
> yours truly,
> Lou Cavallo.

(You know, this strikes me as similar to my previous post about using the
24-bit standard file to transfer between programs and display cards...
but that's another story.  I still don't see why not, in the absence
of a proper standard..... :-)  So, I'm bracing myself for comments about
this not being a good idea.. :-)

I'm not suggesting this method as a replacement for VM, but as one alternative
that could be useful for some people who need this capability.  (Hopefully
someone will find something of interest here.... :-)
--
Gary Wolfe, SYSOP of the Temporal Vortex BBS                        // Amiga!
..uflorida!unf7!tlvx!sysop, ..unf7!tlvx!sysop@bikini.cis.ufl.edu  \X/  Yeah!
 

peter@sugar.hackercorp.com (Peter da Silva) (10/03/90)

In article <MWM.90Oct2160058@raven.pa.dec.com>, mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
> In article <483@public.BTR.COM> eeh@public.BTR.COM (Eduardo E. Horvath  eeh@btr.com) writes:
> 	   1) Allow the user to (optionally) specify the number of buffers
> 		   to use when launching mg.

> Sorry, the number of buffers is unlimited, and will stay that way.
> Besides which, a single buffer can grow arbitrarily large. [...] Since
> the point of paging to disk is to allow editing of files larger than
> memory, adding other strange constraints seems counterintuitive.

The Amiga is a multitasking and (to some extent) multiuser system. It is
desirable that it be possible to limit any program... particularly memory
hogs... to less than "all available memory". The consequences of running out
of memory in that background compile you're doing are much worse than those
of running out of memory in an editor... it just gets slower (and on a hard
disk, very little slower... it's still I/O bound on the user).

Gotta get those priorities straight.

> 	   2) Provide a "flush out all buffers to disk" command in mg to 
> 		   free up memory just before launching a memory hog
> 		   application.

> Not possible, mostly for portability reasons.

This one went right over my head. What's so non-portable about a flush
command? All the code has to be in there to manage paging blocks out
anyway, so

#ifdef AMIGA
		case FLUSHCMD:
			for(each block)
				page_out(block);
#endif
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

jdickson@jato.jpl.nasa.gov (Jeff Dickson) (10/04/90)

Newsgroups: comp.sys.amiga.tech
Subject: Indicating end of file
Reply-To: jdickson@jato.Jpl.Nasa.Gov (Jeff Dickson)
Organization: Jet Propulsion Laboratory, Pasadena, CA

	Hi all.

	During late last year and then again some this year I wrote a
serial I/O handler. It inserts itself as a DOS device. At first I had
to perform my own DOS packet interchange with it, because it was not
DOS compatible in the least bit. Recently, I changed it so that it
would be DOS compatible. Now it works with such Amiga commands as copy,
echo, filenote, etc. 

	The problem I am having comes when I am copying from my 
handler to a file.

	copy SID:2400,1,8,N,031 sidout
             |_| |____________|
             |         |
	   DOS device  |
	    name       |
		       |
		  communication
		  parameters

	If the file exists, Copy concludes by printing the message,
"Error during writing - sidout removed". Otherwise, copy doesn't
print an error, but I can't Type the file because (I guess) the 
end of file (^J ?) has been obscured. Funny thing is according to
the List command, the file contains the same number of bytes as
the characters I typed on the terminal.

	The sequence of DOS packets is as follows:

		ACTION_LOCATE_OBJECT
		ACTION_EXAMINE_OBJECT
			...so that the whatever is inputing from
			the SID doesn't give up because there is
			nothing to read, I block the DOS packet
			until something does become available.

		        When the DOS request can be satisfied, I
			insert the number of bytes available in
			the 'fib_Size' field and the number of
			blocks in the 'fib_NumBlocks' field (usually
			zero). I have no idea what to initialize
			the 'fib_DiskKey' field as. I've tried -1
			and 0.
		ACTION_FIND_INPUT
		ACTION_READ
			.
			.
	           and so forth
			.
			.
		ACTION_END

	I have a rather old Transactor magazine that includes an article
on AmigaDOS. It says on end of file that the 'RES1' field of the DOS packet
is to contain a -1 (DOSFALSE) and that the 'RES2' is to contain the end of
file indication (in my case zero). I tried that, but it didn't work. So
now, I just put zero in the 'RES1' field when end of file occurs.

	Any comments on what I could be doing wrong? I am using MANX 'C'
compiler v3.4a. Please don't inundate me with things along the lines of 
INTs VS LONGs. Think I could have gotten this far if I didn't know the
implications of each?

						Jeff

---------------------------------------------------------------------------
Jeff S. Dickson                                 jdickson@zook.jpl.nasa.gov

valentin@cbmvax.commodore.com (Valentin Pepelea) (10/05/90)

In article <1990Sep24.101616.20657@psuecl.bitnet> d6b@psuecl.bitnet writes:
>
> The largest file I've ever dealt with was 1.4 megabytes, and I thought that
> was pretty big. It fits nicely in my 3 megabytes. So, it's not clear that
> having some sort of VM built into an editor is terribly important. Whether or
> not having VM in the OS is desirable is another matter. I don't need it
> myself, but I suppose there are those that do (who are you??)

Good point.  There are many people who do not realize the usefulness of
virtual memory for the Amiga.  They say, 'Hey, I've got 4MB and that's plenty
enough for me'.  Quite a simplistic approach.  But the fact of the matter is,
we all need virtual memory - right now.

Would you like to have GCC, the GNU C compiler, running on your Amiga?  Well,
you'd better stock up on memory chips - or get virtual memory.  And you'd
like to see Mathematica running on your Amiga?  It won't work without oodles
of memory.  How about a digitizer that captures a 1024x1024x24 image for you
to manipulate later on?  How do you store that picture without virtual memory?

So we might be tempted to sit back in our chair and claim that we don't need
virtual memory, because no applications have been written that require it.
Well, you'll never see those applications if we don't ship virtual memory
first.

Now getting back to your example, how can I edit a 500-page book if I don't
have virtual memory and the editor does not have a built-in VM manager?  I'd
be forced to divide the book into sections, and keep guessing at what page
each chapter starts on.  And what a nightmare to create an index!

Valentin
-- 
The Goddess of democracy? "The tyrants    Name:    Valentin Pepelea
may destroy a statue,  but they cannot    Phone:   (215) 431-9327
kill a god."                              UseNet:  cbmvax!valentin@uunet.uu.net
             - Ancient Chinese Proverb    Claimer: I not Commodore spokesman be

U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) (10/05/90)

G'day,

EH> In article <523@public.BTR.COM>, eeh@public.BTR.COM
EH> (Eduardo E. Horvath eeh@btr.com) writes: 

LC> In article <1089@ucsvc.ucs.unimelb.edu.au>
LC> U3364521@ucsvc.ucs.unimelb.edu.au (Lou Cavallo) writes: 

Thanks Eduardo for the useful discussion of VM.  I've saved the article.

EH> Is S/W VM really worth while?

I just thought I'd re-emphasize to everyone (and no-one in particular)
that I'm not promoting s/w VM in general.  I'd just like to see memory
paging/swapping/whatever_anyone_wants_to_call_it:-) available for data
handling where the data is larger than allocatable memory.

Kent Dolan put this quite nicely in a recent posting in this newsgroup.

By the way...

Jim Wright I think in comp.sys.amiga was (may still be) asking for help
to deal with his (pre-processed!) datafiles of up to 30 Mbytes in size!

From a marketing perspective this type of s/w VM could be a help to the
prospective Amiga power user/buyer (eg scientific user, heavy DTP type).

I am still hoping someone can positively confirm whether Excellence 2.0
does do s/w VM ... {I'll try to dig up the magazine I read that in :-(}.

EH> Eduardo Horvath				eeh@btr.com

yours truly,
Lou Cavallo.

lron@easy.HIAM (Dwight Hubbard) (10/06/90)

>In article <14893@cbmvax.commodore.com> valentin@cbmvax.commodore.com (Valentin Pepelea) writes:
>In article <1990Sep24.101616.20657@psuecl.bitnet> d6b@psuecl.bitnet writes:
>>
>> The largest file I've ever dealt with was 1.4 megabytes, and I thought that
>> was pretty big. It fits nicely in my 3 megabytes. So, it's not clear that
>> having some sort of VM built into an editor is terribly important. Whether or
>> not having VM in the OS is desirable is another matter. I don't need it
>> myself, but I suppose there are those that do (who are you??)
>
>Good point. There are many people who do not realize the usefulness of
>virtual memory for the Amiga. They say, 'Hey, I've got 4MB and that's plenty
>enough for me'. Quite a simplistic approach. But the fact of the matter is,
>we all need virtual memory - right now.

Very true, but even if that 1.4 megabyte file fits into your 3 megabytes of
memory, what do you do if you have some other tasks started?  It doesn't take
too many tasks to fill up memory.  I mention this because I have tried to edit
files that are larger than 2 megabytes on a 3 megabyte machine, which works
fine if I close everything down.

>Would you like to have GCC, the GNU C compiler, running on your Amiga? Well,
>you better stock up on memory chips - or get virtual memory. And you'd like
>to see Mathematica running on your Amiga? It won't work without oodles of
>memory. How about a digitizer that captures a 1024x1024x24 image, for you
>to manipulate later on. How do you store that picture without virtual memory?
>
>So we might be tempted to sit back in our chair and claim that we don't need
>virtual memory, because no applications have been written that require it.
>Well, you'll never see those applications if we don't ship virtual memory
>first.

Exactly, but wouldn't it be better to work on getting AmigaDos applications to
run under the new version of Unix?  It does have virtual memory, and as has
been mentioned before, virtual memory and real-time performance just don't go
together on a 68000.  So, the 68020/68030 owners could run Amiga Dos apps
under Unix and have virtual memory, and 68000 owners would still be able to
run them, but without virtual memory.

At the very least it would seem to me that adding virtual memory to the OS
would make it more reliable since those poorly written applications that
hang when they run out of memory wouldn't run out of memory.

>Now getting back to your example, how can I edit a 500 page book, if I don't
>have virtual memory, and the editor does not have a built-in VM manager? I'd
>be forced to divide the book into sections, and keep guessing at what page each
>chapter starts. And what a nightmare to create an index!
>

>Valentin
>--
>The Goddess of democracy? "The tyrants    Name:    Valentin Pepelea
>may destroy a statue,  but they cannot    Phone:   (215) 431-9327
>kill a god."                              UseNet:  cbmvax!valentin@uunet.uu.net
>             - Ancient Chinese Proverb    Claimer: I not Commodore spokesman be

--
-Dwight Hubbard,                      |-Kaneohe, HI
-USENET:   uunet!easy!lron            |-Genie:    D.Hubbard1
           lron@easy.hiam             |-GT-Power: 029/004

mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) (10/06/90)

In article <6694@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
   > Sorry, the number of buffers is unlimited, and will stay that way.
   > Besides which, a single buffer can grow arbitrarily large. [...] Since
   > the point of paging to disk is to allow editing of files larger than
   > memory, adding other strange constraints seems counterintuitive.

   The Amiga is a multitasking and (to some extent) multiuser system. It is
   desirable that it be possible to limit any program... particularly memory
   hogs... to less than "all available memory". The consequences of running out
   of memory in that background compile you're doing are much worse than those
   of running out of memory in an editor... it just gets slower (and on a hard
   disk, very little slower... it's still I/O bound on the user).

If there were a pre-defined way of giving programs specific memory
limits (ala OS/9), it might be worth thinking about. However, there
isn't. In fact, I have yet to run across any Amiga program that had
such a facility, even though I've encountered a fair number that will
grow as needed, or even start by grabbing large chunks of memory. The
Amiga generally doesn't have limits except those imposed by hardware,
and I feel no urge to start a trend towards doing so. However, the
sources are available, and you're free to do it yourself.

   Gotta get those priorities straight.

Yup; more features/speed is far more important than adding arbitrary
limits.

   > 	   2) Provide a "flush out all buffers to disk" command in mg to 
   > 		   free up memory just before launching a memory hog
   > 		   application.

   > Not possible, mostly for portability reasons.

   This one went right over my head. What's so non-portable about a flush
   command? All the code has to be in there to manage paging blocks out
   anyway, so

   #ifdef AMIGA
		   case FLUSHCMD:
			   for(each block)
				   page_out(block);
   #endif

Fine, you've paged out all the blocks. Who now owns the blocks?
Well, the editor does, as page_out is presumably used to free blocks
for reuse. Having it free blocks by default means you wind up freeing
& reallocating a slew of blocks to read a file in. Not a good idea. So
you change this loop to free each block. Now who owns the blocks? The
library routine free. Does it give it back to the system?  Maybe.
However, if it does so, you take a general performance hit in compiled
code (verified by experiment, posted as part of an earlier
discussion). So you have to implement your own memory allocation
strategy, that supports giving freed memory back to the system. This
is going to add confusion to the editor, and is a worse than useless
for systems that have real VM.

	<mike
--
It's been a hard day's night,				Mike Meyer
And I been working like a dog.				mwm@relay.pa.dec.com
It's been a hard day's night,				decwrl!mwm
I should be sleeping like a log.

new@ee.udel.edu (Darren New) (10/06/90)

I have a library that I use when I want to work with files larger than
memory.  It has several features that I find indispensable:

1) The virtual memory regions are randomly addressable. That is, there
is none of this "read an area into your own memory and then use it from
there" baloney.  Having the facility to directly access memory makes
many operations much easier. Without it, you might as well use files.

2) The VM is associated with a particular file. Hence, one can "open"
the file, access it like memory, and then "close" the file with the
assurance that the info in the file will still be there next time you
open it.  Combined with (1), this allows applications to have "global
variables" whose lifetime exceeds that of the program.  There is no
need to write routines to file-out and file-in the memory structures
while correcting pointers and etc.

3) There are routines for efficient malloc/realloc/free-style calls,
and eventually I will add GC and such. Generally not difficult unless
you have to reinvent them every time you need them.

4) There are calls to dump the VM in a readable format for ease of
debugging.

5) I have the source, and it's fairly portable :-)

There are three features it ought to have that it doesn't:

1) You have to use special macros to access the VM, so you can't (for
example) map a structure onto the VM region and use it there.  So far,
my structures have been simple enough that I could get away with L =
GM(offset) to get a long and C = GM(offset,byteoffs) to get a character
with similar routines for storing values. It would be nice to be able
to use standard unary * notation; however, I don't think this could be
done on a system without an MMU.

2) You can't have more than one VM file open at once.  I may rewrite
mine to allow this, as I am currently trying to think of how to do my
latest project without this and it would be much easier with this.

3) You can't have more than one process accessing the same file at the
same time consistently. So far, this is not a problem for me because
all programs using the VM are servers to other programs (via ARexx,
mostly).  However, I can see where some programs would want this.

All in all, some subset of the mmap() functionality is what I find
useful.  Possibly just "Open a VM file and tell me where you put it"
would be enough, given that the other exec memory allocators would work
on that area.  Again, this won't work on a nonMMU system.
         -- Darren
-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, 
      Formal Description Techniques (esp. Estelle), Coffee -----

peter@sugar.hackercorp.com (Peter da Silva) (10/06/90)

In article <14893@cbmvax.commodore.com> valentin@cbmvax.commodore.com (Valentin Pepelea) writes:
A bunch of good stuff about useful VM applications, but the following is a
bit stretched...

> Now getting back to your example, how can I edit a 500 page book, if I don't
> have virtual memory, and the editor does not have a built-in VM manager?

Break it up into sections.

> I'd
> be forced to divide the book into sections, and keep guessing at what page each
> chapter starts.

1> list book
Directory "dh0:book"
Chapter1                  901852 ----rw-d 20-Jun-90 17:21:50
Chapter2                  878616 ----rw-d 20-Jun-90 17:21:50
Chapter3                 1001184 ----rwed 03-Sep-90 14:05:13
...

> And what a nightmare to create an index!

1> cd book
1> nroff -mindex Chapter#?
; nroff -mindex is just nroff -me with a global divert to nil: for everything
; but the index...
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

peter@sugar.hackercorp.com (Peter da Silva) (10/06/90)

In article <MWM.90Oct5145603@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
> If there were a pre-defined way of giving programs specific memory
> limits (ala OS/9), it might be worth thinking about. However, there
> isn't. In fact, I have yet to run across any Amiga program that had
> such a facility, even though I've encountered a fair number that will
> grow as needed, or even start by grabbing large chunks of memory.

This is more or less true, modulo FACC, RAD:, and other long-running potential
memory hogs. Also consider that there aren't that many Amiga programs that
support virtual memory at all.

> The
> Amiga generally doesn't have limits except those imposed by hardware,
> and I feel no urge to start a trend towards doing so.

1> stack 4000

> Yup; more features/speed is far more important than adding arbitrary
> limits.

Not an arbitrary limit... a user-settable limit. Which is a feature.

> Fine, you've paged out all the blocks. Who now owns the blocks?

The system.

> Well, the editor does, as page_out is presumably used to free blocks
> for reuse. Having it free blocks by default means you wind up freeing
> & reallocating a slew of blocks to read a file in.

Elementary optimisation: don't free the blocks until you've finished the
current operation.

> Not a good idea. So
> you change this loop to free each block. Now who owns the blocks? The
> library routine free.

For allocating fixed size blocks like this, I'd call AllocMem and FreeMem
myself.

> So you have to implement your own memory allocation
> strategy, that supports giving freed memory back to the system.

Why? It's already there.

> This
> is going to add confusion to the editor, and is worse than useless
> for systems that have real VM.

#ifdef Amiga
freeblock(b) { FreeMem(b, BLOCKSIZE); }
#else
freeblock(b) { free(b); }
#endif

Simply shocking!
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

peter@sugar.hackercorp.com (Peter da Silva) (10/06/90)

In article <EACHUS.90Oct2113309@aries.linus.mitre.org> eachus@linus.mitre.org (Robert I. Eachus) writes:
>      Second, I like the library idea, and doing it right should be
> easy and highly portable.  Create a swap.device -- actually two, one
> for machines with MMU's and one for machines without.  Programs would
> be guaranteed that the most recently seeked-to page was
> valid.

From experience with Forth, I would recommend that you guarantee the *two*
most recently opened pages. You need to be able to do:

	from = swapin(fromaddr, length, RONLY);
	to = swapin(toaddr, length, RDWR);
	memcpy(to, from, length);

Also, I don't see why a device. A library, where you call "swap_open(name)",
so your swapped pages don't have to be in the same address space as anyone
else's.

For MMU systems, swapin(addr, length, RONLY) would be a no-op. With RDWR
it would just mark all pages in that range dirty... or if you use trap on
access you can make it a no-op as well... but why discard a useful hint?
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

eeh@public.BTR.COM (Eduardo E. Horvath eeh@btr.com) (10/07/90)

In article <EACHUS.90Oct2113309@aries.linus.mitre.org> eachus@linus.mitre.org (Robert I. Eachus) writes:

>     A couple of comments on this issue.  First don't use T: for
>paging.  This has come to be the place to put small temporary files,
>and most users have it assigned to RAM: or some other ram disk. The
>best idea would be to use a new name such as SWAP:

	Sorry, too late.  WordPerfect already uses T: for its storage.

>     Second, I like the library idea, and doing it right should be
>easy and highly portable.  Create a swap.device -- actually two, one
>for machines with MMU's and one for machines without.  Programs would
>be guaranteed that the most recently seeked-to page was
>valid.  The swap device would have to write all pages opened
>read/write when it was necessary to flush them on the non-MMU version,
>but could check the hardware flags on the MMU version.

	Software VM would require a great deal of locking/unlocking code
to make certain that the data you are accessing is in memory when it's needed
and can be purged when it's not.  If the same program were to run on a
machine with an MMU, you would still be executing all of that code, so the
MMU would buy you nothing.  The overhead involved in locking and unlocking
chunks of data would greatly slow execution especially if it involved library
or function calls.  Macros would be best.  Pointers would not be valid
unless VM had its own address space.  

	You would receive about the same benefit as using fread()/fwrite()/
lseek() but your buffers would eventually take up all free RAM.  The only 
advantage would be the ability to control the size of the resulting file.
This is not a general purpose solution for running programs with more data 
than available RAM.  

	I think we ought to deal with S/W VM and H/W VM as two separate issues.
H/W VM is a general solution to times when programs need more memory than is
available.  On startup, a VM program could link into the VM manager and ask
for VM.  This task would then receive its own MMU table and could request
whatever memory it needed.  AllocMem() would need to be SetFunction()ed to
provide the necessary information to the VM manager (when to swap pages to disk).
MEMF_PUBLIC would become very important.  (Imagine your message port being
swapped out! 8^)   Internally, VM would be completely transparent, pointer
would work, etc. but communicating with other tasks could be difficult due
to different address spaces. 

	S/W VM would likely be very application dependent.  A system that
works well for an editor would likely not work for a draw program where
you want to swap out individual or collections of objects.

	On a side note, assuming that a system using systems level VM with
one address space was implemented, and swapping was only allowed in Fast RAM,
is there any easy method to determine which portions of RAM are occupied by 
non-swappable code like interrupt servers?  It would be very embarrassing to
discover the VM handler had swapped out an interrupt server's code or data block
and the system came crashing down.  A second interesting situation would occur
if the swap space were assigned to a RAM drive or virtual disk that was running
under VM...

>--
>					Robert I. Eachus

=========================================================================
Eduardo Horvath				eeh@btr.com
					..!{decwrl,mips,fernwood}!btr!eeh
	"Trust me, I know what I'm doing." - Sledge Hammer
=========================================================================

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/07/90)

lron@easy.HIAM (Dwight Hubbard) writes:
[lots of other good thoughts from several folks omitted:]
>Exactly, but wouldn't it be better to work on getting AmigaDos applications to
>run under the new version of Unix?  It does have virtual memory, and as has
>been mentioned before, virtual memory and real-time performance just don't go
>together on a 68000.

Envision an editor that defaults to a 60KByte edit buffer, but goes into
software virtual memory mode for larger files, and which I can control to
open the memory buffer to a couple of megabytes if it is available and not
needed to satisfy contention from background processes.  Now I have great
performance on small files, pretty good performance on big files, and, while
things may slow down a lot, it is still _possible_ to edit any file I can
fit a copy of into my available hard drive space.  It is that quality of
still being possible to edit large files that makes the effort to install
software virtual memory worthwhile.

The situation I mentioned a week back in this thread, of editing a 3.5MByte
file, was done on a fast MS-DOS box and a fast disk, and while a global
substitution might take 20 seconds, going to a particular line in the file
never took more than two.  It is certainly possible for a clever programmer
to make the performance acceptable for a particular task by careful design
and tuning, and if the same cleverness is then isolated into a library, it
is there for all programmers to use.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

lron@easy.HIAM (Dwight Hubbard) (10/07/90)

>In article <1990Oct7.032409.22928@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>lron@easy.HIAM (Dwight Hubbard) writes:
[previous junk removed]
>
>Envision an editor that defaults to a 60KByte edit buffer, but goes into
>software virtual memory mode for larger files, and which I can control to
>open the memory buffer to a couple of megabytes if it is available and not
>needed to satisfy contention from background processes.  Now I have great
>performance on small files, pretty good performance on big files, and, while
>things may slow down a lot, it is still _possible_ to edit any file I can
>fit a copy of into my available hard drive space.  It is that quality of
>still being possible to edit large files that makes the effort to install
>software virtual memory worthwhile.
>
Are we talking about software VM (which in my opinion is a waste of time) or
editing files larger than physical memory (a very nice ability to have)?
Adding VM just to edit files larger than memory is a waste.  I don't see the
reason to add virtual memory for this special case.  In my opinion the better
solution would be for the programmers to write the software so that it will
swap off of disk.  This is not an uncommon ability on MessyDos machines, and
I believe that Matt Dillon did post an article a couple of days ago explaining
a possible way of doing this.  The basic principles for editing a file
larger than physical memory are quite different from those of writing a
software VM system that would have to worry whether or not the system is
going to crash if this particular chunk of code gets swapped out.

[Comments on MessyDos virtual file handling removed]
>
>Kent, the man from xanth.
><xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

--
-Dwight Hubbard,                      |-Kaneohe, HI
-USENET:   uunet!easy!lron            |-Genie:    D.Hubbard1
           lron@easy.hiam             |-GT-Power: 029/004

jwright@cfht.hawaii.edu (Jim Wright) (10/07/90)

valentin@cbmvax.commodore.com (Valentin Pepelea) writes:
>Well, you'll never see those application if we don't ship virtual memory
>first.                                               ^^^^^^^^^^^^^^^^^^^

Interesting.

>Now getting back to your example, how can I edit a 500 page book, if I don't
>have virtual memory, and the editor does not have a built-in VM manager?

Use AmigaTeX.

--
Jim Wright
jwright@cfht.hawaii.edu
Canada-France-Hawaii Telescope Corp.

peterk@cbmger.UUCP (Peter Kittel GERMANY) (10/08/90)

In article <MWM.90Oct5145603@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>
>If there were a pre-defined way of giving programs specific memory
>limits (ala OS/9), it might be worth thinking about. However, there
>isn't. In fact, I have yet to run across any Amiga program that had
>such a facility, even though I've encountered a fair number that will
>grow as needed, or even start by grabbing large chunks of memory. The

Is AmigaBasic with its CLEAR statement not a candidate? It doesn't
grow automatically, the programmer has to do it explicitly.

-- 
Best regards, Dr. Peter Kittel  // E-Mail to  \\  Only my personal opinions... 
Commodore Frankfurt, Germany  \X/ {uunet|pyramid|rutgers}!cbmvax!cbmger!peterk

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/09/90)

lron@easy.HIAM (Dwight Hubbard) writes:
> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>> lron@easy.HIAM (Dwight Hubbard) writes:
>[previous junk removed]
[ditto]

> Are we talking about software VM (which in my opinion is a waste of
> time) or editing files larger than physical memory (a very nice
> ability to have)?  Adding VM just to edit files larger than memory is a
> waste.[...] The basic principles for editing a file larger than
> physical memory are quite different from those of writing a software VM
> system that would have to worry whether or not the system is going to
> crash if this particular chunk of code gets swapped out.

I think we agree on the right thing to do, but maybe not on how to do
it. I'm not really interested in virtual memory for code, just for data.
Truly huge pieces of code that overwhelm the hardware need to be
overlaid anyway; the intelligence that can be added by a careful overlay
scheme will always allow it to beat software code VM.

Another poster has suggested, and I wish to promote the idea, that we
should have a library (or handler, I'm not clear on the implications of
one against the other) that captures the best available design for
software data VM, so that this facility is easily available to
programmers otherwise not capable of developing it, or with other work
to do than redeveloping the wheel.

As a checklist for what this facility should provide, an editor is a
pretty good worst case, particularly one, like emacs, that is capable of
opening multiple windows in multiple files simultaneously, and in
multiple instances of the editor running at once. You can't just work in
blocks, since the addition of text in any of the windows will change the
byte boundaries of all the blocks, and scrolling multiple, modified
windows at once in the same file (a facility of some multiple windows
editors) is going to keep any design hopping to keep up.

I guess what I want to say is that this is such a complex task that it
needs to be done well once, and made generally available, rather than
done as a compromise of the available time to devote to each product.

It doesn't have to do "magic" (capture every pointer use and resolve it
to currently in-RAM data); just provide a way to get and confirm handles
on data, with the necessary paging in and out, and shifting of bytes
within blocks, done as needed.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

new@ee.udel.edu (Darren New) (10/10/90)

In article <1990Oct9.061755.15112@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
>As a checklist for what this facility should provide, an editor is a
>pretty good worst case, particularly one, like emacs, 

Yeah, I'd say EMacs is a worst-case. :-) :-) :-)

Seriously tho, you also want to consider an object-oriented memory,
possibly supporting a database system of some sort. You would like it
to be persistent, quick to access, and maybe even garbage collectable.
The primitives that you think up when thinking about editing buffers
are *very different* from the primitives you think up when you consider
keyed access (like btrees), variable-size records, quick random access,
persistence, pointers to other pieces of memory, and so on.  If you
want a library only for editors, why not just write the editor to do
its own paging?  It would be much easier and more efficient than
setting up a general system.  If you want ideas on other things that
people (or person :-) would use the library for, I'd be glad to explain
how I use mine.

	 -- Darren
-- 
--- Darren New --- Grad Student --- CIS --- Univ. of Delaware ---
----- Network Protocols, Graphics, Programming Languages, 
      Formal Description Techniques (esp. Estelle), Coffee -----

chanson@isis.cs.du.edu (Chris Hanson) (10/13/90)

   How about this concept: Suppose you went to edit one of those vaunted and
overwhelming 6 meg files, on your 512k machine. Your editor, (being smart,
clever, and possessing much farvergnugen) realizes the impossibility of the
situation, and calls on virtual.library for help.
 
(Virtual.Library, clad in red-n-blue tights, enters from stage left.)

   So anyway, the editor opens virtual.library, and calls SetupVirtualBlock(),
telling it how big of an area it wants (6 stinkin' meg), and optionally, where
to load that 6 meg of data from. (Could speed things up a bit if the library
was informed of the source of that data, but it isn't important.) Ok, the 
library returns some code stating that the planets ARE in alignment, and
that everything is in fact groovy. The editor opens its screen, and goes to
display the first page of text. Oh. No text. So, the editor calls on the
virtual.library RealizeChunk(), (to make a chunk of 'virtual memory' into
'real memory') passing it a 'size' parameter, (oh, about 80 x 25 or about
2k, more than enough text to fill the screen) a pointer to where our
virtual.library should put the data, and an index of where in the 6m file it
should get the 2k of data. (Probably the beginning.)

   The request is good, and virtual.library deals with it, loading the data 
either from the source file or from its temporary file, and putting it into
the memory buffer that the editor pointed out. Great. Now you start editing.

  As you insert characters (say, adding in one of these little notes-in-
parenthesis things that I use a lot), the editor displays the changes
onscreen, while doing a virtual.library InsertChunk(), telling virtual.
library where, how big, and what was inserted. Virtual, being a very clever
beast, just queues these up, until you stop doing InsertChunk()s, and do 
something else. When you, for example, do a couple of page-downs in the
editor, the editor does a RelinquishChunk() on the 2k chunk you were 
working on, and does RealizeChunk() on another area a couple pages down.

   The virtual.library (being clever, et al) notices that you stopped
doing all those ninny one-character inserts, and decided to go work somewhere
else. It Groks its way through its queue of all the InsertChunk()s you did,
and builds a new version of that 2k block of memory that you were working
on, a new version which is, oh, 2.5k after all the additions you made.
It writes the new 2.5k block to the disk in its temporary area, noting that
this 2.5k block now takes the place of the 2k block in the original file. 
Then it goes to Work on the new area of the file that you page-down'ed to.
The RealizeChunk() succeeds, and the editor has another ~2k block to 
work with.

  This time you get really daring, and delete some reference
to IBM-multitasking-under-MSDOG from your file. A big reference, that
amounts to about, 512 bytes. You can delete it as a block cut, or sit there
and hammer away on the backspace key until it is gone. The editor will
do one big DeleteChunk(location, size) or a bunch of little DeleteChunk(
location, 1)s. Either way, you finish eradicating that fiction, and decide
to skip to the end of the file. The editor does RelinquishChunk() on the
area you modified, and RealizeChunk()s the end of the file. (Two notes
here, we might be nice, and allow start-relative, current relative, and 
end relative positioning, like fseek does, or we just might make the editor
keep track of how many bytes are in the file currently. You choose. Also,
the editor might have to do another RealizeChunk() after the cut operation,
in order to repaint the entire screen.)  

  So, when the editor did RelinquishChunk on the chunk that had had some
stuff deleted out of it, the virtual.library notes this too, and comes
up with the new, final representation of that ~2k block, which is now 
around 1.5k. And it writes this too to its temporary area, noting that
this 1.5k chunk now replaces this other 2k area in the old file. And 
then, having cleaned THAT up, goes on to give you the last 2k in the file
to edit.

  Ok, so the file looks good to you, and you SAVE'n'CLOSE it. The editor
does a UpDateVirtualBlock() with the necessary arguments, and then a
CloseVirtualBlock(). When it does UpDate...(), the virtual.library goes
into serious action, and builds a quick list of all of the locations in
the original file where something has been changed or added/deleted. It
then (possibly copies the original file and) opens the original file,
fseeks to the first modification, fwrites out _ITS_ version of the modified
area, writes out all the unchanged data between there and the next
modification (getting the unchanged data from the copy it just made),
writes out _ITS_ version of the next change, etc, etc, etc.

It is even cleaner if you don't add/delete data, and just modify it. Because
RelinquishChunk can then just modify the working copy of the file right
then, rather than waiting for you to quit, and then rewriting the ENTIRE thing.

Basically, it involves just asking for, modifying, and relinquishing
certain blocks of a known file. We could be nice and put in functions
to quickly (or, as quickly as POSSIBLE) search a given block for strings,
etc.
 
   As my summary sez, it is late, I need more caffeine. Please,
1.DH0:> Flame >NIL:
1.DH0:> Constructive_Comments >ME:
 
    Chris - Xenon



-- 
#define chanson Christopher_Eric_Hanson || Lord_Xenon || Kelson_Haldane 
I work, but you don't know who I work for. And they don't know I'm here.

::I'm @ chanson@nyx.cs.du.edu

peter@sugar.hackercorp.com (Peter da Silva) (10/14/90)

In article <1990Oct13.031948.25857@isis.cs.du.edu> chanson@isis.UUCP (Chris Hanson) writes:
> 'real memory') passing it a 'size' parameter, (oh, about 80 x 25 or about
> 2k, more than enough text to fill the screen) a pointer to where our
> virtual.library should put the data...

Nope. It should return a pointer to a "locked chunk" of that size. What if you
ask for an overlapping chunk, later? Also, provide calls to unlock a chunk
(I'm not going to use this chunk for a while... move it or swap it out
if you want) and mark a chunk readonly (don't bother to swap it out).

>   As you insert characters (say, adding in one of these little notes-in-
> parenthesis things that I use alot), the editor displays the changes
> onscreen, while doing a virtual.library InsertChunk(), telling virtual.
> library where, how big, and what was inserted.

Hold it right there... is this a VM library or a buffer-management library?
Both are useful, but they're not the same thing. For one thing, addresses
in a buffer can change if you do inserts... so you need to deal with handles
or have the ability to set marks. What if you have two chunks and insert
stuff into the first one? In a VM library, that's just pushed data past the
second page. In a buffer library, the second page moves down with it.

If you do this, see if you can make it portable... and crosspost to
alt.lang.cfutures. This is handy stuff on any system... not just an Amiga.
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

pds@quintus.UUCP (Peter Schachte) (10/16/90)

In article <MWM.90Oct5145603@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (My Watch Has Windows) Meyer) writes:
>In article <6694@sugar.hackercorp.com> peter@sugar.hackercorp.com (Peter da Silva) writes:
>   The Amiga is a multitasking and (to some extent) multiuser system. It is
>   desirable that it be possible to limit any program... particularly memory
>   hogs... to less than "all available memory".
>I've as yet to run across any Amiga program that had
>such a facility, even though I've encountered a fair number that will
>grow as needed, or even start by grabbing large chunks of memory.

Why not free clean (unmodified wrt the file) buffers when someone needs
more memory than is available?  I believe this can be done (if not in some
easier way) by having a library which has an expunge function.  This seems
like a good compromise.
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds