[comp.lang.functional] "reverse diff" arrays

news@massey.ac.nz (USENET News System) (06/27/91)

Hi,

In response to the discussion on graph algorithms, someone (I forget
who) posted two messages on "reverse diff" arrays, one of the messages
including ML source code which I assume implemented these arrays using
references.

Could the poster please re-send me the article containing the source
code, and any associated references?

Would anyone else care to comment on why such a simple technique (at
least in hindsight) appears to have received so little attention?  Why
bother with compile-time analysis to detect "single-threadedness" when
a simple run-time organisation for arrays will suffice?
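
By "a simple run-time organisation" I mean something like the sketch
below.  This is only my own reconstruction, in Standard ML, of what I
assume the posted code did (the names and interface are mine, not the
original poster's): the newest version owns a flat mutable array, and
superseded versions are demoted to "reverse diff" nodes recording how
to undo each update.

    (* The newest version holds the flat array; older versions are
       chains of (index, old value) diffs leading towards it. *)
    datatype 'a node = Flat of 'a array
                     | Diff of int * 'a * 'a node ref
    type 'a varray = 'a node ref

    fun make (n, x) = ref (Flat (Array.array (n, x)))

    (* Reading the newest version is one dereference plus an array
       read; reading an old version walks its diff chain. *)
    fun get (r, i) =
      case !r of
          Flat a          => Array.sub (a, i)
        | Diff (j, x, r') => if i = j then x else get (r', i)

    (* Updating the newest version mutates the array in place and
       demotes the old handle to a reverse diff pointing at the new
       handle, so the old version still reads as it did before. *)
    fun set (r, i, x) =
      case !r of
          Flat a =>
            let val old = Array.sub (a, i)
                val _   = Array.update (a, i, x)
                val r'  = ref (Flat a)
            in  r := Diff (i, old, r'); r'  end
        | Diff _ =>
            (* Updating a stale version: copy that version out into
               a fresh flat array. *)
            let fun len r = case !r of
                                Flat a          => Array.length a
                              | Diff (_, _, r') => len r'
                val a = Array.tabulate (len r, fn j => get (r, j))
            in  Array.update (a, i, x); ref (Flat a)  end

Access to the newest version costs one extra indirection; access to an
old version costs time proportional to the number of updates made since
it was superseded.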
_______________________________________________________________________________

E.Ireland@massey.ac.nz          Evan Ireland, School of Information Sciences,
 +64-6-3569099 x8541          Massey University, Palmerston North, New Zealand.

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (06/27/91)

In article <591sis-b@massey.ac.nz>, news@massey.ac.nz (USENET News System) writes:
> Would anyone else care to comment on why such a simple technique (at
> least in hindsight) appears to have received so little attention?

Possibly it has been regarded as too well known?  The technique was
applied to Prolog in a paper in the Uppsala conference (I've never had
the proceedings for that) in 83? 84? something like that.  There's a
small literature on multi-version data structures.

> Why
> bother with compile-time analysis to detect "single-threadedness" when
> a simple run-time organisation for arrays will suffice?

Because with reverse diffs you still have to keep the reverse diffs around
and maybe garbage collect later; with compile-time analysis you should be
able to generate exactly the same code you'd have had in an imperative
language.  (See Trilogy.)
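
Take, for instance, a perfectly single-threaded loop over the make/set
sketched in the original posting (a made-up example):

    fun fill (a, i, n) =
      if i = n then a
      else fill (set (a, i, i * i), i + 1, n)

    val squares = fill (make (100, 0), 0, 100)

Every trip round the loop allocates a Diff node and a fresh ref cell
that become garbage as soon as the next update happens; an analysis
that proves the loop single-threaded lets the compiler emit a plain
destructive Array.update with no allocation at all.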
-- 
I agree with Jim Giles about many of the deficiencies of present UNIX.

sra@ecs.soton.ac.uk (Stephen Adams) (06/27/91)

In article <591sis-b@massey.ac.nz> news@massey.ac.nz (USENET News System) writes:

 > ...
 > who) posted two messages on "reverse diff" arrays, one of the messages
 > including ML source code which I assume implemented these arrays using
 > references.
 > ...
 > Would anyone else care to comment on why such a simple technique (at
 > least in hindsight) appears to have received so little attention?  Why
 > bother with compile-time analysis to detect "single-threadedness" when
 > a simple run-time organisation for arrays will suffice?

I find there are many things which seem simple in hindsight.
Here are my opinions on why so many people may not have stumbled
across reverse diffs or enthused about them before.

 0.  As a data structure tailored to some specific
     requirements, they are great.

 1.  Unless they are built in, I can't get the benefits of
     reverse diffs in *pure* functional languages.  I would
     like to use them in, say, Hope and Miranda, but I can't.

 2.  Because of (1), my vision is limited by the functional
     programming paradigm and I am unlikely to think of a
     similar solution (i.e. one that uses mutable data
     structures and requires a proof that the abstract data
     type is still referentially transparent).

 3.  Reverse diffs can be really bad if the data structure
     is not quite single threaded.  If I make a zillion
     updates to an array which just happens to have another
     copy, I will wonder where all my memory is going and
     why it takes so long to look up anything in the
     original copy (see the sketch after this list).  They
     are not a general solution, so they should not be used
     by default.  And if I am to do analysis to detect this,
     I might as well go the whole hog and do the thing
     properly.

 4.  As always, anything that you do at run time has a cost.
     An access to the most recent version of the reverse
     diff array still costs 2-3 times a normal array
     reference.

 5.  Reverse diff arrays are a different type to my ordinary
     arrays.  I have to recode my array library.
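
To make (3) concrete, here is the sort of worst case I mean, written
against the make/get/set sketched in the original posting (an
artificial example, not typical use):

    val original = make (1000, 0)

    (* A zillion updates through the latest handle, never touching
       index 0... *)
    fun hammer (a, 0) = a
      | hammer (a, k) = hammer (set (a, 1 + k mod 999, k), k - 1)
    val latest = hammer (original, 1000000)

    (* ...but `original' is still live, so every one of the million
       Diff nodes is still reachable from it, and this read has to
       walk the whole chain before it finds the flat array. *)
    val slow = get (original, 0)

None of those Diff nodes can be collected while `original' is live,
which is where all my memory went, and why looking anything up in the
original copy takes so long.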
--
Stephen Adams                        S.R.Adams@ecs.soton.ac.uk
Computer Science
University of Southampton
Southampton SO9 5NH, UK

dlester@cs.man.ac.uk (David Lester) (06/28/91)

In article <SRA.91Jun27101445@pessoa.ecs.soton.ac.uk> sra@ecs.soton.ac.uk
(Stephen Adams) writes:
> 3.  Reverse diffs can be really bad if the data structure
>     is not quite single threaded.  If I make a zillion
>     updates to an array which just happens to have another
>     copy I will wonder where all my memory is going and why
>     it takes so long to look up anything in the original
>     copy.

One might consider this behaviour to be more acceptable than the
equivalent case in an imperative paradigm!

--

David Lester, Manchester University, UK.

hudak-paul@CS.YALE.EDU (Paul Hudak) (06/28/91)

As someone already pointed out, the "reverse diff arrays" are really
an old idea.  Besides the Prolog paper, Adrienne Bloss and I described
the technique (we called it "trailers") in our 1985 POPL paper [1].
Since then Adrienne implemented several versions of array updating,
and came up with some interesting results.  For example, besides the
obvious fact that "multiply-threaded arrays" don't work well with
trailers, she found that in certain cases trailers performed worse
than outright copying!  See her FPCA paper [2] or better yet her
thesis [3] for more details.

  -Paul


[1] @inproceedings{hudak85aggregate
    ,author={Hudak, P. and Bloss, A.}
    ,title={The aggregate update problem in functional programming systems}
    ,booktitle={12th ACM Symposium on Principles of Programming Languages}
    ,organization={ACM}
    ,year=1985
    ,pages={300--314}
    }

[2] @inproceedings{bloss89update
    ,author={Bloss, A.}
    ,title={Update Analysis and the Efficient Implementation of
            Functional Aggregates}
    ,booktitle={Proceedings of 4th Int'l Conference on Functional
                Programming Languages and Computer Architecture}
    ,organization={ACM, IFIP}
    ,year=1989
    ,pages={26--38}
    }

[3] @phdthesis{bloss88path
    ,author={Bloss, A.}
    ,title={Path Analysis: Using Order-of-Evaluation Information to
            Optimize Lazy Functional Languages}
    ,school={Yale University, Department of Computer Science}
    ,year=1988
    }