Newsgroups: comp.lang.apl
Path: watmath!watserv1!utgpu!cs.utexas.edu!wupost!darwin.sura.net!haven.umd.edu!socrates!socrates!rockwell
From: rockwell@socrates.umd.edu (Raul Deluth Miller-Rockwell)
Subject: Re: APL execution efficiency revisited
In-Reply-To: sam@csi.jpl.nasa.gov's message of Mon, 23 Mar 1992 18:55:58 GMT
Message-ID: <ROCKWELL.92Mar23224842@socrates.umd.edu>
Sender: rockwell@socrates.umd.edu (Raul Deluth Miller-Rockwell)
Organization: Traveller
References: <920322073241_70530.1226_CHC87-1@CompuServe.COM>
	<1992Mar23.185558.2647@csi.jpl.nasa.gov>
Date: Tue, 24 Mar 1992 03:48:42 GMT
Lines: 33

Andrew Mullhaupt:
   |> No vendor has yet (so far as I know) optimized
   |>     first plus.times / (vector of matrices)

I'm not quite clear on what's supposed to be optimized here.

As I understand it,   plus.times / (vector of matrices)  will yield a
"scalar representation of an array", and   first  will convert that
into a flat array.
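To make sure I'm reading the expression the same way you are: here is a rough
Python sketch (not APL, obviously) of what I take
`plus.times / (vector of matrices)` to be doing, namely folding the
matrix inner product across a vector whose items are matrices.

```python
from functools import reduce

def matmul(a, b):
    # plus.times for one pair: plain nested-list matrix product
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*b)]
            for row in a]

# a "vector of matrices": three 2x2 matrices
ms = [[[1, 2], [3, 4]],
      [[5, 6], [7, 8]],
      [[9, 10], [11, 12]]]

# the reduction: fold matrix product across the vector.
# (APL's reduce associates right-to-left; matrix product is
# associative, so the result is the same, though the grouping
# can matter for the cost with non-square matrices.)
product = reduce(matmul, ms)
```

The result of the fold is the single matrix `product`, which is where
`first` comes in: the reduction yields a scalar (enclosed) item, and
`first` discloses it back to a flat array.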

Presumably, if you knew something about the structure of the matrices,
you could perform some kind of strength reduction on the algorithm
[use a less powerful algorithm which requires fewer CPU resources].
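As a concrete (and hypothetical) case of such a strength reduction:
suppose you somehow knew every matrix in the vector was diagonal.
Then you could carry only the diagonals and replace each O(n^3)
dense product with an O(n) elementwise multiply. A Python sketch:

```python
from functools import reduce

def diag_matmul(d1, d2):
    # product of two diagonal matrices, each stored as its diagonal:
    # an elementwise multiply, O(n) per pair instead of O(n^3)
    return [x * y for x, y in zip(d1, d2)]

# three diagonal 2x2 matrices, represented by their diagonals
diags = [[2, 3], [4, 5], [6, 7]]

# the same plus.times reduction, in strength-reduced form;
# `reduced` is the diagonal of the product matrix
reduced = reduce(diag_matmul, diags)
```

The point being that this substitution is only valid because of a
structural fact about the data, which the interpreter has no general
way of discovering on its own.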

And Mr. Mullhaupt's criticism is that there are no generally useful
tools to analyze the structure of the code which generates that
vector of matrices, so that this sort of strength reduction cannot
be performed automatically?

Do I have the idea straight, so far?

If so, well, I agree that we need more tools along the lines of
symbolic manipulation of expressions and programs.  But until that
point, unless you want to work on designing such things [specifying,
coding, whatever], it seems that the most fruitful approach is in
picking "the right algorithm".

Human design effort is probably the most powerful and general
strength-reduction tool available.  Which is not to say that it can't
be reduced in strength as well.  Or something like that...

-- 
Raul Deluth Miller-Rockwell                   <rockwell@socrates.umd.edu>
