Newsgroups: comp.lang.apl
Path: watmath!watserv1!utgpu!news-server.csri.toronto.edu!torsqnt!jtsv16!itcyyz!yrloc!rbe
From: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Subject: Re: APL execution efficiency revisited
Message-ID: <1992Mar26.163242.5298@yrloc.ipsa.reuter.COM>
Reply-To: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Organization: Snake Island Research Inc, Toronto
References: <920322073241_70530.1226_CHC87-1@CompuServe.COM> <1992Mar23.185558.2647@csi.jpl.nasa.gov> <ROCKWELL.92Mar23224842@socrates.umd.edu> <756@kepler1.rentec.com>
Date: Thu, 26 Mar 92 16:32:42 GMT

In article <756@kepler1.rentec.com> andrew@rentec.com (Andrew Mullhaupt) writes:
>Optimizing the calculation of a matrix product is a classical problem
>in computer science. The idea is that some of the intermediate products
>are much smaller than others, depending on the sequence of shapes.
>In order to 'efficiently' compute the product, you usually have to 
>solve a dynamic program whose inputs are these shapes, and then do
>the matrix arithmetic.
>
>The Matrix Chain Product problem is for example, an exercise in Horowitz
>and Sahni's _Fundamentals of Computer Algorithms_ (p.242 - 243) and
>in Sedgewick's _Algorithms in C_ pp. 598ff. Sedgewick has an example
>where left-to-right order uses 6024 multiplications and right-to-left
>order uses 274,200 multiplications. (I wonder if he had APL in mind...)
>
>Recall that when I raised this subject, the point was that it can be
>difficult to avoid putting inner/outer products in a bad order when
>writing code. This is a classical, well understood issue in computer
>science. Either you're going to solve the dynamic program for every
>matrix chain or you're going to accept a _lot_ of excess calculation.
>
>Precisely. Usually all that is needed is to know the shapes, then
>you can decide what order to do the multiplications in. Of course
>the interesting case is when the shapes are changing at run time...
>
>
Note that APL is ideally set up to perform the operations in any order
ASSUMING YOU DON'T CARE ABOUT PRECISION LOSS: the list of arrays and
their shapes is immediately available at run time, whereas it can be
lost in a swirl of DO-loops and other junk in other languages.
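The dynamic program Mullhaupt refers to is the classic O(n^3) matrix chain
recurrence. A minimal Python sketch (the shapes below are made-up examples,
not Sedgewick's actual figures) comparing left-to-right, right-to-left, and
optimal evaluation cost:

```python
def chain_cost_ltr(dims):
    """Scalar multiplications for strict left-to-right evaluation.
    dims[i] x dims[i+1] is the shape of matrix i."""
    cost, rows = 0, dims[0]
    for k in range(1, len(dims) - 1):
        cost += rows * dims[k] * dims[k + 1]
    return cost

def chain_cost_rtl(dims):
    """Scalar multiplications for strict right-to-left evaluation."""
    cost, cols = 0, dims[-1]
    for k in range(len(dims) - 2, 0, -1):
        cost += dims[k - 1] * dims[k] * cols
    return cost

def chain_cost_optimal(dims):
    """Classic dynamic program: m[i][j] is the cheapest way to
    multiply matrices i..j, minimized over every split point k."""
    n = len(dims) - 1                      # number of matrices
    m = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return m[0][n - 1]

# A 100x1 times B 1x100 times C 100x1: right-to-left is 100x cheaper.
print(chain_cost_ltr([100, 1, 100, 1]))      # -> 20000
print(chain_cost_optimal([100, 1, 100, 1]))  # -> 200
```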

IN SHARP APL, we had a whole bunch of different ways to do matrix
product. The appropriate one was picked at run time based on
matrix shape (fat vs. skinny), available workspace, the two functions
involved in the product, and the two data types involved.
It's no big deal to include a bit more code to handle the reduction
across a bunch of arrays. Obviously you pay a performance
penalty for doing this, but you DO get the ability to do
the reduction in ANY order, not just left to right or right to left.
Bob





Robert Bernecky      rbe@yrloc.ipsa.reuter.com  bernecky@itrchq.itrc.on.ca 
Snake Island Research Inc  (416) 368-6944   FAX: (416) 360-4694 
18 Fifth Street, Ward's Island
Toronto, Ontario M5J 2B9 
Canada
