Newsgroups: comp.lang.apl
Path: watmath!watserv1!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uunet.ca!rose!tmsoft!itcyyz!yrloc!rbe
From: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Subject: Re: APL execution efficiency revisited
Message-ID: <1992Apr5.180150.41@yrloc.ipsa.reuter.COM>
Reply-To: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Organization: Snake Island Research Inc, Toronto
References: <1992Mar26.163242.5298@yrloc.ipsa.reuter.COM> <771@kepler1.rentec.com> <1992Mar30.203818.15221@yrloc.ipsa.reuter.COM> <783@kepler1.rentec.com>
Date: Sun, 5 Apr 92 18:01:50 GMT

In article <783@kepler1.rentec.com> andrew@rentec.com (Andrew Mullhaupt) writes:
>In article <1992Mar30.203818.15221@yrloc.ipsa.reuter.COM> rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) writes:
>>>Now for the particular case  of +.x / (A B C D ...) you will be able to
>>
>>So, if you hand me the matrices one at a time, the whole question 
>>of ordering is moot. This is like trying to nail Jello to a tree.
>>Can we please stick with one problem at a time?
>Sure, but I originally posed this not as a problem but as an example where
>you cannot avoid doing 'bad' inner and outer products. I _never_ stated
>that you were allowed to assume these matrices were given to you in a vector,
>but some people have continually assumed this. It should also be noted that
>the Matrix Chain Product was not brought up as an example where APL is any worse
>than FORTRAN, but as an example where you have to do bad inner products
>in nearly any language.

The point I am STILL trying to make (and will give up if this iteration
fails...) is that APL already HAS all the information at hand to
determine the ordering of the matrix products that is optimal in the
sense of minimizing the number of scalar operations.

This is regardless of the number of arrays (A B C D ...) being 
multiplied at that instant. 
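For concreteness, the ordering in question can be computed by the
standard dynamic-programming solution to the matrix chain ordering
problem. The sketch below is mine, in Python rather than APL (the
function name and interface are invented for illustration); an
interpreter already holds the dimensions it needs:

```python
# A sketch of the classic matrix-chain-ordering dynamic program.
# Matrix i has shape dims[i] x dims[i+1]; the function returns the
# minimum number of scalar multiplications for the whole chain and a
# split table from which the optimal parenthesization can be recovered.
def matrix_chain_order(dims):
    n = len(dims) - 1                       # number of matrices in the chain
    cost = [[0] * n for _ in range(n)]      # cost[i][j]: best cost for i..j
    split = [[0] * n for _ in range(n)]     # split[i][j]: best split point k
    for length in range(2, n + 1):          # chain lengths 2, 3, ..., n
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float('inf')
            for k in range(i, j):           # try every split (i..k)(k+1..j)
                q = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if q < cost[i][j]:
                    cost[i][j] = q
                    split[i][j] = k
    return cost[0][n - 1], split

# Example: A is 10x100, B is 100x5, C is 5x50.
# (A B) C costs 10*100*5 + 10*5*50 = 7500 scalar multiplications,
# while A (B C) costs 75000, so the optimizer picks the former.
```

The O(n^3) time and O(n^2) space of this algorithm are trivial next to
the cost of the matrix products themselves.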

Furthermore, said optimization can be applied, or not applied, without
ANY changes to the application itself, so that relatively inexperienced
programmers obtain the benefit of such optimizations with no work on
their part.

It is this leveraging of system programmer skill that is valuable: an
APL programmer need not be highly skilled in machine architecture,
cache structure, heavy duty dynamic programming math, etc., to take
advantage of those algorithms.

>   Since I had seen nothing but 'J' in this group for months, I thought
>   I'd take this chance to stir things up by pointing out how, although
>   this intersection idiom is really the best you can do in APL2, it's
>   a piker compared to any compiled language. (Which it is). I pointed
>   out that it's relatively slow and very hard to read. (Which it is).

A. A posting by a single programmer is not necessarily optimal for any
   language.

B. Predicting the speed of an idiom is walking on thin ice: many
   APL (and other) systems detect idioms and produce special-purpose 
   code to handle them efficiently. This is no different from Fortran
   compilers which stare at five lines of DO-loop code and pump out a
   matrix product.
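To make point B concrete, here is a deliberately tiny Python sketch of
what idiom detection amounts to. Everything in it is invented for
illustration; real interpreters match internal token streams, not
source strings:

```python
# A toy sketch of idiom recognition (all names invented): an interpreter
# keeps a table mapping known phrases to fused, special-purpose
# implementations, and consults it before falling back to general
# primitive-by-primitive evaluation.
IDIOMS = {
    # APL phrase "+/A*B" (sum of products) handled as one fused loop,
    # instead of materializing the full product vector A*B first.
    "+/A*B": lambda env: sum(a * b for a, b in zip(env["A"], env["B"])),
}

def evaluate(phrase, env):
    key = phrase.replace(" ", "")
    if key in IDIOMS:                 # recognized idiom: take the fast path
        return IDIOMS[key](env)
    raise NotImplementedError("general evaluation not sketched here")
```

The point is that the cost of a phrase depends on whether the system
recognizes it, which is exactly why benchmarking one interpreter tells
you little about the idiom in general.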

>      c) Some people said 'nobody's forcing you to use APL'. That's not true.
>         When you have to optimize an APL function, you normally have to call
>         it from APL. This has recently been fixed in APL2 but not in most
>         APL's yet. 

Don't make assertions about products about which you are ignorant:
I am not aware of any APL released in the last several years that
lacks call-in/call-out capabilities.


Robert Bernecky      rbe@yrloc.ipsa.reuter.com  bernecky@itrchq.itrc.on.ca 
Snake Island Research Inc  (416) 368-6944   FAX: (416) 360-4694 
18 Fifth Street, Ward's Island
Toronto, Ontario M5J 2B9 
Canada
