Newsgroups: comp.lang.apl
Path: watmath!watserv1!70530.1226@compuserve.com
From: Mike Kent <70530.1226@CompuServe.COM>
Subject: APL execution efficiency
Message-ID: <920324052537_70530.1226_CHC142-1@CompuServe.COM>
Sender: root@watserv1.waterloo.edu (Operator)
Organization: University of Waterloo
Date: Tue, 24 Mar 1992 05:25:37 GMT
Lines: 49

In article <752@kepler1.rentec.com>, andrew@rentec.com (Andrew Mullhaupt)
writes [concerning optimizing sequential/iterated multiplication of
matrices]: 

 > Two of the cases where this optimization is likely to be practiced are
 > where not all the matrices can be held in memory at the same time, or
 > where they are being sequentially computed in control bound code.

In the current (or just around the corner) version 2 of APL2 (mainframe),
IBM has done something about the not-enough-memory problem by introducing
external variables: they reside on file, but this is reasonably
transparent.  That is, { V <- V, enclose M } appends to the end of the
file, { V[22] <- enclose M } updates the file in place, and { 4 pick V }
reads from the file.  I don't know about efficiency, since I lost access
to mainframe APL2 in early January when I changed jobs (but I suspect the
worst).
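The flavor of a file-backed variable can be sketched outside APL; here is a
hypothetical illustration using NumPy's memmap (my own example, not IBM's
mechanism -- note it covers update-in-place and read, but not appending,
which would require growing the file):

```python
import numpy as np

# Hypothetical stand-in for an APL2 external variable: an array that
# resides on file but is read and updated through ordinary indexing.
v = np.memmap("external_var.dat", dtype=np.float64,
              mode="w+", shape=(100, 4, 4))

m = np.eye(4)          # some 4x4 matrix to store
v[22] = m              # like  V[22] <- enclose M : updates the file
x = np.array(v[4])     # like  4 pick V           : reads from the file
v.flush()              # force pending writes out to disk
```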

 > You spend all your time up against problems where the obvious approach
 > is not OK.

True, but hardly unique to APL.  At least in APL you HAVE the time to
optimize where it counts.  And if you can't find an APL optimization
which gives acceptable performance, you can call a compiled routine,
either from a library or something written specially for the application
in (for instance) FORTRAN.  I would think that for scientific computation
this would be the preferred approach; there's a ton of FORTRAN library
code for doing just about any kind of heavy-duty number crunching, so use
it to attack the bottlenecks, using APL for the 80% of the code for which
APL is sufficient.  That way, you get good execution efficiency, and
_most_ of the fast development benefits as well.  With varying degrees of
convenience, APL2, APL*PLUS, and Sharp APL can all call external routines,
so I don't see this as a serious problem.
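The hybrid approach is easy to sketch in any interpreted language; here is
a Python example of the same idea (names and sizes my own), where the
control logic stays interpreted and the inner-loop crunching is offloaded
to compiled, BLAS/LAPACK-backed library code:

```python
import numpy as np

def chain_product(mats):
    """Multiply a sequence of matrices: the loop (control logic) runs
    interpreted, but each @ dispatches to a compiled library routine,
    which is where nearly all the time goes."""
    result = mats[0]
    for m in mats[1:]:
        result = result @ m   # compiled routine does the heavy lifting
    return result

rng = np.random.default_rng(0)
mats = [rng.standard_normal((50, 50)) for _ in range(10)]
p = chain_product(mats)
```

The interpretive overhead of the ten-iteration loop is negligible next to
the 50 x 50 multiplies, which is exactly the 80/20 split described above.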

 > "Ordinarily" has a great deal to do with what you do.

True again.  "Ordinarily" I do straight commercial applications.  It
happens that options pricing leads to interesting problems sometimes,
for instance, when there is no closed form for the arbitrage-free price
for an option and the Markov process which models the changes in forward
prices [ it has a quite sparse transition matrix ] is best studied by
Monte Carlo simulation and some (simple) statistical analysis.  Here, the
"obvious" code is either a killer on both space and time (taking a power
limit of the transition matrix) or on time (nested loop to randomly walk
the phase space).  But this sparse problem is not too hard to convert to a
"dense" problem (using roll, indexing, and +\), and then it's only a mild
space problem, easily overcome by looping on blocks of 100 or 250 (or
whatever []WA allows) "parallel" iterations at a time.  [The overhead to
interpret "+\" is swamped by the computation on e.g. a 100 x 1500 matrix.
Likewise the time to do memory management.]
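The "dense" trick above can be sketched in NumPy rather than APL (names
and the 3-state chain are my own, purely illustrative): +\ becomes a
cumulative sum along each row of the transition matrix, roll becomes a
block of uniform random numbers, and indexing picks each path's next
state, so a whole block of walks advances one step per interpreted
operation:

```python
import numpy as np

def walk_block(P, start, steps, n_paths, seed=0):
    """Simulate n_paths parallel random walks of the Markov chain
    with transition matrix P, all starting in state `start`."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(P, axis=1)            # APL's +\ along each row
    states = np.full(n_paths, start)
    for _ in range(steps):
        u = rng.random(n_paths)           # one "roll" per path
        # For each path, the next state is the count of cumulative
        # probabilities its uniform draw exceeds (inverse-CDF sampling).
        states = (u[:, None] > cdf[states]).sum(axis=1)
    return states

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.1, 0.9]])
final = walk_block(P, start=0, steps=1500, n_paths=100)
```

Looping over blocks of 100 or 250 paths at a time, as described, keeps the
intermediate (n_paths x states) arrays within whatever workspace is
available, while each vectorized step is big enough to swamp the
interpretive overhead.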

