Newsgroups: comp.lang.apl
Path: watmath!watserv1!70530.1226@compuserve.com
From: Mike Kent <70530.1226@CompuServe.COM>
Subject: Dead horse matrix reduction
Message-ID: <920722044324_70530.1226_CHC117-1@CompuServe.COM>
Sender: root@watserv1.waterloo.edu
Organization: University of Waterloo
Date: Wed, 22 Jul 1992 04:43:24 GMT

Enough already, please!!!  It's pretty clear that sparse matrix bandwidth
reduction is a problem which, in the informed opinion of the only person
hereabouts with any actual expertise in the area, is not a good candidate
problem for an APL solution.

OTOH, this is just math, _not_ science, and _not_ engineering (i.e.,
it's just something you need to do *in the process of modelling something
REALLY interesting*).  And there are interesting things you _can_ do
efficiently, or efficiently enough, in APL.

See, for instance, the "Diamonds in the Sky" paper in APL 89 proceedings
-- a stellar-interior model for white dwarf stars.  Also Scott
Kimbrough's kinematics modeling (with graphics) in the APL 89 and APL 90
Software Exchange packages. 

Or if math turns you on, look at Charles Sims's computational work
connected with the enumeration of the sporadic finite simple groups (done
12-15 years ago with APLSV in a 24K workspace ... under those
constraints, he was able to build _and manipulate_ representations of the
two largest -- Fischer's Monster, and the _really_ big one).  The grinding
calculations were done in FORTRAN for speed (several months of full
weekends of 308x time), but the FORTRAN was just "hand compiled" APL.

Or if numerical techniques (of more general interest than bandwidth
reduction) intrigue you, check out Richard Neidinger's papers on
automatic differentiation, zero-finding by arbitrary-order Newton-Raphson
techniques, and expressing DEs as recurrence relations; see the APL 89
and APL 92 proceedings [and the citations in those papers].
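To give a flavor of those ideas (my sketch in Python, not Neidinger's APL;
the class names and the order-2 truncation are my own choices): carry a
value together with its first two derivatives through the arithmetic, and
a third-order "Newton-like" iteration (Halley's method) falls out for free.

```python
# Sketch only: automatic differentiation via truncated Taylor arithmetic,
# feeding a third-order (Halley) root-finding iteration.

class Taylor:
    """A value plus its first and second derivatives at a point."""
    def __init__(self, f, d1=0.0, d2=0.0):
        self.f, self.d1, self.d2 = f, d1, d2

    def _coerce(self, o):
        return o if isinstance(o, Taylor) else Taylor(float(o))

    def __add__(self, o):
        o = self._coerce(o)
        return Taylor(self.f + o.f, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__

    def __sub__(self, o):
        o = self._coerce(o)
        return Taylor(self.f - o.f, self.d1 - o.d1, self.d2 - o.d2)

    def __rsub__(self, o):
        return self._coerce(o) - self

    def __mul__(self, o):
        o = self._coerce(o)
        # Leibniz rule: (uv)'' = u''v + 2u'v' + uv''
        return Taylor(self.f * o.f,
                      self.f * o.d1 + self.d1 * o.f,
                      self.f * o.d2 + 2 * self.d1 * o.d1 + self.d2 * o.f)
    __rmul__ = __mul__

def halley(f, x, steps=6):
    """Cubically convergent root finding; f, f', f'' come from AD."""
    for _ in range(steps):
        t = f(Taylor(x, 1.0, 0.0))   # seed: dx/dx = 1, d2x/dx2 = 0
        v, d1, d2 = t.f, t.d1, t.d2
        x = x - 2.0 * v * d1 / (2.0 * d1 * d1 - v * d2)
    return x

root = halley(lambda x: x * x - 2, 1.0)
print(root)   # converges to sqrt(2), ~1.41421356...
```

The same Taylor-coefficient arrays, pushed to higher order, are what make
arbitrary-order iterations and DE-as-recurrence tricks natural in an
array language.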

Anybody who's spent much time doing serious computational work with APL
knows that there are classes of problems for which APL is not well suited
-- anything which involves a lot of data which has to be accessed an item
at a time (which sparse representations of matrices, for instance, force
upon you).  (That's why IBM invented []NA, and IPSA invented AP (RBE, pls
fill in AP #), and STSC invented []CALL and []XP, etc.)  If you use APL
for what it's good for, and use one of these interfaces to hand off the
other 10% of the problem to some FORTRAN or C or assembler code, hopefully
from a package like LINPACK or ESSL rather than purpose-built, you are
using your resources intelligently:  the human isn't trying to get several
thousand lines of scalar-oriented code working (and working right), and
the machine isn't encountering interpretive and space-management overhead
while doing scalar-oriented computations.

Oh, and BTW if you want an APL compiler you might contact STSC --
they sell one which works within their mainframe product -- or
Interprocess Systems -- they have the Yorktown APL Translator (APL to
FORTRAN, vector FORTRAN if you like), which IBM Research did originally
and which is now a nicely packaged product [IS is an IBM Business Partner].
Or if your code has to run on a Cray, drop a note to Bob Bernecky and ask
about ACORN -- APL to Cray C.


