Newsgroups: comp.lang.apl
Path: watmath!watserv2.uwaterloo.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uunet.ca!geac!itcyyz!yrloc!rbe
From: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Subject: Re: SIGNUM of teaching numerical methods
Message-ID: <1992Aug2.181215.3380@yrloc.ipsa.reuter.COM>
Reply-To: rbe@yrloc.ipsa.reuter.COM (Robert Bernecky)
Organization: Snake Island Research Inc, Toronto
References: <1992Jul23.045513.20017@yrloc.ipsa.reuter.COM> <1117@kepler1.rentec.com> <ROCKWELL.92Jul26190755@socrates.umd.edu> <1122@kepler1.rentec.com> <1992Jul30.050314.19813@yrloc.ipsa.reuter.COM> <1992Jul30.185628.6516@csi.jpl.nasa.gov>
Date: Sun, 2 Aug 92 18:12:15 GMT
Lines: 100

In article <1992Jul30.185628.6516@csi.jpl.nasa.gov> sam@csi.jpl.nasa.gov (Sam Sirlin) writes:
>
>In article <1992Jul30.050314.19813@yrloc.ipsa.reuter.COM>, rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) writes:
>(commenting on previous articles)
>
>|> \begin{serious}
>|> a. Calling other languages is a pig, because they don't believe in
>|>    arrays. Once they believe in arrays, then we'll see progress,
>|>    because they'll slow down to the speed of APL.
>
>I don't think lots of people with scientific uses for code will
>tolerate this. If C++ is slow they'll just use f77. 

I was thinking of Fortran 90, not C++. But I have a feeling that
although you're right for some users, that group of users will
shrink as increases in machine speed swamp their "must be ultimately
fast" requirement: By the time they get their special case coded
for machine X, machine Y will be in the field, and on machine Y
boring, mundane code outruns the Special Case Code For Machine X.

Time to solution is key. How we get there changes with time.

>
>|> b. Calling other languages is a pig, because they don't believe in 
>|>    arrays. They believe in pointers to a hunk of undifferentiated
>|>    mainstore, which you Promise To Treat Gently. In particular,
>|>    there are Other Variables, which may (or may not) tell you
>|>    things such as: Rank, Shape, Element Count, etc. 
>|> 
>|>    Of course, data type is Fixed At Compile Time.
>
>And rank and shape as well. But this seems reasonable if it gives a
>real speedup. Tim Budd's APLc allows this (fixed type) as well. 

Fixed type is the big winner in code production. Fixed rank comes
next. Fixed shape is least important. Probably the single biggest
win to be had is the ability to detect scalars of type X.
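A minimal sketch of why the scalar fast path pays off, in Python rather
than any real APL implementation (the function names are hypothetical):
a fully general array "plus" drags shape checks and element loops along
even for 2+2, while a type-tested fast path skips all of it.

```python
def plus_general(a, b):
    # General case: coerce both arguments to lists, extend length-1
    # arguments, check conformance, loop over elements -- the
    # machinery an interpreter pays for on every call.
    xs = a if isinstance(a, list) else [a]
    ys = b if isinstance(b, list) else [b]
    if len(xs) == 1:
        xs = xs * len(ys)
    if len(ys) == 1:
        ys = ys * len(xs)
    if len(xs) != len(ys):
        raise ValueError("length error")
    result = [x + y for x, y in zip(xs, ys)]
    if isinstance(a, list) or isinstance(b, list):
        return result
    return result[0]

def plus(a, b):
    # Fast path: both arguments are scalars of a known fixed type,
    # so skip the array machinery entirely.
    if type(a) is float and type(b) is float:
        return a + b
    return plus_general(a, b)
```

The same shape of check is what a compiler gets to do statically when
type and rank are fixed at compile time.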

>|>    you still need to write a specific "impedance matcher" for 
>|>    each of them, which specifies all the above junk, because
>|>    each Foreign Routine is different. 
>
>This of course seems an insurmountable problem to get to run fast in
>all possible cases, but is this really a requirement? It seems to me
>the interface should be flexible enough that the programmer can make
>it work. For example if you want to pass an array to a fortran
>routine, the programmer must be aware that fortran wants column
>orientation whereas the apl internal storage is probably row oriented.
>But if she adds suitable transposes, possible flattening, and then can
>just hand the fortran routine a pointer to the start of the data
>(after the header) shouldn't this work?

Nope. Well, not all the time. My point was that you can't embed
a Blas6d call in your APL code and expect it to work. You have to
do what you pointed out in the paragraph above, namely:
embed Language-specific (column vs row) and Subroutine-specific
(I want floating point, damnit. None of this wimpy integer stuff
for my subroutine, nosiree...) constructs in your code, and these
constructs:
a. Have NO relevance to the problem being solved, and hence are
   undesirable, as they obscure the algorithm.
b. Are guaranteed to require changes in YOUR application if anything
   ever changes in the subroutine. For example, suppose the
   Blas6d subroutine is suddenly changed from Fortran
   to Fivetran, A New Improved Dialect of C, in which arrays are
   stored in row-major order.
   You now have to go in and change all your calls to that routine.
   Evil.
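To make the "impedance matcher" concrete, here is a toy sketch in
Python. The foreign routine `blas6d_stub` is hypothetical, standing in
for the Blas6d example above: it insists on column-major storage and
floating point, so the caller must write glue that has nothing to do
with the problem being solved.

```python
def to_column_major_floats(matrix):
    """Flatten a row-major list-of-lists into a column-major flat
    list of floats -- the language- and subroutine-specific glue
    that obscures the algorithm."""
    rows, cols = len(matrix), len(matrix[0])
    return [float(matrix[r][c]) for c in range(cols) for r in range(rows)]

def blas6d_stub(flat, rows, cols):
    # Stand-in for the foreign routine: expects column-major
    # doubles, returns the sum of each column.
    return [sum(flat[c * rows:(c + 1) * rows]) for c in range(cols)]

m = [[1, 2, 3],
     [4, 5, 6]]
col_sums = blas6d_stub(to_column_major_floats(m), 2, 3)
# col_sums is [5.0, 7.0, 9.0]
```

If the routine later switches to row-major storage, every call site
carrying this glue has to change, which is exactly the maintenance
problem described above.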
>
>Of course the easiest way to do the interface is to ignore it
>completely, but translate the APL into compileable code (fortran or
>c), and then the programmer can add whatever he likes at whatever cost
>he wants.

At a severe cost in maintainability. Either you write ONCE in APL,
compile it, and NEVER go back, or you have to somehow guess how
to pluck your generated Fortran Patches out of this code, and into
the New Improved generated Fortran. It's the "let's patch the object
deck" problem back in new clothes. Evil. (Lots of evil around today)

>Actually I had an idea for something that might catch on. Matlab is
>very popular here, but suffers from many of the same problems as APL.
>Specifically it's an interpreter, and many people just use simple
>array operations to write code fast, but then the code ends up running
>slow. I think I'll try writing a translator from (simple) Matlab to
>APL (in APL), then voila, using APLc I get a Matlab compiler. Pretty
>twisted huh? Well I could always write the translator in J... Anyway
>I'll have to see about speed tests...

This is a typical approach for compiler types. Think of APL as 
"An Array Intermediate Language", sort of like IF2 in SISAL.
It also can work fairly well.
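As a toy illustration of the translator idea, here is a Python sketch
that maps a trivially simple Matlab-ish expression into APL notation by
token substitution. A real translator needs a parser; this operator
table is hypothetical and nowhere near complete.

```python
# Hypothetical, partial mapping of Matlab elementwise operators
# to APL primitive functions.
OP_MAP = {
    ".*": "×",   # elementwise multiply
    "./": "÷",   # elementwise divide
    "*": "×",
    "/": "÷",
    "+": "+",
    "-": "-",
}

def translate(expr):
    # Token-by-token substitution over a whitespace-split
    # expression; identifiers pass through untouched.
    return " ".join(OP_MAP.get(tok, tok) for tok in expr.split())

print(translate("a + b .* c"))   # a + b × c
```

The point is only that simple array-to-array surface translation is
mechanical once both sides speak arrays; the hard part is semantics,
not syntax.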



Robert Bernecky      rbe@yrloc.ipsa.reuter.com  bernecky@itrchq.itrc.on.ca 
Snake Island Research Inc  (416) 368-6944   FAX: (416) 360-4694 
18 Fifth Street, Ward's Island
Toronto, Ontario M5J 2B9 
Canada
