Newsgroups: comp.lang.apl
Path: watmath!watserv1!torn!utcsri!rpi!uwm.edu!cs.utexas.edu!uunet!haven.umd.edu!socrates!socrates!rockwell
From: rockwell@socrates.umd.edu (Raul Deluth Miller-Rockwell)
Subject: Re: SIGNUM of teaching numerical methods
In-Reply-To: andrew@rentec.com's message of 20 Jul 92 15:41:48 GMT
Message-ID: <ROCKWELL.92Jul21093324@socrates.umd.edu>
Sender: rockwell@socrates.umd.edu (Raul Deluth Miller-Rockwell)
Organization: Traveller
References: <ROCKWELL.92Jul19155653@socrates.umd.edu> <1089@kepler1.rentec.com>
	<ROCKWELL.92Jul20011749@socrates.umd.edu> <1093@kepler1.rentec.com>
Date: Tue, 21 Jul 1992 14:33:24 GMT
Lines: 114

Andrew Mullhaupt:
   [Lots of S code [deleted], which illustrates that intermediates
   are copied.]

   ... Now this kind of trick is the same kind of performance kludge
   that APL forces you into at times, but at least in APL nobody made a
   rule that _prevents_ the interpreter from being smart about copying
   arguments.

   You may ask - why not use reference counts like real human beings?
   Because the 'language standard' is construed as 'the interpreter
   must _copy_ the function arguments to localize them' as opposed to
   'the interpreter can guarantee the locality of function arguments
   by whatever effective means it chooses.' The reason for this seems
   to be some kind of extra safety or something but it's too galling
   to think about clearly.

This sounds like what happens when you confuse the language with the
implementation.  But, if I may submit, the "real answer" is more
likely that no one has dealt with the fact that each of the
sub-arrays needs its own array header, so they're falling back on a
simple mechanism [extraction of the sub-array] which is guaranteed to
give them an array header -- which their implementation of cumsum is
going to expect.

As I understand it, the most recent implementation of J is similar in
this regard.
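
The reference-count alternative mentioned above can be sketched in a few
lines.  This is my own illustrative Python sketch, not anything from S,
J, or 'a': the interpreter hands out aliases in O(1), counts how many
values share a buffer, and pays for a physical copy only when a shared
value is first mutated -- so call-by-value semantics are preserved
without the copy-on-call rule.

```python
class CowArray:
    """Array value with copy-on-write sharing via a reference count."""

    def __init__(self, data):
        self._buf = list(data)   # the shared storage
        self._refs = [1]         # shared refcount cell for this buffer

    def alias(self):
        """What argument passing looks like to the program: O(1), no copy."""
        other = CowArray.__new__(CowArray)
        other._buf = self._buf
        other._refs = self._refs
        self._refs[0] += 1
        return other

    def __getitem__(self, i):
        return self._buf[i]

    def __setitem__(self, i, v):
        if self._refs[0] > 1:            # someone else still sees the old value:
            self._refs[0] -= 1           # detach from the shared buffer...
            self._buf = list(self._buf)  # ...and copy only now
            self._refs = [1]
        self._buf[i] = v

a = CowArray([1, 2, 3])
b = a.alias()      # the "copied" argument -- no data actually moved
b[0] = 99          # the real copy happens here, on first mutation
assert a[0] == 1 and b[0] == 99
```

Either wording of the standard ("must copy" vs. "must guarantee
locality") is satisfied by this scheme; the difference is that only the
second wording permits it.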

   >With a globally optimizing compiler you can get the kinds of
   >serial-machine optimizations you're looking for, even with
   >call-by-value semantics.

   Sounds nice. Except that compiling S is a horrible scare - I think
   the fact that there are a lot of times when you build a string and
   then execute it (just like in APL) means that compiling will not
   apply to all of S. Just like full-APL compilers, you'll spend a lot
   of your time inside what are effectively calls to the old
   interpreter. Even so it might make sense to do it.

I think I agree with you.

   There should be some kind of 'compile on the fly' which would cut
   out the bulk of the fat. The argument is something like this: If
   you spend time compiling each bit of code before you run it, then
   you'll slow down things where there isn't much advantage, but the
   slowdown will only really be noticeable in small operations.
   However, the small operations which occur repeatedly will benefit
   greatly since the compilation cost is amortized over the many
   repetitions. (E.g. you keep around the bits of compiled code you've
   already done and access them by hashing, etc.) Then the picture is
   much improved.

   However, unless you do something about the extra copying of S,
   compiling it will not help - since the copying is being done as
   fast as can be as it is now. In other words the language is
   _broken_ because it requires you to do stuff that you shouldn't
   _have_ to do, and even as fast as it can be done, the cost is
   onerous.

Well, again, if you compile at the global level this goes away.  What
I mean is, if you've got a function f which depends on g, and function
g depends on h, then you include in f the code for g and h.  Note that
this does make your compilation a bit more expensive, and
reassignments to g and h blow away your compiled code -- if this gets
to be painful you can add something to the language to isolate such
functions.  This has a lot of benefits -- for example, you don't need
to build array headers just to pass them to some function -- you just
short circuit the process and provide the appropriate values straight.
Also, eval applied to some known text can be made quite a bit faster
than a general call.

Note that you're going to be compiling some of these functions many
times, but each instance is going to be particularly optimized for the
code which is using it.  Also note that for calls to functions which
are themselves recursive you're going to have to compile that
recursive function twice [once for the initial entry point, again for
the recursive case -- which is analogous to, but not the same as, the
base case and recursive case used in defining the function].

   >And, J seems to be doing a lot of the right things for this kind
   >of an implementation to be viable.

   I have mixed emotions about that. If J gets _fast_ enough, I
   suppose I'll have to take it seriously. But having been 'up close
   and personal' with it, I really wish that Morgan Stanley would
   market Arthur Whitney's 'a'.

Having never seen 'a', and having no way to access it, I really can't
agree or disagree on this one.  Keeping a language secret does have a
certain advantage -- no one else is going to use it.  But, it does
have a disadvantage -- no one else is going to use it.  You makes your
choices...

   And one thing you can say for 'a' is that it is so good because it
   does the things it does very well, but it isn't restricted to a
   small world-view. It allows for full use of the operating system
   and is very effectively extensible.

Ah.. one specific operating system or some variety of operating
systems?

As an aside, note that J is very effectively extensible, too.

   If you want to see where APL _could have_ gone, take a look at
   where it's going at Morgan Stanley.

I worked for Morgan Stanley a long time ago [before I'd learned APL],
but I was just a temp -- I moved boxes, did some secretarial and
filing work.  I think that's about as close as I've gotten to taking a
look at where APL is going at Morgan Stanley -- which is to say, not
close at all.

-- 
Raul Deluth Miller-Rockwell                   <rockwell@socrates.umd.edu>
