This article describes EGREGION, a simple, easy-to-use tool that assesses branch coverage in APL functions. The tool comprises a pair of APL functions that provide detailed and summary information about code coverage at the APL function level. The tool provides a line-by-line analysis of statement coverage, labels not branched to, branches never taken, branches always taken, transfer of control via non-branches, and branches to non-labeled lines. Although we do not consider this groundbreaking work, we do believe that the coverage tool will be of value to APL programmers who are engaged in the creation of large, reliable applications.
We also present the results of using EGREGION to validate a Y2K upgrade of a database application.
Since Karmarkar's paper of 1984, interior point methods for linear programming have developed sufficiently to challenge the traditional simplex method, especially on large problems. Moreover, interior point methods generalize to certain types of nonlinear problems that cannot be handled by the simplex method. A search of the internet will reveal a number of software packages that implement interior point methods for linear and nonlinear problems. The programming languages used include FORTRAN, C, PASCAL, and (most frequently) MATLAB. Examination of these codes shows that they involve extensive manipulation of arrays of numbers. Some MATLAB codes use arrays of dimension greater than two and arrays whose elements are arrays. Higher dimensional and nested arrays are a recent addition to MATLAB (added in Version 5, the current version) but have been available in APL and J for some time. Why is the array processing language MATLAB such a popular choice rather than APL or J? Is this an area where APL and J can make a contribution? To gain insight into these questions, we compare interior point method code written in MATLAB to code written in APL and J.
The Collateral Analysis System (CAS) is an analytic database written in APL which combines the flexibility of a database with the power of a sophisticated analysis package. Much of the power and flexibility of CAS comes from the ability of users to create their own selection statements, calculations, and summary expressions. CAS expressions are syntactically equivalent to APL expressions; however, they are designed to be used by those with no knowledge of APL. Expressions consist of three types of objects: data, functions, and punctuation. Data contain information, functions use that information to create new information, and punctuation controls the order of execution. Data can be either a constant such as 360, 'Smith', or 19971231, or the name of a field in the database such as ORIGTERM, NAME or ORIGDATE. Fields can have a datatype of character, numeric, date or boolean. A character field is represented internally as a character matrix; all other fields are represented internally as numeric vectors. CAS has a rich set of selection, financial, statistical, text, and date functions. Functions are of two basic types: item-by-item or summary. An item-by-item function applies separately to each item in one or more fields. A summary function produces a single result from a field. The paper will demonstrate how CAS allows users with little or no APL knowledge to select data, perform sophisticated calculations and generate complex reports using CAS expressions.
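The item-by-item/summary distinction above can be sketched in a few lines. This is a hypothetical Python illustration of the evaluation model only; the helper names (`item_by_item`, `summary`) and the sample field values are invented here, not taken from CAS itself.

```python
# Hypothetical sketch of CAS-style field evaluation. Numeric fields are
# represented as plain vectors (lists); names and values are illustrative.
fields = {
    "ORIGTERM": [360, 180, 360, 240],                      # numeric field
    "ORIGDATE": [19971231, 19980115, 19971231, 19980201],  # date field
}

def item_by_item(fn, *cols):
    """Apply fn separately to each item of one or more fields."""
    return [fn(*items) for items in zip(*cols)]

def summary(fn, col):
    """Produce a single result from a whole field."""
    return fn(col)

# Item-by-item: convert every ORIGTERM value from months to years.
years = item_by_item(lambda t: t / 12, fields["ORIGTERM"])

# Summary: a single aggregate over the whole field.
total_term = summary(sum, fields["ORIGTERM"])

print(years)       # [30.0, 15.0, 30.0, 20.0]
print(total_term)  # 1140
```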
Some recent works (for example, C. Rose and M. D. Smith in Mathematica for Education and Research no. 3/97) have highlighted the possibility of generating "good quality" pseudo-random numbers with Mathematica 3, producing them according to prescribed discrete probability distributions. The authors compare the characteristics and performance of several generators, first using Mathematica 3 and then the C++ language. They then consider the problem of choosing pseudo-random number generators, with particular regard to their possible influence on applications in finance and actuarial science. Finally, some examples on modern topics in mathematical finance are worked out using stochastic simulation, to bring out similarities and discrepancies among the generators applied.
Most of the existing high-level array processing languages support a fixed set of pre-defined array operations and a few high-order functions for constructing new array operations from old ones. In this paper, we discuss a more general approach made feasible by Sac (for Single Assignment C), a functional variant of C.
SAC provides a meta-level language construct called the With-loop, which may be considered a sophisticated variant of the forall-loops in HPF or of array comprehensions in functional languages. It allows the element-wise specification of arbitrary high-level array operations, even those applicable to arrays of any dimensionality. As a consequence, any set of pre-defined high-level array operations can be specified by means of with-loops and provided as a library. This not only improves specificational flexibility, but also simplifies the compilation process.
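The idea of specifying an array operation element-wise, independent of dimensionality, can be sketched outside SAC. The following is a rough Python analogue of the with-loop concept, not SAC syntax: an array of any rank is defined by a shape and a function from index vectors to values.

```python
# A sketch of with-loop-style element-wise specification (illustrative
# Python, not SAC): build a nested-list "array" of any shape, where the
# element at index vector iv is body(iv).
def with_loop(shape, body):
    def build(prefix, dims):
        if not dims:
            return body(tuple(prefix))
        return [build(prefix + [i], dims[1:]) for i in range(dims[0])]
    return build([], list(shape))

def at(arr, iv):
    """Index a nested-list array by an index vector."""
    for i in iv:
        arr = arr[i]
    return arr

# Element-wise addition, defined once for arrays of any dimensionality:
def add(a, b, shape):
    return with_loop(shape, lambda iv: at(a, iv) + at(b, iv))

a = with_loop((2, 3), lambda iv: iv[0] * 3 + iv[1])  # [[0,1,2],[3,4,5]]
print(add(a, a, (2, 3)))                             # [[0,2,4],[6,8,10]]
```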
By means of a few examples it is shown that the high-level array operations that are typically available in array processing languages such as Apl or Fortran90 can be easily specified as with-loops in Sac. Furthermore, we briefly outline the most important optimization techniques used in the current Sac compiler for achieving efficiently executable code.
In order to substantiate the performance claims, the paper finally presents a performance comparison between a high-level specification for the multigrid relaxation kernel from the NAS benchmarks in Sac and low-level specifications in Sisal and in Fortran77. It shows that the Sac implementation, despite its higher level of abstraction, is competitive with the others both in terms of program runtimes and memory consumption.
Languages with array-based semantics, such as APL and J, frequently lend themselves to non-iterative solutions of problems whose traditional solutions are explicitly iterative. Although these non-iterative solutions are often terse, enlightening, and obviously correct, they are often impractical for realistic use, because their computational complexity, in both time and space, results in unacceptably high execution time and memory requirements. These complexity problems can sometimes be alleviated without recourse to explicit iteration or recursion, by writing subtle programs that exploit classical methods such as sorting or reduction. Unfortunately, these methods generally produce unsatisfying programs, in the sense that their didactic nature and their feeling of mathematical correctness are lost. Furthermore, they may introduce undesired errors, due to factors such as unforeseen loss of precision or edge conditions. Finally, these subtle algorithms remain unsatisfying from the computer scientist's point of view, because their computational complexity remains higher than that of an obvious algorithm expressed in a scalar-oriented language. This paper describes several techniques used in the APEX APL compiler to reduce the computational complexity of APL expressions to that of scalar-oriented algorithms, while preserving didactic clarity of expression. Specifically, we address the following types of complexity and describe how to reduce them:
- Reducing certain upgrades and sorts to linear time
- Reducing common array search expressions to linear time
- Eliminating intermediate result arrays
- Simplifying functional compositions through algebraic analysis
Examples are chosen from the APL literature to demonstrate these techniques.
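One of the reductions listed above, turning a common array search into linear time, can be illustrated in a scalar language. The sketch below is only an illustration of the general technique; it is not APEX's actual generated code.

```python
# Sketch: reducing an array search to linear time. A naive APL-style
# "index-of" (dyadic iota) compares every needle against every haystack
# item, O(n*m); a compiler can substitute a hash-table build plus lookups,
# O(n+m). (Illustrative only, not APEX output.)

def index_of_naive(haystack, needles):
    n = len(haystack)
    # For each needle, scan the whole haystack: O(n*m).
    return [next((i for i, h in enumerate(haystack) if h == x), n)
            for x in needles]

def index_of_linear(haystack, needles):
    first = {}
    for i, h in enumerate(haystack):     # one pass over the haystack
        first.setdefault(h, i)           # keep the FIRST index, like iota
    n = len(haystack)
    return [first.get(x, n) for x in needles]  # one pass over the needles

hay = [5, 2, 9, 2, 7]
ndl = [2, 7, 4]
print(index_of_naive(hay, ndl))   # [1, 4, 5]
print(index_of_linear(hay, ndl))  # [1, 4, 5]
```

Both versions agree with APL's convention of returning the array length for items not found.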
Twenty five years ago Martin Gardner wrote an article in "Mathematical Games" of the Scientific American with title Fantastic patterns traced by programmed "worms". Later on these worms called "turtles". These turtle graphics are well known from the LOGO-system.
These graphics are also vector graphics not made by setting absolute coordinates but settings relative increments of distances and angles. With tiny APL2 idioms I have developed many 2D-graphics.It has happened in short time, in normal manner and as dialogue form. My top is "one-liner as eye liner".
Looking up a character string in a previously compiled dictionary of strings is a frequent problem in computer applications. In classic APL, the solution is to create a 2-dimensional character matrix and use a MATIOTA routine with an interface similar to that of dyadic iota. This is perfectly functional, but quite slow. If the core of MATIOTA is coded in assembly, or as a compiled and optimized routine, its performance becomes reasonable for moderately sized dictionaries. If the assembly implementation supports hashing, as Jim Weigang's code does, large-case performance is further improved, but the hash table must be re-built for each search. The obvious solution in a nested-array APL is to create a vector of vectors, enclose the test string, and use dyadic iota directly. Again, performance is reasonable in the small to moderate size case. However, both of these are essentially brute-force solutions, which examine each string in the dictionary until a match is found or the dictionary is exhausted. Thus for a dictionary of W words each of length L (or of average length L), these methods have complexity O(W x L). A hashing solution has O(log W), but the constants hidden by O-notation may be large, particularly if the search string is long. If the dictionary is sorted, a binary search can be performed: the selected string is compared to the middle element of the dictionary, and the appropriate half of the dictionary is then considered. The process is repeated until a match is found, or the remaining section of dictionary to search is empty, proving that the string sought is not present. This has complexity O(L x log W), which is good for large dictionaries of short words, but not for long strings, because each comparison potentially examines the entire string. There are other data structures available, such as digital tries (N-way trees); however, these generally require extensive data storage, which is often impractical for real applications.
A ternary tree is a data structure where each element (node) has up to three sub-elements (sub-trees). It can be diagrammed as a tree with a left, middle and right branch at each step. For the problem at hand, each node stores a single character of the string, with the sub-trees storing all those strings whose matching character is less than (left), equal to (middle) or greater than the node's "split" character, respectively. Searches of a ternary tree for a given string have worst-case complexity O(log W + L). Search misses, where the string is not in the dictionary, are typically quick. Additionally, more advanced searches (partial match, near neighbor, etc.) can be easily performed on a ternary tree. A ternary tree can be represented in APL by a nested structure. It can be built and processed with reasonable efficiency by fairly simple APL code, and its storage requirements are not excessive. The speaker will demonstrate APL functions for building, searching, and maintaining a ternary tree of strings. The tree will be implemented as a deeply nested APL vector. Comparisons to other searching methods will be presented.
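The structure described above can be sketched compactly in a scalar language. This is a minimal Python illustration of a ternary search tree; the speaker's APL version uses a deeply nested vector rather than node objects.

```python
# Minimal ternary search tree: each node holds one character and three
# sub-trees (less-than, equal/next-character, greater-than).
class Node:
    __slots__ = ("ch", "lo", "eq", "hi", "end")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def insert(root, word):
    if not word:
        return root
    if root is None:
        root = Node(word[0])
    c = word[0]
    if c < root.ch:
        root.lo = insert(root.lo, word)        # left: smaller character
    elif c > root.ch:
        root.hi = insert(root.hi, word)        # right: larger character
    elif len(word) > 1:
        root.eq = insert(root.eq, word[1:])    # middle: match, next char
    else:
        root.end = True                        # word ends at this node
    return root

def contains(root, word):
    while root is not None and word:
        c = word[0]
        if c < root.ch:
            root = root.lo
        elif c > root.ch:
            root = root.hi
        elif len(word) == 1:
            return root.end
        else:
            root, word = root.eq, word[1:]
    return False

root = None
for w in ["cat", "cap", "dog", "do"]:
    root = insert(root, w)
print(contains(root, "cap"), contains(root, "d"))  # True False
```

A miss such as "d" fails quickly: the search reaches the 'd' node but finds no end-of-word mark, without comparing whole dictionary entries.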
A version of this paper was presented at the March meeting of NY/SIGAPL, a local chapter of ACM.
Different translator writing systems (TWS) have been built in the past, some of which are being sold as products. This paper puts the focus on the special capabilities of APL for the construction of programs of this kind. The TWS takes as input a list of all the symbols and reserved words in the language whose compiler will be generated, together with the production rules of its grammar in Backus normal form. The grammar must be SLR(1). Each rule may have a semantic action associated with it (an APL function), which will be executed by the generated compiler when the rule is reduced, to perform code generation. The TWS uses quad FX to generate all the auxiliary functions in the workspace.
The TWS takes advantage of many APL capabilities, such as the following:
The source language description (a text file) may include a description of the semantic actions written in APL. Quad FX may then be used to generate the corresponding executable functions. If no semantic actions are included, the compiler will just check the correct syntax of the programs.
The generated compiler is executed under the APL interpreter. Therefore, the semantic functions can do anything APL can do, and global variables may be used to store information between different calls to the semantic functions. This is useful for some of them, such as those that optimize the use of the registers.
The APL2 general arrays are useful to simplify the construction of the compiler. For instance, the compiler stacks use this structure to store lots of information in a simple way.
The performance of the generated compiler is not very good (it loses around one order of magnitude when compared to a specific C-built compiler), but it may be used for educational purposes with students of Computer Science. Furthermore, since few programming languages have instructions more complex than those in APL, it is easy to build a compiler translating any language into APL2, using a one-to-one mapping.
Another workspace has been developed to provide debugging facilities for these to-APL2 compilers, which has been tested successfully. Specific compilers have been written translating into PC assembly language and APL2.
Epistemology is the study of what we know and how we know it. Several concepts introduced during the development of APL significantly expand the perceptual environment of those who know the language. One can describe three stages in our perceptions of computers, computer languages, and computer problem solving:
1940's John von Neumann invented electronic computers.
1960's John Backus of IBM wondered if programming could ever escape the von Neumann trap.
1980's I wondered if thinking could ever escape the programming trap.
A language like APL was a masterpiece of simplification when seen through the eyes of a computer user of the seventies. The virtues of simplicity are usually held to be many. This paper will firstly discuss simplicity in general, review some of the writing on simplicity coming from the computing world, and briefly construe the development of APL, and the later J, as being essentially efforts in simplification. Possibilities for further simplification will then be canvassed. Firstly, simplification of the usually accepted but unfortunate naming conventions adopted by array processing languages will be proposed. Secondly, simplification of the arithmetic will be very briefly outlined, more detailed treatment of this topic being available elsewhere. Thirdly, the possibilities for a new kind of function (called an application) will be described. These will be considered as a kind of systematic renaming to supply arguments to functions. Fourthly, syntactic means for having all functions and operations dyadic will be treated, and the advantages of adopting such means evaluated. Fifthly, and in the context of J's simplifications, the need for hyperoperators will be asserted. Finally, the nature of interpreters for array processing languages will be reviewed, and suggestions made for facilities to be provided by such interpreters to simplify the process of developing array processing code.
This paper will start by describing how requirements for Japanese language handling have been treated by APL vendors (mainly IBM), and how APL users and application developers have devised their own means of meeting those requirements in various APL implementations, from APLSV through APL2/OS2, in the past and at present. The main purpose of the paper is an attempt to clarify the problems, technical as well as cultural, which could be solved by today's conventional computer technology in future APL implementations. People's emotions regarding their national language may seem to have little to do with the design of computer languages, and this is a difficult problem in which manufacturers do not want to be involved. The paper tries to convey the gravity of the problem, even for the future productivity of the APL language.
This line of research has been pursued since the seventies, giving rise to several computer products for mainframe computers in the eighties. The simulation language compiled by these products is CSMP and the object language produced is APL, which is also the language the compilers are written in. This approach has the following advantages:
A simulation model written in CSMP is compiled into an equivalent APL program which can be invoked and controlled by another APL program.
The CSMP model data and parameters become APL variables with the same names, whose values may be changed at any point by simple APL assignments. This makes it very easy to run "what if" experiments, one of the main justifications of modeling and simulation.
APL's powerful graphics capabilities (such as those provided by the AP206 and AP207 auxiliary processors) may be used by the simulation programs to display data. During the last two years, this approach has been extended in the following way:
- A new simulation language (OOCSMP) has been designed, a pure extension of CSMP with object-oriented constructs that help to simplify the models of systems with several equivalent interacting components. The simplification obtained may be very significant. OOCSMP is also capable of solving a large family of partial differential equations.
- A compiler has been written in APL2 to translate OOCSMP code into C++. This compiler reuses the scanner, the parser and the semantic analyzer of the previous compiler (with the appropriate additions due to the extensions added to the source language), while the code generator has been completely replaced.
To maintain the above mentioned advantages of APL translated models, a compiler library is provided that makes it possible to add to the compiled C++ programs a user interface and a graphic presentation system similar to those obtained with the compiler that generates APL. The APL2 compiler provides options to automatically generate a main program with the appropriate calls to the library user interface functions.
The OOCSMP compiler written in APL2 has been built as a packaged workspace that may be invoked from a DOS session, either natively, under Windows 95, or under OS/2. Other compilers for OOCSMP built by our group generate Java code and C++ code executable under Windows 95 and Unix with the appropriate user interfaces. These compilers have been used for educational purposes, in postgraduate courses on continuous simulation, and also for research, making it very easy to build interesting OOCSMP models.
Two years ago, during the APL96 conference, we discussed tools for crosstab analysis and showed the advantages of the coding technique. At the Lancaster conference itself, as well as in the following years, some proposals were made for increasing the speed of the programs; C. A. Jones contributed several good ideas. Now we see new trends in data analysis: data mining programs are 'en vogue'. In data mining the number of columns to analyze is much higher. Codd, the founder of the relational database model, is of the opinion that OLAP techniques must be able to handle between 15 and 20 dimensions. Perhaps this is going a bit too far, but today the need for processing more than 10 classification variables at the same time is obvious. That is the reason for modifying our CROSSTAB on the basis of the new RF function of APL2. The external function RF enables really fast extraction of the distinct values. The CUBE program works without coding, and our tests have shown that CUBE can classify more columns than CROSSTAB or CLASSINDEX from CAJ. The application of CUBE, however, is limited by the growth of the multidimensional result array. Frequently most items of the complex array are zeroes. In such cases we propose to make use of the advantages of the sparse matrix technique. Thus we get CUBE_SPARSE, which is extremely fast and allows the successful treatment of very large datasets with many columns. Each element of the resulting nested vector contains the index path (or key) and the data part. With special programs it is possible to apply common functions to the data part of this complex vector and to obtain flat matrices for the output of the analysis.
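The sparse representation described above, one entry per occupied cell, each holding an index path and a data part, can be sketched briefly. This is an illustrative Python analogue under assumed column names; it is not the actual CUBE_SPARSE code.

```python
# Sketch of the sparse-crosstab idea: instead of a dense multidimensional
# array that is mostly zeroes, keep only occupied cells, keyed by the
# index path of classification values. (Names and data are illustrative.)
from collections import defaultdict

def cube_sparse(rows, class_cols, data_col):
    cells = defaultdict(float)
    for row in rows:
        key = tuple(row[c] for c in class_cols)   # the index path (key)
        cells[key] += row[data_col]               # the data part
    return dict(cells)

rows = [
    {"region": "N", "year": 1997, "sales": 10.0},
    {"region": "N", "year": 1998, "sales": 5.0},
    {"region": "S", "year": 1997, "sales": 7.0},
    {"region": "N", "year": 1997, "sales": 2.5},
]
print(cube_sparse(rows, ["region", "year"], "sales"))
# {('N', 1997): 12.5, ('N', 1998): 5.0, ('S', 1997): 7.0}
```

Storage grows with the number of occupied cells rather than with the product of all classification-variable cardinalities, which is what makes many-column classification feasible.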
- Curtis A. Jones, APL96 Conference, Lancaster
- J. A. Brown, H. P. Crowder, IBM Systems Journal, Vol. 30, No. 4, p. 440
APL, like FORTH, is a powerful programming language which deserves to be better known. Despite its unusual style and its power, it remains too obscure.
In my job, I often notice that, for solving little problems, our PCs and their software are unfriendly and their documentation too voluminous. In addition, they are sometimes out of reach, and solving the problem takes time!
Even though their size has been much reduced in a few years, PCs still do not always offer the convenience of pocket calculators. This conclusion suggested to me the idea of building an APL pocket calculator, which could offer us powerful computing in a small package.
If we could implement programmable facilities, this kind of APL calculator could become a wonderful tool for industry and education.
In this paper, we wish to outline a new idea, namely object-oriented positioning, using simple tools of set theory and combinatorics. Nowadays the domains of structured spatial positioning systems extend only to simple 2D or 3D bounding rectangles (MBR, minimal axis-parallel bounding rectangle). In our approach we try to restrict the domains strictly to the inside and/or the boundary (surface) of the object. For such domains it is impossible to use traditional coordinates because of boundary irregularity. Therefore we applied 2D and 3D versions of the subrange data type, well known from programming languages, for spatial indexing. Two important methods of combinatorics, enumeration and ranking-unranking, were also applied, so the spatial index (or rank) we use is derived from a ranking procedure. (This should not be confused with the concept of rank used in the APL and J programming languages.) During development we had to partly modify and extend the traditional concepts of circumference and area, and likewise of surface and volume. Creating nested OOP superclasses and subclasses by means of inheritance offered a natural way to transmit the transformation parameters (translation, orientation, etc.) between positioning systems belonging to different classes. This solution, in contrast to coordinate systems, is database-friendly because it is consistently free of redundancy and uses the irregularly bounded real spatial objects themselves as domains. By developing hidden virtual systems, a transformation between any kind of coordinates and linear indices can be performed at any time, in accordance with polymorphism. Thus our J-based positioning systems can rightly be called object-oriented, with every advantage and disadvantage that implies.
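The ranking-unranking idea mentioned above can be shown for the simplest case, a 2D subrange domain: each point of the subrange maps to a single linear index (its rank) and back. This is an illustrative Python sketch of the general combinatorial technique, not the paper's J implementation.

```python
# Ranking/unranking over a 2D subrange domain [x0..x1] x [y0..y1]:
# enumerate the points in row-major order, so each point gets a unique
# linear index (rank), and each rank maps back to exactly one point.

def rank(pt, sub):
    (x0, x1), (y0, y1) = sub
    x, y = pt
    assert x0 <= x <= x1 and y0 <= y <= y1, "point outside the subrange"
    width = y1 - y0 + 1
    return (x - x0) * width + (y - y0)

def unrank(r, sub):
    (x0, x1), (y0, y1) = sub
    width = y1 - y0 + 1
    return (x0 + r // width, y0 + r % width)

sub = ((2, 4), (10, 12))                       # a 3-by-3 subrange
print(rank((2, 10), sub), rank((4, 12), sub))  # 0 8
print(unrank(rank((3, 11), sub), sub))         # (3, 11)
```

Because the mapping is a bijection, the rank can serve as a database-friendly spatial key without storing coordinates redundantly.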
The article presents AxE, a package designed to support the design and analysis of transport networks. AxE was born in 1995. It stems from the experience of Prof. G. Salerno in both teaching transportation theory at an Italian university and consulting for large institutions and companies on transportation modeling and planning. In his work Prof. Salerno has been using APL extensively for years. He regularly teaches his students an introduction to APL and uses APL to code the major algorithms related to transportation theory. He also uses APL for his professional work as a consultant and to perform management tasks. The implementation of AxE has been a joint effort led by Prof. Salerno with help from other professionals, including but not limited to the speaker. The AxE package currently runs on Apple Macintosh systems and has been coded entirely in APL.68000. A port to MS Windows 95 is currently under evaluation. First, the article briefly presents the package and its features. The body of the article is devoted to a discussion of the advantages (and, sometimes, limitations) that the use of APL in general, and APL.68000 in particular, has provided in the following areas during the design and implementation phases of AxE:
user interface generation;
management of a relatively large and complex application.
A demographic projection model for regional planning is presented. The population of the region, about one million inhabitants, was disaggregated into age/sex categories (one hundred annual age classes for each sex). The APL2 language was chosen for its strengths in developing a dynamic analytical model for the population projection, while offering the possibility of a graphical interface for planners to evaluate any aspect of population dynamics. The high level of interactivity allows estimation of the effects of any variation in the components of demographic growth (migration, mortality, fertility).
Algorithm and software prototyping using APL2 has been effective in developing several new capabilities for space object tracking using satellite-based sensors. Applications include: sensor simulation, sensor scheduling, orbit propagation, and refinement of space object orbital parameters using sensor observations. This work capitalized on a rich APL function library and experience in both missile tracking and orbital applications related to GPS (Global Positioning System). For each of these applications, APL2 provided a prototyping environment that was rapid and effective. Even the sensor scheduling problem, which required tasking tens of satellites to observe thousands of space objects multiple times per orbit, was amenable to an APL solution. This paper provides an overview of space object tracking algorithms that we recently developed, and the techniques used to develop a complete end-to-end simulation test bed. We also discuss conversion of this prototype code to production code that is delivered to the customer. We also discuss the relative merits of this approach compared with alternative approaches such as COTS. Overall we have found APL to be ideal in supporting simulation and rapid development of large-scale applications of national importance.
The paper describes implementation in APL of some methods of pattern recognition. These general-purpose techniques of data analysis are illustrated by application to Nuclear Power Plant Diagnostics. In particular we consider vibration spectra analysis, including simple descriptive statistics, smoothing and peak extractions, multidimensional scaling for data visualization, informative features selection, cluster analysis and classification.
Implementation of algorithms used in the paper has been done in Dyalog APL. The paper discusses benefits of using APL for the problem area.
The application is based on analysis of real vibration characteristics measured at the Nuclear Power Plant in Novii Voronecz. The paper discusses the analysis methodology, the discovered data structure, and malfunction diagnostics.
The paper also mentions the use of the implemented techniques in the training of engineers for nuclear power plants, and in other applications.
A simple APL2 program (385 instructions in all), packaged with the interpreter as an executable PC program, has made it very easy to develop and execute advanced courses in science for the high school level. These courses are different from the run of the mill, for they do not impart theoretical information, but guide the student to solve a potentially unlimited number of applied problems.
Each course consists of 18 to 25 lessons, each containing five problem models, for a total of 90 to 125 problem models per course. Every time the program proposes a given lesson, each of the five models is used to generate a problem. Each problem model may give rise to a large number of actual problems (up to several thousands, in some cases), so that the probability of being invited to solve the same problem two times is small. The following is an example of a problem proposed by the program:
Which is the 5th term of the arithmetic progression whose first term is 3 and whose difference is 2?
The pupil is supposed to solve the problem by himself on a piece of paper, using a calculator or any other means, and to type the solution at the keyboard. The program compares it with the expected solution and provides feedback. If the solution is incorrect, the pupil is offered another try. If this is also wrong, the correct solution to the problem is explained, after which a new problem of the same model is proposed to find out if the pupil has followed the explanations. The program keeps information about the performance of each pupil on the different lessons. This information may be obtained by the teacher. Several unique features of APL2 have been used to develop these courses:
- The fact that the packaged program includes the interpreter makes it possible to use the execute primitive to execute the problem models.
- A problem model may use quad FX to generate new functions, which expands the possibilities enormously.
- The two APL random generators (monadic and dyadic ?) are used to obtain many different specific problems from a single problem model.
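The problem-model mechanism described above, one model instantiated with random parameters into many concrete problems, with the pupil's answer checked against the expected solution, can be sketched briefly. This is an illustrative Python analogue; the actual courses use APL2's roll/deal primitives and quad FX, and the function names here are invented.

```python
# Sketch of the problem-model idea: one model, many concrete problems.
# (Illustrative Python; the courses use APL2's ? primitives and quad FX.)
import random

def nth_term(first, diff, n):
    """Expected solution for the arithmetic-progression model."""
    return first + (n - 1) * diff

def make_problem(rng):
    """Instantiate the model with random parameters."""
    first, diff, n = rng.randint(1, 9), rng.randint(1, 9), rng.randint(3, 8)
    text = (f"Which is the {n}th term of the arithmetic progression "
            f"whose first term is {first} and whose difference is {diff}?")
    return text, nth_term(first, diff, n)

# The sample problem from the abstract: first term 3, difference 2, 5th term.
print(nth_term(3, 2, 5))   # 11

text, answer = make_problem(random.Random(42))
def check(reply):
    """Feedback step: compare the pupil's reply with the expected solution."""
    return reply == answer
print(check(answer))        # True
```

With three independent random parameters, a single model of this kind already yields hundreds of distinct problems, which is why the chance of seeing the same problem twice is small.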
Five different products have been built using this procedure: three courses on Mathematics, and two on Physics, all of them in DOS and Windows 3.1 versions. The Windows versions are partially written in C++ and partially in APL2. A procedure has been developed to make it possible to call APL2 programs from C++ programs. A different procedure, which makes extensive use of the graphics auxiliary processor (AP207) has been used to build a course on Chemistry (inorganic formulation) for DOS.
A course in calculation methods is standard in mathematical and engineering higher education. It is common to organize the computational practice with the aid of classic languages such as C or Pascal, but it is obvious that many of the details that accompany programming in these languages are irrelevant to teaching the calculations themselves.
APL is very useful in this respect. First its interactivity, second its many built-in functions, and third the flexibility of its array processing are the main features which give it an advantage in programming calculation methods. Newer APL facilities, such as nested arrays and the each and composition operators, give additional power.
In this communication we shall discuss our experience of teaching the classical course in calculation methods to students of mathematics with the aid of Dyalog APL. Our course includes the following topics: solution of systems of nonlinear equations, numerical differentiation and integration, interpolation with curves and splines, and solution of Cauchy and boundary problems for ordinary differential equations. The idea of the course was to present the calculation methods as mathematical models which can be quickly modified, used in other models as "black boxes", and simultaneously applied to the solution of sets of problems. We show that a method programmed in APL is very useful for investigating the influence of the parameters of the problem and of the method itself. Many calculation problems can be programmed as defined dyadic APL operators with a function as first argument and data (parameters of the method) as second argument. The possibility of creating derived functions with the aid of such operators is a very powerful means of programming modifications of the calculation methods.
We use the graphical system GRAN, written in APL, to illustrate the numerical solutions. Of course, there are serious problems with the speed of calculations. We have previously shown (see my contribution at the APL-97 conference) that these problems could be solved by introducing a recurrence operator into APL, because nearly all calculation methods are recurrences in APL terms. On the other hand, the problem of speed is not very important for educational problems.
Lingo Allegro maintains a web server at http://www.lingo.com which uses Dyalog APL to run various active web server demonstration programs. This paper will discuss a Dyalog APL education and certification program, implemented in Dyalog APL, which will be running on this server by February 1st. The program will have a user logon feature and will take the student from beginner through advanced use of Dyalog APL. There will also be a series of self-administered on-line tests and automatic certification of Dyalog APL competency at various levels of completion. The paper will discuss the pedagogical and integrity issues of such a program, as well as implementation strategies.
The USQ array-based first unit in mathematics, developed with the assistance of a federal government grant and delivered through the Australian Open Learning Agency, has been modified for delivery over the World Wide Web. The materials consist of Web pages with embedded executable lines of mathematics, which the user may modify and re-execute, with the results returned to the page.
The array-based mathematics program at USQ has been evolving since 1988, when the public-domain array processing language IAPL was made available by the international APL community. A subsequent major software grant from STSC made further progress possible, and a complete numerically based introductory mathematics unit, together with supporting materials for on-campus students, was developed with the assistance of a CAUT Grant in 1993. This program has been offered to a laboratory-based section of the first-semester mathematics class in subsequent years and has been refined and extended to cover the second-semester unit.
Through 1994-6 a Quality Enhancement Grant from the Open Learning Agency of Australia provided an opportunity to apply the numerically based approach at a lower level, in the Foundation Mathematics unit. This work led to the development of an environment, delivered on disc, in which students could work through templates of computational sequences and modify the statements involved to solve similar problems or, if desired, investigate completely different ones. It included a graphing tool that students found useful beyond their Foundation Mathematics studies. To create this environment, J (without its session manager) was linked with a Windows version of Tcl-Tk. This development was possible only through access to the source code of pre-commercial versions of J.
The particular way in which this environment was constructed suggested how the same or a similar package might be delivered over the World Wide Web in a unique fashion. Early WWW tools offered CGI interfaces to a database (for multiple-choice question-and-answer interactions) or to a server process (returning the results of computations), or used MIME types to launch a mathematical engine on the client machine, which then loaded the appropriate document. The novel feature of the process developed here is that the user is presented with a Web page with the mathematical tool (J) embedded in the page, rather than with an engine-specific document, such as a Mathematica Notebook, in a separate window. The use of such engine-specific documents requires both familiarity with the user interface of the engine (e.g. Mathematica) and a copy of the software (to run the Notebook).
The development provides an alternative approach tied closely to concrete arithmetic operations and the extensive use of graphical representations. It affords an opportunity for students who have difficulty with abstract symbolism to succeed in mastering the basic concepts of the calculus, and it caters for a different learning style. Links to related and prerequisite pages ensure that the student is continually initiating actions rather than merely reading.
The software tools create interactive Web pages and pass information between an interface language (Tcl-Tk), which plugs into a WWW browser, and (the essential portion of) a mathematical interpreter (J).
The objective of this research is to develop effective formats and functions for generating displays that provide feedback to students who are learning independently in on-line environments. Such mechanisms for motivating students are becoming increasingly important as distance learning over computer networks, without the instructor physically present, becomes more commonplace. As technical education moves beyond fact delivery and the evaluation of problem-solving results, there is greater emphasis on understanding problem-solving processes and on drawing on experience for knowledge of the operations useful in identifying and solving problems. Using novel APL-generated visual displays, students can easily recognize their degree of progress in the context of their own prior performance and that of their competitors. In one form of histogram plot, each student's label is plotted within the tower, or histogram column, indicating that student's strength of membership in a particular grade category. The categories correspond to a subset used to represent the values of a fuzzy variable indicating student progress. The graphics in the histogram displays are constructed by APL functions to portray distributions, summarizing how students scored on a task relative to their previous performance and to others. Plots of the same form can present raw scores, scaled scores, or the rank of each student in the class. The graphic plots are complemented by compact symbolic and numerical tabulations in which every student is identified by a numeric label, and both the average performance over all previous tasks and the percentage change produced by the grade earned on the last task are grouped. The rapid availability of these displays of performance is a new source of motivation for students who must depend on incentives for learning outside the traditional classroom setting.
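The labeled-histogram idea can be sketched quickly: each column is a grade category, and the "bricks" of the column are the labels of the students in it, so the column height shows the distribution while the labels show membership. This Python sketch is purely illustrative (the paper's displays are APL-generated); the category names and data are invented.

```python
# Text sketch of a histogram whose columns are built from student labels.

def label_histogram(grades):
    """grades: dict mapping student label -> grade category."""
    cats = ["weak", "fair", "good", "strong"]
    cols = {c: [s for s, g in grades.items() if g == c] for c in cats}
    height = max(len(v) for v in cols.values())
    lines = []
    for row in range(height - 1, -1, -1):  # print columns top-down
        cells = [cols[c][row] if row < len(cols[c]) else "" for c in cats]
        lines.append("  ".join(f"{cell:>6}" for cell in cells))
    lines.append("  ".join(f"{c:>6}" for c in cats))  # category axis
    return "\n".join(lines)

print(label_histogram({"s1": "good", "s2": "good", "s3": "fair",
                       "s4": "strong", "s5": "good"}))
```

A real version would substitute fuzzy membership grades for the crisp category assignment used here, so a student could appear (with varying strength) in more than one column.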
Parallel computation can be very useful in the field of finance, where real-time response is the key to success: fast decisions are often more important than more precise solutions. In spite of this, financial operators have begun exploiting the power of computers only recently, so it is easy to imagine how widespread the opportunities for parallel computation in finance are. Here we present a simple but useful application of highly parallel computation to a sensitivity analysis of a bond portfolio management problem. Dupacova (1998) proved that small errors in constructing scenarios will not destroy the optimal solution. The aim of this contribution is to quantify, through carefully planned simulation studies, the magnitude of those errors and to give bounds, at a specified confidence level, on the optimality gap between the value obtained from the optimal first-stage solution of the unperturbed problem and the "true" optimal value. Results for the Italian market are reported.
Many optimization methods are iterative in nature and require users to specify constraints and objectives. The normal way to handle this type of problem is to create a loop with an execute statement inside its body: the loop is necessary because the algorithm is iterative, and the execute statement is necessary because the objective function is user-defined.
“Execute” can be costly inside a loop. While there are existing ways to remove “execute” from the loop, they are awkward. Dynamic functions, however, can be built ahead of time and called inside the loop, and defined operators allow an arbitrary optimization function to be passed to the looping function. The method of simulated annealing can handle large-scale optimization problems without getting trapped in local minima. Constraints can be treated as objectives with a high penalty for violations, so we can eliminate constraints and deal only with objectives. A series of objectives is combined into a single objective function; since the objectives are user-defined, a dynamic function is created. The goal is to reduce the result of the dynamic function to zero (negative values are not permitted), and the function is called iteratively until a specified tolerance is reached. This paper uses an example from the mortgage industry to demonstrate the use of dynamic functions in iterative optimization methods.
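The scheme above can be sketched in Python (standing in for the APL dynamic function passed through a defined operator): constraints become penalty terms, the objectives are combined into one non-negative function built ahead of time, and a generic annealing loop takes that function as an argument. All constants and the toy objective are illustrative, not from the paper.

```python
# Simulated annealing over a user-supplied non-negative objective.
import math
import random

def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.995,
           tol=1e-6, max_iter=20000, rng=None):
    """Minimise objective (assumed >= 0) toward zero."""
    rng = rng or random.Random(0)
    x, fx, t = x0, objective(x0), t0
    for _ in range(max_iter):
        if fx < tol:          # stop once the tolerance is reached
            break
        y = x + rng.uniform(-step, step)
        fy = objective(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
        t *= cooling
    return x, fx

# Combined objective: hit a target, with a heavy penalty for violating x >= 0.
target = 2.0
objective = lambda x: abs(x * x - target) + (1000 * -x if x < 0 else 0.0)
x, fx = anneal(objective, x0=5.0)
```

The key point mirrored from the abstract is that `anneal` never inspects the objective's text: the user-defined expression is compiled into a callable once, outside the loop, rather than executed repeatedly inside it.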
The paper deals with one of the most important fields of research in modern financial theory: the valuation of derivatives. The aim of the work is to use a discontinuous Markov process to model the dynamics of stock prices and, consequently, to price derivative instruments, in place of the traditional binomial multiplicative process frequently used in many discrete models. At the end of the work various applications are presented in which the stochastic process of the stock prices is modeled with a simple and powerful APL program. In these applications APL demonstrates all its power for calculation and matrix manipulation in mathematical work.
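To make the contrast with the binomial multiplicative tree concrete, here is a hedged Python illustration (not the paper's APL code) of one discontinuous Markov model: the log-price follows a compound Poisson jump process, and a European call is priced by Monte Carlo. All parameter values are invented for the example.

```python
# Monte Carlo pricing under a compound Poisson jump model for the log-price.
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson variate."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate_terminal_prices(s0, rate, jump_rate, jump_mu, jump_sigma,
                             horizon, n_paths, rng):
    """Terminal prices: drift plus a Poisson number of Gaussian log-jumps."""
    prices = []
    for _ in range(n_paths):
        log_s = math.log(s0) + rate * horizon
        for _ in range(poisson(jump_rate * horizon, rng)):
            log_s += rng.gauss(jump_mu, jump_sigma)
        prices.append(math.exp(log_s))
    return prices

# Monte Carlo price of a European call with strike 100.
rng = random.Random(1)
paths = simulate_terminal_prices(100.0, 0.03, jump_rate=2.0, jump_mu=0.0,
                                 jump_sigma=0.05, horizon=1.0,
                                 n_paths=5000, rng=rng)
call = math.exp(-0.03) * sum(max(s - 100.0, 0.0) for s in paths) / len(paths)
```

Unlike the binomial tree, the number of price changes per path is itself random here, which is exactly the discontinuity the jump model contributes.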
Corporate analysis has long traditions in Finland. The first computer-assisted calculation model was built in the late 1970s; in 1984 it was replaced by an on-line mainframe APL application. The PC soon became available, and in 1985 the first PC model, called Trennus, was programmed, also in APL. Since then the application has been commercialized, and nowadays all the main Finnish banks and insurance companies use the model for financial statement analysis. Companies use it as a planning tool for their future investments; nearly 50,000 analyses are made each year. A short description of corporate analysis and financial statement analysis methodology is presented, and the main components of the application are outlined. In the last few years a Windows version of the application has been programmed, also in APL. The "conversion" project is described: it turns out that, despite the similarity between the DOS and Windows versions, very little of the old code could be used directly. The Windows version has many advantages over the old version:
windowed simultaneous use of all the modules and multiple instances
enhanced output & graphics
data transfer to other applications
automatic HTML page generation of reports
For GUI programming we have developed a tool, UITW, which is based on API calls and runs in APL+Win. It has given us very significant advantages.
Increasingly, industry asks mathematicians to determine its "true" utility functions directly from the available data on the resources used and the corresponding profits, in order to optimize the latter with respect to the former. The possibility of determining the utility function directly from the data is very important because in this way the exact situation of the company is described. Moreover, the largest companies divide their investments among several activities, and the optimization of their utility functions can lead to problems involving separable or partitionable functions.
Two-layered feed-forward neural networks are able to approximate any separable function, fitting the data while maintaining the separable structure to within the desired approximation error. Thus the theory of partitionable variational inequalities can be used to find the optimum of the utility function subject to constraints.
The presence of the partitionable structure is important because it simplifies the resolution algorithms and makes them more efficient. Moreover, under stronger assumptions on the function, the above results can be generalized to a larger class of utility functions: one problem of dimension n can be split into n problems of dimension one.
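The splitting claim can be illustrated with a minimal sketch: when the objective is separable, f(x) = Σᵢ fᵢ(xᵢ), an n-dimensional minimisation over a box decomposes into n independent one-dimensional minimisations. The component functions and the grid search below are illustrative only, not the paper's method.

```python
# Coordinate-wise minimisation of a separable objective over [lo, hi]^n.

def minimise_separable(components, lo, hi, steps=10000):
    """Minimise sum_i f_i(x_i): each f_i is minimised independently."""
    xs = []
    for f in components:
        grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
        xs.append(min(grid, key=f))  # a one-dimensional problem
    return xs

# f(x, y) = (x - 1)^2 + (y + 2)^2 is separable; each term is minimised alone.
x_opt = minimise_separable([lambda x: (x - 1) ** 2,
                            lambda y: (y + 2) ** 2], -5.0, 5.0)
```

The efficiency gain the abstract points to is visible even here: the work grows linearly with n, whereas a joint grid search over n coordinates would grow exponentially.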