A Blustery Spring

Dyalog version 17.1 will be released soon, with the HTMLRenderer working under Windows, macOS and GNU/Linux, the “Link” system providing infrastructure for connecting APL to source code management systems, pre-built Docker containers with Dyalog APL for Linux installed and many other enhancements that simplify the installation and maintenance of systems based on Dyalog APL.

We’ll be writing much more about version 17.1 soon, and next year’s 18.0 release in due course. The main purpose of this blog entry is to let you know about new members of the Dyalog team and, unfortunately, a couple of departures as well.

Departures


In February, John Scholes passed away. Together with Geoff Streeter, John was one of the original implementors of Dyalog APL in 1982-1983, a cornerstone of all aspects of the Dyalog language and business, and one of the pillars of the APL community. Many members of the community have paid tribute to our Genius, Gentleman and Mischievous Schoolboy at http://johnscholes.rip.

At the end of May 2019, Jay Foad is leaving Dyalog to return to his first love (as a software developer) and become a proper compiler geek again, after nearly a decade of helping move Dyalog APL forward and, for the last three years, helping to “herd the cats” as CTO. We will sorely miss Jay’s technical excellence, but we understand the desire to hit the sweet spot of his skills when the opportunity arises, and we wish him good fortune in that pursuit! You can read Jay’s farewell blog post here.

Jay’s management responsibilities will be shared between Richard Smith (our Development Manager) and myself; I will be re-assuming the role of CTO until further notice.

New Faces in 2019

The good news is that we will welcome several new people to Dyalog in 2019 – new hands to write code in APL, to work on the APL interpreter, and to write documentation and training materials to help new and old users get their work done more effectively.

APL Consultants

In response to client requests and to help new clients get started writing their first APL systems, we are creating a consulting group in the USA. To date, we have recruited two members for this team: Nathan Rogers joined the team at the end of April and is based in Denver, Colorado, and Josh David starts work for Dyalog in early June (as soon as he graduates) and will be based in New Jersey. If you think you have heard of Josh before, that is probably because he was a winner of the Dyalog Problem-Solving Contest in 2016 (https://www.dyalog.com/news/112/420/2016-APL-Programming-Contest-Winners.htm) – and a runner up in 2015. Nathan found us thanks to Adam Brudzewsky’s work on Stack Exchange: https://chat.stackexchange.com/rooms/52405/the-apl-orchard. You can reach them both using e-addresses in the form firstname at dyalog.com.

When members of the consulting team are not working for clients, the intention is that they will be members of the APL Tools Group at Dyalog, working on new tools for APL application development and helping create test suites for Dyalog APL. They will also support Richard Park, who joined us late in 2018 to work on the creation of training materials and tutorials for new users.

Once we have a better idea of the demand for consulting in North America, we expect to grow the team. Please let us know if you could use hired APL hands – in any territory! If we don’t have the resources ourselves, we may be able to find someone else.

Programming Language Implementors

Nathan comes to us with experience from a broad set of tools and programming languages. In addition to writing tools in APL, he will be a part-time member of the core development team, working on the APL interpreter and its interfaces in C, C#, JavaScript, Python and other languages. However, he won’t spend enough time on this to make up for the loss of Jay, who (like most managers at Dyalog) spent a significant amount of his time writing code.

Therefore, as described at https://www.dyalog.com/careers.htm, we are recruiting at least one C / C++ programmer to help us grow the core team.

A Busy – and Exciting – Time

2019 is looking like an extremely busy year, with significant growth at Dyalog. As usual, our plan is to bring all the new (and old) hands to the Dyalog user meeting, which will be held in Elsinore, Denmark this year – September 8th to 12th. Details of the programme will soon start to appear at https://www.dyalog.com/user-meetings/dyalog19.htm. If you would like to present an APL-related experience to the user community, make proposals for new features of Dyalog products or suggest topics that you would like Dyalog to speak about at the user meeting, then please let us know as soon as possible!

Goodbye

At the end of May 2019 I am leaving Dyalog, so it seems like a good time to reflect on my time here and what I’ve learned from APL and the APL community.

When I joined Dyalog in 2010 I knew nothing about APL, so there was a really steep learning curve as I got to grips with both the language and its implementation. I was using some of my previous experience with compilers to improve the performance of the implementation, and thinking about ways to compile APL. This is a tough problem, and one that many people have worked on over the years (see for example Timothy Budd’s 1988 book An APL Compiler). My own ideas have shifted as I’ve gained more experience with APL and the way it is used. At first I thought “writing a compiler” was an obvious thing to do; now I think that hybrid compiler/interpreter techniques are much more promising, and Dyalog’s recent experiments with deferred execution and thunks are a good step in that direction.

At the same time, there was a lot of excitement around the APL language itself. Dyalog was working on APL#, a new .NET-based APL dialect (sadly abandoned as Microsoft’s own commitment to .NET waned). And Dyalog APL itself was starting to borrow more language features from the SharpAPL/J branch of the family tree, starting with the Rank operator and continuing over many years. This prompted me to delve more into the history of APL, to try to understand some of the fundamental differences between different implementations, so that we could reconcile those differences in Dyalog APL and provide, as far as possible, the best of both worlds. I think we’ve done pretty well in that, as evidenced by the fact that many APLers are happily using Rank, Key, function trains et al in an APL2-based language, something that seemed unthinkable a decade ago.

One of the most gratifying developments in the time I’ve been working with APL is the rapid growth of new APL implementations, open source projects and grass-roots enthusiasm. In particular, the open source movement has made it much easier for anyone with a good idea about language design to implement it, and share it with the world. We’ve seen a wide variety of new APLs and APL-inspired languages popping up over the years, ranging from full-featured to highly experimental, including but not limited to (in roughly the order I remember hearing about them): ELI, ngn/apl, GNU APL, Ivy, April, dzaima/APL and APL\iv.

And speaking of new APLs, of course there is Co-dfns, a compiled APL implementation that tries to solve another tough problem: harnessing the power of GPUs and other massively parallel hardware in a way that makes it accessible to the end user. This is something that many people are trying to do, in a wide variety of languages, but as far as I can tell no-one has quite succeeded yet. The state of the art is still that, in order to get good performance, you need quite a lot of mechanical sympathy for the underlying hardware. But Co-dfns has come a long way, and if any language is well-suited to run on parallel array processors then surely it is APL!

This brings me on neatly to my next job: I’ll be working on compilers for GPUs, the parallel computers that render 3D graphics. They are closely related to their “general purpose” cousins the GPGPUs, which are used for pure number crunching, and to so-called tensor processing units (TPUs) that simulate neural networks for use in machine learning and artificial intelligence. “Tensor” here just means an array of arbitrary rank, or as we would say: an array. For programming TPUs there is a Python-based framework called TensorFlow. But, look closely at the APIs for some of the core TensorFlow libraries, and you’ll see operations like reshape, reverse and transpose, which are eerily similar to their APL equivalents. There truly is nothing new under the sun!
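
For instance (a small sketch of my own, not taken from the TensorFlow documentation; the matrix m is introduced here only for illustration, and the tf.* names in the comments are the TensorFlow operations just mentioned):

   m←2 3⍴⍳6    ⍝ a 2×3 matrix, default ⎕IO←1
   3 2⍴m       ⍝ reshape; cf. tf.reshape
1 2
3 4
5 6
   ⌽m          ⍝ reverse along the last axis; cf. tf.reverse
3 2 1
6 5 4
   ⍉m          ⍝ transpose; cf. tf.transpose
1 4
2 5
3 6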

With fond regards to all APLers,
Jay.

Speed versus Accuracy: the User’s Choice

At Dyalog we have long striven for both correctness and high performance in our implementation. However, our views on this matter have recently undergone an historic shift in paradigm which we are excited to share with our users. We now intend to provide the best experience to the user of Dyalog APL not by providing correctness and speed, but rather correctness or speed, with a user-specified tradeoff between the two.

The upcoming release of version 17.1 includes a powerful new feature: the correctness–performance slider. To find this option, select Options>Configure>General in the IDE, or Edit>Preferences>General in RIDE. The slider is labelled “Execution Properties” and may be set at any time, although users should note that the effective correctness may be reduced if this is done while an in-progress function is on the stack.

With the slider at its default position near the middle, Dyalog will make an effort to balance performance and correctness. Computations will proceed at a reasonably brisk pace, and slightly wrong answers will appear occasionally while very wrong ones come up only rarely. As the slider is moved to the left, correctness is increased at the expense of performance. You’ll have to wait for your results but when you get them they’ll be numbers you can trust. Moving the slider to the right will have the opposite effect, increasing speed at the expense of more frequent misparsings and significant floating point error. Perfect for startups!

Motivation

The seasoned programmer has most likely experienced the same issues as us, and may already be rushing to incorporate our ideas in his or her own code. In the interest of transparency, however, we wish to explain a bit further our experiences with the speed-correctness tradeoff.

Most often we encounter this tradeoff in one direction: when writing to improve the performance of a particular interpreter operation we sometimes find the results returned are different. In the past such cases were seen as bugs to be corrected, but we now understand them to simply be instances of a universal rule. Conversely, fixes for obscure parsing issues which slow down parsing of equally obscure but already correct cases are no longer cause for concern: we simply condition them on the appropriate slider threshold.

In the graph below we plot the accuracy and performance of several algorithms to compute the inner product !.○ on two large array arguments. Performance is measured in throughput (GB/s) while accuracy is defined to be the cosine similarity of the returned solution relative to a very precise result worked out with paper and pencil.

On plotting these results the nature of our plight became clear, and we added the performance-correctness slider to Dyalog version 17.1 as fast as possible. This post was written as accompaniment, with similar haste.

Results

We profiled a large sample program with many different execution settings and obtained the results shown below. As you can see, Dyalog can be quite stable, or quite fast, depending on how the performance-correctness slider is set.

We believe these results demonstrate excellent value for all of our clients. Large and conscientious businesses can set the slider to correctness to encounter very few errors in execution. Rest assured, if errors are reported with these settings, we will do our best to shift them to the right side of the performance-correctness continuum! In contrast, the APL thrill-seeker will find much to like at the speedy end of the spectrum, as more frequent crashes are compounded by an interpreter that gets to them faster.

Future extensions

Although we believe the provided options will satisfy most users, some tasks require an implementation so fast, or so correct, that Dyalog cannot currently offer satisfactory performance along the relevant axis. To rectify this in the future we intend to offer more powerful facilities which extend the extremes of the correctness-performance slider. Potential clients who are exceptionally interested in correctness, such as NASA and Airbus, should contact us about an interpreter which runs multiple algorithms for each operation and chooses the majority result. For users interested in speed above all else we propose to offer an interpreter which only computes a part of its result and leaves the rest uninitialised, thus obtaining for example a 50% correct result in only half the time.

Even more extreme tradeoffs are possible. For the most correct results we are considering an algorithm which adds the desired computation to Wikipedia’s “List of unsolved problems in mathematics”, and then scans mathematical journals until a result with proof is obtained. For extremely fast responses we propose to train a shallow neural network on APL sessions so that it can, without interpreting any APL, print something that basically looks like it could be the right answer. Such an option would be a useful and efficient tool for programmers who cannot use APL, but insist on doing so anyway.

Although our plans for the future may be much grander, we’re quite excited to be the first language to offer user-selectable implementation tradeoffs at all. We’re sure you’ll be happy with either the correctness or the performance of Dyalog 17.1!

Welcome Richard Park

Richard Park is the latest addition to the Dyalog team and is based at Dyalog HQ in the UK. Richard has been living in the same house in Bramley his entire life and recently returned to the village after one successful degree in physics at the University of Manchester, and one unsuccessful degree in education at Manchester Metropolitan University.

How Richard and Dyalog came to meet is a story that deserves to be told, albeit a bit embarrassing for poor Richard!

One day, while walking his dog and father, Richard rang the office doorbell to inquire about any software-related opportunities that might be available. He had developed an interest in computers from using them at a young age, building circuits for A-level electronics and learning about the physics of computers (as well as computational physics) at university. If only Richard had been able to convey any of that to Jay Foad, who answered the door, instead of standing like a “gormless idiot” (his words) whilst his father jumped in and asked about internships and job opportunities…

Richard went home that evening and browsed the Dyalog website… the Pandora’s box of APL edged ajar and he began to imbibe the symbols. He also sent an amusing email explaining the earlier awkward encounter and clearly showing that he was not a “gormless idiot”. This resulted in a meeting with the CEO and CXO, Gitte and Morten, in October 2018. After that meeting Richard showed enough aptitude and interest, while pair-programming a simplistic physics simulation with Morten, that they decided to keep him.

Given that Richard’s previous experience includes programming an autonomous robot for the QMC team in the 2012 and 2013 Student Robotics competitions and using MATLAB and CERN’s ROOT library to process experimental data and run simulations, he was very surprised that he had never heard of APL before. He sees potential in APL for developing domain-specific programming languages and software packages that help teachers convey concepts in a way that more closely matches the syntax and jargon in which problems are already described.

In his new role at Dyalog, Richard is developing teaching materials and demos to promote APL.

Pool Stories: Fergus back on Form

It has been almost 2 years since our last post about Jo Fergus (daughter of our Customer Account Manager, Karen Shaw), the pool player whom Dyalog have sponsored in the past. When we last updated you, Jo had just had her daughter, Remy, and made the difficult decision to take some time out from pool to concentrate on her new family.

On June 30th 2018 she married Tom Fergus, Remy’s dad, in a small ceremony at Trowbridge County Hall and became Jo Fergus. The wedding came as a bit of a surprise to friends and family, who thought they were attending a birthday lunch, but the sun shone and a fantastic day was had by all. Jo started to miss playing pool, and in 2018 she went to the Wiltshire county pool trials and was invited back into the Wiltshire Ladies A county team. After a successful season, in which they didn’t lose a match, the team has qualified for the National County Finals taking place at the beginning of March. Jo has also joined a team playing in her local pool league.

In 2019 Jo took the decision to test herself and her pool skills and signed up to the 2019 IPA Ladies Tour, which will see her play at 5 events across the UK. There are only 32 places on the Ladies Tour, and it will give Jo a chance to play against high-level opposition, which she feels will improve her game. Ranking points will be awarded, and the top 8 in the rankings after Tour 5 will automatically qualify for the first ever IPA Ladies Blackball Premier League.

Dyalog are proud to be sponsoring Jo for this tour. It is planned that all Ladies finals will be live-streamed on freesports.tv, so we have our fingers crossed that she makes at least one final!

Ascending and Descending

Lexicographic Ordering

Lexicographic ordering is what the APL primitives ⍋ and ⍒ provide:

   ⎕io←0     ⍝ ⎕io delenda est
   ⎕rl←7*5   ⍝ to get reproducible random results

   a←?11 3⍴3

   a          a ⌷⍨⊂ ⍋a
2 1 0      0 1 0
0 2 2      0 2 2
1 1 1      1 0 0
1 0 0      1 0 1
1 1 1      1 0 1
1 2 1      1 1 0
1 0 1      1 1 1
1 0 1      1 1 1
1 1 0      1 2 1
0 1 0      1 2 2
1 2 2      2 1 0

First order by column 0, resulting in groups of rows with the same values in column 0. Then, within each group, order by column 1, getting subgroups with the same values in columns 0 and 1. Then, within each subgroup, order by column 2, getting subsubgroups with the same values in columns 0, 1, and 2. In general, for each subsub…subgroup, order by column k, getting groups with identical values in columns ⍳k.
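
For reference, the grade vector itself (readable from the display of a above) is:

   ⍋a
9 1 3 6 7 8 2 4 5 10 0

and a ⌷⍨⊂ ⍋a applies this permutation to the major cells (rows) of a, producing the right-hand display above.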

The preceding discourse is descriptive rather than prescriptive—algorithms for ⍋ can use more efficient and more straightforward approaches. As well, for ease of understanding, the description is for a matrix and speaks of columns and rows. In general, any non-scalar array can be ordered, whence instead of rows, think major cells; instead of columns, think items in a major cell. Symbolically and more succinctly, ⍋⍵ ←→ ⍋⍪⍵.
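
A quick check of this identity (a sketch; the rank-3 array q is introduced here only for illustration):

   q←?5 2 3⍴3   ⍝ 5 major cells, each of shape 2 3
   (⍋q) ≡ ⍋⍪q   ⍝ grading the array ≡ grading the table of its major cells
1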

⍋ can be used if the orderings in the process are all ascending, or ⍒ if they are all descending. The problem to be solved here is where the orderings are a mix of ascending and descending.

Numeric Arrays

Since ⍒⍵ ←→ ⍋-⍵ if ⍵ is numeric, for such ⍵ multiply each descending column by ¯1 and each ascending column by 1, then apply ⍋. This induces a “control array” having the same shape as a major cell of the argument, with a ¯1 for descending and a 1 for ascending.

   adn←{⍵ ⌷⍨⊂ ⍋ ⍺ ×⍤99 ¯1 ⊢⍵}

For the array a in the previous section:

   a              1 ¯1 1 adn a        ¯1 1 1 adn a
2 1 0          0 2 2               2 1 0  
0 2 2          0 1 0               1 0 0  
1 1 1          1 2 1               1 0 1  
1 0 0          1 2 2               1 0 1  
1 1 1          1 1 0               1 1 0  
1 2 1          1 1 1               1 1 1  
1 0 1          1 1 1               1 1 1  
1 0 1          1 0 0               1 2 1  
1 1 0          1 0 1               1 2 2  
0 1 0          1 0 1               0 1 0  
1 2 2          2 1 0               0 2 2  

In 1 ¯1 1 adn a, column 0 is ascending, and within that, column 1 is descending, and within that, column 2 is ascending. In ¯1 1 1 adn a, column 0 is descending, and within that, column 1 is ascending, and within that, column 2 is ascending.

Ordinals

An array to be sorted can be converted to an order-equivalent integer array by assigning to each item an ordinal (an integer) which has the same ordering relationship as the original item relative to other items in the array:

   sort    ← {(⊂⍋⍵)⌷⍵}
   ordinal ← {⎕ct←0 ⋄ ⍵⍳⍨sort,⍵}

That is, the ordinals obtain as the indices of the original array in the sorted list of the ravelled elements, using exact comparisons. (Exact comparisons are used because sorting necessarily uses exact comparisons.)
For example:

   ⊢ d←¯1 'syzygy' (3 ¯5) 1j2 'chthonic' (¯1)
┌──┬──────┬────┬───┬────────┬──┐
│¯1│syzygy│3 ¯5│1J2│chthonic│¯1│
└──┴──────┴────┴───┴────────┴──┘
   ordinal d
0 5 3 2 4 0

In the example, the data items are ¯1, 'syzygy', 3 ¯5, 1j2, 'chthonic', and ¯1 again. With respect to ordering, these data items are perfectly represented by the ordinals (numbers) 0, 5, 3, 2, 4, and 0, respectively. That is, ⍋d ←→ ⍋ordinal d.

   ⍋ d
0 5 3 2 4 1
   ⍋ ordinal d
0 5 3 2 4 1

As the example illustrates, it is imperative that identical ordinals are assigned to identical items, else the ordering relationships would be changed. For example, if b←0,⍪2 1 and the 0s are assigned different ordinals,

   ⊢ b←0,⍪2 1               
0 2
0 1
   ordinal b                 ⊢ bo←0 1,⍪3 2  ⍝ faux ordinals
0 3                       0 3
0 2                       1 2
   ⍋ ordinal b               ⍋ bo
1 0                       0 1
   ⍋ b
1 0

Computation of ordinals is greatly facilitated by the total array ordering introduced in Dyalog APL version 17.0.

Non-Numeric Arrays

A general solution for the ordering problem obtains by first converting the array to an order-equivalent integer array through the use of ordinals.

   ado ← {⍵ ⌷⍨⊂ ⍋ ⍺ ×⍤99 ¯1 ordinal ⍵}

For example:

   ⎕rl←7*5   ⍝ to get reproducible random results

   x0← ?19⍴4                            
   x1← (⊂?19⍴2) ⌷ 'alpha' 'beta'          
   x2← (⊂?19⍴3) ⌷ 'abc'                   
   x3← (⊂?19⍴3) ⌷ 'able' 'baker' 'charlie'

   x ← x0,x1,x2,⍪x3

   ordinal x
13 49 19 42
10 49 32 68
13 49 63 68
 4 49 63 42
 0 27 19 23
13 49 32 42
 0 49 19 42
10 49 32 68
10 27 32 23
 4 49 32 68
 4 49 32 68
 4 27 32 23
 4 49 32 68
 0 49 63 68
13 49 63 68
 0 49 32 42
13 27 32 23
 4 27 63 42
13 49 19 42

   (⍋x) ≡ ⍋ ordinal x
1

Suppose x is to be sorted ascending in columns 0 and 2 and descending in columns 1 and 3. The control array is 1 ¯1 1 ¯1, and:

   x                       1 ¯1 1 ¯1 ado x
┌─┬─────┬─┬───────┐     ┌─┬─────┬─┬───────┐
│2│beta │b│baker  │     │0│beta │a│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│alpha│a│able   │     │0│beta │b│charlie│
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│beta │b│able   │     │0│beta │b│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│alpha│b│baker  │     │0│beta │c│charlie│
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│1│beta │b│charlie│     │0│beta │c│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│1│beta │a│baker  │     │0│alpha│c│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│beta │c│charlie│     │1│beta │a│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│beta │b│baker  │     │1│beta │b│charlie│
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│beta │c│able   │     │1│alpha│c│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│beta │a│able   │     │2│beta │a│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│alpha│a│baker  │     │2│beta │b│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│alpha│a│baker  │     │3│beta │b│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│1│alpha│c│able   │     │3│beta │b│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│beta │b│charlie│     │3│beta │c│charlie│
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│0│alpha│c│baker  │     │3│alpha│a│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│beta │b│able   │     │3│alpha│a│baker  │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│2│beta │a│baker  │     │3│alpha│a│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│beta │c│charlie│     │3│alpha│a│able   │
├─┼─────┼─┼───────┤     ├─┼─────┼─┼───────┤
│3│alpha│a│able   │     │3│alpha│b│baker  │
└─┴─────┴─┴───────┘     └─┴─────┴─┴───────┘

Finally

   (ordinal x) ≡ ordinal ordinal x
1

That is, ordinal is idempotent. Actually, this is kind of obvious, but I never miss an opportunity to use the word “idempotent”.☺