Dyalog ’18 Videos, Week 4

This week is mostly a deep dive into the new world of storing source code in text files rather than workspaces and other “binary” formats. However, in case that is not your cup of tea yet, we can offer you another talk by Marshall Lochbaum, who presents more amazing algorithms to make the very widely used primitive search functions ∊, ⍳ and ⍸ run faster than ever before. By combining non-branching algorithms with vector instructions and a technique known as Robin Hood Hashing, Marshall is able to drive a modern CPU close to the theoretical maximum throughput, and in many cases spend less than one nanosecond searching for each item of an array.
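For anyone who needs a refresher on what these primitives do, here is a toy illustration (arbitrary data, assuming the default ⎕IO←1):

   3 1 4 1 5 ⍳ 1 9      ⍝ index of first occurrence; 9 is absent, so 6 (just past the end) is returned
2 6
   1 9 ∊ 3 1 4 1 5      ⍝ membership
1 0
   ⍸ 0 1 0 1 1          ⍝ where: the indices of the 1s
2 4 5

Marshall's talk is about making exactly these operations fly on modern hardware.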

Source code in text files is the dominant theme, and we are fortunate enough to have three pioneers to show us the way: Paul Mansour, Gilgamesh Athoraya and Kai Jaeger.

Paul has been working on – and using – source code management systems for decades. Recently, his team have implemented a lightweight version of the Acre project management system, named Acre Desktop, based entirely on textual source files. Apart from having to start your day by ]Open-ing a project, rather than by )LOAD-ing a workspace, there are very few changes to how you would actually use APL – but now you have access to a huge collection of professional tools developed for programmers using other programming languages, such as GitHub.

One of the very significant advantages of the APL community starting to use common structures for source code – and projects – is that it becomes realistic to share tools and utilities.
Following on from Paul’s talk, Gilgamesh Athoraya demonstrates a prototype of an APL Package Manager (APM). The APM connects to a repository of packages written in APL and allows you to declare package dependencies from a public or private repository. It also keeps tabs on the availability of new versions of dependencies, and allows you to easily update them when the time is right.

A package manager can only be successful if there are packages to be managed. Kai Jaeger has been an APL Toolsmith for a very long time, and made much of his work available via the APLTree. Now, Kai has transferred the contents of the APLTree to GitHub, making everything available as textual source. With a bit of luck, once the APM finds its legs, we’ll all be able to use Acre Desktop to define projects, Git[Hub/Lab] to manage the source, and APM to search for Kai’s tools and manage our dependencies on them!


Dyalog ’18 Videos, Week 3

The four presentations from Dyalog’18 that we are releasing this week address both the visible (user interface) and invisible (performance) parts of application design. Starting with performance:

“You don’t have to be an engineer to be a racing driver, but you do have to have Mechanical Sympathy.” – former Formula One racing driver Sir John Young “Jackie” Stewart, OBE


This quote was at the heart of the talk by our invited keynote speaker Martin Thompson. In order to write software which performs well, you need to have a basic understanding of how the underlying machinery works. Understanding basic mathematical models for the theoretical throughput of software and hardware helps us take the step from being alchemists to scientists, as we endeavour to write high-performance systems.

Martin takes us for an entertaining stroll through the evolution of modern processors, and some of the maths behind high performance systems. The good news is that systems which make sequential and predictable memory accesses are likely to find sympathy with modern hardware…

Marshall Lochbaum, the most recent addition to the core interpreter team at Dyalog, followed up with a talk on a number of his ideas for increasing the mechanical sympathy of Dyalog APL, to take maximum advantage of branch prediction and other features of modern processors. Some strategies take advantage of runtime inspection of the arguments, something that is more natural in an interpreter with the ability to dynamically select data types, as opposed to strongly typed strategies typically employed by compilers.


TamStat is an application which helps students Tame Statistics. In two talks at Dyalog’18, Stephen Mansour and Michael Baas focus on two different aspects of the user experience. In the first talk, Stephen focuses on the notation available to users of TamStat. Where many statistical libraries contain dozens of strangely named functions with a variety of switches and parameters, TamStat uses a small set of functions, combined with another small set of operators, to provide a very simple but extremely elegant notation for computing probabilities based on a wide variety of distributions. For example:

⍝ Probability that 7 coin flips (0.5 specifying a "fair" coin) will result 
⍝ in at least 3 heads:
7 0.5 binomial probability ≥ 3
⍝ Probability that a number from a normal distribution with a mean of 0 and 
⍝ standard deviation of 1 will be ≤ 3:
0 1   normal   probability ≤ 3

I almost wish I could go back to University and start Statistics 101 again 😊.


Notation is a powerful tool of thought, but graphs make it easier to visualise the results. Following Stephen’s talk, Michael Baas describes work that Dyalog is doing in collaboration with Stephen, with the goal of wrapping TamStat in a modern, HTML/JavaScript-based frontend. The current TamStat user interface is based on the ⎕WC (Window Create) system function and is therefore restricted to running on Microsoft Windows, while many of Stephen’s students use Mac or Linux laptops. The new interface also makes it possible to run TamStat as a web-based service, with a web site. We expect that this work will make TamStat accessible to a much wider audience.


Dyalog ’18 Videos, Week 2

Each week until early January, we will be releasing a selection of recordings of presentations from Dyalog’18, which was held in Belfast at the end of October 2018. Last week we kicked off with the opening keynote talks and the prize ceremony and acceptance speech by the winner of our annual problem-solving competition.

Just under half of the presentations at Dyalog User Meetings are by users who have volunteered – or sometimes been commandeered – to share stories about how they have used APL for fun or profit. These user stories provide significant motivation to the Dyalog team for future direction.

Aaron Hsu’s talk on “High Performance Tree Wrangling, the APL Way” is a pearl. Back in 2015 I gave a talk at Google on APL. One of the Google engineers asked about working with trees in APL and I was unable to give him a useful answer. Aaron is working on a compiler for APL, and trees that represent the code that is being compiled are his most important type of data structure.

In this talk Aaron demonstrates that APL is an elegant – and highly efficient – notation for working with trees, if you just pick the right representation!


Most of the talks at Dyalog User Meetings are fairly technical. The subject at the core of Ilaria Piccirilli’s talk – the fair pricing of financial instruments and subsequent evaluation of portfolios – is no exception. Mercifully, Ilaria spares us the details of the calculations – as she dryly notes, there is no “Fair Pricing for Dummies”. Instead, she offers humorous insights into the way her team used APL to deal with the explosion of computations required by regular additions to legislation requiring health checks – and the day that negative interest pulled the rug out from under most standard pricing calculations.

The other, slightly larger half of the talks at Dyalog User Meetings are by members of the Dyalog Team, talking about work that has recently been done on our products or presenting designs for future enhancements.


Adám Brudzewsky’s talk, titled Array Notation Mk III, is about a potential future extension to the APL language which will make it possible to describe arrays of high rank, or with deeply nested structure, easily and clearly, without using APL primitives to “construct” them, as is common practice today. In addition to making application code easier to read and write, a literal notation for data structures will make it easy to use text files to describe data structures that are essentially part of the source code of an application and should be managed by a source code management system. As the name suggests, this work has been ongoing for some time, with the initial inspiration coming from a user presentation by Phil Last back at Dyalog ’15 in Sicily. Watch the presentation and give us feedback on whether you think this idea is now sufficiently baked to become part of Dyalog APL, or whether we’ll need a “Mk IV” talk next year!
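To get a feel for why a literal notation is attractive, here is a small illustration of current practice (toy data; it shows only how such arrays are constructed with primitives today, not the proposed syntax itself):

   cube ← 2 3 4⍴⍳24                         ⍝ a rank-3 array must be built from a flat vector
   rec  ← 'Jane' (2 2⍴1 2 3 4) (⊂'notes')   ⍝ nesting is expressed with ⍴, ⊂ and parentheses

A literal notation would allow values like these to be written out directly, over several lines if necessary, in a form that sits naturally in a text file under source management.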


With the growth in usage of Dyalog APL under macOS and Linux – especially in server or cloud environments – the Dyalog Remote Integrated Development Environment is becoming a “mainstream” tool, rather than the curiosity that it was during the first few years of development. Our partners at Optima Systems are developing RIDE on Dyalog’s behalf, and Gilgamesh Athoraya is now the lead developer. In his talk on “RIDE 4.1 and Next Generation Integrations”, Gil talks first about significant new features and performance improvements to RIDE in 4.1 – and then continues to talk about how components of the RIDE technology may be re-purposed to provide APL add-ins for popular development frameworks like the new Microsoft VS Code.


Diane’s Lasagne Problem

Making Lasagne

Participants in the SA2 Performance Tuning workshop at the Dyalog ’18 User Meeting were encouraged to bring their own problems for the group to work on. Diane Hymas of ExxonMobil brought a good one. The one-liner computation is as follows:

lasagne0 ← {groups {+⌿⍵}⌸ amts ×[⎕io] spices[inds;]}

where

   n      ← 8e5                 ⍝ number of observations
   spices ← ?6000 44⍴0          ⍝ 6000×44 matrix of random floats in (0,1)
   groups ← +\(16↑1 2)[?n⍴16]   ⍝ non-decreasing group numbers (about 1e5 distinct groups)
   inds   ← ?n⍴≢spices          ⍝ for each observation, a random row index into spices
   amts   ← ?n⍴0                ⍝ for each observation, a random amount

Applying lasagne0 to this dataset:

   ⍴ lasagne0 ⍬
100015 44
   ≢ ∪ groups
100015

   )copy dfns wsreq cmpx

   wsreq 'lasagne0 ⍬'
844799820
   cmpx  'lasagne0 ⍬'
2.12E¯1

The problem with lasagne0 is space rather than time. The 845 MB required for this dataset may be acceptable, but we can be called upon to cook up large batches of lasagne in a smallish workspace, on a machine with limited RAM. (Large n and large ≢∪groups.)

All benchmarks in this document were run in Dyalog APL version 17.0, in a 2 GB workspace, on a machine with generous RAM.

Solutions

Marshall Lochbaum solved the problem. The alternative solutions are as follows:

lasagne0 ← {groups {+⌿⍵}⌸ amts ×[⎕io] spices[inds;]}
lasagne1 ← {↑ (groups{⊂⍵}⌸amts) {+⌿⍺×[⎕io]spices[⍵;]}¨ groups{⊂⍵}⌸inds}
lasagne2 ← {↑ (groups{⊂⍵}⌸amts)      {⍺+.×spices[⍵;]}¨ groups{⊂⍵}⌸inds}
lasagne3 ← {↑ {amts[⍵]+.×spices[inds[⍵];]}¨ {⊂⍵}⌸groups}

lasagne0 is the original expression; lasagne1 and lasagne2 were derived by Marshall during the workshop; lasagne3 was suggested by a participant in the workshop. The four functions produce matching results. Comparing the space and time:

          space (MB)  time
lasagne0  845         2.29e¯1  ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
lasagne1   74         3.60e¯1  ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
lasagne2   74         2.39e¯1  ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
lasagne3   74         2.93e¯1  ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

lasagne0 v  lasagne1

Nearly all of the space required to evaluate lasagne0 is accounted for by the space for computing the right argument to key:

   wsreq 'lasagne0 ⍬'
844799820

   wsreq 'amts ×[⎕io] spices[inds;]'
844799548

In fact, the array spices[inds;], all by itself, is already very large. It has shape (⍴inds),1↓⍴spices (≡ 8e5 44), each item requiring 8 bytes.

   wsreq 'spices[inds;]'
281599556

   ⍴ spices[inds;]
800000 44

   8 × ×/ ⍴ spices[inds;]
281600000

   qsize←{⎕size '⍵'}    ⍝ # bytes for array ⍵
   qsize spices[inds;]
281600040

lasagne1 avoids creating these large intermediate results by first partitioning the arguments (groups{⊂⍵}⌸amts and groups{⊂⍵}⌸inds) and then applying a computation to each partition. In that computation the operand function is {+⌿⍺×[⎕io]spices[⍵;]}, where ⍺ is a partition of amts and ⍵ is the corresponding partition of inds.
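The partitioning step can be seen on a toy example (arbitrary values), showing how {⊂⍵}⌸ groups the items of the right argument by the corresponding keys in the left argument:

   g ← 1 1 2 1 2
   a ← 10 20 30 40 50
   (g {⊂⍵}⌸ a) ≡ (10 20 40)(30 50)
1

lasagne1 applies the operand function to each such pair of partitions in turn, so the full 800000×44 intermediate matrix never needs to exist all at once.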

lasagne1, lasagne2 and lasagne3 require the same amount of space to run, so the comparison among them is on time.

lasagne1 v  lasagne2

The only change is from +⌿⍺×[⎕io]spices[⍵;] to ⍺+.×spices[⍵;], which are equivalent when ⍺ is a vector. The interpreter can compute +.× in one go rather than doing +⌿ separately after ×[⎕io]; it can and does exploit the additional information afforded by +.×, making lasagne2 faster than lasagne1 by a factor of 1.5 (= 2.39 ÷⍨ 3.60).
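The equivalence for a vector ⍺ is easy to check on a small example (arbitrary values):

   a ← 2 3 4
   M ← 3 5⍴⍳15
   (a +.× M) ≡ +⌿ a ×[⎕io] M
1

With +.×, no intermediate product matrix the size of M needs to be materialised.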

lasagne2 v  lasagne3

The idea in lasagne3 is to do one key operation rather than the two in lasagne2. The changes from lasagne2 to lasagne3 are:

          key operations                      indexing in the operand function
lasagne2  groups{⊂⍵}⌸amts  groups{⊂⍵}⌸inds    spices[⍵;]
lasagne3  {⊂⍵}⌸groups                         amts[⍵]  spices[inds[⍵];]

All three key operations involve {⊂⍵}⌸ with groups as the key, and are roughly equally fast, each taking up no more than 7% of the total time.

   cmpx 'groups{⊂⍵}⌸amts' 'groups{⊂⍵}⌸inds' '{⊂⍵}⌸groups'
  groups{⊂⍵}⌸amts → 1.69E¯2 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* groups{⊂⍵}⌸inds → 1.39E¯2 | -18% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⊂⍵}⌸groups     → 1.36E¯2 | -20% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

lasagne3 does one less key operation than lasagne2 in exchange for doing, in each of the ≢∪groups (= 100015) executions of the operand function, amts[⍵] rather than ⍺ and spices[inds[⍵];] rather than spices[⍵;]. Indexing is by no means slow, but it’s not as fast as references to ⍺ and ⍵. Therefore, lasagne2 is faster.

The trade-off may differ for different values of groups. In this case groups is a vector of small-range integers, so operations using it as the key are fast.

Welcome to the Dyalog ’18 Videos!

Three weeks have gone by since we waved goodbye to the last Dyalog ’18 delegates in Belfast. We’ve had time to catch up on sleep, and half of us have had colds and recovered from them. Jason Rivers and Richard Park have started mixing and improving the audio and video recordings, and we are ready to release the first group of processed videos.


Our plan is to release batches of 3-5 videos, with enough variety for everyone to find at least one topic of interest each week. We have not reviewed all of the material yet; there are always one or two where something went wrong and we are unable to publish the recordings (or the presenter asks that we refrain from making the talk public), but we do expect to be able to make the vast majority of the talks available over the next couple of months.

Each week, I’ll be doing my best to introduce each set with a blog entry: The first batch contains cleaned-up versions of the presentations that were streamed live from Belfast. The audio and video quality is significantly enhanced compared to the live stream, and the most confusing gaffes in my own live demo have been removed 😊.


As usual, the user meeting opened with the traditional trio of keynotes by Dyalog’s CEO Gitte Christensen, CXO Morten Kromberg (that’s me) and CTO Jay Foad. Gitte introduces a couple of new faces at Dyalog, and the contest winners, so everyone can plan to buy the winners drinks during the week. Gitte then discusses high-level direction – announcing our intention to make the Linux version available for download, and to include it in public Docker containers and cloud VM images, with no questions asked.


My own session mostly consists of a live demo of the potential consequences of making Linux licences really easy to get hold of. In an imaginary conversation with a data scientist, I demonstrate the use of Dyalog APL to implement an (admittedly silly) analytical function, and subsequently make it available as a web service and via a web site, finally deploying it to the cloud using a set of public Docker containers – without once installing Dyalog APL itself.


Jay Foad rounded Monday’s live stream off with a review of the features of the recently released version 17.0, before moving on to talk about the work that the development team is planning for versions 17.1 and 18.0, scheduled for the spring of 2019 and 2020, respectively.


In accordance with tradition, we also streamed the Prize Ceremony for the International Problem-Solving Competition and – often the most interesting talk of the year – the acceptance speech, in which this year’s winner talked about his code and the experience of learning APL. He did not let us down; it is amazing how quickly you can learn to write really, really good APL code!


Dyalog ’17: Day 2 (Monday 11 September)

by Vibeke Ulmann

Focus on Dyalog APL the language – Monday 11th September 2017

Sunday is traditionally filled with workshops and hands-on experiences, so the first proper day of the annual user meeting is Monday – and this year was no different.

CEO, Gitte Christensen opened the meeting and emphasised a few of the major new things that have been achieved since last year, namely:

  • RIDE Version 4
  • Embedded HTML Rendering Engine and – as always –
  • Performance enhancements.

She highlighted the fact that Version 16.0 was the beginning of a tool chain for developing distributed applications – including Cloud computing.

The licence for the SyncFusion library has been renewed for another 5 years. So, for those working with SyncFusion, you will have the usual widgets for dashboarding and graphing to hand.

A new multi-platform developer licence is now available, allowing for development on all platforms: Windows, Linux, Mac and, soon, Android.

The Tools Group has been expanded, and they are producing more and more examples and templates.

Dyalog is now producing live content (outside of the user meeting) in the form of webcasts – currently one a month – and Podcasts are also planned.

CXO Morten Kromberg gave us a look at the next generation of APL.

Author’s note: if you are wondering what the X in CXO stands for, it is ‘Experience’.

Morten established that there is a general ‘climate change’ in the world of computing, especially now that cloud computing is ‘THERE’. This means that performance once again becomes key, as the true cost of cloud computing is measured in Watts – meaning CPU and memory consumption. So, if you can reduce the footprint of your application, you can reduce the costs of cloud hosting. Another point made was that cloud computing generally means Linux, as it uses less memory and, therefore, fewer Watts – while macOS and Android can also be considered UNIXes.

Morten focused on the demand for a new generation of APL developers and, more to the point, for managers who are comfortable using APL and APL programmers. Both groups have a number of needs: developers need a modern set of libraries to build upon and to be able to find them easily, for example using Git, whereas managers need test-driven development, source code management and continuous development cycles.

Not everyone is familiar with Git, and some are even a tad intimidated by it, but there is much good to be said for it: you can have a private area as well as a public, shared area. Dyalog currently has 25 public repositories on GitHub; more will follow over time.

CTO Jay Foad proceeded to outline how we can make some of the Dyalog dreams for the future come true.

Version 16.0 of Dyalog APL was released in June this year, and work on version 17.0 continues apace. Speculatively, Jay highlighted some of the key areas for version 17.0: scripting, language, performance, and bridges to Python, Julia, MATLAB and Haskell – plus more work on RIDE, GPUs (and Xeon Phi), portability and Android, the Cloud, shuffle testing and PQA.

The keynote speech before lunch was presented by Aaron Hsu from Indiana University (USA). Aaron went through how you can escape the beginner’s plateau when starting to work with APL.

The key takeaway for yours truly was that Aaron has observed two attitudes to APL:

1) Never in my life

2) Can’t imagine life without it

He also observed that there seems to be a ‘learning wall’ which we need to find a way to overcome.

A directory of best practices can give insight into why computer scientists, or those trained in traditional programming methods, often find APL jarring and difficult, whereas those with no prior training fall in love with APL and take to it like ducks to water.

Watch his presentation, in which he walks through 8 patterns he considers to be key for newbie APL programmers. We will announce when the presentation is available on Dyalog’s YouTube Channel later in the autumn.

After lunch, most of the afternoon was dedicated to looking deeper into some of the new feature/functionality and topics many APL programmers find of particular interest.

In the interest of enticing you to watch the presentations online when they’re posted on Dyalog’s YouTube Channel, this blog only touches the basics on a couple of the presentations.

John Scholes went through re-coding from a procedural to a denotative style, and showed us how pure functions open the way for code reduction. ‘Massaging’ the code was the new expression I came away with.

Roger Hui showed us how he has now managed to solve a 20-year-old problem: ‘Index-of’ on Multiple Floats. After initially establishing – to much hilarity – that the best way of solving the problem at hand is not to introduce it into your code in the first place, Roger proceeded to show how it can now be solved. I am intentionally not giving away what Roger’s 20-year-old ‘problem’ was; let me just briefly mention that it has to do with X and Y.

The afternoon was rounded off with a user presentation by Kostas Blekos from the University of Patras (Greece), where a group of physicists have used APL for the research they did for a paper.

His initial premise was that Physicists + Programming = Disaster. On the other hand, physicists need to do a lot of programming, so when they were developing the basis for the paper, they wanted to find a (new) language that made it easier to do better (and faster) prototyping.

Much of Kostas’ and his colleagues’ previous work had been done in FORTRAN and, as he said, they needed something a bit easier to work with – the choice was Dyalog APL. Apart from the ability to do fast prototyping, the terseness of the language was attractive, as was its close relationship to mathematical notation, which made it easy to understand.

What they learned was that APL is GREAT: suitable for fast prototyping and for avoiding mistakes. The quote of the day surely must be:

In FORTRAN I could spend a whole day trying to find a missing comma…

Asked if there were any downsides to APL, Kostas said no, not really, except it is difficult to convince people to use it.