Dyalog ’22 Day 4: Celebrations

Links to recordings from this day of the user meeting are at the bottom of this post.

The third day of presentations kicked off with Brian Becker running us through the gauntlet of setting up and deploying web services. Initial setup with Jarvis and Docker containers seems like an absolute breeze; however, the later stages of configuring cloud services can be more fiddly.

Brian Becker talks about creating and deploying web services.

Stephen Mansour of Misericordia University then gave us the hot tips for producing hot tubs. His new system TAMPA (Taming Mathematical Programming in APL) can be used to optimise some decision (e.g. how many hot tubs to produce) for some objective function (e.g. maximise profit) subject to some constraints (e.g. resources available). TAMPA’s use of APL allows a near 1-to-1 translation of linear programming expressions into executable code.
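
TAMPA itself is written in APL, but the shape of the problem it tames might be clearer with a concrete model. Here is a rough sketch of a hot-tub-style linear program stated in Python with SciPy; the numbers, and the use of SciPy rather than TAMPA’s notation, are purely illustrative.

```python
# Not TAMPA (which is APL): a rough sketch, using SciPy, of the kind of
# linear program TAMPA expresses. All numbers here are made up.
from scipy.optimize import linprog

# Decision: how many of two hot tub models to produce.
# Objective: maximise profit (linprog minimises, so negate the profits).
profit = [-350, -300]

# Constraints: limited pumps, labour hours, and feet of tubing.
A_ub = [
    [1, 1],    # pumps per tub;    200 pumps available
    [9, 6],    # labour per tub;  1566 hours available
    [12, 16],  # tubing per tub;  2880 feet available
]
b_ub = [200, 1566, 2880]

result = linprog(profit, A_ub=A_ub, b_ub=b_ub)  # x >= 0 by default
print(result.x, -result.fun)  # optimal plan and the maximised profit
```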

Stephen Mansour explains the TAMPA mathematical programming framework.

We then got to hear some strong but convincing opinions about graphical user interfaces from Norbert Jurkiewicz, who told us how The Carlisle Group has been incrementally integrating the HTMLRenderer and “the triad” of HTML, CSS and JavaScript into their systems. He championed web technologies for graphical front ends, both for their portability and for the ease of hiring external developers to work on them.

Norbert Jurkiewicz gives his views on using the web stack for front end development.

Neither Gitte Christensen nor Brian Becker is shy about saying that some of their favourite parts of every user meeting are the competition presentations. Luckily for us, both this year’s student winner and professional winner came to share their experiences of participating.

Professional winner Michael Higginson had actually been a kdb+ and q programmer for many years before recently deciding to expand his horizons with APL. He gave a fantastic breakdown of his thought process in solving first an easier problem, which he found built his confidence, and then the notorious Problem 6 on interpreting human-readable dates and times.

Michael Higginson takes us through his array programming journey.

The audience could definitely empathise with all of the reasons given by student winner Tzu-Ching Lee for why he likes APL: the glyphs; the concise syntax; operators; and algorithms as primitives. Alongside his excellent walkthroughs of two of his solutions, he shared brilliant ideas for extending the problem description of the Base85 encoding/decoding problem, having noticed additional symmetry that could be expressed in his solutions.

Student winner Tzu-Ching Lee presents his winning solutions to the APL Problem Solving Competition.

In the afternoon, we took a coach about an hour away to the Quinta dos Vales vineyards and winery. We were treated to a tour of the winery, learning about the fermentation process; what goes into deciding whether to make single-grape or blended wines; and the use of wooden barrels to impart additional flavour. Afterwards, we were split into teams and challenged to make our best and favourite blends of wine from three grapes. According to the judges, a majority of Cabernet Sauvignon with about a third Aragonês and just ten percent Touriga Nacional makes for the most delicious blend of tannins and spices. Later that evening, we enjoyed a delicious Portuguese churrasco – or barbecue.

Delegates enjoy the afternoon sun at Quinta dos Vales winery in the Algarve, Portugal.

Congratulations to the winners of the APL Problem Solving Competition, and congratulations also to the winners of the wine blending competition!

Today’s presentations (links to recordings will be added as they become available):

Dyalog ’22 Day 3: Automation, architecture and performance

Links to recordings from this day of the user meeting are at the bottom of this post.

Automation, architecture and performance were throughlines of the second day of presentations. Lars Stampe Villadsen from SimCorp A/S provided some advice on how to write tests and gave a live demonstration of a small testing framework together with continuous integration tools which run the test suite every time a change is committed to the project – useful for when we forget to run some tests locally.

Lars Stampe Villadsen talks about testing.

Norbert Jurkiewicz presented his “10,000 ft overview” of automation processes used by The Carlisle Group and described some of the complexities of building distributables according to varying requirements for different customers. He discussed some of the cost considerations to be made when using Amazon Web Services (or any cloud computing service) to leverage lots of computing power to do builds quickly.

Norbert Jurkiewicz discusses automation architecture.

This was followed by Michael Baas with a similar issue viewed from a different angle. He talked about using the ]DTest testing framework to produce code coverage reports – showing which lines of code had actually been executed when running a test suite – and also automating testing across different platforms and versions of the Dyalog interpreter.

Michael Baas shows us the ]DTest framework.

Changing tack, we heard the story of getting to grips with semi-global variables in a multithreaded application. Elena Pavarotti of SimCorp Italiana talked about her experience trying to imagine clearly what a complex system is doing so that it is easier to reliably refactor and adapt it to talk to new external systems.

Elena Pavarotti on managing complexity in SimCorp Sofia.

Rodrigo Girão Serrão then told us how he and Aaron Hsu had implemented a U-Net Convolutional Neural Network from scratch in APL. The comparisons between their implementation and industry-standard libraries were very interesting, and the mapping from diagrams to code showed us how APL becomes a natural way to express the data flow involved in the system.

Rodrigo Girão Serrão walks us through the U-Net CNN in APL.

Delving deeper into more intellectual musings, Justin Dowdy of Semantic Arts – also known for his work on the April APL to Lisp compiler and the May bridge between Dyalog and Clojure – drew some interesting analogies between the Resource Description Framework, used for representing data relationships as ontologies, and points raised in Iverson’s paper Notation as a Tool of Thought.

Justin Dowdy compares Notation as a Tool of Thought and concepts of Data Semantics.

Juuso Haavisto is a DPhil student at the University of Oxford. He brought up some hot topics in computer science academia – static analysis, rank polymorphism and scheduling for multi-core systems – that he believes can be tackled effectively if we can learn to make the computer think a bit more like an APLer.

Juuso Haavisto with three hot topics in computer science academia.

The theme of performance continued when Veli-Matti Jantunen from Statistics Finland compared the performance of some short APL phrases across several versions of Dyalog. Of course, this can be important in an application like PxEdit, which processes thousands of text files; a fraction of a second of difference processing a single file can add up to several minutes across a job.

Veli-Matti Jantunen on Dyalog performance across versions.

We hope Veli-Matti will forgive us for the performance regressions found in v18.2 (although it is not slower than v17.1). For us, however, this was no surprise; next, Karta Kooner reflected on some of the assumptions that were made as changes intended to improve performance were implemented over the lifetime of Dyalog APL. He is looking for volunteers to run a version of the interpreter that can gather usage statistics, so please get in touch if you can do this.

Karta Kooner analyses performance in the interpreter.

Presenting the final features of his special Dyalog ’22 Conference Edition of the interpreter, John Daintree talked about the current state of asynchronous programming in the interpreter. Eventually he showed us an interface, in the form of an ⎕AWAIT system function, to unify Spawn (F&⍵) threads, .NET Tasks, and Futures and Isolates.

John Daintree with some ideas about asynchronous programming.

Finally, Aaron Hsu’s Co-dfns report showed a new public API for the parser, which could be useful for static analysis of existing code and better error reporting within Co-dfns. The new code generator written in APL opens the door to targeting more platforms more easily in the future.

Aaron Hsu presents an update on Co-dfns.

Today’s presentations (links to recordings will be added as they become available):

Dyalog ’22 Day 2: Welcome Back to User Meetings

Links to recordings from this day of the user meeting are at the bottom of this post.

We were welcomed to the second day of the user meeting by managing director (CEO) Gitte Christensen. Everybody could appreciate the emotion and sentiment when she noted the distinct “APL Hum” apparent in the atmosphere at this year’s event.

Managing Director Gitte Christensen welcomes us to the user meeting.

In some exciting news, Stine Kromberg announced that Dyalog’s board of directors have approved an APL fund for education and science. She also made a call to action for members of the APL community to present their stories to the wider world.

Stine Kromberg announces the APL fund.

Morten Kromberg’s road map provided an overview of the multitude of projects and requirements that Dyalog is to address in the upcoming years: recruiting new APLers to replace the career programmers leaving the fold, developing tools and training resources that can help all users navigate the modern, complex world of interconnected software development, and making the user experience more pleasant and consistent across different platforms.

Morten Kromberg presents the roadmap of activity at Dyalog.

Morten later returned to speak about the status of project and package managers for Dyalog. While Carlisle Group’s Dado is an excellent recommendation for those who can adopt its tools and workflow, Dyalog Ltd has enlisted the help of APLTeam to develop the more agnostic Tatin package manager and Cider project manager, in the hope of providing a more flexible and extensible set of tools to meet a wider range of needs.

In a double bill of talks from John Daintree, we were shown a special 2022 Conference Edition of the Dyalog interpreter with features stemming from internal discussions and projects as well as our users’ requests and ideas. He reiterated some of the complications of input and output in Dyalog, but showed how the conference edition can more intelligently handle output from various sources – for example, error messages, quad output (⎕←message) and output from system commands.

In his second talk, he demonstrated a system for piece-by-piece debugging of inline APL code and functions. We think you will agree that being able to step through the evaluation of a single line of APL has massive implications for teaching, writing and understanding APL.

John Daintree demonstrates inline “token by token” debugging.

Charles Brenner from DNA-View took us on a journey of his discovery of a method for performing numerical integration in high-dimensional spaces – a kind of Riemann summation (analogous to the rectangle or trapezium methods) based on simplexes (the multidimensional generalisation of triangles and tetrahedra).

Charles Brenner talks about numerical integration.

Gilgamesh Athoraya presented a recent project by Tiamatica AB to migrate a process planning system from mainframe APL2 to Dyalog in the cloud. This large undertaking began with creating a bridge from Dyalog to APL2, and the challenge continues as code must be migrated to Dyalog while folding in updates made by the APL2 team, who are still maintaining the current software.

Gilgamesh Athoraya talks about migrating APL2 mainframe to Dyalog in the cloud.

In another APL2 conversion story, Mark Wolfson of BIG provided some fascinating insights into the world of jewellers in North America. Many of these businesses have managed to resist the trend of becoming part of large conglomerates and have stayed small, often family-owned, businesses. This means that they usually don’t have sophisticated software for managing their sales and inventories. Mark’s system uses APL to ingest data in an incredible variety of formats and provide insights that help these businesses thrive.

Mark Wolfson gives insight into the world of North American jewellery retailers.

Unfortunately, Kimmo Linna from Finnair could not join us in person due to his obligations as a pilot. Fortunately for us, his time away from the cockpit during the pandemic led to him developing useful tools and workflows that he was able to show us remotely via video feed. He is using DuckDB to store data and Vega-lite to present it visually, all being driven from Dyalog Jupyter notebooks and connected using bridges that he has published and made available for free on GitHub.

Kimmo Linna presents his Dyalog/DuckDB/Vega-lite/Jupyter workflow.

To finish the day, Peter Mikkelsen, a new recruit on the development team, talked about a personal project from before he joined Dyalog: an APL implementation for the Plan 9 research operating system. As he explained, it may lack performance, documentation and even users, but we could see from his approach and language design choices that he is someone who can bring great ideas to the interpreter development team.

Peter Mikkelsen shows message passing in APL9.

Today’s presentations:

Dyalog ’22 Day 1: Welcome to Sunny Olhão!

Arriving to meet the warm sea air was already a refreshing change, as this year’s long, hot summer began to close in the weeks leading up to the user meeting. Of course, for some with less far to travel, the warmth is a familiar comfort.

Apartments at the Real Marina in Olhão, Portugal

As delegates arrived on Sunday night we were treated to a lovely surprise – a birthday cake to celebrate Tony Corso’s birthday! Happy Birthday, Tony!

Today we kicked off the user meeting with workshops, giving delegates hands-on experience with a range of Dyalog offerings for application development.

Attendees of Rich Park and Rodrigo Girão Serrão’s language workshops explored different ways to express ideas in APL, using tried and true idioms in the morning and newer language features in the afternoon – they compared expressiveness and performance implications in different circumstances.

Morten and Josh got participants up to speed with using and maintaining code which lives outside of your workspace in text files, while Richard Smith and Bjørn Christensen showed how to manage and interact with data from outside of the workspace.

Brian Becker guided delegates into the world of SaaS (Software as a Service). It is encouraging to see how straightforward having APL code talk to the outside world can be. As usual, however, the complexities reveal themselves as you delve deeper into specific use cases and circumstances.

Morten and Josh show how to store source code as text files using Link

Without a doubt, getting to meet our users face to face has already proved to be the greatest enjoyment of the meeting so far – some familiar faces long since last seen, and others seen only over a screen. Discussing ideas in person retains a value that is impossible to quantify.

Overall, a fantastic beginning to the week – we cannot wait to experience the rest of it!

Maintaining Py’n’APL Part 2: APL Arrays, Python Objects, and JSON

As part of the bigger, overarching refactoring goal of making Py’n’APL great again, I refactored some of the code that deals with sending data from Python to APL and receiving data from APL into Python. In this blog post, I will describe – to the best of my abilities – how that part of the code works, how it differs from what was in place, and why those changes were made.

The starting point for this blog post is the commit b7d4749.

This blog post is mostly concerned with the files Array.py, ConversionInterface.py, and ObjectWrapper.py (these were the original file names before I tore them apart and moved things around). It does not make much sense to list where all the things went, but you can use GitHub’s compare feature to compare the starting commit for this blog post with the “final” commit for this blog post.

State of Affairs

If you are going to refactor a working piece of code, the first thing you need to do is to make sure that you know what the code is doing! This will help to ensure that your refactoring does not break the functionality of the code. With that in mind, I started working my way through the code.

I started by looking at the file ConversionInterface.py and the two classes Sendable and Receivable that were defined in there. By reading the comments, I understood that these two classes were defining the “conversion interface”. In this context, the word “interface” has approximately the Java meaning of interface: it defines a set of methods that the classes that inherit from these base classes have to implement. For the class Sendable, there are two methods toJSONDict and toJSONString; and for the class Receivable, there is one method to_python.
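
As a minimal sketch (reconstructed from the method names above, so the details may not match ConversionInterface.py exactly), the interface amounts to something like this:

```python
# A minimal sketch of the "conversion interface"; the real classes live in
# ConversionInterface.py and may differ in their details.
import json


class Sendable:
    """Things that can be serialised and sent to the other side."""

    def toJSONDict(self):
        """Return a dict that json.dumps can handle directly."""
        raise NotImplementedError

    def toJSONString(self):
        """Serialise by round-tripping through toJSONDict."""
        return json.dumps(self.toJSONDict())


class Receivable:
    """Things received from the other side, convertible to built-ins."""

    def to_python(self):
        """Return the closest built-in Python value."""
        raise NotImplementedError
```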

Even though I had just started, I already had a couple of questions:

  1. Do the names Sendable and Receivable mean that these objects will be sent to/received from APL or from Python respectively?
  2. Why is there a comment next to the definition of Sendable that says that classes that implement a method from_python will inherit from Sendable? Is that a comment that became a lie as the code evolved? If not, why isn’t there a stub for that method in the class itself?

The more I pondered on these questions, the more I started to think that the “conversion interface” isn’t necessarily about the sending to/receiving from APL, but rather the conversion of built-in Python types to helper classes like APLArray or APLNamespace (from the file Array.py) and back. So, it might be that Sendable and Receivable are supposed to be base classes for these helper classes, telling us which ones can be converted to/from built-in Python types. I needed to solve this conundrum before I could prepare these two base classes and use Python mechanisms to enforce these “interfaces”.

What the Interface Really Means

After playing around with the code a bit more, I felt more confident that Sendable should be inherited by classes that represent things that can be sent to APL and Receivable represents things that can be received from APL. However, it must be noted that Py’n’APL doesn’t send Python built-in types directly to APL. Whenever we want to send something to APL, Py’n’APL first converts it to the suitable intermediate (Python) class. For example, lists and tuples are converted to APLArray, and dictionaries are converted to APLNamespace.

If an APLArray instance is supposed to be sendable to APL, we must first be able to build it from the corresponding Python built-in types, and that is why almost all Sendable subclasses also implement a method from_python. Looking at it from the other end of the connection, Receivable instances come from APL and Py’n’APL starts by taking the JSON and converting it into the appropriate APLArray instances, APLNamespace instances, etc. Only then can we convert those intermediate representations to Python, and that is why all Receivable subclasses come with a method to_python. In addition, those Receivable instances come from APL as JSON, so we need to be able to instantiate them from JSON. That is why Receivable subclasses also implement a method fromJSONString, although that is not defined in the Receivable interface.

So, we have established that APL needs to know how to make sense of Python’s objects and Python needs to know how to make sense of APL’s arrays. (In Python, everything is an object, and in APL, everything is an array. In less precise – but maybe clearer – words, Python needs to be able to handle whatever APL passes to it, and APL needs to be able to handle whatever Python passes to it.) To implement this, we need to determine how Python objects map to APL arrays and how APL arrays map to Python objects. This is not trivial, otherwise I wouldn’t be writing about it! Here are two simple examples showing why this is not trivial:

  • Python does not have native support for arrays of arbitrary rank.
  • APL does not have a key-value mapping type like Python’s dict.

To solve the issues around Python and APL not having exactly the same type of data, we create lossless intermediate representations in both host languages. For example, Python needs to have an intermediate representation for APL arrays so that we can preserve rank information in Python. When possible, intermediate representations should know how to convert into the closest value in the host language. For example, the Python intermediate representation of a high-rank APL array should know how to convert itself into a Python list.
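
For illustration, here is a minimal sketch of such an intermediate representation – not the actual APLArray class, just the idea of keeping the shape alongside the ravel and converting to a nested list on demand:

```python
# Illustrative sketch (not the actual Py'n'APL class): an intermediate
# representation that keeps the rank information a plain Python list loses.
from functools import reduce


class RankedArray:
    def __init__(self, shape, data):
        # 'shape' is a list of dimension lengths, e.g. [2, 3] for a 2x3
        # matrix; 'data' is the ravel (flat list of items) of the array.
        assert len(data) == reduce(lambda a, b: a * b, shape, 1)
        self.shape = shape
        self.data = data

    def to_python(self):
        """Convert to the closest built-in value: a (nested) list."""
        def split(flat, shape):
            if not shape:        # rank 0: a scalar
                return flat[0]
            if len(shape) == 1:  # rank 1: a flat list
                return list(flat)
            step = len(flat) // shape[0]
            return [split(flat[i * step:(i + 1) * step], shape[1:])
                    for i in range(shape[0])]
        return split(self.data, self.shape)


# A 2x3 APL array arrives as shape [2, 3] and ravel [1..6]:
print(RankedArray([2, 3], [1, 2, 3, 4, 5, 6]).to_python())
# [[1, 2, 3], [4, 5, 6]]
```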

I began by looking at the handling of APL arrays and namespaces. These are the conversions that need to be in place:

  • APL arrays ←→ Python lists
  • APL arrays ← arbitrary Python iterables
  • APL namespaces ←→ Python dictionaries

When sending data from the Python side, it first needs to be converted into an instance of the appropriate APLProxy subclass. For example, a dictionary will be converted into an instance of APLNamespace. That object is converted to JSON, which is then sent to APL. APL receives the JSON and looks for a special field __extended_json_type__, which identifies the type of object. In this example, that is "APLNamespace". APL then uses that information to decode the JSON data into the appropriate thing (a namespace in this example).

When sending data from the APL side, a similar thing happens. First, the object is converted into a namespace that ⎕JSON knows how to handle. For example, an array becomes a namespace with attributes shape (the shape of the original array) and data (the ravel of the original array); the namespace is tagged with an attribute __extended_json_type__, which is a simple character vector informing Python what the object is. That namespace gets converted to JSON with ⎕JSON, and the JSON is sent to Python. Python receives the JSON and decodes it into a Python dictionary. Python then uses __extended_json_type__ to determine the actual object that the dictionary represents (an array, in our example) and uses the information available to build an instance of the appropriate APLProxy subclass (APLArray in this example).
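
To make the tagging scheme concrete, here is a small Python sketch of both directions. The field name __extended_json_type__ is the real one; the helper functions and the exact JSON layout are my own simplification:

```python
# A simplified sketch of the tag-and-dispatch scheme described above.
import json


def encode_namespace(d):
    """Turn a Python dict into tagged JSON for the APL side."""
    return json.dumps({"__extended_json_type__": "APLNamespace", "ns": d})


def decode(json_text):
    """Dispatch on the tag to rebuild the appropriate proxy object."""
    obj = json.loads(json_text)
    tag = obj.get("__extended_json_type__")
    if tag == "APLNamespace":
        return obj["ns"]                    # would build an APLNamespace
    if tag == "APLArray":
        return (obj["shape"], obj["data"])  # would build an APLArray
    raise ValueError(f"unknown __extended_json_type__: {tag!r}")


print(decode(encode_namespace({"name": "Dyalog", "year": 2022})))
# {'name': 'Dyalog', 'year': 2022}
```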

GitHub commit 40523b9 shows an initial implementation of the APL code that takes APL arrays and namespaces and converts them into namespaces that ⎕JSON can handle and that Python knows how to interpret. This commit also shows the APL code for the reverse operation. For now, this APL code lives in the file Proxies.apln and the respective Python code lives in the file proxies.py. Everything is ready for me to hook this into the Py’n’APL machinery so that Py’n’APL uses this mechanism to pass data around…but that’s for another blog post!

Summary of Changes

GitHub’s compare feature shows all the changes I made since the commit that was the starting point for this post. The most notable changes are:

  • Moving the contents of ConversionInterface.py and ObjectWrapper.py into Array.py.
  • Adding the file proxies.py that will have the Python code to deal with the JSON and conversions, which will end up replacing most of the code I mentioned in the previous bullet point.
  • Adding the file Proxies.apln that will have the APL code to deal with the JSON and conversions, which will end up replacing a chunk of code that currently lives in Py.dyalog, which is a huge file with almost all of the Py’n’APL APL code.

Blog posts in this series:

Maintaining Py’n’APL Part 1: The Beginning

Py’n’APL is an interface between APL and Python that allows you to run Python code from within APL and APL code from within Python. This interface was originally developed by Dyalog Ltd intern Marinus Oosters, who presented it in a webinar and at Dyalog ’17. I subsequently talked about Py’n’APL at Dyalog ’21, where I promised to update it and make it into an awesome and robust tool.

I’ve now stared at Py’n’APL’s code base for longer than I’m proud to admit, but without any proper goals or some basic project management, this has been as effective in cleaning it up as a Magikarp’s Splash – in other words, it has had no effect.

For that reason, and in another attempt to take up the maintenance of Py’n’APL, I have decided to start blogging about my progress. This will be a way for me to share with the world what it feels like to take up the maintenance of a project that you aren’t necessarily very familiar with.

(By the way, Py’n’APL is open source and has a very permissive licence. This means that, like me, you can also stare at the source code; it also means that you can go to GitHub, star the project, fork it, and play around with it!)

Tasks

There are some obvious tasks that I need to do, like testing Py’n’APL thoroughly. This will help make Py’n’APL more robust, it will certainly uncover bugs, and it will help me to document Py’n’APL’s capabilities. The Python side will be tested with pytest and the APL side will be tested with CITA, a Continuous Integration Tool for APL.

The code base also needs to be updated. Py’n’APL currently supports Python 2 up to Python 3.5. At the time of writing this blog post, Python 2 reached end of life more than two years ago and Python 3.7 reaches end of life in a couple of months. In other words, there is no overlap between the Python versions originally supported and the Python versions that an application should currently support. In addition, Dyalog has progressed from v16.0 to v18.2, and the new tools available in the later versions are also likely to be useful.

Another big thing that should be done (and that would pay high dividends) is to update the project management of the Python part of Py’n’APL. By using the appropriate tooling, we make it easier to clone the (open source) repository so that others can poke around, play with it, modify it, and/or contribute.

The First Commits

Let GitHub commit 4283176f4ffd7f1067f216c1459306cdbc49189a be the starting point of my documented journey. At this point in time, I have two handfuls of commits on the branch master that fixed a (simple) issue with a Python import and added the usage examples I showed at Dyalog ’21. So, what will my first commits look like?

Setting up Poetry

The first thing I decided to do was to set up Poetry to manage the packaging and dependencies of the Python side of the code. By using Poetry, isolating whatever I do to/with the Python code from all the other (Python) things I have on my computer becomes trivial, and it makes it very easy to install the package pynapl on my machine.

Auto-Formatting the Source Code

Another thing that I did was to use black (which I added as a development dependency to Poetry) to auto-format all the Python code in the repository. I imagine that this might sound surprising if you come from a different world! But if you look at the commit in question, you will see that although that commit was a big one, the changes were only at the level of the structure of the source code; by using a tool like black, I can play with a code base that is consistently formatted and – most importantly – that is formatted like every other Python project I have taken a look at. This consistency in the Python world makes it easier to read code, because the structure of the code on the page is always the same. This means that there is one less thing for my brain to worry about, which my brain appreciates!

In a typical Python project using black, or any other formatter, the idea is that the formatter is used frequently so that the code always has that consistent formatting style; the idea is not to occasionally insert an artificial commit that is just auto-formatting.

Fixing (Star) Imports

The other major minor change that I made was fixing (star) imports across the Python source code. Star imports look like from module_name import * and are like )LOADing a whole workspace in APL – you will gain access to whatever is inside the workspace you loaded. In Python, star imports are typically discouraged because after a star import you have no idea what names you have available, nor do you know what comes from where, which can be confusing if you star imported multiple modules. Instead, if you need the tools foo and bar from the module module_name, you should import the module and use the tools as module_name.foo and module_name.bar, or import the specific names that you need: from module_name import foo, bar.
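
To illustrate with the standard-library math module:

```python
# Discouraged: after a star import it is unclear where names come from.
# from math import *
# print(sqrt(2))

# Preferred: qualify the names through the module...
import math
print(math.sqrt(2))

# ...or import exactly the names that are used:
from math import pi, sqrt
print(sqrt(2), pi)
```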

I therefore went through the Py’n’APL Python source code and eliminated all the star imports, replacing them with the specific imports that were needed. (OK, not quite all star imports; the tests still need to be reworked.) As well as fixing star imports, I also reordered the imports for consistency and removed imports that were no longer needed.

Python 2-Related Low-Hanging Fruit

To get started with my task of removing old Python 2 code, I decided to start with some basic trimming. For example, there were plenty of instances where the code included conditional assignments that depended on the major version of Python (2 or 3) and that were supposed to homogenise the code, making it look as much as possible like Python 3. I could remove those because I know we will be running Python 3. Another fairly basic and inconsequential change was removing the explicit inheritance from object when creating classes (this was needed in Python 2, but not in Python 3).
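
For a flavour of the kind of code that could be deleted (an illustrative reconstruction, not a verbatim excerpt from the Py’n’APL sources):

```python
# Illustrative reconstruction of Python 2 compatibility code that can go.
import sys

# A version-dependent conditional assignment: the first branch is dead
# once only Python 3 is supported, so the whole block collapses to one line.
if sys.version_info.major == 2:
    string_types = (str, unicode)  # noqa: F821 -- never runs on Python 3
else:
    string_types = (str,)

# Explicit inheritance from object, needed for new-style classes in
# Python 2 only...
class Connection(object):
    pass

# ...is equivalent to this in Python 3:
class Connection:
    pass
```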

Explicit Type Checking and Duck Typing

Python is a dynamically-typed language, and sometimes you might need to make use of duck typing to ensure that you are working with the right kind of objects. At Dyalog Ltd we are very fond of ducks, but duck typing is something else entirely:

If it walks like a duck and if it quacks like a duck then it must be a duck.

In other words, in Python we tend to care more about what an object can do (its methods) than about what the object is (its type). The Py’n’APL source code included many occurrences of the built-in type, which I went through and replaced with isinstance to implement better duck typing.
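
A small illustration of why isinstance plays better with duck typing than exact type comparisons (the subclass here is hypothetical, purely to show the difference):

```python
# A hypothetical dict subclass, standing in for a namespace-like object.
class APLNamespace(dict):
    pass

ns = APLNamespace(name="Dyalog")

print(type(ns) == dict)      # False: an exact type check rejects subclasses
print(isinstance(ns, dict))  # True: accepts anything that behaves as a dict
```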

What Happens Next?

These are some of the main changes that I have made so far; they happen to be mostly inconsequential and all on the Python side of the code. Of course, I won’t be able to maintain Py’n’APL by only making inconsequential changes, so more substantial changes will come next. I also need to take a look at the APL code and see what can and what needs to be done there. Although I haven’t looked at the APL code as much as at the Python code, I have a feeling that I will not need to make as many changes there. Fingers crossed!

This blog post covers (approximately) the changes included in this GitHub diff.