Response to Name Colouring for Dfns

This post contains comments to John Scholes’ post on name colouring; please continue to post any further comments with the original post.

This is a very interesting topic – as the discussion has already shown, there are many different needs. It seems to me that some of the suggestions that have been made are forms of highlighting that you would want to turn on briefly in order to search for something specific or highlight structures while editing code. For example:

  • to verify the structure (highlight the matching parenthesis or bracket, or rainbow colouring of parentheses/brackets)
  • highlight all uses of a particular name or small group of names.

I don’t think I would really want any of these enabled for any length of time; some of them can probably work very well if they are turned on and off as the cursor moves through the text (when on a parenthesis, highlight the other half of the pair; when on a name, highlight all uses of that name; and so on).

Making it easier to read statements through highlighting of “kinds” feels more like something that I might well want to have enabled all the time. However, I find colouring to be quite distracting – it breaks the flow for me. I would certainly want the colours to be as calm as possible for the most common kinds, getting more exciting for the “exotic ones”. So I would suggest:

 Array
 Function
 MonOp
 DyOp

For example:

   r←(foo dop 1 2 3) mat hoo mop vec

I find that syntax colouring is much more effective in a bold font:

   r←(foo dop 1 2 3) mat hoo mop vec 

But…looking at the above, I personally find the colours to be quite distracting; they make it hard for me to read the expression as a “sentence”. How about if we experiment with emphasis instead? For example, we could make functions italic, monadic operators bold, and dyadic operators both:

   r←(foo dop 1 2 3) mat hoo mop vec

This is very subjective of course, but this is much more readable to me than the coloured version.

Further comments with the original post please.

Name Colouring for Dfns

APL is sometimes criticised because expressions that include names cannot, in general, be parsed without knowing whether the names represent functions or variables. For example, the name thing in the expression thing⍳3 could reference an array (in which case the ⍳ is dyadic) or it could reference a function (making the ⍳ monadic).

An APL expression becomes completely parsable if we distinguish each name with one of four colours, depending on the “kind” of its referent: array, function, monadic operator, dyadic operator. Now, with a bit of practice, we can at least parse thing⍳3 vs thing⍳3 (the two coloured differently according to the kind of thing) without recourse to the definition of thing. Notice how kind differs slightly from APL’s name-class, which does not distinguish monadic from dyadic operators.

Names whose definitions are external to the code under consideration and so cannot be classified would be given a distinct colour, say red, which would at least draw attention to them. Colouring a number of interdependent functions at the same time should help with such issues.

Name-colouring can co-exist with the more superficial token-colouring (green for comments and so forth), though we would probably want to configure the two schemes separately.

There’s a related argument for colouring parentheses to show the kind of value they contain: (~∘(⊂'')). This would mean that we could always determine the valency of a function call, or the kind of a hybrid token, such as /, by looking at the colour of the single token immediately to its left. Finally, we should probably kind-colour curly braces: {⍵}, {⍺⍺ ⍵}, {⍵⍵ ⍵}.
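
As a small illustration of why the single token to the left settles a hybrid such as / (my example, not from the original post):

   1 0 1/3 4 5      ⍝ array to the left: / is replicate
3 5
   +/3 4 5          ⍝ function to the left: / is reduction
12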

Yeah but how?

In the following, where appropriate, “function” is a shorthand for “function or operator”.

Most generally, we want to process a number of definitions at the same time, so that inter-function references can be coloured. Such a set of functions may be considered as internal definitions in an anonymous outer function so, without loss of generality, we need consider only the case of a single multi-line nested function.

A function can be viewed as a tree, with nested subfunctions as subtrees. The definitions at each lexical level in the function comprise the information stored at each node of the tree.

The colouring process traverses the tree, accumulating a dictionary of (name kind) pairs at each level. At a particular level, definitions are processed in the same order as they would be executed, so that the (name kind) entries from earlier definitions are available to inform later expressions. Visiting each node before traversing its subtrees ensures that outer dictionary entries are available to lexically inner functions. Prefixing dictionary entries for inner functions ensures that a left-to-right search (à la dyadic iota) finds those names first, thus modelling lexical name-shadowing.
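
A tiny sketch of the left-to-right search (the dictionary entries here are invented; real entries would be (name kind) pairs):

   names←'xx' 'foo' 'xx'                ⍝ inner definitions prefixed to outer ones
   kinds←'array' 'function' 'function'
   ⊃kinds[names⍳⊂'xx']                  ⍝ leftmost match wins, modelling shadowing
array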

Each assignment expression adds one or more names to the dictionary. The kind of the assignment is inferred by looking at the defining expression to the right of the assignment arrow. This sounds heavy, but if we examine the expression from right-to-left then we can often stop after examining very few tokens. For example, an arbitrarily long expression ending in (… function array) must, if it is syntactically correct, reduce to an array value, while (… function function) must be a function (train). A sequence in braces {…} resolves to a function, monadic operator or dyadic operator, depending only on whether there are occurrences of ⍺⍺ or ⍵⍵ at its outermost level.
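
As a small illustration of the brace rule (my example; the names are invented):

   f←{⍵*2}          ⍝ no ⍺⍺ or ⍵⍵ at the outer level: a function
   m←{⍺⍺ ⍺⍺ ⍵}      ⍝ ⍺⍺ present: a monadic operator
   d←{⍺⍺ ⍵⍵ ⍵}      ⍝ ⍵⍵ present: a dyadic operator
   f 4
16
   -m 4             ⍝ the derived function applies its operand twice
4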

This approach, while relatively simple, is not perfect. It takes no account of the order in which names are defined relative to their being referenced. The assumption is that all definitions at an outer level are available to its inner levels. The following small function illustrates the problem:

    {
        M←{A}        ⍝ this A correctly coloured as a function
        N←{A}⍵       ⍝ this A should be coloured unclassified
        A←÷          ⍝ A defined as a function
        M N          ⍝ application of function to array
    }

It remains to be seen whether this would be a problem in practice or whether, if name-colouring proved popular, people might adjust their code to avoid such anomalies. The problem does not occur if we avoid re-using the same name for items of different kinds at the same lexical level – which IMHO is not a bad rule, anyway.

Implementation Status

I would like to experiment with name-kind colouring for the dfns code samples on the Dyalog website.

This project is under way, but has rather a low priority and so keeps stalling. In addition, I keep wavering between processing the tokens as a list with tail calling and the dictionary as an accumulating left argument, or in more classic APL style as a vector, developing parallel vectors for lexical depth and masks for assignment arrow positions and expression boundaries, etc.

In the meantime, here is an artist’s impression of what name- and bracket-coloured code might look like, with colours: array, function, dyadic-operator.

    eval←{
        stk I(op ops)←⍵
        op in⊃⍺:⍺ prt stk I((dref op)cat ops)
        f(fr rr)f rstk
        c←{op≡⍵~' '}
        c'dip ':⍺('*⌷'rgs{prt rr I(f cat fr ops)})
        ...

Notice how functions stand out from their arguments. For example, at the start of the third line: op in⊃⍺:, it is clear that in is a function taking ⊃⍺ (first of alpha) as right argument and op as left argument, rather than a monadic function op taking in⊃⍺ (in pick of alpha) as right argument. Other interpretations of the uncoloured expression op in⊃⍺: include:

    op in⊃⍺:    op in is a vector (array), so ⊃ is pick.
    op in⊃⍺:    both op and in are monadic function calls, so ⊃ is first.
    op in⊃⍺:    in is a monadic operator with array operand op.
    op in⊃⍺:    in is a dyadic operator with function operands op and ⊃.

I’m keen to hear any feedback and suggestions about name-colouring for dfns. What d’ya think?

A Dialog on APL

A discussion between Nicolas Delcros and Roger Hui

Nicolas, Prologue: From a language point of view, thanks to Ken Iverson, it is obvious that you want grade rather than sort as a primitive. Yet from a performance point of view, sort is currently faster than grade.

Can one be “more fundamental” than the other? If so, who’s wrong…APL or our CPUs? In any case, what does “fundamental” mean?

Roger: Sorting is faster than grading due to reasons presented in the J Wiki essay Sorting versus Grading, in the paragraph which begins “Is sorting necessarily faster than grading?” I cannot prove it in the mathematical sense, but I believe that to be the case on any CPU when the items to be sorted/graded are machine units.
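
For reference, sorting can be expressed in terms of grading, since sort is indexing composed with grade (a sketch; x is arbitrary):

      x←?1e6⍴1e6
      ∧/2≤/x[⍋x]     ⍝ indexing by the grade vector yields ascending order
1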

Nicolas, Formulation 1: Now, parsing ⍵[⍋⍵] (and not scanning the idiom) begs the question of how deeply an APL interpreter can “understand” what it’s doing to arrays.

How would an APL compiler resolve this conjunction in the parse tree? Do you simply have a bunch of state pointers such as “is the grade of” or “is sorted” or “is squozen” or “axis ordering” etc. walking along the tree? If so, do we have an idea of the number of such state pointers required to exhaustively describe what the APL language can do to arrays? If not, is there something more clever out there?

Roger: I don’t know of any general principles that can tell you what things can be faster. I do have two lists, one for J and another for Dyalog. A big part of the lists consists of compositions of functions, composition in the mathematical sense, that can be faster than doing the functions one after the other if you “recognize” what the composed function is doing and write a native implementation of it. Sort vs. grade is one example (sort is indexing composed with grade). Another one is (⍳∘1 >) or (1 ⍳⍨ >). The function is “find the first 1” composed with >. These compositions have native implementations and:

      x←?1e6⍴1e6

      cmpx '5e5(1 ⍳⍨ >)x' '(5e5>x)⍳1' 
5e5(1 ⍳⍨ >)x → 0.00E0  |       0%
(5e5>x)⍳1    → 1.06E¯2 | +272100% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

      cmpx '¯1(1 ⍳⍨ >)x' '(¯1>x)⍳1' 
¯1(1 ⍳⍨ >)x → 2.41E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
(¯1>x)⍳1    → 4.15E¯3 | +71% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

If you get a “hit” near the beginning, as would be the case with 5e5, you win big. Even if you have to go to the end (as with ¯1), you still save the cost of explicitly generating the Boolean vector and then scanning it to the end.

Another one, introduced in 14.1, is:

      cmpx '(≢∪)x' '≢∪x'
(≢∪)x → 4.43E¯3 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
≢∪x   → 1.14E¯2 | +157% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

This is the tally nub composition, used often in a customer application. If you “know” that you just want the tally of the nub (uniques), you don’t actually have to materialise the array for the nub.

I am not conversant with compiler technology so I don’t know what all an APL compiler can do. I do know that there’s a thing called “loop fusion” where, for example, in a+b×c÷2, it doesn’t have to go through a, b, c in separate loops, but can instead do

    :for i :in ⍳≢a ⋄ z[i]←a[i]+b[i]×c[i]÷2 ⋄ :endfor

saving on the array temp on every step. You win some with this, but I think the function composition approach wins bigger. On the other hand, I don’t know that there is a general technique for function composition. I mean, what general statements can you make about what things can be faster (AKA algebraically simpler)?

Nicolas: I sort of see…so jot is a “direct” conjunction. An indirect conjunction could be ⍵[⍺⍴⍋⍵] where the intermediate grade is reshaped. We “know” that grade and shape are “orthogonal” and can rewrite the formula as ⍺⍴⍵[⍋⍵].

So if we can establish a list of flags, and establish how primitives touch these flags, and how these flags affect each other, then we can extend to any path of the parse tree, provided that the intermediate nodes don’t destroy the relationship between the two operations (here grade + indexing provides sort, independently of the reshape).

Of course we can spend our lives finding such tricks. Or we could try and systemise it.

Roger: What you are saying above is that reshape and indexing commute (i.e. reshape indexing ←→ indexing reshape). More generally, compositions of the so-called structural functions and perhaps of the selection functions are amenable to optimisations. This would especially be the case if arrays use the “strided representation” described in Nick Nickolov’s paper Compiling APL to JavaScript in Vector. I used strided representation implicitly to code a terse model of ⍺⍉⍵ in 1987.
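
A quick check of the commutation (a sketch; the data and the length 10 are arbitrary):

      x←?20⍴100
      (x[10⍴⍋x]) ≡ 10⍴x[⍋x]     ⍝ reshape of indexing ←→ indexing of reshape
1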

Nicolas, Formulation 2: On which grounds did the guys in the ’50s manage to estimate the minimal list of operations that you needed to express data processing?

Roger: APL developed after Ken Iverson struggled with using conventional mathematical notation to teach various topics in data processing. You can get an idea of the process from the following papers:

In our own humble way, we go through a similar process: We talk to customers to find out what problems they are faced with, what things are still awkward, and think about what if anything we can do to the language or the implementation to make things better. Sometimes we come up with a winner, for example ⌸ (key). You know, the idea for ⍋ (grade) is that often you don’t just use ⍋x to order x (sort) but you use ⍋x to order something else. Similarly with ⍳, you often don’t want just x⍳y but you use it to apply a function to items with like indices. The J Wiki essay Key describes how the idea arose in applications, and then connected with something I read about, the Connection Machine, a machine with 64K processors (this was in the 1980s).
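
To make concrete the point about using ⍋x to order something else (my example; the names and ages are invented, and the boxed display assumes ]box on):

      ages←35 42 28
      names←'Carol' 'Alice' 'Bob'
      names[⍋ages]              ⍝ the grade of ages orders names
┌───┬─────┬─────┐
│Bob│Carol│Alice│
└───┴─────┴─────┘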

Nicolas, Formulation 3: Do we have something with a wider spectrum than “Turing complete or not” to categorise the “usefulness and/or efficiency” of a language?

Roger: Still no general principles, but I can say this:

  • Study the languages of designers whom you respect, and “borrow” their primitives, or at least pay attention to the idea. For example, =x is a grouping primitive in k. Symbols are Arthur Whitney’s most precious resource and for him to use up a symbol is significant.

    =x ←→ {⊂⍵}⌸x   old k definition

    =x ←→ {⍺⍵}⌸x   current k definition

    Both {⊂⍵}⌸x (14.0) and {⍺⍵}⌸x (14.1) are supported by special code.

  • Study the design of machines. For a machine to make something “primitive” is significant. For example, the Connection Machine has an instruction “generalised beta” which is basically {f⌿⍵}⌸. Around the time the key operator was introduced, we (Ken Iverson and I) realised that it’s better not to build reduction into a partitioning, so that if you actually want to have a sum you have to say +⌿⌸ rather than just +⌸. The efficiency can be recovered by recognising {f⌿⍵}⌸ and doing a native implementation for it. (Which Dyalog APL does, in 14.0.)
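
For instance, a minimal sketch of the recognised pattern (keys and vals are invented):

      keys←'aabba'
      vals←1 2 3 4 5
      keys{+⌿⍵}⌸vals     ⍝ sums of vals grouped by like keys
8 7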

Roger, Epilogue: To repeat Nicolas’ question in the opening section, what does “fundamental” mean? (Is grade more fundamental than sort?)

In computability, we can use Turing completeness as the metric. But here the requirement is not computability but expressiveness. I don’t know that there is a definitive answer to the question. One way to judge the goodness of a notation, or of an addition to the existing notation such as dfns or key or rank, is to use it on problems from diverse domains and see how well it satisfies the important characteristics of notation set out in Ken Iverson’s Turing lecture:

  • ease of expressing constructs arising in problems
  • suggestivity
  • subordination of detail
  • economy
  • amenability to formal proofs

I would add to these an additional characteristic: beauty.

Do Functions Know Their Own Names?

Going back a long way, when John Scholes and I were writing version 0 of Dyalog, there was a big discussion about whether functions knew their own names. This discussion still surfaces, with John taking the side that they don’t and me taking the side that they do.

Essentially, John would argue that after A←2, the “2” does not know that it is called “A”. So after (in modern parlance):

      add←{
          ⍺+⍵
      }

the part in {} does not know that it is called “add”.

The real question here can be put in different terms: Is the symbol + the addition function itself, or is it one of the possible names of the addition function?

From an APL perspective, does this matter? Most of the time it makes no difference. However, when you view your SI stack it does. Consider:

      add←{
          ⍺+⍵
      }
      times←{
          ⍺×⍵
      }
      inner←{
          ⍺ ⍺⍺.⍵⍵ ⍵
      }
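
For reference, the derived function here is the familiar inner product, so the expression traced below evaluates as 1 2 +.× 3 4 (a quick check, not part of the original discussion):

      1 2 add inner times 3 4
11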

Now if we trace into

      1 2 add inner times 3 4

and stop on inner[1], what do we want to see when we type ⍺⍺ in the session? There are two possibilities:

Either you see:

{
    ⍺+⍵
}

or you see:

∇add

Which of these is more useful?

Being more provocative, try putting the functions in a capsule:

[0] foo
[1] 1 2{
[2]     ⍺+⍵
[3] }{
[4]     ⍺ ⍺⍺.⍵⍵ ⍵
[5] }{
[6]     ⍺×⍵
[7] }3 4

and repeatedly trace until [6]:

      )SI
#.foo[6]*
.
#.foo[4]
#.foo[1]

Compare this with the following:

[0] goo
[1] add←{
[2]     ⍺+⍵
[3] }
[4] inner←{
[5]     ⍺ ⍺⍺.⍵⍵ ⍵
[6] }
[7] times←{
[8]     ⍺×⍵
[9] }
[10] 1 2 add inner times 3 4
      )SI
#.times[1]*
.
#.inner[1]
#.goo[10]

In my view, the latter is much more communicative in a debugging environment.

Going back to the version 0 discussion: We didn’t have dfns or dops, so everything was traditional. The discussion was centred around:

∇r←a add b
[1] r←a+b
∇

∇r←a times b
[1] r←a×b
∇

∇ r←a (f inner g) b
[1] r←a f.g b
∇

Now trace this:

      1 2 add inner times 3 4

until we are at times[1].

The key question at the time was whether )SI should show this:

      )SI
#.times[1]*
.
#.inner[1]

or this:

      )SI
#.g[1]*
.
#.inner[1]

We chose the first of these options as more informative.

So naming things is good and using those names when reporting state information is also good. When the issue was disputed, David Crossley (who was managing the development of Dyalog) resolved it using the argument about the )SI output.

These days it might not be so obvious. In those days we were essentially thinking in terms of a scrolling paper terminal; it pre-dates the full-screen experience that even the tty version gives you. We had to wait for Adam Curtis to join the team before we got that. With the context display whilst tracing, there is a stronger argument that the eyes using the debugging information do not need the names. Whilst I admit to the weakening, I don’t think it actually changes the balance of the case.

We use a lot of C macros in the interpreter. On Linux, gdb gives us access to those macros when we debug the C code – lldb on macOS, dbx on AIX and Visual Studio on Windows all lack that information and are, therefore, far less helpful.

Exploring Key

In ⍺ f⌸ ⍵, major cells of ⍺ specify keys for the corresponding major cells of ⍵, and f applies to each unique major cell of ⍺ and the major cells of ⍵ having that key. The monadic case f⌸⍵ is equivalent to ⍵ f⌸ ⍳≢⍵. Key is similar to the GROUP BY clause in SQL.

For example:

   ⎕io←0

   (⍳11) ,[¯0.5] 'Mississippi'
0 1 2 3 4 5 6 7 8 9 10
M i s s i s s i p p  i

   {⍺}⌸'Mississippi'                 {⍺⍵}⌸'Mississippi'
Misp                              ┌─┬────────┐
                                  │M│0       │
   {⊂⍵}⌸'Mississippi'             ├─┼────────┤
┌─┬────────┬───────┬───┐          │i│1 4 7 10│
│0│1 4 7 10│2 3 5 6│8 9│          ├─┼────────┤
└─┴────────┴───────┴───┘          │s│2 3 5 6 │
                                  ├─┼────────┤
   {≢⍵}⌸'Mississippi'             │p│8 9     │
1 4 4 2                           └─┴────────┘

We illustrate key by using it to model some APL primitives and to motivate and solve a programming puzzle.

Index-Of and Friends

Since key is defined in terms of unique and indices, we expect to be able to effect members of the index-of family. And so we can.

The specifications say that f applies to each unique major cell of ⍺. Therefore, {⍺}⌸ and equivalently ⊣⌸ compute unique. Since the specifications are for unique major cells, these are what ∪⍵ would be if it were extended to non-vector arguments, analogous to the way ⍳ was extended to non-vector left arguments.

   x←↑'Aspen' 'John' 'Susan' 'Roger' 'Opal' 'John' 'Aspen'
   x
Aspen
John 
Susan
Roger
Opal 
John 
Aspen

   {⍺}⌸x                  ⊣⌸x
Aspen                  Aspen
John                   John 
Susan                  Susan
Roger                  Roger
Opal                   Opal 

A little surprisingly, key can also be used to model extended index-of and extended membership. For reasons which will become clear, we model has (AKA ∊⍨) instead of ∊. Both models use the merge operator, whimsically denoted here by →.

ix  ← {(≢⍺) ↓ (0⍴⍨(≢⍺)+≢⍵) ((∊i)→)⍨ (≢¨i)/(≢⍺)⌊⊃¨i←{⊂⍵}⌸⍺⍪⍵}
has ← {(≢⍺) ↓ (0⍴⍨(≢⍺)+≢⍵) ((∊i)→)⍨ (≢¨i)/(≢⍺)>⊃¨i←{⊂⍵}⌸⍺⍪⍵}

   y←↑'China' 'Susan' 'John' 'John' 'Anne' 'Roger'
   y
China
Susan
John 
John 
Anne 
Roger

   x ix y
7 2 1 1 7 3

   x has y
0 1 1 1 0 1

To focus on essentials, ix and has do not directly handle the case where the right argument has higher rank than the left argument (the principal argument). Such higher rank arguments can be handled by pre-application of {↑,⊂⍤(¯1+⍴⍴⍺)⊢⍵} and post-application of ((1-⍴⍴⍺)↓⍴⍵)⍴.

Partition

Since key can effect non-contiguous partitions, we expect to be able to use it to effect contiguous partitions, a special case of the former. And so we can:

   part ← {(+\b/⍺){⊂⍵}⌸b⌿⍵⊣b←∨\⍺}

   x
Aspen
John 
Susan
Roger
Opal 
John 
Aspen
   0 1 0 1 0 0 1 part x
┌─────┬─────┬─────┐
│John │Roger│Aspen│
│Susan│Opal │     │
│     │John │     │
└─────┴─────┴─────┘
   0 1 0 1 0 0 1 (⊂[0] ≡ part) x
1
   (7⍴0) (⊂[0] ≡ part) x
1

Sort and Grade

sort  ← {⍺⌿⍨¯1+{≢⍵}⌸⍺⍪⍵}
grade ← {(≢⍺)-⍨∊1↓¨{⊂⍵}⌸⍺⍪⍵}

If ⍵ is an array from a small “alphabet” ⍺, then ⍺ sort ⍵ sorts ⍵ and ⍺ grade ⍵ grades it. For example:

   x←?1e6⍴256                        a←⎕a[?1e6⍴26]

   (⍋x)  ≡ (⍳256) grade x            (⍋a)  ≡ ⎕a grade a
1                                 1
   x[⍋x] ≡ (⍳256) sort  x            a[⍋a] ≡ ⎕a sort  a
1                                 1

The Maximum of a Vector

In Dyalog ’13 session D11, I included a slide giving relative timings on the then new key operator:

x←?1e6⍴1000

a. ¯1+{≢⍵}⌸(⍳1000),x     1.00
b. ¯1+{≢⍵}⌸(⍳1+⌈/x),x    1.62
c. ...

The timings show that key with tally takes less than a factor of 2 over finding the maximum. This is fast; unbelievably fast even. The following puzzle arose from investigations into this timing: Find the maximum of a vector of unsigned 1-byte integers without using multicore, vector instructions, loop unrolling, etc. Can you do it faster in C than the following?

max=*x++; 
for(i=1;i<n;++i){if(max<*x)max=*x; ++x;}

I posed the puzzle to some expert C programmers, and they failed to solve it. An “obvious” solution obtains by considering an obvious implementation of {≢⍵}⌸x in C:

M=256;
memset(c,0,M*4);              // initialize table of counts
for(i=0;i<n;++i)c[*x++]++;    // analogous to c[x]+←1 in APL

Therefore, one way to find the maximum of x is to have an M-element (byte) Boolean table b, and:

memset(b,0,M);
for(i=0;i<n;++i)b[*x++]=1;    // analogous to b[x]←1 in APL
c=b+M;                        // scan from the back
for(i=0;i<M;++i)if(*--c){max=c-b; break;}

Timings show that the table method is faster than the comparison method by a factor of 1.5 for 1-byte integers and 1.3 for 2-byte integers. The table method requires more code, but the main n-loop, the time for which dominates the overall time, is simpler for the table method.
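
The same table idea can be sketched back in APL (my analogue of the C code, with ⎕io←0 as above):

   x←?1e6⍴256          ⍝ random 1-byte values in 0..255
   b←256⍴0 ⋄ b[x]←1    ⍝ Boolean table: flag the values that occur
   (⌈/x) = ⊃⌽b/⍳256    ⍝ the last flagged index is the maximum
1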

Jay Foad countered that finding the maximum (and the minimum) is much faster still with the vector instructions available in modern CPUs. Dyalog has now (version 14.1) implemented such finding, with performance benefits for key, the index-of family, grade, and sort.

Foo, Shmoo

“We do not realize what tremendous power the structure of an habitual language has. It is not an exaggeration to say that it enslaves us through the mechanism of s[emantic] r[eactions] and that the structure which a language exhibits, and impresses upon us unconsciously, is automatically projected upon the world around us.”
—Korzybski (1930) in Science & Sanity p. 90

I’m trying to train myself to choose nouns as names for my functions, in the hope that doing so will help to free me from the procedural mindset, which was pervasive when I learned to program. In those days, coding concentrated more or less exclusively on the manipulation of state and only rarely on the construction of definitions. In saying this, I’m distinguishing procedures, which legitimately deal in state, from functions, which don’t. Transitive verbs: print, init, reverse(vt), … make good names for procedures.

Many of APL’s primitive monadic functions have nouns as names: floor, reciprocal, reverse(n), shape, …

Except this doesn’t quite work. Each of these names seems to need an “of” when applied to an argument, where the of seems to bind to the word that follows it:

   ⌊÷blah   ⍝ "floor of reciprocal of blah"

I’ve seen dyadic iota documented as index-of, though this grates a bit as “index of” isn’t a noun phrase in English. In fact, as far as I know, all of the Romance languages would parse Richard of York as Richard (of York), where (of York) is a noun phrase in the genitive case: York’s Richard; Ricardus Eboracum. Nobody seems to parse it (Richard of) York.

So much for that idea for a blog post; ho hum; back to real work.

Then Nick Nickolov found a Wikipedia entry that suggested: “Some languages, such as the Cariban languages, can be said to have a possessed case, used to indicate the other party (the thing possessed) in a possession relationship”. So the (Richard of) parsing would work in one of the Cariban languages and isn’t therefore a property of any deep structure of language, per se.

Now when I see ceiling, I can still think of it as a noun, but in the possessed case. To emphasise this, for the rest of this posting, I’ll write my function names with a back-tick inflection to represent the case: reverse` meaning reverse-of.

   ⌊÷blah   ⍝ floor` reciprocal` blah

… and when reading my code, I can vocalise the ending with a slight [əv] or [ə] sound as in “Land’v plenty”, “Bagguh chips, please”, “Floor’v reciprocal’v blah”.

But what about dyadic functions? Glad you asked. The left argument of a function (arrays are naturally nouns) can be seen as a noun acting as a determiner, technically a noun adjunct: coffee table, bicycle thief, cube root, … Thus: 2-residue`, N-take`, zilde-reshape`, ⎕AV-index`, …

Practising this discipline has been quite illuminating: it has pushed me towards left argument currying. I now mentally parse ¯2⌽V as (¯2⌽)V the ((neg-two)-rotation`) V, where neg qualifies two, which in turn qualifies rotation`.

But surely 2+3 is just a symmetrical expression with respect to its arguments? Well yes, but we could also parse it as follows: Starting from the monadic successor` function 1∘+, the curried dyadic version 1+ would be the one-successor` and so 2+3 is the two-successor` three.
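
In session terms (my sketch; succ is an invented name):

   succ←1∘+        ⍝ the monadic successor`
   succ 3
4
   (2∘+) 3         ⍝ the two-successor` three
5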

Now, how should we incorporate operators and their operands into this scheme? “The reverse` each` V”, where each` is functioning as a determiner?

PS: Choosing nouns for the names of functions shouldn’t be confused with J’s excellent taxonomy of language components using parts of speech, in which functions are labelled “verbs”.

PPS: Functions reverse`, successor`, shape`, … could also be seen as properties of their argument. A number N has many properties: its successor, its square-root, its floor, ceiling, prime-factorisation vector, … This is beginning to sound a bit like OO…