Core type calculus

I want to run a core type system calculus by people here. It's implemented and backed by algebra. The first part is stock standard, but I introduce some novel constructions a bit later. It will take a few posts to present.

The collection of all basic types and functions forms a category I'll call TYPE. By basic, I mean shared values, excluding for the moment substructural types like borrowed or uniq classes. I'm also only considering structural types; most languages have at least some nominally typed things, for example a C struct is a nominally typed version of a tuple.

There are two basic constructions: product and coproduct formation. These are families of data functors TYPE^N -> TYPE for finite natural numbers N. A functor is a structure-preserving map; structure is given by function composition, so the rule is F(f * g) = F(f) * F(g).

Products are also called tuples; in set theory products are your usual cartesian product, with elements being tuples. In Chapel and Felix and Ocaml it's .. a tuple :slight_smile: The standard notation is like int * string * real using a non-associative infix operator. You can also use tuple(int,string,real). Note Chapel's notation (int, string, real) is not acceptable: that is an element of TYPE^3, that is, it's a tuple of types, not a type which is a tuple.

Coproducts are also called sums or variants. They represent alternatives. Chapel enums are variants but they're deficient because they lack arguments. In Felix I use the standard notation like int + long + real. I think Felix is the only programming language to have structural sums.

The two operations are required to be distributive, that is

X * ( A + B ) === X * A + X * B

which means these types are isomorphic (NOT equal). Product and sum formation are NOT associative. We all know, for example, that (1,(2,3)) is isomorphic to ((1,2),3) but the types are not equal. In sets, a sum is just a discriminated union.
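
To see the isomorphism as code, here is a minimal Python sketch (Python only because anyone can run it; the names `distribute` and `factor` are mine, not from any library), modelling a sum value as a (tag, payload) pair:

```python
# Sketch of the distributivity isomorphism  X * (A + B) === X * A + X * B.
# Sums are modelled as (tag, payload) pairs, products as tuples.

def distribute(pair):
    # (x, (tag, y))  ->  (tag, (x, y))
    x, (tag, y) = pair
    return (tag, (x, y))

def factor(summand):
    # (tag, (x, y))  ->  (x, (tag, y)) : the inverse direction
    tag, (x, y) = summand
    return (x, (tag, y))

# The two maps are mutually inverse, which is what "isomorphic, NOT equal"
# means: the values correspond exactly, but they are structured differently.
v = ("hello", (0, 42))            # an element of X * (A + B)
assert factor(distribute(v)) == v
```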

There is an empty sum, the set of no elements, it is unique, and it is usually called void and notated 0.

There is a canonical empty product, which is a type with only one value if you like; it is the empty tuple () and the type is called unit. In Chapel it's called nothing. It is notated 1.

So far, everything is very basic. I'm going to add some syntactic sugar now, which I will remove almost immediately. I will write 2 = 1 + 1, and 3 = 1 + 1 + 1. I call these unitsums (because they're sums of units). Note that because sums are not associative, 3 + 2 is isomorphic to 5 but not equal. Similarly 3 * 2 is isomorphic to 6 but not equal.

I'm going to add more sugar now. If you have a type T * T * T, I'm going to allow you to write T ^ 3. Note Chapel gets this wrong and it must be fixed. Now, this is just syntactic sugar. For sums, T + T + T will be written 3 \* T. Note, I cannot use plain * for this because a repeated sum is isomorphic to the pair (3,T), but not equal.

Now, I'm going to throw out the sugar and replace it with a real, actual type constructor. Exponentiation is usually written in two ways: either C ^ D, or D -> C. The latter is obviously a function type. The former is logically equivalent, but I'm going to reserve that notation for arrays. Chapel uses [D] C for arrays, which is acceptable. It also uses ** for exponentiation, which is also acceptable but ugly. The literature almost universally uses ^.

Now we can describe types consisting of products, sums, and arrays using the usual mathematical index notation, remembering that the operators are not associative, so parentheses matter. However, moving the parens around produces an isomorphic type.

Now, clearly the domain of an array can't be any old type. I'm now going to define a constraint which specifies what it can be. Let 0 and 1 be compact linear types; then every sum, product, or exponential of compact linear types is compact linear.

Any compact linear type can be an array index. Let me show a key example:

real ^ (3 * 5) === (real^5)^3

What does this say? It says a matrix which accepts a pair of values in the ranges 0..<3 and 0..<5 as an index is isomorphic, but not equal, to an array of 3 arrays of length 5. Note again, 3 and 5 are types, NOT integers. So values of these types are used for indexing, and bounds checking is not only not required, it isn't even possible: the indices cannot be out of range.
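
The index arithmetic behind this isomorphism can be sketched in a few lines of Python (the helper names are mine, purely illustrative): a pair (i, j) with i < 3 and j < 5 packs into the single number i * 5 + j, and unpacking recovers the pair, so an out-of-range access simply cannot be expressed:

```python
# Sketch of the index bijection behind  real ^ (3 * 5) === (real ^ 5) ^ 3.

def pack(i, j):
    return i * 5 + j          # compact linear encoding of the index pair

def unpack(k):
    return (k // 5, k % 5)    # inverse: recover row and column

# Round-tripping every legal index witnesses the isomorphism.
for i in range(3):
    for j in range(5):
        assert unpack(pack(i, j)) == (i, j)
```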

Another formula is:

real ^ 3 * real ^ 5 === real ^ (3 + 5)

This says a tuple of two arrays, of lengths 3 and 5 respectively, is isomorphic to a single array indexed by a sum which selects either the first or second array and then chooses the index within it.
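
Here is a hypothetical Python sketch of that isomorphism, modelling an index of sum type 3 + 5 as a (case, value) pair (the names and offset arithmetic are mine, shown only to make the correspondence concrete):

```python
# Sketch of  real^3 * real^5 === real^(3 + 5): a pair of arrays of lengths
# 3 and 5 corresponds to one flat array of length 8.

a, b = [0.0, 1.0, 2.0], [3.0, 4.0, 5.0, 6.0, 7.0]
flat = a + b                      # the single array of length 3 + 5

def index(case, value):
    # case 0 selects the first array, case 1 the second; the value then
    # indexes within it. The offset addition is the sum encoding.
    return flat[value] if case == 0 else flat[3 + value]

assert index(0, 2) == a[2]
assert index(1, 4) == b[4]
```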

The type system is very powerful in describing shapes. Now I am going to talk about representations. First, we introduce a function size on compact linear types, which returns a natural number. Unsurprisingly, size(5) = 5, and in fact, if you evaluate the formula as if it were numbers, not types, that's your size!

There's a hidden piece of magic: the types, mapped by size to natural numbers, become associative! Now on to the encoding. For products, a type like 5 * 3 * 2 is represented by a number in the range 0 to 29, using the well known variadic base (mixed radix) representation. The projection extracting the second component of the product is just x / 2 % 3. It's the same as bits, except right shift is replaced by division and mask is replaced by modulus. The format is compact because all values in the range 0 to size - 1 are used up, and linear because .. well, it's an integer :slight_smile:
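
As a sketch of the arithmetic (in Python; the function names are mine and purely illustrative), here is the full encode/decode for 5 * 3 * 2:

```python
# Mixed-radix (variadic base) encoding of the compact product 5 * 3 * 2:
# a triple (a, b, c) with a < 5, b < 3, c < 2 packs into one integer
# in 0..29. Division replaces right-shift, modulus replaces mask.

def encode(a, b, c):
    return (a * 3 + b) * 2 + c    # leftmost component is most significant

def first(x):  return x // 6 % 5  # divide by the sizes to the right (3*2=6)
def second(x): return x // 2 % 3  # divide by 2, mod own size 3
def third(x):  return x % 2

# Every value 0..29 decodes to exactly one triple: the format is compact.
for a in range(5):
    for b in range(3):
        for c in range(2):
            x = encode(a, b, c)
            assert (first(x), second(x), third(x)) == (a, b, c)
```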

The encoding of sums is similar, except you have to add and multiply to construct, and reverse the operations to extract.
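
For example, here is a Python sketch of the repeated sum 3 \* T with size(T) = 4 (the names are mine): construction multiplies the case number by size(T) and adds the argument; extraction reverses this with division and modulus:

```python
# Sketch of the compact encoding of the repeated sum 3 \* T, size(T) = 4:
# the twelve values 0..11 are each used exactly once.

SIZE_T = 4

def inject(case, v):
    return case * SIZE_T + v          # add and multiply to construct

def project(x):
    return (x // SIZE_T, x % SIZE_T)  # reverse the operations to extract

for case in range(3):
    for v in range(SIZE_T):
        assert project(inject(case, v)) == (case, v)
```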

Now, suppose you have a matrix and you want to perform an operation on rows. You have to write two loops to do it. But there is a better way: you reshape the matrix into an array of arrays, which is a type cast only (the storage layout doesn't change), and now you only need one loop.

All Chapel programmers will understand the idea: in Chapel, you just define ranges for the array domain, and Chapel interprets them as iterators. In C, you just use formulas like mx + b. But with my calculus, you don't need to do any of this! In Chapel, as I understand it, a range specification for a domain is a value of the range type and is associated with an iterator. But in my calculus, the index is a type. Every different shape is a different type. So the compiler can generate the ranges automatically. All you need to do is apply the desired isomorphisms.

The calculations are simple and can also be done at run time (with the same code, actually!). In particular, given a kind I will call COMPACTLINEAR, which can be applied as a constraint to a type variable, you can do calculations in which you do not even know the type fully. In other words, you can perform calculations without knowing the rank of a matrix. The compiler (or run time!) fixes the variables when known and turns the symbolic calculations into actual numbers.

The calculus is thus amenable to parametric polymorphism, with the constraint of compact linearity imposed on type variables. NOT all computations you want to do can be done with isomorphisms, but I will bet that a lot of the ones that can be parallelised can be. The really nice property here is that you can take an array shape and linearise it, split the array up into equal pieces irrespective of the shape, farm them out to different machines, and then take the computations and divide the index sets up in a way that would be completely unnatural to a programmer, and it will still work.
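
The splitting step can be sketched in Python (the chunking scheme here is illustrative, not from any particular runtime): because every compact linear index is just a number in 0..size-1, the pieces can be cut with no regard for rank or shape:

```python
# Sketch of "linearise and split": cut the flat index range 0..size-1 into
# near-equal chunks to farm out, regardless of the array's original shape.

def chunks(size, workers):
    # yield (start, stop) half-open ranges covering 0..size-1 evenly
    base, extra = divmod(size, workers)
    start = 0
    for w in range(workers):
        stop = start + base + (1 if w < extra else 0)
        yield (start, stop)
        start = stop

# e.g. the 5 * 3 * 2 shape has size 30; four workers get 8, 8, 7, 7 indices
parts = list(chunks(30, 4))
assert parts == [(0, 8), (8, 16), (16, 23), (23, 30)]
```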

Now I want to show you how it works in practice and explain the good properties. I hope you can guess the meaning of the notation. It's quite similar to what Chapel can do, with one key but vital difference that makes this method very much better. All the examples are real, working code. Here's an array of arrays (homogeneous tuple in Chapel):

var x : int ^ 2 ^ 3 = ((0,1),(10,11),(20,21));
println$ "Ex1: " + x._strr;

and now loop through all the elements:

// two loops, double indexing
print "Ex2: ";
for i in ..[3] do 
  print " | ";
  for j in ..[2] perform
    print$ x.i.j.str + " ";
println "";


Ex1: ((0,1),(10,11),(20,21))
Ex2:  | 0 1  | 10 11  | 20 21

Now, I'm going to apply a canonical reshaping isomorphism using a coercion. In Felix this is not correctly checked, but it should be. I'm going to change the type from an array of arrays to a matrix:

// [2,3] int (matrix)
print "Ex3: ";
var y = x :>> int ^ (3 \* 2);
for i in ..[3] do
  print " | ";
  for j in ..[2] perform
    print$ y.(i\,j).str + " ";
println "";

The coercion is just doing what you will recognise immediately as a simple index law from high school maths: raising something to the power N and then raising that to the power M is the same as raising to the power N * M.

The strange multiplication operator \* is used, rather than a plain *, because, if you recall, the index type of an array must be a compact linear type. Normally, tuple formation makes a tuple where each component is addressable. But in compact linear types, the components are all packed into a single integer, which is a different operation, so we have to use a different operator. Similarly, a tuple (i,j) has addressable components, whereas (i\,j) is a single integer. But as you can see, it works like a matrix: there is only a single index, which is a tuple.

The next example just shows you can iterate with one index variable over all the values of a compact product:

print "Ex5: ";
for index in ..[3\*2] perform
  print$ y.index.str + " ";
println "";

And finally, you can do a naughty coercion and linearise the whole thing:

// linear array
print "Ex6: ";
var z = x :>> int ^ 6;
for index in ..[6] perform
  print$ z.index.str + " ";
println "";

As the output shows, all these things iterate through the underlying sequential storage sequentially:

Ex1: ((0,1),(10,11),(20,21))
Ex2:  | 0 1  | 10 11  | 20 21
Ex3:  | 0 1  | 10 11  | 20 21
Ex5: 0 1 10 11 20 21
Ex6: 0 1 10 11 20 21

I doubt anything is being done here that you cannot already do in Chapel with a suitably populated range as the index, but there is a key difference. Notice, I didn't write a single numerical formula. There isn't a single integer calculation anywhere. Chapel has a single range type with many different values that generate iterators: you can mess up the ranges.

In the code above, everything is (supposed to be) fully type checked, and the compiler is generating the iterator directly from the types. It cannot go wrong. There is no bounds checking, because the indices are not integers but values of enumeration types such as 2, which, by the way, is another name for bool.

So the key point to observe is that using a type calculus improves safety, improves readability, and can also improve performance. All the compact tuple indices are a single integer underneath, and the iterations with one variable are trivial sequential loops underneath.

So now part 3.

var x : int ^ 3 * int ^ 2 = ((1,2,3),(4,5));
println$ x. (`1:2) . (`1:3);
println$ x. 1 . 1;

var y = x :>> int ^ (3 \+ 2);
var topsel : 2 -> (3 \+ 2) = case 1 of (3 \+ 2);
var sel : 3 \+ 2 = topsel (`1:2);
var aprj : int ^ (3 \+ 2) -> int = aproj  sel  of int ^ (3 \+ 2);
println$ y . sel . _strr;
println$ y . aprj . _strr;
println$ y . _strr;



The notation is ghastly. But the bottom line is as before: we start with a tuple of two arrays and flatten it with a coercion, which moves the structure out of the array and into the index type. Since the index is a compact linear type, we can iterate sequentially through it .. or

iterate in parallel

The key thing is that ALL iterations on arrays indexed by compact linear types can be mapped, reduced, scanned, or whatever is appropriate with exactly the same iterators, the only difference is the upper bound. I want to say this another way: it's independent of rank and individual dimensions. Only one number characterises it, the upper bound.
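
A Python sketch of that rank independence (illustrative only): the same loop serves any shape, parameterised by nothing but the upper bound:

```python
# Sketch: one reduction loop works for every compact linearly indexed array,
# whatever its rank; the only parameter is the upper bound (the size).

def total(store, size):
    acc = 0
    for i in range(size):     # a trivial sequential loop over 0..size-1
        acc += store[i]
    return acc

matrix = list(range(6))       # e.g. shape 3 * 2, size 6
cube   = list(range(24))      # e.g. shape 2 * 3 * 4, size 24
assert total(matrix, 6) == sum(matrix)
assert total(cube, 24) == sum(cube)
```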

All of the calculations are done by the compiler if the types are known at compile time. At run time, all the same calculations can be done too. The cost for any reasonably sized array is effectively zero, with the caveat that a type error discovered at run time .. occurs at run time. Once the run time type calculations are done, no bounds checks are required and no error accessing the arrays is possible (the computation the user wants to run on the arrays can, of course, still fail).

This is a relatively lightweight discussion of polynomial types .. if you happen to know a bit of category theory.