Overload resolution

Here is the algorithm for overload resolution. It is quite simple. For a set of candidate functions, we consider only the set of their domains D. We also have an argument with type A, and a partial order on types based on subtyping. In the first step we throw out every domain d in D unless d >= A, so that A is a subtype of each remaining domain. We now have to pick the most specialised.

We start with an empty set S, which satisfies the invariant that no member of S is greater than any other, and every step will preserve this property. Therefore at the end, if S is empty there is no matching function; if it has one member, that is the most specialised; and if it has two or more, the call is ambiguous.

We now consider each element d of D in turn. If d > s for some domain s in S, discard d and try the next element. The invariant is preserved, and the candidate d cannot be the most specialised because we have found an element s that is a strict subtype of it.

Otherwise, for every element s in S with d < s, throw that s out of S, and then add d to S. The invariant is preserved: if there were an element s such that d > s, the first case would have been executed; any s with d < s has been thrown out; so the only remaining possibility is that d is incomparable with every s left in S.

Therefore by induction the invariant still holds when D is exhausted, and the result follows: the algorithm is correct.
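
For concreteness, here is a minimal OCaml sketch of that selection loop. The names most_specialised, gt and candidates are illustrative only (not the Felix source); gt a b is assumed to mean "a is a strict supertype of b", and candidates is the set D after the first filtering step:

let most_specialised gt candidates =
  let step s d =
    (* first case: d is a strict supertype of some member of S, so discard d *)
    if List.exists (fun s' -> gt d s') s then s
    (* otherwise evict every member that is a strict supertype of d, then keep d *)
    else d :: List.filter (fun s' -> not (gt s' d)) s
  in
  List.fold_left step [] candidates
  (* result: [] means no match, [d] a unique winner, anything longer is ambiguous *)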

In particular, the algorithm depends only on a binary relation between types expressing whether one type is a strict subtype of another or not. Given a transitive set of orderings between primitive monomorphic types, and type constructors with known properties such as tuple formation, the algorithm for determining this relation is a well known modification of unification. Note in particular that it handles both subtyping and subsumption simultaneously (that is, a concrete type is also a subtype of a type variable).

It needs to be noted, for example, that tuple formation is covariant, in order to write the algorithm so that it includes tuples. The key thing here is that none of this can work unless the subtyping relation is a partial order, which means it MUST BE TRANSITIVE.

The subtyping rules do not have to agree with the set of implicit conversions. However, implicit conversions which are not subtyping coercions will never be invoked (since all type resolution uses the specialisation algorithm), so they might as well be thrown out. On the other hand, a subtyping coercion MUST be an implicit conversion, because the compiler is going to insert one around the argument of every function call (noting the special case of an exact match will indeed insert the identity as an implicit conversion, which can of course be discarded).
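
As a toy illustration of that last point (the expr type, Coerce and coerce_arg below are invented for this sketch, not compiler internals), the insertion step just wraps the argument and drops the identity case:

type expr =
  | Var of string * string            (* variable name, its type *)
  | Coerce of string * expr           (* target type, coerced expression *)

let type_of = function Var (_, t) -> t | Coerce (t, _) -> t

(* wrap an argument in a coercion to the chosen parameter type;
   an exact match is the identity coercion, which is simply dropped *)
let coerce_arg param_t arg =
  if type_of arg = param_t then arg
  else Coerce (param_t, arg)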

There is more than one encoding of the specialisation routine. The one I use is inefficient, but it works. I can spell it out if necessary.

Finally: finding the most specialised overload is used in TWO PLACES. The first place is ordinary overloading. The second place is the choice of the most specialised implementation of a (CG) interface. Note that this second case is not set up the same way, because you are NOT selecting the most specialised function but the most specialised implementation, which contains multiple functions (because the interface does). The specialisation algorithm is the same, but the candidates are the implementation specialisations of the interface (not the domains of functions).

@skaller - this algorithm only considers a single argument. The main reason it is complicated in Chapel is that we have multiple arguments.

That's easy: just pack the arguments into a pseudo tuple and, similarly, pack the parameters into one. The unification algorithm will immediately unpack them anyhow. Here is the code for doing this in my system, with some special hacks removed for clarity:

  | BTYP_tuple ls, BTYP_tuple rs ->
      if List.length ls <> List.length rs then raise Not_found;
      List.iter2 (fun l r -> add_ge(l,r)) ls rs
 

This is OCaml. The LHS component is the parameter type and the RHS is the argument type. The algorithm raises the Not_found exception on failure, so here we require the same number of components in the parameter and argument types.

If the numbers are equal, we just require each parameter component to be a supertype of the corresponding argument component. This is because tuples are covariant, which means the subtyping relation of the whole tuple corresponds to the subtyping of each component considered separately.

The add_ge(l,r) function adds a new inequality to the set the algorithm must solve.
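
To sketch the shape of that solver (this is not the Felix source; the ty type and prim_ge are invented for illustration), a simple worklist driven by add_ge could look like this, with Not_found signalling failure as in the fragment above:

type ty =
  | TPrim of string                   (* a primitive monomorphic type *)
  | TTuple of ty list                 (* tuple formation: covariant *)
  | TVar of int                       (* a type variable *)

let queue : (ty * ty) list ref = ref []          (* pending inequalities p >= a *)
let add_ge (p, a) = queue := (p, a) :: !queue

(* prim_ge p a decides the primitive ordering, e.g. from a table of pairs *)
let rec solve prim_ge =
  match !queue with
  | [] -> ()
  | (p, a) :: rest ->
    queue := rest;
    (match p, a with
     | TTuple ls, TTuple rs ->
       if List.length ls <> List.length rs then raise Not_found;
       List.iter2 (fun l r -> add_ge (l, r)) ls rs
     | TVar _, _ -> ()                (* subsumption; a real unifier would bind the variable *)
     | _ -> if not (prim_ge p a) then raise Not_found);
    solve prim_ge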

Following up on something from "definition of shadowing is inconsistent between variables and functions" (Issue #19167 · chapel-lang/chapel · GitHub) -- it sounds like we are having a terminology issue with what it means for something to be a "subtype".

If the subtype relationship is exactly meaning "that you can pass it to a function of that type" then I think of subtypes === implicit conversions. (Of course, to use your suggested disambiguation algorithm, that would mean that the implicit conversions need to be transitively closed).

The thing is, in Chapel, it depends on the intent of the argument whether or not implicit conversion is allowed.

For

proc fIn(in arg: T) { }

we can call fIn with any argument that can implicitly convert to T.

But for

proc fRef(ref arg: T) { }

we can call fRef only with an argument that is already of type T (or, if T is generic, an instantiation of T).

Lastly, for

proc fConstRef(const ref arg: T) { }

we can call fConstRef with anything that is a subtype of T. This is how I decide when two things are subtypes vs being implicitly convertible. See Conversions — Chapel Documentation 1.32 for our current definition of subtype, which is more or less that child classes are subtypes of parent classes.

So, given all of that, I suspect we are using a different definition of subtype and that the algorithm you are describing could potentially apply if we

I would like to understand more about the algorithm you are describing. In particular, does it necessarily have nested loops?

If d > s for some domain s in S, discard d and try the next element.
Otherwise, for every element s in S with d < s, throw that s out of S

Do these necessarily involve checking against each element of S? Or is there a trick to make these comparisons O(1)?

If you have to check against each element of S, I think what we have today is not particularly worse in terms of algorithmic complexity. (But it is certainly more complicated to describe -- Procedures — Chapel Documentation 1.32 being our current description of it).

No, I just left references and intents out. They do not pose a problem. For example, I will show you what happens in Felix, which has pointers rather than references, but the idea is the same.

In Felix, a pointer has an attribute which is R, RW, W or N. R means read only, W means write only, RW means read-write, and N means none of the above. :slight_smile:

Guess what? RW is a subtype of R and W. Note, these attributes are not types, but the subtyping rules for the attribute propagate to the pointer types. Here is the OCaml code:

  | BTYP_ptr (ml,l,[]), BTYP_ptr (mr,r,[]) ->
    mode_supertype ml mr;
    add_eq (l,r)

where ml, mr are the pointer modes, l and r are the types pointed at, and the other bit is for a more complex kind of pointer that handles compact linear types (which I described in another discourse topic). Here's the mode handler:

let mode_supertype m1 m2 = match m1,m2 with
  | `R, `RW
  | `W, `RW
  | `N, _ -> ()
  | x,y when x = y -> ()
  | _ -> raise Not_found

Notice carefully that the algorithm says add_eq(l,r), which means there is NO subtyping on pointers at all (other than on the access mode). However there is subsumption (meaning the parameter could be a pointer to a type variable and the argument a pointer to int).

This rule would NOT work in Chapel, because you want a pointer to a derived class to be a subtype of a pointer to a base (Felix has no OO style classes, so it doesn't need that rule). So in Chapel you'd have to use a different rule. But the rule will always be of the form above, because that's how the unification algorithm works.
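
For illustration only, if class references were covariant (as Chapel wants), the corresponding case would be written in the same style; BTYP_classref here is an invented constructor, not real Felix or Chapel compiler code:

  | BTYP_classref l, BTYP_classref r ->
      (* covariant: the argument's class must be a subtype of the parameter's *)
      add_ge (l,r)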

So now, we don't have a different definition of subtype as such. What we have is that references cause problems that go away if you use pointers. However, you can still write the rules with references as long as you have a type for them.

[More in next reply lol!]

So the algorithm I actually use appears to be quadratic; however, the complexity is irrelevant because the number of candidate functions is very small. The actual overloading routine in full is very complex, due to things like type constraints which you don't have. For example, Felix can do this:

typedef ints = typesetof(int, long);
fun add[T with T in ints]: T * T -> T  = "$1+$2";

which makes the routine add available for int and long, so the T in ints is a way of making a set of routines that bind to C++ generic operators all in one go. So this complicates the overloading a bit. [Er .. like about 5 pages of code complicated]

In Felix, for primitive types, I have a table of pairs (P,A), where P and A are primitive (non-polymorphic) types and the pair records that A is a subtype of P. In Felix, this table is exclusively defined by the programmer (mainly me, in the standard library).

The user specification must be acyclic or it's an error (which I do not detect! Bad!). The system provides the transitive closure. It may be that there are two or more paths from X to Y in the graph; in this case the compiler is allowed to choose any path. It tries to choose the shortest one so the result is maximally efficient. This also imposes a constraint on the user: if the user defines a coercion which is an alternative path, it must be semantically equivalent. For example, if you have a coercion from A to B, from B to C, and also from A to C, then the requirement is that the result of applying A to B and then B to C must be identical to applying A to C.
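
As a sketch of that closure step (not the Felix implementation; the representation is invented for illustration), a Floyd-Warshall pass over the declared (P,A) pairs yields every derived coercion together with its shortest path length:

(* types: the primitive type names; pairs: declared coercions (P,A), meaning A converts to P.
   Returns a table mapping (P,A) to the length of the shortest coercion chain. *)
let coercion_paths (types : string list) (pairs : (string * string) list) =
  let dist = Hashtbl.create 97 in
  let get a b = try Hashtbl.find dist (a, b) with Not_found -> max_int in
  let set a b d = Hashtbl.replace dist (a, b) d in
  List.iter (fun t -> set t t 0) types;
  List.iter (fun (p, a) -> set p a 1) pairs;
  List.iter (fun k ->
    List.iter (fun i ->
      List.iter (fun j ->
        let ik = get i k and kj = get k j in
        if ik < max_int && kj < max_int && ik + kj < get i j
        then set i j (ik + kj))
      types) types) types;
  dist

A cycle in the user specification would show up in such a table as two distinct types each reachable from the other.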

Felix also has polymorphic primitives, which are lifted from C++ like this:

type vector[T] = "::std::vector<?1>";

At present, subtyping cannot be specified for polymorphic primitives. The problem here is that one has to know the variance of the type variables in the type. Most types will be either covariant or invariant, but there has to be a way to specify which. The above vector will be invariant, because a vector of A will not be a subtype of a vector of P in C++ even when A is a subtype of P. It could probably be made covariant by providing a mapping routine that promotes the subtyping coercion to the vector (in the sense that Chapel uses promotion to mean a functorial map).
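
If such a variance specification existed, the unification case for an applied primitive could presumably dispatch on it per type argument, roughly as below; BTYP_primapp and the variance markers are invented for this sketch and are not current Felix:

  | BTYP_primapp (name_l, variances, ls), BTYP_primapp (name_r, _, rs)
    when name_l = name_r ->
      List.iter2
        (fun v (l, r) ->
          match v with
          | `Covariant -> add_ge (l,r)    (* component subtyping lifts to the whole type *)
          | `Invariant -> add_eq (l,r))   (* e.g. ::std::vector: components must match exactly *)
        variances (List.combine ls rs)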

BTW: that's a nice and interesting feature of Chapel (promoting a function on T to a function on an array of T, which is precisely what a category theoretic functor does).

BTW: I think the point I would make here is this: given subtyping rules, together with the specialisation algorithm and the algorithm to choose the most specialised overload, you make these relatively simple algorithms the core, and you throw out anything in the language you have at the moment which doesn't fit in. The algorithms are conceptually simple and specify a coherent algebra without special cases, and that is more important than compatibility or anything else. If doing this breaks something, too bad.

Of course, you can propose another algorithm. The point is that the language must be defined by simple algorithms or you cannot generalise and extend the language. You need to do that, because everyone generalises and extends languages, and you get into a mess if the core is not based on simple algebraic designs.

There are abstract notions of quality based on ideas like being able to reason about correctness (which includes the ability of the compiler to reason about correctness), and expressiveness, which has a lot to do with composability and reuse. And there are principles, well espoused in the early days of Object Orientation, which are good ways to measure these things (even if OO itself fails to meet the goals), such as the Open/Closed principle.

The real question for language designers is not that something should be thrown out, but how to throw it out without losing all your users. Or being fired by your employer :slight_smile:

It often turns out that you can actually do this with relatively little pain. But the first step is to actually decide that a change is needed. Then worry about how to do it.

For example: generics as you have them are crap. But we're getting Interfaces, which partially fix that problem. How can we actually fix it? Well, if you put a where clause on a function, it uses bounded polymorphism and NO generic rubbish. That breaks NOTHING, because no one is using interfaces yet. And if you want parametric polymorphism, just put where (): no constraints, and no generic crap. Pure parametric polymorphism as a special case of constraints. So with this, ALL functions using Interfaces can be fully type checked.

This isn't the only way, but it should be considered as a path to what is actually required: fully type checked everything .. except for existing unmodified generics, which can't be type checked until instantiated. So fine, the old generics are rubbish, but we haven't broken them; we have just provided a better way which allows the user to opt in or not as they see fit.

Similarly, I proposed adding functions. The idea is simply that having a purely functional subsystem is useful and almost completely isolated from everything else, so there is freedom to do things a different way without breaking anything.