Overloading an elementary function

I am using a version of Chapel which fixed issues #23533 and #23560 (assuming that I built it right).

use Math;

proc cmplx(x : real(?w), y : real(w))
{
    return (x, y):complex(w + w);
}

proc zsqrt(x : complex(?w))
{
    const half = 1 / 2:real(w / 2);
    const re = x.re;
    const im = x.im;
    const t = sqrt(re * re + im * im);
    const u = half * sqrt(2 * (t + re));
    const v = half * sqrt(2 * (t - re));

    return cmplx(u, if x.im < 0 then -v else v);
}

proc main
{
    const t = cmplx(2.0, 2.0);

    writeln(zsqrt(t));
    return 0;
}

If I rename zsqrt to just sqrt, making it an overload of sqrt, it fails with

csqrt.chpl:8: In function 'sqrt':
csqrt.chpl:13: warning: deprecated use of implicit conversion when passing to a generic formal
csqrt.chpl:13: note: actual with type 'real(64)'
csqrt.chpl:8: note: is passed to formal with type 'complex(?w)'
note: consider adding a cast to 'complex(128)' or an overload to handle 'real(64)'
csqrt.chpl:8: error: unable to resolve return type of function 'sqrt'
csqrt.chpl:13: error: called recursively at this point

Not what I expected. Is it a function of the fix or something else?

Or is my brain not working today?

Hi Damian,

On my local developer build with a recent version of main, I'm not
seeing the same behavior; sqrt seems to work just fine (there is an
explicit overload for real(64) in AutoMath that the compiler should be
finding and preferring to the complex version). So I think it may be
something about the way your build is set up?

If a fresh checkout doesn't help, I might try modifying the real(64)
overload to be a function wrapping the extern call instead of an
unwrapped extern declaration, though that's mostly to eliminate a factor
rather than because I think we've done anything recently that would
impact it.

Curious to hear how that goes,
Lydia

I will try a fresh checkout. I got no errors when I built what I downloaded. Next week. Very odd.

I reverted to the version I downloaded and built in October and got the same error.

chpl --version:

chpl version 1.33.0 pre-release (xxxxxxxxxx)
  built with LLVM version 15.0.7
  available LLVM targets: amdgcn, r600, nvptx64, nvptx, aarch64_32, aarch64_be, aarch64, arm64_32, arm64, x86-64, x86
Copyright 2020-2023 Hewlett Packard Enterprise Development LP
Copyright 2004-2019 Cray Inc.
(See LICENSE file for more details)

Here is the output from Attempt This Online

/ATO/code.chpl:8: error: unable to resolve return type of function 'sqrt'
/ATO/code.chpl:8: In function 'sqrt':
/ATO/code.chpl:12: error: called recursively at this point
yargs: execvp: No such file or directory

This is the underlying error I saw in my own build. See my file.

csqrt.chpl.txt (377 Bytes)

Downloaded latest and rebuilt. Same problem exists as at ATO and from earlier in the week. I have no idea why yours works. Are you sure you are using the right version? See program code below.

csqrt.chpl:8: In function 'sqrt':
csqrt.chpl:12: warning: deprecated use of implicit conversion when passing to a generic formal
csqrt.chpl:12: note: actual with type 'real(64)'
csqrt.chpl:8: note: is passed to formal with type 'complex(?w)'
note: consider adding a cast to 'complex(128)' or an overload to handle 'real(64)'
csqrt.chpl:8: error: unable to resolve return type of function 'sqrt'
csqrt.chpl:12: error: called recursively at this point

Maybe it is related to the fix for nested functions. The code follows:

use Math;

proc cmplx(x : real(?w), y : real(w))
{
	return (x, y):complex(w + w);
}

proc sqrt(x : complex(?w))
{
	const re = x.re;
	const im = x.im;
	const t = sqrt(re * re + im * im);
	const u = sqrt(2 * (t + re));
	const v = sqrt(2 * (t - re));

	return cmplx(u, if x.im < 0 then -v else v) / 2;
}

proc main
{
	const t = cmplx(2.0, 2.0);

	writeln(sqrt(t));
	return 0;
}

Hi Damian —

I didn't get a chance to look at this thread today, but just gave your latest code a try and see the same behavior that you are. I believe that this is working as intended and that the explanation is as follows:

The calls to sqrt() on lines 12–14 are trying to resolve to the same sqrt() routine that they are defined within—i.e., the one defined within your module. The reason for this is that it is the closest/most obvious symbol named sqrt() available and the only one defined within this module's scope. Symbols in the local scope will always be preferred over ones defined within modules brought in at that scope via use or import in order to avoid having code break if someone alters the symbols made available by that module.

As an example of how that breakage would occur, if we were to add a sqrt(x: complex(?w)) definition in 'Math' in the next release of Chapel, you presumably wouldn't want your call to sqrt() within main() to start complaining that it can't decide which version to use—the one in Math vs. the one local to your module. Hence, local/obvious/visible symbols win over ones made available by use or import if they are closer in scope.

Again, the overarching goal is to prevent external modules that are not developed or controlled by the user from surprisingly breaking code or changing behavior as they evolve over time.

Here are two ways to resolve this and get your code working:

  • You could change the calls to sqrt() intended to call the Math/real versions to Math.sqrt() to clarify "Don't call my local sqrt() routine, but call the one defined within the Math module instead". I.e.,

         const t = Math.sqrt(re * re + im * im);
         const u = Math.sqrt(2 * (t + re));
         const v = Math.sqrt(2 * (t - re));
    
  • Or you could move the use Math; into the body of your sqrt() function itself. This will bring the versions of sqrt that it defines into the callsite's scope which makes them closer / more local / preferred relative to the ones defined in your module:

    proc sqrt(x : complex(?w))
    {
         use Math;
         ...
         const t = sqrt(re * re + im * im);
         const u = sqrt(2 * (t + re));
         const v = sqrt(2 * (t - re));
         ...
    }
    

Somebody observant might note that by putting this use Math; into that scope, we're potentially opening up other opportunities for confusion or hijacking in the future. For example, let's say our next version of Math defines a routine called cmplx()... Your call in the return statement might suddenly start calling it rather than yours, which would be surprising. This is why the import statement was developed. While use plays a bit fast and loose by making all of a module's public symbols available by default, import is a much more precise tool. So a better way to write this second version would actually be:

proc sqrt(x : complex(?w))
{
     import Math.sqrt;
     ...
     const t = sqrt(re * re + im * im);
     const u = sqrt(2 * (t + re));
     const v = sqrt(2 * (t - re));
     ...
}

which says to only bring in the sqrt() symbol(s) from Math, which would prevent any other symbols from polluting your local scope. Or, equivalently, you could write use Math only sqrt; to limit what the use brings in.
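
For completeness, here's a sketch of the full program using the first (qualified-call) approach; I haven't run this exact file, but it just combines the pieces above:

proc cmplx(x : real(?w), y : real(w))
{
     return (x, y):complex(w + w);
}

proc sqrt(x : complex(?w))
{
     const re = x.re;
     const im = x.im;
     // Qualify these calls so they resolve to Math's real-valued
     // overloads instead of recursing into this routine.
     const t = Math.sqrt(re * re + im * im);
     const u = Math.sqrt(2 * (t + re));
     const v = Math.sqrt(2 * (t - re));

     return cmplx(u, if x.im < 0 then -v else v) / 2;
}

proc main
{
     const t = cmplx(2.0, 2.0);

     writeln(sqrt(t));   // calls the local complex overload
     return 0;
}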

Hope this is helpful and feels sensible,
-Brad

Thanks for taking the time to look into this.

I must admit I would have expected that a call to a sqrt with a real(w) argument would have gone looking for a routine of that name with an argument of that type. Expected is not strong enough but let's leave it for now. Consider:

proc cmplx(x : real(?w), y : real(w))
{
        return (x, y):complex(w + w);
}
proc sqrt(x : real(?w))
{
        use Math;

        return sqrt(x);
}
proc sqrt(x : complex(?w))
{
        const re = x.re;
        const im = x.im;
        const t = sqrt(re * re + im * im);
        const u = sqrt(2 * (t + re));
        const v = sqrt(2 * (t - re));

        return cmplx(u, if x.im < 0 then -v else v) / 2;
}
proc main
{
        const t = cmplx(2.0, 2.0);

        writeln(sqrt(t));
        return 0;
}

As I read my new code, and your logic, the closest sqrt has an argument of complex(w), while the only correct one is the first definition. I deliberately did not use the word obvious because it is a loaded word, a bit like intuitive. Everybody has different opinions.

If I then delete

proc sqrt(x : real(?w))
{
        use Math;

        return sqrt(x);
}

then the closest sqrt with an argument of real(w) is that contained within Math.chpl.

Once you have a language that uses overloaded function names, defining the symbol name as just the name of the procedure for purposes of resolution is wrong. The compiler needs to treat the procedure and its parameters as a single concept, i.e. it needs to work with the signature; the symbol is the signature.

My original code defined a routine called sqrt with a complex(w) argument. If the compiler saw that I then wanted a routine called sqrt with a real(w) argument and none was defined locally, and my original code did not do that, then the compiler must by definition go looking for it elsewhere. If it cannot find it, then tell me I was too stupid to provide it. But I did. I explicitly said to look for any undefined procedure name and signature combinations in Math.chpl if it cannot find them locally. It really should be that simple. Letting the compiler promote a real(w) argument to a complex(w) in an effort to resolve a routine-name+signature combo is way too adventurous. The compiler should not be making decisions for the programmer. If the programmer forgot to define something, complain to the silly programmer. I am happy for the compiler to tell me I am silly when I forget or mistype things.

In the presence of a

use Math;
...
proc sqrt(...)
...

and my redefinition of a routine/signature that appears in Math, the rule is to use the most recent definition, which in this case says to use mine. If I had written

proc sqrt(...)
...
use Math; // HERE
...

then the use statement is like me redefining every routine that is pulled in by that use, so those names in Math.chpl then have precedence.

If the compiler wonders which I had meant to use, it is free to issue a warning in case I had forgotten to switch my brain on.

This new name resolution rule is going to break lots of our old code. When was it proposed?

Thanks.

Hi Damian -

I haven't been following this thread closely but I'm jumping in to say something about the name resolution rule you are asking about.

In order to understand for myself if the problem you are worried about in your last post is present, I made a really tiny program to explore it. My experiment indicates that it's not a problem. Here is the tiny program:

proc foo(x: real(?w)) {
  writeln("in foo(real)");
}

proc foo(x: complex(?w)) {
  writeln("in foo(complex)");
  foo(x.re); // intention: it should call foo(real)
}

var x: complex(128);
foo(x); // it will call foo(complex)

This program behaves as expected with main (which will soon be 1.33). I wouldn't expect that this program has changed in behavior in any recent releases. (and it also behaves in this expected way with the new type and call resolver which is not yet in production).

There was indeed a change to the disambiguation rules in 1.28 which is described in https://chapel-lang.org/releaseNotes/1.27-1.28/01-language.pdf starting from slide 64. Note that 1.28 was the release that fixed some unfortunate situations where code you thought was working with real(32) exclusively would use real(64) for some operations. The details of function overload resolution aka "disambiguation" are documented in the language spec as well; if you are interested in looking at it, head to the 'Procedures' chapter of the Chapel Documentation 1.32, and more specifically to its section on determining the most specific function.

Now, what Brad was describing about the original program is accurate, but the "more visible" rule does not come up in my tiny program above because, just because we are within foo(complex), we do not consider foo(complex) to be more visible than foo(real). Instead, the visibility of both is the same (after all, they are both defined at the top-level scope).

What would make it "more visible" ? Well, as Brad already described, the things brought in by a use / private use are less visible than the things declared in a scope. Other than that, the visibility distance can be thought of as the number of code blocks (e.g. demarcated with { } although you can also have a code block with a do) between the call in question and the definitions. For example:

proc bar(x: real) {
  writeln("in bar(real)");
}

proc baz() {
  proc bar(x: complex) {
    writeln("in bar(complex)");
  }
  var num : real(64);
  bar(num); // this runs bar(complex) because it is closer / "more visible"
}

baz();

Admittedly, the language specification does not currently do a great job of defining "more visible" as a concept, and I think that's a good area for improvement to that document.

Also, one more thing about visibility. Using import instead of use changes the visibility because things brought in with import are considered siblings of the things defined in a scope. The reason is that, since import brings in just a named function / set of overloads, we think it's less likely to be surprising if it starts to bring in a new symbol. However, trying this with your sqrt example led to other problems: now your sqrt is ambiguous with the one brought in by import Math.sqrt; and additionally we get an overload sets error which is attempting to protect your code from changes in the libraries you are using.
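
To make that ambiguity concrete, here is a minimal sketch of the shape that runs into trouble (the placeholder body is mine, purely for illustration):

import Math.sqrt;   // brings in all of Math's sqrt overloads,
                    // as siblings of this scope's own symbols

proc sqrt(x : complex(?w)) {   // same scope, overlapping signatures
  return x;   // placeholder body, just for illustration
}

var z : complex(128);
writeln(sqrt(z));   // two equally visible candidates: ambiguous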

In any case, I think it's very likely that your existing code will work fine in this regard if it worked with 1.28, and I suspect it did because that release fixed some implicit conversion issues that were troubling you. We are expecting the language rules in this area to be stable and I'm not aware of any upcoming changes to them (and, IIRC, 1.28 was the version where they last changed in any big way). And, if you use the strategy of defining groups of overloads next to each other (e.g. in your latest example you defined proc sqrt(x : real(?w)) next to proc sqrt(x : complex(?w))) things should be pretty satisfying. You also might consider trying to use different function names from Math/AutoMath in order to avoid confusion about which is called.

Tangentially related, I think you'd still like a compilation mode that disabled all implicit conversions, but we have issues for that and it's a longer-term TODO that is described in issue #20687 ("Compiler warnings about implicit type conversions involving floating point numbers") on the chapel-lang/chapel GitHub. Of course, we are interested to know if that becomes a blocker / major issue for you.

Additionally, I think we'd consider adding warnings for cases that are both unlikely to come up in correct codes and likely to represent an error or confusion on the part of the programmer; so if you feel you have found such a case, do let us know about it.

Best,

-michael

[Thanks Michael for providing more information and context].

Damian, I remain concerned about your assertion that this will break
longstanding code of yours and would like to understand better what the
code patterns in question are, and the motivations for them.

For example, if the case you sent is an exact example of such a pattern,
I'm imagining the motivation to be something like:

"Math defines some sqrt() overloads that I'd like to use, but I want to
create my own sqrt() overload for complexes because...

  • 'Math' doesn't/didn't support a sqrt() on complex."
  • I don't think the sqrt() on complex in Math is good, so I want to provide
    my own but not provide the other overloads as well."
  • [something else]."

Can you help me understand:

(a) which of these motivations is the one for the code pattern in your
reproducer so that we can suggest a best practice for it and try to
help better rationalize the current behavior?

(b) whether your reproducer is indicative of the code patterns in your
existing code so that we can try to help you come up with a path
forward for them?

Also, can you let me know what the most recent version of the compiler
your existing code has worked with? I'm specifically trying to figure out
whether something has changed after 1.28 that we're brushing past in
thinking that that was the last time we'd made big resolution changes.

Thanks,
-Brad

Thanks guys for your explanation. I was originally planning on doing that testing on the latest version next week as I am a bit swamped with paperwork this week. I have read both replies once but they need to be read more times as there is a lot in there. So I will answer your queries at length next week.

To me, a use on a file Include.chpl should behave the same way as direct insertion of that same file. Instead, it behaves differently.

As before, if I run the following simple test, it fails:

use AutoMath;

proc cmplx(x : real(?w), y : real(w))
{
        var t : complex(w + w);

        t.re = x; t.im = y;
        return t;
}
proc sqrt(x : complex(?w))
{
        const re = x.re;
        const im = x.im;
        const t = sqrt(re * re + im * im);
        const u = sqrt(2 * (t + re));
        const v = sqrt(2 * (t - re));

        return cmplx(u, if x.im < 0 then -v else v) / 2;
}
proc main
{
        const t = cmplx(2.0, 2.0);

        writeln(sqrt(t));
        return 0;
}

If I explicitly pull in the code from AutoMath into the program, it works.

use AutoMath;

  pragma "fn synchronization free"
  pragma "codegen for CPU and GPU"
  extern proc sqrt(x: real(64)): real(64);

  inline proc sqrt(x : real(32)): real(32)
  {
    pragma "fn synchronization free"
    pragma "codegen for CPU and GPU"
    extern proc sqrtf(x: real(32)): real(32);
    return sqrtf(x);
  }

proc cmplx(x : real(?w), y : real(w))
{
        var t : complex(w + w);

        t.re = x; t.im = y;
        return t;
}
proc sqrt(x : complex(?w))
{
        const re = x.re;
        const im = x.im;
        const t = sqrt(re * re + im * im);
        const u = sqrt(2 * (t + re));
        const v = sqrt(2 * (t - re));

        return cmplx(u, if x.im < 0 then -v else v) / 2;
}
proc main
{
        const t = cmplx(2.0, 2.0);

        writeln(sqrt(t));
        return 0;
}

Maybe after reading your emails a few more time I will understand what the difference is between these two approaches because to me a use is logically the same as textual inclusion. But at this point, I cannot see the difference and hence I am totally bewildered.

I rip code out of a big file and put it into another file for subsequent inclusion, i.e. an import or use, and then reverse that operation all the time. If this resolution rule breaks that mechanism, I am in big trouble.

Also, you keep using the word symbol to mean the name of the overloaded routine or proc. To me, the symbol name (for purposes of resolution) should be the procedure name and its signature.

Sorry, I have to get back to (the joyous task of) writing invoices and then dealing with a computer room damaged by a rain deluge which happened yesterday.

Please note that my complex square root algorithm is not the way I would write such a routine in production code. It is there purely for testing purposes. There are far better ways to write the code

Thanks again for your explanations.

Hi Damian —

No need to reply to these responses quickly for our sake, but just to address your most immediate question in case it helps (now or next week):

To me, a use on a file Include.chpl should behave the same way as
direct insertion of that same file. Instead, it behaves differently.

This "behaves differently" is correct and by intention. Chapel's use definitely isn't an equivalent to '#include' in C/C++ or \input in LaTeX (not that you're necessarily suggesting that, but others sometimes think so, and others may read this thread). It also doesn't make the public declarations within 'Include.chpl' act as though they were defined within the current scope. Instead, it is as though they are introduced into a scope just outside of the current scope. Let me try to explain why.

I'm fairly certain that at Chapel's outset (and maybe for some number of years thereafter? I can't keep track) it was more like you are describing and expecting. But what we found was that it led to more confusion, errors, and instability than benefits. As an example, imagine I write some code like the following:

        use Math;  // I want to use sqrt(), so am use-ing Math to get it

        // here are some scalar variables:
        var a, b, c, d, e, f: real;
        var x0, y0: int;
        var i1, j1, k1: int;

        // here are some array variables
        var x: [1..y0] real = [i in 1..y0] sqrt(i: real);
        var A: [1..x0, 1..y0] real;
        var Cube: [1..i1, 1..j1, 1..k1] real;

        // compute a conjugate gradient...
        var B = conjg(A, x);

        // by defining a conjugate gradient routine...
        proc conjg(M: [?D], v: [?vD]) {
          ...
        }

Now, being very familiar with the Math module, you probably see the pitfalls that could occur here and are wanting to scream at my naivete in choosing these identifier names. But someone not terribly familiar with all the symbols introduced by the Math module, who just wanted to use sqrt() and correctly guessed it was there, yet was too lazy or unaware to filter down to just that symbol using 'use Math only sqrt;', has made the unfortunate choice of naming some of their variables and procedures the same thing as several symbols in the Math module:

  • y0 and j1 conflict with the Bessel functions
  • e conflicts with the mathematical constant
  • my conjg() nearly conflicts with Math's conjg() but instead just adds a new overload, potentially resulting in confusion

In your "use inserts here" model (and Chapel's historical model), this code would result in errors due to having duplicate definitions of 'e', 'y0', and 'j1' within the same scope. The diverging definitions of 'conjg()' wouldn't cause an error outright, but would result in an overload that I probably wasn't intending or aware of.

[Note that this example isn't entirely fictitious. For example, we definitely had (multiple) users and developers who tried to declare symbols named 'e' and ran into surprises and errors due to conflicts with the Math module's definition of 'e'.]

Maybe the above isn't so bad, though? After all, the compiler will yell at me, I'll learn that the Math module defines those symbols, swear that someone took the names e, y0, and j1 away from me, rename them grumpily and move on?

But then, more generally, we started to get concerned about function hijacking or code instability across releases where adding new procedures or variables to a module like 'Math' might change a program's behavior if the author of the program wasn't aware of those changes and the new symbols started conflicting with their own or becoming better matches than their overloads.

As a simple example, maybe I rename e above to avoid a conflict, but a later release of Chapel introduces a variable named f. Suddenly my code starts breaking and I have to rename another variable? More swearing...

As another example, imagine that a future version of the Math module defined a conjg() overload with a similar signature as the one in my code above, yet with a more precise element type like this:

        proc conjg(M: [] real, v: [] real) { ... }

Moreover, imagine it does something very different than computing the conjugate gradient as mine did. Suddenly, my program would see a better match for the given routine, and the behavior of my program would completely change meaning through no fault of my own (well, other than potentially the fact that I relied on 'use', which is inherently subject to surprises since it opens the gate so wide by default).

As a result of both these concerns, (quite some time ago) we made 'use' statements start inserting their symbols into a "shadow scope" just outside of the use statement's scope. Schematically, if my Chapel code looked like this:

        {
          var a, b, c;
          {
            use Math;

            var x, y, z;
          }
        }

the resulting scoping ends up being something like this:

        {
          a, b, c are defined here
          {
            Math's sqrt, y0, j1, e, conjg, and everything else it defines are here 
            {
              x, y, z are defined here
            }
          }
        }

This avoids the conflicts in my original code: My y0, j1, and e are now defined at a different scope from Math's, so the fact that I was blissfully unaware of its definitions no longer matters; and by preferring "more local" routines, I prevent a new overload of Math.conjg() from accidentally (or maliciously) hijacking mine.

Now, as Michael said, if you really want that "inject symbol at this scope" behavior — and/or you want a safer alternative to 'use' to begin with — then 'import' is your friend. Specifically:

  • 'import' does not automatically bring in any symbols; it only brings in the ones you name

  • because of this, it is also considered to bring the symbols into the current scope rather than using a shadow scope (because it's self-evident in the code precisely which symbols are being made available).

Reconsidering my scoping example with import, if I wrote:

        {
          var a, b, c;
          {
            import Math.sqrt;
   
            var x, y, z;
          }
        }

I would get the following scoping:

        {
          a, b, c are defined here
          {
            Math's sqrt is made available here
            x, y, and z are also defined here
          }
        }

I think all of us would say today that if your goal is just to sketch out code quickly and sloppily, use is just fine and very convenient. But for anything considered to be production code, you really want to be using import for all of its precision benefits and lack of surprises.

Briefly, import made me think that the way you'd want to write your example would be:

import Math.sqrt;
         
proc cmplx(...) ...   
proc sqrt(...) ...
proc main ...

but as Michael suggested, that doesn't work because:

  • Math already defines sqrt() overloads taking complex arguments
  • so now I have two routines with conflicting signatures at the same scope
  • so now the call doesn't know which one to dispatch to

Perhaps there should be a way to say "only import the version of sqrt that accepts real values" but we don't have that sort of control today. Importing a symbol brings in all overloads of the symbol.

I rip code out of a big file and put it into another file for subsequent
inclusion

In many cases, this should probably "just work". Cases where it doesn't—as illustrated by your example here—are ones where a set of overloads of the same routine are split between multiple modules. And again, this is by intention and relates to the "overload set" concept that Michael mentioned.

I don't want to go much into that concept at this point since this response has gone a bit long already, but the concept is related to the two definitions of conjg() above as "complex conjugate" vs. "conjugate gradient". We don't want users to accidentally end up with overloads of a single name unless those overloads were meant to be aware of one another. Most often, this would be done by:

  • defining the overloads in the same module
  • or at modules that are all similarly used/imported (so they're "equidistant")
  • or one module defines some overloads while publicly importing (re-exporting) others, causing them all to virtually be defined at the same scope

Again, if you find cases that don't "just work" as Michael and I are hoping, we'd like to be aware of that and to understand what the motivation for those use cases is to see how we/Chapel can help.

One other tool that may be useful here (though I'm skeptical): Chapel has an include statement that can be used to bring in a file as a sub-module. This still isn't the same as a C/C++-style #include, but can be a useful way to refactor code into distinct files for various reasons while still making it accessible, albeit through a sub-module.
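
For instance (hypothetical file names, and hedged since I'm writing this from memory of the feature), with a file Helpers.chpl sitting next to the main file and declaring module Helpers, you could write:

// Main.chpl: brings in Helpers.chpl as a nested sub-module,
// not as textual inclusion
include module Helpers;

proc main {
  Helpers.greet();   // symbols are reached through the sub-module
}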

Also, I have proposed extending the include statement to support C/C++-style #include and while that proposal hasn't generated enough support to implement yet, if real users like yourself were interested in it, that would increase the chances of it happening.

Also, you keep using the word symbol to mean the name of the
overloaded routine or proc. To me, the symbol name (for purposes of
resolution) should be the procedure name and its signature.

Fair enough. If it was me, I am admittedly sloppier with terminology than I should be much of the time.

-Brad

Break-time from invoicing....

Up until late July this year, I was using 1.25.1.

I have done nothing with old code up until this week except to make sure I was not talking out the top of my head in some statements I made in my hopefully constructive replies to issues related to the Math/AutoMath shuffle and some of the name changes going on.

Since August, I have done some stuff with regards to an FFT rework and also with fma (thank you again). But that is all new and as yet, has no overloaded functions. I think the nested function and type query in an array work now.

So, very little experience with 1.28 and more recent.

More answers later.

As with all things math/IEEE floating point related, your input was extremely helpful, thank you!

Up until late July this year, I was using 1.25.1.

Typically, my strong suggestion for catching code up across several releases like this is to take it a release at a time if you've got the stomach for it. The reason is that we work hard from release-to-release to warn about changes, so moving from 1.25 to 1.26 should not be too bad. But jumping from 1.25 to 1.32, you may jump right past those warnings and end up with a very differently behaving program.

That said, 7 releases (soon to be 8) is a lot of iterations, so you may want to be more daring than I'm suggesting. A middle ground is that we've tried to preserve such warnings for six month windows (two releases in the current scheme), so you could potentially jump from 1.25 to 1.27, 1.29, 1.31, and 1.33 without being surprised if we've been good about that.

-Brad

Yes. Sadly, I have only just noticed the change.

I never saw such problems except when I forgot to define my own e which was normally the binary exponent of a floating point number.

Looking at your example:

        use Math;  // I want to use sqrt(), so am use-ing Math to get it

        // here are some scalar variables:
        var a, b, c, d, e, f: real;
        var x0, y0: int;
        var i1, j1, k1: int;

        // here are some array variables
        var x: [1..y0] real = [i in 1..y0] sqrt(i: real);
        var A: [1..x0, 1..y0] real;
        var Cube: [1..i1, 1..j1, 1..k1] real;

        // compute a conjugate gradient...
        var B = conjg(A, x);

        // by defining a conjugate gradient routine...
        proc conjg(M: [?D], v: [?vD]) {
          ...
        }

Unless I am being very lazy, I would never see most of those problems because I:
a) avoid global variables unless I am violating my own coding guidelines - silly me;
b) would declare a conjugate gradient routine in a separate module; and
c) would declare things like your y0 and j1 inside a proc main.

Addressing your points:
- y0 and j1 conflict with the Bessel functions
--- this disappears if they are declared inside a proc main
- e conflicts with the mathematical constant
--- again, this disappears if they are declared inside a proc main
- conjg() nearly conflicts with Math's conjg() but instead just adds a new overload
--- I would use a name like conjgrad
--- Also, I would rename conjg to conj

Sadly, that is no longer the case after the recent changes.

That last part sounds like me most days!!

The compiler can normally complain loud enough to address this.

You note that

It is good for the soul and it's the only language computers understand natively.

You mentioned shadow scoping.

However, shadow scoping looks like it hijacks the use of parentheses and introduces a far more complicated concept to include external symbols. Scary. Looks like I need to avoid any use statement, which also means killing off the latest AutoMath.

The use of proc main solves the problems with scoping without the additional complication.

I missed the discussion on shadow scopes but I would have been an opponent. A use or an import statement needs to be simple. Too late now.

Anyway, it looks like I need to avoid use altogether.

As Michael noted, one should use import when pulling symbols from other modules. Wise Michael. Lazy Damian. But old habits with routines that C encapsulated in math.h die hard.

To resolve the import issues, consider the following:

import name from module; // imports name but fails for overloaded routines
import A,B,C from module; // imports A,B,C but fails for overloaded routines
import name(...) from module; // imports all overloads of routine called name
import name(signature) from module; // imports symbol = name+signature
import A(signatureA), B(signatureB) from module; // imports multiple symbols 

I am sure this Pythonesque approach is not perfect. Does it work for generic declarations? Needs more thought.

I still think you should be able to provide a replacement definition of a routine (or whatever) although the compiler is more than welcome to complain about a redefinition (but still do it). Maybe precede the proc with a keyword like revised. I could be being very unwise (or just very stupid).

Thanks for the insight about the use statement. Learned a lot. While a transparent solution, I still think it is an overly complicated approach to solve the problem and as we have seen, introduces problems of its own.

Hi Damian —

Before I get into replying to your latest message, I'm remembering one more tool that may be of use to you: I'm 98% certain that if you write public use M; (i.e., "Make all of M's symbols available to this scope and as visible as any symbols within this scope"), it will treat use the way you want and inject the module M's contents into the current scope rather than using a shadow scope.

However, changing your toy program to use public use AutoMath; won't make it work as you wanted because it will insert the AutoMath module's sqrt() overloads into the current scope, including the overload taking complexes, and then the compiler will complain about there being an ambiguity w.r.t. the overload you provided since they're equally viable and defined at the same scope.

But even though it doesn't help with your toy, perhaps it'll help in your original/main motivating cases?
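To illustrate the ambiguity in the toy case (a sketch only, assuming your complex sqrt overload from earlier in the thread):

        public use AutoMath;  // inject AutoMath's symbols into this scope (no shadow scope)

        // This overload now sits at the same scope as AutoMath's own
        // sqrt(complex(?w)), so calls with a complex actual are ambiguous:
        proc sqrt(x: complex(?w))
        {
            return exp(log(x) / 2);  // placeholder body for the sketch
        }

        writeln(sqrt(2.0 + 2.0i));   // error: ambiguous call to 'sqrt'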

Speaking of those cases, I remain interested in the answers to my previous questions. That is, in my explanations (previously and below), I'm not trying to say "sorry, you missed that boat, [private] use is not for you", but rather am trying to understand the scenario(s) where use isn't doing what you'd like and what that scenario is.

Taking your comments out of order, it sounds like this is at the heart of it(?):

I'd argue that the shadow scope definition of use permits you to do just this. What it doesn't (currently) support is replacing a subset of an overload set—you have to redefine all the overloads (that you want to use) or none of them (if you want to keep using the originals).

And again, this is so that if the module you're using later adds a new overload, it won't cause surprises or problems by having your module extend something it wasn't anticipating from the start. Quoting you back to yourself:

The compiler can normally complain loud enough to address this.

I'd say that's what it's doing now—complaining about things that seem potentially amiss to alert you to the possibility of surprises later if they're not addressed.

Sadly, I have only just noticed the change.
...
I missed the discussion on shadow scopes but I would have been an opponent.

I'm fairly certain that Chapel's use of shadow scopes for use statements predates your use of Chapel (my guess is that it happened in the mid-2000s). This may explain:

I never saw such problems...

What did change more recently was the treatment of outer vs. inner overloads of a given procedure or operator, which came about as part of other resolution improvements as Michael mentioned above. So it's very likely that these changes made you more aware of shadow scopes than you had been.

Anyway, it looks like I need to avoid use altogether.

For any production-grade code worried about forwards compatibility, import is definitely the recommended approach. That said, I will be curious whether public use gives you what you want.

To resolve the import issues...

I'm not clear: What issues are you referring to here?

...consider the following:
[proposal elided for space]

Note that one or more symbols can be imported from a module using:

import module.name;       // import one symbol from `module`
import module.{A, B, C};  // import multiple symbols from `module`

That said, these forms would bring in all overloads of name, A, B, and C (which is what we've typically wanted, so feels like the right case to optimize for via brevity).
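For example (a sketch; assuming Math's usual real and complex overloads of sqrt):

        import Math.sqrt;   // all overloads of sqrt come along

        writeln(sqrt(4.0));          // resolves to the real(64) overload
        writeln(sqrt(4.0 + 0.0i));   // the complex overload was imported too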

I expect that if we were to add support for importing a single overload of a routine, it'd be expressed as something like:

import module.sqrt(x: real(64)): real(64);

We haven't had a request or good motivating case for this to date, but you're welcome to file a feature request issue for it if it's what you want.

That last part sounds like me most days!!
...
[swearing] is good for the soul and it's the only language computers understand natively.

:slight_smile:

-Brad