Optimal handling of max and min of real numbers

There are several GitHub issues related to this topic, and I do not know which one is the best place to attach a discussion about ensuring that the optimal instructions get generated, along the lines of ensuring that a hardware sqrt instruction is used if there is one.

The underlying issue is also tied up with whether you want NaNs to propagate, and whether one wants Chapel to be IEEE 754 compliant from the perspective of both the actual result and the floating-point exceptions raised. Chapel currently satisfies the first but not the second.

It is also related to the hardware target on which one runs one's program, be it x86, ARM, RISC-V, MIPS, POWER, or whatever else needs to be covered. And then there are GPUs. And when one remembers that the actual maximum/minimum instructions on all but ARM (and Power10, if you want to generalize things) are broken relative to IEEE 754-2019's NaN propagation, it really complicates things.

And then there are issues with reductions.

How do we bring these issues together?

> Optimal handling of max and min of real numbers
>
> a discussion about ensuring that one gets the optimal instructions generated, along the lines of ensuring that one uses a sqrt instruction if there is one.

Did you mean for this post to be about complex numbers? I'm not seeing how sqrt would come up for min/max of real numbers.

Apologies for the lack of clarity. I blame New Year parties.

It is not about complex numbers. It is not about sqrt either, except that most architectures these days, and LLVM, know about assembler instructions that provide fundamental IEEE 754 operations.
While sqrt is the most obvious and has just been addressed recently, max and min functionality is similar: easier in some ways and more difficult in others. A single instruction providing that functionality is available on ARM, and the underlying building block exists on hardware like x86-64, RISC-V, MIPS, and even POWER (although the last of these four is different from the other three in that group). I am curious about the quality of the code the compiler produces for this operation, and I wondered whether we need to go through the same rigorous exercise that we just did with sqrt.

Additionally, there are multiple GitHub issues on this subject, from multiple people, in which we have all contributed various insightful perspectives on this really fundamental (but conceptually simple) mathematical operation, which occurs in one form or another, often as the critical operation, in so many numerical algorithms. I wondered whether those issues should be lumped together. They span compiler issues, (effectively) numerical issues, and standards issues.

Just a thought. It might be too low a priority, which is why I mentioned it here rather than on GitHub. Not urgent.