Speaking for myself, while this is something that I agree can be valuable to
communicate to the compiler,
The word is more like in-valuable.
it isn't something that I'd personally want to add a keyword for within the language
proper since it doesn't affect the correctness or behavior of the program, just its
implementation and performance.
To me, behavior is performance. In trying to convince Fortran users to try Chapel, the
language and its performance were one and the same thing.
.... we've recently been discussing adding an annotation/attribute/tag feature to the
language as a means of expressing things about the program outside of the language.
Annotations destroy readability in every case I have seen, not just in Chapel. I hope no numerical
code of mine ever needs annotations.
I am not sure what you mean by attributes and tags. If you mean attributes along the lines of the concepts of pure and const functions, then those attributes will be a part of the language.
... ongoing discussion about where the line between language features and these
tags should be drawn, but my personal favorite option is that things which are meant
to communicate directly to tools (where I consider the compiler itself to be a tool) and
don't impact the program's behavior (modulo performance or implementation details)
are fair game for these.
To that end, I might imagine expressing these as something like:
@likely true
if boolean-expression then
@likely false
if boolean-expression then
Two lines instead of one is bad, and the if statement is not as obvious when written that way.
Also, the above is not as readable as
if likely(<boolean-expression>) then  // or unlikely(<boolean-expression>)
{
  // block of code
}
else
{
  // alternative block of code
}
I am not convinced that my ifl and ifu are an optimal solution.
The likely and unlikely conditions are inherent in the mathematical algorithm and its
expression in a language.
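For comparison, C programmers already express this intent through GCC/Clang's __builtin_expect builtin; below is a minimal sketch (the likely/unlikely macro names are conventional rather than standard C, and safe_reciprocal is just an illustration):

#include <math.h>

/* Classic GCC/Clang idiom; not part of standard C. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

double safe_reciprocal(double x)
{
    if (unlikely(x == 0.0))   /* rare path: moved off the fall-through path */
        return INFINITY;
    return 1.0 / x;           /* hot path: laid out as straight-line code */
}

C++20 later standardized the annotation form as the [[likely]] and [[unlikely]] attributes on branches.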
... I'm surprised by your comment that you think this could as much as double performance,
Proven. I discovered that in tests on replacement rounding routines. I was surprised too.
I was trying to write inline rounding routines in Chapel to replace the C library routines that handle all the edge cases and are IEEE 754 correct. It did not work; I got funny results. In the end I rewrote things in C, and got that right. But to perform well, the code needed the concept of likely and unlikely conditions, i.e. hints on the boolean expressions. At least I know the concept works. And yes, I know that some RISC chips have machine instructions for individual rounding modes, making that work irrelevant on some newer architectures.
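To give a feel for the shape of such code, here is a minimal sketch (not the routine described above, just an illustration of the pattern) of a float rounding function built on the classic add-and-subtract-2^23 trick, with the rare cases hinted unlikely. It assumes the default round-to-nearest-even mode, a GCC/Clang toolchain, and no value-changing optimizations such as -ffast-math:

#include <math.h>

#define unlikely(x) __builtin_expect(!!(x), 0)   /* GCC/Clang builtin */

/* Round to the nearest integral value. For |x| < 2^23, adding and then
   subtracting 2^23 forces rounding at the integer position; any |x| >= 2^23
   is already integral, and the same test catches NaN and infinity.
   (The sign of -0.0 is not preserved; a complete routine would handle it.) */
static inline float round_nearest(float x)
{
    const float TWO23 = 8388608.0f;        /* 2^23 */
    if (unlikely(!(fabsf(x) < TWO23)))     /* rare: large, NaN, or infinite */
        return x;
    float t = copysignf(TWO23, x);
    return (x + t) - t;                    /* hot path */
}

The hint is about layout: the compiler keeps the arithmetic on the straight-line path and moves the edge-case return out of the way.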
I'd imagine that if a given conditional were that predictably true/false and executed enough times
to matter, the hardware branch predictors would do a good job of picking up on the pattern...?
Well, they did not. And my tests ran through every possible real(32), i.e. from min(real(32)) to max(real(32)), where two consecutive test values differed only in their least significant bit. Besides, why rely on run-time hardware branch prediction when I can predict it at compile time?
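Such an exhaustive sweep is small enough to sketch directly, assuming the hypothetical round_nearest above and taking rintf from the C library as the reference (NaN-quieting differences would also surface in a bitwise comparison like this):

#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <math.h>

/* Visit all 2^32 real(32) bit patterns; within each sign, consecutive
   patterns are one unit in the last place apart. */
int main(void)
{
    uint64_t mismatches = 0;
    uint32_t bits = 0;
    do {
        float x, got, want;
        memcpy(&x, &bits, sizeof x);   /* reinterpret the bit pattern */
        got  = round_nearest(x);       /* candidate under test (see above) */
        want = rintf(x);               /* C library reference */
        if (memcmp(&got, &want, sizeof got) != 0)  /* bitwise compare */
            mismatches++;
    } while (bits++ != UINT32_MAX);
    printf("%llu mismatches\n", (unsigned long long)mismatches);
    return mismatches != 0;
}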