18888, "skaller", "Implicit conversion from single to double precision is wrong.", "2021-12-20T02:01:02Z"

Chapel currently allows an implicit conversion from real(s) to real(t) if s<=t. This rule is backwards. I recommend disallowing all implicit conversions between distinct real (and hence complex) types, since relying on them is a common source of errors.
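To illustrate the kind of mistake this conversion invites, here is a small sketch in Python (used because its float is an IEEE binary64; the round-trip through `struct` simulates a binary32 value, and the assignment simulates the implicit widening Chapel performs — this is an analogy, not Chapel code):

```python
import struct

# Simulate a real(32) value: round 0.1 to the nearest binary32.
f = struct.unpack('f', struct.pack('f', 0.1))[0]

# Simulate the implicit real(32) -> real(64) widening.
d = f

# The widening is exact, but it silently preserves the
# single-precision rounding error, so the "precise" double
# does not compare equal to the double-precision 0.1.
print(d == 0.1)   # False
print(d)          # 0.10000000149011612
```

The widened value looks like a trustworthy double to downstream code, but it carries the full error of the narrower type.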

A floating point number essentially represents a range between the next lower and next higher representable numbers, so it can be thought of as partitioning the real number line into subranges. A higher precision type supports more equivalence classes and thus a finer partition. Each of these smaller ranges is contained inside a larger range of a lower precision type, so the finer partition embeds in the coarser one; in OO terms, each smaller range "isA" member of the larger range.
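The embedding direction can be demonstrated concretely: many distinct binary64 values round to the same binary32 value, i.e. they all lie inside that one float's range. A sketch in Python, with a hypothetical helper `to_f32` that rounds a Python float (binary64) to the nearest binary32:

```python
import struct

def to_f32(x):
    """Round a binary64 value to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

f = to_f32(0.1)  # the single-precision equivalence class containing 0.1

# Several distinct doubles fall into the same float's range,
# so doubles embed into floats, not the other way round.
doubles = [0.1, f, 0.10000000000000002]
print([to_f32(d) == f for d in doubles])  # [True, True, True]
```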

Therefore, if any implicit conversion is allowed at all, the correct rule is actually t<=s. The usual intuition is that more bits mean more precision, so a lower precision value can be converted to a higher precision one without losing information. But this intuition is completely wrong. Consider an approximation process that produces better and better approximations by iteration, for example the usual way of solving the eigenproblem. The larger the error tolerated in the result, the faster the process converges. So again, the longer running processes are actually embedded in the shorter running ones; refinements of a solution embed in the solution they refine.
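The embedding of a refined solution in a coarser one can be sketched with a deterministic iteration. Below is a hypothetical Newton iteration for sqrt(2) (standing in for the eigenproblem, which follows the same pattern): the trace produced under a loose tolerance is literally a prefix of the trace produced under a tight one.

```python
def newton_sqrt(a, tol):
    """Newton's method for sqrt(a); returns the iterate trace."""
    x = a
    trace = [x]
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
        trace.append(x)
    return trace

coarse = newton_sqrt(2.0, 1e-6)   # low-precision tolerance, fewer steps
fine = newton_sqrt(2.0, 1e-12)    # high-precision tolerance, more steps

# The coarse run is embedded in the fine run: the more accurate
# computation is a refinement of the less accurate one.
print(fine[:len(coarse)] == coarse)  # True
```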

I apologise for not citing an academic reference (I know they exist).