Issue #18497: [Library Stabilization] Meaning of '%.6' varies depending on what type it is applied to (opened by lydia-duncan, 2021-09-30T21:45:54Z)
The meaning of the number after the decimal point in a formatted string specifier varies depending on the type of the argument it is applied to. For integers, it means "insert a decimal point and pad to the specified number of zeroes", while for reals, we only print the number of decimals that were already provided. This is especially confusing when the format string doesn't appear to change otherwise, such as when using the generic `n` specifier to let the argument's type determine the output:
```chapel
use IO.FormattedIO;
writef("%.6n\n", 35);   // prints `35.000000`
writef("%.6n\n", 2.13); // prints `2.13`
```
The same difference shows up when explicitly specifying the type:
```chapel
use IO.FormattedIO;
writef("%.6i\n", 35);   // prints `35.000000`
writef("%.6r\n", 2.13); // prints `2.13`
```
While each behavior may make sense for its type in isolation, taken together they seem confusing and could cause problems when copying format strings for adjustment later in a program.
Should we change this behavior? How?