Integer Promotion Weirdness

I revisited this because it is about to cause me even more grief than it already has. I still do not understand the reasoning behind giving what looks like a generic integer literal a fixed, non-generic type, i.e.

a literal 1 is an int(64) regardless of context;

It complicates generic programming: generic integral expressions built from identifiers and literals become ugly and hard to read, because every literal needs an explicit generic cast to stop the Chapel compiler from doing naughty things behind the programmer's back. (By an integral expression I mean one comprised only of integer identifiers or literals, not Newton's and Leibniz's continuous analogue of a sum.)
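To make the friction concrete, here is a hedged sketch in C's fixed-width types (the names `T` and `succ` are mine, not from the post): treat `T` as a stand-in for a generic 8-bit integral type. If the literal 1 were pinned to 64 bits, generic code would need a cast like `(T)1` at every use, which is roughly the noise being complained about.

```c
#include <stdint.h>

/* Hypothetical stand-in for a generic narrow integral type. */
typedef uint8_t T;

/* If the literal 1 carried a fixed 64-bit type, each use in generic
 * code would need an explicit narrowing cast back to T, as below. */
static T succ(T x) {
    return x + (T)1;  /* the "(T)1" is the clutter the post objects to */
}
```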

Even with regard to

It's important that 1 have a type so that
we (and the compiler) know what var x = 1
means (i.e. that x will be an int(64)).

A type only needs to be chosen at assignment time, i.e. across the assignment operator, not within an expression; and even then an explicit type is only needed when the value of the expression is not known at compile time.

I looked at:

And, this is very different from C, where
integer literals are typically 32-bit ints

That is not my interpretation, and I reread the C standard a few times. If, say, x is a short integer, then the expression x + 1 in C is treated as a short integer expression for all intents and purposes, or at least by the compilers and static analysis tools I use. So an integer literal really is contextual in C.
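Strictly speaking, the standard's integer promotions widen a short operand to int before the addition; but once the result is stored back into a short-width object, the observable behavior matches arithmetic done at the narrow width, which is presumably the "for all intents and purposes" reading. A small check (unsigned is used so the wrap-around is well defined):

```c
/* x is promoted to int for the addition, so the intermediate value may
 * exceed the short range; storing back into unsigned short reduces it
 * modulo USHRT_MAX + 1 -- exactly what narrow-width arithmetic gives.
 * Note there is no cast on the literal 1. */
unsigned short wrap(unsigned short x) {
    return x + 1;
}
```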

For truly generic code, a literal within an expression should inherit its type from the programmer-specified identifiers in the same expression; otherwise the compiler is making a decision that contradicts the programmer.