If you pass the C expression
ss + 1
as a parameter to a C routine expecting a short, it is treated as a short expression. It only loses information if the expression evaluates to something larger than a short, which my (hopefully) thorough testing and judicious use of asserts ensures never happens. But C is not the topic of conversation.
I would agree.
That said, an integer literal in written algebra is generic, just as a floating-point literal is generic, and I am trying to write generic code that is both readable and reflects how the algebra is written. Because the existing rule of treating an integer literal as an int(64) is not generic, Chapel needs a compiler option that treats an integer literal as generic, effectively int(0), to support those who want generic code. The compiler would still need to capture the value of that literal temporarily during compilation at some width beyond the hardware's, say int(256), until it needs to be evaluated. That way int(0) never interferes with the type promotion rules and remains generic. This approach also needs none of the internal shortening rules, which must be a nightmare. The extra handling occurs only where the literal has no type to inherit, e.g. where it is the only element on the RHS of an assignment, where there are only literals in a parenthesised expression, or in a range with no other programmer-defined identifier:
param b = 1023;
const x = 1;
const fred = .... blah-blah.... +(2*5)*x ... blah-blah ...
const r = 1..100;
var t : [1..10, 1..10] real(32);
In those cases you evaluate the integer literal (or the literal expression) to the appropriate accuracy. For the assignment case, the size could be deduced as the minimum needed for the value in question. For the parenthesised expression, it would be the precision used to temporarily capture int(0) literals. For a range, it should default to the best type for indexing operations; for backward compatibility that default should be int(64), although a compiler option should allow something truly generic.
This allows a truly generic rendering of an expression such as
i + 2
Treating integer literals as int(0) should also work when Chapel needs to support 128-bit integers. The same underlying logic should handle floating-point literals as well, covering real(16), real(128), and beyond.
Whether having truly generic literals breaks anything else I do not know. But the way Chapel currently maps a generic integer literal to a non-generic int(64) certainly breaks pretty much every piece of non-64-bit Fortran, C, C++ or Java code that I, or anyone else, am trying to port to Chapel, unless that old code only ever worked with 64-bit data. Nearly 50% of existing HPC code is Fortran, and some sources estimate that another 20% of C or C++ HPC code has been translated from, or written like, Fortran. The number of people affected by that non-generic implied type is therefore non-trivial. From a personal perspective, it is costing us huge amounts of time, and it results in frustrated programmers and ugly code, because every integer literal needs to be rewritten to look like
1:int(w)
which trashes one of Chapel's best features: its readability and clarity of expression. And frustrated programmers are much less productive programmers.
This approach also means that either of
param x = t + 1;
const x = t + 1;
will always yield the same result for a given param t, which is not the case now. That inconsistency is a real nightmare.
Thanks in advance.