*Recognize the 'll' and 'j' length modifiers as denoting long long types.
*Recognize '%P' as equivalent to '%b'.
*Give a warning for 'L' length modifier in scanf, which is currently not supported (except when assignment is suppressed).
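As a rough illustration (a hypothetical snippet, not taken from the test suite, and assuming intmax_t is available via <stdint.h>), the newly recognized modifiers can be used like this:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    long long ll = -1234567890123LL;

    /* the 'll' and 'j' length modifiers now pass format checking */
    printf("%lld\n", ll);
    printf("%jd\n", (intmax_t)ll);

    /* 'll' is accepted in scanf as well; 'L' would draw a warning */
    long long in = 0;
    sscanf("99", "%lld", &in);
    printf("%lld\n", in);

    return 0;
}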
The changes to constant expression handling were not allowing unsupported constant expressions to be evaluated at run time when they appear in regular code.
This makes it more likely that unsupported ops on long long or any other types added in the future will give an error rather than silently generating bad code.
Also, update a comment.
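As a hedged sketch (assuming long long multiplication is one of the operations the constant folder does not yet handle), an expression like the following in regular code should now simply be compiled into run-time code:

#include <stdio.h>

int main(void) {
    /* if the folder cannot evaluate this constant expression, it is
       now evaluated at run time rather than producing bad code */
    long long x = 100000LL * 100000LL;
    printf("%lld\n", x);   /* should print 10000000000 */
    return 0;
}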
This affected initializers like the following:
static int a[50];
static int *ip = &a[0] + 2U;
Also, introduce some basic range checks for calculations that are obviously outside the 65816's address space.
This has no functional effect, since it is all comments. It does mean that printed listings of CGI.pas would not contain those comments, but it is easy enough to restore if someone wants such listings.
This change should make compilation slightly faster, and it also avoids issues with filetypes when using certain tools (since they cannot infer the filetype of CGI.Comments from its extension).
This was previously happening in intermediate code peephole optimization.
The following example program demonstrates the problem:
#pragma optimize 1
int main(void) {
    int i = 0;
    long j = 0;
    ++i | -1;
    ++i & 0;
    ++j | -1;
    ++j & 0;
    return i + j; /* should be 4 */
}
These involve recent standards-conformance patches for printf and scanf, which changed some (non-standard) behaviors that the test cases were expecting.
I also fixed a couple things that clang flagged as undefined behavior, even though they weren't really causing problems under ORCA/C.
These use a new calling convention specific to functions returning these types. When such functions are called, the caller must set the X register to the bank 0 address where the return value is to be saved. The function is then responsible for saving it there before returning to the caller.
Currently, the calling code always makes space for the return value on the stack and sets X to point to that. (As an optimization, it would be possible to have the return value written directly to a local variable on the direct page, with no change needed to the function being called, but that has not yet been implemented.)
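At the source level nothing changes for callers; the sketch below (a hypothetical example, not from the test suite) is simply compiled so that the caller reserves a 64-bit slot on the stack, loads its bank 0 address into X before the call, and big() stores its result through that address:

#include <stdio.h>

long long big(void) {
    return 123456789012345LL;
}

int main(void) {
    /* the compiler makes space for the 64-bit result on the stack,
       points X at it, and big() writes the return value there */
    long long v = big();
    printf("%lld\n", v);
    return 0;
}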
This converts them to 32-bit values before doing computations, which is (more than) sufficient for address calculations on the 65816. Trying to compute an address outside the legal range is undefined behavior, and does not necessarily "wrap around" in a predictable way.
Right now, decimal constants can have long long types based on their suffix, but they are still limited to a maximum value of 2^32-1.
This also implements the C99 change where decimal constants without a u suffix always have signed types. Thus, decimal constants of 2^31 and up now have type long long, even if their values could be represented in the type unsigned long.
Currently, the actual values they can have are still constrained to the 32-bit range. Also, there are some bits of functionality (e.g. for initializers) that are not implemented yet.
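A hypothetical snippet (not from the test suite) illustrating the new typing rules, within the current 32-bit value limit:

#include <stdio.h>

int main(void) {
    /* no suffix: under the C99 rules these constants are signed, so a
       value that does not fit in (32-bit) long gets type long long
       rather than unsigned long */
    printf("%d\n", (int)sizeof(2147483648));   /* long long */
    printf("%d\n", (int)sizeof(2147483647));   /* still long */

    /* suffixed constants take long long types directly */
    printf("%lld %llu\n", 123LL, 456ULL);
    return 0;
}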
This affects command lines like:
cmpl myprog.c cc=(-da=+) ...
Previously, this would be accepted, but 'a' was actually defined as 0 rather than '+'.
Now, this gives an error, consistent with other tokens that are not supported in such definitions on the command line. (Perhaps we should support definitions using any tokens, but that would require bigger code changes.)
This also cleans up some related code to avoid possible null-pointer dereferences.
For now, "long long" is represented with the existing code for the SANE comp format, since their representation is the same except for the comp NaN. This allows existing debuggers that support comp to work with it. The code for "unsigned long long" includes the unsigned flag, so it is unambiguous.
This affects cases where the floating value, truncated to an integer, is outside the range of the destination type. Previously, the result value might appear to be an int value outside the range of the character type.
These situations are undefined behavior under the C standards, so this was not technically a bug, but the new behavior is less surprising. (Note that it still may not raise the "invalid" floating-point exception in some cases where Annex F would call for that.)
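A hypothetical demonstration of the kind of conversion affected (the exact results remain unspecified, since these conversions are undefined behavior):

#include <stdio.h>

int main(void) {
    double d = 1000.0;

    /* 1000 is outside the range of signed char, so this conversion is
       undefined behavior; previously the result could show up as an
       int value outside signed char's range */
    signed char sc = (signed char)d;
    printf("%d\n", sc);

    unsigned char uc = (unsigned char)-3000.0;
    printf("%u\n", uc);
    return 0;
}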
The basic issue with all of these is that they failed to sign-extend the 8-bit signed char value to the full 16-bit A register. This could make certain operations on negative signed char values appear to yield positive values outside the range of signed char.
The following example code demonstrates the problems:
#include <stdio.h>
signed char f(void) {return -50;}
int main(void) {
    long l = -123;
    int i = -99;
    signed char sc = -47;
    signed char *scp = &sc;
    printf("%i\n", (signed char)l);
    printf("%i\n", (signed char)i);
    printf("%i\n", f());
    printf("%i\n", (*scp)++);
    printf("%i\n", *scp = -32);
}
There are several conversions that do not set the necessary flags, so they must be set separately before doing a comparison. Without this fix, comparisons of a value that was just converted might be mis-evaluated.
This led to bugs where the wrong side of an "if" could be followed in some cases, as in the examples below:
#include <stdio.h>
int g(void) {return 50;}
signed char h(void) {return 50;}
long lf(void) {return 50;}
int main(void) {
    signed char sc = 50;
    if ((int)(signed char)g()) puts("OK1");
    if ((int)h()) puts("OK2");
    if ((int)sc) puts("OK3");
    if ((int)lf()) puts("OK4");
}