The C standards generally allow floating-point operations to be done with extra range and precision, but they require that explicit casts convert to the actual type specified. ORCA/C was not previously doing that.
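For example (an illustrative test, not taken from the original issue), code like the following relies on the explicit cast rounding the extended-precision sum back to double:

#include <stdio.h>

int main(void) {
    volatile double one = 1.0;
    volatile double tiny = 1e-18;   /* smaller than DBL_EPSILON, larger than the extended epsilon */
    /* one + tiny is evaluated with extended precision, so it is not equal to 1.0,
       but the explicit cast must round the result to double, giving exactly 1.0 */
    if ((double)(one + tiny) == 1.0)
        puts("cast removed the extra precision");
}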
This patch relies on some new library routines (currently in ORCALib) to do this precision reduction.
This fixes #64.
This was already done by the optimizer, but it is simple enough to just do it all the time. This avoids most performance regressions from the previous commit, and also generates more efficient code for long long stores (in the common cases where the value of an assignment expression is not used in any larger expression).
The value of an assignment expression should be exactly what gets written to the destination, without any extra range or precision. Since floating-point expressions generally do have extra precision, we need to load the actual stored value to get rid of it.
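For example (an illustrative test, not from the commit), the value of an assignment to a float should compare equal to the variable itself, even though the right-hand side carries extra precision:

#include <stdio.h>

int main(void) {
    volatile double d = 0.1;
    float f;
    double g;
    g = (f = d);   /* g receives the value of the assignment expression */
    /* that value must be exactly the float stored in f, with the
       extra precision of d removed */
    if (g == f)
        puts("assignment value matches stored value");
}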
This means that floating-point constants can now have the range and precision of the extended type (aka long double), and floating-point constant expressions evaluated within the compiler also have that same range and precision (matching expressions evaluated at run time). This new behavior is intended to match the behavior specified in the C99 and later standards for FLT_EVAL_METHOD 2.
This fixes the previous problem where long double constants and constant expressions of type long double were not represented and evaluated with the full range and precision that they should be. It also gives extra range and precision to constants and constant expressions of type double or float. This may have pluses and minuses, but at any rate it is consistent with the existing behavior for expressions evaluated at run time, and with one of the possible models of floating point evaluation specified in the C standards.
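For example (illustrative only), a constant expression written with double-typed constants now yields the same extended-precision result as the corresponding long double expression:

#include <stdio.h>

int main(void) {
    long double a = 1.0 / 3.0;      /* constant expression of type double */
    long double b = 1.0L / 3.0L;    /* constant expression of type long double */
    /* under FLT_EVAL_METHOD 2 semantics, both are evaluated with extended
       range and precision, so they compare equal */
    printf("%d\n", a == b);
}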
This works when both operands are simple loads, such that they can be broken up into operations on their subwords in a standard format.
Currently, this is implemented for bitwise binary ops, but it can also be expanded to arithmetic, etc.
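For example (a hypothetical case, not from the commit), an expression like the one below can now be compiled by operating on the values 16 bits at a time, since both operands are simple loads:

long long a, b, c;

void f(void) {
    c = a & b;   /* each 16-bit subword of a and b can be ANDed directly,
                    without pushing the full 64-bit operands onto the stack */
}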
This optimization works when the return value is stored directly to a local variable and not used otherwise (typically only recognized when using intermediate code peephole optimization).
This allows them to bypass the intermediate step of loading the value onto the stack. Currently, this only works for simple cases where a value is loaded and immediately stored.
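For example (hypothetical), a plain 64-bit copy like the following can now be done by moving the value directly:

long long src, dst;

void copy(void) {
    dst = src;   /* the value is loaded and immediately stored, so no
                    intermediate stack push is needed */
}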
For the moment, this does not really do anything, but it lays the groundwork for not always having to load quadword values to the stack before operating on or storing them.
This makes it more likely that unsupported ops on long long or any other types added in the future will give an error rather than silently generating bad code.
Also, update a comment.
These use a new calling convention specific to functions returning these types. When such functions are called, the caller must set the X register to the address within bank 0 that the return value is to be saved to. The function is then responsible for saving it there before returning to the caller.
Currently, the calling code always makes space for the return value on the stack and sets X to point to that. (As an optimization, it would be possible to have the return value written directly to a local variable on the direct page, with no change needed to the function being called, but that has not yet been implemented.)
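For example (a hypothetical caller, shown only to illustrate the convention), a call like the following causes the caller to reserve space for the 64-bit result and pass its bank 0 address in X, and the callee stores the return value there:

long long next_id(void) {
    return 42;   /* the value is saved to the bank 0 address passed in X */
}

int main(void) {
    long long id = next_id();   /* the caller reserves a return slot and points X at it */
    return 0;
}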
Currently, the actual values they can have are still constrained to the 32-bit range. Also, there are some bits of functionality (e.g. for initializers) that are not implemented yet.
This affects cases where the floating value, truncated to an integer, is outside the range of the destination type. Previously, the result value might appear to be an int value outside the range of the character type.
These situations are undefined behavior under the C standards, so this was not technically a bug, but the new behavior is less surprising. (Note that it still may not raise the "invalid" floating-point exception in some cases where Annex F would call for that.)
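For example (illustrative only), a conversion like the following is undefined behavior under the standards, but the result is now less surprising:

#include <stdio.h>

int main(void) {
    volatile double d = 1000.0;
    signed char c = d;      /* the truncated value is out of range for signed char */
    printf("%d\n", c);      /* previously this could print an out-of-range value such
                               as 1000; now the result stays within the type's range */
}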
The basic issue with all of these is that they failed to sign-extend the 8-bit signed char value to the full 16-bit A register. This could make certain operations on negative signed char values appear to yield positive values outside the range of signed char.
The following example code demonstrates the problems:
#include <stdio.h>
signed char f(void) {return -50;}
int main(void) {
    long l = -123;
    int i = -99;
    signed char sc = -47;
    signed char *scp = &sc;
    printf("%i\n", (signed char)l);
    printf("%i\n", (signed char)i);
    printf("%i\n", f());
    printf("%i\n", (*scp)++);
    printf("%i\n", *scp = -32);
}
There are several conversions that do not set the necessary flags, so they must be set separately before doing a comparison. Without this fix, comparisons of a value that was just converted might be mis-evaluated.
This led to bugs where the wrong side of an "if" could be followed in some cases, as in the below examples:
#include <stdio.h>
int g(void) {return 50;}
signed char h(void) {return 50;}
long lf(void) {return 50;}
int main(void) {
    signed char sc = 50;
    if ((int)(signed char)g()) puts("OK1");
    if ((int)h()) puts("OK2");
    if ((int)sc) puts("OK3");
    if ((int)lf()) puts("OK4");
}
This could happen in certain cases where the condition codes might not be set as expected. The following program gives an example:
#pragma optimize 1
#include <stdio.h>
int one(void) {return 1;}
int negative_one(void) {return -1;}
int main(void) {
    puts((one() + negative_one()) ? "A" : "B");
}
This could also occur if the condition used the % operator, particularly after the recent changes to it.
Also, add unsigned multiplication, division, and modulo operations to the list of those that may not set the condition codes based on the result value, both in this and other contexts.
Detected based on several programs from FizzBuzz-C.
Per the C standards, the % operator should give a remainder after division, such that (a/b)*b + a%b equals a (provided that a/b is representable). As such, the operation of % is defined for cases where either or both of the operands are negative. Since division truncates toward 0, a%b should give a negative result (or 0) in cases where a is negative.
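A small illustration of the required behavior (not one of the original test cases):

#include <stdio.h>

int main(void) {
    int a = -7, b = 2;
    printf("%d\n", a / b);                /* -3: division truncates toward zero */
    printf("%d\n", a % b);                /* -1: the remainder takes the sign of a */
    printf("%d\n", (a/b)*b + a%b == a);   /* 1: the identity holds */
}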
Previously, the % operator was essentially behaving like the "mod" operator in Pascal, which is equivalent for positive operands but not if either operand is negative. It would generally give incorrect results in those cases, or in some cases give compile-time or run-time errors.
This patch addresses both 16-bit and 32-bit signed computations at run time, and operations in constant expressions. The approach at run time is to call the existing division routines, which return the correct remainder, except always as a positive number. The generated code checks the sign of the first operand and, if it is negative, negates the remainder.
The code generated is somewhat large (especially for the 32-bit case), so it might be sensible to put it in a library function and call that, but for now it's just generated in-line. This avoids introducing a dependency on a new library function, so the generated code remains compatible with older versions of ORCALib (e.g. the GNO one).
Fixes #10.
There was a bug when storing addresses generated by expressions like &a[i], where a is a global array and i is a variable. In certain cases where the destination location was a local variable that didn't fit in the direct page, the result of the address calculation would be stored to the wrong location on the stack. This failed to give the correct result, and could also sometimes cause crashes or other problems due to stack corruption.
The following program (derived from a csmith-generated test case) illustrates the issues:
#pragma optimize 1
long g_87[5];
static int g_242 = 4;
int main(void) {
    char l_298[256];
    long *l_284[3] = {0, 0, &g_87[g_242]};
    return l_284[2]-g_87; /* should be 4 */
}
The code would trash other data on the stack, which could corrupt other variables and in some cases lead to crashes.
The following program (derived from a csmith-generated test case) shows the problem:
#pragma optimize -1
int main(void) {
    char arr[256] = {0};
    char l_565[3][2] = {{3,4}, {5,6}, {7,8}};
    l_565[0][0]++;
    return l_565[0][0];
}
The following program (derived from a csmith-generated test case) demonstrates the crash:
#pragma optimize 8+64
#include <stdio.h>
long g = 0;
int main (void) {
    long l = 0x10305070;
    printf("%08lx\n", l ^ (g = (1 , 0x12345678)));
}
This was a bug with the code for moving the return address. It would generate a "LDA 0" instruction when it was trying to load the value at DP+256.
The following program (derived from a csmith-generated test case) demonstrates the crash:
#pragma optimize 8
int main (int argc, char **argv) {
    char s[0xFC];
}
If there are no varargs calls (and nothing else that saves stack positions), then space doesn't need to be allocated for the saved stack position. This can also lead to more efficient prolog/epilog code for small functions.
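For example (hypothetical), a small function like the following makes no varargs calls and saves no stack positions, so its prolog and epilog can omit that slot entirely:

int clamp(int x) {
    /* no varargs calls and nothing else that saves a stack position */
    if (x < 0) return 0;
    if (x > 255) return 255;
    return x;
}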
Previously, the stack repair code always generated code to save and restore a register, but this can be omitted except in cases where a 32-bit value or pointer is returned.