The issue was that if a 64-bit value was loaded via one pointer and stored via another, the load and store parts of the operation could both use the Y register for their indexing. The two uses could clash with each other, potentially causing the loads to come from the wrong location.
Here are some examples that illustrate the problem:
/* example 1 */
int main(void) {
    struct {
        char c[16];
        long long x;
    } s = {.x = 0x1234567890abcdef}, *sp = &s;
    long long ll, *llp = &ll;
    *llp = sp->x;
    return ll != s.x; // should return 0
}

/* example 2 */
int main(void) {
    struct {
        char c[16];
        long long x;
    } s = {.x = 0x1234567890abcdef}, *sp = &s;
    long long ll, *llp = &ll;
    unsigned i = 0;
    *llp = sp[i].x;
    return ll != s.x; // should return 0
}

/* example 3 */
int main(void) {
    long long x[2] = {0, 0x1234567890abcdef}, *xp = x;
    long long ll, *llp = &ll;
    unsigned i = 1;
    *llp = xp[i];
    return ll != x[1]; // should return 0
}
The code was not properly adding the offset of the 64-bit value from the pointed-to location, so the wrong memory location would be accessed. This affected indirect accesses to non-initial structure members when they were used as operands of certain operations.
Here is an example showing the problem:
#include <stdio.h>

long long x = 123456;

struct S {
    long long a;
    long long b;
} s = {0, 123456};

int main(void) {
    struct S *sp = &s;
    if (sp->b != x) {
        puts("error");
    }
}
This affects certain places where code like the following could be generated:
     bCC lab2
lab1 brl ...
lab2 ...
If lab1 is no longer referenced due to previous optimizations, it can be removed. This then allows the bCC+brl combination to be shortened to a single conditional branch, if the target is close enough.
This introduces a flag for tracking and potentially removing labels that are only used as the target of one branch. This could be used more widely, but currently it is only used for the specific code sequences shown above. Using it in other places could potentially open up possibilities for invalid native-code optimizations that were previously blocked due to the presence of the label.
This generates slightly better code for indexing a global/static char array with a signed 16-bit index and a positive offset, e.g. a[i+1].
Here is an example that is affected:
#pragma memorymodel 1
#pragma optimize -1

char a[] = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};

int main(int argc, char *argv[]) {
    return a[argc+2];
}
We now recognize cases where the same value needs to be pushed for several consecutive words. In those cases, it is more efficient to load the value into a register once and push that register repeatedly, rather than using a separate PEA instruction for each word.
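As a hypothetical illustration (not taken from the original change), all four 16-bit words pushed for the argument below are identical, so one load plus repeated pushes of a register can replace four PEA instructions:

long long f(long long x) {
    return x;
}

int main(void) {
    /* every word of the pushed argument is 0x0001 */
    return (int)f(0x0001000100010001LL);
}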
Any stack-allocated array must be < 32KB, so we can use the same approach as in the small memory model to compute indexes for it (which is considerably more efficient than the large-memory-model code).
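For instance (a hypothetical illustration), indexing a local array like this can use the cheaper small-memory-model sequence even under the large memory model:

int f(int i) {
    char buf[1000];   /* stack-allocated, so necessarily < 32 KB */
    buf[i] = 1;
    return buf[i];
}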
The new code is smaller and (in the common case where the subtraction does not overflow) faster. It takes advantage of the fact that in overflow cases the carry flag always gets set to the opposite of the sign bit of the result.
This will change a "jump if true" to "jump if false" (or vice versa) and logically negate the condition in certain cases where that generates better code.
An assembly peephole optimization for certain "branch to branch" instructions is also added. (Certain conditionals could generate these.)
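As a hypothetical illustration of the condition-negation case, a test like the one below might be compiled as a single "branch past the body if i >= 10" on the negated condition, rather than a "branch into the body if i < 10" followed by an unconditional jump:

int f(int i) {
    if (i < 10)
        return 1;
    return 2;
}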
This adds debugging code to detect null pointer dereferences, as well as pointer arithmetic on null pointers (which is also undefined behavior, and can lead to later dereferences of the resulting pointers).
Note that ORCA/Pascal can already detect null pointer dereferences as part of its more general range-checking code. This implementation for ORCA/C will report the same error as ORCA/Pascal ("Subrange exceeded"). However, it does not include any of the other forms of range checking that ORCA/Pascal does, and (unlike in ORCA/Pascal) it is controlled by a separate flag from stack overflow checking.
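For example (a hypothetical illustration), both operations below would be flagged at run time when the checks are enabled:

int main(void) {
    int *p = 0;
    int x = *p;      /* dereference of a null pointer */
    int *q = p + 1;  /* pointer arithmetic on a null pointer */
    return x + (q != 0);
}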
The intention may have been to set the flags based on the return value, but that is not part of the calling convention and nothing should be relying on it.
Varargs-only stack repair (i.e. using #pragma optimize bit 3 but not bit 6) was broken by commit 32975b720fb4d79, which removed some code that was needed to allocate the direct page location used to hold the stack pointer value in that case. This led to invalid code being produced, which could crash when run. The fix is to revert the erroneous parts of that commit (they do not affect its core purpose of enabling intermediate code peephole optimization to be used when stack repair code is active).
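Since bit 3 corresponds to the value 8 and bit 6 to 64, a configuration like the following (a hypothetical illustration) was affected:

#pragma optimize 8   /* bit 3 set, bit 6 clear: varargs-only stack repair */
#include <stdio.h>

int main(void) {
    printf("%d\n", 42);   /* varargs call needing stack repair code */
}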
If the return value is just a numeric constant or static address, it can simply be loaded right before the RTL instruction, avoiding any data movement.
This could actually be applied to a somewhat broader class of expressions, but for now it only applies to numeric or address constants, for which it is clearly correct.
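For example (hypothetical illustrations):

char buf[10];

int f(void) {
    return 42;    /* numeric constant: loaded right before the RTL */
}

char *g(void) {
    return buf;   /* static address: likewise */
}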
When a function has a single return statement at the end and meets certain other constraints, we now generate a different intermediate code instruction to evaluate the return value as part of the return operation, rather than assigning it to (effectively) a variable and then reading that value again to return it.
This approach could actually be used for all returns in C code, but for now we only use it for a single return at the end. Directly applying it in other cases could increase the code size by duplicating the function epilogue code.
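A function like the following (a hypothetical illustration) qualifies:

int sum(int a, int b) {
    return a + b;   /* single return statement at the end */
}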
I think the reason this was originally disallowed is that the old code sequence for stack repair code (in ORCA/C 2.1.0) ended with TYA. If this was followed by STA dp or STA abs, the native code peephole optimizer (prior to commit 7364e2d2d329d81) would have turned the combination into a STY instruction. That is invalid if the value in A is needed. This could come up, e.g., when assigning the return value from a function to two different variables.
This is no longer an issue, because the current code sequence for stack repair code no longer ends in TYA and is not susceptible to the same kind of invalid optimization. So it is no longer necessary to disable the native code peephole optimizer when using stack repair code (either for all calls or just varargs calls).
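The problematic pattern would have arisen from code along these lines (a hypothetical illustration):

int f(int n, ...) {
    return n;
}

int a, b;

int main(void) {
    a = b = f(1, 2);   /* the value returned in A is needed for two stores */
    return 0;
}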
This allows it to use MVN-based copying code in more cases, including when moving to/from local variables on the stack. This is slightly shorter and more efficient than calling a helper function.
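For example (a hypothetical illustration), a structure copy into a stack-allocated local like the following can now use the MVN-based sequence:

struct Big {
    char data[200];
};

void f(struct Big *src) {
    struct Big local = *src;   /* copy to a local variable on the stack */
    (void)local;
}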
Long addressing was not being used to access the values, which could lead to mis-evaluation of comparisons against values in global structs, unions, or arrays, depending on the memory layout.
This could sometimes affect the c99desinit.c test, when run with large memory model and at least intermediate code peephole optimization. It could also affect this simpler test (depending on memory layout):
#pragma memorymodel 1
#pragma optimize 1

struct S {
    void *p;
} s = {&s};

int main(void) {
    return s.p != &s; /* should be 0 */
}
Previously, the assembly-level optimizations applied to code in asm statements. In many cases, this was fine (and could even do useful optimizations), but occasionally the optimizations could be invalid. This was especially the case if the assembly involved tricky things like self-modifying code.
To avoid these problems, this patch makes the assembly optimizers ignore code from asm statements, so it is always emitted as-is, without any changes.
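For example (a hypothetical illustration), the body of an asm statement like this one is now emitted exactly as written, with no assembly-level optimizations applied:

int x;

int main(void) {
    asm {
        lda #123
        sta x
    }
    return x;
}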
This fixes #34.
This differs from the usual ORCA/C behavior of treating all floating-point parameters as extended. With the option enabled, they will still be passed in the extended format, but will be converted to their declared type at the start of the function. This is needed for strict standards conformance, because you should be able to take the address of a parameter and get a usable pointer to its declared type. The difference in types can also affect the behavior of _Generic expressions.
The implementation of this is based on ORCA/Pascal, which already did the same thing (unconditionally) with real/double/comp parameters.
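With the option enabled, a function like the following (a hypothetical illustration) behaves as the standards require:

void f(float x) {
    float *p = &x;                                      /* usable pointer to float */
    int is_float = _Generic(x, float: 1, default: 0);   /* yields 1 */
    (void)p;
    (void)is_float;
}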
They now use a jmp (addr,X) instruction, rather than a more complicated code sequence using rts. This is an improvement that was suggested in an old Genie message from Todd Whitesel.
This converts comparisons like x > N (with constant N) to instead be evaluated as x >= N+1, since >= comparisons generate better code. This is possible as long as N is not the maximum value in the type, but in that case the comparison is always false. There are also a few other tweaks to the generated code in some cases.
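For example (hypothetical illustrations, assuming 16-bit unsigned int):

int f(unsigned x) {
    return x > 5;        /* can be evaluated as x >= 6 */
}

int g(unsigned x) {
    return x > 65535u;   /* N is the type's maximum: always false */
}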
The TAY instruction would set the flags, but that is unnecessary because pc_cnv is a "NeedsCondition" operation (and some other conversions also do not reliably set the flags).
The code is also changed to preserve the Y register, possibly facilitating register optimizations.
This was a problem introduced in commit 393b7304a05. It could cause a compiler error for unoptimized array indexing code, e.g.:
int a[100];

int main(void) {
    return a[5];
}
This affects 16-bit subtractions where only the left operand is "complex" (i.e. most things other than constants and simple loads). They were using an unnecessarily complicated code path suitable for the case where both operands are complex.
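A case like the following (a hypothetical illustration) is affected:

int a[10];

int f(int i) {
    return a[i] - 1;   /* left operand complex, right operand a constant */
}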
These could occur because the code for certain operations was assumed to set the z flag based on the result value, but did not actually do so. The affected operations were shifts, loads or stores of bit-fields, and ? : expressions.
Here is an example showing the problem with a shift:
#pragma optimize 1

int main(void) {
    int i = 1, j = 0;
    return (i >> j) ? 1 : 0;
}
Here is an example showing the problem with a bit-field load:
struct {
    signed int i : 16;
} s = {1};

int main(void) {
    return (s.i) ? 1 : 0;
}
Here is an example showing the problem with a bit-field store:
#pragma optimize 1

struct {
    signed int i : 16;
} s;

int main(void) {
    return (s.i = 1) ? 1 : 0;
}
Here is an example showing the problem with a ? : expression:
#pragma optimize 1

int main(void) {
    int a = 5;
    return (a ? (a<<a) : 0) ? 0 : 1;
}
This optimizes most multiplications by a power of 2 or by the sum of two powers of 2, converting them to equivalent shift-based operations, which should be faster than the general-purpose multiplication routine.
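For example (a hypothetical illustration):

int f(int x) {
    return x * 10;   /* 10 = 8 + 2, so this can become (x << 3) + (x << 1) */
}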
This changes unsigned 16-bit multiplies to use the new ~CUMul2 routine in ORCALib, rather than ~UMul2 in SysLib. They differ in that ~CUMul2 gives the low-order 16 bits of the true result in case of overflow. The C standards require this behavior for arithmetic on unsigned types.
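For example (a hypothetical illustration, assuming 16-bit unsigned int), the multiplication below overflows, and the low-order 16 bits of the true result must be returned:

unsigned f(void) {
    unsigned a = 50000, b = 3;
    return a * b;   /* true result 150000; must yield 150000 % 65536 = 18928 */
}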
The desired location for the quad result was not saved, so it could be overwritten when generating code for the left operand. This could result in incorrect code that might trash the stack.
Here is an example affected by this:
#pragma optimize 1

int main(void) {
    long long a, b=2;
    char c = (a=1,b);
}