This was previously happening in intermediate code peephole optimization.
The following example program demonstrates the problem:
#pragma optimize 1
int main(void) {
    int i = 0;
    long j = 0;
    ++i | -1;
    ++i & 0;
    ++j | -1;
    ++j & 0;
    return i+j; /* should be 4 */
}
Currently, the actual values they can have are still constrained to the 32-bit range. Also, there are some bits of functionality (e.g. for initializers) that are not implemented yet.
This generalizes the heuristic approach for checking whether _Noreturn functions could execute to the end of the function, extending it to apply to any function with a non-void return type. These checks use the same #pragma lint bit but give different messages depending on the situation.
This uses a heuristic that may produce both false positives and false negatives, but any false positives should reflect extraneous code at the end of the function that is not actually reachable.
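As an illustration (a hypothetical example, not one of the original test cases), a function like the following could now be flagged, since control can reach the end of a function with a non-void return type:

int sign(int x)
{
    if (x > 0)
        return 1;
    if (x < 0)
        return -1;
    /* falls off the end when x == 0: flagged by the lint check */
}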
In certain rare cases, constant subexpression elimination could set the left subtree of a pc_bno operation in the intermediate code to nil. This could lead to null pointer dereferences, sometimes resulting in a crash or error during native code generation.
The below program sometimes demonstrates the problem (dependent on zero page contents):
#pragma optimize 16
struct F {int *p;};
void foo(struct F* f)
{
    struct {int c;} s = {0};
    ++f->p;
    s.c |= *--f->p;
}
This could happen because the left subexpression does not produce a result for use in the enclosing expression, and therefore is not of the form expected by the CSE code.
The following program (derived from a csmith-generated test case) illustrates the problem:
#pragma optimize 16
int main(void) {
    int i;
    i, (i, 1);
}
This problem could lead to crashes in code like the following (derived from a csmith-generated test case):
#pragma optimize 1
int main (void)
{
    if (1L) ;
}
This problem could lead to crashes in code like the following (derived from a csmith-generated test case):
#pragma optimize 1
static int main(void) {
    long i = 2;
    (long)(i > 1);
}
The previous code may have been intended to convert this to a "!=0" test, which would have been valid if correctly implemented, but with the current code generator that actually yields worse code than the original version, so for now I just removed the optimization for this case.
This problem could lead to crashes in code like the following (derived from a csmith-generated test case):
#pragma optimize 1
int main(int argc, char *argv[]){
    long l_57 = argc;
    return (4 ^ l_57) && 6;
}
This affected comparisons of the form "logical operation or comparison == constant other than 0 or 1". These should always evaluate to 0 (false), but could mis-evaluate to true due to the bad optimization.
The following program gives an example showing the problem:
#pragma optimize 1
int main(void) {
    int i = 0, j = 42;
    return (i || j) == 123;
}
Such subexpressions are not of the right form to work with the existing code, because they do not generate a value for use in the enclosing expression. For now, the code has been changed to simply not remove the subexpression in these cases. Alternative code could be written to make it work, but that might be more trouble than it's worth.
Here's an example that shows the problem (derived from a csmith-generated test case):
#pragma optimize 32+1 /* also had a problem with just 32 */
int main(void) {
    int x, y=10; /* also had problems if x was global */
    do {
        x=42, y-=1;
    } while (y);
    return x+y;
}
These mainly related to situations where the optimization of multiple natural loops (including those created by continue statements) could interact to generate invalid results. Invalid optimizations could also be performed in certain other cases where there were multiple goto statements targeting a single label and at least one of them formed a loop.
These issues are addressed by appropriately adjusting the control flow and updating various data structures after each loop is processed during loop invariant removal.
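The following sketch shows the general shape of control flow involved (a hypothetical illustration, not one of the actual failing test cases): the continue statement creates a second natural loop sharing the header of the while loop, and the assignment is a candidate for invariant removal:

#pragma optimize 32
int main(void)
{
    int i = 0, x = 0;
    while (i < 10) {
        if (i & 1) {
            i++;
            continue;   /* forms an additional natural loop */
        }
        x = 42;         /* candidate for loop invariant removal */
        i++;
    }
    return x + i;       /* should return 52 */
}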
This fixes #18 (compca18.c).
These cases should now always work when using an expression of type unsigned as the index. They will work in some cases but not others when using an int as the index: making those cases work consistently would require more extensive changes and/or a speed hit, so I haven't done it for now.
Note that this now uses an "unsigned multiply" operation for all 16-bit index computations. This should actually work even when the index is a negative signed value, because it will wind up producing (the low-order 16 bits of) the right answer. The signed multiply, on the other hand, generally does not produce the low-order 16 bits of the right answer in cases where it overflows.
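As a host-independent sketch of that arithmetic (masking to 16 bits explicitly, since this is meant to model the 16-bit index computation rather than run through it):

#include <stdio.h>
int main(void)
{
    unsigned idx = 0xFFFFu;               /* 16-bit pattern for index -1 */
    unsigned off = (idx * 2u) & 0xFFFFu;  /* low 16 bits of 0x1FFFE */
    printf("0x%X\n", off);  /* prints 0xFFFE: -2 as a signed 16-bit
                               offset, which is the right answer */
    return 0;
}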
The following program is an example that was miscompiled (both with and without optimization):
int c[20000] = {3};
int main(void) {
    int *p;
    unsigned i = 17000;
    p = c + 17000u;
    return *(p-i); /* should return 3 */
}
This could occur with computations where multiple variables were added to a pointer.
The following program is an example that was miscompiled:
#pragma optimize 1
#pragma memorymodel 1
char c[80000];
int main(void) {
    unsigned i = 30000, j = 40000;
    c[70000] = 3;
    return *(c+i+j); /* should return 3 */
}
This type information is currently used when generating code for the large memory model, but not for the small memory model (which is a bug in itself, causing issues such as #45).
Because the correct type information was not being provided, the code generator could incorrectly use signed index computations when a 16-bit unsigned index value was used in large-memory-model code. The following program is an example that was being miscompiled:
#pragma optimize 1
#pragma memorymodel 1
char c[0xFFFF];
int main(void) {
    unsigned i = 0xABCD;
    c[0xABCD] = 3;
    return c[i]; /* should return 3 */
}
This optimization could apply when indexing into an array whose elements are a power-of-2 size using a 16-bit index value. It is now only used when addressing arrays on the stack (which are necessarily smaller than 64k).
The following program demonstrates the problem:
#pragma optimize 1
#pragma memorymodel 1
long c[40000];
int main(void) {
    int i = 30000;
    c[30000] = 3;
    return c[i]; /* should return 3 */
}
This could generate bad code (e.g. invalidly moving stores ahead of loads, as in #44). It would be possible to do this validly in some cases, but it would take more work to do the necessary checks. For now, we'll just block the optimization for bitfield stores.
In combination with the previous commit, this fixes #44.
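A hypothetical sketch of the hazard (not the actual test case from #44): because a bitfield store is a read-modify-write of the whole underlying word, moving it ahead of a load from that word changes the result:

#pragma optimize 1
struct { int a : 4; int b : 4; } s = {1, 2};
int main(void)
{
    int x = s.a;   /* load of the word holding the fields... */
    s.a = 7;       /* ...must not have this bitfield store (a
                      read-modify-write of the same word) moved
                      ahead of it */
    return x;      /* should return 1 */
}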
The code was not accounting for the possibility that the loaded-from location aliases with the destination of an indirect store in the loop, or for the possibility that it may be written by a function called in the loop. Since we don't have sophisticated alias analysis, we now conservatively assume there may be aliasing in all such cases.
This fixes#20 (compca20.c) and #21 (compca21.c).
Previously, the structure load would be treated as a common subexpression eligible for elimination, but the structure would always be treated as if it had a size of 4 bytes. If it did not, this would generally lead to a crash. (I'm also not sure if dependency analysis was being performed properly for these structures.)
The following program illustrates the problem:
#pragma optimize 17
struct mystruct { char x; } ms;
static void foo(struct mystruct pk) {}
int main(void)
{
    struct mystruct *p = &ms;
    foo(*p);
    foo(*p);
}
This bug could both cause accesses to volatile variables to be omitted, and also cause other expressions to be erroneously optimized out in certain circumstances.
As an example, both the access of x and the call to bar() would be erroneously removed in the following program:
#pragma optimize 1
volatile int x;
int bar(void);
void foo(void)
{
    if(x) ;
    if(bar()) ;
}
Note that this patch disables even more optimizations than previously if the 'volatile' keyword is used anywhere in a translation unit. This is necessary for correctness given the current design of ORCA/C, but it means that care should be taken to avoid unnecessary use of 'volatile'.
This occurred because the values were being rounded rather than truncated when converted to long, unsigned long, or unsigned int.
This was causing problems in the C6.2.3.5.CC test case when compiled with optimization.
The below program demonstrates the problem:
#pragma optimize 1
#include <stdio.h>
int main (void)
{
    long L;
    unsigned int ui;
    unsigned long ul;
    L = -1.5;
    ui = 1.5;
    ul = 1.5;
    printf("%li %u %lu\n", L, ui, ul); /* should print "-1 1 1" */
}
The issue was that one of the procedures used for CSE would recursively call itself for every 'next' link in the code of the basic block. To avoid this, I made it loop back to the top instead (i.e. did a manual tail-call elimination transformation).
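The transformation, sketched here in C rather than the Pascal of the actual compiler source (names and types are hypothetical):

struct icode { struct icode *next; /* ...operation fields... */ };

/* Before: recurses once per 'next' link, so a very large basic
   block could overflow the stack. */
static void scanOld(struct icode *op) {
    if (op == NULL) return;
    /* ...process op... */
    scanOld(op->next);
}

/* After: loops back to the top instead, using constant stack space. */
static void scanNew(struct icode *op) {
    while (op != NULL) {
        /* ...process op... */
        op = op->next;
    }
}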
This problem could be observed with large switch statements as in the following example, although other code with very large basic blocks might have triggered it too. Whether ORCA/C actually crashes will depend on the memory layout--in my testing, this example consistently caused it to crash when running under GNO:
#pragma optimize 16
int main (int argc, char **argv)
{
    switch (argc)
    {
        case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7:
        case 8: case 9: case 10: case 11: case 12: case 13: case 14: case 15:
        case 16: case 17: case 18: case 19: case 20: case 21: case 22: case 23:
        case 24: case 25: case 26: case 27: case 28: case 29: case 30: case 31:
        case 32: case 33: case 34: case 35: case 36: case 37: case 38: case 39:
        case 40: case 41: case 42:
        case 262:
            ;
    }
}
This cuts a few instructions from code like what is shown in commit affbe9. (It also works around the bug with that example, although that patch addresses the root cause of the problem.)
This also fixes a logic error that may have permitted other conversions to be improperly omitted in some cases.
The following program demonstrates the problem (should print 211):
#pragma optimize 1
#include <stdio.h>
int main(void)
{
    unsigned int i = 1234;
    long l = (unsigned char)(i+1);
    printf("%li\n", l);
}
This resulted from the addition of the signed-to-unsigned comparison optimization. Specifically, it calls TypeOf for the expressions on each side of the comparison, and this did not handle function calls. That support has now been added, and will give the proper return type for direct and indirect calls to C functions. The IR for tool calls doesn't include the return type (just the number of bytes), so we return cgVoid for them. This is OK for the present use case.
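A hypothetical example of a comparison that exercises this path (the return type of the call is needed to decide whether the unsigned comparison is safe):

#pragma optimize 1
unsigned char f(void)
{
    return 200;
}
int main(void)
{
    unsigned char c = 100;
    return f() > c;   /* both sides are unsigned byte values, so
                         the word comparison may be done unsigned;
                         should return 1 */
}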
Without this fix, an expression of the form "0 * exp" would be reduced to simply "0" unless exp contained a function call; other side effects of exp (such as assignments or increments) would be removed.
A similar issue could occur with additions that use the same expression on both sides of the "+": after optimization, it would only be evaluated once. I think the cases addressed here are all undefined behavior under the C standards, so the old behavior wasn't technically wrong, but the new behavior is still less confusing.
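For instance (a hypothetical example), the increment here must survive even though the multiplication by 0 is folded away:

#pragma optimize 1
int main(void)
{
    int i = 5;
    0 * i++;     /* folds to 0, but the side effect must remain */
    return i;    /* should return 6 */
}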
Specifically, this ensures that the depth-first numbering of basic blocks starts from 1, which is what ReachingDefinitions expects. Without this fix, reaching definitions wouldn't be correctly computed for functions that contain unreachable basic blocks (including the implicit one to return at the end). This could result in invalid hoisting of operations out of the loop.
This fixes the compca26.c test case.
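A sketch of the kind of function affected (hypothetical, not compca26.c itself): the implicit return block after the infinite loop is unreachable, which previously disturbed the depth-first numbering:

#pragma optimize 32
int g;
int main(void)
{
    int i = 0;
    for (;;) {           /* the implicit return after this loop
                            is an unreachable basic block */
        g = i;           /* must not be invalidly hoisted */
        if (++i == 10)
            return g;    /* should return 9 */
    }
}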
Specifically, convert signed word comparisons to unsigned if both sides are either unsigned byte values or non-negative constants. This is incorporated as part of intermediate code peephole optimization (bit 0).
This should alleviate some cases of performance regressions due to promoting char to int instead of unsigned int.
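For example (a hypothetical illustration), a comparison like the following can now be done as an unsigned word comparison:

#pragma optimize 1
int main(void)
{
    unsigned char a = 1, b = 2;
    return a < b;   /* both operands promote from unsigned char,
                       so the 16-bit comparison can be unsigned;
                       should return 1 */
}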
This fixes the compca22.c test case.
This optimization could be fixed and re-enabled, but to do so, you would have to check if the stored value is ever used subsequently, which is not information that's readily available in the peephole optimization pass. It would also be necessary to check if there are any stores to the same location within the right-side expression, which could kill the optimization.