Previously, when a struct type first appeared in a symbol table nested within another struct type, subsequent references to that type would use the wrong offset and be corrupted. This occurred because the symbol table length had not yet been updated to reflect the size of the entry for the outer structure at the time the inner one was processed.
Fixes #54.
It had been changed to reflect changes in the ORCALib code that added a second putback buffer element, but those changes were problematic and have been reverted for now. (It's also not clear if ORCALib binaries with the larger putback buffer were ever distributed--at the least, they aren't on Opus ][ or in any of the ORCA/C 2.2.0 beta releases.)
This allows the code to be displayed properly on GitHub and in modern text editors, which typically do not support the irregularly-spaced tab stops used for ORCA/M code. It also avoids any possibility of problems building the code if the SysTabs file is missing or has been customized with non-standard tab stops.
If there are no varargs calls (and nothing else that saves stack positions), then space doesn't need to be allocated for the saved stack position. This can also lead to more efficient prolog/epilog code for small functions.
Previously, the stack repair code always generated code to save and restore a register, but this can be omitted except in cases where a 32-bit value or pointer is returned.
Previously, when stack repair code was generated, it always included instructions to save and restore a previously-saved stack position, but this was only actually used for function calls nested within the arguments to other function calls using stack repair code. Now that code is only generated in cases where it is needed, and the stack repair code for other calls is simplified to omit it.
This optimization affects all (non-nested) function calls when not using optimize bit 3, and varargs function calls when not using optimize bit 6.
This could occur due to the new native-code peephole optimizations for stz instructions, which can collapse consecutive identical ones down to one instruction. This is OK most of the time, but not when dealing with volatile variables, so disable it in that case.
The following test case shows the issue (look at the generated code):
#pragma optimize -1
volatile int a;
int main(void) {
    a = 0;
    a = 0;
}
This could happen in native-code peephole optimization if two stz instructions targeting different global/static locations occurred consecutively.
This was a regression introduced by commit a3170ea7.
The following program demonstrates the problem:
#pragma optimize 1+2+8+64
int i,j=1;
int main (void) {
    i = 0;
    j = 0;
    return j; /* should return 0 */
}
These mainly related to situations where the optimization of multiple natural loops (including those created by continue statements) could interact to generate invalid results. Invalid optimizations could also be performed in certain other cases where there were multiple goto statements targeting a single label and at least one of them formed a loop.
These issues are addressed by appropriately adjusting the control flow and updating various data structures after each loop is processed during loop invariant removal.
This fixes #18 (compca18.c).
This should implement the C standard rules about making macro name tokens ineligible for replacement, except that it does not currently handle cases of nested replacements (such as a cycle of mutually-referential macros).
This fixes #12. There are still a couple other bugs with macro expansion in obscure cases, but I'll consider them separate issues.
The macro was slightly broken in that its 'buf' argument might be evaluated twice. This could be a problem if it was, e.g., a call to an allocation function.
This is needed to ensure correct behavior in cases where the macro is bypassed to access the library function, e.g. by enclosing the function name in parentheses or by taking its address.
These cases should now always work when using an expression of type unsigned as the index. They will work in some cases but not others when using an int as the index: making those cases work consistently would require more extensive changes and/or a speed hit, so I haven't done it for now.
Note that this now uses an "unsigned multiply" operation for all 16-bit index computations. This should actually work even when the index is a negative signed value, because it will wind up producing (the low-order 16 bits of) the right answer. The signed multiply, on the other hand, generally does not produce the low-order 16 bits of the right answer in cases where it overflows.
The following program is an example that was miscompiled (both with and without optimization):
int c[20000] = {3};
int main(void) {
    int *p;
    unsigned i = 17000;
    p = c + 17000u;
    return *(p-i); /* should return 3 */
}
This could occur with computations where multiple variables were added to a pointer.
The following program is an example that was miscompiled:
#pragma optimize 1
#pragma memorymodel 1
char c[80000];
int main(void) {
    unsigned i = 30000, j = 40000;
    c[70000] = 3;
    return *(c+i+j); /* should return 3 */
}
This introduces a function to check whether the index portion of a pc_ixa intermediate code operation (used for array indexing) may be negative. This is also used when generating code for the large memory model, which can allow slightly more efficient code to be generated in some cases.
This fixes #45.
This type information is currently used when generating code for the large memory model, but not for the short memory model (which is a bug in itself, causing issues such as #45).
Because the correct type information was not being provided, the code generator could incorrectly use signed index computations when a 16-bit unsigned index value was used in large-memory-model code. The following program is an example that was being miscompiled:
#pragma optimize 1
#pragma memorymodel 1
char c[0xFFFF];
int main(void) {
    unsigned i = 0xABCD;
    c[0xABCD] = 3;
    return c[i]; /* should return 3 */
}
This optimization could apply when indexing into an array whose elements are a power-of-2 size using a 16-bit index value. It is now only used when addressing arrays on the stack (which are necessarily smaller than 64k).
The following program demonstrates the problem:
#pragma optimize 1
#pragma memorymodel 1
long c[40000];
int main(void) {
    int i = 30000;
    c[30000] = 3;
    return c[i]; /* should return 3 */
}
This could already be optimized out by the peephole optimizer, but it's bad enough code that it really shouldn't be generated even when not using that optimization.
This could generate bad code (e.g. invalidly moving stores ahead of loads, as in #44). It would be possible to do this validly in some cases, but it would take more work to do the necessary checks. For now, we'll just block the optimization for bitfield stores.
In combination with the previous commit, this fixes #44.
The code was not accounting for the possibility that the loaded-from location aliases with the destination of an indirect store in the loop, or for the possibility that it may be written by a function called in the loop. Since we don't have sophisticated alias analysis, we now conservatively assume there may be aliasing in all such cases.
This fixes #20 (compca20.c) and #21 (compca21.c).
This is what is required by the C standards.
This partially reverts a change in ORCA/C 2.1.0, which should only have been applied to hexadecimal escape sequences.
For example, the following is now allowed:
typedef void v;
void foo(v) {}
This appears to be permitted under at least C99 and C11 (the C89 wording is less clear), and is accepted by other modern compilers.