This introduces a function to check whether the index portion of a pc_ixa intermediate code operation (used for array indexing) may be negative. This is also used when generating code for the large memory model, which can allow slightly more efficient code to be generated in some cases.
This fixes #45.
This type information is currently used when generating code for the large memory model, but not for the short memory model (which is a bug in itself, causing issues such as #45).
Because the correct type information was not being provided, the code generator could incorrectly use signed index computations when a 16-bit unsigned index value was used in large-memory-model code. The following program is an example that was being miscompiled:
#pragma optimize 1
#pragma memorymodel 1
char c[0xFFFF];
int main(void) {
    unsigned i = 0xABCD;
    c[0xABCD] = 3;
    return c[i]; /* should return 3 */
}
This optimization could apply when indexing into an array whose elements are a power-of-2 size using a 16-bit index value. It is now only used when addressing arrays on the stack (which are necessarily smaller than 64k).
The following program demonstrates the problem:
#pragma optimize 1
#pragma memorymodel 1
long c[40000];
int main(void) {
    int i = 30000;
    c[30000] = 3;
    return c[i]; /* should return 3 */
}
This could already be optimized out by the peephole optimizer, but it's bad enough code that it really shouldn't be generated even when not using that optimization.
This could generate bad code (e.g. invalidly moving stores ahead of loads, as in #44). It would be possible to do this validly in some cases, but it would take more work to do the necessary checks. For now, we'll just block the optimization for bitfield stores.
In combination with the previous commit, this fixes #44.
The code was not accounting for the possibility that the loaded-from location aliases with the destination of an indirect store in the loop, or for the possibility that it may be written by a function called in the loop. Since we don't have sophisticated alias analysis, we now conservatively assume there may be aliasing in all such cases.
This fixes#20 (compca20.c) and #21 (compca21.c).
This is what is required by the C standards.
This partially reverts a change in ORCA/C 2.1.0, which should only have been applied to hexadecimal escape sequences.
For example, the following is now allowed:
typedef void v;
void foo(v) {}
This appears to be permitted under at least C99 and C11 (the C89 wording is less clear), and is accepted by other modern compilers.
This is based on a patch from Kelvin Sherlock, and in turn on code from MPW IIgs ORCA/C, but with modifications to be more standards-compliant.
Bit 1 in #pragma ignore controls a new option to (non-standardly) treat character constants with three or more characters as having type long, so they can contain up to four bytes.
Note that this patch orders the bytes the opposite way from MPW IIgs ORCA/C, but the same way as GCC and Clang.
These are enabled when bit 15 is set in the #pragma debug directive.
Support is still needed to ensure these work properly with pre-compiled headers.
This patch is from Kelvin Sherlock.
Previously, the structure load would be treated as a common subexpression eligible for elimination, but the structure would always be treated as if it had a size of 4 bytes. If it did not, this would generally lead to a crash. (I'm also not sure if dependency analysis was being performed properly for these structures.)
The following program illustrates the problem:
#pragma optimize 17
struct mystruct { char x; } ms;
static void foo(struct mystruct pk) {}
int main(void)
{
    struct mystruct *p = &ms;
    foo(*p);
    foo(*p);
}
This could happen in certain cases where the destination is not considered "simple" (e.g. because it is a local array location that does not fit in the direct page).
The following program demonstrates the problem:
#pragma optimize 1
int main(void) {
    long temp1 = 1, temp2 = 2, A[64];
    long B[2] = {0};
    B[1] = temp1 + temp2;
    return B[1]; /* should return 3 */
}
This could occur because a temporary location might be used both in the l-value and r-value computations, but the final assignment code assumed it still had the value from the l-value computation.
The following function demonstrates this problem (*ip is not updated, and *p is trashed):
int badinc(char **p, int *ip)
{
    *ip += *(*p)++;
}
Previously, several optimizations would be disabled for the rest of the translation unit whenever the keyword 'volatile' appeared. Now, if 'volatile' is used within a function, it only reduces optimization for that function, since whatever was declared as 'volatile' will be out of scope after the function is over. Uses of 'volatile' outside functions still behave as before.
This bug could both cause accesses to volatile variables to be omitted, and also cause other expressions to be erroneously optimized out in certain circumstances.
As an example, both the access of x and the call to bar() would be erroneously removed in the following program:
#pragma optimize 1
volatile int x;
int bar(void);
void foo(void)
{
    if(x) ;
    if(bar()) ;
}
Note that this patch disables even more optimizations than previously if the 'volatile' keyword is used anywhere in a translation unit. This is necessary for correctness given the current design of ORCA/C, but it means that care should be taken to avoid unnecessary use of 'volatile'.
This bug occurred because the generated code tried to store part of the return address to a direct page offset of 256, but instead an offset of 0 was used, resulting in an invalid return address and (typically) a crash. It could occur if the function took one or more parameters, and the total size of parameters and local variables (including compiler-generated ones) was 254 bytes.
The following program demonstrates the problem:
int main(int argc, char **argv) {
    char x[244];
}
Global structs and unions with the const qualifier were not being generated in object files. This occurred because they were represented as having "defined types" and the code was not handling those types properly.
The following example demonstrates this problem:
const struct x { int i; } X = {9};
int main(void) {
    return X.i;
}
This problem could cause "duplicate symbol" and "undeclared identifier" errors, for example in the following program:
typedef int f1( void );
void bar( void ) {
    int i;
    f1 *foo;
    int baz;
    i = 10;
}
int foo;
long baz;
The C standards define "pp-number" tokens handled by the preprocessor using a syntax that encompasses various things that aren't valid integer or floating constants, or are constants too large for ORCA/C to handle. These cases would previously give errors even in code skipped by the preprocessor. With this patch, most such errors in skipped code are now ignored.
This is useful, e.g., to allow for #ifdefed-out code containing 64-bit constants.
There are still some cases involving pp-numbers that should be allowed but aren't, particularly in the context of macros.
This should give C99-compatible behavior, as far as it goes. The functions aren't actually inlined, but that's just a quality-of-implementation issue. No C standard requires actual inlining.
Non-static inline functions are still not supported. The C99 semantics for them are more complicated, and they're less widely used, so they're a lower priority for now.
The "inline" function specifier can currently only come after the "static" storage class specifier. This relates to a broader issue where not all legal orderings of declaration specifiers are supported.
Since "inline" was already treated as a keyword in ORCA/C, this shouldn't create any extra compatibility issues for C89 code.
This allows code like the following to compile:
#if 0
#if some bogus stuff !
#endif
#endif
This is what the C standards require. The change affects #if, #ifdef, and #ifndef directives.
This may be needed to handle code targeted at other compilers that allow pseudo-functions such as "__has_feature" in preprocessor expressions.
This is necessary to make the "compile" command halt and not process additional source files, as well as to make occ stop and not run the linker.
Previously, this was not happening when the #error directive was used, or when an undefined label was used in a goto.
This addresses the issue with the compco07.c test case.
Previously the result type was based on the operand types (using the arithmetic conversions), which is incorrect. The following program illustrates the issue:
#include <stdio.h>
int main(void)
{
    /* should print "1 0 2 2" */
    printf("%i %i %lu %lu\n", 0L || 2, 0.0 && 2,
           sizeof(1L || 5), sizeof(1.0 && 2.5));
}