This does not really do anything, because ORCA/C does not support multithreading, but the C11 and later standards indicate it should be allowed anyway.
If a struct contained a function pointer with a prototyped parameter list, processing the parameters could reset the declaredTagOrEnumConst flag, potentially leading to a spurious error, as in this example:
struct S {
    int (*f)(int);
};
This also gives a better error for structs declared as containing functions.
Note that this implementation allows anonymous structures and unions to participate in initialization. That is, you can have a braced initializer list corresponding to an anonymous structure or union. Also, anonymous structures within unions follow the initialization rules for structures (and vice versa).
I think the better interpretation of the standard text is that anonymous structures and unions cannot participate in initialization as such, and instead their members are treated as members of the containing structure or union for purposes of initialization. However, all other compilers I am aware of allow anonymous structures and unions to participate in initialization, so I have implemented it that way too.
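Here is an illustrative example of what the implemented interpretation allows: the inner braces correspond to the anonymous structure, as if it were a named member.
struct S {
    int a;
    struct {        /* anonymous structure */
        int b;
        int c;
    };
    int d;
};
struct S s = {1, {2, 3}, 4};   /* braced initializer for the anonymous structure */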
C90 had constraints requiring # and ## tokens to only appear in preprocessing directives, but C99 and later removed those constraints, so this code is no longer necessary when targeting current language versions. (It would be necessary in a "strict C90" mode, if that were ever implemented.)
The main practical effect of this is that # and ## tokens can be passed as arguments to macros, provided the macro either ignores or stringizes that parameter. # and ## tokens still have no role in the grammar of the C language after preprocessing, so they will be unexpected tokens and produce some kind of error if they appear anywhere.
This also contains a change to ensure that a line containing one or more illegal characters (e.g. $) and then a # is not treated as a preprocessing directive.
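As an illustrative sketch, a # token can now be passed to a macro that stringizes its parameter:
#define STR(x) #x
const char *s = STR(#);   /* expands to "#" */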
This accords with its definition in the C standards. For the time being, the old form of three separate tokens is still accepted too, because the ... token may not be scanned correctly in the obscure case where there is a line continuation between the second and third dots.
One observable effect of this is that there are no longer spaces between the dots in #pragma expand output.
This enforces the constraint from C17 section 6.7 p2 that declarations "shall declare at least a declarator (other than the parameters of a function or the members of a structure or union), a tag, or the members of an enumeration."
Somewhat relaxed rules are used for enums in the default loose type checking mode, similar to what GCC and Clang do.
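As a sketch of what this constraint covers (with the enum cases subject to the relaxed rules mentioned above):
int;                    /* error: no declarator, tag, or enumeration members */
struct { int x; };      /* error: member declarations alone do not count */
struct S { int x; };    /* OK: declares the tag S */
enum { A, B };          /* OK: declares the enumeration constants A and B */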
A declaration of this exact form always declares the tag T within the current scope, and as such makes this "struct T" a distinct type from any other "struct T" type in an outer scope. (Similarly for unions.)
See C17 section 6.7.2.3 p7 (and corresponding places in all other C standards).
Here is an example of a program affected by this:
struct S {char a;};
int main(void) {
    struct S;
    struct S *sp;
    struct S {long b;} s;
    sp = &s;
    sp->b = sizeof(*sp);
    return s.b;
}
They are now represented in local structures instead. This keeps the representation of declaration specifiers together and eliminates the need for awkward and error-prone code to save and restore the global variables.
This differs from the usual ORCA/C behavior of treating all floating-point parameters as extended. With the option enabled, they will still be passed in the extended format, but will be converted to their declared type at the start of the function. This is needed for strict standards conformance, because you should be able to take the address of a parameter and get a usable pointer to its declared type. The difference in types can also affect the behavior of _Generic expressions.
The implementation of this is based on ORCA/Pascal, which already did the same thing (unconditionally) with real/double/comp parameters.
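Here is an illustrative sketch of how the parameter's type can be observed (assuming that, without the new option, the parameter effectively has the extended type, which corresponds to long double in ORCA/C):
int kind(float f) {
    /* With the option enabled, f has its declared type, so this selects 1;
       under the historical treatment of the parameter as extended, the
       result could differ. */
    return _Generic(f, float: 1, long double: 2, default: 0);
}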
If strict type checking is enabled, this will prohibit redefinition of enums, like:
enum E {a,b,c};
enum E {x,y,z};
It also prohibits use of an "enum E" type specifier if the enum has not been previously declared (with its constants).
These things were historically supported by ORCA/C, but they are prohibited by constraints in section 6.7.2.3 of C99 and later. (The C90 wording was different and less clear, but I think they were not intended to be valid there either.)
This affects code like the following:
enum E {a,b,c};
int main(void) {
    enum E e;
    struct E {int x;}; /* or: enum E {x,y,z}; */
}
The line "enum E e;" should refer to the enum type declared in the outer scope, but not redeclare it in the inner scope. Therefore, a subsequent struct, union, or enum declaration using the same tag in the same scope is acceptable.
This avoids strangeness where an enum tag declared within a typedef declaration would act like a typedef. For example, the following would compile without error:
typedef enum E {a,b,c} T;
E e;
The basic approach is to generate a single expression tree containing the code for the initialization plus the reference to the compound literal (or its address). The various subexpressions are joined together with pc_bno pcodes, similar to the code generated for the comma operator. The initializer expressions are placed in a balanced binary tree, so that it is not excessively deep.
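As an illustration, this is the kind of construct the approach applies to: a compound literal whose elements are computed at run time and whose address is then used within the enclosing expression (example for illustration only).
int sum3(int x) {
    int *p = (int[3]){x, x + 1, x + 2};
    return p[0] + p[1] + p[2];
}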
Note: Common subexpression elimination has poor performance for very large trees. This is not specific to compound literals, but compound literals for relatively large arrays can run into this issue. It will eventually complete and generate a correct program, but it may be quite slow. To avoid this, turn off CSE.
Initialized variables have always been one of the things that stops PCH generation, but previously this was only detected when trying to write out the symbol records at the point of a later #include. On a subsequent compile using the sym file, nothing recognized that PCH generation had stopped for this reason, so the PCH code would treat the later #include as a potential opportunity to extend the sym file and would therefore delete it to force regeneration next time. This led to the sym file being deleted and regenerated on alternate compiles, so its full benefit was not realized.
There is code in Header.pas to abort PCH generation if an initialized symbol is found. That is probably superfluous after this change, but it has been left in place for now.
There were various places where the flag for macro expansions was saved, set to false, and then later restored. If #pragma expand was used within those areas, it would not be properly applied. Here is an example showing that problem:
void f(void
#pragma expand 1
) {}
This could also affect some uses of #pragma expand within precompiled headers, e.g.:
#pragma expand 1
#include "a.h"
#undef foobar
#include "b.h"
...
Also, add a note saying that code in precompiled headers will not be expanded. (This has always been the case, but was not clearly documented.)
This affects functions whose body spans multiple files due to includes, or is treated as doing so due to #line directives. ORCA/C will now generate a COP 6 instruction to record each source file change, allowing debuggers to properly track the flow of execution across files.
There were several forms that should be permitted but were not, such as &"str"[1], &*"str", &*a (where a is an array), and &*f (where f is a function).
This fixes #15 and also certain other cases illustrated in the following example:
char a[10];
int main(void);
static char *s1 = &"string"[1];
static char *s2 = &*"string";
static char *s3 = &*a;
static int (*f2)(void) = &*main;
This is necessary to allow declarations of pascal-qualified function pointers as members of a structure, among other things.
Note that the behavior for "pascal" now differs from that for the standard function specifiers, which have more restrictive rules for where they can be used. This is justified by the fact that the "pascal" qualifier is allowed and meaningful for function pointer types, so it should be able to appear anywhere they can.
This fixes #28.
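Here is an illustrative sketch of the kind of declaration this enables (the placement of the pascal keyword follows the style of the other examples in these notes):
struct Callbacks {
    pascal void (*handler)(int code);   /* pascal function pointer as a member */
};
extern pascal void OnEvent(int code);
struct Callbacks cb = { OnEvent };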
The parameters of the underlying function type were not being reversed when applying the "pascal" qualifier to a function pointer type. This resulted in the parameters not being in the expected order when a call was made using such a function pointer. This could result in spurious errors in some cases or inappropriate parameter conversions in others.
This fixes #75.
Failing to do this could allow the type spec to be overwritten if the expression contained another type name within it (e.g. a cast). This could cause the wrong type to be computed, which could lead to incorrect behavior for constructs that use type names, e.g. sizeof.
Here is an example program that demonstrated the problem:
int main(void) {
    return sizeof(short[(long)50]);
}
This gives an error for code like the following, which was previously allowed:
void (*p)(int);
void p(int i) {}
Note that the opposite order still does not give a compiler error, but does give linker errors. Making sure we give a compiler error for all similar cases would require larger changes, but this patch at least catches some erroneous cases that were previously being allowed.
This could occur with strict type checking on, because the parameter types were compared at a point where they had been reversed for the original declaration but not for the subsequent one.
Here is an example that would give an error:
#pragma ignore 24
extern pascal void func(int, long);
extern pascal void func(int, long);
When initializing (e.g.) an array of arrays of char, a string literal would be taken as an initializer for the outer array rather than for an inner array, so not all elements would be initialized properly. This was a bug introduced in commit 222c34a385.
This bug affected the C4.6.4.2.CC test case, and the following reduced version:
#include <stdio.h>
#include <string.h>
int main(void) {
    char ch2[][20] = {"for all good people", "to come to the aid "};
    if (strcmp(ch2[1], "to come to the aid "))
        puts("Failed");
}
For example, declarations like the following should be accepted:
char *p[] = {"abc", "def"};
This previously worked, but it was broken by commit 5871820e0c.
Parameters declared directly with array types were already adjusted to pointer types in commit 5b953e2db0, but this code is needed for the remaining case where a typedef'd array type is used.
With these changes, 'array' parameters are treated for all purposes as really having pointer types, which is what the standards call for. This affects at least their size as reported by sizeof and the debugging information generated for them.
This has the side effect of treating most parameters declared as arrays as actually having pointer types. This affects the value returned by sizeof, among other things. The new behavior is correct under the C standards; however, it does not yet apply when using a typedef'd array type.
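Here is an illustrative example of the adjusted behavior: inside f, the parameter declared with an array type really has pointer type, so sizeof reports the size of a pointer rather than the size of the array.
#include <stdio.h>
void f(int a[10]) {
    printf("%lu\n", (unsigned long)sizeof(a));    /* size of int *, not of int[10] */
}
int main(void) {
    int arr[10];
    printf("%lu\n", (unsigned long)sizeof(arr));  /* 10 * sizeof(int) */
    f(arr);
}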
In the new implementation, variable arguments are not removed until the end of the function. This allows variable argument processing to be restarted, and it prevents the addresses of local variables from changing in the middle of the function. The requirement to turn off stack repair code around varargs functions is also removed.
This fixes #58.
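Here is an illustrative sketch of restarted variable argument processing, which the new implementation is intended to allow:
#include <stdarg.h>
#include <stdio.h>
long sum_twice(int count, ...) {
    va_list ap;
    long total = 0;
    int i;
    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    va_start(ap, count);            /* restart from the first variable argument */
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
int main(void) {
    printf("%ld\n", sum_twice(3, 1, 2, 3));   /* prints 12 */
}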
We previously ignored this, but it is a constraint violation under the C standards, so it should be reported as an error.
GCC and Clang allow this as an extension, as we were effectively doing previously. We will follow the standards for now, but if there was demand for such an extension in ORCA/C, it could be re-introduced subject to a #pragma ignore flag.
When using such a string literal to initialize an array with automatic storage duration, the bytes after the first null would be set to 0, rather than the values from the string literal.
Here is an example program showing the problem:
#include <stdio.h>
int main(void) {
    char s[] = "a\0b";
    puts(s+2);
}
Compound literals outside of functions should work at this point.
Compound literals inside of functions are not fully implemented, so they are disabled for now. (There is some code to support them, but the code to actually initialize them at the appropriate time is not written yet.)
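For example, a file-scope compound literal like the following should now be accepted; it has static storage duration, so its address can be used in a static initializer (example for illustration only):
int *p = (int[]){1, 2, 3};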
This is a change that was introduced in C17. However, it actually keeps things closer to ORCA/C's historical behavior, which generally ignored qualifiers in return types.
This causes an error to be produced when trying to assign to these fields, which was being allowed before. It is also necessary for correct behavior of _Generic in some cases.
This behavior is specified by the C standards. It can come up when declaring an array using a typedef'd array type and a qualifier.
This is necessary for correct behavior of _Generic, as well as to give an error if code tries to write to const arrays declared in this way.
Here is an example showing these issues:
#define f(e) _Generic((e), int *: 1, const int *: 2, default: 0)
int main(void) {
    typedef int A[2][3];
    const A a = {{4, 5, 6}, {7, 8, 9}};
    _Static_assert(f(&a[0][0]) == 2, "qualifier error"); // OK
    a[1][1] = 42; // error
}
These are needed to correctly distinguish pointer types in _Generic. They should also be used for type compatibility checks in other contexts, but currently are not.
This also fixes a couple small problems related to type qualifiers:
* restrict was not allowed to appear after * in type-names
* volatile status was not properly recorded in sym files
Here is an example of using _Generic to distinguish pointer types based on the qualifiers of the pointed-to type:
#include <stdio.h>
#define f(e) _Generic((e),\
    int * restrict *: 1,\
    int * volatile const *: 2,\
    int **: 3,\
    default: 0)
#define g(e) _Generic((e),\
    int *: 1,\
    const int *: 2,\
    volatile int *: 3,\
    default: 0)
int main(void) {
    int * restrict * p1;
    int * volatile const * p2;
    int * const * p3;
    // should print "1 2 0 1"
    printf("%i %i %i %i\n", f(p1), f(p2), f(p3), f((int * restrict *)0));
    int *q1;
    const int *q2;
    volatile int *q3;
    const volatile int *q4;
    // should print "1 2 3 0"
    printf("%i %i %i %i\n", g(q1), g(q2), g(q3), g(q4));
}
Here is an example of a problem resulting from volatile not being recorded in sym files (if a sym file was present, the read of x was lifted out of the loop):
#pragma optimize -1
static volatile int x;
#include <stdio.h>
int main(void) {
    int y;
    for (unsigned i = 0; i < 100; i++) {
        y = x*2 + 7;
    }
}
Enumeration constants must have values representable as an int (i.e. 16-bit signed values, in ORCA/C), but errors were not being reported if code tried to use the values 0xFFFF8000 to 0xFFFFFFFF. This problem could also affect certain larger values of type unsigned long long. The issue stemmed from not properly accounting for whether the constant expression had a signed or unsigned type.
This sample code demonstrated the problem:
enum E {
    a = 0xFFFFFFFF,
    b = 0xFFFF8000,
    y = 0x7FFFFFFFFFFFFFFFull,
    z = 0x8000000000000000
};
These were previously treated as having type int. This resulted in incorrect results from sizeof, and would also be a problem for _Generic if it was implemented.
Note that this creates a token kind of "charconst", but this is not the kind for character constants in the source code. Those have type int, so their kind is intconst. The new kinds of "tokens" are created only through casts of constant expressions.
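As an illustrative sketch (assuming the affected constants include casts of constant expressions to character types), sizeof applied to such a cast should now reflect the type of the cast:
_Static_assert(sizeof((char)'a') == 1, "cast result should have type char");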
The FENV_ACCESS pragma is now implemented. It causes floating-point operations to be evaluated at run time to the maximum extent possible, so that they can affect and be affected by the floating-point environment. It also disables optimizations that might evaluate floating-point operations at compile time or move them around calls to the <fenv.h> functions.
The FP_CONTRACT and CX_LIMITED_RANGE pragmas are also recognized, but they have no effect. (FP_CONTRACT relates to "contracting" floating-point expressions in a way that ORCA/C does not do, and CX_LIMITED_RANGE relates to complex arithmetic, which ORCA/C does not support.)
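Here is an illustrative sketch of code that relies on FENV_ACCESS: with the pragma in effect, the division must be performed at run time so that it raises the expected floating-point exception flag.
#include <fenv.h>
#include <stdio.h>
#pragma STDC FENV_ACCESS ON
int main(void) {
    double zero = 0.0;
    double result;
    feclearexcept(FE_ALL_EXCEPT);
    result = 1.0 / zero;                 /* must be evaluated at run time */
    if (fetestexcept(FE_DIVBYZERO))
        printf("FE_DIVBYZERO raised (result = %g)\n", result);
    return 0;
}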
This gives the name of the current function, as if the following definition appeared at the beginning of the function body:
static const char __func__[] = "function-name";
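For example:
#include <stdio.h>
void greet(void) {
    printf("in %s\n", __func__);   /* prints "in greet" */
}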
This affected initializers like the following:
static int a[50];
static int *ip = &a[0] + 2U;
Also, introduce some basic range checks for calculations that are obviously outside the 65816's address space.
These use a new calling convention specific to functions returning these types. When such functions are called, the caller must set the X register to the address within bank 0 that the return value is to be saved to. The function is then responsible for saving it there before returning to the caller.
Currently, the calling code always makes space for the return value on the stack and sets X to point to that. (As an optimization, it would be possible to have the return value written directly to a local variable on the direct page, with no change needed to the function being called, but that has not yet been implemented.)
Currently, the actual values they can have are still constrained to the 32-bit range. Also, there are some bits of functionality (e.g. for initializers) that are not implemented yet.
This generalizes the heuristic approach for checking whether _Noreturn functions could execute to the end of the function, extending it to apply to any function with a non-void return type. These checks use the same #pragma lint bit but give different messages depending on the situation.
This uses a heuristic that may produce both false positives and false negatives, but any false positives should reflect extraneous code at the end of the function that is not actually reachable.
Currently, this only flags return statements, not cases where they may execute to the end of the function. (Whether the function will actually return is not decidable in general, although it may be in special cases).
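Here is an illustrative example of what the extended check is intended to flag: a function with a non-void return type that can execute to the end of its body.
int sign(int x) {
    if (x > 0)
        return 1;
    if (x < 0)
        return -1;
    /* can fall off the end when x == 0: flagged by the lint check */
}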
This currently checks for the following (see the example below):
* Calls to undefined functions (same as bit 0)
* Parameters not declared in K&R-style function definitions
* Declarations or type names with no type specifiers (includes but is broader than the condition checked by bit 1)
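For example, a K&R-style definition like the following would trigger all three checks: g is undefined, the parameter i is not declared, and f itself has no type specifiers.
f(i)
{
    g();
    return 0;
}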
Previously, the designated initializer syntax could confuse the parser enough to cause null pointer dereferences. This avoids that, and also gives a more meaningful error message to the user.
In combination with earlier patches, this fixes #53.
Also, if the lint flag requiring explicit function types is set, K&R-style parameters must now be explicitly declared with types, rather than being left undeclared and defaulting to int. (This is a requirement in C99 and later.)
Previously, these would report "identifier expected"; now they correctly say "')' expected".
This introduces a new UnexpectedTokenError procedure that can be used more generally for cases where the expected token may differ based on context.
_Thread_local is recognized but gives a "not supported" error. It could arguably be 'supported' trivially by saying the execution of an ORCA/C program is just one thread and so no special handling is needed, but that likely isn't what someone using it would expect.
There would be a possible issue if a "static" or "typedef" storage class specifier occurred after a type specifier that required memory to be allocated for it: that memory conceptually might be in the local pool, but static objects are processed at the end of the translation unit, so their types need to stick around. In practice, this should not occur, because the local pool isn't currently used for much (in particular, not for statements or declarations in the body of a function), but we give an error in case it somehow does.
In combination with preceding commits, this fixes#14. Declaration specifiers can now appear in any order, as required by the C standards.
This includes both the standard ones (inline and _Noreturn) and the ORCA/C-specific ones (asm and pascal). They can now be freely mixed with other declaration specifiers.
Some errors related to function specifiers are not yet detected.
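As an illustrative sketch, orderings like the following should now be accepted:
int static f(void);                                  /* storage class after the type specifier */
unsigned inline static long g(void) { return 0; }    /* function specifier mixed in freely */
const long volatile unsigned int x = 0;              /* qualifiers and type specifiers interleaved */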
This could happen in a declaration like "char _Alignas(long) c;", where typeSpec wound up specifying long rather than char.
Also, tweak error checks for _Alignas and _Atomic.
_Bool, _Complex, _Imaginary, _Atomic, restrict, and _Alignas are now recognized in types, but all except restrict and _Alignas will give an error saying they are not supported.
This also introduces uniform definitions of the syntactic classes of tokens that can be used in declaration specifiers and related constructs (currently used in some places but not yet in others).
These qualifiers were previously sometimes accepted between the name and left brace of struct and enum type specifiers. This was non-standard and is no longer allowed.
Type specifiers and type qualifiers can now appear in any order, as specified by the C standards. However, storage class specifiers and function specifiers still cannot be freely mixed with them.
As of C11, type names are now used as part of the declaration syntax (in _Alignas and _Atomic specifiers), in addition to their uses in expressions. Moving the TypeName method will allow it to be called when processing declarations.
When these invalid sym files were used during subsequent compiles, certain type pointers (for what should be const-qualified struct or union types) could be left uninitialized, or possibly initialized pointing to different types. This could result in spurious errors or potentially in other problems.
This relates to unions or structs that are "filled" with zeros because the initializer does not include explicit terms for them, and that contain bit-fields or (for unions) do not start with the longest member.
The following program is an example that was miscompiled:
#include <stdio.h>
struct BF {
    int i:3;
    int j:4;
};
union U {
    int i;
    long l;
};
struct Outer1 {
    int n;
    struct BF bf[7];
    union U u[5];
};
struct Outer2 {
    long p;
    struct Outer1 o1;
    long q;
};
int main(void) {
    static struct Outer2 s = {1,{0},212};
    printf("%li %li\n", s.p, s.q);
}
This fixes case 1 (dealing with run-time initialization of structures containing bit-fields) and case 2 (dealing with initialization of structs where initializer values are not provided for all elements) from issue #59. It also fixes cases that could result in invalid initialization of unions if their first element was not the longest, as in the following example:
#include <stdio.h>
union U {
    int i;
    long l;
};
int main(void) {
    union U a[5] = {1,2,3,4};
    printf("a[0].i=%i, a[1].i=%i, a[2].i=%i, a[3].i=%i, a[4].i=%i\n",
           a[0].i, a[1].i, a[2].i, a[3].i, a[4].i);
}