This is allowed by the syntax of the C standard, but it previously gave a spurious error in ORCA/C, because the parenthesized type name at the beginning of the compound literal was parsed as the complete operand of sizeof.
Here is an example program affected by this:
int main(void) {
    return sizeof (char[]){1,2,3}; // should return 3
}
These are tokens that follow the syntax for a preprocessing number, but not for an integer or floating constant after preprocessing. They are now allowed within the preprocessing phases of the compiler. They are not legal after preprocessing, but they may be used as operands of the # and ## preprocessor operators to produce legal tokens.
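As a sketch of what this permits, consider token pasting (the macro name PASTE is just for illustration):
#define PASTE(a,b) a##b
int main(void) {
    return PASTE(0x, 10); /* 0x by itself is a pp-number but not a valid
                             constant; pasting yields the legal constant 0x10 */
}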
The issue was that if a 64-bit value was being loaded via one pointer and stored via another, the load and store parts could both be using y for their indexing, but they would clash with each other, potentially leading to loads coming from the wrong place.
Here are some examples that illustrate the problem:
/* example 1 */
int main(void) {
    struct {
        char c[16];
        long long x;
    } s = {.x = 0x1234567890abcdef}, *sp = &s;
    long long ll, *llp = &ll;
    *llp = sp->x;
    return ll != s.x; // should return 0
}
/* example 2 */
int main(void) {
    struct {
        char c[16];
        long long x;
    } s = {.x = 0x1234567890abcdef}, *sp = &s;
    long long ll, *llp = &ll;
    unsigned i = 0;
    *llp = sp[i].x;
    return ll != s.x; // should return 0
}
/* example 3 */
int main(void) {
    long long x[2] = {0, 0x1234567890abcdef}, *xp = x;
    long long ll, *llp = &ll;
    unsigned i = 1;
    *llp = xp[i];
    return ll != x[1]; // should return 0
}
The code was not properly adding in the offset of the 64-bit value from the pointed-to location, so the wrong memory location would be accessed. This affected indirect accesses to non-initial structure members, when used as operands to certain operations.
Here is an example showing the problem:
#include <stdio.h>
long long x = 123456;
struct S {
    long long a;
    long long b;
} s = {0, 123456};
int main(void) {
    struct S *sp = &s;
    if (sp->b != x) {
        puts("error");
    }
}
They were not being properly recognized as structs/unions, so they were being passed by address rather than by value as they should be.
Here is an example affected by this:
struct S {int a,b,c,d;};
int f(struct S s) {
    return s.a + s.b + s.c + s.d;
}
int main(void) {
    const struct S s = {1,2,3,4};
    return f(s);
}
The optimization applies to code sequences like:
    dec abs
    lda abs
    beq ...
where the dec and lda were supposed to refer to the same location.
There were two problems with this optimization as written:
- It considered the dec and lda to refer to the same location even if they were actually references to different elements of the same array.
- It did not work in the case where the A register value was needed in subsequent code.
The first of these was already an issue in previous ORCA/C releases, as in the following example:
#pragma optimize -1
int x[2] = {0,0};
int main(void) {
    --x[0];
    if (x[1] != 0)
        return 123;
    return 0; /* should return 0 */
}
I do not believe the second problem was triggered by any code sequences generated in previous releases of ORCA/C, but it can be triggered after commit 4c402fc88, e.g. by the following example:
#pragma optimize -1
int x = 1;
int main(void) {
    int y = 123;
    --x;
    return x == 0; /* should return 1 */
}
Since the circumstances where this peephole optimization was validly triggered are quite obscure, simply disabling it should have minimal impact on the generated code.
There were a couple of issues here:
* If the type name contained a semicolon (as in struct/union member declarations), a spurious error would be reported.
* Tags or enumeration constants declared in the type name should be in scope within the loop, but were not.
These both stemmed from the way the parser handled the third expression, which was to save the tokens from it and re-inject them at the end of the loop. To get the scope issues right, the expression really needs to be evaluated at the point where it occurs, so we now do that. To enable that while still placing the code at the end of the loop, a mechanism to remove and re-insert sections of generated code is introduced.
Here is an example illustrating the issues:
int main(void) {
    int i, j, x;
    for (i = 0; i < 123; i += sizeof(struct {int a;}))
        for (j = 0; j < 123; j += sizeof(enum E {A,B,C}))
            x = i + j + A;
}
Previously, there were a couple of problems:
* If the parameter that was passed an empty argument appeared directly after the ##, the ## would be permanently removed from the macro record, affecting subsequent uses of the macro even if the argument was not empty.
* If the parameter that was passed an empty argument appeared between two ## operators, both would effectively be skipped, so the tokens to the left of the first ## and to the right of the second would not be combined.
This example illustrates both issues (not expected to compile; just check preprocessor output):
#pragma expand 1
#define x(a,b,c) a##b##c
x(1, ,3)
x(a,b,c)
Previously, it was not necessarily set correctly for the newly generated token. This would result in incorrect behavior if that token was an operand of another ## operator, as in the following example:
#define x(a,b,c) a##b##c
x(1,2,3)
There was code that would attempt to use the cType field of the type record, but this is only valid for scalar types, not pointer types. In the case of a pointer type, the upper two bytes of the pointer would be interpreted as a cType value, and if they happened to have one of the values being tested for, incorrect intermediate code would be generated. The lower two bytes of the pointer would be used as a baseType value; this would most likely result in "compiler error" messages from the code generator, but might cause incorrect code generation with no errors if that value happened to correspond to a real baseType.
Code like the following might cause this error, although it only occurs if pointers have certain values and therefore depends on the memory layout at compile time:
void f(const int **p) {
    (*p)++;
}
This bug was introduced in commit f2a66a524a.
Division by zero produces undefined behavior if it is evaluated, but in general we cannot tell whether a given expression will actually be evaluated at run time, so we should not report this as a compile-time error.
We still report an error for division by zero in constant expressions that need to be evaluated at compile time. We also still produce a lint message about division by zero if the appropriate flag is enabled.
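For instance, here is a minimal sketch of code that should now compile (the lint flag, if enabled, can still flag the division):
int f(int flag) {
    if (flag)
        return 1 / 0; /* undefined behavior only if actually evaluated */
    return 0;
}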
The second parameter of #pragma float is now optional, and if it is missing or invalid then the FPE slot is auto-detected by the start-up code. This is done by calling the new ~InitFloat function in the FPE version of SysFloat.
This allows valid FPE-using programs to be compiled using only #pragma float, with no changes needed to the code itself.
The slot-setting code is only generated if the slot is 1..7, and even then it can be overridden by calling setfpeslot(), so this should not cause compatibility problems for existing code.
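For instance, a source file might now contain just the following, with the slot argument omitted (the first argument's value here is purely illustrative and keeps whatever meaning #pragma float already assigns it):
#pragma float 1 /* no slot argument: FPE slot is auto-detected at startup */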
This could occur because when FindSymbol was called to look for symbols in all spaces, it would find a tag in an inner scope before a typedef in an outer scope. The processing order has been changed to look for regular symbols (including typedefs) in any scope, and only look for tags if no regular symbol is found.
Here is an example illustrating the problem:
typedef int T;
int main(void) {
    struct T;
    T x;
}
This occurred due to looking for the symbol in all namespaces rather than only variable space.
Here is an example affected by this:
int X;
int main(void) {
    struct X {int i;};
    static int *i = &X;
}
If an identifier is used as a typedef in an outer scope but then declared as something else in an inner scope (e.g. a variable name or tag), and that same identifier is the next token after the end of the inner scope, it would not be recognized properly as a typedef name, leading to spurious errors.
Here is an example that triggered this:
typedef char Type;
void f(int Type);
Type t;
Here is another one:
int main(void) {
    typedef int S;
    if (1)
        (struct S {int a;} *)0;
    S x;
}
This adds debugging code to detect null pointer dereferences, as well as pointer arithmetic on null pointers (which is also undefined behavior, and can lead to later dereferences of the resulting pointers).
Note that ORCA/Pascal can already detect null pointer dereferences as part of its more general range-checking code. This implementation for ORCA/C will report the same error as ORCA/Pascal ("Subrange exceeded"). However, it does not include any of the other forms of range checking that ORCA/Pascal does, and (unlike in ORCA/Pascal) it is controlled by a separate flag from stack overflow checking.
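A minimal sketch of code the new checks would catch at run time (assuming the new checking flag is enabled):
int main(void) {
    int *p = 0;
    int *q = p + 1; /* arithmetic on a null pointer: detected */
    return *p;      /* null dereference: reported as "Subrange exceeded" */
}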
This occurs when the constant value is out of range of the type being assigned to. This is likely indicative of an error, or of code that assumes types have larger ranges than they do in ORCA/C (e.g. 32-bit int).
This intentionally does not report cases where a value is assigned to a signed type but is within the range of the corresponding unsigned type, or vice versa. These may be done intentionally, e.g. setting an unsigned value to "-1" or setting a signed value using a hex constant with the high bit set. Also, only conversions to 8-bit or 16-bit integer types are currently checked.
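For example (with ORCA/C's 16-bit int):
int main(void) {
    int i = 123456;  /* out of range of int: reported */
    char c = 300;    /* out of range of char: reported */
    unsigned u = -1; /* within range of signed int: intentionally not reported */
    int j = 0xFFFF;  /* within range of unsigned int: intentionally not reported */
    return 0;
}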
A macro is used to control whether struct timespec is declared, because GNO might want to declare it in other headers, and this would allow it to avoid duplicate declarations. (This will still require changes in the GNO headers. Currently, they declare struct timespec with different field names, although the layout is the same.)
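A sketch of the guard pattern this enables (the macro name _STRUCT_TIMESPEC_DEFINED is hypothetical, not necessarily what the headers actually use; the standard field names are assumed):
#ifndef _STRUCT_TIMESPEC_DEFINED
#define _STRUCT_TIMESPEC_DEFINED
struct timespec {
    time_t tv_sec;  /* whole seconds */
    long tv_nsec;   /* nanoseconds */
};
#endif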
This mostly implements the rule in C17 6.9 p3, which requires a definition to be provided only if the function is used in an expression. Per that rule, we should also exclude most sizeof or _Alignof operands, but we don't do that yet.
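For example, under that rule this should now compile even though f is never defined:
static int f(void); /* internal linkage, never used in an expression */
int main(void) {
    return 0; /* f is never called, so no definition is required */
}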
Formerly, the code would allocate user IDs but never free them. The result was that one user ID was leaked each time a CDev was opened and closed.
The new root code calls new cleanup code in ORCALib, which detects if the CDev is going away and deallocates its user ID if so.