The second parameter of #pragma float is now optional; if it is missing or invalid, the FPE slot is auto-detected by the start-up code. This is done by calling the new ~InitFloat function in the FPE version of SysFloat.
This allows valid FPE-using programs to be compiled using only #pragma float, with no changes needed to the code itself.
The slot-setting code is only generated if the slot is 1..7, and even then it can be overridden by calling setfpeslot(), so this should not cause compatibility problems for existing code.
This could occur because when FindSymbol was called to look for symbols in all spaces, it would find a tag in an inner scope before a typedef in an outer scope. The processing order has been changed to look for regular symbols (including typedefs) in any scope, and only look for tags if no regular symbol is found.
Here is an example illustrating the problem:
typedef int T;

int main(void) {
    struct T;   /* tag T declared in an inner scope */
    T x;        /* T should still be found as the outer-scope typedef */
}
This occurred because the symbol was looked up in all namespaces rather than only the variable space.
Here is an example affected by this:
int X;

int main(void) {
    struct X {int i;};    /* tag X in an inner scope */
    static int *i = &X;   /* &X must refer to the variable, not the tag */
}
If an identifier is used as a typedef in an outer scope but then declared as something else in an inner scope (e.g. a variable name or tag), and that same identifier is the next token after the end of the inner scope, it would not be recognized properly as a typedef name, leading to spurious errors.
Here is an example that triggered this:
typedef char Type;
void f(int Type);   /* Type redeclared as a parameter name in the prototype's scope */
Type t;             /* the next token after that scope ends; must be recognized as a typedef */
Here is another one:
int main(void) {
    typedef int S;
    if (1)
        (struct S {int a;} *)0;   /* declares the tag S in the scope of the substatement */
    S x;                          /* S must again be recognized as the typedef */
}
This adds debugging code to detect null pointer dereferences, as well as pointer arithmetic on null pointers (which is also undefined behavior, and can lead to later dereferences of the resulting pointers).
Note that ORCA/Pascal can already detect null pointer dereferences as part of its more general range-checking code. This implementation for ORCA/C will report the same error as ORCA/Pascal ("Subrange exceeded"). However, it does not include any of the other forms of range checking that ORCA/Pascal does, and (unlike in ORCA/Pascal) it is controlled by a separate flag from stack overflow checking.
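Here is a minimal sketch of code this checking would catch at run time (the specific flag that enables it within #pragma debug is not shown here):

#include <stddef.h>

int main(void) {
    int *p = NULL;
    int x = *p;       /* null pointer dereference: reported as "Subrange exceeded" */
    int *q = p + 1;   /* arithmetic on a null pointer: also flagged */
    return x + (q != NULL);
}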
This occurs when the constant value is out of range of the type being assigned to. This is likely indicative of an error, or of code that assumes types have larger ranges than they do in ORCA/C (e.g. 32-bit int).
This intentionally does not report cases where a value is assigned to a signed type but is within the range of the corresponding unsigned type, or vice versa. These may be done intentionally, e.g. setting an unsigned value to "-1" or setting a signed value using a hex constant with the high bit set. Also, only conversions to 8-bit or 16-bit integer types are currently checked.
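Here is a sketch of these distinctions (recall that int is 16 bits in ORCA/C):

int main(void) {
    int i = 100000;    /* out of range of 16-bit int: warned */
    unsigned u = -1;   /* within the range of the corresponding signed type: not warned */
    short h = 0x8000;  /* hex constant within the unsigned 16-bit range: not warned */
    long l = 100000;   /* 32-bit target type: not currently checked */
    return 0;
}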
A macro is used to control whether struct timespec is declared, because GNO might want to declare it in other headers, and this would allow it to avoid duplicate declarations. (This will still require changes in the GNO headers. Currently, they declare struct timespec with different field names, although the layout is the same.)
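The guard might look roughly like this in <time.h> (the macro name here is hypothetical; the actual name may differ):

/* hypothetical guard macro */
#ifndef __STRUCT_TIMESPEC_DEFINED__
#define __STRUCT_TIMESPEC_DEFINED__
struct timespec {
    time_t tv_sec;    /* whole seconds */
    long   tv_nsec;   /* nanoseconds, 0..999999999 */
};
#endif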
This mostly implements the rule in C17 6.9 p3, which requires a definition to be provided only if the function is used in an expression. Per that rule, we should also exclude most sizeof or _Alignof operands, but we don't do that yet.
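For example, under that rule this translation unit requires no definition of f, because f appears only in a sizeof operand; since that exclusion is not implemented yet, ORCA/C would still require a definition here:

static int f(void);    /* internal linkage, never defined */
int n = sizeof(f());   /* the sizeof operand is not evaluated */

int main(void) {
    return n;
}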
Formerly, the code would allocate user IDs but never free them. As a result, one user ID was leaked each time a CDev was opened and closed.
The new root code calls new cleanup code in ORCALib, which detects if the CDev is going away and deallocates its user ID if so.
This would previously happen if a segment directive with "dynamic" appeared before the first function in the program. That would cause the resulting program not to work, because the root segment needs to be a static segment at the start of the program, but if it is dynamic it would come after a jump table and a static segment of library code.
The root segments are also configured to refer to main or the NDA/CDA entry points using LEXPR records, so that they can be in dynamic segments (not that they necessarily should be). That change is intentionally not done for CDEV/XCMD/NBA, because they use code resources, which do not support dynamic segments, so it is better to force a linker error in these cases.
I think the reason this was originally disallowed is that the old code sequence for stack repair code (in ORCA/C 2.1.0) ended with TYA. If this was followed by STA dp or STA abs, the native code peephole optimizer (prior to commit 7364e2d2d329d81) would have turned the combination into a STY instruction. That is invalid if the value in A is needed. This could come up, e.g., when assigning the return value from a function to two different variables.
This is no longer an issue, because the current code sequence for stack repair code no longer ends in TYA and is not susceptible to the same kind of invalid optimization. So it is no longer necessary to disable the native code peephole optimizer when using stack repair code (either for all calls or just varargs calls).
The sequence TYA followed by STA dp or STA abs would be changed to STY, but that is invalid if the A value is needed afterward. This could affect the code for certain division operations (after the optimizations in commit 4470626adeeeea).
Here is an example that would be miscompiled:
#pragma optimize -1
#include <stdio.h>

int main(void) {
    unsigned i = 55555;
    unsigned a, b;
    a = b = i / 10000;   /* both a and b should be 5 */
    printf("%u %u\n", a, b);
}
Also, remove MVN from the list of "ASafe" instructions since it really isn't, although I don't think this was affecting anything in practice.
This is necessary both to detect errors (using unary + on non-arithmetic types) and to correctly perform the integer promotions when unary + is used (which can be detected with sizeof or _Generic).
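The promotion is observable, as in this sketch (int is 16 bits in ORCA/C, so the assertion checks that a char operand is promoted to a 2-byte int):

int main(void) {
    char c = 0;
    _Static_assert(sizeof(+c) == sizeof(int), "unary + performs the integer promotions");
    /* char *p = 0; +p;  -- unary + on a pointer is now correctly rejected */
    return 0;
}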
This provides a more straightforward way to place the compiler in a "strict conformance" mode. This could essentially be achieved by setting several pragma options, but having a single setting is simpler. "Compatibility modes" for older standards can also be selected, although these actually continue to enable most C17 features (since they are unlikely to cause compatibility problems for older code).
With the addition of designated initializers, ORCA/C now supports all the major mandatory language features added between C90 and C17, apart from those made optional by C11. There are still various small areas of nonconformance and a number of missing library functions, but at this point it is reasonable for ORCA/C to report itself as being a C17 implementation.
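This is visible through __STDC_VERSION__, which a C17 implementation defines as 201710L:

#include <stdio.h>

int main(void) {
    printf("%ld\n", (long)__STDC_VERSION__);   /* prints 201710 */
    return 0;
}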
Long addressing was not being used to access the values, which could lead to mis-evaluation of comparisons against values in global structs, unions, or arrays, depending on the memory layout.
This could sometimes affect the c99desinit.c test, when run with large memory model and at least intermediate code peephole optimization. It could also affect this simpler test (depending on memory layout):
#pragma memorymodel 1
#pragma optimize 1

struct S {
    void *p;
} s = {&s};

int main(void) {
    return s.p != &s; /* should be 0 */
}
* Initialization of floating-point variables from unsigned long expressions with value > LONG_MAX would give the wrong value.
* Initialization of floating-point variables from (unsigned) long long expressions would give the wrong value.
* Initialization of _Bool variables should give 0 or 1, as per the usual rules for conversion to _Bool.
* Initialization of integer variables from floating-point expressions should be allowed, applying the usual conversions.
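Here is a sketch exercising the four fixes listed above:

double d1 = 3000000000UL;        /* unsigned long > LONG_MAX: now converts correctly */
double d2 = 123456789012345LL;   /* long long initializer: now converts correctly */
_Bool  b  = 42;                  /* must initialize b to 1, not 42 */
int    i  = 2.75;                /* now accepted; converts to 2 */

int main(void) {
    return 0;
}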
When the expression is initially parsed, we do not necessarily know whether it is the initializer for the struct/union or for its first member. That needs to be determined based on the type. To support that, a new function is added to evaluate the expression separately from using it to initialize an object.
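Here is a small example of the ambiguity; only the type of the first expression determines what it initializes:

struct S { int i; int j; } a = {1, 2};
struct S b = { a };   /* a has type struct S: it initializes all of b */
struct S c = { 1 };   /* 1 initializes the first member, c.i */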
The part of the declaration within the header could be ignored on subsequent compilations using the .sym file, which could lead to errors or misbehavior.
(This also applies to headers that end in the middle of a _Static_assert(...) or segment directive.)
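As an illustration (with hypothetical file names), a header like this ends in the middle of a declaration that is completed by the including file:

/* header.h: ends mid-declaration */
int table[]

/* main.c: the declaration continues after the #include */
#include "header.h"
    = {1, 2, 3};

int main(void) {
    return table[0] - 1;
}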
This is a minimal implementation that does not actually inline anything, but it is intended to implement the semantics defined by the C99 and later standards.
One complication is that a declaration that appears somewhere after the function body may create an external definition for a function that appeared to be an inline definition when it was defined. To support this while preserving ORCA/C's general one-pass compilation strategy, we generate code even for inline definitions, but treat them as private and add the prefix "~inline~" to the name. If they are "un-inlined" based on a later declaration, we generate a stub with external linkage that just jumps to the apparently-inline function.
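Here is a sketch of the semantics involved:

/* An inline definition: by itself, it does not provide an external
   definition of twice. Internally, it is compiled under the name
   ~inline~twice and treated as private. */
inline int twice(int x) {
    return x * 2;
}

/* This later declaration creates an external definition in this
   translation unit, so a stub with external linkage is generated
   that jumps to the apparently-inline function. */
extern inline int twice(int x);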