The old approach would call GenerateCode twice for each parameter expression. Now, it is only called once. This is faster, and also avoids some oddities with error handling. With the previous approach, expressionType would not be set if there was an error in the expression, which could lead to additional spurious errors. Also, a lint message treated as a warning could appear twice.
This could happen in some cases where one subexpression of a larger expression was a constant. One effect of this was to cause spurious "lint: implicit conversion changes value of constant" messages in certain cases (when that lint check was enabled). It may also have caused certain errors to be missed in other situations.
It is legal for source files that do not include <stdio.h> to define static functions named printf, scanf, etc. These obviously are not the standard library functions and are not required to work the same way as them, so they should not be subject to format checking.
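For illustration, a file like the following defines its own static printf, which should not be subject to printf-style format checking (just a sketch):

static int printf(int level) {
        return level + 1;     /* unrelated to the standard library function */
}

int main(void) {
        return printf(3);     /* calls the local static function; no format checking applies */
}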
This adds debugging code to detect null pointer dereferences, as well as pointer arithmetic on null pointers (which is also undefined behavior, and can lead to later dereferences of the resulting pointers).
Note that ORCA/Pascal can already detect null pointer dereferences as part of its more general range-checking code. This implementation for ORCA/C will report the same error as ORCA/Pascal ("Subrange exceeded"). However, it does not include any of the other forms of range checking that ORCA/Pascal does, and (unlike in ORCA/Pascal) it is controlled by a separate flag from stack overflow checking.
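As an illustration, the new checking is intended to catch code like the following at run time (when the relevant debugging option is enabled):

#include <stddef.h>

int main(void) {
        int *p = NULL;
        int *q = p + 4;       /* pointer arithmetic on a null pointer: undefined behavior */
        return *q;            /* later dereference of the resulting pointer */
}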
The intention may have been to set the flags based on the return value, but that is not part of the calling convention and nothing should be relying on it.
These are erroneous in situations where the expression is used for its value. For function return types, this violates a constraint (C17 6.5.2.2 p1), so a diagnostic is required. We also now diagnose this issue for identifier expressions or unary * (indirection) expressions. These cases cause undefined behavior per C17 6.3.2.1 p2, so a diagnostic is not required, but it is nice to give one.
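Assuming the cases in question are expressions of incomplete (e.g. undefined struct) type, they look roughly like this sketch:

struct s;                     /* incomplete type */
struct s f(void);
extern struct s x, *p;

void g(void) {
        f();                  /* call to a function returning an incomplete type: constraint violation */
        x;                    /* identifier expression of incomplete type: undefined behavior */
        *p;                   /* unary * expression of incomplete type: undefined behavior */
}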
Varargs-only stack repair (i.e. using #pragma optimize bit 3 but not bit 6) was broken by commit 32975b720f. It removed some code that was needed to allocate the direct page location used to hold the stack pointer value in that case. This would lead to invalid code being produced, which could cause a crash when run. The fix is to revert the erroneous parts of commit 32975b720f (which do not affect its core purpose of enabling intermediate code peephole optimization to be used when stack repair code is active).
Redeclaration of a pascal function could cause spurious errors when using strict type checking. (This was similar to the issue fixed in commit b5b276d0f4, but this time occurring due to the CompTypes call in NewSymbol.) There may also have been subtle misbehavior in other corner cases.
Now the reversal of parameters for pascal functions is applied only once, in Declarator prior to calling NewSymbol. This ensures that symbols for pascal functions have the correct types whenever they are processed, and also simplifies the previous code, where the parameters could be reversed, un-reversed, and re-reversed in three separate places.
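For illustration, the problematic pattern (with strict type checking enabled) was simply a redeclaration like the following, with hypothetical names:

pascal int GetValue(int slot, long flags);
pascal int GetValue(int slot, long flags);   /* identical redeclaration formerly drew spurious errors */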
The pc_rev intermediate code always returns a value, so the check is not needed, and (since the generated code does not jump to a return label) it can yield false positives.
This occurs when the constant value is out of range of the type being assigned to. This is likely indicative of an error, or of code that assumes types have larger ranges than they do in ORCA/C (e.g. 32-bit int).
This intentionally does not report cases where a value is assigned to a signed type but is within the range of the corresponding unsigned type, or vice versa. These may be done intentionally, e.g. setting an unsigned value to "-1" or setting a signed value using a hex constant with the high bit set. Also, only conversions to 8-bit or 16-bit integer types are currently checked.
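To illustrate (assuming ORCA/C's 16-bit int and 8-bit char, with the lint check enabled):

int i = 100000;        /* out of range of a 16-bit int: diagnosed */
char c = 300;          /* out of range of an 8-bit char: diagnosed */
unsigned u = -1;       /* within range of the corresponding signed type: not diagnosed */
int j = 0x8000;        /* within range of the corresponding unsigned type: not diagnosed */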
A macro is used to control whether struct timespec is declared, because GNO might want to declare it in other headers, and this would allow it to avoid duplicate declarations. (This will still require changes in the GNO headers. Currently, they declare struct timespec with different field names, although the layout is the same.)
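The guard looks something like the following sketch, placed where struct timespec is declared (the actual macro name in the headers is not confirmed here; __STRUCT_TIMESPEC__ is illustrative, and time_t is assumed to be defined earlier in the header):

#ifndef __STRUCT_TIMESPEC__
#define __STRUCT_TIMESPEC__
struct timespec {
        time_t tv_sec;        /* seconds */
        long   tv_nsec;       /* nanoseconds */
};
#endif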
In ORCA/Pascal's code generation, a case statement may use a jump table or a sequence of comparisons depending on whether it is considered sparse. This one was just a little too sparse to use a jump table, but changing it to use one makes it considerably faster. To force generation of a jump table, this commit adds several more explicit cases (even though they don't do anything).
If the return value is just a numeric constant or static address, it can simply be loaded right before the RTL instruction, avoiding any data movement.
This could actually be applied to a somewhat broader class of expressions, but for now it only applies to numeric or address constants, for which it is clearly correct.
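For example, returns like these can now be handled this way:

int f(void) {
        return 42;            /* numeric constant */
}

static int table[10];
int *g(void) {
        return &table[3];     /* static address */
}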
When a function has a single return statement at the end and meets certain other constraints, we now generate a different intermediate code instruction to evaluate the return value as part of the return operation, rather than assigning it to (effectively) a variable and then reading that value again to return it.
This approach could actually be used for all returns in C code, but for now we only use it for a single return at the end. Directly applying it in other cases could increase the code size by duplicating the function epilogue code.
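For example, a function of this shape now evaluates the return value as part of the return operation:

int sum_array(const int *a, int n) {
        int total = 0;
        while (n-- > 0)
                total += *a++;
        return total;         /* single return at the end of the function */
}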
The motivation for this is that allocating and clearing symbol tables is a common operation, especially with C99+, where a construct like "if (...) { ... }" involves three levels of scope with their own symbol tables. In some tests, it could take an appreciable fraction of total execution time (sometimes ~10%).
This patch allows symbol tables that have already been allocated and cleared to be reused for a subsequent scope, as long as they are still empty. It does this by maintaining a pool of empty symbol tables and taking one from there when possible, rather than allocating a new one.
We impose a somewhat arbitrary limit of MaxBlock/150000 on the number of symbol tables we keep, to avoid filling up memory with them. It would probably be better to use purgeable handles here, but that would be a little more work, and this should be good enough for now.
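The idea is roughly the following (a sketch in C for illustration only; the actual ORCA/C implementation is in Pascal, and the names and limit here are placeholders):

#include <stdlib.h>

#define MAX_POOLED 32                 /* stand-in for the MaxBlock/150000-based limit */

typedef struct SymbolTable {
        struct SymbolTable *next;     /* pool link while the table is idle */
        void *buckets[128];           /* hypothetical hash buckets, all NULL when empty */
} SymbolTable;

static SymbolTable *pool = NULL;      /* empty, already-cleared tables ready for reuse */
static int pooled = 0;

static SymbolTable *NewSymbolTable(void) {
        if (pool != NULL) {           /* reuse an empty table rather than allocating */
                SymbolTable *t = pool;
                pool = t->next;
                pooled--;
                t->next = NULL;
                return t;
        }
        return calloc(1, sizeof(SymbolTable));
}

static void RetireSymbolTable(SymbolTable *t, int stillEmpty) {
        if (stillEmpty && pooled < MAX_POOLED) {
                t->next = pool;       /* keep it for a subsequent scope */
                pool = t;
                pooled++;
        } else {
                free(t);              /* avoid filling memory with idle tables */
        }
}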
This mostly implements the rule in C17 6.9 p3, which requires a definition to be provided only if the function is used in an expression. Per that rule, we should also exclude most sizeof or _Alignof operands, but we don't do that yet.
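For example, under this rule the following no longer requires a definition of helper, because it is never used in an expression (a use only inside sizeof or _Alignof should also be excluded, but is not yet):

static int helper(int x);     /* declared with internal linkage, never defined */

int main(void) {
        return 0;             /* helper is never used in an expression */
}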
Formerly, the code would allocate user IDs but never free them. The result was that one user ID was leaked for each time a CDev was opened and closed.
The new root code calls new cleanup code in ORCALib, which detects if the CDev is going away and deallocates its user ID if so.
This would previously happen if a segment directive with "dynamic" appeared before the first function in the program. That would cause the resulting program not to work, because the root segment needs to be a static segment at the start of the program, but if it is dynamic it would come after a jump table and a static segment of library code.
The root segments are also configured to refer to main or the NDA/CDA entry points using LEXPR records, so that they can be in dynamic segments (not that they necessarily should be). That change is intentionally not done for CDEV/XCMD/NBA, because they use code resources, which do not support dynamic segments, so it is better to force a linker error in these cases.
I think the reason this was originally disallowed is that the old code sequence for stack repair code (in ORCA/C 2.1.0) ended with TYA. If this was followed by STA dp or STA abs, the native code peephole optimizer (prior to commit 7364e2d2d3) would have turned the combination into a STY instruction. That is invalid if the value in A is needed. This could come up, e.g., when assigning the return value from a function to two different variables.
This is no longer an issue, because the current code sequence for stack repair code no longer ends in TYA and is not susceptible to the same kind of invalid optimization. So it is no longer necessary to disable the native code peephole optimizer when using stack repair code (either for all calls or just varargs calls).
This would be changed to STY, but that is invalid if the A value is needed afterward. This could affect the code for certain division operations (after the optimizations in commit 4470626ade).
Here is an example that would be miscompiled:
#pragma optimize -1
#include <stdio.h>
int main(void) {
        unsigned i = 55555;
        unsigned a, b;
        a = b = i / 10000;
        printf("%u %u\n", a, b);
}
Also, remove MVN from the list of "ASafe" instructions since it really isn't, although I don't think this was affecting anything in practice.
This is necessary both to detect errors (using unary + on non-arithmetic types) and to correctly perform the integer promotions when unary + is used (which can be detected with sizeof or _Generic).
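For example (with ORCA/C's 8-bit char and 16-bit int), the promotion is observable:

char c = 1;
int promoted[sizeof(+c) == sizeof(int) ? 1 : -1];   /* +c has type int, so this compiles */
/* By contrast, something like +"abc" must be diagnosed: unary + requires an arithmetic operand. */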
This affects calls to a varargs function that do not actually supply any arguments beyond the fixed portion of the argument list, e.g. printf("foo"). Since these calls do not supply any variable arguments, they clearly do not include any extra variable arguments beyond those used by the function, so the standards-conformance issue that requires varargs stack repair code does not apply.
It is possible that the call may include too few variable arguments, but that is illegal behavior, and it will trash the stack even when varargs stack repair code is present (although in some cases programs may "get away" with it). If stack repair code around these calls is still desired, then general stack repair code can be enabled for all function calls.
This is necessary to work correctly if the last fixed argument is of type double and #pragma extensions bit 1 is clear. It will also be necessary for C23, where va_start does not require the LastFixedParm argument. (For now, however, we stick with the pre-C23 definition of va_start where that argument needs to be supplied, even though it is now ignored.)
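For reference, the pre-C23 usage pattern that is still accepted (with the second argument to va_start now ignored) looks like:

#include <stdarg.h>

double average(int count, ...) {
        va_list ap;
        double total = 0.0;
        int i;

        va_start(ap, count);          /* second argument still supplied, but now ignored */
        for (i = 0; i < count; i++)
                total += va_arg(ap, double);
        va_end(ap);
        return count ? total / count : 0.0;
}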
This provides a more straightforward way to place the compiler in a "strict conformance" mode. This could essentially be achieved by setting several pragma options, but having a single setting is simpler. "Compatibility modes" for older standards can also be selected, although these actually continue to enable most C17 features (since they are unlikely to cause compatibility problems for older code).
It will now grow as needed to accommodate large segments, subject to the constraints of available memory. In practice, this mostly affects the size of initialized static arrays that can be used.
This also removes any limit apart from memory size on how large the object representation produced by a "compile to memory" operation can be, and cleans up error reporting regarding size limits.
This is smaller and more efficient than the previous code that called memcpy(). It also avoids a theoretical issue if the user's code included an incompatible definition of memcpy.