There were a couple of issues that could occur with #pragma keep and sym files:
* If a source file used #pragma keep but it was overridden by KEEP= on the command line or {KeepName} in the shell, then the overriding keep name would be saved to the sym file. It would therefore be applied to subsequent compilations even if it was no longer specified on the command line or in the shell variable.
* If a source file used #pragma keep, that keep name would be recorded in the sym file. On subsequent compilations, it would always be used, overriding any keep name specified by the command line or shell, contrary to the usual rule that the name on the command line takes priority.
With this patch, the keep name recorded in the sym file (if any) should always be the one specified by #pragma keep, but it can be overridden as usual.
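For example, given a source file like this (the file and program names are hypothetical):

    /* test.c */
    #pragma keep "MyProg"
    int main(void) { return 0; }

compiling with

    cmpl test.c keep=OtherProg

names the output OtherProg, but only MyProg (the #pragma keep name) is recorded in the sym file, so a later compilation without KEEP= falls back to the pragma's name as expected.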
ORCA/C will now generate a COP 6 instruction to record each source file change, allowing debuggers to properly track the flow of execution across files. This affects functions whose bodies span multiple files due to includes, or are treated as doing so due to #line directives.
This causes __FILE__ to give the name of an include file if used within it, which seems to be what the standards intend (and what other compilers do). It also affects the file name recorded in debugging information for functions declared in an include file.
(Note that occ will generate a #line directive before an #append, essentially to work around the problem this patch fixes. After the patch, such a #line directive is effectively ignored. This should be OK, although it may result in a difference in whether a full or partial pathname is used for __FILE__ and in debug info.)
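A sketch of the resulting __FILE__ behavior (the file names are made up):

    /* header.h */
    static const char *where = __FILE__;  /* now "header.h" */

    /* main.c */
    #include "header.h"
    const char *top = __FILE__;           /* still "main.c" */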
There were several existing optimizations that could change behavior in ways that violated the IEEE standard with regard to infinities, NaNs, or signed zeros. They are now gated behind a new #pragma optimize flag. This change allows intermediate code peephole optimization and common subexpression elimination to be used while maintaining IEEE conformance, but also keeps the rule-breaking optimizations available if desired.
See section F.9.2 of recent C standards for a discussion of how these optimizations violate IEEE rules.
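For example, the following transformations are among those that are invalid under IEEE rules and are now performed only when the new flag is enabled (a minimal sketch):

    #include <stdio.h>

    int main(void) {
        volatile double x = -0.0;
        /* x + 0.0 must not be simplified to x: for x == -0.0, the
           correct IEEE result is +0.0. */
        double a = x + 0.0;
        /* x * 0.0 must not be folded to 0.0: for x == -1.0 the correct
           result is -0.0, and for a NaN or infinity it is a NaN. */
        double b = x * 0.0;
        printf("%g %g\n", a, b);
        return 0;
    }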
This allows the length of the string plus a few extra bytes used internally to be represented by a 16-bit integer. Since the size limit for memory allocations has been raised, there is no good reason to impose a shorter limit on strings.
Note that C99 and later specify a translation limit of at least 4095 characters for string constants.
We previously ignored this, but it is a constraint violation under the C standards, so it should be reported as an error.
GCC and Clang allow this as an extension, as we were effectively doing previously. We will follow the standards for now, but if there were demand for such an extension in ORCA/C, it could be re-introduced subject to a #pragma ignore flag.
The code for this was recursive and could overflow the stack if there were several dozen consecutive string literals. It has been changed to use only one level of recursion, avoiding the problem.
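A sketch of the kind of code that could previously trigger the overflow (the macros are made up, just to produce many adjacent literals):

    #define S8  "x" "x" "x" "x" "x" "x" "x" "x"
    #define S64 S8 S8 S8 S8 S8 S8 S8 S8
    /* 256 consecutive string literals after macro expansion: */
    const char s[] = S64 S64 S64 S64;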
Compound literals outside of functions should work at this point.
Compound literals inside of functions are not fully implemented, so they are disabled for now. (There is some code to support them, but the code to actually initialize them at the appropriate time is not written yet.)
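For example, file-scope code like this should now work (a sketch):

    struct point { int x, y; };

    /* Compound literals at file scope have static storage duration. */
    int *ones = (int[]){1, 1, 1};
    struct point *origin = &(struct point){0, 0};

whereas a compound literal within a function body, such as

    int f(void) {
        int *p = (int[]){1, 2, 3};   /* disabled for now */
        return p[0];
    }

is rejected for the time being.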
The standard wording is not always clear on these cases, but I think at least some of them should be allowed and others may be undefined behavior (which we can choose to allow). At any rate, this allows non-standard escape sequences targeted at other compilers to appear in skipped-over code.
There probably ought to be similar handling for #defines that are never expanded, but that would require more code changes.
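For example, code like this no longer draws an error (\q stands in for some other compiler's extension):

    #if 0
    const char *s = "\q";   /* non-standard escape, but in skipped code */
    #endif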
This applies to octal and hexadecimal sequences with out-of-range values, and also to unrecognized escape characters. The C standards say both of these cases are syntax/constraint violations requiring a diagnostic.
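For example, both of the following now draw an error:

    char a = '\777';   /* octal value 511 is out of range for char */
    char b = '\q';     /* unrecognized escape character */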
These were previously treated as having type int. This resulted in incorrect results from sizeof, and would also be a problem for _Generic if it were implemented.
Note that this creates a token kind of "charconst", but this is not the kind for character constants in the source code. Those have type int, so their kind is intconst. The new kinds of "tokens" are created only through casts of constant expressions.
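For example (recall that int is 16-bit in ORCA/C):

    /* 'a' has type int, so sizeof 'a' is 2, but the cast yields a
       constant of type char, so sizeof now gives 1 (previously 2). */
    int n = sizeof((char)'a');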
The FENV_ACCESS pragma is now implemented. It causes floating-point operations to be evaluated at run time to the maximum extent possible, so that they can affect and be affected by the floating-point environment. It also disables optimizations that might evaluate floating-point operations at compile time or move them around calls to the <fenv.h> functions.
The FP_CONTRACT and CX_LIMITED_RANGE pragmas are also recognized, but they have no effect. (FP_CONTRACT relates to "contracting" floating-point expressions in a way that ORCA/C does not do, and CX_LIMITED_RANGE relates to complex arithmetic, which ORCA/C does not support.)
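A sketch of code that depends on the pragma:

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void) {
        fesetround(FE_DOWNWARD);
        /* With FENV_ACCESS on, this division must be evaluated at run
           time so that it honors the rounding mode set above, rather
           than being folded at compile time. */
        double d = 1.0 / 3.0;
        printf("%.17g\n", d);
        return 0;
    }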
This means that floating-point constants can now have the range and precision of the extended type (aka long double), and floating-point constant expressions evaluated within the compiler also have that same range and precision (matching expressions evaluated at run time). This new behavior is intended to match the behavior specified in the C99 and later standards for FLT_EVAL_METHOD 2.
This fixes the previous problem where long double constants and constant expressions of type long double were not represented and evaluated with the full range and precision that they should be. It also gives extra range and precision to constants and constant expressions of type double or float. This may have pluses and minuses, but at any rate it is consistent with the existing behavior for expressions evaluated at run time, and with one of the possible models of floating point evaluation specified in the C standards.
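For example, under this model an intermediate constant result can exceed the range of double without overflowing (the values are arbitrary):

    /* Both operations are carried out in the extended format, so the
       intermediate value 1e600 does not overflow; the final result,
       1e300, is rounded to double on assignment. */
    double d = 1.0e300 * 1.0e300 / 1.0e300;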
Note that we currently defer evaluation of such expressions to run time if the long long value cannot be represented exactly in a double, because statically-evaluated floating point expressions use the double format rather than the extended (long double) format used at run time.
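For example (the value is arbitrary):

    /* 123456789012345678 needs 57 bits of significand, so it cannot be
       represented exactly in the double format.  The conversion is
       therefore performed at run time, where the extended format (with
       its 64-bit significand) represents it exactly. */
    long double x = 123456789012345678LL;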
Right now, decimal constants can have long long types based on their suffix, but they are still limited to a maximum value of 2^32-1.
This also implements the C99 change where decimal constants without a u suffix always have signed types. Thus, decimal constants of 2^31 and up now have type long long, even if their values could be represented in the type unsigned long.
Currently, the actual values they can have are still constrained to the 32-bit range. Also, there are some bits of functionality (e.g. for initializers) that are not implemented yet.
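For example (recall that long is 32-bit in ORCA/C):

    int a = (int)sizeof(2147483647);   /* 4: the constant fits in long */
    int b = (int)sizeof(2147483648);   /* 8: now long long, previously
                                          unsigned long */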
This affects command lines like:
cmpl myprog.c cc=(-da=+) ...
Previously, this would be accepted, but a was actually defined to 0 rather than +.
Now, this gives an error, consistent with other tokens that are not supported in such definitions on the command line. (Perhaps we should support definitions using any tokens, but that would require bigger code changes.)
This also cleans up some related code to avoid possible null-pointer dereferences.
This is contrary to the C standards, but ORCA/C historically permitted it (as do some other compilers), and I think there is a fair amount of existing code that relies on it.
This is normally 1 (indicating a hosted implementation, where the full standard library is available and the program starts by executing main()), but it is 0 if one of the pragmas for special types of programs with different entry points has been used.
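Assuming this describes the standard __STDC_HOSTED__ macro, programs can test it like this (a sketch):

    #if __STDC_HOSTED__
    /* Full standard library available; execution starts at main(). */
    #else
    /* Special program type with a different entry point, e.g. one
       selected by a pragma such as #pragma cda. */
    #endif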
This generalizes the heuristic approach for checking whether _Noreturn functions could execute to the end of the function, extending it to apply to any function with a non-void return type. These checks use the same #pragma lint bit but give different messages depending on the situation.
This uses a heuristic that may produce both false positives and false negatives, but any false positives should reflect extraneous code at the end of the function that is not actually reachable.
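For example, with the relevant #pragma lint bit enabled, a function like this is flagged (a sketch):

    int sign(int x) {
        if (x > 0)
            return 1;
        if (x < 0)
            return -1;
        /* Execution can reach the end of this non-void function
           without returning a value, so the new check reports it. */
    }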