This optimizes most multiplications by a power of 2 or the sum of two powers of 2, converting them to equivalent operations using shifts, which should be faster than the general-purpose multiplication routine.
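For example, the transformation is along these lines (a sketch of the idea, not necessarily the exact code generated):
unsigned f(unsigned x) {
    unsigned a = x * 8;    /* power of 2: equivalent to x << 3 */
    unsigned b = x * 10;   /* sum of two powers of 2: (x << 3) + (x << 1) */
    return a + b;
}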
This changes unsigned 16-bit multiplies to use the new ~CUMul2 routine in ORCALib, rather than ~UMul2 in SysLib. They differ in that ~CUMul2 gives the low-order 16 bits of the true result in case of overflow. The C standards require this behavior for arithmetic on unsigned types.
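For example, with ORCA/C's 16-bit unsigned int, an overflowing multiply must wrap modulo 65536:
int main(void) {
    unsigned a = 300, b = 300;
    unsigned c = a * b;   /* true result is 90000; the low-order 16 bits give 90000 % 65536 = 24464 */
    return c == 24464;    /* should be 1 */
}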
Now rewind() will always be called as a function. In combination with an update to the rewind() function in ORCALib, this will ensure that the error indicator is always cleared, as required by the C standards.
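For reference, the standards specify that rewind(f) behaves like (void)fseek(f, 0L, SEEK_SET) except that it also clears the error indicator, as in this sketch (the file name is just for illustration):
#include <stdio.h>
int main(void) {
    FILE *f = fopen("test.txt", "r");
    if (f) {
        rewind(f);   /* repositions to the start and clears the error indicator */
        fclose(f);
    }
    return 0;
}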
The handling of the ? : operator was non-standard in various ways, mainly in regard to pointer types. It has been rewritten to closely follow the specification in the C standards.
Several helper functions dealing with types have been introduced. They are currently only used for ? :, but they might also be useful for other purposes.
New tests are also introduced to check the behavior for the ? : operator.
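As an illustration, here are two of the pointer cases covered by the standards' rules for ? : (a sketch, not an exhaustive list):
int i = 1;
int *ip = &i;
void *vp = 0;
int main(void) {
    void *p = i ? ip : vp;   /* one operand has type void *, so the result type is void * */
    int *q = i ? ip : 0;     /* a null pointer constant takes the type of the other operand: int * */
    return p == q;           /* both point to i here */
}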
This fixes #35 (including the initializer-specific case).
This affects expressions like &*a (where a is an array) or &*"string". In most contexts, these undergo array-to-pointer conversion anyway, but as an operand of sizeof they do not. This leads to sizeof either giving the wrong value (the size of the array rather than of a pointer) or reporting an error when the array size is not recorded as part of the type (which is currently the case for string constants).
In combination with an earlier patch, this fixes #8.
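Here is a minimal example of the affected pattern:
char a[10];
int main(void) {
    /* &*a has type char *, so both operands here should be sizeof(char *) */
    return sizeof(&*a) + sizeof(&*"string");
}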
This makes a macro defined on the command line like -Dfoo=-1 consist of two tokens, the same as it would if defined in code. (Previously, it was just one token.)
This also somewhat expands the set of macros accepted on the command line. A prefix of +, -, *, &, ~, or ! (the one-character unary operators) can now be used ahead of any identifier, number, or string. Empty macro definitions like -Dfoo= are also permitted.
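For example, these command-line definitions now behave the same as the corresponding definitions in code:
#define foo -1   /* -Dfoo=-1: the two tokens - and 1 */
#define bar ~0   /* -Dbar=~0: a unary-operator prefix ahead of a number */
#define baz      /* -Dbaz=: an empty definition */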
The basic approach is to generate a single expression tree containing the code for the initialization plus the reference to the compound literal (or its address). The various subexpressions are joined together with pc_bno pcodes, similar to the code generated for the comma operator. The initializer expressions are placed in a balanced binary tree, so that it is not excessively deep.
Note: Common subexpression elimination has poor performance for very large trees. This is not specific to compound literals, but compound literals for relatively large arrays can run into this issue. It will eventually complete and generate a correct program, but it may be quite slow. To avoid this, turn off CSE.
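For reference, the expressions involved look like the following (the pcode generation details are internal to the compiler):
struct point { int x, y; };
int main(void) {
    int *p = (int []){1, 2, 3};              /* compound literal for an unnamed array */
    struct point q = (struct point){4, 5};   /* compound literal for a struct */
    return p[0] + q.y;
}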
The desired location for the quad result was not saved, so it could be overwritten when generating code for the left operand. This could result in incorrect code that might trash the stack.
Here is an example affected by this:
#pragma optimize 1
int main(void) {
    long long a, b = 2;
    char c = (a=1,b);   /* c should be 2; the buggy code could trash the stack */
}
Checking whether a token produced by ## is a macro subject to further expansion should only be done after all the ## operators in the macro have been evaluated, since successive ## operators may merge several tokens together.
Here is an example illustrating the problem:
#define merge(a,b,c) a##b##c
#define foobar
#define foobarbaz a
int merge(foo,bar,baz) = 42;   /* foo##bar##baz should merge to foobarbaz, which then expands to a */
int main(void) {
    return a;   /* should return 42 */
}
If macros with the same name as a keyword or a typedef name were used within other macros, they would generally not be expanded, due to the order in which operations were evaluated during preprocessing.
This is actually an issue that was fixed by the changes from ORCA/C 2.1.0 to 2.1.1 B3, but then broken again by commit d0b4b75970.
Here is an example with the name of a keyword:
#define X long int
#define long
X x;
int main(void) {
    return sizeof(x); /* should be sizeof(int) */
}
Here is an example with the name of a typedef:
typedef short T;
#define T long
#define X T
X x;
int main(void) {
    return sizeof(x); /* should be sizeof(long) */
}
Previously, one-byte loads were typically done by reading a 16-bit value and then masking off the upper 8 bits. This is a problem when accessing softswitches or slot IO locations, because reading the subsequent byte may have some undesired effect. Now, ORCA/C will do an 8-bit read for such cases, if the volatile qualifier is used.
There were also a couple of optimizations that could occasionally result in not all the bytes of a larger value actually being read. These are now disabled for volatile loads that may access softswitches or IO.
These changes should make ORCA/C more suitable for writing low-level software like device drivers.
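For example, code like the following (assuming the classic $C000 keyboard data location; the address is just for illustration) should now perform a single 8-bit read:
int main(void) {
    volatile char *kbd = (volatile char *)0xC000;
    char c = *kbd;   /* 8-bit volatile load; the following byte is not touched */
    return c & 0x7F;
}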
Digraphs are now treated as distinct from the corresponding tokens when comparing macro definitions. This is part of the general requirement that macro redefinitions be "identical" as defined in the standard.
This affects code like:
#define x [
#define x <:   /* now an error: the digraph <: does not have the same spelling as [ */
This allows those tokens (asm, comp, extended, pascal, and segment) to be used as identifiers, consistent with the C standards.
A new pragma (#pragma extensions) is introduced to control this. It might also be used for other things in the future.
Stringizing tokens that begin with trigraphs did not work correctly before, because such tokens were recorded as starting with the third character of the trigraph.
Here is an example affected by this:
#define mkstr(a) # a
#include <stdio.h>
int main(void) {
    puts(mkstr(??!));     /* should print: | */
    puts(mkstr(??!??!));  /* should print: || */
    puts(mkstr('??<'));   /* should print: '{' */
    puts(mkstr(+??!));    /* should print: +| */
    puts(mkstr(+??'));    /* should print: +^ */
}
A suffix will now be printed on any integer constant with a type other than int, or any floating constant with a type other than double. This ensures that all constants have the correct types, and also serves as documentation of the types.
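For example, following the standard suffix conventions:
int i = 7;             /* int: no suffix */
long l = 123456L;      /* long: L suffix */
unsigned u = 42U;      /* unsigned: U suffix */
double d = 3.0;        /* double: no suffix */
float f = 1.5F;        /* float: F suffix */
long double ld = 2.5L; /* long double: L suffix */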
Previously, continuations or trigraphs would be included in the string as-is, which should not be the case because they are (conceptually) processed in earlier compilation phases. Initial trigraphs still do not get stringized properly, because the token starting position is not recorded correctly for them.
This fixes code like the following:
#define mkstr(a) # a
#include <stdio.h>
int main(void) {
puts(mkstr(a\
bc));                 /* should print: abc */
puts(mkstr(qr\
));                   /* should print: qr */
puts(mkstr(\
xy));                 /* should print: xy */
puts(mkstr(12??/
34));                 /* should print: 1234 */
puts(mkstr('??<'));   /* should print: '{' */
}
This is necessary for correct behavior if tokens produced by ## are subsequently stringized with #. Previously, only the first half of the merged token would be produced.
Here is an example demonstrating the issue:
#define mkstr(a) # a
#define in_between(a) mkstr(a)
#define joinstr(a,b) in_between(a ## b)
#include <stdio.h>
int main(void) {
    puts(joinstr(123,456));  /* should print: 123456 */
    puts(joinstr(abc,def));  /* should print: abcdef */
    puts(joinstr(dou,ble));  /* should print: double */
    puts(joinstr(+,=));      /* should print: += */
    puts(joinstr(:,>));      /* should print: :> */
}
The string representation of macro tokens is still needed for some preprocessor operations, but it is now obtained in other ways (e.g. based on tokenStart/tokenEnd).
These are conceptually separate operations occurring in different phases of the translation process. This change means that ## can no longer merge string constants: such operations will give an error about an illegal token. Cases like this are technically undefined behavior, so the old behavior could have been permitted, but it is clearer and more consistent with other compilers to treat this as an error.
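For example, under the new behavior (a sketch):
#define concat(a,b) a ## b
const char *ok = "foo" "bar";            /* fine: adjacent string literals are concatenated in a later phase */
const char *bad = concat("foo", "bar");  /* error: "foo""bar" is not a valid single token */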
This ultimately should be supported, but that will be more work. For now, we just set the string representation to '?', which will usually give an error when merged. (Previously, whatever was at memory location 0 would be treated as the string representation of the token. Frequently this would just be an empty string, leading to no error but incorrect results.)
This is necessary for correct operation of the # and ## preprocessor operators on the tokens from such macros.
Integers with a sign character still have the non-standard property of being treated as a single token, so they cannot be used with ##, but in most cases such uses will now give an error.
Initialized variables have always been one of the things that stops PCH generation, but previously this was only detected when trying to write out the symbol records at the point of a later #include. On a subsequent compile using the sym file, nothing would recognize that PCH generation had stopped for this reason, so the PCH code would recognize the later #include as a potential opportunity to extend the sym file, and therefore would delete it to force regeneration next time. This led to the sym file being deleted and regenerated on alternate compiles, so its full benefit was not realized.
There is code in Header.pas to abort PCH generation if an initialized symbol is found. That is probably superfluous after this change, but it has been left in place for now.
If the appended file was another C file and that file contained an #include, this would create an invalid record in the sym file. It would record memory from the buffer holding the original file to the buffer holding the appended file. In general, these are not contiguous, so superfluous data from other parts of memory would be included in the sym file. This record would normally just be treated as invalid on subsequent compiles, but it could theoretically be very large (depending on the memory layout) and might contain sensitive data from other parts of memory.
This could happen if a header was saved in the sym file, but the sym file data was not actually used because the source code in the main file did not match what was saved.
The search paths set by #pragma path directives were not being saved in the sym file, which would result in ORCA/C not searching the proper paths when looking for an include file after the sym file had ended. Here is an example showing the problem:
#pragma path "include"
#include <stdio.h>
int k = 50;
#include "n.h" /* will not find include:n.h */
There were various places where the flag for macro expansions was saved, set to false, and then later restored. If #pragma expand was used within those areas, it would not be properly applied. Here is an example showing that problem:
void f(void
#pragma expand 1
) {}
This could also affect some uses of #pragma expand within precompiled headers, e.g.:
#pragma expand 1
#include "a.h"
#undef foobar
#include "b.h"
...
Also, add a note saying that code in precompiled headers will not be expanded. (This has always been the case, but was not clearly documented.)
Previously, the predefined macros might or might not be saved in the sym file (depending on the contents of uninitialized memory), but in many cases they were. This was unnecessary, since these macros are automatically defined when the scanner is initialized. Reading them from the sym file could result in duplicate copies of them in the macro list. This is usually harmless, but might result in #undefs of macros from the command line not working properly.
An #undef could effectively be lost when compiling with a sym file. This would occur if the macro had already been saved in the sym file and the #undef occurred before a subsequent #include that was also recorded in the sym file. The solution is simply to terminate sym file generation if an #undef of an already-saved macro is encountered.
Here is an example showing the problem:
test.c:
#include "test1.h"
#undef x
#include "test2.h"
int main(void) {
#ifdef x
    return x;
#else
    return y;   /* correct: x was #undef'd, so this should return 6 */
#endif
}
test1.h:
#define x 27
test2.h:
#define y 6