Commit Graph

8 Commits

Author SHA1 Message Date
Stephen Heumann 8c81b23b6f Expand the size of the object buffer from 64K to 128K, and use 32-bit values to track related sizes.
This allows functions that require an OMF segment byte count of up to 128K to be compiled, although the length in memory at run time is still limited to 64K. (The OMF segment byte count is usually larger than the in-memory length, due to the size of relocation records, etc.)

This is useful for compiling large functions, e.g. the main interpreter loop in git. It also fixes the bug shown in the compca23 test case, where functions that require a segment of over 64K may appear to compile correctly but generate corrupted OMF segment headers. This was due to tracking sizes with 16-bit values that could roll over.

This patch increases the memory needed at run time by 64K. This shouldn’t generally be a problem on systems with sufficient memory, although it does increase the minimum memory requirement a bit. If behavior in low-memory configurations is a concern, buffSize could be made into a run-time option.
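
As a rough sketch of the rollover described above (illustrative C only; the compiler itself is written in Pascal, and the variable names here are made up):

#include <stdio.h>

int main(void)
{
    unsigned short size16 = 0;      /* old 16-bit size counter */
    unsigned long  size32 = 0;      /* new 32-bit size counter */
    unsigned long  bytes = 70000;   /* an OMF segment just over 64K */
    unsigned long  i;

    for (i = 0; i < bytes; i++) {
        size16++;                   /* wraps past 65535 */
        size32++;
    }

    /* prints 4464 and 70000: the 16-bit count has rolled over */
    printf("%u %lu\n", size16, size32);
    return 0;
}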
2017-10-21 20:36:21 -05:00
Stephen Heumann 41fb05404e Don’t add the length of the last segment generated in the previous execution to that of the segment in the root file.
This would occur if ORCA/C remained in memory and was restarted after a previous execution, because the 'pc' value was not reinitialized. The ORCA linker seems to ignore the too-long segment length value, but ORCA/C should generate a correct value that actually corresponds to the length of the segment.
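
A minimal sketch of the underlying pattern (hypothetical C, not the actual Pascal compiler source): a program that stays resident keeps its global state between runs, so a counter that is only initialized at load time must be reset explicitly when a new run starts.

#include <stdio.h>

static unsigned long pc = 0;        /* initialized only once, at load time */

/* Called at the start of each run; without this reset, a restarted
   resident program keeps adding to the previous run's total. */
static void init_run(void)
{
    pc = 0;
}

static void emit_bytes(unsigned long n)
{
    pc += n;
}

int main(void)
{
    init_run();
    emit_bytes(1000);
    printf("segment length: %lu\n", pc);
    return 0;
}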
2017-10-21 20:36:21 -05:00
Stephen Heumann 709f9b3f25 Fix bug where comparing 32-bit values in static arrays or structs against 0 may give wrong results with large memory model.
The issue was that 16-bit absolute addressing (in the data bank) was being used to access the data to compare, but with the large memory model the static arrays or structs are not necessarily in the data bank, so absolute long addressing should be used.

This was sometimes causing failures in the C4.6.4.1.CC and C4.6.6.1.CC conformance tests in the ORCA/C test suite.

The following program often demonstrates the problem (depending on memory layout and contents):

#pragma memorymodel 1
#pragma optimize 1

#include <stdio.h>

int i;
char ch1[32000];
long L1[1];

int main (void)
{
    if (L1 [0] != 0)
        printf("%li\n", L1[0]); /* shouldn't print */

    /* buggy behavior can happen if the bank bytes of these pointers differ */
    printf("%p %p\n", &L1[0], &i);
}
2017-10-21 20:36:21 -05:00
Stephen Heumann fd48d77c60 Don’t erroneously optimize out lda instructions in certain cases involving instructions the native-code optimizer didn’t know about.
This could cause problems when asm blocks contained instructions that the ORCA/C native code optimizer didn’t know about, as in the example below. It might also be possible to trigger this bug without asm blocks (particularly with the large memory model), but I haven’t run into a case that does so.

The new approach conservatively assumes that unknown instructions block the optimization. This should be equivalent to the old code with respect to the instructions defined in CGI.pas, except that m_bit_imm should have been treated as blocking the optimization but was not. There are still some other potential problem cases with applying this lda-elimination optimization to arbitrary assembly code, but fixing them might interfere with the optimization in useful cases, so I’m leaving those alone for now.

Here is an example of a program with an asm block affected by this problem:

#pragma optimize 74
#include <stdio.h>

int x,y;

/* should print 2 when invoked with argc==1 */
int main(int argc, char **argv)
{
    x = argc;
    y = argc + 6;

    asm {
        lda #1
        pha
        eor >x
        bne done
        inc argc
done:   pla
    }

    printf("%i\n", argc);
}
2017-10-21 20:36:21 -05:00
Stephen Heumann efb23003f7 Don't do an optimization that would move a store to a DP location above an indirect load using that DP location.
This generated invalid code in instances like the following. The code generated for "s = s->u.next" would update the most significant word of s first, then use an indirect load with the half-updated pointer value to update the least significant word of s. This would generally corrupt the result if the new and old pointers had different bank bytes.

#pragma optimize 79
#include <stdio.h>

struct S {
	int i;
	union {
		struct S * next;
	} u;
} s1 = {0, 0};

int main (void)
{
	struct S * s = &s1;
	s = s->u.next;
	if (s != 0)
		puts("compiler bug detected\n"); /* May not always be triggered, depending on memory contents. */
}
2017-10-21 20:36:20 -05:00
Stephen Heumann 97cca84713 When pointer arithmetic is used to initialize a global or static variable to point before the beginning of a string constant, initialize it to the value indicated by the pointer arithmetic.
Previously, such initializations would sometimes generate a garbage value pointing up to 65535 bytes beyond the start of the string constant. (This was due to a lack of sign-extension in the object code generation.)

Computing a pointer to before the start of an object invokes undefined behavior, so the previous behavior wasn't technically wrong, but it was unintuitive and served no useful purpose. The new behavior should at least be easier to understand and debug.
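
For illustration (a hedged example, not taken from the commit; as noted above, computing a pointer before the start of an object is undefined behavior), an initializer of this kind looks like:

#include <stdio.h>

/* points 2 bytes before the start of the string constant */
static char *p = "hello" - 2;

int main(void)
{
    /* previously this address could be garbage, up to 65535 bytes past the
       string constant; now it is the address the arithmetic indicates */
    printf("%p\n", (void *)p);
    return 0;
}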
2017-10-21 20:36:20 -05:00
Stephen Heumann 46b6aa389f Change all text/source files to LF line endings. 2017-10-21 18:40:19 -05:00
mikew50 e72177985e ORCA/C 2.1.0 source from the Opus ][ CD 2017-10-01 17:47:47 -06:00