When optimizing certain index calculations, properly indicate whether they should be signed or unsigned.

This type information is currently used when generating code for the large memory model, but not for the small memory model (which is a bug in itself, causing issues such as #45).

Because the correct type information was not being provided, the code generator could incorrectly use signed index computations when a program used a 16-bit unsigned index value in large-memory-model code. The following program is an example that was being miscompiled:

#pragma optimize 1
#pragma memorymodel 1

char c[0xFFFF];

int main(void) {
    unsigned i = 0xABCD;
    c[0xABCD] = 3;
    return c[i]; /* should return 3 */
}
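
To see why this matters, note that 0xABCD interpreted as a signed 16-bit value is -21555, while the intended unsigned index is 43981, so a signed index computation addresses memory far below c instead of c[0xABCD]. The difference can be illustrated in portable C (a standalone sketch, independent of ORCA/C, assuming 16-bit short and two's complement):

#include <stdio.h>

int main(void) {
    unsigned short i = 0xABCD;   /* assuming short is 16 bits */
    long zext = (long)i;         /* zero-extended:  43981 */
    long sext = (long)(short)i;  /* sign-extended: -21555 on two's complement */
    printf("%ld %ld\n", zext, sext);
    return 0;
}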
Stephen Heumann 2017-11-01 23:20:34 -05:00
parent 730544a6ce
commit 763c5192df
1 changed file with 8 additions and 0 deletions

@@ -822,6 +822,10 @@ case op^.opcode of {check for optimizations of this node}
 end; {with}
 op^.right^.opcode := pc_shl;
 op^.opcode := pc_ixa;
+if fromType.optype in [cgByte,cgWord] then
+   op^.optype := cgWord
+else
+   op^.optype := cgUWord;
 PeepHoleOptimization(opv);
 end; {if}
 end; {if}
@@ -836,6 +840,10 @@ case op^.opcode of {check for optimizations of this node}
 else
    op^.right := op^.right^.left;
 op^.opcode := pc_ixa;
+if fromType.optype in [cgByte,cgWord] then
+   op^.optype := cgWord
+else
+   op^.optype := cgUWord;
 PeepHoleOptimization(opv);
 end; {if}
 end; {else if}
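
Both hunks apply the same rule: if the index expression's source type is a signed byte or word, the pc_ixa node gets a signed 16-bit optype; otherwise it gets an unsigned one. Modeled in C (an illustrative sketch only; the enum values mirror the cg* names above, and ixaOptype is a made-up helper, not an actual ORCA/C routine):

/* Illustrative only: models the optype selection added in this commit. */
enum BaseType { cgByte, cgUByte, cgWord, cgUWord, cgLong, cgULong };

enum BaseType ixaOptype(enum BaseType fromType) {
    if (fromType == cgByte || fromType == cgWord)
        return cgWord;   /* signed 8/16-bit source: signed index */
    return cgUWord;      /* anything else: unsigned 16-bit index */
}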