Mirror of https://github.com/c64scene-ar/llvm-6502.git
Synced 2024-12-13 20:32:21 +00:00
Commit f0569be4a9
DAGcombine's ability to find reasons to remove truncates when they were not
needed. Consequently, the CellSPU backend would produce correct, but _really
slow and horrible_, code. Replaced with instruction sequences that do the
equivalent truncation in SPUInstrInfo.td.

- Re-examine how unaligned loads and stores work. Generated unaligned load
  code has been tested on the CellSPU hardware; see i32operations.c and
  i64operations.c in CodeGen/CellSPU/useful-harnesses. (While they may be toy
  test code, they do prove that some real-world code compiles correctly.)

- Fix truncating stores in bug 3193 (note: unpack_df.ll will still make llc
  fault because i64 ult is not yet implemented.)

- Added i64 eq and neq for setcc and select/setcc; started a new instruction
  information file for them in SPU64InstrInfo.td. Additional i64 operations
  should be added to this file and not to SPUInstrInfo.td.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61447 91177308-0d34-0410-b5e6-96231b3b80d8
useful-harnesses
and_ops.ll
call_indirect.ll
call.ll
ctpop.ll
dg.exp
dp_farith.ll
eqv.ll
extract_elt.ll
fcmp.ll
fdiv.ll
fneg-fabs.ll
i64ops.ll
icmp8.ll
icmp16.ll
icmp32.ll
icmp64.ll
immed16.ll
immed32.ll
immed64.ll
int2fp.ll
intrinsics_branch.ll
intrinsics_float.ll
intrinsics_logical.ll
loads.ll
mul_ops.ll
nand.ll
or_ops.ll
rotate_ops.ll
select_bits.ll
shift_ops.ll
sp_farith.ll
stores.ll
struct_1.ll
trunc.ll
vec_const.ll
vecinsert.ll