FU per CPU arch to 32 per itinerary, allowing precise modelling of quite
complex pipelines in the future.
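As a minimal illustration of why 32 is the natural limit, assuming the functional units of a stage are still encoded as a bitmask in a 32-bit field (the struct and names below are illustrative, not the actual itinerary data structures):

#include <cstdint>

// Illustrative stand-in for an itinerary stage: each bit in Units selects one
// functional unit the stage may issue to, so a 32-bit mask caps the number of
// distinct FUs per itinerary at 32.
struct ItinStageSketch {
  unsigned Cycles;   // length of the stage in machine cycles
  uint32_t Units;    // bitmask, one bit per functional unit (at most 32)
};

// A stage that may issue to either of two hypothetical units, FU_ALU0 (bit 0)
// or FU_ALU1 (bit 1), for one cycle.
static const ItinStageSketch ALUStage = { 1, (1u << 0) | (1u << 1) };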
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101754 91177308-0d34-0410-b5e6-96231b3b80d8
const_casts, and it reinforces the design of the Target classes being
immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
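A rough sketch of what the static form buys a caller (this is not the PIC16 code itself, and the parameter list is approximate for this revision): the foldability question can now be asked without holding a SelectionDAGISel instance.

#include "llvm/CodeGen/SelectionDAGISel.h"
#include "llvm/Target/TargetMachine.h"   // CodeGenOpt::Level lives here at this era

// Hedged sketch: call the now-static query directly on the class.
static bool canFoldSketch(llvm::SDValue N, llvm::SDNode *Use, llvm::SDNode *Root) {
  return llvm::SelectionDAGISel::IsLegalToFold(N, Use, Root,
                                               llvm::CodeGenOpt::Default);
}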
And PIC16's AsmPrinter no longer uses TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101635 91177308-0d34-0410-b5e6-96231b3b80d8
may be called when either the source or destination type is i64, and my
change also hadn't fixed the most obvious problem -- assuming that i64 will
only be bitconverted to f64, ignoring the various vector types.
Radar 7873160.
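A hypothetical sketch of the shape of the fix (names and structure are illustrative, not the actual ARM lowering code): the expansion has to tolerate i64 on either side of the bitconvert and accept any 64-bit counterpart type, not just f64.

#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

// Illustrative only: accept i64 on either side and any 64-bit partner type
// (f64, v2f32, v4i16, v8i8, v1i64, ...), instead of assuming i64 <-> f64.
static bool isExpandableBitconvert(SDNode *N) {
  EVT SrcVT = N->getOperand(0).getValueType();
  EVT DstVT = N->getValueType(0);
  if (SrcVT != MVT::i64 && DstVT != MVT::i64)
    return false;                       // neither side is i64; not our case
  EVT Other = (SrcVT == MVT::i64) ? DstVT : SrcVT;
  return Other.getSizeInBits() == 64;   // any 64-bit type, vector or scalar
}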
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101615 91177308-0d34-0410-b5e6-96231b3b80d8
Probably the best way to know that all getOperand() calls have been handled
is to replace that API rather than update it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101579 91177308-0d34-0410-b5e6-96231b3b80d8
case. Also, the 0xFF hex literal involved in the shift for ESize64 should be
suffixed with "ul" to preserve the shift result.
Implemented printHex*ImmOperand() by copying from ARMAsmPrinter.cpp and added a
test case for DisassembleN1RegModImmFrm()/printHex64ImmOperand().
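A minimal standalone illustration of the suffix issue (the variable names are made up; only the arithmetic point matches the patch): an unsuffixed 0xFF is a 32-bit int, so shifting it up into the high byte of a 64-bit element mask loses the result.

#include <cstdint>
#include <cstdio>

int main() {
  // On LP64 hosts the "ul" suffix makes the literal 64 bits wide, so the
  // shift happens in a wide enough type.  An unsuffixed 0xFF would be a
  // 32-bit int, and shifting it by 56 bits is undefined behavior that
  // silently truncates the mask.  "ull" would be fully portable.
  uint64_t ByteMask = 0xFFul << 56;
  printf("mask = 0x%016llx\n", (unsigned long long)ByteMask);
  return 0;
}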
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101557 91177308-0d34-0410-b5e6-96231b3b80d8
to the UAL syntax of LDCL<c>, instead.
Add a test case for this change, which also tests the removal of assert() from
printAddrMode2OffsetOperand().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101527 91177308-0d34-0410-b5e6-96231b3b80d8
with a fix for self-hosting
rotate CallInst operands, i.e., move the callee to the back
of the operand array
the motivations for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and the callee, faster call-graph
construction, and a smaller compiler binary
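A minimal sketch of what the rotated layout means for clients (the helper names are made up for illustration; they just read the operand list directly): argument i becomes operand i, and the callee is the last operand, so no offset is needed for argument access.

#include "llvm/Instructions.h"   // header path as of this era

using namespace llvm;

// With the callee rotated to the back, the final operand is the callee...
static Value *getCalleeSketch(CallInst *CI) {
  return CI->getOperand(CI->getNumOperands() - 1);
}

// ...and the call arguments occupy the leading operand slots.
static Value *getArgSketch(CallInst *CI, unsigned i) {
  return CI->getOperand(i);
}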
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101465 91177308-0d34-0410-b5e6-96231b3b80d8
with a fix
rotate CallInst operands, i.e., move the callee to the back
of the operand array
the motivations for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and the callee, faster call-graph
construction, and a smaller compiler binary
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101397 91177308-0d34-0410-b5e6-96231b3b80d8
of the operand array
the motivations for this patch are laid out in my mail to llvm-commits:
more efficient access to operands and the callee, faster call-graph
construction, and a smaller compiler binary
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101364 91177308-0d34-0410-b5e6-96231b3b80d8
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
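For reference, a hypothetical C++ bitfield that produces this pattern (the struct and function names are made up; only the shape of the access matters): a store to one 32-bit field of a 64-bit word used to be lowered as a load/mask/or/store of the whole word, and now becomes a single narrow store.

#include <cstdint>

// Made-up layout: two 32-bit fields packed into one 64-bit word.
struct Packed {
  uint64_t lo : 32;
  uint64_t hi : 32;
};

// Storing to 'hi' used to compile to a 64-bit load/and/shl/add/store sequence;
// with the new combine it becomes a single 32-bit store at the field's offset.
void setHi(Packed &p, uint32_t v) {
  p.hi = v;
}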
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101343 91177308-0d34-0410-b5e6-96231b3b80d8
function checks whether we have a valid submode for VLDM/VSTM (must be either
"ia" or "db") before calling ARM_AM::getAM5Opc(AMSubMode, unsigned char).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101306 91177308-0d34-0410-b5e6-96231b3b80d8