which is more convenient, and change getPICJumpTableRelocBaseExpr
to take a MachineFunction to match.
Next, move the X86 code that creates a PICBase symbol to
X86TargetLowering::getPICBaseSymbol from
X86MCInstLower::GetPICBaseSymbol, which lived in an
AsmPrinter-specific library. This eliminates a 'gross hack', and
allows us to implement X86TargetLowering::getPICJumpTableRelocBaseExpr,
which now calls it.
This in turn allows us to eliminate the
X86AsmPrinter::printPICJumpTableSetLabel method, which was the
only overload of printPICJumpTableSetLabel.
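For reference, a sketch of the two hooks after the move (the signatures
and the "$pb" label naming are approximations of this era's API, not a
verbatim copy):

  MCSymbol *X86TargetLowering::getPICBaseSymbol(const MachineFunction *MF,
                                                MCContext &Ctx) const {
    // The PIC base is a per-function local label, e.g. "L1$pb".
    const MCAsmInfo &MAI = *getTargetMachine().getMCAsmInfo();
    return Ctx.GetOrCreateSymbol(Twine(MAI.getPrivateGlobalPrefix()) +
                                 Twine(MF->getFunctionNumber()) + "$pb");
  }

  const MCExpr *X86TargetLowering::
  getPICJumpTableRelocBaseExpr(const MachineFunction *MF, unsigned JTI,
                               MCContext &Ctx) const {
    // In 32-bit PIC modes, jump table entries are offsets from the
    // PIC base symbol created above.
    return MCSymbolRefExpr::Create(getPICBaseSymbol(MF, Ctx), Ctx);
  }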
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94526 91177308-0d34-0410-b5e6-96231b3b80d8
dbg.declares we currently generate go through both
register allocators without perturbing the results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94480 91177308-0d34-0410-b5e6-96231b3b80d8
1. MachineJumpTableInfo is now created lazily for a function, the first time
   it actually makes a jump table, instead of for every function.
2. The encoding of jump table entries is now described by the
MachineJumpTableInfo::JTEntryKind enum. This enum is determined by the
TLI::getJumpTableEncoding() hook, instead of by lots of code scattered
throughout the compiler that "knows" that jump table entries are always
32-bits in pic mode (for example).
3. The size and alignment of jump table entries are now calculated based on
   their kind, instead of at MachineFunction creation time.
Future work includes using the EntryKind in more places in the compiler,
eliminating other logic that "knows" the layout of jump tables in various
situations.
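As a rough sketch of the new interface (the kind names and size logic
are approximate; TargetData was the datalayout class of this era):

  // MachineJumpTableInfo: how one jump table entry is emitted.
  enum JTEntryKind {
    EK_BlockAddress,         // A plain pointer to the basic block.
    EK_GPRel32BlockAddress,  // A 32-bit GP-relative address (e.g. MIPS).
    EK_LabelDifference32,    // A 32-bit label difference, used in PIC mode.
    EK_Custom32              // A 32-bit target-custom encoding.
  };

  // Entry size now follows from the kind, not from when the
  // MachineFunction was created:
  unsigned MachineJumpTableInfo::getEntrySize(const TargetData &TD) const {
    switch (getEntryKind()) {
    case EK_BlockAddress:       return TD.getPointerSize();
    case EK_GPRel32BlockAddress:
    case EK_LabelDifference32:
    case EK_Custom32:           return 4;
    }
    return ~0u; // Not reached.
  }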
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94470 91177308-0d34-0410-b5e6-96231b3b80d8
the '-pre-RA-sched' flag. It actually makes more sense to do it this way. Also,
keep track of the SDNode ordering by default. Eventually, we would like to make
this ordering a way to break a "tie" in the scheduler. However, doing that now
breaks the "CodeGen/X86/abi-isel.ll" test for 32-bit Linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94308 91177308-0d34-0410-b5e6-96231b3b80d8
missing ones are libsupport, libsystem and libvmcore. libvmcore is
currently blocked on bugpoint, which uses EH. Once bugpoint stops using
EH, we can switch EH off for libvmcore as well.
This #if 0's out 3 unit tests, because gtest requires RTTI.
Suggestions welcome on how to fix this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94164 91177308-0d34-0410-b5e6-96231b3b80d8
comments (fast isel, X86). This doesn't seem
to break any functionality, but will introduce
cases where -g affects the generated code. I'll
be fixing that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93811 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine does this, but apparently there are situations where this pattern will escape the optimizer and/or be created by isel. Here is a case seen in JavaScriptCore:
%t1 = sub i32 0, %a
%t2 = add i32 %t1, -1
The dag combiner pattern ((c1-A)+c2) -> (c1+c2)-A
will fold it to -1 - %a.
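A sketch of the corresponding fold in DAGCombiner::visitADD (variable
names follow the usual N0/N1C conventions; treat the details as
approximate):

  // fold ((c1-A)+c2) -> (c1+c2)-A
  if (N1C && N0.getOpcode() == ISD::SUB)
    if (ConstantSDNode *N0C = dyn_cast<ConstantSDNode>(N0.getOperand(0)))
      return DAG.getNode(ISD::SUB, N->getDebugLoc(), VT,
                         DAG.getConstant(N1C->getAPIntValue() +
                                         N0C->getAPIntValue(), VT),
                         N0.getOperand(1));

For the JavaScriptCore case above, c1 = 0 and c2 = -1, so the result is
(0 + -1) - %a = -1 - %a.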
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93773 91177308-0d34-0410-b5e6-96231b3b80d8
print/dumpWithDepth allows one to dump a DAG up to N levels deep.
dump/printWithFullDepth prints the whole DAG, subject to a depth limit
of 100 in the default case (to prevent infinite recursion).
Have CannotYetSelect do a dumpWithFullDepth so it is clearer exactly
what the non-matching DAG looks like.
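Hypothetical usage, taking the method names from the description above
(the exact signatures are an assumption):

  // Print only the top three levels of the DAG rooted at N:
  N->printWithDepth(errs(), &DAG, 3);
  // Dump the whole DAG below N, capped at depth 100:
  N->dumpWithFullDepth(&DAG);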
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93538 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93531 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93504 91177308-0d34-0410-b5e6-96231b3b80d8
really does need to be a vector type, because
TargetLowering::getOperationAction for SIGN_EXTEND_INREG uses that type,
and it needs to be able to distinguish between vectors and scalars.
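To illustrate, a sketch of where that type is consulted:
SIGN_EXTEND_INREG carries the type it extends from as a VTSDNode
operand, and legalization asks about exactly that type (approximate
code):

  // Operand 1 holds the "from" type, e.g. v4i8 for a v4i32 value whose
  // lanes were widened from i8.
  EVT ExtraVT = cast<VTSDNode>(Op.getOperand(1))->getVT();
  // A scalar i8 here could not distinguish the vector case, so a target
  // could not mark only the vector form Expand:
  if (TLI.getOperationAction(ISD::SIGN_EXTEND_INREG, ExtraVT) ==
      TargetLowering::Expand) {
    // ... expand as shl followed by sra ...
  }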
Also, fix some more issues with legalization of vector casts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93043 91177308-0d34-0410-b5e6-96231b3b80d8
When folding an and(any_ext(load)), both the any_ext and the
load must have only a single use.
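A sketch of the guard, assuming the usual hasOneUse() checks (names
hypothetical):

  // Folding and(any_ext(load)) into a narrower extending load is only
  // safe if nothing else consumes either intermediate value:
  if (N0.getOpcode() == ISD::ANY_EXTEND && N0.hasOneUse()) {
    SDValue L = N0.getOperand(0);
    if (ISD::isNON_EXTLoad(L.getNode()) && L.hasOneUse()) {
      // ... fold into a narrower zextload ...
    }
  }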
This removes the anyext-uses.ll testcase, which started failing,
because it is unreduced and it is unclear what it is testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92950 91177308-0d34-0410-b5e6-96231b3b80d8
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately, this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimization tends to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead, they are done as a late pass in sdisel.
This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
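A sketch of the new fold and of why it cannot live inside dag combine
(names hypothetical):

  // (OP (trunc x), (trunc y)) -> (trunc (OP x, y))
  if (N0.getOpcode() == ISD::TRUNCATE && N1.getOpcode() == ISD::TRUNCATE &&
      N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType()) {
    EVT WideVT = N0.getOperand(0).getValueType();
    SDValue WideOp = DAG.getNode(Opc, dl, WideVT,
                                 N0.getOperand(0), N1.getOperand(0));
    return DAG.getNode(ISD::TRUNCATE, dl, N->getValueType(0), WideOp);
  }

Shrink-demanded-ops canonicalizes in the opposite direction, narrowing
(trunc (OP x, y)) back to (OP (trunc x), (trunc y)); running both in the
same fixpoint loop ping-pongs forever, hence the late sdisel pass.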
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92849 91177308-0d34-0410-b5e6-96231b3b80d8