Fix bugs exposed by the GCC DejaGnu testsuite:
1. The load may actually be used by a dead instruction, which
would cause an assert.
2. The load may not be used by the current chain of instructions,
and we could move it past a side-effecting instruction. Change
how we process uses to define the problem away.
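A hedged source-level sketch of the second hazard (names and constants are illustrative): if the compare were selected first and the load folded into it, the load would effectively move past the store, which may alias *p.

int hazard(int *p, int *q) {
  int v = *p;       // the load
  *q = 0;           // side-effecting instruction that may alias *p
  return v == 47;   // folding the load into this compare would reorder it past the store
}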
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130018 91177308-0d34-0410-b5e6-96231b3b80d8
On x86 this allows folding a load into the cmp, greatly reducing register pressure.
movzbl (%rdi), %eax
cmpl $47, %eax
->
cmpb $47, (%rdi)
This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
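A hedged source-level example of the pattern (the function name is illustrative): a byte loaded from memory and compared against a constant, which the fold turns into the memory-operand cmpb shown above.

bool startsWithSlash(const char *p) {
  return *p == 47;   // 47 == '/', matching the cmpb $47, (%rdi) above
}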
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130005 91177308-0d34-0410-b5e6-96231b3b80d8
An exception is thrown via a call to __cxa_throw, which we don't expect to
return. Therefore, the "true" part of the invoke goes to a BB that has
'unreachable' as its only instruction. This is lowered into an empty MachineBB.
The landing pad for this invoke, however, is directly after the "true" MBB.
When the empty MBB is removed, the landing pad is directly below the BB with the
invoke call. The unconditional branch is removed and then the two blocks are
merged together.
The testcase is too big for a regression test.
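A minimal C++ sketch of the control-flow shape described above (illustrative only, not the original testcase): the throw lowers to an invoke of __cxa_throw whose normal successor is an unreachable block, with the landing pad for the enclosing try right after it.

struct Err {};

void mayThrow(bool b) {
  try {
    if (b)
      throw Err();   // invoke of __cxa_throw; its "true" successor is unreachable
  } catch (...) {
    // landing pad
  }
}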
<rdar://problem/9305728>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129965 91177308-0d34-0410-b5e6-96231b3b80d8
These intervals are allocatable immediately after splitting, but they may be
evicted because of later splitting. This is rare, but when it happens they
should be split again.
The remainder intervals that cannot be allocated after splitting still move
directly to spilling.
SplitEditor::finish can optionally provide a mapping from new live intervals
back to the original interval indexes returned by openIntv().
Each original interval index can map to multiple new intervals after connected
components have been separated. Dead code elimination may also add existing
intervals to the list.
The reverse mapping allows the SplitEditor client to treat the new intervals
differently depending on the split region they came from.
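A hedged, self-contained toy of how a client might consume such a reverse mapping (the names, the vector form, and treating index 0 as the remainder interval are assumptions based on the description above, not LLVM's exact API):

#include <cstdio>
#include <vector>

int main() {
  // Suppose finish() reported four new intervals: each entry maps a new
  // interval back to the openIntv() index it came from, with 0 standing
  // in for the remainder interval here.
  std::vector<unsigned> IntvMap = {1, 0, 1, 0};
  for (unsigned i = 0; i != IntvMap.size(); ++i) {
    if (IntvMap[i] == 0)
      std::printf("new interval %u: remainder, goes directly to spilling\n", i);
    else
      std::printf("new interval %u: from openIntv() index %u, allocatable\n", i, IntvMap[i]);
  }
  return 0;
}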
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129925 91177308-0d34-0410-b5e6-96231b3b80d8
TII::isTriviallyReMaterializable() shouldn't depend on any properties of the
register being defined by the instruction. Rematerialization is going to create
a new virtual register anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129882 91177308-0d34-0410-b5e6-96231b3b80d8
On the x86-64 and thumb2 targets, some registers are more expensive to encode
than others in the same register class.
Add a CostPerUse field to the TableGen register description, and make it
available from TRI->getCostPerUse. This represents the cost of a REX prefix or a
32-bit instruction encoding required by choosing a high register.
Teach the greedy register allocator to prefer cheap registers for busy live
ranges (as indicated by spill weight).
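A self-contained toy sketch of the heuristic (not the allocator's actual code; names, the threshold, and the non-empty candidate list are assumptions): a busy live range pays CostPerUse on every use, so it prefers the cheapest register to encode, while a rarely-used range can take any candidate.

#include <vector>

struct Candidate { unsigned Reg; unsigned CostPerUse; };

unsigned pickRegister(const std::vector<Candidate> &Cands, float SpillWeight,
                      float BusyThreshold) {
  // A rarely-used range pays the encoding cost only a few times, so any
  // candidate will do.
  if (SpillWeight < BusyThreshold)
    return Cands.front().Reg;
  // A busy range pays CostPerUse on every use; pick the cheapest encoding
  // (e.g. avoid a register that needs a REX prefix or a 32-bit encoding).
  const Candidate *Best = &Cands.front();
  for (const Candidate &C : Cands)
    if (C.CostPerUse < Best->CostPerUse)
      Best = &C;
  return Best->Reg;
}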
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129864 91177308-0d34-0410-b5e6-96231b3b80d8
manually and pass all (now) 4 arguments to the mul libcall. Add a new
ExpandLibCall for just this (copied gratuitously from type legalization).
Fixes rdar://9292577
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129842 91177308-0d34-0410-b5e6-96231b3b80d8
- There is a minor semantic change here (evidenced by the test change) for
Darwin triples that have no version component. I debated changing the default
behavior of isOSVersionLT, but decided it made more sense for triples to be
explicit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129802 91177308-0d34-0410-b5e6-96231b3b80d8
Add an avoidWriteAfterWrite() target hook to identify register classes that
suffer from write-after-write hazards. For those register classes, try to avoid
writing the same register in two consecutive instructions.
This is currently disabled by default. We should not spill to avoid hazards!
The command-line flag -avoid-waw-hazard can be used to enable WAW avoidance.
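A toy, self-contained sketch of the idea behind the hook (names and the interface are illustrative, not LLVM's actual code): when a register class is flagged as WAW-sensitive, prefer a destination register other than the one the previous instruction just wrote.

#include <vector>

struct ToyTarget {
  // Illustrative stand-in for the hook: true for classes with WAW hazards.
  bool avoidWriteAfterWrite(unsigned RegClass) const { return RegClass == 1; }
};

// Pick a destination from the free list, avoiding the previous def when the
// class is WAW-sensitive. Assumes Free is non-empty.
unsigned pickDef(const ToyTarget &T, unsigned RegClass,
                 const std::vector<unsigned> &Free, unsigned PrevDef) {
  for (unsigned R : Free)
    if (!T.avoidWriteAfterWrite(RegClass) || R != PrevDef)
      return R;
  return Free.front();   // nothing else free: accept the hazard rather than spill
}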
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129772 91177308-0d34-0410-b5e6-96231b3b80d8
en masse for C++ PODs. On my C++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129755 91177308-0d34-0410-b5e6-96231b3b80d8
registers for fast allocation a different way. This has us updating
used registers only when we're using that exact register.
Fixes rdar://9207598
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129711 91177308-0d34-0410-b5e6-96231b3b80d8
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts (see the sketch after this list)
3. teach tblgen to handle shift immediates that are different sizes than the
shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
instead of FastEmit_ri to simplify code.
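Item 2 sketched at the source level (the function name is illustrative): a multiply by a power of two that fast isel can now select as a shift.

unsigned scaleBy8(unsigned x) {
  return x * 8;   // selectable as x << 3, even under fast isel at -O0
}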
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129666 91177308-0d34-0410-b5e6-96231b3b80d8
less trivial things) into a dummy lea. Before we generated:
_test: ## @test
movq _G@GOTPCREL(%rip), %rax
leaq (%rax), %rax
ret
now we produce:
_test: ## @test
movq _G@GOTPCREL(%rip), %rax
ret
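A hedged source-level shape that produces the code above (G stands in for the real global): returning the address of an external global loads it from the GOT, and the dummy lea is no longer emitted.

extern int G;
int *test() { return &G; }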
This is part of rdar://9289558
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129662 91177308-0d34-0410-b5e6-96231b3b80d8
The basic issue here is that bottom-up isel was matching the branch
and compare, and failing to fold the load into the branch/compare
combo. Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:
cmpb $0, 52(%rax)
je LBB4_2
instead of:
movb 52(%rax), %cl
cmpb $0, %cl
je LBB4_2
This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and
generating less code.
This was one of the biggest classes of missing load folding. Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.
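A hedged source-level shape of the cmpb/je example above (the 52-byte offset and names are illustrative): at -O0, the byte load now folds directly into the compare feeding the branch.

struct Rec { char pad[52]; char flag; };
void process(Rec *r);

void visit(Rec *r) {
  if (r->flag)           // cmpb $0, 52(%rax) ; je <skip>
    process(r);
}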
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129656 91177308-0d34-0410-b5e6-96231b3b80d8
which don't need to check for falling off the end of a block *and* end of phi
nodes, since terminators are never phis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129655 91177308-0d34-0410-b5e6-96231b3b80d8
Returning a new node makes the code try to replace the old node, which
in the included testcase is killed by CSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129650 91177308-0d34-0410-b5e6-96231b3b80d8
The transferValues() function can now handle both singly and multiply defined
values, as long as the resulting live range is known. Only rematerialized values
have their live range recomputed by extendRange().
The updateSSA() function can now insert PHI values in bulk across multiple
values in multiple target registers in one pass. The list of blocks received
from transferValues() is in layout order, which seems to work well for the
iterative algorithm. Blocks from extendRange() are still in reverse BFS order,
but this function is used so rarely now that it doesn't matter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129580 91177308-0d34-0410-b5e6-96231b3b80d8
Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129571 91177308-0d34-0410-b5e6-96231b3b80d8