- Don't call malloc+free in the very hot forward().
- Don't call isTiedToDefOperand().
- Don't create BitVector temporaries.
- Merge DeadRegs into KillRegs.
- Eliminate the early clobber checks; they were irrelevant to scavenging.
- Remove unnecessary code from -Asserts builds.
This speeds up ARM PEI by 3.4x and overall llc -O0 codegen time by 11%.
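A minimal sketch of the allocation-avoidance pattern behind the first three
bullets, using llvm::BitVector (the class and member names below are made up
for illustration; this is not the actual RegScavenger code):

  #include "llvm/ADT/BitVector.h"

  // Reuse preallocated BitVectors instead of creating temporaries (and the
  // malloc/free they imply) on every forward() step.
  class ScavengerSketch {
    llvm::BitVector RegsAvailable; // reused across calls
    llvm::BitVector KillRegs;      // dead and killed regs merged into one set

  public:
    void enterBasicBlock(unsigned NumRegs) {
      // resize() allocates at most once per block; later calls reuse storage.
      RegsAvailable.resize(NumRegs);
      KillRegs.resize(NumRegs);
    }

    void forward(const llvm::BitVector &DefsThisInstr) {
      // Reset in place -- no temporary BitVector is constructed here.
      KillRegs.reset();
      KillRegs |= DefsThisInstr;     // operate on the preallocated set
      RegsAvailable.reset(KillRegs); // clear bits that are now defined
    }
  };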
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149189 91177308-0d34-0410-b5e6-96231b3b80d8
Sometimes there is only one 'resume' instruction per function. In those
situations, we don't need a separate block for the call to _Unwind_Resume. In
fact, it adds a lot of overhead to code-gen if we do that -- especially at -O0.
If we have a single 'resume' instruction, just generate the call within that
block.
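A rough sketch of the idea in current LLVM C++ API terms (the helper name and
setup are assumptions, not the code from this commit): when a function has
exactly one 'resume', emit the _Unwind_Resume call right in that block instead
of routing every landing pad through a shared unwind block.

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  // Sketch only: if F has a single 'resume', lower it in place.
  static bool lowerSingleResume(Function &F) {
    SmallVector<ResumeInst *, 4> Resumes;
    for (BasicBlock &BB : F)
      if (auto *RI = dyn_cast<ResumeInst>(BB.getTerminator()))
        Resumes.push_back(RI);

    if (Resumes.size() != 1)
      return false; // several resumes: a shared block may still pay off

    ResumeInst *RI = Resumes.front();
    Module &M = *F.getParent();
    LLVMContext &Ctx = F.getContext();

    // _Unwind_Resume takes the exception object pointer and never returns.
    FunctionCallee UnwindResume = M.getOrInsertFunction(
        "_Unwind_Resume", Type::getVoidTy(Ctx),
        PointerType::getUnqual(Type::getInt8Ty(Ctx)));

    IRBuilder<> Builder(RI);
    Value *ExnObj = Builder.CreateExtractValue(RI->getValue(), 0, "exn.obj");
    CallInst *CI = Builder.CreateCall(UnwindResume, {ExnObj});
    CI->setDoesNotReturn();
    Builder.CreateUnreachable();
    RI->eraseFromParent();
    return true;
  }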
<rdar://problem/10694814>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149159 91177308-0d34-0410-b5e6-96231b3b80d8
Move to a model where we build whatever branches are checked out
in the source directories. The old approach was a bit too smart (and
complicated), handling details best left to the user and the revision
control system.
In addition, get rid of support for llvm-gcc and for building gcc, as
these are no longer necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149149 91177308-0d34-0410-b5e6-96231b3b80d8
Unfortunately I also had to disable constant-pool-sharing.ll; the code it
tests has been updated to use the IL logic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149148 91177308-0d34-0410-b5e6-96231b3b80d8
Added a moveInstr API for moving instructions around within a basic block
while maintaining live intervals.
Updated ScheduleTopDownLive in MachineScheduler.cpp to use the moveInstr API
when reordering MIs.
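The moveInstr entry point itself isn't shown here; as a hedged illustration,
today's closest equivalent is LiveIntervals::handleMove, and the usage pattern
looks roughly like this (the wrapper name is an assumption):

  #include "llvm/CodeGen/LiveIntervals.h"
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  // Reorder MI within its own block and keep the live intervals consistent.
  // InsertPos must be in the same basic block as MI.
  static void moveWithinBlock(MachineInstr &MI,
                              MachineBasicBlock::iterator InsertPos,
                              LiveIntervals &LIS) {
    MachineBasicBlock *MBB = MI.getParent();
    MBB->splice(InsertPos, MBB, MI.getIterator()); // physically move the MI
    LIS.handleMove(MI);                            // update indexes/intervals
  }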
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149147 91177308-0d34-0410-b5e6-96231b3b80d8
GEP instructions are there for the compiler and shouldn't really output much
code (if any at all). When a GEP is placed in the entry block, FastISel (for
one) will not know that it could fold it into further uses -- for instance,
inside the EH handling code. This results in a lot of unnecessary spills and
loads, which bloat the code and slow down pretty much everything.
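A hedged illustration of the kind of sinking that avoids the problem (the
helper below is hypothetical and not the code from this commit): move a
single-use GEP out of the entry block and next to its user, so FastISel sees
the address computation right where the memory access happens and can fold it.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Sink single-use GEPs from the entry block down to their user.
  static void sinkEntryBlockGEPs(Function &F) {
    BasicBlock &Entry = F.getEntryBlock();
    for (Instruction &I : llvm::make_early_inc_range(Entry)) {
      auto *GEP = dyn_cast<GetElementPtrInst>(&I);
      if (!GEP || !GEP->hasOneUse())
        continue;
      auto *UseI = dyn_cast<Instruction>(GEP->user_back());
      // Skip PHIs, and only bother if the use lives in another block.
      if (!UseI || isa<PHINode>(UseI) || UseI->getParent() == &Entry)
        continue;
      GEP->moveBefore(UseI); // place the GEP right before its only use
    }
  }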
<rdar://problem/10694814>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149114 91177308-0d34-0410-b5e6-96231b3b80d8
Switch the constant pool entry sharing logic to use the mid-level constant
folding APIs instead of doing its own analysis. This makes it more general
(e.g. it can now share a <2 x i64> with a <4 x i32>) and avoids duplicating
a bunch of logic.
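A small, hedged example of the kind of question the mid-level folder answers
(ConstantFoldLoadFromConstPtr is used here purely for illustration; the exact
call sites in this change are not shown): given an existing <2 x i64>
constant, ask whether a <4 x i32> load of the same bytes folds to a constant,
which is exactly the test the sharing logic needs.

  #include "llvm/Analysis/ConstantFolding.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;

  int main() {
    LLVMContext Ctx;
    Module M("demo", Ctx);

    // An existing <2 x i64> constant, as it might sit in a constant pool.
    Type *I64 = Type::getInt64Ty(Ctx);
    Constant *Elts[] = {ConstantInt::get(I64, 1), ConstantInt::get(I64, 2)};
    Constant *V2I64 = ConstantVector::get(Elts);
    auto *GV = new GlobalVariable(M, V2I64->getType(), /*isConstant=*/true,
                                  GlobalValue::PrivateLinkage, V2I64, "cp");

    // A non-null result means a <4 x i32> user can share the same entry.
    Type *V4I32 = FixedVectorType::get(Type::getInt32Ty(Ctx), 4);
    if (Constant *Shared =
            ConstantFoldLoadFromConstPtr(GV, V4I32, M.getDataLayout())) {
      Shared->print(outs());
      outs() << "\n";
    }
    return 0;
  }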
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149111 91177308-0d34-0410-b5e6-96231b3b80d8
Adjust an example MachObjectWriter diagnostic to use source location
information to issue a better message.
Before:
LLVM ERROR: unknown ARM fixup kind!
After:
x.s:6:5: error: unsupported relocation on symbol
beq bar
^
rdar://9800182
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149093 91177308-0d34-0410-b5e6-96231b3b80d8
The Win64 calling convention has xmm6-15 as callee-saved while still
clobbering all ymm registers.
Add a YMM_HI_6_15 pseudo-register that aliases the clobbered part of the
ymm registers, and mark it as call-clobbered. This allows xmm registers to
stay live across calls.
This hack wouldn't be necessary with RegisterMask operands representing
the call clobbers, but they are not quite operational yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149088 91177308-0d34-0410-b5e6-96231b3b80d8
While we're at it, allow PatternMatch's "neg" pattern to match integer
vector negations, and enhance ComputeNumSignBits to handle shl of
vectors.
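A short sketch of what the two generalizations allow (the helper names are
made up; m_Neg and ComputeNumSignBits are the real APIs, though their headers
and signatures have shifted across LLVM versions):

  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/PatternMatch.h"
  #include "llvm/IR/Value.h"

  using namespace llvm;
  using namespace llvm::PatternMatch;

  // With the generalized matcher, 'sub <4 x i32> zeroinitializer, %x' now
  // matches m_Neg just like its scalar counterpart.
  static bool isNegation(Value *V, Value *&Negated) {
    return match(V, m_Neg(m_Value(Negated)));
  }

  // ComputeNumSignBits can likewise be asked about a vector shl; the result
  // is the minimum sign-bit count across lanes.
  static unsigned signBitsOf(Value *V, const DataLayout &DL) {
    return ComputeNumSignBits(V, DL);
  }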
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149082 91177308-0d34-0410-b5e6-96231b3b80d8