physical registers. This is especially critical for the latter two since they
start the live interval of a super-register, e.g.
%D0<def> = INSERT_SUBREG %D0<undef>, %S0<kill>, 1
If this instruction is eliminated, the register scavenger will not be happy as
D0 is not defined previously.
This fixes PR5055.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82968 91177308-0d34-0410-b5e6-96231b3b80d8
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
This eliminates MachineInstr's std::list member and allows the data to be
created by isel and live for the remainder of codegen, avoiding a lot of
copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
fields with the same information. This introduces some redundancy, but
it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
the greatest alignment.
Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.
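For illustration only, here is a minimal C++ sketch of the ownership model described above; the type and method names are hypothetical, not the actual MachineFunction / MachineMemOperand API:

#include <memory>
#include <vector>

// Memory operands live in a function-owned pool, so isel and later passes
// share pointers instead of copying per-instruction lists.
struct MemOperandInfo {
  unsigned Size;
  unsigned Alignment;
};

class FunctionPool {
  std::vector<std::unique_ptr<MemOperandInfo>> Operands;

public:
  // Allocate once; the returned pointer stays valid for the whole function.
  MemOperandInfo *allocate(unsigned Size, unsigned Alignment) {
    Operands.push_back(
        std::make_unique<MemOperandInfo>(MemOperandInfo{Size, Alignment}));
    return Operands.back().get();
  }
};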
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82794 91177308-0d34-0410-b5e6-96231b3b80d8
For the AAPCS ABI, SP must always be 4-byte aligned, and at any "public
interface" it must be 8-byte aligned. For the older ARM APCS ABI, the stack
alignment is just always 4 bytes. For X86, we currently align SP at
entry to a function (e.g., to 16 bytes for Darwin), but no stack alignment
is needed at other times, such as for a leaf function.
After discussing this with Dan, I decided to go with the approach of adding
a new "TransientStackAlignment" field to TargetFrameInfo. This value
specifies the stack alignment that must be maintained even in between calls.
It defaults to 1 except for ARM, where it is 4. (Some other targets may
also want to set this if they have similar stack requirements. It's not
currently required for PPC because it sets targetHandlesStackFrameRounding
and handles the alignment in target-specific code.) The existing StackAlignment
value specifies the alignment upon entry to a function, which is how we've
been using it anyway.
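For illustration only (this is not the actual TargetFrameInfo code), keeping an SP adjustment legal under a power-of-two transient alignment amounts to rounding it up:

// Round a stack-pointer adjustment up to the transient stack alignment,
// e.g. TransientAlign = 4 for ARM AAPCS between calls.
unsigned roundToTransientAlign(unsigned Bytes, unsigned TransientAlign) {
  return (Bytes + TransientAlign - 1) & ~(TransientAlign - 1);
}
// roundToTransientAlign(6, 4) == 8, so SP stays 4-byte aligned between calls.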
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82767 91177308-0d34-0410-b5e6-96231b3b80d8
LiveVariables adds implicit kills to correctly track partial register kills. This works well enough and is fairly accurate, but the coalescer can make it impossible to maintain these markers, e.g.
BL <ga:sss1>, %R0<kill,undef>, %S0<kill>, %R0<imp-def>, %R1<imp-def,dead>, %R2<imp-def,dead>, %R3<imp-def,dead>, %R12<imp-def,dead>, %LR<imp-def,dead>, %D0<imp-def>, ...
...
%reg1031<def> = FLDS <cp#1>, 0, 14, %reg0, Mem:LD4[ConstantPool]
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When reg1031 and S0 are coalesced, the copy (FCPYS) will be eliminated and the implicit-kill of D0 is lost. In this case it's possible to move the marker to the FLDS, but in many cases this is not possible. Suppose
%reg1031<def> = FOO <cp#1>, %D0<imp-def>
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When FCPYS goes away, the definition of S0 is the "FOO" instruction. However, transferring the D0 implicit-kill to FOO doesn't work since FOO is the def of D0 itself. We need to fix this at some point by introducing a "kill" pseudo instruction to track liveness.
Disabling the assertion is not ideal, but the machine verifier is doing that job now. It's important to note that a double-def is not a miscomputation: it means a register that should be free is not tracked as free, so it's a performance issue rather than a correctness issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82677 91177308-0d34-0410-b5e6-96231b3b80d8
of the defs are processed.
Also fix an implicit_def propagation bug: an implicit_def of a physical register
should be applied to uses of its sub-registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82616 91177308-0d34-0410-b5e6-96231b3b80d8
variable increment / decrement slightly higher priority.
This has a major impact on some micro-benchmarks. On MultiSource/Applications
and SPEC tests, it's a minor win. It also reduces 256.bzip2's instruction count
by 8%, and 164.gzip's by 55, on i386 / Darwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82485 91177308-0d34-0410-b5e6-96231b3b80d8
tied to different source registers, the TwoAddressInstructionPass needs to
be smarter. Change it to check before replacing a source register whether
that source register is tied to a different destination register, and if so,
defer handling it until a subsequent iteration.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80654 91177308-0d34-0410-b5e6-96231b3b80d8
makes an egregious hack somewhat more palatable. Bringing the LSDA forward
and making it a GV available for reference would be even better, but is
beyond the scope of what I'm looking to solve at this point.
Objective C++ code could generate function names that broke the previous
scheme. This fixes that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80649 91177308-0d34-0410-b5e6-96231b3b80d8
vector shuffles. Temporarily remove the tests for these operations until the
new implementation is working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79579 91177308-0d34-0410-b5e6-96231b3b80d8
This is derived from a patch by Anton Korzh. I modified it to recognize
the VEXT shuffles during legalization and lower them to a target-specific
DAG node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79428 91177308-0d34-0410-b5e6-96231b3b80d8
support unaligned mem access only for certain types. (Should it be size
instead?)
ARM v7 supports unaligned access for i16 and i32; some v6 variants support it
as well.
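As a hypothetical sketch of the per-type hook (not the exact TargetLowering interface), the ARM answer would look roughly like:

// ARMv7, and some v6 variants, can load/store i16 and i32 unaligned.
bool allowsUnalignedAccess(unsigned BitWidth, bool HasV7Ops) {
  return HasV7Ops && (BitWidth == 16 || BitWidth == 32);
}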
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79127 91177308-0d34-0410-b5e6-96231b3b80d8
It is legal for an inline asm operand to use an earlyclobber register if the
use operand is tied to the earlyclobber operand. The issue is discussed here:
http://gcc.gnu.org/ml/gcc/1999-04n/msg00431.html
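A hypothetical example (not taken from the PR) of the legal combination: the input constraint "0"(x) ties the input to the earlyclobber output "=&r"(x):

int bump(int x) {
  asm("add %0, %0, #1" : "=&r"(x) : "0"(x));  // ARM syntax
  return x;
}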
We should perhaps let only the machine code verifier worry about these finer
details. EarlyClobber operands are not really interesting to the scavenger.
This fixes PR4528 for the third time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79122 91177308-0d34-0410-b5e6-96231b3b80d8
In a naked function, the flag is never set and getPristineRegs() returns an
empty list. That means naked functions are able to clobber callee saved
registers, but that is the whole point of naked functions.
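For example (illustration only, ARM), a naked function consists solely of inline asm and gets no prologue or epilogue, so nothing preserves callee-saved registers on its behalf:

extern "C" __attribute__((naked)) void bare_return() {
  asm volatile("bx lr");
}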
This fixes PR4716.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79096 91177308-0d34-0410-b5e6-96231b3b80d8
the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78646 91177308-0d34-0410-b5e6-96231b3b80d8
instead of syntactically as a string. This means that it keeps track of the
segment, section, flags, etc directly and asmprints them in the right format.
This also includes parsing and validation support for llvm-mc and
"attribute(section)", so we should now start getting errors about invalid
section attributes from the compiler instead of the assembler on darwin.
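For example, here is a Darwin section attribute the compiler can now validate; the string format is "segment,section[,attributes]" (the symbol name is made up):

__attribute__((section("__DATA,__mycounters")))
static int Counter = 0;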
Still todo:
1) Uniquing of darwin mcsections
2) Move all the Darwin stuff out to MCSectionMachO.[cpp|h]
3) There are a few FIXMEs; for example, what is the syntax to get the
S_GB_ZEROFILL segment type?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78547 91177308-0d34-0410-b5e6-96231b3b80d8
killed by another operand.
There is probably a better fix: either 1) the scavenger can look at other operands, or
2) LiveVariables can be smarter about kill markers. Patches welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78072 91177308-0d34-0410-b5e6-96231b3b80d8
When LowerSubregsInstructionPass::LowerInsert eliminates an INSERT_SUBREG
instruction because it is an identity copy, make sure that the same registers
are alive before and after the elimination.
When the super-register is marked <undef> this requires inserting an
IMPLICIT_DEF instruction to make sure the super register is live.
Fix a related bug where a kill flag on the inserted sub-register was not transferred properly.
Finally, clear the undef flag in MachineInstr::addRegisterKilled. Undef implies dead and kill implies live, so they can't both be valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77989 91177308-0d34-0410-b5e6-96231b3b80d8
wide vectors. Likewise, change VSTn intrinsics to take separate arguments
for each vector in a multi-vector struct. Adjust tests accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77468 91177308-0d34-0410-b5e6-96231b3b80d8
symbols were not getting stubs. While I'm at it, add a big testcase for
stub generation to make sure I don't break anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75737 91177308-0d34-0410-b5e6-96231b3b80d8
as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.
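A minimal sketch of the (index, bool) representation, with hypothetical names rather than the actual LiveInterval data structure:

struct KillInfo {
  unsigned Index;   // instruction index at which the value is killed
  bool IsPHIKill;   // true when the kill happens at a PHI join
};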
A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test
condition has been updated to reflect this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75097 91177308-0d34-0410-b5e6-96231b3b80d8
Note: the isUndef marker must be placed even on an implicit_def's def operand, or else the scavenger will not ignore it. This is necessary because the -O0 path does not use LiveIntervalAnalysis; it treats implicit_def just like any other def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74601 91177308-0d34-0410-b5e6-96231b3b80d8
The register allocator, when it allocates a register to a virtual register defined by an implicit_def, can allocate any physical register without worrying about overlapping live ranges. It should mark all operands of said virtual register so later passes will do the right thing.
This is not the best solution, but it should be a lot less fragile than having the scavenger try to track what is defined by implicit_def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74518 91177308-0d34-0410-b5e6-96231b3b80d8
After much back and forth, I decided to deviate from the ARM design and split LDR into 4 instructions (r + imm12, r + imm8, r + r << imm12, constantpool). The advantages are that 1) it follows the latest ARM technical manual, and 2) it makes it easier to reduce the width of the instruction later. The downside is that this creates more inconsistency between the two sub-targets. We should split the ARM LDR instruction in a similar fashion later. I've added a README entry for this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74420 91177308-0d34-0410-b5e6-96231b3b80d8
while experimenting. I'm reasonably sure this is correct, but please
tell me if these instructions have some strange property which makes this
change unsafe.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73746 91177308-0d34-0410-b5e6-96231b3b80d8
(this is the case when we have a Thumb vararg function with a single
callee-saved register, which is handled separately).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73529 91177308-0d34-0410-b5e6-96231b3b80d8
TurnCopyIntoImpDef turns a copy into an implicit_def and removes the val# defined by it. This causes a scavenger assertion later if the def reaches other blocks. Disable the transformation if the value's live interval extends beyond its def block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73478 91177308-0d34-0410-b5e6-96231b3b80d8
- Change the register allocation hint to a pair of unsigned integers. The hint type is either zero (meaning: prefer the register specified as the second part of the pair) or entirely target dependent.
- Allow targets to specify alternative register allocation orders based on allocation hint.
Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register allocation. It has to be done this way to shorten the live intervals of the base and offset registers, e.g.
v1025 = LDR v1024, 0
v1026 = LDR v1024, 4
=>
v1025,v1026 = LDRD v1024, 0
If this transformation isn't done before allocation, v1024 will overlap v1025, which makes it more difficult to allocate a register pair.
- Even with the register allocation hint, it may not be possible to get the desired allocation. In that case, the post-allocation load / store multiple pass must fix the ldrd / strd instructions. They can either become ldm / stm instructions or be turned back into a pair of ldr / str instructions.
This is work in progress, not yet enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73381 91177308-0d34-0410-b5e6-96231b3b80d8
consecutive addresses together. This makes it easier for the post-allocation pass
to form ldm / stm.
This is step 1. We are still missing a lot of ldm / stm opportunities because
registers are not allocated in the desired order. More enhancements
coming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73291 91177308-0d34-0410-b5e6-96231b3b80d8
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
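As a minimal sketch (assuming present-day header paths), a front-end emits the new floating-point opcodes through the IRBuilder:

#include "llvm/IR/IRBuilder.h"

// Builds a*x + c with the dedicated fmul / fadd opcodes.
llvm::Value *emitAxPlusC(llvm::IRBuilder<> &B, llvm::Value *A, llvm::Value *X,
                         llvm::Value *C) {
  return B.CreateFAdd(B.CreateFMul(A, X), C);
}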
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72897 91177308-0d34-0410-b5e6-96231b3b80d8
after finding the (unique) layout predecessor. Sometimes a block may be listed
more than once, and processing it more than once in this loop can lead to
inconsistent values for FtTBB/FtFBB, since the AnalyzeBranch method does not
clear these values. There's no point in continuing the loop regardless.
The testcase for this is reduced from the 2003-05-02-DependentPHI SingleSource
test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71536 91177308-0d34-0410-b5e6-96231b3b80d8
scavenger gets confused about register liveness if it doesn't see them.
I'm not thrilled with this solution, but it only comes up when there are dead
copies in the code, which is something that hopefully doesn't happen much.
Here is what happens in pr4100: As shown in the following excerpt from the
debug output of llc, the source of a move gets reloaded from the stack,
inserting a new load instruction before the move. Since that source operand
is a kill, the physical register is free to be reused for the destination
of the move. The move ends up being a no-op, copying R3 to R3, so it is
deleted. But, it leaves behind the load to reload %reg1028 into R3, and
that load is not updated to show that its destination operand (R3) is dead.
The scavenger gets confused by that load because it thinks that R3 is live.
Starting RegAlloc of: %reg1025<def,dead> = MOVr %reg1028<kill>, 14, %reg0, %reg0
Regs have values:
Reloading %reg1028 into R3
Last use of R3[%reg1028], removing it from live set
Assigning R3 to %reg1025
Register R3 [%reg1025] is never used, removing it from live set
Alternative solutions might be either marking the load as dead, or zapping
the load along with the no-op copy. I couldn't see an easy way to do
either of those, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71196 91177308-0d34-0410-b5e6-96231b3b80d8
of returning a list of pointers to Values that are deleted. This was
unsafe, because the pointers in the list are, by nature of what
RecursivelyDeleteDeadInstructions does, always dangling. Replace this
with a simple callback mechanism. This may eventually be removed if
all clients can reasonably be expected to use CallbackVH.
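A minimal sketch of the callback idea, with a hypothetical signature rather than the actual LLVM API: the client sees each pointer while it is still valid instead of receiving a list of dangling pointers afterwards.

#include <functional>
#include <vector>

template <typename ValueT>
void deleteDeadValues(std::vector<ValueT *> &Dead,
                      const std::function<void(ValueT *)> &AboutToDelete) {
  while (!Dead.empty()) {
    ValueT *V = Dead.back();
    Dead.pop_back();
    AboutToDelete(V);  // notify before destruction
    delete V;
  }
}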
Use this to factor out the dead-phi-cycle-elimination code from LSR into a
utility function, and generalize it to use the
RecursivelyDeleteTriviallyDeadInstructions utility function.
This makes LSR more aggressive about eliminating dead PHI cycles;
adjust tests to either be less trivial or to simply expect fewer
instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70636 91177308-0d34-0410-b5e6-96231b3b80d8
have pointer types, though in contrast to C pointer types, SCEV
addition is never implicitly scaled. This not only eliminates the
need for special code like IndVars' EliminatePointerRecurrence
and LSR's own GEP expansion code, it also does a better job because
it lets the normal optimizations handle pointer expressions just
like integer expressions.
Also, since LLVM IR GEPs can't directly index into multi-dimensional
VLAs, moving the GEP analysis out of client code and into the SCEV
framework makes it easier for clients to handle multi-dimensional
VLAs the same way as other arrays.
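For example, this is the kind of pointer recurrence SCEV can now model directly; the pointer p is an add recurrence stepping by sizeof(float), i.e. {%p,+,4} in SCEV notation:

float sumArray(const float *p, int n) {
  float s = 0.0f;
  for (int i = 0; i < n; ++i)
    s += *p++;  // pointer induction variable
  return s;
}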
Some existing regression tests show improved optimization.
test/CodeGen/ARM/2007-03-13-InstrSched.ll in particular improved to
the point where if-conversion started kicking in; I turned it off
for this test to preserve the intent of the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69258 91177308-0d34-0410-b5e6-96231b3b80d8
register destinations that are tied to source operands. The
TargetInstrDescr::findTiedToSrcOperand method silently fails for inline
assembly. The existing MachineInstr::isRegReDefinedByTwoAddr was very
close to doing what is needed, so this revision makes a few changes to
that method and also renames it to isRegTiedToUseOperand (for consistency
with the very similar isRegTiedToDefOperand and because it handles both
two-address instructions and inline assembly with tied registers).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68714 91177308-0d34-0410-b5e6-96231b3b80d8
- When scavenging a register, in addition to the spill, insert a restore before the first use.
- Abort if a client is looking to scavenge a register while a previously scavenged register is still live.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59697 91177308-0d34-0410-b5e6-96231b3b80d8
If a local spiller optimization turns some instruction into an identity copy, it will be removed. If the output register happens to be dead (and the source is obviously killed), transfer the kill / dead information to the last use / def in the same MBB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51306 91177308-0d34-0410-b5e6-96231b3b80d8
2. The coalescer can now create an interesting situation where a register def can
reach itself without being killed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49246 91177308-0d34-0410-b5e6-96231b3b80d8
Change the keywords for the zext and sext parameter attributes to be
zeroext and signext so they don't conflict with the keywords for the
instructions of the same name. This gets around the ambiguity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40069 91177308-0d34-0410-b5e6-96231b3b80d8