No test case, unfortunately, as I couldn't find a target that fit all
the conditions needed to hit this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163075 91177308-0d34-0410-b5e6-96231b3b80d8
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163021 91177308-0d34-0410-b5e6-96231b3b80d8
I was too optimistic; inline asm can have tied operands that don't
follow the def order.
Fixes PR13742.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162998 91177308-0d34-0410-b5e6-96231b3b80d8
because it does not support CMOV of vectors. To implement this efficiently, we broadcast the condition bit and use a sequence of NAND-OR
to select between the two operands. This is the same sequence we use for targets that don't have vector BLENDs (like SSE2).
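For illustration only (not part of the patch), the kind of IR that reaches this path is a vector select with a scalar condition:
%r = select i1 %c, <4 x i32> %a, <4 x i32> %b
The condition bit is broadcast into a vector mask, and the result is computed roughly as (mask & %a) | (~mask & %b).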
rdar://12201387
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162926 91177308-0d34-0410-b5e6-96231b3b80d8
When a MachineInstr is constructed, its implicit operands are added
first, then the explicit operands are inserted before the implicits.
MCInstrDesc has operand flags, like early clobber and operand ties, that
apply to the explicit operands.
Don't look at those flags when the implicit operands are first added in
the explicit operands' positions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162910 91177308-0d34-0410-b5e6-96231b3b80d8
When there are multiple tied use-def pairs on an inline asm instruction,
the tied uses must appear in the same order as the defs.
It is possible to write an LLVM IR inline asm instruction that breaks
this constraint, but there is no reason for a front end to emit the
operands out of order.
The gnu inline asm syntax specifies tied operands as a single read/write
constraint "+r", so ouf of order operands are not possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162878 91177308-0d34-0410-b5e6-96231b3b80d8
For normal instructions, isTied() is set automatically by addOperand(),
based on MCInstrDesc, but inline asm has tied operands outside the
descriptor.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162869 91177308-0d34-0410-b5e6-96231b3b80d8
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162861 91177308-0d34-0410-b5e6-96231b3b80d8
It is technically allowed to move a normal load across a volatile load,
but probably not a good idea.
It is not allowed to move a load across an atomic load with
Ordering > Monotonic, and we model those with MOVolatile as well.
I recently removed the mayStore flag from atomic load instructions, so
they don't need a pseudo-opcode. This patch makes up for the difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162857 91177308-0d34-0410-b5e6-96231b3b80d8
The operands on an INLINEASM machine instruction are divided into groups
headed by immediate flag operands. Verify this structure.
Extract verifyTiedOperands(), and only call it for non-inlineasm
instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162849 91177308-0d34-0410-b5e6-96231b3b80d8
When running with -verify-machineinstrs, check that tied operands come
in matching use/def pairs, and that they are consistent with MCInstrDesc
when it applies.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162816 91177308-0d34-0410-b5e6-96231b3b80d8
The isTied bit is set automatically when a tied use is added and
MCInstrDesc indicates a tied operand. The tie is broken when one of the
tied operands is removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162814 91177308-0d34-0410-b5e6-96231b3b80d8
While in SSA form, a MachineInstr can have pairs of tied defs and uses.
The tied operands are used to represent read-modify-write operands that
must be assigned the same physical register.
Previously, tied operand pairs were computed from fixed MCInstrDesc
fields, or by using black magic on inline assembly instructions.
The isTied flag makes it possible to add tied operands to any
instruction while getting rid of (some of) the inlineasm magic.
Tied operands on normal instructions are needed to represent individual
predicated instructions in SSA form. An extra <tied,imp-use> operand is
required to represent the output value when the instruction predicate is
false.
Adding a predicate to:
%vreg0<def> = ADD %vreg1, %vreg2
will look like:
%vreg0<tied,def> = ADD %vreg1, %vreg2, pred:3, %vreg7<tied,imp-use>
The virtual register %vreg7 is the value given to %vreg0 when the
predicate is false. It will be assigned the same physreg as %vreg0.
This commit adds the isTied flag and sets it based on MCInstrDesc when
building an instruction. The flag is not used for anything yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162774 91177308-0d34-0410-b5e6-96231b3b80d8
Register operands are manipulated by a lot of target-independent code,
and it is not always possible to preserve target flags. That means it is
not safe to use target flags on register operands.
None of the targets in the tree are using register operand target flags.
External targets should be using immediate operands to annotate
instructions with operand modifiers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162770 91177308-0d34-0410-b5e6-96231b3b80d8
These extra flags are not required to properly order the atomic
load/store instructions. SelectionDAGBuilder chains atomics as if they
were volatile, and SelectionDAG::getAtomic() sets the isVolatile bit on
the memory operands of all atomic operations.
The volatile bit is enough to order atomic loads and stores during and
after SelectionDAG.
This means we set mayLoad on atomic_load, mayStore on atomic_store, and
mayLoad+mayStore on the remaining atomic read-modify-write operations.
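For reference, an illustrative mapping (not from the commit), using the typed-pointer IR syntax of the time:
%v = load atomic i32* %p seq_cst, align 4         ; mayLoad
store atomic i32 %v, i32* %q release, align 4     ; mayStore
%old = atomicrmw add i32* %p, i32 1 seq_cst       ; mayLoad + mayStore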
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162733 91177308-0d34-0410-b5e6-96231b3b80d8
In SelectionDAGLegalize::ExpandLegalINT_TO_FP, expand INT_TO_FP nodes without
using any f64 operations if f64 is not a legal type.
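As a hypothetical example (not the committed test), IR like this now expands without touching f64 on targets where f64 is illegal:
%f = uitofp i64 %x to float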
Patch by Stefan Kristiansson.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162728 91177308-0d34-0410-b5e6-96231b3b80d8
It is legal to have a register node as an explicit operand; it shouldn't
be counted as an implicit use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162591 91177308-0d34-0410-b5e6-96231b3b80d8
the case of multiple edges from one block to another.
A simple example is a switch statement with multiple values to the same
destination. The definition of an edge is modified from a pair of blocks to
a pair of PredBlock and an index into the successors.
Also set the weight correctly when building SelectionDAG from LLVM IR,
especially when converting a Switch.
IntegersSubsetMapping is updated to calculate the weight for each cluster.
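For illustration (not from the commit), a switch like this produces two Machine CFG edges to the same successor, which can now be weighted individually:
switch i32 %x, label %default [
  i32 1, label %same
  i32 2, label %same
]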
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162572 91177308-0d34-0410-b5e6-96231b3b80d8
output (we're emitting a specification already and the information
isn't changing) and we're not in old gdb compat mode.
Saves 1% on the debug information for a build of llvm.
Fixes rdar://11043421
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162493 91177308-0d34-0410-b5e6-96231b3b80d8
The logic for recomputing latency based on a ScheduleDAG edge was
shady. This bypasses the problem by requiring the client to provide
operand indices. This ensures consistent use of the machine model's
API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162420 91177308-0d34-0410-b5e6-96231b3b80d8
Based on CR feedback from r162301 and Craig Topper's refactoring in r162347
here are a few other places that could use the same API (& in one instance drop
a Function.h dependency).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162367 91177308-0d34-0410-b5e6-96231b3b80d8
SelectionDAG's 'init' has not been called when the SelectionDAGBuilder is
constructed (in SelectionDAGISel's constructor), so this was previously always
initialized with 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162333 91177308-0d34-0410-b5e6-96231b3b80d8
Even looking at the revision history I couldn't quite piece together why this
cast was ever written in the first place, but I assume it was because of some
change in the inheritance; perhaps this function was reimplemented in a
derived type & this caller was meant to get the base version (& it wasn't
virtual)?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162301 91177308-0d34-0410-b5e6-96231b3b80d8
The getSumForBlock function was quadratic in the number of successors
because getSuccWeight would perform a linear search for an already known
iterator.
This patch was originally committed as r161460, but reverted again
because of assertion failures. Now that duplicate Machine CFG edges have
been eliminated, this works properly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162233 91177308-0d34-0410-b5e6-96231b3b80d8
IR that hasn't been through SimplifyCFG can look like this:
br i1 %b, label %r, label %r
Make sure we don't create duplicate Machine CFG edges in this case.
Fix the machine code verifier to accept conditional branches with a
single CFG edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162230 91177308-0d34-0410-b5e6-96231b3b80d8
The DAGCombiner tries to optimise a BUILD_VECTOR by checking if it
consists purely of extract_vector_elt nodes from one or two source vectors. If
so, it either makes a concat_vectors node or a shufflevector node.
However, it doesn't check the element type width of the underlying
vector, so if you have this sequence:
Node0: v4i16 = ...
Node1: i32 = extract_vector_elt Node0
Node2: i32 = extract_vector_elt Node0
Node3: v16i8 = BUILD_VECTOR Node1, Node2, ...
It will attempt to create:
Node0: v4i16 = ...
NewNode1: v16i8 = concat_vectors Node0, ...
Where this is actually invalid because the element width is completely
different. This causes an assertion failure on DAG legalization stage.
Fix:
If the output element type of the BUILD_VECTOR differs from the input element
type, make the concat_vectors based on the input element type and then bitcast
it to the output vector type. The case described above will then be transformed to:
Node0: v4i16 = ...
NewNode1: v8i16 = concat_vectors Node0, ...
NewNode2: v16i8 = bitcast NewNode1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162195 91177308-0d34-0410-b5e6-96231b3b80d8
make it more consistent with its intended semantics.
The `linker_private_weak_def_auto' linkage type was meant to automatically hide
globals which never had their addresses taken. It has nothing to do with the
`linker_private' linkage type, which outputs the symbols with a `l' (ell) prefix
among other things.
The intended semantic is more like the `linkonce_odr' linkage type.
Change the name of the linkage type to `linkonce_odr_auto_hide', and change the
semantics accordingly so that it produces the correct output for the linker.
Note: The old linkage name `linker_private_weak_def_auto' will still parse but
is not a synonym for `linkonce_odr_auto_hide'. This should be removed in 4.0.
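A hypothetical example of the new spelling (assuming the keyword is used like any other linkage type on a definition):
@g = linkonce_odr_auto_hide global i32 0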
<rdar://problem/11754934>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162114 91177308-0d34-0410-b5e6-96231b3b80d8
Increment the MBB iterator at the top of the loop to properly handle the
current (and previous) instructions getting erased.
This fixes PR13625.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162099 91177308-0d34-0410-b5e6-96231b3b80d8
Select instructions pick one of two virtual registers based on a
condition, like x86 cmov. On targets like ARM that support predication,
selects can sometimes be eliminated by predicating the instruction
defining one of the operands.
Teach PeepholeOptimizer to recognize select instructions, and ask the
target to optimize them.
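For illustration (not from the commit), the pattern at the IR level is an ordinary select:
%r = select i1 %cond, i32 %a, i32 %b
On a target like ARM, the instruction defining %a or %b can sometimes be predicated on %cond instead, making the select itself unnecessary.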
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162059 91177308-0d34-0410-b5e6-96231b3b80d8
It never does anything when running 'make check', and it gets in the
way of updating live intervals in 2-addr.
The hook was originally added to help form IT blocks in Thumb2 code
before register allocation, but the pass ordering has changed since
then, and we run if-conversion after register allocation now.
When the MI scheduler is enabled, there will be no fewer than two
schedulers between 2-addr and Thumb2ITBlockPass, so this hook is
unlikely to help anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161794 91177308-0d34-0410-b5e6-96231b3b80d8
It is still possible to if-convert if the tail block has extra
predecessors, but the tail phis must be rewritten instead of being
removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161781 91177308-0d34-0410-b5e6-96231b3b80d8
Detect when there is not enough available ILP, so if-conversion can't
speculate instructions for free.
Compute the lengthening of the critical path when inserting a select
instruction that depends on the condition as well as both sides of the
if.
Reject conversions that would stretch the critical path by more than
half a mispredict penalty.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161713 91177308-0d34-0410-b5e6-96231b3b80d8
Trace::getResourceLength() computes the number of cycles required to
execute the trace when ignoring data dependencies. The number can be
compared to the critical path to estimate the trace ILP.
Trace::getPHIDepth() computes the data dependency depth of a PHI in a
trace successor that isn't necessarily part of the trace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161711 91177308-0d34-0410-b5e6-96231b3b80d8
When a trace ends with a back-edge, include PHIs in the loop header in
the height computations. This makes the critical path through a loop
more accurate by including the latencies of the last instructions in the
loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161688 91177308-0d34-0410-b5e6-96231b3b80d8
When replacing Old with New, it can happen that New is already a
successor. Add the old and new edge weights instead of creating a
duplicate edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161653 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it possible to speed up def_iterator by stopping at the first
use. This makes def_empty() and getUniqueVRegDef() much faster when
there are many uses.
In a +Asserts build, LiveVariables is 100x faster in one case because
getVRegDef() has an assertion that would scan to the end of a
def_iterator chain.
Spill weight calculation is significantly faster (300x in one case)
because isTriviallyReMaterializable() calls MRI->isConstantPhysReg(%RIP)
which calls def_empty(%RIP).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161634 91177308-0d34-0410-b5e6-96231b3b80d8
Use a more conventional doubly linked list where the Prev pointers form
a cycle. This means it is no longer necessary to adjust the Prev
pointers when reallocating the VRegInfo array.
The test changes are required because the register allocation hint is
using the use-list order to break ties.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161633 91177308-0d34-0410-b5e6-96231b3b80d8
Register MachineOperands are kept in linked lists accessible via MRI's
reg_iterator interfaces. The linked list management was handled partly
by MachineOperand methods, partly by MRI methods.
Move all of the list management into MRI, delete
MO::AddRegOperandToRegInfo() and MO::RemoveRegOperandFromRegInfo().
Be more explicit about handling the cases where an MRI pointer isn't
available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161632 91177308-0d34-0410-b5e6-96231b3b80d8
We filter out MachineLoop back-edges during the trace-building PO
traversals, but it is possible to have CFG cycles that aren't natural
loops, and MachineLoopInfo doesn't include such cycles.
Use a standard visited set to detect such CFG cycles, and completely
ignore them when picking traces.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161532 91177308-0d34-0410-b5e6-96231b3b80d8
We perform the following:
1> Use SUB instead of CMP for i8,i16,i32 and i64 in ISel lowering.
2> Modify MachineCSE to correctly handle implicit defs.
3> Convert SUB back to CMP if possible at peephole.
Removed pattern matching of (a>b) ? (a-b):0 and the like, since they are handled
by peephole now.
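For illustration (not part of the commit), the removed pattern corresponds to IR along these lines, assuming unsigned operands:
%cmp = icmp ugt i32 %a, %b
%sub = sub i32 %a, %b
%res = select i1 %cmp, i32 %sub, i32 0
The idea is that the flags produced by the SUB make a separate CMP unnecessary.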
rdar://11873276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161462 91177308-0d34-0410-b5e6-96231b3b80d8
The getSumForBlock function was quadratic in the number of successors
because getSuccWeight would perform a linear search for an already known
iterator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161460 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for TargetIndex operands during isel. The meaning of
these (index, offset, flags) operands is entirely defined by the target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161453 91177308-0d34-0410-b5e6-96231b3b80d8
A target index operand looks a lot like a constant pool reference, but
it is completely target-defined. It contains the 8-bit TargetFlags, a
32-bit index, and a 64-bit offset. It is preserved by all code generator
passes.
TargetIndex operands can be used to carry target-specific information in
cases where immediate operands won't suffice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161441 91177308-0d34-0410-b5e6-96231b3b80d8
Compare the critical paths of the two traces through an if-conversion
candidate. If the difference is larger than the branch prediction
penalty, reject the if-conversion. It would never pay.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161433 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, MBP essentially aligned every branch target it could. This
bloats code quite a bit, especially non-looping code which has no real
reason to prefer aligned branch targets so heavily.
As Andy said in review, it's still a bit odd to do this without a real
cost model, but this at least has much more plausible heuristics.
Fixes PR13265.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161409 91177308-0d34-0410-b5e6-96231b3b80d8
If the result of a common subexpression is used at all uses of the candidate
expression, CSE should not increase the live range of the common subexpression.
rdar://11393714 and rdar://11819721
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161396 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is mostly just refactoring a bunch of copy-and-pasted code, but
it also adds a check that the call instructions are readnone or readonly.
That check was already present for sin, cos, sqrt, log2, and exp2 calls, but
it was missing for the rest of the builtins being handled in this code.
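A hedged sketch of the kind of call the check applies to (floor is used here only as a stand-in for one of the remaining builtins):
declare double @floor(double) nounwind readnone

define double @f(double %x) {
  %r = call double @floor(double %x) readnone
  ret double %r
}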
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161282 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change intended, except replacing a DenseMap with a
SmallDenseMap which should behave identically.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161281 91177308-0d34-0410-b5e6-96231b3b80d8
This is far from complete, and only changes behavior when the
-early-live-intervals flag is passed to llc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161273 91177308-0d34-0410-b5e6-96231b3b80d8
This option runs LiveIntervals before TwoAddressInstructionPass which
will eventually learn to exploit and update the analysis.
Eventually, LiveIntervals will run before PHIElimination, and we can get
rid of LiveVariables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161270 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, the identity copy would survive through register allocation
before it was removed by the rewriter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161269 91177308-0d34-0410-b5e6-96231b3b80d8
The previous change caused fast isel to not attempt handling any calls to
builtin functions. That included things like "printf" and caused some
noticeable regressions in compile time. I wanted to avoid having fast isel
keep a separate list of functions that had to be kept in sync with what the
code in SelectionDAGBuilder.cpp was handling. I've resolved that here by
moving the list into TargetLibraryInfo. This is somewhat redundant in
SelectionDAGBuilder but it will ensure that we keep things consistent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161263 91177308-0d34-0410-b5e6-96231b3b80d8
I noticed that SelectionDAGBuilder::visitCall was missing a check for memcmp
in TargetLibraryInfo, so that it would use custom code for memcmp calls even
with -fno-builtin. I also had to add a new -disable-simplify-libcalls option
to llc so that I could write a test for this.
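For illustration (not the committed test case), the affected calls look like:
declare i32 @memcmp(i8*, i8*, i64)

define i32 @cmp16(i8* %p, i8* %q) {
  %r = call i32 @memcmp(i8* %p, i8* %q, i64 16)
  ret i32 %r
}
With -disable-simplify-libcalls, SelectionDAGBuilder now leaves this as a plain call instead of emitting custom memcmp code.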
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161262 91177308-0d34-0410-b5e6-96231b3b80d8
The 'unused' state of a value number can be represented as an invalid
def SlotIndex. This also exposed code that shouldn't have been looking
at unused value VNInfos.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161258 91177308-0d34-0410-b5e6-96231b3b80d8
The only real user of the flag was removeCopyByCommutingDef(), and it
has been switched to LiveIntervals::hasPHIKill().
All the code changed by this patch was only concerned with computing and
propagating the flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161255 91177308-0d34-0410-b5e6-96231b3b80d8
The VNInfo::HAS_PHI_KILL flag is only half supported. We precompute it in
LiveIntervalAnalysis, but it isn't properly updated by live range
splitting and functions like shrinkToUses().
It is only used in one place: RegisterCoalescer::removeCopyByCommutingDef().
This patch changes that function to use a new LiveIntervals::hasPHIKill()
function that computes the flag for a given value number.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161254 91177308-0d34-0410-b5e6-96231b3b80d8
This functionality was added before we started running
DeadMachineInstructionElim on all targets. It serves no purpose now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161241 91177308-0d34-0410-b5e6-96231b3b80d8
Fast isel doesn't currently have support for translating builtin function
calls to target instructions. For embedded environments where the library
functions are not available, this is a matter of correctness and not
just optimization. Most of this patch is just arranging to make the
TargetLibraryInfo available in fast isel. <rdar://problem/12008746>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161232 91177308-0d34-0410-b5e6-96231b3b80d8
Add more comments and use early returns to reduce nesting in isLoadFoldable.
Also disable folding for V_SET0 to avoid introducing a const pool entry and
a const pool load.
rdar://10554090 and rdar://11873276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161207 91177308-0d34-0410-b5e6-96231b3b80d8
Whenever both instruction depths and instruction heights are known in a
block, it is possible to compute the length of the critical path as
max(depth+height) over the instructions in the block.
The stored live-in lists make it possible to accurately compute the
length of a critical path that bypasses the current (small) block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161197 91177308-0d34-0410-b5e6-96231b3b80d8
Don't cause regunit intervals to be computed just to verify them. Only
check the already cached intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161183 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRangeEdit::eliminateDeadDefs() can delete a dead instruction that
reads unreserved physregs. This would leave the corresponding regunit
live interval dangling because we don't have shrinkToUses() for physical
registers.
Fix this problem by turning the instruction into a KILL instead of
deleting it. This happens in a landing pad in
test/CodeGen/X86/2012-05-19-CoalescerCrash.ll:
%vreg27<def,dead> = COPY %EDX<kill>; GR32:%vreg27
becomes:
KILL %EDX<kill>
An upcoming fix to the machine verifier will catch problems like this by
verifying regunit live intervals.
This fixes PR13498. I am not including the test case from the PR since
we already have one exposing the problem once the verifier is fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161182 91177308-0d34-0410-b5e6-96231b3b80d8
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.
This patch is a rework of r160919 and was tested on clang self-host on my local
machine.
rdar://10554090 and rdar://11873276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161152 91177308-0d34-0410-b5e6-96231b3b80d8
The height of an instruction is the minimum number of cycles from when the
instruction is issued to the end of the trace. Heights are computed for
all instructions in and below the trace center block.
The method for computing heights is different from the depth
computation. As we visit instructions in the trace bottom-up, heights of
used instructions are pushed upwards. This way, we avoid scanning long
use lists, looking for uses in the current trace.
At each basic block boundary, a list of live-in registers and their
minimum heights is saved in the trace block info. These live-in lists
are used when restarting depth computations on a trace that
converges with an already computed trace. They will also be used to
accurately compute the critical path length.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161138 91177308-0d34-0410-b5e6-96231b3b80d8
Assuming infinite issue width, compute the earliest each instruction in
the trace can issue, when considering the latency of data dependencies.
The issue cycle is recorded as a 'depth' from the beginning of the trace.
This is half the computation required to find the length of the critical
path through the trace. Heights are next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161074 91177308-0d34-0410-b5e6-96231b3b80d8
One motivating example is to sink an instruction from a basic block which has
two successors: one outside the loop, the other inside the loop. We should try
to sink the instruction outside the loop.
rdar://11980766
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161062 91177308-0d34-0410-b5e6-96231b3b80d8
We are extending live ranges, so kill flags are not accurate. They
aren't needed until they are recomputed after RA anyway.
<rdar://problem/11950722>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161023 91177308-0d34-0410-b5e6-96231b3b80d8
We branch to the successor with higher edge weight first.
Convert from
je LBB4_8 --> to outer loop
jmp LBB4_14 --> to inner loop
to
jne LBB4_14
jmp LBB4_8
PR12750
rdar://11393714
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161018 91177308-0d34-0410-b5e6-96231b3b80d8
This lets traces include the final iteration of a nested loop above the
center block, and the first iteration of a nested loop below the center
block.
We still don't allow traces to contain backedges, and traces are
truncated where they would leave a loop, as seen from the center block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161003 91177308-0d34-0410-b5e6-96231b3b80d8
When computing a trace, all the candidates for pred/succ must have been
visited. Filter out back-edges first, though. The PO traversal ignores
them.
Thanks to Andy for spotting this in review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160995 91177308-0d34-0410-b5e6-96231b3b80d8
This is a cleaned up version of the isFree() function in
MachineTraceMetrics.cpp.
Transient instructions are very unlikely to produce any code in the
final output. Either because they get eliminated by RegisterCoalescing,
or because they are pseudo-instructions like labels and debug values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160977 91177308-0d34-0410-b5e6-96231b3b80d8
The MachineTraceMetrics analysis must be invalidated before modifying
the CFG. This will catch some of the violations of that rule.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160969 91177308-0d34-0410-b5e6-96231b3b80d8
A->isPredecessor(B) is the same as B->isSuccessor(A), but it can
tolerate a B that is null or dangling. This shouldn't happen normally,
but it is useful for verification code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160968 91177308-0d34-0410-b5e6-96231b3b80d8
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.
rdar://10554090 and rdar://11873276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160919 91177308-0d34-0410-b5e6-96231b3b80d8
A value number is a PHI def if and only if it begins at a block
boundary. This can be derived from the def slot, a separate flag is not
necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160893 91177308-0d34-0410-b5e6-96231b3b80d8
This option replaces the existing live interval computation with one
based on LiveRangeCalc.cpp. The new algorithm does not depend on
LiveVariables, and it can be run at any time, before or after leaving
SSA form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160892 91177308-0d34-0410-b5e6-96231b3b80d8
This is still a work in progress.
Out-of-order CPUs usually execute instructions from multiple basic
blocks simultaneously, so it is necessary to look at longer traces when
estimating the performance effects of code transformations.
The MachineTraceMetrics analysis will pick a typical trace through a
given basic block and provide performance metrics for the trace. Metrics
will include:
- Instruction count through the trace.
- Issue count per functional unit.
- Critical path length, and per-instruction 'slack'.
These metrics can be used to determine the performance limiting factor
when executing the trace, and how it will be affected by a code
transformation.
Initially, this will be used by the early if-conversion pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160796 91177308-0d34-0410-b5e6-96231b3b80d8
It is redundant; RegisterCoalescer will do the remat if it can't eliminate
the copy. Collected instruction counts before and after this. A few extra
instructions are generated due to spilling but it is normal to see these kinds
of changes with almost any small codegen change, according to Jakob.
This also fixed rdar://11830760 where xor is expected instead of movi0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160749 91177308-0d34-0410-b5e6-96231b3b80d8
When a live range splits into multiple connected components, we would
arbitrarily assign <undef> uses to component 0. This is wrong when the
use is tied to a def that gets assigned to a different component:
%vreg69<def> = ADD8ri %vreg68<undef>, 1
The use and def must get the same virtual register.
Fix this by assigning <undef> uses to the same component as the value
defined by the instruction, if any:
%vreg69<def> = ADD8ri %vreg69<undef>, 1
This fixes PR13402. The PR has a test case which I am not including
because it is unlikely to keep exposing this behavior in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160739 91177308-0d34-0410-b5e6-96231b3b80d8
that do not support it (X86 does not lower select_cc).
PR: 13428
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160619 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRangeEdit::foldAsLoad() can eliminate a register by folding a load
into its only use. Only do that when the load is safe to move, and it
won't extend any live ranges.
This fixes PR13414.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160575 91177308-0d34-0410-b5e6-96231b3b80d8
PHIElimination splits critical edges when it predicts it can resolve
interference and eliminate copies. It doesn't split the edge if the
interference wouldn't be resolved anyway because the phi-use register is
live in the critical edge.
Teach PHIElimination to split loop exiting edges with interference, even
if it wouldn't resolve the interference. This moves the necessary
copies out of the loop, which is still an improvement over injecting the
copies into the loop.
The test case demonstrates the improvement. Before:
LBB0_1:
cmpb $0, (%rdx)
leaq 1(%rdx), %rdx
movl %esi, %eax
je LBB0_1
After:
LBB0_1:
cmpb $0, (%rdx)
leaq 1(%rdx), %rdx
je LBB0_1
movl %esi, %eax
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160571 91177308-0d34-0410-b5e6-96231b3b80d8
LiveIntervals due to the two-addr pass generating bogus MI code.
The crux of the issue was a loop nesting problem. The intent of the code
which attempts to transform instructions before converting them to
two-addr form is to defer and reprocess any transformed instructions as
the second processing is likely to have more opportunities to coalesce
copies, etc. Unfortunately, there was one section of processing that was
not deferred -- the INSERT_SUBREG rewriting. Due to quirks of how this
rewriting proceeded, not only did it occur early, it removed the bits of
information needed for the deferred processing to correctly generate the
necessary two address form (specifically inserting a copy), but didn't
trigger any immediate assertions and produced what appeared to be
already valid two-address form code. Thus, the assertion only fired much
later in the pipeline.
The fix is to hoist the transformation logic up a layer to where it can
more firmly defer all further processing, and to teach the normal
processing to handle an edge case previously handled as part of the
transformation logic. This edge case (already matched tied register
operands) needs to *not* defer any steps.
As has been brought up repeatedly in the process: wow does this code
need refactoring. I *may* squeeze in some time to at least bring sanity
to this loop... but wow... =]
Thanks to Jakob for helpful hints on the way here, and the review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160443 91177308-0d34-0410-b5e6-96231b3b80d8
When truncating the result of a vector operation that is split, we need
to use the result of the split vector and not re-split the dead node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160357 91177308-0d34-0410-b5e6-96231b3b80d8
large immediates. Add dag combine logic to recover in case the large
immediate doesn't fit in the cmp immediate operand field.
int foo(unsigned long l) {
return (l>> 47) == 1;
}
we produce
%shr.mask = and i64 %l, -140737488355328
%cmp = icmp eq i64 %shr.mask, 140737488355328
%conv = zext i1 %cmp to i32
ret i32 %conv
which codegens to
movq $0xffff800000000000,%rax
andq %rdi,%rax
movq $0x0000800000000000,%rcx
cmpq %rcx,%rax
sete %al
movzbl %al,%eax
ret
TargetLowering::SimplifySetCC would transform
(X & -256) == 256 -> (X >> 8) == 1
if the immediate fails the isLegalICmpImmediate() test. For x86,
that's immediates which are not a signed 32-bit immediate.
Based on a patch by Eli Friedman.
PR10328
rdar://9758774
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160346 91177308-0d34-0410-b5e6-96231b3b80d8
In the added testcase the constant 55 was behind an AssertZext of type i1, and ComputeDemandedBits
reported that some of the bits were both known to be one and known to be zero.
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160305 91177308-0d34-0410-b5e6-96231b3b80d8
Add a micro-optimization to getNode of CONCAT_VECTORS when both operands are undefs.
Can't find a testcase for this because VECTOR_SHUFFLE already handles undef operands, but Duncan suggested that we add this.
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160229 91177308-0d34-0410-b5e6-96231b3b80d8
The notable fix is to look at any dependencies attached to the kill
instruction (or other instructions between MI and the kill) where the
dependencies are specific to the register in question.
The old code implicitly handled this by rejecting the transform if *any*
other uses were found within the block, but after the start point. The
new code directly finds the kill, and has to re-use the existing
dependency scan to check for non-kill uses.
This was caught by self-host, but I found the bug via inspection and use
of absurd assert scaffolding to compute the kills in two ways and
compare them. So I have no useful testcase for this other than
"bootstrap". I'd work harder to reduce a test case if this particular
code were likely to live for a long time.
Thanks to Benjamin Kramer for reviewing the fix itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160228 91177308-0d34-0410-b5e6-96231b3b80d8
Catch uses of undefined physregs that haven't been added to basic block
live-in lists. Run the verifier to pinpoint the problem.
Also run the verifier when a virtual register use is not jointly
dominated by defs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160207 91177308-0d34-0410-b5e6-96231b3b80d8
removes the largest scaling problem in the test cases from PR13225 when
ASan is switched to insert basic blocks in the natural CFG order.
It may also solve some scaling problems for more normal code with large
numbers of basic blocks and variables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160194 91177308-0d34-0410-b5e6-96231b3b80d8
When dumping the DAG for a fatal 'Cannot select' back-end error, also
provide the name of the function the construct is in. Useful when dealing
with large testcases, as the next step is to llvm-extract the function
in question to get a small(er) testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160152 91177308-0d34-0410-b5e6-96231b3b80d8
the input vector, it can be bigger (this is helpful for powerpc where <2 x i16>
is a legal vector type but i16 isn't a legal type, IIRC). However this wasn't
being taken into account by ExpandRes_EXTRACT_VECTOR_ELT, causing PR13220.
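An illustrative case (not the PR13220 testcase), where the i16 element is extracted into a wider integer at the DAG level because i16 is not legal on the target:
%e = extractelement <2 x i16> %v, i32 1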
Lightly tweaked version of a patch by Michael Liao.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160116 91177308-0d34-0410-b5e6-96231b3b80d8
r1025 = s/zext r1024, 4
r1026 = extract_subreg r1025, 4
to a copy:
r1026 = copy r1024
This is correct. However, it uses TII->isCoalescableExtInstr(), which can return
true for instructions that essentially do a sext_in_reg, so this can end up
with an illegal copy where the source and destination register classes do not
match. Add a check to avoid it. Sorry, no test case possible at this time.
rdar://11849816
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160059 91177308-0d34-0410-b5e6-96231b3b80d8