It is OK for an alias live range to overlap if there is a copy to or from the
physical register. CoalescerPair can work out if the copy is coalescable
independently of the alias.
This means that we can join with the actual destination interval instead of
using the getOrigDstReg() hack. It is no longer necessary to merge clobber
ranges into subregisters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107695 91177308-0d34-0410-b5e6-96231b3b80d8
the example in the testcase, we now generate:
_test1: ## @test1
movss 4(%esp), %xmm0
addss 8(%esp), %xmm0
movl 12(%esp), %eax
movss %xmm0, (%eax)
ret
instead of:
_test1: ## @test1
subl $20, %esp
movl 24(%esp), %eax
movq %mm0, (%esp)
movq %mm0, 8(%esp)
movss (%esp), %xmm0
addss 12(%esp), %xmm0
movss %xmm0, (%eax)
addl $20, %esp
ret
v2f32 support did not work reliably because most of the X86
backend didn't know it was legal. It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode. If ABI compatibility is important on this
GCC-extended-vector type for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32. However,
we generally don't try very hard to be abi compatible on gcc
extended vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107601 91177308-0d34-0410-b5e6-96231b3b80d8
v2f32 as legal in 32-bit mode. It is just as terrible there,
but I just care about x86-64 and no one claims it is valuable
in 64-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107600 91177308-0d34-0410-b5e6-96231b3b80d8
- X86 unfolding should check if the instruction being unfolded has memoperands.
If there are no memoperands, then it must assume conservative alignment. If this
would introduce an expensive sse unaligned load / store, then unfoldMemoryOperand
etc. should not unfold the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107509 91177308-0d34-0410-b5e6-96231b3b80d8
PrologEpilog code, and use it to determine whether
the asm forces stack alignment or not. gcc consistently
does not do this for GCC-style asms; Apple gcc inconsistently
sometimes does it for asm blocks. There is no
convenient place to put a bit in either the SDNode or
the MachineInstr form, so I've added an extra operand
to each; unlovely, but it does allow for expansion for
more bits, should we need it. PR 5125. Some
existing testcases are affected.
The operand lists of the SDNode and MachineInstr forms
are indexed with awesome mnemonics, like "2"; I may
fix this someday, but not now. I'm not making it any
worse. If anyone is inspired I think you can find all
the right places from this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107506 91177308-0d34-0410-b5e6-96231b3b80d8
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.
For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
Currently only supported on Darwin platforms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107433 91177308-0d34-0410-b5e6-96231b3b80d8
available in a register. This is pretty primitive, but it reduces the
number of instructions in common testcases by 4%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107380 91177308-0d34-0410-b5e6-96231b3b80d8
have to be registers, per gcc documentation. This affects
the logic for determining what "g" should lower to. PR 7393.
A couple of existing testcases are affected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107079 91177308-0d34-0410-b5e6-96231b3b80d8
When an instruction has tied operands and physreg defines, we must take extra
care that the tied operands conflict with neither physreg defs nor uses.
This special treatment is applied to inline asm and to instructions with tied operands
/ early clobbers and physreg defines.
This fixes PR7509.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107043 91177308-0d34-0410-b5e6-96231b3b80d8
CopyFromReg nodes for aliasing registers (AX and AL). This confuses the fast
register allocator.
Instead of CopyFromReg(AL), use ExtractSubReg(CopyFromReg(AX), sub_8bit).
This fixes PR7312.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106934 91177308-0d34-0410-b5e6-96231b3b80d8
for an "i" constraint should get lowered; PR 6309. While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106893 91177308-0d34-0410-b5e6-96231b3b80d8
address requires a register or secondary load to compute
(most PIC modes). This improves "g" constraint handling. 8015842.
The test from 2007 is attempting to test the fix for PR1761,
but since -relocation-model=static doesn't work on Darwin
x86-64, it was not testing what it was supposed to be testing
and was passing erroneously. Fixed to use Linux x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106779 91177308-0d34-0410-b5e6-96231b3b80d8
when the condition is constant. This optimization shouldn't be
necessary, because codegen shouldn't be able to find dead control
paths that the IR-level optimizer can't find. And it's undesirable,
because it encourages bugpoint to leave "br i1 false" branches
in its output. And it wasn't updating the CFG.
I updated all the tests I could, but some tests are too reduced
and I wasn't able to meaningfully preserve them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106748 91177308-0d34-0410-b5e6-96231b3b80d8
into the same node, but with different non-memory operands, we need to replace
the memory operands after it's finished morphing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106643 91177308-0d34-0410-b5e6-96231b3b80d8
Measurements show that it does not speed up coalescing, so there is no reason
to keep the added complexity around.
Also clean out some unused methods and static functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106548 91177308-0d34-0410-b5e6-96231b3b80d8
opportunities. For example, this lets it emit this:
movq (%rax), %rcx
addq %rdx, %rcx
instead of this:
movq %rdx, %rcx
addq (%rax), %rcx
in the case where %rdx has subsequent uses. It's the same number
of instructions, and usually the same encoding size on x86, but
it appears faster, and in general, it may allow better scheduling
for the load.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106493 91177308-0d34-0410-b5e6-96231b3b80d8
use sharing map. The reconcileNewOffset logic already forces a
separate use if the kinds differ, so incorporating the kind in the
key means we can track more sharing opportunities.
More sharing means fewer total uses to track, which means smaller
problem sizes, which means the conservative throttles don't kick
in as often.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106396 91177308-0d34-0410-b5e6-96231b3b80d8
will conflict with another live range. The place which creates this scenario is
the code in X86 that lowers a select instruction by splitting the MBBs. This
eliminates the need to check from the bottom up in an MBB for live pregs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106066 91177308-0d34-0410-b5e6-96231b3b80d8
Early clobbers defining a virtual register were first allocated to a physreg and
then processed as a physreg EC, spilling the virtreg.
This fixes PR7382.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105998 91177308-0d34-0410-b5e6-96231b3b80d8
symbols as declarations in the X86 backend. This would manifest
on darwin x86-32 as errors like this with -fvisibility=hidden:
symbol '__ZNSbIcED1Ev' can not be undefined in a subtraction expression
This fixes PR7353.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105954 91177308-0d34-0410-b5e6-96231b3b80d8
This is a bit of a hack to make inline asm look more like call instructions.
It would be better to produce correct dead flags during isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105749 91177308-0d34-0410-b5e6-96231b3b80d8
there could be multiple subexpressions within a single expansion which
require insert point adjustment. This fixes PR7306.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105510 91177308-0d34-0410-b5e6-96231b3b80d8
replace an OpA with a widened OpB, it is possible to get new uses of OpA due to CSE
when recursively updating nodes. Since OpA has been processed, the new uses are
not examined again. The patch checks if this occurred and, if it did, updates the
new uses of OpA to use OpB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105453 91177308-0d34-0410-b5e6-96231b3b80d8
registers it defines then interfere with an existing preg live range.
For instance, if we had something like these machine instructions:
BB#0
... = imul ... EFLAGS<imp-def,dead>
test ..., EFLAGS<imp-def>
jcc BB#2 EFLAGS<imp-use>
BB#1
... ; fallthrough to BB#2
BB#2
... ; No code that defines EFLAGS
jcc ... EFLAGS<imp-use>
Machine sink will come along, see that imul implicitly defines EFLAGS, but
because it's "dead", it assumes that it can move imul into BB#2. But when it
does, imul's "dead" imp-def of EFLAGS is raised from the dead (a zombie) and
messes up the condition code for the jump (and pretty much anything else which
relies upon it being correct).
The solution is to know which pregs are live going into a basic block. However,
that information isn't calculated at this point. Nor does the LiveVariables pass
take into account non-allocatable physical registers. In lieu of this, we do a
*very* conservative pass through the basic block to determine if a preg is live
coming out of it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105387 91177308-0d34-0410-b5e6-96231b3b80d8
that are too large. This causes the freebsd bootloader to be too
large apparently.
It's unclear if this should be an -Os or -Oz thing. Thoughts welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105228 91177308-0d34-0410-b5e6-96231b3b80d8
optimization level.
This only really affects llc for now because both the llvm-gcc and clang front
ends override the default register allocator. I intend to remove that code later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104904 91177308-0d34-0410-b5e6-96231b3b80d8
Mon Ping provided; unfortunately bugpoint failed to
reduce it, but I think it's important to have a test for
this in the suite. 8023512.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104624 91177308-0d34-0410-b5e6-96231b3b80d8
pass after isel instead of being interlaced with it, we can
trust that all the code for a function has been isel'd before
it is run.
The practical impact of this is that we can scan for machine
instr phis instead of doing a fuzzy match on the LLVM BB for
phi nodes. Doing the fuzzy match required knowing when isel
would produce an fp reg stack phi which was gross. It was
also wrong in cases where select got lowered to a branch
tree because cmovs aren't available (PR6828).
Just do the scan on machine phis which is simpler, faster
and more correct. This fixes PR6828.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104333 91177308-0d34-0410-b5e6-96231b3b80d8
operand on the left, the interesting operand is on the right. This
fixes a bug where LSR was failing to recognize ICmpZero uses,
which led it to be unable to reverse the induction variable in the
attached testcase.
Delete test/CodeGen/X86/stack-color-with-reg-2.ll, because its test
is extremely fragile and hard to meaningfully update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104262 91177308-0d34-0410-b5e6-96231b3b80d8
<1xi64> -> i64 to work in MMX registers on hosts where -no-sse
is the default (not mine). The right thing is
to accept this and make i64->f64 conversions go through memory,
but I don't have time right now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103914 91177308-0d34-0410-b5e6-96231b3b80d8
(This worked as of about 6 months ago and I didn't track down
exactly what broke it; I think this fix is appropriate.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103911 91177308-0d34-0410-b5e6-96231b3b80d8
Sorry for the big change. The path leading up to this patch had some TableGen
changes that I didn't want to commit before I knew they were useful. They
weren't, and this version does not need them.
The fast register allocator now does no liveness calculations. Instead it relies
on kill flags provided by isel. (Currently those kill flags are also ignored due
to isel bugs). The allocation algorithm is supposed to work with any subset of
valid kill flags. More kill flags simply means fewer spills inserted.
Registers are allocated from a working set that contains no aliases. That means
most allocations can be done directly without expensive alias checks. When the
working set runs out of registers we do the full alias check to find new free
registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103488 91177308-0d34-0410-b5e6-96231b3b80d8
LSRUse's Regs set after all pruning is done, rather than trying
to do it on the fly, which can produce an incomplete result.
This fixes a case where heuristic pruning was stripping all
formulae from a use, which led the solver to enter an infinite
loop.
Also, add a few asserts to diagnose this kind of situation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103328 91177308-0d34-0410-b5e6-96231b3b80d8
getConstantFP to accept the two supported long double
target types. This was not the original intent, but
there are other places that assume this works and it's
easy enough to do.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103299 91177308-0d34-0410-b5e6-96231b3b80d8
Users can write broken code that emits the same label twice with asm renaming;
detect this and emit a fatal backend error instead of aborting.
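For illustration only (the symbol name is invented, not taken from this change): one way
a user can hit this is by asm-renaming two globals to the same assembler label, which
makes the backend try to define that label twice.
int first_global  asm("duplicated_label");
int second_global asm("duplicated_label");
With this change such input produces a fatal backend error instead of an abort.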
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103140 91177308-0d34-0410-b5e6-96231b3b80d8
beneficial cases. See the changes in test/CodeGen/X86/tail-opts.ll and
test/CodeGen/ARM/ifcvt2.ll for details.
The fix is to change HashEndOfMBB to hash at most one instruction,
instead of trying to apply heuristics about when it will be profitable to
consider more than one instruction. The regular tail-merging heuristics
are already prepared to handle the same cases, and they're more precise.
Also, make test/CodeGen/ARM/ifcvt5.ll and
test/CodeGen/Thumb2/thumb2-branch.ll slightly more complex so that they
continue to test what they're intended to test.
And, this eliminates the problem in
test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll, the testcase from
PR5204. Update it accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102907 91177308-0d34-0410-b5e6-96231b3b80d8
indexes could be of a different value type. Or not even using the same SDNode
for the constant (weird, I know). Compare the actual values instead of the
pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102791 91177308-0d34-0410-b5e6-96231b3b80d8
call that might throw. The landing pad assumes that all registers are in stack
slots.
We used to spill those dirty CSRs after the call, and the stack slots would be
wrong when arriving at the landing pad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102770 91177308-0d34-0410-b5e6-96231b3b80d8
of different register classes. e.g.
%reg1048:3<def> = EXTRACT_SUBREG %RAX<kill>, 3
Where %reg1048 is a GR32 register. This is not impossible to handle, but it is
pretty hard and very rare.
This should unbreak the dragonegg builder.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102672 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8
otherwise labels get incorrectly merged. We handled this by emitting a
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes. Handle this by emitting a noop. This
is more gross than it should be because arm/ppc are not fully mc'ized yet.
This fixes rdar://7908505
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102400 91177308-0d34-0410-b5e6-96231b3b80d8
doesn't dominate the header is needed, don't check whether the increment
expression has computable loop evolution. While the operands of an
addrec are required to be loop-invariant, they're not required to
dominate any part of the loop. This fixes PR6914.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102389 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of globals with a specified alignment, we fix
common variables to obey their alignment. Add a comment
explaining why this behavior is important.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102365 91177308-0d34-0410-b5e6-96231b3b80d8
alignment to match what's used in clang and GCC for __alignof, rather
than trying to guess what Legalize is going to be doing.
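As a rough illustration (my example; the specific values are typical for x86-64 and not
something this change guarantees), __alignof in C reports the ABI alignment that clang
and GCC agree on, which is the number now being matched:
#include <stdio.h>
int main(void) {
  /* Commonly prints "8 16" on x86-64: the ABI alignments of double and
     long double, independent of how Legalize happens to handle them. */
  printf("%zu %zu\n", __alignof__(double), __alignof__(long double));
  return 0;
}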
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102206 91177308-0d34-0410-b5e6-96231b3b80d8
misses an opportunity to fold add operands, but folds them
after LSR has separated them out. This fixes rdar://7886751.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102157 91177308-0d34-0410-b5e6-96231b3b80d8
optimization for non-leaf functions. This will be hooked up to gcc's
-momit-leaf-frame-pointer option. rdar://7886181
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101984 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't occur much at all; it only seems to be formed in the case
when the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101350 91177308-0d34-0410-b5e6-96231b3b80d8
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101343 91177308-0d34-0410-b5e6-96231b3b80d8
If we have this situation:
jCC L1
jmp L2
L1:
...
L2:
...
We can get a small performance boost by emitting this instead:
jnCC L2
L1:
...
L2:
...
This testcase shows an example of this:
float func(float x, float y) {
double product = (double)x * y;
if (product == 0.0)
return product;
return product - 1.0;
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101075 91177308-0d34-0410-b5e6-96231b3b80d8
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.
This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.
This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100699 91177308-0d34-0410-b5e6-96231b3b80d8
in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100172 91177308-0d34-0410-b5e6-96231b3b80d8
- Do not try to infer GV alignment unless its type is sized. It's not possible to infer alignment if it has an opaque type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100118 91177308-0d34-0410-b5e6-96231b3b80d8
1. Makes it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless it's fast.
3. Fix some memcpy lowering logic bug related to when to optimize a
load from constant string into a constant.
4. Adjust x86 memcpy lowering threshold to make it more sane.
5. Fix x86 target hook so it uses vector and floating point memory
ops more effectively.
rdar://7774704
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the pmulld patterns, and make sure that they fold in loads of
arguments into the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99910 91177308-0d34-0410-b5e6-96231b3b80d8
transforming it into (add (i32 GPR), 4). This allows us to write type
generic multi patterns and have tblgen automatically drop the bitconvert
in the case when the types align. This allows us to fold an extra load
in the changed testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99756 91177308-0d34-0410-b5e6-96231b3b80d8
happening.
Enhance scheduling to set the DEAD flag on implicit defs
more aggressively. Before, we'd set an implicit def operand
to dead if it were present in the SDNode corresponding to
the machineinstr but had no use. Now we do it in this case
AND if the implicit def does not exist in the SDNode at all.
This exposes a couple of problems: one is the FIXME, which
causes a live intervals crash on CodeGen/X86/sibcall.ll.
The second is that it makes machinecse and licm more
aggressive (which is a good thing) but also exposes a case
where licm hoists a set0 and then it doesn't get resunk.
Talking to codegen folks about both these issues, but I need
this patch in, in the meantime.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99485 91177308-0d34-0410-b5e6-96231b3b80d8
override prefix and only the r/m16 forms should have had that. Also for variant
one, the AT&T syntax, added suffixes to all forms. Also added the missing
64-bit form for 'CRC32 r64, r/m8'. Plus added test cases for all forms and
tweaked one test case to add the needed suffixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98980 91177308-0d34-0410-b5e6-96231b3b80d8
- Although it would be nice to allow this decoupling, the assembler needs to be able to reason about MCSymbolRefExprs in too many places to make this viable. We can use a target specific encoding of the variant if this becomes an issue.
- This patch also extends llvm-mc to support parsing of the modifiers, as opposed to lumping them in with the symbol.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98592 91177308-0d34-0410-b5e6-96231b3b80d8
32-bit indices. Instead of shuffling each element out of the index vector,
when all indices are needed, just store the input vector to the stack and
load the elements out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98588 91177308-0d34-0410-b5e6-96231b3b80d8
label is generated, but then the block is deleted. Since the
value is undefined, we just emit the label right after the entry
label of the function. It might matter that the label is in the
same section as the function, after all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98579 91177308-0d34-0410-b5e6-96231b3b80d8
function, then the BB is RAUW'd before the definition is emitted. There
are still two cases not being handled, but this should improve us back to
the situation before I touched anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98566 91177308-0d34-0410-b5e6-96231b3b80d8
no arguments instead of having to come up with a unique name.
This also makes the code less fragile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98364 91177308-0d34-0410-b5e6-96231b3b80d8
cl = EXTRACT_SUBREG reg1024, 1, is overly conservative. It should check
for overlaps of vr's live interval with the super registers of the
physical register (ECX in this case) and let JoinIntervals() handle checking
the coalescing feasibility against the physical register (cl in this case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98251 91177308-0d34-0410-b5e6-96231b3b80d8
available, the only thing this affects is that we produce
.set in one case we didn't before, which shouldn't harm
anything. Make EmitSectionOffset call EmitDifference
instead of duplicating it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98005 91177308-0d34-0410-b5e6-96231b3b80d8
is a workaround for <rdar://problem/7672401/> (which I filed).
This lets us build Wine on Darwin, and it gets the Qt build there a little bit
further (so Doug says).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97845 91177308-0d34-0410-b5e6-96231b3b80d8
CALL ... %RAX<imp-def>
... [not using %RAX]
%EAX = ..., %RAX<imp-use, kill>
RET %EAX<imp-use,kill>
Now we do this:
CALL ... %RAX<imp-def, dead>
... [not using %RAX]
%EAX = ...
RET %EAX<imp-use,kill>
By not artificially keeping %RAX alive, we lower register pressure a bit.
The correct number of instructions for 2008-08-05-SpillerBug.ll is obviously
55, anybody can see that. Sheesh.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97838 91177308-0d34-0410-b5e6-96231b3b80d8
node which has a flag. That flag in turn was used by an
already-selected adde which turned into an ADC32ri8 which
used a selected load which was chained to the load we
folded. This flag use caused us to form a cycle. Fix
this by not ignoring chains in IsLegalToFold even in
cases where the isel thinks it can.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97791 91177308-0d34-0410-b5e6-96231b3b80d8
This code:
float floatingPointComparison(float x, float y) {
double product = (double)x * y;
if (product == 0.0)
return product;
return product - 1.0;
}
produces this:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jne 0x00000004
0000000000000016 jp 0x00000002
0000000000000018 jmp 0x00000008
000000000000001a addsd 0x00000006(%rip),%xmm0
0000000000000022 cvtsd2ss %xmm0,%xmm0
0000000000000026 ret
The "jne/jp/jmp" sequence can be reduced to this instead:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jp 0x00000002
0000000000000016 je 0x00000008
0000000000000018 addsd 0x00000006(%rip),%xmm0
0000000000000020 cvtsd2ss %xmm0,%xmm0
0000000000000024 ret
for a savings of 2 bytes.
This xform can happen when we recognize that jne and jp jump to the same "true"
MBB, the unconditional jump would jump to the "false" MBB, and the "true" branch
is the fall-through MBB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97766 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions technically define AL,AH, but a trick in X86ISelDAGToDAG
reads AX in order to avoid reading AH with a REX instruction.
Fix PR6489.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97742 91177308-0d34-0410-b5e6-96231b3b80d8
long test(long x) { return (x & 123124) | 3; }
Currently compiles to:
_test:
orl $3, %edi
movq %rdi, %rax
andq $123127, %rax
ret
This is because instruction and DAG combiners canonicalize
(or (and x, C), D) -> (and (or x, D), (C | D))
However, this is only profitable if (C & D) != 0. It gets in the way of the
3-addressification because the input bits are known to be zero.
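For reference, the canonicalization itself never changes the computed value; only its
interaction with later codegen is the problem. A quick C check of the identity, using the
constants from the example above (my sketch, not part of the commit):
#include <assert.h>
int main(void) {
  unsigned long long x = 0x123456789abcdefULL;
  unsigned long long C = 123124, D = 3;
  /* (or (and x, C), D) -> (and (or x, D), (C | D)) is value-preserving;
     the complaint is only that it obscures the known-zero input bits. */
  assert(((x & C) | D) == ((x | D) & (C | D)));
  return 0;
}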
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97616 91177308-0d34-0410-b5e6-96231b3b80d8
CopyToReg/CopyFromReg/INLINEASM. These are annoying because
they have the same opcode before an after isel. Fix this by
setting their NodeID to -1 to indicate that they are selected,
just like what automatically happens when selecting things that
end up being machine nodes.
With that done, give IsLegalToFold a new flag that causes it to
ignore chains. This lets the HandleMergeInputChains routine be
the one place that validates chains after a match is successful,
enabling the new hotness in chain processing. This smarter
chain processing eliminates the need for "PreprocessRMW" in the
X86 and MSP430 backends and enables MSP to start matching its
multiple mem operand instructions more aggressively.
I currently #if out the dead code in the X86 backend and MSP
backend, I'll remove it for real in a follow-on patch.
The testcase changes are:
test/CodeGen/X86/sse3.ll: we generate better code
test/CodeGen/X86/store_op_load_fold2.ll: PreprocessRMW was
miscompiling this before; we now generate correct code.
Convert it to filecheck while I'm at it.
test/CodeGen/MSP430/Inst16mm.ll: Add a testcase for mem/mem
folding to make anton happy. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97596 91177308-0d34-0410-b5e6-96231b3b80d8
by loop depth and emit loop-invariant subexpressions outside of loops.
This speeds up MultiSource/Applications/viterbi and others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97580 91177308-0d34-0410-b5e6-96231b3b80d8
was that we weren't properly handling the case when interior
nodes of a matched pattern become dead after updating chain
and flag uses. Now we handle this explicitly in
UpdateChainsAndFlags.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97561 91177308-0d34-0410-b5e6-96231b3b80d8
stuff now that we don't care about emulating the old broken
behavior of the old isel. This eliminates the
'CheckChainCompatible' check (along with IsChainCompatible) which
did an incorrect and inefficient scan *up* the chain nodes which
happened as the pattern was being formed and does the validation
at the end in HandleMergeInputChains when it forms a structural
pattern. This scans "down" the graph, which means that it is
quickly bounded by nodes already selected. This also handles
token factors that get "trapped" in the dag.
Removing the CheckChainCompatible nodes also shrinks the
generated tables by about 6K for X86 (down to 83K).
There are two pieces remaining before I can nuke PreprocessRMW:
1. I xfailed a test because we're now producing worse code in a
case that has nothing to do with the change: it turns out that
our use of MorphNodeTo will leave dead nodes in the graph
which (depending on how the graph is walked) end up causing
bogus uses of chains and blocking matches. This is really
bad for other reasons, so I'll fix this in a follow-up patch.
2. CheckFoldableChainNode needs to be improved to handle the TF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97539 91177308-0d34-0410-b5e6-96231b3b80d8
instead of having a chained series of scope nodes. This makes
the generated table smaller, improves the efficiency of the
interpreter, and make the factoring optimization much more
reasonable to implement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97160 91177308-0d34-0410-b5e6-96231b3b80d8
terms of store and load, which means bitcasting between scalar
integer and vector has endian-specific results, which undermines
this whole approach.
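A small C illustration of the endian dependence (mine, not from the commit): defining the
bitcast via a store and a load makes the lane values depend on the host byte order.
#include <stdint.h>
#include <stdio.h>
#include <string.h>
int main(void) {
  uint32_t word = 0x01020304u;
  uint8_t lanes[4];
  memcpy(lanes, &word, sizeof word);  /* the store-then-load semantics */
  printf("%u\n", lanes[0]);           /* 4 on little-endian, 1 on big-endian */
  return 0;
}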
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97137 91177308-0d34-0410-b5e6-96231b3b80d8
which branch on undef to branch on a boolean constant for the edge
exiting the loop. This helps ScalarEvolution compute trip counts for
loops.
Teach ScalarEvolution to recognize single-value PHIs, when safe, and
ForgetSymbolicName to forget such single-value PHI nodes as appropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97126 91177308-0d34-0410-b5e6-96231b3b80d8
the number of value bits, not the number of bits of allocation for in-memory
storage.
Make getTypeStoreSize and getTypeAllocSize work consistently for arrays and
vectors.
Fix several places in CodeGen which compute offsets into in-memory vectors
to use TargetData information.
This fixes PR1784.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97064 91177308-0d34-0410-b5e6-96231b3b80d8
necessary to swap the operands to handle NaN and negative zero properly.
Also, reintroduce logic for checking for NaN conditions when forming
SSE min and max instructions, fixed to take into consideration NaNs and
negative zeros. This allows forming min and max instructions in more
cases.
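A scalar illustration of the negative-zero hazard (my example, not from the commit): the
compare-and-select that a min instruction implements is not symmetric in signed zeros, so
operand order matters.
#include <stdio.h>
int main(void) {
  double x = -0.0, y = 0.0;
  /* -0.0 < +0.0 is false, so this "min" yields +0.0; with the operands
     swapped it would yield -0.0.  NaN operands break the symmetry too. */
  double m = (x < y) ? x : y;
  printf("%g\n", m);
  return 0;
}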
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97025 91177308-0d34-0410-b5e6-96231b3b80d8
to adding them in a deterministic order (bottom up from
the root) based on the structure of the graph itself.
This updates tests for some random changes, interesting
bits: CodeGen/Blackfin/promote-logic.ll no longer crashes.
I have no idea why, but that's good right?
CodeGen/X86/2009-07-16-LoadFoldingBug.ll also fails, but
now compiles to have one fewer constant pool entry, making
the expected load that was being folded disappear. Since it
is an unreduced mass of gnast, I just removed it.
This fixes PR6370
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97023 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, LiveIntervalAnalysis would infer phi joins by looking for multiply
defined registers. That doesn't work if the phi join is implicitly defined in
all but one of the predecessors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96994 91177308-0d34-0410-b5e6-96231b3b80d8
operators.
The test difference is just due to the multiplication operands
being commuted (and thus requiring a more elaborate match). In
optimized code, that expression would be folded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96816 91177308-0d34-0410-b5e6-96231b3b80d8
during a tail call. A parameter might overwrite this stack slot during the tail
call.
The sequence during a tail call is:
1.) load return address to temp reg
2.) move parameters (might involve storing to return address stack slot)
3.) store return address to new location from temp reg
If the stack location is marked immutable, CodeGen can colocate the load (1) with the
store (3).
This fixes bug 6225.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96783 91177308-0d34-0410-b5e6-96231b3b80d8
SSE min and max instructions. The real thing this code needs to be
concerned about is negative zero.
Update the sse-minmax.ll test accordingly, and add tests for
-enable-unsafe-fp-math mode as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96775 91177308-0d34-0410-b5e6-96231b3b80d8
dragonegg self-host build. I reverted 96640 in order to revert
96556 (96640 goes on top of 96556), but it also looks like with
both of them applied the breakage happens even earlier. The
symptom of the 96556 miscompile is the following crash:
llvm[3]: Compiling AlphaISelLowering.cpp for Release build
cc1plus: /home/duncan/tmp/tmp/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp:4982: void llvm::SelectionDAG::ReplaceAllUsesWith(llvm::SDNode*, llvm::SDNode*, llvm::SelectionDAG::DAGUpdateListener*): Assertion `(!From->hasAnyUseOfValue(i) || From->getValueType(i) == To->getValueType(i)) && "Cannot use this version of ReplaceAllUsesWith!"' failed.
Stack dump:
0. Running pass 'X86 DAG->DAG Instruction Selection' on function '@_ZN4llvm19AlphaTargetLowering14LowerOperationENS_7SDValueERNS_12SelectionDAGE'
g++: Internal error: Aborted (program cc1plus)
This occurs when building LLVM using LLVM built by LLVM (via
dragonegg). Probably LLVM has miscompiled itself, though it
may have miscompiled GCC and/or dragonegg itself: at this point
of the self-host build, all of GCC, LLVM and dragonegg were built
using LLVM. Unfortunately this kind of thing is extremely hard
to debug, and while I did rummage around a bit I didn't find any
smoking guns, aka obviously miscompiled code.
Found by bisection.
r96556 | evancheng | 2010-02-18 03:13:50 +0100 (Thu, 18 Feb 2010) | 5 lines
Some dag combiner goodness:
Transform br (xor (x, y)) -> br (x != y)
Transform br (xor (xor (x,y), 1)) -> br (x == y)
Also normalize (and (X, 1)) == / != 1 -> (and (X, 1)) != / == 0 to match "test" on x86 and "tst" on arm
r96640 | evancheng | 2010-02-19 01:34:39 +0100 (Fri, 19 Feb 2010) | 16 lines
Transform (xor (setcc), (setcc)) == / != 1 to
(xor (setcc), (setcc)) != / == 1.
e.g. On x86_64
%0 = icmp eq i32 %x, 0
%1 = icmp eq i32 %y, 0
%2 = xor i1 %1, %0
br i1 %2, label %bb, label %return
=>
testl %edi, %edi
sete %al
testl %esi, %esi
sete %cl
cmpb %al, %cl
je LBB1_2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96672 91177308-0d34-0410-b5e6-96231b3b80d8
strides in foreign loops. This helps locate reuse opportunities
with existing induction variables in foreign loops and reduces
the need for inserting new ones. This fixes rdar://7657764.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96629 91177308-0d34-0410-b5e6-96231b3b80d8
which is not always true if the mask contains undefs. Modified it to return
the first non-undef value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96621 91177308-0d34-0410-b5e6-96231b3b80d8
Moderate the weight given to very small intervals.
The spill weight given to new intervals created when spilling was not
normalized in the same way as the original spill weights calculated by
CalcSpillWeights. That meant that restored registers would tend to hang around
because they had a much higher spill weight than unspilled registers.
This improves the runtime of a few tests by up to 10%, and there are no
significant regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96613 91177308-0d34-0410-b5e6-96231b3b80d8
checking whether AnalyzeBranch disagrees with the CFG
directly, rather than looking for EH_LABEL instructions.
EH_LABEL instructions aren't always at the end of the
block, due to FP_REG_KILL and other things. This fixes
an infinite loop compiling MultiSource/Benchmarks/Bullet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96611 91177308-0d34-0410-b5e6-96231b3b80d8
into a roundss intrinsic, producing a cyclic dag. The root cause
of this is badness in handling ComplexPattern nodes in the old dagisel
that I noticed through inspection. Eliminate a copy of the
code that handled ComplexPatterns by making EmitChildMatchCode call
into EmitMatchCode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96408 91177308-0d34-0410-b5e6-96231b3b80d8
If there exists a use of a build_vector that's the bitwise complement of the mask,
then transform the node to
(and (xor x, (build_vector -1,-1,-1,-1)), (build_vector ~c1,~c2,~c3,~c4)).
Since this transformation is only useful when 1) the given build_vector will
become a load from constpool, and 2) (and (xor x -1), y) matches to a single
instruction, I decided this is appropriate as an x86-specific transformation.
rdar://7323335
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96389 91177308-0d34-0410-b5e6-96231b3b80d8
non-temporal. Fix from r96241 for botched encoding of MOVNTDQ.
Add documentation for !nontemporal metadata.
Add a simpler movnt testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96386 91177308-0d34-0410-b5e6-96231b3b80d8
as it also peeks at which registers are being used by other uses. This
makes LSR less sensitive to use-list order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96308 91177308-0d34-0410-b5e6-96231b3b80d8
A virtual register can be used before it is defined in the same MBB if the MBB
is part of a loop. Teach the implicit-def pass about this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96279 91177308-0d34-0410-b5e6-96231b3b80d8
When coalescing with a physreg, remember to add imp-def and imp-kill when
dealing with sub-registers.
Also fix a related bug in VirtRegRewriter where substitutePhysReg may
reallocate the operand list on an instruction and invalidate the reg_iterator.
This can happen when a register is mentioned twice on the same instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96072 91177308-0d34-0410-b5e6-96231b3b80d8
phi cycles. Adjust a few tests to keep dead instructions from being optimized
away. This (together with my previous change for phi cycles) fixes Apple
radar 7627077.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96057 91177308-0d34-0410-b5e6-96231b3b80d8
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.
This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95975 91177308-0d34-0410-b5e6-96231b3b80d8
lowering and requires that certain types exist in ValueTypes.h. Modified widening to
check if an op can trap; if so, the widening algorithm applies the op only to
the defined elements. It is safer to do this in widening because the optimizer can't
guarantee removing unused ops in some cases.
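A sketch of why trapping ops need this care, using my own example rather than one from
the commit: if the three-element divide below is widened to four lanes, the padding lane
holds unspecified data, and a widened divide could fault on a zero divisor even though
the source never divides by zero.
/* May be vectorized as a <3 x i32> sdiv; the widened form must apply the
   divide only to the three defined elements. */
void div3(int *c, const int *a, const int *b) {
  for (int i = 0; i < 3; ++i)
    c[i] = a[i] / b[i];
}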
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95823 91177308-0d34-0410-b5e6-96231b3b80d8
legalization even when the IR-level optimizer has removed dead phis, such
as when the high half of an i64 value is unused on a 32-bit target.
I had to adjust a few test cases that had dead phis.
This is a partial fix for Radar 7627077.
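A rough C analogue of the situation (my sketch, not from the commit): on a 32-bit target
the i64 value below is split into low and high halves, and when only the low half is
consumed, whatever phi ends up carrying the high half is dead and can now be cleaned up
during legalization.
int low_half(int cond, long long a, long long b) {
  long long v = cond ? a : b;  /* split into lo/hi halves on a 32-bit target */
  return (int)v;               /* only the low 32 bits are ever used */
}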
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95816 91177308-0d34-0410-b5e6-96231b3b80d8
in global initializers. Instead of aborting, attempt to fold them on the
spot. If folding succeeds, emit the folded expression instead.
This fixes PR6255.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95583 91177308-0d34-0410-b5e6-96231b3b80d8
only run for x86 with fastisel. I've found it to be very effective in
eliminating some obvious dead code as a result of formal parameter lowering
especially when tail call optimization eliminated the need for some of the loads
from fixed frame objects. It also shrinks a number of the tests. A couple of
tests no longer make sense and are now eliminated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95493 91177308-0d34-0410-b5e6-96231b3b80d8
where callee's arguments are already in the caller's own caller's stack and
they line up perfectly. e.g.
extern int foo(int a, int b, int c);
int bar(int a, int b, int c) {
return foo(a, b, c);
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95053 91177308-0d34-0410-b5e6-96231b3b80d8
as output. Needed for (functional) correctness in inline asm,
and should be generally beneficial. 7361612.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95050 91177308-0d34-0410-b5e6-96231b3b80d8
even when -tailcallopt is not specified and it does not require changing ABI.
First case is the most trivial one. Perform tail call optimization when both
the caller and callee do not return values and when the callee does not take
any input arguments.
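The trivial case described above, sketched in C with hypothetical names: both functions
return void and the callee takes no arguments, so the call can be lowered as a tail call
even without -tailcallopt.
void callee(void);
void caller(void) {
  callee();  /* last action, no return value, no arguments: a tail call */
}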
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94664 91177308-0d34-0410-b5e6-96231b3b80d8
to MCExpr then emit them through MCStreamer with EmitValue. I think all
global variable initializers are now going through mcstreamer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94293 91177308-0d34-0410-b5e6-96231b3b80d8