in other types. Fix this by only bumping zero-byte globals
up to a single byte if the *entire global* is zero size,
fixing PR6340.
This also fixes empty arrays etc. to be handled correctly,
and only does this on subsection-via-symbols targets (aka
Darwin), which is the only place where this matters.
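For reference, a minimal sketch of the kind of zero-size global this
affects (the names are made up, and both declarations rely on GNU
extensions):
  /* each of these has size zero, so without the fix it could end up at
     the same address as whatever global the assembler emits next */
  char empty_marker[0];            /* GNU zero-length array extension */
  struct unit {} empty_tag;        /* GNU empty-struct extension      */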
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101879 91177308-0d34-0410-b5e6-96231b3b80d8
the intrinsics. The reason for those i8* types is that the intrinsics are
overloaded on the vector type and we don't have a way to declare an intrinsic
where one argument is an overloaded vector type and another argument is a
pointer to the vector element type. The bitcasts added here will match what
the frontend will typically generate when these intrinsics are used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101840 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't occur much at all; it only seems to be formed when
the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101350 91177308-0d34-0410-b5e6-96231b3b80d8
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
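For reference, a hedged sketch (hypothetical struct and function names)
of the kind of bitfield store that produces such a sequence:
  struct packed {
    unsigned lo : 16;
    unsigned hi : 16;
  };
  void set_hi(struct packed *p, unsigned v) {
    /* read-modify-write of the containing 32-bit word:
       load, mask out the old field, or in the new bits, store */
    p->hi = v;
  }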
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101343 91177308-0d34-0410-b5e6-96231b3b80d8
If we have this situation:
jCC L1
jmp L2
L1:
...
L2:
...
We can get a small performance boost by emitting this instead:
jnCC L2
L1:
...
L2:
...
This testcase shows an example of this:
float func(float x, float y) {
  double product = (double)x * y;
  if (product == 0.0)
    return product;
  return product - 1.0;
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101075 91177308-0d34-0410-b5e6-96231b3b80d8
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.
This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.
This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100699 91177308-0d34-0410-b5e6-96231b3b80d8
in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100172 91177308-0d34-0410-b5e6-96231b3b80d8
- Do not try to infer GV alignment unless its type is sized. It's not possible to infer alignment if it has an opaque type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100118 91177308-0d34-0410-b5e6-96231b3b80d8
1. Makes it possible to lower with floating point loads and stores.
2. Avoids unaligned loads / stores unless it's fast.
3. Fixes a memcpy lowering logic bug related to when to optimize a
load from a constant string into a constant (see the sketch below).
4. Adjusts the x86 memcpy lowering threshold to make it more sane.
5. Fixes the x86 target hook so it uses vector and floating point memory
ops more effectively.
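A minimal sketch (hypothetical function name) of the constant-string
case from item 3, where the copy can be lowered to stores of constants:
  #include <string.h>
  void init_tag(char *dst) {
    memcpy(dst, "LLVM", 4);  /* small, fixed-size copy from a string literal */
  }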
rdar://7774704
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the pmulld patterns, and make sure that they fold in loads of
arguments into the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99910 91177308-0d34-0410-b5e6-96231b3b80d8
"the bigstack patch for SPU, with testcase. It is essentially the patch committed as 97091, and reverted as 97099, but with the following additions:
-in vararg handling, registers are marked to be live, to not confuse the register scavenger
-function prologue and epilogue are not emitted, if the stack size is 16. 16 means it is empty - there is only the register scavenger emergency spill slot, which is not used as there is no stack."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99819 91177308-0d34-0410-b5e6-96231b3b80d8
transforming it into (add (i32 GPR), 4). This allows us to write type
generic multi patterns and have tblgen automatically drop the bitconvert
in the case when the types align. This allows us to fold an extra load
in the changed testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99756 91177308-0d34-0410-b5e6-96231b3b80d8
happening.
Enhance scheduling to set the DEAD flag on implicit defs
more aggressively. Before, we'd set an implicit def operand
to dead if it were present in the SDNode corresponding to
the machineinstr but had no use. Now we do it in this case
AND if the implicit def does not exist in the SDNode at all.
This exposes a couple of problems: one is the FIXME, which
causes a live intervals crash on CodeGen/X86/sibcall.ll.
The second is that it makes machinecse and licm more
aggressive (which is a good thing) but also exposes a case
where licm hoists a set0 and then it doesn't get resunk.
Talking to codegen folks about both of these issues, but I need
this patch in, in the meantime.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99485 91177308-0d34-0410-b5e6-96231b3b80d8
--- Reverse-merging r99400 into '.':
D test/CodeGen/Generic/2010-03-24-liveintervalleak.ll
U lib/CodeGen/LiveIntervalAnalysis.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99419 91177308-0d34-0410-b5e6-96231b3b80d8
otherwise the SmallVector it contains doesn't free its memory.
In most cases LiveIntervalAnalysis could get away with not calling the destructor,
because VNInfos are BumpPtr-allocated and SmallVectors usually don't grow.
However, when the SmallVector does grow, it always leaks.
This is the leak valgrind reports for the original testcase:
==8206== 18,304 bytes in 151 blocks are definitely lost in loss record 164 of 164
==8206== at 0x4A079C7: operator new(unsigned long) (vg_replace_malloc.c:220)
==8206== by 0x4DB7A7E: llvm::SmallVectorBase::grow_pod(unsigned long, unsigned long) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4F90382: llvm::VNInfo::addKill(llvm::SlotIndex) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x5126B5C: llvm::LiveIntervals::handleVirtualRegisterDef(llvm::MachineBasicBlock*, llvm::ilist_iterator<llvm::MachineInstr>, llvm::SlotIndex, llvm::MachineOperand&, unsigned int, llvm::LiveInterval&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x512725E: llvm::LiveIntervals::handleRegisterDef(llvm::MachineBasicBlock*, llvm::ilist_iterator<llvm::MachineInstr>, llvm::SlotIndex, llvm::MachineOperand&, unsigned int) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x51278A8: llvm::LiveIntervals::computeIntervals() (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x5127CB4: llvm::LiveIntervals::runOnMachineFunction(llvm::MachineFunction&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4DAE935: llvm::FPPassManager::runOnFunction(llvm::Function&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4DAEB10: llvm::FunctionPassManagerImpl::run(llvm::Function&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4DAED3D: llvm::FunctionPassManager::run(llvm::Function&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4D8BE8E: llvm::JIT::runJITOnFunctionUnlocked(llvm::Function*, llvm::MutexGuard const&) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
==8206== by 0x4D8CA72: llvm::JIT::getPointerToFunction(llvm::Function*) (in /home/edwin/clam/git/builds/default/libclamav/.libs/libclamav.so.6.1.0)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99400 91177308-0d34-0410-b5e6-96231b3b80d8
override prefix and only the r/m16 forms should have had that. Also for variant
one, the AT&T syntax, added suffixes to all forms. Also added the missing
64-bit form for 'CRC32 r64, r/m8'. Plus added test cases for all forms and
tweaked one test case to add the needed suffixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98980 91177308-0d34-0410-b5e6-96231b3b80d8
instructions to help disassembly.
We also changed the output of the addressing modes to omit the '+' from the
assembler syntax #+/-<imm> or +/-<Rm>. See, for example, A8.6.57/58/60.
And modified test cases to not expect '+' in +reg or #+num. For example,
; CHECK: ldr.w r9, [r7, #28]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98745 91177308-0d34-0410-b5e6-96231b3b80d8
U test/CodeGen/ARM/tls2.ll
U test/CodeGen/ARM/arm-negative-stride.ll
U test/CodeGen/ARM/2009-10-30.ll
U test/CodeGen/ARM/globals.ll
U test/CodeGen/ARM/str_pre-2.ll
U test/CodeGen/ARM/ldrd.ll
U test/CodeGen/ARM/2009-10-27-double-align.ll
U test/CodeGen/Thumb2/thumb2-strb.ll
U test/CodeGen/Thumb2/ldr-str-imm12.ll
U test/CodeGen/Thumb2/thumb2-strh.ll
U test/CodeGen/Thumb2/thumb2-ldr.ll
U test/CodeGen/Thumb2/thumb2-str_pre.ll
U test/CodeGen/Thumb2/thumb2-str.ll
U test/CodeGen/Thumb2/thumb2-ldrh.ll
U utils/TableGen/TableGen.cpp
U utils/TableGen/DisassemblerEmitter.cpp
D utils/TableGen/RISCDisassemblerEmitter.h
D utils/TableGen/RISCDisassemblerEmitter.cpp
U Makefile.rules
U lib/Target/ARM/ARMInstrNEON.td
U lib/Target/ARM/Makefile
U lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
U lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
U lib/Target/ARM/AsmPrinter/ARMInstPrinter.h
D lib/Target/ARM/Disassembler
U lib/Target/ARM/ARMInstrFormats.td
U lib/Target/ARM/ARMAddressingModes.h
U lib/Target/ARM/Thumb2ITBlockPass.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98640 91177308-0d34-0410-b5e6-96231b3b80d8
(RISCDisassemblerEmitter) which emits the decoder functions for ARM and Thumb,
and the disassembler core which invokes the decoder function and builds up the
MCInst based on the decoded Opcode.
Added sub-formats to the NeonI/NeonXI instructions to further refine the NEONFrm
instructions to help disassembly.
We also changed the output of the addressing modes to omit the '+' from the
assembler syntax #+/-<imm> or +/-<Rm>. See, for example, A8.6.57/58/60.
And modified test cases to not expect '+' in +reg or #+num. For example,
; CHECK: ldr.w r9, [r7, #28]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98637 91177308-0d34-0410-b5e6-96231b3b80d8
This does not move entirely to UAL syntax, since the default "increment after"
suffix is empty but we still use "IA" for that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98635 91177308-0d34-0410-b5e6-96231b3b80d8
to LLVM IR changes with addr label weirdness. In the testcase, we
generate references to the two bb's when codegen'ing the first
function:
_test1: ## @test1
leaq Ltmp0(%rip), %rax
..
leaq Ltmp1(%rip), %rax
Then continue to codegen the second function where the blocks
get merged. We're now smart enough to emit both labels, producing
this code:
_test_fun: ## @test_fun
## BB#0: ## %entry
Ltmp1: ## Block address taken
Ltmp0:
## BB#1: ## %ret
movl $-1, %eax
ret
Rejoice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98595 91177308-0d34-0410-b5e6-96231b3b80d8
- Although it would be nice to allow this decoupling, the assembler needs to be able to reason about MCSymbolRefExprs in too many places to make this viable. We can use a target specific encoding of the variant if this becomes an issue.
- This patch also extends llvm-mc to support parsing of the modifiers, as opposed to lumping them in with the symbol.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98592 91177308-0d34-0410-b5e6-96231b3b80d8
32-bit indices. Instead of shuffling each element out of the index vector,
when all indices are needed, just store the input vector to the stack and
load the elements out.
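A hedged C-level sketch of the lowering idea (the vector typedef uses the
GCC/Clang vector_size extension and the names are illustrative, not the
actual testcase):
  #include <string.h>
  typedef int v4i __attribute__((vector_size(16)));
  void read_all_lanes(v4i v, int out[4]) {
    int slot[4];
    memcpy(slot, &v, sizeof slot);   /* one store of the whole vector */
    for (int i = 0; i < 4; ++i)
      out[i] = slot[i];              /* plain scalar loads per lane */
  }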
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98588 91177308-0d34-0410-b5e6-96231b3b80d8
label is generated, but then the block is deleted. Since the
value is undefined, we just emit the label right after the entry
label of the function. It might matter that the label is in the
same section as the function was, after all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98579 91177308-0d34-0410-b5e6-96231b3b80d8
function, then the BB is RAUW'd before the definition is emitted. There
are still two cases not being handled, but this should get us back to
the situation before I touched anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98566 91177308-0d34-0410-b5e6-96231b3b80d8
label instead of trying to form one based on the BB name (which
causes collisions if the name is empty). This fixes PR6608
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98495 91177308-0d34-0410-b5e6-96231b3b80d8
no arguments instead of having to come up with a unique name.
This also makes the code less fragile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98364 91177308-0d34-0410-b5e6-96231b3b80d8
cl = EXTRACT_SUBREG reg1024, 1, is overly conservative. It should check
for overlaps of vr's live interval with the super registers of the
physical register (ECX in this case) and let JoinIntervals() handle checking
the coalescing feasibility against the physical register (cl in this case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98251 91177308-0d34-0410-b5e6-96231b3b80d8
directly to the maccu / maccs instructions. We handle this in
ExpandADDSUB since after type legalisation it is messy to
recognise these operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98150 91177308-0d34-0410-b5e6-96231b3b80d8
Make it so. (This patch is in LowerCall_Darwin, which seems
to be used by SVR4 code as well; since that doesn't belong here,
I haven't worried about this case.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98077 91177308-0d34-0410-b5e6-96231b3b80d8
available, the only thing this affects is that we produce
.set in one case we didn't before, which shouldn't harm
anything. Make EmitSectionOffset call EmitDifference
instead of duplicating it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98005 91177308-0d34-0410-b5e6-96231b3b80d8
immediate instructions cannot set the condition codes, so they do not have
the extra cc_out operand. We hit an assertion during tail duplication
because the instruction being duplicated had more operands than expected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98001 91177308-0d34-0410-b5e6-96231b3b80d8
is a workaround for <rdar://problem/7672401/> (which I filed).
This lets us build Wine on Darwin, and it gets the Qt build there a little bit
further (so Doug says).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97845 91177308-0d34-0410-b5e6-96231b3b80d8
CALL ... %RAX<imp-def>
... [not using %RAX]
%EAX = ..., %RAX<imp-use, kill>
RET %EAX<imp-use,kill>
Now we do this:
CALL ... %RAX<imp-def, dead>
... [not using %RAX]
%EAX = ...
RET %EAX<imp-use,kill>
By not artificially keeping %RAX alive, we lower register pressure a bit.
The correct number of instructions for 2008-08-05-SpillerBug.ll is obviously
55, anybody can see that. Sheesh.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97838 91177308-0d34-0410-b5e6-96231b3b80d8
The MicroBlaze backend was generating stack layouts that did not
conform correctly to the ABI. This update generates stack layouts
which are closer to what GCC does.
Variable arguments support was added as well but the stack layout
for varargs has not been finalized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97807 91177308-0d34-0410-b5e6-96231b3b80d8
node which has a flag. That flag in turn was used by an
already-selected adde which turned into an ADC32ri8 which
used a selected load which was chained to the load we
folded. This flag use caused us to form a cycle. Fix
this by not ignoring chains in IsLegalToFold even in
cases where the isel thinks it can.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97791 91177308-0d34-0410-b5e6-96231b3b80d8
This code:
float floatingPointComparison(float x, float y) {
  double product = (double)x * y;
  if (product == 0.0)
    return product;
  return product - 1.0;
}
produces this:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jne 0x00000004
0000000000000016 jp 0x00000002
0000000000000018 jmp 0x00000008
000000000000001a addsd 0x00000006(%rip),%xmm0
0000000000000022 cvtsd2ss %xmm0,%xmm0
0000000000000026 ret
The "jne/jp/jmp" sequence can be reduced to this instead:
_floatingPointComparison:
0000000000000000 cvtss2sd %xmm1,%xmm1
0000000000000004 cvtss2sd %xmm0,%xmm0
0000000000000008 mulsd %xmm1,%xmm0
000000000000000c pxor %xmm1,%xmm1
0000000000000010 ucomisd %xmm1,%xmm0
0000000000000014 jp 0x00000002
0000000000000016 je 0x00000008
0000000000000018 addsd 0x00000006(%rip),%xmm0
0000000000000020 cvtsd2ss %xmm0,%xmm0
0000000000000024 ret
for a savings of 2 bytes.
This xform can happen when we recognize that jne and jp jump to the same "true"
MBB, the unconditional jump would jump to the "false" MBB, and the "true" branch
is the fall-through MBB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97766 91177308-0d34-0410-b5e6-96231b3b80d8
an undef value. This is only going to come up for bugpoint-reduced tests --
correct programs will not access memory at undefined addresses -- so it's not
worth the effort of doing anything more aggressive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97745 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions technically define AL,AH, but a trick in X86ISelDAGToDAG
reads AX in order to avoid reading AH with a REX instruction.
Fix PR6489.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97742 91177308-0d34-0410-b5e6-96231b3b80d8
long test(long x) { return (x & 123124) | 3; }
Currently compiles to:
_test:
orl $3, %edi
movq %rdi, %rax
andq $123127, %rax
ret
This is because instruction and DAG combiners canonicalize
(or (and x, C), D) -> (and (or x, D), (C | D))
However, this is only profitable if (C & D) != 0. It gets in the way of the
3-addressification because the input bits are known to be zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97616 91177308-0d34-0410-b5e6-96231b3b80d8
CopyToReg/CopyFromReg/INLINEASM. These are annoying because
they have the same opcode before and after isel. Fix this by
setting their NodeID to -1 to indicate that they are selected,
just like what automatically happens when selecting things that
end up being machine nodes.
With that done, give IsLegalToFold a new flag that causes it to
ignore chains. This lets the HandleMergeInputChains routine be
the one place that validates chains after a match is successful,
enabling the new hotness in chain processing. This smarter
chain processing eliminates the need for "PreprocessRMW" in the
X86 and MSP430 backends and enables MSP430 to start matching its
multiple mem operand instructions more aggressively.
I currently #if out the dead code in the X86 and MSP430
backends; I'll remove it for real in a follow-on patch.
The testcase changes are:
test/CodeGen/X86/sse3.ll: we generate better code
test/CodeGen/X86/store_op_load_fold2.ll: PreprocessRMW was
miscompiling this before, we now generate correct code
Convert it to filecheck while I'm at it.
test/CodeGen/MSP430/Inst16mm.ll: Add a testcase for mem/mem
folding to make anton happy. :)
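For reference, a hedged C sketch of the mem/mem case (hypothetical
source; the actual coverage is the .ll testcase above):
  void add_mem_mem(int *dst, const int *src) {
    *dst += *src;   /* both operands live in memory, so MSP430 can use a
                       single memory-to-memory add once folding kicks in */
  }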
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97596 91177308-0d34-0410-b5e6-96231b3b80d8
by loop depth and emit loop-invariant subexpressions outside of loops.
This speeds up MultiSource/Applications/viterbi and others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97580 91177308-0d34-0410-b5e6-96231b3b80d8
was that we weren't properly handling the case when interior
nodes of a matched pattern become dead after updating chain
and flag uses. Now we handle this explicitly in
UpdateChainsAndFlags.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97561 91177308-0d34-0410-b5e6-96231b3b80d8
stuff now that we don't care about emulating the old broken
behavior of the old isel. This eliminates the
'CheckChainCompatible' check (along with IsChainCompatible), which
did an incorrect and inefficient scan *up* the chain nodes as the
pattern was being formed, and instead does the validation at the end
in HandleMergeInputChains when it forms a structural pattern. This
scans "down" the graph, which means that it is
quickly bounded by nodes already selected. This also handles
token factors that get "trapped" in the dag.
Removing the CheckChainCompatible nodes also shrinks the
generated tables by about 6K for X86 (down to 83K).
There are two pieces remaining before I can nuke PreprocessRMW:
1. I xfailed a test because we're now producing worse code in a
case that has nothing to do with the change: it turns out that
our use of MorphNodeTo will leave dead nodes in the graph
which (depending on how the graph is walked) end up causing
bogus uses of chains and blocking matches. This is really
bad for other reasons, so I'll fix this in a follow-up patch.
2. CheckFoldableChainNode needs to be improved to handle the TF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97539 91177308-0d34-0410-b5e6-96231b3b80d8
ordered correctly. Previously it would get into trouble when
two patterns were too similar and give them a nondeterministic ordering.
We force a deterministic order by using the record ID order as a fallback.
The testsuite diff is due to the Alpha patterns being ordered
slightly differently; the change is a semantic noop afaict:
< lda $0,-100($16)
---
> subq $16,100,$0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97509 91177308-0d34-0410-b5e6-96231b3b80d8
The PowerPC floating point registers can represent both f32 and f64 via the
two register classes F4RC and F8RC. F8RC is considered a subclass of F4RC to
allow cross-class coalescing. This coalescing only affects whether registers
are spilled as f32 or f64.
Spill slots must be accessed with load/store instructions corresponding to the
class of the spilled register. PPCInstrInfo::foldMemoryOperandImpl was looking
at the instruction opcode which is wrong.
X86 has similar floating point register classes, but doesn't try to fold
memory operands, so there is no problem there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97262 91177308-0d34-0410-b5e6-96231b3b80d8
Previously LoopStrengthReduce would sometimes be unable to find
a legal formula, causing an assertion failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97226 91177308-0d34-0410-b5e6-96231b3b80d8
instead of having a chained series of scope nodes. This makes
the generated table smaller, improves the efficiency of the
interpreter, and makes the factoring optimization much more
reasonable to implement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97160 91177308-0d34-0410-b5e6-96231b3b80d8
terms of store and load, which means bitcasting between scalar
integer and vector has endian-specific results, which undermines
this whole approach.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97137 91177308-0d34-0410-b5e6-96231b3b80d8
which branch on undef to branch on a boolean constant for the edge
exiting the loop. This helps ScalarEvolution compute trip counts for
loops.
Teach ScalarEvolution to recognize single-value PHIs, when safe, and
teach ForgetSymbolicName to forget such single-value PHI nodes as
appropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97126 91177308-0d34-0410-b5e6-96231b3b80d8
- Function uses all scratch registers AND
- Function does not use any callee saved registers AND
- Stack size is too big to address with immediate offsets.
In this case a register must be scavenged to calculate the address of a stack
object, and the scavenger needs a spare register or emergency spill slot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97071 91177308-0d34-0410-b5e6-96231b3b80d8
greater-than-or-equal SELECT_CCs to NEON vmin/vmax instructions. This is
only allowed when UnsafeFPMath is set or when at least one of the operands
is known to be nonzero.
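A hedged sketch (hypothetical function) of the kind of select that maps
to vmin; without UnsafeFPMath it is only safe here if one operand is
known nonzero, since the select and vmin can disagree on the sign of zero:
  float select_min(float a, float b) {
    return a < b ? a : b;   /* a less-than SELECT_CC candidate for vmin */
  }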
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97065 91177308-0d34-0410-b5e6-96231b3b80d8
the number of value bits, not the number of bits of allocation for in-memory
storage.
Make getTypeStoreSize and getTypeAllocSize work consistently for arrays and
vectors.
Fix several places in CodeGen which compute offsets into in-memory vectors
to use TargetData information.
This fixes PR1784.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97064 91177308-0d34-0410-b5e6-96231b3b80d8
necessary to swap the operands to handle NaN and negative zero properly.
Also, reintroduce logic for checking for NaN conditions when forming
SSE min and max instructions, fixed to take into consideration NaNs and
negative zeros. This allows forming min and max instructions in more
cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97025 91177308-0d34-0410-b5e6-96231b3b80d8
to adding them in a deterministic order (bottom up from
the root) based on the structure of the graph itself.
This updates tests for some random changes; the interesting
bits: CodeGen/Blackfin/promote-logic.ll no longer crashes.
I have no idea why, but that's good, right?
CodeGen/X86/2009-07-16-LoadFoldingBug.ll also fails, but
now compiles to have one fewer constant pool entry, making
the expected load that was being folded disappear. Since it
is an unreduced mass of gnast, I just removed it.
This fixes PR6370
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97023 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, LiveIntervalAnalysis would infer phi joins by looking for multiply
defined registers. That doesn't work if the phi join is implicitly defined in
all but one of the predecessors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96994 91177308-0d34-0410-b5e6-96231b3b80d8
operators.
The test difference is just due to the multiplication operands
being commuted (and thus requiring a more elaborate match). In
optimized code, that expression would be folded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96816 91177308-0d34-0410-b5e6-96231b3b80d8
during a tail call. A parameter might overwrite this stack slot during the tail
call.
The sequence during a tail call is:
1.) load return address to temp reg
2.) move parameters (might involve storing to return address stack slot)
3.) store return address to new location from temp reg
If the stack location is marked immutable, CodeGen can colocate the load (1)
with the store (3).
This fixes bug 6225.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96783 91177308-0d34-0410-b5e6-96231b3b80d8
SSE min and max instructions. The real thing this code needs to be
concerned about is negative zero.
Update the sse-minmax.ll test accordingly, and add tests for
-enable-unsafe-fp-math mode as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96775 91177308-0d34-0410-b5e6-96231b3b80d8
induction variable value and a loop-variant value, don't force the
insert position to be at the post-increment position, because it may
not be dominated by the loop-variant value. This fixes a
use-before-def problem noticed on PPC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96774 91177308-0d34-0410-b5e6-96231b3b80d8
dragonegg self-host build. I reverted 96640 in order to revert
96556 (96640 goes on top of 96556), but it also looks like with
both of them applied the breakage happens even earlier. The
symptom of the 96556 miscompile is the following crash:
llvm[3]: Compiling AlphaISelLowering.cpp for Release build
cc1plus: /home/duncan/tmp/tmp/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp:4982: void llvm::SelectionDAG::ReplaceAllUsesWith(llvm::SDNode*, llvm::SDNode*, llvm::SelectionDAG::DAGUpdateListener*): Assertion `(!From->hasAnyUseOfValue(i) || From->getValueType(i) == To->getValueType(i)) && "Cannot use this version of ReplaceAllUsesWith!"' failed.
Stack dump:
0. Running pass 'X86 DAG->DAG Instruction Selection' on function '@_ZN4llvm19AlphaTargetLowering14LowerOperationENS_7SDValueERNS_12SelectionDAGE'
g++: Internal error: Aborted (program cc1plus)
This occurs when building LLVM using LLVM built by LLVM (via
dragonegg). Probably LLVM has miscompiled itself, though it
may have miscompiled GCC and/or dragonegg itself: at this point
of the self-host build, all of GCC, LLVM and dragonegg were built
using LLVM. Unfortunately this kind of thing is extremely hard
to debug, and while I did rummage around a bit I didn't find any
smoking guns, aka obviously miscompiled code.
Found by bisection.
r96556 | evancheng | 2010-02-18 03:13:50 +0100 (Thu, 18 Feb 2010) | 5 lines
Some dag combiner goodness:
Transform br (xor (x, y)) -> br (x != y)
Transform br (xor (xor (x,y), 1)) -> br (x == y)
Also normalize (and (X, 1)) == / != 1 -> (and (X, 1)) != / == 0 to match to "test on x86" and "tst on arm"
r96640 | evancheng | 2010-02-19 01:34:39 +0100 (Fri, 19 Feb 2010) | 16 lines
Transform (xor (setcc), (setcc)) == / != 1 to
(xor (setcc), (setcc)) != / == 1.
e.g. On x86_64
%0 = icmp eq i32 %x, 0
%1 = icmp eq i32 %y, 0
%2 = xor i1 %1, %0
br i1 %2, label %bb, label %return
=>
testl %edi, %edi
sete %al
testl %esi, %esi
sete %cl
cmpb %al, %cl
je LBB1_2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96672 91177308-0d34-0410-b5e6-96231b3b80d8
strides in foreign loops. This helps locate reuse opportunities
with existing induction variables in foreign loops and reduces
the need for inserting new ones. This fixes rdar://7657764.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96629 91177308-0d34-0410-b5e6-96231b3b80d8
which is not always true if the mask contains undefs. Modified it to return
the first non-undef value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96621 91177308-0d34-0410-b5e6-96231b3b80d8
Moderate the weight given to very small intervals.
The spill weight given to new intervals created when spilling was not
normalized in the same way as the original spill weights calculated by
CalcSpillWeights. That meant that restored registers would tend to hang around
because they had a much higher spill weight than unspilled registers.
This improves the runtime of a few tests by up to 10%, and there are no
significant regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96613 91177308-0d34-0410-b5e6-96231b3b80d8
checking whether AnalyzeBranch disagrees with the CFG
directly, rather than looking for EH_LABEL instructions.
EH_LABEL instructions aren't always at the end of the
block, due to FP_REG_KILL and other things. This fixes
an infinite loop compiling MultiSource/Benchmarks/Bullet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96611 91177308-0d34-0410-b5e6-96231b3b80d8
into a roundss intrinsic, producing a cyclic dag. The root cause
of this is badness in the handling of ComplexPattern nodes in the old
dagisel, which I noticed through inspection. Eliminate a copy of the
code that handled ComplexPatterns by making EmitChildMatchCode call
into EmitMatchCode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96408 91177308-0d34-0410-b5e6-96231b3b80d8
If there exists a use of a build_vector that's the bitwise complement of the mask,
then transform the node to
(and (xor x, (build_vector -1,-1,-1,-1)), (build_vector ~c1,~c2,~c3,~c4)).
Since this transformation is only useful when 1) the given build_vector will
become a load from constpool, and 2) (and (xor x, -1), y) matches to a single
instruction, I decided this is appropriate as an x86-specific transformation.
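A hedged, purely illustrative sketch (whether this exact source triggers
the combine is an assumption): both the constant mask and its bitwise
complement appear as constant vectors, which is the precondition above:
  #include <emmintrin.h>
  __m128i mask_blend(__m128i x, __m128i y) {
    __m128i c  = _mm_set1_epi32(0x00ff00ff);
    __m128i nc = _mm_set1_epi32(~0x00ff00ff);  /* bitwise complement of c */
    return _mm_or_si128(_mm_and_si128(x, c), _mm_and_si128(y, nc));
  }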
rdar://7323335
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96389 91177308-0d34-0410-b5e6-96231b3b80d8
non-temporal. Fix from r96241 for botched encoding of MOVNTDQ.
Add documentation for !nontemporal metadata.
Add a simpler movnt testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96386 91177308-0d34-0410-b5e6-96231b3b80d8
branch in ARM v4 code, since it gets clobbered by the return address before
it is used. Instead of adding a new register class containing all the GPRs
except LR, just use the existing tGPR class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96360 91177308-0d34-0410-b5e6-96231b3b80d8
as it also peeks at which registers are being used by other uses. This
makes LSR less sensitive to use-list order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96308 91177308-0d34-0410-b5e6-96231b3b80d8
A virtual register can be used before it is defined in the same MBB if the MBB
is part of a loop. Teach the implicit-def pass about this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96279 91177308-0d34-0410-b5e6-96231b3b80d8
When coalescing with a physreg, remember to add imp-def and imp-kill when
dealing with sub-registers.
Also fix a related bug in VirtRegRewriter where substitutePhysReg may
reallocate the operand list on an instruction and invalidate the reg_iterator.
This can happen when a register is mentioned twice on the same instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96072 91177308-0d34-0410-b5e6-96231b3b80d8
phi cycles. Adjust a few tests to keep dead instructions from being optimized
away. This (together with my previous change for phi cycles) fixes Apple
radar 7627077.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96057 91177308-0d34-0410-b5e6-96231b3b80d8
stack frame, the prolog/epilog code was using the same
register for the copy of CR and the address of the save slot. Oops.
This is fixed here for Darwin, sort of, by reserving R2 for this case.
A better way would be to do the store before the decrement of SP,
which is safe on Darwin due to the red zone.
SVR4 probably has the same problem, but I don't know how to fix it;
there is no red zone and R2 is already used for something else.
I'm going to leave it to someone interested in that target.
Better still would be to rewrite the CR-saving code completely;
spilling each CR subregister individually is horrible code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96015 91177308-0d34-0410-b5e6-96231b3b80d8
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.
This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95975 91177308-0d34-0410-b5e6-96231b3b80d8
reduce down to a single value. InstCombine already does this transformation
but DAG legalization may introduce new opportunities. This has turned out to
be important for ARM where 64-bit values are split up during type legalization:
InstCombine is not able to remove the PHI cycles on the 64-bit values but
the separate 32-bit values can be optimized. I measured the compile time
impact of this (running llc on 176.gcc) and it was not significant.
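A hedged, hypothetical illustration of a PHI cycle that reduces to a
single value (at the source level InstCombine already cleans this up; on
ARM the same shape reappears for the split 32-bit halves after type
legalization):
  unsigned long long swap_loop(unsigned long long x, int n) {
    unsigned long long a = x, b = x;
    for (int i = 0; i < n; ++i) {
      unsigned long long t = a;  /* the PHIs for a and b form a cycle */
      a = b;                     /* that only ever carries the value x */
      b = t;
    }
    return a;
  }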
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95951 91177308-0d34-0410-b5e6-96231b3b80d8
lowering and requires that certain types exist in ValueTypes.h. Modified widening to
check if an op can trap; if so, the widening algorithm will apply the op only to
the defined elements. It is safer to do this in widening because the optimizer can't
guarantee removing unused ops in some cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95823 91177308-0d34-0410-b5e6-96231b3b80d8
legalization even when the IR-level optimizer has removed dead phis, such
as when the high half of an i64 value is unused on a 32-bit target.
I had to adjust a few test cases that had dead phis.
This is a partial fix for Radar 7627077.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95816 91177308-0d34-0410-b5e6-96231b3b80d8