instruction for non-constant operands. This includes the case referenced
in the README.txt regarding a bitfield copy.
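A small C sketch of the kind of bit-field copy meant here (the struct layout
and names are invented, not taken from README.txt):

struct bits { unsigned a : 4, b : 4; };

/* Copying one bit-field into another: the value being inserted is not a
   constant, which is the non-constant-operand case handled here. */
void copy_field(struct bits *d, const struct bits *s) {
  d->b = s->b;
}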
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108608 91177308-0d34-0410-b5e6-96231b3b80d8
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.
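As a rough illustration (field widths and names are made up), this is the
shape of store the combine is aimed at:

struct flags { unsigned mode : 3, ready : 1, count : 4; };

/* Storing a constant into a narrow bit-field; a single bit-field-insert
   style instruction can replace the usual load/and/or/store sequence. */
void mark_ready(struct flags *f) {
  f->ready = 1;
}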
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108570 91177308-0d34-0410-b5e6-96231b3b80d8
void foo() { __builtin_unreachable(); }
It will output the following on Darwin X86:
_func1:
Leh_func_begin0:
pushq %rbp
Ltmp0:
movq %rsp, %rbp
Ltmp1:
Leh_func_end0:
This prologue adds a new Call Frame Information (CFI) row to the FDE with an
address that is not within the address range of the code it describes (it is
equal to the end of the function), and therefore results in an invalid EH
frame. If we emit a nop in this situation, the CFI row falls within the
address range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108568 91177308-0d34-0410-b5e6-96231b3b80d8
pass that inserted it.
It is no longer necessary to limit the live ranges of FP registers to a single
basic block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108536 91177308-0d34-0410-b5e6-96231b3b80d8
occasions, caused code to be generated in a different order.
All cases I've seen involved float softening in the type
legalizer, and this could perhaps be fixed there, but
it's better not to generate things differently in the first
place. 7797940 (6/29/2010..7/15/2010).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108484 91177308-0d34-0410-b5e6-96231b3b80d8
it doesn't miss an opportunity to form a GEP, regardless of the
relative loop depths of the operands. This fixes rdar://8197217.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108475 91177308-0d34-0410-b5e6-96231b3b80d8
the function. We'll just turn it into a "trap" instruction instead.
The problem with not handling this is that it might generate a prologue without
the equivalent epilogue to go with it:
$ cat t.ll
define void @foo() {
entry:
  unreachable
}
$ llc -o - t.ll -relocation-model=pic -disable-fp-elim -unwind-tables
.section __TEXT,__text,regular,pure_instructions
.globl _foo
.align 4, 0x90
_foo: ## @foo
Leh_func_begin0:
## BB#0: ## %entry
pushq %rbp
Ltmp0:
movq %rsp, %rbp
Ltmp1:
Leh_func_end0:
...
The unwind tables then have bad data in them, causing all sorts of problems.
Fixes <rdar://problem/8096481>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108473 91177308-0d34-0410-b5e6-96231b3b80d8
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen fp math optimizations only care whether the arguments and results of fp arithmetic can never be NaN.
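For example (an illustrative case, not one taken from this change), the
classic NaN self-comparison only needs the no-NaNs guarantee:

/* If x can never be NaN, this comparison is always false and the whole
   function folds to a constant 0. */
int is_nan(float x) {
  return x != x;
}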
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108465 91177308-0d34-0410-b5e6-96231b3b80d8
to keep "Text" in sync with the "pure instructions" section attribute.
Lack of this attribute was preventing the assembler from emitting
multibyte nop instructions for templates (and inlines, and other
coalesced stuff) and was causing the assembler to mismatch .o files.
This fixes rdar://8018335
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108461 91177308-0d34-0410-b5e6-96231b3b80d8
a zero. This situation arises in Fortran code with induction variables
that start at 1 instead of 0. This fixes PR7651.
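A hypothetical C analogue of the Fortran pattern (the function is invented):

/* The loop counter runs 1..n rather than 0..n-1, mirroring the usual
   Fortran DO-loop convention; the analysis must not assume it is ever 0. */
int sum1(const int *a, int n) {
  int s = 0;
  for (int i = 1; i <= n; ++i)
    s += a[i - 1];
  return s;
}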
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108424 91177308-0d34-0410-b5e6-96231b3b80d8
mutated by recursive simplification. This also enhances
ReplaceAndSimplifyAllUses to actually do a real RAUW
at the end of it, which updates any value handles
pointing to "From" to start pointing to "To". This
seems useful for debug info and random other VH users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108415 91177308-0d34-0410-b5e6-96231b3b80d8
in the literal field of an instruction. E.g.,
long long foo(long long a) {
  return a - 734439407618LL;
}
rdar://7038284
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108339 91177308-0d34-0410-b5e6-96231b3b80d8
address cannot be allocated a register is in 32-bit mode where the first
three arguments are marked inreg. In that case EAX, EDX, and ECX will be
used for argument passing.
This fixes PR7610.
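A sketch of the calling convention being described, using GCC's regparm
attribute to mark the first three arguments inreg (the function itself is
made up):

/* In 32-bit mode, regparm(3) passes a, b and c in EAX, EDX and ECX;
   any further arguments, such as p, go on the stack. */
int __attribute__((regparm(3))) f(int a, int b, int c, int *p) {
  return a + b + c + *p;
}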
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108327 91177308-0d34-0410-b5e6-96231b3b80d8
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant. This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.
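A contrived C example of the shape being generalized (not taken from the
change itself):

/* The recursive call is followed by a constant return (1), while the
   base case returns a different constant (0). */
int f(int n) {
  if (n == 0)
    return 0;
  f(n - 1);
  return 1;
}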
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108241 91177308-0d34-0410-b5e6-96231b3b80d8
the LHS and RHS of an and/or instruction, don't multiply add
known predecessor values. This fixes the crash on the testcase
from PR7498.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108114 91177308-0d34-0410-b5e6-96231b3b80d8
Targets must now implement TargetInstrInfo::copyPhysReg instead. There is no
longer a default implementation forwarding to copyRegToReg.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108095 91177308-0d34-0410-b5e6-96231b3b80d8
correct alignment information, which simplifies ExpandRes_VAARG a bit.
The patch introduces new alignment information in TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:
* The 's' in target data: If this is set to the minimal alignment of any
argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
example.
* The getTransientStackAlignment method. It is possible for an architecture to
have arguments that are less aligned than the alignment we maintain for the
stack pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108072 91177308-0d34-0410-b5e6-96231b3b80d8
- Check getBytesToPopOnReturn().
- Eschew ST0 and ST1 for return values.
- Fix the PIC base register initialization so that it doesn't ever
fail to end up at the top of the entry block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108039 91177308-0d34-0410-b5e6-96231b3b80d8
notes:
- The instructions are being added with dummy placeholder patterns using some 256-bit
specifiers. This is not meant to work now, but since some of the multiclasses are
generic enough to accept them, the patterns will already be in place when we get
to codegen.
- Add VEX encoding bits to support YMM
- Add MOVUPS and MOVAPS in the first round
- Use "Y" as suffix for those Instructions: MOVUPSYrr, ...
- All AVX instructions in X86InstrSSE.td will move soon to a new X86InstrAVX
file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107996 91177308-0d34-0410-b5e6-96231b3b80d8
U utils/TableGen/FastISelEmitter.cpp
--- Reverse-merging r107943 into '.':
U test/CodeGen/X86/fast-isel.ll
U test/CodeGen/X86/fast-isel-loads.ll
U include/llvm/Target/TargetLowering.h
U include/llvm/Support/PassNameParser.h
U include/llvm/CodeGen/FunctionLoweringInfo.h
U include/llvm/CodeGen/CallingConvLower.h
U include/llvm/CodeGen/FastISel.h
U include/llvm/CodeGen/SelectionDAGISel.h
U lib/CodeGen/LLVMTargetMachine.cpp
U lib/CodeGen/CallingConvLower.cpp
U lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
U lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
U lib/CodeGen/SelectionDAG/FastISel.cpp
U lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
U lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
U lib/CodeGen/SelectionDAG/InstrEmitter.cpp
U lib/CodeGen/SelectionDAG/TargetLowering.cpp
U lib/Target/XCore/XCoreISelLowering.cpp
U lib/Target/XCore/XCoreISelLowering.h
U lib/Target/X86/X86ISelLowering.cpp
U lib/Target/X86/X86FastISel.cpp
U lib/Target/X86/X86ISelLowering.h
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107987 91177308-0d34-0410-b5e6-96231b3b80d8
disabled and then never turned back on again. Adjust some tests, one because
this change avoids an unnecessary instruction, and the other to make it
continue testing what it was intended to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107941 91177308-0d34-0410-b5e6-96231b3b80d8
in memory operands the same way as hard-coded segments.
This fixes problems where we'd emit the segment override after
the REX prefix on instructions like:
mov %gs:(%rdi), %rax
This fixes rdar://8127102. I have several cleanup patches coming
next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107917 91177308-0d34-0410-b5e6-96231b3b80d8
(X >s -1) ? C1 : C2 and (X <s 0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.
This optimization could be extended to take non-constant C1 and C2, but we had
better stay conservative for now to avoid code-size bloat.
for
int sel(int n) {
  return n >= 0 ? 60 : 100;
}
we now generate
sarl $31, %edi
andl $40, %edi
leal 60(%rdi), %eax
instead of
testl %edi, %edi
movl $60, %ecx
movl $100, %eax
cmovnsl %ecx, %eax
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107866 91177308-0d34-0410-b5e6-96231b3b80d8
1. The arguments are f32.
2. The arguments are loads and they have no uses other than the comparison.
3. The comparison code is EQ or NE.
e.g.
vldr.32 s0, [r1]
vldr.32 s1, [r0]
vcmpe.f32 s1, s0
vmrs apsr_nzcv, fpscr
beq LBB0_2
=>
ldr r1, [r1]
ldr r0, [r0]
cmp r0, r1
beq LBB0_2
More complicated cases will be implemented in subsequent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107852 91177308-0d34-0410-b5e6-96231b3b80d8
Add explicit testcases for tail calls within the same module.
Duplicate some code to humor those who think .w doesn't apply on ARM.
Leave this disabled on Thumb1, and add some comments explaining why it's hard
and won't gain much.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107851 91177308-0d34-0410-b5e6-96231b3b80d8
interface needs implementations to be consistent, so any code which
wants to support different semantics must use a different interface.
It's not currently worthwhile to add a new interface for this new
concept.
Document that AliasAnalysis doesn't support cross-function queries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107776 91177308-0d34-0410-b5e6-96231b3b80d8
It is OK for an alias live range to overlap if there is a copy to or from the
physical register. CoalescerPair can work out if the copy is coalescable
independently of the alias.
This means that we can join with the actual destination interval instead of
using the getOrigDstReg() hack. It is no longer necessary to merge clobber
ranges into subregisters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107695 91177308-0d34-0410-b5e6-96231b3b80d8
the example in the testcase, we now generate:
_test1: ## @test1
movss 4(%esp), %xmm0
addss 8(%esp), %xmm0
movl 12(%esp), %eax
movss %xmm0, (%eax)
ret
instead of:
_test1: ## @test1
subl $20, %esp
movl 24(%esp), %eax
movq %mm0, (%esp)
movq %mm0, 8(%esp)
movss (%esp), %xmm0
addss 12(%esp), %xmm0
movss %xmm0, (%eax)
addl $20, %esp
ret
v2f32 support did not work reliably because most of the X86
backend didn't know it was legal. It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode. If ABI compatibility is important on this
GCC-extended-vector type for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32. However,
we generally don't try very hard to be ABI compatible on GCC
extended vectors.
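For reference, the GCC extended-vector type in question looks roughly like
this in C (the typedef and function names are illustrative):

/* A two-element float vector via GCC's vector extension; returning such
   a value is what previously relied on v2f32 being legal on x86-32. */
typedef float v2f32 __attribute__((vector_size(8)));

v2f32 add2(v2f32 a, v2f32 b) {
  return a + b;
}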
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107601 91177308-0d34-0410-b5e6-96231b3b80d8
v2f32 as legal in 32-bit mode. It is just as terrible there,
but I just care about x86-64 and no one claims it is valuable
in 64-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107600 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix the VEX prefix to be emitted with 3 bytes whenever VEX_5M
represents a REX-equivalent two-byte leading opcode
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107523 91177308-0d34-0410-b5e6-96231b3b80d8
- X86 unfolding should check whether the instruction being unfolded has memoperands.
If there are no memoperands, then it must assume conservative alignment. If this
would introduce an expensive SSE unaligned load / store, then unfoldMemoryOperand
etc. should not unfold the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107509 91177308-0d34-0410-b5e6-96231b3b80d8
PrologEpilog code, and use it to determine whether
the asm forces stack alignment or not. gcc consistently
does not do this for GCC-style asms; Apple gcc does it
inconsistently for asm blocks. There is no
convenient place to put a bit in either the SDNode or
the MachineInstr form, so I've added an extra operand
to each; unlovely, but it does allow for expansion for
more bits, should we need it. PR 5125. Some
existing testcases are affected.
The operand lists of the SDNode and MachineInstr forms
are indexed with awesome mnemonics, like "2"; I may
fix this someday, but not now. I'm not making it any
worse. If anyone is inspired I think you can find all
the right places from this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107506 91177308-0d34-0410-b5e6-96231b3b80d8
getFunctionAlignment and the corresponding use of that value in the ARM
asm printer, but now we're using the standard asm printer. The result of
this was that function alignments were dropped completely for Thumb functions.
Radar 8143571.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107435 91177308-0d34-0410-b5e6-96231b3b80d8
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.
For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
Currently only supported on Darwin platforms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107433 91177308-0d34-0410-b5e6-96231b3b80d8
such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.
Add new special pass to remove debug info for such symbols.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107416 91177308-0d34-0410-b5e6-96231b3b80d8
available in a register. This is pretty primitive, but it reduces the
number of instructions in common testcases by 4%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107380 91177308-0d34-0410-b5e6-96231b3b80d8
- Add encode bits for VEX_W
- All 128-bit SSE 1 & SSE2 instructions that are described
in the .td file now have an AVX-encoded form already working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@107365 91177308-0d34-0410-b5e6-96231b3b80d8