Commit Graph

259 Commits

Bill Wendling
f865ea85bd Revert r79127. It was causing compilation errors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79135 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-15 21:14:01 +00:00
Evan Cheng
088880cb19 Change allowsUnalignedMemoryAccesses to take a type argument, since some targets
support unaligned memory access only for certain types. (Should it be size
instead?)

ARM v7 supports unaligned access for i16 and i32, some v6 variants support it
as well.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79127 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-15 19:23:44 +00:00
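
A minimal sketch of the changed hook described above. The class shape follows the commit's names, but the stubbed EVT enum, the default return value, and the ARMv7-style override are illustrative assumptions, not the real LLVM code:

    // Stub of the value-type tag, for illustration only.
    enum EVT { i8, i16, i32, i64, Other };

    class TargetLowering {
    public:
      // After this change the hook receives the access type, so a target
      // such as ARMv7 can allow unaligned i16/i32 accesses while still
      // rejecting other types.
      virtual bool allowsUnalignedMemoryAccesses(EVT VT) const {
        return false;  // conservative default (assumed here)
      }
      virtual ~TargetLowering() {}
    };

    // Hypothetical ARMv7-style override:
    class ARMv7Lowering : public TargetLowering {
      bool allowsUnalignedMemoryAccesses(EVT VT) const override {
        return VT == i16 || VT == i32;
      }
    };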
Dan Gohman
d6708eade0 On x86-64, for a varargs function, don't store the xmm registers to
the register save area if %al is 0. This avoids touching xmm
registers when they aren't actually used.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79061 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-15 01:38:56 +00:00
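
For context, a small example of the guard this enables (function names are illustrative): under the x86-64 SysV ABI the caller places the number of vector registers used by a varargs call in %al, so a call passing no floating-point arguments sets %al = 0 and the callee's prologue can branch over the xmm save-area stores entirely.

    #include <cstdarg>

    // Varargs callee: its prologue only needs to spill xmm0-xmm7 to the
    // register save area when %al says vector registers were actually used.
    int sum_ints(int n, ...) {
      va_list ap;
      va_start(ap, n);
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += va_arg(ap, int);
      va_end(ap);
      return s;
    }

    int main() {
      // No FP arguments: the caller sets %al = 0 before the call, so with
      // this change the callee never touches the xmm registers.
      return sum_ints(3, 1, 2, 3);
    }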
Owen Anderson
825b72b057 Split EVT into MVT and EVT, the former representing _just_ a primitive type, while
the latter is capable of representing either a primitive or an extended type.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78713 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-11 20:47:22 +00:00
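
A much-simplified sketch of the resulting shape; the field and helper names here are assumptions for illustration, not the actual LLVM definitions:

    // MVT: only a primitive (simple) value type -- essentially an enum wrapper.
    struct MVT {
      int SimpleTy;  // a SimpleValueType enum value
    };

    // EVT: either a primitive type (a valid MVT) or an extended IR type.
    struct EVT {
      MVT V;               // meaningful when LLVMTy is null
      const void *LLVMTy;  // non-null only for extended (non-simple) types
      bool isSimple() const { return LLVMTy == 0; }
    };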
Owen Anderson
e50ed30282 Rename MVT to EVT, in preparation for splitting SimpleValueType out into its own struct type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78610 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-10 22:56:29 +00:00
Owen Anderson
77547befdc Start moving TargetLowering away from using full MVTs and towards SimpleValueType, which will simplify the privatization of IntegerType in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78584 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-10 18:56:59 +00:00
Anton Korobeynikov
b5e0172405 Better handle the kernel code model. Also, generalize things and fix one
subtle bug with the small code model.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78255 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-05 23:01:26 +00:00
Dan Gohman
98ca4f2a32 Major calling convention code refactoring.
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerRet, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.

This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between the target-independent portion and the target-dependent portion
in IsEligibleForTailCallOptimization.

This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78142 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-05 01:29:28 +00:00
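
A rough sketch of the three hooks named above, with parameter lists heavily simplified as an assumption; the real signatures also carry argument/return-value descriptors, calling-convention enums, and debug locations:

    class SDValue;
    class SelectionDAG;

    class TargetLowering {
    public:
      // Lower the incoming (formal) arguments of the function being compiled.
      virtual SDValue LowerFormalArguments(SDValue Chain, SelectionDAG &DAG);
      // Lower an outgoing call site, including copies into argument registers.
      virtual SDValue LowerCall(SDValue Chain, SDValue Callee,
                                SelectionDAG &DAG);
      // Lower the return from the function being compiled.
      virtual SDValue LowerRet(SDValue Chain, SelectionDAG &DAG);
    };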
Dan Gohman
a9c1dd7820 Fix typos in comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77806 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-01 21:25:00 +00:00
Evan Cheng
37b7387da9 Optimize some common usage patterns of the atomic built-ins __sync_add_and_fetch() and __sync_sub_and_fetch().
When the return value is not used (i.e. we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, and dec instructions with the "lock" prefix.

This is currently implemented using a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant. The DAG combiner can then transform the node before it gets to target node selection.

Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to add target-specific information to target nodes and have this information carried over to machine instructions. The asm printer (or JIT) can use this information to add the "lock" prefix.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77582 91177308-0d34-0410-b5e6-96231b3b80d8
2009-07-30 08:33:02 +00:00
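
An example of the usage pattern being optimized: the builtin's result is discarded, so only the memory update matters. The expected instructions are noted as comments:

    #include <stdint.h>

    void bump(volatile int32_t *counter) {
      __sync_add_and_fetch(counter, 1);  // result unused: can be "lock incl"
      __sync_sub_and_fetch(counter, 4);  // result unused: can be "lock subl $4"
    }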
Eric Christopher
71c6753d03 Add support for gcc __builtin_ia32_ptest{z,c,nzc} intrinsics. Lower
to ptest instruction plus setcc. Revamp ptest instruction. Add test.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77407 91177308-0d34-0410-b5e6-96231b3b80d8
2009-07-29 00:28:05 +00:00
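
An example of what these intrinsics express, using the SSE4.1 wrapper from <smmintrin.h> (which maps onto __builtin_ia32_ptestz128 in GCC-style compilers):

    #include <smmintrin.h>  // SSE4.1

    // Returns 1 when (a & b) is all zeros; with this change it lowers to a
    // single ptest followed by a setcc.
    int all_zero(__m128i a, __m128i b) {
      return _mm_testz_si128(a, b);
    }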
Chris Lattner
b810565152 Copy ExpandInlineAsm to TargetLowering from TargetAsmInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76441 91177308-0d34-0410-b5e6-96231b3b80d8
2009-07-20 17:51:36 +00:00
Chris Lattner
951bf7d74f change a few methods to be static functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75089 91177308-0d34-0410-b5e6-96231b3b80d8
2009-07-09 02:44:11 +00:00
Bill Wendling
b4202b84d7 Update comments to make it clear that the function alignment is the Log2 of the
alignment in bytes, not the byte count itself.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74624 91177308-0d34-0410-b5e6-96231b3b80d8
2009-07-01 18:50:55 +00:00
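
A one-line illustration of the convention the comments now document:

    // The stored function alignment is log2 of the byte alignment:
    unsigned LogAlign = 4;                   // value kept in the field
    unsigned AlignInBytes = 1u << LogAlign;  // 16-byte alignment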
Bill Wendling
20c568f366 Add an "alignment" field to the MachineFunction object. It makes more sense to
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.

This allows for future work that would allow for precise no-op placement and the
like.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74564 91177308-0d34-0410-b5e6-96231b3b80d8
2009-06-30 22:38:32 +00:00
Devang Patel
578efa920a Add new function attribute - noimplicitfloat
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72959 91177308-0d34-0410-b5e6-96231b3b80d8
2009-06-05 21:57:13 +00:00
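
A hedged sketch of the consumer side; the API names below are approximations for this era of the code base and should be treated as assumptions:

    // Before: a global NoImplicitFloat target option gated implicit FP use.
    // After: each function is checked for the attribute, e.g.:
    //
    //   bool NoImplicitFloatOps =
    //       MF.getFunction()->hasFnAttr(Attribute::NoImplicitFloat);
    //   if (NoImplicitFloatOps)
    //     return SDValue();  // don't synthesize SSE ops for this function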
Dale Johannesen
874ae251c3 Revert 72707 and 72709, for the moment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72712 91177308-0d34-0410-b5e6-96231b3b80d8
2009-06-02 03:12:52 +00:00
Dale Johannesen
4150d83abe Make the implicit inputs and outputs of target-independent
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag.  Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.

Most targets will still produce a Flag-setting target-dependent
version when selection is done.  X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc.  This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted.  All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly.  The
same can be done on other targets.

The immediate behavior change is that an ADC/ADD pair is no
longer tightly coupled in the X86 scheduler; they can be
separated by instructions that don't clobber the flags (e.g. MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72707 91177308-0d34-0410-b5e6-96231b3b80d8
2009-06-01 23:27:20 +00:00
Evan Cheng
8b944d39b3 Added an optimization that narrows load / op / store sequences where the 'op' is a bit-twiddling instruction whose second operand is an immediate. If the bits touched by the 'op' can be handled by a narrower instruction, reduce the width of the load and store as well. This happens a lot in bitfield manipulation code.
e.g.
orl     $65536, 8(%rax)
=>
orb     $1, 10(%rax)

Since narrowing is not always a win, e.g. i32 -> i16 is a loss on x86, the DAG combiner consults the target before performing the optimization.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72507 91177308-0d34-0410-b5e6-96231b3b80d8
2009-05-28 00:35:15 +00:00
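
A quick host-side check of the worked example above, assuming a little-endian target as in the x86 case: 65536 == 1 << 16, so within the dword at 8(%rax) only bit 16 is set, which is bit 0 of the byte at 10(%rax).

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    int main() {
      uint32_t w = 0;
      w |= 65536;        // the wide form: orl $65536
      unsigned char b[4];
      memcpy(b, &w, 4);  // little-endian byte order assumed
      assert(b[2] == 1); // byte at +2, i.e. 10(%rax): orb $1
      return 0;
    }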
Eli Friedman
948e95a381 Make the x86 backend custom-lower UINT_TO_FP and FP_TO_UINT on 32-bit
systems instead of attempting to promote them to a 64-bit SINT_TO_FP or 
FP_TO_SINT.  This is in preparation for removing the type legalization 
code from LegalizeDAG: once type legalization is gone from LegalizeDAG, 
it won't be able to handle the i64 operand/result correctly.

This isn't quite ideal, but I don't think any other operation for any 
target ends up in this situation, so treating this case specially seems 
reasonable.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72324 91177308-0d34-0410-b5e6-96231b3b80d8
2009-05-23 09:59:16 +00:00
Nate Begeman
5a5ca1519e Implement review feedback for vector shuffle work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70372 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-29 05:20:52 +00:00
Nate Begeman
9008ca6b6b 2nd attempt, fixing SSE4.1 issues and implementing feedback from Duncan.
PR2957

ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70225 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-27 18:41:29 +00:00
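
A hedged sketch of building a shuffle under the new representation; the call's signature is simplified here, and DAG, VT, V1 and V2 are assumed to be in scope as in typical lowering code:

    // -1 in the mask marks an undef lane; indices >= 4 select from V2.
    int Mask[4] = {0, 4, -1, 5};   // {V1[0], V2[0], undef, V2[1]}
    SDValue Shuf = DAG.getVectorShuffle(VT, V1, V2, Mask);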
Rafael Espindola
15684b2955 Revert 69952. Causes testsuite failures on linux x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69967 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-24 12:40:33 +00:00
Nate Begeman
b706d29f9c PR2957
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@69952 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-24 03:42:54 +00:00
Rafael Espindola
094fad37b9 Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68645 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-08 21:14:34 +00:00
Dan Gohman
97121ba2af Implement support for modeling implicit zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68576 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-08 00:15:30 +00:00
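
A minimal sketch of the new hook, assuming a signature close to isTruncateFree's; the x86-64 answer reflects the fact that every 32-bit operation implicitly zeros bits 63:32 of its destination register:

    class Type;  // IR type, stubbed for illustration

    class TargetLowering {
    public:
      // Return true if zero-extending a value from FromTy to ToTy is free,
      // i.e. the defining instruction already produces a zero-extended value.
      virtual bool isZExtFree(const Type *FromTy, const Type *ToTy) const {
        return false;
      }
    };

    // An x86-64-style override would answer true for i32 -> i64.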
Bill Wendling
044b5344c4 Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68560 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-07 22:35:25 +00:00
Rafael Espindola
2a6411bbbd Reduce code duplication in the TLS implementation.
This introduces a small regression in generated code
quality in the case where we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68552 91177308-0d34-0410-b5e6-96231b3b80d8
2009-04-07 21:37:46 +00:00
Evan Cheng
73f24c9f0d When optimizing a mul by immediate into two (e.g. x*40 lowered as (x*5)*8, an lea plus a shift), the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68066 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-30 21:36:47 +00:00
Bill Wendling
bddc442a00 Doxygen-ify comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67727 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-26 01:46:56 +00:00
Dan Gohman
2004eb6272 Correct some comments. Operand numbers start at 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67518 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-23 15:40:10 +00:00
Chris Lattner
2b9f434908 improve comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66778 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-12 06:46:02 +00:00
Dan Gohman
3112581441 Arithmetic instructions don't set the EFLAGS OF and CF bits
the same way the "test" instruction does in overflow cases,
so eliminating the test is only safe when those bits aren't
needed, as is the case for COND_E and COND_NE, or if it
can be proven that no overflow will occur. For now, just
restrict the optimization to COND_E and COND_NE and don't
do any overflow analysis.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66318 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-07 01:58:32 +00:00
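
A small example of code that benefits: with the restriction above, only equality-style branches (COND_E/COND_NE) may reuse the flags, since the ZF produced by "subl" matches what a following "test" would compute.

    int f(int a, int b) {
      int d = a - b;  // subl already sets ZF
      if (d != 0)     // jne can use those flags; the separate
        return d;     // "testl %eax, %eax" is eliminated
      return 42;
    }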
Dan Gohman
076aee32e8 Re-apply 66008, now that the unfoldMemoryOperand bug is fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66058 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-04 19:44:21 +00:00
Dan Gohman
29582d1223 Revert r66004 for now; it's causing a variety of test failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66008 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-04 03:54:19 +00:00
Dan Gohman
12bbc52aa7 Teach the x86 backend to eliminate "test" instructions by using the EFLAGS
result from add, sub, inc, and dec instructions in simple cases.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66004 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-04 02:33:24 +00:00
Nate Begeman
b9a47b824f Generate better code for v8i16 shuffles on SSE2
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it takes fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further
  cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
	pextrw	$7, %xmm0, %eax
	punpcklqdq	%xmm1, %xmm0
	pshuflw	$128, %xmm0, %xmm0
	pinsrw	$2, %eax, %xmm0

Old:
_shuf2:
	pextrw	$2, %xmm0, %eax
	pextrw	$7, %xmm0, %ecx
	pinsrw	$2, %ecx, %xmm0
	pinsrw	$3, %eax, %xmm0
	movd	%xmm1, %eax
	pinsrw	$4, %eax, %xmm0
	ret

=========

New:
_shuf4:
	punpcklqdq	%xmm1, %xmm0
	pshufb	LCPI1_0, %xmm0

Old:
_shuf4:
	pextrw	$3, %xmm0, %eax
	movsd	%xmm1, %xmm0
	pextrw	$3, %xmm1, %ecx
	pinsrw	$4, %ecx, %xmm0
	pinsrw	$5, %eax, %xmm0

========

New:
_shuf1:
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	pextrw	$1, %xmm0, %eax
	rolw	$8, %ax
	movd	%xmm0, %ecx
	rolw	$8, %cx
	pextrw	$5, %xmm0, %edx
	pextrw	$4, %xmm0, %esi
	pextrw	$3, %xmm0, %edi
	pextrw	$2, %xmm0, %ebx
	movaps	%xmm0, %xmm1
	pinsrw	$0, %ecx, %xmm1
	pinsrw	$1, %eax, %xmm1
	rolw	$8, %bx
	pinsrw	$2, %ebx, %xmm1
	rolw	$8, %di
	pinsrw	$3, %edi, %xmm1
	rolw	$8, %si
	pinsrw	$4, %esi, %xmm1
	rolw	$8, %dx
	pinsrw	$5, %edx, %xmm1
	pextrw	$7, %xmm0, %eax
	rolw	$8, %ax
	movaps	%xmm1, %xmm0
	pinsrw	$7, %eax, %xmm0
	popl	%esi
	popl	%edi
	popl	%ebx
	ret

Old:
_shuf1:
	subl	$252, %esp
	movaps	%xmm0, (%esp)
	movaps	%xmm0, 16(%esp)
	movaps	%xmm0, 32(%esp)
	movaps	%xmm0, 48(%esp)
	movaps	%xmm0, 64(%esp)
	movaps	%xmm0, 80(%esp)
	movaps	%xmm0, 96(%esp)
	movaps	%xmm0, 224(%esp)
	movaps	%xmm0, 208(%esp)
	movaps	%xmm0, 192(%esp)
	movaps	%xmm0, 176(%esp)
	movaps	%xmm0, 160(%esp)
	movaps	%xmm0, 144(%esp)
	movaps	%xmm0, 128(%esp)
	movaps	%xmm0, 112(%esp)
	movzbl	14(%esp), %eax
	movd	%eax, %xmm1
	movzbl	22(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	42(%esp), %eax
	movd	%eax, %xmm1
	movzbl	50(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm1, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	77(%esp), %eax
	movd	%eax, %xmm1
	movzbl	84(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm1, %xmm2
	movzbl	104(%esp), %eax
	movd	%eax, %xmm1
	punpcklbw	%xmm1, %xmm0
	punpcklbw	%xmm2, %xmm0
	movaps	%xmm0, %xmm1
	punpcklbw	%xmm3, %xmm1
	movzbl	127(%esp), %eax
	movd	%eax, %xmm0
	movzbl	135(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	155(%esp), %eax
	movd	%eax, %xmm0
	movzbl	163(%esp), %eax
	movd	%eax, %xmm3
	punpcklbw	%xmm0, %xmm3
	punpcklbw	%xmm2, %xmm3
	movzbl	188(%esp), %eax
	movd	%eax, %xmm0
	movzbl	197(%esp), %eax
	movd	%eax, %xmm2
	punpcklbw	%xmm0, %xmm2
	movzbl	217(%esp), %eax
	movd	%eax, %xmm4
	movzbl	225(%esp), %eax
	movd	%eax, %xmm0
	punpcklbw	%xmm4, %xmm0
	punpcklbw	%xmm2, %xmm0
	punpcklbw	%xmm3, %xmm0
	punpcklbw	%xmm1, %xmm0
	addl	$252, %esp
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@65311 91177308-0d34-0410-b5e6-96231b3b80d8
2009-02-23 08:49:38 +00:00
Dan Gohman
1fdbc1dd4e Constify TargetInstrInfo::EmitInstrWithCustomInserter, allowing
ScheduleDAG's TLI member to be const.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64018 91177308-0d34-0410-b5e6-96231b3b80d8
2009-02-07 16:15:20 +00:00
Dale Johannesen
33c960f523 Remove non-DebugLoc versions of getLoad and getStore.
Adjust the many callers of those versions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63767 91177308-0d34-0410-b5e6-96231b3b80d8
2009-02-04 20:06:27 +00:00
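
A sketch of the surviving call forms; the argument lists are simplified, and DAG, Chain, Ptr, Val, VT and the source-value info (SV, SVOffset) are assumed to be in scope as in typical lowering code of this era. 'dl' is the DebugLoc that callers now must supply:

    SDValue Load  = DAG.getLoad(VT, dl, Chain, Ptr, SV, SVOffset);
    SDValue Store = DAG.getStore(Chain, dl, Val, Ptr, SV, SVOffset);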
Dale Johannesen
eacf2dc4bb Need this file too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63674 91177308-0d34-0410-b5e6-96231b3b80d8
2009-02-03 22:26:34 +00:00
Dale Johannesen
ace1610df5 DebugLoc propagation. 2/3 through file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63650 91177308-0d34-0410-b5e6-96231b3b80d8
2009-02-03 19:33:06 +00:00
Nate Begeman
9b99485074 Fix an indent and a typo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62940 91177308-0d34-0410-b5e6-96231b3b80d8
2009-01-24 22:12:48 +00:00
Bill Wendling
8b8a636843 Implement a special algorithm for converting uint_to_fp for i32 values on
X86. This code:

#include <stdint.h>

float f(uint32_t x) {
  return (float)x;
}

used to be:

     movl     %eax, -8(%ebp)
     movl     [2^52 double], -4(%ebp)
     movsd    -8(%ebp), %xmm0
     subsd    [2^52 double], %xmm0
     cvtsd2ss %xmm0, %xmm0

Is now:

   movsd        [2^52 double], %xmm0
   movsd        %xmm0, %xmm1
   movd         %ecx, %xmm2
   orps         %xmm2, %xmm1
   subsd        %xmm0, %xmm1
   cvtsd2ss     %xmm1, %xmm0

This is faster on X86. Note that there's an extra copy of %xmm0 into %xmm1; that
will be fixed in a later coalescer fix.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62404 91177308-0d34-0410-b5e6-96231b3b80d8
2009-01-17 03:56:04 +00:00
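
For reference, a standalone C++ rendering of the trick the new lowering uses (assumes IEEE-754 doubles and a little-endian host; the function name is illustrative): OR the 32-bit value into the low mantissa bits of 2^52, then subtract 2^52, leaving exactly (double)x.

    #include <stdint.h>
    #include <string.h>

    float u32_to_float(uint32_t x) {
      // 0x4330000000000000 is the bit pattern of 2^52; its 52 mantissa bits
      // are zero, so OR-ing x in yields the double value 2^52 + x exactly.
      uint64_t bits = 0x4330000000000000ULL | x;
      double d;
      memcpy(&d, &bits, sizeof d);
      return (float)(d - 4503599627370496.0);  // subtract 2^52
    }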
Dan Gohman
c13cf130c4 Make getWidenVectorType const.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62265 91177308-0d34-0410-b5e6-96231b3b80d8
2009-01-15 17:34:08 +00:00
Devang Patel
83489bb770 Use DebugInfo interface to lower dbg_* intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62127 91177308-0d34-0410-b5e6-96231b3b80d8
2009-01-13 00:35:13 +00:00
Duncan Sands
5480c0469e Fix PR3274: when promoting the condition of a BRCOND node,
promote from i1 all the way up to the canonical SetCC type.
In order to discover an appropriate type to use, pass
MVT::Other to getSetCCResultType.  In order to be able to
do this, change getSetCCResultType to take a type as an
argument, not a value (this is also more logical).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61542 91177308-0d34-0410-b5e6-96231b3b80d8
2009-01-01 15:52:00 +00:00
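
A sketch of the changed hook, with signatures approximated for this era as an assumption:

    // Before: the hook took a sample value.
    //   virtual MVT getSetCCResultType(SDValue V) const;
    // After: it takes a type, and callers with no operand type to offer
    // (such as BRCOND promotion, which has only the i1 condition in hand)
    // pass MVT::Other to get the canonical SetCC type:
    //   virtual MVT getSetCCResultType(MVT VT) const;
    //   MVT CondVT = TLI.getSetCCResultType(MVT::Other);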
Dan Gohman
c7a37d4ff2 Add instruction patterns and encodings for the x86 bt instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61400 91177308-0d34-0410-b5e6-96231b3b80d8
2008-12-23 22:45:23 +00:00
Mon P Wang
af9b952627 Fixed x86 code generation of multiply for v2i64. It was incorrect for SSE4.1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61211 91177308-0d34-0410-b5e6-96231b3b80d8
2008-12-18 21:42:19 +00:00
Bill Wendling
d350e02e19 - Use patterns instead of creating completely new instruction matching patterns,
  since those are identical to the original patterns.

- Change the multiply with overflow so that we distinguish between signed and
  unsigned multiplication. Currently, unsigned multiplication with overflow
  isn't working!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60963 91177308-0d34-0410-b5e6-96231b3b80d8
2008-12-12 21:15:41 +00:00
Bill Wendling
ab55ebda1c Redo the arithmetic with overflow architecture. I was changing the semantics of
ISD::ADD to emit an implicit EFLAGS. This was horribly broken. Instead, replace
the intrinsic with an ISD::SADDO node. Then custom lower that into an
X86ISD::ADD node with an associated SETCC that checks the correct condition code
(overflow or carry). Then that gets lowered into the correct X86::ADDOvf
instruction.

Similar for SUB and MUL instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60915 91177308-0d34-0410-b5e6-96231b3b80d8
2008-12-12 00:56:36 +00:00
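
An example of source that reaches this path. The __builtin_sadd_overflow spelling shown here is the modern one and is an assumption for illustration; front ends of this era emitted the llvm.sadd.with.overflow intrinsic directly, which is what gets custom lowered to X86ISD::ADD plus a SETCC on the overflow condition.

    // Returns true when a + b overflows; *sum receives the wrapped result.
    bool add_checked(int a, int b, int *sum) {
      return __builtin_sadd_overflow(a, b, sum);
    }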