Commit Graph

4148 Commits

Evan Cheng
d54f2d571d i128 shift libcalls are not available on x86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68133 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 19:38:51 +00:00
Dan Gohman
968dc7a207 Reapply 68073, with fixes. EH Landing-pad basic blocks are not
entered via fall-through. Don't miss fallthroughs from blocks
terminated by conditional branches. Also, move
isOnlyReachableByFallthrough out of line.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68129 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 18:39:13 +00:00
Rafael Espindola
523249f856 remove unused arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68109 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 16:16:57 +00:00
Bill Wendling
df4881c68a Really temporarily revert r68073.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68100 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 08:42:40 +00:00
Bill Wendling
e67f5e4273 Oy! When reverting r68073, I added in experimental code. Sorry...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68099 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 08:41:31 +00:00
Bill Wendling
8fe00540fc Revert r68073. It's causing a failure in the Apple-style builds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68092 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 08:26:26 +00:00
Evan Cheng
4d95232469 X86 address mode isel tweak. If the base of the address is also used by a CopyToReg (i.e. it's likely live-out), do not fold the sub-expressions into the addressing mode, to avoid computing the address twice. The CopyToReg use will be isel'ed to an LEA; re-use it for the address instead.
This is not yet enabled.
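A rough sketch of the intent (registers and the address expression here are illustrative, not taken from the patch):

        leaq    (%rdi,%rdi,2), %rax     # the address is also live-out, so compute it once with an LEA
        movl    (%rax), %ecx            # re-use the LEA result as the load's base

rather than folding the expression and computing the address twice:

        movl    (%rdi,%rdi,2), %ecx
        leaq    (%rdi,%rdi,2), %rax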


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68082 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-31 01:13:53 +00:00
Dan Gohman
80c93e7442 Except in asm-verbose mode, avoid printing labels for blocks that are
only reachable via fall-through edges. This dramatically reduces the
number of labels printed, and thus also the number of labels the
assembler must parse and remember.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68073 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-30 22:55:17 +00:00
Evan Cheng
73f24c9f0d When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
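For illustration (a hypothetical constant, not from the commit), a multiply by 45 can be split into two multiplies that each map onto an LEA:

        leaq    (%rdi,%rdi,8), %rax     # x*9
        leaq    (%rax,%rax,4), %rax     # (x*9)*5 = x*45

The x86-specific node keeps the DAG combiner from, for example, folding the two back into a single mul.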
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68066 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-30 21:36:47 +00:00
Anton Korobeynikov
fca82deecb Do not propagate ELF-specific stuff (data.rel) into other targets. This simplifies code and also ensures correctness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68032 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-30 15:27:43 +00:00
Anton Korobeynikov
71a7c6cde0 Add data.rel stuff
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68031 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-30 15:27:03 +00:00
Rafael Espindola
a0a4f07fb6 Use array_lengthof
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67950 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-28 19:02:18 +00:00
Rafael Espindola
da945e3bb2 Have only one definition of X86AddrNumOperands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67949 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-28 18:55:31 +00:00
Rafael Espindola
b449a68146 Make the code a bit less brittle by not hardcoding the number
of operands in an address in so many places.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67945 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-28 17:03:24 +00:00
Evan Cheng
0b0cd9113a Optimize some 64-bit multiplications by constants into two LEAs or one LEA + SHL, since imulq is slow (latency 5). e.g.
x * 40
=>
shlq    $3, %rdi
leaq    (%rdi,%rdi,4), %rax

This has the added benefit of allowing more multiplies to be folded into addressing modes. e.g.
a * 24 + b
=>
leaq    (%rdi,%rdi,2), %rax
leaq    (%rsi,%rax,8), %rax


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67917 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-28 05:57:29 +00:00
Rafael Espindola
705d800879 Avoid hardcoding that X86 addresses have 4 operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67848 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-27 15:57:50 +00:00
Rafael Espindola
e4d5d34cfc Use less hard coded constants to make the code less brittle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67846 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-27 15:45:05 +00:00
Rafael Espindola
a82dfca8c6 I am trying to add a segment to the X86 address matching to
improve TLS support (see http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20090309/075220.html), but that code is VERY brittle.

This patch just makes it a bit more resistant.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67843 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-27 15:26:30 +00:00
Evan Cheng
9272253381 -no-implicit-float means explicit fp operations are legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67784 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-26 23:06:32 +00:00
Bill Wendling
a02a3dda56 Pull transform from target-dependent code into target-independent code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67742 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-26 06:14:09 +00:00
Bill Wendling
8b4b874cc6 Match this pattern so that we can generate simpler code:
%a = ...
  %b = and i32 %a, 2
  %c = srl i32 %b, 1
  %d = br i32 %c, 

into

  %a = ...
  %b = and %a, 2
  %c = X86ISD::CMP %b, 0
  %d = X86ISD::BRCOND %c ...

This applies only when the AND constant value has one bit set and the SRL
constant is equal to the log2 of the AND constant. The back-end is smart enough
to convert the result into a TEST/JMP sequence.
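On x86 the final sequence then looks roughly like this (register and label are illustrative):

        testl   $2, %edi                # the X86ISD::CMP of the masked value against 0
        jne     LBB0_2                  # the X86ISD::BRCOND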


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67728 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-26 01:47:50 +00:00
Bill Wendling
bddc442a00 Doxygen-ify comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67727 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-26 01:46:56 +00:00
Evan Cheng
42bf74be14 CodeGen still defaults to non-verbose asm, but llc now overrides it and defaults to verbose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67668 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-25 01:47:28 +00:00
Evan Cheng
7db860d4de Don't print global names twice with -asm-verbose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67667 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-25 01:08:42 +00:00
Dan Gohman
a96dc14968 I was convinced that it's ok to allow a second i8 return value
to be returned in DL. LLVM's multiple-return-value support is
not ABI-conforming; front-ends that wish to have code emitted
that conforms to an ABI are currently expected to make
arrangements for this on their own rather than assuming that
multiple-return-values will automatically do the right thing.
This commit doesn't fundamentally change this situation.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67588 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-24 01:04:34 +00:00
Evan Cheng
f1c0ae9de5 Do not emit comments unless -asm-verbose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67580 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-24 00:17:40 +00:00
Dan Gohman
2004eb6272 Correct some comments. Operand numbers start at 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67518 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-23 15:40:10 +00:00
Evan Cheng
fb11288109 Model an inline asm constraint that ties an input to an output register as a machine operand TIED_TO constraint. This eliminates the need to pre-allocate registers for these, and it also allows the register allocator to eliminate the unneeded copies.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67512 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-23 08:01:15 +00:00
Dan Gohman
3aff0a63f9 Fix a grammaro in a comment that Bill noticed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67507 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-23 05:02:44 +00:00
Dan Gohman
82f84159e0 Add comments explaining why there's only one register for
i8 return values.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67502 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-23 04:28:24 +00:00
Nick Lewycky
9c0f146d50 Remove strange extra semicolons.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67287 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-19 05:51:39 +00:00
Chris Lattner
ff81ebf758 Disable the "call to immediate" optimization on x86-64. It is
not safe in general because the immediate could be an arbitrary
value that does not fit in a 32-bit pcrel displacement.  
Conservatively fall back to loading the value into a register
and calling through it.

We still do the optzn on X86-32.
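The conservative x86-64 sequence is roughly (the target address here is invented):

        movabsq $0x1234567890ab, %rax   # the address may not fit in a 32-bit pcrel displacement
        callq   *%rax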



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67142 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-18 00:43:52 +00:00
Dan Gohman
9626447e70 Recognize bswapl as bswap too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67072 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-17 02:45:40 +00:00
Dan Gohman
d73566609e Recognize "bswapq" as an alternate spelling for the bswap instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67071 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-17 02:17:27 +00:00
Dan Gohman
72bb0a64af Use %rip-relative addressing on x86-64 whenever practical, as
it has a smaller encoding than absolute addressing.
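For example (symbol name illustrative), prefer

        movl    _foo(%rip), %eax        # disp32 relative to RIP

over

        movl    _foo, %eax              # 32-bit absolute; needs an extra SIB byte in 64-bit mode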


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67002 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-14 02:33:41 +00:00
Dan Gohman
9a49d31b6f Don't forego folding of loads into 64-bit adds when the other
operand is a signed 32-bit immediate. Unlike with the 8-bit
signed immediate case, it isn't actually smaller to fold a
32-bit signed immediate instead of a load. In fact, it's
larger in the case of 32-bit unsigned immediates, because
they can be materialized with movl instead of movq.
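A rough sketch of the difference (immediate and registers invented for illustration):

        movl    $305419896, %eax        # a 32-bit unsigned immediate can be materialized with movl
        addq    (%rdi), %rax            # ...which leaves the load free to be folded into the add

versus keeping the immediate in the add and leaving the load unfolded:

        movq    (%rdi), %rax
        addq    $305419896, %rax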


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67001 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-14 02:07:16 +00:00
Dan Gohman
474d3b3f40 Improve FastISel's handling of truncates to i1, and implement
ptrtoint and inttoptr in X86FastISel. These casts aren't always
handled in the generic FastISel code because X86 sometimes needs
custom code to do truncation and zero-extension.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66988 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 23:53:06 +00:00
Dan Gohman
14ea1ec232 Fix FastISel's assumption that i1 values are always zero-extended
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.
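The inserted fix-up is roughly of this form (registers illustrative):

        andb    $1, %cl                 # explicitly zero all but the low bit of the i1 value
        movzbl  %cl, %ecx               # then widen it as usual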


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66941 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 20:42:20 +00:00
Rafael Espindola
520ebe6c2f add 8 and 16 bit TLS moves.
add a fixme note on how to remove code duplication.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66932 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 19:39:55 +00:00
Rafael Espindola
9b922aa3b8 Improve sext and zext of TLS variables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66922 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 18:37:06 +00:00
Chris Lattner
44ceb8a341 generalize this code so that fast isel handles integer truncates to i1, which
codegen to the same thing as integer truncates to i8 (the top bits are 
just undefined).  This implements rdar://6667338


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66902 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 16:36:42 +00:00
Bill Wendling
105be5ac99 These instructions have special lowering that may lower them to SSE
instructions. Prevent that if we don't want implicit uses of SSE.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66877 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 08:41:47 +00:00
Evan Cheng
1606e8e4cd Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections; see the sketch after the solutions below), and potentially other issues.
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places are creating ConstantPoolSDNodes with the alignment value rather than the log2 value. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, so the offsets are no longer valid; yet the asm printer uses them to determine the size of the padding between entries.
5. The asm printer uses an expensive data structure, a multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.


Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep a non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it's replaced with an alignment field. Offsets are not computed when constant pool entries are created. They are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. Change JIT code to compute entry offsets on the fly.
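With these changes, a .literal4 entry comes out with minimal alignment, e.g. (Darwin directives, value illustrative):

        .literal4
        .align  2                       # 2^2 = 4-byte alignment, rather than an inflated 8 bytes
LCPI0_0:
        .long   1078530011              # float pi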


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66875 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 07:51:59 +00:00
Chris Lattner
cee56e7d33 generalize the previous code to use the full generality of LEA
for i32/i64 expressions (we could also do i16 on cpus where
i16 lea is fast, but I didn't add this).  On the example, we now
generate:

_test:
	movl	4(%esp), %eax
	cmpl	$42, (%eax)
	setl	%al
	movzbl	%al, %eax
	leal	4(%eax,%eax,8), %eax
	ret

instead of:

_test:
	movl	4(%esp), %eax
	cmpl	$41, (%eax)
	movl	$4, %ecx
	movl	$13, %eax
	cmovg	%ecx, %eax
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66869 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 05:53:31 +00:00
Chris Lattner
97a29a5fee optimize the case of cond ? 42 : 41 and friends. This compiles the
example to:

_test:
	movl	4(%esp), %eax
	cmpl	$41, (%eax)
	setg	%al
	movzbl	%al, %eax
	orl	$4294967294, %eax
	ret

instead of:

        movl    4(%esp), %eax
        cmpl    $41, (%eax)
	movl	$4294967294, %ecx
	movl	$4294967295, %eax
	cmova	%ecx, %eax
	ret

which is smaller in code size and faster. rdar://6668608



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66868 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 05:22:11 +00:00
Dan Gohman
77502c9344 Enhance address-mode folding of ISD::ADD to handle cases where the
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.
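Schematically (names and registers illustrative):

        leaq    _g(%rip), %rax          # the global still gets RIP-relative addressing
        movl    (%rax,%rdi), %ecx       # ...while the outer add is folded as base+index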


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66865 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-13 02:25:09 +00:00
Evan Cheng
a065200eaf Re-apply 66024 with fixes: 1. Fixed indirect call to immediate address assembly. 2. Fixed JIT encoding by making the address pc-relative.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66803 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-12 18:15:39 +00:00
Chris Lattner
d1980a5acd Move 3 "(add (select cc, 0, c), x) -> (select cc, x, (add x, c))"-related
transformations out of target-specific dag combine into the
ARM backend.  These were added by Evan in r37685 with no testcases
and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).

Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0  -> (zext(cond) << 3).  This happens frequently
with the recently added cp constant select optimization, but is a
very general xform.  For example, we now compile the second example
in const-select.ll to:

_test:
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        seta    %al
        movzbl  %al, %eax
        movl    4(%esp), %ecx
        movsbl  (%ecx,%eax,4), %eax
        ret

instead of:

_test:
        movl    4(%esp), %eax
        leal    4(%eax), %ecx
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        cmovbe  %eax, %ecx
        movsbl  (%ecx), %eax
        ret

This passes multisource and dejagnu.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66779 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-12 06:52:53 +00:00
Chris Lattner
2b9f434908 improve comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66778 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-12 06:46:02 +00:00
Evan Cheng
536e66764b On x86, if the only use of an i64 load is an i64 store, generate a double (f64) load and store pair instead.
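Roughly, on x86-32 with SSE2 (registers illustrative):

        movsd   (%eax), %xmm0           # one 64-bit load
        movsd   %xmm0, (%ecx)           # one 64-bit store

instead of splitting the i64 into two 32-bit halves:

        movl    (%eax), %edx
        movl    4(%eax), %esi
        movl    %edx, (%ecx)
        movl    %esi, 4(%ecx)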
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66776 91177308-0d34-0410-b5e6-96231b3b80d8
2009-03-12 05:59:15 +00:00