Commit Graph

2521 Commits

Author SHA1 Message Date
Eric Christopher
02050986d9 Expand invalid return values for umulo and smulo. Handle these similarly
to add/sub by doing the normal operation and then checking for overflow
afterwards. This generally relies on the DAG handling the later invalid
operations as well.

Fixes the 64-bit part of rdar://8622122 and rdar://8774702.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123908 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-20 08:54:28 +00:00
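A minimal C sketch of the expansion strategy described in the commit above, for the unsigned case: do the multiply as the normal operation and check for overflow afterwards. The helper name umulo32 and the widened 64-bit multiply are assumptions made for illustration; this is not the actual DAG expansion code.

  #include <stdint.h>

  /* Hypothetical helper (not LLVM's expansion code): perform the
     "normal operation" at twice the width, keep the truncated product,
     and check for overflow afterwards via the high bits. */
  static int umulo32(uint32_t a, uint32_t b, uint32_t *product) {
    uint64_t wide = (uint64_t)a * (uint64_t)b; /* normal operation */
    *product = (uint32_t)wide;                 /* truncated result */
    return wide > UINT32_MAX;                  /* overflow check afterwards */
  }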
Benjamin Kramer
c9b6a3eb90 Fix an off-by-one error in ctpop combining.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123664 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-17 18:00:28 +00:00
Benjamin Kramer
d822892455 Add a DAGCombine to turn (ctpop x) u< 2 into (x & x-1) == 0.
This shaves off 4 popcounts from the hacked 186.crafty source.

This is enabled even when a native popcount instruction is available. The
combined code is one operation longer but it should be faster nevertheless.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123621 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-17 12:04:57 +00:00
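A small standalone C check of the identity this combine relies on: x has fewer than two set bits exactly when clearing the lowest set bit (x & (x - 1)) yields zero. The function and variable names are made up for the example.

  #include <stdio.h>

  /* x & (x - 1) clears the lowest set bit, so the result is zero
     exactly when x has at most one bit set, i.e. popcount(x) < 2. */
  static int fewer_than_two_bits(unsigned x) {
    return (x & (x - 1)) == 0;
  }

  int main(void) {
    for (unsigned x = 0; x < 1u << 16; ++x) {
      int before = __builtin_popcount(x) < 2; /* what the source computes */
      int after = fewer_than_two_bits(x);     /* what the combine produces */
      if (before != after)
        printf("mismatch at %u\n", x);
    }
    return 0;
  }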
Rafael Espindola
1c10db3da9 Update tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123591 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 18:02:57 +00:00
Chris Lattner
dec28ceb02 fix PR8514, a bug where the "heroic" transformation of shift/and
into and/shift would cause nodes to move around, leaving a dangling
pointer behind.  The code tried to avoid this with a HandleSDNode, but
got the details wrong.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123578 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 08:48:11 +00:00
Chris Lattner
9cd3da47f9 fix PR8981, a crash trying to form a conditional inc with a floating point compare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123560 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 02:56:53 +00:00
Chris Lattner
b99fdee325 reapply my fix for PR8961 with a tweak to properly handle
multi-instruction sequences like calls.  Many thanks to Jakob for
finding a testcase.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123559 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 02:27:38 +00:00
Chris Lattner
d357e88740 revert my fastisel patch again which apparently still gives the
llvm-gcc-i386-linux-selfhost buildbot heartburn...


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123431 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 06:14:33 +00:00
Chris Lattner
9e27cc8049 reapply r123414 now that the bots have calmed down and the fix is already in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123427 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 04:24:28 +00:00
Chris Lattner
a899d1c264 r123414 broke llvm-gcc bootstrap apparently, revert
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123422 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 02:07:32 +00:00
Chris Lattner
d754041493 fix PR8961 - a fast isel miscompilation where we'd insert a new instruction
after sext's generated for addressing that got folded.  Previously we compiled
test5 into:

_test5:                                 ## @test5
## BB#0:
        movq    -8(%rsp), %rax          ## 8-byte Reload
        movq    (%rdi,%rax), %rdi
        addq    %rdx, %rdi
        movslq  %esi, %rax
        movq    %rax, -8(%rsp)          ## 8-byte Spill
        movq    %rdi, %rax
        ret

which is insane and wrong.  Now we produce:

_test5:                                 ## @test5
## BB#0:
	movslq	%esi, %rax
	movq	(%rdi,%rax), %rax
	addq	%rdx, %rax
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123414 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 00:01:01 +00:00
Eric Christopher
04f5079ca1 Experiment with changing the default 32-bit linux stack alignment to
16 bytes for PR8969. Update all testcases accordingly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123367 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-13 06:47:10 +00:00
Jakob Stoklund Olesen
25dc2268a5 Try again enabling LiveDebugVariables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123342 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-12 23:36:21 +00:00
Jakob Stoklund Olesen
2df5458535 The world is not ready for LiveDebugVariables yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123290 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-11 23:20:33 +00:00
Jakob Stoklund Olesen
a518ccc26a Enable LiveDebugVariables by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123282 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-11 22:45:28 +00:00
Dale Johannesen
97fd9a58de Fix PR 8916 (qv for analysis), at least the immediate problem.
There's an inherent tension in DAGCombine between assuming
that things will be put in canonical form, and the Depth
mechanism that disables transformations when recursion gets
too deep.  It would not surprise me if there are a lot of little
bugs like this one waiting to be discovered.  The mechanism
seems fragile and I'd suggest looking at it from a design viewpoint.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123191 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-10 21:53:07 +00:00
Evan Cheng
55d4200336 Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123048 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-08 01:24:27 +00:00
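A hedged source-level sketch of the kind of pattern involved in the commit above: ARM inline asm performing a byte reversal that, after this change, can be treated like a bswap intrinsic. At the C level the operands are written %0/%1 rather than the $0/$1 spelling used in LLVM IR inline asm; the exact constraints and the function name are illustrative.

  /* ARM inline asm reversing the byte order of a 32-bit value.  With
     this change the pattern can be handled like __builtin_bswap32(value). */
  unsigned int byteswap(unsigned int value) {
    unsigned int out;
    __asm__("rev %0, %1" : "=r"(out) : "r"(value));
    return out;
  }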
Evan Cheng
c36b7069b4 Do not model all INLINEASM instructions as having unmodelled side effects.
Instead, encode the LLVM IR level property "HasSideEffects" in an operand (shared
with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to check
the operand when the instruction is an INLINEASM.

This allows memory instructions to be moved around INLINEASM instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123044 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 23:50:32 +00:00
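A hedged C illustration of the property being modeled in the commit above: inline asm written without volatile and without a "memory" clobber has no unmodeled side effects, so independent memory instructions may be moved across it, while the volatile, memory-clobbered form stays fixed. The function names are examples only, not from the commit.

  /* Pure computation as inline asm: no volatile, no "memory" clobber,
     so unrelated loads and stores may be scheduled across it. */
  int negate(int x) {
    int r;
    __asm__("negl %0" : "=r"(r) : "0"(x));
    return r;
  }

  /* Asm with unmodeled side effects: volatile plus a "memory" clobber
     keeps surrounding memory accesses from moving across it. */
  void compiler_barrier(void) {
    __asm__ volatile("" ::: "memory");
  }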
Devang Patel
51a666f0e5 Speculatively revert r123032.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123039 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 22:33:41 +00:00
Devang Patel
1dea232624 Appropriately truncate debug info range in dwarf output.
Enable live debug variables pass.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123032 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 21:30:41 +00:00
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Benjamin Kramer
50dd09bd85 Try to unbreak the arm buildbot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122999 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 11:35:21 +00:00
Duncan Sands
d9aa80038f Fix the other problem reported in PR8582. Testcase and patch by
Nadav Rotem.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122983 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 23:45:22 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even if it's not fast (like corei7).
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
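A small C example of the kind of fixed-size copy affected by the commit above (the change was later reverted in r123015, listed earlier). The struct and function names are invented, and the instruction selection noted in the comment depends on the subtarget.

  #include <string.h>

  struct blob { char bytes[16]; };

  /* A fixed 16-byte copy.  With this change the x86 backend may emit a
     single unaligned 16-byte load/store (movups) here instead of a pair
     of movq or a quad of movl operations. */
  void copy_blob(struct blob *dst, const struct blob *src) {
    memcpy(dst, src, sizeof *src);
  }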
Evan Cheng
0521928ae7 Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
etc. take an OptSize flag. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122952 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 06:52:41 +00:00
Evan Cheng
255874ff52 Revert r122936. I'll re-implement the change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122949 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 06:17:53 +00:00
Bill Wendling
05e353c4ed Fix test to coincide with r122934 change from PR8919.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122937 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 01:09:35 +00:00
Evan Cheng
9a9d847afa r105228 reduced the memcpy / memset inline limit to 4 with -Os to avoid blowing
up the FreeBSD bootloader. However, this doesn't make much sense for Darwin, whose
-Os is meant to optimize for size only if it doesn't hurt performance.
rdar://8821501


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122936 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 01:04:47 +00:00
Evan Cheng
d08e5b48bc Avoid zero-extending bit test operands to pointer type if all the masks fit in
the original type of the switch statement key.
rdar://8781238


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122935 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 01:02:44 +00:00
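A hedged sketch of a switch that can be lowered to a bit test, as in the commit above: every mask fits in the original 32-bit type of the key, so the operands need not be zero-extended to pointer width. The function name and case values are illustrative only.

  /* Cases 1, 3 and 5 can be lowered to a bit test against the mask 0x2A
     (roughly: range-check x, then test (1u << x) & 0x2A).  The mask fits
     in the original 32-bit type of the switch key, so no zero extension
     to pointer width is needed. */
  int in_small_set(unsigned x) {
    switch (x) {
    case 1:
    case 3:
    case 5:
      return 1;
    default:
      return 0;
    }
  }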
Evan Cheng
0b71d3972d Optimize:
  r1025 = s/zext r1024, 4
  r1026 = extract_subreg r1025, 4
to:
  r1026 = copy r1024


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122925 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-05 23:06:49 +00:00
Chris Lattner
c010e61ae1 fix PR8900, a shuffle miscompilation. Patch by Nadav Rotem!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122921 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-05 22:28:46 +00:00
Evan Cheng
7158e08b8e Use pushq / popq instead of subq $8, %rsp / addq $8, %rsp to adjust stack in
prologue and epilogue if the adjustment is 8. Similarly, use pushl / popl if
the adjustment is 4 in 32-bit mode.

In the epilogue, take care to pop into a caller-saved register that's not live
at the exit (either a return or a tail call instruction).
rdar://8771137


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122783 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-03 22:53:22 +00:00
Benjamin Kramer
80220369b0 Try to reuse the value when lowering memset.
This allows us to compile:
  void test(char *s, int a) {
    __builtin_memset(s, a, 15);
  }
into 1 mul + 3 stores instead of 3 muls + 3 stores.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122710 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-02 19:57:05 +00:00
Benjamin Kramer
8c06aa1c59 Lower the i8 extension in memset to a multiply instead of a potentially long series of shifts and ors.
We could implement a DAGCombine to turn x * 0x0101 back into logic operations
on targets that don't support the multiply, or where it is slow (P4), if someone
cares enough.

Example code:
  void test(char *s, int a) {
      __builtin_memset(s, a, 4);
  }
before:
  _test:                                  ## @test
    movzbl  8(%esp), %eax
    movl  %eax, %ecx
    shll  $8, %ecx
    orl %eax, %ecx
    movl  %ecx, %eax
    shll  $16, %eax
    orl %ecx, %eax
    movl  4(%esp), %ecx
    movl  %eax, 4(%ecx)
    movl  %eax, (%ecx)
    ret
after:
  _test:                                  ## @test
    movzbl  8(%esp), %eax
    imull $16843009, %eax, %eax   ## imm = 0x1010101
    movl  4(%esp), %ecx
    movl  %eax, 4(%ecx)
    movl  %eax, (%ecx)
    ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122707 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-02 19:44:58 +00:00
Rafael Espindola
1acf707cce Fix darwin bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122672 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-01 21:58:41 +00:00
Rafael Espindola
03277e7fb4 Add support for the 'H' modifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122667 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-01 20:58:46 +00:00
NAKAMURA Takumi
e5eff5f6a2 test/CodeGen/X86/negative-sin.ll: FileCheck-ize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122619 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-29 03:58:47 +00:00
NAKAMURA Takumi
a9eb163261 test/CodeGen/X86/fp-in-intregs.ll: FileCheck-ize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122618 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-29 03:58:36 +00:00
Benjamin Kramer
f50125ecaa DAGCombine add (sext i1), X into sub X, (zext i1) if sext from i1 is illegal. The latter usually compiles into smaller code.
example code:
unsigned foo(unsigned x, unsigned y) {
  if (x != 0) y--;
  return y;
}

before:
  _foo:                           ## @foo
    cmpl  $1, 4(%esp)             ## encoding: [0x83,0x7c,0x24,0x04,0x01]
    sbbl  %eax, %eax              ## encoding: [0x19,0xc0]
    notl  %eax                    ## encoding: [0xf7,0xd0]
    addl  8(%esp), %eax           ## encoding: [0x03,0x44,0x24,0x08]
    ret                           ## encoding: [0xc3]

after:
  _foo:                           ## @foo
    cmpl  $1, 4(%esp)             ## encoding: [0x83,0x7c,0x24,0x04,0x01]
    movl  8(%esp), %eax           ## encoding: [0x8b,0x44,0x24,0x08]
    adcl  $-1, %eax               ## encoding: [0x83,0xd0,0xff]
    ret                           ## encoding: [0xc3]



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122455 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 23:17:45 +00:00
Benjamin Kramer
e915ff30cd X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122451 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 23:09:28 +00:00
Chris Lattner
cbf68dfbc0 Fix a bug in ReduceLoadWidth that wasn't handling extending
loads properly.  We miscompiled the testcase into:

_test:                                  ## @test
	movl	$128, (%rdi)
	movzbl	1(%rdi), %eax
	ret

Now we get a proper:

_test:                                  ## @test
	movl	$128, (%rdi)
	movsbl	(%rdi), %eax
	movzbl	%ah, %eax
	ret

This fixes PR8757.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122392 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 08:02:57 +00:00
Dale Johannesen
c72b18cdc8 Reapply 122353-122355 with fixes. 122354 was wrong;
the shift type was needed in one place, the shift count
type in another.  The transform in 122355 had the same
problem.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122366 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-21 21:55:50 +00:00
Benjamin Kramer
7d6fe13efc Add some x86 specific dagcombines for conditional increments.
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for
  unsigned foo(unsigned a, unsigned b) {
    if (a == 0) b++;
    return b;
  }
we now get:
  foo:
    cmpl  $1, %edi
    movl  %esi, %eax
    adcl  $0, %eax
    ret
instead of:
  foo:
    testl %edi, %edi
    sete  %al
    movzbl  %al, %eax
    addl  %esi, %eax
    ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122364 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-21 21:41:44 +00:00
Dale Johannesen
d0cf2585a0 Revert 122353-122355 for the moment, they broke stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122360 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-21 21:22:27 +00:00
Dale Johannesen
a83bf35d16 Add a new transform to DAGCombiner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122355 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-21 20:10:51 +00:00
Dale Johannesen
5ecc340e34 Get the type of a shift from the shift itself, not from its shift
count operand.  These should be the same, but apparently they are
not always, and this is cleaner anyway.  This improves the
code in an existing test.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122354 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-21 20:06:19 +00:00
Dale Johannesen
025cc6e1be Cosmetic changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122259 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 20:10:50 +00:00
Chris Lattner
23a0199f05 now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
the same as setcc.  Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS).  This is
a step towards finishing off PR5443.  In the testcase in that bug we now get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbq	%rcx, %rcx
	testb	$1, %cl
	setne	%dl
	ret

instead of:

	movq	%rdi, %rax
	addq	%rsi, %rax
	movl	$0, %ecx
	adcq	$0, %rcx
	testq	%rcx, %rcx
	setne	%dl
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122219 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 01:37:09 +00:00
Chris Lattner
39ffcb7b62 We lower setb to sbb in the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122217 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 01:16:03 +00:00
Mon P Wang
e273690d7a Add a comment to the testcase for r122206.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122210 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 00:54:26 +00:00