Commit Graph

41 Commits

Author SHA1 Message Date
Andrew Trick
6a7770b7ae Enable MI Sched for x86.
This changes the SelectionDAG scheduling preference to source
order. Soon, the SelectionDAG scheduler can be bypassed, saving
a nice chunk of compile time.

Performance differences that result from this change are often a
consequence of register coalescing. The register coalescer is far from
perfect. Bugs can be filed for deficiencies.

On x86 SandyBridge/Haswell, the source order schedule is often
preserved, particularly for small blocks.

Register pressure is generally improved over the SD scheduler's ILP
mode. However, we are still able to handle large blocks that require
latency hiding, unlike the SD scheduler's BURR mode. MI scheduler also
attempts to discover the critical path in single-block loops and
adjust heuristics accordingly.

The MI scheduler relies on the new machine model. This is currently
unimplemented for AVX, so we may not be generating the best code yet.

Unit tests are updated so they don't depend on SD scheduling heuristics.
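
A hedged illustration (not part of the original commit): in a single-block
loop like the C sketch below, the loop-carried add through 'acc' is the kind
of critical path the MI scheduler tries to discover and schedule around.

    /* Illustrative only: the loop body is one block, and the dependence
       chain through 'acc' is the loop-carried critical path; the loads
       and multiplies can be scheduled to hide latency around it. */
    double dot(const double *a, const double *b, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; ++i)
            acc += a[i] * b[i];
        return acc;
    }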

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192750 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-15 23:33:07 +00:00
Andrew Trick
6ea2b9608a Allocate local registers in order for optimal coloring.
Also avoid locals evicting locals just because they want a cheaper register.

Problem: MI Sched knows exactly how many registers we have and assumes
they can be colored. In cases where we have large blocks, usually from
unrolled loops, greedy coloring fails. This is a source of
"regressions" from the MI Scheduler on x86. I noticed this issue on
x86 where we have long chains of two-address defs in the same live
range. It's easy to see this in matrix multiplication benchmarks like
IRSmk and even the unit test misched-matmul.ll.

A fundamental difference between the LLVM register allocator and
conventional graph coloring is that in our model a live range can't
discover its neighbors; it can only verify them. That's why
we initially went for greedy coloring and added eviction to deal with
the hard cases. However, for singly defined and two-address live
ranges, we can optimally color without visiting neighbors simply by
processing the live ranges in instruction order.

Other beneficial side effects:

It is much easier to understand and debug regalloc for large blocks
when the live ranges are allocated in order. Yes, global allocation is
still very confusing, but it's nice to be able to comprehend what
happened locally.

Heuristics could be added to bias register assignment based on
instruction locality (think late register pairing, banks...).

Intuitively, this will make some test cases that are on the threshold
of register pressure more stable.
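
For illustration only (an assumed sketch, not code from IRSmk or
misched-matmul.ll): an unrolled accumulation like the one below produces the
long chains of two-address defs described above, since each accumulator is
repeatedly redefined by x86 two-address adds within a single long live range.

    /* Hypothetical sketch: four accumulators, each updated every iteration,
       give many overlapping local live ranges whose defs already appear in
       a good order for in-instruction-order local allocation. */
    void matmul_row(const double *a, const double *b, double *c) {
        double c0 = 0.0, c1 = 0.0, c2 = 0.0, c3 = 0.0;
        for (int k = 0; k < 4; ++k) {
            c0 += a[k] * b[4 * k + 0];
            c1 += a[k] * b[4 * k + 1];
            c2 += a[k] * b[4 * k + 2];
            c3 += a[k] * b[4 * k + 3];
        }
        c[0] = c0; c[1] = c1; c[2] = c2; c[3] = c3;
    }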

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187139 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-25 18:35:14 +00:00
Stephen Lin
b4dc0233c9 Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging. No functionality change and all tests pass after conversion.
This was done with the following sed invocation to catch label lines demarking function boundaries:
    sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
which was written conservatively to avoid false positives rather than false negatives. I scanned through all the changes and everything looks correct.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186258 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-13 20:38:47 +00:00
Andrew Trick
b2b5dc642c Revert "Temporarily enable MI-Sched on X86."
This reverts commit 98a9b72e8c.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184823 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-25 02:48:58 +00:00
Andrew Trick
98a9b72e8c Temporarily enable MI-Sched on X86.
Sorry for the unit test churn. I'll try to make the change permanently
next time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184705 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-24 09:13:20 +00:00
Preston Gurd
c7b902e7fe Pad Short Functions for Intel Atom
The current Intel Atom microarchitecture has a feature whereby, when a
function returns early, it is slightly faster to execute a sequence of NOP
instructions to wait until the return address is ready than to simply stall
on the ret instruction.

When compiling for X86 Atom only, this patch will run a pass called
"X86PadShortFunction", which will add NOP instructions where fewer than
four cycles elapse between function entry and return.

It includes tests.

This patch has been updated to address Nadav's review comments
- Optimize only at >= O1 and don't optimize if -Os is set
- Stores MachineBasicBlock* instead of BBNum
- Uses DenseMap instead of std::map
- Fixes placement of braces

Patch by Andy Zhang.
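
As a hedged example (the function and field names are invented), a trivial
early-returning accessor like the one below spends fewer than four cycles
between entry and ret, so the pass would pad it with NOPs when targeting Atom.

    struct state { int flag; };

    /* Illustrative short function: a single load and return, well under
       four cycles from function entry to the ret instruction. */
    int get_flag(const struct state *s) {
        return s->flag;
    }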

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171879 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-08 18:27:24 +00:00
Nadav Rotem
5d1f5c1737 Revert revision 171524. Original message:
URL: http://llvm.org/viewvc/llvm-project?rev=171524&view=rev
Log:
The current Intel Atom microarchitecture has a feature whereby, when a function
returns early, it is slightly faster to execute a sequence of NOP instructions
to wait until the return address is ready than to simply stall on the ret
instruction.

When compiling for X86 Atom only, this patch will run a pass called
"X86PadShortFunction", which will add NOP instructions where fewer than
four cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171603 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-05 05:42:48 +00:00
Preston Gurd
dd30b47175 The current Intel Atom microarchitecture has a feature whereby, when a function
returns early, it is slightly faster to execute a sequence of NOP instructions
to wait until the return address is ready than to simply stall on the ret
instruction.

When compiling for X86 Atom only, this patch will run a pass called
"X86PadShortFunction", which will add NOP instructions where fewer than
four cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171524 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-04 20:54:54 +00:00
Benjamin Kramer
f8b65aaf39 X86: Fix accidentally swapped operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165871 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-13 12:50:19 +00:00
Benjamin Kramer
444dccecfc X86: Promote i8 cmov when both operands are coming from truncates of the same width.
X86 doesn't have i8 cmovs so isel would emit a branch. Emitting branches at this
level is often not a good idea because it's too late for many optimizations to
kick in. This solution doesn't add any extensions (truncs are free) and tries
to avoid introducing partial register stalls by filtering direct copyfromregs.

I'm seeing a ~10% speedup on reading a random .png file with libpng15 via
graphicsmagick on x86_64/westmere, but YMMV depending on the microarchitecture.
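
A hedged C sketch (names invented) of the kind of source that produces an i8
select whose two operands are truncates of the same 32-bit width:

    /* Both select operands are truncations of 32-bit values of the same
       width, so the i8 select can be promoted and emitted as a 32-bit
       cmov instead of being lowered to a branch. */
    unsigned char pick(int cond, int a, int b) {
        return cond ? (unsigned char)a : (unsigned char)b;
    }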

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165868 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-13 10:39:49 +00:00
Manman Ren
24182757bf Update test case for Atom when disabling rematerialization in
TwoAddressInstructionPass.

The generated code for Atom has a different code sequence. This is related
to commit r160749.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160755 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-25 20:17:14 +00:00
Manman Ren
d68e8cda24 Disable rematerialization in TwoAddressInstructionPass.
It is redundant; the RegisterCoalescer will do the remat if it can't eliminate
the copy. Instruction counts were collected before and after this change. A few
extra instructions are generated due to spilling, but it is normal to see these
kinds of changes with almost any small codegen change, according to Jakob.

This also fixed rdar://11830760 where xor is expected instead of movi0.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160749 91177308-0d34-0410-b5e6-96231b3b80d8
2012-07-25 18:28:13 +00:00
Manman Ren
ed57984483 X86: optimization for -(x != 0)
This patch will optimize -(x != 0) on X86
FROM 
cmpl	$0x01,%edi
sbbl	%eax,%eax
notl	%eax
TO
negl	%edi
sbbl	%eax, %eax

In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:
def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;

rdar: 10961709
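
For reference, a hedged C sketch of the source pattern being optimized (the
function name is illustrative):

    /* -(x != 0) is 0 when x == 0 and -1 (all ones) otherwise; with this
       patch it lowers to negl + sbbl instead of cmpl/sbbl/notl. */
    int mask_if_nonzero(int x) {
        return -(x != 0);
    }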


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156312 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-07 18:06:23 +00:00
Manman Ren
e2849851b2 Revert r155853
The commit is intended to fix rdar://10961709.
But it is the root cause of PR12720.
Revert it for now.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155992 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-02 15:24:32 +00:00
Manman Ren
16a76519a5 X86: optimization for -(x != 0)
This patch will optimize -(x != 0) on X86
FROM 
cmpl	$0x01,%edi
sbbl	%eax,%eax
notl	%eax
TO
negl	%edi
sbbl	%eax, %eax


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155853 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-30 22:51:25 +00:00
Manman Ren
1701105022 test/CodeGen/X86/select.ll: remove spaces
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155840 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-30 18:54:27 +00:00
Chandler Carruth
9e67db4af1 Flip the new block-placement pass to be on by default.
This is mostly to test the waters. I'd like to get results from FNT
build bots and other bots running on non-x86 platforms.

This feature has been pretty heavily tested over the last few months by
me, and it fixes several of the execution time regressions caused by the
inlining work by preventing inlining decisions from radically impacting
block layout.

I've seen very large improvements in yacr2 and ackermann benchmarks,
along with the expected noise across all of the benchmark suite whenever
code layout changes. I've analyzed all of the regressions and fixed
them, or found them to be impossible to fix. See my email to llvmdev for
more details.

I'd like for this to be in 3.1 as it complements the inliner changes,
but if any failures are showing up or anyone has concerns, it is just
a flag flip and so can be easily turned off.

I'm switching it on tonight to try and get at least one run through
various folks' performance suites in case SPEC or something else has
serious issues with it. I'll watch bots and revert if anything shows up.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154816 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-16 13:49:17 +00:00
Bill Wendling
d336de318e As Dan pointed out, movzbl, movsbl, and friends are nicer than their aliases
(movzx/movsx) because they give more information. Revert that part of the patch.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129498 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-14 01:46:37 +00:00
Bill Wendling
c6df9883da Have the X86 back-end emit the alias instead of what's being aliased. In most
cases, it's much nicer and more informative to read the alias.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129497 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-14 01:11:51 +00:00
Benjamin Kramer
e915ff30cd X86: Lower a select directly to a setcc_carry if possible.
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
  _test:                              ## @test
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    sbbl  %eax, %eax                  ## encoding: [0x19,0xc0]
    ret                               ## encoding: [0xc3]
instead of
  _test:                              ## @test
    xorl  %ecx, %ecx                  ## encoding: [0x31,0xc9]
    cmpq  %rsi, %rdi                  ## encoding: [0x48,0x39,0xf7]
    movl  $-1, %eax                   ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax                ## encoding: [0x0f,0x43,0xc1]
    ret                               ## encoding: [0xc3]


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122451 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 23:09:28 +00:00
Chris Lattner
9637d5b22e Teach X86ISelLowering that the second result of X86ISD::UMUL is a flags
result.  This allows us to compile:

void *test12(long count) {
      return new int[count];
}

into:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	movq	$-1, %rdi
	cmovnoq	%rax, %rdi
	jmp	__Znam                  ## TAILCALL

instead of:

test12:
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	seto	%cl
	testb	%cl, %cl
	movq	$-1, %rdi
	cmoveq	%rax, %rdi
	jmp	__Znam

Of course it would be even better if the regalloc inverted the cmov to 'cmovoq',
which would eliminate the need for the 'movq %rdi, %rax'.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120936 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 07:49:54 +00:00
Chris Lattner
777dd07394 fix the rest of the linux miscompares :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120933 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 02:08:07 +00:00
Chris Lattner
96908b17ae generalize the previous check to handle -1 on either side of the
select, inserting a not to compensate.  Add a missing isZero check
that I lost somehow.

This improves codegen of:

void *func(long count) {
      return new int[count];
}

from:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

to:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	cmpq	$1, %rdx                ## encoding: [0x48,0x83,0xfa,0x01]
	sbbq	%rdi, %rdi              ## encoding: [0x48,0x19,0xff]
	notq	%rdi                    ## encoding: [0x48,0xf7,0xd7]
	orq	%rax, %rdi              ## encoding: [0x48,0x09,0xc7]
	jmp	__Znam                  ## TAILCALL
                                        ## encoding: [0xeb,A]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120932 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 02:00:51 +00:00
Chris Lattner
c8c20d1486 relax this to handle linux defaulting to -static.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120930 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 01:31:13 +00:00
Chris Lattner
a2b5600e61 Improve an integer select optimization in two ways:
1. generalize 
    (select (x == 0), -1, 0) -> (sign_bit (x - 1))
to:
    (select (x == 0), -1, y) -> (sign_bit (x - 1)) | y

2. Handle the identical pattern that happens with !=:
   (select (x != 0), y, -1) -> (sign_bit (x - 1)) | y

cmov is often high latency and can't fold immediates or
memory operands.  For example, for (x == 0) ? -1 : 1, before
we got:

< 	testb	%sil, %sil
< 	movl	$-1, %ecx
< 	movl	$1, %eax
< 	cmovel	%ecx, %eax

now we get:

> 	cmpb	$1, %sil
> 	sbbl	%eax, %eax
> 	orl	$1, %eax
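
A hedged C version of the "(x == 0) ? -1 : 1" example above (the signature is
illustrative; the %sil operand suggests an 8-bit argument):

    /* (x == 0) ? -1 : 1 now becomes cmpb/sbbl/orl rather than a cmov fed
       by two immediate loads. */
    int sel(unsigned char x) {
        return (x == 0) ? -1 : 1;
    }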

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120929 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 01:23:24 +00:00
Chris Lattner
bced6a1b8f merge some tests into select.ll and make them more specific.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120928 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 01:13:58 +00:00
Chris Lattner
bbdabf411b rename test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120927 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 01:02:23 +00:00
Chris Lattner
63d7c17ff1 remove two tests that aren't really testing anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120926 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 01:02:13 +00:00
Dan Gohman
36a0947820 Eliminate more uses of llvm-as and llvm-dis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
2009-09-08 23:54:48 +00:00
Dale Johannesen
0a6ee6d131 Remove -unwind-tables-optional everywhere, since
this is now the default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49667 91177308-0d34-0410-b5e6-96231b3b80d8
2008-04-14 17:56:54 +00:00
Dale Johannesen
c8abfdec84 Rename -disable-required-unwind-tables to -unwind-tables-optional.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49391 91177308-0d34-0410-b5e6-96231b3b80d8
2008-04-08 18:10:08 +00:00
Dale Johannesen
235f7fb474 Add -disable-required-unwind-tables to tests
that need it (usually, grepping for some string
found in unwind info)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49364 91177308-0d34-0410-b5e6-96231b3b80d8
2008-04-08 00:14:17 +00:00
Dale Johannesen
1d3863fdbc Mark functions in some tests as 'nounwind'. Generating
EH info for these functions causes the tests to fail for
random reasons (e.g. looking for 'or' or counting lines
with asm-printer; labels count as lines).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49003 91177308-0d34-0410-b5e6-96231b3b80d8
2008-03-31 23:20:09 +00:00
Evan Cheng
f6be158e9e Update test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42775 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-08 22:20:32 +00:00
Reid Spencer
eb1d74e0c8 For PR1319:
Remove && from the end of the lines to prevent tests from throwing run
lines into the background. Also, clean up places where the same command
is run multiple times by using a temporary file.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36142 91177308-0d34-0410-b5e6-96231b3b80d8
2007-04-16 17:36:08 +00:00
Reid Spencer
908504347b For PR411:
Update these tests to not use the same name even though the type of the
value differs. After PR411 hits, type planes will be gone and it will be
illegal for a name to be used twice, regardless of type.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33660 91177308-0d34-0410-b5e6-96231b3b80d8
2007-01-30 16:16:01 +00:00
Reid Spencer
69ccadd753 Use the llvm-upgrade program to upgrade llvm assembly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32115 91177308-0d34-0410-b5e6-96231b3b80d8
2006-12-02 04:23:10 +00:00
Chris Lattner
c39d194499 make this harder
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30120 91177308-0d34-0410-b5e6-96231b3b80d8
2006-09-05 20:27:06 +00:00
Chris Lattner
140385e0ac Tests for fp cmov's that I forgot to check in earlier
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12585 91177308-0d34-0410-b5e6-96231b3b80d8
2004-04-01 03:47:56 +00:00
Chris Lattner
dd285eab68 Test folding comparisons into select instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12559 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-30 22:37:04 +00:00
Chris Lattner
3b68c201c8 New testcase for select instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@12552 91177308-0d34-0410-b5e6-96231b3b80d8
2004-03-30 21:21:14 +00:00