Commit Graph

31 Commits

Author SHA1 Message Date
Stephen Lin
b4dc0233c9 Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging. No functionality change and all tests pass after conversion.
This was done with the following sed invocation to catch label lines demarcating function boundaries:
    sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
which was written conservatively to avoid false positives rather than false negatives; for example, a line like "; X32: test1:" becomes "; X32-LABEL: test1:". I scanned through all the changes and everything looks correct.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186258 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-13 20:38:47 +00:00
Preston Gurd
c7b902e7fe Pad Short Functions for Intel Atom
The current Intel Atom microarchitecture has a feature whereby, when a
function returns early, it is slightly faster to execute a sequence of
NOP instructions while waiting for the return address to become ready
than to simply stall on the ret instruction.

When compiling for X86 Atom only, this patch runs a pass called
"X86PadShortFunction", which adds NOP instructions where fewer than
four cycles elapse between function entry and return.

It includes tests.
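
A rough sketch of the idea in plain C++ (a toy straight-line model with
assumed per-instruction cycle counts, not the actual X86PadShortFunction
pass):

    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Toy instruction: mnemonic plus an assumed cycle count.
    struct Insn { const char *Mnemonic; unsigned Cycles; };

    // Pad a too-early return with single-cycle NOPs so that at least
    // four cycles elapse between function entry and the ret.
    void padShortFunction(std::vector<Insn> &Body) {
      const unsigned Threshold = 4;
      unsigned Cycles = 0;
      for (std::size_t I = 0; I < Body.size(); ++I) {
        if (std::strcmp(Body[I].Mnemonic, "ret") != 0) {
          Cycles += Body[I].Cycles;
          continue;
        }
        while (Cycles < Threshold) {   // early return: pad before it
          Body.insert(Body.begin() + I, Insn{"nop", 1});
          ++I;
          ++Cycles;
        }
        break;                         // toy model: single exit path
      }
    }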

This patch has been updated to address Nadav's review comments
- Optimize only at -O1 and above, and skip the optimization when -Os is set
- Stores MachineBasicBlock* instead of BBNum
- Uses DenseMap instead of std::map
- Fixes placement of braces

Patch by Andy Zhang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171879 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-08 18:27:24 +00:00
Nadav Rotem
5d1f5c1737 Revert revision 171524. Original message:
URL: http://llvm.org/viewvc/llvm-project?rev=171524&view=rev
Log:
The current Intel Atom microarchitecture has a feature whereby, when a function
returns early, it is slightly faster to execute a sequence of NOP
instructions while waiting for the return address to become ready
than to simply stall on the ret instruction.

When compiling for X86 Atom only, this patch runs a pass called
"X86PadShortFunction", which adds NOP instructions where fewer than four
cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171603 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-05 05:42:48 +00:00
Preston Gurd
dd30b47175 The current Intel Atom microarchitecture has a feature whereby, when a function
returns early, it is slightly faster to execute a sequence of NOP
instructions while waiting for the return address to become ready
than to simply stall on the ret instruction.

When compiling for X86 Atom only, this patch runs a pass called
"X86PadShortFunction", which adds NOP instructions where fewer than four
cycles elapse between function entry and return.

It includes tests.

Patch by Andy Zhang.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171524 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-04 20:54:54 +00:00
Nick Lewycky
b09649b335 Make sure to put our sret argument into %rax on x86-64. Fixes PR13563!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165063 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-02 22:45:06 +00:00
Michael Liao
a03c44117b Introduce 'UseSSEx' to force SSE legacy encoding
- Add 'UseSSEx' to prevent SSE legacy insns from being selected when AVX is
  enabled.

  Because inter-mixing SSE and AVX instructions carries a penalty, we need
  to prevent SSE legacy insns from being generated except where explicitly
  requested through intrinsics. For patterns supported by both SSE and AVX,
  we have so far forced the AVX insn to be tried first, relying on
  AddedComplexity or position in the .td file. That approach is error-prone
  and has introduced bugs.

  'UseSSEx' is disabled when AVX is turned on. For SSE insns inherited by
  AVX, we need this predicate to force either VEX encoding or SSE legacy
  encoding only.

  For insns not inherited by AVX, we still use the previous predicates,
  i.e. 'HasSSEx'. So far, these insns fall into the following categories:
  * SSE insns with MMX operands
  * SSE insns with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
    CRC, etc.)
  * SSE4A insns
  * MMX insns
  * x87 insns added by SSE
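
  A minimal sketch of the predicate split in plain C++ (toy stand-ins for
  the TableGen predicates; the type and member names are illustrative
  assumptions, not LLVM's actual interfaces):

    struct Subtarget {
      bool HasSSE2;
      bool HasAVX;
    };

    // "UseSSE2": select the SSE legacy encoding only when AVX is off,
    // so SSE/AVX inter-mixing cannot sneak in via pattern ordering.
    bool useSSE2(const Subtarget &ST) { return ST.HasSSE2 && !ST.HasAVX; }

    // "HasSSE2": for insns AVX does not inherit, the feature bit alone
    // still gates selection.
    bool hasSSE2(const Subtarget &ST) { return ST.HasSSE2; }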

2 test cases are modified:

 - test/CodeGen/X86/fast-isel-x86-64.ll
   AVX code generation differs from SSE. 'vcvtsi2sdq' cannot be selected
   by fast-isel due to its complicated pattern, so fast-isel falls back
   to materializing it from the constant pool.

 - test/CodeGen/X86/widen_load-1.ll
   AVX code generation differs from SSE after fixing the SSE/AVX
   inter-mixing. Execution-domain fixing prefers 'vmovapd' over
   'vmovaps'.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162919 91177308-0d34-0410-b5e6-96231b3b80d8
2012-08-30 16:54:46 +00:00
Jakob Stoklund Olesen
0edd83bfff Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.
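
A hedged sketch of the expansion step in C++ (a toy mapping, not the
actual ExpandPostRA code):

    // Post-RA, the zeroing pseudo becomes a self-xor; which encoding is
    // chosen (and later, which execution domain) stays flexible.
    const char *expandZeroPseudo(bool HasAVX) {
      return HasAVX ? "vxorps" : "xorps";
    }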

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145440 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 22:27:25 +00:00
Evan Cheng
c3aa7c5c5a Disable expensive two-address optimizations at -O0. rdar://10453055
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144806 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-16 18:44:48 +00:00
Ivan Krasin
74af88a666 FastISel: avoid function calls between the materialization of the constant and its use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137993 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-18 22:06:10 +00:00
Eli Friedman
d227eedf82 fast-isel sret calls, try 2. We actually do need to do something on x86-32. rdar://problem/9303592 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130429 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-28 20:19:12 +00:00
Eli Friedman
3fb6e00238 Actually revert r130348 correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130418 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-28 18:20:24 +00:00
Eli Friedman
6cf31b0a1a Revert r130348; causing buildbot issues on x86-32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130412 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-28 18:06:10 +00:00
Eli Friedman
bd1253809b Fix a silly mistake in r130338.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130360 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-28 00:42:03 +00:00
Eli Friedman
8211a6a61d fast-isel sret. We actually don't need to do anything special on x86. :) rdar://problem/9303592 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130348 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-27 23:58:52 +00:00
Eli Friedman
2790ba8e5a Make the fast-isel code for literal 0.0 a bit shorter/faster, since 0.0 is common. rdar://problem/9303592 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130338 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-27 22:41:55 +00:00
Chris Lattner
b686af053e Recommit the fix for rdar://9289512 with a couple tweaks to
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
   would cause an assert.
2. The load may not be used by the current chain of instructions,
   and we could move it past a side-effecting instruction. Change
   how we process uses to define the problem away.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130018 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-22 21:59:37 +00:00
Daniel Dunbar
63c21deee1 Revert r129656, "Fix rdar://9289512 - not folding load into compare at -O0...",
which broke a couple GCC test suite tests at -O0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129914 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-21 16:14:46 +00:00
Eli Friedman
3762046dbf Add support for FastISel'ing varargs calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129765 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 17:22:22 +00:00
Chris Lattner
832e494359 Implement support for x86 fastisel of small fixed-sized memcpys, which are generated
en masse for C++ PODs.  On my C++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129755 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 05:52:03 +00:00
Chris Lattner
b44101c140 Implement support for fast isel of calls with i1 arguments, even though they are illegal,
when they are a truncate from something else.  This eliminates fully half of all the
fastisel rejections on a C++ test file I'm working with, which should substantially
improve -O0 compilation of C++ code.

This fixed rdar://9297003 - fast isel bails out on all functions taking bools


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129752 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 05:09:50 +00:00
Chris Lattner
e03b8d3162 Handle i1/i8/i16 constant integer arguments to calls by prepromoting them.
Before, we would bail out on i1 arguments altogether; now we bail only on
non-constant ones.  We also used to emit extraneous code; e.g., test12 was:

	movb	$0, %al
	movzbl	%al, %edi
	callq	_test12

and test13 was:
	movb	$0, %al
	xorl	%edi, %edi
	movb	%al, 7(%rsp)
	callq	_test13f

Now we get:

	movl	$0, %edi
	callq	_test12
and:
	movl	$0, %edi
	callq	_test13f

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129751 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 04:42:38 +00:00
Chris Lattner
c76d121807 be layout aware, to produce:
	testb	$1, %al
	je	LBB0_2
## BB#1:                                ## %if.then
	movb	$0, %al

instead of:

	testb	$1, %al
	jne	LBB0_1
	jmp	LBB0_2
LBB0_1:                                 ## %if.then
	movb	$0, %al

how 'bout that.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129749 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 04:26:32 +00:00
Chris Lattner
90cb88a9b4 fix rdar://9297006 - fast isel bails out on trunc to i1 -> bools cry,
a common cause of fast isel rejects on C++ code.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129748 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-19 04:22:17 +00:00
Chris Lattner
f051c1a29d while we're at it, also handle 'sdiv exact' by a power of 2;
this fixes a few rejects on C++ iterator loops.
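
A hedged C++ sketch of this rewrite and the udiv fix in the next entry
down (a toy scalar model, not the fast-isel code itself):

    #include <cstdint>

    // udiv by 2^k: logical shift right by k (unsigned >> is logical).
    uint64_t udivPow2(uint64_t X, unsigned K) { return X >> K; }

    // 'sdiv exact' by 2^k: arithmetic shift right by k. Without 'exact',
    // ashr rounds toward negative infinity while sdiv rounds toward
    // zero, so the rewrite relies on the remainder being known zero.
    int64_t sdivExactPow2(int64_t X, unsigned K) { return X >> K; }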


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129694 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-18 07:00:40 +00:00
Chris Lattner
090ca9108b fix rdar://9297011 - udiv by power of two causing fast-isel rejects
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129693 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-18 06:55:51 +00:00
Chris Lattner
1518afddea Implement major new fastisel functionality: the matcher can now handle immediates with
value constraints on them (when defined as ImmLeafs).  This is particularly important
for X86-64, where almost all reg/imm instructions take an i64immSExt32 immediate operand,
which has a value constraint.  Before this patch we ended up iseling the examples into
such amazing code as:

	movabsq	$7, %rax
	imulq	%rax, %rdi
	movq	%rdi, %rax
	ret

now we produce:

	imulq	$7, %rdi, %rax
	ret

This dramatically shrinks the generated code at -O0 on x86-64.
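
The constraint itself is simple; here is a minimal C++ sketch of the
i64immSExt32 check (the helper name is illustrative):

    #include <cstdint>

    // Does a 64-bit immediate survive the sign-extending 32-bit
    // immediate field of most x86-64 reg/imm instructions?
    bool isSExt32Imm(int64_t Imm) {
      return Imm == static_cast<int64_t>(static_cast<int32_t>(Imm));
    }
    // isSExt32Imm(7) holds, so 'imulq $7, %rdi, %rax' encodes directly;
    // an immediate like (1LL << 40) would still need movabsq first.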



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129691 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-18 06:22:33 +00:00
Chris Lattner
602fc06817 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts (see the sketch after this list)
3. teach tblgen to handle shift immediates that are different sizes than the 
   shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
   instead of FastEmit_ri to simplify code.
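
For item 2, a hedged C++ sketch of the rewrite (toy scalar model; the
real code operates during instruction selection):

    #include <cstdint>

    // Multiply by a power-of-two constant becomes a left shift, which
    // fast isel can emit directly instead of rejecting the function.
    uint64_t mulConst(uint64_t X, uint64_t C) {
      bool IsPow2 = C != 0 && (C & (C - 1)) == 0;
      return IsPow2 ? X << __builtin_ctzll(C) : X * C;
    }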



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129666 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-17 20:23:29 +00:00
Chris Lattner
0a1c997c27 fix an x86 fast isel issue where we'd completely give up on folding an address
when we have a global variable base and an index.  Instead, just give up on
folding the global variable.

Before we'd generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	addq	%rdi, %rax
	movzbl	(%rax), %eax
	ret

now we generate:

_test:                                  ## @test
## BB#0:
	movq	_rtx_length@GOTPCREL(%rip), %rax
	movzbl	(%rax,%rdi), %eax
	ret

The difference is even more significant when there is a scale
involved.

This fixes rdar://9289558 - total fail with addr mode formation at -O0/x86-64


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129664 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-17 17:47:38 +00:00
Chris Lattner
685090f598 fix an oversight which caused us to compile the testcase (and other
less trivial things) into a dummy lea.  Before we generated:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	ret

now we produce:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	ret

This is part of rdar://9289558

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129662 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-17 17:12:08 +00:00
Chris Lattner
fd3f635103 Fix rdar://9289512 - not folding load into compare at -O0
The basic issue here is that bottom-up isel was matching the branch
and compare, and failing to fold the load into the branch/compare
combo.  Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:


cmpb    $0, 52(%rax)
je      LBB4_2

instead of:

movb    52(%rax), %cl
cmpb    $0, %cl
je      LBB4_2

This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and 
generating less code.

This was one of the biggest classes of missing load folding.  Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129656 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-17 06:35:44 +00:00
Chris Lattner
fff65b354f fix rdar://9289583 - fast isel should handle non-canonical commutative binops,
allowing us to fold the immediate into the 'and' in this case:

int test1(int i) {
  return 8&i;
}
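
A hedged sketch of the canonicalization in C++ (a toy operand model, not
the actual fast-isel code): when the constant sits on the left of a
commutative operator, swap the operands so the usual reg/imm pattern
matches.

    #include <cstdint>
    #include <utility>

    struct Value { bool IsConst; int64_t Imm; };  // toy operand
    struct BinOp { Value LHS, RHS; };             // commutative op, e.g. 'and'

    // "8 & i" is non-canonical; swapping gives "i & 8", whose immediate
    // then folds straight into the 'and' instruction.
    void canonicalizeCommutative(BinOp &Op) {
      if (Op.LHS.IsConst && !Op.RHS.IsConst)
        std::swap(Op.LHS, Op.RHS);
    }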



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129653 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-17 01:16:47 +00:00