Commit Graph

70 Commits

Author SHA1 Message Date
Hans Wennborg
f0234fcbc9 Implement the local-dynamic TLS model for x86 (PR3985)
This implements codegen support for accesses to thread-local variables
using the local-dynamic model, and adds a clean-up pass so that the base
address for the TLS block can be re-used between local-dynamic accesses on
an execution path.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157818 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-01 16:27:21 +00:00
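For context, a minimal C sketch of the access pattern this clean-up pass targets (hypothetical example, assuming a shared-library build with -fPIC; not taken from the commit):

    /* two local-dynamic TLS accesses on the same execution path */
    static __thread int counter __attribute__((tls_model("local-dynamic")));
    static __thread int limit   __attribute__((tls_model("local-dynamic")));

    int bump(void) {
        counter++;                /* first access computes the TLS block base */
        return counter < limit;   /* second access can reuse that base */
    }

With the clean-up pass, the base address of the module's TLS block obtained via __tls_get_addr can be shared between the two accesses instead of being recomputed.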
Jakob Stoklund Olesen
cf661a040c Use ptr_rc_tailcall instead of GR32_TC.
The getPointerRegClass() hook will return GR32_TC, or whatever is
appropriate for the current function.

Patch by Yiannis Tsiouris!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156459 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-09 01:50:09 +00:00
Manman Ren
ed57984483 X86: optimization for -(x != 0)
This patch will optimize -(x != 0) on X86
FROM
	cmpl	$0x01, %edi
	sbbl	%eax, %eax
	notl	%eax
TO
	negl	%edi
	sbbl	%eax, %eax

In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:
def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;

rdar: 10961709


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156312 91177308-0d34-0410-b5e6-96231b3b80d8
2012-05-07 18:06:23 +00:00
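A hypothetical C function exercising the pattern described above (illustrative only; the commit's actual test case is not reproduced here):

    /* -(x != 0) evaluates to 0 when x == 0 and to -1 otherwise */
    int all_ones_if_nonzero(int x) {
        return -(x != 0);   /* expected to lower to negl %edi; sbbl %eax, %eax */
    }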
Rafael Espindola
26c8dcc692 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04 12:51:34 +00:00
Lang Hames
616c841946 Make x86 REP_MOV* and REP_STO instructions use the correct operand sizes in 64-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153680 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-29 19:54:28 +00:00
Preston Gurd
3e99b715d1 This patch adds X86 instruction itineraries for non-pseudo opcodes in
X86InstrCompiler.td.

It also adds -mcpu=generic to the legalize-shift-64.ll test so the test
will pass if run on an Intel Atom CPU, which would otherwise produce an
instruction schedule that differs from the one the test expects.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153033 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-19 14:10:12 +00:00
Michael J. Spencer
1a2d061ec0 Add WIN_FTOL_* pseudo-instructions to model the unique calling convention
used by the Win32 _ftol2 runtime function. Patch by Joe Groff!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151382 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-24 19:01:22 +00:00
Jakob Stoklund Olesen
527a08b253 Use the same CALL instructions for Windows as for everything else.
The different calling conventions and call-preserved registers are
represented with regmask operands that are added dynamically.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150708 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-16 17:56:02 +00:00
Eli Friedman
1857b51ef5 Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148240 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-16 16:42:21 +00:00
Eli Friedman
a20b71518a Get rid of unused codegen-only instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148239 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-16 16:29:35 +00:00
Benjamin Kramer
fb418bab97 X86: Generalize the x << (y & const) optimization to also catch masks with more bits set than 31 or 63.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148024 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 12:41:34 +00:00
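A sketch of source code the generalized peephole now catches (hypothetical example): the x86 shift instructions only consume the low 5 (or 6) bits of the count register, so any mask whose low bits are all set is redundant.

    /* the & 0xff mask covers all of the low 5 bits used by a 32-bit shl,
       so the andl can be dropped and %cl used directly */
    unsigned shift_masked(unsigned x, unsigned y) {
        return x << (y & 0xff);
    }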
Chandler Carruth
acc068e873 Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:

  (sizeof(x)*8 - 1) ^ __builtin_clz(x)

Which is trying to compute the most significant bit of 'x'. That's
actually the value computed directly by the 'bsr' instruction, but if we
match it too late, we'll get completely redundant xor instructions.

The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.

Also, while here fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.

These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to be improved on in this
type of code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147244 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 10:55:54 +00:00
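The pattern quoted above, written out as a complete function for reference (a sketch; the benchmark code itself is not shown in the commit):

    /* index of the most significant set bit -- exactly what bsr computes */
    unsigned msb_index(unsigned x) {
        /* caller must guarantee x != 0, matching bsr's undefined-at-zero case */
        return (sizeof(x) * 8 - 1) ^ __builtin_clz(x);
    }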
Chandler Carruth
f2d7693fbb Begin teaching the X86 target how to efficiently codegen patterns that
use the zero-undefined variants of CTTZ and CTLZ. These are just simple
patterns for now, there is more to be done to make real world code using
these constructs be optimized and codegen'ed properly on X86.

The existing tests are spiffed up to check that we no longer generate
unnecessary cmov instructions, and that we generate the very important
'xor' that transforms bsr, which computes the index of the most significant
one bit, into the number of leading (most significant) zero bits. They also
now check that when the variant with a defined zero result is used, the
cmov is still produced.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146974 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-20 11:19:37 +00:00
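Two hypothetical helpers contrasting the zero-undefined and zero-defined forms the tests distinguish (assumed examples, not the commit's actual test files):

    /* zero-undefined variant: bsf alone suffices, no cmov needed */
    unsigned tz_nonzero(unsigned x) {
        return __builtin_ctz(x);          /* undefined when x == 0 */
    }

    /* zero-defined variant: a cmov (or branch) still guards x == 0 */
    unsigned tz_or_32(unsigned x) {
        return x ? __builtin_ctz(x) : 32;
    }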
Rafael Espindola
66bf7430f5 Fixes an issue reported by -verify-machineinstrs.
Patch by Sanjoy Das.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143064 91177308-0d34-0410-b5e6-96231b3b80d8
2011-10-26 21:16:41 +00:00
Rafael Espindola
e840e88239 This commit introduces two fake instructions, MORESTACK_RET and
MORESTACK_RET_RESTORE_R10, which are lowered to a RET and a RET
followed by a MOV, respectively.  Having a fake instruction prevents
the verifier from seeing a MachineBasicBlock end with a
non-terminator (MOV).  It also prevents the rather eccentric case of a
MachineBasicBlock ending with RET but having successors nevertheless.

Patch by Sanjoy Das.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143062 91177308-0d34-0410-b5e6-96231b3b80d8
2011-10-26 21:12:27 +00:00
Eli Friedman
f73c881f4a Fix the assembler strings for a couple of atomic instructions. Doesn't really matter much in practice, but it's a bit cleaner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139563 91177308-0d34-0410-b5e6-96231b3b80d8
2011-09-13 00:27:04 +00:00
Eli Friedman
d5ccb0558f Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()).
This isn't exactly ideal, but it is good enough for the moment.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139245 91177308-0d34-0410-b5e6-96231b3b80d8
2011-09-07 18:48:32 +00:00
Jakob Stoklund Olesen
5047d76575 Pseudo CMOV instructions don't clobber EFLAGS.
The explanation about a 0 argument being materialized as xor is no
longer valid.  Rematerialization will check if EFLAGS is live before
clobbering it.

The code produced by X86TargetLowering::EmitLoweredSelect does not
clobber EFLAGS.

This causes one less testb instruction to be generated in the cmov.ll
test case.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139057 91177308-0d34-0410-b5e6-96231b3b80d8
2011-09-02 23:52:55 +00:00
Rafael Espindola
d07b7ec772 Adds a SelectionDAG node X86SegAlloca which will be custom lowered
from DYNAMIC_STACKALLOC.

Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which
will match X86SegAlloca (based on word size) are also added.  They
will be custom emitted to inject the actual stack handling code.

Patch by Sanjoy Das.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138814 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-30 19:43:21 +00:00
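For orientation, a hypothetical function containing the kind of dynamic stack allocation that X86SegAlloca covers when segmented stacks are enabled (assumed to be built with something like -fsplit-stack; not part of the patch):

    /* a variable-length array becomes a DYNAMIC_STACKALLOC node; with
       segmented stacks the allocation goes through the stack-growth path */
    int sum_squares(int n) {
        int tmp[n];
        int total = 0;
        for (int i = 0; i < n; ++i) {
            tmp[i] = i * i;
            total += tmp[i];
        }
        return total;
    }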
Eli Friedman
43f51aeca8 Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138660 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-26 21:21:21 +00:00
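A hedged sketch of source that ends up as a 16-byte cmpxchg (hypothetical; assumes a compiler with __int128 support and the cx16 feature enabled, e.g. -mcx16):

    #include <stdbool.h>

    /* 16-byte compare-and-swap; on x86-64 with cx16 this can lower to
       lock cmpxchg16b */
    bool cas128(__int128 *p, __int128 expected, __int128 desired) {
        return __sync_bool_compare_and_swap(p, expected, desired);
    }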
Eli Friedman
327236cd6c Basic x86 code generation for atomic load and store instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138478 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-24 20:50:09 +00:00
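For reference, the C11 forms of atomic load and store that map onto these IR instructions (a generic sketch, assuming <stdatomic.h> is available):

    #include <stdatomic.h>

    atomic_int ready;

    void publish(void) { atomic_store(&ready, 1); }    /* seq_cst store */
    int  check(void)   { return atomic_load(&ready); } /* seq_cst load  */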
Bruno Cardoso Lopes
d40aa24ebf Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137179 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-09 23:27:13 +00:00
Eli Friedman
fc430a662f Fix a couple of ridiculous copy-paste errors. rdar://9914773.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137160 91177308-0d34-0410-b5e6-96231b3b80d8
2011-08-09 22:17:39 +00:00
Eli Friedman
84e7f7e267 X86ISD::MEMBARRIER does not require SSE2; it doesn't actually generate any code, and all x86 processors will honor the required semantics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136249 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-27 19:43:50 +00:00
Dan Gohman
a0697a7ef5 Add a comment describing why transforming (shl x, 1) to (add x, x) is to be
considered safe enough in this context.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133159 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-16 15:55:48 +00:00
Benjamin Kramer
b22da2a72c X86: smulo -> add is now done target-independently in DAGCombiner, remove the patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131801 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-21 18:32:01 +00:00
Stuart Hastings
0e29ed081b Re-commit 131641 with fixes; de-pseudoize MOVSX16rr8 and friends.
rdar://problem/8614450


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131746 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-20 19:04:40 +00:00
Stuart Hastings
d22f036c2a Reverting 131641 to investigate 'bot complaint.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131654 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-19 17:54:42 +00:00
Stuart Hastings
b6dcf3c514 Revise MOVSX16rr8/MOVZX16rr8 (and rm variants) to no longer be
pseudos.  rdar://problem/8614450


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131641 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-19 16:59:50 +00:00
Eric Christopher
c324f72ab7 Support XOR and AND optimization with no return value.
Finishes off rdar://8470697


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131458 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-17 08:10:18 +00:00
Eric Christopher
b38fe4b52d Optimize an atomic 'lock or' that doesn't use the result value.
Next up: xor and and.

Part of rdar://8470697


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131171 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-10 23:57:45 +00:00
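A hypothetical C snippet for the case optimized here, an atomic or whose result is ignored (example assumed, not from the commit):

    /* result unused, so this can become a single lock orl against memory
       instead of a compare-exchange or xadd-style sequence */
    void set_flag(int *flags) {
        (void)__sync_fetch_and_or(flags, 0x1);
    }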
Eric Christopher
988397dcbc Refactor lock versions of binary operators to be a little less
cut-and-paste.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131139 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-10 18:36:16 +00:00
Benjamin Kramer
f51190b697 X86: Add a bunch of peeps for add and sub of SETB.
"b + ((a < b) ? 1 : 0)" compiles into
	cmpl	%esi, %edi
	adcl	$0, %esi
instead of
	cmpl	%esi, %edi
	sbbl	%eax, %eax
	andl	$1, %eax
	addl	%esi, %eax

This saves a register and avoids a false dependency on %eax
(Intel's CPUs still don't ignore it), and it's shorter.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131070 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-08 18:36:07 +00:00
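The expression from the message above as a complete function, for context (a sketch):

    /* with the new peephole this compiles to cmpl %esi, %edi ; adcl $0, %esi */
    unsigned add_if_less(unsigned a, unsigned b) {
        return b + ((a < b) ? 1 : 0);
    }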
Dan Gohman
64849ce66f The labyrinthine X86 backend no longer appears to require
these patterns.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125759 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-17 18:50:19 +00:00
NAKAMURA Takumi
7754f85885 Target/X86: Tweak win64's tailcall.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124272 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-26 02:04:09 +00:00
NAKAMURA Takumi
e5fffe9c3f Fix whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124270 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-26 02:03:37 +00:00
Eric Christopher
cdfe3c359b The stub routine that we're calling uses test and so clobbers
the flags.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123712 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-18 01:37:20 +00:00
Chris Lattner
39ffcb7b62 We lower setb to sbb with the hope that the and will go away; when it
doesn't, match it back to setb.

On a 64-bit version of the testcase before we'd get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	sbbb	%dl, %dl
	andb	$1, %dl
	ret

now we get:

	movq	%rdi, %rax
	addq	%rsi, %rax
	setb	%dl
	ret




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122217 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 01:16:03 +00:00
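A hypothetical 64-bit function shaped like the testcase described above (assumed; the original test is not reproduced): the sum comes back in %rax and the carry is now materialized with setb instead of sbb+and.

    struct sum_carry { unsigned long sum; _Bool carry; };

    /* unsigned add with carry-out: carry is (a + b) < a */
    struct sum_carry addc(unsigned long a, unsigned long b) {
        struct sum_carry r;
        r.sum   = a + b;
        r.carry = r.sum < a;
        return r;
    }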
Chris Lattner
c19d1c3ba2 improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122201 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 22:08:31 +00:00
Evan Cheng
f735f2da6e Only rr forms of ADD*_DB are commutable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121908 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-15 22:57:36 +00:00
Eric Christopher
2871768f40 Add rsp to the uses for the same reason as 32-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121328 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-09 00:26:41 +00:00
Rafael Espindola
d652dbe720 Move lowering of TLS_addr32 and TLS_addr64 to X86MCInstLower.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120263 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-28 21:16:39 +00:00
Rafael Espindola
5bf7c534cf Lower TLS_addr32 and TLS_addr64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120225 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-27 20:43:02 +00:00
Chris Lattner
4d1189f385 reject instructions that contain a \n in their asmstring. Mark
various X86 and ARM instructions that are bitten by this as isCodeGenOnly,
as they are.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117884 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-01 00:46:16 +00:00
Chris Lattner
a4a3a5e3c2 two changes: make the asmmatcher generator ignore ARM pseudos properly,
and make it a hard error for instructions to not have an asm string.
These instructions should be marked isCodeGenOnly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117861 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-31 19:15:18 +00:00
Michael J. Spencer
e9c253e0bc X86: Add alloca probing to dynamic alloca on Windows. Fixes PR8424.
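A hedged sketch of a dynamic allocation that needs probing on Windows (hypothetical; assumes the MSVC-style _alloca from <malloc.h>): an allocation that may span multiple pages has to touch each page in order so the guard page can extend the stack.

    #include <malloc.h>
    #include <string.h>

    /* a dynamic, possibly multi-page stack allocation */
    void fill(size_t n) {
        char *buf = (char *)_alloca(n);
        memset(buf, 0, n);
    }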
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116984 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-21 01:41:01 +00:00
Michael J. Spencer
6e56b18e57 Fix Whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116972 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-20 23:40:27 +00:00
Rafael Espindola
6d8628061b Fix another case where we were preferring instructions with large
immediates instead of 8-bit ones.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116410 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-13 17:14:25 +00:00
Rafael Espindola
dba81cf40e Fix PR8365 by adding a more specialized Pat that checks if an 'and' with
8-bit constants can be used.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116403 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-13 13:31:20 +00:00
Dan Gohman
320afb8c81 Initial va_arg support for x86-64. Patch by David Meyer!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116319 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-12 18:00:49 +00:00
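A standard varargs function of the sort this lowering supports (a generic C illustration, not the patch's own test):

    #include <stdarg.h>

    /* sums 'count' ints pulled from the x86-64 register save area / stack */
    int sum_ints(int count, ...) {
        va_list ap;
        va_start(ap, count);
        int total = 0;
        for (int i = 0; i < count; ++i)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }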