Commit Graph

11708 Commits

Author SHA1 Message Date
Asaf Badouh
82fa06895e AVX-512: Implemented GETEXP instruction for KNL and SKX
Added rounding mode modifier for SQRTPS/PD
Added tests for encoding and intrinsics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 07:18:14 +00:00
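For context on the semantics: GETEXP extracts each element's exponent as a floating-point value, roughly floor(log2(|x|)). A minimal scalar sketch of that behavior, ignoring the special-case handling the instruction defines for zeros, infinities, NaNs, and denormals:

```cpp
#include <cmath>
#include <cstdio>

// Rough scalar model of what GETEXP computes per element: the unbiased
// exponent of |x| as a float, i.e. floor(log2(|x|)). Special inputs
// (0, Inf, NaN, denormals) need extra handling that this sketch omits.
float getexp_ref(float x) {
  return std::floor(std::log2(std::fabs(x)));
}

int main() {
  std::printf("%f\n", getexp_ref(8.5f));  // 3.000000
  std::printf("%f\n", getexp_ref(0.25f)); // -2.000000
}
```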
Elena Demikhovsky
bbd7cab2b9 AVX-512: Optimized vector shuffle for v16f32 and v16i32 types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238743 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 13:26:18 +00:00
Elena Demikhovsky
af0e519127 AVX-512: Implemented VRANGEPD and VRANGEPS instructions for SKX.
Implemented DAG lowering for all these forms.
Added tests for encoding.

By Igor Breger (igor.breger@intel.com)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238738 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 11:05:34 +00:00
Elena Demikhovsky
aa62d8a6b2 AVX-512: Implemented vector shuffle lowering for v8i64 and v8f64 types.
I removed vector-shuffle-512-v8.ll; it is an auto-generated test that is no longer valid.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238735 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 09:49:53 +00:00
Elena Demikhovsky
8e12b59b13 AVX-512: added all forms of VPSHUFD, VPSHUFHW, and VPSHUFLW,
including encodings.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238729 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 07:17:23 +00:00
Elena Demikhovsky
9029910d9f AVX-512: Implemented VFIXUPIMMPD and VFIXUPIMMPS instructions for KNL and SKX
Implemented DAG lowering for all these forms.
Added tests for encoding.

By Igor Breger (igor.breger@intel.com)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238728 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 06:50:49 +00:00
Elena Demikhovsky
9f63519857 AVX-512: Fixed a bug in compress and expand intrinsics.
By Igor Breger (igor.breger@intel.com)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 06:30:13 +00:00
Matt Arsenault
5f3a6430d6 Add address space argument to isLegalAddressingMode
This is important because of different addressing modes
depending on the address space for GPU targets.

This only adds the argument, and does not update
any of the uses to provide the correct address space.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238723 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 05:31:59 +00:00
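To illustrate why the hook needs the address space: the same base+offset mode can be legal in one space and illegal in another. A self-contained, hypothetical sketch (the names and the AddrMode shape are assumptions for illustration, not the in-tree declaration):

```cpp
// Toy model of an address-space-aware legality check for a GPU-like
// target. All constants and names here are illustrative assumptions.
enum AddressSpace { GLOBAL = 0, PRIVATE = 1, LOCAL = 2 };

struct AddrMode {
  long BaseOffs = 0; // constant offset added to the base register
  int Scale = 0;     // scale applied to an index register
};

bool isLegalAddressingMode(const AddrMode &AM, unsigned AS) {
  switch (AS) {
  case PRIVATE: // register-only addressing in this toy target
    return AM.BaseOffs == 0 && AM.Scale == 0;
  case LOCAL:   // small unsigned offsets, no scaled index
    return AM.Scale == 0 && AM.BaseOffs >= 0 && AM.BaseOffs < 65536;
  default:      // global memory: full addressing
    return true;
  }
}
```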
Rafael Espindola
481f35f113 Simplify another function that doesn't fail.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238703 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 00:27:26 +00:00
Simon Pilgrim
08786dc314 Stripped trailing whitespace. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238654 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 13:01:42 +00:00
Chandler Carruth
fa68750e54 [x86] Unify the horizontal adding used for popcount lowering, taking the
best approach of each.

For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best.

For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases
there is a huge improvement with this in IACA's estimated throughput --
over 2x higher throughput!!!! -- but the measurements are too good to be
true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks
slightly faster, but I'm not sure I believe any of the measurements at
this point. Both are the exact same uops though. Hard to be confident of
anything past that.

If anyone wants to collect very detailed (Agner-level) timings with the
result of this patch, or with the i32 case replaced with SHL + ADD + SHL
+ ADD + SRL, I'd be very interested. Note that you'll need to test it on
both Ivybridge and Haswell, with each of SSE3, SSSE3, and AVX selected, as
I saw unique behavior in each of these buckets with IACA, all of which
should be checked against measured performance.

But this patch is still a useful improvement by dropping duplicate work
and getting the much nicer PSADBW lowering for v2i64.

I'd still like to rephrase this in terms of a generic horizontal sum. It's
a bit lame to have a special case of that just for popcount.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238652 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 10:35:03 +00:00
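For reference, the patterns discussed above, sketched with SSE2 intrinsics under the assumption that each byte of the input already holds that byte's population count (illustrative only, not the emitted code):

```cpp
#include <emmintrin.h> // SSE2

// vNi16 pattern (SHL + ADD + SRL): sum each pair of adjacent byte
// counts into the low byte of a 16-bit lane. Byte counts are <= 8,
// so the sums fit without overflow.
__m128i sum_bytes_to_i16(__m128i v) {
  return _mm_srli_epi16(_mm_add_epi16(_mm_slli_epi16(v, 8), v), 8);
}

// PSADBW building block: sum all 8 byte counts of each 64-bit half
// into a 64-bit lane with a single instruction.
__m128i sum_bytes_to_i64(__m128i v) {
  return _mm_sad_epu8(v, _mm_setzero_si128());
}
```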
Chandler Carruth
da8bb20158 [x86] Split out the horizontal byte sum lowering component of the LUT
lowering into a helper function.

NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238650 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 09:46:16 +00:00
Chandler Carruth
3279f2381b [x86] Replace the long spelling of getting a bitcast with the *much*
shorter one. NFC.

In addition to being much shorter to type and requiring fewer arguments,
this change saves over 30 lines from this one file, all wasted on total
boilerplate...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238640 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:23:13 +00:00
Chandler Carruth
b26a073acb [x86] Replace the long spelling of getting a bitcast with the new short
spelling. NFC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238639 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:19:57 +00:00
Chandler Carruth
89a133960b [sdag] Add the helper I most want to the DAG -- building a bitcast
around a value using its existing SDLoc.

Start using this in just one function to save omg lines of code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238638 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:14:10 +00:00
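An illustrative fragment of the before/after (assuming a SelectionDAG DAG and an SDValue V in scope):

```cpp
// Before: the long spelling, repeating the opcode and SDLoc each time.
SDValue CastOld = DAG.getNode(ISD::BITCAST, SDLoc(V), MVT::v16i8, V);

// After: the new helper reuses V's existing SDLoc.
SDValue CastNew = DAG.getBitcast(MVT::v16i8, V);
```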
Chandler Carruth
d8018eeac9 [x86] Restore the bitcasts I removed when refactoring this to avoid
shifting vectors of bytes as x86 doesn't have direct support for that.

This removes a bunch of redundant masking in the generated code for SSE2
and SSE3.

In order to avoid the really significant code size growth this would
have triggered, I also factored the completely repetitive logic for
shifting and masking into two lambdas, which in turn makes all of this
much easier to read IMO.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238637 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:05:11 +00:00
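The underlying trick, as a minimal sketch: x86 has no per-byte shift, so a v16i8 logical shift is emulated with a 16-bit shift plus a mask that clears the bits pulled across byte boundaries.

```cpp
#include <emmintrin.h> // SSE2

// Logical right shift of each byte by N, emulated with a 16-bit shift
// plus a per-byte mask, since x86 has no PSRLB instruction.
template <int N> __m128i srli_epi8(__m128i v) {
  const __m128i Mask = _mm_set1_epi8((char)(0xFFu >> N));
  return _mm_and_si128(_mm_srli_epi16(v, N), Mask);
}
```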
Chandler Carruth
828f5b807c [x86] Implement a faster vector population count based on the PSHUFB
in-register LUT technique.

Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html

The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.

On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.

The base implementation of this, and most of the work, was done by Bruno
in a follow-up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.

I have extended Bruno's patch in the following ways:

0) I committed the new tests with baseline sequences so this shows
   a diff, and regenerated the tests using the update scripts.

1) Bruno had noticed, and mentioned on IRC, a redundant mask that
   I removed.

2) I introduced a particular optimization for the i32 vector cases where
   we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
   + PSADBW to compute doubled high i32 popcounts. This takes advantage
   of the fact that to line up the high i32 popcounts we have to shift
   them anyways, and we can shift them by one fewer bit to effectively
   divide the count by two. While the PSHUFD-based horizontal add is no
   faster, it doesn't require registers or load traffic the way a mask
   would, and provides more ILP as it happens on different ports with
   high throughput.

3) I did some code cleanups throughout to simplify the implementation
   logic.

4) I refactored it to continue to use the parallel bitmath lowering when
   SSSE3 is not available, to preserve the performance of that version on
   SSE2 targets, where it is still much better than scalarizing: we'll
   still do a bitmath implementation of popcount even in scalar code
   there.

With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even for v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.

With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were horribly scalarizing some AVX lowerings.

Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.

Differential Revision: http://reviews.llvm.org/D10084

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:59 +00:00
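The core LUT step, as a minimal SSSE3 intrinsics sketch for v16i8 (illustrative only; the horizontal widening to wider element types described above is omitted):

```cpp
#include <tmmintrin.h> // SSSE3

// Per-byte popcount via the in-register PSHUFB lookup table:
// look up the popcount of each low and high nibble, then add them.
__m128i popcount_epi8(__m128i v) {
  const __m128i LUT = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                    1, 2, 2, 3, 2, 3, 3, 4);
  const __m128i LowMask = _mm_set1_epi8(0x0F);
  __m128i Lo = _mm_and_si128(v, LowMask);
  __m128i Hi = _mm_and_si128(_mm_srli_epi16(v, 4), LowMask);
  return _mm_add_epi8(_mm_shuffle_epi8(LUT, Lo),
                      _mm_shuffle_epi8(LUT, Hi));
}
```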
Chandler Carruth
43d1e87d73 [x86] Restructure the parallel bitmath lowering of popcount into
a separate routine, generalize it to work for all the integer vector
sizes, and do general code cleanups.

This dramatically improves lowerings of byte and short element vector
popcount, but more importantly it will make the introduction of the
LUT-approach much cleaner.

The biggest cleanup I've done is to just force the legalizer to do the
bitcasting we need. We run these iteratively now and it makes the code
much simpler IMO. Other changes were minor, and mostly naming and
splitting things up in a way that makes it more clear what is going on.

The other significant change is to use a different final horizontal sum
approach. This is the same number of instructions as the old method, but
shifts left instead of right so that we can clear everything but the
final sum with a single shift right. This seems likely better than
a mask, which would usually have to be read from memory. It is
certainly fewer uops. Also, this will be temporary. This and the LUT
approach share the need of horizontal adds to finish the computation,
and we have more clever approaches than this one that I'll switch over
to.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238635 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:55 +00:00
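For comparison, the parallel bitmath approach this routine generalizes, shown for a single 32-bit lane; the vector lowering performs the same arithmetic lane-wise:

```cpp
#include <cstdint>

// Classic parallel bitmath popcount for one 32-bit value. The vector
// lowering does the same steps per lane with AND/SUB/SHIFT/ADD.
uint32_t popcount32(uint32_t v) {
  v = v - ((v >> 1) & 0x55555555u);                 // 2-bit partial sums
  v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u); // 4-bit partial sums
  v = (v + (v >> 4)) & 0x0F0F0F0Fu;                 // 8-bit partial sums
  return (v * 0x01010101u) >> 24;                   // horizontal sum of bytes
}
```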
Jim Grosbach
586c0042da MC: Clean up MCExpr naming. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238634 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 01:25:56 +00:00
Reid Kleckner
bfa311df8c [WinEH] Adjust the 32-bit SEH prologue to better match reality
It turns out that _except_handler3 and _except_handler4 really use the
same stack allocation layout, at least today. They just make different
choices about encoding the LSDA.

This is in preparation for lowering the llvm.eh.exceptioninfo() intrinsic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238627 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 22:57:46 +00:00
Reid Kleckner
f0e3e4cd84 Disable FP elimination in funcs using 32-bit MSVC EH personalities
The value in 'ebp' acts as an implicit argument to the outlined
handlers, and is recovered with frameaddress(1).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238619 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 21:58:11 +00:00
Rafael Espindola
cfac75ad0e Remove getData.
This completes the mechanical part of merging MCSymbol and MCSymbolData.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238617 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 21:45:01 +00:00
Reid Kleckner
38a2e49d1c Only add the EH state insertion pass on 32-bit Windows
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238612 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 20:43:10 +00:00
Rafael Espindola
5760c5fe31 Remove the MCSymbolData typedef.
The getData member function is next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238611 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 20:41:47 +00:00
Reid Kleckner
16e4a624c4 [WinEH] Emit EH tables for __CxxFrameHandler3 on 32-bit x86
Small (really small!) C++ exception handling examples work on 32-bit x86
now.

This change disables the use of .seh_* directives in WinException when
CFI is not in use. It also uses absolute symbol references in the tables
instead of imagerel32 relocations.

Also fixes a cache invalidation bug in MMI personality classification.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238575 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 17:00:57 +00:00
Matthias Braun
e67bd6c248 CodeGen: Use mop_iterator instead of MIOperands/ConstMIOperands
MIOperands/ConstMIOperands are classes iterating over the MachineOperands
of a MachineInstr; however, MachineInstr::mop_iterator does the same
thing.

I assume these two iterators exist to have a uniform interface to
iterate over the operands of a machine instruction bundle and a single
machine instruction. However, in practice I find it more confusing to have
two different iterator classes, so this patch transforms (nearly all) the
code to use mop_iterators.

The only exceptions are MIOperands::analyzePhysReg() and
MIOperands::analyzeVirtReg(), which still need an equivalent; I leave that
as an exercise for the next patch.

Differential Revision: http://reviews.llvm.org/D9932

This version is slightly modified from the proposed revision in that it
introduces MachineInstr::getOperandNo to avoid the extra counting
variable in the few loops that previously used MIOperands::getOperandNo.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238539 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 02:56:46 +00:00
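The shape of the conversion, sketched (a hypothetical helper, not code from the patch):

```cpp
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Count register definitions by iterating the instruction's own
// mop_iterator range instead of wrapping it in MIOperands.
static unsigned countRegDefs(MachineInstr &MI) {
  unsigned Defs = 0;
  for (MachineOperand &MO : MI.operands())
    if (MO.isReg() && MO.isDef())
      ++Defs;
  return Defs;
}
```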
Reid Kleckner
5f50442d79 [WinEH] Start inserting state number stores for C++ EH
This moves all the state numbering code for C++ EH to WinEHPrepare so
that we can call it from the X86 state numbering IR pass that runs
before isel.

Now we just call the same state numbering machinery and insert a bunch
of stores. It also populates MachineModuleInfo with information about
the current function.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238514 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 22:00:24 +00:00
Rafael Espindola
9886da621d Remove a trivial forwarding function. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238506 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 21:36:02 +00:00
Reid Kleckner
cb95e9ef45 Remove debug prints from r238487
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238501 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 21:23:53 +00:00
Reid Kleckner
7738ecd62b Disable x86 tail call optimizations that jump through GOT
For x86 targets, do not do sibling call optimization when materializing
the callee's address would require a GOT relocation. We can still do
tail calls to internal functions, hidden functions, and protected
functions, because they do not require this kind of relocation. It is
still possible to get GOT relocations when the user explicitly asks for
it with musttail or -tailcallopt, both of which are supposed to
guarantee TCO.

Based on a patch by Chih-hung Hsieh.

Reviewers: srhines, timmurray, danalbert, enh, void, nadav, rnk

Subscribers: joerg, davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D9799

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238487 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 20:44:28 +00:00
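A hypothetical illustration of the visibility distinction (whether the optimization actually fires also depends on -fPIC and the target):

```cpp
// Under -fPIC, calling external_callee may require loading its address
// through the GOT, so the sibling-call optimization is skipped for it.
// The hidden and internal callees resolve locally and can still become
// plain jumps. All functions here are hypothetical examples.
extern int external_callee(int);                              // default visibility
__attribute__((visibility("hidden"))) int hidden_callee(int); // no GOT load needed
static int internal_callee(int x) { return x + 1; }

int f(int x) { return external_callee(x); } // call, not jmp, under -fPIC
int g(int x) { return hidden_callee(x); }   // may become jmp hidden_callee
int h(int x) { return internal_callee(x); } // may become jmp internal_callee
```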
Reid Kleckner
9417bdcc55 [WinEH] Remove debugging dump() call
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238472 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-28 20:02:05 +00:00
Elena Demikhovsky
d56dcc4243 AVX-512: Fixed a bug in extracting subvector from v64i1
By Igor Breger (igor.breger@intel.com)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238322 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-27 14:09:33 +00:00
Elena Demikhovsky
078088b790 AVX-512: Implemented all forms of sign-extend and zero-extend instructions for KNL and SKX
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.

By Igor Breger (igor.breger@intel.com)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-27 08:15:19 +00:00
Quentin Colombet
60c91c28e4 [X86] Implement the support for shrink-wrapping.
With this patch the x86 backend is now shrink-wrapping capable,
and this functionality can be tested by using the
-enable-shrink-wrap switch.

The next step is to add more tests and enable shrink-wrapping by
default for x86.

Related to <rdar://problem/20821487>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238293 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-27 06:28:41 +00:00
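The canonical shape that benefits from shrink-wrapping, as a hypothetical example: the prologue that saves callee-saved registers for the cold path can be sunk past the early return.

```cpp
int work(int x); // hypothetical expensive helper that clobbers CSRs

// With shrink-wrapping, the prologue/epilogue needed around the call
// to work() is emitted only on the path that calls it, so the early
// return executes no frame setup at all.
int compute(int x) {
  if (x <= 0)
    return 0;     // fast path: no prologue needed here
  return work(x); // cold path: frame setup happens only on this path
}
```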
Rafael Espindola
890a876e0e Print "lock \t foo" instead of "lock \n foo".
This gets gas and llc -filetype=obj to agree on the order of prefixes.

For llvm-mc we need to fix the asm parser to know that it makes a difference
which line the "lock" is on.

Part of pr23594.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238232 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 18:35:10 +00:00
Elena Demikhovsky
001a2ba63f AVX-512: fixed a bug in arithmetic operations lowering for i1 type
https://llvm.org/bugs/show_bug.cgi?id=23630

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238198 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 12:37:17 +00:00
Elena Demikhovsky
55fd78065f AVX-512: fixed a bug in lowering VSELECT for 512-bit vector
https://llvm.org/bugs/show_bug.cgi?id=23634

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238195 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 11:32:39 +00:00
Michael Kuperstein
d714fcf5c8 Use std::bitset for SubtargetFeatures.
Previously, subtarget features were a bitfield with uint64_t as the underlying type.
Since several targets (X86 and ARM, in particular) have hit or come very close to hitting this bound, this switches the features to use a bitset.
No functional change.

The first several times this was committed (e.g. r229831, r233055), it caused several buildbot failures.
Apparently the reason for most failures was the inability of both clang and gcc to deal with large numbers (> 10K) of bitset constructor calls in tablegen-generated initializers of instruction info tables.
This should now be fixed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238192 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 10:47:10 +00:00
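The shape of the change, sketched with illustrative names (the in-tree type and bound may differ):

```cpp
#include <bitset>

// Before: a plain 64-bit mask, capped at 64 features.
//   uint64_t Features;
// After: a bitset sized by the feature count, with no 64-bit cap.
constexpr unsigned MaxSubtargetFeatures = 128; // illustrative bound
using FeatureBitset = std::bitset<MaxSubtargetFeatures>;

bool hasFeature(const FeatureBitset &Features, unsigned Bit) {
  return Features.test(Bit);
}
```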
Rafael Espindola
1826cd69f3 Stop using MCSectionData in MCMachObjectWriter.h.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 01:15:30 +00:00
Rafael Espindola
504473e6ae Stop using MCSectionData in MCExpr.h.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238163 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 00:52:18 +00:00
Rafael Espindola
f363960679 Return a MCSection from MCFragment::getParent().
Another step in merging MCSectionData and MCSection.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238162 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-26 00:36:57 +00:00
Simon Pilgrim
4da23583b6 [X86][AVX2] Vectorized i16 shift operators
Part of D9474, this patch extends AVX2 v16i16 types to 2 x v8i32 vectors and uses variable i32 shifts before packing back to i16.

Adds AVX2 tests for v8i16 and v16i16.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238149 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 17:49:13 +00:00
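A hedged sketch of the extend/shift/pack idea for one v16i16 left shift (illustrative; the generated code need not match this exactly):

```cpp
#include <immintrin.h> // AVX2

// Variable per-element left shift of v16i16: widen each half to v8i32,
// use VPSLLVD (variable i32 shifts), then pack back down to i16.
__m256i shl_v16i16(__m256i v, __m256i amt) {
  __m256i LoV = _mm256_cvtepu16_epi32(_mm256_castsi256_si128(v));
  __m256i HiV = _mm256_cvtepu16_epi32(_mm256_extracti128_si256(v, 1));
  __m256i LoA = _mm256_cvtepu16_epi32(_mm256_castsi256_si128(amt));
  __m256i HiA = _mm256_cvtepu16_epi32(_mm256_extracti128_si256(amt, 1));
  __m256i Lo = _mm256_sllv_epi32(LoV, LoA);
  __m256i Hi = _mm256_sllv_epi32(HiV, HiA);
  // Keep only the low 16 bits of each i32 result, then pack. PACKUSDW
  // operates per 128-bit lane, so the quadwords must be reordered after.
  const __m256i Mask = _mm256_set1_epi32(0xFFFF);
  __m256i Packed = _mm256_packus_epi32(_mm256_and_si256(Lo, Mask),
                                       _mm256_and_si256(Hi, Mask));
  return _mm256_permute4x64_epi64(Packed, _MM_SHUFFLE(3, 1, 2, 0));
}
```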
Rafael Espindola
8823110a85 Stop forwarding getOrdinal and setOrdinal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238139 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 14:12:48 +00:00
Michael Kuperstein
8ffbb68a86 [X86] When pattern-matching scalar FMA3 intrinsics, don't re-arrange the first and second operands.
The semantics of the scalar FMA intrinsics are that the high vector elements are copied from the first source.
The existing pattern switches src1 and src2 around, to match the "213" order, which ends up tying the original src2 to the dest. Since the actual scalar fma3 instructions copy the high elements from the dest register, the wrong values are copied.

This modifies the pattern to leave src1 and src2 in their original order.

Differential Revision: http://reviews.llvm.org/D9908

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238131 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 12:35:25 +00:00
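The intended semantics, as a scalar reference sketch (not the pattern itself, and not actually fused here):

```cpp
#include <xmmintrin.h> // SSE

// Intended semantics of a scalar FMA intrinsic: the low element is
// a0*b0 + c0, and the UPPER elements are copied from the FIRST source.
// The buggy pattern tied the result to src2, so the upper elements
// came from the wrong operand.
__m128 fmadd_ss_ref(__m128 a, __m128 b, __m128 c) {
  float av[4], bv[4], cv[4];
  _mm_storeu_ps(av, a);
  _mm_storeu_ps(bv, b);
  _mm_storeu_ps(cv, c);
  av[0] = av[0] * bv[0] + cv[0]; // low element result
  return _mm_loadu_ps(av);       // av[1..3] still hold a's upper elements
}
```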
Elena Demikhovsky
17b7d6bf25 Added promotion to EXTRACT_SUBVECTOR operand.
I encountered this case in one of the KNL tests for i1 vectors.
v16i1 = EXTRACT_SUBVECTOR v32i1, x

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238130 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 11:33:13 +00:00
NAKAMURA Takumi
4d3b6d43cc Reformat.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238126 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 01:43:34 +00:00
NAKAMURA Takumi
f61fb0c9a7 Prune CRLFs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238125 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-25 01:43:23 +00:00
Rafael Espindola
58bf2827d3 Revert "make reciprocal estimate code generation more flexible by adding command-line options"
This reverts commit r238051.

It broke some bots:

http://lab.llvm.org:8011/builders/llvm-ppc64-linux1/builds/18190

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238075 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-23 00:22:44 +00:00
Sanjay Patel
7e80a67d35 make reciprocal estimate code generation more flexible by adding command-line options
This patch adds a class for processing many recip codegen possibilities.
The TargetRecip class is intended to handle both command-line options to llc
and options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238051 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-22 21:10:06 +00:00
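For background on the codegen being made configurable: a hardware estimate (RCPPS, roughly 12 bits of accuracy) optionally refined by Newton-Raphson steps, x1 = x0 * (2 - d*x0), each of which roughly doubles the number of correct bits. A minimal sketch of one refinement step:

```cpp
#include <xmmintrin.h> // SSE

// Reciprocal via hardware estimate plus one Newton-Raphson iteration:
// x1 = x0 * (2 - d*x0).
__m128 fast_recip(__m128 d) {
  __m128 x0 = _mm_rcp_ps(d); // ~12-bit estimate (RCPPS)
  __m128 two = _mm_set1_ps(2.0f);
  return _mm_mul_ps(x0, _mm_sub_ps(two, _mm_mul_ps(d, x0)));
}
```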
Quentin Colombet
57cc146595 Reapply r238011 with a fix for the trap instruction.
The problem was that I slipped in a change required for shrink-wrapping: I
used getFirstTerminator instead of the getLastNonDebugInstr call that was here
before the refactoring, while the surrounding code is not yet patched for that.

Original message:
[X86] Refactor the prologue emission to prepare for shrink-wrapping.

- Add a late pass to expand pseudo instructions (tail calls and EH returns)
  instead of doing it in the prologue emission.
- Factor some static methods in X86FrameLowering to ease code sharing.

NFC.

Related to <rdar://problem/20821487>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238035 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-22 18:10:47 +00:00