Commit Graph

11556 Commits

Quentin Colombet
49e423ca30 [CodeGenPrepare][AddressingModeMatcher] Fix a think-o for the sext(zext) -> zext promotion
introduced in r217629.
We were returning the old sext instead of the new zext as the promoted instruction!

Thanks Joerg Sonnenberger for the test case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217800 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 18:26:58 +00:00
Akira Hatanaka
348e9e7b6d [X86] Fix a bug in X86's peephole optimization.
Peephole optimization was folding MOVSDrm, which is a zero-extending double
precision floating point load, into ADDPDrr, which is a SIMD add of two packed
double precision floating point values.

(before)
%vreg21<def> = MOVSDrm <fi#0>, 1, %noreg, 0, %noreg; mem:LD8[%7](align=16)(tbaa=<badref>) VR128:%vreg21
%vreg23<def,tied1> = ADDPDrr %vreg20<tied0>, %vreg21; VR128:%vreg23,%vreg20,%vreg21

(after)
%vreg23<def,tied1> = ADDPDrm %vreg20<tied0>, <fi#0>, 1, %noreg, 0, %noreg; mem:LD8[%7](align=16)(tbaa=<badref>) VR128:%vreg23,%vreg20

X86InstrInfo::foldMemoryOperandImpl already had the logic that prevented this
from happening. However, the check wasn't being performed for loads from stack
objects. This commit factors the logic out into a new function and uses it to
check that loads from stack slots are not zero-extending loads.
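
A sketch of why the fold is unsound, written with SSE intrinsics (an
illustrative reconstruction, not code from the patch):

  #include <immintrin.h>

  __m128d fold_hazard(__m128d acc, const double *p) {
    /* MOVSDrm zero-extends: x = { p[0], 0.0 }, so r[1] == acc[1] + 0.0. */
    __m128d x = _mm_load_sd(p);
    __m128d r = _mm_add_pd(acc, x);
    /* Folding into ADDPDrm would instead read 16 bytes, { p[0], p[1] },
       silently turning r[1] into acc[1] + p[1]. */
    return r;
  }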

rdar://problem/18236850


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217799 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 18:23:52 +00:00
Matt Arsenault
f1b16047b7 R600/SI: Prefer selecting more e64 instruction forms.
Add some more tests to make sure better operand
choices are still made. Leave some cases that seem
to have no reason to ever be e64 alone.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217789 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 17:15:02 +00:00
Matt Arsenault
6fc71a0cfc R600/SI: Make sure double vector fmul is tested
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217787 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 17:04:54 +00:00
Matt Arsenault
e626ee51b6 R600/SI: Add some mubuf testcases.
I noticed some odd-looking cases where addr64 wasn't set
when storing to a pointer in an SGPR. This seems to be intentional,
and partially tested already.

The documentation seems to describe addr64 in terms of which registers the
addressing modifiers come from, but I would expect to always need
addr64 when using 64-bit pointers. If no offset is applied,
it makes sense to not need to worry about doing a 64-bit add
for the final address. A small immediate offset can be applied,
so is it OK to not have addr64 set if a carry is necessary when adding
the base pointer in the resource to the offset?

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217785 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 16:48:01 +00:00
Matt Arsenault
d189a0407d R600/SI: Add preliminary support for flat address space
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217777 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 15:41:53 +00:00
Chandler Carruth
c5371836a5 [x86] Begin emitting PBLENDW instructions for integer blend operations
when SSE4.1 is available.

This removes a ton of domain crossing from blend code paths that were
ending up in the floating point code path.

This is just the tip of the iceberg though. The real switch is for
integer blend lowering to more actively rely on this instruction being
available so we don't hit shufps at all any longer. =] That will come in
a follow-up patch.

Another place where we need better support is for using PBLENDVB when
doing so avoids the need to have two complementary PSHUFB masks.
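
An intrinsics-level view of PBLENDW (an illustrative sketch, not code from
the patch):

  #include <smmintrin.h>

  /* PBLENDW: bit i of the immediate selects word i from b; clear bits take
     word i from a. The whole operation stays in the integer domain. */
  __m128i blend_words(__m128i a, __m128i b) {
    return _mm_blend_epi16(a, b, 0xF0);  /* low 4 words from a, high 4 from b */
  }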

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217767 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 12:40:54 +00:00
Chandler Carruth
9277ad2d36 [x86] Add an explicit SSE3 run to this test and flesh out a bunch of
missing specific checks.

While there is a lot of redundancy here, where all but one mode uses the
same code generation, I'd rather have each variant spelled out and
checked so that readers aren't misled by an omission in the test suite.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217765 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 11:40:20 +00:00
Chandler Carruth
2fdec16fbe [x86] Teach the x86 DAG combiner to form UNPCKLPS and UNPCKHPS
instructions from the relevant shuffle patterns.
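
The shuffle patterns in question, sketched with clang's
__builtin_shufflevector (illustrative; indices 4-7 select from b):

  typedef float __v4sf __attribute__((__vector_size__(16)));

  /* unpcklps interleaves the low halves:  { a0, b0, a1, b1 } */
  __v4sf unpck_lo(__v4sf a, __v4sf b) { return __builtin_shufflevector(a, b, 0, 4, 1, 5); }
  /* unpckhps interleaves the high halves: { a2, b2, a3, b3 } */
  __v4sf unpck_hi(__v4sf a, __v4sf b) { return __builtin_shufflevector(a, b, 2, 6, 3, 7); }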

This is the last tweak I'm aware of to generate essentially perfect
v4f32 and v2f64 shuffles with the new vector shuffle lowering up through
SSE4.1. I'm sure I've missed some and it'd be nice to check since v4f32
is amenable to exhaustive exploration, but this is all of the tricks I'm
aware of.

With AVX there is a new trick to use the VPERMILPS instruction, that's
coming up in a subsequent patch.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217761 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 11:26:25 +00:00
Chandler Carruth
08780d4c1d [x86] Teach the x86 DAG combiner to form MOVSLDUP and MOVSHDUP
instructions when it finds an appropriate pattern.

These are lovely instructions, and it's a shame not to use them. =] They
are fast, and can handle loads folded into their operands, etc.
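
The patterns these instructions match, sketched with clang's
__builtin_shufflevector (illustrative, not the combiner code itself):

  typedef float __v4sf __attribute__((__vector_size__(16)));

  /* movsldup duplicates the even elements: { a0, a0, a2, a2 } */
  __v4sf dup_even(__v4sf a) { return __builtin_shufflevector(a, a, 0, 0, 2, 2); }
  /* movshdup duplicates the odd elements:  { a1, a1, a3, a3 } */
  __v4sf dup_odd(__v4sf a) { return __builtin_shufflevector(a, a, 1, 1, 3, 3); }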

I've also plumbed the comment shuffle decoding through the various
layers so that the test cases are printed nicely.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217758 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 11:15:23 +00:00
Chandler Carruth
04402a6c13 [x86] Undo a flawed transform I added to form UNPCK instructions when
AVX is available, and generally tidy up things surrounding UNPCK
formation.

Originally, I was thinking that the only advantage of PSHUFD over the UNPCK
instruction variants was its free copy, and that otherwise we should use the
UNPCK instructions for their shorter encoding. This isn't right, though:
there is a larger advantage in being able to fold a load into the operand of
a PSHUFD. For UNPCK, the operand *must* be in a register so it can be the
second input.
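
The folding asymmetry, as an intrinsics sketch (illustrative): PSHUFD's one
source operand may come from memory, while UNPCK accepts a memory operand
only as its second input.

  #include <immintrin.h>

  __m128i pshufd_from_mem(const __m128i *p) {
    /* The load folds directly into pshufd's source operand. */
    return _mm_shuffle_epi32(_mm_loadu_si128(p), 0xB1);
  }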

This removes the UNPCK formation in the target-specific DAG combine for
v4i32 shuffles. It also lifts the v8 and v16 cases out of the
AVX-specific check as they are potentially replacing multiple
instructions with a single instruction and so should always be valuable.
The floating point checks are simplified accordingly.

This also adjusts the formation of PSHUFD instructions to attempt to
match the shuffle mask to one which would fit an UNPCK instruction
variant. This was originally motivated to allow it to match the UNPCK
instructions in the combiner, but clearly won't now.

Eventually, we should add a MachineCombiner pass that can form UNPCK
instructions post-RA when the operand is known to be in a register and
thus there is no loss.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217755 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 10:35:41 +00:00
Chandler Carruth
a6cc351c5b [x86] Teach the new vector shuffle lowering to use 'punpcklwd' and
'punpckhwd' instructions when suitable rather than falling back to the
generic algorithm.

While we could canonicalize to these patterns late in the process, that
wouldn't help when the freedom to use them is only visible during
initial lowering when undef lanes are well understood. This, it turns
out, is very important for matching the shuffle patterns that are used
to lower sign extension. Fixes a small but relevant regression in
gcc-loops with the new lowering.
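
One way these shuffles arise in sign-extension lowering, as an intrinsics
sketch (illustrative): interleaving a vector with itself places each 16-bit
element in the high half of a 32-bit lane, and an arithmetic shift right by
16 completes the sign extension.

  #include <emmintrin.h>

  __m128i sext_low4_i16_to_i32(__m128i x) {
    return _mm_srai_epi32(_mm_unpacklo_epi16(x, x), 16);
  }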

When I changed this I noticed that several 'pshufd' lowerings became
unpck variants. This is bad because it removes the ability to freely
copy in the same instruction. I've adjusted the widening test to handle
undef lanes correctly and now those will correctly continue to use
'pshufd' to lower. However, this caused a bunch of churn in the test
cases. No functional change, just churn.

Both of these changes are part of addressing a general weakness in the
new lowering -- it doesn't sufficiently leverage undef lanes. I've at
least a couple of patches that will help there at least in an academic
sense.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217752 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-15 09:02:37 +00:00
Chandler Carruth
e610c324e1 [x86] Teach the new vector shuffle lowering to use BLENDPS and BLENDPD.
These are super simple. They even take precedence over crazy
instructions like INSERTPS because they have very high throughput on
modern x86 chips.

I still have to teach the integer shuffle variants about this to avoid
so many domain crossings. However, due to the particular instructions
available, that's a touch more complex and so a separate patch.

Also, the backend doesn't seem to realize it can commute blend
instructions by negating the mask. That would help remove a number of
copies here. Suggestions on how to do this welcome, it's an area I'm
less familiar with.
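
The commuting identity in question, as an intrinsics sketch (illustrative):
swapping the inputs and inverting the 4-bit mask yields the same blend.

  #include <smmintrin.h>

  __m128 blend_fwd(__m128 a, __m128 b) { return _mm_blend_ps(a, b, 0x5); }        /* { b0, a1, b2, a3 } */
  __m128 blend_rev(__m128 a, __m128 b) { return _mm_blend_ps(b, a, 0x5 ^ 0xF); }  /* same result */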

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217744 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 23:43:33 +00:00
NAKAMURA Takumi
0309c5d4bc llvm/test/CodeGen/X86/vec_shuffle-38.ll: Add explicit -mtriple=x86_64-unknown to avoid incompatibility with win32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217742 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 23:39:01 +00:00
Chandler Carruth
59def2bffa [x86] Add an SSE41 mode to this test. Nothing interesting here; it's the
same as SSE3.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217741 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 23:28:12 +00:00
Chandler Carruth
151a867774 [x86] Switch this test to use an ALL prefix with special SSE2 and SSE3
variants where significant.

This will make it more obvious what is happening when we start using
blends in SSE41.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217740 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 23:19:37 +00:00
Chandler Carruth
85e37090e8 [x86] Add some test cases where we should emit blendpd in SSE4.1. No
actual change yet though.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217739 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 23:15:52 +00:00
Chandler Carruth
33957173a7 [x86] Teach the vector combiner that picks a canonical shuffle form to
support transforming the forms from the new vector shuffle lowering to
use 'movddup' when appropriate.
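
The form being canonicalized, sketched with clang's __builtin_shufflevector
(illustrative):

  typedef double __v2df __attribute__((__vector_size__(16)));

  /* movddup broadcasts the low double: { a0, a0 } */
  __v2df dup_low(__v2df a) { return __builtin_shufflevector(a, a, 0, 0); }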

A bunch of the cases where we form 'movddup' don't actually show up
in the test results because something even later than DAG
legalization maps them back to 'unpcklpd'. If this shows back up as
a performance problem, I'll probably chase it down, but it is at least
an encoded size loss. =/

To make this work, also always do this canonicalizing step for floating
point vectors where the baseline shuffle instructions don't provide any
free copies of their inputs. This also causes us to canonicalize
unpck[hl]pd into mov{hl,lh}ps (resp.) which is a nice encoding space
win.

There is one test which is "regressed" by this: extractelement-load.
There, in the test case where the optimization being tested *fails*, the
exact instruction pattern that results is slightly different. This
should probably be fixed by having the appropriate extract formed
earlier in the DAG, but that would defeat the purpose of the test.... If
this test case is critically important for anyone, please let me know
and I'll try to work on it. The prior behavior was actually contrary to
the comment in the test case and seems likely to have been an accident.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217738 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 22:41:37 +00:00
Matt Arsenault
a0ba49844c R600/SI: Fix broken check lines
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217736 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-14 18:32:05 +00:00
Juergen Ributzka
5bf1f01c15 [FastISel][AArch64] Add support for non-native types for logical ops.
Extend the logical ops selection to also support non-native types such as i1,
i8, and i16.

Fixes rdar://problem/18330589.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217732 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-13 23:46:28 +00:00
Chad Rosier
4fb3a966d0 [AArch64] Enable post-RA MI scheduler.
Phabricator Revision: http://reviews.llvm.org/D5278
Patch by Sanjin Sijaric!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217693 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-12 17:40:39 +00:00
NAKAMURA Takumi
35d9566e68 llvm/test/CodeGen/X86/vec_ctbits.ll: Add explicit -mtriple=x86_64-unknown. It was incompatible with Win32 x64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217683 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-12 15:10:56 +00:00
Bill Schmidt
183704cb08 Address comments on r217622
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217680 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-12 14:26:36 +00:00
Benjamin Kramer
66feb63c3c Legalizer: Use the scalar bit width when promoting bit counting instrs on
vectors.

e.g. when promoting ctlz from <2 x i32> to <2 x i64> we have to fixup
the result by 32 bits, not 64. PR20917.
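
The arithmetic behind the fix, as a scalar C sketch (illustrative): widening
an i32 input to i64 introduces exactly 32 extra leading zeros.

  #include <stdint.h>

  uint32_t ctlz32_via_ctlz64(uint32_t x) {
    /* Adjust by the element-width difference (64 - 32 = 32), not by 64.
       Undefined for x == 0, as with ctlz itself. */
    return (uint32_t)__builtin_clzll((uint64_t)x) - 32;
  }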

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217671 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-12 12:50:27 +00:00
Matt Arsenault
86ffcddf42 R600/SI: Fix off by 1 error in used register count
The register numbers start at 0, so if only 1 register
was used, this was reported as 0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217636 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 22:51:37 +00:00
Quentin Colombet
dcc0e7eaa1 [CodeGenPrepare] Teach the addressing mode matcher how to promote zext.
I.e., teach it about 'sext (zext a to ty) to ty2' => zext a to ty2.
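
The identity, spelled out as a scalar C sketch (illustrative): a zext result
is never negative, so a following sext cannot change its value.

  #include <stdint.h>

  int64_t promoted(uint8_t a) { return (int64_t)(int32_t)(uint32_t)a; } /* sext (zext a to i32) to i64 */
  int64_t direct(uint8_t a)   { return (int64_t)(uint64_t)a; }          /* zext a to i64 */
  /* promoted(a) == direct(a) for every a */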


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217629 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 21:22:14 +00:00
Bill Schmidt
d4604dca94 Add missing colon to RUN line...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217623 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 20:13:52 +00:00
Bill Schmidt
24ebd0edd6 [PATCH, PowerPC] Accept 'U' and 'X' constraints in inline asm
Inline asm may specify 'U' and 'X' constraints to print a 'u' for an
update-form memory reference, or an 'x' for an indexed-form memory
reference.  However, these are really only useful in GCC internal code
generation.  In inline asm the operand of the memory constraint is
typically just a register containing the address, so 'U' and 'X' make
no sense.
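
A typical GCC-style use of these modifiers (an illustrative pattern, not
taken from the patch): '%U' emits 'u' for an update-form memory operand and
'%X' emits 'x' for an indexed-form one.

  #define STORE_U32(p, v) \
    asm volatile("stw%U0%X0 %1,%0" : "=m"(*(p)) : "r"(v))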

This patch quietly accepts 'U' and 'X' in inline asm patterns, but
otherwise does nothing.  If we ever unexpectedly see a non-register,
we'll assert and sort it out afterwards.

I've added a new test for these constraints; the test case should be
used for other asm-constraints changes down the road.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217622 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 20:10:03 +00:00
Matt Arsenault
e7d5fc9e53 Add triple to test to fix bots
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217612 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 17:50:20 +00:00
Brad Smith
da9bce2e13 Provide an implementation of getNoopForMachoTarget for SPARC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217611 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 17:40:51 +00:00
Matt Arsenault
31b1bdbd95 Add DAG combine for shl + add of constants.
Do
 (shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)

This is already done for multiplies, but since multiplies
by powers of two are turned into shifts, we also need
to handle it here.

This might want isLegalAddImmediate checks to avoid transforming an add of
a legal immediate into one that isn't.
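
The underlying identity, as a scalar C sketch (illustrative):

  #include <stdint.h>

  uint32_t before(uint32_t x) { return (x + 7u) << 3; }         /* (shl (add x, c1), c2) */
  uint32_t after(uint32_t x)  { return (x << 3) + (7u << 3); }  /* (add (shl x, c2), c1 << c2) */
  /* before(x) == after(x) for all x */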

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217610 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 17:34:19 +00:00
Adam Nemet
49f31255be [AVX512] Fix miscompile for unpack
r189189 implemented AVX512 unpack by essentially performing a 256-bit unpack
of the low and the high 256 bits of src1 into the low part of the
destination, and another unpack of the low and high 256 bits of src2 into the
high part of the destination.

I don't think that's how unpack works.  AVX512 unpack simply has more 128-bit
lanes, but other than that it works the same way as AVX.  So in each 128-bit
lane, we're always interleaving certain parts of both operands rather than
different parts of one of the operands.

E.g. for this:
__v16sf a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
__v16sf b = { 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 };
__v16sf c = __builtin_shufflevector(a, b, 0, 8, 1, 9, 4, 12, 5, 13, 16,
                                    24, 17, 25, 20, 28, 21, 29);

we generated punpcklps (notice how the elements of a and b are not interleaved
in the shuffle).  In turn, c was set to this:

  0 16 1 17 4 20 5 21 8 24 9 25 12 28 13 29

Obviously this should have just returned the mask vector of the shuffle
vector.
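
For reference, since each input element's value equals its overall index
here, the correct result is simply the shuffle's own index vector:

  0 8 1 9 4 12 5 13 16 24 17 25 20 28 21 29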

I mostly reverted this change and made sure the original AVX code worked
for 512-bit vectors as well.

Also updated the tests because they matched the logic from the code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217602 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 16:51:10 +00:00
Sanjay Patel
558cd3c6a2 Add triple and remove hashes to account for buildbot differences in comment strings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217601 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 16:08:44 +00:00
Sanjay Patel
04bb0e721f Combine fmul vector FP constants when unsafe math is allowed.
This is an extension of the change made with r215820:
http://llvm.org/viewvc/llvm-project?view=revision&revision=215820

That patch allowed combining of splatted vector FP constants that are multiplied.

This patch allows combining non-uniform vector FP constants too by relaxing the
check on the type of vector. Also, canonicalize a vector fmul in the
same way that we already do for scalars - if only one operand of the fmul is a
constant, make it operand 1. Otherwise, we miss potential folds.
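
A sketch of the fold this enables, in C source form (illustrative; valid
only under unsafe/fast math):

  typedef float __v4sf __attribute__((__vector_size__(16)));

  __v4sf f(__v4sf x) {
    /* (x * {1,2,3,4}) * {5,5,5,5}  -->  x * {5,10,15,20},
       with the constant canonicalized to operand 1 of the fmul. */
    return (x * (__v4sf){1, 2, 3, 4}) * (__v4sf){5, 5, 5, 5};
  }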

This fold is also done by -instcombine, but it's possible that extra
fmuls may have been generated during lowering.

Differential Revision: http://reviews.llvm.org/D5254



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217599 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 15:45:27 +00:00
Aaron Watry
1ff44854f6 R600: Test local atomics for evergreen
Now that the operations are all implemented, we can test this sub-arch here.

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217595 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 15:02:52 +00:00
Tilmann Scheller
c1df48dde2 [ARM] Add Thumb-2 code size optimization regression test for LSR (register).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217582 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:45:50 +00:00
Tilmann Scheller
171bd26061 [ARM] Add Thumb-2 code size optimization regression test for LSR (immediate).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217581 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:42:17 +00:00
Arnaud A. de Grandmaison
d1c83953b9 [AArch64] Reenable the PBQP test now that the leak issue has been fixed.
David Blaikie's commits r217563 & r217564, which added shared_ptr to the
CostPool, have fixed some memory leak issues exposed by the PBQP with
coalescing constraints.

The sanitizer bot was failing because of those leaks. Now that the leaks
are gone, we can reenable the aarch64/pbqp test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217580 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:39:52 +00:00
Tilmann Scheller
993f84c9bc [ARM] Add Thumb-2 code size optimization regression test for LSL (register).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217579 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:33:39 +00:00
Tilmann Scheller
98ff94a759 [ARM] Add Thumb2 code size optimization regression test for LSL (immediate).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217576 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:29:42 +00:00
Chandler Carruth
3c1808f628 [x86] Fixup r217565 which baked in an assumption about the function
name that breaks on some platforms. This part of the test just doesn't
matter...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217575 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 10:21:25 +00:00
David Xu
65aac0f8e3 Build correct vector filled with undef nodes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217570 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 05:10:28 +00:00
Chandler Carruth
463a096177 [x86] FileCheck-ize this test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217565 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-11 00:13:35 +00:00
Matt Arsenault
5ee5d45e7e R600/SI: Fix losing chain when fixing reg class of loads.
The lost chain resulted in earlier side-effecting nodes
being deleted.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217561 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 23:26:19 +00:00
Matt Arsenault
257e85e7c2 R600: Custom lower frem
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217553 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 21:44:27 +00:00
Arnaud A. de Grandmaison
50196a89d1 [AArch64] Temporarily deactivate the PBQP test, while I investigate some leaks in the allocator
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217531 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 18:40:18 +00:00
Arnaud A. de Grandmaison
438669ca81 [AArch64] Add experimental PBQP support
This adds target-specific support for using the PBQP register allocator on
AArch64, for the A57 CPU.

By default, the PBQP allocator is not used, unless explicitly required
on the command line with "-aarch64-pbqp".
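
A hedged usage sketch (assuming a plain llc invocation; only the
"-aarch64-pbqp" flag comes from this patch):

  llc -mtriple=aarch64 -mcpu=cortex-a57 -aarch64-pbqp input.ll -o out.s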

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217504 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 14:06:10 +00:00
Asiri Rathnayake
3babc141b2 [AArch64] Use a constant pool load for weak symbol references when
using the static relocation model and small code model.

Summary: currently we generate GOT-based relocations for weak symbol
references regardless of the underlying relocation model. This should
be changed so that in the static relocation model we use a constant pool
load instead.
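
The kind of reference in question, as a C sketch (illustrative):

  extern int maybe_defined __attribute__((weak));

  int *addr(void) {
    /* Resolving &maybe_defined needs a relocation: currently a GOT load;
       under the static relocation model and small code model, this patch
       proposes a constant pool load instead. */
    return &maybe_defined;
  }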

Patch from: Keith Walker

Reviewers: Renato Golin, Tim Northover

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217503 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 13:54:38 +00:00
Tim Northover
01dbae1163 ARM: don't size-reduce STMs using the LR register.
The only Thumb-1 multi-store capable of using LR is the PUSH instruction, which
translates to STMDB, so we shouldn't convert STMIAs.

Patch by Sergey Dmitrouk.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217498 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 12:53:28 +00:00
Job Noorman
77f923cfc1 Drop the W postfix on the 16-bit registers.
This ensures the inline assembly register constraints are properly recognised in
TargetLowering::getRegForInlineAsmConstraint.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217479 91177308-0d34-0410-b5e6-96231b3b80d8
2014-09-10 06:58:14 +00:00