Commit Graph

54862 Commits

Hal Finkel
9770be91de Add A2 to the list of PPC CPUs recognized by Linux host CPU-type detection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158322 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 19:56:57 +00:00
Hal Finkel
0a1852b774 Emit the two-operand form of the PPC mfcr instruction as mfocrf.
This is necessary on Linux and supported on Darwin; see PR2604.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158315 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 15:43:15 +00:00
Hal Finkel
2bd0acd250 Add local CPU detection for Linux PPC.
This functionality mirrors that available on PPC/Darwin.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158314 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 15:43:13 +00:00
Hal Finkel
622382fc5e Add POWER6 and POWER7 CPU types to the PPC backend.
No functional change; these will be used by upcoming scheduler enhancements.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158313 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 15:43:08 +00:00
Jakob Stoklund Olesen
6f36fa981a Write llvm-tblgen backends as functions instead of sub-classes.
The TableGenBackend base class doesn't do much, and will be removed
completely soon.

Patch by Sean Silva!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158311 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 15:37:55 +00:00
Bill Wendling
ad5c880892 Re-enable the CMN instruction.
We turned off the CMN instruction because we weren't getting its semantics
right. When comparing against an immediate, however, it is okay to use the
CMN instruction.
<rdar://problem/7569620>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158302 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 08:07:26 +00:00
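
A minimal, hypothetical sketch of the idea behind the CMN change above: CMN Rn, #imm computes the flags of Rn + imm, which (apart from edge cases such as imm == 0 or overflow at INT_MIN, the kind of subtlety the commit alludes to) matches CMP Rn, #-imm, so a compare whose immediate is not encodable but whose negation is can be flipped to CMN. Everything below (isEncodableImm, the printf-based "emission") is illustrative only, not LLVM's actual code.

    #include <cstdint>
    #include <cstdio>

    // ARM "modified immediate": an 8-bit value rotated right by an even amount.
    // Simplified check for illustration; real encodings have more cases.
    static bool isEncodableImm(uint32_t v) {
      for (unsigned rot = 0; rot < 32; rot += 2) {
        uint32_t undone = (rot == 0) ? v : ((v << rot) | (v >> (32 - rot)));
        if (undone <= 0xFFu)
          return true;
      }
      return false;
    }

    int main() {
      int32_t imm = -1;  // "cmp r0, #-1" has no directly encodable immediate
      uint32_t u = static_cast<uint32_t>(imm);
      if (isEncodableImm(u))
        std::printf("emit: cmp r0, #%d\n", imm);
      else if (isEncodableImm(0u - u))
        std::printf("emit: cmn r0, #%d\n", -imm);  // same flags as cmp r0, #imm
      else
        std::printf("materialize %d in a register first\n", imm);
      return 0;
    }
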
Benjamin Kramer
7a99b467df InstCombine: factor code better.
No functionality change.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158301 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 08:01:25 +00:00
Benjamin Kramer
66821d9020 InstCombine: Turn (zext A) == (B & (1<<X)-1) into A == (trunc B), narrowing the compare.
This saves a cast, and zext is more expensive on platforms with subreg support
than trunc is. This occurs in the BSD implementation of memchr(3), see PR12750.
On the synthetic benchmark from that bug, stupid_memchr and bsd_memchr now have
the same performance when neither function is inlined.

stupid_memchr: 323.0us
bsd_memchr: 321.0us
memchr: 479.0us

where memchr is the llvm-gcc-compiled bsd_memchr from OS X Lion's libc. When
inlining is enabled, bsd_memchr still regresses down to the llvm-gcc memchr
time; I haven't fully understood the issue yet, but something is grossly
mangling the loop after inlining.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158297 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 20:35:00 +00:00
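
Since the commit message above states the transform only in IR terms, here is a tiny self-contained check of the equivalence in the simplest case (mask width equal to A's bit width, 8 bits here). It illustrates the rewrite, not LLVM's implementation:

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    // Checks: (zext A to i32) == (B & 0xFF)  <==>  A == (trunc B to i8)
    int main() {
      for (uint32_t a = 0; a <= 0xFF; ++a) {            // all 8-bit values of A
        for (uint32_t b = 0; b <= 0xFFFF; b += 0x101) { // a sample of B values
          bool wide   = (a == (b & 0xFFu));             // zext + masked compare
          bool narrow = ((uint8_t)a == (uint8_t)b);     // narrowed (trunc) compare
          assert(wide == narrow);
        }
      }
      std::puts("equivalence holds on the sampled values");
      return 0;
    }
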
Hal Finkel
71ffcfe9f8 Enable ILP scheduling for all nodes by default on PPC.
Over the entire test-suite, this has an insignificantly negative average
performance impact, but reduces some of the worst slowdowns from the
anti-dep. change (r158294).

Largest speedups:
SingleSource/Benchmarks/Stanford/Quicksort - 28%
SingleSource/Benchmarks/Stanford/Towers - 24%
SingleSource/Benchmarks/Shootout-C++/matrix - 23%
MultiSource/Benchmarks/SciMark2-C/scimark2 - 19%
MultiSource/Benchmarks/MiBench/automotive-bitcount/automotive-bitcount - 15%
(matrix and automotive-bitcount were both in the top-5 slowdown list from the
anti-dep. change)

Largest slowdowns:
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 28%
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 26%
MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan - 21%
SingleSource/Benchmarks/CoyoteBench/lpbench - 20%
MultiSource/Applications/d/make_dparser - 16%

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158296 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 19:32:29 +00:00
Nadav Rotem
3c98ce242e Add AutoUpgrade support for the SSE4 ptest intrinsics.
Patch by Michael Kuperstein.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158295 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 18:42:51 +00:00
Hal Finkel
01a90f4f8f Use critical anti-dep. breaking on all PPC targets, but also add other register classes.
Using 'all' instead of 'critical' would be better because it would make it easier to
satisfy the bundling constraints, but, as noted in the FIXME, that is currently not
possible with the CRs.

This yields an average 1% speedup over the entire test suite (on Power 7). Largest speedups:
SingleSource/Benchmarks/Shootout-C++/moments - 40%
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 28%
SingleSource/Benchmarks/BenchmarkGame/nsieve-bits - 26%
SingleSource/Benchmarks/McGill/misr - 23%
MultiSource/Applications/JM/ldecod/ldecod - 22%

Largest slowdowns:
SingleSource/Benchmarks/Shootout-C++/matrix - -29%
SingleSource/Benchmarks/Shootout-C++/ary3 - -22%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -18%
SingleSource/Benchmarks/Shootout-C++/ary - -17%
MultiSource/Benchmarks/MiBench/automotive-bitcount/automotive-bitcount - -15%

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158294 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 11:15:36 +00:00
Craig Topper
cfd3ed9eaf Add intrinsics for immediate form of XOP vprot instructions. Use i128mem instead of f128mem for integer XOP instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158291 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 07:31:56 +00:00
Hal Finkel
0a3e33b633 Improve ext/trunc patterns on PPC64.
The PPC64 backend had patterns for i32 <-> i64 extensions and truncations that
would leave self-moves in the final assembly. Replacing those patterns with ones
based on the SUBREG builtins yields better-looking code.

Thanks to Jakob and Owen for their suggestions in this matter.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158283 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 22:10:19 +00:00
Craig Topper
2a5dc43bd9 Use XOP vpcom intrinsics in patterns instead of a target specific SDNode type. Remove the custom lowering code that selected the SDNode type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158279 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 17:02:24 +00:00
Craig Topper
c29106b36f Replace XOP vpcom intrinsics with fewer intrinsics that take the immediate as an argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158278 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 16:46:13 +00:00
Aaron Ballman
133fe7542c Disabling a spurious deprecation warning about using PathV1 from within the PathV1 implementation file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158274 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 13:59:29 +00:00
Aaron Ballman
19472a6f5f Fixing a typo in the comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158273 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 13:46:36 +00:00
Benjamin Kramer
f33a79c590 Allocate the contents of DwarfDebug's StringMaps in a single big BumpPtrAllocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158265 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 10:34:15 +00:00
Duncan Sands
af0d459e36 Silence a gcc-4.6 warning: GCC fails to understand that secondReg and cmpOp2 are
correlated, and thinks that cmpOp2 may be used uninitialized.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158263 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 10:04:03 +00:00
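
The warning Duncan describes is the classic "correlated flag and value" shape sketched below (the names here are made up; secondReg and cmpOp2 in the commit refer to LLVM code). GCC cannot always prove that the guarded read happens only after the guarded write, and the usual silencer is the redundant-looking initialization:

    #include <cstdio>

    int classify(int kind, int value) {
      bool hasSecond = false;
      int second = 0;          // the fix: initialize even though every path
                               // that reads 'second' also sets it first
      if (kind == 2) {
        hasSecond = true;
        second = value * 2;    // written only when hasSecond is true
      }
      if (hasSecond)
        return second;         // read only when hasSecond is true, but GCC
                               // cannot always see the correlation
      return value;
    }

    int main() {
      std::printf("%d %d\n", classify(1, 5), classify(2, 5));
      return 0;
    }
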
Hal Finkel
8bf75ed41c Enable tail merging on PPC.
Tail merging had been disabled on PPC because it would disturb bundling decisions
made during pre-RA scheduling on the 970 cores. Now, however, all bundling decisions
are made during post-RA scheduling, and tail merging is generally beneficial (the
average test-suite speedup is insignificantly positive).

Largest test-suite speedups:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 30%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - 23%
SingleSource/Benchmarks/Shootout-C++/ary - 21%
SingleSource/Benchmarks/Stanford/Queens - 17%

Largest slowdowns:
MultiSource/Benchmarks/MiBench/security-sha/security-sha - 24%
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 22%
MultiSource/Applications/JM/ldecod/ldecod - 14%
MultiSource/Benchmarks/mediabench/g721/g721encode/encode - 9%

This is improved by using full (instead of just critical) anti-dependency breaking,
but doing so still causes miscompiles and so cannot yet be enabled by default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158259 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 03:14:50 +00:00
Andrew Trick
ba17293a88 Register pressure: added getPressureAfterInstr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158256 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 02:16:58 +00:00
Jakob Stoklund Olesen
8879480ed7 Sketch a LiveRegMatrix analysis pass.
The LiveRegMatrix represents the live ranges of assigned virtual
registers in a LiveIntervalUnion per register unit. This is not
fundamentally different from the interference tracking in RegAllocBase
that both RABasic and RAGreedy use.

The important differences are:

- LiveRegMatrix tracks interference per register unit instead of per
  physical register. This makes interference checks cheaper and
  assignments slightly more expensive. For example, the ARM D7 register
  has 24 aliases, so we would check 24 physregs before assigning to one.
  With unit-based interference, we check 2 units before assigning to 2
  units.

- LiveRegMatrix caches regmask interference checks. That is currently
  duplicated functionality in RABasic and RAGreedy.

- LiveRegMatrix is a pass which makes it possible to insert
  target-dependent passes between register allocation and rewriting.
  Such passes could tweak the register assignments with interference
  checking support from LiveRegMatrix.

Eventually, RABasic and RAGreedy will be switched to LiveRegMatrix.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158255 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 02:13:10 +00:00
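
A conceptual sketch (plain C++, none of these types are real LLVM classes) of the per-register-unit bookkeeping the commit describes: interference is recorded once per unit, and checking a physical register means querying only its units rather than every register that aliases it. The real LiveRegMatrix compares live ranges in a LiveIntervalUnion per unit; simple occupancy stands in for that here.

    #include <cstdio>
    #include <map>
    #include <set>
    #include <vector>

    using VirtReg = int;
    using RegUnit = int;

    struct ToyMatrix {
      std::map<RegUnit, std::set<VirtReg>> units;  // who occupies each unit

      // True if any of the physical register's units is already occupied.
      bool checkInterference(const std::vector<RegUnit> &physUnits) const {
        for (RegUnit u : physUnits) {
          auto it = units.find(u);
          if (it != units.end() && !it->second.empty())
            return true;
        }
        return false;
      }

      void assign(VirtReg vr, const std::vector<RegUnit> &physUnits) {
        for (RegUnit u : physUnits)
          units[u].insert(vr);     // an assignment touches each unit once
      }
    };

    int main() {
      // Pretend D6 and D7 are each made of two S-register units; checking D7
      // looks at 2 units, not at every register that aliases D7.
      ToyMatrix m;
      std::vector<RegUnit> d6 = {12, 13}, d7 = {14, 15};
      m.assign(/*vr=*/100, d7);
      std::printf("D7 interferes: %d, D6 interferes: %d\n",
                  m.checkInterference(d7), m.checkInterference(d6));
      return 0;
    }
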
Jack Carter
b9bfe48e0a Test commit
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158250 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 00:27:55 +00:00
Jakob Stoklund Olesen
fe17bdbb50 Also compute MBB live-in lists in the new rewriter pass.
This deduplicates some code from the optimizing register allocators, and
it means that it is now possible to change the register allocators'
solutions simply by editing the VirtRegMap between the register
allocator pass and the rewriter.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158249 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 00:14:47 +00:00
Dmitri Gribenko
77592fe39c Convert comments to proper Doxygen comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158248 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 00:01:45 +00:00
Jakob Stoklund Olesen
05ec712e7f Reintroduce VirtRegRewriter.
OK, not really. We don't want to reintroduce the old rewriter hacks.

This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.

The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.

These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158244 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 23:44:45 +00:00
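
A toy sketch of the rewriting step described above: walk the instructions and replace each virtual-register operand with its assignment from a map. LLVM's VirtRegMap and the kill-flag insertion are far richer; this only shows the map-driven substitution.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Toy "instructions" whose operands are register names; "%vN" is virtual,
    // anything else is physical. Not LLVM's representation.
    using Inst = std::vector<std::string>;

    static void rewrite(std::vector<Inst> &code,
                        const std::map<std::string, std::string> &assignment) {
      for (Inst &inst : code)
        for (std::string &op : inst)
          if (!op.empty() && op[0] == '%') {
            auto it = assignment.find(op);
            if (it != assignment.end())
              op = it->second;     // virtual -> assigned physical register
          }
    }

    int main() {
      std::vector<Inst> code = {{"add", "%v1", "%v2", "%v2"}, {"ret", "%v1"}};
      std::map<std::string, std::string> vrm = {{"%v1", "r3"}, {"%v2", "r4"}};
      rewrite(code, vrm);
      for (const Inst &inst : code) {
        for (const std::string &op : inst)
          std::printf("%s ", op.c_str());
        std::printf("\n");
      }
      return 0;
    }
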
Nuno Lopes
0f68fbb9e5 canonicalize:
-%a + 42
into
42 - %a

previously we were emitting:
-(%a + 42)

This fixes the infinite loop in PR12338. The generated code is still not perfect, though.
Will work on that next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158237 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 22:30:05 +00:00
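
In C terms, the canonicalization above is the algebraic identity (-a) + C == C - a, which is exact under wraparound arithmetic. A quick sampled check, using unsigned arithmetic to mirror LLVM's wrapping semantics (illustration only):

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint32_t C = 42;
      for (uint64_t i = 0; i <= 0xFFFF; ++i) {
        uint32_t a = static_cast<uint32_t>(i * 0x10001u);  // spread of sample values
        assert((0u - a) + C == C - a);                     // (-a) + 42 == 42 - a
      }
      std::puts("(-a) + 42 == 42 - a on the sampled values");
      return 0;
    }
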
Evan Cheng
791e2e0867 Start implementing pre-ra if-converter: using speculation and selects to eliminate branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158234 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 21:53:50 +00:00
Andrew Trick
eb81df7d95 TargetInstrInfo hooks implemented in codegen should be declared pure virtual.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158233 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 21:52:38 +00:00
Duncan Sands
841f426175 Reapply commit 158073 with a fix (the testcase was already committed). The
problem was that by moving instructions around inside the function, the pass
could accidentally move the iterator being used to advance over the function
too.  Fix this by only processing the instruction equal to the iterator, and
leaving processing of instructions that might not be equal to the iterator
to later (later = after traversing the basic block; it could also wait until
after traversing the entire function, but this might make the sets quite big).
Original commit message:

Grab-bag of reassociate tweaks.  Unify handling of dead instructions and
instructions to reoptimize.  Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on).  No need for WeakVH any more: use
an AssertingVH instead.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158226 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 20:15:33 +00:00
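
The shape of the fix Duncan describes can be shown with a plain container: anything the transform wants to revisit that is not the element the iterator currently points at is deferred to a worklist processed after the traversal, so the iterator can never be moved out from under the loop. A minimal sketch of that pattern, not the reassociate code itself:

    #include <cstdio>
    #include <list>
    #include <set>

    int main() {
      std::list<int> bb = {1, 2, 3, 4, 5};   // stand-in for a basic block
      std::set<int> redo;                    // values flagged for later processing

      for (auto it = bb.begin(); it != bb.end(); ++it) {
        int current = *it;
        // Safe: 'current' is exactly the element the iterator points at.
        std::printf("processing %d now\n", current);
        // Pretend the transform also wants to revisit a different element;
        // touching it now could move it (and the iterator with it), so defer.
        redo.insert(current + 1);
      }
      for (int v : redo)
        std::printf("reprocessing %d after the traversal\n", v);
      return 0;
    }
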
Hal Finkel
16b16ac840 Remove the TODO statement in the PPC README re: CTR loops
As Chris points out, this can now be removed!

TODO: check if the associated section on viterbi's inner loop can also be removed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158224 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 20:02:09 +00:00
Hal Finkel
7255d2a808 Enable PPC CTR loop formation by default.
Thanks to Jakob's help, this now causes no new test suite failures!

Over the entire test suite, this gives an average 1% speedup. The largest speedups are:
SingleSource/Benchmarks/Misc/pi - 108%
SingleSource/Benchmarks/CoyoteBench/lpbench - 54%
MultiSource/Benchmarks/Prolangs-C/unix-smail/unix-smail - 50%
SingleSource/Benchmarks/Shootout/ary3 - 32%
SingleSource/Benchmarks/Shootout-C++/matrix - 30%

The largest slowdowns are:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - -30%
MultiSource/Benchmarks/Prolangs-C/bison/mybison - -25%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -22%
MultiSource/Applications/d/make_dparser - -14%
SingleSource/Benchmarks/Shootout-C++/ary - -13%

In light of these slowdowns, additional profiling work is obviously needed!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158223 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 19:19:53 +00:00
Hal Finkel
7e5631202a Mark the PPC CTRRC and CTRRC8 register classes as non-allocatable.
Marking these classes as non-allocatable allows CTR loop generation to
work correctly with the block placement passes, etc. These register
classes are currently used only by some unused TCRETURN patterns.
In future cleanup, these will be removed.

Thanks again to Jakob for suggesting this fix to the CTR loop problem!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158221 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 19:02:08 +00:00
Manman Ren
45d53b866e Enable optimization for integer ABS on X86 if Subtarget has CMOV.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158220 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 18:58:26 +00:00
Chad Rosier
28dd960cd1 Fix a crash in APInt::lshr when shiftAmt > BitWidth.
Patch by James Benton <jbenton@vmware.com>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158213 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 18:04:52 +00:00
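
In plain C++, shifting by an amount greater than or equal to the bit width is undefined behaviour, so a logical-shift-right helper has to special-case the over-shift the same way the fix above does (by returning zero). A minimal sketch of the intent, not APInt's code:

    #include <cassert>
    #include <cstdint>

    // Logical shift right that tolerates shiftAmt >= the bit width.
    static uint64_t lshr64(uint64_t value, unsigned shiftAmt) {
      if (shiftAmt >= 64)
        return 0;               // an over-shift drops every bit
      return value >> shiftAmt; // safe: shiftAmt < 64 here
    }

    int main() {
      assert(lshr64(0xFF00u, 8) == 0xFFu);
      assert(lshr64(0xFF00u, 64) == 0);    // would be UB as a raw '>>'
      assert(lshr64(0xFF00u, 200) == 0);
      return 0;
    }
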
Andrew Trick
c36d033c08 Fix Target->Codegen dependence.
Bulk move of TargetInstrInfo implementation into
TargetInstrInfoImpl. This is dirty because the code isn't part of the
TargetInstrInfoImpl class, nor should it be, because the methods are
not target hooks. However, it's the current mechanism for keeping
libTarget useful outside the backend. You'll get a not-so-nice link
error if you invoke a TargetInstrInfo method that depends on CodeGen.

The TargetInstrInfoImpl class should probably be removed since it
doesn't really solve this problem.

To really fix this, we probably need separate interfaces for the
CodeGen/nonCodeGen sides of TargetInstrInfo.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158212 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 17:23:27 +00:00
Nuno Lopes
eb90adffe1 BoundsChecking: add support for ConstantPointerNull. Fixes a bunch of instrumentation failures in loops with reallocs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158210 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 16:31:42 +00:00
Hal Finkel
09fdc7baae Disable the PPC CTR-Loops pass by default.
The pass itself works well, but something in the Machine* infrastructure
does not understand terminators which define registers. Without the ability
to use the block-placement pass, etc. this causes performance regressions (and
so is turned off by default). Turning off the analysis turns off the problems
with the Machine* infrastructure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158206 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:25 +00:00
Hal Finkel
daa03ec604 Fix a bug in the new PPC CTR-Loops pass.
The code which tests for an induction operation cannot assume that any
ADDI instruction will have a register operand because the operand could
also be a frame index; for example:
    %vreg16<def> = ADDI8 <fi#0>, 0; G8RC:%vreg16

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158205 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:23 +00:00
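
The bug boils down to calling a register accessor on an operand that may not be a register. The toy operand type below (not MachineOperand; isReg/getReg naming is only borrowed for readability) shows the guarded access the fix amounts to:

    #include <cassert>
    #include <cstdio>

    // Toy stand-in for a machine operand that is either a register or a frame index.
    struct ToyOperand {
      enum Kind { Register, FrameIndex } kind;
      int value;
      bool isReg() const { return kind == Register; }
      unsigned getReg() const { assert(isReg() && "not a register operand"); return value; }
      int getIndex() const { assert(!isReg() && "not a frame index"); return value; }
    };

    // The shape of the fix: test the operand kind before treating it as a register,
    // because an ADDI operand can be "<fi#0>" rather than a register.
    static bool isInductionCandidate(const ToyOperand &op, unsigned inductionReg) {
      return op.isReg() && op.getReg() == inductionReg;
    }

    int main() {
      ToyOperand reg = {ToyOperand::Register, 16};
      ToyOperand fi  = {ToyOperand::FrameIndex, 0};
      std::printf("%d %d\n", isInductionCandidate(reg, 16), isInductionCandidate(fi, 16));
      return 0;
    }
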
Hal Finkel
99f823f943 Add the PPCCTRLoops pass: a PPC machine-code-level optimization pass to form CTR-based loop branching code.
This pass is derived from the Hexagon HardwareLoops pass. The only significant enhancement over the Hexagon
pass is that PPCCTRLoops will also attempt to delete the replaced add and compare operations if they are
no longer otherwise used. Also, an invalid preheader DebugLoc is not used.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158204 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:21 +00:00
Duncan Sands
69938a85bd Revert commit 158073 while waiting for a fix. The issue is that reassociate
can move instructions within the instruction list.  If the instruction just
happens to be the one the basic block iterator is pointing to, and it is
moved to a different basic block, then we get into an infinite loop due to
the iterator running off the end of the basic block (for some reason this
doesn't fire any assertions).  Original commit message:

Grab-bag of reassociate tweaks.  Unify handling of dead instructions and
instructions to reoptimize.  Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on).  No need for WeakVH any more: use
an AssertingVH instead.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158199 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 13:37:30 +00:00
Manman Ren
9236362a64 X86: optimize generated code for integer ABS
This patch will generate the following for integer ABS:
      movl    %edi, %eax
      negl    %eax
      cmovll  %edi, %eax
INSTEAD OF
      movl    %edi, %ecx
      sarl    $31, %ecx
      leal    (%rdi,%rcx), %eax
      xorl    %ecx, %eax

There exists a target-independent DAG combine for integer ABS, which converts
integer ABS to sar+add+xor. For X86, we match this pattern back to neg+cmov. 
This is implemented in PerformXorCombine.

rdar://10695237


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158175 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 22:39:10 +00:00
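
For reference, the two lowerings in the commit correspond to these two branch-free ways of computing |x| in C++ (both wrap at INT_MIN, like the hardware). The first is the generic sar+add+xor combine; the second expresses what negl+cmovll does. This is an illustration of the two shapes, not the backend code:

    #include <cassert>
    #include <cstdint>

    // sar + add + xor: sign-mask trick (assumes arithmetic right shift for
    // negative values, as on mainstream compilers).
    static int32_t abs_sar_add_xor(int32_t x) {
      uint32_t mask = static_cast<uint32_t>(x >> 31);   // 0 or 0xFFFFFFFF
      uint32_t ux = static_cast<uint32_t>(x);
      return static_cast<int32_t>((ux + mask) ^ mask);
    }

    // neg + cmov: negate, then conditionally pick the original value.
    static int32_t abs_neg_cmov(int32_t x) {
      int32_t neg = static_cast<int32_t>(0u - static_cast<uint32_t>(x));  // NEG
      return x < 0 ? neg : x;                                             // the CMOV
    }

    int main() {
      for (int32_t x : {0, 1, -1, 42, -42, 2147483647, -2147483647})
        assert(abs_sar_add_xor(x) == abs_neg_cmov(x));
      return 0;
    }
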
Nadav Rotem
bdcae38256 Do not optimize the used bits of the x86 vselect condition operand, when the condition operand is a vector of 1-bit predicates.
This may happen on MIC devices.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158168 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 20:53:48 +00:00
Nadav Rotem
2f6622c7bf Fix a bug in FoldSelectOpOp. Bitcast ops may change the number of vector elements, which may disagree with the select condition type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 20:28:57 +00:00
Andrew Trick
397f4e3583 Continue factoring computeOperandLatency. Use it for ARM hasHighOperandLatency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158164 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 19:42:04 +00:00
Andrew Trick
68b16541cc ARM getOperandLatency rewrite.
Match expectations of the new latency API. Clean up and make the logic consistent.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158163 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 19:42:00 +00:00
Andrew Trick
f377071bf8 ARM getOperandLatency should return -1 for unknown, consistent with API
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158162 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 19:41:58 +00:00
Andrew Trick
ed7a51e692 Fix ARM getInstrLatency logic to work with the current API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158161 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 19:41:55 +00:00
Manman Ren
e6fc9d40b3 PR13046: we can't replace usage of SUB with CMP in the lowering phase.
It will cause an assertion failure later on.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158160 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 19:27:33 +00:00
Rafael Espindola
c07f5bbd3b Use a base register instead of an index register with the local dynamic model.
Fixes pr13048.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158158 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 18:39:19 +00:00