Commit Graph

24253 Commits

Author SHA1 Message Date
Jack Carter
d3107fbc54 Fix the invalid opcode for Mips branch instructions in the assembler
For Mips branches, an 18-bit signed offset (the 16-bit
offset field shifted left 2 bits) is added to the
address of the instruction following the branch (the
instruction in the branch delay slot, not the branch
itself) to form a PC-relative effective target address.

Previously, the code generator did not perform this
shift of the immediate branch offset, which resulted
in an incorrect instruction opcode. This patch fixes the issue.

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177687 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 00:29:10 +00:00
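
A minimal C++ sketch of the offset rule described above; the helper names are
assumptions for illustration, not code from the patch:

  #include <cstdint>

  // The 16-bit immediate encodes (Target - AddressOfDelaySlot) >> 2, where the
  // delay slot is the instruction immediately following the branch.
  uint16_t encodeBranchImm(uint64_t BranchAddr, uint64_t TargetAddr) {
    int64_t Delta = (int64_t)TargetAddr - (int64_t)(BranchAddr + 4);
    return (uint16_t)((Delta >> 2) & 0xFFFF);  // the missing >> 2 was the bug
  }

  // Decoding reverses this: sign-extend the field, scale by 4 (shift left
  // 2 bits), and add it to the address of the delay-slot instruction.
  uint64_t decodeBranchTarget(uint64_t BranchAddr, uint16_t Imm) {
    int64_t Off = (int64_t)(int16_t)Imm * 4;
    return BranchAddr + 4 + Off;
  }
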
Jack Carter
25df6a93f3 This patch enables the Mips assembler to use symbols as offsets for instructions
This patch uses the generated instruction info tables to
identify memory (load/store) instructions.
After a successful match, and based on the operand type
and size, it generates additional instructions in the output.

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177685 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 00:05:30 +00:00
Hal Finkel
7697370adf Remove the G8RC_NOX0_and_GPRC_NOR0 PPC register class
As Jakob pointed out in his review of r177423, having a shared ZERO
register between the 32- and 64-bit register classes causes this
odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended,
this adds a ZERO8 register which differentiates the 32- and 64-bit
zeros.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177683 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:45:03 +00:00
Hal Finkel
3ea1b064a0 Fix a register-class comparison bug in PPCCTRLoops
Thanks to Jakob for isolating the underlying problem from the
test case in r177423. The original commit had introduced
asymmetric copy operations, but these turned out to be a work-around
to the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177679 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:23:34 +00:00
Jack Carter
c91b5e197b This patch enables the Mips .set directive to define aliases
The .set directive in the Mips assembler can be
used to set the value of a symbol to an expression. 
This changes the symbol's value and type to conform 
to the expression's.

Syntax: .set symbol, expression

This patch implements the parsing of the above syntax 
and enables the parser to use defined symbols when 
parsing operands.

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177667 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 21:44:16 +00:00
Hal Finkel
7ee74a663a Implement builtin_{setjmp/longjmp} on PPC
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.

Benchmarking the speed at -O3 of:

static jmp_buf env_sigill;

void foo() {
        __builtin_longjmp(env_sigill, 1);
}

int main() {
        ...

        for (int i = 0; i < c; ++i) {
                if (__builtin_setjmp(env_sigill)) {
                        goto done;
                } else {
                        foo();
                }

done:;
        }

        ...
}

vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177666 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 21:37:52 +00:00
Hal Finkel
10f7f2a222 Add support for spilling VRSAVE on PPC
Although there is only one Altivec VRSAVE register, it is a member of
a register class, and we need the ability to spill it. Because this
register is normally callee-preserved and handled by special code this
has never before been necessary. However, this capability will be required by
a forthcoming commit adding SjLj support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177654 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:21 +00:00
Hal Finkel
e9cc0a09ae Correct PPC FRAMEADDR lowering using a pseudo-register
The old code used to lower FRAMEADDR tried to replicate the logic in the real
frame-lowering code that determines whether or not the frame pointer (r31) will
be used. When it seemed as though the frame pointer would not be used, the
stack pointer (r1) was used instead. Unfortunately, because the stack size is
not yet known, this does not work. Instead, this change introduces new
always-reserved pseudo-registers (FP and FP8) that are replaced during prologue
insertion with the real frame-pointer register (either r1 or r31).

It is important that this intrinsic always return a valid frame address because
it is used by Clang to store the frame address as part of code generation for
__builtin_setjmp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177653 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:19 +00:00
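
For context, a small usage example of the builtin whose lowering this fixes
(illustrative only, not taken from the commit; the function name is made up).
Clang lowers __builtin_frame_address(0) through the FRAMEADDR node discussed
above:

  #include <cstdio>

  void show_frame() {
    // Must return a valid frame address whether or not r31 is actually used.
    void *FP = __builtin_frame_address(0);
    std::printf("frame address: %p\n", FP);
  }
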
Renato Golin
3382a84074 Avoid NEON SP-FP unless unsafe-math or Darwin
NEON is not IEEE 754 compliant, so we should avoid lowering single-precision
floating point operations with NEON unless unsafe-math is turned on. The
equivalent VFP instructions are IEEE 754 compliant, but on some cores they're
much slower, so some archs/OSs, such as Swift and Darwin, might still request
that it be on by default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177651 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 18:47:47 +00:00
Jakob Stoklund Olesen
c1ea2c5d6d Add a WriteMicrocoded for ancient microcoded instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177611 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 00:07:17 +00:00
Jakob Stoklund Olesen
3c987aead2 Model prefetches and barriers as loads.
It's not yet clear if these instructions need a more careful model.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177599 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 23:09:53 +00:00
Jakob Stoklund Olesen
eab5f7678b Add a catch-all WriteSystem SchedWrite type.
This is used for all the expensive system instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177598 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 23:09:50 +00:00
Jakob Stoklund Olesen
dcb4d349b6 Annotate the remaining SSE MOV instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177592 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:37:16 +00:00
Jakob Stoklund Olesen
2e9aadda63 Annotate SSE horizontal and integer instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177591 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:37:13 +00:00
Michael Liao
f74e9bf650 Correct cost model for vector shift on AVX2
- After moving the logic that recognizes vector shifts with a scalar amount
  from DAG combining into DAG lowering, all vector shifts are declared as
  needing custom lowering, even those that are legal on AVX. As a result,
  the cost model needs special tuning to identify these legal cases.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177586 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:01:10 +00:00
Jakob Stoklund Olesen
279ad470b6 Add some missing SSE annotations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177540 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 16:56:39 +00:00
Jakob Stoklund Olesen
374a204f02 Annotate remaining IIC_BIN_* instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177539 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 16:56:36 +00:00
Michael Liao
42317ccb5f Fix PR15296
- Move SRA/SRL/SHL lowering support from DAG combination to DAG lowering
  to support extended 256-bit integer in AVX but not AVX2.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177478 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:33:21 +00:00
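
To illustrate the kind of code affected (an assumed example written with the
Clang/GCC vector extensions, not taken from the bug report): a 256-bit integer
shift that an AVX-without-AVX2 target cannot execute natively and must lower
by splitting or emulation:

  // 256-bit per-element variable shift; legal only with AVX2.
  typedef int v8si __attribute__((vector_size(32)));

  v8si shl_v8i32(v8si v, v8si amt) {
    return v << amt;
  }
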
Michael Liao
5c5f1908f0 Mark all variable shifts needing customizing
- Prepare to move logic from DAG combining into DAG lowering. There's no
  functionality change.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177477 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:28:20 +00:00
Michael Liao
4b7ab12d93 Move scalar immediate shift lowering into a dedicated func
- no functionality change



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177476 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:20:36 +00:00
Chad Rosier
580f9c85fd Fix pr13145 - Naming a function like a register name confuses the asm parser.
Patch by Stepan Dyatkovskiy <stpworld@narod.ru>
rdar://13457826

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177463 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:44:03 +00:00
Jakob Stoklund Olesen
361706a718 Annotate various null idioms with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177461 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:31 +00:00
Jakob Stoklund Olesen
f2914c3b2b Annotate SSE float conversions with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177460 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:29 +00:00
Jakob Stoklund Olesen
fea666b540 Annotate X86InstrCMovSetCC.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177459 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:26 +00:00
Chad Rosier
811ddf64af [ms-inline asm] Move the immediate asm rewrite into the target specific
logic as a QOI cleanup.  No functional change.  Tests already in place.
rdar://13456414

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177446 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:58:18 +00:00
Jakob Stoklund Olesen
f36a4afaae Annotate X86InstrCompiler.td with SchedRW lists.
Add a new WriteZero SchedWrite type for the common dependency-breaking
instructions that clear a register.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177442 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:16:56 +00:00
Chad Rosier
d3e7416de7 [ms-inline asm] Create a helper function, CreateMemForInlineAsm, that creates
an X86Operand, but also performs a Sema lookup and adds the sizing directive
when appropriate.  Use this when parsing a bracketed statement.  This is
necessary to get the instruction matching correct as well.  Test case coming
on clang side.
rdar://13455408

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177439 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:11:56 +00:00
Ulrich Weigand
dff4d1522a Add missing mayLoad flag to LHAUX8 and LWAUX.
All pre-increment load patterns need to set the mayLoad flag (since
they don't provide a DAG pattern).

This flag was missing for LHAUX8 and LWAUX; this patch adds it.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177431 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:53:27 +00:00
Ulrich Weigand
8353d1e0e5 Rewrite LHAU8 pattern to use standard memory operand.
As opposed to the pre-increment store patterns, the pre-increment
load patterns were already using standard memory operands, with
the sole exception of LHAU8.

As there's no real reason why LHAU8 should be different here,
this patch simply rewrites the pattern to also use a memri
operand, just like all the other patterns.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177430 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:30 +00:00
Ulrich Weigand
5882e3d828 Rewrite pre-increment store patterns to use standard memory operands.
Currently, pre-increment store patterns are written to use two separate
operands to represent address base and displacement:

  stwu $rS, $ptroff($ptrreg)

This causes problems when implementing the assembler parser, so this
commit changes the patterns to use standard (complex) memory operands
like in all other memory access instruction patterns:

  stwu $rS, $dst

To still match those instructions against the appropriate pre_store
SelectionDAG nodes, the patch uses the new feature that allows a Pat
to match multiple DAG operands against a single (complex) instruction
operand.

Approved by Hal Finkel.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177429 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:04 +00:00
Ulrich Weigand
880d82e3db Fix sub-operand size mismatch in tocentry operands.
The tocentry operand class refers to 64-bit values (it is only used in 64-bit,
where iPTR is a 64-bit type), but its sole suboperand is designated as a 32-bit
type.  This causes a mismatch to be detected at compile time with the TableGen
patch I'll check in shortly.

To fix this, this commit changes the suboperand to a 64-bit type as well.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177427 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:50:30 +00:00
Ulrich Weigand
58ebc04078 Remove an invalid and unnecessary Pat pattern from the X86 backend:
def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
            (MOV64rm tglobaltlsaddr :$dst)>;

This pattern is invalid because the MOV64rm instruction expects a
source operand of type "i64mem", which is a subclass of X86MemOperand
and thus actually consists of five MI operands, but the Pat provides
only a single MI operand ("tglobaltlsaddr" matches an SDnode of
type ISD::TargetGlobalTLSAddress and provides a single output).

Thus, if the pattern were ever matched, subsequent uses of the MOV64rm
instruction pattern would access uninitialized memory.  In addition,
with the TableGen patch I'm about to check in, this would actually be
reported as a build-time error.

Fortunately, the pattern does in fact never match, for at least two
independent reasons.

First, the code generator actually never generates a pattern of the
form (load (X86Wrapper (tglobaltlsaddr))).  For most combinations of
TLS and code models, (tglobaltlsaddr) represents just an offset that
needs to be added to some base register, so it is never directly
dereferenced.  The only exception is the initial-exec model, where
(tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot,
which *is* in fact directly dereferenced: but in that case, the
X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.

Second, even if some patterns along those lines *were* ever generated,
we should not need an extra Pat pattern to match it.  Instead, the
original MOV64rm instruction pattern ought to match directly, since
it uses an "addr" operand, which is implemented via the SelectAddr
C++ routine; this routine is supposed to accept the full range of
input DAGs that may be implemented by a single mov instruction,
including those cases involving ISD::TargetGlobalTLSAddress (and
actually does so e.g. in the initial-exec case as above).

To avoid build breaks (due to the above-mentioned error) after the
TableGen patch is checked in, I'm removing this Pat here.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177426 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:49:52 +00:00
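
For reference, an initial-exec TLS access of the kind mentioned above can be
written as follows (an illustrative snippet with made-up names, not part of
the patch); such an access dereferences a GOT slot through an RIP-relative
address and therefore uses X86WrapperRIP rather than X86Wrapper:

  // Initial-exec TLS: the (pc-relative) GOT slot holding the offset is loaded.
  extern __thread int tls_counter __attribute__((tls_model("initial-exec")));

  int read_tls_counter() {
    return tls_counter;
  }
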
Hal Finkel
a548afc98f Prepare to make r0 an allocatable register on PPC
Currently the PPC r0 register is unconditionally reserved. There are two reasons
for this:

 1. r0 is treated specially (as the constant 0) by certain instructions, and so
    cannot be used with those instructions as a regular register.

 2. r0 is used as a temporary register in the CR-register spilling process
    (where, under some circumstances, we require two GPRs).

This change addresses the first reason by introducing a restricted register
class (without r0) for use by those instructions that treat r0 specially. These
register classes have a new pseudo-register, ZERO, which represents the r0-as-0
use. This has the side benefit of making the existing target code simpler (and
easier to understand), and will make it clear to the register allocator that
uses of r0 as 0 don't conflict with real uses of the r0 register.

Once the CR spilling code is improved, we'll be able to allocate r0.

Adding these extra register classes, for some reason unclear to me, causes
requests to the target to copy 32-bit registers to 64-bit registers. The
resulting code seems correct (and causes no test-suite failures), and the new
test case covers this new kind of asymmetric copy.

As r0 is still reserved, no functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177423 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:51:05 +00:00
Nadav Rotem
b05130e1b2 Optimize sext <4 x i8> and <4 x i16> to <4 x i64>.
Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177421 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:38:27 +00:00
Jakob Stoklund Olesen
a45a22758d Annotate X86InstrExtension.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177418 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:03:58 +00:00
Jakob Stoklund Olesen
528c761124 Annotate a lot of X86InstrInfo.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177417 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:03:55 +00:00
Chad Rosier
023c880220 [ms-inline asm] Move the size directive asm rewrite into the target specific
logic as a QOI cleanup.
rdar://13445327

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177413 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 17:32:17 +00:00
Hal Finkel
ec2e968b7a Cleanup PPC64 unaligned i64 load/store
Remove an accidentally-added instruction definition and add a comment in the
test case. This is in response to a post-commit review by Bill Schmidt.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177404 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 15:23:39 +00:00
Renato Golin
5ad5f5931e Improve long vector sext/zext lowering on ARM
The ARM backend currently has poor codegen for long sext/zext
operations, such as v8i8 -> v8i32. This patch addresses this
by performing a custom expansion in ARMISelLowering. It also
adds/changes the cost of such lowering in ARMTTI.

This partially addresses PR14867.

Patch by Pete Couperus

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177380 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 08:15:38 +00:00
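
As an illustration of the operation being improved (written with Clang's
__builtin_convertvector and vector extensions; the type and function names are
made up), a v8i8 -> v8i32 widening looks like this at the source level:

  typedef signed char v8i8 __attribute__((vector_size(8)));
  typedef int v8i32 __attribute__((vector_size(32)));

  v8i32 widen(v8i8 v) {
    return __builtin_convertvector(v, v8i32);  // element-wise sign extension
  }
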
Hal Finkel
54e57f8cb7 Don't reserve R31 on PPC64 unless the frame pointer is needed
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177379 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 08:09:38 +00:00
Hal Finkel
9f2518cdc6 Fix a sign-extension bug in PPCCTRLoops
Don't sign extend the immediate value from the OR instruction in
an LIS/OR pair.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177361 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 23:58:28 +00:00
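
A minimal sketch of the constant reconstruction in question (assumed names,
not the PPCCTRLoops code itself): the low 16 bits coming from the ORI must be
combined zero-extended, not sign-extended:

  #include <cstdint>

  // Rebuild the constant materialized by an LIS/ORI pair.
  int64_t materializeLisOri(int16_t LisImm, uint16_t OriImm) {
    int64_t Hi = (int64_t)LisImm * 65536;  // LIS puts its immediate in the high half
    int64_t Lo = OriImm;                   // ORI's immediate is zero-extended (the fix)
    return Hi | Lo;
  }
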
Chad Rosier
ee29c16890 [ms-inline asm] Avoid emitting a redundant sizing directive, if we've already
parsed one.  Test case coming shortly.
rdar://13446980

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177347 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 23:31:24 +00:00
Hal Finkel
08a215c286 Fix PPC unaligned 64-bit loads and stores
PPC64 supports unaligned loads and stores of 64-bit values, but
in order to use the r+i forms, the offset must be a multiple of 4.
Unfortunately, this cannot always be determined by examining the
immediate itself because it might be available only via a TOC entry.

In order to get around this issue, we additionally predicate the
selection of the r+i form on the alignment of the load or store
(forcing it to be at least 4 in order to select the r+i form).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177338 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 23:00:58 +00:00
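
A rough sketch of the added predicate (illustrative pseudologic with assumed
names, not the actual selection code):

  // The r+i (DS) form scales its immediate by 4, so the offset must be a
  // multiple of 4.  When the offset may only be known via a TOC entry, fall
  // back to requiring an access alignment of at least 4.
  bool canUseRPlusImmForm(bool OffsetKnown, long Offset, unsigned Alignment) {
    if (OffsetKnown)
      return (Offset % 4) == 0;
    return Alignment >= 4;
  }
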
Arnold Schwaighofer
bf37bf9e21 ARM cost model: Make some vector integer to float casts cheaper
The default logic marks them as too expensive.

For example, before this patch we estimated:
  cost of 16 for instruction:   %r = uitofp <4 x i16> %v0 to <4 x float>

While this translates to:
  vmovl.u16 q8, d16
  vcvt.f32.u32  q8, q8

All other costs are left to the values assigned by the fallback logic. These
costs are mostly reasonable in the sense that they get progressively more
expensive as the instruction sequences emitted get longer.

radar://13445992

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177334 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 22:47:09 +00:00
Arnold Schwaighofer
01f2571014 ARM cost model: Correct cost for some cheap float to integer conversions
Fix the cost of some "cheap" cast instructions. Before this patch we used to
estimate for example:
  cost of 16 for instruction:   %r = fptoui <4 x float> %v0 to <4 x i16>

While we would emit:
  vcvt.s32.f32  q8, q8
  vmovn.i32 d16, q8
  vuzp.8  d16, d17

All other costs are left to the values assigned by the fallback logic. These
costs are mostly reasonable in the sense that they get progressively more
expensive as the instruction sequences emitted get longer.

radar://13434072

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177333 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 22:47:06 +00:00
Jakob Stoklund Olesen
9beae49622 Add SchedRW annotations to most of X86InstrSSE.td.
We hitch a ride with the existing OpndItins class that was used to add
instruction itinerary classes in the many multiclasses in this file.

Use the link provided by the X86FoldableSchedWrite.Folded to find the
right SchedWrite for folded loads.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177326 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 22:01:35 +00:00
Jakob Stoklund Olesen
30d25f0a30 Annotate X86 arithmetic instructions with SchedRW lists.
This new-style scheduling information is going to replace the
instruction itineraries.

This also serves as a test case for Andy's fix in r177317.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177323 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 21:32:39 +00:00
Hal Finkel
e39b107c46 Fix 80-col. violations in PPCCTRLoops
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177296 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:46 +00:00
Hal Finkel
9887ec31e6 Fix large count and negative constant count handling in PPCCTRLoops
This commit fixes an assert that would occur on loops with large constant counts
(like looping for ((uint32_t) -1) iterations on PPC64). The existing code did
not handle counts that it computed to be negative (asserting instead), but
these can be created with valid inputs.

This bug was discovered by bugpoint while I was attempting to isolate a
completely different problem.

Also, in writing test cases for the negative-count problem, I discovered that
the ori/lis handling was broken (there was a typo which caused the logic that
was supposed to detect these pairs and extract the iteration count to always
fail). This has now also been corrected (and is covered by one of the new test
cases).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177295 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:44 +00:00
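
A small example of the kind of loop that triggered the assert (illustrative
only, not one of the new test cases):

  #include <cstdint>

  extern void body(uint32_t);

  // Loops for (uint32_t)-1 iterations; treated as a signed value, the computed
  // constant trip count appears negative.
  void spin() {
    for (uint32_t i = 0; i < (uint32_t)-1; ++i)
      body(i);
  }
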
Hal Finkel
1448d06156 Cleanup initial-value constants in PPCCTRLoops
Because the initial-value constants had not been added to the list
of instructions considered for DCE, the resulting code had redundant
constant-materialization instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177294 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-18 17:40:27 +00:00