with a SimpleValueType, while an EVT supports equality and
inequality comparisons with SimpleValueType.
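As a hedged illustration (not code from the patch, and the helper name is made
up), the kind of comparison being described is the common pattern of checking
an EVT directly against a SimpleValueType enumerator:

    #include "llvm/CodeGen/ValueTypes.h"
    using namespace llvm;

    // EVT supports ==/!= against a SimpleValueType such as MVT::i32.
    static bool isSimpleI32(EVT VT) {
      return VT == MVT::i32 && VT != MVT::Other;
    }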
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118169 91177308-0d34-0410-b5e6-96231b3b80d8
The SPU ABI does not mention v64, and all examples
in C suggest v128 values are treated similarly to arrays,
so we use array alignment for v64 too. This makes the
alignment of e.g. [2 x <2 x i32>] behave "intuitively",
as if the elements were e.g. i32s.
This also makes an "unaligned store" test become
aligned, with different (but functionally equivalent)
code generated.
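For illustration only (this helper is hypothetical, not part of the patch),
one way to observe that alignment is to ask the target data layout for the
ABI alignment of the type, using the TargetData interface of this era
(today's DataLayout):

    #include "llvm/DerivedTypes.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Target/TargetData.h"
    using namespace llvm;

    static unsigned arrayOfV2i32Align(const TargetData &TD, LLVMContext &Ctx) {
      const Type *V2i32 = VectorType::get(Type::getInt32Ty(Ctx), 2);
      const Type *Arr   = ArrayType::get(V2i32, 2);   // [2 x <2 x i32>]
      // With this change the SPU data layout gives v64 array-style alignment,
      // so the elements pack as if they were plain i32s.
      return TD.getABITypeAlignment(Arr);
    }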
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117360 91177308-0d34-0410-b5e6-96231b3b80d8
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116701 91177308-0d34-0410-b5e6-96231b3b80d8
Before the implementation of isLegalAddressingMode, some rare cases
of code were miscompiled if optimized with the LoopStrengthReduce pass.
It is unclear (to me) if LSR is "allowed" to produce wrong code with a
bad TargetLowering, or if the bug is elsewhere and this patch just
hides it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115919 91177308-0d34-0410-b5e6-96231b3b80d8
passed the root of the match, even though only a few patterns
actually needed this (one in X86, several in ARM [which should
be refactored anyway], and some in CellSPU that I don't feel
like detangling). Instead of requiring all ComplexPatterns to
take the dead root, have targets opt into getting the root by
putting SDNPWantRoot on the ComplexPattern.
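As a rough sketch (the signatures and names below are illustrative, not
copied from any target), the effect on a target's ISel hooks is that the
default ComplexPattern hook no longer receives the root, while a pattern
tagged with SDNPWantRoot still gets it as the first argument:

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    struct ExampleDAGToDAGISel {
      // Default: only the operand being matched is passed in.
      bool SelectAddr(SDValue N, SDValue &Base, SDValue &Offset);
      // With SDNPWantRoot on the ComplexPattern, the root of the match is
      // still passed as the first argument.
      bool SelectAddrWantRoot(SDNode *Root, SDValue N,
                              SDValue &Base, SDValue &Offset);
    };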
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114471 91177308-0d34-0410-b5e6-96231b3b80d8
"getFixedStack" on the MachinePointerInfo class. While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so let's go with it.
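A hedged sketch of the idea (not code from this patch; the surrounding
signatures have changed since): a fixed frame-index slot is described by a
small MachinePointerInfo value instead of a PseudoSourceValue.

    #include "llvm/CodeGen/MachineMemOperand.h"
    using namespace llvm;

    static MachinePointerInfo describeFixedSlot(int FI) {
      // Previously this went through PseudoSourceValue::getFixedStack(FI).
      // Now a value object names the fixed stack slot directly.
      return MachinePointerInfo::getFixedStack(FI);
    }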
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114406 91177308-0d34-0410-b5e6-96231b3b80d8
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.
Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114074 91177308-0d34-0410-b5e6-96231b3b80d8
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should take this into account rather than
treating them as non-micro-coded simple instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113570 91177308-0d34-0410-b5e6-96231b3b80d8
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are
expanded. This causes changes to some dejagnu tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111360 91177308-0d34-0410-b5e6-96231b3b80d8
"SPU Application Binary Interface Specification, v1.9" by
IBM.
Specifically: use r3-r74 to pass parameters and the return value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111358 91177308-0d34-0410-b5e6-96231b3b80d8
duplicate the instructions and operate on half vectors.
Also reorder code in SPUInstrInfo.td for better coherency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110037 91177308-0d34-0410-b5e6-96231b3b80d8
such registers in SPU, this support boils down to "emulating"
them by duplicating instructions on the general purpose registers.
This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110035 91177308-0d34-0410-b5e6-96231b3b80d8
The only folding these load/store architectures can do is converting COPY into a
load or store, and the target independent part of foldMemoryOperand already
knows how to do that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108099 91177308-0d34-0410-b5e6-96231b3b80d8
for an "i" constraint should get lowered; PR 6309. While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106893 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the fast register allocator to remove redundant
register moves.
Also update a set of tests that depend on the register allocator
being linear scan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106420 91177308-0d34-0410-b5e6-96231b3b80d8
addresses a longstanding deficiency noted in many FIXMEs scattered
across all the targets.
This effectively moves the problem up one level, replacing eleven
FIXMEs in the targets with eight FIXMEs in CodeGen, plus one path
through FastISel where we actually supply a DebugLoc, fixing Radar
7421831.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106243 91177308-0d34-0410-b5e6-96231b3b80d8