As an example, attach range info to the "invalid instruction" message:
$ clang -arch arm -c asm.c
asm.c:2:11: error: invalid instruction
__asm__("foo r0");
^
<inline asm>:1:2: note: instantiated into assembly here
foo r0
^~~
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154765 91177308-0d34-0410-b5e6-96231b3b80d8
targets so if the branch target has the high bit set it does not get printed as:
beq 0xffffffff8008c404
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154685 91177308-0d34-0410-b5e6-96231b3b80d8
- FCOPYSIGN nodes that have operands of different types were not handled.
- Different code was generated depending on the endianness of the target.
Additionally, code is added that emits INS and EXT instructions if they are
supported by the target (they are R2 instructions).
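For reference, here is a plain C++ sketch of what those R2 bitfield instructions
compute (illustration only, not the lowering code; it assumes 1 <= Size and
Pos + Size <= 32):

#include <cstdint>

// ext rt, rs, pos, size: extract a Size-bit field starting at bit Pos.
uint32_t mipsExt(uint32_t Src, unsigned Pos, unsigned Size) {
  uint32_t Mask = (Size >= 32) ? ~0u : ((1u << Size) - 1);
  return (Src >> Pos) & Mask;
}

// ins rt, rs, pos, size: insert the low Size bits of Src into Dst at bit Pos.
uint32_t mipsIns(uint32_t Dst, uint32_t Src, unsigned Pos, unsigned Size) {
  uint32_t Mask = ((Size >= 32) ? ~0u : ((1u << Size) - 1)) << Pos;
  return (Dst & ~Mask) | ((Src << Pos) & Mask);
}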
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154540 91177308-0d34-0410-b5e6-96231b3b80d8
While there is an encoding for it in VUZP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.
rdar://11222366
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154511 91177308-0d34-0410-b5e6-96231b3b80d8
While there is an encoding for it in VZIP, the result of that is undefined,
so we should avoid it. Define the instruction as a pseudo for VTRN.32
instead, as the ARM ARM indicates.
rdar://11221911
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154505 91177308-0d34-0410-b5e6-96231b3b80d8
binary and assembly. Patch by Carlo Kok. Emitting was inspired by but not based
on the D llvm bindings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154493 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
Modify the code that lowers shuffles to blends from using blendvXX to vblendXX.
blendv uses a register for the selection while vblend uses an immediate.
On Sandybridge they still have the same latency and execute on the same execution ports.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154483 91177308-0d34-0410-b5e6-96231b3b80d8
predicates.
Also remove NEON2 since it's not really useful and it is confusing. If
NEON + VFP4 implies NEON2 but NEON2 doesn't imply NEON + VFP4, what does it
really mean?
rdar://10139676
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154480 91177308-0d34-0410-b5e6-96231b3b80d8
1. The new instruction itinerary entries are not properly described.
2. The asm parser can't handle vfms and vfnms.
3. There were no assembler or disassembler test cases.
4. HasNEON2 has the wrong assembler predicate.
rdar://10139676
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154456 91177308-0d34-0410-b5e6-96231b3b80d8
We were incorrectly conflating some add variants which don't have a
cc_out operand with the mirroring sub encodings, which do. Part of the
awesome non-orthogonality legacy of thumb1. Similarly, handling of
add/sub of an immediate was sometimes incorrectly removing the cc_out
operand for add/sub register variants.
rdar://11216577
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154411 91177308-0d34-0410-b5e6-96231b3b80d8
blendv uses a register for the selection while vblend uses an immediate.
On Sandybridge they still have the same latency and execute on the same execution ports.
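For illustration only, the difference is visible at the intrinsics level (these
are the standard SSE4.1 intrinsics, not the lowering code itself):

#include <immintrin.h>

// blendvps: the selection mask lives in a register.
__m128 blendWithRegisterMask(__m128 A, __m128 B, __m128 Mask) {
  return _mm_blendv_ps(A, B, Mask);
}

// vblendps/blendps: the selection mask is an immediate in the instruction.
__m128 blendWithImmediateMask(__m128 A, __m128 B) {
  return _mm_blend_ps(A, B, 0x5); // take lanes 0 and 2 from B, the rest from A
}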
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154396 91177308-0d34-0410-b5e6-96231b3b80d8
legalizer always use the DAG entry node. This is wrong when the libcall is
emitted as a tail call since it effectively folds the return node. If
the return node's input chain is not the entry (i.e. call, load, or store)
use that as the tail call input chain.
PR12419
rdar://9770785
rdar://11195178
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154370 91177308-0d34-0410-b5e6-96231b3b80d8
in-register, such that we can use a single vector store rather than a
series of scalar stores.
For func_4_8 the generated code
vldr d16, LCPI0_0
vmov d17, r0, r1
vadd.i16 d16, d17, d16
vmov.u16 r0, d16[3]
strb r0, [r2, #3]
vmov.u16 r0, d16[2]
strb r0, [r2, #2]
vmov.u16 r0, d16[1]
strb r0, [r2, #1]
vmov.u16 r0, d16[0]
strb r0, [r2]
bx lr
becomes
vldr d16, LCPI0_0
vmov d17, r0, r1
vadd.i16 d16, d17, d16
vuzp.8 d16, d17
vst1.32 {d16[0]}, [r2, :32]
bx lr
I'm not fond of how this combine pessimizes 2012-03-13-DAGCombineBug.ll,
but I couldn't think of a way to judiciously apply this combine.
This
ldrh r0, [r0, #4]
strh r0, [r1]
becomes
vldr d16, [r0]
vmov.u16 r0, d16[2]
vmov.32 d16[0], r0
vuzp.16 d16, d17
vst1.32 {d16[0]}, [r1, :32]
PR11158
rdar://10703339
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154340 91177308-0d34-0410-b5e6-96231b3b80d8
A couple of cases where we were accidentally creating constant conditions by
something like "x == a || b" instead of "x == a || x == b". In one case a
conditional followed by an unreachable was used; I transformed this into a
direct assert instead.
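A minimal illustration of the bug pattern (hypothetical variable names):

void check(int x, int a, int b) {
  // Buggy: 'b' is treated as a boolean, so the condition is effectively
  // constant whenever b is known to be non-zero.
  if (x == a || b) { /* ... */ }

  // Intended:
  if (x == a || x == b) { /* ... */ }
}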
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154324 91177308-0d34-0410-b5e6-96231b3b80d8
x86 addressing modes. This allows PIE-based TLS offsets to fit directly
into an addressing mode immediate offset, which is the last remaining
code quality issue from PR12380. With this patch, that PR is completely
fixed.
To understand why this patch is correct to match these offsets into
addressing mode immediates, break it down by cases:
1) 32-bit is trivially correct, and unmodified here.
2) 64-bit non-small mode is unchanged and never matches.
3) 64-bit small PIC code which is RIP-relative is handled specially in
the match to try to fit RIP into the base register. If it fails, it
now early exits. This behavior is unchanged by the patch.
4) 64-bit small non-PIC code which is not RIP-relative continues to work
as it did before. The reason these immediates are safe is because the
ABI ensures they fit in small mode. This behavior is unchanged.
5) 64-bit small PIC code which is *not* using RIP-relative addressing.
This is the only case changed by the patch, and the primary place you
see it is in TLS, either the win64 section offset TLS or Linux
local-exec TLS model in a PIC compilation. Here the ABI again ensures
that the immediates fit because we are in small mode, and any other
operations required due to the PIC relocation model have been handled
externally to the Wrapper node (extra loads etc are made around the
wrapper node in ISelLowering).
I've tested this as much as I can comparing it with GCC's output, and
everything appears safe. I discussed this with Anton and it made sense
to him at least at face value. That said, if there are issues with PIC
code after this patch, yell and we can revert it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154304 91177308-0d34-0410-b5e6-96231b3b80d8
optimizations which are valid for position independent code being linked
into a single executable, but not for such code being linked into
a shared library.
I discussed the design of this with Eric Christopher, and the decision
was to support an optional bit rather than a completely separate
relocation model. Fundamentally, this is still PIC relocation; it's just
that certain optimizations are only valid under a PIC relocation model
when the resulting code won't be in a shared library. The simplest path
to here is to expose a single bit option in the TargetOptions. If folks
have different/better designs, I'm all ears. =]
I've included the first optimization based upon this: changing TLS
models to the *Exec models when PIE is enabled. This is the LLVM
component of PR12380 and is all of the hard work.
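A standalone sketch of the idea (not the actual LLVM code, and it ignores
details such as symbol visibility): when the PIE bit is set, the dynamic TLS
models can be strengthened to their *Exec counterparts because the code is
known to end up in the main executable.

enum class TLSModel { GeneralDynamic, LocalDynamic, InitialExec, LocalExec };

TLSModel adjustForPIE(TLSModel Model, bool PositionIndependentExecutable) {
  if (!PositionIndependentExecutable)
    return Model;
  switch (Model) {
  case TLSModel::GeneralDynamic: return TLSModel::InitialExec;
  case TLSModel::LocalDynamic:   return TLSModel::LocalExec;
  default:                       return Model; // already an *Exec model
  }
}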
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154294 91177308-0d34-0410-b5e6-96231b3b80d8
in TargetLowering. There was already a FIXME about this location being
odd. The interface is simplified as a consequence. This will also make
it easier to change TLS models when compiling with PIE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154292 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we used three instructions to broadcast an immediate value into a
vector register.
On Sandybridge we continue to load the broadcasted value from the constant pool.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154284 91177308-0d34-0410-b5e6-96231b3b80d8
The tLDRr instruction with the last register operand set to the zero register
prints in assembly as if no register was specified, and the assembler encodes
it as a tLDRi instruction with a zero immediate. With the integrated assembler,
that zero register gets emitted as "r0", so we get "ldr rx, [ry, r0]" which
is broken. Emit the instruction as tLDRi with a zero immediate. I don't
know if there's a good way to write a testcase for this. Suggestions welcome.
Opportunities for follow-up work:
1) The asm printer should complain if a non-optional register operand is set
to the zero register, instead of silently dropping it.
2) The integrated assembler should complain in the same situation, instead of
silently emitting the operand as "r0".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154261 91177308-0d34-0410-b5e6-96231b3b80d8
Cygwin-1.7 supports dw2. Some recent mingw distros support it, too.
I have confirmed test-suite/SingleSource/Benchmarks/Shootout-C++/except.cpp can pass on Cygwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154247 91177308-0d34-0410-b5e6-96231b3b80d8
by default.
This is a behaviour configurable in the MCAsmInfo. I've decided to turn
it on by default in (possibly optimistic) hopes that most assemblers are
reasonably sane. If this proves a problem, switching the default seems
reasonable.
I'm not sure if this is the opportune place to test, but it seemed good
to make sure it was tested somewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154235 91177308-0d34-0410-b5e6-96231b3b80d8
After register masks were introduced to represent the call clobbers, it
is no longer necessary to have duplicate instruction for iOS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154209 91177308-0d34-0410-b5e6-96231b3b80d8
ARM and Thumb2 mode can use cmn instructions to compare against negative
immediates. Thumb1 mode can't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154183 91177308-0d34-0410-b5e6-96231b3b80d8
We had special instructions for iOS because r9 is call-clobbered, but
that is represented dynamically by the register mask operands now, so
there is no need for the pseudo-instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154144 91177308-0d34-0410-b5e6-96231b3b80d8
The load/store optimizer splits LDRD/STRD into two instructions when the
register pairing doesn't work out. For negative offsets in Thumb2, it uses
t2STRi8 to do that. That's fine, except for the case when the offset is in
the range [-4,-1]. In that case, we'll also form a second t2STRi8 with
the original offset plus 4, resulting in a t2STRi8 with a non-negative
offset, which ends up as if it were an STRT, which is completely bogus.
Similarly for loads.
No testcase, unfortunately, as any I've been able to construct is both large
and extremely fragile.
rdar://11193937
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154141 91177308-0d34-0410-b5e6-96231b3b80d8
'add r2, #-1024' should just use 'sub r2, #1024' rather than erroring out.
Also handle the Thumb1 aliases for adding a negative immediate to the
stack pointer.
rdar://11192734
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154123 91177308-0d34-0410-b5e6-96231b3b80d8
types for the N32 ABI. The test case will be updated after the patch that fixes
TargetLowering::getPICJumpTableRelocBase is checked in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154036 91177308-0d34-0410-b5e6-96231b3b80d8
A MOVCCr instruction can be commuted by inverting the condition. This
can help reduce register pressure and remove unnecessary copies in some
cases.
<rdar://problem/11182914>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154033 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
This patch allows llvm to recognize that a 64-bit object file is being produced
and ensures that the subsequently generated ELF header has the correct information.
The test case checks for both big and little endian flavors.
Patch by Jack Carter.
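For context, a small standalone sketch of the identification bytes such a 64-bit
object carries (using the system <elf.h> constants rather than LLVM's writer);
the EI_DATA byte is what distinguishes the big and little endian flavors the
test checks for:

#include <elf.h>
#include <cstring>

void initIdent(unsigned char Ident[EI_NIDENT], bool BigEndian) {
  std::memset(Ident, 0, EI_NIDENT);
  std::memcpy(Ident, ELFMAG, SELFMAG);    // 0x7f 'E' 'L' 'F'
  Ident[EI_CLASS]   = ELFCLASS64;         // 64-bit object file
  Ident[EI_DATA]    = BigEndian ? ELFDATA2MSB : ELFDATA2LSB;
  Ident[EI_VERSION] = EV_CURRENT;
}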
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153889 91177308-0d34-0410-b5e6-96231b3b80d8
The 440 and A2 cores have detailed itineraries, and this allows them to be
fully used to maximize throughput.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153845 91177308-0d34-0410-b5e6-96231b3b80d8
Post-RA scheduling gives a significant performance improvement on
the embedded cores, so turn it on. Using full anti-dep. breaking is
important for FP-intensive blocks, so turn it on (just on the
embedded cores for now; this should also be good on the 970s because
post-ra scheduling is all that we have for now, but that should have
more testing first).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153843 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a full itinerary for IBM's PPC64 A2 embedded core. These
cores form the basis for the CPUs in the new IBM BG/Q supercomputer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153842 91177308-0d34-0410-b5e6-96231b3b80d8
Loads and stores can have different pipeline behavior, especially on
embedded chips. This change allows those differences to be expressed.
Except for the 440 scheduler, there are no functionality changes.
On the 440, the latency adjustment is only by one cycle, and so this
probably does not affect much. Nevertheless, it will make a larger
difference in the future and this removes a FIXME from the 440 itin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153821 91177308-0d34-0410-b5e6-96231b3b80d8
Dynamic linking on PPC64 has had problems since we had to move the top-down
hazard-detection logic post-ra. For dynamic linking to work there needs to be
a nop placed after every call. It turns out that it is really hard to guarantee
that nothing will be placed in between the call (bl) and the nop during post-ra
scheduling. Previous attempts at fixing this by placing logic inside the
hazard detector only partially worked.
This is now fixed in a different way: call+nop codegen-only instructions. As far
as CodeGen is concerned the pair is now a single instruction and cannot be split.
This solution works much better than previous attempts.
The scoreboard hazard detector is also renamed to be more generic; there is
currently no cpu-specific logic in it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153816 91177308-0d34-0410-b5e6-96231b3b80d8
ARMConstantIslandPass still has bugs where jump table compression can
cause constant pool entries to go out of range.
Add a safety margin of 2 bytes when placing constant islands, but use
the real max displacement for verification.
<rdar://problem/11156595>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153789 91177308-0d34-0410-b5e6-96231b3b80d8
The 8-bit payload is not contiguous in the opcode. Move the upper nibble
over 4 bits into the correct place.
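A minimal sketch of the fix as described (the exact bit positions within the
opcode are assumed for illustration): the low nibble of the payload stays in
place and the upper nibble is shifted up by four before being OR'd into the
encoding.

#include <cstdint>

uint32_t encodeSplitImm8(uint32_t Opcode, uint8_t Imm) {
  uint32_t Lo = Imm & 0x0Fu;         // low nibble stays where it is
  uint32_t Hi = (Imm & 0xF0u) << 4;  // upper nibble moves over 4 bits
  return Opcode | Lo | Hi;
}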
rdar://11158641
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153780 91177308-0d34-0410-b5e6-96231b3b80d8
When an immediate is both a value [t2_]so_imm and a [t2_]so_imm_neg,
we want to use the non-negated form to make sure we prefer the normal
encoding, not the aliased encoding via the negation of, e.g., 'cmp.w'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153770 91177308-0d34-0410-b5e6-96231b3b80d8
Make the non-tied register operand names line up with what the base
class encoding handler expects.
rdar://11157236
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153766 91177308-0d34-0410-b5e6-96231b3b80d8
For 'adds r2, r2, #56' outside of an IT block, the 16-bit encoding T2
can be used for this syntax. Prefer the narrow encoding when possible.
rdar://11156277
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153759 91177308-0d34-0410-b5e6-96231b3b80d8
This pass splits basic blocks to insert constant islands, and it
doesn't recompute the live-in lists. No later passes depend on accurate
liveness information.
This fixes PR12410 where the machine code verifier was complaining.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153700 91177308-0d34-0410-b5e6-96231b3b80d8
We are sometimes allocating from the DPair register class, which
contains odd-even pairs in addition to the Q registers.
Place the Q registers first in the DPair allocation order as they can be
copied with a single instruction. The odd-even pairs should only be
allocated as a last resort.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153699 91177308-0d34-0410-b5e6-96231b3b80d8
The CMP->CMN alias was matching for an immediate of zero when it
should only match for negative values.
rdar://11129224
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153689 91177308-0d34-0410-b5e6-96231b3b80d8
ARM recently gained DPair, DTriple, and DQuad register classes.
Update copyPhysReg() to handle copies in these register classes.
No test case; it is difficult to make the register allocator emit the
odd copies reliably. The missing DPair copy caused a failure on
partialsums in the nightly test suite.
<rdar://problem/11147997>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153686 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for changing instruction sequences of the form:
load
inc/dec of 8/16/32/64 bits
store
into the appropriate X86 inc/dec through memory instruction:
inc[qlwb] / dec[qlwb]
The checks that were in the ISD::STORE case of X86DAGToDAGISel::Select(SDNode *Node)
have been extracted into isLoadIncOrDecStore and reworked to use the better-named
wrappers for getOperand(unsigned) (e.g. getOffset()), and Chain.getNode() has been
replaced with LoadNode. The comments have also been expanded.
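As a source-level illustration (a hypothetical example, not taken from the test
suite), this is the kind of sequence that can now be selected as a single
read-modify-write instruction:

void bump(int *Counter) {
  ++*Counter; // load + inc + store, foldable into "incl (%reg)" on x86
}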
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153635 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for changing instruction sequences of the form:
load
inc/dec of 8/16/32/64 bits
store
into the appropriate X86 inc/dec through memory instruction:
inc[qlwb] / dec[qlwb]
The checks that were in the ISD::STORE case of X86DAGToDAGISel::Select(SDNode *Node)
have been extracted into isLoadIncOrDecStore and reworked to use the better-named
wrappers for getOperand(unsigned) (e.g. getOffset()), and Chain.getNode() has been
replaced with LoadNode. The comments have also been expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153617 91177308-0d34-0410-b5e6-96231b3b80d8
Some targets still mess up the liveness information, but that isn't
verified after MRI->invalidateLiveness().
The verifier can still check other useful things like register classes
and CFG, so it should be enabled after all passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153615 91177308-0d34-0410-b5e6-96231b3b80d8
When an strd instruction doesn't get the registers it wants, it can be
expanded into two str instructions. Make sure the first str doesn't kill
the base register in the case where the base and data registers are
identical:
t2STRi12 %R0<kill>, %R0, 4, pred:14, pred:%noreg
t2STRi12 %R2<kill>, %R0, 8, pred:14, pred:%noreg
<rdar://problem/11101911>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153611 91177308-0d34-0410-b5e6-96231b3b80d8
When a number of sub-register VLDRS instructions are combined into a
VLDM, preserve any super-register implicit defs. This is required to
keep the register scavenger and machine code verifier happy.
Enable machine code verification after ARMLoadStoreOptimizer.
ARM/2012-01-26-CopyPropKills.ll was failing because of this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153610 91177308-0d34-0410-b5e6-96231b3b80d8
The arm_neon intrinsics can create virtual registers from the DPair
register class which allows both even-odd and odd-even D-register pairs.
This fixes PR12389.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153603 91177308-0d34-0410-b5e6-96231b3b80d8
Revert r153519: "ARMLoadStoreOptimizer invalidates register liveness."
These patches caused miscompilations in povray by turning off branch
folding's updating of live-in lists.
It turns out that the late scheduler depends on the live-in lists, even
if it doesn't need correct kill flags.
<rdar://problem/11139228>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153593 91177308-0d34-0410-b5e6-96231b3b80d8
imposes a constraint that GOT16 referring to a local symbol or HI16 has to be
followed immediately by a matching LO16 relocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153553 91177308-0d34-0410-b5e6-96231b3b80d8
them as machine instructions. Directives ".set noat" and ".set at" are now
emitted only at the beginning and end of a function except in the case where
they are emitted to enclose a .cpload with an immediate operand that doesn't fit
in a 16-bit field, or to enclose unaligned loads/stores.
Also, make the following changes:
- Remove function isUnalignedLoadStore and use a switch-case statement to
determine whether an instruction is an unaligned load or store.
- Define helper function CreateMCInst which generates an instance of an MCInst
from an opcode and a list of operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153552 91177308-0d34-0410-b5e6-96231b3b80d8
If EmitNOAT is true, directives ".set noat" and ".set at" are emitted at the
beginning and end of a function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153528 91177308-0d34-0410-b5e6-96231b3b80d8
This pass tries to update kill flags, but there are still many bugs.
Passes after the load/store optimizer don't need accurate liveness, so
don't even try.
<rdar://problem/11101911>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153519 91177308-0d34-0410-b5e6-96231b3b80d8
MachinePointerInfo when getStore is called to create a node that stores an
argument passed in a register to the stack. Without this change, the post RA
scheduler will fail to discover the dependencies between the store
instructions and the instructions that load from a structure passed by value.
The link to the related discussion is here:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-March/048055.html
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153499 91177308-0d34-0410-b5e6-96231b3b80d8
set it in MipsMCCodeEmitter::getMachineOpValue. Assert in getMachineOpValue if
MachineOperand MO is of an unexpected type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153494 91177308-0d34-0410-b5e6-96231b3b80d8
produces a 32-bit immediate which is consumed by the use. It tries to
fold the immediate by breaking it into two parts and folding them into the
immediate fields of two uses, e.g.
movw r2, #40885
movt r3, #46540
add r0, r0, r3
=>
add.w r0, r0, #3019898880
add.w r0, r0, #30146560
However, this transformation is incorrect if the user produces a flag. e.g.
movw r2, #40885
movt r3, #46540
adds r0, r0, r3
=>
add.w r0, r0, #3019898880
adds.w r0, r0, #30146560
Note that the adds.w may not set the carry flag even if the original sequence
would.
rdar://11116189
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153484 91177308-0d34-0410-b5e6-96231b3b80d8
The PPC64 SVR4 ABI requires that integer stack arguments (and thus the variadic
arguments) smaller than 64 bits be zero extended to 64 bits.
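A hypothetical example of where this matters: once the eight GPR argument
registers are used up, the trailing sub-64-bit integer argument is passed on
the stack and must occupy a zero-extended 64-bit doubleword.

long callee(long, long, long, long, long, long, long, long, unsigned char Last);

long caller() {
  // The ninth integer argument goes on the stack; 0xFF must be stored as the
  // 64-bit value 0x00000000000000FF, not with garbage in the upper bits.
  return callee(1, 2, 3, 4, 5, 6, 7, 8, 0xFF);
}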
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153373 91177308-0d34-0410-b5e6-96231b3b80d8
Code such as:
%vreg100 = setcc %vreg10, -1, SETNE
brcond %vreg10, %tgt
was being incorrectly morphed into
%vreg100 = and %vreg10, 1
brcond %vreg10, %tgt
where the 'and' instruction could be eliminated, since such logic
operates on 1-bit types in the PTX back-end, leaving
us with just:
brcond %vreg10, %tgt
which essentially gives us inverted branch conditions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153364 91177308-0d34-0410-b5e6-96231b3b80d8
These changes allow us to compile big endian from the command line for 32 bit
Mips targets. This patch will result in code and data actually being produced
in the correct endianness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153153 91177308-0d34-0410-b5e6-96231b3b80d8
ARMBaseRegisterInfo::canRealignStack was checking for variable-sized objects
but not for stack adjustments around calls. Use hasReservedCallFrame() to
check for both. The hasBasePointer function was already correctly checking
both conditions, so the effect of this was that a base pointer would be used
without checking whether the base pointer register could be reserved. I don't
have a small testcase for this.
<rdar://problem/11075906>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153110 91177308-0d34-0410-b5e6-96231b3b80d8
ARMFrameLowering::hasReservedCallFrame is already checking for variable
sized objects, so there's no point in checking it twice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153109 91177308-0d34-0410-b5e6-96231b3b80d8
This results in sequences such as
vmovups 16(%rdi), %xmm0
vinsertf128 $1, %xmm0, %ymm0, %ymm0
being combined into
vinsertf128 $1, 16(%rdi), %ymm0, %ymm0
rdar://11076953
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153092 91177308-0d34-0410-b5e6-96231b3b80d8
X86InstrCompiler.td.
It also adds -mcpu=generic to the legalize-shift-64.ll test so the test
will pass if run on an Intel Atom CPU, which would otherwise
produce an instruction schedule that differs from the one the test expects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153033 91177308-0d34-0410-b5e6-96231b3b80d8
fast-isel before emitting code. If the program bails after code was emitted,
then it could lead to the stack being adjusted more than once (two
CALLSEQ_BEGINs emitted) but being adjusted back only once after the call. This
leads to general badness and gnashing of teeth.
<rdar://problem/11050630>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152959 91177308-0d34-0410-b5e6-96231b3b80d8
It's not good style, as the registers will be laid down in memory in
numerical order, not the order they appear in the list, but it's legal. vldm/vstm
are stricter.
rdar://11064740
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152943 91177308-0d34-0410-b5e6-96231b3b80d8
This results in sequences such as
vmovaps -96(%rbx), %xmm1
vinsertf128 $1, %xmm1, %ymm0, %ymm0
being combined into
vinsertf128 $1, -96(%rbx), %ymm0, %ymm0
rdar://10643481
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152762 91177308-0d34-0410-b5e6-96231b3b80d8
instruction's destination operand like it does for the source operand.
Also fix a typo in the comment for X86AsmParser::isSrcOp().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152654 91177308-0d34-0410-b5e6-96231b3b80d8
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end,
and case_default.
Added some notes about case iterators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152532 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message from r147481:
DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.
Fix:
Unaligned loads need to generate a vmovups.
rdar://10974078
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152366 91177308-0d34-0410-b5e6-96231b3b80d8
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html
Implemented CaseIterator; it solves almost all of the described issues: we no
longer need to mix operand/case/successor indexing. The base iterator class is
implemented as a template since it may be initialized either from a
"const SwitchInst*" or from a "SwitchInst*".
ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and the
case value.
Using the iterators allows the resolveXXXX methods to be removed entirely. All
indexing conversions are done automatically inside the iterators' getters.
The main way to use the iterators looks like this:
SwitchInst *SI = ... // initialize it somehow
for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}
If you want to convert a case number to a TerminatorInst successor index, just
use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use
the CaseIt::fromSuccessorIndex(...) method.
There are also related changes in the LLVM clients klee and clang.
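Building on the example and the methods named above (signatures assumed from
this description, not verified), a round trip between a case iterator and a
successor index might look like:

unsigned SuccIdx = i.getSuccessorIndex();
SwitchInst::CaseIt Same = SwitchInst::CaseIt::fromSuccessorIndex(SI, SuccIdx);
BasicBlock *BB2 = Same.getCaseSuccessor(); // same block as i.getCaseSuccessor()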
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152297 91177308-0d34-0410-b5e6-96231b3b80d8
For example, this pattern
(select (setcc lhs, rhs, cc), true, 0)
is transformed to this one:
(select (setcc lhs, rhs, inverse(cc)), 0, true)
This enables MipsDAGToDAGISel::ReplaceUsesWithZeroReg (added in r152280) to
replace 0 with $zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152285 91177308-0d34-0410-b5e6-96231b3b80d8
For example, the first instruction in the code below can be eliminated if the
use of $vr0 is replaced with $zero:
addiu $vr0, $zero, 0
add $vr2, $vr1, $vr0
=>
add $vr2, $vr1, $zero
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152280 91177308-0d34-0410-b5e6-96231b3b80d8