Thanks to Dimitry Andric, Rafael Espindola, and Benjamin Kramer
for providing and progressively reducing the test case!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191782 91177308-0d34-0410-b5e6-96231b3b80d8
There are no corresponding patterns for small immediates because they would
prevent the use of fused compare-and-branch instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191775 91177308-0d34-0410-b5e6-96231b3b80d8
This function attribute modifies the callee-saved register list and the function
epilogue (specifically the return instruction) so that a routine is suitable
for use as an interrupt handler of the specified type without disrupting
user-mode applications.
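A minimal sketch of how this looks at the IR level, assuming the string-attribute
spelling "interrupt"="IRQ" (the handler kind here is just an example value):

  ; Hypothetical handler: the backend adjusts the callee-saved list and
  ; emits the matching exception-return instruction for it.
  define void @handler() "interrupt"="IRQ" {
    ret void
  }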
rdar://problem/14207019
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191766 91177308-0d34-0410-b5e6-96231b3b80d8
Similar to low words, we can use the shorter LLIHL and LLIHH if it turns
out that the other half of the GR64 isn't live.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191750 91177308-0d34-0410-b5e6-96231b3b80d8
This just adds the basics necessary for allocating the upper words to
virtual registers (move, load and store). The move support is parameterised
in a way that makes it easy to handle zero extensions, but the associated
zero-extend patterns are added by a later patch.
The easiest way of testing this seemed to be to add a new "h" register
constraint for high words. I don't expect the constraint to be useful
in real inline asms, but it should work, so I didn't try to hide it
behind an option.
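For example, a minimal IR sketch of the constraint (hypothetical test-style
usage; the asm body is deliberately empty, it just forces %val into a high word):

  define void @f(i32 %val) {
    call void asm sideeffect "", "h"(i32 %val)
    ret void
  }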
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191739 91177308-0d34-0410-b5e6-96231b3b80d8
We were completely ignoring the unordered/ordered attributes of condition
codes and also incorrectly lowering seto and setuo.
Reviewed-by: Vincent Lejeune<vljn at ovi.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191603 91177308-0d34-0410-b5e6-96231b3b80d8
of loops.
Previously, two consecutive calls to function "func" would result in the
following sequence of instructions:
1. load $16, %got(func)($gp) // load address of lazy-binding stub.
2. move $25, $16
3. jalr $25 // jump to lazy-binding stub.
4. nop
5. move $25, $16
6. jalr $25 // jump to lazy-binding stub again.
With this patch, the second call directly jumps to func's address, bypassing
the lazy-binding resolution routine:
1. load $25, %got(func)($gp) // load address of lazy-binding stub.
2. jalr $25 // jump to lazy-binding stub.
3. nop
4. load $25, %got(func)($gp) // load resolved address of func.
5. jalr $25 // directly jump to func.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191591 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the command line argument "struct-path-tbaa" since we should not depend
on a command line argument to decide which format the IR file is using. Instead,
we check the first operand of the tbaa tag node: if it is an MDNode, we treat
it as the struct-path aware TBAA format; otherwise, we treat it as the scalar
TBAA format.
When clang starts to use the struct-path aware TBAA format no matter whether
struct-path-tbaa is on, and we can auto-upgrade existing bc files, the support
for the scalar TBAA format can be dropped.
Existing testing cases are updated to use the struct-path aware TBAA format.
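For reference, a rough sketch of the two forms (2013-era IR syntax; the type
names are just example values). In the scalar format the tag's first operand
is a string, while in the struct-path aware format the access tag's first
operand is another MDNode, which is what the check above keys on:

  ; Scalar format: !{name, parent}
  !0 = metadata !{metadata !"Simple C/C++ TBAA"}
  !1 = metadata !{metadata !"int", metadata !0}

  ; Struct-path aware format: access tag !{base type, access type, offset}
  !2 = metadata !{metadata !"int", metadata !0}
  !3 = metadata !{metadata !2, metadata !2, i64 0}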
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191538 91177308-0d34-0410-b5e6-96231b3b80d8
The backend tries to use block operations like MVC, NC, OC and XC for
simple scalar operations. For correctness reasons, it rejects any case
in which the regions might partially overlap. However, for performance
reasons, it should also reject cases where the regions might be equal,
since the instruction might then not use the fast path.
This fixes a performance regression seen in bzip2. We may want to limit
the optimisation even more in future, or even remove it entirely, but I'll
try with this for now.
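As a rough illustration of the kind of scalar operation involved (a hypothetical
IR sketch), a memory-to-memory OR like the one below can become an OC when the
backend can prove %dst and %src don't partially overlap; with this change it
also backs off when the two addresses might be exactly equal:

  define void @f(i8* %dst, i8* %src) {
    %a = load i8* %dst
    %b = load i8* %src
    %or = or i8 %a, %b
    store i8 %or, i8* %dst
    ret void
  }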
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191525 91177308-0d34-0410-b5e6-96231b3b80d8
The backend previously folded offsets into PC-relative addresses
wherever possible. That's the right thing to do when the address
can be used directly in a PC-relative memory reference (using things
like LRL). But if we have a register-based memory reference and need
to load the PC-relative address separately, it's better to use an anchor
point that could be shared with other accesses to the same area of the
variable.
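A rough illustration (hypothetical IR, using a global byte array so the loads
cannot use PC-relative memory references directly):

  @g = global [100 x i8] zeroinitializer
  define i8 @f() {
    ; Better to materialize the start of @g once and reuse it as an anchor
    ; for both accesses than to fold each offset into its own address.
    %p1 = getelementptr [100 x i8]* @g, i64 0, i64 1
    %p2 = getelementptr [100 x i8]* @g, i64 0, i64 2
    %a = load i8* %p1
    %b = load i8* %p2
    %sum = add i8 %a, %b
    ret i8 %sum
  }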
Fixes a FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191524 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic is lowered into an equivalent INSERT_VECTOR_ELT which is
further lowered into a sequence of insert.w's on MIPS32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191521 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic is lowered into an equivalent BUILD_VECTOR which is further
lowered into a sequence of insert.w's on MIPS32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191519 91177308-0d34-0410-b5e6-96231b3b80d8
For v4f32 and v2f64, INSERT_VECTOR_ELT is matched by a pseudo-insn which is
later expanded to appropriate insve.[wd] insns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191515 91177308-0d34-0410-b5e6-96231b3b80d8
For v4f32 and v2f64, EXTRACT_VECTOR_ELT is matched by a pseudo-insn which may
be expanded to subregister copies and/or instructions as appropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191514 91177308-0d34-0410-b5e6-96231b3b80d8
This change fixes the problem reported in pr17380 and re-adds the dagcombine
transformation, ensuring that the value types are always legal if the
transformation is triggered after Legalization has taken place.
Added the test case from pr17380.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191509 91177308-0d34-0410-b5e6-96231b3b80d8
When generating code for shared libraries, even local calls may be
intercepted, so we need a nop after the call for the linker to fix up the
TOC. Test case adapted from the one provided in PR17354.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191440 91177308-0d34-0410-b5e6-96231b3b80d8