We need to ensure that StackSlotColoring.cpp does not reuse stack
spill slots in functions that call "returns_twice" functions such as
setjmp(); otherwise a stack slot can be clobbered while it is still
live, leading to miscompiled code.
This was already handled correctly for functions that call setjmp()
(though this wasn't covered by a test), but not for functions that
invoke setjmp().
We fix this by changing callsFunctionThatReturnsTwice() to check for
invoke instructions.
This fixes PR18244.
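As a minimal sketch of the hazard (mine, not taken from the patch or its
test): A and B below are live across the second return from setjmp(), so
the spill slots holding them must not be reused for other values in the
meantime.

  #include <csetjmp>
  #include <cstdio>

  static std::jmp_buf Env;

  static void leave() { std::longjmp(Env, 1); }

  static int demo(int X) {
    int A = X + 1;            // live across both returns from setjmp()
    int B = X + 2;            // live across both returns from setjmp()
    if (setjmp(Env) == 0)
      leave();                // control returns to setjmp() a second time
    // If A's or B's spill slot had been reused for another value in the
    // meantime, the result here would be wrong.
    return A + B;
  }

  int main() { std::printf("%d\n", demo(1)); }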
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199180 91177308-0d34-0410-b5e6-96231b3b80d8
This commit teaches the DAG to reassociate vector ops, which in turn
enables constant folding of vector op chains that appear later, during
custom lowering and DAG combine.
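As a hedged source-level sketch (using the clang/GCC vector extension,
not a test from the patch), a chain like (X + C1) + C2 can now have its
two constant vectors folded into one:

  typedef int v4si __attribute__((vector_size(16)));

  v4si addChain(v4si X) {
    v4si C1 = {1, 1, 1, 1};
    v4si C2 = {2, 2, 2, 2};
    // Reassociates to X + (C1 + C2), i.e. X + {3, 3, 3, 3}.
    return (X + C1) + C2;
  }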
Reviewed by Andrea Di Biagio
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199135 91177308-0d34-0410-b5e6-96231b3b80d8
The issue is caused when the post-RA scheduler reorders a bundle
instruction (IT block). It flips the CPSR liveness of the bundle
instruction but leaves the instructions inside the bundle unchanged,
which causes an inconsistency and crashes
Thumb2SizeReduction.cpp::ReduceMBB().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199127 91177308-0d34-0410-b5e6-96231b3b80d8
APInt only knows how to compare values with the same BitWidth and asserts
in all other cases.
With this fix, function PerformORCombine does not use the APInt equality
operator if the APInt values returned by 'isConstantSplat' differ in
BitWidth; in that case they are necessarily different and no comparison
is needed.
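As a minimal sketch (assumed helper name, not the actual PerformORCombine
code), the guarded comparison looks like:

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  static bool splatValuesMatch(const APInt &A, const APInt &B) {
    // APInt::operator== asserts unless both operands have the same width,
    // so different widths are reported as "not equal" without comparing.
    if (A.getBitWidth() != B.getBitWidth())
      return false;
    return A == B;
  }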
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199119 91177308-0d34-0410-b5e6-96231b3b80d8
The old mask in f24 wasn't well chosen because the lshr would always be zero.
CodeGen didn't detect this but InstCombine would. The new mask ensures
that both shifts are needed.
f26 is specifically testing for a wrap-around mask. The AND can be applied
to just the shift left, either before or after the shift. Again, CodeGen
kept it in the original form but InstCombine would mask after the shift
instead. The exact choice of NILF isn't important for the test so I just
dropped it and kept the rotate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199115 91177308-0d34-0410-b5e6-96231b3b80d8
...into (ashr (shl (anyext X), ...), ...), which requires one fewer
instruction. The (anyext X) can sometimes be simplified too.
I didn't do this in DAGCombiner because widening shifts isn't a win
on all targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199114 91177308-0d34-0410-b5e6-96231b3b80d8
This patch covers two more scenarios:
1. Both operands of the shuffle_vector are the same, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> %a, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
2. One of the operands is undef, like
%shuffle.i = shufflevector <8 x i8> %a, <8 x i8> undef, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
After this patch, perm instructions will have a chance to be emitted
instead of lots of INS instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199069 91177308-0d34-0410-b5e6-96231b3b80d8
Targets like SPARC and MIPS have delay slots and normally bundle the
delay slot instruction with the corresponding terminator.
Teach isBlockOnlyReachableByFallthrough to find any MBB operands on
bundled terminators so SPARC doesn't need to specialize this function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199061 91177308-0d34-0410-b5e6-96231b3b80d8
This is different from the argument passing convention which puts the
first float argument in %f1.
With this patch, all returned floats are treated as if the 'inreg' flag
were set. This means multiple float return values get packed in %f0,
%f1, %f2, ...
Note that when returning a struct in registers, clang will set the
'inreg' flag on the return value, so that behavior is unchanged. This
also happens when returning a float _Complex.
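As a hedged illustration (not a test from this patch), a two-float
aggregate returned in registers now comes back packed in %f0 and %f1:

  struct FloatPair { float First, Second; };

  FloatPair makePair(float A, float B) {
    return {A, B};   // returned as if 'inreg': First in %f0, Second in %f1
  }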
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199028 91177308-0d34-0410-b5e6-96231b3b80d8
Use separate callee-save masks for XMM and YMM registers for anyregcc on
X86 and select the proper mask depending on the target CPU we compile for.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198985 91177308-0d34-0410-b5e6-96231b3b80d8
The zext handling added in r197802 wasn't right for RNSBG. This patch
restricts it to ROSBG, RXSBG and RISBG. (The tests for RISBG were added
in r197802 since RISBG was the motivating example.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198862 91177308-0d34-0410-b5e6-96231b3b80d8
At the moment we expect rotates to have the form:
(or (shl X, Y), (shr X, Z))
where Y == bitsize(X) - Z or Z == bitsize(X) - Y. This form means that
the (or ...) is undefined for Y == 0 or Z == 0. This undefinedness can
be avoided by using Y == (C * bitsize(X) - Z) & (bitsize(X) - 1) or
Z == (C * bitsize(X) - Y) & (bitsize(X) - 1) for any integer C
(including 0, the most natural choice).
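At the source level (my own sketch, taking C == 1 and a 32-bit X), the
masked form is the usual well-defined rotate idiom:

  #include <cstdint>

  uint32_t rotl32(uint32_t X, unsigned N) {  // N expected in [0, 31]
    // The right-shift amount is reduced modulo 32, so N == 0 yields
    // (X << 0) | (X >> 0) instead of an undefined shift by 32.
    return (X << N) | (X >> ((32 - N) & 31));
  }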
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198861 91177308-0d34-0410-b5e6-96231b3b80d8
InstCombine converts (sub 32, (add X, C)) into (sub 32-C, X),
so a rotate left of a 32-bit Y by X+C could appear as either:
(or (shl Y, (add X, C)), (shr Y, (sub 32, (add X, C))))
without InstCombine or:
(or (shl Y, (add X, C)), (shr Y, (sub 32-C, X)))
with it.
We already matched the first form. This patch handles the second too.
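For example (a hedged sketch with C == 1), a rotate by N + 1 reaches the
DAG in the second form once InstCombine has rewritten 32 - (N + 1) as
31 - N:

  #include <cstdint>

  uint32_t rotlPlusOne(uint32_t Y, unsigned N) {  // N expected in [0, 30]
    // The right-shift amount 32 - (N + 1) becomes 31 - N after InstCombine.
    return (Y << (N + 1)) | (Y >> (32 - (N + 1)));
  }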
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198860 91177308-0d34-0410-b5e6-96231b3b80d8
In the stackmap format we advertise the constant field as signed.
However, we were determining whether to promote to a 64-bit constant
pool based on an unsigned comparison.
This fix allows -1 to be encoded as a small constant.
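A minimal sketch of the signed range check (assumed function name; it
also assumes the small constant field is 32 bits wide, which this
message does not restate):

  #include <cstdint>
  #include <limits>

  static bool needsConstantPool(int64_t C) {
    // An unsigned comparison would treat -1 as a huge value and spill it
    // to the 64-bit constant pool; the signed check keeps it small.
    return C < std::numeric_limits<int32_t>::min() ||
           C > std::numeric_limits<int32_t>::max();
  }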
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198816 91177308-0d34-0410-b5e6-96231b3b80d8
MIsNeedChainEdge, which is used by -enable-aa-sched-mi (AA in misched), had an
llvm_unreachable when -enable-aa-sched-mi is enabled and we reach an
instruction with multiple MMOs. Instead, return a conservative answer. This
allows testing -enable-aa-sched-mi on x86.
Also, this moves the check above the isUnsafeMemoryObject checks.
isUnsafeMemoryObject is currently correct only for instructions with one MMO
(as noted in the comment in isUnsafeMemoryObject):
// We purposefully do no check for hasOneMemOperand() here
// in hope to trigger an assert downstream in order to
// finish implementation.
The problem with this is that, had the candidate edge passed the
"!MIa->mayStore() && !MIb->mayStore()" check, the hoped-for assert would never
happen (which could, in theory, lead to incorrect behavior if one of these
secondary MMOs was volatile, for example).
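A sketch of the conservative early-out (assumed name and shape, not the
exact MIsNeedChainEdge code):

  #include "llvm/CodeGen/MachineInstr.h"
  using llvm::MachineInstr;

  static bool conservativeMultiMMOCheck(MachineInstr *MIa, MachineInstr *MIb) {
    // With more than one memory operand we cannot reason precisely about
    // aliasing, so request a chain edge rather than hitting unreachable.
    if (!MIa->hasOneMemOperand() || !MIb->hasOneMemOperand())
      return true;
    return false;  // the real code continues with the precise AA query
  }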
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198795 91177308-0d34-0410-b5e6-96231b3b80d8
to the following two rules:
1) fold (vselect (build_vector AllOnes), A, B) -> A
2) fold (vselect (build_vector AllZeros), A, B) -> B
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198777 91177308-0d34-0410-b5e6-96231b3b80d8
I couldn't see how to do this sanely without splitting RETQ from RETL.
Eric says: "sad about the inability to roundtrip them now, but...".
I have no idea what that means, but perhaps it wants preserving in the
commit comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198756 91177308-0d34-0410-b5e6-96231b3b80d8
Modern versions of OSX/Darwin's ld (ld64 > 97.17) have an optimisation that allows the back end to omit relocations (and replace them with an absolute difference) for some FDE text section refs.
This patch allows a backend to opt-in to this behaviour by setting "DwarfFDESymbolsUseAbsDiff". At present, this is only enabled for modern x86 OSX ports.
Test changes by David Fang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198744 91177308-0d34-0410-b5e6-96231b3b80d8
With the GNU ObjC runtime, private strings are used. Since we only need
to produce a unique label, the fix is to just drop the asserts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198701 91177308-0d34-0410-b5e6-96231b3b80d8
Parse tag names as well as expressions. The former is part of the
specification, the latter is for improved compatibility with the GNU assembler.
Fix attribute value handling to conform to the specification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198662 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM backend has been using most of the MachO related subtarget
checks almost interchangeably, and since the only target it's had to
run on has been IOS (which is all three of MachO, Darwin and IOS) it's
worked out OK so far.
But we'd like to support embedded targets under the "*-*-none-macho"
triple, which means everything starts falling apart and inconsistent
behaviours emerge.
This patch should pick a reasonably sensible set of behaviours for the
new triple (and any others that come along, with luck). Some choices
were debatable (notably FP == r7 or r11), but we can revisit those
later when deficiencies become apparent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198617 91177308-0d34-0410-b5e6-96231b3b80d8
This requires knowledge of the stack size, which is not known until
the frame is complete; hence the need for the XCoreFTAOElim pass,
which lowers the XCoreISD::FRAME_TO_ARGS_OFFSET instruction into its
final form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198614 91177308-0d34-0410-b5e6-96231b3b80d8
Longer term, we want to move users to "*-*-*-macho" for embedded work, but for
now people are relying on the last thing we told them, which is unfortunately
"*-*-darwin-eabi".
rdar://problem/15703934
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198602 91177308-0d34-0410-b5e6-96231b3b80d8
The code wrongly assumed that the vector element type and the type of
each ConstantSDNode in the build_vector are the same. However, when
promoting the integer operand of a legally typed build_vector, the
operand type and the vector element type do not need to match
(see method 'DAGTypeLegalizer::PromoteIntOp_BUILD_VECTOR' in
LegalizeIntegerTypes.cpp).
For example, in the AArch64 backend, the following DAG sequence:
C0: i1 = Constant<0>
C1: i1 = Constant<-1>
V: v8i1 = BUILD_VECTOR C1, C1, C0, C0, C0, C0, C0, C0
is type-legalized into:
NewC0: i32 = Constant<0>
NewC1: i32 = Constant<1>
V: v8i8 = BUILD_VECTOR NewC1, NewC1, NewC0, NewC0, NewC0, NewC0, NewC0, NewC0
The fix forces a getZeroExtend to VTBits to ensure that the new constant
is correctly zero extended.
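One way to express that normalization (a sketch with an assumed helper
name, using APInt rather than the exact DAG call):

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  static APInt normalizeBuildVectorElt(const APInt &Val, unsigned VTBits) {
    // Zero-extend (or truncate) the constant to the element width, so an
    // i1 -1 promoted to i32 is seen as 1 rather than as a mismatched width.
    return Val.zextOrTrunc(VTBits);
  }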
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198582 91177308-0d34-0410-b5e6-96231b3b80d8