objects, since they'll end up using the implicit conversion to "bool"
and causing some very "fun" surprises.
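For illustration, a minimal C++ sketch (hypothetical class, not the one
touched here) of the kind of "fun" an implicit bool conversion invites:

struct Handle {
  void *Ptr;
  operator bool() const { return Ptr != nullptr; } // implicit conversion
};

bool same(Handle A, Handle B) {
  // Compiles without complaint, but compares the two bool conversions
  // (i.e. null-ness), not the handles themselves.
  return A == B;
}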
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125380 91177308-0d34-0410-b5e6-96231b3b80d8
a loop when unswitching it. It only does this in the complex case, because
everything should be fine already in the simple case.
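For reference, a sketch of what unswitching does (illustrative C++, not
the pass's own code): a loop-invariant test is hoisted out of the loop
and the loop body is duplicated, once per branch.

void before(int *A, int N, bool Flag) {
  for (int I = 0; I < N; ++I) {
    if (Flag)   // loop-invariant, yet tested on every iteration
      A[I] *= 2;
    else
      A[I] += 1;
  }
}

void after(int *A, int N, bool Flag) {
  if (Flag) {   // tested once; two specialized copies of the loop
    for (int I = 0; I < N; ++I)
      A[I] *= 2;
  } else {
    for (int I = 0; I < N; ++I)
      A[I] += 1;
  }
}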
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125369 91177308-0d34-0410-b5e6-96231b3b80d8
This:
define float @foo(float %x, float %y) nounwind readnone {
entry:
%0 = tail call float @copysignf(float %x, float %y) nounwind readnone
ret float %0
}
was compiled to:
vmov s0, r1
bic r0, r0, #-2147483648
vmov s1, r0
vcmpe.f32 s0, #0
vmrs apsr_nzcv, fpscr
it lt
vneglt.f32 s1, s1
vmov r0, s1
bx lr
This fails to copy the sign of -0.0f because it's lost during the float to int
conversion. Also, it's sub-optimal when the inputs are in GPR registers.
Now it uses integer AND and OR operations when it's profitable. And it's correct!
lsrs r1, r1, #31
bfi r0, r1, #31, #1
bx lr
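The bit-level operation this lowers to, as a C++ sketch (my
illustration of the AND/OR approach, not the backend code):

#include <cstdint>
#include <cstring>

float copysign_bits(float X, float Y) {
  uint32_t XI, YI;
  std::memcpy(&XI, &X, sizeof XI); // bit-cast; no float-to-int conversion
  std::memcpy(&YI, &Y, sizeof YI);
  // Clear the sign bit of X, then OR in the sign bit of Y. This keeps
  // the sign of -0.0f, which the compare-and-negate sequence dropped.
  uint32_t R = (XI & 0x7fffffffu) | (YI & 0x80000000u);
  float Out;
  std::memcpy(&Out, &R, sizeof Out);
  return Out;
}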
rdar://8984306
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125357 91177308-0d34-0410-b5e6-96231b3b80d8
gep to explicit addressing, we know that none of the intermediate
computation overflows.
This could use review: it seems that the shifts certainly wouldn't
overflow, but could the intermediate adds overflow if there is a
negative index?
Previously the testcase would instcombine to:
define i1 @test(i64 %i) {
%p1.idx.mask = and i64 %i, 4611686018427387903
%cmp = icmp eq i64 %p1.idx.mask, 1000
ret i1 %cmp
}
Now we get:
define i1 @test(i64 %i) {
%cmp = icmp eq i64 %i, 1000
ret i1 %cmp
}
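Roughly the kind of source that produces this pattern (a hypothetical
C++ example, not the actual testcase): with inbounds guarantees,
comparing two GEPs folds straight to a comparison of the indices.

bool atIndex1000(int *P, long long I) {
  // Previously the index was masked to guard against intermediate
  // overflow; now this folds directly to I == 1000.
  return &P[I] == &P[1000];
}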
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125271 91177308-0d34-0410-b5e6-96231b3b80d8
for NSW/NUW binops to follow the pattern of exact binops. This
allows someone to use Builder.CreateAdd(x, y, "tmp", MaybeNUW);
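As a minimal sketch of the call-site pattern this enables (header path
varies by LLVM version; MaybeNUW is a caller-computed bool):

#include "llvm/IR/IRBuilder.h"

llvm::Value *emitAdd(llvm::IRBuilder<> &Builder, llvm::Value *X,
                     llvm::Value *Y, bool MaybeNUW) {
  // The wrap flag is an ordinary parameter, so a single call site
  // replaces separate CreateAdd/CreateNUWAdd paths.
  return Builder.CreateAdd(X, Y, "tmp", MaybeNUW);
}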
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125270 91177308-0d34-0410-b5e6-96231b3b80d8
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.
Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck. I believe that this takes care of a bunch of related
code quality issues attached to PR8862.
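To make the inferred property concrete, a small illustration of when a
right shift is 'exact', i.e. shifts out no set bits (my sketch, not the
instcombine code):

unsigned roundTrip(unsigned X) {
  unsigned Y = X << 4; // the low 4 bits of Y are provably zero
  // Y >> 4 discards only zero bits, so the shift can be marked
  // 'exact', which in turn unlocks stronger icmp folds.
  return Y >> 4;
}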
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125267 91177308-0d34-0410-b5e6-96231b3b80d8
optimizations to be much more aggressive in the face of
exact/nsw/nuw div and shifts. For example, these two functions (which
are the same except that the first uses an 'exact' sdiv):
define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
%A = sdiv exact i64 %X, -5 ; X/-5 == 0 --> x == 0
%B = icmp eq i64 %A, 0
ret i1 %B
}
define i1 @sdiv_icmp4(i64 %X) nounwind {
%A = sdiv i64 %X, -5 ; X/-5 == 0 --> x == 0
%B = icmp eq i64 %A, 0
ret i1 %B
}
compile down to:
define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
%1 = icmp eq i64 %X, 0
ret i1 %1
}
define i1 @sdiv_icmp4(i64 %X) nounwind {
%X.off = add i64 %X, 4
%1 = icmp ult i64 %X.off, 9
ret i1 %1
}
This happens when you do something like:
(ptr1-ptr2) == 42
where the pointers are pointers to non-unit types.
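In C++ terms, a sketch of that pattern (not the commit's testcase):

bool fortyTwoApart(int *P1, int *P2) {
  // P1 - P2 divides the byte offset by sizeof(int); that sdiv is known
  // exact, so the comparison can fold without a multiply or mask.
  return P1 - P2 == 42;
}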
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125266 91177308-0d34-0410-b5e6-96231b3b80d8
and generally tidying things up. The only functionality changes are
very trivial ones, like now doing (-1 - A) -> (~A) for vectors too.
InstCombineAddSub.cpp | 296 +++++++++++++++++++++-----------------------------
1 file changed, 126 insertions(+), 170 deletions(-)
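For reference, the scalar form of that identity (in two's complement
~A == -A - 1, hence -1 - A == ~A):

int notViaSub(int A) {
  return -1 - A; // folds to ~A; after this change, for vectors too
}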
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125264 91177308-0d34-0410-b5e6-96231b3b80d8
Natural Loop Information
Loop Pass Manager
Canonicalize natural loops
Scalar Evolution Analysis
Loop Pass Manager
Induction Variable Users
Canonicalize natural loops
Induction Variable Users
Loop Strength Reduction
into this:
Scalar Evolution Analysis
Loop Pass Manager
Canonicalize natural loops
Induction Variable Users
Loop Strength Reduction
This fixes <rdar://problem/8869639>. I also filed PR9184 on doing this sort of
thing automatically, but it seems easier to just change the ordering of the
passes if this is the only case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125254 91177308-0d34-0410-b5e6-96231b3b80d8
When matching operands for a candidate opcode match in the auto-generated
AsmMatcher, check each operand against the expected operand match class.
Previously, operands were classified independently of the opcode being
handled, which led to difficulties when operand match classes were
more complicated than simple subclass relationships.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125245 91177308-0d34-0410-b5e6-96231b3b80d8
Loop splitting is better handled by the more generic global region splitting
based on the edge bundle graph.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125243 91177308-0d34-0410-b5e6-96231b3b80d8
name of a path, after resolving symbolic links and eliminating excess
path elements such as "foo/../" and "./".
This routine still needs a Windows implementation, but I don't have a
Windows machine available. Help? Please?
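On POSIX systems the described behavior maps onto realpath(3); a
minimal sketch of such an implementation (my illustration, not the
actual routine):

#include <limits.h>
#include <stdlib.h>
#include <string>

bool canonicalize(const std::string &In, std::string &Out) {
  char Buf[PATH_MAX];
  // realpath resolves symlinks and strips "foo/../" and "./" elements.
  if (!::realpath(In.c_str(), Buf))
    return false;
  Out = Buf;
  return true;
}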
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125228 91177308-0d34-0410-b5e6-96231b3b80d8
The tag is updated whenever the live interval union is changed, and it is tested
before using cached information.
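The scheme in miniature (a generic C++ sketch of tag-based
invalidation, not the allocator's code):

struct LiveUnion {
  unsigned Tag = 0;
  int Data = 0;
  void change(int V) { Data = V; ++Tag; } // every mutation bumps the tag
};

struct Query {
  unsigned SeenTag = ~0u; // deliberately stale at first
  int Cached = 0;
  int result(const LiveUnion &U) {
    if (SeenTag != U.Tag) {  // tag mismatch: cached data is invalid
      Cached = U.Data * 2;   // stand-in for the expensive recomputation
      SeenTag = U.Tag;
    }
    return Cached;
  }
};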
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125224 91177308-0d34-0410-b5e6-96231b3b80d8
Now, Syntax is only used as a tie-breaker if the Arch
matches. Previously, a request for the x86_64 disassembler followed by
one for the i386 disassembler in a single process would return the
cached x86_64 disassembler. Fixes <rdar://problem/8958982>.
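A sketch of the fixed lookup rule (hypothetical data structures, not
the disassembler API's actual cache):

#include <string>
#include <vector>

struct CachedDisasm {
  std::string Arch, Syntax;
};

const CachedDisasm *lookup(const std::vector<CachedDisasm> &Cache,
                           const std::string &Arch,
                           const std::string &Syntax) {
  const CachedDisasm *Best = nullptr;
  for (const auto &D : Cache) {
    if (D.Arch != Arch)
      continue;  // never hand back a disassembler for another Arch
    if (!Best || D.Syntax == Syntax)
      Best = &D; // Syntax only breaks ties within a matching Arch
  }
  return Best;
}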
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125215 91177308-0d34-0410-b5e6-96231b3b80d8