We've been performing the wrong operation on ARM for "atomicrmw nand" for
years, since "a NAND b" is "~(a & b)" rather than ARM's very tempting "a & ~b".
This bled over into the generic expansion pass.
So I assume no-one has ever actually tried to do an atomic nand in the real
world. Oh well.
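For reference, a minimal C sketch of the two operations (the helper names are illustrative, not LLVM code):
#include <stdint.h>

/* NAND as "atomicrmw nand" defines it: ~(a & b). */
static uint32_t nand_correct(uint32_t a, uint32_t b) { return ~(a & b); }

/* The operation we were emitting instead (ARM's tempting "a & ~b"). */
static uint32_t nand_wrong(uint32_t a, uint32_t b)   { return a & ~b; }

/* Example: a = 0xF0, b = 0x3C -> nand_correct = 0xFFFFFFCF, nand_wrong = 0xC0. */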
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212443 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for functions that are not actually available externally but
are referenced by a vtable of some kind. Clang emits functions like this for
the MS ABI.
PR20182.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212337 91177308-0d34-0410-b5e6-96231b3b80d8
When INT_MIN is the numerator in an sdiv, we would not properly handle
overflow when calculating the bounds of possible values; abs(INT_MIN) is
not a meaningful number.
Instead, check and handle INT_MIN by reasoning that the largest value is
INT_MIN/-2 and the smallest value is INT_MIN.
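A minimal C sketch of that reasoning (assuming 32-bit int):
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* abs(INT_MIN) overflows, so it cannot be used to compute bounds.
       Excluding the overflowing divisor -1, the extremes of INT_MIN / d are: */
    int largest  = INT_MIN / -2;  /* 1073741824 with 32-bit int */
    int smallest = INT_MIN / 1;   /* INT_MIN itself             */
    printf("largest = %d, smallest = %d\n", largest, smallest);
    return 0;
}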
This fixes PR20199.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212307 91177308-0d34-0410-b5e6-96231b3b80d8
It is not safe to negate the smallest signed integer; doing so yields
the same number back.
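A small C illustration (done in unsigned arithmetic, since negating INT32_MIN directly is undefined behaviour in C):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two's-complement negation wraps: 0 - 0x80000000 == 0x80000000. */
    uint32_t min_val = (uint32_t)INT32_MIN;
    uint32_t negated = 0u - min_val;
    printf("%s\n", negated == min_val ? "negation gives the same value back"
                                      : "negation changed the value");
    return 0;
}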
This fixes PR20186.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212164 91177308-0d34-0410-b5e6-96231b3b80d8
Matching behavior with DeadArgumentElimination (and leveraging some
now-common infrastructure), keep track of the function from debug info
metadata if arguments are promoted.
This may produce interesting debug info, since the arguments may be
missing or of different types... but at least backtraces, inlining, etc.,
will be correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212128 91177308-0d34-0410-b5e6-96231b3b80d8
There were transforms whose *intent* was to downgrade the linkage of
external objects to internal linkage.
However, they fired on things with private linkage as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212104 91177308-0d34-0410-b5e6-96231b3b80d8
Inlining functions with block addresses can cause many problems, and
supporting it properly would require rich infrastructure, including escape
analysis. At this point the safest way to address these problems is to block
such inlining from happening.
Background:
There have been reports of Ruby segmentation faults triggered by inlining
functions with block addresses, such as:
// Code snippet from the Ruby VM (C, using GCC's labels-as-values extension);
// finish_insn_seq_0 is a global.
void vm_exec_core(void) {
    finish_insn_seq_0 = &&INSN_LABEL_finish;
INSN_LABEL_finish:
    ;
}
This kind of scenario can also happen when LLVM picks a subset of blocks for
inlining, which is the case with the actual code in the Ruby environment.
LLVM suppresses inlining for such functions when there is an indirect branch.
The attached patch does so even when there is no indirect branch. Note that
user code like the above would not make much sense: using the global to jump
across function boundaries would be illegal.
Why was there a segfault:
In the snippet above the block with the label is recognized as dead, so it is
eliminated. Instead of a block address the cloner stores a constant (sic!) into
the global, resulting in the segfault (when the global is used in a goto).
Why had it worked in the past then:
By luck. In older versions vm_exec_core was also inlined, but the label address
used was the block label address in vm_exec_core. So the global jump ended up
in the original function rather than in the caller, which accidentally happened
to work.
Test case ./tools/clang/test/CodeGen/indirect-goto.c will fail as a result
of this commit.
rdar://17245966
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212077 91177308-0d34-0410-b5e6-96231b3b80d8
This not only improves basic debug info quality, but also fixes a larger
hole whenever we inline a call/invoke without a location (debug info for
the entire inlining is lost, along with other badness that the debug info
emission code is currently working around but shouldn't have to).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@212065 91177308-0d34-0410-b5e6-96231b3b80d8
The test added in r211762 was sloppy: the correct initializer wasn't
added to @llvm.global_ctors.
Spotted by Pasi Parviainen!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211879 91177308-0d34-0410-b5e6-96231b3b80d8
If both instructions to be replaced are marked invariant, the resulting
instruction is invariant.
rdar://13358910
Fix by Erik Eckstein!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211801 91177308-0d34-0410-b5e6-96231b3b80d8
Folding a reference to a thread_local variable into another global
variable's initializer is very problematic: no relocation exists to
represent such an access.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211762 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up to r211331, which failed to notice that we were
returning early from ValidLookupTableConstant for GEPs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211753 91177308-0d34-0410-b5e6-96231b3b80d8
[LLVM part]
These patches rename the loop unrolling and loop vectorizer metadata
such that they have a common 'llvm.loop.' prefix. Metadata name
changes:
llvm.vectorizer.* => llvm.loop.vectorizer.*
llvm.loopunroll.* => llvm.loop.unroll.*
This was a suggestion from an earlier review
(http://reviews.llvm.org/D4090), which added the loop unrolling
metadata.
Patch by Mark Heffernan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211710 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes exponential compilation complexity in PR19835, caused by
LICM::sink not handling the following pattern well:
f = op g
e = op f, g
d = op e
c = op d, e
b = op c
a = op b, c
When an instruction with N uses is sunk, each of its operands gets N
new uses (all of them phi nodes). In the example above, if a had 1
use, c would have 2, e would have 4, and g would have 8.
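A hedged C sketch of code with that shape (purely illustrative; PR19835 has the real reproducer):
int chain(int g, int n) {
    int a = 0;
    for (int i = 0; i < n; ++i) {
        /* Every value below is loop-invariant and feeds the next one(s),
           matching the f/e/d/c/b/a chain above.                          */
        int f = g * 3;
        int e = f + g;
        int d = e * 3;
        int c = d + e;
        int b = c * 3;
        a = b + c;
    }
    return a;  /* only used after the loop, so LICM tries to sink the chain */
}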
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211673 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This new debug emission kind supports emitting line location
information for all instructions, but stops code generation
from emitting debug info to the final output.
This mode is useful when the backend wants to track source
locations during code generation, but it does not want to
produce debug info. This is currently used by optimization
remarks (-pass-remarks, -pass-remarks-missed and
-pass-remarks-analysis).
To prevent debug info emission, DIBuilder never inserts the
annotation 'llvm.dbg.cu' when LocTrackingOnly is enabled.
Reviewers: echristo, dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D4234
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211609 91177308-0d34-0410-b5e6-96231b3b80d8
The induction variable's start value needs to be defined before we branch
(overflow check) to the scalar preheader where it is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211460 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Different range metadata can lead to different optimizations in later
passes, possibly breaking the semantics of the merged function. So range
metadata must be taken into consideration when comparing Load
instructions.
Thanks!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211391 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for recognizing patterns such as fadd,fsub,fadd,fsub... or add,sub,add,sub... and
vectorizing them as vector shuffles if they are profitable.
These vector-shuffle patterns can later be converted to instructions such as addsubpd on X86.
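As a hedged illustration (the function and names are invented here, not from the patch), source like the following yields that kind of alternating add/sub sequence; the exact lane order and lowering to addsubpd depend on the target:
void addsub(double *restrict c, const double *a, const double *b) {
    c[0] = a[0] + b[0];   /* add */
    c[1] = a[1] - b[1];   /* sub */
    c[2] = a[2] + b[2];   /* add */
    c[3] = a[3] - b[3];   /* sub */
}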
Thanks to Arnold and Hal for the reviews. http://reviews.llvm.org/D4015
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211339 91177308-0d34-0410-b5e6-96231b3b80d8
We would previously put dllimport variables in switch lookup tables, which
doesn't work because the address cannot be used in a constant initializer.
This is basically the same problem that we have in PR19955.
Putting TLS variables in switch tables also doesn't work, because the
address of such a variable is not constant.
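A hedged C sketch of the TLS half of the problem (GCC/Clang __thread syntax; the names are illustrative, and a dllimport variable hits the analogous issue on Windows):
static __thread int per_thread;  /* TLS: its address differs per thread */
static int plain_a, plain_b;

int *pick(int i) {
    /* SimplifyCFG may turn this switch into a constant lookup table of
       addresses, but &per_thread is not a link-time constant, so it cannot
       appear in the table's constant initializer.                          */
    switch (i) {
    case 0:  return &per_thread;
    case 1:  return &plain_a;
    default: return &plain_b;
    }
}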
Differential Revision: http://reviews.llvm.org/D4220
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211331 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
With this patch, range metadata can be added to call/invoke, including
IntrinsicInst. Previously, it could only be added to load.
Rename computeKnownBitsLoad to computeKnownBitsFromRangeMetadata, because
range metadata is no longer used only by load.
Update the language reference to reflect this change.
Test Plan:
Add several tests in range-2.ll to confirm the verifier is happy with
having range metadata on call/invoke.
Add two tests in AddOverFlow.ll to confirm annotating range metadata to
call/invoke can benefit InstCombine.
Reviewers: meheff, nlewycky, reames, hfinkel, eliben
Reviewed By: eliben
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D4187
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211281 91177308-0d34-0410-b5e6-96231b3b80d8
* Find factorization opportunities using identity values.
* Find factorization opportunities by treating shl(X, C) as mul(X, shl(1, C)) (see the sketch after this list).
* Keep the NSW flag while simplifying instructions using factorization.
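A minimal C sketch of the first two factorizations (plain integer algebra with assumed int operands, not LLVM's implementation):
/* Identity value: the lone x is really x * 1, so x*c + x factors to x*(c + 1). */
int factor_identity(int x, int c) { return x * c + x; }

/* shl as mul: x << 3 is x * 8, so (x << 3) + x*c factors to x * (8 + c). */
int factor_shift(int x, int c)    { return (x << 3) + x * c; }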
This fixes PR19263.
Differential Revision: http://reviews.llvm.org/D3799
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211261 91177308-0d34-0410-b5e6-96231b3b80d8
InstCombineMulDivRem has:
// Canonicalize (X+C1)*CI -> X*CI+C1*CI.
InstCombineAddSub has:
// W*X + Y*Z --> W * (X+Z) iff W == Y
These two transforms could fight with each other if C1*CI did not fold
away to something simpler than a ConstantExpr mul.
The InstCombineMulDivRem transform only acted on ConstantInts until
r199602, when it was changed to operate on all Constants so that it
could fire on ConstantVectors.
To fix this, make the transform more careful by checking whether we
actually folded away C1*CI.
This fixes PR20079.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211258 91177308-0d34-0410-b5e6-96231b3b80d8
These will be used for custom lowering and for library
implementations of various math functions, so it's useful
to expose them as builtins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211247 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
As a starting step, we use only one simple heuristic: if the sign bits
of both a and b are zero, we can prove that "add a, b" cannot overflow
unsigned, and thus convert it to "add nuw a, b".
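A hedged C sketch of that reasoning (32-bit values assumed; not the actual ValueTracking code):
#include <stdint.h>

uint32_t add_maybe_nuw(uint32_t a, uint32_t b) {
    if ((a >> 31) == 0 && (b >> 31) == 0) {
        /* Both sign bits are zero, so a, b <= 2^31 - 1 and
           a + b <= 2^32 - 2: the addition cannot wrap.  This is the case
           where LLVM can now emit "add nuw a, b".                        */
        return a + b;
    }
    return a + b;  /* may wrap in general */
}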
Updated all affected tests and added two new tests (@zero_sign_bit and
@zero_sign_bit2) in AddOverflow.ll.
Test Plan: make check-all
Reviewers: eliben, rafael, meheff, chandlerc
Reviewed By: chandlerc
Subscribers: chandlerc, llvm-commits
Differential Revision: http://reviews.llvm.org/D4144
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211084 91177308-0d34-0410-b5e6-96231b3b80d8
r199771 accidentally broke the logic that makes sure that SROA only splits
loads on byte boundaries. If such a split happens, some bits get lost
when reassembling loads of wider types, causing data corruption.
Move the width check up to reject such splits early, avoiding the
corruption. Fixes PR19250.
Patch by: Björn Steinbrink <bsteinbr@gmail.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211082 91177308-0d34-0410-b5e6-96231b3b80d8