The C++ and C semantics of the compare_and_swap operations actually
require us to return a boolean "success" value. In LLVM terms this
means a second comparison of the output of "cmpxchg" against the input
comparison (expected) value.
However, x86's "cmpxchg" instruction sets all flags for the comparison
it performs, so we can skip any secondary comparison. (N.b. this isn't
true for cmpxchg8b/16b, which only set ZF.)
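For illustration, a source-level sketch of the operation in question
(hypothetical example, not from the patch):

    #include <atomic>

    // compare_exchange_strong must produce the boolean "success" value
    // the C/C++ semantics require.  On x86 the ZF left behind by cmpxchg
    // already encodes it, so the backend can branch on the flags directly
    // instead of re-comparing the loaded value.
    bool try_update(std::atomic<int> &A, int Expected, int Desired) {
      return A.compare_exchange_strong(Expected, Desired);
    }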
rdar://problem/13201607
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210523 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we were abandoning the attempt, leading to some combination of
extra work (when selection of a load/store fails completely) and inferior code
(when this leads to a real memcpy call instead of inlining).
rdar://problem/17187463
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210520 91177308-0d34-0410-b5e6-96231b3b80d8
We were hitting an assert if FastISel couldn't create the load or store we
requested. Currently this happens for large frame-local addresses, though
CodeGen could be improved there.
rdar://problem/17187463
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210519 91177308-0d34-0410-b5e6-96231b3b80d8
This improves the X86 cost model for small constants with large types. Before
this commit we would even hoist trivial constants such as i96 2.
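A rough analogue in C++, using __int128 as a stand-in for a wide integer
type (hypothetical example, not from the patch):

    // Before the cost-model fix, even a trivial wide constant used in
    // several places could be hoisted into a shared materialization,
    // although rematerializing it at each use is essentially free.
    __int128 f(__int128 a, __int128 b) {
      return (a + 2) * (b + 2); // the repeated constant 2 is cheap
    }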
This is related to <rdar://problem/17070936>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210504 91177308-0d34-0410-b5e6-96231b3b80d8
never be true in a well-defined context. The null-pointer checks have been
moved into the caller logic so they do not rely on undefined behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210497 91177308-0d34-0410-b5e6-96231b3b80d8
Unfortunately there's no way to do this elegantly with pre-canned
algorithms. Using a generating iterator doesn't work because you default
construct for each element, then move construct into the actual slot
(bad for copyable-but-non-movable types, and a little unneeded overhead even
in the move-only case), so just write it out manually.
This solution isn't exception safe (if one of the element ctors throws, we
don't destroy the elements already constructed and rethrow, which
std::uninitialized_fill does do) but SmallVector (and LLVM) isn't
exception safe anyway.
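A minimal sketch of the manual loop being described (hypothetical, not the
actual SmallVector code):

    #include <new>

    // Placement-new each slot directly from the source value, avoiding
    // the default-construct-then-move a generating iterator would force.
    template <typename T>
    void uninit_fill(T *Begin, T *End, const T &Value) {
      for (T *I = Begin; I != End; ++I)
        ::new (static_cast<void *>(I)) T(Value);
      // Deliberately absent: the unwind logic that would destroy the
      // already-constructed elements and rethrow if T's ctor throws,
      // which std::uninitialized_fill provides.
    }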
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210495 91177308-0d34-0410-b5e6-96231b3b80d8
The code in PPCTargetLowering::PerformDAGCombine() that handles
unaligned Altivec vector loads generates an lvsl followed by a vperm.
As we've seen in numerous other places, the vperm instruction has a
big-endian bias, and this is fixed for little endian by complementing
the permute control vector and swapping the input operands. In this
case the lvsl is providing the permute control vector. Rather than
generating an lvsl and a complement operation, it is sufficient to
generate an lvsr instruction instead. Thus for LE code generation we
will generate an lvsr rather than an lvsl, and swap the other input
arguments on the vperm.
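For reference, the classic source-level shape of this idiom in standard
Altivec intrinsics (a sketch of the pattern, not code from the patch):

    #include <altivec.h>

    // Big-endian unaligned load: two aligned loads plus a permute.  For
    // little-endian code generation the combine now uses lvsr (vec_lvsr)
    // for the control vector and swaps the two data inputs of the vperm,
    // instead of emitting lvsl plus an explicit complement.
    vector unsigned char load_unaligned(const unsigned char *p) {
      vector unsigned char lo   = vec_ld(0, p);    // aligned load covering the start
      vector unsigned char hi   = vec_ld(15, p);   // aligned load covering the end
      vector unsigned char ctrl = vec_lvsl(0, p);  // permute control vector
      return vec_perm(lo, hi, ctrl);               // select the 16 desired bytes
    }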
The existing test/CodeGen/PowerPC/vec_misalign.ll is updated to test
the code generation for PPC64 and PPC64LE, in addition to the existing
PPC32/G5 testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210493 91177308-0d34-0410-b5e6-96231b3b80d8
Don't terminate location ranges for register-described variables
at the end of a machine basic block if this register is never modified
in the function body, except in the prologue and epilogue. The prologue's
location is guessed from FrameSetup flags on MachineInstructions, while
the epilogue's location is deduced from the debug locations of instructions
in basic blocks that end with return instructions.
This patch is mostly targeted at fixing non-trivial debug locations for
variables addressed via stack and frame pointers.
It is not really a generic fix. We can still produce poor debug info
for register-described variables if this register *is* modified somewhere
in the function, but in unrelated places. This might be the case for the debug
info in optimized binaries (e.g. for local variables in inlined functions).
The LiveDebugVariables pass in CodeGen attempts to fix this problem by
adjusting DBG_VALUE instructions, but that pass is tied to the greedy
register allocator, which is used only in optimized builds. A proper fix
would likely involve generalizing LiveDebugVariables to all register
allocators. See the review thread at http://reviews.llvm.org/D3933 for
more discussion.
I'm proceeding with this patch to fix immediate severe problems and
important cases, e.g. fix completely broken debug info with AddressSanitizer
and fix PR19307 (missing debug info for by-value std::string arguments).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210492 91177308-0d34-0410-b5e6-96231b3b80d8
The armv7-windows-itanium environment is nearly identical to the MSVC ABI. It
has a few divergences, mostly revolving around the use of the Itanium ABI for
C++. VLA support is one of these extensions.
This adds support for proper VLA emission for this environment. This is
somewhat similar to the handling for __chkstk emission on X86 and the large
stack frame emission for ARM. The invocation style for chkstk is still
controlled via the -mcmodel flag to clang.
Make an explicit note that this is an extension.
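A hedged sketch of the kind of code this affects (VLAs in C++ are
themselves a clang extension):

    // The size of buf is only known at run time, so the dynamic stack
    // allocation must be routed through the stack-probe helper (chkstk)
    // on Windows targets rather than a plain stack-pointer adjustment.
    int sum_squares(int n) {
      int buf[n]; // variable-length array
      for (int i = 0; i < n; ++i)
        buf[i] = i * i;
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += buf[i];
      return s;
    }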
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210489 91177308-0d34-0410-b5e6-96231b3b80d8
Originally this work was initiated by Björn Steinbrink here:
http://reviews.llvm.org/D3437
The bug itself has been fixed by fundamental changes in MergeFunctions, but
the special checks for function merging are still relevant, and the test has
been accepted with slight modifications.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210486 91177308-0d34-0410-b5e6-96231b3b80d8
Tested and works fine with clang using libstdc++.
All indications are that this was fixed some time ago and isn't a problem with
any clang version we support.
I've added a note in PR6907, which is still open for some reason.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210485 91177308-0d34-0410-b5e6-96231b3b80d8
Support headers shouldn't use config.h definitions, and they should never be
undefined like this.
ConstantFolding.cpp was the only user of this facility and already includes
config.h for other math features, so it makes sense to move the checks there,
at the point of use.
(The implicit config.h was also quite dangerous -- removing the FEnv.h include
would have silently disabled math constant folding without causing any tests to
fail. Need to investigate -Wundef once the cleanup is done.)
This eliminates the last config.h include from LLVM headers, paving the way for
more consistent configuration checks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210483 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds new target specific combine rules to identify horizontal
add/sub idioms from BUILD_VECTOR dag nodes.
This patch also teaches the DAGCombiner how to canonicalize sequences of
insert_vector_elt dag nodes according to the following rule:
(insert_vector_elt (insert_vector_elt A, I0), I1) ->
(insert_vector_elt (insert_vector_elt A, I1), I0)
This new canonicalization rule only triggers if the inner insert_vector_elt
dag node has exactly one use; also, both indices must be known constants,
and I1 < I0.
This last rule made it possible to write a simpler algorithm to identify
horizontal add/sub patterns because now we don't have to worry about the
ordering of insert_vector_elt dag nodes.
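As a rough source-level illustration (hypothetical code using clang's
vector-subscript extension, not from the patch), a pattern that can now be
matched to a horizontal add on SSE3-capable targets:

    #include <immintrin.h>

    // Builds the result lane by lane (insert_vector_elt nodes in the DAG);
    // with the canonical ordering the backend can recognize the pairwise
    // sums and emit a single haddps.
    __m128 pairwise_add(__m128 a, __m128 b) {
      // _mm_set_ps takes lanes highest-first: result is
      // { a0+a1, a2+a3, b0+b1, b2+b3 }, matching haddps.
      return _mm_set_ps(b[3] + b[2], b[1] + b[0],
                        a[3] + a[2], a[1] + a[0]);
    }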
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210477 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code in PPCTargetLowering::LowerMUL() for multiplying two
v16i8 values assumes that vector elements are numbered in big-endian
order. For little-endian targets, the vector element numbering is
reversed, but the vmuleub, vmuloub, and vperm instructions still
assume big-endian numbering. To account for this, we must adjust the
permute control vector and reverse the order of the input registers on
the vperm instruction.
The existing test/CodeGen/PowerPC/vec_mul.ll is updated to be executed
on powerpc64 and powerpc64le targets as well as the original powerpc
(32-bit) target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210474 91177308-0d34-0410-b5e6-96231b3b80d8
This patch teaches the backend how to check for the 'NoSignedWrap' flag on
binary operations to improve the emission of 'test' instructions.
If the result of a binary operation is known not to overflow, we know that
resetting the Overflow flag is unnecessary, so we can avoid emitting the
test instruction.
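A sketch of the kind of code that benefits (hypothetical example, not from
the patch):

    // Signed overflow is undefined in C++, so the addition below becomes
    // an 'add nsw' in IR.  Since nsw guarantees OF would be 0, the flags
    // set by the add can feed a signed branch directly, without a 'test'
    // to refresh them.
    int clamp_positive(int a, int b) {
      int s = a + b;        // add nsw
      return s > 0 ? s : 0; // signed compare against zero reuses flags
    }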
Patch by Marcello Maggioni.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210468 91177308-0d34-0410-b5e6-96231b3b80d8
This patch modifies SelectionDAGBuilder to construct SDNodes with associated
NoSignedWrap, NoUnsignedWrap and Exact flags coming from IR BinaryOperator
instructions.
Added a new SDNode type called 'BinaryWithFlagsSDNode' to allow accessing
nsw/nuw/exact flags during codegen.
Patch by Marcello Maggioni.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210467 91177308-0d34-0410-b5e6-96231b3b80d8
According to the Intel Software Optimization Manual,
on Silvermont INC and DEC instructions require
an additional uop to merge the flags.
As a result, a branch instruction depending
on an INC or a DEC instruction incurs a 1-cycle penalty.
Differential Revision: http://reviews.llvm.org/D3990
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210466 91177308-0d34-0410-b5e6-96231b3b80d8
Instructions from __nodebug__ functions don't have file:line
information even when inlined into non-nodebug functions. As a result,
intrinsics (SSE and other) from <*intrin.h> clang headers _never_
have file:line information.
With this change, an instruction without !dbg metadata gets one from
the call instruction when inlined.
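A hedged sketch of the pattern (hypothetical wrapper, marked the way
clang's intrinsic headers mark theirs):

    // The nodebug attribute strips line info from the wrapper's body;
    // after this change, the instructions inlined from it inherit the
    // !dbg location of the call site instead of having none at all.
    static inline __attribute__((__nodebug__)) int twice(int x) {
      return 2 * x;
    }

    int caller(int x) {
      return twice(x); // the inlined body now gets this line's location
    }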
Fixes PR19001.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210459 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a crash on MMX intrinsics, as well as a corner case in handling of
all unsigned pack intrinsics.
PR19953.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210454 91177308-0d34-0410-b5e6-96231b3b80d8
For each array index that is in the form of zext(a), convert it to sext(a)
if we can prove zext(a) <= max signed value of typeof(a). The conversion
helps to split zext(x + y) into sext(x) + sext(y).
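As a rough illustration of where such indices arise (hypothetical code,
not from the patch):

    // On LP64 targets the array index below is zext(i + 1), widened to
    // 64 bits.  If the pass can prove i + 1 never exceeds the maximum
    // signed value of its type, the zext can be rewritten as a sext, and
    // sext(i + 1) can then be split into sext(i) + 1, exposing a constant
    // offset to later GEP optimizations.
    float load_next(const float *base, unsigned i) {
      return base[i + 1];
    }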
Reviewed in http://reviews.llvm.org/D4060
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210444 91177308-0d34-0410-b5e6-96231b3b80d8
zext(a + b) != zext(a) + zext(b) even if a + b >= 0 && b >= 0.
e.g., a = i4 0b1111, b = i4 0b0001:
zext (a + b) to i8 = zext 0b0000 to i8 = 0b00000000
(zext a to i8) + (zext b to i8) = 0b00001111 + 0b00000001 = 0b00010000
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210439 91177308-0d34-0410-b5e6-96231b3b80d8
The inbounds keyword is not necessary in these two tests: zext(a +nuw b) =
zext(a) + zext(b) should hold with or without inbounds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210437 91177308-0d34-0410-b5e6-96231b3b80d8
To test cases that involve actual repetition (> 1 element), at least
one element before the insertion point, and some elements of the
original range that still fit in that space after insertion.
Actually we need coverage for the inverse case too (where no elements
after the insertion point fit into the previously allocated space), but
this'll do for now, and I might end up rewriting bits of SmallVector to
avoid that special case anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210436 91177308-0d34-0410-b5e6-96231b3b80d8