with fixes. Includes moving the llvm-objdump tests for universal files into an X86
directory, and the fix for the failure on Linux that Rafael tracked down with asan.
I had both Jim Grosbach and Adam Nemet look over the second fix, since I could not
set up asan to reproduce the failure with the old version and confirm it is gone with the fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223416 91177308-0d34-0410-b5e6-96231b3b80d8
r32900 introduced custom lowering for fcopysign, with two checks to
change the magnitude value's type if it's larger/smaller than the sign
value's type. r32932 replaced that code for the smaller case.
r43205 did the same for the larger case, but left the old code, now dead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223415 91177308-0d34-0410-b5e6-96231b3b80d8
When rematerialization eliminates all uses of a vreg, update any DBG_VALUE describing that vreg
to point to the rematerialized register instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223401 91177308-0d34-0410-b5e6-96231b3b80d8
Offset == 0 is a valid offset for kernel code model according to the
x86_64 System V ABI. Found by inspection, no testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223383 91177308-0d34-0410-b5e6-96231b3b80d8
r223113 added support for ARM modified immediate assembly syntax, which
assumed all immediate operands are prefixed with a '#'. That assumption
is wrong: per the ARMARM, the '#' character should be treated as
optional. The current patch fixes this regression and adds a test
case. A follow-up patch will expand the test coverage to other instructions.
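For example, both of the following spellings should now assemble (a hedged illustration; the instructions in the actual test case may differ):
add r0, r1, #255    @ '#' prefix present
add r0, r1, 255     @ '#' omitted, equally valid per the ARMARM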
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223381 91177308-0d34-0410-b5e6-96231b3b80d8
So there are a couple of issues with indirect calls on thumbv4t. First, the most
'obvious' instruction, 'blx', isn't available until v5t. And second, the
next-most-obvious sequence, 'mov lr, pc; bx rN', doesn't do the right thing in
Thumb code, because the saved-off pc has its Thumb bit cleared, so when the
callee returns we end up in ARM mode.... yuck.
The solution is to 'bl' to a nearby landing pad with a 'bx rN' in it.
We could cut down on code size by sharing the landing pads between call sites
that are close enough, but for the moment let's do correctness first and look at
performance later.
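A hedged sketch of the emitted sequence (register and label chosen for illustration):
bl .Lcall_via_r4    @ lr = address of the next insn, with the Thumb bit set
...                 @ execution resumes here when the callee returns
.Lcall_via_r4:
bx r4               @ jump to the callee address held in r4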
Patch by: Iain Sandoe
http://reviews.llvm.org/D6519
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223380 91177308-0d34-0410-b5e6-96231b3b80d8
Reapply r223347, with a fix to not crash on uninserted instructions (or more
precisely, instructions in uninserted blocks). bugpoint was able to reduce the
test case, but it is still somewhat large (and relies on setting
things up to be simplified during inlining), so I've not included it here.
Nevertheless, it is clear what is going on and why.
Original commit message:
Restrict somewhat the memory-allocation pointer cmp opt from r223093
Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.
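As a hedged IR illustration of a comparison the optimization now leaves alone (names invented for this note):
@g = external global i8            ; may resolve lazily to another module
define i1 @f() {
  %m = call i8* @malloc(i64 4)
  %c = icmp eq i8* %m, @g          ; no longer folded to false
  ret i1 %c
}
declare i8* @malloc(i64)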
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223371 91177308-0d34-0410-b5e6-96231b3b80d8
This bug has the effect of converting a test of isGCRelocate(InvokeInst*) from a false return to a crash.
This may be the root cause of the crash Joerg reported against r223137, but I'm still waiting for a clean build of clang to complete to be able to confirm. Once I've confirmed the issue, I'll submit a test case separately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223370 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
I currently have to specify --build=mips-linux-gnu or --build=mipsel-linux-gnu
to configure in order to successfully recurse a 32-bit build of the compiler on
my mips64-linux-gnu and mips64el-linux-gnu targets. This is a bug and will be
fixed, but in the meantime it is useful to have a way to work around it.
Reviewers: tstellarAMD
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6522
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223369 91177308-0d34-0410-b5e6-96231b3b80d8
r223113 added support for ARM modified immediate assembly syntax. That patch
broke support for immediate expressions, as in:
add r0, #(4 * 4)
It wasn't caught because we don't have any tests for this feature. This patch
fixes this regression and adds test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223366 91177308-0d34-0410-b5e6-96231b3b80d8
This matches std::vector and should avoid unnecessary masking to 32 bits
when calling them on a 64-bit system.
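A hedged sketch of the idea (member name invented; not the actual diff):
// Returning size_t, as std::vector does, lets 64-bit callers use the value
// directly instead of masking it back down to 32 bits.
size_t size() const { return NumElements; }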
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223365 91177308-0d34-0410-b5e6-96231b3b80d8
The current DAG combine turns a sequence of extracts from <4 x i32> followed by zexts into a store followed by scalar loads.
According to measurements by Martin Krastev (see PR 21269), a sequence of an extract, movs, and shifts gives better performance on x86-64. However, for 32-bit x86, the previous sequence still seems better.
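A hedged IR sketch of the input pattern (values invented):
%e0 = extractelement <4 x i32> %v, i32 0
%z0 = zext i32 %e0 to i64
%e1 = extractelement <4 x i32> %v, i32 1
%z1 = zext i32 %e1 to i64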
Differential Revision: http://reviews.llvm.org/D6501
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223360 91177308-0d34-0410-b5e6-96231b3b80d8
Replaced some logic that checked whether a build_vector node is doing a splat of a
non-undef value with a call to the method BuildVectorSDNode::getSplatValue().
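A hedged sketch of the new pattern (surrounding code invented):
if (auto *BV = dyn_cast<BuildVectorSDNode>(N))
  if (SDValue Splat = BV->getSplatValue())  // non-undef splat value, if any
    return Splat;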
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223354 91177308-0d34-0410-b5e6-96231b3b80d8
--disable-timestamps was added to the configure command way back in r142647, but
the command that echoes this command to the log was not updated at the time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223351 91177308-0d34-0410-b5e6-96231b3b80d8
According to a previous FIXME comment, we now not only look at MBB
successors, but also handle code sinking past them:
x = computation
if () {} else {}
use x
The instruction could be sunk over the whole diamond for the
if/then/else (or loop, etc), allowing it to be sunk into other blocks
after that.
Modified a test added in r204522, since one fewer spill is now present.
Minor fixes in comments.
Patch provided by Jonas Paulsson. Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223350 91177308-0d34-0410-b5e6-96231b3b80d8
Added instcombine optimizations for BSWAP with AND/OR/XOR ops:
OP( BSWAP(x), BSWAP(y) ) -> BSWAP( OP(x, y) )
OP( BSWAP(x), CONSTANT ) -> BSWAP( OP(x, BSWAP(CONSTANT) ) )
Since it's just a one-liner, I've added the equivalent BSWAP fold to the DAGCombiner as well:
fold (OP (bswap x), (bswap y)) -> (bswap (OP x, y))
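For example (a hedged IR illustration, not taken from the tests):
%bx = call i32 @llvm.bswap.i32(i32 %x)
%by = call i32 @llvm.bswap.i32(i32 %y)
%r  = and i32 %bx, %by
; becomes:
%xy = and i32 %x, %y
%r  = call i32 @llvm.bswap.i32(i32 %xy)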
Refactored bswap-fold tests to use FileCheck instead of just checking that the bswaps had gone.
Differential Revision: http://reviews.llvm.org/D6407
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223349 91177308-0d34-0410-b5e6-96231b3b80d8
I'm recommitting the codegen part of the patch.
The vectorizer part will be sent for review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets with AVX2 and AVX-512. The vectorizer asks the target about the availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
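A call site then looks like this (a hedged sketch using the declarations above; value names invented):
%data = call <16 x i32> @llvm.masked.load.v16i32(i8* %ptr, <16 x i32> %passthru, i32 4, <16 x i1> %mask)
call void @llvm.masked.store.v8f64(i8* %out, <8 x double> %val, i32 4, <8 x i1> %m)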
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223348 91177308-0d34-0410-b5e6-96231b3b80d8
Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223347 91177308-0d34-0410-b5e6-96231b3b80d8
Commit on behalf of Arch Robison:
- This patch fixes the bug described in
http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-May/062343.html
The fix allocates an extra slot just below the GPRs and stores the base pointer
there. This is done only for functions containing llvm.eh.sjlj.setjmp that also
need a base pointer. Because code containing llvm.eh.sjlj.setjmp saves all of
the callee-save GPRs in the prologue, the offset to the extra slot can be
computed before prologue generation runs.
The impact at run time on affected functions is:
- One extra store in the prologue; the store saves the base pointer.
- One extra load after a llvm.eh.sjlj.setjmp; the load restores the base pointer.
Because the extra slot is just above a gap between frame-pointer-relative and
base-pointer-relative chunks of memory, there is no impact on other offset
calculations other than ensuring there is room for the extra slot.
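A hedged IR sketch of the kind of function affected (function and value names invented; the intrinsic is the real one):
declare i32 @llvm.eh.sjlj.setjmp(i8*)
define void @f(i64 %n, i8* %buf) {
  %p = alloca i8, i64 %n             ; variable-sized object, one reason a
                                     ; base pointer may be required
  %r = call i32 @llvm.eh.sjlj.setjmp(i8* %buf)
  ret void
}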
http://reviews.llvm.org/D6388
Patch by Arch Robison <arch.robison@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223329 91177308-0d34-0410-b5e6-96231b3b80d8
We had mistakenly believed that GCC's 'cc' referred to the entire
condition-code register (cr0 through cr7) -- and implemented this in r205630 to
fix PR19326 -- but 'cc' is actually an alias only for 'cr0'. This was causing LLVM
to clobber too much in legacy code with inline asm using the 'cc' clobber.
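A hedged C illustration of the affected pattern (instruction chosen for the example):
int out, in = 1;
/* the "cc" clobber now covers only cr0, not cr0 through cr7 */
asm("addic. %0, %1, 1" : "=r"(out) : "r"(in) : "cc");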
Fixes PR21451.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223328 91177308-0d34-0410-b5e6-96231b3b80d8