Summary:
Follow up to [x32] "Use ebp/esp as frame and stack pointer":
http://reviews.llvm.org/D4617
In that earlier patch, NaCl64 was made to always use rbp.
That's needed for most cases because rbp should hold a full
64-bit address within the NaCl sandbox, so that loads and stores
off of rbp don't require sandbox adjustment (zeroing the top
32 bits, then filling them in by adding r15).
However, llvm.frameaddress returns a pointer, and pointers
are 32-bit for NaCl64. In this case, use ebp instead, which
makes the register copy type-check. A similar mechanism
may be needed for llvm.eh.return, but it is not added in this change.
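To illustrate, a minimal IR sketch (not taken from the patch) of the case
being fixed; on NaCl64 the value returned by llvm.frameaddress is a 32-bit
pointer, so the copy out of the frame-pointer register must read ebp rather
than rbp:
  declare i8* @llvm.frameaddress(i32)

  define i8* @frame() {
    ; level 0 returns the current function's frame address
    %fp = call i8* @llvm.frameaddress(i32 0)
    ret i8* %fp
  }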
Test Plan: test/CodeGen/X86/frameaddr.ll
Reviewers: dschuff, nadav
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D6514
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223510 91177308-0d34-0410-b5e6-96231b3b80d8
SSE2/AVX non-constant packed shift instructions only use the lower 64 bits of
the shift count.
This patch teaches function 'getTargetVShiftNode' how to deal with shifts
where the shift count node is of type MVT::i64.
Before this patch, function 'getTargetVShiftNode' only knew how to deal with
shift count nodes of type MVT::i32. This forced the backend to wrongly
truncate the shift count to MVT::i32, and then zero-extend it back to MVT::i64.
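For illustration, a hypothetical kernel (not from the patch) whose vector
shift count is a splatted i64 value; with this change the i64 count can feed
the target shift node directly instead of round-tripping through i32:
  define <2 x i64> @shl_by_splat(<2 x i64> %a, i64 %c) {
    ; splat the scalar count into all lanes
    %ins = insertelement <2 x i64> undef, i64 %c, i32 0
    %splat = shufflevector <2 x i64> %ins, <2 x i64> undef, <2 x i32> zeroinitializer
    %shl = shl <2 x i64> %a, %splat
    ret <2 x i64> %shl
  }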
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223505 91177308-0d34-0410-b5e6-96231b3b80d8
When a loop gets bundled up, its outgoing edges are quite large, and can
just barely overflow 64 bits. If one successor has multiple incoming
edges -- and that successor is getting all the incoming mass --
combining just its edges can overflow. Handle that by saturating rather
than asserting.
This fixes PR21622.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223500 91177308-0d34-0410-b5e6-96231b3b80d8
No functional changes. Got myself bitten in r223113 when adding support for
modified immediate syntax (regressions reported by joerg@britannica.bec.de,
fixes in r223366 and r223381). Our assembler tests did not cover several
different syntax variants. This patch expands the test coverage to check for
the following cases:
1. Modified immediate operands may be expressed with expressions, as in #(4 * 2)
instead of #8.
2. Modified immediate operands may be _optionally_ prefixed by a '#' symbol or a
'$' symbol.
3. Certain instructions (e.g. ADD) support single input register variants;
[ADD r0, #mod_imm] is the same as [ADD r0, r0, #mod_imm].
4. Certain instructions have aliases which convert plain immediates to modified
immediates. For example, [ADD r0, -10] is not valid because -10 (in two's
complement) cannot be encoded as a modified immediate, but ARMInstrInfo.td
defines an alias which transforms it into [SUB r0, 10].
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223475 91177308-0d34-0410-b5e6-96231b3b80d8
Do not realign the origin address if the corresponding application
address is at least 4-byte aligned.
Saves 2.5% code size in track-origins mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223464 91177308-0d34-0410-b5e6-96231b3b80d8
When lowering a vector shift node, the backend checks if the shift count is a
shuffle with a splat mask. If so, then it introduces an extra dag node to
extract the splat value from the shuffle. The splat value is then used
to generate the shift count of a target-specific shift.
However, if we know that the shift count is a splat shuffle, we can instead
use the splat index 'I' to extract the I-th element directly from the first
shuffle operand. The advantage is that the splat shuffle may then become
dead, since we no longer use it.
Example:
;;
define <4 x i32> @example(<4 x i32> %a, <4 x i32> %b) {
%c = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer
%shl = shl <4 x i32> %a, %c
ret <4 x i32> %shl
}
;;
Before this patch, llc generated the following code (-mattr=+avx):
vpshufd $0, %xmm1, %xmm1 # xmm1 = xmm1[0,0,0,0]
vpxor %xmm2, %xmm2, %xmm2
vpblendw $3, %xmm1, %xmm2, %xmm1 # xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]
vpslld %xmm1, %xmm0, %xmm0
retq
With this patch, the redundant splat operation is removed from the generated code:
vpxor %xmm2, %xmm2, %xmm2
vpblendw $3, %xmm1, %xmm2, %xmm1 # xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]
vpslld %xmm1, %xmm0, %xmm0
retq
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223461 91177308-0d34-0410-b5e6-96231b3b80d8
The test file test/CodeGen/ARM/build-attributes.ll was missing several
floating-point build attribute tests. The intention of this commit is that for
each CPU / architecture currently tested, there are now tests that make sure
the following attributes are sufficiently checked:
* Tag_ABI_FP_rounding
* Tag_ABI_FP_denormal
* Tag_ABI_FP_exceptions
* Tag_ABI_FP_user_exceptions
* Tag_ABI_FP_number_model
Also in this commit, the -unsafe-fp-math flag has been augmented with the full
suite of flags Clang sends to LLVM when you pass -ffast-math to Clang. That is,
`-unsafe-fp-math' has been changed to `-enable-unsafe-fp-math -disable-fp-elim
-enable-no-infs-fp-math -enable-no-nans-fp-math -fp-contract=fast'
Change-Id: I35d766076bcbbf09021021c0a534bf8bf9a32dfc
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223454 91177308-0d34-0410-b5e6-96231b3b80d8
Reverting this because, while it fixes the problem in the reduced test case, it
does not fix the problem in the full test case from the bug report.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223442 91177308-0d34-0410-b5e6-96231b3b80d8
The scheduling dependency graph is built bottom-up within each scheduling
region, and ScheduleDAGInstrs::addPhysRegDeps is called to add output/anti
dependencies, based on physical registers, between each instruction's SU and
the SUs of the instructions that come before it.
In the test case, we start before post-RA scheduling with a block that looks
like this:
...
INLINEASM <...
andc $0,$0,$2
stdcx. $0,0,$3
bne- 1b
> [sideeffect] [mayload] [maystore] [attdialect], $0:[regdef-ec:G8RC], %X6<earlyclobber,def,dead>, $1:[mem], %X3<kill>, $2:[reguse:G8RC], %X5<kill>, $3:[reguse:G8RC], %X3, $4:[mem], %X3, $5:[clobber], %CC<earlyclobber,imp-def,dead>, <<badref>>
...
%X4<def,dead> = ANDIo8 %X4<kill>, 1, %CR0<imp-def,dead>, %CR0GT<imp-def>
...
%R29<def> = ISEL %R3<undef>, %R4<kill>, %CR0GT<kill>
where it is relevant that %CC is an alias to %CR0, and that %CR0GT is a
subregister of %CR0. However, for post-RA scheduling, no dependency was added
to prevent the INLINEASM from being scheduled in between the ANDIo8 and the
ISEL (which communicate via the %CR0GT register).
In ScheduleDAGInstrs::addPhysRegDeps, when called for the %CC operand, we'd
iterate over all of its aliases (which include %CC itself and also %CR0), and
look for previously-encountered defs of those registers. We'd find the ANDIo8,
but decide not to add a dependency between the INLINEASM and the ANDIo8,
because both the INLINEASM's def of %CC and the ANDIo8's def of %CR0 are
dead. This ignores, however, that ANDIo8 has a non-dead def of %CR0GT, a
subregister of %CR0, and thus a dependency still must exist.
To fix this problem, when calling registerDefIsDead on the SU with the def, we
also check all subregisters for possible non-dead defs, and add the dependency
if any are found.
Fixes PR21742.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223440 91177308-0d34-0410-b5e6-96231b3b80d8
with fixes. Includes the move of the llvm-objdump tests for universal files to
an X86 directory, and the fix for the failure on Linux that Rafael tracked down
with asan. I had both Jim Grosbach and Adam Nemet look over the second fix,
since I could not set up asan to reproduce the failure with the old version but
not with the fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223416 91177308-0d34-0410-b5e6-96231b3b80d8
When rematerialization eliminates all uses of a vreg, update any DBG_VALUE
describing that vreg to point to the rematerialized register instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223401 91177308-0d34-0410-b5e6-96231b3b80d8
r223113 added support for ARM modified immediate assembly syntax, which
assumed that all immediate operands are prefixed with a '#'. This assumption
is wrong: as per the ARMARM, the '#' character should be treated as optional.
The current patch fixes this regression and adds a test case. A follow-up
patch will expand the test coverage to other instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223381 91177308-0d34-0410-b5e6-96231b3b80d8
So there are a couple of issues with indirect calls on thumbv4t. First, the
most 'obvious' instruction, 'blx', isn't available until v5t. Second, the
next-most-obvious sequence, 'mov lr, pc; bx rN', doesn't DTRT in thumb code
because the saved-off pc has its thumb bit cleared, so when the callee returns
we end up in ARM mode... yuck.
The solution is to 'bl' to a nearby landing pad with a 'bx rN' in it.
We could cut down on code size by sharing the landing pads between call sites
that are close enough, but for the moment let's do correctness first and look at
performance later.
Patch by: Iain Sandoe
http://reviews.llvm.org/D6519
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223380 91177308-0d34-0410-b5e6-96231b3b80d8
Reapply r223347, with a fix to not crash on uninserted instructions (or more
precisely, instructions in uninserted blocks). bugpoint was able to reduce the
test case somewhat, but it is still fairly large (and relies on setting
things up to be simplified during inlining), so I've not included it here.
Nevertheless, it is clear what is going on and why.
Original commit message:
Restrict somewhat the memory-allocation pointer cmp opt from r223093
Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.
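As a hypothetical sketch (not from the patch) of the optimization being
restricted: the result of a fresh, non-escaping allocation cannot be equal to
an unrelated pointer that is simultaneously live, so a comparison like the one
below can fold to false; this change keeps the fold away from lazily-resolved
globals and dynamic allocas, where that liveness reasoning breaks down:
  declare noalias i8* @malloc(i64)

  define i1 @cmp_fresh_alloc(i8* %p) {
    ; %m is a fresh allocation that does not escape here
    %m = call i8* @malloc(i64 16)
    %c = icmp eq i8* %m, %p
    ret i1 %c
  }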
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223371 91177308-0d34-0410-b5e6-96231b3b80d8
r223113 added support for ARM modified immediate assembly syntax. That patch
broke support for immediate expressions, as in:
add r0, #(4 * 4)
It wasn't caught because we didn't have any tests for this feature. This patch
fixes the regression and adds test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223366 91177308-0d34-0410-b5e6-96231b3b80d8
The current DAG combine turns a sequence of extracts from <4 x i32> followed by zexts into a store followed by scalar loads.
According to measurements by Martin Krastev (see PR21269), on x86-64 a
sequence of an extract, movs, and shifts gives better performance. However,
for 32-bit x86, the previous sequence still seems better.
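For reference, a hypothetical pattern (not taken from the commit) of the
kind this combine rewrites: scalar extracts from a <4 x i32> feeding zexts.
  define i64 @sum_lo(<4 x i32> %v) {
    %e0 = extractelement <4 x i32> %v, i32 0
    %e1 = extractelement <4 x i32> %v, i32 1
    %z0 = zext i32 %e0 to i64
    %z1 = zext i32 %e1 to i64
    %s = add i64 %z0, %z1
    ret i64 %s
  }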
Differential Revision: http://reviews.llvm.org/D6501
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223360 91177308-0d34-0410-b5e6-96231b3b80d8
As suggested by a previous FIXME comment, we now not only look at MBB
successors but also handle sinking code past them:
x = computation
if () {} else {}
use x
The instruction could be sunk over the whole diamond for the
if/then/else (or loop, etc), allowing it to be sunk into other blocks
after that.
Modified a test added in r204522, since one fewer spill is now present.
Minor fixes in comments.
Patch provided by Jonas Paulsson. Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223350 91177308-0d34-0410-b5e6-96231b3b80d8
Added instcombine optimizations for BSWAP with AND/OR/XOR ops:
OP( BSWAP(x), BSWAP(y) ) -> BSWAP( OP(x, y) )
OP( BSWAP(x), CONSTANT ) -> BSWAP( OP(x, BSWAP(CONSTANT) ) )
Since it's just a one-liner, I've also added BSWAP to the DAGCombiner equivalent:
fold (OP (bswap x), (bswap y)) -> (bswap (OP x, y))
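A hypothetical IR instance (not from the patch) of the first instcombine
fold; after the transform the xor operates on the raw values and a single
bswap of the result remains:
  declare i32 @llvm.bswap.i32(i32)

  define i32 @fold_bswap_xor(i32 %x, i32 %y) {
    %bx = call i32 @llvm.bswap.i32(i32 %x)
    %by = call i32 @llvm.bswap.i32(i32 %y)
    ; folds to: bswap(xor %x, %y)
    %r = xor i32 %bx, %by
    ret i32 %r
  }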
Refactored bswap-fold tests to use FileCheck instead of just checking that the bswaps had gone.
Differential Revision: http://reviews.llvm.org/D6407
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223349 91177308-0d34-0410-b5e6-96231b3b80d8
I'm recommitting the codegen part of the patch.
The vectorizer part will be sent for review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for targets that support them (currently AVX2 and AVX-512). The vectorizer asks the target about the availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
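A hedged usage sketch (hypothetical, following the argument order shown
above): a conditional vector load expressed with the new intrinsic rather
than with explicit control flow:
  declare <8 x double> @llvm.masked.load.v8f64(i8*, <8 x double>, i32, <8 x i1>)

  define <8 x double> @cond_load(i8* %addr, <8 x double> %passthru, <8 x i1> %mask) {
    ; lanes whose mask bit is set are loaded; the rest come from %passthru
    %v = call <8 x double> @llvm.masked.load.v8f64(i8* %addr, <8 x double> %passthru, i32 8, <8 x i1> %mask)
    ret <8 x double> %v
  }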
A scalarizer for other targets (those without AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223348 91177308-0d34-0410-b5e6-96231b3b80d8
Based on review comments from Richard Smith, restrict this optimization from
applying to globals that might resolve lazily to other dynamically-loaded
modules, and also from dynamic allocas (which might be transformed into malloc
calls). In short, take extra care that the compared-to pointer is really
simultaneously live with the memory allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223347 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes the bug described in
http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-May/062343.html
The fix allocates an extra slot just below the GPRs and stores the base pointer
there. This is done only for functions containing llvm.eh.sjlj.setjmp that also
need a base pointer. Because code containing llvm.eh.sjlj.setjmp saves all of
the callee-save GPRs in the prologue, the offset to the extra slot can be
computed before prologue generation runs.
The run-time impact on affected functions is:
- One extra store in the prologue; the store saves the base pointer.
- One extra load after an llvm.eh.sjlj.setjmp; the load restores the base pointer.
Because the extra slot is just above a gap between frame-pointer-relative and
base-pointer-relative chunks of memory, there is no impact on other offset
calculations other than ensuring there is room for the extra slot.
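For illustration, a hypothetical function (not from the patch) of the
affected shape: it calls llvm.eh.sjlj.setjmp and also needs a base pointer,
here because the stack must be realigned while a dynamic alloca is present:
  declare i32 @llvm.eh.sjlj.setjmp(i8*)

  define i32 @affected(i8* %jmpbuf, i32 %n) {
    %aligned = alloca i32, align 32   ; over-aligned object forces stack realignment
    %dynamic = alloca i8, i32 %n      ; a variable-size alloca then requires a base pointer
    %r = call i32 @llvm.eh.sjlj.setjmp(i8* %jmpbuf)
    ret i32 %r
  }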
http://reviews.llvm.org/D6388
Patch by Arch Robison <arch.robison@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223329 91177308-0d34-0410-b5e6-96231b3b80d8
We had mistakenly believed that GCC's 'cc' referred to the entire
condition-code register (cr0 through cr7) -- and implemented this in r205630 to
fix PR19326, but 'cc' is actually an alias only for 'cr0'. This caused LLVM
to clobber too much in legacy code that uses the 'cc' clobber in inline asm.
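A hypothetical inline-asm example (not from the commit): with this change,
the 'cc' clobber below is treated as clobbering only cr0 rather than all of
cr0 through cr7:
  define i32 @uses_cc_clobber(i32 %x) {
    ; the record form 'add.' sets cr0, which the 'cc' clobber covers
    %r = call i32 asm sideeffect "add. $0, $1, $1", "=r,r,~{cc}"(i32 %x)
    ret i32 %r
  }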
Fixes PR21451.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223328 91177308-0d34-0410-b5e6-96231b3b80d8
On PowerPC, inline asm memory operands might be expanded as 0($r), where $r is
a register containing the address. As a result, this register cannot be r0, and
we need to enforce this register subclass constraint to prevent miscompiling
the code (we'd get this constraint for free with the usual instruction
definitions, but that scheme has no knowledge of how we end up printing inline
asm memory operands, and so here we need to do it 'by hand'). We can accomplish
this within the current address-mode selection framework by introducing an
explicit COPY_TO_REGCLASS node.
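A hypothetical example (not from the commit): the memory operand below may
be printed as 0($reg), so the register chosen for the address must be
constrained away from r0:
  define i32 @load_via_mem(i32* %p) {
    ; the "m" operand can print as 0($reg); $reg must not be r0
    %v = call i32 asm "lwz $0, $1", "=r,*m"(i32* %p)
    ret i32 %v
  }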
Fixes PR21443.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223318 91177308-0d34-0410-b5e6-96231b3b80d8