This patch does the following:
* Fix FIXME on `needsStackRealignment`: it is now shared between multiple targets, implemented in `TargetRegisterInfo`, and isn't `virtual` anymore. This will break out-of-tree targets, silently if they used `virtual` and with a build error if they used `override`.
* Factor out `canRealignStack` as a `virtual` function on `TargetRegisterInfo`; by default it only checks for the `no-realign-stack` function attribute (sketched below).
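For reference, a sketch of the shared default implementation, close to but not necessarily identical to the in-tree code (`ForceStackAlign` backs the `-force-align-stack` option mentioned below):

```cpp
bool TargetRegisterInfo::canRealignStack(const MachineFunction &MF) const {
  return !MF.getFunction()->hasFnAttribute("no-realign-stack");
}

bool TargetRegisterInfo::needsStackRealignment(
    const MachineFunction &MF) const {
  const MachineFrameInfo *MFI = MF.getFrameInfo();
  const TargetFrameLowering *TFI = MF.getSubtarget().getFrameLowering();
  const Function *F = MF.getFunction();
  unsigned StackAlign = TFI->getStackAlignment();
  // Realign when the frame needs more alignment than the ABI stack
  // guarantees, or when the function explicitly requests it.
  bool RequiresRealignment = MFI->getMaxAlignment() > StackAlign ||
                             F->hasFnAttribute(Attribute::StackAlignment);
  if (ForceStackAlign || RequiresRealignment)
    return canRealignStack(MF); // the in-tree version also emits a DEBUG
                                // diagnostic when this check fails
  return false;
}
```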
Multiple targets duplicated the same `needsStackRealignment` code:
- AArch64.
- ARM.
- Mips, almost: it had an extra `DEBUG` diagnostic, which the default implementation now has.
- PowerPC.
- WebAssembly.
- x86, almost: it has an extra `-force-align-stack` option, which the default implementation now has.
The default implementation of `needsStackRealignment` used to just return `false`. This patch changes that default to the shared behavior described above. This affects:
- AMDGPU
- BPF
- CppBackend
- MSP430
- NVPTX
- Sparc
- SystemZ
- XCore
- Out-of-tree targets
This is a breaking change! `make check` passes.
The only implementation of the formerly `virtual` function (besides the slight difference in x86) was Hexagon's (which did `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some out-of-tree targets. Hexagon now uses the default implementation.
`needsStackRealignment` was also being overridden in `<Target>GenRegisterInfo.inc` to return `false`, just as the default did. That was odd and is now gone.
Reviewers: sunfish
Subscribers: aemerson, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D11160
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242727 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Arguments to llvm.localescape must be static allocas. They must be at
some statically known offset from the frame or stack pointer so that
other functions can access them with localrecover.
If we ever want to instrument these, we can use more indirection to
recover the addresses of these local variables. We can do it during
clang irgen or with the asan module pass.
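As a hedged illustration of the check involved (the helper name is hypothetical; the intrinsic is real):

```cpp
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"

// Returns true if AI is an argument of @llvm.localescape and must therefore
// keep a statically known offset, i.e. ASan must not instrument or move it.
static bool isUsedByLocalEscape(const llvm::AllocaInst &AI) {
  for (const llvm::User *U : AI.users())
    if (const auto *II = llvm::dyn_cast<llvm::IntrinsicInst>(U))
      if (II->getIntrinsicID() == llvm::Intrinsic::localescape)
        return true;
  return false;
}
```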
Reviewers: eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11307
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242726 91177308-0d34-0410-b5e6-96231b3b80d8
Before creating a schedule edge to encourage MacroOpFusion, check that:
- The predecessor actually writes a register that the branch reads.
- The predecessor has no successors in the ScheduleDAG, so we can
schedule it in front of the branch.
This avoids skewing the scheduling heuristic in cases where macroop
fusion cannot happen.
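A hedged sketch of the two conditions (`shouldFuse` is a hypothetical helper; `SUnit` and `MachineInstr` are the real ScheduleDAG types):

```cpp
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/ScheduleDAG.h"

static bool shouldFuse(const llvm::SUnit &Pred, const llvm::SUnit &Branch,
                       const llvm::TargetRegisterInfo &TRI) {
  // 1) The predecessor must define a register that the branch reads.
  bool DefinesReadReg = false;
  for (const llvm::MachineOperand &MO : Pred.getInstr()->operands())
    if (MO.isReg() && MO.isDef() &&
        Branch.getInstr()->readsRegister(MO.getReg(), &TRI))
      DefinesReadReg = true;
  // 2) The predecessor must have no successors in the DAG, so it can
  //    actually be scheduled directly in front of the branch.
  return DefinesReadReg && Pred.Succs.empty();
}
```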
Differential Revision: http://reviews.llvm.org/D10745
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242723 91177308-0d34-0410-b5e6-96231b3b80d8
This is the first step toward supporting shrink-wrapping for this target.
The changes can be summarized as follows:
- Expand the tail-call return as part of the expand pseudo pass.
- Get rid of the assumption that the epilogue is the exit block:
* Do not assume which registers are free in the epilogue. (This indirectly
improves the lowering of the code for segmented stacks; see the test
cases.)
* Take into account that the basic block can be empty; a guard is sketched below.
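For the empty-block point above, a minimal sketch of the kind of guard required, assuming the usual MachineBasicBlock API:

```cpp
// getFirstTerminator() returns end() on an empty block, so guard the
// dereference instead of assuming a terminator is present.
llvm::MachineBasicBlock::iterator MBBI = MBB.getFirstTerminator();
llvm::DebugLoc DL =
    MBBI != MBB.end() ? MBBI->getDebugLoc() : llvm::DebugLoc();
```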
Related to <rdar://problem/20821730>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242714 91177308-0d34-0410-b5e6-96231b3b80d8
[NVPTX] Make loads from global read-only memory use ldg
Summary:
As described in [1], ld.global.nc may be used by nvcc to load memory when
__restrict__ is used and the compiler can detect that the read-only data
cache is safe to use.
This patch checks whether ldg is safe to use and, when possible, uses it to
replace ld.global. This change improves performance by 18-29% on the
affected kernels (ratt*_kernel and rwdot*_kernel) in the S3D benchmark
from shoc [2].
Patched by Xuetian Weng.
[1] http://docs.nvidia.com/cuda/kepler-tuning-guide/#read-only-data-cache
[2] https://github.com/vetter/shoc
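A hedged sketch of the safety check described above (`canUseLDG` is hypothetical; the address-space constant is NVPTX's):

```cpp
#include "llvm/CodeGen/SelectionDAGNodes.h"

// The read-only data cache is incoherent with in-flight stores, so ldg is
// only safe for loads that are invariant and from the global address space.
static bool canUseLDG(const llvm::LoadSDNode &LD) {
  if (!LD.isInvariant())
    return false;
  return LD.getAddressSpace() == 1; // NVPTX ADDRESS_SPACE_GLOBAL
}
```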
Test Plan: test/CodeGen/NVPTX/load-with-non-coherent-cache.ll
Reviewers: jholewinski, jingyue
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D11314
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242713 91177308-0d34-0410-b5e6-96231b3b80d8
This commit implements the initial serialization of machine constant pools and
the constant pool index machine operands. The constant pool is serialized using
a YAML sequence of YAML mappings that represent the constant values.
The target-specific constant pool items aren't serialized by this commit.
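As a rough illustration of the mechanism (the field names are hypothetical; the real MIR keys may differ), LLVM's YAML I/O maps one struct per constant:

```cpp
#include "llvm/Support/YAMLTraits.h"
#include <string>

// Hypothetical mirror of one constant pool entry.
struct ConstantPoolEntryYAML {
  unsigned ID = 0;
  std::string Value; // the constant printed as LLVM IR
  unsigned Alignment = 0;
};

namespace llvm {
namespace yaml {
template <> struct MappingTraits<ConstantPoolEntryYAML> {
  static void mapping(IO &YamlIO, ConstantPoolEntryYAML &Entry) {
    YamlIO.mapRequired("id", Entry.ID);
    YamlIO.mapRequired("value", Entry.Value);
    YamlIO.mapOptional("alignment", Entry.Alignment);
  }
};
} // end namespace yaml
} // end namespace llvm
```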
Reviewers: Duncan P. N. Exon Smith
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242707 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change generalizes the implicit null checks pass to work with
instructions that don't have any explicit register defs. This lets us
use X86's `cmp` against memory as a faulting load instruction.
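A hedged sketch of the relaxed requirement (the helper is hypothetical):

```cpp
#include "llvm/CodeGen/MachineInstr.h"

// Previously the pass effectively insisted on a load result, i.e. one
// explicit def. Zero defs, as with x86's cmp-against-memory, now also
// qualifies: all that matters is that the instruction faults on null.
static bool hasUsableDefCount(const llvm::MachineInstr &MI) {
  return MI.getDesc().getNumDefs() <= 1;
}
```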
Reviewers: reames, JosephTremoulet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11286
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242703 91177308-0d34-0410-b5e6-96231b3b80d8
This commit extends the machine instruction lexer and implements support for
quoted global value tokens. With this change, the syntax for the global value
identifier tokens becomes identical to the syntax for the global identifier
tokens in LLVM's assembly language.
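A minimal, self-contained sketch of the lexing rule, under the assumption that escape sequences are handled elsewhere:

```cpp
#include <string>

// After '@', a quoted identifier runs to the next '"', so any character can
// appear in a global value name, exactly as in LLVM's assembly language.
static bool lexQuotedGlobal(const std::string &Src, std::size_t &Pos,
                            std::string &Name) {
  if (Pos + 1 >= Src.size() || Src[Pos] != '@' || Src[Pos + 1] != '"')
    return false;
  Name.clear();
  for (std::size_t I = Pos + 2; I < Src.size(); ++I) {
    if (Src[I] == '"') { // closing quote ends the token
      Pos = I + 1;
      return true;
    }
    Name += Src[I]; // escape sequences omitted for brevity
  }
  return false; // unterminated string
}
```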
Reviewers: Duncan P. N. Exon Smith
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242702 91177308-0d34-0410-b5e6-96231b3b80d8
Reordered the data tables at the top and placed the lookups after. This is the first stage in the yak shaving necessary to get more accurate costs for a variety of targets, given the recent improvements to SINT_TO_FP/UINT_TO_FP/SIGN_EXTEND vector lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242643 91177308-0d34-0410-b5e6-96231b3b80d8
Not sure if the optimizer will eliminate the repeated call, as
getCalledFunction() is not a trivial accessor, but the code is clearer this way.
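For illustration (the uses are hypothetical), the pattern is simply hoisting the accessor into a local:

```cpp
// One lookup, reused; clearer than repeating CS.getCalledFunction(), whether
// or not the optimizer could eliminate the duplicate calls.
if (const llvm::Function *Callee = CS.getCalledFunction()) {
  inspectCallee(*Callee); // hypothetical use #1
  recordCallee(*Callee);  // hypothetical use #2
}
```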
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242641 91177308-0d34-0410-b5e6-96231b3b80d8
We don't bitcast the UNDEFs; that is done in visitVECTOR_SHUFFLE. Also, the value type should come from the operand's SDValue, not the SDNode.
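The distinction, in a nutshell (N being the shuffle's SDNode):

```cpp
// Correct: the operand's own SDValue knows its result number and type.
llvm::EVT OpVT = N->getOperand(0).getValueType();
// Wrong here: N->getValueType(0) is the shuffle node's own result type.
```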
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242640 91177308-0d34-0410-b5e6-96231b3b80d8
canFoldMemoryOperand is not actually used anywhere in the codebase - all existing users instead call foldMemoryOperand directly when they wish to fold and can correctly deduce what they need from the return value.
This patch removes the canFoldMemoryOperand base function and the target implementations; only x86 had a real (bit-rotted) implementation, although AMDGPU had a preparatory stub that had never needed to be completed.
Differential Revision: http://reviews.llvm.org/D11331
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242638 91177308-0d34-0410-b5e6-96231b3b80d8
SKX supports conversion for all FP types. The integer types include doublewords and quadwords.
I added "Legal" status for these nodes and a bunch of tests.
I added a "NoVLX" predicate to the AVX DAG-selection patterns to force selection of the VLX instructions when VLX is supported.
Differential Revision: http://reviews.llvm.org/D11255
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242637 91177308-0d34-0410-b5e6-96231b3b80d8
The standard containers are not designed to be inherited from, as
illustrated by the MSVC hacks for NodeOrdering. No functional change
intended.
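A minimal sketch of the replacement idiom, with placeholder members:

```cpp
#include <cstddef>
#include <vector>

// Hold the container as a member instead of inheriting from it: std::vector
// has no virtual destructor and its internals are not an extension point.
class NodeOrdering {
  std::vector<int> Order; // composition, not `: public std::vector<int>`

public:
  void add(int N) { Order.push_back(N); }
  std::size_t size() const { return Order.size(); }
};
```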
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242616 91177308-0d34-0410-b5e6-96231b3b80d8
directly model in the new PM.
This also was an incredibly brittle and expensive update API that was
never fully utilized by all the passes that claimed to preserve AA, nor
could it reasonably have been extended to all of them. Any number of
places add uses of values. If we ever wanted to reliably instrument
this, we would want a callback hook much like we have with ValueHandles,
but doing this for every use addition seems *extremely* expensive in
terms of compile time.
The only user of this update mechanism is GlobalsModRef. The idea of
using this to keep it up to date doesn't really work anyway, as its
analysis requires a symmetric analysis of two different memory
locations. It would be very hard to make updates be sufficiently
rigorous to *guarantee* symmetric analysis in this way, and it pretty
certainly isn't true today.
However, folks have been using GMR with this update for a long time and
seem to not be hitting the issues. The reported issue that the update
hook fixes isn't even a problem any more as other changes to
GetUnderlyingObject worked around it, and that issue stemmed from *many*
years ago. As a consequence, a prior patch provided a flag to control
the unsafe behavior of GMR, and this patch removes the update mechanism
that has questionable compile-time tradeoffs and is causing problems
with moving to the new pass manager. Note the lack of test updates --
not one test in tree actually requires this update, even for a contrived
case.
All of this was extensively discussed on the dev list; this patch just
enacts what that discussion decided on. I'm sending it for review in
part to show what I'm planning, and in part to show the *amazing* amount
of work this avoids. Every call to the AA here is something like three
to six indirect function calls, which in the non-LTO pipeline never do
any work! =[
Differential Revision: http://reviews.llvm.org/D11214
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242605 91177308-0d34-0410-b5e6-96231b3b80d8
Instrumentation and the runtime library were in disagreement about
ASan shadow offset on Android/AArch64.
This fixes a large number of existing tests on Android/AArch64.
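For context, a minimal sketch of the mapping both sides must agree on; the scale and offset here are illustrative defaults, not the Android/AArch64 constants:

```cpp
#include <cstdint>

// ASan computes Shadow = (Addr >> Scale) + Offset; Scale 3 maps 8 application
// bytes to one shadow byte. If instrumentation and the runtime disagree on
// Offset, every shadow access is wrong.
static std::uintptr_t shadowFor(std::uintptr_t Addr, std::uintptr_t Offset) {
  return (Addr >> 3) + Offset;
}
```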
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242595 91177308-0d34-0410-b5e6-96231b3b80d8
Reapply r242500 now that the swift schedmodel includes LDRLIT.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary tables for
swift in favor of the SchedModel ones.
This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.
While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.
Differential Revision: http://reviews.llvm.org/D10513
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242588 91177308-0d34-0410-b5e6-96231b3b80d8
These pseudo instructions are only lowered after register allocation and
are therefore still present when the machine scheduler runs.
Add a RUN: line to a testcase that uses the uncommon flags necessary to
actually produce an LDRLIT instruction on swift.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242587 91177308-0d34-0410-b5e6-96231b3b80d8
The idea of deferred spilling is to delay the insertion of spill code until the
very end of allocation. A candidate variable picked for spilling might not
need to be spilled after all, because of other evictions that happen after
that decision was taken. The spirit is similar to the optimistic coloring
strategy implemented in Preston Briggs' graph coloring algorithm.
For now, this feature is highly experimental. Although correct, it would
require many more modifications to properly model the effect of spilling.
Anyway, this early patch helps with prototyping the feature; the idea is
sketched below.
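A sketch of the idea with entirely hypothetical names:

```cpp
#include <vector>

// Record spill decisions during allocation and only materialize the ones
// still necessary at the very end: a later eviction can make a "candidate
// to spill" allocatable after all.
struct LiveRangeRef { unsigned VReg; };

class Allocator {
  std::vector<LiveRangeRef> PendingSpills;

  bool hasPhysRegAssigned(const LiveRangeRef &LR) const; // hypothetical
  void insertSpillCode(const LiveRangeRef &LR);          // hypothetical

public:
  void decideToSpill(const LiveRangeRef &LR) {
    PendingSpills.push_back(LR); // defer: no spill code yet
  }
  void finalize() {
    for (const LiveRangeRef &LR : PendingSpills)
      if (!hasPhysRegAssigned(LR)) // an eviction may have freed a register
        insertSpillCode(LR);
  }
};
```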
Note: The test case cannot unfortunately be reduced and is probably fragile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242585 91177308-0d34-0410-b5e6-96231b3b80d8
This commit modifies the machine instruction lexer so that it now accepts
the '$' character in identifier tokens.
This change makes the syntax for unquoted global value tokens consistent with
the syntax for the global identifier tokens in LLVM's assembly language.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242584 91177308-0d34-0410-b5e6-96231b3b80d8
This commit extends the interface provided by the AsmParser library by adding
a function that allows the user to parse a standalone constant value.
This change is useful for MIR serialization, as it will allow the MIR Parser to
parse the constant values in a machine constant pool.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10280
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242579 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r242500.
It broke some internal tests and Matthias asked me to revert it while he
is investigating.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242553 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change, but it preps codegen for the future when SABSDIFF
will start getting generated in anger.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242546 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change, but it preps codegen for the future when SABSDIFF
will start getting generated in anger.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242545 91177308-0d34-0410-b5e6-96231b3b80d8
basic changes to the IR such as folding pointers through PHIs, Selects,
integer casts, store/load pairs, or outlining.
This leaves the feature available behind a flag. This flag's default
could be flipped if necessary, but the real-world performance impact of
this particular feature of GMR may not be sufficiently significant for
many folks to want to run the risk.
Currently, the risk here is somewhat mitigated by half-hearted attempts
to update GlobalsModRef when the rest of the optimizer changes
something. However, I am currently trying to remove that update
mechanism as it makes migrating the AA infrastructure to a form that can
be readily shared between new and old pass managers very challenging.
Without this update mechanism, it is possible that this still-unlikely
failure mode will start to trip people, and so I wanted to try to
proactively avoid that.
There is a lengthy discussion on the mailing list about why the core
approach here is flawed, and likely would need to look totally different
to be both reasonably effective and resilient to basic IR changes
occurring. This patch is essentially the first of two which will enact
the result of that discussion. The next patch will remove the current
update mechanism.
Thanks to lots of folks that helped look at this from different angles.
Special thanks to Michael Zolotukhin for doing some very preliminary
benchmarking of LTO without GlobalsModRef to get a rough idea of the
impact we could be facing here. So far, it looks very small, but there
are some concerns lingering from other benchmarking. The default here
may get flipped if performance results end up pointing at this as a more
significant issue.
Also thanks to Pete and Gerolf for reviewing!
Differential Revision: http://reviews.llvm.org/D11213
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242512 91177308-0d34-0410-b5e6-96231b3b80d8
Since r230724 ("Skip promotable allocas to improve performance at -O0"), there has been a regression in the generated debug info for those non-instrumented variables. When inspecting such a variable's value in LLDB, you often get garbage instead of the actual value. ASan instrumentation is inserted before the creation of the non-instrumented alloca. The only allocas that are considered standard stack variables are the ones declared in the first basic block, but the initial instrumentation setup in the function breaks that invariant.
This patch makes sure uninstrumented allocas stay in the first BB.
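A minimal sketch of the invariant being restored (the predicate is ASan's; the placement is hypothetical):

```cpp
// An alloca that ASan leaves uninstrumented must sit in the entry block to
// be treated as a standard stack variable by debug info consumers.
if (!isInterestingAlloca(AI))
  AI.moveBefore(&*F.getEntryBlock().getFirstInsertionPt());
```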
Differential Revision: http://reviews.llvm.org/D11179
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242510 91177308-0d34-0410-b5e6-96231b3b80d8
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary tables for
swift in favor of the SchedModel ones.
This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.
While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.
Differential Revision: http://reviews.llvm.org/D10513
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242500 91177308-0d34-0410-b5e6-96231b3b80d8