This pass is responsible for figuring out where to place call safepoints and safepoint polls. It doesn't actually make the relocations explicit; that's the job of the RewriteStatepointsForGC pass (http://reviews.llvm.org/D6975).
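For orientation, here is a minimal sketch of what inserting a safepoint poll amounts to; this is illustrative only (not the pass's actual code) and assumes the gc.safepoint_poll convention described below:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Module.h"
  #include <cassert>
  using namespace llvm;

  // Insert a call to the module's gc.safepoint_poll function (an
  // ordinary function, not an intrinsic; see the planned changes below)
  // immediately before the chosen instruction.
  static void insertPollBefore(Instruction *InsertPt) {
    Function *Poll =
        InsertPt->getModule()->getFunction("gc.safepoint_poll");
    assert(Poll && "gc.safepoint_poll must be defined in the module");
    IRBuilder<> Builder(InsertPt);
    Builder.CreateCall(Poll);
  }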
Note that this code is not yet finalized. It's moving in-tree for incremental development, but further cleanup is needed and will happen over the next few days. It is not yet part of the standard pass order.
Planned changes in the near future:
- I plan on restructuring the statepoint rewrite to use the functions added to the IRBuilder a while back.
- In the current pass, the function "gc.safepoint_poll" is treated specially but is not an intrinsic. I plan to make identifying the poll function a property of the GCStrategy at some point in the near future.
- As follow-on patches, I will be separating a collection of test cases we have out of tree and submitting them upstream.
- It's not explicit in the code, but these two patches are introducing a new state for a statepoint which looks a lot like a patchpoint. There's now a transient form which doesn't yet have the relocations explicitly represented, but does prevent reordering of memory operations. Once this is in, I need to actually make this explicit by reserving the 'unused' argument of the statepoint as a flag, updating the docs, and making the code explicitly check for such a thing. This wasn't really planned, but once I split the two passes - which was done for other reasons - the intermediate state fell out. This just reminds us once again that we need to merge statepoints and patchpoints at some point in the not-too-distant future.
Future directions planned:
- Identifying more cases where a backedge safepoint isn't required to ensure timely execution of a safepoint poll.
- Tweaking the insertion process to generate easier-to-optimize IR. (For example, investigating making SplitBackedge the default.)
- Adding opt-in flags for a GCStrategy to use this pass. Once done, add this pass to the actual pass ordering.
Differential Revision: http://reviews.llvm.org/D6981
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228090 91177308-0d34-0410-b5e6-96231b3b80d8
The llvm-level tests for coverage mapping need a binary input file,
which means they're hard to understand, hard to update, and it's
difficult to add new ones. By adding some unit tests that build up the
coverage data structures in C++, we can write more meaningful and
targeted tests.
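For instance, a test along these lines becomes possible (a rough sketch; the actual fixture, helpers, and header paths in the unit tests may differ):

  #include "llvm/ProfileData/CoverageMapping.h"
  #include "gtest/gtest.h"
  using namespace llvm::coverage;

  TEST(CoverageMappingTest, counterBasics) {
    // Build the coverage data structures directly in C++ instead of
    // checking in an opaque binary input file.
    Counter C = Counter::getCounter(1);
    EXPECT_FALSE(C.isZero());
    EXPECT_EQ(1u, C.getCounterID());
  }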
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228084 91177308-0d34-0410-b5e6-96231b3b80d8
Keeping regions that start at the same location in insertion order
makes this logic easier to test / more deterministic.
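Concretely, this amounts to a stable sort keyed only on the start location (a sketch, not the actual code):

  #include <algorithm>
  #include <tuple>
  #include <vector>

  struct Region {
    unsigned LineStart, ColumnStart;
  };

  // Regions with the same start location keep their relative
  // insertion order, since stable_sort never reorders equal elements.
  static void sortRegions(std::vector<Region> &Regions) {
    std::stable_sort(Regions.begin(), Regions.end(),
                     [](const Region &L, const Region &R) {
                       return std::tie(L.LineStart, L.ColumnStart) <
                              std::tie(R.LineStart, R.ColumnStart);
                     });
  }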
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228083 91177308-0d34-0410-b5e6-96231b3b80d8
This preserves the handy functionality of force-enabling the MachineVerifier, without the need to embed usage of environment variables in LLVM client applications.
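A sketch of the mechanism this suggests: a three-state command-line option instead of an environment variable (the option name and storage here are assumptions, not the verbatim code):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // BOU_UNSET keeps the default behavior; passing the flag forces the
  // MachineVerifier on or off explicitly.
  static cl::opt<cl::boolOrDefault>
      VerifyMachineCode("verify-machineinstrs", cl::Hidden,
                        cl::desc("Verify generated machine code"),
                        cl::init(cl::BOU_UNSET));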
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228079 91177308-0d34-0410-b5e6-96231b3b80d8
Creating empty and expansion regions is awkward with the current API.
Expose static methods to make this simpler.
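Hedged sketch of the kind of simplification intended (the factory names and signatures here are assumptions based on the description, not quoted API):

  #include "llvm/ProfileData/CoverageMapping.h"
  using namespace llvm::coverage;

  void makeRegions() {
    // Dedicated factories for the common kinds of regions, instead of
    // filling in every CounterMappingRegion field by hand.
    CounterMappingRegion Code = CounterMappingRegion::makeRegion(
        Counter::getCounter(0), /*FileID=*/0, 1, 1, 5, 2);
    CounterMappingRegion Expansion = CounterMappingRegion::makeExpansion(
        /*FileID=*/0, /*ExpandedFileID=*/1, 3, 1, 3, 10);
    (void)Code;
    (void)Expansion;
  }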
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228075 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Igor Laevsky
"This change generalizes statepoint verification to use ImmutableCallSite instead of CallInst. This will allow to easily implement invoke statepoint verification (in a following change)."
Differential Revision: http://reviews.llvm.org/D7308
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228064 91177308-0d34-0410-b5e6-96231b3b80d8
I noticed this while trying to move addRuntimeCheck to LoopAccessAnalysis.
I think the intention was to exit early from the overflow checking before
reaching the code for the memchecks. That is the entire reason we compute
FirstCheckInst, yet we split at the final check rather than at
FirstCheckInst. This looks like an oversight.
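The fix this implies, as a sketch (assuming the BasicBlockUtils SplitBlock helper; not the verbatim patch):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/Transforms/Utils/BasicBlockUtils.h"
  using namespace llvm;

  // Split at the *first* check instruction so the early exit branches
  // around the memchecks too, rather than splitting at the final check.
  static BasicBlock *splitAtChecks(BasicBlock *BB,
                                   Instruction *FirstCheckInst) {
    return SplitBlock(BB, FirstCheckInst);
  }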
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228056 91177308-0d34-0410-b5e6-96231b3b80d8
This was added in r188351 to fix a naming conflict between
profile_rt-static and profile_rt-shared, which both ended up in
lib/profile_rt.lib.
The change also affected other libraries (like libclang), and
users are reporting that they find it surprising that there's
no longer a libclang.lib. Since the profile_rt naming conflict
doesn't seem to exist any more, I think we can remove this.
Differential Revision: http://reviews.llvm.org/D7391
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228049 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Kit Barton.
Add the vector population count instructions for byte, halfword, word,
and doubleword sizes. There are two major changes here:
PPCISelLowering.cpp: Make CTPOP legal for vector types.
PPCRegisterInfo.td: Add v2i64 to the VRRC register
definition. This is needed for the doubleword variations of the
integer ops that were added in P8.
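Roughly what the PPCISelLowering.cpp change amounts to, as a fragment that would live in the PPCTargetLowering constructor (the exact types and feature guards are assumptions):

  // Mark vector CTPOP as Legal so it selects to the new vpopcnt*
  // instructions instead of being expanded.
  setOperationAction(ISD::CTPOP, MVT::v16i8, Legal);
  setOperationAction(ISD::CTPOP, MVT::v8i16, Legal);
  setOperationAction(ISD::CTPOP, MVT::v4i32, Legal);
  setOperationAction(ISD::CTPOP, MVT::v2i64, Legal);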
Test Plan
Test the instruction vpcnt* encoding/decoding in ppc64-encoding-vmx.s
Test the generation of the vpopcnt instructions for various vector
data types. When adding the v2i64 type to the Vector Register set, I
also needed to add the appropriate bit conversion patterns between
v2i64 and the existing vector types. Testing for these conversions
was also added in the test case by passing a different vector type as
a parameter into the test functions. There is also a run step that
will ensure the vpopcnt instructions are generated when the vsx
feature is disabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228046 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Make sure that FileCheck is built when running check-fuzzer
Test Plan:
run on bot:
lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fuzzer
Reviewers: samsonov
Reviewed By: samsonov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7387
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228045 91177308-0d34-0410-b5e6-96231b3b80d8
lowering. I'm prepping patches to improve these, and this will let the
delta of those patches show the improvement. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228044 91177308-0d34-0410-b5e6-96231b3b80d8
Also remove hasPostISelHook=1 from V_LSHL_B32. It's defined by InstSI already.
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228039 91177308-0d34-0410-b5e6-96231b3b80d8
The effect is that if these instructions are accidentally selected on VI,
code generation will fail, because the pseudo -> _vi mapping will be
undefined.
The idea is to be able to catch possible future bugs easily.
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228038 91177308-0d34-0410-b5e6-96231b3b80d8
SI only has standard versions. VI only has REV versions.
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228037 91177308-0d34-0410-b5e6-96231b3b80d8
with 'stress' to indicate that the specific output isn't interesting and
relax them to only check the last instruction (a ret).
I've updated the one test case that really uses this to the name
'stress_test', since it was actually producing output we can directly check.
With this, the script doesn't introduce noise when run over the v16 test
file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228033 91177308-0d34-0410-b5e6-96231b3b80d8
Also re-implements the `dwarf::Tag` enumerator. I've moved the mock
tags into the enumerator since there's no other way to do this. Really
they shouldn't be used at all (they're just a hack to identify
`MDNode`s, but we have a class hierarchy for that now).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228030 91177308-0d34-0410-b5e6-96231b3b80d8
`dwarf::TagString()` shouldn't stringify `DW_TAG_lo_user` or
`DW_TAG_hi_user`. These aren't actual tags; they're markers for the
edge of vendor-specific tag regions.
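As a sketch of the intended behavior (assuming the const char* return type TagString had at the time):

  #include "llvm/Support/Dwarf.h"
  #include <cassert>

  void checkUserTagMarkers() {
    // lo_user/hi_user delimit the vendor-specific range; they are not
    // tags themselves, so they should not stringify.
    assert(llvm::dwarf::TagString(llvm::dwarf::DW_TAG_lo_user) == nullptr);
    assert(llvm::dwarf::TagString(llvm::dwarf::DW_TAG_hi_user) == nullptr);
  }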
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228029 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds general shuffle pattern matching for the MOVQ zero-extend instruction (copy lower 64 bits, zero upper) for all 128-bit integer vectors; it is added as a fallback test in lowerVectorShuffleAsZeroOrAnyExtend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228022 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,
LLVM unrolls
#pragma unroll
for (int i = 0; i < 3; ++i) {
sum += foo((b + i) * s);
}
into
sum += foo(b * s);
sum += foo((b + 1) * s);
sum += foo((b + 2) * s);
However, no optimizations yet reduce the internal redundancy of the three
expressions:
b * s
(b + 1) * s
(b + 2) * s
With SLSR, since each expression differs from the previous one by s
(as (b + i) * s = (b + i - 1) * s + s), LLVM can optimize these three
expressions into:
t1 = b * s
t2 = t1 + s
t3 = t2 + s
This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.
Test Plan: test/StraightLineStrengthReduce/slsr.ll
Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick
Reviewed By: jholewinski, atrick
Subscribers: karthikthecool, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D7310
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228016 91177308-0d34-0410-b5e6-96231b3b80d8
This patch detects consecutive vector loads using the existing
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329
This patch effectively reverts the tablegen additions of D6492 /
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.
The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.
A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.
Differential Revision: http://reviews.llvm.org/D7303
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228006 91177308-0d34-0410-b5e6-96231b3b80d8
lto_codegen_compile_optimized. Also add lto_api_version.
Before this commit, we could only dump the optimized bitcode after running
lto_codegen_compile, but that output includes the effects of some codegen
passes; one example is the StackProtector pass. We would get an assertion
failure when running llc on the optimized bitcode, because StackProtector is
effectively run twice.
After splitting lto_codegen_compile, the linker can choose to dump the bitcode
before running lto_codegen_compile_optimized.
lto_api_version is added so ld64 can check for runtime availability of the
new API.
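A hedged sketch of how a linker might use the split API (the optimize-only entry point is assumed to be named lto_codegen_optimize; signatures are assumptions about lto.h, not quoted from it):

  #include <llvm-c/lto.h>
  #include <stddef.h>

  // Run the optimizer only; the caller can dump the optimized bitcode
  // before any codegen passes (e.g. StackProtector) have run, then
  // finish with the codegen-only entry point.
  static const void *compileWithDumpPoint(lto_code_gen_t CG, size_t *Len) {
    if (lto_codegen_optimize(CG))
      return NULL; // optimization failed
    // (dump the optimized bitcode here if requested)
    return lto_codegen_compile_optimized(CG, Len);
  }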
rdar://19565500
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228000 91177308-0d34-0410-b5e6-96231b3b80d8