This canonicalization step saves us 3 pattern-matching possibilities for each of the 4 math ops
for scalar FP math that uses xmm regs. The backend can re-commute the operands
post-instruction-selection if that makes register allocation better.
The tests in llvm/test/CodeGen/X86/sse-scalar-fp-arith.ll cover this scenario already,
so there are no new tests with this patch.
Differential Revision: http://reviews.llvm.org/D7777
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230024 91177308-0d34-0410-b5e6-96231b3b80d8
It would be nice to get rid of the version checks here, but that will
have to wait until libstdc++ is upgraded to 5.0 everywhere ...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230021 91177308-0d34-0410-b5e6-96231b3b80d8
the wrong answer. We also got initializer lists, which are *way* cleaner
for this kind of thing. Let's use those and make this a normal, boring
function accepting ArrayRef.
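A minimal sketch of the pattern, with a hypothetical function name (not the code this commit actually touches):

  #include "llvm/ADT/ArrayRef.h"

  // Hypothetical helper, illustrative only: a plain function taking
  // ArrayRef can be called with a braced initializer list, because
  // ArrayRef has an implicit std::initializer_list constructor.
  static int sumCosts(llvm::ArrayRef<int> Costs) {
    int Total = 0;
    for (int C : Costs)
      Total += C;
    return Total;
  }

  // Call sites can then simply write: sumCosts({1, 2, 3});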
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230004 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes an error introduced in r228934 where None was converted to
an int instead of the int being converted to an Optional as intended.
We make that sort of mistake a compile error by changing NoneType into
a scoped enum.
Finally, provide a static NoneType called None to avoid forcing all
users to spell it NoneType::None.
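A simplified sketch of the idea (not the actual contents of llvm/ADT/None.h or Optional.h):

  // Simplified, illustrative sketch: a scoped enum has no implicit
  // conversion to int, so "int X = None;" is rejected at compile time,
  // while an Optional-like type can still be constructed from None.
  enum class NoneType { None };
  const NoneType None = NoneType::None;

  struct MaybeInt {                                // stand-in for Optional<int>
    bool HasValue = false;
    int Value = 0;
    MaybeInt(NoneType) {}                          // empty state
    MaybeInt(int V) : HasValue(true), Value(V) {}  // engaged state
  };

  MaybeInt A = None;   // OK: picks the NoneType constructor.
  // int B = None;     // error: no NoneType -> int conversion.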
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229980 91177308-0d34-0410-b5e6-96231b3b80d8
AsmPrinter.
getSubtargetInfo now asserts that the MachineFunction exists.
Debug printing of register names now uses the register info
from MCAsmInfo, as that's unchanging.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229978 91177308-0d34-0410-b5e6-96231b3b80d8
have access to a target-specific subtarget info. Grab the module-level
MCSubtargetInfo for the JumpInstrTable output stubs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229974 91177308-0d34-0410-b5e6-96231b3b80d8
This constructor is more efficient for symbols that have already been emitted,
since it avoids the construction/execution of a std::function.
Update the ObjectLinkingLayer to use this new constructor where possible.
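The general shape of the pattern, with illustrative names rather than the actual Orc classes:

  #include <cstdint>
  #include <functional>
  #include <utility>

  // Illustrative only -- not the real JIT symbol class. Symbols whose
  // address is already known use the cheap constructor and never build or
  // invoke a std::function; deferred symbols compute the address on demand.
  class SymbolHandle {
  public:
    using Materializer = std::function<uint64_t()>;

    explicit SymbolHandle(uint64_t KnownAddr) : Addr(KnownAddr) {}
    explicit SymbolHandle(Materializer M) : Addr(0), Mat(std::move(M)) {}

    uint64_t getAddress() {
      if (!Addr && Mat)
        Addr = Mat();   // materialize lazily, at most once
      return Addr;
    }

  private:
    uint64_t Addr;
    Materializer Mat;
  };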
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229973 91177308-0d34-0410-b5e6-96231b3b80d8
single place and replace calls to getSubtargetImpl with calls
to get the subtarget from the MachineFunction where valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229971 91177308-0d34-0410-b5e6-96231b3b80d8
The IBM BG/Q supercomputer's A2 cores have a hardware prefetching unit, the
L1P, but it does not prefetch directly into the A2's L1 cache. Instead, it
prefetches into its own L1P buffer, and the latency to access that buffer is
significantly higher than that to the L1 cache (although smaller than the
latency to the L2 cache). As a result, especially when multiple hardware
threads are not actively busy, explicitly prefetching data into the L1 cache is
advantageous.
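As a rough, era-appropriate sketch of what such a pass emits (illustrative helper and parameter names, not the in-tree implementation), the basic operation is to insert an @llvm.prefetch call for an address a fixed number of iterations ahead of a load inside a loop:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  // Illustrative helper: given a load inside a loop and the address it
  // will touch some iterations from now, emit a prefetch of that future
  // address just before the load.
  static void insertPrefetchBefore(LoadInst *Load, Value *FutureAddr) {
    Module *M = Load->getParent()->getParent()->getParent();
    IRBuilder<> Builder(Load);
    Value *Ptr = Builder.CreateBitCast(FutureAddr, Builder.getInt8PtrTy());
    Function *Prefetch = Intrinsic::getDeclaration(M, Intrinsic::prefetch);
    // Arguments: address, rw (0 = read), locality (3 = keep in cache),
    // cache type (1 = data cache).
    Builder.CreateCall(Prefetch, {Ptr, Builder.getInt32(0),
                                  Builder.getInt32(3), Builder.getInt32(1)});
  }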
I've been using this pass out-of-tree for data prefetching on the BG/Q for well
over a year, and it has worked quite well. It is enabled by default only for
the BG/Q, but can be enabled for other cores as well via a command-line option.
Eventually, we might want to add some TTI interfaces and move this into
Transforms/Scalar (there is nothing particularly target dependent about it,
although only machines like the BG/Q will benefit from its simplistic
strategy).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229966 91177308-0d34-0410-b5e6-96231b3b80d8
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no truly blocking
regressions. I've fuzz tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]
Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.
In case you aren't using a diff algorithm that can handle this:
X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229964 91177308-0d34-0410-b5e6-96231b3b80d8
is going well, remove the flag and the code for the old legality tests.
This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229963 91177308-0d34-0410-b5e6-96231b3b80d8
When writing the bitcode serialization for the new debug info hierarchy,
I assumed two fields would never be null.
Drop that assumption, since it's brittle (and crashes the
`BitcodeWriter` if wrong), and is a check better left for the verifier
anyway. (No need for a bitcode upgrade here, since the new hierarchy is
still not in place.)
The fields in question are `MDCompileUnit::getFile()` and
`MDDerivedType::getBaseType()`, the latter of which is null in
test/Transforms/Mem2Reg/ConvertDebugInfo2.ll (see !14, a pointer to
nothing). While the testcase might have bitrotted, there's no reason
for the bitcode format to rely on non-null for metadata operands.
This also fixes a bug in `AsmWriter`: if the `file:` field is null it
wasn't emitted at all (caught by the double round-trip in the testcase
I'm adding), even though it's a required field in `LLParser`.
I'll circle back to ConvertDebugInfo2. Once the specialized nodes are
in place, I'll be trying to turn the debug info verifier back on by
default (in the newer module-pass form committed in r206300) and throwing
more logic in there. If the testcase has bitrotted (as opposed to me
not understanding the schema correctly) I'll fix it then.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229960 91177308-0d34-0410-b5e6-96231b3b80d8
This change addresses a deficiency pointed out in PR22629. To copy from the bug
report:
[from the bug report]
Consider this code:
  int f(int x) {
    int a[] = {12};
    return a[x];
  }
GCC knows to optimize this to
  movl    $12, %eax
  ret
The code generated by recent Clang at -O3 is:
  movslq  %edi, %rax
  movl    .L_ZZ1fiE1a(,%rax,4), %eax
  retq
.L_ZZ1fiE1a:
  .long   12                      # 0xc
[end from the bug report]
This definitely seems worth fixing. I've also seen this kind of code before (as
the base case of generic vector wrapper templates with one element).
The general idea is to look at the GEP feeding a load or a store, which has
some variable as its first non-zero index, and determine if that index must be
zero (or else an out-of-bounds access would occur). We can do this for allocas
and globals with constant initializers where we know the maximum size of the
underlying object. When we find such a GEP, we create a new one for the memory
access with that first variable index replaced with a constant zero.
Even if we can't eliminate the memory access (and sometimes we can't), it is
still useful because it removes unnecessary indexing calculations.
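A minimal sketch of the core rewrite for the simplest case, a one-element array alloca (the actual patch is more general and also handles globals with constant initializers; names here are illustrative):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Illustrative only: if a GEP indexes a one-element array alloca with a
  // variable index, any in-bounds access forces that index to be zero, so
  // rewrite it as a constant zero and expose the access to further
  // simplification.
  static bool canonicalizeSingleElementGEP(GetElementPtrInst *GEP) {
    auto *AI = dyn_cast<AllocaInst>(GEP->getPointerOperand());
    if (!AI)
      return false;
    auto *ArrTy = dyn_cast<ArrayType>(AI->getAllocatedType());
    if (!ArrTy || ArrTy->getNumElements() != 1 || GEP->getNumIndices() < 2)
      return false;
    // Operand 0 is the pointer; operand 1 must be the leading zero index.
    auto *FirstIdx = dyn_cast<ConstantInt>(GEP->getOperand(1));
    if (!FirstIdx || !FirstIdx->isZero())
      return false;
    Value *Idx = GEP->getOperand(2);   // the variable array index
    if (isa<ConstantInt>(Idx))
      return false;                    // already constant; nothing to do
    GEP->setOperand(2, ConstantInt::get(Idx->getType(), 0));
    return true;
  }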
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229959 91177308-0d34-0410-b5e6-96231b3b80d8
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.
My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229958 91177308-0d34-0410-b5e6-96231b3b80d8
one test case that is only partially tested in 32-bit mode into two test
cases so that the script doesn't generate massive spews of tests for the
cases we don't care about.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229955 91177308-0d34-0410-b5e6-96231b3b80d8
When back-merging the changes in r229945 I noticed that I forgot to mark the test cases with the appropriate GC. We want the rewriting to be off by default (even when manually added to the pass order), not on by default. To keep the current tests working, mark them as using the statepoint-example GC and whitelist that GC.
Longer term, we need a better selection mechanism here for both actual usage and testing. As I migrate more tests to the in-tree version of this pass, I will probably need to update the enable/disable logic as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229954 91177308-0d34-0410-b5e6-96231b3b80d8
`DILocation` is a lightweight wrapper. Its accessors check for null and
the correct type, and then forward to `MDLocation`.
Extract a couple of macros to do the `dyn_cast_or_null<>` and default
return logic. I'll be using these to minimize error-prone boilerplate
when I move the new hierarchy into place -- since all the other
subclasses of `DIDescriptor` will similarly become lightweight wrappers.
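A schematic of the wrapper shape (illustrative; the real class and the extracted macros differ), written against the era's `MDLocation`:

  #include "llvm/IR/DebugInfoMetadata.h"

  using namespace llvm;

  // Schematic only: each accessor checks for null and for the right node
  // kind via dyn_cast_or_null<>, then forwards to the underlying
  // MDLocation, returning a default value otherwise.
  class DILocationWrapper {
    const MDNode *N = nullptr;

    const MDLocation *getRaw() const {
      return dyn_cast_or_null<MDLocation>(N);
    }

  public:
    explicit DILocationWrapper(const MDNode *Node) : N(Node) {}
    unsigned getLineNumber() const {
      return getRaw() ? getRaw()->getLine() : 0;
    }
    unsigned getColumnNumber() const {
      return getRaw() ? getRaw()->getColumn() : 0;
    }
  };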
(Note that I hope to obsolete these wrappers fairly quickly, with the
goal of renaming the underlying types (e.g., I'll rename `MDLocation` to
`DILocation` once the name is free).)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229953 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't pass 'ninja check-llvm' for me. Lots of tests, including
the ones updated, fail with crashes and other explosions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229952 91177308-0d34-0410-b5e6-96231b3b80d8
This patch consists of a single pass whose only purpose is to visit previously inserted gc.statepoints which do not yet have gc.relocates, and insert them. This can be used either immediately after IR generation to perform 'early safepoint insertion' or late in the pass order to perform 'late insertion'.
This patch is setting the stage for work to continue in tree. In particular, there are known naming and style violations in the current patch. I'll try to get those resolved over the next week or so. As I touch each area to make style changes, I need to make sure we have adequate testing in place. As part of the cleanup, I will be cleaning up a collection of test cases we have out of tree and submitting them upstream. The tests included in this change are very basic and mostly to provide examples of usage.
The pass has several main subproblems it needs to address:
- First, it has to identify any live pointers. In the current code, the use of address spaces to distinguish pointers to GC-managed objects is hard-coded, but this will become parameterizable in the near future. Note that the current change doesn't actually contain a useful liveness analysis. It was separated into a follow-up change as the code wasn't ready to be shared. Instead, the current implementation just considers any dominating def of appropriate pointer type to be live (a minimal sketch of this placeholder liveness follows the list).
- Second, it has to identify base pointers for each live pointer. This is a fairly straightforward data flow algorithm.
- Third, the information in the previous steps is used to actually introduce the rewrites. Rather than trying to do this by hand, we simply repurpose the code behind Mem2Reg to do this for us.
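A minimal sketch of that placeholder liveness (illustrative names, assuming GC pointers are distinguished by address space; the real pass and its forthcoming liveness analysis are more involved):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/InstIterator.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Illustrative only: treat every definition of GC-pointer type that
  // dominates the statepoint as live there. This is deliberately
  // conservative; a real liveness analysis would prune values that have
  // no uses after the statepoint.
  static void collectLiveGCPointers(Function &F, DominatorTree &DT,
                                    Instruction *StatepointCall,
                                    unsigned GCAddrSpace,
                                    SmallVectorImpl<Value *> &Live) {
    for (Instruction &I : instructions(F)) {
      auto *PTy = dyn_cast<PointerType>(I.getType());
      if (!PTy || PTy->getAddressSpace() != GCAddrSpace)
        continue;
      if (DT.dominates(&I, StatepointCall))
        Live.push_back(&I);
    }
  }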
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229945 91177308-0d34-0410-b5e6-96231b3b80d8
Today a simple function that only catches exceptions and doesn't run
destructor cleanups ends up containing a dead call to _Unwind_Resume
(PR20300). We can't remove these dead resume instructions during normal
optimization because inlining might introduce additional landingpads
that do have cleanups to run. Instead we can do this during EH
preparation, which is guaranteed to run after inlining.
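For illustration (an assumed example of the kind of function described, not taken from the patch or the PR): a function like this has no local objects with destructors, so the landingpad exists solely for the catch clause, and the "no clause matched" path ending in _Unwind_Resume can never run -- the personality routine would not have transferred control to that landingpad for an exception it doesn't catch.

  // Illustrative example only: catches an exception, runs no destructor
  // cleanups, and therefore ends up with an unreachable resume path.
  void mayThrow();

  int f() {
    try {
      mayThrow();
    } catch (int E) {
      return E;
    }
    return 0;
  }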
Fixes PR20300.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D7744
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229944 91177308-0d34-0410-b5e6-96231b3b80d8
The instructions were being generated on architectures that don't support AVX-512.
This reverts commit r229837.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229942 91177308-0d34-0410-b5e6-96231b3b80d8