Yet another chapter in the endless story. While this looks like we leave
the loop in a non-canonical state, this replicates the logic in
LoopSimplify, so it doesn't diverge from the canonical form in any way.
PR21968
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230058 91177308-0d34-0410-b5e6-96231b3b80d8
In the old (well, current) schema, there are two types of file
references: untagged and tagged (the latter references the former).
!0 = !{!"filename", !"/directory"}
!1 = !{!"0x29", !1} ; DW_TAG_file_type [filename] [/directory]
The interface to `DIBuilder` universally takes the tagged version,
described by `DIFile`. However, most `file:` references actually use
the untagged version directly.
In the new hierarchy, I'm merging this into a single node: `MDFile`.
Originally I'd planned to keep the old schema unchanged until after I
moved the new hierarchy into place.
However, it turns out to be trivial to make `MDFile` match both nodes at
the same time.
- Anyone referencing !1 does so through `DIFile`, whose implementation
I need to gut anyway (as I do the rest of the `DIDescriptor`s).
- Anyone referencing !0 just references an `MDNode`, and expects a
node with two `MDString` operands.
This commit achieves that, and updates all the testcases for the parts
of the new hierarchy that used the two-node schema (I've replaced the
untagged nodes with `distinct !{}` to make the diff clear (otherwise the
metadata all gets renumbered); it might be worthwhile to come back and
delete those nodes and renumber the world, not sure).
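For reference, a rough sketch of what the merged node looks like under the
new specialized-node assembly syntax (purely illustrative; the exact field
spelling isn't the point here):

  !0 = !MDFile(filename: "filename", directory: "/directory")

Consumers expecting the old untagged pair still see a node whose operands
are the two strings, while anything going through `DIFile` resolves to this
same node.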
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230057 91177308-0d34-0410-b5e6-96231b3b80d8
This patch introduces a new mechanism that allows IR modules to co-operatively
build pointer sets corresponding to addresses within a given set of
globals. One particular use case for this is to allow a C++ program to
efficiently verify (at each call site) that a vtable pointer is in the set
of valid vtable pointers for the class or its derived classes. One way of
doing this is for a toolchain component to build, for each class, a bit set
that maps to the memory region allocated for the vtables, such that each 1
bit in the bit set maps to a valid vtable for that class, and lay out the
vtables next to each other, to minimize the total size of the bit sets.
The patch introduces a metadata format for representing pointer sets, an
'@llvm.bitset.test' intrinsic and an LTO lowering pass that lays out the globals
and builds the bitsets, and documents the new feature.
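Roughly, the pieces fit together as sketched below (a hedged illustration
based on the description above; the names, operand order, and attributes are
examples rather than the definitive format):

  @a = constant i32 1
  @b = constant i32 2

  ; Each entry ties a bit set name to an address inside a global (here, offset 0).
  !0 = !{!"bitset1", i32* @a, i32 0}
  !1 = !{!"bitset1", i32* @b, i32 0}
  !llvm.bitsets = !{!0, !1}

  ; Answers "is %ptr a member of the named bit set?"
  declare i1 @llvm.bitset.test(i8* %ptr, metadata %bitset) nounwind readnone

  define i1 @is_valid(i8* %p) {
    %ok = call i1 @llvm.bitset.test(i8* %p, metadata !"bitset1")
    ret i1 %ok
  }

At LTO time the lowering pass can then place @a and @b next to each other
and turn the call into a range check plus a bit-set lookup.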
Differential Revision: http://reviews.llvm.org/D7288
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230054 91177308-0d34-0410-b5e6-96231b3b80d8
usage of instruction ADDU16 by CodeGen. For this instruction, an improper
register is allocated, i.e., a register that is not from the register set
defined for the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230053 91177308-0d34-0410-b5e6-96231b3b80d8
changes to remove non-Function-based subtargets out of the asm
printer. For module-level emission we'll need to construct
an MCSubtargetInfo so that we can encode instructions for
emission.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230050 91177308-0d34-0410-b5e6-96231b3b80d8
When doing style cleanup, I noticed a minor bug in this code. If we have a pointer that we think is unused after a statepoint and thus doesn't need relocation, we store a null pointer into the alloca we're about to promote. This helps turn a mistake in liveness analysis into an easily debuggable crash. It turned out this code had never been updated to handle invoke statepoints.
There's no test for this. Without a bug in liveness, it appears impossible to make this trigger in a way which is visible in the resulting IR. We might store the null, but when promoting the alloca, there will be no uses and thus nothing to test against. Suggestions on how to test are very welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230047 91177308-0d34-0410-b5e6-96231b3b80d8
This patch teaches X86FastISel how to select the intrinsics 'convert_from_fp16'
and 'convert_to_fp16'.
If the target has F16C, we can select VCVTPS2PHrr for a float-to-half conversion,
and VCVTPH2PSrr for a half-to-float conversion.
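For example, functions along these lines should now be handled by fast-isel
on an F16C-capable target instead of falling back to SelectionDAG (this
assumes the overloaded i16-based intrinsic forms; the exact mangled names
here are for illustration):

  declare i16 @llvm.convert.to.fp16.f32(float)
  declare float @llvm.convert.from.fp16.f32(i16)

  define i16 @fp32_to_fp16(float %a) {
    ; With +f16c this can be selected as VCVTPS2PHrr.
    %res = call i16 @llvm.convert.to.fp16.f32(float %a)
    ret i16 %res
  }

  define float @fp16_to_fp32(i16 %a) {
    ; With +f16c this can be selected as VCVTPH2PSrr.
    %res = call float @llvm.convert.from.fp16.f32(i16 %a)
    ret float %res
  }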
Differential Revision: http://reviews.llvm.org/D7673
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230043 91177308-0d34-0410-b5e6-96231b3b80d8
Starting to update variable naming and types to match LLVM style. This will be an incremental process to minimize the chance of breakage as I work. Step one, rename member variables to LLVM CamelCase and use llvm's ADT. Much more to come.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230042 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: It turns out that if you don't set CMAKE_BUILD_TYPE, the default is an empty string. This results in some of the behaviors of debug builds, but not all of them. For example, ENABLE_ASSERTIONS is false.
Reviewers: rnk
Reviewed By: rnk
Subscribers: chapuni, llvm-commits
Differential Revision: http://reviews.llvm.org/D7360
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230041 91177308-0d34-0410-b5e6-96231b3b80d8
Before calling Function::getGC to test for enablement, we need to make sure there's actually a GC at all via Function::hasGC. Otherwise, we'd crash on functions without a GC. Thankfully, this only mattered if you manually scheduled the pass, but still, oops. :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230040 91177308-0d34-0410-b5e6-96231b3b80d8
EmitFunctionStubs is called from doFinalization and so can't
depend on the Subtarget existing. It's also irrelevant, as
we know we're targeting Darwin since we're in the Darwin asm printer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230039 91177308-0d34-0410-b5e6-96231b3b80d8
This canonicalization step saves us 3 pattern matching possibilities * 4 math ops
for scalar FP math that uses xmm regs. The backend can re-commute the operands
post-instruction-selection if that makes register allocation better.
The tests in llvm/test/CodeGen/X86/sse-scalar-fp-arith.ll cover this scenario already,
so there are no new tests with this patch.
Differential Revision: http://reviews.llvm.org/D7777
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230024 91177308-0d34-0410-b5e6-96231b3b80d8
It would be nice to get rid of the version checks here, but that will
have to wait until libstdc++ is upgraded to 5.0 everywhere ...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230021 91177308-0d34-0410-b5e6-96231b3b80d8
the wrong answer. We also got initializer lists, which are *way* cleaner
for this kind of thing. Let's use those and make this a normal, boring
function accepting ArrayRef.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230004 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes an error introduced in r228934 where None was converted to
an int instead of the int being converted to an Optional as intended.
We make that sort of mistake a compile error by changing NoneType into
a scoped enum.
Finally, provide a static NoneType called None to avoid forcing all
users to spell it NoneType::None.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229980 91177308-0d34-0410-b5e6-96231b3b80d8
AsmPrinter.
getSubtargetInfo now asserts that the MachineFunction exists.
Debug printing of register naming now uses the register info
from MCAsmInfo as that's unchanging.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229978 91177308-0d34-0410-b5e6-96231b3b80d8
have access to a target-specific subtarget info. Grab the module-level
MCSubtargetInfo for the JumpInstrTable output stubs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229974 91177308-0d34-0410-b5e6-96231b3b80d8
This constructor is more efficient for symbols that have already been emitted,
since it avoids the construction/execution of a std::function.
Update the ObjectLinkingLayer to use this new constructor where possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229973 91177308-0d34-0410-b5e6-96231b3b80d8
single place and replace calls to getSubtargetImpl with calls
to get the subtarget from the MachineFunction where valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229971 91177308-0d34-0410-b5e6-96231b3b80d8
The IBM BG/Q supercomputer's A2 cores have a hardware prefetching unit, the
L1P, but it does not prefetch directly into the A2's L1 cache. Instead, it
prefetches into its own L1P buffer, and the latency to access that buffer is
significantly higher than that to the L1 cache (although smaller than the
latency to the L2 cache). As a result, especially when multiple hardware
threads are not actively busy, explicitly prefetching data into the L1 cache is
advantageous.
I've been using this pass out-of-tree for data prefetching on the BG/Q for well
over a year, and it has worked quite well. It is enabled by default only for
the BG/Q, but can be enabled for other cores as well via a command-line option.
Eventually, we might want to add some TTI interfaces and move this into
Transforms/Scalar (there is nothing particularly target dependent about it,
although only machines like the BG/Q will benefit from its simplistic
strategy).
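Roughly, the effect of the pass is to insert calls to the @llvm.prefetch
intrinsic ahead of loads that are expected to miss; a sketch along these
lines (the placement and prefetch distance shown are only an example, not
what the pass literally emits, and it uses the IR syntax of the time):

  declare void @llvm.prefetch(i8* nocapture, i32, i32, i32)

  define i32 @touch(i32* %p) {
    ; Prefetch a later element into the data cache:
    ; arguments are (address, rw = 0 for read, locality = 3, cache type = 1 for data).
    %ahead = getelementptr i32* %p, i64 16
    %addr = bitcast i32* %ahead to i8*
    call void @llvm.prefetch(i8* %addr, i32 0, i32 3, i32 1)
    %v = load i32* %p
    ret i32 %v
  }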
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229966 91177308-0d34-0410-b5e6-96231b3b80d8
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no really blocking
regressions. I've fuzz tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]
Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.
In case you aren't using a diff algorithm that can handle this:
X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229964 91177308-0d34-0410-b5e6-96231b3b80d8
is going well, remove the flag and the code for the old legality tests.
This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229963 91177308-0d34-0410-b5e6-96231b3b80d8
When writing the bitcode serialization for the new debug info hierarchy,
I assumed two fields would never be null.
Drop that assumption, since it's brittle (and crashes the
`BitcodeWriter` if wrong), and is a check better left for the verifier
anyway. (No need for a bitcode upgrade here, since the new hierarchy is
still not in place.)
The fields in question are `MDCompileUnit::getFile()` and
`MDDerivedType::getBaseType()`, the latter of which is null in
test/Transforms/Mem2Reg/ConvertDebugInfo2.ll (see !14, a pointer to
nothing). While the testcase might have bitrotted, there's no reason
for the bitcode format to rely on non-null for metadata operands.
This also fixes a bug in `AsmWriter` where if the `file:` is null it
isn't emitted (caught by the double-round trip in the testcase I'm
adding) -- this is a required field in `LLParser`.
I'll circle back to ConvertDebugInfo2. Once the specialized nodes are
in place, I'll be trying to turn the debug info verifier back on by
default (in the newer module pass form committed in r206300) and throwing
more logic in there. If the testcase has bitrotted (as opposed to me
not understanding the schema correctly) I'll fix it then.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229960 91177308-0d34-0410-b5e6-96231b3b80d8
This change addresses a deficiency pointed out in PR22629. To copy from the bug
report:
[from the bug report]
Consider this code:
  int f(int x) {
    int a[] = {12};
    return a[x];
  }

GCC knows to optimize this to

  movl $12, %eax
  ret

The code generated by recent Clang at -O3 is:

  movslq %edi, %rax
  movl .L_ZZ1fiE1a(,%rax,4), %eax
  retq

  .L_ZZ1fiE1a:
  .long 12 # 0xc
[end from the bug report]
This definitely seems worth fixing. I've also seen this kind of code before (as
the base case of generic vector wrapper templates with one element).
The general idea is to look at the GEP feeding a load or a store, which has
some variable as its first non-zero index, and determine if that index must be
zero (or else an out-of-bounds access would occur). We can do this for allocas
and globals with constant initializers where we know the maximum size of the
underlying object. When we find such a GEP, we create a new one for the memory
access with that first variable index replaced with a constant zero.
Even if we can't eliminate the memory access (and sometimes we can't), it is
still useful because it removes unnecessary indexing calculations.
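In IR terms, the input pattern looks roughly like the following (a simplified
sketch in the assembly syntax of the time; the names are illustrative):

  @a = internal constant [1 x i32] [i32 12]

  define i32 @f(i32 %x) {
    %idx = sext i32 %x to i64
    ; @a has exactly one element, so any in-bounds access requires %idx == 0.
    %p = getelementptr inbounds [1 x i32]* @a, i64 0, i64 %idx
    %v = load i32* %p
    ret i32 %v
  }

Once the variable index is replaced with a constant zero, the load reads a
known constant initializer and the whole function can fold down to returning 12.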
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229959 91177308-0d34-0410-b5e6-96231b3b80d8
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.
My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229958 91177308-0d34-0410-b5e6-96231b3b80d8