This works like the composeSubRegisterIndices() function but transforms
a subregister lane mask instead of a subregister index.
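As a rough picture of what such a composition does (a conceptual sketch of
mine, not the TableGen-generated code; LaneRemap is a hypothetical per-index
table mapping each lane of the inner mask to the lanes it occupies in the
enclosing register):

    #include <cstdint>

    // Conceptual sketch only: remap each set lane of the inner mask
    // through the subregister index's lane table and union the results.
    uint32_t composeLaneMask(const uint32_t LaneRemap[32], uint32_t Mask) {
      uint32_t Result = 0;
      for (unsigned Lane = 0; Lane < 32; ++Lane)
        if (Mask & (1u << Lane))
          Result |= LaneRemap[Lane];
      return Result;
    }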
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223874 91177308-0d34-0410-b5e6-96231b3b80d8
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
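Conceptually the computed value is just a union (a sketch of mine, not the
emitted tables):

    #include <cstdint>
    #include <vector>

    // The combined lane mask of a register/register class is the union
    // of the lane masks of all of its subregister indices.
    uint32_t combinedLaneMask(const std::vector<uint32_t> &SubRegLaneMasks) {
      uint32_t Mask = 0;
      for (uint32_t SubMask : SubRegLaneMasks)
        Mask |= SubMask;
      return Mask;
    }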
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223873 91177308-0d34-0410-b5e6-96231b3b80d8
We don't allow Value* to have names which contain null bytes. The
AsmParser should reject .ll files that try to do this.
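The invariant amounts to a check along these lines (isValidValueName is a
hypothetical helper, not the actual parser code):

    #include <string>

    // A Value name must not contain an embedded NUL byte; the AsmParser
    // rejects .ll input whose quoted identifiers would smuggle one in.
    bool isValidValueName(const std::string &Name) {
      return Name.find('\0') == std::string::npos;
    }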
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223869 91177308-0d34-0410-b5e6-96231b3b80d8
Nothing particularly interesting here, just documenting the way the code currently works before I start changing it...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223866 91177308-0d34-0410-b5e6-96231b3b80d8
The complicated situation is when we have to keep an alias but drop a GV
that is part of the aliasee.
We used to clone the dropped GV and make the clone internal. This is wasteful
as we know the original will be dropped.
With this patch we instead set the linkage of the original to
internal and replace all uses (except the one in the alias) with a new
declaration that takes the name of the old GV. This saves us from having
to copy the body.
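A sketch of the approach in terms of present-day LLVM APIs (the helper name,
the use of replaceUsesWithIf, and the declaration's exact shape are my
assumptions, not the patch itself):

    #include "llvm/IR/GlobalAlias.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    // Internalize the original GV and point every use except the
    // aliasee operand at a fresh declaration carrying the old name.
    static void dropBodyKeepingAlias(Module &M, GlobalVariable *GV,
                                     GlobalAlias *GA) {
      GV->setLinkage(GlobalValue::InternalLinkage);
      auto *Decl = new GlobalVariable(M, GV->getValueType(), GV->isConstant(),
                                      GlobalValue::ExternalLinkage,
                                      /*Initializer=*/nullptr);
      Decl->takeName(GV); // the declaration inherits the old public name
      GV->replaceUsesWithIf(Decl,
                            [&](Use &U) { return U.getUser() != GA; });
    }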
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223863 91177308-0d34-0410-b5e6-96231b3b80d8
We used to combine only intrinsics, turning them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.
We can do the same thing for generic load/stores.
Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).
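For illustration, here is my example of the problem shape, assuming ARM NEON
intrinsics (not code from the patch):

    #include <arm_neon.h>

    // After a v16f32 load is split into four v4f32 chunks, only the
    // first load keeps the "base incremented after the load" shape that
    // can fold into VLD1_UPD; other combines rewrite the later addresses
    // as independent base+offset additions, which no longer match.
    float32x4_t sum16(const float *p) {
      float32x4_t a = vld1q_f32(p);      // candidate for VLD1_UPD
      float32x4_t b = vld1q_f32(p + 4);  // base+offset, no chained update
      float32x4_t c = vld1q_f32(p + 8);
      float32x4_t d = vld1q_f32(p + 12);
      return vaddq_f32(vaddq_f32(a, b), vaddq_f32(c, d));
    }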
Differential Revision: http://reviews.llvm.org/D6585
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223862 91177308-0d34-0410-b5e6-96231b3b80d8
In the current implementation, GCStrategy is a part of the ownership structure for the gc metadata which describes a Module. It also contains a reference to the module in question. As a result, GCStrategy instances are essentially Module specific.
I plan to transition away from this design. Instead, a GCStrategy will be owned by the LLVMContext. It will be a lightweight policy object which contains no information about the Modules or Functions involved, but can be easily reached given a Function.
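The target shape is roughly the following (my sketch of the stated direction, not code from this patch):

    #include <string>

    // GCStrategy as a lightweight, context-owned policy object: no
    // Module or Function state, only policy queries that can be
    // answered given a Function's chosen strategy.
    class GCStrategy {
    public:
      const std::string &getName() const { return Name; }
      bool usesMetadata() const { return UsesMetadata; }
      // ... policy predicates only; no Module* back-reference ...
    private:
      std::string Name;
      bool UsesMetadata = false;
    };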
The first step in this transition is to remove the direct Module reference from GCStrategy. This also requires removing the single user of this reference, the GCMetadataPrinter hierarchy. In theory, this will allow the lifetime of the printers to be scoped to the LLVMContext as well, but in practice, I'm not actually changing that. (Yet?)
An alternate design would have been to move the direct Module reference into the GCMetadataPrinter and change the keying of the owning maps to explicitly key off both GCStrategy and Module. I'm open to doing it that way instead, but didn't see much value in preserving the per Module association for GCMetadataPrinters.
The next change in this sequence will be to start unwinding the intertwined ownership between GCStrategy, GCModuleInfo, and GCFunctionInfo.
Differential Revision: http://reviews.llvm.org/D6566
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223859 91177308-0d34-0410-b5e6-96231b3b80d8
There were two major problems with `MDNode` memory management.
1. `MDNode::operator new()` called a placement array constructor for
`MDOperand`. What? Each operand needs to be placed individually.
2. `MDNode::operator delete()` failed to destruct the `MDOperand`s at
all.
Frankly, it's hard to understand how this worked locally, how this
survived an LTO bootstrap, or how it worked on most of the bots.
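For reference, here is a self-contained analog of the fixed scheme with
stand-in types (the real code co-allocates the operands immediately before
the node; the layout below mirrors that):

    #include <cstddef>
    #include <new>

    struct MDOperand { void *Val = nullptr; };

    struct Node {
      unsigned NumOperands;
      explicit Node(unsigned N) : NumOperands(N) {}

      void *operator new(std::size_t Size, unsigned NumOps) {
        void *Mem = ::operator new(Size + NumOps * sizeof(MDOperand));
        MDOperand *O = static_cast<MDOperand *>(Mem);
        for (MDOperand *E = O + NumOps; O != E; ++O)
          new (O) MDOperand; // place each operand individually
        return O;            // the node itself sits after its operands
      }

      void operator delete(void *Mem) {
        Node *N = static_cast<Node *>(Mem);
        MDOperand *O = static_cast<MDOperand *>(Mem);
        MDOperand *E = O - N->NumOperands;
        while (O != E)
          (--O)->~MDOperand(); // destruct what operator new placed
        ::operator delete(E);  // free the original allocation
      }
    };

    // Usage: Node *N = new (3) Node(3); ...; delete N;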
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223858 91177308-0d34-0410-b5e6-96231b3b80d8
Move the combiner-state check into another function, add a few
small comments, and use a more general type in a cast<>.
In preparation for a future patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223834 91177308-0d34-0410-b5e6-96231b3b80d8
It was missing from the VLD1/VST1 handling logic, even though the
corresponding instructions exist (same form as v2i64).
In preparation for a future patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223832 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM_EXPLICIT is only supported by recent versions of MSVC, and it seems
the not-so-recent versions get confused about the operator bool() when
trying to resolve operator== calls.
This removes the operator bool()s since they don't seem to be used
anyway.
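A simplified reconstruction of the pattern at issue (Handle is a stand-in;
the failure mode is as reported above, not independently verified):

    // LLVM_EXPLICIT expands to 'explicit' only where explicit conversion
    // operators are supported; on older MSVC the conversion below is
    // implicit, and the compiler could then pull operator bool() into
    // overload resolution for operator== expressions.
    struct Handle {
      void *P = nullptr;
      /* LLVM_EXPLICIT */ operator bool() const { return P != nullptr; }
    };

    bool operator==(const Handle &L, const Handle &R) { return L.P == R.P; }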
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223824 91177308-0d34-0410-b5e6-96231b3b80d8
It is a static method of IRObjectFile, so having to use
IRObjectFile::createIRObjectFile was redundant.
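The call-site effect, roughly (the surrounding plumbing is assumed; only the
rename itself comes from this commit):

    #include "llvm/IR/LLVMContext.h"
    #include "llvm/Object/IRObjectFile.h"
    #include "llvm/Support/MemoryBuffer.h"
    using namespace llvm;
    using namespace llvm::object;

    void example(MemoryBufferRef Buf, LLVMContext &Ctx) {
      // was: IRObjectFile::createIRObjectFile(Buf, Ctx)
      auto ObjOrErr = IRObjectFile::create(Buf, Ctx);
      (void)ObjOrErr;
    }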
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223822 91177308-0d34-0410-b5e6-96231b3b80d8
The load/store value type is currently not available when lowering the memcpy
intrinsic. Add the missing nullptr check to support this in 'computeAddress'.
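The shape of the guard as a self-contained analog (everything below is a
stand-in for the real target code):

    struct Address { long Offset = 0; };
    struct Type; // opaque stand-in for llvm::Type

    // When lowering memcpy there is no load/store value type, so Ty may
    // be null; type-dependent address computation must check it first.
    bool computeAddress(const void *Obj, Address &Addr, const Type *Ty) {
      (void)Obj; (void)Addr; // placeholders for the real matching logic
      if (!Ty)
        return true; // assumption: no value type, skip type-scaled modes
      // ... decisions that would dereference Ty go below this guard ...
      return true;
    }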
Fixes rdar://problem/19178947.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223818 91177308-0d34-0410-b5e6-96231b3b80d8
patterns.
This is causing Clang to miscompile itself for 32-bit x86 somehow, and likely
also on ARM and PPC. I really don't know how, but reverting now that I've
confirmed this is actually the culprit. I have a reproduction as well and so
should be able to restore this shortly.
This reverts commit r223764.
Original commit log follows:
Teach instcombine to canonicalize "element extraction" from a load of an
integer and "element insertion" into a store of an integer into actual
element extraction, element insertion, and vector loads and stores.
Previously various parts of LLVM (including instcombine itself) would
introduce integer loads and stores into the code as a way of opaquely
loading and storing "bits". In some cases (such as a memcpy of
a std::complex<float> object) we will eventually end up using those bits
in non-integer types. In order for SROA to effectively promote the
allocas involved, it splits these "store a bag of bits" integer loads
and stores up into the constituent parts. However, for non-alloca loads
and stores which remain, it uses integer math to recombine the values
into a large integer to load or store.
All of this would be "fine", except that it forces LLVM to go through
integer math to combine and split up values. While this makes perfect
sense for integers (and in fact is critical for bitfields to end up
lowering efficiently) it is *terrible* for non-integer types, especially
floating point types. We have a much more canonical way of representing
the act of concatenating the bits of two SSA values in LLVM: a vector
and insertelement. This patch teaches InstCombine to use this
representation.
With this patch applied, LLVM will no longer introduce integer math into
the critical path of every loop over std::complex<float> operations such
as those that make up the hot path of ... oh, most HPC code, Eigen, and
any other heavy linear algebra library.
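To make this concrete, here is my illustration of the kind of source pattern
involved (not from the original log):

    #include <complex>
    #include <cstring>

    // The memcpy moves 64 "opaque" bits that are really two floats.
    // Before the patch: canonicalized to an i64 load, then lshr/trunc
    // to carve out each float on the critical path.
    // After the patch: a <2 x float> load plus extractelement, keeping
    // the values out of integer math entirely.
    std::complex<float> scale(const std::complex<float> &Src) {
      std::complex<float> Tmp;
      std::memcpy(&Tmp, &Src, sizeof(Tmp));
      return Tmp * 2.0f;
    }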
For the record, I looked *extensively* at fixing this in other parts of
the compiler, but it just doesn't work:
- We really do want to canonicalize memcpy and other bit-motion to
integer loads and stores. SSA values are tremendously more powerful
than "copy" intrinsics. Not doing this regresses massive amounts of
LLVM's scalar optimizer.
- We really do need to split up integer loads and stores of this form in
SROA or every memcpy of a trivially copyable struct will prevent SSA
formation of the members of that struct. It essentially turns off
SROA.
- The closest alternative is to actually split the loads and stores when
partitioning with SROA, but this has all of the downsides historically
discussed of splitting up loads and stores -- the wide-store
information is fundamentally lost. We would also see performance
regressions for bitfield-heavy code and other places where the
integers aren't really intended to be split without seemingly
arbitrary logic to treat integers totally differently.
- We *can* effectively fix this in instcombine, so it isn't that hard of
a choice to make IMO.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@223813 91177308-0d34-0410-b5e6-96231b3b80d8