FoldingSet is implemented as a chained hash table. When there is a hash
collision during insertion, which is common as we fill the table until a
load factor of 2.0 is hit, we walk the chained elements, comparing every
operand with the new element's operands. This can be very expensive if the
MDNode has many operands.
We sacrifice a word of space in MDNode to cache the full hash value, reducing
compares on collision to a minimum. MDNode grows from 28 to 32 bytes + operands
on x86. On x86_64 the new bits fit nicely into existing padding, not growing
the struct at all.
The actual speedup depends a lot on the test case and is typically between
1% and 2% for C++ code with clang -c -O0 -g.
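A minimal sketch of the idea in plain C++ (illustrative only, not the actual
FoldingSet/MDNode code; all names are made up):

    #include <cstdint>
    #include <vector>

    struct Node {
      uint64_t CachedHash;           // the extra word spent per node
      std::vector<void *> Operands;  // stands in for the MDNode operands
    };

    static bool sameOperands(const Node &A, const Node &B) {
      return A.Operands == B.Operands;  // the expensive deep comparison
    }

    // Walk one hash bucket looking for an existing equivalent node.
    Node *findInBucket(std::vector<Node *> &Bucket, const Node &New,
                       uint64_t NewHash) {
      for (Node *N : Bucket) {
        if (N->CachedHash != NewHash)  // cheap reject: hashes differ
          continue;
        if (sameOperands(*N, New))     // only now pay for the full compare
          return N;
      }
      return nullptr;
    }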
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154497 91177308-0d34-0410-b5e6-96231b3b80d8
binary and assembly. Patch by Carlo Kok. The emitting code was inspired by,
but not based on, the D LLVM bindings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154493 91177308-0d34-0410-b5e6-96231b3b80d8
StringMap. This was redundant and unnecessarily bloated the MDString class.
Because the MDString class is a "Value" and will never have a "name", and
because the Name field in the Value class is a pointer to a StringMap entry, we
repurpose the Name field for MDString: it stores the StringMap entry in the
Name field and uses the normal methods to get the string (the "name") back.
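Roughly the shape of the trick, as a generic sketch (not the real
Value/MDString code; the type below is hypothetical):

    #include "llvm/ADT/StringMap.h"
    #include "llvm/ADT/StringRef.h"

    // A "value" that never has a name, so its name slot is free to point
    // at the uniquing StringMap entry instead of a separate string copy.
    struct FakeMDString {
      llvm::StringMapEntry<FakeMDString *> *Entry;

      llvm::StringRef getString() const { return Entry->getKey(); }
    };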
PR12474
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154429 91177308-0d34-0410-b5e6-96231b3b80d8
Take this opportunity to generalize the indirectbr bailout logic for
loop transformations. CFG transformations will never get indirectbr
right, and there's no point trying.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154386 91177308-0d34-0410-b5e6-96231b3b80d8
legalizer always use the DAG entry node. This is wrong when the libcall is
emitted as a tail call since it effectively folds the return node. If
the return node's input chain is not the entry (i.e., it comes from a call,
load, or store), use that as the tail call's input chain.
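A hedged sketch of the chain selection (the helper is made up; this is not
the actual legalizer code):

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    static SDValue pickLibcallInputChain(SelectionDAG &DAG, SDNode *Ret) {
      SDValue RetChain = Ret->getOperand(0); // chain operand of the return
      if (RetChain.getNode() != DAG.getEntryNode().getNode())
        return RetChain;         // sequence the libcall after the call/load/store
      return DAG.getEntryNode(); // nothing interesting precedes the return
    }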
PR12419
rdar://9770785
rdar://11195178
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154370 91177308-0d34-0410-b5e6-96231b3b80d8
A couple of cases where we were accidentally creating constant conditions by
something like "x == a || b" instead of "x == a || x == b". In one case a
conditional followed by an unreachable was used; I transformed this into a
direct assert instead.
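The shape of the bug, for illustration (names and values are made up):

    // Buggy: 'b' is implicitly converted to bool, so the condition is
    // really (x == a) || (b != 0), which is often a constant.
    //   return x == a || b;
    bool isSpecial(int x, int a, int b) {
      return x == a || x == b;  // intended comparison
    }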
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154324 91177308-0d34-0410-b5e6-96231b3b80d8
optimizations which are valid for position independent code being linked
into a single executable, but not for such code being linked into
a shared library.
I discussed the design of this with Eric Christopher, and the decision
was to support an optional bit rather than a completely separate
relocation model. Fundamentally, this is still PIC relocation, it's just
that certain optimizations are only valid under a PIC relocation model
when the resulting code won't be in a shared library. The simplest path
to here is to expose a single bit option in the TargetOptions. If folks
have different/better designs, I'm all ears. =]
I've included the first optimization based upon this: changing TLS
models to the *Exec models when PIE is enabled. This is the LLVM
component of PR12380 and is all of the hard work.
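A hedged sketch of the TLS-model tightening (the enum and option names are
illustrative, not necessarily the exact LLVM spellings):

    enum class TLSModel { GeneralDynamic, LocalDynamic, InitialExec, LocalExec };

    TLSModel adjustForPIE(TLSModel M, bool PositionIndependentExecutable) {
      if (!PositionIndependentExecutable)
        return M;
      // In a PIE, the executable's TLS block is allocated at load time, so
      // the dynamic models can be strengthened to the *Exec ones.
      switch (M) {
      case TLSModel::GeneralDynamic: return TLSModel::InitialExec;
      case TLSModel::LocalDynamic:   return TLSModel::LocalExec;
      default:                       return M;
      }
    }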
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154294 91177308-0d34-0410-b5e6-96231b3b80d8
in TargetLowering. There was already a FIXME about this location being
odd. The interface is simplified as a consequence. This will also make
it easier to change TLS models when compiling with PIE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154292 91177308-0d34-0410-b5e6-96231b3b80d8
optimizers could do this for us, but expecting partial SROA of classes
with template methods through cloning is probably expecting too much
heroics. With this change, the begin/end pointer pairs which indicate
the status of each loop iteration are actually passed directly into each
layer of the combine_data calls, and the inliner has a chance to see
when most of the combine_data function could be deleted by inlining.
Similarly for 'length'.
We have to be careful to limit the places where in/out reference
parameters are used, as those will also prevent the inliner / optimizers
from properly propagating constants.
With this change, LLVM is able to fully inline and unroll the hash
computation of small sets of values, such as two or three pointers.
These now decompose into essentially straight-line code with no loops or
function calls.
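An illustrative contrast in plain C++ (not the actual Hashing.h code) of why
by-value state is friendlier to the inliner than in/out references:

    #include <cstdint>

    // Harder for the optimizer: the begin pointer and hash live behind
    // references and are mutated at every layer, hiding the trip count.
    void combineByRef(const uint64_t *&B, const uint64_t *E, uint64_t &Hash) {
      while (B != E)
        Hash = Hash * 31 + *B++;
    }

    // Friendlier: each layer receives and returns the state directly, so
    // after inlining a hash of two or three values the loop unrolls away.
    inline uint64_t combineByValue(const uint64_t *B, const uint64_t *E,
                                   uint64_t Hash) {
      for (; B != E; ++B)
        Hash = Hash * 31 + *B;
      return Hash;
    }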
There is still one code quality problem to be solved with the hashing --
LLVM is failing to nuke the alloca. It removes all loads from the
alloca, leaving only lifetime intrinsics and dead(!!) stores to the
alloca. =/ Very unfortunate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154264 91177308-0d34-0410-b5e6-96231b3b80d8
by default.
This is a behaviour configurable in the MCAsmInfo. I've decided to turn
it on by default in (possibly optimistic) hopes that most assemblers are
reasonably sane. If this proves a problem, switching the default back off
seems reasonable.
I'm not sure if this is the opportune place to test, but it seemed good
to make sure it was tested somewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154235 91177308-0d34-0410-b5e6-96231b3b80d8
The empty 1-argument operator delete is for the benefit of the
destructor. A couple of spot checks of running yaml-bench under
valgrind against a few of the files under test/YAMLParser did
not reveal any leaks introduced by this change.
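A generic sketch of why the empty one-argument operator delete is needed
(stand-in types, not the YAML parser's actual classes):

    #include <cstddef>

    struct Arena {                    // stand-in for a bump-pointer allocator
      char Buf[4096];
      size_t Used = 0;
      void *allocate(size_t N) { void *P = Buf + Used; Used += N; return P; }
    };

    struct Node {
      void *operator new(size_t Size, Arena &A) { return A.allocate(Size); }
      // Matching placement delete, used only if a constructor throws.
      void operator delete(void *, Arena &) {}
      // Empty usual delete: the arena owns the memory, so there is nothing
      // to free, but the virtual destructor's deleting variant needs this
      // overload to exist.
      void operator delete(void *) {}
      virtual ~Node() {}
    };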
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154137 91177308-0d34-0410-b5e6-96231b3b80d8
of the BBVectorizePass without using command line options. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorization with architecture-specific
information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154096 91177308-0d34-0410-b5e6-96231b3b80d8
BasicBlock in other passes, e.g. we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.
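For example, a pass could do something like this (illustrative; the exact
signatures and include paths are assumptions about the interface):

    #include "llvm/BasicBlock.h"
    #include "llvm/Pass.h"
    #include "llvm/Transforms/Vectorize.h"
    using namespace llvm;

    static bool tryVectorizeUnrolledBody(Pass *P, BasicBlock &BB) {
      VectorizeConfig C;    // defaults; a target could supply its own config
      return vectorizeBasicBlock(P, BB, C);
    }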
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154089 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
still know about all the bits if SimplifyDemandedBits fails. This lets
instcombine simplify cases like the one in the included testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
The colorability heuristic should count these as denied registers.
No test case - this exposed a bug on an out-of-tree target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153958 91177308-0d34-0410-b5e6-96231b3b80d8
brace) so that we get more accurate line number information about the
declaration of a given function and the line where the function
first starts.
Part of rdar://11026482
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153916 91177308-0d34-0410-b5e6-96231b3b80d8
This patch allows LLVM to recognize that a 64-bit object file is being
produced and ensures that the subsequently generated ELF header carries the
correct information.
The test case checks both the big- and little-endian flavors.
Patch by Jack Carter.
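Roughly what the test looks for, expressed with the generic ELF ident
constants (not any particular target's emitter):

    #include "llvm/Support/ELF.h"
    using namespace llvm::ELF;

    static bool looksLike64BitHeader(const unsigned char Ident[EI_NIDENT],
                                     bool LittleEndian) {
      return Ident[EI_CLASS] == ELFCLASS64 &&
             Ident[EI_DATA] == (LittleEndian ? ELFDATA2LSB : ELFDATA2MSB);
    }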
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153889 91177308-0d34-0410-b5e6-96231b3b80d8
rather than a bitfield, a great suggestion by Chris during code review.
There is still quite a bit of cruft in the interface, but that requires
sorting out some awkward uses of the cost inside the actual inliner.
No functionality change intended here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153853 91177308-0d34-0410-b5e6-96231b3b80d8
This also avoids emitting the information twice, which led to code bloat. On i386-linux-Release+Asserts
with all targets built this change shaves a whopping 1.3 MB off clang. The number is probably exaggerated
by recent inliner changes but the methods were already enormous with the old inline cost computation.
The DWARF reg -> LLVM reg mapping doesn't seem to have holes in it, so it could be a simple lookup table.
I didn't implement that optimization yet to avoid potentially changing functionality.
There is still some duplication both in tablegen and the generated code that should be cleaned up eventually.
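A hedged sketch of what that (not yet implemented) table-based lookup might
look like; the numbers are made up:

    #include <cstdint>

    static const int16_t DwarfToLLVMReg[] = {
      /* DWARF 0 */ 5,
      /* DWARF 1 */ 7,
      /* DWARF 2 */ 3,
      // ... one entry per DWARF register number, -1 for "no mapping"
    };

    static int lookupLLVMReg(unsigned DwarfReg) {
      if (DwarfReg >= sizeof(DwarfToLLVMReg) / sizeof(DwarfToLLVMReg[0]))
        return -1;
      return DwarfToLLVMReg[DwarfReg];
    }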
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153837 91177308-0d34-0410-b5e6-96231b3b80d8
This is the CodeGen equivalent of r153747. I tested that there is no
noticeable performance difference with any combination of -O0/-O2/-g when
compiling
gcc as a single compilation unit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153817 91177308-0d34-0410-b5e6-96231b3b80d8