This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc.).
So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
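As a rough sketch of how this is meant to be consumed from C/C++ (illustrative
only, not part of the patch), the GNU named register variable extension maps
onto the new intrinsic; the register name "sp" below is an assumption for an
AArch64/ARM-style target, and other targets would use their own
non-allocatable register names:

// Hedged sketch: a global named register variable that the frontend can lower
// to the new read-register intrinsic. "sp" is an assumed register name here;
// any non-allocatable register of the target could be named instead.
register unsigned long current_sp __asm__("sp");

unsigned long read_stack_pointer() {
  // Reads the stack pointer directly, without taking the address of a local.
  return current_sp;
}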
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
An alias has the address of what it points to, so it also has the same
alignment.
This allows a few optimizations to see past aliases for free.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208103 91177308-0d34-0410-b5e6-96231b3b80d8
Also, provide the ability to create temporary and non-temporary
declarations, as not all declarations may be replaced by definitions
later on.
This provides the necessary infrastructure for Clang to fix PR19598,
the leaking of temporary MDNodes in Clang's debug info generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208054 91177308-0d34-0410-b5e6-96231b3b80d8
The number of tail-call-to-loop conversions remains the same (1618 by my count).
The new algorithm does a local scan over the use-def chains to identify local "alloca-derived" values, as well as the points where those allocas could escape. Then a visit over the CFG marks blocks as being before or after the allocas have escaped, and annotates the calls accordingly.
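As a purely illustrative C++ example (assumed, not taken from this commit) of
what the escape tracking buys us:

// Illustrative only: "scratch" becomes an alloca whose address escapes solely
// through use_buffer() on the n == 0 path. Blocks not reached after that
// escape point never expose the alloca, so the recursive call can still be
// marked tail and converted into a loop.
extern void use_buffer(char *p);

long sum(long n, long acc) {
  char scratch[16];
  if (n == 0) {
    use_buffer(scratch);        // the alloca escapes here
    return acc;
  }
  return sum(n - 1, acc + n);   // no escape reaches this call
}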
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208017 91177308-0d34-0410-b5e6-96231b3b80d8
operations on the call graph. This one forms a cycle, and while not as
complex as removing an internal edge from an SCC, it involves
a reasonable amount of work to find all of the nodes newly connected in
a cycle.
Also somewhat alarming is the worst-case complexity here: it might have
to walk roughly the entire SCC inverse DAG to insert a single edge. This
is carefully documented in the API (I hope).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207935 91177308-0d34-0410-b5e6-96231b3b80d8
The fix itself is fairly simple: move getAccessVariant to MCValue so that we
replace the old weak expression evaluation with the far more general
EvaluateAsRelocatable.
This then requires that EvaluateAsRelocatable stop when it finds a non-trivial
reference kind. And that in turn requires the ELF writer to look
harder for weak references.
Last but not least, this found a case where we were being bug-for-bug
compatible with gas and accepting invalid input. I reported PR19647
to track it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207920 91177308-0d34-0410-b5e6-96231b3b80d8
Committed initially in r207724-r207726 and reverted due to compiler-rt
crashes in r207732.
Instead, fix this harder with unordered_map and store the LexicalScopes
by value in the map. This did necessitate moving the definition of
LexicalScope above the definition of LexicalScopes.
Let's see how the buildbots/compilers tolerate unordered_map::emplace +
std::piecewise_construct + std::forward_as_tuple...
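For reference, a standalone sketch of that combination (illustrative, not the
LexicalScopes code); the Scope type here is made up to stand in for a mapped
value we want constructed in place:

#include <string>
#include <tuple>
#include <unordered_map>

struct Scope {
  Scope(Scope *Parent, unsigned Depth) : Parent(Parent), Depth(Depth) {}
  Scope *Parent;
  unsigned Depth;
};

int main() {
  std::unordered_map<std::string, Scope> Scopes;
  // Constructs the mapped Scope in place from (nullptr, 0u); no temporary
  // Scope is created, so the type need not be copyable or movable.
  Scopes.emplace(std::piecewise_construct,
                 std::forward_as_tuple("entry"),
                 std::forward_as_tuple(nullptr, 0u));
  return Scopes.at("entry").Depth;
}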
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207876 91177308-0d34-0410-b5e6-96231b3b80d8
This moves most of GlobalOpt's constructor optimization
code out of GlobalOpt into Transforms/Utils/CDtorUtils.{h,cpp}. The
public interface is a single function, OptimizeGlobalCtorsList(), that
takes a predicate selecting which constructors to remove.
GlobalOpt calls this with a function that statically evaluates all
constructors, just like it did before. This part of the change is
behavior-preserving.
Also add a call to this from GlobalDCE with a filter that removes global
constructors that contain a "ret" instruction and nothing else; this
fixes PR19590.
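A hedged sketch of the caller side follows; the exact OptimizeGlobalCtorsList
signature (a Module plus a predicate over Function*) is an assumption here,
and isEmptyExceptRet() is a hypothetical helper standing in for GlobalDCE's
"only a ret" filter:

// Hedged sketch, not the actual GlobalDCE code; the utility's signature and
// header path are assumed from the description above.
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/CDtorUtils.h"

static bool isEmptyExceptRet(llvm::Function *F) {
  // A constructor we can drop has a single block holding a lone "ret".
  if (F->size() != 1)
    return false;
  const llvm::BasicBlock &Entry = F->front();
  return Entry.size() == 1 && llvm::isa<llvm::ReturnInst>(Entry.front());
}

bool removeTrivialCtors(llvm::Module &M) {
  return llvm::OptimizeGlobalCtorsList(M, isEmptyExceptRet);
}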
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207856 91177308-0d34-0410-b5e6-96231b3b80d8
This optimization merges the common part of a group of GEPs, so we can compute
each pointer address by adding a simple offset to the common part.
The optimization is currently only enabled for the NVPTX backend, where it has
a large payoff on some benchmarks.
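As an illustrative source-level shape of what benefits (assumed example, not
from the patch): two accesses to the same row produce GEPs that share
everything but a constant offset, so the common part can be computed once and
each address formed by a small addition:

// Illustrative only: both loads index the same row of "table", so their GEPs
// share a common part (&table[i][j]) and differ by one constant element.
extern float table[128][32];

float sum_pair(int i, int j) {
  return table[i][j] + table[i][j + 1];
}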
Review: http://reviews.llvm.org/D3462
Patch by Jingyue Wu.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207783 91177308-0d34-0410-b5e6-96231b3b80d8
just connects an SCC to one of its descendants directly. Not much of an
impact. The last one is the hard one -- connecting an SCC to one of its
ancestors, and thereby forming a cycle such that we have to merge all
the SCCs participating in the cycle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207751 91177308-0d34-0410-b5e6-96231b3b80d8
of SCCs in the SCC DAG. Exercise them in the big graph test case. These
will be especially useful for establishing invariants in insertion
logic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207749 91177308-0d34-0410-b5e6-96231b3b80d8
removal. We can't just blindly increment (or decrement) the adapted
iterator when the value is null because doing so can walk past the end
(or beginning) and keep inspecting the value. The fix I've implemented
is to restrict this further to a forward iterator, add an end
iterator to the members (replacing a member that had become dead when
I switched to the adaptor base!), and use that end iterator to stop the iteration.
I'm not entirely pleased with this solution. I feel like forward
iteration is too restrictive. I wasn't even happy about bidirectional
iteration. It also makes the iterator objects larger and the iteration
loops more complex. However, I also don't really like the other
alternative that seems obvious: a sentinel node. I'm still hoping to
come up with a more elegant solution here, but this at least fixes the
MSan and Valgrind errors on this code.
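For the record, a standalone sketch of the resulting shape (illustrative, not
the LazyCallGraph code): a forward iterator over pointers that skips null
entries and carries its own end, so advancing never walks off the underlying
range:

#include <cassert>
#include <vector>

class NonNullIter {
  using Inner = std::vector<int *>::const_iterator;
  Inner I, End;

  // Advance I to the next non-null entry, stopping at End rather than
  // inspecting anything past it.
  void skipNulls() {
    while (I != End && *I == nullptr)
      ++I;
  }

public:
  NonNullIter(Inner I, Inner End) : I(I), End(End) { skipNulls(); }

  int *operator*() const { return *I; }
  NonNullIter &operator++() {
    ++I;
    skipNulls();
    return *this;
  }
  bool operator!=(const NonNullIter &RHS) const { return I != RHS.I; }
};

int main() {
  int A = 1, B = 2;
  std::vector<int *> V = {nullptr, &A, nullptr, &B, nullptr};
  int Sum = 0;
  for (NonNullIter It(V.begin(), V.end()), E(V.end(), V.end()); It != E; ++It)
    Sum += **It;
  assert(Sum == 3);
  return 0;
}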
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207743 91177308-0d34-0410-b5e6-96231b3b80d8
This simplifies ELFObjectWriter::SymbolValue a bit more. This new version
will also be used in the COFF writer to fix PR19147.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207711 91177308-0d34-0410-b5e6-96231b3b80d8
The DAG combiner may convert a pattern like ((x >> C1) & Mask) << C2
into (x >> (C1-C2)) & (Mask << C2), which makes it more difficult to
pattern-match ubfx.
For example:
Given
%shr = lshr i64 %x, 4
%and = and i64 %shr, 15
%arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
%0 = load i64* %arrayidx
With the current shift folding, it takes 3 instructions to compute the base address:
lsr x8, x0, #1
and x8, x8, #0x78
add x8, x9, x8
Using ubfx instead, it needs only 2 instructions:
ubfx x8, x0, #4, #4
add x8, x9, x8, lsl #3
This fixes bug 19589.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207702 91177308-0d34-0410-b5e6-96231b3b80d8
We already do this for shstrtab, so might as well do it for strtab. This
extracts the string table building code into a separate class. The idea
is to use it for other object formats too.
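A minimal standalone sketch of the idea (illustrative names; the real class
also tail-merges strings that are suffixes of longer ones, which is where the
size win below comes from):

#include <cstddef>
#include <map>
#include <string>

// Illustrative only: strings are appended once, NUL-terminated, and callers
// get back stable offsets to emit into symbol tables or section headers,
// regardless of the object format being written.
class StringTableSketch {
  std::string Data = std::string(1, '\0'); // offset 0 is the empty string
  std::map<std::string, std::size_t> Offsets;

public:
  std::size_t add(const std::string &S) {
    auto It = Offsets.find(S);
    if (It != Offsets.end())
      return It->second;
    std::size_t Off = Data.size();
    Data += S;
    Data.push_back('\0');
    Offsets.emplace(S, Off);
    return Off;
  }

  const std::string &data() const { return Data; }
};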
I mostly wanted to do this for the general principle, but it does save a
little bit on object file size. I tried this on a clang bootstrap and
saved 0.54% on the sum of object file sizes (1.14 MB out of 212 MB for
a release build).
Differential Revision: http://reviews.llvm.org/D3533
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207670 91177308-0d34-0410-b5e6-96231b3b80d8
When move-assigning from a larger vector into a smaller one that didn't
need to re-allocate, we would move-assign over uninitialized memory in
the target and then move-construct that same data again.
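A standalone sketch of the corrected pattern (illustrative, not the
SmallVector code itself): only the destination's already-constructed prefix is
move-assigned, and the tail is move-constructed into raw memory exactly once:

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <memory>
#include <vector>

// Illustrative only: Dst points at storage with capacity for Src.size()
// elements, of which only DstSize are currently constructed.
template <typename T>
void moveAssignInto(T *Dst, std::size_t DstSize, std::vector<T> &Src) {
  std::size_t Common = std::min(DstSize, Src.size());
  // Move-assign over the elements that already exist in the destination.
  std::move(Src.begin(), Src.begin() + Common, Dst);
  // Move-construct the remainder into uninitialized memory; assigning here
  // would invoke operator= on objects that were never constructed.
  std::uninitialized_copy(std::make_move_iterator(Src.begin() + Common),
                          std::make_move_iterator(Src.end()),
                          Dst + Common);
}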
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207663 91177308-0d34-0410-b5e6-96231b3b80d8
edge entirely within an existing SCC. Shockingly, making the connected
component more connected is ... a total snooze fest. =]
Anyways, it's wired up, and I even added a test case to make sure it
pretty much sorta works. =D
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207631 91177308-0d34-0410-b5e6-96231b3b80d8