The EH_frame and .eh symbols are now private, except for darwin9 and earlier.
The patch also fixes the definition of PrivateGlobalPrefix on ppc linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61242 91177308-0d34-0410-b5e6-96231b3b80d8
The problematic part of this patch is that we were out of attribute bits,
requiring some fancy bit hacking to make it fit (by shrinking alignment)
without breaking existing users or the file format.
This change will require users to rebuild llvm-gcc to match llvm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61239 91177308-0d34-0410-b5e6-96231b3b80d8
argument. Nothing was using it, and it set the MBB member without
calling enterBasicBlock, which was problematic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61234 91177308-0d34-0410-b5e6-96231b3b80d8
The first step to resolve this is to record the file name and directory directly in the debug info for various debug entities.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61164 91177308-0d34-0410-b5e6-96231b3b80d8
which source/line a certain BB/instruction comes from, original variable names,
and the original (unmangled) C++ names of functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61085 91177308-0d34-0410-b5e6-96231b3b80d8
computation code. Also, avoid adding output-dependency edges when both
defs are dead, which frequently happens with EFLAGS defs.
Compute Depth and Height lazily, and always in terms of edge latency
values. For the schedulers that don't care about latency, edge latencies
are set to 1.
Eliminate Cycle and CycleBound, and LatencyPriorityQueue's Latencies array.
These are all subsumed by the Depth and Height fields.
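For illustration, here is a minimal C++ sketch (toy types, not the real
ScheduleDAG interfaces) of what computing Depth in terms of edge latency
means: the depth of a node is the longest latency-weighted path from any
root of the DAG, computed lazily and memoized; Height is the mirror image,
measured toward the leaves through successor edges.

  #include <algorithm>
  #include <vector>

  struct Node;
  struct Edge { Node *Pred; unsigned Latency; }; // 1 for latency-unaware schedulers
  struct Node {
    std::vector<Edge> Preds;
    int Depth = -1;                              // -1 means "not computed yet"
  };

  // Longest latency-weighted path from a DAG root to N, computed on demand.
  unsigned computeDepth(Node *N) {
    if (N->Depth >= 0)
      return unsigned(N->Depth);                 // reuse the memoized value
    unsigned D = 0;
    for (const Edge &E : N->Preds)
      D = std::max(D, computeDepth(E.Pred) + E.Latency);
    N->Depth = int(D);
    return D;
  }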
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61073 91177308-0d34-0410-b5e6-96231b3b80d8
alignment attribute such that 0 means unaligned.
This will probably require a rebuild of llvm-gcc because of the change to
Attributes.h. If you see many test failures on "make check", please rebuild
your llvm-gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61030 91177308-0d34-0410-b5e6-96231b3b80d8
memdep keeps track of how PHIs affect the pointer in dep queries, which
allows it to eliminate the load in cases like rle-phi-translate.ll, which
basically end up being:
BB1:
  X = load P
  br BB3
BB2:
  Y = load Q
  br BB3
BB3:
  R = phi [P] [Q]
  load R
turning "load R" into a phi of X/Y. In addition to additional exposed
opportunities, this makes memdep safe in many cases that it wasn't before
(which is required for load PRE) and also makes it substantially more
efficient. For example, consider:
bb1:  // has many predecessors.
  P = some_operator()
  load P
In this example, previously memdep would scan all the predecessors of bb1
to see if they had something that would mustalias P. In some cases (e.g.
test/Transforms/GVN/rle-must-alias.ll) it would actually find them and end
up eliminating something. In many other cases though, it would scan and not
find anything useful. MemDep now stops at a block if the pointer is defined
in that block and cannot be phi translated to predecessors. This causes it
to miss the (rare) cases like rle-must-alias.ll, but makes it faster by not
scanning tons of stuff that is unlikely to be useful. For example, this
speeds up GVN as a whole from 3.928s to 2.448s (60%)! IMO, scalar GVN
should be enhanced to simplify the rle-must-alias pointer base anyway, which
would allow the loads to be eliminated.
In the future, this should be enhanced to phi translate through geps and
bitcasts as well (as indicated by FIXMEs) making memdep even more powerful.
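To make the cutoff above concrete, here is a hypothetical sketch using toy
types rather than the real memdep interfaces: if the queried pointer is
defined in the current block and there is no phi translation for it, then
scanning the predecessors cannot find anything that must-aliases it, so the
walk stops there.

  #include <vector>

  struct Block;
  struct Pointer { Block *DefiningBlock = nullptr; }; // block defining the pointer

  struct Block {
    std::vector<Block *> Preds;
    // No translation through geps/bitcasts yet (the FIXMEs mentioned above).
    bool hasPhiTranslation(const Pointer &) const { return false; }
  };

  // Should memdep keep scanning BB's predecessors for dependencies of a
  // load/store through Ptr?
  bool shouldScanPreds(const Block *BB, const Pointer &Ptr) {
    if (Ptr.DefiningBlock == BB && !BB->hasPhiTranslation(Ptr))
      return false;  // pointer is local to BB: stop instead of scanning preds
    return true;
  }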
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61022 91177308-0d34-0410-b5e6-96231b3b80d8
callee will not introduce any new aliases of that pointer.
The attributes had all bits allocated already, so I decided to collapse
alignment. Alignment was previously stored as a 16-bit integer from bits 16 to
32 of the attribute, but it was required to be a power of 2. Now it's stored in
log2 encoded form in five bits from 16 to 21. That gives us 11 more bits of
space.
You may have already noticed that you only need four bits to encode a 16-bit
power of two, so why five bits? Because the AsmParser accepted 32-bit
alignments, even though we couldn't store them (they were silently discarded).
Now we can store them in memory, but not in the bitcode.
The bitcode format was already storing these as 64-bit VBR integers. So, the
bitcode format stays the same, keeping the alignment values stored as 16 bit
raw values. There's some hideous code in the reader and writer that deals with
this, waiting to be ripped out the moment we run out of bits again and have to
replace the parameter attributes table encoding.
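As a standalone illustration of the log2 trick, here is a sketch of packing
a power-of-two alignment into a 5-bit field at bit 16. The "+1 so that 0 can
mean unaligned" convention is an assumption made for this example, not read
off the real Attributes code.

  #include <cassert>
  #include <cstdint>

  uint64_t encodeAlignment(uint64_t Attrs, uint64_t Align) {
    assert(Align != 0 && (Align & (Align - 1)) == 0 && "must be a power of two");
    unsigned Log2 = 0;
    while ((uint64_t(1) << Log2) != Align)
      ++Log2;
    assert(Log2 + 1 <= 0x1F && "does not fit in the 5-bit field");
    return Attrs | (uint64_t(Log2 + 1) << 16);   // 0 in the field = no alignment
  }

  uint64_t decodeAlignment(uint64_t Attrs) {
    uint64_t Field = (Attrs >> 16) & 0x1F;       // the 5-bit field
    return Field ? uint64_t(1) << (Field - 1) : 0;
  }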
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61019 91177308-0d34-0410-b5e6-96231b3b80d8
This stuff is not used outside Base.td, and with the conversion of the
compilation graph to a string-based format it became much less useful
(if it is useful at all).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60873 91177308-0d34-0410-b5e6-96231b3b80d8
node latencies. Use CalcLatency instead of manual code in
CalculatePriorities to keep it consistent. Previously it
computed slightly different results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60817 91177308-0d34-0410-b5e6-96231b3b80d8
- Emit DW_AT_byte_size for structs and unions of size zero.
- Emit DW_AT_declaration for forward type declarations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60812 91177308-0d34-0410-b5e6-96231b3b80d8
The Cost field is removed. It was only being used in a very limited way,
to indicate when the scheduler should attempt to protect a live register,
and it isn't really needed to do that. If we ever want the scheduler to
start inserting copies in non-prohibitive situations, we'll have to
rethink some things anyway.
A Latency field is added. Instead of giving each node a single
fixed latency, each edge can have its own latency. This will eventually
be used to model various micro-architecture properties more accurately.
The PointerIntPair class and an internal union are now used, reducing
the overall size.
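The space saving is the usual low-bits trick; a minimal hand-rolled sketch
of the idea behind PointerIntPair (the real class in llvm/ADT/PointerIntPair.h
is more general) looks like this:

  #include <cassert>
  #include <cstdint>

  // Because T objects are aligned, the low bits of a T* are always zero and
  // can carry a small integer, so pointer + int share one pointer-sized word.
  template <typename T, unsigned IntBits>
  class TinyPointerIntPair {
    static_assert(alignof(T) >= (1u << IntBits), "not enough spare low bits");
    static constexpr uintptr_t IntMask = (uintptr_t(1) << IntBits) - 1;
    uintptr_t Value = 0;

  public:
    void setPointer(T *P) {
      uintptr_t Ptr = reinterpret_cast<uintptr_t>(P);
      assert((Ptr & IntMask) == 0 && "pointer not sufficiently aligned");
      Value = Ptr | (Value & IntMask);
    }
    void setInt(unsigned I) {
      assert(I <= IntMask && "integer too large for the spare bits");
      Value = (Value & ~IntMask) | I;
    }
    T *getPointer() const { return reinterpret_cast<T *>(Value & ~IntMask); }
    unsigned getInt() const { return unsigned(Value & IntMask); }
  };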
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60806 91177308-0d34-0410-b5e6-96231b3b80d8
target-independent way of determining overflow on multiplication. It's very
tricky. Patch by Zoltan Varga!
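To make the difficulty concrete: when no wider integer type is available, a
portable check for unsigned multiplication overflow needs a divide per
multiply, as in this generic sketch (not the code from this patch; the signed
case is messier still).

  #include <cstdint>

  // Returns true if A * B does not fit in 32 bits.  Unsigned wraparound is
  // well defined, so compute the product first and then check it.
  bool umul_overflows(uint32_t A, uint32_t B, uint32_t &Product) {
    Product = A * B;
    return A != 0 && Product / A != B;  // if it wrapped, dividing back disagrees
  }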
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60800 91177308-0d34-0410-b5e6-96231b3b80d8
of a pointer. This allows us to catch more equivalencies. For example,
the type_lists_compatible_p function used to require two iterations of
the gvn pass (!) to delete its 18 redundant loads because the first pass
would CSE all the addressing computation cruft, which would unblock the
second memdep/gvn passes from recognizing them. This change allows
memdep/gvn to catch all 18 when run just once on the function (as is
typical :) instead of just 3.
On all of 403.gcc, this bumps up the # of redundancies found from:
63 gvn - Number of instructions PRE'd
153991 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
to:
63 gvn - Number of instructions PRE'd
154137 gvn - Number of instructions deleted
50185 gvn - Number of loads deleted
+116 loads deleted isn't bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60799 91177308-0d34-0410-b5e6-96231b3b80d8
essential problem was that the DAG can contain
random unused nodes which were never analyzed.
When remapping a value of a node being processed,
such a node may become used and need to be analyzed;
however due to operands being transformed during
analysis the node may morph into a different one.
Users of the morphing node need to be updated, and
this wasn't happening. While there I added a bunch
of documentation and sanity checks, so I (or some
other poor soul) won't have to spend so long scratching their head
over this stuff trying to remember how it was all supposed to work
the next time some obscure problem pops up! The extra sanity checking exposed
a few places where invariants weren't being preserved,
so those are fixed too. Since some of the sanity
checking is expensive, I added a flag to turn it
on. It is also turned on when building with
ENABLE_EXPENSIVE_CHECKS=1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60797 91177308-0d34-0410-b5e6-96231b3b80d8
tricks based on readnone/readonly functions.
Teach memdep to look past readonly calls when analyzing
deps for a readonly call. This allows elimination of a
few more calls from 403.gcc:
before:
63 gvn - Number of instructions PRE'd
153986 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
after:
63 gvn - Number of instructions PRE'd
153991 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
5 calls isn't much, but this adds plumbing for the next change.
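As a hypothetical sketch of that rule with toy types (not the real memdep
interfaces): when the query is a readonly call, another readonly call cannot
clobber the memory it reads, so the backwards scan looks straight past it,
stopping only for a matching call (a CSE candidate) or one that may write.

  #include <vector>

  struct Inst {
    bool IsCall = false;
    bool IsReadOnly = false;  // readnone/readonly: the call does not write memory
    int CallSignature = 0;    // stand-in for "same callee and arguments"
  };

  // Scan backwards from a readonly query call; return the first instruction
  // that matters to it, or nullptr if nothing earlier in the block does.
  const Inst *findDepForReadOnlyCall(const std::vector<Inst> &Block,
                                     size_t QueryIdx) {
    const Inst &Query = Block[QueryIdx];
    for (size_t i = QueryIdx; i-- > 0;) {
      const Inst &I = Block[i];
      if (I.IsCall && I.IsReadOnly) {
        if (I.CallSignature == Query.CallSignature)
          return &I;            // identical readonly call: a CSE candidate
        continue;               // different readonly call can't clobber: skip it
      }
      if (I.IsCall)
        return &I;              // a call that may write memory could clobber it
      // (Stores would be checked against the callee with alias analysis here.)
    }
    return nullptr;
  }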
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60794 91177308-0d34-0410-b5e6-96231b3b80d8
really simple cache class for these queries. Hopefully this can
be removed if pred_iterator speeds back up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60742 91177308-0d34-0410-b5e6-96231b3b80d8
and use it in x86 address mode folding. Also, make
getRegForValue return 0 for illegal types even if it has a
ValueMap for them, because Argument values are put in the
ValueMap. This fixes PR3181.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60696 91177308-0d34-0410-b5e6-96231b3b80d8
track of whether the CachedNonLocalPointerInfo for a block is specific
to that block. If so, just return it without any pred scanning. This is
good for a 6% speedup on GVN (when it uses this lookup method, which
it doesn't right now).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60695 91177308-0d34-0410-b5e6-96231b3b80d8
AND. This is a speedup on any reasonable target, but particularly
on 32-bit targets where this often turns into a libcall like udivdi3.
We know that alignments are a power of two but the compiler doesn't.
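The transformation itself is the familiar power-of-two strength reduction;
a generic sketch of the equivalent check (not the patch itself):

  #include <cassert>
  #include <cstdint>

  // For a power-of-two Align, "is X a multiple of Align?" needs no divide or
  // remainder (and hence no udivdi3-style libcall on 32-bit targets): the low
  // log2(Align) bits of X just have to be zero.
  bool isAligned(uint64_t X, uint64_t Align) {
    assert(Align != 0 && (Align & (Align - 1)) == 0 && "power of two expected");
    return (X & (Align - 1)) == 0;  // same answer as X % Align == 0
  }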
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60688 91177308-0d34-0410-b5e6-96231b3b80d8
in a really obscure way, but more importantly has the side effect
of avoiding a GCC warning in the case that IntType is bool.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60677 91177308-0d34-0410-b5e6-96231b3b80d8
method. This will eventually take over load/store dep
queries from getNonLocalDependency. For now it works
fine, but is incredibly slow because it does no caching.
Let's not switch GVN to use it until that is fixed :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60649 91177308-0d34-0410-b5e6-96231b3b80d8
duplication of logic (in 2 places) to determine what pointer a
load/store touches. This will be addressed in a future commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60648 91177308-0d34-0410-b5e6-96231b3b80d8
1. Merge the 'None' result into 'Normal', making loads
and stores return their dependencies on allocations as Normal.
2. Split the 'Normal' result into 'Clobber' and 'Def' to
distinguish between the cases when memdep knows the value is
produced from when we just know it may be changed (sketched after this list).
3. Move some of the logic for determining whether readonly calls
are CSEs into memdep instead of it being in GVN. This still
leaves verification that the arguments are the same to GVN to
let it know about value equivalences in different contexts.
4. Change memdep's call/call dependency analysis to use
getModRefInfo(CallSite,CallSite) instead of doing something
very weak. This only really matters for things like DSA, but
someday maybe we'll have some other decent context sensitive
analyses :)
5. This reimplements the guts of memdep to handle the new results.
6. This simplifies GVN significantly:
a) readonly call CSE is slightly simpler
b) I eliminated the "getDependencyFrom" chaining for load
elimination, and load CSE no longer has to worry about
volatile accesses (they are always clobbers).
c) GVN no longer does any 'lastLoad' caching, leaving it to
memdep.
7. The logic in DSE is simplified a bit and sped up. A potentially
unsafe case was eliminated.
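As a hypothetical sketch of the result kinds from point 2 (toy types, not
the real memdep classes): Def means the dependency actually produces or
consumes the queried value, e.g. a store to the very pointer a load reads,
while Clobber only means the memory may have been changed.

  struct Instruction;

  enum class DepKind { NonLocal, Clobber, Def };

  struct DepResult {
    DepKind Kind;
    Instruction *Inst;  // the depended-on instruction for local results

    bool isDef() const { return Kind == DepKind::Def; }          // value is known
    bool isClobber() const { return Kind == DepKind::Clobber; }  // value may differ
  };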
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60607 91177308-0d34-0410-b5e6-96231b3b80d8
heretical from an STL standpoint, but is oh-so-useful for things that
can't throw exceptions when copied, like, well, everything in LLVM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60587 91177308-0d34-0410-b5e6-96231b3b80d8
the frame reference. This will help post-RA scheduling determine
that spills to distinct stack slots are independent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60486 91177308-0d34-0410-b5e6-96231b3b80d8
foldMemoryOperand how to "fold" them, by converting them into constant-pool
loads. When they aren't folded, they use xorps/cmpeqd, but for example when
register pressure is high, they may now be folded as memory operands, which
reduces register pressure.
Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will
remat it instead of copying zeros around (V_SETALLONES was already marked).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60461 91177308-0d34-0410-b5e6-96231b3b80d8
MERGE_VALUES node with only one operand, so get
rid of special code that only existed to handle
that possibility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60349 91177308-0d34-0410-b5e6-96231b3b80d8