accessed at least once as a vector. This prevents it from
compiling the example in not-a-vector into:
define double @test(double %A, double %B) {
  %tmp4 = insertelement <7 x double> undef, double %A, i32 0
  %tmp = insertelement <7 x double> %tmp4, double %B, i32 4
  %tmp2 = extractelement <7 x double> %tmp, i32 4
  ret double %tmp2
}
instead producing the integer code. Producing vectors when they
aren't otherwise in the program is dangerous because a lot of other
code treats them carefully and doesn't want to break them down.
OTOH, many things want to break down tasty i448's.
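For reference, a minimal sketch (hypothetical, not the exact output) of the
integer form meant here, packing the seven doubles into an i448:
define double @test(double %A, double %B) {
  %A.int = bitcast double %A to i64
  %B.int = bitcast double %B to i64
  %A.ext = zext i64 %A.int to i448        ; element 0 lives at bit offset 0
  %B.ext = zext i64 %B.int to i448
  %B.shl = shl i448 %B.ext, 256           ; element 4 lives at bit offset 4*64
  %big = or i448 %A.ext, %B.shl
  %elt4 = lshr i448 %big, 256             ; pull element 4 back out
  %elt4.t = trunc i448 %elt4 to i64
  %res = bitcast i64 %elt4.t to double
  ret double %res
}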
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63638 91177308-0d34-0410-b5e6-96231b3b80d8
With the new world order, it can handle cases where the first
store into the alloca is an element of the vector, instead of
requiring the first analyzed store to have the vector type
itself. This allows us to un-xfail
test/CodeGen/X86/vec_ins_extract.ll.
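The newly handled shape looks roughly like this (a hypothetical reduction,
not the actual test):
define <4 x float> @example(float %X) {
  %A = alloca <4 x float>
  %P = getelementptr <4 x float>* %A, i32 0, i32 0
  store float %X, float* %P               ; first store is a vector *element*
  %V = load <4 x float>* %A               ; the alloca is later used as a vector
  ret <4 x float> %V
}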
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63590 91177308-0d34-0410-b5e6-96231b3b80d8
Don't turn icmp eq a+x, b+x into icmp eq a, b if a+x or b+x has other uses. This
may have been increasing register pressure leading to the bzip2 slowdown.
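The pattern in question, as a hypothetical sketch (@use stands in for any
extra user):
define i1 @pattern(i32 %a, i32 %b, i32 %x) {
  %a.x = add i32 %a, %x
  %b.x = add i32 %b, %x
  call void @use(i32 %a.x)                ; extra use keeps %a.x live
  %c = icmp eq i32 %a.x, %b.x             ; folding this to icmp eq %a, %b
  ret i1 %c                               ; extends the lives of %a and %b too
}
declare void @use(i32)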
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63487 91177308-0d34-0410-b5e6-96231b3b80d8
improvements to the EvaluateInDifferentType code. This code works
by just inserting a bunch of new code and then seeing if it is
useful. Instcombine is not allowed to do this: it can only insert
new code if it is useful, and only when it is converging to a more
canonical fixed point. Now that we iterate when DCE makes progress,
this causes an infinite loop when the code ends up not being used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63483 91177308-0d34-0410-b5e6-96231b3b80d8
SimplifyDemandedBits to simplify instructions with *multiple
uses* in contexts where it can get away with it. This allows
it to simplify the code in multi-use-or.ll into a single 'add
double'.
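The multi-use idea, in a hypothetical sketch: when no user of a value demands
the bits an instruction changes, the instruction can be bypassed even though
it has several uses:
define i8 @example(i32 %x) {
  %set = or i32 %x, 4294967040            ; only touches bits 8..31
  %u1 = trunc i32 %set to i8              ; demands only bits 0..7
  %u2 = trunc i32 %set to i8              ; likewise
  %r = add i8 %u1, %u2                    ; so both truncs can use %x directly
  ret i8 %r
}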
This change is particularly interesting because it will cover
up for some common codegen bugs with large integers created due
to the recent SROA patch. When working on fixing those bugs,
this should be disabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63481 91177308-0d34-0410-b5e6-96231b3b80d8
not doing so prevents it from properly iterating and prevents it
from deleting the entire body of dce-iterate.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63476 91177308-0d34-0410-b5e6-96231b3b80d8
be able to handle *ANY* alloca that is poked by loads and stores of
bitcasts and GEPs with constant offsets. Before, the code had a number
of annoying limitations that caused it to miss cases such as storing into
holes in structs and complex casts (as in bitfield-sroa) where we had
unions of bitfields etc. This also handles a number of important cases
that are exposed due to the ABI lowering stuff we do to pass stuff by
value.
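For example, a hypothetical 'hole in a struct' case of the sort now handled
(little-endian layout assumed):
%struct.S = type { i8, i32 }              ; 3 bytes of padding after the i8
define i32 @pack(i8 %c, i32 %n) {
  %s = alloca %struct.S
  %p0 = getelementptr %struct.S* %s, i32 0, i32 0
  store i8 %c, i8* %p0
  %p1 = getelementptr %struct.S* %s, i32 0, i32 1
  store i32 %n, i32* %p1
  %raw = bitcast %struct.S* %s to i64*
  %v = load i64* %raw                     ; covers both fields and the hole
  %hi = lshr i64 %v, 32
  %r = trunc i64 %hi to i32
  ret i32 %r
}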
One case that is pretty great is that we compile
2006-11-07-InvalidArrayPromote.ll into:
define i32 @func(<4 x float> %v0, <4 x float> %v1) nounwind {
  %tmp10 = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %v1)
  %tmp105 = bitcast <4 x i32> %tmp10 to i128
  %tmp1056 = zext i128 %tmp105 to i256
  %tmp.upgrd.43 = lshr i256 %tmp1056, 96
  %tmp.upgrd.44 = trunc i256 %tmp.upgrd.43 to i32
  ret i32 %tmp.upgrd.44
}
which turns into:
_func:
        subl $28, %esp
        cvttps2dq %xmm1, %xmm0
        movaps %xmm0, (%esp)
        movl 12(%esp), %eax
        addl $28, %esp
        ret
Which is pretty good code all things considered :).
One effect of this is that SROA will start generating arbitrary bitwidth
integers that are a multiple of 8 bits. In the case above, we got a
256 bit integer, but the codegen guys assure me that it can handle the
simple and/or/shift/zext stuff that we're doing on these operations.
This addresses rdar://6532315
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63469 91177308-0d34-0410-b5e6-96231b3b80d8
handling the case in Transforms/InstCombine/cast-store-gep.ll, which
is a heavily reduced testcase from Clang on x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62904 91177308-0d34-0410-b5e6-96231b3b80d8
analyses could be run without the caches properly sorted. This
can fix all sorts of weirdness. Many thanks to Bill for coming
up with the 'issorted' verification idea.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62757 91177308-0d34-0410-b5e6-96231b3b80d8
ASCII IR; loading and storing these can change the
bits of NaNs on some hosts. Remove or add warnings
at a few other places using host floating point;
this is a bad thing to do in general.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62712 91177308-0d34-0410-b5e6-96231b3b80d8
Besides APFloat, this involved removing code
from two places that thought they knew the
result of frem(0., x) but were wrong.
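For instance, frem with a zero dividend is only zero when the divisor is
nonzero and not NaN, so a fold like this must produce NaN:
define double @fold() {
  %r = frem double 0.0, 0.0               ; folds to NaN, not 0.0
  ret double %r
}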
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62645 91177308-0d34-0410-b5e6-96231b3b80d8
invoking the host fmod, not by lowering to frem and
constant-folding that. Fix this so it tests what I
want to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62622 91177308-0d34-0410-b5e6-96231b3b80d8
we assumed a CFG structure that would be valid when all code in
the function is reachable, but not all code is necessarily
reachable. Do a simple, but horrible, CFG walk to check for this
case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62487 91177308-0d34-0410-b5e6-96231b3b80d8
- Looking at the number of sign bits of a sext instruction to determine whether a new trunc + sext pair should be added when its source is being evaluated in a different type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62263 91177308-0d34-0410-b5e6-96231b3b80d8
my earlier patch to this file.
The issue there was that all uses of an IV inside a loop
are actually references to Base[IV*2], and there was one
use outside that was the same but LSR didn't see the base
or the scaling because it didn't recurse into uses outside
the loop; thus, it used base+IV*scale mode inside the loop
instead of pulling base out of the loop. This was extra bad
because register pressure later forced both base and IV into
memory. Doing that recursion, at least enough
to figure out addressing modes, is a good idea in general;
the change in AddUsersIfInteresting does this. However,
there were side effects....
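The problematic shape, as a hypothetical sketch:
define i32 @sum(i32* %Base, i32 %n) {
entry:
  br label %loop
loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %acc = phi i32 [ 0, %entry ], [ %acc.next, %loop ]
  %idx = mul i32 %iv, 2
  %p = getelementptr i32* %Base, i32 %idx  ; Base[IV*2] inside the loop
  %v = load i32* %p
  %acc.next = add i32 %acc, %v
  %iv.next = add i32 %iv, 1
  %done = icmp eq i32 %iv.next, %n
  br i1 %done, label %exit, label %loop
exit:
  %p2 = getelementptr i32* %Base, i32 %idx ; the same Base[IV*2] outside it
  %v2 = load i32* %p2
  %r = add i32 %acc.next, %v2
  ret i32 %r
}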
It is also possible for recursing outside the loop to
introduce another IV where there was only 1 before (if
the refs inside are not scaled and the ref outside is).
I don't think this is a common case, but it's in the testsuite.
It is right to be very aggressive about getting rid of
such introduced IVs (CheckForIVReuse and the handling of
nonzero RewriteFactor in StrengthReduceStridedIVUsers).
In the testcase in question the new IV produced this way
has both a nonconstant stride and a nonzero base, neither
of which was handled before. And when inserting
new code that feeds into a PHI, it's right to put such
code at the original location rather than in the PHI's
immediate predecessor(s) when the original location is outside
the loop (a case that couldn't happen before; see
RewriteInstructionToUseNewBase); better to avoid making
multiple copies of it in this case.
Also, the mechanism for keeping SCEV's corresponding to GEP's
no longer works, as the GEP might change after its SCEV
is remembered, invalidating the SCEV, and we might get a bad
SCEV value when looking up the GEP again for a later loop.
This also couldn't happen before, as we weren't recursing
into GEP's outside the loop.
Also, when we build an expression that involves a (possibly
non-affine) IV from a different loop as well as an IV from
the one we're interested in (containsAddRecFromDifferentLoop),
don't recurse into that. We can't do much with it and will
get in trouble if we try to create new non-affine IVs or something.
More testcases are coming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62212 91177308-0d34-0410-b5e6-96231b3b80d8
vector and extraneous loop over it, 2) not delete globals used by
phis/selects etc., which could actually be useful. This fixes PR3321.
Many thanks to Duncan for narrowing this down.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@62201 91177308-0d34-0410-b5e6-96231b3b80d8
functions that don't already have a (dynamic) alloca.
Dynamic allocas cause inefficient codegen and we shouldn't
propagate this (behavior follows gcc). Two existing tests
assumed such inlining would be done; they are hacked by
adding an alloca in the caller, preserving the point of
the tests.
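(A dynamic alloca is one whose size isn't a compile-time constant, as in this
hypothetical callee:)
define void @callee(i32 %n) {
  %buf = alloca i8, i32 %n                ; dynamic: size depends on %n
  call void @use(i8* %buf)
  ret void
}
declare void @use(i8*)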
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61946 91177308-0d34-0410-b5e6-96231b3b80d8
will get its preferred alignment. It has to be careful and conservatively assume
it will just get the ABI alignment. This prevents instcombine from rounding
up the alignment of a load/store without adjusting the alignment of the alloca.
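A hypothetical sketch of the bug being prevented:
define void @example() {
  %a = alloca i64, align 4                ; only ABI alignment is guaranteed
  store i64 0, i64* %a, align 4           ; rounding this up to align 8 without
  ret void                                ; re-aligning %a would be a miscompile
}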
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61934 91177308-0d34-0410-b5e6-96231b3b80d8
loads from allocas that cover the entire aggregate. This handles
some memcpy/byval cases that are produced by llvm-gcc. This triggers
a few times in kc++ (with std::pair<std::_Rb_tree_const_iterator
<kc::impl_abstract_phylum*>,bool>) and once in 176.gcc (with %struct..0anon).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61915 91177308-0d34-0410-b5e6-96231b3b80d8
Not only was it not very helpful, it was also wrong! The problem
is shown in the testcase: the alloca might be passed to
a nocapture callee which dereferences it and returns the
original pointer. But because it was a nocapture call we
think we don't need to track its uses, but we do.
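The shape of the problem, sketched hypothetically (per the above, @f may hand
back a pointer that aliases its nocapture argument):
declare i8* @f(i8* nocapture)
define i8 @example() {
  %a = alloca i8
  store i8 1, i8* %a
  %q = call i8* @f(i8* %a)                ; nocapture call, but...
  %v = load i8* %q                        ; ...%q may point at %a, so its
  ret i8 %v                               ; uses must still be tracked
}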
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61876 91177308-0d34-0410-b5e6-96231b3b80d8
integer to a (transitive) bitcast of the alloca: if that integer
has the full size of the alloca, then it clobbers the whole thing.
Handle this by extracting pieces out of the stored integer and
filing them away in the SROA'd elements.
This triggers fairly frequently because the CFE uses integers to
pass small structs by value and the inliner exposes these. For
example, in kimwitu++, I see a bunch of these with i64 stores to
"%struct.std::pair<std::_Rb_tree_const_iterator<kc::impl_abstract_phylum*>,bool>"
In 176.gcc I see a few i32 stores to "%struct..0anon".
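A hypothetical IR shape for test1 (invented names; the real testcase may
differ):
%struct.pair = type { i32, i32 }
define i32 @test1(i64 %P) {
  %s = alloca %struct.pair
  %raw = bitcast %struct.pair* %s to i64*
  store i64 %P, i64* %raw                 ; clobbers the whole alloca
  %p0 = getelementptr %struct.pair* %s, i32 0, i32 0
  %a = load i32* %p0                      ; becomes trunc of %P
  %p1 = getelementptr %struct.pair* %s, i32 0, i32 1
  %b = load i32* %p1                      ; becomes lshr+trunc of %P
  %r = add i32 %a, %b
  ret i32 %r
}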
In the testcase, this is a difference between compiling test1 to:
_test1:
        subl $12, %esp
        movl 20(%esp), %eax
        movl %eax, 4(%esp)
        movl 16(%esp), %eax
        movl %eax, (%esp)
        movl (%esp), %eax
        addl 4(%esp), %eax
        addl $12, %esp
        ret
vs:
_test1:
        movl 8(%esp), %eax
        addl 4(%esp), %eax
        ret
The second half of this will be to handle loads of the same form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61853 91177308-0d34-0410-b5e6-96231b3b80d8
In fact this also deletes those with linkonce linkage,
however this is currently dead because for the moment
aliases aren't allowed to have this linkage type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61742 91177308-0d34-0410-b5e6-96231b3b80d8
the argument to be stored to an alloca by tracking uses
of the alloca. This occurs 4 times (out of 7121, 0.05%)
in MultiSource/Applications, so may not be worth it. On
the other hand, it is easy to do and fairly cheap. The
functions it helps are: W_addcom and W_addlit in spiff;
process_args (argv) in d (make_dparser); ercPixConcealIMB
in JM/ldecod.
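The shape being handled, roughly (hypothetical sketch):
define void @f(i32* %arg) {
  %tmp = alloca i32*
  store i32* %arg, i32** %tmp             ; %arg is stored to an alloca...
  %p = load i32** %tmp                    ; ...that never escapes, so %arg
  store i32 0, i32* %p                    ; can still be marked nocapture
  ret void
}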
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61570 91177308-0d34-0410-b5e6-96231b3b80d8
and clean recursive descent parser.
This change has a couple of ramifications:
1. The parser code is about 400 lines shorter (in what we maintain, not
including what is autogenerated).
2. The code should be significantly faster than the old code because we
don't have to work around bison's poor handling of datatypes with
ctors/dtors. This also makes the code much more resistant to memory
leaks.
3. We now get caret diagnostics from the .ll parser, woo.
4. The actual diagnostics emitted from the parser are completely different
so a bunch of testcases had to be updated.
5. I now disallow "%ty = type opaque %ty = type i32". There was no good
reason to support this, it was just an accident of the old
implementation. I have no reason to think that anyone is actually using
this.
6. The syntax for sticking a global variable has changed to make it
unambiguous. I don't think anyone is depending on this since only clang
supports this and it is not solid yet, so I'm not worried about anything
breaking.
7. This gets rid of the last use of bison, and along with it the .cvs files.
I'll prune this from the makefiles as a subsequent commit.
There are a few minor cleanups that can be done after this commit (suggestions
welcome!) but this passes dejagnu testing and is ready for its time in the
limelight.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61558 91177308-0d34-0410-b5e6-96231b3b80d8
reason. Two functions which mutually require each other to be nocapture
are not currently supported.
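The unsupported case, sketched:
define void @f(i32* %p) {
  call void @g(i32* %p)                   ; f is capture-free only if g is...
  ret void
}
define void @g(i32* %p) {
  call void @f(i32* %p)                   ; ...and vice versa
  ret void
}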
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61553 91177308-0d34-0410-b5e6-96231b3b80d8
functions that don't write can't leak a pointer except through
the return value, so the pointer arguments of a void readonly function are implicitly nocapture.
Test these, and add a test that verifies that f1 calling f2 with an
otherwise dead pointer gets both of them marked nocapture.
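Sketched with hypothetical names:
define void @f2(i32* %p) readonly {
  %x = load i32* %p                       ; reads only and returns void, so
  ret void                                ; %p cannot leak: nocapture
}
define void @f1(i32* %p) {
  call void @f2(i32* %p)                  ; %p otherwise dead, so f1's %p
  ret void                                ; gets marked nocapture as well
}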
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61552 91177308-0d34-0410-b5e6-96231b3b80d8
to work out (in a very simplistic way) which function
arguments (pointer arguments only) are only dereferenced
and so do not escape. Mark such arguments 'nocapture'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61525 91177308-0d34-0410-b5e6-96231b3b80d8
constants, since doing so is irrelevant for aliasing
purposes. While this doesn't increase the total number
of functions marked readonly or readnone in MultiSource/
Applications (3089), it does result in 12 functions being
marked readnone rather than readonly.
Before:
readnone: 820
readonly: 2269
After:
readnone: 832
readonly: 2257
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61469 91177308-0d34-0410-b5e6-96231b3b80d8
nodes. This allows it to do fairly general phi insertion if a
load from a pointer global wants to be SRAd but the load is used
by (recursive) phi nodes. This fixes a pessimization on ppc
introduced by Load PRE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61123 91177308-0d34-0410-b5e6-96231b3b80d8
consistently for deleting branches. In addition to being slightly
more readable, this makes SimplifyCFG a bit better
about cleaning up after itself when it makes conditions unused.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61100 91177308-0d34-0410-b5e6-96231b3b80d8
visited set before they are used. If used, their blocks need to be
added to the visited set so that subsequent queries don't use conflicting
pointer values in the cache result blocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61080 91177308-0d34-0410-b5e6-96231b3b80d8
cleans up the generated code a bit. This should have the added benefit of
not randomly renaming functions/globals like my previous patch did. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61023 91177308-0d34-0410-b5e6-96231b3b80d8
memdep keeps track of how PHIs affect the pointer in dep queries, which
allows it to eliminate the load in cases like rle-phi-translate.ll, which
basically end up being:
BB1:
   X = load P
   br BB3
BB2:
   Y = load Q
   br BB3
BB3:
   R = phi [P] [Q]
   load R
turning "load R" into a phi of X/Y. In addition to additional exposed
opportunities, this makes memdep safe in many cases that it wasn't before
(which is required for load PRE) and also makes it substantially more
efficient. For example, consider:
bb1:        // has many predecessors.
   P = some_operator()
   load P
In this example, previously memdep would scan all the predecessors of BB1
to see if they had something that would mustalias P. In some cases (e.g.
test/Transforms/GVN/rle-must-alias.ll) it would actually find them and end
up eliminating something. In many other cases though, it would scan and not
find anything useful. MemDep now stops at a block if the pointer is defined
in that block and cannot be phi translated to predecessors. This causes it
to miss the (rare) cases like rle-must-alias.ll, but makes it faster by not
scanning tons of stuff that is unlikely to be useful. For example, this
speeds up GVN as a whole from 3.928s to 2.448s (60%)! IMO, scalar GVN
should be enhanced to simplify the rle-must-alias pointer base anyway, which
would allow the loads to be eliminated.
In the future, this should be enhanced to phi translate through geps and
bitcasts as well (as indicated by FIXMEs) making memdep even more powerful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61022 91177308-0d34-0410-b5e6-96231b3b80d8
llvm[2]: Linking Release executable opt (without symbols)
...
Undefined symbols:
  "llvm::APFloat::IEEEsingle", referenced from:
      __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(Constants.o)
      __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(AsmWriter.o)
      __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(ConstantFold.o)
  "llvm::APFloat::IEEEdouble", referenced from:
      __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(Constants.o)
      __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(AsmWriter.o)
      __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(ConstantFold.o)
ld: symbol(s) not found
This is in release mode. To replicate, compile llvm and llvm-gcc in optimized
mode. Then build llvm, in optimized mode, with the newly created compiler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60977 91177308-0d34-0410-b5e6-96231b3b80d8
tricks based on readnone/readonly functions.
Teach memdep to look past readonly calls when analyzing
deps for a readonly call. This allows elimination of a
few more calls from 403.gcc:
before:
63 gvn - Number of instructions PRE'd
153986 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
after:
63 gvn - Number of instructions PRE'd
153991 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
5 calls isn't much, but this adds plumbing for the next change.
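The shape that now gets eliminated, as a hypothetical sketch:
declare i32 @pure(i32*) readonly
define i32 @example(i32* %P) {
  %a = call i32 @pure(i32* %P)            ; a readonly call is not a clobber
  %b = call i32 @pure(i32* %P)            ; for this identical readonly call,
  %r = add i32 %a, %b                     ; so GVN can merge the two
  ret i32 %r
}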
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60794 91177308-0d34-0410-b5e6-96231b3b80d8
doesn't do its own local caching, and is slightly more aggressive about
free/store dse (see testcase). This eliminates the last external client
of MemDep::getDependenceFrom().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60619 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes many bugs. I will add more test cases in a separate check-in.
Some day, the code that manipulates CFG and updates dom. info could use refactoring help.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60554 91177308-0d34-0410-b5e6-96231b3b80d8
1) have it fold "br undef", which does occur with
surprising frequency as jump threading iterates.
2) teach j-t to delete dead blocks. This removes the successor
edges, reducing the in-edges of other blocks, allowing
recursive simplification.
3) Fold things like:
     br COND, BBX, BBY
   BBX:
     br COND, BBZ, BBW
which also happens because jump threading iterates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60470 91177308-0d34-0410-b5e6-96231b3b80d8
straight-forward implementation. This does not require any extra
alias analysis queries beyond what we already do for non-local loads.
Some programs really really like load PRE. For example, SPASS triggers
this ~1000 times, ~300 times in 255.vortex, and ~1500 times on 403.gcc.
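The basic shape load PRE targets, as a hypothetical sketch:
define i32 @example(i1 %c, i32* %P) {
entry:
  br i1 %c, label %then, label %else
then:
  %v = load i32* %P                       ; available on this path
  call void @use(i32 %v)
  br label %join
else:
  br label %join
join:
  %w = load i32* %P                       ; partially redundant: insert a load
  ret i32 %w                              ; in %else and phi the two together
}
declare void @use(i32)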
The biggest limitation to the implementation is that it does not split
critical edges. This is a huge killer on many programs and should be
addressed after the initial patch is enabled by default.
The implementation of this should incidentally speed up rejection of
non-local loads because it avoids creating the repl densemap in cases
when it won't be used for fully redundant loads.
This is currently disabled by default.
Before I turn this on, I need to fix a couple of miscompilations in
the testsuite, look at compile time performance numbers, and look at
perf impact. This is pretty close to ready though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60408 91177308-0d34-0410-b5e6-96231b3b80d8
overflowed on negation. This commit checks to make sure that neither C nor X
overflows. This requires that the RHS of X (a subtract instruction) be a
constant integer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60275 91177308-0d34-0410-b5e6-96231b3b80d8
properly updates the reverse dependency map when it installs updated
dependencies for instructions that depend on the removed instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60222 91177308-0d34-0410-b5e6-96231b3b80d8
1. Make it fold blocks separated by an unconditional branch. This enables
jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
feed the branch condition of a block. This frequently occurs due to
reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
they feed the branch condition of a block. This is common in code with
lots of loads and stores like C++ code and 255.vortex.
This implements thread-loads.ll and rdar://6402033.
Per the fixme's, several pieces of this should be moved into Transforms/Utils.
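For example, case 2 looks roughly like this (hypothetical sketch, of the kind
reg2mem produces):
define i32 @example(i32 %x) {
entry:
  %slot = alloca i32                      ; reg2mem-style spill slot
  store i32 %x, i32* %slot
  %v = load i32* %slot                    ; locally redundant: known to be %x
  %c = icmp eq i32 %v, 0                  ; and it feeds the branch condition
  br i1 %c, label %t, label %f
t:
  ret i32 0
f:
  ret i32 1
}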
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60148 91177308-0d34-0410-b5e6-96231b3b80d8