then we don't want to set the destination in the indirect branch to the
destination. This is because the indirect branch needs its destinations to have
had their block addresses taken. This isn't true of the new block created
when the critical edge is split during this process. If it turns out that the
destination block has only one predecessor, and that predecessor is a BB with
an indirect branch, then it won't be marked as 'used' and may be removed.
PR10072
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132638 91177308-0d34-0410-b5e6-96231b3b80d8
redundant with partially-aliasing loads.
When computing what portion of a clobbering load value is needed,
it doesn't consider phi-translation, which may have occurred
between the clobbering load and the redundant load.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132631 91177308-0d34-0410-b5e6-96231b3b80d8
which edge to split by pred/succ pair, which means that we can end up splitting
the wrong edge (by case value) in the switch statement entirely. Fixes PR10031!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132535 91177308-0d34-0410-b5e6-96231b3b80d8
In the given testcase, the "Clobber" was pointing to a load, and GVN was incorrectly assuming that meant that the "Clobber" load overlapped the load being analyzed (when they are actually unrelated).
The included testcase tests both this commit and r132434.
Part two of rdar://9429882. (r132434 was mislabeled.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132442 91177308-0d34-0410-b5e6-96231b3b80d8
variable. Noticed by inspection.
Simulate memset in EvaluateFunction where the target of the memset and the
value we're setting are both the null value. Fixes PR10047!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132288 91177308-0d34-0410-b5e6-96231b3b80d8
transformed by the inliner into a branch to the enclosing landing pad
(when inlined through an invoke). If not so optimized, it is lowered by
DWARF EH preparation into a call to _Unwind_Resume (or _Unwind_SjLj_Resume
as appropriate). Its chief advantage is that it takes both the
exception value and the selector value as arguments, meaning that there
is zero effort in recovering these; however, the frontend is required
to pass these down, which is not actually particularly difficult.
Also document the behavior of landing pads a bit better, and make it
clearer that it's okay that personality functions don't always land at
landing pads. This is just a fact of life. Don't write optimizations that
rely on pushing things over an unwind edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132253 91177308-0d34-0410-b5e6-96231b3b80d8
- the selector for the landing pad must provide all available information
about the handlers, filters, and cleanups within that landing pad
- calls to _Unwind_Resume must be converted to branches to the enclosing
lpad so as to avoid re-entering the unwinder when the lpad claimed it
was going to handle the exception in some way
This is quite specific to libUnwind-based unwinding. In an effort to not
interfere too badly with other unwinders, and with existing hacks in frontends,
this only triggers on _Unwind_Resume (not _Unwind_Resume_or_Rethrow) and does
nothing with selectors if it cannot find a selector call for either lpad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132200 91177308-0d34-0410-b5e6-96231b3b80d8
crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and
crc64.[8|64] have been renamed to .crc32.64.[8|64].
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132163 91177308-0d34-0410-b5e6-96231b3b80d8
Use a proper worklist for use-def traversal without holding onto an
iterator. Now that we process all IV uses, we need complete logic for
reusing existing derived IV defs. See HoistStep.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132103 91177308-0d34-0410-b5e6-96231b3b80d8
aligned.
Teach memcpyopt to not give up all hope when confronted with an underaligned
memcpy feeding an overaligned byval. If the *source* of the memcpy can be
determined to be adequately aligned, or if it can be forced to be, we can
eliminate the memcpy.
This addresses PR9794. We now compile the example into:
define i32 @f(%struct.p* nocapture byval align 8 %q) nounwind ssp {
entry:
%call = call i32 @g(%struct.p* byval align 8 %q) nounwind
ret i32 %call
}
in both x86-64 and x86-32 mode. We still don't get a tailcall though,
because tailcalls apparently can't handle byval.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131884 91177308-0d34-0410-b5e6-96231b3b80d8
failing to form a memset, then having to delete it" but my approximation
isn't safe for self-recurrent loops. Instead of doing a hack, just
do it the right way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131858 91177308-0d34-0410-b5e6-96231b3b80d8
I also changed -simplifycfg, -jump-threading and -codegenprepare to use this to produce slightly better code without any extra cleanup passes (AFAICT this was the only place in -simplifycfg where now-dead conditions of replaced terminators weren't being cleaned up). The only other user of this function is -sccp, but I didn't read that thoroughly enough to figure out whether it might be holding pointers to instructions that could be deleted by this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131855 91177308-0d34-0410-b5e6-96231b3b80d8
causing it to get into infinite loops when it would widen a
load (which necessarily leaves dead loads around).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131847 91177308-0d34-0410-b5e6-96231b3b80d8
It's better to do this in codegen; mul.with.overflow(X, 2) is more canonical because it has only one use of "X".
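For illustration only (not part of the original change), a C sketch of code
that produces the intrinsic; the builtin used here postdates this commit:
#include <stdbool.h>
/* Hypothetical example: __builtin_mul_overflow lowers to
   llvm.umul.with.overflow(x, 2), which references x exactly once; the
   equivalent add-with-overflow of x with itself would reference x twice
   and defeat hasOneUse-style checks. */
bool doubling_overflows(unsigned x) {
  unsigned tmp;
  return __builtin_mul_overflow(x, 2u, &tmp);
}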
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131798 91177308-0d34-0410-b5e6-96231b3b80d8
As an example, the change to InstCombineCalls catches a common case where a call to a bitcast of a function is rewritten.
Chris, does this approach look reasonable?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131516 91177308-0d34-0410-b5e6-96231b3b80d8
often expressed as "x >= y ? x : y", there is a good chance we can extract
the existing "x >= y" from it and use that as a replacement for "max(x,y)==x".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131049 91177308-0d34-0410-b5e6-96231b3b80d8
return the pointer being dereferenced; it returns the pointee, but a call
might return the pointer itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130979 91177308-0d34-0410-b5e6-96231b3b80d8
but according to my super-optimizer there are only two missed simplifications
of the -instsimplify kind when compiling bzip2, and this is one of them. It amuses
me to have bzip2 be perfectly optimized as far as instsimplify goes!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130840 91177308-0d34-0410-b5e6-96231b3b80d8
max(a,b) >= a -> true. According to my super-optimizer, these are
by far the most common simplifications (of the -instsimplify kind)
that occur in the testsuite and aren't caught by -std-compile-opts.
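For example (illustrative C, not from the commit):
/* comparing max(a, b) against one of its operands folds to a constant */
int always_one(int a, int b) {
  int max = a >= b ? a : b;
  return max >= a;          /* simplifies to 1 */
}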
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130780 91177308-0d34-0410-b5e6-96231b3b80d8
This automagically provides a transform noticed by my super-optimizer
as occurring quite often: "rem x, (select cond, x, 1)" -> 0.
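In C terms, the folded pattern looks roughly like this (hypothetical names):
/* both arms of the select give a zero remainder: x % x is 0 whenever it
   is defined, and x % 1 is always 0, so the whole thing folds to 0 */
unsigned rem_of_select(unsigned x, int cond) {
  unsigned divisor = cond ? x : 1u;
  return x % divisor;       /* simplifies to 0 */
}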
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130694 91177308-0d34-0410-b5e6-96231b3b80d8
This obviously helps a lot if the division would be turned into a libcall
(think i64 udiv on i386), but div is also one of the few remaining instructions
on modern CPUs that become more expensive when the bitwidth gets bigger.
This also helps register pressure on i386 when dividing chars: divb needs
two 8-bit parts of a 16-bit register as input, whereas divl uses two registers.
int foo(unsigned char a) { return a/10; }
int bar(unsigned char a, unsigned char b) { return a/b; }
compiles into (x86_64)
_foo:
imull $205, %edi, %eax
shrl $11, %eax
ret
_bar:
movzbl %dil, %eax
divb %sil, %al
movzbl %al, %eax
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130615 91177308-0d34-0410-b5e6-96231b3b80d8
a nice and tidy:
%x1 = load i32* %0, align 4
%1 = icmp eq i32 %x1, 1179403647
br i1 %1, label %if.then, label %if.end
instead of doing lots of loads and branches. May the FreeBSD bootloader
long fit in its allocated space.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130416 91177308-0d34-0410-b5e6-96231b3b80d8
wider load would allow elimination of subsequent loads, and when the wider
load is still a native integer type. This eliminates a ton of loads on
various benchmarks involving struct fields, though it is somewhat hobbled
by clang not being very aggressive about field alignment.
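A rough sketch of the kind of struct-field code this targets (illustrative,
assuming suitable alignment):
/* the two 16-bit field loads can be widened into a single native 32-bit
   load plus shifts, so the second load disappears */
struct pair { unsigned short lo, hi; };
unsigned sum_fields(struct pair *p) {
  return p->lo + p->hi;
}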
This is yet another step along the way towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130390 91177308-0d34-0410-b5e6-96231b3b80d8
Modified LinearFunctionTestReplace to push the condition onto the dead
list instead of eagerly deleting it. This can cause unnecessary
IV rewrites, which should have no effect on codegen and will not be an
issue once we stop generating canonical IVs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130340 91177308-0d34-0410-b5e6-96231b3b80d8
1. Only run the early (in the module pass pipeline) instcombine/simplifycfg
if the "unit at a time" passes they are cleaning up after actually run.
2. Move the "clean up after the unroller" pass to the very end of the
function-level pass pipeline. Loop unroll uses instsimplify now,
so it doesn't create a ton of trash. Moving instcombine later allows
it to clean up after opportunities are exposed by GVN, DSE, etc.
3. Introduce some phase ordering tests for things that are specifically
intended to be simplified by the full optimizer as a whole.
This resolves PR2338, and is progress towards PR6627, which will be
generating code that looks similar to test2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130241 91177308-0d34-0410-b5e6-96231b3b80d8
when X has multiple uses. This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use. For example,
this can disable load folding.
This is inching towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130238 91177308-0d34-0410-b5e6-96231b3b80d8
translation fails. We were bailing out in some cases that would
cause us to miss GVN'ing some non-local cases away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130206 91177308-0d34-0410-b5e6-96231b3b80d8
return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
int test(void *P) {
int tmp = *(unsigned int*)P;
return tmp+*((unsigned char*)P+1);
}
into:
_test: ## @test
movl (%rdi), %ecx
movzbl %ch, %eax
addl %ecx, %eax
ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130180 91177308-0d34-0410-b5e6-96231b3b80d8
generated by llvm-gcc, since llvm-gcc uses 2 i64s for passing a 4 x float
vector on ARM rather than an i64 array like Clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129878 91177308-0d34-0410-b5e6-96231b3b80d8
canonical, and generally leads to better code. Found while looking at
an article about saturating arithmetic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129545 91177308-0d34-0410-b5e6-96231b3b80d8
the same allocation size but different primitive sizes (e.g., <3 x i32> and
<4 x i32>). When ScalarRepl promotes them, it can't use a bit cast but
should use a shuffle vector instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129472 91177308-0d34-0410-b5e6-96231b3b80d8
reassociation opportunities are exposed. This fixes a bug where
the nested reassociation expects the IR to be consistent,
but it isn't, because the outer reassociation has disconnected
some of the operands. rdar://9167457
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129324 91177308-0d34-0410-b5e6-96231b3b80d8
has some bugs. If this is interesting functionality, it should be
reimplemented in the argpromotion pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129314 91177308-0d34-0410-b5e6-96231b3b80d8
is equivalent to any other relevant value; it isn't true in general.
If it is equivalent, the LoopPromoter will tell the AST the equivalence.
Also, delete the PreheaderLoad if it is unused.
Chris, since you were the last one to make major changes here, can you check
that this is sane?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129049 91177308-0d34-0410-b5e6-96231b3b80d8
space info. We crash with an assert in this case. This change checks that the
address space of the bitcasted pointer is the same as the gep ptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128884 91177308-0d34-0410-b5e6-96231b3b80d8
after the given instruction; make sure to handle that case correctly.
(It's difficult to trigger; the included testcase involves a dead
block, but I don't think that's a requirement.)
While I'm here, get rid of the unnecessary warning about
SimplifyInstructionsInBlock, since it should work correctly as far as I know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128782 91177308-0d34-0410-b5e6-96231b3b80d8
that one of the numbers is signed while the other is unsigned. This could lead
to a wrong result when the signed value was promoted to an unsigned int.
* Add the data layout line to the testcase so that it will test the appropriate
thing.
Patch by David Terei!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128577 91177308-0d34-0410-b5e6-96231b3b80d8
Some platforms may treat denormals as zero; on other platforms,
multiplication by a subnormal is slower than dividing by a normal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128555 91177308-0d34-0410-b5e6-96231b3b80d8
vector types. This helps a lot with inlined functions when using the ARM soft
float ABI. Fixes <rdar://problem/9184212>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128453 91177308-0d34-0410-b5e6-96231b3b80d8
removes one use of X which helps it pass the many hasOneUse() checks.
In my analysis, this turns up very often where X = A >>exact B and that can't be
simplified unless X has one use (except by increasing the lifetime of A which is
generally a performance loss).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128373 91177308-0d34-0410-b5e6-96231b3b80d8
For example, on 32-bit architecture, don't promote all uses of the IV
to 64-bits just because one use is a 64-bit cast.
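A sketch of the situation (hypothetical code, not from the patch):
/* only the accumulation needs 64 bits; the array index should keep
   using a 32-bit induction variable on a 32-bit target */
void index_and_sum(int *a, int n, long long *sum) {
  for (int i = 0; i < n; ++i) {
    a[i] = i;               /* 32-bit use of the IV */
    *sum += (long long)i;   /* 64-bit use through a cast */
  }
}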
Alternate implementation of the patch by Arnaud de Grandmaison.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127884 91177308-0d34-0410-b5e6-96231b3b80d8
chose is having a non-memcpy/memset use and being larger than any native integer
type. Originally I chose having an access of a size smaller than the total size
of the alloca, but this caused some minor issues on the spirit benchmark where
SRoA runs again after some inlining.
This fixes <rdar://problem/8613163>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127718 91177308-0d34-0410-b5e6-96231b3b80d8
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.
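A hedged C illustration of where such branches come from (hypothetical code,
not from the affected tests):
#include <string.h>
/* __builtin_object_size lowers to the llvm.objectsize intrinsic; once it
   is folded to a constant, the branch below has a constant condition and
   is trivially removable */
void checked_copy(char *dst, const char *src) {
  if (__builtin_object_size(dst, 0) >= 4)
    memcpy(dst, src, 4);
}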
This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127498 91177308-0d34-0410-b5e6-96231b3b80d8
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.
This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127459 91177308-0d34-0410-b5e6-96231b3b80d8
after it has finished all of its reassociations, because its
habit of unlinking operands and holding them in a data structure
while working means that it's not easy to determine when an
instruction is really dead until after all its regular work is
done. rdar://9096268.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127424 91177308-0d34-0410-b5e6-96231b3b80d8
This happens a lot in clang-compiled C++ code because it adds overflow checks to operator new[]:
unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64-bit targets because (uint64_t)n*4 cannot overflow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127418 91177308-0d34-0410-b5e6-96231b3b80d8
gave up when I realized I couldn't come up with a good name for the
refactored function that would describe what it does.
This is PR9343 test12, which is test3 with arguments reordered. Whoops!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127318 91177308-0d34-0410-b5e6-96231b3b80d8
a union of a float, <2 x float>, and <4 x float>. This mostly comes up with the
use of vector intrinsics, especially in NEON when programmers know the layout of
the register file. This enables codegen to eliminate a lot of the subregister
traffic it would otherwise generate.
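A sketch of the kind of union involved, using GCC/clang vector extensions
(illustrative, not from the commit):
typedef float v2sf __attribute__((vector_size(8)));
typedef float v4sf __attribute__((vector_size(16)));
/* scalar and vector views of the same data, as NEON code often uses;
   promoting the alloca lets codegen use lane extracts instead of going
   through memory */
union fvec { float s; v2sf v2; v4sf v4; };
float low_scalar(v4sf x) {
  union fvec u;
  u.v4 = x;
  return u.s;   /* reads the first lane through the scalar view */
}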
This commit only enables this for a small number of floating-point cases, but a
lot more integer cases. I assume this is okay for all ports, but I did not do
extensive testing of the quality of code involving i512 vectors and the like. If
there is a use case where this generates worse code than before, let me know and
we can scale it back.
This fixes <rdar://problem/9036264>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127317 91177308-0d34-0410-b5e6-96231b3b80d8
reachable uses, but there still might be uses in dead blocks. Use the
standard solution of replacing all the uses with undef. This is
a rare case because it's very sensitive to phase ordering in SimplifyCFG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127299 91177308-0d34-0410-b5e6-96231b3b80d8