xform it is checking to actually pass. There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).
Add matches for sext(not(x)) in addition to not(sext(x)).
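For reference, the two forms look like this in IR (a minimal sketch, with a
hypothetical i1 value %x):
not(sext(x)):
%e = sext i1 %x to i32
%r = xor i32 %e, -1
sext(not(x)):
%n = xor i1 %x, true
%r = sext i1 %n to i32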
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95420 91177308-0d34-0410-b5e6-96231b3b80d8
following it. However, the EmitGlobalConstant method wasn't emitting a body for
the constant. The assembler doesn't like that. Before, we were generating this:
.zerofill __DATA, __common, __cmd, 1, 3
This fix puts us back to that semantic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95336 91177308-0d34-0410-b5e6-96231b3b80d8
"Attached patch removes the extra NUL bytes from the output and changes
test/Archive/MacOSX.toc from a binary to a text file (removes
svn:mime-type=application/octet-stream and adds svn:eol-style=native). I can't
figure out how to get SVN to include the new contents of the file in the patch
so I'm attaching it separately."
Patch by James Abbatiello!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95292 91177308-0d34-0410-b5e6-96231b3b80d8
Fix bugs where we would compute an out-of-bounds access as in bounds, and
where we failed to account for the linker being able to override the size
of an array.
Add a few new testcases, change existing testcase to use a private
global array instead of extern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95283 91177308-0d34-0410-b5e6-96231b3b80d8
Lock prefix, Repeat string operation prefixes and the Segment override prefixes.
Also added versions of the move string and store string instructions without the
repeat prefixes to X86InstrInfo.td. And finally marked the rep versions of
move/store string records in X86InstrInfo.td as isCodeGenOnly = 1 so tblgen is
happy building the disassembler files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95252 91177308-0d34-0410-b5e6-96231b3b80d8
some mechanism for specifying alternative syntaxes, but I'm not sure what form
that should take yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95158 91177308-0d34-0410-b5e6-96231b3b80d8
It's unclear if the matcher is nondeterministic or what here,
but I'm getting matches without TAILCALL and some other hosts
are getting matches with it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95149 91177308-0d34-0410-b5e6-96231b3b80d8
This test case is a different subset of the full auto-generated test case, and
a larger subset than the one in x86_32-bit.s (that set will encode correctly).
These instructions can pass through llvm-mc as if it were a logical cat(1) and
then reassemble to the same instruction. It is useful as we bring up the parser
and matcher so we don't break things that currently work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95107 91177308-0d34-0410-b5e6-96231b3b80d8
where the callee's arguments are already in the caller's own caller's stack
and they line up perfectly. e.g.
extern int foo(int a, int b, int c);
int bar(int a, int b, int c) {
  return foo(a, b, c);
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95053 91177308-0d34-0410-b5e6-96231b3b80d8
as output. Needed for (functional) correctness in inline asm,
and should be generally beneficial. 7361612.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95050 91177308-0d34-0410-b5e6-96231b3b80d8
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94987 91177308-0d34-0410-b5e6-96231b3b80d8
of objc message send was getting marked arm_apcscc, but the prototype
wasn't. This is fine at runtime because objc_msgSend is implemented in
assembly. Only turn a mismatched caller and callee into 'unreachable'
if the callee is a definition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94986 91177308-0d34-0410-b5e6-96231b3b80d8
case, instcombine can't zap the invoke for fear of changing the CFG.
However, we have to do something to prevent the next iteration of
instcombine from inserting another store -> undef before the invoke
thereby getting into infinite iteration between dead store elim and
store insertion.
Just zap the callee to null, which will prevent the next iteration
from doing anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94985 91177308-0d34-0410-b5e6-96231b3b80d8
Even if they are supported by the core, they can be disabled
(this is just a configuration bit inside some register).
Allow unaligned memops on darwin and conservatively disallow them otherwise.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94889 91177308-0d34-0410-b5e6-96231b3b80d8
unconditionally. Besides checking the offset, also check that the underlying
object is aligned as much as the load itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94875 91177308-0d34-0410-b5e6-96231b3b80d8
something totally broken and parsing them as immediates, but the .td file also
had the wrong match class so things sort of worked. Except, that is, that we
would parse
movl $0, %eax
as
movl 0, %eax
Feel free to guess how well that worked.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94869 91177308-0d34-0410-b5e6-96231b3b80d8
needed for this test, but otherwise, there's nothing ARM-specific about
it and no need to specify the calling convention.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94862 91177308-0d34-0410-b5e6-96231b3b80d8
- This test case is auto generated, and has been verified to round-trip
correctly through llvm-mc by checking the assembled .o file before and after
piping through llvm-mc. It will be extended over time as the matcher grows
support for more instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94857 91177308-0d34-0410-b5e6-96231b3b80d8
getelementptr (i8* inttoptr (i64 1 to i8*), i32 -1)
to
inttoptr (i64 0 to i8*)
from the VMCore constant folder. It didn't handle sign-extension properly
in the case where the source integer is smaller than a pointer size. And,
it relied on an assumption about sizeof(i8).
The Analysis constant folder still folds these kinds of things; it has
access to TargetData, so it can do them right.
Add a testcase which tests that the VMCore constant folder doesn't
miscompile this, and that the Analysis folder does fold it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94750 91177308-0d34-0410-b5e6-96231b3b80d8
when it should have been and'd with LowBits. Fix that, and while we're there,
beef up the logic in the case of a negative LHS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94745 91177308-0d34-0410-b5e6-96231b3b80d8
runOnMachineFunction, and switch PPC to use EmitFunctionBody.
The two ppc asmprinters now don't have to define
runOnMachineFunction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94722 91177308-0d34-0410-b5e6-96231b3b80d8
This was already being done in SSAUpdater::GetValueAtEndOfBlock so I've
just changed SSAUpdater to check for existing PHIs in both places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94690 91177308-0d34-0410-b5e6-96231b3b80d8
even when -tailcallopt is not specified and it does not require changing the
ABI. The first case is the most trivial one: perform tail call optimization
when both the caller and callee do not return values and the callee does not
take any input arguments.
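A minimal sketch of that trivial case (hypothetical names):
declare void @callee()
define void @caller(i32 %x) {
  tail call void @callee()
  ret void
}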
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94664 91177308-0d34-0410-b5e6-96231b3b80d8
have trouble with an intermediate add overflowing. Also, be more conservative
about the case where the induction variable in an SLT loop exit can step past
the RHS of the SLT and overflow in a single step.
Make getSignedRange more aggressive, to recover some common cases which
the above fixes pessimized.
This addresses rdar://7561161.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94512 91177308-0d34-0410-b5e6-96231b3b80d8
dbg.declare's we currently generate go through both
register allocators without perturbing the results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94480 91177308-0d34-0410-b5e6-96231b3b80d8
"sext cond" instead of a select. This simplifies some instcombine
code, matches the policy for zext (cond ? 1 : 0 -> zext), and allows
us to generate better code for a testcase on ppc.
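That is, a sketch of the new canonical form:
%r = select i1 %cond, i32 -1, i32 0
becomes
%r = sext i1 %cond to i32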
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94339 91177308-0d34-0410-b5e6-96231b3b80d8
for arbitrary terminators in predecessors; don't assume
each one is a conditional or unconditional branch. The testcase
shows an example where this can happen with switches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94323 91177308-0d34-0410-b5e6-96231b3b80d8
externally visible function, it can still find all callers of it and replace
the values passed for a dead argument with undef.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94322 91177308-0d34-0410-b5e6-96231b3b80d8
handle the case when we can infer an input to the xor
from all inputs that agree, instead of going into an
infinite loop. This is another part of PR6199.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94321 91177308-0d34-0410-b5e6-96231b3b80d8
to MCExpr then emit them through MCStreamer with EmitValue. I think all
global variable initializers are now going through MCStreamer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94293 91177308-0d34-0410-b5e6-96231b3b80d8
if one of the vectors didn't have elements (such as undef). Fixes PR 6096.
Fix an issue in the constant folder where fcmp (<2 x %ty>, <2 x %ty>) would
have <2 x i1> type if constant folding was successful and i1 type if it wasn't.
This exposed a related issue in the bitcode reader.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94069 91177308-0d34-0410-b5e6-96231b3b80d8
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.
It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94061 91177308-0d34-0410-b5e6-96231b3b80d8
of int initializers), change some methods to be static functions,
use raw_ostream::write_hex instead of a smallstring dance with
APValue::toStringUnsigned(S, 16).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93991 91177308-0d34-0410-b5e6-96231b3b80d8
emits one directive instead of N. Not doing this would be a
significant regression on the # bytes generated by .fill.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93889 91177308-0d34-0410-b5e6-96231b3b80d8
doing global variable classification anymore) and hookized, sink almost
all of the targets' global variable emission code into AsmPrinter and out
of each target.
Some notes:
1. PIC16 does completely custom and crazy stuff, so it is not changed.
2. XCore has some custom handling for extra directives. I'll look at it next.
3. This switches linux/ppc to use .globl instead of .global. If .globl is
actually wrong, let me know and I'll fix it.
4. This makes linux/ppc get a lot of random cases right which were obviously
wrong before, it is probably now a bit healthier.
5. Blackfin will probably start getting .comm and other things that it didn't
before. If this is undesirable, it should explicitly opt out of these
things by clearing the relevant fields of MCAsmInfo.
This leads to a nice diffstat:
14 files changed, 127 insertions(+), 830 deletions(-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93858 91177308-0d34-0410-b5e6-96231b3b80d8
are the same. I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value. Radar 7552893.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93848 91177308-0d34-0410-b5e6-96231b3b80d8
GCC would put weak zero initialized mutable data in the .bss section,
we would put it into a crazy '.gnu.linkonce.b.test,"aw",@nobits'
section. Fixing this will allow simplifications next up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93844 91177308-0d34-0410-b5e6-96231b3b80d8
comments (fast isel, X86). This doesn't seem
to break any functionality, but will introduce
cases where -g affects the generated code. I'll
be fixing that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93811 91177308-0d34-0410-b5e6-96231b3b80d8
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)),
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93787 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine does this but apparently there are situations where this pattern will escape the optimizer and/or be created by isel. Here is a case that's seen in JavaScriptCore:
%t1 = sub i32 0, %a
%t2 = add i32 %t1, -1
The dag combiner pattern: ((c1-A)+c2) -> (c1+c2)-A
will fold it to -1 - %a.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93773 91177308-0d34-0410-b5e6-96231b3b80d8
PASS: LLVM::FrontendC/pr5406.c (3463 of 5030)
and on X86 I get
XFAIL: LLVM::FrontendC/pr5406.c (3465 of 5030)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93689 91177308-0d34-0410-b5e6-96231b3b80d8
adding an "i" to the suffix, indicating that the elements are integers, is
accepted but not part of the standard syntax. This helps us pass a few more
of the Neon tests from gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93677 91177308-0d34-0410-b5e6-96231b3b80d8
Nodes that had children outside of the post dominator tree (infinite loops)
were removed from the post dominator tree. This seems to be wrong. Leave them
in the tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93633 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93531 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93504 91177308-0d34-0410-b5e6-96231b3b80d8
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).
This is causing LLVM to perform incorrect xforms for code like:
void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl) {
  double mh, ml;
  double c = 134217729.0;
  double up, u1, u2, vp, v1, v2;
  up = xh*c;
  u1 = (xh - up) + up;
  u2 = xh - u1;
  vp = yh*c;
  v1 = (yh - vp) + vp;
  v2 = yh - v1;
  mh = xh*yh;
  ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
  ml += xh*yl + xl*yh;
  *rhi = mh + ml;
  *rlo = (mh - (*rhi)) + ml;
}
The last line was optimized away, but *rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it has
been rounded to double precision.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93369 91177308-0d34-0410-b5e6-96231b3b80d8
different BlockAddress labels, but nothing semantically important.
Add a FIXME that BlockAddress codegen is broken if the LLVM BB has
an empty name (e.g. strip was run).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93303 91177308-0d34-0410-b5e6-96231b3b80d8
For now, this pass is fairly conservative. It only performs the replacement when both the pre- and post-extension values are used in the block. It will miss cases where the post-extension values are live, but not used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93278 91177308-0d34-0410-b5e6-96231b3b80d8
in JT.
2) When cloning blocks for PHI or xor conditions, use
instsimplify to simplify the code as we go. This allows us to
squish common cases early in JT which opens up opportunities for
subsequent iterations, and allows it to completely simplify the
testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93253 91177308-0d34-0410-b5e6-96231b3b80d8
condition is a xor with a phi node. This eliminates nonsense
like this from 176.gcc in several places:
LBB166_84:
testl %eax, %eax
- setne %al
- xorb %cl, %al
- notb %al
- testb $1, %al
- je LBB166_85
+ je LBB166_69
+ jmp LBB166_85
This is rdar://7391699
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93221 91177308-0d34-0410-b5e6-96231b3b80d8
has an immediate with at least 32 bits of leading zeros, to avoid needing to
materialize that immediate in a register first.
FileCheckize, tidy, and extend a testcase to cover this case.
This fixes rdar://7527390.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93160 91177308-0d34-0410-b5e6-96231b3b80d8
new AsmPrinter. This is perhaps less elegant than describing them
in terms of MOV32r0 and subreg operations, but it allows the
current register allocator to rematerialize them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93158 91177308-0d34-0410-b5e6-96231b3b80d8
ignore alignment requirements for SIMD memory operands. This
is useful on architectures like the AMD 10h that do not trap on
unaligned references if a status bit is twiddled at startup time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93151 91177308-0d34-0410-b5e6-96231b3b80d8
BitsToClear case. This allows it to promote expressions which have an
and/or/xor after the lshr, promoting cases like test2 (from PR4216)
and test3 (a random example extracted from a spec benchmark).
clang now compiles the code in PR4216 into:
_test_bitfield: ## @test_bitfield
movl %edi, %eax
orl $194, %eax
movl $4294902010, %ecx
andq %rax, %rcx
orl $32768, %edi
andq $39936, %rdi
movq %rdi, %rax
orq %rcx, %rax
ret
instead of:
_test_bitfield: ## @test_bitfield
movl %edi, %eax
orl $194, %eax
movl $4294902010, %ecx
andq %rax, %rcx
shrl $8, %edi
orl $128, %edi
shlq $8, %rdi
andq $39936, %rdi
movq %rdi, %rax
orq %rcx, %rax
ret
which is still not great, but is progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93145 91177308-0d34-0410-b5e6-96231b3b80d8
new BitsToClear result which allows us to start promoting
expressions that end with a lshr-by-constant. This is
conservatively correct and better than what we had before
(see testcases) but still needs to be extended further.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93144 91177308-0d34-0410-b5e6-96231b3b80d8
the zext dest type. This allows us to handle test52/53 in cast.ll,
and allows llvm-gcc to generate much better code for PR4216 in -m64
mode:
_test_bitfield: ## @test_bitfield
orl $32962, %edi
movl %edi, %eax
andl $-25350, %eax
ret
This also fixes a bug handling vector extends, ensuring that the
mask produced is a vector constant, not an integer constant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93127 91177308-0d34-0410-b5e6-96231b3b80d8
elimination of a sign extend to be a win, which simplifies
the client of CanEvaluateSExtd, and allows us to eliminate
more casts (examples taken from real code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93109 91177308-0d34-0410-b5e6-96231b3b80d8
lshr+ashr instead of trunc+sext. We want to avoid type
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93107 91177308-0d34-0410-b5e6-96231b3b80d8
1) don't try to optimize a sext or zext that is only used by a trunc; let
the trunc get optimized first. This avoids some pointless effort in
some common cases since instcombine scans down a block in the first pass.
2) Change the cost model for zext elimination to consider an 'and' cheaper
than a zext. This allows us to do it more aggressively, and allows the next
patch to simplify the code quite a bit (see the sketch below).
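A sketch of the kind of zext elimination the new cost model favors, with a
hypothetical i32 value %x:
%t = trunc i32 %x to i8
%z = zext i8 %t to i32
becomes
%z = and i32 %x, 255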
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93097 91177308-0d34-0410-b5e6-96231b3b80d8
R11, and then asserting that the target was in R9. Since R9 isn't reserved for
the target anymore, and is used as an argument, this patch changes the
assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93065 91177308-0d34-0410-b5e6-96231b3b80d8
really does need to be a vector type, because
TargetLowering::getOperationAction for SIGN_EXTEND_INREG uses that type,
and it needs to be able to distinguish between vectors and scalars.
Also, fix some more issues with legalization of vector casts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93043 91177308-0d34-0410-b5e6-96231b3b80d8
result int by 8 for the first byte. While normally harmless,
if the result is smaller than a byte, this shift is invalid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93018 91177308-0d34-0410-b5e6-96231b3b80d8
that feeds into a zext, similar to the patch I did yesterday for sext.
There is a lot of room for extension beyond this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92962 91177308-0d34-0410-b5e6-96231b3b80d8
When folding an and(any_ext(load)), both the any_ext and the
load must have only a single use.
This removes the anyext-uses.ll testcase, which started failing,
because it is unreduced and it is unclear what it is testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92950 91177308-0d34-0410-b5e6-96231b3b80d8
to an element of a vector in a static ctor) which occurs with an
unrelated patch I'm testing. Annoyingly, EvaluateStoreInto basically
does exactly the same stuff as InsertElement constant folding, but it
now handles vectors, and you can't insertelement into a vector. It
would be 'really nice' if GEP into a vector were not legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92889 91177308-0d34-0410-b5e6-96231b3b80d8
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately, this simple change causes dag combine to go into an infinite loop. The problem is that the shrink demanded ops optimization tends to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead, the work is done as a late pass in sdisel.
This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
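For example, in IR terms (a sketch; the actual transform is on SelectionDAG
nodes, and %x and %y are hypothetical i64 values):
%a = trunc i64 %x to i32
%b = trunc i64 %y to i32
%r = add i32 %a, %b
becomes
%t = add i64 %x, %y
%r = trunc i64 %t to i32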
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92849 91177308-0d34-0410-b5e6-96231b3b80d8
phi nodes when deciding which pointers point to local memory.
I actually checked long ago how useful this is, and it isn't
very: it hardly ever fires in the testsuite, but since Chris
wants it here it is!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92836 91177308-0d34-0410-b5e6-96231b3b80d8
memcpy, memset and other intrinsics that only access their arguments
to be readnone if the intrinsic's arguments all point to local memory.
This improves the testcase in the README to readonly; it could in
theory be made readnone, but that would involve more sophisticated
analysis that looks through the memcpy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92829 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, instcombine would only promote an expression tree to
the larger type if doing so eliminated two casts. This is because it
would otherwise need to manually redo the sign extend after the promoted
expression tree with two shifts. Now, we keep track of whether the result of
the computation is going to be properly sign extended already. If
so, we can unconditionally promote the expression, which allows us
to zap more sext's.
This implements rdar://6598839 (aka gcc pr38751)
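To illustrate the cost being tracked, promoting a sext'd i16 expression to
i32 generally needs a manual sign extend with two shifts (a sketch, with
hypothetical i32 values %x and %y):
%tx = trunc i32 %x to i16
%ty = trunc i32 %y to i16
%a = add i16 %tx, %ty
%s = sext i16 %a to i32
becomes
%a32 = add i32 %x, %y
%shl = shl i32 %a32, 16
%s = ashr i32 %shl, 16
and the shifts can be dropped whenever the promoted result is known to be
properly sign extended already.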
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92815 91177308-0d34-0410-b5e6-96231b3b80d8
The only difference is that EvaluateInDifferentType checks to ensure
they are profitable before doing them :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92788 91177308-0d34-0410-b5e6-96231b3b80d8
when doing this transform if the GEP is not inbounds. No testcase because
it is very difficult to trigger this: instcombine already canonicalizes
GEP indices to pointer size, so it relies on specific permutations of the
instcombine worklist.
Thanks to Duncan for pointing this possible problem out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92495 91177308-0d34-0410-b5e6-96231b3b80d8
on the example in PR4216. This doesn't trigger in the testsuite,
so I'd really appreciate someone scrutinizing the logic for
correctness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92458 91177308-0d34-0410-b5e6-96231b3b80d8
occurs in 403.gcc in mode_mask_array, in safe-ctype.c (which
is copied in multiple apps) in _sch_istable, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92427 91177308-0d34-0410-b5e6-96231b3b80d8
when a consecutive sequence of elements all satisfies the
predicate. Like the double compare case, this generates better
code than the magic constant case and generalizes to more than
32/64 element array lookups.
Here are some examples where it triggers. From 403.gcc, most
accesses to the rtx_class array are handled, e.g.:
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=547]
%142 = icmp eq i8 %141, 105
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=543]
%165 = icmp eq i8 %164, 60
Also, most of the 59-element arrays (mode_class/rid_to_yy, etc)
optimized before are actually range compares. This lets 32-bit
machines optimize them.
400.perlbmk has stuff like this:
400.perlbmk: PL_regkind, even for 32-bit:
@PL_regkind = constant [62 x i8] c"\00\00\02\02\02\06\06\06\06\09\09\0B\0B\0D\0E\0E\0E\11\12\12\14\14\16\16\18\18\1A\1A\1C\1C\1E\1F !!!$$&'((((,-.///88886789:;8$", align 32 ; <[62 x i8]*> [#uses=4]
%811 = icmp ne i8 %810, 33
@PL_utf8skip = constant [256 x i8] c"\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\04\04\04\04\04\04\04\04\05\05\05\05\06\06\07\0D", align 32 ; <[256 x i8]*> [#uses=94]
%12 = icmp ult i8 %10, 2
etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92426 91177308-0d34-0410-b5e6-96231b3b80d8
two elements match or don't match with two comparisons. For
example, the testcase compiles into:
define i1 @test5(i32 %X) {
  %1 = icmp eq i32 %X, 2 ; <i1> [#uses=1]
  %2 = icmp eq i32 %X, 7 ; <i1> [#uses=1]
  %R = or i1 %1, %2 ; <i1> [#uses=1]
  ret i1 %R
}
This generalizes the previous xforms when the array is larger than
64 elements (and this case matches) and generates better code for
cases where it overlaps with the magic bitshift case.
This generalizes more cases than you might expect. For example,
400.perlbmk has:
@PL_utf8skip = constant [256 x i8] c"\01\01\01\...
%15 = icmp ult i8 %7, 7
403.gcc has:
@rid_to_yy = internal constant [114 x i16] [i16 259, i16 260, ...
%18 = icmp eq i16 %16, 295
and xalancbmk has a bunch of examples, such as
_ZN11xercesc_2_5L15gCombiningCharsE and _ZN11xercesc_2_5L10gBaseCharsE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92417 91177308-0d34-0410-b5e6-96231b3b80d8
arrays with variable indices into a comparison of the index
with a constant. The most common occurrence of this that
I see by far is stuff like:
if ("foobar"[i] == '\0') ...
which we compile into: if (i == 6), saving a load and
materialization of the global address. This also exposes
loop trip count information to later passes in many cases.
This triggers hundreds of times in xalancbmk, which is where I first
noticed it, but it also triggers in many other apps. Here are a few
interesting ones from various apps:
@must_be_connected_without = internal constant [8 x i8*] [i8* getelementptr inbounds ([3 x i8]* @.str64320, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str27283, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str71327, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str72328, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str18274, i64 0, i64 0), i8* getelementptr inbounds ([6 x i8]* @.str11267, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str32288, i64 0, i64 0), i8* null], align 32 ; <[8 x i8*]*> [#uses=2]
%scevgep.i = getelementptr [8 x i8*]* @must_be_connected_without, i64 0, i64 %indvar.i ; <i8**> [#uses=1]
%17 = load ...
%18 = icmp eq i8* %17, null ; <i1> [#uses=1]
-> icmp eq i64 %indvar.i, 7
@yytable1095 = internal constant [84 x i8] c"\12\01(\05\06\07\08\09\0A\0B\0C\0D\0E1\0F\10\11266\1D: \10\11,-,0\03'\10\11B6\04\17&\18\1945\05\06\07\08\09\0A\0B\0C\0D\0E\1E\0F\10\11*\1A\1B\1C$3+>#%;<IJ=ADFEGH9KL\00\00\00C", align 32 ; <[84 x i8]*> [#uses=2]
%57 = getelementptr inbounds [84 x i8]* @yytable1095, i64 0, i64 %56 ; <i8*> [#uses=1]
%mode.0.in = getelementptr inbounds [9 x i32]* @mb_mode_table, i64 0, i64 %.pn ; <i32*> [#uses=1]
load ...
%64 = icmp eq i8 %58, 4 ; <i1> [#uses=1]
-> icmp eq i64 %.pn, 35 ; <i1> [#uses=0]
@gsm_DLB = internal constant [4 x i16] [i16 6554, i16 16384, i16 26214, i16 32767]
%scevgep.i = getelementptr [4 x i16]* @gsm_DLB, i64 0, i64 %indvar.i ; <i16*> [#uses=1]
%425 = load %scevgep.i
%426 = icmp eq i16 %425, -32768 ; <i1> [#uses=0]
-> false
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92411 91177308-0d34-0410-b5e6-96231b3b80d8
pointer to int casts that confuse later optimizations. See PR3351
for details.
This improves but doesn't completely fix 483.xalancbmk because llvm-gcc
does this xform in GCC's "fold" routine as well. Clang++ will do
better I guess.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92408 91177308-0d34-0410-b5e6-96231b3b80d8
(X != null) | (Y != null) --> (X|Y) != 0
(X == null) & (Y == null) --> (X|Y) == 0
so that instcombine can stop doing this for pointers. This is part of PR3351,
which is a case where instcombine doing this for pointers (inserting ptrtoint)
is pessimizing code.
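In IR terms, the pointer form of the pattern looks like this (a sketch with
hypothetical i8* values):
%xn = icmp ne i8* %X, null
%yn = icmp ne i8* %Y, null
%r = or i1 %xn, %yn
and codegen can now fold it directly, without instcombine first rewriting it
with ptrtoint.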
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92406 91177308-0d34-0410-b5e6-96231b3b80d8
a constantexpr gep on the 'base' side of the expression.
This completes comment #4 in PR3351, which comes from
483.xalancbmk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92402 91177308-0d34-0410-b5e6-96231b3b80d8
multiply sequence when the power is a constant integer. Before, our
codegen for std::pow(.., int) always turned into a libcall, which was
really inefficient.
This should also make many gfortran programs happier I'd imagine.
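Conceptually, the expansion looks like this (a sketch; the exact addition
chain depends on the exponent):
%r = call double @llvm.powi.f64(double %x, i32 3)
becomes
%t = fmul double %x, %x
%r = fmul double %t, %x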
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92388 91177308-0d34-0410-b5e6-96231b3b80d8
positive and negative forms of constants together. This
allows us to compile:
int foo(int x, int y) {
  return (x-y) + (x-y) + (x-y);
}
into:
_foo: ## @foo
subl %esi, %edi
leal (%rdi,%rdi,2), %eax
ret
instead of (where the 3 and -3 were not factored):
_foo:
imull $-3, 8(%esp), %ecx
imull $3, 4(%esp), %eax
addl %ecx, %eax
ret
this started out as:
movl 12(%ebp), %ecx
imull $3, 8(%ebp), %eax
subl %ecx, %eax
subl %ecx, %eax
subl %ecx, %eax
ret
This comes from PR5359.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92381 91177308-0d34-0410-b5e6-96231b3b80d8
compare. On other targets we end up with a call to memcmp because we don't
want 16 individual byte loads. We should be able to use movups as well, but
we're failing to select the generated icmp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92107 91177308-0d34-0410-b5e6-96231b3b80d8
SDISel. This optimization was causing simplifylibcalls to
introduce type-unsafe nastiness. This is the first step, I'll be
expanding the memcmp optimizations shortly, covering things that
we really really wouldn't want simplifylibcalls to do.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92098 91177308-0d34-0410-b5e6-96231b3b80d8
missing check that an array reference doesn't go past the end of the array,
and remove some redundant checks for in-bound array and vector references
that are no longer needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91897 91177308-0d34-0410-b5e6-96231b3b80d8
data on them, for example:
addb %al, (%rax)
simple-tests.txt:11:5: error: excess data detected in input
0 0 0 0 0
^
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91896 91177308-0d34-0410-b5e6-96231b3b80d8
by merging all returns in a function into a single one, but simplifycfg
currently likes to duplicate the return (an unfortunate choice!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91890 91177308-0d34-0410-b5e6-96231b3b80d8
'GetValueInMiddleOfBlock' case, instead of inserting
duplicates.
A similar fix is almost certainly needed by the machine-level
SSAUpdate implementation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91820 91177308-0d34-0410-b5e6-96231b3b80d8
implement some optimizations for MIN(MIN()) and MAX(MAX()) and
MIN(MAX()) etc. This substantially improves the code in PR5822 but
doesn't kick in much elsewhere. 2 max's were optimized in
pairlocalalign and one in smg2000.
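For example, one pattern of this sort that can now fold (a sketch):
%c1 = icmp slt i32 %a, %b
%m = select i1 %c1, i32 %a, i32 %b ; min(a, b)
%c2 = icmp slt i32 %m, %a
%r = select i1 %c2, i32 %m, i32 %a ; min(min(a, b), a) == min(a, b)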
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91814 91177308-0d34-0410-b5e6-96231b3b80d8
Use the presence of NSW/NUW to fold "icmp (x+cst), x" to a constant in
cases where it would otherwise be undefined behavior.
Surprisingly (to me at least), this triggers hundreds of times in
a few benchmarks: lencode, ldecode, and 466.h264ref seem to *really*
like this.
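For example (a sketch): given
%a = add nsw i32 %x, 1
%c = icmp sgt i32 %a, %x
the nsw flag rules out signed wrap, so %c folds to true.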
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91812 91177308-0d34-0410-b5e6-96231b3b80d8
a bunch in lencode, ldecod, spass, 176.gcc, 252.eon, among others. It is
also the first part of PR5822.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91811 91177308-0d34-0410-b5e6-96231b3b80d8
cache a pointer as being unavailable due to phi trans in the
wrong place. This would cause later queries to fail even when
they didn't involve phi trans.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91787 91177308-0d34-0410-b5e6-96231b3b80d8
where instcombine would have to split a critical edge due to a
phi node of an invoke. Since instcombine can't change the CFG,
it has to bail out from doing the transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91763 91177308-0d34-0410-b5e6-96231b3b80d8
bootstrap. This also replaces the WeakVH references that Chris objected to
with normal Value references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91711 91177308-0d34-0410-b5e6-96231b3b80d8
be non-optimal. To be precise, we should avoid folding loads if the instructions
only update part of the destination register, and the non-updated part is not
needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
the partial register dependency and it can improve performance. e.g.
movss (%rdi), %xmm0
cvtss2sd %xmm0, %xmm0
instead of
cvtss2sd (%rdi), %xmm0
An alternative method to break dependency is to clear the register first. e.g.
xorps %xmm0, %xmm0
cvtss2sd (%rdi), %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91672 91177308-0d34-0410-b5e6-96231b3b80d8