doing global variable classification anymore) and hookized, sink almost
all targets' global variable emission code into AsmPrinter and out
of each target.
Some notes:
1. PIC16 does completely custom and crazy stuff, so it is not changed.
2. XCore has some custom handling for extra directives. I'll look at it next.
3. This switches linux/ppc to use .globl instead of .global. If .globl is
actually wrong, let me know and I'll fix it.
4. This makes linux/ppc get a lot of random cases right which were obviously
wrong before; it is probably now a bit healthier.
5. Blackfin will probably start getting .comm and other things that it didn't
before. If this is undesirable, it should explicitly opt out of these
things by clearing the relevant fields of MCAsmInfo.
This leads to a nice diffstat:
14 files changed, 127 insertions(+), 830 deletions(-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93858 91177308-0d34-0410-b5e6-96231b3b80d8
are the same. I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value. Radar 7552893.
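As an illustration only, here is a minimal IR sketch of the degenerate shape being handled, assuming the copy in question is a memory-transfer intrinsic such as llvm.memmove (the function and value names are hypothetical):
declare void @llvm.memmove.p0.p0.i64(ptr, ptr, i64, i1)
define void @self_copy() {
  %buf = alloca [16 x i8]
  ; both operands are literally the same value, not just two aliases of it
  call void @llvm.memmove.p0.p0.i64(ptr %buf, ptr %buf, i64 16, i1 false)
  ret void
}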
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93848 91177308-0d34-0410-b5e6-96231b3b80d8
GCC would put weak zero-initialized mutable data in the .bss section,
while we would put it into a crazy '.gnu.linkonce.b.test,"aw",@nobits'
section. Fixing this will allow simplifications next up.
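For illustration, a minimal IR sketch of the kind of global involved, using a hypothetical name that matches the section string above:
; a weak, zero-initialized, mutable global: GCC places this in .bss, while
; we were emitting the custom .gnu.linkonce.b.test section shown above
@test = weak global i32 0, align 4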
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93844 91177308-0d34-0410-b5e6-96231b3b80d8
comments (fast isel, X86). This doesn't seem
to break any functionality, but will introduce
cases where -g affects the generated code. I'll
be fixing that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93811 91177308-0d34-0410-b5e6-96231b3b80d8
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)),
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.
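For that last case, a minimal sketch of the pattern (hypothetical function names, current IR syntax):
; zext + shl + ashr of the low 16 bits...
define i32 @before(i16 %x) {
  %z = zext i16 %x to i32
  %s = shl i32 %z, 16
  %a = ashr i32 %s, 16
  ret i32 %a
}
; ...is just a sign extension
define i32 @after(i16 %x) {
  %s = sext i16 %x to i32
  ret i32 %s
}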
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93787 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine does this, but apparently there are situations where this pattern will escape the optimizer and/or be created by isel. Here is a case that's seen in JavaScriptCore:
%t1 = sub i32 0, %a
%t2 = add i32 %t1, -1
The dag combiner pattern: ((c1-A)+c2) -> (c1+c2)-A
will fold it to -1 - %a.
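Spelled out as a full (hypothetical) function, the fold with c1 = 0 and c2 = -1 is:
define i32 @before(i32 %a) {
  %t1 = sub i32 0, %a
  %t2 = add i32 %t1, -1
  ret i32 %t2
}
; after ((c1-A)+c2) -> (c1+c2)-A:
define i32 @after(i32 %a) {
  %t2 = sub i32 -1, %a
  ret i32 %t2
}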
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93773 91177308-0d34-0410-b5e6-96231b3b80d8
PASS: LLVM::FrontendC/pr5406.c (3463 of 5030)
and on X86 I get
XFAIL: LLVM::FrontendC/pr5406.c (3465 of 5030)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93689 91177308-0d34-0410-b5e6-96231b3b80d8
adding an "i" to the suffix, indicating that the elements are integers, is
accepted but not part of the standard syntax. This helps us pass a few more
of the Neon tests from gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93677 91177308-0d34-0410-b5e6-96231b3b80d8
Nodes that had children outside of the post dominator tree (infinite loops)
were removed from the post dominator tree. This seems to be wrong. Leave them
in the tree.
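A minimal (hypothetical) IR sketch of the kind of CFG involved:
define void @f(i1 %c) {
entry:
  br i1 %c, label %inf, label %exit
inf:
  ; infinite loop: this block never reaches an exit, so it is not
  ; post-dominated by any exit block
  br label %inf
exit:
  ret void
}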
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93633 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93531 91177308-0d34-0410-b5e6-96231b3b80d8
This patch also cleans up code that expects there to be a bitcast in the first argument and testcases that call llvm.dbg.declare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93504 91177308-0d34-0410-b5e6-96231b3b80d8
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).
This is causing LLVM to perform incorrect xforms for code like:
void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl){
double mh, ml;
double c = 134217729.0;
double up, u1, u2, vp, v1, v2;
up = xh*c;
u1 = (xh - up) + up;
u2 = xh - u1;
vp = yh*c;
v1 = (yh - vp) + vp;
v2 = yh - v1;
mh = xh*yh;
ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
ml += xh*yl + xl*yh;
*rhi = mh + ml;
*rlo = (mh - (*rhi)) + ml;
}
The last line was optimized away, but rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it has
been rounded to double precision.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93369 91177308-0d34-0410-b5e6-96231b3b80d8
different BlockAddress labels, but nothing semantically important.
Add a FIXME that BlockAddress codegen is broken if the LLVM BB has
an empty name (e.g. strip was run).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93303 91177308-0d34-0410-b5e6-96231b3b80d8
For now, this pass is fairly conservative. It only performs the replacement when both the pre- and post-extension values are used in the block. It will miss cases where the post-extension values are live but not used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93278 91177308-0d34-0410-b5e6-96231b3b80d8
in JT.
2) When cloning blocks for PHI or xor conditions, use
instsimplify to simplify the code as we go. This allows us to
squish common cases early in JT, which opens up opportunities for
subsequent iterations, and allows it to completely simplify the
testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93253 91177308-0d34-0410-b5e6-96231b3b80d8
condition is a xor with a phi node. This eliminates nonsense
like this from 176.gcc in several places:
LBB166_84:
testl %eax, %eax
- setne %al
- xorb %cl, %al
- notb %al
- testb $1, %al
- je LBB166_85
+ je LBB166_69
+ jmp LBB166_85
This is rdar://7391699
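A minimal (hypothetical) IR sketch of the shape being threaded: because the branch condition is an xor of a phi, each predecessor of the merge block already determines the branch target.
define i32 @f(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  %p = phi i1 [ true, %a ], [ false, %b ]
  %cond = xor i1 %p, true      ; condition is an xor with a phi node
  br i1 %cond, label %then, label %else
then:
  ret i32 1
else:
  ret i32 0
}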
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93221 91177308-0d34-0410-b5e6-96231b3b80d8
has an immediate with at least 32 bits of leading zeros, to avoid needing to
materialize that immediate in a register first.
FileCheckize, tidy, and extend a testcase to cover this case.
This fixes rdar://7527390.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93160 91177308-0d34-0410-b5e6-96231b3b80d8
new AsmPrinter. This is perhaps less elegant than describing them
in terms of MOV32r0 and subreg operations, but it allows the
current register allocator to rematerialize them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93158 91177308-0d34-0410-b5e6-96231b3b80d8
ignore alignment requirements for SIMD memory operands. This
is useful on architectures like the AMD 10h that do not trap on
unaligned references if a status bit is twiddled at startup time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93151 91177308-0d34-0410-b5e6-96231b3b80d8
BitsToClear case. This allows it to promote expressions which have an
and/or/xor after the lshr, promoting cases like test2 (from PR4216)
and test3 (random example extracted from a spec benchmark).
clang now compiles the code in PR4216 into:
_test_bitfield: ## @test_bitfield
movl %edi, %eax
orl $194, %eax
movl $4294902010, %ecx
andq %rax, %rcx
orl $32768, %edi
andq $39936, %rdi
movq %rdi, %rax
orq %rcx, %rax
ret
instead of:
_test_bitfield: ## @test_bitfield
movl %edi, %eax
orl $194, %eax
movl $4294902010, %ecx
andq %rax, %rcx
shrl $8, %edi
orl $128, %edi
shlq $8, %rdi
andq $39936, %rdi
movq %rdi, %rax
orq %rcx, %rax
ret
which is still not great, but is progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93145 91177308-0d34-0410-b5e6-96231b3b80d8
new BitsToClear result which allows us to start promoting
expressions that end with a lshr-by-constant. This is
conservatively correct and better than what we had before
(see testcases) but still needs to be extended further.
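A minimal sketch of the kind of expression this now promotes (hypothetical names; the shift and mask constants are arbitrary):
; before: an expression tree ending in a lshr-by-constant feeds a zext
define i64 @before(i32 %x) {
  %s = lshr i32 %x, 8
  %m = and i32 %s, 255
  %z = zext i32 %m to i64
  ret i64 %z
}
; after promotion: the equivalent computation done directly in the wide type
define i64 @after(i32 %x) {
  %w = zext i32 %x to i64
  %s = lshr i64 %w, 8
  %m = and i64 %s, 255
  ret i64 %m
}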
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93144 91177308-0d34-0410-b5e6-96231b3b80d8
the zext dest type. This allows us to handle test52/53 in cast.ll,
and allows llvm-gcc to generate much better code for PR4216 in -m64
mode:
_test_bitfield: ## @test_bitfield
orl $32962, %edi
movl %edi, %eax
andl $-25350, %eax
ret
This also fixes a bug handling vector extends, ensuring that the
mask produced is a vector constant, not an integer constant.
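For the vector part of the fix, a minimal (hypothetical) sketch of the requirement: the clearing mask must itself be a vector constant.
; masking a promoted <2 x i32> value must use a vector constant such as
; <i32 65535, i32 65535>, not a scalar i32 constant
define <2 x i32> @vector_mask(<2 x i32> %wide) {
  %m = and <2 x i32> %wide, <i32 65535, i32 65535>
  ret <2 x i32> %m
}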
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93127 91177308-0d34-0410-b5e6-96231b3b80d8
elimination of a sign extend to be a win, which simplifies
the client of CanEvaluateSExtd, and allows us to eliminate
more casts (examples taken from real code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93109 91177308-0d34-0410-b5e6-96231b3b80d8
lshr+ashr instead of trunc+sext. We want to avoid type
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93107 91177308-0d34-0410-b5e6-96231b3b80d8
1) don't try to optimize a sext or zext that is only used by a trunc, let
the trunc get optimized first. This avoids some pointless effort in
some common cases since instcombine scans down a block in the first pass.
2) Change the cost model for zext elimination to consider an 'and' cheaper
than a zext (see the sketch below). This allows us to do it more aggressively,
and sets up the next patch to simplify the code quite a bit.
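A minimal sketch of why the 'and' is considered cheaper (hypothetical names): the zext of a narrow value can be replaced by a single mask on a value that is already available in the wide type.
; instead of truncating and zero-extending...
define i32 @with_zext(i32 %wide) {
  %n = trunc i32 %wide to i16
  %z = zext i16 %n to i32
  ret i32 %z
}
; ...stay in the wide type and clear the high bits with an 'and'
define i32 @with_and(i32 %wide) {
  %m = and i32 %wide, 65535
  ret i32 %m
}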
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93097 91177308-0d34-0410-b5e6-96231b3b80d8
R11, and then asserting that the target was in R9. Since R9 isn't reserved for
the target anymore, and is used as an argument, this patch changes the
assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93065 91177308-0d34-0410-b5e6-96231b3b80d8
really does need to be a vector type, because
TargetLowering::getOperationAction for SIGN_EXTEND_INREG uses that type,
and it needs to be able to distinguish between vectors and scalars.
Also, fix some more issues with legalization of vector casts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93043 91177308-0d34-0410-b5e6-96231b3b80d8
result int by 8 for the first byte. While normally harmless,
if the result is smaller than a byte, this shift is invalid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93018 91177308-0d34-0410-b5e6-96231b3b80d8
that feeds into a zext, similar to the patch I did yesterday for sext.
There is a lot of room for extension beyond this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92962 91177308-0d34-0410-b5e6-96231b3b80d8
When folding an and(any_ext(load)), both the any_ext and the
load have to have only a single use.
This removes the anyext-uses.ll testcase which started failing
because it is unreduced and unclear what it is testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92950 91177308-0d34-0410-b5e6-96231b3b80d8
to an element of a vector in a static ctor) which occurs with an
unrelated patch I'm testing. Annoyingly, EvaluateStoreInto basically
does exactly the same stuff as InsertElement constant folding, but it
now handles vectors, and you can't insertelement into a vector. It
would be 'really nice' if GEP into a vector were not legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92889 91177308-0d34-0410-b5e6-96231b3b80d8
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately, this simple change sends dag combine into an infinite loop. The problem is that the shrink-demanded-ops optimization tends to canonicalize expressions in the opposite manner. That is badness. This patch disables those optimizations in dag combine; instead, the shrinking is done as a late pass in sdisel.
This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
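At the IR level the fold corresponds to the following (hypothetical names; the combine itself operates on the selection DAG):
; before: the operation is performed on the truncated values
define i16 @narrow_add(i32 %x, i32 %y) {
  %tx = trunc i32 %x to i16
  %ty = trunc i32 %y to i16
  %r = add i16 %tx, %ty
  ret i16 %r
}
; after (OP (trunc x), (trunc y)) -> (trunc (OP x, y)): one trunc of the wide op
define i16 @wide_add(i32 %x, i32 %y) {
  %r = add i32 %x, %y
  %t = trunc i32 %r to i16
  ret i16 %t
}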
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92849 91177308-0d34-0410-b5e6-96231b3b80d8
phi nodes when deciding which pointers point to local memory.
I actually checked long ago how useful this is, and it isn't
very: it hardly ever fires in the testsuite, but since Chris
wants it here it is!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92836 91177308-0d34-0410-b5e6-96231b3b80d8
memcpy, memset and other intrinsics that only access their arguments
to be readnone if the intrinsic's arguments all point to local memory.
This improves the testcase in the README to readonly, but it could in
theory be made readnone; however, this would involve more sophisticated
analysis that looks through the memcpy.
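A minimal (hypothetical) sketch of the idea, written with the current memcpy intrinsic signature: every pointer the call touches is a local alloca, so the function as a whole cannot read or write memory visible to its callers.
declare void @llvm.memcpy.p0.p0.i64(ptr, ptr, i64, i1)
define i32 @only_local_memory() {
  %src = alloca i32
  %dst = alloca i32
  store i32 7, ptr %src
  ; both memcpy arguments point to local memory
  call void @llvm.memcpy.p0.p0.i64(ptr %dst, ptr %src, i64 4, i1 false)
  %v = load i32, ptr %dst
  ret i32 %v
}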
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92829 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, instcombine would only promote an expression tree to
the larger type if doing so eliminated two casts. This is because
it would otherwise need to manually redo the sign extend after the promoted
expression tree with two shifts. Now, we keep track of whether the result of
the computation is going to be properly sign extended already. If
so, we can unconditionally promote the expression, which allows us
to zap more sext's.
This implements rdar://6598839 (aka gcc pr38751)
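A minimal sketch of the manual two-shift sign extend that the old cost model was accounting for (hypothetical name; shown here as shl+ashr, one standard way to re-sign-extend the low bits in place):
; redo the sign extend of the low 16 bits of a value promoted to i32
define i32 @manual_sext(i32 %promoted) {
  %s = shl i32 %promoted, 16
  %r = ashr i32 %s, 16
  ret i32 %r
}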
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92815 91177308-0d34-0410-b5e6-96231b3b80d8