after checking for a GEP, so that it matches what GetUnderlyingObject
does. This fixes an obscure bug turned up by bugpoint in the testcase
for PR9931.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131971 91177308-0d34-0410-b5e6-96231b3b80d8
non-zero.
- Teach X86 cmov optimization to eliminate the cmov from ctlz, cttz extension
when the source of X86ISD::BSR / X86ISD::BSF is proven to be non-zero.
rdar://9490949
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131948 91177308-0d34-0410-b5e6-96231b3b80d8
The following improvements are accomplished as a result of applying this patch:
- Fixed frame objects' offsets (relative to either the virtual frame pointer or
the stack pointer) are set before instruction selection is completed. There is
no need to wait until Prologue/Epilogue Insertion is run to set them.
- Calculation of final offsets of fixed frame objects is straightforward. It is
no longer necessary to assign negative offsets to fixed objects for incoming
arguments in order to distinguish them from the others.
- Since a fixed object has its relative offset set during instruction
selection, there is no need to conservatively set its alignment to 4.
- It is no longer necessary to reorder non-fixed frame objects in
MipsFrameLowering::adjustMipsStackFrame.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131915 91177308-0d34-0410-b5e6-96231b3b80d8
aligned.
Teach memcpyopt to not give up all hope when confronted with an underaligned
memcpy feeding an overaligned byval. If the *source* of the memcpy can be
determined to be adequately aligned, or if it can be forced to be, we can
eliminate the memcpy.
This addresses PR9794. We now compile the example into:
define i32 @f(%struct.p* nocapture byval align 8 %q) nounwind ssp {
entry:
%call = call i32 @g(%struct.p* byval align 8 %q) nounwind
ret i32 %call
}
in both x86-64 and x86-32 mode. We still don't get a tailcall though,
because tailcalls apparently can't handle byval.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131884 91177308-0d34-0410-b5e6-96231b3b80d8
Modified the .td file patch supplied by Jyun-Yan You, added a test case, and
modified ARMDisassemblerCore.cpp a little bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131859 91177308-0d34-0410-b5e6-96231b3b80d8
failing to form a memset, then having to delete it" but my approximation
isn't safe for self-recurrent loops. Instead of doing a hack, just
do it the right way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131858 91177308-0d34-0410-b5e6-96231b3b80d8
I also changed -simplifycfg, -jump-threading and -codegenprepare to use this to produce slightly better code without any extra cleanup passes (AFAICT this was the only place in -simplifycfg where now-dead conditions of replaced terminators weren't being cleaned up). The only other user of this function is -sccp, but I didn't read that thoroughly enough to figure out whether it might be holding pointers to instructions that could be deleted by this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131855 91177308-0d34-0410-b5e6-96231b3b80d8
causing it to get into infinite loops when it would widen a
load (which inevitably leaves dead loads around).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131847 91177308-0d34-0410-b5e6-96231b3b80d8
Original log message:
When BasicAA can determine that two pointers have the same base but
differ by a dynamic offset, return PartialAlias instead of MayAlias.
See the comment in the code for details. This fixes PR9971.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131809 91177308-0d34-0410-b5e6-96231b3b80d8
It's better to do this in codegen; mul.with.overflow(X, 2) is more canonical because it has only one use of "X".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131798 91177308-0d34-0410-b5e6-96231b3b80d8
differ by a dynamic offset, return PartialAlias instead of MayAlias.
See the comment in the code for details. This fixes PR9971.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131781 91177308-0d34-0410-b5e6-96231b3b80d8
text section.
Assume the following bit of annotated assembly:
.section .data.rel.ro,"aw",%progbits
.align 2
.LAlpha:
.long startval(GOTOFF)
.text
.align 2
.type main,%function
.align 4
main: ;;; assume "main" starts at offset 0x20
0x0 push {r11, lr}
0x4 movw r0, :lower16:(.LAlpha-(.LBeta+8))
;;; ==> .AddrOf(.LAlpha) - ((.AddrOf(.LBeta) - .AddrOf(".")) + 8)
;;; ==> ??? - ((16-4) + 8) = -20
0x8 movt r0, :upper16:(.LAlpha-(.LBeta+8))
;;; ==> .AddrOf(.LAlpha) - ((.AddrOf(.LBeta) - .AddrOf(".")) + 8)
;;; ==> ??? - ((16-8) + 8) = -16
0xc ... blah
.LBeta:
0x10 add r0, pc, r0
0x14 ... blah
.LGamma:
0x18 add r1, pc, r1
The above snippet results in the following relocs in the .o file for the
first pair of movw/movt instructions
00000024 R_ARM_MOVW_PREL_NC .LAlpha
00000028 R_ARM_MOVT_PREL .LAlpha
And the encoded instructions in the .o file for main: must be
00000020 <main>:
20: e92d4800 push {fp, lr}
24: e30f0fec movw r0, #65516 ; 0xffec i.e. -20
28: e34f0ff0 movt r0, #65520 ; 0xfff0 i.e. -16
However, llc (prior to this commit) generates the following sequence
00000020 <main>:
20: e92d4800 push {fp, lr}
24: e30f0fec movw r0, #65516 ; 0xffec - i.e. -20
28: e34f0fff movt r0, #65535 ; 0xffff - i.e. -1
What has to happen in the ArmAsmBackend is that if the relocation is PC
relative, the 16 bits encoded as part of movw and movt must be both addends,
not addresses. It makes sense to encode addresses by right shifting the value
by 16, but the result is incorrect for PIC.
i.e., the right shift by 16 for movt is ONLY valid for the NON-PCRel case.
This change agrees with what GNU as does, and makes the PIC code run.
MC/ARM/elf-movt.s covers this case.
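For illustration, a minimal C++ sketch of the rule (hypothetical helper, not
the actual ARMAsmBackend code; assumes each fixup already carries its own
resolved PC-relative value):

  #include <cstdint>

  // movw/movt each encode a 16-bit immediate. An absolute address is
  // split across the pair, so the movt half is (value >> 16). For a
  // PC-relative pair, each instruction carries its own 16-bit addend;
  // shifting here would turn the movt addend -16 (0xfffffff0) into -1,
  // which is exactly the bad encoding shown above.
  uint16_t movtImmediate(uint32_t Value, bool IsPCRel) {
    if (IsPCRel)
      return static_cast<uint16_t>(Value & 0xffff); // addend, unshifted
    return static_cast<uint16_t>(Value >> 16);      // high half of address
  }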
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131674 91177308-0d34-0410-b5e6-96231b3b80d8
x86_64 sibcall logic. I've filed PR9943 for the sibcall problem, and
this patch alters the testcase to work around the flaw. When PR9943
is fixed, this patch should be reverted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131557 91177308-0d34-0410-b5e6-96231b3b80d8
happily accept things like "sext <2 x i32> to <999 x i64>". It would
also accept "sext <2 x i32> to i64", though the verifier would catch
that later. Fixed by having castIsValid check that vector lengths match
except when doing a bitcast. (2) When creating a cast instruction, check
that the cast is valid (this was already done when creating constexpr
casts). While there, replace getScalarSizeInBits (used to allow more
vector casts) with getPrimitiveSizeInBits in getCastOpcode and isCastable,
since vector-to-vector casts are now handled explicitly by deferring to the
element types; i.e. this bit should result in no functional change.
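A hedged sketch of the length rule in C++ (invented helper name and
signature, not the actual castIsValid interface):

  // A vector-to-vector cast must preserve the element count; bitcast is
  // the exception and is judged on total bit width instead. This is the
  // check that rejects "sext <2 x i32> to <999 x i64>".
  bool vectorCastLengthsOK(bool IsBitCast,
                           unsigned SrcElts, unsigned SrcEltBits,
                           unsigned DstElts, unsigned DstEltBits) {
    if (IsBitCast)
      return SrcElts * SrcEltBits == DstElts * DstEltBits;
    return SrcElts == DstElts;
  }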
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131532 91177308-0d34-0410-b5e6-96231b3b80d8
As an example, the change to InstCombineCalls catches a common case where a call to a bitcast of a function is rewritten.
Chris, does this approach look reasonable?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131516 91177308-0d34-0410-b5e6-96231b3b80d8
When instructions are deleted, they leave tombstone SlotIndex entries.
The isZeroLength method should ignore these null indexes.
This causes RABasic to sometimes spill a callee-saved register in the
abi-isel.ll test, so don't run that test with -regalloc=basic. Prioritizing
register allocation according to spill weight can cause more registers to be
used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131436 91177308-0d34-0410-b5e6-96231b3b80d8
("T is 1 if the target symbol S has type STT_FUNC and the
symbol addresses a Thumb instruction; it is 0 otherwise."
from "ELF for the ARM Architecture" 4.7.1.2)
Patch by Koan-Sin Tan!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131406 91177308-0d34-0410-b5e6-96231b3b80d8
by non-CMP expressions. The executable test case (129821) would test
this as well, if we had an "-O0 -disable-arm-fast-isel" LLVM-GCC
tester. Alas, the ARM assembly would be very difficult to check with
FileCheck.
The thumb2-cbnz.ll test is affected; it generates larger code (tst.w
vs. cmp #0), but I believe the new version is correct.
rdar://problem/9298790
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131261 91177308-0d34-0410-b5e6-96231b3b80d8
If there is a store after the load node, then there is a chain, which means
that there is another user. Thus, asking hasOneUse would fail. Instead we
ask hasNUsesOfValue on the 'data' value.
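As a sketch (variable names assumed, not the commit's exact code): the chain
result keeps a user alive even when only one node consumes the loaded data,
so the query has to be made on result 0 specifically:

  // LoadNode->hasOneUse() also counts users of the chain result, e.g. a
  // store chained after the load. Asking about the data result (value #0)
  // alone gives the intended answer.
  if (LoadNode->hasNUsesOfValue(1, 0)) {
    // the loaded data is consumed by exactly one node; folding is safe
  }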
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131183 91177308-0d34-0410-b5e6-96231b3b80d8
at the start of basic blocks to their common predecessor. It's actually quite
common (e.g. about 50 times in JM/lencod) and has been shown to be a nice code size
benefit. e.g.
pushq %rax
testl %edi, %edi
jne LBB0_2
## BB#1:
xorb %al, %al
popq %rdx
ret
LBB0_2:
xorb %al, %al
callq _foo
popq %rdx
ret
=>
pushq %rax
xorb %al, %al
testl %edi, %edi
je LBB0_2
## BB#1:
callq _foo
LBB0_2:
popq %rdx
ret
rdar://9145558
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131172 91177308-0d34-0410-b5e6-96231b3b80d8
this clang will use .debug_frame in, for example,
clang -g -c -m32 test.c
This matches gcc's behaviour. It looks like .debug_frame is a bit bigger
than .eh_frame, but has the big advantage of not being allocated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131140 91177308-0d34-0410-b5e6-96231b3b80d8
often expressed as "x >= y ? x : y", there is a good chance we can extract
the existing "x >= y" from it and use that as a replacement for "max(x,y)==x".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131049 91177308-0d34-0410-b5e6-96231b3b80d8
This can't be just an assertion; users can always write impossible inline
assembly. Such an assembly statement should be included in the error message.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131024 91177308-0d34-0410-b5e6-96231b3b80d8
assert in the bitcode writer. No change needed because the ValueEnumerator holds
a whole-module numbering anyhow. Fixes PR9857!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131016 91177308-0d34-0410-b5e6-96231b3b80d8
return the pointer being dereferenced, it returns the pointee, but a call
might return the pointer itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130979 91177308-0d34-0410-b5e6-96231b3b80d8
I tested both gdb on a bootstrapped clang and the gdb testsuite on OS X (Snow Leopard)
and both are happy using __eh_frame.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130937 91177308-0d34-0410-b5e6-96231b3b80d8
Most of these tests require a single mov instruction that can come either before
or after a 2-addr instruction. -join-physregs changes the behavior, but the
results are equivalent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130891 91177308-0d34-0410-b5e6-96231b3b80d8
landing pad as its successor.
SjLj exception handling jumps to the correct landing pad via a switch statement
that's generated right before code-gen. Loosen the constraint in the machine
instruction verifier to allow for this. Note, this isn't the most rigorous check
since we cannot determine where that switch statement came from. But it's
marginally better than turning this check off when SjLj exceptions are used.
<rdar://problem/9187612>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130881 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
Teach MachineCSE how to do simple cross-block CSE involving physregs. This allows, for example, eliminating duplicate cmpl's on x86. Part of rdar://problem/8259436 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130877 91177308-0d34-0410-b5e6-96231b3b80d8
These tests all follow the same pattern:
mov r2, r0
movs r0, #0
$CMP r2, r1
it eq
moveq r0, #1
bx lr
The first 'mov' can be eliminated by rematerializing 'movs r0, #0' below the
test instruction:
$CMP r0, r1
mov.w r0, #0
it eq
moveq r0, #1
bx lr
So far, only physreg coalescing can do that. The register allocators won't yet
split live ranges just to eliminate copies. They can learn, but this particular
problem is not likely to show up in real code. It only appears because r0 is
used for both the function argument and return value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130858 91177308-0d34-0410-b5e6-96231b3b80d8
it is both inefficient and unexpected by dwarfdump. Change to
a DW_FORM_data4.
While in here, change the predicate name to reflect that the position
is not really absolute (it is an offset), just that the linker needs a
relocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130846 91177308-0d34-0410-b5e6-96231b3b80d8
but according to my super-optimizer there are only two missed simplifications
of the -instsimplify kind when compiling bzip2, and this is one of them. It amuses
me to have bzip2 be perfectly optimized as far as instsimplify goes!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130840 91177308-0d34-0410-b5e6-96231b3b80d8
The basic allocator is really bad about hinting, so it doesn't eliminate all
copies when physreg joining is disabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130817 91177308-0d34-0410-b5e6-96231b3b80d8
max(a,b) >= a -> true. According to my super-optimizer, these are
by far the most common simplifications (of the -instsimplify kind)
that occur in the testsuite and aren't caught by -std-compile-opts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130780 91177308-0d34-0410-b5e6-96231b3b80d8
model constants which can be added to base registers via add-immediate
instructions which don't require an additional register to materialize
the immediate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130743 91177308-0d34-0410-b5e6-96231b3b80d8
This automagically provides a transform noticed by my super-optimizer
as occurring quite often: "rem x, (select cond, x, 1)" -> 0.
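A quick illustration of why the fold is valid (a sketch, not the commit's
code): whichever arm the select takes, the remainder is zero:

  // x % x == 0 (for x != 0; x == 0 makes the rem undefined anyway), and
  // x % 1 == 0, so the whole expression folds to 0.
  unsigned remOfSelect(unsigned x, bool cond) {
    return x % (cond ? x : 1);
  }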
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130694 91177308-0d34-0410-b5e6-96231b3b80d8
Currently the output should be almost identical to the one produced by CodeGen
to make the transition easier.
The only two differences I know of are:
* Some files get an extra advance loc of size 0. This will be fixed when
relaxations are enabled.
* The optimization of declaring an EH symbol as an external variable is not
implemented. This is a subset of adding the nounwind attribute, so if we
really want this at -O0 we should probably do it at the IL level.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130623 91177308-0d34-0410-b5e6-96231b3b80d8
This obviously helps a lot if the division would be turned into a libcall
(think i64 udiv on i386), but div is also one of the few remaining instructions
on modern CPUs that become more expensive when the bitwidth gets bigger.
This also helps register pressure on i386 when dividing chars: divb needs
two 8-bit parts of a 16-bit register as input, whereas divl uses two registers.
int foo(unsigned char a) { return a/10; }
int bar(unsigned char a, unsigned char b) { return a/b; }
compiles into (x86_64)
_foo:
imull $205, %edi, %eax
shrl $11, %eax
ret
_bar:
movzbl %dil, %eax
divb %sil, %al
movzbl %al, %eax
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130615 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a rather obscure crash caused by ARM fast-isel generating code which redefines a register.
rdar://problem/9338332 .
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130539 91177308-0d34-0410-b5e6-96231b3b80d8
a nice and tidy:
%x1 = load i32* %0, align 4
%1 = icmp eq i32 %x1, 1179403647
br i1 %1, label %if.then, label %if.end
instead of doing lots of loads and branches. May the FreeBSD bootloader
long fit in its allocated space.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130416 91177308-0d34-0410-b5e6-96231b3b80d8
wider load would allow elimination of subsequent loads, and when the wider
load is still a native integer type. This eliminates a ton of loads on
various benchmarks involving struct fields, though it is somewhat hobbled
by clang not being very aggressive about field alignment.
This is yet another step along the way towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130390 91177308-0d34-0410-b5e6-96231b3b80d8
Modified LinearFunctionTestReplace to push the condition on the dead
list instead of eagerly deleting it. This can cause unnecessary
IV rewrites, which should have no effect on codegen and will not be an
issue once we stop generating canonical IVs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130340 91177308-0d34-0410-b5e6-96231b3b80d8
successors) and use inverse depth-first search to traverse the BBs. However,
that doesn't work when the CFG has infinite loops. Simply doing a linear
traversal of all BBs works just fine.
rdar://9344645
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130324 91177308-0d34-0410-b5e6-96231b3b80d8
only check arguments with pointer types. Update the documentation
of IntrReadArgMem to reflect this.
While here, add support for TBAA tags on intrinsic calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130317 91177308-0d34-0410-b5e6-96231b3b80d8
We cannot rely on the <imp-def> operands added by LiveIntervals in all cases as
demonstrated by the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130313 91177308-0d34-0410-b5e6-96231b3b80d8
more callee-saved registers and introduce copies. Only allow it if scheduling
a node above calls would end up lessening register pressure.
Call operands also have added ABI restrictions for register allocation, so be
extra careful with hoisting them above calls.
rdar://9329627
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130245 91177308-0d34-0410-b5e6-96231b3b80d8
1. Only run the early (in the module pass pipeline) instcombine/simplifycfg
if the "unit at a time" passes that they clean up after have run.
2. Move the "clean up after the unroller" pass to the very end of the
function-level pass pipeline. Loop unroll uses instsimplify now,
so it doesn't create a ton of trash. Moving instcombine later allows
it to clean up after opportunities are exposed by GVN, DSE, etc.
3. Introduce some phase ordering tests for things that are specifically
intended to be simplified by the full optimizer as a whole.
This resolves PR2338, and is progress towards PR6627, which will be
generating code that looks similar to test2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130241 91177308-0d34-0410-b5e6-96231b3b80d8
when X has multiple uses. This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use. For example,
this can disable load folding.
This is inching towards resolving PR6627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130238 91177308-0d34-0410-b5e6-96231b3b80d8
translation fails. We were bailing out in some cases that would
cause us to miss GVN'ing some non-local cases away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130206 91177308-0d34-0410-b5e6-96231b3b80d8
return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
int test(void *P) {
int tmp = *(unsigned int*)P;
return tmp+*((unsigned char*)P+1);
}
into:
_test: ## @test
movl (%rdi), %ecx
movzbl %ch, %eax
addl %ecx, %eax
ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
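Sketching the forwarding arithmetic in C++ (hypothetical helper; assumes a
little-endian target, which is also what the movzbl-from-%ch trick relies on):

  #include <cstdint>

  // Recover a narrow load at byte offset Off inside an already-available
  // wider loaded value: shift the lower bytes away, then truncate to the
  // narrow width. For the example above: (WideVal >> 8) & 0xff.
  uint32_t forwardNarrowLoad(uint32_t WideVal, unsigned Off,
                             unsigned NarrowBytes) {
    uint32_t Shifted = WideVal >> (Off * 8);
    if (NarrowBytes >= 4)
      return Shifted;
    return Shifted & ((1u << (NarrowBytes * 8)) - 1);
  }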
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130180 91177308-0d34-0410-b5e6-96231b3b80d8
these was just one line of a file. Explicitly set the eol-style property on the
files to try and ensure this fix stays.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130125 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes Thumb2 ADCS and SBCS lowering: <rdar://problem/9275821>.
t2ADCS/t2SBCS are now pseudo instructions, consistent with ARM, so the
assembly printer correctly prints the 's' suffix.
Fixes Thumb2 adde -> SBC matching to check for live/dead carry flags.
Fixes the internal ARM machine opcode mnemonic for ADCS/SBCS.
Fixes ARM SBC lowering to check for live carry (potential bug).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130048 91177308-0d34-0410-b5e6-96231b3b80d8
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
would cause an assert.
2. The load may not be used by the current chain of instructions,
and we could move it past a side-effecting instruction. Change
how we process uses to define the problem away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130018 91177308-0d34-0410-b5e6-96231b3b80d8
On x86 this allows folding a load into the cmp, greatly reducing register pressure.
movzbl (%rdi), %eax
cmpl $47, %eax
->
cmpb $47, (%rdi)
This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130005 91177308-0d34-0410-b5e6-96231b3b80d8
This tends to happen a lot with bitfield code generated by clang. A simple example for x86_64 is
uint64_t foo(uint64_t x) { return (x&1) << 42; }
which used to compile into bloated code:
shlq $42, %rdi ## encoding: [0x48,0xc1,0xe7,0x2a]
movabsq $4398046511104, %rax ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0x00]
andq %rdi, %rax ## encoding: [0x48,0x21,0xf8]
ret ## encoding: [0xc3]
with this patch we can fold the immediate into the and:
andq $1, %rdi ## encoding: [0x48,0x83,0xe7,0x01]
movq %rdi, %rax ## encoding: [0x48,0x89,0xf8]
shlq $42, %rax ## encoding: [0x48,0xc1,0xe0,0x2a]
ret ## encoding: [0xc3]
It's possible to save another byte by using 'andl' instead of 'andq' but I currently see no way of doing
that without making this code even more complicated. See the TODOs in the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129990 91177308-0d34-0410-b5e6-96231b3b80d8
add <rd>, sp, #<imm8>
ldr <rd>, [sp, #<imm8>]
when the offset from sp is a multiple of 4 and in the range 0-1020.
This saves code size by utilizing 16-bit instructions.
rdar://9321541
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129971 91177308-0d34-0410-b5e6-96231b3b80d8