Add an assertion to linear scan to prevent it from allocating registers outside
the register class.
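A minimal sketch of the kind of check this adds, assuming a standalone helper with the candidate physical register and the interval's register class in hand (the real assertion lives inside the linear scan allocator itself; the helper name and include path are illustrative):
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include <cassert>
// Illustrative only: reject any physical register assignment that falls
// outside the virtual register's register class.
static void checkAssignment(const llvm::TargetRegisterClass *RC,
                            unsigned PhysReg) {
  assert(RC->contains(PhysReg) &&
         "linear scan assigned a register outside the register class");
}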
<rdar://problem/9183021>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128254 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions were changed to not embed the addressing mode within the MC instructions.
We also need to update the corresponding assert statement. Also add a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128240 91177308-0d34-0410-b5e6-96231b3b80d8
Set the encoding bits to {0,?,?,0}, not 0. Also delegate the disassembly of ADR to
the more generic ADDri/SUBri instructions, and add a test case for that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128234 91177308-0d34-0410-b5e6-96231b3b80d8
entries being compared may not be ARMConstantPoolValue. Without checking
whether they are ARMConstantPoolValue first, and if the stars and moons
are aligned properly, the equality test may return true (when the first few
words of two Constants' values happen to be identical) and very bad things can
happen.
rdar://9125354
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128203 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions were changed to not embed the addressing mode within the MC instructions.
We also need to update the corresponding assert statement. Also add two test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128191 91177308-0d34-0410-b5e6-96231b3b80d8
were incomplete. The assert statement needs to be updated and the operand index increment is wrong.
Fix the bad logic and add some sanity checking to detect bad instruction encoding;
and add a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128186 91177308-0d34-0410-b5e6-96231b3b80d8
int tries = INT_MAX;
while (tries > 0) {
tries--;
}
The check should be:
subs r4, #1
cmp r4, #0
bgt LBB0_1
The subs can set the overflow (V) bit when r4 is INT_MAX+1 (which loop
canonicalization apparently produces in this case). The cmp with #0 would have
cleared V while leaving the N and Z bits unchanged. Since BGT depends on the V
bit, i.e. (N == V) && !Z, it is not safe to eliminate the cmp #0.
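To make the flag behavior concrete, here is a small standalone C++ illustration of the difference (the values and helper logic are illustrative, not taken from the generated code):
#include <cstdint>
#include <cstdio>
int main() {
  // "subs r4, #1" with r4 == INT_MAX+1 (INT32_MIN after wrap) overflows.
  int32_t r4 = INT32_MIN;
  int64_t wide = (int64_t)r4 - 1;      // mathematically exact result
  int32_t res = (int32_t)wide;         // what subs writes back: INT32_MAX
  bool N = res < 0, Z = res == 0, V = wide != res;  // V is set here
  bool gtAfterSubs = !Z && (N == V);   // false, because V == 1
  // "cmp r4, #0" recomputes the flags from res - 0, which cannot overflow,
  // so V is cleared while N and Z are unchanged.
  bool gtAfterCmp = !Z && (N == false);
  std::printf("GT after subs: %d, GT after cmp #0: %d\n",
              gtAfterSubs, gtAfterCmp);
  return 0;
}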
rdar://9172742
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128179 91177308-0d34-0410-b5e6-96231b3b80d8
VFP Load/Store Multiple Instructions used to embed the IA/DB addressing mode within the
MC instruction; that has been changed so that now, for example, VSTMDDB_UPD and VSTMDIA_UPD
are two separate instructions. Update ARMDisassemblerCore.cpp's DisassembleVFPLdStMulFrm()
to reflect the change.
Also add a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128103 91177308-0d34-0410-b5e6-96231b3b80d8
the alias of an InstAlias instead of the thing being aliased, because we need to
know the features that are valid for an InstAlias.
This is part of a work-in-progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127986 91177308-0d34-0410-b5e6-96231b3b80d8
to have a single return block (at least getting there) for optimizations. This
is general goodness but it would prevent some tail call optimizations.
One specific case is code like this:
int f1(void);
int f2(void);
int f3(void);
int f4(void);
int f5(void);
int f6(void);
int foo(int x) {
switch(x) {
case 1: return f1();
case 2: return f2();
case 3: return f3();
case 4: return f4();
case 5: return f5();
case 6: return f6();
}
}
=>
LBB0_2: ## %sw.bb
callq _f1
popq %rbp
ret
LBB0_3: ## %sw.bb1
callq _f2
popq %rbp
ret
LBB0_4: ## %sw.bb3
callq _f3
popq %rbp
ret
This patch teaches codegenprep to duplicate returns when the return value
is a phi whose operands are produced by tail calls followed by
an unconditional branch:
sw.bb7: ; preds = %entry
%call8 = tail call i32 @f5() nounwind
br label %return
sw.bb9: ; preds = %entry
%call10 = tail call i32 @f6() nounwind
br label %return
return:
%retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
ret i32 %retval.0
This allows codegen to generate better code like this:
LBB0_2: ## %sw.bb
jmp _f1 ## TAILCALL
LBB0_3: ## %sw.bb1
jmp _f2 ## TAILCALL
LBB0_4: ## %sw.bb3
jmp _f3 ## TAILCALL
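For orientation, here is a rough standalone sketch of the condition being tested, written against the LLVM IR API; the helper name and exact structure are assumptions for illustration, not the actual CodeGenPrepare code:
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;
// Returns true if the return value is a PHI and the given predecessor feeds
// it with a tail call followed by an unconditional branch, so the "ret" could
// be duplicated into that predecessor.
static bool canDuplicateRetInto(ReturnInst *Ret, BasicBlock *Pred) {
  auto *PN = dyn_cast_or_null<PHINode>(Ret->getReturnValue());
  if (!PN)
    return false;
  auto *Br = dyn_cast<BranchInst>(Pred->getTerminator());
  if (!Br || !Br->isUnconditional())
    return false;
  auto *CI = dyn_cast_or_null<CallInst>(PN->getIncomingValueForBlock(Pred));
  return CI && CI->isTailCall() && CI->getParent() == Pred;
}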
rdar://9147433
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127953 91177308-0d34-0410-b5e6-96231b3b80d8
The relevant instruction table entries were changed some time ago to no longer take
<Rt2> as an operand. Modify ARMDisassemblerCore.cpp to accommodate the change and
add a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127935 91177308-0d34-0410-b5e6-96231b3b80d8
o A8.6.195 STR (register) -- Encoding T1
o A8.6.193 STR (immediate, Thumb) -- Encoding T1
These have been changed so that they now use different addressing modes
and thus different MC representations (Operand Infos). Modify the
disassembler to reflect the change, and add relevant tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127833 91177308-0d34-0410-b5e6-96231b3b80d8
1. The ARM Darwin *r9 call instructions were pseudo-ized recently.
Modify the ARMDisassemblerCore.cpp file to accommodate the change.
2. The disassembler was unnecessarily adding 8 to the sign-extended imm24:
imm32 = SignExtend(imm24:'00', 32); // A8.6.23 BL, BLX (immediate)
// Encoding A1
It has no business doing so. Removed the offending logic.
Add test cases to arm-tests.txt.
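For reference, a small standalone sketch of the A8.6.23 computation quoted above, i.e. plain sign extension of imm24:'00' with no extra offset added (the helper name is illustrative):
#include <cstdint>
// imm32 = SignExtend(imm24:'00', 32): shift the 24-bit field up so its sign
// bit lands in bit 31, then arithmetic-shift back down, yielding imm24 << 2.
static int32_t signExtendImm24Shifted(uint32_t imm24) {
  return (int32_t)(imm24 << 8) >> 6;
}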
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127707 91177308-0d34-0410-b5e6-96231b3b80d8
accept. If a value in the mask is out of range, VTBL uses the value 0, while
VTBX leaves the value unchanged.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127700 91177308-0d34-0410-b5e6-96231b3b80d8
register operand was erroneously added. Remove an incorrect assert which triggers the bug.
rdar://problem/9131529
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127642 91177308-0d34-0410-b5e6-96231b3b80d8
Also more cleanly separate the ARM vs. Thumb functionality. Previously, the
encoding would be incorrect for some Thumb instructions (the indirect calls).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127637 91177308-0d34-0410-b5e6-96231b3b80d8
Go ahead and add them when we might want to use them, and let
later passes remove them.
Fixes rdar://9118569
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127518 91177308-0d34-0410-b5e6-96231b3b80d8
actual instruction as the non-Darwin defs, but have different call-clobber
semantics and so need separate patterns. They don't need to duplicate the
encoding information, however.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127515 91177308-0d34-0410-b5e6-96231b3b80d8
The insufficient encoding information of the combined instruction confuses the decoder with
respect to UQADD16. Add extra logic to recover from that.
This fixes an assert reported by Sean Callanan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127354 91177308-0d34-0410-b5e6-96231b3b80d8
This is just a very first approximation of how this should be done
(e.g. ARM-only for now). More to follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127101 91177308-0d34-0410-b5e6-96231b3b80d8
D registers since the vpush list may not have gaps. Make sure the stack
adjustment instruction isn't moved between them. Ditto for vpop in
epilogues.
Sorry, can't reduce a small test case.
rdar://9043312
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126457 91177308-0d34-0410-b5e6-96231b3b80d8
The previous codegen for the slow path (when values are in VFP / NEON
registers) was incorrect if the source is NaN.
The new codegen uses the NEON vbsl instruction to copy the sign bit, e.g.
vmov.i32 d1, #0x80000000
vbsl d1, d2, d0
If NEON is not available, it uses integer instructions to copy the sign bit.
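As a reminder of the vbsl semantics relied on here, a standalone C++ equivalent of the bitwise select with the sign-bit mask (function and parameter names are illustrative):
#include <cstdint>
// "vbsl d1, d2, d0" with d1 preloaded to 0x80000000 computes, per bit,
// d1 = (d1 & d2) | (~d1 & d0): the sign bit comes from d2 and the exponent
// and mantissa bits come from d0.
static uint32_t selectSignBit(uint32_t signSource, uint32_t magnitudeSource) {
  const uint32_t mask = 0x80000000u;
  return (mask & signSource) | (~mask & magnitudeSource);
}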
rdar://9034702
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126295 91177308-0d34-0410-b5e6-96231b3b80d8
up by the dynamic linker, but it's better to use the correct instruction
to begin with.
Fixes rdar://9011034
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126176 91177308-0d34-0410-b5e6-96231b3b80d8
In other words, do not keep track of arguments' locations. The debugger (gdb) is not prepared to see line table entries for arguments; for the debugger, the "second" line table entry marks the beginning of the function body.
This requires some coordination with the debugger to get working:
- The debugger needs to be aware of the prolog_end attribute attached to line table entries.
- The compiler needs to accurately mark prolog_end in line table entries (at -O0 and at -O1+).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126155 91177308-0d34-0410-b5e6-96231b3b80d8
of testing for its presence at cmake time.
This way the build automatically regenerates the makefiles when an svn
update brings in a new sublibrary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126068 91177308-0d34-0410-b5e6-96231b3b80d8
No one uses *-mingw64. mingw-w64 is represented as {i686|x86_64}-w64-mingw32. On the LLVM side, i686 and x64 can be treated in a similar way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125747 91177308-0d34-0410-b5e6-96231b3b80d8
This is necessary to avoid a crash in certain tangled situations where a kill
flag is first correctly moved to a merged instruction, and then needs to be
moved again:
STR %R0, a...
STR %R0<kill>, b...
First becomes:
STR %R0, b...
STM a, %R0<kill>, ...
and then:
STM a, %R0, ...
STM b, %R0<kill>, ...
We can now remove the kill flag from the merged STM when needed. 8960050.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125591 91177308-0d34-0410-b5e6-96231b3b80d8
- Add custom operand matching for imod and iflags.
- Rename SplitMnemonicAndCC to SplitMnemonic since it splits more than CC
from mnemonic.
- While adding ".w" as an operand, don't change "Head" to avoid passing the
wrong mnemonic to ParseOperand.
- Add asm parser tests.
- Add disassembler tests just to make sure it can catch all cps versions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125489 91177308-0d34-0410-b5e6-96231b3b80d8
have their low bits set to zero. This allows us to optimize
out explicit stack alignment code like in stack-align.ll:test4 when
it is redundant.
Doing this causes the code generator to start turning FI+cst into
FI|cst all over the place, which is general goodness (that is the
canonical form) except that various pieces of the code generator
don't handle OR aggressively. Fix this by introducing a new
SelectionDAG::isBaseWithConstantOffset predicate, and using it
in places that are looking for ADD(X,CST). The ARM backend in
particular was missing a lot of addressing-mode folding opportunities
around OR.
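A rough sketch of what such a predicate has to check, assuming OR is treated like ADD only when the constant's set bits are known to be clear in the base operand (standalone and illustrative, not the in-tree implementation):
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;
// Illustrative only: an OR with a constant behaves like an ADD when the
// constant's set bits are known to be zero in the base, so both shapes can be
// treated as "base + constant offset".
static bool looksLikeBasePlusConstantOffset(const SelectionDAG &DAG, SDValue Op) {
  if (Op.getOpcode() != ISD::ADD && Op.getOpcode() != ISD::OR)
    return false;
  auto *C = dyn_cast<ConstantSDNode>(Op.getOperand(1));
  if (!C)
    return false;
  if (Op.getOpcode() == ISD::ADD)
    return true;
  return DAG.MaskedValueIsZero(Op.getOperand(0), C->getAPIntValue());
}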
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125470 91177308-0d34-0410-b5e6-96231b3b80d8
Teach the AsmMatcher handling to distinguish between an error custom-parsing
an operand and a failure to match. The former should propagate the error
upwards, while the latter should continue attempting to parse with
alternative matchers.
Update the ARM asm parser accordingly.
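A minimal standalone sketch of the three-way distinction (the enum and function names here are assumptions for illustration, not the matcher's actual types):
#include <functional>
#include <vector>
// A custom operand parser can report three outcomes. Only "NoMatch" should
// fall through to the next alternative matcher; "ParseFail" is a hard error
// that must be propagated upwards immediately.
enum class OperandParseResult { Success, NoMatch, ParseFail };
static OperandParseResult
tryCustomParsers(const std::vector<std::function<OperandParseResult()>> &Parsers) {
  for (const auto &Parse : Parsers) {
    OperandParseResult R = Parse();
    if (R != OperandParseResult::NoMatch)
      return R;  // matched, or a real error to report upward
  }
  return OperandParseResult::NoMatch;
}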
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125426 91177308-0d34-0410-b5e6-96231b3b80d8
This:
define float @foo(float %x, float %y) nounwind readnone {
entry:
%0 = tail call float @copysignf(float %x, float %y) nounwind readnone
ret float %0
}
Was compiled to:
vmov s0, r1
bic r0, r0, #-2147483648
vmov s1, r0
vcmpe.f32 s0, #0
vmrs apsr_nzcv, fpscr
it lt
vneglt.f32 s1, s1
vmov r0, s1
bx lr
This fails to copy the sign of -0.0f because it's lost during the float to int
conversion. Also, it's sub-optimal when the inputs are in GPR registers.
Now it uses integer and + or operations when it's profitable. And it's correct!
lsrs r1, r1, #31
bfi r0, r1, #31, #1
bx lr
rdar://8984306
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125357 91177308-0d34-0410-b5e6-96231b3b80d8
t2LDRpci with t2LDRi12.
There are a couple of problems with this.
1. The encodings for the literal and the immediate constant are different.
Note bit 7 of the literal case is 'U' so it can be negative.
2. t2LDRi12 is now narrowed to tLDRpci before the constant island pass is run.
So we end up never using the Thumb2 instruction, which ends up creating a
lot more constant islands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125074 91177308-0d34-0410-b5e6-96231b3b80d8
parsing of operands introduced in r125030. As a small note, besides using a more
generic approach, we also get more descriptive output when debugging
llvm-mc, for example:
mcr p7, #1, r5, c1, c1, #4
note: parsed instruction:
['mcr', <ARMCC::al>,
<coprocessor number: 7>,
1,
<register 73>,
<coprocessor register: 1>,
<coprocessor register: 1>,
4]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125052 91177308-0d34-0410-b5e6-96231b3b80d8
The vld1-lane, vld1-dup and vst1-lane instructions do not yet support using
post-increment versions, but all the rest of the NEON load/store instructions
should be handled now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125014 91177308-0d34-0410-b5e6-96231b3b80d8
These operations are expanded to pairs of loads or stores, and the first one
uses the address register update to produce the address for the second one.
So far, the second load/store has also updated the address register, just
for convenience, since that output has never been used. In anticipation of
actually supporting post-increment updates for these operations, this changes
the non-updating operations to use a non-updating load/store for the second
instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125013 91177308-0d34-0410-b5e6-96231b3b80d8
Unified EmitTextAttribute for both Asm and Obj emission (.cpu only).
Added the necessary Cortex-A8 related attributes for codegen compatibility tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124995 91177308-0d34-0410-b5e6-96231b3b80d8
(yes, this is different from R_ARM_CALL)
- Adds a new method getARMBranchTargetOpValue() which handles the
necessary distinction between the conditional and unconditional br/bl
needed for ARM/ELF
At least for ARM mode, the needed fixup for conditional versus unconditional
br/bl is identical, but the ARM docs and existing ARM tools expect this
reloc type...
Added a few FIXMEs for future naming fixups in ARMInstrInfo.td.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124895 91177308-0d34-0410-b5e6-96231b3b80d8
the load, then it may be legal to transform the load and store to an integer
load and store of the same width.
This is done if the target specifies the transformation as profitable. For example,
on ARM this can transform:
vldr.32 s0, []
vstr.32 s0, []
to
ldr r12, []
str r12, []
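One hypothetical source pattern that produces such a pair, where the loaded float value's only use is the store, so moving it through an integer register preserves the bits exactly:
// Hypothetical example only: the loaded value is never used in floating point
// arithmetic, so the copy can go through a GPR instead of a VFP register.
void copyFloat(float *Dst, const float *Src) {
  *Dst = *Src;
}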
rdar://8944252
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124708 91177308-0d34-0410-b5e6-96231b3b80d8
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being general nice cleanups.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124073 91177308-0d34-0410-b5e6-96231b3b80d8
1. Fixed ARM pc adjustment.
2. Fixed dynamic-no-pic codegen.
3. CSE of pc-relative loads of global addresses.
It's now enabled by default for Darwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123991 91177308-0d34-0410-b5e6-96231b3b80d8
qadd and qdadd use "rd, rm, rn"; the same applies to the 'sub' variants. This
is described in the ARM manuals and matches the encoding used by the GNU assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123975 91177308-0d34-0410-b5e6-96231b3b80d8
flags. They are still not enabled in this revision.
Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.
Generalized unit tests to work with sched-cycles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123969 91177308-0d34-0410-b5e6-96231b3b80d8
value, the "add pc" must be CSE'ed at the same time. We could follow the same
approach as T2 by adding pseudo instructions that combine the ldr + "add pc".
But the better approach is to use movw + movt (which I will enable soon), so
I'll leave this as a TODO.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123949 91177308-0d34-0410-b5e6-96231b3b80d8
in cdp/cdp2 instructions. Also increase the hack with cdp/cdp2 instructions.
- Fix the encoding of cdp/cdp2 instructions for ARM (no Thumb and Thumb2 yet) and add test cases for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123927 91177308-0d34-0410-b5e6-96231b3b80d8
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.
Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevents CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.
ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
which have the same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
an "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
to re-materialize the instruction, allows machine LICM to hoist the set of
instructions out of the loop, and makes it possible to CSE them. It's a bit
hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.
With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constant-pool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123905 91177308-0d34-0410-b5e6-96231b3b80d8
of the floating point types smaller than 64 bits. It's somewhat of a temporary
hack but forces more accurate modeling of register pressure and results
in fewer spills.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123811 91177308-0d34-0410-b5e6-96231b3b80d8
movw r0, :lower16:(L_foo$non_lazy_ptr-(LPC0_0+4))
movt r0, :upper16:(L_foo$non_lazy_ptr-(LPC0_0+4))
LPC0_0:
add r0, pc, r0
It's not yet enabled by default as some tests are failing. I suspect bugs in
downstream tools.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123619 91177308-0d34-0410-b5e6-96231b3b80d8