sldt with a memory operand should always be disambiguated as sldtw: sldtw and sldtq with a memory operand have the same effect, but sldtw is more compact. Force it to sldtw, resolving rdar://8017530
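For example (illustrative encodings, assuming 64-bit mode; not part of the original commit message):

sldt	2(%rax)		## ambiguous suffix, now forced to sldtw
sldtw	2(%rax)		## 0f 00 40 02     -- 4 bytes
sldtq	2(%rax)		## 48 0f 00 40 02  -- 5 bytes for the same store

Both suffixes store only the 16-bit LDTR selector to memory, so the shorter encoding is strictly better.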
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113186 91177308-0d34-0410-b5e6-96231b3b80d8
When the mnemonic itself is valid and the match fails on the operands, report operand errors with better location info. For example, we now report:
t.s:6:14: error: invalid operand for instruction
cwtl $1
     ^
but we fail for common cases like:
t.s:11:4: error: invalid operand for instruction
addl $1, $1
^
because we don't know if this is supposed to be the reg/imm or imm/reg
form.
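To sketch why (the candidate forms are illustrative, not from the commit): addl has, among others, an imm/reg and a reg/reg form, and each candidate blames a different operand of "addl $1, $1":

addl	$1, %eax	## ok: imm/reg form
addl	%ecx, %eax	## ok: reg/reg form
addl	$1, $1		## imm/reg blames the second operand, reg/reg the first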
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113178 91177308-0d34-0410-b5e6-96231b3b80d8
Report when an instruction fails to match because a subtarget feature was not enabled. Use this to remove a bunch of hacks from the X86AsmParser for rejecting things like popfl in 64-bit mode. These hacks weren't needed for correctness, but were important for getting a message better than "invalid instruction" when an instruction was used in the wrong mode.
This also fixes bugs where pushal was not rejected correctly in 64-bit mode (previously only pusha was).
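With the missing-feature path, the parser can report something along these lines for popfl in 64-bit mode (wording illustrative):

t.s:3:1: error: instruction requires a CPU feature not currently enabled
popfl
^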
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113166 91177308-0d34-0410-b5e6-96231b3b80d8
into the middle of the class, and rework how the different sections of
the generated file are conditionally included for simplicity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113163 91177308-0d34-0410-b5e6-96231b3b80d8
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack. For example, this code:
int foo(int x, int y, int z) {
  return x+y+z;
}
used to compile into:
_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	movl	4(%rsp), %esi
	addl	%edx, %esi
	movl	(%rsp), %edx
	addl	%esi, %edx
	movl	%edx, %eax
	addq	$12, %rsp
	ret
Now we produce:
_foo:                                   ## @foo
	subq	$12, %rsp
	movl	%edi, 8(%rsp)
	movl	%esi, 4(%rsp)
	movl	%edx, (%rsp)
	movl	8(%rsp), %edx
	addl	4(%rsp), %edx	## Folded load
	addl	(%rsp), %edx	## Folded load
	movl	%edx, %eax
	addq	$12, %rsp
	ret
Fewer instructions and less register use = faster compiles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113102 91177308-0d34-0410-b5e6-96231b3b80d8
checking each standalone condition and deciding whether to emit target specific nodes or to remove the condition if it was already matched before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113031 91177308-0d34-0410-b5e6-96231b3b80d8
"Use target specific nodes instead of relying in unpckl and
unpckh pattern fragments during isel time. Also place a
depth limit in getShuffleScalarElt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113020 91177308-0d34-0410-b5e6-96231b3b80d8
mask pattern fragment", which depends on r112934; that change introduced infinite loop and select failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112998 91177308-0d34-0410-b5e6-96231b3b80d8
"For ARM stack frames that utilize variable sized objects and have either
large local stack areas or require dynamic stack realignment, allocate a
base register via which to access the local frame. This allows efficient
access to frame indices not accessible via the FP (either due to being out
of range or due to dynamic realignment) or the SP (due to variable sized
object allocation). In particular, this greatly improves efficiency of access
to spill slots in Thumb functions which contain VLAs."
r112986 fixed a latent bug exposed by the above.
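To make the win concrete, here is a hypothetical function of the affected shape (not from the commit): the VLA makes SP-relative offsets non-constant, and the large local area can push FP-relative offsets out of the Thumb immediate range, so spill-slot accesses previously needed extra address arithmetic.

#include <string.h>

int f(unsigned n) {
  int big[256];  /* large local stack area: FP offsets can go out of range */
  int vla[n];    /* variable sized object: SP offsets are not static */
  memset(big, 0, sizeof big);
  memset(vla, 0, n * sizeof *vla);
  return big[0] + vla[0];
}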
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112989 91177308-0d34-0410-b5e6-96231b3b80d8
Decide up front, before register allocation, whether dynamic stack realignment should be performed. Otherwise dynamic realignment may trigger when the register allocator has already used the frame pointer as a general purpose register. That is, we need to make sure that the list of reserved registers doesn't change after register allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112986 91177308-0d34-0410-b5e6-96231b3b80d8