When the immediate operand of an AND or BIC instruction isn't representable
in the instruction's immediate field, but the bitwise negation of the
immediate is, assemble the instruction as the inverse operation instead,
with the inverted immediate as the operand.
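A minimal sketch of the idea, using a hypothetical isARMModifiedImm helper
rather than the real encoding utilities:

  #include <cstdint>

  // Hypothetical helper: true if Imm fits the ARM modified-immediate
  // encoding (an 8-bit value rotated right by an even amount).
  bool isARMModifiedImm(uint32_t Imm) {
    for (unsigned Rot = 2; Rot < 32; Rot += 2)
      if ((((Imm << Rot) | (Imm >> (32 - Rot))) & ~0xFFu) == 0)
        return true;
    return (Imm & ~0xFFu) == 0;   // Rot == 0 case, avoiding a 32-bit shift
  }

  enum Opcode { AND, BIC };

  // If the immediate doesn't encode but its complement does, flip the
  // opcode (AND <-> BIC) and complement the immediate.
  bool adjustAndBicImmediate(Opcode &Op, uint32_t &Imm) {
    if (isARMModifiedImm(Imm))
      return true;                     // encodes as written
    if (isARMModifiedImm(~Imm)) {
      Op = (Op == AND) ? BIC : AND;    // the inverse operation
      Imm = ~Imm;                      // with the inverted immediate
      return true;
    }
    return false;                      // neither form encodes
  }

For example, AND with #0xFFFFFF00 cannot be encoded, but BIC with #0xFF can,
and the two are equivalent.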
rdar://10550057
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146283 91177308-0d34-0410-b5e6-96231b3b80d8
Refactor the instructions into fixed writeback and register-stride
writeback variants to simplify the offset operand (no more optional
register operand using reg0). This representation is simpler and allows
the assembly parser to handle these instructions more easily.
Add tests for the instruction variants now supported.
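A rough illustration of the representational change (the names here are made
up, not the real TableGen definitions):

  #include <optional>

  // Before: one writeback form with an optional stride register; an empty
  // optional played the role of reg0 ("use the fixed stride").
  struct WritebackOperandOld {
    unsigned BaseReg;
    std::optional<unsigned> StrideReg;
  };

  // After: two explicit variants, so the parser never has to synthesize a
  // reg0 placeholder when no stride register was written.
  struct FixedWriteback {
    unsigned BaseReg;            // base is bumped by the access size
  };
  struct RegisterStrideWriteback {
    unsigned BaseReg;
    unsigned StrideReg;          // base is bumped by this register's value
  };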
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146278 91177308-0d34-0410-b5e6-96231b3b80d8
MipsTargetLowering::LowerGlobalTLSAddress. This is necessary to have
call16(__tls_get_addr) emitted instead of got_disp(__tls_get_addr) when the
target is Mips64.
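A toy sketch of the intended output (the helper name is made up); the point
is only the relocation form used for the callee:

  #include <string>

  // The __tls_get_addr callee should be referenced through a %call16 GOT
  // entry, even on Mips64, where other GOT accesses use %got_disp.
  std::string tlsHelperCallOperand() {
    // Roughly:  ld $25, %call16(__tls_get_addr)($gp)
    //           jalr $25
    return "%call16(__tls_get_addr)";  // not "%got_disp(__tls_get_addr)"
  }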
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146183 91177308-0d34-0410-b5e6-96231b3b80d8
- Modify lowering of global TLS address nodes.
- Modify isel of ThreadPointer.
- Wrap target global TLS address nodes that are operands of loads with WrapperPIC.
- Remove the Mips-specific DAG nodes TlsGd, TprelHi and TprelLo, which can
  be replaced with other existing nodes.
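A simplified sketch of the two lowering shapes involved (hypothetical code;
strings stand in for the SelectionDAG nodes the real lowering builds):

  #include <string>

  enum class TLSModel { GeneralDynamic, LocalExec };

  std::string lowerTLSAddress(const std::string &Sym, TLSModel Model) {
    if (Model == TLSModel::GeneralDynamic)
      // PIC: the address comes from a call to __tls_get_addr whose
      // argument is the GOT entry for the symbol's TLS descriptor.
      return "__tls_get_addr(%tlsgd(" + Sym + "))";
    // Local-exec: a constant offset from the thread pointer, materialized
    // with %tprel_hi/%tprel_lo relocations.
    return "ThreadPointer + %tprel_hi(" + Sym + ") + %tprel_lo(" + Sym + ")";
  }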
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146175 91177308-0d34-0410-b5e6-96231b3b80d8
if (HasAVX)
X86SSELevel = NoMMXSSE;
This is so that patterns predicated on hasSSE3, etc., are not selected when AVX is available; the AVX variant is selected instead.
However, this breaks instructions which do not have AVX variants.
The right way to fix this is for the SSE but not-AVX patterns to predicate on something like hasSSE3() && !hasAVX().
Then we can take out the hack in X86Subtarget.cpp. Patterns which do not have AVX variants do not need to change.
However, we need to audit all the patterns before we make the change. This patch is a workaround that fixes one specific case:
the prefetch instructions. rdar://10538297
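A sketch of the predicate scheme described above (hypothetical names; the
real change would live in the TableGen pattern predicates):

  struct Subtarget {
    bool SSE3 = false;
    bool AVX = false;
    bool hasSSE3() const { return SSE3; }
    bool hasAVX() const { return AVX; }
  };

  // Patterns that have an AVX variant: select the SSE form only when AVX
  // is unavailable, so the subtarget no longer has to zero out its SSE
  // level when AVX is present.
  bool useSSE3NotAVX(const Subtarget &ST) {
    return ST.hasSSE3() && !ST.hasAVX();
  }

  // Instructions with no AVX variant (e.g. the prefetches): the plain SSE
  // level check remains correct and needs no change.
  bool useSSE3(const Subtarget &ST) {
    return ST.hasSSE3();
  }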
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146163 91177308-0d34-0410-b5e6-96231b3b80d8
It is not used any more. We are tracking inline assembly misalignments
directly through the BBInfo.Unalign and KnownBits fields.
A simple conservative size estimate is not good enough: because padding
depends on the offset modulo the alignment, overestimating a block's size
can cause the alignment padding that follows to be underestimated.
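A small worked example of why, with made-up numbers:

  #include <cassert>
  #include <cstdint>

  // Padding to the next alignment boundary is not monotonic in a block's
  // size, so a conservative (large) size estimate can still lead to less
  // padding than the real layout needs.
  uint64_t paddingTo(uint64_t Addr, uint64_t Align) {
    return (Align - Addr % Align) % Align;
  }

  int main() {
    // Inline asm starts at offset 4; the next block wants 16-byte alignment.
    assert(paddingTo(4 + 8, 16) == 4);  // estimated size 8 -> 4 bytes of pad
    assert(paddingTo(4 + 4, 16) == 8);  // actual size 4    -> 8 bytes of pad
    return 0;
  }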
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146124 91177308-0d34-0410-b5e6-96231b3b80d8
Compute alignment padding before and after basic blocks dynamically.
Heed basic block alignment.
This simplifies bookkeeping because we don't have to constantly add and
remove padding from BBInfo.Size. It also makes it possible to track the
extra known alignment bits we get after a tBR_JTr terminator and when
entering an aligned basic block.
This makes the ARMConstantIslandPass aware of aligned basic blocks.
It is tricky to model block alignment correctly when dealing with inline
assembly and tBR_JTr instructions that have variable size. If inline
assembly turns out to be smaller than expected, the alignment padding that
follows may be larger than expected, which could push constant pool entries
out of range.
To avoid that problem, we use the worst-case alignment padding following
inline assembly. This may cause slightly suboptimal constant island
placement in aligned basic blocks following inline assembly. Normal
functions should be unaffected.
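A rough sketch of the bookkeeping (field and function names are invented,
not the real ARMConstantIslands data structures):

  #include <cstdint>
  #include <vector>

  struct BlockInfo {
    uint64_t Size = 0;          // size of the block's instructions
    unsigned LogAlign = 0;      // required alignment, as a power of two
    bool HasInlineAsm = false;  // exact size of this block is uncertain
  };

  uint64_t alignTo(uint64_t Offset, unsigned LogAlign) {
    uint64_t Mask = (uint64_t(1) << LogAlign) - 1;
    return (Offset + Mask) & ~Mask;
  }

  // Recompute every block's start offset from scratch; padding is derived
  // on the fly from each block's alignment instead of being folded into
  // the stored sizes.
  std::vector<uint64_t> computeOffsets(const std::vector<BlockInfo> &Blocks) {
    std::vector<uint64_t> Starts;
    uint64_t Offset = 0;
    bool PrevSizeUncertain = false;
    for (const BlockInfo &BB : Blocks) {
      if (PrevSizeUncertain && BB.LogAlign)
        // Worst case after inline asm: charge the maximum possible
        // padding, since the real end offset isn't known until the
        // assembler lays the function out.
        Offset += (uint64_t(1) << BB.LogAlign) - 1;
      else
        Offset = alignTo(Offset, BB.LogAlign);
      Starts.push_back(Offset);
      Offset += BB.Size;
      PrevSizeUncertain = BB.HasInlineAsm;
    }
    return Starts;
  }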
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146118 91177308-0d34-0410-b5e6-96231b3b80d8