Patch by Brendon Cahoon!
This extends the existing LoopUnroll and LoopUnrollPass. Brendon
measured no regressions in the LLVM test suite with -unroll-runtime
enabled. This implementation works by using the existing loop
unrolling code to unroll the loop by a power-of-two factor (default 8). It
generates an if-then-else sequence of code prior to the loop to
execute the extra iterations before entering the unrolled loop.
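As a rough illustration (hand-written C++, not the pass's actual output; the
array, loop body, and variable names are made up), unrolling a loop with a
runtime trip count by a factor of 8 looks roughly like this:

    // Original loop; the trip count n is only known at run time.
    void before(int n, int *a) {
      for (int i = 0; i < n; ++i)
        a[i] += 1;
    }

    // After runtime unrolling by 8: a prologue of conditional blocks runs the
    // n % 8 leftover iterations first, then the unrolled body runs 8
    // iterations per trip.
    void after(int n, int *a) {
      int i = 0;
      int rem = n % 8;
      if (rem >= 1) { a[i] += 1; ++i; }
      if (rem >= 2) { a[i] += 1; ++i; }
      if (rem >= 3) { a[i] += 1; ++i; }
      if (rem >= 4) { a[i] += 1; ++i; }
      if (rem >= 5) { a[i] += 1; ++i; }
      if (rem >= 6) { a[i] += 1; ++i; }
      if (rem >= 7) { a[i] += 1; ++i; }
      for (; i < n; i += 8) {
        a[i]     += 1; a[i + 1] += 1; a[i + 2] += 1; a[i + 3] += 1;
        a[i + 4] += 1; a[i + 5] += 1; a[i + 6] += 1; a[i + 7] += 1;
      }
    }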
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146245 91177308-0d34-0410-b5e6-96231b3b80d8
symbol difference. This matches gas behavior and fixes PR11513.
We still don't handle _GLOBAL_OFFSET_TABLE_ in data sections.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146238 91177308-0d34-0410-b5e6-96231b3b80d8
MipsTargetLowering::LowerGlobalTLSAddress. This is necessary to have
call16(__tls_get_addr) emitted instead of got_disp(__tls_get_addr) when the
target is Mips64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146183 91177308-0d34-0410-b5e6-96231b3b80d8
- Modify lowering of global TLS address nodes.
- Modify isel of ThreadPointer.
- Wrap target global TLS address nodes that are operands of loads with WrapperPIC.
- Remove Mips-specific DAG nodes TlsGd, TprelHi and TprelLo, which can be
  replaced with other existing nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146175 91177308-0d34-0410-b5e6-96231b3b80d8
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.
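The rough shape of such a query, as a self-contained sketch (the enum, struct,
and function names here are illustrative assumptions, not necessarily what the
patch adds):

    #include <cstdint>
    #include <vector>

    // A per-query flag lets the caller choose how bundles are handled.
    enum QueryType { IgnoreBundle, AnyInBundle, AllInBundle };

    struct Instr {
      uint64_t Flags = 0;          // per-instruction property bits
      std::vector<Instr> Bundled;  // instructions bundled with this one
      bool isBundle() const { return !Bundled.empty(); }
    };

    bool hasProperty(const Instr &MI, unsigned Flag, QueryType Type) {
      if (Type == IgnoreBundle || !MI.isBundle())
        return MI.Flags & (uint64_t(1) << Flag);
      // Walk the bundled instructions and combine their answers.
      for (const Instr &BMI : MI.Bundled) {
        bool Has = BMI.Flags & (uint64_t(1) << Flag);
        if (Has && Type == AnyInBundle)
          return true;
        if (!Has && Type == AllInBundle)
          return false;
      }
      return Type == AllInBundle;
    }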
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146168 91177308-0d34-0410-b5e6-96231b3b80d8
  if (HasAVX)
    X86SSELevel = NoMMXSSE;
This is so that patterns predicated on hasSSE3, etc., are not selected when AVX is available; the AVX variant is selected instead.
However, this breaks instructions which do not have AVX variants.
The right way to fix this is for the SSE-but-not-AVX patterns to be predicated on something like hasSSE3() && !hasAVX().
Then we can take out the hack in X86Subtarget.cpp. Patterns which do not have AVX variants do not need to change.
However, we need to audit all the patterns before making that change. This patch is a workaround that fixes one specific case,
the prefetch instructions. rdar://10538297
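A minimal sketch of that predicate shape (illustrative C++ only; the real
predicates belong in the X86 pattern definitions, and the helper name below is
made up, not part of this patch):

    struct SubtargetSketch {
      bool HasSSE3 = false;
      bool HasAVX = false;

      bool hasSSE3() const { return HasSSE3; }
      bool hasAVX() const { return HasAVX; }

      // Select the legacy SSE3 form only when AVX is not available, so the
      // SSE level no longer has to be artificially cleared when AVX is on.
      bool useSSE3NotAVX() const { return hasSSE3() && !hasAVX(); }
    };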
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146163 91177308-0d34-0410-b5e6-96231b3b80d8
We must not issue a bitcast operation for integer promotion of vector types, because the
promoted elements may end up at different bit positions within the vector.
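A worked example of why (illustrative code, not from the patch), using <4 x i16>
promoted to <4 x i32>: element i sits at bit offset 16*i before promotion but
32*i after, so a bitcast reinterprets the bits rather than widening each
element.

    #include <cstdint>
    #include <cstring>

    // Wrong: a bitcast-style reinterpretation of the bytes.  On a
    // little-endian host, lanes 0 and 1 of Narrow get fused into Wide[0],
    // lanes 2 and 3 into Wide[1], and Wide[2] / Wide[3] are never written.
    void promoteByBitcast(const uint16_t Narrow[4], uint32_t Wide[4]) {
      std::memcpy(Wide, Narrow, 4 * sizeof(uint16_t));
    }

    // Right: widen each element individually so every value stays in its own
    // lane.
    void promotePerElement(const uint16_t Narrow[4], uint32_t Wide[4]) {
      for (int i = 0; i != 4; ++i)
        Wide[i] = Narrow[i];
    }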
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146150 91177308-0d34-0410-b5e6-96231b3b80d8
It is not used any more. We are tracking inline assembly misalignments
directly through the BBInfo.Unalign and KnownBits fields.
A simple conservative size estimate is not good enough since it can
cause alignment padding to be underestimated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146124 91177308-0d34-0410-b5e6-96231b3b80d8
Compute alignment padding before and after basic blocks dynamically.
Heed basic block alignment.
This simplifies bookkeeping because we don't have to constantly add and
remove padding from BBInfo.Size. It also makes it possible to track the
extra known alignment bits we get after a tBR_JTr terminator and when
entering an aligned basic block.
This makes the ARMConstantIslandPass aware of aligned basic blocks.
It is tricky to model block alignment correctly when dealing with inline
assembly and tBR_JTr instructions that have variable size. If inline
assembly turns out to be smaller than expected, that may cause following
alignment padding to be larger than expected. This could cause constant
pool entries to move out of range.
To avoid that problem, we use the worst-case alignment padding following
inline assembly. This may cause slightly suboptimal constant island
placement in aligned basic blocks following inline assembly. Normal
functions should be unaffected.
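For illustration, a minimal sketch of the worst-case padding computation
(the function and parameter names are assumptions, not necessarily the pass's
own): when only the low KnownBits bits of a block's offset are known to be
zero, the padding needed to reach a 2^LogAlign boundary cannot be computed
exactly, so we budget the largest value it could take.

    unsigned worstCasePadding(unsigned LogAlign, unsigned KnownBits) {
      if (KnownBits < LogAlign)
        return (1u << LogAlign) - (1u << KnownBits);
      return 0; // enough bits known: the padding is exact, no slack needed
    }

For example, after inline assembly whose size is only known to be a multiple
of 2 (KnownBits = 1), padding up to an 8-byte boundary (LogAlign = 3) is
budgeted at 8 - 2 = 6 bytes, even though the actual padding may turn out
smaller.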
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146118 91177308-0d34-0410-b5e6-96231b3b80d8