This is a skeleton for a pre-RA MachineInstr scheduler strategy. Currently
it only tries to expose more parallelism for ALU instructions (this also
makes the distribution of GPR channels more uniform and increases the
chances of ALU instructions being packed together into a single VLIW group).
It also tries to reduce clause switching by grouping instructions of the
same kind (ALU/FETCH/CF) together.
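As a toy model of that grouping heuristic (illustrative only; the names and
types here are not the strategy's actual interface): keep issuing ready
instructions of the current clause kind, and switch clauses only when forced.

  #include <deque>

  enum Kind { ALU, FETCH, CF };
  struct Inst { Kind kind; };

  // Pick the next instruction to issue, preferring ones that extend the
  // current clause so we switch between ALU/FETCH/CF as rarely as possible.
  Inst *pickNext(std::deque<Inst *> &ready, Kind &currentClause) {
    if (ready.empty())
      return nullptr;
    for (auto it = ready.begin(); it != ready.end(); ++it) {
      if ((*it)->kind == currentClause) {
        Inst *I = *it;
        ready.erase(it);
        return I;
      }
    }
    // Nothing extends the current clause: start a new clause with the
    // oldest ready instruction.
    Inst *I = ready.front();
    ready.pop_front();
    currentClause = I->kind;
    return I;
  }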
Vincent Lejeune:
- Support for VLIW4 slot assignment
- Recomputation of ScheduleDAG to get more parallelism opportunities
Tom Stellard:
- Fix assertion failure when trying to determine an instruction's slot
based on its destination register's class
- Fix some compiler warnings
Vincent Lejeune: [v2]
- Remove recomputation of ScheduleDAG (will be provided in a later patch)
- Improve estimation of an ALU clause's size so that the heuristic does not
emit CF instructions at the wrong position.
- Make the scheduling heuristic smarter using SUnit depth
- Take constant read limitations into account
Vincent Lejeune: [v3]
- Fix some uninitialized values in ConstPair
- Add asserts to ensure an ALU slot is always populated
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176498 91177308-0d34-0410-b5e6-96231b3b80d8
Clarify that we mean the object starting at the pointer and extending to the
end of the underlying object, not the size of the whole allocated object.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176491 91177308-0d34-0410-b5e6-96231b3b80d8
Maintaining CONST_COPY instructions until pre-emit may prevent some
if-conversion cases, and taking them into account for scheduling is difficult
for no real benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176488 91177308-0d34-0410-b5e6-96231b3b80d8
Reviewed-by: Tom Stellard <thomas.stellard at amd.com>
mayLoad complicates scheduling and does not bring any useful info,
as the location is not writable at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176486 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, x86 NOP padding was emitted as one long NOP followed by the
remainder in one-byte NOPs. If the processor actually executes those NOPs, as
it sometimes does with aligned bundling, this can have a performance impact.
In micro-benchmarks run on my one machine, a 15-byte NOP followed by twelve
one-byte NOPs is about 20% worse than a 15-byte NOP followed by a 12-byte NOP.
This patch changes NOP emission to emit as many 15-byte NOPs (the maximum) as
possible, followed by at most one shorter NOP.
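A minimal sketch of this greedy scheme (emitNopOfLength is a hypothetical
stand-in for the MC-layer NOP writer, not an LLVM API):

  // Cover 'count' padding bytes with as many 15-byte NOPs (the x86
  // maximum) as fit, then at most one shorter NOP for the remainder.
  void emitPadding(unsigned count, void (*emitNopOfLength)(unsigned)) {
    const unsigned MaxNopLength = 15;
    for (; count >= MaxNopLength; count -= MaxNopLength)
      emitNopOfLength(MaxNopLength);
    if (count > 0)
      emitNopOfLength(count);
  }

For the 27-byte case above this emits one 15-byte NOP plus one 12-byte NOP
instead of a 15-byte NOP plus twelve one-byte NOPs.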
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176464 91177308-0d34-0410-b5e6-96231b3b80d8
Promote GlobalValue linkage up to ExternalLinkage in the ExtractGV pass. This
prevents linkonce and linkonce_odr symbols from being DCE'd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176459 91177308-0d34-0410-b5e6-96231b3b80d8
'R' - An address that can be used in a non-macro load or store.
This patch includes a positive test case.
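A hedged usage example in GCC-style extended asm (assuming the MIPS backend,
where this constraint description comes from; 'lw' is an ordinary word load):

  // With 'R', the compiler must supply an address a non-macro load can
  // use directly, so no extra address-materialization code is needed.
  int load_word(const int *p) {
    int v;
    asm("lw %0, %1" : "=r"(v) : "R"(*p));
    return v;
  }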
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176452 91177308-0d34-0410-b5e6-96231b3b80d8
* Only apply the divide bypass optimization when not optimizing for size.
* Fixed a bug where the constant for the 0 value was always of type Int32;
use the dividend's type to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
64-bit divides when the operand values are small enough.
* Added lit tests for 64-bit divide bypass.
Patch by Tyler Nowicki!
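A sketch of the runtime check the bypass inserts, written out in C++
(illustrative only, not the transform itself):

  #include <cstdint>

  uint64_t bypassedDiv(uint64_t a, uint64_t b) {
    if (((a | b) >> 16) == 0)             // both operands fit in 16 bits
      return (uint16_t)a / (uint16_t)b;   // cheap 16-bit divide
    return a / b;                         // full 64-bit divide
  }

Since the high bits of both operands are zero on the fast path, the 16-bit
quotient equals the 64-bit one.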
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176442 91177308-0d34-0410-b5e6-96231b3b80d8
This adds minimalistic support for PHI nodes to llvm.objectsize() evaluation.
Fingers crossed that it doesn't break the clang bootstrap again...
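A hedged illustration of why PHI support matters; __builtin_object_size(p, 0),
which lowers to llvm.objectsize, asks for the maximum possible size:

  #include <cstddef>

  char small_buf[16], big_buf[32];

  size_t object_size(bool cond) {
    char *p;
    if (cond)
      p = small_buf;
    else
      p = big_buf;                      // 'p' typically becomes a PHI in IR
    return __builtin_object_size(p, 0); // must merge both sizes: 32
  }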
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176408 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to getObjectSize(), but doesn't subtract the offset.
Tweak the BasicAA code accordingly (per PR14988).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176407 91177308-0d34-0410-b5e6-96231b3b80d8
This matters, for example, in the following matrix multiply:
int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
  int i, j, k, val;
  for (i = 0; i < rows; i++) {
    for (j = 0; j < cols; j++) {
      val = 0;
      for (k = 0; k < cols; k++) {
        val += m1[i][k] * m2[k][j];
      }
      m3[i][j] = val;
    }
  }
  return m3;
}
Taken from the test-suite benchmark Shootout.
We estimate the cost of the multiply to be 2 while we actually generate 9
instructions for it, and we end up quite a bit slower than the scalar version
(48% on my machine).
Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split the
vector into two 128-bit halves and handle the subvector multiplies as above
with 9 instructions each.
Only on AVX2 will we have a cost of 9 for v4i64.
I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use an
add instead of a mul because with a mul we no longer vectorize. I did verify
that the mul would indeed be more expensive when vectorized, using three
kernels:
for (i ...)
  r += a[i] * 3;
for (i ...)
  m1[i] = m1[i] * 3; // This matches the test case in avx1.ll
and a matrix multiply.
In each case the vectorized version was considerably slower.
radar://13304919
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176403 91177308-0d34-0410-b5e6-96231b3b80d8
The LoopVectorizer often runs multiple times on the same function due to inlining.
When this happens, it often vectorizes the same loops multiple times, increasing code size and adding unneeded branches.
With this patch, the vectorizer puts metadata on scalar loops during vectorization, marking them as 'already vectorized' so that it knows to ignore them when it sees them a second time.
PR14448.
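A minimal sketch of the mechanism (not the patch itself; the metadata kind
string and the choice of the loop header's terminator as the anchor are
assumptions made for illustration):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Hypothetical metadata kind name used as the 'already vectorized' mark.
  static const char *const AlreadyVectorizedMD = "already_vectorized";

  static bool isAlreadyVectorized(const Loop *L) {
    return L->getHeader()->getTerminator()->getMetadata(AlreadyVectorizedMD)
           != nullptr;
  }

  static void markAsVectorized(Loop *L) {
    Instruction *T = L->getHeader()->getTerminator();
    T->setMetadata(AlreadyVectorizedMD, MDNode::get(T->getContext(), {}));
  }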
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176399 91177308-0d34-0410-b5e6-96231b3b80d8
Fix the way resources are counted. I'm taking some time to clean up the
way MachineScheduler handles in-order machine resources. Eventually
we'll need more PPC/Atom test cases in tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176390 91177308-0d34-0410-b5e6-96231b3b80d8
The sys::fs::is_directory() check is unnecessary because, if the filename is
a directory, the function will fail anyway with the same error code. Remove
the check to avoid an unnecessary stat call.
Someone needs to verify on Windows whether the check is necessary there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176386 91177308-0d34-0410-b5e6-96231b3b80d8
This patch eliminates the need to emit a constant move instruction when this
pattern is matched:
(select (setgt a, Constant), T, F)
The pattern above effectively turns into this:
(conditional-move (setlt a, Constant + 1), F, T)
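A worked example of the equivalence in C++ (a sketch; it holds for integer a
and constant C whenever C + 1 does not overflow, since a > C is exactly
a < C + 1 with the select arms swapped):

  int pick(int a, int T, int F) {
    // return a > 5 ? T : F;  // setgt form: would need the constant moved
    return a < 6 ? F : T;     // setlt form: same result, arms swapped
  }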
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176384 91177308-0d34-0410-b5e6-96231b3b80d8
Also removed the "should produce..." comments because they do not match the
actually produced output at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176381 91177308-0d34-0410-b5e6-96231b3b80d8