Commit Graph

18 Commits

Author SHA1 Message Date
Andrew Trick
6a7770b7ae Enable MI Sched for x86.
This changes the SelectionDAG scheduling preference to source
order. Soon, the SelectionDAG scheduler can be bypassed, saving
a nice chunk of compile time.

Performance differences that result from this change are often a
consequence of register coalescing. The register coalescer is far from
perfect. Bugs can be filed for deficiencies.

On x86 SandyBridge/Haswell, the source order schedule is often
preserved, particularly for small blocks.

Register pressure is generally improved over the SD scheduler's ILP
mode. However, we are still able to handle large blocks that require
latency hiding, unlike the SD scheduler's BURR mode. The MI scheduler
also attempts to discover the critical path in single-block loops and
adjust heuristics accordingly.

The MI scheduler relies on the new machine model. This is currently
unimplemented for AVX, so we may not be generating the best code yet.

Unit tests are updated so they don't depend on SD scheduling heuristics.
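
For orientation, a minimal sketch of the two knobs this change is built
on: the subtarget's MachineScheduler opt-in and the SelectionDAG
scheduling preference. Both hooks are real LLVM APIs, but their exact
placement in the X86 backend below is an assumption, not the committed
diff:

  // Sketch only, not the committed patch.
  // In X86Subtarget: ask the pass pipeline to run the MI scheduler.
  bool X86Subtarget::enableMachineScheduler() const {
    return true;
  }

  // In the X86TargetLowering constructor: have the SelectionDAG
  // scheduler preserve source order, leaving real scheduling to the
  // MI scheduler later in the pipeline.
  setSchedulingPreference(Sched::Source);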

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192750 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-15 23:33:07 +00:00
Stephen Lin
8b2b8a1835 Mass update to CodeGen tests to use CHECK-LABEL for labels corresponding to function definitions for more informative error messages. No functionality change and all updated tests passed locally.
This update was done with the following bash script:

  # For every CodeGen test, rewrite '; CHECK: foo:'-style lines that name
  # a function defined in the same file as '; CHECK-LABEL: foo:'. Tests
  # whose RUN lines involve llc debug output are left alone.
  find test/CodeGen -name "*.ll" | \
  while read NAME; do
    echo "$NAME"
    if ! grep -q "^; *RUN: *llc.*debug" $NAME; then
      TEMP=`mktemp -t temp`
      cp $NAME $TEMP
      # Collect the name of every function defined in this test ...
      sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
      while read FUNC; do
        # ... and append -LABEL to check prefixes that match '$FUNC:'.
        sed -i '' "s/;\(.*\)\([A-Za-z0-9_-]*\):\( *\)$FUNC: *\$/;\1\2-LABEL:\3$FUNC:/g" $TEMP
      done
      # Undo the suffix where it landed on a directive that already had
      # one (-LABEL, -NEXT, -NOT, -DAG).
      sed -i '' "s/;\(.*\)-LABEL-LABEL:/;\1-LABEL:/" $TEMP
      sed -i '' "s/;\(.*\)-NEXT-LABEL:/;\1-NEXT:/" $TEMP
      sed -i '' "s/;\(.*\)-NOT-LABEL:/;\1-NOT:/" $TEMP
      sed -i '' "s/;\(.*\)-DAG-LABEL:/;\1-DAG:/" $TEMP
      mv $TEMP $NAME
    fi
  done


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186280 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-14 06:24:09 +00:00
Andrew Trick
b2b5dc642c Revert "Temporarily enable MI-Sched on X86."
This reverts commit 98a9b72e8c.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184823 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-25 02:48:58 +00:00
Andrew Trick
98a9b72e8c Temporarily enable MI-Sched on X86.
Sorry for the unit test churn. I'll try to make the change permanent
next time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184705 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-24 09:13:20 +00:00
Lang Hames
2d95e43fd8 When lowering memcpys to loads and stores, make sure we don't promote alignments
past the natural stack alignment.
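
A minimal sketch of the clamp, with assumed variable names (the real
change lives in the SelectionDAG memcpy expansion):

  // Sketch only; identifiers are illustrative, not the committed code.
  // The expansion may invent a larger alignment for the stack
  // temporaries it creates; that invented alignment must never exceed
  // what the stack actually guarantees.
  unsigned NewAlign = PreferredCopyAlign;          // e.g. 16 for SSE ops
  unsigned StackAlign = TFI->getStackAlignment();  // ABI guarantee
  if (NewAlign > StackAlign)
    NewAlign = StackAlign;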



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174085 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-31 20:23:43 +00:00
Evan Cheng
61f4dfe369 Avoid using lossy loads / stores for memcpy / memset expansion, e.g.
f64 loads / stores on non-SSE2 x86 targets.
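
The reason f64 is lossy there: without SSE2 an f64 copy goes through the
x87 stack, and fld/fstp can quietly rewrite the bit pattern (for
instance, a signaling NaN payload), so byte-exact copies must use
integer types. A hedged sketch of the resulting type choice (the
surrounding hook is assumed, not quoted):

  // Sketch only; in practice this lives in x86's getOptimalMemOpType
  // hook. Pick a type whose plain loads / stores reproduce bytes exactly.
  if (Subtarget->hasSSE2())
    return MVT::f64;  // movsd copies all 8 bytes verbatim
  return MVT::i32;    // x87 fld/fstp may canonicalize NaN payloads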


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169944 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-12 00:42:09 +00:00
Evan Cheng
376642ed62 Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
   bytes, e.g. on x86, use two pairs of movups / movaps for 17 - 31 byte copies
   (see the sketch after this list).
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
   x86 and ARM.
3. When expanding memcpy from a constant string, do *not* replace the load with
   a constant if it's not possible to materialize an integer immediate with a
   single instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned loads / stores more aggressively if the target hooks indicate
   they are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
   Also increase the threshold to something reasonable (8 for memset, 4 pairs
   for memcpy).
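
The overlapping trick in item 1 is easy to state in plain code; a
standalone sketch of the idea (not the DAG lowering itself):

  #include <cstring>
  #include <cstddef>

  // Copy n bytes, 17 <= n <= 31, with two 16-byte operations: the second
  // starts at n - 16, so the middle bytes are simply written twice. In
  // codegen the 16-byte ops become movups / movaps; fixed-size memcpy
  // calls stand in for them here.
  void copyTrailing(char *dst, const char *src, std::size_t n) {
    std::memcpy(dst, src, 16);                    // bytes [0, 16)
    std::memcpy(dst + n - 16, src + n - 16, 16);  // bytes [n - 16, n)
  }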

This significantly improves Dhrystone, up to 50% on ARM iOS devices.

rdar://12760078


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169791 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 23:21:26 +00:00
Benjamin Kramer
2dbe929685 X86: Enable SSE memory intrinsics even when stack alignment is less than 16 bytes.
The stack realignment code was fixed to work when there is stack realignment and
a dynamic alloca is present, so this shouldn't cause correctness issues anymore.

Note that this also enables generation of AVX instructions for memset
under the assumptions:
- Unaligned loads/stores are always fast on CPUs supporting AVX
- AVX is not slower than SSE
We may need some tweaked heuristics if one of those assumptions turns out not to
be true.
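
A hedged sketch of what that gate might look like in the x86 memory-op
type choice; placement and surrounding code are assumptions, not the
committed diff:

  // Sketch only. With AVX available, hand memset a 32-byte vector type,
  // leaning on the stated assumption that AVX-class CPUs handle
  // unaligned 32-byte stores at full speed.
  if (Subtarget->hasAVX())
    return MVT::v8f32;  // one 32-byte store per vmovups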

Effectively reverts r58317. Part of PR2962.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167967 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 20:08:40 +00:00
Chris Lattner
b85e4eba85 rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is
for pre-2.9 bitcode files.  We keep x86 unaligned loads, movnt, crc32, and the
target-independent prefetch change.

As usual, updating the testsuite is a PITA.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133337 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-18 06:05:24 +00:00
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even on targets where unaligned moves aren't fast (as they are on corei7).
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
Evan Cheng
c3b0c341e7 Avoid using f64 to lower memcpy from a constant string. It's cheaper to use i32 stores of immediates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100751 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-08 07:37:57 +00:00
Evan Cheng
3ea97550e3 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100137 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 20:27:45 +00:00
Evan Cheng
0bf77de91f Add -mcpu to memcpy / memset tests to ensure they behave the same on all hosts / targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100101 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 08:25:26 +00:00
Evan Cheng
255f20f7f7 Fix sdisel memcpy, memset, memmove lowering:
1. Makes it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless they're fast (see the sketch after
   this list).
3. Fix a memcpy lowering logic bug related to when to optimize a
   load from a constant string into a constant.
4. Adjust x86 memcpy lowering threshold to make it more sane.
5. Fix x86 target hook so it uses vector and floating point memory
   ops more effectively.
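
Item 2 hinges on a target hook; a hedged sketch of the query (the hook's
exact signature has shifted across LLVM versions, so the form below,
which reports legality and speed together, is an assumption):

  // Sketch only. Before emitting a wide unaligned memory op, ask the
  // target whether the access is both legal and actually fast.
  bool Fast = false;
  if (TLI.allowsUnalignedMemoryAccesses(VT, &Fast) && Fast) {
    // Profitable: use the wide unaligned load / store.
  }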
rdar://7774704


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 06:04:33 +00:00
Dan Gohman
36a0947820 Eliminate more uses of llvm-as and llvm-dis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
2009-09-08 23:54:48 +00:00
Nick Lewycky
3978927dfa Compiling for the Linux ABI emits an extra "movl %esp, %ebp" in the function
prologue and sometimes a "mov %ebp, %esp" in the epilogue.

Force these tests, which rely on counting 'mov' instructions, to use the
i686-apple-darwin8.8.0 triple they were written against.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51568 91177308-0d34-0410-b5e6-96231b3b80d8
2008-05-26 20:18:56 +00:00
Evan Cheng
d86323efc9 Add a couple of test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51441 91177308-0d34-0410-b5e6-96231b3b80d8
2008-05-22 21:19:19 +00:00