Commit Graph

10 Commits

Author SHA1 Message Date
Chris Lattner
b85e4eba85 rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which was
only needed for pre-2.9 bitcode files.  We keep the x86 unaligned loads, movnt, crc32, and
the target-independent prefetch change.

As usual, updating the testsuite is a PITA.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133337 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-18 06:05:24 +00:00
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even when unaligned accesses aren't fast (as they are on corei7).
The theory is that it's still faster than a pair of movq or a quad of movl. This
will probably hurt older chips like the P4 but should run faster on current
and future Intel processors. rdar://8817010

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
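
A minimal sketch of the kind of IR this lowering applies to (not from the actual testsuite; the function name is invented):

    ; A 16-byte memcpy that r122955 allowed to become a single unaligned
    ; 16-byte load/store pair (movups) instead of a pair of movq or a
    ; quad of movl.
    define void @copy16(i8* %dst, i8* %src) nounwind {
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 16, i32 1, i1 false)
      ret void
    }
    declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture, i64, i32, i1) nounwind
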
Evan Cheng
c3b0c341e7 Avoid using f64 to lower memcpy from a constant string. It's cheaper to use i32 stores of immediates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100751 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-08 07:37:57 +00:00
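
Illustratively (an invented example, with the immediate computed for the little-endian bytes "abcd"), the change affects copies like this:

    @.str = private constant [9 x i8] c"abcdefgh\00"

    define void @copy_str(i8* %dst) nounwind {
      ; A short copy from a constant string can now become i32 stores of
      ; immediates, e.g. movl $1684234849, (%rdi) for the bytes "abcd",
      ; instead of an f64 load/store of the string data.
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* getelementptr inbounds ([9 x i8]* @.str, i64 0, i64 0), i64 9, i32 1, i1 false)
      ret void
    }
    declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture, i64, i32, i1) nounwind
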
Evan Cheng
3ea97550e3 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100137 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 20:27:45 +00:00
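
A sketch of what this means for a small memset on x86-64 (invented example):

    define void @zero16(i8* %p) nounwind {
      ; In 64-bit mode this can now lower to plain integer stores, e.g.
      ;   movq $0, (%rdi)
      ;   movq $0, 8(%rdi)
      ; without routing the value through an f64 register.
      call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i32 1, i1 false)
      ret void
    }
    declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1) nounwind
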
Evan Cheng
0bf77de91f Add -mcpu to memcpy / memset tests to ensure they behave the same on all hosts / targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100101 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 08:25:26 +00:00
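
For example, a RUN line that pins the CPU (the exact flags and check tool here are illustrative, not the commit's):

    ; Without -mcpu, llc defaults to the host CPU, so the chosen memcpy /
    ; memset expansion could differ from machine to machine.
    ; RUN: llc < %s -march=x86-64 -mcpu=core2 | FileCheck %s
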
Evan Cheng
255f20f7f7 Fix sdisel memcpy, memset, memmove lowering:
1. Make it possible to lower with floating-point loads and stores.
2. Avoid unaligned loads / stores unless they're fast.
3. Fix a memcpy lowering logic bug related to when to optimize a
   load from a constant string into a constant.
4. Adjust the x86 memcpy lowering threshold to make it more sane.
5. Fix the x86 target hook so it uses vector and floating-point memory
   ops more effectively.
rdar://7774704

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 06:04:33 +00:00
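
As a sketch of point 2 (invented function; the behavior in the comment reflects the message's description):

    define void @copy32(i8* %dst, i8* %src) nounwind {
      ; With 16-byte-aligned operands, wide vector ops (movaps) are safe
      ; and cheap; with align 1 the lowering now avoids unaligned wide
      ; loads / stores unless the target reports them as fast.
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 32, i32 16, i1 false)
      ret void
    }
    declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture, i64, i32, i1) nounwind
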
Dan Gohman
36a0947820 Eliminate more uses of llvm-as and llvm-dis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
2009-09-08 23:54:48 +00:00
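
This class of cleanup rewrites RUN lines of the first form below into the second (illustrative example):

    ; Before: the .ll source is round-tripped through bitcode first.
    ; RUN: llvm-as < %s | llc -march=x86 | grep movups
    ; After: llc parses the .ll file directly.
    ; RUN: llc < %s -march=x86 | grep movups
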
Nick Lewycky
3978927dfa On Linux, the ABI produces an extra "movl %esp, %ebp" in the function prologue
and sometimes a "movl %ebp, %esp" in the epilogue.

Force these tests, which rely on counting 'mov' instructions, to use the
i686-apple-darwin8.8.0 triple they were originally written against.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51568 91177308-0d34-0410-b5e6-96231b3b80d8
2008-05-26 20:18:56 +00:00
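
A sketch of the breakage (the count is invented): an exact-count test written on Darwin fails on Linux because of the extra frame-setup mov, so the triple is pinned:

    ; The extra Linux prologue "movl %esp, %ebp" would push the 'mov'
    ; count past what the test expects, so the test forces the triple
    ; it was originally written against.
    ; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin8.8.0 | grep mov | count 2
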
Evan Cheng
d86323efc9 Add a couple of test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51441 91177308-0d34-0410-b5e6-96231b3b80d8
2008-05-22 21:19:19 +00:00