Commit Graph

8 Commits

Evan Cheng
18fb1d35db Add Mode64Bit feature and sink it down to MC layer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134641 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-07 21:06:52 +00:00
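
A minimal self-contained sketch of the idea behind this change, using invented names (FeatureBits, Mode64Bit, encodeInstruction) rather than LLVM's actual API: an MC-layer encoder sees only the subtarget feature mask, not CodeGen state, so "are we in 64-bit mode?" must be answerable from a feature bit alone.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical feature mask; the bit assignments are invented for this sketch.
enum FeatureBits : uint64_t {
  Mode64Bit = 1ull << 0,     // assembling for 64-bit mode
  FeatureSSE42 = 1ull << 1,  // another example feature bit
};

// An MC-layer component gets only the feature mask, so a mode decision like
// this is possible only if the mode lives in the feature bits.
void encodeInstruction(uint64_t Features) {
  if (Features & Mode64Bit)
    std::cout << "emit 64-bit (REX-capable) encoding\n";
  else
    std::cout << "emit 32-bit encoding\n";
}

int main() {
  encodeInstruction(Mode64Bit | FeatureSSE42);  // 64-bit target
  encodeInstruction(FeatureSSE42);              // 32-bit target
}
```
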
Jakob Stoklund Olesen
d5b679c8ce Weekly fix of register allocation dependent unit tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130567 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-30 01:37:52 +00:00
Jakob Stoklund Olesen
57b0fb7850 Fix one more batch of X86 tests to be register allocation dependent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128919 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-05 20:20:30 +00:00
Eric Christopher
7c2cdb1c05 Turn on list-ilp scheduling by default on x86 and x86-64, fix up
testcases accordingly. Some are currently xfailed and will be filed
as bugs to be fixed or understood.

Performance results:

roughly neutral on SPEC
some microbenchmarks in the llvm suite are up between 100 and 150%, with only
a pair of regressions still to be investigated

john-the-ripper saw:
10% improvement in traditional DES
8% improvement in BSDI DES
59% improvement in FreeBSD MD5
67% improvement in OpenBSD Blowfish
14% improvement in LM DES

Small compile time impact.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127208 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-08 02:42:25 +00:00
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even when unaligned memory access is not fast (as it is on corei7).
The theory is that it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like the P4 but should run faster on current
and future Intel processors. rdar://8817010

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
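
A self-contained sketch of the trade-off this message describes, not the actual LLVM lowering code: one unaligned 16-byte vector load/store (the movups pair) versus four 4-byte scalar copies (the "quad of movl"). The function names are invented; the SSE intrinsics are real and require an x86 target.

```cpp
#include <cstring>
#include <immintrin.h>
#include <iostream>

// One movups pair: a single 16-byte unaligned vector load and store.
void copy16_movups(void *dst, const void *src) {
  __m128 v = _mm_loadu_ps(static_cast<const float *>(src));
  _mm_storeu_ps(static_cast<float *>(dst), v);
}

// The scalar alternative: four 4-byte chunks, each a plain 32-bit move
// after optimization.
void copy16_scalar(void *dst, const void *src) {
  for (int i = 0; i < 4; ++i)
    std::memcpy(static_cast<char *>(dst) + 4 * i,
                static_cast<const char *>(src) + 4 * i, 4);
}

int main() {
  char src[16] = "fifteen chars!!";  // 15 chars + NUL fills the buffer
  char dst[16] = {};
  copy16_movups(dst, src);
  std::cout << dst << "\n";
  copy16_scalar(dst, src);
  std::cout << dst << "\n";
}
```
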
Owen Anderson
4a9f150926 When TCO is turned on, it is possible to end up with aliasing FrameIndexes. Therefore,
CombinerAA cannot assume that different FrameIndexes never alias, but can instead use
MachineFrameInfo to get the actual offsets of these slots and check for actual aliasing.

This fixes CodeGen/X86/2010-02-19-TailCallRetAddrBug.ll and CodeGen/X86/tailcallstack64.ll
when CombinerAA is enabled, modulo a different register allocation sequence.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114348 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-20 20:39:59 +00:00
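
A hedged sketch of the offset-based check this message describes, using invented types (FrameSlot, slotsAlias) rather than the real MachineFrameInfo API: two stack slots alias exactly when their byte ranges overlap, which distinct frame indices alone cannot rule out under TCO.

```cpp
#include <cstdint>
#include <iostream>

// Invented stand-in for what MachineFrameInfo would report per frame index.
struct FrameSlot {
  int64_t Offset;  // byte offset of the slot within the frame
  uint64_t Size;   // slot size in bytes
};

// Two slots alias iff their half-open byte ranges [Offset, Offset+Size) overlap.
bool slotsAlias(const FrameSlot &A, const FrameSlot &B) {
  return A.Offset < B.Offset + static_cast<int64_t>(B.Size) &&
         B.Offset < A.Offset + static_cast<int64_t>(A.Size);
}

int main() {
  FrameSlot Arg0{0, 8}, Arg1{8, 8};  // two disjoint 8-byte slots
  FrameSlot Reused{4, 8};            // a TCO-reused slot overlapping Arg0
  std::cout << slotsAlias(Arg0, Arg1) << "\n";    // 0: distinct, no overlap
  std::cout << slotsAlias(Arg0, Reused) << "\n";  // 1: distinct indices, yet aliasing
}
```
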
Owen Anderson
14ac1dd2be Invert the logic of reachesChainWithoutSideEffects(). What we want to check is that there is
NO path to the destination containing side effects, not that SOME path contains no side effects.
In practice, this only manifests with CombinerAA enabled, because otherwise the chain has little
to no branching, so "any" is effectively equivalent to "all".

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114268 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-18 04:45:14 +00:00
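
A hedged sketch of the corrected predicate, using an invented graph representation (Node, noSideEffectsBeforeDst) rather than SelectionDAG: the query must hold on EVERY path to the destination, so a single side-effecting path makes the answer false.

```cpp
#include <iostream>
#include <vector>

struct Node {
  bool SideEffect = false;          // e.g. a store or a call
  std::vector<const Node *> Succs;  // chain successors (assumed acyclic)
};

// True iff NO path from N hits a side-effecting node before reaching Dst.
// The pre-fix logic effectively asked whether SOME path was clean, which
// coincides with this only when the chain does not branch.
bool noSideEffectsBeforeDst(const Node *N, const Node *Dst) {
  if (N == Dst)
    return true;                    // reached the destination: this path is clean
  if (N->SideEffect)
    return false;                   // a side effect on this path: reject
  for (const Node *S : N->Succs)
    if (!noSideEffectsBeforeDst(S, Dst))
      return false;                 // ALL paths must be clean, not SOME
  return true;
}

int main() {
  Node Dst, Clean, Store, Src;
  Store.SideEffect = true;
  Clean.Succs = {&Dst};
  Store.Succs = {&Dst};
  Src.Succs = {&Clean, &Store};     // two paths: one clean, one with a store
  // SOME path is clean, but not ALL, so the correct answer is false (0).
  std::cout << noSideEffectsBeforeDst(&Src, &Dst) << "\n";
}
```
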