Commit Graph

9 Commits

Chris Lattner
26b0000166 Manually upgrade a bunch of tests to modern syntax, and remove some that
are either unreduced or only test old syntax.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133228 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-17 03:14:27 +00:00
NAKAMURA Takumi
4491aa49b3 test/CodeGen/X86/byval*.ll: Win64 does not support byval yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127731 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-16 13:52:20 +00:00
Stuart Hastings
03d5826164 Revert 127359; it broke lencod.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127382 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-10 00:25:53 +00:00
Stuart Hastings
2f26fa4838 X86 byval copies no longer always_inline. <rdar://problem/8706628>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127359 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-09 21:10:30 +00:00
Dan Gohman
36a0947820 Eliminate more uses of llvm-as and llvm-dis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
2009-09-08 23:54:48 +00:00
Evan Cheng
1887c1c2f9 Fix a number of byval / memcpy / memset related codegen issues.
1. x86-64 byval alignment should be the max of 8 and the alignment of the type. Previously the code was not doing what the commit message claimed.
2. Do not use byte-wise repeat move and store operations; these are slow.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55139 91177308-0d34-0410-b5e6-96231b3b80d8
2008-08-21 21:00:15 +00:00
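As an aside on the second point of the commit above, here is a minimal sketch, assuming a hypothetical helper rather than LLVM's actual lowering code: when a copy is expanded inline, pick the widest move/store unit the known alignment permits instead of issuing byte-at-a-time repeat operations.

```cpp
#include <cstdint>

// Hypothetical helper, not LLVM's lowering code: choose the widest
// move/store unit the known alignment allows, falling back to bytes
// only when nothing wider is safe.
uint64_t preferredStoreUnitBytes(uint64_t AlignBytes) {
  if (AlignBytes >= 8) return 8;  // 64-bit moves
  if (AlignBytes >= 4) return 4;  // 32-bit moves
  if (AlignBytes >= 2) return 2;  // 16-bit moves
  return 1;                       // last resort: byte moves
}
```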
Dan Gohman
707e018423 Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal
on any current target and aren't optimized in DAGCombiner. Instead of using
intermediate nodes, expand the operations immediately, choosing between
simple loads/stores, target-specific code, and library calls.

Previously, the code that emits optimized sequences for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.

This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.

Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.

This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49572 91177308-0d34-0410-b5e6-96231b3b80d8
2008-04-12 04:36:06 +00:00
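A rough sketch of the three-way choice the commit above describes; the helper, its inline threshold, and the target hook are illustrative assumptions, not the actual SelectionDAG expansion code.

```cpp
#include <cstdint>

enum class CopyLowering { LoadStorePairs, TargetSpecific, LibraryCall };

// Hypothetical decision helper: small constant-size copies become simple
// loads/stores, the target may supply custom code (e.g. rep;movs on x86
// when alignment allows), and everything else becomes a library call,
// which also covers memset of 0 with a non-constant size per the fix above.
CopyLowering chooseCopyLowering(bool SizeIsConstant, uint64_t Size,
                                bool TargetHasCustomCode,
                                uint64_t InlineThreshold = 64) {
  if (SizeIsConstant && Size <= InlineThreshold)
    return CopyLowering::LoadStorePairs;
  if (TargetHasCustomCode)
    return CopyLowering::TargetSpecific;
  return CopyLowering::LibraryCall;
}
```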
Evan Cheng
2928650262 Let each target decide byval alignment. For X86, it's 4-byte unless the aggregate contains SSE vector(s). For x86-64, it's the max of 8 and the alignment of the type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46286 91177308-0d34-0410-b5e6-96231b3b80d8
2008-01-23 23:17:41 +00:00
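A hedged sketch of the rule as stated in the commit above, not LLVM's actual implementation: 32-bit X86 uses 4-byte alignment unless the aggregate contains SSE vectors, for which 16 bytes is assumed here; x86-64 uses the max of 8 and the type's alignment.

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative only; the SSE-vector case is assumed to need 16-byte alignment.
uint64_t byValAlignmentSketch(bool IsX8664, bool AggregateHasSSEVector,
                              uint64_t TypeAlign) {
  if (IsX8664)
    return std::max<uint64_t>(8, TypeAlign);  // x86-64: max(8, type align)
  return AggregateHasSSEVector ? 16 : 4;      // 32-bit x86
}
```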
Rafael Espindola
5c0d6ed325 Add support for byval functions whose arguments are not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this argument to the memmove
and memset nodes. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19 10:41:11 +00:00
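A loose sketch of the design choice described in the commit above: a getMemcpy-style convenience wrapper hides the growing operand list, including the new always-inline flag, so callers no longer have to build the node through a generic getNode call. The types and signature here are hypothetical, not the actual LLVM API.

```cpp
#include <cstdint>

// Placeholder stand-in for a SelectionDAG value; illustrative only.
struct SDValueSketch {};

// Hypothetical wrapper: the AlwaysInline flag forces the copy to be
// expanded inline (as needed for under-aligned byval argument copies)
// instead of being turned into a call to memcpy.
SDValueSketch getMemcpySketch(SDValueSketch Chain, SDValueSketch Dst,
                              SDValueSketch Src, uint64_t Size,
                              uint64_t Align, bool AlwaysInline) {
  (void)Dst; (void)Src; (void)Size; (void)Align; (void)AlwaysInline;
  // Real code would assemble the memcpy node's operands here.
  return Chain;
}
```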