U test/CodeGen/X86/byval2.ll
U test/CodeGen/X86/byval4.ll
U test/CodeGen/X86/byval.ll
U test/CodeGen/X86/byval3.ll
U test/CodeGen/X86/byval5.ll
--- Merging r127732 into '.':
U test/CodeGen/X86/stdarg.ll
U test/CodeGen/X86/fold-mul-lohi.ll
U test/CodeGen/X86/scalar-min-max-fill-operand.ll
U test/CodeGen/X86/tailcallbyval64.ll
U test/CodeGen/X86/stride-reuse.ll
U test/CodeGen/X86/sse-align-3.ll
U test/CodeGen/X86/sse-commute.ll
U test/CodeGen/X86/stride-nine-with-base-reg.ll
U test/CodeGen/X86/coalescer-commute2.ll
U test/CodeGen/X86/sse-align-7.ll
U test/CodeGen/X86/sse_reload_fold.ll
U test/CodeGen/X86/sse-align-0.ll
--- Merging r127733 into '.':
U test/CodeGen/X86/peep-vector-extract-concat.ll
U test/CodeGen/X86/pmulld.ll
U test/CodeGen/X86/widen_load-0.ll
U test/CodeGen/X86/v2f32.ll
U test/CodeGen/X86/apm.ll
U test/CodeGen/X86/h-register-store.ll
U test/CodeGen/X86/h-registers-0.ll
--- Merging r127734 into '.':
U test/CodeGen/X86/2007-01-08-X86-64-Pointer.ll
U test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
U test/CodeGen/X86/avoid-lea-scale2.ll
U test/CodeGen/X86/lea-3.ll
U test/CodeGen/X86/vec_set-8.ll
U test/CodeGen/X86/i64-mem-copy.ll
U test/CodeGen/X86/x86-64-malloc.ll
U test/CodeGen/X86/mmx-copy-gprs.ll
U test/CodeGen/X86/vec_shuffle-17.ll
U test/CodeGen/X86/2007-07-18-Vector-Extract.ll
--- Merging r127775 into '.':
U test/CodeGen/X86/constant-pool-remat-0.ll
--- Merging r127872 into '.':
U utils/lit/lit/TestingConfig.py
U lib/Support/raw_ostream.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/branches/release_29@128258 91177308-0d34-0410-b5e6-96231b3b80d8
1. x86-64 byval alignment should be the max of 8 and the alignment
of the type (see the sketch after this list). Previously the code
did not do what the commit message said.
2. Do not use byte-granular repeat move and store operations
(rep movsb / rep stosb); they are slow.
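For illustration, a minimal C++ sketch of the rule from point 1;
the helper name and parameters are hypothetical, not code from the
tree:

    #include <algorithm>

    // Hypothetical helper: on x86-64, a byval argument's stack slot
    // is aligned to at least 8 bytes, and more if the type itself
    // demands it; 32-bit behavior is left unchanged.
    unsigned getByValSlotAlignment(unsigned TypeAlign, bool Is64Bit) {
      if (Is64Bit)
        return std::max(8u, TypeAlign); // max of 8 and type alignment
      return TypeAlign;
    }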
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55139 91177308-0d34-0410-b5e6-96231b3b80d8
Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal
on any current target and aren't optimized in DAGCombiner. Instead
of going through intermediate nodes, expand the operations
immediately, choosing between simple loads/stores, target-specific
code, and library calls (see the sketch below).
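As a rough illustration of that three-way choice; the threshold and
names below are invented for the example, not taken from the tree:

    #include <cstdint>

    enum class MemOpLowering { LoadsStores, TargetCode, LibraryCall };

    // Hypothetical decision helper: small constant-size operations
    // become plain loads/stores, the target may claim what is left
    // (e.g. rep;movs on x86), and everything else becomes a call to
    // the C library.
    MemOpLowering chooseLowering(bool ConstantSize, uint64_t Size,
                                 bool TargetHasCustomCode) {
      if (ConstantSize && Size <= 16)  // a handful of scalar moves
        return MemOpLowering::LoadsStores;
      if (TargetHasCustomCode)         // target-specific expansion
        return MemOpLowering::TargetCode;
      return MemOpLowering::LibraryCall;
    }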
Previously, the code that emitted optimized sequences for these
operations was used only at initial SelectionDAG construction time;
now it is used at all times. This fixes some cases where rep;movs
was being used for small copies where simple loads/stores would be
better.
This also cleans up the code that checks for alignments less than 4,
letting the targets make that decision instead of making it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.
Also, this fixes a bug that caused rep;stos to be used for memsets
of 0 with a non-constant memory size when the alignment was at
least 4. It's better to call the library in this case, which can be
significantly faster when the size is large; see the example below.
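The affected pattern looks roughly like this (an illustrative
function, not a test from the tree):

    #include <cstring>

    // Zero-fill with a size unknown at compile time: after this
    // change the compiler emits a call to memset here instead of
    // inline rep;stos, since the library version can be much faster
    // for large n.
    void clear(char *p, std::size_t n) {
      std::memset(p, 0, n); // memset of 0, non-constant size
    }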
This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49572 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for byval arguments that are not 32-bit aligned. To do
this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this argument to the
memmove and memset nodes.
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
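For reference, the new helper has roughly this shape; the parameter
list below is paraphrased from this description, not copied from the
header:

    // Stand-in for the real SelectionDAG value type, so the sketch
    // is self-contained.
    struct SDValue {};

    // Hypothetical approximation of the new helper: a named function
    // with a typed AlwaysInline flag is harder to misuse than
    // threading the extra operand through a generic getNode call.
    SDValue getMemcpy(SDValue Chain, SDValue Dst, SDValue Src,
                      SDValue Size, unsigned Align, bool AlwaysInline) {
      (void)Align; (void)AlwaysInline;
      // ... would build the memcpy node with the always-inline
      // operand ...
      return Chain;
    }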
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8