stack tracebacks on Darwin x86-64 won't work by default;
nevertheless, everybody but me thinks this is a good idea.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49663 91177308-0d34-0410-b5e6-96231b3b80d8
much simpler than in LegalizeDAG because calls are
not yet expanded into call sequences: that happens
after type legalization has finished.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49634 91177308-0d34-0410-b5e6-96231b3b80d8
in its maps. Add some sanity checks that catch
this kind of thing. Hopefully these can be
removed one day (once all problems are fixed!),
but for the moment it seems wise to keep them in.
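A minimal sketch of the kind of consistency check meant here (the map
and the IsNew flag are hypothetical stand-ins, not the actual
DAGTypeLegalizer internals):

  #include <cassert>
  #include <map>

  struct NodeStub { bool IsNew; };

  void sanityCheckMaps(const std::map<NodeStub*, NodeStub*> &ReplacedValues) {
    for (const auto &Entry : ReplacedValues) {
      // A mapped replacement should itself be fully processed; an entry
      // still marked "new" indicates a stale node left in the map.
      assert(!Entry.second->IsNew && "map refers to an unprocessed node");
    }
  }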
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49612 91177308-0d34-0410-b5e6-96231b3b80d8
optimized x86-64 (and x86) calls so that they work (... at least for
my test cases).
Should fix the following problems:
Problem 1: When I introduced the optimized handling of arguments for
tail-called functions (using a sequence of copyto/copyfrom virtual
registers instead of always lowering to the top of the stack), I did
not handle byval arguments correctly, i.e. they did not work at all :).
Problem 2: On x86-64, after the arguments of the tail-called function
have been moved to their registers (which include ESI/RSI etc.), tail
call optimization performs byval lowering, which causes the xSI, xDI,
and xCX registers to be overwritten. This patch handles that by first
moving the arguments to virtual registers; after byval lowering, the
arguments are moved from those virtual registers back to
RSI/RDI/RCX.
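As a rough, self-contained illustration of that ordering (the real fix
operates on SelectionDAG CopyToReg/CopyFromReg nodes; all names below
are illustrative toy stand-ins, not the actual X86 lowering code):

  #include <cstdint>
  #include <cstring>

  struct PhysRegs { uint64_t rsi, rdi, rcx; };

  // Stand-in for byval lowering: it emits a block copy, which on x86
  // may use rep;movs and therefore clobbers RSI/RDI/RCX.
  void lowerByvalCopy(PhysRegs &R, void *Dst, const void *Src, uint64_t N) {
    std::memcpy(Dst, Src, N);
    R.rsi = R.rdi = R.rcx = 0xDEADBEEF; // clobbered by rep;movs
  }

  void emitTailCallArgs(PhysRegs &R, void *Dst, const void *Src, uint64_t N) {
    // 1. Park the register arguments in virtual registers (CopyToReg);
    //    here they are modeled as plain temporaries.
    uint64_t VRsi = R.rsi, VRdi = R.rdi, VRcx = R.rcx;
    // 2. Perform byval lowering, which may clobber the physical registers.
    lowerByvalCopy(R, Dst, Src, N);
    // 3. Restore the arguments from the virtual registers (CopyFromReg).
    R.rsi = VRsi; R.rdi = VRdi; R.rcx = VRcx;
  }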
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49584 91177308-0d34-0410-b5e6-96231b3b80d8
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations immediately,
choosing between simple loads/stores, target-specific code, and
library calls.
Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.
This also cleans up the code that checks for alignments less than 4,
letting the targets make that decision instead of making it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.
Also, this fixes a bug that caused rep;stos to be used for memsets
of 0 with a non-constant memory size when the alignment was at
least 4. It's better to use the library call in this case, which
can be significantly faster when the size is large.
This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.
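A minimal sketch of that selection order, under the assumption that
the decision reduces to three strategies (the function and parameter
names here are hypothetical, not the actual SelectionDAG API):

  #include <cstdint>

  enum class MemOpStrategy { LoadStorePairs, TargetSpecific, LibraryCall };

  // ConstantSize is meaningful only when IsSizeConstant is true.
  MemOpStrategy chooseMemcpyLowering(bool IsSizeConstant, uint64_t ConstantSize,
                                     unsigned MaxStoresPerMemcpy,
                                     bool TargetHasCustomLowering) {
    // Small constant-size copies: emit plain load/store pairs directly,
    // rather than rep;movs, for copies that fit in a few scalar ops.
    if (IsSizeConstant && ConstantSize <= MaxStoresPerMemcpy * 8u)
      return MemOpStrategy::LoadStorePairs;
    // Next, let the target decide -- e.g. x86 may still choose rep;movs
    // in low-alignment cases instead of having target-independent code
    // veto it.
    if (TargetHasCustomLowering)
      return MemOpStrategy::TargetSpecific;
    // Otherwise fall back to the library call, which is often fastest
    // for large or non-constant sizes (e.g. the memset-of-0 case above).
    return MemOpStrategy::LibraryCall;
  }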
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49572 91177308-0d34-0410-b5e6-96231b3b80d8
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
as a ComputeMaskedBits problem, moving all of its special alignment
knowledge to ComputeMaskedBits as low-zero-bits knowledge.
Also, teach ComputeMaskedBits a few basic things about Mul and PHI
instructions.
This improves ComputeMaskedBits-based simplifications in a few cases,
but more noticeably it significantly improves instcombine's alignment
detection for loads, stores, and memory intrinsics.
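As a self-contained sketch of the low-zero-bits formulation (a
simplified illustration, not the actual ComputeMaskedBits code): if
the low k bits of a value are known zero, it is 2^k-byte aligned when
used as a pointer, and the Mul and PHI rules fall out naturally:

  #include <algorithm>
  #include <cstdint>

  // Count how many low bits are known to be zero.
  unsigned countTrailingKnownZeros(uint64_t KnownZero) {
    unsigned TrailZ = 0;
    while (TrailZ < 64 && (KnownZero & (1ULL << TrailZ)))
      ++TrailZ;
    return TrailZ;
  }

  // Alignment implied by a known-zero mask (clamped to keep the
  // shift in range).
  unsigned alignmentFromKnownZero(uint64_t KnownZero) {
    return 1u << std::min(countTrailingKnownZeros(KnownZero), 29u);
  }

  // Mul: the product has at least the sum of the operands' trailing
  // zero bits, e.g. (x * 8) gains three known-zero low bits.
  unsigned mulTrailingZeros(unsigned TrailZLHS, unsigned TrailZRHS) {
    return std::min(TrailZLHS + TrailZRHS, 64u);
  }

  // PHI: only bits known zero on every incoming value stay known
  // zero, so take the minimum over the incoming values.
  unsigned phiTrailingZeros(unsigned TrailZA, unsigned TrailZB) {
    return std::min(TrailZA, TrailZB);
  }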
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49492 91177308-0d34-0410-b5e6-96231b3b80d8
MOVZQI2PQIrr. This would be better handled as a dag combine
(with the goal of eliminating the bitconvert), but I don't know
how to do that safely. Thoughts welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49463 91177308-0d34-0410-b5e6-96231b3b80d8