This is the first incremental patch to implement this feature. It adds no
functionality to LLVM but sets up the information needed from targets in
order to implement the optimization correctly. Each target needs to specify
the maximum number of store operations for conversion of the llvm.memset,
llvm.memcpy, and llvm.memmove intrinsics into a sequence of store operations.
The limit needs to be chosen at the threshold of performance for such an
optimization (generally smallish). The target also needs to specify whether
it supports unaligned stores for multi-byte store operations. This helps
ensure the optimization doesn't generate code that will trap on alignment
errors.
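A rough sketch of what a target might set, per the above (the class and
field names here are illustrative assumptions modeled on TargetLowering,
not a quote of this patch):

    // Hypothetical target: the limits and the unaligned-store flag are
    // assumed names for the per-target information described above.
    class MyTargetLowering : public TargetLowering {
    public:
      MyTargetLowering(TargetMachine &TM) : TargetLowering(TM) {
        // Above these counts, keep the intrinsic call rather than
        // expanding it into an inline sequence of stores.
        maxStoresPerMemset  = 8;
        maxStoresPerMemcpy  = 4;
        maxStoresPerMemmove = 4;
        // Whether multi-byte stores may be misaligned without trapping;
        // false keeps the expansion to naturally aligned stores.
        allowUnalignedStores = false;
      }
    };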
More patches to follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22468 91177308-0d34-0410-b5e6-96231b3b80d8
1. Pass Value*'s into lowering methods so that the proper pointers can be
added to loads/stores from the valist.
2. Intrinsics that return void should only return a token chain, not a token
chain/retval pair (see the sketch after this list).
3. Rename LowerVAArgNext -> LowerVAArg, because VANext is long gone.
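A minimal sketch of point 2, using SDOperand-era SelectionDAG names (the
opcode and function here are hypothetical):

    // A void intrinsic lowers to a node whose only result is the token
    // chain (MVT::Other); no {chain, retval} pair is produced.
    SDOperand LowerMyVoidIntrinsic(SDOperand Chain, SDOperand Arg,
                                   SelectionDAG &DAG) {
      return DAG.getNode(MyISD::VOID_OP, MVT::Other, Chain, Arg);
    }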
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22338 91177308-0d34-0410-b5e6-96231b3b80d8
what the contents of the top bits of these registers are, in the common cases
of targets that sign- and zero-extend the results.
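One common way the DAG records this kind of fact is with ISD::AssertSext /
ISD::AssertZext wrappers; a hedged sketch of the idea (not necessarily the
mechanism of this exact patch):

    // Record that the producer of ArgReg (a placeholder register number)
    // zero-extended an i8 into this i32, so later code can drop redundant
    // masking of the top 24 bits.
    SDOperand Val = DAG.getCopyFromReg(Chain, ArgReg, MVT::i32);
    Val = DAG.getNode(ISD::AssertZext, MVT::i32, Val,
                      DAG.getValueType(MVT::i8));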
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21145 91177308-0d34-0410-b5e6-96231b3b80d8
Add a hook to find out how the target handles shift amounts that are out of
range. Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc.), or they extend the shift (PPC).
This defaults to undefined, which is conservatively correct.
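The shape of such a hook, sketched (the enum and setter names are modeled on
TargetLowering of this era and should be treated as illustrative):

    enum OutOfRangeShiftAmount {
      Undefined,  // default: an out-of-range shift amount is undefined
      Mask,       // amount is masked to the register width (X86, Alpha, ...)
      Extend      // the shift keeps extending, e.g. i32 >> 100 yields 0 (PPC)
    };
    // In a target's TargetLowering constructor, e.g. on X86:
    //   setShiftAmountFlavor(Mask);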
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19676 91177308-0d34-0410-b5e6-96231b3b80d8
track of how to deal with it, and provide the target with a hook that it
can use to legalize arbitrary operations in arbitrary ways.
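A hedged sketch of how the hook is typically wired up (SDOperand-era
signatures; the SELECT case and LowerSELECT helper are hypothetical):

    // In the target's constructor, mark the operation for custom handling:
    //   setOperationAction(ISD::SELECT, MVT::f64, Custom);
    //
    // The legalizer then calls back into the target:
    SDOperand MyTargetLowering::LowerOperation(SDOperand Op,
                                               SelectionDAG &DAG) {
      switch (Op.getOpcode()) {
      case ISD::SELECT:
        return LowerSELECT(Op, DAG);  // hypothetical target helper
      default:
        assert(0 && "Operation not marked Custom by this target!");
        return SDOperand();
      }
    }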
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19609 91177308-0d34-0410-b5e6-96231b3b80d8