vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
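
For illustration, the new expansion looks roughly like the IR below; the
intrinsic name and types are a sketch, not text from the patch:

    declare <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8>, <8 x i8>)

    define <8 x i16> @vabal_u8(<8 x i16> %acc, <8 x i8> %a, <8 x i8> %b) {
      ; absolute difference of the narrow operands via the remaining vabd intrinsic
      %abd = call <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8> %a, <8 x i8> %b)
      ; widen and accumulate with plain IR zext/add, no special combine needed
      %ext = zext <8 x i8> %abd to <8 x i16>
      %sum = add <8 x i16> %acc, %ext
      ret <8 x i16> %sum
    }
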
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112941 91177308-0d34-0410-b5e6-96231b3b80d8
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
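
A minimal sketch of the widening form with both operands extended (types
chosen only for illustration):

    define <4 x i32> @vaddl_s16(<4 x i16> %a, <4 x i16> %b) {
      ; sign-extend both narrow operands, then use an ordinary IR add
      %a32 = sext <4 x i16> %a to <4 x i32>
      %b32 = sext <4 x i16> %b to <4 x i32>
      %sum = add <4 x i32> %a32, %b32
      ret <4 x i32> %sum
    }
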
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112773 91177308-0d34-0410-b5e6-96231b3b80d8
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.
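
The single-extended case looks roughly like this (a vsubw-style operation;
types are illustrative):

    define <4 x i32> @vsubw_u16(<4 x i32> %a, <4 x i16> %b) {
      ; zero-extend only the narrow operand, then subtract in IR
      %b32 = zext <4 x i16> %b to <4 x i32>
      %diff = sub <4 x i32> %a, %b32
      ret <4 x i32> %diff
    }
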
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112416 91177308-0d34-0410-b5e6-96231b3b80d8
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112271 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling fp to int rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.
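
At the IR level the rounding mode is queried with the llvm.flt.rounds
intrinsic, which is what lowers to this node; a minimal usage sketch:

    declare i32 @llvm.flt.rounds()

    define i32 @current_rounding_mode() {
      ; returns the C FLT_ROUNDS encoding (0 = toward zero, 1 = to nearest, ...)
      %mode = call i32 @llvm.flt.rounds()
      ret i32 %mode
    }
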
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110152 91177308-0d34-0410-b5e6-96231b3b80d8
vector shuffles. Temporarily remove the tests for these operations until the
new implementation is working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79579 91177308-0d34-0410-b5e6-96231b3b80d8
the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
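
To show what the overloading buys (the exact suffix mangling below is a
sketch), a single overloaded vpadd intrinsic can be instantiated with either
integer or floating-point element types instead of needing separate intrinsics:

    declare <8 x i8>    @llvm.arm.neon.vpadd.v8i8(<8 x i8>, <8 x i8>)
    declare <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float>, <2 x float>)

    define void @pairwise_add_examples(<8 x i8> %i, <2 x float> %f) {
      ; same overloaded intrinsic, integer and floating-point instantiations
      %pi = call <8 x i8> @llvm.arm.neon.vpadd.v8i8(<8 x i8> %i, <8 x i8> %i)
      %pf = call <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float> %f, <2 x float> %f)
      ret void
    }
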
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78646 91177308-0d34-0410-b5e6-96231b3b80d8
take the table vectors as separate arguments, instead of the previous
approach where they were combined into one big vector.
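
A sketch of the new form (the exact signature is quoted from memory and may
differ): a two-register table lookup takes each table vector as its own
argument rather than one concatenated <16 x i8> vector:

    declare <8 x i8> @llvm.arm.neon.vtbl2(<8 x i8>, <8 x i8>, <8 x i8>)

    define <8 x i8> @table_lookup(<8 x i8> %t1, <8 x i8> %t2, <8 x i8> %idx) {
      ; the two table vectors are passed separately, followed by the index vector
      %r = call <8 x i8> @llvm.arm.neon.vtbl2(<8 x i8> %t1, <8 x i8> %t2, <8 x i8> %idx)
      ret <8 x i8> %r
    }
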
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78525 91177308-0d34-0410-b5e6-96231b3b80d8
as vector shuffles did not work out well. Shuffles that produce double-wide
vectors accurately represent the operation but make it hard to do anything
with the results. I considered splitting them up into 2 shuffles, one to
write each register separately, but there doesn't seem to be a good way to
reunite them for codegen.
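
For concreteness, the rejected representation of a vtrn-style operation was a
single shufflevector yielding one double-wide value, which then has to be
split back into two registers for codegen (the mask here is illustrative):

    define <16 x i8> @vtrn_as_one_shuffle(<8 x i8> %a, <8 x i8> %b) {
      ; both halves of the transpose packed into one double-wide result
      %wide = shufflevector <8 x i8> %a, <8 x i8> %b,
              <16 x i32> <i32 0, i32 8, i32 2, i32 10, i32 4, i32 12, i32 6, i32 14,
                          i32 1, i32 9, i32 3, i32 11, i32 5, i32 13, i32 7, i32 15>
      ret <16 x i8> %wide
    }
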
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78437 91177308-0d34-0410-b5e6-96231b3b80d8
wide vectors. Likewise, change VSTn intrinsics to take separate arguments
for each vector in a multi-vector struct. Adjust tests accordingly.
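
As a rough sketch of the new shapes (the mangled names and any alignment
operands have changed over time, so treat this only as an outline): vld2
returns the vectors as a struct, and the matching vst2 takes them as separate
operands:

    declare { <8 x i8>, <8 x i8> } @llvm.arm.neon.vld2.v8i8(i8*)
    declare void @llvm.arm.neon.vst2.v8i8(i8*, <8 x i8>, <8 x i8>)

    define void @copy_interleaved(i8* %src, i8* %dst) {
      ; vld2 yields a struct of two vectors instead of one wide vector
      %pair = call { <8 x i8>, <8 x i8> } @llvm.arm.neon.vld2.v8i8(i8* %src)
      %v0 = extractvalue { <8 x i8>, <8 x i8> } %pair, 0
      %v1 = extractvalue { <8 x i8>, <8 x i8> } %pair, 1
      ; vst2 takes the two vectors as separate arguments
      call void @llvm.arm.neon.vst2.v8i8(i8* %dst, <8 x i8> %v0, <8 x i8> %v1)
      ret void
    }
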
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77468 91177308-0d34-0410-b5e6-96231b3b80d8
"parameter" types. An intrinsic can now return a multiple return values like
this:
def add_with_overflow : Intrinsic<[llvm_i32_ty, llvm_i1_ty],
                                  [LLVMMatchType<0>, LLVMMatchType<0>]>;
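
On the IR side, a call to such an intrinsic produces an aggregate value that
is unpacked with extractvalue; for example, using the in-tree
sadd.with.overflow intrinsic purely as a usage sketch:

    declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

    define i32 @checked_add(i32 %a, i32 %b) {
      ; the aggregate return is unpacked with extractvalue
      %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
      %sum = extractvalue { i32, i1 } %res, 0
      %ovf = extractvalue { i32, i1 } %res, 1
      ; a real caller would branch on %ovf; the sum is returned here for brevity
      ret i32 %sum
    }
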
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59237 91177308-0d34-0410-b5e6-96231b3b80d8