Add intrinsics @llvm.arm.neon.vmulls and @llvm.arm.neon.vmullu.* back. Frontends
were lowering them to sext / zext + mul instructions. Unfortunately the
optimization passes may hoist the extensions out of the loop and separate them.
When that happens, the long multiplication instructions can be broken into
several scalar instructions, causing a significant performance issue.
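A sketch of the two forms in IR (illustrative; function names are hypothetical, the intrinsic name is real):

```llvm
; The intrinsic keeps the widening multiply as a single unit, so isel
; can always select one vmull.s16:
define <4 x i32> @mul_intrinsic(<4 x i16> %a, <4 x i16> %b) {
  %r = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %a, <4 x i16> %b)
  ret <4 x i32> %r
}

; The sext + mul form only matches vmull while the three instructions
; stay adjacent; if a pass hoists the extensions away, the <4 x i32>
; mul must be legalized into scalar multiplies:
define <4 x i32> @mul_extended(<4 x i16> %a, <4 x i16> %b) {
  %sa = sext <4 x i16> %a to <4 x i32>
  %sb = sext <4 x i16> %b to <4 x i32>
  %r  = mul <4 x i32> %sa, %sb
  ret <4 x i32> %r
}

declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>)
```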

Note that the vmla and vmls intrinsics are not added back. Frontends will codegen
them as vmull* intrinsics + add / sub. Also note that the isel optimizations for
catching mul + sext / zext are not changed either.
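For the accumulating forms, that frontend lowering looks roughly like this (illustrative IR; the function name is hypothetical, the intrinsic name is real):

```llvm
; vmlal has no intrinsic: frontends emit vmull* followed by an add,
; and isel is expected to fuse the pair back into a single vmlal.s16.
define <4 x i32> @mlal(<4 x i32> %acc, <4 x i16> %a, <4 x i16> %b) {
  %m = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %a, <4 x i16> %b)
  %r = add <4 x i32> %acc, %m
  ret <4 x i32> %r
}

declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>)
```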

First part of rdar://8832507, rdar://9203134


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128502 91177308-0d34-0410-b5e6-96231b3b80d8
This commit is contained in:
Evan Cheng
2011-03-29 23:06:19 +00:00
parent 75c7563f83
commit 92e3916c3b
5 changed files with 113 additions and 12 deletions
@@ -76,20 +76,13 @@
 ; CHECK: zext <4 x i16>
 ; CHECK-NEXT: sub <4 x i32>
-; vmull should be auto-upgraded to multiply with sext/zext
-; (but vmullp should remain an intrinsic)
+; vmull* intrinsics will remain intrinsics
 ; CHECK: vmulls8
-; CHECK-NOT: arm.neon.vmulls.v8i16
-; CHECK: sext <8 x i8>
-; CHECK-NEXT: sext <8 x i8>
-; CHECK-NEXT: mul <8 x i16>
+; CHECK: arm.neon.vmulls.v8i16
 ; CHECK: vmullu16
-; CHECK-NOT: arm.neon.vmullu.v4i32
-; CHECK: zext <4 x i16>
-; CHECK-NEXT: zext <4 x i16>
-; CHECK-NEXT: mul <4 x i32>
+; CHECK: arm.neon.vmullu.v4i32
 ; CHECK: vmullp8
 ; CHECK: arm.neon.vmullp.v8i16