Update
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27643 91177308-0d34-0410-b5e6-96231b3b80d8
parent 9df21dc0d2
commit fc7c17abb5
@@ -191,6 +191,18 @@ commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.

How about intrinsics? An example is:

	*res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));

compiles to
	pmuludq (%eax), %xmm0
	movl 8(%esp), %eax
	movdqa (%eax), %xmm1
	pmulhuw %xmm0, %xmm1

The transformation probably requires an X86-specific pass or a DAG combiner
target-specific hook.
//===---------------------------------------------------------------------===//

LSR should be turned on for the X86 backend and tuned to take advantage of its