movd always clears the top 96 bits, and movss does so when it is loading the value from memory. The net result is that codegen for 4-wide shuffles is much improved. It is near optimal when one or more elements are zero. E.g.

__m128i test(int a, int b) {
  return _mm_set_epi32(0, 0, b, a);
}

compiles to:

_test:
        movd 8(%esp), %xmm1
        movd 4(%esp), %xmm0
        punpckldq %xmm1, %xmm0
        ret

Compare to gcc:

_test:
        subl $12, %esp
        movd 20(%esp), %xmm0
        movd 16(%esp), %xmm1
        punpckldq %xmm0, %xmm1
        movq %xmm1, %xmm0
        movhps LC0, %xmm0
        addl $12, %esp
        ret

or icc:

_test:
        movd 4(%esp), %xmm0         #5.10
        movd 8(%esp), %xmm3         #5.10
        xorl %eax, %eax             #5.10
        movd %eax, %xmm1            #5.10
        punpckldq %xmm1, %xmm0      #5.10
        movd %eax, %xmm2            #5.10
        punpckldq %xmm2, %xmm3      #5.10
        punpckldq %xmm3, %xmm0      #5.10
        ret                         #5.10

There is still room for improvement. For example, the FP variant of the above example:

__m128 test(float a, float b) {
  return _mm_set_ps(0.0, 0.0, b, a);
}

compiles to:

_test:
        movss 8(%esp), %xmm1
        movss 4(%esp), %xmm0
        unpcklps %xmm1, %xmm0
        xorps %xmm1, %xmm1
        movlhps %xmm1, %xmm0
        ret

The xorps and movlhps are unnecessary: the movss loads already zero the upper bits, so the unpcklps result already has zeros in its top two lanes, and the sequence could end after the unpcklps. Handling this will require a post-legalizer optimization.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27939 91177308-0d34-0410-b5e6-96231b3b80d8
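For anyone who wants to check the lane semantics these examples rely on, here is a minimal C harness. It is a sketch added for illustration and is not part of the commit itself; the function names, file name, and build line are assumptions, and any SSE2-capable compiler should do.

#include <emmintrin.h>  /* SSE2: __m128i, _mm_set_epi32, _mm_storeu_si128 */
#include <xmmintrin.h>  /* SSE:  __m128, _mm_set_ps, _mm_storeu_ps */
#include <stdio.h>

/* The integer example from the message above. */
static __m128i test_int(int a, int b) {
  return _mm_set_epi32(0, 0, b, a);
}

/* The FP variant from the message above. */
static __m128 test_fp(float a, float b) {
  return _mm_set_ps(0.0f, 0.0f, b, a);
}

int main(void) {
  int i[4];
  float f[4];

  _mm_storeu_si128((__m128i *)i, test_int(1, 2));
  _mm_storeu_ps(f, test_fp(1.0f, 2.0f));

  /* Expect a in lane 0, b in lane 1, and zeros in the top two lanes;
     the zeros are exactly what the movd/movss zero-extension provides. */
  printf("int: %d %d %d %d\n", i[0], i[1], i[2], i[3]);
  printf("fp:  %g %g %g %g\n", f[0], f[1], f[2], f[3]);
  return 0;
}

Building with something like "cc -msse2 -O2 -S shuffle_check.c" and reading the emitted assembly for test_int and test_fp is then enough to compare against the sequences quoted above.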