llvm-6502/test/CodeGen
Benjamin Kramer 11f2bf7f15 X86: Do splat promotion later, so the optimizer can chew on it first.
This catches many cases where we can emit a more efficient shuffle for a
specific mask or when the mask contains undefs. Once the splat is lowered to
unpacks, we can't do that anymore.

There is a possibility of moving the promotion after pshufb matching, but I'm
not sure whether a pshufb with a mask loaded from memory is faster than three
shuffles, so I avoided that for now. A sketch of the kind of splat input
involved follows after this entry.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173569 91177308-0d34-0410-b5e6-96231b3b80d8
2013-01-26 11:44:21 +00:00
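A minimal sketch of the kind of test input this change targets, assuming a
v4i32 splat; the function name and types are hypothetical and not taken from
the commit. The splat is written as a shufflevector with an all-zero mask,
which the shuffle combiner can still rewrite as long as it has not yet been
promoted to unpacks.

; Hypothetical example, not from the commit: a splat of lane 0 expressed
; as a shufflevector with an all-zero mask. Promoting it to unpacks before
; the combiner runs would hide the mask from later shuffle matching.
define <4 x i32> @splat_lane0(<4 x i32> %a) {
  %s = shufflevector <4 x i32> %a, <4 x i32> undef, <4 x i32> zeroinitializer
  ret <4 x i32> %s
}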
ARM Fixed the condition codes for the atomic64 min/umin code generation on ARM. If the subtraction of the higher 32-bit parts gives a 0 result, we need to do the store operation. 2013-01-25 10:39:49 +00:00
CPP
Generic For inline asm: 2013-01-11 18:12:39 +00:00
Hexagon Add indexed load/store instructions for offset validation check. 2013-01-17 18:42:37 +00:00
MBlaze
Mips [mips] Set the neverHasSideEffects flag on some of the floating point instructions. 2013-01-25 00:20:39 +00:00
MSP430
NVPTX
PowerPC Restore reverted test case, this time with REQUIRES: asserts 2013-01-17 19:46:51 +00:00
R600 DAGCombiner: Avoid generating illegal vector INT_TO_FP nodes 2013-01-02 22:13:01 +00:00
SI
SPARC
Thumb
Thumb2 FileCheck-ify some grep tests 2013-01-25 22:11:46 +00:00
X86 X86: Do splat promotion later, so the optimizer can chew on it first. 2013-01-26 11:44:21 +00:00
XCore