commit 1f821512fc
Enhance MemDep: When alias analysis returns a partial alias result,
return it as a clobber. This allows GVN to do smart things.

Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:

int test(void *P) {
  int tmp = *(unsigned int*)P;
  return tmp+*((unsigned char*)P+1);
}

into:

_test:                                  ## @test
        movl    (%rdi), %ecx
        movzbl  %ch, %eax
        addl    %ecx, %eax
        ret

which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130180 91177308-0d34-0410-b5e6-96231b3b80d8
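Conceptually, GVN now reuses the wide load's value instead of issuing the
second, overlapping narrow load: on a little-endian target the byte at P+1
is bits 8-15 of the 32-bit value, which is exactly what the
movzbl %ch, %eax above extracts. A minimal C sketch of the transformed
function (a hand-written illustration assuming little-endian, not compiler
output; the function name is hypothetical):

int test_transformed(void *P) {
  unsigned int tmp = *(unsigned int*)P;        /* the one remaining load */
  unsigned char b = (unsigned char)(tmp >> 8); /* little-endian *((unsigned char*)P+1) */
  return tmp + b;
}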
Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n), however
ScalarEvolution currently evaluates it as:

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) *
  (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) +
  (-1 * (trunc i64 undef to i32)))

This could be folded to:

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
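For the first note above, the chain of recurrences {1,+,3,+,2} describes a
value that starts at 1 with a stride that starts at 3 and grows by 2 each
iteration, so it visits 1, 4, 9, 16, ... A C analogue (names and loop
bounds are illustrative, not taken from the .ll test) showing why the exit
value is simply %n * %n:

long quadratic_exit(long n) {   /* hypothetical name */
  long r = 1, stride = 3;       /* models {1,+,3,+,2} */
  for (long i = 1; i < n; ++i) {
    r += stride;                /* r == (i+1) * (i+1) after this */
    stride += 2;
  }
  return r;                     /* == n * n for n >= 1 */
}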
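For the second note, the first two terms cancel because truncation commutes
with two's-complement negation: trunc(-x) == -trunc(x), so
trunc(-1 * %arg5) + trunc(%arg5) == 0. A small C analogue (unsigned
arithmetic models LLVM's wrapping semantics; names are illustrative):

#include <stdint.h>

uint32_t folded(uint64_t arg5, uint64_t u) {
  uint32_t a = (uint32_t)(0 - arg5);   /* trunc i64 (-1 * %arg5) to i32 */
  uint32_t b = (uint32_t)arg5;         /* trunc i64 %arg5 to i32        */
  uint32_t c = 0 - (uint32_t)u;        /* -1 * (trunc i64 undef to i32) */
  return a + b + c;                    /* a + b == 0, so result == c    */
}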