Chandler Carruth 6231d5be41 Try to harden the recursive simplification still further. This is again
spotted by inspection, and I've crafted no test case that triggers it on
my machine, but some of the Windows builders are hitting what looks like
memory corruption, so *something* is amiss here.

This patch takes a more generalized approach to eliminating
double-visits. Imagine code such as:

  %x = ...
  %y = add %x, 1
  %z = add %x, %y

You can imagine that if we simplify %x, we would add %y and %z to the
list. If the use-chain order happens to cause us to add them in reverse
order, we could pull %y off first and simplify it, adding %z to the list
a second time. If visiting the first %z entry simplifies it away, the
second, stale entry will reference %z after it has been deleted.
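
One general way to eliminate such double-visits is to track which values
are currently pending on the worklist and refuse to enqueue a value
a second time. The following is only an illustrative sketch in plain C++
(standard containers and invented names; not the actual patch or LLVM's
data structures):

  #include <cassert>
  #include <unordered_set>
  #include <vector>

  // Hypothetical de-duplicating worklist: a value already pending is never
  // enqueued again, so the duplicate %z entry described above cannot arise.
  struct DedupWorklist {
    std::vector<void *> List;            // values waiting to be visited
    std::unordered_set<void *> Pending;  // values currently on List

    void push(void *V) {
      // Enqueue V only if it is not already waiting to be visited.
      if (Pending.insert(V).second)
        List.push_back(V);
    }

    void *pop() {
      assert(!List.empty() && "popping an empty worklist");
      void *V = List.back();
      List.pop_back();
      Pending.erase(V);
      return V;
    }

    bool empty() const { return List.empty(); }
  };

In the example above, simplifying %y would then find %z already pending
and leave the list unchanged, so %z is visited exactly once.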

Currently, all my test cases happen not to trigger this, likely due to
the use-chain ordering, but there seems to be no guarantee that such
a situation cannot occur, so we should handle it correctly.

Again, if anyone knows how to craft a testcase that actually triggers
this, please let me know.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153397 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-24 22:34:26 +00:00

Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
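
For reference, a sketch of why the simple form is correct, under the
assumption that the exit value is taken after %n - 1 iterations: the
closed form of the second-order recurrence {a,+,b,+,c} at iteration i is
a + b*i + c*(i*(i-1)/2), so

  {1,+,3,+,2} at iteration i  =  1 + 3*i + 2*(i*(i-1)/2)
                              =  i*i + 2*i + 1
                              =  (i+1) * (i+1)

which is (%n * %n) at i = %n - 1.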

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution
is forming this expression:

((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

Since trunc commutes with add, the first two terms sum to zero, so this
could be folded to

(-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//