makes ScalarEvolution::deleteValueFromRecords, code that subtly needed to
be called before ReplaceAllUsesWith, unnecessary.
It also makes ValueDeletionListener unnecessary.
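A rough sketch of the mechanism being adopted (not ScalarEvolution's exact
class): a CallbackVH subclass is notified of deletion and of
ReplaceAllUsesWith directly, which is why no explicit bookkeeping call is
needed before RAUW:

    #include "llvm/Support/ValueHandle.h"
    using namespace llvm;

    // Illustrative only: a handle that keeps some side table in sync with
    // the IR by overriding CallbackVH's hooks.
    struct RecordKeepingVH : public CallbackVH {
      explicit RecordKeepingVH(Value *V) : CallbackVH(V) {}

      // Called when the underlying Value is deleted; records keyed on it
      // can be erased here, before the pointer goes stale. (A real
      // implementation would also drop or re-point the handle itself.)
      virtual void deleted() { /* erase records for the old Value */ }

      // Called when the Value is RAUW'd, so nothing needs to be purged
      // manually before calling ReplaceAllUsesWith.
      virtual void allUsesReplacedWith(Value *New) { /* re-key records */ }
    };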
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70645 91177308-0d34-0410-b5e6-96231b3b80d8
to make the copy constructor and destructor protected, with corresponding
adjustments to the unittests.
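A minimal sketch of the idiom, with hypothetical class names: a protected
copy constructor and destructor let derived classes copy and destroy
instances while preventing clients from slicing or deleting through the
base type:

    // HandleBase and ClientHandle are illustrative names, not classes
    // from the tree.
    class HandleBase {
    protected:
      HandleBase() {}
      HandleBase(const HandleBase &) {}  // only derived classes may copy
      ~HandleBase() {}                   // no delete through HandleBase*
    };

    class ClientHandle : public HandleBase {
    public:
      ClientHandle() {}
      ClientHandle(const ClientHandle &RHS) : HandleBase(RHS) {}
    };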
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70644 91177308-0d34-0410-b5e6-96231b3b80d8
of returning a list of pointers to Values that are deleted. This was
unsafe, because by the nature of what RecursivelyDeleteDeadInstructions
does, the pointers in that list are always dangling. Replace this
with a simple callback mechanism. This may eventually be removed if
all clients can reasonably be expected to use CallbackVH.
Use this to factor out the dead-phi-cycle-elimination code from LSR into
a utility function, and generalize it to use the
RecursivelyDeleteTriviallyDeadInstructions utility function.
This makes LSR more aggressive about eliminating dead PHI cycles;
adjust tests to either be less trivial or to simply expect fewer
instructions.
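The callback shape, as a rough sketch rather than the exact interface
added here: the client is told about each instruction while its pointer
is still valid, instead of receiving a vector of already-dangling
pointers afterwards.

    #include "llvm/Instruction.h"
    using namespace llvm;

    // Illustrative sketch, not the in-tree declaration.
    class DeletionListener {
    public:
      virtual ~DeletionListener() {}
      // Invoked just before I is erased, so I is still a valid pointer.
      virtual void instructionDeleted(Instruction *I) = 0;
    };

    static void deleteWithNotification(Instruction *I, DeletionListener *DL) {
      // (A real routine would also recurse into operands that become
      // trivially dead.)
      if (DL) DL->instructionDeleted(I);  // pointer still valid here
      I->eraseFromParent();               // after this it would dangle
    }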
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70636 91177308-0d34-0410-b5e6-96231b3b80d8
it also forgets any SCEVs associated with loop-header PHIs in the loop,
as they may be dependent on trip count information.
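The idea, as a minimal sketch (not ScalarEvolution's actual code; the
cache is a stand-in for whatever map holds Value-to-SCEV associations):

    #include "llvm/BasicBlock.h"
    #include "llvm/Instructions.h"
    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Analysis/ScalarEvolution.h"
    #include "llvm/ADT/DenseMap.h"
    using namespace llvm;

    // When trip-count information for L is discarded, any cached SCEV
    // keyed on a PHI in L's header may have been derived from it, so
    // drop those entries too.
    static void forgetHeaderPHIs(Loop *L,
                                 DenseMap<Value *, const SCEV *> &Cache) {
      BasicBlock *Header = L->getHeader();
      for (BasicBlock::iterator I = Header->begin();
           PHINode *PN = dyn_cast<PHINode>(I); ++I)
        Cache.erase(PN);
    }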
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70633 91177308-0d34-0410-b5e6-96231b3b80d8
of LSR. This makes the AddUsersIfInteresting phase of LSR a pure
analysis instead of a phase that potentially does CFG modifications.
The conditions under which this code would actually perform a split are
rare; when it does split, the split is usually undone by CodeGenPrepare,
and in the cases where a split survives into codegen, it appears to hurt
more often than it helps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70625 91177308-0d34-0410-b5e6-96231b3b80d8
target hooks canLosslesslyBitCastTo and isTruncateFree. This allows
targets to avoid worrying about handling all combinations of integer
and pointer types.
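A rough sketch of the hook being leaned on (an x86-64-like target, not
any in-tree target's actual code): the logic a target might return from
its isTruncateFree override only has to reason about the integer widths
it cares about, and pointer or other non-integer types simply fall out
as "not free".

    #include "llvm/Type.h"
    using namespace llvm;

    static bool isTruncateFreeLike(const Type *Ty1, const Type *Ty2) {
      if (!Ty1->isInteger() || !Ty2->isInteger())
        return false;
      unsigned Bits1 = Ty1->getPrimitiveSizeInBits();
      unsigned Bits2 = Ty2->getPrimitiveSizeInBits();
      // Truncating a wider integer down to 32 bits is free when a use of
      // the 32-bit subregister suffices, e.g. i64 -> i32.
      return Bits1 > Bits2 && Bits2 == 32;
    }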
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70555 91177308-0d34-0410-b5e6-96231b3b80d8
artificial "ptrtoint", as it tends to clutter up complicated
expressions. The cast operators now print both source and
destination types, which is usually sufficient.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70554 91177308-0d34-0410-b5e6-96231b3b80d8
classes.
This is implemented as a function rather than a method on TargetRegisterClass
because it is symmetric in its arguments.
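A hedged sketch of the shape (the name and the fallback case are
illustrative, not the in-tree declaration): because neither argument is
privileged, a free function reads more naturally than a method on either
register class.

    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // Return a register class that is a sub-class of (or equal to) both
    // A and B, or null if there is none. A real implementation searches
    // the sub-class lists; this sketch only handles containment.
    static const TargetRegisterClass *
    commonSubClassSketch(const TargetRegisterClass *A,
                         const TargetRegisterClass *B) {
      if (A == B)            return A;
      if (A->hasSubClass(B)) return B;  // B is a proper sub-class of A
      if (B->hasSubClass(A)) return A;  // A is a proper sub-class of B
      return 0;
    }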
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70512 91177308-0d34-0410-b5e6-96231b3b80d8
compute an upper-bound value for the trip count, in addition to
the actual trip count. Use this to allow getZeroExtendExpr and
getSignExtendExpr to fold casts in more cases.
This may eventually morph into a more general value-range
analysis capability; there are certainly plenty of places where
more complete value-range information would allow more folding.
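Illustrative of the kind of code this helps (not a test from the tree):
the early exit makes the exact backedge-taken count unknown, but it is
still bounded above by n, and that bound is enough to prove the 32-bit
induction variable never wraps, so the zero-extend can be folded into
the recurrence instead of being re-evaluated every iteration.

    void copy_prefix(const float *A, double *B, unsigned n) {
      for (unsigned i = 0; i < n; ++i) {
        if (A[i] < 0.0f)
          break;                           // exact trip count now unknown
        B[(unsigned long long)i] = A[i];   // zext of i feeds 64-bit indexing
      }
    }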
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70509 91177308-0d34-0410-b5e6-96231b3b80d8
This is *not* turned on by default. Testing indicates it is just as likely
to pessimize code as to improve it. The main issue seems to be that
allocation preference doesn't work effectively; that will change once I've
taught the register allocator "swapping".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70503 91177308-0d34-0410-b5e6-96231b3b80d8
memory operands; otherwise the writebacks get lost when the inline asm
has no other side effects. This fixes rdar://6839427, though clang really
shouldn't generate these anymore.
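A hedged illustration of the failure mode (x86 AT&T syntax, not the test
case from the radar): the store happens inside the asm through the "=m"
operand, so if the asm is modeled as having no side effects, nothing
appears to use it and the writeback can be dropped.

    void write_answer(int *p) {
      // Writes 42 through the indirect memory operand; nothing outside
      // the asm "uses" a result, so unless the asm is known to write
      // memory it looks removable.
      __asm__("movl $42, %0" : "=m"(*p));
    }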
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@70455 91177308-0d34-0410-b5e6-96231b3b80d8