modified in a way that may affect the trip count calculation. Change
IndVars to use this method when it rewrites pointer or floating-point
induction variables instead of using a doInitialization method to
sneak these changes in before ScalarEvolution has a chance to see
the loop. This eliminates the need for LoopPass to depend on
ScalarEvolution.
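A hedged sketch of the intended usage; the method name used here,
forgetLoopBackedgeTakenCount, is my assumption about this revision,
not something stated in the message above:

    // After IndVars rewrites a pointer or floating-point induction
    // variable in place, tell ScalarEvolution to drop any cached
    // trip-count information for the loop (hypothetical name).
    if (Changed)
      SE->forgetLoopBackedgeTakenCount(L);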
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64810 91177308-0d34-0410-b5e6-96231b3b80d8
U include/llvm/CodeGen/DebugLoc.h
U lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
U lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp
U lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
Enable debug location generation at -Os. This goes with the reapplication of the
r63639 patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64715 91177308-0d34-0410-b5e6-96231b3b80d8
Clean up some warnings.
Remark: when a struct/class is declared differently than it is defined, this causes problems for VC++, since it seems to mangle classes differently than structs. These errors are very hard to understand and find. So please try to keep your declarations and definitions in sync; see the example below.
Only tested with VS2008; hope it does not break anything. Feel free to revert.
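For illustration, the kind of mismatch meant here (example code, not
taken from the patch):

    class SymbolTable;       // forward-declared as 'class' here...
    struct SymbolTable {     // ...but defined as 'struct'. VC++ mangles
      int NumEntries;        // class and struct differently, so this
    };                       // mismatch can produce baffling errors.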
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64554 91177308-0d34-0410-b5e6-96231b3b80d8
taken advantage of anywhere. Change the definition
of IntrWriteArgMem to no longer imply nocapture, and
explicitly add nocapture attributes everywhere (well,
not quite everywhere, because some of these intrinsics
did capture their arguments!). Also, make clear that
the lack of other side-effects does not exclude doing
volatile loads or stores - the atomic intrinsics do
these, yet they are all marked IntrWriteArgMem (this
change is safe because nothing exploited it).
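To illustrate the distinction being drawn (example code, not from the
patch): a pointer argument is "captured" when a copy of it outlives
the call, which is independent of whether the callee only writes
through it:

    static char *Saved;
    void WritesOnly(char *P) { *P = 0; } // nocapture would be correct
    void Captures(char *P) {             // writes argument memory, but
      *P = 0;                            // also stashes the pointer, so
      Saved = P;                         // nocapture would be wrong here
    }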
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64539 91177308-0d34-0410-b5e6-96231b3b80d8
being used for atomic intrinsics, it seems the
access may be volatile. No code was exploiting
the original non-volatile definition, so only
the comment needs changing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64464 91177308-0d34-0410-b5e6-96231b3b80d8
loop induction on LP64 targets. When the induction variable is
used in addressing, IndVars is now usually able to insert a
64-bit induction variable and eliminate the sign-extending cast.
This is also useful for code using C "short" types for
induction variables on targets with 32-bit addressing.
Inserting a wider induction variable is easy; the tricky part is
determining when trunc(sext(i)) expressions are no-ops. This
requires range analysis of the loop trip count. A common case is
when the original loop iteration starts at 0 and exits when the
induction variable is signed-less-than a fixed value; this case
is now handled.
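As a concrete illustration (an assumed C-level example, not taken
from the patch):

    // On an LP64 target, 'i' is sign-extended on every use in
    // addressing. Widening 'i' to 64 bits is safe when trunc(sext(i))
    // is a no-op, which range analysis proves here: i starts at 0 and
    // the loop exits when i is signed-less-than n.
    void zero(double *A, int n) {
      for (int i = 0; i < n; ++i)
        A[i] = 0.0;  // address computed as A + (int64_t)i * 8
    }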
This replaces IndVarSimplify's OptimizeCanonicalIVType. It was
doing the same optimization, but it was limited to loops with
constant trip counts, because it was running after the loop
rewrite, and the information about the original induction
variable is lost by that point.
Rename ScalarEvolution's executesAtLeastOnce to
isLoopGuardedByCond, generalize it to be able to test for
ICMP_NE conditions, and move it to be a public function so that
IndVars can use it.
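A sketch of the resulting query from IndVars' side; the exact
signature is my assumption based on the description above, and
WidenIV is a hypothetical helper:

    // Ask whether the loop is dominated by a guard establishing
    // IV <s Limit, making the widened IV's truncation a no-op.
    if (SE->isLoopGuardedByCond(L, ICmpInst::ICMP_SLT,
                                SE->getSCEV(IndVar), SE->getSCEV(Limit)))
      WidenIV();  // safe to use the wide induction variable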
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64407 91177308-0d34-0410-b5e6-96231b3b80d8
add efficient versions of op_begin and op_end. Up to now, those from User have always been
called, which in most cases follow an indirection (OperandList) even if the exact Instruction
type is known.
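The idea, sketched with illustrative code rather than the actual
accessor machinery: for an instruction class with a fixed number of
co-allocated operands, the operand array's address is computable from
'this' alone:

    // The Use array is co-allocated immediately before the object, so
    // no load of the OperandList field is needed (2 operands assumed).
    Use *op_begin() { return reinterpret_cast<Use*>(this) - 2; }
    Use *op_end()   { return reinterpret_cast<Use*>(this); }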
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64331 91177308-0d34-0410-b5e6-96231b3b80d8
instruction index across each part. Instruction indices are used
to make live range queries, and live ranges can extend beyond
scheduling region boundaries.
Refactor the ScheduleDAGSDNodes class some more so that it
doesn't have to worry about this additional information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64288 91177308-0d34-0410-b5e6-96231b3b80d8
scheduling, and generalize it so that it preserves state across
scheduling regions. This fixes incorrect live-range information around
terminators and labels, which are effective region boundaries.
In place of looking for terminators to anchor inter-block dependencies,
introduce special entry and exit scheduling units for this purpose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64254 91177308-0d34-0410-b5e6-96231b3b80d8
even if the underlying operand is NULL. This may happen in a debugging context
within opt with partial loop unrolling (see test/Transforms/LoopUnroll/partial.ll).
After this fix I can resubmit the (backed out) r63459:
* lib/VMCore/AsmWriter.cpp: use precise accessors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64142 91177308-0d34-0410-b5e6-96231b3b80d8
surprise to some callers, e.g. the register coalescer. For now, add a parameter
that tells AnalyzeBranch whether it's safe to modify the mbb. A better
solution is out there, but I don't have time to deal with it right now.
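Sketch of the resulting interface; the parameter name AllowModify is
an assumption (drawn from later LLVM sources), and the default keeps
existing callers unchanged:

    virtual bool AnalyzeBranch(MachineBasicBlock &MBB,
                               MachineBasicBlock *&TBB,
                               MachineBasicBlock *&FBB,
                               SmallVectorImpl<MachineOperand> &Cond,
                               bool AllowModify = false) const;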
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64124 91177308-0d34-0410-b5e6-96231b3b80d8
Adjust derived classes to pass UnknownLoc where
a DebugLoc does not make sense. Pick one of
DebugLoc and non-DebugLoc variants to survive
for all such classes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@64000 91177308-0d34-0410-b5e6-96231b3b80d8
Many targets build placeholder nodes for special operands, e.g.
GlobalBaseReg on X86 and PPC for the PIC base. There's no
sensible way to associate debug info with these. I've left
them built with getNode calls that take explicit DebugLoc::getUnknownLoc operands.
I'm not too happy about this but don't see a good improvement;
I considered adding a getPseudoOperand or something, but it
seems to me that'll just make it harder to read.
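For illustration, one such call site might look like this (a sketch,
not quoted from the patch):

    // Placeholder node for the PIC base; no source location applies.
    SDValue Base = DAG.getNode(X86ISD::GlobalBaseReg,
                               DebugLoc::getUnknownLoc(),
                               getPointerTy());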
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63992 91177308-0d34-0410-b5e6-96231b3b80d8
getCALLSEQ_{END,START} to permit passing no DebugLoc
there. UNDEF doesn't logically have a DebugLoc; add
getUNDEF to encapsulate this.
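A plausible shape for the new wrapper (a sketch consistent with the
description above, not the verbatim patch):

    SDValue SelectionDAG::getUNDEF(MVT VT) {
      // UNDEF never carries a meaningful source location.
      return getNode(ISD::UNDEF, DebugLoc::getUnknownLoc(), VT);
    }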
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63978 91177308-0d34-0410-b5e6-96231b3b80d8