C bindings. Change all the backend "Initialize" functions to have C linkage.
Change the "llvm/Config/Targets.def" header to use C-style comments to avoid
compile warnings.
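For illustration, a minimal sketch of the pattern (not the actual patch); InitializeFooTarget is a hypothetical stand-in for the per-backend initializers:
#ifdef __cplusplus
extern "C" {
#endif
/* C-style comment, so the header stays warning-free when included from C. */
void InitializeFooTarget(void);   /* hypothetical backend initializer */
#ifdef __cplusplus
}
#endif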
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74026 91177308-0d34-0410-b5e6-96231b3b80d8
but should work on all the platforms we care about.
I might revisit this if a totally awesome way to do it occurs to me.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74002 91177308-0d34-0410-b5e6-96231b3b80d8
Chris recently broke llvmc with his Makefile changes (r75379). That patch made
the global change .o -> .a, which caused built-in llvmc plugins to stop working
since plugin initialization in llvmc is based on static variables not referenced
from the main executable. This patch implements auto-generated forced references
to the plugin libraries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74000 91177308-0d34-0410-b5e6-96231b3b80d8
Support for .text relocations, implementing TargetELFWriter overloaded methods for x86/x86_64.
Use a map to track global values to their symbol table indexes
Code cleanup and small fixes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73894 91177308-0d34-0410-b5e6-96231b3b80d8
This also throws out the SCEV reference counting scheme, as the SCEVs now have a lifetime controlled by the
ScalarEvolution pass.
Note that SCEVHandle is now a no-op, and will be removed in a future commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73892 91177308-0d34-0410-b5e6-96231b3b80d8
blocks, and also exit blocks with multiple conditions (combined
with (bitwise) ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73866 91177308-0d34-0410-b5e6-96231b3b80d8
LEA64_32r, eliminating a bunch of modifier logic stuff on addr modes.
Implement support for printing mbb labels as operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73817 91177308-0d34-0410-b5e6-96231b3b80d8
create separate recursive mutexes for each value map. The recursiveness fixes the double-acquisition issue, while having one mutex per ValueMap
lets us continue to maintain some concurrency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73801 91177308-0d34-0410-b5e6-96231b3b80d8
so that it can access the TargetData member (when available) and
use ValueTracking.h information to compute information for
SCEVUnknown Values.
Also add GetMinLeadingZeros and GetMinSignBits functions,
with minimal implementations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73794 91177308-0d34-0410-b5e6-96231b3b80d8
gets involved, and we end up trying to recursively acquire a writer lock. The fix for this is slightly horrible,
and involves passing a boolean "locked" parameter around in Constants.cpp, but it's better than having locked and
unlocked versions of most of the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73790 91177308-0d34-0410-b5e6-96231b3b80d8
implementation. The idea is that we want asmprinting to
work by converting MachineInstrs into a new MCInst class;
the per-instruction asmprinter then works on MCInst. MCInst
and the new asmprinters will not depend on most of the
llvm code generators. This allows building disassemblers
that don't link in the whole llvm code generator. This is
step #1 of many.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73743 91177308-0d34-0410-b5e6-96231b3b80d8
into DarwinTargetAsmInfo.cpp. The remaining differences should
be evaluated. It seems strange that x86/arm has .zerofill but ppc
doesn't, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73742 91177308-0d34-0410-b5e6-96231b3b80d8
should become a no-op when not running in multithreaded mode. Make sys::Mutex a typedef of SmartMutex<false>, to preserve source compatibility.
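Roughly the shape this takes, assuming acquire/release degrade to no-ops when not multithreaded (names and details approximate, not the exact header):
namespace sys {
  template <bool mt_only>
  class SmartMutex {
  public:
    // Sketch: the real versions lock/unlock an OS mutex only when LLVM is
    // running multithreaded; the single-threaded path is a no-op like this.
    bool acquire() { return true; }
    bool release() { return true; }
  };
  // Preserve source compatibility for existing users of sys::Mutex.
  typedef SmartMutex<false> Mutex;
}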
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73709 91177308-0d34-0410-b5e6-96231b3b80d8
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73706 91177308-0d34-0410-b5e6-96231b3b80d8
- Register allocator should resolve the second part of the hint (register number) before passing it to the target since it knows the virtual-to-physical register mapping.
- More fixes to get ARM load / store double word working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73671 91177308-0d34-0410-b5e6-96231b3b80d8
initialization of all targets (InitializeAllTargets.h) or assembler
printers (InitializeAllAsmPrinters.h). This is a step toward the
elimination of relinked object files, so that we can build normal
archives.
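A hedged usage sketch of what a tool does with these headers (include paths are assumptions; the commit names the headers but not their directories):
#include "llvm/Target/InitializeAllTargets.h"
#include "llvm/Target/InitializeAllAsmPrinters.h"
int main() {
  // Register every configured backend and asmprinter up front, instead of
  // relying on relinked object files to drag in static initializers.
  llvm::InitializeAllTargets();
  llvm::InitializeAllAsmPrinters();
  return 0;
}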
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73543 91177308-0d34-0410-b5e6-96231b3b80d8
to ignore readonly calls, and factor it out of instcombine so
that it can be used by other passes. Patch by Frits van Bommel!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73506 91177308-0d34-0410-b5e6-96231b3b80d8
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
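A hedged sketch of the intended use (the parameter list of the new ScalarEvolution::getConstant overload is an assumption):
#include "llvm/Analysis/ScalarEvolution.h"
using namespace llvm;
// SE is an existing ScalarEvolution analysis; Ty is an integer type.
const SCEV *makeFortyTwo(ScalarEvolution &SE, const Type *Ty) {
  // Builds the ConstantInt and wraps it in a SCEVConstant in one call,
  // instead of ConstantInt::get(...) followed by SE.getConstant(CI).
  return SE.getConstant(Ty, 42);
}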
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73431 91177308-0d34-0410-b5e6-96231b3b80d8
- Change register allocation hint to a pair of unsigned integers. The hint type is zero (which means prefer the register specified as the second part of the pair) or entirely target dependent.
- Allow targets to specify alternative register allocation orders based on allocation hint.
Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register allocation. It has to be done this way to shorten the live intervals of the base and offset registers. e.g.
v1025 = LDR v1024, 0
v1026 = LDR v1024, 0
=>
v1025,v1026 = LDRD v1024, 0
If this transformation isn't done before allocation, v1024 will overlap v1025, which makes it more difficult to allocate a register pair.
- Even with the register allocation hint, it may not be possible to get the desired allocation. In that case, the post-allocation load / store multiple pass must fix the ldrd / strd instructions. They can either become ldm / stm instructions or back to a pair of ldr / str instructions.
This is work in progress, not yet enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73381 91177308-0d34-0410-b5e6-96231b3b80d8
is that, for functions whose bodies are entirely guarded by an if-statement, it
can be profitable to pull the test out of the callee and into the caller.
This code has had some cursory testing, but still has a number of known issues
on the LLVM test suite.
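A conceptual example (made-up C++ names, not from the patch) of the shape of function this targets:
// Before: the entire body of callee() sits behind one test.
int callee(int *p) {
  if (p) {
    // ... large body ...
    return *p + 1;
  }
  return 0;
}
// After, conceptually: the guard moves into the caller, so the early-exit
// path no longer pays for a call, and only the guarded body stays out of line.
int callee_body(int *p) { /* ... large body ... */ return *p + 1; }
int caller(int *p) {
  return p ? callee_body(p) : 0;
}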
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73338 91177308-0d34-0410-b5e6-96231b3b80d8
Emission for globals, using the correct data sections
Function alignment can be computed for each target using TargetELFWriterInfo
Some small fixes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73201 91177308-0d34-0410-b5e6-96231b3b80d8
This changes the IndexedModeAction representation to remove the
limitation on the number of value types in MVT. This limitation
prevents us from specifying AVX types.
Prior to this change, IndexedModeActions was represented as follows:
uint64_t IndexedModeActions[2][ISD::LAST_INDEXED_MODE];
The first dimension was used to represent loads, then stores. This
imposed a limitation of 32 on the number of value types that could be
handled with this method. The value type was used to shift the two bits
into and out of the appropriate bits in the uint64_t.
With this change, the array is now represented as:
uint8_t IndexedModeActions[MVT::LAST_VALUETYPE][2][ISD::LAST_INDEXED_MODE];
This takes more space but removes the limitation on MVT::LAST_VALUETYPE. The
first dimension is now the value type for the reference. The second
dimension is load [0] vs. store [1]. The third dimension represents
the various indexed modes for load/store. Accesses are now direct, with no
shifting or masking.
There are other limitations that need to be removed, so that
MVT::LAST_VALUETYPE can be greater than 32. This is merely the first
step towards that goal.
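A self-contained sketch of the difference in lookup code (constants and accessor names here are illustrative, not the real TargetLowering members):
#include <cstdint>
enum LegalizeAction { Legal, Custom, Expand, Promote };
enum { LAST_INDEXED_MODE = 5, LAST_VALUETYPE = 64 };  // stand-ins for the ISD/MVT values
// Old scheme: two bits per value type packed into a uint64_t, capping the
// number of value types at 32.
uint64_t OldIndexedModeActions[2][LAST_INDEXED_MODE];
LegalizeAction getOldAction(unsigned LdSt, unsigned IdxMode, unsigned VT) {
  return (LegalizeAction)((OldIndexedModeActions[LdSt][IdxMode] >> (2 * VT)) & 3);
}
// New scheme: value type is the leading dimension, so the action is a direct
// byte access with no shifting or masking, and VT may exceed 32.
uint8_t NewIndexedModeActions[LAST_VALUETYPE][2][LAST_INDEXED_MODE];
LegalizeAction getNewAction(unsigned LdSt, unsigned IdxMode, unsigned VT) {
  return (LegalizeAction)NewIndexedModeActions[VT][LdSt][IdxMode];
}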
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73104 91177308-0d34-0410-b5e6-96231b3b80d8
This changes the IndexedModeAction representation to remove the
limitation on the number of value types in MVT. This limitation
prevents us from specifying AVX types.
Prior to this change, IndexedModeActions was represented as follows:
uint64_t IndexedModeActions[2][ISD::LAST_INDEXED_MODE];
The first dimension was used to represent loads, then stores. This
imposed a limitation of 32 on the number of value types that could be
handled with this method. The value type was used to shift the two bits
into and out of the appropriate bits in the uint64_t.
With this change, the array is now represented as:
uint8_t IndexedModeActions[MVT::LAST_VALUETYPE][2][ISD::LAST_INDEXED_MODE];
This takes more space but removes the limitation on MVT::LAST_VALUETYPE. The
first dimension is now the value type for the reference. The second
dimension is load [0] vs. store [1]. The third dimension represents
the various indexed modes for load/store. Accesses are now direct, with no
shifting or masking.
There are other limitations that need to be removed, so that
MVT::LAST_VALUETYPE can be greater than 32. This is merely the first
step towards that goal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73102 91177308-0d34-0410-b5e6-96231b3b80d8
ABI. The missing piece is support for putting "homogeneous aggregates"
into registers.
Patch by Sandeep Patel!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73095 91177308-0d34-0410-b5e6-96231b3b80d8
other operators. For the rare cases where a list type cannot be
deduced, provide a []<type> syntax, where <type> is the list element
type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73078 91177308-0d34-0410-b5e6-96231b3b80d8
Also create isValidElementType for ArrayType, PointerType, StructType and
VectorType.
Make LLParser use them. This closes up some holes like an assertion failure on:
%x = type {label}
but largely doesn't change any semantics. The only thing we accept now which
we didn't before is vectors of opaque type such as "<4 x opaque>". The opaque
type can be resolved to an int or float when linking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73016 91177308-0d34-0410-b5e6-96231b3b80d8
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72959 91177308-0d34-0410-b5e6-96231b3b80d8
build vectors with i64 elements will only appear on 32b x86 before legalize.
Since vector widening occurs during legalize, and produces i64 build_vector
elements, the dag combiner is never run on these before legalize splits them
into 32b elements.
Teach the build_vector dag combine in x86 back end to recognize consecutive
loads producing the low part of the vector.
Convert the two uses of TLI's consecutive load recognizer to pass LoadSDNodes
since that was required implicitly.
Add a testcase for the transform.
Old:
subl $28, %esp
movl 32(%esp), %eax
movl 4(%eax), %ecx
movl %ecx, 4(%esp)
movl (%eax), %eax
movl %eax, (%esp)
movaps (%esp), %xmm0
pmovzxwd %xmm0, %xmm0
movl 36(%esp), %eax
movaps %xmm0, (%eax)
addl $28, %esp
ret
New:
movl 4(%esp), %eax
pmovzxwd (%eax), %xmm0
movl 8(%esp), %eax
movaps %xmm0, (%eax)
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72957 91177308-0d34-0410-b5e6-96231b3b80d8
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
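A small sketch at the API level of what the opcode split means for IR producers (using the generic BinaryOperator::Create factory; exact signatures for this revision are assumed):
#include "llvm/Instructions.h"
using namespace llvm;
// Integer addition keeps the Add opcode...
Value *makeIntAdd(Value *L, Value *R, Instruction *InsertBefore) {
  return BinaryOperator::Create(Instruction::Add, L, R, "sum", InsertBefore);
}
// ...while floating-point addition now has its own FAdd opcode instead of
// overloading Add for both integer and FP operands.
Value *makeFPAdd(Value *L, Value *R, Instruction *InsertBefore) {
  return BinaryOperator::Create(Instruction::FAdd, L, R, "fsum", InsertBefore);
}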
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72897 91177308-0d34-0410-b5e6-96231b3b80d8
Update code generator to use this attribute and remove DisableRedZone target option.
Update llc to set this attribute when -disable-red-zone command line option is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72894 91177308-0d34-0410-b5e6-96231b3b80d8
TargetData pointer. The only thing it's used for are
calls to ConstantFoldCompareInstOperands and
ConstantFoldInstOperands, which both already accept a
null TargetData pointer. This makes
ConstantFoldConstantExpression easier to use in clients
where TargetData is optional.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72741 91177308-0d34-0410-b5e6-96231b3b80d8
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag. Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.
Most targets will still produce a Flag-setting target-dependent
version when selection is done. X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc. This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted. All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly. The
same can be done on other targets.
The immediate behavior change is that an ADC/ADD pair is no
longer tightly coupled in the X86 scheduler; they can be
separated by instructions that don't clobber the flags (MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72707 91177308-0d34-0410-b5e6-96231b3b80d8
e.g.
orl $65536, 8(%rax)
=>
orb $1, 10(%rax)
Since narrowing is not always a win (e.g. i32 -> i16 is a loss on x86), the dag combiner consults the target before performing the optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72507 91177308-0d34-0410-b5e6-96231b3b80d8
entries as there are basic blocks in the function. LiveVariables::getVarInfo
creates a VarInfo struct for every register in the function, leading to
quadratic space use. This patch changes the BitVector to a SparseBitVector,
which doesn't help the worst-case memory use but does reduce the actual use in
very long functions with short-lived variables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72426 91177308-0d34-0410-b5e6-96231b3b80d8
in the case where a loop exit value cannot be computed, instead of only in
some cases while using SCEVCouldNotCompute in others. This simplifies
getSCEVAtScope's callers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72375 91177308-0d34-0410-b5e6-96231b3b80d8
sending SCEVUnknowns to expandAddToGEP. This avoids the need for
expandAddToGEP to bend the rules and peek into SCEVUnknown
expressions.
Factor out the code for testing whether a SCEV can be factored by
a constant for use in a GEP index. This allows it to handle
SCEVAddRecExprs, by recursing.
As a result, SCEVExpander can now put more things in GEP indices,
so it emits fewer explicit mul instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72366 91177308-0d34-0410-b5e6-96231b3b80d8
Fix by clearing the rewriter cache before deleting the trivially dead
instructions.
Also make InsertedExpressions use an AssertingVH to catch these
bugs more easily.
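The shape of the change, roughly (member name and map type are illustrative):
#include "llvm/Support/ValueHandle.h"
#include <map>
using namespace llvm;
// If something deletes a Value that is still cached here, the AssertingVH
// asserts at the point of deletion, instead of leaving a dangling pointer to
// be discovered much later.
std::map<const void *, AssertingVH<Value> > InsertedExpressions;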
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72364 91177308-0d34-0410-b5e6-96231b3b80d8
and it wasn't generating calls through @PLT for these functions.
hasLocalLinkage() is now false for available_externally;
I attempted to fix the inliner and dce to handle available_externally properly.
It passed make check.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72328 91177308-0d34-0410-b5e6-96231b3b80d8
will allow simplifying LegalizeDAG to eliminate type legalization. (I
have a patch to do that, but it's not quite finished; I'll commit it
once it's finished and I've fixed any review comments for this patch.)
See the comment at the beginning of
lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp for more details on the
motivation for this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72325 91177308-0d34-0410-b5e6-96231b3b80d8
code in preparation for code generation. The main thing it does
is handle the case when eh.exception calls (and, in a future
patch, eh.selector calls) are far away from landing pads. Right
now in practice you only find eh.exception calls close to landing
pads: either in a landing pad (the common case) or in a landing
pad successor, due to loop passes shifting them about. However
future exception handling improvements will result in calls far
from landing pads:
(1) Inlining of rewinds. Consider the following case:
In function @f:
...
invoke @g to label %normal unwind label %unwinds
...
unwinds:
%ex = call i8* @llvm.eh.exception()
...
In function @g:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
"rethrow exception"
Now inline @g into @f. Currently this is turned into:
In function @f:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
invoke "rethrow exception" to label %normal unwind label %unwinds
unwinds:
%ex = call i8* @llvm.eh.exception()
...
However we would like to simplify invoke of "rethrow exception" into
a branch to the %unwinds label. Then %unwinds is no longer a landing
pad, and the eh.exception call there is then far away from any landing
pads.
(2) Using the unwind instruction for cleanups.
It would be nice to have codegen handle the following case:
invoke @something to label %continue unwind label %run_cleanups
...
run_cleanups:
... perform cleanups ...
unwind
This requires turning "unwind" into a library call, which
necessarily takes a pointer to the exception as an argument
(this patch also does this unwind lowering). But that means
you are using eh.exception again far from a landing pad.
(3) Bugpoint simplifications. When bugpoint is simplifying
exception handling code it often generates eh.exception calls
far from a landing pad, which then causes codegen to assert.
Bugpoint then latches on to this assertion and loses sight
of the original problem.
Note that it is currently rare for this pass to actually do
anything. And in fact it normally shouldn't do anything at
all given the code coming out of llvm-gcc! But it does fire
a few times in the testsuite. As far as I can see this is
almost always due to the LoopStrengthReduce codegen pass
introducing pointless loop preheader blocks which are landing
pads and only contain a branch to another block. This other
block contains an eh.exception call. So probably by tweaking
LoopStrengthReduce a bit this can be avoided.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72276 91177308-0d34-0410-b5e6-96231b3b80d8
If this causes any new assertion failures that I didn't catch in
testing, the fix is usually to change "&v[0]" to "v.data()" for some
SmallVector v.
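A tiny example of the fix pattern (placeholder code):
#include "llvm/ADT/SmallVector.h"
void example(llvm::SmallVector<int, 8> &v) {
  // If v is empty, &v[0] indexes a nonexistent element and can trip the new
  // checking; v.data() is fine even when the vector is empty.
  int *p = v.data();   // instead of &v[0]
  (void)p;
}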
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72221 91177308-0d34-0410-b5e6-96231b3b80d8
type as a target independent constant expression. I confess
that I didn't check that this method works as intended (though
I did test the equivalent hand-written IR a little). But what
could possibly go wrong!
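For reference, the usual target-independent way to express sizeof(T) as a constant expression is a GEP off a null pointer; a hedged sketch (the commit's actual helper is not shown here, so the function name is made up and the era-specific API calls are best effort):
#include "llvm/Constants.h"
#include "llvm/DerivedTypes.h"
using namespace llvm;
// Builds "ptrtoint (T* getelementptr (T* null, i32 1)) to IntTy", which the
// constant folder turns into the target's sizeof(T) once a TargetData is
// around, without baking any target knowledge into the IR.
Constant *sizeOfAsConstantExpr(const Type *T, const Type *IntTy) {
  Constant *NullPtr = Constant::getNullValue(PointerType::getUnqual(T));
  Constant *One = ConstantInt::get(Type::Int32Ty, 1);
  Constant *GEP = ConstantExpr::getGetElementPtr(NullPtr, &One, 1);
  return ConstantExpr::getPtrToInt(GEP, IntTy);
}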
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72213 91177308-0d34-0410-b5e6-96231b3b80d8
mutex support. LLVM_MULTITHREADED indicates (or will indicate) the ability to run LLVM itself across multiple threads, and requires atomics support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72140 91177308-0d34-0410-b5e6-96231b3b80d8
instructions. It attempts to create high-level multi-operand GEPs,
though in cases where this isn't possible it falls back to casting
the pointer to i8* and emitting a GEP with that. Using GEP instructions
instead of ptrtoint+arithmetic+inttoptr helps pointer analyses that
don't use ScalarEvolution, such as BasicAliasAnalysis.
Also, make the AddrModeMatcher more aggressive in handling GEPs.
Previously it assumed that operand 0 of a GEP would require a register
in almost all cases. It now does extra checking and can do more
matching if operand 0 of the GEP is foldable. This fixes a problem
that was exposed by SCEVExpander using GEPs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72093 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce a new class (MachineCodeInfo) that the JIT can fill in with details. Right now, just the address and the size of the machine code are reported.
Patch by Evan Phoenix!
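A hedged usage sketch (entry-point name and header paths are best-effort guesses for this revision):
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/CodeGen/MachineCodeInfo.h"
using namespace llvm;
// EE is a JIT ExecutionEngine that already owns F's module.
void reportCodeExtent(ExecutionEngine *EE, Function *F) {
  MachineCodeInfo MCI;
  EE->runJITOnFunction(F, &MCI);   // compiles F and fills in the details
  void *Addr = MCI.address();      // where the machine code was emitted
  size_t Size = MCI.size();        // how many bytes were emitted
  (void)Addr; (void)Size;
}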
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72040 91177308-0d34-0410-b5e6-96231b3b80d8
The following is checked:
* Operand counts: All explicit operands must be present.
* Register classes: All physical and virtual register operands must be
compatible with the register class required by the instruction descriptor.
* Register live intervals: Registers must be defined only once, and must be
defined before use.
The machine code verifier is enabled with the command-line option
'-verify-machineinstrs', or by defining the environment variable
LLVM_VERIFY_MACHINEINSTRS to the name of a file that will receive all the
verifier errors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71918 91177308-0d34-0410-b5e6-96231b3b80d8
to low-level sync operations.
The only one present at the moment is MemoryFence(), and only for the platforms
for which I could easily discern the proper way to do it. If your favorite platform
isn't represented, patches are welcome!
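A minimal usage sketch, assuming the function sits with the other sys primitives (the header path is a guess):
#include "llvm/System/Atomic.h"
void publish(int *slot, int value, volatile bool *ready) {
  *slot = value;
  // Ensure the store to *slot is visible before the flag flips; on platforms
  // with no fence implementation yet this may compile to nothing.
  llvm::sys::MemoryFence();
  *ready = true;
}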
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71770 91177308-0d34-0410-b5e6-96231b3b80d8
llvm.eh.sjlj.* for better clarity as to their purpose and scope. Add
a description of llvm.eh.sjlj.setjmp to ExceptionHandling.html.
(llvm.eh.sjlj.longjmp documentation coming when that implementation is
added).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71758 91177308-0d34-0410-b5e6-96231b3b80d8
of exception handling builtin sjlj targets in functions turns out not to
be necessary. Marking the intrinsic implementation in the .td file as
defining all registers is sufficient to get the context saved properly by
the containing function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71743 91177308-0d34-0410-b5e6-96231b3b80d8
booleans. This gives a better indication of what the addReg() call is
doing. Remembering what all of those booleans mean isn't easy, especially if you
aren't spending all of your time in that code.
I took Jakob's suggestion and made it illegal to pass in "true" for the
flag. This should hopefully prevent any unintended misuse of this (by reverting
to the old way of using addReg()).
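Roughly what call sites look like after the change (flag names as added by this patch; the snippet is illustrative, not from the diff):
#include "llvm/CodeGen/MachineInstrBuilder.h"
using namespace llvm;
void addResultReg(MachineInstrBuilder &MIB, unsigned Reg) {
  // Old style (now rejected): MIB.addReg(Reg, true, false, false, true);
  // where each positional boolean had to be remembered by order.
  // New style: self-describing flags OR'd together.
  MIB.addReg(Reg, RegState::Define | RegState::Dead);
}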
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71722 91177308-0d34-0410-b5e6-96231b3b80d8
getNoopOrSignExtend, and getTruncateOrNoop. These are similar
to getTruncateOrZeroExtend etc., except that they assert that
the conversion is either not widening or narrowing, as
appropriate. These will be used in some upcoming fixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71632 91177308-0d34-0410-b5e6-96231b3b80d8
without one. Use it where we were using abs on
int64_t objects.
(I strongly suspect the casts to unsigned in the
fragments in LoopStrengthReduce are not doing whatever
the original intent was, but the obvious change to
uint64_t doesn't work. Maybe later.)
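The helper presumably looks something like this (sketch; the real one lives in a support header such as MathExtras.h):
#include <cstdint>
// 64-bit absolute value, avoiding the int-width abs() from the C library.
// Usual caveat: INT64_MIN has no positive counterpart.
inline int64_t abs64(int64_t x) {
  return x < 0 ? -x : x;
}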
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71612 91177308-0d34-0410-b5e6-96231b3b80d8
a supporting preliminary patch for GCC-compatible SjLJ exception handling. Note that these intrinsics are not designed to be invoked directly by the user, but
rather used by the front-end as target hooks for exception handling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71610 91177308-0d34-0410-b5e6-96231b3b80d8
and generalize it so that it can be used by IndVarSimplify. Implement the
base IndVarSimplify transformation code using IVUsers. This removes
TestOrigIVForWrap and associated code, as ScalarEvolution now has enough
builtin overflow detection and folding logic to handle all the same cases,
and more. Run "opt -iv-users -analyze -disable-output" on your favorite
loop for an example of what IVUsers does.
This lets IndVarSimplify eliminate IV casts and compute trip counts in
more cases. Also, this happens to finally fix the remaining testcases
in PR1301.
Now that IndVarSimplify is being more aggressive, it occasionally runs
into the problem where ScalarEvolutionExpander's code for avoiding
duplicate expansions makes it difficult to ensure that all expanded
instructions dominate all the instructions that will use them. As a
temporary measure, IndVarSimplify now uses a FixUsesBeforeDefs function
to fix up instructions inserted by SCEVExpander. Fortunately, this code
is contained, and can be easily removed once a more comprehensive
solution is available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71535 91177308-0d34-0410-b5e6-96231b3b80d8
- reduces _static_ callee-saved register spills
and restores, similar to Chow's original algorithm.
- iterative implementation with simple heuristic
limits to mitigate compile time impact.
- handles placing spills/restores for multi-entry,
multi-exit regions in the Machine CFG without
splitting edges.
- passes test-suite in LLCBETA mode.
Added contains() method to ADT/SparseBitVector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71438 91177308-0d34-0410-b5e6-96231b3b80d8
which are not analyzed with SCEV techniques, since analyzing them can require
brute-forcing through a large number of instructions. This
fixes a massive compile-time issue on 400.perlbench (in
particular, the loop in MD5Transform).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71259 91177308-0d34-0410-b5e6-96231b3b80d8