where none was before. Don't just leave it undeclared and hope some
declaration turns up in every translation unit that needs it.
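For illustration only (a generic sketch, not the code this commit
touched): keep the one declaration in a shared header instead of hoping
each translation unit supplies its own:

  // widget.h -- the single, authoritative declaration
  extern int WidgetCount;

  // a.cpp
  #include "widget.h"
  int WidgetCount = 0;            // the one definition

  // b.cpp
  #include "widget.h"             // no ad-hoc 'extern' redeclaration here
  int readCount() { return WidgetCount; }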
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127612 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the unused reserved_ bit vector, no functional change intended.
This doesn't break 'svn blame', this file really is all my fault.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127607 91177308-0d34-0410-b5e6-96231b3b80d8
properties.
Added the self-wrap flag for SCEV::AddRecExpr.
A slew of temporary FIXMEs indicate the intention of the no-self-wrap flag
without changing behavior in this revision.
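For context, a minimal standalone sketch of what a no-self-wrap
guarantee means for an add recurrence {Start,+,Step} -- this models the
concept only, not SCEV's actual representation; all names are invented:

  #include <cstdint>

  // An affine add recurrence {Start,+,Step}: value(i) = Start + i*Step,
  // computed in a fixed-width type, so it can wrap around.
  struct AddRec {
    uint8_t Start, Step;  // 8 bits wide so wrapping is easy to trigger
    bool NoSelfWrap;      // asserts the sequence never wraps past Start
  };

  uint8_t valueAt(const AddRec &R, unsigned i) {
    return static_cast<uint8_t>(R.Start + i * R.Step); // wraps mod 256
  }
  // With NoSelfWrap set, an analysis may assume the recurrence takes
  // distinct values on distinct iterations of its loop; without it,
  // e.g. {0,+,16} over uint8_t returns to 0 after only 16 iterations.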
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127590 91177308-0d34-0410-b5e6-96231b3b80d8
the load and store reference the same memory location, the memory
location is represented by a getelementptr with two uses (the load and
the store), and the getelementptr's base is an alloca with a single
use. At this point, the instructions from the alloca to the store can
be removed. (This pattern is generated when a bitfield is accessed;
a source-level sketch follows the IR below.)
For example,
%u = alloca %struct.test, align 4 ; [#uses=1]
%0 = getelementptr inbounds %struct.test* %u, i32 0, i32 0;[#uses=2]
%1 = load i8* %0, align 4 ; [#uses=1]
%2 = and i8 %1, -16 ; [#uses=1]
%3 = or i8 %2, 5 ; [#uses=1]
store i8 %3, i8* %0, align 4
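For reference, roughly the kind of source that produces this IR -- a
hedged reconstruction, with invented struct and field names:

  // The low 4-bit field is set to 5: load the byte, mask with -16
  // (0xF0), or in 5, store it back.  Since 'u' is never read again,
  // the whole alloca/getelementptr/load/and/or/store chain is dead.
  struct test {
    unsigned char lo : 4;
    unsigned char hi : 4;
  };
  void f(void) {
    struct test u;
    u.lo = 5;
  }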
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127565 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127560 91177308-0d34-0410-b5e6-96231b3b80d8
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove the optimization that emits a reference instead of a label
difference, since it can create more relocations. Removed the
isBaseAddressKnownZero method, because it is no longer used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127540 91177308-0d34-0410-b5e6-96231b3b80d8
Live range splitting can create a number of small live ranges containing
only a single real use. Spill these small live ranges along with the large
range they are connected to by copies. This enables memory operand folding
and maximizes the spill-to-fill distance.
Work in progress with known bugs.
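A conceptual sketch of the idea (invented types; not the allocator's
real data structures):

  #include <vector>

  struct LiveRangeNode {
    int NumRealUses = 0;                        // uses that aren't copies
    std::vector<LiveRangeNode *> CopySiblings;  // ranges joined by copies
    bool Spill = false;
  };

  // When spilling a range, also spill single-use siblings created by
  // splitting: their one use can then fold into a memory operand, and
  // the reload moves as close to the use as possible.
  void spillWithSiblings(LiveRangeNode &Main) {
    Main.Spill = true;
    for (LiveRangeNode *S : Main.CopySiblings)
      if (S->NumRealUses == 1)
        S->Spill = true;
  }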
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127529 91177308-0d34-0410-b5e6-96231b3b80d8
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.
I tried terminating the binary search with a linear search, but contrary
to my expectation that actually made the algorithm slower. Most live
intervals have fewer than 4 segments. The early test against endIndex()
does pay off, and this version is 25% faster than plain std::upper_bound().
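The shape of the search, as a simplified standalone version -- the
Segment layout and the endIndex()-style early test below are
assumptions, not the exact LiveInterval code:

  #include <vector>

  struct Segment { unsigned start, end; };  // half-open [start, end)

  // Return the first segment whose end is after Pos (the segment
  // containing Pos, or the next one), or nullptr if Pos is past the
  // last segment -- the cheap up-front endIndex()-style test.
  const Segment *find(const std::vector<Segment> &Segs, unsigned Pos) {
    if (Segs.empty() || Pos >= Segs.back().end)
      return nullptr;
    size_t Lo = 0, Hi = Segs.size();
    while (Lo < Hi) {                   // plain binary search, one type,
      size_t Mid = Lo + (Hi - Lo) / 2;  // no std::upper_bound boilerplate
      if (Segs[Mid].end <= Pos)
        Lo = Mid + 1;
      else
        Hi = Mid;
    }
    return &Segs[Lo];
  }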
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127522 91177308-0d34-0410-b5e6-96231b3b80d8
Go ahead and add them when we might want to use them, and let later
passes remove them.
Fixes rdar://9118569
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127518 91177308-0d34-0410-b5e6-96231b3b80d8
actual instruction as the non-Darwin defs, but have different call-clobber
semantics and so need separate patterns. They don't need to duplicate the
encoding information, however.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127515 91177308-0d34-0410-b5e6-96231b3b80d8