The RegisterCoalescer understands overlapping live ranges where one
register is defined as a copy of the other. With this change, register
allocators using LiveRegMatrix can do the same, at least for copies
between physical and virtual registers.
When a physreg is defined by a copy from a virtreg, allow those live
ranges to overlap:
%CL<def> = COPY %vreg11:sub_8bit; GR32_ABCD:%vreg11
%vreg13<def,tied1> = SAR32rCL %vreg13<tied0>, %CL<imp-use,kill>
We can assign %vreg11 to %ECX, overlapping the live range of %CL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163336 91177308-0d34-0410-b5e6-96231b3b80d8
We will soon allow virtual register live ranges to overlap regunit live
ranges when the physreg is defined as a copy of the virtreg:
%EAX = COPY %vreg5
FOO %vreg5
BAR %EAX<kill>
There is no real interference since %vreg5 and %EAX have the same value
where they overlap.
This patch prevents addKillFlags from adding a virtreg kill flag to FOO when
the assigned physreg's live range overlaps the virtual register's live range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163335 91177308-0d34-0410-b5e6-96231b3b80d8
This Operand type takes a default argument and is initialized to
that value if it does not appear in a pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163315 91177308-0d34-0410-b5e6-96231b3b80d8
Make sure to return a pointer into the target memory, not the local memory.
Often they are the same, but we can't assume that.
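As a minimal sketch of the distinction (the structure and function below are
made up, not the actual JIT code): the loader keeps both a host-side buffer
and the address that buffer will occupy in the target, and lookups must return
the latter.

  #include <cstdint>
  #include <cstddef>

  // Hypothetical record for one loaded section.
  struct SectionMemory {
    uint8_t *LocalAddress;   // host-side copy of the section contents
    uint64_t TargetAddress;  // load address in the target's address space
    size_t Size;
  };

  // Return the address the symbol will have in the target, not a pointer into
  // the host-side copy. The two only coincide when code runs in-process.
  uint64_t getSymbolTargetAddress(const SectionMemory &Sec, uint64_t Offset) {
    return Sec.TargetAddress + Offset;
  }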
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163217 91177308-0d34-0410-b5e6-96231b3b80d8
Make the ObjC ARC optimizer more conservative when
pointers-to-strong-pointers may be in play. These can lead to retains and
releases happening in unstructured ways, foiling the optimizer. This fixes
rdar://12150909.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163180 91177308-0d34-0410-b5e6-96231b3b80d8
The implementation does not co-exist well with how the sideeffect and
alignstack attributes are handled. This reverts r161641.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163174 91177308-0d34-0410-b5e6-96231b3b80d8
The MachineOperand::TiedTo field was maintained, but not used.
This patch enables it in isRegTiedToDefOperand() and
isRegTiedToUseOperand(), which are the functions actually used by the
register allocator.
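As a sketch of the query path this affects (the helper below and its call
shape are hypothetical; the two MachineInstr methods are the ones named
above):

  #include "llvm/CodeGen/MachineInstr.h"

  // Given a use operand index, find the def operand it is tied to. With this
  // change the answer comes from the per-operand TiedTo field.
  static unsigned findTiedDef(const llvm::MachineInstr &MI, unsigned UseOpIdx) {
    unsigned DefOpIdx = 0;
    if (MI.isRegTiedToDefOperand(UseOpIdx, &DefOpIdx))
      return DefOpIdx;
    return ~0u; // not a tied use
  }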
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163153 91177308-0d34-0410-b5e6-96231b3b80d8
After much agonizing, use a full 4 bits of precious MachineOperand space
to encode the tied-operand index. This uses existing padding, and doesn't grow
MachineOperand beyond its current 32 bytes.
This allows tied defs among the first 15 operands on a normal
instruction, just like the current MCInstrDesc constraint encoding.
Inline assembly needs to be able to tie more than the first 15 operands,
and gets special treatment.
Tied uses can appear beyond 15 operands, as long as they are tied to a
def that's in range.
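A rough sketch of how a 4-bit scheme like this can work (illustrative
bitfield, not the real MachineOperand layout): reserving 0 for "not tied"
leaves values 1-15 to name the partner operand, which is why tied defs are
limited to the first 15 operands.

  // Illustrative encoding only.
  struct OperandBits {
    unsigned TiedTo : 4; // 0 = not tied, otherwise partner operand index + 1
  };

  inline unsigned encodeTiedTo(unsigned PartnerIdx) { return PartnerIdx + 1; }
  inline bool isTied(const OperandBits &B) { return B.TiedTo != 0; }
  inline unsigned tiedPartner(const OperandBits &B) { return B.TiedTo - 1; }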
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163151 91177308-0d34-0410-b5e6-96231b3b80d8
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with -O2, mapping 32-bit divides to 8-bit ones
- Replace IDIV with instructions that test the operand values and use DIVB when
they are positive and less than 256 (see the sketch after this list).
- In the case when both the quotient and remainder of a divide are used, a DIV
and a REM instruction will be present in the IR. In the non-Atom case they are
both lowered to IDIVs and CSE removes the redundant IDIV, reusing the quotient
and remainder from the first one. With this optimization, however, CSE cannot
eliminate the redundant IDIVs because they end up in different basic blocks.
This is overcome by computing both the quotient (DIV) and remainder (REM) in
each basic block the optimization inserts and reusing those results when a
subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, remainder, or both.
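A hand-written C++ illustration of the control flow the transformation inserts
(the function and its names are made up; the real pass works on IR, and the
byte divide lowers to DIVB on Atom):

  #include <cstdint>

  int32_t bypassedDivide(int32_t a, int32_t b) {
    // Both operands fit in 8 bits iff neither has any upper bit set,
    // i.e. both are non-negative and less than 256.
    if (((uint32_t)a | (uint32_t)b) < 256u) {
      uint8_t q = (uint8_t)a / (uint8_t)b; // cheap 8-bit divide (DIVB)
      return q;
    }
    return a / b;                          // full-width divide (IDIV)
  }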
Patch by Tyler Nowicki!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163150 91177308-0d34-0410-b5e6-96231b3b80d8
Rationale: For each preprocessor macro, either the definedness is what's
meaningful, or the value is what's meaningful, or both. If definedness is
meaningful, we should use #ifdef. If the value is meaningful, we should use
#if. Using #if and #ifdef interchangeably for the same macro seems ugly to
me, even if undefined macros are zero when used.
This also has the benefit that including an LLVM header doesn't prevent
you from compiling with -Wundef -Werror.
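For illustration (LLVM_EXAMPLE_MACRO is a made-up name): with -Wundef, testing
the value of a macro that may be undefined warns, while testing its
definedness does not.

  // Warns under -Wundef when the macro was never defined, because the bare
  // token silently evaluates to 0.
  #if LLVM_EXAMPLE_MACRO
  void valueWasTested();
  #endif

  // Only definedness is tested, so -Wundef stays quiet.
  #ifdef LLVM_EXAMPLE_MACRO
  void definednessWasTested();
  #endif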
Patch by John Garvin!
<rdar://problem/12189979>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163148 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for fetching the inlining context (the chain of inlined call
frames) by instruction address from DWARF.
Add --inlining flag to llvm-dwarfdump to demonstrate and test this functionality,
so that "llvm-dwarfdump --inlining --address=0x..." now works much like
"addr2line -i 0x...", provided that the binary has debug info
(Clang's -gline-tables-only *is* enough).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163128 91177308-0d34-0410-b5e6-96231b3b80d8
Add the NumMCOperands argument to the GetMCInstOperandNum() function; it is
set to the number of MCOperands this asm operand mapped to.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163124 91177308-0d34-0410-b5e6-96231b3b80d8
Expose the conversion kind and matched opcode computed by the
MatchInstructionImpl() function.
These values are used by the ConvertToMCInst() function to index into the
ConversionTable. The values are also needed to call the GetMCInstOperandNum()
function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163101 91177308-0d34-0410-b5e6-96231b3b80d8
Not all targets have efficient ISel code generation for select instructions.
For example, the ARM target does not have efficient ISel handling for vector
selects with scalar conditions. This patch adds a TLI hook which allows the
different targets to report which selects are supported well and which selects
should be converted to control flow during CodeGenPrepare.
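To illustrate what "converted to control flow" means, here are the two shapes
at the source level (names are made up; the pass itself rewrites IR selects):

  // Branch-free select: only cheap if the target handles this kind of select
  // well.
  int pickWithSelect(bool c, int a, int b) {
    return c ? a : b;
  }

  // The branchy form CodeGenPrepare falls back to when the target reports the
  // select as slow.
  int pickWithBranch(bool c, int a, int b) {
    if (c)
      return a;
    return b;
  }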
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163093 91177308-0d34-0410-b5e6-96231b3b80d8
Most of the code guarded by ANDROIDEABI is not ARM-specific and has no
relation to arm-eabi. Thus, it is more natural to call this
environment "Android" instead of "ANDROIDEABI".
Note: We are not using ANDROID because several projects
are using "-DANDROID" as the conditional compilation
flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163087 91177308-0d34-0410-b5e6-96231b3b80d8
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163021 91177308-0d34-0410-b5e6-96231b3b80d8
- Overloading operator<< for raw_ostream and pointers is dangerous; it alters
the behavior of code that includes the header.
- Remove unused ID.
- Use LLVM's byte swapping helpers instead of a hand-coded version.
- Make ReadProfilingData work directly on a pointer.
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162992 91177308-0d34-0410-b5e6-96231b3b80d8
Changes the hash result for strings containing characters
with values >= 128, such as UTF8 strings (not normal ASCII).
Changed mostly so we match other implementations.
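A plausible illustration only, assuming the difference comes from how bytes
above 127 are folded into a multiplicative string hash (this is a generic
Bernstein-style hash, not the one in tree): treating each byte as unsigned
keeps values in 128..255 instead of sign-extending them to negative numbers,
which is what other implementations do.

  #include <cstddef>

  unsigned hashString(const char *S, size_t N) {
    unsigned H = 5381;
    for (size_t I = 0; I != N; ++I)
      H = H * 33 + (unsigned char)S[I]; // unsigned interpretation of each byte
    return H;
  }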
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162882 91177308-0d34-0410-b5e6-96231b3b80d8
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162861 91177308-0d34-0410-b5e6-96231b3b80d8
This means the same as LoadInst/StoreInst::isUnordered(), and implies
!isVolatile().
Atomic loads and stores are also ordered, and this is the right method
to check if it is safe to reorder memory operations. Ordered atomics
can't be reordered wrt normal loads and stores, which is a stronger
constraint than volatile.
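A minimal sketch of the intended use (the helper below is hypothetical; the
isUnordered() accessors on LoadInst/StoreInst are the existing ones mentioned
above): gating reordering on unordered accesses rules out volatile and
ordered-atomic operations in one query.

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static bool safeToReorderAcross(const Instruction *I) {
    if (const LoadInst *LI = dyn_cast<LoadInst>(I))
      return LI->isUnordered();
    if (const StoreInst *SI = dyn_cast<StoreInst>(I))
      return SI->isUnordered();
    return false; // anything else that touches memory: stay conservative
  }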
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162859 91177308-0d34-0410-b5e6-96231b3b80d8