Summary:
Parts of the compiler still believed that MSA loads/stores have a 16-bit
offset when the offset is actually 10-bit. Corrected this, and fixed a closely
related issue that this uncovered: loads/stores with 10-bit and 12-bit offsets
(MSA and microMIPS respectively) could not load/store using offsets from the
stack/frame pointer. They accepted frameindex+offset, but not frameindex by
itself.
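For reference, a minimal standalone sketch of the signed-range check involved
(not the selector code itself; fitsSignedImm is a made-up helper): an offset
that passes a 16-bit test can still be out of range for MSA's 10-bit field,
and a bare frameindex is just frameindex + 0, which must also be accepted.

  #include <cstdint>
  #include <cstdio>

  // Hypothetical helper: does Offset fit in a signed, Bits-wide immediate field?
  static bool fitsSignedImm(int64_t Offset, unsigned Bits) {
    int64_t Min = -(int64_t(1) << (Bits - 1));
    int64_t Max = (int64_t(1) << (Bits - 1)) - 1;
    return Offset >= Min && Offset <= Max;
  }

  int main() {
    // 1024 passes a 16-bit check but is out of range for a 10-bit offset.
    std::printf("offset 1024: 16-bit=%d 10-bit=%d\n",
                fitsSignedImm(1024, 16), fitsSignedImm(1024, 10));
    // A bare frame index is frameindex + 0, so offset 0 must be accepted too.
    std::printf("offset 0: 10-bit=%d\n", fitsSignedImm(0, 10));
    return 0;
  }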
Reviewers: jacksprat, matheusalmeida
Reviewed By: jacksprat
Differential Revision: http://llvm-reviews.chandlerc.com/D2888
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202717 91177308-0d34-0410-b5e6-96231b3b80d8
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.
Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198685 91177308-0d34-0410-b5e6-96231b3b80d8
This required correcting the definition of the bins[lr]i intrinsics because
the result is also the first operand.
It also required removing the (arbitrary) check for 32-bit immediates in
MipsSEDAGToDAGISel::selectVSplat().
Currently, using binsli.d with 2 bits set in the mask doesn't select binsli.d,
because the constant is legalized into a ConstantPool. Similar things can
happen with binsri.d when more than 10 bits are set in the mask. The resulting
code in these cases is correct but not optimal.
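As an illustration only (standalone C++ with illustrative names; the exact
immediate encoding is omitted), one way to classify the mask patterns
involved: a binsli-style mask is a run of leading ones, a binsri-style mask a
run of trailing ones.

  #include <cstdint>
  #include <cstdio>

  // Returns N if Mask is exactly a run of N trailing ones, otherwise 0.
  static unsigned trailingOnesRun(uint64_t Mask) {
    if (Mask == 0 || (Mask & (Mask + 1)) != 0)
      return 0;
    unsigned N = 0;
    for (; Mask & 1; Mask >>= 1)
      ++N;
    return N;
  }

  // Returns N if Mask is exactly a run of N leading ones, otherwise 0.
  static unsigned leadingOnesRun(uint64_t Mask) {
    if (Mask == ~uint64_t(0))
      return 64;
    unsigned Low = trailingOnesRun(~Mask); // leading ones invert to trailing ones
    return Low ? 64 - Low : 0;
  }

  int main() {
    // 2 bits set at the top: binsli-style. 10 bits set at the bottom: binsri-style.
    std::printf("run of %u leading ones\n", leadingOnesRun(0xC000000000000000ull));
    std::printf("run of %u trailing ones\n", trailingOnesRun(0x3FFull));
    return 0;
  }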
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193687 91177308-0d34-0410-b5e6-96231b3b80d8
Most constant BUILD_VECTORs are matched using ComplexPatterns, which cover
bitcasted as well as normal vectors. However, it doesn't seem to be possible to
match ldi.[bhwd] in a type-agnostic manner using TableGen (e.g. to support the
widest range of immediates, it should be possible to use ldi.b to load v2i64),
so ldi.[bhwd] is matched using custom code in MipsSEISelDAGToDAG.cpp.
This made the majority of the constant splat BUILD_VECTOR lowering redundant.
The only transformation remaining for constant splats is when an (up-to) 32-bit
constant splat is possible but the value does not fit into a 10-bit signed
integer. In this case, the BUILD_VECTOR is transformed into a bitcasted
BUILD_VECTOR so that fill.[bhw] can be used to splat the vector from a GPR32
register (which is initialized using the usual lui/addiu sequence).
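As a sketch of that decision (hypothetical names, signed values only for
brevity; the real lowering also has to consider element types and bitcasts):

  #include <cstdint>
  #include <cstdio>

  enum class SplatLowering { LdiImmediate, FillFromGPR32, Other };

  static bool fitsSigned(int64_t V, unsigned Bits) {
    int64_t Min = -(int64_t(1) << (Bits - 1));
    int64_t Max = (int64_t(1) << (Bits - 1)) - 1;
    return V >= Min && V <= Max;
  }

  static SplatLowering classifySplat(int64_t V) {
    if (fitsSigned(V, 10))
      return SplatLowering::LdiImmediate;  // ldi.[bhwd] can encode it directly
    if (fitsSigned(V, 32))
      return SplatLowering::FillFromGPR32; // lui/addiu into a GPR32, then fill.[bhw]
    return SplatLowering::Other;           // handled by other paths
  }

  int main() {
    std::printf("%d %d %d\n", (int)classifySplat(100),
                (int)classifySplat(0x12345), (int)classifySplat(1LL << 40));
    return 0;
  }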
There are no additional tests since this is a re-implementation of previous
functionality. The change is intended to make it easier to implement some of
the upcoming instruction selection patches since they can rely on existing
support for BUILD_VECTORs in the DAGCombiner.
compare_float.ll changed slightly because a BITCAST is no longer
introduced during legalization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191299 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, the DAGISel function WalkChainUsers was spotting that it
had entered already-selected territory by checking whether a node was a
MachineNode (amongst other things). Since it's fairly common practice
to insert MachineNodes during ISelLowering, this was not the correct
check.
Looking around, it seems that other nodes get their NodeId set to -1
upon selection, so this makes sure the same thing happens to all
MachineNodes and uses that characteristic to determine whether we
should stop looking for a loop during selection.
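A toy illustration of the two checks (hypothetical struct; the real code
operates on SDNode inside SelectionDAGISel):

  #include <cstdio>

  struct Node {
    bool IsMachineNode; // created as a MachineNode, possibly during lowering
    int NodeId;         // set to -1 once the node has actually been selected
  };

  // Old check: every MachineNode looks "already selected", which is wrong for
  // MachineNodes inserted during lowering.
  static bool alreadySelectedOld(const Node &N) { return N.IsMachineNode; }

  // New check: rely on the NodeId == -1 marker, which selection now applies to
  // MachineNodes as well.
  static bool alreadySelectedNew(const Node &N) { return N.NodeId == -1; }

  int main() {
    Node LoweredButUnselected{true, 7};
    std::printf("old=%d new=%d\n", alreadySelectedOld(LoweredButUnselected),
                alreadySelectedNew(LoweredButUnselected));
    return 0;
  }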
This should fix PR15840.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191165 91177308-0d34-0410-b5e6-96231b3b80d8
This patch eliminates the need to emit a constant move instruction when this
pattern is matched:
(select (setgt a, Constant), T, F)
The pattern above effectively turns into this:
(conditional-move (setlt a, Constant + 1), F, T)
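The two forms are equivalent for signed integers whenever Constant + 1 does
not overflow, since a > C is exactly !(a < C + 1); the select operands swap to
compensate. A small standalone check of the identity (plain C++, not backend
code):

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  int main() {
    const int32_t C = 7, T = 1, F = 0;
    for (int32_t a = C - 2; a <= C + 2; ++a) {
      int32_t original  = (a > C) ? T : F;     // a > C needs C materialized in a register
      int32_t rewritten = (a < C + 1) ? F : T; // C + 1 can be used as an immediate (slti)
      assert(original == rewritten);
      std::printf("a=%d: %d %d\n", a, original, rewritten);
    }
    return 0;
  }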
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176384 91177308-0d34-0410-b5e6-96231b3b80d8
functions. Set AddedComplexity to determine the order in which patterns are
matched.
This simplifies selection of floating point loads/stores.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175300 91177308-0d34-0410-b5e6-96231b3b80d8
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.
There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.
The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.
I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).
I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171366 91177308-0d34-0410-b5e6-96231b3b80d8
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169131 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, mips16 was sharing the pattern addr, which is used for mips32
and mips64. This had a number of problems:
1) Storing and loading byte and halfword quantities for mips16 has particular
problems due to the primarily non-mips16 nature of SP. When we must
load/store byte/halfword stack objects in a function, we must create a mips16
alias register for SP. This functionality is tested in stchar.ll.
2) We need to have an FP register under certain conditions (such as
dynamically sized alloca). We use the mips16 register S0 for this purpose.
In this case, we also use this register when accessing frame objects, so this
issue also affects the complex pattern addr16. This functionality is
tested in alloca16.ll.
The Mips16InstrInfo.td has been updated to use addr16 instead of addr.
The complex pattern C++ function for addr has been copied to addr16 and
updated to reflect the above issues.
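A purely illustrative sketch of the base-register choice described above
(hypothetical names and types, not the addr16 code itself):

  #include <cstdio>

  enum class Base { SP, SPAlias, S0FrameReg };

  struct FrameAccess {
    unsigned AccessBits;    // 8, 16, 32, ...
    bool NeedsFramePointer; // e.g. the function has a dynamically sized alloca
  };

  static Base chooseBase(const FrameAccess &FA) {
    if (FA.NeedsFramePointer)
      return Base::S0FrameReg; // frame objects are addressed off S0
    if (FA.AccessBits == 8 || FA.AccessBits == 16)
      return Base::SPAlias;    // byte/halfword accesses need a mips16 alias of SP
    return Base::SP;           // word accesses can use SP-relative forms directly
  }

  int main() {
    std::printf("%d %d %d\n", (int)chooseBase({8, false}),
                (int)chooseBase({32, false}), (int)chooseBase({16, true}));
    return 0;
  }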
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166897 91177308-0d34-0410-b5e6-96231b3b80d8
use load/store fragments defined in TargetSelectionDAG.td in place of them.
Unaligned loads/stores are either expanded or lowered to target-specific nodes,
so instruction selection should see only aligned load/store nodes.
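For a rough picture of what "expanded" means here, a hedged standalone sketch
(little-endian byte assembly shown for simplicity; the legalizer's actual
expansion and the MIPS-specific nodes differ):

  #include <cstdint>
  #include <cstdio>

  // Rebuild an unaligned 32-bit load from naturally aligned byte loads.
  static uint32_t loadUnalignedLE(const uint8_t *P) {
    return uint32_t(P[0]) | (uint32_t(P[1]) << 8) |
           (uint32_t(P[2]) << 16) | (uint32_t(P[3]) << 24);
  }

  int main() {
    uint8_t Buf[8] = {0, 0x78, 0x56, 0x34, 0x12, 0, 0, 0};
    std::printf("0x%08x\n", loadUnalignedLE(Buf + 1)); // prints 0x12345678
    return 0;
  }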
No changes in functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163960 91177308-0d34-0410-b5e6-96231b3b80d8
single-precision load and store.
Also avoid selecting LUXC1 and SUXC1 instructions during isel. It is incorrect
to map unaligned floating point load/store nodes to these instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161063 91177308-0d34-0410-b5e6-96231b3b80d8
The long branch pass (fixed in r160601) no longer uses the global base register
to compute addresses of branch destinations, so it is not necessary to reserve
a slot on the stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160703 91177308-0d34-0410-b5e6-96231b3b80d8
pointer register.
This is the first of a series of patches that clean up the way the global
pointer register is used. The patches will make the following improvements:
- Make $gp an allocatable temporary register rather than reserving it.
- Use a virtual register as the global pointer register and let the register
allocator decide which register to assign to it or whether spill/reloads are
needed.
- Make sure $gp is valid at the entry of a called function, which is necessary
for functions using lazy binding.
- Remove the need for emitting .cprestore and .cpload directives.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156671 91177308-0d34-0410-b5e6-96231b3b80d8
For example, the first instruction in the code below can be eliminated if the
use of $vr0 is replaced with $zero:
addiu $vr0, $zero, 0
add   $vr2, $vr1, $vr0
=>
add   $vr2, $vr1, $zero
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152280 91177308-0d34-0410-b5e6-96231b3b80d8
and stores was added.
- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
floating point instructions.
- MIPS does not have support for f64 unaligned load or store instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151843 91177308-0d34-0410-b5e6-96231b3b80d8
reserving a physical register ($gp or $28) for that purpose.
This will completely eliminate the loads that restore the value of $gp after
every function call if the register allocator assigns a callee-saved register,
or eliminate unnecessary loads if it assigns a temporary register.
Example:
.cpload $25 // set $gp.
...
.cprestore 16 // store $gp to stack slot 16($sp).
...
jalr $25 // function call. clobbers $gp.
lw $gp, 16($sp) // not emitted if callee-saved reg is chosen.
...
lw $2, 4($gp)
...
jalr $25 // function call.
lw $gp, 16($sp) // not emitted if $gp is not live after this instruction.
...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151402 91177308-0d34-0410-b5e6-96231b3b80d8