Instructions emitted to compute branch offsets now use immediate operands
instead of symbolic labels. This change was needed because the R_MIPS_HI16/LO16
relocations caused problems when building shared objects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162731 91177308-0d34-0410-b5e6-96231b3b80d8
In SelectionDAGLegalize::ExpandLegalINT_TO_FP, expand INT_TO_FP nodes without
using any f64 operations if f64 is not a legal type.
Patch by Stefan Kristiansson.
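For illustration only (this is not the committed legalizer code): one way to convert a
32-bit unsigned integer to f32 using only f32 arithmetic, the kind of sequence such an
expansion can use when f64 is illegal. The helper name is hypothetical.
  #include <cstdint>

  // Hypothetical helper: uint32_t -> float with no f64 operations.
  // Each 16-bit half converts exactly through the signed i32 -> f32 path,
  // and the final f32 add performs the single rounding step.
  float u32_to_f32_no_f64(uint32_t X) {
    float Hi = static_cast<float>(static_cast<int32_t>(X >> 16));
    float Lo = static_cast<float>(static_cast<int32_t>(X & 0xFFFF));
    return Hi * 65536.0f + Lo;
  }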
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162728 91177308-0d34-0410-b5e6-96231b3b80d8
to prevent it from being clobbered. MIPS uses $gp to access the small data section.
This bug was originally reported by Carl Norum.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162340 91177308-0d34-0410-b5e6-96231b3b80d8
MipsSEFrameLowering.
Implement MipsSEFrameLowering::hasReservedCallFrame. Call frames will not be
reserved if there is a call with a large call frame or there are variable sized
objects on the stack.
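A minimal sketch of that policy, detached from the real MipsSEFrameLowering interface
(the helper names and the 16-bit immediate limit used to define "large" are assumptions):
  #include <cstdint>

  // Assumption: "large" means the size no longer fits in the signed 16-bit
  // immediate used by MIPS stack-adjustment instructions.
  static bool fitsInSignedImm16(int64_t V) { return V >= -32768 && V <= 32767; }

  // Keep the call frame reserved only when every outgoing call frame is
  // small and the function has no variable sized objects on the stack.
  static bool hasReservedCallFrame(int64_t MaxCallFrameSize,
                                   bool HasVarSizedObjects) {
    return fitsInSignedImm16(MaxCallFrameSize) && !HasVarSizedObjects;
  }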
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161090 91177308-0d34-0410-b5e6-96231b3b80d8
The frame object that points to the dynamically allocated area will not be
needed once call frames are no longer reserved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161076 91177308-0d34-0410-b5e6-96231b3b80d8
arguments to the stack in MipsISelLowering::LowerCall, use stack pointer and
integer offset operands rather than frame object operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161068 91177308-0d34-0410-b5e6-96231b3b80d8
single-precision load and store.
Also avoid selecting LUXC1 and SUXC1 instructions during isel. It is incorrect
to map unaligned floating point load/store nodes to these instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161063 91177308-0d34-0410-b5e6-96231b3b80d8
The long branch pass (fixed in r160601) no longer uses the global base register
to compute addresses of branch destinations, so it is not necessary to reserve
a slot on the stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160703 91177308-0d34-0410-b5e6-96231b3b80d8
This pass no longer requires that the global pointer value be saved to the
stack or a register, since it uses the bal instruction to compute the branch distance.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160601 91177308-0d34-0410-b5e6-96231b3b80d8
Print the high order register of a double word register operand.
In 32-bit mode, a 64-bit double word integer is represented by two 32-bit
registers. This modifier causes the high order register of the pair to be used
in the asm expression. It is useful when you are handling doubles in assembler
and want to keep control over the register-to-variable mapping.
This patch also fixes a related bug in a previous patch:
case 'D': // Second part of a double word register operand
case 'L': // Low order register of a double word register operand
case 'M': // High order register of a double word register operand
I had 'D' and 'M' confused. The second part of a double word operand matches
'M' only for one endianness. I had made 'L' and 'D' the opposite pair, when in
fact 'L' and 'M' are.
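A small usage sketch, not from the original patch, assuming a MIPS32 target; the move
pseudo-instruction and the names are illustrative:
  int high_half(long long ll_input)
  {
    int hi;
    // %M1 names the high-order register of the pair holding ll_input.
    __asm__("move %0, %M1" : "=r" (hi) : "r" (ll_input));
    return hi;
  }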
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160429 91177308-0d34-0410-b5e6-96231b3b80d8
Low order register of a double word register operand. Operands
are defined by the name of the variable they are marked with in
the inline assembler code. This is a way to specify that the
operand just refers to the low order register for that variable.
It is the opposite of modifier 'D' which specifies the high order
register.
Example:
int main()
{
  long long ll_input = 0x1111222233334444LL;
  long long ll_val = 3;
  int i_result = 0;
  // %L1 selects the low-order register of ll_input; %2 has no modifier.
  __asm__ __volatile__(
      "or %0, %L1, %2"
      : "=r" (i_result)
      : "r" (ll_input), "r" (ll_val));
}
Which results in:
lui $2, %hi(_gp_disp)
addiu $2, $2, %lo(_gp_disp)
addiu $sp, $sp, -8
addu $2, $2, $25
sw $2, 0($sp)
lui $2, 13107
ori $3, $2, 17476 <-- Low 32 bits of ll_input
lui $2, 4369
ori $4, $2, 8738 <-- High 32 bits of ll_input
addiu $5, $zero, 3 <-- Low 32 bits of ll_val
addiu $2, $zero, 0 <-- High 32 bits of ll_val
#APP
or $3, $4, $5 <-- or i_result, high 32 ll_input, low 32 of ll_val
#NO_APP
addiu $sp, $sp, 8
jr $ra
If no modifier is given for a long long operand in 32-bit mode, the low 32 bits
are used, as ll_val shows.
There is an existing bug when 'L' or 'D' is used for the destination register
of a long long in 32-bit mode: the non-specified half of the target value is
updated incorrectly unless it is set explicitly within the inline asm code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160028 91177308-0d34-0410-b5e6-96231b3b80d8
Print the second half of a double word operand.
The include list was cleaned up a bit as well.
Also the test case was modified to test both big- and little-endian patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159787 91177308-0d34-0410-b5e6-96231b3b80d8
inlineasm-cnstrnt-bad-r-1.ll is NOT supposed to fail, so that negative test was removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159625 91177308-0d34-0410-b5e6-96231b3b80d8
inlineasm-cnstrnt-bad-r-1.ll is NOT supposed to fail, so that negative test was removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159610 91177308-0d34-0410-b5e6-96231b3b80d8
another mechanical change accomplished through the power of terrible Perl
scripts.
I have manually switched some double quotes (") to single quotes (') to make escaping simpler.
While I started this to fix tests that aren't run in all configurations,
the massive number of tests is due to a really frustrating fragility of
our testing infrastructure: things like 'grep -v', 'not grep', and
'expected failures' can mask broken tests all too easily.
Essentially, I'm deeply disturbed that I can change the testsuite so
radically without causing any change in results for most platforms. =/
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159547 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the user/front-end to specify a model that is better
than what LLVM would choose by default. For example, a variable
might be declared as
@x = thread_local(initialexec) global i32 42
if it will not be used in a shared library that is dlopen'ed.
If the specified model isn't supported by the target, or if LLVM can
make a better choice, a different model may be used.
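As a source-level illustration, a GCC/Clang-style front end can request the same model
with the tls_model variable attribute, which is lowered to the IR form shown above:
  // Requests the initial-exec TLS model for this variable.
  __thread int x __attribute__((tls_model("initial-exec"))) = 42;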
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159077 91177308-0d34-0410-b5e6-96231b3b80d8
to be generic across architectures. It has the
following description in the gnu sources:
Negate the immediate constant
Several architectures, such as x86, have local implementations of operand
modifier 'n' that go slightly beyond the above description. This won't affect
them.
Affected files:
lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
Added 'n' to the switch cases.
test/CodeGen/Generic/asm-large-immediate.ll
Generic compiled test (x86 for me)
test/CodeGen/Mips/asm-large-immediate.ll
Mips compiled version of the generic one
Contributor: Jack Carter
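A usage sketch, not part of the patch: in GCC-style inline asm, %n0 substitutes the
negated constant, so the assembly below receives -42 (the '#' comment syntax assumes a
MIPS-like assembler):
  void emit_negated(void)
  {
    // 'n' prints the immediate negated; it lands in an assembly comment here.
    __asm__ volatile("# negated constant: %n0" : : "i" (42));
  }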
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158939 91177308-0d34-0410-b5e6-96231b3b80d8
to be generic across architectures. It has the
following description in the gnu sources:
Substitute immediate value without immediate syntax
Several architectures, such as x86, have local implementations of operand
modifier 'c' that go slightly beyond the above description. To make use of the
generic modifiers without overriding the local implementation, one can call the
base class method AsmPrinter::PrintAsmOperand() from the "default" case of the
switch statement in the locally derived method. That way, if a modifier is
already handled locally, the generic version is never called for it.
This change was needed because test/CodeGen/Generic/asm-large-immediate.ll
failed on a native Mips board; the test assumed a generic implementation was in
place.
Affected files:
lib/Target/Mips/MipsAsmPrinter.cpp:
Changed the default case to call the base method.
lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
Added 'c' to the switch cases.
test/CodeGen/Mips/asm-large-immediate.ll
Mips compiled version of the generic one
Contributor: Jack Carter
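A standalone sketch of that delegation pattern; the class names and the extra modifier
only loosely mirror the LLVM ones, and this is not the real AsmPrinter interface:
  #include <cstdio>

  struct GenericAsmPrinter {
    // Returns true on error, matching the convention described above.
    virtual bool PrintAsmOperand(long Val, char Modifier) {
      switch (Modifier) {
      case 'c': std::printf("%ld", Val);  return false; // bare constant
      case 'n': std::printf("%ld", -Val); return false; // negated constant
      default:  return true;                            // unknown modifier
      }
    }
    virtual ~GenericAsmPrinter() {}
  };

  struct TargetAsmPrinter : GenericAsmPrinter {
    bool PrintAsmOperand(long Val, char Modifier) override {
      switch (Modifier) {
      case 'X': // a made-up target-only modifier
        std::printf("0x%lx", static_cast<unsigned long>(Val));
        return false;
      default:
        // Fall back to the generic handling so 'c', 'n', etc. keep working.
        return GenericAsmPrinter::PrintAsmOperand(Val, Modifier);
      }
    }
  };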
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158925 91177308-0d34-0410-b5e6-96231b3b80d8
- Remove code which lowers pseudo SETGP01.
- Fix LowerSETGP01. The first two of the three instructions that are emitted to
initialize the global pointer register now use register $2.
- Stop emitting .cpload directive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156689 91177308-0d34-0410-b5e6-96231b3b80d8
pointer register.
This is the first of a series of patches that clean up the way the global
pointer register is used. The patches will make the following improvements:
- Make $gp an allocatable temporary register rather than reserving it.
- Use a virtual register as the global pointer register and let the register
allocator decide which register to assign to it or whether spill/reloads are
needed.
- Make sure $gp is valid at the entry of a called function, which is necessary
for functions using lazy binding.
- Remove the need for emitting .cprestore and .cpload directives.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156671 91177308-0d34-0410-b5e6-96231b3b80d8
This is mostly to test the waters. I'd like to get results from FNT
build bots and other bots running on non-x86 platforms.
This feature has been pretty heavily tested over the last few months by
me, and it fixes several of the execution time regressions caused by the
inlining work by preventing inlining decisions from radically impacting
block layout.
I've seen very large improvements in yacr2 and ackermann benchmarks,
along with the expected noise across all of the benchmark suite whenever
code layout changes. I've analyzed all of the regressions and fixed
them, or found them to be impossible to fix. See my email to llvmdev for
more details.
I'd like for this to be in 3.1 as it complements the inliner changes,
but if any failures are showing up or anyone has concerns, it is just
a flag flip and so can be easily turned off.
I'm switching it on tonight to try and get at least one run through
various folks' performance suites in case SPEC or something else has
serious issues with it. I'll watch bots and revert if anything shows up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154816 91177308-0d34-0410-b5e6-96231b3b80d8
- FCOPYSIGN nodes that have operands of different types were not handled.
- Different code was generated depending on the endianness of the target.
Additionally, code is added that emits INS and EXT instructions if they are
supported by the target (they are R2 instructions).
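For context, a C++ sketch of what FCOPYSIGN computes at the bit level; on R2 targets the
EXT/INS bit-field instructions can move the sign bit directly, though this code is only
an illustration, not the emitted sequence:
  #include <cstdint>
  #include <cstring>

  // copysign(x, y): magnitude bits from x, sign bit from y.
  float copysign_f32(float X, float Y)
  {
    uint32_t Xi, Yi;
    std::memcpy(&Xi, &X, sizeof Xi);
    std::memcpy(&Yi, &Y, sizeof Yi);
    uint32_t Ri = (Xi & 0x7fffffffu) | (Yi & 0x80000000u);
    float R;
    std::memcpy(&R, &Ri, sizeof R);
    return R;
  }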
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154540 91177308-0d34-0410-b5e6-96231b3b80d8
* Removed test/lib/llvm.exp - it is no longer needed
* Deleted the dg.exp reading code from test/lit.cfg. There are no dg.exp files
left in the test suite so this code is no longer required. test/lit.cfg is
now much shorter and clearer
* Removed a lot of duplicate code in lit.local.cfg files that need access to
the root configuration, by adding a "root" attribute to the TestingConfig
object. This attribute is dynamically computed to provide the same
information as was previously provided by the custom getRoot functions.
* Documented the config.root attribute in docs/CommandGuide/lit.pod
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153408 91177308-0d34-0410-b5e6-96231b3b80d8
The test fell back to the C backend, which made it useless, and it started to
fail on configurations that don't build the C backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152342 91177308-0d34-0410-b5e6-96231b3b80d8
and stores was added.
- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
floating point instructions.
- MIPS does not have support for f64 unaligned load or store instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151843 91177308-0d34-0410-b5e6-96231b3b80d8