the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78646 91177308-0d34-0410-b5e6-96231b3b80d8
and short. Well, it's kinda short. Definitely nasty and brutish.
The front end generates the register/unregister calls into the SjLj runtime,
along with the call-site indices and landing-pad dispatch code. The back end
fills in the LSDA with the call-site information provided by the front end.
Catch blocks are not yet implemented.
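To make the division of labor concrete, here is a self-contained C++ sketch
of the shape of the code the front end emits. The context layout and helper
names are invented for illustration; the real runtime entry points live in
libgcc as _Unwind_SjLj_Register and _Unwind_SjLj_Unregister.

  struct SjLjFunctionContext {
    SjLjFunctionContext *prev; // chain of registered frames
    int callSiteIndex;         // live call site, used for landing-pad dispatch
    void *lsda;                // language-specific data area, filled in by the back end
  };

  static SjLjFunctionContext *unwindChain = nullptr;

  static void registerFrame(SjLjFunctionContext *fc) {   // cf. _Unwind_SjLj_Register
    fc->prev = unwindChain;
    unwindChain = fc;
  }

  static void unregisterFrame(SjLjFunctionContext *fc) { // cf. _Unwind_SjLj_Unregister
    unwindChain = fc->prev;
  }

  static void mayThrow() {}

  void caller() {
    SjLjFunctionContext fc = {};
    registerFrame(&fc);   // emitted in the prologue
    fc.callSiteIndex = 1; // updated before each potentially-throwing call
    mayThrow();
    unregisterFrame(&fc); // emitted on the normal return path
  }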
Built on Darwin and verified no llvm-core "make check" regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78625 91177308-0d34-0410-b5e6-96231b3b80d8
This patch takes pains to ensure that all the PEI lowering code does the right thing when lowering frame indices, inserting code to manipulate stack pointers, etc. It also custom-lowers dynamic stack allocation into pseudo instructions so we can insert the right instructions at scheduling time.
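As a rough sketch of the custom-lowering hook involved: the MyTargetLowering
class and the LowerDYNAMIC_STACKALLOC helper below are hypothetical, though
setOperationAction, ISD::DYNAMIC_STACKALLOC, and LowerOperation are the real
LLVM hooks (exact signatures have varied by revision).

  MyTargetLowering::MyTargetLowering(TargetMachine &TM)
      : TargetLowering(TM) {
    // Route dynamic stack allocations through LowerOperation so the target
    // can expand them into pseudo instructions.
    setOperationAction(ISD::DYNAMIC_STACKALLOC, MVT::i32, Custom);
  }

  SDValue MyTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) const {
    switch (Op.getOpcode()) {
    case ISD::DYNAMIC_STACKALLOC:
      return LowerDYNAMIC_STACKALLOC(Op, DAG); // hypothetical helper emitting the pseudos
    default:
      llvm_unreachable("unexpected custom lowering");
    }
  }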
This fixes PR4659 and PR4682.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78361 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerReturn, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between the target-independent portion and the target-dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
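For reference, a rough sketch of the new hook surface as a target would
override it; MyTargetLowering is hypothetical, and the parameter lists shown
are approximate, since they have shifted across LLVM revisions.

  class MyTargetLowering : public TargetLowering {
  public:
    // Lower the incoming (formal) arguments described by Ins into InVals.
    SDValue LowerFormalArguments(SDValue Chain, CallingConv::ID CallConv,
                                 bool isVarArg,
                                 const SmallVectorImpl<ISD::InputArg> &Ins,
                                 DebugLoc dl, SelectionDAG &DAG,
                                 SmallVectorImpl<SDValue> &InVals);

    // Lower an outgoing call; Outs/Ins describe the arguments and results.
    SDValue LowerCall(SDValue Chain, SDValue Callee, CallingConv::ID CallConv,
                      bool isVarArg, bool isTailCall,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      const SmallVectorImpl<ISD::InputArg> &Ins,
                      DebugLoc dl, SelectionDAG &DAG,
                      SmallVectorImpl<SDValue> &InVals);

    // Lower the return value(s) described by Outs.
    SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool isVarArg,
                        const SmallVectorImpl<ISD::OutputArg> &Outs,
                        DebugLoc dl, SelectionDAG &DAG);
  };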
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78142 91177308-0d34-0410-b5e6-96231b3b80d8
Get rid of yesterday's code to fix the register usage during isel.
Select the new DAG nodes into machine instructions. The new pre-alloc pass
that will choose adjacent registers for these results is not done yet, so
the generated code will generally not assemble.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78136 91177308-0d34-0410-b5e6-96231b3b80d8
instructions for calls, since BL and BLX are always 32 bits long and BX is
always 16 bits long.
Also, we should be using BLX to call external function stubs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77756 91177308-0d34-0410-b5e6-96231b3b80d8
it is highly specific to the object file that will be generated in the end,
this introduces a new TargetLoweringObjectFile interface that is implemented
for each of ELF/MachO/COFF/Alpha/PIC16 and XCore.
Though this is still a brutal and ugly refactoring, it is a major step
towards goodness.
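A conceptual sketch of the dispatch this enables, in its modern spelling: the
factory function is hypothetical, but the TargetLoweringObjectFile subclasses
and the Triple predicates are real LLVM names.

  // Hypothetical factory: each target picks the object-file lowering that
  // matches its output format instead of scattering per-format logic
  // through TargetLowering itself.
  static TargetLoweringObjectFile *createTLOF(const Triple &T) {
    if (T.isOSBinFormatMachO())
      return new TargetLoweringObjectFileMachO();
    if (T.isOSBinFormatCOFF())
      return new TargetLoweringObjectFileCOFF();
    return new TargetLoweringObjectFileELF();
  }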
This patch also:
1. fixes a bunch of dangling pointer problems in the PIC16 backend.
2. disables the TargetLowering copy ctor which PIC16 was accidentally using.
3. gets us closer to xcore having its own crazy target section flags and
pic16 not having to shadow sections with its own objects.
4. fixes weirdness where ELF targets would set CStringSection but not
CStringSection_, and factors the code better.
5. fixes some bugs in string lowering on ELF targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77294 91177308-0d34-0410-b5e6-96231b3b80d8
Before:
adr r12, #LJTI3_0_0
ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
.long LBB3_24
.long LBB3_30
.long LBB3_31
.long LBB3_32
After:
adr r12, #LJTI3_0_0
add pc, r12, +r0, lsl #2
LJTI3_0_0:
b.w LBB3_24
b.w LBB3_30
b.w LBB3_31
b.w LBB3_32
This has several advantages.
1. This will make it easier to optimize this into a TBB / TBH instruction
plus a (smaller) table.
2. This eliminates the need for an ugly asm printer hack to force the
addresses into Thumb mode (bit 0 set to one).
3. Same codegen for PIC and non-PIC.
4. This eliminates the need to align the table, so the constant-pool island
pass won't have to over-estimate the size.
Based on my calculations, the latter is probably slightly faster as well, since
ldr pc with a shifter operand is very slow. That is, it should be a win as long
as the HW implementation can do a reasonable job of predicting the second
branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77024 91177308-0d34-0410-b5e6-96231b3b80d8
symbols were not getting stubs. While I'm at it, add a big testcase for
stub generation to make sure I don't break anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75737 91177308-0d34-0410-b5e6-96231b3b80d8
This adds location info for all llvm_unreachable calls (it is a macro now) in
!NDEBUG builds.
In NDEBUG builds, the location info and message are omitted (it only prints
"UNREACHABLE executed").
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75640 91177308-0d34-0410-b5e6-96231b3b80d8
Make llvm_unreachable take an optional string, thus moving the cerr << out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
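A sketch of the wrapper relationship described; only the names
llvm_unreachable and LLVM_UNREACHABLE come from the commit, the exact
expansion is illustrative.

  #ifndef NDEBUG
  #define LLVM_UNREACHABLE(msg) llvm_unreachable(msg)
  #else
  #define LLVM_UNREACHABLE(msg) llvm_unreachable(0)  // message compiled away
  #endif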
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75379 91177308-0d34-0410-b5e6-96231b3b80d8
With the SVR4 ABI on PowerPC, vector arguments for vararg calls are passed
differently depending on whether they are fixed or variable arguments.
Variable vector arguments always go into memory; fixed vector arguments are
put into vector registers. If there are no free vector registers available,
fixed vector arguments are put on the stack.
The NumFixedArgs attribute makes it possible to decide, for each argument of
a vararg call, whether it belongs to the fixed or the variable portion of the
parameter list.
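A self-contained sketch of the assignment policy just described; every name
here is hypothetical, and only the rule itself mirrors the commit.

  #include <cstdio>

  enum class Loc { VectorReg, Stack };

  // Variable vector arguments always go to memory; fixed vector arguments
  // take a vector register if one is free and spill to the stack otherwise.
  static Loc assignVectorArg(unsigned argIndex, unsigned numFixedArgs,
                             unsigned &freeVectorRegs) {
    bool isFixed = argIndex < numFixedArgs; // the NumFixedArgs test
    if (!isFixed)
      return Loc::Stack;                    // variable portion: always memory
    if (freeVectorRegs == 0)
      return Loc::Stack;                    // fixed, but no registers left
    --freeVectorRegs;
    return Loc::VectorReg;                  // fixed, register available
  }

  int main() {
    unsigned freeRegs = 2;
    // A vararg call with one fixed and two variable vector arguments.
    for (unsigned i = 0; i != 3; ++i)
      std::printf("arg %u -> %s\n", i,
                  assignVectorArg(i, 1, freeRegs) == Loc::VectorReg
                      ? "vector register"
                      : "stack");
    return 0;
  }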
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74764 91177308-0d34-0410-b5e6-96231b3b80d8