location is being re-stored to the memory location. We would get
a dangling pointer from the SSAUpdater data structure and miss a
use. This fixes PR8068.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113042 91177308-0d34-0410-b5e6-96231b3b80d8
checking each standalone condition and deciding whether to emit
target-specific nodes or to remove the condition if it has already been
matched.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113031 91177308-0d34-0410-b5e6-96231b3b80d8
invertible. ScalarEvolution's folding routines don't always succeed
in canonicalizing equal expressions to a single canonical form, and
this can cause these asserts to fail, even though there's no actual
correctness problem. This fixes PR8066.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113021 91177308-0d34-0410-b5e6-96231b3b80d8
"Use target specific nodes instead of relying in unpckl and
unpckh pattern fragments during isel time. Also place a
depth limit in getShuffleScalarElt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113020 91177308-0d34-0410-b5e6-96231b3b80d8
overload UserInInstr. Explicitly check Allocatable. The early exit in the
condition means the performance impact of the extra test should be
minimal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113016 91177308-0d34-0410-b5e6-96231b3b80d8
mask pattern fragment", which depends on r112934; the latter introduced
infinite-loop and select failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112998 91177308-0d34-0410-b5e6-96231b3b80d8
solve the root problem, but it corrects the bug in the code I added to
support legalizing in the case where the non-extended type is also legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112997 91177308-0d34-0410-b5e6-96231b3b80d8
"For ARM stack frames that utilize variable sized objects and have either
large local stack areas or require dynamic stack realignment, allocate a
base register via which to access the local frame. This allows efficient
access to frame indices not accessible via the FP (either due to being out
of range or due to dynamic realignment) or the SP (due to variable sized
object allocation). In particular, this greatly improves efficiency of access
to spill slots in Thumb functions which contain VLAs."
r112986 fixed a latent bug exposed by the above.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112989 91177308-0d34-0410-b5e6-96231b3b80d8
slot.
Teach it to also check for early-clobbered aliases, and for early-clobber
operands following the current operand.
This fixes the miscompilation in PR8044 where EC registers eax and ecx were
being used for inputs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112988 91177308-0d34-0410-b5e6-96231b3b80d8
alignment should be performed. Otherwise dynamic realignment may trigger
when the register allocator has already used the frame pointer as a general
purpose register. That is, we need to make sure that the list of reserved
registers doesn't change after register allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112986 91177308-0d34-0410-b5e6-96231b3b80d8
instructions prior to regalloc. Since it's getting a little close to
the 2.8 branch deadline, I'll have to leave the rest of the instructions
handled by the NEONPreAllocPass for now, but I didn't want to leave half
of the VLD instructions converted and the other half not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112983 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Use the SSAUpdater to turn calls to eh.exception that are not in a
landing pad into uses of registers rather than loads from a stack
slot. Doesn't touch the 'orrible hack code - Bill needs to persuade
me harder :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112952 91177308-0d34-0410-b5e6-96231b3b80d8
vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112941 91177308-0d34-0410-b5e6-96231b3b80d8
- Teach getShuffleScalarElt how to handle more target
specific nodes, so DAGCombine can make use of it.
- Add another hack to avoid the node update problem
during legalization. More details are in the comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112934 91177308-0d34-0410-b5e6-96231b3b80d8
Remove #uses comments from functions: they were padded out to column 50
and were potentially confusing for externally visible functions.
Going further, remove the "<i8**> [#uses=3]" comments entirely. They
add a lot of noise, confuse people about what the IR is, and don't add
any particular value. When the types are long it makes it really really
hard to read IR.
If someone is interested in this sort of thing, the right way to do this
is to implement an AsmAnnotationWriter that produces the same output, and
add a flag to llvm-dis (only) to produce this output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112899 91177308-0d34-0410-b5e6-96231b3b80d8
and were potentially confusing for externally visible functions.
Going further, remove the "<i8**> [#uses=3]" comments entirely. They
add a lot of noise, confuse people about what the IR is, and don't add
any particular value. When the types are long it makes it really really
hard to read IR.
If someone is interested in this sort of thing, the right way to do this
is to implement an AsmAnnotationWriter that produces the same output, and
add a flag to llvm-dis (only) to produce this output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112894 91177308-0d34-0410-b5e6-96231b3b80d8
I wasn't able to convince myself that all GetMainExecutable
implementations always return absolute paths; this prevents
unexpected behavior in case they ever don't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112888 91177308-0d34-0410-b5e6-96231b3b80d8
large local stack areas or require dynamic stack realignment, allocate a
base register via which to access the local frame. This allows efficient
access to frame indices not accessible via the FP (either due to being out
of range or due to dynamic realignment) or the SP (due to variable sized
object allocation). In particular, this greatly improves efficiency of access
to spill slots in Thumb functions which contain VLAs.
rdar://7352504
rdar://8374540
rdar://8355680
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112883 91177308-0d34-0410-b5e6-96231b3b80d8
capacity and remove the workaround in SmallVector<T,0>. There are some
theoretical benefits to an N->2N+1 growth policy anyway.
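A minimal sketch of why the +1 matters (illustrative only, not the actual
SmallVector code):

  // With a plain N -> 2N policy, growing from capacity 0 yields 0 again,
  // which is why SmallVector<T,0> needed a workaround. N -> 2N+1 always
  // makes progress: 0 -> 1 -> 3 -> 7 -> 15 -> ...
  static size_t nextCapacity(size_t CurCapacity) {
    return 2 * CurCapacity + 1;
  }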
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112870 91177308-0d34-0410-b5e6-96231b3b80d8
there are clearly no stores between the load and the store. This fixes
the miscompile reported as PR7833.
This breaks the test/CodeGen/X86/narrow_op-2.ll optimization, which is
safe, but awkward to prove safe. Move it to X86's README.txt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112861 91177308-0d34-0410-b5e6-96231b3b80d8
I'm sure it is harmless. Original commit message:
If PrototypeValue is erased in the middle of using the SSAUpdater
then the SSAUpdater may access freed memory. Instead, simply pass
in the type and name explicitly, which is all that was used anyway.
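A sketch of the resulting usage, assuming the era's SSAUpdater interface
(Initialize taking a type and a name); details may differ by version:

  #include "llvm/Transforms/Utils/SSAUpdater.h"
  using namespace llvm;

  // Sketch: rather than a prototype Value that might be erased mid-update,
  // SSAUpdater takes the type and name directly.
  void rewriteUse(const Type *Ty, BasicBlock *DefBB, Value *DefV, Use &U) {
    SSAUpdater SSA;
    SSA.Initialize(Ty, "exn");          // type + name, no prototype Value
    SSA.AddAvailableValue(DefBB, DefV); // the reaching definition
    BasicBlock *UseBB = cast<Instruction>(U.getUser())->getParent();
    U.set(SSA.GetValueInMiddleOfBlock(UseBB));
  }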
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112810 91177308-0d34-0410-b5e6-96231b3b80d8
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112773 91177308-0d34-0410-b5e6-96231b3b80d8
check more strict, breaking some cases not checked in the
testsuite, but also exposing some foldings not done before,
as in this example:
movaps (%rdi), %xmm0
movaps (%rax), %xmm1
movaps %xmm0, %xmm2
movss %xmm1, %xmm2
shufps $36, %xmm2, %xmm0
now is generated as:
movaps (%rdi), %xmm0
movaps %xmm0, %xmm1
movlps (%rax), %xmm1
shufps $36, %xmm1, %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112753 91177308-0d34-0410-b5e6-96231b3b80d8
This caused a miscompilation in WebKit where %RAX had conflicting defs when
RemoveCopyByCommutingDef was commuting a %EAX use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112751 91177308-0d34-0410-b5e6-96231b3b80d8
of a base class.
This makes it possible to unregister the file from FilesToRemove when
the file is done. Also, this eliminates the need for
formatted_tool_output_file.
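Usage then looks roughly like this (a sketch; header location and flag
details vary by version):

  #include "llvm/Support/ToolOutputFile.h"
  #include <string>
  using namespace llvm;

  bool writeOutput(const char *Filename) {
    std::string ErrInfo;
    tool_output_file Out(Filename, ErrInfo); // registered in FilesToRemove
    if (!ErrInfo.empty())
      return false;
    Out.os() << "...\n"; // os(): the contained stream, no longer a base class
    Out.keep();          // success: unregister the file so it is not deleted
    return true;
  }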
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112706 91177308-0d34-0410-b5e6-96231b3b80d8
landing pad into uses of registers rather than loads from a stack
slot. Doesn't touch the 'orrible hack code - Bill needs to persuade
me harder :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112702 91177308-0d34-0410-b5e6-96231b3b80d8
then the SSAUpdater may access freed memory. Instead, simply pass
in the type and name explicitly, which is all that was used anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112699 91177308-0d34-0410-b5e6-96231b3b80d8
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.
In the short term, force off MMX datatypes. In the long term,
the X86 backend should not select generic vector types to MMX
registers. This is being worked on, but won't be done in time
for 2.8. rdar://8380055
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112696 91177308-0d34-0410-b5e6-96231b3b80d8
and output the dwarf line number tables. This takes the current loc info after
an instruction is assembled and saves the needed info into an object kept in
a vector, one for each section. These objects will be used for the final patch to
build and emit the encoded dwarf line number tables. Again for now this is only
in the Mach-O streamer but at some point will move to a more generic place.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112668 91177308-0d34-0410-b5e6-96231b3b80d8
int x(int t) {
if (t & 256)
return -26;
return 0;
}
We generate this:
tst.w r0, #256
mvn r0, #25
it eq
moveq r0, #0
while gcc generates this:
ands r0, r0, #256
it ne
mvnne r0, #25
bx lr
Scandalous really!
During ISel, we can look for this particular pattern: a "MOVCC" that uses
the flag from a CMPZ that is itself comparing an AND instruction to 0.
Something like this (greatly simplified):
%r0 = ISD::AND ...
ARMISD::CMPZ %r0, 0 @ sets [CPSR]
%r0 = ARMISD::MOVCC 0, -26 @ reads [CPSR]
All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
when it's zero. The zero value will already be in the %r0 register and we only
need to change it if the AND wasn't zero. Easy!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112664 91177308-0d34-0410-b5e6-96231b3b80d8
Reserved registers are unpredictable, and are treated as always live by machine
DCE.
Allocatable registers are never reserved, and can be used for virtual registers.
Unreserved, unallocatable registers cannot be used for virtual registers, but
otherwise behave like a normal allocatable register. Most targets only have
the flag register in this set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112649 91177308-0d34-0410-b5e6-96231b3b80d8
1. Allocate them in the entry block of the function to enable function-wide
re-use. The instructions to create them should be re-materializable, so
there shouldn't be additional cost compared to creating them local
to the basic blocks where they are used.
2. Collect all of the frame index references for the function and sort them
by the local offset referenced. Iterate over the sorted list to
allocate the virtual base registers (sketched after this list). This
enables creation of base registers optimized for positive-offset access
of frame references.
(Note: This may be appropriate to later be a target hook to do the
sorting in a target appropriate manner. For now it's done here for
simplicity.)
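A rough sketch of that walk (illustrative data structures, not the actual
PEI code):

  #include <algorithm>
  #include <vector>

  struct FrameRef { int FrameIdx; long LocalOffset; };

  // Sort frame references by local offset, then start a new virtual base
  // register whenever the next reference is out of positive-offset range
  // of the current one.
  void assignBaseRegs(std::vector<FrameRef> &Refs, long MaxOffset) {
    std::sort(Refs.begin(), Refs.end(),
              [](const FrameRef &A, const FrameRef &B) {
                return A.LocalOffset < B.LocalOffset;
              });
    bool HaveBase = false;
    long BaseOffset = 0;
    for (const FrameRef &R : Refs) {
      if (!HaveBase || R.LocalOffset - BaseOffset > MaxOffset) {
        BaseOffset = R.LocalOffset; // materialize a new base register here
        HaveBase = true;
      }
      // ...rewrite R.FrameIdx as base + (R.LocalOffset - BaseOffset)...
    }
  }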
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112609 91177308-0d34-0410-b5e6-96231b3b80d8
any more. I plan to reimplement alloca promotion using SSAUpdater later.
It looks like Bill's URoR logic really always needs domtree, so the pass
now always asks for domtree info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112597 91177308-0d34-0410-b5e6-96231b3b80d8
two are weak, we make them thunks to a new strong function) so don't iterate
through the function list as we're modifying it.
Also add back the outermost loop which got removed during the cleanups.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112595 91177308-0d34-0410-b5e6-96231b3b80d8
This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's
handling of and/or of i1's), but never manifested before. This patch adds a tracking set to prevent this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112589 91177308-0d34-0410-b5e6-96231b3b80d8
getMagicNumber was treating the _binary_ data it read in as a
null terminated string. This resulted in the std::string
calculating the length, and causing an assert in other code that
assumed that the length it passed was the same as the length of
the string it would get back.
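The pitfall in miniature (illustrative, not the actual Path code):

  #include <string>

  // Binary data may contain embedded NULs, e.g. a magic-number buffer.
  const char Buf[4] = {'B', 'C', '\0', '\n'};
  std::string Wrong(Buf);    // stops at the NUL: size() == 2
  std::string Right(Buf, 4); // explicit length: size() == 4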
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112586 91177308-0d34-0410-b5e6-96231b3b80d8
where we hash, compare and fold, instead of one iteration where we build up
the hash buckets and a second one to fold.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112582 91177308-0d34-0410-b5e6-96231b3b80d8
Eventually, we want to disable physreg coalescing completely, and let the
register allocator do its job using hints.
This option makes it possible to measure the impact of disabling physreg
coalescing.
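Presumably spelled as a cl::opt along these lines (the flag and variable
names here are assumptions, not necessarily the ones added):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Hypothetical spelling of such an option.
  static cl::opt<bool>
  DisablePhysicalJoin("disable-physical-join", cl::Hidden,
                      cl::desc("Avoid coalescing physical register copies"));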
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112567 91177308-0d34-0410-b5e6-96231b3b80d8
kill flag.
This could cause duplicate kill flags when the same register was used twice in a
continuous sequence of STRs.
There is no small test case. <rdar://problem/8218046>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112534 91177308-0d34-0410-b5e6-96231b3b80d8
help relieve register pressure a bit. Recalculating the local address is
almost always going to be better than spilling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112503 91177308-0d34-0410-b5e6-96231b3b80d8
1) nuke ConstDataCoalSection, which is dead.
2) revise my previous patch for rdar://8018335,
which was completely wrong. Specifically, it doesn't
make sense to mark __TEXT,__const_coal as PURE_INSTRUCTIONS,
because it is for readonly data. Templates (it turns out)
go to const_coal_nt. The real fix for rdar://8018335 was
to give ConstTextCoalSection a section kind of ReadOnly
instead of Text.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112496 91177308-0d34-0410-b5e6-96231b3b80d8
operand is killed, add it to the expanded instruction as an implicit kill
operand instead of marking the individual subregs with kill flags. This
should work better in general and also handles the case for VST3 where one
of the subregs was not referenced in the expanded instruction and so was
not marked killed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112494 91177308-0d34-0410-b5e6-96231b3b80d8
Unfortunately, the only testcase I have for this is huge and doesn't reduce well because the error is
sensitive to iteration-order issues, since the problem only occurs when merging values in a particular order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112489 91177308-0d34-0410-b5e6-96231b3b80d8
On Mingw and Cygwin, the symbol __main is resolved to the callee's copy
(e.g. tools/lli's), which invokes the wrong, duplicated ctors (and
registers the wrong callee's dtors with atexit(3)).
We expect the callee to call ExecutionEngine::runStaticConstructorsDestructors()
before calling ExecutionEngine::runFunctionAsMain().
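In ExecutionEngine terms, the expected calling sequence is roughly (sketch):

  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include <string>
  #include <vector>
  using namespace llvm;

  int runMain(ExecutionEngine *EE, Function *MainFn,
              const std::vector<std::string> &Args, const char *const *EnvP) {
    EE->runStaticConstructorsDestructors(false); // ctors before main
    int Ret = EE->runFunctionAsMain(MainFn, Args, EnvP);
    EE->runStaticConstructorsDestructors(true);  // dtors after main returns
    return Ret;
  }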
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112474 91177308-0d34-0410-b5e6-96231b3b80d8
Triple class constructor. Only valid triples should now be used
inside LLVM - front-ends are now responsible for rejecting or
correcting invalid target triples. The Triple::normalize method
can be used to straighten out funky triples provided by users.
Give this a whirl through the buildbots to see if I caught all
places where triples enter LLVM.
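Front-ends can sanitize user-provided strings before constructing a Triple,
e.g. (sketch):

  #include "llvm/ADT/Triple.h"
  #include <string>
  using namespace llvm;

  // Straighten out a funky user-provided triple before it enters LLVM.
  Triple makeTriple(const std::string &UserSpec) {
    return Triple(Triple::normalize(UserSpec));
  }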
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112470 91177308-0d34-0410-b5e6-96231b3b80d8
optional modified register (instead of reg0). Along with r112461 it will make
sure that the optional define of CPSR is marked as "def" and will thus mark the
instructions using these classes (t2ANDS*) as setting the 's' flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112462 91177308-0d34-0410-b5e6-96231b3b80d8
instead of PromoteMemToReg. This allows it to stop using DF and DT,
eliminating a computation of DT and DF from clang -O3. Clang is now
down to 2 runs of DomFrontier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112457 91177308-0d34-0410-b5e6-96231b3b80d8
AssertingVH so we get a violent explosion if the pointer dangles.
2) Fix AliasSetTracker::deleteValue to remove call sites with
by-pointer comparisons instead of by-alias queries. Using
findAliasSetForCallSite can cause alias sets to get merged
when they shouldn't, and can also miss alias sets when the
call is readonly.
#2 fixes PR6889, which only repros with a .c file :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112452 91177308-0d34-0410-b5e6-96231b3b80d8
LICM correctly. When sinking an instruction, it should not add
entries for the sunk instruction to the AST, it should remove
the entry for the sunk instruction. The blocks being sunk to
are not in the loop, so their instructions shouldn't be in the
AST (yet)!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112447 91177308-0d34-0410-b5e6-96231b3b80d8
keeping them around until the pass is destroyed, keep them
around a) just when useful (not for outer loops) and b) destroy
them right after we use them. This should reduce memory use
and fixes potential bugs where a loop is deleted and another
loop gets allocated to the same address.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112446 91177308-0d34-0410-b5e6-96231b3b80d8
other filtering techniques, as those may allow it to filter
out more obviously unprofitable candidates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112441 91177308-0d34-0410-b5e6-96231b3b80d8
LSRInstance data structures up to date. This fixes some
pessimizations caused by stale data which will be exposed
in an upcoming change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112440 91177308-0d34-0410-b5e6-96231b3b80d8
all applicable addrecs before recursing on getMulExpr, instead of
recursing on getMulExpr for each one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112433 91177308-0d34-0410-b5e6-96231b3b80d8
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112416 91177308-0d34-0410-b5e6-96231b3b80d8
of the sets is volatile. We were dropping the volatile bit of the
merged in set, leading (luckily) to assertions in cases like
PR7535. I cannot produce a testcase that repros with opt, but this
is obviously correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112402 91177308-0d34-0410-b5e6-96231b3b80d8
times. This patch causes llc and llvm-mc (which both default to
verbose-asm) to print out comments after a few common shuffle
instructions, indicating the shuffle mask, e.g.:
insertps $113, %xmm3, %xmm0 ## xmm0 = zero,xmm0[1,2],xmm3[1]
unpcklps %xmm1, %xmm0 ## xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
pshufd $1, %xmm1, %xmm1 ## xmm1 = xmm1[1,0,0,0]
This is carefully factored to keep the information extraction (of the
shuffle mask) separate from the printing logic. I plan to move the
extraction part out somewhere else at some point for other parts of
the x86 backend that want to introspect on the behavior of shuffles.
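For instance, decoding a PSHUFD immediate is pure bit slicing (an
illustrative helper, not necessarily the backend's interface):

  // Each 2-bit field of the immediate selects one of the four source
  // elements, so $1 decodes to the mask [1,0,0,0] as in the example above.
  static void decodePSHUFDMask(unsigned Imm, unsigned Mask[4]) {
    for (unsigned i = 0; i != 4; ++i)
      Mask[i] = (Imm >> (2 * i)) & 0x3;
  }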
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112387 91177308-0d34-0410-b5e6-96231b3b80d8
when the top elements of a vector are undefined. This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4 element vector are defined. For example, on:
_Complex float f32(_Complex float A, _Complex float B) {
return A+B;
}
We used to produce (with SSE2, SSE4.1+ uses insertps):
_f32: ## @f32
movdqa %xmm0, %xmm2
addss %xmm1, %xmm2
pshufd $16, %xmm2, %xmm2
pshufd $1, %xmm1, %xmm1
pshufd $1, %xmm0, %xmm0
addss %xmm1, %xmm0
pshufd $16, %xmm0, %xmm1
movdqa %xmm2, %xmm0
unpcklps %xmm1, %xmm0
ret
We now produce:
_f32: ## @f32
movdqa %xmm0, %xmm2
addss %xmm1, %xmm2
pshufd $1, %xmm1, %xmm1
pshufd $1, %xmm0, %xmm3
addss %xmm1, %xmm3
movaps %xmm2, %xmm0
unpcklps %xmm3, %xmm0
ret
This implements rdar://8368414
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112378 91177308-0d34-0410-b5e6-96231b3b80d8
a new EltStride variable instead of reusing the NumElems variable
for a non-obvious purpose. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112377 91177308-0d34-0410-b5e6-96231b3b80d8
According to the Microsoft documentation here:
http://msdn.microsoft.com/en-us/library/ms724284%28VS.85%29.aspx
this cast used in lib/System/Win32/Path.inc:
__int64 ft = *reinterpret_cast<__int64*>(&fi.ftLastWriteTime);
should not be done. The documentation says: "Do not cast a pointer to a
FILETIME structure to either a ULARGE_INTEGER* or __int64* value because
it can cause alignment faults on 64-bit Windows."
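The documented-safe alternative copies the fields instead, e.g. (sketch):

  #include <windows.h>

  // Per the MSDN note: copy FILETIME's fields into a ULARGE_INTEGER rather
  // than casting the pointer, avoiding alignment faults on 64-bit Windows.
  static unsigned long long fileTimeToU64(const FILETIME &FT) {
    ULARGE_INTEGER UI;
    UI.LowPart = FT.dwLowDateTime;
    UI.HighPart = FT.dwHighDateTime;
    return UI.QuadPart;
  }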
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112376 91177308-0d34-0410-b5e6-96231b3b80d8
Also teach this logic how to handle target specific shuffles if
needed, this is necessary while searching recursively for zeroed
scalar elements in vector shuffle operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112348 91177308-0d34-0410-b5e6-96231b3b80d8
the special values that for ARM would be used with IB or DA modes. Fall
through and consider materializing a new base address if it would be
profitable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112329 91177308-0d34-0410-b5e6-96231b3b80d8
all the other LDM/STM instructions. This fixes asm printer crashes when
compiling with -O0. I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.
Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier. Much of the backend
was not aware of these special cases. The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode. I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON. Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112322 91177308-0d34-0410-b5e6-96231b3b80d8
A = shl x, 42
...
B = lshr ..., 38
which can be transformed into:
A = shl x, 4
...
iff we can prove that the would-be-shifted-in bits
are already zero. This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.
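A quick check of the identity behind the transform (illustrative; the
precondition is that x's high bits, above bit 21 here, are known zero):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint64_t x = 0; x != (1u << 22); ++x) // bits above 21 are zero
      assert(((x << 42) >> 38) == (x << 4));
  }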
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112314 91177308-0d34-0410-b5e6-96231b3b80d8
framework, which is good at ripping through bitfield
operations. This generalizes a bunch of the existing
xforms that instcombine does, such as
(x << c) >> c -> and
to handle intermediate logical nodes. This is useful for
ripping up the "promote to large integer" code produced by
SRoA.
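The simplest instance of the pattern, in C++ terms (sketch):

  #include <cstdint>

  // (x << c) >> c on an unsigned 32-bit value just clears the top c bits:
  uint32_t viaShifts(uint32_t X, unsigned C) { return (X << C) >> C; }
  uint32_t viaAnd(uint32_t X, unsigned C) { return X & (0xFFFFFFFFu >> C); }
  // For any 0 <= C < 32 the two agree, so the AND form can be emitted even
  // when intermediate logical nodes sit between the shifts.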
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112304 91177308-0d34-0410-b5e6-96231b3b80d8
transformation, collect all the addrecs with the same loop
and combine them at once rather than starting everything over
at the first chance.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112290 91177308-0d34-0410-b5e6-96231b3b80d8
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112285 91177308-0d34-0410-b5e6-96231b3b80d8
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:
%94 = zext i16 %93 to i32 ; <i32> [#uses=2]
%96 = lshr i32 %94, 8 ; <i32> [#uses=1]
%101 = trunc i32 %96 to i8 ; <i8> [#uses=1]
This also unblocks other xforms from happening, now clang is able to compile:
struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }
into:
_foo: ## @foo
## BB#0: ## %entry
pshufd $1, %xmm0, %xmm2
addss %xmm0, %xmm2
movdqa %xmm1, %xmm3
addss %xmm2, %xmm3
pshufd $1, %xmm1, %xmm0
addss %xmm3, %xmm0
ret
on x86-64, instead of:
_foo: ## @foo
## BB#0: ## %entry
movd %xmm0, %rax
shrq $32, %rax
movd %eax, %xmm2
addss %xmm0, %xmm2
movapd %xmm1, %xmm3
addss %xmm2, %xmm3
movd %xmm1, %rax
shrq $32, %rax
movd %eax, %xmm0
addss %xmm3, %xmm0
ret
This seems pretty close to optimal to me, at least without
using horizontal adds. This also triggers in lots of other
code, including SPEC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112278 91177308-0d34-0410-b5e6-96231b3b80d8
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112271 91177308-0d34-0410-b5e6-96231b3b80d8
return to avoid needing two calls to test for equivalence, and sort
addrecs by their degree before examining their operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112267 91177308-0d34-0410-b5e6-96231b3b80d8
virtual base registers handle this function, and more. A bit more cleanup
to do on the interface to eliminateFrameIndex() after this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112237 91177308-0d34-0410-b5e6-96231b3b80d8
still having a significant effect. It shouldn't be now that the pre-RA
virtual base reg stuff is in. Assuming that's validated by the nightly
testers, we can simplify a lot of the PEI frame index code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112220 91177308-0d34-0410-b5e6-96231b3b80d8
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).
This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112190 91177308-0d34-0410-b5e6-96231b3b80d8
comparison with 0. These two pieces of code should give identical results:
rsbs r1, r1, 0
cmp r0, r1
mov r0, #0
it ls
mov r0, #1
and:
cmn r0, r1
mov r0, #0
it ls
mov r0, #1
However, the CMN gives the *opposite* result when r1 is 0. This is because the
carry flag is set in the CMP case but not in the CMN case. In short, the CMP
instruction doesn't perform a truncate of the (logical) NOT of 0 plus the value
of r0 and the carry bit (because the "carry bit" parameter to AddWithCarry is
defined as 1 in this case, the carry flag will always be set when r0 >= 0). The
CMN instruction doesn't perform a NOT of 0 so there is never a "carry" when this
AddWithCarry is performed (because the "carry bit" parameter to AddWithCarry is
defined as 0).
The AddWithCarry in the CMP case seems to be relying upon the identity:
~x + 1 = -x
However when x is 0 and unsigned, this doesn't hold:
x = 0
~x = 0xFFFF FFFF
~x + 1 = 0x1 0000 0000
(-x = 0) != (0x1 0000 0000 = ~x + 1)
Therefore, we should disable *all* versions of CMN, especially when comparing
against zero, until we can limit when the CMN instruction is used (when we know
that the RHS is not 0) or when we have a hardware fix for this.
(See the ARM docs for the "AddWithCarry" pseudo-code.)
This is related to <rdar://problem/7569620>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112176 91177308-0d34-0410-b5e6-96231b3b80d8
with the VST4 instructions. Until after register allocation, we want to
represent sets of adjacent registers by a single super-register. These
VST4 pseudo instructions have a single QQ or QQQQ source register operand.
They get expanded to the real VST4 instructions with 4 separate D register
operands. Once this conversion is complete, we'll be able to remove the
NEONPreAllocPass and avoid some fragile and hacky code elsewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112108 91177308-0d34-0410-b5e6-96231b3b80d8
expanding: e.g. <2 x float> -> <4 x float> instead of -> 2 floats. This
affects two places in the code: handling cross block values and handling
function return and arguments. Since vectors are already widened by
LegalizeTypes, this gives us much better code and unblocks the x86-64 ABI
and SPU ABI work.
For example, this (which is a silly example of a cross-block value):
define <4 x float> @test2(<4 x float> %A) nounwind {
%B = shufflevector <4 x float> %A, <4 x float> undef, <2 x i32> <i32 0, i32 1>
%C = fadd <2 x float> %B, %B
br label %BB
BB:
%D = fadd <2 x float> %C, %C
%E = shufflevector <2 x float> %D, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
ret <4 x float> %E
}
Now compiles into:
_test2: ## @test2
## BB#0:
addps %xmm0, %xmm0
addps %xmm0, %xmm0
ret
previously it compiled into:
_test2: ## @test2
## BB#0:
addps %xmm0, %xmm0
pshufd $1, %xmm0, %xmm1
## kill: XMM0<def> XMM0<kill> XMM0<def>
insertps $0, %xmm0, %xmm0
insertps $16, %xmm1, %xmm0
addps %xmm0, %xmm0
ret
This implements rdar://8230384
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112101 91177308-0d34-0410-b5e6-96231b3b80d8
from MDValueList between each function, now that the bitcode
writer is reusing the index space for function-local metadata.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112082 91177308-0d34-0410-b5e6-96231b3b80d8