The expression-based expansion too often results in IR-level optimizations
splitting the intermediate values into separate basic blocks, preventing
the formation of the VBSL instruction as the code author intended. In
particular, LICM would often hoist part of the computation out of a loop.
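For reference, the expression-based expansion of such a bitwise select looks
roughly like the following IR (a sketch; value names are made up). LICM can
hoist the mask inversion (the 'xor') out of a loop, splitting the pattern
across blocks so the backend no longer matches it to VBSL:
  %lhs     = and <4 x i32> %mask, %b
  %notmask = xor <4 x i32> %mask, <i32 -1, i32 -1, i32 -1, i32 -1>
  %rhs     = and <4 x i32> %notmask, %c
  %res     = or  <4 x i32> %lhs, %rhs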
rdar://11011471
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164340 91177308-0d34-0410-b5e6-96231b3b80d8
A PHI can't create interference on its own. If two live ranges interfere
at a PHI, they must also interfere when leaving one of the PHI
predecessors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164330 91177308-0d34-0410-b5e6-96231b3b80d8
We already have HoistThenElseCodeToIf; this patch implements
SinkThenElseCodeToEnd. When the END block has only two predecessors and each
predecessor terminates with an unconditional branch, we compare instructions
in the IF and ELSE blocks backwards and check whether we can sink the common
instructions down.
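For illustration, a sketch of the kind of IR this handles (block and value
names are made up); the common store in the IF and ELSE blocks can be sunk
into the END block behind a phi:
  if.then:                                          ; IF
    %a = add i32 %x, 1
    store i32 %a, i32* %p
    br label %if.end
  if.else:                                          ; ELSE
    %b = sub i32 %x, 1
    store i32 %b, i32* %p
    br label %if.end
  if.end:                                           ; END
    ...
After sinking:
  if.end:
    %v = phi i32 [ %a, %if.then ], [ %b, %if.else ]
    store i32 %v, i32* %p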
rdar://12191395
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164325 91177308-0d34-0410-b5e6-96231b3b80d8
- Rewrite/merge pseudo-atomic instruction emitters to address the
following issues:
* Reduce one unnecessary load in spin-loop
Previously, the spin-loop looked like:
  thisMBB:
  newMBB:
    ld  t1 = [bitinstr.addr]
    op  t2 = t1, [bitinstr.val]
    not t3 = t2 (if Invert)
    mov EAX = t1
    lcs dest = [bitinstr.addr], t3 [EAX is implicit]
    bz  newMBB
    fallthrough --> nextMBB
The 'ld' at the beginning of newMBB should be hoisted out of the loop, since
lcs (or CMPXCHG on x86) already loads the current memory value into EAX (see
the IR sketch after this list for the kind of input that produces such a
loop). The loop is refined as:
  thisMBB:
    EAX = LOAD [MI.addr]
  mainMBB:
    t1 = OP [MI.val], EAX
    LCMPXCHG [MI.addr], t1, [EAX is implicitly used & defined]
    JNE mainMBB
  sinkMBB:
* Remove immopc since, so far, all pseudo-atomic instructions have an
all-register form only; there is no immediate operand.
* Remove unnecessary attributes/modifiers in the pseudo-atomic instruction
td definitions
* Fix issues in PR13458
- Add comprehensive tests on atomic ops on various data types.
NOTE: Some of them are turned off due to missing functionality.
- Revise tests due to the new spin-loop generated.
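For reference, a sketch of the kind of atomicrmw input that is lowered to a
spin-loop of the shape above when its result is used (typed-pointer syntax of
the time; names are made up):
  %old = atomicrmw nand i32* %ptr, i32 %mask seq_cst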
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164281 91177308-0d34-0410-b5e6-96231b3b80d8
A common coalescing conflict in vector code is lane insertion:
  %dst = FOO
  %src = BAR
  %dst:ssub0 = COPY %src
The live range of %src interferes with the ssub0 lane of %dst, but that
lane is never read after %src would have clobbered it. That makes it
safe to merge the live ranges and eliminate the COPY:
  %dst = FOO
  %dst:ssub0 = BAR
This patch teaches the new coalescer to resolve conflicts where dead
vector lanes would be clobbered, at least as long as the clobbered
vector lanes don't escape the basic block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164250 91177308-0d34-0410-b5e6-96231b3b80d8
to improve compatibility with GNU as.
Based on a patch by PaX Team.
Fixed assertion failures on non-Darwin and added additional test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164248 91177308-0d34-0410-b5e6-96231b3b80d8
- Merge the processing of LOAD_ADD with other atomic load-arith
operations
- Separate the logic that gets the target constant for atomic-load-op, and
add an optimization for atomic-load-add on i16 with a negative value (see
the sketch below)
- Optimize a minor case for atomic-fetch-add on i16 with a negative operand.
The test case is revised.
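For example, an input of the shape targeted by the i16 optimization (a
sketch; typed-pointer syntax of the time, names are made up):
  %old = atomicrmw add i16* %ptr, i16 -1 seq_cst
Presumably the negative operand can then be encoded more cheaply, e.g. as a
subtraction of the corresponding positive immediate.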
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164243 91177308-0d34-0410-b5e6-96231b3b80d8
XFAIL needs a trailing colon. Hopefully this will get the buildbots
happy again while Bill works on getting it passing.
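For reference, a lit XFAIL directive takes the form below (the '*' is
illustrative; the actual test may list specific targets):
  ; XFAIL: *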
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164237 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/PowerPC/PPCISelLowering.{h,cpp}
  Rename LowerFormalArguments_Darwin to LowerFormalArguments_Darwin_Or_64SVR4.
  Rename LowerFormalArguments_SVR4 to LowerFormalArguments_32SVR4.
  Receive small structs right-justified in LowerFormalArguments_Darwin_Or_64SVR4.
  Rename LowerCall_Darwin to LowerCall_Darwin_Or_64SVR4.
  Rename LowerCall_SVR4 to LowerCall_32SVR4.
  Pass small structs right-justified in LowerCall_Darwin_Or_64SVR4.

test/CodeGen/PowerPC/structsinregs.ll
  New test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164228 91177308-0d34-0410-b5e6-96231b3b80d8
two variables where the first variable is returned and the second
ignored.
I don't think this occurs in practice (other passes should have cleaned
up the unused phi node), but it should still be handled correctly.
Also make the logic for determining if we should return early less
sketchy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164225 91177308-0d34-0410-b5e6-96231b3b80d8
Because the test invokes llc -march=sparc, it needs to be in a directory
which is only run when the sparc target is built.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164211 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up from r163302, which added a transformation to
SimplifyCFG that turns some switches into loads from lookup tables.
It was pointed out that some targets, such as GPUs and deeply embedded
targets, might not find this appropriate, but SimplifyCFG doesn't have
enough information about the target to decide this.
This patch adds the reverse transformation to CodeGenPrep: it turns
loads from lookup tables back into switches for targets where we do not
build jump tables (assuming these are also the targets where lookup
tables are inappropriate).
Hopefully we will eventually get to have target information in
SimplifyCFG, and then this CodeGenPrep transformation can be removed.
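For illustration, a sketch of the two forms (names are made up; typed-pointer
syntax of the time). The lookup-table form produced by SimplifyCFG is a load
from a constant global:
  @switch.table = private unnamed_addr constant [4 x i32] [i32 13, i32 42, i32 7, i32 9]
  ...
  %gep = getelementptr inbounds [4 x i32]* @switch.table, i32 0, i32 %idx
  %val = load i32* %gep
On targets where we do not build jump tables, CodeGenPrep rewrites this back
into (roughly) a switch over %idx that selects among the table's constants:
  switch i32 %idx, label %default [
    i32 0, label %bb0
    i32 1, label %bb1
    i32 2, label %bb2
    i32 3, label %bb3
  ]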
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164206 91177308-0d34-0410-b5e6-96231b3b80d8
from the dragonegg build bots when we turned on the full version of the
pass. Included a much reduced test case for this pesky bug, despite
bugpoint's uncooperative behavior.
Also, I audited all the similar code I could find and didn't spot any
other cases where this mistake cropped up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164178 91177308-0d34-0410-b5e6-96231b3b80d8
working on FCA splitting. Instead of refusing to form a common type when
there are uses of a subsection of the alloca as well as a use of the
entire alloca, just skip the subsection uses and continue looking for
a whole-alloca use with a type that we can use.
This produces slightly prettier IR I think, and also fixes the other
failure in the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164146 91177308-0d34-0410-b5e6-96231b3b80d8
FCAs. This is essential in order to promote allocas that are used in
struct returns by frontends like Clang. The FCA load would block the
rest of the pass from firing, resulting in significant regressions with
the bullet benchmark in the nightly test suite.
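For illustration, splitting an FCA load looks roughly like this (a sketch
with made-up names, typed-pointer syntax of the time). A whole-aggregate load
such as
  %agg = load { i32, float }* %ptr
is rewritten into per-element loads that the rest of the pass can see through:
  %f0.addr = getelementptr inbounds { i32, float }* %ptr, i32 0, i32 0
  %f0      = load i32* %f0.addr
  %f1.addr = getelementptr inbounds { i32, float }* %ptr, i32 0, i32 1
  %f1      = load float* %f1.addr
  %agg.0   = insertvalue { i32, float } undef, i32 %f0, 0
  %agg     = insertvalue { i32, float } %agg.0, float %f1, 1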
Thanks to Duncan for repeated discussions about how best to do this, and
to both him and Benjamin for review.
This appears to have blocked many places where the pass tries to fire,
and so I expect somewhat different results with this fix added.
As with the last big patch, I'm including a change to enable the SROA by
default *temporarily*. Ben is going to remove this as soon as the LNT
bots pick up the patch. I'm just trying to get a round of LNT numbers
from the stable machines in the lab.
NOTE: Four clang tests are expected to fail in the brief window where
this is enabled. Sorry for the noise!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164119 91177308-0d34-0410-b5e6-96231b3b80d8
aligned address. Based on a patch by David Peixotto.
Also use vld1.64 / vst1.64 with 128-bit alignment to take advantage of alignment
hints. rdar://12090772, rdar://12238782
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164089 91177308-0d34-0410-b5e6-96231b3b80d8
Handle the case where we split the default edge when the default target has an
"icmp" and an unconditional branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164076 91177308-0d34-0410-b5e6-96231b3b80d8
Add LIS::pruneValue() and extendToIndices(). These two functions are
used by the register coalescer when merging two live ranges requires
more than a trivial value mapping as supported by LiveInterval::join().
The pruneValue() function can remove the part of a value number that is
going to conflict in join(). Afterwards, extendToIndices can restore the
live range, using any new dominating value numbers and updating the SSA
form.
Use this complex value mapping to support merging a register into a
vector lane that has a conflicting value, but the clobbered lane is
undef.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164074 91177308-0d34-0410-b5e6-96231b3b80d8