to prevent it from being clobbered. Mips uses $gp to access the small data section.
This bug was originally reported by Carl Norum.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162340 91177308-0d34-0410-b5e6-96231b3b80d8
within the codegen EK_GPRel64BlockAddress. This was not
supported for direct object output and resulted in an assertion
failure. This change adds support for EK_GPRel64BlockAddress for
direct object output.
One piece of fallout from this is turning on RELA relocations
for mips64 to match gas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162334 91177308-0d34-0410-b5e6-96231b3b80d8
consistent with the other "expected identifier" errors.
Extracted from the Andy/PaX patch. I added the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162291 91177308-0d34-0410-b5e6-96231b3b80d8
This optimization really just replaces allocas wholesale with
globals; there is no scalarization.
The underlying motivation for this patch is to simplify the SROA pass
and focus it on splitting and promoting allocas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162271 91177308-0d34-0410-b5e6-96231b3b80d8
IR that hasn't been through SimplifyCFG can look like this:
br i1 %b, label %r, label %r
Make sure we don't create duplicate Machine CFG edges in this case.
Fix the machine code verifier to accept conditional branches with a
single CFG edge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162230 91177308-0d34-0410-b5e6-96231b3b80d8
this allows for better code generation.
Added a new DAGCombine transformation to convert FMAX and FMIN to FMAXC and
FMINC, which are commutative.
For example:
movaps %xmm0, %xmm1
movsd LC(%rip), %xmm0
minsd %xmm1, %xmm0
becomes:
minsd LC(%rip), %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162187 91177308-0d34-0410-b5e6-96231b3b80d8
Add these transformations to the existing add/sub ones:
(and (select cc, -1, c), x) -> (select cc, x, (and x, c))
(or (select cc, 0, c), x) -> (select cc, x, (or x, c))
(xor (select cc, 0, c), x) -> (select cc, x, (xor x, c))
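For example, the first of these in IR terms (an illustrative sketch, not taken
from the patch):
  %sel = select i1 %cc, i32 -1, i32 %c
  %res = and i32 %sel, %x
becomes:
  %and = and i32 %x, %c
  %res = select i1 %cc, i32 %x, i32 %and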
The selects can then be transformed to a single predicated instruction
by peephole.
This transformation will make it possible to eliminate the ISD::CAND,
COR, and CXOR custom DAG nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162176 91177308-0d34-0410-b5e6-96231b3b80d8
arithmetic instructions. However, when small data types are used, a truncate
node appears between the SETCC node and the arithmetic operation. This patch
adds support for this pattern.
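For example, patterns of the form (setne (truncate (xor x, y)), 0), where the
truncate sits between the SETCC and the arithmetic node, are now handled
(operand names here are illustrative).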
Before:
xorl %esi, %edi
testb %dil, %dil
setne %al
ret
After:
xorb %dil, %sil
setne %al
ret
rdar://12081007
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162160 91177308-0d34-0410-b5e6-96231b3b80d8
PEI can't handle the pseudo-instructions. This can be removed when the
pseudo-instructions are replaced by normal predicated instructions.
Fixes PR13628.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162130 91177308-0d34-0410-b5e6-96231b3b80d8
The previous fix only checked for simple cycles; use a set to catch longer
cycles too.
Drop the broken check from the ObjectSizeOffsetEvaluator. The BoundsChecking
pass doesn't have to deal with invalid IR like InstCombine does.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162120 91177308-0d34-0410-b5e6-96231b3b80d8
make it more consistent with its intended semantics.
The `linker_private_weak_def_auto' linkage type was meant to automatically hide
globals which never had their addresses taken. It has nothing to do with the
`linker_private' linkage type, which among other things outputs the symbols
with an `l' (ell) prefix.
The intended semantics are more like those of the `linkonce_odr' linkage type.
Change the name of the linkage type to `linkonce_odr_auto_hide' and adjust its
semantics so that it produces the correct output for the linker.
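In IR the new linkage is written as, for example (an illustrative declaration,
not taken from the patch):
  @g = linkonce_odr_auto_hide constant i32 0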
Note: The old linkage name `linker_private_weak_def_auto' will still parse but
is not a synonym for `linkonce_odr_auto_hide'. This should be removed in 4.0.
<rdar://problem/11754934>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162114 91177308-0d34-0410-b5e6-96231b3b80d8
multiple edges between two blocks is linear. If the caller is iterating over
all edges leaving a BB, that would be a quadratic-time algorithm. It is more
efficient to have the callers handle that case.
Currently the only callers are:
* GVN: already avoids the multiple edge case.
* Verifier: could only hit this assert when looking at an invalid invoke. Since
it already rejects the invoke, just avoid computing the dominance for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162113 91177308-0d34-0410-b5e6-96231b3b80d8
I really need to find a way to automate this, but I can't come up with a regex
that has no false positives while handling tricky cases like custom check
prefixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162097 91177308-0d34-0410-b5e6-96231b3b80d8
It is not my plan to duplicate the entire ARM instruction set with
predicated versions. We need a way of representing predicated
instructions in SSA form without requiring a separate opcode.
Then the pseudo-instructions can go away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162061 91177308-0d34-0410-b5e6-96231b3b80d8
where some fact like a=b dominates a use in a phi, but doesn't dominate the
basic block itself.
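For example (an illustrative sketch, not taken from the patch):
  define i32 @f(i32 %a, i32 %b) {
  entry:
    %cmp = icmp eq i32 %a, %b
    br i1 %cmp, label %merge, label %other
  other:
    br label %merge
  merge:
    %p = phi i32 [ %a, %entry ], [ 0, %other ]
    ret i32 %p
  }
The equality %a == %b holds on the edge from %entry to %merge, so the phi
operand %a incoming from %entry can be replaced with %b, even though the
equality does not dominate %merge itself.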
This feature could also be implemented by splitting critical edges, but at least
with the current algorithm reasoning about the dominance directly is faster.
Running "opt -O2" on the testcase in pr10584 is 1.003 times slower, and on gcc
as a single file it is 1.0007 times faster.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162023 91177308-0d34-0410-b5e6-96231b3b80d8
Without fastcc support, the caller just falls through to CallingConv::C for
fastcc calls, but the callee still uses fastcc. This calling convention
mismatch is a problem, and fastcc support fixes it.
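For example (an illustrative sketch, not taken from the patch):
  define internal fastcc i32 @callee(i32 %x) {
    ret i32 %x
  }
  define i32 @caller(i32 %x) {
    %r = call fastcc i32 @callee(i32 %x)
    ret i32 %r
  }
Both the call site and the callee are marked fastcc; lowering the call site as
CallingConv::C while the callee expects fastcc would pass arguments in the
wrong places.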
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@162013 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM select instructions are just predicated moves. If the select is
the only use of an operand, the instruction defining the operand can be
predicated instead, saving one instruction and decreasing register
pressure.
This implementation can turn AND/ORR/EOR instructions into their
corresponding ANDCC/ORRCC/EORCC variants. Ideally, we should be able to
predicate any instruction, but we don't yet support predicated
instructions in SSA form.
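For example (an illustrative sketch, not taken from the patch):
  %and = and i32 %a, %b
  %res = select i1 %cc, i32 %and, i32 %a
If the select is the AND's only use, the AND can be emitted as a predicated
ANDCC that yields %a when %cc is false, and the conditional move that would
otherwise implement the select disappears.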
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161994 91177308-0d34-0410-b5e6-96231b3b80d8
around. That's not how we do things. Besides, the commit message tells us that
it is covered by the GCC test suite.
------------------------------------------------------------------------
r127497 | zwarich | 2011-03-11 13:51:56 -0800 (Fri, 11 Mar 2011) | 3 lines
Fix the GCC test suite issue exposed by r127477, which was caused by stack
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.
------------------------------------------------------------------------
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161985 91177308-0d34-0410-b5e6-96231b3b80d8
- The memcpy size was wrongly truncated to 32 bits, so an 8GB memcpy was
  treated as a 0-sized memcpy.
- Since 0-sized memcpy/memset calls are already removed before
  SimplifyMemTransfer and SimplifyMemSet in visitCallInst, replace the 0 checks
  with assertions.
- Replace getZExtValue() with getLimitedValue(), as suggested by Eli Friedman
  (see the sketch below).
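A rough sketch of the shape of the last point (illustrative C++, not the
actual InstCombine code):
  // Length of the memcpy/memset, known here to be a ConstantInt.
  ConstantInt *LenC = cast<ConstantInt>(MI->getLength());
  // Keeping this in a 32-bit 'unsigned' wrapped an 8GB length to 0;
  // getLimitedValue() also clamps instead of asserting on over-wide values.
  uint64_t Len = LenC->getLimitedValue();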
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161923 91177308-0d34-0410-b5e6-96231b3b80d8
reversed. This leads to wrong codegen for the float-to-half conversion
intrinsics, which are used to support the storage-only fp16 type.
The NEON variants of the same instructions are fine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161907 91177308-0d34-0410-b5e6-96231b3b80d8
- FP_EXTEND only supports extending from vectors with a matching number of
  elements. This results in the scalarization of an extend from v2f32 to
  v2f64, because v2f32 is legalized to v4f32, which does not match v2f64
  (see the example below).
- Add an X86-specific VFPEXT node supporting extension from v4f32 to v2f64.
- Add a BUILD_VECTOR lowering helper to recover the original extension from
  v4f32 to v2f64.
- The test case is enhanced to include different vector widths.
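For illustration, the kind of IR this affects (an assumed example, not taken
from the test case):
  %ext = fpext <2 x float> %v to <2 x double>
With VFPEXT this can be selected as a single conversion (cvtps2pd) instead of
being scalarized.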
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161894 91177308-0d34-0410-b5e6-96231b3b80d8