std::auto_ptr is flagged as deprecated by GCC 5.0, and we are already using
std::unique_ptr all over the place.
Patch by Xan López.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235479 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the code was accidentally checking whether 'this' was an IntRecTy, which it can't be since 'this' is a BitRecTy. Looking back at the history, it appears the check was intended to apply to RHS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235477 91177308-0d34-0410-b5e6-96231b3b80d8
Without pointee types the space optimization of storing only the pointer
type and not the value type won't be viable - so add the extra type
information that would be missing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235475 91177308-0d34-0410-b5e6-96231b3b80d8
Without pointee types the space optimization of storing only the pointer
type and not the value type won't be viable - so add the extra type
information that would be missing.
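As a rough illustration of the missing piece (a sketch in textual IR, not the exact encoding this change touches): once pointers stop carrying a pointee type, a load can no longer derive its result type from its pointer operand, so the type has to be spelled out on the instruction itself.
; With pointee types, the loaded type is recoverable from %p's type:
%v = load i32* %p
; Without them, the value type must be stated explicitly on the load:
%v = load i32, i32* %p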
Storeatomic coming soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235474 91177308-0d34-0410-b5e6-96231b3b80d8
Add a flag to lib/Linker (and `llvm-link`) to override linkage rules.
When set, the functions in the source module *always* replace those in
the destination module.
The `llvm-link` option is `-override=abc.ll`. All the "regular" modules
are loaded and linked first, followed by the `-override` modules. This
is useful for debugging workflows where some subset of the module (e.g.,
a single function) is extracted into a separate file where it's
optimized differently, before being merged back in.
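A hypothetical invocation of that workflow (file names are made up; -S and -o are the usual llvm-link output options):
llvm-link main.ll helpers.ll -override=tuned_helpers.ll -S -o merged.ll
Any function defined in tuned_helpers.ll replaces the same-named definition coming from the regular inputs.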
Patch by Luqman Aden!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235473 91177308-0d34-0410-b5e6-96231b3b80d8
Factor the loop for linking input files together into a combined module
into a separate function. This is in preparation for an upcoming patch
that runs the logic twice.
Patch by Luqman Aden!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235472 91177308-0d34-0410-b5e6-96231b3b80d8
Turns out I misread the parentheses. I'm pretty sure it's always a RecordRecTy and none of the callers really seem to expect null, but until I'm completely sure I'm going to revert this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235469 91177308-0d34-0410-b5e6-96231b3b80d8
With SSE2, we can generate a 'movq' or other 64-bit store op on a 32-bit system
even though 64-bit integers are not legal types.
So instead of producing this:
pshufd $229, %xmm0, %xmm1 ## xmm1 = xmm0[1,1,2,3]
movd %xmm0, (%eax)
movd %xmm1, 4(%eax)
We can do:
movq %xmm0, (%eax)
This is a fix for the problem noted in D7296.
Differential Revision: http://reviews.llvm.org/D9134
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235460 91177308-0d34-0410-b5e6-96231b3b80d8
We should also teach the inliner to collapse framerecover of
frameaddress of the current frame down to an alloca, but that can happen
later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235459 91177308-0d34-0410-b5e6-96231b3b80d8
Calls to llvm::Value::mutateType are becoming extra-sensitive now that
instructions have extra type information that will not be derived from
operands or result type (alloca, gep, load, call/invoke, etc.). The
special-handling for mutateType will get more complicated as this work
continues - it might be worth making mutateType virtual & pushing the
complexity down into the classes that need special handling. But with
only two significant uses of mutateType (vectorization and linking) this
seems OK for now.
Totally open to ideas/suggestions/improvements, of course.
With this, and a bunch of exceptions, we can round-trip an indirect call
site through bitcode and IR. (A direct call site is actually trickier...
I haven't figured out how to deal with the IR deserializer's lazy
construction of Function/GlobalVariable decls based on the type of the
entity, which means looking through the "pointer to T" type referring to
the global.)
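For a concrete picture of where the extra type information matters (an illustrative sketch, not the exact change here): with pointee types, an indirect call's function type falls out of the callee operand's pointer type; once pointee types are gone, the call site has to carry it explicitly.
define i32 @caller(i32 (i32)* %fp, i32 %x) {
  ; The callee's function type, i32 (i32), is currently read off %fp's
  ; pointer type; without pointee types the call must record it itself.
  %r = call i32 %fp(i32 %x)
  ret i32 %r
}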
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235458 91177308-0d34-0410-b5e6-96231b3b80d8
https://llvm.org/bugs/show_bug.cgi?id=23163.
Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has a negative impact on performance. In this patch we restrict
gep merging to happen only when the indices to be merged are both
constants, which ensures such a merge is always beneficial.
The patch makes gep merging happen only in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results and we haven't noticed them yet; we will be
ready to improve this further once we see such cases.
Differential Revision: http://reviews.llvm.org/D8911
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235455 91177308-0d34-0410-b5e6-96231b3b80d8
https://llvm.org/bugs/show_bug.cgi?id=23163.
Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has a negative impact on performance. In this patch we restrict
gep merging to happen only when the indices to be merged are both
constants, which ensures such a merge is always beneficial.
The patch makes gep merging happen only in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results and we haven't noticed them yet; we will be
ready to improve this further once we see such cases.
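A sketch of the distinction (names are illustrative):
; Both indices are constants, so folding loses nothing:
%p = getelementptr i32* %base, i64 4
%q = getelementptr i32* %p, i64 2
; can safely become:
%q = getelementptr i32* %base, i64 6
; With a variable index, merging re-materializes the index arithmetic and
; can undo CSE/LICM of the inner gep, so it is no longer performed:
%p = getelementptr i32* %base, i64 %i
%q = getelementptr i32* %p, i64 2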
Differential Revision: http://reviews.llvm.org/D9007
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235451 91177308-0d34-0410-b5e6-96231b3b80d8
MemIntrinsic::getDest() looks through pointer casts, and using it
directly when building the new GEP+memset results in stuff like:
%0 = getelementptr i64* %p, i32 16
%1 = bitcast i64* %0 to i8*
call ..memset(i8* %1, ...)
instead of the correct:
%0 = bitcast i64* %p to i8*
%1 = getelementptr i8* %0, i32 16
call ..memset(i8* %1, ...)
Instead, use getRawDest, which just gives you the i8* value.
While there, use the memcpy's dest, as it's live anyway.
In most cases, when the optimization triggers, the memset and memcpy
sizes are the same, so the built memset is 0-sized and eliminated.
The problem occurs when they're different.
Fixes a regression caused by r235232: PR23300.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235419 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
With D9096 and D9101, there's no need to run DCE after SLSR and
SeparateConstOffsetFromGEP.
Test Plan: no regression
Reviewers: jholewinski, meheff
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D9172
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235415 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the `DIArray` and `DITypeArray` typedefs, preferring the
underlying types (`DebugNodeArray` and `MDTypeRefArray`, respectively).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235413 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
After we rewrite a candidate, the instructions used by the old form may
become unused. This patch cleans up these unused instructions so that we
needn't run DCE after SLSR.
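A simplified sketch of what gets left behind (not one of the actual test cases): when a candidate of the form B + i * S is rewritten in terms of an earlier one, its old index computation typically loses its last user.
; Two candidates sharing base %b and stride %i:
%s5 = mul i64 %i, 5
%a  = add i64 %b, %s5
%s6 = mul i64 %i, 6
%c  = add i64 %b, %s6
; Rewriting %c in terms of %a leaves %s6 dead; this patch erases such
; instructions instead of relying on a later -dce run:
%c  = add i64 %a, %i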
Test Plan: removed -dce in all the SLSR tests
Reviewers: broune, meheff
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9101
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235410 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: so that we needn't run DCE after this pass.
Test Plan: removed -dce from the commandline in split-gep.ll and split-gep-and-gvn.ll
Reviewers: meheff
Subscribers: llvm-commits, HaoLiu, hfinkel, jholewinski
Differential Revision: http://reviews.llvm.org/D9096
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235409 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
MemorySSA uses this algorithm as well, and this enables us to reuse the code in both places.
There are no actual algorithm or data structure changes in here, just code movement.
Reviewers: qcolombet, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9118
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235406 91177308-0d34-0410-b5e6-96231b3b80d8
Remove early returns for when `getVariable()` is null, and just assert
that it never happens. The Verifier already confirms that there's a
valid variable on these intrinsics, so we should assume the debug info
isn't broken. I also updated a check for a `!dbg` attachment, which the
Verifier similarly guarantees.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235400 91177308-0d34-0410-b5e6-96231b3b80d8
Keep the old SEH fan-in lowering on by default for now, since projects
rely on it. This will make it easy to test this change with a simple
flag flip.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235399 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This directive is exactly the same as .asciz, except it's only used by MIPS.
It is used to store null-terminated strings in object files.
Reviewers: rafael, dsanders, echristo
Reviewed By: dsanders, echristo
Subscribers: echristo, llvm-commits
Differential Revision: http://reviews.llvm.org/D7530
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235382 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The 64-bit version of the variable shift instructions uses the
shift_rotate_reg class, which uses a GPR32Opnd to specify the variable
shift amount. With this patch we avoid generating a redundant SLL
instruction for the variable shift instructions on 64-bit targets.
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7413
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235376 91177308-0d34-0410-b5e6-96231b3b80d8