There are 3 types of relocations on MachO:
* Scattered
* Section based
* Symbol based
On ELF and COFF, relocations are symbol based.
We were in the strange situation of abstracting over two of them. This change
makes section-based relocations MachO only.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240149 91177308-0d34-0410-b5e6-96231b3b80d8
This commit implements the initial serialization of machine basic blocks in a
machine function. Only the simple, scalar MBB attributes are serialized. The
reference to LLVM IR's basic block is preserved when that basic block has a name.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10465
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240145 91177308-0d34-0410-b5e6-96231b3b80d8
The patch is generated using this command:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
llvm/lib/
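For reference, the llvm-namespace-comment check rewrites namespace closing
braces into commented closers, roughly like this (the nesting below is a
made-up illustration, not taken from the patch):

  namespace llvm {
  namespace detail {
  // ...
  } // namespace detail
  } // namespace llvm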
Thanks to Eugene Kosov for the original patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240137 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds initial support for the -fsanitize=kernel-address flag to Clang.
Right now it's quite restricted: only out-of-line instrumentation is supported, globals are not instrumented, some GCC kasan flags are not supported.
Using this patch I am able to build and boot the KASan tree with LLVMLinux patches from github.com/ramosian-glider/kasan/tree/kasan_llvmlinux.
To disable KASan instrumentation for a certain function, __attribute__((no_sanitize("kernel-address"))) can be used.
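For example (the function below is hypothetical and only illustrates the
attribute):

  __attribute__((no_sanitize("kernel-address")))
  void early_boot_helper(void) {
    // left uninstrumented, e.g. because it runs before the KASan shadow is ready
  }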
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240131 91177308-0d34-0410-b5e6-96231b3b80d8
What this does is make all symbols that would otherwise start with a .L
(or L on MachO) unnamed.
Some of these symbols still show up in the symbol table, but we can just
make them unnamed.
In order to make sure we produce identical results when going through assembly,
all .L symbols (not just the compiler-produced ones) are now unnamed.
Running llc on llvm-as.opt.bc, the peak memory usage goes from 208.24MB to
205.57MB.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240130 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, we canonicalize shuffles that produce a result larger than
their operands with:
shuffle(concat(v1, undef), concat(v2, undef))
->
shuffle(concat(v1, v2), undef)
because we can access quad vectors (see PerformVECTOR_SHUFFLECombine).
This is useful in the general case, but there are special cases where
native shuffles produce larger results: the two-result ops.
We can look through the concat when lowering them:
shuffle(concat(v1, v2), undef)
->
concat(VZIP(v1, v2):0, :1)
This lets us generate the native shuffles instead of scalarizing to
dozens of VMOVs.
Differential Revision: http://reviews.llvm.org/D10424
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240118 91177308-0d34-0410-b5e6-96231b3b80d8
In preparation for a future patch: makes it easier to do the same
matching to generate different nodes, without duplication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240116 91177308-0d34-0410-b5e6-96231b3b80d8
In MachO, a relocation target can take 3 basic forms:
* An r_value in scattered relocations.
* A symbol in external relocations.
* A section in non-external relocations.
Have the dump reflect that. With this change we go from
CHECK-NEXT: Extern: 0
CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
CHECK-NEXT: Symbol: 0x2
CHECK-NEXT: Scattered: 0
To just
// CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
// CHECK-NEXT: Section: __data (2)
Since the relocation is with a section, we print the section name and don't
need to say that it is not scattered or external.
Someone motivated can add further special cases for things like
ARM64_RELOC_ADDEND and ARM_RELOC_PAIR.
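As a rough standalone illustration of the dispatch the three forms above imply
(the struct and field names are made up; this is not the llvm-readobj code):

  #include <cstdint>
  #include <cstdio>

  struct Reloc {
    bool Scattered;
    bool Extern;
    std::uint32_t Value;           // r_value, only meaningful for scattered relocations
    std::uint32_t SymbolOrSection; // symbol index if Extern, section index otherwise
  };

  static void printTarget(const Reloc &R) {
    if (R.Scattered)
      std::printf("Value: 0x%x\n", R.Value);
    else if (R.Extern)
      std::printf("Symbol: %u\n", R.SymbolOrSection);
    else
      std::printf("Section: %u\n", R.SymbolOrSection);
  }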
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240073 91177308-0d34-0410-b5e6-96231b3b80d8
To save compile time, the analysis to find dense case-clusters in switches is
not done at -O0. However, when the whole switch is dense enough, it is easy to
turn it into a jump table, resulting in much faster code with no extra effort.
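For instance, a fully dense switch like the following (a made-up example) can be
lowered to a jump table even at -O0:

  int classify(int x) {
    switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
    }
  }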
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240071 91177308-0d34-0410-b5e6-96231b3b80d8
Deduplicates some code and lets us use LEA on atom when adjusting the
stack around callee-cleanup calls. This is the only intended
functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240044 91177308-0d34-0410-b5e6-96231b3b80d8
While the hash functions are subtly different, it shouldn't have an
impact. Instructions are checked with isIdenticalTo later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240040 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently intrinsics don't affect the creation of the call graph.
This is not accurate with respect to statepoint and patchpoint
intrinsics -- these do call (or invoke) LLVM level functions.
This change fixes this inconsistency by adding a call to the external
node for call sites that call these non-leaf intrinsics. This, coupled
with the fact that these intrinsics also escape the function pointer
they call, gives us a conservatively correct call graph.
Reviewers: reames, chandlerc, atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10526
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240039 91177308-0d34-0410-b5e6-96231b3b80d8
They had been getting emitted as a section + offset reference, which
is bogus since the value needs to be the offset within the GOT, not
the actual address of the symbol's object.
Differential Revision: http://reviews.llvm.org/D10441
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240020 91177308-0d34-0410-b5e6-96231b3b80d8
- zext the value to the alloc size first, then check if the value repeats
with the zero padding included. If so, we can still emit a .space.
- Do the checking with APInt::isSplat(8), which handles non-pow2 types (see
the sketch after this list).
- Also handle large constants (bit width > 64).
- In a ConstantArray all elements have the same type, so it's sufficient
to check the first constant recursively and then just check whether all
following constants are the same by pointer comparison.
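A minimal sketch of that splat check (illustrative only, not the AsmPrinter
code; it assumes an LLVM build to compile against):

  #include "llvm/ADT/APInt.h"

  bool canUseSpace(const llvm::APInt &V) {
    // A value like 0x01010101 repeats every 8 bits, so the zero-extended
    // constant could still be emitted as a .space directive.
    return V.getBitWidth() % 8 == 0 && V.isSplat(8);
  }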
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239977 91177308-0d34-0410-b5e6-96231b3b80d8
Added explicit sign extension for v4i16/v8i16 to v4i32/v8i32 before conversion to floats. Matches existing support for v4i8/v8i8.
Follow up to D10433
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239966 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is done by first adding two additional instructions that convert the
address returned by the alloca to the local address space and then back to
generic. All uses of the alloca instruction are then replaced with the
converted generic address. Then we can rely on the
NVPTXFavorNonGenericAddrSpace pass to combine the generic addrspacecast and
the corresponding Load, Store, Bitcast, or GEP instruction.
Patched by Xuetian Weng (xweng@google.com).
Test Plan: test/CodeGen/NVPTX/lower-alloca.ll
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: meheff, broune, eliben, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10483
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239964 91177308-0d34-0410-b5e6-96231b3b80d8
MCFragment didn't really need vtables. The majority of virtual methods were just getters and setters.
This removes the vtables and uses dispatch on the kind for things like delete,
which needs to cast to the appropriate class.
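A standalone sketch of the kind-based delete (the kinds and classes below are
made up; the real MCFragment hierarchy differs):

  struct FragmentBase {
    enum Kind { Data, Fill } K;
    explicit FragmentBase(Kind K) : K(K) {}
  };

  struct DataFragment : FragmentBase {
    DataFragment() : FragmentBase(Data) {}
  };

  struct FillFragment : FragmentBase {
    FillFragment() : FragmentBase(Fill) {}
  };

  void destroy(FragmentBase *F) {
    switch (F->K) {
    case FragmentBase::Data: delete static_cast<DataFragment *>(F); break;
    case FragmentBase::Fill: delete static_cast<FillFragment *>(F); break;
    }
  }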
This reduces memory on the verify use list order test case by about 2MB out of 800MB.
Reviewed by Rafael Espíndola
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239952 91177308-0d34-0410-b5e6-96231b3b80d8
There is a one-to-one relationship between X86Subtarget and
X86FrameLowering, but every frame lowering method would previously pull
the subtarget off the MachineFunction and query some subtarget
properties.
Over time, these locals began to grow in complexity and it became
important to keep their names and meaning in sync across all of the
frame lowering methods, leading to duplication. We can eliminate that
duplication by computing them once in the constructor.
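The pattern looks roughly like this (a made-up sketch, not the actual
X86FrameLowering code):

  struct SubtargetInfo {
    bool is64Bit() const { return true; }
    unsigned getSlotSize() const { return 8; }
  };

  class FrameLowering {
    bool Is64Bit;      // computed once here...
    unsigned SlotSize; // ...instead of re-derived in every method.

  public:
    explicit FrameLowering(const SubtargetInfo &STI)
        : Is64Bit(STI.is64Bit()), SlotSize(STI.getSlotSize()) {}

    // Every method can now use the cached values directly.
    unsigned callFrameSize(unsigned NumArgSlots) const {
      return NumArgSlots * SlotSize + (Is64Bit ? 8 : 4);
    }
  };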
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239948 91177308-0d34-0410-b5e6-96231b3b80d8
The personality routine currently lives in the LandingPadInst.
This isn't desirable because:
- All LandingPadInsts in the same function must have the same
personality routine. This means that each LandingPadInst beyond the
first has an operand which produces no additional information.
- There is ongoing work to introduce EH IR constructs other than
LandingPadInst. Moving the personality routine off of any one
particular Instruction and onto the parent function seems a lot better
than having N different places a personality function can sneak onto an
exceptional function.
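A rough sketch of the new placement using the C++ API (the helper below is
hypothetical; it assumes the Function-level setPersonalityFn accessor):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Attach one personality routine to every defined function in a module,
  // instead of storing it on each individual LandingPadInst.
  void attachPersonality(Module &M, Function *Personality) {
    for (Function &F : M)
      if (!F.isDeclaration())
        F.setPersonalityFn(Personality);
  }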
Differential Revision: http://reviews.llvm.org/D10429
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239940 91177308-0d34-0410-b5e6-96231b3b80d8
It's been used before to avoid infinite loops caused by separate CGP
optimizations undoing one another. We found one more such issue
caused by r238054. To avoid it, generalize the "InsertedTruncs"
set to any inst, and use it to avoid touching those again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239938 91177308-0d34-0410-b5e6-96231b3b80d8
The restriction on unnamed aliases was removed in r239921. Mostly reverts
r239590, but we keep the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239923 91177308-0d34-0410-b5e6-96231b3b80d8
If globals can be unnamed, there is no reason for aliases to be different.
The restriction was there since the original implementation in r36435. I
can only guess it was there because of the old bison parser for the old
alias syntax.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239921 91177308-0d34-0410-b5e6-96231b3b80d8