We are now able to handle big-endian Mach-O files in llvm-readobj. Thanks to
David Fang for providing the object files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179440 91177308-0d34-0410-b5e6-96231b3b80d8
According to the ARM reference manual, constant offsets are mandatory for pre-indexed addressing modes.
The MC disassembler was not obeying this when the offset is 0.
It was producing instructions like: str r0, [r1]!.
The correct syntax is: str r0, [r1, #0]!.
This change modifies the dumping of operands so that the offset is always printed, regardless of its value, when pre-indexed addressing mode is used.
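As a rough sketch of the rule (a hypothetical standalone helper, not the actual
ARM instruction printer code):

#include <string>

// Hypothetical helper illustrating the rule above; the real change lives in
// the ARM operand-printing code, not in a free function like this.
static std::string formatPreIndexed(const std::string &BaseReg, int Imm) {
  // Always emit "#<imm>" for pre-indexed forms, even when Imm == 0, so the
  // printed text round-trips through the assembler.
  return "[" + BaseReg + ", #" + std::to_string(Imm) + "]!";
}
// formatPreIndexed("r1", 0) yields "[r1, #0]!", never "[r1]!".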
Patch by Mihail Popa <Mihail.Popa@arm.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179398 91177308-0d34-0410-b5e6-96231b3b80d8
These tests rely specifically on the names of ELF relocations, to say nothing of
any other detail. There's no way they'd work if LLVM were emitting something else
by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179376 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out some platforms (e.g. Windows) lay out their llvm-mc output slightly
differently, with extra newlines; there was no real reason for the test lines to
be consecutive, so this relaxes the FileCheck patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179375 91177308-0d34-0410-b5e6-96231b3b80d8
This test ensures that relocation type names returned by libObject match
the raw relocation type value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179360 91177308-0d34-0410-b5e6-96231b3b80d8
When debugging performance regressions we often ask ourselves if the regression
that we see is due to poor isel/sched/ra or due to some micro-architectural
problem. When comparing two code sequences, one good way to rule out front-end
bottlenecks (and other issues) is to force code alignment. This patch adds
a flag that forces the alignment of all of the basic blocks in the program.
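The flag is wired into the existing block layout code; purely to illustrate the
effect, a hypothetical standalone pass doing the same thing might look like the
sketch below (assuming the old setAlignment(unsigned) interface that takes a
log2 alignment; the flag name is made up):

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// Hypothetical flag name; the in-tree option is spelled differently.
static cl::opt<unsigned> ForceBlockAlign(
    "force-block-align", cl::init(0),
    cl::desc("Force this log2 alignment on every machine basic block"));

namespace {
struct ForceBBAlignment : MachineFunctionPass {
  static char ID;
  ForceBBAlignment() : MachineFunctionPass(ID) {}

  bool runOnMachineFunction(MachineFunction &MF) override {
    if (!ForceBlockAlign)
      return false;
    // setAlignment here takes a log2 value, i.e. every block gets padded
    // out to 2^ForceBlockAlign bytes.
    for (MachineBasicBlock &MBB : MF)
      MBB.setAlignment(ForceBlockAlign);
    return true;
  }
};
}
char ForceBBAlignment::ID = 0;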
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179353 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
Print more information about relocations.
With this patch llvm-readobj now prints if a relocation is pcrel, its length,
if it is extern and if it is scattered.
It also refactors the code a bit to use bit fields instead of shifts and
masks all over the place.
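To illustrate the direction of the refactor (the struct below mirrors the
classic Mach-O relocation_info layout, but it is not the llvm-readobj code
itself):

#include <cstdint>

// Before: pulling fields out of the raw word with shifts and masks
// (bit positions shown for a little-endian host).
static unsigned getPCRel(uint32_t Word)  { return (Word >> 24) & 0x1; }
static unsigned getLength(uint32_t Word) { return (Word >> 25) & 0x3; }

// After: a bit-field struct names each field once and lets the compiler do
// the extraction. Field widths follow Mach-O's relocation_info; bit-field
// layout is implementation-defined, so this is illustrative only.
struct RelocInfo {
  uint32_t SymbolNum : 24; // symbol table index or section ordinal
  uint32_t PCRel     : 1;  // relocation is PC-relative
  uint32_t Length    : 2;  // log2 of the width being relocated
  uint32_t Extern    : 1;  // SymbolNum is a symbol index, not a section
  uint32_t Type      : 4;  // machine-specific relocation type
};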
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179345 91177308-0d34-0410-b5e6-96231b3b80d8
Added PathAliases to check whether two struct-path tags can alias.
Added the command-line option -struct-path-tbaa.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179337 91177308-0d34-0410-b5e6-96231b3b80d8
With this patch llvm-readobj now prints if a relocation is pcrel, its length,
if it is extern and if it is scattered.
It also refactors the code a bit to use bit fields instead of shifts and
masks all over the place.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179294 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to collapse sequences of insertelement/extractelement
instructions into single shuffle instructions, there is one specific
case where the Instruction Combiner wrongly updates the resulting
Mask of shuffle indexes.
The problem is in the function CollectShuffleElements.
If we have a sequence of insert/extract element instructions
like the one below:
%tmp1 = extractelement <4 x float> %LHS, i32 0
%tmp2 = insertelement <4 x float> %RHS, float %tmp1, i32 1
%tmp3 = extractelement <4 x float> %RHS, i32 2
%tmp4 = insertelement <4 x float> %tmp2, float %tmp3, i32 3
Where:
- %RHS will have a mask of [4,5,6,7]
- %LHS will have a mask of [0,1,2,3]
The Mask of shuffle indexes is wrongly computed to [4,1,6,7]
instead of [4,0,6,7].
When analyzing %tmp2 in order to compute the Mask for the
resulting shuffle instruction, the algorithm forgets to update
the mask index at position 1 with the index associated with the
element extracted from %LHS by instruction %tmp1.
Patch by Andrea DiBiagio!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179291 91177308-0d34-0410-b5e6-96231b3b80d8
As packed comparisons in AVX/SSE produce all 0s or all 1s in each SIMD lane,
a vector select can be simplified to AND/OR, or removed entirely, if one or both
of the values being selected are all 0s or all 1s.
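A per-lane model of why the fold is legal (plain C++ showing the algebra, not
the actual combine code):

#include <cstdint>

// One SIMD lane of a vector select: the compare mask is either all zeros or
// all ones, so the blend reduces to bitwise arithmetic.
static uint32_t selectLane(uint32_t Mask, uint32_t A, uint32_t B) {
  return (Mask & A) | (~Mask & B);
}
// If B is all 0s:  selectLane(M, A, 0)   == (M & A)  -> a single AND.
// If A is all 1s:  selectLane(M, ~0u, B) == (M | B)  -> a single OR.
// If A is all 1s and B all 0s, the result is just M and the select vanishes.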
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179267 91177308-0d34-0410-b5e6-96231b3b80d8
As these two instructions in the AVX extension are privileged, special-purpose
instructions, they are only expected to be used in inline assembly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179266 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is revised based on a patch from Victor Umansky
<victor.umansky@intel.com>. More cases are handled in X86's bool
simplification, i.e.:
- SETCC_CARRY
- a value truncated to i1 with AND
As a by-product, PR5443 is also fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179265 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for the COFF relocation types IMAGE_REL_I386_DIR32NB and
IMAGE_REL_AMD64_ADDR32NB for 32- and 64-bit respectively. These are
similar to normal 4-byte relocations except that they do not include
the base address of the image.
Image-relative relocations are used for debug information (32-bit) and
SEH unwind tables (64-bit).
A new MCSymbolRef variant called 'VK_COFF_IMGREL32' is introduced to
specify such relocations. For AT&T assembly, this variant can be accessed
using the symbol suffix '@imgrel'.
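A rough sketch of emitting such a fixup from C++, assuming the
MCSymbolRefExpr::Create and MCStreamer::EmitValue interfaces of that era (in
assembly the same reference would be written with the @imgrel suffix):

#include "llvm/MC/MCContext.h"
#include "llvm/MC/MCExpr.h"
#include "llvm/MC/MCStreamer.h"
using namespace llvm;

// Emit a 4-byte image-relative reference to Sym; the object writer turns this
// into IMAGE_REL_I386_DIR32NB or IMAGE_REL_AMD64_ADDR32NB as appropriate.
static void emitImgRel32(MCStreamer &OS, MCContext &Ctx, const MCSymbol *Sym) {
  const MCExpr *E =
      MCSymbolRefExpr::Create(Sym, MCSymbolRefExpr::VK_COFF_IMGREL32, Ctx);
  OS.EmitValue(E, /*Size=*/4);
}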
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179240 91177308-0d34-0410-b5e6-96231b3b80d8
In the simple and triangle if-conversion cases, when CopyAndPredicateBlock is
used because the to-be-predicated block has other predecessors, we need to
explicitly remove the old copied block from the successors list. Normally
if-conversion relies on TII->AnalyzeBranch combined with BB->CorrectExtraCFGEdges
to clean up the successors list, but if the predicated block contained an
un-analyzable branch (such as a now-predicated return), then this will fail.
These extra successors were causing a problem on PPC because they caused
later passes (such as PPCEarlyReturn) to leave dead return-only basic blocks in
the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179227 91177308-0d34-0410-b5e6-96231b3b80d8
Reverted temporarily while we work on plumbing through some changes to continue
supporting gdb on Darwin.
This reverts commit r179122.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179222 91177308-0d34-0410-b5e6-96231b3b80d8
Disable the following tests for Hexagon, as they require direct object
generation support:
DebugInfo/dwarf-public-names.ll
DebugInfo/dwarf-version.ll
DebugInfo/member-pointers.ll
DebugInfo/namespace.ll
DebugInfo/two-cus-from-same-file.ll
Fixes bug 15616 - http://llvm.org/bugs/show_bug.cgi?id=15616
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179209 91177308-0d34-0410-b5e6-96231b3b80d8
Compile Mips32 code as Mips16 unless it can't be compiled as Mips16. For now
this happens as long as floating point instructions are not needed.
It would probably also make sense to compile as Mips32 if atomic operations
are needed too. There may be other cases as well.
A module pass prescans the IR and adds the mips16 or nomips16 attribute
to functions depending on each function's needs.
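A hypothetical sketch of that prescan (the real pass and its floating point
check are more involved than this):

#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/Pass.h"
using namespace llvm;

namespace {
struct Mips16Prescan : ModulePass {
  static char ID;
  Mips16Prescan() : ModulePass(ID) {}

  // Oversimplified: a real check must also look at operands, arguments,
  // return types and calls, not just instruction result types.
  static bool usesFloatingPoint(const Function &F) {
    for (const BasicBlock &BB : F)
      for (const Instruction &I : BB)
        if (I.getType()->isFloatingPointTy())
          return true;
    return false;
  }

  bool runOnModule(Module &M) override {
    for (Function &F : M) {
      if (F.isDeclaration())
        continue; // only tag functions we actually compile
      F.addFnAttr(usesFloatingPoint(F) ? "nomips16" : "mips16");
    }
    return true;
  }
};
}
char Mips16Prescan::ID = 0;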
Mips16 mode can result in 40% code compression by utilizing 16-bit
encodings of many instructions.
The hope is for this to replace the traditional gcc way of dealing with
Mips16 code that uses floating point, which essentially involves using soft
float but with a library implemented using Mips32 floating point. This gcc
method also requires creating stubs so that Mips32 code can interact with
these Mips16 functions that have floating point needs. My conjecture is
that in reality this traditional gcc method would never win over this
new method.
I will be implementing the traditional gcc method also. Some of it is already
done, but I needed to do the stubs to finish the work, and those required
this Mips16/32 mixed-mode capability.
I have more ideas to make this new method much better, and I think the old
method will just live in LLVM for anyone that needs the backward compatibility,
though I don't know for what reason that would be needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179185 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
I did a local comparison between using bash and using lit's runner, and
more of the suite passes with lit than passes with bash. Most of the
bash failures have to do with /dev/null, which is nonsensical on
Windows, but the lit runner handles it.
The lit shell runner is also much faster than bash, so I would expect
most Windows devs to want it by default.
The behavior can be overridden on any OS by setting
LIT_USE_INTERNAL_SHELL to 0 or 1 in the environment.
Reviewers: chapuni, ddunbar
CC: llvm-commits, timurrrr
Differential Revision: http://llvm-reviews.chandlerc.com/D559
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179173 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions aren't universally available, but depend on a specific
extension to the normal ARM architecture (rather than, say, v6/v7/...), so a new
feature is appropriate.
This also enables the feature by default on A-class cores, which usually have
these extensions, to avoid breaking existing code and to act as a sensible
default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179171 91177308-0d34-0410-b5e6-96231b3b80d8