I introduced it in r166785. PR14291.
If TD is unavailable, use getScalarSizeInBits, but don't optimize
pointers or vectors of pointers, since their width is unknown without TD.
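A minimal sketch of the fallback, assuming a free-standing helper (the
function itself is hypothetical and headers are elided; getTypeSizeInBits,
getScalarType, isPointerTy and getScalarSizeInBits are LLVM's real API):

  unsigned getKnownBitWidth(Type *Ty, const DataLayout *TD) {
    if (TD)
      return TD->getTypeSizeInBits(Ty);
    // Without TD, pointer widths are unknown, and getScalarSizeInBits()
    // returns 0 for pointers and vectors of pointers.
    if (Ty->getScalarType()->isPointerTy())
      return 0; // callers treat 0 as "don't optimize"
    return Ty->getScalarSizeInBits();
  }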
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170586 91177308-0d34-0410-b5e6-96231b3b80d8
SelectionDAG normalization rewrites
((x & 0xff00) >> 8) << 2
to
(x >> 6) & 0x3fc
This is general goodness since it folds a left shift into the mask. However,
the trailing zeros in the mask prevent the ARM backend from using the bit
extraction instructions, and worse, materializing the mask may require an
additional instruction. This comes up fairly frequently when the result of
the bit twiddling is used as a memory address, e.g.
= ptr[(x & 0xFF0000) >> 16]
We want to generate:
ubfx r3, r1, #16, #8
ldr.w r3, [r0, r3, lsl #2]
vs.
mov.w r9, #1020
and.w r2, r9, r1, lsr #14
ldr r2, [r0, r2]
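A hypothetical C++ reduction of the pattern above (the element scaling by 4
supplies the left shift that normalization folds into the mask):

  // Indexing a table of 32-bit words with an extracted byte field.
  // The implicit *4 element scaling is the <<2 that normalization
  // folds into the mask (0xFF << 2 == 0x3FC), defeating ubfx.
  unsigned lookup(const unsigned *ptr, unsigned x) {
    return ptr[(x & 0xFF0000) >> 16];
  }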
Add a late, ARM-specific isel optimization to
ARMDAGToDAGISel::PreprocessISelDAG(). It folds the left shift into the
'base + offset' address computation and changes the mask to one without
trailing zeros, enabling the use of ubfx.
Note the optimization has to be done late since it's target specific and we
don't want to change the DAG normalization. It's also fairly restrictive,
as shifter operands are not always free: it's only done for left shifts of
1 or 2, which are known to be free on some CPUs and are the most common
shifts in address computation.
This is a slight win for blowfish, rijndael, etc.
rdar://12870177
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170581 91177308-0d34-0410-b5e6-96231b3b80d8
When the lowest set bit of C is greater than V, (x&C) must be greater than V
whenever it is nonzero, so the comparison can be simplified to a test
against zero.
Although this was suggested in Target/X86/README.txt, it benefits any
architecture with a directly testable form of AND.
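A concrete instance (unsigned comparison; the function names are
illustrative only):

  // The lowest set bit of 0xF0 is 0x10, which exceeds 0x0F, so any
  // nonzero (x & 0xF0) is at least 0x10 and the range check collapses
  // to a plain AND-and-test:
  bool before(unsigned x) { return (x & 0xF0) > 0x0F; }
  bool after(unsigned x)  { return (x & 0xF0) != 0;   }
  // before(x) == after(x) for all x.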
Patch by Kevin Schoedel
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170576 91177308-0d34-0410-b5e6-96231b3b80d8
There's probably a better expansion for those nodes than the default for
Altivec, but this is better than crashing. VSELECTs occur in loop vectorizer
output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170551 91177308-0d34-0410-b5e6-96231b3b80d8
I cannot reproduce the failures locally, so I will keep an eye on the ppc
bots. This patch does add the change to the "Disassembly of section" message,
but that is not what was failing on the bots.
Original message:
Add a function to get the segment name of a section.
On MachO, sections also have segment names. When a tool looking at a .o file
prints a segment name, this is what it means. In reality, a .o file has only
one anonymous segment.
This patch adds a MachO-only function to fetch that segment name. I named it
getSectionFinalSegmentName since the main use for the name seems to be
informing the linker which segment this section should go to.
The patch also changes MachOObjectFile::getSectionName to return just the
section name instead of computing SegmentName,SectionName.
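A hedged sketch of the resulting split (signatures assumed from the
description above; the section handle Sec and error handling are elided):

  // For a MachO section such as (__TEXT,__text):
  //   getSectionName()             -> "__text"
  //   getSectionFinalSegmentName() -> "__TEXT"
  StringRef Seg = MachO->getSectionFinalSegmentName(Sec); // MachO-only
  StringRef Name;
  MachO->getSectionName(Sec, Name); // now just the section name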
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170545 91177308-0d34-0410-b5e6-96231b3b80d8
This change adds shadow and origin propagation for unknown intrinsics
by examining the arguments and ModRef behaviour. For now, only 3 classes
of intrinsics are handled:
- those that look like simple SIMD store
- those that look like simple SIMD load
- those that don't have memory effects and look like an
arithmetic/logic/whatever operation on simple types.
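A minimal, standalone sketch of the three shapes (the type and predicate
names are illustrative, not MSan's actual API):

  enum class Kind { SimdStore, SimdLoad, PureOp, Unknown };

  struct Desc {       // distilled ModRef/argument facts about an intrinsic
    bool Writes, Reads, HasPtrArg, SimpleTypes;
  };

  Kind classify(const Desc &D) {
    if (D.Writes && !D.Reads && D.HasPtrArg)
      return Kind::SimdStore;  // also store the argument shadow to memory
    if (D.Reads && !D.Writes && D.HasPtrArg)
      return Kind::SimdLoad;   // load the shadow alongside the data
    if (!D.Reads && !D.Writes && D.SimpleTypes)
      return Kind::PureOp;     // OR the argument shadows together
    return Kind::Unknown;      // fall back to conservative handling
  }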
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170530 91177308-0d34-0410-b5e6-96231b3b80d8
MapVector is a bit heavyweight, but I don't see a simpler way. Also, the
InductionList is unlikely to be large. MapVector iterates in insertion
order, which is deterministic, so this should help 3-stage selfhost
compares (PR14647).
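A hedged sketch of the property being relied on (llvm::MapVector is the
real ADT; the key/value types and printing here are illustrative):

  #include "llvm/ADT/MapVector.h"
  #include <cstdio>

  void dump() {
    llvm::MapVector<int, const char *> List;
    List[3] = "a";
    List[1] = "b";
    // Unlike DenseMap, iteration visits entries in insertion order
    // (3, then 1), independent of hashing and allocation:
    for (auto &KV : List)
      std::printf("%d -> %s\n", KV.first, KV.second);
  }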
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170528 91177308-0d34-0410-b5e6-96231b3b80d8
When an operation's bit width is reduced, a ZEXT is inserted to widen the
reduced-bitwidth op back to the original size. If we reduce ANDs, this can
cause an endless loop. This patch changes the ZEXT to ANY_EXTEND if the
demanded bits are equal to or smaller than the size of the reduced operation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170505 91177308-0d34-0410-b5e6-96231b3b80d8
Add an option to the disassembler to print instructions in the alternate
assembly code variant if one exists.
The intended use for this is so tools like lldb and darwin's otool(1)
can be switched to print Intel-flavored disassembly.
I discussed this API extensively with Jim Grosbach, and we feel that while
it may not be fully general, in reality there is only one syntax for each
assembly language, with the exception of X86, which has exactly two for
historical reasons.
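A hedged usage sketch via the C disassembler API (LLVMCreateDisasm and
LLVMSetDisasmOptions are real llvm-c calls; the option constant is assumed
to be the one this change introduces):

  #include "llvm-c/Disassembler.h"
  #include <cstddef>

  void useAlternateSyntax() {
    // Create an x86-64 disassembler and switch it to the alternate
    // (Intel-flavored) assembly variant, when the target has one.
    LLVMDisasmContextRef DC =
        LLVMCreateDisasm("x86_64-apple-darwin", NULL, 0, NULL, NULL);
    LLVMSetDisasmOptions(DC, LLVMDisassembler_Option_AsmPrinterVariant);
  }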
rdar://10989182
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170477 91177308-0d34-0410-b5e6-96231b3b80d8
The bundle_iterator::operator++ function now doesn't need to dig out the
basic block and check against end(). It can use the isBundledWithSucc()
flag to find the last bundled instruction safely.
Similarly, MachineInstr::isBundled() no longer needs to look at
iterators etc. It only has to look at flags.
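A hedged sketch of the simplified stepping (isBundledWithSucc() is the real
MachineInstr query; the surrounding function is illustrative and headers
are elided):

  // Advance from the first instruction of a bundle to the first
  // instruction of the next one. No end() check is needed inside the
  // loop: the last bundled instruction has the flag cleared.
  MachineBasicBlock::instr_iterator
  nextBundle(MachineBasicBlock::instr_iterator MII) {
    while (MII->isBundledWithSucc())
      ++MII;
    return ++MII;
  }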
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170473 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the bundle-flag-aware APIs are all in place, it is possible to
verify flag consistency continuously.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170465 91177308-0d34-0410-b5e6-96231b3b80d8
The new bidirectional bundle flags are redundant, so inadvertent bundle
tearing can be detected in the machine code verifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170463 91177308-0d34-0410-b5e6-96231b3b80d8
To avoid over-constraining the scheduler for ARM in Thumb mode, some
Thumb-specific code-size optimizations are blocked when they would add a
dependency (such as a write-after-read dependency).
This patch disables that check when code size is the priority, i.e., when
compiling with -Oz.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170462 91177308-0d34-0410-b5e6-96231b3b80d8
The bundle-related MI flags need to be kept in sync with the neighboring
instructions. Don't allow the bulk flag-setting setFlags() function to
change them.
Also don't copy MI flags when cloning an instruction. The clone's bundle
flags will be set when it is explicitly inserted into a bundle.
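A hedged sketch of the masking in setFlags() (BundledPred/BundledSucc follow
MachineInstr's flag names; details may differ):

  void MachineInstr::setFlags(unsigned F) {
    // The bundle-adjacency bits are maintained automatically and must
    // stay in sync with the neighbors; filter them out of bulk writes.
    unsigned Mask = BundledPred | BundledSucc;
    Flags = (Flags & Mask) | (F & ~Mask);
  }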
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170459 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the instr_iterator versions of the splice() functions. It doesn't
seem useful to be able to splice sequences of instructions that don't
consist of full bundles.
The normal splice functions that take MBB::iterator arguments are not
changed, and they can move whole bundles around without any problems.
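For contrast, a hedged sketch of the retained MBB::iterator form (splice is
the standard MachineBasicBlock API; the variables are illustrative):

  // Moves [First, Last) from FromMBB to before InsertPt in ToMBB.
  // Because these are bundle-aware MBB::iterators, any bundle in the
  // range moves as a unit and cannot be torn apart.
  ToMBB.splice(InsertPt, &FromMBB, First, Last);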
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170456 91177308-0d34-0410-b5e6-96231b3b80d8