On targets that provide a rotate, a sequence of
(XOR (SHL 1, x), -1) can be replaced with (ROTL ~1, x). This saves an
instruction on architectures like X86 and POWER(64).
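For illustration, a minimal IR function (hypothetical, not taken from the
patch) that exposes the pattern; computing ~(1 << %x) can now lower to a
rotate of the constant ~1:

define i32 @not_shl_one(i32 %x) {
  %shl = shl i32 1, %x    ; (SHL 1, x)
  %not = xor i32 %shl, -1 ; (XOR ..., -1), i.e. ~(1 << %x)
  ret i32 %not            ; may lower to (ROTL ~1, x) on X86/POWER
}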
Differential Revision: http://reviews.llvm.org/D8350
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@232572 91177308-0d34-0410-b5e6-96231b3b80d8
This happens e.g. with a <2 x i64> all-ones constant on x86_32. It cannot be
generated directly because i64 is not a legal type there. It would be nice if
getNOT handled this transparently, but I don't see a way to generate a legal
constant there right now. Fixes PR17487.
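A reduced sketch of the kind of input involved (hypothetical function name):
a vector NOT whose all-ones constant has element type i64, which is not legal
on x86_32:

define <2 x i64> @vnot(<2 x i64> %a) {
  %not = xor <2 x i64> %a, <i64 -1, i64 -1> ; needs the <2 x i64> all-ones constant
  ret <2 x i64> %not
}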
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192795 91177308-0d34-0410-b5e6-96231b3b80d8
This was done with the following sed invocation to catch label lines demarcating function boundaries:
sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
The pattern was written conservatively, preferring false negatives (missed lines) over false positives (bad rewrites). I scanned through all the changes and everything looks correct.
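For illustration (hypothetical test lines), the invocation rewrites a line
such as

; CHECK: test_foo:

into

; CHECK-LABEL: test_foo:

so FileCheck treats the label as a boundary and keeps the following checks
within that function's output.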
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186258 91177308-0d34-0410-b5e6-96231b3b80d8
Fold (xor (and x, y), y) -> (and (not x), y)
When y is a constant, this avoids materializing the same constant twice.
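A sketch with a constant for y (hypothetical function), assuming the combine
fires on this shape:

define i32 @fold(i32 %x) {
  %and = and i32 %x, 255
  %xor = xor i32 %and, 255 ; 255 appears twice in this form
  ret i32 %xor             ; folded: (and (not %x), 255) uses 255 once
}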
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181395 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it possible to speed up def_iterator by stopping at the first
use, which makes def_empty() and getUniqueVRegDef() much faster when there
are many uses.
In a +Asserts build, LiveVariables is 100x faster in one case because
getVRegDef() has an assertion that would scan to the end of a
def_iterator chain.
Spill weight calculation is significantly faster (300x in one case)
because isTriviallyReMaterializable() calls MRI->isConstantPhysReg(%RIP),
which in turn calls def_empty(%RIP).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161634 91177308-0d34-0410-b5e6-96231b3b80d8
The xorps instruction is smaller than pxor, so prefer that encoding.
The ExecutionDepsFix pass will switch the encoding to pxor and xorpd
when appropriate.
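As a hedged example (hypothetical function): materializing a zero vector is a
common case, typically lowered to a self-xor, for which the shorter xorps
encoding is now preferred:

define <4 x float> @zero() {
  ret <4 x float> zeroinitializer ; usually becomes xorps %xmm0, %xmm0
}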
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@143996 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine does this, but apparently there are situations where this pattern escapes the optimizer and/or is created by isel. Here is a case seen in JavaScriptCore:
%t1 = sub i32 0, %a
%t2 = add i32 %t1, -1
The DAG combiner pattern ((c1-A)+c2) -> (c1+c2)-A
will fold it to -1 - %a.
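To make the snippet self-contained (hypothetical wrapper function), and
noting that -1 - %a is simply a bitwise NOT of %a:

define i32 @neg_then_dec(i32 %a) {
  %t1 = sub i32 0, %a   ; (0 - A)
  %t2 = add i32 %t1, -1 ; ((0 - A) + -1) -> (0 + -1) - A = -1 - %a
  ret i32 %t2
}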
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93773 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for matching and remembering a string and then matching and
verifying that the string occurs later in the file.
Change X86/xor.ll to use this in some cases where the test was
checking for an arbitrary register allocation decision.
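A sketch of the kind of check this enables (hypothetical instructions,
assuming the [[VAR:regex]] capture syntax): the first line remembers
whichever register the allocator picked, and the second line verifies that
the same register is used again:

; CHECK: notl [[REG:%[a-z]+]]
; CHECK: movl [[REG]], %eax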
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82891 91177308-0d34-0410-b5e6-96231b3b80d8
Change FileCheck to compile each CHECK line into a single
regex and match it, instead of trying to match chunks at a time.
Matching chunks at a time broke with check lines like
CHECK: foo {{.*}}bar
because the .* would eat the entire rest of the line and bar would
never match.
Now we just escape the fixed strings for the user, so that something
like:
CHECK: a() {{.*}}???
is matched as:
CHECK: {{a\(\) .*\?\?\?}}
transparently "under the covers".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82779 91177308-0d34-0410-b5e6-96231b3b80d8