Commit Graph

12 Commits

Author SHA1 Message Date
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Benjamin Kramer
50dd09bd85 Try to unbreak the arm buildbot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122999 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 11:35:21 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even on targets where unaligned moves aren't fast (as they are on corei7).
The theory is that it's still faster than a pair of movq / a quad of movl
(illustrated after this entry). This will probably hurt older chips like P4 but
should run faster on current and future Intel processors. rdar://8817010


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
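A minimal sketch of the copy this lowering affects (an illustration under the commit's assumptions, not code from the commit; copy16 is a placeholder name):

  #include <string.h>

  /* Copying 16 bytes: per r122955 this could lower to a single
     movups load/store pair instead of two movq pairs (or four
     movl pairs on x86-32). */
  void copy16(char *dst, const char *src) {
    memcpy(dst, src, 16);
  }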
Benjamin Kramer
80220369b0 Try to reuse the value when lowering memset.
This allows us to compile:
  void test(char *s, int a) {
    __builtin_memset(s, a, 15);
  }
into 1 mul + 3 stores instead of 3 muls + 3 stores (see the sketch after this entry).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122710 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-02 19:57:05 +00:00
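Roughly what the reuse buys, as a hand-written illustration (assuming 32-bit stores; the actual lowering and store count may differ): compute the splat once and feed every store from it.

  #include <stdint.h>
  #include <string.h>

  void memset15(char *s, int a) {
    uint32_t splat = (uint8_t)a * 0x01010101u;  /* the single multiply */
    memcpy(s + 0,  &splat, 4);
    memcpy(s + 4,  &splat, 4);
    memcpy(s + 8,  &splat, 4);
    memcpy(s + 11, &splat, 4);                  /* overlaps to cover bytes 11..14 */
  }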
Benjamin Kramer
8c06aa1c59 Lower the i8 extension in memset to a multiply instead of a potentially long series of shifts and ors.
We could implement a DAGCombine to turn x * 0x0101 back into logic operations
on targets that don't support the multiply, or where it is slow (P4), if someone
cares enough. (The identity is checked in the sketch after this entry.)

Example code:
  void test(char *s, int a) {
      __builtin_memset(s, a, 4);
  }
before:
  _test:                                  ## @test
    movzbl  8(%esp), %eax
    movl  %eax, %ecx
    shll  $8, %ecx
    orl %eax, %ecx
    movl  %ecx, %eax
    shll  $16, %eax
    orl %ecx, %eax
    movl  4(%esp), %ecx
    movl  %eax, 4(%ecx)
    movl  %eax, (%ecx)
    ret
after:
  _test:                                  ## @test
    movzbl  8(%esp), %eax
    imull $16843009, %eax, %eax   ## imm = 0x1010101
    movl  4(%esp), %ecx
    movl  %eax, 4(%ecx)
    movl  %eax, (%ecx)
    ret


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122707 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-02 19:44:58 +00:00
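The identity behind the transform: multiplying a zero-extended byte by 0x01010101 = 1 + 2^8 + 2^16 + 2^24 replicates it into all four byte lanes, which is exactly what the shll/orl chain computed. A quick self-contained check (an illustration, not part of the commit):

  #include <assert.h>
  #include <stdint.h>

  int main(void) {
    for (uint32_t x = 0; x < 256; ++x) {
      uint32_t by_mul   = x * 0x01010101u;  /* the single imull */
      uint32_t by_logic = x | (x << 8);     /* the old shift/or chain */
      by_logic |= by_logic << 16;
      assert(by_mul == by_logic);           /* identical for every byte value */
    }
    return 0;
  }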
Chris Lattner
1eb1b68e3a Fix an inconsistency in the x86 backend that led it to reject "calll foo" on
x86-32: 32-bit calls were named "call" not "calll".  64-bit calls were correctly
named "callq", so this only impacted x86-32.

This fixes rdar://8456370 - llvm-mc rejects 'calll'

This also exposes that mingw/64 is generating a 32-bit call instead of a 64-bit
call; I will file a bugzilla report. (An example of the now-accepted form follows this entry.)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114534 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-22 05:49:14 +00:00
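An example of the now-accepted spelling (assumes an x86-32 target; the symbol names here are placeholders):

  void callee(void) {}

  __asm__(
    ".globl do_call\n"
    "do_call:\n"
    "  calll callee\n"  /* explicit 32-bit suffix; previously rejected */
    "  ret\n"
  );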
Evan Cheng
f28f8bc40e Correctly lower memset / memcpy of undef. It should be a nop. PR6767.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100208 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-02 19:36:14 +00:00
Evan Cheng
94107ba9ce - Avoid using floating point stores to implement memset unless the value is zero (see the sketch after this entry).
- Do not try to infer GV alignment unless its type is sized. It's not possible to infer alignment if it has an opaque type.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100118 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 18:19:11 +00:00
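Why zero is the exception (a reading of the first point, with made-up function names): the all-zeros pattern is also floating-point +0.0, so it can be materialized with one xorps and stored wide, while any other fill byte would first need an integer splat.

  #include <string.h>

  void zero_block(char *p) {
    memset(p, 0, 64);     /* all-zero value: FP/vector stores are profitable */
  }

  void fill_block(char *p) {
    memset(p, 0xAA, 64);  /* non-zero value: lowered with integer stores */
  }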
Evan Cheng
255f20f7f7 Fix sdisel memcpy, memset, memmove lowering:
1. Makes it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless it's fast.
3. Fix a memcpy lowering logic bug related to when to optimize a
   load from a constant string into a constant (example after this entry).
4. Adjust the x86 memcpy lowering threshold to make it more sane.
5. Fix the x86 target hook so it uses vector and floating point memory
   ops more effectively.
rdar://7774704


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 06:04:33 +00:00
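For point 3, the shape of the case being fixed (a hypothetical example; the exact instructions depend on target and thresholds): when the source of a memcpy is a constant string, the loads can be folded into immediate stores.

  #include <string.h>

  void copy_literal(char *dst) {
    /* Source bytes are known at compile time, so the lowering can
       emit roughly movl $0x00636261, (%rdi) on little-endian x86-64
       instead of loading from the string constant. */
    memcpy(dst, "abc", 4);
  }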
Dan Gohman
36a0947820 Eliminate more uses of llvm-as and llvm-dis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
2009-09-08 23:54:48 +00:00
Daniel Dunbar
3c2d4bf97f Pass target triple string in to TargetMachine constructor.
This is not just a matter of passing in the target triple from the module;
currently backends are making decisions based on the build and host
architecture. The goal is to migrate to making these decisions based on the
triple (in conjunction with the feature string). Thus most clients pass in the
target triple, or the host triple if that is empty.

This brings one important change in the behavior of the JIT and llc.

For the JIT, it was previously selecting the Target based on the host
(naturally), but it was setting the target machine features based on the triple
from the module. Now it is setting the target machine features based on the
triple of the host.

For llc, -march was previously used only to select the target; the target
machine features were initialized from the module's triple (which may have been
empty). Now the target triple is taken from the module, or the host's triple is
used if that is empty. Then the triple is adjusted to match -march.

The takeaway is that -march for llc is now used in conjunction with the host
triple to initialize the subtarget. If users want more deterministic behavior
from llc, they should use -mtriple or set the triple in the input module (a
sketch follows this entry).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77946 91177308-0d34-0410-b5e6-96231b3b80d8
2009-08-03 04:03:51 +00:00
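The deterministic-behavior advice in practice, sketched with the LLVM-C API (the triple string is an arbitrary example): pin the triple on the module so neither llc nor the JIT falls back to the host's.

  #include <llvm-c/Core.h>

  /* Set the module's target triple explicitly so subtarget features
     are derived from it rather than from the host. */
  void pin_triple(LLVMModuleRef mod) {
    LLVMSetTarget(mod, "i386-unknown-linux-gnu");
  }

On the command line, the equivalent is passing -mtriple to llc.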
Evan Cheng
1887c1c2f9 Fix a number of byval / memcpy / memset related codegen issues.
1. x86-64 byval alignment should be the max of 8 and the alignment of the type. Previously the code was not doing what the commit message said.
2. Do not use byte repeat move and store operations. These are slow (see the sketch after this entry).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55139 91177308-0d34-0410-b5e6-96231b3b80d8
2008-08-21 21:00:15 +00:00
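A sketch of the byval shape both points concern (the names are made up; this only shows the call pattern): passing a large aggregate by value makes the call site emit an implicit copy into the argument area, and point 2 keeps that copy away from the slow rep-prefixed byte operations.

  struct big { char buf[64]; };

  void callee(struct big b) { (void)b.buf[0]; }  /* aggregate taken by value */

  void caller(struct big *p) {
    /* The call copies *p into the outgoing argument area (byval on
       x86-64); that implicit memcpy should use wide moves, not
       byte-repeat (rep movsb) operations. */
    callee(*p);
  }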