Commit Graph

4352 Commits

Author SHA1 Message Date
Duncan Sands
bbc7016c60 Transform code like this
  %V = mul i64 %N, 4
  %t = getelementptr i8* bitcast (i32* %arr to i8*), i32 %V
into
  %t1 = getelementptr i32* %arr, i32 %N
  %t = bitcast i32* %t1 to i8*
incorporating the multiplication into the getelementptr.
This happens all the time in dragonegg, for example for
  int foo(int *A, int N) {
    return A[N];
  }
because gcc turns this into byte pointer arithmetic before it hits the plugin:
  D.1590_2 = (long unsigned int) N_1(D);
  D.1591_3 = D.1590_2 * 4;
  D.1592_5 = A_4(D) + D.1591_3;
  D.1589_6 = *D.1592_5;
  return D.1589_6;
The D.1592_5 line is a POINTER_PLUS_EXPR, which is turned into a getelementptr
on a bitcast of A_4 to i8*, so this becomes exactly the kind of IR that the
transform fires on.

An analogous transform (with no testcases!) already existed for bitcasts of
arrays, so I rewrote it to share code with this one.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166474 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-23 08:28:26 +00:00
Nadav Rotem
782090aa02 Don't crash if the load/store pointer is not a GEP.
Fix by Shivarama Rao <Shivarama.Rao@amd.com>
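For illustration only (names are hypothetical), a C loop of the kind where the
memory analysis sees loads whose pointer operands are plain values (an argument,
or another loaded pointer) rather than GEPs:

  /* Hypothetical example: neither load's pointer operand is a GEP. */
  int sum_indirect(int **slot, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
      s += **slot;   /* *slot loads through the argument; **slot loads through a loaded pointer */
    return s;
  }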



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166427 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 18:27:56 +00:00
Argyrios Kyrtzidis
0b06e2331e Revert r166407 because it caused analyzer tests to crash and broke self-host bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166424 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 18:16:14 +00:00
Hal Finkel
e29c19091c BBVectorize should ignore unreachable blocks.
Unreachable blocks can have invalid instructions. For example,
jump threading can produce self-referential instructions in
unreachable blocks. Also, we should not be spending time
optimizing unreachable code. Fixes PR14133.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166423 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 18:00:55 +00:00
Nadav Rotem
565048e78a Vectorizer: optimize the generation of selects. If the condition is uniform, generate a scalar-cond select (i1 as selector).
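A minimal C sketch (illustrative names) of the kind of loop this targets: the
condition does not depend on the induction variable, so one scalar i1 can drive
the whole vector select instead of a per-lane mask.

  void blend(float *out, const float *a, const float *b, int n, int flag) {
    for (int i = 0; i < n; ++i)
      out[i] = flag ? a[i] : b[i];   /* condition is uniform across all lanes */
  }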
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166409 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 04:38:00 +00:00
Nick Lewycky
18b1f4e769 Reapply r166405, teaching tailcallelim to be smarter about nocapture, with a
very small but very important bugfix:
  bool shouldExplore(Use *U) {
    Value *V = U->get();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
    [...]
should have read:
  bool shouldExplore(Use *U) {
    Value *V = U->getUser();
    if (isa<CallInst>(V) || isa<InvokeInst>(V))
Fixes PR14143!



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166407 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 03:03:52 +00:00
NAKAMURA Takumi
d581b9e61f Revert r166405, "Teach TailRecursionElimination to consider 'nocapture' when deciding whether"
It broke self-hosting stage2 on several builders.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166406 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-22 00:48:51 +00:00
Nick Lewycky
241d1398e0 Teach TailRecursionElimination to consider 'nocapture' when deciding whether
calls can be marked tail.
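A rough C illustration (hypothetical names; assumes 'sink' is known not to
capture its argument): the address of a local escapes to another call, but since
it is not captured, the recursive call can still be marked tail.

  void sink(const int *p);   /* assume p is inferred or declared nocapture */

  int count_up(const int *arr, int n, int acc) {
    if (n == 0)
      return acc;
    int tmp = arr[0];
    sink(&tmp);                                   /* tmp's address is passed but not captured */
    return count_up(arr + 1, n - 1, acc + tmp);   /* can still be considered a tail call */
  }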


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166405 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 23:51:22 +00:00
Hal Finkel
3d39fb8a3f DataLayout should use itself when calculating the size of a vector.
This is important for vectors of pointers because only DataLayout,
not the underlying vector type, knows how to calculate the size
of the pointers in the vector. Fixes PR14138.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166401 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 20:38:03 +00:00
Benjamin Kramer
3740e798bc Revert r166390 "LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis."
It passes all tests and produces better results than the old code, but it uses
the wrong pass, LoopDependenceAnalysis, which is old and unmaintained. "Why is it
still in tree?", you might ask. The answer is obviously: "To confuse developers."

Just swapping in the new dependency pass sends the pass manager into an infinite
loop; I'll try to figure out why tomorrow.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166399 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 19:31:16 +00:00
Benjamin Kramer
5c6e9ae14e LoopIdiom: Replace custom dependence analysis with LoopDependenceAnalysis.
Requires a lot less code and complexity on loop-idiom's side, and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481. I'm not entirely
sure that every case the old checks handled is covered, but LDA will
certainly become smarter in the future.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166390 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 15:03:07 +00:00
Nadav Rotem
bb950854ac Fix a bug in the vectorization of wide load/store operations.
We used a SCEV to detect that A[X] is consecutive. We assumed that X was
the induction variable. But X can be any expression that uses the induction
variable, for example X = i + 2.
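Sketched in C (illustrative names), such a loop is still consecutive even though
the index is not the induction variable itself (assumes A and B have at least
n + 2 elements):

  void shift_add(float *A, const float *B, int n) {
    for (int i = 0; i < n; ++i)
      A[i + 2] = B[i + 2] + 1.0f;   /* index is i + 2, yet the accesses are still consecutive */
  }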



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166388 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 06:49:10 +00:00
Nadav Rotem
c847872629 Add support for reduction variables that do not start at zero.
This is important for nested-loop reductions such as the following:

In the innermost loop, the reduction variable does not start at zero:

  for (i = 0 .. n)
    for (j = 0 .. m)
      sum += ...



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166387 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 05:52:51 +00:00
Nadav Rotem
5a418ba5f5 Vectorizer: fix a bug in the classification of induction/reduction phis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166384 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-21 02:38:01 +00:00
Nadav Rotem
ccaccfa8bf Fix an infinite loop in the loop-vectorizer.
PR14134.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166379 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-20 20:45:01 +00:00
Benjamin Kramer
82a1833865 InstCombine: Fix an edge case where constant icmps could sneak into ConstantFoldInstOperands and crash.
Have to refactor the ConstantFolder interface one day to define bugs like this away. Fixes PR14131.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166374 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-20 08:43:52 +00:00
Nadav Rotem
bf8772ed2c Vectorize: teach canVectorizeMemory to distinguish between A[i]+=x and A[B[i]]+=x.
If the pointer is consecutive, then it is safe to read and write. If the pointer is non-loop-consecutive, then
it is unsafe to vectorize because we may hit an ordering issue.
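Sketched in C (illustrative names), the two shapes being distinguished:

  void direct(int *A, int x, int n) {
    for (int i = 0; i < n; ++i)
      A[i] += x;       /* consecutive read-modify-write: safe to vectorize */
  }

  void indexed(int *A, const int *B, int x, int n) {
    for (int i = 0; i < n; ++i)
      A[B[i]] += x;    /* two iterations may update the same element, so
                          vectorizing could reorder the updates */
  }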



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166371 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-20 08:26:33 +00:00
Nadav Rotem
5dbe64e2bc Vectorizer: Add support for loop reductions.
For example:

  for (i=0; i<n; i++)
    sum += A[i] + B[i] + i;



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166351 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 23:05:40 +00:00
Benjamin Kramer
0aae4bd0fc SimplifyLibcalls: The return value of ffsll is always i32, even when the input is zero.
Fixes PR13028.
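For reference, a small C sketch (assumes the glibc declaration of ffsll in
<strings.h>): the argument is 64 bits wide, but the result type is int.

  #define _GNU_SOURCE
  #include <strings.h>

  int lowest_set_bit(long long v) {
    return ffsll(v);   /* int result: 0 for v == 0, otherwise the 1-based index of the lowest set bit */
  }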

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166313 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 20:43:44 +00:00
Benjamin Kramer
7182126b0f Indvars: Don't recursively delete instruction during BB iteration.
This can invalidate the iterators, leading to use-after-free bugs and crashes.
Fixes PR12536.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166291 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 17:53:54 +00:00
Benjamin Kramer
239fd44f7a SCEVExpander: Don't crash when trying to merge two constant phis.
Just constant fold them so they can't cause any trouble. Fixes PR12627.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166286 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 16:37:30 +00:00
Nadav Rotem
89e7b356f2 vectorizer: Add support for reading and writing from the same memory location.
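A minimal C sketch (illustrative names) of such a loop: each iteration both
loads and stores the same element.

  void scale(float *A, int n) {
    for (int i = 0; i < n; ++i)
      A[i] = A[i] * 2.0f;   /* A[i] is read and written in the same iteration */
  }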
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166255 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-19 01:24:18 +00:00
Meador Inge
0c41d57b09 instcombine: Migrate strcpy optimizations
This patch migrates the strcpy optimizations from the simplify-libcalls pass
into the instcombine library call simplifier.  Note also that StrCpyChkOpt
has been updated with a few simplifications that were being done in the
simplify-libcalls version of StrCpyOpt, but not in the migrated implementation
of StrCpyOpt.  There is no reason to overload StrCpyOpt with fortified and
regular simplifications in the new model since there is already a dedicated
simplifier for __strcpy_chk.
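As a rough example of the class of simplification involved (not the exact rules
of the pass), a strcpy from a constant string can become a fixed-size memcpy:

  #include <string.h>

  void greet(char *dst) {
    strcpy(dst, "hi");   /* with a known constant source this can be simplified to
                            memcpy(dst, "hi", 3) -- the length plus the terminating NUL */
  }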

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166198 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-18 18:12:40 +00:00
Nadav Rotem
1953ace81d Vectorizer: Add support for loops with an unknown trip count. For example:
  for (i=0; i<n; i++) {
    a[i] = b[i+1] + c[i+3];
  }



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166165 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-18 05:29:12 +00:00
Nadav Rotem
d15c0c7ac1 Add a loop vectorizer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166112 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-17 18:25:06 +00:00
Chandler Carruth
02bf98ab38 This just in, it is a *bad idea* to use 'udiv' on an offset of
a pointer. A very bad idea. Let's not do that. Fixes PR14105.

Note that this wasn't *that* glaring of an oversight. Originally, these
routines were only called on offsets within an alloca, which are
intrinsically positive. But over the evolution of the pass, they ended
up being called for arbitrary offsets, and things went downhill...
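A small C illustration of why unsigned division goes wrong once offsets can be
negative (values assume 64-bit integers):

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
    int64_t offset = -8;                       /* a negative byte offset from some base pointer */
    uint64_t as_udiv = (uint64_t)offset / 4;   /* 4611686018427387902: a nonsense element index */
    int64_t  as_sdiv = offset / 4;             /* -2: the intended element index */
    printf("%llu vs %lld\n", (unsigned long long)as_udiv, (long long)as_sdiv);
    return 0;
  }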

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166095 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-17 09:23:48 +00:00
Michael Gottesman
4932bbe20c [InstCombine] Teach InstCombine how to handle an obfuscated splat.
An obfuscated splat is one where the frontend generates poor code for a splat,
building it out of several different shuffles, for example:

  %A = load <4 x float>* %in_ptr, align 16
  %B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
  %C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
  %D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166061 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-16 21:29:38 +00:00
Chandler Carruth
d2cd73f6a5 Update the memcpy rewriting to fully support widened int rewriting. This
includes extracting ints for copying elsewhere and inserting ints when
copying into the alloca. This should fix the CanSROA assertion coming
out of Clang's regression test suite.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165931 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-15 10:24:43 +00:00
Chandler Carruth
94fc64c42f Follow-up fix to r165928: handle memset rewriting for widened integers,
and generally clean up the memset handling. It had rotted a bit as the
other rewriting logic got polished more.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165930 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-15 10:24:40 +00:00
Chandler Carruth
81ff90db44 First major step toward addressing PR14059. This teaches SROA to handle
cases where we have partial integer loads and stores to an otherwise
promotable alloca to widen[1] those loads and stores to cover the entire
alloca and bitcast them into the appropriate type such that promotion
can proceed.

These partial loads and stores stem from an annoying confluence of ARM's
calling convention and ABI lowering and the FCA pre-splitting which
takes place in SROA. Clang lowers a { double, double } in-register
function argument as a [4 x i32] function argument to ensure it is
placed into integer 32-bit registers (a really unnerving implicit
contract between Clang and the ARM backend I would add). This results in
a FCA load of [4 x i32]* from the { double, double } alloca, and SROA
decomposes this into a sequence of i32 loads and stores. Inlining
proceeds, code gets folded, but at the end of the day, we still have i32
stores to the low and high halves of a double alloca. Widening these to
be i64 operations, and bitcasting them to double prior to loading or
storing allows promotion to proceed for these allocas.

I looked quite a bit at changing the IR which Clang produces for this case
to be more friendly, but small changes seem unlikely to help. I think
the best representation we could use currently would be to pass 4 i32
arguments thereby avoiding any FCAs, but that would still require this
fix. It seems like it might eventually be nice to somehow encode the ABI
register selection choices outside of the parameter type system so that
the parameter can be a { double, double }, but the CC register
annotations indicate that this should be passed via 4 integer registers.

This patch does not address the second problem in PR14059, which is the
reverse: when a struct alloca is loaded as a *larger* single integer.

This patch also does not address some of the code quality issues with
the FCA-splitting. Those don't actually impede any optimizations really,
but they're on my list to clean up.

[1]: Pedantic footnote: for those concerned about memory model issues
here, this is safe. For the alloca to be promotable, it cannot escape or
have any use of its address that could allow these loads or stores to be
racing. Thus, widening is always safe.
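For concreteness, a C-level sketch of the source pattern described above
(illustrative names): under the ARM convention in question the by-value argument
is lowered as [4 x i32], which is what produces the partial i32 stores into the
double alloca.

  struct pair { double re, im; };

  double magnitude2(struct pair p) {   /* p arrives in four 32-bit integer registers */
    return p.re * p.re + p.im * p.im;
  }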

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165928 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-15 08:40:30 +00:00
Meador Inge
a239c2e6a7 instcombine: Migrate strcmp and strncmp optimizations
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165915 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-15 03:47:37 +00:00
Meador Inge
186f8d90df instcombine: Migrate strchr and strrchr optimizations
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165875 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-13 16:45:37 +00:00
Meador Inge
73d8a5864f instcombine: Migrate strcat and strncat optimizations
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165874 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-13 16:45:32 +00:00
Chandler Carruth
07525a6be6 Teach SROA to cope with wrapper aggregates. These show up a lot in ABI
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.

The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.

This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring while
looking at the code generated for stepanov_abstraction on ARM, and suspect it
happens elsewhere as well.

Note that this doesn't yet address truly heinous IR productions such as the
ones PR14059 is concerned with. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165870 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-13 10:49:33 +00:00
Nick Lewycky
5e01f80bf8 Don't crash when !tbaa.struct contents is invalid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165693 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-11 02:05:23 +00:00
Duncan Sands
ea46827d6c Add the testcase from PR13254 (the old scalarrepl pass handles this wrong;
the new SROA pass handles it right).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165644 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-10 18:41:19 +00:00
Michael Ilseman
440ae6800e New EarlyCSE tests for CSE-ing across commutativity.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165510 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-09 16:58:13 +00:00
Alexey Samsonov
e97a3a4b4f Fix PR14016.
The DeadArgumentElimination pass can replace one LLVM function with another,
invalidating a pointer stored in the debug info metadata entry for that function.
To fix this, we collect debug info descriptors for functions before
running the DeadArgumentElimination pass and "patch" the pointers in the
metadata nodes if we replace a function.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165490 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-09 08:13:15 +00:00
Chandler Carruth
2fdb25b5a9 Fix PR14034, an infloop / heap corruption / crash bug in the new SROA.
Thanks to Benjamin for the raw test case. This one took about 50 times
longer to reduce than to fix. =/

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165476 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-09 01:58:35 +00:00
Micah Villmow
791cfc211a Move TargetData to DataLayout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165403 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-08 16:39:34 +00:00
Chandler Carruth
fca3f4021a Teach the new SROA a new trick. Now we zap any memcpy or memmoves which
are in fact identity operations. We detect these and kill their
partitions so that even splitting is unaffected by them. This is
particularly important because Clang relies on emitting identity memcpy
operations for struct copies, and these fold away to constants very
often after inlining.

Fixes the last big performance FIXME I have on my plate.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165285 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-05 01:29:09 +00:00
Benjamin Kramer
1e21db6e83 SimplifyCFG: Enhance the "remove CFG edge that leads to null pointer dereference" optimization to also handle instructions with multiple uses.
We conservatively only check the first use to avoid walking long use chains.
This catches the common case of having both a load and a store to a pointer
supplied by a PHI node.
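A rough C sketch (illustrative names) of the common case mentioned: the pointer
coming out of the PHI node feeds both a load and a store, so it has more than
one use.

  int touch(int *q, int cond) {
    int *p = cond ? q : 0;   /* PHI of q and null */
    *p += 1;                 /* a load and a store through p */
    return *p;               /* the path where p is null leads to undefined behaviour,
                                so the edge into it can be removed */
  }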

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165232 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 16:11:49 +00:00
Duncan Sands
7508f946bc In my recent change to avoid use of underaligned memory I didn't notice that
cpyDest can be mutated in some cases, which would then cause a crash later if
indeed the memory was underaligned.  This brought down several buildbots, so
I guess the underaligned case is much more common than I thought!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165228 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 13:53:21 +00:00
Duncan Sands
ffcf6dffee The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.
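For illustration (hypothetical names), a function returning a large aggregate by
value: at the IR level the result typically comes back through a hidden sret
pointer, and that pointer must be at least as aligned as the return type.

  struct big { double d[4]; };

  struct big make_big(double x) {   /* usually lowered to take a hidden 'struct big *' sret argument */
    struct big b = { { x, x, x, x } };
    return b;
  }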


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165226 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 13:36:31 +00:00
Chandler Carruth
b2d98c2917 Fix PR13969, a mini-phase-ordering issue with the new SROA pass.
Currently, we re-visit allocas when something changes about the way they
might be *split* to allow better scalarization to take place. However,
we weren't handling the case when the *promotion* is what would change
the behavior of SROA. When an address derived from an alloca is stored
into another alloca, we consider the first to have escaped. If the
second is ever promoted to an SSA value, we will suddenly be able to run
the SROA pass on the first alloca.

This patch adds explicit support for this form of iteration. When we
detect a store of a pointer derived from an alloca, we flag the
underlying alloca for reprocessing after promotion. The logic works hard
to only do this when there is definitely going to be promotion and it
might remove impediments to the analysis of the alloca.

Thanks to Nick for the great test case and Benjamin for some sanity
check review.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165223 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 12:33:50 +00:00
Duncan Sands
f58747517c The memcpy optimizer was happily doing call slot forwarding when the new memory
was less aligned than the old.  In the testcase this results in an overaligned
memset: the memset alignment was correct for the original memory but is too much
for the new memory.  Fix this by either increasing the alignment of the new
memory or bailing out if that isn't possible.  Should fix the gcc-4.7 self-host
buildbot failure.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165220 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 10:54:40 +00:00
Chandler Carruth
aa3cb334af Teach the integer-promotion rewrite strategy to be endianness aware.
Sorry for this being broken so long. =/

As part of this, switch all of the existing tests to be Little Endian,
which is the behavior I was asserting in them anyways! Add in a new
big-endian test that checks the interesting behavior there.

Another part of this is to tighten the rules about when we perform the
full-integer promotion. This logic now rejects cases where the fully
promoted integer has a non-multiple-of-8 bitwidth, or cases where the
loads or stores touch bits which are in the allocated space of the
alloca but are not loaded or stored when accessing the integer. Sadly,
these aren't really observable today as the rest of the pass will
already ensure the invariants hold. However, the latter situation is
likely to become a potential concern in the future.

Thanks to Benjamin and Duncan for early review of this patch. I'm still
looking into whether there are further endianness issues, please let me
know if anyone sees BE failures persisting past this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165219 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-04 10:39:28 +00:00
Jakub Staszak
395c1502a7 Fix PR13967.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165187 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-03 23:59:47 +00:00
Chandler Carruth
322e9ba2cb Fix an issue where we failed to adjust the alignment constraint on
a memcpy to reflect that '0' has a different meaning when applied to
a load or store. Now we correctly use underaligned loads and stores for
the test case added.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165101 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-03 08:26:28 +00:00
Chandler Carruth
f710fb14ee Try to use a better set of abstractions for computing the alignment
necessary during rewriting. As part of this, fix a real think-o here
where we might have left off an alignment specification when the address
is in fact underaligned. I haven't come up with any way to trigger this,
as there is always some other factor that reduces the alignment, but it
certainly might have been an observable bug in some way I can't think
of. This also slightly changes the strategy for placing explicit
alignments on loads and stores to only do so when the alignment does not
match that required by the ABI. This causes a few redundant alignments
to go away from test cases.

I've also added a couple of tests that really push on the alignment that
we end up with on loads and stores. More to come here as I try to fix an
underlying bug I have conjectured and produced test cases for, although
it's not clear if this bug is the one currently hitting dragonegg's
gcc47 bootstrap.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165100 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-03 08:14:02 +00:00