Target Independent Opportunities:

//===---------------------------------------------------------------------===//

FreeBench/mason contains code like this:

static p_type m0u(p_type p) {
  int m[]={0, 8, 1, 2, 16, 5, 13, 7, 14, 9, 3, 4, 11, 12, 15, 10, 17, 6};
  p_type pu;
  pu.a = m[p.a];
  pu.b = m[p.b];
  pu.c = m[p.c];
  return pu;
}

We currently compile this into a memcpy from a static array into 'm', then
a bunch of loads from m.  It would be better to avoid the memcpy and just do
loads from the static array.
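
For comparison, a hand-transformed version that avoids the copy by making the
table static (a sketch; the optimizer should achieve the same effect without
the source change):

static p_type m0u_opt(p_type p) {
  /* 'static const' keeps the table in read-only data, so there is no
     per-call copy into a local and the loads index the array directly. */
  static const int m[] = {0, 8, 1, 2, 16, 5, 13, 7, 14, 9,
                          3, 4, 11, 12, 15, 10, 17, 6};
  p_type pu;
  pu.a = m[p.a];
  pu.b = m[p.b];
  pu.c = m[p.c];
  return pu;
}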

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
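
A sketch of the desired expansion, assuming errno and the overflow protection
that hypot provides can be ignored:

#include <math.h>

/* hypot(x, y) exists to avoid overflow/underflow in x*x + y*y; under
   -ffast-math we don't care, so the call can become a plain sqrt of the
   sum of squares, which the backend can lower via llvm.sqrt: */
double hypot_fast(double x, double y) {
  return sqrt(x * x + y * y);
}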

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret

The problem is that the store's chain operand is not the load of X but rather
a TokenFactor of the loads of X and Y, which prevents the folding.
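
For reference, the output we would like once the fold happens (a sketch, not
verified compiler output; the load and store of X fold into a memory-operand
or):

fn1:
	movl Y, %eax
	shll $3, %eax
	orl %eax, X
	ret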

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that X and Y
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in cases where it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack, but that is a short-term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

Turn this into a signed shift right in instcombine:

int f(unsigned x) {
  return x >> 31 ? -1 : 0;
}
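
The desired result, a sketch: with 32-bit int, an arithmetic shift right by
31 smears the sign bit across the whole word, producing -1 or 0 without a
branch or select.

int f_opt(unsigned x) {
  /* Cast to signed so the shift is arithmetic (sign-extending). */
  return (int)x >> 31;
}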

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25600
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg01492.html

//===---------------------------------------------------------------------===//

On targets where variable 64-bit shifts are expensive, we could LSR this:

for (i = ...; ++i) {
  x = 1ULL << i;
}

into:

long long tmp = 1;
for (i = ...; ++i, tmp += tmp)
  x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
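
In C terms, a sketch (here Phi stands for the address of the byte of P that
holds the sign bit, which depends on endianness; the function names are
illustrative):

/* (setlt (loadi32 P), 0): loads all four bytes just to test the sign. */
int sign_wide(int *P) { return *P < 0; }

/* (setlt (loadi8 Phi), 0): only the sign-carrying byte is needed;
   Phi is P+0 on big-endian targets and P+3 on little-endian ones. */
int sign_narrow(signed char *Phi) { return *Phi < 0; }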

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
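
In source terms, a sketch:

/* Before: three multiplies. */
int pow4(int x) { return x * x * x * x; }

/* After reassociation: two multiplies. */
int pow4_opt(int x) { int t = x * x; return t * t; }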

//===---------------------------------------------------------------------===//

A possibly interesting testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for sizes of 1, 2, 4, and 8 bytes.
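
The reason the 4-byte case folds on big-endian targets: memcmp's
lexicographic byte comparison is exactly an unsigned word comparison when the
most significant byte is stored first. A sketch (the function name and the
-1/0/1 normalization are illustrative; memcmp only guarantees the sign of the
result):

int g_lowered(unsigned *j, unsigned *l) {
  unsigned a = *j, b = *l;       /* single 4-byte loads              */
  return a < b ? -1 : (a > b);   /* same sign behavior as memcmp     */
}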

//===---------------------------------------------------------------------===//

This code:
int rot(unsigned char b) { int a = ((b>>1) ^ (b<<7)) & 0xff; return a; }

Can be improved in two ways:

1. The instcombiner should eliminate the type conversions.
2. The X86 backend should turn this into a rotate by one bit (a sketch
   follows).
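
Since b>>1 and (b<<7)&0xff occupy disjoint bits, the xor is really an or,
i.e. a rotate-right of the byte by one. A sketch of the form we want the
backend to recognize:

unsigned char rotr1(unsigned char b) {
  /* Bitwise identical to ((b >> 1) ^ (b << 7)) & 0xff; on X86 this
     should become a single rotate, e.g. "rorb $1, %al". */
  return (unsigned char)((b >> 1) | (b << 7));
}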

//===---------------------------------------------------------------------===//

Add LSR exit value substitution. It'll probably be a win for Ackermann, etc.
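
A sketch of the transformation on a trivial loop (names are illustrative):

int after_loop(int n) {
  int i, j = 0;
  for (i = 0; i < n; ++i)
    j += 2;
  /* Exit value substitution computes j's final value directly, so the
     recurrence (and possibly the whole loop) can die:
       return n > 0 ? 2 * n : 0;  */
  return j;
}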

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
things too.

//===---------------------------------------------------------------------===//

The loop unroller should be enhanced to be able to unroll loops that aren't 
single basic blocks.  It should be able to handle stuff like this:

  for (i = 0; i < c1; ++i)
     if (c2 & (1 << i))
       foo

where c1/c2 are constants.
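
For example, with c1 == 4 the fully unrolled form would be (keeping the
placeholder foo from above):

  if (c2 & 1) foo
  if (c2 & 2) foo
  if (c2 & 4) foo
  if (c2 & 8) foo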

//===---------------------------------------------------------------------===//

For packed types, TargetData.cpp::getTypeInfo() returns alignment that is
equal to the type size. This works, but can be overly conservative, because
the required alignment of a specific packed type is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));  /* 4 x float */

v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//