Introduce a new SROA implementation.
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
thresholds.
- It is overly conservative about which constructs can be split and
promoted.
- The vector value replacement aspect is separated from the splitting
logic, missing many opportunities where splitting and vector value
formation can work together.
- The splitting is entirely based around the underlying type of the
alloca, despite this type often having little to do with the reality
of how that memory is used. This is especially prevalent with unions
and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value
replacement (again because it is separate) can kick in for
preposterous cases where we simply should have split the value. This
results in forming i1024 and i2048 integer "bit vectors" that
tremendously slow down subsequent IR optimizations (due to large
APInts) and impede the backend's lowering.
The new design takes an approach that is fundamentally not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real-world problems with the SROA process today.
First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).
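To make that concrete, here is a hand-written sketch of the before and
after shapes this enables (illustrative only, not actual pass output;
the struct type and value names are invented):

  ; Before: a single aggregate alloca, accessed field by field.
  %agg = alloca { i32, float }
  %f0 = getelementptr inbounds { i32, float }* %agg, i32 0, i32 0
  store i32 %a, i32* %f0
  %f1 = getelementptr inbounds { i32, float }* %agg, i32 0, i32 1
  store float %b, float* %f1

  ; After splitting: one scalar alloca per partition, each of which
  ; mem2reg can then promote to an SSA value.
  %agg.0 = alloca i32
  store i32 %a, i32* %agg.0
  %agg.1 = alloca float
  store float %b, float* %agg.1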
The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense data structure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspect of the transform, such as the escape of a pointer.
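For example (an illustrative fragment; @escape stands in for any
external function and is not from the pass), a direct store simply
records the byte range it touches, while a call that captures the
alloca's address disqualifies the whole alloca:

  %a = alloca [4 x i32]
  %a.1 = getelementptr inbounds [4 x i32]* %a, i64 0, i64 1
  store i32 0, i32* %a.1            ; uses bytes [4,8) of %a
  %a.raw = bitcast [4 x i32]* %a to i8*
  call void @escape(i8* %a.raw)     ; the pointer escapes: no splitting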
Second, once we have a mapping of the ranges of the alloca used by
individual operations, we compute a partitioning of the used ranges.
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.
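As a hand-written illustration (not pass output), two scalar loads at
distinct offsets force two partitions, and a memset covering the whole
alloca, being splittable, simply contributes to both:

  %a = alloca { i32, i32 }
  %a.raw = bitcast { i32, i32 }* %a to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.raw, i8 0, i32 8, i32 4, i1 false)
  %a.0 = getelementptr inbounds { i32, i32 }* %a, i32 0, i32 0
  %v0 = load i32* %a.0              ; unsplittable use of bytes [0,4)
  %a.1 = getelementptr inbounds { i32, i32 }* %a, i32 0, i32 1
  %v1 = load i32* %a.1              ; unsplittable use of bytes [4,8)
  ; => two partitions, [0,4) and [4,8); the memset is split across them.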
Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.
Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.
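For instance, select speculation rewrites a load of a selected address
into a select of the loaded values, so that both slots remain
promotable (a sketch only; %a.0 and %a.1 are assumed to be already
split allocas):

  ; Before
  %addr = select i1 %cond, i32* %a.0, i32* %a.1
  %v = load i32* %addr

  ; After speculation
  %v.0 = load i32* %a.0
  %v.1 = load i32* %a.1
  %v = select i1 %cond, i32 %v.0, i32 %v.1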
After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.
There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I plan to improve them in subsequent
commits to ensure this is well documented, as the new pass is in many
ways more complex than the old one.
Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.
Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some data structure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.
Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, thanks to several other people on
IRC, over lunch tables, etc., for lots of feedback and advice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@163883 91177308-0d34-0410-b5e6-96231b3b80d8
; RUN: opt < %s -sroa -S | FileCheck %s
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-n8:16:32:64"
%S1 = type { i64, [42 x float] }
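; Whole-vector stores followed by scalar element loads should become
; extractelements from the stored vectors, leaving no alloca behind.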
define i32 @test1(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @test1(
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %tmp3 = load i32* %a.tmp3
; CHECK-NOT: load
; CHECK: extractelement <4 x i32> %x, i32 2
; CHECK-NEXT: extractelement <4 x i32> %y, i32 3
; CHECK-NEXT: extractelement <4 x i32> %y, i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: add
; CHECK-NEXT: add
; CHECK-NEXT: ret
}
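; Like test1, but the third element is read through a <2 x i32>
; subvector load, which should become a shufflevector plus
; extractelement.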
define i32 @test2(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @test2(
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %a.tmp3.cast = bitcast i32* %a.tmp3 to <2 x i32>*
  %tmp3.vec = load <2 x i32>* %a.tmp3.cast
  %tmp3 = extractelement <2 x i32> %tmp3.vec, i32 0
; CHECK-NOT: load
; CHECK: %[[extract1:.*]] = extractelement <4 x i32> %x, i32 2
; CHECK-NEXT: %[[extract2:.*]] = extractelement <4 x i32> %y, i32 3
; CHECK-NEXT: %[[extract3:.*]] = shufflevector <4 x i32> %y, <4 x i32> undef, <2 x i32> <i32 0, i32 1>
; CHECK-NEXT: %[[extract4:.*]] = extractelement <2 x i32> %[[extract3]], i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: %[[sum1:.*]] = add i32 %[[extract1]], %[[extract2]]
; CHECK-NEXT: %[[sum2:.*]] = add i32 %[[extract4]], %[[sum1]]
; CHECK-NEXT: ret i32 %[[sum2]]
}
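; Like test1, but with memsets: zeroing the second vector and
; overwriting one element of the first should fold into
; zeroinitializer and insertelement values.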
define i32 @test3(<4 x i32> %x, <4 x i32> %y) {
; CHECK-LABEL: @test3(
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.y.cast = bitcast <4 x i32>* %a.y to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.y.cast, i8 0, i32 16, i32 1, i1 false)
; CHECK-NOT: memset

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %a.tmp1.cast = bitcast i32* %a.tmp1 to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.tmp1.cast, i8 -1, i32 4, i32 1, i1 false)
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %tmp3 = load i32* %a.tmp3
; CHECK-NOT: load
; CHECK: %[[insert:.*]] = insertelement <4 x i32> %x, i32 -1, i32 2
; CHECK-NEXT: extractelement <4 x i32> %[[insert]], i32 2
; CHECK-NEXT: extractelement <4 x i32> zeroinitializer, i32 3
; CHECK-NEXT: extractelement <4 x i32> zeroinitializer, i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: add
; CHECK-NEXT: add
; CHECK-NEXT: ret
}
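; Like test3, but with memcpys from %z instead of memsets; the copied
; elements should be rewritten as loads of %z feeding
; insertelement/extractelement.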
define i32 @test4(<4 x i32> %x, <4 x i32> %y, <4 x i32>* %z) {
; CHECK-LABEL: @test4(
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.y.cast = bitcast <4 x i32>* %a.y to i8*
  %z.cast = bitcast <4 x i32>* %z to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.y.cast, i8* %z.cast, i32 16, i32 1, i1 false)
; CHECK-NOT: memcpy

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %a.tmp1.cast = bitcast i32* %a.tmp1 to i8*
  %z.tmp1 = getelementptr inbounds <4 x i32>* %z, i64 0, i64 2
  %z.tmp1.cast = bitcast i32* %z.tmp1 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.tmp1.cast, i8* %z.tmp1.cast, i32 4, i32 1, i1 false)
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %tmp3 = load i32* %a.tmp3
; CHECK-NOT: memcpy
; CHECK: %[[load:.*]] = load <4 x i32>* %z
; CHECK-NEXT: %[[gep:.*]] = getelementptr inbounds <4 x i32>* %z, i64 0, i64 2
; CHECK-NEXT: %[[element_load:.*]] = load i32* %[[gep]]
; CHECK-NEXT: %[[insert:.*]] = insertelement <4 x i32> %x, i32 %[[element_load]], i32 2
; CHECK-NEXT: extractelement <4 x i32> %[[insert]], i32 2
; CHECK-NEXT: extractelement <4 x i32> %[[load]], i32 3
; CHECK-NEXT: extractelement <4 x i32> %[[load]], i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: add
; CHECK-NEXT: add
; CHECK-NEXT: ret
}
declare void @llvm.memcpy.p0i8.p1i8.i32(i8* nocapture, i8 addrspace(1)* nocapture, i32, i32, i1) nounwind
; Same as test4 with a different sized address space pointer source.
define i32 @test4_as1(<4 x i32> %x, <4 x i32> %y, <4 x i32> addrspace(1)* %z) {
; CHECK-LABEL: @test4_as1(
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.y.cast = bitcast <4 x i32>* %a.y to i8*
  %z.cast = bitcast <4 x i32> addrspace(1)* %z to i8 addrspace(1)*
  call void @llvm.memcpy.p0i8.p1i8.i32(i8* %a.y.cast, i8 addrspace(1)* %z.cast, i32 16, i32 1, i1 false)
; CHECK-NOT: memcpy

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %a.tmp1.cast = bitcast i32* %a.tmp1 to i8*
  %z.tmp1 = getelementptr inbounds <4 x i32> addrspace(1)* %z, i16 0, i16 2
  %z.tmp1.cast = bitcast i32 addrspace(1)* %z.tmp1 to i8 addrspace(1)*
  call void @llvm.memcpy.p0i8.p1i8.i32(i8* %a.tmp1.cast, i8 addrspace(1)* %z.tmp1.cast, i32 4, i32 1, i1 false)
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %tmp3 = load i32* %a.tmp3
; CHECK-NOT: memcpy
; CHECK: %[[load:.*]] = load <4 x i32> addrspace(1)* %z
; CHECK-NEXT: %[[gep:.*]] = getelementptr inbounds <4 x i32> addrspace(1)* %z, i64 0, i64 2
; CHECK-NEXT: %[[element_load:.*]] = load i32 addrspace(1)* %[[gep]]
; CHECK-NEXT: %[[insert:.*]] = insertelement <4 x i32> %x, i32 %[[element_load]], i32 2
; CHECK-NEXT: extractelement <4 x i32> %[[insert]], i32 2
; CHECK-NEXT: extractelement <4 x i32> %[[load]], i32 3
; CHECK-NEXT: extractelement <4 x i32> %[[load]], i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: add
; CHECK-NEXT: add
; CHECK-NEXT: ret
}

define i32 @test5(<4 x i32> %x, <4 x i32> %y, <4 x i32>* %z) {
; CHECK-LABEL: @test5(
; The same as the above, but with reversed source and destination for the
; element memcpy, and a self copy.
entry:
  %a = alloca [2 x <4 x i32>]
; CHECK-NOT: alloca

  %a.x = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0
  store <4 x i32> %x, <4 x i32>* %a.x
  %a.y = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1
  store <4 x i32> %y, <4 x i32>* %a.y
; CHECK-NOT: store

  %a.y.cast = bitcast <4 x i32>* %a.y to i8*
  %a.x.cast = bitcast <4 x i32>* %a.x to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.x.cast, i8* %a.y.cast, i32 16, i32 1, i1 false)
; CHECK-NOT: memcpy

  %a.tmp1 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 0, i64 2
  %a.tmp1.cast = bitcast i32* %a.tmp1 to i8*
  %z.tmp1 = getelementptr inbounds <4 x i32>* %z, i64 0, i64 2
  %z.tmp1.cast = bitcast i32* %z.tmp1 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %z.tmp1.cast, i8* %a.tmp1.cast, i32 4, i32 1, i1 false)
  %tmp1 = load i32* %a.tmp1
  %a.tmp2 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 3
  %tmp2 = load i32* %a.tmp2
  %a.tmp3 = getelementptr inbounds [2 x <4 x i32>]* %a, i64 0, i64 1, i64 0
  %tmp3 = load i32* %a.tmp3
; CHECK-NOT: memcpy
; CHECK: %[[gep:.*]] = getelementptr inbounds <4 x i32>* %z, i64 0, i64 2
; CHECK-NEXT: %[[extract:.*]] = extractelement <4 x i32> %y, i32 2
; CHECK-NEXT: store i32 %[[extract]], i32* %[[gep]]
; CHECK-NEXT: extractelement <4 x i32> %y, i32 2
; CHECK-NEXT: extractelement <4 x i32> %y, i32 3
; CHECK-NEXT: extractelement <4 x i32> %y, i32 0

  %tmp4 = add i32 %tmp1, %tmp2
  %tmp5 = add i32 %tmp3, %tmp4
  ret i32 %tmp5
; CHECK-NEXT: add
; CHECK-NEXT: add
; CHECK-NEXT: ret
}

declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture, i32, i32, i1) nounwind
declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1) nounwind

define i64 @test6(<4 x i64> %x, <4 x i64> %y, i64 %n) {
; CHECK-LABEL: @test6(
; The old scalarrepl pass would wrongly drop the store to the second alloca.
; PR13254
  %tmp = alloca { <4 x i64>, <4 x i64> }
  %p0 = getelementptr inbounds { <4 x i64>, <4 x i64> }* %tmp, i32 0, i32 0
  store <4 x i64> %x, <4 x i64>* %p0
; CHECK: store <4 x i64> %x,
  %p1 = getelementptr inbounds { <4 x i64>, <4 x i64> }* %tmp, i32 0, i32 1
  store <4 x i64> %y, <4 x i64>* %p1
; CHECK: store <4 x i64> %y,
  %addr = getelementptr inbounds { <4 x i64>, <4 x i64> }* %tmp, i32 0, i32 0, i64 %n
  %res = load i64* %addr, align 4
  ret i64 %res
}
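
; Test that a sequence of <2 x i32> sub-vector stores plus a final scalar
; element store into a <4 x i32> alloca is rewritten into whole-vector
; select/insertelement operations and the alloca is promoted away.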
define <4 x i32> @test_subvec_store() {
; CHECK-LABEL: @test_subvec_store(
entry:
  %a = alloca <4 x i32>
; CHECK-NOT: alloca

  %a.gep0 = getelementptr <4 x i32>* %a, i32 0, i32 0
  %a.cast0 = bitcast i32* %a.gep0 to <2 x i32>*
  store <2 x i32> <i32 0, i32 0>, <2 x i32>* %a.cast0
; CHECK-NOT: store
; CHECK: select <4 x i1> <i1 true, i1 true, i1 false, i1 false>

  %a.gep1 = getelementptr <4 x i32>* %a, i32 0, i32 1
  %a.cast1 = bitcast i32* %a.gep1 to <2 x i32>*
  store <2 x i32> <i32 1, i32 1>, <2 x i32>* %a.cast1
; CHECK-NEXT: select <4 x i1> <i1 false, i1 true, i1 true, i1 false>

  %a.gep2 = getelementptr <4 x i32>* %a, i32 0, i32 2
  %a.cast2 = bitcast i32* %a.gep2 to <2 x i32>*
  store <2 x i32> <i32 2, i32 2>, <2 x i32>* %a.cast2
; CHECK-NEXT: select <4 x i1> <i1 false, i1 false, i1 true, i1 true>

  %a.gep3 = getelementptr <4 x i32>* %a, i32 0, i32 3
  store i32 3, i32* %a.gep3
; CHECK-NEXT: insertelement <4 x i32>

  %ret = load <4 x i32>* %a

  ret <4 x i32> %ret
; CHECK-NEXT: ret <4 x i32>
}
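
; Test that overlapping <2 x i32> sub-vector loads from a <4 x i32> alloca are
; rewritten as shufflevectors of the previously stored vector value, with no
; loads or allocas left behind.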
define <4 x i32> @test_subvec_load() {
; CHECK-LABEL: @test_subvec_load(
entry:
  %a = alloca <4 x i32>
; CHECK-NOT: alloca
  store <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32>* %a
; CHECK-NOT: store

  %a.gep0 = getelementptr <4 x i32>* %a, i32 0, i32 0
  %a.cast0 = bitcast i32* %a.gep0 to <2 x i32>*
  %first = load <2 x i32>* %a.cast0
; CHECK-NOT: load
; CHECK: %[[extract1:.*]] = shufflevector <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32> undef, <2 x i32> <i32 0, i32 1>

  %a.gep1 = getelementptr <4 x i32>* %a, i32 0, i32 1
  %a.cast1 = bitcast i32* %a.gep1 to <2 x i32>*
  %second = load <2 x i32>* %a.cast1
; CHECK-NEXT: %[[extract2:.*]] = shufflevector <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32> undef, <2 x i32> <i32 1, i32 2>

  %a.gep2 = getelementptr <4 x i32>* %a, i32 0, i32 2
  %a.cast2 = bitcast i32* %a.gep2 to <2 x i32>*
  %third = load <2 x i32>* %a.cast2
; CHECK-NEXT: %[[extract3:.*]] = shufflevector <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <4 x i32> undef, <2 x i32> <i32 2, i32 3>

  %tmp = shufflevector <2 x i32> %first, <2 x i32> %second, <2 x i32> <i32 0, i32 2>
  %ret = shufflevector <2 x i32> %tmp, <2 x i32> %third, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
; CHECK-NEXT: %[[tmp:.*]] = shufflevector <2 x i32> %[[extract1]], <2 x i32> %[[extract2]], <2 x i32> <i32 0, i32 2>
; CHECK-NEXT: %[[ret:.*]] = shufflevector <2 x i32> %[[tmp]], <2 x i32> %[[extract3]], <4 x i32> <i32 0, i32 1, i32 2, i32 3>

  ret <4 x i32> %ret
; CHECK-NEXT: ret <4 x i32> %[[ret]]
}

declare void @llvm.memset.p0i32.i32(i32* nocapture, i32, i32, i32, i1) nounwind
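
; Test that memsets covering sub-vector ranges of a <4 x float> alloca become
; vector select/insertelement operations instead of stores to memory.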
define <4 x float> @test_subvec_memset() {
; CHECK-LABEL: @test_subvec_memset(
entry:
  %a = alloca <4 x float>
; CHECK-NOT: alloca

  %a.gep0 = getelementptr <4 x float>* %a, i32 0, i32 0
  %a.cast0 = bitcast float* %a.gep0 to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.cast0, i8 0, i32 8, i32 0, i1 false)
; CHECK-NOT: store
; CHECK: select <4 x i1> <i1 true, i1 true, i1 false, i1 false>

  %a.gep1 = getelementptr <4 x float>* %a, i32 0, i32 1
  %a.cast1 = bitcast float* %a.gep1 to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.cast1, i8 1, i32 8, i32 0, i1 false)
; CHECK-NEXT: select <4 x i1> <i1 false, i1 true, i1 true, i1 false>

  %a.gep2 = getelementptr <4 x float>* %a, i32 0, i32 2
  %a.cast2 = bitcast float* %a.gep2 to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.cast2, i8 3, i32 8, i32 0, i1 false)
; CHECK-NEXT: select <4 x i1> <i1 false, i1 false, i1 true, i1 true>

  %a.gep3 = getelementptr <4 x float>* %a, i32 0, i32 3
  %a.cast3 = bitcast float* %a.gep3 to i8*
  call void @llvm.memset.p0i8.i32(i8* %a.cast3, i8 7, i32 4, i32 0, i1 false)
; CHECK-NEXT: insertelement <4 x float>

  %ret = load <4 x float>* %a

  ret <4 x float> %ret
; CHECK-NEXT: ret <4 x float>
}
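
; Test that memcpys into and out of sub-vector ranges of a <4 x float> alloca
; are rewritten as <2 x float> loads and stores combined with shufflevectors
; and selects, and the alloca itself is promoted away.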
define <4 x float> @test_subvec_memcpy(i8* %x, i8* %y, i8* %z, i8* %f, i8* %out) {
; CHECK-LABEL: @test_subvec_memcpy(
entry:
  %a = alloca <4 x float>
; CHECK-NOT: alloca

  %a.gep0 = getelementptr <4 x float>* %a, i32 0, i32 0
  %a.cast0 = bitcast float* %a.gep0 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.cast0, i8* %x, i32 8, i32 0, i1 false)
; CHECK: %[[xptr:.*]] = bitcast i8* %x to <2 x float>*
; CHECK-NEXT: %[[x:.*]] = load <2 x float>* %[[xptr]]
; CHECK-NEXT: %[[expand_x:.*]] = shufflevector <2 x float> %[[x]], <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
; CHECK-NEXT: select <4 x i1> <i1 true, i1 true, i1 false, i1 false>

  %a.gep1 = getelementptr <4 x float>* %a, i32 0, i32 1
  %a.cast1 = bitcast float* %a.gep1 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.cast1, i8* %y, i32 8, i32 0, i1 false)
; CHECK-NEXT: %[[yptr:.*]] = bitcast i8* %y to <2 x float>*
; CHECK-NEXT: %[[y:.*]] = load <2 x float>* %[[yptr]]
; CHECK-NEXT: %[[expand_y:.*]] = shufflevector <2 x float> %[[y]], <2 x float> undef, <4 x i32> <i32 undef, i32 0, i32 1, i32 undef>
; CHECK-NEXT: select <4 x i1> <i1 false, i1 true, i1 true, i1 false>

  %a.gep2 = getelementptr <4 x float>* %a, i32 0, i32 2
  %a.cast2 = bitcast float* %a.gep2 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.cast2, i8* %z, i32 8, i32 0, i1 false)
; CHECK-NEXT: %[[zptr:.*]] = bitcast i8* %z to <2 x float>*
; CHECK-NEXT: %[[z:.*]] = load <2 x float>* %[[zptr]]
; CHECK-NEXT: %[[expand_z:.*]] = shufflevector <2 x float> %[[z]], <2 x float> undef, <4 x i32> <i32 undef, i32 undef, i32 0, i32 1>
; CHECK-NEXT: select <4 x i1> <i1 false, i1 false, i1 true, i1 true>

  %a.gep3 = getelementptr <4 x float>* %a, i32 0, i32 3
  %a.cast3 = bitcast float* %a.gep3 to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %a.cast3, i8* %f, i32 4, i32 0, i1 false)
; CHECK-NEXT: %[[fptr:.*]] = bitcast i8* %f to float*
; CHECK-NEXT: %[[f:.*]] = load float* %[[fptr]]
; CHECK-NEXT: %[[insert_f:.*]] = insertelement <4 x float>

  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %out, i8* %a.cast2, i32 8, i32 0, i1 false)
; CHECK-NEXT: %[[outptr:.*]] = bitcast i8* %out to <2 x float>*
; CHECK-NEXT: %[[extract_out:.*]] = shufflevector <4 x float> %[[insert_f]], <4 x float> undef, <2 x i32> <i32 2, i32 3>
; CHECK-NEXT: store <2 x float> %[[extract_out]], <2 x float>* %[[outptr]]

  %ret = load <4 x float>* %a

  ret <4 x float> %ret
; CHECK-NEXT: ret <4 x float> %[[insert_f]]
}

define i32 @PR14212() {
; CHECK-LABEL: @PR14212(
; This caused a crash when "splitting" the load of the i32 in order to promote
; the store of <3 x i8> properly. Heavily reduced from an OpenCL test case.
entry:
  %retval = alloca <3 x i8>, align 4
; CHECK-NOT: alloca

  store <3 x i8> undef, <3 x i8>* %retval, align 4
  %cast = bitcast <3 x i8>* %retval to i32*
  %load = load i32* %cast, align 4
  ret i32 %load
; CHECK: ret i32
}

define <2 x i8> @PR14349.1(i32 %x) {
; CHECK: @PR14349.1
; The first testcase for broken SROA rewriting of split integer loads and
; stores due to smaller vector loads and stores. This particular test ensures
; that we can rewrite a split store of an integer to a store of a vector.
entry:
  %a = alloca i32
; CHECK-NOT: alloca

  store i32 %x, i32* %a
; CHECK-NOT: store

  %cast = bitcast i32* %a to <2 x i8>*
  %vec = load <2 x i8>* %cast
; CHECK-NOT: load

  ret <2 x i8> %vec
; CHECK: %[[trunc:.*]] = trunc i32 %x to i16
; CHECK: %[[cast:.*]] = bitcast i16 %[[trunc]] to <2 x i8>
; CHECK: ret <2 x i8> %[[cast]]
}

define i32 @PR14349.2(<2 x i8> %x) {
; CHECK: @PR14349.2
; The second testcase for broken SROA rewriting of split integer loads and
; stores due to smaller vector loads and stores. This particular test ensures
; that we can rewrite a split load of an integer to a load of a vector.
entry:
  %a = alloca i32
; CHECK-NOT: alloca

  %cast = bitcast i32* %a to <2 x i8>*
  store <2 x i8> %x, <2 x i8>* %cast
; CHECK-NOT: store

  %int = load i32* %a
; CHECK-NOT: load

  ret i32 %int
; CHECK: %[[cast:.*]] = bitcast <2 x i8> %x to i16
; CHECK: %[[trunc:.*]] = zext i16 %[[cast]] to i32
; CHECK: %[[insert:.*]] = or i32 %{{.*}}, %[[trunc]]
; CHECK: ret i32 %[[insert]]
}