Commit Graph

11371 Commits

Author SHA1 Message Date
Eric Christopher
a01bc6a59f Remove an argument-less call to getSubtargetImpl from TargetLoweringBase.
This required plumbing a TargetRegisterInfo through computeRegisterProperties
and into findRepresentativeClass, which uses it for register class
iteration. This in turn required passing a subtarget into a few target-specific
initializations of TargetLowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230583 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-26 00:00:24 +00:00
David Majnemer
92d1637e2f X86, Win64: Allow 'mov' to restore the stack pointer if we have a FP
The Win64 epilogue structure is very restrictive: it permits only a very
small number of opcodes, and none of them is 'mov'.

This means that given:
  mov %rbp, %rsp
  pop %rbp

The mov isn't the epilogue; only the pop is.  This is problematic unless
a frame pointer is present, in which case we are free to do whatever we'd
like in the "body" of the function.  If a frame pointer is present,
unwinding will undo the prologue operations in reverse order regardless
of the fact that we are at an instruction which is resetting the stack
pointer.
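
As an illustration (not from the commit itself; the offset is made up), the
two situations look roughly like this:

  # no frame pointer: the epilogue restores %rsp with 'add'
  add $40, %rsp
  ret

  # frame pointer present: the 'mov' counts as function body, and the
  # recognized epilogue begins at the 'pop'
  mov %rbp, %rsp
  pop %rbp
  ret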

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230543 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-25 21:13:37 +00:00
Bruno Cardoso Lopes
51fc7f5afa [X86][MMX] Reapply: Add MMX instructions to foldable tables
Reapply r230248.

Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.
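
A hypothetical example of the kind of fold this enables (illustrative only,
not taken from the patch):

  # before: a separate load feeding an MMX instruction
  movq  (%rax), %mm1
  paddd %mm1, %mm0

  # after: the load is folded into the memory form of the instruction
  paddd (%rax), %mm0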

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230499 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-25 15:14:02 +00:00
Bruno Cardoso Lopes
8ad268fd61 [X86][MMX] Prevent MMX_MOVD64rm folding
MMX_MOVD64rm zero-extends i32 load results into i64 registers.

The peephole optimizer will try to fold it into other MMX foldable
instructions, which is the wrong thing to do, since there's no MMX memory
instruction that loads an i32 and does implicit zero extension.
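
A sketch of the problem, for illustration:

  # MMX_MOVD64rm: a 32-bit load zero-extended into the 64-bit mm register
  movd  (%rax), %mm0

  # folding that load into another MMX instruction would instead read a
  # full 64 bits from memory, e.g.:
  paddd (%rax), %mm1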

Remove 'canFoldAsLoad' from MOVD64rm in order to prevent such folding.
The current MMX tests already test this, but since there are no MMX
instructions in the foldable tables yet, this did not trigger. This
commit prepares for the addition of those instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230498 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-25 15:13:52 +00:00
Elena Demikhovsky
4105fd49d4 AVX-512: Gather and Scatter patterns
Gather and scatter instructions additionally write to one of the source operands - the mask register.
In this case a gather has two destination values - the loaded value and the mask.
Until now we did not support a code-gen pattern for gather - the instruction was generated from
the intrinsic only and the machine node was hardcoded.
When we introduce the masked_gather node, we need to select the instruction automatically,
in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling two destination operands.
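
For example, an AVX-512 gather (shown in assembly for illustration) defines
both the destination vector and the mask register, whose bits are cleared as
elements are gathered:

  vgatherdps (%rax,%zmm1,4), %zmm0 {%k1}   # writes %zmm0 and updates %k1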

(Some code in X86InstrFragmentsSIMD.td is commented out, just to split one big
patch into many small patches.)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230471 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-25 09:46:31 +00:00
Sanjay Patel
a90eb87f7e simplify control flow; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230342 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:26:02 +00:00
Michael Kuperstein
09d756a7e0 [x32] Mark RBX as reserved when EBX is the base pointer.
This should have gone into r230334.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230339 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:13:16 +00:00
Sanjay Patel
269510242b fix typo in comment; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230338 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 16:11:05 +00:00
Michael Kuperstein
2379e8a2ee [x32] x32 should use ebx as the base pointer.
This fixes the original issue in PR22655, but not the secondary one.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230334 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 15:27:13 +00:00
Craig Topper
c3b9d471f6 [X86] Remove the AbsMem32 type from the assembly parser. Only really need the 16-bit version which will automatically get prioritized over AbsMem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230313 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 08:02:13 +00:00
David Majnemer
fbdee9f0c0 X86: Only use 'lea' in Win64 epilogues if a frame pointer exists
We can only use 'add' in epilogues; 'lea' is not permitted unless we've
established a frame pointer in the prologue.
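
Illustrative sketch of the constraint (the operands are made up):

  # always a valid epilogue opening:
  add $40, %rsp

  # valid only when a frame pointer was established in the prologue:
  lea (%rbp), %rsp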

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230286 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-24 00:11:32 +00:00
David Majnemer
ad6622575c X86: Use a smaller 'mov' instruction for stack probe calls
Prologue emission, in some cases, requires calls to a stack probe helper
function.  The amount of stack to probe is passed as a register
argument in the Win64 ABI but the instruction sequence used is
pessimistic: it assumes that the number of bytes to probe is greater
than 4 GB.

Instead, select a more appropriate opcode depending on the number of
bytes we are going to probe.
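
An illustrative sketch, assuming the probe amount is passed in %rax to a
helper such as __chkstk:

  # before: a 10-byte 64-bit immediate move, regardless of the amount
  movabs $4096, %rax
  call   __chkstk

  # after: a 5-byte 32-bit move when the amount fits in 32 bits (writing
  # %eax zero-extends into %rax)
  mov    $4096, %eax
  call   __chkstk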

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230270 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:30 +00:00
David Majnemer
d71e4c6218 X86: Use 'mov' instead of 'lea' in Win64 SEH prologues when possible
'mov' and 'lea' are equivalent when the displacement applied with 'lea'
is zero.  However, 'mov' should encode smaller.
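
A sketch of the two equivalent forms (encoding sizes shown for the common
REX.W forms, for illustration):

  lea (%rsp), %rbp    # 4-byte encoding
  mov %rsp, %rbp      # 3-byte encoding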

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230269 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:27 +00:00
David Majnemer
16ae406776 X86: Explain why we cannot use a 'mov' in a Win64 epilogue
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230268 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:25 +00:00
David Majnemer
10c4458d7d X86: Consistently use 'epilogue' instead of 'epilog'
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230267 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:50:18 +00:00
Bruno Cardoso Lopes
6bf5b2b094 [AsmPrinter] Access pointers to globals via pcrel GOT entries
Front-ends could use global unnamed_addr to hold pointers to other
symbols, like @gotequivalent below:

@foo = global i32 42
@gotequivalent = private unnamed_addr constant i32* @foo

@delta = global i32 trunc (i64 sub (i64 ptrtoint (i32** @gotequivalent to i64),
                                    i64 ptrtoint (i32* @delta to i64))
                           to i32)

The global @delta holds a data "PC"-relative offset to @gotequivalent,
an unnamed pointer to @foo. The darwin/x86-64 assembly output for this follows:

 .globl  _foo
_foo:
 .long   42

 .globl  _gotequivalent
_gotequivalent:
 .quad   _foo

 .globl  _delta
_delta:
 .long   _gotequivalent-_delta

Since unnamed_addr indicates that the address is not significant, only
the content, we can optimize the case above by replacing PC-relative
accesses to "GOT equivalent" globals with a PC-relative access to the GOT
entry of the final symbol instead. Therefore, "delta" can contain a
PC-relative relocation to foo's GOT entry, and we avoid the emission of
"gotequivalent", yielding the assembly code below:

 .globl  _foo
_foo:
 .long   42

 .globl  _delta
_delta:
 .long   _foo@GOTPCREL+4

There are a couple of advantages to doing this: (1) front-ends that need
to emit a great deal of data to store pointers to external symbols can
save space by not emitting such "GOT equivalent" globals, and (2) combined
with this optimization, these IR constructs open a way to represent GOT
pcrel relocations directly in LLVM IR, which is something we previously
had no way to express.

Differential Revision: http://reviews.llvm.org/D6922

rdar://problem/18534217

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230264 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 21:26:18 +00:00
Bruno Cardoso Lopes
ee7b509aa3 Revert "[X86][MMX] Add MMX instructions to foldable tables"
This reverts commit r230226 since it breaks win buildbots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230248 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 19:53:37 +00:00
Bruno Cardoso Lopes
77d2363908 [X86][MMX] Add MMX instructions to foldable tables
Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230226 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:23:22 +00:00
Bruno Cardoso Lopes
c606f3a3cb [X86][MMX] Support folding loads in psll, psrl and psra intrinsics
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230225 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:23:14 +00:00
Elena Demikhovsky
fdafc8fd5e AVX-512: recommitted 229837 + bugfix + test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230223 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 15:12:31 +00:00
Elena Demikhovsky
d8e5adcd92 restructured X86 scalar unary operation templates
I made the templates general, so there is no need to define a pattern separately for each instruction/intrinsic.
Now we only need to add the r_Int pattern for AVX.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230221 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-23 14:14:02 +00:00
Craig Topper
f9c1605d56 [X86] Add some missing redundant MMX and SSE encodings for disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-22 07:50:41 +00:00
Benjamin Kramer
2b17108064 Remove dead prototype.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230137 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 14:35:00 +00:00
Benjamin Kramer
edf99a5e3f X86: Remove custom lowering of SIGN_EXTEND_INREG
This was just replicating logic from the legalizer. Covered by existing
tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230136 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 14:31:29 +00:00
David Majnemer
164db1c6b9 X86: Call __main using the SelectionDAG
Synthesizing a call directly using the MI layer would confuse the frame
lowering code.  This is problematic as frame lowering is highly
sensitive to the particularities of calls, etc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230129 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 05:49:45 +00:00
Tim Northover
ca7e0787f0 CodeGen: convert CCState interface to using ArrayRefs
Everyone except R600 was manually passing the length of a static array
at each callsite, calculated in a variety of interesting ways. Far
easier to let ArrayRef handle that.

There should be no functional change, but out-of-tree targets may have
to tweak their calls as with these examples.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230118 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 02:11:17 +00:00
David Majnemer
e95985d3a0 Win64: Stack alignment constraints aren't applied during SET_FPREG
Stack realignment occurs after the prologue, not during it, for Win64.
Because of this, don't factor in the maximum stack alignment when
establishing a frame pointer.

This fixes PR22572.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230113 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-21 01:04:47 +00:00
Reid Kleckner
4b91be0289 X86: Remove pre-2010 dead code in mergeSPUpdatesDown
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230075 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 22:13:25 +00:00
Simon Pilgrim
0de2c870d8 LowerScalarImmediateShift - Merged v16i8 and v32i8 shift lowering. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230074 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 22:13:03 +00:00
Andrea Di Biagio
3583d23018 [X86][FastIsel] Teach how to select float-half conversion intrinsics.
This patch teaches X86FastISel how to select intrinsic 'convert_from_fp16' and
intrinsic 'convert_to_fp16'.
If the target has F16C, we can select VCVTPS2PHrr for a float-half conversion,
and VCVTPH2PSrr for a half-float conversion.
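
Roughly, the selected instructions look like this in assembly (illustrative;
the scalar value travels in lane 0 of the packed operation, and the rounding
immediate shown is just an example):

  vcvtps2ph $0, %xmm0, %xmm1   # float -> half, $0 = round to nearest even
  vcvtph2ps %xmm1, %xmm0       # half -> float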

Differential Revision: http://reviews.llvm.org/D7673


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230043 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 19:37:14 +00:00
Sanjay Patel
74e8bf678a canonicalize a v2f64 blendi of 2 registers
This canonicalization step saves us 3 pattern matching possibilities * 4 math ops
for scalar FP math that uses xmm regs. The backend can re-commute the operands
post-instruction-selection if that makes register allocation better.

The tests in llvm/test/CodeGen/X86/sse-scalar-fp-arith.ll cover this scenario already,
so there are no new tests with this patch.

Differential Revision: http://reviews.llvm.org/D7777


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230024 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 16:55:27 +00:00
Chandler Carruth
634fc5f26b [x86] Switching the shuffle equivalence test to a variadic template was
the wrong answer. We also got initializer lists, which are *way* cleaner
for this kind of thing. Let's use those and make this a normal, boring
function accepting ArrayRef.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@230004 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 10:47:28 +00:00
Eric Christopher
b661ab1cbd Save the MachineFunction in startFunction so that we can use it for
lookups of the subtarget later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229996 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 08:01:55 +00:00
Eric Christopher
7b0c988b90 Use the cached subtarget from the MachineFunction rather than
doing a lookup on the TargetMachine.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229995 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 08:01:52 +00:00
Nick Lewycky
12cbedbaee Fix build in release mode, -Wunused-variable on this lambda function used only in an assert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229977 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 07:16:17 +00:00
David Blaikie
7be2b85e1e Fix -Wunused-variable warning in non-asserts build, and optimize a little bit while I'm here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229970 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 06:28:38 +00:00
Chandler Carruth
efbbaefea5 [x86] Remove the old vector shuffle lowering code and its flag.
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no truly blocking
regressions. I've fuzz-tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]

Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.

In case you aren't using a diff algorithm that can handle this:
  X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229964 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 04:25:04 +00:00
Chandler Carruth
07ef8904ad [x86] Now that the new vector shuffle legality is enabled and everything
is going well, remove the flag and the code for the old legality tests.

This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229963 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 03:59:35 +00:00
Chandler Carruth
38749b8e07 [x86] Make the new vector shuffle legality test on by default, which
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.

My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229958 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 03:05:47 +00:00
Eric Christopher
8c4bb575e1 Revert "AVX-512: Full implementation for VRNDSCALESS/SD instructions and intrinsics."
The instructions were being generated on architectures that don't support AVX-512.

This reverts commit r229837.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229942 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 00:45:28 +00:00
Eric Christopher
74678a1ed1 Add a license header to the AVX512 file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229941 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-20 00:36:53 +00:00
Benjamin Kramer
1ce666d86c Demote vectors to arrays. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229861 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 15:26:17 +00:00
Chandler Carruth
b7012af85f [x86] Delete still more piles of complex code now that we have a good
systematic lowering of v8i16.

This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dried win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.

In some cases, this is yet another significant improvement.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229859 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 15:21:57 +00:00
Chandler Carruth
c57e90422f [x86] Teach the unpack lowering how to lower with an initial unpack in
addition to lowering to trees rooted in an unpack.

This saves shuffles and/or registers in various ways, and lets us
handle another class of v4i32 shuffles pre-SSE4.1 without domain
crosses, etc.
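
A made-up example of the new pattern: a two-input v4i32 shuffle such as
<1,5,0,4> can start with an unpack and finish with a single integer-domain
shuffle, avoiding a domain-crossing shufps:

  punpckldq %xmm1, %xmm0          # xmm0 = {a0,b0,a1,b1}
  pshufd    $0x4e, %xmm0, %xmm0   # xmm0 = {a1,b1,a0,b0}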

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229856 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 15:06:13 +00:00
Chandler Carruth
7f583a4201 [x86] Dramatically improve v8i16 shuffle lowering by not using its
terribly complex partial blend logic.

This code path was one of the more complex and bug-prone when it first
went in, and it hasn't fared much better since. Ultimately, with the simpler
basis for unpack lowering and support for bit-math blending, this is
completely obsolete. In the worst case without this we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or pshufb
are available.

This does expose one (minor) weakness of the unpack lowering that I'll
try to address.

In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229853 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 14:08:24 +00:00
Chandler Carruth
943b2ca2de [x86] Remove the final fallback in the v8i16 lowering that isn't really
needed, and significantly improve the SSSE3 path.

This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.
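
A sketch of the SSSE3 pshufb-based blend setup (the constant-pool labels are
made up):

  # each pshufb places its input's elements in the right lanes and zeroes
  # the rest, so a plain 'por' completes the blend
  pshufb .LCPI0_0(%rip), %xmm0
  pshufb .LCPI0_1(%rip), %xmm1
  por    %xmm1, %xmm0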

This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229851 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 13:56:49 +00:00
Chandler Carruth
c3d7858505 [x86] Simplify the pre-SSSE3 v16i8 lowering significantly by decomposing
them into permutes and a blend with the generic decomposition logic.

This works really well in almost every case and lets the code only
manage the expansion of a single input into two v8i16 vectors to perform
the actual shuffle. The blend-based merging is often much nicer than the
pack-based merging that this replaces. The only place where it isn't is when
we end up blending between two packs when we could do a single pack. To
handle that case, just teach the v2i64 lowering to handle these blends
by digging out the operands.

With this we're down to only really random permutations that cause an
explosion of instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229849 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 13:15:12 +00:00
Chandler Carruth
3d4542ce3d [x86] Remove the insanely over-aggressive unpack lowering strategy for
v16i8 shuffles, and replace it with new facilities.

This uses precise patterns to match exact unpacks, and the new
generalized unpack lowering only when we detect a case where we will
have to shuffle both inputs anyway and they terminate in exactly
a blend.

This fixes all of the blend horrors that I uncovered by always lowering
blends through the vector shuffle lowering. It also removes *sooooo*
much of the crazy instruction sequences required for v16i8 lowering
previously. Much cleaner now.

The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when
it would be marginally nicer to use pshufb+pshufb+por. However, the
difference there is *tiny*. In many cases it's a win because we re-use
the pshufb mask. In others, we get to avoid the pshufb entirely. I've
left a FIXME, but I'm dubious we can really do better than this. I'm
actually pretty happy with this lowering now.

For SSE2 this exposes some horrors that were really already there. Those
will have to be fixed by changing a different path through the v16i8
lowering.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229846 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 12:10:37 +00:00
Chandler Carruth
71164d08b1 [x86] The SELECT x86 DAG combine also does legalization. It used to rely
on things not being marked as either custom or legal, but we now do
custom lowering of more VSELECT nodes. To cope with this, manually
replicate the legality tests here. These have to stay in sync with the
set of tests used in the custom lowering of VSELECT.

Ideally, we wouldn't do any of this combine-based-legalization when we
have an actual custom legalization step for VSELECT, but I'm not going
to be able to rewrite all of that today.

I don't have a test case for this currently, but it was found when
compiling a number of the test-suite benchmarks. I'll try to reduce
a test case and add it.

This should at least fix the test-suite fallout on build bots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229844 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 11:43:37 +00:00
Michael Kuperstein
2b5910a767 Reverting r229831 due to multiple ARM/PPC/MIPS build-bot failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@229841 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-19 11:38:11 +00:00