introduce an attempt at post-processing the colour artefacting whereby
adjacent '1' bits coalesce into white pixels. This is an incomplete
model of even this artefact, let alone the various other fringing
weirdness that happens with e.g. NTSC rendering (which is not
faithfully reproduced by the Apple //GS RGB display, so it's hard for
me to test further).
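The post-processing idea, very roughly (a minimal sketch assuming a
single scanline of 0/1 pixel bits as a numpy array; the labels and the
per-scanline framing are mine, not the encoder's):

    import numpy as np

    BLACK, COLOUR, WHITE = 0, 1, 2  # illustrative labels, not a real palette

    def coalesce_white(bits: np.ndarray) -> np.ndarray:
        """Lit pixels whose neighbour is also lit become white; lone lit
        pixels keep their artifact colour.  Models only this one artefact."""
        left = np.roll(bits, 1)
        left[0] = 0
        right = np.roll(bits, -1)
        right[-1] = 0
        out = np.where(bits == 0, BLACK, COLOUR)
        out[(bits == 1) & ((left == 1) | (right == 1))] = WHITE
        return out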
- Commented-out experimentation with using the hitherdither library to
do Yliluoma dithering (see
https://bisqwit.iki.fi/story/howto/dither/jy/) instead of the
error-diffusion-based BMP2DHR, which introduces a lot of noise between
frames since it is easily perturbed.
Unfortunately, apart from being extremely slow, it also doesn't give
good results, even for the (simulated) DHGR palette. There's a lot of
banding, and for HGR the available colours are just too far apart in
colour space.
This is even without (somehow) applying the HGR colour constraints.
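The commented-out experiment looked roughly like this (a sketch from
memory of the hitherdither README; the entry point and palette handling
are assumptions, and the colour values are illustrative rather than the
real HGR/DHGR palette):

    import hitherdither
    from PIL import Image

    img = Image.open("frame.png").convert("RGB")
    palette = hitherdither.palette.Palette(
        [0x000000, 0xDD22DD, 0x22DD22, 0xFFFFFF]  # placeholder colours
    )
    # Yliluoma's ordered dithering: stable between frames, but very slow
    # and prone to banding with so few colours, as noted above.
    dithered = hitherdither.ordered.yliluoma.yliluomas_1_ordered_dithering(
        img, palette, order=8
    )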
- Also return the priority from _compute_error, in preparation for
reinserting the offset back into the priority heap in case we can do a
better job later. To do this properly we need to compute both the error
edit distance and the "true" edit distance, and only insert the
priority of the latter.
- Change ACK code to perform two dummy stream reads rather than relying
on a preceding NOP to pad the TCP frame to 2K. This fixes the timing
issue that was causing most of the low-frequency ticks.
- Ticks still aren't perfectly aligned during the ACK slow path, but
it's almost good enough, i.e. probably not worth bothering to optimize
the slow path further.
of all possible weights. We encode the two 8-bit inputs into a single
16-bit value instead of dealing with an array of tuples.
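i.e. something along these lines (names illustrative):

    import numpy as np

    weights = np.zeros(1 << 16, dtype=np.float64)  # one slot per (old, new) byte pair

    def pair_key(old_byte: int, new_byte: int) -> int:
        return (old_byte << 8) | new_byte  # two 8-bit inputs -> one 16-bit index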
Fix an important bug in _compute_delta: old and is_odd were transposed,
so we weren't actually subtracting the old deltas! Surprisingly, when I
accidentally fixed this bug in the vectorized version, the video
encoding got much worse! This turned out to be because the edit
distance metric allowed reducing diffs by turning on pixels, which
meant it would tend to do this when "minimizing error", in a way that
was visually unappealing.
To remedy this, introduce a separate notion of substitution cost for
errors, and weight pixel colour changes more heavily to discourage them
unless absolutely necessary. This gives very good quality results!
Also vectorize the selection of page offsets and priorities having
a negative error delta, instead of heapifying the entire page.
It also turns out to be a bit faster to compute (and memoize) the delta
between a proposed content byte and the entire target screen at once,
since we'd otherwise end up recomputing the same content diffs multiple
times.
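A rough numpy sketch of the memoization and vectorized selection just
described (array names, shapes and the placeholder distance function
are assumptions, not the real code):

    import numpy as np

    TARGET = np.zeros(8192, dtype=np.uint8)  # desired screen memory
    SOURCE = np.zeros(8192, dtype=np.uint8)  # what the client currently has

    def distance(a, b):
        # Placeholder; the real metric is the weighted edit distance.
        return np.abs(a.astype(np.int64) - b.astype(np.int64))

    _memo = {}

    def content_deltas(content: int) -> np.ndarray:
        """Error delta at every offset if `content` were stored there,
        memoized per content byte since the same diffs recur many times
        within a frame."""
        if content not in _memo:
            _memo[content] = (distance(np.uint8(content), TARGET)
                              - distance(SOURCE, TARGET))
        return _memo[content]

    # Only offsets with a negative delta would improve the screen.
    improving = np.flatnonzero(content_deltas(0x2A) < 0)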
(Committing messy version in case I want to revisit some of those
interim versions)
has ticked past the appropriate time.
- optimize the frame encoding a bit
- use int64 consistently to avoid casting
Fix a bug - when retiring an offset, also update our memory map with the
new content, oops
If we run out of changes to index, keep emitting stores for content
at page=32, offset=0 forever
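Roughly (names illustrative):

    import itertools
    from typing import Iterable, Iterator, Tuple

    Store = Tuple[int, int, int]  # (page, offset, content)

    def store_stream(changes: Iterable[Store],
                     idle_content: int = 0x00) -> Iterator[Store]:
        """Emit all real stores, then harmless stores of the existing
        content at page=32, offset=0 forever so the stream never stalls."""
        yield from changes
        yield from itertools.repeat((32, 0, idle_content))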
Switch to a weighted D-L (Damerau-Levenshtein) implementation so we can
weight different substitutions differently (e.g. weighting diffs
to/from black pixels differently than colour errors).
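For reference, a minimal sketch of this kind of weighted (restricted)
Damerau-Levenshtein distance with a caller-supplied substitution cost;
the cost function shown is illustrative, not the weights actually used:

    from typing import Callable

    def weighted_dl(a: str, b: str,
                    sub_cost: Callable[[str, str], float]) -> float:
        n, m = len(a), len(b)
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = float(i)
        for j in range(m + 1):
            d[0][j] = float(j)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0.0 if a[i - 1] == b[j - 1] else sub_cost(a[i - 1], b[j - 1])
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
                if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                        and a[i - 2] == b[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[n][m]

    def sub_cost(x: str, y: str) -> float:
        # e.g. penalize colour-to-colour changes more than lighting or
        # clearing a black pixel ('0').
        return 2.0 if "0" not in (x, y) else 1.0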
0x800-0x2000
Place some of the tick opcodes there. This gives enough room for all
but 2 of the op_tick_*_page_n opcodes!
It may be possible to fit the remaining ones into unused RAM in the
language card, but this will require some finesse to get the code in
there. Or maybe I can optimize enough bytes...
0x300 is used by the loader.system, but there is also still 0x400..0x800
if I don't mind messing up the text page, and 0x200 if I can
get away with using the keyboard buffer.
Something is broken with RESET now though, maybe the reset vector is
pointing somewhere orphaned.
- Introduce a new Movie() class that multiplexes audio and video
(loosely sketched below).
- Every N audio frames we grab a new video frame and begin pulling
opcodes from the audio and video streams
- Grab frames from the input video using bmp2dhr if the .BIN file does
not already exist. Run bmp2dhr in a background thread so it doesn't
block encoding
- Move the output byte streaming from Video to Movie
- For now, manually clip updates to pages > 56 since the client doesn't
support them yet
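A loose sketch of the multiplexing (class shape, method names and the
bmp2dhr invocation are assumptions; only the interleaving idea is from
this change):

    import os
    import subprocess
    import threading

    class Movie:
        """Multiplexes audio and video opcode streams (sketch only)."""

        def __init__(self, audio, video, video_frame_interval):
            self.audio = audio
            self.video = video
            self.interval = video_frame_interval  # audio frames per video frame

        def emit_stream(self):
            for frame_num, audio_frame in enumerate(self.audio.frames()):
                if frame_num % self.interval == 0:
                    self.video.next_frame()  # grab a new video frame every N
                yield from self.audio.opcodes(audio_frame)
                yield from self.video.opcodes()

    def prepare_frame(bmp_path: str, bin_path: str):
        """Run bmp2dhr in a background thread if the .BIN doesn't exist."""
        if os.path.exists(bin_path):
            return None
        thread = threading.Thread(target=subprocess.call,
                                  args=(["bmp2dhr", bmp_path],))
        thread.start()
        return thread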
The way we encode video is now (sketched in code after this list):
- iterate in descending order over update_priority
- begin a new (page, content) opcode
- for all of the other offset bytes in that page, compute the error
between the candidate content byte and the target content byte
- iterate over offsets in order of increasing error and decreasing
update_priority to fill out the remaining opcode
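In code the loop is roughly the following (array names/shapes and the
error callback are assumptions; details like the number of offsets per
opcode differ in the real encoder):

    import numpy as np

    def encode_frame(update_priority, target, source, error,
                     offsets_per_opcode=4):
        """update_priority/target/source: (n_pages, 256) arrays.
        error(content, page_bytes) gives the per-offset error of storing
        `content` over `page_bytes`."""
        order = np.argsort(update_priority, axis=None)[::-1]  # descending priority
        for flat in order:
            page, offset = divmod(int(flat), update_priority.shape[1])
            if update_priority[page, offset] == 0:
                continue                    # nothing left to fix here
            content = target[page, offset]  # begin a new (page, content) opcode
            # Prefer offsets on this page with low error and high priority
            # to fill out the rest of the opcode.
            rank = error(content, target[page]) - update_priority[page]
            chosen = np.argsort(rank)[:offsets_per_opcode]
            yield page, content, chosen
            update_priority[page, chosen] = 0  # these diffs are now retired
            source[page, chosen] = content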
trick of temporarily violating the X=0 invariant (which is only
required in the tick_6 opcode tail path to steal an extra cycle)
to reorder a STA $2000,Y outside of the tick loop.
The cost of this is that we don't have enough pad cycles left to JMP
to the common opcode tail, but I think this still (barely) fits in
main RAM.
- enough to fit in AUX RAM but still room to go, hopefully will be
able to fit in MAIN?
Fix some of the off-by-one cycle counts introduced when switching from
STA tick (which is wrong since it accesses twice) to BIT tick.
Hopefully can fix others by reordering?
scheduling audio + video.
Use a combined "fat" audio + video opcode that combines several
features:
- constant cycle count of 73 cycles/opcode (=14364 Hz)
- page and content are controlled per opcode
- each opcode does 4 offset stores (hence 57456 stores/sec)
- tick speaker twice per opcode, with varying duty cycles 4 .. 70 in
units of 2 cycles
- thus 32 opcodes, or 5-bit audio @ 14364 Hz
The price for this is that we need per-page variants of the opcodes,
and at 53 bytes/opcode they won't (quite) all fit even in AUX RAM.
The good news is that with some further work it should be possible
to reduce this footprint by having opcodes share their implementation,
JMPing into a common tail sequence.
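Purely as an illustration of the shape of the opcode set (the symbol
naming follows the op_tick_*_page_* pattern mentioned earlier, but the
level-to-duty-cycle mapping here is an assumption, not the real table):

    def fat_opcode_symbol(page: int, audio_level: int) -> str:
        """Pick the per-page opcode variant for a 5-bit audio sample."""
        assert 0 <= audio_level < 32  # 5-bit audio -> 32 variants per page
        duty = 4 + 2 * audio_level    # duty cycle grows in units of 2 cycles
        return "op_tick_%d_page_%d" % (duty, page)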
Also introduce some ticks in approximately correct places during the
ACK slow path, as a proof of concept that this does mitigate the
clicking.
This works and gives reasonable quality audio!
weight of diffs that persist across multiple frames.
For each frame, zero out the update priority of bytes that no longer
have a pending diff, and add the edit distance of the remaining diffs.
Zero these out as opcodes are retired.
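In numpy terms the per-frame bookkeeping is roughly (array names are
assumptions):

    import numpy as np

    def update_priorities(update_priority: np.ndarray,
                          diff_weights: np.ndarray) -> np.ndarray:
        """diff_weights holds the edit distance of each byte's remaining
        diff (0 where the byte already matches the target)."""
        update_priority[diff_weights == 0] = 0  # no pending diff -> drop priority
        update_priority += diff_weights         # persistent diffs gain weight
        return update_priority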
Replace Hamming distance with the Damerau-Levenshtein distance of the
encoded pixel colours in the byte, e.g. 0x2A --> GGG0 (taking into
account the half-pixel); see the sketch after this list.
This has a couple of benefits over Hamming distance of the bit patterns:
- transposed pixels are weighted less (edit distance 1, not 2+ for
Hamming)
- coloured pixels are weighted the same as white pixels (not half as
much)
- changes to the palette bit that flip multiple pixel colours are
weighted accordingly
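A sketch of the colour encoding under some assumptions of mine (pixel
bits taken LSB-first and grouped into three 2-bit colour pixels plus a
trailing half-pixel, bit 7 as the palette bit, lone lit bits shown as
'G'/'O' and lit pairs as white 'W'; the real encoder distinguishes more
cases).  It reproduces the example above, 0x2A -> "GGG0":

    def pixel_colours(b: int) -> str:
        palette = (b >> 7) & 1
        colour = "GO"[palette]                   # palette 0 vs palette 1
        bits = [(b >> i) & 1 for i in range(7)]  # 7 pixel bits, LSB first
        out = []
        for i in range(0, 6, 2):
            lit = bits[i] + bits[i + 1]
            out.append({0: "0", 1: colour, 2: "W"}[lit])
        out.append("0" if bits[6] == 0 else colour)  # trailing half-pixel
        return "".join(out)

    assert pixel_colours(0x2A) == "GGG0"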
While I'm here, the RLE opcode should emit run_length - 1 so that we
can encode runs of 256 bytes.