introduce an attempt at post-processing the colour artefacting that
coalesces adjacent '1' bits into white pixels. This is an incomplete
model even of this artefact, let alone the various other fringing
weirdness that happens with e.g. NTSC rendering (which is not
faithfully reproduced by the Apple //GS RGB display, so it is hard for
me to test further)
- Commented-out experimentation with using the hitherdither library for
Yliluoma dithering (see
https://bisqwit.iki.fi/story/howto/dither/jy/) instead of the
error-diffusion-based BMP2DHR, which introduces a lot of noise between
frames since it is easily perturbed. (See the sketch after this list.)
Unfortunately, apart from being extremely slow, it also doesn't give
good results, even for the (simulated) DHGR palette. There's a lot of
banding, and for HGR the available colours are just too far apart in
colour space.
This is even without (somehow) applying the HGR colour constraints.
- Also return the priority from _compute_error, in preparation for
reinserting the offset back into the priority heap in case we can do
a better job later. To do this properly we need to compute both the
error edit distance and the "true" edit distance, and only insert the
priority of the latter.
- Change ACK code to perform two dummy stream reads rather than relying
on a preceding NOP to pad the TCP frame to 2K. This fixes the timing
issue that was causing most of the low-frequency ticks.
- Ticks still aren't perfectly aligned during the ACK slow path, but
it's almost good enough, i.e. there is probably no need to bother
optimizing the slow path further.
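A minimal sketch of that hitherdither experiment (the palette values
here are illustrative placeholders, not the ones actually tried):

    from PIL import Image
    import hitherdither

    # Rough DHGR-like palette as 0xRRGGBB ints; placeholder values only.
    palette = hitherdither.palette.Palette([
        0x000000, 0xdd0033, 0x000099, 0xdd22dd,
        0x007722, 0x555555, 0x2222ff, 0x66aaff,
        0x885500, 0xff6600, 0xaaaaaa, 0xff9988,
        0x11dd00, 0xffff00, 0x44ff99, 0xffffff,
    ])

    img = Image.open("frame.png").convert("RGB")
    # Yliluoma's ordered dithering (algorithm 1); very slow on full frames.
    dithered = hitherdither.ordered.yliluoma.yliluomas_1_ordered_dithering(
        img, palette, order=8)
    dithered.save("frame_dithered.png")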
of all possible weights. We encode the two 8-bit inputs into a single
16-bit value instead of dealing with an array of tuples.
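A minimal sketch of the packing, assuming numpy arrays of byte values
(function names are illustrative):

    import numpy as np

    def pack(source: np.ndarray, target: np.ndarray) -> np.ndarray:
        # Pack two uint8 values into one uint16 key: high byte = source,
        # low byte = target, so a precomputed table of weights can be
        # indexed by a single integer instead of a tuple.
        return (source.astype(np.uint16) << 8) | target.astype(np.uint16)

    def unpack(key: np.ndarray):
        return (key >> 8).astype(np.uint8), (key & 0xFF).astype(np.uint8)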
Fix an important bug in _compute_delta: old and is_odd were transposed,
so we weren't actually subtracting the old deltas! Surprisingly,
when I accidentally fixed this bug in the vectorized version, the video
encoding was much worse! This turned out to be because the edit
distance metric allowed reducing diffs by turning on pixels, which
meant it would tend to do this when "minimizing error" in a way that
was visually unappealing.
To remedy this, introduce a separate notion of substitution cost for
errors, and weight pixel colour changes more highly to discourage them
unless absolutely necessary. This gives very good quality results!
Also vectorize the selection of page offsets and priorities that have
a negative error delta, instead of heapifying the entire page.
It also turns out to be a bit faster to compute (and memoize) the delta
between a proposed content byte and the entire target screen at once,
since we end up recomputing the same content diffs multiple times.
(Committing messy version in case I want to revisit some of those
interim versions)
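A rough sketch of both ideas, with a stand-in byte metric (the real
code uses the weighted edit-distance tables; all names here are
illustrative):

    import numpy as np

    _POPCOUNT = np.array([bin(v).count("1") for v in range(256)])

    def byte_distance(content: int, row: np.ndarray) -> np.ndarray:
        # Stand-in metric: Hamming distance between byte values.
        return _POPCOUNT[np.uint8(content) ^ row.astype(np.uint8)]

    def improving_offsets(error_delta: np.ndarray) -> np.ndarray:
        # Offsets whose error delta is negative, most-improving first;
        # only these are candidates, so no need to heapify the whole page.
        candidates = np.flatnonzero(error_delta < 0)
        return candidates[np.argsort(error_delta[candidates])]

    # Per-frame memo: delta of a candidate content byte against the
    # entire target screen, computed once and reused.
    _delta_cache = {}

    def cached_content_delta(content: int, target: np.ndarray) -> np.ndarray:
        if content not in _delta_cache:
            _delta_cache[content] = byte_distance(content, target)
        return _delta_cache[content]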
has ticked past the appropriate time.
- optimize the frame encoding a bit
- use int64 consistently to avoid casting
Fix a bug - when retiring an offset, also update our memory map with the
new content, oops
If we run out of changes to index, keep emitting stores for content
at page=32, offset=0 forever.
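A minimal sketch of that fallback, assuming a stream of
(page, offset, content) stores and a memory_map lookup (names are
illustrative):

    from typing import Iterable, Iterator, Tuple

    Store = Tuple[int, int, int]  # (page, offset, content)

    def emit_stores(changes: Iterable[Store], memory_map) -> Iterator[Store]:
        # Drain the indexed changes, then keep the stream alive forever
        # by re-storing the current content at page=32, offset=0, which
        # is harmless on the client.
        for page, offset, content in changes:
            yield page, offset, content
        while True:
            yield 32, 0, memory_map[32][0]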
Switch to a weighted Damerau-Levenshtein implementation so we can weight
different substitutions differently (e.g. weighting diffs to/from black
pixels differently than colour errors).
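A sketch of what a weighted D-L (optimal string alignment) distance
with a pluggable substitution cost could look like; the cost values
below are illustrative, not the tuned weights:

    def weighted_edit_distance(a, b, sub_cost, ins_cost=1.0, del_cost=1.0,
                               transpose_cost=1.0):
        # Restricted Damerau-Levenshtein over two pixel sequences, with
        # a caller-supplied substitution cost.
        n, m = len(a), len(b)
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = d[i - 1][0] + del_cost
        for j in range(1, m + 1):
            d[0][j] = d[0][j - 1] + ins_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0.0 if a[i - 1] == b[j - 1] else sub_cost(a[i - 1], b[j - 1])
                d[i][j] = min(d[i - 1][j] + del_cost,   # delete
                              d[i][j - 1] + ins_cost,   # insert
                              d[i - 1][j - 1] + cost)   # substitute/match
                if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                        and a[i - 2] == b[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + transpose_cost)
        return d[n][m]

    def pixel_sub_cost(x, y, black=0):
        # Illustrative weights: diffs to/from black are cheaper than
        # changing one colour into another.
        return 1.0 if black in (x, y) else 2.0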
0x800-0x2000
Place some of the tick opcodes there. This gives enough room for all
but 2 of the op_tick_*_page_n opcodes!
It may be possible to fit the remaining ones into unused RAM in the
language card, but this will require some finesse to get the code in
there. Or maybe I can optimize enough bytes...
0x300 is used by the loader.system, but there is also still 0x400..0x800
if I don't mind messing up the text page, and 0x200 if I can
get away with using the keyboard buffer.
Something is broken with RESET now though, maybe the reset vector is
pointing somewhere orphaned.
- Introduce a new Movie() class that multiplexes audio and video (see
the sketch after this list).
- Every N audio frames we grab a new video frame and begin pulling
opcodes from the audio and video streams
- Grab frames from the input video using bmp2dhr if the .BIN file does
not already exist. Run bmp2dhr in a background thread so as not to
block encoding
- move the output byte streaming from Video to Movie
- For now, manually clip updates to pages > 56 since the client doesn't
support them yet
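A rough sketch of the shape of this, assuming details not spelled out
above (the class interfaces and the bmp2dhr command line are
illustrative):

    import os
    import subprocess
    import concurrent.futures

    class Movie:
        def __init__(self, audio, video, audio_frames_per_video_frame):
            self.audio = audio
            self.video = video
            self.audio_frames_per_video_frame = audio_frames_per_video_frame
            self._converter = concurrent.futures.ThreadPoolExecutor(
                max_workers=1)

        def _convert_frame(self, bmp_path):
            # Only run bmp2dhr if the .BIN file is missing, and do it on
            # a worker thread so encoding is not blocked.
            bin_path = os.path.splitext(bmp_path)[0] + ".BIN"
            if not os.path.exists(bin_path):
                return self._converter.submit(
                    subprocess.run, ["bmp2dhr", bmp_path], check=True)
            return None

        def opcodes(self):
            # Every N audio frames, grab a new video frame and start
            # pulling its opcodes (the real multiplexing interleaves the
            # two streams more finely than this).
            for i, audio_opcode in enumerate(self.audio.opcodes()):
                yield audio_opcode
                if i % self.audio_frames_per_video_frame == 0:
                    yield from self.video.frame_opcodes(self.video.next_frame())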
The way we encode video is now (sketched after this list):
- iterate in descending order over update_priority
- begin a new (page, content) opcode
- for all of the other offset bytes in that page, compute the error
between the candidate content byte and the target content byte
- iterate over offsets in order of increasing error and decreasing
update_priority to fill out the remaining opcode
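A rough Python sketch of this loop, assuming a (pages, 256)
update_priority array, a simple stand-in error metric, and a fixed
number of offsets per opcode (none of these details are spelled out
above):

    import numpy as np

    def encode_frame(update_priority, memory, target, offsets_per_opcode=4):
        opcodes = []
        # Iterate in descending order of update priority.
        for index in np.argsort(update_priority, axis=None)[::-1]:
            page, offset = divmod(int(index), 256)
            if update_priority[page, offset] == 0:
                continue  # already retired by an earlier opcode
            content = int(target[page, offset])
            # Begin a new (page, content) opcode at this offset.
            chosen = [offset]
            # Error between the candidate content byte and every other
            # target byte in the same page (stand-in metric).
            errors = np.abs(target[page].astype(int) - content)
            # Fill the remaining slots in order of increasing error and
            # decreasing update priority.
            for other in np.lexsort((-update_priority[page], errors)):
                if len(chosen) == offsets_per_opcode:
                    break
                if other not in chosen and update_priority[page, other] > 0:
                    chosen.append(int(other))
            for o in chosen:
                # Retiring an offset also updates the memory map.
                memory[page, o] = content
                update_priority[page, o] = 0
            opcodes.append((page, content, chosen))
        return opcodes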