of all possible weights. We encode the two 8-bit inputs into a single
16-bit value instead of dealing with an array of tuples.
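For example, something along these lines (a sketch; the array names and
the weight_table lookup are illustrative, not the actual code):

    import numpy as np

    # Two hypothetical 8-bit inputs, e.g. (old, new) byte values.
    old = np.array([0x2A, 0xFF, 0x00], dtype=np.uint8)
    new = np.array([0x15, 0x7F, 0x80], dtype=np.uint8)

    # Pack each pair into one 16-bit key: high byte = old, low byte = new.
    packed = (old.astype(np.uint16) << 8) | new

    # The 16-bit key can index a precomputed table of all 2**16 pairwise
    # weights, e.g. weights = weight_table[packed] with weight_table of
    # shape (65536,).

    # Unpacking recovers the original pair when needed.
    old_again = (packed >> 8).astype(np.uint8)
    new_again = (packed & 0xFF).astype(np.uint8)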
Fix an important bug in _compute_delta: old and is_odd were transposed
so we weren't actually subtracting the old deltas! Surprisingly,
when I accidentally fixed this bug in the vectorized version, the video
encoding was much worse! This turned out to be because the edit
distance metric allowed reducing diffs by turning on pixels, which
meant it would tend to do this when "minimizing error" in a way that
was visually unappealing.
To remedy this, introduce a separate notion of substitution cost for
errors, and weight pixel colour changes more highly to discourage them
unless absolutely necessary. This gives very good quality results!
Also vectorize the selection of page offsets and priorities that have
a negative error delta, instead of heapifying the entire page.
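Roughly, the vectorized selection looks something like this (array names
are assumptions, not the real interface):

    import numpy as np

    def candidate_offsets(error_delta: np.ndarray,
                          priorities: np.ndarray) -> np.ndarray:
        """Offsets in a page with a negative error delta, without heapifying.

        error_delta and priorities are assumed to be 256-element int64
        arrays indexed by page offset.
        """
        # Only offsets that would reduce error are candidates.
        offsets = np.flatnonzero(error_delta < 0)
        # Most-negative delta first, breaking ties by higher priority.
        order = np.lexsort((-priorities[offsets], error_delta[offsets]))
        return offsets[order]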
Also it turns out to be a bit faster to compute (and memoize) the delta
between a proposed content byte and the entire target screen at once,
since we'll end up recomputing the same content diffs multiple times.
(Committing messy version in case I want to revisit some of those
interim versions)
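The memoization can be as simple as a per-content-byte cache (a sketch;
TARGET and DIST stand in for the encoder's real state):

    import functools
    import numpy as np

    TARGET = np.zeros(8192, dtype=np.uint8)  # flattened 8K target screen (assumed)
    DIST = np.zeros(65536, dtype=np.int64)   # byte-to-byte distances, keyed by
                                             # (content << 8) | target_byte (assumed)

    @functools.lru_cache(maxsize=256)
    def content_deltas(content: int) -> np.ndarray:
        """Distance from one proposed content byte to every target screen byte.

        Cached because the same content byte is proposed many times per
        frame; callers should treat the returned array as read-only.
        """
        return DIST[(content << 8) | TARGET.astype(np.uint16)]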
has ticked past the appropriate time.
- optimize the frame encoding a bit
- use int64 consistently to avoid casting
Fix a bug - when retiring an offset, also update our memory map with the
new content, oops
If we run out of changes to index, keep emitting stores for content
at page=32,offset=0 forever
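A sketch of that fallback as a generator (names are illustrative, and
memory_map.get() is an assumed accessor for the current screen contents):

    def opcode_stream(changes, memory_map):
        """Yield (page, offset, content) stores and never run dry.

        changes is assumed to be an iterable of (page, offset, content)
        tuples in priority order.
        """
        for change in changes:
            yield change
        # Out of real changes: keep the stream alive by re-storing whatever
        # is already at page=32, offset=0, which leaves the screen unchanged.
        while True:
            yield (32, 0, memory_map.get(32, 0))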
Switch to a weighted D-L implementation so we can weight e.g. different
substitutions differently (e.g. weighting diffs to/from black pixels
differently than color errors)
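Something along these lines (a sketch of the optimal string alignment
variant; the actual substitution weights used by the encoder are not
shown here):

    def weighted_edit_distance(a: str, b: str, sub_cost) -> float:
        """Weighted Damerau-Levenshtein (optimal string alignment) distance.

        sub_cost(x, y) is the cost of substituting symbol x with y;
        insertion, deletion and transposition are fixed at cost 1 here for
        simplicity.
        """
        n, m = len(a), len(b)
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = i
        for j in range(1, m + 1):
            d[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0.0 if a[i - 1] == b[j - 1] else sub_cost(a[i - 1], b[j - 1])
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
                if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                        and a[i - 2] == b[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[n][m]

    # e.g. charge twice as much when a pixel turns on or off as for a
    # colour-to-colour error:
    #   weighted_edit_distance("GGG0", "0GG0",
    #                          lambda x, y: 2 if "0" in (x, y) else 1)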
- Introduce a new Movie() class that multiplexes audio and video (a
sketch follows this list).
- Every N audio frames we grab a new video frame and begin pulling
opcodes from the audio and video streams
- Grab frames from the input video using bmp2dhr if the .BIN file does
not already exist. Run bmp2dhr in a background thread to not block
encoding
- move the output byte streaming from Video to Movie
- For now, manually clip updates to pages > 56 since the client doesn't
support them yet
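In rough outline the multiplexing looks like this (class and method names
here are illustrative, not the real API):

    import itertools

    class Movie:
        """Multiplexes audio and video into a single opcode stream (sketch)."""

        def __init__(self, audio_frames, video, frames_per_video_update,
                     video_opcodes_per_audio_frame):
            self.audio_frames = audio_frames  # iterable of audio opcode lists (assumed)
            self.video = video                # assumed: next_frame()/encode_frame()
            self.frames_per_video_update = frames_per_video_update
            self.video_opcodes_per_audio_frame = video_opcodes_per_audio_frame

        def emit_stream(self):
            video_opcodes = iter(())
            for i, audio_opcodes in enumerate(self.audio_frames):
                if i % self.frames_per_video_update == 0:
                    # Every N audio frames, grab a new video frame and begin
                    # pulling opcodes from its encoding.
                    frame = self.video.next_frame()
                    video_opcodes = self.video.encode_frame(frame)
                yield from audio_opcodes
                yield from itertools.islice(
                    video_opcodes, self.video_opcodes_per_audio_frame)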
The way we encode video is now (see the sketch after this list):
- iterate in descending order over update_priority
- begin a new (page, content) opcode
- for all of the other offset bytes in that page, compute the error
between the candidate content byte and the target content byte
- iterate over offsets in order of increasing error and decreasing
update_priority to fill out the remaining opcode
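A condensed sketch of that loop (array names, shapes and the fixed
opcode size are assumptions):

    import numpy as np

    def encode_frame(update_priority, memory, target, dist, opcode_size=4):
        """Sketch of the per-frame encoding loop described above.

        update_priority, memory and target are assumed to be (32, 256)
        arrays indexed by (page, offset); update_priority is int64; dist
        is a (256, 256) table of byte-to-byte errors.
        """
        while update_priority.any():
            # The highest-priority diff decides the next (page, content) opcode.
            page, offset = np.unravel_index(update_priority.argmax(),
                                            update_priority.shape)
            content = target[page, offset]

            # Error of writing this content byte at every offset in the page.
            errors = dist[content, target[page]]

            # Fill the opcode with offsets in order of increasing error,
            # breaking ties by decreasing update_priority.
            order = np.lexsort((-update_priority[page], errors))
            chosen = order[:opcode_size]
            yield (page, content, chosen)

            # Retire the chosen offsets: update memory, clear their priority.
            memory[page, chosen] = content
            update_priority[page, chosen] = 0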
weight of diffs that persist across multiple frames.
For each frame, zero out the update priority of bytes that no longer have
a pending diff, and add the edit distance of the remaining diffs.
Priorities are also zeroed as their opcodes are retired.
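As a numpy sketch (array names and shapes are assumptions):

    import numpy as np

    def accumulate_priorities(update_priority, memory, target, dist):
        """Per-frame update_priority bookkeeping.

        All arrays are assumed to be (32, 256), indexed by (page, offset);
        dist[a, b] is the edit distance between byte values a and b.
        """
        diffs = dist[memory, target]
        # Bytes that no longer differ lose any accumulated priority ...
        update_priority[diffs == 0] = 0
        # ... while persistent diffs accumulate weight across frames.
        update_priority += diffs
        return update_priority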
Replace Hamming distance with Damerau-Levenshtein distance of the
encoded pixel colours in the byte, e.g. 0x2A --> GGG0, taking into
account the half-pixel (a decoding sketch follows the list below).
This has a few benefits over Hamming distance of the bit patterns:
- transposed pixels are weighted less (edit distance 1, not 2+ for
Hamming)
- coloured pixels are weighted the same as white pixels (not half as
much)
- changes to the palette bit that flip multiple pixel colours are
weighted accordingly
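For illustration, a simplified decoding that reproduces the 0x2A --> GGG0
example; the real mapping also has to account for odd/even column phase,
which is glossed over here. Strings like these are what the weighted edit
distance above operates on.

    def byte_to_colour_string(byte: int) -> str:
        """Decode one screen byte into pixel colour symbols (simplified)."""
        palette = (byte >> 7) & 1
        bits = [(byte >> i) & 1 for i in range(7)]  # 7 data bits, LSB first
        colours = []
        for i in range(0, 6, 2):
            pair = (bits[i], bits[i + 1])
            if pair == (0, 0):
                colours.append('0')                           # black
            elif pair == (1, 1):
                colours.append('W')                           # white
            elif pair == (0, 1):
                colours.append('G' if palette == 0 else 'O')  # green / orange
            else:
                colours.append('V' if palette == 0 else 'B')  # violet / blue
        colours.append(str(bits[6]))                          # trailing half-pixel
        return ''.join(colours)

    # byte_to_colour_string(0x2A) == 'GGG0', matching the example above.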
While I'm here, the RLE opcode should emit run_length - 1 so that we
can encode runs of 256 bytes.
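A sketch of the convention (RLE_OPCODE is a placeholder, and the real
opcode layout may carry more fields):

    RLE_OPCODE = 0x02  # placeholder value, not the real opcode byte

    def encode_rle(content: int, run_length: int) -> bytes:
        """Store run_length - 1 so runs of 1..256 fit in a single byte."""
        assert 1 <= run_length <= 256
        return bytes([RLE_OPCODE, content, run_length - 1])

    # The client recovers the true length as stored_byte + 1.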
bytemap, (page,offset) memory map)
- add a FlatMemoryMap that is a linear 8K array (sketched below)
- add converter methods and default constructors that allow converting
between them
- use MemoryMap as the central representation used by the video encoder
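A minimal sketch of the two representations and the conversions between
them (method names are illustrative):

    import numpy as np

    class FlatMemoryMap:
        """Screen memory as one linear 8K array."""

        def __init__(self, data=None):
            self.data = np.zeros(8192, dtype=np.uint8) if data is None else data

        def to_memory_map(self):
            # (page, offset) view: 32 pages of 256 bytes each.
            return MemoryMap(self.data.reshape((32, 256)))

    class MemoryMap:
        """Screen memory as a (page, offset) array; used by the video encoder."""

        def __init__(self, page_offset=None):
            self.page_offset = (np.zeros((32, 256), dtype=np.uint8)
                                if page_offset is None else page_offset)

        def to_flat_memory_map(self):
            return FlatMemoryMap(self.page_offset.reshape(8192))

        def write(self, page, offset, value):
            self.page_offset[page, offset] = value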
opcodes until the cycle budget for the frame is exhausted.
Output stream is also now aware of TCP framing, and schedules an ACK
opcode every 2048 output bytes to instruct the client to perform
TCP ACK and buffer management.
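Roughly (a sketch; ACK_OPCODE is a placeholder, and the real stream
aligns ACKs exactly on the 2 KB TCP framing boundary rather than
approximately as here):

    ACK_OPCODE = b"\x01"  # placeholder, not the real opcode encoding

    def stream_with_acks(opcode_bytes, ack_every=2048):
        """Interleave an ACK opcode roughly every ack_every output bytes.

        opcode_bytes is assumed to be an iterable of encoded opcode byte
        strings.
        """
        since_ack = 0
        for chunk in opcode_bytes:
            yield chunk
            since_ack += len(chunk)
            if since_ack >= ack_every:
                # Instruct the client to TCP ACK and manage its socket buffer.
                yield ACK_OPCODE
                since_ack -= ack_every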
Fixes several serious bugs in RLE encoding, including:
- we were emitting the RLE opcode with the next content byte after the
run completed!
- we were looking at the wrong field for the start offset!
- handle the case where the entire page is a single run
- stop allowing error to accumulate during RLE encoding, since this does
not respect the Apple II colour encoding and may introduce colour
fringing.
- also because of this we're unlikely to find many runs, since odd and
even columns are encoded differently. In a follow-up we should start
encoding odd and even columns separately.
Optimize after profiling: the encoder is now about 2x faster.
Add tests.