has ticked past the appropriate time.
- optimize the frame encoding a bit
- use int64 consistently to avoid casting
Fix a bug: when retiring an offset, also update our memory map with the
new content.
If we run out of changes to index, keep emitting stores for content
at page=32,offset=0 forever
Switch to a weighted Damerau-Levenshtein (D-L) implementation so we can
assign different weights to different substitutions (e.g. weighting diffs
to/from black pixels differently from color errors).
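The commit doesn't include the code here, so the following is only a minimal
sketch of a weighted Damerau-Levenshtein distance of this shape (the
restricted "optimal string alignment" variant); substitution_cost and its
weights are illustrative placeholders, not the encoder's actual cost model.

    def substitution_cost(a: int, b: int) -> float:
        # Placeholder weights: penalise diffs to/from black (0x00) bytes more
        # heavily than other (color) errors.
        if a == b:
            return 0.0
        if a == 0x00 or b == 0x00:
            return 2.0
        return 1.0

    def weighted_damerau_levenshtein(s: bytes, t: bytes) -> float:
        """Edit distance with weighted substitutions and transpositions."""
        m, n = len(s), len(t)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = d[i - 1][0] + 1.0                    # deletion
        for j in range(1, n + 1):
            d[0][j] = d[0][j - 1] + 1.0                    # insertion
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = substitution_cost(s[i - 1], t[j - 1])
                d[i][j] = min(
                    d[i - 1][j] + 1.0,                     # deletion
                    d[i][j - 1] + 1.0,                     # insertion
                    d[i - 1][j - 1] + cost,                # weighted substitution
                )
                if (i > 1 and j > 1 and s[i - 1] == t[j - 2]
                        and s[i - 2] == t[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1.0)  # transposition
        return d[m][n]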
- Introduce a new Movie() class that multiplexes audio and video (see the
  sketch after this list).
- Every N audio frames we grab a new video frame and begin pulling
opcodes from the audio and video streams
- Grab frames from the input video using bmp2dhr if the .BIN file does
  not already exist. Run bmp2dhr in a background thread so it doesn't
  block encoding
- move the output byte streaming from Video to Movie
- For now, manually clip updates to pages > 56 since the client doesn't
support them yet
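The interfaces used below (frames(), opcodes(), next_frame(), encode_frame())
and the default parameters are assumptions made for illustration, not the
project's actual API; this is just a sketch of the multiplexing scheme
described above.

    import itertools

    class Movie:
        """Multiplexes audio and video opcode streams into a single output."""

        def __init__(self, audio, video, video_every_n_audio_frames=4,
                     video_opcodes_per_audio_frame=32):
            self.audio = audio
            self.video = video
            self.every_n = video_every_n_audio_frames
            self.per_audio_frame = video_opcodes_per_audio_frame

        def opcodes(self):
            video_opcodes = iter(())
            for i, audio_frame in enumerate(self.audio.frames()):
                if i % self.every_n == 0:
                    # Every N audio frames, grab a new video frame and begin
                    # pulling opcodes from its encoding.
                    video_opcodes = self.video.encode_frame(self.video.next_frame())
                yield from audio_frame.opcodes()
                # Interleave a bounded slice of video opcodes with each audio frame.
                yield from itertools.islice(video_opcodes, self.per_audio_frame)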
The way we encode video is now (see the sketch after this list):
- iterate in descending order over update_priority
- begin a new (page, content) opcode
- for all of the other offset bytes in that page, compute the error
between the candidate content byte and the target content byte
- iterate over offsets in order of increasing error and decreasing
update_priority to fill out the remaining opcode
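A rough sketch of one step of this loop, under several assumptions: int64
numpy arrays of shape (pages, 256) for update_priority and for the current and
target memory contents, a fixed number of offset slots per opcode, and plain
absolute byte difference standing in for the real error metric.

    import numpy as np

    def next_opcode(update_priority, memory, target, opcode_len=4):
        """Open a (page, content) opcode at the highest-priority change, then
        fill the remaining slots from the same page in order of increasing
        error and decreasing update_priority."""
        page, offset = np.unravel_index(np.argmax(update_priority),
                                        update_priority.shape)
        content = target[page, offset]

        # Error between the candidate content byte and each target byte on
        # this page (placeholder metric).
        error = np.abs(target[page] - np.int64(content))

        # Offsets sorted by (error ascending, update_priority descending),
        # keeping only offsets that still need an update.
        order = np.lexsort((-update_priority[page], error))
        offsets = [o for o in order if update_priority[page, o] > 0][:opcode_len]

        for o in offsets:
            memory[page, o] = content          # apply the store to our model
            if memory[page, o] == target[page, o]:
                update_priority[page, o] = 0   # retire fully-corrected offsets
        return page, content, offsets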
bytemap, (page,offset) memory map)
- add a FlatMemoryMap that is a linear 8K array (see the sketch after
  this list)
- add converter methods and default constructors that allow converting
between them
- use MemoryMap as the central representation used by the video encoder
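A minimal sketch of the two representations and the converters between them,
assuming numpy-backed storage and an 8K screen laid out as 32 pages x 256
bytes; the class names mirror the ones above but the details are illustrative.

    import numpy as np

    class FlatMemoryMap:
        """Linear 8K (8192-byte) view of the screen memory."""

        def __init__(self, data=None):
            self.data = np.zeros(8192, dtype=np.uint8) if data is None else data

        def to_memory_map(self):
            return MemoryMap(self.data.reshape((32, 256)))

    class MemoryMap:
        """(page, offset) view of the same 8K of screen memory."""

        def __init__(self, page_offset=None):
            self.page_offset = (np.zeros((32, 256), dtype=np.uint8)
                                if page_offset is None else page_offset)

        def to_flat_memory_map(self):
            return FlatMemoryMap(self.page_offset.reshape(8192))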
Prototype a threaded version of the decoder but this doesn't seem to be
necessary as it's not the bottleneck.
The opcode stream is now aware of the frame cycle budget and will keep
emitting until the budget runs out -- so no need for a fullness estimate.
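In sketch form (emit_frame and the op.cycles attribute are illustrative
names, not the actual interface):

    def emit_frame(opcodes, cycle_budget):
        """Yield opcodes until the frame's cycle budget is exhausted."""
        cycles_left = cycle_budget
        for op in opcodes:
            if op.cycles > cycles_left:
                break                 # budget exhausted: stop emitting
            cycles_left -= op.cycles
            yield op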
for runs of N >= 4.
Also fix a bug in the decoder that was apparently allowing opcodes to
fall through. Replace BVC with BRA (i.e. assume 65C02) until I can work
out what is going on
Use a TSP solver to minimize the cycle cost of visiting all changes in our
estimated list.
This is fortunately tractable (though slow) to compute, and gives ~6% better
throughput than the previous heuristic.
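The commit doesn't say how the tour is solved, so as a rough illustration of
the idea only, here is a greedy nearest-neighbour approximation over a
caller-supplied cycle-cost function (all names are placeholders):

    def schedule_changes(changes, cycle_cost):
        """Order changes by repeatedly visiting the pending change that is
        cheapest (in cycles) to reach from the current one.

        cycle_cost(a, b) estimates the cycles to emit change b immediately
        after change a, and must accept a=None for the first pick (i.e.
        full setup cost)."""
        pending = list(changes)
        tour = []
        current = None
        while pending:
            nxt = min(pending, key=lambda c: cycle_cost(current, c))
            pending.remove(nxt)
            tour.append(nxt)
            current = nxt
        return tour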
This opcode schedule prefers to group by page and vary the content byte
within each page, so implement a fast heuristic that does that. This
scheduler is within 2% of the TSP solution.
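A sketch of that heuristic, assuming pending changes are represented as
(page, content, offset) tuples (an assumption made for illustration):

    import itertools

    def fast_schedule(changes):
        """Group pending changes by page, then by content byte within the
        page, so consecutive stores can share a (page, content) opcode."""
        ordered = sorted(changes, key=lambda c: (c[0], c[1]))
        for (page, content), group in itertools.groupby(
                ordered, key=lambda c: (c[0], c[1])):
            yield page, content, [offset for _, _, offset in group]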
As a bonus, we now maintain much better tracking of our target frame rate.
Maintain a running estimate of the opcode scheduling overhead, i.e.
how many opcodes we end up scheduling for each content byte written.
Use this to select an estimated number of screen changes to fill the
cycle budget, ordered by hamming weight of the delta. Group these
by content byte and then page as before.
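A sketch of the bookkeeping, with placeholder names and constants (the
per-opcode cycle figure and the smoothing factor are illustrative, not
measured values):

    def update_overhead(overhead_ema, opcodes_emitted, content_bytes_written,
                        alpha=0.1):
        """Running (exponentially weighted) estimate of opcodes scheduled
        per content byte actually written."""
        observed = opcodes_emitted / max(content_bytes_written, 1)
        return (1 - alpha) * overhead_ema + alpha * observed

    def select_changes(deltas, cycle_budget, overhead_ema, cycles_per_opcode=10):
        """Pick roughly enough screen changes to fill the frame's cycle budget.

        deltas: (hamming_weight, page, content, offset) tuples for bytes that
        differ between the current memory map and the target frame."""
        est_cycles_per_change = cycles_per_opcode * overhead_ema
        budget_changes = int(cycle_budget / est_cycles_per_change)
        # Largest deltas (by hamming weight of the change) first.
        ranked = sorted(deltas, key=lambda d: d[0], reverse=True)
        selected = ranked[:budget_changes]
        # Group by content byte and then page, as before.
        return sorted(selected, key=lambda d: (d[2], d[1]))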
Order changes by hamming weight of the xor of old and new frames, and switch
to setting the new byte directly instead of xor'ing, to improve efficiency of
the decoder.
Instead of iterating in a fixed order by target byte then page, at each step
compute the next change that would maximize pixels/cycle (i.e. minimize the
cycle cost per pixel transition), including switching page and/or content
byte.
This is unfortunately much slower to encode currently, but can hopefully
be optimized sufficiently.
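As a sketch of the greedy step (the cycle costs below are placeholders, not
measured values, and the candidate tuple format is assumed for illustration):

    def best_next_opcode(candidates, current_page, current_content):
        """Pick the candidate (page, content, offsets, pixel_transitions)
        opcode with the best pixel-transitions-per-cycle ratio, charging
        extra cycles when it has to switch page and/or content byte."""
        def efficiency(c):
            page, content, offsets, pixels = c
            cycles = 5 * len(offsets)     # placeholder: stores to the page
            if page != current_page:
                cycles += 10              # placeholder: switch page
            if content != current_content:
                cycles += 4               # placeholder: reload content byte
            return pixels / cycles
        return max(candidates, key=efficiency)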
Improve the output bytestream by prioritizing bytes to be XOR'ed that have
the highest hamming weight, i.e. those that will result in the largest number
of pixel transitions on the screen.
Not especially optimized yet (either runtime or byte stream).
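A small sketch of that prioritization, assuming uint8 numpy arrays of shape
(pages, 256) for the old and new frames (the array layout is an assumption
made for illustration):

    import numpy as np

    def prioritised_deltas(old, new):
        """(page, offset) indices of differing bytes, ordered by descending
        hamming weight of the XOR delta, i.e. most pixel transitions first."""
        delta = np.bitwise_xor(old, new)
        weights = np.unpackbits(delta.reshape(-1, 1), axis=1).sum(axis=1)
        weights = weights.reshape(delta.shape)
        pages, offsets = np.nonzero(delta)
        order = np.argsort(weights[pages, offsets])[::-1]
        return list(zip(pages[order], offsets[order]))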