Add a test case verifying that bmp2dhr output filenames are handled
correctly when the input filename contains '.'.
Break out video.Mode into video_mode.VideoMode to resolve circular
dependency.
- Extract a (File)FrameSequencer class from Video to encapsulate the
generation of still frames. This also makes Video easier to test.
- Fix FileFrameSequencer.frames() to correctly handle filenames
containing '.' (see the sketch after this list).
- Temporarily switch to the BMP2DHR NTSC palette (#5) for evaluation.
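A minimal sketch of the '.'-handling fix, assuming frames() derives
per-frame output names from the input path (names here are
illustrative, not the actual FileFrameSequencer API):

    import os

    def _frame_basename(filename: str) -> str:
        # os.path.splitext strips only the final extension, so an input
        # like "my.video.clip.mp4" keeps its embedded '.' characters
        # instead of being truncated at the first one.
        return os.path.splitext(os.path.basename(filename))[0]

    assert _frame_basename("/tmp/my.video.clip.mp4") == "my.video.clip"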
Video:
- Temporarily hardcode DHGR decoding
- Optimize _heapify_priorities() by using numpy to vectorize
construction of the list of tuples (see the sketch after this list).
This requires changing the random nonce to an int so the intermediate
array has a uniform type.
- Use the efficient 28-bit representation of DHGR (aux, main, aux,
main) tuples introduced in DHGRBitmap to evaluate diffs
- Switch to np.int type for accumulating diffs, and random.randint(0,
10000) instead of float for nonce values.
- Fix/improve some of the error evaluation in _index_changes:
- skip offsets whose diffs have already been cleared
- hoist some computation out of _compute_error into the parent
- Add validation that once we run out of work to do for a frame, the
source and target memory maps should be equal. This sometimes fails,
i.e. there is a bug.
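A minimal sketch of the vectorized heap construction mentioned above;
array names and shapes are assumptions, and the real
_heapify_priorities() may differ in detail:

    import heapq
    import numpy as np

    def heapify_priorities(diff_weights: np.ndarray) -> list:
        # diff_weights: illustrative flat int array of per-offset diffs.
        offsets = np.nonzero(diff_weights)[0]
        priorities = diff_weights[offsets]
        # Integer nonces keep the stacked array a uniform dtype and
        # break ties between equal priorities.
        nonces = np.random.randint(0, 10000, size=offsets.shape[0])
        # Negate priorities so heapq's min-heap pops the largest diff first.
        entries = np.column_stack([-priorities, nonces, offsets])
        heap = [tuple(row) for row in entries]
        heapq.heapify(heap)
        return heap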
Add a new DHGRBitmap class that efficiently represents the
DHGR interleaving of the (aux, main) MemoryMap as a sequence of
28-bit integers.
This makes it easy to extract the 8-bit and 12-bit subsequences
representing the DHGR pixels that are influenced when storing a byte
at offsets 0..3 within the interleaved (aux, main, aux, main)
sequence.
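A minimal sketch of the packed representation and the per-offset
extraction it enables; the masks and shifts assume 7 data bits per
screen byte and 4-bit DHGR pixels, and the actual DHGRBitmap
constants may differ:

    # Pack four screen bytes (aux, main, aux, main), 7 data bits each,
    # into one 28-bit integer.
    def pack(aux0: int, main0: int, aux1: int, main1: int) -> int:
        return ((aux0 & 0x7f)
                | (main0 & 0x7f) << 7
                | (aux1 & 0x7f) << 14
                | (main1 & 0x7f) << 21)

    # A byte at offset 0 or 3 influences 8 bits of pixels; a byte at
    # offset 1 or 2 straddles pixels on both sides and influences 12 bits.
    MASKS = [0x000000ff, 0x0000fff0, 0x00fff000, 0x0ff00000]
    SHIFTS = [0, 4, 12, 20]

    def influenced_pixels(packed: int, offset: int) -> int:
        # Extract the 8- or 12-bit pixel subsequence affected by storing
        # a byte at the given offset in the (aux, main, aux, main) group.
        return (packed & MASKS[offset]) >> SHIFTS[offset]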
Since we have precomputed all of the pairwise differences between
these 8- and 12-bit values, we can efficiently compute the edit
distance between pairs of screen bytes (and/or arrays), i.e. between
the (up to 3-pixel) sequences that may be modified when storing bytes
to the DHGR display.
This relies on producing an efficient linear representation of the
DHGR framebuffer in terms of a packed 28-bit representation of (Aux,
Main, Aux, Main) screen bytes.
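A minimal sketch of how the precomputed pairwise differences might be
consulted, reusing influenced_pixels from the sketch above; the table
layout and the distance metric are assumptions:

    import numpy as np

    # Hypothetical precomputed tables: entry (old << N) | new holds the
    # distance between the pixel sequences encoded by old and new.
    edit_distance_8 = np.zeros(1 << 16, dtype=np.uint16)   # filled offline
    edit_distance_12 = np.zeros(1 << 24, dtype=np.uint16)  # filled offline

    def byte_diff(old_packed: int, new_packed: int, offset: int) -> int:
        old_bits = influenced_pixels(old_packed, offset)
        new_bits = influenced_pixels(new_packed, offset)
        if offset in (0, 3):
            return int(edit_distance_8[(old_bits << 8) | new_bits])
        return int(edit_distance_12[(old_bits << 12) | new_bits])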
- Every time we process an ACK opcode, toggle page 1/page 2 soft
switches to steer subsequent writes between MAIN and AUX memory
- while I'm here, squeeze out some unnecessary operations from the
buffer management
On the player side, this is implemented by maintaining two screen
memory maps, and alternating between opcode streams for each of them.
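A minimal encoder-side sketch of that alternation; the class and
method names are illustrative only:

    class DoubleBufferedScreens:
        # Track which of the two screen memory maps the next opcodes target.
        def __init__(self, main_map, aux_map):
            self.maps = [main_map, aux_map]
            self.page = 0  # 0 = MAIN, 1 = AUX

        def on_ack(self):
            # Mirrors the player toggling the page 1/page 2 soft switch
            # when it processes an ACK opcode.
            self.page ^= 1

        def current(self):
            return self.maps[self.page]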
This is using entirely the wrong colour model for errors, but
surprisingly it already works pretty well in practice (and the frame
rate is acceptable on test videos).
DHGR/HGR could be made runtime selectable by adding a header byte that
determines whether to set the DHGR soft switches before initiating
the decode loop.
While I'm in here, fix op_terminate to clear keyboard strobe before
waiting.
- can't emit Terminate opcode in the middle of the bytestream
- pad the TCP stream to next 2k boundary when emitting terminate opcode,
since the player will block until receiving this much data.
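A minimal sketch of the padding, assuming the encoder tracks its
stream position in bytes:

    def terminate_padding(stream_pos: int, block_size: int = 2048) -> bytes:
        # The player reads fixed 2KB chunks, so pad out to the next block
        # boundary after the terminate opcode; the zero filler value here
        # is illustrative.
        return bytes(-stream_pos % block_size)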
Clean up naming in edit_distance
In the video encoder, when we emit additional offsets as part of an
opcode, reinsert them into the priority heapq if their new edit
distance is nonzero, in case we get the chance to fix them up later
in the frame. Also make sure to zero out the diff_weights and
content_deltas for these offsets so we don't consider them again as a
side effect of some other opcode.
Instead of prioritizing side-effect offsets by their previous update
priority, prioritize those with the lowest (error - edit) delta, i.e.
those that don't introduce too much error relative to the edit
distance they repair.
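A minimal sketch of both behaviours (requeueing partially fixed
offsets, and ordering candidates by (error - edit) delta); names and
array shapes are assumptions, not the encoder's actual API:

    import heapq

    def choose_side_effect_offsets(candidates, content_deltas,
                                   diff_weights, count=3):
        # Prefer offsets that introduce the least error relative to the
        # edit distance they repair, i.e. the lowest (error - edit) delta.
        return sorted(candidates,
                      key=lambda o: content_deltas[o] - diff_weights[o])[:count]

    def account_for_emitted_offset(heap, offset, new_edit_distance,
                                   diff_weights, content_deltas, nonce):
        if new_edit_distance > 0:
            # Not fully fixed by this store; requeue so it can be
            # revisited later in the frame.
            heapq.heappush(heap, (-new_edit_distance, nonce, offset))
        # Either way, don't pick this offset up again as a side effect
        # of another opcode this frame.
        diff_weights[offset] = 0
        content_deltas[offset] = 0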