Hi-Res is essentially a more constrained version of Double Hi-Res, in which only about half of the 560 horizontal screen pixels can be independently addressed.
In particular, an 8-bit byte in screen memory controls 14 or 15 screen pixels. Bits 0-6 are each doubled, and bit 7 shifts these 14 dots half a pixel to the right if set. In that case the last dot of the previous byte is repeated a third time.
This means that we have to optimize all 8 bits at once and move forward in increments of 14 screen pixels.
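The byte-to-dots mapping above can be sketched as follows (a minimal illustration, not the project's actual decoder; the function name is made up):

```python
def hires_byte_to_dots(byte, prev_byte=0):
    """Expand one Hi-Res screen byte into its 14 dot positions.

    Bits 0-6 each produce two identical dots; the palette bit (bit 7)
    delays the whole group by half a dot.
    """
    dots = []
    for bit in range(7):              # bits 0-6 are the visible pixels
        v = (byte >> bit) & 1
        dots += [v, v]                # each bit is doubled to two dots
    if byte & 0x80:                   # palette bit: shift half a dot right
        # The previous byte's last dot (its bit 6) is repeated a third
        # time to fill the gap; this byte's final dot spills over.
        fill = (prev_byte >> 6) & 1
        dots = [fill] + dots[:-1]
    return dots
```

This is why the optimizer has to treat the whole byte as one unit: flipping the palette bit moves all 14 dots at once.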
There's also a timing difference that results in a phase shift of the NTSC colour signal, which means the mappings from dot patterns to effective colours are rotated.
Error diffusion seems to give the best results if we only distribute about 2/3 of the quantization error according to the dither pattern.
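A sketch of the scaled error diffusion, using Floyd-Steinberg weights as a stand-in for whichever dither pattern is actually in use (names and signatures here are illustrative, not the project's API):

```python
import numpy as np

def dither_scaled(image, palette, error_fraction=2 / 3):
    """Error-diffusion dither that only propagates a fraction of the
    quantization error (Floyd-Steinberg weights, for illustration)."""
    img = image.astype(float).copy()
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            # nearest palette entry by squared distance
            idx = np.argmin(((palette - old) ** 2).sum(axis=1))
            out[y, x] = palette[idx]
            # diffuse only ~2/3 of the error to neighbouring pixels
            err = (old - palette[idx]) * error_fraction
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```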
- Preprocess the source image by dithering with the full 12-bit //gs
colour palette, ignoring SHR palette restrictions (i.e. each pixel
chosen independently from 4096 colours)
- Using this as the ground truth allows much better handling of
e.g. solid colours, which were being dithered inconsistently with
the previous approach
- Also when fitting an SHR palette, fix any colours that comprise more
than 10% of source pixels. This also encourages more uniformity in
regions of solid colour.
This is useful when used as part of an image repository build
pipeline, to avoid replacing existing images if the new score is
higher.
Hide intermediate output behind --verbose
reserved colours from the global palette, and pick unique random
points from the samples for the rest. This encourages a larger range
of colours in the resulting images and may improve quality.
Iterate a max number of times without improvement in the outer loop as
well.
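The outer-loop stopping rule is a simple patience counter, roughly like this (illustrative sketch; the real loop operates on palettes rather than an opaque `step` callback):

```python
def optimize(step, initial, max_no_improvement=10):
    """Run `step` until `max_no_improvement` consecutive iterations
    fail to improve the best score (lower is better)."""
    best, best_score = initial, float("inf")
    stale = 0
    while stale < max_no_improvement:
        candidate, score = step(best)
        if score < best_score:
            best, best_score = candidate, score
            stale = 0          # reset the patience counter on improvement
        else:
            stale += 1
    return best, best_score
```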
Save intermediate preview outputs.
- start with an equal split
- with each iteration, pick a palette and adjust its line ranges by a small random amount
- if the proposed palette is accepted, continue to apply the same delta
- if not, revert the adjustment and pick a different one
In addition, often there will be palettes that are entirely unused by
the image. For such palettes:
- find the palette with the largest line range. If > 20, then
subdivide this range and assign half each to both palettes
- if not, then pick a random line range for the unused palette
This helps to refine and explore more of the parameter space.
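The two moves described above - perturbing one palette's line range, and re-seeding an unused palette - might be sketched like this (function names, the 200-line SHR screen height, and the random-range bounds are illustrative assumptions):

```python
import random

def mutate_ranges(ranges, max_delta=5, rng=random):
    """Pick one palette's (start, end) line range and shift it by a
    small random delta; the caller keeps reapplying the same delta
    while proposals are accepted."""
    i = rng.randrange(len(ranges))
    delta = rng.randint(-max_delta, max_delta)
    start, end = ranges[i]
    proposed = list(ranges)
    proposed[i] = (start + delta, end + delta)
    return i, delta, proposed

def reassign_unused(ranges, unused_index, min_split=20, rng=random):
    """Give an unused palette a fresh line range: split the largest
    range in half if it spans more than `min_split` lines, otherwise
    pick a random range (200 = SHR screen height)."""
    largest = max(range(len(ranges)),
                  key=lambda i: ranges[i][1] - ranges[i][0])
    start, end = ranges[largest]
    new = list(ranges)
    if end - start > min_split:
        mid = (start + end) // 2
        new[largest] = (start, mid)
        new[unused_index] = (mid, end)
    else:
        lo = rng.randrange(0, 180)
        new[unused_index] = (lo, min(lo + rng.randrange(1, 40), 200))
    return new
```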
space but continue to use CAM16-UCS for distances and updating
centroid positions, before mapping back to the nearest legal 12-bit
RGB position.
Needs some more work: now that there are discrete distances (but no fixed minimum) between allowed centroid positions, the previous notion of convergence no longer applies; the centroids can oscillate between positions.
There is room for optimization but this is already reasonably
performant, and the image quality is much higher \o/
all palettes. This will be useful for Total Replay which does an
animation effect when displaying the image (first set palettes, then
transition in pixels)
- this requires us to go back to computing k-means ourselves instead of
  using sklearn, since it can't keep some centroids fixed
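A minimal sketch of Lloyd's algorithm with pinned centroids, to show why sklearn's KMeans (which always moves every centroid) can't be used directly (names and shapes here are assumptions):

```python
import numpy as np

def kmeans_fixed(samples, centroids, fixed_mask, n_iter=20):
    """k-means where centroids with fixed_mask[i] == True never move."""
    centroids = centroids.astype(float).copy()
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d = ((samples[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update only the free centroids
        for i in range(len(centroids)):
            if fixed_mask[i]:
                continue
            members = samples[labels == i]
            if len(members):
                centroids[i] = members.mean(axis=0)
    return centroids, labels
```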
- try to be more careful about //gs RGB values, which are in the
Rec.601 colour space. This isn't quite right yet - the issue seems
to be that since we dither in linear RGB space but quantize in the
nonlinear space, small differences may lead to a +/- 1 in the 4-bit
//gs RGB value, which is quite noticeable. Instead we need to be
clustering and/or dithering with awareness of the quantized palette
space.
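The fix amounts to applying the Rec.601 transfer function before rounding to 4 bits, so the round happens in the space the //gs actually uses (a sketch under that assumption; helper names are made up):

```python
def rec601_oetf(linear):
    """Rec.601 transfer function (linear -> nonlinear), per BT.601."""
    if linear < 0.018:
        return 4.5 * linear
    return 1.099 * linear ** 0.45 - 0.099

def quantize_4bit(linear):
    """Convert from linear RGB to the nonlinear Rec.601 space *before*
    rounding to the 4-bit //gs value, so small linear-space differences
    are less likely to flip the quantized result."""
    v = rec601_oetf(max(0.0, min(1.0, linear)))
    return round(v * 15)
```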
depend on the width of the palette sampling.
Note the potential issue that since we are clustering in CAM space but
then quantizing a (much coarser) 4-bit RGB value we could end up
picking multiple centroids that will be represented by the same RGB
value. This doesn't seem to be a major issue though (e.g. 3-4 lost
colours per typical image)
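Counting how many centroids collapse onto the same 4-bit RGB value is straightforward (illustrative helper, assuming centroids are floats in [0, 1]):

```python
import numpy as np

def lost_colours(centroids):
    """Count centroids that quantize to the same 4-bit-per-channel
    RGB value, i.e. distinct cluster colours lost to quantization."""
    quantized = np.round(centroids * 15).astype(int)
    unique = np.unique(quantized, axis=0)
    return len(centroids) - len(unique)
```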
to mutate our source image!
Fix another bug introduced in the previous commit: convert from linear
RGB before quantizing the //gs RGB palette, since //gs RGB values are in
the Rec.601 colour space.
Switch to double for colour_squared_distance and related variables,
though I'm not sure it matters.
When iterating palette clustering, reject the new palettes if they
would increase the total image error. This prevents accepting changes
that are local improvements for one palette but would introduce more
net error elsewhere when that palette is reused.
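The accept/reject step can be sketched like this (the `image_error_fn` callback and function name are assumptions for illustration):

```python
def accept_if_global_improvement(image_error_fn, palettes, index, candidate):
    """Replace palettes[index] with candidate only if doing so lowers
    the *total* image error, not just the error on the lines the
    candidate palette was fit to."""
    old_error = image_error_fn(palettes)
    proposed = list(palettes)
    proposed[index] = candidate
    new_error = image_error_fn(proposed)
    if new_error < old_error:
        return proposed, new_error
    return palettes, old_error  # reject: keep the old palettes
```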
This now seems to give monotonic improvements in image quality so no need
to write out intermediate images any more.