With recent optimizations the converter is now ~2.1x faster, so
--lookahead=8 is a reasonable new default.
This commit is contained in:
kris 2021-03-15 17:55:21 +00:00
parent 5487b4aa7e
commit 7a7923503f
2 changed files with 3 additions and 3 deletions

@@ -162,7 +162,7 @@ Rendering the same .dhr image with 4-pixel colour shows the reason for the diffe
 In the particular case of DHGR this algorithm runs into difficulties, because each pixel only has two possible colour choices (from a total of 16+). If we only consider the two possibilities for the immediate next pixel then neither may be a particularly good match. However it may be more beneficial to make a suboptimal choice now (deliberately introduce more error), if it allows us access to a better colour for a subsequent pixel. "Classical" dithering algorithms do not account for these palette constraints, and produce suboptimal image quality for DHGR conversions.
-We can deal with this by looking ahead N pixels (6 by default) for each image position (x,y), and computing the effect of choosing all 2^N combinations of these N-pixel states on the dithered source image.
+We can deal with this by looking ahead N pixels (8 by default) for each image position (x,y), and computing the effect of choosing all 2^N combinations of these N-pixel states on the dithered source image.
 Specifically, for a fixed choice of one of these N pixel sequences, we tentatively perform the error diffusion as normal on a copy of the image, and compute the total mean squared distance from the (fixed) N-pixel sequence to the error-diffused source image. For the perceptual colour distance metric we use [CIE2000 delta-E](https://en.wikipedia.org/wiki/Color_difference#CIEDE2000).
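The lookahead search described above can be sketched as follows. This is a hypothetical simplification, not the converter's actual implementation: it assumes each pixel has only two candidate colours, diffuses error with a plain one-dimensional push-right rule rather than a full 2-D error-diffusion kernel, and scores with squared RGB distance instead of the CIEDE2000 metric the converter uses. The function name `best_first_pixel` is invented for illustration.

```python
import numpy as np

def best_first_pixel(row, x, palette, lookahead=8):
    """Pick the colour bit for pixel x by exhaustively scoring all
    2^lookahead on/off patterns for the next `lookahead` pixels.

    Simplified sketch: two candidate colours (palette[0]/palette[1]),
    1-D push-right error diffusion, squared-RGB distance (the real
    converter uses a 2-D kernel and CIEDE2000).
    """
    n = min(lookahead, len(row) - x)
    best_bits, best_err = 0, float("inf")
    for bits in range(2 ** n):
        work = row.astype(np.float64).copy()   # tentative copy of the image row
        total = 0.0
        for i in range(n):
            chosen = palette[(bits >> i) & 1]  # colour implied by this bit
            diff = work[x + i] - chosen
            total += float(diff @ diff)        # accumulate squared distance
            if x + i + 1 < len(work):
                work[x + i + 1] += diff        # diffuse the error rightward
        if total < best_err:
            best_bits, best_err = bits, total
    return best_bits & 1                       # commit only the first pixel

# usage: dither a mid-grey row between black and white
row = np.full((16, 3), 128.0)
palette = np.array([[0, 0, 0], [255, 255, 255]], dtype=np.float64)
print(best_first_pixel(row, 0, palette))
```

Only the first pixel of the winning N-pixel sequence is committed; the search is then repeated from the next position, which is what lets a deliberately suboptimal local choice pay off later.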

@@ -25,9 +25,9 @@ def main():
     parser.add_argument("output", type=str, help="Output file for converted "
                                                  "Apple II image.")
     parser.add_argument(
-        "--lookahead", type=int, default=6,
+        "--lookahead", type=int, default=8,
         help=("How many pixels to look ahead to compensate for NTSC colour "
-              "artifacts (default: 6)"))
+              "artifacts (default: 8)"))
     parser.add_argument(
         '--dither', type=str, choices=list(dither_pattern.PATTERNS.keys()),
         default=dither_pattern.DEFAULT_PATTERN,