Since we have less than 100k of usage.h data anyway, using bzip2 -9 is silly.

That tells bzip2 to use 900k blocks when compressing, which needs about 4
megabytes of data structures to undo the Burrows-Wheeler transform at
decompression time.  Switching down to bzip2 -1 (100k blocks) should have no
impact on the compression (since the data still fits in one block) but should
reduce runtime decompression memory requirements to something like 500k.
Still larger than gunzip needs, but not egregiously so.
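The size claim is easy to check: for input that fits in a single 100k block, bzip2 -1 and -9 produce the same compressed size (the extra block capacity of -9 goes unused). A minimal sketch, assuming bzip2 is on the PATH; the 90000-byte sample file is a hypothetical stand-in for the usage text:

```shell
# Generate ~90k of sample data (stand-in for the real usage output).
head -c 90000 /dev/urandom > sample.bin

# Compress with the smallest and largest block sizes and compare sizes.
s1=$(bzip2 -1 -c sample.bin | wc -c)
s9=$(bzip2 -9 -c sample.bin | wc -c)
echo "bzip2 -1: $s1 bytes, bzip2 -9: $s9 bytes"

# Under 100k of input fits in one block either way, so the sizes match.
[ "$s1" -eq "$s9" ] && echo "identical compressed size"

rm -f sample.bin
```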
This commit is contained in:
Rob Landley 2006-05-30 19:19:45 +00:00
parent b2d42fa6d1
commit 3252b625b7

@@ -6,7 +6,7 @@ test "$loc" || loc=.
 test -x "$loc/usage" || exit 1
 echo 'static const char packed_usage[] = '
-"$loc"/usage | bzip2 -9 | od -v -t x1 \
+"$loc"/usage | bzip2 -1 | od -v -t x1 \
 | $SED -e 's/^[^ ]*//' -e 's/ \(..\)/\\x\1/g' -e 's/^\(.*\)$/"\1"/' || exit 1
 echo ';'
 sz=`"$loc"/usage | wc -c` || exit 1
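For context, the od | sed pipeline in the diff turns arbitrary bytes into quoted C string literals of \xNN escapes, which is what lets the compressed blob be pasted into a header as a char array. A small sketch of the same transformation on a two-byte input, using plain sed in place of the script's $SED variable:

```shell
# od prints each byte as two hex digits; sed then strips the offset
# column, rewrites each " NN" as "\xNN", and wraps every line in double
# quotes, producing adjacent C string literals that concatenate.
printf 'hi' | od -v -t x1 \
  | sed -e 's/^[^ ]*//' -e 's/ \(..\)/\\x\1/g' -e 's/^\(.*\)$/"\1"/'
# First output line: "\x68\x69"  ('h' = 0x68, 'i' = 0x69)
```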