112 Commits
1.2 ... 1.8

SHA1 Message Date
82b5926b4f Release version 1.8.0 2020-07-18 17:31:25 +02:00
5456013bf4 Use flake8 extend-exclude setting instead of exclude
This setting was added in flake8 3.8.0 and allows adding entries to the
exclude list without also removing the default entries.
2020-07-18 13:40:15 +02:00
b595456a05 Switch back to using attr directive in setup.cfg version
As of setuptools 46.4.0, this extracts the attribute value statically
using the ast module, if possible. This allows it to work properly even
if the attribute is stored in a file that cannot be imported at setup
time (e. g. because of dependencies that might not be installed yet).
2020-07-18 13:13:19 +02:00
d367a9238a Add bytes_quote helper function
bytes_quote does the same as bytes_escape, but automatically adds the
quote character around the escaped string.
2020-07-07 01:03:38 +02:00
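
A hypothetical sketch of what this pair of helpers might look like (illustrative only - the actual signatures in rsrcfork may differ):

def bytes_escape(data: bytes, *, quote: bytes = b'"') -> str:
    """Escape non-printable bytes, backslashes, and the quote character itself."""
    out = []
    for byte in data:
        if bytes([byte]) in (quote, b"\\"):
            out.append("\\" + chr(byte))
        elif 0x20 <= byte < 0x7f:
            out.append(chr(byte))
        else:
            out.append(f"\\x{byte:02x}")
    return "".join(out)

def bytes_quote(data: bytes, quote: bytes = b'"') -> str:
    """Same as bytes_escape, but with the quote character added around the escaped string."""
    q = quote.decode("ascii")
    return q + bytes_escape(data, quote=quote) + q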
33c4016124 Fix flake8 problems 2020-07-07 00:04:54 +02:00
b01cfc77cf Don't pass required=True to add_subparsers
The required kwarg of add_subparsers was only added in Python 3.7, and
we currently still support Python 3.6.
2020-07-07 00:01:57 +02:00
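
For reference, a minimal Python 3.6-compatible sketch of the pattern (names are illustrative):

import argparse

parser = argparse.ArgumentParser(prog="rsrcfork")
# On Python 3.7+ one could pass required=True here; on 3.6 this kwarg
# does not exist yet, so the check is done manually below.
subparsers = parser.add_subparsers(dest="subcommand")
subparsers.add_parser("list")
subparsers.add_parser("read")

args = parser.parse_args(["list"])
if args.subcommand is None:
    parser.error("missing subcommand")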
a9f54b678c Add py.typed marker file for PEP 561
This allows type checkers like mypy to use the type hints in our code
when type-checking another project that imports this library.
2020-07-06 23:57:25 +02:00
b46018e666 Use is_printable in definition of _TRANSLATE_NONPRINTABLES 2020-07-06 18:01:34 +02:00
b0eefe3889 Replace custom CLI subcommand system with standard argparse subparsers
This is a purely internal change and should have no visible effect on
the command-line interface.
2020-07-05 19:43:01 +02:00
3e0bbcee04 Add Python 3.8 classifier to metadata 2020-04-19 16:21:25 +02:00
13654c2560 Add pyproject.toml with PEP 517/PEP 518 metadata 2020-04-19 16:21:04 +02:00
d5199bd503 Replace setup.cfg metadata license_file with license_files
license_file has been deprecated in wheel 0.32.0 in favor of
license_files.
2020-04-19 16:20:07 +02:00
c5c3f24a10 Add tox environment for building and checking distributions 2020-04-19 16:16:21 +02:00
7c77c4ef20 Prepare setup.py/.cfg for additional import-time dependencies
Reading the version number using attr: rsrcfork.__version__ will no
longer work properly if rsrcfork has non-stdlib dependencies at import
time, because setuptools needs to be able to import rsrcfork and read
the version number before the dependencies are installed.

As a workaround, our setup.py now manually parses the version number
from rsrcfork/__init__.py using the ast module.
2020-04-03 22:32:10 +02:00
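
Roughly how such an ast-based workaround can look (a sketch, not the project's exact setup.py code):

import ast

def read_version(path: str = "rsrcfork/__init__.py") -> str:
    """Extract __version__ from a module without importing it."""
    with open(path, "r", encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in tree.body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "__version__":
                    # literal_eval safely evaluates the constant string node.
                    return ast.literal_eval(node.value)
    raise ValueError("__version__ not found")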
f7b6080c0e Remove random execute bits from test data files 2020-04-01 00:01:50 +02:00
007d15eb3d Fix tox configuration breaking on spaces in the project path
The {envpython} substitution is not quoted, so spaces in the path are
treated as argument separators and cause the test runs to fail.
To work around this, we now always use an unqualified python command
instead of the {envpython} substitution. This is safe because the tox
commands are always run in a virtual environment, so the python command
is guaranteed to point to the environment's Python and not the system
default.
2020-03-30 01:46:10 +02:00
246b69e375 Remove accidental empty comment from test_rsrcfork.py 2020-01-21 22:32:44 +01:00
d67ff64851 Add tests for reading from resource forks and fork auto-selection
These tests are only run on Mac, because they require native support
for resource forks.
2020-01-21 22:29:18 +01:00
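
For context, reading a resource fork directly only works on macOS; a hedged usage sketch (the file path is illustrative):

import rsrcfork

# On macOS, a file's resource fork is exposed at "<file>/..namedfork/rsrc",
# which is what fork="rsrc" reads from. fork="auto" (the default) instead
# picks whichever fork actually contains resource data.
with rsrcfork.open("tests/data/example.rsrc", fork="rsrc") as rf:
    print(list(rf))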
5391d66a78 Add tests for reading resource files from streams instead of path 2020-01-21 15:20:46 +01:00
5b2700bf17 Add some missing asserts to test_compress_compare 2020-01-19 23:24:52 +01:00
c41b25fea1 Add test case for compressed resource handling and decompression 2020-01-19 23:19:19 +01:00
a45dbd8eca Remove upgrade of pip from CI workflow
The GitHub Actions environment clearly has a working pip pre-installed,
and it's unlikely that this project relies on any extremely new
features.
2020-01-19 19:59:42 +01:00
3401ce65dd Update actions/checkout to v2 2020-01-19 19:38:29 +01:00
890dd24f76 Also run CI workflow on pull requests 2020-01-19 19:36:53 +01:00
67c2b4acf0 Add test case for additional resource file and resource metadata 2020-01-19 19:22:59 +01:00
238c78a73e Simplify attribute asserts in tests 2020-01-19 19:05:05 +01:00
fbd861edf4 Fix test_textclipping not checking resource ID lists properly
Because Python's zip terminates once *any* of the input iterables
terminates, the previous code would not detect if the file was missing
resources or contained extra ones.
2020-01-19 02:30:10 +01:00
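
The pitfall, demonstrated (and one way to guard against it):

expected_ids = [256, 257, 258]
actual_ids = [256, 257]  # one resource missing!

# zip stops at the shorter iterable, so the mismatch goes unnoticed:
for expected, actual in zip(expected_ids, actual_ids):
    assert expected == actual  # all compared pairs are equal

# Fix: also compare the lengths (or compare the full lists at once).
assert len(actual_ids) == len(expected_ids)  # now fails as it should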
a7a407a1dd Add extra assertion to test_textclipping 2019-12-30 03:04:48 +01:00
ecee2616cf Add flake8-bugbear plugin 2019-12-30 03:04:27 +01:00
ba284d1800 Fix a bunch of flake8 violations 2019-12-30 03:00:12 +01:00
f690caac24 Add flake8 configuration 2019-12-30 02:57:31 +01:00
3a805c3e56 Add GitHub Actions workflow for CI 2019-12-30 01:59:05 +01:00
6adf8eb88d Fix mypy errors about byte strings as format string parameters 2019-12-30 01:48:33 +01:00
e132a91dea Fix missing sys.exit calls in CLI subcommand functions 2019-12-30 01:48:33 +01:00
4e1cd05412 Fix miscellaneous mypy errors 2019-12-30 01:48:33 +01:00
1a416defed Add tox configuration 2019-12-30 01:48:33 +01:00
1089a19c01 Add basic unit tests 2019-12-29 00:39:40 +01:00
8fc24040ea Add resource-info subcommand 2019-12-26 01:58:23 +01:00
d492d9a6a8 Remove an incorrect assertion from describe_resource
red.compressed_info can be None here if decompress is False.
2019-12-26 01:50:34 +01:00
d0e1eaf262 Add raw-compress-info subcommand (#6) 2019-12-26 00:34:27 +01:00
1e55569442 Add support for passing filters to the list subcommand 2019-12-25 01:47:03 +01:00
2abf6e2a06 Add class for resource filters in place of lambdas
This is easier to debug (printing out a lambda doesn't show what values
it checks against) and makes it easier to check that the filter values
are valid.
2019-12-25 00:15:35 +01:00
2b0bbb19ed Refactor filter_resources in __main__
With the new implementation, each filter is converted to a function,
and each resource is then checked against those functions to see
whether it matches any of them. This is simpler than the old
implementation, where the resource lookup code differed slightly
between some filter forms.
2019-12-25 00:15:35 +01:00
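
A standalone sketch of the idea (illustrative names, not the actual __main__ internals):

import typing

class ResourceFilter:
    """A callable filter with a readable repr, unlike a bare lambda."""
    def __init__(self, restype: bytes, resid: typing.Optional[int] = None) -> None:
        self.restype = restype
        self.resid = resid
    def __call__(self, restype: bytes, resid: int) -> bool:
        return restype == self.restype and (self.resid is None or resid == self.resid)
    def __repr__(self) -> str:
        return f"ResourceFilter(restype={self.restype!r}, resid={self.resid})"

filters = [ResourceFilter(b"TEXT", 256), ResourceFilter(b"icns")]
def matches(restype: bytes, resid: int) -> bool:
    # An empty filter list matches everything (see commit c009e8f80f).
    return not filters or any(f(restype, resid) for f in filters)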
c009e8f80f Support passing an empty filter list to filter_resources 2019-12-25 00:15:35 +01:00
d67641d537 Remove compatibility code for old CLI syntax 2019-12-25 00:15:30 +01:00
d6dbfdb149 Fix version number in changelog 2019-12-17 12:17:31 +01:00
b2502c48a2 Bump version to 1.7.1.dev 2019-12-17 12:16:39 +01:00
158ca4884b Release version 1.7.0 2019-12-17 11:28:26 +01:00
8568f355c4 Remove incorrect outdated paragraph from list subcommand help 2019-12-10 16:15:18 +01:00
97d2dbe1b3 Change formatting of command help strings in source code
The automatic textwrap.dedent makes it impossible to cleanly extract
parts of the help strings into separate constants.
2019-12-10 15:58:20 +01:00
a4b6328782 Fix 'dcmp' (0) jump table decompression for large segment numbers 2019-12-04 23:36:57 +01:00
393160b5da Add raw-decompress subcommand (#6) 2019-12-04 23:36:56 +01:00
476eaecd17 Fix typo in the help text for rsrcfork read 2019-12-04 21:16:29 +01:00
546edbc31a Update and improve resource and resource map reprs 2019-12-04 02:01:40 +01:00
cf6ce3c2a6 Move _LazyResourceMap out of ResourceFile 2019-12-04 02:01:40 +01:00
af2ac70676 Simplify ResourceFile._references and ._LazyResourceMap
The _references map now stores Resource objects directly, instead of
constructing them only when they are looked up. Resource objects are
now lazy themselves, so the previous lazy resource creation mechanism
is redundant.

_LazyResourceMap is now a simple read-only wrapper around an existing
map. The custom class is now only used to provide a specialized repr.
2019-12-04 02:01:40 +01:00
5af455992b Refactor resource reading internals
The reading of resource name and data is now performed in the Resource
class (lazily, when the respective attributes are accessed) instead of
in ResourceFile._LazyResourceMap.
2019-12-04 02:01:40 +01:00
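
The caching pattern behind this, distilled into a standalone sketch (not the actual Resource class):

class LazyDataExample:
    @property
    def data_raw(self) -> bytes:
        try:
            return self._data_raw
        except AttributeError:
            # First access: perform the (expensive) read, then cache it.
            self._data_raw = self._read_from_stream()
            return self._data_raw

    def _read_from_stream(self) -> bytes:
        # Stands in for the real seek-and-read against the resource file.
        print("reading from stream...")
        return b"resource data"

example = LazyDataExample()
example.data_raw  # prints "reading from stream..."
example.data_raw  # cached - no second read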
2193c81518 Bump version to 1.6.1.dev 2019-12-04 01:45:15 +01:00
7dc0d980a3 Release version 1.6.0 2019-12-04 01:35:57 +01:00
2ce1d6b63a Move resource file format reference links to mac_file_format_docs repo
eebce6e7cc
2019-10-30 23:51:12 +01:00
ec5eb3bcc1 Don't display header data and attributes in list output
This is redundant now that there is a dedicated info subcommand.
2019-10-22 13:26:03 +02:00
25bec2f93a Add info subcommand to display technical info/stats about resource file 2019-10-22 13:18:25 +02:00
6fbb919285 Display warnings when the old CLI syntax is used 2019-10-22 10:46:25 +02:00
3be4d9c969 Add a new subcommand-based CLI syntax
The new syntax supports the same operations as the old syntax, but is
clearer to understand and more extensible in the future. The old syntax
is still supported for now.
2019-10-22 10:25:22 +02:00
f537fb3d37 Bump version to 1.5.1.dev 2019-10-22 10:23:33 +02:00
d342614f55 Release version 1.5.0 2019-10-22 10:17:07 +02:00
a5fb30e194 Fix broken handling of - (stdin) file name on command line 2019-10-16 23:29:20 +02:00
f3b3de496e Change naming of compression types
The old names ("system" and "application" compression) were not really
accurate in all cases, so the compression types are now referred to by
their number.
2019-10-07 10:08:32 +02:00
a71274d554 Document stream-based decompression in changelog 2019-10-02 16:36:54 +02:00
6d69d0097d Update rsrcfork.compress.__all__ 2019-10-02 16:29:32 +02:00
8db1b22bdc Make the generic decompression API stream-based
The non-stream-based APIs still exist as before and are not deprecated,
they just act as thin wrappers around the stream-based API.

The main rsrcfork module doesn't use the stream-based APIs yet, because
it reads each resource's data all at once and not incrementally.
2019-10-02 16:28:40 +02:00
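
For illustration, both API levels side by side (the input bytes below are a placeholder, not valid compressed data):

import io
import rsrcfork.compress

compressed = b"..."  # placeholder: complete compressed resource data, incl. header

# Non-stream API (still available, now a thin wrapper):
# decompressed = rsrcfork.compress.decompress(compressed)

# Stream-based API: chunks are yielded as they are decompressed.
for chunk in rsrcfork.compress.decompress_stream(io.BytesIO(compressed)):
    ...  # process each decompressed chunk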
6559cbc337 Refactor .dcmp2 to be stream-based
This is a little more complex than with the other decompressors,
because .dcmp2 has to behave differently at the last byte before EOF.
Checking whether this is the case requires lookahead, which is not easy
to do with a plain IO stream.

Some buffered IO streams provide a peek method for lookahead, but
others don't (such as io.BytesIO). There is no standard way to wrap an
already buffered IO stream to add a peek method, so we need a custom
wrapper class and helper function for this purpose.
2019-10-02 10:26:03 +02:00
1e79dc3c50 Refactor .dcmp0 and .dcmp1 to be stream-based
The decompression code is more readable this way, because the
compressed data needs to be processed sequentially. It also allows
moving the length check and some debug logging into an outer generator.

This also allows incremental decompression, but this doesn't have any
practical advantage, because the compressed resource data is all read
at once (there is no API for opening resources as streams), and
resources are not very large anyway.
2019-10-01 21:26:41 +02:00
db48212ade Fix a typo in a .compress.dcmp0 debug message 2019-10-01 21:26:41 +02:00
3a72bd3406 Remove leading underscores where they don't make much sense
The leading underscore is meant to distinguish private (for internal
use only) APIs from public (for external use) APIs. One can argue about
where the line between public and private should be, but if something
is used from other modules (as with read_variable_length_integer) it's
not really private IMHO.

In scripts (like __main__) it also doesn't make much sense to use
leading underscores, because the entire file is never meant to be used
by external code.
2019-10-01 21:26:41 +02:00
cb868b8005 Bump version to 1.4.1.dev 2019-09-29 19:27:43 +02:00
2f2472cfe9 Release version 1.4.0 2019-09-29 19:20:37 +02:00
e0f73d3220 Fix more issues reported by mypy 2019-09-29 16:28:07 +02:00
b77c85c295 Add mypy configuration section to setup.cfg 2019-09-29 16:27:37 +02:00
e5875ffe67 Fix various issues reported by mypy 2019-09-29 16:14:55 +02:00
449bf4dd71 Use parameterized typing.Mapping in ResourceFile definition
Previously the un-parameterized collections.abc.Mapping was used, which
makes type checking less accurate, as the exact key/value types are not
known.
2019-09-29 15:42:19 +02:00
0ac6e8a3c4 Fix misplaced parens in dcmp modules 2019-09-29 15:33:14 +02:00
29ddd21740 Add missing type annotations on some methods 2019-09-29 15:32:18 +02:00
add22b704a Fix ResourceFile.__enter__ not returning anything 2019-09-29 15:09:41 +02:00
fdd04c944b Remove __slots__ declaration from Resource class
It doesn't seem to have any noticeable performance benefit.
2019-09-29 15:00:45 +02:00
97c459bca7 Change attribute type annotations to standard format
Previously, the types of instance attributes were annotated with the
first assignment of each attribute. The standard way to annotate
instance attributes is to do so at class level without assigning any
value.
2019-09-29 14:58:18 +02:00
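
The two styles side by side, as a tiny sketch:

import typing

class Example:
    # Standard style: types are declared once at class level, without values...
    name: typing.Optional[bytes]
    id: int

    def __init__(self, id: int) -> None:
        # ...and __init__ contains plain assignments, instead of annotating
        # the first assignment as "self.id: int = id".
        self.id = id
        self.name = None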
9ef084de58 Remove uses of the typing.io pseudo-module
According to https://bugs.python.org/issue35089, typing.io should not
be used anymore, and the types that it contains should be accessed
through the main typing module instead.
2019-09-28 01:40:34 +02:00
6d03954784 Document setup.cfg options.packages fixes in changelog 2019-09-25 02:32:32 +02:00
343259049c Fix setup.cfg options.packages not including subpackages
This caused normal installs (i. e. without --editable) of this library
to not include the rsrcfork.compress subpackage, and made everything
unusable as a result. Oops.
2019-09-25 01:51:23 +02:00
e75e88018e Add lots of additional Inside Macintosh-related links/info to README 2019-09-25 00:32:18 +02:00
0f72e8eb1f Document decompression improvements in changelog 2019-09-24 00:46:35 +02:00
84f09d0b83 Display 'dcmp' IDs in command line listings of compressed resources 2019-09-24 00:27:54 +02:00
c108af60ca Add length and length_raw attributes to Resource (closes #3)
For compressed resources, the value of the length attribute can be
accessed much more quickly than the data itself (because it only
requires parsing the header, rather than decompressing the entire
data). This is used to speed up listing of compressed resources on the
command line.

The length_raw attribute is added for symmetry, although it is not
specifically optimized in any case yet.
2019-09-24 00:13:23 +02:00
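
A short usage sketch (the file name is illustrative): because length is taken from the compressed data header, listing sizes never triggers a full decompression.

import rsrcfork

with rsrcfork.open("Example.rsrc") as rf:
    for restype, submap in rf.items():
        for resid, res in submap.items():
            # res.length is the decompressed length (read from the header
            # for compressed resources); res.length_raw is the stored length.
            print(restype, resid, res.length, res.length_raw)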
0c942e26ec Fix hex number formatting in compressed header info reprs 2019-09-23 23:52:06 +02:00
868a322b8e Add Resource.compressed_info attribute
This allows accessing a compressed resource's header data, without
having to decompress it or parse the compressed data manually.
2019-09-23 23:50:29 +02:00
a23cd0fcb2 Simplify decompressor lookup
All decompressors now have exactly the same signature (as a result,
each decompressor now has to check itself that the header type is
correct). This allows the decompressors to be stored in a simple
dictionary, which makes the lookup process much simpler.
2019-09-23 23:32:38 +02:00
53e73be980 Pass complete header info to individual decompressors 2019-09-23 23:19:20 +02:00
9dbdf5b827 Move compressed header info constants/classes to .compress.common
This allows the constants/classes to be accessed from the individual
decompressor submodules.
2019-09-23 23:14:06 +02:00
87d4ae43d4 Refactor parsing of compressed resource headers
In preparation for #3, the compressed resource data headers are parsed
and stored as proper objects. For now these objects are only used
internally by the decompression code, but in the future they can be
exposed.
2019-09-23 23:10:55 +02:00
716ac30a53 Add release instructions in a comment in __init__.py 2019-09-16 17:09:47 +02:00
20991154d3 Bump version to 1.3.1.dev 2019-09-16 16:46:17 +02:00
7207b1d32b Release version 1.3.0 2019-09-16 16:34:40 +02:00
1de940d597 Enable --sort by default and add --no-sort to disable sorting
In most cases the file order is not important and the unsorted output
hurts readability. The performance impact of sorting is relatively
small and barely noticeable even with large resource files.
2019-09-16 15:25:41 +02:00
d7255bc977 Adjust --group=id output format slightly 2019-09-16 14:58:21 +02:00
c6337bdfbd Rename resource_type and resource_id attributes to type and id
The old names were chosen to avoid conflicts with Python's type and id
builtins, but for attribute names this is not necessary.
2019-09-15 15:56:03 +02:00
f4c2717720 Add command-line --group option 2019-09-15 15:38:01 +02:00
8ad0234633 Add command-line --sort option 2019-09-13 15:00:56 +02:00
7612322c43 Add dump-text output format on command line 2019-09-13 14:51:16 +02:00
51ae7c6a09 Refactor __main__.main into smaller functions 2019-09-13 14:17:21 +02:00
194c886472 Change hex dump output format to match hexdump -C 2019-09-13 10:51:27 +02:00
b2fa5f8b0f Collapse multiple subsequent identical lines in hex dumps 2019-09-13 10:40:03 +02:00
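
An illustrative sketch of the collapsing behavior (not the tool's exact implementation): repeated 16-byte lines are replaced by a single "*", like hexdump -C does.

def hex_dump(data: bytes) -> None:
    last_line = None
    asterisk_shown = False
    for i in range(0, len(data), 16):
        line = data[i:i+16]
        if line == last_line:
            if not asterisk_shown:
                print("*")
                asterisk_shown = True
            continue
        last_line = line
        asterisk_shown = False
        hex_part = " ".join(f"{b:02x}" for b in line)
        ascii_part = "".join(chr(b) if 0x20 <= b < 0x7f else "." for b in line)
        print(f"{i:08x}  {hex_part:<47}  |{ascii_part}|")
    print(f"{len(data):08x}")

hex_dump(b"\x00" * 64 + b"Hello, world!")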
752ec9e828 Bump version to 1.2.1.dev 2019-09-13 10:22:43 +02:00
29 changed files with 1815 additions and 655 deletions


@@ -8,3 +8,7 @@ insert_final_newline = true
[*.rst]
indent_style = space
indent_size = 4
[*.yml]
indent_style = space
indent_size = 2

.github/workflows/ci.yml (new file)

@@ -0,0 +1,20 @@
on: [pull_request, push]
jobs:
test:
strategy:
matrix:
platform: [macos-latest, ubuntu-latest, windows-latest]
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v1
with:
python-version: "3.6"
- uses: actions/setup-python@v1
with:
python-version: "3.7"
- uses: actions/setup-python@v1
with:
python-version: "3.8"
- run: python -m pip install --upgrade tox
- run: tox

.gitignore

@@ -2,7 +2,13 @@
*.py[co]
__pycache__/
# tox
.tox/
# setuptools
*.egg-info/
build/
dist/
# mypy
.mypy_cache/

MANIFEST.in (new file)

@@ -0,0 +1,6 @@
# Note: See the PyPA documentation for a list of file names that are included/excluded by default:
# https://packaging.python.org/guides/using-manifest-in/#how-files-are-included-in-an-sdist
# Please only add entries here for files that are *not* already handled by default.
recursive-include tests *.py
recursive-include tests/data *.rsrc


@@ -56,7 +56,7 @@ Simple example
>>> rf
<rsrcfork.ResourceFile at 0x1046e6048, attributes ResourceFileAttrs.0, containing 4 resource types: [b'utxt', b'utf8', b'TEXT', b'drag']>
>>> rf[b"TEXT"]
<rsrcfork.ResourceFile._LazyResourceMap at 0x10470ed30 containing one resource: rsrcfork.Resource(resource_type=b'TEXT', resource_id=256, name=None, attributes=ResourceAttrs.0, data=b'Here is some text')>
<rsrcfork.ResourceFile._LazyResourceMap at 0x10470ed30 containing one resource: rsrcfork.Resource(type=b'TEXT', id=256, name=None, attributes=ResourceAttrs.0, data=b'Here is some text')>
Automatic selection of data/resource fork
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -108,25 +108,88 @@ Writing resource data is not supported at all.
Further info on resource files
------------------------------
Sources of information about the resource fork data format, and the structure of common resource types:
* Inside Macintosh, Volume I, Chapter 5 "The Resource Manager". This book can probably be obtained in physical form somewhere, but the relevant chapter/book is also available in a few places online:
* `Apple's legacy documentation <https://developer.apple.com/legacy/library/documentation/mac/pdf/MoreMacintoshToolbox.pdf>`_
* pagetable.com, a site that happened to have a copy of the book: `info blog post <http://www.pagetable.com/?p=50>`_, `direct download <http://www.weihenstephan.org/~michaste/pagetable/mac/Inside_Macintosh.pdf>`_
* `Wikipedia <https://en.wikipedia.org/wiki/Resource_fork>`_, of course
* The `Resource Fork <http://fileformats.archiveteam.org/wiki/Resource_Fork>`_ article on "Just Solve the File Format Problem" (despite the title, this is a decent site and not clickbait)
* The `KSFL <https://github.com/kreativekorp/ksfl>`_ library (and `its wiki <https://github.com/kreativekorp/ksfl/wiki/Macintosh-Resource-File-Format>`_), written in Java, which supports reading and writing resource files
* Alysis Software Corporation's article on resource compression (found on `the company's website <http://www.alysis.us/arctechnology.htm>`_ and in `MacTech Magazine's online archive <http://preserve.mactech.com/articles/mactech/Vol.09/09.01/ResCompression/index.html>`_) has some information on the structure of certain kinds of compressed resources.
* Apple's macOS SDK, which is distributed with Xcode. The latest version of Xcode is available for free from the Mac App Store. Current and previous versions can be downloaded from `the Apple Developer download page <https://developer.apple.com/download/more/>`_. Accessing these downloads requires an Apple ID with (at least) a free developer program membership.
* Apple's MPW (Macintosh Programmer's Workshop) and related developer tools. These were previously available from Apple's FTP server at ftp://ftp.apple.com/, which is no longer functional. Because of this, these downloads are only available on mirror sites, such as http://staticky.com/mirrors/ftp.apple.com/.
If these links are no longer functional, some are archived in the `Internet Archive Wayback Machine <https://archive.org/web/>`_ or `archive.is <http://archive.is/>`_ aka `archive.fo <https://archive.fo/>`_.
For technical info and documentation about resource files and resources, see the `"resource forks" section of the mac_file_format_docs repo <https://github.com/dgelessus/mac_file_format_docs/blob/master/README.md#resource-forks>`_.
Changelog
---------
Version 1.8.0
^^^^^^^^^^^^^
* Removed the old (non-subcommand-based) CLI syntax.
* Added filtering support to the ``list`` subcommand.
* Added a ``resource-info`` subcommand to display technical information about resources (more detailed than what is displayed by ``list`` and ``read``).
* Added a ``raw-compress-info`` subcommand to display technical header information about standalone compressed resource data.
* Made the library PEP 561-compliant by adding a py.typed file.
* Fixed an incorrect ``AssertionError`` when using the ``--no-decompress`` command-line option.
Version 1.7.0
^^^^^^^^^^^^^
* Added a ``raw-decompress`` subcommand to decompress compressed resource data stored in a standalone file rather than as a resource.
* Optimized lazy loading of ``Resource`` objects. Previously, resource data would be read from disk whenever a ``Resource`` object was looked up, even if the data itself is never used. Now the resource data is only loaded once the ``data`` (or ``data_raw``) attribute is accessed.
* The same optimization applies to the ``name`` attribute, although this is unlikely to make a difference in practice.
* As a result, it is no longer possible to construct ``Resource`` objects without a resource file. This was previously possible, but had no practical use.
* Fixed a small error in the ``'dcmp' (0)`` decompression implementation.
Version 1.6.0
^^^^^^^^^^^^^
* Added a new subcommand-based command-line syntax to the ``rsrcfork`` tool, similar to other CLI tools such as ``git`` or ``diskutil``.
* This subcommand-based syntax is meant to replace the old CLI options, as the subcommand structure is easier to understand and more extensible in the future.
* Currently there are three subcommands: ``list`` to list resources in a file, ``read`` to read/display resource data, and ``read-header`` to read a resource file's header data. These subcommands can be used to perform all operations that were also available with the old CLI syntax.
* The old CLI syntax is still supported for now, but it will be removed soon.
* The new syntax no longer supports reading CLI arguments from a file (using ``@args_file.txt``), abbreviating long options (e. g. ``--no-d`` instead of ``--no-decompress``), or the short option ``-f`` instead of ``--fork``. If you have a need for any of these features, please open an issue.
Version 1.5.0
^^^^^^^^^^^^^
* Added stream-based decompression methods to the ``rsrcfork.compress`` module.
* The internal decompressor implementations have been refactored to use streams.
* This allows for incremental decompression of compressed resource data. In practice this has no noticeable effect yet, because the main ``rsrcfork`` API doesn't support incremental reading of resource data.
* Fixed the command line tool always displaying an incorrect error "Cannot specify an explicit fork when reading from stdin" when using ``-`` (stdin) as the input file.
Version 1.4.0
^^^^^^^^^^^^^
* Added ``length`` and ``length_raw`` attributes to ``Resource``. These attributes are equivalent to the ``len`` of ``data`` and ``data_raw`` respectively, but may be faster to access.
* Currently, the only optimized case is ``length`` for compressed resources, but more optimizations may be added in the future.
* Added a ``compressed_info`` attribute to ``Resource`` that provides access to the header information of compressed resources.
* Improved handling of compressed resources when listing resource files with the command line tool.
* Metadata of compressed resources is now displayed even if no decompressor implementation is available (as long as the compressed data header can be parsed).
* Performance has been improved - the data no longer needs to be fully decompressed to get its length; this information is now read from the header.
* The ``'dcmp'`` ID used to decompress each resource is displayed.
* Fixed an incorrect ``options.packages`` in ``setup.cfg``, which made the library unusable except when installing from source using ``--editable``.
* Fixed ``ResourceFile.__enter__`` returning ``None``, which made it impossible to use ``ResourceFile`` properly in a ``with`` statement.
* Fixed various minor errors reported by type checking with ``mypy``.
Version 1.3.0.post1
^^^^^^^^^^^^^^^^^^^
* Fixed an incorrect ``options.packages`` in ``setup.cfg``, which made the library unusable except when installing from source using ``--editable``.
Version 1.2.0.post1
^^^^^^^^^^^^^^^^^^^
* Fixed an incorrect ``options.packages`` in ``setup.cfg``, which made the library unusable except when installing from source using ``--editable``.
Version 1.3.0
^^^^^^^^^^^^^
* Added a ``--group`` command line option to group resources in list format by type (the default), ID, or with no grouping.
* Added a ``dump-text`` output format to the command line tool. This format is identical to ``dump``, but instead of a hex dump, it outputs the resource data as text. The data is decoded as MacRoman and classic Mac newlines (``\r``) are translated. This is useful for examining resources that contain mostly plain text.
* Changed the command line tool to sort resources by type and ID, and added a ``--no-sort`` option to disable sorting and output resources in file order (which was the previous behavior).
* Renamed the ``rsrcfork.Resource`` attributes ``resource_type`` and ``resource_id`` to ``type`` and ``id``, respectively. The old names have been deprecated and will be removed in the future, but are still supported for now.
* Changed ``--format=dump`` output to match ``hexdump -C``'s format - spacing has been adjusted, and multiple subsequent identical lines are collapsed into a single ``*``.
Version 1.2.0
^^^^^^^^^^^^^

pyproject.toml (new file)

@@ -0,0 +1,6 @@
[build-system]
requires = [
"setuptools >= 46.4.0",
"wheel >= 0.32.0",
]
build-backend = "setuptools.build_meta"


@@ -1,6 +1,26 @@
"""A pure Python, cross-platform library/tool for reading Macintosh resource data, as stored in resource forks and ``.rsrc`` files."""
__version__ = "1.2.0"
# To release a new version:
# * Remove the .dev suffix from the version number in this file.
# * Update the changelog in the README.rst (rename the "next version" section to the correct version number).
# * Remove the ``dist`` directory (if it exists) to clean up any old release files.
# * Run ``python3 setup.py sdist bdist_wheel`` to build the release files.
# * Run ``python3 -m twine check dist/*`` to check the release files.
# * Fix any errors reported by the build and/or check steps.
# * Commit the changes to master.
# * Tag the release commit with the version number, prefixed with a "v" (e. g. version 1.2.3 is tagged as v1.2.3).
# * Fast-forward the release branch to the new release commit.
# * Push the master and release branches.
# * Upload the release files to PyPI using ``python3 -m twine upload dist/*``.
# * On the GitHub repo's Releases page, edit the new release tag and add the relevant changelog section from the README.rst. (Note: The README is in reStructuredText format, but GitHub's release notes use Markdown, so it may be necessary to adjust the markup syntax.)
# After releasing:
# * (optional) Remove the build and dist directories from the previous release as they are no longer needed.
# * Bump the version number in this file to the next version and add a .dev suffix.
# * Add a new empty section for the next version to the README.rst changelog.
# * Commit and push the changes to master.
__version__ = "1.8.0"
__all__ = [
"Resource",
@@ -11,8 +31,8 @@ __all__ = [
"open",
]
from . import api, compress
from .api import Resource, ResourceAttrs, ResourceFile, ResourceFileAttrs
from . import compress
# noinspection PyShadowingBuiltins
open = ResourceFile.open

File diff suppressed because it is too large.


@@ -4,6 +4,7 @@ import enum
import io
import os
import struct
import types
import typing
import warnings
@@ -58,9 +59,11 @@ STRUCT_RESOURCE_REFERENCE = struct.Struct(">hHI4x")
# 1 byte: Length of following resource name.
STRUCT_RESOURCE_NAME_HEADER = struct.Struct(">B")
class InvalidResourceFileError(Exception):
pass
class ResourceFileAttrs(enum.Flag):
"""Resource file attribute flags. The descriptions for these flags are taken from comments on the map*Bit and map* enum constants in <CarbonCore/Resources.h>."""
@@ -81,6 +84,7 @@ class ResourceFileAttrs(enum.Flag):
_BIT_1 = 1 << 1
_BIT_0 = 1 << 0
class ResourceAttrs(enum.Flag):
"""Resource attribute flags. The descriptions for these flags are taken from comments on the res*Bit and res* enum constants in <CarbonCore/Resources.h>."""
@@ -93,23 +97,37 @@ class ResourceAttrs(enum.Flag):
resChanged = 1 << 1 # "Existing resource changed since last update", "Resource changed?"
resCompressed = 1 << 0 # "indicates that the resource data is compressed" (only documented in https://github.com/kreativekorp/ksfl/wiki/Macintosh-Resource-File-Format)
class Resource(object):
"""A single resource from a resource file."""
__slots__ = ("resource_type", "resource_id", "name", "attributes", "data_raw", "_data_decompressed")
_resfile: "ResourceFile"
type: bytes
id: int
name_offset: int
_name: typing.Optional[bytes]
attributes: ResourceAttrs
data_raw_offset: int
_data_raw: bytes
_compressed_info: compress.common.CompressedHeaderInfo
_data_decompressed: bytes
def __init__(self, resource_type: bytes, resource_id: int, name: typing.Optional[bytes], attributes: ResourceAttrs, data_raw: bytes):
"""Create a new resource with the given type code, ID, name, attributes, and data."""
def __init__(self, resfile: "ResourceFile", resource_type: bytes, resource_id: int, name_offset: int, attributes: ResourceAttrs, data_raw_offset: int) -> None:
"""Create a resource object representing a resource stored in a resource file.
External code should not call this constructor manually. Resources should be looked up through a ResourceFile object instead.
"""
super().__init__()
self.resource_type: bytes = resource_type
self.resource_id: int = resource_id
self.name: typing.Optional[bytes] = name
self.attributes: ResourceAttrs = attributes
self.data_raw: bytes = data_raw
self._resfile = resfile
self.type = resource_type
self.id = resource_id
self.name_offset = name_offset
self.attributes = attributes
self.data_raw_offset = data_raw_offset
def __repr__(self):
def __repr__(self) -> str:
try:
data = self.data
except compress.DecompressError:
@@ -119,14 +137,85 @@ class Resource(object):
decompress_ok = True
if len(data) > 32:
data_repr = f"<{len(data)} bytes: {data[:32]}...>"
data_repr = f"<{len(data)} bytes: {data[:32]!r}...>"
else:
data_repr = repr(data)
if not decompress_ok:
data_repr = f"<decompression failed - compressed data: {data_repr}>"
return f"{type(self).__module__}.{type(self).__qualname__}(resource_type={self.resource_type}, resource_id={self.resource_id}, name={self.name}, attributes={self.attributes}, data={data_repr})"
return f"<{type(self).__qualname__} type {self.type!r}, id {self.id}, name {self.name!r}, attributes {self.attributes}, data {data_repr}>"
@property
def resource_type(self) -> bytes:
warnings.warn(DeprecationWarning("The resource_type attribute has been deprecated and will be removed in a future version. Please use the type attribute instead."))
return self.type
@property
def resource_id(self) -> int:
warnings.warn(DeprecationWarning("The resource_id attribute has been deprecated and will be removed in a future version. Please use the id attribute instead."))
return self.id
@property
def name(self) -> typing.Optional[bytes]:
try:
return self._name
except AttributeError:
if self.name_offset == 0xffff:
self._name = None
else:
self._resfile._stream.seek(self._resfile.map_offset + self._resfile.map_name_list_offset + self.name_offset)
(name_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_NAME_HEADER)
self._name = self._resfile._read_exact(name_length)
return self._name
@property
def data_raw(self) -> bytes:
try:
return self._data_raw
except AttributeError:
self._resfile._stream.seek(self._resfile.data_offset + self.data_raw_offset)
(data_raw_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_DATA_HEADER)
self._data_raw = self._resfile._read_exact(data_raw_length)
return self._data_raw
@property
def compressed_info(self) -> typing.Optional[compress.common.CompressedHeaderInfo]:
"""The compressed resource header information, or None if this resource is not compressed.
Accessing this attribute may raise a DecompressError if the resource data is compressed and the header could not be parsed. To access the unparsed header data, use the data_raw attribute.
"""
if ResourceAttrs.resCompressed in self.attributes:
try:
return self._compressed_info
except AttributeError:
self._compressed_info = compress.common.CompressedHeaderInfo.parse(self.data_raw)
return self._compressed_info
else:
return None
@property
def length_raw(self) -> int:
"""The length of the raw resource data, which may be compressed.
Accessing this attribute may be faster than computing len(self.data_raw) manually.
"""
return len(self.data_raw)
@property
def length(self) -> int:
"""The length of the resource data. If the resource data is compressed, this is the length of the data after decompression.
Accessing this attribute may be faster than computing len(self.data) manually.
"""
if self.compressed_info is not None:
return self.compressed_info.decompressed_length
else:
return self.length_raw
@property
def data(self) -> bytes:
@@ -135,72 +224,84 @@ class Resource(object):
Accessing this attribute may raise a DecompressError if the resource data is compressed and could not be decompressed. To access the compressed resource data, use the data_raw attribute.
"""
if ResourceAttrs.resCompressed in self.attributes:
if self.compressed_info is not None:
try:
return self._data_decompressed
except AttributeError:
self._data_decompressed = compress.decompress(self.data_raw)
self._data_decompressed = compress.decompress_parsed(self.compressed_info, self.data_raw[self.compressed_info.header_length:])
return self._data_decompressed
else:
return self.data_raw
class ResourceFile(collections.abc.Mapping):
class _LazyResourceMap(typing.Mapping[int, Resource]):
"""Internal class: Read-only wrapper for a mapping of resource IDs to resource objects.
This class behaves like a normal read-only mapping. The main difference to a plain dict (or similar mapping) is that this mapping has a specialized repr to avoid excessive output when working in the REPL.
"""
type: bytes
_submap: typing.Mapping[int, Resource]
def __init__(self, resource_type: bytes, submap: typing.Mapping[int, Resource]) -> None:
"""Create a new _LazyResourceMap that wraps the given mapping."""
super().__init__()
self.type = resource_type
self._submap = submap
def __len__(self) -> int:
"""Get the number of resources with this type code."""
return len(self._submap)
def __iter__(self) -> typing.Iterator[int]:
"""Iterate over the IDs of all resources with this type code."""
return iter(self._submap)
def __contains__(self, key: object) -> bool:
"""Check if a resource with the given ID exists for this type code."""
return key in self._submap
def __getitem__(self, key: int) -> Resource:
"""Get a resource with the given ID for this type code."""
return self._submap[key]
def __repr__(self) -> str:
if len(self) == 1:
contents = f"one resource: {next(iter(self.values()))}"
else:
contents = f"{len(self)} resources with IDs {list(self)}"
return f"<Resource map for type {self.type!r}, containing {contents}>"
class ResourceFile(typing.Mapping[bytes, typing.Mapping[int, Resource]], typing.ContextManager["ResourceFile"]):
"""A resource file reader operating on a byte stream."""
# noinspection PyProtectedMember
class _LazyResourceMap(collections.abc.Mapping):
"""Internal class: Lazy mapping of resource IDs to resource objects, returned when subscripting a ResourceFile."""
def __init__(self, resfile: "ResourceFile", restype: bytes):
"""Create a new _LazyResourceMap "containing" all resources in resfile that have the type code restype."""
super().__init__()
self._resfile: "ResourceFile" = resfile
self._restype: bytes = restype
self._submap: typing.Mapping[int, typing.Tuple[int, ResourceAttrs, int]] = self._resfile._references[self._restype]
def __len__(self):
"""Get the number of resources with this type code."""
return len(self._submap)
def __iter__(self):
"""Iterate over the IDs of all resources with this type code."""
return iter(self._submap)
def __contains__(self, key: int):
"""Check if a resource with the given ID exists for this type code."""
return key in self._submap
def __getitem__(self, key: int) -> Resource:
"""Get a resource with the given ID for this type code."""
name_offset, attributes, data_offset = self._submap[key]
if name_offset == 0xffff:
name = None
else:
self._resfile._stream.seek(self._resfile.map_offset + self._resfile.map_name_list_offset + name_offset)
(name_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_NAME_HEADER)
name = self._resfile._read_exact(name_length)
self._resfile._stream.seek(self._resfile.data_offset + data_offset)
(data_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_DATA_HEADER)
data = self._resfile._read_exact(data_length)
return Resource(self._restype, key, name, attributes, data)
def __repr__(self):
if len(self) == 1:
return f"<{type(self).__module__}.{type(self).__qualname__} at {id(self):#x} containing one resource: {next(iter(self.values()))}>"
else:
return f"<{type(self).__module__}.{type(self).__qualname__} at {id(self):#x} containing {len(self)} resources with IDs: {list(self)}>"
_close_stream: bool
_stream: typing.BinaryIO
data_offset: int
map_offset: int
data_length: int
map_length: int
header_system_data: bytes
header_application_data: bytes
map_type_list_offset: int
map_name_list_offset: int
file_attributes: ResourceFileAttrs
_reference_counts: typing.MutableMapping[bytes, int]
_references: typing.MutableMapping[bytes, typing.MutableMapping[int, Resource]]
@classmethod
def open(cls, filename: typing.Union[str, bytes, os.PathLike], *, fork: str="auto", **kwargs) -> "ResourceFile":
def open(cls, filename: typing.Union[str, os.PathLike], *, fork: str = "auto", **kwargs: typing.Any) -> "ResourceFile":
"""Open the file at the given path as a ResourceFile.
The fork parameter controls which fork of the file the resource data will be read from. It accepts the following values:
@@ -259,7 +360,7 @@ class ResourceFile(collections.abc.Mapping):
else:
raise ValueError(f"Unsupported value for the fork parameter: {fork!r}")
def __init__(self, stream: typing.io.BinaryIO, *, close: bool=False):
def __init__(self, stream: typing.BinaryIO, *, close: bool = False) -> None:
"""Create a ResourceFile wrapping the given byte stream.
To read resource file data from a bytes object, wrap it in an io.BytesIO.
@@ -273,8 +374,7 @@ class ResourceFile(collections.abc.Mapping):
super().__init__()
self._close_stream: bool = close
self._stream: typing.io.BinaryIO
self._close_stream = close
if stream.seekable():
self._stream = stream
else:
@@ -298,7 +398,7 @@ class ResourceFile(collections.abc.Mapping):
raise InvalidResourceFileError(f"Attempted to read {byte_count} bytes of data, but only got {len(data)} bytes")
return data
def _stream_unpack(self, st: struct.Struct) -> typing.Tuple:
def _stream_unpack(self, st: struct.Struct) -> tuple:
"""Unpack data from the stream according to the struct st. The number of bytes to read is determined using st.size, so variable-sized structs cannot be used with this method."""
try:
@@ -306,17 +406,11 @@
except struct.error as e:
raise InvalidResourceFileError(str(e))
def _read_header(self):
def _read_header(self) -> None:
"""Read the resource file header, starting at the current stream position."""
assert self._stream.tell() == 0
self.data_offset: int
self.map_offset: int
self.data_length: int
self.map_length: int
self.header_system_data: bytes
self.header_application_data: bytes
(
self.data_offset,
self.map_offset,
@@ -329,25 +423,23 @@
if self._stream.tell() != self.data_offset:
raise InvalidResourceFileError(f"The data offset ({self.data_offset}) should point exactly to the end of the file header ({self._stream.tell()})")
def _read_map_header(self):
def _read_map_header(self) -> None:
"""Read the map header, starting at the current stream position."""
assert self._stream.tell() == self.map_offset
self.map_type_list_offset: int
self.map_name_list_offset: int
(
_file_attributes,
self.map_type_list_offset,
self.map_name_list_offset,
) = self._stream_unpack(STRUCT_RESOURCE_MAP_HEADER)
self.file_attributes: ResourceFileAttrs = ResourceFileAttrs(_file_attributes)
self.file_attributes = ResourceFileAttrs(_file_attributes)
def _read_all_resource_types(self):
def _read_all_resource_types(self) -> None:
"""Read all resource types, starting at the current stream position."""
self._reference_counts: typing.MutableMapping[bytes, int] = collections.OrderedDict()
self._reference_counts = collections.OrderedDict()
(type_list_length_m1,) = self._stream_unpack(STRUCT_RESOURCE_TYPE_LIST_HEADER)
type_list_length = (type_list_length_m1 + 1) % 0x10000
@@ -361,13 +453,13 @@
count = (count_m1 + 1) % 0x10000
self._reference_counts[resource_type] = count
def _read_all_references(self):
def _read_all_references(self) -> None:
"""Read all resource references, starting at the current stream position."""
self._references: typing.MutableMapping[bytes, typing.MutableMapping[int, typing.Tuple[int, ResourceAttrs, int]]] = collections.OrderedDict()
self._references = collections.OrderedDict()
for resource_type, count in self._reference_counts.items():
resmap: typing.MutableMapping[int, typing.Tuple[int, ResourceAttrs, int]] = collections.OrderedDict()
resmap: typing.MutableMapping[int, Resource] = collections.OrderedDict()
self._references[resource_type] = resmap
for _ in range(count):
(
@@ -379,9 +471,9 @@
attributes = attributes_and_data_offset >> 24
data_offset = attributes_and_data_offset & ((1 << 24) - 1)
resmap[resource_id] = (name_offset, ResourceAttrs(attributes), data_offset)
resmap[resource_id] = Resource(self, resource_type, resource_id, name_offset, ResourceAttrs(attributes), data_offset)
def close(self):
def close(self) -> None:
"""Close this ResourceFile.
If close=True was passed when this ResourceFile was created, the underlying stream's close method is called as well.
@@ -390,31 +482,37 @@
if self._close_stream:
self._stream.close()
def __enter__(self):
pass
def __enter__(self) -> "ResourceFile":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
def __exit__(
self,
exc_type: typing.Optional[typing.Type[BaseException]],
exc_val: typing.Optional[BaseException],
exc_tb: typing.Optional[types.TracebackType]
) -> typing.Optional[bool]:
self.close()
return None
def __len__(self):
def __len__(self) -> int:
"""Get the number of resource types in this ResourceFile."""
return len(self._references)
def __iter__(self):
def __iter__(self) -> typing.Iterator[bytes]:
"""Iterate over all resource types in this ResourceFile."""
return iter(self._references)
def __contains__(self, key: bytes):
def __contains__(self, key: object) -> bool:
"""Check whether this ResourceFile contains any resources of the given type."""
return key in self._references
def __getitem__(self, key: bytes) -> "ResourceFile._LazyResourceMap":
def __getitem__(self, key: bytes) -> "_LazyResourceMap":
"""Get a lazy mapping of all resources with the given type in this ResourceFile."""
return ResourceFile._LazyResourceMap(self, key)
return _LazyResourceMap(key, self._references[key])
def __repr__(self):
def __repr__(self) -> str:
return f"<{type(self).__module__}.{type(self).__qualname__} at {id(self):#x}, attributes {self.file_attributes}, containing {len(self)} resource types: {list(self)}>"


@@ -1,97 +1,68 @@
import struct
import io
import typing
from . import dcmp0
from . import dcmp1
from . import dcmp2
from .common import DecompressError
from .common import DecompressError, CompressedHeaderInfo, CompressedType8HeaderInfo, CompressedType9HeaderInfo
__all__ = [
"CompressedHeaderInfo",
"CompressedType8HeaderInfo",
"CompressedType9HeaderInfo",
"DecompressError",
"decompress",
"decompress_parsed",
"decompress_stream",
"decompress_stream_parsed",
]
# The signature of all compressed resource data, 0xa89f6572 in hex, or "®üer" in MacRoman.
COMPRESSED_SIGNATURE = b"\xa8\x9fer"
# The compression type commonly used for application resources.
COMPRESSED_TYPE_APPLICATION = 0x0801
# The compression type commonly used for System file resources.
COMPRESSED_TYPE_SYSTEM = 0x0901
# Common header for compressed resources of all types.
# 4 bytes: Signature (see above).
# 2 bytes: Length of the complete header (this common part and the type-specific part that follows it). (This meaning is just a guess - the field's value is always 0x0012, so there's no way to know for certain what it means.)
# 2 bytes: Compression type. Known so far: 0x0901 is used in the System file's resources. 0x0801 is used in other files' resources.
# 4 bytes: Length of the data after decompression.
STRUCT_COMPRESSED_HEADER = struct.Struct(">4sHHI")
# Header continuation part for an "application" compressed resource.
# 1 byte: "Working buffer fractional size" - the ratio of the compressed data size to the uncompressed data size, times 256.
# 1 byte: "Expansion buffer size" - the maximum number of bytes that the data might grow during decompression.
# 2 bytes: The ID of the 'dcmp' resource that can decompress this resource. Currently only ID 0 is supported.
# 2 bytes: Reserved (always zero).
STRUCT_COMPRESSED_APPLICATION_HEADER = struct.Struct(">BBhH")
# Header continuation part for a "system" compressed resource.
# 2 bytes: The ID of the 'dcmp' resource that can decompress this resource. Currently only ID 2 is supported.
# 4 bytes: Decompressor-specific parameters.
STRUCT_COMPRESSED_SYSTEM_HEADER = struct.Struct(">h4s")
# Maps 'dcmp' IDs to their corresponding Python implementations.
# Each decompressor has the signature (header_info: CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool=False) -> typing.Iterator[bytes].
DECOMPRESSORS = {
0: dcmp0.decompress_stream,
1: dcmp1.decompress_stream,
2: dcmp2.decompress_stream,
}
def _decompress_application(data: bytes, decompressed_length: int, *, debug: bool=False) -> bytes:
working_buffer_fractional_size, expansion_buffer_size, dcmp_id, reserved = STRUCT_COMPRESSED_APPLICATION_HEADER.unpack_from(data)
if debug:
print(f"Working buffer fractional size: {working_buffer_fractional_size} (=> {len(data) * 256 / working_buffer_fractional_size})")
print(f"Expansion buffer size: {expansion_buffer_size}")
if dcmp_id == 0:
decompress_func = dcmp0.decompress
elif dcmp_id == 1:
decompress_func = dcmp1.decompress
else:
raise DecompressError(f"Unsupported 'dcmp' ID: {dcmp_id}, expected 0 or 1")
if reserved != 0:
raise DecompressError(f"Reserved field should be 0, not 0x{reserved:>04x}")
return decompress_func(data[STRUCT_COMPRESSED_APPLICATION_HEADER.size:], decompressed_length, debug=debug)
def _decompress_system(data: bytes, decompressed_length: int, *, debug: bool=False) -> bytes:
dcmp_id, params = STRUCT_COMPRESSED_SYSTEM_HEADER.unpack_from(data)
if dcmp_id == 2:
decompress_func = dcmp2.decompress
else:
raise DecompressError(f"Unsupported 'dcmp' ID: {dcmp_id}, expected 2")
return decompress_func(data[STRUCT_COMPRESSED_SYSTEM_HEADER.size:], decompressed_length, params, debug=debug)
def decompress(data: bytes, *, debug: bool=False) -> bytes:
"""Decompress the given compressed resource data."""
def decompress_stream_parsed(header_info: CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
"""Decompress compressed resource data from a stream, whose header has already been read and parsed into a CompressedHeaderInfo object."""
try:
signature, header_length, compression_type, decompressed_length = STRUCT_COMPRESSED_HEADER.unpack_from(data)
except struct.error:
raise DecompressError(f"Invalid header")
if signature != COMPRESSED_SIGNATURE:
raise DecompressError(f"Invalid signature: {signature!r}, expected {COMPRESSED_SIGNATURE}")
if header_length != 0x12:
raise DecompressError(f"Unsupported header length: 0x{header_length:>04x}, expected 0x12")
decompress_func = DECOMPRESSORS[header_info.dcmp_id]
except KeyError:
raise DecompressError(f"Unsupported 'dcmp' ID: {header_info.dcmp_id}")
if compression_type == COMPRESSED_TYPE_APPLICATION:
decompress_func = _decompress_application
elif compression_type == COMPRESSED_TYPE_SYSTEM:
decompress_func = _decompress_system
else:
raise DecompressError(f"Unsupported compression type: 0x{compression_type:>04x}")
decompressed_length = 0
for chunk in decompress_func(header_info, stream, debug=debug):
decompressed_length += len(chunk)
yield chunk
if decompressed_length != header_info.decompressed_length:
raise DecompressError(f"Actual length of decompressed data ({decompressed_length}) does not match length stored in resource ({header_info.decompressed_length})")
def decompress_parsed(header_info: CompressedHeaderInfo, data: bytes, *, debug: bool = False) -> bytes:
"""Decompress the given compressed resource data, whose header has already been removed and parsed into a CompressedHeaderInfo object."""
return b"".join(decompress_stream_parsed(header_info, io.BytesIO(data), debug=debug))
def decompress_stream(stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
"""Decompress compressed resource data from a stream."""
header_info = CompressedHeaderInfo.parse_stream(stream)
if debug:
print(f"Decompressed length: {decompressed_length}")
print(f"Compressed resource data header: {header_info}")
decompressed = decompress_func(data[STRUCT_COMPRESSED_HEADER.size:], decompressed_length, debug=debug)
if len(decompressed) != decompressed_length:
raise DecompressError(f"Actual length of decompressed data ({len(decompressed)}) does not match length stored in resource ({decompressed_length})")
return decompressed
yield from decompress_stream_parsed(header_info, stream, debug=debug)
def decompress(data: bytes, *, debug: bool = False) -> bytes:
"""Decompress the given compressed resource data."""
return b"".join(decompress_stream(io.BytesIO(data), debug=debug))


@@ -1,3 +1,5 @@
import io
import struct
import typing
@@ -5,19 +7,199 @@ class DecompressError(Exception):
"""Raised when resource data decompression fails, because the data is invalid or the compression type is not supported."""
def _read_variable_length_integer(data: bytes, position: int) -> typing.Tuple[int, int]:
"""Read a variable-length integer starting at the given position in the data, and return the integer as well as the number of bytes consumed.
# The signature of all compressed resource data, 0xa89f6572 in hex, or "®üer" in MacRoman.
COMPRESSED_SIGNATURE = b"\xa8\x9fer"
# The number of the "type 8" compression type. This type is used in the Finder, ResEdit, and some other system files.
COMPRESSED_TYPE_8 = 0x0801
# The number of the "type 9" compression type. This type is used in the System file and System 7.5's Installer.
COMPRESSED_TYPE_9 = 0x0901
# Common header for compressed resources of all types.
# 4 bytes: Signature (see above).
# 2 bytes: Length of the complete header (this common part and the type-specific part that follows it). (This meaning is just a guess - the field's value is always 0x0012, so there's no way to know for certain what it means.)
# 2 bytes: Compression type. Known so far: 0x0801 ("type 8") and 0x0901 ("type 9").
# 4 bytes: Length of the data after decompression.
# 6 bytes: Remainder of the header. The exact format varies depending on the compression type.
STRUCT_COMPRESSED_HEADER = struct.Struct(">4sHHI6s")
# Remainder of header for a "type 8" compressed resource.
# 1 byte: "Working buffer fractional size" - the ratio of the compressed data size to the uncompressed data size, times 256.
# 1 byte: "Expansion buffer size" - the maximum number of bytes that the data might grow during decompression.
# 2 bytes: The ID of the 'dcmp' resource that can decompress this resource. Currently only ID 0 is supported.
# 2 bytes: Reserved (always zero).
STRUCT_COMPRESSED_TYPE_8_HEADER = struct.Struct(">BBhH")
# Remainder of header for a "type 9" compressed resource.
# 2 bytes: The ID of the 'dcmp' resource that can decompress this resource. Currently only ID 2 is supported.
# 4 bytes: Decompressor-specific parameters.
STRUCT_COMPRESSED_TYPE_9_HEADER = struct.Struct(">h4s")
class CompressedHeaderInfo(object):
@classmethod
def parse_stream(cls, stream: typing.BinaryIO) -> "CompressedHeaderInfo":
try:
signature, header_length, compression_type, decompressed_length, remainder = STRUCT_COMPRESSED_HEADER.unpack(stream.read(STRUCT_COMPRESSED_HEADER.size))
except struct.error:
raise DecompressError("Invalid header")
if signature != COMPRESSED_SIGNATURE:
raise DecompressError(f"Invalid signature: {signature!r}, expected {COMPRESSED_SIGNATURE!r}")
if header_length != 0x12:
raise DecompressError(f"Unsupported header length: 0x{header_length:>04x}, expected 0x12")
if compression_type == COMPRESSED_TYPE_8:
working_buffer_fractional_size, expansion_buffer_size, dcmp_id, reserved = STRUCT_COMPRESSED_TYPE_8_HEADER.unpack(remainder)
if reserved != 0:
raise DecompressError(f"Reserved field should be 0, not 0x{reserved:>04x}")
return CompressedType8HeaderInfo(header_length, compression_type, decompressed_length, dcmp_id, working_buffer_fractional_size, expansion_buffer_size)
elif compression_type == COMPRESSED_TYPE_9:
dcmp_id, parameters = STRUCT_COMPRESSED_TYPE_9_HEADER.unpack(remainder)
return CompressedType9HeaderInfo(header_length, compression_type, decompressed_length, dcmp_id, parameters)
else:
raise DecompressError(f"Unsupported compression type: 0x{compression_type:>04x}")
@classmethod
def parse(cls, data: bytes) -> "CompressedHeaderInfo":
return cls.parse_stream(io.BytesIO(data))
header_length: int
compression_type: int
decompressed_length: int
dcmp_id: int
def __init__(self, header_length: int, compression_type: int, decompressed_length: int, dcmp_id: int) -> None:
super().__init__()
self.header_length = header_length
self.compression_type = compression_type
self.decompressed_length = decompressed_length
self.dcmp_id = dcmp_id
class CompressedType8HeaderInfo(CompressedHeaderInfo):
working_buffer_fractional_size: int
expansion_buffer_size: int
def __init__(self, header_length: int, compression_type: int, decompressed_length: int, dcmp_id: int, working_buffer_fractional_size: int, expansion_buffer_size: int) -> None:
super().__init__(header_length, compression_type, decompressed_length, dcmp_id)
self.working_buffer_fractional_size = working_buffer_fractional_size
self.expansion_buffer_size = expansion_buffer_size
def __repr__(self) -> str:
return f"{type(self).__qualname__}(header_length={self.header_length}, compression_type=0x{self.compression_type:>04x}, decompressed_length={self.decompressed_length}, dcmp_id={self.dcmp_id}, working_buffer_fractional_size={self.working_buffer_fractional_size}, expansion_buffer_size={self.expansion_buffer_size})"
class CompressedType9HeaderInfo(CompressedHeaderInfo):
parameters: bytes
def __init__(self, header_length: int, compression_type: int, decompressed_length: int, dcmp_id: int, parameters: bytes) -> None:
super().__init__(header_length, compression_type, decompressed_length, dcmp_id)
self.parameters = parameters
def __repr__(self) -> str:
return f"{type(self).__qualname__}(header_length={self.header_length}, compression_type=0x{self.compression_type:>04x}, decompressed_length={self.decompressed_length}, dcmp_id={self.dcmp_id}, parameters={self.parameters!r})"
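A quick editor's sketch (not part of the file) of how these structs and classes fit together. It assumes COMPRESSED_SIGNATURE and COMPRESSED_TYPE_8 are the module-level constants that parse_stream validates against:

# Build a synthetic type 8 header by hand and parse it back.
remainder = STRUCT_COMPRESSED_TYPE_8_HEADER.pack(0x40, 0x00, 0, 0)
header = STRUCT_COMPRESSED_HEADER.pack(COMPRESSED_SIGNATURE, 0x12, COMPRESSED_TYPE_8, 1024, remainder)
info = CompressedHeaderInfo.parse(header)
assert isinstance(info, CompressedType8HeaderInfo)
assert info.decompressed_length == 1024 and info.dcmp_id == 0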
if typing.TYPE_CHECKING:
class PeekableIO(typing.Protocol):
"""Minimal protocol for binary IO streams that support the peek method.
The peek method is supported by various standard Python binary IO streams, such as io.BufferedReader. If a stream does not natively support the peek method, it may be wrapped using the custom helper function make_peekable.
"""
def readable(self) -> bool:
...
def read(self, size: typing.Optional[int] = ...) -> bytes:
...
def peek(self, size: int = ...) -> bytes:
...
class _PeekableIOWrapper(object):
"""Wrapper class to add peek support to an existing stream. Do not instantiate this class directly, use the make_peekable function instead.
Python provides a standard io.BufferedReader class, which supports the peek method. However, according to its documentation, it only supports wrapping io.RawIOBase subclasses, and not streams which are already otherwise buffered.
Warning: this class does not perform any buffering of its own, outside of what is required to make peek work. It is strongly recommended to only wrap streams that are already buffered or otherwise fast to read from. In particular, raw streams (io.RawIOBase subclasses) should be wrapped using io.BufferedReader instead.
"""
_wrapped: typing.BinaryIO
_readahead: bytes
def __init__(self, wrapped: typing.BinaryIO) -> None:
super().__init__()
self._wrapped = wrapped
self._readahead = b""
def readable(self) -> bool:
return self._wrapped.readable()
def read(self, size: typing.Optional[int] = None) -> bytes:
if size is None or size < 0:
ret = self._readahead + self._wrapped.read()
self._readahead = b""
elif size <= len(self._readahead):
ret = self._readahead[:size]
self._readahead = self._readahead[size:]
else:
ret = self._readahead + self._wrapped.read(size - len(self._readahead))
self._readahead = b""
return ret
def peek(self, size: int = -1) -> bytes:
if not self._readahead:
self._readahead = self._wrapped.read(io.DEFAULT_BUFFER_SIZE if size < 0 else size)
return self._readahead
def make_peekable(stream: typing.BinaryIO) -> "PeekableIO":
"""Wrap an arbitrary binary IO stream so that it supports the peek method.
The stream is wrapped as efficiently as possible (or not at all if it already supports the peek method). However, in the worst case a custom wrapper class needs to be used, which may not be particularly efficient and only supports a very minimal interface. The only methods that are guaranteed to exist on the returned stream are readable, read, and peek.
"""
if hasattr(stream, "peek"):
# Stream is already peekable, nothing to be done.
return typing.cast("PeekableIO", stream)
elif not typing.TYPE_CHECKING and isinstance(stream, io.RawIOBase):
# This branch is skipped when type checking - mypy incorrectly warns about this code being unreachable, because it thinks that a typing.BinaryIO cannot be an instance of io.RawIOBase.
# Raw IO streams can be wrapped efficiently using BufferedReader.
return io.BufferedReader(stream)
else:
# Other streams need to be wrapped using our custom wrapper class.
return _PeekableIOWrapper(stream)
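For example (editor's sketch, not in the file): io.BytesIO provides no peek method and is not an io.RawIOBase subclass, so it takes the custom-wrapper branch, and peeked bytes are still returned by later reads:

s = make_peekable(io.BytesIO(b"\x12\x34\x56"))
assert s.peek(2) == b"\x12\x34"  # look ahead without consuming
assert s.read(1) == b"\x12"      # peeked data is returned by read as usual
assert s.read() == b"\x34\x56"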
def read_exact(stream: typing.BinaryIO, byte_count: int) -> bytes:
"""Read byte_count bytes from the stream and raise an exception if too few bytes are read (i. e. if EOF was hit prematurely)."""
data = stream.read(byte_count)
if len(data) != byte_count:
raise DecompressError(f"Attempted to read {byte_count} bytes of data, but only got {len(data)} bytes")
return data
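A one-line contrast with a plain read (editor's sketch): a short read raises instead of returning truncated data.

assert read_exact(io.BytesIO(b"\x01\x02"), 2) == b"\x01\x02"
# read_exact(io.BytesIO(b"\x01"), 2) would raise DecompressError (premature EOF).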
def read_variable_length_integer(stream: typing.BinaryIO) -> int:
"""Read a variable-length integer from the stream.
This variable-length integer format is used by the 0xfe codes in the compression formats used by 'dcmp' (0) and 'dcmp' (1).
"""
	head = read_exact(stream, 1)
	
	if head[0] == 0xff:
		return int.from_bytes(read_exact(stream, 4), "big", signed=True)
	elif head[0] >= 0x80:
		data_modified = bytes([(head[0] - 0xc0) & 0xff]) + read_exact(stream, 1)
		return int.from_bytes(data_modified, "big", signed=True)
	else:
		return int.from_bytes(head, "big", signed=True)
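To make the encoding concrete (editor's sketch): small non-negative values fit in one byte, the two-byte form stores a signed 16-bit value whose high byte is the first byte biased by 0xc0, and the 0xff prefix introduces an explicit signed 32-bit value:

assert read_variable_length_integer(io.BytesIO(b"\x40")) == 0x40  # 1-byte form
assert read_variable_length_integer(io.BytesIO(b"\xd0\x00")) == 0x1000  # 2-byte form: ((0xd0 - 0xc0) << 8) | 0x00
assert read_variable_length_integer(io.BytesIO(b"\xff\x00\x01\x00\x00")) == 0x10000  # 0xff + signed 32-bit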


@@ -1,3 +1,5 @@
import typing
from . import common
# Lookup table for codes in range(0x4b, 0xfe).
@@ -36,133 +38,103 @@ TABLE = [TABLE_DATA[i:i + 2] for i in range(0, len(TABLE_DATA), 2)]
assert len(TABLE) == len(range(0x4b, 0xfe))
def decompress_stream_inner(header_info: common.CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
	"""Internal helper function, implements the main decompression algorithm. Only called from decompress_stream, which performs some extra checks and debug logging."""
	
	if not isinstance(header_info, common.CompressedType8HeaderInfo):
		raise common.DecompressError(f"Incorrect header type: {type(header_info).__qualname__}")
	
	prev_literals: typing.List[bytes] = []
	
	while True: # Loop is terminated when the EOF marker (0xff) is encountered
		(byte,) = common.read_exact(stream, 1)
		if debug:
			print(f"Tag byte 0x{byte:>02x}")
		
		if byte in range(0x00, 0x20):
			# Literal byte sequence.
			if byte in (0x00, 0x10):
				# The length of the literal data is stored in the next byte.
				(count_div2,) = common.read_exact(stream, 1)
			else:
				# The length of the literal data is stored in the low nibble of the tag byte.
				count_div2 = byte >> 0 & 0xf
			count = 2 * count_div2
			
			# Controls whether or not the literal is stored so that it can be referenced again later.
			do_store = byte >= 0x10
			literal = common.read_exact(stream, count)
			if debug:
				print(f"Literal (storing: {do_store})")
			if do_store:
				if debug:
					print(f"\t-> storing as literal number 0x{len(prev_literals):x}")
				prev_literals.append(literal)
			
			yield literal
		elif byte in (0x20, 0x21):
			# Backreference to a previous literal, 2-byte form.
			# This can reference literals with index in range(0x28, 0x228).
			(next_byte,) = common.read_exact(stream, 1)
			table_index = 0x28 + ((byte - 0x20) << 8 | next_byte)
			if debug:
				print(f"Backreference (2-byte form) to 0x{table_index:>02x}")
			yield prev_literals[table_index]
		elif byte == 0x22:
			# Backreference to a previous literal, 3-byte form.
			# This can reference any literal with index 0x28 and higher, but is only necessary for literals with index 0x228 and higher.
			table_index = 0x28 + int.from_bytes(common.read_exact(stream, 2), "big", signed=False)
			if debug:
				print(f"Backreference (3-byte form) to 0x{table_index:>02x}")
			yield prev_literals[table_index]
		elif byte in range(0x23, 0x4b):
			# Backreference to a previous literal, 1-byte form.
			# This can reference literals with indices in range(0x28).
			table_index = byte - 0x23
			if debug:
				print(f"Backreference (1-byte form) to 0x{table_index:>02x}")
			yield prev_literals[table_index]
		elif byte in range(0x4b, 0xfe):
			# Reference into a fixed table of two-byte literals.
			# All compressed resources use the same table.
			table_index = byte - 0x4b
			if debug:
				print(f"Fixed table reference to 0x{table_index:>02x}")
			yield TABLE[table_index]
		elif byte == 0xfe:
			# Extended code, whose meaning is controlled by the following byte.
			(kind,) = common.read_exact(stream, 1)
			if debug:
				print(f"Extended code: 0x{kind:>02x}")
			
			if kind == 0x00:
				# Compact representation of (part of) a segment loader jump table, as used in 'CODE' (0) resources.
				if debug:
					print("Segment loader jump table entries")
				
				# All generated jump table entries have the same segment number.
				segment_number_int = common.read_variable_length_integer(stream)
				if debug:
					print(f"\t-> segment number: {segment_number_int:#x}")
				
				# The tail part of all jump table entries (i. e. everything except for the address).
				entry_tail = b"?<" + segment_number_int.to_bytes(2, "big", signed=False) + b"\xa9\xf0"
				# The tail is output once *without* an address in front, i. e. the first entry's address must be generated manually by a previous code.
				yield entry_tail
				
				count = common.read_variable_length_integer(stream)
				if count <= 0:
					raise common.DecompressError(f"Jump table entry count must be greater than 0, not {count}")
				
				# The second entry's address is stored explicitly.
				current_int = common.read_variable_length_integer(stream)
				if debug:
					print(f"\t-> address of second entry: {current_int:#x}")
				yield current_int.to_bytes(2, "big", signed=False) + entry_tail
				
				for _ in range(1, count):
					# All further entries' addresses are stored as differences relative to the previous entry's address.
					diff = common.read_variable_length_integer(stream)
					# For some reason, each difference is 6 higher than it should be.
					diff -= 6
@@ -170,10 +142,7 @@
current_int = (current_int + diff) & 0xffff
if debug:
print(f"\t-> difference {diff:#x}: {current_int:#x}")
					yield current_int.to_bytes(2, "big", signed=False) + entry_tail
elif kind in (0x02, 0x03):
# Repeat 1 or 2 bytes a certain number of times.
@@ -188,42 +157,36 @@ def decompress(data: bytes, decompressed_length: int, *, debug: bool=False) -> b
print(f"Repeat {byte_count}-byte value")
# The byte(s) to repeat, stored as a variable-length integer. The value is treated as unsigned, i. e. the integer is never negative.
				to_repeat_int = common.read_variable_length_integer(stream)
try:
to_repeat = to_repeat_int.to_bytes(byte_count, "big", signed=False)
except OverflowError:
raise common.DecompressError(f"Value to repeat out of range for {byte_count}-byte repeat: {to_repeat_int:#x}")
				count = common.read_variable_length_integer(stream) + 1
if count <= 0:
raise common.DecompressError(f"Repeat count must be positive: {count}")
				if debug:
					print(f"\t-> {to_repeat!r} * {count}")
				yield to_repeat * count
elif kind == 0x04:
# A sequence of 16-bit signed integers, with each integer encoded as a difference relative to the previous integer. The first integer is stored explicitly.
if debug:
					print("Difference-encoded 16-bit integers")
# The first integer is stored explicitly, as a signed value.
				initial_int = common.read_variable_length_integer(stream)
try:
initial = initial_int.to_bytes(2, "big", signed=True)
except OverflowError:
raise common.DecompressError(f"Initial value out of range for 16-bit integer difference encoding: {initial_int:#x}")
				if debug:
					print(f"\t-> initial: 0x{initial_int:>04x}")
				
				yield initial
				count = common.read_variable_length_integer(stream)
if count < 0:
raise common.DecompressError(f"Count cannot be negative: {count}")
@@ -232,64 +195,75 @@ def decompress(data: bytes, decompressed_length: int, *, debug: bool=False) -> b
for _ in range(count):
# The difference to the previous integer is stored as an 8-bit signed integer.
# The usual variable-length integer format is *not* used here.
					diff = int.from_bytes(common.read_exact(stream, 1), "big", signed=True)
# Simulate 16-bit integer wraparound.
current_int = (current_int + diff) & 0xffff
					if debug:
						print(f"\t-> difference {diff:#x}: 0x{current_int:>04x}")
					
					yield current_int.to_bytes(2, "big", signed=False)
elif kind == 0x06:
# A sequence of 32-bit signed integers, with each integer encoded as a difference relative to the previous integer. The first integer is stored explicitly.
if debug:
					print("Difference-encoded 32-bit integers")
# The first integer is stored explicitly, as a signed value.
				initial_int = common.read_variable_length_integer(stream)
try:
initial = initial_int.to_bytes(4, "big", signed=True)
except OverflowError:
raise common.DecompressError(f"Initial value out of range for 32-bit integer difference encoding: {initial_int:#x}")
if debug:
					print(f"\t-> initial: 0x{initial_int:>08x}")
				
				yield initial
				count = common.read_variable_length_integer(stream)
assert count >= 0
# To make the following calculations simpler, the signed initial_int value is converted to unsigned.
current_int = initial_int & 0xffffffff
for _ in range(count):
# The difference to the previous integer is stored as a variable-length integer, whose value may be negative.
					diff = common.read_variable_length_integer(stream)
# Simulate 32-bit integer wraparound.
current_int = (current_int + diff) & 0xffffffff
					if debug:
						print(f"\t-> difference {diff:#x}: 0x{current_int:>08x}")
					
					yield current_int.to_bytes(4, "big", signed=False)
else:
raise common.DecompressError(f"Unknown extended code: 0x{kind:>02x}")
elif byte == 0xff:
# End of data marker, always occurs exactly once as the last byte of the compressed data.
if debug:
print("End marker")
# Check that there really is no more data left.
extra = stream.read(1)
if extra:
raise common.DecompressError(f"Extra data encountered after end of data marker (first extra byte: {extra!r})")
break
else:
			raise common.DecompressError(f"Unknown tag byte: 0x{byte:>02x}")
def decompress_stream(header_info: common.CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
"""Decompress compressed data in the format used by 'dcmp' (0)."""
decompressed_length = 0
for chunk in decompress_stream_inner(header_info, stream, debug=debug):
if debug:
print(f"\t-> {chunk!r}")
if header_info.decompressed_length % 2 != 0 and decompressed_length + len(chunk) == header_info.decompressed_length + 1:
# Special case: if the decompressed data length stored in the header is odd and one less than the length of the actual decompressed data, drop the last byte.
# This is necessary because nearly all codes generate data in groups of 2 or 4 bytes, so it is basically impossible to represent data with an odd length using this compression format.
decompressed_length += len(chunk) - 1
yield chunk[:-1]
else:
decompressed_length += len(chunk)
yield chunk
if debug:
print(f"Decompressed {decompressed_length:#x} bytes so far")
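Editor's usage sketch (not part of the diff), driving the new generator API end to end. The header values are illustrative: 0x0801 is assumed to be the type 8 compression type constant, and only the header's class and decompressed_length actually matter here. The input b"\x01AB\xff" is one two-byte literal (tag 0x01) followed by the end marker.

import io
# Hypothetical header: length 0x12, assumed type 8 constant 0x0801, 2 bytes of
# output, 'dcmp' ID 0, illustrative buffer size fields.
header_info = common.CompressedType8HeaderInfo(0x12, 0x0801, 2, 0, 0x40, 0x00)
assert b"".join(decompress_stream(header_info, io.BytesIO(b"\x01AB\xff"))) == b"AB"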


@@ -1,3 +1,5 @@
import typing
from . import common
# Lookup table for codes in range(0xd5, 0xfe).
@@ -19,96 +21,75 @@ TABLE = [TABLE_DATA[i:i + 2] for i in range(0, len(TABLE_DATA), 2)]
assert len(TABLE) == len(range(0xd5, 0xfe))
def decompress_stream_inner(header_info: common.CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
	"""Internal helper function, implements the main decompression algorithm. Only called from decompress_stream, which performs some extra checks and debug logging."""
	
	if not isinstance(header_info, common.CompressedType8HeaderInfo):
		raise common.DecompressError(f"Incorrect header type: {type(header_info).__qualname__}")
	
	prev_literals: typing.List[bytes] = []
	
	while True: # Loop is terminated when the EOF marker (0xff) is encountered
		(byte,) = common.read_exact(stream, 1)
if debug:
			print(f"Tag byte 0x{byte:>02x}")
if byte in range(0x00, 0x20):
# Literal byte sequence, 1-byte header.
# The length of the literal data is stored in the low nibble of the tag byte.
count = (byte >> 0 & 0xf) + 1
			
			# Controls whether or not the literal is stored so that it can be referenced again later.
			do_store = byte >= 0x10
			literal = common.read_exact(stream, count)
if debug:
print(f"Literal (1-byte header, storing: {do_store})")
if do_store:
if debug:
					print(f"\t-> storing as literal number 0x{len(prev_literals):x}")
prev_literals.append(literal)
			
			yield literal
elif byte in range(0x20, 0xd0):
# Backreference to a previous literal, 1-byte form.
# This can reference literals with indices in range(0xb0).
table_index = byte - 0x20
if debug:
print(f"Backreference (1-byte form) to 0x{table_index:>02x}")
			yield prev_literals[table_index]
elif byte in (0xd0, 0xd1):
# Literal byte sequence, 2-byte header.
# The length of the literal data is stored in the following byte.
			(count,) = common.read_exact(stream, 1)
# Controls whether or not the literal is stored so that it can be referenced again later.
do_store = byte == 0xd1
			literal = common.read_exact(stream, count)
if debug:
print(f"Literal (2-byte header, storing: {do_store})")
if do_store:
if debug:
					print(f"\t-> storing as literal number 0x{len(prev_literals):x}")
prev_literals.append(literal)
			
			yield literal
elif byte == 0xd2:
# Backreference to a previous literal, 2-byte form.
# This can reference literals with indices in range(0xb0, 0x1b0).
			(next_byte,) = common.read_exact(stream, 1)
			table_index = next_byte + 0xb0
if debug:
print(f"Backreference (2-byte form) to 0x{table_index:>02x}")
			yield prev_literals[table_index]
elif byte in range(0xd5, 0xfe):
# Reference into a fixed table of two-byte literals.
# All compressed resources use the same table.
table_index = byte - 0xd5
if debug:
print(f"Fixed table reference to 0x{table_index:>02x}")
			yield TABLE[table_index]
elif byte == 0xfe:
# Extended code, whose meaning is controlled by the following byte.
			(kind,) = common.read_exact(stream, 1)
if debug:
print(f"Extended code: 0x{kind:>02x}")
if kind == 0x02:
# Repeat 1 byte a certain number of times.
@@ -119,33 +100,45 @@ def decompress(data: bytes, decompressed_length: int, *, debug: bool=False) -> b
print(f"Repeat {byte_count}-byte value")
# The byte(s) to repeat, stored as a variable-length integer. The value is treated as unsigned, i. e. the integer is never negative.
				to_repeat_int = common.read_variable_length_integer(stream)
try:
to_repeat = to_repeat_int.to_bytes(byte_count, "big", signed=False)
except OverflowError:
raise common.DecompressError(f"Value to repeat out of range for {byte_count}-byte repeat: {to_repeat_int:#x}")
				count = common.read_variable_length_integer(stream) + 1
if count <= 0:
raise common.DecompressError(f"Repeat count must be positive: {count}")
				if debug:
					print(f"\t-> {to_repeat!r} * {count}")
				yield to_repeat * count
else:
raise common.DecompressError(f"Unknown extended code: 0x{kind:>02x}")
elif byte == 0xff:
# End of data marker, always occurs exactly once as the last byte of the compressed data.
if debug:
print("End marker")
# Check that there really is no more data left.
extra = stream.read(1)
if extra:
raise common.DecompressError(f"Extra data encountered after end of data marker (first extra byte: {extra!r})")
break
else:
			raise common.DecompressError(f"Unknown tag byte: 0x{byte:>02x}")
def decompress_stream(header_info: common.CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
"""Decompress compressed data in the format used by 'dcmp' (1)."""
decompressed_length = 0
for chunk in decompress_stream_inner(header_info, stream, debug=debug):
if debug:
print(f"\t-> {chunk!r}")
decompressed_length += len(chunk)
yield chunk
if debug:
print(f"Decompressed {decompressed_length:#x} bytes so far")
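The same kind of editor's sketch for 'dcmp' (1): the input here is one 2-byte-header literal (tag 0xd0, length 3) followed by the end marker. As above, the header values besides the class and decompressed_length are illustrative.

import io
header_info = common.CompressedType8HeaderInfo(0x12, 0x0801, 3, 1, 0x40, 0x00)
assert b"".join(decompress_stream(header_info, io.BytesIO(b"\xd0\x03abc\xff"))) == b"abc"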


@@ -73,68 +73,73 @@ def _split_bits(i: int) -> typing.Tuple[bool, bool, bool, bool, bool, bool, bool
)
def _decompress_untagged(stream: "common.PeekableIO", decompressed_length: int, table: typing.Sequence[bytes], *, debug: bool = False) -> typing.Iterator[bytes]:
while True: # Loop is terminated when EOF is reached.
table_index_data = stream.read(1)
if not table_index_data:
# End of compressed data.
break
elif not stream.peek(1) and decompressed_length % 2 != 0:
# Special case: if we are at the last byte of the compressed data, and the decompressed data has an odd length, the last byte is a single literal byte, and not a table reference.
if debug:
print(f"Last byte: {table_index_data!r}")
yield table_index_data
break
# Compressed data is untagged, every byte is a table reference.
(table_index,) = table_index_data
if debug:
print(f"Reference: {table_index} -> {table[table_index]!r}")
yield table[table_index]
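An editor's sketch of the untagged scheme (not part of the diff): every compressed byte is an index into the table, except that a trailing odd byte is passed through literally when the decompressed length is odd. The two-entry table is hypothetical.

import io
table = [b"AB", b"CD"]
stream = common.make_peekable(io.BytesIO(b"\x00\x01\x00"))
# Two table references, then a literal last byte (decompressed length 5 is odd).
assert b"".join(_decompress_untagged(stream, 5, table)) == b"ABCD\x00"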
def _decompress_tagged(stream: "common.PeekableIO", decompressed_length: int, table: typing.Sequence[bytes], *, debug: bool = False) -> typing.Iterator[bytes]:
while True: # Loop is terminated when EOF is reached.
tag_data = stream.read(1)
if not tag_data:
# End of compressed data.
break
elif not stream.peek(1) and decompressed_length % 2 != 0:
# Special case: if we are at the last byte of the compressed data, and the decompressed data has an odd length, the last byte is a single literal byte, and not a tag or a table reference.
if debug:
print(f"Last byte: {tag_data!r}")
yield tag_data
break
# Compressed data is tagged, each tag byte is followed by 8 table references and/or literals.
(tag,) = tag_data
if debug:
print(f"Tag: 0b{tag:>08b}")
for is_ref in _split_bits(tag):
if is_ref:
# This is a table reference (a single byte that is an index into the table).
table_index_data = stream.read(1)
if not table_index_data:
# End of compressed data.
break
(table_index,) = table_index_data
if debug:
print(f"Reference: {table_index} -> {table[table_index]!r}")
yield table[table_index]
else:
# This is a literal (two uncompressed bytes that are literally copied into the output).
literal = stream.read(2)
if not literal:
# End of compressed data.
break
# Note: the literal may be only a single byte long if it is located exactly at EOF. This is intended and expected - the 1-byte literal is yielded normally, and on the next iteration, decompression is terminated as EOF is detected.
if debug:
print(f"Literal: {literal!r}")
yield literal
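And the tagged scheme (editor's sketch, assuming _split_bits yields the tag's bits most-significant-bit first): each tag bit selects between a one-byte table reference and a two-byte literal, and hitting EOF simply ends decompression.

import io
table = [b"AB"]
stream = common.make_peekable(io.BytesIO(b"\x80\x00"))
# Tag 0b10000000: the first chunk is a table reference, then EOF ends the loop.
assert b"".join(_decompress_tagged(stream, 2, table)) == b"AB"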
def decompress_stream(header_info: common.CompressedHeaderInfo, stream: typing.BinaryIO, *, debug: bool = False) -> typing.Iterator[bytes]:
"""Decompress compressed data in the format used by 'dcmp' (2)."""
if not isinstance(header_info, common.CompressedType9HeaderInfo):
raise common.DecompressError(f"Incorrect header type: {type(header_info).__qualname__}")
unknown, table_count_m1, flags_raw = STRUCT_PARAMETERS.unpack(header_info.parameters)
if debug:
print(f"Value of unknown parameter field: 0x{unknown:>04x}")
@@ -152,24 +157,21 @@ def decompress(data: bytes, decompressed_length: int, parameters: bytes, *, debu
print(f"Flags: {flags}")
if ParameterFlags.CUSTOM_TABLE in flags:
table = []
for _ in range(table_count):
table.append(common.read_exact(stream, 2))
if debug:
print(f"Using custom table: {table}")
else:
if table_count_m1 != 0:
raise common.DecompressError(f"table_count_m1 field is {table_count_m1}, but must be zero when the default table is used")
table = DEFAULT_TABLE
if debug:
print("Using default table")
if ParameterFlags.TAGGED in flags:
decompress_func = _decompress_tagged
else:
decompress_func = _decompress_untagged
yield from decompress_func(common.make_peekable(stream), header_info.decompressed_length, table, debug=debug)

rsrcfork/py.typed (new empty file)


@@ -18,8 +18,10 @@ classifiers =
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
license = MIT
license_files =
LICENSE
description = A pure Python, cross-platform library/tool for reading Macintosh resource data, as stored in resource forks and ``.rsrc`` files
long_description = file: README.rst
long_description_content_type = text/x-rst
@@ -33,12 +35,55 @@ keywords =
macos
[options]
setup_requires =
setuptools>=39.2.0
# mypy can only find type hints in the package if zip_safe is set to False,
# see https://mypy.readthedocs.io/en/latest/installed_packages.html#making-pep-561-compatible-packages
zip_safe = False
python_requires = >=3.6
packages = find:
[options.package_data]
rsrcfork =
py.typed
[options.packages.find]
include =
rsrcfork
rsrcfork.*
[options.entry_points]
console_scripts =
rsrcfork = rsrcfork.__main__:main
[flake8]
extend-exclude =
.mypy_cache/,
build/,
dist/,
# The following issues are ignored because they do not match our code style:
ignore =
E226, # missing whitespace around arithmetic operator
E261, # at least two spaces before inline comment
E501, # line too long
W293, # blank line contains whitespace
W503, # line break before binary operator
# flake8-tabs configuration
use-flake8-tabs = true
blank-lines-indent = always
indent-tabs-def = 1
[mypy]
files = rsrcfork/**/*.py
python_version = 3.6
disallow_untyped_calls = True
disallow_untyped_defs = True
disallow_untyped_decorators = True
no_implicit_optional = True
warn_unused_ignores = True
warn_unreachable = True
warn_redundant_casts = True

(Binary files not shown: eight new binary test data files of 35 KiB, 355 KiB, 127 KiB, 884 KiB, 51 KiB, 478 KiB, 159 KiB and 1.1 MiB; tests/data/empty.rsrc, a new 286 B file; tests/data/testfile.rsrc, a new 558 B file; and one more 602 B file.)
tests/test_rsrcfork.py (new file, 290 lines)

@@ -0,0 +1,290 @@
import collections
import io
import pathlib
import shutil
import sys
import tempfile
import typing
import unittest
import rsrcfork
RESOURCE_FORKS_SUPPORTED = sys.platform.startswith("darwin")
RESOURCE_FORKS_NOT_SUPPORTED_MESSAGE = "Resource forks are only supported on Mac"
DATA_DIR = pathlib.Path(__file__).parent / "data"
EMPTY_RSRC_FILE = DATA_DIR / "empty.rsrc"
TEXTCLIPPING_RSRC_FILE = DATA_DIR / "unicode.textClipping.rsrc"
TESTFILE_RSRC_FILE = DATA_DIR / "testfile.rsrc"
COMPRESS_DATA_DIR = DATA_DIR / "compress"
COMPRESSED_DIR = COMPRESS_DATA_DIR / "compressed"
UNCOMPRESSED_DIR = COMPRESS_DATA_DIR / "uncompressed"
COMPRESS_RSRC_FILE_NAMES = [
"Finder.rsrc",
"Finder Help.rsrc",
# "Install.rsrc", # Commented out for performance - this file contains a lot of small resources.
"System.rsrc",
]
def make_pascal_string(s):
return bytes([len(s)]) + s
UNICODE_TEXT = "Here is some text, including Üñïçø∂é!"
DRAG_DATA = (
b"\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03"
b"utxt\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"utf8\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"TEXT\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00"
)
TEXTCLIPPING_RESOURCES = collections.OrderedDict([
(b"utxt", collections.OrderedDict([
(256, UNICODE_TEXT.encode("utf-16-be")),
])),
(b"utf8", collections.OrderedDict([
(256, UNICODE_TEXT.encode("utf-8")),
])),
(b"TEXT", collections.OrderedDict([
(256, UNICODE_TEXT.encode("macroman")),
])),
(b"drag", collections.OrderedDict([
(128, DRAG_DATA),
]))
])
TESTFILE_HEADER_SYSTEM_DATA = (
b"\xa7F$\x08 <\x00\x00\xab\x03\xa7F <\x00\x00"
b"\x01\x00\xb4\x88f\x06`\np\x00`\x06 <\x00\x00"
b"\x08testfile\x00\x02\x00\x02\x00rs"
b"rcRSED\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x02\x00rsrcRSED\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\xdaIp~\x00\x00\x00\x00\x00\x00\x02.\xfe\x84"
)
TESTFILE_HEADER_APPLICATION_DATA = b"This is the application-specific header data section. Apparently I can write whatever nonsense I want here. A few more bytes...."
TESTFILE_RESOURCES = collections.OrderedDict([
(b"STR ", collections.OrderedDict([
(128, (
None, rsrcfork.ResourceAttrs(0),
make_pascal_string(b"The String, without name or attributes"),
)),
(129, (
b"The Name", rsrcfork.ResourceAttrs(0),
make_pascal_string(b"The String, with name and no attributes"),
)),
(130, (
None, rsrcfork.ResourceAttrs.resProtected | rsrcfork.ResourceAttrs.resPreload,
make_pascal_string(b"The String, without name but with attributes"),
)),
(131, (
b"The Name with Attributes", rsrcfork.ResourceAttrs.resSysHeap,
make_pascal_string(b"The String, with both name and attributes"),
)),
])),
])
class UnseekableStreamWrapper(io.BufferedIOBase):
_wrapped: typing.BinaryIO
def __init__(self, wrapped: typing.BinaryIO) -> None:
super().__init__()
self._wrapped = wrapped
def read(self, size: typing.Optional[int] = -1) -> bytes:
return self._wrapped.read(size)
def open_resource_fork(path: pathlib.Path, mode: str) -> typing.BinaryIO:
return (path / "..namedfork" / "rsrc").open(mode)
class ResourceFileReadTests(unittest.TestCase):
def test_empty(self) -> None:
with rsrcfork.open(EMPTY_RSRC_FILE, fork="data") as rf:
self.assertEqual(rf.header_system_data, bytes(112))
self.assertEqual(rf.header_application_data, bytes(128))
self.assertEqual(rf.file_attributes, rsrcfork.ResourceFileAttrs(0))
self.assertEqual(list(rf), [])
def internal_test_textclipping(self, rf: rsrcfork.ResourceFile) -> None:
self.assertEqual(rf.header_system_data, bytes(112))
self.assertEqual(rf.header_application_data, bytes(128))
self.assertEqual(rf.file_attributes, rsrcfork.ResourceFileAttrs(0))
self.assertEqual(list(rf), list(TEXTCLIPPING_RESOURCES))
for (actual_type, actual_reses), (expected_type, expected_reses) in zip(rf.items(), TEXTCLIPPING_RESOURCES.items()):
with self.subTest(type=expected_type):
self.assertEqual(actual_type, expected_type)
self.assertEqual(list(actual_reses), list(expected_reses))
for (actual_id, actual_res), (expected_id, expected_data) in zip(actual_reses.items(), expected_reses.items()):
with self.subTest(id=expected_id):
self.assertEqual(actual_res.type, expected_type)
self.assertEqual(actual_id, expected_id)
self.assertEqual(actual_res.id, expected_id)
self.assertEqual(actual_res.name, None)
self.assertEqual(actual_res.attributes, rsrcfork.ResourceAttrs(0))
self.assertEqual(actual_res.data, expected_data)
self.assertEqual(actual_res.compressed_info, None)
def test_textclipping_seekable_stream(self) -> None:
with TEXTCLIPPING_RSRC_FILE.open("rb") as f:
with rsrcfork.ResourceFile(f) as rf:
self.internal_test_textclipping(rf)
def test_textclipping_unseekable_stream(self) -> None:
with TEXTCLIPPING_RSRC_FILE.open("rb") as f:
with UnseekableStreamWrapper(f) as usf:
with rsrcfork.ResourceFile(usf) as rf:
self.internal_test_textclipping(rf)
def test_textclipping_path_data_fork(self) -> None:
with rsrcfork.open(TEXTCLIPPING_RSRC_FILE, fork="data") as rf:
self.internal_test_textclipping(rf)
@unittest.skipUnless(RESOURCE_FORKS_SUPPORTED, RESOURCE_FORKS_NOT_SUPPORTED_MESSAGE)
def test_textclipping_path_resource_fork(self) -> None:
with tempfile.NamedTemporaryFile() as tempf:
with TEXTCLIPPING_RSRC_FILE.open("rb") as dataf:
with open_resource_fork(pathlib.Path(tempf.name), "wb") as rsrcf:
shutil.copyfileobj(dataf, rsrcf)
with rsrcfork.open(tempf.name, fork="rsrc") as rf:
self.internal_test_textclipping(rf)
@unittest.skipUnless(RESOURCE_FORKS_SUPPORTED, RESOURCE_FORKS_NOT_SUPPORTED_MESSAGE)
def test_textclipping_path_auto_resource_fork(self) -> None:
with tempfile.NamedTemporaryFile() as temp_data_fork:
with TEXTCLIPPING_RSRC_FILE.open("rb") as source_file:
with open_resource_fork(pathlib.Path(temp_data_fork.name), "wb") as temp_rsrc_fork:
shutil.copyfileobj(source_file, temp_rsrc_fork)
with self.subTest(data_fork="empty"):
# Resource fork is selected when data fork is empty.
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
with self.subTest(data_fork="non-resource data"):
# Resource fork is selected when data fork contains non-resource data.
temp_data_fork.write(b"This is the file's data fork. It should not be read, as the file has a resource fork.")
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
with self.subTest(data_fork="valid resource data"):
# Resource fork is selected even when data fork contains valid resource data.
with EMPTY_RSRC_FILE.open("rb") as source_file:
shutil.copyfileobj(source_file, temp_data_fork)
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
@unittest.skipUnless(RESOURCE_FORKS_SUPPORTED, RESOURCE_FORKS_NOT_SUPPORTED_MESSAGE)
def test_textclipping_path_auto_data_fork(self) -> None:
with tempfile.NamedTemporaryFile() as temp_data_fork:
with TEXTCLIPPING_RSRC_FILE.open("rb") as source_file:
shutil.copyfileobj(source_file, temp_data_fork)
# Have to flush the temporary file manually so that the data is visible to the other reads below.
# Normally this happens automatically as part of the close method, but that would also delete the temporary file, which we don't want.
temp_data_fork.flush()
			with self.subTest(rsrc_fork="nonexistent"):
# Data fork is selected when resource fork does not exist.
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
with self.subTest(rsrc_fork="empty"):
# Data fork is selected when resource fork exists, but is empty.
with open_resource_fork(pathlib.Path(temp_data_fork.name), "wb") as temp_rsrc_fork:
temp_rsrc_fork.write(b"")
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
with self.subTest(rsrc_fork="non-resource data"):
# Data fork is selected when resource fork contains non-resource data.
with open_resource_fork(pathlib.Path(temp_data_fork.name), "wb") as temp_rsrc_fork:
temp_rsrc_fork.write(b"This is the file's resource fork. It contains junk, so it should be ignored in favor of the data fork.")
with rsrcfork.open(temp_data_fork.name) as rf:
self.internal_test_textclipping(rf)
def test_testfile(self) -> None:
with rsrcfork.open(TESTFILE_RSRC_FILE, fork="data") as rf:
self.assertEqual(rf.header_system_data, TESTFILE_HEADER_SYSTEM_DATA)
self.assertEqual(rf.header_application_data, TESTFILE_HEADER_APPLICATION_DATA)
self.assertEqual(rf.file_attributes, rsrcfork.ResourceFileAttrs.mapPrinterDriverMultiFinderCompatible | rsrcfork.ResourceFileAttrs.mapReadOnly)
self.assertEqual(list(rf), list(TESTFILE_RESOURCES))
for (actual_type, actual_reses), (expected_type, expected_reses) in zip(rf.items(), TESTFILE_RESOURCES.items()):
with self.subTest(type=expected_type):
self.assertEqual(actual_type, expected_type)
self.assertEqual(list(actual_reses), list(expected_reses))
for (actual_id, actual_res), (expected_id, (expected_name, expected_attrs, expected_data)) in zip(actual_reses.items(), expected_reses.items()):
with self.subTest(id=expected_id):
self.assertEqual(actual_res.type, expected_type)
self.assertEqual(actual_id, expected_id)
self.assertEqual(actual_res.id, expected_id)
self.assertEqual(actual_res.name, expected_name)
self.assertEqual(actual_res.attributes, expected_attrs)
self.assertEqual(actual_res.data, expected_data)
self.assertEqual(actual_res.compressed_info, None)
def test_compress_compare(self) -> None:
# This test goes through pairs of resource files: one original file with both compressed and uncompressed resources, and one modified file where all compressed resources have been decompressed (using ResEdit on System 7.5.5).
# It checks that the rsrcfork library performs automatic decompression on the compressed resources, so that the compressed resource file appears to the user like the uncompressed resource file (ignoring resource order, which was lost during decompression using ResEdit).
for name in COMPRESS_RSRC_FILE_NAMES:
with self.subTest(name=name):
with rsrcfork.open(COMPRESSED_DIR / name, fork="data") as compressed_rf, rsrcfork.open(UNCOMPRESSED_DIR / name, fork="data") as uncompressed_rf:
self.assertEqual(sorted(compressed_rf), sorted(uncompressed_rf))
for (compressed_type, compressed_reses), (uncompressed_type, uncompressed_reses) in zip(sorted(compressed_rf.items()), sorted(uncompressed_rf.items())):
with self.subTest(type=compressed_type):
self.assertEqual(compressed_type, uncompressed_type)
self.assertEqual(sorted(compressed_reses), sorted(uncompressed_reses))
for (compressed_id, compressed_res), (uncompressed_id, uncompressed_res) in zip(sorted(compressed_reses.items()), sorted(uncompressed_reses.items())):
with self.subTest(id=compressed_id):
# The metadata of the compressed and uncompressed resources must match.
self.assertEqual(compressed_res.type, uncompressed_res.type)
self.assertEqual(compressed_id, uncompressed_id)
self.assertEqual(compressed_res.id, compressed_id)
self.assertEqual(compressed_res.id, uncompressed_res.id)
self.assertEqual(compressed_res.name, uncompressed_res.name)
self.assertEqual(compressed_res.attributes & ~rsrcfork.ResourceAttrs.resCompressed, uncompressed_res.attributes)
# The uncompressed resource really has to be not compressed.
self.assertNotIn(rsrcfork.ResourceAttrs.resCompressed, uncompressed_res.attributes)
self.assertEqual(uncompressed_res.compressed_info, None)
self.assertEqual(uncompressed_res.data, uncompressed_res.data_raw)
self.assertEqual(uncompressed_res.length, uncompressed_res.length_raw)
# The compressed resource's (automatically decompressed) data must match the uncompressed data.
self.assertEqual(compressed_res.data, uncompressed_res.data)
self.assertEqual(compressed_res.length, uncompressed_res.length)
if rsrcfork.ResourceAttrs.resCompressed in compressed_res.attributes:
# Resources with the compressed attribute must expose correct compression metadata.
self.assertNotEqual(compressed_res.compressed_info, None)
self.assertEqual(compressed_res.compressed_info.decompressed_length, compressed_res.length)
else:
# Some resources in the "compressed" files are not actually compressed, in which case there is no compression metadata.
self.assertEqual(compressed_res.compressed_info, None)
self.assertEqual(compressed_res.data, compressed_res.data_raw)
self.assertEqual(compressed_res.length, compressed_res.length_raw)
if __name__ == "__main__":
unittest.main()

tox.ini (new file, 27 lines)

@@ -0,0 +1,27 @@
[tox]
# When adding a new Python version here, please also update the list of Python versions called by the GitHub Actions workflow (.github/workflows/ci.yml).
envlist = py{36,37,38},flake8,mypy,package
[testenv]
commands = python -m unittest discover --start-directory ./tests
[testenv:flake8]
deps =
flake8 >= 3.8.0
flake8-bugbear
flake8-tabs
commands = flake8
[testenv:mypy]
deps =
mypy
commands = mypy
[testenv:package]
deps =
twine
wheel >= 0.32.0
commands =
python setup.py sdist bdist_wheel
twine check dist/*