We still haven't changed anything about how cppo-ng fundamentally works here
(aside from fixing things I definitely had broken--and there's some code I still find
questionable), but g.image_data is now gone. One g.var down, all the rest still
to go.
One thing both AppleCommander and CiderPress do is effectively treat 2img
files as containers. That's a good idea. We need to start using our buffer
code before that approach begins making sense, but it means this 2img code has
to go, as it's kind of just bolted onto the side for now.
Instead we'll put the image we load into a buffer. How we do that is likely to
change, but this gets us to the point where we can start using it.
Bytes-like objects have some strangeness regarding indexing and slicing: a
single index gives you an int, while a slice of any kind gives you a
bytes-like object. That doesn't translate to our read method, which may not be
slicable. So either we make sure we always explicitly turn things into an int
with ord() when we need one, or this.
So, this.
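A minimal demonstration with plain bytes (nothing project-specific here):

    data = bytes([0x2a, 0x2b, 0x2c])
    data[0]         # 42 -- a single index yields an int
    data[0:1]       # b'*' -- any slice yields a bytes object
    ord(data[0:1])  # 42 -- the explicit conversion mentioned above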
This actually disables all pylint warnings we haven't fixed because, as of now,
we don't intend to fix them. The arg parsing is suboptimal, but if you can
turn it into argparse code while making sure existing scripts that use cppo
don't break, be my guest. I didn't see an obvious way to do it better than
what we have now. When cppo is rewritten, we'll create a new runner with its
own argparse-based command parser. Until then, I'm inclined to leave it be.
This is just part of a larger rewrite I started working on, but ... it's not
needed now.
Duplicating the information from the usage here made it clear that a couple of
our arguments are undocumented. I'm breaking up a larger effort into smaller
pieces so that the changes are more visible here.
The actual logging configuration belongs in the application, not a module. In a
module, it's always going to do what the module says to do. And that's fine for
cppo, which is a script that works the way it does, but we have bigger plans for
this code. The logging configuration now lives in the cppo module.
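A rough sketch of the split (the exact module and call names in blocksfree may differ):

    # in a library module: ask for a logger, never configure one
    import logging
    LOG = logging.getLogger(__name__)

    # in the application entry point (the cppo runner): decide the policy
    logging.basicConfig(level=logging.INFO)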
We've stopped using log from blocksfree.logging, favoring LOG instead, so we
have removed it.
If the use of a four-space tab for files under .git aside from patches (and
commit messages if you use commit.verbose) is annoying to you, feel free to PR
this to make the .editorconfig more specific.
Additionally, cppo was not explicitly specified in .editorconfig. EditorConfig
applies rules based on filename, not filetype. That doesn't bother me--there is
a vim modeline in the file. It might bother you if you use something else.
If you read a single index from a bytes or bytearray, you get an int, not a
single-character bytes object. Originally I didn't want to mimic that behavior
because it's actually somewhat annoying at times. I've since realized it was
done that way for a reason, and not doing it in ByteBuffer would be even more
annoying.
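Mimicking it amounts to something like this simplified sketch (the real
ByteBuffer has more going on than this):

    class ByteBuffer:
        """Minimal sketch of an in-memory buffer with bytes-like indexing."""

        def __init__(self, data: bytes) -> None:
            self._buf = bytearray(data)

        def __getitem__(self, key):
            # Delegating to the underlying bytearray keeps the semantics the
            # same as bytes: an int for a single index, bytes for any slice.
            result = self._buf[key]
            return bytes(result) if isinstance(key, slice) else result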
We broke str(ByteBuffer) with the util.py commit, but having that return a
possibly megabytes-long string was stupid anyway. It now returns a short string
indicating that it's not really a printable thing, rather than the still
probably extremely long repr().
Added a hexdump function to call util.hexdump cleanly. This is really only
here for debugging. I can imagine a lot of things you could do to account for
possibilities later on, but YAGNI applies (including that if you do need it,
you can add it then.)
Our changes around Python's built-in logging module are kind of a hack,
designed to be as lightweight a way as possible to replace the built-in module
with one that operates using the newer str.format-based string expansion. It's
not really complete; we should probably change that at some point. (A rough
sketch of the approach follows the change list below.)
Changes include:
- Docstrings, lots of docstrings
- Type hinting
- log is now LOG
- pylint warnings disabled for things that are intentional and will not change
- StyleAdapter.log no longer dedents msg unless dedent=True is passed, which
  should hopefully make it a little less DWIM.
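For the curious, the general shape of the hack looks something like this
sketch (simplified; the real adapter in blocksfree.logging differs in detail):

    import logging
    import textwrap

    class BraceMessage:
        """Defer str.format() expansion until the record is actually emitted."""
        def __init__(self, fmt, *args):
            self.fmt = fmt
            self.args = args

        def __str__(self):
            return self.fmt.format(*self.args)

    class StyleAdapter(logging.LoggerAdapter):
        """Accept {}-style messages on top of the stdlib logging module."""
        def log(self, level, msg, *args, dedent=False, **kwargs):
            if self.isEnabledFor(level):
                if dedent:
                    msg = textwrap.dedent(msg)
                msg, kwargs = self.process(msg, kwargs)
                self.logger.log(level, BraceMessage(msg, *args), **kwargs)

    LOG = StyleAdapter(logging.getLogger(__name__), {})
    LOG.warning('read {} bytes from {}', 512, 'disk.po')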
First, docstrings have been haphazard and inconsistent in blocksfree; that's
going to change. I don't know rST markup, but I do intend to learn it because
it is a superior format for technical writing. User-focused docs will remain
in Markdown format. They're more likely to be read by end users as plain text,
after all, and those are the sorts of docs Markdown is good for.
Rewrote printables() in a procedural fashion for clarity. I would like to have
done the same with hexchars(), but that one isn't actually much clearer written
procedurally than functionally, so I let it be.
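For reference, a hypothetical procedural version (the actual signature in
util.py may differ), which builds the ASCII column of a hexdump:

    def printables(data: bytes) -> str:
        """Map each byte to its printable ASCII character, or '.' otherwise."""
        chars = []
        for byte in data:
            chars.append(chr(byte) if 0x20 <= byte < 0x7f else '.')
        return ''.join(chars)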
Functions now have type hints again. Those went away when I rewrote this mess
and I didn't put them back.
Finally I renamed the Iterator version of this function to hexdump_gen.
pylint3 objects to the workaround that comes from mixing commits. Temporary.
I'm not sure using a hexdump makes sense for str() here, but I don't know what
else does yet. I also know that I want a buffer to be hexdumpable, although I
think I'd prefer something that allows a hexdump not to require a huge amount
of memory. Until we start processing HFS, I don't need to dwell on that too
much. Let's get it working first.
The new gen_hexdump returns an Iterator that gives you a hex dump one line at
a time. This is (probably) the thing to use if you want to do something useful
with a hexdump other than, say, print it. The hexdump function now takes
another argument, func, a callable meant to take a string, defaulting to print;
it just uses the Iterator.
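In sketch form (the line formatting here is heavily abbreviated, and the names
other than func are not guaranteed to match the real code):

    from typing import Callable, Iterator

    def gen_hexdump(data: bytes) -> Iterator[str]:
        """Yield one rendered line per 16-byte row (real formatting elided)."""
        for offset in range(0, len(data), 16):
            yield '{:08x}: {}'.format(offset, data[offset:offset + 16].hex())

    def hexdump(data: bytes, func: Callable[[str], None] = print) -> None:
        """Hand each line from the iterator to func, which defaults to print."""
        for line in gen_hexdump(data):
            func(line)

    lines = []                         # e.g. collect lines instead of printing
    hexdump(b'some example bytes', func=lines.append)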
I think I'm done messing with the API to this and can soon start actually
committing some code that uses it now and then. ;)
Type hinting in Python is just that: Hinting. Which means if you do it wrong,
as long as the syntax is right, Python itself will just let you do it. Python
is intended to be a duck-typed language, so you're really just documenting your
intent/expectations. The feature is somewhat new, and its conventions are not
completely understood by certain tools that encounter it. This apparently
applies to myself as a developer trying to use it. ;)
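A trivial illustration of what "wrong but syntactically fine" means here:

    def read_signature(data: bytes) -> str:  # the hint promises a str...
        return data[:4]                      # ...but this returns bytes; Python won't object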
I explained in the comments on BufferType why I'm doing this, but the nutshell
version is that I anticipate having bigger files to deal with at some point
that we won't want to keep in memory. Otherwise we could just use bytearrays.
The way this is meant to be used (which admittedly isn't clear--would someone
like to submit a patch that improves the docs?) is as a context manager; in
other words, with Python's with statement. That isn't all that different for a
ByteBuffer, but it would be for a FileBuffer (which doesn't exist yet and won't
for a while).
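A usage sketch under those assumptions (the read signature and constructor
shown here are guesses, not the settled API):

    class ByteBuffer:
        """Sketch: an in-memory buffer usable as a context manager."""

        def __init__(self, data: bytes) -> None:
            self._buf = bytearray(data)

        def read(self, offset: int, count: int) -> bytes:  # hypothetical signature
            return bytes(self._buf[offset:offset + count])

        def __enter__(self) -> 'ByteBuffer':
            return self

        def __exit__(self, exc_type, exc, tb) -> None:
            pass  # a FileBuffer would flush or discard pending changes here

    with ByteBuffer(bytes(1024)) as buf:
        first_block = buf.read(0, 512)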
Implementation hint for FileBuffer when I get there: If the file is not
explicitly opened read-only, I intend for read-modify-write to be standard
practice from the start. That'll mean duplicating a file to a temporary one we
can manipulate safely and then at flush time, copying the changes over the
original. That way you'd always be able to undo your changes by quitting
without saving. This seems important as blocksfree is likely to serve a lot of
archival duty and you may only get one shot at trying to save an image from a
damaged floppy. It would be awful if that image were then destroyed by an
accidental exception somewhere in the middle of other operations. So let's not
go there.
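A sketch of that plan, with the caveat that FileBuffer doesn't exist yet and
every name here is a guess at its eventual shape:

    import shutil
    import tempfile

    class FileBuffer:
        """Sketch: operate on a temporary copy; only flush() touches the original."""

        def __init__(self, path: str, writable: bool = False) -> None:
            self.path = path
            self.writable = writable
            # Copy the original up front so an accident mid-operation can
            # never damage the source image.
            self._tmp = tempfile.NamedTemporaryFile(delete=False)
            with open(path, 'rb') as src:
                shutil.copyfileobj(src, self._tmp)
            self._tmp.flush()

        def flush(self) -> None:
            """Copy the modified temporary file back over the original."""
            if self.writable:
                shutil.copyfile(self._tmp.name, self.path)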
What this is missing is a flush method. It's not implemented because:
a. cppo is all we've got so far and it doesn't need write at all
b. I'm not 100% sure how I'm doing files yet
c. I'm hoping the way I ought to implement this will make itself apparent when
the code gets used.
Python 3 already has a means of turning numerical data into a binary string
with format(). The only place the old specialty function was used was with
ProDOS case masks, so it's an easy call to replace it with Python's internals.
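For example (the 16-bit width is an assumption about the ProDOS case-mask size):

    format(0x8005, '016b')   # -> '1000000000000101'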
The util functions consist entirely of hexdump and its helper function right
now, both of which are completely unused at the moment. I don't intend for
legacy to ever call these functions, but I should start using them soon. :)
I may not have done this 100% "properly"--this is really the first full
application thingy I've ever tried to write in Python. I learned that circular
imports are possible and that the error messages are not obvious when you
create one.
I've also learned that importing a package doesn't necessarily import the
modules within that package--if you want that, the package needs to import its
submodules into itself. That was not obvious either. It does explain why
sometimes you must import subpackages directly and other times importing the
package is all you need. This is probably obvious to Python developers who
actually work on big projects, but ... I've only ever done single-file scripts
before now so it wasn't obvious to me.
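Concretely, the fix amounts to something like this in the package's
__init__.py (using the modules mentioned elsewhere in these notes; whether
blocksfree does exactly this isn't shown here):

    # blocksfree/__init__.py
    # Importing a package does not import its submodules on its own;
    # pulling them in here makes "import blocksfree" enough for callers.
    from . import logging
    from . import util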
For now, blocksfree is importing legacy. We don't have enough outside of
legacy yet to make the alternative even remotely useful at this time.
Eventually though the goal is to stop doing that.
Basically cleaning out my stash stack--the stash this came from had been mostly
applied elsewhere already, leaving only a few stray ()s. Figured it was a good
time to PEP 8 the end-of-line comments just so there was something here to
actually commit.
You now simply stuff g with the appropriate options and run the thing. You
could even modify the function to take those things as arguments now, but I
haven't done that yet.
At present we actually have a reasonably high Python minimum of 3.5. We could
back that off to 3.3 without much effort or perhaps even 3.2. Once we're a
little closer to something to release, we should consider doing that.
Not quite finished with this, but the goal here is to not have args passed
into the legacy cppo at all. The g namespace is not ideal, but it does work,
and it isolates the legacy code from needing to understand what's going on at
the shell level. So we'll take advantage of that for the time being.
The bigger problem was that when I moved the arg parsing out of cppo, I failed
to move all of it--a block that checked the number of args got lost. Restored.
The point is to separate the CLI interface to cppo from the inner logic so that
we can begin replacing the legacy code with proper implementations thereof.
This isn't 100% complete in that regard--we still need to pass args to the
legacy main function--but the part of cppo that will survive this whole process
is now functionally isolated, to a large degree, from the part that's likely
to get replaced.
The history document is kind of a mishmash of explanation about the decisions
that have led to what this project is trying to do, and why this rather than
other things, such as improving AppleCommander. (Oh, it has the reason for
that, believe me--die in the cash-consuming fire of the Internet's rage,
Oracle!) More importantly, there are copyright notices and the GNU GPL v2.
This is kind of an expensive thing to do unconditionally, but it lets us make
multi-line strings fit into the code with less ugliness. Basically, if you're
four levels in, you can do something like this:
log.warn("""\
There was a problem.
It probably wasn't fatal because this
is only a warning, but it is enough to have
a multiline string.
""")
This will print without the indentation. It's not quite as clean as how
docutils handles docstrings (allowing the first line to be unindented, hence
the line-continuation), but it's still an improvement. If you can improve upon
this, please feel free to PR it!
The section of cppo containing the logging code has been moved to its own very
short module inside a (bare) Python package. This is messily done for now, but
I wanted this to be a minimal commit.
CiderPress checks that these optional areas are actually as long as the header
says they are and discards them if they're not. This seems like a useful
heuristic. It happens to be the only one we can perform. :)
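In sketch form, the heuristic is roughly this (field handling is simplified
and the function is hypothetical):

    def optional_chunk(image: bytes, offset: int, length: int) -> bytes:
        """Return a 2mg comment/creator chunk, or b'' if the header overruns the file."""
        if length == 0 or offset + length > len(image):
            return b''   # the header claims more data than the file holds: discard it
        return image[offset:offset + length]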
Rewrite of the 2mg parsing code, which saves the comment and creator blocks in
memory, by putting the code into the Disk class.
It doesn't yet attempt to parse out the 2mg image format, and the question is
open as to whether or not I'll try to follow cppo's historical method of
handling this (chop the header and go), AppleCommander's method (consider 2mg
its own unique image format that happens to contain one of the others), or
CiderPress's (use one of several possible wrappers independent of the image
orders). CP's is probably the most flexible.
So basically this code is probably far from final, but it works for what it
does so far.
What's wrong with b2a_hex() or hex()? Well, hex() only converts integers. And
while a2b_hex() ignores whitespace, b2a_hex() doesn't provide any, making for
difficult-to-read output for anything longer than about 8 bytes or so.
In the basic case, it seems like you want a classic hexdump. I chose the xxd
format:
xxxxxxxx: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx |cccccccccccccccc|
Rather than hardcode all of the integers and strings (as I started doing), I
decided that I might as well use variables for these things if only for
readability. And if they're locals, you might as well be able to override
them.
The knobs you have to play with are therefore these:
- wordsize=2, how many bytes are grouped together
- sep=' ', the spacing between words
- sep2=' ', the midpoint spacing
I suppose I could've made everything else configurable too, but YAGNI.
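As a rough sketch of how one row might be rendered with those knobs (the
defaults here, especially sep2, are placeholders rather than the real ones):

    def dump_line(offset: int, row: bytes, wordsize: int = 2,
                  sep: str = ' ', sep2: str = '  ') -> str:
        """Render one 16-byte row in the xxd-style layout shown above."""
        words = [row[i:i + wordsize].hex() for i in range(0, len(row), wordsize)]
        half = len(words) // 2
        hexpart = sep.join(words[:half]) + sep2 + sep.join(words[half:])
        text = ''.join(chr(b) if 0x20 <= b < 0x7f else '.' for b in row)
        return '{:08x}: {} |{}|'.format(offset, hexpart, text)

    print(dump_line(0, bytes(range(0x41, 0x51))))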
Finally! The Disk class doesn't actually serve as much more than a slightly
improved Globals class at the moment, holding every split of the source path
and filename that we use in the legacy code, as well as a copy of the disk
image itself that gets used just long enough to read the 2mg header.
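Roughly, at this stage it amounts to something like the following (attribute
names are guesses, not the real ones):

    import os

    class Disk:
        """Sketch: the various splits of the image path plus the raw image bytes."""

        def __init__(self, pathname: str) -> None:
            self.pathname = pathname
            self.dirname, self.filename = os.path.split(pathname)
            self.stem, self.ext = os.path.splitext(self.filename)
            with open(pathname, 'rb') as handle:
                # kept around just long enough to read the 2mg header
                self.image = handle.read()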
The idea I have here is to begin building the module-based code in parallel.
Then I'll just modify the linear code to compare doing it the old way to doing
it the new. That'll let me verify that the new code does what the old should.
When it's all done, we can just modify main to use the new modular code and
look at splitting the modular code into a package with cppo as a runner. At
that point the code should begin being able to do things cppo cannot. We could
continue to extend cppo at that point, but my inclination is to maintain the
cppo runner as a compatibility layer and begin building a more modern image
tool. Essentially to begin building the CiderPress for Linux or the Java-free
AppleCommander.