My original goal was to add a sign-extended decimal format, but that
turned out to be awkward. It works for data items and instructions
with immediate operands (e.g. "LDA #-1"), but is either wrong or
useless for address operands, since most assemblers treat integers
as 32-bit values. (LDA -1 is not LDA $FFFF, it's LDA $FFFFFFFF,
which is not useful unless your assembler applies an implicit modulo.)
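To illustrate (assembler syntax varies, so this is just a sketch):

    LDA #-1    ;immediate operand: assembles as LDA #$FF, as intended
    LDA -1     ;address operand: becomes LDA $FFFFFFFF, not LDA $FFFF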
There's also a bit of variability in how assemblers treat negative
values, so I'm shelving the idea for now. I'm keeping the updated
tests, which are now split into 6502 / 65816 parts.
Also, updated the formatter to output all decimal values as unsigned.
Most assemblers were fine with negative values, but 64tass's .dword
insists on non-negative ones. Rather than make the output format
conditional on the value's range, we now always emit unsigned
decimal, which all current assemblers accept.
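For example, where a signed value might previously have been emitted,
we now always emit the unsigned form (sketch):

    .dword -1            ;rejected by 64tass
    .dword 4294967295    ;accepted by all current assemblers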
Add a 6502-only version of the 20032-labels-and-symbols test. The
65816 version could get away with just the 65816-specific stuff, but
there's no real need to modify it. (The next time I update it I may
remove the duplicate label since that requires hand-editing.)
The regression tests were written with the assumption that all cross
assemblers would support 6502, 65C02, and 65816 code. There are a
few that support 65816 partially (e.g. ACME) or not at all. To best
support these, we need to split some of the tests into pieces, so
that important 6502 tests aren't skipped simply because parts of the
test also exercise 65816 code.
The first step is to change the regression test naming scheme. The
old system used 1xxx for tests without project files, and 2xxx for
tests with project files. The new system uses 1xxxN / 2xxxN, where
N indicates the CPU type: 0 for 6502, 1 for 65C02, and 2 for 65816.
For the 1xxxN tests, N determines which CPU is used, which allows
us to move the "allops" 6502/65C02 tests into the no-project
category. For the 2xxxN tests it just allows the 6502 and 65816
versions to share a base name and number.
This change updates the first batch of tests. It involves minor
changes to the test harness and a whole bunch of renaming.
ACME has a "real" PC and a "pseudo" PC. The "real" PC determines the
initial position in a 64KB buffer used to hold assembler output. If
the amount of code generated runs off the end, the assembler fails
with "produced too much code".
The source code generator in SourceGen was outputting a "real" PC
for the first address range and "pseudo" PCs for any address ranges
that followed. This produced nice results for code with a single
range, but caused problems for multi-range sources if the initial
range was high in memory and a later range was lower in memory.
While the assembler isn't actually generating more than 64KB of code,
ACME's buffer management was detecting an overflow.
Now, if a source file has multiple address ranges, we set the "real"
PC to $0000 and use a "pseudo" PC for all ranges. Output for projects
with a single address range is unmodified.
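The generated ACME output for a multi-range source now looks roughly
like this (a sketch; the real generator's formatting differs):

    * = $0000           ;"real" PC, keeps the 64KB buffer in bounds
    !pseudopc $f000 {
        ;code for the first (high) address range
    }
    !pseudopc $2000 {
        ;code for the second (lower) address range
    }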
JSR/JSL calls with inline data have the option of reporting that
they don't continue, which causes the code analyzer to treat them
as JMPs instead. There was a bug that was causing the no-continue
flag to be lost in certain circumstances.
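A sketch of the situation, with a hypothetical print-and-halt routine:

    JSR FatalError          ;plugin formats the inline data that follows
    .byte "disk error",$00
    ;if the plugin reports no-continue, the analyzer treats the JSR
    ;like a JMP and doesn't mark the bytes after the string as code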
The code now explicitly records the plugin's response in an Anattrib
flag. Test 2022-extension-scripts has been updated with a test case
that exercises this situation.
The code was making an unwarranted assumption about how the flags
were being set. For example, ORA #$00 leaves the accumulator
unchanged, so the analyzer can't know whether the result is zero
unless it knows the accumulator's contents; but instead of marking
the Z-flag "indeterminate" it was leaving the flag in its previous
state. This produces incorrect results if the previous instruction
didn't set its flags from the accumulator contents, e.g. an LDX.
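A minimal example of the failure:

    LDX #$00        ;Z=1, set from the X register
    ORA #$00        ;Z now depends on A, whose value is unknown
    BEQ target      ;the old analyzer assumed Z was still 1 here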
Test 1003-flags-and-branches has been updated to test these states.
There's no "standard" coordinate system, so the choice is arbitrary.
However, an examination of the Transporter mesh in Elite revealed
that the mesh was designed for a left-handed coordinate system. We
can compensate for that trivially in the Elite visualizer, but we
might as well match what they're doing. (The only change required
in the code is a couple of sign changes on the Z coordinate, and an
update to the rotation matrix.)
This also downsizes Matrix44 to Matrix33, exposes the rotation mode
enum, and adds a left-handed ZYX rotation mode.
This does mean that meshes that put the front at +Z will show their
backsides initially, since we're now oriented as if we're flying
the ships rather than facing them. I considered adding a 180-degree
Y rotation (with a tweak to the rotation matrix handedness to correct
the first rotation axis) to have them facing by default, but figured
that might be confusing since +Z is supposed to be away.
Anybody who really wants it to be the other way can trivially flip
the coordinates in their visualizer (negate xc/zc).
The Z coordinates in the visualization test project were flipped so
that the design is still facing the viewer at rotation (0,0,0).
Handle the remaining visualization editor UI controls, except for
the "test" button. Save/restore wireframe animations in the
project file. Changed the preview from a 1-pixel-wide line drawn
by a path half the window size to a 2-pixel-wide line drawn by a
path the exact window size.
Moved X/Y/Z rotation out of the plugin, since it has nothing to do
with the plugin at all. (Backface removal and perspective projection
are somewhat dependent on the data contents, as is the choice of
whether or not they should be options.)
Added sliders for X/Y/Z rotation. Much more fun that way.
Renamed VisualizationAnimation to VisBitmapAnimation, as we're not
going to use it for wireframe animation. Created a new class to
hold wireframe animation data, which is really just a reference to
the IVisualizationWireframe so we can generate an animated GIF
without having to pry open the plugin again.
Renamed the "frame-delay-msec" parameter to start with an
underscore, which ensures it doesn't clash with plugin-defined
parameters. If we don't find it with the underscore, we check again
without one, for backward compatibility.
We extract the data from the wireframe visualization, perform a
trivial transform, and display it. The perspective vs.
orthographic flag in the parameters is respected. (No rotation or
backface removal yet.)
Also, increased the thumbnail sizes in the visualization set editor
list from 48x48 to 64x64, because the nearest-pixel-scaled 48x48
looks nasty when used for wireframes.
Added some more plumbing. Updated visualization set edit dialog,
which now does word-wrapping correctly in the buttons. Added Alt+V
as the hotkey for Create/Edit Visualization Set, which allows you
to double-tap it to leap into the visualization editor.
Experimented with Path drawing, which looks like it could do just
what we need.
Also, show the file size in KB in the code/data/junk breakdown at the
bottom of the window. (Technically it's KiB, but that looked funny.)
No-op .ORG directives were being overlooked because they didn't
actually cause anything to happen (a no-op .ORG sets the address to
what it would already have been). The assembly source generator
works in a way that causes them to be skipped, so everybody was
happy.
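For instance (illustrative):

    .org $2000
    ;...exactly $40 bytes of code or data...
    .org $2040      ;no-op: the address would have been $2040 anyway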
This seemed like the sort of thing that was likely to cause problems
down the road, however, so we now split regions correctly when a
no-op .ORG is encountered. This affects the uncategorized data
analyzer and selection grouping.
This changed the behavior of the 2004-numeric-types test, which was
visibly weird in the UI but generated correct output.
Added the 2024-ui-edge-cases test to provide a place to exercise
edge cases when testing the UI by hand. It has some value for the
automated regression test, so it's included there.
Also, changed the AddressMapEntry objects to be immutable. This
is handy when passing lists of them around.
When ANDing with a nonzero immediate we were leaving Z=prev, which
is wrong when Z=0 because the AND result might still be zero. Now if
Z=1 we leave it alone, but if Z=0 we set it to Z=? (indeterminate).
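A minimal example of why Z=prev is wrong when Z=0:

    LDA #$10        ;Z=0
    AND #$0F        ;result is $00, so Z is actually 1 here
    ;leaving Z=prev would make the analyzer mispredict a following BEQ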
Test 1003-flags-and-branches was testing for the (incorrect)
behavior, so we're now running into a BRK. This is fine.
We want to be able to declare a symbol for a struct or buffer that
spans the entire width, and then declare more-specific items within
it that take precedence. This worked for everything but the very
first byte, because on an exact match we were resolving the conflict
alphabetically.
Now, if one is wider than the other, we use the narrower definition.
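For example, with these hypothetical platform symbols:

    BUF     = $1000     ;width 16: the whole buffer
    BUF_CNT = $1000     ;width 1: the first byte

an access to $1000 now resolves to BUF_CNT, the narrower definition,
regardless of alphabetical order.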
Updated 2021-external-symbols with some additional test cases.
We were renaming user labels that collide with opcode mnemonics,
but not project/platform symbols. So if you have a constant named
"BCC" you can't assemble your code with certain assemblers. Now we
rename it automatically.
Added a quick test to 2007-labels-and-symbols. (No change to ACME,
which barfs on the test.)
In 1.5.0-dev1, as part of changes to the way label localization
works, the local variable de-duplicator started checking against a
filtered copy of the symbol table. Unfortunately it never
re-generated the table, so a long-lived LocalVariableLookup (like
the one used by LineListGen) would set up the dup map wrong and
be inconsistent with other parts of the program.
We now regenerate the table on every Reset().
The de-duplication stuff also had problems when opcodes and
operands were double-clicked on. When the opcode is clicked, the
selection should jump to the appropriate variable declaration, but
it wasn't being found because the label generated in the list was
in its original form. Fixed.
When an instruction operand is double-clicked, the instruction operand
editor opens with an "edit variable" shortcut. This was showing
the de-duplicated name, which isn't necessarily a bad thing, but it
was passing that value on to the DefSymbol editor, which thought it
was being asked to create a new entry. Fixed. (Entering the editor
through the LvTable editor works correctly, with nary a de-duplicated
name in sight. You'll be forced to rename it because it'll fail the
uniqueness test.)
References to de-duplicated local variables were getting lost when
the symbol's label was replaced (due largely to a convenient but
flawed shortcut: xrefs are attached to DefSymbol objects). Fixed by
linking the XrefSets.
Given the many issues and their relative subtlety, I decided to make
the modified names more obvious, and went back to the "_DUPn" naming
strategy. (I'm also considering just making it an error and
discarding conflicting entries during analysis... this is much more
complicated than I expected it to be.)
Quick tests can be performed in 2019-local-variables:
- go to +000026, double-click on the opcode, confirm sel change
- go to +000026, double-click on the operand, confirm orig name
shown in shortcut and that shortcut opens editor with orig name
- go to +00001a, down a line, click on PROJ_ZERO_DUP1 and confirm
that it has a single reference (from +000026)
- double-click on var table and confirm editing entry
The list of EQUs at the top of the file is sorted by type, then
value, then name. This adds width as an additional sort key, so that
if you have overlapping items the widest comes first.
This is nice when you have a general entry for a block of data, and
then specific entries for some locations within the block.
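For example, the equate list might now come out as (hypothetical
symbols, illustrative syntax):

    PLAYER   = $0300    ;16 bytes: the whole block, widest first
    PLAYER_X = $0300    ;1 byte
    PLAYER_Y = $0301    ;1 byte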
We emit address adjustments like "LDA thing+1", which are usually
small values. Sometimes they're large, e.g. "LDA thing-61440",
which is harder to understand than "LDA thing-$F000". So now we
show small adjustments in decimal, and large adjustments in hex.
The current definition of "small" is abs(adjust) < 256.
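Examples of the new behavior:

    LDA thing+1         ;abs(adjust) < 256, shown in decimal
    LDA thing+250       ;still small, still decimal
    LDA thing-$F000     ;large adjustment, shown in hex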
The uncategorized data scanner isn't supposed to create strings or
".fill" directives that straddle labels, long comments, notes,
visualizations, or ORG directives. The test for crossing an ORG
directive was incomplete, and didn't correctly handle no-op ORGs
(where the new address is the same as the old one).
The code generator doesn't output ORGs that are hidden inside other
things, so we're not generating bad code, but it looks funny on
screen and may cause problems later on. The 2004-numeric-types test
has the basic .align/.fill/.bulk directive tests, and now has an
extended set of tests for uncategorized data region splitting.
We now store Visualizations, VisualizationAnimations, and
VisualizationSets as three separate lists linked by tag strings.
WARNING: this breaks existing projects with visualizations. The
test projects have been updated.
We now generate GIF images for visualizations and add inline
references to them in the HTML output.
Images are scaled using the HTML img properties. This works well
on some browsers, but others insist on "smooth" scaling that blurs
out the pixels. This may require a workaround.
An extra blank line is now added above visualizations. This helps
keep the image and data visually grouped.
The Apple II bitmap test project was updated to have a visualization
set with multiple images at the top of the file.
Added comments, renamed files, removed cruft.
Stop showing the visualization tag name in the code list. It's
often redundant with the code label, and it's distracting. (We may
want to make this an option so you can Ctrl+F to find a tag.)
First swing at a visualizer for Atari 2600 sprites and playfields.
Won't necessarily present an accurate view of what is displayed on
screen, but should provide a reasonable shape for data stored in
the obvious way.
The Adventure playfields looked squashed, so I added a simple row
duplication value.
Also, minor improvements to visualizers generally:
- Throw an exception, rather than an Assert, in VisBitmap8 when the
arguments are bad.
- Show the exception in the Visualization Edit dialog.
- If generation fails and we don't have an error message, show a
generic "stuff be broke" string.
- Set focus on OK button in Visualization Set Edit after editing,
so you can hit Enter twice after renaming a tag.
Various changes:
- Generally treat visualization sets like long comments and notes
when it comes to defining data region boundaries. (We were doing
this for selections; now we're also doing it for format-as-word
and in the data analyzer when scanning for strings/fill.)
- Clear the visualization cache when the address map is altered.
This is necessary for visualizers that dereference addresses.
- Read the Apple II screen image from a series of addresses rather
than a series of offsets. This allows it to work when the image
is contiguous in memory but split into chunks in the file.
- Put 1 pixel of padding around the images in the main code list,
so they don't blend into the background.
- Remember the last visualizer used, so we can re-use it the next
time the user selects "new".
- Move min-size hack from Loaded to ContentRendered, as it apparently
spoils CenterOwner placement.
Bitmap fonts are a series of (usually) 1x8 bitmaps, which we arrange
into a grid of cells.
Screen images are useful for embedded screens, or for people who want
to display stand-alone image files as disassembly projects.
Various improvements:
- Switched to ReadOnlyDictionary in Visualization to make it clear
that the parameter dictionary should not be modified.
- Added a warning to the Visualization Set editor that appears when
there are no plugins that implement a visualizer.
- Make sure an item is selected in the set editor after edit/remove.
- Replaced the checkerboard background with one that's a little bit
more grey, so it's more distinct from white pixel data.
- Added a new Apple II hi-res color converter whose output more
closely matches KEGS and AppleWin RGB.
- Added VisHiRes.cs to some Apple II system definitions.
- Added some test bitmaps for Apple II hi-res to the test directory.
(These are not part of an automated test.)
Implemented Apple II hi-res bitmap conversion. Supports B&W and
color. Uses essentially the same algorithm as CiderPress.
Experimented with displaying non-text items in ListView. I assumed
it would work, since it's the sort of thing WPF is designed to do,
but it's always wise to approach with caution. Visualization Sets
now show a 64x64 button as a placeholder for the eventual thumbnail.
Some things were being flaky, which turned out to be because I
wasn't Prepare()ing the plugins before using them from Edit
Visualization. To make this a deterministic failure I added an
Unprepare() call that tells the plugin that we're all done.
NOTE: this breaks all existing plugins.
It's pretty common for code to access BUFFER-1,X, but it's rare for
the buffer to live in zero-page memory. More often than not, when we
auto-format a zero-page operand with a nearby symbol, we're
mislabeling a simple variable. It's more confusing than useful, so
we don't do that anymore.
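For example, with a hypothetical zero-page variable:

    PTR = $30
    LDA $2F,X       ;was auto-formatted as LDA PTR-1,X; now left as a
                    ;raw address unless the user formats it explicitly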
Correct handling of local variables. We now correctly uniquify them
with regard to non-unique labels. Because local vars can effectively
have global scope we mostly want to treat them as global, but they're
uniquified relative to other globals very late in the process, so we
can't just throw them in the symbol table and be done. Fortunately
local variables exist in a separate namespace, so we just need to
uniquify the variables relative to the post-localization symbol table.
In other words, we take the symbol table, apply the label map, and
rename any variable that clashes.
This also fixes an older problem where we weren't masking the
leading '_' on variable labels when generating 64tass output.
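For example, a variable named _ptr can't be emitted as-is, because
64tass treats a leading underscore as making the label local to the
preceding global label (sketch):

    _ptr = $30      ;64tass: scoped to the previous global label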
The code list now makes non-unique labels obvious, but you can't tell
the difference between unique global and unique local. What's more,
the default type value in Edit Label is now adjusted to Global for
unique locals that were auto-generated. To make it a bit easier to
figure out what's what, the Info panel now has a "label type" line
that reports the type.
The 2023-non-unique-labels test had some additional tests added to
exercise conflicts with local variables. The 2019-local-variables
test output changed slightly because the de-duplicated variable
naming convention was simplified.
Implemented assembly source generation of non-unique local labels.
The new 2023-non-unique-labels test exercises various edge cases
(though we're still missing local variable interaction).
The format of uniquified labels changed slightly, so the expected
output of 2012-label-localizer needed to be updated.
This changes the "no opcode mnemonics" and "mask leading underscores"
functions into integrated parts of the label localization process.
The label localizer is now always on. The regression tests turned
it off by default, but that's no longer allowed, so the generated
output has changed for many of them. The tests themselves were not
altered.
Update the symbol lookup in EditInstructionOperand, EditDataOperand,
and GotoBox to correctly deal with non-unique labels.
This is a little awkward because we're doing lookups by name on
a non-unique symbol, and must resolve the ambiguity. In the case of
an instruction operand that refers to an address this is pretty
straightforward. For partial bytes (LDA #>:foo) or data directives
(.DD1 :foo) we have to take a guess. We can probably make a more
informed guess than we currently do, e.g. the LDA case could find
the label that minimizes the adjustment, but I don't want to sink a
lot of time into this until I'm sure it'll be useful.
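For example, using the non-unique label syntax from above:

    :loop   ...         ;at $1000
    ...
    :loop   ...         ;at $2000
    LDA #>:loop         ;high byte could be $10 or $20; we guess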
Data operands with multiple regions are something of a challenge,
but I'm not sure specifying a single symbol for multiple locations
is important.
The "goto" box just finds the match that's closest to the selection.
Unlike "find", it always grabs the closest, not the next one forward.
(Not sure if this is useful or confusing.)
Added serialization of non-unique labels to project files.
The address labels are stored without the non-unique tag, because we
can get that from the file offset. (If we stored it, we'd need to
extract the value and verify that it matches the offset.) Operand
weak references are symbolic, and so do include the tag string.
We weren't validating symbol labels before. Now we are.
This also adds a "NonU" filter to the Symbols window so the labels
can be shown or hidden as desired.
Also, added source for a first pass at a regression test.
Some style guides say you should only put one space between
sentences, but I and many others still put two. The line-folding
code was only eating one of them when they straddled the end of the
line, which looked a little funny because the following line was
indented by one space.
This tweaks the code to eat both spaces. Regression test updated.
Also, nudge some UI elements so they line up.
The code that found a nearby data target for an instruction operand
was searching backward but not forward. We now take one step
forward, so that "LDA TABLE-1,Y" fills in automatically.
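For example:

    LDA $2FFF,Y     ;TABLE is at $3000

now becomes

    LDA TABLE-1,Y   ;found by taking one step forward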
This altered 2008-address-changes, which had just this situation.
It didn't alter 2010-target-adjustment, but the existing tests were
insufficient and have been improved.
In the assembler output, add a blank line between the constants
and addresses in the long list of equates.
The earlier change that corrected the BIT instruction caused test
2009-branches-and-banks to fail, because it was relying on the idea
that BIT made the carry flag indeterminate. Changing a BCC to a
BVS restored the desired behavior.
This began with a change to support "BRK <operand>" in cc65. The
assembler only supports this for 65816 projects, so we detect that
and enable it when available.
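For example (sketch, mirroring the "BRK <operand>" form; only
enabled for 65816 projects):

    brk $05         ;two-byte BRK with an inline operand byte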
While fiddling with some test code an assertion fired. This
revealed a minor issue in the code analyzer: when overwriting inline
data with instructions, we weren't resetting the format descriptor.
The code that exercises it, which requires two-byte BRKs and an
inline BRK handler in an extension script, has been added to test
2022-extension-scripts.
The new regression test revealed a flaw in the 64tass code
generator's character encoding scanner that caused it to hang.
Fixed.
Sometimes there's a bunch of junk in the binary that isn't used for
anything. Often it's there to make things line up at the start of
a page boundary.
This adds a ".junk" directive that tells the disassembler that it
can safely disregard the contents of a region. If the region ends
on a power-of-two boundary, an alignment value can be specified.
The assembly source generators will output an alignment directive
when possible, a .fill directive when appropriate, and a .dense
directive when all else fails. Because we're required to regenerate
the original data file, it's not always possible to avoid generating
a hex dump.
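For example, a junk region might come out as one of these (a sketch;
directive names vary by assembler):

    .align 256            ;region ends on a page boundary
    .fill  23,$00         ;region bytes are uniform
    .byte  $17,$32,$09    ;hex dump when all else fails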