The regression tests were written with the assumption that all cross
assemblers would support 6502, 65C02, and 65816 code. There are a
few that support 65816 partially (e.g. ACME) or not at all. To best
support these, we need to split some of the tests into pieces, so
that important 6502 tests aren't skipped simply because parts of the
test also exercise 65816 code.
The first step is to change the regression test naming scheme. The
old system used 1xxx for tests without project files, and 2xxx for
tests with project files. The new system uses 1xxxN / 2xxxN, where
N indicates the CPU type: 0 for 6502, 1 for 65C02, and 2 for 65816.
For the 1xxxN tests the N digit determines which CPU is used,
which allows us to move the "allops" 6502/65C02 tests into the
no-project category. For 2xxxN it just allows the 6502 and 65816
versions to have the same base name and number.
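For reference, the digit-to-CPU mapping, as a minimal sketch
(hypothetical helper, not the actual harness code):

    using System;

    static class TestNaming {
        // 1xxxN = no project file, 2xxxN = project file; N selects the CPU.
        public static string CpuForTestNumber(int testNum) {
            switch (testNum % 10) {
                case 0: return "6502";
                case 1: return "65C02";
                case 2: return "65816";
                default: throw new ArgumentException("unrecognized CPU digit");
            }
        }
    }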
This change updates the first batch of tests. It involves minor
changes to the test harness and a whole bunch of renaming.
ACME has a "real" PC and a "pseudo" PC. The "real" PC determines the
initial position in a 64KB buffer used to hold assembler output. If
the amount of code generated runs off the end, the assembler fails
with "produced too much code".
The source code generator in SourceGen was outputting a "real" PC
for the first address range and "pseudo" PCs for any address ranges
that followed. This produced nice results for code with a single
range, but caused problems for multi-range sources if the initial
range was high in memory and a later range was lower in memory.
While the assembler isn't actually generating more than 64KB of code,
ACME's buffer management was detecting an overflow.
Now, if a source file has multiple address ranges, we set the "real"
PC to $0000 and use a "pseudo" PC for all ranges. Output for projects
with a single address range is unmodified.
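Roughly, the generator's decision now looks like this (illustrative
sketch; "*=" and "!pseudopc" are real ACME directives, the helper
itself is made up):

    using System.Text;

    static class AcmeOutputSketch {
        // For a single address range, emit a real "*=" so ACME's output
        // buffer starts at the right place.  For multiple ranges, park the
        // real PC at $0000 and wrap every range in !pseudopc, so a high
        // first range followed by a lower one can't overflow the buffer.
        public static string StartOfRanges(int[] rangeAddrs) {
            StringBuilder sb = new StringBuilder();
            if (rangeAddrs.Length == 1) {
                sb.AppendLine("        *=      $" + rangeAddrs[0].ToString("x4"));
            } else {
                sb.AppendLine("        *=      $0000");
                sb.AppendLine("        !pseudopc $" + rangeAddrs[0].ToString("x4") + " {");
                // ...each following range opens its own !pseudopc block...
            }
            return sb.ToString();
        }
    }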
JSR/JSL calls with inline data have the option of reporting that
they don't continue, which causes the code analyzer to treat them
as JMPs instead. There was a bug that was causing the no-continue
flag to be lost in certain circumstances.
The code now explicitly records the plugin's response in an Anattrib
flag. Test 2022-extension-scripts has been updated with a test case
that exercises this situation.
The code was making an unwarranted assumption about how the flags
were being set. For example, after ORA #$00 we can't know whether the
accumulator contents are nonzero, only that the instruction hasn't
made them nonzero; but instead of marking the Z-flag "indeterminate"
it was leaving the flag in its previous state. This produces
incorrect results if the previous instruction didn't set its flags
from the accumulator contents, e.g. if it was an LDX.
Test 1003-flags-and-branches has been updated to test these states.
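The intended rule, sketched with hypothetical names (the real
analyzer tracks more state than this):

    // Tri-state flag value used by the sketch below.
    enum FlagVal { Zero, One, Indeterminate }

    static class FlagSketch {
        // After ORA #imm: a nonzero immediate forces Z=0, and a set high bit
        // forces N=1.  ORA #$00 leaves the result equal to A, whose value we
        // may not know, so Z and N become indeterminate rather than
        // inheriting whatever the previous instruction left behind.
        public static void ApplyOraImm(byte imm, ref FlagVal z, ref FlagVal n) {
            z = (imm != 0) ? FlagVal.Zero : FlagVal.Indeterminate;
            n = ((imm & 0x80) != 0) ? FlagVal.One : FlagVal.Indeterminate;
        }
    }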
There's no "standard" coordinate system, so the choice is arbitrary.
However, an examination of the Transporter mesh in Elite revealed
that the mesh was designed for a left-handed coordinate system. We
can compensate for that trivially in the Elite visualizer, but we
might as well match what they're doing. (The only change required
in the code is a couple of sign changes on the Z coordinate, and an
update to the rotation matrix.)
This also downsizes Matrix44 to Matrix33, exposes the rotation mode
enum, and adds a left-handed ZYX rotation mode.
This does mean that meshes that put the front at +Z will show their
backsides initially, since we're now oriented as if we're flying
the ships rather than facing them. I considered adding a 180-degree
Y rotation (with a tweak to the rotation matrix handedness to correct
the first rotation axis) to have them facing by default, but figured
that might be confusing since +Z is supposed to be away.
Anybody who really wants it to be the other way can trivially flip
the coordinates in their visualizer (negate xc/zc).
The Z coordinates in the visualization test project were flipped so
that the design is still facing the viewer at rotation (0,0,0).
Handle the remaining visualization editor UI controls, except for
the "test" button. Save/restore wireframe animations in the
project file. Changed the preview from a 1-pixel-wide line drawn
by a path half the window size to a 2-pixel-wide line drawn by a
path the exact window size.
Moved X/Y/Z rotation out of the plugin, since it has nothing to do
with the plugin at all. (Backface removal and perspective projection
are somewhat based on the data contents, as is the choice for
whether or not they should be options.)
Added sliders for X/Y/Z rotation. Much more fun that way.
Renamed VisualizationAnimation to VisBitmapAnimation, as we're not
going to use it for wireframe animation. Created a new class to
hold wireframe animation data, which is really just a reference to
the IVisualizationWireframe so we can generate an animated GIF
without having to pry open the plugin again.
Renamed the "frame-delay-msec" parameter, which should start with
an underscore to ensure it doesn't clash with plugin parameters.
If we don't find it with an underscore, we check again without one
for backward compatibility.
We extract the data from the wireframe visualization, perform a
trivial transform, and display it. The perspective vs.
orthographic flag in the parameters is respected. (No rotation or
backface removal yet.)
Also, increased the thumbnail sizes in the visualization set editor
list from 48x48 to 64x64, because the nearest-pixel-scaled 48x48
looks nasty when used for wireframes.
Added some more plumbing. Updated visualization set edit dialog,
which now does word-wrapping correctly in the buttons. Added Alt+V
as the hotkey for Create/Edit Visualization Set, which allows you
to double-tap it to leap into the visualization editor.
Experimented with Path drawing, which looks like it could do just
what we need.
Also, show the file size in KB in the code/data/junk breakdown at the
bottom of the window. (Technically it's KiB, but that looked funny.)
These were being overlooked because they didn't actually cause
anything to happen (a no-op .ORG sets the address to what it would
already have been). The assembly source generator works in a way
that causes them to be skipped, so everybody was happy.
This seemed like the sort of thing that was likely to cause problems
down the road, however, so we now split regions correctly when a
no-op .ORG is encountered. This affects the uncategorized data
analyzer and selection grouping.
This changed the behavior of the 2004-numeric-types test, which was
visibly weird in the UI but generated correct output.
Added the 2024-ui-edge-cases test to provide a place to exercise
edge cases when testing the UI by hand. It has some value for the
automated regression test, so it's included there.
Also, changed the AddressMapEntry objects to be immutable. This
is handy when passing lists of them around.
For nonzero values we were leaving Z=prev, which is wrong when Z=0
because the AND result might be zero. Now if Z=1 we leave it alone,
but if Z=0 we set it to Z=? (indeterminate).
Test 1003-flags-and-branches was testing for the (incorrect)
behavior, so we're now running into a BRK. This is fine.
We want to be able to declare a symbol for a struct or buffer that
spans the entire width, and then declare more-specific items within
it that take precedence. This worked for everything but the very
first byte, because on an exact match we were resolving the conflict
alphabetically.
Now, if one is wider than the other, we use the narrower definition.
Updated 2021-external-symbols with some additional test cases.
We're doing this for user labels but not for project/platform
symbols. So if you have a constant named "BCC" you can't assemble
your code with certain assemblers. Now we rename it automatically.
Added a quick test to 2007-labels-and-symbols. (No change to ACME,
which barfs on the test.)
In 1.5.0-dev1, as part of changes to the way label localization
works, the local variable de-duplicator started checking against a
filtered copy of the symbol table. Unfortunately it never
re-generated the table, so a long-lived LocalVariableLookup (like
the one used by LineListGen) would set up the dup map wrong and
be inconsistent with other parts of the program.
We now regenerate the table on every Reset().
The de-duplication stuff also had problems when opcodes and
operands were double-clicked on. When the opcode is clicked, the
selection should jump to the appropriate variable declaration, but
it wasn't being found because the label generated in the list was
in its original form. Fixed.
When an instruction operand is double-clicked, the instruction operand
editor opens with an "edit variable" shortcut. This was showing
the de-duplicated name, which isn't necessarily a bad thing, but it
was passing that value on to the DefSymbol editor, which thought it
was being asked to create a new entry. Fixed. (Entering the editor
through the LvTable editor works correctly, with nary a de-duplicated
name in sight. You'll be forced to rename it because it'll fail the
uniqueness test.)
References to de-duplicated local variables were getting lost when
the symbol's label was replaced (due largely to a convenient but
flawed shortcut: xrefs are attached to DefSymbol objects). Fixed by
linking the XrefSets.
Given the many issues and their relative subtlety, I decided to make
the modified names more obvious, and went back to the "_DUPn" naming
strategy. (I'm also considering just making it an error and
discarding conflicting entries during analysis... this is much more
complicated than I expected it to be.)
Quick tests can be performed in 2019-local-variables:
- go to +000026, double-click on the opcode, confirm sel change
- go to +000026, double-click on the operand, confirm orig name
shown in shortcut and that shortcut opens editor with orig name
- go to +00001a, down a line, click on PROJ_ZERO_DUP1 and confirm
that it has a single reference (from +000026)
- double-click on var table and confirm editing entry
The list of EQUs at the top of the file is sorted by type, then
value, then name. This adds width as an additional check, so that
if you have overlapping items the widest comes first.
This is nice when you have a general entry for a block of data, and
then specific entries for some locations within the block.
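The comparison now amounts to something like this (illustrative
sketch, not the actual implementation):

    using System;

    class EquEntry {
        public int Type;        // constants sort before addresses
        public int Value;
        public int Width;
        public string Label;
    }

    static class EquSort {
        // Sort by type, then value, then width (widest first), then name,
        // so a wide "covering" symbol precedes narrower symbols inside it.
        public static int Compare(EquEntry a, EquEntry b) {
            if (a.Type != b.Type)   return a.Type.CompareTo(b.Type);
            if (a.Value != b.Value) return a.Value.CompareTo(b.Value);
            if (a.Width != b.Width) return b.Width.CompareTo(a.Width);
            return string.Compare(a.Label, b.Label, StringComparison.Ordinal);
        }
    }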
We emit address adjustments like "LDA thing+1", which are usually
small values. Sometimes they're large, e.g. "LDA thing-61440",
which is harder to understand than "LDA thing-$F000". So now we
show small adjustments in decimal, and large adjustments in hex.
The current definition of "small" is abs(adjust) < 256.
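A minimal sketch of the rule (hypothetical helper):

    using System;

    static class AdjustmentFormat {
        // Small adjustments read best in decimal ("thing+1"); large ones in
        // hex ("thing-$F000").  "Small" is currently abs(adjust) < 256.
        public static string Format(int adjustment) {
            if (adjustment == 0) return string.Empty;
            char sign = (adjustment < 0) ? '-' : '+';
            int abs = Math.Abs(adjustment);
            return (abs < 256) ?
                string.Format("{0}{1}", sign, abs) :
                string.Format("{0}${1:X}", sign, abs);
        }
    }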
The uncategorized data scanner isn't supposed to create strings or
".fill" directives that straddle labels, long comments, notes,
visualizations, or ORG directives. The test for crossing an ORG
directive is incomplete, and doesn't correctly handle no-op ORGs
(where the new address is the same as the old address).
The code generator doesn't output ORGs that are hidden inside other
things, so we're not generating bad code, but it looks funny on
screen and may cause problems later on. The 2004-numeric-types test
has the basic .align/.fill/.bulk directive tests, and now has an
extended set of tests for uncategorized data region splitting.
We now store Visualizations, VisualizationAnimations, and
VisualizationSets as three separate lists linked by tag strings.
WARNING: this breaks existing projects with visualizations. The
test projects have been updated.
We now generate GIF images for visualizations and add inline
references to them in the HTML output.
Images are scaled using the HTML img properties. This works well
on some browsers, but others insist on "smooth" scaling that blurs
out the pixels. This may require a workaround.
An extra blank line is now added above visualizations. This helps
keep the image and data visually grouped.
The Apple II bitmap test project was updated to have a visualization
set with multiple images at the top of the file.
Added comments, renamed files, removed cruft.
Stop showing the visualization tag name in the code list. It's
often redundant with the code label, and it's distracting. (We may
want to make this an option so you can Ctrl+F to find a tag.)
First swing at a visualizer for Atari 2600 sprites and playfields.
Won't necessarily present an accurate view of what is displayed on
screen, but should provide a reasonable shape for data stored in
the obvious way.
The Adventure playfields looked squashed, so I added a simple row
duplication value.
Also, minor improvements to visualizers generally:
- Throw an exception, rather than an Assert, in VisBitmap8 when the
arguments are bad.
- Show the exception in the Visualization Edit dialog.
- If generation fails and we don't have an error message, show a
generic "stuff be broke" string.
- Set focus on OK button in Visualization Set Edit after editing,
so you can hit Enter twice after renaming a tag.
Various changes:
- Generally treat visualization sets like long comments and notes
when it comes to defining data region boundaries. (We were doing
this for selections; now we're also doing it for format-as-word
and in the data analyzer when scanning for strings/fill.)
- Clear the visualization cache when the address map is altered.
This is necessary for visualizers that dereference addresses.
- Read the Apple II screen image from a series of addresses rather
than a series of offsets. This allows it to work when the image
is contiguous in memory but split into chunks in the file.
- Put 1 pixel of padding around the images in the main code list,
so they don't blend into the background.
- Remember the last visualizer used, so we can re-use it the next
time the user selects "new".
- Move min-size hack from Loaded to ContentRendered, as it apparently
spoils CenterOwner placement.
Bitmap fonts are a series of (usually) 1x8 bitmaps, which we arrange
into a grid of cells.
Screen images are useful for embedded screens, or for people who want
to display stand-alone image files as disassembly projects.
Various improvements:
- Switched to ReadOnlyDictionary in Visualization to make it clear
that the parameter dictionary should not be modified.
- Added a warning to the Visualization Set editor that appears when
there are no plugins that implement a visualizer.
- Make sure an item is selected in the set editor after edit/remove.
- Replaced the checkerboard background with one that's a little bit
more grey, so it's more distinct from white pixel data.
- Added a new Apple II hi-res color converter whose output more
closely matches KEGS and AppleWin RGB.
- Added VisHiRes.cs to some Apple II system definitions.
- Added some test bitmaps for Apple II hi-res to the test directory.
(These are not part of an automated test.)
Implemented Apple II hi-res bitmap conversion. Supports B&W and
color. Uses essentially the same algorithm as CiderPress.
Experimented with displaying non-text items in ListView. I assumed
it would work, since it's the sort of thing WPF is designed to do,
but it's always wise to approach with caution. Visualization Sets
now show a 64x64 button as a placeholder for the eventual thumbnail.
Some things were being flaky, which turned out to be because I
wasn't Prepare()ing the plugins before using them from Edit
Visualization. To make this a deterministic failure I added an
Unprepare() call that tells the plugin that we're all done.
NOTE: this breaks all existing plugins.
It's pretty common for code to access BUFFER-1,X, but it's rare for
the buffer to live in zero-page memory. More often than not, when we
auto-format a zero-page operand with a nearby symbol, the operand is
really just a simple variable. It's more confusing than useful, so we
don't do that anymore.
Correct handling of local variables. We now correctly uniquify them
with regard to non-unique labels. Because local vars can effectively
have global scope we mostly want to treat them as global, but they're
uniquified relative to other globals very late in the process, so we
can't just throw them in the symbol table and be done. Fortunately
local variables exist in a separate namespace, so we just need to
uniquify the variables relative to the post-localization symbol table.
In other words, we take the symbol table, apply the label map, and
rename any variable that clashes.
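A simplified sketch of that last step (hypothetical names; the suffix
scheme shown is illustrative only):

    using System.Collections.Generic;

    static class VarUniquifier {
        // Given the set of labels that will exist after localization and
        // remapping, rename any local variable whose label collides by
        // appending a numeric suffix until it's unique.
        public static string MakeUnique(string varLabel, HashSet<string> allLabels) {
            if (!allLabels.Contains(varLabel)) return varLabel;
            int n = 1;
            string candidate;
            do {
                candidate = varLabel + n;
                n++;
            } while (allLabels.Contains(candidate));
            return candidate;
        }
    }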
This also fixes an older problem where we weren't masking the
leading '_' on variable labels when generating 64tass output.
The code list now makes non-unique labels obvious, but you can't tell
the difference between unique global and unique local. What's more,
the default type value in Edit Label is now adjusted to Global for
unique locals that were auto-generated. To make it a bit easier to
figure out what's what, the Info panel now has a "label type" line
that reports the type.
The 2023-non-unique-labels test had some additional tests added to
exercise conflicts with local variables. The 2019-local-variables
test output changed slightly because the de-duplicated variable
naming convention was simplified.
Implemented assembly source generation of non-unique local labels.
The new 2023-non-unique-labels test exercises various edge cases
(though we're still missing local variable interaction).
The format of uniquified labels changed slightly, so the expected
output of 2012-label-localizer needed to be updated.
This changes the "no opcode mnemonics" and "mask leading underscores"
functions into integrated parts of the label localization process.
The label localizer is now always on. The regression tests turned
it off by default, but that's no longer allowed, so the generated
output has changed for many of them. The tests themselves were not
altered.
Update the symbol lookup in EditInstructionOperand, EditDataOperand,
and GotoBox to correctly deal with non-unique labels.
This is a little awkward because we're doing lookups by name on
a non-unique symbol, and must resolve the ambiguity. In the case of
an instruction operand that refers to an address this is pretty
straightforward. For partial bytes (LDA #>:foo) or data directives
(.DD1 :foo) we have to take a guess. We can probably make a more
informed guess than we currently are, e.g. the LDA case could find
the label that minimizes the adjustment, but I don't want to sink a
lot of time into this until I'm sure it'll be useful.
Data operands with multiple regions are something of a challenge,
but I'm not sure specifying a single symbol for multiple locations
is important.
The "goto" box just finds the match that's closest to the selection.
Unlike "find", it always grabs the closest, not the next one forward.
(Not sure if this is useful or confusing.)
Added serialization of non-unique labels to project files.
The address labels are stored without the non-unique tag, because we
can get that from the file offset. (If we stored it, we'd need to
extract the value and verify that it matches the offset.) Operand
weak references are symbolic, and so do include the tag string.
We weren't validating symbol labels before. Now we are.
This also adds a "NonU" filter to the Symbols window so the labels
can be shown or hidden as desired.
Also, added source for a first pass at a regression test.
Some style guides say you should only put one space between
sentences, but I and many others still put two. The line-folding
code was only eating one of them when they straddled the end of the
line, which looked a little funny because the following line was
indented by one space.
This tweaks the code to eat both spaces. Regression test updated.
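The fold logic now amounts to (illustrative sketch):

    static class LineFold {
        // When breaking a comment at "breakPos", consume up to two spaces at
        // the break point so a two-space sentence gap doesn't leave the
        // continuation line indented by one space.
        public static int NextLineStart(string text, int breakPos) {
            int start = breakPos;
            for (int i = 0; i < 2 && start < text.Length && text[start] == ' '; i++) {
                start++;
            }
            return start;
        }
    }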
Also, nudge some UI elements so they line up.
The code that found a nearby data target for an instruction operand
was searching backward but not forward. We now take one step
forward, so that "LDA TABLE-1,Y" fills in automatically.
This altered 2008-address-changes, which had just this situation.
It didn't alter 2010-target-adjustment, but the existing tests were
insufficient and have been improved.
In the assembler output, add a blank line between the constants
and addresses in the long list of equates.
The earlier change that corrected the BIT instruction caused test
2009-branches-and-banks to fail, because it was relying on the idea
that BIT made the carry flag indeterminate. Changing a BCC to a
BVS restored the desired behavior.
This began with a change to support "BRK <operand>" in cc65. The
assembler only supports this for 65816 projects, so we detect that
and enable it when available.
While fiddling with some test code an assertion fired. This
revealed a minor issue in the code analyzer: when overwriting inline
data with instructions, we weren't resetting the format descriptor.
The code that exercises it, which requires two-byte BRKs and an
inline BRK handler in an extension script, has been added to test
2022-extension-scripts.
The new regression test revealed a flaw in the 64tass code
generator's character encoding scanner that caused it to hang.
Fixed.
Sometimes there's a bunch of junk in the binary that isn't used for
anything. Often it's there to make things line up at the start of
a page boundary.
This adds a ".junk" directive that tells the disassembler that it
can safely disregard the contents of a region. If the region ends
on a power-of-two boundary, an alignment value can be specified.
The assembly source generators will output an alignment directive
when possible, a .fill directive when appropriate, and a .dense
directive when all else fails. Because we're required to regenerate
the original data file, it's not always possible to avoid generating
a hex dump.
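The choice amounts to something like this (illustrative sketch, not
the actual generator code):

    static class JunkFormatSketch {
        // Pick an output form for a ".junk" region.  "alignment" is the
        // power-of-two boundary declared in the editor, or 0 for none.
        // We must regenerate the original file exactly, so an alignment or
        // fill directive is only usable when every byte in the region is
        // the same value.
        public static string Choose(byte[] data, int start, int end,
                int endAddr, int alignment) {
            bool uniform = true;
            for (int i = start + 1; i < end; i++) {
                if (data[i] != data[start]) { uniform = false; break; }
            }
            if (uniform && alignment > 0 && endAddr % alignment == 0) {
                return ".align";     // e.g. pad to the next page boundary
            } else if (uniform) {
                return ".fill";      // repeated single byte value
            } else {
                return ".dense";     // arbitrary bytes; hex dump required
            }
        }
    }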
Sort of silly to have every handler immediately pull the operand out
of the file data. (This is arguably less efficient, since we now
have to serialize the argument across the AppDomain boundary, but
we should be okay spending a few extra nanoseconds here.)
We were failing to update properly when a label changed if the label
was one that a plugin cared about. The problem is that a label
add/remove operation skips the code analysis, and a label edit skips
everything but the display update. Plugins only run during the code
analysis pass, so changes weren't being reflected in the display
list until something caused it to refresh.
The solution is to ask the plugin if the label being changed is one
that it cares about. This allows the plugin to use the same
wildcard-match logic that it uses elsewhere.
For efficiency, and to reduce clutter in plugins that don't care
about symbols, a new interface class has been created to handle the
"here are the symbols" call and the "do you care about this label"
call.
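The interface looks conceptually like this (names here are
illustrative, not necessarily the shipped API):

    using System.Collections.Generic;

    // Hypothetical shape of the new interface, implemented only by plugins
    // that care about symbols.
    public interface ISymbolListPlugin {
        // Receives the full set of project/platform symbols and user labels.
        void UpdateSymbolList(List<string> symbolLabels);

        // Returns true if a change to this label matters to the plugin,
        // using the same wildcard-match logic it applies elsewhere.
        bool IsLabelSignificant(string label);
    }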
The program in Examples/Scripts has been updated to show a very
simple single-call plugin and a slightly more complex multi-call
plugin.
Changed the sort order on EQU lines so that constants come before
address definitions. This caused trivial changes to three of the
regression tests.
Added the ability to jump directly to an EQU line when an opcode
is double-clicked on.
Early data sheets listed BRK as one byte, but RTI after a BRK skips
the following byte, effectively making BRK a 2-byte instruction.
Sometimes, such as when disassembling Apple /// SOS code, it's handy
to treat it that way explicitly.
This change makes two-byte BRKs optional, controlled by a checkbox
in the project settings. In the system definitions it defaults to
true for Apple ///, false for all others.
ACME doesn't allow BRK to have an arg, and cc65 only allows it for
65816 code (?), so it's emitted as a hex blob for those assemblers.
Anyone wishing to target those assemblers should stick to 1-byte mode.
Extension scripts have to switch between formatting one byte of
inline data and formatting an instruction with a one-byte operand.
A helper function has been added to the plugin Util class.
To get some regression test coverage, 2022-extension-scripts has
been configured to use two-byte BRK.
Also, added/corrected some SOS constants.
See also issue #44.
Also exercise various formatting options.
Also, fix a bug where the code that applies project/platform symbols
to numeric references was ignoring inline data items.
The current AddressMap is now passed into the plugin manager, which
wraps it in an AddressTranslate object and passes that to the
plugins at Prepare() time. This allows plugins to convert addresses
to offsets, making it possible to format complex structures.
This breaks existing plugins.
Extension scripts (a/k/a "plugins") can now apply any data format
supported by FormatDescriptor to inline data. In particular, it can
now handle variable-length inline strings. The code analyzer
verifies the string structure (e.g. null-terminated strings have
exactly one null byte, at the very end).
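For example, the null-terminated case amounts to (sketch with
hypothetical names):

    static class InlineStringCheck {
        // A null-terminated inline string is valid only if it has exactly
        // one $00 byte, and that byte is the last one in the region.
        public static bool IsValidNullTerm(byte[] data, int offset, int length) {
            if (length < 1 || data[offset + length - 1] != 0x00) return false;
            for (int i = 0; i < length - 1; i++) {
                if (data[offset + i] == 0x00) return false;
            }
            return true;
        }
    }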
Added PluginException to carry an exception back to the plugin code,
for occasions when they're doing something so wrong that we just
want to smack them.
Added test 2022-extension-scripts to exercise the feature.
We were providing platform symbols to plugins through the PlatSym
list, which allowed them to find constants and well-known addresses.
We now pass all project symbols and user labels in as well. The
name "PlatSym" is no longer accurate, so the class has been renamed.
Also, added a bunch of things to the problem list viewer, and
added some more info to the Info panel.
Also, added a minor test to 2011-hinting that does not affect the
output (which is the point).
Handle situation where a symbol wraps around a bank. Updated
2021-external-symbols for that, and to test the behavior when file
data and an external symbol overlap.
The bank-wrap test turned up a bug in Merlin 32. A workaround has
been added.
Updated documentation to explain widths.
Implement multi-byte project/platform symbols by filling out a table
of addresses. Each symbol is "painted" into the table, replacing
an existing entry if the new entry has higher priority. This allows
us to handle overlapping entries, giving boosted priority to platform
symbols that are defined in .sym65 files loaded later.
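Conceptually (hypothetical sketch; the real table also has to cope
with removals and same-file ties):

    using System.Collections.Generic;

    class SymPaintSketch {
        // addr -> (symbol label, priority); higher priority overwrites lower.
        private Dictionary<int, KeyValuePair<string, int>> mMap =
            new Dictionary<int, KeyValuePair<string, int>>();

        // "Paint" a multi-byte symbol into the address table.  A symbol from
        // a later-loaded .sym65 file gets a higher priority and wins overlaps.
        public void Paint(string label, int addr, int width, int priority) {
            for (int i = 0; i < width; i++) {
                KeyValuePair<string, int> existing;
                if (!mMap.TryGetValue(addr + i, out existing) ||
                        priority >= existing.Value) {
                    mMap[addr + i] = new KeyValuePair<string, int>(label, priority);
                }
            }
        }
    }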
The bounds on project/platform symbols are now rigidly defined. If
the "nearby" feature is enabled, references to SYM-1 will be picked
up, but we won't go hunting for SYM+1 unless the symbol is at least
two bytes wide.
The cost of adding a symbol to the symbol table is about the same,
but we don't have a quick way to remove a symbol.
Previously, if two platform symbols had the same value, the symbol
with the alphabetically lowest label would win. Now, the symbol
defined in the most-recently-loaded file wins. (If you define two
symbols with the same value in the same file, it's still resolved
alphabetically.) This allows the user to pick the winner by
arranging the load order of the platform symbol files.
Platform symbols now keep a reference to the file ident of the
symbol file that defined them, so we can show the symbol's source
in the Info panel.
These changes altered the behavior of test 2008-address-changes,
which includes some tests on external addresses that are close to
labeled internal addresses. The previous behavior essentially
treated user labels as being 3 bytes wide and extending outside the
file bounds, which was mildly convenient on occasion but felt a
little skanky. (We could do with a way to define external symbols
relative to internal symbols, for things like the source address of
code that gets relocated.)
Also, re-enabled some unit tests.
Also, added a bit of identifying stuff to CrashLog.txt.
In a recent survey, three out of four cross assemblers surveyed
recommended not using opcode mnemonics to their patients who use
labels. We now remap labels like "AND" and "jmp", using the label
map that's part of the label localizer.
We skip the step for Merlin 32, which is perfectly happy to assemble
"JMP JMP JMP".
Also, fixed a bug in MaskLeadingUnderscores that could hang the
source generator thread.
Most assemblers end local label scope when a global label is
encountered. cc65 takes this one step further by ending local label
scope when constants or variables are defined. So, if we have a
variable table with a nonzero number of entries, we want to create
a fake global label at that point to end the scope.
Merlin 32 won't let you write " LDA #',' ". For some reason the
comma causes an error. IGenerator now has a "tweak operand format"
interface that lets us fix that.
If you set things up just right, it's possible for flag status
changes to fail to get merged.
Added a regression test to 1003-flags-and-branches.
Also, tweaked the instruction operand editor to be a bit smoother
from the keyboard: added alt-key shortcuts, and put the focus on the
OK button after creating/editing a label so you can just hit the
return key twice.
If a line has a comment with a cycle count and nothing else, it was
getting an extra space or two on the end.
Also, added a few end-of-line comments to the 2020 test to show how
they interact with the cycle counts.
Cycle counting is CPU-specific. The 2020 test exercises the
65816, but there are things unique to 6502 and 65C02 that should
also be checked if we want to be thorough.
No changes to the test itself.
A ".dd2 <address>" item would get linked to an internal label, but
references to external addresses weren't doing the appropriate
search through the platform/project symbol list.
This change altered the output of the 2019-local-variables test.
The previous behavior was restored by disabling "nearby" symbol
matching in the project properties.
Updated the "lookup symbol by address" function to ignore local
variables.
Also, minor updates to Applesoft and F8-ROM symbol tables.
If you play games with code hints you can create a data operand that
overlaps with code. This causes problems (see issue #45). We now
check for that situation and ignore overlapping data descriptors.
Added a regression test to 2011-hinting.
If a symbol is defined at <addr>, and we encounter STA <addr>-1,Y,
we want to use the symbol in the operand. This worked for labels
but not project/platform symbols.
Also, fixed a crash that happened if you tried to delete an auto
label.
Ported the column width stuff from EditAppSettings, which it turns
out can be simplified slightly.
Moved the clipboard copy code out into its own class.
Disabled "File > Print", which has never done anything and isn't
likely to do anything in the near future.
Also, added a note to 2019-local-variables about a test case it
should probably have.
Unlike 64tass and Merlin, which allow you to redefine symbols, ACME
uses "zones" that provide scope for local variables. This means
that, at the point of a local variable table definition, we have to
start a new zone and output the full set of active symbols, not just
the newly-defined ones. (If you set the "clear previous" flag in
the LvTable there's no difference.)
We could do a bit better by only outputting the symbols that are
actually used within the zone, similar to what we do for global
project/platform symbols, but that's a bunch of work for questionable
benefit.
After thrashing around a bit, I had to choose between making the
uniquifier more complicated, or making de-duplication a separate
step. Since I don't really expect duplicates to be a thing, I went
with the latter.
Updated the regression test.
This hits most of the edge cases, but doesn't exercise the two
duplicate name situations (var name same as user label, var name
same as project/platform symbol).
Also, fixed a bug in the EditDefSymbol uniqueness check where it
was comparing a symbol to itself.
Previously, we used the default character encoding from the project
properties to determine how strings and character constants in the
entire source file should be encoded. Now we switch between
encodings as needed. The default character encoding is no longer
relevant.
High ASCII is now an actual encoding, rather than acting like ASCII
that sometimes doesn't work. Because we can do high ASCII character
operands with "| $80", we don't output a .enc to switch from ASCII
to high ASCII unless we need to generate a string. (If we're
already in high ASCII mode, the "| $80" isn't required but won't
hurt anything.)
We now do a scan up front to see if ASCII or high ASCII is needed,
and only output the .cdefs for the encodings that are actually used.
The only gap in the matrix is high ASCII DCI strings -- the ".shift"
pseudo-op rejects text if the string doesn't start with the high
bit clear.
I didn't think it made sense, but I found something that used it,
so apparently it's a thing. This updates the operand editor to
let you choose PETSCII+DCI, and updates the assemblers to handle
it correctly (really just 64tass, since the others either don't
have a DCI directive or don't deal with PETSCII at all).
Changed the char-encoding sample from "bad dcI" to "pet dcI", and
updated the documentation.
The documentation for 64tass says you're required to pass "--ascii"
when the source file is ASCII (as opposed to PETSCII). We were
ignoring this, but it turns out that everything works a bit better
if we don't.
So we now pass "--ascii" on the command line, and add a two-line
character encoding definition to every file that is generated with
ASCII as the default encoding. The sg_petscii and sg_screen
encodings go away, as PETSCII is now the default, and we can use the
built-in "screen" encoding.
All tests use the same data file and nearly the same project file.
The only difference is the default text encoding property setting.
For "-a" it's ASCII, for "-p" it's PETSCII, for "-s" it's C64 screen
code. Right now this only affects the code generated for 64tass.
The test itself is a collection of strings and characters in the
supported character encodings. How these are handled varies
significantly between assemblers.
Both dialogs got a couple extra radio buttons for selection of
single character operands. The data operand editor got a combo box
that lets you specify how it scans for viable strings.
Various string scanning methods were made more generic. This got a
little strange with auto-detection of low/high ASCII, but that was
mostly a matter of keeping the previous code around as a special
case.
Made C64 Screen Code DCI strings a thing that works.
We weren't checking to see if character operands matched their
delimiters, so bad code like "LDA #'''" was being generated.
There wasn't a test for this in 2006-operand-formats, so the test
has been updated with single and double quotes in low and high ASCII.
The previous functions just grabbed 62 characters and slapped quotes
on the ends, but that doesn't work if we want to show strings with
embedded control characters. This change replaces the simple
formatter with the one used to generate assembly source code. This
increases the cost of refreshing the display list, so a cache will
need to be added in a future change.
Converters for C64 PETSCII and C64 Screen Code have been defined.
The results of changing the auto-scan encoding can now be viewed.
The string operand formatter was using a single delimiter, but for
the on-screen version we want open-quote and close-quote, and might
want to identify some encodings with a prefix. The formatter now
takes a class that defines the various parts. (It might be worth
replacing the delimiter patterns recently added for single-character
operands with this, so we don't have two mechanisms for very nearly
the same thing.)
While working on this change I remembered why there were two kinds
of "reverse" in the old Merlin 32 string operand generator: what you
want for assembly code is different from what you want on screen.
The ReverseMode enum has been resurrected.
The previous code output a character in single-quotes if it was
standard ASCII, double-quotes if high ASCII, or hex if it was neither
of those. If a flag was set, high ASCII would also be output as
hex.
The new system takes the character value and an encoding identifier.
The identifier selects the character converter and delimiter
pattern, and puts the two together to generate the operand.
While doing this I realized that I could trivially support high
ASCII character arguments in all assemblers by setting the delimiter
pattern to "'#' | $80".
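In sketch form (hypothetical helper; '#' is the placeholder for the
character):

    static class CharOperandFormat {
        // Format a character operand by substituting the character into a
        // delimiter pattern: pattern "'#'" yields 'A', and pattern
        // "'#' | $80" yields 'A' | $80 for a high-ASCII operand.
        public static string Format(char ch, string pattern) {
            return pattern.Replace("#", ch.ToString());
        }
    }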
In FormatDescriptor, I had previously renamed the "Ascii" sub-type
"LowAscii" so it wouldn't be confused, but I dislike filling the
project file with "LowAscii" when "Ascii" is more accurate and less
confusing. So I switched it back, and we now check the project
file version number when deciding what to do with an ASCII item.
The CharEncoding tests/converters were also renamed.
Moved the default delimiter patterns to the string table.
Widened the delimiter pattern input fields slightly. Added a read-
only TextBox with assorted non-typewriter quotes and things so
people have something to copy text from.
DCI is handled with the ".shift" pseudo-op. The .null, .ptext,
and .shift operators all work correctly with escaped characters,
so we no longer redo those.
This was the result of the earlier change to eliminate "reverse DCI"
strings. On further examination, it doesn't seem like we can do
much better than a hex dump without more work than the situation
merits. So hex dump it is.
During a discussion with the cc65 developers, I became convinced that
generating "MVN $01,$02" is wrong, and "MVN #$01,#$02" is correct.
64tass, cc65, and Merlin 32 all accept this syntax; only ACME does
not. Operands without a leading '#' should be treated as 24-bit
values, and have the bank byte extracted.
This change updates the on-screen display and assembled output to
include the '#'. The ACME generator uses a Quirk to suppress the
hash mark. (It doesn't currently accept values larger than 8 bits,
so there's no ambiguity.)
There's no easy way to make non-zero-bank 65816 code work, so I'm
punting and just generating a whole-file hex dump for those. This
renders tests 2007 and 2009 useless, so I'm hesitant to claim that
ACME support is fully functional.
WDM <arg> now works. MVN/MVP are still broken. Correct code is
generated for whichever version of the assembler is configured.
Regression tests updated for new version.
Also, fixed a UI bug where manual edits to the assembler path were
being ignored.
The 65816 definition makes it a two-byte instruction, like COP. On
the 6502 it acted like a two-byte instruction, but in practice very
few assemblers treat it that way. Very few humans, for that matter.
So it's now treated as a single-byte instruction, with the following
byte encoded as a data value.
Instead of providing no-op CheckJsr/CheckJsl, plugins now declare
which calls they support by defining interfaces on the plugin class.
I added a CheckBrk call for code like Apple /// SOS calls, which
use BRK as an OS call mechanism. The formatting doesn't work quite
right yet because I've been treating BRK as a two-byte instruction.
Hardly anything else does, and I think it's time I stopped (but not
in this commit).
Note: THIS BREAKS ALL PLUGINS that use the inline JSR/JSL feature,
which is pretty much all of them.
This worked, sort of. The problem is that SourceGen will revert to
hex output in certain situations, such as a broken symbolic
reference. There happens to be one in the ZIPPY example, and it's
on a relative branch.
The goal with the segment stuff is to allow cc65 to treat the
source as relocatable code. In that context, a relative branch to
an absolute address doesn't make any sense, so the assembler reports
a range error.
We don't currently have a mechanism that guarantees no references
are broken (and no affordance for finding them), so we can't make
this mode the default yet.
Instead, we continue to use the generic config, but generate the
correct set of lines as comments.
(issue #39)
The system configuration you get with "-t none" works for smaller
files but fails for larger ones. This updates the generator to
produce a source file and linker script pair. (I kinda saw this
one coming -- it's why the gen/asm dialog has a combo box for the
file preview -- so it didn't require that much work.)
This currently generates a fixed script for a generic system with
64KiB of RAM, using .ORGs to set the addresses as before.
With this change, assembling a file with 65536 NOPs succeeds.
(issue #39)
If you double-click on the opcode of "JSR label", the code view
selection jumps to the label. This now works for partial operands,
e.g. "LDA #<label".
Some changes to the find-label-offset code affected the cc65 "is it
a forward reference to a direct-page label" logic. The regression
test now correctly identifies an instruction that refers to itself
as not being a forward reference.
The cc65 assembler runs in a single pass, which means forward
address references default to 16 bits. For zero-page references
we have to add an explicit width disambiguator. (This is an
unusual situation that only occurs if you have a zero-page .ORG
in the file after code that references it.)
With this change, 2014-label-dp passes, and no other regression
tests were affected.
(issue #40)
The 2014-label-dp test now passes. Prior regression tests are
unaffected.
Also, renamed an IGenerator interface to more accurately reflect
its role.
(issue #37)
This is primarily to exercise a Merlin 32 failure (issue #37).
However, it also exercises a problem with cc65 (issue #40).
Currently, only 64tass can assemble this project.
To avoid confusing the assembler, expressions with a leading
parenthesis like "(foo & $ffff) + 1" are prefixed with a "0+". This
is not necessary if the operand begins with a '#'.
(issue #16)
We now insert parentheses as needed. This can cause problems in
some situations, so we always prefix parenthetical expressions with
"0+", which looks goofy and is unnecessary for immediate operands.
But it does generate working source code.
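The guard amounts to something like this (illustrative; a leading
paren would otherwise read as indirect addressing):

    static class ExprGuardSketch {
        // Prefix "0+" when the expression starts with an open paren, to
        // force expression context.  An immediate operand starts with '#',
        // so it never enters the branch and needs no prefix.
        public static string Guard(string operand) {
            if (operand.StartsWith("(")) {
                return "0+" + operand;
            }
            return operand;
        }
    }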
Renamed the "simple" expression mode to "common", as it's not
particularly simple but is what you'd expect most assemblers to do.
(OTOH, life has been full of surprises.)
(issue #16)
Gave cc65 its own expression generator, as the precedence table seems
atypical if not unique. Configured 64tass to use the "simple"
expression mode.
Added some operations on a 32-bit constant to 2007-labels-and-symbols
to exercise the current worst-case expression (shift + AND + add).
Tweaked the Merlin expression generator to handle it.
(issue #16)