6502bench SourceGen: Instruction and Data Analysis

This section discusses the internal workings of SourceGen. It is not necessary to understand this to use the program.

Analysis Process

Analysis of the file data is a complex multi-step process. Some changes to the project, such as adding a code start point or changing the CPU selection, require a full re-analysis of instructions and data. Other changes, such as adding or removing a label, don't affect the code tracing and only require a re-analysis of the data areas. And some changes, such as editing a comment, only require a refresh of the displayed lines.

It should be noted that none of the analysis results are stored in the project file. Only user-supplied data, such as the locations of code entry points and label definitions, is written to the file. This does create the possibility that two different users might get different results when opening the same project file with different versions of SourceGen, but these effects are expected to be minor.

The analyzer performs a series of steps, implemented by the Analyze method in DisasmProject.cs. The major phases -- tracing code from the entry points, applying user-supplied labels and formats, and analyzing data -- are described in the sections that follow.
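
In rough outline, the flow looks something like the sketch below. The names are invented and only mirror the phases described in the rest of this section; the actual Analyze method is more involved.

    // Rough sketch of a full vs. partial re-analysis pass. Names are
    // invented; this is not the structure of the real Analyze method.
    public enum ReanalysisScope { DisplayOnly, DataOnly, CodeAndData }

    public abstract class AnalyzerSketch {
        public void Reanalyze(ReanalysisScope scope) {
            if (scope == ReanalysisScope.CodeAndData) {
                AnalyzeCode();                // trace instructions from code start points
            }
            if (scope != ReanalysisScope.DisplayOnly) {
                ApplyUserLabelsAndFormats();  // copy user edits into the Anattrib array
                AnalyzeData();                // resolve operand targets, find strings/fills
            }
            RegenerateDisplayList();          // rebuild the on-screen lines
        }

        protected abstract void AnalyzeCode();
        protected abstract void ApplyUserLabelsAndFormats();
        protected abstract void AnalyzeData();
        protected abstract void RegenerateDisplayList();
    }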

Once analysis is complete, a line-by-line display list is generated by walking through the annotated file data. Most of the actual text isn't rendered until it's needed. For complicated multi-line items like string operands, the formatted text must be generated to know how many lines it will occupy, so it's rendered immediately and cached for re-use on subsequent runs.

Automatic Formatting

Every offset in the file is marked as an instruction byte, data byte, or inline data byte. Some offsets are also marked as the start of an instruction or data area. The start offsets may have a format descriptor associated with them.

Format descriptors have a format (like "numeric" or "null-terminated string"), a sub-format (like "hexadecimal" or "high ASCII"), and a length. For an instruction operand the length is redundant, but for a data operand it determines the width of the numeric value or the length of the string. For this reason, instructions do not need a format descriptor, but all data items do.

Symbolic references are format descriptors with a symbol attached. The symbol reference also specifies low/high/bank, for partial symbol references like LDA #>symbol.
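
Taken together, a descriptor can be pictured as a small record along these lines. The type, field, and enum names below are invented for illustration and do not match the actual class:

    // Hedged sketch of a format descriptor.
    public enum FormatType { Numeric, NullTerminatedString /* , ... */ }
    public enum SubFormat  { None, Hexadecimal, Decimal, HighAscii /* , ... */ }
    public enum SymbolPart { Low, High, Bank }    // e.g. LDA #<sym, #>sym, #^sym

    public class FormatDescriptorSketch {
        public FormatType Format;   // "numeric", "null-terminated string", ...
        public SubFormat Sub;       // "hexadecimal", "high ASCII", ...
        public int Length;          // data width; redundant for instruction operands
        public string Symbol;       // non-null for symbolic references
        public SymbolPart Part;     // which piece of the symbol's value to use
    }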

Every offset marked as a start point gets its own line in the on-screen display list. Embedded instructions are identified internally by looking for instruction-start offsets inside instructions.

The Anattrib array holds the post-analysis state for every offset, including comments and formatting, but any changes you make in the editors are applied to the data structures that are saved in the project file. After a change is made, a full or partial re-analysis is done to fill out the Anattribs.

Consider a simple example:

         .ADDRS  $1000
         JMP     L1003
L1003    NOP

We haven't explicitly formatted anything yet. The data analyzer sees that the JMP operand targets an address inside the file that has no label, so it creates an auto-label at offset +000003 and a format descriptor with a symbolic operand reference to "L1003" at +000000.

Suppose we now edit the label, changing L1003 to "FOO". This goes into the project's "user label" list. The analyzer is run, and applies the new "user label" to the Anattrib array. The data analyzer finds the numeric reference in the JMP operand, and finds a label at the target address, so it creates a symbolic operand reference to "FOO". When the display list is generated, the symbol "FOO" appears in both places.

Even though the JMP operand changed from "L1003" to "FOO", the only change actually written to the project file is the label edit. The contents of the Anattrib array are disposable, so the array can be used to hold auto-generated labels and "fix up" numeric references. Labels and format descriptors generated by SourceGen are never added to the project file.

If the JMP operand were edited, a format descriptor would be added to the user-specified descriptor list. During the analysis pass it would be added to the Anattrib array at offset +000000.

Interaction With Undo/Redo

The analysis pass always considers the current state of the user data structures. Whether you're adding a label or removing one, the code runs through the same set of steps. The advantage of this approach is that doing a thing, undoing it, and redoing it are all handled the same way.

None of the editors modify the project data structures directly. All changes are added to a change set, which is processed by a single "apply changes" function. The change sets are kept in the undo/redo buffer indefinitely. After the changes are made, the Anattrib array and other data structures are regenerated.

Data format editing can create some tricky situations. For example, suppose you have 8 bytes that have been formatted as two 32-bit words:

1000: 68690074           .dd4    $74006968
1004: 65737400           .dd4    $00747365

You realize these are null-terminated strings, select both words, and reformat them:

1000: 686900             .zstr   "hi"
1003: 74657374+          .zstr   "test"

Seems simple enough. Under the hood, SourceGen created three changes:

  1. At offset +000000, replace the current format descriptor (4-byte numeric) with a 3-byte null-terminated string descriptor.
  2. At offset +000003, add a new 5-byte null-terminated string descriptor.
  3. At offset +000004, remove the 4-byte numeric descriptor.

Each entry in the change set has "before" and "after" states for the format descriptor at a specific offset. Only the state for the affected offsets is included -- the program doesn't record the state of the full project after each change (even with the RAM on a modern system, that would add up quickly). When undoing a change, before and after are simply reversed.
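
A single change-set entry and the shared apply/undo path can be sketched as follows. The names are invented, and FormatDescriptorSketch is the illustrative type from the earlier sketch; the real change objects also cover labels, comments, and the other editable items:

    // Hedged sketch of one change-set entry for a format descriptor.
    using System.Collections.Generic;

    public class FormatChange {
        public int Offset;                      // e.g. +000000
        public FormatDescriptorSketch Before;   // null means "no descriptor present"
        public FormatDescriptorSketch After;    // null means "remove the descriptor"
    }

    public static class ChangeApplier {
        // Undo is just apply with before/after swapped.
        public static void Apply(IDictionary<int, FormatDescriptorSketch> userFormats,
                FormatChange change, bool isUndo) {
            FormatDescriptorSketch newValue = isUndo ? change.Before : change.After;
            if (newValue == null) {
                userFormats.Remove(change.Offset);
            } else {
                userFormats[change.Offset] = newValue;
            }
            // ...a partial or full re-analysis follows.
        }
    }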

Code Analysis

The code tracer walks through the instructions, examining them to determine where execution will proceed next. There are five possibilities for every instruction:

  1. Continue. Execution always continues at the next instruction. Examples: LDA, STA, AND, NOP.
  2. Don't continue. The next instruction to be executed can't be determined from the file data (unless you're disassembling the system ROM around the BRK vector). Examples: RTS, BRK.
  3. Branch always. The operand specifies the next instruction address. Examples: JMP, BRA, BRL.
  4. Branch sometimes. Execution may continue at the operand address, or may execute the following instruction. If we know the value of the flags in the processor status register, we can eliminate one possibility. Examples: BCC, BEQ, BVS.
  5. Call subroutine. Execution will continue at the operand address, and is expected to also continue at the following instruction. Examples: JSR, JSL.

Branch targets are added to a list. When the current run of instructions is exhausted (i.e. a "don't continue" or "branch always" instruction is reached), the next target is pulled off the list.
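
The tracing loop is essentially a worklist algorithm over file offsets. Below is a minimal sketch under that interpretation; every type and method name is invented, and the real analyzer also carries the status flag state described next:

    // Hedged sketch of the code tracer's worklist loop.
    using System.Collections.Generic;

    public enum FlowEffect { Continue, NoContinue, BranchAlways, BranchSometimes, CallSubroutine }

    public class InstructionSketch {
        public int Length;          // opcode plus operand bytes
        public FlowEffect Effect;
        public int TargetOffset;    // branch/call target as a file offset, or -1
    }

    public abstract class CodeTracerSketch {
        protected abstract int FileLength { get; }
        protected abstract InstructionSketch Decode(int offset);
        protected abstract void MarkAsInstruction(int offset, int length);

        public void TraceCode(IEnumerable<int> codeStartOffsets) {
            var pending = new Stack<int>(codeStartOffsets);
            var visited = new HashSet<int>();
            while (pending.Count > 0) {
                int off = pending.Pop();
                // Follow one run until it ends, leaves the file, or reaches an
                // offset already visited (with identical flags, in the real code).
                while (off >= 0 && off < FileLength && visited.Add(off)) {
                    InstructionSketch instr = Decode(off);
                    MarkAsInstruction(off, instr.Length);
                    FlowEffect fx = instr.Effect;
                    if (fx == FlowEffect.BranchAlways || fx == FlowEffect.BranchSometimes ||
                            fx == FlowEffect.CallSubroutine) {
                        pending.Push(instr.TargetOffset);   // queue the target
                    }
                    if (fx == FlowEffect.NoContinue || fx == FlowEffect.BranchAlways) {
                        break;                              // this run is exhausted
                    }
                    off += instr.Length;                    // fall into the next instruction
                }
            }
        }
    }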

The state of the processor status flags is recorded for every instruction. When execution proceeds to the next instruction or branches to a new address, the flags are merged with the flags at the new location. If one execution path through a given address has the flags in one state (say, the carry is clear), while another execution path sees a different state (carry is set), the merged flag is "indeterminate". Indeterminate values cannot become determinate through a merge, but can be set by an instruction.
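
Treating each flag as a three-valued quantity makes the merge rule simple. A hedged sketch with invented names (the real code packs the flags into a compact structure):

    public enum TriState { Zero, One, Indeterminate }

    public static class FlagMergeSketch {
        public static TriState Merge(TriState a, TriState b) {
            // Paths that agree keep the value; disagreement, or an input
            // that is already indeterminate, yields indeterminate. Only a
            // later instruction can make the flag determinate again.
            return (a == b) ? a : TriState.Indeterminate;
        }
    }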

There can be multiple paths to a single address. If the analyzer sees that an instruction has been visited before, with an identical set of status flags, the analyzer stops pursuing that path.

The analyzer must always know the width of immediate load instructions when examining 65816 code, but it's possible for the status flag values to be indeterminate. In such a situation, short registers are assumed. Similarly, if the carry flag is unknown when an XCE is performed, we assume a transition to emulation mode (E=1).

There are three ways in which code can set a flag to a definite value:

  1. With explicit instructions, like SEC or CLD.
  2. With immediate-operand instructions. LDA #$00 sets Z=1 and N=0. ORA #$80 sets Z=0 and N=1. (See the sketch after this list.)
  3. By inference. For example, if we see a BCC instruction, we know that the carry will be clear at the branch target address, and set at the following instruction. The instruction doesn't affect the value of the flag, but we know what the value will be at both addresses.
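
For the immediate-operand case, the determination is simple bit testing on the operand byte. A minimal sketch for immediate loads, reusing the TriState type from the merge sketch above (names invented; instructions like ORA #$80 need per-opcode logic that isn't shown):

    public static class ImmediateFlagsSketch {
        // Z and N pinned by an immediate load such as LDA #imm:
        // LDA #$00 gives Z=1, N=0; LDA #$80 gives Z=0, N=1.
        public static (TriState z, TriState n) FromLoadImmediate(byte value) {
            TriState z = (value == 0) ? TriState.One : TriState.Zero;           // Z=1 only for #$00
            TriState n = ((value & 0x80) != 0) ? TriState.One : TriState.Zero;  // N follows bit 7
            return (z, n);
        }
    }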

Self-modifying code can spoil any of these determinations, possibly requiring a status flag override to get a correct disassembly.

The instruction that is most likely to cause problems is PLP, which pulls the processor status flags off the stack. SourceGen doesn't try to track stack contents, so it can't know what values may be pulled. In many cases the PLP appears not long after a PHP, so SourceGen can scan backward through the file to find the nearest PHP and use the status flags recorded there. In practice this doesn't work especially well, but the "smart" behavior can be enabled from the project properties if desired. Otherwise, a PLP causes all flags to be set to "indeterminate", except for the M/X flags on the 65816, which are left unmodified.
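
The "smart" handling amounts to a backward scan, roughly as sketched below. The helper names are invented, and the real heuristic is more careful about what it accepts:

    public class StatusFlagsSketch { /* one TriState per flag */ }

    public abstract class SmartPlpSketch {
        const byte OpPHP = 0x08;    // 6502/65816 PHP opcode

        protected abstract bool IsInstructionStart(int offset);
        protected abstract byte ByteAt(int offset);
        protected abstract StatusFlagsSketch FlagsAt(int offset);

        // Scan backward from the PLP for the nearest PHP and reuse the
        // flags that were recorded there.
        public StatusFlagsSketch FindFlagsForPlp(int plpOffset) {
            for (int off = plpOffset - 1; off >= 0; off--) {
                if (IsInstructionStart(off) && ByteAt(off) == OpPHP) {
                    return FlagsAt(off);    // flags in effect at the PHP
                }
            }
            return null;    // no PHP found; the flags become indeterminate
        }
    }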

There are some other things the code analyzer can't recognize automatically, notably indirect jumps and calls whose targets are computed or loaded at run time.

Sometimes the indirect jump targets come from a table of addresses in the file. If so, these can be formatted as addresses, and the target locations then tagged as code entry points.

The 65816 adds a further twist: some instructions combine their operands with the Data Bank Register ("B") to form a 24-bit address. SourceGen can't automatically determine what the register holds, so it assumes it's equal to the Program Bank Register ("K"), and provides a way to override the value.

Extension Scripts

Extension scripts can mark data that follows a JSR, JSL, or BRK as inline data, or change the format of nearby data or instructions. The first time a JSR/JSL/BRK instruction is encountered, all loaded extension scripts that implement the appropriate interface are offered a chance to act.
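
The mechanism can be pictured with a hypothetical interface like the one below. These names are invented for illustration and do not match the real extension script API:

    // Hypothetical shape of an inline-data handler.
    public interface IInlineJsrHandlerSketch {
        // Called the first time a JSR at 'offset' targeting 'target' is
        // encountered; the script may mark the bytes that follow the JSR
        // as inline data and apply a format to them.
        void CheckJsr(int offset, int target, IFormatApplierSketch applier);
    }

    public interface IFormatApplierSketch {
        // Returns false if the bytes were already formatted or already
        // executed, in which case the request is ignored.
        bool SetInlineDataFormat(int offset, int length, string format);
    }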

The first script that applies a format wins. Attempts to re-format instructions or data that have already been formatted will fail. This rule ensures that anything explicitly formatted by the user will not be overridden by a script.

If code jumps into a region that is marked as inline data, the branch will be ignored. If an extension script tries to flag bytes as inline data that have already been executed, the script will be ignored. This can lead to a race condition in the analyzer if an extension script is doing the wrong thing. (The race doesn't exist with inline data tags specified by the user, because those are applied before code analysis starts.)

Data Analysis

The data analyzer performs two tasks. It matches operands with offsets, and it analyzes uncategorized data. This behavior can be modified in the project properties.

The data target analyzer examines every instruction and data operand to see if it's referring to an offset within the data file. If the target is within the file, and has a label, a format descriptor with a weak symbolic reference to that label is added to the Anattrib array. If the target doesn't have a label, the analyzer will either use a nearby label, or generate a unique label and use that.
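
The target-matching step boils down to something like the following. All names are invented; the real code also handles address mapping, external symbols, and the "nearby label" adjustments described below:

    // Hedged sketch of data target analysis for a single operand.
    public abstract class DataTargetSketch {
        protected abstract string LabelAt(int offset);      // null if no label
        protected abstract int AddressOf(int offset);
        protected abstract void SetLabel(int offset, string label);
        protected abstract void SetSymbolicOperand(int srcOffset, string symbol);

        public void ResolveOperand(int srcOffset, int targetOffset) {
            string label = LabelAt(targetOffset);
            if (label == null) {
                // Generate a unique auto-label, e.g. "L1003" for address $1003.
                label = "L" + AddressOf(targetOffset).ToString("X4");
                SetLabel(targetOffset, label);          // lives only in the Anattribs
            }
            SetSymbolicOperand(srcOffset, label);       // weak symbolic reference
        }
    }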

While most of the "nearby label" logic can be disabled, targets that land in the middle of an instruction are always adjusted backward to the instruction start. This is necessary because labels are only visible if they're associated with the first (opcode) byte of an instruction.

The uncategorized data analyzer tries to find character strings and opportunities to use the ".FILL" operation. It breaks the file into pieces: contiguous regions that hold nothing but data, are not split across address region start/end directives, are not interrupted by labels, and do not contain anything that the user has chosen to format. Each region is scanned for matching patterns. If a match is found, a format entry is added to the Anattrib array. Otherwise, the data is formatted as single-byte values.
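
Detecting a ".FILL" opportunity within one of these regions is essentially run-length scanning. A minimal sketch with an invented threshold (the real analyzer also looks for the various string formats and uses its own minimum lengths):

    // Hedged sketch of run detection for ".FILL".
    public static class FillScanSketch {
        const int MinRunLength = 4;     // arbitrary threshold for this sketch

        // Returns the length of the run of identical bytes starting at
        // 'start' within [start, end), or 0 if the run is too short.
        public static int FindRun(byte[] data, int start, int end) {
            int len = 1;
            while (start + len < end && data[start + len] == data[start]) {
                len++;
            }
            return (len >= MinRunLength) ? len : 0;
        }
    }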