Originally the Apple II had a 64 char set and used the upper two bits to control inverse and blinking. The Apple //e then introduced an alternate char set without blinking but with more individual chars. However, it does _not_ simply contain 128 chars and use the upper bit to control inverse, as one might assume. Rather it contains more than 128 chars - the additional ones being the MouseText chars. And because Apple wanted to provide as much backward compatibility as possible with the original char set, the alternate char set has a rather weird layout for chars > 128, with the inverse lowercase chars _not_ at (normal lowercase char + 128).
Up to now the Apple II CONIO implementation mapped chars 128-255 to chars 0-127 (with the exception of \r and \n). It made use of alternate chars > 128 transparently for the user via revers(1). The user didn't have direct access to the MouseText chars; they were only used internally for things like chline() and cvline().
Now the mapping of chars 128-255 to 0-127 is removed. Using chars > 128 gives the user direct access to the "raw" alternate chars > 128. This especially gives the user direct access to the MouseText chars. But this clashes with the existing (and still desirable) revers(1) logic: Combining revers(1) with chars > 128 just doesn't result in anything usable!
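A minimal sketch of the clash; note that 0xC1 is just a made-up stand-in, not a documented value - which char values map to which MouseText glyphs depends on the actual cputc() mapping:

    #include <conio.h>

    int main (void)
    {
        cputc ((char) 0xC1);    /* raw alternate char, e.g. a MouseText glyph */
        revers (1);
        cputc ((char) 0xC1);    /* revers(1) plus char > 128: nothing usable! */
        revers (0);
        return 0;
    }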
What motivated this change? When I worked on the VT100 line drawing support for Telnet65 on the Apple //e (not using CONIO at all) I finally understood how MouseText is intended to be used to draw arbitrary grids with just three chars: a special "L" type char, the underscore and a vertical bar at the left side of the char box. I noticed that with those chars it is possible to follow the CONIO approach to boxes and grids: combining chline()/cvline() with special CH_... char constants for edges and intersections.
But in order to actually do so I needed to be able to define CH_... constants that, when fed into the ordinary cputc() pipeline, end up as MouseText chars. The obvious approach was to allow chars > 128 to directly access MouseText chars :-)
Now that the native CONIO box/grid approach works I deleted the Apple //e proprietary textframe() function that I had added as a replacement quite some years ago.
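For illustration, a minimal sketch of a box drawn with nothing but generic CONIO calls (coordinates and sizes are arbitrary, parameter validation is omitted):

    #include <conio.h>

    /* Draw a w x h box with its upper left corner at x/y. */
    void box (unsigned char x, unsigned char y, unsigned char w, unsigned char h)
    {
        cputcxy (x,          y,         CH_ULCORNER);   /* upper left corner  */
        chlinexy (x + 1,     y,         w - 2);         /* top edge           */
        cputcxy (x + w - 1,  y,         CH_URCORNER);   /* upper right corner */
        cvlinexy (x,         y + 1,     h - 2);         /* left edge          */
        cvlinexy (x + w - 1, y + 1,     h - 2);         /* right edge         */
        cputcxy (x,          y + h - 1, CH_LLCORNER);   /* lower left corner  */
        chlinexy (x + 1,     y + h - 1, w - 2);         /* bottom edge        */
        cputcxy (x + w - 1,  y + h - 1, CH_LRCORNER);   /* lower right corner */
    }

Grids work the same way, with CH_TTEE, CH_BTEE, CH_LTEE, CH_RTEE and CH_CROSS at the edge and inner intersections.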
Again: Please note that chline()/cvline() and the CH_... constants don't work with revers(1)!
For quite some time I deliberately didn't add cursor support to the Apple II CONIO implementation. I consider it inappropriate to increase the size of cgetc() unduly for a rather seldom used feature.
There's no hardware cursor on the Apple II, so displaying a cursor during keyboard input means reading the character stored at the cursor location, writing the cursor character, reading the keyboard and finally writing back the character read initially.
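In (pseudo-)C the sequence looks roughly like this; screen_peek(), screen_poke() and read_key() as well as the '_' cursor char are hypothetical stand-ins for the actual assembly code:

    /* Hypothetical helpers standing in for the real assembly code. */
    extern char screen_peek (void);     /* read char at the cursor location  */
    extern void screen_poke (char c);   /* write char at the cursor location */
    extern char read_key (void);        /* blocking keyboard read            */

    char cgetc (void)
    {
        char c;
        char old = screen_peek ();  /* read the character stored at the cursor */
        screen_poke ('_');          /* write the cursor character              */
        c = read_key ();            /* read the keyboard                       */
        screen_poke (old);          /* write back the character read initially */
        return c;
    }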
The naive approach is to reuse the part of cputc() that determines the memory location of the character at the cursor position in order to read the character stored there. However, that means adding at least one additional JSR/RTS pair to cputc(), costing 4 bytes and 12 cycles :-( Apart from that, this approach still results in a "too" large cgetc().
The approach implemented instead is to include all functionality required by cgetc() into cputc() - which is to read the current character before writing a new one. This may seem surprising at first glance, but an LDA (),Y / TAX sequence adds only 3 bytes and 7 cycles, so it is cheaper than the JSR/RTS pair and brings the code increase in cgetc() down to a reasonable value.
However, so far the internal cputc() code in question preserved the X register. Now it uses the X register to return to cgetc() the character that was present before the new character was written. This requires some rather small adjustments in other functions using that internal cputc() code.
Nearly all CONIO functions offering a <...>xy variant contain the sequence

    jsr popa        ; get second coordinate
    jsr _gotoxy     ; set the cursor position

By providing an internal gotoxy variant that starts with a popa (and then runs into the regular _gotoxy code), each of those call sites shrinks to a single JSR, so all those CONIO functions can be shortened by 3 bytes. The additional entry point itself costs just those 3 bytes once, so as soon as a program calls more than one such CONIO function this means an overall code size reduction.