Originally the Apple II had a 64-char set and used the upper two bits to control inverse and blinking. The Apple //e then brought an alternate char set without blinking but with more individual chars. However, it does _not_, as one might assume, contain 128 chars and use the upper bit to control inverse. Rather it contains more than 128 chars - the MouseText chars. And because Apple wanted to provide as much backward compatibility with the original char set as possible, the alternate char set has a rather weird layout for chars > 128, with the inverse lowercase chars _not_ at (normal lowercase char + 128).
So far the Apple II CONIO implementation mapped chars 128-255 to chars 0-127 (with the exception of \r and \n). It made use of alternate chars > 128 transparently for the user via revers(1). The user didn't have direct access to the MouseText chars; they were only used internally for things like chline() and cvline().
Now the mapping of chars 128-255 to 0-127 is removed. Using chars > 128 gives the user direct access to the "raw" alternate chars > 128. This especially gives the user direct access to the MouseText chars. But it clashes with the existing (and still desirable) revers(1) logic: combining revers(1) with chars > 128 just doesn't result in anything usable!
What motivated this change? When I worked on the VT100 line drawing support for Telnet65 on the Apple //e (not using CONIO at all), I finally understood how MouseText is intended to be used to draw arbitrary grids with just three chars: a special "L" type char, the underscore and a vertical bar at the left side of the char box. I noticed that with those chars it is possible to follow the CONIO approach to boxes and grids: combining chline()/cvline() with special CH_... char constants for edges and intersections.
But in order to actually do so, I needed to be able to define CH_... constants that, when fed into the ordinary cputc() pipeline, end up as MouseText chars. The obvious approach was to allow chars > 128 to directly access MouseText chars :-)
Now that the native CONIO box/grid approach works, I deleted the Apple //e proprietary textframe() function that I had added as a replacement quite some years ago.
Again: Please note that chline()/cvline() and the CH_... constants don't work with revers(1)!
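To illustrate, here's a minimal sketch of that box approach, assuming the usual conio functions and the CH_... corner constants from the target header (the helper name is made up, and it must not be combined with revers(1)):

```c
#include <conio.h>

/* Draw a frame with the given inner width/height using the generic conio
** box chars. On the Apple //e these CH_... constants now map to MouseText
** chars via the ordinary cputc() pipeline.
*/
static void frame (unsigned char x, unsigned char y,
                   unsigned char w, unsigned char h)
{
    cputcxy (x,         y,         CH_ULCORNER);
    chlinexy (x + 1,    y,         w);
    cputcxy (x + w + 1, y,         CH_URCORNER);
    cvlinexy (x,         y + 1,    h);
    cvlinexy (x + w + 1, y + 1,    h);
    cputcxy (x,         y + h + 1, CH_LLCORNER);
    chlinexy (x + 1,    y + h + 1, w);
    cputcxy (x + w + 1, y + h + 1, CH_LRCORNER);
}
```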
We want to add the capability not only to get the time but also to set it. However, there's no "setter" counterpart to the "getter" time().
The first candidates that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support only tenths of a second, microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long', so the difference from nanoseconds isn't really relevant. Additionally, clock_getres() seems useful.
In order to avoid code duplication, clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
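A minimal sketch of the resulting getter/setter pair, assuming the POSIX-style prototypes with struct timespec and CLOCK_REALTIME:

```c
#include <time.h>

void example (void)
{
    struct timespec ts;

    /* Read the current time; time() now goes through the same code path. */
    if (clock_gettime (CLOCK_REALTIME, &ts) == 0) {
        /* ts.tv_sec holds whole seconds, ts.tv_nsec nanoseconds. */

        /* And the matching "setter": advance the clock by one minute. */
        ts.tv_sec += 60;
        ts.tv_nsec = 0;
        clock_settime (CLOCK_REALTIME, &ts);
    }
}
```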
For some reason beyond my understanding, _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(), and mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective I decided to just remove _systime().
The change matches the way that I/O register structures are defined in other headers: the names are defined as "struct", not as "pointer to struct".
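For illustration, a hedged sketch of that pattern; the register block and its address are made up:

```c
/* Register block laid out as a struct... */
struct __hypothetical_io {
    unsigned char data;
    unsigned char status;
};

/* ...and the name defined as "struct", not as "pointer to struct": */
#define HYPOTHETICAL_IO (*(struct __hypothetical_io*)0xC0F0)

void touch (void)
{
    HYPOTHETICAL_IO.data = 0x00;    /* usage reads like a plain object */
}
```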
So far conio.h included the target header to get the CH_... and COLOR_... macros. However, tgi.h never did the same to get the TGI_COLOR_... macros. And some time ago the JOY_..._MASK macros moved from joystick.h into the target header, yet joystick.h didn't include the target header.
Why wasn't that issue detected so far? Because just about every program using TGI and/or the joystick uses CONIO too and therefore includes the target header that way.
However, conceptually it's cleaner to factor out the target header inclusion and have tgi.h and joystick.h do it just like conio.h.
Apart from that, user code may make direct use of target.h too.
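As a hedged illustration of the effect, something like this should now compile without including the target header explicitly, assuming a target that provides a statically linkable standard TGI driver and defines TGI_COLOR_WHITE:

```c
#include <tgi.h>    /* now pulls in the target header itself */

void example (void)
{
    tgi_install (tgi_static_stddrv);    /* statically linked standard driver */
    tgi_init ();
    tgi_setcolor (TGI_COLOR_WHITE);     /* macro comes from the target header */
    tgi_line (0, 0, 100, 100);
    tgi_done ();
    tgi_uninstall ();
}
```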
As discussed in https://github.com/cc65/cc65/pull/452, the two functions in question don't work as expected after my premature merge.
Additionally, I adjusted several style deviations in the pull request in question.
So far the joy_masks array allowed several joystick drivers for a single target to each have different joy_read return values. However, this meant that every call to joy_read implied an additional joy_masks lookup to post-process the return value.
Given that almost all targets come with only a single joystick driver, this seems an inappropriate overhead. Therefore the target header files now contain constants matching the return value of joy_read for the joystick driver(s) on that target.
If there indeed are several joystick drivers for a single target, they must agree on a common return value for joy_read. In some cases this was already the case, as there's a "natural" return value for joy_read. However, a few joystick drivers need to be adjusted. This may cause some overhead inside the driver, but that is surely smaller than the overhead introduced by the joy_masks lookup before.
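A hedged sketch of the resulting usage; the exact JOY_..._MASK names and values are target-specific, and JOY_UP_MASK/JOY_FIRE_MASK are used here only as plausible examples:

```c
#include <joystick.h>   /* now pulls in the target's JOY_..._MASK constants */

void poll (void)
{
    unsigned char state;

    joy_install (joy_static_stddrv);    /* statically linked standard driver */

    /* The raw return value is tested directly against the target's mask
    ** constants - no joy_masks lookup anymore.
    */
    state = joy_read (0);
    if (state & JOY_UP_MASK) {
        /* ... */
    }
    if (state & JOY_FIRE_MASK) {
        /* ... */
    }

    joy_uninstall ();
}
```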
!!! ToDo !!!
The following three joystick drivers are broken by this commit and need to be adjusted:
- atrmj8.s
- c64-numpad.s
- vic20-stdjoy.s
There were two aspects of this behavior that were considered undesirable:
- Although the safe inlining is in general desirable, it should only be enabled if explicitly asked for - like any other optimization.
- The option name -Os implies that it is a safe option; the potentially unsafe inlining should have a more explicit name.
So now:
- The option -Os enables the safe inlining.
- The new option --eagerly-inline-funcs enables the potentially unsafe inlining (including the safe inlining).
Additionally, the following was added:
- The option --inline-stdfuncs, which like -Os enables the safe inlining, but doesn't enable optimizations.
- The pragma inline-stdfuncs, which works identically to --inline-stdfuncs.
- The pragma allow-eager-inline, which enables the potentially unsafe inlining but doesn't include the safe inlining. That means that by itself it only marks code as safe for potentially unsafe inlining, but doesn't actually enable any inlining.
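A hedged sketch of how these options and pragmas are meant to be combined; the file and function names are made up:

```c
/* Compile with e.g.:
**     cc65 --inline-stdfuncs example.c
** or enable the safe inlining from within the source:
*/
#pragma inline-stdfuncs (on)

/* Additionally mark the code below as safe for the potentially unsafe
** (eager) inlining; by itself this doesn't enable any inlining.
*/
#pragma allow-eager-inline (on)

#include <string.h>

unsigned len (void)
{
    /* With the safe inlining enabled, calls to known standard functions
    ** like strlen() may be replaced by inline code.
    */
    return strlen ("hello");
}
```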