Originally the Apple II had a 64-char set and used the upper two bits to control inverse and blinking. The Apple //e then brought an alternate char set without blinking but with more individual chars. However, it does _not_ simply contain 128 chars and use the upper bit to control inverse, as one might assume. Rather, it contains more than 128 chars - the MouseText chars. And because Apple wanted to provide as much backward compatibility as possible with the original char set, the alternate char set has a rather weird layout for chars > 128, with the inverse lowercase chars _not_ at (normal lowercase char + 128).
So far the Apple II CONIO implementation mapped chars 128-255 to chars 0-127 (with the exception of \r and \n). It made use of alternate chars > 128 transparently for the user via revers(1). The user didn't have direct access to the MouseText chars; they were only used internally for things like chline() and cvline().
Now the mapping of chars 128-255 to 0-127 is removed. Using chars > 128 gives the user direct access to the "raw" alternate chars > 128 - especially the MouseText chars. But this clashes with the existing (and still desirable) revers(1) logic: combining revers(1) with chars > 128 just doesn't result in anything usable!
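To illustrate, a minimal sketch assuming the apple2enh target; put_raw() is just a hypothetical helper name:

    #include <conio.h>

    /* Output a raw alternate char; any code > 128 now reaches the
    ** alternate set - and thus the MouseText chars - directly.
    */
    void put_raw (unsigned char code)
    {
        revers (0);           /* revers(1) plus chars > 128 is unusable */
        cputc ((char) code);
    }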
What motivated this change? When I worked on the VT100 line-drawing support for Telnet65 on the Apple //e (not using CONIO at all), I finally understood how MouseText is intended to be used to draw arbitrary grids with just three chars: a special "L"-type char, the underscore, and a vertical bar at the left side of the char box. I noticed that with those chars it is possible to follow the CONIO approach to boxes and grids: combining chline()/cvline() with special CH_... char constants for edges and intersections.
But in order to actually do so, I needed to be able to define CH_... constants that, when fed into the ordinary cputc() pipeline, end up as MouseText chars. The obvious approach was to allow chars > 128 to directly access the MouseText chars :-)
Now that the native CONIO box/grid approach works, I deleted the Apple //e proprietary textframe() function that I had added as a replacement quite some years ago.
Again: Please note that chline()/cvline() and the CH_... constants don't work with revers(1)!
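For illustration, a minimal sketch of that box/grid approach, using the standard CONIO functions and CH_... constants (frame() and its parameters are made up for the example):

    #include <conio.h>

    /* Draw a frame with MouseText-backed edges and lines. */
    void frame (unsigned char x, unsigned char y,
                unsigned char w, unsigned char h)
    {
        cputcxy (x, y, CH_ULCORNER);            /* top edge    */
        chline (w - 2);
        cputc (CH_URCORNER);
        cvlinexy (x,         y + 1, h - 2);     /* left side   */
        cvlinexy (x + w - 1, y + 1, h - 2);     /* right side  */
        cputcxy (x, y + h - 1, CH_LLCORNER);    /* bottom edge */
        chline (w - 2);
        cputc (CH_LRCORNER);
    }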
We want to add the capability to not only get the time but also set it. However, there's no "setter" matching the "getter" time().
The first ones that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support only 1/10 of a second, microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long', so the difference from nanoseconds isn't that relevant. Additionally, clock_getres() seems useful.
In order to avoid code duplication, clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
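A minimal usage sketch, assuming CLOCK_REALTIME as the clock id (the values are arbitrary example values):

    #include <time.h>

    void demo (void)
    {
        struct timespec ts;

        clock_getres (CLOCK_REALTIME, &ts);    /* query the clock resolution */

        clock_gettime (CLOCK_REALTIME, &ts);   /* time() now ends up here    */

        ts.tv_sec += 60;                       /* e.g. move ahead one minute */
        clock_settime (CLOCK_REALTIME, &ts);   /* the new time "setter"      */
    }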
For some reason beyond my understanding, _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(), and mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective, I decided to just remove _systime().
Although the primary target OS for the Apple II is for sure not DOS 3.3 but ProDOS 8, the Apple II binary files contained a DOS 3.3 4-byte header. Recently I was made aware of the AppleSingle file format. That format is a much better way to transport Apple II metadata from the cc65 toolchain to the ProDOS 8 file system. Therefore I asked for AppleSingle support in AppleCommander. Now that there's an AppleCommander BETA with AppleSingle support, it's the right time for this change.
I bumped the version to 2.17 because of this change, which is of course incompatible from the perspective of Apple II users.
The <toc> tag can't be put inside a section. It isn't needed anyway; we can get a TOC by making the header a section and the functions subsections.
As discussed in https://github.com/cc65/cc65/pull/452, after my premature merge the two functions in question don't work as expected.
Additionally, I fixed several style deviations in the pull request in question.
I recently realized that whether a driver is compatible with DOS 3.3 isn't a question of whether it actually uses IRQs but of whether it potentially could use IRQs, as the driver kernel pulls in the IRQ handler anyway. This is especially suboptimal in the scenario of statically linked drivers, where it is conceptually totally clear at link time whether they use IRQs or not. Apart from that, it might make sense to be able to define on a per-target basis whether _any_ of the drivers of a certain class uses IRQs. If that isn't the case, the driver kernel for that driver class on that target could omit IRQ handling too. I'm aware that Uz imagined drivers being loaded that weren't known when the program was linked - but I don't see this happening.
There were two aspects of this behavior that were considered undesirable:
- Although the safe inlining is in general desirable, it should only be enabled if asked for - like any other optimization.
- The option name -Os implies that it is a safe option; the potentially unsafe inlining should have a more explicit name.
So now:
- The option -Os enables the safe inlining.
- The new option --eagerly-inline-funcs enables the potentially unsafe inlining (including the safe inlining).
Additionally, the following were added:
- The option --inline-stdfuncs that, like -Os, enables the safe inlining but doesn't enable any optimizations.
- The pragma inline-stdfuncs that works identically to --inline-stdfuncs.
- The pragma allow-eager-inline that enables the potentially unsafe inlining but doesn't include the safe inlining. That means that by itself it only marks code as safe for potentially unsafe inlining but doesn't actually enable any inlining. See the sketch after this list.
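A minimal usage sketch of the new pragmas (the function is an arbitrary example):

    /* Enable the safe inlining without enabling any optimizations. */
    #pragma inline-stdfuncs (on)

    /* Mark the following code as safe for potentially unsafe inlining;
    ** by itself this doesn't inline anything.
    */
    #pragma allow-eager-inline (on)

    #include <string.h>

    unsigned char is_empty (const char* s)
    {
        return strlen (s) == 0;   /* strlen() is a candidate for inlining */
    }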
The 'all' target deliberately builds neither the doc nor the samples. But that doesn't mean that the Makefiles in the 'doc' and 'samples' directories must default to the (empty) 'all' target.
For quite some time I deliberately didn't add cursor support to the Apple II CONIO implementation. I considered it inappropriate to unduly increase the size of cgetc() for a rather seldom-used feature.
There's no hardware cursor on the Apple II, so displaying a cursor during keyboard input means reading the character stored at the cursor location, writing the cursor character, reading the keyboard, and finally writing back the character read initially.
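In C terms the logic looks roughly like the sketch below; peek_at_cursor(), poke_at_cursor() and wait_for_key() are hypothetical stand-ins for the actual assembly code:

    unsigned char peek_at_cursor (void);        /* hypothetical helpers  */
    void poke_at_cursor (unsigned char c);
    unsigned char wait_for_key (void);

    unsigned char cgetc_with_cursor (void)
    {
        unsigned char old = peek_at_cursor ();  /* char at the cursor    */
        unsigned char key;

        poke_at_cursor ('_');                   /* display the cursor    */
        key = wait_for_key ();                  /* read the keyboard     */
        poke_at_cursor (old);                   /* restore the old char  */
        return key;
    }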
The naive approach is to reuse the part of cputc() that determines the memory location of the character at the cursor position in order to read the character stored there. However, that means adding at least one additional JSR/RTS pair to cputc(), costing 4 bytes and 12 cycles :-( Apart from that, this approach still means a "too large" cgetc().
The approach implemented instead is to include all functionality required by cgetc() in cputc() - which is to read the current character before writing the new one. This may seem surprising at first glance, but an LDA (),Y / TAX sequence adds only 3 bytes and 7 cycles. So it's cheaper than the JSR/RTS pair and brings the code increase in cgetc() down to a reasonable value.
However, so far the internal cputc() code in question saved the X register. Now it uses the X register to return to cgetc() the old character present before the new character was written. This requires some rather small adjustments in other functions using that internal cputc() code.
If cc65 is installed and used as designed, there's no need whatsoever for CC65_HOME (both on *IX and Windows) from the perspective of the cc65 binaries. However, if the user has to access files from the 'target' directory, then he ends up with some assumption about the cc65 installation path nevertheless :-(
In order to avoid this, I added the --print-target-path option. It "exports" the logic used by the cc65 binaries to locate their files, thus allowing the user to leverage the same logic to locate the target files in his build scripts / Makefiles.
- use this function instead of directly looking at _dos_type in the included
targetutil and test programs
- fixes/improvements to the Atari runtime library regarding the recently
changed _dos_type values
- libsrc/atari/targetutil/w2cas.c: exit if no filename was entered
- add documentation for the new function
It prevents the statement's Assembly code from being optimized (e.g., moved or removed). Optimization is disabled for that statement's entire function (other functions aren't affected).
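Assuming this refers to cc65's volatile inline assembly, a minimal sketch:

    /* The volatile qualifier keeps the optimizer from moving or removing
    ** the statement; as described above, this disables optimization for
    ** the whole function containing it.
    */
    void busy_wait (void)
    {
        unsigned i;
        for (i = 0; i < 1000; ++i) {
            __asm__ volatile ("nop");   /* must stay exactly as written */
        }
    }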
The name RAM doesn't make much sense in general for a memory area because, e.g., the zero page is for sure RAM but is not part of the memory area named RAM.
For disk-based targets it makes sense to put the disk file more into focus, and here MAIN means the main part of the file - in contrast to some header.
Only for ROM-based targets is the name RAM kept, as there it makes sense to focus on the difference between RAM and ROM.
The way we want to use the INITBSS segment - and especially the fact that it won't have the type bss on all ROM-based targets - means that the name INITBSS is misleading. After all, INIT is the best name from my perspective, as the segment serves several purposes and therefore needs a rather generic name.
Unfortunately this means that the current INIT segment needs to be renamed too. Looking for a short (ideally 4-letter) name, I came up with ONCE, as it contains all code (and data) accessed only once during initialization.