Apple2 and Atmos keep the index in X, but can still use it on the best-case path as long as we reload it on the worst-case path (when we assert flow control).
Also, standardize the free-space threshold that triggers flow control at 32 characters left (compare against RecvFreeCnt before decrementing it).
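In C terms, the intended check order might look like this minimal sketch (the driver itself is assembly, and the exact comparison is an assumption; `assert_flow_control()` and `buffer_put()` are hypothetical stand-ins):

```
extern void assert_flow_control(void); /* hypothetical worst-case path */
extern void buffer_put(char c);        /* hypothetical buffer store */

static unsigned char RecvFreeCnt;      /* free space left in the receive buffer */

static void on_receive(char c)
{
    /* Compare before decrementing, so flow control is asserted while
       32 characters are still free. */
    if (RecvFreeCnt <= 33) {
        assert_flow_control();
    }
    --RecvFreeCnt;
    buffer_put(c);
}
```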
There's no target with more than one serial driver (and I don't see that changing anytime soon), so it's a no-brainer to apply the standard driver concept to serial drivers.
The Receive Data Register and the Transmit Data Register share a single address. Accessing that address with STA abs,X in order to fill the Transmit Data Register triggers a 6502 false read, which empties the Receive Data Register.
The simplest way to work around that issue - which I chose here - is to move the base address for all ACIA accesses from page $C0 to page $BF. However, that adds an additional cycle to all read accesses. An alternative approach would be to modify only the single line `sta ACIA_DATA,x`.
The target util convert.system is to be used in conjunction with GEOS on the Apple II but has to be built as an "ordinary" Apple II program. The way the cc65 library build system is designed, there's no way to define dependencies between targets. The solution used so far was to explicitly trigger a build of the target 'apple2enh' from the target 'geos-apple'. However, that approach tends to break parallel builds, which may be in the middle of building 'apple2enh' at the time it is triggered by 'geos-apple'.
There might be ways to get this fixed - but the cc65 library build system is already (more than) complex enough, so I really don't want to add anything special to it.
On the other hand, there are easier ways (outside the scope of cc65) to achieve what convert.system does, so I don't expect convert.system to actually be used - it's more of a reference implementation.
Putting all the facts together, the decision was easy: just move convert.system from the target it is used with to the target(s) it is built with.
Certain scenarios (e.g. not having run any Applesoft program at all since booting DOS 3.3) can make DOS 3.3 treat cc65 device input (e.g. getchar()) that reads a CR as a command to be interpreted from the keyboard buffer. Setting the high byte of the currently executed Applesoft line number to some value <> $FF (besides setting the input prompt to some value <> ']') makes DOS 3.3 understand that we're not in immediate mode and that therefore I/O not preceded by Ctrl-D mustn't be fiddled with (see the DOS 3.3 routine at $A65E).
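A minimal sketch of that fix, assuming the standard Applesoft zero-page locations (current line number high byte at $76, input prompt character at $33):

```
#define CURLINH (*(volatile unsigned char*)0x76) /* Applesoft current line number, high byte (assumed) */
#define PROMPT  (*(volatile unsigned char*)0x33) /* Applesoft input prompt character (assumed) */

static void leave_immediate_mode(void)
{
    CURLINH = 0x00; /* any value != $FF: "a program is running" */
    PROMPT  = 0x00; /* any value != ']': not the immediate-mode prompt */
}
```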
Please refer to https://github.com/cc65/cc65/pull/532 for background info.
I wrote in https://sourceforge.net/p/cc65/mailman/message/35873183/
===
cputs() wraps to the next line if the string is too long to fit on the current line. I don't know if it's worth the effort to allow cpeeks() to continue reading from the next line. I'd like to discuss this aspect with the actual implementers.
===
This is still as unclear today as it was when I wrote the above. Therefore this change just doesn't add cpeeks() at all.
Since f8c6c58373 the Apple II CONIO implementation doesn't "need" revers() anymore - meaning that (nearly) every possible value can be placed in VRAM with a straight cputc() (without the need for a previous revers(1)).
The implementation of cpeekc() leverages that cputc() ability by always returning the value that can be fed into cputc() without a previous revers(1). Accordingly, cpeekrevers() always returns 0.
So after the sequence revers(1); cputc(x); a cpeekc() will return a value different from x! However, I don't see this behavior breaking the cpeekc() contract. I see the cpeekc() contract being defined by the sequence textcolor(cpeekcolor()); revers(cpeekrevers()); cputc(cpeekc()); placing the very same value in VRAM that there was before. And that contract is fulfilled.
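Spelled out as code, that contract sequence looks like this (using the existing conio calls):

```
#include <conio.h>

/* Re-emit the character under the cursor: afterwards VRAM holds the
   very same value as before, even if cpeekc() didn't return the value
   originally passed to cputc(). */
static void reemit_char(void)
{
    textcolor(cpeekcolor());
    revers(cpeekrevers());
    cputc(cpeekc());
}
```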
The implementation is a bit tricky, as it requires different code paths for the //e, the //c and the IIgs. Additionally, the //c only provides a VBL IRQ flag that is supposed to be used by an IRQ handler to determine what triggered the IRQ. However, masking IRQs on the CPU, activating the VBL IRQ, clearing any pending VBL IRQ and then polling the IRQ flag does the trick.
As described e.g. in the Apple IIe Technote #6: 'The Apple II Paddle Circuits', it doesn't work to call PREAD several times in immediate succession. However, so far the Apple II joystick driver did just that in order to read the two joystick axes.
Therefore the driver now uses a custom routine that reads both paddles _at_the_same_time_. The code doing so requires nearly twice the cycles, meaning that the overall time for a joy_read() stays roughly the same. However, twice the cycles in the read loop means half the resolution. But for the cc65 joystick driver use case that doesn't hurt at all, as the driver is only supposed to detect neutral vs. left/right and up/down.
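A C-level sketch of such a simultaneous read (the actual driver is assembly; $C070 is the standard paddle trigger, $C064/$C065 the paddle timer flags):

```
#define PTRIG  (*(volatile unsigned char*)0xC070) /* paddle timer trigger */
#define PADDL0 (*(volatile unsigned char*)0xC064) /* paddle 0 timer state */
#define PADDL1 (*(volatile unsigned char*)0xC065) /* paddle 1 timer state */

/* Count in one loop how long each paddle timer stays high, so both
   paddles are sampled together after a single trigger. */
static void read_paddles(unsigned char* x, unsigned char* y)
{
    unsigned char i  = 0;
    unsigned char v0 = 0;
    unsigned char v1 = 0;

    (void)PTRIG;                 /* start both paddle timers */
    do {
        if (PADDL0 & 0x80) ++v0; /* timer 0 still running */
        if (PADDL1 & 0x80) ++v1; /* timer 1 still running */
    } while (++i);               /* at most 256 iterations */
    *x = v0;
    *y = v1;
}
```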
CPU accelerators are supposed to detect accesses to $C070 and automatically slow down for some time. The IIgs, however, rather comes with a modified ROM routine. Therefore it is necessary to manually slow down the IIgs when replacing the ROM routine.
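A hedged sketch of such a manual slowdown, assuming the IIgs speed register at $C036 with bit 7 selecting fast mode:

```
#define CYAREG (*(volatile unsigned char*)0xC036) /* IIgs speed register (assumed) */

static void with_slow_iigs(void (*action)(void))
{
    unsigned char saved = CYAREG;

    CYAREG = saved & 0x7F; /* clear the fast bit: drop to 1 MHz */
    action();              /* e.g. the paddle read loop above */
    CYAREG = saved;        /* restore the previous speed */
}
```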
Do not check $FBBE when detecting the Apple //e card. This byte is a version number for the Apple //e card according to Misc Technote #7, and it appears that the last version of the software that I am aware of has a 3 at this location.
Prior to this change, Apple //e cards which were not version 0 would be detected as an enhanced Apple //e.
Although documented nowhere (!!!), ProDOS trashes the random counter locations $4E/$4F. I discovered this because my TCP connections didn't have random local ports.
It's a really funny coincidence that David Finnigan discovered the very same issue, for the very same reason, only 3 years ago: https://groups.google.com/forum/#!topic/comp.sys.apple2.programmer/1ciep_Oetvo
In order to have randomize() work as expected (and Apple II random number generation in general), it is necessary to update the random counter during the keypress wait, just like the ROM routine does.
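A minimal sketch of such a keypress wait, assuming the standard locations (keyboard data at $C000, keyboard strobe at $C010, random counter at $4E/$4F):

```
#define KBD     (*(volatile unsigned char*)0xC000) /* keyboard data + strobe bit */
#define KBDSTRB (*(volatile unsigned char*)0xC010) /* keyboard strobe clear */
#define RNDL    (*(volatile unsigned char*)0x4E)   /* random counter, low byte */
#define RNDH    (*(volatile unsigned char*)0x4F)   /* random counter, high byte */

static unsigned char wait_key(void)
{
    unsigned char key;

    while (!((key = KBD) & 0x80)) { /* no keypress yet */
        if (++RNDL == 0) {          /* advance the random counter */
            ++RNDH;                 /* just like the ROM does */
        }
    }
    (void)KBDSTRB;                  /* clear the keyboard strobe */
    return key & 0x7F;
}
```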
Fixes this issue:
https://github.com/cc65/cc65/issues/722
ftell() returns the value returned by lseek(), and lseek() for the
Apple II wasn't returning a value.
According to https://github.com/cc65/wiki/wiki/Direct-console-IO it is undefined what happens when the end of the screen is reached. But it is _not_ undefined what happens when the end of the line is reached. So implement the usual thing - which was easy enough to do after all.
Originally the Apple II had a 64 char set and used the upper two bits to control inverse and blinking. The Apple //e then brought an alternate char set without blinking but with more individual chars. However, it does _not_ contain 128 chars with the upper bit controlling inverse, as one would assume. Rather, it contains more than 128 chars - the MouseText chars. And because Apple wanted to provide as much backward compatibility as possible with the original char set, the alternate char set has a rather weird layout for chars > 128, with the inverse lowercase chars _not_ at (normal lowercase char + 128).
So far the Apple II CONIO implementation mapped chars 128-255 to chars 0-127 (with the exception of \r and \n). It made use of alternate chars > 128 transparently for the user via revers(1). The user didn't have direct access to the MouseText chars; they were only used internally for things like chline() and cvline().
Now the mapping of chars 128-255 to 0-127 is removed. Using chars > 128 gives the user direct access to the "raw" alternate chars > 128. This especially gives the user direct access to the MouseText chars. But this clashes with the existing (and still desirable) revers(1) logic. Combining revers(1) with chars > 128 just doesn't result in anything usable!
What motivated this change? When I worked on the VT100 line drawing support for Telnet65 on the Apple //e (not using CONIO at all), I finally understood how MouseText is intended to be used to draw arbitrary grids with just three chars: a special "L" type char, the underscore and a vertical bar at the left side of the char box. I noticed that with those chars it is possible to follow the CONIO approach to boxes and grids: combining chline()/cvline() with special CH_... char constants for edges and intersections.
But in order to actually do so, I needed to be able to define CH_... constants that, when fed into the ordinary cputc() pipeline, end up as MouseText chars. The obvious approach was to allow chars > 128 to directly access MouseText chars :-)
Now that the native CONIO box/grid approach works, I deleted the proprietary Apple //e textframe() function that I had added as a replacement quite some years ago.
Again: please note that chline()/cvline() and the CH_... constants don't work with revers(1)!
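For illustration, a simple box drawn that way (a sketch using the standard conio calls; CH_ULCORNER etc. are the per-target constants from the cc65 headers):

```
#include <conio.h>

/* Draw a box frame with chline()/cvline() plus the corner constants.
   Assumes w >= 2 and h >= 2. */
static void box(unsigned char x, unsigned char y,
                unsigned char w, unsigned char h)
{
    cputcxy(x, y, CH_ULCORNER);          /* top edge */
    chlinexy(x + 1, y, w - 2);
    cputc(CH_URCORNER);
    cputcxy(x, y + h - 1, CH_LLCORNER);  /* bottom edge */
    chlinexy(x + 1, y + h - 1, w - 2);
    cputc(CH_LRCORNER);
    cvlinexy(x, y + 1, h - 2);           /* left edge */
    cvlinexy(x + w - 1, y + 1, h - 2);   /* right edge */
}
```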
We basically cast a struct timespec pointer to a time_t pointer when we pass the clock_settime() parameter to localtime(). Explicitly express that in the source code.
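Roughly the idea (a sketch, not the actual library source): tv_sec is the first member of struct timespec, which is what the implicit cast relied on.

```
#include <time.h>

static struct tm* settime_to_tm(const struct timespec* tp)
{
    /* Taking the address of the tv_sec member yields the same address
       the former (time_t*) cast produced, but spells out the intent. */
    return localtime(&tp->tv_sec);
}
```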
The situation on the Apple II is rather special: there are several types of RTCs, and it's not desirable to have specific code for all of them. As the OS supports file timestamps, RTC owners usually use OS drivers for their RTC. Those drivers read the RTC and write the result to a "date/time location" in RAM. The OS reads the date/time from that RAM location. If there's no RTC, the RAM location keeps containing zeros. The OS uses those zeros as timestamps, and the files show up in a directory as "<NO DATE>".
There's no common interface to set RTCs, so if an RTC _IS_ present there's just nothing to do. However, if there's _NO_ RTC present, the user might very well be interested in "manually" setting the RAM location in order to have timestamps. But he surely doesn't want to set the RAM location manually over and over again. Rather, he wants to set it just once after booting the OS.
From that perspective it makes the most sense to not set both the date and the time, but rather to only set the date and have the time just stay zero. Then files show up in a directory as "DD-MON-YY 0:00".
So clock_settime() checks if the current time equals 0:00. If it does _NOT_ then an RTC is supposed to be active and clock_settime() fails with ERANGE. Otherwise clock_settime() sets the date - and completely ignores the time provided as parameter.
clock_getres() too checks if the current time equals 0:00. If it does _NOT_ then an RTC is supposed to be active and clock_getres() returns a time resolution of one minute. Otherwise clock_getres() presumes that the only one who sets the RAM location is clock_settime() and therefore returns a time resolution of one day.
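A sketch of that policy (`os_time_is_zero()` and `os_set_date()` are hypothetical stand-ins for access to the OS date/time RAM location):

```
#include <errno.h>
#include <time.h>

extern unsigned char os_time_is_zero(void);   /* hypothetical */
extern void os_set_date(const struct tm* tm); /* hypothetical */

int clock_settime(clockid_t clock_id, const struct timespec* tp)
{
    if (!os_time_is_zero()) { /* time != 0:00 -> RTC presumed active */
        errno = ERANGE;
        return -1;
    }
    os_set_date(localtime(&tp->tv_sec)); /* set the date, ignore the time */
    return 0;
}
```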
We want to add the capability to not only get the time but also set the time, but there's no "setter" for the "getter" time().
The first ones that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds, while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support only 1/10 of a second, the microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long', so the difference to nanoseconds isn't that relevant. Additionally, clock_getres() seems useful.
In order to avoid code duplication, clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
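The resulting wrapper might look like this sketch, assuming the usual POSIX shape of clock_gettime():

```
#include <time.h>

time_t time(time_t* t)
{
    struct timespec ts;

    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        ts.tv_sec = (time_t)-1; /* report failure the time() way */
    }
    if (t) {
        *t = ts.tv_sec;
    }
    return ts.tv_sec;
}
```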
For some reason beyond my understanding, _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(). And mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective I decided to just remove _systime().