The situation on the Apple II is rather special: There are several types of RTCs, and it's not desirable to have specific code for all of them. As the OS supports file timestamps, RTC owners usually use OS drivers for their RTC. Those drivers read the RTC and write the result to a "date/time location" in RAM; the OS reads the date/time from that RAM location. If there's no RTC, the RAM location keeps containing zeros. The OS uses those zeros as timestamps and the files show up in a directory as "<NO DATE>".
There's no common interface to set RTCs, so if an RTC _IS_ present there's just nothing to do. However, if there's _NO_ RTC present the user might very well be interested in "manually" setting the RAM location in order to have timestamps. But he surely doesn't want to set the RAM location by hand over and over again. Rather he wants to set it just once after booting the OS.
From that perspective it makes most sense not to set both the date and the time, but rather to set only the date and leave the time at zero. Files then show up in a directory as "DD-MON-YY 0:00".
So clock_settime() checks whether the current time equals 0:00. If it does _NOT_, an RTC is presumed to be active and clock_settime() fails with ERANGE. Otherwise clock_settime() sets the date - and completely ignores the time provided as parameter.
clock_getres() too checks whether the current time equals 0:00. If it does _NOT_, an RTC is presumed to be active and clock_getres() returns a time resolution of one minute. Otherwise clock_getres() presumes that clock_settime() is the only one setting the RAM location and therefore returns a time resolution of one day.
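For illustration, here's a minimal usage sketch of how a program on a machine without an RTC might set the date once after booting, along the lines described above. It assumes the CLOCK_REALTIME clock ID and standard mktime() usage; the concrete date values are just placeholders and error handling is kept to a minimum.

    #include <time.h>
    #include <string.h>

    int main (void)
    {
        struct tm tm;
        struct timespec ts = { 0, 0 };

        /* Build a calendar date; the time-of-day fields stay zero. */
        memset (&tm, 0, sizeof tm);
        tm.tm_year = 86;        /* Years since 1900 */
        tm.tm_mon  = 6;         /* July (months are 0-based) */
        tm.tm_mday = 1;
        ts.tv_sec  = mktime (&tm);

        /* Only the date part is used; the time-of-day part is ignored. */
        if (clock_settime (CLOCK_REALTIME, &ts) != 0) {
            /* Failure (ERANGE): the current time isn't 0:00, so an RTC
            ** driver is presumed to be active and the date is left alone.
            */
        }
        return 0;
    }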
We want to add the capability to not only get the time but also set the time, but there's no "setter" for the "getter" time().
The first ones that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support a resolution of 1/10 of a second at best, microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long' (1/10 of a second is 100,000 microseconds, well beyond a 16-bit int), so the difference to nanoseconds isn't really relevant. Additionally clock_getres() seems useful.
In order to avoid code duplication clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
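Conceptually the time() wrapper now boils down to something like the sketch below. This is a simplified illustration only; the actual library code may handle errors and the optional pointer argument somewhat differently.

    #include <time.h>

    time_t __fastcall__ time (time_t* t)
    {
        struct timespec ts;
        time_t result = (time_t)-1;

        /* clock_gettime() is now the one actual time getter. */
        if (clock_gettime (CLOCK_REALTIME, &ts) == 0) {
            result = ts.tv_sec;
        }
        if (t) {
            *t = result;
        }
        return result;
    }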
For some reason beyond my understanding _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(), and mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective I decided to just remove _systime().
All but one of the TGI drivers didn't use IRQs. Especially when the TGI driver kernel was the only .interruptor, this meant quite some unnecessary overhead because it pulled in the whole IRQ infrastructure.
The one driver using IRQs (the graphics driver for the 160x102x16 mode on the Lynx) now uses a library reference to set up a JMP to its IRQ handler.
All but one of the joystick drivers didn't use IRQs. Especially when the joystick driver kernel was the only .interruptor, this meant quite some unnecessary overhead because it pulled in the whole IRQ infrastructure.
I was told that the one driver using IRQs (the DXS/HIT-4 Player joystick driver for the C64) can be reworked to not do it. Until this is done that driver is defunct.
So far the joy_masks array allowed several joystick drivers for a single target to each have different joy_read return values. However this meant that every call to joy_read implied an additional joy_masks lookup to post-process the return value.
Given that almost all targets come with only a single joystick driver, this seems an inappropriate overhead. Therefore the target header files now contain constants matching the joy_read return values of the joystick driver(s) on that target.
If there indeed are several joystick drivers for a single target, they must agree on common joy_read return values. In some cases this was already the case as there's a "natural" return value for joy_read. However, a few joystick drivers need to be adjusted. This may cause some overhead inside the driver, but that is for sure smaller than the overhead introduced by the joy_masks lookup before.
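For application code the change is mostly transparent: the joy_read() result is simply masked with the constants from the target header instead of going through joy_masks. Here's a minimal sketch; the JOY_UP_MASK / JOY_FIRE_MASK names below stand in for whatever constants the respective target header actually defines.

    #include <stdio.h>
    #include <joystick.h>

    int main (void)
    {
        unsigned char joy;
        unsigned char state;

        /* Load the standard joystick driver for the target. */
        if (joy_load_driver (joy_stddrv) != JOY_ERR_OK) {
            return 1;
        }

        for (joy = 0; joy < joy_count (); ++joy) {
            state = joy_read (joy);
            /* The mask constants now come straight from the target header. */
            printf ("joy %d: up=%d fire=%d\n", joy,
                    (state & JOY_UP_MASK) != 0,
                    (state & JOY_FIRE_MASK) != 0);
        }

        joy_unload ();
        return 0;
    }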
!!! ToDo !!!
The following three joystick drivers become broken with this commit and need to be adjusted:
- atrmj8.s
- c64-numpad.s
- vic20-stdjoy.s
This change includes some cleanups, removal of mainargs.s (game console
programs never have arguments), and a workaround for a problem I'm seeing.
The problem is that sometimes (in fact, more often than not) the clrscr()
call in testcode/lib/joy-test.c writes some garbage chars on the screen (most
often a "P"). Could be my hardware (I haven't seen it on MAME), but to
me the root cause is still unknown.
Stefan Dorndorf, author of XDOS, pointed out that retrieving the
default device by looking at an undocumented memory location won't
work in future XDOS versions.
He also showed a way to get the default device in a compatible
manner.
This change implements his method and adds a version check (XDOS
versions below 2.4 don't support this -- for them the behaviour
will be the same as, for example, AtariDOS: no notion of a default
drive).
- Adds new ENOEXEC error code, also used by Apple2 targets.
- Maximum command line length is 40, incl. program name. This is
an XDOS restriction.
- testcode/lib/tinyshell.c has been extended to be able to run
  programs (a rough usage sketch follows below).
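To show how the new error code appears on the caller's side, here's a rough sketch of how such a shell might start a program. It assumes cc65's exec() from unistd.h and that a failed exec() returns with errno set (e.g. to ENOEXEC for a file that isn't executable); the program name and details are illustrative only.

    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>

    /* Try to chain to another program. exec() replaces the running
    ** program on success, so returning from it at all means failure.
    */
    static void run (const char* name, const char* cmdline)
    {
        exec (name, cmdline);
        if (errno == ENOEXEC) {
            printf ("%s: not an executable\n", name);
        } else {
            printf ("%s: error #%d\n", name, errno);
        }
    }

    int main (void)
    {
        run ("program.com", "");    /* Hypothetical program name */
        return 0;
    }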
When a program starts running, the INIT segment is moved from one place to another. Then INIT's code is executed, and the first place is re-used for variables. After the INIT code has finished, the second place can be re-used by the heap and the C stack. That means that initialization code and data won't waste any RAM after they are no longer needed.
Up to now static drivers were created via co65 from dynamic drivers. However there was an issue with that approach:
The dynamic drivers are "o65 simple files", which requires that they start with the 'code' segment. However, dynamic drivers need to start with the module header - which is written to. For dynamic drivers this is no more than a conceptual issue because they always contain a 'data' segment and may therefore only be loaded into writable memory.
However, when dynamic drivers are converted into static drivers using co65, that issue becomes a real problem: the 'code' segment may then end up in non-writable memory - and thus writing to the module header fails.
Instead of changing the way dynamic drivers work, I opted to make static driver creation totally independent of dynamic drivers. This allows the module header to be placed in the 'data' segment (see 'module.mac').
The Apple2 doesn't have sprites, so the Apple2 mouse callbacks place a special character on the text screen to indicate the mouse position. In order to support the necessary removing and redrawing of that character, the Apple2 mouse driver called the Apple2 mouse callbacks in an "unusual way". So far so (sort of) good.
However, the upcoming Atari mouse driver aims to support both "sprite-type" mouse callbacks and "text-char-type" mouse callbacks. Therefore the interface between mouse drivers and callbacks needs to be extended to allow the mouse callbacks to hide their different types from the mouse driver.
The nature of this change can be seen best by looking at the Apple2 file modifications. The CBM drivers and callbacks (at least the current ones) don't benefit from this change.