* Docs say that CLK_TCK is an obsolete alias of CLOCKS_PER_SEC, so there's no point in individual definitions.
* All targets determining the clock rate at runtime can use a common handling.
This follows a suggestion by Sijmen Schouten in issue #818.
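For targets that determine the clock rate at runtime, the common handling could look roughly like this (a hypothetical header excerpt; the helper name is an assumption, not the actual cc65 code):

    /* Hypothetical excerpt of a shared time.h section */
    unsigned _clocks_per_sec (void);              /* assumed runtime helper     */
    #define CLOCKS_PER_SEC  _clocks_per_sec ()    /* clock rate read at runtime */
    #define CLK_TCK         CLOCKS_PER_SEC        /* obsolete alias, no per-target value */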
Platoterm64 now works with mouse at 1200 baud.
Bump MOUSE_API_VERSION in asminc/mouse-kernel.inc.
Fix typo in testcode/lib/mouse-test.c.
If detected, the program refuses to run, preventing a crash.
The check only works with SpartaDOS. I don't have an overview of which
DOSes potentially use the RAM under the ROM, or which other installed
programs might use it.
No additional runtime memory space is consumed, since the change
is in the "system check" load chunk which gets replaced by the
user program during loading.
Extended memory is mapped over the main memory in the 0x4000..0x7FFF
area. Many DOSes disable interrupts while extended memory is banked in,
but not all (e.g. SpartaDOS-X).
This change modifies the initial interrupt handler to map in main memory
before chaining to the "worker" handlers.
Since the initial interrupt handler uses a data segment to store the
trampoline to chain to the original handler, introduce a new "LOWBSS"
segment to hold this trampoline. Otherwise the trampoline may end up
inside the 0x4000..0x7FFF area.
Add a link-time warning if the "LOWCODE" segment lies within the extended
memory window.
We want to add the capability to not only get the time but also set the time, but there's no "setter" for the "getter" time().
The first ones that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support only 1/10 of a second, microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long', so the difference from nanoseconds isn't that relevant. Additionally, clock_getres() seems useful.
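For reference, a minimal usage sketch of the chosen interfaces; the 1/10 second resolution in the comment is just the expectation stated above, not a guaranteed value:

    #include <stdio.h>
    #include <time.h>

    int main (void)
    {
        struct timespec res;
        struct timespec now;

        /* On a target with a 1/10 second clock the resolution would be
        ** reported as 0 s / 100000000 ns.
        */
        if (clock_getres (CLOCK_REALTIME, &res) == 0) {
            printf ("res: %ld s %ld ns\n", (long) res.tv_sec, (long) res.tv_nsec);
        }
        if (clock_gettime (CLOCK_REALTIME, &now) == 0) {
            printf ("now: %ld s since the epoch\n", (long) now.tv_sec);
        }
        return 0;
    }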
In order to avoid code duplication clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
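The new layering can be sketched like this (not the actual library code; the function is renamed here to keep the sketch self-contained):

    #include <time.h>

    /* Sketch of time() as a thin wrapper around clock_gettime().
    ** "my_time" is a stand-in name for the library's time().
    */
    time_t my_time (time_t* t)
    {
        struct timespec ts;

        if (clock_gettime (CLOCK_REALTIME, &ts) != 0) {
            ts.tv_sec = (time_t) -1;    /* report failure the way time() does */
        }
        if (t != NULL) {
            *t = ts.tv_sec;
        }
        return ts.tv_sec;
    }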
For some reason beyond my understanding _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(). And mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective I decided to just remove _systime().
All TGI drivers but one didn't use IRQs. Especially when the TGI driver kernel was the only .interruptor, this meant quite some unnecessary overhead, because it pulled in the whole IRQ infrastructure.
The one driver using IRQs (the graphics driver for the 160x102x16 mode on the Lynx) now uses a library reference to set up a JMP to its IRQ handler.
All joystick drivers but one didn't use IRQs. Especially when the joystick driver kernel was the only .interruptor, this meant quite some unnecessary overhead, because it pulled in the whole IRQ infrastructure.
I was told that the one driver using IRQs (the DXS/HIT-4 Player joystick driver for the C64) can be reworked to not use them. Until this is done, that driver is defunct.
This function is used by many other CONIO functions, and the user program
doesn't necessarily use 'cgetc'. Having "setcursor" in a separate object file
saves space in this case and also allows the user program to override it
(e.g. when not using GRAPHICS 0 mode).
The change is inspired by the code of the standard joystick driver. It is, however, completely untested.
Note: Sites like http://raster.atariportal.cz/english.htm state that there needs to be a delay when reading joysticks via the MultiJoy adapter. There's no such delay in the driver, but I don't dare to make the call to add it.
So far the joy_masks array allowed several joystick drivers for a single target to each have different joy_read return values. However, this meant that every call to joy_read implied an additional joy_masks lookup to post-process the return value.
Given that almost all targets come with only a single joystick driver, this seems like inappropriate overhead. Therefore the target header files now contain constants matching the return value of joy_read() of the joystick driver(s) on that target.
If there indeed are several joystick drivers for a single target, they must agree on a common return value for joy_read. In some cases this was already the case, as there's a "natural" return value for joy_read. However, a few joystick drivers need to be adjusted. This may cause some overhead inside the driver, but that is certainly smaller than the overhead introduced by the joy_masks lookup before.
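A rough sketch of what user code looks like after the change; the mask names and values below are made up for illustration, the real constants are whatever the target header defines to match its driver(s):

    #include <joystick.h>

    /* Made-up mask values -- the real constants in the target header
    ** mirror the raw joy_read() return value of that target's driver(s).
    */
    #define MY_JOY_UP    0x01
    #define MY_JOY_FIRE  0x10

    void poll (void)
    {
        /* Assumes a driver was installed via joy_install() or
        ** joy_load_driver(). The raw return value is tested directly;
        ** no joy_masks lookup remains.
        */
        unsigned char state = joy_read (0);

        if (state & MY_JOY_UP) {
            /* move up */
        }
        if (state & MY_JOY_FIRE) {
            /* fire */
        }
    }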
!!! ToDo !!!
The following three joystick drivers become broken with this commit and need to be adjusted:
- atrmj8.s
- c64-numpad.s
- vic20-stdjoy.s
Stefan Dorndorf, author of XDOS, pointed out that retrieving the
default device by looking at an undocumented memory location won't
work in future XDOS versions.
He also showed a way to get the default device in a compatible
manner.
This change implements his method and adds a version check (XDOS
versions below 2.4 don't support this -- for them the behaviour
will be the same as, for example, AtariDOS: no notion of a default
drive).