Fixes this issue:
https://github.com/cc65/cc65/issues/722
ftell() returns the value returned by lseek(), and lseek() for the
Apple II wasn't returning a value.
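For reference, the call chain is essentially this (a minimal sketch, not the actual library code):

    #include <stdio.h>
    #include <unistd.h>

    /* ftell() just reports the current position as returned by lseek(),
    ** so an lseek() that doesn't return a value breaks ftell() as well.
    */
    long ftell (FILE* f)
    {
        return lseek (fileno (f), 0, SEEK_CUR);
    }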
According to https://github.com/cc65/wiki/wiki/Direct-console-IO it is undefined what happens when the end of the screen is reached. But it is _not_ undefined what happens when the end of the line is reached. So implement the usual thing - wrap to the beginning of the next line - which was easy enough to do after all.
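A small demo of the now-defined behavior (assuming a 40-column screen):

    #include <conio.h>

    int main (void)
    {
        unsigned char i;

        clrscr ();
        /* 50 chars don't fit on one 40-column line: output now wraps
        ** to the beginning of the next line instead of triggering
        ** undefined behavior.
        */
        for (i = 0; i < 50; ++i) {
            cputc ('*');
        }
        cgetc ();
        return 0;
    }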
Originally the Apple II had a 64-char set and used the upper two bits to control inverse and blinking. The Apple //e then brought an alternate char set without blinking but with more individual chars. However, it does _not_ contain 128 chars and use the upper bit to control inverse as one would assume. Rather it contains more than 128 chars - the MouseText chars. And because Apple wanted to provide as much backward compatibility as possible with the original char set, the alternate char set has a rather weird layout for chars > 128, with the inverse lowercase chars _not_ at (normal lowercase char + 128).
So far the Apple II CONIO implementation mapped chars 128-255 to chars 0-127 (with the exception of \r and \n). It made use of alternate chars > 128 transparently for the user via reverse(1). The user didn't have direct access to the MouseText chars; they were only used internally for things like chline() and cvline().
Now the mapping of chars 128-255 to 0-127 is removed. Using chars > 128 gives the user direct access to the "raw" alternate chars > 128. This especially gives the user direct access to the MouseText chars. But this clashes with the existing (and still desirable) reverse(1) logic. Combining reverse(1) with chars > 128 just doesn't result in anything usable!
What motivated this change? When I worked on the VT100 line drawing support for Telnet65 on the Apple //e (not using CONIO at all) I finally understood how MouseText is intended to be used to draw arbitrary grids with just three chars: a special "L" type char, the underscore and a vertical bar at the left side of the char box. I noticed that with those chars it is possible to follow the CONIO approach to boxes and grids: combining chline()/cvline() with special CH_... char constants for edges and intersections.
But in order to actually do so I needed to be able to define CH_... constants that when fed into the ordinary cputc() pipeline end up as MouseText chars. The obvious approach was to allow chars > 128 to directly access MouseText chars :-)
Now that the native CONIO box/grid approach works I deleted the Apple //e proprietary textframe() function that I added as a replacement quite some years ago.
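To illustrate, a frame can now be drawn the native CONIO way on the Apple //e just like on the other targets (a sketch using the standard CH_... constants from the target's conio.h):

    #include <conio.h>

    /* Draw a frame the generic CONIO way: corners via cputc(), edges
    ** via chline()/cvline(). On the Apple //e the CH_... constants now
    ** map directly to MouseText chars. Must not be combined with
    ** reverse(1)!
    */
    void frame (unsigned char x, unsigned char y, unsigned char w, unsigned char h)
    {
        cputcxy (x, y, CH_ULCORNER);
        chline (w - 2);
        cputc (CH_URCORNER);
        cvlinexy (x, y + 1, h - 2);
        cvlinexy (x + w - 1, y + 1, h - 2);
        cputcxy (x, y + h - 1, CH_LLCORNER);
        chline (w - 2);
        cputc (CH_LRCORNER);
    }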
Again: Please note that chline()/cvline() and the CH_... constants don't work with reverse(1)!
We basically cast a struct timespec pointer to a time_t pointer when we pass the clock_settime() parameter to localtime(). Explicitly express that in the source code.
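For context: struct timespec starts with its time_t tv_sec member, so a struct timespec pointer can double as a time_t pointer. The two spellings below are equivalent (hypothetical variable name):

    #include <time.h>

    void example (void)
    {
        struct timespec ts = { 0, 0 };

        /* Implicit: relies on tv_sec being the first member. */
        localtime ((time_t*)&ts);

        /* Explicit: names the member being accessed. */
        localtime (&ts.tv_sec);
    }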
The CIA TOD only stores the time but not the date. Therefore the date set by clock_settime() is just stored inside the C library for retrieval via clock_gettime().
The "very special" handling of 12AM/PM is based on https://groups.google.com/d/msg/comp.sys.cbm/ysVYSX4AMbc/vHrXCWEhCOUJ saying:
==========
24hr: Wr => Rd => Nx
--------------------
  0 : 92 => 12 => 01 <= Switch from 00 to 01 (24-hour notation)
  1 : 01 => 01 => 02
  2 : 02 => 02 => 03
 11 : 11 => 11 => 92
 12 : 12 => 92 => 81 <= Switch from 12 to 13 (24-hour notation)
 13 : 81 => 81 => 82
 14 : 82 => 82 => 83
 23 : 91 => 91 => 12
1st column ("24hr"): hour to be tested (decimal)
2nd column ("Wr"): hour written to TOD register (BCD)
3rd column ("Rd"): hour read from TOD register (BCD) immediately after writing the value in column 2, to see the AM/PM conversion, if any
4th column ("Nx"): next hour (BCD) after the hour switch
==========
Thanks Paul!
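In C terms, the write-side mapping implied by the table looks roughly like this (a sketch; tobcd() is a hypothetical binary-to-BCD helper):

    /* Hypothetical helper: convert 0..99 binary to BCD. */
    static unsigned char tobcd (unsigned char v)
    {
        return ((v / 10) << 4) | (v % 10);
    }

    /* Map a 24-hour value (0..23) to the byte written to the CIA TOD
    ** hours register, per the table above. Bit 7 is the PM flag; 12 AM
    ** and 12 PM are "very special" because the chip toggles that flag
    ** when the hour 12 is written.
    */
    static unsigned char hour_to_tod (unsigned char hour)
    {
        if (hour == 0) {
            return 0x92;                        /* reads back as 12 */
        }
        if (hour == 12) {
            return 0x12;                        /* reads back as 92 */
        }
        if (hour > 12) {
            return 0x80 | tobcd (hour - 12);    /* 13..23 => 81..91 */
        }
        return tobcd (hour);                    /* 1..11 => 01..11  */
    }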
The situation on the Apple II is rather special: There are several types of RTCs. It's not desirable to have specific code for all of them. As the OS supports file timestamps, RTC owners usually use OS drivers for their RTC. Those drivers read the RTC and write the result to a "date/time location" in RAM. The OS reads the date/time from that RAM location. If there's no RTC the RAM location just stays all zeros. The OS uses those zeros as timestamps and the files show up in a directory as "<NO DATE>".
There's no common interface to set RTCs, so if an RTC _IS_ present there's just nothing to do. However, if there's _NO_ RTC present the user might very well be interested in "manually" setting the RAM location in order to have timestamps. But he surely doesn't want to set the RAM location manually over and over again. Rather he wants to set it just once after booting the OS.
From that perspective it makes the most sense not to set both the date and the time but rather to set only the date and have the time just stay zero. Then files show up in a directory as "DD-MON-YY 0:00".
So clock_settime() checks whether the current time equals 0:00. If it does _NOT_ then an RTC is supposed to be active and clock_settime() fails with ERANGE. Otherwise clock_settime() sets the date - and completely ignores the time provided as parameter.
clock_getres() likewise checks whether the current time equals 0:00. If it does _NOT_ then an RTC is supposed to be active and clock_getres() returns a time resolution of one minute. Otherwise clock_getres() presumes that the only one setting the RAM location is clock_settime() and therefore returns a time resolution of one day.
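A sketch of the resulting clock_getres() logic (assuming the ProDOS time bytes at $BF92/$BF93; error checking omitted):

    #include <time.h>

    #define TIME ((unsigned char*)0xBF92)   /* ProDOS TIMELO/TIMEHI */

    int clock_getres (clockid_t clock_id, struct timespec* res)
    {
        res->tv_nsec = 0;
        /* A non-zero time means an RTC driver updates the RAM location,
        ** so the resolution is one minute. Otherwise only clock_settime()
        ** writes to it, resulting in a resolution of one day.
        */
        res->tv_sec = (TIME[0] != 0 || TIME[1] != 0) ? 60L : 86400L;
        return 0;
    }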
We want to add the capability to not only get the time but also set the time, but there's no "setter" for the "getter" time().
The first ones that come to mind are gettimeofday() and settimeofday(). However, they take a struct timezone argument that doesn't make sense - even the man page says "The use of the timezone structure is obsolete; the tz argument should normally be specified as NULL." And POSIX says "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."
The ...timeofday() functions work with microseconds while the clock_...time() functions work with nanoseconds. Given that we expect our targets to support only 1/10 of a second, microseconds look preferable at first sight. However, even microseconds already require the cc65 data type 'long', so the difference to nanoseconds isn't that relevant. Additionally, clock_getres() seems useful.
In order to avoid code duplication clock_gettime() takes over the role of the actual time getter from _systime(). So time() now calls clock_gettime() instead of _systime().
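The time() wrapper is now essentially just (sketch):

    #include <time.h>

    time_t time (time_t* t)
    {
        struct timespec ts;

        if (clock_gettime (CLOCK_REALTIME, &ts) != 0) {
            ts.tv_sec = (time_t)-1;     /* clock_gettime() sets errno */
        }
        if (t) {
            *t = ts.tv_sec;
        }
        return ts.tv_sec;
    }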
For some reason beyond my understanding _systime() was mentioned in time.h. _systime() worked exactly like e.g. _sysremove(), and those _sys...() functions are all considered internal. The only reason I could see would be a performance gain from bypassing the time() wrapper. However, all known _systime() implementations internally called mktime(). And mktime() is implemented in C using an iterative algorithm, so I really can't see what would be left to gain here. From that perspective I decided to just remove _systime().
So far time_t values were interpreted as local time values. However, usually time_t values are to be interpreted as "seconds since 1 Jan 1970 in UTC". Therefore all logic handling time_t values has to be changed.
- So far gmtime() called localtime() with an adjusted time_t; now localtime() calls gmtime() with an adjusted time_t (see the sketch after this list).
- mktime() has to do "the opposite" of localtime(); to keep it that way, mktime() now does the inverse of the adjustment made by localtime().
- All currently present time() implementations internally call mktime() so they don't require individual adjustments.
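In code the new relationship looks roughly like this (a sketch, assuming _tz.timezone holds the offset of local time from UTC in seconds):

    #include <time.h>

    struct tm* localtime (const time_t* timep)
    {
        /* Local time is UTC plus the timezone offset. */
        time_t t = *timep + _tz.timezone;
        return gmtime (&t);
    }

    /* mktime() correspondingly subtracts _tz.timezone again, so that
    ** mktime (localtime (&t)) == t holds.
    */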