This is documented in TBR3 and is already declared in <misctool.h>, but did not previously have glue code. TBR3 says "Applications should never make this call," but it may be useful in system utilities.
This moves the free() call for the file buffer before the malloc() that occurs when closing a temp file, which should at least slightly reduce the chances that the malloc() call fails.
Previously, the code for closing a temporary file assumed that malloc would succeed. If it did not, the code would trash memory and (at least in my testing) crash the system. Now it checks for and handles malloc failures, although they will still lead to the temporary file not being deleted.
Here is a test program illustrating the problem:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = tmpfile();
    void *p;

    if (!f)
        return 0;
    /* exhaust the heap so the malloc() call made while closing the
       temp file will fail */
    do {
        p = malloc(8*1024);
    } while (p);
    fclose(f);
}
This can happen, e.g., if there is an IO error or if there is insufficient free disk space to flush the data. In this case, fclose should return -1 to report an error, but it should still effectively close the stream and deallocate the buffer for it. (This behavior is explicitly specified in the C99 and later standards.)
Previously, ORCA/C effectively left the stream open in these cases. As a result, the buffer was not deallocated. More importantly, this could cause the program to hang at exit, because the stream would never be removed from the list of open files.
Here is an example program that demonstrates the problem:
/*
 * Run this on a volume with less than 1MB of free space, e.g. a floppy.
 * The fclose return value should be -1 (EOF), indicating an error, but
 * the two RealFreeMem values should be close to each other (indicating
 * that the buffer was freed), and the program should not hang on exit.
 */
#include <stdio.h>
#include <stddef.h>
#include <memory.h>

#define BUFFER_SIZE 1000000

int main(void) {
    size_t i;
    int ret;

    printf("At start, RealFreeMem = %lu\n", RealFreeMem());
    FILE *f = fopen("testfile", "wb");
    if (!f)
        return 0;
    setvbuf(f, NULL, _IOFBF, BUFFER_SIZE);
    for (i = 0; i < BUFFER_SIZE; i++) {
        putc('x', f);
    }
    ret = fclose(f);
    printf("fclose return value = %d\n", ret);
    printf("At end, RealFreeMem = %lu (should be close to start value)\n",
           RealFreeMem());
}
This fixes the following issues:
*If n was 0x80000000 or greater, strncmp would return 0 without performing a comparison.
*If n was 0x1000000 or greater, strncmp might compare fewer characters than it should because the high byte of n was effectively ignored, causing it to return 0 when it should not.
Here is an example demonstrating these issues:
#pragma memorymodel 1

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define LEN 100000

int main(void) {
    char *s1 = malloc(LEN+1);
    char *s2 = malloc(LEN+1);
    if (!s1 || !s2)
        return 0;
    for (unsigned long i = 0; i < LEN; i++) {
        s2[i] = s1[i] = '0' + (i & 0x07);
    }
    /* the strings differ only at index LEN */
    s1[LEN] = 'x';
    s2[LEN] = '\0';
    return strncmp(s1, s2, 0xFFFFFFFF);
}
This addresses the following issues:
*If the low-order 16 bits of n were 0x0000, no concatenation would be performed.
*If n was 0x1000000 or greater, the output could be cut off prematurely because the high byte of n was effectively ignored.
The following test program demonstrates these issues:
#pragma memorymodel 1

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define LEN2 100000

int main(void) {
    char *s1 = malloc(LEN2+2);
    char *s2 = malloc(LEN2+1);
    if (!s1 || !s2)
        return 0;
    for (unsigned long i = 0; i < LEN2; i++)
        s2[i] = '0' + (i & 0x07);
    s2[LEN2] = '\0';
    strcpy(s1, "a");
    strncat(s1, s2, 0x1000000);
    puts(s1);
    printf("len = %zu\n", strlen(s1));
}
There were two issues:
*If bit 15 of the n value was set, the second string would not be copied.
*If the length of the second string was 64K or more, it would not be copied properly because the pointers were not updated.
This test program demonstrates both issues:
#pragma memorymodel 1

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define LEN2 100000

int main(void) {
    char *s1 = malloc(LEN2+2);
    char *s2 = malloc(LEN2+1);
    if (!s1 || !s2)
        return 0;
    for (unsigned long i = 0; i < LEN2; i++)
        s2[i] = '0' + (i & 0x07);
    strcpy(s1, "a");
    strncat(s1, s2, LEN2);
    puts(s1);
    printf("len = %zu\n", strlen(s1));
}
Previously, the pointer was not properly updated to account for the bank crossing, so the characters from the second string would be written to the wrong bank.
Here is an example that illustrates this:
#include <memory.h>
#include <string.h>
#include <orca.h>
#include <stdio.h>

int main(void) {
    Handle hndl = NewHandle(0x1000f, userid(), 0xC000, 0);
    if (toolerror())
        return 0;
    char *s = *hndl;
    /* position s at the last byte of a bank, so the copies cross
       a bank boundary */
    s = (void*)((unsigned long)s | 0xffff);
    strcpy(s, "foo");
    strcat(s, "bar");
    strncat(s, "baz", 5);
    puts(s);
}
If the format string is empty or contains only %n conversions, then nothing should be read from the stream, so no error should be indicated even if it is at EOF. If a directive does read from the stream and encounter EOF, that will be handled when the directive is processed.
This could cause scanf to pause waiting for input from the console in cases where it should not.
This uses an approximation based on the Stirling series for most positive values, but uses separate rational approximations for greater accuracy near 1 and 2. A reflection formula is used for negative values.
These are similar enough that they can use the same code with just a few conditionals, which saves space.
(This same code can also be used for binary when that is added.)
These print a floating-point number in a hexadecimal format, with several variations based on the conversion specification:
*Upper or lower case letters? (%A or %a)
*Number of digits after the decimal point? (precision)
*Use + sign for positive numbers? (+ flag)
*Use leading space for positive numbers? (space flag)
*Include decimal point when there are no more digits? (# flag)
*Pad with leading zeros after 0x? (0 flag)
If no precision is given, enough digits are printed to represent the value exactly. Otherwise, the value is correctly rounded based on the rounding mode.
This is what the standards require. Previously, the '0' flag would effectively override '-'.
Here is a program that demonstrates the problem:
#include <stdio.h>

int main(void) {
    /* In each case the '-' flag (or negative width) should override
       '0', printing "123" left-justified in a 20-character field. */
    printf("|%-020d|\n", 123);
    printf("|%0-20d|\n", 123);
    printf("|%0*d|\n", -20, 123);
}
This tries to follow the C and IEEE standards carefully regarding rounding, exceptions, etc. As with the other ORCA/C <math.h> functions, there is really just one version, which computes in extended precision, so double rounding is still possible if the result is assigned to a float or double variable.
In addition to the tests I added to the ORCA/C test suite, I have also tested this against (somewhat modified versions of) the following:
*FreeBSD fma tests by David Schultz:
https://github.com/freebsd/freebsd-src/blob/release/9.3.0/tools/regression/lib/msun/test-fma.c
*Tests by Bruno Haible, in the Gnulib test suite and attached to this bug report:
https://sourceware.org/bugzilla/show_bug.cgi?id=13304
Previously, the functions registered with atexit() would be called with the data bank set to that of the blank segment, which is correct in the small memory model but not necessarily in the large memory model. This could cause memory corruption or misbehavior for certain operations accessing global variables.
ORCA/C's tmpnam() implementation is designed to use prefix 3 if it is defined and the path is sufficiently short. I think it was intended to allow up to a 15-character disk name to be specified, but it used a GS/OS result buffer size of 16, which only leaves 12 characters for the path, including initial and terminal : characters. As such, only up to a 10-character disk name could be used. This patch increases the specified buffer size to 21, allowing for a 17-character path that can encompass a 15-character disk name.
If the last element in the range being sorted has the smallest value, rsort can be called with last set to first-1, i.e. pointing to (what would be) the element before the first one. But with large enough element sizes and appropriate address values, this address computation can wrap around and produce a negative value for last. We need to treat such a value as being less than first, so it terminates that branch of the recursive computation. Previously, we were doing an unsigned comparison, so such a last value would be treated as greater than first and would lead to improper behavior including memory trashing.
Here is an example program that can show this (depending on memory layout):
#pragma memorymodel 1

#include <stdlib.h>
#include <stdio.h>

#define PADSIZE 2000000 /* may need to adjust based on memory size/layout */
#define N 2

struct big {
    int i;
    char pad[PADSIZE];
};

int cmp(const void *p1, const void *p2) {
    int a = ((struct big *)p1)->i;
    int b = ((struct big *)p2)->i;
    return (a < b) ? -1 : (a > b);
}

int main(void) {
    int j;
    struct big *p = malloc(sizeof(struct big) * N);
    if (!p)
        return 0;
    for (j = 0; j < N; j++) {
        p[j].i = N-j;
    }
    qsort(p, N, sizeof(struct big), cmp);
    for (j = 0; j < N; j++) {
        printf("%i\n", p[j].i);
    }
}
Previously, the sort could have O(n) recursion depth for some inputs (e.g. if the array was already sorted or reverse sorted), which could easily cause stack overflows.
Now, recursion is only used for the smaller of the two subarrays at each step, so the maximum recursion depth is bounded by log2(n).
When using the large memory model, the wrong data bank (that of the library code rather than the program's static data) would be in place when the comparison function was called, potentially leading to data corruption or other incorrect behavior.
This code did not previously work properly, because the X register value was overwritten within the loop. This could result in incorrect behavior such as hanging or data corruption when using qsort with element sizes >= 64KiB.
This is used by the new ORCA/C debugging option to check for illegal use of null pointers. It is similar to an existing routine in PasLib used by ORCA/Pascal's similar checks.
This ensures use of the Time Tool is fully under the control of the programmer, rather than potentially being affected by other things that may load it (like the Time Zone CDev). It also avoids calls to tiStatus in the default non-Time Tool code paths, and thereby allows them to work under Golden Gate.
The UTC time may be several hours before or after local time, and therefore the UTC time/date may be slightly outside the limits of what can be represented as a local time/date. This is now handled correctly.
This also more generally fixes handling of negative seconds/minutes/hours, which is also applicable to mktime().