- Up to now the web browser used several fixed-size arrays to hold the various types of attribute data of the web page. This turned out to be far too inflexible for any non-trivial web page. Therefore all attribute data is now stored back-to-back in a single buffer, in the order it arrives from the parser, occupying only the memory actually needed (see the sketch after this list). This accommodates pages with many links with rather short URLs just as well as pages with a few links with long URLs, and pages with several simple forms just as well as pages with a single form with many form inputs.
- Using the actual web page buffer to hold the text buffers of text entry fields was in general a cool idea, but in reality it is often necessary to enter text longer than the visible size of the text entry field. Therefore the text buffer is now stored in the new unified attribute data buffer as well.
- Splitting the process of canonicalizing a link URL from actually navigating to the resulting URL made it possible to get rid of the 'tmpurl' buffer used during form submit. Now the form action is canonicalized like a usual link, then the form input name/value pairs are written right into the 'url' buffer, and afterwards the navigation is triggered.
- Support for the 'render states' was removed completely. The only render state actually supported was centered output. The new unified attribute buffer would have complicated enumerating all widgets added to the page in order to adjust their positions. Therefore I decided to drop the whole feature, as the <center> tag is barely used anymore and newer centering attributes are too hard to parse.
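
A minimal sketch of the idea behind the unified attribute data buffer; all names and sizes here are made up for illustration and don't claim to match the actual webbrowser sources:

    /* One buffer holds the attribute data of all widgets, packed back-to-back. */
    #include <string.h>

    #define ATTR_BUF_SIZE 2048

    static char attr_buf[ATTR_BUF_SIZE];
    static unsigned short attr_buf_used;

    /* Append 'len' bytes of attribute data (a link URL, a form input name/value,
       a text entry buffer, ...) and return a pointer to the stored copy, or NULL
       if the buffer is exhausted. A widget only keeps the returned pointer, so
       memory is consumed strictly in proportion to the data actually present. */
    static char *
    add_attr(const char *data, unsigned short len)
    {
      char *ptr;

      if(attr_buf_used + len > ATTR_BUF_SIZE) {
        return NULL;
      }
      ptr = attr_buf + attr_buf_used;
      memcpy(ptr, data, len);
      attr_buf_used += len;
      return ptr;
    }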
In general it seems a bad idea to have two http-strings.c files, as this precludes having both of them in the Contiki library. However, as it stands, it seems most reasonable to have one http-strings.c file be a clean superset of all use cases, in order to allow them to run together in a single binary. As webserver/http-strings.c already contained strings not present in webbrowser/http-strings.c, it seems reasonable to consider webserver/http-strings.c the superset described above. From that perspective it is appropriate to remove all strings from webbrowser/http-strings.c which are not used by the web browser, in order to save memory that would otherwise be wasted.
Both the source code and the cc65 compiler have changed. So it made sense to review which object files are to be compiled for placement in the Language Card.
On the C128 the custom PFS code doesn't add functionality (as it does with IDE64 support on the C64) but is "only" smaller than the POSIX file i/o code in the C library. But the stdio code in the C library (used in WGET for screen i/o) relies on the POSIX file i/o code anyway, so there is no point in additionally adding the PFS code to the WGET program.
Modern compilers (especially GCC) ignore the register keyword anyway, and the latest cc65 snapshot actually generates larger code with the register keyword at the locations in question.
This reverts commit 029bc0ee27, reversing
changes made to a7b3e99644.
This uses LGPL libopencm3. While the patch doesn't include the code,
the resulting binary would force the release of all code as LGPL.
This magic comes from the `--gc-sections` linker flag, which turns on garbage collection of unused input sections. The compiler flags `-ffunction-sections` and `-fdata-sections` make sure that each function and each static data definition gets its own section. The result is that the linker can prune away all unused functions and data, reducing the size of the resulting executable.
These optimizations may be disabled by setting the Makefile variable
`SMALL` to zero.
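
For reference, a sketch of how these flags typically end up in a GNU makefile; apart from `SMALL`, which is mentioned above, the variable wiring is illustrative only and not a copy of the actual build rules:

    # Illustration only - the real Makefiles differ.
    SMALL ?= 1

    ifneq ($(SMALL),0)
    CFLAGS  += -ffunction-sections -fdata-sections
    LDFLAGS += -Wl,--gc-sections
    endif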
The tag <div> (in contrast to the tag <span>) is normally used to denote content placed on a line of its own. So it makes sense to trigger a newline when </div> is processed.
The "normal" web is moving forward quickly reducing the interoperability of the Contiki web browser to nearly zero. The Mobile Web fits the capabilities of the Contiki web browser much better. Modern smartphones don't need the Mobile Web anymore but there are large areas in world with rather low end mobile phones and limited mobile bandwidth where the Mobile Web will be necessary for quite some time.
From that perspective it is reasonable to increase the Contiki web browser's interoperability with the Mobile Web - namely WAP 2.0 aka XHTML MP. XHTML MP is delivered with the MIME types 'application/vnd.wap.xhtml+xml' or 'application/xhtml+xml'. Therefore we (try to) parse the document if the MIME type contains the substring 'html' (which is true for 'text/html' too).
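
A minimal sketch of this kind of MIME type check; the helper name is made up and not the actual webbrowser code:

    #include <string.h>

    /* Parse the body only if the Content-Type looks like some HTML variant:
       'text/html', 'application/xhtml+xml', 'application/vnd.wap.xhtml+xml', ... */
    static unsigned char
    mimetype_is_html(const char *mimetype)
    {
      return strstr(mimetype, "html") != NULL;
    }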
On the C128 the custom PFS code doesn't add functionality (as it does with IDE64 support on the C64) but is "only" smaller than the POSIX file i/o code in the C library. But the POSIX directory access code in the C library relies on the POSIX file i/o code anyway, so there is no point in additionally adding the PFS code to the FTP program.
Relevant cc65 changes...
General:
- The compiler generates "extended" dependency info (like gcc) so there's no need for postprocessing whatsoever :-)
- The linker is very pernickety regarding the ordering of cmdline options so a custom linker rule is necessary :-(
Apple2:
- The various memory usage scenarios aren't specified anymore via separate linker configs but via defines overriding default values in the builtin linker config.
Atari:
- The builtin linker config allows overriding the start address, so there's no more need for a custom linker config.
- The C library comes with POSIX directory access. So there's no more need for a custom implementation.
CBM:
- The C library comes with POSIX directory access. So there's no more need for a custom implementation.
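
To give an idea of what this means for the build files, a hedged makefile sketch; the option spellings follow the cc65 documentation but should be double-checked against the cc65 version in use, and the concrete symbol values are invented:

    # Illustration only, not the actual Contiki build rules.

    # cc65 now emits gcc-style dependency files itself, no postprocessing needed:
    CFLAGS += --create-dep $(@:.o=.d)

    # Apple2: tune the builtin linker config via defines instead of keeping
    # separate config files (symbol name as in cc65's apple2.cfg):
    LDFLAGS += -D __HIMEM__=38400

    # Atari: set the start address right on the linker command line:
    LDFLAGS += --start-addr 8192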
Either I found and fixed a severe bug in PSOCK_READTO() or I misunderstood something completely. To me PSOCK_READTO() is supposed to return if either the supplied character was read or the user-supplied buffer is exhausted - so far so good.
However, if the latter occurred, up to now PSOCK_READTO() continued to process characters already read from the network (i.e. present in the uIP buffer) in order to check whether the supplied character was found there and to adjust the return value accordingly. But this means that the characters processed this way were lost forever for the caller, as the next call to PSOCK_READTO() would continue reading past them.
Therefore I removed that character processing altogether. Now, if the user-supplied buffer is exhausted before the supplied character is found, the next call to PSOCK_READTO() starts exactly where the previous call left off.
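
For context, a minimal sketch of typical PSOCK_READTO() usage; process_line() is just a made-up placeholder:

    #include "contiki-net.h"

    static struct psock ps;
    static uint8_t buffer[40];

    /* Hypothetical line handler, only here to make the sketch complete. */
    static void
    process_line(const uint8_t *line, unsigned short len)
    {
      /* ... application specific ... */
      (void)line; (void)len;
    }

    static
    PT_THREAD(handle_connection(void))
    {
      PSOCK_BEGIN(&ps);

      while(1) {
        /* Returns once a '\n' was read or 'buffer' is full. With the change
           described above an over-long line arrives in several chunks;
           previously the rest of such a line was silently dropped. */
        PSOCK_READTO(&ps, '\n');
        process_line(buffer, PSOCK_DATALEN(&ps));
      }

      PSOCK_END(&ps);
    }

    /* Elsewhere, when the connection is established:
       PSOCK_INIT(&ps, buffer, sizeof(buffer)); */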