Marco van de Voort wrote:
> One should create a .s parser that can detect global symbols and break the .s into pieces there, piping the resulting small .s files to the backend assembler,
> so e.g.
> - the compiler pipes gpc.s to our filter program.
> - the filter program splits gpc.s up per global symbol into gpc00.s, gpc01.s, gpc02.s, etc.
> - the filter program pipes each .s separately through as, generating a .o per .s.
> - all resulting .o files are combined into a .a by ar, which is executed by the filter program.
Actually, I've had exactly the same idea -- but no time yet to pursue it further ...
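Just to make the idea concrete, here is a rough sketch in Python of what such a filter might do. None of this exists; the gpc.s/gpc.a names, the gpcNN numbering and the rule of cutting at .globl directives are my own assumptions, and a real filter would also have to deal with section directives, local labels shared between routines, data vs. code, and so on:

#!/usr/bin/env python3
# Hypothetical "filter program" from the scheme above (a sketch only):
# split one .s file at its global symbols, run as on each piece, and
# let ar collect the resulting .o files into one archive.

import subprocess
import sys
from pathlib import Path

def split_at_globals(text):
    # Everything before the first .globl is kept as a prologue that is
    # copied into every piece; each .globl directive starts a new piece.
    prologue, pieces, current = [], [], None
    for line in text.splitlines(keepends=True):
        if line.lstrip().startswith(".globl"):
            if current is not None:
                pieces.append(current)
            current = [line]
        elif current is None:
            prologue.append(line)
        else:
            current.append(line)
    if current is not None:
        pieces.append(current)
    return "".join(prologue), ["".join(p) for p in pieces]

def main(src="gpc.s", archive="gpc.a"):
    prologue, pieces = split_at_globals(Path(src).read_text())
    objects = []
    for i, piece in enumerate(pieces):
        s_name = f"gpc{i:02d}.s"            # gpc00.s, gpc01.s, ...
        o_name = f"gpc{i:02d}.o"
        Path(s_name).write_text(prologue + piece)
        subprocess.run(["as", s_name, "-o", o_name], check=True)
        objects.append(o_name)
    # combine all objects into one archive, as described above for ar
    subprocess.run(["ar", "rcs", archive] + objects, check=True)

if __name__ == "__main__":
    main(*sys.argv[1:])

The interesting part is obviously not the text processing; it is whether as and ar mind being called a few hundred times per unit, and whether pieces that share local labels or data can really be separated that simply.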
> It is DOG slow (which is the reason it is internal in FPC now), but it could be handy for release versions.
I'm not so sure. The splitting program would probably do relatively simple text manipulations, which are quite fast, and the assembler usually takes only a tiny fraction of the compile time, so even multiple invocations shouldn't take too long. (This might vary a little on systems like windoze which, as far as I've heard, have a rather large process spawning overhead.)
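If someone wants to put a number on that, a trivial harness would do. Again just a sketch, not from this thread: it assumes GNU as is installed and measures nothing but raw invocation overhead on a near-empty input.

#!/usr/bin/env python3
# Time many invocations of as on a tiny input, to get a feeling for
# the per-process overhead of assembling lots of small pieces.

import subprocess
import tempfile
import time
from pathlib import Path

RUNS = 200                                   # arbitrary

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "piece.s"
    obj = Path(tmp) / "piece.o"
    src.write_text(".text\n.globl dummy\ndummy:\n")   # stand-in for one split-off symbol
    start = time.perf_counter()
    for _ in range(RUNS):
        subprocess.run(["as", str(src), "-o", str(obj)], check=True)
    elapsed = time.perf_counter() - start
    print(f"{RUNS} runs of as: {elapsed:.2f}s total, "
          f"{1000 * elapsed / RUNS:.1f} ms per invocation")

Running that on windoze and on a Unix machine would show directly how much the process spawning overhead really matters.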
> E.g. a Windows API unit generates 20000 .s files.
Nope. They're just external declarations that don't produce any code of their own. Such a unit would be only one file (in fact, such units currently have very small .o files and large .gpi files, which are not relevant for linking and executable size, of course). Only routines with Pascal implementations (and global variables) would need to be split.
Frank