I was about to ask about the same thing, but I think I already knew the answer. The pre-processing, rel-linking, conref resolution, and so on would be very hard to maintain on a file-by-file basis.
From my experience with build errors, every file seems to be processed and parsed many times throughout the DITA-OT; one bad file can cause many errors. I can see and understand why, though, given all the Ant stages in the toolkit and the XSL document() calls.
I was also wondering whether processing of large document sets could be sped up by using something like eXist-db for the XSL document() calls between some of the Ant stages. That way XML documents would not have to be reparsed many times (especially the common-text documents that serve as conref sources).
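Something along these lines is what I had in mind, though only as a rough sketch against plain JAXP, not anything the toolkit actually ships (CachingResolver is just a name I made up): a URIResolver that parses each referenced document once and hands the same DOM back to every document() call that resolves to the same URI within one JVM.

    import java.net.URI;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Source;
    import javax.xml.transform.TransformerException;
    import javax.xml.transform.URIResolver;
    import javax.xml.transform.dom.DOMSource;

    import org.w3c.dom.Document;

    /** Parses each referenced document once and serves the same DOM to
     *  every document() call that resolves to the same absolute URI. */
    public class CachingResolver implements URIResolver {
        private final Map<String, Document> cache = new ConcurrentHashMap<>();
        private final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();

        public CachingResolver() {
            dbf.setNamespaceAware(true);
        }

        @Override
        public Source resolve(String href, String base) throws TransformerException {
            try {
                // Resolve href against the base URI of the calling document/stylesheet.
                URI abs = (base == null || base.isEmpty())
                        ? new URI(href)
                        : new URI(base).resolve(href);
                String key = abs.toString();
                Document doc = cache.computeIfAbsent(key, k -> parse(k));
                return new DOMSource(doc, key);
            } catch (Exception e) {
                throw new TransformerException("Could not resolve " + href, e);
            }
        }

        private Document parse(String systemId) {
            try {
                return dbf.newDocumentBuilder().parse(systemId);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }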
How efficiently does the toolkit use large amounts of RAM? It seems like the Ant tasks that process each XML file across the various stages don't lend themselves very well to additional RAM. Is there any benefit to giving the toolkit's Java process 700 MB versus 3 GB? Could the toolkit store the parsed document objects in memory between the Ant target tasks and reuse those cached objects in a later task? I actually tried implementing something like that a while back, writing my own Java XSLT processor class, but then realized it would only be caching the XSL, which is only parsed once anyway.
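To make that concrete, here is roughly what I was picturing when I wrote that processor class, again only a JAXP sketch with made-up names (CachedTransformRunner, plus the resolver above), not how the toolkit's Ant XSLT tasks are actually wired: the stylesheet is compiled once into a Templates object, and the same document cache is shared across every input file. (I realize the heap itself is normally raised with -Xmx, e.g. via ANT_OPTS; my question is whether the toolkit can actually put the extra memory to use.)

    import java.io.File;

    import javax.xml.transform.Templates;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    /** Compiles the stylesheet once and shares one document cache across
     *  all input files, so conref/common-text sources are parsed a single time. */
    public class CachedTransformRunner {
        public static void main(String[] args) throws Exception {
            TransformerFactory tf = TransformerFactory.newInstance();
            CachingResolver resolver = new CachingResolver(); // sketch above
            tf.setURIResolver(resolver);

            // Templates is the compiled, reusable form of the stylesheet.
            Templates templates = tf.newTemplates(new StreamSource(new File(args[0])));

            for (int i = 1; i < args.length; i++) {
                File in = new File(args[i]);
                Transformer t = templates.newTransformer();
                t.setURIResolver(resolver); // route document() through the cache
                t.transform(new StreamSource(in),
                            new StreamResult(new File(in.getPath() + ".out")));
            }
        }
    }

The difference from my earlier attempt is that the cache here holds the parsed source documents rather than the stylesheet, so it's the repeated conref lookups that would get cheaper, not the XSL parsing.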