This sounds great!
Does it mean that you need some files to benchmark?
(In this case you can take http://pub.mojca.org/gnuplot/sample/ and
compile the document test-gnuplot.tex with "\input e". I would be
really curious how much time it takes, because I didn't manage to
compile it here: "out-of-memory".)
I would be pretty pessimistic otherwise, since it now takes 6 minutes
to compile 10 graphs (most often TeX runs out of memory and then the
scripts keep running for another 3 minutes just to figure out that
the job can't be completed).
I don't know where the bottleneck lies, but performance would most
probably improve considerably if I separated text from graphics: for
example, I could write the metapost stuff into one file that would be
compiled into PDF once and for all (until something changes) and
place the text and points in a kind of "overlay", or however it's
called. That would also be more fool-proof: no problems with
expansion, and macros defined in the document could be used for
labels, and so on. Perhaps I could even use the PostScript driver as
an alternative to metapost for doing everything except text.
Being in a slightly desperate mood, this sounds to me like the NTS
project (citing what I heard at BachoTeX, I think): "we had great
fun, we learned a lot, but it is simply too slow to be considered a
serious TeX processor".
In the worst case I've written a great "debugging module" for textext ;)
But seriously: I'll continue experimenting and, once the exams are
over, will try to take the work from "something that works and is
useful" to "something that works decently enough to be released",
hopefully in the first half of July. (Someone could say that if I
posted less to the ConTeXt/XeTeX lists, I would have completed the
work already ;)
In any case I will need some help in writing macros for cleaner
inclusion of those graphics (to replace \startMPcode with something
like \startGNUPLOTgraphic[inline][number of graphic] that can be
recalled later in \placefigure[number of graphic], and for many other
things).
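For illustration, such an interface might look something like this (a rough sketch: \startGNUPLOTgraphic and \GNUPLOTgraphic are just the proposed names from above and do not exist in any module yet; \placefigure is standard ConTeXt):

```tex
% hypothetical wrapper macros, not part of any existing module
\startGNUPLOTgraphic[inline][energy] % [inline] and the name are part of the proposal
  set title "Energy" ; plot sin(x)
\stopGNUPLOTgraphic

% ... later in the document, recall the already-compiled graphic:
\placefigure[here][fig:energy]{Energy over time}{\GNUPLOTgraphic[energy]}
```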
The most efficient way would probably be to call gnuplot as few times
as possible and to compile as much as possible in a single metapost
run.
Mojca
Mojca Miklavec wrote:
> On 6/6/06, Hans Hagen wrote:
>
>> I even got textext working that way, which means that in the end
> I would be pretty pessimistic otherwise since it now takes 6 minutes
> to compile 10 graphs (most often TeX runs out of memory and then
> scripts keep running for another 3 minutes just to figure out that it
> can't complete the job).
Would it be possible to port the new textext code to the non-lua
metapost? That would greatly improve the efficiency of label
processing, I think.
Taco
Hans
-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | fax: 038 477 53 74 | www.pragma-ade.com
| www.pragma-pod.nl
-----------------------------------------------------------------
the next release of texmfstart will provide --ifchanges; now, if you use
buffers instead of the current temp files, you can use that feature to
prevent redundant gnuplot runs (i made it for the m-r module)
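A sketch of what the buffer-based approach could look like (\startbuffer ... \stopbuffer is standard ConTeXt; the gnuplot-processing macro here is hypothetical):

```tex
% store the plot commands in a named buffer instead of a temp file
\startbuffer[gp-sample]
set title "sample" ; plot sin(x)
\stopbuffer

% a hypothetical macro that writes the buffer to disk and invokes
% gnuplot via texmfstart, so gnuplot only runs when the buffer changed
\processGNUPLOTbuffer[gp-sample]
```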
I am talking about the textext interception code only (that trick
with the faked box). These gnuplot runs are mostly slow because
each plot has at least a dozen labels. The labels all use
textext() (of course), so at every run mpost has to start a
child texexec for the labels, generating a few dozen pages
each time.
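For reference, this is the kind of MetaFun code involved: every textext() call below is intercepted and handed to a child TeX run for typesetting (a minimal sketch, not taken from the actual gnuplot module):

```tex
\startMPcode
  draw fullcircle scaled 3cm ;
  % each label goes through the textext interception code,
  % which typesets it in a separate texexec run
  label(textext("$\sin x$"), origin) ;
  label(textext("maximum"), (0, 1.5cm)) ;
\stopMPcode
```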
Cheers, Taco
--
Not crying, just trying to improve things. I saw that --ifchanges
option, but I thought it was already there before.
Is there also some \appendtobuffer functionality hidden somewhere in ConTeXt?
> the next release of texmfstart will provide --ifchanges; now, if you use
> buffers instead of the current temp files, you can use that feature to
> prevent redundant gnuplot runs (i made for the m-r module)
The module is actually really nice (and clean code)!
You know, that's not fair: people spend days making some piece of
code work and then you completely rewrite it in half an hour to
make it 4 times shorter and ten times faster ;)
Mojca
also, i got the impression that i could bring m-gnuplot back to at most 4
lines of code, but 512 lines of code is more impressive -)
interesting is that installing R involves some 50 megs, which is kind of
funny because the tex live discussions about latex packaging currently
involve minimizing package sizes (splitting into smaller pieces); in that
respect a dependency on context [supp-pdf] was considered too much
overhead; in the meantime we seem to ignore the fact that other
components (like gnuplot, R) are becoming way bigger than anything tex -)
And I respect you more, of course ;). I can somehow decipher the code
from R, but if I wanted to decipher what the m-database module does (or
better: to write a similar one now that I have it) ... then I still
don't believe that you don't have 20 coding monkeys hidden somewhere
in your drawers ;)
> interesting is that installing R involves some 50 meg which is kind of
> funny because at tex live discussions about latex packaging currently
> involve minimizing package sizes (split into smaller pieces); in that
> respect a dependency on context [supp-pdf] was considered too much
> overhead; in the meantime we seem to ignore the fact that other
> components (like gnuplot, r) are becoming way bigger than anything tex -)
Gnuplot is not growing. Didn't you notice that development is
not-that-far-from-dead? Well, not really, but when it comes to
features that are "in" or "hot" now (say, anything beyond the really
basic 3D support, or just drawing pies), it might take years to
implement them. It's not that bad, there are some improvements, but
it certainly won't reach 50 megs in the near future unless I submit
the .tex files resulting from running the demos in their
repository ;)
Mojca