This morning I wrote a small script to profile the full_redraw method. The script is:
import cProfile
# c, p and g are Leo's predefined script globals.
p1 = p.nodeAfterTree()
g.es('starting')
pr = cProfile.Profile()
pr.enable()
c.selectPosition(p1)
# Demote/promote the selected node 20 times to force repeated redraws.
for i in range(20):
    c.executeMinibufferCommand('demote')
    c.executeMinibufferCommand('promote')
pr.disable()
g.es('finished')
pr.dump_stats('/tmp/stat.txt')
pr.print_stats('time')
The node with this script is at the end of my version of LeoPyRef.leo, followed by three nodes at the root level: a, b and c. After fully expanding the outline, select this node and execute the script. The console will show a stats list sorted by the time spent in each function.
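As an aside, the dumped stats file can also be inspected later with the standard pstats module. Here is a minimal, self-contained sketch of that workflow outside Leo; the workload and file path are illustrative, not Leo's:

```python
import cProfile
import os
import pstats
import tempfile

def workload():
    # Stand-in for the real work: repeated path normalization.
    return sum(len(os.path.normpath('a//b')) for _ in range(1000))

stats_path = os.path.join(tempfile.gettempdir(), 'stat.txt')

# Profile the workload and dump the raw stats, as in the script above.
pr = cProfile.Profile()
pr.enable()
workload()
pr.disable()
pr.dump_stats(stats_path)

# Load the dump and print the top entries sorted by internal time.
st = pstats.Stats(stats_path)
st.sort_stats('time').print_stats(5)
```

Sorting by 'time' (internal time per function) is what surfaces the surprising entries discussed below; sorting by 'cumulative' instead shows which callers are responsible.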
I was very surprised to see many functions I didn't expect, such as g.toUnicodeFileEncoding, os.path.normpath, g.os_path_finalize. ...
g.toUnicodeFileEncoding was called 2504395 times during the execution of my script?! os.path.normpath was called 1128328 times.
Paths certainly didn't change during the script execution. There are perhaps 10 (20 at most) different icons in the outline, so I would expect at most 10-20 calls to these functions. How could there be so many calls? It turned out that declutter_node was recomputing the icon list for every node. And for each node, on each redraw, it recalculated all the paths to the themes folder and the icons folder, and checked every time whether the icon files exist.
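To see why millions of redundant normpath calls matter, here is a small, self-contained timing sketch (the path is made up) comparing recomputation against a one-line dict cache:

```python
import os
import timeit

path = 'themes/icons/../icons/box01.png'

# Cost of normalizing the same path on every call.
uncached = timeit.timeit(lambda: os.path.normpath(path), number=1_000_000)

cache = {}
def cached_normpath(p):
    # A dict lookup replaces the string processing on every repeat call.
    if p not in cache:
        cache[p] = os.path.normpath(p)
    return cache[p]

cached = timeit.timeit(lambda: cached_normpath(path), number=1_000_000)
print(f'uncached: {uncached:.2f}s  cached: {cached:.2f}s')
```

The exact numbers depend on the machine, but the cached version does the expensive string work only once.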
The solution was quite simple: a single dict to hold the cached declutter data. The execution time for the script above went from 31.307 s down to 17.116 s.
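The shape of the fix can be sketched as simple memoization; the names here (declutter_data, compute_icons, icons_for_node) are illustrative, not Leo's actual identifiers, and the key/computation are stand-ins:

```python
import os

declutter_data = {}  # cache: headline -> computed icon paths

def compute_icons(headline, icon_dir='/theme/icons'):
    # Stand-in for the expensive part: path joining, normalization,
    # and (in the real code) file-existence checks.
    name = headline.split()[0] if headline else 'default'
    return [os.path.normpath(os.path.join(icon_dir, name + '.png'))]

def icons_for_node(headline):
    # Fast path: reuse the result computed on an earlier redraw.
    try:
        return declutter_data[headline]
    except KeyError:
        result = declutter_data[headline] = compute_icons(headline)
        return result
```

With this pattern the expensive path work runs once per distinct input rather than once per node per redraw, which matches the observed drop from millions of calls to a handful.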
New code is already in the devel branch. Revision f71fc76c45.
Once again it turned out that the first step in any optimization effort must be profiling the code. I would never have guessed that full_redraw made so many calls into path-handling and file-system functions.
Vitalije