How to handle a lot of wikis with Node


Tristan Kohl

Feb 10, 2018, 8:19:42 AM
to TiddlyWiki
Hey guys,

thanks to your help in the past I got pretty decent (apart from a few hiccups) at developing my own TW plugins to keep track of a lot of stuff. However, since I prefer to keep things separate and do not want to put my beekeeping notes in the same wiki as my honey wine making logs, I ended up creating quite a few wikis for various (logging) tasks. Now that I am close to 30 instances, all served by TiddlyServer (thanks Arlen), my poor Raspberry Pi 3, which is my 24/7 home server, starts to struggle with all the Node instances - one for every wiki - running at the same time.

So my question is whether there is some way to keep just one Node instance running and serve multiple wikis through it. I know one Node instance could handle this; the CPU use is minimal overall, but all the memory taken up by those processes is quite tough on my limited hardware. Since all the wikis share the same code base on the server and my plugins are executed in the browser, I wondered whether it would be possible to have Node serve one "default" wiki from which one can select which wiki to load.

Node could, for example, create one folder per wiki as the server version already does, and determine which folder to serve via the URL or a configuration, similar to the way NoteSelf handles different databases. NoteSelf was my inspiration here, to be honest, and I think Daniello solved the multi-wiki problem pretty elegantly. However, I was not able to set up CouchDB on my Pi in a way that NoteSelf would connect to it (I am not used to CouchDB).

Is there any way to serve multiple wikis through one Node instance to save resources, especially memory, which is the limiting factor most of the time?

Sorry for the length; I fought hard to explain my problem ;)

Cheers,
Tristan

coda coder

Feb 10, 2018, 2:45:44 PM
to TiddlyWiki

Hi Tristan

Perhaps I'm not understanding you, but...

I use Arlen's TiddlyServer too, and I think I'm using it as you describe:  I have one TS serving multiple "flatfile" TWs and also a NodeTW.

For me, it's all mapped out in the settings.json file in the TS folder.
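For anyone who hasn't seen that file, the mapping looks roughly like the sketch below. The keys, names, and paths here are placeholders I made up; the exact schema depends on your TiddlyServer version, so check its README rather than copying this verbatim.

```json
{
  "tree": {
    "beekeeping": "/home/pi/wikis/beekeeping",
    "honeywine": "/home/pi/wikis/honeywine",
    "notes": "/home/pi/wikis/notes.html"
  },
  "host": "0.0.0.0",
  "port": 8080
}
```

Folders containing a `tiddlywiki.info` get treated as data folders, while `.html` entries are served as single-file wikis.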

Coda

BurningTreeC

Feb 10, 2018, 11:33:16 PM
to TiddlyWiki
Hello Tristan,

I'm using pm2 (http://pm2.keymetrics.io/) to handle multiple node processes; it gives you some of the control you may want.

You can start a process with pm2 start "processname" and stop it the same way, reload it, and more.

With a bit of bash scripting one could make processes stop when another starts, or simply stop all others when you start one.

Right now I'm starting and reloading TiddlyServer that way, but TS then starts the whole group of wikis.

Maybe pm2 could be inserted into the TS starting scripts.
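For reference, a pm2 setup along these lines can live in an ecosystem file. The app name, script path, and memory limit below are placeholders for your own install, not TS defaults:

```javascript
// ecosystem.config.js - hypothetical pm2 config for a TiddlyServer install.
// The script path and memory threshold are examples; adjust for your Pi.
module.exports = {
  apps: [
    {
      name: "tiddlyserver",
      script: "/home/pi/TiddlyServer/server.js",
      // restart the process automatically if it grows past this limit,
      // which caps the slow memory creep on constrained hardware
      max_memory_restart: "300M"
    }
  ]
};
```

Started with `pm2 start ecosystem.config.js` and stopped with `pm2 stop tiddlyserver`.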

BTC

Tristan Kohl

Feb 11, 2018, 2:31:41 AM
to TiddlyWiki
Hi Coda,

I use TiddlyServer as well; my "problem" is that TS seems to start a new Node instance for every wiki once I browse it for the first time. If I do not regularly restart TS to kill all the instances, my memory fills up to the point where my Pi gets unresponsive. I can watch this by running top in an ssh session and browsing one wiki after the other: every wiki adds about 30-40 MB to TS's reserved memory, slowly creeping to 900+ MB. I think this is because there is always a new full TW server running, even though on the server side there is not much to do other than sync my tiddlers back, which together range between 2 MB and 15 MB per wiki. Therefore I would like just one Node instance handling all the wikis at once, determining via URL or setting (like in NoteSelf) where to get and store the tiddlers for each particular wiki.
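Something like this is what I picture for the URL part (the function name and URL scheme are made up by me, not anything TW or TS actually implements):

```javascript
// Hypothetical sketch: one Node process picks the target wiki from the
// first URL segment and routes the rest of the path into that wiki.
function parseWikiRoute(urlPath) {
  const segments = urlPath.split("/").filter(Boolean);
  if (segments.length === 0) {
    return null; // no wiki named: could serve a selection page instead
  }
  return {
    wiki: decodeURIComponent(segments[0]),   // e.g. "beekeeping"
    rest: "/" + segments.slice(1).join("/")  // path inside that wiki
  };
}
```

One process would keep a map from wiki name to data folder and dispatch every request through a function like this instead of spawning a new server.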

Cheers,
Tristan

Tristan Kohl

Feb 11, 2018, 2:41:36 AM
to TiddlyWiki
Hello BTC,

This does not sound too bad and might be an intermediate solution to at least part of the problem. However, it would require constant memory watching and instance starting and stopping; I switch my wikis regularly and access at least 20 of them every day. Killing and starting new processes might prevent my Pi from bogging down, but it adds complexity and overhead (and bash scripting, which I am the worst at ;) ) that would be unnecessary if the Node version served multiple wikis by itself.

Other concerns are connection and speed: if my poor "server" has to start a new process every time I go to a wiki, that adds extra delay. A bigger problem: if a wiki is open in one tab and pm2 is configured to kill all other wikis, I can no longer edit that wiki once another one has been started in the meantime. But how should TS know which wikis are still "up" in the browser? I am quite confident with JS and Python, but I would not know how to solve this reliably, so I shy away from just killing processes automatically. If I restart TS, I know I have to reload my wikis; but if my Pi kills them, I only find out via the XMLHttpRequest error coming up.

Cheers,
Tristan

TonyM

Feb 11, 2018, 7:54:37 AM
to TiddlyWiki
Only a short response, but does starting the multiple wikis in one node instance with nohup xxxx & reduce overheads?

Tony

Tristan Kohl

Feb 11, 2018, 11:46:44 AM
to TiddlyWiki
Unfortunately not; nohup just stops them from receiving the HUP signal when I terminate an ssh session. It does nothing about the overhead of loading a new Node instance running the full TW for every wiki.

Arlen Beiler

Feb 13, 2018, 10:38:03 AM
to TiddlyWiki
I came across this thread this evening. I like hearing that someone else besides me has grappled with this problem - you in using TiddlyServer, and I in developing it.

This is exactly the problem that I predicted would happen, but I did not expect it to show up in real-world use, as I did not really think anyone would have 50 wikis! I should have known to trust my instincts :)

The problem is caused in part by the way TiddlyWiki data folders are set up. Each folder must be in its own TiddlyWiki environment, so every wiki loads the same code files from disk and executes them - all in the same Node process, but still all as separate objects.

I could create a new type of connector that serves the folder directly to the client along with the required plugins and core. When a data folder is opened, it would serve a loader which loads the core and the specified plugins directly into the browser and then syncs changes back to the server.

In my mind this should be a fairly easy route to follow. The loader could be generated on the fly so the page loads fairly quickly and also so it works with the browser cache. I have wanted to implement this in many scenarios and it seems like it is becoming more realistic and necessary.
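As a rough sketch of what I mean by a loader (the URLs and the function are made up for illustration, not actual TiddlyServer code):

```javascript
// Hypothetical on-the-fly loader page: a small HTML shell that pulls the
// core and the wiki's plugins into the browser, where TiddlyWiki boots.
// The /core.js and /plugins/... URLs are invented for this sketch.
function buildLoaderHtml(wikiName, pluginNames) {
  const scripts = ["/core.js"]
    .concat(pluginNames.map((p) => "/plugins/" + encodeURIComponent(p) + ".js"))
    .map((src) => '<script src="' + src + '"></script>')
    .join("\n    ");
  return [
    "<!doctype html>",
    "<html>",
    "  <head><title>" + wikiName + "</title></head>",
    "  <body>",
    "    " + scripts,
    "  </body>",
    "</html>"
  ].join("\n");
}
```

Because the core and plugin URLs are stable, the browser cache would do most of the work after the first visit; the server only generates this small shell per wiki.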

Thoughts, anyone? Does it matter if plugins in a data folder get loaded directly into the browser? It shouldn't, I don't think - especially with the current template for the server command in TiddlyServer, which is $:/core/save/all.

That's my two cents.
Arlen

--
You received this message because you are subscribed to the Google Groups "TiddlyWiki" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tiddlywiki+unsubscribe@googlegroups.com.
To post to this group, send email to tiddl...@googlegroups.com.
Visit this group at https://groups.google.com/group/tiddlywiki.
To view this discussion on the web visit https://groups.google.com/d/msgid/tiddlywiki/8f082316-2de6-4838-b6fc-1907d61a2935%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Tristan Kohl

Feb 14, 2018, 3:24:25 AM
to TiddlyWiki
Hi Arlen,

nice to hear from you again (thanks for fixing my image linking problems a while back over on github :) ).

Since there was no answer that could really help me until you came along, I finally had a good reason to overcome my weaker self and start digging into the TW code for real. I am quite confident in JS but was just too lazy to ever bother before, and I quickly came to the conclusion that the changes I would have to implement are less dramatic than I expected. As the tiddlyweb adaptor already uses the "bag" field but only fills it with "default", changing this behavior so that the field contains the actual wiki name should be an easy task. I just started setting up a dev environment the other day but have not been able to write any code yet.

I think one has to fix the server command as well, as it defaults to the "default" route; it just needs to use the value from the bag field to determine the correct target wiki.
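In code, I picture the server-side routing looking vaguely like this (the /bags/... URL shape is borrowed loosely from the TiddlyWeb convention; the function itself is made up):

```javascript
// Hypothetical sketch of "the bag carries the wiki name": the sync URL's
// bag segment selects the target wiki instead of always being "default".
function resolveBag(urlPath) {
  const match = urlPath.match(/^\/bags\/([^/]+)\/tiddlers\/(.+)$/);
  if (!match) return null;
  return {
    wiki: decodeURIComponent(match[1]),  // e.g. "beekeeping", not "default"
    title: decodeURIComponent(match[2])  // the tiddler being fetched or saved
  };
}
```

The server would then look the wiki name up in its folder map and read or write the tiddler there.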

Other than that, I did not find any problems on my first look. It would be fine for me if the plugins were just forwarded to the browser, as all the real work happens there.

Side note: I have thought about writing a SQLite adaptor sometime, if my setup grows, as I can imagine this would speed things up quite a lot and make backups easier, since all wikis would just be tables within one database. The only problem I have seen so far is the fact that the Node server seems to cache the tiddlers - otherwise multi-user wikis would be a breeze, as Node could be completely agnostic of everything and just serve the tiddlers from the database. But I think this is a topic for a different day.
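Just to sketch the table-per-wiki idea (the table and column names are illustrative only; no actual SQLite driver code here, just the SQL such an adaptor might issue):

```javascript
// Hypothetical table-per-wiki layout for a SQLite-backed adaptor.
// Each wiki gets its own table; a backup is then a single database file.
function tableFor(wiki) {
  // quote and escape the identifier so wiki names survive as table names
  return '"wiki_' + wiki.replace(/"/g, '""') + '"';
}

function createTableSql(wiki) {
  return (
    "CREATE TABLE IF NOT EXISTS " + tableFor(wiki) +
    " (title TEXT PRIMARY KEY, meta TEXT, text TEXT)"
  );
}
```

Saving a tiddler would then be an upsert into the right table, keyed by title.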

If you need any help implementing the changes, I would be willing to lend a hand.

Cheers,
Tristan



Arlen Beiler

Feb 14, 2018, 10:54:19 PM
to TiddlyWiki
You have precisely defined the problem. The crux of the matter really is the cache, because it isn't just a cache: it is the entire $tw environment loaded into Node.js, which then generates the HTML for the browser.

Tristan Kohl

Feb 16, 2018, 3:21:51 AM
to TiddlyWiki
I do not have much time right now, unfortunately, but as I see from the dev tools there is only one initial load of the wiki; afterwards it just periodically polls the tiddler list. Since you know more about the ins and outs of TW: would it be hard to let TW generate this initial view for multiple wikis, determined by the URL, and subsequently serve the tiddlers JSON? As my 99% text-only tiddlers do not eat up much memory, I would be fine if they were all stored within Node. I only need to get rid of the redundant Node instances all loading a full TW environment and snacking away my limited RAM 30-40 MB at a time.