Your TWederation! [The official TWederation brainstorm thread]


Mat

unread,
Jun 23, 2016, 1:15:18 PM6/23/16
to TiddlyWiki
In another thread, the topic of TWederation came up, along with the fact that it is all still very fuzzy. What the heck IS it?! A justified question. I've put much thought into various concepts for it, but instead of letting my own fantasy set artificial limits on what it can be, let's have a joint brainstorm!

There are technical limitations, and the one that will be most(?) obvious when we do kick off is that TWederation is based on fetching tiddlers. Unlike with servers, tiddlers cannot be fed/pushed to others; they must instead be actively "leeched".

So... I ask the good people; what does this enable? Why? Why not? What's better than... what? Who? How BAD? In which way does that have to do with beauty? Isn't that impossible? Fetch webpage? Encryption. Context, does that change the meaning? Will be created yesterday. Webmail. Threading. Ads? Parallel threading. Who cares about technical limitations!!! Block inline injection foobar? Stylesheet hangout? Signal! Tikky Widdly.

...you tell us. For fun. That's how ideas are born.

<:-)

RichardWilliamSmith

unread,
Jun 23, 2016, 11:27:02 PM6/23/16
to TiddlyWiki
Hi Mat,

I'm sort of torn on this issue. On the one hand, I want to support the development efforts of other members of the community and I enjoy thinking about technical/design issues too. I can fully understand why you would want to do this.

On the other hand, I think there is probably a better path to pursue towards these goals, but it's currently much (much, much?) more technically abstract. I'm talking about ipfs, which has been recently mentioned by Jeremy also.

I might be totally wrong about this, but my reason for thinking it is that what you're attempting to do is fundamentally hard. Even if you get it to work, I don't think it will ever be secure or scalable. I'm happy to discuss why this might be wrong, though.

I think what ipfs gives us, amongst much else, is the notion of serverless publishing because everything is essentially published out onto one big filing system, distributed across the network. My conjecture is that this basically solves the hard problems of federation. 

It is also my belief that TW may prove to be a uniquely interesting tool in the context of ipfs in general - it may well turn out that we are sitting on a 'killer app' for a platform that is yet to be fully built.

I hope this doesn't sound negative. It may well be that you will learn valuable lessons by trying to get 'twederation' to work over http or I might be altogether wrong and it might be a roaring success, but I hope you'll take the time to look at ipfs if you haven't already. In my opinion, this technology or something similar will be the next evolution of the web.

Regards,
Richard

Josiah

unread,
Jun 24, 2016, 5:17:55 AM6/24/16
to TiddlyWiki
Ciao Mat & RichardWS

One thing that seems to unite you is understanding that we could communicate better without needing any of the current server defined systems.

I absolutely believe that the very fact TW is not server-dependent is its strength. The strong adherence to that model pushes the edge. The challenge being HOW to allow better communication without falling into all the old issues.

TW is currently natively weak on communication. I don't think it inherently has to stay that way. But the task is difficult, in that there are not that many models of what to do, and the resource base for sorting through that is limited.

I took a look at IPFS. I am not a techie, but I got the general idea. It looks like it has promise, though I could find no instantiation.

In trying to find my way about in the TW world of how to connect up & network, I have seen a lot of things that look like kludges, mainly embedded Google bits.

I'm sure YOU are thinking on the right edges.

Josiah

Josiah

unread,
Jun 24, 2016, 5:30:20 AM6/24/16
to TiddlyWiki
Ciao Mat

I didn't mean to make it look, in my last message, like I wasn't thinking about Twederation.

I think it would help me if you could say more about what it ALLOWS, rather than how it works, which is mostly what I've seen. I mean, what CONTENT does it allow? Is it threaded? What are the main purposes?

Best wishes
Josiah

Jed Carty

unread,
Jun 24, 2016, 6:51:27 AM6/24/16
to TiddlyWiki
I may regret this but since people seem to have a lot of misconceptions about what twederation is and what currently exists I am going to try to explain.

- this doesn't use a client-server architecture. This is important. There are no special nodes so every wiki is treated the same.
- everything functions by fetching things from other wikis. There is no way to push anything to another wiki. There is no way to control anything on another wiki. The owner of a wiki has complete control of what happens on their wiki.
- this doesn't use a client-server architecture. Given the questions I have been asked this deserves to be said a few hundred more times.
- the wiki you wish to fetch things from has to allow you to do it. This means that they have to have the plugin as well in order for the fetching to work. If you want to get something from a wiki that doesn't have the plugin you have to go to the wiki and import by drag and drop like you normally would.
- all of this currently works in almost every use case (yes, it can work in dropbox); I will talk about the problems below.
- you only talk to one other wiki at a time.  There is no server so you have to make connections to each wiki individually.
- you can't push anything to another wiki, there is no server and your wiki doesn't have write permissions on any other wiki
- you don't need to have your wiki hosted online to fetch from a wiki that is hosted online, so you can pull content from an online wiki onto a wiki stored locally
- I have not had any trouble using multiple local wikis stored on my harddrive, I have been able to pull content from one file uri to another file uri

At its base there is a widget that allows one wiki to fetch tiddlers from another wiki based on wikitext filters; it does require some changes to the core. This widget is an extension of how the plugin library works, and it is set up so that you don't have to build a specific edition to serve plugins from a plugin library: it could be a normal wiki and would work just fine.
Everything else is wikitext applications built on top of that.

Mat and I are working on using this to create a loosely connected blogging/social network. It is currently unclear if 'TWederation' refers only to the network of connected wikis or also to the enabling techniques.

For the twederation edition that we have been showing off, everything works fine as long as you don't try to fetch things from an http server when you have loaded your wiki from an https server (see below about hosting on dropbox). The 'fetch all comments' buttons on blog posts work inconsistently because they try to fetch from multiple wikis at the same time and there are collisions when receiving responses, so not everything is correctly received. I have a solution to this that I need to implement. But, aside from that problem, the edition works as expected. This doesn't mean that it is easy to use, or that there is enough documentation, or that my interface is usable, or anything else; just that the development is to the point where we are working on the application instead of the enabling technology.

Now, there have been a lot of questions about http-vs-https, and most of them have missed what the problem is. The problem is that when you load a site from an https server, it is normally prevented from loading content from a non-https server. That means if you open a wiki on an https server and try to fetch content from a wiki on an http server, it won't work. That is it. It is a big problem in some cases, but that is the extent of the problem. If you are on an http server you can fetch things from an https server without any trouble. If you are on an https server you can fetch things from another https server without any trouble. The biggest place the http-vs-https problem arises is hosting on dropbox. This isn't as big a problem as I thought at first.
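In short, the rule can be sketched as a one-line check (a hypothetical helper for illustration, not part of the plugin):

```javascript
// Browsers block "active mixed content": a page loaded over https may
// not fetch from an http origin. Every other combination is allowed.
function isBlockedAsMixedContent(pageProtocol, targetProtocol) {
    return pageProtocol === "https:" && targetProtocol === "http:";
}

// The four cases described above:
console.log(isBlockedAsMixedContent("https:", "http:"));  // true  - the one failing case
console.log(isBlockedAsMixedContent("http:", "https:"));  // false
console.log(isBlockedAsMixedContent("https:", "https:")); // false
console.log(isBlockedAsMixedContent("http:", "http:"));   // false
```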

http-vs-https on dropbox:

If you are hosting your wiki on dropbox, then you have access to the file. You can't save your wiki when you open it from the dropbox url anyway, and other people can access your wiki regardless of what type of server they are using. So if you are using dropbox, open your wiki locally as a file, and other people will use the dropbox url to access it.

I am sure I have missed a lot, but that is a brief explanation of what is going on.

Richard,

For security it is as secure as anything else tiddlywiki does online. Which is to say it isn't really. There are some things we can do to work on this but it isn't really my concern at the moment.
As far as scalability goes, I never envisioned this as something with thousands of users in one network, or even hundreds really. TWederation isn't meant to be facebook; the point is that you aren't connected to anyone who you don't want to be connected to.

While I am excited about ipfs and agree that it or something like it should be (and hopefully will be) how the web functions in the future, the reasons for using ipfs and twederation are completely different. You may even be able to have your wiki on ipfs and connect to other wikis using what we are making for twederation. If things do go well and people start using ipfs as an http replacement, then the backend will change for how twederation works, but the connections between wikis will function in the same way.

I am not convinced that ipfs is going to be any more 'serverless publishing' than the web is now. It is just distributed servers instead of central servers. Talking about distributed systems in terms of client-server architecture causes a lot of problems in nomenclature, but it comes down to 'there is no cloud, only other people's computers'. Ipfs has huge benefits over the current systems, but it doesn't in any way solve what we are working on with twederation, it just gives a potential alternate vehicle for a solution.

Josiah,

Using twederation you could make multi-user tiddlywikis where each user has their own wiki and pulls any updated content from the other users. This is what the blogging/messages in the twederation edition are. At its base it is just a way to simplify sharing content between wikis.

The blogging network we can create using the twederation edition that currently exists is the easiest thing to point to right now as far as what it allows.

Another example of what I want to do with it is to work more on the interactive fiction engine I made and actually publish content for it. Then people could use twederation to fetch new content from my site into their wikis. In an ideal world I would like to create this and then have collaborative world building, where a group of people could take the tool, each create their own part of the setting, and pull in and modify things other people have created.

You could have a community calendar that pulls content from different wikis created by each member of a community.

I already use it to pull content between different wikis I have on my own computer. That was the original reason I started working on it.

Dragon Cotterill

unread,
Jun 24, 2016, 7:55:40 AM6/24/16
to TiddlyWiki
- this doesn't use a client-server architecture. Given the questions I have been asked this deserves to be said a few hundred more times.

This to me sounds more like the architecture the old Groove Networks database system was based on. I feel that you really need to take a look at the Groove architecture, as it does have possible implications that parallel what you are trying to achieve here.

I kind of achieved something similar using TWC, when I created "worker" wikis. Each worker had their own TWC installation, and I used the SyncFileTiddlerPlugin ( http://tiddlywiki.abego-software.de/#SyncFileTiddlerPlugin ) to export a single tiddler which carried the actual work. This was replicated around via DropBox into the central wiki (mine), where I could see the changes and re-allocate the work out to the workers. It wasn't a true cross-installation setup, as each worker only saw their own stuff.

In fact you could exactly mimic such a setup with the TWC SharedTiddlersPlugin ( http://yakovl.bplaced.net/TW/STP/STP.html#SharedTiddlersPluginInfo ) simply by "including" the shared wikis of the other users in the network.

But what you're trying to achieve goes far beyond this kind of system, and I wish you well in its implementation.

RichardWilliamSmith

unread,
Jun 24, 2016, 9:21:27 AM6/24/16
to TiddlyWiki
Hi Jed,

Can you help me to better understand the mechanism? When you pull from another wiki, are you pulling the whole HTML of that file and then filtering it to take what you want? Or how do you make it yield only the tiddlers you want? If I'm fetching from you, where does the plugin in your wiki run? On my machine after I've fetched your whole file?

Regards,
Richard

Jed Carty

unread,
Jun 24, 2016, 9:43:48 AM6/24/16
to TiddlyWiki
Yes, you load the wiki you want to fetch from in a hidden iframe, and then the plugin in that wiki creates a json object with the tiddlers returned by the filter you send, and then uses postMessage to send the information to your wiki. All of the processing is done locally in your browser.
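A rough sketch of the responding wiki's side of that exchange, with a plain predicate standing in for a real wikitext filter (the function names and message fields here are illustrative, not the plugin's actual API):

```javascript
// Sketch: the wiki loaded in the hidden iframe selects the tiddlers
// the requester's filter matches and bundles them as one JSON payload.
function buildResponse(store, matches) {
    var selected = {};
    Object.keys(store).forEach(function (title) {
        if (matches(store[title])) {
            selected[title] = store[title];
        }
    });
    return JSON.stringify({type: "twederation-response", tiddlers: selected});
}

// Example tiddler store in the fetched wiki:
var store = {
    "Post 1": {title: "Post 1", tags: "blog", text: "Hello"},
    "ReadMe": {title: "ReadMe", tags: "", text: "..."}
};

var payload = buildResponse(store, function (t) { return t.tags === "blog"; });
// In the browser, the iframe'd wiki would now hand the payload back:
//   window.parent.postMessage(payload, "*");
// and the fetching wiki's message handler would JSON.parse it.
```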

Mark S.

unread,
Jun 24, 2016, 1:29:21 PM6/24/16
to TiddlyWiki
I thought that SyncFileTiddler was a great feature available to TWC, and would like to see it available to TW5. It allowed you to "connect" another, possibly larger, TW file temporarily. So, for instance, if you were doing a report on Shakespeare, you might pull in a TW with his works and run searches. You wouldn't corrupt the Shakespeare data by accident, and you wouldn't have to hop back and forth between tabs.  When you were done, you could disconnect. This allowed you to keep your main TW's size low, while leveraging other data sets.

Maybe something like Jed's system could be used for TW5, but currently it actually brings in everything that is imported. If the plugin marked everything that was brought in (tagged it), then I suppose you could delete it in a batch process later. Still not as convenient as SFT.

Mark

Jed Carty

unread,
Jun 24, 2016, 3:51:15 PM6/24/16
to TiddlyWiki
What do you mean 'currently it actually brings in everything that is imported'? I am not sure how else you would import something. Do you mean temporary tiddlers that the wiki can interact with but aren't actually saved? Because that could be done pretty simply by making the widget use some field to indicate that something was imported and then have the saver ignore those tiddlers. It would just be a matter of creating a new import type on the twederation side and editing the saver filter or whatever it is called in your wiki.

Mark S.

unread,
Jun 24, 2016, 6:41:36 PM6/24/16
to TiddlyWiki
Maybe that's how the SFT did it. With the SFT, you could make the content of another wiki temporarily available just as if it were part of your TW (except you couldn't write to the tiddlers). Then you could disconnect when you were done and your original TW would be back the way it was.

Is the saver filter something that's easily available?

Thanks,
Mark

Jeremy Ruston

unread,
Jun 25, 2016, 7:10:20 AM6/25/16
to tiddl...@googlegroups.com
Hi Everyone

Just to confirm what has already been said:

* The SFT plugin relied on the way that older browsers were more relaxed about allowing xmlhttprequest to be used to extract content from remote URIs
* The spark for the current TWederation work was the discovery that we could get around that problem by loading the remote wiki in an iframe and then using window.postMessage() to communicate between them. There are many limitations to the technique, as Jed has outlined, but it is entirely HTML5 compliant, and therefore as future proof as anything can be
* TWederation is thus independent of IPFS: IPFS gives us distributed hosting but still looks like a server to the browser. As far as I can tell, we would need the same iframe/postMessage technique to allow one IPFS hosted wiki to load content from another. If xmlhttprequest were to work in that situation then we could use it instead of the iframe/postMessage technique, but the remainder of the TWederation approach and protocols would still be relevant
* Mark’s suggestion of making the imported tiddlers be volatile (so that they don’t get saved) could easily be done if TWederation were to package imported tiddlers in a dynamically created plugin in the $:/temp namespace; the imported tiddlers would then appear as shadow tiddlers
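The saver-filter half of that suggestion can be sketched like this (a hypothetical helper; the real mechanism would hook into TiddlyWiki's own save filter):

```javascript
// Sketch: before saving, drop anything imported into the $:/temp
// namespace, so fetched tiddlers never reach the saved file.
function titlesToSave(allTitles) {
    return allTitles.filter(function (title) {
        return title.indexOf("$:/temp/") !== 0;
    });
}

var titles = ["MyNote", "$:/temp/twederation/imported-bundle", "AnotherNote"];
console.log(titlesToSave(titles)); // ["MyNote", "AnotherNote"]
```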

Best wishes

Jeremy



RichardWilliamSmith

unread,
Jun 25, 2016, 8:25:28 AM6/25/16
to TiddlyWiki
Hi Jeremy,

I've got lots to learn about ipfs still, and the implementation(s) of it are evolving quickly. The web-gateways look just like servers, as you say, with distributed storage, but if we use ipfs directly, shouldn't we be able to treat each other's files as if they were on the same file-system? I'm anyway imagining more value in having individual tiddlers stored as separate objects, with each wiki acting as a manifest to pull them off the network, like in the current node configuration - then sharing content and pub/sub seems quite straightforward. In the meantime I am interested in seeing if I could get orbit.db to work in the same way that other people have used couch/pouch.

With regards to twederation as currently implemented, is it possible to sandbox the incoming content so that it can't run code, for example? Otherwise it's always going to be vulnerable to injection attacks, no? There are presumably some safeguards we could implement, such as not allowing imported content to carry system tags.

Also, I really don't know much about them, but might it also be possible to use a web-worker to load the remote wiki, instead of an iframe within the page?

Regards,
Richard

Jed Carty

unread,
Jun 25, 2016, 8:37:37 AM6/25/16
to TiddlyWiki
Also, I really don't know much about them, but might it also be possible to use a web-worker to load the remote wiki, instead of an iframe within the page?

The web workers need to load the wiki somehow, which means in an iframe. The safety things come from the sandboxing done by the iframe. Also the imported content is bundled together and isn't active until the user extracts the specific tiddlers they want.

I have been experimenting with workers and if I can get around some things about how https pages load web worker code they will make things nicer in terms of performance but browsers don't like creating workers from data uris so they aren't used yet.

Feel free to look at the code on github to see how it is currently set up.

Jed Carty

unread,
Jun 25, 2016, 8:50:43 AM6/25/16
to TiddlyWiki
And as for using ipfs for a distributed node-like implementation online, that would be awesome. There would unfortunately be problems with read-write privileges and the like. I imagine it would be easiest to make something that could dynamically build a wiki from a set of distributed tid files on page load, but you could do the same thing using http. Fundamentally http(s) is also a way to access a distributed file system so that aspect of it doesn't necessarily allow any novel applications.

RichardWilliamSmith

unread,
Jun 25, 2016, 10:17:18 PM6/25/16
to TiddlyWiki
And as for using ipfs for a distributed node-like implementation online, that would be awesome. There would unfortunately be problems with read-write privileges and the like. I imagine it would be easiest to make something that could dynamically build a wiki from a set of distributed tid files on page load, but you could do the same thing using http. Fundamentally http(s) is also a way to access a distributed file system so that aspect of it doesn't necessarily allow any novel applications.

The novel applications come (imo) from the fact that ipfs is content-addressable - you ask for tiddlers by their hash, not a location. It also has signed namespaces that you can use as a pub location for pub/sub and built-in versioning. As far as I understand it, the best way to explain ipfs is something like "bit-torrent, with git on top, with the web on top of that" and there are no servers, only peers in a swarm. If you publish something, then I view it and then you go offline, other people can still get what you published from me, and yet there's no way I can tamper with it. There are also proposals for an incentivisation layer (filecoin) that would let you pay a small amount to guarantee your content to remain hosted, though in practise anything of interest to more than a few people will remain live anyway. All of this, of course, is far from ready for widespread adoption and, as I say, I have a lot to learn about it all.

RR

Josiah

unread,
Jun 27, 2016, 6:00:35 AM6/27/16
to TiddlyWiki
Ciao RichardWS

IPFS interests me as part-and-parcel of the broad issue of TW deliberately not being server dependent. I think TW has suffered, not because of any of its limitations, but because the dominance of server-dependent models in software design for networking & communications, and poor protocols, have made it much harder to make progress.

So ANY initiatives that show how non-server-dominated approaches can help TW interact better are worthy of full examination IMO. What is doable is, I think, a very OPEN question.

Best wishes
Josiah

Mark S.

unread,
Jun 27, 2016, 12:57:45 PM6/27/16
to TiddlyWiki
Hi Richard,

Since you seem to be ahead on the learning curve, these are the questions that occur to me ...

How do you know what hash to ask for, and what happens when the original content (aka "site") changes? How many different versions of the hashed site would you retain? If "pulling" members of the ipfs don't retain early versions of the content, then how is it significantly different from what we have now, where content may be replaced at any time?

Lots of sites these days are database-driven, which means there's really no "site" to be saved, captured, and shared. Each individual has a unique experience (think Amazon). Can the problem ipfs was supposed to fix (e.g. "We're losing the internet") even be fixed in a world in which static sites are rapidly disappearing?

With twederation, does the hash imply that each tiddler should get its own unique ID (something that I think is going to be necessary somehow to avoid title crashes)?

Thanks,
Mark

RichardWilliamSmith

unread,
Jun 27, 2016, 8:44:33 PM6/27/16
to TiddlyWiki
Hi Mark,

I'm not very much further along the curve than anyone else. The question of how to search ipfs is one that I have wondered about myself and one which I intend to ask of their community soon - I imagine it will have to behave something like torrent trackers currently do, or otherwise 'spider' the ipfs network in some new way.

The hash for static resources changes whenever the content changes, of course, so there is at least the possibility to keep all previous versions - just as we currently do with git objects (the tw repo, for example, contains all of Jeremy's commits from the very beginning). Individual nodes will retain whatever they find useful, but it's true that there is no cast-iron guarantee of permanence, just as there is currently no guarantee that a given torrent file will be available, even if there is a tracker for it.

Sites which regularly change will be published to a namespace where eg; <hash>/index.html will always point to the current version, but it should still be possible for me to send you a link to a particular site/page/resource and be sure that you will see exactly what I see, even if the page changes. If you bookmark a site and look for it later, you don't rely on it still being hosted in the same place, only on it still existing somewhere on the network (so, for example, archive.org will no longer need to maintain a 'mirror' of original pages, but will be able to collect and store the original assets themselves, and reloading the page in 10 years from archive.org will be exactly the same experience as accessing the original pages).

Databases and db backed sites are another issue, but part of the same 'problem' with the current architecture which is all hub-and-spoke, server-client and imho we will soon start to move en masse from databases to blockchains and things built on them. If you are interested in this then I recommend the work of Vinay Gupta, who does a good job of boiling it down for non-technical audiences (eg; https://vimeo.com/153600491)
 
With twederation, does the hash imply that each tiddler should get it's own unique ID (something that I think is going to be necessary somehow to avoid title crashes).

If you were storing the tiddlers as separate files, each one would get a hash unique to its total contents, and it would change on each save; but whether and how the internals of tw would see/calculate the hash, I don't know. This gives tiddlers unique IDs, provided they differ in at least some respect, but it doesn't work very well as a handle for the content, because it's constantly changing.

Anyway, as you can tell, I find all this stuff very interesting. So interesting that I have allowed myself to drift very far from the intent of Mat's thread. Apologies. I will report back to the community about ipfs if and when I have something more concrete to demonstrate, but I maintain the belief that tiddlywiki is even more interesting in the context of ipfs than it is over http.

Sorry for stealing your thread, Mat, I will also find time to try out the current twederation implementation - your excitement is very infectious.

Regards,
Richard