Hello all.
This topic has been floating around my world for many years (maybe 10?), but perhaps now the time is ripe to do something with it!
Among other things, Scrapbook archives to a local drive all of the page requisites needed to re-present a Web page. From the currently visible page, Scrapbook will collect and store every page requisite, rewrite the HTML as needed to point at the local copies, and provide a durable link to the archived page in its full HTML complexity. Preferences control how media downloads are handled. (Preferences can also be set to follow links one or more layers out, so that Scrapbook can act as a small-scale web crawler, but that's a different project...)
For TW users, one use case might be this: given a current timestamp of 20160404203343 (i.e. YYYYMMDDHHMMSS) and a scrapbook directory set to scrapbook/, the just-scrapbooked page will be displayed at http://../scrapbook/20160404203343/index.html --- and with a small amount of tweaking we can imagine a macro that returns a durable link to a scrapbooked page: <<scrapbook 20160404203343>>. (I built something like this in TWC a few years back: http://bit.do/tiddlwiki-classic-scrapbook-web-archiving-system --- haven't looked at it for a while, but it worked well enough.)
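A bare-bones TWC-style version of that macro might look something like this (just a sketch, untested; it assumes the scrapbook/ folder sits alongside the wiki file and only emits a plain link):

config.macros.scrapbook = {
    handler: function(place, macroName, params) {
        // Scrapbook names each capture folder by its timestamp, e.g. 20160404203343
        var timestamp = params[0] || "";
        var link = document.createElement("a");
        link.href = "scrapbook/" + timestamp + "/index.html"; // assumes scrapbook/ next to the wiki
        link.target = "_blank";
        link.appendChild(document.createTextNode("archived " + timestamp));
        place.appendChild(link);
    }
};

With something like that in place, <<scrapbook 20160404203343>> would render as a plain link to scrapbook/20160404203343/index.html.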
There are two things I'd like to do:
1 > update my old scrapbook code to work with classic
2 > import the RDF file produced by Scrapbook as tiddlers (rough sketch below)
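For (2), I'm picturing something along these lines. Again just a sketch: the attribute names (id, title, source) are my guesses at what Scrapbook writes into scrapbook.rdf, not something I've verified, and the output is just plain objects to feed into whatever import machinery is handy:

function scrapbookRdfToTiddlers(rdfText) {
    // Parse the RDF/XML and treat each RDF:Description element as one archived item;
    // its attributes become tiddler fields.
    var doc = new DOMParser().parseFromString(rdfText, "application/xml");
    var RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";
    var items = doc.getElementsByTagNameNS(RDF_NS, "Description");
    var tiddlers = [];
    for (var i = 0; i < items.length; i++) {
        var fields = {};
        for (var j = 0; j < items[i].attributes.length; j++) {
            var attr = items[i].attributes[j];
            fields[attr.localName] = attr.value; // e.g. id, title, source (guessed names)
        }
        tiddlers.push({
            title: fields.title || fields.id,
            created: fields.id,                  // Scrapbook ids look like TW timestamps
            source: fields.source,
            text: "<<scrapbook " + (fields.id || "") + ">>"
        });
    }
    return tiddlers;
}

The nice part is that each Scrapbook id doubles as a TW-style timestamp, which is what makes the <<scrapbook>> link idea above hang together.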
Was wondering if anyone has played in this space recently, or has any thoughts about how to go about importing and processing RDF files? And/or jettisoning Scrapbook altogether in favor of something else that does the same work?
Thanks!
//steve.