
THEORY: the analytic table-of-contents


Jorn Barger

Apr 25, 2003, 3:46:30 AM
There seems to be a deep analogy between _objects_ in
object-oriented programming and _pages_ in web-hypertext.

If your program concerns persons and places, you'll need
a class of person-objects and a class of place-objects,
with attributes (slots) for all their defining features.

Similarly, if you create a webpage for a person, there
are characteristic 'slots' that need to be filled--
primarily a timeline of their life, probably a broken-out
list of their major works, a picture, links to more
pictures, etc.
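
In OO terms the parallel might look something like this (a toy
Python sketch; the slot names are just the ones listed above):

    class Person:
        """One class of 'object', with slots for the same
        features a person-page needs to fill."""
        def __init__(self, name):
            self.name = name
            self.timeline = []       # life events, in order
            self.major_works = []    # broken-out list of works
            self.picture = None      # URL of a primary picture
            self.more_pictures = []  # links to more pictures

    homer = Person("Homer")
    homer.major_works = ["Iliad", "Odyssey"]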

(The topic-map crowd seems to be sidling towards a
general attack on this problem-- what classes of
'object' can be the topic of a page, and what sorts of
subtopics will they require?)

If you create a page for a _book_, one of the most
useful 'slots' is an analytic table-of-contents. (This
is the conventional name for old-fashioned ToCs that
include a concise summary of the chapter.) I'm not
sure I've seen any examples of this except my own:

http://www.robotwisdom.com/jaj/homer/odyssey.html
http://www.robotwisdom.com/jaj/ulysses/
http://www.robotwisdom.com/jaj/bible.html
http://www.robotwisdom.com/flaubert/bovary.html
http://www.robotwisdom.com/flaubert/sentimentale.html
http://www.robotwisdom.com/flaubert/bouvard.html
http://www.robotwisdom.com/jorn/gibbon.html
http://www.robotwisdom.com/ai/wilkins.html

Along with a basic chapter-by-chapter synopsis, this
ought to include:

- links to all online etexts of each individual chapter
(this is slightly more work for the author than a single
link to each etext's own ToC, but it's extremely
efficient for the reader)

- possibly a link to a split-screen FRAME that allows
comparison of any two versions (the simplest way to
do this is a 50-50 top-bottom frame that loads two copies
of your own book-page, allowing the reader to click each
of these again to load the etexts; a minimal sketch of
such a frameset appears after this list)

- links to critical essays about the chapter

- annotations of difficult points

- illustrations

- explicit statement of the time-period covered by the
chapter (where relevant)
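
The split-screen frame mentioned above needs almost no
machinery. A minimal CGI sketch in Python (the book-page URL
is just one of the examples above; a real script would take
it as a parameter):

    #!/usr/bin/env python
    # 50-50 top/bottom frameset loading two copies of the
    # book-page, so the reader can steer each half to a
    # different etext.

    BOOK_PAGE = "http://www.robotwisdom.com/jaj/homer/odyssey.html"

    print("Content-Type: text/html")
    print("")
    print('<frameset rows="50%,50%">')
    print('  <frame src="%s">' % BOOK_PAGE)
    print('  <frame src="%s">' % BOOK_PAGE)
    print('</frameset>')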


Other slots for book-pages, beyond the analytic ToC:

- character-list
- movie-versions (IMDb links)
- timeline of the author's composition of the work, and
the subsequent critical reaction
- reviews


Some other classes of 'object' suitable for webpages:

- websites, eg: http://www.robotwisdom.com/sites/perseus.html
- years, eg: http://www.robotwisdom.com/jaj/ulysses/1904.html
- words, eg: http://www.robotwisdom.com/jaj/ulysses/acatalectic.html
- historical snapshots, eg: http://www.robotwisdom.com/science/thera.html

phil

Apr 25, 2003, 10:26:31 AM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.03042...@posting.google.com>...

> There seems to be a deep analogy between _objects_ in
> object-oriented programming and _pages_ in web-hypertext.

Jorn, I've been thinking a bit about this from a different angle. I've
been building a personal wiki this year
(http://216.36.193.92/cgi-bin/wiki.pl) and
I've started thinking about how to combine wiki's free-style
organization with a scripting language. The basic idea would be to
allow certain "marked-up" fields or slots to be placed within pages.

Then the language could be used to write macros which could, for
example, allow data from one page to be used in a dynamic calculation
on another. But if the macro-scripts were also notionally placed in
named slots on the pages, we'd effectively be able to write
PageName.methodSelector within the script. All pages would become
potential objects.
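
A toy sketch of the shape I have in mind (Python standing in
for the real scripting language; all the names are made up):

    class WikiPage:
        # a page is just a bag of named slots; slots holding
        # code behave like methods when you look them up
        def __init__(self, **slots):
            self.slots = slots
        def __getattr__(self, name):
            try:
                value = self.slots[name]
            except KeyError:
                raise AttributeError(name)
            return value() if callable(value) else value

    pages = {}
    pages["OdysseyBook"] = WikiPage(chapters=24)
    pages["SiteStats"] = WikiPage(
        summary=lambda: "%d chapters indexed"
                        % pages["OdysseyBook"].chapters)

    # data from one page used in a calculation on another:
    print(pages["SiteStats"].summary)   # -> 24 chapters indexed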


Why do this? One reason is a kind of literate programming:
instead of embedding comments in code, embed code in free-form
wiki pages. The wiki-like, hypertextual nature of the collection
of object descriptions would give the language the ease of
organization and refactorability of a wiki, which I think would
be a big improvement over other development environments.


> If you create a page for a _book_, one of the most
> useful 'slots' is an analytic table-of-contents. (This
> is the conventional name for old-fashioned ToCs that
> include a concise summary of the chapter.) I'm not
> sure I've seen any examples of this except my own:

Isn't this something an RSS file can represent?
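
Each chapter synopsis could map straight onto an RSS item,
e.g. (a hand-rolled sketch in Python, using plain RSS 0.91
element names; the anchor and synopsis here are invented):

    # one <item> per chapter of the analytic ToC
    # (the <channel> wrapper is omitted)
    chapters = [
        ("Book I",
         "http://www.robotwisdom.com/jaj/homer/odyssey.html#1",
         "Athena visits Telemachus; the suitors feast."),
    ]

    for title, link, synopsis in chapters:
        print("<item>")
        print("  <title>%s</title>" % title)
        print("  <link>%s</link>" % link)
        print("  <description>%s</description>" % synopsis)
        print("</item>")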

phil jones

Jorn Barger

Apr 25, 2003, 2:27:40 PM
inte...@postmaster.co.uk (phil) wrote in message news:<d7825a3e.03042...@posting.google.com>...

> I've started thinking about how to combine wiki's free-style
> organization with a scripting language. The basic idea would be to
> allow certain "marked-up" fields or slots to be placed within pages.

I want a scriptable browser, for sure, with web-authoring features:
http://www.robotwisdom.com/web/drawback.html
http://www.robotwisdom.com/drawback/

> Then the language could be used to write macros which could, for
> example, allow data from one page to be used in a dynamic calculation
> on another.

I distrust auto-generated content. (I like human authors.)

> > If you create a page for a _book_, one of the most
> > useful 'slots' is an analytic table-of-contents. (This
> > is the conventional name for old-fashioned ToCs that
> > include a concise summary of the chapter.) I'm not
> > sure I've seen any examples of this except my own:
>
> Isn't this something an RSS file can represent?

When you turn a document into a database, it becomes less
readable, and less pleasant to revise (imho).

bobbyhaqq

Apr 26, 2003, 5:14:11 PM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.03042...@posting.google.com>...

Well, my experience in web development is that you are 100%
right. DBs need to be used for certain things, like tracking or
directory systems. But DB systems that let users enter text in
an input form and then see it rendered on output don't seem to
work. Users seem unable to get past the age-old habit of placing
stain upon a surface.

BUT, given a few more generations of databases, with more and
more database art, perhaps people will get better at using
databases to make documents.

phil

Apr 28, 2003, 1:59:36 AM
jo...@enteract.com (Jorn Barger) wrote in message
> I want a scriptable browser, for sure, with web-authoring features:
> http://www.robotwisdom.com/web/drawback.html
> http://www.robotwisdom.com/drawback/
>
> > Isn't this something an RSS file can represent?
>
> When you turn a document into a database, it becomes less
> readable, and less pleasant to revise (imho).

Jorn, I agree with some of your complaints against XML, but it
seems that the most plausible way we're getting the functionality
of Drawback is actually through RSS newsreaders / aggregators.
The XML *isn't* for people to read (so its readability isn't an
issue in this case), but the culture of using RSS does have the
effect of getting site owners to strip the rubbish off their
webpages, and of letting the user apply whatever design suits
his or her own needs and taste on the client side.

Some, like Syndirella (http://www.yole.ru/projects/syndirella/),
seem to offer a screen-scraping option for sites w/out RSS feeds.

Jorn Barger

Apr 28, 2003, 8:11:32 AM
> > When you turn a document into a database, it becomes less
> > readable, and less pleasant to revise (imho).
>
> Jorn, I agree with some of your complaints against XML, but it
> seems that the most plausible way we're getting the functionality
> of Drawback is actually through RSS newsreaders / aggregators.
> The XML *isn't* for people to read (so its readability isn't an
> issue in this case), but the culture of using RSS does have the
> effect of getting site owners to strip the rubbish off their
> webpages, and of letting the user apply whatever design suits
> his or her own needs and taste on the client side.

I agree that most webpages have lots of rubbish I'd like
to strip off (I call it html-junk). But in every case, the
page-author chose to include it, so trying to convince
them to use RSS will be approximately as hard as trying
to get them to omit it anyway.

Parsing a webpage to identify the main content seems like
a very easy problem to me-- you look for paragraphs made
of sentences. So why waste time trying to convince
authors to make their pages differently, instead of
making a smarter browser that works with any old page?
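
A first stab at that heuristic, sketched in Python (the
thresholds are guesses):

    import re

    def is_prose(chunk, min_words=15):
        # a content paragraph has a decent number of words and
        # at least one sentence-ending mark; navbars, link lists
        # and ads rarely form full sentences
        words = chunk.split()
        enders = re.findall(r"[.!?]", chunk)
        return len(words) >= min_words and len(enders) >= 1

    def main_content(paragraphs):
        # keep the paragraphs made of sentences, drop the junk
        return [p for p in paragraphs if is_prose(p)]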

When I make a webpage, I optimise it for some ideal
third-worlder using a 28k modem and HTML 2.0, and I try
to make it look decent and respectable-- NOT like an
unreadable database, but not like a glossy magazine
either.
