
Theory: The Line, the Screen, and the Page


Jorn Barger

Sep 7, 2002, 6:14:38 AM
Say I create a webpage for Gustave Flaubert (19thC French
novelist).

I contend that this webpage will be less 'contingent' and
more 'necessary' if I include a timeline of GF's life,
and a subsection for each of his major works that
includes links to all known online etexts of each work,
and all known critical essays, etc:
http://www.robotwisdom.com/flaubert/

But as I research the timeline, I keep finding quotations
that strike home for me... though at the same time I
don't find that much on the Web in English... so I decide
to read/re-read all his major works, and to create
subpages for each of them.

These subpages may include detailed plot-summaries, and
annotations that explain historical and literary
allusions, and a more detailed timeline of the creation
of the work, and maybe the etext-links can be broken up
by individual chapters...

But we're now getting into what seems to me an unsolved
problem of hypertext theory:

When you spin off a subpage on a subtopic, how much
of the info do you _leave_ on the main topic-page?

One extreme would be to leave nothing but a link to
the new page; the other extreme is to include the
whole subpage on the main page.

An intermediate compromise might be to leave one
screensworth of the 'best' (most necessary) info, and
move the rest. With Flaubert, who only wrote a half-
dozen major works, this would result in six screens
devoted to his major works-- not an awkward amount--
while Iris Murdoch wrote 26 novels and giving each
a full screen definitely strains the 'comfort
margin'.
cf: http://www.robotwisdom.com/jorn/iris.html

I've concluded that webpages that intend to be
information-resources should be about the length of
one chapter of a book-- ideally 30-60k, but 200k is
bearable if no TABLEs are used (delaying rendering).

The author's goal should be to use those 60k as
efficiently as possible, so that anyone searching
for an answer related to that topic will either find
it directly on the page, or find a link there that
immediately leads to the answer. (Breaking the 60k up
into many shorter pages would slow down this search
unacceptably.)

I think the old VT100 definition of a 'screen' as
being 25 lines of <80 characters is a reasonable
startingpoint for designers. The eye rebels when lines
are longer than 80 (64 may be more nearly optimal), and
25-lines-per-window will exclude very few platforms.

I find that long timelines are most useful when most
entries are kept to a single line-- it makes scanning
much easier. So these three 'chunk sizes' seem most
fundamental to me:

- the <80 character line
- the ~25-line (2k) 'screen'
- the ~30-screen (60k) page

When I create maps I try to optimise them for a
screensworth in size, as well: about 600 by 400. (I
think surfers should be encouraged to make 640-by-480
their default window-size. This has the advantage of
allowing links to be dragged between overlapping
windows on most modern desktops.)


more: http://www.robotwisdom.com/web/

Daniel R. Tobias

Sep 7, 2002, 4:39:06 PM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02090...@posting.google.com>...

> When you spin off a subpage on a subtopic, how much
> of the info do you _leave_ on the main topic-page?
>
> One extreme would be to leave nothing but a link to
> the new page; the other extreme is to include the
> whole subpage on the main page.

My own preference is to have just a very short amount of content from
the subpage on the main page, if anything at all; after all, it's
rather redundant to put all or most of the subpage on the main page --
if you do, then why have a subpage at all? I usually spin off
subpages when sections of my content get so large that the main page
has an unwieldy amount of content, so that breaking it up will both
shorten the main page and make the rest of the content more logically
structured.

Thus, at the most, I'll put a single-paragraph "teaser-blurb-style"
summary of what the subpage is about, to introduce the link to it; or,
if I'm being more minimalist I'll just have the link, with the title
of the linked page as the link text. The point is to make people
aware of what the subject matter of the subpages is, so that if
they're interested they can follow the link to it; the point is not to
try to completely or mostly recapitulate the subpage (because then why
would anybody want to follow the link at all?).

--
Dan
Dan's Web Tips: http://webtips.dan.info/

Jorn Barger

Sep 8, 2002, 3:55:21 AM
[Google isn't updating so I'm faking the threading in this reply]

Daniel R. Tobias <d...@tobias.name> wrote in
news:<aab17256.02090...@posting.google.com>:


> My own preference is to have just a very short amount of content from
> the subpage on the main page, if anything at all;

I've called this the 'stripped grapes' model because it seems
informationally impoverished to me.

> after all, it's
> rather redundant to put all or most of the subpage on the main page --
> if you do, then why have a subpage at all?

For search-engines, maybe. But obviously I wasn't recommending that
extreme, I was asking what intermediate level was theoretically
optimal. My general policy is to 'promote' the _best_ content onto
the main page.

> I usually spin off
> subpages when sections of my content get so large that the main page
> has an unwieldy amount of content, so that breaking it up will both
> shorten the main page and make the rest of the content more logically
> structured.

This would be uncontroversial, except I'm sure you define 'unwieldy'
as about 10x smaller than I do.

("Logically structured" is a fine goal for school essays, but it
isn't necessarily the most efficient solution for the real world of
Web surfers.)

> Thus, at the most, I'll put a single-paragraph "teaser-blurb-style"
> summary of what the subpage is about, to introduce the link to it;

Does the term 'teaser' imply that you view your visitors as prey
to be manipulated and tricked, in order to maximise your hitcount?
(Obviously I'd strongly disapprove of this.)

> or,
> if I'm being more minimalist I'll just have the link, with the title
> of the linked page as the link text.

My technical term for this ancient theory is 'braindead'.

> The point is to make people
> aware of what the subject matter of the subpages is, so that if
> they're interested they can follow the link to it;

If 'subject' was enough, then each Yahoo subject-category could
just be reduced to a row of generic 'click here' links. What
you're omitting is what _resources_ about the subject each page
offers.

> the point is not to try to completely or mostly recapitulate
> the subpage (because then why would anybody want to follow the
> link at all?).

I've been arguing for 5+ years that good design tries to
***minimise*** clicks... especially disappointing clicks where the
visitor wouldn't have bothered if they'd known what they were
going to get.

In April I suggested that you revise your tips-page:
http://groups.google.com/groups?th=255a072785b634ed

"make the toc-page a long condensation of all your tips,
with links to the _explanations_."

Do you have some excuse for not even trying this as an experiment?

Daniel R. Tobias

Sep 8, 2002, 12:27:56 PM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02090...@posting.google.com>...

> > My own preference is to have just a very short amount of content from
> > the subpage on the main page, if anything at all;
>
> I've called this the 'stripped grapes' model because it seems
> informationally impoverished to me.

I suppose that's one way of thinking about it... if your mental model
demands having all of your information in one place, with few or no
links needing to be followed to get some of it, then a page with much
or most of the information off on other linked pages would seem
"impoverished". However, the concept of hypertext has always been
based on there being many different documents out there, covering many
different aspects of a subject, and capable of being interlinked in
many different ways to allow relationships among them to be explored.
Hypertext doesn't demand that everything of any relevance be on the
same page; in fact, in the extreme case of this, it wouldn't be
hypertext at all, just plain text, as no links would be needed if you
put everything including the kitchen sink in a single document.

> > after all, it's
> > rather redundant to put all or most of the subpage on the main page --
> > if you do, then why have a subpage at all?
>
> For search-engines, maybe. But obviously I wasn't recommending that
> extreme, I was asking what intermediate level was theoretically
> optimal. My general policy is to 'promote' the _best_ content onto
> the main page.

My own policy tends to go in the other direction in that relatively
minor things that don't "deserve" their own page tend to be tacked
onto existing pages, while things that grow into sizable sections of
the site get "promoted" to pages of their own; thus, the "better"
something is, the more likely it is to be on a separate page from the
main one. Then, however, I do make sure to prominently link the best
content on the main page so that people can find it.

> > I usually spin off
> > subpages when sections of my content get so large that the main page
> > has an unwieldy amount of content, so that breaking it up will both
> > shorten the main page and make the rest of the content more logically
> > structured.
>
> This would be uncontroversial, except I'm sure you define 'unwieldy'
> as about 10x smaller than I do.

Maybe, maybe not... I do have some pretty long text pieces on single
pages in my site, and don't generally believe in breaking them up just
for the sake of breaking them up, when no logical break point exists;
I can't stand the various online magazines, etc., that insist on
chopping up all their articles into bite-sized chunks so you have to
keep following "Next Page" links to read the whole thing. Still, if
the page gets really, really long I do think about finding a way to
segment it; like, I eventually broke up my "Brand X Browsers" page
into several pages, some of the breaks being fairly arbitrary such as
alphabetic position of the browser name, to keep it from being a
single humongous page.  Also, "photo-gallery"-style pages with lots
of pictures sometimes get broken up in my sites, since there is a
bandwidth quota in my hosting account (even though I'm not currently
anywhere near it), and I'd hence prefer not to have everybody who
stumbles on the gallery page end up downloading every single image if
they're not very interested; broken up, it leads to only a few images
getting loaded unless the reader follows the link to the next page.

> ("Logically structured" is a fine goal for school essays, but it
> isn't necessarily the most efficient solution for the real world of
> Web surfers.)

Well, the particular type of logic used to decide on the structure
needs to be sensitive to the nature of the medium and the purpose of
the particular site, of course; the sort of structure appropriate for
a school essay might not suit a business website, and that in turn
might not suit a personal hobby site.

> > Thus, at the most, I'll put a single-paragraph "teaser-blurb-style"
> > summary of what the subpage is about, to introduce the link to it;
>
> Does the term 'teaser' imply that you view your visitors as prey
> to be manipulated and tricked, in order to maximise your hitcount?
> (Obviously I'd strongly disapprove of this.)

Well, I am kind of using "teaser" in a "marketing" sense, but not with
the intent of committing the kinds of excesses sometimes done by
marketing types who do indeed use misleading descriptions to attract
unwarranted attention. I'm referring more to "honest" teasers, that
make people interested in clicking on if in fact they have an interest
in the subject matter being discussed, and whose desires are honestly
fulfilled by the page that does in fact contain material as described
in the "teaser".

> > or,
> > if I'm being more minimalist I'll just have the link, with the title
> > of the linked page as the link text.
>
> My technical term for this ancient theory is 'braindead'.

That's a bit pejorative; people can disagree without being stupid. I
find your views interesting as an alternative to both the commonplace
graphic-oriented designer crowd and the structure-oriented "purist
camp", but don't agree with all of them; that doesn't make either of
us "braindead".

By "page title", I don't mean something really cryptic like
"index2.html"; I believe in making titles descriptive of the content
and purpose of the pages. If this is done well, a set of links by
title can be useful, though it's even better to supplement it with a
capsule description (but not a complete recap of everything in the
linked page).

> > The point is to make people
> > aware of what the subject matter of the subpages is, so that if
> > they're interested they can follow the link to it;
>
> If 'subject' was enough, then each Yahoo subject-category could
> just be reduced to a row of generic 'click here' links. What
> you're omitting is what _resources_ about the subject each page
> offers.

Of course; by "subject" I don't mean a braindead (oops, now I'm using
that word myself!) set of top-level topics as found in the Yellow
Pages or the Dewey Decimal System; what's appropriate for subject
descriptions is context-sensitive depending on the purpose of the site
and of the section of the site you're in. At the very top level of a
general-purpose site, "Music" is a valid topic; at other levels,
topics might have to distinguish between "pop music" and "classical
music", or between particular finely-divided genres within each of
these, or with subject division by era, or with sections for
particular artists, composers, rock groups, etc. In a fan site for a
specific performer, the "music" section might have separate pages for
each album released by that performer, or each song. The fineness of
subject division and the detail with which the subjects are described
will vary by circumstance -- what the intended subject matter is of
the site as a whole, and what the intended audience is; a site aimed
at casual browsers should be more detailed in its explanations about
how the subject is divided than one aimed at hard-core enthusiasts of
the topic who are better educated about it.

> > the point is not to try to completely or mostly recapitulate
> > the subpage (because then why would anybody want to follow the
> > link at all?).
>
> I've been arguing for 5+ years that good design tries to
> ***minimise*** clicks... especially disappointing clicks where the
> visitor wouldn't have bothered if they'd known what they were
> going to get.

That's why clarity of subject description is important.

> In April I suggested that you revise your tips-page:
> http://groups.google.com/groups?th=255a072785b634ed
>
> "make the toc-page a long condensation of all your tips,
> with links to the _explanations_."
>
> Do you have some excuse for not even trying this as an experiment?

Well, I just did a major redesign of the site a few months ago and am
not working on another quite yet... your suggestion is one of the
possibilities I might consider if and when I give it another overhaul.

Jorn Barger

Sep 8, 2002, 6:52:55 PM
d...@tobias.name (Daniel R. Tobias) wrote in message news:<aab17256.02090...@posting.google.com>...

> > I've called this the 'stripped grapes' model because it seems
> > informationally impoverished to me.
>
> I suppose that's one way of thinking about it... if your mental model
> demands having all of your information in one place,

This is really distorted.

> with few or no
> links needing to be followed to get some of it,

Minimal clicks.

> then a page with much
> or most of the information off on other linked pages would seem
> "impoverished".

In fact, that impoverishment can be easily estimated by counting
the number of characters of informational text on the page.

> However, the concept of hypertext has always been
> based on there being many different documents out there, covering many
> different aspects of a subject, and capable of being interlinked in
> many different ways to allow relationships among them to be explored.

Okay... And that 'classical' hypertext theory has yet to notice
that a page consisting mainly of links can always be enriched by
including the best information from the linked pages.

> Hypertext doesn't demand that everything of any relevance be on the
> same page; in fact, in the extreme case of this, it wouldn't be
> hypertext at all, just plain text, as no links would be needed if you
> put everything including the kitchen sink in a single document.

(Why do you keep appealing to this irrelevant argument?)

> [...] Still, if
> the page gets really, really long I do think about finding a way to
> segment it; like, I eventually broke up my "Brand X Browsers" page
> into several pages, some of the breaks being fairly arbitrary such as
> alphabetic position of the browser name, to keep it from being a
> single humongous page.

Definition?

> Well, the particular type of logic used to decide on the structure
> needs to be sensitive to the nature of the medium

That medium being dominated, in practice, by the fact that clicking
to a new page takes a lot more time than scrolling down; while the
theory seems more often to assume that clicking is an inherently
fun activity, even maximised for its own sake.

> [...] I'm referring more to "honest" teasers, that
> make people interested in clicking on if in fact they have an interest
> in the subject matter being discussed, and whose desires are honestly
> fulfilled by the page that does in fact contain material as described
> in the "teaser".

So the critical question is, do your teasers give enough info for
people to predict whether their desires _will_ be fulfilled. If
you simply name the topic/subject without describing the kinds of
resources (bio, timeline, pix, links, etexts, etc) then it's
impossible for them to make an informed choice.

> > > or,
> > > if I'm being more minimalist I'll just have the link, with the title
> > > of the linked page as the link text.
> > My technical term for this ancient theory is 'braindead'.
>
> That's a bit pejorative; people can disagree without being stupid.

I call the _theory_ braindead because it hasn't evolved since the
earliest days of the Web.

> By "page title", I don't mean something really cryptic like
> "index2.html"; I believe in making titles descriptive of the content
> and purpose of the pages. If this is done well, a set of links by
> title can be useful, though it's even better to supplement it with a
> capsule description (but not a complete recap of everything in the
> linked page).

'Title' normally refers to the header-TITLE or the H1 (or
equivalent). The former should be optimised for bookmarking, the
latter for search-engines and especially for readers. Neither
of these are likely to be descriptive in the way a link should
be-- giving enough info to decide whether to click.

> [...] The fineness of
> subject division and the detail with which the subjects are described
> will vary by circumstance

You're missing my point-- subject/topic is one thing, type of
resources is a completely 'orthogonal' thing.

In terms of information-density (richness), telling just the
subject is low/impoverished, telling the subject plus enumerating
the resources is much richer.

But in fact the purpose of my original post was to explore a
still-richer strategy, of promoting the best content onto the
linking page. In weblogging, this may be a quote or an image.

For my Flaubert page, I think the way to decide what to 'promote'
has to be based on usefulness to searchers-in-general, eg
my main Flaubert page should keep a link to the best English and
French etexts, but doesn't need to include the full inventory
of all etexts.

Thomas Baekdal

Sep 9, 2002, 6:58:04 PM
"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02090...@posting.google.com...

> Say I create a webpage for Gustave Flaubert (19thC French
> novelist).

Just a few suggestions you might want to consider:

- When you make web pages that long, it would be polite to your visitors to
provide a table of contents at the top. Then, if one of your readers wants to
learn more about the books, they will not have to scroll down to an (at that
point) unknown position on the page. It will also give direct feedback about
what the page holds.

- You might also want to consider adding visual guides to help your visitors
find what they are looking for. By this I mean making important things
bold, making each entry (in the timeline) stand out better, positioning
each timeline year so that it does not vertically overlap other text
elements, and other visual aids. This can be done without using tables.

- Add an introductory text at the top of each page, summarizing the content.
As it is now you need to know a lot about the artist to even get started on
your page (most of the content requires previous knowledge to understand).

- There are a couple of errors in the underlying code that might cause the
page to render unexpectedly.

In general I do admire your way of structuring web content. Very few people
do this correctly, and you come very close. What I mean is that when you
make a page you should put paragraphs into a <p> tag, headlines into their
appropriate header <h1> <h2>..., place quotes in their appropriate tags, and
so forth. This you do in most cases.

> But we're now getting into what seems to me an unsolved problem of
> hypertext theory:

When making a link from one page to another, it is essential that you let
the user know what the content of the linked page is. This means that "leave
nothing but the link" would not benefit anyone. It also means that links
reading just "head", or even worse "ditto", are not recommended.

Instead of "head" consider writing "Picture: Statue of Dr. Flaubert" and
make the full text clickable. This way your readers will know exactly what
to expect.

> I've concluded that web pages that intend to be information-resources
> should be about the length of one chapter of a book

That very much depends on how you structure the content on those pages. One
of the major problems on the web is that it is not very comfortable to read
on a computer screen. This is mostly due to having to scroll and because of
the low resolution.

If you make a page longer than one and a half screens (vertical) you need to
include quick links to each subsection, and the need for better visual
structure increases tremendously. If you are concerned about
accessibility you also need to adjust the code so that each
section is easily identified, and can be accessed quickly - even by people
who are blind.

You do not have to clutter the page with graphics or colors, but good
interaction design is essential for long pages.

Regards,
Thomas Baekdal
--------------
http://www.baekdal.com
- The Goal is Pretty Simple

Jorn Barger

Sep 10, 2002, 4:03:43 AM
"Thomas Baekdal" <notava...@baekdal.com> wrote in message news:<3d7d277d$0$30477$edfa...@dspool01.news.tele.dk>...

> - When you make web pages that long it would be polite to your
> visitor to make a table of contents at the top.

Agreed. Also, use "#" as the link-bullets to clarify that it's a
same-page jump.
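
A minimal sketch of what such a "#"-bulleted, same-page ToC might look
like in HTML (the section names and anchor ids are hypothetical, not
taken from the actual Flaubert page):

```html
<!-- The "#" glyph itself is the link, signalling a jump within
     the page rather than a load of a new page. -->
<p>
<a href="#timeline">#</a> Timeline of Flaubert's life<br>
<a href="#bovary">#</a> Madame Bovary: summary, allusions, etexts<br>
<a href="#educ">#</a> Sentimental Education: summary, allusions, etexts
</p>

<!-- Each target section carries a matching named anchor: -->
<h2><a name="timeline">Timeline of Flaubert's life</a></h2>
```

(HTML 4 `name` anchors are shown, as was usual in 2002; `id` attributes
on the headings work the same way.)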

> [...] making important things bold

These days I design for an ideal reader who actually does *read*
online, not just skimming. So I avoid bold except for headings,
because I find it very disruptive to the reading eye. But on my
Joyce-pages I often substitute a shade of green instead.



> - Add an introductory text at the top of each page, summarizing
> the content.

Ideally, this can be merged with the #-ToC.

> [...] put paragraphs into a <p> tag,

I think structured markup is a hoax:
http://www.robotwisdom.com/web/structure.html

> headlines into their appropriate header <h1> <h2>...,

Here I balk: you're asking me to lay out the page with my eyes
closed! <H1> is unbearably huge on my platform, so I make an
unhappy compromise with H2 or H3.

TimBL and the W3C think web authors should submit to a tyranny
of dumb parsers. I think web authors should create pages, and
parser-coders should get a whole lot smarter:
http://www.robotwisdom.com/web/parsing.html

> place quotes in their appropriate tags

I *******************hate************************ curly-quotes,
because I think copying between webpages and email/netnews
should be as simple as possible.

> When making a link from one page to another, it is essential
> that you let the user know what the content is of the linked
> page. This means that "leave nothing but the link" would not
> benefit anyone. This also means that links reading just "head"
> or even worse "ditto" is not recommended.

But you've torn those out of context! If you're saying the
anchor-text should be a big old blotch of underlined blue,
so that an imaginary 'dumb link extractor' can harvest it in
one swell foop, then I disagree.



> Instead of "head" consider writing "Picture: Statue of Dr.
> Flaubert" and make the full text clickable.

No no no. I've been arguing very explicitly that blue-
underlining needs to be minimised. I admit I sometimes err
on the side of ambiguity, but people who are looking for
something related can afford to check it out, and people
who aren't can afford to skip it.

What I _do_ hope to add to my one-word 'text buttons' is an
occasional second word that recommends the best links: [head-superb]

> One of the major problems on the web is that it is not very
> comfortable to read on a computer screen. This is mostly due to
> having to scroll and because of the low resolution.

I consider it the user's responsibility to set up their platform
(default font, chair-position, etc) so that they can read all day.

> If you are concerned about accessibility you also need to make
> adjustments to the code so that each section is easily
> identified, and can be accessed quickly - even for people
> who are blind.

Please reassure me you haven't fallen for the "blind readers
choke on <I> and <B>" myth!?

I'm willing to make design concessions if they really help the
blind-&c, but I think TimBL&c are deluded about what that really
is.

Thomas Baekdal

Sep 10, 2002, 5:49:02 AM

"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...

> > [...] making important things bold
>
> These days I design for an ideal reader who actually does *read*
> online, not just skimming. So I avoid bold except for headings,
> because I find it very disruptive to the reading eye. But on my
> Joyce-pages I often substitute a shade of green instead.

It is your choice of course, but all usability reports about reading on the
web would tell you that people scan pages rather than reading them (as you
would a book). I would recommend making a site that supports what the
web-reader actually does - this means making the page scannable.

Using a shade of green might cause problems for people who are vision
impaired (read: anyone older than 40).

> > - Add an introductory text at the top of each page, summarizing
> > the content.
>
> Ideally, this can be merged with the #-ToC.

It depends on how you intend to make your ToC. A summary should be designed
to give the reader a short introduction to the contents: what it is about
and why it is important/interesting. The ToC would probably not be able to
do this, since you would then need very detailed headlines.

> > [...] put paragraphs into a <p> tag,
>
> I think structured markup is a hoax:
> http://www.robotwisdom.com/web/structure.html

I do not agree with you. I think structuring content is a very important
part of handling information. Structured markup languages (HTML, XHTML, XML)
are made specifically to make this task easy, and identical from site to site
(so that readers do not have to learn a new structure each time they
visit another site).

> > headlines into their appropriate header <h1> <h2>...,
>
> Here I balk: you're asking me to lay out the page with my eyes
> closed! <H1> is unbearably huge on my platform, so I make an
> unhappy compromise with H2 or H3.

You might want to look into CSS (stylesheets). The <h1> tag is not about
text size; it is solely about how you structure your text. You can make
text in an <h1> tag any size you want. The important thing about the <h[x]>
tags is that they identify headlines. How they look is entirely up to you:
you can either use them unstyled, or you can style them as you please.
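
A minimal sketch of the idea (the sizes chosen here are purely
illustrative):

```html
<!-- The heading keeps its structural role as <h1>; CSS alone
     decides how large it renders on screen. -->
<style type="text/css">
  h1 { font-size: 120%; }  /* far smaller than the browser default */
  h2 { font-size: 105%; }
</style>

<h1>Gustave Flaubert</h1>
<h2>Timeline</h2>
```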

> > place quotes in their appropriate tags
>
> I *******************hate************************ curly-quotes,
> because I think copying between webpages and email/netnews
> should be as simple as possible.

I am not speaking of curly-quotes. I am speaking of the quote tags in markup
languages.

> > When making a link from one page to another, it is essential
> > that you let the user know what the content is of the linked
> > page. This means that "leave nothing but the link" would not
> > benefit anyone. This also means that links reading just "head"
> > or even worse "ditto" is not recommended.
>
> But you've torn those out of context! If you're saying the
> anchor-text should be a big old blotch of underlined blue,
> so that an imaginary 'dumb link extractor' can harvest it in
> one swell foop, then I disagree.

I am saying that your links should be more than a single word. They should
be clear about what the linked page is about, so that your readers only visit
the pages they feel are relevant. If you do not do this, you give your
readers too little of a clue about what to expect - potentially wasting their
time.

> > Instead of "head" consider writing "Picture: Statue of Dr.
> > Flaubert" and make the full text clickable.
>
> No no no. I've been arguing very explicitly that blue-
> underlining needs to be minimised. I admit I sometimes err
> on the side of ambiguity, but people who are looking for
> something related can afford to check it out, and people
> who aren't can afford to skip it.

I do agree that blue-underlining should be minimized, but only so far that it
still covers the important part of the text. An example: you write on your
website "Troyat calls her 'pretty', but the pic he includes at...". If I
were to make the same page I would instead write "Troyat calls her
'pretty', but the picture of her he includes at...", making the text
"picture of her" into the link text.
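
In markup, the difference is only in where the anchor starts and ends
(the filename is hypothetical):

```html
<!-- Single-word link: the reader cannot guess what is behind it. -->
Troyat calls her 'pretty', but the <a href="caroline.html">pic</a>
he includes at...

<!-- Phrase-length link: the anchor text itself says what to expect. -->
Troyat calls her 'pretty', but the
<a href="caroline.html">picture of her</a> he includes at...
```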

BTW: No one can afford to check out or skip links that are not relevant to
them. Making pages that do not respect the reader's time is directly
responsible for the information overload that is damaging the Web.

> > One of the major problems on the web is that it is not very
> > comfortable to read on a computer screen. This is mostly due to
> > having to scroll and because of the low resolution.
>
> I consider it the user's responsibility to set up their platform
> (default font, chair-position, etc) so that they can read all day.

That is simply not possible, since the technology does not provide such a
setup. Surveys have found that reading on screen is 25% (or more) slower
than reading from books, newspapers and the like. With a book you can place
it in your lap, read a few pages, shift your position, move the book, etc.
This you cannot do with a computer (not even if you have a laptop).

The latest handheld PCs do offer improved readability, but the text is
still far from as readable as an ordinary book. Products like Microsoft
Reader improve readability even further, but MS Reader documents cannot
be compared to web documents.

> > If you are concerned about accessibility you also need to make
> > adjustments to the code so that each section is easily
> > identified, and can be accessed quickly - even for people
> > who are blind.
>
> Please reassure me you haven't fallen for the "blind readers
> choke on <I> and <B>" myth!?

No, I have not. I am solely talking about how you structure your code, so
that it will be easier for people with screen readers to move around. On
your page now, you do not provide in-page links to subsections. This means
that even though people with normal vision can quickly scroll up and down to
find the section they like, blind people do not have this luxury. By
properly identifying headlines, summaries, quotes and the like - and by
providing in-page links that skip up or down (or to specific sections) -
you will increase the usability of the site enormously.

I believe that when I make a website, it should be perfectly structured, and
contain enough information and in-page links for the readers to move
around - without wasting their time.


Please remember that the above is only how I recommend making web pages. You
may follow these recommendations, or ignore them - that is of course up to you.

Regards,
Thomas Baekdal
----------
http://baekdal.com

Jorn Barger

Sep 10, 2002, 12:43:11 PM
"Thomas Baekdal" <notava...@baekdal.com> wrote in message news:<3d7dc01c$0$27680$edfa...@dspool01.news.tele.dk>...
> [...] all usability reports about reading on the
> web would tell you that people scan the pages, rather than reading

Which 'people'? Random stooges hauled off the street? (Usability
'experts' like Nielsen are expert mainly in hoaxing people into
accepting their bogus experimental methodology.)

My pages are for intelligent readers. The popularity of my weblog
shows that lots of people are willing to read full-length articles
on the Web.

> > > - Add an introductory text at the top of each page, summarizing
> > > the content.
> > Ideally, this can be merged with the #-ToC.
>
> It depends on how you intend to make your ToC. A summary should be
> designed to give the reader a short introduction to the contents,
> what it is about and why it is important/interesting. The ToC
> would probably not be able to do this, since you would then need
> very detailed headlines.

My experiments suggest that you can break the summary up into
one-line sentences with #-bulleted links, eg:
http://www.robotwisdom.com/web/theory.html



> I think structuring content is a very important
> part of handling information. Structured markup languages (HTML,
> XHTML, XML) are made specifically to make this task easy, and
> identical from site to site (so that the reader does not have to
> learn a new structure each time they visit another site)

No, there's no correlation between the structure the reader sees
and the structure the parser sees. And the reader is what's
important.

> > unhappy compromise with H2 or H3.
> You might want to look into CSS (stylesheets).

I object on principle, because they introduce unnecessary
complexity to a simple problem.

> [...] I am speaking of the quote tags on markup languages.

Are you claiming there's some well-thought-out solution for
marking up the zillion different uses of quotation marks?
(I'll believe it when I see it-- my background is in AI, and
I don't accept handwavy ivory-tower theories.)

>> I consider it the user's responsibility to set up their platform
>> (default font, chair-position, etc) so that they can read all day.
>
> That is simply not possible, since the technology does not provide
> such a setup. Surveys have found that reading on screen is 25% (or
> more) slower than reading from books, newspapers and the like.
> With a book you can place it in your lap, read a few pages, move
> your position, move the book, etc. This you cannot do with a
> computer (not even if you have a laptop).

I've been surfing the Web 12 hours a day for the last five years,
so you can't tell me it's impossible. (I use 18-pt Geneva as my
default font, and I keep my monitor next to my bed so I can change
position for variety.)

Thomas Baekdal

Sep 10, 2002, 7:22:51 PM

"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...

> Which 'people'? Random stooges hauled off the street? (Usability
> 'experts' like Nielsen are expert mainly in hoaxing people into
> accepting their bogus experimental methodology.)

That was a very ...uhm... interesting statement. Do you have any proof to
support it?

> My pages are for intelligent readers. The popularity of my weblog
> shows that lots of people are willing to read full-length articles
> on the Web.

How do you know that? Have you ever tried what I suggest? You might end up
with a much higher popularity rate by doing so.

> My experiments suggest that you can break the summary up into
> one-line sentences with #-bulleted links, eg:
> http://www.robotwisdom.com/web/theory.html

I do agree that this approach might work, if each sentence clearly informs
the reader about the content of the linked section.

> No, there's no correlation between the structure the reader sees
> and the structure the parser sees. And the reader is what's
> important.

Heh... (Sorry for that outburst.) You obviously do not know what you are
talking about. Take XML. Here you have a very structured markup, but the
resulting page is often presented in a very different way. The same applies
to HTML and XHTML.
The XML code for your time line would be something similar to:
---------------
<timeline>
  <item>
    <year>1815</year>
    <description>Father promoted to chief surgeon</description>
  </item>
  <item>
    <year>1818</year>
    <description>Family moves into residential hospital wing
    (extremely morbid environment to grow up in)</description>
  </item>
  ...
</timeline>
-------------
This is of course not how you want to present it, but since your code is
structured you can choose many different kinds of output. For example, these
are just some of the possibilities (without changing anything in the page code):

1815: Father promoted to chief surgeon
1818: Family moves into residential hospital wing (extremely morbid
environment to grow up in)
...or...
- Father promoted to chief surgeon (1815)
- Family moves into residential hospital wing (extremely morbid environment
to grow up in) (1818)

By structuring your content you are suddenly free to express yourself in any
way you like.
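Thomas's two sample outputs can indeed come from the one structured source above. A minimal sketch of the idea (not from the original thread; the element names follow his example markup, and the function name is illustrative) using only Python's standard library:

```python
# Sketch: one structured <timeline> source, several presentations.
# The markup is parsed once; the presentation is chosen only at output time.
import xml.etree.ElementTree as ET

TIMELINE_XML = """
<timeline>
  <item>
    <year>1815</year>
    <description>Father promoted to chief surgeon</description>
  </item>
  <item>
    <year>1818</year>
    <description>Family moves into residential hospital wing</description>
  </item>
</timeline>
"""

def render(xml_text, style):
    """Render the timeline as plain text in one of two styles,
    without changing anything in the source markup."""
    items = ET.fromstring(xml_text).findall("item")
    lines = []
    for item in items:
        year = item.findtext("year")
        desc = item.findtext("description")
        if style == "year-first":
            lines.append(f"{year}: {desc}")
        else:  # bulleted style, with the year moved to the end
            lines.append(f"- {desc} ({year})")
    return "\n".join(lines)

print(render(TIMELINE_XML, "year-first"))
print(render(TIMELINE_XML, "bulleted"))
```

In 2002 the same separation would more likely have been done with XSLT or CSS, but the principle is identical: because year and description are tagged, either can be placed anywhere in the output.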

> > > unhappy compromise with H2 or H3.
> > You might want to look into CSS (stylesheets).
>
> I object on principle, because they introduce unnecessary
> complexity to a simple problem.

As Jim Dabell also wrote, not using the markup correctly will make your pages
much more complex. I know this for a fact. I have seen too many people
waste their own and their visitors' time by making pages that do not have a
clear markup structure.

> > [...] I am speaking of the quote tags on markup languages.
>
> Are you claiming there's some well-thought-out solution for
> marking up the zillion different uses of quotation marks?
> (I'll believe it when I see it-- my background is in AI, and
> I don't accept handwavy ivory-tower theories.)

Yes, that is what I claim; you might want to read the specifications for web
markup languages. But again, you are referring to the visuals, while I am
referring to how the page is structured. If you properly identify which
parts of the text are quoted, and which are not, you have almost
unlimited freedom in how this structure should be presented visually. You
can make any variation of your zillion quotation marks with very little
effort if your text is structured correctly.

> I've been surfing the Web 12 hours a day for the last five years,
> so you can't tell me it's impossible. (I use 18-pt Geneva as my
> default font, and I keep my monitor next to my bed so I can change
> position for variety.)

Sleep tight then, because it is most likely straining your eyes.

Regards,
Thomas Baekdal
----
http://www.baekdal.com

Jorn Barger

Sep 11, 2002, 4:49:59 AM
"Thomas Baekdal" <notava...@baekdal.com> wrote in message news:<3d7e7edc$0$161$edfa...@dspool01.news.tele.dk>...
> > [...] (Usability
> > 'experts' like Nielsen are expert mainly in hoaxing people into
> > accepting their bogus experimental methodology.)
>
> That was a very ...uhm... interesting statement. Do you have any
> proof to support it?

Nielsen spent years claiming people don't scroll, apparently based
on one very-poorly designed experiment:
http://www.robotwisdom.com/issues/nielsen.html

Ben Shneiderman took one look at my weblog and said 'usability
experts' had shown that centered text was hard to read. But
obviously those tests didn't include individually-centered
*headlines*, so he was drawing the wrong inference from the
data.

Interface-effects always happen in some context, which on the
Web is always changing... so the only effective way to do Web
HF-research is to continually test new theories within a real
informative website.

Nielsen's website has tried at most a half dozen experiments in the
years I've been reading it. The 'American Memory' website that
Shneiderman was involved with has the most straitjacketed design
imaginable.

> > My pages are for intelligent readers. The popularity of my weblog
> > shows that lots of people are willing to read full-length articles
> > on the Web.
>
> How do you know that? Have you ever tried what I suggest? You might
> end up with a much higher popularity rate by doing so?

Browse my 1000 pages of text. You will see every sort of experiment.

> > [...] there's no correlation between the structure the reader sees
> > and the structure the parser sees. And the reader is what's
> > important.
>
> [...] You obviously do not know what you are talking about.

Oh right. The idea of structured markup is so rich and subtle that
the only people who can understand it are vastly more intelligent
than dumb old me. (Right.)

> Take XML. Here you have a very structured markup, but the
> resulting page is often presented in a very different way.

I'd like to hear you quote what statement of mine you imagine this
argument is a reply to. Do you really imagine I haven't grasped
the incredibly rich and subtle concept of stylesheets?

You claimed that structured markup helps the user to experience
a more-consistent interface. I replied that what the user
experiences has nothing to do with whether the page is marked
up structurally or not. Now you say XML lets the author
change the appearance in a consistent way... but that has
nothing to do with the user's experience.

(This argument for XML could be called the 'authoring tool'
argument-- this added flexibility belongs in the authoring tool,
not in the published document.)

> The XML code for your time line would be something similar to:
> ---------------
> <timeline>

I guess you haven't found: http://www.robotwisdom.com/web/biography.html
yet?

> <item>
> <year>1815</year>

Wow, my parser will be so grateful to have that extra hint! (Not.)

And my readers will be thrilled that all years are formatted in the
same style... (except when they need a different style, because
semantics and style are almost totally uncorrelated).

Also, you're cheating because 'year' is just a handwavy fake tag--
you'd want something more like <event year="1815" relationship="promotion"
person="Flaubert, Achille"> and including the further 'hinting' of
a special tag to pre-parse the year is just wasteful of bandwidth.

> This is of course not how you want to present it, but since your
> code is structured you can choose many different kinds of output -
> example: this is just some of the possibilities (without changing
> anything in the page code):
> 1815: Father promoted to chief surgeon
> 1818: Family moves into residential hospital wing (extremely morbid
> environment to grow up in)
> ...or...
> - Father promoted to chief surgeon (1815)
> - Family moves into residential hospital wing (extremely morbid
> environment to grow up in) (1818)

That sounds like a damn hairy algorithm (XSLT?). Has anyone really
made it work yet-- eg moving the date to the end of the line and
wrapping it in parens? Are the tools usable by anyone without a
year of training?

And again, notice that this is an authoring-tool argument. The
reader will see some version of the formatting that could have
been presented the same way without structures.
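For what it's worth, the specific transformation doubted here (pulling the date out, moving it to the end of the line and wrapping it in parens) needs no hairy XSLT pipeline. A sketch, assuming the <timeline> markup quoted earlier in the thread (function name illustrative, not from any post), using only Python's standard library:

```python
# Sketch: the "date moved to the end, wrapped in parens" transform that
# the post above calls a "damn hairy algorithm" -- a few lines in practice.
import xml.etree.ElementTree as ET

TIMELINE_XML = """
<timeline>
  <item>
    <year>1815</year>
    <description>Father promoted to chief surgeon</description>
  </item>
</timeline>
"""

def date_last(xml_text):
    """Render each <item> as '- description (year)'."""
    root = ET.fromstring(xml_text)
    return "\n".join(
        f"- {item.findtext('description')} ({item.findtext('year')})"
        for item in root.findall("item")
    )

print(date_last(TIMELINE_XML))
```

Whether this belongs in the published document or in an authoring tool is a separate question, but the reordering itself is cheap once the year is tagged.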

> By structuring your content you are suddenly free to express
> yourself in any way you like.

Wanna see a trick? (Look ma, no XML!)

1815: Father promoted to chief surgeon

- Father promoted to chief surgeon (1815)

See, I can express myself any way I want right now!

> not using the markup correctly will make your pages
> much more complex. I know this for a fact. I have seen too many
> people wasting their own and their visitors' time by making pages
> that do not have a clear markup structure.

<scratches head>

Do they browse in 'View Source'?

> > Are you claiming there's some well-thought-out solution for
> > marking up the zillion different uses of quotation marks?
>
> Yes that is what I claim; you might want to read the specifications
> for web markup languages.

No, rest assured I don't enjoy having the W3C's bamboo-shoots-under-
fingernails writing-style forced upon me. If they've solved the
AI-problem for quotes you ought to be able to summarise the key
points.

> But, again you are referring to the visuals, while I am
> referring to how the page is structured.

I'm talking about the visuals because that's all the user experiences.

> If you properly identify which
> parts of the text are quoted, and which are not, you
> have almost unlimited freedom in how this structure should be
> presented visually. You can make any variation of your zillion
> quotation marks with very little effort if your text is structured
> correctly.

<scratches brain>

And why would I want to do that?

Why would any author want to do semantic analysis for every 'span'
that can be called a quote, and explicitly tag each one with the
narrowly appropriate tag? So that they can instantly switch
between single and double quotes... as if that's such a common
occurrence it would justify the advance effort? And why would you
want this on the published page, even so, instead of in the
authoring environment?



> > I've been surfing the Web 12 hours a day for the last five years,
> > so you can't tell me it's impossible. (I use 18-pt Geneva as my
> > default font, and I keep my monitor next to my bed so I can change
> > position for variety.)
>
> Sleep tight then, because the strain on your eyes is most likely
> affected by it.

Thanks, Doc. (Can you also write long-distance 'scrips?)

Thomas Baekdal

Sep 11, 2002, 7:06:44 AM
"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...
> "Thomas Baekdal" <notava...@baekdal.com> wrote in message
news:<3d7e7edc$0$161$edfa...@dspool01.news.tele.dk>...

I see no point in continuing this discussion. I do not agree with what you
say, nor do I think most of your statements make sense. I gave you a number
of suggestions in my initial reply. These suggestions are what I would do if I
were to make a website with the same content as yours.

It is nice to know that you see your weblog as a success, and I wish you
and your site all the best in the future.

Regards,
Thomas Baekdal
-----------------

Daniel R. Tobias

Sep 14, 2002, 1:46:00 PM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02091...@posting.google.com>...

> > But, again you are referring to the visuals, while I am
> > referring to how the page is structured.
>
> I'm talking about the visuals because that's all the user experiences.

I have a lot of trouble understanding your position... you often talk
like a presentationalist (the usual variety of visual-oriented web
designer) and sneer at the structuralists' (the so-called "purist
camp") championing of deeper structure and not just appearances;
however, your own pages pretty much resemble the worst of the
straw-man "purist pages" so often lambasted by the presentationalist
camp -- plain, utilitarian, bare-bones, with lots of text and no
graphics or layout, and declining to use any feature not implemented
in the browsers around in 1994.

In effect, you end up taking on the worst elements of both of these
warring camps; all the disdain for structure of the presentationalists
without any of their sense of aesthetics; all the resistance to
innovation of the most conservative of the structuralists without any
of their concern for logical structure.

On the other hand, I attempt myself to take on the *best* parts of
both camps; keeping, as much as possible, a clean logical structure to
my sites, while not being afraid to carefully add enhancements to
improve the aesthetics and functionality for users of newer browsers
so long as they always degrade gracefully and keep the site accessible
(even if not as pretty) on older browsers. Some of the things you
deride, like stylesheets, are very useful to this, allowing the HTML
code to be just about as "bare-bones" as a straw-man 1994 purist page
while looking much nicer to modern users.

Jorn Barger

Sep 15, 2002, 6:28:41 AM
d...@tobias.name (Daniel R. Tobias) wrote in message news:<aab17256.02091...@posting.google.com>...

> I have a lot of trouble understanding your position... you often talk
> like a presentationalist (the usual variety of visual-oriented web
> designer) and sneer at the structuralists' (the so-called "purist
> camp") championing of deeper structure and not just appearances;
> however, your own pages pretty much resemble the worst of the
> straw-man "purist pages" so often lambasted by the presentationalist
> camp -- plain, utilitarian, bare-bones, with lots of text and no
> graphics or layout, and declining to use any feature not implemented
> in the browsers around in 1994.

Most assuredly, I live entirely outside your one-dimensional worldview.

> In effect, you end up taking on the worst elements of both of these
> warring camps; all the disdain for structure of the presentationalists
> without any of their sense of aesthetics; all the resistance to
> innovation of the most conservative of the structuralists without any
> of their concern for logical structure.

I disdain 'structure' because I'm trying to communicate with humans,
not machines. I disdain TABLEs because they slow down rendering.

> On the other hand, I attempt myself to take on the *best* parts of
> both camps; keeping, as much as possible, a clean logical structure to
> my sites, while not being afraid to carefully add enhancements to
> improve the aesthetics and functionality for users of newer browsers
> so long as they always degrade gracefully and keep the site accessible
> (even if not as pretty) on older browsers. Some of the things you
> deride, like stylesheets, are very useful to this, allowing the HTML
> code to be just about as "bare-bones" as a straw-man 1994 purist page
> while looking much nicer to modern users.

I'd say you run a tidy shop in the mall, selling Elvis-on-black-velvet,
while I have a sprawling studio/lab in the boondocks, continually
innovating whole new genres of art.

And I invite people to appropriate my innovations so long as they link
the original/source/inspiration. You have little or nothing original
to offer.

Tina Holmboe

Sep 15, 2002, 9:13:45 AM
jo...@enteract.com (Jorn Barger) exclaimed in <16e613ec.02091...@posting.google.com>:

>> graphics or layout, and declining to use any feature not implemented
>> in the browsers around in 1994.
>
> Most assuredly, I live entirely outside your one-dimensional worldview.

One thing which has alternately fascinated and horrified me during the
last ten or so years is the odd arrogance that seems to spring from people
involved with the technology behind information.

When Daniel suggests one way of viewing the world, Jorn answers with
a derogatory statement. No humility, no thought as to whether he might
not actually be - dare I say it - wrong. Or just different?

Perhaps, just perhaps, it is your view of the world that exists in only
one dimension?

It would be quite refreshing to, for once, see someone accept that his
was not the only valuable view.


> I disdain 'structure' because I'm trying to communicate with humans,
> not machines. I disdain TABLEs because they slow down rendering.

This seems to be a product of the 1990s - the idea that to communicate
with human beings, only your own ideas, your own mental picture of
the world, are worth considering - never the recipient's.

Why? What is it about other humans that makes you so disdainful of them,
Jorn?

Most people I know prefer their information structured to some extent. Odd,
isn't it?

> I'd say you run a tidy shop in the mall, selling Elvis-on-black-velvet,
> while I have a sprawling studio/lab in the boondocks, continually
> innovating whole new genres of art.

And the value added to each? Is there an objective value added to these
two seemingly pointless occupations which makes them less a waste of
time? Is there anything which objectively suggests to the passerby that
*this* shop, or *that* studio, is worth more than the other, despite my
own taste and emotions in the matter?

If not, why do you hate the tidy shop so much that you cannot appreciate
that others - not you, you, you - find something original in it?

Or, perhaps, I believe too much in

"Live and let live"

and not enough in

"Live life to the fullest! But, first, let me explain to you what is
important, original, and the Right Things to live with and for ... "


> And I invite people to appropriate my innovations so long as they link
> the original/source/inspiration. You have little or nothing original
> to offer.

"It's about learning to detach from your ego and see the page as others
will see it."

How about peeking out from your little studio, ignoring that gleaming
big thing - it's the sun - and detaching from it enough to see that others
find pleasure in both the tidy little shop and Elvis.

It's not as if accepting other people and their views and tastes reduces
your own to ashes.

If not, I'd have to ask what new and original thinking is brought to us
by http://www.robotwisdom.com/web/ - but I'd like to make a comment
about it nevertheless:

When suggesting how to present links, and how to make sure that a human
searching for information can best be satisfied, it would be a good idea
to practice what you preach - and avoid linking to pages of your own which
no longer exist.

Not that claiming content you do not have is any more original than
defending your own views by attacking those of others. I find little
originality in your material.

But, just perhaps, I am living in a one-dimensional world, and cannot
grasp that [more] IS, no matter what opposing view I might falsely believe
I have [1], the perfect application of the First Law of Linktext.


[1]
I know - I'm influenced by Nielsen. It's just odd, I cannot find any Ode
to David Siegel anywhere on your site. You really should add one.


PS:
ftp://ftp.mcs.net/mcsnet.users/jorn/newsreader.txt
http://www.robotwisdom.com/wb/wishlist.html

--
- Tina.

Stan Brown

Sep 15, 2002, 2:06:06 PM
Jorn Barger <jo...@enteract.com> wrote in
comp.infosystems.www.authoring.site-design:

>I disdain 'structure' because I'm trying to communicate with humans,
>not machines.

Do you truly think that structure has nothing to do with how well
humans understand your message (whatever it may be)?

A millisecond's reflection should be enough to show you how
structure can aid communication, or impede it. Think about your
daily newspaper, with the stories not in any order, jumbled in
amongst the classified ads and the comics.

--
Stan Brown, Oak Road Systems, Cortland County, New York, USA
http://OakRoadSystems.com
"Thoroughness. I always tell my students, but they are
constitutionally averse to painstaking work."
-- Emma Thompson, in /Wit/ (2000)

Stan Brown

Sep 15, 2002, 2:08:47 PM
Tina Holmboe <ti...@elfi.org> wrote in
comp.infosystems.www.authoring.site-design:

>jo...@enteract.com (Jorn Barger) exclaimed in <16e613ec.02091...@posting.google.com>:
>
>>> graphics or layout, and declining to use any feature not implemented
>>> in the browsers around in 1994.
>>
>> Most assuredly, I live entirely outside your one-dimensional worldview.
>
> One thing which has alternately fascinated and horrified me during the
> last ten or so years is the odd arrogance that seems to spring from people
> involved with the technology behind information.

It's not just technology, Tina. Everywhere I see people boasting of
their ignorance, and their disdain for everyone else's way of doing
things. (Read Miss Manners' column for a couple of weeks, for
instance.)

That's fine in a Beethoven or a Shakespeare, someone who actually
_has_ a better way of doing things (though someone who is truly
superior to others generally has no need to say so). But to hear
this from so many people whose own ideas are mediocre at best is
truly disappointing.

Thomas Baekdal

Sep 15, 2002, 2:43:18 PM
"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...
> d...@tobias.name (Daniel R. Tobias) wrote in message
news:<aab17256.02091...@posting.google.com>...

> > In effect, you end up taking on the worst elements of both of these
> > warring camps; all the disdain for structure of the presentationalists
> > without any of their sense of aesthetics; all the resistance to
> > innovation of the most conservative of the structuralists without any
> > of their concern for logical structure.
>
> I disdain 'structure' because I'm trying to communicate with humans,
> not machines. I disdain TABLEs because they slow down rendering.

I have to make a comment on this one, because this is why I do not think you
(Jorn) really understand what structuring is all about. How does "tables"
fit in as a reply to Daniel's reply to you?

When we are discussing structure, the only way tables could fit in would be
to structure schematic information - like some of the things you would make
in a spreadsheet (using rows and columns). You cannot rule out tables, since
you would then have no way of effectively displaying schematic information -
just as <p> holds paragraphs, <h[x]> holds headings and <b> holds bold text.

By really understanding what each element in the HTML markup language does,
your problem with slow rendering would be greatly reduced. Structuring
your page correctly will make it render faster in any browser; if you do
not do this, you will have to add extra code to "fake" what the correct
markup would give you for free.

I think you are referring to using tables to lay out your page as a whole
(placing different kinds of information, navigation and so forth in separate
cells). This is not structuring, this is design - and yes, it could be done
more efficiently with other technologies. One would be
stylesheets, which you can only use well if your page is structured - and
a technology you also disdain.

You write that you want to communicate with "humans not machines". This is
very important, and you should keep doing it. In fact, too many web designers
forget this, causing millions of sites to be useless to their visitors.
But... you cannot communicate with a computer; it is merely a tool that
carries out your commands. Since it is only a tool, I do not understand why
you do not use this tool to make communication easier between you and other
humans.

The way I see your statements, it is as if you had a hammer, but used not
the hammer's steel head but its wooden handle to drive nails into the
wall. Not only that, but you then also complain that the hammer does not
work.

BTW: to Daniel and Tina: Thank you. I do agree with both of you, and have
enjoyed reading your comments.

Regards,
Thomas Baekdal
------------------

Jorn Barger

Sep 15, 2002, 3:58:27 PM
ti...@elfi.org (Tina Holmboe) wrote in message news:<dI%g9.8361$e5.16...@newsb.telia.net>...

> One thing which has alternately fascinated and horrified me
> during the last ten or so years is the odd arrogance that seems to
> spring from people involved with the technology behind
> information.

I agree 100% that the structural-markup crowd is horrifyingly
arrogant. My litmus-test for arrogance is inability to paraphrase
your opponent's point-of-view, though, and I've never had any
problem with the 'theory' of structural markup.



> When Daniel suggests one way of viewing the world, Jorn answers with
> a derogatory statement. No humility, no thought as to whether he might
> not actually be - dare I say it - wrong. Or just different?
> Perhaps, just perhaps, it is your view of the world that exists in only
> one dimension?

Daniel quite explicitly stated that he only understood two poles,
the structural and the presentational, and that he was baffled by
my failure to fit that one-dimensional analysis. I was just
confirming (paraphrasing) him.

> It would be quite refreshing to, for once, see someone accept
> that his was not the only valuable view.

Every day for the last five years I've linked three to ten Web
articles that I found worth reading, expressing every viewpoint
under the sun. 'Eclectic' is the most common description others
use.

> > I disdain 'structure' because I'm trying to communicate with humans,
> > not machines. I disdain TABLEs because they slow down rendering.
>
> This seems to be a product of the 1990s - the idea that to communicate
> with human beings, only your own ideas, your own mental picture of
> the world, are worth considering - never the recipient's.

Total non sequitur.

> Why? What is it about other humans that makes you so disdainful
> of them, Jorn?
> Most people I know prefer their information structured to some
> extent. Odd, isn't it?

As I recently asked in an overlapping xpost, do they surf in
view-source mode? If not, how do they know whether the markup
is structural or not?

> > I'd say you run a tidy shop in the mall, selling Elvis-on-black-velvet,
> > while I have a sprawling studio/lab in the boondocks, continually
> > innovating whole new genres of art.
>
> And the value added to each? Is there an objective value added to these
> two seemingly pointless occupations which makes them less a waste of
> time? Is there anything which objectively suggests to the passerby that
> *this* shop, or *that* studio, is worth more than the other, despite my
> own taste and emotions in the matter?

sci.philosophy is on the other campus.

> If not, why do you hate the tidy shop so much

'hate'?

> that you cannot appreciate that others - not you, you, you -
> find something original in it?

Original isn't relative. If Daniel thinks he's made original
points, he's welcome to submit counterexamples.

> How about peeking out from your little studio, ignoring that gleaming
> big thing - it's the sun - and detach from it enough to see that others
> find pleasure in both the tidy little shop and Elvis.

The Web offers no end of Elvis shops. I'm aiming 20 years beyond
today's Web models.

> [...] I'd have to ask what new and original thinking is brought to us
> by http://www.robotwisdom.com/web/

The bottom of the page offers what I call a 'ToC footer' that links
discussions and examples of most of my ideas. Briefly, pages that
maximise info-density the way FAQs do, with hundreds of links
embedded within the text as text-buttons.

> [...] avoid linking to pages of your own which no longer exist.

My website offers tens of thousands of links, 50% of which are
surely broken. If my goal was to maintain links I wouldn't have
time to experiment with design theories.

My 'open web content license' encourages people to copy my pages
and update the links. (I also hope to code a link-maintenance
utility someday, but that's not a shortterm solution.)

> Not that claiming content you do not have is any more original than
> defending your own views by attacking those of others. I find little
> originality in your material.

Take my litmus-test, or stand condemned of arrogance!

Daniel R. Tobias

Sep 15, 2002, 4:42:07 PM
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02091...@posting.google.com>...

> I'd say you run a tidy shop in the mall, selling Elvis-on-black-velvet,
> while I have a sprawling studio/lab in the boondocks, continually
> innovating whole new genres of art.

I've been called a lot of things by people debating with me, online
and elsewhere, but never, to the best of my recollection, have I been
compared to the sellers of velvet Elvises (Elvi?) before. That's a
new one. I'd think that this label would better apply to the
developers of commercial Web sites (who generally fall in what I term
the presentationalist camp) than it would to anything that I maintain
in my personal sites, but to each his own, I guess.

--
Dan

Tina Holmboe

Sep 15, 2002, 5:34:01 PM

>> One thing which has alternatingly fascinated and horrified me
>> during the last ten or so years is the odd arrogance seeming to
>> spring from people involved with the technology behind
>> information.
>
> I agree 100% that the structural-markup crowd is horrifyingly
> arrogant. My litmus-test for arrogance is inability to paraphrase
> your opponent's point-of-view, though, and I've never had any
> problem with the 'theory' of structural markup.

That wasn't original either, you know. Have you considered applying your
litmus test to your litmus test ?

>> How about peeking out from your little studio, ignoring that gleaming
>> big thing - it's the sun - and detach from it enough to see that others
>> find pleasure in both the tidy little shop and Elvis.
>
> The Web offers no end of Elvis shops. I'm aiming 20 years beyond
> today's Web models.

Isn't it, then, odd that people disagree ? Is it their own problems which
make them 'fail' to understand that you are forward ?

Could it not be that you are suffering from the Boo.com illusion; you
are under the belief that you are twenty years ahead with things that
were understood to be not really worth it five years ago -and those that
point it out to you are arrogant, one-dimensional, and wrong- ?

Or am I just another naysayer ?


> The bottom of the page offers what I call a 'ToC footer' that links
> discussions and examples of most of my ideas. Briefly, pages that
> maximise info-density the way FAQs do, with hundreds of links
> embedded within the text as text-buttons.

I understand that. What I was wondering was what about your pages were
original ?

The table of contents footer ?


The pages with 'hundreds' of links embedded within the text as
'text-buttons' ?

Why,

"There is a new profession of trail blazers, those who find delight
in the task of establishing useful trails through the enormous mass
of the common record."

could almost surely be made to fit what you do. It could even be
seen as a compliment.

But is it original ?


> Take my litmus-test, or stand condemned of arrogance!

Ah, you fear that I am unable to restate your ideas, or unable to give
the meaning in another form ? Should I take your litmus test then,
and take your point of view and rewrite it in another form ?

Perhaps, right away, take yours as my own ? Why ? My litmus test on
arrogance is not yours, and your litmus-test is not mine. Mine can be
summarized as: why not accept that whilst you do not like others' views,
they are neither 'better' nor 'worse' than your own - just 'different'.

Perhaps what I am saying is that I am not interested in paraphrasing
your point of view - only in leaving it as much alone as you are willing
to leave that of others. And perhaps not.

--

Jorn Barger

Sep 15, 2002, 7:07:24 PM
"Thomas Baekdal" <notava...@baekdal.com> wrote in message news:<3d84d4d9$0$66784$edfa...@dspool01.news.tele.dk>...

> > > In effect, you end up taking on the worst elements of both of these
> > > warring camps; all the disdain for structure of the presentationalists
> > > without any of their sense of aesthetics; all the resistance to
> > > innovation of the most conservative of the structuralists without any
> > > of their concern for logical structure.
> > I disdain 'structure' because I'm trying to communicate with humans,
> > not machines. I disdain TABLEs because they slow down rendering.
>
> I have to make a comment to this one because this is why I do not think you
> (John) really understand what structuring is all about. How does "tables"
> fit in as a reply to Daniel's reply to you?

He mentioned two camps, I took each in turn.

> When we are discussing structure, the only way tables could fit in would be
> to structure schematic information - like some of the things you would make
> in a spreadsheet (using rows and columns). You cannot rule out tables since
> you will then have no way of effectively displaying schematic information -
> just as <p> holds paragraphs, <h[x]> holds headings and <b> holds bold text.

<pre>
x x
x x
xxxx x x
xxxx Nogent-poor x x
xxxxxxxxxxxxxxxxxxx xxxxx x
0 1 2 3 4 5 6 7 8 9 0 1 2
| part one | part two | part three
12333334444455555566666666666666123333344566611122222222223333444445</pre>

(A graph of Flaubert's Sentimental Education)

> By really understanding what each element in the HTML markup language does,
> your problem with slow rendering would be minimized extensively. Structuring
> your page correctly will make your page render faster in any browser - if
> you do not, you will have to add extra code to "fake" what the correct
> markup would give you for free.

I don't believe this-- are you talking about saving the parser
a minuscule amount of effort by closing your <p>s, etc?

Daniel R. Tobias

Sep 15, 2002, 7:18:36 PM
"Thomas Baekdal" <notava...@baekdal.com> wrote in message news:<3d84d4d9$0$66784$edfa...@dspool01.news.tele.dk>...

> The way I see your statements is like if you had a hammer,

...I'd hammer in the morning! :)

> then you would
> not use the hammer's steal head

Hey, I *bought* my hammer... don't accuse me of stealing it! :)

> BTW: to Daniel and Tina: Thank you. I do agree with both of you, and have
> enjoyed reading your comments.

You're welcome!

--
Dan

Jorn Barger

Sep 16, 2002, 9:19:31 AM
qx1...@bigfoot.com (Stan Brown) wrote in message news:<MPG.17ee9615f...@news.odyssey.net>...

> >I disdain 'structure' because I'm trying to communicate with humans,
> >not machines.
>
> Do you truly think that structure has nothing to do with how well
> humans understand your message (whatever it may be)?

The topic is structured *markup*.

> A millisecond's reflection should be enough to show you how
> structure can aid communication, or impede it. Think about your
> daily newspaper, with the stories not in any order, jumbled in
> amongst the classified ads and the comics.

Uh-yup.

> [...] Everywhere I see people boasting of
> their ignorance, and their disdain for everyone else's way of doing
> things. [...] to hear
> this from so many people whose own ideas are mediocre at best is
> truly disappointing.

So, you're dismissing my ideas as mediocre without knowing the
first thing about them?

Jorn Barger

Sep 16, 2002, 9:38:21 AM
ti...@elfi.org (Tina Holmboe) wrote in message news:<d17h9.8433$e5.16...@newsb.telia.net>...
> > [...] My litmus-test for arrogance is inability to paraphrase

> > your opponent's point-of-view, though, and I've never had any
> > problem with the 'theory' of structural markup.
>
> That wasn't original either, you know. Have you considered applying your
> litmus test to your litmus test ?

There's a weird convention in the arts that every single thing
needs to be totally original. That's never been my view.

> > I'm aiming 20 years beyond today's Web models.
>
> Isn't it, then, odd that people disagree ? Is it their own problems
> which make them 'fail' to understand that you are forward ?

They can't 'disagree' until they 'comprehend'.

> > The bottom of the page offers what I call a 'ToC footer' that links
> > discussions and examples of most of my ideas. Briefly, pages that
> > maximise info-density the way FAQs do, with hundreds of links
> > embedded within the text as text-buttons.
>
> I understand that. What I was wondering was what about your pages were
> original ? The table of contents footer ?

Yes, for starters I don't think you can show me any other webpage that
recommends including the ToC at the bottom of each page.

Additionally: dense 'text button' linking, embedded in faq-like
overviews of various topics. Also TABLE-free pages that achieve
minimal esthetic variety by choice of text/link colors and by
sprinkling BLOCKQUOTEs thruout. Also 'one-layer' design that
eliminates menus. Also a section of pages devoted to analysing
major websites. Also annotated literature that does away with
hyperlinking. Also a principle of using extracts-quotes-images
to enliven links. Also a way of using timelines as the organising
principle for large numbers of related links. This sentence is
a test to see if she's actually reading this list. Also a
systematic use of Google Groups to document posters' history.

> The pages with 'hundreds' of links embedded within the text as
> 'text-buttons' ? Why,
> "There is a new profession of trail blazers, those who find delight
> in the task of establishing useful trails through the enormous mass
> of the common record."
> could almost surely be made to fit on what you do. It could even be
> seen as a compliment. But is it original ?

I could say 'Having good ideas is a good idea' and then claim all
future good ideas were plagiarism. But that would be wrong.

> My litmus test on
> arrogance is not yours, and your litmus-test is not mine. Mine can be
> summarized as: why not accept that whilst you do not like other's views,
> they are neither 'better' nor 'worse' than your own - just 'different'.

Farewell, knowledge; farewell, history...!

> Perhaps what I am saying is that I am not interested in paraphrasing
> your point of view - only in leaving it as much alone as you are willing
> to leave that of others. And perhaps not.

Plonk.

Tina Holmboe

Sep 16, 2002, 9:56:31 AM
jo...@enteract.com (Jorn Barger) exclaimed in <16e613ec.0209...@posting.google.com>:

>> Isn't it, then, odd that people disagree ? Is it their own problems
>> which make them 'fail' to understand that you are forward ?
>
> They can't 'disagree' until they 'comprehend'.

Yes ...

> Yes, for starters I don't think you can show me any other webpage that
> recommends including the ToC at the bottom of each page.

No, but I could show you a heap of them that *do*.

> minimal esthetic variety by choice of text/link colors and by
> spinkling BLOCKQUOTEs thruout. Also 'one-layer' design that

Yes, I did notice your labelling of non-quoted material as quoted; and
it puzzled me how you expect anyone to warble gaz with all those slipz
cannae in your mouth.

You communicate by rewriting the way others perceive - and expect, then
that they follow you to the very gates of Hell. King George would be
ashamed.


>> "There is a new profession of trail blazers, those who find delight
>> in the task of establishing useful trails through the enormous mass
>> of the common record."
>> could almost surely be made to fit on what you do. It could even be
>> seen as a compliment. But is it original ?
>
> I could say 'Having good ideas is a good idea' and then claim all
> future good ideas were plagiarism. But that would be wrong.
>
>> My litmus test on
>> arrogance is not yours, and your litmus-test is not mine. Mine can be
>> summarized as: why not accept that whilst you do not like other's views,
>> they are neither 'better' nor 'worse' than your own - just 'different'.
>
> Farewell, knowledge; farewell, history...!

Which is ironic, as is the plonk, when you observe that you do not even
understand that your suggested originality is from 1945. Goodbye, Mr. Barger;
yours is not a loss.

--
- Tina Holmboe Greytower Technologies
ti...@greytower.net http://www.greytower.net/
[+46] 0708 557 905

Isofarro

Sep 16, 2002, 3:30:08 PM
Jorn Barger wrote:

> The Web offers no end of Elvis shops. I'm aiming 20 years beyond
> today's Web models.

So in your World Wide Web 20 years from now, there'll be thousands of
trillions of pages without a markup structure, search engines falling
over due to the sheer weight of unstructured keywords (with no semantic
meaning or essence), and human beings still trying to find a website
that explains something by using a browser and a mouse.

I don't like the sound of that. I'd much rather have an intelligent
agent trawl the web for me, finding those pages that match the fuzzy
criteria I set (where search engines fail), and then return me a list
of resources that closely match my requirements.

A human doing this is so untenable considering the volume of
information. While using a structured markup (allied with RDF
knowledgebases) allows an intelligent agent to make some decent
(artificial) insight into content.

I would just tell my PC to "find me some interesting stuff about
foobar", while I go off to meet some friends for lunch, and then
leisurely read through the results tailored to my individual taste,
while you'll still be trawling manually in a totally unstructured way,
having skipped lunch and missed out on some friendly chat.

--
Iso.
FAQs: http://html-faq.com http://alt-html.org http://allmyfaqs.com/
Recommended Hosting: http://www.affordablehost.com/
AnyBrowser Campaign: http://www.anybrowser.org/campaign/

Isofarro

Sep 16, 2002, 3:31:59 PM
Jorn Barger wrote:

> are you talking about saving the parser
> a miniscule amount of effort by closing your <p>s, etc

Saving time and mistakes by removing ambiguity.
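
A rough sketch of that ambiguity, for what it's worth: with implied end
tags, the parser (not the author) has to decide where each element ends.
This toy function is not a real HTML parser, and paragraphBoundaries is
a name invented for this sketch only:

```javascript
// Toy illustration of implied end tags: with no </p>, the consumer of
// the markup must infer where each paragraph ends. Not a real parser;
// paragraphBoundaries is a hypothetical helper for this sketch only.
function paragraphBoundaries(html) {
  return html
    .split(/<p>/i)        // break at each <p> start tag
    .slice(1)             // drop whatever preceded the first <p>
    .map(s => s.replace(/<\/p>[\s\S]*$/i, "").trim());
}

// "<p>one<p>two" carries no stated boundaries; the reader of the
// markup has to supply them:
console.log(paragraphBoundaries("<p>one<p>two"));
// "<p>one</p><p>two</p>" states them explicitly:
console.log(paragraphBoundaries("<p>one</p><p>two</p>"));
```

With explicit </p> tags the boundaries are stated once by the author;
without them, every consumer of the page has to re-derive them.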

Jorn Barger

Sep 16, 2002, 3:20:42 PM
Isofarro <spam...@spamdetector.co.uk> wrote in
news:<0gb5ma...@sidious.isolani.co.uk>:

> > The Web offers no end of Elvis shops. I'm aiming 20 years beyond
> > today's Web models.
>
> So in your World Wide Web 20 years from now, there'll be thousands of
> trillions of pages without a markup structure, search engines falling
> over due to the sheer weight of unstructured keywords (with no semantic
> meaning or essence), and human beings still trying to find a website
> that explains something by using a browser and a mouse.

Take a deep breath, and I will (re)explain my view of the Semantic
Web.

My _primary goal_ in life since 1972 (when I wrote my first Fortran
simulation [1] of human behavior) has been an effective limited
vocabulary for the hardest, *psychological* parts of semantics.

Until you've tried to nail these down, you're bound to imagine that
it's a trivial, common-sense problem... but it's not. As soon as
you venture into any representation problem that includes human
motives-- which includes 99.9% of real Web semantics-- you get
caught on the slippery slope that's baffled everyone since
Aristotle.

So I don't believe TimBL&co are going to be able to define any
useful semantic labels for the bulk of webpage markup. (If they
could, it would be more useful in a META header than in phony
style=semantics tags.)

MY ALTERNATIVE is what I've begun calling the 'Necessary Web',
which will be built entirely by hand, by individual enthusiasts.

This will be a backbone of FAQ-like webpages that treat every
important topic, surveying all web-resources for each topic
and making them accessible via the types of web-design I've
been exploring: especially timelines with text-buttons.

Only AFTER this process is well in hand can we begin to define
the topic maps that these pages implicitly define... and only
then can we go back and define the necessary semantics.

> I don't like the sound of that. I'd much rather have an intelligent
> agent trawl the web for me, finding those pages that match the fuzzy
> criteria I set (where search engines fail), and then return me a list
> of resources that closely match my requirements.

Nice fantasy! (Show me the demo...)

> A human doing this is so untenable considering the volume of
> information. While using a structured markup (allied with RDF
> knowledgebases) allows an intelligent agent to make some decent
> (artificial) insight into content.

You're demanding that web-authors do the AI/knowledge-representation.
This is ludicrous.

> I would just tell my PC to "find me some interesting stuff about
> foobar", while I go off to meet some friends for lunch, and then
> leisurely read through the results tailored to my individual taste,
> while you'll still be trawling manually in a totally unstructured way,
> having skipped lunch and missed out on some friendly chat.

Actually, I've built dozens of near-optimal resource pages on dozens
of topics (so I know what I'm talking about, semantics-wise), while
you're still building castles in the air (and don't).


[1] http://www.robotwisdom.com/ai/jbai.html
--
Robot Wisdom Weblog: http://www.robotwisdom.com/ "If you worry that
reading the news online will rob you of the serendipity factor you get
with the newspaper, Jorn Barger solves the problem." --Dan Gillmor

Bradley K. Sherman

Sep 16, 2002, 3:44:32 PM
In article <16e613ec.0209...@posting.google.com>,
Jorn Barger <jo...@enteract.com> wrote:
> ...

>So I don't believe TimBL&co are going to be able to define any
>useful semantic labels for the bulk of webpage markup. (If they
>could, it would be more useful in a META header than in phony
>style=semantics tags.)
> ...

This is an understatement. There are thousands of talented
people attempting to categorize knowledge and they are
*all* failing. XML is a disaster. Ontologies are a disaster.
XML plus Ontologies is synergistic: the combination is
a bloated incomprehensible catastrophe. The Semantic Web
is a red herring.

Just creating a set of keywords is very hard. Assigning
the keywords to the polymorphic phantasms we call 'things'
is perilous. Attempting to place the things and keywords
in hierarchies is mind-boggling and every group begins
de novo and ends up with a similar yet different plate of
simplified semiotic spaghetti.

If only Ph.D's really did have to know some philosophy!

The web of 2050 is going to look a lot like the web of 1995.

--bks

Isofarro

Sep 16, 2002, 5:35:59 PM
Jorn Barger wrote:

[Finding information in a vastly growing knowledgebase]

>> A human doing this is so untenable considering the volume of
>> information. While using a structured markup (allied with RDF
>> knowledgebases) allows an intelligent agent to make some decent
>> (artificial) insight into content.
>
> You're demanding that web-authors do the AI/knowledge-representation.

Nope. I'm insisting that authors structure their work into coherent
thought (preferably unambiguously). The RDF-knowledgebase (a collection
of known facts) would be totally independent of the author, but
customisable via masking or views to the reader (imposing choice and
interest selection on a group of resources involved in describing
facts), and that can be leveraged (along with a customised or
personalised profile of reader interests) into links to information the
reader finds relevant (unlike typical sites today that focus on what
the author finds relevant).

Your technique (judging by your excellent resource robotwisdom.com)
revolves around the author anticipating which links to other resources
are appropriate for the reader. So at best you are providing a
generalised (or should that be biased) linking of resources that you
judge useful. Perhaps I don't really want to know about James Joyce, so
I would prefer not to see links to material on that topic. There is
also the ongoing problem of overlinking - what happens when there's too
many links to too many excellent resources? How do we rationalise that
down to something more manageable and appropriate for the reader?

> Actually, I've built dozens of near-optimal resource pages on dozens
> of topics (so I know what I'm talking about, semantics-wise),

From http://www.robotwisdom.com/web/
"The defining goal of this style is to deliver the best possible
response to search-engine queries on a given topic"

IMO, search engines tend to make better use of structured text in
weighting returned results. Structured headers provide valuable
metainformation to search engines, as well as to human readers.

When reaching one of your all-in-one portal pages, it would be very
useful for a reader to be able to extract a structure from your page,
such as a document outline made out of all the marked-up header elements -
this breaks a hoard of information down into manageable chunks, allowing a
reader to find the specific content they wanted more quickly.

Yes, you could achieve that by using stack-loads of interpage links
from a table of contents, but then the onus is on you to keep that
up to date. With structured markup, a simple script running as a
javascript bookmarklet could generate that outline on request when it's
needed -- and that's the valuable trait of information: being correctly
available in a timely and accessible manner.
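
That outline idea fits in a few lines. Assuming the headers have
already been harvested into {level, text} pairs, a sketch (buildOutline
and the pair format are assumptions for illustration, not an existing
API):

```javascript
// Sketch of the bookmarklet idea: turn a page's headers into an
// indented outline. buildOutline and the {level, text} shape are
// assumptions for this sketch, not an existing API.
function buildOutline(headers) {
  return headers
    .map(h => "  ".repeat(h.level - 1) + h.text)
    .join("\n");
}

// In a real bookmarklet the pairs would come from the live document:
// Array.from(document.querySelectorAll("h1,h2,h3,h4,h5,h6"))
//   .map(el => ({ level: Number(el.tagName[1]), text: el.textContent.trim() }));

console.log(buildOutline([
  { level: 1, text: "Gustave Flaubert" },
  { level: 2, text: "Madame Bovary" },
  { level: 2, text: "Sentimental Education" },
]));
```

Nothing in it needs the author's cooperation beyond honest header
markup, which is the point being argued.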


> while
> you're still building castles in the air (and don't).

Fair comment - it's a challenge I'm enjoying, and I'm going to give it a
try. If it's not possible, at least I'll be having fun while finding
that out. In this case the road to progress is more fun than the
endpoint itself.

> [1] http://www.robotwisdom.com/ai/jbai.html

It's a pity Websense has filtered your site from work, so I guess
I'll have to peruse it at home.

Jorn Barger

Sep 16, 2002, 6:35:10 PM
Isofarro <spam...@spamdetector.co.uk> wrote in
news:<1si5ma...@sidious.isolani.co.uk>:

> > You're demanding that web-authors do the AI/knowledge-representation.
>
> Nope. I'm insisting that authors structure their work into coherent
> thought (preferably unambiguously).

...and that this be reflected somehow in the markup-underlayer?

> The RDF-knowledgebase (a collection of known facts)

A la Cyc? Built by whom when?

> would be totally independent of the author,

...assuming that a new race of master-authors deliver their promised
goods in a timely manner?

> but customisable via masking or views to the reader (imposing choice
> and interest selection on a group of resources involved in describing
> facts),

So behind every amateur's webpage, there will be a superheroic
'necessary' web-database that looks over the reader's shoulder
and gives them what the author was incapable of giving?

(That's just like my Necessary Web, only handwavier.)

> and that can be leveraged (along with a customised or
> personalised profile of reader interests)

You have NO IDEA how hard this aspect would really be. (My
interests change hourly.)

> into links to information the reader finds relevant

Ah, so you just have to solve the 'omniscience problem'?

> (unlike typical sites today that focus on what
> the author finds relevant).

Many years ago I was jeered in comp.text.xml (I think) for saying
XML was a neurotic projection... but this still feels like that
to me. Geek-culture feels awkward trying to empathise with the
average reader, and dreams of a robot that fills in those
awkward gaps. But that's exactly where my version of the
'Necessary Web' differs from TimBL's Semantic Web-- I know the
awkwardness has to be addressed directly, by humans, before
the robot can be built.

> Your technique (judging by your excellent resource robotwisdom.com)
> revolves around the author anticipating which links to other resources
> are appropriate for the reader.

Yes, developing a sense of priorities. (The best way to develop this
sense is to use the pages you build.) I expect that these priorities
will become a shared, evolving, community Knowledge.

> So at best you are providing a
> generalised (or should that be biased) linking of resources that you
> judge useful.

And soliciting feedback from others, too.

> Perhaps I don't really want to know about James Joyce, so
> I would prefer not to see links to material on that topic.

If you're thinking of this as part of your 'personalised profile'
then I think you're wildly oversimplifying and mis-imagining
how links work. You'd be hard-pressed to find any links on
my pages that could really be beneficially suppressed based on
realistic personal preferences.

> There is
> also the ongoing problem of overlinking - what happens when there's too
> many links to too many excellent resources? How do we rationalise that
> down to something more manageable and appropriate for the reader?

How in the world do you expect your rdf-knowledgebase to solve this
problem?!? It's a matter of human judgment-calls. Authors who
can't get the hang of it will find their pages don't get recommended.

> "The defining goal of this style is to deliver the best possible
> response to search-engine queries on a given topic"
> IMO, search engines tend to make better use of structured text in
> weighting returned results. Structured headers provide valuable
> metainformation to search engines, as well as to human readers.

'Search engines' is a meaningless generalisation-- do you mean Google?

Google used to (and may still) lower my rank because I find H1
unesthetically large. This is Google's problem.

> When reaching one of your all-in-one portal pages, it would be very
> useful for a reader to be able to extract a struture from your page
> such as a document outline made out of all the marker header elements -
> this breaks a hoard of information down to managable chunks, allowing a
> reader to find the specific content they wanted quicker.

No, I just don't buy this. I expect people to scroll down and get
the immediate 'tactile' model of my page that way. Fancy techno-
gizmos have never demonstrated any lasting value.

> Yes, you could achieve that by using stack-loads of interpage links
> from a table of content, but then the onus is on you to keep that
> uptodate. With structured markup, a simple script running as a
> javascript bookmarklet could generate that outline on request when its
> needed -- and that's the valuable trait of information, being correctly
> available in a timely and accessible manner.

Sounds like you're just shifting the onus to the original author.

> > you're still building castles in the air (and don't).
>
> Fair comment - its a challenge I'm enjoying, and I'm going to give it a
> try. If its not possible, at least I'll be having fun while finding
> that out. In this case the road to progress is more fun than the
> endpoint itself.

Fine, but be very careful not to claim you have a better way,
when all you have is an untested hypothesis.

> > [1] http://www.robotwisdom.com/ai/jbai.html
>
> Its a pity websense has filtered your site from work, so I'll guess
> I'll have to peruse at home.

I think I fixed that a month ago-- they had me under 'sex'.

Stan Brown

Sep 16, 2002, 6:02:52 PM
Jorn Barger <jo...@enteract.com> wrote in
comp.infosystems.www.authoring.site-design:
>So, you're dismissing my ideas as mediocre without knowing the
>first thing about them?

No, I know the first thing. And that's enough to tell me not to
waste my time on the second thing.

Some people come to the newsgroup to learn, some to teach, some to
do both. Others just come to spout off, and I have better uses for
my time than to stroke them.

--
Stan Brown, Oak Road Systems, Cortland County, New York, USA

http://OakRoadSystems.com/
"Don't move, or I'll fill you full of [... pause ...] little
yellow bolts of light." -- Farscape, first episode

Tina Holmboe

Sep 16, 2002, 8:35:13 PM

> my pages that could really be beneficially suppressed based on
> realistic personal preferences.

Whoops. It sounds, to me, as if you just indicated a desire to define
"realistic personal preferences".

I must be wrong.

> Google used to (and may still) lower my rank because I find H1
> unesthetically large. This is Google's problem.

No, this would be your flaw. You have not understood WHY someone would go
to Google, ask said engine to retrieve information on a topic, and go
through the results - with the hope that those things which are most
interesting are ranked the highest because the *author* has given the
reader a fair chance *by honestly marking up the content as what it is*.

When you grasp this concept instead of muddying the waters with your
"unesthetically large" we, and you, have something to build on.

The semantic web, no matter which definition you pretend to believe in,
is about calling a spade a spade - and not a brightly silver-colored slice
of atoms with very few valence electrons attached to a cylindrical object
prepared from a dead tree.

You, on the other hand, insist on calling your very-important header a
piece of regular text. That, of course, is your choice: but if you believe
as strongly in communication as you seem to indicate it would help you if
you understood that you cannot communicate with something unless the two
of you speak roughly the same language.

If you insist on speaking Klingon, don't expect a lot of people to understand
what you try to communicate. If you do, then that phrase which I suspect
no one has ever used on you comes into play: don't be selfish.

> No, I just don't buy this. I expect people to scroll down and get
> the immediate 'tactile' model of my page that way. Fancy techno-
> gizmos have never demonstrated any lasting value.

It would seem that you have yet to track down the quote that I so kindly
supplied - if you had, the above statement would sound ridiculous even
to you. The value of structure does not depend on the gizmoness of
technology.


> Fine, but be very careful not to claim you have a better way,
> when all you have is an untested hypothesis.

Since you have nothing else than the same to offer, why not ?

I'll ask despite the plonk. Did you even *read* the paper by Page and Brin
about searching the web and Google ? Or the one by Dom, van den Berg
and Chakrabarti on focused crawling ? Perhaps the article by Place and
Belcher on quality control and semantics ? No ?

Or try NEC's CiteSeer ?

No.

Hint. CiteSeer does exactly that - it scours the web and looks for
citations to scientific material. Autonomously. Does it look for the
<cite> tag ? No. Why not ? Because CITE is "unesthetically large" ?

No.

Because people like yourself insist upon making material available in
the hardest possible manner. The use of CITE could improve the algorithms
ACI use - if people would only understand the spade.

Alexander Johannesen

Sep 17, 2002, 6:33:49 AM
Bradley K. Sherman <b...@panix.com> wrote:
> There are thousands of talented
> people attempting to categorize knowledge and they are
> *all* failing. XML is a disaster. Ontologies are a disaster.

Ok, I'll bite; why am I failing? I use XML, and I use ontologies,
defined with Topic Maps and backed with RDF. It works like a charm
here, so why is this a failure?

> XML plus Ontologies is synergistic: the combination is
> a bloated incomprehensible catastrophe. The Semantic Web
> is a red herring.

And again; why? There is of course the distinct difference between doing
meta-maps and the actual content, and I see huge problems with solutions
that try to put the two in the same place. Are you referring to such
solutions, or to the whole idea?

> Just creating a set of keywords is very hard.

Well, only the limitation of it; a keyword for an article, e.g., might
be every word in the article, as search engines do today. The semantic
idea is a strong one; apply *associations* and *occurrences* to it, and
you'll be on the right track. It ain't *that* hard to accomplish.

> Assigning
> the keywords to the polymorphic phantasms we call 'things'
> is perilous.

The key here is "scope"; assign a scope that applies to a term, more
than trying to attach a "meaning" to something.

> Attempting to place the things and keywords
> in hierarchies is mind-boggling and every group begins
> de novo and ends up with a similar yet different plate of
> simplified semiotic spaghetti.

I can agree a good part of the way; over the years I've seen some pretty
horrible attempts at making data and knowledge make sense. But there
is one that actually *does* make sense, when a technocrat and an
information specialist join forces; topic maps.

> If only Ph.D's really did have to know some philosophy!

Hmm, who says they don't?

> The web of 2050 is going to look a lot like the web of 1995.

Only if we stop evolving.


Alexander
--
"Ultimately, all things are known because you want to believe you know."
- Frank Herbert
__ http://shelter.nu/ __________________________________________________


Alexander Johannesen

unread,
Sep 17, 2002, 6:26:33 AM9/17/02
to
Jorn Barger <jo...@enteract.com> wrote:
> You're demanding that web-authors do the AI/knowledge-representation.
> This is ludicrous.

There is a better way; bridge the gap with a Topic Map[1], and you'll
have the infospecialist implementing *with* the technocrats and the
web-authors.


Alexander

[1] www.topicmaps.org/

Bradley K. Sherman

unread,
Sep 17, 2002, 7:59:18 AM9/17/02
to
In article <3d87050d$1...@news.wineasy.se>,
Alexander Johannesen <alexander....@bekk.no.spam> wrote:
> ...

>Ok, I'll bite; why am I failing? I use XML, and I use ontologies,
>defined with Topic Maps and backed with RDF. It works like a charm
>here, so why is this a failure?
> ...

I'll guess that this is on a toy project or something closely
constrained. When you say 'here' do you mean here in comp.human-factors?

--bks

Daniel R. Tobias

unread,
Sep 17, 2002, 8:16:35 AM9/17/02
to
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02091...@posting.google.com>...
> Google used to (and may still) lower my rank because I find H1
> unesthetically large. This is Google's problem.

Use H1 and use a stylesheet to suggest a presentation of it that's to
your aesthetic liking. Oops, sorry, stylesheets are against your
religion.

--
Dan

Daniel R. Tobias

unread,
Sep 17, 2002, 8:32:40 AM9/17/02
to
ti...@elfi.org (Tina Holmboe) wrote in message news:<5Nuh9.8866$e5.17...@newsb.telia.net>...

> Hint. CiteSeer does exactly that - it scours the web and looks for
> citations to scientific material. Autonomously. Does it look for the
> <cite> tag ? No. Why not ? Because CITE is "unesthetically large" ?
>
> No.
>
> Because people like yourself insist upon making material available in
> the hardest possible manner. The use of CITE could improve the algorithms
> ACI use - if people would only understand the spade.

Similarly, the online translators I've encountered, such as
AltaVista's Babelfish, generally fail to make any use of the "lang"
and "hreflang" attributes, which allow authors to be very specific
about what human languages their content is in. I think a few of them
notice the "lang" attribute in the HTML tag giving the language of
the entire document, but none that I am aware of find "lang" attributes
on elements within the document. The translators prefer to use cruder,
more error-prone methods to find the language of the document they're
translating, either asking the user to specify it with a pulldown menu
or using some heuristic algorithm to figure out what language it is,
applying this to the entire document even if it contains parts in
different languages. Whereas, if "lang" attributes were used by the
author and recognized by the translator, then it could recognize that
in the middle of an English document being translated into Spanish
there is a quotation that is already in Spanish and should be left
alone rather than translated. No online translator that I've found
yet gets this right.
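It would not be hard to do. A toy Python sketch (not any real translator's code; the class name and sample markup are made up) of how a lang-aware tool could pair each text run with its declared language, so that runs already in the target language get passed through untranslated:

```python
from html.parser import HTMLParser

class LangScanner(HTMLParser):
    """Pair each text run with the innermost declared "lang" value,
    inheriting the language of the enclosing element when none is given."""
    def __init__(self, default_lang="en"):
        super().__init__()
        self.lang_stack = [default_lang]
        self.runs = []  # list of (lang, text) pairs

    def handle_starttag(self, tag, attrs):
        # An element without lang inherits the enclosing language
        self.lang_stack.append(dict(attrs).get("lang", self.lang_stack[-1]))

    def handle_endtag(self, tag):
        if len(self.lang_stack) > 1:
            self.lang_stack.pop()

    def handle_data(self, data):
        if data.strip():
            self.runs.append((self.lang_stack[-1], data.strip()))

s = LangScanner("en")
s.feed('<p>He said <q lang="es">buenos días</q> and left.</p>')
# s.runs now tells a translator which runs are already in Spanish
```

(A real implementation would also have to handle void elements like BR, which this sketch ignores.)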

However, I was pleased to find that Google's translator is at least
smart enough to make use of HTTP language negotiation when present; if
a page is available in English and Spanish depending on the language
acceptance header sent by the user agent, Google has indexed the
English version, and a user brings it up in a search result while
running Google in Spanish mode, a translation link appears, which for
most pages would run the page through Google's translator to create a
clumsy automated Spanish version; however, Google's translator program
has the sense to make the HTTP request using the target language as
the most desired one, and if the response indicates that this is
indeed the content language of the returned page, it simply sends it
to the user rather than attempting to translate it mechanically. This
is more sense than I expected of such a program, and it's
proved very useful to the Spanish-speaking readers of one of my sites
(which has language-negotiated versions in English and Spanish, as
described here).
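The negotiation step itself is simple. A minimal sketch (ignoring country subtags and "*", which the HTTP spec also allows) of picking the best available language for an Accept-Language header:

```python
def negotiate(accept_header, available):
    """Pick the best available language for an Accept-Language header.
    Simplified: parse the q-values, take the highest-q language the
    server can actually serve, else fall back to the first available."""
    prefs = []
    for part in accept_header.split(","):
        bits = part.strip().split(";")
        q = 1.0
        for param in bits[1:]:
            if param.strip().startswith("q="):
                q = float(param.strip()[2:])
        prefs.append((q, bits[0].strip()))
    for q, lang in sorted(prefs, reverse=True):
        if lang in available:
            return lang
    return available[0]

negotiate("es, en;q=0.5", ["en", "es"])  # -> "es"
```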

But on the whole, people haven't made much use of the powerful
features in HTML and HTTP for specifying and negotiating languages; to
some extent, there's a "chicken or egg" problem, in that HTML authors
see no need to put in "lang" and "hreflang" attributes because "no
user agents use them anyway", while creators of user agents, even
specialized ones like translators where these attributes would be very
useful, see no need to take the trouble to program in recognition of
them because "no site authors use them anyway". It's a vicious
circle, which I've attempted to start breaking by using those
attributes extensively in my pages.

--
Dan

Daniel R. Tobias

unread,
Sep 17, 2002, 8:38:52 AM9/17/02
to
Isofarro <spam...@spamdetector.co.uk> wrote in message news:<fjb5ma...@sidious.isolani.co.uk>...

> Jorn Barger wrote:
>
> > are you talking about saving the parser
> > a miniscule amount of effort by closing your <p>s, etc
>
> Saving time and mistakes by removing ambiguity.

If he were to ever drop his present disdain for stylesheets and begin
to use them, he'd likely find that the first paragraph of each section
of his documents wasn't styled in the manner he specified for
paragraphs, because he uses the coding style:

This is the first paragraph.<P>
This is the second paragraph.

He uses <P> as an empty "paragraph-break" tag, a usage that has been
deprecated since 1994, instead of as a marker of a container element;
in all versions of HTML from 2.0 on, what his code actually means is
that the "first paragraph" is not a paragraph at all, only naked text
beneath whatever element is above it (probably the <BODY> element,
since he doesn't use layout tables), while the "second paragraph" is
actually a paragraph. This can make a big difference in their
styling.
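The difference is easy to demonstrate. Given a stylesheet rule like p { text-indent: 2em }, the two codings below render differently (a hypothetical fragment):

```html
<!-- Empty "paragraph break" style: the first line is bare text under
     BODY, so the p rule never applies to it. -->
This is the first paragraph.<P>
This is the second paragraph.

<!-- Container style: both paragraphs are P elements; both get the indent. -->
<p>This is the first paragraph.</p>
<p>This is the second paragraph.</p>
```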

--
Dan

Mark Tranchant

unread,
Sep 17, 2002, 8:33:01 AM9/17/02
to
"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...

> 'Search engines' is a meaningless generalisation-- do you mean Google?
>
> Google used to (and may still) lower my rank because I find H1
> unesthetically large. This is Google's problem.

Google should be a service to you, not the other way around. If you cannot
or will not ask the client's browser to render your H1s in a way that
pleases you, that is *your* problem. H1 refers to top-level headings, not
"big text". I could write a browser that renders H1 smaller than normal text
if I chose.

Learn to separate content from style. Use stylesheets.

--
Mark.


Alexander Johannesen

unread,
Sep 17, 2002, 8:41:40 AM9/17/02
to
Bradley K. Sherman <b...@panix.com> wrote:
> I'll guess that this is on a toy project or something closely
> constrained.

Uh, no, it's my company's full KB, and it isn't constrained; topic maps
are by default mergeable with others without compromising semantics.

> When you say 'here' do you mean here in comp.human-factors?

No, I mean here at my firm and our customers. And I read this from
comp.infosystems.www.authoring.site-design.

Jukka K. Korpela

unread,
Sep 17, 2002, 11:01:52 AM9/17/02
to

You are debating against a person who has indicated his unwillingness to
participate in civilized debate; so it is relatively irrelevant whether he
_knows_ about the topic at hand.

See, for example, Jorn Barger calling a person an idiot (behind his back,
later _defending_ himself with that!):
<http://groups.google.com/groups?selm=1e3uc7x.xo90wtzzfvp4N%40
216-80-34-26.d.enteract.com>
He also has interesting labels to attach to people, like "crypto-fascist".
Need I say more?

Jorn Barger sometimes manages to give the impression of presenting something
original and interesting. But you'll notice how the illusion disappears if
you try to debate (or otherwise discuss) with him.

As usual, Korpela's laws rule OK: "The average usefulness of a thread is
inversely proportional to the cube of the number of groups it is posted to."
(More laws: http://www.cs.tut.fi/~jkorpela/usenet/laws.html )

Followups trimmed.

--
Yucca, http://www.cs.tut.fi/~jkorpela/

Jorn Barger

unread,
Sep 17, 2002, 2:42:53 PM9/17/02
to
d...@tobias.name (Daniel R. Tobias) wrote in message news:<aab17256.02091...@posting.google.com>...

> He uses <P> as an empty "paragraph-break" tag, a usage that has been
> deprecated since 1994

Deprecated by TimBL&co, who initiated the whole structural-markup
boondoggle: http://www.robotwisdom.com/web/history.html

I'd like to appeal to the hypothetically-many rational bystanders
following this flamewar, to stick their necks out and call for some
serious debate about the ideas instead of the gigantic waste of
endless accusations.

Structural markup is not a complicated idea. I've spelled out
my objections in many webpages and many more postings. I have
an extensive background in AI and programming, going back c1970.

Arguments about empty paragraphs, blind-readers that choke on <I>,
the miracle of <CITE>, etc etc etc don't add up to a hill of beans
compared to the real problems of universal knowledge representation.

The W3C is imploding under the weight of overcomplicated ivory-tower
design ideals, appallingly poor human-factors skills, and comically
bad communication skills. But instead of debate about ideas, these
newsgroups offer Fox-network-style pitbull bloodbaths.

Can't we do any better, given all that's at stake?

Isofarro

unread,
Sep 17, 2002, 4:25:53 PM9/17/02
to
Jorn Barger wrote:

> Structural markup is not a complicated idea.

Yep - it's second nature. It's merely about encapsulating text into
appropriate elements.

> I've spelled out
> my objections in many webpages and many more postings.

The vast majority of your "Structural Markup Myths" are refuted by CSS
suggestions, keeping the document structure a reflection of the content.

> Arguments about empty paragraphs, blind-readers that choke on <I>,
> the miracle of <CITE>, etc etc etc don't add up to a hill of beans
> compared to the real problems of universal knowledge representation.

It's fine to represent knowledge, but if it's not in a format everyone
can treat unambiguously, this knowledge cannot be leveraged, reused and
extended.


Could you explain precisely why your usage of

<p><b>Some rather long wordy title</b></p>

is a superior knowledge representation to

<h1 class="whisper">Some rather long wordy title</h1>


Especially when AI-based programs would parse the above as a stream of
text characters.

Bradley K. Sherman

unread,
Sep 17, 2002, 4:36:09 PM9/17/02
to
In article <3d872304$1...@news.wineasy.se>,

Alexander Johannesen <alexander....@bekk.no.spam> wrote:
>Bradley K. Sherman <b...@panix.com> wrote:
>> I'll guess that this is on a toy project or something closely
>> constrained.
>
>Uh, no, it's my company's full KB, and it isn't constrained; topic maps
>are by default mergeable with others without compromising semantics.

'KB'?

Others are calling for a serious discussion. Can you share
part of your ontology with us?

--bks

Jorn Barger

unread,
Sep 17, 2002, 7:56:04 PM9/17/02
to
"Alexander Johannesen" <alexander....@bekk.no.spam> wrote in message news:<3d87050d$1...@news.wineasy.se>...

> Ok, I'll bite; why am I failing? I use XML, and I use ontologies,
> defined with Topic Maps and backed with RDF. It works like a charm
> here, so why is this a failure?

I started an experiment with the demo topic-map about Italian opera,
to see what its strengths and weaknesses were:
http://www.robotwisdom.com/web/operatopic.html
(The demo turned out to be larger than it looked, so I didn't get
very far.)

I think 'database dumps' are a very mediocre and boring interface
for human readers, but my 'necessary web' resource-pages definitely
have a strong database-component that could be forced into a static
topic-map mold... but it's the quirky human edges that I worry
will be lost if topic-maps are made the priority/starting-point.

So long as you're treating each opera as equal, your database can
be perfectly neat... but my design-strategy requires doing an
overview of Web-resources for each, and here the human element
intrudes quite violently-- the greatest operas inspire an infinite
variety of fan-pages, and analysing these in a disciplined way
is just the universal-AI problem again.

Example: Mozart's Magic Flute, if I remember right, has a theme
derived from Freemasonry. If your topic map for opera is really
going to merge consistently with all others, you have to wrestle
with paradoxes like "what's the difference between a philosophy
and a religion?"

In my Joyce pages I've compiled various inventories and timelines
of Joyce's allusions to religious themes. Unexpected new ways of
structuring these keep turning up, eg two characters in Ulysses
who symbolise two characters in Biblical legend.

How can you ask me to worry about defining such structures before
I've exhausted all the webpages I can think of on these topics?

Daniel R. Tobias

unread,
Sep 17, 2002, 7:57:08 PM9/17/02
to
jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.02091...@posting.google.com>...

> Structural markup is not a complicated idea.

Then why do you constantly fail to grasp it? :)

--
Dan

Thomas Baekdal

unread,
Sep 17, 2002, 8:24:12 PM9/17/02
to

"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02091...@posting.google.com...
> d...@tobias.name (Daniel R. Tobias) wrote in message
news:<aab17256.02091...@posting.google.com>...
> > He uses <P> as an empty "paragraph-break" tag, a usage that has been
> > deprecated since 1994
>
> Deprecated by TimBL&co, who initiated the whole structural-markup
> boondoggle: http://www.robotwisdom.com/web/history.html
>
> I'd like to appeal to the hypothetically-many rational bystanders
> following this flamewar, to stick their necks out and call for some
> serious debate about the ideas instead of the gigantic waste of
> endless accusations.

I have just posted a new thread asking the people here to do just that. It
is called "Web Content structure (year 2020)". I have moved it to a separate
thread since this one is getting too disordered.

Regards,
Thomas Baekdal
http://www.baekdal.com


Alexander Johannesen

unread,
Sep 18, 2002, 3:27:46 AM9/18/02
to
Bradley K. Sherman <b...@panix.com> wrote:
> 'KB'?

Knowledge Base. All our various bits of important data (people's CVs,
customer relations, our skillsets, details about technologies, and all
sorts of links, associations and relations between them all). It
consists of more than 3000 topics, and is about 1Gb in data alone.

> Others are calling for a serious discussion. Can you share
> part of your ontology with us?

Are you saying I'm not being serious?

Alexander Johannesen

unread,
Sep 18, 2002, 3:42:33 AM9/18/02
to
Jorn Barger <jo...@enteract.com> wrote:
> I started an experiment with the demo topic-map about Italian opera,

The one from Ontopia.net, I presume?

> to see what its strengths and weaknesses were:
> http://www.robotwisdom.com/web/operatopic.html
> (The demo turned out to be larger than it looked, so I didn't get
> very far.)

What do you mean by that?

> I think 'database dumps' are a very mediocre and boring interface
> for human readers,

It's just a demo. You make the interface whatever you'd like.

> but my 'necessary web' resource-pages definitely
> have a strong database-component that could be forced into a static
> topic-map mold... but it's the quirky human edges that I worry
> will be lost if topic-maps are made the priority/starting-point.

You are right in the fact that we should get rid of all humans, and
leave all the handling to the computers. :)

> So long as you're treating each opera as equal, your database can
> be perfectly neat...

Actually, one of the great advantages with topic maps is that you can
define them and the associations any way you like, even emotional;

Alex >---- really loves ----> Il Coronation d'Poppea

Alex >---- likes ----> Il Coronation d'Poppea

Il Coronation d'Poppea >---- is a ----> Baroque Opera

Baroque Opera >---- is a ----> Opera

Alex >---- likes ----> Opera

It's up to you to define what the associations are. These examples are
simple to map up.
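Those associations are simple enough to sketch as plain triples in Python (a toy model, nothing like full XTM, but it shows why merging two maps is cheap):

```python
# Topic maps reduced to (topic, association, topic) triples - a toy
# model of the examples above; real XTM adds scopes, occurrences etc.
map_a = {("Alex", "really loves", "Il Coronation d'Poppea"),
         ("Il Coronation d'Poppea", "is a", "Baroque Opera")}
map_b = {("Baroque Opera", "is a", "Opera"),
         ("Alex", "likes", "Opera")}

# Merging two maps is just set union of their triples
merged = map_a | map_b

# Simple traversal: everything Alex relates to directly
alex = {(assoc, obj) for subj, assoc, obj in merged if subj == "Alex"}
```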

> Example: Mozart's Magic Flute, if I remember right, has a theme
> derived from Freemasonry. If your topic map for opera is really
> going to merge consistently with all others, you have to wrestle
> with paradoxes like "what's the difference between a philosophy
> and a religion?"

You define this through "scope", if you like. Topic maps are metamaps
of knowledge and relations. Topic maps aren't any more mind-reading
than I am, so someone defines them. That is the human factor. Topic
maps are a way to map such things, not to generate them, as AI tries
to do. They are quite different.

The trick is not topic maps themselves; they're easy. The trick is to
write the tools you need to represent what you want to portray. You
normally use a programming language (Java, Python) to create a TM
map in that language and model, or use templating (XSL, notation) to
display and do simple statistical displaying of your associations. With
not too much work, you can represent quite a lot of human thinking
through good association-maps.

> In my Joyce pages I've compiled various inventories and timelines
> of Joyce's allusions to religious themes. Unexpected new ways of
> structuring these keep turning up, eg two characters in Ulysses
> who symbolise two characters in Biblical legend.
>
> How can you ask me to worry about defining such structures before
> I've exhausted all the webpages I can think of on these topics?

A topic map grows. It doesn't put any constraints on you when new
stuff shows up. New structures are put into scope, and you can merge
any TM with any other TM without losing your semantics.

Topic maps are *not* for structure; they're for associations and
relations, far looser and more human-friendly terms.

Bradley K. Sherman

unread,
Sep 18, 2002, 7:47:47 AM9/18/02
to
In article <3d882af2$1...@news.wineasy.se>,

Alexander Johannesen <alexander....@bekk.no.spam> wrote:
>
>Are you saying I'm not being serious?
>

Not yet. Can you describe the practical application of your KB?

--bks

Alexander Johannesen

unread,
Sep 18, 2002, 9:11:37 AM9/18/02
to
Bradley K. Sherman <b...@panix.com> wrote:
> Not yet. Can you describe the practical application of your KB?

I'm not sure I like your tone, Mr. Sherman. Why is the application of
our KB important to judging *me* as serious or not? We use it. Why do
you need to know more?

It is not that I'm not willing to tell, but I do find your attitude
towards what I say a bit disturbing. To get some value out of this,
I ask if you have made an application with topic maps before? You seem
pretty sure they are a failure, you generalise XML and ontologies as
disasters, and claim that all who categorise knowledge are *failing*,
so if the answer is indeed 'yes, I've tried topic maps, and they suck',
then it would be *very* helpful to know why I am failing, together with
all those other people who are travelling down similar paths to mine.
What makes you so sure of your assumption, and where is the proof?

Jorn Barger

unread,
Sep 18, 2002, 10:38:55 AM9/18/02
to
"Alexander Johannesen" <alexander....@bekk.no.spam> wrote in message news:<3d882af2$1...@news.wineasy.se>...

> Knowledge Base. All our various bits of important data (peoples CVs,
> customer relations, our skillsets, details about technologies, and all
> sorts of links, associations and relations between them all. It
> consists of more than 3000 topics, and is about 1Gb in data alone.

If it's just a database, then of course the xml-ontology can be
handled trivially.

Alexander Johannesen

unread,
Sep 18, 2002, 11:37:04 AM9/18/02
to
Jorn Barger <jo...@enteract.com> wrote:
> If it's just a database, then of course the xml-ontology can be
> handled trivially.

What can *not* be handled, you reckon?

Bradley K. Sherman

unread,
Sep 18, 2002, 12:15:52 PM9/18/02
to
In article <3d887b89$1...@news.wineasy.se>,

Alexander Johannesen <alexander....@bekk.no.spam> wrote:
>Bradley K. Sherman <b...@panix.com> wrote:
>> Not yet. Can you describe the practical application of your KB?
>
>I'm not sure I like you tone, Mr. Sherman. Why is the application of

Okay so I remain unconvinced that XML + Ontologies are worth
much in terms of real-world projects. My tone notwithstanding.

I am a realist Mr. Johannesen and have been trying to map
molecular biology using avant garde tools for over 10 years.
I find the web very, very helpful, just as gopher and
anonymous FTP and TCP/IP and the Unix file system were
very, very helpful. Having sat through interminable
lectures about ontologies and XML and then sitting
through interminable fruitless discussions about how
to categorize the tangled web of scientific knowledge,
you'll permit me a bit of bile.

I like to find groups of people doing great things and then
learn from their techniques.

I am not swayed by groups talking about doing great things
and urging me to buy into their techniques.

As we say here in the land of the free and the home
of the Department of Homeland Security: Show me.

--bks

Jorn Barger

unread,
Sep 18, 2002, 1:45:44 PM9/18/02
to
"Alexander Johannesen" <alexander....@bekk.no.spam> wrote in message news:<3d88...@news.wineasy.se>...

> > (The demo turned out to be larger than it looked, so I didn't get
> > very far.)
> What do you mean by that?

Nothing interesting-- I'd hoped to squeeze it all onto a single
juicy page, but there were just too many bits of trivia.

> > I think 'database dumps' are a very mediocre and boring interface
> > for human readers,
> It's just a demo. You make the interface whatever you'd like.

Phrases like "whatever you'd like" set off my bs-alarm. An infinitely
flexible solution is no solution.

> [...] one of the great advantages with topic maps is that you can
> define them and the associations any way you like, even emotional;
> Alex >---- really loves ----> Il Coronation d'Poppea
> Alex >---- likes ----> Il Coronation d'Poppea

Okay, pause here.

'Likes' and 'Loves' and 'Really loves' etc etc etc are all shades
of critical judgment. But trying to define a disciplined ontology
of critical judgment is a massively-unsolved problem.

If topic-mappers are content to generate 'chaotic' ontologies, I'm
content to leave them to it-- but they shouldn't claim to be doing
AI!

> Il Coronation d'Poppea >---- is a ----> Baroque Opera
> Baroque Opera >---- is a ----> Opera

AI people would protest that you're confusing categories and
instances here. (Il Cd'P is an instance, B.O is a category.)

In general, you need to be very careful with 'isA' because
all your logical deductions will rely on it, while 'likes' is
almost totally undefined and can be treated like a 'gensym'
(generated symbol, or meaningless label).
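To make the point concrete, here is a toy sketch (hypothetical names, not anyone's actual system) that keeps the two relations separate, so that deduction chains only through genuine category links:

```python
# 'is a' is doing two different jobs in the opera example; keeping
# instance-of and subclass-of apart makes deduction safe.
subclass_of = {"Baroque Opera": "Opera"}                    # category -> category
instance_of = {"Il Coronation d'Poppea": "Baroque Opera"}   # instance -> category

def categories_of(instance):
    """All categories an instance belongs to, walking subclass links."""
    cats = []
    cat = instance_of.get(instance)
    while cat:
        cats.append(cat)
        cat = subclass_of.get(cat)
    return cats

categories_of("Il Coronation d'Poppea")  # -> ["Baroque Opera", "Opera"]
```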

> It's up to you to define what the associations are. These examples are
> simple to map up.

But how do you _use_ them to disambiguate webpage content?

> [...] With
> not too much work, you can represent quite a lot of human thinking
> through good association-maps.

Where 'not too much' = 10,000+ years of expert effort
and 'quite a lot' = a tiny fraction
(these figures based on Lenat's Cyc).

JJ

unread,
Sep 18, 2002, 4:48:48 PM9/18/02
to
"Jorn Barger" <jo...@enteract.com> wrote in message
news:16e613ec.02090...@posting.google.com...

> > with few or no
> > links needing to be followed to get some of it,

> Minimal clicks.

Is scrolling a long web page less efficient than
clicking to another page ?

How many users of the web have scroll mice ?

Is scrolling more time consuming if one is a slow reader ?

Is scrolling a bigger calorie burn for the finger ?

If a website is ad-supported, then it might need more
pages to house more ads. However, if enough people
request that a webpage be not broken up into smaller
pages, then the webmaster or webmistress might comply,
even though they might not prefer it that way.

If a user is using a 28.8k modem or 56k, then their ISP
might kick them offline if they are reading a long webpage.

One hazard of computer use is encouraging a lack of
eye blinking (causing eye dryness). If a page is split
into more pages, then it should encourage eye blinking.

Also if it is a graphics heavy page then the person might
take a drink of water or stretch their arms or legs while
it loads. With DSL, I seem to stretch out at the computer
less than when I had a 56k connection.

> You're missing my point-- subject/topic is one thing, type of
> resources is a completely 'orthogonal' thing.

http://www.dictionary.com/search?q=orthogonal

So by not providing a short definition of orthogonal,
you have increased my mouse clicks. Did you think
that the majority of readers of this usenet group knew
the meaning ? I have Atomica, so I didn't have to use
copy and paste to get the meaning of the word.

http://www.atomica.com/solutions_products_pc.html

> For my Flaubert page, I think the way to decide what to 'promote'
> has to be based on usefulness to searchers-in-general, eg
> my main Flaubert page should keep a link to the best English and
> French etexts, but doesn't need to include the full inventory
> of all etexts.

"Best" is subjective and does not relate to your "logical" posts.

Jon
.
.
.
.


JJ

unread,
Sep 18, 2002, 5:29:53 PM9/18/02
to
"Thomas Baekdal" <notava...@baekdal.com> wrote in message
news:3d7d277d$0$30477$edfa...@dspool01.news.tele.dk...
> When making a link from one page to another, it is essential that you let
> the user know what the content is of the linked page. This means that
"leave
> nothing but the link" would not benefit anyone.

> One
> of the major problems on the web is that it is not very comfortable to
read
> on a computer screen. This is mostly due to having to scroll and because
of
> the low resolution.

On Geocities web pages, a data statistics page can tell you the resolution
of the screens the people who hit your page are using. I have seen pages
that ask you if you want 'text only' or 'graphic heavy'. Perhaps having
'high speed net connection' or 'slow net connection' and 'high resolution
monitor' or 'low resolution monitor' options might be useful ?

However some people don't know what resolution means or even how
to adjust their monitors for less eye strain. Some users don't know
what a dot pitch is and what Hz their monitor is currently at. I
think Jorn would call this "braindead" as webpages are trying
to be made for the lowest common denominator out there and
have not evolved in some areas. However, the plugin market
has encouraged lots of users to expand what they "see" on the
web. Perhaps a plugin could be designed that helps web
authors detect and adjust their web page automatically to
account for resolution, user defined colors, and whether or
not any accessibility functions were currently turned on ?

Right now, most DVDs give you sound options. Some DVDs
give you widescreen or full screen choices. Most all laundry
machines have different color and white choices. Some
appliances are easily adjustable to user habits and needs
and some are not. Computers certainly are appliances.

> If you make a page longer than one and a half screen (vertical) you
> need to include quick links to each sub section + the need for better
> screen visualization increases tremendously. If you are concerned about
> accessibility you also need to make adjustments to the code so that each
> section is easily identified, and can be accessed quickly - even for
> people who are blind.

As the digital divide increases, we will see users with 15 inch
screens and users with 30 inch screens. Web page designers might
have to take that into account soon.

As for blind users, some of them have text enlargers. If you are using
a font, that is part of a bitmap or jpg file, then they might not be able
to "read" it.

> You do not have to clutter the page with graphics or colors, but good
> interaction design is essential for long pages.

Some people collect graphics of different shapes and sizes. For
those people, clutter is fine on a long page. That way, they can
scan and compare many of the graphics at once.

Jon

> Regards,
> Thomas Baekdal

.
.


Tina Holmboe

unread,
Sep 18, 2002, 5:58:15 PM9/18/02
to
"JJ" <jj...@removethisdrizzle.com> exclaimed in <1032384591.687288@yasure>:

> On Geocities web pages, a data statistics page can tell you the resolution
> of screens of the people who hit your page are using. I have seen pages

You might want to reconsider the word 'statistics', as what Geocities
and others are doing is pure guesswork based on Javascript techniques.

The statistical value inherent in predicting the number of users with a
certain resolution using a method reliant on a technology which the
statistics tell an uncertain tale as to how many have enabled ...

And do remember that the resolution has virtually nothing to do with
anything.


> think Jorn would call this "braindead" as webpages are trying
> to be made for the lowest common denominator out there and
> have not evolved in some areas. However, the plugin market

Creating a webpage to the "lowest common denominator" does not, as some
believe, mean "grey" - unless, of course, grey is what one wants. What it
does involve is designing for the "highest number of users" - ie. not
dependent on resolution, colors, or graphics.


> web. Perhaps a plugin could be designed that helps web
> authors detect and adjust their web page automatically to
> account for resolution, user defined colors, and wether or
> not any accessibility functions were currently turned on ?

There has been quite a lot of discussion regarding these topics in many
fora. A consensus has yet to be reached, but the majority of people
involved with usability and accessibility do not seem to agree that
producing an infinite number of different versions of a webpage just to
satisfy the desire for eyecandy is a particularly good idea.

In the area of accessibility functions, heavy arguments have been voiced
against it for privacy reasons, and these I agree with wholeheartedly.


> As the digital divide increases, we will see users with 15 inch
> screens and users with 30 inch screens. Web page designers might
> have to take that into account soon.

Indeed. Time to tap that huge, glowing thing in front of them, accept
that it is a monitor and not a piece of paper, and adjust their view of
the world accordingly.


> As for blind users, some of them have text enlargers. If you are using
> a font, that is part of a bitmap or jpg file, then they might not be able
> to "read" it.

I think I can promise you with some certainty that the majority of
blind users do not keep text enlargers around for pets ...

--
- Tina Holmboe Greytower Technologies
ti...@greytower.net http://www.greytower.net/
[+46] 0708 557 905

Thomas Baekdal

Sep 18, 2002, 8:08:49 PM9/18/02
to

"JJ" <jj...@removethisdrizzle.com> wrote in message
news:1032384591.687288@yasure...

> On Geocities web pages, a data statistics page can tell you the resolution
> of screens of the people who hit your page are using. I have seen pages
> that ask you if you want 'text only' or 'graphic heavy'. Perhaps having a
> 'high speed net connection" or 'slow net connection' and 'high resolution
> monitor' and 'low resolution montior' options might be useful ?

I would not personally do this. I would make a page that can flow according
to my reader's screen resolution - even so that it fits palm computers. The
layout would be made so that the majority of users can see it, but the rest
will still be able to see the site - without the layout elements.

> However some people don't know what resolution means or even how
> to adjust their monitors for less eye strain. Some users don't know
> what a dot pitch is and what Hz their monitor is currently at.

Nor should they need to. I still do not understand why graphics cards do
not, as default, use the highest setting possible (Hz wise).

> Perhaps a plugin could be designed that helps web
> authors detect and adjust their web page automatically to
> account for resolution, user defined colors, and wether or
> not any accessibility functions were currently turned on ?

I do not think this should be a web plugin. It should be an integrated part
of your operating system. A web page is only one of many things that suffer
from low screen resolutions.

> As the digital divide increases, we will see users with 15 inch
> screens and users with 30 inch screens. Web page designers might
> have to take that into account soon.

The problem is that we have totally misunderstood what resolution is all
about. The higher a resolution you use, the smaller everything gets. The
way it should work is that when you increase the resolution your screen
shows more detail, but the buttons, text and other elements should not
change in size.
- It is like having two letter-size pages, one printed at 120 dpi, the
other printed at 305 dpi. The elements on these pages are the same size;
the difference is just that the 305 dpi page looks much better and is much
easier to read.
- Now take two 17" screens, one set up to display 1024x768, the other
1600x1200. The content on these two identical screens is very different,
because instead of using the increased resolution to improve the content,
it just makes everything smaller. This makes readability practically
hopeless.

Until this has changed there is no point in making special plugins or other
fancy things. The basic resolution is still 72 dpi no matter what screen you
have or what your setup might be.

If you have a 17" screen, you need to use a resolution of 4338x3330 at
120Hz - while still keeping the content on the screen the exact same size
as it would look at 1024x768 (not making it smaller, as screens normally
do). If you do this you will have the same resolution and readability as
you do on paper.
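Thomas's arithmetic can be double-checked with a short script (Python purely
for illustration; the only assumptions are a 4:3 panel shape and the quoted
17" diagonal):

```python
def dpi(h_pixels, diagonal_inches, aspect=(4, 3)):
    """Horizontal dots-per-inch for a screen of the given diagonal."""
    w, h = aspect
    # Width from the diagonal via Pythagoras: for 4:3 this is the
    # classic 3-4-5 triangle, so width = diagonal * 4/5 = 13.6" here.
    width_in = diagonal_inches * w / (w * w + h * h) ** 0.5
    return h_pixels / width_in

# A 17" screen at the common 1024x768 setting:
print(round(dpi(1024, 17)))  # 75 -- the coarse "screen" resolution
# The print-quality setting proposed above:
print(round(dpi(4338, 17)))  # 319 -- in the neighbourhood of a 305 dpi page
```

So the 4338x3330 figure is simply "paper-quality dpi at today's screen size".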

BTW: We also need to change our way of thinking. A screen has landscape
orientation, but most websites are made with portrait thinking. Any print
publisher will tell you that there is a profound difference in the way you
set up landscape and portrait pages. I am not saying that printed pages are
the same as web pages, but we could learn a lot from the old publishing
strategies.

> As for blind users, some of them have text enlargers. If you are using
> a font, that is part of a bitmap or jpg file, then they might not be able
> to "read" it.

As Tina also wrote, I am pretty sure no blind people use text enlargers. If
you are blind it really does not matter how big the visual text is :o)

As for JPEGs and other bitmaps: I do not write text as part of any
graphics. My recent projects are all compliant with the US Section 508
accessibility requirements.

> Some people collect graphics of different shapes and sizes. For
> those people, clutter is fine on a long page. That way, they can
> scan and compare many of the graphics at once.

I was referring to the layout elements. If the content is about graphics it
should of course display graphics.

Tina Holmboe

Sep 18, 2002, 9:07:58 PM9/18/02
to
"Thomas Baekdal" <no...@baekdal.com> exclaimed in <3d89159c$0$64155$edfa...@dspool01.news.tele.dk>:

> Nor should they need to. I still do not understand why graphics cards do
> not, as default, use the highest setting possible (Hz wise).

Possibly because it is not a given that the monitor can handle what it is
fed. Modern monitors will simply fail to sync and shut down; older
monitors might burn.

New monitors *may* support protocols for requesting their specs; but
as with everything else: it isn't a good assumption to make. Not unless
you *like* the smell of burning plastic and short circuits in the
morning ;)

Thomas Baekdal

Sep 18, 2002, 10:36:47 PM9/18/02
to
"Tina Holmboe" <ti...@elfi.org> wrote in message
news:Or9i9.13447$HY3.2...@newsc.telia.net...

> "Thomas Baekdal" <no...@baekdal.com> exclaimed in
> <3d89159c$0$64155$edfa...@dspool01.news.tele.dk>:
>
> > Nor should they need to. I still do not understand why graphics cards do
> > not, as default, use the highest setting possible (Hz wise).
>
> Possibly because it is not given that the monitor is able to handle
> what they are fed. Modern monitors will not, then, sync and shut down.
> Older monitors might burn.
>
> New monitors *may* support protocols for requesting their specs; but
> as with everything else: it isn't a good assumption to make. Not unless
> you *like* the smell of burning plastic and short circuits in the
> morning ;)

Actually the latest OS is perfectly capable of figuring out what Hz the
screen supports. The OS actually states (in the settings window) "Hide modes
that this monitor cannot display." All screens that (in this case) Windows
can identify in this way are also certified - so you can be 100% sure that
they support the values Windows states.

Then, if it can hide the modes that the screen does not support, why can it
not also set the Hz to the highest possible (supported and certified) value?

Must be caused by lazy developers ... :o)

Thomas Baekdal
http://www.baekdal.com
- The Goal is Pretty Simple


JJ

Sep 19, 2002, 12:40:07 AM9/19/02
to
"Thomas Baekdal" <notava...@baekdal.com> wrote in message
news:3d7e7edc$0$161$edfa...@dspool01.news.tele.dk...

> 1815: Father promoted to chief surgeon
> 1818: Family moves into residential hospital wing (extremely morbid
> environment to grow up in)
> ...or...
> - Father promoted to chief surgeon (1815)
> - Family moves into residential hospital wing (extremely morbid
environment
> to grow up in) (1818)

Sounds to me like you are both fighting over
which side of the bread should get the butter. (^:

Why does it matter whether you put the dates at
the beginning or the end ?

Jon

Alexander Johannesen

Sep 19, 2002, 4:18:44 AM9/19/02
to
Jorn Barger <jo...@enteract.com> wrote:
> Nothing interesting-- I'd hoped to squeeze it all onto a single
> juicy page, but there were just too many bits of trivia.

I'm interested to know what you would like on one juicy page that would
satisfy you.

> Phrases like "whatever you'd like" set off my bs-alarm. An infinitely
> flexible solution is no solution.

No, I'm talking from the perspective of design, not "wants". You design
your systems the way you see fit for the task at hand. Too many systems
are built on the "need something next week" principle. When I design
systems, *I* want a certain degree of present and future needs covered.
I'm not talking about infinite flexibility, but flexible restraints.

> Okay, pause here.
>
> 'Likes' and 'Loves' and 'Really loves' etc etc etc are all shades
> of critical judgment. But trying to define a disciplined ontology
> of critical judgment is a massively-unsolved problem.

Then you must define what you mean by a "disciplined" ontology. Everything
in the same ontology? I use various maps and merge them to create a
new map that better represents what I'm looking for.

> If topic-mappers are content to generate 'chaotic' ontologies, I'm
> content to leave them to it-- but they shouldn't claim to be doing
> AI!

I've never claimed this?

> > Il Coronation d'Poppea >---- is a ----> Baroque Opera
> > Baroque Opera >---- is a ----> Opera
>
> AI people would protest that you're confusing categories and
> instances here. (Il Cd'P is an instance, B.O is a category.)

Strictly speaking, yes, you're right, but the problem, as I see it,
with most human approaches to the computer-problem is that we constrain
the semantics so much that we make it difficult for ourselves. In theory,
everything is a topic, right? Map this first; this is your primary data.
Next comes the metametadata of categories and instances. My own systems
are built using topical mapping, using cumulative statistics to make
some human sense out of it. It seems to work - for me, I'm afraid to add.

> In general, you need to be very careful with 'isA' because
> all your logical deductions will rely on it, while 'likes' is
> almost totally undefined and can be treated like a 'gensym'
> (generated symbol, or meaningless label).

I agree, but my take on this is to use general maps for common knowledge
and personal maps that merge in the common ones at intervals. That gives you
a flexibly growing commons, and a flexibly constrained personal map. Maybe
this doesn't make much sense, and maybe English being a bit down the
native-language ladder for me doesn't help either, but I'm basically *not*
trying to fit everything into one solution; rather I use a lot of smaller
maps (that can be personal, e.g.) that I merge to get a map of a certain
flair.

Example: There is the common KB map of the company, and there is my own
personal map that tells of likes, dislikes, what I know, what I want
to do more of, where I want to go, and how I like to do it. They both
change over time without problem. I merge these maps to get a new
mapping: a representation of my take (computer-wise) on the company's
CVs (people), technologies and directions. From this map I extract
reports and info.
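A minimal sketch of that merge (the dict-of-sets structure and the sample
topics are invented for illustration; a real topic map merge, per XTM, is
considerably richer):

```python
def merge_maps(common, personal):
    """Union two maps of topic -> set of (association, topic) links."""
    merged = {topic: set(links) for topic, links in common.items()}
    for topic, links in personal.items():
        merged.setdefault(topic, set()).update(links)
    return merged

# Invented example: one shared link, two personal ones.
common = {"Java": {("is a", "Language")}}
personal = {"Java": {("likes", "Alexander")},
            "JSP": {("wants to learn", "Alexander")}}

merged = merge_maps(common, personal)
print(sorted(merged))       # ['JSP', 'Java']
print(len(merged["Java"]))  # 2 -- the shared link plus the personal one
```

The common map keeps growing on its own; re-running the merge picks up the
changes without touching the personal map.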

This way I basically map pure data and knowledge with my own labels,
which make sense to me. Who would understand those labels better than
me?

To further this, I can (I'm working on this right now) create a set of
absolutes / philosophical defines, and map these to my personal map
to make it conform to a standard, and hence we can talk about what
various people in the company "likes", "hates", "loves" and so forth.

Before I continue, though: is your take on this from a strict AI
perspective, as in making computers *understand* what those labels mean?
Mine is a far more pragmatic take on it, and we might be talking
about two entirely different things.

> > It's up to you to define what the associations are. These examples
> > are simple to map up.
>
> But how do you _use_ them to disambiguate webpage content?

I personally use cumulative statistics to find the associations that
are bound together, and to score each association. By this I can get a
smaller (but arguably less complete) scope to work with. Maybe this
pragmatic approach is too pragmatic for hard science, but for a
KB it works quite nicely.
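Alexander doesn't spell the method out, but one plausible reading of
"cumulative statistics" is plain co-occurrence counting: every time two
topics show up together, the score of their association is bumped, and only
high-scoring pairs make it into the working scope (a sketch; the data is
invented):

```python
from collections import Counter
from itertools import combinations

def score_associations(observations):
    """Count how often each pair of topics appears together."""
    scores = Counter()
    for topics in observations:
        # Sort so each unordered pair gets one canonical key.
        for pair in combinations(sorted(set(topics)), 2):
            scores[pair] += 1
    return scores

obs = [["Java", "JSP", "Servlets"],
       ["Java", "JSP"],
       ["Java", "XML"]]
scores = score_associations(obs)
print(scores.most_common(1))  # [(('JSP', 'Java'), 2)]
```

Dropping pairs below a score threshold yields the "smaller but less
complete" scope described above.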

> > [...] With
> > not too much work, you can represent quite a lot of human thinking
> > through good association-maps.
>
> Where 'not too much' = 10,000+ years of expert effort
> and 'quite a lot' = a tiny fraction
> (these figures based on Lenat's Cyc).

That figure seems waaay big, but I guess that depends totally on what
you're trying to achieve. My goals I can achieve today.

Alexander Johannesen

Sep 19, 2002, 4:19:10 AM9/19/02
to
Bradley K. Sherman <b...@panix.com> wrote:
> Okay so I remain unconvinced that XML + Ontologies are worth
> much in terms of real-world projects. My tone notwithstanding.

Because I don't like your tone, you are unconvinced?

> I am a realist Mr. Johannesen and have been trying to map
> molecular biology using avant garde tools for over 10 years.
> I find the web very, very helpful, just as gopher and
> anonymous FTP and TCP/IP and the Unix file system were
> very, very helpful. Having sat through interminable
> lectures about ontologies and XML and then sitting
> through interminable fruitless discussions about how
> to categorize the tangled web of scientific knowledge,
> you'll permit me a bit of bile.

I can understand this. I come from a different background, but with very
similar goals, and I've travelled much of the same road as you have. I
still don't understand why scepticism about a proposed solution needs
to be voiced as "you all fail" or "you've got it wrong" without any
specific knowledge of it.

I'm a realist too, working with AI and security for more than 10 years,
and I've been trying to map movements, shapes, textures and weather.
TM's work for what I need it for. That is all I'm saying.

> I am not swayed by groups talking about doing great things
> and urging me to buy into their techniques.

I haven't asked you to buy into my techniques at all. I use TMs for my
goals, which, with any ontology, should be similar.

> As we say here in the land of the free and the home
> of the Department of Homeland Security: Show me.

That depends on what you want; show you what?

Jorn Barger

Sep 19, 2002, 7:19:13 AM9/19/02
to
"JJ" <jj...@removethisdrizzle.com> wrote in message news:<1032382131.709437@yasure>...

> Is scrolling a long web page less effecient than
> clicking to another page ?

Scrolling is always faster.

> How many users of the web have scroll mice ?

You don't need a mouse to scroll.

> Is scrolling more time consuming if one is a slow reader ?

I don't see how reading-speed is relevant.

> Is scrolling a bigger calorie burn for the finger ?

I think 'mental calories' are more critical than finger-calories.

> If a webpage is non-profit, then a site might need more
> pages to house more ads.

You can put ads midpage instead.

> However, if enough people
> request that a webpage be not broken up into smaller
> pages, then the webmaster or webmistress might comply,
> even though they might not prefer it that way.

A better HF-approach is to offer two versions labelled
'one-page' and 'six-page' and see which people choose.

> If a user is using a 28.8k modem or 56k, then their ISP
> might kick them offline if they are reading a long webpage.

I don't think I bothered to include this in my inventory of
arguments: http://www.robotwisdom.com/web/pagelength.html
because it's not the author's responsibility by any stretch
of the imagination. (I keep Jennicam open in a mini-window
for that purpose, when I'm on dialup.)

> One hazzard of computer use is encouraging a lack of
> eye blinking (causing eye dryness). If a page is split
> into more pages, then it should encourage eye blinking.

You make joke, yes?

> Also if it is a graphics heavy page then the person might
> take a drink of water or stretch their arms or legs while
> it loads. With DSL, I seem to stretch out at the computer
> less than when I had a 56k connection.

Maybe we should require that between every paragraph, authors
should be required to insert good-citizenship messages like
"Say 'no' to drugs" or "They hate us because we're so good"?

> > You're missing my point-- subject/topic is one thing, type of
> > resources is a completely 'orthogonal' thing.
>
> http://www.dictionary.com/search?q=orthogonal
>
> So by not providing a short definition to orthogonal,
> you have increased my mouse clicks. Did you think
> that the majority of readers of this usenet group knew
> the meaning ?

I totally agree that authors must weigh that question.

> I have Atomica, so I didn't have to use
> copy and paste to get the meaning of the word.
> http://www.atomica.com/solutions_products_pc.html

I use bookmarklets in Netscape's personal toolbar.

> > For my Flaubert page, I think the way to decide what to 'promote'
> > has to be based on usefulness to searchers-in-general, eg
> > my main Flaubert page should keep a link to the best English and
> > French etexts, but doesn't need to include the full inventory
> > of all etexts.
>
> "Best" is subjective and does not relate to your "logical" posts.

Authors must be brave and recommend what they think is best,
but 'best etext' is not purely subjective-- nicely-formatted
HTML is better-in-general than overdesigned, junky HTML or
dry Project-Gutenberg txt. Carefully proofread is definitely
better than sloppy. Etc...

Jorn Barger

Sep 19, 2002, 7:26:31 AM9/19/02
to
"JJ" <jj...@removethisdrizzle.com> wrote in message news:<1032384591.687288@yasure>...

> However some people don't know what resolution means or even how
> to adjust their monitors for less eye strain. Some users don't know
> what a dot pitch is and what Hz their monitor is currently at. I
> think Jorn would call this "braindead" as webpages are trying
> to be made for the lowest common denominator out there and
> have not evolved in some areas.

Au contraire, lowest common denominator is my highest ideal.

> As the digital divide increases, we will see users with 15 inch
> screens and users with 30 inch screens. Web page designers might
> have to take that into account soon.

Web-designers should assume that the default font has been set to
what the ***user*** finds most comfortable, and that the default
window-size has been adjusted to hold about 80 characters of
that default font.

Tina Holmboe

Sep 19, 2002, 7:28:48 AM9/19/02
to
"Thomas Baekdal" <no...@baekdal.com> exclaimed in <3d893849$0$64195$edfa...@dspool01.news.tele.dk>:

> Actually the latest OS is perfectly capable os figuring out if what Hz the
> screen supports. The OS actually states (in the settings window) "Hide modes
> that this monitor cannot display." All screens that (in this case) Windows

And, pray tell, how does it know what my ADI DMC-2304 supports ? Is there
perhaps a list of all monitors ? Or does it use an API to ask the monitor ?

Add to it my elegant little Nokia 20" which I used to own - the maximum
horizontal frequency it could support was 120. In 640x480, that is. It
could do 82 (if I recall) in 1600x1200 - but how should the card know to
set itself to that ? Should it always select the highest ? Or perhaps a
- haha - default resolution, and set the maximum refresh rate based on that?

A new ADI - say the G1000 - supports 80Hz in 1920x1440; but it can go as
far up as 160Hz. What do you suggest the card uses ? 80, or 160 ?

Y'know, I *really* don't see any point in spending too much time discussing
exactly why the dishwasher isn't set to my *exact* water standard when I
pick it up. I am, and so are most people, fully capable of reading the
documentation that comes with it and changing it if I want to.

Plug'n'Pray can be taken a bit too far, IMHO. In the above case I don't
find anything to convince me that any driver can make an intelligent
choice - save heuristically. "Lemme see ... NO ONE wants to run 1600 on
a 17" monitor! I'll set the resolution to 1024, and then the Hz to ..."

Oi. *I* want to run 1600 on a 17", and anyone arguing with *my choice* is
welcome to a hard object in the forehead any time. It isn't practical,
and doesn't solve any problems.

Mark Tranchant

Sep 19, 2002, 9:05:34 AM9/19/02
to
"Tina Holmboe" <ti...@elfi.org> wrote in message
news:Qxii9.13524$HY3.2...@newsc.telia.net...

> "Thomas Baekdal" <no...@baekdal.com> exclaimed in
> <3d893849$0$64195$edfa...@dspool01.news.tele.dk>:
>
> > Actually the latest OS is perfectly capable os figuring out if what Hz the
> > screen supports. The OS actually states (in the settings window) "Hide modes
> > that this monitor cannot display." All screens that (in this case) Windows
>
> And, pray tell, how does it know what my ADI DMC-2304 supports ? Is there
> perhaps a list of all monitors ?

Yes.

> Add to it my elegant little Nokia 20" which I used to own - the maximum
> horizontal frequency it could support was 120. In 640x480, that is. It
> could do 82 (if I recall) in 1600x1200 - but how should the card know to
> set itself to that ? Should it always select the highest ? Or perhaps a
> - haha - default resolution and set the maximum refresh rate based on that?

The monitor data in the list below gives supported refresh rates for each
resolution in the driver. Either that, or the refresh rates are calculated
from the horizontal and vertical frequency limits of the monitor. You set
the resolution, it gives you a list of supported frequencies. The suggestion
here is that it should set the highest one, which I believe many graphics
card drivers do.

> A new ADI - say the G1000 - supports 80Hz in 1920x1440; but it can go as
> far up as 160Hz. What do you suggest the card uses ? 80, or 160 ?

See above - if you set 1920x1440, it will (or should) set 80Hz. Set a lower
resolution, it'll set a higher refresh as appropriate.
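The driver behaviour Mark describes boils down to a table lookup plus a
max(). The mode table below is made up; real drivers get it from the
monitor's driver entry or its EDID data:

```python
# Hypothetical mode table: resolution -> refresh rates (Hz) the monitor
# is known to support at that resolution.
MODES = {
    (640, 480):   [60, 85, 120],
    (1600, 1200): [60, 75, 82],
    (1920, 1440): [60, 80],
}

def best_refresh(resolution):
    """The user picks the resolution; the driver picks the highest Hz."""
    rates = MODES.get(resolution)
    if rates is None:
        raise ValueError("mode not supported by this monitor")
    return max(rates)

print(best_refresh((1600, 1200)))  # 82
print(best_refresh((640, 480)))    # 120
```

Note that nothing here chooses the resolution itself; that stays with the
user, which is the point of contention in the thread.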

> Y'know, I *really* don't see any point in spending too much time discussing
> exactly why the dishwasher isn't set to my *exact* water standard when I
> pick it up. I am, and so are more people, fully capable of reading the
> documentation that comes with it and change it if I want to.

But you agree it would be better if it did it itself. Or do you still have a
car with an ignition timing lever to adjust the spark advance?

> Plug'n'Pray can be taken abit too far, IMHO. In the above case I don't
> find anything to convince me that any driver can make an intelligent
> choice - save heuristically. "Lemme see ... NOONE wants to run 1600 on
> a 17" monitor! I'll set the resolution to 1024, and then the Hz to ..."

It doesn't choose the resolution: you do.

> Oi. *I* want to run 1600 on a 17", and anyone arguing with *my choice* is
> welcome to a hard object in the forehead any time. It isn't practical;
> and doesn't solve any problems.

Fine. Select 1600x1200, it'll set you the best refresh, which is nearly
always the highest supported.

--
Mark.


Tina Holmboe

Sep 19, 2002, 11:52:29 AM9/19/02
to
"Mark Tranchant" <mtra...@ford.com> exclaimed in <amci2u$p9...@eccws12.dearborn.ford.com>:

> from the horizontal and vertical frequency limits of the monitor. You set
> the resolution, it gives you a list of supported frequencies. The suggestion
> here is that it should set the highest one, which I believe many graphics

Then either I or you have misunderstood the suggestion - as I took it to be
that the card should 'choose' the highest it could do without 'interference'
from a user - i.e. *not* that the resolution was chosen first.

I've yet to encounter a graphics card driver that does not choose the optimal
refresh rate for the *resolution chosen*.

The issue is whether that was the intent: to let the user choose. I didn't
interpret it that way; if I am mistaken I am also wrong in my comments.

ian glendinning

Sep 19, 2002, 12:03:46 PM9/19/02
to
Thoughts inserted (IG:)

b...@panix.com (Bradley K. Sherman) wrote in message news:<am5cb0$pkh$1...@panix2.panix.com>...
> In article <16e613ec.0209...@posting.google.com>,
> Jorn Barger <jo...@enteract.com> wrote:
> > ...
> >So I don't believe TimBL&co are going to be able to define any
> >useful semantic labels for the bulk of webpage markup. (If they
> >could, it would be more useful in a META header than in phony
> >style=semantics tags.)
> > ...
>
> This is an understatement. There are thousands of talented
> people attempting to categorize knowlege and they are
> *all* failing. XML is a disaster. Ontologies are a disaster.
> XML plus Ontologies is synergistic: the combination is
> a bloated incomprehensible catastrophe. The Semantic Web
> is a red herring.

IG: Ontologies are a disaster for anyone who believes "an" ontology
can define the entire world of information. The seduction of XML
reinforces this issue, as you say, but let's not throw the baby out with
the bathwater. Of course, the semantic web existed before the WWW: the
term was coined by Michel Foucault (1966?) to signify that (human)
knowledge represents a tangled web of human interactions.

>
> Just creating a set of keywords is very hard. Assigning
> the keywords to the polymorphic phantasms we call 'things'
> is perilous. Attempting to place the things and keywords
> in hierarchies is mind-boggling and every group begins
> de novo and ends up with a similar yet different plate of
> simplified semiotic spaghetti.

IG: Agreed, there are as many possible ontologies as there are human
experiences of the world. Ontologies can only be agreed / shared in
limited contexts. Any "model" of information in the Semantic Web
needs to look for links representing human interaction with
information in contexts, not simply hierarchical taxonomic links. A
useful model for the Semantic Web will not be "An Ontology". It will,
I believe, be some extensible framework supporting any number of
taxonomies (ontologies) in contexts driven by human purpose and
organisational processes. There is no reason why XML could not describe it.
>
> If only Ph.D's really did have to know some philosophy!
>
> The web of 2050 is going to look a lot like the web of 1995.

IG: There's still time for us to learn something I hope.
Ian Glendinning
www.psybertron.org

Jorn Barger

Sep 19, 2002, 12:42:04 PM9/19/02
to
"Alexander Johannesen" <alexander....@bekk.no.spam> wrote in message news:<3d898864$1...@news.wineasy.se>...

> I'm interested to know what you would like on one juicy page that would
> satisfy you.

In the opera domain, a single timeline that shows birth and death of
major composers, first performance of major works, with links to
composer bios, librettos etc of works. That's the basics, then
you add the juiciest links to the timeline. Cf:
http://www.robotwisdom.com/science/classical/cicero.html

> Then you must define what you mean by "disciplined" ontology. Everything
> in the same ontology? I use various maps and merge them to create a
> new map that represents better what I'm looking for.

(Why are you consistently avoiding the question of what your topic
maps are about and what they do?)

'Disciplined' means the opposite of 'ad hoc'-- you don't have to keep
changing it for every new addition. It's a well-known _illusion_ in
AI that a small domain is easy to map; the mapping breaks down quickly
when you try to 'scale it up'.

> Example: There is the common KB map of the company, and there is my own
> personal map that tells of likes, dislikes, what I know, what I want
> to do more of, where I want to go, and how I like to do it. They both
> change over time without problem. I merge these maps to get a new
> mapping. I conduct a representation of what my take (computer-wise) on
> the companys CV (people) and technologies and directions. I take this
> map and extract reports and info from this.

If the representation-language is private/idiosyncratic, then search
engines won't find it meaningful.

> To further this, I can (I'm working on this right now) create a set of
> absolutes / philosophical defines, and map these to my personal map
> to make it conform to a standard, and hence we can talk about what
> various people in the company "likes", "hates", "loves" and so forth.

You will be approximately the 1,000,000,000,000,000th person to set out
on this quest. We're still waiting to hear back from any of them.

> Before I continue, though; are your take on this from a strict AI
> perspective, as in making computers *understand* what those labels mean?
> Mine is a by far more pragmatic take on it, and we might be talking
> about two entierly things.

My longterm goal is AI. My pragmatic intermediate approach is zero-AI,
well-designed informational-resource pages that I call 'Necessary Web'
overviews.

> I personally use cumulative statistics to find the associations that
> are binded together to score the association. By this, I can get a
> smaller (but arguably less complete) scope to work with. Maybe this
> pragmatic approach is too pragmatic for specific science, but for a
> KB it works quite nice.

That's Greek to me-- examples, please.

> > Where 'not too much' = 10,000+ years of expert effort
> > and 'quite a lot' = a tiny fraction
> > (these figures based on Lenat's Cyc).
>
> That figure seems waaay big, but I guess that depends totally on what
> you're trying to achieve. My goals I can achieve today.

The discussion topic is how to make it easier for search-engines
to understand document-content. 'Document-content' necessarily
includes all human knowledge, including especially the infinite
difficulties of the human psyche.

Bradley K. Sherman

Sep 19, 2002, 2:28:22 PM9/19/02
to
In article <3d89887e$1...@news.wineasy.se>,

Alexander Johannesen <alexander....@bekk.no.spam> wrote:
>
>That depends on what you want; show you what?


Evidence that XML + Ontologies is superior to existing
practice for real-world problems. I'm looking for
a system that is up and has been maintained for at
least one year (5 years would be better, but I'm
not unreasonable).

--bks

Alexander Johannesen

Sep 20, 2002, 3:28:56 AM9/20/02
to
> > That depends on what you want; show you what?

Bradley K. Sherman <b...@panix.com> wrote:
> Evidence that XML + Ontologies is superior to existing
> practice for real-world problems. I'm looking for
> a system that is up and has been maintained for at
> least one year (5 years would be better, but I'm
> not unreasonable).

Topic Maps have only been a standard (XTM 1.0) since the end of 2000, so
there are obviously a few limitations time-wise. However, there are
a few sites running it; look through these, from the FAQ:

http://easytopicmaps.com/index.php?page=TopicMapFaq

Q: Has anyone successfully used TopicMaps in a commercial
environment yet?

A: Yes. A non-commercial example of a topic map driving website
navigation http://www.ontopia.net/i18n/index.jsp There is a
French web site, the Quid Encyclopedia, which is based on topic
maps and is said to contain some 70,000 topics.
http://www.quid.fr There is also a Norwegian site at
http://www.itu.no Starbase (the makers of starteam) are using
topicmaps as a core building block of a new product they're
developing. Bravo, developed by GlobalWisdom Inc., is an advanced
taxonomy tool based on topic maps. See
http://www.techquila.com/bcase.html for more examples of current
commercial use of topic maps.

But solid proof of a system that has used this for years? Can't be done.
Is that then an argument for "it doesn't work" and "it's a failure"?

Alexander Johannesen

Sep 20, 2002, 4:04:22 AM9/20/02
to
Jorn Barger <jo...@enteract.com> wrote:
> In the opera domain, a single timeline that shows birth and death of
> major composers, first performance of major works, with links to
> composer bios, librettos etc of works. That's the basics, then
> you add the juiciest links to the timeline. Cf:
> http://www.robotwisdom.com/science/classical/cicero.html

Funny you should mention it, because I'm creating just such a beast for my
www.claudiomonteverdi.org site. I'm faced with a whole lot of problems
around works, publication dates, timelines and so forth, and so far they
have been best solved through topic maps, using the OSS TM4J (at
sf.net) to write graphs and textual timelines w/links to historic and
specific events on them. This is still very alpha for the graphs, but
the rest seems to go quite smoothly.

> (Why are you consistently avoiding the question of what your topic
> maps are about and what they do?)

Didn't know I was; they map the company CV and knowledge base. I work
for a consultancy company, so obviously this is our main commodity. You
can browse for consultants through associations, competency plans, pure
data, and relevance, sorting through it with timetables and a collection
of technology terms and specifics. You basically tag those that fit the
job, and print out a list of these (as suggestions) and their CVs, or
generate it all for web browsing.

Example: I can input that we have a project using JSP, and I can find
candidates who know Java but haven't put up JSP, and I can select
candidates based on the people they've worked with before, the
technology they like, maybe using techniques they've got on their
competence plan, and so forth. It ain't rocket-science, but it seems
to work well (by this I mean that it is in beta stages).
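The tag-and-filter search described above can be sketched roughly in code. Everything here (the names, the `skills` field, the `find_candidates` helper) is invented for illustration, not taken from the actual system:

```python
# Hypothetical sketch of the consultant search described above:
# people are tagged with skills; we look for those who have a required
# skill (Java) but have not listed another (JSP), i.e. candidates who
# could pick it up on a new project.

consultants = [
    {"name": "Anne", "skills": {"Java", "JSP", "SQL"}},
    {"name": "Bjorn", "skills": {"Java", "Swing"}},
    {"name": "Carl", "skills": {"Python"}},
]

def find_candidates(people, must_have, missing):
    """Return names of people who have every skill in `must_have`
    but none of the skills in `missing`."""
    return [
        p["name"]
        for p in people
        if must_have <= p["skills"] and not (missing & p["skills"])
    ]

print(find_candidates(consultants, {"Java"}, {"JSP"}))  # ['Bjorn']
```

The real system layers associations and competence plans on top of this, but the core query is this kind of set filtering.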

> It's a wellknown _illusion_ in
> AI that a small domain is easy to map, but breaks down quickly when
> you try to 'scale it up'.

Depends on the scaling method.

> If the representation-language is private/idiosyncratic, then search
> engines won't find it meaningful.

Of course, and I agree. But I don't see search-engines as the primary
tool for future content searching.

> > To further this, I can (I'm working on this right now) create a set
> > of absolutes / philosophical defines, and map these to my personal
> > map to make it conform to a standard, and hence we can talk about
> > what various people in the company "likes", "hates", "loves" and so
> > forth.
>
> You will be approximately the 1,000,000,000,000,000th person to set
> out on this quest. We're still waiting to hear back from any of them.

Oh, I know the argument well. And I know why most fail. I'm not even
saying I will succeed, and I *know* I won't succeed in giving you what
you want.

> My longterm goal is AI. My pragmatic intermediate approach is zero-

> AI, well-designed informational-resource pages that I call 'Necessary
> Web' overviews.

I believe our goals and current approaches are similar, then.

> > I personally use cumulative statistics to find the associations that
> > are binded together to score the association. By this, I can get a
> > smaller (but arguably less complete) scope to work with. Maybe this
> > pragmatic approach is too pragmatic for specific science, but for a
> > KB it works quite nice.
>
> That's greek to me-- examples please.

Ok; when the human interface is too complex to map, you make a smaller
interface based on merging of similar topics. The new interface is
easier to handle, but will not contain the same wisdom. I can say that
laughing and giggling are the same topic, weighted by scores to
differentiate between them. The topic is mostly the same, but the
refined details that are human are lost in the process. This works
pragmatically, but not in pure AI apart from a model.

A few years back I made a prototype to handle this; using cumulative
histograms I created "mini-brains" that could hold an entity's
"emotional and experience-based variables" where the above giggle or
laugh were mapped along a horizontal/vertical deduction axis. (More
greek, perhaps?) Anyway, they worked great for AI in games, in smaller
applications and an experimental neural-net simulation (that I never
finished). I have a distant plan to implement these again into a
framework with topic maps, as they are similar in structure.
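The "mini-brain" is only loosely described, so the following is a purely illustrative guess at the cumulative-histogram part: count weighted experiences per emotion and read an entity's dominant mood off the counts. The class name, emotions, and weights are all invented:

```python
from collections import Counter

class MiniBrain:
    """Illustrative guess at a 'mini-brain': a cumulative histogram of
    weighted experiences, queried for the entity's dominant emotion."""

    def __init__(self):
        self.histogram = Counter()

    def experience(self, emotion, weight=1):
        # Accumulate into the histogram; repeated stimuli reinforce.
        self.histogram[emotion] += weight

    def mood(self):
        # Dominant emotion so far, or None if nothing has happened.
        return self.histogram.most_common(1)[0][0] if self.histogram else None

npc = MiniBrain()
npc.experience("amusement", 3)
npc.experience("fear", 1)
print(npc.mood())  # amusement
```

This says nothing about the "deduction axis" in the post; it only shows why cumulative counts are cheap enough for game AI.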

> > That figure seems waaay big, but I guess that depends totally on
> > what you're trying to achieve. My goals I can achieve today.
>
> The discussion topic is how to make it easier for search-engines
> to understand document-content. 'Document-content' necessarily
> includes all human knowledge, including especially the infinite
> difficulties of the human psyche.

I'm told that topic maps + RDF is all you need to solve this problem,
however I have done little work yet on this side of things; I don't
see search-engines (as we know them) as the best future alternative
for searching the net; I picture a network where sites will link
their topic net to others, and through a mechanism (a server-side
application that speaks XTM) can map to the next server. This works
a little like having a bunch of people in a room, asking if someone
knows about a topic, and "no, but I hear that Tom over there might
know something". Tom knows a few bits and pieces, but also knows that
the big Knowledge Mother of all on that topic is Sam. And so forth.
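The "ask the room" referral chain above can be sketched as a toy lookup: each site either answers a topic query, refers you onward, or both. The site names, topics, and `ask` helper are placeholders, not part of any XTM mechanism:

```python
# Toy version of federated topic lookup by referral: follow "Tom might
# know" pointers until the best-informed site is reached or hops run out.

sites = {
    "me":  {"knows": {}, "refer": {"opera": "tom"}},
    "tom": {"knows": {"opera": "a few bits"}, "refer": {"opera": "sam"}},
    "sam": {"knows": {"opera": "the big knowledge base"}, "refer": {}},
}

def ask(site, topic, hops=5):
    """Follow referrals, remembering the latest answer seen."""
    best = None
    while site is not None and hops > 0:
        info = sites[site]
        if topic in info["knows"]:
            best = (site, info["knows"][topic])
        site = info["refer"].get(topic)
        hops -= 1
    return best

print(ask("me", "opera"))  # ('sam', 'the big knowledge base')
```

A hop limit (or visited-set) matters in practice, since referral graphs between real sites can easily contain cycles.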

As far as I've understood it, RDF is the pragmatic link between
places, and topic maps (through their own or external applications)
handle the sites' knowledge maps.

As far as the discussion topic goes; use RDF, and browse and index the
potential topic map. That ensures quality of data. As for a simpler
and more current solution, a mix of plain language and YAML.org
would probably do. WikiWikis aren't too far off, either.

Jorn Barger

Sep 20, 2002, 10:58:26 AM
"Alexander Johannesen" <alexander....@bekk.no.spam> wrote in message news:<3d8ad686$1...@news.wineasy.se>...

> Didn't know I was; mapping the company CV

CV = curricula vitae? I.e., resumes of all employees?

> and knowledge base.

How the **** am I supposed to know what's in your KB?

> I work
> for a consultancy company, so obviously this is our main commodity.

I'm a professional handwaver, behold my smoke and my mirrors.

> You can browse for consultants through associations,

What kind of 'associations'? Inter-relationships, organisations, what?

> competency plans,

competency = skills
plans = ???

> pure data, and relevance,

(Your communication skills make the W3C look good.)

> sorting through it with timetables

Timelines? Of what? Where a person worked, and when???

> and a cobble of
> technology terms and specifics. You basically tag those that fit the
> job, and print out a list of these (as suggestions) and their CVs, or
> generate it all for web browsing.

So for some reason you're searching for an employee with certain skills,
and you choose the terms for the skills, and it lists their names?
This is called a trivial database, that could be programmed by an
8-year-old in BASIC.

> > It's a wellknown _illusion_ in
> > AI that a small domain is easy to map, but breaks down quickly when
> > you try to 'scale it up'.
>
> Depends on the scaling method.

Right, so now we're back to the magical handwavy "topic maps solve
everything simply" crap. You just don't know anything about this,
so you should stop telling lies.

> I can say that
> laughing and giggling are the same topic, weighted by scores to
> differentiate between them.

Unfortunately, the difference is qualitative more than quantitative.
(A small laugh is not necessarily a giggle.)

> The topic is mostly the same, but the
> refined details that are human are lost on the process. This works
> pragmatically, but not in pure AI apart from a model.

So 'pragmatically' means "via human intelligence, not automation".

> A few years back I made a prototype to handle this; using cumulative
> histograms I created "mini-brains" that could hold an entity's
> "emotional and experience-based variables" where the above giggle or
> laugh were mapped along a horizontal/vertical deduction axis. (More
> greek, perhaps?)

As I explained, I built my first psychological simulation in 1972
and the topic has been my primary interest ever since, so I know
exactly how vacuous your claims are.

> Anyway, they worked great for AI in games,

No, you are lying.

> in smaller
> applications and an experimental neural-net simulation (that I never
> finished).

Ergo, it didn't work at all.

> I have a distant plan to implement these again into a
> framework with topic maps, as they are similar in structure.

That will be real helpful for people doing research on laughing
and giggling (Roget 840.3471), I'm sure.

> I'm told that topic maps + RDF is all you need to solve this problem,

Clearly you were profoundly impressed by their handwaving.

> As far as the discussion topic goes; use RDF, and browse and index the
> potential topic map. That ensures quality of data.

As far as this discussion goes, you're obviously making claims
way beyond your capabilities, so you should shut up now.

Bradley K. Sherman

Sep 20, 2002, 11:43:30 AM
In article <3d8ace38$1...@news.wineasy.se>,
Alexander Johannesen <alexander....@bekk.no.spam> wrote:
> ...

>But solid proof of a system that has used this for years? Can't be done.
>Is that then an argument for "it doesn't work" and "it's a failure"?
> ...

No, nor is it an argument for adoption. I'll check back in
eighteen months.

--bks

David Lloyd-Jones

Sep 21, 2002, 1:52:42 AM
Jorn Barger wrote:

>"Alexander Johannesen" <alexander....@bekk.no.spam> wrote
>
<blahblahblah snipped>

>(Your communication skills make the W3C look good.)
>

It's always nice to see mere piss and vinegar turning into good quality
vitriol from time to time.

Let me, however, pour oil on troubled waters with my dispassionate view
of things and my Deep Insight(tm.) into how things Really Are("):
"topic maps" seems to me an almost clinical case of net hype.

When I skimmed all the topic maps sites last night I ran across one
particularly nice item in a FAQ: "Q.What's the difference between a
topic map and a thesaurus? A. The difference is...."

Well this is purest horseshit. (Even assuming we allow them to pretend
this is something people ask frequently...) A topic map is obviously one
kind of thesaurus, perhaps crippled by the amount of ontological lard it
has to carry on its shoulders. Topic map folks, thesaurus folks, index
folks, librarians, are all doing the same thing, trying to get stuff
organized in useful ways, and the only thing distinguishing the Topic
Maps people, as far as I can see, is the amount of naive pretentiousness
-- and ignorance of their forebears -- that they bring to the effort.

Looking at the various commercial and semi-commercial topic maps
software suites around, the only thing I saw was a bunch of rather
simple script collections.

I say it's spinach.

-dlj.


Tina Holmboe

Sep 21, 2002, 11:13:57 AM
David Lloyd-Jones <dlloy...@rogers.com> exclaimed in <KOTi9.8993$q41....@news02.bloor.is.net.cable.rogers.com>:

(I'm pretty certain I'll be flamed for this. So be it)


> organized in useful ways, and the only thing distinguishing the Topic
> Maps people, as far as I can see, is the amount of naive pretentiousness
> -- and ignorance of their forebears -- that they bring to the effort.

Personally I find this quote absolutely wonderful:

"A topic map defines a multidimensional topic space - a space in which
the locations are topics, and in which the distances between topics
are measurable in terms of the number of intervening topics which must
be visited in order to get from one topic to another, and the kinds of
relationships that define the path from one topic to another, if any,
through the intervening topics, if any."

A multidimensional topic space. Gotcha. I guess I'm among those with less
intelligence than needed. When I look at

"Topic map folks, thesaurus folks, index folks, librarians, are all
doing the same thing, trying to get stuff organized in useful ways,..."

I find that I must have *severely* misunderstood the word 'useful'. Then
again, I always found SGML needlessly complicated as well. Alas.

This said: does anyone have a resource which in a
not-quite-as-useful-as-the-above way explains what a "topic map" is all
about ? Preferably in English. Modern day English would be even better.
Without references to Star Trek[1] would be perfect.


[1]
"Multidimensional topic space" - where did the "subspace phenomena" go ?

--
- Tina

Thomas Baekdal

Sep 21, 2002, 12:42:37 PM

"Tina Holmboe" <ti...@elfi.org> wrote in message
news:V00j9.419$c5.1...@newsb.telia.net...

> Personally I find this quote absolutely wonderful:
>
> "A topic map defines a multidimensional topic space - a space in which
> the locations are topics, and in which the distances between topics
> are measurable in terms of the number of intervening topics which must
> be visited in order to get from one topic to another, and the kinds of
> relationships that define the path from one topic to another, if any,
> through the intervening topics, if any."

You know, people who do not know what they are talking about often try to
hide it using words or sentences that are hard to understand. This is a good
example of such a case.

I cannot, however, provide you with a clear (unambiguous? :o) ) answer to
what topic maps are all about. I too lack a clear explanation.

Bradley K. Sherman

Sep 23, 2002, 12:09:41 AM

Semantic Differential Space was well-defined and under
investigation at least 50 years ago. See e.g.

An atlas of semantic profiles for 360 words.
Amer. J. Psychol. 1958, 71, 688-699.

--bks

news.songnetworks.no

Sep 23, 2002, 5:49:44 AM
Alex wrote:
> > and knowledge base.

"Jorn Barger" <jo...@enteract.com> wrote :


> How the **** am I supposed to know what's in your KB?

Why does it matter? TM's are maps of metadata. The data could be
anything.

> > I work
> > for a consultancy company, so obviously this is our main commodity.
>
> I'm a professional handwaver, behold my smoke and my mirrors.

What are you trying to say now?

> (Your communication skills make the W3C look good.)

You've got skills to match, I can assure you. And I've already told you
that English is not my native language, so I am sorry for not being able
to communicate in the appropriate manner.

> > sorting through it with timetables
>
> Timelines? Of what? Where a person worked, and when???

I'm pretty sure I wrote time_tables_ up there, but I doubt it will make
any difference.

> So for some reason you're searching for an employee with certain
> skills, and you choose the terms for the skills, and it lists their
> names?

Both yes and no, but I don't feel like going through this anymore. TM's
cannot help you, and neither can I.

> > > It's a wellknown _illusion_ in AI that a small domain is easy to
> > > map, but breaks down quickly when you try to 'scale it up'.
> >
> > Depends on the scaling method.
>
> Right, so now we're back to the magical handwavy "topic maps solve
> everything simply" crap.

I've never said it solves everything.

> You just don't know anything about this, so you should stop telling
> lies.

Right; you don't get it, and it is my fault. I am sorry.

> So 'pragmatically' means "via human intelligence, not automation".

No, it means limiting the amount of interface data so that automation
is viable.

> As I explained, I built my first psychological simulation in 1972
> and the topic has been my primary interest ever since, so I know
> exactly how vacuous your claims are.

So, based on your own simulations and interest since about 1972 you know
that what *I'm* saying is bull, even if I've only used keywords?
Don't tell me that it doesn't work, please; you simply do not know.
My simple brain-histograms did work. Oh, I'm sure they don't work for
*you*, by your scope and uses, but I can't help you with that. "Work"
is a matter of opinion, and you've clearly stated yours.

> > Anyway, they worked great for AI in games,
>
> No, you are lying.

I've built games with it, and it worked *great*. Why would I lie?

I'm really interested now; what exactly do you think I would gain from
lying? I'm not in the habit of lying, so why would I? I'm not trying
to convince you to do anything new or different. Heavens, if these
simple brain-histograms seem so stupid to you, just make a note of it.
All I've said is; "They worked for me." Through them I could make
objects have "emotions" in the simplest form, have experiences with
certain incidents, and so forth. I don't really care if you think all
this sounds dubious and like hogwash; they did their job for simple but
powerful AI.

> > in smaller
> > applications and an experimental neural-net simulation (that I never
> > finished).
>
> Ergo, it didn't work at all.

It worked in parts, but was never *finished*. I didn't know that an
explanation had to end in "true" or "false"?

> > I'm told that topic maps + RDF is all you need to solve this
> > problem,
>
> Clearly you were profoundly impressed by their handwaving.

Why do you persist in calling this hand-waving? This is not rocket-
science. In fact, it is really simple; even 8-year-olds who know how
to make huge relational databases in BASIC know how to do this.

But, now I'd rather turn around and ask you kindly *not* to use
them. I don't think they are for you. They are not built to suit your
needs. They cannot solve any of your problems. I'm sorry I brought
them up.

> As far as this discussion goes, you're obviously making claims
> way beyond your capabilities, so you should shut up now.

You are, of course, an authority on what my capabilities are.

news.songnetworks.no

Sep 23, 2002, 5:54:00 AM
"David Lloyd-Jones" <dlloy...@rogers.com> wrote :

> Looking at the various commercial and semi-commercial topic maps
> software suites around, the only thing I saw was a bunch of rather
> simple script collections.

Can you elaborate on what suites were simple script collections? I'm
really interested to know.

news.songnetworks.no

Sep 23, 2002, 6:00:54 AM

"Tina Holmboe" wrote:

> This said: does anyone have a resource which in a
> not-quite-as-useful-as-the-above way explains what a "topic map"
> is all about ?

The spec works for me;
http://www.topicmaps.org/xtm/index.html

And the general pages from ontopia.net are good too;
http://www.ontopia.net/topicmaps/index.html

> "Multidimensional topic space" - where did the "subspace phenomena"
> go ?

Apart from the Star Trek lingo, it actually makes sense when you get
into it, but do keep in mind that this is "information architect" speak,
which can be a bit difficult to grasp at times.

Tina Holmboe

Sep 23, 2002, 10:24:38 AM
"news.songnetworks.no" <alexander....@bekk.no.spam> exclaimed in <3d8e...@news.wineasy.se>:

> The spec works for me;
> http://www.topicmaps.org/xtm/index.html
>
> And the general pages from ontopia.com is good too;
> http://www.ontopia.net/topicmaps/index.html

Both duly noted. When time permits I will try again ...

> Apart from the Star Trek lingo, it actually makes sense when you get
> into it, but do keep in mind that this is "information architect" speak,
> which can be a bit difficult to grasp at times.

I dunno. I've been told by a good friend that SGML isn't very difficult,
but it's hard to find a way to tell him that he's being full of it ;)

At this point in time, and with judgement still pending, it seems to
me that the information in - for instance - the XTM spec has been far
too heavily architected.

But I'm going to try again :) Thank you for the links.

news.songnetworks.no

Sep 24, 2002, 2:47:09 AM
"Tina Holmboe" wrote:
> I've been told by a good friend that SGML isn't very
> difficult, but it's hard to find a way to tell him that he's being
> full of it ;)

Oh, anyone who works with SGML *knows* they're full of it, but they hide
it well.

> At this point in time, and with judgement still pending, it seems to
> me that the information in - for instance - the XTM spec has been
> far too heavily architected.

I can somewhat agree with that; XTM isn't a "solve everything" tool but
more a "solve a heck of a lot" tool; it can handle a lot with
rather simple smaller building-blocks (there are about 49 elements in
total). The lack of constraints on what you can use TM's for is the
hardest part of XTM to explain or to agree with.

> But I'm going to try again :) Thankyou for the links.

No worries.

JJ

Sep 25, 2002, 2:31:20 AM
"Tina Holmboe" <ti...@elfi.org> wrote in message
news:XF6i9.13220$HY3.2...@newsc.telia.net...
> "JJ" <jj...@removethisdrizzle.com> exclaimed in
<1032384591.687288@yasure>:
>
> > On Geocities web pages, a data statistics page can tell you the
> > resolution of the screens the people who hit your page are using. I
> > have seen pages
>
> You might want to reconsider the word 'statistics', as what Geocities
> and others are doing is pure guesswork based on Javascript techniques.

Pure guesswork ? Does that mean Geocities Javascript data is 100% wrong ?

What techniques are better ?

> And do remember that the resolution has virtually nothing to do with
> anything.

Ok, could you explain this a little more ?

> > think Jorn would call this "braindead" as webpages are trying
> > to be made for the lowest common denominator out there and

> > have not evolved in some areas. However, the plugin market
>
> Creating a webpage to the "lowest common denominator" does not, as some
> believe, mean "grey" - unless, of course, grey is what one wants. What it
> does involve is designing to the "highest number of users" - ie. not
> dependent on resolution, colors, or graphics.

So you are saying that a web page could be designed to look
nice if someone has graphics turned off and a black and white
monitor ?

> > web. Perhaps a plugin could be designed that helps web
> > authors detect and adjust their web page automatically to
> > account for resolution, user defined colors, and whether or
> > not any accessibility functions were currently turned on ?
>
> There has been quite alot of discussion regarding these topics in many
> fora. A consensus has yet to be reached, but the majority of people
involved with usability and accessibility do not seem to agree that
> producing an infinite number of different versions of a webpage just to
> satisfy the desire for eyecandy is a particularly good idea.

Gah, I only mentioned 3 things and you mention the word "infinite" ?

I was merely suggesting a plugin that could be configured by the user
to say choose higher quality graphics on a website, if that website
had 3 levels of graphics on its servers. Many artists and people with
above average vision might want to set their browsers to not display
low quality graphics if they have a high speed connection and if
they have the hard drive space as well. Then on the colors plugin,
you could have html that senses what colors the user has chosen.
Say the user has a purple skin for their www browser and red text,
then the html could switch to a different font or different color ads to
match the user's environment.

> In the area of accessibility functions, heavy arguments have been voiced
> against it for privacy reasons and these I agree with whole heartedly.

But when I say "detect" and "sense", it would not be setting a cookie
or sending info back to the author or originating host. I use those
2 terms to mean that the plugin would adjust the html by itself and
not breach privacy.

> > As for blind users, some of them have text enlargers. If you are using
> > a font, that is part of a bitmap or jpg file, then they might not be
able
> > to "read" it.
>
> I think I can promise you with some certainty that the majority of
> blind users do not keep text enlargers around for pets ...

Oops, I meant to explain this further but forgot to. I meant to say that
not all "blind" people are 100% blind. My grandmother and my best
friend Laura are not fully blind and they use text enlargers online.
They are slowly going blind and on their ID cards it says "blind" on
it. Anyway, Laura uses the Opera browser as she says that it,
"Enlarges text better than other browsers." They both strip away the
author's colors and use a black background and white text that shows
on the monitor as each letter being a half inch high or so. So that when
a web page has words included in a bitmap or jpeg graphic, neither
of them can "see" it or adjust it to "read" it. My grandmother also has
a book enlarger as she has macular degeneration and she has a tough
time of it sometimes finding large text books. Anyway, Laura is going
to have eye surgery soon, so hopefully she won't need an enlarger
after that. So if a plugin could sense that a user has the font turned
way up high, then the webpage could adapt and either enlarge the
graphics accordingly or have an option at the home page for text
only.

> - Tina Holmboe

Jon Melusky


Arjun Ray

Sep 25, 2002, 5:53:59 AM
In <GuFj9.14313$HY3.3...@newsc.telia.net>, ti...@elfi.org (Tina
Holmboe) wrote:

|I've been told by a good friend that SGML isn't very difficult, but it's
| hard to find a way to tell him that he's being full of it ;)

What have you found difficult about SGML?

Tina Holmboe

Sep 25, 2002, 6:24:45 AM
"JJ" <jj...@removethisdrizzle.com> exclaimed in <1032935479.770701@yasure>:

>> You might want to reconsider the word 'statistics', as what Geocities
>> and others are doing is pure guesswork based on Javascript techniques.
>
> Pure guesswork ? Does that mean Geocities java data is 100% wrong ?

No, it means that there is a statistical uncertainty: you cannot say
whether the Geocities numbers are 0.1% wrong, 1.0% wrong, or 10%
wrong.

And that means that statistically it isn't very much use. Taking the
data supplied by TheCounter.com - often touted in such circumstances - for
August of 2002, we find that 11% do not use Javascript. TheCounter, as
is their habit, does not provide any indication as to the level of error
on that statistic.

Let us guess that the correct number may be 11% +/- 2.5%. That is down
from 16% in 2000, but has not changed *at all* since 2001.

In order to get at the resolution you need Javascript. This means that
their August 2002 numbers need to be moderated by the above 11% - they
state, for instance, that 37% use 1024x768. Again we'll apply +/- 2.5%
to it, and get a situation where:

    "37% - plus/minus 2.5% - of users that report resolution through
    Javascript - that is 89% plus/minus 2.5% - run 1024x768"
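A back-of-envelope check of those figures (error margins ignored): if 11% of visitors run no Javascript, only the remaining 89% can report a resolution at all, so "37% use 1024x768" is 37% of that 89% once rescaled to all visitors:

```python
# Rescale TheCounter's "37% use 1024x768" from JS-reporting users to
# all visitors, using the post's 11% no-Javascript figure.

no_js = 0.11
reporting = 1.0 - no_js        # share of visitors who can report: 0.89
share_1024 = 0.37 * reporting  # rescaled to all visitors

print(round(share_1024 * 100, 1))  # 32.9
```

So even before the +/- 2.5% margins, barely a third of all visitors can be said to run 1024x768, and the resolution of the other 11% is simply unknown.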

I'm sorry, but this doesn't cut it for me. Add to it that the counter must
be installed on a target site, and that in itself biases the data towards
users hitting the site - if that site *requires* Javascript, for instance,
then it isn't very likely that it gets many non-Javascript users.

Basically it boils down to whether you want to believe a statistic
regarding item A which is collected by using a method reliant on item B,
which doesn't have all too reliable statistics either.

Bottom line: neither Geocities nor TheCounter *knows* anything about what
resolutions are *actually* in use on the WWW; they have a fair (+/-2-5%)
idea of which are used to access their stuff, possibly, with some luck;
but resolution isn't very important anyway.


> What techniques are better ?

Ignoring the resolution, and concentrating on the humans involved; you
don't publish information for (ie. aimed at being interesting to) the
graphics-card do you ?

>> And do remember that the resolution has virtually nothing to do with
>> anything.
>
> Ok, could you explain this a little more ?

Let us for a moment disregard the situation in which resolution simply
isn't a factor (text, Braille, Voice, etc, etc, etc) and look at what
resolution you and I use.

I've got close-to-perfect vision. I run a number of different browsers
on a number of different systems, none of them 'modern' according to
today's definition. One has 1024x768, one has 1280x1024. In neither case
do I run the browser in anything resembling that size. 844x755 is the size
of my current Opera window on the former resolution.

Which *font size* I want depends on what I read, when I read it, and the
light in the room. Neither the width of my browser nor the size of my font
are dependent on the *resolution*, however.

> So you are saying that a web page could be designed to look
> nice if someone has graphics turned off and a black and white
> monitor ?

I am certain I can find someone (I have many Goth friends ... ) who would
agree that a plain text, black and white, layout is "nice" - but it is
a subjective view - and not interesting.

What I am saying, and what is provable, is that you can create a layout
which looks great WITH graphics, colors, and whatnots on a 19" monitor
in 1600x1200, and STILL does not prevent anyone else from getting to the
information.

- If I want to view the whole lot with graphics and eyecandy all over
the place, I can. Big 'fat' pipe to the 'net, nice monitor, latest
browser. No problem.

- If I want to use Lynx to get to the information and ONLY the information
(and, possibly, pipe it along to a Braille or Speech system) I can
do that as well. No, it won't *look* the same - but I'll get at the
content.

Unless someone has worked overtime to *prevent* that, of course. The
Swedish site http://www.libresse.se/ is an excellent example of the
latter. I wonder how much they paid for it. Anything above peanuts and
I'd be tempting at calling it a sting.


>> involved with usability and accessibility do not seem to agree that
>> producing an infinite number of different versions of a webpage just to
>> satisfy the desire for eyecandy is a particularly good idea.
>
> Gah, I only mentioned 3 things and you mention the word "infinite" ?

Yep. You mention three variables:

- Resolution
- User defined colours
- Accessibility functions

but those are not all the factors involved. Window size, dpi, font
selection (not all fonts behave the same), light levels, eyesight ...

But alright - the word "infinite" is perhaps wrong here. There are a
great number of different factors which affect HOW a design is viewed,
and it is my theory and - if you wish - belief that it is better to
leave adjustment of these factors to the user and avoid creating designs
which *depend* on any of them.

If you create a design which depends on resolution, then you might find
yourself needing to create one which depends on the window size, and then
the number of potential versions grows rapidly.


> I was merely suggesting a plugin that could be configured by the user
> to say choose higher quality graphics on a website, if that website
> had 3 levels of graphics on its servers. Many artists and people with

Ah, content negotiation. Such as for instance suggested by HTTP 1.1;
may I refer you to

http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12
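The HTTP 1.1 mechanism referenced above works by the client listing acceptable media types with quality ("q") values, and the server picking the best variant it actually has. This is a minimal sketch of that server-driven choice; real negotiation also handles wildcards like `image/*`, which are omitted here:

```python
# Minimal sketch of server-driven content negotiation (RFC 2616, sec. 12):
# parse an Accept header into {media type: q-value}, then pick the
# available variant the client rates highest.

def parse_accept(header):
    """Turn 'image/png;q=0.5, image/jpeg' into {'image/png': 0.5, ...}."""
    prefs = {}
    for part in header.split(","):
        fields = part.strip().split(";")
        q = 1.0  # per the spec, a missing q parameter means q=1
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs[fields[0].strip()] = q
    return prefs

def choose_variant(accept_header, available):
    """Return the available type with the highest q, or None."""
    prefs = parse_accept(accept_header)
    ranked = sorted(available, key=lambda t: prefs.get(t, 0.0), reverse=True)
    return ranked[0] if prefs.get(ranked[0], 0.0) > 0 else None

print(choose_variant("image/png;q=0.5, image/jpeg",
                     ["image/png", "image/jpeg"]))  # image/jpeg
```

This is also why JJ's "3 levels of graphics" idea needs no new plugin: the client states preferences once, and the server serves the matching variant.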


>> In the area of accessibility functions, heavy arguments have been voiced
>> against it for privacy reasons and these I agree with whole heartedly.
>
> But when I say "detect" and "sense", it would not be setting a cookie
> or sending info back to the author or originating host. I use those
> 2 terms to mean that the plugin would adjust the html by itself and
> not breach privacy.

It would seem that the plugin needs to send some information to the server,
however, along the lines of "I have a Braille reader, send me such
information" - that would tell the server something about the user.

And be a waste of time, naturally, since the webpage could easily be
linearized and fed to a Braille reader without the intervention of a
plugin or content negotiation.

You don't *need* the plugin, when all that is needed goes like this:

"Hello, webserver. I'd like your resource / please ... ah, they are
sending data back. One HTML-file, check. Let's look at it ... hmm,
two OBJECTs with Flash plugins, I can handle that, lets download
those ... ah, a CSS file with layout instructions, that I can deal
with, let's get those ... that's it ? Good. Then I render the document
as the stylesheet suggests, and then I play the two Flash movies.
Done."

Versus


"Hello, webserver. I'd like your resource / please ... ah, they are
sending data back. One HTML-file, check. Let's look at it ... hmm,
two OBJECTs, I have no idea what THOSE are, but I can present the
alternate information inside 'em, let's do that ... a CSS file ? What
on Earth is that ? Lets ignore that, and render the document according
to the HTML structure in it.
Done."

There is no animosity here. Both browsers can co-exist without any
other interference from a web designer than the simple theory: design
for your users, not for whatever resolution, browser, or machine they
have.


>> I think I can promise you with some certainty that the majority of
>> blind users do not keep text enlargers around for pets ...
>
> Oops, I meant to explain this further but forgot to. I meant to say that
> not all "blind" people are 100% blind. My grandmother and my best
> friend Laura are not fully blind and they use text enlargers online.

*nods* This is a common method for people with reduced vision; and
is a nice test for many web developers. It tends to flip some concepts
over - many such users that I have run across prefer, for instance, their
content to start directly at the left side instead of having to move over
a menu every time. Their mileage does vary, of course.


> "Enlarges text better than other browsers." They both strip away the
> author's colors and use black background and white text that shows
> on the monitor as each letter being a half inch high or so. So that when
> a web page has words included in a bitmap or jpeg graphic, neither
> of them can "see" it or adjust it to "read" it. My grandmother also has

That is right - and also one reason why the WAI, the RNIB, and I state that
one should avoid using images for things for which markup exists; it's one
of the points in the WCAG I agree with.


> time of it sometimes finding large text books. Anyway, Laura is going
> to have eye surgery soon, so hopefully she won't need an enlarger

Do wish her the best of luck.


> after that. So if a plugin could sense that a user has the font turned
> way up high, then the webpage could adapt and either enlarge the
> graphics accordingly or have an option at the home page for text

Yes, but there are drawbacks:

(a) That the font size is turned up is no indication that I want the
graphics turned up,

(b) If one avoids using graphics for text and avoids locking the font size
with 'tricks', the problem again isn't there,

(c) A text-only version of a document on the web can always be produced
with reasonable accuracy as long as one separates structure from layout,
and avoids locking oneself into the idea of 'the resolution matters'.

Tina Holmboe

unread,
Sep 25, 2002, 7:04:39 AM9/25/02
to
Arjun Ray <ar...@nmds.com.invalid> exclaimed in <gg13pukbvck2aupj0...@4ax.com>:

>|I've been told by a good friend that SGML isn't very difficult, but it's
>| hard to find a way to tell him that he's being full of it ;)
>
> What have you found difficult about SGML?

I think it, to me, springs from two things.

When I first looked at SGML - '89, I seem to recall - all I found to
learn from had a high warble-gargle level. It was, basically, very hard
to find easy to digest information. comp.text.sgml wasn't it.

The concept is easy enough; the implementation and most of the explanations
I have read are not. If you can suggest a good book or tutorial, I'm more
than happy to try again.

The other aspect springs from attitudes, but if I express any views
on that in public I will - yet again - be harassed to a degree which I
am not about to take.

The bottom line is: I have read a little about SGML, and I don't find it
_easy_. Whether that says something about my attitude, my intelligence,
or my enjoyment of cartoons is left open.

If it makes anyone feel better, SGML is THE best thing since roast bread,
will undoubtedly solve the world's problems, make information live
forever, free (as in beer), and obliterate the need for sheep such as I
who read cartoons.

--
- T.

JJ

unread,
Sep 26, 2002, 3:54:20 AM9/26/02
to

"Tina Holmboe" <ti...@elfi.org> wrote in message
news:N9gk9.14693$HY3.3...@newsc.telia.net...

> "JJ" <jj...@removethisdrizzle.com> exclaimed in
<1032935479.770701@yasure>:

> Ignoring the resolution, and concentrating on the humans involved; you
> don't publish information for (ie. aimed at being interesting to) the
> graphics-card do you ?

Arg, you lost me. I am not sure what you mean by that.

> Let us for a moment disregard the situation in which resolution simply
> isn't a factor (text, Braille, Voice, etc, etc, etc) and look at what
> resolution you and I use.

Ok.

> I've got close-to-perfect vision. I run a number of different browsers
> on a number of different systems, none of them 'modern' according to
> to-days definition. One has 1024x768, one has 1280x1024. In neither case
> do I run the browser in anything resembling that size. 844x755 is the
> size of my current Opera window on the former resolution.
>
> Which *font size* I want depends on what I read, when I read it, and the
> light in the room. Neither the width of my browser nor the size of my
> font are dependent on the *resolution*, however.

Good point. What about styles of fonts ? Doesn't resolution and width
of browser affect those a lot ?

Sorry I'm being slow at understanding this stuff. You seem to come
at this from a very different angle than I am.

> > So you are saying that a web page could be designed to look
> > nice if someone has graphics turned off and a black and white
> > monitor ?
>
> I am certain I can find someone (I have many Goth friends ... ) who would
> agree that a plain text, black and white, layout is "nice" - but it is
> a subjective view - and not interesting.

Well, I bring up black and white because when I have a migraine headache,
a huge white webpage with black text hurts my eyes and increases the
headache. I get them sporadically but anyway, when I get them, I set
all windows to medium green background and black text. Many web
designers "hate" to hear that I do that. They "want" me to see their webpage
as they "intended" it to be seen. (Their words in quotes.)

> What I am saying, and what is proveable, is that you can create a layout
> which looks great WITH graphics, colors, and whatnots on a 19" monitor
> in 1600x1200, and STILL does not prevent anyone else from getting to the
> information.

I agree with you. I'm saying that if webpages can sense what language
I prefer, then why not sense what graphic types I prefer and what font
type and color I prefer also. If someone has a 12 inch monitor or a
palm pilot then something has to give perhaps.

> - If I want to view the whole lot with graphics and eyecandy all over
> the place, I can. Big 'fat' pipe to the 'net, nice monitor, latest
> browser. No problem.

Are there any webpages with eyecandy that you see out there that
are html styled well enough to give an award ?

What is your opinion about this page ?

http://www.djsasha.com/fr.php

> - If I want to use Lynx to get to the information and ONLY the information
> (and, possibly, pipe it along to a Braille or Speech system) I can
> do that as well. No, it won't *look* the same - but I'll get at the
> content.

Information seems your main thrust of points. The web is not really
that great a place to get information. But I guess you want to change
that. Yesterday, my friend Dianne went to a swimming pool webpage
and the info was laid out nicely but the hours were not updated. So
when we got there, it was closed. We should have called I guess.

> Unless someone has worked overtime to *prevent* that, of course. The
> Swedish site http://www.libresse.se/ is an excellent example of the
> latter. I wonder how much they paid for it. Anything above peanuts and
> I'd be tempting at calling it a sting.

Or perhaps a rip off.... That site is horrid. I used Netscape 7 to get
there! ha ha. They cut their audience by a lot by insisting on IE and PC only.

> But allright - the word "infinite" is perhaps wrong here. There are a
> great number of different factors which involve HOW a design is viewed,
> and it is my theory and - if you wish - belief that it is better to
> leave adjustment of these factors to the user and avoid creating designs
> which *depend* on any of them.

Well said.

> If you create a design which depend on resolution, then you might find
> yourself in need to create one which depends on the window size, and then
> the number of potential versions are rapidly growing.

I think you are saying that it is a slippery slope that could end in
over-complexity. Good point. But with graphics minimized, won't most all
web pages look alike ?

Do you think any plugins are good or useful ?

And shouldn't we all speak Esperanto to make things less complex ?

http://www.esperanto-usa.org/

> > I was merely suggesting a plugin that could be configured by the user
> > to say choose higher quality graphics on a website, if that website
> > had 3 levels of graphics on its servers. Many artists and people with
>
> Ah, content negotiation. Such as for instance suggested by HTTP 1.1;
> may I refer you to
>
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12

Holy cow, I have no idea what section 12 means ! I am wwway out
of my league here. What is an entity ? As an English major, I know
what every word in section 12 means but there is no meaning there.
There are no hyperlinks there. It assumes too much about this user.
*smile I am having a zen moment, I thought I was smart but I am
a fool. The webpage speaks of "transparent negotiation" but I can
do neither. Is this talking about a transparent society ?

> > time of it sometimes finding large text books. Anyway, Laura is going
> > to have eye surgery soon, so hopefully she won't need an enlarger
>
> Do wish her the best of luck.

Thanks !

> > after that. So if a plugin could sense that a user has the font turned
> > way up high, then the webpage could adapt and either enlarge the
> > graphics accordingly or have an option at the home page for text
>
> Yes, but there are drawbacks:
>
> (a) That the font size is turned up is no indication that I want the
> graphics turned up,

Good point. Perhaps during the installation process of the plugin,
a person could adjust that setting or have the plugin add a button
to the web browser to adjust it ? Or shut it off entirely. Or maybe
the browser itself should have a setting. I know lots of pro and
amateur photographers who wouldn't mind a plugin that only
showed high quality tifs or jpgs on every page they went to.
Most people I know have DSL speeds here in Seattle. So
perhaps my world view is too far off the mainstream...... Arg....

> (b) If one avoid using graphics for text and avoid locking the font size
> with 'tricks' the problem again isn't there,

Well, I was looking at the grey tower website and it has words on the
new logo. But the logo probably would not be seen too well by a non-
perfect vision user. It just looks like "rey ower" to me. My vision is
not that great.

> (c) A text-only version of a document on the web can always be produced
> with reasonable accuracy as long as one separate structure and layout,
> and avoid locking oneself into the idea of 'the resolution matters'.

I guess I am coming from a graphics hungry life that I lead. I love art and
visual effects. A web page that is extremely text based leaves me wanting
more. But I do feel that you have a better grasp on the details than I, so I
admit that your theories on web and html style are quite superior to mine.
I did find a typo on the grey tower website, so perhaps more time is
spent on html than proofreading. And the lack of graphics shows that
you have mastered part of the web experience but not the whole.

> - Tina Holmboe

Thanks for the debate !

Jon Melusky


Tina Holmboe

unread,
Sep 26, 2002, 7:47:43 AM9/26/02
to
"JJ" <jj...@removethisdrizzle.com> exclaimed in <1033026861.979659@yasure>:

>> Ignoring the resolution, and concentrating on the humans involved; you
>> don't publish information for (ie. aimed at being interesting to) the
>> graphics-card do you ?
>
> Arg, you lost me. I am not sure what you mean by that.

Well ... with a few very special exceptions most content authors put
information - be it graphics, sound, text, cartoons, etc - onto the
web for the benefit of human readers.

Hence my perhaps a bit strangely worded question: the "entity" consuming
your information is the human being, not the graphics card, yes ?

Of course, there is value in presenting information in a pretty package;
that only enhances the experience, but if you remove the wrapping - ie.
remove the very specific resolution one had in mind when designing, the
content - the gift, if you like - either stands on its own virtues,
or falls flat.

The goal, I would hope, is for it to stand on its own, and not be so
dependent on the wrapping that wrapping is all it is.

> Good point. What about styles of fonts ? Doesn't resolution and width
> of browser affect those a lot ?

Oh yes - on my end. Which is why I've chosen a font which, at my resolutions
and for my preferences, looks good and readable (Trebuchet MS) to *me*. Any
choice you make COULD be much better, but it isn't likely.

Another point - one of my monitors is LCD. The other is a regular CRT. I
assure you that things look *vastly* different on the two; simply due to
different technologies.


> Sorry I'm being slow at understanding this stuff. You seem to come
> at this from a very different angle than I am.

Neither angle is necessarily *wrong* - just different.

> Well, I bring up black and white because when I have a migraine headache,
> a huge white webpage with black text hurts my eyes and increases the
> headache. I get them sporadically but anyway, when I get them, I set
> all windows to medium green background and black text. Many web

*nods* I believe I've heard about that - I *think* it is due to the fact
that massive amounts of white put a strain on the graphics card, and in
effect makes the screen flicker more. I wouldn't insist on that theory
however.

> designers "hate" to hear that I do that. They "want" me to see their webpage
> as they "intended" it to be seen. (Their words in quotes.)

Yes, and I can understand that - of *course* they'd want their work to be
seen as it was *meant* to be seen. No gripe.

But: if you change all windows to medium green background and black text,
and *all the content then disappears*, then I would strongly suggest that
too much effort has been put into the "it should ALWAYS look like this".


> I agree with you. I'm saying that if webpages can sense what language
> I prefer, then why not sense what graphic types I prefer and what font
> type and color I prefer also. If someone has a 12 inch monitor or a
> palm pilot then something has to give perhaps.

Ah, but now we're into the realms of guesswork, aren't we ? But yes, that
is part of the idea around HTTP content negotiation and CC/PP - I don't
think it's productive or cost effective.

If the webpage is "properly" coded, it will "adapt itself" to your
preferred graphics type - you change the background to green and the
color to black, and hey presto! It's still the same content. You increase
the width of your browser - voila. Still the same.

My point is that we don't need a plugin - we need to think about flexibility
when we code.

>> - If I want to view the whole lot with graphics and eyecandy all over
>> the place, I can. Big 'fat' pipe to the 'net, nice monitor, latest
>> browser. No problem.
>
> Are there any webpages with eyecandy that you see out there that
> are html styled well enough to give an award ?

Heavens, no. I wouldn't give an award for style, since it would be
impossible to objectively define the criteria for the award. Sorry. Too
much of a techie, and too little visualist :)

> What is your opnion about this page ?
>
> http://www.djsasha.com/fr.php

Ah ... well, it's a soothing grey color but there wasn't much THERE,
except for the copyright-text on the bottom.

> Information seems your main thrust of points. The web is not really
> that great a place to get information. But I guess you want to change
> that. Yesterday, my friend Dianne went to a swimming pool webpage
> and the info was laid out nicely but the hours were not updated. So
> when we got there, it was closed. We should have called I guess.

I wish they'd updated the info, instead. It isn't all that hard to set
up systems that would allow them to do so easily - they could even have
a webpage which they updated (from a browser) and which they then printed
to put up on their door ;)

> I think you are saying that it is a slippery slope that could end in over-
> complexity. Good point. But with graphics minimized, won't most all
> web pages look a like ?

No, I don't think so - I mean, there is no real reason to *minimize* the
amount of graphics (apart from reducing their size; long download times
neither of us enjoy, I bet) just 'cause I'm using only ONE image on my own
site :)

I daily read online newspapers with 80-90 images per page - that is
slightly over the top in my experience, but had they included ALT texts
on every single one I could have used Lynx instead - and graphical browsers
would STILL get all the images. It isn't an excluding technique; rather an
including one.


> Do you think any plugins are good or useful ?

If you want a plugin, go ahead. 's not my place to judge :) I can say that
I don't keep a Flash plugin around, mostly 'cause I read many online
newspapers, and they tend to include ads using Flash.

Not in itself wrong, since I do get the content for free, but they have
the rather annoying ability to freeze my keyboard. I scroll down by
pressing the spacebar - very handy function. When on top of a Flash
animation, everything freezes and I need to get out the old mouse to get
by it.

> And shouldn't we all speak Esperanto to make things less complex ?

Please, no. *shudders* Despicable thing, that. Almost as bad as
"New Norwegian" >:)


> Holey cow, I have no idea what section 12 means ! I am wwway out
> of my league here. What is an entity ? As an english major, I know
> what every word in section 12 means but there is no meaning there.

Hm. I think we can safely say that the HTTP spec is circling around
Pluto in terms of usability, yes. Sorry about that.

Now, I'm not a native speaker of English, but let me try to give you
an idea by using less warble-gargled language.

Basically, when your browser sends a request for a webpage to a server,
it can include a statement on what form to send the information back
to the user. For instance, your browser can send data on which language
you would prefer by adding this to its request:

Accept-Language: no, en-gb, en, sv

which tells the server that

"Hello. If possible, I'd prefer the Norwegian version of the content.
If that isn't available, I'd like the English (Great Britain) version.
If THAT can't be done, I'd like the generic English version, and if not
even that I'd like the Swedish one, thankyou."

The user can now, in his or her browser, adjust that value. If you try
this on Greytower, for instance, you'll get the Norwegian version of
the site. Change the language preference in your browser, and you'll
get the English version.

The specification suggests various methods of handling such requests
from both the browser's and server's point of view.
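For the curious, the q-value selection the spec describes can be sketched in
a few lines of Python. The function name and the simplified parsing are mine,
not anything a browser or server actually ships - a toy, loosely following
RFC 2616 section 14.4:

```python
def pick_language(accept_header, available):
    """Pick the best available language for an Accept-Language header.

    Entries without an explicit q-value default to q=1.0, and header
    order breaks ties - a simplification of RFC 2616 section 14.4.
    """
    prefs = []
    for pos, part in enumerate(accept_header.split(",")):
        fields = part.strip().split(";")
        tag = fields[0].strip().lower()
        q = 1.0
        for field in fields[1:]:
            field = field.strip()
            if field.startswith("q="):
                q = float(field[2:])
        prefs.append((-q, pos, tag))
    for _, _, tag in sorted(prefs):  # highest q first, then header order
        if tag in available:
            return tag
    return None

# The header from the post: Norwegian, then British English, then
# generic English, then Swedish. A server with only English and
# Swedish versions picks English.
print(pick_language("no, en-gb, en, sv", {"en", "sv"}))  # -> en
```

A real server also has to handle wildcards (`*`) and language-range prefix
matching, which this sketch skips.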

> a fool. The webpage speaks of "transparent negotiation" but I can
> do neither. Is this talking about a transparent society ?

No ... what it says is that if your browser requests information based on
some user preference (such as language) from a server, and a cache server
is in the way, that cache server should be able to do the job of selecting
the proper content to reply with without user intervention.

"Transparent" in this case means that the user never needs see what
goes on.


>> (a) That the font size is turned up is no indication that I want the
>> graphics turned up,
>
> Good point. Perhaps during the installation process of the plugin,
> a person could adjust that setting or have the plugin add a button
> to the web browser to adjust it ? Or shut it off entirely. Or maybe

Possibly. I'm still not sure about whether a plugin for this is really
needed. You could do so with the HTTP stuff again:

Accept: image/tiff;q=0.9, image/png;q=0.8, image/jpeg;q=0.7

which, unless I have totally misread the specs, would translate to
something like:

"I would like TIFF-images, please. If they are not available, I'd
like PNG instead. If that doesn't exist, please send as JPG."

Of course, that wouldn't solve the problem of "high quality" images
but I doubt a plugin could do much better ...
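Under that reading of the spec, a server holding several formats of the same
image could score them against the header. A toy Python sketch, with made-up
names and none of the wildcard handling real negotiation needs:

```python
def best_image_type(accept, available):
    """Return the available image type the client rates highest.

    `accept` is a raw Accept header; types without an explicit q-value
    default to q=1.0, loosely following RFC 2616 section 14.1.
    """
    scores = {}
    for part in accept.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0
        for field in fields[1:]:
            if field.startswith("q="):
                q = float(field[2:])
        scores[fields[0]] = q
    ranked = sorted(available, key=lambda t: scores.get(t, 0.0), reverse=True)
    if ranked and scores.get(ranked[0], 0.0) > 0:
        return ranked[0]
    return None

# With the header discussed above, a server holding only PNG and JPEG
# copies would send the PNG (0.8 beats 0.7).
print(best_image_type("image/tiff;q=0.9, image/png;q=0.8, image/jpeg;q=0.7",
                      ["image/jpeg", "image/png"]))  # -> image/png
```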

> Well, I was looking at the grey tower website and it has words on the
> new logo. But the logo probably would not be seen too well by a non-
> perfect vision user. It just looks like "rey ower" to me. My vision is
> not that great.

What you describe is true, of course. Yet - and this may sound like a
defence, I admit - the logo is there only as decoration. The site doesn't
depend on it, and you *can* navigate (which is the main problem with
using graphics for text) without the logo showing.

> I guess I am coming from a graphics hungry life that I lead. I love art and
> visual effects. A web page that is extremely text based leaves me wanting
> more. But I do feel that you have a better grasp on the details than I, so

I don't think so - just different points of view. I do love pretty things,
if you wonder - but ... if it is ALL pretty, with no *core* to it; no
*content* ... then I'm also left wanting more.

It's the compromise between our two points of view which does the most
good, I'm thinkin'.

> admit that your theories on web and html style are quite superior to mine.
> I did find a typo on the grey tower website, so perhaps more time is
> spent on html than proofreading. And the lack of graphics shows that

As I mentioned, I am not a native English speaker or writer. Feel free
to indicate the error, if you have the time and inclination. I have no
doubt that my proof-readers may have missed a spot or two; we are but
human :)

> you have mastered part of the web experience but not the whole.

Ah, I have never claimed to be a graphics artist - the logo is bought,
fair and square. I couldn't draw a straight line; and frankly doubt that
anyone will find Greytower's site aesthetically pleasing.

Hopefully, however, it isn't too hard to get the content from.

Eric Jarvis

unread,
Sep 26, 2002, 4:33:08 PM9/26/02
to
JJ wrote:
>
> I agree with you. I'm saying that if webpages can sense what language
> I prefer, then why not sense what graphic types I prefer and what font
> type and color I prefer also. If someone has a 12 inch monitor or a
> palm pilot then something has to give perhaps.
>

of course in a way it can do that by default and all that most web
designers do is override it

--
eric
www.ericjarvis.co.uk
"I am a man of many parts, unfortunately most of
them are no longer in stock"

bobbyhaqq

unread,
Oct 1, 2002, 6:16:36 PM10/1/02
to
b...@panix.com (Bradley K. Sherman) wrote in message news:<am5cb0$pkh$1...@panix2.panix.com>...

>

> This is an understatement. There are thousands of talented
> people attempting to categorize knowledge and they are
> *all* failing. XML is a disaster. Ontologies are a disaster.
> XML plus Ontologies is synergistic: the combination is
> a bloated incomprehensible catastrophe. The Semantic Web

> is a red herring.


>
> Just creating a set of keywords is very hard. Assigning
> the keywords to the polymorphic phantasms we call 'things'
> is perilous. Attempting to place the things and keywords
> in hierarchies is mind-boggling and every group begins
> de novo and ends up with a similar yet different plate of
> simplified semiotic spaghetti.
>

> If only Ph.D's really did have to know some philosophy!
>

While I agree that rational Ontologies of all but the most limited
domains are impossible, I fail to see this as what XML will be used
for.

Essentially a database is a means of categorizing information into
certain classes, and good database design to cover the roles of an
organization, or even proper names, can be very hard. Recently I tried
to figure out how you would create a table to capture simply the names
of Lords in the UK; it turns out to be rather hard. Also, given
all the different means of counting votes in the EU, a database of
elections was a downright nightmare, considering that there are, in
the UK, 4 separate systems of counting votes.

That said, databases are very useful animals, essential even, and an
XML document with an assigned Schema could be, at the first order, simply
a means of translating databases into uniform systems that any
machine could read and apply an XSLT style sheet upon. To date my
exposure to XML has been to create a CV, to read medical data for a
major hospital, and to allow a content publishing tool to work.

XML will not go away because it allows nesting, where databases do
not. In a standard database you have tables with records and column
headings, with columns having a set type. Across tables you have some
linking roles. But with XML a given table can be subcategorized even
further. With the Schema as envisioned using .NET, you can export data
knowing that any programming language or device which picks the data
up will be able to use it.
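The nesting point can be made concrete with a few lines of Python and its
bundled parser. The election data below is entirely invented, echoing the
voting example above:

```python
import xml.etree.ElementTree as ET

# Each constituency nests a variable-length list of results - the sort
# of repeating group a flat, first-normal-form table cannot hold in a
# single row. All names and figures here are invented for illustration.
doc = """
<election system="first-past-the-post">
  <constituency name="Northtown">
    <result party="Red" votes="24538"/>
    <result party="Blue" votes="19814"/>
  </constituency>
  <constituency name="Southvale">
    <result party="Blue" votes="15917"/>
  </constituency>
</election>
"""

root = ET.fromstring(doc)
for c in root.findall("constituency"):
    winner = max(c.findall("result"), key=lambda r: int(r.get("votes")))
    print(c.get("name"), "->", winner.get("party"))
# -> Northtown -> Red
#    Southvale -> Blue
```

In SQL the same data would need two tables and a foreign key; in the XML
document the relationship is simply the containment of one element in
another.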

I do not view XML as capable of capturing any DEEP meaning of
sentences, nor do I hear anyone talking about using it as such.
Rather it is simply a clean convention for classing data and
transporting it between layers and devices.

My primary work with it has been with ASP, and I have to say the thing
is excellent: much easier to work with than creating an ODBC link to a
SQL Server database, much faster, and, if only because you can read it,
all around much better. I can import any number of XML documents loaded
into an ASP object and, applying an XSL document to it, get the
information I want formatted the way I want without having to hit a
database.

Don't look at XML as an AI product - AI is dead. Look at XML as a means
for storing information beyond the 2-D restrictions of a standard
table, in a quick, easy format which is universally portable and does
not require a backend database processor, with all the computing that
involves to run it.

If Microsoft gets .NET to really work, in the future you will go on
the web, select the information you want, which will come with a
schema to provide consistent representation, and a style sheet which
will allow you to see it properly on your device. As I understand it,
Microsoft claims it is central to .NET and is probably the only
present technology to integrate almost everything over the Internet.

Bradley K. Sherman

unread,
Oct 2, 2002, 6:49:45 PM10/2/02
to
In article <689922c7.02100...@posting.google.com>,

bobbyhaqq <rhook...@hotmail.com> wrote:
>
>While I agree that rational Ontologies of all but the most limited
>domains are impossible, I fail to see this as what XML will be used
>for.
>

You're not listening to the XML enthusiasts, then.

As to XML as an interchange format. That is where it
really, really sucks. If you can control or collaboratively
establish the format of your exchanged data, XML is usually
needlessly complex and always bloated beyond belief. Data
exchange format *was not a problem*. Discipline was the problem
and XML does not cure that.

It is only when programmers begin to dream about an
endlessly extensible format for their data that XML
becomes useful, but those programmers are already
in a state of sin.

--bks

Jorn Barger

unread,
Oct 3, 2002, 3:11:43 AM10/3/02
to
b...@panix.com (Bradley K. Sherman) wrote in message news:<anft69$h8l$1...@panix3.panix.com>...

> >While I agree that rational Ontologies of all but the most limited
> >domains are impossible, I fail to see this as what XML will be used
> >for.
>
> You're not listening to the XML enthusiasts, then.

I noticed a kind of 'doublethink' around the time that TimBL's
article on the Semantic Web was in Scientific American-- people on
the html newsgroups were starting to claim, straight-faced, that XML
was never about AI!?

> [...] Data exchange format *was not a problem*. Discipline was


> the problem and XML does not cure that.

Amen, brother! This is the real human-factors disaster, and it's
a theoretically 'deep' one: you can't impose discipline on the
Internet from the top down, the way they're trying to do...

Don Verhagen

unread,
Oct 4, 2002, 5:37:14 PM10/4/02
to
bobbyhaqq <rhook...@hotmail.com> wrote the following:

: b...@panix.com (Bradley K. Sherman) wrote in message
: news:<am5cb0$pkh$1...@panix2.panix.com>...

[snipped....]
: XML will not go away because it allows nesting, where databases do
: not.
[...snipped]

SQL databases do not allow for nesting; however, Pick or Multivalue
databases, originally developed by Richard Pick, do. E.g.
http://www-3.ibm.com/software/data/u2/unidata/
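The multivalue idea is easy to sketch in Python. The delimiters below are
the classic Pick attribute and value marks (characters 254 and 253); the
customer record itself is made up:

```python
# A Pick-style record is one string: attributes separated by the
# attribute mark, with repeating values nested inside an attribute by
# the value mark. The customer data is invented for illustration.
AM, VM = chr(254), chr(253)  # classic Pick attribute / value marks

record = AM.join([
    "SMITH-001",                                  # attribute 1: key
    VM.join(["020 7946 0000", "020 7946 0001"]),  # attribute 2: phones
])

attributes = record.split(AM)
phones = attributes[1].split(VM)
print(phones)  # -> ['020 7946 0000', '020 7946 0001']
```

A first-normal-form SQL schema would need a second `phones` table and a join
to say the same thing; the multivalue record keeps the nesting inline, much
as a repeated XML element does.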


Sorry, had to respond.

Donald Verhagen
Tandem Staffing Solutions, Inc.



Jenny Brien

unread,
Oct 5, 2002, 5:41:29 PM10/5/02
to
On Fri, 4 Oct 2002 17:37:14 -0400, "Don Verhagen"
<compute...@SPAMsoutheast-florida.com> wrote:

>bobbyhaqq <rhook...@hotmail.com> wrote the following:
>
>: b...@panix.com (Bradley K. Sherman) wrote in message
>: news:<am5cb0$pkh$1...@panix2.panix.com>...
>
>[snipped....]
>: XML will not go away because it allows nesting, where databases do
>: not.
>[...snipped]
>
>SQL databases do not allowed for nesting however Pick or Multivalue
>databases originally developed by Richard Pick do. I.E.
>http://www-3.ibm.com/software/data/u2/unidata/
>

As I recall, Pick found he never had to nest more than three deep,
whereas XML (in theory) allows for infinite nesting. I'm beginning to
think the former is more realistic from a HF point of view. When you
tell someone where to find something you say "It's in X under Y."
If that's not enough, you create another context: "Oh, that's a Z. You
find those in X under Y."

For an interesting example of this "rule of three" see the HolonX
source code browser/editor: http://holonforth.com/tools/holonx.htm


Jenny Brien
http://www.fig-uk.org
Home of the Fig UK website
