When hypertext has a hierarchical tree-structure, links between nodes
can be described using genealogical terms: parent, child, sibling,
grandparent, cousin, etc.
Hypertext theory usually recommends that individual nodes link to a
pretty limited set of these relatives: all the node's children, its one
parent, one next-sibling, one previous-sibling, maybe one grandparent.
(The idea of 'path' or 'breadcrumbs' is
parent-grandparent-greatgrandparent, for example.)
To take full advantage of the Web, though, strict hierarchies are too
limiting-- what you have is a web of relationships with multiple,
arbitrarily overlapping hierarchies.
For any given node, surely, the right question to ask with regard to
linking is: how _close_ is the topic of node N to the topic of my
current page?
By this standard, a given node should probably link to _all_ its siblings,
and quite possibly all its cousins, because these will be thematically
close.
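To make the genealogy concrete, here's a minimal sketch, assuming a
simple parent-pointer tree; the Joyce-flavored node names and helper
functions are hypothetical, not taken from any real site:

    # Each node maps to its parent; the root maps to None.
    tree = {
        'ulysses': None,
        'telemachus': 'ulysses', 'nestor': 'ulysses', 'calypso': 'ulysses',
        'stephen-notes': 'telemachus', 'bloom-notes': 'calypso',
    }

    def children(tree, node):
        return [n for n, p in tree.items() if p == node]

    def siblings(tree, node):
        return [n for n in children(tree, tree[node]) if n != node]

    def cousins(tree, node):
        parent = tree[node]
        if parent is None:
            return []
        kids = []
        for uncle_or_aunt in siblings(tree, parent):
            kids.extend(children(tree, uncle_or_aunt))
        return kids

    # By the closeness standard, the 'stephen-notes' page would link
    # all of siblings(tree, 'stephen-notes') and probably all of
    # cousins(tree, 'stephen-notes') as well.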
But more generally, we might compare this to the idea of 'hyperbolic'
magnification, as used in some computer interfaces: things that are
closer (more central) are magnified more, those farther away are
magnified less. (The Mac OS X 'dock' behaves somewhat hyperbolically.)
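A toy version of that falloff (not Apple's actual dock code, just the
shape of the idea): magnification decays roughly as 1/(1 + distance)
from the focus.

    # Hyperbolic magnification: nearest items are magnified most.
    def magnification(distance, max_scale=3.0):
        return 1.0 + (max_scale - 1.0) / (1.0 + abs(distance))

    for d in range(5):
        print(d, magnification(d))   # 3.0x at the focus, tapering off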
The current fashion in webpage design, especially for large
magazine/news/portal sites, is to weigh down every article with hundreds
of marginal links to utterly unrelated parts of the site-- very
unhyperbolic, and I think utterly wasteful-- who ever clicks any of
those? Who even looks at them?
More and more now, happily, savvy news sites are ***adding value*** by
including a set of links to related topics-- recent news on the same
subject, pieces by the same author, etc.
This starts to look more hyperbolic, but to really execute the paradigm
adequately I think 80% of the links on a page should be to the closest
topics-- and should exhaustively link _all_ the closest possible ones.
('Closest' here should probably be read as 'most-useful-to' rather than
most-redundant-with.)
My current practice with my James Joyce pages is to start each new page
from a template that already has 100 likely links sorted at the bottom.
(Eg: http://www.robotwisdom.com/jaj/ulysses/eccles.html )
In theory I might trim some of the more distant ones, but in practice I
don't even see much reason to-- 100 text-only links at the bottom of a
page with no TABLEs don't noticeably slow down loading, and by keeping
them consistent from page to page people can grow to rely on them.
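One way such a footer template could be stamped out (a sketch only; the
filenames and titles are invented, and this isn't necessarily how the
robotwisdom pages are actually built):

    # One master list, sorted once, pasted at the bottom of every page
    # so the footer stays consistent from page to page.
    footer_links = [
        ('eccles.html', 'Eccles Street'),
        ('telemachus.html', 'Telemachus notes'),
        # ... ~100 (url, title) pairs in all
    ]

    def footer_html(links):
        parts = ['<hr>']
        for url, title in links:
            parts.append('<a href="%s">%s</a>' % (url, title))
        return '\n'.join(parts)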
The reductio ad absurdum here would be to include my full sitemap at the
bottom of every page-- with many hundreds of pages this would become a
problem, but the hyperbolic solution seems generally elegant in solving
it.
--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel
http://www.useit.com/alertbox/20000109.html
that's all i'm saying :)
Philip Stripling <phil_st...@cieux.zzn.com> wrote in message
news:w3qzoi8...@shell.tsoft.com...
> Jorn Barger wrote, in pertinent part:
>
> > The current fashion in webpage design, especially for large
> > magazine/news/portal sites, is to weigh down every article with hundreds
> > of marginal links to utterly unrelated parts of the site-- very
> > unhyperbolic, and I think utterly wasteful-- who ever clicks any of
> > those? Who even looks at them?
>
> They are following the theory of hypobolic linking, Jorn.
>
> --
> Philip Stripling | email to the replyto address is presumed
> Legal Assistance on the Web | spam and read later. email to philip@
> http://www.PhilipStripling.com/ | civex.com is read daily.
> Resources for small businesses, entrepreneurs, and legal professionals.
i did a long critique of that one at the time:
http://www.deja.com/=dnc/getdoc.xp?AN=570531997
Again I'll state that your opinions are interesting and
thought-provoking and could be crafted into a usability study (or
studies). But your lack of citations or raw data to support your
statements means that (for now) I have to disregard your opinions
entirely.
I don't agree with this statement. Linking is too user-specific. If
I've got a story to tell, then I either craft it so it can be read
along multiple paths or along a single path. If I'm presenting data
about a product, the progressive display of increasingly technical
data seems to work for a significant population, i.e., those who don't
use search.
I also don't understand the relevance of the question. I think that
regardless of how the question is answered, I will continue to link in
ways that I guess/think are most representative of the conceptual
model that my primary audience is using. What I think is related
isn't necessarily what you think is related. In the absence of a
universal meta keyword database (besides Yahoo), it's all going to
ultimately work out as a subjective call.
regards
c
Jorn Barger <jo...@mcs.com> wrote in message
news:1el9xt8.wau...@207-229-150-216.d.enteract.com...
> colin <cmc...@usa.net> wrote:
> > http://www.useit.com/alertbox/20000109.html
> > that's all i'm saying :)
>
> i did a long critique of that one at the time:
>
> http://www.deja.com/=dnc/getdoc.xp?AN=570531997
>
>
--
--Aaron C
"Jorn Barger" <jo...@mcs.com> wrote in message
news:1el903n.pdn...@207-229-151-171.d.enteract.com...
> When hypertext has a hierarchical tree-structure, links between nodes
> can be described using genealogical terms: parent, child, sibling,
> grandparent, cousin, etc.
> [...rest of the original post snipped...]
Paraphrase:
"When you put an article on the web, make sure to offer lots
of links _at the end_ to other articles your readers might like.
If you don't, you're wasting a big opportunity. ('Hyperbolic'
means the closest topics get the most links.) If you don't use
TABLEs, and if you sort them neatly, even 100 'footer links'
will be fine."
"Jorn Barger" <jo...@mcs.com> wrote in message
news:1elexcg.1ms...@207-229-151-176.d.enteract.com...
Glad to see your dedication, Jorn, and I hope to catch up with your
writings, perhaps in a year.
-- Michael Hoffman
http://www.hypertextnavigation.com
But Jorn, if you're going to diss someone, please back it up with some
hard evidence - something which Nielsen has and you definitely do not.
Francis
"Jorn Barger" <jo...@mcs.com> wrote in message news:
1el903n.pdn...@207-229-151-171.d.enteract.com...
'Spoke' locally, 'hierarchical' globally. (i.e., 'hyperbolic')
> But Jorn, if you're going to diss someone, please back it up with some
> hard evidence - something which Nielsen has and you definitely do not.
JN's 'hard evidence' fooled him into asserting for years that users
don't scroll. There really is no hard evidence in the social sciences,
imho-- it's all a matter of better or worse experimental design, and
what looks hard today will look soft when we know more.
My evidence comes from daily experiments with hundreds of readers, via
my server logs and email feedback. Nielsen's experiments involve tens of
readers and tens of experiments. QED!
> JN's 'hard evidence' fooled him into asserting for years that users
> don't scroll. There really is no hard evidence in the social sciences,
> imho--
OUCH!
> My evidence comes from daily experiments with hundreds of readers, via
> my server logs and email feedback. Nielsen's experiments involve tens of
> readers and tens of experiments. QED!
could you elaborate on that J?
http://www.useit.com/alertbox/20000319.html
I love this thread!!
You seem to accept the common assumption that more data means better data.
That isn't necessarily so. If the data-collection mechanism biases the
data, then collecting more data points isn't going to eliminate the bias,
and isn't going to improve the quality of the data.
--
Darin McGrew, mcg...@stanfordalumni.org, http://www.rahul.net/mcgrew/
Web Design Group, da...@htmlhelp.com, http://www.htmlhelp.com/
"Nothing is so good as it seems beforehand." -- George Eliot
What bias do you think hundreds of hits daily from random web surfers
introduce? One test subject is plenty if they're articulate and give
useful feedback. Hundreds add the advantage of dozens of different
platforms and goals. And I'm not running the same experiment every
time-- every new page I create tries new things based on previous
feedback.
my hypertext design lab: http://www.robotwisdom.com/web/
>What bias do you think hundreds of hits daily from random web surfers
>introduce?
You seem to have difficulties with the concept of randomness, which is
_crucial_ in reliable studies based on sampling. "Random" Web surfers
aren't. Except in a "random" meaning of "random". See Statistics 101.
Followups narrowed according to normal Usenet practice.
--
Yucca, http://www.hut.fi/u/jkorpela/
Qui nescit tacere nescit et loqui
>To take full advantage of the Web, though, strict hierarchies are too
>limiting-- what you have is a web of relationships with multiple,
>arbitrarily overlapping hierarchies.
Because there's no "up" or "down" on the Web ...
>In theory I might trim some of the more distant ones, but in practice I
>don't even see much reason to-- 100 text-only links at the bottom of a
>page with no TABLEs don't noticeably slow down loading, and by keeping
>them consistent from page to page people can grow to rely on them.
The only problem with this is that it could hinder searching, if the
search engine can't ignore your template-- say, if I'm searching your
site for topic X, which happens to be one of those 100 links, it may
return all of your pages that carry the template ...
Yeah, atomz.com introduced a handy <NOINDEX> tag for that, but I haven't
gotten around to using it.
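Assuming the tag works the way Jorn describes (I haven't verified
Atomz's actual syntax), the indexer's side of it might be as simple as
stripping the tagged spans before indexing:

    import re

    # Drop everything between <NOINDEX>...</NOINDEX> before indexing.
    # (Tag semantics assumed from the post above, not verified.)
    NOINDEX = re.compile(r'<NOINDEX>.*?</NOINDEX>',
                         re.IGNORECASE | re.DOTALL)

    def indexable_text(page_html):
        return NOINDEX.sub('', page_html)

    page = '<p>Content</p><NOINDEX><a href="x.html">footer</a></NOINDEX>'
    print(indexable_text(page))   # -> <p>Content</p>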
(Did you do the clicktracking experiment?)
>(Did you do the clicktracking experiment?)
Yeah. I didn't keep it up for a full week though because the lag time
between click and pageload was too annoying. But the results have been
positive -- I got more mail re: the design than I'd *ever* gotten
before.
With this design, I wanted to cram as many links in a page as
possible, while keeping it easy to scan on the screen.
To this end, I just did another experiment this morning, counting the
number of links in the 10 most popular weblogs. Yours came out on top,
as I predicted, with 481 links (roughly 2.5 times as many as slashdot,
and still 75+ more links than metafilter, which is good considering
yours is a one-man site). But the mean was much higher than I
thought it'd be: 217. Only 20% of them had fewer than 100 links; at
121 links, I guess I've still got plenty of room!
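(Not necessarily how the tally above was done, but for anyone who
wants to reproduce the count, a rough sketch:)

    # Count <a href=...> links on a page.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.count = 0

        def handle_starttag(self, tag, attrs):
            if tag == 'a' and any(k == 'href' for k, v in attrs):
                self.count += 1

    def count_links(url):
        counter = LinkCounter()
        counter.feed(urlopen(url).read().decode('latin-1'))
        return counter.count

    # e.g. count_links('http://www.robotwisdom.com/')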