Current version and robots


Kevin Dangoor

May 30, 2006, 10:16:52 AM
to doc...@googlegroups.com
I just wanted to mention something while it's on my mind. Bob Ippolito
pointed out on the MochiKit list that having separate versions of docs
is a pain with the search engines, because people will keep hitting
old versions of the docs. It's a good point... Docudo could really use
a /current/ link (and the URLs should all have /current/ rather than
/x.y/), and then there should be a robots.txt to ensure that the
search engines *only* look at the current versions.
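
For illustration, a robots.txt along these lines could steer crawlers away from superseded versions (the /1.0/ and /1.1/ paths here are hypothetical placeholders for whatever version directories a site actually publishes):

```text
# Hypothetical example: keep crawlers out of versioned doc paths
User-agent: *
Disallow: /1.0/
Disallow: /1.1/
# /current/ is not disallowed, so it stays crawlable
```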

Make sense? (if so, I can open a ticket)

Kevin

--
Kevin Dangoor
TurboGears / Zesty News

email: k...@blazingthings.com
company: http://www.BlazingThings.com
blog: http://www.BlueSkyOnMars.com

Kevin Horn

May 30, 2006, 10:22:26 AM
to doc...@googlegroups.com
I agree, and this was brought up as far back as the PyCon sprint.  Some people disagreed with the idea then, though I don't remember why...

I say go ahead and open a ticket.

Kevin H.

Ian Bicking

May 30, 2006, 11:54:31 AM
to doc...@googlegroups.com
Kevin Dangoor wrote:
> I just wanted to mention something while it's on my mind. Bob Ippolito
> pointed out on the MochiKit list that having separate versions of docs
> is a pain with the search engines, because people will keep hitting
> old versions of the docs. It's a good point... Docudo could really use
> a /current/ link (and the URLs should all have /current/ rather than
> /x.y/), and then there should be a robots.txt to ensure that the
> search engines *only* look at the current versions.

If docudo isn't mounted at the root of the site it won't be able to
provide robots.txt. Maybe double up with a meta no-index tag?
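
The meta-tag approach would amount to something like this in the <head> of each old-version page (a sketch of the standard robots meta tag, not actual Docudo output):

```html
<!-- Emitted only on pages for superseded versions -->
<meta name="robots" content="noindex">
```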

Also, will it be too ambiguous to just have the docs at /*, with /x.y/*
for old versions?

Kevin Horn

May 30, 2006, 4:24:16 PM
to doc...@googlegroups.com
On 5/30/06, Ian Bicking <ia...@colorstudy.com> wrote:

> If docudo isn't mounted at the root of the site it won't be able to
> provide robots.txt.  Maybe double up with a meta no-index tag?

This is a good point, but may be tricky to work around.  Since all the <head> info is stored in SVN properties, it would have to be stored with each individual file, and would have to be reset when migrating versions.

Shouldn't be _too_ difficult, but could have some tricky bits.

The robots.txt functionality should probably be made configurable so that it could be used or not depending upon the setup of the site. 

> Also, will it be too ambiguous to just have the docs at /*, with /x.y/*
> for old versions?


I don't know if it would be too ambiguous, but I certainly prefer using /current rather than /* (explicit is better than implicit, ya know!).  Perhaps this could be made into a configurable option, and the individual site could decide?

Of course at this point, only Ronald is really putting in any code, so maybe we should let him decide ;)

P.S. Hopefully, I'll be more able to help out soon, as I'm just finishing a project that has sucked up a lot of time for the last several weeks.

P.P.S. We have a ticket!! http://trac.docudo.org/docudo/ticket/1
We're really rolling now!

Kevin H.

Kevin Dangoor

Jun 14, 2006, 8:14:12 AM
to doc...@googlegroups.com
On 5/30/06, Kevin Horn <kevin...@gmail.com> wrote:
> On 5/30/06, Ian Bicking <ia...@colorstudy.com> wrote:
> >
> > If docudo isn't mounted at the root of the site it won't be able to
> > provide robots.txt. Maybe double up with a meta no-index tag?
>
>
> This is a good point, but may be tricky to work around. Since all the
> <head> info is stored in SVN properties, it would have to be stored with
> each individual file, and would have to be reset when migrating versions.

There's no reason that something like a no-index meta tag would need
to be in the repository. It could just be in the template.
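
A template-level version of that idea might look like the following sketch, written in the Kid-style attribute syntax of the era; the "is_current" flag is a hypothetical template variable, not an existing Docudo name:

```html
<!-- Hypothetical Kid-style template head: emit noindex only for old versions -->
<head>
  <title>${page_title}</title>
  <meta py:if="not is_current" name="robots" content="noindex" />
</head>
```

This keeps the indexing policy out of the per-file SVN properties entirely, so nothing needs resetting when a version is migrated.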

Kevin
