I know Webado and many others are ardent supporters of W3C validation. I have a website that is over 7 years old. It gets indexed nicely by Google and others. Regarding W3C validation, please note the following: 1. My website was created with FrontPage and uses Microsoft's shared borders. The code FrontPage generates for them is invalid, but it doesn't stop any bot from crawling any of the pages... it's only an error in the eyes of the W3C standards. Yes, it could be corrected to meet those standards, but what a hassle.
2. On every page I use the Google search box. If you want the version with the logo, it produces invalid code. The errors do not stop bots from crawling. Also, Google has a written policy that the code may not be changed for any reason. Yes, it could be changed, or I could use the version without the logo... so either break the Google user agreement or.....
3. The DOCTYPE statement may provide some degree of consistency across browsers, but remember that all browsers still behave differently, so what is the benefit? Again, it's in the eye of the beholder. It has nothing to do with bots crawling the page.
I agree the code should be clean, so that the bots don't get hung up in some funny loop. You can achieve most of that by using a CSS stylesheet.
I wish someone would create a validator that could run over an entire website and accept certain statements that don't meet the standards, such as the Microsoft borders from FrontPage or the valign in the Google search box. The user should be able to enter these non-conforming variants by hand.
This may not please the W3C purists, but it would make life much simpler for others like me.
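As a rough illustration of the kind of tool Yogi is asking for - this is only a sketch, and the file names and the sample pattern are made up - one simple approach is to keep a hand-edited list of error patterns you have decided to tolerate, and filter any validator's plain-text report against it:

    # Sketch of a "validator with exceptions": filter a validator's text report
    # against a hand-maintained list of error patterns you have chosen to accept
    # (e.g. FrontPage border markup, the valign in the search box code).
    # The file names and the example pattern below are placeholders.
    import re

    def load_exceptions(path="validation-exceptions.txt"):
        # One regular expression per line, e.g.:
        #   there is no attribute "valign"
        with open(path) as f:
            return [re.compile(line.strip(), re.I) for line in f if line.strip()]

    def filter_report(report="validator-report.txt", exceptions="validation-exceptions.txt"):
        patterns = load_exceptions(exceptions)
        with open(report) as f:
            for line in f:
                if not any(p.search(line) for p in patterns):
                    print(line.rstrip())  # only the errors you have not excused

    if __name__ == "__main__":
        filter_report()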
Concerning Google's search box - don't take it so literally that you may not change the code. You may, and should, change it in order to make it work with the doctype you are using. You may also style it any way you want. What you may not do is change its functionality, and it has to remain identifiable with Google.
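To make that concrete - this is a hypothetical fragment, not Google's actual search box code - the sort of edit being described is quoting attribute values and lowercasing tags so the snippet suits your doctype, without touching what it does:

    <!-- Before: the sort of markup that trips the validator -->
    <INPUT TYPE=hidden name=domains value=example.com>
    <INPUT TYPE=text name=q size=31 maxlength=255>

    <!-- After: same widget, quoted and lowercased to suit an XHTML doctype -->
    <input type="hidden" name="domains" value="example.com" />
    <input type="text" name="q" size="31" maxlength="255" />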
I think what Yogi and Colin are saying (or trying to say - I hope :-)) makes a great deal of sense. Sure, it's best to aim for full W3C validation; it is a good idea for almost every website and can really help with visitors using all sorts of clients. But what really makes a difference for the search engines is whether or not your site is "parsable", clean on a block level - do the tags open and shut properly? Can context be extracted from the page, or can it only be viewed (by a bot) with a text-based parser?
If you know your doctypes, you will be able to read that out of a W3C validation error report. If you can't, then you end up fixing many items which might be irrelevant with regards to search engines (but which, in the end, make sense anyway, so it's not really lost time).
A simple tool to test your pages for this kind of bot-readability might be a good first step. (darn, does that mean I have to make one?)
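For what it's worth, here is a bare-bones sketch of that kind of check, using nothing but Python's built-in HTML parser. The URL is a placeholder and a real validator catches far more, but it shows the idea: report tags that never close or close out of order, with line numbers.

    # Rough "bot-readability" check: walk a page and report tags that are
    # opened but never closed, or closed in the wrong order.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    VOID_TAGS = {"br", "img", "meta", "link", "hr", "input", "area",
                 "base", "col", "embed", "source", "track", "wbr"}

    class TagBalanceChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.stack = []      # (tag, line, column) of currently open tags
            self.problems = []   # human-readable findings

        def handle_starttag(self, tag, attrs):
            if tag not in VOID_TAGS:
                self.stack.append((tag, *self.getpos()))

        def handle_endtag(self, tag):
            if tag in VOID_TAGS:
                return
            if self.stack and self.stack[-1][0] == tag:
                self.stack.pop()
            else:
                line, col = self.getpos()
                self.problems.append(f"line {line}: </{tag}> does not match the last open tag")

        def report(self):
            for tag, line, col in self.stack:
                self.problems.append(f"line {line}: <{tag}> was never closed")
            return self.problems

    def check_page(url):
        html = urlopen(url).read().decode("utf-8", errors="replace")
        checker = TagBalanceChecker()
        checker.feed(html)
        return checker.report()

    if __name__ == "__main__":
        for problem in check_page("http://www.example.com/"):  # placeholder URL
            print(problem)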
God, Softplus ... if you could build something that would simplify things for us numpties who just want to show our creative work online, it would be great. Your GS software is ace and has allowed me to conform to Google Sitemaps, and I would like to make it easy for Google to parse my site ... but not at the expense of my creativity. If it means building 'boxy' sites, I'm gonna go it alone without Google ... something so good will always get out there.
Softplus's name will stand for quality in software design ... Ubooty & Badcol & Artistnos will be known for their creativity ... whether their sites are 'valid' or not ...
John! You got it. W3C validation is not necessary to have your site crawled successfully by the bots. Someone should produce a simple tool that crawls through the whole site. The report would show open tags, or tags that are closed improperly, by page, line number and the tag itself. John! Another tool idea for you.... Yogi
John, my friend, you got your work cut out for you LOL! Ah, the headaches I can foresee LOL
Adam, may I remind you of the silliest Google gaffe with that blasted verification meta tag, which breaks code on all non-XHTML pages (for those who don't know enough to fix it first)? That is the kind of breakage Googlebot doesn't just jump over - considering how many sites have gotten hit a short while after using the meta tag. It's also the easiest kind to fix, both by the webmaster and especially by the Google team who handle that aspect of verification. Come on, just get it fixed and be done LOL. We'd have far fewer such fruitless discussions.
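For anyone who hasn't run into it, the breakage being described is easy to show. The name and content values below are only placeholders, but on an HTML 4.01 page the fix is simply dropping the XHTML-style trailing slash:

    <!-- As handed out, XHTML-style: the trailing slash is what upsets an HTML 4.01 validator -->
    <meta name="verify-v1" content="abc123=" />

    <!-- Adjusted for an HTML 4.01 doctype -->
    <meta name="verify-v1" content="abc123=">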
Webado said "We'd have far fewer such fruitless discussions"
I don't know about anyone else, but I've found this discussion very fruitful. Just because it hasn't centred on you as the star of the show, doesn't make it fruitless. I'd like to thank everyone for posting here. I've learned a great deal and I certainly won't be worrying so much about my W3C validity anymore. I'll just focus on my main priority of Content and Visual Sense, oh yeah and let's not forget humour ... I love humour.
All the best for Christmas everyone, and a happy and profitable New Year.
And I wasn't referring to THIS thread being fruitless (though they be sour fruit), but to those threads that keep popping up on that subject (and I did mean the verification meta tag that's the cause of at least 1/3 of the troubles I see here all the time).
I get the feeling you're a 'Cup half empty' kind of person ...
... when Adam said "Being more specific: I'm betting that in the vast majority of cases in which folks have indexing or ranking concerns, the core issue is NOT that their site doesn't perfectly validate.", I like to think that's an end to the issue. I for one shall never mention Page Validation again.
So ... what's all this about page validation ? Doh !
Adam Lasnik wrote: > Seriously... I don't want to discourage anyone from validating their > site; however, unless it's REALLY broken, we're likely going to be able > to spider it pretty decently.
On the other hand, if I'm trying to solve someone's problem on these forums, my first step is always to put it through the validator - it's the quickest way to diagnose (or rule out) major problems. It's surprising how many sites have malformed or overlapping <html>, <head> and <body> tags...
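A made-up example of what "overlapping" means here - block content sitting inside <head>, the closing </head> missing, and a second <body> pasted in by an editor - exactly the sort of thing the validator flags on the first pass:

    <html>
    <head>
      <title>My page</title>
      <p>Welcome!</p>            <!-- block content inside <head> -->
    <body>
      <h1>Hello</h1>
      <body bgcolor="#ffffff">   <!-- a second <body> -->
      ...
    </body>
    </html>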
[email address] wrote: > I'll just focus on my main priority > of Content and Visual Sense > Col :-)
I've just read this thread for the first time. I don't think there is any dispute that Content and Visual Sense are everyone's priorities not just yours. But when the Content is in place and it's all been made to look pretty, surely the next logical step is to rid the code of errors.
Can anyone make a case for NOT validating? What are the advantages?
Someone will build an XHTML-only web browser that has an order of magnitude less complexity than today's bloated beasts, runs faster, edits pages on the fly, supports automated HTML <=> wiki <=> Textile <=> BBCode conversion, is GPL, doesn't suck, stores bookmarks, history and passwords encrypted on WebDAV, and monitors your web pages' rankings for you ;) ... in 2020.