Check out this link, specifically point 3, on how to set up a servlet filter that works with _escaped_fragment_:
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot

You now have two choices. In the link above, they use a Java WebClient (http://htmlunit.sourceforge.net/) to grab the HTML from your real page. The WebClient can parse JavaScript, and if your GWT page is fairly straightforward it should just work and give the right HTML back to the Google bot with zero extra work beyond the web.xml config / filter on your end.
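To make that concrete, here is a rough sketch of what such a filter could look like, assuming HtmlUnit is on your classpath and the filter is mapped to your host page in web.xml. The class name, the local URL and the timeout are just placeholders you would adjust for your app:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class CrawlerFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String fragment = request.getParameter("_escaped_fragment_");
        if (fragment == null) {
            // Normal browser: serve the GWT page as usual.
            chain.doFilter(req, res);
            return;
        }
        // Googlebot: rebuild the original #! URL and render it with HtmlUnit.
        String url = "http://127.0.0.1:8888/index.html#!" + fragment;
        WebClient webClient = new WebClient();
        try {
            HtmlPage page = webClient.getPage(url);
            // Give the GWT module some time to run its JavaScript.
            webClient.waitForBackgroundJavaScript(5000);
            res.setContentType("text/html;charset=UTF-8");
            res.getWriter().write(page.asXml());
        } finally {
            webClient.closeAllWindows();
        }
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}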
However, if WebClient can't get good data from your pages (you'll need to test using WebClient directly), then you can inline some HTML for any pages that don't work, and you can make calls to your database/servlets to pull any dynamic data you might need. This output should be valid HTML, but it doesn't have to be pretty - just a <body>Lots of text here...</body> is enough for Googlebot to understand the content on your page.
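For that "test using WebClient directly" step, a quick standalone check is usually enough. Something like the snippet below (the URL and place token are just examples for a dev-mode app); if your content shows up in the printed output, the filter approach above should work:

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class WebClientSmokeTest {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();
        HtmlPage page = webClient.getPage("http://127.0.0.1:8888/index.html#!somePlace:token");
        webClient.waitForBackgroundJavaScript(5000);
        // Eyeball the rendered HTML to see whether your GWT content made it in.
        System.out.println(page.asXml());
        webClient.closeAllWindows();
    }
}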
On Thursday, December 27, 2012 1:36:17 PM UTC+2, Jan wrote:
Hi
I'd like to make my GWT app crawlable by the Google bot. I found this article (https://developers.google.com/webmasters/ajax-crawling/). It states there should be a servlet filter that serves a different view to the Google bot. But how can this work? If I use, for example, the Activities and Places pattern, then the page changes happen on the client side only and there is no servlet involved -> a servlet filter does not work here.
Can someone give me an explanation? Or is there another good tutorial, tailored to GWT, on how to do this?
Thanks and best regards
Jan