Hi, Ale,
How is your progress? I got further on the topic...
Now, if I deploy the WAR file and manually copy app.war under the ROOT directory (I had to change the path to app.nocache.js), the filter seems to pick up the query string just fine. (However, I don't remember making any changes to the servlet code, so it must be some configuration change since last time.)
For some reason, when I run HtmlUnit offline it can snapshot the content, but it doesn't work inside the servlet. So I had to run it offline and save the contents to files; in the filter servlet I just read the snapshot back from the file.
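For what it's worth, the "read the snapshot from a file" step can be sketched roughly like this. This is only an illustration of the idea, not your actual filter: the class name, the `snapshots/` directory, and the `.html` naming convention are all assumptions. The real filter would call something like this from `doFilter()` after pulling the `_escaped_fragment_` parameter off the request.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper mirroring the filter logic described above:
// map an _escaped_fragment_ value to a pre-generated snapshot file
// and read its contents.
public class SnapshotFilterHelper {

    // Google's AJAX crawling scheme rewrites #!info.about into
    // ?_escaped_fragment_=info.about, so the filter keys on this parameter.
    public static final String FRAGMENT_PARAM = "_escaped_fragment_";

    // Turn the fragment value into a snapshot file path,
    // e.g. "info.about" -> "snapshots/info.about.html".
    public static String snapshotFileFor(String fragment) {
        // Replace anything that could escape the snapshot directory.
        String safe = fragment.replaceAll("[^A-Za-z0-9._-]", "_");
        return "snapshots/" + safe + ".html";
    }

    // Read the pre-rendered HTML that HtmlUnit produced offline.
    public static String loadSnapshot(String fragment) throws IOException {
        Path path = Paths.get(snapshotFileFor(fragment));
        return new String(Files.readAllBytes(path), "UTF-8");
    }
}
```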
You can check these:
http://goscopia.com/?_escaped_fragment_=info.about
http://goscopia.com/#!info.about
Right now I only have a few pages to crawl, so I plan to update the sitemap file so the Google crawler will crawl them individually.
QUESTION: Is there a correct or better way than updating the sitemap by hand? If the application can generate a lot of pages, how do I make all of those pages known to the crawler?
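One common answer is to stop maintaining the sitemap by hand and generate sitemap.xml from whatever page list the application already has. A minimal sketch, assuming the pages are identified by their hash-bang fragments (Google's AJAX crawling scheme accepted #! URLs in sitemaps) and that `SitemapBuilder` and its method names are made up for this example:

```java
import java.util.List;

// Hypothetical sketch: build sitemap.xml programmatically from the
// application's list of page fragments instead of editing it by hand.
public class SitemapBuilder {

    // List each page as its pretty hash-bang URL,
    // e.g. baseUrl "http://goscopia.com/" + fragment "info.about"
    // becomes <loc>http://goscopia.com/#!info.about</loc>.
    public static String build(String baseUrl, List<String> fragments) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        sb.append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String fragment : fragments) {
            sb.append("  <url>\n")
              .append("    <loc>").append(baseUrl).append("#!").append(fragment).append("</loc>\n")
              .append("  </url>\n");
        }
        sb.append("</urlset>\n");
        return sb.toString();
    }
}
```

You could serve the result from a servlet mapped to /sitemap.xml, so new pages show up automatically as the application creates them.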
Any suggestion is appreciated.
-maq
--
You received this message because you are subscribed to the Google Groups "Google Web Toolkit" group.