I don't understand why you'd want an ETag calculated on an object: ETags are used for HTTP caching, and the data actually being output will always be a string, won't it?
Here is an alternative HTTP caching approach that doesn't require an ETag:
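For example, with a plain stylesheet reference (the file name and version token below are made up for illustration, not taken from my actual site):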
Change this:
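    <link rel="stylesheet" href="/style.css">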
to this:
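    <link rel="stylesheet" href="/style.css?v=1-1231">

When the file changes, the token changes and the browser downloads the new copy; otherwise it keeps serving its cached copy without asking the server.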
I built a file tracking system into my CFML app that indexes the filesystem for each web site in onApplicationStart (domain-specific files) and onServerStart (shared global files). You could use a hash for this, but I just use the file path plus its last-modified date, since that's good enough for my app. If your output is dynamically generated, you'd still need to version it somehow.
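Roughly, the indexing part looks like this (a simplified sketch, not my actual code; the /assets path and the structure name are made up):

    <!--- Simplified sketch in Application.cfc: map each asset file to a
          version token derived from its last-modified date. --->
    <cffunction name="onApplicationStart" returntype="boolean" output="false">
        <cfset var assets = directoryList(expandPath("/assets"), true, "query")>
        <cfset application.fileVersions = {}>
        <cfloop query="assets">
            <cfif assets.type eq "file">
                <!--- Key is the file path; value is a date-based version token --->
                <cfset application.fileVersions[assets.directory & "/" & assets.name]
                    = dateFormat(assets.dateLastModified, "yyyymmdd")
                    & timeFormat(assets.dateLastModified, "HHmmss")>
            </cfif>
        </cfloop>
        <cfreturn true>
    </cffunction>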
This allows me to output a Cache-Control: max-age=3600 header, so that content doesn't expire for an hour for some of the file types.
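In Railo that's just a couple of header tags (a sketch; the Expires value mirrors the one-hour max-age):

    <cfheader name="Cache-Control" value="max-age=3600, public">
    <cfheader name="Expires" value="#getHttpTimeString(dateAdd('h', 1, now()))#">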
When you append a new, unique query string, the browser is forced to re-download the file even if it is identical. Won't an ETag approach still require a round trip, even if it's only a small packet, just to verify with the server that the file is identical? According to Wikipedia, yes:
http://en.wikipedia.org/wiki/HTTP_ETag
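For comparison, server-side ETag handling looks roughly like this (a simplified sketch, not something from my app); even when nothing has changed, the browser still has to make the request so the server can answer 304 Not Modified:

    <!--- Simplified sketch: hash the output and compare it to If-None-Match --->
    <cfset content = fileRead(expandPath("/assets/style.css"))>
    <cfset etag = hash(content)>
    <cfset requestHeaders = getHttpRequestData().headers>
    <cfif structKeyExists(requestHeaders, "If-None-Match") and requestHeaders["If-None-Match"] eq etag>
        <!--- Nothing changed, so the body is skipped, but this request still had to happen --->
        <cfheader statusCode="304" statusText="Not Modified">
    <cfelse>
        <cfheader name="ETag" value="#etag#">
        <cfoutput>#content#</cfoutput>
    </cfif>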
Using Expires + max-age actually eliminates a large number of round-trip requests, which is noticeable whenever there is a slower internet connection or a mobile device. It's also possible to add rewrite rules that move the version number out of the query string (an optimization that only helps certain proxy servers). I did this too, but I decided not to use it, since I want developers to be able to tell where the file is more easily; a URL like /style.1-1231.css is a bit strange.
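For what it's worth, the rewrite itself is small; in nginx it would look something like this (illustrative only, not my actual config):

    # Serve /style.1-1231.css from /style.css by stripping the version token
    location ~ ^(?<base>.+)\.\d+-\d+\.(?<ext>css|js|png|jpg|gif)$ {
        try_files $uri $base.$ext =404;
    }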
Expires + max-age headers are perhaps the most efficient way to handle HTTP caching.
Also note that my version string is a combination of a file ID and a file version ID from the database, and these values auto-increment. I also have a scheduled task that periodically deletes old versions to avoid waste.
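Something along these lines (the datasource, table, and column names are made up, not my actual schema):

    <cfset assetPath = "/style.css">
    <cfquery name="qFile" datasource="site">
        SELECT file_id, file_version_id
        FROM file_version
        WHERE file_path = <cfqueryparam value="#assetPath#" cfsqltype="cf_sql_varchar">
        ORDER BY file_version_id DESC
    </cfquery>
    <!--- First row is the newest version; yields e.g. "1-1231" for /style.css?v=1-1231 --->
    <cfset versionToken = qFile.file_id & "-" & qFile.file_version_id>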
In addition to this, I built concat/minification and CSS sprite map automation into the same code in my app, so the minified file is also versioned. The sprite map was done by writing my own CSS parser, which also inserts query strings for the images referenced in the CSS, so they are versioned as well. Eventually, I may find the time to implement this across every feature that outputs an embedded URL, so the entire site could be cached. For now, I find a one-hour expiration an acceptable delay for the features that aren't versioned yet. Most web developers have to depend on Ctrl+R or clearing the cache, but you can eliminate that if you automate versioning. I've noticed that Google now even shows a message (via Ajax) in its apps when a new version is released; my system could later show a similar notification encouraging people to load the new version.
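A heavily simplified version of the CSS-rewriting idea (not my actual parser; it only handles unquoted url() references and ignores existing query strings):

    <!--- Rough sketch: append a version token to unquoted url(...) references --->
    <cfset css = fileRead(expandPath("/assets/style.css"))>
    <cfset versionToken = "1-1231">
    <cfset versionedCss = reReplace(css, "url\(([^)'""]+)\)", "url(\1?v=#versionToken#)", "all")>
    <cfset fileWrite(expandPath("/assets/style.min.css"), versionedCss)>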
Check out my company site to see all this in action. It does virtually everything the PageSpeed recommendations suggest: minify/concat, CSS sprite maps, Ajax partial page transitions with HTML5 pushState, responsive web design, adjustable font sizes, and maximum caching performance with Railo, yet all requests are dynamic. I also patched nginx with the SPDY protocol and use SSL, so requests are multiplexed through a single connection. My site should seem faster than Google because of all this. SPDY would be very noticeable if your page has hundreds of small images, and it works in current versions of Firefox and Chrome for SSL sites only. There is a mod_spdy for Apache as well, which I was running before migrating to nginx.
I plan on making my source code for all this open source once I clean it up some more.