calling onLoadErrorFn on permutation 404


Stephen Haberman

Oct 5, 2014, 10:40:34 PM
to google-web-tool...@googlegroups.com
Hey,

During our deployments, we have a small window where both old/new
servers could potentially be available. I'm trying to hook into the
failure of when a client loads the .nocache.js from one version,
but tries to get the permutation from another.

Currently what happens is that the permutation 404s and the GWT app
just stops in its tracks, leaving the user with a blank page.

I added an onLoadErrorFn, expecting it to be called in this case, but
it turns out it is not. I put together a patch that changes this:

https://gwt-review.googlesource.com/#/c/9510/

It seems fairly straightforward, and it correctly fires onLoadErrorFn
when I purposely delete a permutation file locally to reproduce the
behavior.
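For anyone wanting to use the hook once a patch like this lands, here is a minimal sketch of a host-page handler. The function and variable names are illustrative, and the reload callback is injected only to keep the sketch testable; the `gwt:onLoadErrorFn` meta property is how the bootstrap error hook is registered in the GWT host page.

```javascript
// Sketch of a host-page error hook (assuming the patch above fires it
// on a permutation 404). It would be registered in the host page via:
//   <meta name="gwt:onLoadErrorFn" content="gwtLoadError">
// The reload function is injected so the sketch can run outside a browser.
function makeGwtLoadErrorHandler(reload) {
  return function gwtLoadError() {
    // During a deploy window the old permutation may be gone; reloading
    // fetches the new .nocache.js, which points at the new permutations.
    reload();
  };
}

// In a real host page you would wire it to the browser reload:
// window.gwtLoadError = makeGwtLoadErrorHandler(function () {
//   window.location.reload();
// });
```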

Feel free to reply with thoughts either here or on the code review.
If no one volunteers for the code review, after a few days I'll use
git blame to nominate someone.

Thanks,
Stephen

Jens

Oct 6, 2014, 4:25:24 AM
to google-web-tool...@googlegroups.com
> I'm trying to hook into the
> failure of when a client loads the .nocache.js from one version,
> but tries to get the permutation from another.

Hm, interesting; we've never run into this. How do you update your app? Maybe we also have this small window but have never noticed it. We deploy a second app on the app servers, and once that is done we tell the load balancers to redirect to the new app. Then a couple of things can happen:

- User is logged in and has all split points => GWT-RPC Exception might occur => app reloads
- User is logged in and tries to download old split point (404) => Caught on split point level and app reloads
- User is not logged in => user will load new nocache.js file and gets new app
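The split-point recovery in the second scenario (catch the 404 at the split-point level, then reload) boils down to a small pattern. In GWT Java it would live in a RunAsyncCallback's onFailure method calling Window.Location.reload(); sketched here as plain JavaScript with hypothetical names, since GWT client code isn't runnable standalone:

```javascript
// Illustrative sketch of the recovery pattern: if a code-split fragment
// download fails (e.g. 404 after a deploy), reload the whole app so the
// browser picks up the new .nocache.js. Names are hypothetical.
function onFragmentResult(status, reload) {
  if (status >= 400) {
    reload();          // old fragment is gone; start over on the new app
    return 'reloading';
  }
  return 'ok';         // fragment loaded; continue normally
}
```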

-- J. 

Thomas Broyer

Oct 6, 2014, 4:56:05 AM
to google-web-tool...@googlegroups.com
- User starts loading old nocache.js, you tell load balancers to redirect to the new app, the nocache.js runs and tries to load an old permutation from a new server where it doesn't exist.

I suppose Stephen's deployment strategy is different, with some servers still serving the old app while others already serve the new app, and the load balancer could direct traffic to either server; but the end result is the same.

Stephen Haberman

Oct 6, 2014, 9:59:23 AM
to Thomas Broyer, google-web-tool...@googlegroups.com

> - User starts loading old nocache.js, you tell load balancers to
> redirect to the new app, the nocache.js runs and tries to load an old
> permutation from a new server where it doesn't exist.

Right.

> I suppose Stephen's deployment strategy is different

Yeah, slightly.

Currently, if we have two servers running, our deployment script will
start another two (by increasing the AWS auto scaling group size to 4),
which means AWS will immediately add them to the AWS ELB when they boot.

Our script then waits to shut down the old servers until it's verified
both new ones are running/healthy.

So, the window where we could have both running is ~3-5 minutes long.

That said, our new deployment scripts (which we just haven't moved to
yet) start a completely new AWS auto scaling group for the deployment,
let all the new instances start there (without being added to the
ELB), and then do a mass register/deregister of the machines with the
ELB.

That will have a much smaller window, but it's still not atomic.

- Stephen
