Async process return operation on AbstractRequestContext


Ignacio Baca Moreno-Torres

Jul 7, 2014, 9:00:53 AM
to google-web-tool...@googlegroups.com
RequestFactory has some performance problems. One of them occurs when AbstractRequestContext needs to parse the response and create all the proxies. This problem is especially annoying when the time required to process the response forces the browser to show the warning message ~"A script on this page may be busy, or it may have stopped responding".

I reviewed the code and applied an alternative async execution which solves the warning-popup problem (it is not a performance improvement), but I'm not sure whether this simple solution may have some side effects. One odd thing about the solution is that the EntityProxyChange events might be fired in different JavaScript event loops, but this could be solved by firing them after all operations have been processed. Is there any other problem?
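
Roughly, the idea is something like this (just a sketch to illustrate, not the actual patch; the operations list, processReturnOperation() and onProcessingDone() are simplified stand-ins for the real AbstractRequestContext internals):

// Sketch only: process the response operations in increments so the browser
// regains control between chunks instead of blocking on one long loop.
import java.util.Iterator;
import java.util.List;

import com.google.gwt.core.client.Scheduler;
import com.google.gwt.core.client.Scheduler.RepeatedCommand;

class ChunkedResponseProcessor<T> {
  void processAll(List<T> operations, final Runnable onProcessingDone) {
    final Iterator<T> it = operations.iterator();
    Scheduler.get().scheduleIncremental(new RepeatedCommand() {
      @Override
      public boolean execute() {
        if (it.hasNext()) {
          processReturnOperation(it.next()); // one operation per increment
          return true;                       // reschedule in the next event loop slice
        }
        onProcessingDone.run();              // everything parsed: fire events, call receivers
        return false;
      }
    });
  }

  void processReturnOperation(T operation) {
    // placeholder for the real per-operation work (creating/updating proxies)
  }
}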


I'm creating this post to decide whether or not to file an issue (and a patch).

Ignacio Baca Moreno-Torres

Jul 8, 2014, 11:59:47 AM
to google-web-tool...@googlegroups.com
In this whitespace-ignoring diff it is much clearer that I just wrap the end of the processing in a callback: https://github.com/ibaca/gwt/commit/cd4901a23109c5350113363f2c539a8105e874b9?w=1

Also, I have tested this patch in a '1500-line deobfuscator' application with no issues, although the patch could introduce some silent "concurrency" problems that I have not detected yet.

If I submit this patch, with some improvements and tests, as a pull request, might it be accepted?

Jens

Jul 8, 2014, 12:36:09 PM
to google-web-tool...@googlegroups.com
Well, in general I think it's not a big issue to process the response in an async way; however, it just moves your problem into the future. Your patch allows you to load more data from the server without blocking the browser, but sooner or later the browser will block again, because you will probably start loading even more data and the chunks of work will become too large again. A maintainer of RequestFactory will have to decide if it's worth it.

IMHO the real solution would be to rethink your UI / workflow so you don't need to load such a large amount of data at once. Out of curiosity: how much data are you actually trying to transfer that causes the browser to block?

As a side note: GWT does not accept pull requests on GitHub. You must sign up on Gerrit and sign a CLA: http://www.gwtproject.org/makinggwtbetter.html#submittingpatches

-- J.

Colin Alworth

Jul 10, 2014, 1:53:14 PM
to google-web-tool...@googlegroups.com
Jens, I think you may be mistaken about how far this patch moves the problem into the future. Rather than breaking the work into just another chunk, the patch appears to break it into as many chunks as are required, one per message coming back from the server. The next bottleneck, if it can exist at all, appears to be processReturnOperation, which iterates over the current proxy and essentially hits each setter. For a single step to block long enough to trigger the warning, a proxy would need so many properties (at least on the order of thousands, if not hundreds of thousands) that it hangs.

If it were me writing the patch, I'd go another step and break up the calls to onSuccess/onFailure too ;). However, *that* might have some ramifications for users who expect all onSuccess calls to run synchronously with each other, and users can already fix long-running work by doing the scheduler chunking in their own code.
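
To illustrate that last point with a rough sketch (ItemProxy and renderRow() are hypothetical placeholders), a user can already chunk the heavy part of their own Receiver:

// Sketch of chunking heavy Receiver work on the user's side; ItemProxy and
// renderRow() are hypothetical placeholders, not real RequestFactory types.
import java.util.Iterator;
import java.util.List;

import com.google.gwt.core.client.Scheduler;
import com.google.gwt.core.client.Scheduler.RepeatedCommand;
import com.google.web.bindery.requestfactory.shared.Receiver;
import com.google.web.bindery.requestfactory.shared.Request;

class ItemsLoader {
  void load(Request<List<ItemProxy>> findAllItems) {
    findAllItems.fire(new Receiver<List<ItemProxy>>() {
      @Override
      public void onSuccess(List<ItemProxy> items) {
        final Iterator<ItemProxy> it = items.iterator();
        Scheduler.get().scheduleIncremental(new RepeatedCommand() {
          @Override
          public boolean execute() {
            // Render a small batch per increment so the UI stays responsive.
            for (int i = 0; i < 50 && it.hasNext(); i++) {
              renderRow(it.next());
            }
            return it.hasNext(); // true = keep going, false = done
          }
        });
      }
    });
  }

  void renderRow(ItemProxy item) {
    // placeholder for the per-item UI work
  }
}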

Truly massive object graphs might still end up balking at the JSON.parse that happens in AutoBeanCodex.decode. That will be difficult to break up without rewriting decode to take a callback, though.



Ignacio Baca Moreno-Torres

Jul 15, 2014, 4:31:17 AM
to google-web-tool...@googlegroups.com
Although I agree this may be a workflow problem, I also think this patch might be helpful for some use cases.

For me, this problem appears in two cases.

Case 1. A reporting app which loads reporting data for a long period and allows client-side analysis. The request loads ~5 MB (less than 600 KB compressed). In this case I admit that a rethink is the best solution, because RequestFactory is really bad at loading bulk data. RequestFactory is perfect for the Editor framework or similar graph-shaped data requests, but it's not useful for bulk ValueProxy data loading.

Case 2. A translation tool which loads all translations to allow offline editing. This request loads ~1 MB (1500 keys with per-language translations and metadata). It is done the first time a user logs in; most of the time this data is loaded from local storage. So I think this use case should be supported by RequestFactory. With this patch, it works fine; without it, this first load usually fires the 'slow script' popup.
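
For context, the local-storage caching is nothing fancy, roughly like this sketch (the key name and the idea of caching the serialized payload are simplified assumptions):

// Sketch of the local-storage caching mentioned above; the key name and the
// cached serialized payload are simplified assumptions.
import com.google.gwt.storage.client.Storage;

class TranslationCache {
  private static final String KEY = "translations-payload"; // hypothetical key

  void save(String serializedPayload) {
    Storage localStorage = Storage.getLocalStorageIfSupported();
    if (localStorage != null) {
      localStorage.setItem(KEY, serializedPayload);
    }
  }

  String load() {
    Storage localStorage = Storage.getLocalStorageIfSupported();
    return localStorage == null ? null : localStorage.getItem(KEY);
  }
}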

Ignacio Baca Moreno-Torres

Jul 15, 2014, 4:46:33 AM
to google-web-tool...@googlegroups.com
But I prefer something very simple which solves the problem without touching too much; my patch looks simple, which is important for it to be accepted. Also, I prefer that the user-facing behavior does not change, so accumulating the EntityProxyChange events until all operations are processed is required. Breaking up success/failure looks like something that would cause unexpected problems in apps: I sometimes write code which expects all Receivers of one request to be called in the same browser event loop, so splitting the response of a request across multiple browser event loops looks like a bad idea.
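
By accumulating the events I mean something roughly like this sketch (the real patch would do it inside AbstractRequestContext; this is only an illustration):

// Sketch only: buffer the EntityProxyChange notifications while operations are
// processed in chunks, and fire them together once everything is done.
import java.util.ArrayList;
import java.util.List;

class DeferredEventFiring {
  private final List<Runnable> pendingEventFirings = new ArrayList<Runnable>();

  // Called for each processed operation instead of firing the event immediately.
  void defer(Runnable fireEvent) {
    pendingEventFirings.add(fireEvent);
  }

  // Called after the last chunk: fire everything in the same browser event loop.
  void fireAll() {
    for (Runnable fireEvent : pendingEventFirings) {
      fireEvent.run();
    }
    pendingEventFirings.clear();
  }
}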


Jens

Jul 15, 2014, 5:07:12 AM
to google-web-tool...@googlegroups.com
> Case 2. A translation tool which loads all translations to allow offline editing. This request loads ~1 MB (1500 keys with per-language translations and metadata). It is done the first time a user logs in, but most of the time this data is loaded from local storage,

Just as an alternative: I generally have a login app that loads very fast because it is small and focused on login. After login, once the real app loads, the data I require immediately is simply embedded into the app's host page as pure JSON. That saves initial requests, and with GWT's Dictionary, AutoBeans, or JSOs you can access the data very efficiently.
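
A minimal sketch of that setup (the appData variable name and its keys are just hypothetical examples):

// Sketch: reading data embedded in the host page with GWT's Dictionary.
// Assumes the host page defines a JS object named "appData", e.g.:
//   <script>var appData = { "userName": "demo", "locale": "en" };</script>
import com.google.gwt.i18n.client.Dictionary;

class HostPageData {
  void readEmbeddedData() {
    Dictionary appData = Dictionary.getDictionary("appData"); // hypothetical variable name
    String userName = appData.get("userName");
    String locale = appData.get("locale");
    // use the values directly, without any extra server round trip
  }
}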

-- J.