Concurrent update of chrome.storage.local


Maksymilian Chwałek

Aug 22, 2022, 1:59:04 PM8/22/22
to Chromium Extensions
Hi, I'm having a problem with concurrent updates to chrome.storage.local.

I have a service worker that subscribes to chrome.webRequest.onSendHeaders. In the handler, I get the dictionary of previously retrieved authorization tokens from storage (if any), update it with the newly captured token, and finally write it back to storage. The dictionary is then displayed in the popup.

Due to the concurrency of event handlers, some data may get lost (for example, when two handlers read the same state and then each try to write "their" state). Is there any way to prevent race conditions in such a case?

For further context, please see my repo: https://github.com/mchwalek/AuthSnatcher. I worked around the issue by storing each token at the top level, but that doesn't seem right to me (https://github.com/mchwalek/AuthSnatcher/commit/327d3690ccba664becefca6c158d751260506972).

wOxxOm

Aug 23, 2022, 11:21:58 AM8/23/22
to Chromium Extensions, mchwa...@gmail.com
You only need to read storage once per run of the service worker; after that you can read/write a global variable directly, writing it back to storage to provide continuity for the next run of the SW and for other contexts like the popup. Here's my example that assumes these values are changed only by the background script. If not, you need to use chrome.storage.onChanged, as shown in an example in the documentation.
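A minimal sketch of that pattern (the key name "tokens" and the function names are illustrative, and it assumes only the background script writes this key):

```javascript
// In-memory cache, read from storage once per service worker run.
// Assumes only the background script writes the "tokens" key; otherwise
// subscribe to chrome.storage.onChanged to keep the cache in sync.
let tokensPromise;

function getTokens() {
  // Lazily read storage on the first call after the SW starts. Concurrent
  // first calls share the same promise, so there is only ever one read.
  tokensPromise ??= chrome.storage.local
    .get('tokens')
    .then(data => data.tokens ?? {});
  return tokensPromise;
}

async function addToken(origin, token) {
  const tokens = await getTokens();
  tokens[origin] = token; // update the single in-memory copy first...
  // ...then persist it for the popup and for the next SW run.
  await chrome.storage.local.set({ tokens });
}
```

Because every handler mutates the same in-memory object, two events firing back to back can no longer clobber each other's state; the storage write is just a snapshot of that one object.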

Note that webRequest events don't wake up the SW yet (it's a bug). In some cases this can be circumvented by subscribing to chrome.webNavigation or chrome.tabs events; otherwise you can try forcing the SW to run "forever".

P.S. The documentation is quite misleading and confusing in this respect, apparently trying too hard to explain the concept in abstract yet nonsensical terms like resume/suspend instead of explaining the precise and simple mechanics: the SW starts, the SW runs, the SW stops.

When the SW wasn't running previously and is started by an event, it first runs the script in its entirety, and your listeners are registered via the addListener calls; then the wake-up event is dispatched to the listener(s) subscribed to it. This all happens in the first cycle of the JS event loop, which is why the listeners must be registered synchronously. This is often oversimplified as "in global scope", but it actually means any synchronously running code.

Finally, each such wake-up event restarts the internal 30-second termination timer, and when that timer finally fires the SW is simply stopped, terminated, killed, destroyed. External connections such as runtime ports, or a fetch event from the extension's own pages, add 5 minutes to the lifetime. Connecting to a nativeMessaging host in Chrome 105+ makes the SW run forever.
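The "register listeners synchronously" rule in a sketch (the small `chrome` stub at the top is only there so the snippet runs outside a real extension; in an actual MV3 service worker the browser provides `chrome`):

```javascript
// Stand-in for the extension API so this sketch is runnable outside Chrome.
const registered = [];
globalThis.chrome ??= {
  webNavigation: { onCommitted: { addListener: fn => registered.push(fn) } },
};

// Correct: addListener runs synchronously during the first event-loop
// cycle, so the listener exists before the wake-up event is dispatched.
chrome.webNavigation.onCommitted.addListener(details => {
  console.log('committed navigation in tab', details.tabId);
});

// Broken: by the time this timer callback runs, the event that woke the
// SW has already been dispatched, and this listener misses it.
setTimeout(() => {
  chrome.webNavigation.onCommitted.addListener(() => {}); // too late
}, 0);
```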

Maksymilian Chwałek

Aug 26, 2022, 3:25:23 PM8/26/22
to Chromium Extensions, wOxxOm, Maksymilian Chwałek
Thanks a lot! I wish Google described it more clearly in their docs.

Simeon Vincent

Aug 29, 2022, 10:09:53 PM8/29/22
to Maksymilian Chwałek, Chromium Extensions, wOxxOm
Due to the concurrency of event handlers, some data may get lost (for example, when two handlers read the same state and then each try to write "their" state). Is there any way to prevent race conditions in such a case?

IMO the design of the Storage API makes it difficult to properly handle concurrent updates. Storage is a relatively simple asynchronous key-value store; it wasn't designed to be ACID compliant. In particular, the Storage API doesn't support transactions, so you may run into race conditions in any situation where you have multiple parts of your application that attempt to update the database simultaneously and end up clobbering each other's changes. 

wOxxOm's suggestion is solid if all of your application state and database interactions can take place in a single JS environment, but if you need to perform updates in multiple JS environments (e.g. multiple pages) the limitations of the Storage API come right back. 

IndexedDB has an advantage here because it can batch multiple operations together into a single transaction. This in turn allows you to write your database update in a way that will allow you to detect that something went wrong and to retry as appropriate. For example:
  1. Read the current value stored at a given key.
  2. Check whether that value has been modified from its expected value.
    - If it has been modified, abort the transaction (and tell the caller why).
    - If not, proceed.
  3. Write the updated value to the specified key.
While it's possible for an extension developer to implement a transaction system using the Storage API, doing so would take a decent amount of work and would have an impact on the extension's architecture.
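For completeness, here is one way such a check-and-retry update could be sketched on top of chrome.storage.local (all names are illustrative; unlike a real IndexedDB transaction, this only narrows the race window, since get and set remain separate asynchronous steps):

```javascript
// Optimistic update over chrome.storage.local: read, compute, verify the
// stored value is still unchanged, then write; retry on a detected conflict.
async function updateKey(key, updateFn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const before = (await chrome.storage.local.get(key))[key];
    // Work on a copy so a retry starts from a clean read.
    const after = updateFn(structuredClone(before ?? {}));
    // Has someone else written to this key in the meantime?
    const current = (await chrome.storage.local.get(key))[key];
    if (JSON.stringify(current) !== JSON.stringify(before)) continue; // conflict: retry
    // Still racy between the check above and the set below -- a real
    // transaction would make these steps atomic.
    await chrome.storage.local.set({ [key]: after });
    return after;
  }
  throw new Error(`updateKey('${key}'): gave up after ${maxRetries} conflicts`);
}
```

A caller would use it as `await updateKey('tokens', t => { t[origin] = token; return t; })`, keeping each update a pure function over the previous value.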


I wish google described it more clearly in their docs.

As a Googler working on the docs, I'd love to know what we could do to make them more effective. wOxxOm's comment covered a few different topics; which part in particular helped clear up your confusion? And where would you expect to find that information in the docs?


P.S. The documentation is quite misleading and confusing in this aspect, … [instead it should use] precise and simple mechanics: SW starts, SW runs, SW stops.

This seems like a good suggestion. Just opened issue 3595 to track it.

Simeon - @dotproto
Chrome Extensions DevRel

