Privacy of loaded Frozen Model

Andrew Sigurdson

Jul 18, 2018, 3:05:46 PM
to TensorFlow.js Discussion
In the future I am going to be hosting a frozen model on a website.  When the model is loaded into the client's browser, where is the model actually saved?  Memory?  Is it encrypted so that the client can't retrieve the model?  Thank you,

Andrew Sigurdson

Loreto Parisi

Jul 18, 2018, 3:59:29 PM
to TensorFlow.js Discussion
That's a good question. I asked about this some time ago. The problem is that the model shards are publicly downloadable files, so it's pretty easy to download them; this is actually what I do now to test the examples around the web. Just open the Developer Tools, look at the Network tab, and you will see the model shard files there with this structure:

ip-192-168-1-104:model loretoparisi$ tree -L 1
.
├── group1-shard1of1
├── group2-shard1of1
├── group3-shard1of1
├── group4-shard1of3
├── group4-shard2of3
├── group4-shard3of3
├── metadata.json
└── model.json

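For example, anyone can re-download those same files from the browser console once the URLs are known. A minimal sketch (the base URL is hypothetical; it assumes the standard model.json manifest, whose weightsManifest entries list the shard paths):

// Hedged sketch: re-downloading the public model files from the console.
// The base URL is an assumption.
const base = 'https://example.com/model/';
const manifest = await (await fetch(base + 'model.json')).json();
// model.json lists the weight shard filenames under weightsManifest
const shardNames = manifest.weightsManifest.flatMap((g) => g.paths);
for (const name of shardNames) {
  const buf = await (await fetch(base + name)).arrayBuffer();
  console.log(name, buf.byteLength, 'bytes');  // raw weight bytes, in the clear
}
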
TensorFlow, by the way, has a different way to handle "public" models: the new TensorFlow Hub, where you can upload your model and then download it, without the friction of handling the model 'locally' (it will actually be downloaded locally anyway). The other interesting part is the use of the browser's IndexedDB to store the model files.
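
For reference, TF.js exposes that IndexedDB storage through a URL scheme on save/load. A minimal sketch (the model name is made up; newer TF.js releases renamed tf.loadModel to tf.loadLayersModel, which is what I use here):

import * as tf from '@tensorflow/tfjs';

async function cacheModel() {
  // First load fetches over the network, then persists a copy in IndexedDB.
  const model = await tf.loadLayersModel('https://example.com/model/model.json');
  await model.save('indexeddb://my-model');
  // Subsequent loads can come straight from the browser's IndexedDB.
  return tf.loadLayersModel('indexeddb://my-model');
}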

By the way, to keep your model safe you need something else, like crypto. Think of it like an HLS stream that you want to protect: you encrypt it with something like AES-128 encryption, and so on.
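
As a sketch of what that could look like with the Web Crypto API (AES-GCM here; the URL, key, and IV handling are all my assumptions). One caveat: the key still has to reach the client somehow, so this hides the weights from casual downloading, not from a determined user:

// Hypothetical sketch: fetch an encrypted shard and decrypt it client-side.
// AES-GCM with a 128-bit key; url, rawKey (16 bytes) and iv (12 bytes) are
// assumptions. The key is present in the client, so this is obfuscation only.
async function decryptShard(url, rawKey, iv) {
  const cipher = await (await fetch(url)).arrayBuffer();
  const key = await crypto.subtle.importKey('raw', rawKey, 'AES-GCM', false, ['decrypt']);
  return crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, cipher);  // plaintext ArrayBuffer
}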

David Soergel

Jul 18, 2018, 4:29:12 PM
to as...@umich.edu, TensorFlow.js Discussion
A TensorFlow.js model is indeed loaded into memory in the user's browser in unencrypted form, so users have full access to it; i.e., they can easily extract the model back to local files using the JS console.
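
For instance, if the page keeps a reference to the model in scope, one line in the console is enough (a sketch; "model" stands for whatever handle the page exposes):

// Triggers an ordinary browser download of the topology and weight files.
await model.save('downloads://extracted-model');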

Also, the model must exist at some URL (which you pass to tf.loadModel()).  Access to that URL may require some authentication, but if the browser can load it, then in general the user can too (e.g., while they're logged in to your site).  Finally, the model files will typically be stored in the browser cache.

Providing a model in an encrypted-yet-functional form is an ongoing research problem; see https://medium.com/corti-ai/encrypt-your-machine-learning-12b113c879d6 for an overview.  We do not currently address this topic in TF.js.  Of course I agree it's an important issue, and hope that the technology becomes practical in the future.

-ds

Andrew Sigurdson

Jul 18, 2018, 4:39:42 PM
to TensorFlow.js Discussion, as...@umich.edu
Thank you for the responses. So currently there is no way to stop a competitor from stealing our models once they are loaded in their browser?  Is there a way to turn the model into machine code when loaded, rather than encrypting it, and then execute the machine-code model in the browser?  At least it would be very hard to reproduce the code!  Thank you,

Andrew Sigurdson

David Soergel

Jul 18, 2018, 5:04:46 PM
to as...@umich.edu, TensorFlow.js Discussion
Correct.  Loading a TF.js model into the browser is not the right solution for models that need to stay confidential.  You'll have to keep such models server-side and make calls to them over the wire.  If you like, you can still use TF.js on the server via the Node.js bindings, but you also have a lot of other options, such as https://www.tensorflow.org/serving and https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models.
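
A minimal sketch of that server-side setup with the Node.js bindings (Express, the /predict route, the port, and the input shape are all my assumptions):

const tf = require('@tensorflow/tfjs-node');
const express = require('express');

const app = express();
app.use(express.json());

let model;  // loaded once at startup; the weights never leave the server

app.post('/predict', async (req, res) => {
  const input = tf.tensor(req.body.input);     // e.g. a nested array from the client
  const output = model.predict(input);
  res.json({ prediction: await output.array() });
  tf.dispose([input, output]);                 // free tensor memory
});

tf.loadLayersModel('file://model/model.json').then((m) => {
  model = m;
  app.listen(3000, () => console.log('model server listening on :3000'));
});
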
-ds
