You don't need a web server or Yarn to run TensorFlow.js!


Jeremy Ellis

Apr 7, 2018, 12:59:02 PM
to TensorFlow.js Discussion
I am working on a website for people who don't want to use servers or compiler/transpiler tools with TensorFlow.js. TensorFlow.js works fine entirely on your own computer, using simple web pages that you can edit with Notepad (or whatever text editor you use).
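For example, here is a minimal sketch of such a page (the CDN URL and version number are just examples, not the only way to load it):

<html>
<head>
  <!-- Load TensorFlow.js from a CDN; no build step or server needed. -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.11.7/dist/tf.min.js"></script>
</head>
<body>
<script>
  // Fit y = 2x - 1 with a one-layer model, entirely in the browser.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 1, inputShape: [1]}));
  model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
  const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
  const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);
  model.fit(xs, ys, {epochs: 200}).then(() => {
    model.predict(tf.tensor2d([5], [1, 1])).print();  // prints roughly 9
  });
</script>
</body>
</html>

Save that as an .html file and open it; no model files are fetched, so it even runs straight from disk.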


My site is at https://hpssjellis.github.io/beginner-tensorflowjs-examples-in-javascript/


I have taken every TensorFlow.js example (as of April 7th, 2018) that needs Yarn to compile and made them into regular, pure-JavaScript web pages: working examples where you can right-click → View page source and see all the HTML and JavaScript used to make the page!

I hope my site is valuable for both beginners and seasoned JavaScript developers. Please comment if you like it.

I am also making a list of demos that use TensorFlow.js, so please comment if you find an example not on my list.


Stan Bileschi

Apr 9, 2018, 10:19:03 AM
to TensorFlow.js Discussion
Great point! Yarn and compilation are tools that become necessities for big projects with complicated constraints, but for smaller getting-started projects they are unnecessary and may impede understanding.

Jeremy Sawruk

May 4, 2018, 7:38:11 PM
to TensorFlow.js Discussion
What command did you run to transpile the code to JavaScript?

I'm working on a pull request, but I don't know how to transpile my code so that I can test it.


On Saturday, April 7, 2018 at 12:59:02 PM UTC-4, Jeremy Ellis wrote:

Jeremy Ellis

May 5, 2018, 12:04:01 AM
to TensorFlow.js Discussion

To convert TypeScript to JavaScript I used tsc with ES7 settings on each individual TypeScript file. Then I had to deal with all the import and export statements; I just deleted them.

I'm very happy the examples are now in JavaScript. Now I just make them usable without build tools.
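Roughly, the change looks like this (just a sketch; the names are placeholders):

// Before (TypeScript/ES module, needs a build step):
//   import * as tf from '@tensorflow/tfjs';
//   export function train() { ... }

// After (plain script file): delete the import/export lines and rely on the
// global `tf` that a <script src="tf.min.js"></script> tag already defines.
function train() {
  // ...same body as before, using the global tf directly
}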


If you want to send me some code, I will see what I can do.

All the examples are already converted to pure JavaScript at

https://hpssjellis.github.io/beginner-tensorflowjs-examples-in-javascript/

Jeremy Ellis

May 5, 2018, 10:33:14 AM
to TensorFlow.js Discussion
For Jeremy Sawruk:


To transpile TypeScript to JavaScript for the mnist demo, I ran this tsc command from the main folder, but then still had to do a fair bit of cleaning (and it was fairly hard to set up, since it often could not find the node modules):

tsc --target ES2017 --module none ./demos/mnist/mnist.ts

 

Erwin Carpio

May 11, 2018, 8:57:06 PM
to TensorFlow.js Discussion
Hi, I'm also trying to build tensorflow.js web apps, hoping not to use too many transpilers, bundlers, etc.
I'm currently stuck on loadFrozenModel, particularly this part of the code:

import * as tf from '@tensorflow/tfjs';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';

const MODEL_URL = 'https://.../mobilenet/web_model.pb';
const WEIGHTS_URL = 'https://.../mobilenet/weights_manifest.json';

// await is only valid inside an async function:
async function run() {
  const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
  const cat = document.getElementById('cat');
  model.execute({input: tf.fromPixels(cat)});
}
run();

For the MODEL_URL and WEIGHTS_URL, I used relative file paths. Is that permitted, or do I really have to serve the files from the cloud or it won't work? Any advice will be much appreciated. Thanks.

Jeremy Ellis

May 11, 2018, 10:36:34 PM
to TensorFlow.js Discussion
Have a look at this Sentiment Analysis demo. If you right-click → View page source, you can see how I got it working without using a transpiler. In your situation, you should view the model.json file here and see how the path is defined relative to the GitHub repository, NOT relative to the folder I was in.



"paths": [
  "beginner-tensorflowjs-examples-in-javascript/tf-examples/Browser-Sentiment-Classification/group1-shard1of1"
],




Not sure why I had to do it this way, but that is what worked for me. Is your example one of the ones I have already converted to pure JavaScript? My site is here.
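In other words, the load call ends up looking something like this sketch (exact paths depend on where the page is served from; mine resolve against the github.io site root):

// In my case the weight path listed in model.json had to be written relative
// to the repository root on github.io, not the local folder, so that
// group1-shard1of1 is reachable at the path the manifest names.
async function run() {
  const model = await tf.loadModel(
      'tf-examples/Browser-Sentiment-Classification/model.json');
  // ...use the model
}
run();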

Erwin Carpio

May 11, 2018, 10:52:38 PM
to TensorFlow.js Discussion
Hi, thanks for replying,

I'm currently trying to build an emoji-scavenger-hunt-type app at this site:


I'm also studying the webcam-transfer-learning TensorFlow example, which you converted to pure JS on your site here:


I've basically:
1) retrained using the new retrain.py from the emoji app (similar to TensorFlow for Poets 2)
2) saved the ML model as a SavedModel directory
3) converted the Python SavedModel using tensorflowjs_converter to get the shards, JSON files, etc.
4) and I'm currently stuck at the docs for the tensorflowjs_converter loadFrozenModel at this link:


Since tfjs uses a lot of import/export and async/await, I had to use a bundler and Babel, but I keep getting stuck with async/await: my bundlers (I tried both webpack and Parcel) keep throwing errors at the await part, saying my function is Promise<void>.
I noticed you kept async/await functions in your web apps and they still work. So you did all of this without a bundler or a transpiler like Babel for ES2017 to ES2015?

Thanks for your site.
I wanted to see how to write web ML apps with as much raw JS as possible.
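For context, this is the kind of pattern I mean: a plain page using async/await with no Babel (the script URL and model path are placeholders):

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.11.7/dist/tf.min.js"></script>
<script>
  async function main() {
    // await works here because main() is declared async,
    // and modern Chrome/Firefox run async/await natively.
    const model = await tf.loadModel('my_model/model.json');
    const img = tf.fromPixels(document.getElementById('cat'))
        .toFloat().expandDims(0);
    model.predict(img).print();
  }
  main();
</script>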

Jeremy Ellis

May 12, 2018, 12:10:13 AM
to TensorFlow.js Discussion
Bundlers and compilers are great until you try out a new platform and something doesn't install. All JavaScript works without a bundler, as far as I know.

If you want to set up a GitHub site to share what you've got so far, I will have a look at it. Did you fork it? I didn't convert the emoji example to pure JavaScript.

If you use Twitter I am @rocksetta, easy for chatting. My previous post did not show the full model.json file. That is the important file that links everything together. Here is the link again; worth looking at:





Problem: I just looked at the emoji source code here and it is all in TypeScript. That is a lot harder, but not impossible, to convert to pure JavaScript.

kihapper

May 19, 2018, 4:42:42 PM
to TensorFlow.js Discussion

I was also trying to do a similar thing to Erwin and am stuck.
I am trying to build a custom image classifier using real-time input from a webcam.

Where I am at now...

1) Retrained and created a custom image classifier following the tensorflow-for-poets tutorial here
2) Converted the [retrained_graph.pb] following the tensorflowjs_converter docs to get the shards, JSON files, etc.
(I'm not sure if the model was a frozen model, but the command below worked...)

sudo tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='final_result' \
    /Users/kihapper/dev_graduation/tensorflow_js/input/retrained_graph.pb \
    /Users/kihapper/dev_graduation/tensorflow_js/output/webmodel

 
3) Found an awesome repo by @dermio that does custom image classification in real time.
4) Forked his repo and put my own custom model in the [web_model] folder, but I get the error below:

 Error in matMul: inputs must be rank 2, got ranks 1 and 2.


Maybe it's my lack of understanding of how new tools such as Yarn work, but I have trouble proceeding from here...

I learned JavaScript in the good old days, when it was only JavaScript + CSS + HTML.
Coming back, excited by the birth of tensorflow.js, I'm baffled by all the new things I need to learn to even make things work locally.
So I really appreciate Jeremy trying to keep everything as raw as possible :)

I have a feeling a tutorial for custom image classification would benefit a lot of people, since I've encountered several people stuck in similar places.


On Saturday, May 12, 2018 at 6:10:13 AM UTC+2, Jeremy Ellis wrote:

Erwin Carpio

May 20, 2018, 11:07:43 AM
to TensorFlow.js Discussion

Hi kihapper, I used the SavedModel version with this:


$ tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='MobilenetV1/Predictions/Reshape_1' \
    --saved_model_tags=serve \
    /mobilenet/saved_model \
    /mobilenet/web_model


I think the frozen model will soon be deprecated. TF.js seems to be moving to the SavedModel directory format, which contains the shards, the weights manifest, and the model .pb file, plus additional frozen-variable files and other supplemental or optional files.

When did you get the error? When you used loadFrozenModel, or when you ran the prediction?

Anyway, I initially had problems loading it, but fixed them by downgrading my tensorflowjs_converter to version 0.2.0.
The other forums noted that there are compatibility issues between the latest converter and the union package.
This will all be fixed when the new union package comes out.

Anyway, when I finally loaded my model with the lower version number, I used model.predict and that didn't work either, since for converted models it's model.execute; so again there are differences in the API.
These will all be fixed with consolidated APIs when the new union package gets released, as the devs mentioned in other forums.
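A rough sketch of the two call styles side by side (paths and node names are placeholders, and both calls sit inside an async function):

// Keras-style model (layers API): use predict().
const kerasModel = await tf.loadModel('web_model/model.json');
const probs = kerasModel.predict(input);

// Converted TensorFlow graph (loadFrozenModel): use execute().
const frozenModel = await loadFrozenModel(
    'web_model/tensorflowjs_model.pb', 'web_model/weights_manifest.json');
const logits = frozenModel.execute({'input': input}, 'final_result');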

Finally,
Hehe, I found vanilla TF too cumbersome when converting for tfjs, so I switched to Keras... it's way easier and, I believe, more reliable... why not try it?
Keras is just a wrapper API and can still use TensorFlow as the backend, so you're still using TensorFlow, but with an easier-to-manipulate API.
You can also prototype a little more easily with Keras.
You just lose the more granular controls of vanilla TensorFlow.

Though from what I've read, the TF team has chosen Keras to be part of TF's API under tf.contrib.keras, so hehehe... who knows what's in store for the future.

Good luck on your project. So you're into skin disease or derma, that's great. There are articles on image classification of malignant moles and melanoma. Are you planning to do something similar, but in the browser?

Anyway, good luck to you...

Nikhil Thorat

May 20, 2018, 1:33:56 PM
to Erwin Carpio, TensorFlow.js Discussion, Ping Yu
+Ping, can you take a look at this? Looks like the converter is throwing some cryptic errors. Let's try to resolve this error and improve the error messages for the next time around.

Just to respond, we're not planning on deprecating loadFrozenModel any time soon.


kihapper

May 20, 2018, 2:56:47 PM
to TensorFlow.js Discussion
Ah, Erwin,
Thanks for the comments.
I got the error when I ran the prediction; still stuck.

As for the skin disease classifier, that is a custom model from @dermio.

I also saw your other post saying you managed to write a custom image classifier in raw JS.
I am stuck getting everything bundled, and it would be great if you could share the code if possible...

On Sunday, May 20, 2018 at 5:07:43 PM UTC+2, Erwin Carpio wrote:

Erwin Carpio

May 23, 2018, 8:07:46 AM
to TensorFlow.js Discussion
Hi, I've been buried in work at the clinic/hospital lately and haven't been able to touch the code.

Here's a link to the raw2.html I'm currently experimenting with.



Unfortunately, linking to the CDN for the latest tfjs has currently broken it again.
I have a backup of an older tfjs version on the repo that makes it work, but I already sent the git repo to the TensorFlow devs for them to look at.
You may have seen my latest post in the other forum threads.
Hope your project works soon.
Maybe we can help each other too, or message on Twitter, since we have similar project goals.
Anyway, I'm EJTCarpio on Twitter.

Ping Yu

May 23, 2018, 7:23:59 PM
to t123...@gmail.com, TensorFlow.js Discussion
Hi Kihapper,

I looked into your example; the inference failed at the end of the retrained model.
Here are the nodes related to the failure:
  1. 224:NodeDef {input: Array(1), attr: {…}, name: "MobilenetV1/Logits/SpatialSqueeze", op: "Squeeze"}
  2. 225:NodeDef {input: Array(1), attr: {…}, name: "input_1/BottleneckInputPlaceholder", op: "PlaceholderWithDefault"}
  3. 226:NodeDef {input: Array(2), attr: {…}, name: "final_retrain_ops/Wx_plus_b/MatMul", op: "MatMul"}
  4. 227:NodeDef {input: Array(2), attr: {…}, name: "final_retrain_ops/Wx_plus_b/add", op: "Add"}
  5. 228:NodeDef {input: Array(1), attr: {…}, name: "final_result", op: "Softmax"}
The output tensor of the Squeeze node (224) is rank 1, while the MatMul node (226) multiplies it with a weight that is rank 2; that causes the rank mismatch error.
This looks like an error in the retrained frozen model when the input batch size is 1.
We should verify whether this is an issue with the frozen model itself, or whether it happens after the frozen model is optimized by the tfjs converter.
Can you try the following first: the retraining tutorial has a label_image.py script; see if that works with your frozen model.
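To make the rank issue concrete, here is a sketch of what goes wrong (the shapes are illustrative, not the exact ones in your graph):

// With batch size 1, squeeze() removes ALL size-1 dims, including the batch dim:
const logits = tf.ones([1, 1001]).squeeze();      // shape [1001]    -> rank 1
const weights = tf.ones([1001, 5]);               // shape [1001, 5] -> rank 2
// tf.matMul(logits, weights);                    // Error: inputs must be rank 2

// With batch size 2, the batch dim survives the squeeze and matMul works:
const logits2 = tf.ones([2, 1, 1001]).squeeze();  // shape [2, 1001] -> rank 2
tf.matMul(logits2, weights).print();              // shape [2, 5]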

Thanks

Ping



kihapper

May 24, 2018, 12:48:12 AM
to TensorFlow.js Discussion, t123...@gmail.com
Thanks for looking into this problem, Ping.
I tried label_image.py with my model.
It worked fine with the models that were exported with [--output_graph] in retrain.py. (Is this the frozen model...? Not sure what model type this is.)
However, with the saved model exported with [--saved_model_dir], it had the error explained below.

I will explain step by step so maybe you can reproduce it on your end.

1. Clone the tensorflow repository [ver 1.7.1]:

git clone  https://github.com/tensorflow/tensorflow.git --branch r1.7

2. Run the retrain.py script here [ver 1.7.1].
I first tried with just [--saved_model_dir], but I realized that if you do not also set [--output_graph], the [--saved_model_dir] export will not be done.

python3 tensorflow/tensorflow/examples/image_retraining/retrain.py \
    --image_dir=data/images \
    --how_many_training_steps=600 \
    --architecture mobilenet_0.25_224 \
    --saved_model_dir=data/saved_model \
    --output_graph=data/model_output/output_graph.pb \
    --output_labels=data/model_output/output_labels.txt \
    --summaries_dir=data/training_summary

File Structure

├── images
│   ├── bicycle
│   ├── exit
│   ├── fire_extinguisher
│   ├── sphere_cam
│   └── square_cam
├── model_output
│   ├── output_graph.pb
│   └── output_labels.txt
├── saved_model
│   ├── saved_model.pb
│   └── variables
├── test_images
│   └── test_bike.jpg
└── training_summary
    ├── train
    └── validation



3. With the models exported, I test them with label_image.py [ver 1.7.1].
If you just run label_image.py, you get the errors below, so I changed some parts to make it run.

Error--> "The name 'import/InceptionV3/Predictions/Reshape_1' refers to an Operation not in the graph."

output_layer = "InceptionV3/Predictions/Reshape_1"  ---changed--->   output_layer = "final_result"

Error--> "Cannot feed value of shape (1, 299, 299, 3) for Tensor 'import/input:0', which has shape '(?, 224, 224, 3)"

input_height
= 299  ---changed--->  input_height = 224
input_width
= 299   ---changed--->  input_width = 224


4. The models* that were exported with [--output_graph] into the model_output directory work fine. (*frozen model...?)

python3 tensorflow/tensorflow/examples/label_image/label_image.py \
    --graph=data/model_output/output_graph.pb \
    --labels=data/model_output/output_labels.txt \
    --image=data/test_images/test_bike.jpg

2018-05-23 23:35:52.038675: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

bicycle            0.9918669
square cam         0.008131062
sphere cam         1.5137134e-06
exit               3.2000648e-07
fire extinguisher  1.1101035e-07

5. However, the saved models that were exported with [--saved_model_dir] throw the error below when label_image.py tries to read them.

python3 tensorflow/tensorflow/examples/label_image/label_image.py \
    --graph=data/saved_model/saved_model.pb \
    --labels=data/model_output/output_labels.txt \
    --image=data/test_images/test_bike.jpg

 
/Users/miniconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters

Traceback (most recent call last):
  File "tensorflow/tensorflow/examples/label_image/label_image.py", line 118, in <module>
    graph = load_graph(model_file)
  File "tensorflow/tensorflow/examples/label_image/label_image.py", line 31, in load_graph
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message


6. I converted the saved model using the tfjs converter [version 0.3.1]:

python3 -m tensorflowjs.converters.converter \
    --input_format=tf_saved_model \
    --output_node_names='final_result' \
    --saved_model_tags=serve \
    data/saved_model/ \
    data/web_saved_model/


7. After the web saved model is done, I put the files in [assets/webmodel] of the custom image classifier I've been trying to make, and it returns the error below.
This error also happens with the converted frozen* model.

Uncaught (in promise) Error: Error in matMul: inputs must be rank 2, got ranks 1 and 2.


Maybe it has to do with the version issue here : https://github.com/google/emoji-scavenger-hunt/issues/14

I'm new to TensorFlow, so I am trying to grasp the concept of a rank mismatch error.
By batch size, do you mean the set of examples used in one iteration of model training?

I really thank you for taking the time to look at this problem.


On Wednesday, May 23, 2018 at 7:23:59 PM UTC-4, Ping Yu wrote:

Ping Yu

May 24, 2018, 11:56:56 AM
to Tomo Kihara, TensorFlow.js Discussion
Kihapper, thanks for the detailed steps. One thing I want to confirm with you: have you tried using the converter to convert the frozen model, in your case the data/model_output/output_graph.pb file?

Secondly, you can try the following: duplicate the input tensor to force the rank to stay at 2 after the squeeze. This will slow down inference time, but it will confirm whether the problem is related to the input size:

// Normalize the input image.
const preprocessedInput = tfc.div(
    tfc.sub(input.asType('float32'), INPUT_MEAN),
    INPUT_STD);

// Duplicate the input so the batch dimension does not get squeezed away.
const twoInputs = tfc.stack([preprocessedInput, preprocessedInput]);

const dict = {};
dict[INPUT_NODE_NAME] = twoInputs;
return this.model.execute(dict, OUTPUT_NODE_NAME);

Meanwhile, I will try to reproduce the problem and let you know if anything comes up.

Thanks

Ping

kihapper

May 25, 2018, 10:08:58 AM
to TensorFlow.js Discussion, t123...@gmail.com
Thanks, Ping!
I added the code you mentioned to my index.js.
It now returns doubled outputs, so you have to work around that, but it works now.
So the problem was with the input size.
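For anyone else hitting this, my workaround looks roughly like this (names follow Ping's snippet; the shape comments are assumptions about my model):

// The stacked input gives a batch of 2, so every output row is duplicated.
// Keep only the first row before reading the prediction.
const both = this.model.execute(dict, OUTPUT_NODE_NAME);  // shape [2, numClasses]
const single = both.slice([0, 0], [1, -1]);               // shape [1, numClasses]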

I converted both frozen_model and saved_model.
They both work with the current program.


├── web_frozen_model
│   ├── group1-shard1of1
│   ├── tensorflowjs_model.pb
│   └── weights_manifest.json
└── web_saved_model
    ├── group1-shard1of1
    ├── tensorflowjs_model.pb
    └── weights_manifest.json


On Thursday, May 24, 2018 at 11:56:56 AM UTC-4, Ping Yu wrote:

Erwin Carpio

May 26, 2018, 12:19:57 AM
to TensorFlow.js Discussion
Hi, finally back to coding after a whole week of unrelated programming work, huhu...

Now to my current problem; I hope someone can help.

I've switched to Keras using TF as a backend.
Converted it to a TF.js model.json with shards.
Ditched all bundlers, builders, Babel, preprocessors, Yarn, webpack, Parcel, etc., and am using raw vanilla JS.
It currently shows the probabilities of each predicted category for a single static image in an image element, as a single array/tensor.

Now for my questions...
1) I'm not using the latest tf.js from the CDN (that breaks the code; I sent my GitHub repo for this in another thread). I'm using the one before that, which I backed up. (Is there a way to determine the tfjs version number I'm using from the local file, so I can start referring to it by version number?)

2) I'm currently using code snippets from the emoji scavenger hunt app. I'm now switching from a single static image to capturing image data directly from the webcam and putting it into a video HTML tag. This is the code that I'm using:


async function kerasMod() {

    console.log('run');

  const model = await tf.loadModel('kerasModel/model.json');
  console.log(model);

  let isPredicting = true;

//   THE LOOP FOR LOGGING PREDICTIONS
  while (isPredicting) {
    const predictedClass = tf.tidy(() => {
      const img = webcam.capture();
      const predictions = model.predict(img);
      return predictions.as1D().argMax();
    });

    const classId = (await predictedClass.data())[0];
    console.log('CLASS ID is '+classId);
    await tf.nextFrame();

  }

}

kerasMod();


class Webcam {

  /**
   * @param {HTMLVideoElement} webcamElement A HTMLVideoElement representing the webcam feed.
   */
  constructor(webcamElement) {
      this.webcamElement = webcamElement;
  }

  /**
   * Captures a frame from the webcam and normalizes it between -1 and 1.
   * Returns a batched image (1-element batch) of shape [1, w, h, c].
   */
  capture() {
      return tf.tidy(() => {
          // Reads the image as a Tensor from the webcam <video> element.
          const webcamImage = tf.fromPixels(this.webcamElement);

          // Crop the image so we're using the center square of the rectangular
          // webcam.
          const croppedImage = this.cropImage(webcamImage);

          // Expand the outer most dimension so we have a batch size of 1.
          const batchedImage = croppedImage.expandDims(0);

          // Normalize the image between -1 and 1. The image comes in between 0-255,
          // so we divide by 127 and subtract 1.
          return batchedImage.toFloat().div(tf.scalar(127)).sub(tf.scalar(1));
      });
  }

  /**
   * Crops an image tensor so we get a square image with no white space.
   * @param {Tensor4D} img An input image Tensor to crop.
   */
  cropImage(img) {
      const size = Math.min(img.shape[0], img.shape[1]);
      const centerHeight = img.shape[0] / 2;
      const beginHeight = centerHeight - (size / 2);
      const centerWidth = img.shape[1] / 2;
      const beginWidth = centerWidth - (size / 2);
      return img.slice([beginHeight, beginWidth, 0], [size, size, 3]);
  }

  /**
   * Adjusts the video size so we can make a centered square crop without
   * including whitespace.
   * @param {number} width The real width of the video element.
   * @param {number} height The real height of the video element.
   */
  adjustVideoSize(width, height) {
      const aspectRatio = width / height;
      if (width >= height) {
          this.webcamElement.width = aspectRatio * this.webcamElement.height;
      } else if (width < height) {
          this.webcamElement.height = this.webcamElement.width / aspectRatio;
      }
  }

  async setup() {
      return new Promise((resolve, reject) => {
          const navigatorAny = navigator;
          navigator.getUserMedia = navigator.getUserMedia ||
              navigatorAny.webkitGetUserMedia || navigatorAny.mozGetUserMedia ||
              navigatorAny.msGetUserMedia;
          if (navigator.getUserMedia) {
              navigator.getUserMedia({
                      video: true
                  },
                  stream => {
                      this.webcamElement.srcObject = stream;
                      this.webcamElement.addEventListener('loadeddata', async () => {
                          this.adjustVideoSize(
                              this.webcamElement.videoWidth,
                              this.webcamElement.videoHeight);
                          resolve();
                      }, false);
                  },
                  error => {
                      document.querySelector('#no-webcam').style.display = 'block';
                  });
          } else {
              reject();
          }
      });
  }



}


const webcam = new Webcam(document.getElementById('videoElement'));

console.log(webcam);


So console.log(webcam) works, so the constructor for webcam runs smoothly, which means I'm getting the pixel data as a tensor.
However, I'm getting this new error:

Uncaught (in promise) Error: Requested texture size [0x0] is invalid.
    at Object.n.validateTextureSize (tfjs.js:1)
    at s (tfjs.js:1)
    at Object.n.createMatrixTexture (tfjs.js:1)
    at e.createMatrixTexture (tfjs.js:1)
    at e.acquireTexture (tfjs.js:1)
    at e.uploadToGPU (tfjs.js:1)
    at e.getTexture (tfjs.js:1)
    at e.fromPixels (tfjs.js:1)
    at e.fromPixels (tfjs.js:1)
    at e.fromPixels (tfjs.js:1)

So the requested texture size is invalid. Any suggestions? Thanks again for any help.

On Sunday, April 8, 2018 at 12:59:02 AM UTC+8, Jeremy Ellis wrote:

Erwin Carpio

May 26, 2018, 2:48:25 AM
to TensorFlow.js Discussion
It's predicting...

I just reset my video element to 224, which is the size of my training set...
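i.e. something like this in the HTML (just a sketch; the id is whatever your page uses):

<!-- Match the video size to the model's expected 224x224 input. -->
<video id="videoElement" width="224" height="224" autoplay></video>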

Leo M

May 26, 2018, 10:18:08 AM
to TensorFlow.js Discussion
Hi

I've been trying for days to get a minimal PoseNet/camera example working without a web server or Yarn, so this post definitely caught my eye!
But I'm forced to stick with Parcel and yarn/npm, since that is the only way the camera demo works, for the reasons below.
It already begins falling apart when I, for example, remove --no-hmr from the watch script.

Once those two scripts are both working again, you'll see the error I was getting, which stemmed, I think, from the tfjs script being unable to get the canvas.
I also did what you had done: I took the demo camera example and tried to convert it to a webserver-free version, fixing errors by removing the imports and adding script tags at the end of the HTML body, trying to preserve the order of the demo and stick to it as closely as possible.
Any ideas?

Many thanks

Jeremy Ellis

May 29, 2018, 10:53:09 PM
to TensorFlow.js Discussion

wang tiezhen

May 29, 2018, 11:40:48 PM
to TensorFlow.js Discussion
I was facing exactly the same problem. I wanted my workshop program to be extremely simple, without the burden of knowing too many dependencies or spending too much time setting up the environment.

That's why I built this workshop program, where students can build a Python model, convert it to a JS model, and visualize it in a browser. I used pure ES6 without the await keyword, which is a bit uncomfortable, but it works as intended. :-) Feel free to give it a try.
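The pattern is just plain promise chaining, roughly like this sketch (the model path and input are placeholders):

// Pure ES6: .then() chains instead of async/await.
tf.loadModel('my_model/model.json')
    .then(model => {
      const input = tf.tensor2d([[0.1, 0.2, 0.3]]);  // placeholder input
      return model.predict(input).data();
    })
    .then(values => console.log('prediction:', values))
    .catch(err => console.error(err));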


Leo M

May 31, 2018, 6:45:46 AM
to TensorFlow.js Discussion
Great, I'm going ahead and including it with https://unpkg.com/@tensorflow/tf...@0.10.3/dist/tf.min.js.
Since this and https://unpkg.com/@tensorflow-models/pos...@0.0.1/dist/bundle.js have had a few days of downtime over the past couple of weeks, I've also downloaded the files locally for next time.

Erwin Carpio

Jun 1, 2018, 8:23:45 AM
to TensorFlow.js Discussion
Hello... got my setup working on a local server now. I used a Keras model instead of a frozen model to avoid all the hassle. It works on my local development server, so I tried uploading it to the cloud, but in the cloud tfjs throws this error...

Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4
    at new Float32Array (<anonymous>)
    at tfjs:1
    at Array.forEach (<anonymous>)
    at tfjs:1
    at Array.forEach (<anonymous>)
    at tfjs:1
    at n (tfjs:1)
    at Object.next (tfjs:1)
    at i (tfjs:1)

This does not happen when I load the same web page, with the Keras model.json and shards, on a local server...

Any suggestions?
Thanks.

Erwin Carpio

Jun 1, 2018, 8:29:42 AM
to TensorFlow.js Discussion
Let me clarify, since this is the no-server-or-Yarn thread:
I meant that on my local machine, with no extra bundlers and using just raw JavaScript, it works, but not when I upload my files to the cloud. I have a web hosting service in the cloud, but when I put the files there it doesn't work and throws the error. Is there a special way to serve the model.json and shards when using a web hosting service? Could that be the problem?

Erwin Carpio

Jun 1, 2018, 9:34:49 AM
to TensorFlow.js Discussion
Just wanted to update: it's not a tensorflow.js problem.
It was a FileZilla issue. The transfer type in FileZilla was set to ASCII; I manually reset it to Binary for the shards and everything worked out. Please disregard the error log.
Thanks again, and more power to the TensorFlow team.
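If anyone wants to check for this kind of corruption, here's a quick sketch you can run from the browser console (the shard path is a placeholder):

// A weight shard is raw float32 data, so its byte length must be divisible by 4.
fetch('kerasModel/group1-shard1of1')
    .then(res => res.arrayBuffer())
    .then(buf => console.log('bytes:', buf.byteLength,
        'ok:', buf.byteLength % 4 === 0));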

Erwin Carpio

Jun 6, 2018, 10:04:23 AM
to TensorFlow.js Discussion
Hi, I've been posting on the emoji scavenger hunt GitHub repo and other TensorFlow-related forums, but forgot to ask here.

So I've built a little web app that uses the camera to categorize two types of fractures, Monteggia and Galeazzi, at this link:



It currently runs on Chrome, Opera, and Firefox on desktop, and on Android 8 with the latest Firefox. I'm trying to get it to work on Android 5.0.
I don't get any errors, so I manually looked for the function stalling my app and found it to be

model.predict()

I've read there's a performance cliff for model.predict on Android phones at this link:


so I tried using predictedClass.dispose() to free the GPU for the return value of my tf.tidy function.
This still doesn't work, and my app still stalls at model.predict().

Any suggestions would be much appreciated. 

(sorry for reposting... I found too many typographical errors in the first post)

Erwin Carpio

Jul 5, 2018, 11:57:18 PM
to TensorFlow.js Discussion
Hi, I tried waiting a while to see if updates to tensorflow.js would solve the issue and start supporting my web app, which uses the video camera of a phone to recognize radiographic image findings. The latest 0.11.7 version of tf.js is blazing fast! In fact it's now too fast, hehehe... I might have to tweak my app on the laptop to slow down the detection rate. However, the problem remains: the app still only works on Android 8.0, not on Android 5.0. The webcam works and the AI model loads properly. It's still model.predict() that's taking forever, so it's likely still connected to the performance cliff on Android 5.0? I'm still not getting any errors; model.predict() just doesn't seem to run, or takes forever to run. Any suggestions?

John Terry

Sep 26, 2018, 1:55:38 PM
to TensorFlow.js Discussion
Hi,

I am new to JavaScript, but I liked the PoseNet tutorial. Could you please walk me through how to set up PoseNet with TensorFlow.js to apply it locally to my set of videos or images? Is there a way to set it up in Keras?

Azamat

swati nair

Nov 28, 2018, 4:46:13 AM
to TensorFlow.js Discussion
Hi, I need to call camera.js in PoseNet, like node camera.js from the command line, without using the HTML. I need to hard-code the video input path rather than taking it from the HTML. Actually, my main aim is to save the extracted keypoints to a .json file, but that will not be possible if I am using a browser. Can you please tell me how I can achieve this?

forough karandish

Aug 22, 2019, 4:07:22 PM
to TensorFlow.js Discussion
@swati nair,

Have you achieved your goal? I've got the same question.

Tom Ming

Feb 6, 2020, 5:58:30 PM
to TensorFlow.js Discussion
Great!