
Running in a web worker #102

Closed · bkazi opened this issue Apr 5, 2018 · 30 comments

Labels: P3, type:feature (New feature or request)

@bkazi
bkazi commented Apr 5, 2018

Are there any plans to make tfjs able to run in a worker using OffscreenCanvas? Is it already possible? (Sorry if I'm still stuck in the deeplearn.js days.)

Would it be possible to do so now by manually creating a GPGPUContext and using it in the backend somehow?

@nsthorat
Contributor
nsthorat commented Apr 5, 2018

Offscreen canvas is definitely something we want to support; we haven't prioritized it because the API is still experimental.

If you want to send us a PR, we'd happily accept it. Just make sure you do feature testing inside environment.ts.
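
For anyone picking this up, a minimal sketch of the kind of feature test meant here (the helper name is hypothetical; the real check would live in environment.ts):

// Hypothetical feature test, sketch only.
function canUseOffscreenCanvasWebGL() {
  if (typeof OffscreenCanvas === 'undefined') {
    return false;
  }
  // Creating a context can still fail (e.g. on a blacklisted GPU), so probe it.
  const canvas = new OffscreenCanvas(1, 1);
  return canvas.getContext('webgl') != null;
}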

easadler pushed a commit to easadler/tfjs that referenced this issue Apr 12, 2018

@prijindal

If you mean simply training a model in a web worker, it seems that is already possible.
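
A minimal sketch of what that can look like, assuming the CDN bundle loads cleanly in your browser's workers (the toy data is just for illustration):

// worker.js - toy training sketch: learn y = 2x - 1 from four points.
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs')

onmessage = async () => {
  const model = tf.sequential()
  model.add(tf.layers.dense({units: 1, inputShape: [1]}))
  model.compile({loss: 'meanSquaredError', optimizer: 'sgd'})

  const xs = tf.tensor2d([-1, 0, 1, 2], [4, 1])
  const ys = tf.tensor2d([-3, -1, 1, 3], [4, 1])

  await model.fit(xs, ys, {epochs: 100})
  postMessage('training done')
}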

@riccardogiorato

Is it possible to run a model's prediction in a web worker? Or is there a way to predict without freezing the browser UI? GIFs and other things on the page stay frozen until the prediction is completed.

Is there a workaround? @prijindal is only training possible inside a web worker?

@nsthorat
Contributor

If the page is freezing because of WebGL, you won't really get much from a web worker.

Have you tried sprinkling await tf.nextFrame() between calls to TF.js and using .data() instead of .dataSync()?
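
For example, a prediction loop along these lines keeps yielding control back to the browser (a sketch; running, getInput, and handleResult are placeholders):

// Sketch: yield to the browser between predictions and avoid blocking reads.
async function predictLoop(model) {
  while (running) {                    // placeholder flag to stop the loop
    const input = getInput()           // placeholder: produce an input tensor
    const output = model.predict(input)
    // .data() resolves asynchronously instead of blocking like .dataSync().
    const values = await output.data()
    handleResult(values)               // placeholder: consume the result
    input.dispose()
    output.dispose()
    // Give the main thread a frame to render before the next prediction.
    await tf.nextFrame()
  }
}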

@prijindal

@giorat Yes, it is possible to do prediction inside a web worker as well.
So far I couldn't find anything in tfjs that specifically requires a DOM environment, so I'm guessing the whole library should work inside a web worker.

@brannondorsey
Contributor

@giorat and @prijindal, tfjs does indeed work in web workers as of today, but unfortunately only using the cpu backend, not webgl.

Supporting the webgl backend inside web workers seems like a huge advantage if it can be managed with the OffscreenCanvas API. As of today, it seems to me that there are two options for running tfjs with the webgl backend:

  1. Train/infer on small batch sizes being careful to await tf.nextFrame() so as to not block the main UI thread.
  2. Ignore tf.nextFrame() and run your tfjs operations with no throttling via requestAnimationFrame() (web devs & users will hate you for this).

tf.nextFrame() (which uses requestAnimationFrame() underneath) is an interesting solution to a problem unique to doing ML in the browser, one that batch-processing ML frameworks in Python/C++ don't suffer from. Without web workers, tfjs will always have to share the main UI thread, and as a result its operations will always be throttled and slowed down. From where I see it, support for the webgl backend inside web workers would be a huge step toward making tfjs a first-class citizen among ML frameworks. I don't mean to say it isn't amazing as is, but without "multithreaded" support that can also leverage WebGL, tfjs will always be bound to a throttling API that was designed for video games and animation, not the kind of batch processing standard in machine learning.

Have there been any recent movements or a road map to add support for the WebGL backend in web workers?

@oeway
oeway commented Jun 29, 2018

I think it's definitely necessary to support web workers; having cpu and webgl computation running in web workers would be awesome.
The situation I am facing right now: I have web workers doing preprocessing on data that will be passed to tfjs for training, and I ran into two related issues:

  1. In a web worker, I just tried the CDN tfjs:
importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs")

and I get Error: Script error. It doesn't even give me the chance to set the backend to cpu.
@brannondorsey how did you get tfjs to work in the web worker with the cpu backend?

  2. I tried to see if tensors can be sent through postMessage, so I ran code with tfjs in a sandboxed iframe. When I create a tensor and send it with postMessage, I get this object: {isDisposedInternal: false, size: 4, shape: Array(2), dtype: "float32", strides: Array(1), …} but it can't be used in tfjs; the data is lost.
    Would it be possible to send tensors through postMessage?
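
For what it's worth, one workaround (a sketch, not an official tfjs API) is to send the raw values plus shape and rebuild the tensor on the other side:

// Sender (e.g. in a worker): tensors don't survive structured cloning,
// so ship the raw data + shape instead.
const t = tf.tensor2d([[1, 2], [3, 4]])
t.data().then(values => {
  // values is a Float32Array; listing its buffer as a transferable avoids a copy.
  postMessage({values, shape: t.shape, dtype: t.dtype}, [values.buffer])
})

// Receiver: rebuild an equivalent tensor from the plain data.
onmessage = e => {
  const {values, shape, dtype} = e.data
  const rebuilt = tf.tensor(values, shape, dtype)
}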

@sandipdeveloper

Hi all,

Is there sample code for using tfjs in a web worker? I tried something like this, and it does not work:

importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs")

@brannondorsey
Contributor

It looks like the OffscreenCanvas API will be supported by default in the upcoming Chrome 69 release and beyond. Once this occurs, what would be the necessary steps to get WebGL support in a web worker with tfjs?
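
At a minimum, a worker in a browser with OffscreenCanvas can create its own WebGL context, which is what the webgl backend would need to be handed. A quick probe (a sketch of the raw capability, not a tfjs API):

// worker.js - check for a DOM-free WebGL context (sketch).
if (typeof OffscreenCanvas !== 'undefined') {
  const canvas = new OffscreenCanvas(1, 1)
  const gl = canvas.getContext('webgl2') || canvas.getContext('webgl')
  postMessage(gl ? 'WebGL available in this worker' : 'no WebGL context')
} else {
  postMessage('OffscreenCanvas not supported')
}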

@woudsma
woudsma commented Aug 20, 2018

It's too bad that the OffscreenCanvas API isn't available in most browsers yet, or anytime soon.
Loading a small MobileNetV2 model (~1MB) converted with tensorflowjs_converter and running predict adds noticeable UI lag when using a single thread.

@sandipdeveloper @oeway This is my very hacky solution so far. (I'm using React so don't mind the WorkerProxy workaround). The model should work inside a Web Worker by setting tf.setBackend('cpu').

Getting a prediction using a MobileNetV2_0.50_224 model takes ~18 seconds on my MacBook Pro
(compared to ~50ms using the default webgl backend). The UI lag is gone, though.
Ideas/improvements are greatly appreciated!

Link to the steps I took to retrain the model.

ModelWorker.js

/* eslint-disable */
export default function ModelWorker() {
  // Some tfjs builds expect a global `window`; alias the worker scope to it.
  this.window = this
  importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core')
  this.tfc = this.tf // keep a reference to tfjs-core before the next import overwrites this.tf
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter')

  this.tf = _objectSpread(this.tf, this.tfc) // this.tf = { ...this.tf, ...this.tfc }
  tf.setBackend('cpu') // WebGL isn't available in this worker, so force the cpu backend

  onmessage = async (e) => {
    postMessage('(worker) Loading model')
    const { MODEL_URL, WEIGHTS_URL, IMAGE_SIZE } = e.data
    const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL)
    postMessage('(worker) Model loaded')
    const input = tf.zeros([1, IMAGE_SIZE, IMAGE_SIZE, 3])
    const t0 = performance.now()
    postMessage('(worker) Predicting..')
    await model.predict({ Placeholder: input })
    postMessage(`(worker) Prediction took ${(performance.now() - t0).toFixed(1)} ms`)
  }

  // ES6 polyfills
  function _defineProperty(obj, key, value) {
    return key in obj
      ? Object.defineProperty(obj, key, {
        value,
        enumerable: true,
        configurable: true,
        writable: true,
      })
      : obj[key] = value
  }

  function _objectSpread(target) {
    for (let i = 1; i < arguments.length; i += 1) {
      const source = arguments[i] != null ? arguments[i] : {}
      let ownKeys = Object.keys(source)
      if (typeof Object.getOwnPropertySymbols === 'function') {
        ownKeys = ownKeys.concat(Object
          .getOwnPropertySymbols(source)
          .filter(sym => Object.getOwnPropertyDescriptor(source, sym).enumerable))
      }
      ownKeys.forEach(key => _defineProperty(target, key, source[key]))
    }
    return target
  }
}

WorkerProxy.js

export default class WorkerProxy {
  constructor(worker) {
    const code = worker.toString()
    const src = code.substring(code.indexOf('{') + 1, code.lastIndexOf('}'))
    const blob = new Blob([src], { type: 'application/javascript' })
    return new Worker(URL.createObjectURL(blob))
  }
}

(This should be easier with a future version of react-scripts.)

SomeComponent.js

import WorkerProxy from './WorkerProxy'
import ModelWorker from './ModelWorker'

const ASSETS_URL = `${window.location.origin}/assets`
const MODEL_URL = `${ASSETS_URL}/model/tensorflowjs_model.pb`
const WEIGHTS_URL = `${ASSETS_URL}/model/weights_manifest.json`
const LABELS_URL = `${ASSETS_URL}/model/labels.json`
const IMAGE_SIZE = 224

if (!!window.Worker) {
  const worker = new WorkerProxy(ModelWorker)
  worker.addEventListener('message', e => console.log(e.data))
  worker.postMessage({ MODEL_URL, WEIGHTS_URL, IMAGE_SIZE })
  // Load labels, etc.
}

...

It would be nice if there were a version of something like webgl-worker that could be used with TensorFlow.js.

As adoption of the OffscreenCanvas API will take time (longer than I can afford), all suggestions on possible workarounds are very welcome!

@nsthorat nsthorat added type:feature New feature or request P3 labels Oct 24, 2018
@wtfil
wtfil commented Nov 29, 2018

Prepare yourself!
This is not for sensitive people. I added the following code BEFORE any import from tensorflow inside the worker.js file, just to make tfjs "think" it is running in a normal environment and to prevent runtime errors (btw, I am using webpack to bundle worker.js).
Also, this will only work in browsers with OffscreenCanvas (Chrome, or Firefox behind a flag).

if (typeof OffscreenCanvas !== 'undefined') {
    self.document = {
        createElement: () => {
            return new OffscreenCanvas(640, 480);
        }
    };
    self.window = {
        screen: {
            width: 640,
            height: 480
        }
    }
    self.HTMLVideoElement = function() {}
    self.HTMLImageElement = function() {}
    self.HTMLCanvasElement = function() {}
}

import * as tfc from '@tensorflow/tfjs-core';
import * as tf from '@tensorflow/tfjs';

console.log('backend is %s', tf.getBackend());

You will probably need to add more "polyfills" to make it work for your use case.

The code above should be used ONLY as a POC; please do not ship something like this to production :)
I'm currently working on a PR with real feature detection.

@MariasStory

OffscreenCanvas is supported in Chrome 71.0.3578.98
https://devnook.github.io/OffscreenCanvasDemo/index.html

@mizchi
mizchi commented Feb 10, 2019

I tried tfjs with the #102 (comment) polyfill.

Basic TensorFlow.js usage works correctly, but posenet does not.

// ...polyfill
import * as posenet from "@tensorflow-models/posenet";
const outputStride = 16;
const imageScaleFactor = 0.5;
const flipHorizontal = true;

self.onmessage = async ev => {
  const bitmap = ev.data.bitmap;
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext("2d");
  ctx.drawImage(bitmap, 0, 0);

  const net = await posenet.load();
  const pose = await net.estimateSinglePose(
    canvas,
    imageScaleFactor,
    flipHorizontal,
    outputStride
  );
  console.log(pose);
};

I found this.

https://github.com/tensorflow/tfjs-core/blob/3b05a4f34193da2c6b3c86370df850c2860d5b72/src/kernels/backend_webgl.ts#L218-L221

HTMLVideoElement is not a transferable object, so I have to send the data as an ImageBitmap instead.

  const offscreen = new OffscreenCanvas(video.videoWidth, video.videoHeight);
  offscreen.width = video.videoWidth;
  offscreen.height = video.videoHeight;
  const ctx = offscreen.getContext("2d");
  ctx.drawImage(video, 0, 0);
  const bitmap = offscreen.transferToImageBitmap();

  worker.postMessage(
    {
      bitmap
    },
    [bitmap]
  );

To make this work, I think tfjs needs OffscreenCanvas or ImageBitmap support.

@mizchi
mizchi commented Feb 10, 2019

This workaround works with posenet (very slowly):

if (typeof OffscreenCanvas !== "undefined") {
  self.document = {
    readyState: "complete",
    createElement: () => {
      return new OffscreenCanvas(640, 480);
    }
  };

  self.window = {
    screen: {
      width: 640,
      height: 480
    }
  };
  self.HTMLVideoElement = OffscreenCanvas;
  self.HTMLImageElement = function() {};
  class CanvasMock {
    getContext() {
      return new OffscreenCanvas(0, 0);
    }
  }
  // @ts-ignore
  self.HTMLCanvasElement = CanvasMock;
}

import * as _tfc from "@tensorflow/tfjs-core";
import * as _tf from "@tensorflow/tfjs";

@JSmith01

Additionally, I was unable to load a model from a JSON file in the worker. The following workaround gets TensorFlow.js up and running:

if (typeof OffscreenCanvas !== 'undefined') {
    self.document = {
        createElement: () => {
            return new OffscreenCanvas(640, 480);
        }
    };
    self.window = self;
    self.screen = {
        width: 640,
        height: 480
    };
    self.HTMLVideoElement = function() {};
    self.HTMLImageElement = function() {};
    self.HTMLCanvasElement = OffscreenCanvas;
}

tfjs internally tries to bind fetch() to the window context, and this results in an error if you define window as a plain object. Also note that, according to this article, OffscreenCanvas is very similar to the usual HTMLCanvasElement and has the same getContext() method, so it's better to polyfill TF as in the code above.
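
To illustrate the fetch() point (this shows the failure mode; it is not the exact tfjs source):

// With `self.window = {}`, window.fetch is undefined, so binding it throws:
//   TypeError: Cannot read property 'bind' of undefined
// With `self.window = self`, the worker's real fetch is found and binding works:
self.window = self
const boundFetch = window.fetch.bind(window) // roughly what tfjs does internally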

Hope this helps someone trying to run TF inside a worker.

@zemlyansky

Any idea how to make the latest tfjs work in a web worker without OffscreenCanvas support (just using the cpu backend)? tf.min.js throws a "No backend found in registry" error.

@davlhd
Contributor
davlhd commented Jun 14, 2019

I got it to work in Chrome with OffscreenCanvas, but not in Safari with only the CPU.

this.window = this
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js');
tf.setBackend('cpu');

gives me Error: Backend name 'cpu' not found in registry when I do inference. Any ideas?

Edit: it's because the CPU backend uses the canvas too. I thought it might use something else.

@davlhd
Contributor
davlhd commented Jun 25, 2019

Is there any way to run it completely without a canvas, even if it's slower?

@BorisChumichev

@davlhd, starting from v1.2.3 it's possible to run tfjs in Safari and iOS Safari within a web worker with the CPU backend.

Looks like this fix made it possible:
tensorflow/tfjs-core@9730187
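
With that fix, a CPU-only worker can be as simple as this sketch (the model URL is a placeholder):

// worker.js - CPU-only inference sketch for tfjs >= 1.2.3 (runs in Safari workers).
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.2.3/dist/tf.min.js')

onmessage = async (e) => {
  tf.setBackend('cpu') // no canvas or WebGL needed from here on
  const model = await tf.loadGraphModel(e.data.modelUrl) // placeholder URL
  const output = model.predict(tf.zeros([1, 224, 224, 3]))
  postMessage(await output.data())
}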

@rthadur rthadur closed this as completed Aug 7, 2019
nsthorat pushed a commit that referenced this issue Aug 19, 2019
Implement the 4 new ops introduced recently.

@supermoos

Was this ever implemented? I can't seem to find any examples of posenet running in a worker.

@andytwoods

Is this what people are after? https://github.com/mizchi/posenet-worker

@supermoos

@andytwoods It definitely looks like it. However, the demo doesn't display any other graphics/animation running on the main thread at 60 fps while it's doing the posenet work, so it's hard to judge from the demo alone. I'll take a look at getting it running locally and modify it to test whether this could work, when I have time :-)

@supermoos
supermoos commented Feb 19, 2020

@andytwoods I made a quick test locally, but the posenet estimateSinglePose() function still blocks the main thread, at least somewhat, so it doesn't seem to be running fully in a worker, unless I'm misunderstanding the purpose of running it in a worker. Try uncommenting https://github.com/mizchi/posenet-worker/blob/master/src/worker.ts#L28-L30 and the FPS on the main thread goes up.

@andytwoods
andytwoods commented Feb 21, 2020

@supermoos I have two three.js animations, one in a worker and one not, together with that 'not quite there' threaded posenet solution. When I stress the main thread (via random-number generation), you can see the posenet video struggling too -- BUT I think this is because the posenet thread must be spoon-fed the video feed, which is affected by stressing the main thread.

In a way, this gives me confidence that I can indeed have 60 FPS video whilst having the heavy lifting done elsewhere.

Apologies for the terrible typescript+js hackup here!: https://github.com/andytwoods/posenet-worker

youtube vid

three in threads here: https://threejs.org/examples/#webgl_worker_offscreencanvas

@supermoos

@andytwoods Sorry for the late reply, I was out on vacation. From what I gather from your demo, you're essentially flipping the process: instead of moving posenet to a worker, you move the three.js render to a worker? That seems to be working in your video :-) It's not ideal, since UI/DOM elements can't be moved to a worker like this, but I suppose it could work for some scenarios :-) Did you make any progress on moving posenet to a proper worker?

@andytwoods

We've got posenet working in a web worker, with the major caveat of needing to feed it the webcam feed one frame at a time from the main process (achieved here: https://github.com/mizchi/posenet-worker). I am fairly confident you cannot access the webcam from a web worker (please prove me wrong!).

We then pass the results of posenet to another web worker containing the three.js models.

So, any delay in the main thread leads to delays in the other threads.

@supermoos

Are you sure it's the feeding of the webcam to the worker that's causing the jank? If you disable the posenet work in the worker and simply feed it the webcam feed without doing anything, the jank goes away, leading me to believe posenet is not fully running in a worker.

@penebrain
penebrain commented May 27, 2020

You can send the data as an array from main.js to worker.js.

in main.js

ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
var myImageData = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

// send data to the worker (`worker` is the Worker instance created in main.js)
worker.postMessage({
  "data": myImageData,
  "width": canvas.width,
  "height": canvas.height
});

in worker.js

importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.4.0/dist/tf.min.js");

var model;
var score;


//load tf model
(async() => {
    tf.loadGraphModel("https://example.com/model.json").then(mo => model = mo)
})()

async function predict(img) {
    model.executeAsync(img).then(res => {
        res[4].data().then(data => score = data[0])
    }).then(
        await new Promise(r => setTimeout(r, 3000))
    ).then(
        //send score to main.js
        postMessage(score)
    )
}
 {

    //create ImageData object that can use in tfjs
    const image = new ImageData(img.data.data, img.data.width, img.data.height);

    
    predict(tf.tidy(() => {
        return tf.browser.fromPixels(image).toFloat().expandDims()
    }))
}

Hope this helps.

@Deamoner
Deamoner commented Oct 8, 2020

Can validate this does work. I couldn't get wasm to work, although I do not have a GPU, so I think that's expected. The speed issue seems to be twofold. Transferring camera image data needs to be done through ImageData, and that getImageData call is heavy, taking ~200 ms. Additionally, the posenet model isn't sped up; it simply doesn't block, which has some value, but the large transfer latency overhead makes this ineffective as a speed improvement. There might be some path to streaming the data directly, or improvements in webcam access from a web worker. For real-time use, the latest BlazePose should give good enough inference time, but the latency required for the conversion will need a fix to make the offloading usable.
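
One way to shave off part of that cost (a sketch, assuming the frame is already drawn into a canvas): the getImageData readback itself stays expensive, but listing the pixel buffer as a transferable at least avoids cloning it on postMessage.

// main.js - send a frame to the worker without cloning the pixel buffer.
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
worker.postMessage(
  {data: frame.data, width: frame.width, height: frame.height},
  [frame.data.buffer] // transfer, don't copy
);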

@Malikrehman00107

@itxnaeem007
