Running in a web worker #102
Offscreen canvas is definitely something we want to support, we haven't prioritized it because it's still experimental. If you want to send us a PR, we'd happily accept it. Just make sure you do feature testing inside |
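A feature test along those lines might look like the following sketch. `pickWorkerBackend` is a hypothetical helper (not tfjs API), shown only to illustrate the kind of feature detection a PR would need:

```javascript
// Hypothetical helper: decide which tfjs backend a worker should request.
// Falls back to 'cpu' when OffscreenCanvas (or a WebGL context on it) is
// unavailable, as it is in most worker contexts at the time of this thread.
function pickWorkerBackend(scope) {
  if (typeof scope.OffscreenCanvas !== 'undefined') {
    const canvas = new scope.OffscreenCanvas(1, 1);
    const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
    if (gl) return 'webgl';
  }
  return 'cpu';
}
```

In a worker you would then call something like `tf.setBackend(pickWorkerBackend(self))` before loading a model.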
If you mean simply training a model in a web worker, that seems to be possible already. |
Is it possible to run the prediction of a model in a web worker? Or is there a way to predict without freezing the browser UI? GIFs and other animations on the page remain still until the prediction is completed. Is there a workaround? @prijindal is only training possible inside a web worker? |
If the page is freezing because of WebGL, you won't really get much from a web worker. Have you tried sprinkling |
@giorat Yes, it is possible to do prediction also inside a web worker. |
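For anyone wiring this up, the main-thread side can be as small as packaging the input into something structured-clone-friendly. A sketch (the names here are made up for illustration; the worker side would rebuild the tensor with `tf.tensor(data, shape)` and call `model.predict` off the UI thread):

```javascript
// Hypothetical: package model input for postMessage. Typed arrays
// structured-clone cleanly, and the shape travels alongside so the worker
// can rebuild a tensor of the right dimensions.
function buildPredictMessage(data, shape) {
  const expected = shape.reduce((a, b) => a * b, 1);
  if (data.length !== expected) {
    throw new Error(`data length ${data.length} does not match shape [${shape}]`);
  }
  return { data: Float32Array.from(data), shape };
}

// Main thread (browser only; 'predict-worker.js' is a hypothetical file):
// const worker = new Worker('predict-worker.js');
// worker.onmessage = (e) => console.log('prediction:', e.data);
// worker.postMessage(buildPredictMessage(pixels, [1, 224, 224, 3]));
```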
@giorat and @prijindal, tfjs does indeed work in web workers as of today, but unfortunately only using the cpu backend. Supporting the webgl backend inside web workers seems like a huge advantage if it can be managed with the OffscreenCanvas API. As of today, it seems to me that there are two options for running tfjs with the
Have there been any recent movements or a road map to add support for the WebGL backend in web workers? |
I think it's definitely necessary to support web workers; having cpu and webgl computation running in web workers would be awesome.
And I get
|
Hi all, is there sample code to use tfjs in a web worker? I have tried something like this, and it does not work
|
It looks like the OffscreenCanvas API will be supported by default in the upcoming Chrome 69 release and beyond. Once this occurs, what would be the necessary steps to get WebGL support in a web worker with tfjs? |
It's too bad that the OffscreenCanvas API isn't available yet / soon in most browsers. @sandipdeveloper @oeway This is my workaround for getting a prediction inside a worker. Link to the steps I took to retrain the model.

ModelWorker.js

/* eslint-disable */
export default function ModelWorker() {
this.window = this
importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core')
this.tfc = this.tf
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter')
this.tf = _objectSpread(this.tf, this.tfc) // this.tf = { ...this.tf, ...this.tfc }
tf.setBackend('cpu')
onmessage = async (e) => {
postMessage('(worker) Loading model')
const { MODEL_URL, WEIGHTS_URL, IMAGE_SIZE } = e.data
const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL)
postMessage('(worker) Model loaded')
const input = tf.zeros([1, IMAGE_SIZE, IMAGE_SIZE, 3])
const t0 = performance.now()
postMessage('(worker) Predicting..')
await model.predict({ Placeholder: input })
postMessage(`(worker) Prediction took ${(performance.now() - t0).toFixed(1)} ms`)
}
// ES6 polyfills
function _defineProperty(obj, key, value) {
return key in obj
? Object.defineProperty(obj, key, {
value,
enumerable: true,
configurable: true,
writable: true,
})
: obj[key] = value
}
function _objectSpread(target) {
for (let i = 1; i < arguments.length; i += 1) {
const source = arguments[i] != null ? arguments[i] : {}
let ownKeys = Object.keys(source)
if (typeof Object.getOwnPropertySymbols === 'function') {
ownKeys = ownKeys.concat(Object
.getOwnPropertySymbols(source)
.filter(sym => Object.getOwnPropertyDescriptor(source, sym).enumerable))
}
ownKeys.forEach(key => _defineProperty(target, key, source[key]))
}
return target
}
}

WorkerProxy.js

export default class WorkerProxy {
constructor(worker) {
const code = worker.toString()
const src = code.substring(code.indexOf('{') + 1, code.lastIndexOf('}'))
const blob = new Blob([src], { type: 'application/javascript' })
return new Worker(URL.createObjectURL(blob))
}
}

(this should be easier with a future version of …)

SomeComponent.js

import WorkerProxy from './WorkerProxy'
import ModelWorker from './ModelWorker'
const ASSETS_URL = `${window.location.origin}/assets`
const MODEL_URL = `${ASSETS_URL}/model/tensorflowjs_model.pb`
const WEIGHTS_URL = `${ASSETS_URL}/model/weights_manifest.json`
const LABELS_URL = `${ASSETS_URL}/model/labels.json`
const IMAGE_SIZE = 224
if (!!window.Worker) {
const worker = new WorkerProxy(ModelWorker)
worker.addEventListener('message', e => console.log(e.data))
worker.postMessage({ MODEL_URL, WEIGHTS_URL, IMAGE_SIZE })
// Load labels, etc.
}
...

It would be nice if there were a version of something like webgl-worker that could be used with TensorFlow.js. As adoption of the OffscreenCanvas API will take time (longer than I can afford), all suggestions for possible workarounds are very welcome! |
Prepare yourself!

if (typeof OffscreenCanvas !== 'undefined') {
self.document = {
createElement: () => {
return new OffscreenCanvas(640, 480);
}
};
self.window = {
screen: {
width: 640,
height: 480
}
}
self.HTMLVideoElement = function() {}
self.HTMLImageElement = function() {}
self.HTMLCanvasElement = function() {}
}
import * as tfc from '@tensorflow/tfjs-core';
import * as tf from '@tensorflow/tfjs';
console.log('backend is %s', tf.getBackend());

You will probably need to add more "polyfills" to make it work for your use case. The code above should be used ONLY as a POC; please do not ship something like this to production :) |
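One pitfall with polyfills written this way: ES module `import`s are hoisted, so inline polyfill code placed above an `import` may actually run after tfjs has been evaluated (loading via `importScripts` does not have this issue). A sketch of the same polyfill as a function that can be called explicitly before loading tfjs; the sizes and names mirror the snippet above, and this is still POC-level hackery, as the author says:

```javascript
// Install minimal DOM shims on a worker scope so tfjs's environment checks
// find something canvas-like. Returns false if OffscreenCanvas is missing.
function installDomPolyfills(scope) {
  if (typeof scope.OffscreenCanvas === 'undefined') return false;
  scope.document = {
    createElement: () => new scope.OffscreenCanvas(640, 480),
  };
  scope.window = scope;
  scope.screen = { width: 640, height: 480 };
  scope.HTMLVideoElement = function () {};
  scope.HTMLImageElement = function () {};
  scope.HTMLCanvasElement = scope.OffscreenCanvas;
  return true;
}
```

Calling `installDomPolyfills(self)` at the top of the worker, before any dynamic `importScripts` of tfjs, makes the ordering explicit.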
OffscreenCanvas is supported in Chrome 71.0.3578.98 |
I tried tfjs with the #102 (comment) polyfill. Basic tensorflow.js usage works correctly, but posenet does not.

// ...polyfill
import * as posenet from '@tensorflow-models/posenet';
const outputStride = 16;
const imageScaleFactor = 0.5;
const flipHorizontal = true;
self.onmessage = async ev => {
const bitmap = ev.data.bitmap;
const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
const ctx = canvas.getContext("2d");
ctx.drawImage(bitmap, 0, 0);
const net = await posenet.load();
const pose = await net.estimateSinglePose(
canvas,
imageScaleFactor,
flipHorizontal,
outputStride
);
console.log(pose);
};

I found this: HTMLVideoElement is not a transferable object for now, so I have to send the data as an ImageBitmap.

const offscreen = new OffscreenCanvas(video.videoWidth, video.videoHeight);
offscreen.width = video.videoWidth;
offscreen.height = video.videoHeight;
const ctx = offscreen.getContext("2d");
ctx.drawImage(video, 0, 0);
const bitmap = offscreen.transferToImageBitmap();
worker.postMessage(
{
bitmap
},
[bitmap]
);

For this to work, I think tfjs needs OffscreenCanvas or ImageBitmap support. |
This workaround works with posenet (very slowly):

if (typeof OffscreenCanvas !== "undefined") {
self.document = {
readyState: "complete",
createElement: () => {
return new OffscreenCanvas(640, 480);
}
};
self.window = {
screen: {
width: 640,
height: 480
}
};
self.HTMLVideoElement = OffscreenCanvas;
self.HTMLImageElement = function() {};
class CanvasMock {
getContext() {
return new OffscreenCanvas(0, 0);
}
}
// @ts-ignore
self.HTMLCanvasElement = CanvasMock;
}
import * as _tfc from "@tensorflow/tfjs-core";
import * as _tf from "@tensorflow/tfjs"; |
Additionally, I was unable to load a model from a json file in the worker. The following workaround gets TensorFlow.js up and running:

if (typeof OffscreenCanvas !== 'undefined') {
self.document = {
createElement: () => {
return new OffscreenCanvas(640, 480);
}
};
self.window = self;
self.screen = {
width: 640,
height: 480
};
self.HTMLVideoElement = function() {};
self.HTMLImageElement = function() {};
self.HTMLCanvasElement = OffscreenCanvas;
}
Hope this helps someone trying to run TF inside a worker. |
Any idea how to make the latest tfjs work in a web worker without OffscreenCanvas support (just use cpu)? |
I got it to work in Chrome with OffscreenCanvas, but not in Safari with only the CPU. It gives me

Edit: it's because the CPU backend uses the canvas too. I thought maybe it used something else. |
Is there any way to run it completely without a canvas, even if it's slower? |
@davlhd, starting from v1.2.3 it's possible to run tfjs in Safari and iOS Safari within a webworker with CPU backend. Looks like this fix made it possible: |
Was this ever implemented? I can't seem to find any examples of posenet running in a worker. |
Is this what people are after? https://github.com/mizchi/posenet-worker |
@andytwoods Definitely looks like it, however the demo doesn't display any other graphics / animation running on the main thread at 60 fps while it's doing the posenet things so it's hard to judge just by the demo. I'll try and take a look at getting it running locally and modify it to test if this could work, when I have time :-) |
@andytwoods Made a quick test locally, but the posenet estimateSinglePose() function still blocks the main thread somewhat, so it doesn't seem to be running fully in a worker, unless I'm misunderstanding the purpose of running it there. Try uncommenting https://github.com/mizchi/posenet-worker/blob/master/src/worker.ts#L28-L30 and the FPS goes up on the main thread. |
@supermoos I have two three.js animations, one in a worker and one not, alongside that 'not quite there' posenet threaded solution. When I stress the main thread (via random number generation), you can see the posenet video struggling too. BUT, I think this is because the posenet thread must be spoon-fed the video feed, which would be affected by stressing the main thread. In a way, this gives me confidence that I can indeed have 60 FPS video whilst having heavy lifting done elsewhere. Apologies for the terrible typescript+js hackup here!: https://github.com/andytwoods/posenet-worker three.js in threads here: https://threejs.org/examples/#webgl_worker_offscreencanvas |
@andytwoods Sorry for the late reply, I was out on vacation. From what I gather from your demo, you're essentially flipping the process: instead of moving posenet to a worker, you move the three.js renderer to a worker? Which seems to be working in your video? :-) It's not ideal, since UI/DOM elements can't be moved to a worker like this, but it could work for some scenarios, I suppose :-) Did you make any progress on moving posenet to a proper worker? |
We've got posenet working in a web worker, with the major caveat of needing to feed it the webcam feed a frame at a time from the main process (achieved here: https://github.com/mizchi/posenet-worker). I am fairly confident you cannot access the webcam from a web worker (please prove me wrong!). We then pass the results of posenet to another web worker containing three.js models. So, any delay in the main thread leads to delays in the other threads. |
Are you sure it's the feeding of the webcam to the worker that's causing jank? If you disable the posenet stuff in the worker, and simply just feed it the webcam feed without doing anything the jank goes away, leading me to believe posenet is not fully running in a worker? |
you can send data as an array from main.js to worker.js
hope this helps |
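When sending arrays per frame, the copies themselves can become a cost; the array's underlying buffer can be listed as a transferable so it is moved rather than cloned. A minimal sketch (`postFrame` is a hypothetical helper; note the source array becomes unusable on the main thread after transfer):

```javascript
// Post pixel data zero-copy: listing pixels.buffer in the transfer list
// moves the buffer into the worker instead of structured-cloning it.
function postFrame(worker, pixels, width, height) {
  worker.postMessage({ pixels, width, height }, [pixels.buffer]);
}
```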
I can validate this does work. I couldn't get wasm to work, although I do not have a GPU, so I think that's expected. The speed issue seems to be twofold. First, transferring camera picture data has to go through image data, and that getImageData call is heavy, taking ~200 ms. Second, the posenet model itself isn't sped up; merely not blocking has some value, but the large transfer latency overhead makes it ineffective as a speed improvement. There might be some path to streaming data directly, or improvements in webcam access from a web worker. For real-time use, the latest BlazePose should give good enough inference time, but the latency from conversion will need a fix to make the offloading usable. |
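On the getImageData cost mentioned above: `createImageBitmap(video)` may be a cheaper path, since the resulting ImageBitmap is transferable and avoids the readback into a typed array. A sketch; the bitmap factory is injectable here only so the logic can be exercised outside a browser, and in a real page you would call it as `sendFrame(worker, video)`:

```javascript
// Grab a frame from a video element as a transferable ImageBitmap and move
// it to the worker, skipping the getImageData readback entirely.
async function sendFrame(worker, source, createBitmap) {
  const make = createBitmap || ((s) => createImageBitmap(s));
  const bitmap = await make(source);
  worker.postMessage({ bitmap }, [bitmap]);
  return bitmap;
}
```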
Are there any plans to make tfjs able to run on a worker using offscreen canvas? Is it already possible? (sorry if I'm still stuck in the deeplearn.js days)
Would it be possible to do so now by manually creating a GPGPUContext and using it in the backend somehow?