Glasses virtual try-on with Sketchfab + viewer API

Hi,

Congratulations to the Sketchfab team for the amazing work you are doing here.
I develop computer vision solutions for the web through my company, WebAR.rocks.
I have developed a proof-of-concept web app that uses the Sketchfab viewer for virtual try-on:

But I have run into 2 difficulties:

  1. To update the pose of the glasses, I use the setMatrix function (Viewer API - Functions - Sketchfab), but it is slow: the callback takes a long time to fire, and until it does the pose is not updated, so the movement of the glasses is jerky. Is it possible to update the matrix directly by reference, or is there a faster way to update the pose of an object in real time? (A rough sketch of my update loop follows this list.)
  2. To hide the arms of the glasses behind the head, I need to add an occluder 3D object, i.e. a 3D object with the shape of the face that is rendered into the depth buffer but not into the color buffer. The goal is to reproduce the occlusion by the user's head (in this demo for example there is an occluder: WebAR.rocks.face glasses VTO demo). I did not manage to create such a material with Sketchfab.
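
For reference on the first point, this is roughly what my update loop looks like (a minimal sketch: the model UID, the node instance ID and the tracker callback are placeholders, and I am assuming the setMatrix(instanceID, matrix, callback) argument order):

```js
// Minimal wiring, assuming the Sketchfab Viewer API script is already loaded on the page.
const iframe = document.getElementById('api-frame');
const client = new Sketchfab(iframe);

let glassesInstanceID; // placeholder: would be resolved via api.getNodeMap()

client.init('MODEL_UID', {
  success: (api) => {
    api.start();
    api.addEventListener('viewerready', () => {
      // Called by the face tracker for every video frame with a 4x4 pose
      // matrix flattened into an array of 16 numbers.
      const onTrackerFrame = (poseMatrix) => {
        // setMatrix is asynchronous: the new pose only shows up once the
        // callback has fired, which is what makes the motion jerky.
        api.setMatrix(glassesInstanceID, poseMatrix, (err) => {
          if (err) console.error(err);
        });
      };
      // ... onTrackerFrame is then plugged into the WebAR.rocks.face per-frame callback
    });
  },
  error: () => console.error('Viewer init failed'),
});
```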

I think it would be great to be able to use the Sketchfab viewer for virtual try-on. Many clients need very high-quality rendering, especially for jewelry and eyewear, and your viewer is amazing. I regularly get feedback from clients asking how they could have both virtual try-on and Sketchfab's rendering quality.

@xavier.bourry this looks really cool. I didn’t go through all of your source code, but I assume you set up a separate animation loop to send your matrix updates to Sketchfab. I do a similar thing for my projects, but for custom annotations, and I run into the same trouble: the annotations lag behind the motion of the 3D model.
A possible solution would be to somehow get access to the Sketchfab animation loop, but that would require a change to the API.
Regarding your second question, there’s no occluder/matte/passthrough material. But you could fake it by hiding the legs of the glasses based on the orientation of the face: if the user looks left, hide the left leg of the frame, and vice versa. It’s a bit crude, but it could be quite effective; a rough sketch follows below.
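
Something like this, roughly (a sketch only: the `leg_L`/`leg_R` node names, the yaw sign convention and the threshold are assumptions about how the glasses model and the face tracker are set up):

```js
let leftLegID, rightLegID;

// Look up the instance IDs of the two leg nodes once the viewer is ready.
api.getNodeMap((err, nodes) => {
  if (err) return console.error(err);
  Object.values(nodes).forEach((node) => {
    if (node.name === 'leg_L') leftLegID = node.instanceID;
    if (node.name === 'leg_R') rightLegID = node.instanceID;
  });
});

// Call every frame with the head yaw (in radians) from the face tracker.
function updateLegVisibility(yaw) {
  const THRESHOLD = 0.35; // ~20 degrees: hide a leg once the head has turned enough
  if (yaw > THRESHOLD) {
    api.hide(leftLegID);
    api.show(rightLegID);
  } else if (yaw < -THRESHOLD) {
    api.hide(rightLegID);
    api.show(leftLegID);
  } else {
    api.show(leftLegID);
    api.show(rightLegID);
  }
}
```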

You are right, there is a separate animation loop handled internally by WebAR.rocks.face to send the matrix updates. I have the impression that setMatrix sends its value to a web worker, which may explain the delay. Maybe it would be great to have a synchronous mode.

The problem with hiding the glasses legs is that the models need to be cut up (the leg meshes have to be separate objects). Glasses sellers often have huge collections, with hundreds of models, and if they want to change the occluder settings they have to re-cut all of their glasses meshes.

And in some other VTO cases, cutting the mesh and hiding the split parts depending on the view angle won’t work properly. I am thinking of wristwatch/bracelet virtual try-on: the hidden part of the wristwatch varies continuously with the wrist rotation, whereas occluding it with a cylindrical occluder mesh would be really easy.
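
Just to illustrate what I mean by an occluder material: in an engine like three.js it is basically a material with color writes disabled (this is only a sketch of the concept, not something I found a way to do through the Sketchfab viewer; the cylinder dimensions are arbitrary):

```js
import * as THREE from 'three';

// An occluder writes to the depth buffer but not to the color buffer,
// so it hides whatever sits behind it without being visible itself.
const occluderMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });

// Rough cylindrical proxy for the wrist (dimensions in meters, arbitrary here).
const wristOccluder = new THREE.Mesh(
  new THREE.CylinderGeometry(0.035, 0.04, 0.2, 32),
  occluderMaterial
);
// Render the occluder before the watch so its depth is already in the buffer.
wristOccluder.renderOrder = -1;
```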