Glasses virtual try-on with Sketchfab + viewer API

Hi,

Congratulations to the Sketchfab team for the amazing work you are doing here.
I develop computer vision solutions for the web through my company, WebAR.rocks.
I have developed a proof-of-concept webapp that uses the Sketchfab viewer for virtual try-on:

But I have run into two difficulties:

  1. To update the pose of the glasses, I use the setMatrix function (Viewer API - Functions - Sketchfab). But it is slow: the callback takes a long time to fire, and during this time the pose is not updated, so the movement of the glasses is jerky. Is it possible to update the matrix directly by reference? Or is there a faster way to update the pose of an object in real time?
  2. To hide the legs (temples) of the glasses, I need to add an occluder 3D object, i.e. a 3D object with the shape of the face, rendered with a material that writes into the depth buffer but not into the color buffer. The goal is to reproduce the occlusion effect of the user's head (in this demo for example there is an occluder: WebAR.rocks.face glasses VTO demo). I have not managed to create such a material with Sketchfab.
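For context, the update loop behind point 1 looks roughly like this. This is only a sketch: `glassesID` and `getFacePose` are hypothetical names, and the exact `setMatrix` signature should be checked against the Viewer API docs.

```javascript
// Build a column-major 4x4 translation matrix (rotation/scale omitted
// for brevity) in the flat 16-element array form the Viewer API expects.
function buildTranslationMatrix(x, y, z) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    x, y, z, 1
  ];
}

// Hypothetical per-frame update loop; `api` is the Viewer API instance,
// `glassesID` the instance ID of the glasses node, and `getFacePose`
// a callback returning the tracked head position (all assumed names).
function startPoseLoop(api, glassesID, getFacePose) {
  function tick() {
    const p = getFacePose(); // e.g. from WebAR.rocks.face
    const m = buildTranslationMatrix(p.x, p.y, p.z);
    // setMatrix is asynchronous: its callback fires several frames
    // later, which is the latency discussed in this thread.
    api.setMatrix(glassesID, m, function () {});
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```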

I think it would be great to use the Sketchfab viewer for virtual try-on. Many clients need very high-quality rendering, especially for jewelry and eyewear, and your viewer is amazing. I regularly get feedback from clients asking how they could have both virtual try-on and Sketchfab rendering quality.

@xavier.bourry this looks really cool. I didn’t go through all of your source code, but I assume you set up a separate animation loop to send your matrix updates to Sketchfab. I do a similar thing for my projects, but for custom annotations, and I run into the same trouble: the annotations lag behind the motion of the 3D model.
A possible solution would be to somehow hook into the Sketchfab animation loop, but that would require a change to the API.
Regarding your second question, there’s no occluder/matte/passthrough material. But you could fake it by hiding the legs of the glasses based on the orientation of the face. If you’re looking left, you could hide the left leg of the frame and vice versa. It’s a bit crude, but it could be quite effective.

You are right, there is a separate animation loop handled internally by WebAR.rocks.face to send the matrix updates. I have the impression that setMatrix sends its value to a web worker, which may explain the delay. Maybe it would be great to have a synchronous mode.

The problem with hiding the glasses legs is that the models need to be cut (the leg meshes have to be separated). Glasses sellers often have huge collections, with hundreds of models, and if they want to change the occluder settings they have to re-cut all their glasses meshes.

And in some other VTO cases, cutting the mesh and hiding the split parts depending on the view angle won’t work properly. I am thinking of wristwatch/bracelet virtual try-on: the hidden part of the wristwatch varies continuously with the wrist rotation, whereas occluding it with a cylindrical occluder mesh would be really easy.
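For reference, outside of Sketchfab (e.g. in a plain three.js scene) the depth-only occluder material requested here is just a material with color writes disabled. The helper below only returns the option object, so the three.js-specific part stays in the comment; the render-order value is an assumption.

```javascript
// Material options for a depth-only occluder: it writes to the depth
// buffer (hiding whatever is behind it) but never to the color buffer,
// so the camera feed stays visible where the occluder is.
function makeOccluderMaterialOptions() {
  return {
    colorWrite: false, // do not touch the color buffer
    depthWrite: true,  // do fill the depth buffer
    depthTest: true
  };
}

// Usage in three.js (not runnable here, shown for illustration):
//   const mat = new THREE.MeshBasicMaterial(makeOccluderMaterialOptions());
//   occluderMesh.material = mat;
//   occluderMesh.renderOrder = -1; // draw the occluder before the glasses
```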

Hello, very nice work.
However, I’m afraid you won’t like my answers.

  1. The way we send function calls from the API code to the viewer goes through the browser messaging layer (the viewer is in an iframe), and that is what produces the latency of several frames you observe.
    Unfortunately there is not much we can do to fix this without a complete overhaul of the API architecture. It has been discussed many times internally, but for now it’s not on the roadmap.

  2. You can’t write depth-only occluders through the API. That’s something we don’t use internally either. It is technically possible, however, but it can have some nasty implications for effects that use depth, like shadows, SSAO, TAA, etc… We have to think about it.
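The iframe round trip described in point 1 can be measured directly by timing the setMatrix acknowledgement. A sketch, assuming `api` is the Viewer API instance and the callback fires once the viewer has applied the matrix:

```javascript
// Time one setMatrix round trip through the iframe messaging layer.
// `onDone` receives the elapsed milliseconds; on a laggy connection
// this will span several frames, matching the jerkiness reported above.
function measureSetMatrixLatency(api, instanceID, matrix, onDone) {
  const t0 = performance.now();
  api.setMatrix(instanceID, matrix, function () {
    onDone(performance.now() - t0);
  });
}
```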


Hi Nehon,

Thank you for your detailed reply.

Regarding point 1, I understand your point. I would be glad to cooperate and update my VTO demo if your roadmap changes or if you release another API suited to real-time pose updates. I guess it also depends on what the virtual try-on use case could bring to Sketchfab in terms of business.

Regarding point 2, I think depth-only occluders are not a big deal even with post-processing effects (they don’t break shadows or TAA). And I think they could also enable some nice AR effects (for SLAM-based AR). For example, a 3D artist could model a car rim with the tire as a depth occluder, so that a user can try the rim on his own car in AR. He would scale and position the rim to match his car’s rim; then, as he walks around the car, the hidden parts of the rim stay hidden thanks to the depth occluder (I guess we can find even more relevant examples :smiley: ).

Hi Nehon,

Do you have any news regarding depth occluders?

We are working with your teams on VTO projects, and the only missing feature now is the possibility to render into the depth buffer only.

Thanks