Sketchfab Forum

Optimising scanned models/scenes

(Ben Kreunen) #1

Now that I'm fairly comfortable with my photogrammetry technique (I can identify when I'm doing something right or wrong :wink:) it's time for me to move on to preparing data for re-use. I've previously used Meshlab, Meshmixer and occasionally some Blender, mainly for decimating/cleaning meshes. Does anyone care to share some details of their workflow, or know of some good tutorials?

I'm currently using Instant Meshes for decimating and 3DS Max for UV/normal mapping, although some of the tutorials I've followed for UV mapping faces don't seem to work so well for scans with even polygon sizes... I'm pretty much still a noob at this sort of stuff.

(Ben Kreunen) #2

Yeah, I know, I hate questions like this too. Not being lazy though... this is what I've got so far:

  • Raw mesh to "low res" mesh with Instant Meshes (this app is
    soooo cool)

3DS Max 2015 (Gotta be some perks to working at a uni :wink:)

  • Low res mesh to "really low res" mesh using ProOptimizer.
  • This tool is good but it didn't like raw meshes.
  • Custom UV map
  • Render normal map to texture (default cage was too big, shrink 95%)
    Have to find out what I did wrong with the normal map... but close :wink:
    [edit: Render Heightmap into Alpha was the offending setting. Fixed now]

Photogrammetry app

  • Import really low res model back into photogrammetry app with UVs and
    create texture map.

There are a few small glitches left which I was expecting from the rendered preview but I'm pretty happy with this result as a first try. Anyone else care to share or offer variations to this?

Reality Capture - 3DS MAX test by UoM Digitisation Centre on Sketchfab

(Bart) #3

Ah so you chained these two tools together? Interesting :smile: I've been looking at Instant Meshes too, good to hear it's a useful tool. Do you feel there's more to explore in this process and should we do a tutorial about it eventually?

(Jvouillon) #4

Maybe a stupid question, but have you found a way to transform the texture from a 3D scan to make it fit the mesh from Instant Meshes?

(Ben Kreunen) #5

@jvouillon Workflow in basic terms...

  1. Create raw mesh
  2. Export and create low poly version (Instant Meshes)
  3. UV map low poly version
  4. Generate normal map
  5. Import low poly version back into photogrammetry app
  6. Generate texture map

You can skip 3 and 4, but UV mapping is not a strong point of photogrammetry apps.

(Ben Kreunen) #6

I found Instant Meshes via others who leave snippets of info in their model descriptions (tests by @nebulousflynn, @ivlpaleontology). Wonderfully useful even on my aging PC at home, and it has a command line as well. Might explore a drag-and-drop batch file for standard conversions.
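A drag-and-drop batch conversion along those lines could be sketched roughly like this. Note this is only a sketch under assumptions: the `InstantMeshes` binary name, the `-o`/`-f`/`-d` flags and the target face count are taken from the tool's command-line help as I understand it, so check `--help` on your own build before relying on it.

```shell
#!/bin/sh
# Hypothetical drag-and-drop wrapper for the Instant Meshes CLI.
# Binary name and flags (-o output, -f target face count, -d deterministic)
# are assumptions -- verify against `InstantMeshes --help` on your build.

TARGET_FACES=5000   # rough target face count; tune per model

# Build the command line for one input mesh (echoed so it can be inspected).
build_cmd() {
    in=$1
    out="${in%.*}_lowpoly.obj"
    printf 'InstantMeshes "%s" -o "%s" -f %s -d\n' "$in" "$out" "$TARGET_FACES"
}

# Files dropped onto the script arrive as positional arguments.
for f in "$@"; do
    echo "Converting: $f"
    build_cmd "$f"
    # eval "$(build_cmd "$f")"   # uncomment once InstantMeshes is on PATH
done
```

On Windows the same idea works as a `.bat` file, since Explorer passes dropped files as arguments there too.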

I think a tutorial on this would be invaluable. I've been stuck at the data production side of the workflow for a couple of years because I needed to understand that better before moving on to the end and working backwards which is what I normally do. There's a lot of talk about object based learning where I am but the current skill sets sit at either end of this part of the process... Production and presentation. May well be the same in other academic institutions trying to do a lot with a little.

I have a basic workflow now but I don't have enough knowledge to know whether the small glitches are an unavoidable part of the process based on the geometry of the model or something that could be fixed by changing some settings.

I'm also a bit old school... Just because the bandwidth can cope with the data doesn't mean we shouldn't optimise what we deliver to make the most of what we've got. That, and having models crash the browser on a mobile device is not much fun. High res is great if you're providing the data for download for research/re-use, but other than that, if you can make it smaller while looking just as good (if not better) for a reasonable effort, then why not?

(Jvouillon) #7

@uomdigitisation thanks! I see that I have still a lot to learn!

I played a bit with Meshlab and tried the texture to vertex filter. The idea was to transfer a texture from one model (from itSeez3D) to the vertices of the second one (from Instant Meshes), then use the vertex to texture filter to get the final texture. Unfortunately, the result is really poor, because I had to use very low poly models, otherwise Meshlab was crashing...

Did you encounter the same kind of crashing problems with Meshlab? Or is it my computer which is not able to handle the operations?

(Ben Kreunen) #8

You need to recreate the texture after optimising the model with Instant Meshes. I'm using PhotoScan or Reality Capture, both of which let you import the optimised mesh and then continue with the normal texture generation workflow. I'm not sure what you can do with itSeez3D, but transferring from a texture to vertex/face colours is only going to give you very low res colour, so you could just leave it at that rather than converting back to a texture.

You might have to look at optimising your mesh via some other application that will retain the texture mapping coordinates, unless it's possible to export the original images and camera positions from itSeez3D... but there are usually some trade-offs when an app produces the final result quickly: you may lose control over tweaking the result from the original data. Would be interesting to compare a photogrammetry capture from the same device.
