Sketchfab Forum

Photogrammetry for a Tiny World


(Tinyruin) #1

Hello everybody! We created an HTC Vive game for university, where you are a tiny human on top of a forest tree stump :evergreen_tree::herb:


As we needed to document our process anyway, I thought I'd post my devlogs here as well, documenting my journey from absolute beginner to advanced photogrammetry user. I'll share my process and learnings along the way. Hope you'll find some of it useful :blush:

(Tinyruin) #2

This was my first devlog when I started the project - it looks at what photogrammetry is and shows my first learnings with it. May be interesting for total beginners.

I spent the time since the project kick-off gathering as much information as I could about photogrammetry - reading articles, watching talks, and doing experiments.

Here is what my first scan looked like (interactive 3D view):

And here is what I'm at now:


  1. Basic Idea

“Photogrammetry is the art and science of using overlapping photographs to reconstruct three dimensional scenes or objects”

Photoscanning is often used in museums to digitize collections or by archaeologists to digitize excavation sites. In games it's used to scan whole environments and props (e.g. Star Wars Battlefront) or even humans (e.g. the new Metal Gear Solid).

We chose this technique because it creates a very "photoreal" effect. Combined with our idea of seeing the world through the eyes of a tiny human - and with the sense of scale/physical space VR gives you - we hope to achieve a fascinating effect.

Lukas and I also have experience in photography, so we both can bring that into the process.

  2. My process

The first step is always to photograph the object or environment I want to digitize from all sides that are useful for the game (i.e. you don't need to make a 360° scan of a rock cliff). Here is a shitty photo shot with my ancient phone:


Here I could bring in my experience from photographing textures for games. You want a high f-stop, low ISO, a tripod and diffuse light. If I want to shoot outside it needs to be cloudy or I need to stick to a shaded spot (changes in lighting will screw up the model).
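These settings trade off against each other via the standard exposure reciprocity rule. As a small illustration (not part of my actual workflow, just the math behind it), here is a Python sketch of how the required shutter time scales when you change the f-stop or ISO:

```python
def exposure_time(base_time, base_fstop, new_fstop, base_iso=100, new_iso=100):
    """Shutter time needed to keep the same exposure when changing
    aperture and/or ISO (reciprocity rule).

    Exposure ~ time * ISO / fstop^2, so:
    new_time = base_time * (new_fstop / base_fstop)**2 * (base_iso / new_iso)
    """
    return base_time * (new_fstop / base_fstop) ** 2 * (base_iso / new_iso)

# Stopping down from f/5.6 to f/11 at the same ISO needs ~3.86x the
# shutter time, i.e. (11 / 5.6)**2
t = exposure_time(1 / 30, 5.6, 11.0)
print(round(t * 30, 2))
```

This is why a tripod quickly becomes mandatory: every stop of aperture you gain costs you a doubling of the exposure time (unless you raise the ISO and accept the noise).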

The photographing usually takes anything between 10 minutes and 1 hour, depending on the object. For small objects I use a little turntable,
which speeds up the process a lot.

I won't go into deeper detail here, but I compiled a Google Doc with my learnings if you are curious.

A technique I recently tried to help the software with alignment is using simple cardboard markers. (The numbers themselves are not important; the software just needs an easily recognizable pattern.)

_Note: I later learned Agisoft has a menu to export markers for printing, which the software can easily recognize.
For hard-to-align things this can help a lot, but usually I was fine without them, too._

And here is a tree stump I scanned with this method:

Once I have photographed the object, I plug in the camera and copy the images onto my drive. First off I delete blurry or out-of-focus shots to prevent errors.
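If you have lots of shots, this culling can even be automated. A common heuristic is the variance of the Laplacian: sharp images have strong edges and thus high variance, blurry ones don't. Here is a toy pure-Python sketch of the metric on a grayscale image given as a list of pixel rows (a real pipeline would run something like OpenCV's Laplacian on the actual image files):

```python
def laplacian_variance(img):
    """Blur metric: variance of the 4-neighbour Laplacian of a grayscale
    image (list of rows of pixel values). Higher = sharper."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4    # hard edge -> high variance
blurry = [[0, 85, 170, 255]] * 4  # smooth ramp -> low variance
assert laplacian_variance(sharp) > laplacian_variance(blurry)
```

In practice you'd score every photo of a session, then review or drop the ones below some threshold that you tune for your camera and subject.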

Then I load the images into the tool we're using, called Agisoft Photoscan. We chose it because it is very accessible and widely used. It's relatively easy to find information or examples on it. Also the entry price is quite fair ^^

Sometimes I need to mask the images by hand like this, so that Agisoft ignores the unimportant objects in the background:


Note: There is some more info on this in the next devlog. We found that masks didn't improve the quality for our example object.

Then I go through the workflow of:


  • aligning photos

  • building a dense point cloud

  • building a mesh

  • removing floating faces etc.

  • creating the texture


This is mostly automated and I only need to do some tweaks between the steps. It can take veeeery long, depending on the complexity of the object.

Once I have the highpoly mesh with the unoptimized texture, I load it into "InstantMeshes" to create a lowpoly model. Next I load both the highpoly and lowpoly models into Blender. As the Agisoft texture is very unoptimized, I UV-unwrap the lowpoly model. Now I can bake the height and color information of the highpoly model onto the lowpoly one.
Note: Later I learned that Agisoft's UVs get pretty usable if you import the lowpoly mesh and project the texture onto it.
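For the curious: the normal-map side of this bake essentially boils down to turning height differences into surface normals. Here is a toy Python sketch of that underlying math (not the actual Blender bake, which works against the real geometry), converting a small height field into unit tangent-space normals:

```python
import math

def height_to_normals(height, strength=1.0):
    """Convert a height field (list of rows) into tangent-space normals
    via central differences; each normal is a unit (x, y, z) tuple."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences, clamped at the borders
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

flat = height_to_normals([[0.0, 0.0], [0.0, 0.0]])
assert flat[0][0] == (0.0, 0.0, 1.0)  # a flat surface points straight up
```

A normal-map texture then just stores these (x, y, z) vectors remapped into the 0-255 range of an RGB image, which is why flat areas come out as that typical bluish purple.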

 

I then give Patrick the FBX, normalmap and diffuse map to import ingame.
That's it so far! Feel free to ask me any questions, it's a very interesting method to create 3D models! :slight_smile:
I'm gonna leave you with a scan of this puffball mushroom:


(Tinyruin) #3

Hello everybody! I forgot about writing the blogposts as I go, so I'll write them in retrospect now. It's actually quite interesting, because it allows me to look at what I've done with a bit more distance and knowledge, now that I'm near the end of the project. But let's get right to it. This blogpost is about my efforts to improve my photogrammetry results and the tricks I learned on the way.

The week before Christmas started with the rendered results of the tree-stump I previously photographed.

We wanted to try a few things out here and compare their impact on the quality of the result. The first thing we wanted to compare is how masks affect the final mesh.

First here is the result without any masks (Lukas was so nice to lend me the resources of his system to render this):

And for comparison, here is the same tree stump but with masks (thanks Patrick for rendering):

First off, in the mesh itself there really isn't any notable difference; the details are in both images. The main difference is in how the "unwanted" background is cut away in the masked version. Without the background leaves, there is more space on the generated UV map for the texture of the treestump, which makes the masked result a bit sharper.
However, you can get the same effect with less work if you delete unwanted parts once you have generated the dense point cloud. This also gives you more control and better tools to cut out the leaves.
Edit: I actually looked it up, and using masks seems to speed up the dense point reconstruction, as Agisoft then doesn't calculate the points for the parts you don't want/need.

Anyway, even the optimized version didn't give us the level of detail we needed for our treestump (as you stand directly on it, at the size of a human thumb). I figured out the reason was the source photos, which didn't carry enough detail at this scale. So the solution was to use the focal length of my lens to its max (18-55mm kit lens) to get as much magnification as possible.
However, with this lens that also means I need to get closer to the object I want to photograph. This introduced another problem.
As you want a high f-stop / deep depth of field and a low ISO, the exposure time gets very long. Usually I shot at a shutter speed of 1/8th of a second. At this speed it's not viable to shoot handheld, as you will likely end up with motion blur in the image.
So I needed to shoot with a tripod, which limited how close my lens could get to the ground. As the stump wasn't very high (maybe 20-25cm), I couldn't get close enough to get the level of detail we wanted.
Later we solved this by choosing a larger treestump. However, I think I'll upgrade my gear to give me better results and more freedom there.
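To put rough numbers on the close-up problem: depth of field shrinks fast as you move closer, which is why the high f-stop matters so much here. A hedged Python sketch using the standard thin-lens depth-of-field approximation (the 0.02 mm circle of confusion for APS-C and all other values are illustrative assumptions, not measurements from my setup):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.02):
    """Approximate near/far limits of acceptable sharpness (thin-lens model).

    subject_mm is the focus distance; returns (near, far) in mm.
    A circle of confusion of 0.02 mm is a common APS-C assumption.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# 55 mm at f/11, focused 300 mm away: only about a centimetre is acceptably sharp
near, far = depth_of_field(55, 11, 300)
print(round(far - near, 1), "mm of sharpness")
```

At a few metres the same lens and aperture give you well over a metre of sharp depth, which is why this only becomes painful for macro-ish close-up work.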

Besides the marker and tripod learnings, I looked at other ways to improve my results. As I had read before, markers improve the result in the alignment process - where Agisoft figures out which photos belong together. While my self-made cardboard markers worked fine, I also found an option in Agisoft to print custom markers, which the software can then easily recognize.

Usually scans without markers align fine, but with surfaces Agisoft usually has problems with, they can help a lot. For example, the leaves above were very wet and reflective, as well as monochrome in color. My results of the forest floor shot with markers were way better (I scanned it to place our treestump on top of it - to create a cohesive "photorealistic" world).

So much for some tips on how to improve photogrammetry.

We also had the chance to chat with Art by Rens, who is currently making a "4k experience" for Unreal Engine. Inspiring guy! He showed Patrick a trick to load in 8k textures in Unreal and gave Lukas and me the advice to scan the mushroom we wanted ingame in two parts to improve results (stem and hat).

So at the end of the week I knew how to get better results with photogrammetry (:

EDIT: I also tried out a method this week to reduce the holes in the model at areas where there are no photos (e.g. due to the object lying on the ground).

I had seen this method of mounting the object on a tripod around. However, it means shooting handheld unless you have a 2nd tripod, and I didn't want to sacrifice quality by turning the ISO up or the f-stop down. It could work nicely with a 2nd tripod though, or by putting the object on a stick instead of a tripod.

EDIT #2: I forgot another thing! Shortly before we all left for Christmas, we had a collaborative photogrammetry session in the forest (Lukas, Shania and I), where I experimented with getting better detail for the treestump.


(Bart) #4

Wow, this is great! Thanks so much for sharing, I'll be following this with great interest indeed.


(Mesheritage) #5

Nice tuto :wink:

I'm no expert with Agisoft, but I think it's better to keep images unmasked for the alignment and use the mask for the reconstruction. Or I think you can also reduce the region of interest to reconstruct (with the kind of cube). Not sure how much it improves things, but worth a try.

I would be surprised if Agisoft cannot align images without markers; we are using terrible quality images and it rarely fails!


(Abby Crawford) #6

I'm looking forward to reading more of this - it's great that you're putting your experiments down on "paper" so that you can keep track of what works and we can all benefit from your time and attention to detail. :slight_smile:


(Tinyruin) #7

@Mesheritage I actually looked the masking up in the documentation, since I still wasn't 100% sure about it ^^
Here is the Link: http://downloads.agisoft.ru/pdf/photoscan-pro_1_0_0_en.pdf
Basically:

  • when aligning photos, masks can help to ignore things in the image which would otherwise confuse the alignment. They can also help if the object to scan occupies only a small part of the photo. However, as you said, you usually want to keep it unmasked for the alignment, as it gives Agisoft additional points of reference.
  • masked areas can be excluded in dense point calculation. Might save time and power used for processing.

And yeah, most things also work fine without markers for me, as there is enough structure in the object or enough info in the background. But for some harder-to-scan natural stuff it helped me, I think :slight_smile: It just felt like a nice reassurance to have ^^ I also have a piece of newspaper glued to my turntable, as there were some problems if everything around the object was 100% white.


(Tinyruin) #8

@abbyec @bartv Glad you like the thread :slight_smile: You both actually inspired me with some experiments. Abby, your Photogrammetry tutorial was really awesome, thank you for that :slight_smile: And Bart, you'll see something in the next devlog, about your idea of baking the natural light into the texture.


(Nedo) #9

Hey tinyruin, nice read, I'm a big fan of Photogrammetry in VR! Have you made some
VR worlds for Steam's Destinations? I use Destinations exactly for this kind of VR experience.

nedo


(Mauricesvay) #10

Thanks for the write-up! It is very interesting to learn from others' workflow.
Overall, my workflow is pretty similar. On the photography side, I tend to shoot in RAW to get more dynamic range. I usually process all the photos to move most of the details to the midtones, even if it makes them look washed-out. I also use Instant Meshes to decimate and retopo the model. I use Blender for UV unwrapping.

Quick question, how do you do the following?


(Tinyruin) #11

@nedo Awesome! Do you have some screenshots of your creations? Sadly I don't have a Vive of my own right now, we just bought one for the team. But maybe in the future :slight_smile:


(Tinyruin) #12

@mauricesvay Yeah, I think shooting in RAW is probably best :slight_smile: Personally I had some problems with the format of my camera, so I had to resort to JPG for now (my camera's RAW format gets really noisy when exporting from Darktable and I don't know why). Substance is also developing a plugin right now to remove the lighting information/create a clean albedo, as far as I know.
EDIT: A Beta of the plugin/filter is linked in the article here: https://www.allegorithmic.com/blog/go-scan-world-photogrammetry-smartphone

On the baking part: Blender has built-in baking for normal maps, you'll find tutorials on YouTube for that :slight_smile:
However, I switched to Substance Painter now, as it seems to give a slightly better normal map, but more importantly it also lets me bake ambient occlusion. (Maybe AO is also possible with Blender.)
EDIT: duh, it's in the same menu as baking the normal map actually :smiley:
I also use it for material setup, but I think it's a bit expensive if you're not a student.
I think I'll include details on my current baking process in a later devlog :slight_smile:
EDIT 2: Also, to bake the color information of the highpoly mesh: there is an option in Agisoft to import a mesh. I import the lowpoly mesh and project the textures onto it. This can also be done in Blender with the bake menu.


(Bart) #13

It might be nice to have a quick tutorial on our blog on this topic, especially of how to use it in combination with photogrammetry to make great looking yet fast models. Can I trick you into writing one? :slight_smile:


(Tinyruin) #14

@bartv Hi Bart, thanks for the offer, but I don't think I have thorough enough knowledge yet to do a tutorial about the topic. I know how it works but not why ^^


(Tinyruin) #15

Here is the next part of my devlog! I hope you enjoy it :blush: In week 3 of the project I looked at photogrammetry from a more artistic side and decided to have some fun with it! (Christmas/New Year holidays). I also recorded a bank of sound files, because I had wanted to do sound recording again for quite some time - I smashed a pumpkin in the process - you'll see ^^

But first to the photogrammetry experiments! They were a nice way to try things out without the pressure that stuff necessarily needs to work or fit the game.

My first session was around a place I call the "forest temple".

I found this place once by accident and kept coming back to it. Recently moss has grown on top, making it look even more like a part of the forest. And every time there is new graffiti to discover on the pillars.

I decided to do a photogrammetry scan of one of the pillars, with enough detail to read the graffiti.

After I rendered it out, I found an entry that dated back to 1959, painted over with newer layers of paint. It really shifted my view of the place and the kind of history graffiti can display. I later researched how and why the "temple" was built here. Apparently it first stood about a kilometer away from its current place. As the strip mine progressed, most of the temple was transferred to this new place, except the foundation, which was built anew. Really a piece of living history!

The 2nd thing I scanned at this place was this fallen treestump:


I found it right next to the temple. Quite compelling in its symbolism. I wonder if the symbols were carved in there before the tree fell.
At the location, I also collected some bark, dead wood and pinecones to later scan for our game (I also tried photographing a tree mushroom we wanted for our game, but it got too dark to get good results).
Here is the result of the dead wood:

Very interesting structure.
Another experiment I did was baking static lights right into the mesh when photoscanning the object.
Here is the Physalis fruit I chose for my experiment. The LED inside really highlights its color and beautiful fine details.

And here is the result I got:

Very interesting result. While we did not end up using baked lighting for the project, it is something I'll keep in mind for future projects. Imagine a tiny city made of clay with led lights in the windows .. beautiful.

I also scanned the cat of my grandma while I was at her place:


Really the cutest cat I know ^^ I had to capture her sleeping, as even the slightest movement will cause errors if shooting with only one camera. But she was so tired it worked out quite well ^^

Also, here are 2 more objects I scanned for the game that did not end up in the final result, due to medium-sized errors in the reconstructed meshes:

So yeah, this was a really fun week. I enjoyed experimenting and spending time with my family. Gained experience at taking good photos for photogrammetry, as well as looking at it from an artistic standpoint.

EDIT: Oh, I forgot about the sound!
I recorded a soundbank to later mix sound effects ingame as well as ambient sounds to support the feeling of being in the forest.
Here is a photo of a poor smashed pumpkin and a microphone:

And here is how it sounds like:
https://drive.google.com/open?id=0B4jQqt5pwU2UdjJQT1VQc1NLWVE
I just saw the old pumpkin lying around and it was too good of an opportunity to pass up. It makes for good impact sounds if you take out the high tones, or more disgusting "fleshy" sounds if you leave them in (as with most juicy fruits). I recorded most sounds with the dead-wood cave we still had in the design at this point in mind. So I recorded steps on dead wood, dead wood crumbling between my fingers etc., as well as some general cracks to mix with it (e.g. crushing eggshells for additional debris).
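"Taking out the high tones" is just low-pass filtering. As a minimal illustration of the idea (any DAW's EQ does a much fancier version of this), here is a one-pole low-pass filter in Python:

```python
def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).

    Smaller alpha cuts more of the high frequencies."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A fast-alternating (high-frequency) signal is strongly attenuated,
# while a constant (low-frequency) signal passes through:
buzz = [1.0, -1.0] * 100
filtered = low_pass(buzz, alpha=0.1)
assert max(abs(s) for s in filtered[50:]) < 0.2
```

With real recordings you'd apply this (or a proper EQ) to the samples of the WAV file; the principle of dulling the crack and keeping the "fleshy" thump is the same.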


(Bart) #16

Ok no worries :slight_smile: Still enjoying this topic very much, and I like how you're moving on to sound now too!


(Tinyruin) #17

Hi everybody! As the project is already over, I thought I'd approach the rest not as a full devlog, but rather show you the experiments and techniques that helped me with the photogrammetry for the project.

First off, I thought it would be interesting to share my current turntable solution for scanning smaller objects.


The turntable is a cheap little thing, actually meant for turning around products in a store (it has no motor though).
We also got a bigger one later from HAMA, but that one was really crappily balanced, as it was meant as a tool to rotate computer screens.

If you look at the side of my turntable you'll see some little markings.


They help me shoot my objects with similar spacing between the photos.
I tried to place the markings so that they ensure all my source photos have a good overlap for the reconstruction process later. The spacing I chose was 20°, but @abbyec in her awesome photogrammetry tutorial even advises taking a photo every 10-15° for good results.
However, it's not the number that is important, it's the amount of overlap between photos (she says 50-60% overlap is good) - and with my setup and my lens (which is a bit more wide-angle) I think it works out with a larger angle.
If you want to add the degree markings yourself, I found this Circle image quite useful for that. Just print it out at the size of your turntable and you have a good reference for adding the markings.
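If you want to sanity-check your own spacing, you can estimate the overlap from your lens's horizontal field of view. A rough Python sketch (assuming an APS-C sensor about 23.5 mm wide; the simple `1 - step/FOV` estimate is only a crude approximation for turntable shots, since the object rotates rather than the camera translating):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=23.5):
    """Horizontal field of view of a rectilinear lens (APS-C width assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def estimated_overlap(step_deg, focal_mm):
    """Very rough overlap fraction between consecutive turntable shots."""
    return 1 - step_deg / horizontal_fov_deg(focal_mm)

# 18 mm (the wide end of the kit lens) gives roughly a 66 degree FOV,
# so a 20 degree step still leaves about 70% overlap:
print(round(estimated_overlap(20, 18), 2))
```

At the 55 mm end the same 20° step would drop the estimated overlap well below 50%, which matches the advice to use smaller steps with longer lenses.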

The next thing I found really useful for helping me shoot with the turntable, were poster-stickers!


You can get them in the supermarket or drugstore. I use them to attach the turntable to the ground below (so I don't accidentally hit it with my hand and push it away) and for gluing the objects on top of the turntable.
The great thing is that they are easily formable, somewhat reusable, and can be attached to and removed from almost any type of surface with ease.

Last but not least, I put some newspaper on top of my turntable.


This gives Agisoft (or your software of choice) additional good reference points when aligning the photos. Newspaper works great, as the text is a non-repeating, easily recognizable pattern with great contrast.
In the future I want to try out printing the markers Agisoft itself provides and put them on top of the turntable.

I hope you enjoyed this look behind the scenes and would love to hear about your turntable setups!