This was my first devlog when I started the project - it looks at what photogrammetry is and covers my first lessons learned with it. It may be interesting for total beginners.
I spent the time since the project kick-off gathering as much information as I could about photogrammetry - reading articles, watching talks, and running my own experiments.
Here is what my first scan looked like (interactive 3D view):
And here is where I'm at now:
1. Basic Idea
“Photogrammetry is the art and science of using overlapping photographs to reconstruct three dimensional scenes or objects”
Photoscanning is often used in museums to digitize collections or by archaeologists to digitize excavation sites. In games it's used to scan whole environments and props (e.g. Star Wars Battlefront) or even humans (e.g. the new Metal Gear Solid).
We chose this technique because it creates a very "photoreal" effect. Combined with our idea of seeing the world through the eyes of a tiny human - and with the sense of scale/physical space VR gives you - we hope to achieve a fascinating effect.
Lukas and I also have experience in photography, so we both can bring that into the process.
2. My Process
The first step is always to take photos of the object or environment I want to digitize, from all sides that are useful for the game (i.e. you don't need a 360° scan of a rock cliff). Here is a shitty photo shot with my ancient phone:
Here I could bring in my experience from photographing textures for games. You want a high f-stop (for a large depth of field), low ISO (to keep noise down), a tripod and diffuse light. If I want to shoot outside it needs to be cloudy, or I need to stick to a shaded spot (changes in lighting will screw up the model).
Photographing usually takes anywhere between 10 minutes and 1 hour, depending on the object. For small objects I use a little turntable,
which speeds up the process a lot.
I won't go into more detail here, but I compiled a Google Doc with my learnings if you are curious.
A technique I recently tried to help the software with alignment is using simple cardboard markers. (The numbers are not important; the software just needs an easily recognizable pattern.)
_Note: I later learned Agisoft has a menu to export markers for printing, which the software can easily recognize.
For hard-to-align objects this can help a lot, but usually I was fine without them, too._
And here is a tree stump I scanned with this method:
Once I've photographed the object, I plug in the camera and copy the images to my drive. First off I delete blurry or out-of-focus shots to prevent errors.
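Culling can even be automated. Here is a minimal sketch, assuming OpenCV is installed, using the classic variance-of-Laplacian sharpness check (the folder path and threshold are placeholders - tune the cutoff for your camera):

```python
# Flag likely-blurry shots via the variance of the Laplacian:
# sharp images have strong edges, so the Laplacian response varies a lot.
import cv2
from pathlib import Path

THRESHOLD = 100.0  # hypothetical cutoff; tune per camera/lens

for path in sorted(Path("scans/stump_raw").glob("*.jpg")):  # placeholder folder
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(img, cv2.CV_64F).var()
    if score < THRESHOLD:
        print(f"{path.name}: {score:.1f} -> probably blurry, review before import")
```

I'd still eyeball flagged images before deleting - the score only hints at blur; it can't tell an out-of-focus shot from a shot of a low-contrast surface.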
Then I load the images into the tool we're using, Agisoft Photoscan. We chose it because it is very accessible and widely used, so it's relatively easy to find information and examples on it. Also, the entry price is quite fair ^^
Sometimes I need to mask the images by hand like this:
This way Agisoft ignores the unimportant objects in the background. (There is some more info on this in the next devlog - we found that masks didn't improve the quality for our example object.)
Then I go through the workflow of:
- aligning photos
- building a dense point cloud
- building a mesh
- removing floating faces etc.
- creating the texture
This is mostly automated and I only need to do some tweaks between the steps. It can take veeeery long, depending on the complexity of the object.
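Side note: Photoscan also exposes these steps through its built-in Python console, so the whole chain can be queued up and left to run overnight. A rough sketch, assuming the PhotoScan 1.x scripting API (check the reference manual for your version - the paths and quality settings below are placeholders):

```python
# Rough batch sketch for Agisoft PhotoScan's built-in Python console.
# Runs the same align -> dense cloud -> mesh -> texture chain as the GUI;
# manual cleanup (floating faces etc.) still happens in between.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["D:/scans/stump/IMG_0001.jpg"])  # placeholder photo list

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)      # align photos...
chunk.alignCameras()                                    # ...and solve cameras
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)  # dense point cloud
chunk.buildModel(surface=PhotoScan.Arbitrary)           # mesh from the cloud
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("D:/scans/stump/stump.psz")  # placeholder project path
```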
Once I have the highpoly mesh with the unoptimized texture, I load it into InstantMeshes to create a lowpoly model. Next I load both the highpoly and the lowpoly model into Blender. As the Agisoft texture is very unoptimized, I UV-unwrap the lowpoly model. Now I can bake the height and color information of the highpoly model onto the lowpoly one.
_Note: Later I learned that the Agisoft UVs get pretty usable if you import the lowpoly mesh and project the texture onto it._
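For reference, the bake itself can also be driven from Blender's Python console. A minimal sketch, assuming Cycles and the 2.8+ bpy API; the object names are hypothetical, and the lowpoly material needs an Image Texture node selected as the bake target (one image per bake):

```python
# Bake highpoly detail onto the UV-unwrapped lowpoly ("selected to active").
import bpy

bpy.context.scene.render.engine = 'CYCLES'

high = bpy.data.objects["stump_high"]  # hypothetical object names
low = bpy.data.objects["stump_low"]

bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low  # the active object receives the bake

# Normal map from the highpoly geometry...
bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True,
                    cage_extrusion=0.05)
# ...then (after switching the target image node) the color -
# albedo only, so no lighting gets baked in.
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                    use_selected_to_active=True, cage_extrusion=0.05)
```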
I then give Patrick the FBX, normal map and diffuse map to import in-game.
That's it so far! Feel free to ask me any questions - it's a very interesting method for creating 3D models!
I'm gonna leave you with a scan of this puffball mushroom: