3D Scan of a hall (Photogrammetry)

I am totally new to photogrammetry and 3D Scanning. My Bachelor Project is to scan a hall and reconstruct a 3D Model, which can be used in a virtual environment.

I shot 495 photos in total, both on an iPhone XS (stock camera app, automatic settings) and with a Nikon D5100 (ISO 125; exposure time 1/3 s; aperture f/9). All photos were shot in portrait orientation.
The iPhone and Nikon photos were taken from the same positions (each time I shot a photo with the Nikon, I also took one with the iPhone).

I put all the photos into a video so you can see every shot and perspective:

At first I put just the iPhone photos into Meshroom, simply to see what the result would look like. The 3D model was really bad.

So I turned the FeatureExtraction -> Describer Preset up to “Ultra” and set the Describer Types of the FeatureExtraction, FeatureMatching and StructureFromMotion nodes to “akaze”.
Processing took almost two days, but far more cameras could now be reconstructed:

But the result is still not really good.

I also tried RealityCapture, but the software could only align NUMBER photos out of 495.
Do you know how I can get RealityCapture to align more photos?

My guess is that my pictures are not well suited for photogrammetry. Do you have any recommendations or tips on how I can improve my images?

Hi - did you manage to get a better result in the end?

In general your results look like what I’d expect from your input images.

You may be having issues because:

  • mixing images from different cameras/lenses often doesn’t work well
  • the room you are scanning has a LOT of plain and repetitively patterned surfaces, which are tricky for photogrammetry software to reconstruct
  • you may actually have taken too many images; I’ve seen software get confused when images are taken from very similar positions
  • if you re-shoot the images, I would suggest capturing photos while holding your camera at different heights and angles. You can just about see how I did this in the images in the description of this model

I hope that is useful info. If you share an update, I will encourage other community members to chime in with their suggestions too!

This is the best result I could get. For that I used just the iPhone images and the default Meshroom settings.

I didn’t mix different cameras/lenses. I still haven’t used the Nikon images, but I don’t think they would give better results.

Do you have any ideas how I can reconstruct those plain and repetitively patterned surfaces? I’ve seen techniques like putting random colour patterns on the surfaces, but that is neither allowed nor feasible in my case.

I tried to take as many pictures as possible by shooting an image at every step I took, to guarantee a high overlap.
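For what it’s worth, the overlap between consecutive shots can also be estimated on paper before shooting. Here is a quick sketch (Python; the step size, wall distance, and field-of-view numbers are illustrative guesses, not measured values) for two shots taken parallel to a wall; something like 60–80% overlap is the commonly cited target:

```python
import math

def footprint_width(distance_m: float, fov_deg: float) -> float:
    """Width of wall covered by one photo, given distance and horizontal FOV."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

def overlap_fraction(step_m: float, distance_m: float, fov_deg: float) -> float:
    """Overlap between two consecutive shots taken step_m apart, parallel to the wall."""
    w = footprint_width(distance_m, fov_deg)
    return max(0.0, 1.0 - step_m / w)

# Illustrative numbers only: portrait orientation narrows the horizontal FOV,
# so ~37 degrees is a rough guess for a typical wide phone lens held in portrait.
print(round(overlap_fraction(step_m=0.75, distance_m=5.0, fov_deg=37.0), 2))  # → 0.78
```

So at these (made-up) numbers, one photo per 0.75 m step already gives close to 80% overlap, which suggests the problem is less the overlap and more the featureless surfaces.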

Are you suggesting I re-shoot all the images and discard the old ones, or should I just take additional new images and add them to the old set?

Thank you!

Hmm, it’s odd that the tables and chairs are not reconstructing… I’m not sure adding more images would help, to be honest.

What is your use case for the output 3D model? Perhaps you could model the desks &amp; chairs back in using Blender or another 3D package?

For Meshroom specific advice I recommend you reach out to the developers directly: https://alicevision.org/#about


I forgot to mention that I don’t need the desks and chairs, so I’ll have to remove them afterwards anyway.
I just need the empty hall, but I am not allowed to move any desks or chairs.
I’m mainly worried about the ceiling and the walls having those rough, bumpy surfaces.

The bumpy surfaces are a result of the plain/repetitively patterned nature of the real-world surfaces, I’m afraid.

How does the Mesh output look from Metashape or RealityCapture?

If you have the time and skill, it is possible to import a fixed version of the mesh (either an edited scan or modeled from scratch) to texture in the photogrammetry software. This is perhaps a big task for someone new to the process however.

I appreciate my advice has not been much use but unfortunately you have picked a tricky subject for one of your first projects :sweat_smile:

So maybe I can clean up those bumpy surfaces afterwards in Maya or similar…
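In case it helps with that cleanup step: the usual operation for flattening bumpy scan surfaces is some form of Laplacian smoothing, which is (roughly) the idea behind the smoothing tools in Maya and Blender. A toy, dependency-free sketch of the idea, not production mesh code:

```python
def laplacian_smooth(verts, neighbors, iterations=10, lam=0.5):
    """Pull each vertex toward the average of its neighbors.

    verts:     list of (x, y, z) vertex positions
    neighbors: list of neighbor-index lists, one per vertex
    lam:       step size per iteration (0..1); small values smooth gently
    """
    vs = [list(v) for v in verts]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(vs):
            if not neighbors[i]:
                new.append(v)  # isolated vertex: leave in place
                continue
            # average position of this vertex's neighbors
            avg = [sum(vs[j][k] for j in neighbors[i]) / len(neighbors[i]) for k in range(3)]
            # move a fraction lam of the way toward that average
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        vs = new
    return vs
```

One iteration on a three-vertex "bump" (middle vertex raised) already halves the spike, which is why a few iterations with a modest `lam` is usually enough before the mesh starts losing real detail.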

Metashape was able to reconstruct more parts of the tables and chairs, and it aligned a total of 493 out of 494 images. But, weirdly, it reconstructed one wall in the middle of the hall.

I don’t know why, but RealityCapture is not able to align many images (about 10 or fewer, for both the iPhone and the Nikon images). It’s my first time working with RealityCapture and I don’t know what to adjust to get more images aligned.

Yes, I am a complete newbie :sweat_smile: The task was to get the best possible result using just photogrammetry, without modelling much myself…
But I appreciate your time and effort in helping me :slight_smile: I know this is a very hard and tricky project, even for photogrammetry pros… :cry:

The trick was to edit all the photos from the Nikon, because they were too dark; I used Lightroom for that. Then I worked with components, as you mentioned, by separating the images into groups based on the location in the room where they were taken. The workflow:

1. Import each group separately into RealityCapture, align it, and export the alignment (component).
2. Once all components have been aligned and exported, import the first component back into RC, define control points in it, realign, and export again.
3. Re-import that alignment (with its control points) and add the next component/alignment, which has no control points yet. Since the new component’s images don’t carry the control points, place them on all of the new images as well.
4. Repeat until every image carries the control points defined at the beginning.
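To make the grouping step concrete, here is a minimal sketch in Python (the `loc-<name>_` filename prefix is my own invented convention, not anything RealityCapture requires) that copies images into one folder per location group, so each folder can then be imported as its own component:

```python
import shutil
from pathlib import Path

def split_into_components(src: Path, dst: Path) -> dict[str, list[str]]:
    """Copy images into one folder per group, keyed by a 'loc-<name>_' filename prefix.

    Assumes files are named like 'loc-entrance_001.jpg'; anything that doesn't
    match goes into an 'ungrouped' folder so no image is silently dropped.
    """
    groups: dict[str, list[str]] = {}
    for img in sorted(src.glob("*.jpg")):
        name = img.stem
        if name.startswith("loc-") and "_" in name:
            group = name.split("_", 1)[0]  # e.g. 'loc-entrance'
        else:
            group = "ungrouped"
        out_dir = dst / group
        out_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(img, out_dir / img.name)
        groups.setdefault(group, []).append(img.name)
    return groups
```

Renaming the photos by location once up front (however tedious for ~495 files) makes the per-component imports reproducible if you ever have to redo the alignment.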

The result and capturing technique:
Note: don’t focus on the floor, as I was not allowed to remove the desks and chairs for capturing.