I am totally new to photogrammetry and 3D Scanning. My Bachelor Project is to scan a hall and reconstruct a 3D Model, which can be used in a virtual environment.
I’ve shot 495 photos in total, both on an iPhone XS (stock camera app, automatic settings) and with a Nikon D5100 (ISO 125, exposure time 1/3 s, aperture f/9). All photos were shot in portrait orientation.
The iPhone and Nikon photos were taken from the same positions (each time I took a photo with the Nikon, I also took one with the iPhone).
I put all the photos into a video so you can see all the photos and perspectives:
At first I fed all the iPhone photos into Meshroom, just to see the resulting output.
The 3D model was really bad.
So I raised Feature Extraction → Describer Preset to “Ultra” and added “akaze” to the Describer Types of the FeatureExtraction, FeatureMatching, and StructureFromMotion nodes.
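For reference, the same overrides can also be applied when running Meshroom headlessly. This is only a sketch, assuming Meshroom's bundled `meshroom_batch` launcher and its `--paramOverrides` option; the exact node and parameter names (`describerPreset`, `describerTypes`) may differ between Meshroom releases, and the input/output paths are placeholders.

```shell
# Sketch: batch-process the photo set with the same settings used in the GUI.
# Assumes meshroom_batch and its NodeType:param=value override syntax;
# check "meshroom_batch --help" for the options in your Meshroom version.
meshroom_batch \
  --input /path/to/iphone_photos \
  --output /path/to/output \
  --paramOverrides \
    FeatureExtraction:describerPreset=ultra \
    FeatureExtraction:describerTypes=sift,akaze \
    FeatureMatching:describerTypes=sift,akaze \
    StructureFromMotion:describerTypes=sift,akaze
```

Keeping “sift” alongside “akaze” is usually safer than replacing it outright, since the default SIFT features are what the pipeline is tuned for.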
The processing took almost two days, but far more cameras could now be reconstructed:
But the result is still not really good.
I also tried RealityCapture, but the software could only align NUMBER photos out of 495.
Do you know how to get RealityCapture to align more photos?
My guess is that my pictures are not well suited for photogrammetry. Do you have any recommendations or tips on how I can improve my images?