The documentation is mostly research papers and books on imaging/computer vision, not really “user friendly” if you are not into coding. That is something we should try to change, I guess. Often the articles using SfM photogrammetry for archaeology are very simple or mix up different terms, so we even end up with different notions of the same thing.
Just to clarify what I mean by these terms: photogrammetry is measuring things from images, and SfM (structure from motion) is estimating the 3D positions of image features from different camera locations. So the point of SfM is to get accurate “camera positions”.
I think the algorithms are now robust enough to get accurate camera positions if your images are good enough; I got amazing results with OpenMVG. If you want results so accurate that you would need a tracker, I would advise against even using Photoscan: the fact that they hide all their code and expose very few options does not help with trust.
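If you want to try OpenMVG, the classic sequential pipeline is just a chain of command-line tools; here is a rough sketch driving it from Python (paths are made up, and the binary names and flags are from an older OpenMVG release, so check them against your version’s docs):

```python
import subprocess

# hypothetical paths; adapt to your machine
IMAGES = "images/"          # input photos
OUT = "matches/"            # intermediate data (features, matches)
RECON = "reconstruction/"   # camera poses + sparse point cloud
SENSOR_DB = "sensor_width_camera_database.txt"

steps = [
    # 1. list images and read focal lengths from EXIF
    ["openMVG_main_SfMInit_ImageListing", "-i", IMAGES, "-o", OUT, "-d", SENSOR_DB],
    # 2. extract features in every image
    ["openMVG_main_ComputeFeatures", "-i", OUT + "sfm_data.json", "-o", OUT],
    # 3. match features between image pairs
    ["openMVG_main_ComputeMatches", "-i", OUT + "sfm_data.json", "-o", OUT],
    # 4. incremental SfM: this step produces the camera positions
    ["openMVG_main_IncrementalSfM", "-i", OUT + "sfm_data.json", "-m", OUT, "-o", RECON],
]
for cmd in steps:
    subprocess.run(cmd, check=True)
```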
Then you can link the coordinates of an object in the world to its coordinates in your model. See for example the “camera to world” coordinate transform used in standard stereo vision.
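To make that concrete, here is a minimal numpy sketch of the camera-to-world transform, assuming the usual pinhole convention x_cam = R·X_world + t that most SfM tools output (the pose values are made up):

```python
import numpy as np

# example pose: camera rotated 90 degrees around Z, shifted along X
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([2.0, 0.0, 0.0])

def camera_to_world(x_cam):
    # invert the rigid transform: X_world = R^T (x_cam - t)
    return R.T @ (x_cam - t)

# the camera centre in world coordinates is the point that maps to 0 in camera space
C = camera_to_world(np.zeros(3))   # equals -R^T t
print(C)
```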
After that, the alignment and the scaling are simply a change of coordinates applied to the point cloud (or any other spatial data). Have a look at CloudCompare, which has options for that; Meshlab has a straightforward tool for scaling.
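Under the hood it is just a similarity transform p' = s·R·p + t applied to every point; a minimal sketch with numpy, where the point cloud and the two reference markers are made up for illustration:

```python
import numpy as np

points = np.random.rand(1000, 3)            # stand-in for your exported point cloud

# scale from two markers whose real-world separation you measured on site
p1, p2 = points[0], points[1]               # model coordinates of the two markers
real_distance = 0.50                        # measured distance in metres
s = real_distance / np.linalg.norm(p1 - p2)

R = np.eye(3)                               # rotation into the site frame (identity here)
t = np.array([100.0, 250.0, 12.0])          # translation to the site origin

aligned = s * (points @ R.T) + t            # p' = s R p + t for every point at once
```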
If you are looking at markers or a static rig (where you compute the camera positions from a reference, not directly on the object), OpenMVG can do that as well.
For the error: you can get an error estimate on the camera positions if you use a tracker, but you cannot use that information to assess the 3D accuracy of your mesh or point cloud. That depends on the fine details of the surface, which are what matters most for accurate 3D measurement.
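So with a tracker, the best you can quantify is the camera error, e.g. an RMSE between tracked and reconstructed camera centres; a minimal sketch with made-up numbers, assuming both sets are already aligned in the same coordinate frame:

```python
import numpy as np

# hypothetical camera centres, in metres, already in the same coordinate frame
tracked   = np.array([[0.00,  0.00, 1.50], [0.50, 0.00, 1.50], [1.00, 0.10, 1.40]])
estimated = np.array([[0.01, -0.02, 1.49], [0.52, 0.01, 1.51], [0.98, 0.12, 1.41]])

residuals = np.linalg.norm(tracked - estimated, axis=1)  # per-camera error
rmse = float(np.sqrt(np.mean(residuals ** 2)))
print(f"camera position RMSE: {rmse:.3f} m")
# note: this bounds the camera error only; it says nothing about the mesh surface
```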
There is a project called “Tanks and Temples” which tries to assess the accuracy of different algorithms: https://www.tanksandtemples.org/leaderboard/
It’s nice, but it doesn’t take everything into account either, and it’s crowdsourced, so I feel some entries are cheating a bit. The results I got from Altizure, or the ones I saw, are not that amazing. It’s a nice platform, but the free version is very limited (obviously!).
There are a lot of parameters that impact the accuracy; the question is how you compare an expert with a very expensive computer against a casual user.
That is also a problem with Photoscan: very few parameters to play with.
For the lighting issue, you can partially solve it with a sequence of images. For example, this model:
I made it from a ~20 s video on my phone, so very low quality. As you can see in the comments, one side is dark because of overexposure from sun reflecting through the window, but because the images form a sequence the reconstruction adapts nicely.
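If you want to try the same trick, the frames can be pulled out of the video with OpenCV before feeding them to the SfM pipeline; a minimal sketch (file names are made up):

```python
import os
import cv2

os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("walkthrough.mp4")   # hypothetical ~20 s phone clip
step = 10                                     # keep ~3 fps from a 30 fps video
count = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if count % step == 0:                     # enough overlap for feature matching
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
    count += 1
video.release()
print(f"extracted {saved} frames")
```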
It works for underwater footage as well; the texture is often quite bad because the video quality we work with is really low (1080p at most, if we are lucky…).
I would love to test more underwater footage, but I have only had access to one clip so far…
One thing to keep in mind is that the feature-tracking algorithms have improved, as has computing power. That makes it possible to get reliable results without necessarily having a “physical reference”, something that in the past would have been just too computationally intensive to push through the optimisation.
For example, SIFT performs well under intensity changes, but it is patented, so commercial software can’t use it (research/personal use can).
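You can see what SIFT picks up on your own images with OpenCV; a minimal sketch (file name is made up, and depending on your OpenCV build, SIFT may live in the contrib package as cv2.xfeatures2d.SIFT_create instead):

```python
import cv2

img = cv2.imread("frame_0000.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} SIFT features")

# draw them to judge coverage: poorly textured areas will have few keypoints
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("frame_0000_sift.jpg", vis)
```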
Now, even though I got reasonable camera positions from the underwater footage, the texture was again terrible. We could do some extra processing on the images, but I never explored that.
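One cheap thing to try on such footage would be contrast-limited adaptive histogram equalisation (CLAHE) on the luminance channel before feature extraction; this is just an idea I have not tested, sketched with OpenCV (file names are made up):

```python
import cv2

frame = cv2.imread("uw_frame_0000.jpg")        # hypothetical underwater frame
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)   # work on lightness only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))   # boost local contrast, keep colour
cv2.imwrite("uw_frame_0000_clahe.jpg",
            cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR))
```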
SfM photogrammetry depends a lot on the object, the lighting, the parameters and the image coverage, which makes it very hard to guarantee a certain level of accuracy. The good thing is that when it doesn’t work, it tends to fail in a very obvious way!
Maybe we should build open-source software for archaeologists/heritage professionals to avoid the “coding pain”. For example, I saw some people doing masking by hand when we can do it all automatically (see the sketch below). Actually, many departments are providing this kind of software, a bit like Meshlab, but I have no idea why these tools are not more used, or at least better known (I rarely hear about them from users).
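As an example of the kind of automation I mean, here is a minimal automatic mask for an object shot against a plain background, using Otsu thresholding plus a morphological clean-up (file names are made up; real scenes would need something smarter, like GrabCut or a learned segmentation):

```python
import cv2
import numpy as np

img = cv2.imread("artefact_0000.jpg")                    # hypothetical turntable shot
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop isolated speckles
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
cv2.imwrite("artefact_0000_mask.png", mask)              # per-image mask for the SfM tool
```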