Open source photogrammetry software

culturalheritage
opensource
photogrammetry

(Michal Zurawski) #1

Hi,

I’m back to photogrammetry after a while. During my absence a lot of new software has appeared. The last news I remember was the announcement of RealityCapture… Now I see that it is quite a popular piece of software. I’ve also noticed that we finally have some open source/free alternatives to VisualSFM. Unfortunately, with the popularization of 3D scanning, the word “photogrammetry” has degraded and now means almost nothing: almost every SFM solution is called “photogrammetric”. So this is my question: which of the available software packages can be used for real photogrammetry? By “real” I mean using scale bars, manually adding points, or detecting coded targets. I would especially like to know more about open source software, because it may be a great option for introducing a 3D pipeline into small museums or institutions that cannot afford expensive software.

I found that MicMac, Meshroom and Colmap are described as photogrammetric. I’m going to give each one a try, but… as it is open source, it won’t be a fast process. Do you have any experience or knowledge of these? Do they support a full photogrammetric workflow? I found, for example, that MicMac can handle many different types of scale bars, but… that was only one sentence in a review of the software. When I took a look at the documentation, I was not able to find anything more. I have only searched the net roughly, so I’m going to take a deeper look, but it would be very nice to know your experiences and opinions. I believe this post can also be a guide for future “searchers”, as this issue is likely to be a popular one. For those without experience and a professional background, it can be quite hard to separate “photogrammetry” from “photogrammetry”.

To start this thread off, I can point out that Agisoft Photoscan Pro supports a full photogrammetric workflow. The basic version of Agisoft should be described as Structure from Motion, as there is no way to add a scale or verify the alignment.

I will post my research in this thread. Waiting for your answers, guys.


(Mesheritage) #2

Hi there,

Indeed, there is a trend to call “SFM” “photogrammetry”, or even to think they are the same thing. But the thing is, you can do photogrammetry after the SFM process; you don’t need anything extra, you should already have all the data. It’s more a matter of interface than process.

That is also the point of “open source” projects: people with different interests can add what they want.
In most cases, the people who develop these projects come from a computer vision background, so they care a lot about features and camera positions. Here you will find many people who care more about “getting a nice mesh and texture”, something more “visual”.

In terms of measurement, there are often errors anyway; people state that we can reach sub-mm accuracy etc., which is not the case “in general”. So if very few people care about that aspect, it is not a surprise that you don’t see the feature.

I am using these software packages and extract the data to make measurements. Personally, I use programming to make my own tools, in order to have more control. There are other programs like CloudCompare and Meshlab; I know you can do quite a lot with these, but as always it depends on what you are looking for.
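To give a concrete idea, this is the kind of measurement I mean, a minimal sketch assuming you have exported the cloud and picked two point coordinates (in CloudCompare, for example):

```python
import numpy as np

# Coordinates of two picked points from an exported point cloud
# (made-up values; in practice you pick them in CloudCompare/Meshlab)
point_a = np.array([0.132, 1.447, 0.903])
point_b = np.array([0.567, 1.201, 0.888])

# Euclidean distance in model units (only meaningful once the cloud is scaled)
distance = np.linalg.norm(point_b - point_a)
print(f"Distance between A and B: {distance:.4f} model units")
```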

A small note: it is not because they are open source that they are slower. I got much faster results with them than with Agisoft Photoscan (which has very few options and is a black box, so terrible for research!).


(Michal Zurawski) #3

Hi, thanks for the answer.

It sounds interesting. Can you link any paper or documentation that describes this issue?

I agree that the SFM process and the photogrammetric process differ mostly in interface, but…
I’m thinking about cultural heritage preservation. Visualization, i.e. meshes and textures, is the last step, and of course how they look is very important. But the key point of digitization is preparing data for archiving. So I’m looking for an open source workflow that will fulfil the set of recommendations and good practices that were developed for use in museums. This is why I refer to Agisoft Photoscan: its workflow lets you prepare archive-ready data. So, if we need to be sure the alignment is correct, how do you get this data from SFM? The only way I know, according to the official recommendations, is to make sure that every photo contains some points that were correctly found. Using the Photoscan workflow, you can place coded targets around the model, analyse the set, build the model, and check afterwards whether the targets are correctly aligned. If they are, you can be sure that the error measured by the software is right. And the measurement error is something that you need to attach to your cloud for archiving. Is that possible with your workflow?

Another issue is scanning in an environment where light behaves differently, for example underwater. You need a reference to calibrate the cameras and to measure how the different refractive indices of water and air have influenced the scan. You can do this with properly prepared calibration tools. What about open source?

I agree that getting a measurement from SFM is possible. The problem is that we won’t know how inaccurate this measurement is, so in general it is useless for archiving.

I would be very happy to know your way of dealing with this problem.

And this is my problem… I’m not a coder. This is painful for me, but I have no time to change it at the moment…

I never said they were! I’m going to try Colmap; it is waiting on some tests with 360 cameras. What would you recommend starting with?


(Mesheritage) #4

Hello,

The documentation is simply various research papers and books on imaging/computer vision, not really “user friendly” if you are not into coding. But this is something we should try to change, I guess. Often the articles using SFM photogrammetry for archaeology are very simple or mix different terms. We even end up having different notions.
Just to clarify what I mean by these terms: photogrammetry is measuring things from images, while SFM is assessing the positions of image information based on the different camera locations. Therefore, the point of SFM is to get accurate camera positions.
I think nowadays the algorithms are strong enough to get accurate camera positions if your images are good enough; I got amazing results with OpenMVG. If you want very accurate results that would need targets, I would advise against even using Photoscan. The fact that they hide all their code and offer very few options doesn’t help you trust it.

Then you can link the coordinates of an object in the world to those in your model. See, for example, “camera to world coordinates” as used in standard stereo vision.
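As a minimal sketch of that camera-to-world conversion, assuming the common convention (COLMAP uses it, for instance) where a pose stores R and t such that X_cam = R · X_world + t:

```python
import numpy as np

# Hypothetical pose from an SFM export: rotation R and translation t,
# with the convention X_cam = R @ X_world + t
R = np.eye(3)                      # placeholder rotation
t = np.array([0.1, -0.2, 1.5])     # placeholder translation

# Camera centre in world coordinates: C = -R^T t
C = -R.T @ t

# A point expressed in camera coordinates, mapped back to the world frame
X_cam = np.array([0.0, 0.0, 2.0])
X_world = R.T @ (X_cam - t)
print("camera centre:", C)
print("point in world frame:", X_world)
```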

After that, the alignment and the scale are simply a change of coordinates in the point cloud (or any other spatial data). I think you can look at CloudCompare, which has some options for that; Meshlab has a straightforward tool for scaling.
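Scaling really is just one multiplication. A sketch, assuming you know the true length of something visible in the cloud:

```python
import numpy as np

# Exported cloud as an N x 3 array (hypothetical file, one "x y z" per row)
points = np.loadtxt("cloud.txt")

# Two picked points spanning a reference of known real-world length
a = np.array([0.120, 0.980, 0.450])   # made-up picked coordinates
b = np.array([0.510, 1.020, 0.445])
known_length_m = 0.5                  # e.g. a 50 cm scale bar

# Uniform scale factor bringing the whole cloud into metres
s = known_length_m / np.linalg.norm(b - a)
np.savetxt("cloud_metric.txt", points * s)
```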

If you are looking for markers or for a static rig (where you compute the camera positions from a reference, not directly on the object), openMVG can do that as well.

As for the error: you might get the error of the camera positions if you use targets, but you can’t use that information to assess the 3D accuracy of your mesh or points. That will depend on the details, which are the most important thing for accurate 3D measurement.

There is a project called “Tanks and Temples” which tries to assess the accuracy of different algorithms: https://www.tanksandtemples.org/leaderboard/
It’s nice, but here as well it doesn’t take everything into account; it’s also crowdsourced, so I feel some entries are cheating a bit. The results I got from Altizure, or the ones I saw, are not that amazing. It’s a nice platform, but very limited in the free version (obviously!).
There are a lot of parameters that impact the accuracy; the question is, how do you compare an expert with a very expensive computer to a simple user?
This is also a problem for Photoscan: very few parameters to play with.

For the lighting issue, you can partially solve it with image sequences. For example, this model:

I made it from a ~20 s video on my phone, so very low quality. As you can see in the comments, one side is dark because of the overexposure caused by the sun reflecting through the window. But because there is a sequence, it can adapt nicely.
It works for underwater too; often the texture is quite bad because the video quality we are using is really low (1080p max, if we are lucky…).
I would love to test more underwater footage, but I have only had access to one clip so far…

One thing to keep in mind is that the feature-tracking algorithms have improved, as has the computational power. This makes it possible to get reliable results without necessarily having a “physical reference”, which in the past would have been just too computationally intensive to run through the optimisation.
For example, SIFT performs well under changes in intensity, but it is patented, so commercial software can’t use it (research/personal use is fine).
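For reference, the feature-detection step looks roughly like this with SIFT in OpenCV (a sketch; in builds before OpenCV 4.4 SIFT lives in the contrib module because of the patent):

```python
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their 128-dimensional descriptors
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")
```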

Now, even though I got reasonable camera positions with the underwater footage, again, the texture was terrible. We could do some extra processing on the images, but I have never explored that.

Photogrammetry/SFM depends a lot on the object, the light, the parameters and the image coverage, which makes it very hard to predict a certain level of accuracy. The good thing is that when it doesn’t work, it tends to mess up in a very explicit way!

Maybe we should make an open source package for archaeologists/heritage professionals to avoid the “coding pain”. For example, I saw some people doing masking by hand when it can all be done automatically. Actually, many departments are producing this kind of software, a bit like Meshlab, but I have no idea why these tools are not more used, or at least better known (I rarely hear about them from users).
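Even a crude automatic mask is only a few lines with OpenCV, for instance (a sketch assuming an object on a plain background; real pipelines do something smarter):

```python
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's threshold as a crude automatic object/background mask
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("photo_mask.png", mask)
```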


(Michal Zurawski) #5

Sounds great, but it seems extremely hard.

To begin, a few words about underwater photogrammetry and using footage as a source for SFM.

I have just finished processing a video of an old Gothic church interior captured with a 360 camera. The light conditions were terrible, but I finished the reconstruction. With good light conditions and a better stitcher, I see a huge potential for capturing interiors extremely fast. This weekend I will give 360 HDR a try, which I’m going to capture in the catacombs of this church. But in that case I won’t use footage.

I don’t know if you know this, but I think it can be a nice thing for you if you are interested in extracting frames from video. I have used it; it works. With RAW you will have much better data: https://magiclantern.fm/
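If anyone prefers to script the frame extraction, something like this minimal OpenCV sketch would do it (assuming an ordinary video file; Magic Lantern RAW would need converting first):

```python
import cv2

cap = cv2.VideoCapture("footage.mp4")   # hypothetical input file
frame_idx = saved = 0
step = 15                               # keep roughly every 15th frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        cv2.imwrite(f"frame_{saved:05d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"saved {saved} frames")
```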

I will ask my friend about some underwater footage. Maybe I will get some.

The biggest thing in underwater photogrammetry is calibration of the camera. Take a look at https://www.mdpi.com/books/pdfview/book/214 ; I can recommend it. https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W5/index.html also has some nice information. One thing you will notice: the screenshots of photogrammetric software are nothing but… Agisoft Photoscan screenshots. Here https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W5/67/2015/isprsarchives-XL-5-W5-67-2015.pdf a calibration rig is explained, with the error measured in Agisoft. That is why I use Agi as a reference: I have a basic personal license for this software, I have worked with the Pro version before, and every serious source I found refers to it. Another example: http://culturalheritageimaging.org/. Photoscan has the tools to carry you through the full photogrammetric process.
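For anyone curious what that calibration step actually does: OpenCV’s checkerboard routine estimates the intrinsics and the distortion from images of a known pattern. A sketch (underwater you would shoot the pattern through the same housing and water):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                          # inner corners of the checkerboard
# 3D positions of the board corners in the board's own plane
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.jpg"):     # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]         # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix, dist the distortion coefficients;
# rms is the overall reprojection error in pixels
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```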

Got it. So let me expand on it: photogrammetry is measuring things from images, but photogrammetry for cultural heritage preservation or architecture must be enriched with error data, a scale captured on the scene, lens calibration, and points in every photo that can be checked manually. Only then do we know that the reconstruction was successful. For archiving and analysis it is better to have a model with a measured 10 mm error than one with a possible 0.1 mm error.

So it means that using markers in openMVG and a known-length object in Meshlab, we can get some useful data. What about error information? I’m not talking about model aberration, but about the measured error between points A and B. For example: I have a scale bar made from aluminium with coded targets printed on it. Is it possible to get information about the measurement error in openMVG?
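To make it concrete, this is the kind of check I mean, a sketch with made-up numbers, assuming the two coded targets of the bar can be located in the scaled cloud:

```python
import numpy as np

# Reconstructed positions of the bar's two coded targets (made-up values),
# after the cloud has been scaled to metres
target_1 = np.array([0.000, 0.000, 0.000])
target_2 = np.array([0.498, 0.011, 0.002])

nominal_length_m = 0.500                  # certified length of the bar
measured = np.linalg.norm(target_2 - target_1)
error_mm = abs(measured - nominal_length_m) * 1000
print(f"measured {measured:.4f} m, error {error_mm:.2f} mm")
```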

I think it can be caused by the IOR and the lack of artificial light; the full spectrum of light is lost just a few metres below the surface. What you can do is not use a texture at all: you bake the texture from the vertex colors.

This is a plane with textures baked from the vertex colors of a 3D scan. The background is a real photo and an HDRI.