Hello everyone, I'm new here.
I am stuck at the beginning of a project to build a custom/unique PC case based on the left engine of the Whizzing Arrow II from the show Oban Star Racers. All I have to work from is a low-res rotating GIF of it.
I have split the GIF into its 240 frames, meaning each image is rotationally offset by 360/240 = 1.5 degrees.
When I put the images into any photogrammetry software I can find, the calculated camera motion is at best a half-circle and at worst garbage. This is down to the lack of surface features: the original model's texture is basically flat colours. The best result I've gotten so far is with Meshroom, which is why I'm asking here, and also because its pipeline is modular/node-based.
Is there any way to feed this known positional data into the Meshroom pipeline? I've seen a couple of GitHub issues opened on the topic, but as far as I can tell they were all closed without stating the solution.
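For reference, here's roughly how I can generate the pose data I already have, so it should be easy to convert once someone points me at the right format. The JSON layout below just mimics what I think Meshroom's cameras.sfm looks like (rotation as 9 row-major values, camera center as 3 values, poses marked "locked"), so treat the schema as a guess, and the radius/height/angles are placeholder values:

```python
import json
import math

# Placeholder orbit parameters; I'd substitute my real estimates here.
RADIUS, HEIGHT = 10.0, 0.0

def rot_y(a):
    """Rotation matrix about the vertical (y) axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

poses = []
for i in range(240):
    yaw = math.radians(1.5 * i)          # one pose per 1.5-degree frame
    R = rot_y(yaw)                       # pitch/roll omitted in this sketch
    center = [RADIUS * math.sin(yaw), HEIGHT, RADIUS * math.cos(yaw)]
    poses.append({
        "poseId": str(i),
        "pose": {
            "transform": {
                "rotation": [str(v) for row in R for v in row],
                "center": [str(v) for v in center],
            },
            "locked": "1",  # hint to the solver not to re-estimate this pose
        },
    })

# Dump in a cameras.sfm-like shape; the real schema is best copied from a
# cameras.sfm that CameraInit produces for your own project.
with open("known_poses.sfm", "w") as f:
    json.dump({"version": ["1", "0", "0"], "poses": poses}, f, indent=2)
```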
What's more, while I know the camera coordinates (x, y, z) as well as its yaw and roll, I don't know its pitch. Does anyone know how I can calculate it from the images?
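One idea I've been toying with (not sure it's sound): if I track a single feature across all 240 frames, it should trace out a circle around the rotation axis, which projects to an ellipse in the image. Under a weak-perspective assumption, the minor/major axis ratio of that ellipse is sin(pitch). Since the frames sample the rotation uniformly, the axis lengths fall straight out of an SVD of the centred track:

```python
import math
import numpy as np

def pitch_from_track(points):
    """Estimate camera elevation (pitch) in degrees from one feature tracked
    over a full revolution. Assumes weak perspective: the circular path
    projects to an ellipse with axis ratio sin(pitch)."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)  # centre the track on the ellipse's centre
    # For uniformly-sampled ellipse points, the singular values are
    # proportional to the semi-axes, so their ratio is minor/major.
    s = np.linalg.svd(pts, compute_uv=False)
    return math.degrees(math.asin(min(s[1] / s[0], 1.0)))

# Synthetic sanity check: a point on a circle of radius 4,
# viewed from 30 degrees above the rotation plane.
theta = np.radians(np.arange(0, 360, 1.5))  # the 240 frame angles
track = np.column_stack([4 * np.cos(theta),
                         4 * np.sin(theta) * math.sin(math.radians(30))])
print(pitch_from_track(track))  # ≈ 30
```

On the real low-res frames the track will be noisy, but averaging over several tracked features should help.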
My main objective is to recover the curvature of the inlet cowl so I can 3D-print it.
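For what it's worth, once I have even a handful of points along the cowl's cross-section (from a partial mesh or traced off the frames), I plan to fit a circle to them with a least-squares (Kåsa-style) fit and take curvature = 1/radius. The sample points below are made up:

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + c,
    then recover the centre (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2.0, cy2 / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Made-up data: half an arc of radius 5 centred at (1, 2), as if only part
# of the cowl's cross-section were visible.
theta = np.linspace(0, np.pi, 50)
xs = 1 + 5 * np.cos(theta)
ys = 2 + 5 * np.sin(theta)
cx, cy, r = fit_circle(xs, ys)
print(cx, cy, r)  # ≈ 1, 2, 5; curvature = 1/r
```

A partial arc is enough for this fit, which matters since the cowl is only ever seen from outside.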
If I get anything more, great. If not, that's fine - I can work with that.