So I found some spare time this weekend to work on RunSFM. I’m working on a version of RunSFM that supports larger datasets, tentatively called RunSFM_Large. It’s just going to be another script that the user can call.
The reason behind RunSFM_Large, besides the obvious need for one, is that I’m working on a uni project on the side that requires reconstructing a large collection of images (> 1000 8MP images). And by working I mean doing a freebie for fun 🙂 Here is what I’ve implemented in RunSFM_Large so far (but not well tested):
- Reconstruction of large datasets by splitting them into smaller, manageable ‘bundles’ (chunks, subsets, batches …) and re-combining them later. I’ll probably stick with the term bundle, since it’s somewhat consistent with the project.
- Recovery support for SiftMatcher that allows it to resume from the last successful match. The need for recovery support is, again, obvious: when you’ve got a dataset that takes days or weeks to run, the last thing you want to worry about is a power failure stuffing up everything.
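To give an idea of the splitting step, here’s a minimal Python sketch (not RunSFM_Large’s actual code, and the bundle size and overlap values are made-up defaults). The overlap gives neighbouring bundles some shared images, which is what you’d rely on to align the per-bundle reconstructions when re-combining:

```python
def split_into_bundles(images, bundle_size=100, overlap=20):
    """Split an ordered image list into overlapping bundles.

    Neighbouring bundles share `overlap` images so their partial
    reconstructions can later be aligned and merged. The parameter
    names and defaults here are illustrative only.
    """
    bundles = []
    step = bundle_size - overlap
    for start in range(0, len(images), step):
        bundles.append(images[start:start + bundle_size])
        if start + bundle_size >= len(images):
            break
    return bundles

images = [f"img_{i:04d}.jpg" for i in range(250)]
bundles = split_into_bundles(images, bundle_size=100, overlap=20)
# 250 images -> 3 bundles, each sharing 20 images with its neighbour
```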
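The resume-from-last-match idea can be sketched roughly like this (again a hypothetical Python illustration, not the actual SiftMatcher code — the log filename and `match_pair` callback are stand-ins). Each successful pair is appended to a log and flushed straight away, so a power failure loses at most the pair in flight:

```python
import os
from itertools import combinations

def load_done(log_path):
    """Return the set of image pairs already matched, from the log."""
    if not os.path.exists(log_path):
        return set()
    with open(log_path) as f:
        return {tuple(line.split()) for line in f if line.strip()}

def match_all(images, match_pair, log_path="matches_done.txt"):
    """Match every image pair, skipping pairs recorded as done.

    `match_pair(a, b)` stands in for the real SIFT matching call.
    A completed pair is logged and flushed immediately so a crash
    only ever costs the pair currently being matched.
    """
    done = load_done(log_path)
    with open(log_path, "a") as log:
        for a, b in combinations(images, 2):
            if (a, b) in done:
                continue
            match_pair(a, b)
            log.write(f"{a} {b}\n")
            log.flush()
```

On restart you just call `match_all` again with the same log path and it picks up where it left off.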
Things to do:
- Recovery support for the Bundler and PMVS stages. This one should be easy because I only need to keep track of which bundles have been completed.
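Tracking completed bundles could be as simple as dropping a marker file per bundle per stage — a hedged sketch of that idea (the marker naming and `run` callback are hypothetical, not what RunSFM_Large will necessarily do):

```python
import os

def is_done(bundle_dir, stage):
    """True if `stage` (e.g. 'bundler' or 'pmvs') finished for this bundle."""
    return os.path.exists(os.path.join(bundle_dir, f".{stage}.done"))

def mark_done(bundle_dir, stage):
    """Drop an empty marker file once the stage completes."""
    open(os.path.join(bundle_dir, f".{stage}.done"), "w").close()

def run_stage(bundle_dirs, stage, run):
    """Run `run(bundle_dir)` for each bundle not yet marked done."""
    for d in bundle_dirs:
        if is_done(d, stage):
            continue
        run(d)
        mark_done(d, stage)
```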
I think RunSFM_Large should work well once it’s done, with some compromise on accuracy. I’m already seeing slight imperfections on a small dataset. It might be an issue with how I’m doing the global optimisation.