Simple Pano Stitcher

In the past I’ve received emails from people on the OpenCV Yahoo group asking for help with panoramic stitching, which I answered to the best of my knowledge, though the answers were theoretical. Being the practical man I like to think of myself as, I decided to finally get down and dirty and write a simple panoramic stitcher that should help beginners venturing into panoramic stitching. In the process I learned some new things along the way. I believe OpenCV is scheduled to include some sort of panoramic stitching engine in a future release. In the meantime you can check out my Simple Pano Stitcher program here:

Last update: 31/03/2014


It uses OpenCV to do fundamental image processing stuff and includes a public domain Levenberg-Marquardt non-linear solver.

To compile the code just type make and run bin/Release/SimplePanoStitcher. It will output a bunch of layer-xxx.png images. I’ve included sample images in the sample directory, which the program uses by default. Edit the source code to load your own images. You’ll need to know the focal length in pixels used to capture each image.

To keep things simple I only wrote the bare minimum to get it up and running; for more information, check the comments at the top of main.cpp.

Results with Simple Pano Stitcher

Below is a sample result I got after combining all the layers, by simply pasting them on top of each other. No fancy blending or anything. On close inspection there is some noticeable parallax. If I had shot on a tripod instead of hand held, it would be less noticeable.

Sample results from Simple Pano Stitcher


One of the crucial steps to getting good results is the non-linear optimisation step. I decided to optimise the focal length, yaw, pitch, and roll for each image: a total of 4 parameters. I thought it would be interesting to compare results with and without any optimisation. For comparison I’ll be using the full size images (3872 x 2592), which are not included in the sample directory to keep the package size reasonable. Below is a close up of the roof of the house without optimisation.

Panoramic stitching without non-linear optimisation

The root mean squared error reported is 22.5008 pixels. The results are without a doubt awful! Looking at it too long would cause me to require new glasses with a stronger optical prescription. I’ll now turn on the non-linear optimisation step for the 4 parameters mentioned previously. Below is what we get.

Panoramic stitching with non-linear optimisation

The reported root mean squared error is 3.84413 pixels. Much better! Not perfect, but much more pleasant to look at than the previous image.

There is no doubt a vast improvement from introducing non-linear optimisation to panoramic stitching. One can improve the results further by taking radial distortion into consideration, but I’ll leave that as an exercise for the reader.

I originally got the focal length for my camera lens from calibration using the sample program (calibration.cpp) in the OpenCV 2.x directory. It performs a standard calibration using a checkerboard pattern. Yet even then its accuracy is still a bit off. This got me thinking: can we use panoramic stitching to refine the focal length? What are the pros and cons? The images used in the stitching process should ideally be scenery with features that are very far away, so that parallax is not an issue, such as an outdoor landscape of mountains. Seems obvious enough; there are probably papers out there but I’m too lazy to check at this moment …

34 thoughts on “Simple Pano Stitcher”

  1. Nice project. I’m studying it since I’m interested in this kind of thing.
    However, there are a few things I didn’t really understand; it would be nice if you could clarify them a bit.

    1) When you first find the optimal rotation in the function FindBestRotation, I can’t understand why, after finding the matrix R, you build up the matrix transform = T1*R33*T2. What is this matrix? Is it still a rotation matrix? And how do you extract yaw, pitch and roll from it? It looks strange to me that yaw is simply transform(0,2) and roll is transform(1,2).

    2) Can you please explain the reasoning behind the function Angles2Pixel? What does it mean to apply (rotate) yaw and pitch by an angle roll?

    Thanks a lot. And thanks for your very nice and interesting blog.


    1. Hi there,

      Good questions, it actually took me some time to remember my own code!

      1) Have a look at this diagram I drew up

      In the diagram I’m trying to align the blue image to the green image. The first step I did is to convert the image points to ‘panoramic space’. Panoramic space is still 2D but the (x,y) values now directly represent the yaw/pitch angles instead.

      I then find the ‘Rigid’ transform between the matching points in panoramic space. It’s called a rigid transform because it finds the best rotation/translation to apply, without distorting the shape of the points. More info on this transform is found here

      The rotation R from FindBestRotation is a 2×2 2D rotation matrix only. You need a 3×3 transformation matrix to fully describe rotation/translation in 2D. You can read Mat transform = T1*R33*T2 verbally as:

      1. Translate all the points to the centre T2
      2. Rotate the points by R33
      3. Translate all the points back to centre T1

      Helps if you can visualise the 3 steps.

      ‘transform’ includes rotation and translation information. Referring back to the PDF diagram, the translation in the (x,y) direction is directly the (yaw,pitch) angles. The roll is from the 2×2 rotation matrix.

      2) Angles2Pixel is a bit of convoluted maths that is essentially finding out the conversion from (yaw,pitch) + roll in panoramic space to 2D pixel points in image space.

      Roll is not actually part of the panoramic space point, since the panoramic space is only a 2D space consisting of (yaw,pitch). Roll is a transform applied to (yaw,pitch). That’s why I apply a -roll to undo the effect, basically ‘straightening’ the panoramic image in panoramic space, before doing further maths.

      It’s a bit tricky trying to explain with so little visual aid. Hope you can make sense of it 🙂

  2. Ok, thanks for taking the time to try to explain it. I didn’t realize that you aligned points directly in the (yaw, pitch) domain. That’s why I found it strange the way you got yaw and pitch from the transform matrix. Some papers I have seen usually estimate the 3D rotation between points by converting them to pixel rays (x-xc)/f , (y – yc)/f and then solving the orthogonal procrustes problem, so I thought it was something similar.

  3. I’m a beginner in this field; your work is very interesting.

    I have a question regarding the FindBoundingBox function. What does this line mean?

    if(img.at<Vec3b>(y,x)[0] > 0 || img.at<Vec3b>(y,x)[1] > 0 || img.at<Vec3b>(y,x)[2] > 0)

    Why is the (y,x) > 0 check repeated three times?

    Please bear with me. Thanks and appreciated.

      1. Thanks for the immediate reply. Can you discuss this part of the code?

        for(int y=0; y < img.rows; y++) {
            for(int x=0; x < img.cols; x++) {
                if(img.at<Vec3b>(y,x)[0] > 0) {
                    if(x < *x1) *x1 = x;
                    if(y < *y1) *y1 = y;
                    if(x > *x2) *x2 = x;
                    if(y > *y2) *y2 = y;

        What does the 0 in the initial if statement mean? Thanks for the help.

  4. Ohh.. sorry for spamming too many messages. There was a mistake in my reply..

    My question is: can you discuss how FindBoundingBox works?
    What does the initial if statement mean?

    1. It just finds the minimum bounding box around a set of pixels that are greater than zero. For example in 1D if x = 16, 8 , 4, 100, 2, 0, then [0, 100] is the min/max value that “surround” the numbers. Same thing in 2D.

  5. Maybe we can use panoramic images to do self-calibration?
    And does the panoramic image FOV need to occupy 360 degrees?

    1. I’ve actually used panoramic images to do self-calibration at work. It doesn’t have to occupy 360 degrees. It works well with about 6-10 images, each having around 20% overlap. But I had to use the Nelder-Mead simplex method to optimise; it’s much more reliable than lmfit but slower. A good initial guess is required though.

  6. Pretty cool code you have here. Unfortunately I am not able to get it working on my computer; it reports some linking issues between x32 and x64.

    I wanted to know roughly what the computation time is, say on an Intel i3 machine? I am looking at stitching a large number of images, and as the number of images increases the computation time has been increasing exponentially. I am presently using the stitching module of OpenCV.


  7. Hello,
    I’m trying to build your project but I’m getting errors like:
    Error 1 error C2039: ‘info’ : is not a member of ‘lm_status_struct’ C:\…\lmfit\lmmin.cpp 181 1 SimplePanoStitcher
    Error 3 error C2039: ‘maxcall’ : is not a member of ‘lm_control_struct’ C:\…\lmfit\lmmin.cpp 196 1 SimplePanoStitcher

    It’s in lmmin.cpp. lm_control_struct really doesn’t have the “info, maxcall, printflags” fields..

  8. Hello,
    I’m trying to build your project but I’m getting errors like:
    Error 1 error LNK2019: unresolved external symbol _lmmin referenced in function _main
    Error 2 error LNK2001: unresolved external symbol _lm_control_double
    Error 3 error LNK1120: 2 unresolved externals

  9. I appreciate your explanation and code, but when I try to build the given code there are some errors I am unable to resolve:
    1>main.obj : error LNK2028: unresolved token (0A0006DA) “extern “C” void __cdecl lmmin(int,double *,int,void const *,void (__cdecl*)(double const *,int,void const *,double *,int *),struct lm_control_struct const *,struct lm_status_struct *)” (?lmmin@@$$J0YAXHPANHPBXP6AXPBNH10PAH@ZPBUlm_control_struct@@PAUlm_status_struct@@@Z), referenced in function “int __cdecl main(int,char * *)” (?main@@$$HYAHHPAPAD@Z)
    1>main.obj : error LNK2020: unresolved token (0A000753) lm_control_double
    1>main.obj : error LNK2019: unresolved external symbol “extern “C” void __cdecl lmmin(int,double *,int,void const *,void (__cdecl*)(double const *,int,void const *,double *,int *),struct lm_control_struct const *,struct lm_status_struct *)” (?lmmin@@$$J0YAXHPANHPBXP6AXPBNH10PAH@ZPBUlm_control_struct@@PAUlm_status_struct@@@Z), referenced in function “int __cdecl main(int,char * *)” (?main@@$$HYAHHPAPAD@Z)
    1>main.obj : error LNK2001: unresolved external symbol _lm_control_double
    1>C:\Users\mhh\Desktop\SimplePanoStitcher\VisualStudio2010\Debug\SimplePanoStitcher.exe : fatal error LNK1120: 4 unresolved externals

    Thank you for your help! Or could you give me a hint about where to dig?

  10. Sorry to trouble you again. It now compiles successfully, but when I press F5 or Ctrl+F5 in VS2010 I get a “Debug Error! … R6010 - abort() has been called”, and the console reports “Assertion failed: detector, file …\simplepanostitcher\main.cpp, line 261”. Stepping through the debugger everything looks fine and the pictures are read successfully, but when the program reaches the FindMatches() call at line 261 of main() and hits “assert(detector);”, it triggers the breakpoint.

    Is this a known problem, and what can I do next? Thanks a lot.

    1. I think the OpenCV you are using must be newer than mine. I suspect you need to enable the nonfree lib. Make sure you are linking against opencv_nonfree.

      1. Yes, I am linking against opencv_nonfree2411d.lib, and I can successfully use the OpenCV Stitcher class, so my OpenCV must be newer than yours. I’m not sure I understand what you mean, sorry. Do I need to download the old version?

  11. Thanks! It worked. Really appreciate your help.
    The program now runs without any errors, even though I still don’t know why it threw the breakpoint.
    First: I downloaded the code you provided and extracted the source.
    Second: I opened SimplePanoStitcher.sln.
    Third: I added the source files under lmfit/ that weren’t in the project initially, except lmmin.cpp, just as you said.
    Fourth: I configured the local OpenCV “include” and “lib” paths on my own computer.
    Fifth: I changed opencv_core242.lib to the 2411 version, along with the other five *.lib files including opencv_nonfree2411.lib (and turned off the CLR option in the compiler, which can’t compile C files).
    Now: F7 + F5 and everything succeeds!
    I’m so happy, thanks a lot, and I will continue to learn from you!

    If someone has the same problems, the words above may help you. Or maybe no one is as silly as me, since nobody else asked NghiaHo so many questions, but it took me a lot of hours.

  12. Hi,
    I am doing my final-year project in digital image processing, on video stitching. Can anyone please help me with the code? I am planning to do it with OpenCV, which I am actually new to. The videos I have taken are not aligning correctly, and the code I have downloaded from the net contains lots of errors. Please help me.

  13. Dear NghiaHo,
    First: in your program we can’t simply replace the sample\*.jpg images, because the user also needs to write the info.txt file. My question is: how can I find out the focal length of pictures taken with my cellphone? You said it can be calculated with calibration.cpp in OpenCV 2.x, but that is hundreds of lines with difficult parameters. Is there an easier way to get the focal length with OpenCV?

    Second: if I solve the focal length problem I should get the layer-00*.png files, but why didn’t you produce a full panoramic picture at the end, instead of just printing the panorama width and height? How can I paste the layers together?

    Third: when I use the OpenCV Stitcher class, stitcher.stitch(imgs, pano) runs successfully on the sample/*.jpg images and the result is good, but on the resulting layer-00*.jpg images it throws a cv::Exception: Assertion failed (y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0])) in cv::Mat::ptr, file C:\buildslave64\win64_amdoc1\2_4_PackSlave-win32-vc10-shared\opencv\modules\core\include\opencv2/core/mat.hpp, line 429.

    Thank you very much for taking time out of your busy schedule to reply; any tips will be a great help!

    1. Hi,

      1. If you are lucky you can find manufacturer specs for the camera sensors + lens and derive the focal length from that. Alternatively, use a chessboard pattern and calibrate it.

      2. I can’t remember the reason for this. Maybe I didn’t want to write code to do blending.

      3. I never used the OpenCV stitcher class, so I have no knowledge of it.

  14. Nicely written. Thanks.
    Is there any reference publication on the implemented method?
    The way it works in panoramic space raises some complications for me.
    I am wondering whether finding the best rotation (in panoramic space) is still required after finding the homography?

    1. I don’t have any reference. Just something obvious I came up with one day. If you stitch using a homography you’ll be limited to a rectilinear projection.

  15. I guess this uses the non-free section of OpenCV, which was removed in OpenCV 3. Can you suggest what has to be changed to try out this code with OpenCV 3?

  16. Hi,
    I’m doing my image stitching project. May I ask whether your stitching code works for full 360-degree panoramas, or only for wide-view stitching?
    Thank you for reading my question! :)
