The information presented here is based on the Wikipedia page on Gauss-Newton.

The Gauss-Newton algorithm is a simple method for solving non-linear least squares problems, typically expressed mathematically as:

$$\min_{\beta} S(\beta) = \sum_{i=1}^{m} r_i^2$$

where $S$ is the sum of the squared residuals. The residual, $r_i$, is the difference between the actual observed value and the value predicted by the model:

$$r_i = y_i - f(x_i, \beta)$$

where

- $y_i$ is the observed value
- $x_i$ is the input value
- $f$ is the function that we are trying to fit the data to
- $\beta$ is the vector of model parameters that we are trying to find, to minimise $S$

Starting with an initial guess $\beta^{(0)}$, the Gauss-Newton algorithm will iteratively refine $\beta$ as follows:

$$\beta^{(s+1)} = \beta^{(s)} + \Delta$$

$\Delta$ can be thought of as the **"**correction" applied to $\beta^{(s)}$ to get it closer to the solution. In terms of the Jacobian matrix $J_f$ of $f$, $\Delta$ satisfies:

$$\left(J_f^T J_f\right) \Delta = J_f^T r$$

Solving for $\Delta$:

$$\Delta = \left(J_f^T J_f\right)^{-1} J_f^T r$$

where $J_f^T$ is the transpose of the Jacobian and $\left(J_f^T J_f\right)^{-1}$ is the inverse of $J_f^T J_f$.

I don’t know about most people, but whenever I see a bunch of matrix symbols clumped together it more often than not leaves me in a state of confusion, particularly about the matrix dimensions. For a better understanding of the matrices I’ll illustrate with an example. Suppose we want to do a quadratic fit to a set of data containing 100 points. A quadratic function has 3 parameters (A, B, C):

$$f(x) = Ax^2 + Bx + C$$

thus the Jacobian $J_f$ is 100×3, the residual vector $r$ is 100×1, and $J_f^T J_f$ ends up a 3×3 matrix.

One important note about the algorithm is that there is no guarantee it will converge: each iteration will not necessarily improve the sum of squared residuals. You can see this behaviour in my demo code. A plot of the mean square error vs iterations for my demo code is illustrated in the graph below. The code tries to fit a function that is sensitive to the initial estimate.

It is important to initialise the algorithm as close to the true parameter values as possible; this will improve your chances of finding the correct solution.

# Code

You’ll need OpenCV 2.x installed. On Linux just type **make** to compile and run. Replace the function Func() with your own to play around with it.

The program is very minimal and should be more or less self-explanatory from the code, hopefully.

Hi, good job there! It seems quite clear. Can you provide a new download link? The one here is broken 🙁

Thanks for letting me know. Link fixed.

Good day. I’ve been able to download OpenCV 2.4.4-beta. It contains so many files; I opened them all but couldn’t see an executable file to install so I can use the code as directed. Can you kindly assist with how to install OpenCV 2.x and how to go about the algorithm?

Thank you.

My experience is with Linux only. I always do:

```shell
# go to the opencv directory
mkdir build
cd build
cmake ..
make
make install
```

Hi,

The algorithm is clear but I still have some doubt: how can we be sure that J^{T}J has an inverse? Thank you for sharing. I was considering some nonlinear regression when I ran into your blog. 🙂

Best regards,

Thinh Tran

A quick check is to calculate the determinant of the matrix and see if it’s non-zero (invertible, good) or zero (non-invertible).

Good day! Thanks for your help.

I have a project about a PEM fuel cell.

I need to draw the profile of lambda, which is a property of water. The parameter is lambda and I want to have 41 nodes in the system.

Can you explain more specifically why it has a 41×1 matrix?

Umm, not sure if I understand you correctly. I assume your system has 41 data points, hence the 41×1 matrix?

In your code:

```cpp
Mat delta = ((Jf.t()*Jf)).inv() * Jf.t()*r;
params += delta;
```

But as the Wikipedia page says, params(s+1) = params(s) − delta.

How can your code be right with a plus instead of a minus?

Look at the Wikipedia page history: someone changed the + to a minus, which is incorrect. I’ve checked this against other PDFs on the topic.
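For what it’s worth, the sign depends on whether the Jacobian is taken of the model $f$ or of the residual $r$. A short check (my addition), assuming the residual convention used in this post, $r_i = y_i - f(x_i, \beta)$:

```latex
% With r_i(\beta) = y_i - f(x_i, \beta), the residual Jacobian is the
% negative of the model Jacobian:
\frac{\partial r_i}{\partial \beta_j}
  = -\frac{\partial f(x_i, \beta)}{\partial \beta_j}
  \quad\Longrightarrow\quad J_r = -J_f
% The Gauss-Newton step written with the residual Jacobian carries a minus:
\Delta = -\left(J_r^T J_r\right)^{-1} J_r^T r
% Substituting J_r = -J_f flips the sign:
\Delta = +\left(J_f^T J_f\right)^{-1} J_f^T r
% so with J taken as the Jacobian of f, "params += delta" is the right update.
```

Both conventions give the same algorithm; the + and − versions only look contradictory because they define the Jacobian differently.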

Thanks for your tutorial and your code which are both clear and easy to understand.

I have only one question: how do we get the equation above “Solving for \Delta”?

It’s from the Wikipedia page. But I think the page may have changed over the years; it looks different from what I remember.

Hello, I need the Gauss-Newton algorithm to minimize reprojection error to estimate camera motion. I’ve seen your code but I don’t understand how I can use it for my purpose.

Can you help me please?

Modify the Func() function. It needs to return a single error value given your input and parameters.

Thank you for your kind support.

Which value do I have to assign to the initial guess if the images are rectified? I’m working on visual stereo odometry.

You should look at OpenCV’s way of doing stereo rectification. I’d probably not use my own code for serious work. This is more for learning.

The method that I’ve implemented, based on a paper, is not provided by OpenCV. Could you help me with the motion estimation if I give you the paper’s name? I got something wrong in the implementation, but I’m not able to understand what. The number of inliers is very low, and the estimated trajectory is totally wrong.

Unfortunately, I don’t have much spare time these days. You might want to try posting on the OpenCV forum.

A pseudo-algorithm would also be fine for me. I have to compute the 6-DOF motion parameters; I need to use a windowed bundle adjustment algorithm that at each step uses two pairs of stereo images, current and previous, and uses Gauss-Newton to minimize a cost function. The algorithm uses a dual residual scheme.

I haven’t done any work in bundle adjustment so won’t be of help here.

Ok thanks anyway.

Hi, I’m trying to estimate camera motion and I have two equations with eight parameters that describe the 2D parametric image motion, but I don’t understand how I can implement Gauss-Newton if the parameters affect both equations. I hope you can help me with some suggestions!

This is a topic that I’ve dabbled in in the past but am not an expert in. You should check the Multiple View Geometry book for different minimization formulations for this problem. Gauss-Newton is one of many methods you can use for numerical optimization.