ICRA 2014 here we come!

Woohoo! Our paper “Localization on Freeways using the Horizon Line Signature” got accepted for the ICRA 2014 Workshop on Visual Place Recognition in Changing Environments! Now I just need to sort out funding. The conference registration is over $1000! Damn, that’s going to hurt my pocket …

This is the new video we’ll be showing at our poster stand

Old school voxel carving

I’ve been working on an old school (circa early 90s) method of creating 3D models called voxel carving. As a first attempt I’m using an academic dataset that comes with camera projection matrices provided. The dataset is from

http://www.cs.wustl.edu/~furukawa/research/mview/index.html

specifically the Predator dataset (I have a thing for Predator).

Here it is in action

Screen recording was done using SimpleScreenRecorder, which was slick and easy to use.

Download VoxelCarving.tar.gz

You will need OpenCV installed and the Predator images. Instructions can be found in the README.txt.
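For the curious, the classic silhouette-based flavour of voxel carving boils down to: project every voxel into every image and keep it only if it always lands inside the object’s silhouette. Here is a minimal sketch of that loop, assuming 3x4 projection matrices and binary silhouette masks have already been loaded; the names are illustrative and this is not the code in the tarball.

#include <opencv2/opencv.hpp>
#include <vector>

struct Voxel { double x, y, z; bool occupied; };

// P[i] is the 3x4 projection matrix (CV_64F) and sil[i] the binary silhouette
// mask (CV_8U, 255 = object) for view i. Every voxel starts as occupied.
void carve(std::vector<Voxel> &voxels,
           const std::vector<cv::Mat> &P,
           const std::vector<cv::Mat> &sil)
{
    for (size_t v = 0; v < voxels.size(); v++) {
        cv::Mat X = (cv::Mat_<double>(4,1) << voxels[v].x, voxels[v].y, voxels[v].z, 1.0);

        for (size_t i = 0; i < P.size(); i++) {
            cv::Mat x = P[i] * X;  // project the voxel centre into view i
            int px = cvRound(x.at<double>(0) / x.at<double>(2));
            int py = cvRound(x.at<double>(1) / x.at<double>(2));

            // Carve the voxel if it falls outside the image or outside the silhouette.
            if (px < 0 || py < 0 || px >= sil[i].cols || py >= sil[i].rows ||
                sil[i].at<uchar>(py, px) == 0) {
                voxels[v].occupied = false;
                break;
            }
        }
    }
}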

Fun with seam cut and graph cut

This fun little project was inspired by a forum post for the Coursera course Discrete Inference and Learning in Artificial Vision. 

I use the method outlined in Graphcut Textures: Image and Video Synthesis Using Graph Cuts.

The toy problem is as follows. Given two images overlaid on top of each other, with say a 50% overlap, find the best seam cut through the topmost image to produce the best “blended” looking result. No actual alpha blending is performed though!

This is a task that can be done physically with two photos and a pair of scissors.

The problem is illustrated with the example below. Here the second image I want to blend with is a duplicate of the first image. The aim is to find a suitable seam cut in the top image such that when I merge the two images it produces the smoothest transition. This may seem unintuitive without alpha blending, but it is possible depending on the image; not all types of images will work with this method.

apple_seamcut

By formulating the problem as a graph cut problem we get the following result.

graphcut_apple

And here is the actual seam cut, shown as a thin red line. You might have to squint.

graphcut_and_cut_line

If you look closely you’ll see some strangeness at the seam; it’s not perfect. But from a distance it’s pretty convincing.

Here are some more examples using the same method as above, that is: duplicate the input image, shift it by 50% in the x direction, and find the seam cut in the top layer image.

strawberry

graphcut_strawberry

This one is very realistic.

Who likes penguins?

penguins-795082

graphcut_penguin

Code

SeamCut.tar.gz

You’ll need OpenCV 2.x installed.

I’ve also included the maxflow library from http://vision.csd.uwo.ca/code/ for convenience.

To run call

$ ./SeamCut img.jpg
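For reference, here is a rough sketch of how such a graph can be wired up with the included maxflow library, using the matching cost from the Graphcut Textures paper (the cost of cutting between neighbouring pixels s and t is |A(s)-B(s)| + |A(t)-B(t)|). The layout, constraints and weights below are illustrative and not the exact code in the archive.

#include "graph.h"  // the bundled Boykov-Kolmogorov maxflow library
#include <opencv2/opencv.hpp>
#include <cmath>

typedef Graph<double,double,double> GraphType;

// Colour difference between the two source images at pixel (y, x).
static double pixdiff(const cv::Mat &A, const cv::Mat &B, int y, int x)
{
    cv::Vec3d a = A.at<cv::Vec3b>(y, x);
    cv::Vec3d b = B.at<cv::Vec3b>(y, x);
    cv::Vec3d d = a - b;
    return std::sqrt(d.dot(d));
}

// A and B are the two aligned, equally sized CV_8UC3 images in the overlap region.
// Returns a per-pixel label: 0 = copy from A, 1 = copy from B.
cv::Mat seamLabels(const cv::Mat &A, const cv::Mat &B)
{
    int W = A.cols, H = A.rows;
    GraphType g(W * H, 2 * W * H);
    g.add_node(W * H);

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int id = y * W + x;

            // Hard constraints: left border must come from A (source),
            // right border must come from B (sink).
            if (x == 0)     g.add_tweights(id, 1e9, 0);
            if (x == W - 1) g.add_tweights(id, 0, 1e9);

            // Smoothness edges between 4-connected neighbours.
            if (x + 1 < W) {
                double w = pixdiff(A, B, y, x) + pixdiff(A, B, y, x + 1);
                g.add_edge(id, id + 1, w, w);
            }
            if (y + 1 < H) {
                double w = pixdiff(A, B, y, x) + pixdiff(A, B, y + 1, x);
                g.add_edge(id, id + W, w, w);
            }
        }
    }

    g.maxflow();

    cv::Mat labels(H, W, CV_8U);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            labels.at<uchar>(y, x) = (g.what_segment(y * W + x) == GraphType::SOURCE) ? 0 : 1;
    return labels;
}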

Simple video stabilization using OpenCV

I’ve been mucking around with video stabilization for the past two weeks after a masters student got me interested in the topic. The algorithm is pretty simple yet produces surprisingly good stabilization for panning videos and forward moving ones (eg. on a motorbike looking ahead). The algorithm works as follows (a condensed code sketch follows the list):

  1. Find the transformation from previous to current frame using optical flow for all frames. The transformation only consists of three parameters: dx, dy, da (angle). Basically, a rigid Euclidean transform: no scaling, no shearing.
  2. Accumulate the transformations to get the “trajectory” for x, y, angle, at each frame.
  3. Smooth out the trajectory using a sliding average window. The user defines the window radius, where the radius is the number of frames used for smoothing.
  4. Create a new transformation such that new_transformation = transformation + (smoothed_trajectory - trajectory).
  5. Apply the new transformation to the video.
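Condensed, steps 1-4 look roughly like this. It’s a sketch assuming OpenCV 2.x’s estimateRigidTransform, with illustrative variable names, not the full videostab.cpp linked further down.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

struct Transform { double dx, dy, da; };

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    cv::VideoCapture cap(argv[1]);

    cv::Mat cur, prev, curGrey, prevGrey;
    cap >> prev;
    cv::cvtColor(prev, prevGrey, CV_BGR2GRAY);

    // Step 1: frame-to-frame transform (dx, dy, da), no scale or shear kept.
    std::vector<Transform> transforms;
    for (;;) {
        cap >> cur;
        if (cur.empty()) break;
        cv::cvtColor(cur, curGrey, CV_BGR2GRAY);

        cv::Mat T = cv::estimateRigidTransform(prevGrey, curGrey, false);
        if (T.empty()) { curGrey.copyTo(prevGrey); continue; }  // in practice, reuse the last good T

        Transform t;
        t.dx = T.at<double>(0, 2);
        t.dy = T.at<double>(1, 2);
        t.da = atan2(T.at<double>(1, 0), T.at<double>(0, 0));
        transforms.push_back(t);

        curGrey.copyTo(prevGrey);
    }

    // Step 2: accumulate the transforms into a trajectory (fields reused as x, y, angle).
    std::vector<Transform> trajectory;
    double x = 0, y = 0, a = 0;
    for (size_t i = 0; i < transforms.size(); i++) {
        x += transforms[i].dx; y += transforms[i].dy; a += transforms[i].da;
        Transform t = { x, y, a };
        trajectory.push_back(t);
    }

    // Step 3: smooth the trajectory with a sliding average of radius R frames.
    const int R = 30;
    std::vector<Transform> smoothed(trajectory.size());
    for (int i = 0; i < (int)trajectory.size(); i++) {
        double sx = 0, sy = 0, sa = 0;
        int n = 0;
        for (int j = -R; j <= R; j++) {
            int k = i + j;
            if (k < 0 || k >= (int)trajectory.size()) continue;
            sx += trajectory[k].dx; sy += trajectory[k].dy; sa += trajectory[k].da;
            n++;
        }
        Transform s = { sx / n, sy / n, sa / n };
        smoothed[i] = s;
    }

    // Step 4: new transform = original transform + (smoothed - original trajectory),
    // then warp each frame with cv::warpAffine (omitted here).
    return 0;
}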

Here’s an example video of the algorithm in action using a smoothing radius of ±30 frames.

We can see what’s happening under the hood by plotting some graphs for each of the steps mentioned above on the example video.

Step 1

This graph shows the dx, dy transformation from the previous to the current frame, at each frame. I’ve omitted da (angle) because it’s not particularly interesting for this video since there is very little rotation. You can see it’s quite a bumpy graph, which correlates with our observation of the video being shaky, though still orders of magnitude better than Hollywood’s shaky cam effect. I’m looking at you, Bourne Supremacy.

transformation

Step 2 and 3

I’ve shown both the accumulated x and y, and their smoothed version so you get a better idea of what the smoothing is doing. The red is the original trajectory and the green is the smoothed trajectory.

It is worth noting that the trajectory is a rather abstract quantity that doesn’t necessarily have a direct relationship to the motion induced by the camera. For a simple panning scene with static objects it probably has a direct relationship with the absolute position of the image, but for scenes with a forward moving camera, eg. on a car, it’s hard to see any such relationship.

The important thing is that the trajectory can be smoothed, even if it doesn’t have any physical interpretation.

smoothed_trajectory_x

smoothed_trajectory_y

Step 4

This is the final transformation applied to the video.

smoothed_transformation

Code

videostab.cpp

videostabKalman.cpp (live version by Chen Jia using a Kalman Filter)

You just need OpenCV 2.x or above.

Once compiled, run it from the command line via

./videostab input.avi

More videos

Footage I took during my travels.


Project 100k

We (being myself and my buddy Jay) have been working on a fun vision pet project over the past few months. The project started from a little boredom and lots of discussion over wine back in July 2013. We’ve finally got the video done. It demonstrates our vision based localisation system (no GPS) on a car.

The idea is simple: use the horizon line as a stable feature when performing image matching. The experiments were carried out on the freeway at 80-100 km/h (hence the name of the project). The freeway is just one long straight road, so the problem is simplified and constrained to localisation on a 1D path.
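To give a flavour of what localising on a 1D path with a signature can look like, here is a toy sketch (purely illustrative, not the actual Project 100k pipeline): each frame is reduced to a 1D signature, imagined here as a 1xN float row of per-column horizon heights, and the query is compared against a pre-recorded database of signatures using normalised cross-correlation.

#include <opencv2/opencv.hpp>
#include <vector>

// query and database entries are 1xN CV_32F signature rows of the same length.
// Returns the index of the database frame whose signature matches best.
int bestMatch(const cv::Mat &query, const std::vector<cv::Mat> &database)
{
    int best = -1;
    double bestScore = -2.0;  // normalised correlation lies in [-1, 1]
    for (size_t i = 0; i < database.size(); i++) {
        cv::Mat score;
        cv::matchTemplate(database[i], query, score, CV_TM_CCOEFF_NORMED);
        double maxVal;
        cv::minMaxLoc(score, NULL, &maxVal);
        if (maxVal > bestScore) { bestScore = maxVal; best = (int)i; }
    }
    return best;
}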

Now, without further ado, the video

We’re hoping to do more work on this project if time permits. The first thing we want to improve on is the motion model. At the moment, the system assumes the car travels at the same speed as the previously collected video (which is true most of the time, but not always eg. bad traffic). We have plans to determine the speed of the vehicle more accurately.

Don’t forget to visit Jay’s website as well !

Word morph game

I came across this interview question recently:

Given two words of equal length that are in a dictionary, write a method to transform one word into another word by changing only one letter at a time. The new word you get in each step must be in the dictionary.

I thought it was a neat problem and was bored enough to code a web version of this. Check it out at

http://106.187.92.122/word_morph

The PHP script calls an external C++ program, which can be viewed here:

http://106.187.92.122/word_morph/word_morph.cpp
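The standard way to attack this is a breadth-first search over a graph whose nodes are dictionary words and whose edges connect words that differ by a single letter. Here is a minimal sketch of that idea (the function name and structure are mine, not the code at the link above; it assumes lowercase words).

#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

// Returns the chain of words from start to target (inclusive), or an empty
// vector if no such chain exists.
std::vector<std::string> wordMorph(const std::string &start, const std::string &target,
                                   const std::set<std::string> &dict)
{
    std::queue<std::string> q;
    std::map<std::string, std::string> parent;  // word -> word we came from
    q.push(start);
    parent[start] = start;

    while (!q.empty()) {
        std::string word = q.front(); q.pop();

        if (word == target) {
            // Walk the parent chain back to reconstruct the path.
            std::vector<std::string> path;
            for (std::string w = target; ; w = parent[w]) {
                path.insert(path.begin(), w);
                if (w == start) break;
            }
            return path;
        }

        // Try every single-letter change that yields an unvisited dictionary word.
        for (size_t i = 0; i < word.size(); i++) {
            std::string next = word;
            for (char c = 'a'; c <= 'z'; c++) {
                next[i] = c;
                if (dict.count(next) && !parent.count(next)) {
                    parent[next] = word;
                    q.push(next);
                }
            }
        }
    }
    return std::vector<std::string>();  // no path found
}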


An old thesis sketch

Here’s an amusing sketch I did for one of my thesis chapters back in November 2008. It was captioned “Concept design of augmented reality system using the vision based localisation”. A friend commented that it looked like an xkcd drawing.

conceptual

The sketch is pretty crude and funny in hindsight. I originally added it to my thesis to give it an extra “visionary” depth, kind of predicting the future so to speak. I didn’t think anyone would seriously wear such bulky equipment, plus it made you look silly. A few years later Google made this …

That’s a Google Street View Trekker. Different application from what I proposed, but the design is not far off!

I should keep a record of all my ideas that I dismiss as impractical and ridiculous. Just so on the off chance someone does implement it successfully I can get all smug and say I thought of it first 🙂 And then get jealous I didn’t capitalise on it …

cppcheck and OpenCV

Every now and then when I’m free and bored, I do a daily git fetch on the OpenCV branch to keep up to date with the latest and greatest. One of my favourite things to do is run cppcheck on the source code to see what new bugs have appeared (I should find a new hobby). For those who don’t know, cppcheck is an open source static code analyzer for C/C++. It tries to find coding mistakes, eg. using an uninitialised variable, memory leaks, and more. In other words, it is a must have tool. Use it, and use it often.

To demonstrate its effectiveness, I just updated my OpenCV 2.4 branch as of 15th Dec 2013 and ran cppcheck on it, resulting in:

$ cppcheck -q -j 4 .
[features2d/src/orb.cpp:179]: (error) Uninitialized variable: ix
[features2d/src/orb.cpp:199]: (error) Uninitialized variable: ix
[features2d/src/orb.cpp:235]: (error) Uninitialized variable: ix
[features2d/src/orb.cpp:179]: (error) Uninitialized variable: iy
[features2d/src/orb.cpp:199]: (error) Uninitialized variable: iy
[features2d/src/orb.cpp:235]: (error) Uninitialized variable: iy
[imgproc/src/color.cpp:773]: (error) Array 'coeffs[2]' accessed at index 2, which is out of bounds.
[imgproc/test/test_cvtyuv.cpp:590]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::yuvReader_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:591]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::yuvWriter_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:592]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::rgbReader_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:593]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::rgbWriter_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:594]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::grayWriter_' can leak by wrong usage.
[legacy/src/calibfilter.cpp:725]: (error) Resource leak: f
[legacy/src/epilines.cpp:3005]: (error) Memory leak: objectPoints_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: rotMatrs1_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: rotMatrs2_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: transVects1_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: transVects2_64d
[legacy/src/vecfacetracking.cpp] -> [legacy/src/vecfacetracking.cpp:670]: (error) Internal error. Token::Match called with varid 0. Please report this to Cppcheck developers
[ml/src/svm.cpp:1338]: (error) Possible null pointer dereference: df
[objdetect/src/hog.cpp:2564]: (error) Resource leak: modelfl
: (error) Division by zero.
[ts/src/ts_gtest.cpp:7518]: (error) Address of local auto-variable assigned to a function parameter.
[ts/src/ts_gtest.cpp:7518]: (error) Uninitialized variable: dummy
[ts/src/ts_gtest.cpp:7525]: (error) Uninitialized variable: dummy

cppcheck -q -j 4 calls cppcheck in quiet mode (only reporting errors) using 4 threads.

The orb.cpp errors are fairly new. The others have been there for a while because I didn’t bother sending a pull request for the legacy functions, because well, they’re legacy. But I should.

The error in tvl1flow.cpp is a false alert, which I’ve reported and has since been resolved. Basically, someone used a variable called div, which cppcheck confused with the div function from stdlib.h because they both have the same parameter count and types. Naughty.

vecfacetracking.cpp is an interesting one; cppcheck basically failed for some unknown reason, though this rarely occurs. I should report that to the cppcheck team.

hog.cpp reports a resource leak because fopen is called earlier but the function throws if something goes wrong without ever calling fclose, as shown below:

void HOGDescriptor::readALTModel(std::string modelfile)
{
   // read model from SVMlight format..
   FILE *modelfl;
   if ((modelfl = fopen(modelfile.c_str(), "rb")) == NULL)
   {
       std::string eerr("file not exist");
       std::string efile(__FILE__);
       std::string efunc(__FUNCTION__);
       throw Exception(CV_StsError, eerr, efile, efunc, __LINE__);
   }
   char version_buffer[10];
   if (!fread (&version_buffer,sizeof(char),10,modelfl))
   {
       std::string eerr("version?");
       std::string efile(__FILE__);
       std::string efunc(__FUNCTION__);
       // doing an fclose(modelfl) would fix the error
       throw Exception(CV_StsError, eerr, efile, efunc, __LINE__);
   }

With holidays coming up in a week I’ll probably get off my lazy ass and submit some more fixes. What I find funny is that I’ve been seeing the same errors for the past few months (years, even?). It seems to suggest cppcheck needs to be publicised more and possibly become part of the code submission guidelines. I feel like I’m the only one running cppcheck on OpenCV.

UPDATE:

Running with the latest cppcheck 1.62 produces fewer false alerts than before (I was running 1.61). I now get:

[highgui/src/cap_images.cpp:197]: (warning) %u in format string (no. 1) requires 'unsigned int *' but the argument type is 'int *'.
[imgproc/test/test_cvtyuv.cpp:590]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::yuvReader_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:591]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::yuvWriter_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:592]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::rgbReader_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:593]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::rgbWriter_' can leak by wrong usage.
[imgproc/test/test_cvtyuv.cpp:594]: (style) Class 'ConversionYUV' is unsafe, 'ConversionYUV::grayWriter_' can leak by wrong usage.
[legacy/src/calibfilter.cpp:725]: (error) Resource leak: f
[legacy/src/epilines.cpp:3005]: (error) Memory leak: objectPoints_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: rotMatrs1_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: rotMatrs2_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: transVects1_64d
[legacy/src/epilines.cpp:3005]: (error) Memory leak: transVects2_64d
[ml/src/svm.cpp:1338]: (error) Possible null pointer dereference: df
[objdetect/src/hog.cpp:2564]: (error) Resource leak: modelfl
[ts/src/ts_gtest.cpp:7518]: (error) Address of local auto-variable assigned to a function parameter.
[ts/src/ts_gtest.cpp:7518]: (error) Uninitialized variable: dummy
[ts/src/ts_gtest.cpp:7525]: (error) Uninitialized variable: dummy

Haar wavelet denoising

This is some old Haar wavelet code I dug up from my PhD days that I’ve adapted to image denoising. It denoises an image by performing the following steps (a code sketch of the core follows the list):

  1. Pad the width/height so the dimensions are a power of two, padding with zeros.
  2. Do 2D Haar wavelet transform
  3. Shrink all the coefficients using soft thresholding: x = sign(x) * max(0, abs(x) - threshold)
  4. Inverse 2D Haar wavelet transform
  5. Remove the padding
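To give a rough idea of steps 2-4, here is a minimal single-level 1D sketch; the 2D version applies the same 1D transform to the rows and then the columns, and the actual haar_wavelet_denoising.cpp below additionally handles the padding and the GUI. The function names here are illustrative.

#include <algorithm>
#include <cmath>
#include <vector>

// One level of the 1D Haar transform: averages in the first half, details in the second.
// Assumes the length is a power of two.
void haarStep(std::vector<double> &s)
{
    size_t half = s.size() / 2;
    std::vector<double> out(s.size());
    for (size_t i = 0; i < half; i++) {
        out[i]        = (s[2*i] + s[2*i+1]) / std::sqrt(2.0);  // approximation
        out[half + i] = (s[2*i] - s[2*i+1]) / std::sqrt(2.0);  // detail
    }
    s = out;
}

// Inverse of one level.
void haarStepInverse(std::vector<double> &s)
{
    size_t half = s.size() / 2;
    std::vector<double> out(s.size());
    for (size_t i = 0; i < half; i++) {
        out[2*i]   = (s[i] + s[half + i]) / std::sqrt(2.0);
        out[2*i+1] = (s[i] - s[half + i]) / std::sqrt(2.0);
    }
    s = out;
}

// Step 3: soft thresholding, x = sign(x) * max(0, abs(x) - threshold)
void softThreshold(std::vector<double> &s, double threshold)
{
    for (size_t i = 0; i < s.size(); i++) {
        double mag = std::max(0.0, std::fabs(s[i]) - threshold);
        s[i] = (s[i] < 0 ? -mag : mag);
    }
}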

I’ve coded a simple GUI using OpenCV to show the denoising in action. There’s a slider that goes from 0 to 100, which translates to a threshold range of [0, 0.1].

I’ll use the same image as in a previous post. This is a cropped image taken at night on a point and shoot camera. The noise is real and not artificially added.

noise_0

noise_15

noise_100

The Haar wavelet does a pretty good job of preserving edges and sharp transitions in general. At threshold = 100 you start to see the blocky nature of the Haar wavelet.

One downside of using the Haar wavelet is that the image dimensions have to be a power of two, which wastes memory and CPU cycles when we have to pad the image.

Download

haar_wavelet_denoising.cpp

Compile using GCC with

g++ haar_wavelet_denoising.cpp -o haar_wavelet_denoising -O3 -lopencv_core -lopencv_highgui -lopencv_imgproc

and run via

./haar_wavelet_denoising image.jpg

Asimo Vitruvian Man project

Had a busy weekend crafting this small gift of appreciation for our PhD supervisor. The design was a collaborative effort with my colleagues. The base features an Asimo version of Da Vinci’s Vitruvian Man. It took us from 10.30 am to 5.30 pm to complete the base due to a lot of filing/sanding (the tool wasn’t sharp enough) and design changes along the way. The Asimo figure was 3D printed by Jay and took about 5 hours. Overall, everything was completed from start to finish within 5 days, after many many many email exchanges between us.

IMG_5426 IMGP5832 IMGP5925r