RBM and sparsity penalty

This is a follow-up to my last post about trying to use an RBM to extract meaningful visual features. I’ve been experimenting with a sparsity penalty that encourages the hidden units to rarely activate. I had a look at ‘A Practical Guide to Training Restricted Boltzmann Machines’ for inspiration, but had trouble figuring out how their derived formula fits into the training process. I also found some interesting discussions on MetaOptimize pointing to various implementations. In the end I went for a simple approach and used the tools I learnt from the Coursera course.

The sparsity is the average probability of a unit being active, so it applies to sigmoid/logistic units. For my RBM this is the hidden layer. If you look back at my previous post you can see the weights generate random visual patterns. The hidden units are active about 50% of the time, hence the random-looking patterns.

What we want to do is reduce the sparsity so that the units are, on average, activated a small percentage of the time, which we’ll call the ‘target sparsity’. Using a typical squared error, we can formulate the penalty as:

p = K\frac{1}{2} \left(s - t\right)^2

s = average\left(sigmoid\left(wx + b\right)\right)

  • K is a constant multiplier to tune the gradient step.
  • s is the current sparsity, a scalar value: the average of all the M×N matrix elements.
  • t is the target sparsity, in [0,1].

Let the forward pass starting from the visible layer to the hidden layer be:

z = wx + b

h = sigmoid(z)

s = \frac{1}{M} \frac{1}{N} \sum\limits_{i} \sum\limits_{j} h_{ij}

  • w is the weight matrix
  • x is the data matrix
  • b is the bias vector
  • z is the input to the hidden layer
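
As a concrete illustration, here is a minimal Armadillo sketch of computing the penalty. The variable names (W, X, b) and the helper function are just for illustration, not the actual code from my implementation:

// W is num_hidden x num_visible, X is num_visible x num_samples (a batch),
// b is the num_hidden x 1 hidden bias, t the target sparsity, K the multiplier.
#include <armadillo>

double sparsity_penalty(const arma::mat &W, const arma::mat &X,
                        const arma::vec &b, double t, double K)
{
    arma::mat Z = W * X + arma::repmat(b, 1, X.n_cols);   // z = wx + b
    arma::mat H = 1.0 / (1.0 + arma::exp(-Z));            // h = sigmoid(z)
    double s = arma::accu(H) / (H.n_rows * H.n_cols);     // average activation
    return 0.5 * K * (s - t) * (s - t);                   // p = K/2 * (s - t)^2
}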

The derivative of the sparsity penalty, p, with respect to the weights, w, using the chain rule is:

\frac{\partial p}{\partial w} = \frac{\partial p}{\partial s} \frac{\partial s}{\partial h} \frac{\partial h}{\partial z} \frac{\partial z}{\partial w}

The derivatives are:

\frac{\partial p}{\partial s} = K\left(s - t\right) \;\; \leftarrow scalar

\frac{\partial s}{\partial h} = \frac{1}{NM} \;\; \leftarrow constant \; scalar

\frac{\partial h}{\partial z} = h\left(1-h\right)

\frac{\partial z}{\partial w} = x

The derivative of the sparsity penalty with respect to the bias is the same as above except the last partial derivative is replaced with:

\frac{\partial z}{\partial b} = 1

In the actual implementation I omitted the constant  \frac{\partial s}{\partial h} = \frac{1}{NM}  because it made the gradients very small and I had to crank K up quite high (into the hundreds). With that factor taken out, good working values of K are around 1.0, which is a nicer range.
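
Putting the pieces together, a sketch of the two gradients in Armadillo might look like this (again the names are illustrative, and the 1/(NM) factor is dropped as described above):

#include <armadillo>

void sparsity_gradients(const arma::mat &W, const arma::mat &X,
                        const arma::vec &b, double t, double K,
                        arma::mat &dW, arma::vec &db)
{
    arma::mat Z = W * X + arma::repmat(b, 1, X.n_cols);    // forward pass
    arma::mat H = 1.0 / (1.0 + arma::exp(-Z));

    double s = arma::accu(H) / (H.n_rows * H.n_cols);      // current sparsity
    double dp_ds = K * (s - t);                            // dp/ds
    arma::mat dH_dZ = H % (1.0 - H);                       // dh/dz, element-wise

    // Chain rule, summed over the batch.
    dW = dp_ds * (dH_dZ * X.t());                          // dz/dw = x
    db = dp_ds * arma::sum(dH_dZ, 1);                      // dz/db = 1
}

These gradients would then be subtracted from the weights and hidden biases alongside the usual contrastive divergence update, scaled by the learning rate.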

Results

I used an RBM with the following settings:

  • 5000 input images, normalized to \mu=0, \sigma=1
  • no. of visible units (linear) = 64 (16×16 greyscale images from the CIFAR database)
  • no. of hidden units (sigmoid) = 100
  • sparsity target = 0.01 (1%)
  • sparsity multiplier K = 1.0
  • batch training size = 100
  • iterations = 1000
  • momentum = 0.9
  • learning rate = 0.05
  • weight refinement using an autoencoder with 500 iterations and a learning rate of 0.01

and this is what I got:

RBM sparsity

Quite a large portion of the weights are nearly zeroed out. The remaining ones have managed to learn a general contrast pattern. It’s interesting to see how smooth they are. I wonder if introducing the sparsity penalty gives an implicit L2 weight decay, like we saw in the previous post. There are also some patterns that look like they’re still forming but not quite finished.

The results are certainly encouraging, but there might be an efficiency drawback. Since a lot of the weights are zeroed out, we have to use more hidden units in the hope of finding more meaningful patterns. If I keep the same number of units but increase the sparsity target, the weights approach the random-like patterns again.

Download

RBM_Features-0.1.0.tar.gz

Have a look at the README.txt for instructions on obtaining the CIFAR dataset.

RBM, L1 vs L2 weight decay penalty for images

I’ve been fascinated for the past few months with using an RBM (restricted Boltzmann machine) to automatically learn visual features, as opposed to hand-crafting them. Alex Krizhevsky’s master’s thesis, Learning Multiple Layers of Features from Tiny Images, is a good source on this topic. I’ve been attempting to replicate the results on a much smaller set of data with mixed results. However, as a by-product I did manage to generate some interesting results.

One of the tunable parameters of an RBM (and neural networks in general) is the weight decay penalty. This regularisation penalises large weight coefficients to avoid over-fitting (used in conjunction with a validation set). Two commonly used penalties are L1 and L2, expressed as follows:

weight\;decay\;L1 = \displaystyle\sum\limits_{i}\left|\theta_i\right|

weight\;decay\;L2 = \displaystyle\sum\limits_{i}\theta_i^2

where \theta are the coefficients of the weight matrix.

L1 penalises the absolute value and L2 the squared value. L1 will generally push a lot of the weights to be exactly zero while allowing some to grow large. L2 on the other hand tends to drive all the weights to smaller values.
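
As a rough sketch (illustrative only, not the exact code used here), the two penalties enter the weight update like this:

#include <armadillo>

// decay is the weight decay multiplier, which gets scaled by the learning rate.
void apply_weight_decay(arma::mat &W, double learning_rate,
                        double decay, bool use_L1)
{
    if (use_L1) {
        // gradient of sum |w| is sign(w): pushes many weights to exactly zero
        W -= learning_rate * decay * arma::sign(W);
    } else {
        // gradient of sum w^2 is 2w: shrinks all weights towards zero
        W -= learning_rate * decay * 2.0 * W;
    }
}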

Experiment

To see the effect of the two penalties I’ll be using a single RBM with the following configuration:

  • 5000 input images, normalized to \mu=0, \sigma=1
  • no. of visible units (linear) = 64 (16×16 greyscale images from the CIFAR database)
  • no. of hidden units (sigmoid) = 100
  • batch training size = 100
  • iterations = 1000
  • momentum = 0.9
  • learning rate = 0.01
  • weight refinement using an autoencoder with 500 iterations and learning rate of 0.01

The weight refinement step uses a 64-100-64 autoencoder with standard backpropagation.

Results

For reference, here are the 100 hidden layer patterns without any weight decay applied:

As you can see they’re pretty random and meaningless; there’s no obvious structure. Though what is amazing is that even with such random patterns you can reconstruct the original 5000 input images quite well using a weighted linear combination of them.

Now applying an L1 weight decay with a weight decay multiplier of 0.01 (which gets multiplied with the learning rate) we get something more interesting:

We get stronger localised “spot” like features.

And lastly, applying L2 weight decay with a multiplier of 0.1 we get

which again looks like a bunch of random patterns, except smoother. This has the effect of smoothing the reconstructed images.

Despite some interesting-looking patterns, I haven’t really observed the edge or Gabor-like patterns reported in the literature. Maybe my training set is too small? Need to spend some more time …

HP Pavilion DM1 + AMD E-450 APU, you both suck!

Last year I bought myself a small HP Pavilion DM1 for travelling overseas. On paper the specs look great compared to any of the Asus Eee PC offerings in terms of processing power and power consumption. It’s got a dual core AMD E-450, a Radeon graphics card, 2GB RAM and about 4-5 hrs of battery life. In reality? I’ve had to take this laptop in for warranty repair TWICE in a few short months. The first time was a busted graphics chip that only worked if I plugged in an external monitor. The second time was a faulty hard disk singing the click of death. Both were repaired within a day, so I can’t get too mad. But when I finally got around to installing all the tools I need to do some number crunching under Linux, this happens …

Yes that’s right, Octave crashes on a matrix multiply!!! WTF?!?

I ran another program I wrote under GDB and it revealed this interesting snippet

My guess is it’s trying to call some AMD assembly instruction that doesn’t exist on this CPU. Octave uses /usr/lib/libblas as well, which explains the crash earlier. Oh well, bug report time …

RBM Autoencoders

I’ve just finished the wonderful “Neural Networks for Machine Learning” course on Coursera and wanted to apply what I learnt (or what I think I learnt). One of the topics I found fascinating was the autoencoder neural network. This is a type of neural network that can “compress” data, similar to PCA. An example of the network topology is shown below.

The network is fully connected and symmetrical, but I’m too lazy to draw all the connections. Given some input data, the network will try to reconstruct it as best as it can at the output. The ‘compression’ is controlled mainly by the middle bottleneck layer. The above example has 8 input neurons, which get squashed to 4 and then to 2. I will use the notation 8-4-2-4-8 to describe the above autoencoder network.

An autoencoder has the potential to do a better job than PCA at dimensionality reduction, especially for visualisation, since it is non-linear.

My autoencoder

I’ve implemented a simple autoencoder that uses RBM (restricted Boltzmann machine) to initialise the network to sensible weights and refine it further using standard backpropagation. I also added common improvements like momentum and early termination to speed up training.

I used 100 small images of dogs from the CIFAR-10 dataset for training. The images are 32×32 (1024-dimensional vectors) colour images, which I converted to greyscale. The network I trained is:

1024-256-64-8-64-256-1024

The input, output and bottleneck layers are linear, with the rest being sigmoid units. I expected this autoencoder to reconstruct the images better than PCA, because it has many more parameters. I’ll compare the results with PCA using the first 8 principal components.
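
For reference, a minimal sketch of the PCA baseline using Armadillo’s princomp (my actual comparison code may differ; X holds one image per row):

#include <armadillo>

arma::mat pca_reconstruct(const arma::mat &X, int num_components)
{
    arma::rowvec mean_x = arma::mean(X, 0);                 // per-pixel mean
    arma::mat Xc = X - arma::repmat(mean_x, X.n_rows, 1);   // centre the data

    arma::mat coeff, score;
    arma::princomp(coeff, score, Xc);                       // principal components

    // Project onto the first num_components components (e.g. 8) and back.
    arma::mat C = coeff.cols(0, num_components - 1);
    return Xc * C * C.t() + arma::repmat(mean_x, X.n_rows, 1);
}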

Results

Here are 10 random results from the 100 images I trained on.










The autoencoder does indeed give a better reconstruction than PCA. This gives me confidence that my implementation is somewhat correct.

The RMSE (root mean squared error) for the autoencoder is 9.298, whereas for PCA it is 30.716 (pixel values range from 0 to 255).

All the parameters used can be found in the code.

Download

You can download the code here

Last update: 27/07/2013

RBM_Autoencoder-0.1.2.tar.gz

You’ll need the following libraries installed

  • Armadillo (http://arma.sourceforge.net)
  • OpenBLAS (or any other BLAS alternative, but you’ll need to edit the Makefile/Codeblocks project)
  • OpenCV (for display)

On Ubuntu 12.10 I use the OpenBLAS package in the repo. Use the latest Armadillo from the website if the Ubuntu one doesn’t work; I use some newer functions introduced recently. I recommend using OpenBLAS over Atlas with Armadillo on Ubuntu 12.10, because multi-core support works straight out of the box. This provides a big speed-up.

You’ll also need the dataset http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz

Edit main.cpp and change DATASET_FILE to point to your CIFAR dataset path. Compile via make or using CodeBlocks.

All parameter variables can be found in main.cpp near the top of the file.

OpenCV vs. Armadillo vs. Eigen vs. more! Round 3: pseudoinverse test

Okay, the title of this post is getting longer and sillier, but this is the 3rd continuation of my last two posts comparing different libraries for everyday matrix operations. The last two posts compared basic operations such as multiplication, transposition, inversion etc. in isolation, which is probably not a good reflection of real-life usage. So I decided to come up with a new test that combines different matrix operations. I chose the pseudoinverse because it is something I use every now and then, and it combines multiplication, transposition and inversion, which makes it a good test.

For benchmarking I’m going to be solving the following overdetermined linear system:

AX = B

and solve for X using

X = \left(A^TA\right)^{-1}A^{T}B

A is an N×M matrix, where N is much larger than M. I’ll be using N=1,000,000 data points and M (the dimension of the data) varying from 2 to 16.

B is an N×1 matrix.

The matrix values are randomly generated from 0 to 1, with uniform noise in [-1,1] added to B. The values are kept to a small range to avoid any significant numerical problems that can come about from doing the pseudoinverse this way, not that I care too much for this benchmark. Each test is performed for 10 iterations, but not averaged out, since I’m not interested in absolute times, only times relative to the other libraries.
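
For reference, here is the operation being benchmarked written with Armadillo (each library in the test does the equivalent with its own calls):

#include <armadillo>

int main()
{
    const int N = 1000000, M = 8;
    arma::mat A = arma::randu<arma::mat>(N, M);                  // values in [0,1]
    arma::vec B = arma::randu<arma::vec>(N)
                  + (2.0 * arma::randu<arma::vec>(N) - 1.0);     // plus noise in [-1,1]

    arma::vec X = arma::inv(A.t() * A) * A.t() * B;              // pseudoinverse solve

    X.print("X =");
    return 0;
}

In practice arma::solve(A, B) would be the numerically safer route, but the explicit normal-equations form above is what is being timed.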

Just to make the benchmark more interesting I’ve added GSL and OpenBLAS to the test, since they were just an apt-get away on Ubuntu.

Results

The following libraries were used

  • OpenCV 2.4.3 (compiled from source)
  • Eigen 3.1.2 (C++ headers from website)
  • Armadillo 3.4.4 (compiled from source)
  • GSL 1.15 (Ubuntu 12.10 package)
  • OpenBLAS 1.13 (Ubuntu 12.10 package)
  • Atlas 3.8.4 (Ubuntu 12.10 package)

My laptop has an Intel i7 1.60GHz with 6GB of RAM.

All values reported are in milliseconds. Each pseudoinverse test is performed 10 times but NOT averaged out. Lower is better. Just as a reminder, each test is dealing with 1,000,000 data points of varying dimensions.

Dimensions 2 to 9:

                        2        3        4        5        6        7        8        9
OpenCV                  169.619  321.204  376.3    610.043  873.379  1185.82  1194.12  1569.16
Eigen                   152.159  258.069  253.844  371.627  423.474  577.065  555.305  744.016
Armadillo + Atlas       162.332  184.834  273.822  396.629  528.831  706.238  848.51   1088.47
Armadillo + OpenBLAS    79.803   118.718  147.714  298.839  372.235  484.864  411.337  507.84
GSL                     507.052  787.429  1102.07  1476.67  1866.33  2321.66  2831.36  3237.67

Dimensions 10 to 16:

                        10       11       12       13       14       15       16
OpenCV                  1965.95  2539.57  2495.63  2909.9   3518.22  4023.67  4064.92
Eigen                   814.683  1035.96  993.226  1254.8   1362.02  1632.31  1615.69
Armadillo + Atlas       1297.01  1519.04  1792.74  2064.77  1438.16  1720.64  1906.79
Armadillo + OpenBLAS    534.947  581.294  639.175  772.382  824.971  825.79   893.771
GSL                     3778.44  4427.47  4917.54  6037.29  6303.08  7187.5   7280.27

Ranking from best to worst

  1. Armadillo + OpenBLAS
  2. Eigen
  3. Armadillo + Atlas (no multi-core support out of the box???)
  4. OpenCV
  5. GSL

All I can say is, holy smokes Batman! Armadillo + OpenBLAS wins out for every single dimension! Last is GSL; okay, no surprise there for me. It never boasted being the fastest car on the track.

The cool thing about Armadillo is that switching the BLAS engine only requires linking a different library, with no recompilation of Armadillo. What is surprising is that the Atlas library doesn’t seem to support multi-core by default. I’m probably not doing it right. Maybe I’m missing an environment variable setting?

OpenBLAS is based on GotoBLAS and is actually a ‘made in China’ product, except this time I don’t get to make any jokes about the quality. It is fast because it takes advantage of multi-core CPUs, while the others appear to only use 1 CPU core.

I’m rather sad OpenCV is not that fast, since I use it heavily for computer vision tasks. My compiled version actually uses Eigen, but that doesn’t explain why it’s slower than Eigen! Back in the old days OpenCV used to use BLAS/LAPACK, something they might need to consider bringing back.

Code

test_matrix_pseudoinverse.cpp (right click save as)

Edit the code to #define in the libraries you want to test. Make sure you don’t turn on Armadillo + GSL together, because they have conflicting enums. Instructions for compiling are at the top of the cpp file, but here they are again for reference.

To compile using ATLAS:

g++ test_matrix_pseudoinverse.cpp -o test_matrix_pseudoinverse -L/usr/lib/atlas-base -L/usr/lib/openblas-base -lopencv_core -larmadillo -lgomp -fopenmp -lcblas -llapack_atlas -lgsl -lgslcblas -march=native -O3 -DARMA_NO_DEBUG -DNDEBUG -DHAVE_INLINE -DGSL_RANGE_CHECK_OFF

To compile with OpenBLAS:

g++ test_matrix_pseudoinverse.cpp -o test_matrix_pseudoinverse -L/usr/lib/atlas-base -L/usr/lib/openblas-base -lopencv_core -larmadillo -lgomp -fopenmp -lopenblas -llapack_atlas -lgsl -lgslcblas -march=native -O3 -DARMA_NO_DEBUG -DNDEBUG -DHAVE_INLINE -DGSL_RANGE_CHECK_OFF

Five point algorithm for essential matrix, 1 year later …

In 2011, some time around April, I had a motorcycle accident which left me with a broken right hand. For the next few weeks I was forced to take public transport to work, *shudder* (it takes me twice as long to get to work using Melbourne’s public transport than driving). I killed time on the train reading comic books and academic papers. One interesting one I came across was Five-Point Algorithm Made Easy by Hongdong Li and Richard Hartley, for calculating the essential matrix using five points (the minimum). Now here is something you don’t see every day in an academic paper title: ‘easy’. I’ve always been meaning to get around to learning the five-point algorithm, and this one boasts an easier implementation than David Nister’s version. Seemed like the perfect opportunity to try it out; I mean, how hard could it be? Well …

Over a year of on-and-off programming later, I finally finished implementing this damn algorithm! A fair bit of time was spent figuring out how to implement symbolic matrices in C++. I was up for using existing open source symbolic packages, but found that they all struggled to calculate the determinant of a 10×10 symbolic matrix of polynomials. If you do the maths, doing it the naive way is excruciatingly slow, at least on the order of 10 factorial. I ended up writing my own simple symbolic matrix class. I also wrote a Maxima and PHP script to generate parts of the symbolic maths step into C++ code, fully expanded out in all its glory. The end result is a rather horrendous looking header file with 202 lines of spaghetti code (quite proud of that one actually).

The second major road block was actually a one-liner bug in my code. I was doing the SVD of a 5×9 matrix in OpenCV. By default OpenCV’s SVD doesn’t do the full decomposition for Vt unless you tell it to, e.g. cv::SVD svd(F, cv::SVD::FULL_UV). So I was using the 5×9 version of Vt instead of the 9×9 one to grab the null basis, which was incorrect.
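
For illustration, a small sketch of the fixed step (not the exact code from the download):

#include <opencv2/core/core.hpp>

// F is the 5x9 constraint matrix built from the five point correspondences.
cv::Mat null_basis(const cv::Mat &F)
{
    // Without FULL_UV, svd.vt is only 5x9 and the null space rows are missing.
    cv::SVD svd(F, cv::SVD::FULL_UV);   // svd.vt is now the full 9x9 Vt

    // The right null space of F is spanned by the last four rows of Vt.
    return svd.vt.rowRange(5, 9).clone();
}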

It was quite a hellish journey, but in the end I learnt something new and here’s the code for you all to enjoy.

Download

Last update: 25th August 2013

5point-0.1.5.tar.gz (requires OpenCV)

5Point-Eigen-0.1.5.tar.gz (requires Eigen)

The OpenCV and Eigen versions produce identical output. The Eigen version was ported by Stuart Nixon. The file 5point.cpp in the Eigen version has a special #ifdef ROBUST_TEST that you can turn on to enable a more robust test against outliers. I have not thoroughly tested this feature, so it is turned off by default.

Given 5 or more points, the algorithm will calculate all possible essential matrix solutions and return the correct one(s) based on depth testing. As a bonus, it even returns the 3×4 projection matrix. The whole process takes about 0.4 milliseconds on my computer. There’s a lot of room for speed improvement using more efficient maths; David Nister’s paper lists some improvements in the appendix.

Python port of rigid_transform_3D.m and first impression using Numpy

As part of my Python learning, Numpy in particular, I’ve ported rigid_transform_3D.m to Python. You can download it from the bottom of the page.

My first impression of using Numpy to port the Matlab script hasn’t been too thrilling. The syntax isn’t as nice to use as Matlab/Octave. The choice between using the array or matrix type has different trade-offs. If I use an array (as recommended by the short answer) I can’t use * to perform a standard matrix multiplication, which is a bit annoying to say the least. Indexing a matrix is slightly different compared to Matlab, for example A[0:3] means elements 0,1,2 but not 3 (not a big deal). Another problem I found was trying to do an outer product using dot(A, A.T) or dot(A.T, A), which returned a scalar value instead of a matrix. It seems 1-D arrays don’t make a distinction between row and column vectors. The solution was to explicitly use the numpy.outer function.

Other than those small pet peeves (so far), I haven’t come across anything show-stopping yet. I guess I have to remind myself that Python is a general purpose scripting language, unlike Matlab/Octave, which was designed with matrices as the fundamental data type.

Preview of SfM + texture mapping

A few people in the past have expressed interest in texture mapping the point clouds from SfM. I said I probably wouldn’t get around to writing software for it, even though I’ve done it before. Guess what? I’ve back-flipped and have decided to do it! Here’s a sneak preview …



The pipeline is as follows:

RunSFM -> Meshlab -> TextureMesh (new program I wrote) -> ViewMesh (custom viewer)

The Meshlab step is a manual process, but it can be automated using their server program. The output mesh is a custom, but easy to read, text format. I haven’t looked around for any standardised format.

More details to come in the following days. First, I need to package up the software, write instructions and a guide on how to use Meshlab.

First person shooter (FPS) control demo in GLUT

Today I was looking around for some quick copy-and-paste code to add FPS gaming controls to a small GLUT application I am writing. Half an hour or so of searching later and I still couldn’t find anything that I liked. A lot of the results were from forums of people trying to roll their own code. I gave up and resorted to something I didn’t think I’d ever do: use code I wrote during my PhD …

After an hour or so of tinkering I got something that I’m happy with. I got my GLUT demo to do the following:

  • W,A,S,D keys for moving and strafing
  • Mouse look (default is inverted mouse)
  • Mouse button to fly up/down
  • SPACEBAR to toggle FPS control mode
  • Mouse always stays within the window with the cursor hidden

It has the feel of a proper FPS game.

Here’s what the demo looks like. Interestingly, when taking a screenshot the mouse cursor appears in the image.

The demo code can be used as a copy-and-paste project to quickly get a viewer running. All you have to do is add your rendering code to the Display() function.
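
As a rough idea of what that looks like (a minimal skeleton only; the actual demo also installs the keyboard and mouse callbacks for the FPS controls):

#include <GL/freeglut.h>

void Display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... apply the FPS camera transform, then draw your scene here ...

    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    glutCreateWindow("GlutFPS");
    glutDisplayFunc(Display);
    glutMainLoop();
    return 0;
}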

Download

GlutFPS.tar.gz

The demo requires freeglut to be installed. You can use CodeBlocks to open the project, or type make if you’re on Linux. If you’re using Windows you’ll have to set up your own Visual Studio project.

If you have any questions just leave a comment and I’ll get back to you.

Python turtle!

I’ve recently started learning Python and was surprised and delighted to find that it has an implementation of the old Turtle program that I used to play around with back in primary school. For some reason it even runs as slowly as the original! (intentional?). Here is a simple geometric flower pattern I whipped up that I thought I’d share.

Seems to be lacking anti-aliasing support …

Here is the Python code.

# Thanks goes to Raphael for improving the original slow clunky code
import turtle

turtle.hideturtle()
turtle.speed('fastest')
turtle.tracer(False)        # turn off animation while drawing

# Draw a single petal from two quarter-circle arcs
def petal(radius, steps):
    turtle.circle(radius, 90, steps)
    turtle.left(90)
    turtle.circle(radius, 90, steps)

num_petals = 8
steps = 8
radius = 100

# Draw each petal rotated evenly around the centre
for i in xrange(num_petals):
    turtle.setheading(0)
    turtle.right(360 * i / num_petals)
    petal(radius, steps)

turtle.tracer(True)         # re-enable animation and show the result
turtle.done()