So far, we have seen that using Matlab and CUDA together is not always straightforward, especially on a Mac, and I then presented a simple project to demonstrate how to do it in practice. In this post, we're going to see how to compile this project using a Makefile.
Final part: the Makefile!

Overview of the compilation process

Let's recap the different steps needed to compile our mex file:
In the previous post, I introduced the various problems I encountered when I had to install an external CUDA library on our Mac workstations. In this post and the next, I will take a simple example (moving data from array A to array B through the GPU) to show how I finally got this installation to work. While the application itself is not really useful, it demonstrates all the various tips and tricks that we applied.
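To picture the toy application, here is a minimal sketch (my own illustration, not the code from the accompanying repository) of such an A-to-B round trip through the GPU, using plain host-side calls to the CUDA runtime API:

```cpp
// Minimal sketch: copy array A to array B through GPU memory.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t n = 1 << 20;
    float *A = new float[n];
    float *B = new float[n];
    for (size_t i = 0; i < n; ++i) A[i] = static_cast<float>(i);

    float *d_buf = nullptr;
    cudaMalloc((void**)&d_buf, n * sizeof(float));                    // allocate device buffer
    cudaMemcpy(d_buf, A, n * sizeof(float), cudaMemcpyHostToDevice);  // A -> GPU
    cudaMemcpy(B, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);  // GPU -> B
    cudaFree(d_buf);

    std::printf("B[42] = %f\n", B[42]);  // sanity check
    delete[] A;
    delete[] B;
    return 0;
}
```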
I've done a lot of casual (unfortunately, non-thesis-related) stuff this summer, including a lot of code. Among other things, we had the chance to host a software consultant for a few days, to test and deploy some CUDA code that they had been optimizing over the last year.
As the official Nerd PhD Student (the one working on optical flow), I was picked by a post-doc to assist the consultant in integrating their CUDA code on our Macs.
A short post to let you know about this interesting initiative from the blog Computer Vision Talks.
The blogger (and augmented reality software developer) Eugene Khvedchenya has started a tutorial series on using OpenCV in the iOS world. This is really useful, since getting the two worlds to work together can be problematic. Furthermore, computer vision and image processing people are usually not very aware of software design issues, and without that knowledge, developing nice iOS demos of our cool algorithms can be a painful process.
Just lost a few hours tearing my hair out.
I'm currently experimenting with compiling CUDA code as object files or as a C/C++ library, in order to call it from the Dreaded Matlab.
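To make the setup concrete, here is a hedged sketch of what the Matlab-facing side can look like: a C++ mex gateway that calls an entry point exported by the separately compiled CUDA object file or library. The `gpu_copy` function is a hypothetical placeholder, not an identifier from the actual project.

```cpp
// mex gateway sketch. Assumes the CUDA part was built separately and
// exposes a plain C entry point named gpu_copy (hypothetical name).
#include "mex.h"

extern "C" void gpu_copy(const float *src, float *dst, size_t n);  // lives in the CUDA object/library

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    if (nrhs != 1 || !mxIsSingle(prhs[0]))
        mexErrMsgTxt("Expected one single-precision input array.");

    const mwSize m = mxGetM(prhs[0]);
    const mwSize n = mxGetN(prhs[0]);
    plhs[0] = mxCreateNumericMatrix(m, n, mxSINGLE_CLASS, mxREAL);

    const float *A = static_cast<const float*>(mxGetData(prhs[0]));
    float *B = static_cast<float*>(mxGetData(plhs[0]));
    gpu_copy(A, B, static_cast<size_t>(m) * n);  // round trip through the GPU
}
```

The CUDA side would typically be compiled on its own (for instance with `nvcc -c`) and then handed to `mex` at link time together with the CUDA runtime library.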
Wait a minute… did you say external CUDA lib + Matlab? Yes, I did say that. I am aware of the existence of Matlab's Parallel Computing Toolbox. However, two important things make me avoid it:
Some updates

I'm quietly updating the GitHub repository associated with this blog. If you sync the code regularly, you may have already noticed the appearance of movies a few weeks ago. They will be useful when the optical flow code goes online ;-)
Today's update brings hard- and soft-thresholding, and a demo of how to use them for image denoising (in the DCT domain).
Being original

The implementation of soft-thresholding may seem a bit awkward to people familiar with it: the soft-thresholding is implemented via a shrinkage operation on the original coefficient, without explicit sign extraction.
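For readers who prefer code to prose, here is a small C++ sketch of both operators, with soft-thresholding written in the shrinkage form described above (the code in the repository may differ in its details):

```cpp
#include <cmath>

// Hard thresholding: kill coefficients whose magnitude is below T.
inline float hard_threshold(float x, float T) {
    return (std::fabs(x) > T) ? x : 0.0f;
}

// Soft thresholding as a shrinkage of the original coefficient:
// x * (1 - T/|x|) equals sign(x) * (|x| - T) when |x| > T,
// so the sign is never extracted explicitly.
inline float soft_threshold(float x, float T) {
    const float a = std::fabs(x);
    return (a > T) ? x * (1.0f - T / a) : 0.0f;
}
```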
LBP model

An LBP can be decomposed into two tiers:
first, a real description vector, obtained by convolution-then-difference; then, the quantization (binarization) operation. Mathematically, the i-th (real) component of the descriptor is computed with the formula:
$${\mathcal L}(p)_i = \langle {\mathcal G}_{x_i, \sigma_i}, p \rangle - \langle {\mathcal G}_{x_i', \sigma_i'}, p \rangle,$$
where ${\mathcal G}_{x_i, \sigma_i}$ and ${\mathcal G}_{x_i', \sigma_i'}$ are two Gaussians.
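As a hedged illustration of this formula (the patch layout and all parameter names are my own, not taken from any particular implementation), the i-th real component could be computed as follows:

```cpp
// Sketch: i-th real component of the descriptor as the difference of two
// Gaussian-weighted inner products with the patch p (row-major, w x h).
#include <cmath>
#include <vector>

struct Gaussian { float cx, cy, sigma; };  // center x_i and scale sigma_i

// Discrete inner product <G_{x,sigma}, p> over the patch.
float gaussian_inner_product(const std::vector<float>& p, int w, int h,
                             const Gaussian& g) {
    float acc = 0.0f;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const float dx = x - g.cx, dy = y - g.cy;
            const float wgt = std::exp(-(dx * dx + dy * dy) / (2.0f * g.sigma * g.sigma));
            acc += wgt * p[y * w + x];
        }
    }
    return acc;
}

// L(p)_i, before binarization: difference of the two smoothed measurements.
float lbp_component(const std::vector<float>& p, int w, int h,
                    const Gaussian& g1, const Gaussian& g2) {
    return gaussian_inner_product(p, w, h, g1) - gaussian_inner_product(p, w, h, g2);
}
```

The binary code of the descriptor is then obtained by keeping only the sign of each such component, which is the quantization step mentioned above.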
The variety of the LBP family comes from the choice of these Gaussians: they can have a fixed size but random positions (à la BRIEF), or fixed sizes and positions (à la BRISK)… The choice made in FREAK was: