AS5030 magnetic encoder: capturing a PWM signal with an ATSAMD21

It seems I just can’t avoid periodically letting this page slip into ostracism. Welp, let’s try to make up for it.

Context: the AS5030 magnetic encoder IC

In a project I’m currently working on (more about it in later posts, perhaps), I needed a halfway decent way of measuring the angular displacement of a small, manually turned wheel. I had originally mounted a small mechanical encoder (Bourns PEC11R) in my solution, which yielded about 96 pulses per revolution (PPR). However, in the long run, contacting encoders are not the best pick for these applications due to mechanical wear.

Searching for a better solution, I came across ams’ portfolio of magnetic encoders, which features interesting solutions for contactless position measurement. Amongst them, the AS5030 (datasheet) was the one that best met my requirements, providing an improved 256 PPR. The gist of it is simple: mount the IC, power it up and spin a magnet on top of it. The IC will generate a PWM signal with a pulse width proportional to the magnet’s angle relative to the chip. Spoiler alert: not just any magnet will work! You’ll have to get diametric magnets, such as this little guy here.

AS5030 overview: the Hall front-end of the chip measures the orientation of the field lines of a diametric magnet and produces both PWM and analog signals accordingly. An SPI interface is also available.

As you may see in the diagram above, the AS5030 also provides a serial interface. The thing, however, is a weird half-duplex SPI that operates on 21-bit-long frames, making it odd to use. On top of that, the AS5030 runs strictly at 5V, which means I would have to level-shift all the signals going to my 3.3V microcontroller. That would just add unnecessary parts and complexity, so I ditched the serial interface and went with PWM (and a resistive divider to shift the voltage for the uC).

The PWM signal frequency is rated at 1.72kHz, but may vary slightly with temperature. The duty cycle encodes the position, with the pulse width going from a spec’d minimum of 2.26us to a spec’d maximum of 578.56us, as shown in the picture below:

AS5030 PWM signal specification.

The angular position can be accurately obtained from the ratio of the pulse width (t_{\text{on}}) to the PWM period (t_{\text{on}} + t_{\text{off}}), as per the equation supplied in the datasheet:

\text{angle}[^\circ] = \frac{360}{256}\left[\left(257\,\frac{t_{\text{on}}}{t_{\text{on}}+t_{\text{off}}}\right)-1\right]

ATSAMD21: peripheral bureaucracy

Measuring the pulse-width of a PWM signal is a textbook example of input capture. Capture operations allow you to record a signal edge together with a timestamp directly via hardware, without having to employ any CPU cycles. The vast majority of modern microcontrollers support this feature, and the ATSAMD21E18A employed in this project is no exception.

As we see in the ATSAMD21 datasheet (section 31.2), the TCC peripheral not only supports this sort of operation, but also has a dedicated mode for pulse-width capture, where “period T will be captured into CC0 and the pulse-width t_p into CC1”. Unfortunately, there’s a catch. Usually, capture operations occur entirely within a timer peripheral, which detects the signal’s edge on a dedicated pin and stores a timestamp based on its internal counter. However, to increase flexibility, capture operations in the ATSAMD21 use the External Interrupt Controller (EIC) peripheral to generate an event in the internal Event System (EVSYS), which gets relayed via the internal event channels to the Timer/Counter for Control (TCC) peripheral, in turn triggering the capture. Wait, what? Let me sketch that out:

Signal flow in an ATSAMD21 capture operation: the PWM signal is routed from the GPIO pin to the EIC peripheral, generating an event (an IRQ-like feature) on a high logic level. The event edges get relayed via an EVSYS channel, where the EIC acts as the generator and the TCC timer peripheral is the event user.

This certainly allows for a lot more flexibility, since the capture operation isn’t tied to a particular pin, as all pins have EIC interrupt/event capability. However, configuring it is quite a handful (and quite poorly documented, BTW).

Talk is chip, show me the encode

I suck at puns. Ok, so let’s get a bloatw…I mean, ASF-free setup going. In my particular example, I’ll be using pin PA11, tied to EIC line 11 (EXTINT11), but again, any pin can be used. Also, I’ll be using EVSYS’s channel 0, but you can use any of the available channels for any event. This will be done in four steps:

We’ll first configure the timer. It’ll run off the main GCLK (in this case, at 48MHz), counting up from 0 to the top of its 24-bit range, 0xFFFFFF (then wrapping around):
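The original snippet isn’t embedded on this page, so here’s a register-level sketch of the idea, using the CMSIS definitions from samd21.h and assuming GCLK0 is the 48MHz generator (untested, for illustration only):

// Feed GCLK0 (48MHz here) to TCC0/TCC1 and enable TCC1's bus clock
PM->APBCMASK.reg |= PM_APBCMASK_TCC1;
GCLK->CLKCTRL.reg = GCLK_CLKCTRL_CLKEN | GCLK_CLKCTRL_GEN_GCLK0 |
                    GCLK_CLKCTRL_ID(TCC1_GCLK_ID);   // TCC0 and TCC1 share this clock ID
while (GCLK->STATUS.bit.SYNCBUSY);

// Period-and-pulse-width capture: event action 1 = PPW
// (period lands in CC0, pulse width in CC1), TCC event input 1 enabled
TCC1->EVCTRL.reg = TCC_EVCTRL_EVACT1_PPW | TCC_EVCTRL_TCEI1;

// Count the full 24-bit range, enable capture on channels 0/1 and start the timer
TCC1->PER.reg = 0xFFFFFF;
TCC1->CTRLA.reg = TCC_CTRLA_CPTEN0 | TCC_CTRLA_CPTEN1 | TCC_CTRLA_ENABLE;
while (TCC1->SYNCBUSY.bit.ENABLE);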

Then, let’s set up the EIC. Sense is set to HIGH, and a tiny helper function configures the channel. Notice how the EVCTRL bit is set, which generates the EIC event:
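Again, a minimal sketch of that setup for EXTINT11 (PA11), with the helper inlined; the macro names come from the samd21.h CMSIS headers:

// Clock and enable the EIC
PM->APBAMASK.reg |= PM_APBAMASK_EIC;
GCLK->CLKCTRL.reg = GCLK_CLKCTRL_CLKEN | GCLK_CLKCTRL_GEN_GCLK0 |
                    GCLK_CLKCTRL_ID(EIC_GCLK_ID);
while (GCLK->STATUS.bit.SYNCBUSY);

// EXTINT11: sense = HIGH (EXTINT11 sits in CONFIG[1], field SENSE3),
// event output enabled instead of an interrupt
EIC->CONFIG[1].reg |= EIC_CONFIG_SENSE3_HIGH;
EIC->EVCTRL.reg    |= (1u << 11);   // EXTINTEO11: event output enable for EXTINT11

EIC->CTRL.reg = EIC_CTRL_ENABLE;
while (EIC->STATUS.bit.SYNCBUSY);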

Let’s then configure the EVSYS: the EIC acts as an event generator on channel 0, and TCC1 is the event’s user (akin to a listener):
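A sketch of that wiring, again with CMSIS macro names (note the off-by-one in USER.CHANNEL, where 0 means “no channel attached”):

PM->APBCMASK.reg |= PM_APBCMASK_EVSYS;

// User side: attach TCC1's event-1 input to channel 0 (the value written is channel + 1)
EVSYS->USER.reg = EVSYS_USER_USER(EVSYS_ID_USER_TCC1_EV_1) |
                  EVSYS_USER_CHANNEL(1);

// Generator side: EIC EXTINT11 drives channel 0; the asynchronous path
// passes both event edges straight through, which the PPW capture needs
EVSYS->CHANNEL.reg = EVSYS_CHANNEL_CHANNEL(0) |
                     EVSYS_CHANNEL_EVGEN(EVSYS_ID_GEN_EIC_EXTINT_11) |
                     EVSYS_CHANNEL_PATH_ASYNCHRONOUS;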

Last but not least, let’s configure the pin. The PINMUX_PA11A_EIC_EXTINT11 value should be defined in the samd21.h header:
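Something along these lines should do it (PA11 is an odd-numbered pin, so its mux function lands in the PMUXO field of PMUX[5]):

// Hand PA11 over to peripheral function A (the EIC)
PORT->Group[0].PINCFG[11].reg |= PORT_PINCFG_PMUXEN;
PORT->Group[0].PMUX[11 >> 1].bit.PMUXO = MUX_PA11A_EIC_EXTINT11;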

That’s it! Your device is now running PWM pulse-width capture on pin PA11, with no CPU cycles being used for it at all.

Now what?

Now that everything’s configured… well, make sure the PWM signal is actually reaching the designated pin. The signal’s period and pulse width are now continuously captured into the CC0 and CC1 registers, respectively. To wrap it up with the AS5030, we can now compute the measured angle with the following function:
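The original function isn’t embedded here, but a sketch of it, assuming the capture order above (period in CC0, pulse width in CC1), integer-only math and samd21.h/stdint.h already included, could be:

// Angle in tenths of a degree, from the captured period (CC0) and pulse width (CC1)
static uint32_t as5030_angle_deg_x10(void)
{
    uint32_t period = TCC1->CC[0].reg;   // t_on + t_off, in timer ticks
    uint32_t t_on   = TCC1->CC[1].reg;   // t_on, in timer ticks

    if (period == 0)
        return 0;                        // nothing captured yet

    // angle[deg] = (360/256) * ((257 * t_on / period) - 1), scaled by 10
    int64_t pos = ((int64_t)257 * t_on) / period - 1;
    if (pos < 0)
        pos = 0;
    return (uint32_t)((pos * 3600) / 256);
}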

This function returns angles ×10 (i.e. 15.4° = 154), so that no float support needs to be added (which can eat a lot of Flash on smaller devices). It’s also enough to properly deal with the AS5030’s 1.4° resolution.

Best part? It actually works:

Rotating a magnet in front of the AS5030. Very professional test setup.

You can optionally use the TCC_INTFLAG_MC1 interrupt if you don’t want to poll the registers for changes. Yeah.

’til next time.

Getting OpenProject up and running

Ok, so I was looking for a self-hosted project management tool. Something that would fill the gap that the late (and discontinued?) DotProject left in my heart. After lots of Googlin’, I came across OpenProject. Though they offer hosting plans, you can host the tool for free on your own server. Installation looked like it had a bit too many steps for my own taste, but I went with it. Needless to say, if I’m writing this note-to-self, sh*t went south. Considering my overall stupidity regarding all-things-web, I found the installation process fairly convoluted (and apparently I’m not alone). The steps I took to get it working are summarized below (compiled from quite a few different sources).

Rolling up the sleeves

A little context here – I’m installing OpenProject version 6 (latest stable release at the time of writing) on a VM running Ubuntu 14.04. Unfortunately, for other distros, YMMV. For many distros (Ubuntu included), there’s the possibility of going through with the packaged installation (e.g. via apt-get), but I ended up following the manual process. I’ll be referring to these instructions during this post. If the link breaks or changes in the future, get the PDF version (but these instructions will probably be worthless anyway):

Let’s start with the Prepare Your Environment part. I’ve totally skipped the groupadd stuff, and installed everything under my regular user – do as you please. I installed all essentials and Memcached as listed. I already had a working install of MySQL, so I skipped the Installation of MySQL part, including the database creation (which will happen automatically later on). I then followed the Installation of Ruby (via rbenv)…

…and Installation of Node (via nodenv, but using the 6.0.0 LTS version) as listed in the manual:

Adding some swap and getting to it

I had trouble getting the rbenv install to complete. It would get killed halfway through, in what seemed to be an insufficient-memory issue. Anticipating a future instruction from the manual, I decided to try adding some swap space to my VM. As usual, DigitalOcean has a great documentation page on how to do this. Spoilers:

Ok, now fetch the relevant version of OpenProject – I’m going with the regular openproject (not the Community Edition), version 6 – and install it:

Now run the Configure OpenProject and Finish the Installation of OpenProject steps as described in the instructions. After getting them done, you’ll have an openproject/public folder that looks like this:

The lack of an index file looked super weird to me, but it’s supposed to be that way. Don’t panic.

Getting it to play nice with Apache

I started having some trouble in the Serve OpenProject with Apache section of the instructions. The compilation and installation of the passenger module worked nicely, and I could a2enmod passenger with no trouble. However, the supplied VirtualHost configuration file didn’t cut it for me.

I wanted to have OpenProject accessible from martinvb.com/op, and since my VM already serves my WordPress and Pydio, I couldn’t just move my DocumentRoot around. So I edited my configuration in /etc/apache2/sites-enabled/ as follows:

I kept my DocumentRoot where it was, and served OpenProject through an Alias. Setting PassengerResolveSymlinksInDocumentRoot is necessary here, since Passenger won’t resolve symlinks automatically for versions above 2.2.0 (which happens to be the case). Also, we have to point PassengerAppRoot to where the app’s config.ru is stored — in this case, the root of OpenProject’s cloned git repo.

Also, I’ve added rails_relative_url_root = "/op" to the config/configuration.yml created during the Configure OpenProject step, to match the folder alias I’ve created.

Restart Apache with a sudo service apache2 restart and give it a go. Works on my machine.

’til next time.

openVPS: Poor man’s motion capture

The Why and The What

Flying tiny drones indoors is cool, no questions asked. And stuff gets all the more interesting when you can accurately control the drone’s position in space — enabling all sorts of crazy maneuvers. However, using regular GPS for such applications is often a no-go: antennas may be too heavy for the drone, accuracy is well above the often-desired centimeter range and getting GPS to work reliably indoors often requires breaking some laws of physics. For that reason, labs and research teams often turn to commercial marker-based motion capture systems. You place fancy cameras around a desired scene, fix some reflective markers on the objects you want to measure/track and some magic software gives you sub-centimeter (often sub-millimeter!) tracking accuracy. Yay. Too bad such systems are expensive. Like, way too expensive. Like, “I’d have to sell my organs on the black market for that”-expensive.

I thought: “surely someone wrote a piece of FOSS that allows me to grab the 3 or 4 webcams I have lying around and set up a poor man’s version of such a system”. To my amazement, I was wrong (I mean, Google didn’t turn up anything, but I’d still love to run into something). Considering that I still had to come up with some image-processing-related activity for my Master’s degree, this seemed like a nice fit. So, let’s get on with it. [Disclaimer: heavy usage of OpenCV ahead.]

First things first: what are we looking for? Well, the sketch below sums it up: we have N cameras (C_0, C_1, … C_{N-1}), all looking at some region of interest from different angles. When an object of interest \{O\} (with some red markers) shows up in the field of view of 2 \leq k \leq N cameras, we can intersect the rays emanating from each of them to discover the object’s position and orientation in space.

Sketch of the system’s layout.

I’m a simple camera – I see the object, I project the object

Ok, but how? Well, let’s understand how a camera works. The image below (which I shamelessly stole) represents a pinhole camera, a simplified mathematical camera model:

Pinhole camera model.

Any point P = (X,Y,Z) in the scene can be represented as a pixel \alpha = (u,v) via the transformation in equation (1) below. [R|t] are the rotation and translation between the camera’s frame of reference and the scene’s.

(1)   \begin{equation*} s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \bigg( R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t \bigg)\end{equation*}

The 3×3 matrix on the right-hand side of the equation is called the intrinsic parameters matrix, and includes information about the camera’s focal distance and centering on both the x and y axes. As I’ve mentioned, the pinhole model is an (over)simplified one, which doesn’t account for distortions introduced by the lens(es). Such distortions can be modelled polynomially, and the coefficients of such a polynomial are then referred to as the distortion coefficients. No need to get into detail here, but both the intrinsic parameters and the distortion coefficients can be obtained through OpenCV’s calibration routines. We’ll now assume all cameras are properly calibrated and lens distortions have been compensated for, making the pinhole model valid.
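For reference, a rough sketch of that calibration step in OpenCV (the function and variable names are illustrative, and the chessboard-corner gathering is left out):

#include <opencv2/calib3d.hpp>
#include <vector>

// Calibrate one camera from chessboard detections gathered elsewhere
// (e.g. with cv::findChessboardCorners). Fills K (intrinsics) and the
// distortion coefficients, and returns the RMS reprojection error.
double calibrateOneCamera(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                          const std::vector<std::vector<cv::Point2f> >& imagePoints,
                          const cv::Size& imageSize,
                          cv::Mat& K, cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs;   // per-view extrinsics, not used here
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               K, distCoeffs, rvecs, tvecs);
}

The frames can then be undistorted with cv::undistort(frame, out, K, distCoeffs) before any further processing.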

Of blobs and vectors

Ok, so first thing: how to find the markers — reflective or not? Well, there’s lots of room to explore and digress here, but I’m lazy and just went with OpenCV’s built-in SimpleBlobDetector (great tutorial here!). Setup and usage are straightforward and results are nice:
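The snippet itself isn’t embedded on this page, but the usage boils down to something like this (the parameter values are illustrative, not the ones from the project):

#include <opencv2/features2d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Detect marker blobs in a frame; each returned KeyPoint.pt is a blob center (u, v)
std::vector<cv::KeyPoint> detectMarkers(const cv::Mat& frameBGR)
{
    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 20.0f;               // reject tiny specks
    params.filterByCircularity = true;
    params.minCircularity = 0.7f;         // markers are roughly round

    cv::Mat gray;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::KeyPoint> blobs;
    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
    detector->detect(gray, blobs);
    return blobs;
}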

Detecting markers (a.k.a. blobs).

Ok, so the detector spits out the coordinates (in pixel space, along the \{u,v\} axes) of each blob’s center point. If we want to “intersect” each view of each marker as previously sketched, then we need to compute unit vectors that emanate from each camera’s optical center \mathcal{F}_c through each blob’s center. Let’s assume a point p = (x,y,z) – e.g., the marker – already in the camera’s coordinates. Since the projective relationship in equation (1) loses depth information, we’ll rewrite the point in its homogeneous form p_h = (x/z, y/z, 1). We can then compute our unit vector \hat{p} of interest from the blob’s center pixel \alpha_h = (u,v,1) via,

p_h = F^{-1} \alpha_h

\hat{p} = \frac{p_h}{|p_h|}

F is our aforementioned intrinsic parameters matrix. Implementing this is as straightforward as it looks:
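Here’s a sketch of how that might look with OpenCV (illustrative names, with K standing in for the intrinsic matrix F of equation (1)):

#include <opencv2/core.hpp>

// Unit vector, in the camera frame, pointing from the optical center through
// a blob center (u, v)
cv::Vec3d pixelToUnitRay(const cv::Point2d& blob, const cv::Mat& K /* 3x3, CV_64F */)
{
    cv::Mat alpha = (cv::Mat_<double>(3, 1) << blob.x, blob.y, 1.0);
    cv::Mat p = K.inv() * alpha;      // p_h = F^-1 * alpha_h
    cv::Vec3d ray = p;                // Mat -> Vec3d
    return cv::normalize(ray);        // \hat{p} = p_h / |p_h|
}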

Where art thou, corresponding vector?

Following our system sketch up there, the next logical step is to intersect all these unit vectors we computed in each camera, for each marker. But the question that arises is: how do we know which vectors to match/intersect? — i.e., which vectors are pointing to the same marker on each camera? One convenient way of tackling this is exploring epipolar geometry (thanks for the inspiration, Chung J. et al). Simply put, knowing the rotation and translation between two cameras – e.g., O and O’ (see image below) – allows you to map a point X as seen by O to an epiline l’ in O’. This drastically reduces the search space for the corresponding X’ — that is, the corresponding view of the same object.

There’s no need to delve into epipolar geometry — especially since I didn’t learn it in detail. OpenCV supplies satisfactory documentation and great features for this. First, I’ve used OpenCV’s stereoCalibrate within a routine that continuously checks which pairs of cameras see the calibration pattern at any given time, records that information, and later computes the fundamental and essential matrices. These encode the [R|t] relationships between the cameras, as shown in the figure below. Following that, computeCorrespondEpilines gets you the parameters a, b and c of the epiline’s equation au + bv + c = 0. So, if the n-th camera sees marker j, and a camera n+k, k \geq 1 sees a set of markers M, the matching view of j is the marker m given by

\operatorname*{argmin}\limits_{m \in M} \left| au_m+bv_m+c \right|
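A sketch of that matching step (the fundamental matrix F comes from cv::stereoCalibrate; names are illustrative):

#include <opencv2/calib3d.hpp>
#include <cmath>
#include <limits>
#include <vector>

// For blob j seen by camera n, find the index of the best-matching blob seen by
// camera n+k, by picking the one closest to j's epiline in the second image.
int matchByEpiline(const std::vector<cv::Point2f>& blobs_n, int j,
                   const std::vector<cv::Point2f>& blobs_m, const cv::Mat& F)
{
    std::vector<cv::Vec3f> lines;   // epilines a*u + b*v + c = 0 in the second image
    cv::computeCorrespondEpilines(blobs_n, 1, F, lines);
    const cv::Vec3f& l = lines[j];

    int best = -1;
    double bestDist = std::numeric_limits<double>::max();
    for (size_t m = 0; m < blobs_m.size(); ++m) {
        // coefficients are normalized (a^2 + b^2 = 1), so this is a point-line distance
        double d = std::abs(l[0] * blobs_m[m].x + l[1] * blobs_m[m].y + l[2]);
        if (d < bestDist) { bestDist = d; best = (int)m; }
    }
    return best;
}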

Encoding these relations yields the results shown below (step-by-step usage of the aforementioned OpenCV functions can be found in the provided stereo_calib.cpp sample – worth checking out).

Epipolar relations between cameras being used to match points between two scenes.

Transformations between cameras

So far we have our unit vectors represented in each camera’s coordinate frame, and we are able to match the vectors “pointing” to the same marker/object on the scene. Great! Now, as a last step before computing their intersection, we need to express them all in a single coordinate frame. For simplicity, the first camera (n = 0) was chosen as the main reference for the system. So, in order to express all unit vectors in that coordinate frame, we need to know [R_n^0|t_n^0],\text{ }\forall n. In theory, we could concatenate all the successive transformations obtained with the stereo calibration procedure, but I’ve found that this propagates lots of measurement errors and uncertainties. For now, since I’m using a small rig with only 4 cameras, the solution is to simultaneously show all cameras a known reference (namely, an AR marker), as shown below:

Computing relationships between cameras via simultaneous imaging of an AR marker.

With each camera seeing the marker at the same time, we can easily compute the relationship between the 0-th camera and the n-th camera by just composing two transformations, as illustrated below:

Transformations between the AR marker, the 0-th camera and the n-th camera.

We can then easily obtain the transformations we’re interested in:

R_n^0 = R_0^A(R_n^A)^T

t_n^0 = t_0^A - R_n^0\, t_n^A
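In code, with each camera’s view of the AR marker expressed as a rotation/translation pair (e.g. the rvec/tvec returned by cv::solvePnP or an ArUco routine), the composition is a one-liner per equation (illustrative sketch):

#include <opencv2/calib3d.hpp>

// Compose the camera-n -> camera-0 transform from both cameras' poses of the
// same AR marker, seen at the same instant
void cameraToReference(const cv::Mat& rvec0, const cv::Mat& tvec0,  // marker in camera 0
                       const cv::Mat& rvecN, const cv::Mat& tvecN,  // marker in camera n
                       cv::Mat& R_n0, cv::Mat& t_n0)
{
    cv::Mat R0, Rn;
    cv::Rodrigues(rvec0, R0);        // R_0^A
    cv::Rodrigues(rvecN, Rn);        // R_n^A

    R_n0 = R0 * Rn.t();              // R_n^0 = R_0^A (R_n^A)^T
    t_n0 = tvec0 - R_n0 * tvecN;     // t_n^0 = t_0^A - R_n^0 t_n^A
}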

Getting to the point

Now we (finally!) have everything we need to intersect our vectors and find the position of the point(s) in the scene. I greatly recommend you take a look at this post on Math StackExchange, since I’m piggybacking on Alberto’s detailed answer.

So, let’s assume that k \in [2, N] cameras see a marker – thus we have k unit vectors to deal with. Let’s also assume, from now on, that all points and vectors are being represented in the 0-th coordinate frame. Essentially, by “intersecting” the vectors, we are trying to find a point \tilde{P} that is as close as possible to all the lines, minimizing the quadratic error relation given by

(2)   \begin{equation*}  E^2({\tilde{P}}) = \displaystyle \sum_k \bigg( ||(\tilde{P}-\mathcal{F}_{ck})||^2 - \big[ (\tilde{P} - \mathcal{F}_{ck}) \cdot \hat{p_k} \big]^2 \bigg) \end{equation*}

where \mathcal{F}_{ck} is the focal point of the k-th camera and \hat{p}_k is the unit vector emanating from the k-th camera towards our marker of interest. This, ladies and gentlemen, is also probably the most convoluted way of writing the Pythagorean Theorem. To minimize this relationship, we look for the critical point where the derivative vanishes,

\frac{\partial E^2(\tilde{P})}{\partial \tilde{P}} = 0

Taking the derivative of (2) and manipulating it a bit, we get

\displaystyle \bigg[ \sum_k \big( \hat{p}_k \hat{p}_k^T - I \big) \bigg] \tilde{P} = \displaystyle \sum_k \big( \hat{p}_k \hat{p}_k^T - I \big) \mathcal{F}_{ck}

This can be represented as the matrix system S \tilde{P} = C. Thus our intersection point is given by \tilde{P} = S^{+} C, where S^{+} = S^T(SS^T)^{-1} is the pseudoinverse of S.
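Assembling and solving that system with OpenCV is short enough to sketch here (illustrative names; the SVD-based solve effectively applies the pseudoinverse):

#include <opencv2/core.hpp>
#include <vector>

// Least-squares "intersection" of k rays, each given by a camera focal point
// F_ck and a unit direction p_k, all already expressed in camera 0's frame
cv::Point3d intersectRays(const std::vector<cv::Point3d>& origins,
                          const std::vector<cv::Point3d>& dirs)
{
    cv::Mat S = cv::Mat::zeros(3, 3, CV_64F);
    cv::Mat C = cv::Mat::zeros(3, 1, CV_64F);

    for (size_t k = 0; k < dirs.size(); ++k) {
        cv::Mat p = (cv::Mat_<double>(3, 1) << dirs[k].x, dirs[k].y, dirs[k].z);
        cv::Mat F = (cv::Mat_<double>(3, 1) << origins[k].x, origins[k].y, origins[k].z);
        cv::Mat A = p * p.t() - cv::Mat::eye(3, 3, CV_64F);   // p_k p_k^T - I
        S += A;
        C += A * F;
    }

    cv::Mat P;
    cv::solve(S, C, P, cv::DECOMP_SVD);   // P = S^+ C
    return cv::Point3d(P.at<double>(0), P.at<double>(1), P.at<double>(2));
}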

But will it blend?

How disappointing would it be not to have a single video after all this boring maths? There you go.

If you like looking at badly-written, hastily-put-together code, there is a GitHub page for this mess. I plan on making this a nice, GUI-based thing, but who knows when. For more info, there’s also a report in Portuguese, here.

’til next time.

libFilter add-ons

Following my last post on my minimalist filter library, I just got off my butt to add some high-pass filtering capabilities too. That’s especially useful when you’re trying to remove trends from datasets – this happens a lot, for instance, in biomedical applications, e.g. ECG, where breathing artifacts come up as low-frequency trends on your signal.

A simple example of such trend removal is included. As shown in the snippet below, taken from the example, after instantiating the Filter, extracting the trend and DC components is very straightforward:

The result looks like the following (the blue sinusoid on top is signal + trend + DC, the red sinusoid is signal + DC, and the orange sinusoid is the filtered signal):

The filters implemented only go up to order 2 (I was lazy, sorry), and are also based on normalized Butterworth polynomials, discretized via bilinear transforms. Frequency responses for two hypothetical filters at 1kHz sampling frequency are shown below:

All filters are implemented in closed form, so they can be reparameterized on the fly with minimal computational effort. Enjoy (and report any bugs). A major cleanup of the code is foreseen (and a notch-filtering example too).
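For reference, here’s roughly what such a closed form boils down to for a second-order section. This is the textbook bilinear-transform result for a Butterworth prototype, written as a generic sketch rather than libFilter’s actual code (struct and member names are made up):

#include <cmath>

// Generic second-order (biquad) Butterworth section; design() can be called
// again at any time to retune the cutoff on the fly.
struct Biquad {
    double b0, b1, b2, a1, a2;   // coefficients
    double z1 = 0, z2 = 0;       // delay line (transposed direct form II)

    // highpass = false -> low-pass, true -> high-pass; fc and fs in Hz
    void design(double fc, double fs, bool highpass) {
        const double K = std::tan(M_PI * fc / fs);
        const double Q = 1.0 / std::sqrt(2.0);            // Butterworth damping
        const double norm = 1.0 / (1.0 + K / Q + K * K);
        if (!highpass) { b0 = K * K * norm; b1 = 2.0 * b0; b2 = b0; }
        else           { b0 = norm;         b1 = -2.0 * b0; b2 = b0; }
        a1 = 2.0 * (K * K - 1.0) * norm;
        a2 = (1.0 - K / Q + K * K) * norm;
    }

    // one sample in, one sample out
    double process(double x) {
        const double y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

Feeding a signal through the high-pass variant with a low cutoff is exactly the kind of trend/DC removal shown in the plot above.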

’til next time.

Minimalist low-pass filter library

So, the other day I needed to compute some low-pass filters on the fly on an Arduino Mega. By “on the fly” I mean that the filters’ parameters would eventually be recomputed mid-operation, so hard-coding some equation with static const parameters would not cut it. Using a static filter is, however, the most common application scenario. If that’s your case – and I insist – tweak the sh*t out of your filter with a decent tool and then just add it to your code. If, on the other hand, you need to update your filter dynamically (or if you’re just too lazy to compute poles and zeros), then this is for you.

I ended up putting together a minimalist library for that, libFilter (github!). Works with the Arduino IDE and as a regular C++ library (just uncommenting one #define does the trick).


Using two filter instances on a signal from a load cell getting punched.

For now it only implements low-pass filters based on normalized Butterworth polynomials, but who knows what necessity might add to it next. Let’s take a look at some ADC filtering:

’til next time!

P.S.: I just can’t avoid letting this page eventually fall into temporary ostracism. Geez.

Finding my way with Cura 10.06

I’ve been using Cura as my go-to 3D-printer slicer for quite some time now. Compared to Slic3r, it’s faster and produces more optimized G-Code (using the same settings in both slicers, Cura’s prints were faster for me – but as always, YMMV). However, Cura provides fewer tweakable options than Slic3r, so it takes some getting used to.

So, I was using Cura 10.04 under OSX (10.10 and 10.11), and all was nice and good. Suddenly, a wild bug appeared. As shown in the picture below, Cura uses some text overlays on the 3D viewer to display information: print duration, filament usage, estimated cost, type of view, etc. On my machine, all text overlays were suddenly gone! Rebooting, reinstalling, re*ing, etc., didn’t seem to solve the issue. And, needless to say, using a slicer without the aforementioned info is quite frustrating.


Left: Fully functional Cura, with text overlays in 3D-view (image courtesy Tom’s Guides). Right: My wild bug in action – bye, text overlays.

Furthermore, this bug seemed to afflict only a handful of unlucky bastards: Google pointed me to one unanswered forum entry, and to one GitHub issue ticket. The latter got a response from one of Cura’s developers, which stated that they’re “not working on a fix, as we will be replacing the whole UI in a few months, which no longer uses this old font rendering code”.

All right then. The dev’s answer was posted mid-April 2015, so I figured a new version should already exist. Indeed, in the list of beta releases, Cura 10.06 was available for download. I grabbed it and got going.

The UI got heavily revamped, and lots of options were made available (bringing a Slic3r-y level of tweakability to Cura). A quick tour in Cura’s Preferences > Settings allows you to select which options you want to edit in the main window. No need for much detail here, as the help pop-ups on each option make the whole experience nicely intuitive.


Cura 10.06 revamped UI.

Unfortunately, as new options were added, others were lost. When first configuring Cura, you need to add a printer. In previous versions, you could choose from a list of models or just go with a “generic” machine – to which you’d add dimensions and other relevant information. This option is no longer readily available – you have to pick a model from the list! Having a DIY printer that looks like nothing on the list, I was quite frustrated. For test purposes, I picked the Prusa-i3 preset (though my printer uses a bed roughly four times as big).

On top of that – and that was the killer for me – there is no option to edit the Start/End G-Code procedures. While this sounds trivial/useless for most off-the-shelf printers, it (probably) won’t work with a custom machine (e.g., like mine). For instance, due to the form and size of my printer’s bed, I can’t home XYZ like a Prusa-i3 would (and as the preset G-Code insists on doing).

After shelving Cura 10.06 for some days, I stumbled on this page on the RepRap Wiki. It shed some precious light on the workarounds required to solve these (and other) problems. As it turns out, much of Cura’s printer configuration is set in JSON files – one file per printer model, stored in Cura’s installation folder, under resources > machines. To add my custom printer, I copied the prusa_i3.json preset to a new custom_3dcnc.json in the same folder, and went on editing it. The JSON entries are pretty self-explanatory:

Changing the id and name fields makes your custom printer appear on Cura’s list. Also note: AFAIK, you need an STL model for your printer’s bed (commenting out the platforms field won’t work). By default, they’re stored under resources > meshes. I could have reused one, but I simply exported my printbed’s STL from my CAD program.


Configured Cura (with my printer’s printbed being displayed).

Last, but not least, machine_start_gcode and machine_end_gcode are what I was looking for. Add the G-Code inside a single pair of quotes, with commands separated by newline characters (\n), and you should be golden. Save the file, reload Cura, and you’re good.

’til next time.

PCL library with Kinect under OSX 10.11

This last week, I dug up my trusty Kinect for a spin. I’ve been wanting to mess around with the PCL (Point Cloud Library) for some time, so I decided to give it a shot.

Installation on OSX using Homebrew is fairly straightforward, as shown in their documentation. However, I wanted to make sure I had support for the Kinect (the Xbox 360 model).

Side note on Kinect support: To get data off of your Kinect, you can use the OpenNI library (which handles Kinect “1”).  OpenNI2 does exist, but it handles only the Kinect “2” and Occipital’s Structure Sensor. I’ll be using OpenNI here, because it’s supported directly by PCL. However, for standalone applications, I’d greatly recommend libfreenect (also available through Homebrew), which is fast, lightweight, and very easy to use.

So, we’ll be using Homebrew to get the following libraries: Boost, Flann, Qt4, GLEW, VTK and, last but not least, OpenNI (1 and 2, just for argument’s sake). Run:

brew update
brew install boost flann qt glew vtk openni openni2

Grab a coffee, ’cause this will take some time. Now, to install PCL, you may want to check the available options (brew options pcl), then install it with at least the following settings:

brew install pcl --with-openni --with-openni2

Great, if nothing intensely wrong happened midway, you should be golden. Now, at that point I was eager to get some Kinect demo up and running, which led me to PCL’s openni-grabber page (go ahead, visit the page). They are kind enough to supply the C++ file and a CMakeLists.txt to compile it. It compiled fine, but crashed hard every time I tried to use it, spitting out the message:

Assertion failure in +[NSUndoManager _endTopLevelGroupings], /Library/Caches/com.apple.xbs/Sources/Foundation/Foundation-1256.1/Misc.subproj/NSUndoManager.m:359
2016-01-06 12:50:51.779 openni_grabber[42988:7013866] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.

Now, I don’t know if this happens under other platforms. After wasting some days struggling with this, I realized what the heck was going on. Let’s take a look at the non-working supplied code. In the header section, we see:

It uses PCL’s CloudViewer, which has long been deprecated (meaning that this example is very out-of-date). Secondly, we see that the grabber’s functionality is based on a callback, defined here:

The cloud_cb_ callback function is called every time a new data packet arrives from the Kinect. This is fine, but the showCloud() command updates the display of the point cloud from within the callback, which is what’s creating our "...is only safe to invoke on the main thread..." error. To fix that, the callback should only update the cloud’s data, while the cloud itself must be displayed with a call placed on the main thread. The fixed code I came up with looks like this:
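(The actual gist isn’t embedded on this page; the sketch below shows the same pattern, using a plain boost::mutex instead of the C++ wrapper mentioned next.)

#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread/mutex.hpp>

class SimpleOpenNIViewer
{
public:
    SimpleOpenNIViewer() : viewer("PCL OpenNI Viewer") {}

    // Callback: only stores the latest cloud, no UI calls in here
    void cloud_cb_(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
    {
        boost::mutex::scoped_lock lock(cloud_mutex_);
        latest_cloud_ = cloud;
    }

    void run()
    {
        pcl::Grabber* grabber = new pcl::OpenNIGrabber();
        boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
            boost::bind(&SimpleOpenNIViewer::cloud_cb_, this, _1);
        grabber->registerCallback(f);
        grabber->start();

        while (!viewer.wasStopped())
        {
            pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr cloud;
            {
                boost::mutex::scoped_lock lock(cloud_mutex_);
                cloud.swap(latest_cloud_);          // grab (and clear) the newest cloud
            }
            if (cloud)
                viewer.showCloud(cloud);            // display call on the main thread
            // (a short sleep could go here to avoid spinning when no new cloud arrived)
        }

        grabber->stop();
        delete grabber;
    }

private:
    pcl::visualization::CloudViewer viewer;
    pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr latest_cloud_;
    boost::mutex cloud_mutex_;
};

int main()
{
    SimpleOpenNIViewer v;
    v.run();
    return 0;
}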

Now, the callback cloud_cb_ only updates the cloud’s data, not touching the UI. The drawing happens in a loop inside SimpleOpenNIViewer::run(). Since the callback and the loop are handling the same data set asynchronously, I’ve added a mutex around the critical sections to avoid any issues (the nice CPP wrapper stolen from libfreenect’s examples, thanks!). To compile it, the following CMakeLists.txt should do the trick:

The results are not astounding: the RGB data is slightly misaligned with the depth data, but I believe this is a problem internal to PCL, since my Kinect is properly calibrated. To use only the depth data in the code above, simply replace all instances of pcl::PointXYZRGBA with pcl::PointXYZ.

Kinect point cloud viewed in PCL.
’til next time!

Using CMake and Qt Creator 5.5.1

Well, things have been dead around here. So, to keep things running, I’ve decided to post some less important content, mostly as notes-to-self (ya know, when you spend the weekend trying to get something to work, only to forget how you did it a month later). To avoid purging hard-earned pseudo-knowledge, I’ll try to get into the habit of writing it down here.

So, for my first post in this (otherwise endless) series: how to use CMake with an IDE.

CMake is a nice cross-platform utility that generates Makefiles for C/C++ applications. It will automatically search for include files and libraries, generate configurable source code, etc. A simple CMakeLists.txt file looks like this:

This neatly handles dependencies and configurations in a much more platform-independent way. Nice tutorials on CMake are available here.

As the kind of guy that can’t cope with VI(M) (sorry, I’m just too dumb for it), I generally gravitate towards programming with IDEs. I began searching for an IDE that had decent CMake integration. “Eclipse can handle it, for sure”, I thought. Nope. I was disappointed to find out that Eclipse not only lacks native CMake support, but there also seem to be no plugins that handle it. CMakeBuilder was widely referenced online, but it seems that it is no longer available (using Eclipse to query www.cmakebuilder.com/update/ yields no results).

Qt Creator ended up being the go-to solution. AFAIK, it is the only well-supported cross-platform IDE that handles CMake natively. Suppose we have the test_project folder, with a main.cpp and the CMakeLists.txt I’ve shown above. Use Qt Creator to open a file or project:

Open the CMakeLists.txt. You’ll be prompted to run CMake (which you can skip, if you wish). After running the wizard, the left project pane will look something like this:


Neither the project name nor the source files will be displayed, which is a bummer. This seems to be an issue with Qt Creator >5.0. There’s a fairly easy fix. Add the aux_source_directory command to your CMakeLists.txt:

Run Qt’s import wizard now, and the result will be as expected: the project name will be shown, together with the listed source files.


 

Great. Hit ⌘B and watch the build magic happen.

’til next time.