Sep 08 2013

Well, some might remember my time with a cheap and nasty Android tablet (some might call these “landfill Android”).  The device packaging did not once acknowledge that there was GPL’ed software on board, let alone explain how one obtains the source.

I discovered it was based around the Vimicro VC0882 SoC.  Turns out, that’s the same SoC as in the ViewSonic ViewPad 10e, and ViewSonic do release their kernel sources on their knowledge base.

Thank-you ViewSonic, you have just helped me greatly!  Maybe I should track down one of your tablets and buy one in appreciation.

Aug 05 2013

This last week the local repeaters here in Brisbane have been rather quiet.

One repeater I used to use a lot has been acting up, so rather than risk exacerbating a worsening issue, I figured I’d leave it well alone until it was fixed.

Another I lurk on has been working fine, but many of the people I’d talk to are away on holidays.

So, I figured I’d dust off my trusty HF whip and give the lower frequencies another crack.  This time last week I was getting nowhere on 15m; maybe I was in the wrong place at the wrong time.  (Aside: is there ever a right time to be in the wrong place?)

40m I knew worked on this particular antenna, so I’ve been lurking there… calling in on the Coral Coast net on 7060kHz in the mornings, and tuning up and down the band on the way home.  I make note of my listening frequency via APRS-IS; see my tracker, or look for VK4MSL-10 on aprs.fi.

I knew the antenna worked there; not perfectly, but it did work.  It works particularly well when the other station is equipped to pick up weak stations.  Earlier this evening, I set out from my workplace listening on 7060kHz where I was this morning.  I noticed it was chock-a-block full of stations north of us.  Indonesia and surrounds, by the sounds of things.

I couldn’t make head or tail of what they were saying, so I moved up the band and stumbled across a couple of “local” stations chatting around 7175kHz.  Turns out one was portable in Barcaldine; I didn’t catch the name, but the callsign was VK2DQC, I think.  (I didn’t write it down.)  We chatted for a short while, but apparently my signal was up and down like a yo-yo.

No surprise: I started the QSO walking up Greer Street, Bardon, continued my next over while riding down The Drive, Cecil Road and Bowman Parade, then up through Sunset Park.  Anyone who knows that stretch knows it goes up, then down, then up again, then down.  I finished the chat as I came down Monoplane Street, Ashgrove.

Tuning around, I found another pair talking on 7158kHz.  Bob VK6KJ and Bruce VK2??.  As they were talking a third station, Joe, W5?? called in from Florida USA.  To say I was impressed would be an understatement, all three were coming in Q5, and signal strengths in excess of S6 in most cases.  Bob was peaking S9.

Joe mentioned his misfortune of having some equipment destroyed in a storm, which necessitated the replacement of a computer along with its OS.  Apparently he’s not a fan of “Window 8” (as we call it at work).  I did try to call Joe but must’ve doubled and wasn’t heard.

I did though, manage to make contact with Bob.  He was located in Albany, about 400km south-southeast of Perth, and running 400W into a two-element beam pointed at the US.  With my measly 100W and stubby home-made antenna, I apparently was registering a Q5S5 signal with the odd drop-out.

Clearly Bob’s end was doing all the work, but impressive nonetheless.  Seeing as the evenings can be particularly quiet, I think I’ve found a new pastime to while away the hour-long trip home: stirring up HF on the deadly-treadly-mobile!

Jul 15 2013

Well, this year’s International Rally of Queensland didn’t go the way everyone expected. We were there with Brisbane Area WICEN, providing the backup communications for the event. Our primary role was to relay the scores given to us by the post chief in the timekeeper’s tent. They looked after scheduling the cars, getting times, and sending the cars through. We just passed on scores (start/finish times) and other traffic.

Saturday went well. My father and I were set up at Kandanga North running the WICEN checkpoint for stages 6 and 12 of the rally. After some early hiccups getting the packet radio network going, we had the scores being sent out on time and everything running smoothly. Apart from some cows holding up traffic, there were no delays.

Sunday however… just about everyone would have heard about the fatality. My father and I ran the WICEN checkpoint at the start of the fateful Michell Creek Special Stage 14.

Having now seen the ABC website footage, and looking at the competitor lists and my own logs, I can say with 90% certainty which car the unfortunate one was and when they left the stage (and therefore with 45% certainty who the deceased is).

My condolences go out to both driver and co-driver at this difficult time.

Update: The names have been released.

Jul 11 2013

Some time back I actually got to look at Windows 8 first hand.  What intrigues me about it is that it seems more a knee-jerk reaction to the rise of the tablet, and less a carefully considered re-work of the user interface.

In fact, from what I hear on the grapevine, they don’t seem to have any real road map forward.  This has scared the likes of Rockwell Automation, who have started baking their SCADA systems into their hardware to remove their dependence on the OS.

All this to chase the tablet and smart phone market.

It makes me wonder what their road map actually is.  Perhaps they’ve taken a leaf out of Mental as Anything’s song book?  Sure looks that way…

New Windows released last night
You could call it a blight
It’s such a shame, We never thought it was
Gonna be so bad!  They wished for something good.

They’ve had enough of that
at other times in days gone by.
Changed so much I know…
Mmm just enough, enough to make you cry.

If you leave me, can I come too?
We can always stay
But if you leave me, can I come too?
And if you go, can I come too?

We let it happen again!
‘Cause that they couldn’t take.
Ooh once was quite enough
It’s easy to forgive, harder to forget

If you leave me, can I come too?
We can always stay
But if you leave me, can I come too?
And if you go, can I come too?

(Original lyrics credit: Chuck Krumel, Jeff Raymond, James Stewart)

Jun 09 2013

Over the last year or so, I’ve done a number of improvements to the bicycle mobile station.  I’ve kept meaning to document what’s happened, as a number of people have asked about the station, and not everyone gets to see it up close.

A big move came when the FT-290RII’s 25W PA died.  I was using the FT-897D a lot, and that thing is a heavy lump of a radio to lug around, so I bought its smaller sister, the FT-857D, with its remote head kit.

A second move was from the heavy 40Ah battery pack to a much lighter 10Ah pack.  Then, in July last year, I bought myself a new pair of wheels.  The ’09 model Boulder pictured earlier still gets regular use and is good on the road, but on longer trips and on hills it’s a drag, and the tyres are not good on dirt.

Thus I bought a Talon 29 ER 0… in contrast to the Boulder, this bike is designed with mountain biking in mind, so it’s a little heavier duty, with better gearing and suspension.  Sadly not dual-suspension… they don’t seem to make one that will take a pannier rack on the back like I require.  Nonetheless, this one has been going well.

VK4MSL/BM Mk3: New and improved

Rather than buying an open basket like I did on the other, I went one step further and bought a motorcycle hard top-box, mounting that on the back.  Thus the FT-857D could live in there, sheltered from the weather.  I later also bought pannier bags: my battery, some tools, spare tubes and visors for the helmet live in one bag; my clothes live in the other.

The station is otherwise not much different in concept to how it was.  The antennas now mount on opposite sides of the top box with right-angle aluminium.  I still have to work on grounding for the HF side, but even then, the station delivers respectable performance on 40m.

On my way to BARCfest this year, I was being heard S9+40dB in Newcastle with 60W PEP.  I’d have run 100W, but due to the earthing problems, I found I was getting a bit too much RF feedback.

The 2m antenna is similar to previous ventures, just a 51cm length of RG-213 with the jacket and braid stripped off and a PL-259 plug soldered onto one end.  It’s a simple design that’s easy to make, easy to fix, cheap and can be constructed from readily available parts.  If you can make your own patch leads, you can make one of these.

VK4MSL/BM: 2m antenna.  Just some RG-213 and a PL-259 connector is all you need

70cm remains a work in progress.  In theory, a ¼λ antenna resonant at 144MHz should also resonate at 432MHz, as this is its ¾λ frequency.  In practice, this has been a pain to tune.  I basically just stick to 2m and leave it at that.

As for coupling the radio to the head unit… I could use the leads that Yaesu supplied.  One distinct disadvantage with this is that it ties me into using only compatible equipment.  The other is that the connectors are just not designed for constant plugging/unplugging, and the 6P6C and 8P8C connectors become unreliable very quickly if you do this.  A solution was to make up a patch lead to go onto each end, and to use some standard cable in the middle.

Initially I did this with a 25-pin printer cable, but found the RF problems were terrible!  Three lengths of CAT5e, however, did the job nicely.  Yes, I sacrifice one pin, right in the middle; 24 pins is more than enough.  I allocate six pins on one end for the head unit cable, choosing the wires so that the connections are consistent at each end.

At the other end, I have a standard convention for microphone/control cabling.  The balanced nature of the CAT5e works well for microphone cabling on a radio like the FT-857D, which was designed with dynamic microphones in mind.

The only other connectors I need then are for power, and for lights.  Power I just use Anderson PowerPole type connectors, the 30A variety… and for lighting, I use ruggedised 6-pin automotive connectors.

VK4MSL/BM Mk3: Rear connections onto top box

At the handlebars, things have been refined a little… the switches and push buttons are in plastic boxes now.  Here I still have to work on the front basket mount, this compromise of a former broomstick handle hose-clamped to the handlebars is a workaround for the basket bracket’s inability to clamp around the rather thick handlebars.  This arrangement is fine until one of the hose clamps slips (which happens from time to time).

For now I put up with it.  The controls for the radio are now mostly on the left side.  Since the rear gear shift and front brake are on the right-hand side, I do far more with my right hand than with my left; this way I free up my right hand to actually operate the bike and use my less-busy left hand to operate the radio.

VK4MSL/BM: Front handlebar controls

I mentioned HF earlier… the HF antenna should look familiar.  It’s actually the same one I’ve been using for a while now.  My most distant contact so far has been into the Cook Islands on 20m.  I’ve had successful contacts on 80m, 40m, 20m and 15m with this antenna; 10m and 6m are the two that elude me just now.

VK4MSL/BM Mk3: With the HF antenna

It is a little difficult to see the entire antenna.  I did try to pick the angle to show it best… but if you look above the tree, you’ll see the tip of it immediately above the top box.  Below is a close-up shot to give you an idea where to look.

VK4MSL/BM Mk3: Base of HF antenna

One big advantage of the new setup is that night-time visibility is much better than before.  On the front I have a LED strip which lights up the path maybe 2m ahead of the front wheel.  Not a strong light, but it ticks a box… my main headlight is on the helmet — people frequently assume they’re being filmed by it.  The rear, however, is a different story:

VK4MSL/BM Mk3: All lit up

It doesn’t look like much in the day time, but it is quite bright at night.  The back uses two LED strips mounted behind the red plastic on the top box, and one can easily read a book in the light produced.  Looking in the rear vision mirrors at night, the red glow can be seen reflecting off objects for a good 100m or so.

On my TO-DO list is to mount switches to operate the brake light (just above the callsign).  Options include reed switches, hydraulic switches in the brake lines, or strategic placement of micro-switches.  I’ll have to experiment.  The rest of the electronics is in place.

As to the other bike?  It’s still around; in fact, if you look at the photo of the VHF antenna, you can see it in the background… alongside the trailer I use when I do my grocery shopping.

I’ve done away with the basket on it, and gotten a second mounting plate, so the same top box fits on the back of the other bike, along with the same pannier bags, and same front basket.  It has done about 2800km since I bought the Talon (mid July, 2012), the Talon itself has done 2617km.

Thus I’d estimate the Boulder is well and truly past the 10000km mark, probably closer to 11000km now.  It’s still the primary means of getting around, averaging close to 100km a week and with a heavy load.  Not bad for a bike that’s designed for a little recreational riding.

May 17 2013

Hi all,

Recently I heard a story of a 15-year-old who was apparently playing with a golf ball he got from a “mate” that turned out to be packed with more than what he bargained for.  What struck me most was the suggestion that he was likely targeted.

The other thing that stood out, was that like me, he has Asperger’s Syndrome.

Having Asperger’s can make it rather difficult, depending on its severity, to judge someone’s character.  That’s why it shook me up more than somewhat — had I judged someone’s character in a similar way, that could have been me!

For those who are wondering, there is a community trust, and yes, I fully intend to drop some money into it on Monday.  Given there’s apparently about $30,000 in it so far, maybe another $2,000 into the pot.

My hope for young Michael now, is that the surgeons are able to restore enough function in his hands to allow him to resume some sense of normality.

One question I have though is what his interests were.  A common trait among people with Asperger’s is a keen interest in one field or another.  For me it’s electronics, radio and programming.  For one of my friends, it’s horticulture; one of my cousins is into cars.

Whatever Michael’s interest was, I think it important that as a community, we find some way for him to resume that hobby.  It’s good to know that a few fingers were saved… he apparently has a little finger on his left hand (nothing else though) and, from the photos, two middle fingers and a thumb on his right.  So he can still give his bully the middle finger at least, and should be able to do many things himself with some practice.

As an example, take the band Def Leppard.  After releasing a few albums, their drummer, Rick Allen, lost his left arm in a car accident.  The band found a way for him to continue as their drummer, using two foot pedals.

I have no idea how to assist, and there’s probably a lot of people rallying around him, as they should.

In short, I have been thinking a lot about this incident.  Michael, we have likely not met, and I probably wouldn’t have known you from a bar of soap prior to that fateful day… but you have very much been in my thoughts this last week, and I do hope we can find a way to give you a hand somehow (if you’d pardon the pun).

May 12 2013

I’ve been working with VRT Systems for a few years now. Originally brought in as a software engineer, my role shifted to include network administration duties.

This of course does not faze me; I’ve done network administration work before for charities.  There are some small differences: for example, back then it was a single do-everything box running Gentoo hosting a Samba-based NT domain for about 5 Windows XP workstations; now it’s about 20 Windows 7 workstations, a Samba-based NT domain backed by LDAP, and a number of servers.

Part of this has been to move our aging infrastructure to a more modern “private cloud” infrastructure.  In the following series, I plan to detail my notes on what I’ve learned through this process, so that others may benefit.  At this stage, I don’t have all the answers, and there are some things I may have wrong below.

Planning

The first stage with any such network development (this goes for “cloud”-like and traditional structures) is to consider how we want the network to operate, how it is going to be managed, and what skills we need.

Both my manager and I are Unix-oriented people.  In my case, I’ll be honest: I have a definite bias towards open source, and I’ll try to assess a solution on technical merit rather than via glossy brochures.

After looking at some commercial solutions, my manager more or less came to the conclusion that a lot of these highly expensive servers are not so magical; they are fundamentally just standard desktops in a small form factor.  While we could buy a whole heap of 1U rack servers, we might be better served by using more standard hardware.

The plan is to build a cluster of standard boxes, in as small form factor as practical, which would be managed at a higher level for load balancing and redundancy.

Hardware: first attempt

One key factor we wanted to reduce was power consumption.  Our existing rack of hardware chews about 1.5kW.  Since we want to run a lot of virtual machines, we want to make them as efficient as possible.  We wanted a small building block that would handle a small handful of VMs and store data across multiple nodes for redundancy.

After some research, we wound up with our first attempt at a compute node:

Motherboard: Intel DQ77KB Mini ITX
CPU: Intel Core i3-3220T 2.8GHz Dual-Core
RAM: 8GB SODIMM
Storage: Intel 520S 240GB SSD
Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for client-facing network

The plan is that we’d have many of these, and they would pool their storage in a redundant fashion.  The two on-board NICs would be bonded together using LACP and would form a back-end storage network for the nodes to share data.  The one PCIe card would be the “public” face of the cluster, connecting it to the outside world using VLANs.
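For what it’s worth, on Ubuntu 12.04 an LACP bond like this is set up via the ifenslave package and /etc/network/interfaces.  A minimal sketch (the addresses and interface names here are placeholders, not our actual configuration):

```
# /etc/network/interfaces (excerpt); requires the ifenslave package
auto bond0
iface bond0 inet static
    address 192.168.100.11     # back-end storage network (placeholder)
    netmask 255.255.255.0
    bond-slaves eth0 eth1      # the two on-board NICs
    bond-mode 802.3ad          # LACP
    bond-miimon 100            # MII link monitoring interval, ms
```

The switch ports at the other end need a matching LACP channel group, of course.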

For the OS, we threw on Ubuntu 12.04 LTS AMD64, and ran the KVM hypervisor.  We then put it on one of our power meters to see how much power the thing drew.  At first my manager asked if the thing was even turned on… it was idling at 10W.

I loaded it up with a few virtual machines; eventually I had 6 VMs going on the thing, spanning Linux, Windows 2000, Windows XP and a Windows 2008R2 P2V image for one of our customer projects.

The CPU load sat at about 6.0, and the power consumption did not budge above 30W.  Our existing boxes drew 300W, so theoretically we could run 10 of these in place of just one of our old servers.

Management software

Running QEMU VMs from bash scripts is all very well, but in this case we need to give non-technical users access to a subset of the cluster for projects.  I hardly expect them to write bash scripts to fire up KVM over SSH.

We considered a few options: Ganeti, OpenNebula and OpenStack.

Ganeti looked good, but the lack of a template system and media library let it down for us, and OpenNebula proved a bit fiddly as well.  OpenStack, however, is a big behemoth and will take quite a bit of research.

Storage

One factor that stood out like a sore thumb: our initial infrastructure was going to have all compute nodes, with shared storage between them.  There were a couple of options for doing this, such as having the nodes in pairs with DRBD, or using Ceph or Sheepdog… but by far the most common approach was to have a storage backend on a SAN.

SANs get very expensive very quickly.  Nice hardware, but overkill and over budget.  We figured we should plan for that eventuality, should the need arise, but it’d be a later addition.  We don’t need blistering speed, if we can sustain 160Mbps throughput, that’d probably be fine for most things.

Reading the literature, Ceph looked far and above the best choice, but it had a catch — you can’t run Ceph server daemons and Ceph in-kernel clients on the same host.  Doing so runs the risk of a deadlock, in much the same manner as NFS does when you mount from localhost.

OpenStack actually has 3 types of storage:

  • Ephemeral storage
  • Block storage
  • Image storage

Ephemeral storage is specific to a given virtual machine.  It often lives on the compute node with the VM, or on a back-end storage system, and stores data temporarily for the life of a virtual machine instance.  When a VM instance is created, new copies of ephemeral block devices are created from images stored in image storage.  Once the virtual machine is terminated, these ephemeral block devices are deleted.

Block storage is the persistent storage for a given VM.  Say you were running a mail server… your OS and configuration might exist on an ephemeral device, but your mail would sit on a block device.

Image storage is simply raw images of block devices.  Image storage cannot be mounted as a block device directly; rather, the storage area is used as a repository which is read from when creating the other two types of storage.

Ephemeral storage in OpenStack is managed by the compute node itself, often using LVM on a local block device.  There is no redundancy as it’s considered to be temporary data only.

For block storage, OpenStack provides a service called cinder.  This, at its heart, seems to use LVM as well, and exports the block devices over iSCSI.

For image storage, OpenStack has a redundant storage system called swift.  The basis for this seems to be rsync, with a service called swift-proxy providing a REST interface over HTTP.  swift-proxy is very network intensive, and benefits from high-speed networking hardware (e.g. 10Gbps Ethernet).

Hardware: second attempt

Having researched how storage works in OpenStack somewhat, it became clear that one single building block would not do.  There would in fact be two other types of node: storage nodes, and management nodes.

The storage nodes would contain largish spinning disks, with software maintaining copies and load balancing between all nodes.

The management nodes would contain the high-speed networking, and would provide services such as Ceph monitors (if we use Ceph), swift-proxy and other core functions.  RabbitMQ and the core database would run here for example.

Without the need for big storage, the compute nodes could be downsized in disk, and expanded in RAM.  So we now had a network that looked like this:

Compute nodes:

  • Motherboard: Intel DQ77KB Mini ITX
  • CPU: Intel Core i3-3220T 2.8GHz Dual-Core
  • RAM: 2*8GB SODIMM
  • Storage: Intel 520S 60GB SSD
  • Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for client-facing network

Management nodes:

  • Motherboard: Intel DQ77MH Micro ATX
  • CPU: Intel Core i3-3220T 2.8GHz Dual-Core
  • RAM: 2*4GB DIMM
  • Storage: Intel 520S 60GB SSD
  • Networking: Onboard dual gigabit for management, PCIe 10GbE for cluster communications

Storage nodes:

  • Motherboard: Intel DQ77MH Micro ATX
  • CPU: Intel Core i3-3220T 2.8GHz Dual-Core
  • RAM: 2*4GB DIMM
  • Storage: Intel 520S 60GB SSD for OS, 2*Seagate ST3000VX000-1CU1 3TB HDDs for data
  • Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for management

The management and storage nodes are slightly tweaked versions of what we use for compute nodes. The motherboard is basically the same chipset, but capable of taking larger PCIe cards and using a standard ATX power supply.

Since we’re not storing much on the compute nodes, we’ve gone for 60GB SSDs rather than 240GB SSDs to cut the cost down a little. We might have to look at 120GB SSDs in newer nodes, or maybe look at other options, as Intel seem to have discontinued the 60GB 520S … bless them! The Intel 520S SSDs were chosen due to the 5-year warranty offered.

The management and storage nodes, rather than going into small Mini-ITX media-centre style cases, are put in larger 2U rackmount cases. These cases have room for 4 HDDs, in theory.

Deployment

For testing purposes, we got two of each node.  This allows us to test what happens if a node goes belly-up (by yanking its power), and to test load balancing when things are working properly.

We haven’t bought the 10GbE cards at this stage, as we’re not sure exactly which ones to get (we have a Cisco SG500X switch to plug them into) and they’re expensive.

The final cluster will have at least 3 storage nodes, 3 management nodes and maybe as many as 16 compute nodes. I say at least 3 storage nodes — in buying the test hardware, I accidentally ordered 7 cases, and so we might decide to build an extra storage node.

Each of those gives us 6TB of storage, and the production plan is to load balance with a replica on at least 3 nodes… so we can survive any two going belly up. The disks also push close to 800Mbps throughput, so with 3 nodes serving up data, that should be enough to saturate the dual-gigabit link on the compute node. 4 nodes would give us 8TB of effective storage.

With so many nodes, though, one problem remains: deploying the configuration and managing it all.  We’re using Ubuntu as our base platform, and so it makes sense to tap into their technologies for deployment.

We’ll be looking to use Ubuntu Cloud and Juju to manage the deployment.

Ubuntu Cloud itself is a packaged version of OpenStack.  The components of OpenStack are deployed with Juju.  Juju itself can deploy services either to “public clouds” like Amazon AWS, or to one’s own private cluster using Ubuntu MAAS (Metal As A Service).

Metal As A Service itself is basically a deployment system which automatically installs and configures Ubuntu on network-booting clients.

The underlying technology is based on a few components: dnsmasq DHCP/DNS server, tftp-hpa TFTP server, and the configuration gets served up to the installer via a web service API.  There’s a web interface for managing it all.  Once installed, you then deploy services using Juju (the word juju apparently translates to “magic”).
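In essence, the netboot side is the classic dnsmasq-plus-TFTP arrangement.  Conceptually it boils down to something like the following hand-written sketch (illustrative only, not the configuration MAAS actually generates; the addresses and paths are placeholders):

```
# dnsmasq.conf (sketch)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0           # hand new clients a PXE boot loader
enable-tftp                    # serve it from dnsmasq's built-in TFTP server
tftp-root=/var/lib/tftpboot
```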

Further research

So, having settled on what hardware will likely be needed, I need to research a few things.

Firstly, the storage mechanism: we can either go with the pure OpenStack approach, with cinder managing LVM-based storage and exporting it over iSCSI, or we can get cinder to manage a Ceph back-end storage cluster.  This decision has not yet been made.  My two biggest concerns with cinder are:

  • Does cinder manage multiple replicas of block storage?
  • Does cinder try to load-balance between replicas?

With image storage, if we use Ceph, we have two choices.  We can either:

  • Install Swift on the storage nodes, partition the drives and use some of the storage for Swift, and the rest for Ceph… with Swift-proxy on the management nodes.
  • Install Rados Gateway on the management nodes in place of Swift

But which is the better approach?  My understanding is that Ceph doesn’t fully integrate into the OpenStack identity service (called keystone).  I need to find out if this matters much, or whether splitting storage between Swift and Ceph might be better.

Metal As a Service seems great in concept.  I’ve been researching OpenStack and Ceph for a few months now (with numerous interruptions), and I’m starting to get a picture as to how it all fits together.  Now the next step is to understand MAAS and Juju.  I don’t mind magic in entertainment, but I do not like it in my systems.  So my first step will be to get to understand MAAS and Juju on a low level.

Crucially, I want to figure out how one customises the image provided by MAAS… in particular, making sure it deploys to the 60GB SSD on each node, and not just the first block device it sees.

The storage nodes have their two 6Gbps SATA ports connected to the 3TB HDDs for performance, making those visible as /dev/sda and /dev/sdb — MAAS needs to understand that the disk it should deploy to is called /dev/sdc in this case.  I’d also prefer it to use XFS rather than EXT4, and a user called something other than “ubuntu”.  These are things I’d like to work out how to configure.

As for Juju, I need to work out exactly what it does when it “bootstraps” itself.  When I tried it last, it randomly picked a compute node.  I’d be happier if it deployed itself to the management node I ran it from.  I also need to figure out how it picks out nodes and deploys the application.  My quick testing with it had me asking it to deploy all the OpenStack components, only to have it sit there doing nothing… so clearly I missed something in the docs.  How is it supposed to work?  I’ll need to find out.  It certainly isn’t this simple.

Apr 28 2013

I’ve been tinkering with GStreamer lately, specifically QtGStreamer, since Qt is my preferred UI toolkit.

One thing I wanted to be able to do is to programmatically generate a list of all plug-ins and elements accessible to the application.  My end goal was to allow a user to select audio devices for input/output.

Now, I could just try the suck-it-and-see approach, attempting to guess the names of elements.  This could work, but suppose someone wanted to use an element other than the ones blessed enough to be included in my list?

Most of the audio source and sink elements have similar parameters, and the parameters can be discovered at run-time anyway. The bulk of them seem to accept a “device” parameter, which can be probed to generate a list of possible devices.

This gives us an elegant way of letting the user specify what they want. Known elements can be configured with specialised UI forms, but anything else, there’s a way to at least present the options to the user and allow them to configure it.
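To illustrate the probing side: in the GStreamer 0.10 series this was exposed through the GstPropertyProbe interface.  Below is a sketch of how one might enumerate the possible “device” values on an element (the element name is up to the caller, error handling is minimal, and this interface is specific to 0.10; later releases moved to device-monitoring APIs):

```cpp
#include <gst/gst.h>
#include <gst/interfaces/propertyprobe.h>

/* Print the possible values of an element's "device" property,
 * e.g. probeDevices("alsasrc").  GStreamer 0.10 API only. */
void probeDevices(const gchar* factoryName) {
    GstElement* element = gst_element_factory_make(factoryName, NULL);
    if (!element)
        return;

    if (GST_IS_PROPERTY_PROBE(element)) {
        GstPropertyProbe* probe = GST_PROPERTY_PROBE(element);
        GValueArray* values =
            gst_property_probe_probe_and_get_values_name(probe, "device");
        if (values) {
            for (guint i = 0; i < values->n_values; i++) {
                const GValue* value = g_value_array_get_nth(values, i);
                g_print("%s\n", g_value_get_string(value));
            }
            g_value_array_free(values);
        }
    }
    gst_object_unref(element);
}
```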

/*!
 * @file gstinfo.h
 */
#ifndef _GSTINFO_H
#define _GSTINFO_H

#include <QList>
#include <QString>

/*!
 * Get a list of all GStreamer plug-ins installed
 * @param       list    A list that will be populated with the names
 *                      of installed plug-ins.
 */
void GstGetPlugins(QList<QString>& list);

/*!
 * Get a list of all elements provided by the given GStreamer plug-in.
 * @param       plugin  The plug-in to query
 * @param       list    A list that will be populated with the names
 *                      of elements provided by this plug-in.
 */
void GstGetElements(const QString& plugin, QList<QString>& list);

#endif

/*!
 * @file gstinfo.cpp
 */
#include "gstinfo.h"
#include <gst/gst.h>

/*!
 * Get a list of all GStreamer plug-ins installed
 * @param       list    A list that will be populated with the names
 *                      of installed plug-ins.
 */
void GstGetPlugins(QList<QString>& list) {
        /*
         * This code is partially based on code observed in gst-inspect.c
         * from GStreamer release 0.10.36.
         *
         * Original copyright:
         * GStreamer
         * Copyright (C) 1999,2000 Erik Walthinsen <omega@cse.ogi.edu>
         *               2000 Wim Taymans <wtay@chello.be>
         *               2004 Thomas Vander Stichele <thomas@apestaart.org>
         */
        GList* plugins; /* The head of the plug-in list */
        GList* pnode;   /* The currently viewed node */

        /* Empty the list out here */
        list.clear();

        plugins = pnode = gst_default_registry_get_plugin_list();
        while(pnode) {
                /* plugin: the plug-in info object pointed to by pnode */
                GstPlugin* plugin = (GstPlugin*)pnode->data;
                list << QString(plugin->desc.name);
                pnode = g_list_next(pnode);
        }

        /* Clean-up */
        gst_plugin_list_free (plugins);
}

/*!
 * Get a list of all elements provided by the given GStreamer plug-in.
 * @param       plugin  The plug-in to query
 * @param       list    A list that will be populated with the names
 *                      of elements provided by this plug-in.
 */
void GstGetElements(const QString& plugin, QList<QString>& list) {
        /*
         * This code is partially based on code observed in gst-inspect.c
         * from GStreamer release 0.10.36.
         *
         * Original copyright:
         * GStreamer
         * Copyright (C) 1999,2000 Erik Walthinsen <omega@cse.ogi.edu>
         *               2000 Wim Taymans <wtay@chello.be>
         *               2004 Thomas Vander Stichele <thomas@apestaart.org>
         */
        GList* features;        /* The list of plug-in features */
        GList* fnode;           /* The currently viewed node */

        /* Empty the list out here */
        list.clear();

        features = fnode = gst_registry_get_feature_list_by_plugin(
                        gst_registry_get_default(),
                        plugin.toUtf8().data());
        while(fnode) {
                if (fnode->data) {
                        /* Currently pointed-to feature */
                        GstPluginFeature* feature
                                = GST_PLUGIN_FEATURE(fnode->data);

                        if (GST_IS_ELEMENT_FACTORY (feature)) {
                                GstElementFactory* factory
                                        = GST_ELEMENT_FACTORY(gst_plugin_feature_load(feature));
                                list << QString(GST_PLUGIN_FEATURE_NAME(factory));
                        }
                }
                fnode = g_list_next(fnode);
        }
        gst_plugin_feature_list_free(features);
}

How does one use this? (The snippet below assumes gst_init() has already been called, and that <iostream> is included for std::cout.)

	QList<QString> plugins;
	QList<QString>::iterator p_it;

	GstGetPlugins(plugins);
	for (p_it = plugins.begin(); p_it != plugins.end(); p_it++) {
		QList<QString> elements;
		QList<QString>::iterator e_it;
		GstGetElements(*p_it, elements);
		for (e_it = elements.begin(); e_it != elements.end(); e_it++) {
			std::cout	<< "Plug-in "
					<< p_it->toStdString()
					<< " Element "
					<< e_it->toStdString()
					<< std::endl;
		}
	}
Apr 052013
 

This is another one of those brain-RAM-to-blog-NVRAM dumps for my own future reference as much as anyone else’s benefit.  One thing I could never quite get my head around was the store= parameter in OpenERP.

OpenERP explains it like this:

store Parameter

It will calculate the field and store the result in the table. The field will be recalculated when certain fields are changed on other objects. It uses the following syntax:

store = {
    'object_name': (
            function_name,
            ['field_name1', 'field_name2'],
            priority)
}

It will call function function_name when any changes are written to fields in the list ['field_name1', 'field_name2'] on object 'object_name'. The function should have the following signature:

def function_name(self, cr, uid, ids, context=None):

Where ids will be the ids of records in the other object’s table that have changed values in the watched fields. The function should return a list of ids of records in its own table that should have the field recalculated. That list will be sent as a parameter for the main function of the field.
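The mechanics are easier to see in a toy simulation (plain Python; all model and field names here are made up, and the real functions also take a cursor, uid and context):

```python
# Toy simulation of OpenERP's store= trigger mechanism.  All names are
# hypothetical; this only demonstrates the "changed ids in -> ids to
# recalculate out" contract described above.

# Pretend database: task id -> (project id, planned hours)
tasks = {1: (10, 4.0), 2: (10, 2.0), 3: (11, 8.0)}

def _get_projects_from_tasks(task_ids):
    """Trigger function: given ids of changed tasks, return the ids of
    the projects whose stored field must be recalculated."""
    return sorted({tasks[t][0] for t in task_ids})

def recompute_planned_hours(project_ids):
    """Main function of the field: recalculate for the given projects."""
    return {p: sum(h for (proj, h) in tasks.values() if proj == p)
            for p in project_ids}

# Tasks 1 and 3 change, so projects 10 and 11 need recalculating.
dirty = _get_projects_from_tasks([1, 3])
result = recompute_planned_hours(dirty)
```

Changing tasks 1 and 3 flags projects 10 and 11 as dirty, and only those two projects get the stored field recomputed.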

Now note the parameter self. Quite often, these are defined as methods of the object that owns the given function field. self, in the Python vernacular, is similar to the this keyword in C++; it is a reference to the object that owns the method. And here, the use of the name self is misleading.

I had occasion to derive a few timesheet objects so that I could rename a few fields. I didn’t want to change the implementation, just the name, and I wanted to do it in one place rather than do a search-and-replace on each and every view for the timesheet. The model seemed the most appropriate place for it. Unfortunately, the only way to change a field name in this fashion is to copy and paste the definition.

So I tried adding this to my derived model (note: we were able to leave _progress_rate alone, since I had overridden that method in this same object to fix a bug in the original; otherwise I’d use the same lambda trick here):

        'planned_hours': fields.function(
            _progress_rate,
            multi="progress", string='Total Planned Time',
            help=   "Sum of hours planned by the project manager for all "  \
                    "tasks related to this project and its child projects",
            store = {
                'project.project': (
                    lambda self, cr, uid, ids, context=None :               \
                        self._get_project_and_parents(                      \
                            cr, uid, ids, context),
                    ['tasks', 'parent_id', 'child_ids'], 10),
                'project.task': (
                    lambda self, cr, uid, ids, context=None :               \
                        self._get_projects_from_tasks(                      \
                            cr, uid, ids, context),
                    ['planned_hours', 'remaining_hours',
                    'work_ids', 'state'], 20),
            }),

The lambda functions are just a quick way of picking up the functions that would later be inherited by the base class (handled by the osv.osv object).

Imagine my surprise when I get told that there is no attribute called _get_projects_from_tasks. … Hang on, I’m sure that’s what it’s called! I check again, yes, I spelt it correctly. I look closer at the backtrace:

AttributeError: 'project.task' object has no attribute '_get_projects_from_tasks'

I’ve underlined the significant bit I had missed earlier. Despite the fact that _get_projects_from_tasks is in fact defined as a method in project.project, the argument that’s passed in as self is not a project.project instance, but a project.task.

self is not, in fact, self, but another object entirely.
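This is plain Python at work: the first parameter of a function is simply whatever the caller passes, regardless of what it is named. A two-class sketch (hypothetical classes, nothing to do with OpenERP itself) makes the point:

```python
# The first parameter of a plain Python function is just whatever the
# caller passes in; naming it "self" guarantees nothing about its type.

class Project:
    def describe(self):
        return "I am a %s" % type(self).__name__

class Task:
    pass

# Call Project's function directly with a Task instance:
# "self" inside describe() is now a Task, not a Project.
result = Project.describe(Task())
# result is "I am a Task"
```

Which is exactly the trap above: OpenERP calls the trigger function with the *other* model as the first argument.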

So from now on, I shall no longer call this parameter self, as this will trip regular Python programmers up — it should be called what it is. My field definition now looks like this:

        'planned_hours': fields.function(
            _progress_rate,
            multi="progress", string='Total Planned Time',
            help=   "Sum of hours planned by the project manager for all "  \
                    "tasks related to this project and its child projects",
            store={
                'project.project': (
                    lambda project_obj, cr, uid, ids, context=None: \
                        project_obj._get_project_and_parents(cr, uid, ids, context),
                    ['tasks', 'parent_id', 'child_ids'],
                    10
                ),
                'project.task': (
                    lambda task_obj, cr, uid, ids, context=None:    \
                        task_obj.pool.get('project.project'         \
                            )._get_projects_from_tasks(cr, uid,     \
                                ids, context),
                    ['planned_hours', 'remaining_hours', 'work_ids', 'state'],
                    20
                ),
            }),

Hopefully that should be clear next time I, or anyone else, comes across it.

Mar 312013
 

Recently I purchased a second hand Kantronics KPC-3 packet TNC. Brisbane Area WICEN make heavy use of packet at one particular event, the International Rally of Queensland, where they use the 1200-baud network to report the scores of rally cars as they progress through each stage.

Now, I’m a newcomer to radio compared to most on the band. I got my license in 2008, and I’ve only had contact with packet for the last two years, and even then, mostly only at a distance.  I had a hand-held that did APRS, and I’ve also done some APRS using soundmodem and Xastir.  Full-blooded AX.25 has taken me some time, and I’m slowly coming to grips with some of it.

One thing I wanted to figure out was how to relay traffic from a host connected to the RF world to a host on a local network.  I knew there was some protocol that did it, but didn’t know what, or how it worked.  Turns out the protocol I was thinking of was AXIP, which basically overlays AX.25 frames directly atop IP.  There’s also a version that encapsulates them in UDP datagrams: AXUDP.
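For a feel of what’s being encapsulated: an AX.25 frame begins with address fields in which each callsign character is shifted left one bit, with the SSID packed into a final byte. A rough Python sketch (simplified; some AXUDP implementations also append a checksum, which is omitted here):

```python
# Simplified sketch of AX.25 address-field encoding -- the frames that
# AXIP/AXUDP carry verbatim as the IP or UDP payload.  Each callsign
# character is shifted left one bit; the SSID byte carries the SSID in
# bits 1-4, and bit 0 marks the end of the address field.

def encode_address(callsign, ssid, last=False):
    field = bytearray((ord(c) << 1) for c in callsign.ljust(6))
    ssid_byte = 0x60 | (ssid << 1)   # 0x60: conventional reserved bits
    if last:
        ssid_byte |= 0x01            # extension bit: last address
    field.append(ssid_byte)
    return bytes(field)

# Destination and source addresses, then control (0x03 = UI frame)
# and PID (0xF0 = no layer-3 protocol), then the payload.
frame = (encode_address("APRS", 0)
         + encode_address("VK4MSL", 10, last=True)
         + b"\x03\xf0"
         + b">test")
```

With AXIP or AXUDP, bytes like frame above simply become the payload of an IP packet (protocol 93) or a UDP datagram respectively.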

The following are my notes on how I managed to get some routing to happen.

So, my set-up.  I have my FT-897D set up on 145.175MHz FM, the APRS frequency in Australia.  (I did go hunting for BBSes the other night but came up blank, but since APRS uses AX.25 messaging, it’ll be a start.)

To its data port, I have the KPC-3, which connects to my trusty old P4 laptop via good ol’ RS-232 (the real stuff, not pretend USB-RS232; yes, the laptop is that old). This laptop is on my local LAN, with an IP address of 192.168.64.141.

In front of me, is my main workhorse, a MacBook at the address of 192.168.64.140.  Both laptops are booted into Linux, and my target is Xastir.

First thing I had to do was compile the AX.25 kernel modules, along with the ax25-tools and ax25-apps packages.  The userspace tools needed for this are ax25ipd and kissnetd.

On the RF-facing system

This is the P4 in my case, the one with the TNC. First step is to get the TNC into KISS mode. In the case of Kantronics TNCs, the way to do this is to fire up your terminal emulator and run int kiss followed by reset.

Important note: to get it back, shut down everything using the serial port then run echo -e '\0300\0377\0300' > /dev/ttyS0. This sends the three-byte exit-kiss-mode sequence (0xc0 0xff 0xc0).
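That three-byte sequence makes sense once you see how KISS frames its data: each frame is delimited by FEND (0xC0), the special bytes FEND and FESC are escaped inside the payload, and a command byte of 0xFF means “return from KISS mode”. A small Python sketch of the framing:

```python
# Sketch of KISS framing, which is what the TNC speaks once in KISS
# mode.  Frames are delimited by FEND; FEND/FESC bytes inside the
# payload are escaped; command 0xFF tells the TNC to exit KISS mode,
# hence the 0xC0 0xFF 0xC0 sequence above.

FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_frame(command, payload=b""):
    out = bytearray([FEND, command])
    for b in payload:
        if b == FEND:
            out += bytes([FESC, TFEND])   # escape FEND in payload
        elif b == FESC:
            out += bytes([FESC, TFESC])   # escape FESC in payload
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

# A data frame on port 0 (command 0x00) carrying some AX.25 bytes:
data = kiss_frame(0x00, b"\x82\xa0\xc0")
# The three-byte exit sequence from the echo command above:
exit_seq = kiss_frame(0xFF)
```

The exit sequence is thus just an empty frame whose command byte is 0xFF.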

Configure /etc/ax25/ax25ipd.conf. Three things you’ll need to set up:

  • mode: should be tnc
  • device: should be whatever your serial device is (more on this later)
  • your default route: this is the host that will receive ALL traffic

In my case, my ax25ipd.conf on the P4 laptop looks like this:

socket ip
mode tnc
device /dev/ttyS0
speed 9600
loglevel 2
broadcast QST-0 NODES-0
# This points to my MacBook; d means default route
route 0 192.168.64.140 d

Once done, we start the ax25ipd service as root; it should fork into the background, and checking with netstat should show it listening on a raw socket.

On the client machine

Here, we also run an AXIP server, but this time to catch the packets that get flung our way by the other system. We want Xastir to pick up the traffic as it comes in. There are two ways of doing this.

One is to configure kissattach to give us a PTY device, which we then pass on to ax25ipd, then run Xastir as root and tell it to use the AX.25 stack directly. Gentoo’s Xastir ebuild ships with this feature disabled, so that’s not an option here (unless I hack the ebuild like I did last time).

The AX.25 tools also come with kissnetd: this basically creates several PTYs and links them all together so they all see each other’s KISS traffic. So ax25ipd will receive packets and pass them to its PTY, which will then get forwarded by kissnetd to the other PTY, attached to Xastir.

There is one catch. Unlike the kernels of yore, kernel 2.6 and above (3.x is no exception) do not have statically configured PTY devices. So all the AX.25 docs that say to use /dev/ptyq0 for one end and /dev/ttyqf for the other? Make that /dev/ptmx for one end, and the tool will tell you what the other end is called. And yes, it’ll change.
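You can see the dynamic allocation from Python on any modern Linux box: opening the master side via /dev/ptmx hands back a slave under /dev/pts/ whose name is only known at run-time, which is exactly why kissnetd has to print it.

```python
# Demonstrate dynamically allocated PTYs on Linux.  pty.openpty() opens
# the master via /dev/ptmx; the slave's name under /dev/pts/ is only
# known once it has been allocated, and varies from run to run.
import os
import pty

master, slave = pty.openpty()
slave_name = os.ttyname(slave)   # e.g. /dev/pts/3; varies per run

os.close(master)
os.close(slave)
```

Run it twice and you’ll likely get two different /dev/pts entries, hence the advice below to symlink whatever names kissnetd reports.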

Run kissnetd -p 2; the parameter tells it to create two PTYs. The tool will run in the foreground so make a note of what they’re called, then hit CTRL-Z followed by bg to bring it into the background.

vk4msl-mb stuartl # kissnetd -p 2
kissnetd V 1.5 by Frederic RIBLE F1OAT - ATEPRA FPAC/Linux Project

Awaiting client connects on:
/dev/pts/1 /dev/pts/4
^Z
[1]+  Stopped                 kissnetd -p 2
vk4msl-mb stuartl # bg 1

Now, in this example, PTYs 1 and 4 are allocated. I can allocate either one of them to Xastir or ax25ipd; here I’ll use /dev/pts/4 for ax25ipd and the other for Xastir. It is possibly best if you make symlinks to these, and just refer to the symlinks in your software.

# ln -s /dev/pts/4 /dev/kiss-ax25ipd
# ln -s /dev/pts/1 /dev/kiss-xastir

Whilst you’re at it, change the ownership of the one you give to Xastir to your user/group so Xastir doesn’t need to run as root.

Set up /etc/ax25/ax25ipd.conf on the client. Here, I’ve given it a route for all WIDE* traffic to the other host. It might be possible to just use 0 as I did before; I wasn’t sure if that’d create a loop or not.

socket ip
mode tnc
device /dev/kiss-ax25ipd
speed 9600
loglevel 2
broadcast QST-0 NODES-0
# This points to my P4, attached to the TNC; d means default route
route WIDE* 192.168.64.141 d

Now start up ax25ipd and Xastir. You should be able to bring up the interface and see APRS traffic; moreover, you should be able to hit Transmit and see the TNC broadcast your packets.

Some stations visible direct via RF