Gammu with Samsung

A housemate of mine got a new Samsung phone on the weekend. Being the resident geek, I offered to transfer her contacts across rather than get her sister to manually retype 500-odd contacts.

Naturally, I thought this would be a simple problem, right? I mean, everyone updates their phone every 2 years; this must be a pretty common use case. All my Sony Ericsson phones have had a "send all contacts by Bluetooth" option since the inception of Bluetooth. Naturally, the old phone had no such feature; it only supports sending one contact at a time. (Although, to Samsung's credit, the new phone will be able to do this come the next upgrade.)

Next option: I'll sync old phone to laptop to new phone.

The Samsung website has a helpful Windows utility that you can download to do this; however, you need a cable to link the phone to the computer. The phones needed different cables, and I had neither. My laptop with a Windows partition has had broken Bluetooth ever since its motherboard got replaced, so that wasn't an option. The phones don't have IrDA, so there was no way to connect them to the Windows laptop.

Time to do it properly.

I tried wammu, a python-based gammu GUI. It supported the phones via the "blueat" driver, and could browse their SIM cards fine, but not their internal phonebooks. It couldn't back them up either. A bit of poking around with gammu on the command line showed that the internal phonebooks are not 0-indexed (normal computer counting, 0 to n-1) or 1-indexed (normal human counting, 1 to n), but 2-indexed. Dijkstra would turn in his grave!
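(The poking around was just gammu's getmemory command, along these lines, watching where real entries started appearing:)

$ gammu getmemory ME 1 10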

At this point, I could see that I was going to have to write my own backup utility. The output of gammu was awkable, but seeing as there are good gammu-python bindings, I decided to do it in pure Python.

Reading the address book went something like this:

import gammu, pickle
sm = gammu.StateMachine()
sm.ReadConfig(3, 0)  # gammurc section 3, into configuration slot 0
sm.Init()
old = []
# The phonebook is 2-indexed, hence the odd-looking range:
for i in range(2, 587):
    old.append(sm.GetMemory("ME", i))

pickle.dump(old, file("phonebook.dump", "w"))

The 3 signifies gammu configuration number 3, read into position 0; 587 is one past the last address book location; and "ME" means the internal phone memory. I then pickled old in preparation for the next stage.
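For reference, gammu's configurations live in ~/.gammurc, one numbered section per phone. A blueat section for the old phone would look something like this (the MAC address is made up):

[gammu3]
connection = blueat
port = 00:12:34:56:78:9A

Here is an example of an item in old: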

{'Entries': [{'AddError': 7517792,
              'Type': 'Text_FirstName',
              'Value': u'Foo'},
             {'AddError': 796160623,
              'Type': 'Text_LastName',
              'Value': u'Bar'},
             {'AddError': 796160623,
              'SMSList': [],
              'Type': 'Number_Other',
              'Value': u'0211234567',
              'VoiceTag': 0},
             {'Type': 'Category', 'Value': 0}],
 'Location': 2,
 'MemoryType': 'ME'}

Pretty icky, but at least all the information is there. At this point, one should be able to feed it into the new phone:

sm.Terminate()
# Switch to the new phone (gammu configuration 4):
sm = gammu.StateMachine()
sm.ReadConfig(4, 0)
sm.Init()
for i in old:
    sm.AddMemory(i)

However, nothing I tried worked; I always got an "Invalid Location" error. I think the 2-indexing is tripping gammu up again.

Next idea: let's munge the data into vCard format and use wammu / gammu's "import from vCard" function. (Code coming up soon.) It turns out this doesn't work either. The phone only received the first name and first phone number, plus various other things that I didn't send it (e.g. custom ring tones that it made up). Hmph!

Aha, but cellphones can normally Bluetooth vCards to each other. So I pushed the vCard collection to it via obexftp. It starts transmitting, but then the phone reboots. I played around a bit, and found that if you send it more than one vCard in a single .vcf file, it reboots. Lovely.

So my final solution was: extract the address book with python-gammu, transform it into vCards, and send each one individually. At least the phone had a "trust this device" option, so that it wouldn't prompt the user for every vCard I sent, but just automatically import them - the first sensible feature I've found on it.

Here goes:

#!/usr/bin/env python
import os, pickle, time
def normalise_num(n):
    "Neaten up the phone number, internationalise, etc."
    if n.startswith("+"):
        return n
    if n.startswith("00"):
        return "+" + n[2:]
    # Local number: swap the leading 0 for South Africa's +27:
    if len(n) == 10 and n[0] == "0":
        return "+27" + n[1:]
    return n
d = pickle.load(file("phonebook.dump", "r"))
# Normalise into a sensible format:
o = []
for i in d:
    t = {}
    for j in i["Entries"]:
        if j["Type"] == "Text_FirstName":
            t["First"] = j["Value"]
        if j["Type"] == "Text_LastName":
            t["Last"] = j["Value"]
        if j["Type"] == "Number_Other":
            n = normalise_num(j["Value"])
            type = "Home"
            if n[3] in ("7", "8"):
                type = "Cell"
            if type not in t:
                t[type] = []
            t[type].append(n)
    o.append(t)
# Write & Send vCards:
for i in o:
    f = file("temp.vcf", "w")
    f.write("BEGIN:VCARD\n")
    f.write("VERSION:2.1\n")
    f.write("N:%s;%s;;;\n" % (i.get("Last", ""), i.get("First", "")))
    pref = ";PREF"
    for j in i["Cell"]:
        f.write("TEL;CELL%s:%s\n" % (pref, j))
        pref=""
    for j in i["Home"]:
        f.write("TEL;HOME%s:%s\n" % (pref, j))
        pref=""
    f.write("END:VCARD\n")
    f.close()
    os.system("obexftp -b 00:DE:AD:00:BE:EF -p temp.vcf")
    # Give the thing a chance to recover:
    time.sleep(0.1)
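For the example entry shown earlier (Foo Bar, 0211234567: a Cape Town landline, so it normalises to +27211234567 and counts as "Home"), the generated temp.vcf comes out as:

BEGIN:VCARD
VERSION:2.1
N:Bar;Foo;;;
TEL;HOME;PREF:+27211234567
END:VCARD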

Yes, the normalisation could be done with list comprehensions, but it would be horrible to read. And there might be Python OBEX bindings, but I couldn't be bothered.

I got to spend an afternoon messing with dodgy cellphones, rather than having a teenager do the job for free. I think I chose the wrong option, but at least it was fun.

Footnote: Samsung, your phones' user interface is awful. Why on earth is Bluetooth under "Applications" rather than "Settings"? I searched everywhere but there, and finally googled before I found it...

Spam Spam Spam Spam, Spam Spam Spam Spam, Lovely Spam, Wonderful Spam

This morning I got an unsolicited SMS spam: "Home owners - do u need money? R100,000 @ R752 pm! Reply YES and we'll phone you". I know that everybody gets things like this and they just shrug them off, but I have a rabid hatred of spammers.

With e-mail spam, there's normally nothing you can do. The spammers are on the other side of the world, and they've used a botnet. But when I get something from South Africans, I act. We have the ECT act protecting us against spam. It's not the most effective anti-spam legislation, but it's better than nothing. I'll send the IOZ Spam Message to the spammers, their ISP, the domain registrants etc etc. Usually I get a response. Usually they remove me from their lists. (If they don't, their VP of marketing is going to have me harassing him over the phone in short order.) But of course they rarely mend their ways. Sometimes we end up in long e-mail arguments backwards and forwards, them saying "but I'm justified in spamming, because of foo", me saying "no bloody way, because of bar" etc. It's ineffectual and depressing, but at least I'm doing something to deter spammers and keep South Africa relatively clean.

But enough about e-mail. It's time for some tips on dealing with SMS spam. The SMS spamming industry (euphemisms: direct marketing, wireless application service provider) is attempting to regulate itself rather than be regulated by government. They've formed WASPA and signed the SMS code of practice. WASPA lets you file complaints against its members and fines them (although the fines are rather paltry).

I heard about them via Jeremy Thurgood's recent spam-scapades. His spammers were charging R1 to opt out. While the WASPA code of conduct allows a <=R1 fee, I agree with him that this is intolerable extortion.

In my case, my spammers had broken a few WASPA code of conduct rules:

  • 5.1.1.: They didn't identify themselves in the SMS
  • 5.1.2.: There is no opt-out facility that I know of.
  • 5.1.4.: There is no advertised opt-out procedure.
  • 5.2.1.: I'm very careful about not allowing people to spam me, so I'm pretty sure they didn't meet any of the opt-in requirements. I'd like them to prove otherwise.

I looked up the originating number on the SMS Code website. It belongs to Celerity Systems. They are currently under a suspended sentence at WASPA, so my WASPA complaint should force them to fork out a fine. Let's hope for the best.

I'm against capital punishment, but I wouldn't mind seeing a few spammers being hanged, drawn and quartered :-)

Bandwidth accounting with ulogd

My post about repositories wasn't just a little attempt to stave off work; it was part of a larger scheme.

I share the ADSL line in my digs with 3 other people. We do split-routing to save money, but we still have to divide the phone bill at the end of the month. Rather than buy a fixed cap, and have a fight over whose fault it was when we get capped, we are running a pay-per-use system (with local use free, subsidised by me). It means you don't have to restrain yourself for the sake of a common cap, but it also means I need to calculate who owes what.

For the first month, I used my old standby, bandwidthd. It uses pcap to count traffic, and gives you totals and graphs. For simplicity of logging, I gave each person a /28 for their machines and configured static DHCP leases. Then bandwidthd totalled up the internet use for each /28.

This was sub-optimal. bandwidthd can either watch the local network, in which case it can't tell which packets went out over which link; or it can watch the international link, but then it doesn't know which user is responsible.

I could have installed some netflow utilities at this point, but I wanted to roll my own with the correct Linux approach (ulog) rather than any pcapping. ulogd is the easy ulog solution.

Ulogd can pick up packets that you "-j ULOG" from iptables; it receives them over a netlink interface. You can tell iptables how many bytes of each packet to send, and how many packets to queue up before sending them. E.g.

# iptables -I INPUT 1 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 48 --ulog-prefix input

will log the first 48 bytes of any incoming packet to netlink group 1, tag them with the prefix "input", and send them in batches of 50. 48 bytes is usually enough to catch any data you could want from the headers. If you only need the size, 4 bytes will do (the IP total-length field sits in bytes 2-3 of the header); for source and destination addresses as well, 20 bytes covers the basic IPv4 header.

Now, we tell ulogd to listen for this stuff and log it. Ulogd has a pluggable architecture. IPv4 decoding is a plugin, and there are various logging plugins for "-j LOG" emulation, Text files, pcap-files, MySQL, PostgreSQL, and SQLite. For my purposes, I used MySQL as the router in question already had MySQL on it (for Cacti). Otherwise, I would have opted for SQLite. Be warned that the etch version of ulogd doesn't automatically reconnect to the MySQL server should the connection break for any reason. I backported the lenny version to etch to get around that. (You also need to provide the reconnect and connect_timeout options.)

Besides the reconnection issue, the SQL implementations are quite nice. They have a set schema, and you just need to create a table with the columns in it that you are interested in. No other configuration (beyond connection details) is necessary.

My MySQL table:

CREATE TABLE `ulog` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `oob_time_sec` int(10) unsigned NOT NULL,
  `oob_prefix` char(4) NOT NULL,
  `ip_totlen` smallint(5) unsigned NOT NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `id` (`id`),
  KEY `oob_prefix` (`oob_prefix`),
  KEY `oob_time_sec` (`oob_time_sec`)
);

My ulogd.conf:

[global]
# netlink multicast group (the same as the iptables --ulog-nlgroup param)
nlgroup=1    
# logfile for status messages
logfile="/var/log/ulog/ulogd.log"    
# loglevel: debug(1), info(3), notice(5), error(7) or fatal(8)
loglevel=5    
# socket receive buffer size (should be at least the size of the
# in-kernel buffer (ipt_ULOG.o 'nlbufsiz' parameter)
rmem=131071    
# libipulog/ulogd receive buffer size, should be > rmem
bufsize=150000
# ulogd_BASE.so - interpreter plugin for basic IPv4 header fields
#             you will always need this
plugin="/usr/lib/ulogd/ulogd_BASE.so"
plugin="/usr/lib/ulogd/ulogd_MYSQL.so"

[MYSQL]
table="ulog"
pass="foo"
user="ulog"
db="ulog"
host="localhost"
reconnect=5
connect_timeout=10

The relevant parts of my firewall rules:

# Count proxy usage (transparent and explicit)
iptables -A count-from-inside -p ! tcp -j RETURN
iptables -A count-from-inside -p tcp -m multiport --destination-ports ! 3128,8080 -j RETURN
iptables -A count-from-inside -s 10.0.0.16/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix sr-p
iptables -A count-from-inside -s 10.0.0.32/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix fb-p
iptables -A count-from-inside -s 10.0.0.128/25 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix gu-p

iptables -A count-to-inside -p ! tcp -j RETURN
iptables -A count-to-inside -p tcp -m multiport --source-ports ! 3128,8080 -j RETURN
iptables -A count-to-inside -d 10.0.0.16/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix sr-p
iptables -A count-to-inside -d 10.0.0.32/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix fb-p
iptables -A count-to-inside -d 10.0.0.128/25 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix gu-p

# Count forwarded traffic (excluding local internet connection - ppp2)
iptables -A count-forward-in -i ppp2 -j RETURN
iptables -A count-forward-in -d 10.0.0.16/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix sr-f
iptables -A count-forward-in -d 10.0.0.32/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix fb-f
iptables -A count-forward-in -d 10.0.0.128/25 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix gu-f

iptables -A count-forward-out -o ppp2 -j RETURN
iptables -A count-forward-out -s 10.0.0.16/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix sr-f
iptables -A count-forward-out -s 10.0.0.32/28 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix fb-f
iptables -A count-forward-out -s 10.0.0.128/25 -j ULOG --ulog-nlgroup 1 --ulog-qthreshold 50 --ulog-cprange 4 --ulog-prefix gu-f

# Glue
iptables -A INPUT -i eth0 -j count-from-inside
iptables -A OUTPUT  -o eth0 -j count-to-inside
iptables -A FORWARD -i ppp+ -j count-forward-in
iptables -A FORWARD -o ppp+ -j count-forward-out

So, traffic for my /28 (sr) will be counted as sr-f or sr-p, letting me tally up proxy and forwarded traffic separately. (Yes, I could count proxy traffic with squid too, but doing it all in one place is simpler.) fb is random housemate Foo Bar, and gu is guest traffic (unreserved IP addresses).

You can query this month's usage with, for example:

SELECT oob_prefix, SUM(ip_totlen) FROM ulog WHERE oob_time_sec > UNIX_TIMESTAMP('2008-04-01 00:00:00') GROUP BY oob_prefix;

Your table will fill up fast. We are averaging around 200 000 rows per day. So obviously some aggregation is in order:

CREATE TABLE daily (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  time TIMESTAMP,
  oob_prefix CHAR(4) NOT NULL,
  data INT UNSIGNED NOT NULL,
  PRIMARY KEY (id),
  KEY (oob_prefix(4)),
  KEY (time)
);

And every night, run something like:

INSERT INTO daily (time, oob_prefix, data)
SELECT FROM_UNIXTIME(MAX(oob_time_sec)), oob_prefix, SUM(ip_totlen)
FROM ulog
WHERE oob_time_sec >= UNIX_TIMESTAMP('2008-04-01 00:00:00')
  AND oob_time_sec < UNIX_TIMESTAMP('2008-04-02 00:00:00')
GROUP BY oob_prefix;
DELETE FROM ulog WHERE oob_time_sec  >= UNIX_TIMESTAMP('2008-04-01 00:00:00')
  AND oob_time_sec < UNIX_TIMESTAMP('2008-04-02 00:00:00');
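The date fiddling is easily scripted for cron. A minimal sketch of the nightly job with python-mysqldb, assuming the credentials from ulogd.conf above:

#!/usr/bin/env python
# Roll yesterday's ulog rows into the daily table, then prune them.
# A sketch: assumes python-mysqldb and the ulogd.conf credentials above.
import time
import MySQLdb

day = 24 * 60 * 60
start = time.strftime("%Y-%m-%d 00:00:00", time.localtime(time.time() - day))
end = time.strftime("%Y-%m-%d 00:00:00", time.localtime())

db = MySQLdb.connect(host="localhost", user="ulog", passwd="foo", db="ulog")
c = db.cursor()
c.execute("""INSERT INTO daily (time, oob_prefix, data)
             SELECT FROM_UNIXTIME(MAX(oob_time_sec)), oob_prefix, SUM(ip_totlen)
             FROM ulog
             WHERE oob_time_sec >= UNIX_TIMESTAMP(%s)
               AND oob_time_sec < UNIX_TIMESTAMP(%s)
             GROUP BY oob_prefix""", (start, end))
c.execute("""DELETE FROM ulog
             WHERE oob_time_sec >= UNIX_TIMESTAMP(%s)
               AND oob_time_sec < UNIX_TIMESTAMP(%s)""", (start, end))
db.commit()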

Finally, I have a simple little PHP script that provides reporting and calculates dues. Done.
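The PHP isn't worth reproducing here, but the reporting side boils down to something like this rough Python equivalent (the per-MB rate is a made-up placeholder):

#!/usr/bin/env python
# Sketch of the reporting: this month's total per prefix, and the dues
# at a flat per-MB rate. RATE_PER_MB is a made-up placeholder.
import MySQLdb

RATE_PER_MB = 0.50  # ZAR

db = MySQLdb.connect(host="localhost", user="ulog", passwd="foo", db="ulog")
c = db.cursor()
c.execute("""SELECT oob_prefix, SUM(data) FROM daily
             WHERE time >= '2008-04-01' GROUP BY oob_prefix""")
for prefix, total in c.fetchall():
    mb = float(total) / 1048576.0
    print "%-4s %10.1f MB  R%.2f" % (prefix, mb, mb * RATE_PER_MB)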

My first (real) debian repo

Up to now, whenever I've needed a backport or debian recompile, I've done it locally. But finally last night, instead of studying for this morning's exam, I decided to do it properly.

The tool for producing a Debian archive tree is reprepro. There are a few howtos out there for it, but none of them quite covered everything I needed, so this is mine. But we'll get to that later; first we need some packages to put up.

For building packages, I decided to do it properly and use pbuilder. Just install it:

# aptitude install pbuilder cdebootstrap devscripts

Make the following changes to /etc/pbuilderrc:

MIRRORSITE=http://ftp.uk.debian.org/debian
DEBEMAIL="Your Name <you@example.com>"

The first points at your local mirror; the second credits you in the packages you build.

Then, as root:

# pbuilder create --distribution etch --debootstrapopts --variant=buildd

Now we can build a package. Let's build the hello package:

$ mkdir /tmp/packaging; cd /tmp/packaging
$ gpg --recv-key 3EF23CD6
$ dget -x http://ftp.uk.debian.org/debian/pool/main/h/hello/hello_2.2-2.dsc
dpkg-source: extracting hello in hello-2.2
dpkg-source: unpacking hello_2.2.orig.tar.gz
dpkg-source: applying ./hello_2.2-2.diff.gz
$ cd hello-2.2/
$ debchange -n

dget and debchange are neat little utilities from devscripts. You can configure them to know your name, e-mail address, etc. If you work with Debian packages a lot, you'll get to know them well. Future versions of debchange support --bpo for backports, but we use -n, which means new package. You should edit the version number in the top line to be a backport version, e.g.:

hello (2.2-2~bpo-sr.1) etch-backports; urgency=low

  * Rebuild for etch-backports.

 -- Your Name <you@example.com>  Wed,  2 Apr 2008 22:24:30 +0100

Now, let's build it. We are only doing a backport, but if you were making any changes, you'd do them before the next stage, and list them in the changelog you just edited:

$ cd ..
$ dpkg-source -sa -b hello-2.2/
$ sudo pbuilder build hello_2.2-2~bpo-sr.1.dsc

Assuming no errors, the built package will be sitting in /var/cache/pbuilder/result/.

Now, for the repository:

$ mkdir ~/public_html/backports
$ cd ~/public_html/backports
$ mkdir conf
$ cat > conf/distributions << EOF
Origin: Your Name
Label: Your Name's Backports
Suite: stable-backports
Codename: etch-backports
Version: 4.0
Architectures: i386 all source
Components: main
Description: Your Name's repository of etch backports.
SignWith: ABCDABCD
NotAutomatic: yes
EOF

This file defines your repository. The codename will be the distribution you list in your sources.list, and the version should match it. The architectures are the architectures you are going to carry - "all" refers to architecture-independent packages, and "source" to source packages. I added amd64 to mine. SignWith is the ID of the GPG key you are going to use with this repo; I created a new DSA key for the job. NotAutomatic is a good setting for a backports repo: it means that packages won't be installed from here unless explicitly requested (via package=version or -t etch-backports).
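That is, once the repository is in a machine's sources.list, apt will only install from it when explicitly asked, e.g.:

# aptitude -t etch-backports install hello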

Let's start by importing our source package:

$ cd /tmp/packaging
$ debsign -kABCDABCD hello_2.2-2~bpo-sr.1.dsc
$ cd ~/public_html/backports
$ reprepro -P optional -S devel --ask-passphrase -Vb . includedsc etch-backports /tmp/packaging/hello_2.2-2~bpo-sr.1.dsc

(There is currently a known bug in reprepro's command-line handling. -S and -P are swapped.)

Now, let's import our binary package:

$ reprepro --ask-passphrase -Vb . includedeb etch-backports /var/cache/pbuilder/result/hello_2.2-2~bpo-sr.1_i386.deb

Reprepro can be automated with its processincoming command, but that's beyond the scope of this howto.

To test your new repository, add it to your /etc/apt/sources.list:

deb http://example.com/~you/backports etch-backports main

# aptitude update
# aptitude install hello=2.2-2~bpo-sr.1

Enjoy. My backports repository can be found here.

I'm a Google Reader convert

My blog hasn't had much to say recently, but now that I'm feeling pressured by University assignments, I think it's time to get back into one-post-per-day mode :-)

I remember once trying Google Reader, just after it launched, and very quickly deciding that I couldn't stand it, and I'd stick to Liferea.

Recently, however, Liferea has been giving me trouble. It's been incredibly unstable, and I'd often forget to run a transparent proxy on my laptop when in restrictive environments, so it'd miss lots of posts and generally be unhappy. The instability I fixed by exporting an OPML list, wiping the configuration, and re-loading, but that was a ball-ache to do. While I was bitching about this, Vhata pushed me to try Google Reader again.

I was pleasantly surprised. It works well, and I didn't find it oppressive. That doesn't mean it's perfect; I'd like to see the following things improved:

  • Duplicate post detection (i.e. planetified & original posts; Liferea does this)
  • Performance
  • Favicons (or something similar, to make it clearer where a post comes from)
  • On that note, maybe configurable colour borders for important feeds?
  • Automatic refreshing (rather than pressing "r")
  • More viewable area
  • A key press for opening a post in a backgrounded new tab ("v" changes your focus to the new tab, which is against the principles of tabbed browsing)

Some cool things it does that Liferea doesn't:

  • Clicking on a folder shows you all the posts from the feeds in that folder
  • "river of posts" view, which lets me get through my reading a lot faster
  • Preloading images for posts that I haven't got to yet (this contributes a fair whack to the reading speed, given the slow interwebs in ZA)
  • Shared items
  • Access from multiple machines (OK, X-forwarding worked, but this is neater)
  • Doesn't crash (sorry Liferea...)

I'm converted. Google Reader really is good.

/me gets on with reading feeds...

*Camp Videos

I've (finally) finished encoding the ~6 hours of *camp video. They can be found on archive.org. As usual, in 3 qualities.

I've probably screwed up at least one of them, so if anyone spots a problem, please let me know soon, before I delete the source material.

Irssi libnotify integration

I came across irssi-libnotify integration in a picture in a blog post I read this morning.

I thought about this, and decided that this was something I had to have. I often don't pay attention to my IRC while I'm busy with something else, and miss out on conversations that I'm being hailed in. (By something else, I mean non-important, non-masked-interrupts something else.)

It isn't an easy problem to solve, though. Irssi is running on a remote machine inside screen. I'll be accessing it from one of many machines, possibly NATed, and possibly unable to receive incoming TCP connections.

I googled around a bit, and came across 3 main classes of solution to this problem:

  1. Run libnotify directly on the irssi-box, and use ssh's X-forwarding to display it on my client. This is sub-optimal, because your X server isn't always available. Example: irssi-libnotify (which requires the module to be reloaded every time you re-attach screen)
  2. Output all hilighted messages to a log file (using fnotify), and tail that log file with a second ssh session into a local script that calls libnotify (sketched below). Sub-optimal, because it requires manually running a second ssh session, and restarting it in the event of network issues. Example
  3. Send notify events down the Print Channel of the terminal. This will pass through screen, and pop out at your terminal-emulator. xterm and rxvt are both capable of then sending them to an arbitrary command (which could call libnotify). This is quite a clever hack, except that gnome-terminal doesn't support it. Example
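For reference, option 2 is usually wired up something like this (the classic fnotify snippet; hostname invented):

$ ssh irssi-host tail -n0 -f .irssi/fnotify | \
  while read heading message; do notify-send "$heading" "$message"; done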

As you can see, they all have major short-comings, and I wasn't about to implement any of them.

Finally, I realized that Jabber would be a good way to hail me. My laptop / desktop / n800 / foo all run Jabber clients. Perfect. I googled, and found a few pre-canned solutions. I settled on jabber-hilight-notify. It runs a Jabber client in a Perl irssi script, which then sends me a message whenever a hilighted line crops up. (Assuming I'm not in "Do Not Disturb" mode.)

I initially had some problems with getting jabber-hilight-notify working. It turns out that setting a custom resource string is a bad idea. My final config was:

jabber_hilight_notify_target = stefano@rivera.za.net
jabber_password = xxxxxxxx
jabber_id = irssi@rivera.za.net
jabber_server_reconnect_time = 60
jabber_hilight_notify_target_presence = online chat away xa
jabber_hilight_notify_when_away = OFF

My Pidgin provides the libnotify integration, although jabber-hilight-notify is designed to work with Tavu (a desktop-notification frontend for KDE). I think a better approach would be to use Telepathy. If such a general Telepathy-based solution could be found, it would be easy to have multiple remote daemons send notifications to you via the Jabber transport.

Now to see if I'm still happy with it after a week of it interrupting me.

Just what is Universally Unique

I had an interesting discussion with "bonnyrsa" in #ubuntu-za today. He'd re-arranged his partitions with gparted, and copied and pasted his / partition, so that he could move it to the end of the disk.

However, this meant that he now had two partitions with the same UUID. While you can imagine that this is the correct result of a copy & paste operation, it means that your universally unique ID is now totally non-unique. Not in your PC, and not even on its home drive.

Ubuntu mounts by UUID, so now how do we know which partition is being mounted?

  • "mount" said /dev/sda2
  • /proc/mounts said /dev/disk/by-uuid/c087bad7-5021-4f65-bb97-e0d3ea9d01a6 which was a symlink to /dev/sda2.

However, neither was correct.

Mounting /dev/sda4 (ro) produced "/dev/sda4 already mounted or /mnt busy".

Aha, so we must be running from /dev/sda4.

/dev/sda2 mounted fine, but then wouldn't unmount: "it seems /dev/sda2 is mounted multiple times".

Aaaargh!

I got him to reboot, change /dev/sda2's UUID, and reboot again (sucks). Then everything was better.
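(Changing the UUID itself is the easy part; for an ext3 filesystem, from a live CD or with the filesystem unmounted, something like:)

# tune2fs -U random /dev/sda2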

This shouldn't have happened. Non-unique UUIDs are a really crap situation to be in; they bring out bugs in all sorts of unexpected places. I think parted should (by default) change the UUID of a copied partition (although if you are copying an entire disk, it shouldn't).

I've filed a bug on Launchpad; let's see if anyone bites.

PS: All UUIDs in this post have been changed to protect the identity of innocent Ubuntu systems (who aren't expecting a sudden attack of non-uniqueness).

Madwifi regdomain issues

The CS Department at UCT has some Wireless APs on Channel 13. This is quite cool (for geeky reasons), but my MacBook (purchased in the US) did not agree. As far as it is concerned, the only 802.11g channels in existence are 1-11.

The reason for this is that my Atheros (madwifi) network card is a software-defined radio. Atheros interprets the FCC regulations to mean that it cannot provide an open source driver for this card that would allow it to broadcast on any random channel. Thus the madwifi driver contains a binary HAL, produced by Atheros, which is responsible for regulating frequencies and power levels. (This HAL has been reverse-engineered by the OpenBSD people, but not for my card, unfortunately.)

The card has two values stored in its EEPROM: a "countrycode" and a "regdomain". The countrycode is overrideable in software (you modprobe ath_pci countrycode=710), but only if the countrycode you specify is valid for the card's regdomain. Some cards have a 0x00 or 0xFF regdomain (wildcard values), but mine had 0x64. This meant that whenever I tried to specify a country code, I'd get an error, and the madwifi module would refuse to load:

Feb 11 11:34:11 beethoven kernel: [ 2047.669023] MadWifi: ath_getchannels: Unable to collect channel list from HAL; regdomain likely 100 country code 710

There has been some success with changing the regdomain in the EEPROM, using the hard-to-find ar5k utility (or possibly the ath_info utility?). However, again, this didn't work with my model. But I found an e-mail from somebody who'd been playing with similar stuff. I mailed Salvatore, and he replied almost instantly, pointing me to a public Windows utility for changing regdomains. It depends on a special driver, available in the demo of "CommView for Wireless".

I installed Windows in my swap partition (it's not an operating system I normally have around). (Naturally, I forgot to have an Ubuntu CD handy to rebuild my grub, but that was easily remedied.) After a few blue screens of death (install all the necessary drivers first), I got my regdomain changed to 0x37, which is the regdomain for South Africa & Europe.

Now, I'm writing this from a couch in the CS department, using a channel 13 AP. Success.
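(To check the result without hunting for a channel-13 AP, you can ask the driver which channels it now offers; ath0 being the madwifi interface:)

# modprobe ath_pci countrycode=710
$ iwlist ath0 channel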

A telecoms rant

You'd think that telecoms (especially in South Africa, where being one is a license to print money) would do everything they could to get you to spend money with them. If only.

Cellphones (at least on Vodacom) have always required you to manually enable international phone calls & international roaming, by giving them a call and requesting it. This is a pain, to say the least. And you normally forget until you are already in another country.

But Telkom have taken this to a new level. I got a new phone line last month (which still doesn't have working DSL, grr!), and just noticed that I can't make international calls on it. So I phoned Telkom. Amazingly, I didn't have to wait on hold at all (something never before experienced when calling the beast), and the lady I spoke to told me I have to "visit a Telkom shop with my ID". WTF? How hard are they making it to spend money on them?

Is there any legitimate reason that international calls are blocked? With our pricing, it's easy to knock up a multi-k-ZAR bill without even thinking about dialling an international number, so they aren't protecting anyone.

In related news, the reason you can't have incoming connections on Vodacom 3G (even with "internetvpn") is that you'd then be liable for the cost of any DoS you received. Isn't this a problem that hosting providers and ISPs already have to deal with? Why are mobile operators special? We need some Internet neutrality and telecoms sense in this country...