2010


Old and new: Mixing irssi and iPhones for fun and no profit

Posted on Tuesday, 14 December 2010

Introduction


I use irssi for IRC and an iPhone for pocket Internets; these two choices are both excellent, but they're not terribly compatible - typing in irssi on an iPhone via SSH is quite slow and annoying.

Obviously the thing to do is run an iPhone IRC client, but then I'm signing in and out all the time and I have multiple nicknames - what I want is a way to be connected to the same IRC session as normal, but from my phone using the excellent IRC client Colloquy. By taking advantage of several different pieces of Free Software, this is entirely doable! When we're not connected to IRC, messages which trigger irssi's highlight will be forwarded to the iPhone as a Push Notification.

Preparation


These are the tools we are going to use to make this happen:

  • A patched irssi (don't worry though, the patch is *tiny*)

  • An irssi script (just some Perl really)

  • irssi-proxy

  • stunnel

  • Colloquy Mobile (from the iPhone App Store)


Throughout I will be assuming that you're running Ubuntu 10.04 (Lucid Lynx) as this is the currently most recent LTS release and thus most suited to servers. Also it's what I run, and this is my fun evening project :)

Although it will not be necessary to download it, I would like to note the original location of the patch and script that this method relies on. You can obtain both here.

Instead, we are going to install a patched irssi from one of my PPAs, but if you do not care for this idea, the above URL will let you build your own patched irssi and contains the colloquy_push.pl script.

Installation


These commands will install the patched irssi and the colloquy_push.pl script:
sudo add-apt-repository ppa:cmsj/irssi-colloquy-push
sudo apt-get update
sudo apt-get install irssi

(If you don't have 'add-apt-repository' available, it's in the 'python-software-properties' package).

Configuration


irssi-proxy


The first step is to load irssi-proxy. This is distributed as a plugin library in the irssi package; you can load it with:
/load proxy
/set irssiproxy_bind 127.0.0.1
/set irssiproxy_password PICKAGOODPASSWORD
/set irssiproxy_ports network1=31337 network2=31338 network3=31339

Obviously you'll need to replace "PICKAGOODPASSWORD" with a password, preferably a good one. Also you'll need to replace 'network1'/'network2'/'network3' with the names of the networks you've configured in irssi (which you can see with the command '/network list') and switch them to different ports if you want.

Finally you should run '/save' so irssi writes out its config file with all of these changes. Et voila, we have a running proxy, but as you noticed, we forced it to listen on 127.0.0.1, so we can't yet connect to it from the Internet. The reason we've done this is that irssi_proxy is not able to offer encrypted connections directly, and it would be a bad idea to let our proxy password and general IRC traffic flow around unencrypted (even though many IRC server connections are unencrypted).

Stunnel


Stunnel is a very simple tool that lets you add SSL support to anything listening on a TCP socket. To get started, install the 'stunnel4' package, then edit '/etc/default/stunnel4' and change ENABLED=0 to ENABLED=1.
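On Lucid that boils down to something like this (the sed one-liner is just one way to flip the flag, adjust to taste):
sudo apt-get install stunnel4
sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/stunnel4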

Now we need to construct /etc/stunnel/stunnel.conf. The default contains various options we don't really care about, but one important one is the 'cert =' line - we need an SSL certificate for this to work. You can either buy one or generate your own (a so-called "snake-oil" certificate). There are many guides to generating a .crt file and this is left as an exercise for the reader. With that file in place somewhere, edit stunnel.conf to point at it.
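If you decide to go the snake-oil route, a self-signed certificate can be knocked up along these lines (just a sketch - adjust the path and lifetime, and answer the prompts however you like):
sudo openssl req -new -x509 -days 365 -nodes \
  -out /etc/stunnel/stunnel.pem -keyout /etc/stunnel/stunnel.pem
sudo chmod 600 /etc/stunnel/stunnel.pem

with 'cert = /etc/stunnel/stunnel.pem' then being the corresponding line in stunnel.conf.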

The final step for stunnel is to add port configurations. Jump to the bottom of the file and add a section like this for each of the ports irssi_proxy is listening on:
[myfirststunnel]
accept=123.123.123.123:31337
connect=127.0.0.1:31337

What we have done here is told stunnel to listen on our public IP on the same port that it will then connect to on 127.0.0.1. This might seem confusing, but I think it makes sense that the port numbers stay directly mapped between tunnels and proxy ports. Restart the stunnel4 service and you should see the appropriate ports being listened on.
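Roughly (the netstat flags are just an illustration):
sudo service stunnel4 restart
sudo netstat -lntp | grep stunnel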

colloquy_push.pl


This is the irssi script which glues all the magic together - it receives special commands from the iPhone version of Colloquy and uses those to pass on Push Notifications when necessary. To load it, type '/script load colloquy_push.pl' and you probably want to symlink '/usr/share/irssi/scripts/colloquy_push.pl' into ~/.irssi/scripts/autorun/.
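Something along these lines (assuming the autorun directory doesn't already exist):
mkdir -p ~/.irssi/scripts/autorun
ln -s /usr/share/irssi/scripts/colloquy_push.pl ~/.irssi/scripts/autorun/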

Colloquy


Now configure a new IRC Connection in Colloquy on your iPhone. Enter its hostname/IP and the port you have stunnel listening on (the port settings are in Advanced) and enable SSL. Finally, set Push Notifications to On and you're done.

Shortcomings


The script, while excellent, has one or two drawbacks. It's not yet able to detect when you're watching irssi, so it may well send lots of notifications to your phone unnecessarily (I'm looking into expanding it to detect if you're running in screen/tmux and are attached). It also doesn't have any concept of sleeping hours, so you may get woken up by notifications! Nonetheless, this is an excellent way to use your awesome iPhone and not sacrifice the magnificence of irssi!


GStreamer thread oddness

Posted on Thursday, 28 October 2010

I sometimes find myself in a place where there are a number of Icecast streams going out at once and I'm interested in finding better ways of monitoring these. It seems like a nice option would be a window showing a visualisation of each stream.

I quickly whipped up some python to do this, but it almost always locks up when I run it, and I'm not sure if I've done something fundamentally wrong or if I've found a bug somewhere.

If you are a gstreamer expert, please take a look at this code and let me know what I should do next! If you know a gstreamer expert, please try and bribe them to read this post ;)


Lifesaver for Maverick

Posted on Tuesday, 21 September 2010

I think that enough of the planets have aligned in the shape of a failboat that I have been able to successfully upload a source package of Lifesaver to its PPA for Maverick.

I might be wrong though, we'll find out shortly when Launchpad processes the ridiculous output of several ridiculous tools.

Seriously Debian/Ubuntu developers, please sort this out. I really don't care about the intricacies of your workflow - just make it easy for me to be an upstream developer pushing packages into a PPA. Don't make me wade through a sea of hundreds of build tools, dscs, origs, diffs, etc. Just make a bundle and shove it into Launchpad. One command. bzr2ppa in a working directory. Done.

I'm quite sure the failures I had were due either to my incorrect use of some tool or other, or an incorrect setup, but I contend that I shouldn't have to care. Such a tool just needs to know that there's a debian/ that works and a PPA waiting. Make it happen. Go. Now. Are we there yet?

GRRRRRRRRRRRRRR!
(Rant over, the package uploaded and will presumably build shortly, enjoy!)


Terminator 0.95 released!

Posted on Tuesday, 24 August 2010

This release is mostly to bring a couple of important compatibility fixes with the newest pre-release of VTE, but we also have some updated translations, improved error handling and two new features for you. The features are a URL handler plugin for Maven by Julien Nicolaud and a DBus server that was the result of some work with Andrea Corbellini - for now the only thing this is useful for is opening additional Terminator windows without spawning a new process, but we'll be exploring options in the future to allow more control and interaction with Terminator processes.


Adventures in Puppet: Tangled Strings

I am trying to do as much management on my new VM servers as possible with Puppet, but these are machines I still frequently log on to, and not everything is managed by Puppet, so it's entirely possible that in a fit of forgetfulness I will start editing a file that Puppet is managing and then be annoyed when my changes are lost next time Puppet runs.
Since prior preparation and planning prevents pitifully poor performance, I decided to do something about this.

Thus, I present a VIM plugin called TangledStrings, which I'm distributing as a Vimball (.vba) you can download from its project page on Launchpad. For more information on Vimball formatted plugins, see this page. To install the plugin, simply:


  • vim tangledstrings.vba

  • Follow the instructions from Vimball to type: :so %


By default, TangledStrings will show a (configurable) warning message when you load a Puppet-owned file:



This message can be disabled, and you can choose to enable a persistent message in the VIM status line instead:



(or you could choose to enable both of these methods).

For more information, see the documentation included in the Vimball which you can display with the VIM command:
:help TangledStrings

Suggestions, improvements, patches, etc. are most welcome! Email me or use Launchpad to file bugs and propose merges.


Adventures in Puppet: concat module

R.I. Pienaar has a Puppet module on github called "concat". Its premise is very simple: it just concatenates fragments of text together into a particular file.

I'm sure that a more seasoned Puppet veteran would have had this running in no time, but since it introduced some new concepts for me, I thought I'd throw up some notes of how I'm using it. I was particularly interested in an example usage I saw which lists the puppet modules a system is using in its /etc/motd, but because of the way Ubuntu handles constructing the motd, I needed to slightly rework the example. In Ubuntu, the /etc/motd file is constructed dynamically when you log in - this is done by pam_motd which executes the scripts in /etc/update-motd.d/. One of those scripts (99-footer) will simply append the contents of /etc/motd.tail to /etc/motd after everything else - my example will take advantage of this. If you are already using motd.tail, you could just have this puppet system write to a different file and then drop another script into /etc/update-motd.d/ to append the contents of that different file.
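For example, if you pointed Puppet at a hypothetical /etc/motd.puppet instead of motd.tail, an extra snippet dropped into /etc/update-motd.d/ would pull it in (the filename and ordering prefix here are made up, and this needs to run as root):
cat > /etc/update-motd.d/98-puppet << 'EOF'
#!/bin/sh
[ -r /etc/motd.puppet ] && cat /etc/motd.puppet
EOF
chmod +x /etc/update-motd.d/98-puppet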

This is what I did:


  • git clone http://github.com/ripienaar/puppet-concat.git

  • Move the resulting git branch to /etc/puppet/modules/concat and add it to my top-level site manifest that includes modules

  • Create a class to manage /etc/motd.tail. In my setup this ends up being /etc/puppet/manifests/classes/motd.pp, which is included by my default node, but your setup is probably different. This is what my class looks like:


class motd {
  include concat::setup
  $motdfile = "/etc/motd.tail"

  concat{$motdfile:
    owner => root,
    group => root,
    mode => 644
  }

  concat::fragment{"motd_header":
    target => $motdfile,
    content => "\nPuppet modules: ",
    order => 10,
  }

  concat::fragment{"motd_footer":
    target => $motdfile,
    content => "\n\n",
    order => 90,
  }
}

# used by other modules to register themselves in the motd
define motd::register($content="", $order=20) {
  if $content == "" {
    $body = $name
  } else {
    $body = $content
  }

  concat::fragment{"motd_fragment_$name":
    target => "/etc/motd.tail",
    content => "$body ",
    order => $order
  }
}

So that's quite a mouthful. Let's break it down:

  • We have to include concat::setup so the concat module can...set... up :)

  • We then set a variable pointing at the location of the file we want to manage

  • We then instantiate the concat module for the file we want to manage and set properties like the ownership/mode

  • We then call the concat::fragment function for two specific fragments we want in the output - a header and a footer (although I do this on a single line, so it's the phrase "Puppet modules" and "\n\n" respectively). They're forced to be header/footer by the "order" parameter - by making sure we use a low number for the header and a high number for the footer, we get the layout we expect.

  • Outside this class we define a function motd::register which other modules can call - the content they supply will be handed to concat::fragment with a default order parameter of 20 (which is higher than the value we used for the header and lower than the footer one).


Finally, in each of my modules I include the line:
motd::register{"someawesomemodule":}

and now when I ssh to a node, I see a line like:
Puppet modules: web ssh 

It's a fairly simple little thing, but quite pleasing, and from here on out it's almost zero effort - just adding the motd::register calls to each module.


Adventures in Puppet

I'm very slowly learning and exploring the fascinating world of Puppet for configuration management. As I go I'm going to try and blog about random things I discover. Partially for my own future reference, partially to help me crystallise my knowledge and partially to help you.

The first post is coming up immediately, I'm just writing this post as an opening bookend :)


Dream a little dream of me

Posted on Saturday, 17 July 2010

Last night I had a lovely meal out and then saw Inception with Rike and some friends.
I've really enjoyed all of Christopher Nolan's previous films and I think he does an excellent job of creating surprising and compelling stories.
I'm not really going to say anything about the plot, other than to advise you avoid reading anything about it until you've seen it - not because there's anything particularly secret, but because it's nice to not have any preconceptions about what might happen.
For me, the best films have me leaving the cinema totally caught up in their world, my mind reeling with the possibilities of what they have explored. Inception achieved this, and I want to see it again, although preferably at home on Bluray so I can hear every word of dialogue properly - a surprising shortcoming of one of London's flagship cinemas.


Random puppetry

Posted on Wednesday, 14 July 2010

I was talking to a colleague earlier about Puppet and its ability to install packages. I'd not really given it much thought beyond using it to install packages on classes of machines, but he mentioned one particular package which gets updated quite frequently, but is extremely low risk to update - tzdata. By setting this to "ensure => latest" rather than "ensure => present" I can forget about ever having to upgrade that package again \o/

Simple really, but it hadn't occurred to me.


Pick a letter, any letter

Posted on Wednesday, 7 July 2010

Earlier on my laptop suffered a slight mishap which resulted in a key popping off. I examined the mechanism and it didn't obviously go back on by itself, so I googled around a little and landed on the helpful chaps at laptopkey.com. I watched the video that pertains to my exact model, figured out which bits of metal had been slightly bent and a few minutes later I had everything back in working order.
It's almost a shame I didn't need to buy anything from them in return for using their helpful video ;)


Who wants to see something really ugly?

Posted on Tuesday, 6 July 2010

I think it should be abundantly clear from my postings here that I'm not a very good programmer, and this means I give myself a lot of free rope to do some very stupid things.

I'm in constant need of debugging information, particularly in Terminator, where we have lots of objects all interacting and reparenting all the time. We've had a simple dbg() method for a long time, but I was getting very bored of typing out dbg('Class::method:: Some message about %d' % foo), so I decided to see what could be done about inferring the Class and method parts of the message.

It turns out that python is very good at introspecting its own runtime, so back in January, armed with my own stupidity and some help from various folks on the Internet, I came up with the following:

import inspect
import sys

# set this to true to enable debugging output
DEBUG = False
# set this to true to additionally list filenames in debugging
DEBUGFILES = False
# list of classes to show debugging for. empty list means show all classes
DEBUGCLASSES = []
# list of methods to show debugging for. empty list means show all methods
DEBUGMETHODS = []

def dbg(log = ""):
    """Print a message if debugging is enabled"""
    if DEBUG:
        stackitem = inspect.stack()[1]
        parent_frame = stackitem[0]
        method = parent_frame.f_code.co_name
        names, varargs, keywords, local_vars = inspect.getargvalues(parent_frame)
        try:
            self_name = names[0]
            classname = local_vars[self_name].__class__.__name__
        except IndexError:
            classname = "noclass"
        if DEBUGFILES:
            line = stackitem[2]
            filename = parent_frame.f_code.co_filename
            extra = " (%s:%s)" % (filename, line)
        else:
            extra = ""
        if DEBUGCLASSES != [] and classname not in DEBUGCLASSES:
            return
        if DEBUGMETHODS != [] and method not in DEBUGMETHODS:
            return
        try:
            print >> sys.stderr, "%s::%s: %s%s" % (classname, method, log, extra)
        except IOError:
            pass


How's about that for shockingly bad? ;)
It also adds a really impressive amount of overhead to the execution time.
I added the DEBUGCLASSES and DEBUGMETHODS lists so I could cut down on the huge amount of output - these are hooked up to command line options, so you can do something like "terminator -d --debug-classes=Terminal" and only receive debugging messages from the Terminal module.

I'm not exactly sure what I hope to gain from this post, other than ridicule on the Internet, but maybe, just maybe, someone will pop up and point out how stupid I am in a way that turns this into a 2 line, low-overhead function :D


My python also spins webs

With Terminator 0.94 released I'm turning my little brain onto an idea I have for a web service and obviously I'm sticking with python.
Clearly writing all the web gubbins by hand is mental, so I'm playing with Flask, a microframework for web apps. So far I'm really liking it, but it's taken a while to figure it and sqlalchemy out.
I'm not at all convinced that this is going to be in any way scalable, but it's a nice way to test my idea :)


A good day

Today has been about creating, not consuming. Apart from half-watching Primal Fear with Rike, I have spent the day fixing bugs in Terminator and playing with the Akai Synthstation app on my iPad. I suspect I'm not going to be ruling the clubs anytime soon, and the UI is pretty dreadful for composing music, but it has a good library of sounds and synth mangling knobs :)
I even filmed myself playing some of the parts and edited them together into a little music video, but it's really very poor ;)
Rike's going to be out for most of tomorrow, so I have to decide between doing more of what I've been doing today, playing PS3 games or going out myself. Tricky!


The Lawnmower Man

Posted on Tuesday, 8 June 2010

Introduction


This website shares a server with various other network services that form the foundation of my online life (i.e. IRC and Email), and I've been running into capacity issues in the last few months. So I'm running an experiment whereby I upgrade to brand new hardware (Quad Core i7, 8GB of RAM) and partition the available resources across virtual machines, so the various network services are isolated into logical security zones.

Whining


I have plenty of experience using Xen for this sort of thing, but that's becoming more and more irrelevant in newer kernels/distributions. As much as I think that's a shame and a stupid upstream decision, I can't change it, so I need to move on to KVM and libvirt.

Resolution


So, with the beefy new server booted up in a -server kernel and a big, empty LVM Volume Group I got to work creating some virtual machines. This post is mainly a reminder to myself of the things I need to do for each VM :)

Action


These are the steps I used to make a VM with 1GB of RAM, 10GB / and 1GB of swap:

Create an LVM Logical Volume


lvcreate -L11G -n somehostname VolumeGroup

Create a VM image and libvirt XML definition


ubuntu-vm-builder kvm lucid --arch amd64 --mem=1024 --cpus=1 \
  --raw=/dev/VolumeGroup/somehostname --rootsize=10240 --swapsize=1024 \
  --kernel-flavour=server --hostname=somehostname \
  --mirror=http://archive.ubuntu.com/ubuntu/ --components=main,universe \
  --name 'Chris Jones' --user cmsj --pass 'ubuntu' --bridge virbr0 \
  --libvirt qemu:///system --addpkg vim --addpkg ssh --addpkg ubuntu-minimal

Catchy command, huh? ;)

Wait


(building the VM will take a few minutes)

Modify the libvirt XML definition for performance


The best driver for disk/networking is the paravirtualised "virtio" driver. I found that ubuntu-vm-builder had already configured the networking to use this, but not the disk, so I modified the disk section to look like this:
<disk type='block' device='disk'>
  <source dev='/dev/VolumeGroup/somehostname'/>
  <target dev='vda' bus='virtio'/>
</disk>

Modify the libvirt XML definition for emulated serial console


I don't really want to use VNC to talk to the console of my VMs, so I add the following to the <devices> section of the XML definition to make a virtualised serial port and consider it a console:


<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target port='0'/>
</console>

Modify the libvirt XML definition for a better CPU


I'm running this on an Intel Core i7 (Nehalem), but libvirt's newest defined CPU type is a Core2Duo, so we'll go with that in the root of the <domain> section:
<cpu match='minimum'>
  <model>core2duo</model>
</cpu>

Import the XML definition into the running libvirt daemon
virsh define /etc/libvirt/qemu/somehostname.xml

Mount the VM's root filesystem


The Logical Volume we created should be considered as a whole disk, not a mountable partition, but dmsetup can present the partitions within it, and these should still be present after running ubuntu-vm-builder:

mkdir /mnt/tmpvmroot
mount /dev/mapper/VolumeGroup-somehostnamep1 /mnt/tmpvmroot
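If the p1 device doesn't show up under /dev/mapper, kpartx (a wrapper around dmsetup, from the 'kpartx' package) can create the partition mappings first - just a sketch:
kpartx -av /dev/VolumeGroup/somehostname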

Fix fstab in the VM


Edit /mnt/tmpvmroot/etc/fstab and s/hda/vda/
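One non-interactive way to do that:
sed -i 's/hda/vda/g' /mnt/tmpvmroot/etc/fstab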

Configure serial console in the VM


Create /mnt/tmpvmroot/etc/init/ttyS0.conf (i.e. /etc/init/ttyS0.conf inside the VM) and place the following in it:


# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -L 115200 ttyS0 xterm

Edit /mnt/tmpvmroot/boot/grub/menu.lst and look for the commented "defoptions" line. Change it to:


# defoptions=console=ttyS0 console=tty0

(the default "quiet splash" is not useful for servers IMHO)

Unmount the VM's root filesystem


umount /mnt/tmpvmroot
rmdir /mnt/tmpvmroot

Start the VM


virsh start somehostname

SSH into the VM


I didn't specify any networking details to ubuntu-vm-builder, so the machine will boot and try to get an address from DHCP. By default you'll have a bridge device for libvirt called virbr0 and dnsmasq will be running, so watch syslog for the VM getting its address.
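On a default install dnsmasq logs its DHCP leases to syslog, so something like this (just a sketch) will catch it:
tail -f /var/log/syslog | grep DHCPACK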
ssh cmsj@192.168.122.xyz

You should now be in your VM! Now all you need to do is configure it to do things and then fix its networking. My plan is to switch the VMs to static IPs and then use NAT to forward connections from public IPs to the VMs, but you could bridge them onto the host's main ethernet device and assign public IPs directly to the VMs.
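For the NAT approach, the per-service forwarding rules will look roughly like this (a sketch only - the public address, guest address and port are made up, and you'll also want to make the rules persistent across reboots):
# forward port 80 on the public IP to a VM on the default libvirt network
iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.122.10:80
iptables -A FORWARD -d 192.168.122.10 -p tcp --dport 80 -j ACCEPT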


Python decisions

Posted on Thursday, 3 June 2010

Every time I hack on some Python I find myself second-guessing all sorts of tiny design decisions, and I figure the only way to get any kind of perspective on them is to talk about them. Either I'll achieve more clarity through constructing explanations of what I was thinking, or people will comment with useful insights. Hopefully the latter, but this is hardly the most popular blog in the world ;)
So, what shall we look at first? Well, I just hacked up a tiny script last night to answer a simple question:

Is most of my music collection from the 90s?



Obviously what I want to do here is examine the ID3 tags of the files in my music collection and see how they're distributed. A quick search with apt showed that Ubuntu 10.04 has two python libraries for dealing with ID3 tags and a quick play with each suggested that the one with the API most relevant to my interests was eyeD3. After a few test iterations of the script I was getting bored of waiting for it to silently scan the roughly 4000 MP3s I have, so I did another quick search and found a progress bar library.

So that's all of the motive and opportunity established, now let's examine the means to the end. If you want to follow along at home, the whole script is here.



try:
    import eyeD3
    import progressbar as pbar
except ImportError:
    print("You should make sure python-eyed3 and python-progressbar \
are installed")
    sys.exit(1)

First off this is the section where I'm importing the two non-default python libraries that I depend on. I want to provide a good experience when they're not installed, so I catch the exception and tell people the Debian/Ubuntu package names they need, and exit gracefully. I rename the progressbar module as I import it just because "progressbar" is annoyingly long as a name, and I don't like doing "from foo import *".

Skipping further on, we find the code that extracts the ID3 year tag:
year = tag.getYear() or 'Unknown'

This is something I'm really not sure about the "correctness" of; one of the reasons I went with the eyeD3 library was that the getYear() method returns None if it can't find any data, but I don't really want to capture the result, test it explicitly and then set the value to "Unknown" if it's None, so I went with the above code, which only needs a single line and is (IMHO) highly readable.

This is ultimately the crux of the entire program - we've now collected the year, so we can work out which decade it's from:
if year != 'Unknown':
    year = "%s0s" % str(year)[:3]

If this isn't an unknown year we chop the final digit off the year and replace it with a zero. Job done!

Next up, another style question. Rather than store the year we just processed I want to know how many of each decade have been found, so the obvious choice is a dict where the keys are the decades and the values are the number of times each decade has been found. One option would be to pre-fill the dict with all the decades, each with a value of zero, but that seems redundant and ugly, so instead I start out with an empty dict. This presents a challenge - if we find a decade that isn't already a key in the dict (which will frequently be the case) we need to notice that and add it. We could do this by pre-emptively testing the dict with its has_key() method, but that struck me as annoyingly wordy, so I went with:
try:
    years[year] += 1
except KeyError:
    years[year] = 1

If we are incrementing a year that isn't already in the dict, python will raise a KeyError, at which point we know what's happened and know the correct value is 1, so we just set it explicitly. Seems like the simplest solution, but is it the sanest?

The only other thing I wanted to say is a complaint - having built up the dict I then want to print it nicely, so I have a quick list comprehension to produce a list of strings of the format "19xx: yy" (i.e. the decade and the final number of tracks found for that decade), which I then join together using:
', '.join(OUTPUT)

which I hate! Why can't I do:
OUTPUT.join(', ')

(where "OUTPUT" is the list of strings). If that were possible, what I'd actually do is tack the .join() onto the end of the list comprehension and a single line would turn the dict into a printable string.

So there we have it, my thoughts on the structure of my script. I'd also add that I've become mildly obsessive about getting good scores from pylint on my code, which is why it's rigorously formatted, docstring-ed and why the variable names in the __main__ section are in capitals.

What are your thoughts?

Oh, and the answer is no, most of my music is from the 2000s. The 1990s come in second :)


gtk icon cache search tool

Posted on Thursday, 13 May 2010

Earlier this evening I was asking the very excellent Ted Gould about a weird problem with my Gtk+ icon theme - an app I'd previously installed by hand in /usr/local/, but subsequently removed, had broken icons because Gtk+ was looking in /usr/local/share/icons/ instead of /usr/share/icons/.

We did a little digging and realised I had an icon theme cache file in /usr/local/ that was overriding the one in /usr/. A bit of deleting later and it's back, but in the process we whipped up a little bit of python to print out the filename of an icon given an icon name.

#!/usr/bin/python
# gtk-find-icon by Chris Jones <cmsj@tenshu.net>
# Copyright 2010. GPL v2.

import sys
import gtk

THEME = gtk.IconTheme()
ICON = THEME.lookup_icon(sys.argv[1],
                         gtk.ICON_SIZE_MENU,
                         gtk.ICON_LOOKUP_USE_BUILTIN)

if not ICON:
    print "None found"
else:
    print(ICON.get_filename())


Hybrid - Disappear Here

Posted on Monday, 19 April 2010

It's been a while since I wrote anything in the Music category of this blog, and since everything has been about my software projects recently, I figure it's time to mix things up a little. It's also just a few weeks since the release of the latest studio album by probably my favourite band of the last few years, Hybrid.

The album is called Disappear Here and it's pretty damn good - I was going to describe the tracks individually, but that's what reviewers typically do and it always sounds insufferably poncy, so I suggest you just go to the album's site and listen to the damn thing yourself ;)

They also post semi-frequent hour-long DJ mixes on their Soundcloud page, which I would recommend!


Writing Terminator plugins

Posted on Sunday, 18 April 2010

Terminator Plugin HOWTO


One of the features of the new 0.9x series of Terminator releases that hasn't had a huge amount of announcement/discussion yet is the plugin system. I've posted previously about the decisions that went into the design of the plugin framework, but I figured now would be a good time to look at how to actually take advantage of it.

While the plugin system is really generic, so far there are only two points in the Terminator code that actually look for plugins - the Terminal context menu and the default URL opening code. If you find you'd like to write a plugin that interacts with a different part of Terminator, please let me know, I'd love to see some clever uses of plugins and I definitely want to expand the number of points that plugins can hook into.

The basics of a plugin


A plugin is a class in a .py file in terminatorlib/plugins or ~/.config/terminator/plugins, but not all classes are automatically treated as plugins. Terminator will examine each of the .py files it finds for a list called 'available' and it will load each of the classes mentioned therein.

Additionally, it would be a good idea to import terminatorlib.plugin as that contains the base classes that other plugins should be derived from.

A quick example:
import terminatorlib.plugin as plugin
available = ['myfirstplugin']
class myfirstplugin(plugin.SomeBasePluginClass):
etc.

So now let's move on to the simplest type of plugin currently available in Terminator, a URL handler.

URL Handlers


This type of plugin adds new regular expressions to match text in the terminal that should be handled as URLs. We ship an example of this with Terminator: a handler that adds support for the commonly used format for Launchpad bug references. Ignoring the comments and the basics above, this is ultimately all it is:
class LaunchpadBugURLHandler(plugin.URLHandler):
  capabilities = ['url_handler']
  handler_name = 'launchpad_bug'
  match = '\\b(lp|LP):?\s?#?[0-9]+(,\s*#?[0-9]+)*\\b'

  def callback(self, url):
    for item in re.findall(r'[0-9]+', url):
      return('https://bugs.launchpad.net/bugs/%s' % item)

That's it! Let's break it down a little to see the important things here:

  • inherit from plugin.URLHandler if you want to handle URLs.

  • include 'url_handler' in your capabilities list

  • URL handlers must specify a unique handler_name (no enforcement of uniqueness is performed by Terminator, so use some common sense with the namespace)

  • Terminator will call a method in your class called callback() and pass it the text that was matched. You must return a valid URL which will probably be based on this text.


and that's all there is to it really. Next time you start terminator you should find the pattern you added gets handled as a URL!

Context menu items


This type of plugin is a little more involved, but not by a huge amount, and as with URLHandler we ship an example in terminatorlib/plugins/custom_commands.py - a plugin that allows users to add custom commands to be sent to the terminal when selected. This also brings in a second aspect of making more complex plugins - storing configuration. Terminator's shiny new configuration system (based on the excellent ConfigObj) exposes some API for plugins to use for loading and storing their configuration. The nuts and bolts here are:
import gtk
import terminatorlib.plugin as plugin
from terminatorlib.config import Config

available = ['CustomCommandsMenu']

class CustomCommandsMenu(plugin.MenuItem):
    capabilities = ['terminal_menu']
    config = None

    def __init__(self):
        self.config = Config()
        myconfig = self.config.plugin_get_config(self.__class__.__name__)
        # Now extract valid data from sections{}

    def callback(self, menuitems, menu, terminal):
        menuitems.append(gtk.MenuItem('some jazz'))

This is a pretty simplified example, but it's sufficient to insert a menu item that says "some jazz". I'm not going to go into the detail of hooking up a handler to the 'activate' event of the MenuItem or other PyGTK mechanics, but this gives you the basic detail. The method that Terminator will call from your class is again "callback()" and you get passed a list you should add your menu structure to, along with references to the main menu object and the related Terminal. As the plugin system expands and matures I'd like to be more formal about the API that plugins should expect to be able to rely on, rather than having them poke around inside classes like Config and Terminal. Suggestions are welcome :)

Regarding the configuration storage API - the value returned by Config.plugin_get_config() is just a dict containing whatever is currently configured for your plugin's name in the Terminator config file. There's no validation of this data, so you should check that it contains valid data. You can then set whatever you want in this dict and pass it to Config().plugin_set_config() with the name of your class, then call Config().save() to flush this out to disk (I recommend that you be quite liberal about calling save()).

Wrap up


Right now that's all there is to it. Please get in touch if you have any suggestions or questions - I'd love to ship more plugins with Terminator itself, and I can think of some great ideas. Probably the most useful thing would be something to help customise Terminator for heavy ssh users (see the earlier fork of Terminator called 'ssherminator')


Terminator 0.92 released

Posted on Wednesday, 7 April 2010

Hot on the heels of 0.91 we have a new release for you. This is another bugfix release, stomping on as many regressions from 0.14 as we can find. Many, many thanks to all of the people who have been in touch with the project to tell us about the things that are affecting them. If you find more regressions/bugs, please let us know!
Also in this release the Palette section of the Profile editor in the Preferences GUI is now fully active, which means that all of the config file options should now be fully editable in the GUI.


Heads up, new Terminator incoming

Posted on Tuesday, 30 March 2010

Ok folks, I suck for not getting Terminator 0.90 released earlier and I suck for not having a bunch of bug fixes for 0.14 in Ubuntu Lucid.
I'm going to fix both tonight by releasing 0.90 and begging the lovely Ubuntu Universe folks to grant an exception to get it into Lucid.
Here's hoping everything goes smoothly!


Terminator 0.90beta3 released

Posted on Monday, 15 March 2010

We've been hard at work over the last 7 months preparing a whole new core for Terminator and it's getting close to being ready, so this is a beta release intended for testing only. Ubuntu packages have been uploaded to our test PPA (https://launchpad.net/~gnome-terminator/+archive/test) and a tarball is available from http://mairukipa.tenshu.net/~cmsj/terminator/ .
Please provide any feedback about this release to our bug tracker at https://bugs.launchpad.net/terminator/ or our IRC channel, #terminator on irc.freenode.net.


Caveats:
 * config files from 0.14 and earlier are currently ignored by 0.90 because the config file format has changed.
 * we now have a very basic ability to save and restore layouts, but this feature is very new and likely to contain many bugs


An adventure with an HP printer/scanner and Ubuntu

For a while now I've been thinking about some ideas for a project that will require a scanner. No problem you think, scanners of various kinds have been supported in Linux for a long time.

I dislike ordering hardware online because of the shipping lag and because I'm a sucker for the retail experience, so I was checking out which devices would work with Ubuntu and which devices were on sale in my local computer supermarket. The latter was a depressingly short list, and the former was getting annoying to search for, but I stumbled on the idea of a multi-function printer. It turns out that it's cheaper to buy a scanner as part of a printer than it is to buy a scanner on its own (granted the resolution of the scanner isn't quite as good, but it's more than sufficient for my needs). The reason for this is undoubtedly that the manufacturers are expecting to make up their money by selling me ink cartridges every few months.

As I started to look at models of multi-function printers, one thing became apparent almost immediately - HP have done a lot of leg work on this. I quickly found a bunch of info on their site about how they basically support all of their stuff on Linux, including a page which specifically listed popular distros and which versions worked with which printers.

I decided pretty much immediately that I wanted to support this, so off I went to the shop to buy an HP. They had the decent looking F4580 for around £40, so I nabbed that and set off home.

When I got home I fired up my laptop running Lucid and plugged the new device in. Less than 10 seconds later I was told it was ready for printing, and I fired up Robert Ancell's excellent new Simple Scan to see what configuration I would need to do to make that work... the answer being none, it scanned a page first time.

Now smug with the ease with which that had worked I started installing the HP driver software on a popular proprietary operating system so I could use it to configure the printer's WiFi feature (something I assumed I couldn't do from within Ubuntu - an assumption that turns out to have been wrong). Ten minutes later it was still finishing off the install process, but eventually I did get the printer hooked up to our wireless network.

Back to the Lucid machine, I told it to add a new printer, it immediately saw the HP announcing itself on the network and let me quickly add that and I could print over wifi. Pretty nifty stuff.

Then I started poking around with HP's Linux Imaging and Printing software (HPLIP) and noticed that there was an "hp-toolbox" that wasn't installed. This is the tool I should have used to configure the wifi network on the printer; it also shows the ink levels and lets you kick off scanning/printing/cleaning type jobs. Out of sheer curiosity I went into hp-toolbox's preferences and changed it from using xsane to simple-scan, and told it to start a Scan. I wasn't expecting it to work because the device wasn't connected via USB, but it turns out that not only does the device support scanning over WiFi, it works in Linux. It's not quite as fast as a direct hookup, but it's certainly significantly more convenient!

So there we have it, out of the box I was up and running within 10 seconds of plugging the device in, and if I'd known to just install hp-toolbox I would have had everything running wirelessly a few minutes later. Compare this to installing CDs and dealing with great gobs of driver/application mess (I've seen HP's Windows drivers and it's no fun trying to persuade them to update themselves, or to persuade them not to prompt you to register every week). A huge, epic victory for Linux and Ubuntu - and one that I seem to find with much random consumer hardware these days. A few years ago this post would have been full of complicated commands and scripts and compilation as I described how to make the device work, but now all I can do is be smug about how easy it was :D
Win.


This is your captain speaking, Terminator has now landed!

Posted on Thursday, 21 January 2010

I managed to finish off what I thought were the last few missing keyboard shortcuts during my lunch break today, then realised that I'd missed two, but I was so excited and short of time that I decided to just go ahead and land the branch anyway!
So there it is - trunk is now completely refactored and full of exciting new bugs. I noticed while I was working from it this afternoon that the transparency setting code wasn't working, but I expect I can get that cleared up tonight :)

Now a bunch of bug fixing and a config converter and we can release!
Thanks to everyone who has been testing so far.


Final approach for Terminator epic-refactor

I'm done hacking on the Terminator epic-refactor branch for the evening and the following has been achieved today (in chronological order):


  • Fix a bug in handling URLs dropped on the window

  • Implement directional navigation

  • Implement geometry hinting

  • Fix a bug in group emitting that caused "Broadcast off" and "Broadcast to all" to become inverted

  • Implement WM_URGENT bell handler


I'm really happy with how this is going. All that is left to have feature parity with trunk, I think, is some keyboard shortcut handlers.

I'd still love to get more testing results to make sure I haven't missed anything, but at this rate I'm expecting to be able to land the epic-refactor branch on trunk this weekend, after five and a half months.

Then I'm going to write a tool to convert old config files and we can think about putting out a 0.90 beta release. Exciting stuff!


Terminator 0.90 progress

Posted on Tuesday, 19 January 2010

Further to my previous post I thought I'd post a quick update about how things are progressing. I mentioned in my previous post that I knew of several things that were not yet working in the Epic Refactor branch:


  • -e and -x command line options

  • all forms of drag & drop

  • directional navigation

  • some keyboard shortcuts


I'm pleased to say that the first two of these are now taken care of, but the latter two are still to be done. I'm less pleased to say that I haven't had much external feedback about this branch yet, but I suspect that most people who might be interested probably don't read my blog ;)

So if you know people who like Terminator and enjoy testing things out, all they need to do is:
bzr branch lp:~cmsj/terminator/epic-refactor
cd epic-refactor
./terminator

and give some feedback!


Testing Terminator 0.90

Posted on Tuesday, 5 January 2010

You might have seen my recent posts about the epic refactoring that has been going on in the Terminator codebase for the last few months.

I think it's finally time that we get some more eyeballs on it, mainly so I can check that I haven't massively screwed something up. I know there is lots of missing functionality right now, and probably a bunch of subtle bugs, but I could use your help quantifying exactly what these are!

If you're inclined to help, please branch lp:~cmsj/terminator/epic-refactor, cd into it and run ./terminator, then use it like you always would and file bugs, preferably indicating clearly in the bug that you're using this branch and not trunk (maybe tag the bug 'epicrefactor').

Things I know are broken right now:


  • -e and -x command line options

  • all forms of drag & drop

  • directional navigation

  • some keyboard shortcuts


Things I know are missing because they're not coming back:

  • Extreme tabs mode (sorry, it's just too insane to support)

  • GNOME Terminal profile reading (I'm trying to simplify our crazy config system and dropping GConf is a good way to achieve that)

  • Config file reading. At some point I'll write something that migrates old Terminator configs to the new format, but for now you'll have to live without your old config file. The new one isn't documented yet either, but it is a whole bunch better!


Now would also be a great time to start writing plugins for Terminator and telling me about them. I'm happy to ship good plugins, but more importantly I want feedback about the weaknesses/strengths of our plugin system. Right now you can only hook into URL mangling and the terminal context menu, but the latter of those gives you pretty serious flexibility I think. Obviously one massive weakness is a lack of documentation about the plugin API, but I'll get to that, I promise!

So there we have it, another step along the way to me being able to merge this branch into trunk and put out a real release of 0.90 and then eventually 1.0!