Wednesday, November 9, 2011

JRuby and RubyGems

For reasons too tangential to cover here, it was decided to port a Ruby 1.8.7 project to JRuby (>= 1.6.0). This worked well enough, except for the fatal flaw in JRuby: using gems.

To begin with, getting *any* gem with native extensions to build in JRuby requires some manual intervention.

The reason? The RVM JRuby has its Ruby include files under cext/src/include/, instead of lib/native/include like the other RVM Rubies. This appears to be a known issue.

A symlink handily solves the problem:

bash$ cd /usr/local/rvm/rubies/jruby-head
bash$ ln -s cext/src/include lib/native/include

Making this change will allow most gems with native extensions (e.g. pg, sqlite3, etc.) to be built.

The qtbindings gem, however, still fails with an error like the following:

bash$ sudo rvm jruby-head do gem install qtbindings                                
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91 (MESSAGE):
  Could NOT find Ruby (missing: RUBY_INCLUDE_DIR)
Call Stack (most recent call first):
  /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:252 (_FPHSA_FAILURE_MESSAGE)
  cmake/modules/FindRuby.cmake:249 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  CMakeLists.txt:18 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!
make: [build] Error 1 (ignored)
cd ext/build; make
make[1]: Entering directory `/usr/local/rvm/gems/jruby-head/gems/qtbindings-'
make[1]: *** No targets specified and no makefile found.  Stop.
make[1]: Leaving directory `/usr/local/rvm/gems/jruby-head/gems/qtbindings-'
make: *** [build] Error 2

Unpacking the gem, fixing its extconf.rb to properly find JRuby, and repacking it gets past these problems, but the build ultimately fails when compiling the actual C extension code for Qt4.

The reason? Qt4 uses heavy inspection of the stack frame in order to implement its signal/slot mechanisms, and JRuby does not provide access to this information. Until equivalent functions for rb_frame_callee() and the like are provided by JRuby, Qt4 will remain incompatible with it.

Postscript: For the curious, the following lines (or variants thereof) can be added to extconf.rb to fix the CMake build issues:

    file.puts "-DRUBY_EXECUTABLE=/usr/local/rvm/rubies/jruby-head/bin/jruby \\"
    file.puts "-DRUBY_LIBRARY=/usr/local/rvm/rubies/jruby-head/lib/native/x86_64-Linux/ \\"
    file.puts "-DRUBY_LIB_PATH=/usr/local/rvm/rubies/jruby-head/lib/native/x86_64-Linux \\"
    file.puts "-DRUBY_INCLUDE_DIR=/usr/local/rvm/rubies/jruby-head/cext/src/include/ruby \\"
    file.puts "-DRUBY_CONFIG_INCLUDE_DIR=/usr/local/rvm/rubies/jruby-head/cext/src/include/ruby \\"
    file.puts "-DCMAKE_INSTALL_PREFIX=/usr/local/rvm/rubies/jruby-head \\"

The lines go in the 'else' block of the 'if windows' condition, where the other cmake options are defined (around line 216, or search for '-DENABLE_SMOKE').

Monday, September 26, 2011

Qt4 and Ruby1.9.x

After a long period with no Qt4 support for Ruby 1.9, the qtbindings gem has stepped in to fill the void. This project packages the qtruby4 library from the korundum project, so the API is unchanged.

Installation is straightforward:

sudo gem install qtbindings

or for rvm users:

sudo rvm gem install qtbindings

Compatible with 1.8 as well.

Sunday, September 25, 2011

Determining installed version of Ubuntu

Neither uname nor motd are of much use in determining the version of Ubuntu installed on a third-party system, and searching for version details of key packages in dpkg is annoying.

Fortunately there is a utility called lsb_release that provides this information:

bash$ lsb_release -si    # distributor ID, e.g. Ubuntu
bash$ lsb_release -sr    # release, e.g. 11.04
bash$ lsb_release -sc    # codename, e.g. natty

Complete all the way down to the silly nickname for each release.

Since this is an LSB utility, it's probably been in place for some time, silently waiting to provide more detail than uname. Seriously, would a mention in the SEE ALSO section of the uname manpage be *so* hard to add?

Wednesday, September 21, 2011

First impressions, redux

The R500 bit the dust over a year and a half ago. Most PC manufacturers were busy trying to crank out netbooks or a Macbook Air ripoff at the time, so there was no suitable replacement -- had to limp along with a Macbook Pro (ugh!) running Ubuntu until spring of this year.

What replaced the R500? A Thinkpad X1. It weighs twice as much, has no transreflective screen, and is a bit bulkier, but it is paradise after using that Macbook Pro for a year. Dedicated pageup/down keys! Home and End keys! A Delete key! THREE mouse buttons! A hardware kill switch for wifi and bluetooth! A freaken eSATA connector!

But why no grumpy install post detailing how to get Linux speaking to all the hardware? Because everything just works! Even the volume control buttons and the fingerprint reader. No joke. You can thank IBM's interest in Linux for that.

On top of the compatibility, it's a nice looking machine, powerful (64-bit debian runs great on a dual-core i5 with 8 GB of RAM), and tough -- GorillaGlass over the display, MILSPEC (assuming 810) shock and spill resistance (including a keyboard that drains *through* to the bottom of the laptop), the works.

The one caveat: it's best to use non-standard video drivers, as the standard debian/ubuntu Intel video driver became a bit crashy. Easily fixed, though, by adding the xorg-edgers PPA.

Sunday, September 18, 2011

.vimrc and UTF-8

Quick .vimrc config to display UTF-8 characters correctly:

" support UTF-8 automatically when not on console
if has('gui_running') && has('multi_byte')
        set encoding=utf-8
        set fileencoding=utf-8
        set fileencodings=utf-8
endif

Note that the gui_running requirement restricts these settings to gvim: has('gui_running') is false for vim running inside a terminal emulator (such as urxvt or mlterm) or on a virtual console, where the terminal's own encoding settings apply.

Wednesday, September 7, 2011

Fixing strings(1)

The first step in any examination of a binary is to run strings(1) on it.

This usually results in output like the following:
bash$ strings /bin/ls
...which, needless to say, sucks.

The problem here is that a 'string' is any sequence of printable ASCII characters of a minimum length (usually 4).

The output of strings(1) can be made a bit more usable by discarding any string that does not contain a word in the system dictionary. 

An off-the-cuff implementation of such a filter in Ruby would be:

#!/usr/bin/env ruby

DICT = '/usr/share/dict/words'

ARGF.each_line do |line|
  is_str = false
  line.gsub(/[^[:alnum:]]/, ' ').split.each do |word|
    next if word.length < 4
    is_str = true if `grep -Fxc '#{word}' #{DICT}`.chomp.to_i > 0
  end
  puts line if is_str
end

This would then be passed the output of strings(1):
bash$ strings /bin/ls | ./strings.rb

This is super-slow for large executables, and will miss sprintf-style formatting strings that contain only control characters (e.g. "%p\n"), but for the general case it produces useful output.

Directions for improvement: load one or more dictionary files and perform lookups on them in Ruby. Search for english-like words by taking 4-letter prefixes and suffixes of each 'word' in the string and searching for dictionary words that start or end with that prefix/suffix. Provide early-exit from the inner field look when a match is found. Allow matches of sprintf formatting strings, URIs ('http://'), etc.

Thursday, August 11, 2011

Ruby __END__ and DATA

The __END__ keyword in Ruby marks the end of the program text; the parser ignores everything that follows it. It is often used for appending documentation such as a LICENSE file to the end of a source code file.

More interesting is the fact that the contents of the file following the __END__ keyword are available at runtime via the IO object named DATA.

This means that it is possible to include test data -- even binary data -- at the end of a Ruby source file:

bash$ cat od.rb
#!/usr/bin/env ruby
if __FILE__ == $0
  offset = 0
  while (buf = 16)
    bytes = buf.unpack 'C*'
    puts "%08X: %s" % [ offset, { |b| " %03o" % b }.join ]
    offset += 16
  end
end
__END__
bash$ cat /bin/true >> od.rb
bash$ ./od.rb
00000000:  177 105 114 106 002 001 001 000 000 000 000 000 000 000 000 000
00000010:  002 000 076 000 001 000 000 000 220 044 100 000 000 000 000 000
00000020:  100 000 000 000 000 000 000 000 060 226 001 000 000 000 000 000

Monday, August 8, 2011

knotify4 uses 100% CPU

This is true, and has been for a while. It tends to occur when a laptop is suspended while online, then resumed while offline. A nice side effect is that battery life gets reduced by about 75% while offline, which is generally when one wants the longest battery life.

There are plenty of bug reports open on this, but it's pretty clear that the KDE/Kubuntu guys either have no clue how to fix this, or cannot be bothered.

Since knotify4 isn't really all that useful (especially when using E17 for a WM), there is a brutal hack that will effectively silence it:

sudo mv /usr/bin/knotify4  /usr/bin/knotify4.orig
sudo cp /bin/true /usr/bin/knotify4

Needless to say, the original file should be restored before doing an upgrade.

Disabling startup (init.d) services in Ubuntu

Ubuntu never has made obvious what the "Ubuntu way" of removing services from System-V run levels is. The GUI tools in GNOME and KDE are incomplete, and a quick investigation of the run level directories shows that they are filled automatically -- so that symlinks added and removed manually might, in the future, get ignored.

Fortunately, the README in /etc/init.d ends with the following advice:

Use the update-rc.d command to create symbolic links in the /etc/rc?.d
as appropriate. See that man page for more details.

The man page lists the following forms for invoking update-rc.d:

       update-rc.d [-n] [-f] name remove

       update-rc.d [-n] name defaults [NN | SS KK]

       update-rc.d [-n] name start|stop NN runlevel [runlevel]...

       update-rc.d [-n] name disable|enable [S|2|3|4|5]

The following command will remove the service collectd from all run levels:

bash$ sudo update-rc.d collectd disable
update-rc.d: warning: collectd start runlevel arguments (none) do not match LSB Default-Start values (2 3 4 5)
update-rc.d: warning: collectd stop runlevel arguments (none) do not match LSB Default-Stop values (0 1 6)
 Disabling system startup links for /etc/init.d/collectd ...
 Removing any system startup links for /etc/init.d/collectd ...
 Adding system startup for /etc/init.d/collectd ...
   /etc/rc0.d/K95collectd -> ../init.d/collectd
   /etc/rc1.d/K95collectd -> ../init.d/collectd
   /etc/rc6.d/K95collectd -> ../init.d/collectd
   /etc/rc2.d/K05collectd -> ../init.d/collectd
   /etc/rc3.d/K05collectd -> ../init.d/collectd
   /etc/rc4.d/K05collectd -> ../init.d/collectd
   /etc/rc5.d/K05collectd -> ../init.d/collectd

The command to remove a service from a specific runlevel should be the following:

sudo update-rc.d BASENAME disable `runlevel | cut -d ' ' -f 2`

...however a quick experiment shows that the runlevel argument is ignored:

bash$ sudo update-rc.d collectd disable 2
update-rc.d: warning: collectd start runlevel arguments (none) do not match LSB Default-Start values (2 3 4 5)
update-rc.d: warning: collectd stop runlevel arguments (none) do not match LSB Default-Stop values (0 1 6)
 Disabling system startup links for /etc/init.d/collectd ...
 Removing any system startup links for /etc/init.d/collectd ...
 Adding system startup for /etc/init.d/collectd ...
   /etc/rc0.d/K95collectd -> ../init.d/collectd
   /etc/rc1.d/K95collectd -> ../init.d/collectd
   /etc/rc6.d/K95collectd -> ../init.d/collectd
   /etc/rc2.d/K05collectd -> ../init.d/collectd
   /etc/rc3.d/K05collectd -> ../init.d/collectd
   /etc/rc4.d/K05collectd -> ../init.d/collectd

Sunday, August 7, 2011

perf-backed disassembly

Since 2.6.31 or thereabouts, the Linux kernel has come with a built-in performance counter known as perf.

The common form of perf is well-known to be useful in gathering performance statistics on a running program:

bash$ perf stat -cv ./a.out 

cache-misses: 11313 2020574449 2020574449
cache-references: 62031796 2020574449 2020574449
branch-misses: 17909 2020574449 2020574449
branches: 606684832 2020574449 2020574449
instructions: 6324531571 2020574449 2020574449
cycles: 6408533747 2020574449 2020574449
page-faults: 304 2019963367 2019963367
CPU-migrations: 7 2019963367 2019963367
context-switches: 205 2019963367 2019963367
task-clock-msecs: 2019963367 2019963367 2019963367

 Performance counter stats for './a.out':

             11313 cache-misses             #      0.006 M/sec
          62031796 cache-references         #     30.709 M/sec
             17909 branch-misses            #      0.003 %    
         606684832 branches                 #    300.344 M/sec
        6324531571 instructions             #      0.987 IPC  
        6408533747 cycles                   #   3172.599 M/sec
               304 page-faults              #      0.000 M/sec
                 7 CPU-migrations           #      0.000 M/sec
               205 context-switches         #      0.000 M/sec
       2019.963367 task-clock-msecs         #      0.996 CPUs 

        2.027948307  seconds time elapsed

The events to be recorded can be specified with the -e option in order to refine the output:

bash$ perf stat -e cpu-clock -e instructions ./a.out

 Performance counter stats for './a.out':

       2026.748812 cpu-clock-msecs         
        6324293589 instructions             #      0.000 IPC  

        2.032519896  seconds time elapsed

A list of available events can be obtained via perf list:

bash$ perf list | head
List of pre-defined events (to be used in -e):

  cpu-cycles OR cycles                       [Hardware event]
  instructions                               [Hardware event]
  cache-references                           [Hardware event]
  cache-misses                               [Hardware event]
  branch-instructions OR branches            [Hardware event]
  branch-misses                              [Hardware event]
  bus-cycles                                 [Hardware event]

The perf toolchain also includes the utility perf top, which can be used to monitor a single process or the system as a whole, kernel included:

bash$ sudo perf top 2>/dev/null
   PerfTop:       0 irqs/sec  kernel:-nan%  exact: -nan% [1000Hz cycles],  (all, 4 CPUs)

             samples  pcnt function               DSO
             _______ _____ ______________________ __________________

               77.00 39.3% intel_idle             [kernel.kallsyms] 
               13.00  6.6% __pthread_mutex_unlock
               13.00  6.6% pthread_mutex_lock
               12.00  6.1% __ticket_spin_lock     [kernel.kallsyms] 
                7.00  3.6% schedule               [kernel.kallsyms] 
                6.00  3.1% menu_select            [kernel.kallsyms] 
                6.00  3.1% fget_light             [kernel.kallsyms] 
                6.00  3.1% clear_page_c           [kernel.kallsyms] 

Where things start to get interesting, however, is with perf record. This utility is generally used along with perf report to record the performance counters of a process, and review them later.

This can be used, for example, to generate a call graph:

bash$  perf record -g -o /tmp/a.out.perf ./a.out
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.148 MB /tmp/a.out.perf (~6461 samples) ]
bash$ perf report -g -i /tmp/a.out.perf
# Events: 1K cycles
# Overhead        Command  Shared Object  Symbol
# ........  .............  .............  ......
    99.90%          a.out  a.out          [.] main
            --- main

     0.10%          a.out  [l2cap]        [k] 0xffffffff8103804a
            --- 0xffffffff8105f438

Once perf data has been recorded, the perf annotate utility can be used to display a disassembly of the instructions that were executed:

bash$  perf annotate -i /tmp/a.out.perf |more

 Percent |      Source code & Disassembly of a.out
         :      Disassembly of section .text:
         :      0000000000400554

    0.00 :  400554:       55                      push   %rbp
    0.00 :  400555:       48 89 e5                mov    %rsp,%rbp
    0.00 :  400558:       48 81 ec 30 00 0c 00    sub    $0xc0030,%rsp
    0.00 :  40055f:       48 8d 85 d0 ff fb ff    lea    -0x40030(%rbp),%rax
    0.00 :  400566:       ba 00 00 04 00          mov    $0x40000,%edx
    0.00 :  40056b:       be 00 00 00 00          mov    $0x0,%esi
    0.00 :  400570:       48 89 c7                mov    %rax,%rdi
    0.00 :  400573:       e8 b0 fe ff ff          callq  400428 <memset@plt>
    0.00 :  400578:       c7 45 fc 00 00 00 04    movl   $0x4000000,-0x4(%rbp)

    4.21 :  4006a5:       8b 45 d0                mov    -0x30(%rbp),%eax
   15.54 :  4006a8:       83 c0 01                add    $0x1,%eax
    4.97 :  4006ab:       89 45 d0                mov    %eax,-0x30(%rbp)
    4.87 :  4006ae:       8b 45 d0                mov    -0x30(%rbp),%eax
   17.79 :  4006b1:       83 c0 01                add    $0x1,%eax
    4.36 :  4006b4:       89 45 d0                mov    %eax,-0x30(%rbp)
    4.72 :  4006b7:       48 83 45 f0 01          addq   $0x1,-0x10(%rbp)
    0.00 :  4006bc:       48 8b 45 f0             mov    -0x10(%rbp),%rax

As to be expected from Torvalds and company, the utilities include a number of options for generating parser-friendly output, limiting reporting to specified events and symbols, and so forth. Check the man pages for details.
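For instance, the raw counter lines from the perf stat -cv run above are trivial to consume programmatically. A quick sketch (the meaning of the 2nd and 3rd fields -- they appear to be time-enabled/time-running values -- is an assumption based on the sample, not on documented format):

```ruby
# Parse lines like "cache-misses: 11313 2020574449 2020574449"
# into an event => count Hash, ignoring anything else.
def parse_perf_counters(text)
  counters = {}
  text.each_line do |line|
    counters[$1] = $2.to_i if line =~ /^([\w-]+):\s+(\d+)/
  end
  counters
end

sample = <<EOS
cache-misses: 11313 2020574449 2020574449
branches: 606684832 2020574449 2020574449
EOS

stats = parse_perf_counters(sample)
```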

Sunday, July 31, 2011

Easy E(17)

Ubuntu finally ships with a working E17, but there is a lingering problem: most of the modules are gone!

To get a full-featured install of E17, replete with modules and UI enhancements (finally! the settings panel entries are available from the main menu!), it is best to install from SVN.

Building E17 has always been an ordeal. Fortunately, the script makes downloading, building, and installing E17 from SVN quite a simple affair, once all of the dependencies are met.

To begin with, download the easy_e17 script from

There is a bug in the script which must be fixed if packages are going to be built. According to this bug report, the line (around 36 or 37)

  packages_full="$efl_basic $bin_basic $e_modules_bin $e_modules_extra $efl_extra

should be

  packages_full="$efl_basic $bin_basic $e_modules_bin $efl_extra $bin_extra $e_modules_extra"

... in order to get all the internal dependencies straight. Failure to do this results in the following error:
e_mod_main.c:1527:32: error: ‘ethumb_client’ undeclared (first use in this function)

Next, install all necessary dependencies to build E17 and the modules:

bash$ sudo apt-get install autopoint libudev-dev libgcrypt11 libgcrypt11-dev
bash$ sudo apt-get install libasound2-dev libasound2 libxine-dev 
bash$ sudo apt-get install paman padevchooser paprefs pavucontrol pavumeter 
bash$ sudo apt-get install libiptcdata-dev libmpd-dev cython libxcb-shape0-dev

According to this guide, the full list of dependencies is as follows:

bash$ sudo apt-get install xterm make gcc bison flex subversion cvs automake1.10 autoconf autotools-dev autoconf-archive libtool gettext libpam0g-dev libfreetype6-dev libpng12-dev zlib1g-dev libjpeg62-dev libtiff4-dev libungif4-dev librsvg2-dev libx11-dev libxcursor-dev libxrender-dev libxrandr-dev libxfixes-dev libxdamage-dev libxcomposite-dev libxss-dev libxp-dev libxext-dev libxinerama-dev libxft-dev libxfont-dev libxi-dev libxv-dev libxkbfile-dev libxres-dev libxtst-dev libltdl7-dev libglu1-xorg-dev libglut3-dev xserver-xephyr libdbus-1-dev liblua5.1-0-dev libasound2-dev libudev-dev autopoint  libxml2-dev

Chances are, however, that most of these are already installed.

The E17 build makes some assumptions about the location of the gcrypt libraries, so they must be symlinked:

bash$ sudo ln -s /usr/lib/x86_64-linux-gnu/libgcrypt.a /lib/x86_64-linux-gnu
bash$ sudo ln -s /usr/lib/x86_64-linux-gnu/ /lib/x86_64-linux-gnu
bash$ sudo ln -s /usr/lib/x86_64-linux-gnu/ /lib/x86_64-linux-gnu

...otherwise linker errors such as the following will appear:
../../src/lib/.libs/ undefined reference to `gcry_cipher_setiv'
../../src/lib/.libs/ undefined reference to `gcry_cipher_setkey'

The Xine development headers (libxine-dev) must be installed in order for emotion to compile; otherwise an error like this will appear:
configure: error: Xine, Gstreamer or VLC backends must be selected to build Emotion
Naturally, Gstreamer or VLC headers could be installed instead.

The asound2-dev and the pulseaudio tools are required to get the mixer module working. Without them, the error
No ALSA mixer found!
will appear in the mixer module settings. Note that pulseaudio requires that the current user be added to the audio group; this can be done with the following command:

  sudo adduser $USERNAME audio

Once all this is done, it is a simple matter to install E17 with the following command:

./ --instpath=/usr/local/e17 --srcpath=/home/$USER/src/e17 -i

This downloads the E17 source code to ~/src/e17, and sets the install directory to /usr/local/e17. There should be a nice success message if everything goes well.

Next, build all of the modules:

./ --instpath=/usr/local/e17 --srcpath=/home/$USER/src/e17 --packagelist=full -i

Upgrading is just as straightforward:

./ --instpath=/usr/local/e17 --srcpath=/home/$USER/src/e17 --packagelist=full -u

Final notes:

  * The composite module causes problems: when using the Software engine, it causes typing delays that make terminals near unusable; when using the OpenGL engine, windows do not update until there is a focus change (BIG problem). It should be disabled.

  * The shelf containing the systray module must be given the stacking order Above Everything; otherwise, the applications in the E17 systray will not receive mouse clicks (VERY BAD).

  * Determining the fonts used by various modules is easier when the source is available. The font classes used in the tclock module, for example, are in E-MODULES-EXTRA/tclock/tclock.edc. The time is displayed in text_class: "module_large", and the date is displayed in text_class: "module_small". The Settings->Look->Fonts dialog can be used to determine the font used for these classes by enabling (via the checkbox at bottom-right) the font class Modules::Small or Modules::Large, and setting the font (e.g. Sans/Regular/12 pixels) to be used for each.

UPDATE: Recent builds now fail when using --packagelist=full. Best to stick with --packagelist=half if the full build fails.

Static DNS servers in /etc/resolv.conf

It's an old problem: NetworkManager overwrites /etc/resolv.conf with each connection, forcing one to use the local ISP's crappy DNS servers instead of one's tried-and-true public DNS servers.

The fixes for this are legion: chattr resolv.conf, disable dhclient, replace NetworkManager.

It turns out that dhclient can be configured to prepend specified DNS servers (up to two, before a warning is displayed) to resolv.conf.

Simply add the following line to /etc/dhcp/dhclient.conf :

prepend domain-name-servers,;                

Obviously, replace and with whatever name servers are desired. The semicolon, by the way, is extremely important -- don't leave it out.

Friday, July 29, 2011

Ubuntu, you're fired.

Attempted to upgrade a workstation (dual-xeon, sata3 ssd for /; sata 3 hdd for /usr, /var, /opt, /usr/local; sata2 mirrored raid (via dmraid) for /home) from 10.04 to 11.04 via do-release-upgrade (in update-manager-core) last night. Why do that? Because Ubuntu only updates software packages for its newest release, so if you need, say, Python 2.7 in order to work on a project, you're SOL unless you upgrade.

Everything went well until the final reboot, which left the machine at:

error: the symbol 'grub_xputs' not found.

Verified that the grub software is all present:

grub rescue> ls (hd0,1)/boot
... lots of modules and such...

Everything seemed to be there, so attempted to boot manually, under the assumption it was all due to misconfiguration:

grub rescue> set root=(hd0,1)
grub rescue> set kernel=/boot/vmlinuz-2.6.32-33-generic
grub rescue> set initrd=/boot/initrd.img-2.6.32-33-generic
grub rescue> boot
error: no loaded kernel

Second attempt, loading the grub modules manually (per Ubuntu Grub2 Docs and this thread):

grub rescue> set prefix=(hd0,1)/boot/grub
grub rescue> insmod normal
error: the symbol 'grub_xputs' not found.
grub rescue> insmod linuxgrub
error: the symbol 'grub_xputs' not found.

For some reason, the symbols in the grub2 bootstrap code do not match the ones expected by the modules. One of them is the wrong version. Looks to be a reported bug.

The solution, per Ubuntu Grub2 Docs and this fix, is to boot a LiveCD (which must be the same version as the newly-installed-but-broken Ubuntu) and run grub-install. This turns out to be a huge delay, as the upgrade took place without a LiveCD, so an ISO must be downloaded. It is also a huge pain when one is out of blank CDs, and has to press a usbstick into service (accomplished per Ubuntu Install Docs).

It also doesn't work. The new error on reboot is

error: file not found
grub rescue> insmod linux
ELF header smaller than expected

It turns out that 10.04 upgrades to 10.10, NOT 11.04, so the LiveCD contained the wrong version of grub!

Fortunately, there is a guide to purging and reinstalling grub once booted from any LiveCD.

The steps are (details such as hdd devices and mount points are specific to this workstation):

bash$ sudo bash
bash# mount /dev/sdk1 /mnt
bash# mount /dev/sdf3 /usr
bash# mount /dev/sdf4 /var
bash# mount --bind /dev /mnt/dev
bash# mount --bind /dev/pts /mnt/dev/pts
bash# mount --bind /proc /mnt/proc
bash# mount --bind /sys /mnt/sys
bash# chroot /mnt
bash-chroot# apt-get update
bash-chroot# apt-get purge grub grub-common grub-pc
bash-chroot# apt-get install grub-common grub-pc
bash-chroot# #NOTE: Be sure to select /dev/sdk for Grub2 install location
bash-chroot# exit
bash# umount /mnt/dev/pts
bash# umount /mnt/dev 
bash# umount /mnt/sys
bash# umount /mnt/proc
bash# umount /mnt/var
bash# umount /mnt/usr
bash# umount /mnt
bash# reboot

That does the trick.

But you know what would really be nice? A way to upgrade the OS, including the kernel and all packages, WITHOUT TOUCHING THE GODDAMN BOOT-LOADER. Seriously, it's an upgrade, the bootloader already works -- don't mess with it!

Monday, July 18, 2011

Pre-defined preprocessor definitions (GCC)

The predef project is extremely convenient for looking up architecture-dependent, os-dependent, compiler-dependent, or standard C/C++ compiler macros, but it is not available when working offline.

To get around this, the preprocessor definitions defined automatically by GCC can be viewed with this command line:

bash$ gcc -dM -E - < /dev/null

#define __DBL_MIN_EXP__ (-1021)
#define __UINT_LEAST16_MAX__ 65535
#define __FLT_MIN__ 1.17549435082228750797e-38F

This will output the #define directives with each macro fully spelled out. To get a list of just the macro names, without their expansions, use -dN:

bash$ gcc -dN -E - < /dev/null
# 1 ""
# 1 ""
#define __STDC__
#define __STDC_HOSTED__
#define __GNUC__
#define __GNUC_MINOR__

This is a bit noisier, as preprocessor linemarkers (e.g. "# 1") are included, but it may be more suitable for parsing.

Sunday, July 17, 2011

/usr/bin/say linux

Anyone who has spent any time fiddling with OS X on the command line will have discovered /usr/bin/say, and incorporated it into a few shell scripts to provide background notifications ("build complete", etc).

While Linux does not provide this command out of the box, there is a package called festival which provides text-to-speech capability:

apt-get install festival festival-freebsoft-utils

The usage is quite straightforward:

echo "this is only a test" | festival --tts

A simple alias can be used to replace /usr/bin/say on the command line:

alias say='festival --tts'

A more script-friendly solution would be to create a small wrapper script that passes /usr/bin/say arguments (filenames or STDIN) to festival:

#!/usr/bin/env ruby

def say(str)
  `echo '#{str}' | festival --tts`
end

if __FILE__ == $0
  ARGF.each_line { |line| say(line.chomp) }
end

Friday, July 8, 2011

Rattle
Rattle is a useful data analysis UI for R.

Unfortunately, it's a bit hard to get started from a shortcut such as an XDG .desktop file. The usual method (ala Rkward and RCommander), `R_DEFAULT_PACKAGES="$R_DEFAULT_PACKAGES rattle" R`, doesn't work. Instead, one is left doing the `library(rattle); rattle()` commands in an R terminal.

A first stab with Rscript fails:

#!/usr/bin/env Rscript
library(rattle)
rattle()
This starts Rattle, then exits both Rattle and R. Evidently rattle() spawns a thread and returns immediately, causing Rscript to reach EOF and exit. The same happens when the commands are sent to R, r, or Rscript via STDIN.

Adding a line to wait for an enter keypress seems a likely workaround:

#!/usr/bin/env Rscript
library(rattle)
rattle()
system('bash -c read')

This starts Rattle, and pauses R waiting for an enter key as expected, but Rattle does not receive input. Whatever it's doing for its UI, it is not properly threaded, as locking up the parent also locks up the child -- probably some sort of "greenthreads" ala Ruby or Python.

It turns out, however, that this is not an unknown problem. A query of "R_DEFAULT_PACKAGES rattle" turns up pages that provide a fix for the original method: specifying Rattle's dependencies.

The correct command is therefore

 R_DEFAULT_PACKAGES="datasets,utils,grDevices,graphics,stats,rattle" R -q --save

Note the -q, to suppress the startup text, and --save (or, alternately, --no-save) to suppress the confirm-quit message. This should be launched in a .desktop file with Terminal=true. R will still wait around for a q() or Ctrl-D, but that is a small price to pay.

Saturday, May 14, 2011

Installing Google Go system-wide on Ubuntu

Just some quick notes. This assumes all necessary packages for C dev are installed (sudo apt-get install bison ed gawk gcc libc6-dev make).

As root (sudo bash):

bash# apt-get install mercurial mercurial-common mercurial-git
bash# cd /usr/local
bash# hg clone -u release go
bash# cd go/src
bash# export GOBIN=/usr/local/bin
bash# ./all.bash

After this, the Go toolchain will be installed in /usr/local/bin, with all supporting files in /usr/local/go. Ubuntu should already have /usr/local/bin in the path for all users.

To test (as a normal user):

bash$ cd /tmp
bash$ cat >hello.go <<EOF
> package main
> import "fmt"
> func main() {
>     fmt.Printf("hello, world\n")
> }
> EOF
bash$ 6g hello.go
bash$ 6l hello.6
bash$ ./6.out
hello, world
bash$ rm hello.go hello.6 6.out

More on OSS vs Commercial users

The striking difference in attitude between commercial users and open source users (e.g. more professionalism, gratitude and patience in the former than the latter) is most likely due to one of Cialdini's Influences: Commitment (in the book, "commitment and consistency").

It seems that the initial decision to spend money on software causes an emotional investment in the user. One would therefore expect that when a user has had the choice of software forced upon them (e.g. the use of Windows in a corporate environment, the use of an unfamiliar version control utility in a dev department, etc), they would be as impatient, rude, and generally unprofessional as a typical open source user.

Informal observation bears this out, but it would be nice to see some studies demonstrating the effects -- a simple review of ticketing systems for similar projects (Apache/IIS, Eclipse/VisualStudio, etc) should bear some fruit.

Thursday, March 24, 2011

Not as constant as one might think

One of those Ruby gotchas that has to lead to a bug report before it finally burns itself into memory:

irb(main):001:0> TMP="1234"
=> "1234"
irb(main):002:0> t = TMP
=> "1234"
irb(main):003:0> t.succ!
=> "1235"
irb(main):004:0> TMP
=> "1235"

Eh? Isn't TMP a constant, after all?

Apparently this is because assignment copies only the reference: t and TMP end up pointing at the same (shared) String:

irb(main):005:0> a="456"
=> "456"
irb(main):006:0> b=a
=> "456"
irb(main):007:0> b.succ!
=> "457"
irb(main):008:0> a
=> "457"

Needless to say, this will do the trick:

t = TMP.dup

This, on the other hand, will raise an error on t.succ!, for obvious reasons:

t = TMP.freeze
Moral: "Constants" are NOT frozen and their values are not copied on assignment!
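The whole gotcha can be demonstrated end to end in one short script (the constant names here are arbitrary, not from the original post):

```ruby
# Constants hold references, and plain assignment copies the reference.
GREETING = "1234"

t = GREETING
t.succ!                    # mutates the shared String; GREETING changes too

# dup copies the String, leaving the constant alone.
u = GREETING.dup
u.succ!

# freeze makes in-place mutation raise instead of silently succeeding.
LOCKED = "abc".freeze
begin
  LOCKED.succ!
rescue => e
  puts "succ! on a frozen string raised #{e.class}"
end

puts GREETING   # => "1235" (mutated through t)
puts u          # => "1236" (independent copy)
```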

Wednesday, March 23, 2011

Using git to synchronize and backup home directories

With git-home-history gone, and nothing suitable to take its place, one must hand-roll a decent solution for version-controlling a home directory. Since git can synchronize repositories, it should also be possible to use it to synchronize the contents of a home directory on multiple machines.

This example will assume that a desktop computer (hostname 'desktop') contains the master ('remote') repository, and that a laptop (hostname 'laptop') contains the slave (local) repository.

NOTE: It is important to be aware of which files should be left out of the repo. Because a laptop and a desktop will have different graphics hardware, the settings for GUI applications such as web browsers and window managers should not be in the repo. Also, files which change a lot (cache files, .bash_history, etc) should be kept out of the repo as they will always cause merge conflicts. Finally, private keys should be left out of the repo.

Finally, need it be said that the home directories of both machines should be backed up before trying this?

Desktop: Create and fill the repository

To begin with, create an empty git repo:

bash$ cd ~
bash$ git init .
bash$ touch .gitignore
bash$ git add .gitignore

Next, modify the .gitignore file to select which files or directories to leave out of the repo:

bash$ vi .gitignore

This example ignores backup, swap, and core files, as well as directories that probably shouldn't be shared between the two machines (Desktop, Downloads, Templates, tmp, mnt). Note that all hidden files are left out of the repo by default (.*).
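As a concrete illustration, such a .gitignore might look like the following (the exact patterns are an assumption based on the description above):

```
# hidden files are ignored by default; exceptions are re-added with git add -f
.*
# editor backup and swap files
*~
*.swp
# core dumps
core
# machine-specific directories
Desktop/
Downloads/
Templates/
tmp/
mnt/
```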

Now add all allowed files into the repo:

bash$ git add .

Now add exceptions to the .gitignore file. These will be config files that are shared between the two machines:

bash$ git add -f .bashrc .xsessionrc .vimrc .gvimrc .gdbinitrc  .ssh/config .local/share/applications

If Firefox was configured correctly (i.e. by making a tarball of ~/.mozilla/firefox on the desktop machine and extracting it to ~ on the laptop machine, instead of letting Firefox generate its own config), then the bookmarks file can be added as well:

bash$ git add -f .mozilla/firefox/*.default/bookmarks.html

This of course holds true for non-config data in the Firefox dir, such as .mozilla/firefox/*.default/ReadItLater (UPDATE: but not zotero, which modifies its files even when they are not being used).

Finally, commit all of the contents to the repo:

bash$ git commit -m 'Initial home dir checkin'

The git directory now has a starting version of the home directory checked in. It can be reviewed with a tool such as QGit to ensure nothing is missing or unwanted:

bash$ qgit &

NOTE: To make the following operations go smoothly, the following line must be added to the [receive] section of .git/config :

    denyCurrentBranch = false
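The same setting can be made from the command line instead of editing the file by hand; a sketch:

```shell
# equivalent to putting "denyCurrentBranch = false" in the [receive]
# section of .git/config
cd ~
git config receive.denyCurrentBranch false

# confirm the value
git config receive.denyCurrentBranch
```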

Desktop : Create a script to auto-commit

At this point, it is useful to create a shell script that performs a commit in the background.

bash$ mkdir -p bin
bash$ vi bin/
cd ~
git add .
git commit -m 'automated backup' . 
bash$ chmod +x bin/
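The script's name was lost in formatting above; assuming a name like bin/autocommit.sh, the body might look like this (the optional directory argument is also an assumption, added so the script can be exercised outside $HOME):

```shell
#!/bin/sh
# autocommit.sh -- snapshot a git-tracked home directory.
# Defaults to $HOME; an explicit directory may be passed as the first argument.
cd "${1:-$HOME}" || exit 1
git add .
git commit -m 'automated backup' .
```

Running it from cron (as described below) keeps snapshots accumulating without any interaction.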

Laptop : Clone the repository

On the laptop, clone the repository from the desktop:

bash$ cd ~
bash$ mkdir -p tmp/git-repo
bash$ cd tmp/git-repo
bash$ git clone desktop:/home/$USER

Note that the repo was cloned to a temporary directory so that it will not overwrite any local files. This is important!

Move the git metadata directory to the home directory:

bash$ cd $USER
bash$ mv .git ~

Retrieve any missing files (i.e. that exist on the desktop but not on the laptop, such as .gitignore) from the repository:

bash$ cd ~
bash$ git checkout \*

Laptop: Add local changes

Create a branch for the changes that will be made next:

bash$ git checkout -b laptop

Add any local exclusions to the .gitignore file:

bash$ echo .pr0n >> .gitignore

Add any additional files to git:

bash$ git add TODO NOTES

Commit the branch:

bash$ git commit -m 'laptop additions'

Now merge the branch into master:

bash$ git checkout master
bash$ git merge laptop

Verify that the changes are suitable:

bash$ qgit &

Finally, push the changes to the desktop:

bash$ git push

Desktop: Generate canonical file versions

The desktop will now have all its files set to the laptop versions.

At this point, files that have been modified should be reviewed and edited, so that a canonical version will be stored in the repo and used by both the desktop and the laptop. QGit makes the review process fairly simple.

Note that some config files will have to source local config files that lie outside the repository (i.e. they are excluded in .gitignore). For example, .bashrc might have a line like

[ -f ~/.bash_local.rc ] && . ~/.bash_local.rc

...and .vimrc might have a line like

if filereadable(expand("$HOME/.vim_local.rc"))
    source ~/.vim_local.rc
endif

The files .bash_local.rc and .vim_local.rc will be listed in .gitignore, and will have machine-specific configuration such as custom prompts, font size (e.g. in .gvimrc), etc.

Once the canonical versions of the files have been created, they are committed:

bash$ git commit -m 'canonical version' .
bash$ git tag 'canonical'

Laptop: Pull canonical versions

The canonical versions can now be pulled down to the laptop. Note that any supporting files (e.g. .bash_local.rc) will have to be created on the laptop.

bash$ git pull

Laptop & Desktop : Add cron job

In order for git to automatically track changes to the home directory, both the laptop and the desktop will need a cron job that runs the auto-commit script.

The following crontab entry will run it every two hours:

bash$ crontab -e
0 */2 * * * /home/$USER/bin/ >/dev/null 2>&1

...of course $USER must be replaced with the actual username.

Note: Some provision must be made for pushing the laptop repo to the desktop. This can be done in a cron job, but is probably better suited to an if-up (on network interface up) script.

UPDATE: Be careful when pushing; the desktop must be forced to update its working tree, or its next commit will delete files on the laptop. The following script will do the trick:

cd $HOME

git push && ssh desktop 'git reset --merge `git rev-list --max-count=1 master`'

Of course passwordless ssh should be set up for this to work. A similar problem exists when pulling from the server: a "git checkout \*" must be performed to create any missing files.

Desktop: Add backup script and cron job

At this point, a backup script and cron job can be added to the desktop server. The directory ~/.git is all that needs to be backed up; a shell script can rsync it to a server.