Linux Format - December 2014 UK

Docker containers
Create and automate your
own services with virtualisation
#1 for Free Software
Discover the inner secrets
of the mighty but mini
Linux PC board
Minecraft hacks
Audio streaming
OwnCloud hosting
Samba file sharing
Use the Twitter API
Hacker, teacher, artist
Michael Shiloh on the power of Arduino p42
Revision control
Network monitoring
Python meets C
Git started
Thank Linus and
control your projects
Gear up for some
deep packet inspection
Also inside…
Pro music
You don’t know JACK, but you
will: pro-level music on Linux.
Mash up your Python
with some super C code
Arduino developed a large
community, and that
was really key to its success
What we do
We support the open source community
by providing a resource of information, and
a forum for debate.
We help all readers get more from Linux with our
tutorials section – we’ve something for everyone!
We license all the source code we print in our
tutorials section under the GNU GPLv3.
We give you the most accurate, unbiased and
up-to-date information on all things Linux.
Who we are
A year in FOSS
We asked our team of experts to tell us what
they believe to be the single most exciting
open source project of the last year...
The time has flown by but it seems I’ve been in the
hot FOSS seat for twelve months; more commonly
known as one of your Earth years. It’s rare to talk
directly about the magazine, but the good news is we’re
still here and doing better than ever. We’re continuing to
bring onboard new writers to expand our areas of expertise
and we’re planning what we hope you will find fascinating
new features and tutorials for the year ahead.
I might have started on Linux Format as something of an
open source neophyte – though to qualify that I’ve used
Linux since 2000 for projects and even looked at the kernel
source in the mid-90s on my Amiga – but even then it was
obvious how vibrant, constantly changing and vitally
important GNU/Linux is to the world. Over the last year
here in the UK open source has seeped into our
government infrastructure and is now part of the school
curriculum, championed a good deal by the work of the
Raspberry Pi Foundation. To help celebrate this fortunate
turn of events we’re exploring how you can hack Linux on
the Pi to ever greater levels with fun projects, advanced
Linux services and essential skills. All the fun starts on
page 34 so we hope you’ve brought your Pi with you.
Not that we want to obsess over the Pi too much – even if
it’s a great device that’s changing the world, a big part of
that success is its Linux smarts. We’re finishing our Nginx
series on page 76 (which also happens to all work with the
Raspberry Pi) but we’re also going deep into Cython to
accelerate everyday Python on page 84, exploring how to
use Git for your own projects on page 88 and looking at how
you can get started developing PHP on page 52.
We’re also trying to stay musical with a roundup of
excellent players on page 26 and even pro-level production
on page 48. Plus even more advanced tutorials, reviews
and a packed DVD. I hope you enjoy the issue and many
more over the next 12 months!
Jonni Bidwell
If the propaganda is anything to go by, then
Maidsafe is going to be pretty darn
exciting: A distributed ecosystem featuring
a decentralised, anonymous app platform,
incentivised by its own cryptocurrency. The
platform is entirely open, and the crypto it
uses has been thoroughly scrutinised. It’ll
be interesting to see reactions…
Andrew Mallett
Puppet wins hands down for me every
time. Although not exactly new, Puppet is
still providing configuration support to new
innovations, such as OpenStack. The ease
and simplicity that enterprise systems
have shown us really can be managed
makes me a marionette by choice. (Insert
‘strings attached’ joke here.)
Les Pounder
In the last twelve months the Shrimping
project has held my attention more than
most. It’s an Arduino Uno board for £5, and
the project has been really interesting as
you get ‘hands on’ with building an Arduino
and learning the function of each of the
components. I recently learnt it works with
Scratch – RESULT!
Richard Smedley
Mailpile brings the much-needed protection
of encrypted email to non-technical users
on all platforms. Standing out in a crowd of
exciting, emerging IndieWeb projects,
Mailpile’s powerful tagging, notable speed,
and flexible self-hosting choices make it the
first realistic FOSS alternative to the really-slightly-mighty Gmail.
Mayank Sharma
My favourite projects all seem to have
something of a habit of dying soon after
I’ve labelled them as such – SolusOS, Pear
Linux, and the latest favourite of mine,
Bodhi Linux. So you’ll understand if I’m a
little superstitious about using that word.
Oh wait – my favourite project right now
has to be Windows!
Neil Mohr Editor
[email protected]
Subscribe today
See p32 for awesome deals
December 2014 LXF191 3
“Be kind, for everyone you meet is fighting a harder battle.” – Plato
LG Ultra-wide monitor ..... 17
21:9 is the new 4:3. Widescreen is dead; we’ll
all be using these 34-inch HD monsters.
Hack the Raspberry Pi
Get the tiny PC board to
rock your world p34
Motorola Moto 360.............18
Android Wear is here from Google and this is
the poster watch for it from Motorola. Would
you strap it to your wrist?
We’re not square, says Motorola.
Thecus N4560 NAS ..........19
A powerful mid-range SOHO or home NAS
that packs four drives and an impressive OS.
Music players p26
Gentoo Live ....................... 20
Trapped in the Phantom Zone-like plastic of
a DVD, who thought this was a good idea?
Peach OSi 14.04 ................21
Taste the not-so-forbidden fruit of yet
another brand-new Ubuntu respin.
Tonido .................................. 22
Why would Mayank Sharma leave
OwnCloud? There’s only one way to find out.
WordPress 4.0 .................. 23
The ubiquitous CMS gets a full point update,
dancing breaks out across the globe.
Gaming headsets ............. 24
Top-notch headsets for perfect team-chat
or just idle gossip online.
Games ................................ 25
We catch up with The Witcher 2 and see if it’s
ready for prime-time on Linux.
Talking heads
“I discovered open
source long before I
think it was even a term...”
I said Ubuntu was best!
Michael Shiloh on the power of Arduino p42
On your free DVD
Raspbian, Kali Linux,
Pi MusicBox, RetroPie,
Jasper 2014, NOOBS
All the Raspberry Pi essentials
PLUS: HotPicks and tutorial code
Treat yourself or a
loved one to an LXF
subscription! p32
Don’t miss...
JACK in ...................................... 48
Get started with pro-level music production
with our guide to the essential JACK.
PHP virtual dev-box .............52
Build the ideal PHP dev environment in a
portable VirtualBox VM and get building.
Coding Academy
Explore Journald.............. 68
The new Systemd logging system explained
and explored in a true tale of disaster!
Cython ................................... 84
Meet Cython, a Python-to-C compiler. Philip
Herron will be your loyal guide on how to make
your code 12 times faster, and all he asks in
return is that you read to the end...
Git started ............................. 88
Jonni Bidwell has always fancied a bit of
creative literature and what better place to
practice than in the safe, nurturing
environment of Git. Linus would be proud.
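If you can’t wait until page 88, the first steps look something like this (run it in a scratch directory; the file name and author identity are invented for the example):

```shell
# Start a repository, record one file and inspect the history.
git init -q novel && cd novel
git config user.name 'Budding Author'        # placeholder identity
git config user.email '[email protected]'
echo 'It was a dark and stormy night.' > chapter1.txt
git add chapter1.txt
git commit -q -m 'First draft'
git log --oneline    # one line per commit: abbreviated hash plus message
```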
Regulars at a glance
News............................. 6
The Internet is on fire as the Shellshock bug rages through servers, but not
Subscriptions ...........32
Bundle! Is something that was screamed around Midland schools but now it’s our subs team saying it.
Back issues ...............66
Buy one now or we’ll force Jonni to write more PHP code! No collection is complete without LXF183.
Mailserver................... 11
More letters and questions answered by what’s left of the LXF team.
Sysadmin...................56
Dr Chris also goes all creative writing on us and looks at how WINE plays its part in the commercial world and the options you have.
Next month ...............98
You can’t get rid of us that easily! Distro secrets will be revealed, robots built, NAS drives created and more.
User groups................15
Les Pounder goes all Bake Off exploring JAMs, maker events and actually this is about the Pi? Oops.
Network monitoring
Wireshark.......................... 70
You always fancied a bit of deep packet
inspection, time to dip into some TCP.
Core skills
Awk .....................................74
The best skills are core skills, learn how Awk
can manipulate text files quickly and easily
with powerful terminal commands.
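As a taste of what’s in store, here’s the classic one-liner style the tutorial builds up to (data.txt is an invented example file):

```shell
# Sum the second column of a whitespace-separated file with one command.
printf 'alpha 3\nbeta 4\n' > data.txt
awk '{ total += $2 } END { print total }' data.txt    # prints 7
```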
MySQL queries ................76
It’s time to link up your Nginx web server
with its own MySQL database for more PHP
fun than you can handle.
HotPicks ....................60
Apps so hot they make The Hunger Games trilogy look like ice-cold teen trash literature. Included this month are QMPlay2, Rosa ImageWriter, PDFSaM, Rodent Core, KXStitch, KEncFS, Blobby Volley, Caesaria, I-Nex and more.
Roundup ....................26
Discover what’s the best FOSS to store and enjoy your music collections, from library features to straight listening enjoyment.
Jenkins ..............................80
Jolyon Brown explains how to roll out Docker containers in a working environment with Jenkins and your sysadmin skills.
Our subscriptions team is waiting for your call.
THIS ISSUE: Shellshock, Ubuntu Touch, Debian Jessie Beta 2
Shellshock fallout
After a vulnerability in Bash was discovered, the race was on to patch it out of existence.
At the end of September a
number of security bugs, given
the collective name Shellshock,
were found affecting the Unix Bash
shell. Because of the widespread use of
the shell, particularly in web servers, it
quickly became apparent that
Shellshock could prove to be more of a
threat than the much publicised
Heartbleed vulnerability that had given
us all a scare at the start of 2014.
The Shellshock vulnerability was
found in the Bash software package, a
powerful command line shell that many
Linux users may be familiar with.
By exploiting the vulnerability, malicious
hackers are able to remotely run
commands via Bash. Perhaps worst of
all, exploiting the Shellshock bug takes
very little specialist knowledge, so far
more people may begin using it.
Coupled with the fact that many web
servers use Bash, the potential for
damage is worrying.
Since the bug was revealed, a
number of Shellshock exploits have
been seen in the wild. These vary in
severity, from opening and closing a
DVD drive on a remote computer, to
creating and running a self-replicating
worm program which can go on to
infect other machines.
The website http://unix. contained a good
example of how the Shellshock
vulnerability might be used. It occurs
because Bash, in its unpatched state at
least, stores exported function
definitions as environment variables.
When a new instance of Bash launches,
it searches for these environment
variables and interprets them as
function definitions. Once the function
definition has been terminated, it will
interpret arbitrary commands after this,
due to a lack of constraint in
determining which function-like strings
are acceptable during Bash startup.
Since the vulnerability was detected,
companies have been scrambling to
ensure their services – and customers
– are no longer vulnerable to the bug, as
evidence mounted that criminals and
hackers were already using the exploit
to mount attacks across the globe.
A report by Trend Micro explained
one such attack: “Trend Micro Deep
Discovery was able to detect this
attempt and found that attackers were
trying to see if several IPs owned by the
institution were vulnerable to a
vulnerability, CVE-2014-6271”.
Google and Amazon were quick to
explain the steps each had taken to
patch the vulnerability on their
respective servers, and while Apple
initially downplayed the danger of the
bug to its customers, it nevertheless
released a security patch for the
Mavericks, Lion and Mountain Lion
versions of its operating system, OS X.
Shellshock isn’t
a vulnerability
that should be
ignored, but
patches are
rapidly arriving
for devices that
allow for updates.
“It is not a critical failure
in Bash. It is a critical
failure in people…”
The fightback
Red Hat published a set of commands
that people can run to help confirm if a
system is patched against Shellshock.
Depending on the results returned by
running the script, you should be able
to ascertain what patch you have, and if
any further patches are required. You
can view the steps at https://access.
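Red Hat’s script aside, the one-liner that did the rounds at the time gives the flavour of the bug. This is the widely circulated community test for CVE-2014-6271, not Red Hat’s own commands:

```shell
# On an unpatched Bash, the commands smuggled in after the exported
# "function definition" run as soon as the child shell starts up;
# a patched Bash ignores the bogus definition and prints only the echo.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

If the output includes the word “vulnerable”, that shell needs patching.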
Whilst efforts to contain the fallout
of the Shellshock vulnerability were
being undertaken by some of the
biggest technology firms in the world,
others were worried about how the
potential crisis was being seized upon
by critics of Unix systems, Bash and
open source projects, especially after
the much publicised debacle of the
Heartbleed bug.
In a passionate defence of Bash,
GNU’s godfather Richard Stallman and
Andrew Auernheimer wrote on
LiveJournal how people attacking Bash
for the vulnerability are missing the real
target. As the post put it: “Shellshock is not a
critical failure in Bash. It is a critical
failure in thousands of people who knew
a tool so useful that they decided to
deploy it far beyond its scope. Everyone
knew in the 1990s that when you
execute a UNIX command with
untrusted input, you clear away the
environment variables first. Anyone that
has untrusted input embedded within a
shell script does not know what they
are doing…”
The Firefox OS
Matchstick could do the job better running Firefox OS.
Google’s Chromecast device has been a
pretty big success, thanks both to a low
price and an easy to use, plug and play
interface. Not everyone is pleased, however, and
despite it being built on the Linux-based Chrome
OS, many are finding it too closed for comfort.
Step forward, then, the Matchstick, which has
blown past its Kickstarter funding goal of
$100,000 in just one day. Like the Chromecast it
plugs into a HDMI port and can stream media,
connect with other wireless devices and play nice
with online services, such as Netflix. It should also
give the Chromecast a run for its money price-wise, with a handy $25 price tag.
What sets the Matchstick apart from
Chromecast is that it’s the first streaming device
to run on Mozilla’s open source Firefox OS.
According to the makers of Matchstick, they went
with Firefox OS because of its adaptable nature.
They hope this will lead to lower production costs,
and without the need for app approval, it may
encourage a huge library of apps to appear.
The hardware included in the Matchstick
features a dual-core Rockchip 3066 processor,
4GB of on-board storage and 1GB of DDR3 RAM.
It also includes an 802.11 b/g/n wireless receiver
for connecting to wireless networks. The creators
want the hardware to be truly open, so reference
designs and hardware schematics are available
for download from,
so if you have the right knowhow, you’ll be able to
build your own Matchstick. As the creators say
“Our goal was to make a streaming stick that was
low-cost, high design, and adaptable without the
walled garden for developers that tends to slow
progress … It’s what Chromecast wanted to be.”
Matchstick is an open source equivalent to
Chromecast for cheap home streaming.
Ubuntu Touch
Canonical’s mobile operating system reaches RTM status.
Debian Installer Jessie Beta 2 is
out now and ready for download,
letting you test the latest Debian 8
version. While previous versions of
Debian 8 have used the Xfce desktop
environment by default, the new beta
sees the return of Gnome. You can
download the beta from, though
remember that this is a beta version,
so there will be bugs.
Backblaze has released its
regular state of the hard drives
report, and it’ll make uncomfortable
reading for some HDD manufacturers
– and the people who use their drives.
It’s bad news for Seagate in particular,
with the failure rate of its 3TB drives
jumping from 9% to 15% compared to
January 2014. Western Digital’s 3TB
drives didn’t fare much better, with the
failure rates jumping from 4% to 7%.
Backblaze knows how important
reliable hard drives are.
We might have given up hope of seeing
the Ubuntu Edge smartphone after it
failed to reach its $32m Kickstarter goal.
However, that doesn’t mean Canonical’s vision of
Ubuntu on smartphones is dead.
Ubuntu Touch, the mobile version of its distro,
has just reached RTM (Release to Manufacturing)
status, which means its bugs have been fixed and
it’s ready to go. If you’re that way inclined you can
download the RTM version of Ubuntu Touch right
now from and install it on your Android phone.
Of course this does come with a number of
potential dangers, so if you’d rather wait for a
phone to come along with Ubuntu Touch pre-
installed and officially supported, you won’t have
to wait too long either.
The Meizu MX4 should be released around
December, and it will run Ubuntu Touch right out
of the box. In fact, there will be two different
versions: one running Android, the other being a
slightly more powerful ‘Pro’ version of the
smartphone with an Exynos 5430 processor, 4GB
of RAM, and a 2560x1536 resolution screen
running Ubuntu Touch.
We’re still not too sure how much Ubuntu
phones will cost, though earlier this year
Canonical’s Mark Shuttleworth said that they
would “come out in the mid-higher edge, so $200
to $400”, and if anyone is in a position to give an
educated guess, it would be Shuttleworth.
Earlier in the year the Linux
Foundation began issuing its own
Certified SysAdmin (LFCS) and
Certified Engineer (LFCE)
certifications, and along with its open
online course, all have been roaring
successes. As Dan Brown, the Linux
Foundation’s PR & Marketing
Manager, recently disclosed, “The
Intro to Linux Massive Open Online
Course (MOOC), which can help with
basic prep for the LF Certified
SysAdmin Exam, has had over
270,000 registrations from 100 plus
countries.” The certified exams are
pretty tough, though, with a pass rate
of below 60%. However, 80% of
people who’ve taken the exams would
recommend that a friend take the
certification as well.
If anyone ever says that Linux
has no games, you can now
officially laugh in their face, as Steam
has just passed the milestone of
having over 700 games available for
Linux. While things might have gone a
bit quiet on the SteamOS side of
things (the Linux distro specially
made by Steam developer Valve),
gaming on Linux continues to go from
strength to strength.
Civility is
free too
Michael Meeks
Lennart, of Systemd fame, recently published a
blog saying that open source communities are
not all rose-gardens of tolerance, gentleness
and happiness. Indeed his thesis is that some
communities are quite sick in that regard.
That this is news is quite interesting in itself.
What changed to make people interested in this
topic? The Linux Kernel community has always
been one where good coding seems to happen
despite occasional foul-mouthed tirades from
Linus. Ever since I’ve been involved with free
software people have ‘flamed’ each other quite
immoderately with a surprising level of bile.
Indeed, as the internet gains widespread
traction, hacker culture’s terms like ‘troll’ seem
to have disseminated into wider society. With
an ever larger supply of semi-anonymous
people, the set of those willing to heap abuse
grows, along with only weak social structures to
help suppress it.
Douse the flames
Of course, Lennart’s concerns are around the
bile generated about Systemd; a topic whose
controversial nature will inevitably fade with
time as all major distros switch to using it.
Naturally there are those who agitate for
extremely broad codes of conduct backed by
coercive rules to exclude people they don’t like.
I’m a sceptic of that approach. Instead with
clear, positive leadership by example it seems
possible to keep discussion polite on mailing
lists. Partly I hate rules because I want to
include myself: at the least, other awkward,
under-socialised poor communicators can be
some of the brightest and best developers.
In LibreOffice-land we also help to de-louse
communication problems with weekly,
high-bandwidth phone calls to try to make the
project a friendlier place to be; why not get
involved and join one?
Michael is a pseudo-engineer, semi-colon lover,
SUSE LibreOffice hacker and amateur pundit.
Hitting the mirrors
What’s behind the free software sofa?
GhostBSD 4.0: The major new
version of the FreeBSD 10-based
operating system will be available
to download by the time you read
this. It’s the first release of the 4.x
branch and brings with it Mate as
the default desktop environment,
Clang as the default compiler,
pkg(7) as the default package
management utility and GCC no
longer installed by default. You can
either burn the ISO to a DVD or
use a USB stick, and you can go
ahead and grab the image from
The latest major version of
GhostBSD is based on FreeBSD 10.
Chakra GNU/Linux
2014.09 is a Linux distro
for desktop PCs that
uses the Pacman
package manager. The KDE 4.14
desktop comes as default, and in the
release announcement more
changes have been detailed. “The
Chakra team is happy to announce
the first release of the Chakra Euler
series, which will follow the 4.14 KDE
releases. A noticeable change in this
release is the major face-lift of
Kapudan, which now gives the option
to users to enable the [extra]
repository during first boot so they
can easily install the most popular
GTK+-based applications”. You can
download the distribution from
The latest stable release
of OpenELEC, a distro
designed for media
centre PCs, has hit the
mirrors, ready for
download. Changes in
this version include an
update to Linux kernel
3.16, updated Nvidia graphics driver support for the 64-bit version of the distro,
and an update to XBMC Gotham 13.2. Download OpenELEC 4.2
The latest stable Linux kernel, 3.17,
is now available. It includes a
number of hardware and stability
updates, including Nouveau driver
improvements for Nvidia’s latest
GPUs, Xbox One controller support
(though without vibration support
at the moment), and support for
Rockchip RK3288 and AllWinner
A23 ARM SoCs. You can find out
more at
Not early, but Linus is happy.
Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath, BA1 1UA or [email protected]
In the dark!
I have recently gone from a PC
running Windows 7 to a
MacBook Pro, which I spilled
milk on, back to an Acer Aspire
running Windows 7 Starter.
I upgraded the 7 Starter to
Windows 7 Home Premium only
to have Microsoft tell me it could
not be validated although it was
indeed a legal copy of the OS
which was supposed to have an
‘Anytime upgrade.’
I decided to walk away from
Windows on the netbook and I
intended to do a dual-boot and
went with Ubuntu. Not fully
realising what I was doing, I
made a mistake and wrote over
the Windows 7 Home Premium
leaving myself only the Linux
Ubuntu. What a surprise!
I spent maybe five hours
setting up the Netbook. It went
well, except I found that Linux
seems to like HP printer drivers
more than Canon MX430
drivers. But persistence paid off
and I finally located them and
now my printer is set up.
I’m age 74 and find the switch
from Windows 7 to Linux quite
surprising. So far the printer
drivers have been the only
challenge, and that was minimal.
The netbook surprisingly boots
up, from power on until I’m
ready to use it in a mere 35
seconds; Windows 7 took at
least four to five minutes. Since
Linux Format magazine has used
Letter of the month
I would like to share with Linux readers a
challenging matter about having ‘one
powerful Linux distribution instead of the
many that we have now’. I feel we need
one united and properly supported Linux
distribution, like MacOS or MS Windows.
There are a lot of Linux distributions and over
the past 16 years only five main distributions
have been properly supported while the others
are dying slowly., for instance,
lists over 270 Linux distros. Imagine if we had
one main Linux distribution, properly supported
from big companies for driver updates and
security issues, I’m pretty sure that even
notebook and PC manufacturers would use that
one distribution instead of Windows. Imagine
gathering all open source members behind one
united, powerful Linux distribution. I hope we’ll see that
soon in the near future.
Mohee Jarada
Neil says: I certainly appreciate your sentiment
and there is truth in the “divide and rule” phrase
Caesar made famous. But in terms of drivers this
is already the case, the Linux kernel is used
across all distros and encompasses the drivers
already. While many major software and services
are developed in a centralised way. So in many
ways what you want is already the case.
the headline Escape Windows
recently I just wanted to let you
and your readers know it is
pretty easy to escape if one has
the desire.
Linux has a myriad range of distros; dare we
say it’s time to have One (to rule them all)?
More widely I think the breadth of distros
enables multiple flavours to be developed for
specific tasks: Ubuntu/Mint for desktops, Red Hat/
CentOS for servers on a macro level, but delving
down into things like Puppy and Uber Student
they’re all targeting specific needs.
I also think the idea of taking on Microsoft is an
outmoded one. Depending on who you talk to
Linux has already won: in the server and enterprise
space, on mobile with Android and on embedded
appliances. It’s just the pesky consumer desktop/
laptop market that remains stuck to the teat.
Now for a question: Is it possible
to put Linux Mint on the same
drive along with Ubuntu? I ask
because I want to test both
before finally settling on one.
Also, please could you suggest
some teaching and learning
sources that an old man can
turn to?
John Colyer
[email protected]
have experienced the same
issue. There’s loads of stuff out
there if you go googling on ‘wget
resume broken download’, minus
the single quotes, of course.
D Schmittz
Neil says: Thanks for the update,
I’m sure there’s a host of people
that will find it helpful to know that
they can use the -c option to
continue stalled downloads.
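For anyone wanting to try it, the shape of the command is simply this (the URL here is a placeholder for whatever download stalled, not a real address):

```shell
# -c (--continue) tells wget to look at any partial file already on disk
# and request only the remaining bytes, rather than starting from scratch.
wget -c
```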
Open Print
OwnCloud is a great idea, but it’ll make you realise the responsibility involved in protecting your data.
Neil says: Another escapee,
welcome to the world of software
freedom! I think your experience is
a mirror of most people’s – Linux
tends to just work these days, bar
the odd driver issue, which
everyone expects even with
Microsoft systems. To answer your
question you can dual-boot Linux
OSes and Linux with Windows.
You just need to ensure you resize
your existing partition with
Gparted or similar to enable room
for the new OS. Linux to Linux
dual-booting is usually painless
but if you reinstall Windows it’ll
wipe out Grub, which will require
you to recreate the bootloader
with something like Rescatux
(more on that in LXF192). For
resources Neil Bothwick would say
read the man pages, but you can
try this starting point: http://bit.
I’m pleased to see Owncloud 7
covered in LXF189 and the note
on page 71 that you will explore
the new features in LXF190. I
have been using Owncloud at
home within my home network
for a little while and really like it,
but I have been nervous about
forwarding the ports to allow
access externally in case I have
not configured Apache correctly
to be secure. Would it be
possible for you run a tutorial
showing how to set up Owncloud
securely, ie covering what needs
to be done correctly with Apache
or Nginx as well as in Owncloud
itself, so that anyone running a
Linux box at home with a router
can use it?
David Whiting
Jonni says: Hope you enjoyed the
OwnCloud followup, David. It’s
good to be prudent about running
internet-facing services, but there
isn’t an easy answer to your query.
There might be unknown
vulnerabilities in any of Apache,
PHP or OwnCloud, so no matter
how much you lock ‘em down an
attacker may be able to get the
same access to your machine as
the Apache process has. Running
Apache in a chroot jail is a good
start, but no silver bullet. Here’s a
guide for Ubuntu: https://wiki. A hip
alternative is to use a Docker
image. Try https://registry.hub., there are plenty
available which contain a known-good Apache/PHP setup and will
auto-update as new versions of
OwnCloud are released.
In reference to the letter from
Peter Lonsdale in LXF189:
reading between the lines of
what Peter wrote, if this is a
broken download issue, then
using command-line wget with
the -c parameter will resume a
broken download from where
it left off, without having to
download the entire file
again. It works; I too have a
similar download speed and
Is there a Linux Driver for an
Epson WF-7610DWF A3 Printer?
Neil says: I know Neil Bothwick
often gets queries about printer
drivers on Linux. It’s one of those
ridiculous situations where all
manufacturers have to do is
support the open standard
Postscript and everyone will be
happy. As it is, the usual route is
via but
Epson maintains its own at http://, so search for the Epson
WF-7610 and you’ll find at least a
couple, which, while they might not
offer full scanning features, should
offer printing ones.
Erik the Red deliberately gave
the land he discovered the more
appealing name ‘Greenland’ than
‘Iceland’ (where he’d come from)
reasoning that ‘People would be
attracted to go there if it had a
favourable name’. But look at the
Linux product names:
The terrible ones KDE,
Gnome, Gimp, Brasero, Ekiga,
Amarok, Glade and Quanta Plus.
The better ones Writer,
Scribus, Kmail and Gnumeric.
The even better OpenOffice,
AbiWord, Firefox, Thunderbird
and Inkscape.
Now take a guess which of
these, and other names actually
mean something to the Average
Joe?! And I’ve not mentioned
Gnus, KNode, Pan, Liferea and
Akregator. I don’t have to explain
why most people like and
understand what apps like MS
Office, Excel, Outlook and
AutoCAD do. To the point: some
great programs and utilities, but
the names! It’s time that
someone took the initiative and
changed this practice, and as I
see it that could be Linux Format,
as the best Linux mag around.
Edvard Koren, Celje Slovenia
Installing Arch will give you a great insight into how Linux is put together.
Neil says: You’re right, Linux
Format is the best Linux magazine
around, I say entirely objectively.
Let’s face it Linux is built by
coders, developers, engineers and
enthusiasts not marketing and PR
people and let’s thank the digital
gods for that, as an OS built by
marketing and PR people would
be, well, I shudder to think.
The side effect of that is perhaps
project names have been
somewhat more TLA based than
promotionally based. But on the
whole I don’t think it matters.
Who actually needs to know what
KDE is other than developers and
distro builders? You're right that it highlights the inward-looking nature of Linux development, but user
engagement is with the final
distros, which are doing a fine job
of being more welcoming.
Nuts and bolts
One of the things I’ve loved
about Linux is its modularity and
ease of configurability through
the editing of text files and the
freely available documentation
that has enabled me to have fine-grained control over various
aspects. I must admit to being
concerned over the last two
years at the push toward tightly
integrated desktops and
automated configurations,
which, while they make our lives
easier, rob us of the freedom of
choice and often frustratingly
begin to ‘think on our behalf’ and
make our decisions for us.
The scary trend toward the
‘Windows Registry’ approach
worries me too!
Now I realise that many
people want their systems to
‘just work’, but I’d love to see if
it’s still possible to do it the
‘good old linux way’ of taking a
bunch of applications that do
their job really well and fasten
them together in a way that
suits the user’s preferences.
So, here’s a tutorial
suggestion that’s been tickling
me for a while. How about taking
us from a vanilla install of a
distro (with no X, desktop or
Window Manager) and talk us
through installing X; the
difference between a desktop
(Gnome, KDE etc) and a window
manager (E17, metacity, Compiz
etc); and then take us through
installing a window manager
through options around panels,
menus, shortcut keys and
applications, so that we end up
with a customised desktop
where we’ve made the choices.
We’d also have to look at the
login options (GDM etc) that are
available and their pros and
cons. It would be great to know
which are the config files X
opens as it starts up and how
the different components of a
working desktop are loaded up.
How do the different libraries
(GTK, Gnome, Qt etc) come into
play? How do we handle alerts
and notifications?
Theo Groeneveld, S Africa
Neil says: What a wonderful idea.
In certain aspects we have
touched upon areas of this, such
as our guide to Gentoo in LXF182
and our Arch guide in LXF188. By
their very nature you have to start
from scratch and ‘bolt’ on the bits
you need, including the desktop
and window managers of your choice. But aspects such as X are often used in their default form, as they are automatically installed. There's a plan to cover 'cutting-edge' Linux features in a future issue, but I'll give your approach some thought.
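To give a flavour of the glue involved in such a bottom-up build, here's a hedged sketch of a minimal ~/.xinitrc, the file startx reads to launch your chosen components. The panel and applet names are illustrative examples, not recommendations:

```shell
# ~/.xinitrc -- read by startx; each line starts one piece of the desktop.
# (Illustrative sketch: substitute whichever panel and window manager
# you actually installed.)
xrdb -merge ~/.Xresources    # load X resource settings (fonts, colours)
tint2 &                      # a standalone panel, run in the background
nm-applet &                  # network applet for the notification area
exec openbox                 # the window manager becomes the session's
                             # final process; quitting it ends X
```

With a display manager such as GDM in place, the equivalent session file would instead be selected from the login screen.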
I’m a long-time reader with a few
questions. How do I tell if my
machine is 32- or 64-bit? It’s
about 7 years old. How do I get
rid of XP? I have been using Mint
16 also. Can I replace the BIOS
with UEFI and should I do so
with a 32 bit or 64 bit machine?
Neil says: So many questions!
A machine is 32- or 64-bit based
on its processor. Within Linux you
can open a terminal and type grep flags /proc/cpuinfo. This returns a
bundle of flags that indicate the
CPU’s capabilities such as:
flags : fpu vme de ... tm pbe nx
lm constant_tsc pni ...
In the middle of that lot will be
one of three (really two) options
lm, rm or pm. Long mode (lm)
indicates a 64-bit processor, Real
mode (rm) is a 16-bit processor
and Protected mode (pm) is a
32-bit one. You can also type less /proc/cpuinfo and attempt to
search for your processor details
that way. When you say “get rid
of” if you’re dual-booting with a
Grub bootloader you could
remove the partition using
Gparted, or you could just install a
Linux OS over everything on the
drive, though obviously back up
everything that you want to
keep beforehand.
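Going back to the 32-/64-bit question, the flag check can be wrapped in a tiny test. This is just a sketch of the approach, assuming a Linux system that exposes /proc/cpuinfo:

```shell
# Report whether the CPU advertises 64-bit "long mode" in its flags.
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit capable (lm flag present)"
else
    echo "32-bit only (no lm flag)"
fi
```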
As for the BIOS/UEFI this is
something your PC comes with
installed as its firmware. It'll either have a BIOS or UEFI; you're not able to 'upgrade' from one to the other. There can be a
level of confusion as UEFI
systems do offer a BIOS
compatibility mode, which looks
identical to an old BIOS. In this
case you’re able to switch back to
UEFI mode. Hope that helps!
Kubuntu tip
Why is it that the keyboard sometimes stops responding after a left-click in a browser when the mouse is on an empty part of the screen? That's not my main question, though: why does my Ubuntu 13.04 say Kubuntu on startup and shutdown when it has the Unity desktop?
Ian Learmonth
Neil says: Yup, we’ve had that
keyboard issue on a few
occasions. We assume it’s
something pulling focus. As for
your Kubuntu issue we think this
must be the Plymouth
splashscreen that Ubuntu uses
for the Kubuntu image. It’s usually
because you’ve upgraded to try
Kubuntu at some stage. To refresh
the Plymouth configuration try
sudo update-alternatives --config default.plymouth
sudo update-initramfs -u
and that should fix it. LXF
Write to us
Do you have a burning Linux-related issue that you want to
discuss? Want to speculate about
how we can read your minds?
Then write to us at Linux Format,
Future Publishing, Quay House,
The Ambury, Bath, BA1 1UA or
alternatively send an email to:
[email protected]
December 2014 LXF191 13
Linux user groups
United Linux!
The intrepid Les Pounder brings you the latest community and LUG news.
Find and join a LUG
Blackpool Makerspace Meet every Saturday,
10am to 2pm. At PC Recycler, 29 Ripon Road FY1 4DY.
Bristol and Bath LUG Meet on the 4th
Saturday of each month at the Knights Templar (near
Temple Meads Station) at 12:30pm until 4pm.
Edinburgh LUG Meet on the first Thursday of
the month at the Southsider pub, West Richmond St, Edinburgh.
Hull LUG Meet at 8pm in Hartleys Bar, Newland
Ave, 1st Tuesday every month.
Lincoln LUG Meet on the third Wednesday of
the month at 7pm, Lincoln Bowl, Washingborough
Road, Lincoln, LN4 1EF.
Liverpool LUG Meet on the first Wednesday of
the month from 7pm onwards at the Liverpool Social
Centre on Bold Street, Liverpool.
Manchester Hackspace Open night every
Wednesday at 42 Edge St, in the Northern Quarter of Manchester.
Surrey & Hampshire Hackspace Meet
weekly each Thursday from 6:30pm at Games Galaxy
in Farnborough.
Tyneside LUG Meet from 12pm, first Saturday
of the month at the Discovery Museum, Blandford
Square, Newcastle.
New jam in the UK capital
Les Pounder says this is jam hot in the big bad city.
Raspberry Jams keep spreading across the world, from Dublin to Dubai, and the latest addition is in London at the Covent Garden Dragon Hall Trust. This event was started by Frank Thomas-Hockey, a parent and passionate Raspberry Pi enthusiast.
We asked Frank what makes the Raspberry Pi so special: "It's a great way to see the versatility of Linux in action," says Thomas-Hockey. "I feel that the Raspberry Pi is the best value and most versatile little PC in the world, enabling anyone to make the step into hardware hacking and coding via an affordable means. The Raspberry Pi also has a remarkable community underpinning all of the good work done by the Foundation."
"From Sonic Pi to Minecraft and then to small robot armies, the Pi is unlimited in potential, requiring an inquisitive mind and inventive spirit. I'm excited to be joining Alan O'Donohoe's jamming community and this will be the first event, so let's make the first event at Dragon Hall one of many! If you live in London and are interested in what you can do with the Pi … bring one along and make it a fun day for all the family."
Frank has done exceptionally well to start a new jam in the UK capital. Finding a central and easily commutable location in London with enough space isn't an easy job, so if you live in the area and have a Raspberry Pi project to show off or want to know what to do with your Pi, please support this worthy venture.
The Covent Garden Raspberry Jam will take place on November 29 at 1pm. You can find out more details via the Eventbrite site: CoventGardenRaspJam LXF
Raspberry Jams cater for all ages. If you're stuck you can always ask the kids for help!
Community events news
Blackpool Raspberry Jam
Another Jam has sprung up in
the UK, this time in Blackpool.
This Raspberry Jam meets on
the 3rd Saturday of the month at
Palatine Library from 1pm-4pm.
At the first event, which took
place in September, there were
demos from Simon Walters
(Scratch GPIO5) and Tim Gibbon
(Minecraft) and a talk from Pete
Lomas from the Raspberry Pi
Foundation. Tickets are free and
you can find out what’s going on
via the jam’s official website.
/dev/winter
Following on from the successful /dev/summer, the organisers are returning for another seasonal one-day software development conference. /dev/winter is your chance to learn the latest devops skills via workshops and talks. Devops is a constantly
changing field that involves many
new and rapidly evolving
technologies, such as NoSQL,
Node.js and cloud computing.
Hosted in Cambridge on January
24 2015 this event has limited
places so please follow @dev_
cycles on Twitter for the latest
ticket information.
E&C Mini Makerfaire
The popular Elephant & Castle
Mini Makerfaire will take place on
15 November at the London
College of Communication this
year. Makerfaires are massive
show and tells, with projects
from around the world rubbing
shoulders and swapping ideas.
This makerfaire has played host
to many great projects, but also
has a great family feel to it, so
take the kids for a fun day of arts,
crafts, robotics and electronics.
All the latest software and hardware reviewed and rated by our experts
LG 34UM95
Welcome to the wonderful world of extra-wide widescreen. Jeremy Laird
sits back and enjoys the extra desktop real estate he now owns.
In brief...
An ultra-wide,
ultra-HD monitor
that provides tons
of desktop space
and a great game
experience, but
it's far from
perfect and at this
price you might
want to see how
the market
Size: 34-inch
Type: IPS
Refresh: 60Hz
Response: 5ms
In the days of old we saw 21:9 displays limited to 29 inches and 1,080 vertical pixels, but this latest generation ups the ante to 34 inches. Immediately, this helps to mitigate the old 29-inch
form factor’s visual shallowness. The LG
is so big your initial thought is no longer
“crikey, this thing is like looking through
a letterbox”.
Then there’s the resolution. We’re
now talking 3,440 x 1,440 pixels. Yep,
the same vertical pixel count as the
popular 27-inch segment and its 2,560
x 1,440 grid. Instantly, our previous
objection melts away. Okay, we’d prefer
it was even better. 1,600 vertical pixels
would be nice, but the vertical
resolution is certainly no longer an
instant turn off. Instead, we reckon
you’ll be massively turned on by the
sheer visual spectacle.
In games, you don’t actually look
directly at the edges of the screen all
that much, which makes it sound like
they’re redundant. But they do serve a
purpose, as proved by this widescreen
LG as it gives you a much more life-like
filling of your peripheral vision than a
normal monitor. Suddenly, every other
screen you’ve used looks like a
miserable little window into your
gaming world, whereas this one truly
immerses you in the action. It’s one of
those things that’s almost impossible to
appreciate until you’ve tried it.
Back to black
Of course, there are issues with the 21:9
aspect ratio. Standard HDTV content is
16:9, which means you end up with
black bars on either side. In practice,
watching 16:9 content turns the LG into
something akin to a 27-inch monitor. To
make matters worse, a lot of streaming
web content that contains 21:9 aspect
video is actually hard
coded in 16:9 with black
bars inserted into the
video stream.
As for the LG
34UM95’s raw image
quality, we’re not
absolutely blown
away, to be honest.
The specs say it is a
native 8-bit panel
with dithering to mimic 10-bit
in terms of colour depth. But
subjectively, its look is somewhat
redolent of the cheaper 6-bit IPS
panels on the market.
At default settings the objective
metrics show good details in the white
scales and perhaps a hint of detail loss
in the black scales, but nothing too ugly.
The viewing angles, meanwhile, are
absolutely fantastic and colour
gradients basically look great.
Less pleasing is the cheapo tilt-only
stand. Yes, you can dismantle it and
rebuild choosing one of two heights.
But it essentially offers very little
adjustment, which is disappointing at
this really rather adventurous price
point. As it happens, we suspect the
price is partly down to the inclusion of a
Thunderbolt interface, allowing for
compatibility with the latest Mac Pros.
Meanwhile, that in turn might explain
the Apple-esque tilt-only stand. It’s also
worth noting that the external power
brick is huge.
Another minor issue involves display
interface compatibility. This is a fast-evolving subject, what with various
iterations of DisplayPort and HDMI
appearing soon or mooted for the
future. But as things stand with HDMI
1.3, you can’t drive this display at 60Hz.
For a gorgeous desktop and amazing gaming, choose 21:9.
The bandwidth available only allows for 50Hz. In practice this isn't too awful,
although we did notice some slightly
odd judders. Instead, it’s DisplayPort
that offers the full 60Hz experience. It’s
not as nice as 120Hz-plus, but you
simply can’t have everything you want
in a monitor right now. LXF
LG 34UM95
Developer: LG
Price: £770
Ease of use
Value for money
Ultra widescreen is intoxicating and
super-cool. But wait a few months for
the price to hit £500 later this year.
Rating 7/10
Reviews Android Wear
Motorola Moto 360
The first Android Wear watch from Motorola certainly has
the looks... But Matt Swider wonders if it’s really of any use.
In brief...
Motorola’s first
ever bit of Android
technology – and
it does a lot more
than just telling
the time...!
Moto 360 proves that
smartwatches can be as
fashionable as Google Now
software is functional, making it the first
Android Wear watch worth strapping to
your wrist. Its circular watch face takes
cues from stylish designer wristwatches
with analog tickers, not square-shaped
smartwatches like the very
computerized-looking Pebble Steel,
Samsung Gear Live and LG G Watch.
The watch's charm is shortened by
its lack of longevity, but Motorola clearly
shaped a winner that's more ambitious
looking than the overly boxy Apple
Watch. And its price point is equally
impressive. It's on sale right now for
$249 in the US and for £199 in the UK.
There are few sharp-edged
downsides to the Moto 360, and none
of them can be found on its round, 1.56-inch LCD display with its 205ppi, 320 x 290 resolution. In fact,
Motorola's enterprising circular screen
is so attractive it instantly became the
antithesis of the smartwatch argument
when Google first announced Android
Wear in March 2014.
The screen always remains on and
the Moto 360 uses an ambient light
sensor that automatically adjusts the
LCD brightness. It's especially
convenient when you need a bright
screen in sunny conditions, or if you
want to reduce battery consumption
in dark environments.
Inside, an ailing Texas Instruments
OMAP 3 processor makes this digital
smartwatch tick. There is occasional lag
Features at a glance
Many faces
Its stainless steel frame
will match the forthcoming
metal bands nicely.
The screen is ideal for
popping up notifications
trivial or vitally important.
Moto 360 has style and substance, just not battery life or a reliable processor.
when swiping through Android Wear
menus, almost as if you didn't apply
enough pressure on the touchscreen.
Even with its decent 320mAh battery,
the inefficient processor taxes its weak
battery life before it shuts down, which
is poor. Inside is 512MB of memory and 4GB of storage, like every other Wear device. There's no GPS, but there is a
heartrate sensor and pedometer.
Android Wear is the same as on
every other current device. It has a
familiar card-based interface lifted from
Google Now and Google Glass, and it
often slides contextual information onto
the screen in addition to text messages
and important email.
Sliding your finger left explores the
pop-ups a little more with little
touchscreen interaction needed.
Android Wear is designed to predict
what you want to know, so commute
times to places you've searched,
frequently visit and add to your
calendar appointments should
automatically slide into view.
There are now 44 featured apps that
are part of the Google Play Store's
Android Wear section. Glympse is
particularly useful on a watch because
it sends your location to contacts of
your choosing. Say "Okay Google, start
Glympse" and send them your real-time
GPS coordinates in an instant. Google
Maps is still one of the most useful
Android Wear apps. Asking Moto 360 to
"navigate to..." initiates turn-by-turn
directions on the watch while starting
the full route mapping on your phone.
Google Fit is here too. It uses the
watch's built-in pedometer and heart
rate monitor, counting up the metrics
on small-scale graphs and timelines.
Motorola promises future updates
including the ability to recognise when
you transition from running to cycling.
The Moto 360 is compatible with all
Android 4.3 Jelly Bean and later
devices, largely dictated by the need for
Bluetooth 4.0 LE support. Android 4.2
and earlier owners are out of luck, so
are iPhone owners. It’s mostly
waterproof, with an IP67 rating: 30 minutes in up to one metre of water.
The device has its own Qi wireless
charging dock, and it needs it, as it didn't
work with any other chargers... LXF
Motorola Moto 360
Developer: Motorola
Price: £199
Ease of use
Moto 360 has practical information
on the virtual dial, but terrible battery
life and a slow processor...
Rating 7/10
NAS Reviews
Thecus N4560
A mid-range smart NAS from Thecus has Neil Mohr considering whether he really needs a home server any more.
In brief...
A full-featured,
4-bay NAS drive
from Thecus that
offers interesting
HDMI output and
XBMC integration
based around an
Intel Atom SoC
and a browser-based interface
that anyone can
use. See similar
level devices such
as QNAP TS-421,
Synology DS-412.
CPU: Intel Atom
SoC CE5335
LAN: 1x Gigabit
Bays: 4x SATA
3.0, 2x USB 2.0
Ports: HDMI,
RAID: 0, 1, 5, 6,
10, JBOD
Why choose a NAS over a low-cost home or mini server?
It's something we're looking
at in LXF192, but with models like this
Thecus N4560 the level of functionality
you gain from an advanced NAS is
pretty stunning. This model, released
towards the end of 2013 but still
current, is of interest as it's based on an
Intel CE5335 dual-core Atom running at
1.6GHz dubbed the Intel EvanSport
platform. Older NAS models have been
a bit woolly in performance terms, but
the introduction of Intel in the market
should eliminate any bad performance
for this model.
The Thecus N4560 is a four-bay
unit, so we can run a full RAID5 or 6
with support for up to 4TB drives or
16TB in total. Using the Intel platform
has a number of advantages besides
the processing power, including the
Gigabit networking, media handling
capabilities, including HDMI and S/PDIF
out, a video controller, power
management, USB 3.0 support and
DDR3 memory support of which there's
2GB in this model.
Setting up the device is very
straightforward as it uses quick-mount
drive bays, which are standardised
across the Thecus range. The device will
automatically establish a suitable RAID
when it's run for the first time. Thecus
offers a Chromium-based quick-find
app for Linux (Mac and Windows users
have an Adobe Air tool) but this does
little more than locate the NAS on the
network and offer a link.
Features at a glance
Browser interface It's not as slick as some NAS interfaces, like Synology's, but it does the trick and is easy to use.
Open the bay doors Offering four drive bays helps lift the Thecus out of the NAS gutter with full RAID6 capabilities.
Never has a big, black box looked so big or, indeed, so damn black.
This matters little, as the star of the show is the web-based interface, ThecusOS 6.1. It's attempting to be as slick as the Synology interface but falls a little short. That being said, it's a fully realised windowed interface accessed through your browser and it works a treat. We'd also mention you can turn on SSH and access it via a terminal.
The list of capabilities is somewhat
too long to list right here but it's pretty
much everything you could ever
possibly want, from basic user/group quotas and access to Active Directory, iSCSI, NFS and LDAP support. Other key services include RAID encryption (hardware accelerated), UPS support (via USB), (S)FTP, UPnP, a printer server, a Plex/media server, a web-based photo server, iTunes support, a download manager including torrents, and a full backup service including Rsync and Apple Time Machine. It supports being
a web server with MySQL and directly
mounting ISO files.
Thecus also pushes its XBMC/Kodi
playing abilities via the HDMI output,
though this didn’t seem to want to play
ball for us and we’d question how
genuinely useful this would actually be.
That aside setting up a RAID is
delightfully easy, the interface might not
be visually as slick as a Synology device
but it’s functional. The inclusion of the
Atom processor should make this a
speedy device and we found it to be
competitive with straight read/write file
copy speeds of 96/78MB/s.
While the Thecus is certainly
capable both in terms of speed and
feature set, it’s not priced that well. At
this price for a diskless setup it’s more
expensive than the competition: the
likes of Synology, QNAP and Buffalo
offer similarly priced units with dual
LAN or populated with disks. We can’t
help but feel some of the additional
features offered are somewhat surplus
to requirements and perhaps push up
the price beyond what it needs to be –
and we’d also prefer a second LAN port
over the HDMI. That aside, if you can
snap this up at a more reasonable price
then it will be as competitive as
anything on the market. LXF
Thecus N4560
Manufacturer: Thecus
Price: £340
Ease of use
A solid, fully featured NAS that
offers all the software features
you want but is overpriced.
Rating 7/10
Reviews Distribution
Gentoo Live 2014
Although Gentoo seems incapable of mistake or mediocrity, Shashank
Sharma feels the attempt to improve upon perfection has backfired...
In brief...
The Gentoo Live
DVD is filled to the
brim with apps to
please all manners
of users. It isn't an
installable distro
and isn't really
designed to serve
as a newbie’s
introduction to
Gentoo, though.
Gentoo has the distinction of
splitting the community into
those who admire it and those
who fear it. This stands in stark contrast
to the community's stance on most
other distros, they either like it or don't.
The Gentoo Live DVD, despite the name
and heritage, is just another distro.
Unlike most other Live distros, this
cannot be installed onto the disk. This is
chiefly why any comparisons with its
namesake are moot. The DVD isn't
designed to entice more users into
using Gentoo. It isn't a test bed for
Gentoo technologies, hoping to ease
users into the Gentoo experience. The
Gentoo DVD is a product of the Gentoo
community with the Gentoo developers
assisting in the effort.
The DVD comes in two flavours: one works with both 32-bit x86 and 64-bit x86_64 architectures, while the livedvd-amd64-multilib-20140826 image is only for x86_64 systems. Despite being
significantly 'lighter' than the past
release, the DVD is still chock-full of
applications covering the best that the
Linux ecosystem has to offer.
Taking stock
The best thing about the DVD is that it
is all you need to demonstrate the
power of Linux to new and interested
users. In fact, it's not just applications,
the DVD also includes a variety of
desktop environments for a fully
rounded experience. It ships with Linux
kernel 3.15.6 and on the desktop front
Features at a glance
Persistence Mode
Choice is good
You can use persistent
mode with a partition on
the same USB drive that
you're booting from...
Our Gnome experience
was unpleasant, but the
screen serves to show the
number of apps on offer.
The 20140826 release, as if the lack of a more conventional or quotable name
wasn't an indication, is an expansive but vanilla Live distro and little else.
you get KDE, Gnome, XFCE, Fluxbox,
LXQT Desktop and i3 Desktop, which is
a tiling window manager primarily
aimed at developers or advanced users.
If you wish to use persistent storage while booting off the USB drive, you will need to create an ext partition to store the files. We used fdisk to create the partition and the mkfs.ext3 command to create the filesystem. When you next boot into Gentoo, select the image you wish to boot into, press F2 and type aufs=<device> on the kernel line. You'll have to do this every time you boot into Gentoo, or when you switch machines. When booting from the DVD, at the GRUB prompt, press the function keys for instructions on using the different boot options.
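The partition and filesystem steps can be rehearsed non-destructively on a loopback image file before touching a real USB stick. The device name in the comments is purely illustrative:

```shell
# Practise the persistence setup on a loopback file rather than real
# hardware; on the actual USB stick you'd run mkfs.ext3 on the spare
# partition instead (e.g. /dev/sdb2 -- an illustrative name only).
export PATH="$PATH:/sbin:/usr/sbin"      # mkfs tools often live here
dd if=/dev/zero of=persist.img bs=1M count=16 status=none
mkfs.ext3 -q -F persist.img              # same filesystem the article uses
# At the Gentoo boot menu you would then press F2 and append, e.g.:
#   aufs=/dev/sdb2
```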
The project maintains a list of all
packages included in the DVD for each
image on its website. The list is too vast
for us to include in this review. Suffice to
say that all your favourites and regularly
used tools are included, along with
several alternatives.
There's VLC, Amarok, Xine, and
more for your multimedia needs.
Internet apps such as web browsers,
email clients, IM and IRC clients, etc.
and Games, categorised into Arcade,
Logic, Boardgames and more,
outnumber almost all other types of
apps included in the DVD. The
enthusiasts will also appreciate Gimp,
Blender and various other tools to
channel your creativity.
Unlike the previous releases, Gentoo
20140826 doesn't offer a very smooth
user experience. Gnome crashed on us
every time we ran it and was tiresomely
slow, especially compared to KDE,
which seems to be the most polished of
all included desktop environments.
Also unlike the previous release, the
latest DVD doesn't have any diagnostic
or system recovery tools. We couldn't
even get Gparted to identify the local
disks and partitions.
In contrast to the older releases, the
project has chosen to limit the
functionality it offers and only really
aims to serve as a means to show off
Linux applications. If you have any
specific purpose to use a Live
environment, the latest Gentoo DVD is
probably not for you. LXF
Gentoo 20140826
Developer: Gentoo Foundation/community
Licence: Various open source licenses
Ease of use
Comes across as a pale shadow of
the 2012 release, with nothing of any
real value to offer...
Rating 7/10
Distribution Reviews
Peach OSi 14.04
Excuse the frown on Mayank Sharma’s face as he tastes yet
another fruit-flavoured Linux distro for the new Linux user.
In brief...
distro designed to
ease new users
into Linux. See
also: Zorin,
aming a distro after a fruit is a
dead giveaway about its
intended audience. A quick look
at its website confirms that Peach OSI
14.04 is designed to usher new users
into Linux. Almost on cue, the website
lists the same old advantages of Linux
over proprietary distros – more secure,
less vulnerable, faster on older
hardware and such. So it was with some
trepidation that we downloaded the
distro. However, five minutes with Peach
and it had found a place in our hearts
and on our hard disk.
Peach OSI 14.04 is based on the
latest Xubuntu 14.04 LTS release which
makes it functional on relatively older
hardware. The modified Xfce desktop
features a panel at the bottom that’s
loaded with application launchers. The
panel is quite literally bursting at the
seams with launchers for over 20
applications – everything from the
Firefox web browser to the KeePassX
password manager to the digikam
photo manager.
Look through its menu and you’ll
notice that the distro bundles almost
every popular Linux app you’ve ever
heard of, right off the bat. But the
developer hasn’t just packed in all the
apps they could lay their hands on. In
fact they have selected the apps
meticulously and listed the reasons for
bundling multiple apps for seemingly
the same function. For example, the
Midori web browser is bundled along
with Firefox, as the former is ideal for
creating web apps.
Features at a glance
Loaded with apps
For everyone
The distro ships with many
pre-installed apps sourced
from the KDE, Gnome and
Xfce desktops.
To make it usable for a
wide variety of workflows,
the distro includes four
application launchers.
Peach OSI packs in a lot of functionality but it comes at a price: the distro requires a mammoth 14.4GB of disk space to park itself.
Fuzzy logic
The distro’s website states that Peach
OSI includes almost 100 apps in all.
Besides the popular ones that you’ll find
in other distros such as LibreOffice,
Thunderbird, and Ubuntu Tweak Tool,
the distro includes some less common
but equally useful ones such as
Entangle for tethered shooting, Blender
for 3D modelling, Ardour3 digital audio
workstation, wxBanker finance
manager, and Wine along with the
Winetricks and PlayOnLinux frontends
for installing Windows apps and games.
Some of the apps in the distro aren’t
native to Xfce and rather depend on
KDE and Gnome libraries. Kudos to the
developer for mixing in these libraries
and apps without adversely affecting
the distro’s usability. The distro feels
coherent and functions smoothly even
after several hours of usage.
Its collection of apps makes Peach
usable straight out of the box, for a wide
variety of desktop Linux users. In fact,
there are even distinct ways to get to
the bundled apps. The distro packs in
four different application launchers to
suit all kinds of users and their different
workflows. There’s the standard Xfce
applications menu that lists all apps
broken into categories. There’s also
Xfce’s reimagined Whisker menu which
gives quick access to frequently used
apps. Then there’s the Mac OS X
lookalike Slingshot menu borrowed
from the Elementary distro. Finally, old
time Linux users can also get to the
apps by right-clicking on the desktop.
However, the brains behind Peach
have not ignored their primary
audience. All the documentation on the
distro’s website is geared towards the
inexperienced Linux user and helps
familiarise them with terms such as
MD5sum, repository, etc., to help them
wrap their head around the technical
jargon. The distro also has a readme file
on the Live desktop that handholds
users through installing updates, and
switching boot loaders.
For a first release, Peach OSI 14.04 is
aces. It’s stable, well-behaved and
performs extremely well despite its
loaded cache of apps. If you want to
give someone a taste of Linux, get them
to bite into Peach OSI. LXF
Peach OSI 14.04
Developer: James Carpenter
Licence: GNU GPL
Ease of use
Desktop Linux users who aren’t
averse to an Ubuntu-based distro are
encouraged to take a look at Peach.
Rating 8/10
Reviews Cloud storage
Tonido Raspberry Pi
As a long-time ownCloud user, Mayank Sharma wonders if this
space has room for proprietary software.
In brief...
A self-hosted
file sharing server
for the Raspberry
Pi. See also:
ownCloud and
Seafile, etc.
We've favourably reviewed Tonido for Linux in the past, and now the freeware file-sharing server is available for the Raspberry Pi. If you haven't heard of Tonido, the software allows you to access your files and media on a remote computer. It's also available pre-installed on a small plug computer with Ethernet and USB ports.
Tonido’s simplicity extends to the Pi
version as well. To get it up and running,
you just need to download Tonido’s
compressed archive on the Pi, extract
its contents and start the server. You
can then configure Tonido from a
remote browser. All you really need to
do is create a unique Tonido ID that’ll
identify your Raspberry Pi Tonido
installation on the web. This ID is for the
Tonido relay service which will allow you
to access the Pi and any shared files
and folders from any computer on the
web without any further configuration –
no messing around with firewalls and
Dynamic DNS services, just pick a
unique TonidoID and you’re then taken
through a brief setup wizard and asked
to select the folders you need to access
remotely. By default Tonido will share all
folders, however you can customise the
setting to manually specify folders.
Similarly, you’ll then be asked to point
to folders that host your Music, Photos
and Video files. You can also change
these settings from within the Tonido
web interface.
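Those steps amount to just a handful of commands. The URL and archive name below are assumptions based on Tonido builds at the time of writing, so check tonido.com for the current Raspberry Pi release:

```shell
$ cd ~
$ wget http://www.tonido.com/download/tonido-rpi.tar.gz   # placeholder URL
$ tar -xzf tonido-rpi.tar.gz
$ cd tonido
$ ./ start
```

With the server running, point a remote browser at the Pi’s address on Tonido’s web port to create your Tonido ID and complete the setup wizard.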
One of Tonido’s most touted
features is the ability to stream
Features at a glance
Universal access
The Tonido ID relay service lets you access your cloud from any computer connected to the internet.
Public shares
Allow access to data from remote locations by creating private and public shares with ease.
The Tonido web interface is very intuitive and bundles a native media player.
multimedia. If you’ve pointed to the
folders that contain the media, Tonido
will scan them and add all media it finds
in the respective virtual libraries for
quick access. The software also
includes an inbuilt media player, which
played MP3s and MP4s but balked at
OGGs and AVIs. The developers suggest
you install the FFmpeg libraries for more
comprehensive multimedia support.
Care to share
Sharing a folder is pretty
straightforward. All you need to do is
use the built-in file manager to point to
a local folder on the Raspberry Pi, which
could also be on a mounted USB disk,
and give it a name. You also get
adequate sharing permissions. You can,
for example, set an expiration date for a
share, allow everyone with the share
URL to access the folder, or restrict
access to a select few.
When you restrict access to
particular users, you can also control
their right to view and upload files
individually and restrict the amount of
data they can upload. Tonido also keeps
an access history of the shared files.
The only limitation with the freeware
version is that you can only add up to
five shares and five guests.
To upload files, you can either use
the built-in file manager or simply drag
and drop the files into the browser
window. The ability to drop folders is
only available for the Chrome browser.
However, there are no progress bars to
track uploads. There’s no indication of
any kind to let you know that the
selected file or files are being uploaded.
This is most unhelpful when uploading
large files, obviously.
To sync files you’ll need to grab a
client. Tonido has clients for Windows,
Linux and Mac. But the Linux client is
only available as a .deb file and that’s
for 32-bit systems only. Once it’s up and
running, you can use the client to sync
files to the Raspberry Pi without any
issues. You can use the web interface to
tweak the location of the synced folder.
All said and done, Tonido is designed
to help most users get off the ground.
However, its usability will start tapering
off as you become more experienced
and demanding... LXF
Tonido for Raspberry Pi
Developer: CodeLathe LLC
Licence: Proprietary
Ease of use
A file sharing server that eases setup
and admin. Ideal for inexperienced
users but doesn’t scale very well.
Rating 7/10
CMS Reviews
WordPress 4.0
WordPress seems manically driven to constantly improve itself. Shashank
Sharma takes a closer look, hoping some of it rubs off on him...
In brief...
Popular CMS
capable of hosting
almost any kind of
website thanks to
its vast collection
of plugins. See
also: Drupal,
Universal appeal and malleability
have made WordPress one of
the most widely used content
management systems. Thanks to its
design and ease of setup, WordPress
appeals to all manner of users, whatever
their skill set. The large selection of
plugins also means that it can be used
to set up just about any kind of website.
As a breed, reviewers are a greedy
lot. We're predisposed toward feature-heavy
releases and frown upon a
release that doesn't offer many new
additions. Despite that, with tools like
WordPress, you have to look not at the
individual additions, but at how they add
to the experience the project offers.
This is especially true in the case of
WordPress 4.0, named 'Benny' after
jazz musician Benny Goodman.
In the open source ecosystem, users
are frequently burdened with choice.
Natural selection takes care of the
problem, but only to an extent. For the
most part, the popularity of a project
and its success depends on what
measures it takes to provide users what
they want, and routinely. This is one
area where WordPress has done quite
well for itself over the years and the
latest release is no different.
Internationalisation has long been a
strong suit for the project, with an
estimated 30% of WordPress sites
hosting non-English content. With 4.0,
WordPress hopes to expand its global
appeal further by offering users the
option to even install it in their own
language. Also, while WordPress offers a
Features at a glance
Plugins grid
The plugins grid even allows you to restrict searches to plugins that are in beta.
Embedded media
When embedding multimedia, you can even stream the video or audio from the visual editor itself.
WordPress continues to impress with yet another stable release, boasting
enough features to please existing users and impress new ones.
variety of plugins to fight off spam, 4.0
now lets users mark even deleted
comments as spam out of the box.
Being content
A major focus of the release has been
on improving how content is previewed
and edited. In previous releases, a major
problem was that users had to scroll to
the top of the page to access the edit
controls menu bar. This bar now sticks
to the top of the content box and allows
for a smoother editing experience.
Another new feature is the grid view
that allows you to view at a glance all
the uploaded media. You can also list
the media by type and even edit an
image, by selecting it from the grid view,
and then clicking the Edit Image link on
the right sidebar. When editing images,
WordPress 4.0 offers on the fly previews
of the changes being made.
Unfortunately, while the grid view allows
you to select multiple files to embed
into a post or page at once, it doesn't
allow you to perform the same edits on
multiple images simultaneously.
When working with embedded
content, be it images or videos,
WordPress 4.0 now offers previews of it
in the visual editor itself. So you can
directly paste the URL of your Flickr
image, YouTube video or a Tweet into
the content box and WordPress will
render it in the editor, removing the
need to 'Preview' your post to
determine how the embedded content
sits in with the rest of the post.
Plugins are the biggest draw for any
WordPress user. With the latest release,
users now get a grid view of available
plugins. The plugin cards provide a lot of
detail for each listed plugin, such as the
purpose, its developer, compatibility
with the installed WordPress version,
rating, and more. Apart from these, the
release also features the usual plethora
of bug fixes and other tweaks to further
improve the user experience.
Despite its dominance of the CMS
market share, WordPress continues to
improve itself with each release and 4.0
continues this proud tradition. We like
'Benny' for being a rounded, stable
release. You might too! LXF
WordPress 4.0
Developer: WordPress Foundation
Licence: GPLv2
Ease of use
While the latest release is not the
most featureful, WordPress 4.0 is
greater than the sum of its parts.
Rating 9/10
Reviews Gaming headsets
Gamdias Eros
How can this sub-£50 headset outperform more expensive sets?
In brief...
From peripheral
newbie Gamdias
comes a low-cost
headset that’ll do
you proud.
Drivers: 40mm
20Hz to 20kHz
Cable: 3m
Weight: 328g
This bargain-priced headset from
peripheral newcomer, Gamdias,
is a bit of a surprise. On first
sight it looks every bit the £46 gaming
headset – all glossy red and black
styling with a fat, pivoting
boom mic. It doesn’t
scream quality, it just
mumbles something
about certain
plastics being
cheaper to
manufacture. But
throw the Eros
over your ears
and you’ll be
as stunned
as we were
about the
level of audio
loveliness that flows directly into your
reverberating lugholes.
The Eros combines bold, pounding
bass notes with pin-sharp treble,
making it a genuinely great gaming
headset. It only rocks a modest
20Hz-20kHz frequency
response range, but sounds
almost as good as sets
costing twice as much.
Despite the questionable
appearance, the cans are
comfortable and cushion
your head in gently-padded luxury – ideal for
any planned lengthy
Battlefield 4 frag-fests.
The mic is nothing to
get excited about, but it
doesn’t let the Eros
We hope you like
red as you’ll be
seeing a lot of it
with this design.
down when you’re screaming at your
squad for a little fire support. The audio
quality isn’t quite so astounding
musically, with treble feeling just a little
too harsh for some vocals. Yet for a well-priced gaming headset, you can’t really
ask for more than what Gamdias has
delivered right here. LXF
Gamdias Eros
Developer: Gamdias
Price: £46
Ease of use
Looking for a lower-cost headset
that can rock it with the big boys of
the headset world? Look no further.
Rating 9/10
Kingston HyperX
Will tricked-out QPad cans put your ears on cloud nine?
In brief...
Driver: 53mm
Mic response:
Mic type:
3.5mm jack
Probably the
best headset
you can buy for
the money.
When we first saw Kingston’s
HyperX-branded headset
there was a twang of
recognition in our collective hivemind
tech lobes. It is effectively
QPad’s excellent QH-90
headset but with a little
bit of a paint job. Then
there was an immediate
shudder as we heard
that Kingston had
taken one of our
favourite headsets
and decided that it
needed bigger
‘splodes. Yup,
Kingston has turned
up the dial on the bass.
To our surprise, it hasn’t
taken anything away from
the original headset’s aural quality. Our
one concern with the QH-90 was that
the bass response was a little light
because of the delightfully balanced EQ
that QPad had used. Kingston
has augmented the low-end
of this HyperX version to
give the sub-20Hz
frequencies a punch
that is more felt than
heard. It’s not
punching your ear like
the Mad Catz cans,
but adds depth to
the bass notes
without muddying
other sounds.
And because it’s
practically identical to
the QH-90 set, it’s got
an elegant, robust and lightweight
design that feels comfy after long
periods of use, and the detachable mic
is as clear as you could want. Incredibly,
Kingston has improved on an already
impressive headset. LXF
Kingston HyperX Cloud
Developer: Kingston
Price: £86
Ease of use
Injecting QPad’s excellent QH-90
with more bass results in one of the
most complete headset packages.
Rating 10/10
Games Reviews
The Witcher 2
A game sequel that balances on the pinnacle of greatness. And then falls off.
Richard Cobbett dons his armour of +4 to new games journalism.
In brief...
Take control
once again of
Geralt of Rivia, the
titular Witcher,
genetically-enhanced and
trained in the art
of monster
slaying, in this
gory, nasty,
spooky sequel.
OS: Ubuntu
12.04+ or SteamOS
CPU: Quad core
Mem: 4GB+
Gfx: GeForce GT
640 or later, Intel
not supported
Driver: Nvidia
340.32, AMD fglrx
14.4 rev 2,
This is a game sequel that shoots
for the sun while its rivals are
still lining up their sights on the
moon. It’s an AAA RPG with an indie
soul, and a charged, exciting adventure
you can really sink your teeth into,
admire, and for the most part, love.
From the raw technical wizardry of
the engine, to tent walls rippling in the
breeze and villagers running for cover
when it rains, it’s a game built with
burning, red-raw passion and exactly
one goal – to be the best RPG ever,
whatever it takes. Admittedly, it falls
short of that, but not without giving it a
damn good go – over its 20-30 hours of
almost relentlessly superb moments,
The Witcher 2 raises almost every bar it
can get its hands on.
The supernatural odyssey is packed
to the gills with big decisions and major
plot branches, and unlike most RPGs,
these have consequences far beyond
whether or not you get a magic karma
point or an NPC opts to kiss you.
In the opening section, for instance,
you’re sent to take down a traitor, Aryan
La Valette. Whether you kill him in a duel
or make him surrender, the game
happily rumbles on. The scale of the
consequences of many of your choices
is almost ridiculous. Chapter 1 features
two completely different final acts
depending on who you work with, both
of them dramatic and well-produced.
Chapter 2 takes this to a whole new
level, offering two completely different
towns depending on your earlier choice.
All this detail and ambition comes at
a price, however. The Witcher 2 often
feels like CD Projekt struggled to take a
step back from their game, or were
The Witcher 3 is on the horizon; we hope it’s longer...
You can only kill monsters with a silver sword. Surely it would kill humans too?
unwilling to bring in fresh eyes to
playtest it. Quest markers and
descriptions are frequently confusing,
wrong, or just plain missing – very
much the sort of mistake someone
wouldn’t notice if they already know
where they were going and why. The
Witcher 2 still has too much
backtracking and too many invisible
walls. Importantly, the narrative and
action both truly hit the ground running,
and hit it hard, with huge armies
clashing, dragon attacks, daring
escapes, and an opening village full of
drama and intrigue and interesting
moral dilemmas.
The new combat system is a mixed
bag. As before, the gimmick is that you
use a steel sword against humans and a
silver one against monsters, along with
several magic spells to stun, burn and
otherwise tip the balance in your favour.
This works well against one or two
opponents at once, but a mix of long,
non-interruptible animations and bad
targeting can make fighting groups a
pain. This is especially problematic early
on, when Geralt has almost no stamina,
his spells are weak, you can’t block
more than a couple of hits at a time,
rear attacks deal 200% damage, and
you can easily be obliterated by pesky
random encounters.
Chapter 1 was glorious, beautiful,
involving and heartfelt. Chapter 2 was
even better: epic, dramatic, amazing.
When Chapter 3 dawned, it felt like the
game-changing mid-point, where the
gloves would come off and the second
half of the story absolutely explodes
into life... It wasn’t. Chapter 3 turned out
to be the end, as if The Witcher 2
suddenly looked at its watch, and went
‘Whoa, is that the time?’.
The game’s crime is simple: failing
to live up to its own high standards,
even after exceeding almost everyone
else’s with fire and passion and style. All
things considered, that’s not a difficult
thing to forgive. Forgetting? Not so easy.
We’ll also quickly mention its Linux
release, which had mixed results, being
based around the eON wrapper system. This largely
favoured Nvidia hardware, but a major
update in August and AMD driver
tweaks have seen improvements to
stability and performance. LXF
The Witcher 2
Developer: CD Projekt RED
Price: £14.99
Last minute collapse onto its own
silver sword aside, this is one of the
most impressive RPGs you’ll find.
Rating 8/10
Every month we compare tons
of stuff so you don’t have to!
Music players
Which music players handle large music collections with the best
blend of features and performance? Richard Smedley is all ears.
How we tested...
While these players are fairly feature
packed we wanted to give a realistic
assessment, so we tested on a
relatively humble dual-core laptop
with 2GB of RAM and, to mimic the
repurposed machine setup many
might have, we also tested on an
original Acer Aspire One Netbook.
OS was courtesy of Ubuntu 14.04,
Debian Testing (Jessie), and Fedora
20. Some listening was done with
the players running against a
minimal window manager (usually
XMonad), and some against the
more realistic environment of KDE
or Unity, and several other apps
eating into resources. High-res files
were used, as well as CDs, lossless
FLACs, compressed downloads
(MP3 and AAC), and streams. Files
on an external drive enabled testing
across machines, and checking the
search and playlist building worked
outside of the home directory.
Even if you value the sound
quality of vinyl above all other
sources, you can't avoid
amassing digital files of all
the music, audiobooks, podcasts, and
other audio goodies only available in
some or other digital format.
Most distros take care of codec
support for you, and Linux's audio
plumbing handles higher-resolution
audio files, such as 24-bit 192kHz
sample rate, which means good sound
quality is potentially within reach of all
software choices. Direct Stream Digital
(DSD) is the exception as it's barely a
year since support was added to the
“We’re comparing
the more fully-featured players...”
kernel's Advanced Linux Sound
Architecture (ALSA).
Options for storage and playback
are legion, with dedicated hi-fi
components competing against the
several software solutions available on
the three main PC platforms. Here we're
comparing the more fully-featured
audio players, capable of handling very
large music collections – tens of
thousands of tracks – something the
default player shipped with your distro
might not stretch to so well.
There are many solutions for those
wanting a more minimalist audio player
but for typical use, such as your musical
life on the hard disk of your laptop, or an
audio-playing netbook plugged into
your AV amp, the power and flexibility of
these five picks should mark them out.
Music players Roundup
Intuitive is a much-abused word…
Banshee has a flexible interface.
Its multi-pane layout puts
everything just where you reach
for it with your cursor – controls and
menus are at the top, media choices
are at the side, and three panes of info
are available as required – making it
easy to switch between its many
services. It's little wonder that this
handy app has quickly become many
people’s default media player in Ubuntu
and elsewhere. Earlier versions of
Banshee were reported as unstable on
KDE desktops but we’ve found that this
is certainly no longer the case –
although admittedly we did have the
occasional crash when system
resources were under pressure; a
problem shared with Nightingale.
Nightingale has a clean default
interface, centred on the playlist and
the importing, sorting and playing of
your playlist. Lyric fetching was a hit-and-miss affair over the score of
providers listed, and cover artwork
didn't always show up, but there weren't
any problems getting Nightingale to do
anything of a purely audio nature.
MPD (Music Player Daemon) is a
special case, as you're accessing it
through a client. We focussed our
attention on QMPDClient, a Qt-based client
which gives a good range of features
under its deceptively simple skin. Gtk
and terminal clients are also available,
but for handling large music
collections it's MPD itself that matters,
and the clients vary by feature set,
speed and usability.
QMPDClient has a notably distinctive take
on the multi-pane interface, with few
menu choices, icons for various playlist
controls, and tabs for library, playlists,
internet radio and so on shunted to a vertical
column on the right. Nevertheless, with
MPD set up, playing music through
QMPDClient turns out to be a pleasingly
straightforward affair, and the software
was certainly quick and responsive
when we used it.
Audacious's somewhat old-fashioned interface was quite refreshing
in its directness, and although it can be
changed to a classic blue-neon
WinAmp lookalike, we preferred
working with its default Gtk theme.
Despite the sparse interface and config file set-up, MPD
is the only player to cope with DSD files.
Extensive menus, basic playback icons,
the playlist, and a colourful pane of
cover art and traditional frequency
spectrum type visualisation make for
an easy to use player.
GMusicBrowser, while having quite a
busy interface, is another player
focussed on the playlist and the
individual tracks. We found that moving
from the library to the context tab
removes most of the distraction, and
those who like plenty of context-relevant right-click options will be right
at home with this player.
GMusicBrowser shares the laurels
with Banshee on this score...
Not the most
featureful, but
Banshee is the
easiest player
to dive into.
Setting up
Finding your music and managing it.
Banshee digs deep to pull in
connections to many services,
including browsing Amazon
MP3s, playlists, Miro Guide, and
the Internet Archive's DLC. Audacious,
too, was exemplary in setting up
playlists with ease. The simple interface
once more proves an easy way to
discover the player's abilities, and the
list of installed plug-ins is impressive.
Unsurprisingly, as a server app,
MPD needs work to set up. You need to
Audacious, like all the players, takes on 192kHz/24-bit HD audio files.
edit /etc/mpd.conf with the absolute
path to where the music is, then run
mpc update. There's a choice of client
software, QMPDClient, pictured here, is
a reasonable choice for KDE users...
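A minimal /etc/mpd.conf sketch looks like this – the paths are examples rather than defaults, so adjust them to your own layout:

```
# /etc/mpd.conf -- minimal example settings
music_directory     "/home/pi/Music"
playlist_directory  "/var/lib/mpd/playlists"
db_file             "/var/lib/mpd/database"

audio_output {
    type  "alsa"
    name  "Onboard audio"
}
```

Restart MPD after editing, then run mpc update to build (or rebuild) the music database.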
GMusicBrowser is notable for its
simple default interface, and easy
configuration with a variety of options
to make it like most other media
players. Like Nightingale, it uses the
GStreamer framework to pipe together
media handling, which means codec
support goes as far as you wish to allow
non-free codecs on your PC (via the
gstreamer-plugins-ugly collection,
which Ubuntu users can find in the
Multiverse repository).
Nightingale was occasionally less
intuitive than Banshee and Audacious,
but worked well during the set up and
discovery process – at least once we'd
switched to the non-Unity package, and
restarted with US-English settings.
Roundup Music players
Features & extensions
You want an alarm clock? No problem.
There's so much more we've come to
expect from music players. Displays of
cover art and fetching lyrics are taken
for granted, as is scrobbling to Last.fm and
comparable services. Proving that
beauty (for software at least) is just skin deep,
with numerous themes to match your lovingly-tweaked
desktop, is plainly important, given
the thousands of user-contributed skins to be
found. Our five players certainly have what it
takes to manage playlists and tame your
250GB disk of MP3s, but when free and open
source software offers the potential to make
every feature possible, why not go for it? There
may be an extra feature that will swing your
choice and there's plenty on offer: from finding
CC-licensed music to playing DSD files, there's
something to make each one stand out...
Audacious +++++
We've mentioned the number of plug-ins that ship with Audacious and
behind the tame façade hides a player ready to take on anything. There
are useful audio effects, such as binaural conversion (which makes
headphone listening more 'realistic') and LADSPA effects. An EQ window
opens when you start up, which may be an effect too far if you have a
well-balanced amp and speakers setup. Impressive codec support
ranges from the Nintendo DS Sound Format to Apple Lossless (ALAC)
and there are even plugins for formats such as M3U, PLS, and XSPF.
As well as multiple obscure codecs and numerous plug-ins, Audacious
lets you change its appearance with Winamp2 skins (WSZ files). User-adjustable
colour balance means the possibilities are endless, but with
great control comes great responsibility for the distress of any onlookers.
Banshee +++++
In addition to the best compatibility with portable music players,
Banshee offers smart playlists and utilities for handling larger collections
with such features as bulk fixing of missing metadata. Using
MusicBrainz, Banshee will automatically retrieve missing and
supplementary metadata for library items, including cover art. Also, if
you have multimedia keys configured in Gnome, Banshee can use them
too. Like Audacious, Banshee can also rip CDs. Extensions worth a
mention include Karaoke; Stream Recorder for recording Internet radio
streams; and LCD, which displays track info on an LCD screen using
LCDproc. That last extension makes Banshee a great choice for an
embedded music player project. There are also extra touches, like type-ahead find and excellent support for audiobooks and podcasts, which
explains why Banshee is shipped as a default player in many distros.
Large collection handling
If you've passed 5,000 CDs' worth of music, these are for you.
Our five players were chosen
because of their ability to
handle large collections – the
kind to fill the disk drive of the late,
lamented iPod Classic. GMusicBrowser
stood out here with its flexible scanning.
Add to this highly-configurable playlist
features and touches like weighted-random playback options, and it's a
great way to enjoy your collection.
While Audacious quickly indexed a
large external disk and added
everything to the playlist, it couldn't tell
audio files from anything else –
although removing files from the
playlist was simple enough. Nightingale
is also a quick indexer, but uses a fair
amount of system resources no matter
what size of music collection it's
working with, so it struggled a little on
lower-specified hardware.
Banshee is a little behind the other
four players, but barring the odd crash
it soldiered gamely on. Its easy, intuitive
interface earns it a lot of forgiveness
when it falls short elsewhere.
MPD was expected to do well in our
tests and did. However, it doesn't
handle things like other players, which
tend to build playlists by scanning
multiple directories for media files.
MPD works from one music directory. If
you keep sound files in multiple
locations, you'll need to make symlinks
from MPD's assigned directory, though
it can read from remote servers via
SMB or NFS. This is recommended if
you are running MPD on limited
hardware such as a Raspberry Pi.
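In practice, exposing scattered music through MPD's single tree is a few commands. All paths here are illustrative, so substitute your own:

```shell
# Link extra music locations into MPD's single music tree.
# Note: mpd.conf may need follow_outside_symlinks "yes" for links
# that point outside the music directory.
MUSIC_DIR="$HOME/music"          # must match music_directory in mpd.conf
mkdir -p "$MUSIC_DIR"

# Each extra location becomes a subdirectory of the music tree.
ln -sfn /mnt/usbdisk/flac "$MUSIC_DIR/flac"
ln -sfn "$HOME/Downloads/mp3" "$MUSIC_DIR/downloads"

# Then ask MPD to rescan:  mpc update
```

The -n flag stops ln from descending into an existing link, so re-running the script is safe.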
GMusicBrowser ++++
GMusicBrowser bills itself as a "jukebox for large collections of MP3s,
Oggs, FLACs and MPCs." The playlist is the heart of what it does, and
no rival's is as flexible and powerful. Features such as simple mass-tagging
and mass-renaming, and customisable labels make for easy
management. Shimmer is a theme that ships with the player and
improves the standard layout of controls as well as their appearance,
and has a number of options such as a mosaic of album covers. Plug-ins
aren't as numerous as those found for Audacious, but they do fetch
song lyrics and cover art. The ones worth a try are: Albumrandom3,
which plays whole albums with the use of weighted random and
Sunshine3, an alarm clock which fades in music to start your day (and
can also fade out tracks as you go to sleep at night).
MPD/QMPDClient +++++
MPD's strength is that it can handle tens of thousands of tracks without
breaking into a sweat, but clients like QMPDClient do the heavy lifting by
providing features. EQ is one feature you won't find on the client, but
QMPDClient does supply a lot of others, including cover art, lyrics, internet
radio, playlists, track submission, tray notifications (Freedesktop
too), skins, and Shoutcast directory browsing. MPD was the only one of
our players to manage high-resolution Direct Stream Digital (DSD) files.
This is the format for Super Audio CD, and gives a more 'natural' sound
than the PCM encoding used in other discs and downloads. If you're
happy with CD or compressed audio this won't be a problem, but with a
rapidly expanding library of DSD tracks now available to purchase for
download, this may swing audiophiles towards an MPD-based set-up.
Nightingale ++++
Nightingale picked up the Songbird source code and flew off with it,
maintaining both the player and its extension compatibility. You can add
to the interface with more important track info (or an analogue clock,
should you so wish); browse for CC-licensed music; find broken, missing
or duplicate tracks; add an iTunes-like grid view and even fetch guitar
tabs. The secret to its extendability is the Firefox-like nature of its
extensions, as Nightingale uses Mozilla XULRunner underneath, making
embedded browser windows a natural feature, and you'll find things like
your Last.fm page in its own tab. The gapless playback and good tag
editing keep Nightingale in the running. Add in its excellent control via
the keyboard and it's a powerful package, provided, of course, your
hardware resources are up to it.
Support and Documentation
Because everyone needs a helping hand.
When QMPDClient kept reporting
that it 'could not connect to
lastfm' there was no hint that
we should install mpdscribble, and edit
/etc/default/mpdscribble, then
restart the service. MPD has a
reputation for being difficult to set up,
but don't let that put you off. For the
most part following the user manual on
the MPD website or one of its excellent
online tutorials will get you going.
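For the record, that fix boiled down to three commands (package, file and service names as on Debian-family systems; other distros will differ):

```shell
$ sudo apt-get install mpdscribble
$ sudoedit /etc/default/mpdscribble    # add your scrobbler account details
$ sudo service mpdscribble restart
```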
Nightingale, as a child of the web,
will load the online forums in an
embedded browser window if you click
Help. The built-in search finds answers
to most problems, and when it refused
to start on first install, it did so with the
message that switching to US-English
often fixed start-up problems… and it
made the switch for us. Nice work.
Banshee's manual first appeared
with the 1.8 release four years ago. It's a
practical guide aimed at managing and
playing your music, as well as extending
Banshee and connecting to portable
players. It contains useful tips, such as
how to get purchased files from
Amazon before the .amz file expires.
Audacious is so straightforward it
could almost be said not to need any
documentation. The main page's list of
keyboard shortcuts is enough for most,
but if not the forums on the website
have plenty of help and advice.
GMusicBrowser provides forums, a
reasonable FAQ section and a
developers' wiki, as well as a useful
guide to the less obvious features –
so an adequate level of support.
Banshee's practical yet concise guide is a model reference.
Hardware connections
iPods, remote controls and outboard sound cards.
Thanks to ALSA, GStreamer and
Pulse, connecting to a USB DAC
should present no problems
whichever player you use, as it's just
seen as an outboard sound card.
Whatever your level of scepticism about
the world of hi-fi, where connection
cables sell for thousands of pounds,
and more is written about the hardware
than the music, spending a couple of
hundred pounds on a USB DAC and
adding a decent amp and speakers will
do more for your music than choosing
the best player here (mostly because
any of the five will serve you quite well).
While all of the players could be
controlled from your phone via
Bluetooth or Wi-Fi by adding in the
appropriate Remuco package,
Nightingale's support is only found
Even if you never connect Banshee to an iPod, you'll appreciate other features.
outside the Remuco codebase in the
ageing packages of an Ubuntu PPA.
Nevertheless, for those not wishing to
leave the sofa remote control options
are available for all of our tested players.
Speaking of phones, while iPod
integration was and remains a major
source of grousing for Linux users, the
ubiquity of Android phones generally,
and particularly among Linux users, has
made this far less of an issue.
Nightingale, for instance, will import
and export to iTunes, and managing an
iPod/iPhone and so on is best done via
a separate program like Podtool or
Gnupod. The exception here is Banshee,
which not only has excellent – and
frequently updated – support for
iDevices, it also connects to many MTP
devices. MPD has the MPod client for
portable devices.
Incidentally, those who see the
world through the filter of the much-loved Conky system monitor will be
pleased to know that it connects to
Audacious, Banshee, GMusicBrowser,
and MPD. Yes, that's right, all except
Nightingale. In fact, most of the players
that didn't make the selection here will
also connect to Conky.
Banshee's ability to connect to portable players stands out.
Stream support
Listening to sounds on the wire.
Sadly, BBC iPlayer is still locked
away from these clients –
unless you go to the bother of
using a script to regularly gather the
ever-changing m3u addresses to send
to MPD – but you do have the option of
downloading BBC radio programmes
with the get_iplayer script, to add them
to your library and listen to them later.
Banshee puts its online resources in
the left-hand pane: Amazon, Miro, the
Internet Archive, etc. Third-party plug-ins add support for Jamendo and Magnatune, and internet radio stations, while Miro Guide pulls in hundreds of podcasts.
Nightingale, with its web heritage, is
the easiest to connect to, and
supports Shoutcast and Soundcloud.
Using Audacious, the only method of
receiving streams is the Add URL
option from the menu, while
Nightingale's USP is
Soundcloud, which makes it
the only GNU/Linux audio
player app that lets you
easily access your favourites
and your profile stream from
SoundCloud. QMPDClient,
meanwhile, enables you to
add any internet radio
station, as well as Shoutcast
stations. At the back end,
the mpc add <URL>
command adds radio
stations to MPD.
Nightingale, which is based on Mozilla's
XULRunner, easily embeds Internet services.
GMusicBrowser's Web Context plugin uses MozEmbed or WebKit to provide views of Wikipedia pages, and lyrics, appropriate to the song playing. Beyond this, GMusicBrowser doesn't tap into the benefits of the Internet, although project developer Quentin Sculo says he has internet radio on his to-do list.
“… brings in a whole world of audio bliss.”
Music players
The verdict
Ranking the winners isn't an easy
task, as all five have something
to offer. As you'd expect, it all
depends on what you want out of a
music player and particularly one that
can handle large music collections. If
you're not bothered about streaming
content, then Audacious is worth a look:
it's just enjoyable to use and does the
job well, which is sadly all too rare in
desktop software.
Nightingale is a mighty beast.
Unfortunately, it uses significantly more
system resources than the other
players we tested, which means you
won't have much fun using it on your
poor little Raspberry Pi. Despite this,
and the lack of Conky and native
remuco integration, it's a capable player
which will take on large music
collections and use the power of its
extensions to manage them.
Any MPD-based solution will lose
marks due to the less than trivial
installation and configuration. For
readers happy to push through this barrier, we'd like to recommend MPD and one of its many clients, QMPDClient, for handling large collections in a stable and easy way. The flexibility of its client/server network architecture (with clients on iPods, in web browsers, and anywhere you can imagine), and the separation of server (MPD), basic control (MPC), and clients feels like the correct Unix way of handling the job.
Back to more user-friendly desktop apps, and we're left to choose between the well-performing Banshee and GMusicBrowser. Here's our surprising result: Banshee is packed with features and comes tops for flexibility, yet it's also the least comfortable with large collections, and can be a little sluggish and unstable handling them. In short: try it. Banshee may suit all your needs, but it does have those shortcomings. But for more reliable management of a large collection of audio files, GMusicBrowser is tailor-made. It's not the best at everything, but there's nothing it does badly and the slightly busy interface works well for us, even without changing it to Shimmer, or another theme.
“For more reliable management of a large collection of audio files, GMusicBrowser is tailor-made.”
GMusicBrowser's quirky interface is surprisingly easy to use and focuses on managing the music collection.
1st GMusicBrowser
Licence: GNU GPL Version: 1.1.13
Made for large collections, quirky but powerful.
2nd Banshee
+++++ Licence: MIT Version: 2.6.2
Graceful all-rounder with power and extensibility.
3rd MPD/QMPDClient
+++++ Licence: GPLv2, MPL, BSD Version: 0.18.14/1.2.2
The Unix way – non-trivial configuration and powerful results.
4th Nightingale
+++++ Licence: GPLv2, MPL, BSD Version: 1.12.1
Less robust, but featureful and lovely to use.
5th Audacious
+++++ Licence: GNU GPL Version: 3.5.1
Nice – but not competitive in this company.
Over to you...
What do you think? Do you have a preferred media player that’s not
on our list? Email your opinions to [email protected]
Also consider...
This is a crowded field, and honourable
mention should go to Amarok, which has
improved in recent releases. Quod Libet is also worthy of note with its powerful tagging, as is the more minimal Qmmp, which lacks the features of the five we tested but works well with large
collections. Qmmp is also happy to play 24-bit
tracks at 96kHz or 192kHz sample rates, in
stereo or 5.1 surround - but not DSD tracks.
We're also big fans of Clementine with its
excellent management of tabbed playlists and
streaming services.
There's a certain amount of crossover
between audio and media players these days,
and although the latter are not necessarily built to handle very large collections of audio files in an easy and graceful manner, UMPlayer, which featured strongly in a recent Media Player Roundup (see LXF187, p24), is a good example, as is SMPlayer.
Lastly, if you want the flexibility of a client-server setup and a choice of front-ends, but without
MPD's complexity then XMMS2 provides that
along with the best gapless playback. LXF
Subscribe to Linux Format
Subscribe to our print edition, digital edition or get the
best value with our complete print and digital bundle!
Choose your package
Print
Every issue delivered to your door with a 4GB DVD packed full of the hottest distros, apps, games, and more. PLUS exclusive access to the Linux Format subscribers-only area.
ONLY £25.49 – Your subscription will then continue at £25.49 every 6 months, SAVING 40% on the shop price.
Digital
Instant access to the digital editions of the magazine on your iPad and iPhone. PLUS exclusive access to the Linux Format subscribers-only area, featuring complete issues & disc downloads.
ONLY £20.49 – Your subscription will then continue at £20.49 every 6 months, SAVING 37% on the shop price.
Get the complete Print + Digital bundle
A DVD packed with the best
new distros and free & open
source software every issue.
Exclusive access to the
Linux Format archive – with
1,000s of DRM-free tutorials,
features, and reviews.
Every new issue of the
magazine in print and on iPad
and iPhone.
Never miss an issue, with
delivery to your door and on
your device.
Huge savings, the best value for money, and a money-back guarantee.
Your subscription will then continue at £30.99 every 6 months – SAVING 40% on the shop price and giving you an 83% discount on a digital subscription.
Two easy ways to subscribe...
Online -
Call 0844 848 2852
(please quote PRINT14, DIGITAL14, BUNDLE14)
Prices and savings quoted are compared to buying full priced UK print and digital issues. You will receive 13
issues in a year. If you are dissatisfied in any way you can write to us or call us to cancel your subscription at
any time and we will refund you for all unmailed issues. Prices correct at point of print and subject to change.
For full terms and conditions please visit: Offer ends 25/11/2014
Hack the Raspberry Pi
Follow in the footsteps of free software wizards as
Mayank Sharma shows you how to master the
Raspberry Pi by hacking it.
The Raspberry Pi was
conceptualised as an
educational device. The
Raspberry Pi Foundation
designed the no-frills computer to make
an affordable and functional computing
device for kids who wanted to learn to
program, but found it difficult to come
up with the cash to procure hardware off the shelf. However, the device hit it off
with the hackers and modders who
began using it creatively and
made it usable to audiences
far beyond what Eben Upton,
Rob Mullins, Jack Lang and
Alan Mycroft had originally
imagined when discussing the
idea at the University of
Cambridge’s Computer Laboratory.
But this has created a misconception. A
lot of people believe that the Pi can either
be used inside an elementary educational
institute or in the hands of an experienced
campaigner who’ll plug it inside an RC car or
space-faring teddy bears. While this is true,
the Pi is also an excellent tool to cover the
ground in between these two extremes.
Now don’t sound surprised when we
tell you that hacking with the Pi is a nice
way to learn about Linux and
programming. And we aren’t the only ones
to say this. We’ve reported in detail on the
new UK computing curriculum and how it
encouraged the use of the Raspberry Pi in
each of the key stages defined in the new
curriculum in the past (see LXF189, p56).
In Stage 1, kids between 5 and 7 years write and test simple programs on different devices, such as a tablet or the Raspberry Pi. Then there's Stage 2, where kids between ages 7 and 11 are introduced to Scratch, which is installed by default on the official Pi distro Raspbian. The kids in Stage 3, between ages 11 and 14, are introduced to text-based languages like Python to control electronics. The Pi is a wonderful platform for this application because of its accessible GPIO ports, on which you can hook up devices such as Pibrella. Finally, the kids in Stage 4, between ages 14 and 16, can make use of the excellent add-on kits available for the Pi to develop their computation abilities.
In this feature we'll help you pick up new skills as you hack with the Raspberry Pi. Just make sure you have one ready. Use NOOBS to prepare an SD card for the Pi. Download it or grab it off the disc, unzip it and copy the extracted contents onto a formatted SD card, and you're up and running. This feature covers some very practical everyday projects that can be rolled out by anyone, irrespective of their skill level. As you complete each of these 10 hacks you'll learn some tricks of the trade that are widely used.
“The Raspberry Pi was designed as a device for kids to learn to program...”
Skills: Security, Twitter API
Hack #1: Tweeting security camera
The launch of the official Raspberry Pi camera module threw
open a world of possibilities. Hackers who were already using
USB cameras now had a minuscule Full HD shooter that was
ideal for projects like surveillance. Combined with the Pi’s
computing prowess, you can send the captured security
images into your Twitter stream.
Before you attach the camera, locate the Camera Serial
Interface (next to the Ethernet port) and pull the tab gently
up. Now push the camera module’s ribbon cable into the slot,
with the silver contacts on the cable facing away from the
Ethernet port. Remember not to push the cable in very deep.
Now hold it in place with one hand and push the CSI tab back
down with the other hand to lock the camera’s ribbon.
With the hardware in place, it’s now time to set up the
software. Boot into Raspbian and bring up the configuration
tool to configure your Pi with sudo raspi-config. Scroll down
the list to the item that reads Enable Camera. You’ll need to
confirm your choice and restart your Pi. On restart, you’ll be
able to use the well-documented raspistill and raspivid
commands to capture still images and videos respectively.
To capture motion, we’ll use the lightweight motion
detection Python script written by Raspberry Pi community
members. The script relies on the Python Imaging Library
which is a library for analysing and manipulating images. You
can install it with sudo apt-get install python-imaging-tk.
Also create a directory named picam under your home
directory for the script to store images with mkdir ~/picam.
Now grab the script with wget -c -O and make it executable with chmod +x.
When you run the script, it'll turn on the
red LED on the Pi camera and start taking low-resolution
images. It’ll then look for movement by comparing the pixels
in two consecutive images. If it detects changes, the script will
capture a higher-resolution image. The script is very efficient
and will automatically remove the low-res images it captures
for comparison and only store the high-res images that have
captured the motion.
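The pixel-differencing idea the script uses can be sketched in a few lines of Python with the same Python Imaging Library. To be clear, this is our illustration, not the community script itself, and the threshold values are arbitrary placeholders:

```python
from PIL import Image, ImageChops

def motion_detected(img_a, img_b, pixel_threshold=40, count_threshold=100):
    """Compare two low-res frames; report motion if enough pixels changed.

    pixel_threshold: how different a pixel must be to count as changed.
    count_threshold: how many changed pixels constitute 'motion'.
    """
    # Per-pixel absolute difference of the two frames, in greyscale
    diff = ImageChops.difference(img_a.convert("L"), img_b.convert("L"))
    changed = sum(1 for p in diff.getdata() if p > pixel_threshold)
    return changed > count_threshold
```

In use, you'd feed it two consecutive low-resolution captures and, on a True result, fire off a high-resolution raspistill shot.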
To run the script at boot, you’ll need an init script that runs
The Kali Linux distro is designed specifically for
penetration testing and is available for the Raspberry Pi.
the script and kills it before shutting down the
Raspberry Pi. Again, the community has done all the legwork
for you. Just grab their script with wget http://pastebin.com/raw.php?i=AfqbjQrb -O picam_init and move it into the correct location with sudo mv ~/picam_init /etc/init.d/picam before making it executable with sudo chmod +x /etc/init.d/picam. Finally, make the boot system aware of this script with sudo update-rc.d picam defaults.
The script will now start and shutdown along with the
Raspberry Pi. You can also manually control it like any other
daemon. For example /etc/init.d/picam stop will stop the
script and /etc/init.d/picam start will start it.
It weighs just
9 grams but the
Raspberry Pi
camera module
can shoot still
images with a
resolution of
2,592×1,944 as
well as Full HD
1080p video at 30fps and 720p video at 60fps.
Post to Twitter
We’ll now set up a new Twitter account and ask the Pi to
post images to it. Make sure the account is private. Begin by
installing Python’s pip package manager for installing Python libraries with sudo apt-get install python-pip. Then install Twython, a Python wrapper for the Twitter API, with sudo pip install twython.
“The launch of the Raspberry Pi camera module threw open a world of possibilities...”
Note: To use Twython you need a Twitter
developer account. Head to the Twitter developer site and sign in with the credentials of the new account. In the page
that opens click on ‘Create New App’ and use the space
provided to give it a name and a description. Leave out the
Callback URL field, scroll down the page and create the app.
Initially, the app is created with read-only permissions.
Click on the app, switch to the Permissions tab and toggle the
Read and Write radio button. Then switch to the API Keys tab
and click on the ‘Create My Access Token’ button. Make a note of
the API Key, API Secret, Access Token, and Access Token
Secret variables listed on this page.
Then download the modified picam script, altered to post captured images to Twitter.
Open this new script in a text editor and insert the four bits of information you jotted down from Twitter in the space provided at the top of the script.
When you’re done, make it executable with chmod +x. Now when you run the script and it detects
motion, in addition to capturing an image and storing it in the
SD card, the Pi will also post it on your private Twitter account
which will then show up in your Twitter feed.
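For the curious, the Twitter-posting step boils down to a few Twython calls. The sketch below is ours rather than the modified picam script: the four credential strings and the image path are placeholders, and update_status_with_media was Twython's media-upload call at the time of writing.

```python
from datetime import datetime

# Placeholder credentials - paste in the four values from your Twitter app page
API_KEY = "..."
API_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_SECRET = "..."

def caption(now=None):
    """Status text for a captured frame, timestamped so tweets aren't duplicates."""
    now = now or datetime.now()
    return "Motion detected at " + now.strftime("%Y-%m-%d %H:%M:%S")

def tweet_image(path):
    """Post one captured image to the account the credentials belong to."""
    from twython import Twython  # third-party: sudo pip install twython
    twitter = Twython(API_KEY, API_SECRET, ACCESS_TOKEN, ACCESS_SECRET)
    with open(path, "rb") as photo:
        # Uploads the image and the status text in a single call
        twitter.update_status_with_media(status=caption(), media=photo)
```

A call such as tweet_image("/home/pi/picam/capture.jpg") is the kind of hook the modified script adds after each high-resolution capture.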
Skills: Networking,
data management
Hack #2: File-sharing samba server
The ability to share and access data on your Raspberry Pi
from other machines is very useful indeed. For example, if you
are using it as an always-on download box, you’d want to
move the downloaded data off the Pi as well. Now that data
could either be on the SD card or on an attached USB disk.
With the Samba software – which is an implementation of the
SMB/CIFS networking protocol – you can use your Raspberry
Pi as a Network Attached Storage (NAS) device and easily
access the USB drive attached to the Pi from computers on
your network.
The sudo apt-get install samba samba-common-bin
command will fetch the required software. Now, attach the
USB disk to the Pi which will be automatically mounted under
the /media folder. Let’s assume the USB drive is mounted to
/media/usb. You now need to configure Samba so the drive
can be shared on the network. To do this, you first need to
add a Samba user called pi. Enter the sudo smbpasswd -a pi
command and type in a password when prompted.
Next, open Samba’s configuration file (/etc/samba/smb.conf) in a text editor. If you wish to access the Pi from a
Windows machine, locate the workgroup = WORKGROUP line
near the top of the smb.conf file and change it to the name of
your Windows workgroup. Further down the file, locate the #
security = user line and remove the # to uncomment the
line and turn security on.
Lastly, scroll down to the end of the file and add the
following lines:
path = /media/usb
comment = USB NAS Drive
valid users = pi
writeable = yes
browseable = yes
create mask = 0777
public = yes
Save the file and then restart Samba with sudo /etc/
init.d/samba restart.
That’s it. You’ll now be able to access the USB drive
attached to the Pi from any other computer on the network.
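One detail worth spelling out: in smb.conf, share options must sit under a bracketed header that names the share. Assuming a share name of [usb] (pick any label you like), the full stanza would read:

```ini
[usb]
# Assumed share name - the bracketed label is what clients will see
path = /media/usb
comment = USB NAS Drive
valid users = pi
writeable = yes
browseable = yes
create mask = 0777
public = yes
```

From another machine the share then appears as \\<pi-address>\usb, and you log in as the Samba user pi.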
Hack #3: Raspberry Pi as a thin client
Remmina has a very usable interface and scrolls automatically when the
mouse moves over the screen edge.
A thin client is a computer that depends on other powerful
computers to do the heavy lifting while it presents the results.
Thanks to its power efficient and noiseless design, the
Raspberry Pi is a natural thin client. A thin client relies on
remote desktop protocols to communicate with the powerful
remote desktop.
For this hack, you’ll need a powerful computer that’ll act
as the remote server and the Raspberry Pi which will be the
thin client. Our desktop server is running Ubuntu which
comes pre-installed with the Vino remote desktop server that
Overclock the Pi
While the Pi’s 700MHz processor is good
enough to set up these hacks, after a
while you’d wish you could squeeze more
juice out of it. The good news is that you
can! The Pi’s BCM2835 processor can be
run above its default speed...
However, remember that such
performance comes at a price – the
processor will draw more power, run
hotter than before, and may not last as
long as a regular Pi running at its default
speed. Also, while it’s possible to alter the
Pi’s performance manually, the safest way
is to use the preconfigured overclock
settings in the raspi-config tool which also
doesn’t void warranty.
The settings in the raspi-config tool are
known to be safe for use with most Pis. To
overclock your device, launch raspi-config
and scroll down to the Overclock option
and confirm that you wish to continue.
You’ll now be shown a list of pre-set
overclock speeds and you can select any
one. When you have made your selection,
the menu will reboot the Raspberry Pi at
its new speed.
If the Pi fails to boot with the overclocked setting, hold
Shift while booting, which loads the Pi at its stock speed.
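Under the bonnet, raspi-config simply records the chosen preset in /boot/config.txt. For reference, the widely published ‘Turbo’ preset corresponds to lines along these lines – treat the values as a sketch and verify them against what raspi-config writes on your own board:

```ini
arm_freq=1000
core_freq=500
sdram_freq=600
over_voltage=6
```

Deleting these lines (or holding Shift at boot) returns the Pi to stock speed.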
can be accessed via the Desktop Sharing app. The remote
desktop functionality is disabled by default. To enable it,
launch the Desktop Sharing app and tick the ‘Allow other users to view your desktop’ box. This also enables the second
checkbox which allows the connected users to control the
Ubuntu server.
Also enabled is the first option under the Security section
which forces you to approve each request for connection.
However, for smoother workflow, you’d want to disable this
option by unchecking the box. Instead, enable the next
checkbox which will prompt the user for a password and
enter a strong password in the space provided. When you’re
done, click on the Close button to save the changes. If you
use another distro, browse the web and install a VNC server,
such as Vino or Krfb, on top of that distro.
If you’re using Ubuntu’s Vino you’ll have to make one
additional change because Vino was changed to require
encryption by default, but only supports old encryption
methods that are not commonly available. Fire up a terminal
on the Ubuntu server and modify Vino’s security settings with
gsettings set org.gnome.Vino require-encryption false.
Follow the rest of the hack and if you’re able to connect from
the Pi, return to the Ubuntu Server and make the settings
permanent by installing the dconf-editor with sudo apt-get
install dconf-editor and navigate to org > gnome > desktop >
remote-access and uncheck the require-encryption setting.
You now need to prepare the thin client Pi. For this hack,
we’ll use the lightweight Remmina client which you can install
with sudo apt-get install remmina. Once installed it’ll be
housed inside the Internet menu. Launch the client and click
on the new icon to configure a connection to the server.
In the window that opens, give this connection a name
and select VNC from the Protocol pull-down menu. You’ll also
need to specify the IP address of the server in the Server field.
You can also boost image quality by selecting a higher colour
depth and a better quality from the respective pull-down lists.
Click Connect when you’re done. Remmina will establish a
connection to the Ubuntu Server and prompt you for the
password you had set before allowing you to take control of
remote desktop from the Pi.
Hack #4: Host your OwnCloud
If you want universal access to your data you don’t really need
to throw money at a service that might store it on servers
outside your legal jurisdiction. Instead use the money to get a
Pi and a large capacity powered USB disk and host your own
private and protected cloud-sharing service with OwnCloud.
OwnCloud lets you sync and share your data and access it
from any device connected to the Internet. For added security
OwnCloud can also encrypt your files. The software can
handle files in a variety of formats and has an in-built gallery
image viewer and a music player. One interesting feature in
OwnCloud is file versioning – because of this, the server
tracks all changes to every file and you can revert to an older
version with a single click.
Like with other online cloud storage services, you can sync
files on OwnCloud either using the web browser or a desktop
client on Windows, Mac, and Linux as well as mobile clients
for Android and iOS devices.
OwnCloud runs on the Apache web server and also needs
a database server. While it can work with MySQL, for this hack
we’ll use the lightweight SQLite server. You can install all
required components with sudo apt-get install apache2
php5 php5-gd php5-sqlite curl libcurl3 php5-curl.
Now head to the OwnCloud website and download the tarball of the latest version. Unwrap it with tar xjvf owncloud-7.0.2.tar.bz2 and move the resulting folder into the root of your Apache server with sudo mv owncloud /var/www. Then make sure the new files have the correct permissions for their new location with sudo chown -R www-data:www-data /var/www/owncloud.
You’ll need to enable certain Apache modules for
OwnCloud to work correctly. In a terminal enter sudo
a2enmod headers rewrite env and then restart Apache with
sudo service apache2 restart. To configure OwnCloud
launch a web browser and navigate to the OwnCloud
installation instance at localhost/owncloud. Since this is a
new installation, you’ll be asked to create a user account for
the OwnCloud administrator.
You can now log into your OwnCloud server as the
administrator and start uploading files you want to share. But
before you do that, you’ll have to tweak PHP’s configuration
file if you wish to upload files that are greater than 2MB in
size. To do that, open the PHP configuration file, php.ini, housed under /etc/php5/apache2, in a text editor. Look for the upload_max_filesize and post_max_size variables and change their value from 2M to something like 500M or 1G.
“If you want universal access to your data, get a Pi and a USB disk and host your own cloud!”
ownCloud 7 is a significant step up from earlier versions, with a streamlined interface.
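After editing, the two relevant php.ini lines would read something like this (1G shown here; pick a limit that suits your disk):

```ini
upload_max_filesize = 1G
post_max_size = 1G
```

Restart Apache with sudo service apache2 restart for the new limits to take effect.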
You’re now all set to upload data into your OwnCloud
server using the web interface. You can also interact with the
OwnCloud server via the WebDAV protocol. In the Files file manager, press Ctrl+L to enable the location area. Here you can point to your OwnCloud server,
such as dav://localhost/owncloud/remote.php/webdav.
Once you’ve authenticated, the OwnCloud storage will be
mounted and you can interact with it just like a regular folder.
To share uploaded files, go to the Files section in the web
interface and hover over the file you wish to share. This
displays several options, including Share, which lets you
select who you want to share the item with and whether you
want to give them permission to edit and delete the files.
You can also share an item with someone who isn’t on
your OwnCloud server. When you click on the Share with link
checkbox, OwnCloud displays a link to the item that you can
share with anybody on the Internet. Optionally you can also
password-protect the link and set an expiration date.
Skills: Streaming,
emulation and GPIO
Hack #5: Stream music from the web
We’ve got music everywhere. In addition to DRM-free tracks
on the hard disk, you’ve probably got hundreds of tracks on
Spotify or Google Play. You can get them all together with the
PiMusicBox distro which transforms the Raspberry Pi into the
ultimate music player.
The PiMusicBox can output music through speakers
attached to the headphone jack of the Pi, and also through
the HDMI and USB ports. So all you gotta do is install the
distro, hook up some speakers to the Pi, plug in your account
credentials and let it rip. You can then control your juiced up
Pi from any computer on the network and even from any
Android device.
Begin by downloading the compressed image for the PiMusicBox distro from its website. Extract the downloaded ZIP file and then put the .img image file on your
SD card with the dd command, such as sudo dd if=musicbox0.5.img of=/dev/sdd. Remember to replace /dev/sdd with the location of your SD card.
If you use the Ethernet port to connect the Pi to the
internet, you can boot the Pi from the newly created SD card.
However, if you use a wireless card, you’ll need to edit the
distro’s configuration file and manually point it to your
wireless router. Access the newly created SD card from a
regular distro and navigate to the config folder and open the
settings.ini file in a text editor. Near the top of the file you’ll
notice two variables, WIFI_NETWORK and WIFI_PASSWORD.
Insert the corresponding values of your network next to these
two variables and save the file. The caveat is that PiMusicBox
will only work with WPA2-protected wireless networks.
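Going by the variable names quoted above, the edited portion of settings.ini would end up looking something like this (the values are placeholders, and the file may group them under a section heading):

```ini
WIFI_NETWORK = MyHomeNetwork
WIFI_PASSWORD = my-wpa2-passphrase
```

Save the file, eject the card cleanly and move it back to the Pi before booting.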
Once that’s done, boot the Pi with the configured SD card.
On first boot, the distro will resize the filesystem to take over
the complete card, and restart automatically. If you have a
monitor attached to the Pi you can follow the booting
process, otherwise wait a minute or two, then fire up a
browser on any computer on your network and head to
http://musicbox.local. If that doesn’t take you anywhere,
then point your browser to the IP address of the Pi.
PiMusicBox is based on the Mopidy music server that’s written in Python.
The default interface of PiMusicBox is rather bland since
you haven’t configured any music sources yet. To fix that, click on the Settings link in the navigation bar on the left. This will take you to a page where you can individually enable and configure all the supported streaming services, from premium
ones like Spotify and Google Music to free ones like The
Internet Archive, Soma FM, and more. You can also control
other settings from this page. For example, the Audio setting
lets you switch audio output devices.
The PiMusicBox distro has a working Samba configuration
and should show up in the Network section inside the file
manager of all OSes. The distro’s Samba share only has one folder, named Music. You can put any audio files inside this folder and they’ll be transferred to the SD card. Whenever you
restart the Pi, the distro will scan for any new music at boot
up. You can then browse through and play these files from the
distro’s web interface.
You can also play music via any software that supports the
Music Player Daemon (MPD) such as the MPDroid app for
Android. To connect, launch the app and in the first-run
wizard, enter the IP address of the Pi in the hostname field.
Hack #6: Broadcast audio
Streaming music into the Pi is one thing. What if you wish to
stream music out to other devices? While you’re at it, how
about running your own radio station? As it happens, you can
do so without too much trouble.
Besides the familiar Audio, Ethernet, HDMI, and USB ports
on the Raspberry Pi, the device also has interfaces that are
designed to connect more directly with other chips and
modules. These General Purpose Input/Output (GPIO) ‘ports’
are the pins arranged in two parallel strips (26 on the Model B
board and 40 on the B+).
These interfaces are not plug-and-play but can be
controlled through software. A bunch of hackers over at Code
Club wrote a program to use the pins intended to generate
spread-spectrum clock signals to instead output FM Radio
signals. To transmit a surprisingly strong FM signal, all you
need is to attach a wire to the GPIO 4 pin. Even without the
wire, the FM signal broadcast by the Pi will be picked up by
nearby FM receivers.
Power up the Pi and bring up a terminal. Now grab the code written at Code Club with wget and extract it:
mkdir ~/pifm
tar zxvf pifm.tar.gz -C ~/pifm
The tarball extracts six files. Surprisingly, that’s all there is to it. You can now broadcast the included sound.wav file with sudo ./pifm sound.wav 101.2. Now grab an FM receiver and set it to FM 101.2 and you’ll hear the Star Wars theme. You
can actually change the broadcast frequency to anywhere between 88MHz and 108MHz simply by appending the channel frequency at the end of the command.
You can play other audio files as well, but they must be
16-bit 22,050Hz mono and in the WAV format only. That
might seem like a limitation but it really isn’t, thanks to the
brilliant SoX sound exchange audio editor. We’ll use the nifty
little tool to process any MP3 file irrespective of its encoding
and convert it into the correct WAV file on-the-fly.
Begin by installing the audio editor and its dependencies
with sudo apt-get install sox libsox-fmt-all. When it’s done,
type in the following command, substituting SomeSong.mp3 with the name of the MP3 file you wish to play:
sox -t mp3 SomeSong.mp3 -t wav -r 22050 -c 1 - | sudo ./pifm - 101.2
The first part of the command converts the MP3 file into a
WAV file, changes its audio sampling rate to 22050 Hz and
down-mixes the track to mono. The converted track is then
sent to the standard output, denoted by the hyphen, and is
then piped (|) into the standard input of the pifm command.
The only difference in the pifm command in the above
example is that instead of specifying the name of the file to
broadcast, we are asking the script to instead broadcast the
standard input. If you’ve still got your FM receiver tuned to the
101.2 frequency, you should now hear your MP3.
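The same conversion-and-broadcast pipeline can be driven from Python: the subprocess module wires sox's stdout into pifm's stdin exactly as the shell pipe does. This is a minimal sketch of the idea, assuming pifm sits in the current directory and accepts - plus a frequency, as in the commands above:

```python
import subprocess

def build_pipeline(mp3_path, freq="101.2", pifm="./pifm"):
    """Return the two argv lists for the sox -> pifm pipeline."""
    sox_cmd = ["sox", "-t", "mp3", mp3_path,
               "-t", "wav", "-r", "22050", "-c", "1", "-"]  # mono 22,050Hz WAV to stdout
    pifm_cmd = ["sudo", pifm, "-", freq]  # read WAV from stdin, broadcast on freq
    return sox_cmd, pifm_cmd

def broadcast(mp3_path, freq="101.2"):
    """Convert an MP3 on the fly and pipe it straight into pifm."""
    sox_cmd, pifm_cmd = build_pipeline(mp3_path, freq)
    sox = subprocess.Popen(sox_cmd, stdout=subprocess.PIPE)
    pifm = subprocess.Popen(pifm_cmd, stdin=sox.stdout)
    sox.stdout.close()  # so pifm sees EOF when sox finishes
    pifm.wait()
```

Calling broadcast("SomeSong.mp3") on the Pi would then be equivalent to the one-liner above.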
You can do some wonderful things with SoX. You can, for example, use it to broadcast your favourite streams live from the Internet. The command sox -t mp3 <stream URL> -t wav -r 22050 -c 1 - | sudo ./pifm - 101.2 will broadcast the TuxRadar podcast. The only difference between this
command and the previous example is that instead of
pointing to a local MP3, you are now pointing to one that
resides online.
Hack #7: Emulate vintage gaming
Games weren’t always graphical masterpieces. Their
developers had fewer resources to play with and instead of
graphical excellence the developers relied on gameplay to
keep the players hooked. This is why vintage games, even with their rudimentary graphics, are still quite popular with gamers of all ages even today – there’s nothing quite like a classic gaming challenge, even on Steam or Live.
The easiest way to start playing vintage games on the
Raspberry Pi is to install RetroPie which packs a bundle of
emulators. You can install RetroPie on an existing Raspbian
install or run it as a standalone distro. Before you fetch the
installation script, grab its dependencies with sudo apt-get
install git dialog. Then you can download the latest RetroPie
setup script with:
git clone git://
Now change into the newly cloned directory and run the
script with cd RetroPie-Setup && sudo ./
The script will download missing dependencies and then
present a menu with a couple of options.
The first option downloads and installs precompiled
binaries of all the popular supported platforms. The second
grabs the newest source code for each app and compiles
them on the Pi. If you choose this option, be prepared to wait
as it might take almost an entire day. In case you aren’t this
patient, you might as well grab the image file and transfer it
on to a blank SD card with dd.
RetroPie uses a graphical front-end called
EmulationStation, which allows you to manage the various
installed emulators. The RetroPie image automatically starts
this frontend, but if you've set up RetroPie on top of an
existing Raspbian install, you’ll have to launch it manually with
the emulationstation command.
Once it’s up you should see the controller setup screen,
from where you can configure your USB gaming controller. In
order to play the games, of course, you must first own them.
ROMs can be made from your own copies of these old games,
or you can find abandonware online. Some companies, such
as id Software, have released the source code of the original
Doom under an open source licence. Once obtained, you'll need to copy them to
their appropriate emulator sub-folder inside the roms/ folder
on the SD card.
EmulationStation will only display the emulators that have ROMs added.
Donate excess resources
If you aren’t always using your Pi, you can
donate the idle processing power for a
worthy cause. BOINC is a computing
volunteering service and it uses the
donated power for a variety of projects,
from protein folding for medical sciences
to the search for alien intelligence.
Before installing BOINC, tweak the
Raspberry Pi to cut down its own
resource utilisation by calling sudo
raspi-config. Head to Advanced Options
> Memory Split and earmark the least
possible memory for the GPU which you
should find is 16MB. Then install BOINC
with sudo apt-get install boinc-manager boinc-client.
When it’s done, launch the BOINC
Manager from the Others menu. You'll be
asked to add one of over 30
supported projects. Some projects might
warn you that they might not have work
suitable for the Pi. While you can still add
them, it’s best to add projects that can
make use of the Pi such as the Collatz
project, which is attempting to disprove
the Collatz Conjecture.
To configure how BOINC uses the Pi’s resources, first
enable the Advanced View and then head to Tools >
Computing preferences.
Hack the Raspberry Pi
Skills: Python, eSpeak
Hack #8: Make your Pi speak
Software speech synthesizers are a popular way to bring
computing devices to life – so to speak. Of course, they
also have a crucial role in making the device
accessible to vision-impaired users.
You can use the Raspberry Pi for text-to-voice commands
thanks to the availability of the powerful eSpeak library in
Raspbian's repository. There's also a module that lets you
use eSpeak from Python to perform
automation tasks. Fire up a terminal and use the sudo apt-get install espeak python-espeak command to fetch the
library and the required Python modules.
Using the eSpeak library is pretty straightforward. Type
espeak “Hello! How are you doing, today?” in a terminal
and the library will use its default settings to verbalise the text
inside the quotes. You can then influence the speech of the
eSpeak library with a wide array of command-line switches.
For example, the command espeak -ven+f2 -s140 "Aren't
you a little short for a storm-trooper" will speak the
message at a slower pace and in a female voice. In addition
to the default English voice, eSpeak can also do American
English and English with a Scottish accent. Besides English
eSpeak can also speak various other languages. The
command espeak --voices lists all available voice files.
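If you find yourself scripting eSpeak often, those switches are easy to wrap. The helper below is a hypothetical sketch, not part of eSpeak itself; it simply builds the same command lines shown above so they can be launched with subprocess.

```python
import subprocess

def espeak_cmd(text, voice=None, speed=None):
    """Build an espeak command line from the switches used above.

    voice: e.g. "en+f2" for the second English female variant
    speed: words per minute, e.g. 140 (slower than the default)
    """
    cmd = ["espeak"]
    if voice:
        cmd.append("-v" + voice)       # espeak takes -v<voice>, no space
    if speed:
        cmd.append("-s" + str(speed))  # and -s<wpm>
    cmd.append(text)
    return cmd

def say(text, voice=None, speed=None):
    # Hand the assembled command to the shell-free subprocess call.
    subprocess.call(espeak_cmd(text, voice, speed))
```

Calling say("Aren't you a little short for a storm-trooper", voice="en+f2", speed=140) then reproduces the example command.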
The Python eSpeak module is quite simple to use to just
convert some text to speech. Invoke the interactive Python
shell with the command python and enter the following lines...
from espeak import espeak
espeak.synth("How ya doin'?")
Now let’s use the eSpeak library inside a Python script to
print and speak out loud the name of your Twitter followers.
Follow the instructions in the first hack to get the Twitter API
and access tokens and their secrets and place them between
the single quotes in the code below:
import time
from twython import Twython
from espeak import espeak
api_token = ' '
api_secret = ' '
access_token = ' '
access_token_secret = ' '
twitter = Twython(api_token, api_secret, access_token, access_token_secret)
next_cursor = -1
while (next_cursor):
    search = twitter.get_followers_list(screen_name='your_username', cursor=next_cursor)
    for result in search['users']:
        print result['name']
        their_name = result['name']
        espeak.synth(their_name)
        time.sleep(2)
    next_cursor = search['next_cursor']
You can customise and extend Jasper by adding your own commands.
In the above code, we connect to our Twitter account and
fetch a list of all our followers. We use a technique known as
cursoring to separate a large set of results (the list of
followers) into pages and then move forward through them
(with next_cursor). The for loop runs until our list of followers
is exhausted. For every follower, the loop prints their name,
stores it in a variable (their_name) and then passes it on to
the eSpeak library which reads it aloud. It then pauses for a
couple of seconds before moving on to the next follower.
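Cursoring is easy to see in isolation with a stand-in for the Twitter endpoint. The fake get_followers_list below is a hypothetical substitute for Twython's call, not the real API: each call returns one page of users plus the cursor for the next page, and a next_cursor of 0 means no pages remain – the same convention the script above relies on.

```python
FOLLOWERS = ["alice", "bob", "carol", "dave", "erin"]

def get_followers_list(cursor=-1, page_size=2):
    """Stand-in for twitter.get_followers_list(): one page per call.

    Twitter's convention: cursor=-1 requests the first page, and
    next_cursor=0 in the reply signals that no pages remain.
    """
    start = 0 if cursor == -1 else cursor
    page = FOLLOWERS[start:start + page_size]
    done = start + page_size >= len(FOLLOWERS)
    next_cursor = 0 if done else start + page_size
    return {"users": [{"name": n} for n in page],
            "next_cursor": next_cursor}

def all_followers():
    # Same loop shape as the script above: keep requesting pages
    # until next_cursor comes back as 0 (which is falsy).
    names, next_cursor = [], -1
    while next_cursor:
        search = get_followers_list(cursor=next_cursor)
        for result in search["users"]:
            names.append(result["name"])
        next_cursor = search["next_cursor"]
    return names
```

Swapping the fake function for the real Twython call (and adding the print/speak/sleep steps) gives you back the original script, but this version lets you test the paging logic without network access.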
Hack #9: Voice control your Pi
Apple has Siri, Google has Google Now and Microsoft has
Cortana. If you think the Raspberry Pi doesn't have a digital
assistant of its own, then you haven't heard of Jasper. Unlike
the previously mentioned assistants, Jasper is open source.
To hear Jasper’s output you’ll need to hook up speakers to
the audio port. However, as the Pi doesn’t have a microphone
input, you’ll need to find a USB mic that works with the Pi. If
you’re feeling adventurous you might as well try plugging in a
USB webcam – some of these have microphones. After
hooking up the microphone, make sure the Pi recognises it as
a recording device with the arecord -l command.
While you can install Jasper on top of an existing Raspbian
installation, the recommended way is to use the SD card
image. Grab the tarball off the project's website and extract
the image from it with tar zxvf jasper-disk-image.tar.gz.
Then, assuming /dev/sdd is the location of the blank SD card,
type sudo dd if=jasper-disk-image.img of=/dev/sdd to write the image. When it's
done, boot the Pi from the SD card. You can configure Jasper
from a remote computer via SSH, so you’re ready to get
started if the Pi is connected via Ethernet. If you’re using WiFi,
you’ll have to connect a monitor to the Pi, start the X
environment and use the graphical Wi-Fi config utility to set
up the connection.
The login credentials for the Jasper image are the same as
for a stock Raspbian distro, that is pi:raspberry. Once the
network's been set up, connect to the Pi over SSH. Then, in the
home directory of your Pi, use git clone to fetch the Jasper
client into a directory named jasper. Then upgrade the setuptools library with sudo pip
install --upgrade setuptools. When it's done, pull in the
required Python components with sudo pip install -r
jasper/client/requirements.txt. Finally, set the required
permissions inside the home directory with sudo chmod 777
-R * and restart.
While it reboots, Jasper will run the boot.py script and
generate the languagemodel.lm file inside the ~/jasper/
client folder. Make sure the file has been created. If not, you
can manually invoke the script with python ~/jasper/boot/boot.py. Once the language model file has
been created, proceed to create a user profile for Jasper to
get to know you. Change to the ~/jasper/client directory
and run the profile population script with python populate.py.
The script will ask for information such as your name,
contact details, etc. Jasper uses this information to
communicate with you and respond to your queries more
accurately. Also remember that Jasper will ask you for the
password for your email account and then store it as plain
unencrypted text. The details are stored in the profile.yml
file. Follow the documentation on the project’s website to
integrate other services such as Facebook and Spotify.
Restart the Pi again after you’ve run through the profile
creation script.
When it’s back up, Jasper will greet you with the speech
synthesizer you selected while creating the profile. You can
now start interacting with your new assistant by saying
‘Jasper’. If it catches the phrase, Jasper will respond with a
beep. You can now speak your command, such as 'What's the
time?' or 'Do I have new email?'. Again, if Jasper can hear you it
will respond with the answer. Depending on the quality of your
microphone and the oratory skills of the operator, your initial
conversations with Jasper might not be very pleasant.
Hack #10: Minecraft Pi Edition
The Minecraft Pi Edition is a cut-down version of the popular
Pocket Edition but includes enough components to unleash
your creativity. You can explore randomly generated worlds
and use the construction mode to build your own creations.
The best thing is its API, which can be accessed using
Python. You can write Python scripts to move the player as
well as create and destroy blocks. This allows you to create
structures in seconds that would take hours to create by
hand. The API also lets you create interactive objects such as
clocks and equips you with teleportation powers. You can
learn programming while having fun.
To install the Pi edition, head to the Minecraft Pi website and
download the compressed archive. Then extract it under your
home directory with tar zxvf minecraft-pi.tar.gz. This will
inflate the archive under the ~/mcpi directory. Switch to the
directory and launch the game with ./minecraft-pi. If you’re
not familiar with Minecraft, you control movement with the
mouse and the WASD keys. Keys 1-8 select items in your
quickbar, the space bar makes you jump, and a double-tap
on the space bar toggles the ability to fly.
You can use the API to connect to a running Minecraft
instance. However, before you take control with the Python
API, it's a good idea to make a replica of the Python API folder
from within the ~/mcpi folder to a new location. In the
terminal, type mkdir ~/mycraft to create a folder and copy
the API into it with:
cp -r ~/mcpi/api/python/mcpi ~/mycraft/minecraft
We’ll now save our custom scripts inside the ~/mycraft
directory. While the game is running, press the Alt+Tab keys
to switch back to the terminal. Open a new tab in the terminal
and change into the mycraft directory. Fire up a text editor
and create a new script inside ~/mycraft with nano, then copy the following into it:
# Demo script to interact with the Minecraft environment
import minecraft.minecraft as minecraft
import minecraft.block as block
import time
# Connect to minecraft by creating the minecraft object
mc = minecraft.Minecraft.create()
# Post a message to the minecraft chat window
mc.postToChat("Hello, this is a demo of Minecraft API.")
playerPos = mc.player.getPos() # Find your player's position
# Change your player's position
mc.postToChat("Let's teleport you 50 blocks in the sky!")
mc.player.setPos(playerPos.x, playerPos.y + 50, playerPos.z)
# wait for 10 seconds while you fall back
time.sleep(10)
# - create a STONE block in front of you
playerPos = mc.player.getTilePos()
mc.setBlock(playerPos.x+1, playerPos.y+1, playerPos.z, block.STONE)
# wait five seconds
time.sleep(5)
# - Now change that block into WOOD
mc.setBlock(playerPos.x+1, playerPos.y+1, playerPos.z, block.WOOD_PLANKS)
# - Now let's create a tower by stacking blocks
for top in range(0, 10):
    mc.setBlock(playerPos.x+3, playerPos.y+top, playerPos.z, block.STONE)
# - Let's now teleport you to the top of the tower
mc.player.setPos(playerPos.x+3, playerPos.y+10, playerPos.z)
Save the script and, while Minecraft is running, fire up the
script with python from inside the ~/mycraft directory. If you haven't made
any typos, you’ll see the welcome message in the game as
defined in the postToChat string. We then find your player’s
position with getPos and store it in the playerPos variable.
The player's position is defined with the X, Y and Z coordinates
that you can see in the top-left corner of the screen. We then
use setPos to change your position and teleport you into the
sky. When you fall back down, we use the setBlock
call to create a stone block (block.STONE) in front of
you and then, after five seconds, replace it with a wooden
block (block.WOOD_PLANKS). We then use a for loop to
stack ten stone blocks a little further away, and then place
you on top of it before ending the demo.
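The block-stacking arithmetic is plain coordinate maths, so you can check it without a running game. tower_blocks below is a hypothetical helper, not part of the mcpi API: it returns the coordinates that the for loop above would pass to setBlock, one tuple per block.

```python
def tower_blocks(x, y, z, height=10, offset=3):
    """Coordinates for a vertical stack of blocks.

    Mirrors the demo's loop: every block shares the same x+offset
    and z, while y climbs by one per iteration, so the result is a
    `height`-block column standing `offset` blocks away from (x, z).
    """
    return [(x + offset, y + top, z) for top in range(height)]
```

For a player standing at (10, 0, -5), tower_blocks(10, 0, -5) yields ten coordinates from (13, 0, -5) up to (13, 9, -5) – feeding each into mc.setBlock(..., block.STONE) builds the tower from the demo.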
This is really just a demo of the things that are possible
with the Python API. Download the neat PDF cheat sheet of the parameters
in the API along with examples and brief descriptions. Using
in the API along with examples and brief descriptions. Using
Python to control Minecraft is extremely powerful and offers
so much flexibility that we can’t possibly demonstrate it all in
this limited space. However, if you’re intrigued, check out
Jonni’s Minecraft Python tutorials in previous magazines (see
Coding Academy, p83, LXF185 onwards). LXF
In LXF186, Jonni showed you how to build a cannon and blow stuff up in Minecraft.
Michael Shiloh
Michael Shiloh is an artist and Arduino lecturer
who loves taking things apart to build new creations.
Matt Hanson talks to him about his hobbies.
Michael Shiloh is a
lifelong tinkerer, a
supporter of free
and open source
software and
hardware, a tutor
and an artist. He's
heavily involved
with the DIY Maker movement, and his
artwork and robots are aimed at
inspiring people to make their own
creations – which is why he often uses
Arduino boards.
Linux Format: First of all, Michael, your
background sounds absolutely
fascinating, with your work combining
DIY creations, electronics and art. So how
do open source, electronics and art
actually combine?
Michael Shiloh: That’s a really good
question, and it’s one that I think a lot about,
actually, because I also teach and I think that
teaching and open source go together hand-in-hand. Open source, in a way, is providing
the software, but it's also teaching about
how the software works. Of course,
someone who is willing to share their
software as open source is willing to show
the world how they did something. So it’s a
willingness to not only share the code, but
share the knowledge that someone has.
I think to some extent there’s a
connection between that and the art world
as well. In my experience in the art world
there are – well, I want to say there are two
kinds of people, but that’s a bit of a
generalisation. But to use that expression,
there are some people who like to cover
their work up so that all you see is the
exterior of what they want to present, and the
inner workings of how they got there is hidden.
And then there are those who are more than
happy to have the inside be exposed. This isn’t
just true of mechanical art, which is what I’m
kind of referring, but even with visual art, where
someone might have a particular technique. So
there might be, I don’t know, maybe a particular
application of paint colour even, and one artist
might say, “I’m not going to tell you how I
developed that colour, it’s my own secret.” Then,
somebody else might say, “I’m happy to share
with you how I did this: here’s my recipe!” So I
find that’s a lot like the open source versus
closed source movement. I see it in viewers as
well. I look at a piece of art and immediately
want to look behind it. How was it done? How
was it made? Then, by the same token there are
other viewers who just would enjoy something
as it’s presented, and not be curious about how
it’s working. Like I said, though, breaking things
down into two is a fallacy in the first place, but
you can sometimes think of things that way.
LXF: Definitely. And as you say there are
some artists who want to be shrouded in
mystery, and almost remove themselves
from the art and have their artwork without
context, and then there are other people
who invite people ‘behind the curtain’.
MS: Yes, and this is not to judge either one, but I
think they are just two different approaches.
Neither is better, neither is correct.
LXF: So what is the scope and the breadth
of the projects that you’ve worked on?
MS: They seem to fall into two categories.
Maybe three. One of them is mechanical art,
also sometimes called kinetic art or machine
art. I got exposed to this field through my work
with a group called Survival Research Labs. It's
a group that's been around for over thirty years
now. It's a performance group, but all of the
performers are machines. So they build large
machines, most of which are remote controlled,
and then stage productions with them. The
performers are the machines, and the humans
are the operators, sort of backstage or off-stage.
So a lot of the machine art that I do is heavily
influenced by that type of work.
Then I also have done a series of furniture
pieces which are kind of similar in the sense
that I use a lot of metal; I use a lot of concrete.
So, sort of, the industrial materials. Heavy
industrial materials that I think also reference
the connection to machine art. Usually they
have a conceptual nature. I'm very interested in
the conceptual aspect of art.
The third category is a thing that I'm having
a lot of trouble defining. It's creating situations
in which people have to do something in order
for the art to be meaningful. So, one example is
a robot that I built quite a while ago, maybe 20
years ago. In fact, I had to go and dig through
the Internet Archive to find the original website.
It was a very simple robot, not many sensors, a
couple of motors. So the computer that drove
the robot was a full Linux server. It was on the
internet, wide open. It served a web page, and
there was a domain that linked to it. So, the
robot wouldn't do anything until someone
logged in and wrote a program for it. So, on the
webpage I provided a number of examples, I
explained how it worked. It was all documented,
there were examples, and I gave the instructions
for logging in. So I gave the root password, and
the whole point was people from around the world
would connect to it, write programs for it and
the robot would do something. It was a very
interesting experiment, it would spend most of
the day doing nothing, and then suddenly it
would wake up and start roaming around!
Usually in the middle of the night, which
suggested that most of my visitors were on the
other side of the world… or maybe just up in the
middle of the night. Many of them didn't leave
comments, I had no idea who they were, what
they were doing, where they were from. Some
of them left meaningful comments. One of my
favourites was that somebody wanted to make
the robot dance. So, they were experimenting
with different movements to make it dance. It
was something that I had never thought of.
So, the whole concept of putting something
out there that was sort of a blank slate, and
then requiring that people do something on it, is
one that has intrigued me, and I have done a
number of similar pieces like that since. I’m still
trying to figure out how…
I run into difficulty sometimes justifying that
as art. I’ve done a number of other pieces like
that. We did a similar robot project, a smaller
version, and did it at an art gallery. People had
to sit down and program it on the spot.
Amazingly, people did. I’ve always said that the
true measure of something’s success in a
gallery is if you can compete with free cheese
and wine...
LXF: Fantastic. So it’s interesting to think in
a general art gallery how much of the
audience would be able to go and program –
is that more your audience?
MS: Not necessarily, this was a general art
gallery, and we did the project with Arduino, so it
was very simple, and we provided examples and
explained to people what was going on. And
yeah, you’re absolutely right, it isn’t for
everybody. But we had people who'd never
programmed before in their lives sit down and
program. One of my friends said the trick is not
to tell them that they’re programming. You just
say they are just modifying something. You give
them some pointers to what to modify, and
what to expect when you modify something a
certain way, and then just let them fiddle with it.
Then, they don’t think they’re programming.
Which was actually another interesting aspect.
I’m very interested in the part of teaching
technology where, if you were going to tell
people they were going to do something, say as
complicated as programming, and they had
some phobia about it, or they didn’t consider
themselves programmers, they wouldn’t
approach. But, if you somehow don’t present it
as such, then more people will join, that
otherwise would have missed out on it. I learnt
that at the first Maker Faire, where we did this
huge workshop that was designed for people to
take stuff apart and build new things out of
them. What made it work in this way was that
there was a lot of space there. Whole families
would come in and sit down and usually they
would say ‘we’re just going to watch’ and only
one family member would be the builder, the
designated person who would get involved.
Usually it was a young boy, stereotypically. But
what happened was that after a while you’d see
the parents starting to fiddle with things. There
was stuff everywhere so there was plenty of
materials. The other kids would start playing
with stuff, maybe they’d start playing with
somebody sitting next to them. They would see
what other people were doing and begin adding
to it. A few hours later they would start building
these wonderful big things, and it was really
interesting to me, that none of those people
would have signed up if I had said ‘You’re going
to make something here'. But, by virtue of the
fact that there was space for them to sit down,
they got involved. And that's what kind of ties it
into these robot projects. How can I trick people
into doing these things without their barriers
coming up, and saying, "Oh, I can't do that"?
LXF: The connotations of the word
'programming' for a lot of people means that
they can sometimes tell themselves "Well, I
haven't got a computing degree, I won't be
able to do that." But to invite them to make a
robot dance? It brings almost a bit of a
human element into it, and they suddenly
say, "Maybe I can do that!" Something that
we've been struggling with in the UK is
getting kids coding, encouraging that.
MS: You're right, and I think that this is
interesting, from the point of view of education,
for that very reason. The thing about teaching
about robotics and electronics and
programming, there's going to be a whole lot of
students who would gravitate towards those
subjects one way or another. Then, there will be
another load of students who will run away from
that, no matter what you do. It's that group that
I'm interested in. You know, the first group will
find it. But the latter group – how do we reach
out to them?
LXF: So was open source something that
you discovered naturally through your
career? Was it an extension of your artwork?
MS: I think I discovered open source long
before I think it was even a term. I went to
university at Berkeley at the time that Unix was
being developed, and the internet – though it
wasn't called that then – was just being
developed. People shared stuff, very very freely,
because there didn't seem to be any value in it!
So, if you wrote a clever little program – for
instance one of my office mates wanted to
know when she got new email. So, she wrote a
little program that would ring a bell whenever a
new email arrived. Of course now that is
something that every mail reader has, but back
then it wasn't. It was immensely useful and it
spread all around the internet, and I don't think
that she officially made it open source, but there
was just no question about it.
I think it was more a matter of being
culturally used to sharing information that way,
rather than closed source, that introduced me
to open source. It wasn't until I began running
into programs that weren't that way, that I sort
of said, "Oh, there's a whole other half!"
LXF: So a lot of your background involves
tinkering and rapid prototyping – is
that how you were introduced to
the Arduino?
MS: You know, I'm not sure exactly how I was
introduced to Arduino, but it was probably
around that. I've always been a tinkerer, it's not
as if one day I decided I'm going to be a tinkerer!
It's just the way I work; fiddling with all kinds of
different electronics and building things and
making stuff. I can't remember the specific time
I met Arduino, I think I just started hearing the
name and decided it was something that I should
know more about. I actually had co-founded a
company many years earlier to do what Arduino
is doing now, and we weren't the only ones.
There were other companies doing similar
things. The BASIC Stamp, for example, was
around then. This was back when a lot of
computers still had serial ports. The new Apple
computer, I think, was the first computer to not
have a serial port. Everyone was in a panic,
because what are we going to do now with all of
these boards that use serial ports?
So, I and a couple of my friends founded a
company to make a basically Arduino-like thing.
It had a USB connection, in fact it used the
same chip that Arduino ended up using, the
FTDI chip. It also had a number of generic I/O
ports that you could configure in a number of
ways. And we were moderately successful, we
were in maybe as many as a couple of hundred
universities worldwide, I think Bath University
had one actually. It wasn't wildly successful,
though. It didn't explode in the way Arduino did.
After a number of years of doing that,
actually it was around the time Arduino came
around, we felt it was no longer necessary. So I
was keeping my eyes out for something like that
anyway, and that was how I heard about
Arduino. All of these different types of boards
had different features, different prices, different
levels of complexity, different kind of software
approaches, and I think one of the successes of
Arduino is that they hit a sort of sweet spot of all
of those decisions. Like how complex should it
be, what could the software do, what should the
user have to do, how much should it cost?
And the other piece of magic is that Arduino
immediately developed a large community, and
that was really key to its success. Because with
that community comes not just validation, but
also the support that a small company just
cannot provide.
LXF: It also shows the power of open
designs, and how that can rapidly grow. The
ecosystem is huge now, and it attracts a very
committed community. So, open hardware
seems like an obvious extension to open
source software, but the comparative
number of projects is still rather small. What
do you think the stumbling blocks are?
MS: I think probably there are two major
stumbling blocks. One is that with open
software you can reproduce it just by copying a
file. With open hardware you have to actually
manufacture something. I think the other
difficulty is that I think there’s a sense with open
software, as a business, that you can make your
money elsewhere: in support, in custom
versions of that software. With hardware, I think
there’s a sense that the only way you can make
money from the hardware is by selling it. And
then once you give someone the ability to make
their own, they have no use for you.
LXF: But do you see open hardware
continuing to grow despite all these
stumbling blocks?
MS: Good question. I hope it does, I think it
probably will – there’s certainly a huge open
hardware movement right now. But, I think
maybe because of those stumbling blocks, I
think that, well…
Here’s why I’m hesitating. There’s a huge
amount of interest in the sort of maker, DIY
movement right now. And of course DIY and
making isn't a new concept, we've been doing it
since, well, the beginning of our race, pretty much.
So what’s happening now, I think, is a sort of
renewed interest. I think we’ve seen it go
through cycles. I think that, and I’m not much of
a historian, but I get the sense that when mass
production started becoming very prevalent,
there was also an attitude change that needed
to come along with it; mass consumption, of
course. But for mass consumption to work you
need to be willing to throw away things in order
to buy new ones. You only need one phone, but
if that phone is going to become obsolete in six
months, then you’re going to have to buy
another one. I think that discouraged the DIY
and the maker and the tinkerer, because as
things are mass produced they have to be
produced in a way that is as inexpensive as
possible, which means that they need to be as
compressed as possible. So it just makes things
harder to tinker with. So, there’s a resurgence of
interest now in the maker/DIY movement, and
my fear is that it's just another one of the cycles.
That there will be this interest and then it will
collapse. So I think the interest in open
hardware is tied to that because much of the
open hardware products are in this community.
If this community becomes less relevant, then
I’m afraid that open hardware will become less
relevant. What I’m hoping is that this isn’t a
cycle but will continue – an upward trend. If this
interest in DIY is an upward trend, then I
suspect that yes, the open hardware movement
will also continue as an upward trend.
LXF: Certainly with open source software
there’s also an ethical element with making
software available for people who wouldn’t
be able to afford an expensive licence for a
closed source software equivalent. With
open hardware, as you said, as much as we’d
like to offer materials and tools to build the
hardware, it’s not always possible. Your
LinkedIn profile states that ethical
approaches to resources are very important to
you, so how do you ensure that that's at the
forefront of what you do?
MS: I don’t know if I succeed in that. I’ve always
had an interest in repairing things, and there is
certainly a huge ethical aspect to that. There’s
also a certain, I don’t know, aesthetic or… I feel
good. That probably is an ethical thing. I feel
good when I’m able to repair something or
reuse something instead of buying a new
replacement. So, I guess in my own work I don’t
have to ensure it because that is where I will go
naturally, it’s a tendency of my own. When I
teach this stuff, or when I am preparing a
workshop, in many of my classes we do use a
lot of recycled and discarded electronics. It’s not
so appropriate for a three-hour workshop, but
for instance I’ve just finished doing a two-week
workshop for the California State University.
Because it was two weeks we could do that, so
before the workshop we had the students
gather a number of discarded electronics and
we talked about taking them apart, and what
things were valuable inside, so they could
recognise different components and how they
can be reused. Then they went ahead and built
new things with them. So I think by
demonstrating that, and making those
materials available we get that ethic passed on
and instilled as early as possible.
LXF: So with Arduino being quite an open
design, there’s also the Raspberry Pi which
is an interesting contrast, because a major
complaint is that despite its popularity the
Pi still has closed source elements. What are
your feelings on this?
MS: I guess I don’t have very specific feelings
on that. I’m not a fanatical open source user – I
do use Android rather than iPhone, but then the
phone itself uses closed source elements. I
would prefer if the Raspberry Pi were open
because I think there is an educational
opportunity there that’s missed. I guess I don’t
think of it as a ‘this versus that’ but just another
option in this pantheon of boards that do this
sort of thing.
46 LXF191 December 2014
Michael Shiloh
LXF: Sure, and what’s more important is the
endgame, which is getting kids involved with
building their own projects, having it
affordable. Would you say that was more
important than worrying too much about
how much of it is closed source?
MS: Yeah, and Eben’s endgame was very much
about getting as many of those computers in the
hands of people as possible, so if he was able to
bring the price down further by agreeing to
include closed source hardware then he’s
accomplished what he wanted.
LXF: Absolutely, and we’re quite excited at
Linux Format about how the Raspberry Pi is
being embraced by schools in the UK. Do
you see Arduino playing a similar role in the
US and the rest of the world?
MS: Yeah, in the US and the rest of the world –
very much so in Europe. I see it in more and
more places – in more schools and universities.
And I’m always curious as to how they find out
about it, how they decide to apply it. I can see it
coming into university level, but with high
schools it surprises me – and always pleases me.
I suspect it’s most often one teacher who
knows of it, who says, “this will be applicable to
what we are doing.”
What I like seeing is that it's not being used
just to teach programming or robotics, or the
things that it was designed for, but that
teachers are starting to see it as a tool for
talking about whatever else they want to talk
about. So an example of that is environmental
science, for monitoring the atmosphere and the
water and what have you, and I had a
conversation with a physics teacher who
wanted to use Arduino to measure a toy car
going down an inclined plane; all the classic
physics experiments but instrumenting them
with Arduino. So I said, “Is this really adding
much to what you’re teaching?” and he replied
that the most important thing is that the kids
are really into it. They’re really interested in this
little board with the blinking light. So he said, “If
that makes them pay more attention to the
things I’m trying to teach them, then I’m willing
to use it!” So it becomes sort of a vehicle for
exciting the classroom, and that’s great.
LXF: So where do you see Arduino heading
in the future?
MS: I think – and I should say this isn't the
official Arduino point of view, this is just my
interpretation of what I see and what I hear –
obviously there are new processors, new microcontrollers coming out all the time, so taking
advantage of the ones that seem appropriate
for ‘Arduinoisation’ is definitely interesting. As
they become more powerful, people will sneer
at the older models so we have to remain
current, when appropriate. I think the Arduino
Yun has shown that an easy way to do
networking is very important, so I imagine there
will be more of that in the future; a continuation
of that. Education is something that’s quite
important to us, so figuring out ways – like we
were talking earlier about – how it's being used
in schools. Figuring out how to continue with
that and make it more accessible; providing
more materials for that. Those are the trends
that I’m aware of, I can speculate on a lot of
things but I really don’t know for sure!
LXF: So with the Arduino Yún, which is a cool
Linux and Arduino hybrid, where are you
seeing that being used in particular?
MS: I’m aware of a number of individual
projects, but I’ve not seen them quite fall into
categories of things. The most common place I
see the Yún being used is anything that needs
to be connected to the internet; because it has
Ethernet and Wi-Fi built in, it’s just ideal for that
kind of stuff. In fact, the workshop that I just did
at OSCON was focused on just that. So, the
idea being that if you want to control a robot on
a network, monitor plants, or build any kind of
control or sensing application that needs to be
network accessible... that’s ideal for the Yún.
The target audience for the Arduino
originally was not engineers. So the challenge
was how do you get non-engineers to turn
LEDs on and off, or recognise presses on
buttons? It was showing them how to do this
without having to become engineers. So when
the Yún came along – it now has a real
computer on it, it now is serious engineering –
the question I find when doing workshops is
how do you fit that in with that philosophy?
The Arduino software developers created a
mechanism so that from within your Arduino
sketch you can cause things to happen on the
Linux side: you can run a program on the Linux
side, you can execute a command and get
responses back. You can tell the Linux side to
go out on the internet, download some data
and feed it back to you.
LXF: Finally, you co-founded Teach Me To
Make – how successful has that been?
MS: Actually much more than we thought. I
originally created Teach Me To Make because
at that workshop I told you about earlier at the
Maker Faire, I found I needed a website, a URL
for somewhere to put it. So I came up with
Teach Me To Make almost on a whim, and
grabbed the domain so I could at least put
materials, pictures, whatever up there. But
really, Teach Me To Make as an organisation
came into being when my partner Judy and I
started doing workshops together and we
needed a place to put our materials. We
needed a place to advertise what we did, so
whenever we teach a workshop, whenever we
do an event, whenever we develop some new
project, we put that information up on the
website. We have a WordPress blog on it,
because it’s easy and I like something I can edit
on the fly, and WordPress is good for that. But
I’ve put up a lot of materials, usually things that
came out of questions my students asked.
One of the recurring questions was ‘How do
I use a transistor as a switch?’ So I did a pretty
thorough treatment on that. I’m amazed but I
keep getting hits for that, and getting questions
from around the world on how to do things. So
people are seeing it; even though we are not
actively advertising Teach Me To Make very
much, people keep finding us and asking us
how to do things. LXF
“We’re not advertising, but
people keep finding us and
asking us how to do things.”
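The sketch-to-Linux mechanism Shiloh describes – run a command on the Linux side, get the response back – can be illustrated generically in Python. This is a sketch of the idea only, not the Yún’s actual Bridge API (which is an Arduino library):

```python
import subprocess

def run_on_linux_side(command):
    """Run a shell command and return its standard output,
    mimicking the Yun bridge's run-and-read-back pattern."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# For example, ask the Linux side to generate some data
print(run_on_linux_side("echo hello from the Linux side"))
```

On a real Yún the sketch side would use the Bridge/Process library; the pattern – issue a command, read its output – is the same.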
Turn on, tune in, JACK out
Demystifying JACK: discover how you can plug
into the world of professional audio on Linux.
For more pro-level music guides, advice
and help visit:
JACK is the de-facto standard audio
server for working with professional
audio on Linux. JACK (a recursive
acronym for ‘JACK Audio
Connection Kit’) is a very powerful piece of
software. Some new users find it confusing
at first and that’s understandable, with its
complex interface and myriad settings.
However, you only need to know the basics
to get started and take advantage of its
underlying power.
By the end of this guide, you will hopefully
have a sufficient understanding of JACK and its
workings so you can set it up, forget about it
and get on with your work, utilising the power
that it provides. Throughout the article we’ll
refer to any piece of software that supports
JACK as being ‘JACK aware’.
What’s JACK?
JACK is a sound server optimised for the
demands of audio production work. There are
a few main aspects to what it does. Let’s have
a quick overview of these now.
Settings: JACK controls your audio and
MIDI settings. It allows you to choose your
audio interface as well as all the important
audio settings such as sample rate, buffer size
and periods. These settings will be explained in
more detail further down.
Performance: Using JACK will allow you to
achieve low latencies with both audio and MIDI.
This means that if you are recording an
instrument into your computer, you can
monitor the audio back through your speakers,
or headphones, without any perceivable delay.
For a good overview, and visual description of
latency, check out ‘Latency and Latency-Compensation’ at
Connections and Interconnectivity: This
is JACK’s strong point. Any inputs and outputs
from your audio interface and/or JACK aware
programs can arbitrarily be connected
together. JACK not only deals with connections
between programs but also within programs.
Any JACK aware program uses JACK to
manage its inputs and outputs. The beauty of
this is that these connections are available for
any other JACK aware programs to also
connect to. They are not restricted to only
being ‘internal’ input and outputs.
All of this makes JACK a very flexible tool
and allows you to interconnect any number of
JACK aware programs.
All in sync
JACK can be used to sync up multiple
programs. This means that if you want to use
the MIDI sequencer from one program, and the
audio sequencer from another, you can easily
keep them in sync with JACK. Both programs
will start and stop at the same time. Moving
along the timeline in one program will be
mirrored in any other programs that are set up
to sync with JACK.
You do know JACK
1 Select a driver
Ensure you select the Cadence
driver tab (left window) and
choose your driver, typically
this will be ALSA.
2 Select interface
Ensure your interface is
connected and powered,
otherwise it might not show up
here for you to select.
As you can tell, JACK is a very capable tool
but don’t be intimidated by its potential
complexity. Using JACK does NOT need to be
complicated. Just because it’s capable of
intricate workflows doesn’t mean it’s not also
suited to more straightforward set ups. It will
do what you want it to do, and is flexible
enough to allow you to do more, if you ever
need it. You can also happily work in one
program and allow JACK to manage all its
connections and never leave that program.
We now know what JACK is and what it’s
capable of. How do we use it? The first thing to
note is that JACK itself is a command line
program; however, there are various graphical
managers for JACK that allow you to easily
unlock its power. There are two main types of
JACK managers, so you can choose the type
that best suits you and your workflow. They
can be categorised as JACK set up managers,
which allow you to start up JACK with specific
settings, and JACK connection managers,
which primarily deal with making connections.
The various offerings of JACK managers
combine these two aspects together to varying
degrees. So, why would you choose one over
the other? If you are doing all of your work
inside of a DAW and you use no external
programs, you might be happy with a JACK set
up manager to just start up JACK with your
selected settings. In such a case, you can make
all your software connections from within your
DAW of choice so there isn’t as much need for
an external connection manager.
3 Choose settings
Match your interface settings
here. If you were wondering,
frames/period is exactly the
same as buffer size.
For people with a more modular recording
set up, including multiple effects processors,
recording and sequencing programs, a JACK
manager more geared towards making
connections might be best, especially as some
modular programs don’t have connection
managers themselves. The following are the
most popular JACK managers:
Qjackctl is a very powerful, and popular,
JACK manager. It allows you to access a large
amount of JACK’s settings and includes a
connection manager, transport controls and
even a manager for JACK session, which is a
session management program. It presents
itself in a small window, allowing you to dig into
only the settings you need to access. It can
also be minimised to your taskbar so it can get
out of your way once you have set it up.
Cadence is an easy to use tool for setting
up and starting JACK. It comes as part of the
KXStudio distro and includes JACK bridges,
which allow you to play normal desktop
sounds, such as flash video, through JACK.
Cadence can also start JACK on login,
including JACK bridges. If you set it up once,
there is very little maintenance thereafter.
JACK Aware
Any software that supports JACK is referred to
as being JACK aware. This means that all its
audio and MIDI inputs/outputs are managed
through JACK. The idea of a DAW using another
routing system can be a bit confusing at first.
The way to think of this is by creating a parallel.
If you are used to your DAW managing all your
connections, think of JACK in the same way,
except that it is system wide. Any software that
is JACK aware can be plugged into this system,
similar to how a DAW would add plugins. The
difference is that JACK allows you to have
precise control of how to route anything you
connect up to it.
Monolithic set ups are where you do all your
work in one program. This is the most
common approach people using Windows
and Mac audio software will be used to. In
Linux, JACK allows for very modular set ups,
although some applications are fully
featured and can also be used as monolithic
set ups, e.g. Ardour and Qtractor.
JACK into Patchage
Patchage and Catia are two similar visual
connection managers for JACK. They give you
an overview of all your connections. Audio and
MIDI ports are colour coded so it is easier to
identify your connection types. Creating
connections is as simple as clicking and
dragging from an output port to an input port.
Basic settings
If you are familiar with audio production on
other platforms, you may already be familiar
with some of the terminology used in relation
to settings. If you are a new user this can be
confusing as JACK has a vast array of settings.
The main thing to remember here is that most
of these settings are not important, at least not
for starting off. If you don’t know what they are,
you probably don’t need them just yet. Here is
an explanation of the main ones that you need
to worry about to get yourself up and running...
Buffer size (frames/period): Smaller
buffer sizes produce less monitoring latency. A
lower setting makes the computer process
audio more often, which allows for lower
latency but at
the expense of increased CPU usage. Higher
(larger) settings are more stable but you won’t
get low monitoring latency with them. If you
are looking to achieve low latency monitoring,
a setting between 64 and 256 should certainly
give you usable results.
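As a rough rule of thumb (a sketch only – the exact figure also depends on your driver and hardware), total buffering latency is buffer size multiplied by the number of periods, divided by the sample rate. The function below is an illustration of that arithmetic:

```python
def monitoring_latency_ms(frames_per_period, periods, sample_rate):
    """Approximate buffering latency in milliseconds:
    total frames held across all periods, divided by
    how many frames are played back per second."""
    return frames_per_period * periods / sample_rate * 1000

# 128 frames x 2 periods at 48,000Hz is roughly 5.3ms
print(round(monitoring_latency_ms(128, 2, 48000), 1))
```

Doubling the buffer doubles this figure, while raising the sample rate lowers it – exactly the trade-offs described above.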
Sample Rate: Higher sample rates result in
less latency, for the same buffer setting. This
setting is dependent on the optimal settings
for your audio interface and your own personal
preference. Some people prefer to record at
higher sample rates, others are happy with
44,100 (44.1kHz), which is CD quality. Another
common setting is 48,000, which some
interfaces work better at. The higher you go
with this setting, the lower your latency will be,
but you will also probably end up pushing your
CPU harder, which can sometimes result in
xruns (pops and clicks).
1 Audio interface inputs
Your main inputs for the track are listed here.
2 Guitarix
A virtual guitar amp, it’ll act on the
signal exactly as a real amp.
3 Hydrogen
The GNU/Linux drum machine inputs to Ardour.
Periods/buffer: If you are making use of a
USB device, you may achieve more stable low
latency by setting this to 3. Otherwise, use a
setting of 2. If you wish to achieve low latency
settings, it can be a balancing act to find out
what works best without pushing your CPU
beyond the brink. If you don’t need to monitor
your audio through your computer with low
latency, there is probably no need to push your
computer any harder than necessary. You
are better off staying with a higher setting and
keeping reliability a high priority. If you have the
option of hardware monitoring from your
interface, it’s wise to use this to monitor your
recordings instead.
To sync programs, you need to make sure that one is master, and the other/others are slave.
4 Ardour
Brings the all-important editing, recording
and mixing element.
5 Audio interface outputs
Inputs are mixed here and sent to the outputs.
Setting up JACK
This guide assumes that you have a properly
configured realtime set up. KXStudio and AVLinux
are two Linux distros that provide such an
environment out of the box. We’ll see how to
set up JACK settings with both Qjackctl and
Cadence. The following steps are relevant to
both, with any exceptions noted...
Step 1: Click on the Setup button in Qjackctl,
or the configure button in Cadence.
Step 2: Make sure that the realtime option is
enabled (Engine tab in Cadence).
Step 3: Select your audio driver (make sure
driver tab is selected for this in Cadence). If
you are using a firewire device, select Firewire.
In any other cases, leave it at ALSA.
Step 4: Select your interface from the
interface drop down menu.
Step 5: Choose the settings for your interface.
Step 6: Click OK to apply settings.
Step 7: Press start!
In Qjackctl, do not touch the name field.
Some new users like to give their JACK server
a name, but this can cause JACK not to work as
expected. Unless you know what you are doing,
leave this alone as it will cause more hassle
than it’s worth.
The connection matrix within Ardour is an
easy way of managing many connections.
Once you have these settings set up, you
can then use any JACK aware programs and
they will run with those settings. Some
programs will allow you to change your buffer
size, so that you can adjust audio latency
without having to restart JACK.
So, even though JACK is very powerful, you
needn’t be intimidated. At its most basic, it is
very easy to set up your interface and get
started. Like with any decent program, you can
learn more about the power of JACK as you go
along but don’t get too bogged down starting
off when all you really want to do is learn to use
some new software.
If you are having any trouble at all getting
JACK to run, or to recognise your interface, just
make sure your interface is both connected up
and turned on before opening up your JACK
manager. If you are still having any issues, as
ever, try rebooting your system while your
interface is hooked up.
Steps for Hydrogen
Step 1: Navigate to Tools > Preferences. In the
Audio System tab, make sure JACK is chosen
as the audio driver and restart Hydrogen.
Step 2: Now you will see two timemaster
buttons showing up in the toolbar. Make sure
that J.trans (Jack transport) is enabled. This
means that it will follow JACK transport, which
is now being controlled by Ardour.
Step 3: Now both programs will be in sync
when you roll their transports. Additionally, as
Ardour is timemaster, you can change the
tempo in Ardour and this will be reflected in
Hydrogen, or any other slaved program.
The fact that JACK manages audio and
MIDI ports, as well as allowing any programs
using them to sync up makes it a very powerful
tool. This enables very modular set ups to be
created using JACK. For anyone interested in
modular set ups, we recommend you look into
session management, particularly Non Session
Manager, to explore the possibilities of
managing modular set ups with these tools.
Making connections
Let’s create a recording scenario to
demonstrate how to make connections using
JACK. Here is the scenario: we’ll be using
Ardour to record the following instruments:
Male vocals – Input 1
Female vocals – Input 2
Acoustic guitar – Input 3
Acoustic bass – Input 4
Lead guitar – running through a virtual guitar amp
Hydrogen drum machine – synced up
MIDI keyboard – plugged into the MIDI input,
with a virtual piano plugin on the track
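Conceptually, every JACK connection is just a (source port, destination port) pair. The session above could be written out as the mapping below – a sketch only, since the port names here are hypothetical and the real names depend on your hardware and session:

```python
# A sketch of the session's routing as JACK-style port pairs.
# "system:capture_N" follows JACK's usual naming for hardware
# inputs; the Ardour/Guitarix names are illustrative guesses.
connections = {
    "system:capture_1": "ardour:Male Vocals/audio_in 1",
    "system:capture_2": "ardour:Female Vocals/audio_in 1",
    "system:capture_3": "ardour:Acoustic Guitar/audio_in 1",
    "system:capture_4": "ardour:Acoustic Bass/audio_in 1",
    "guitarix:out_0":   "ardour:Lead Guitar/audio_in 1",
}

for src, dst in connections.items():
    print(f"{src} -> {dst}")
```

Whichever connection manager you use, this table is all you are really editing; the GUI is just a front-end for making these pairs.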
Another aspect of JACK that can be useful is
JACK sync. Not only can you interconnect
programs, you can also make sure that their
transports run in perfect synchronicity.
We take a quick overview of how to do this
by means of a demonstration that you can
follow here...
Step 1: First of all, start running JACK via your
preferred JACK manager.
Step 2: Next start up the programs you wish
to sync up, in this case Ardour and Hydrogen.
You’ll need to make sure the settings within
these programs are correct first.
Step 3: By default, Ardour is set to be JACK
timemaster, which is what we want. This
setting can be found by navigating to session >
properties and going to the timecode tab. We
will leave this enabled for now though.
Step 4: Next, we click on the ‘Internal’ button
on Ardour’s toolbar. This will change to JACK,
which means that JACK will now follow Ardour
as timemaster.
The built in connection manager in Ardour is
used to make these connections, but the logic
is the same whichever connection manager
you use. In the top left you will see a mixer
window set up in Ardour. Out of shot we also
have Guitarix and Hydrogen both running.
Note the highlighted row. All connections are
already made, including connections with
Guitarix and Hydrogen.
To make these connections in Ardour, left
click on the input buttons shown in the
highlighted row. This will show a window from
which you can select your input. In this case we
wanted to route the second input from the
audio interface into the second track (female
vocals), so select capture 2 by ticking its box.
Modular set ups are where more than one
program is used in a set up. JACK allows you
to connect and sync various audio programs
together so you can benefit from the
strengths of individual applications. Session
management can be used to manage and
recall complex set ups. One suite of modular
applications is the Non Suite, which includes
Non Timeline, Non Mixer and Non Sequencer,
although any JACK aware application can be
incorporated into a modular set up. The Non
Suite also includes a very intuitive session
manager called Non Session Manager.
Repeat the same steps for each of the other tracks
until all the connections are set up how you
want them.
Something to note here is that all of these
connections will show up in any other
JACK aware programs, for example, if you look
at the connections window in Guitarix, you will
also see the connections there that are made
from within Ardour. This is because it’s not the
programs that make the connections. They
use JACK to manage the connections and tell it
what to do. Remember, JACK is your routing
system and these programs are only using it.
The following is a complete overview of all
connections, broken down so they are easy to
follow. Patchage is the best tool to illustrate
something like this: open it up and check it out – note the
colour code, audio ports are in blue and MIDI
ports are in red. All connection managers have
their positives and negatives. While it is very
straight forward for a smaller amount of inputs
and outputs, Patchage can get messy and hard
to decipher with large amounts of connections.
Ardour’s built in matrix connection manager is
very good for very large numbers of connections
as you can deal with them across categorised
tabs. This makes it much neater.
This is a basic session but it demonstrates
how audio and MIDI connections work in JACK.
It also incorporates JACK applications external
to the DAW, one of which is in sync with JACK. Try
out all JACK managers/connection managers
and see what you find you’re most comfortable
with and what best suits your workflow. They
all do the same thing, they just do them in
slightly different ways. In the end, you are using
the same logic to make the same connections.
The best thing about JACK is that it doesn’t
lock you into any single workflow. We’ve seen
how to sync programs and make connections
between them but if you want to do all your
work in one DAW and never leave it, you can do
that too. JACK makes no assumptions. It
leaves all options open to you, to take
advantage of at any time, and makes various
workflows possible, monolithic or modular. If
you find the tools you like, JACK will make
whatever set up you have in mind possible. LXF
PHP primer
Build a PHP
virtual dev box
Kent Elchuk tells us about virtual machines and PHP
scripts to create the perfect development environment.
PHP might not be pretty, but it
sure is useful, and anything that
makes developing and learning
PHP easier can only be a good thing.
We’re going to discover how to build a
complete VM from scratch that will be able to
deliver web pages, run PHP scripts and
deliver email. On top of that, we’ll show you
how to clone a virtual machine and transfer
it to another machine, regardless of
whether the host or guest is 32-bit
or 64-bit. The entire assembly will be
discussed, from downloading the latest
Ubuntu and installing Virtualbox, to
installing and modifying packages.
Although this lesson focuses on Virtualbox,
there are virtualisation alternatives, such as
KVM, VMware and Xen. Virtual machines can
be quite convenient and versatile. For example,
you can use packages like Rsync and other
methods to synchronise files and databases to
other virtual machines or directly to another
machine entirely.
Installing Virtualbox
The method to install Virtualbox on Debian-based machines is to add a line into your /etc/
apt/sources.list file for the appropriate
package. Although Virtualbox is flexible, you
may have to enable virtualisation in your
computer BIOS to make it work properly with
64-bit guests.
An example installation is shown below. For
starters, open your sources.list file in an editor.
All commands will be run by the root user:
vi /etc/apt/sources.list
Copy the repository line for your release into
the sources.list file – it ends: debian quantal contrib
Then run the following commands:
wget -q
oracle_vbox.asc -O- | sudo apt-key add -
apt-get update
apt-get install virtualbox-4.3
Creating a new virtual machine: select your ISO.
You’ll need to select Y when necessary.
If you have any problems, you can refer to the
official wiki at
With older installations, sources.list can be
a problem. Alternatively, you can always make
a backup and just add the source for Virtualbox
for now, or you can bring your sources.list file
up to date. You can always generate a fresh list.
To open Virtualbox, look for it from the Dash
or run virtualbox from the command line and
to create a new machine, select New, supply a
Name, select a Type, a Version and click Next.
After that, you need to supply a memory size
or simply accept the default. These values can
be changed after install. Select Next > Create.
When you’re prompted for the file type, VDI
is a good option and is also the default setting.
Finish the obvious installation and create the
virtual machine. By default, the disk size is
often 8GB and to make a virtual Ubuntu server,
you may want to add a little more disk space.
After installation, select your virtual
machine from the list. After that, click Settings
> Storage > Empty (Under Controller IDE) >
Select your ISO > OK. Now you can start your
machine by, rather conveniently, selecting
Start. Like a regular PC, you’ll proceed with the
installation and in the case of Ubuntu, you need
to select Install Ubuntu. Although the
installation will appear like it’s installing to a
hard drive, it’ll just be a virtual machine residing
in a .vdi file. Once you’ve installed Ubuntu, you’ll
be able to start the VM and login with the
username and password you’ve created.
The View setting gives you various options.
On a new Virtualbox install, you may find the
screen doesn’t look the way you want it to.
The good news is that you can change that
easily enough. To have more View options, such
as Seamless Mode, install vbox guest additions
on the guest operating system. This can be
downloaded from http://download. Note: You will need
to run this as admin.
Now, set up your desired view and prepare
to be patient. The screen may look small at
first, but this is fixable. Once you have Vbox
Guest additions installed you can access
Seamless Mode, Full Screen and Scaled Mode,
but you may need to do a boot or two to get
everything looking as you want it to.
Clones and server setups
To clone a virtual machine, select the machine
> Snapshots > Click the Clone icon (which will
either look like pages or a sheep). At this point
you’ll give the clone a name and follow some
obvious steps. You have options to move a VM
to a new PC. One method is to move the entire
virtual machine folder from the Virtualbox VMS
folder. Then, you just need to fire up Virtualbox
and select Machine > Add. Another method is
to make a new machine and add an existing
VDI file. Now that Virtualbox is set up, let’s get
a functional server up and running with
Apache, MySQL, PHP and Postfix. These
installations will only take a few minutes.
To install Apache, run the command
apt-get install apache2
PHP primer
as root. After you’ve installed Apache, you
should open a browser and type localhost.
That should give you the Apache2 Ubuntu
default page. For MySQL, run the command
apt-get install mysql-server mysql-client
as root. You’ll be prompted for a password on
several occasions during the install. You can
use one, or just press Enter, which will continue
on with the installation without creating a
password. If you choose not to use a password
and decide to do so at a later date, it’s very
easy to change the MySQL password. It’s a
one-liner command. We now need to install
PHP with the command
apt-get install php5 libapache2-mod-php5
again as root.
At the time of writing, the folder which displays
the website is /var/www/html. In previous
installations, the default folder was /var/www.
Also note that Apache sets AllowOverride to
None and you may want to change it to All if
you want to use .htaccess files.
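That change looks something like the stanza below in the Apache configuration – a sketch only, as the exact file and stanza vary between releases (/etc/apache2/apache2.conf is typical on Ubuntu, and Require all granted applies to Apache 2.4):

```apacheconf
<Directory /var/www/html>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>
```

After editing, reload Apache for the change to take effect.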
Let’s go ahead and create a test PHP file to
see how it’s working. You can open any editor
you want. By default, Nano and Gedit editors
will work right out of the box. If you want to use
vi, you will need to run the command:
apt-get install vim
You’ll want to make a file called test.php.
Of course, it resides in the /var/www/html
folder that was mentioned above. A simple line
of code is <?php echo phpinfo(); ?>. This
function shows you many details about the
configuration. Now, you can open the page in
your browser and see the details about your
PHP configuration. This configuration can be
altered by editing the default located at /etc/
php5/apache2/php.ini. You can easily login
as root and edit this file.
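To confirm the page is actually being served, a quick programmatic check works too. Here’s a minimal Python sketch – the URL in the comment is an assumption for a local setup:

```python
from urllib.request import urlopen

def page_contains(url, needle):
    """Fetch url and report whether needle appears in the body."""
    with urlopen(url, timeout=5) as resp:
        return needle in resp.read().decode(errors="replace")

# e.g. page_contains("http://localhost/test.php", "PHP Version")
```

If the check fails, revisit the Apache install and the location of test.php before digging any deeper.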
DNS and port forwarding
Although PHP had to be installed, other
scripting languages, such as Python and Perl,
are already there by default. A simple whereis
python and whereis perl will tell you that.
At this point, the server works and only a
few small tweaks are needed in order to make
it viewable at an address like
If you want to use a domain name, you will
have to point the DNS to your IP. You can use a
free DNS service or your web hosting service
Choose the ISO that you want to use for your virtual box.
to do this. Technically, you create an A record
that points to your IP provided from your
internet service provider (ISP).
Once the DNS is set up, we need to
configure your router, and to display your
website, we need to enable the port forwarding
for port 80. By default, Apache and other
servers will use this port upon installation.
But, keep in mind that you can and may want
to add more port forwarding. For example, to
enable email to forward on port 25, SSH on
port 22 and, optionally, FTP on port 21. All
these port forwarding rules will use the same
local network IP address of your VM.
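Once the forwarding rules are in place, you can sanity-check whether a port answers from another machine on the network. A minimal Python sketch (the host address shown in the comment is hypothetical):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. check the web server on the VM (address is an example):
# print(port_open("192.168.1.50", 80))
```

Run it once per forwarded port (80, 25, 22, 21) to confirm the router rules really reach the VM.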
Okay so what’s your local IP address? When
you build a virtual machine with Virtualbox,
your networking will use NAT and take on the
default IP of your hosts connection. However,
you can set up Virtualbox so that the virtual
machine takes on its own IP address. You can
change this any time before you boot your
Virtual Machine. To change the configuration,
right-click on your VM and click Settings >
Network > Attached to.
Connecting with Ethernet and NAT gives a
reliable and quick connection; alternatively, use
a Bridged connection with Ethernet (eth0) and
change promiscuous mode to Allow VMs.
This latter method enables your VM to use
Ethernet and have its own IP. If you run
ifconfig on the host and guest you should see
the two different IP addresses. The router
should pick them up too. Although you could
add USB wireless adaptors and use Virtualbox
with wireless, this tutorial will focus on
Ethernet which doesn’t have wireless issues.
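The GUI route described above also has a command-line equivalent. Here is a hedged sketch using VBoxManage; the VM name MyServerVM and the interface eth0 are placeholders to adjust for your own setup:

```shell
# Sketch: switch a VM to bridged networking with promiscuous mode set to
# Allow VMs, mirroring Settings > Network > Attached to in the GUI.
# "MyServerVM" and eth0 are placeholder names.
VM="MyServerVM"
IFACE="eth0"
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage modifyvm "$VM" --nic1 bridged \
        --bridgeadapter1 "$IFACE" --nicpromisc1 allow-vms
else
    echo "VirtualBox not found; would have bridged $VM on $IFACE"
fi
```

Run it while the VM is powered off, then boot and check ifconfig inside the guest for its own address.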
Now that you have a working server that
can display public web pages, you may want to
continue on and add email features. Like a
rented VPS or dedicated box from a hosting
provider, you’ll have the basic tools that every
website owner would have. Now, we can move
onto the PHP primer.
PHP primer
Now that you have all your ducks in a row, you
can open that favourite editor of yours and
start coding some PHP. Although PHP has its
detractors, its popularity is something of a
phenomenon as a server-side scripting
language and most of its users run PHP on
some type of Linux server. It’s also very
common for web designers and web
developers to use a hosting platform with a
LAMP (Linux, Apache, MySQL & PHP) stack or
a LEMP stack. (Nginx, the HTTP server and
reverse proxy replaces Apache. Nginx is
pronounced ‘Engine-X’, hence E instead of N.)
PHP security
With your new PHP installation, there is no safe
mode and there are security issues. But, for a
home server you can eliminate most problems
by disabling file_uploads, disabling functions,
controlling all other access to the system and
monitoring file changes with Bash scripting.
All changes to your PHP configuration can be
done in the file /etc/php5/apache2/php.ini.
For example, you can change file_uploads
setting from On to Off and you can add more
functions to be disabled. Functions such as
exec, shell_exec and the base64 functions can
cause headaches if rogue files land on your
system. Functions like shell_exec and exec
enable an attacker to run Linux commands which
can overwrite files, show your raw code using
the cat command and
even take control of databases. The web is full of
kiddie scripts that can show and alter your data.
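As a concrete sketch, the relevant php.ini lines might end up looking like this; the exact list of disabled functions is an assumption, so tune it to what your own scripts actually need:

```ini
; hardening sketch for /etc/php5/apache2/php.ini
file_uploads = Off
disable_functions = exec,shell_exec,system,passthru,popen,proc_open
expose_php = Off
```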
Those are not the only security issues when
using PHP. Bad coding can be exploited and
custom php.ini files can be added to your web
directories; these can override the main file.
There is protection from this with the
Suhosin patch.
All in all, your home PHP server should be fine
if you disable uploads and don’t allow access to
your system. And of course, never trust the
user's input into any forms. There are many
functions, such as htmlentities() and
mysql_real_escape_string(), that sanitise user input.
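As a minimal sketch of that last point, here is htmlentities() neutralising a hostile string before it is echoed back; the input below is a stand-in for an untrusted form field such as $_POST['name']:

```php
<?php
// A stand-in for untrusted form input such as $_POST['name']
$raw = "<script>alert('xss')</script>";

// Convert HTML-significant characters to entities so the browser
// renders them as text instead of executing them
$safe = htmlentities($raw, ENT_QUOTES, 'UTF-8');

echo $safe; // outputs: &lt;script&gt;alert(&#039;xss&#039;)&lt;/script&gt;
```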
December 2014 LXF191 53
What this has done is create an abundance
of scripts and libraries that can easily
accommodate many needs. Some well-known
applications are Wordpress, Joomla, Drupal,
Magento, Prestashop, and many more scripts.
In addition to many scripts, there are many
frameworks that enable you to use libraries to
write code, such as Symfony, Codeigniter
and Zend.
Although PHP is convenient, it’s so loose
that it becomes very easy to write bad code.
On the other hand, you can write solid objects
and solid code. On top of that, there are many
libraries you can add to separate logic from
design; like the Smarty template engine. If you
have a background with C or Java, you will have
the grounding for writing solid code, minus
having to declare variable types.
Although there are many popular scripts
out there, many do have a dark side. Some are
very big and become so popular that they are
also popular targets for crackers. On the other
hand, if you code lean PHP websites with an
emphasis on security, the code is much more
manageable and the performance often excels.
It can take a while but, with desire and
patience, you can get to the level of
hand-coding PHP, MySQL, HTML and CSS, which will
allow you to build whatever you can imagine
(given the time, of course).
PHP scripting
In this section we’ll cover PHP scripting which
can be done via the command line or with files
in the browser. PHP is interpreted on the server
and displays the output in a user’s browser.
Essentially, this is what you see every day from
websites using PHP applications, such as
Wordpress and Magento.
When you make scripts in a secure
directory, nested in your Linux box outside the
home directory, you can go ahead and run
those files via the command line. Sometimes,
you only want utility scripts that you will run
with cron jobs or whenever you want to
execute them.
Let’s cover some of the basic features, such
as how to create comments, variables, arrays,
loops, functions, classes and objects, CRUD,
MySQL queries and include files.
All PHP files end with the .php extension.
Any code between <?php ?> tags will be
interpreted. Any code outside these tags will
be interpreted as HTML in a browser. <? ?>
tags work too, but only if you configure
PHP (php.ini) to allow the use of short tags.
PHP has three main methods for
commenting (that’s adding notes that aren’t
interpreted so you can easily follow what the
code is doing):
// Comments after two forward slashes
# Comments after a hash
/* Comments between these can run as many
lines as you want */
A common variable is either a string
(characters) or integer (numbers). Variables
can be declared via the $ symbol. The example
we’re using below shows how to declare both
types of variable:
$my_string = "My string";
$my_number = 3;
You can print to the page by enclosing text
within single or double quotes. Single will be ‘as
is’ while double will be interpreted. This means
PHP scripting and command line
When you run PHP scripts from the command
line there are subtle differences from
interpreting them in the browser. One major
difference is that '\n' makes a new line on the
command line while '<br/>' does the same thing
when HTML is interpreted in the browser.
Furthermore, when scripting PHP for the
command line you add the following line to
the top of the file:
#!/usr/bin/php
And to execute the file within the current directory:
php ./update-status2.php
Here is a typical cron job that runs the file
filename.php every ten minutes. The cron file
can be accessed for root or other users at
/var/spool/cron/username or by the simple
command crontab -e.
*/10 * * * * username /usr/bin/php -f /var/
Files that will run with cron jobs can require
different coding from those files running from
the browser. Files in the browser are relative to
the public_html or www folder. Meanwhile, files
that are run with cron jobs need all included and
required files to take on a path relative to the
root folder.
Therefore, although include("my_included_file.php")
will work in the browser if the file
resides in the same folder, the cron file needs
the path to be something like
include("/var/www/html/my_included_file.php") in order
to be able to reference its location.
An alternative to changing path names is to
remove include or require functions and add all
of the code into a single file.
you can print variables within double quotes.
$variable = 'John';
echo 'Hi there!'; //outputs: Hi there!
echo 'Hi there $variable'; //outputs: Hi there $variable
... And with double quotes:
echo "Hi there!"; //outputs: Hi there!
echo "Hi there $variable"; //outputs: Hi there John
There can be times when you may see
double quotes within double quotes; like the
usage of HTML. If that is the case, it can be
backslashed to avoid interpretation. An
example of this is:
echo "<a href=\"\">My Link</a>";
An alternative to the above is to change the
double quotes to single quotes or use another
way to write it:
<?php ?>
<a href="">My Link</a>
<?php //Add php here ?>
Feature examples
For those who come from a background with C
or Python, you’ll be familiar with printing and
the sprintf() function. An example of its use is
shown below.
$variable1 = 10;
$variable2 = 5;
$variable3 = $variable1 + $variable2;
$variable4 = 'football team';
$format = 'There are %d men and %d women
for a total of %d on the %s.';
echo sprintf($format, $variable1, $variable2,
$variable3, $variable4);
Arrays are a group of items. The syntax of
indexed and associative arrays is shown below:
$indexed = array('apple', 'peach', 'pear');
$associative = array('name' => 'John', 'age' =>
24, 'height' => '2 metres');
Two functions that will always be useful
are print_r() and var_dump(). They will show
you the keys and values of arrays if you need to
take a deeper look. Try print_r($indexed) and
print_r($associative) to compare the keys
and values of both arrays.
PHP also has many array functions that can
be used to sort them, add the integer values
and much more. Variables and blocks of text
are joined with a period, which is called
concatenation. You’ll get to see concatenation
in use in the section on loops (below).
The most common loops you will use and
encounter are foreach, for and while loops.
Usually, the foreach loop parses an array.
You can separate any array into values or into
keys and values. Examples of various loops and
functions are shown below.
Foreach Loop This type of loop will parse
the array in order.
foreach($indexed as $item){
echo $item . "<br/>"; //concatenation between a variable and the HTML <br/> tag
}
foreach($associative as $key => $item){
echo $key . " - " . $item . "<br/>"; //concatenation between variables and printed text
}
For Loop This For loop prints the number
one and adds 1 each time it loops. It continues
as long as the value of $i is less than 5.
for($i=1; $i < 5; $i++) {
echo $i; //output: 1234
}
MySQL queries One feature that’s very
useful is to write MySQL queries which return
the data you desire. Queries can be executed
using PDO or the mysql_query() function.
To make a query, you connect to a database
through authentication.
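The mysql_query() function is the older route; below is a hedged sketch of the PDO one. It uses an in-memory SQLite database so the snippet runs anywhere; on your server you would swap the DSN for something like mysql:host=localhost;dbname=mydb plus a username and password (the database and table names here are placeholders):

```php
<?php
// Connect; on a real server the DSN would be e.g.
// new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Create a table and insert a row with a prepared statement,
// which keeps user data safely separated from the SQL itself
$pdo->exec('CREATE TABLE users (name TEXT, age INTEGER)');
$stmt = $pdo->prepare('INSERT INTO users (name, age) VALUES (:name, :age)');
$stmt->execute(array(':name' => 'John', ':age' => 24));

// Read the data back
$rows = $pdo->query('SELECT name, age FROM users')->fetchAll(PDO::FETCH_ASSOC);
foreach ($rows as $row) {
    echo $row['name'] . ' is ' . $row['age'] . "\n"; // outputs: John is 24
}
```

Prepared statements like this are also the easiest defence against the SQL injection problems mentioned in the security box.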
While Loop This while loop prints the
number one and adds 1 each time it loops.
It also continues as long as the value of $i is
less than 5.
$i = 1;
while($i < 5) {
echo $i; //output: 1234
$i++;
}
CRUD This stands for Create, Read, Update
and Delete. Practically every web application
uses CRUD, coded in one way or another, to
allow the user to make modifications to the
web application. All CRUD events will take
place using the MySQL SELECT, INSERT,
UPDATE and DELETE commands.
Include Files If you want to add another file
to the current file, it can be done with the
include(), include_once(), require() and
require_once() functions. An example of this is
shown below.
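Here is a minimal sketch; it writes a tiny helper file first so that it is self-contained, whereas in a real project functions.php would already exist:

```php
<?php
// Write a tiny helper file so this sketch is self-contained;
// in a real project functions.php would already exist.
file_put_contents('/tmp/functions.php',
    '<?php function name_it() { echo "Here is my text"; }');

// include_once() pulls the file in a single time;
// require_once() would halt the script if the file were missing
include_once('/tmp/functions.php');
name_it(); // outputs: Here is my text
```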
Functions These are named blocks of code
that you call by using their name. Normally,
functions are located in their own file(s) and
are included into a file where you want to use
them. A simple, custom function is shown
below and it can be executed by writing name_it():
function name_it(){
echo "Here is my text";
}
On top of that, PHP installs with thousands
of functions. Some of these functions are
object-oriented, for example new
DOMDocument(). The official PHP manual is the
go-to resource for all your PHP needs. For
example, its page of validating filters
(filter.filters.php) shows many
filters that you could use with a function such
as filter_var() that are used within frameworks
such as Codeigniter.
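A quick sketch of filter_var() with the email validating filter; the addresses below are just illustrative strings:

```php
<?php
// FILTER_VALIDATE_EMAIL returns the address itself when it looks
// valid, or false when it doesn't
$email = '';
if (filter_var($email, FILTER_VALIDATE_EMAIL) !== false) {
    echo "Valid email\n";   // this branch runs
} else {
    echo "Invalid email\n";
}
var_dump(filter_var('not-an-email', FILTER_VALIDATE_EMAIL)); // bool(false)
```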
When you use PHP functions, you should
keep in mind that they use memory, and some
can use up quite a lot. If you code for efficiency
you can minimise resource consumption and
obtain higher performance.
Classes and Objects With PHP, you can
code procedurally from the top down. But,
OOP (Object Oriented Programming) gives you
a much more manageable way to write code.
The main class is a parent and it can have child
classes that are able to inherit some or all of
the parent class. To use OOP, you create a class
which includes properties and methods.
A property is similar to a regular variable while
a method is like a function. To instantiate a
class, you use the ‘new’ keyword. Properties
and methods can be public, private and
protected. Public means they are accessible
everywhere, private are only available to the
class and protected are available to child and
parent classes from where they are declared.
One more aspect of OOP is the use of the
$this keyword, which is used to reference a
property from inside the class. In
addition to that, you will use -> to access
methods after you instantiate an object.
A simple example shown below should make
this clear.
class Test {
public $name = 'John';
function get_name(){
return $this->name;
}
}
$my_test = new Test();
echo $my_test->get_name(); //outputs: John
Running PHP within Bash Since there’s a
good chance that you have a background with
Bash scripting, you can apply this skill by
adding PHP coding into bash scripts using the
Here Document ('EOF') tags:
php_cwd=`/usr/bin/php << 'EOF'
<?php echo getcwd(); ?>
EOF`
echo "$php_cwd"
Now you have the basics to write PHP
programs on your new virtual machine. You
can back it up and restore it anywhere you
want. In addition to Virtualbox backups, you
can use ssh, rsync, mysqldump, scp and cron
so that you can always have a handy backup
on another machine, at your convenience.
Most PHP websites and web apps use files
and databases. Having a plan to clone or sync
these data sources from your VM can be very
convenient and fast to implement. Enjoy. LXF
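To make that concrete, here is a hedged backup sketch using the tools just mentioned; every path, database name and option below is a placeholder to adapt:

```shell
# Nightly backup sketch for the VM's web files and database.
# All paths and names below are placeholders.
BACKUP_DIR="$HOME/vm-backups/$(date +%F)"
mkdir -p "$BACKUP_DIR"

# Dump the database (uncomment on the server; -p prompts for a password):
# mysqldump -u root -p mydb > "$BACKUP_DIR/mydb.sql"

# Mirror the web root; rsync only copies what has changed since last run
if command -v rsync >/dev/null 2>&1 && [ -d /var/www/html ]; then
    rsync -a /var/www/html/ "$BACKUP_DIR/html/"
fi
echo "Backup written to $BACKUP_DIR"
```

From there, scp or rsync over ssh can push the backup directory to another machine, and a crontab entry makes the whole thing automatic.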
The email server
To get started with email, you will need to install
an email server. There are several available such
as Postfix and Exim. This tutorial will use Postfix.
To install Postfix, run the command just as
shown below.
root# apt-get install postfix postfix-mysql
dovecot-core dovecot-imapd dovecot-lmtpd
dovecot-mysql dovecot-pop3d
During installation, press Tab to select
Internet Site, then Tab to OK and hit Enter.
Give a system mail name or accept the default,
then Tab and Enter once more.
Once the server is successfully set up, you
may want to edit the Postfix configuration files
under /etc/postfix and add a script such as
Squirrelmail to handle
incoming and outgoing email. Note: setting up
email can be time consuming and is beyond the
scope of this tutorial.
However, you can test outgoing email with
simple commands shown below. To test an
outgoing email message, write out each line and
hit Enter.
When you come to the period after the email
body, you will type it and press Enter as you’ve
done for the other commands. This isn’t a typo,
it is a requirement to send the email.
telnet localhost 25
MAIL FROM: [email protected]
RCPT TO: [email protected]
DATA
Subject: Add it here
Add the body text now
.
Dr Brown’s Administeria
Dr Chris Brown
The Doctor provides Linux training, authoring
and consultancy. He finds his PhD in particle
physics to be of no help in this work at all.
Esoteric system administration goodness from
the impenetrable bowels of the server room.
Eggs and baskets
The President was annoyed, to say
the least. For one thing he had just
realised that the keynote he'd
agreed to give at the International
Symposium on Digital Biodiversity clashed
with his son's baseball match. Secretly he
thought they were a bunch of loons, but the
party advisers assured him there were easy
votes in doing it. And for another thing, he
was concerned about the 46 messages
he'd just received all containing level 2
launch authorisations, and all, apparently,
from himself.
He decided to prioritise and picked up
the phone. "Alice, email Bob and get him to
pull me off that biodiversity keynote thing
tonight". "Sorry chief," said Alice, "but my
PC's on the blink. Bob and Eve have just
called to say that they've got problems too."
A quick walk down the corridor revealed
that in office after office, people were
cursing at their computers with more
vehemence than was usual.
He telephoned the chairman of the
National Security Council. "Get Nadella on
the phone", he demanded. There was no
point in being the most powerful man on
the planet, he thought, if you couldn't go to
the top. But no-one at Microsoft was
available for comment. Indeed, no-one at
Microsoft seemed to be available for any
reason whatsoever.
And as more and more people called in
to say their computers were down, the
president began to feel less and less
powerful. Finally a junior aide arrived in his
office, out of breath, having run up four
flights of stairs. "Mr. President, sir, I thought
you ought to know that there's a guy in the
sub-basement who says his computer is
still working. He says it's running Linux".
[email protected]
Cracking passwords
What have squirrel noises got to do with the second
law of thermodynamics? It's a question of entropy.
Entropy is a measure of randomness or
disorder. The second law of
thermodynamics states that the
entropy of a closed system never decreases.
(Think of a teenager's bedroom). My favourite
Dilbert cartoon is the one where the boss says
"Starting today, all passwords must contain
letters, numbers, doodles, sign language and
squirrel noises". Like all Dilbert cartoons the
insanity contains more than a hint of truth.
The point is that the bigger the character
set from which passwords are drawn, the
greater their entropy, and the longer it takes to
guess them by brute force. A little play on Open
Security Research’s Brute Force Calculator
resulted in the table (below). This is pure
combinatorial maths, and assumes you can test
1,000,000 passwords a second. The real figure will
depend on the algorithm used and how much
water you’ve available to cool your
supercomputer. But the message is clear:
longer passwords and a larger character set
make a huge difference.

Character Set — Brute Force time (three increasing password lengths)
Digits 0-9: 111 seconds / 185 minutes / 308 hours
Upper-case letters A-Z: 60 hours / 5 years / 3,000 years
A-Z a-z 0-9 !@#$%^&()-_+=: 60 hours / 206,000 years / 1 billion years

However, Mark Burnett, who seems to have
spent half a lifetime harvesting username/
password pairs (from what he assures us are
"sources that have already been made public")
presents data (
that will convince you that the whole thing
about combinatorial maths and squirrel noises
is largely irrelevant.
Mark's data suggests that 40% of
passwords fall into the top 100 list, 91% fall into
the top 1,000, and a whopping 98.8% into the
top 10,000. Of course, your results will depend
on your data sample. As Mark admits, the
passwords are mostly from sites that don't
enforce strong passwords. So forget the squirrel
noises. And there's a nice example at XKCD
which compares the entropy of a typical
password derived from the ritual disembowelling
of a real word (Tr0ub4dor&3), with that of using
four unrelated words (correct horse battery
staple). The latter has 16 bits more entropy,
meaning it's 65,000 times as hard to guess,
but much easier to remember, and much
easier to type.
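One sanity check on the 111 seconds figure: assuming that cell counts every digit-only password of up to eight characters at 1,000,000 guesses a second (an assumption about the table's layout), the combinatorics come out right:

```shell
# Sum the number of digit-only passwords of length 1 to 8
# (10^1 + 10^2 + ... + 10^8), then divide by 1,000,000 guesses/second.
total=0
p=1
for len in 1 2 3 4 5 6 7 8; do
    p=$(( p * 10 ))          # 10^len passwords of this length
    total=$(( total + p ))
done
echo "$(( total / 1000000 )) seconds"   # prints: 111 seconds
```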
Run Windows applications on Linux using Wine. The Good Doctor helps you
take the cork out of the bottle and helpfully takes a sip or two…
You're a system administrator, and your company has
just announced that it's transferring its operation
lock, stock and barrel from Windows to Linux. Since
you've been advocating this for nearly a decade, it feels like a
dream come true. But that initial euphoria begins to wear off
when your users start coming to you with worried
expressions, explaining that they can't possibly do their job
without application A or B, and that they only run on Windows.
Now you need to take these statements with a pinch of
salt. In truth, there are very few tasks for which there isn't an
open-source solution that’s at least as good as its proprietary
cousins. It may simply be that those users are reluctant to
come out of their comfort zone, or learn new technology.
Or they may be concerned that they will need to trade
documents with other companies who are still wedded to
proprietary formats. Or maybe young Rosie in accounts is
seriously addicted to World of Warcraft and has threatened to
stop your paychecks if she can't play it at work any more.
If you need to run a mix of Windows and Linux applications
you have three choices – you can set up a dual-boot
environment, run Windows in a virtual machine on top of
Linux (or the other way round, it doesn't really much matter),
or you can use Wine.
Essentially, Wine is a compatibility layer supporting an API
that mimics the standard Windows system DLLs USER32,
GDI32 and KERNEL32. This compatibility layer sits atop a
POSIX-compliant kernel (it can run on Linux, BSD, Solaris and
Mac OS X). From the point of view of Windows applications,
Wine provides an emulation of the Windows system, and some
folks say the name stands for ‘Windows Emulator’. The folks
at WineHQ don't like that, and would prefer it to be an
acronym for ‘Wine is Not an Emulator’. Their concern is that
many people associate emulation with poor performance.
It would be misleading to suggest that every Windows
application will run flawlessly using Wine. The emulation is not
perfect, and some applications run better than others, and
some don't run at all. The website carries a
Incontrovertible proof that 2 plus 2 is 4. Microsoft Excel
and LibreOffice Calc side by side, both running on CentOS.
Wine vs Virtualisation
Virtualisation (running a Windows
virtual machine as a guest on top of a
Linux host) will likely give you an easier
way to get those windows apps running
on Linux. Since you get a ‘true’ windows
environment, most apps should run.
By contrast, support for Windows apps
under Wine is patchy. On the other
hand, the Wine approach has a much
smaller memory footprint, because you
don't have a full Windows OS installed.
Most importantly, though, Wine
doesn't require you to have a licensed
copy of Windows. The Microsoft
document that describes your rights to
run Windows inside VMs runs to six
pages and is largely incomprehensible
unless you have a degree in infinitely
differentiable licence terms, but one
thing is clear – it's not free. Don't forget
that even under Wine, you need a legal
copy of the software you want to run.
large database where each application is rated Platinum,
Gold, Silver, Bronze, or Garbage. The list is heavily
games-dominated. In the platinum, gold and silver top 10 lists (30
apps in all) 24 are games. Often, the rating depends on which
version of the app and which version of the wine library you're
using. For example, Microsoft Money has ratings from
Platinum down to Garbage, depending on the version. As I
write this there are 21,626 applications listed but many of
them are old and many of the links to vendor or developer
sites are broken.
First sip of wine
Anyway, let's get the cork out of the bottle and take a gulp (or
two) of Wine. I chose to do this on Ubuntu 14.04 and it turns
out that Wine is in the Ubuntu repositories, so I can install it
very easily like this:
$ sudo apt-get install wine
You'll have a bit of a wait because the installation brings in
a total of 174 packages. Along the way, you'll be asked to
accept the licence agreement for the MS truetype fonts.
Installing onto a Red Hat derivative distro, such as CentOS
needs slightly more work because you'll need to enable the
EPEL repositories. (See the EPEL box, p58 for instructions on
how to do this.)
Once Wine is installed, the wine command can be used to
launch a windows executable. There are a number of sample
programs bundled with Wine, including Notepad, Wordpad,
Regedit, a command interpreter (cmd), and Wine's version of
Internet Explorer. So you can try it out immediately with a
command such as:
$ wine wordpad
In fact, some of these programs have wrapper scripts in
/usr/bin, so you can simply launch them by name as you
would any other command:
$ notepad
Text-based console applications should be launched with
wineconsole, which gives them their own console window. So
$ wineconsole cmd
will give you your very own windows command interpreter.
Lucky you. Wine creates a small Windows-like environment
under the directory ~/.wine (by default). For example, in the
directory ~/.wine/drive_c/users/chris I have a miniature
Windows-style home directory, where symbolic links with the
traditional Windows names (such as ‘My Pictures’) point to
their Linux equivalents (such as /home/chris/Pictures).
Under ~/.wine/drive_c/windows/system32 you'll find a
goodly collection of DLLs and Windows executable files.
Also in ~/.wine there are files containing Windows Registry
settings (see Windows Registry box, p59). Taken together,
these files define a virtual windows environment. The Wine
runtime uses an environment variable called WINEPREFIX to
specify the location of this directory, enabling you to set up
entirely separate windows environments for different
applications. Behind the scenes, there's also a server process
called wineserver, which offers some kernel-like services to
Wine, including message routing, access to the Registry,
debugging, and some window management. This server is
started on demand by the Wine runtime, so you shouldn't need
to worry about it.
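A short sketch of that idea: pointing WINEPREFIX at a second directory gives an application its own private Windows environment. The prefix path and the setup program below are placeholders:

```shell
# Each WINEPREFIX directory holds its own drive_c, DLLs and Registry,
# so applications in different prefixes can't interfere with each other.
export WINEPREFIX="$HOME/.wine-office"   # placeholder path
mkdir -p "$WINEPREFIX"
echo "Using prefix: $WINEPREFIX"
# wine setup.exe    # uncomment on a real system: installs into this prefix
```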
Installing additional Windows applications is, in principle, a
matter of finding the application's installation program and
running it under Wine, probably something like:
$ wine setup.exe
I've emphasised "in principle" because in practice you may
encounter an entire zoo of error messages.
Codeweavers to the Rescue
If you're struggling to get applications working under Wine,
consider using Crossover from Codeweavers. This is a
supported version of Wine, and the Codeweavers team have
put a lot of effort into making it easier to use. It's not a free
application, however – a 12-month subscription will cost you
£38, and phone support is extra, but in a commercial setting
that's a lot cheaper than a day of your time struggling to get
Wine to work, plus you have the pleasure of knowing that
you're contributing to the development of Wine. Educational
pricing is slightly cheaper, and there's a 14-day free trial
period, so you don't need to part with any money until you
know for sure if your application will run. The licence is per-person, enabling you to install Crossover onto several machines so
long as you only use one at a time. As the CodeWeavers
website points out, whether you choose to go down the free
route of using Wine alone, or to use Crossover, depends on
your budget, technical competence, and pain threshold.
Crossover's website also carries a large database of
applications and they have a similar classification scheme,
with gold, silver, bronze and ‘known not to work’ medals
awarded, plus an ‘untested’ medal. It feels better organised
than the one at WineHQ. Altogether I counted 12,700 apps
listed here, of which about 4,500 are games. However, the
overall number of working applications is much lower, as I
found a rather high proportion of the applications labelled
as ‘untested’.
Crossover provides you with a graphical screen to manage
your Windows applications from, and – importantly –
an installer that makes the process much easier. I decided to
use CentOS 6.5 as my host system to give Crossover a trial.
The process goes like this:
Enable the EPEL repositories on CentOS. (See the EPEL box
below for how to do this.)
Browse to the CodeWeavers website and click on the
Download Free Trial button.
Select what flavour of Linux you're running from the
dropdown. (This will then determine what kind of package
that you get.)
Enter your email address and click on Download. This will
get you a file called something like crossover-13.2.0-1.rpm
Install the rpm using yum:
# yum install crossover
This will drag in quite a few dependencies (62 in my case)
but it should proceed without problems.
Crossover installs into /opt/cxoffice (and it's one of the
few applications in my experience that actually follows the
filesystem hierarchy guidelines by installing into /opt). It will
also add Crossover to your Applications menu, and you'll want
to begin by launching it from here. From the main Crossover
screen you can go ahead and install new Windows
applications, launch the ones already installed, and manage
your bottles (which are discussed below).
Don't lose your bottle
The main screen of Crossover. From here you can launch your Windows
applications and install new ones, or manage your bottles.
Earlier I mentioned that wine maintains a virtual windows
environment under ~/.wine. Crossover calls this environment
a ‘bottle’ and takes the concept further, enabling you to easily
maintain multiple bottles. These bottles can act like a
sandbox, which will prevent any applications in separate
bottles from interacting with each other. You can also use
bottles to emulate multiple versions of the Windows OS, or to
move the environment to a different host machine. Crossover
keeps its bottles in subdirectories under ~/.cxoffice rather
than under ~/.wine (Crossover Office was the original name
of the product).
EPEL (Extra Packages for Enterprise Linux) is a
repository of additional packages for Red Hat
Enterprise Linux (or CentOS or Scientific Linux)
maintained by a Fedora Special Interest Group
These packages come from the Fedora project
and are chosen so they will "never conflict with
or replace packages in the base Enterprise Linux
distributions". To use EPEL you’ll need to enable
the repository and grab the GPG key. On
CentOS there's a tiny package called epel-release in the official repo that will do these
things for you, so you should be able to just do:
# yum install epel-release
to enable EPEL.
Wine provides
a compatibility
layer enabling
Windows apps to
run on Linux. It's
especially popular
for gaming.
Wine architecture: a Windows executable and its application DLLs run on Wine's DLL emulation layer, which sits atop the Linux kernel.
Crossover's bottle management tool lets you view your
bottles, create new ones, explore the file systems within them,
list the programs installed in them, edit menus and file
associations, and even package a bottle as a .deb or RPM file,
so that you could easily install it on another machine. And all
from the comfort of a graphical user interface!
Like falling off a log
The easiest way of installing a Windows application using
Crossover is through a feature called crosstie. Basically a
crosstie is a recipe (in the form of an XML file) that tells the
installer exactly how to install a specific application. If the
application you want to install has a crosstie file, you should
get a really easy ride.
As an example, we'll install a free program called e-Sword
(a bible-study aid) which is only available for Windows. Here
are the steps:
Browse to and enter e-Sword in the
search box. The list of search results tells us that e-Sword
has a silver medal.
Click on the link to the program's page. From there you can
browse screenshots of the application.
Click the big green button labelled Install e-Sword via
CrossTie. This will download the crosstie file (called
e-Sword.tie) which, by default, will open in the Crossover installer.
The installer gets all the information it needs from the
crosstie file so once it's running just click on the Install button.
The installer will create a new bottle to install the
application into, then run the installation.
In my case, Crossover decided that I needed the Microsoft
XML Parser as well, and ran the appropriate setup wizard.
From there on you'll go through the InstallShield screens
just as you would if you were installing it on Windows. The
installer defaults to installing into C:\Program
Files\e-Sword, which, of course, is inside the bottle we
just created.
Once the installation is completed you'll see a launch
button on the main Crossover screen. Just click it and –
hey presto! – the application will be up and running.
Installing from media
Here's another example. This time we'll install Microsoft
Office, using the official product DVD. Just to be clear –
you don't need a licensed copy of the Windows operating
system itself to do this, but you will need to own a legal copy
of the software you're installing. Installation turns out to be
just as easy as installing via a crosstie file. Just put the
installation media into your DVD drive, run Crossover and
click on the big button labelled Install Windows Software...
The installer automatically detected the installation DVD,
found the setup program on it, and auto-populated the
installation options so all I had to do was click on Install.
Again, the installer will go ahead and create a new bottle,
and again, the installation screens you'll see from here on
(including the ritual ‘accept this scary licence and don’t worry
about it’ check boxes) will be the same as you would see if
you installed the product onto a Windows operating system.
Once the installation is complete, new launch buttons will
be added to the main Crossover screen for your newly
installed components.
So there you have it. And don't forget: a day without Wine
is a day without sunshine. Salute! LXF
Windows Registry
One of the things that Wine has to
emulate is the Windows Registry. This
is a hierarchical database that stores
configuration settings for Windows apps
in the form of keys and values. It largely
replaced the many INI files that were
used in early versions of Windows. Many
applications rely on the Registry, though
(notably) .NET doesn’t. Wine includes a
Registry editor which looks a lot like its
Windows cousin. Registry settings are
stored in ~/.wine, or in Crossover bottles.
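For the curious, those hives can also be inspected from a shell. A minimal sketch, assuming a default ~/.wine prefix (the block skips itself if Wine isn't set up, and the grep pattern is just an illustrative key prefix):

```shell
if command -v wine >/dev/null && [ -d "$HOME/.wine" ]; then
    ls "$HOME/.wine"/*.reg || true      # user.reg, system.reg: plain-text hive dumps
    # Keys appear as INI-style section headers inside the dumps:
    grep -m1 '^\[Software' "$HOME/.wine/user.reg" || true
    # wine regedit                      # launches the GUI Registry editor
else
    echo "no default Wine prefix found, skipping"
fi
```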
December 2014 LXF191 59
The best new open source
software on the planet
Alexander Tolstoy
dons his deerstalker hat, hunts down the hottest and greatest open
source software, and puts it on a pedestal for all to see.
QMplay2 Rosa PDFSaM Rodent Core
Blobby Volley 2 Caesaria Otter Browser
KXStitch KEncFS
Media Player
QMplay2
Version: 14.07.27 Web:
This is another media player that's capable of handling almost all
sorts of audio and video files. However, there are a number of
outstanding features that make QMplay2 different, and it also feels
better to use. It may sound like an
obvious feature, but this is one of few
players that enables you to easily sort
your playlist without closing the main
video view. For instance, VLC has a great playlist editor and neat
playback controls, but although you can switch
between them you won’t be able to
display both elements at the same time.
On the other hand, apps like Gnome
MPlayer let you enable a side pane with
a playlist, but you can’t drag and drop
items there. Among mainstream Linux
media players only Totem combined all
these features together, but recently
newer versions of Totem (starting from
3.12) got rid of the extra playlist pane,
perhaps because it felt too cluttered.
So, QMplay2 could well be the
media player you've been missing. The
player offers simultaneous display of the video pane, playlist and visualisation, and
The number of controls and tabs are plentiful in QMplay2,
making it a good fit for a professional desktop.
“Despite its name, QMplay2 is not an MPlayer front-end.”
Exploring the QMplay2 interface
Whether you
watch a movie or
listen to your
favourite track,
this two-option
visualiser can be
very helpful…
QMplay2 keeps
your previously
played items as
long as you want
them to be
available, so you
can always enjoy
them again.
Live metadata feed
YouTube tool
Set priority
Most useful for remote
streams: you can watch
current buffered data size
and its equivalent in
seconds, as well as live audio.
The YouTube search
results pane enables
you to instantly select
the desired playback
quality you want.
This is possibly the most
friendly way to set the
output method priority
that you want to use –
just drag the items!
lets you customise the whole view the way you like it. Despite its
name, QMplay2 isn't an MPlayer front-end. Actually, it uses FFmpeg
for media format support, wrapping it into a Qt4 (and possibly Qt5,
if you're brave enough to build QMplay2 from source) GUI with a wide
variety of settings.
The stand-out feature is the intuitive
method of setting audio and video
output methods, available in the player's properties window (press
Ctrl+O to open it). It's very easy to set accelerated video output
for a VAAPI-enabled Intel graphics system, while Nvidia and Radeon
users will quite probably use VDPAU.
The playlist pane accepts not only
local files, but also easily handles
YouTube videos and many other sorts of
URLs of remote streams. The player
always restores your previous playlist
upon start up, so you can always come
back to your previously played items
without having to use the Recent or
Favourite menus. Playing and searching
inside the YouTube sample video
worked like a charm, so if your
connection is fast enough you may not
notice the difference between local
video and stream playback.
Live USB flash tool
Rosa ImageWriter
Version: 2.4 Web:
Optical discs are rapidly leaving our world, superseded by USB flash
drives and cloud storage. Though you
can decide to move your precious files
to an online service, you still have to
deal with physical media for installing
an OS (for now, at least). Preparing a
USB mass storage drive (a stick, a
memory card etc), then writing your
favourite Linux distro ISO onto it and
finally getting a bootable Linux at your
fingertips all sounds easy, especially for
geeks with years of dd-fu. For normal
earthlings, however, doing this is still an
expert-level task. Though many have heard of, and are used to,
UNetbootin in the Ubuntu world, not every ISO can be
successfully written on a USB. Some
are designed only for CDs/DVDs, some
need extra tweaks for EFI support and
some offer non-standard partitioning.
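For reference, the dd-fu in question looks something like the following sketch. A throwaway file stands in for the target here so the commands are safe to run as-is; on real hardware you'd point DEV at your USB device (something like /dev/sdb, double-checked with lsblk, as dd will cheerfully destroy whatever it's aimed at):

```shell
# Illustrative paths only: a file stands in for the USB device here.
ISO=/tmp/demo.iso
DEV=/tmp/fake-usb.img

# Stand-in for a downloaded distro image
dd if=/dev/zero of="$ISO" bs=1M count=4 2>/dev/null

# Raw-copy the image onto the target, then flush write buffers
dd if="$ISO" of="$DEV" bs=4M conv=fsync 2>/dev/null
sync

# Sanity check: the target should now be byte-identical to the source
cmp -s "$ISO" "$DEV" && echo "write OK"
```

On a real stick you'd also make sure none of its partitions are mounted before writing — exactly the kind of detail Rosa ImageWriter handles for you.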
Rosa ImageWriter is part of Rosa Linux, which in turn is a
continuation of the Mandriva Linux distribution. A few
years ago, the Rosa team forked SUSE
Studio Imagewriter – a modest and
very easy to use tool for writing ISO
images to USB devices. The fork was heavily optimised and enhanced;
the team got rid of the C# and .NET code from the Linux and Windows
versions respectively and rewrote the tool entirely in C++. The
interface was also ported to the Qt5 framework, while retaining the
original visual simplicity.
The end result is a tool that constantly updates the list of
currently available USB media, displays their labels and sizes, and
supports ISO selection either via a classic file-open dialog or by
dragging the file onto the tool's window.
Another small Qt5 utility with finely crafted features
making it worth a decent perusal.
“Flash your USB stick
with an ISO from any
kind of modern system.”
Rosa ImageWriter is distributed as a source tarball and a set of
binary packages for 32- and 64-bit Linux, Windows 7/8 and Mac OS X.
Along with distro-specific packages, static builds are available for
all platforms, so you can prepare your USB stick with an ISO from any
kind of contemporary Linux system, even from proprietary aliens.
According to our tests, Rosa
ImageWriter correctly handled all
Ubuntu and Debian derivatives, all sorts
of RPM-based distros, FreeBSD images
and plenty more.
PDF split and merge tool
PDFSaM
Version: 2.2.4 Web:
Perhaps the most common interchange file format is PDF. Unlike .docx
or even .odt it's still the most accessible, because almost every OS
provides some kind of standard tool for reading the format.
Linux compares favourably in that
regard: its Evince and Okular
applications are both excellent pieces of
software and at least one of them is
always bundled with any major distro
you might choose. Although PDF was designed by Adobe as a target
format, and thus isn't meant for modification, in real life some
altering is often required: rotating, cropping, merging, extracting
pages and so on.
PDF Split and Merge tool (known to
its friends as PDFSaM) has grown from
a simple app designed for splitting and
rotating PDF files into a Swiss Army
knife of PDF manipulation, capable of
doing many tricks with PDFs. It's platform-independent software
written in Java using Swing, which requires a Java or OpenJDK runtime
(and which, we admit, makes the interface look just a tiny bit
old-fashioned).
Unpack the PDFSaM tarball and run
java -jar pdfsam-2.2.4.jar in its
directory. The interface is quite unusual,
at least initially. You'd expect to be able
to browse for PDF files and apply
desired actions to them but the logic is
reversed. PDFSaM is actually a browser
of PDF modification plugins – you
choose what you're going to do and
apply your PDF to that plugin. Currently
six plugins are available: Alternate mix
(move pages between PDFs), Merge/
Not very handsome, but a clean and feature-rich tool.
“It's confusing at first
– PDFSaM is a browser
of PDF mod plugins.”
Extract, Rotate, Split, Visual document
composer and Visual reorder. The latter two plugins enable previewing PDF
pages and their live modification, while
the others produce modified PDF files
in the directory you point the plugin to.
You can control some options, like
overwriting the source file or specifying
the internal version of PDF.
PDFSaM will be a good companion
for those who frequently work with PDF
files or need a cross-platform tool for
batch-file processing. According to our
tests it was quite stable even under the
heavy load of a bunch of ponderous
files. Reordering and moving pages
from one file to another also worked
flawlessly for us.
File manager
Rodent Core
Version: 5.2.9 Web:
Aside from heavyweight desktop environments like KDE SC or Cinnamon,
there's a bunch of compact and speedy desktops that always find their
devoted users. Mostly these are the owners of old PCs, but sometimes
even the users of top-level power beasts prefer not to waste system
resources on bells and whistles and stick to simple, fast software.
The Rodent Applications stack is just such a collection of small and
handy utilities. One of its key applications, the Rodent Core file
manager (aka xffm), has seen a new release recently.
It's a lightweight GTK2 application
based on a plugin design with a decent
set of features for daily use. Rodent
Core looks and feels very unusual,
especially if you've been used to classic
file managers such as Dolphin or Nemo.
It replaces your current desktop with its
own, so the whole thing looks like a
mini-graphic shell.
The main Rodent window, by default,
is set to display square tiles of files and
directories. Most graphic, PDF and even
text filetypes are supported by the built-in thumbnailer, so you're unlikely to get
lost even though the paradigm of
Rodent is really different. In order to
open files and perform other basic
actions, Rodent relies on numerous
external utilities, many of which are
standard in Linux. Browsing with admin permissions utilises sudo,
text-field autocompletion uses data stored by Bash, and the
filesystem automounter is a combination of an /etc/fstab watcher
and a FUSE front-end. While trying to get
familiar with Rodent, you'll surely notice
unique features, like advanced file
A tailored file manager which takes a little more control
over your desktop.
“Replaces your desktop
with its own, like a
mini-graphic shell.”
shredding for higher security, easy file
encrypting with Bcrypt and lots of other
handy tools, such as front-ends for diff
and ssh. The lower part of the Rodent window carries a shell, where
most file manipulations are mirrored by their respective shell
commands.
If you decide to try the Rodent Core file manager out, check what's
available for your system. Binary packages exist for OpenSUSE,
Gentoo, Ubuntu and FreeBSD and, while the source is available like
any GPL software, it can be painful to build Rodent yourself. But if
you're brave enough, you'll get this blazing-fast, low-fat file
manager.
Cross stitch patterns editor
KXStitch
Version: 1.2.0 Web:
Want to employ your grandma's ninja-knitting skills but not sure how
to control the design? Here's the perfect solution. KXStitch is a fun
piece of software which lets you transform any bitmap graphic file
into a cross stitch pattern, apply manual editing, print it and use
it as a guide for real stitching.
KXStitch uses its own internal file
format for storing stitch schemes
(KXS), so to use a photo or a drawing with the app it must be
imported through the File > Import Image dialog (the same logic can
be found in Gimp). Once you do this, another dialog will ask you to
choose the floss type, the cloth count per inch and the general
pattern scale. The
imported image in most cases needs
extra editing: it's unlikely that a human
would use forty spinning rolls of
different colours to stitch tiny pixels on
a real cloth. So you'll need to simplify
the scheme and edit certain stitches by
hand. KXStitch offers a wide range of
dedicated tools for this, including an
excellent colour palette editor, drawing
and erasing pens, polygonal filling and
plenty more.
The app is a long-term veteran of the FOSS world. KXStitch was
initially started in 2005 as a Qt3 application. It's since grown
through dozens of enhancements, including the transition to Qt4, and
now we can enjoy the 1.2.0 version, which brings more translations
and a few improvements.
The entry level for first time KXStitch
users isn't as high as you may think,
Have a rest from your Linux box, go stitch....
“KXStitch offers a wide
range of dedicated tools
for editing stitches.”
either. In fact, there's a comprehensive
guide for both drawing the scheme
from scratch and for handling bitmap
graphics, all packed in a serviceable
DocBook format.
Note that the KXStitch download
link is found on the kde-apps
homepage. This is important, because
the KXStitch website at SourceForge hasn't been updated for a long
time and offers an outdated version,
while the kde-apps page is current.
Compiling KXStitch isn't tricky at all,
thanks to the excellent cmake wrapper.
You'll need KDE runtime headers,
cmake plus standard build environment,
and after that the only job left is to run
the ./ script.
Encrypted filesystem manager
KEncFS
Version: 1.4.0 Web:
Linux users are generally seen as security-conscious people, though
a large proportion of that
stems from clichéd attitudes to
alternative culture and hackers.
However, the demand for high-quality
encryption technology now stretches
far beyond the enterprise users in this
post-Snowden world. Even the less
experienced user is interested in
keeping some files safe and hidden.
This is where EncFS comes in – a FUSE-based cryptographic solution which
transparently encrypts everything in a
specified directory. Unlike disk
encryption software such as TrueCrypt,
EncFS works on a directory level, so the
ciphertext directory structure mirrors
the plaintext's directory structure. In
other words, you have to set two directories for EncFS: a mount point
for your protected files and an encrypted directory for the ciphered
files. Each set gets protected with a master password,
which is the only key to mounting the
encrypted destination. It's hard to say
whether the ciphertext decryption
attack has any merit, but EncFS has a
good rep with IT security professionals.
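The two-directory setup described above looks roughly like this from a shell. The directory names are our own invention, and the block steps aside politely if EncFS or FUSE isn't available on the machine:

```shell
CIPHER="$HOME/.Private"   # encrypted storage: this is what lives on disk
PLAIN="$HOME/Private"     # mount point: the transparent, decrypted view
mkdir -p "$CIPHER" "$PLAIN"

if command -v encfs >/dev/null && [ -e /dev/fuse ]; then
    # --standard takes the default options; -S reads the master password on stdin
    echo "a strong master password" | encfs --standard -S "$CIPHER" "$PLAIN"
    echo "secret note" > "$PLAIN/note.txt"   # stored encrypted under $CIPHER
    fusermount -u "$PLAIN" || true           # unmount; only ciphertext remains
else
    echo "encfs/FUSE not available, skipping demo"
fi
```

This is exactly the dance that KEncFS, described next, wraps in a GUI.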
KEncFS is perhaps the most intuitive and high-quality EncFS
front-end, with a focus on KDE SC integration.
It has a clean and neat Qt4 interface
and some lovely details. The main
window stores the list of filesystems
which KEncFS uses to represent the
encrypted directories. The user can add,
delete and manage entries as well as
mount or unmount them. When an
entry is mounted, a corresponding
directory automatically opens in the
default file manager (say Dolphin). In
A very straightforward GUI for a powerful directory
encryption solution.
“KEncFS can automount
your encrypted
filesystems on startup.”
addition, KEncFS can auto-mount your encrypted filesystems on startup
and use KWallet for password storage (don't forget to protect the
KWallet itself with a strong password), and the whole thing has a
very small impact on system performance. KEncFS does offer some
precompiled binaries, but chances are
that you'll have to build it yourself. As
it's a rather contemporary Qt project,
installation is as simple as running
qmake, make and make install,
although it takes some time to install all
necessary build prerequisites, which
are mainly KDE runtime libs.
Free system profiler
I-Nex
Version: 7.4.0 Web:
Sometimes you need to know your screen resolution numbers or the
exact chipset model of your motherboard. If so, I-Nex is the tool
you've been looking for. For those fresh from the Windows world, it
plays nearly the same role as CPU-Z. It's a profiler
(or a monitor) that detects the CPU,
RAM, motherboard chipset, and other
hardware features of your computer,
and presents the info in one window.
The I-Nex window consists of several tabs, each dedicated to a different
aspect of the system. The CPU tab
displays the full name of your
processing unit, the L1 and L2 cache
sizes, current temperature readings and
load meter etc. On the GPU tab it's easy
to find your graphic chip specs, learn its
OpenGL capabilities and also observe
your screen details (data is retrieved
through EDID). The full list of what I-Nex covers in all its tabs is
extensive and not a good use of this space, but it does
make a perfect companion for PC
hardware enthusiasts and system
builders. For other users, it can be
helpful in determining what's inside
your notebook or desktop computer.
In order to install I-Nex first check
your Linux system repositories, as the
tool is often available. If not then you'll
need to download the tarball from the
project's website. You won't need to
compile anything because I-Nex is
distributed as a set of already prepared
files. We don't say 'binary' files because
I-Nex is made in an unusual way: it
requires the Gambas project runtime (a
dialect of the honourable BASIC
The full specs of your system perfectly replicated.
“For Windows users, this plays nearly the same role as CPU-Z.”
language). I-Nex gathers information
from dozens of sources, so it's sane to launch it from the command
line, where you'll be able to keep track of any crucial missing
things, such as temperature sensors.
HotGames Entertainment apps
Volleyball game
Blobby Volley 2
Version: 1.0rc1 Web:
Among all the many sport
simulators, Blobby Volley 2
holds a special place for us.
At first glance you may have mixed
feelings and think about giving it a
miss for being too simple. But don't.
Appearances can be deceptive, and
after a couple of minutes of play
you'll realise that Blobby Volley 2
deserves much more attention.
Basically, this is a classic side-scrolling arcade game for two
players, and, just to confuse things a little, the current version of
the game is 1.0rc1 even though it's an open source remake of the old
proprietary version 1 that's been given the moniker Blobby Volley 2.
Each player is a coloured bumpy
blob with a simple moving animation
which makes it feel like some kind of
thick gel. However, the gameplay of
Blobby is extremely addictive. Each
player can use custom keys or a mouse,
which means that it's possible to
comfortably share a keyboard or have
one player use the keyboard and the
other the mouse. Blobby Volley 2 also
lets you play against a bot (a Lua script,
by the way). The real challenge is
playing against a friend, of course. The
Lua bots are pretty dumb, but they are good for first-time training.
But there's even more to this game.
The real magic starts with the network
game where you connect to one of the
Try playing for both parties at a time and find out
which of your hands will win.
“In the network game
you connect to one of
the public servers.”
Blobby Volley League public
servers. And yes, there is a strong
community around this game, with
real tournaments and
championships, all hosted on the official site. You can play
against a random user or, better
still, try to arrange a duel with
someone from the official league –
in both cases it's great fun.
Open Source Caesar III remake
Caesaria
Version: 0.4 Web:
Sometimes classics come back with greater glory and shine, and now
it's the turn of
Caesar III – a very popular economic
city builder released by Sierra
Entertainment back in 1998.
In 2013, a Linux hacker called
Dalerank started making an open
source remake of the venerable
game, writing most code from
scratch. The game has become a
very close clone of the original, with
most of its logic carefully restored
but constantly being enhanced. The
project website regularly publishes
new builds of Caesaria (aka
OpenCaesar3) for Windows, Linux
and OS X, providing both 32- and
64-bit versions. To run the game,
simply unpack the downloaded ZIP
archive and run ./caesaria.linux
from there. The game only depends
on a set of SDL libraries, so if any of
them are missing, you instantly find out
which from the command line output.
The game asks you to put some original Caesar III resource files in
certain directories for audio support and extra maps, but even if you
don't own the game, the vanilla Caesaria build already contains all
the graphics, which are quite enough to enjoy building a Roman town.
The aim of the game is to build rank by achieving the objectives set
by the Emperor for the construction and development of the city. The
game
pretty accurately reflects the life of
Perhaps we'll build more potteries to make use of that
extra loam. We’re crazy like that.
“Build rank by
achieving objectives set
by the Emperor...”
ancient Romans, including housing,
goods manufacturing and trading,
fighting with barbarians, setting
entertainment events and satisfying
the gods. Dalerank wrote in his blog
that the most difficult part was
writing complex mathematical
conditions for calculating citizens'
routes and behaviour. Though we admit there's still a lot to be done
to make Caesaria stable
(currently it crashes sometimes
when you try to rotate the map), the
game is still playable and quite an
achievement in its own right.
Web browser
Otter Browser
Version: 0.9.03 Web:
The Otter Browser (not to be confused with the Otter theorem prover)
is a challenging and
rather brave Qt5 project with a goal of
recreating the Opera 12.x user
experience. As you might know, in 2012
Opera Software discontinued its core
technology, the Presto HTML rendering
engine which was the heart of the
desktop and mobile Opera browser, as
well as some other products. The Opera
Software team also shrank at that time
and the company decided to move its
efforts to the Chromium project and
help Google to develop the code for its
open source browser. Since then the
'new' Opera has been based on the Blink
engine from Google and really
resembles Chromium visually. In fact,
the most beloved Opera features of
Presto were discarded to
accommodate a redesigned Chromium
clone. Those features form a long list
and include bookmark manager, built-in
content blocker, Javascript switcher,
images display switcher, delayed tab
loading, mouse gestures, smart text
selection and much, much more.
Folks behind the Otter Browser are
intent on recreating all those prominent
features without losing the identity of
the classic Opera. The project is in beta
stage and boasts initial support for the
Adblock Plus extension, support for
HTML bookmarks import, keyboard
shortcut customisation, a download
manager, a cookies manager, a handy
sidebar and lots of minor UI
improvements. The Otter Browser also
seems to be the only complete (or soon to be) web browser based on
Qt5 widgets, so
if you plan to build yourself a pure Qt5
The browser shows off decent figures in a standard
HTML5 test. Not bad for beta.
“It’s designed to
recreate the classic
Opera features.”
desktop, say Lumina for example, the
Otter Browser will fit there perfectly.
For installation, the Otter Browser has been packaged for various Linux
flavours, including OpenSUSE, Ubuntu,
Arch and their derivatives. However, as
the project is developing rather quickly
(developers publish weekly reports on
their website), it would be a good idea
to build the most recent Git version
yourself. You'll need recent versions of
cmake and the Qt5 development stack, with the QtWebKit 5.x dev headers.
Mobile devices and connections manager
ModemManager
Version: 1.4 Web:
Despite the march of mobile technology, sometimes we still use
modems, especially on the go or in areas without terrestrial
internet. Modems hide inside almost every phone and smartphone, as
well as USB dongles. Gone are the days
well as USB dongles. Gone are the days
of studying the ifup command and
playing with ppp arguments in Linux,
and ModemManager reflects that
change – it's a NetworkManager-like
tool for handling modems, dongles,
Bluetooth-paired telephones, or even
more professional equipment with
external power supply.
ModemManager is a DBus-activated daemon which controls
mobile broadband (2G/3G/4G) devices
and connections and provides you with
some features for managing calls and
SMS, as well as contacts and some
GSM-specific features. If you want to
have an internet connection on your
computer using a smartphone as a
modem (assuming you have an appropriate data plan on your phone)
and a Bluetooth or USB connection,
ModemManager is a smarter and more
up-to-date way than the likes of
Bluedevil+KPPP or wvdial.
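ModemManager also ships a command-line client, mmcli, which is the quickest way to see what the daemon has detected. A guarded sketch — it needs a running daemon and real hardware to show anything interesting, so each call is allowed to fail quietly:

```shell
if command -v mmcli >/dev/null; then
    mmcli -L || true     # list the modems ModemManager has found
    mmcli -m 0 || true   # detailed state of the first modem, if one exists
else
    echo "mmcli not installed, skipping"
fi
```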
ModemManager provides a GUI
which can be launched just like any
standalone application and is
automatically called when you pair your
phone using Bluetooth and choose to
add a DUN (dial up networking) option.
This way the connection is added to the
NetworkManager configuration and
displays among other available
connections there.
The ModemManager GUI is a bench board for your
modem in Linux.
“A NetworkManager-like tool for handling built-in modems etc.”
Most Linux vendors consider ModemManager a must-have piece of
mainstream software, so the fresh release is likely to be bundled
with regular updates sooner rather than later. The new 1.4 version can
manage a device's power state, has
better handling of IPs (for instance, you
can now set static IPv6 address
together with DHCP mode) and
supports recent ZTE, Huawei and
Sequans Communications modems. LXF
Issue 190
November 2014
Issue 189
October 2014
Issue 188
September 2014
Product code:
Product code:
Product code:
In the magazine
In the magazine
In the magazine
Origin of the distro –
LXF tracks the source of
your favourite distro and
picks the best of each
genus. Plus: we chart
Ubuntu’s bumpy history
as it celebrates 10 years.
Also, Pi alternatives and
the best web browsers.
Discover how to solve
your Linux problems
with a huge trouble
shooting guide. Plus:
Chromebook Roundup,
run Linux on almost
any Android device,
cryptography explained
and Minecraft hacking.
Streamline your
desktop, cut boot times
and get more from your
system with our guide to
speeding up Linux. Plus:
voice recognition app
roundup, how good is 4K
support in Linux, and
the state of education.
LXFDVD highlights
Linux Mint DE, Mageia 4.1,
Trisquel 6, CentOS 7 and more.
LXFDVD highlights
Tails 1.1 Live DVD, Deepin 2014
and 3 essential rescue distros
Issue 187
Summer 2014
Issue 186
August 2014
Issue 185
July 2014
Product code:
Product code:
Product code:
In the magazine
In the magazine
In the magazine
Use our foolproof guide
to switch from Windows,
including backup and
dual-booting. Plus turn
a Raspberry Pi into a
wiki server, choose an
HD media player, run
Windows apps in Linux
with Wine, and more.
Get Ubuntu 14.04
running the way you want
it with our tweaks and
hacks. Plus learn to build
a Pi cannon in Minecraft,
control a sky of clouds
using OpenStack and get
Samba dancing perfectly
with Linux, and more.
Building your own PC
doesn’t just save you
money, it means you get
exactly what you need.
We show you how! Plus,
find out how to install
Linux on a Chromebook,
choose the best wiki
platform, and more.
LXFDVD highlights
Ubuntu 14.04 LTS, Mint 17,
SystemRescueCD and more.
LXFDVD highlights
CrunchBang 11, Elive 2.2.2 beta,
Puppy Slacko 5.7 and others.
LXFDVD highlights
CentOS 7, Arch Linux,
Peppermint Five, Rescatux 0.32
LXFDVD highlights
Ubuntu 14.04 LTS (64-bit) LXF
remix, MX-14, and much more.
Quote the issue code shown above and
have your credit or debit card details ready
Available on your device now
*Free Trial not available on Zinio.
Turn on, log in and let our experts help make you a smarter Linux user
has been learning
how to Bootstrap.
He can also tie his
own shoelaces.
This month learn how to...
Man fix, man good. I’d
forgotten that feeling
you get when you
resurrect a broken piece of
hardware. It’s great. For a few
moments you feel like a mini-Prometheus returning with the gift of
fire – or in my case the gift of
Facebook. A Samsung Galaxy
had been bricked, and although I
was looking forward to a few
evenings without baby pictures
being shoved under my nose, it
was apparently my duty to fix it.
The internet is great for useful
information and particularly so
for Linux queries, but opinions, I
discovered, on something as
mundane as a broken Samsung
phone are like nipples: everyone
has them, but a lot of them are
utterly useless.
Magic & Gods
After reversing out of a few rabbit
holes, including one on fixing
defective power buttons, I called
on the power of Odin (the phone
flashing software, not the Norse
deity). Thanks to the incredibly
well-informed users on the XDA
Developer’s forum, I was able to
find the right firmware (for the
boot, pda etc) and a pit file, which
meant that all I had to do was run
Odin and we were back in
business. Auto-magic! This
month, we’ve got some (pert)
tutorials that feel a bit like magic
too, like the piece on Cython, the
Python to C compiler (p84) and
Docker (p80), but whatever your
level we’ve got something to
tickle your tutorial itch. Enjoy!
[email protected]
Journald ......................... 68 Wireshark...................... 70
Awk ...................................74
Sean Conway enters the brave new
world of Journald for all your logging
needs and shows you how to get the
most out of the Journalctl tool.
Get to the useful data among your reams of text results. Neil
Bothwick cries “Awk” and dives headlong into the fray with this
trusty tool.
Mihalis Tsoukalos gets out his
spyglass for a lesson on analysing
your network traffic using the
visually appealing Wireshark.
Code in tutorials
Source code is presented in beige
boxes. When lines of code are too long
for our columns, the remaining text
appears on the next line in a solid box:
MySQL:........................... 76
Andrew Mallett finishes off this
series covering the creation of
dynamic websites on the Raspberry
Pi by slotting in MySQL.
We have a ‘whale’ of a time with
Docker, as Jolyon Brown shows us
how to adopt the open platform for a
development environment.
procedure TfrmTextEditor.
Otherwise, there is a gap, like this:
mniWordWrap.Checked := false
Whether you’re a beginner or
a guru, learn new skills in our
programming section
In this issue...
Cython: Compiling
Git: Get started
Philip Herron straps a nitro to
Python. Say hello to a Python to
C compiler called Cython, p84
Jonni Bidwell gets some odd
looks yelling “Fork!”, “Git!” and
“3-way!” in the office, p88
Journald Understanding the new
logging process and journaling tool
Journald: Log
for Systemd
Sean Conway shows the weary sysadmin how to work with Journald, and
how to tap into all that logging data with the journal tool, Journalctl.
Some service status details and ten lines of logs rolled
into one screen. You’ve got to like it!
Sean Conway
is a former aviation electronics tradesman who is enjoying life now
that he’s found the consequences of his actions no longer include
raining aluminium.
It is two o'clock in the AM when you receive a call from the
helpless desk (not a typo) reporting that a computer
service is not available. You access the business system
from home to do some remote troubleshooting. A quick URL
check with a browser and you discover there’s no web page
being displayed. Logging into the server you follow the normal
troubleshooting procedure of looking at the log files.
Examining the log directory you discover all existing process
log files have a zero size. In this scenario, an OS software
upgrade replaced the legacy logging system with another
one. The new logging system stored files in a different
location and needed different tools to access them. This
The Systemd command list-unit-files is a sweet feature that
furnishes you with the status of services.
68 LXF191 December 2014
simple change creates an immediate learning curve for in
order to start examining the system’s problem.
If you are stroking along merrily in the Fedora Linux swim
lane, you will no doubt discover that Fedora 19 has made the
journald process the default logger. This replaces legacy
logging delivered via the rsyslog process. When Fedora 19 was
first released Journald was having some teething issues and
was not out front in logging. Those issues appear to be solved
now thankfully, with journald the default logger in a recent
software upgrade.
Why change logging, anyway? The new logging process
provides long overdue improvements, that's why. Journald
helps improve logging to the same degree that Systemd
improves system boot response. However, for most system
administrators these changes will force many to update their
knowledge and step outside of their comfort zones.
We're not going to try and sell readers on the change.
The replacement brings some standards to logging that were
alluded to in the legacy logging system but rarely followed.
The new logging search is faster because of indexing. The
new system produces binary output that’s unreadable
without a tool. The new logs are accessed via a journal tool
and not a simple cat, more or tail command.
One method for easing the discomfort in learning a new
development is to demonstrate how simple tasks done using
the old methods can be completed using the new methods.
The key to looking at what journald is doing requires using the
journal tool, Journalctl. The new logs are in a binary format.
Journalctl is a command line interface (CLI) tool that’s used
to probe the logs.
From a terminal CLI the reader can type the following for a
help listing of the tool. The output is a list of the flags and
command options available.
journalctl -h
Logs similar to the /var/log/secure legacy logging file.
Examining the man pages for journalctl using
man journalctl
provides an extensive description of options and query
support. Scroll to the bottom of the man pages, you will find
links to resources. The link at
Software/systemd is a premium place for finding
information on Systemd and Journald. Note: The URL is even
referenced at the end of the man pages, saving you from
having to store the link.
One more knowledge source that will prove useful if you
are planning a deep dive into Systemd and Journald is the
blog by Red Hat developer Lennart Poettering. It's a good idea
to take conceptual scuba tanks, because this resource pool is
deep and expansive. Many thanks to Lennart for sharing this
with the community.
The developers have enhanced the tool with tab assist to
display an options list to complete the command. This saves the
administrator from having to remember command syntax.
When an option is appended to the command, tab assist will
continue to provide attributes to support the request.
journalctl _COMM=<tab><tab>
sshd <enter>
From the tab lookup example, the journalctl command is
accessing logs that would have existed in /var/log/secure
on the legacy logging system.
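Field matching isn't limited to _COMM. As a sketch of a few other journal fields (the unit, UID and PID values below are invented for illustration):

```shell
# Match by systemd unit rather than by command name
journalctl _SYSTEMD_UNIT=sshd.service
# Match by the UID that generated the entries (0 = root)
journalctl _UID=0
# Several field matches on one line are ANDed together
journalctl _COMM=sshd _PID=1234
```

Tab assist should offer values for these fields too, just as it does for _COMM.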
To exit the Journald and Systemd pagers press q. If you have
executed a command similar to tail, you will need to press
Ctrl and the c key to exit the commands. Some screens
provide this tidbit of direction to help the user, others don't.
Before exploring more Journald features we need to stop
by Systemd for a quick look at a command that partners with
logging. We suspect some of you are still frustrated by the
decision to implement Systemd in favour of SysV.
systemctl list-unit-files
The screenshot of the CLI (pictured bottom, p68) shows
how the sshd service looks when enabled. The question is:
were there any logs generated for this service? You can
answer this with:
systemctl status sshd.service
You now have the status of the service along with ten lines of
the logs (pictured top, p69). Now try this Systemd command
on for size:
systemd-analyze blame
This is an aside, but we're like magpies attracted to shiny
objects. When we bumped into this Systemd command that
displays boot times during our research, we stopped writing
to investigate why the ntpdate service was taking so long to
start. But let's return to Systemd logging discussions...
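As an aside to the aside: systemd-analyze has further subcommands worth poking at. Two we can vouch for (the output will vary per system):

```shell
# Total boot time, split into kernel and userspace
systemd-analyze time
# The chain of units that gated the slowest part of the boot
systemd-analyze critical-chain
```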
The next command displays the logs for the sshd service.
In a former logger this would have been contained in the
secure files in /var/log/:
Foundational knowledge
User administration commands, system
access, service delivery, system boot
and file system commands are
examples of foundational knowledge –
this knowledge is what other knowledge
is built on. When foundational
knowledge is replaced, the job of a
systems administrator becomes
difficult, to say the least. Access to log
files generated by applications or indeed
by the system itself is a critical
resource for troubleshooting.
Foundational knowledge for any
computer system is knowing how a
system logs information. The logs
provide snippets of details that can
assist an administrator in identifying
and possibly resolving an issue.
journalctl _COMM=sshd
Examining the entries, it looks like the logs are reporting
some nefarious action on the port supporting the ssh service.
How can we tell what logs journald is handling?
journalctl -F _SYSTEMD_UNIT
Examining the command output (picture, below), the
sshd.service is one of the services using Journald for logging.
In the past, the sysadmin's mainstay command for examining
logs and viewing entries being made in real time was tail -f /
var/log/messages. Fear not, the Journald tool
provides for this need.
journalctl -f
While this command is running in one terminal window, poke
the ntpd time daemon with ntpq -p from another terminal
window. The logs display in real time the error that the ntp
query tool cannot be accessed as a non-root user. We had
success throwing -f at the end of some of the other Journalctl
commands, to follow the logs as they were created.
We wonder if the -f attribute was something the Journalctl
developers chose in order to tap into the knowledge that
sysadmins have of the tail command.
With the display of logs from different services all
glommed together, it helps that the journaling tool provides
some flexibility for filtering data.
Recall, though, that the logging is indexed which provides
some query capabilities. Try these examples out:
journalctl --since=yesterday
journalctl --since=2014-09-15 --until="2014-09-16 23:59:59"
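These date filters combine with the others. A few more query sketches (the unit name is an example):

```shell
# Only entries of priority err or worse
journalctl -p err
# Only entries from the current boot
journalctl -b
# One unit's entries since yesterday
journalctl -u sshd.service --since=yesterday
```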
Being conditioned to use log files as the starting point for
investigating computer issues, it's necessary to develop some
understanding of Journald logging, because the old system
was deprecated, leaving zero information in the legacy files.
You'll find that uncovering the basic commands is pretty
straightforward, though it's not going to resolve the frustration
that many system administrators are going to be experiencing.
At this point though it's past 2am, the problem has been
repaired and we're off to the Linux Format Tower boudoir for a
well-deserved mug of Horlicks and some sleepy time. LXF
Note: sshd is listed as a service using logging.
Wireshark Set up the network analyser
Analyse traffic
Mihalis Tsoukalos explains the necessary things that you need to know to
start using Wireshark, and analyses three kinds of network traffic.
Mihalis Tsoukalos
enjoys protocol
analysis and
packet inspection
using Wireshark.
He's a UNIX
administrator, a
programmer and a
DBA.
Wireshark is a very popular and extremely capable
network protocol analyser that was developed by
Gerald Combs. Wireshark was born in June 2006
when Combs renamed the network tool Ethereal, which he
also created, as he was changing jobs and couldn't use the
old name anymore. Nowadays, most people use Wireshark
and Ethereal has been consigned to history. Your Linux
distribution will have a ready-to-install package for the analyser
too, so go ahead and install it.
You may ask what makes Wireshark different to other
network analysers – apart from the fact that it's free – and
why we're not simply advocating using tcpdump for packet
capturing? The main advantage of Wireshark is that it's a
graphical application. Capturing and inspecting network
traffic using a graphical user interface is very helpful
because it cuts through the complexity of network data.
To help the beginner understand Wireshark, they will need
to understand network traffic. The aim of this article, then, is to
supply a comprehensive introduction to TCP/IP to enable you
to come to useful conclusions about the network traffic data
you're analysing.
If you run Wireshark as a normal user, you won't be able to
use any network interfaces for capturing, because of the
default Unix file permissions that network interfaces have.
It's more convenient to run Wireshark as root (sudo
wireshark) when capturing data and as a normal user when
analysing network data. Alternatively, you can capture
network data using the tcpdump command line utility as root
and analyse it using Wireshark afterwards. Please keep in
mind that on a truly busy network, capturing using Wireshark
might slow down a machine or, even worse, might not enable
you to capture everything, because Wireshark needs more
system resources than a command line program. In such
cases using tcpdump for capturing network traffic is the
wisest solution.
The TCP packet and the IP packet format.
Capturing network data
The easiest way to start capturing network packets is by
selecting your preferred interface after starting Wireshark and
then pressing Start. Wireshark will show network data on your
screen depending on the traffic of your network. Note that
you can select more than one interface. If you know nothing
about TCP, IP or the other TCP/IP protocols, you may find the
output complicated and difficult to read or understand. In
order to stop the capturing process you just select Capture >
Stop from the menu. Alternatively, you can press the fourth
icon from the left, the one with a red square (which is
shorthand for 'Stop the running live capture') on the Main
toolbar (Note: its exact location depends on your Wireshark
version). This button can only be pressed while you are
capturing network data.
When using the described method for capturing, you can't
change any of the default Wireshark Capture Options. You can
see and change the Capture Options by selecting Capture >
Options from the menu. There you can select the network
Interface(s), see your IP address, apply capture filters, put
your network card in promiscuous mode, and save your
capture data in one or multiple files. You can even choose to
stop packet capturing after a given number of network
packets or a given amount of time or indeed a given size of
data (in bytes).
Wireshark doesn't save the captured data by default but
you can always save your data afterwards. It's considered
good practice to first save and then examine the network
packets unless there's a specific reason for not doing so.
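The save-first habit pairs naturally with the tcpdump approach suggested earlier. A sketch of that workflow, with an assumed interface and file name:

```shell
# Capture as root to a file; -n skips name resolution
sudo tcpdump -i eth0 -n -w capture.pcap
# Later, open the saved file in Wireshark as a normal user
wireshark capture.pcap
```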
Wireshark enables you to read and analyse already
captured network data from a large amount of file formats
including tcpdump, libpcap, Sun’s snoop, HP’s nettl, K12 text
file etc. This means that you can read almost any format of
captured network data with Wireshark. Similarly, Wireshark
enables you to save your captured network data in a variety
of formats. You can even use Wireshark to convert a file from
a given format to another.
You can also export an existing file as a plain text file from
the File menu. This option is mainly for manually processing
network data or using it as input to another program.
There is an option that allows you to print your packets.
I have never used this option in real life but it may be useful to
print packets and their full contents for educational purposes.
Display filters
While capture filters are applied during network data capture
and make Wireshark discard network traffic that doesn't
match the filter, display filters are applied after capture and
'hide' network traffic without deleting it. You can always
disable a Display filter and get your hidden data back.
Generally, display filters are considered more useful and
versatile than capture filters because it's unlikely you'll know
in advance what you'll capture or want to examine.
Nevertheless, applying filters at capture time can save you
time and disk space and that's the main reason you might
want to use them.
Wireshark will highlight when a display filter is syntactically
correct with a light green background. When the syntax is
erroneous, the background becomes pink.
Display filters support comparison and logical operators.
The http.response.code == 404 && ip.addr ==
display filter shows the traffic that either comes from the IP address or goes to the IP address
that also has the 404 (Not Found) HTTP response code in it.
The !bootp && !ip && !arp filter excludes BOOTP, IP and
ARP traffic from the output. The eth.addr ==
01:23:45:67:89:ab && tcp.port == 25 filter displays the
traffic from or to network device with the 01:23:45:67:89:ab
MAC address that uses TCP port number 25 in its incoming
or outgoing connections.
Keep in mind that display filters don't magically solve
problems. They are extremely useful tools when used
correctly but you still have to interpret the results, find the
problem and think about the possible solutions yourself.
When defining rules please remember that the (ip.addr != expression doesn't mean that none of the ip.
addr fields can contain the IP address. It actually
means that one of the ip.addr fields should not contain the IP address. Therefore, the other ip.addr field value
can be equal to You can think of it as 'there exists
one ip.addr field that is not'. The correct way of
expressing it is by typing !(ip.addr == This is a
common misconception.
Also remember that MAC addresses are truly useful when
you want to track a given machine on your LAN because the
IP of a machine can change if it uses DHCP but its MAC
address is more difficult to change.
It is advisable that you visit the display filters reference site
for TCP related traffic at For
the list of all the available field names related to UDP traffic,
it's advisable to look at
The three packets (SYN, SYN/ACK, ACK) of a TCP
3-way handshake.
The fact that the
FTP protocol
usually uses
port number 21
doesn’t mean
it’s not allowed
to use a different
port number. In
other words, don't
blindly rely on the
port number to
characterise TCP/
IP traffic.
About TCP/IP, TCP and IP
TCP/IP is the most widely used protocol for interconnecting
computers and it is so closely related to the internet that it's
extremely difficult to discuss TCP/IP without talking about
the Internet and vice versa. Every device that uses it has:
The TCP protocol
TCP stands for Transmission Control Protocol.
The main characteristic of TCP is that it's reliable
and makes sure that a packet was delivered. If
there's no proof of packet delivery, it resends the
packet. TCP software transmits data between
machines using segments (also called a TCP
packet). TCP assigns a sequence number to
each byte transmitted, and expects a positive
acknowledgment (or ACK) from the receiving
TCP stack. If the ACK is not received within a
timeout interval, the data is retransmitted as the
original packet is considered undelivered. The
receiving TCP stack uses the sequence numbers
to rearrange the segments when they arrive out
of order, and to eliminate duplicate segments.
The TCP header includes both the Source Port
and Destination Port fields. These two fields, plus
the source and destination IP addresses are
combined to uniquely identify each TCP
connection. Ports help TCP/IP stacks in network
connected devices (PCs and routers etc) to
distribute traffic among multiple programs
executing on a single device. If a service wants to
be seen as reliable it's usually based on TCP,
otherwise it's based on IP. But as you can
imagine, reliability comes at a cost and therefore
isn't always desirable.
Part of an Nmap ping scan on a LAN as captured by Wireshark.
An IP address This address must be unique at least to its
local network.
A network mask Used for dividing big IP networks into
smaller networks.
One or more DNS servers Used for translating an IP
address to a human-memorable format and vice versa.
A Default Gateway This is optional if you want to
communicate with devices beyond your local network. A
Default Gateway is the network device that TCP/IP sends a
network packet to when it doesn't 'know' where else to
actually send it.
Every TCP service listens to a port that is unique to each
machine. A machine that supports the HTTP protocol, the
protocol that serves WWW, is also called an HTTP server.
Similarly there exist FTP servers, DNS servers, etc. It's the two
pairs of the IP addresses and port numbers on both ends of a
TCP/IP interaction that uniquely identify a connection
between two machines that use TCP/IP.
A TCP packet (see the format of a TCP and an IP packet
segment, pictured on p70) can be used to establish
connections, transfer data, send acknowledgements,
advertise the buffer that holds incoming data (which is called
the Window Size), and close connections. As you can see in the
packet screenshot (see p70), each TCP segment has a
header part and a data part.
When you put
your network card
in promiscuous
mode, you allow
the network device
to catch and read
every network
packet that arrives
to it even if the
receiver is another
device on the
network. Network
packets still go
to their original
destination.
The TCP 3-way handshake
TCP provides a connection-oriented, reliable byte stream
service. It's a full duplex protocol, which means that each TCP
connection supports a pair of byte streams; one flowing in
each direction. The term 'connection-oriented' means the two
applications using TCP must first establish a TCP connection
with each other before exchanging any data.
The TCP header includes a 6-bit flags field that's used to
relay control information between TCP peers. The possible
flags include SYN, FIN, RESET, PUSH, URG, and ACK. SYN
and ACK flags are used for the initial TCP 3-way handshake as
you will see in a while. The RESET flag signifies that the
receiver wants to abort the connection.
The TCP three-way handshake goes like this: the client
sends a TCP SYN packet to the server, and its TCP header
includes a sequence number field that has an arbitrary value
in the SYN packet. The server sends back a TCP (SYN, ACK)
packet which includes the sequence number of the opposite
direction and an acknowledgement of the previous sequence
number. Finally, in order to truly establish the TCP connection,
the client sends a TCP ACK packet to acknowledge the
sequence number of the server. After the TCP three-way
handshake, the connection is established and is ready to
send and receive data.
The traffic for this case was produced by running the
following command:
After some necessary DNS, ARP and ICMP network traffic,
the TCP three-way handshake begins (pictured top, p71).
The client IP address is and the destination IP
address is A pretty simple display filter (tcp
&& !http) makes Wireshark display 63 out of 82 packets. The
three packet numbers used in the handshake are sequential
because the host wasn’t performing any other network
activity at the time of capturing, but this is rarely the case.
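You can also pick the handshake out of a saved capture from the command line with the same display filter syntax. A sketch using tshark (the file name is assumed; -Y applies a display filter):

```shell
# Show only packets with the SYN flag set (SYN and SYN/ACK)
tshark -r capture.pcap -Y 'tcp.flags.syn == 1'
```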
Ping scans
This part will examine the network traffic that's produced by
Nmap when it performs a ping scan.
LAN ping scans are executed using the ARP protocol.
Hosts outside a LAN are scanned using the ICMP protocol, so
if you execute a Nmap ping scan outside of a LAN, the traffic
will be different from the one presented. In the example below,
the Nmap command scans 255 IP addresses, from
to The results show that at execution time only
10 hosts were up or, to be precise, only ten hosts answered
the Nmap scan:
Host is up.
The purpose of the ping test is simply to find out if an IP is
up or not – see the grab on the opposite page. What's
important for Nmap in a ping scan is not the actual data of
the received packets but, put relatively simply, the existence
of a reply packet. As all traffic is in a LAN, each network device
uses its MAC address in the reply so you only see MAC
addresses in both Source and Destination fields. The
presence of a reply makes Nmap understand that a host is up
and running. As a MAC address includes information about
the manufacturer of the network device, Nmap also reports
that information for you.
Nmap also calculates the round trip time delay (or
latency). This gives a pretty accurate estimate of the time
needed for the initial packet (sent by Nmap) to go to a target
device, plus the time that the response packet took to return
to Nmap. A big latency time is not a good thing and should
certainly be examined.
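For reference, a ping scan is requested with Nmap's -sn option (-sP in older releases); the address range is left out below because it will be your own:

```shell
# Ping scan only, no port scan; append your own network range
sudo nmap -sn
```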
Analysing DNS traffic
DNS queries are very common in TCP/IP networks. A DNS
query creates little traffic and therefore it is an appropriate
example for learning purposes. The following command will
be used for generating the necessary DNS network traffic
that will be examined:
Two packets are needed in total: one for sending and one
for answering the DNS query (see grab, right).
The first packet is number 3 and the second is number 4.
A Display filter (DNS) is used to minimise the displayed data
and reveal the useful information. The UDP (User Datagram
Protocol) protocol was used and the desired information was
sent back without any errors as shown by the Flags
information. You can also tell by noting the time difference
between the DNS query (1.246055000) and its answer
(1.255059000) that the DNS services work fine because of
the reasonable response time. The DNS server asked has the IP address – as you can see from the destination IP
address of the first packet. The same DNS server answered
the DNS query as you can see from the source IP address of
the second packet. The 'Answer RRs: 2' line informs us that
there will be two answers for the DNS query. In time, you will
be able to take all this in with one glance.
UDP uses the underlying IP protocol to transport a
message from one machine to another, and provides the
same unreliable, connectionless packet delivery as IP. It
doesn't use acknowledgements to make sure messages
arrive, it doesn't order incoming messages, and it doesn't
provide feedback to control the rate at which information
flows between the machines. Thus, UDP messages can be
lost, duplicated, or arrive out of order. Furthermore, packets
can arrive faster than the recipient can process them.
The destination port of the first packet is 53, which is the
usual port number of the DNS service. The UDP part of the
second packet shows the port numbers used for the reply.
As it happens with most tools, the more you use
Wireshark, the more efficient you will become with it, so keep
on practicing and learning! LXF
There is also a console version of Wireshark called tshark.
The two main advantages of tshark are that it can be used in
scripts and that it can be used through an SSH connection.
Its main disadvantage is that it does not have a GUI. Tshark
can also entirely replace tcpdump.
Here is how Wireshark shows the traffic of a DNS query after applying a
Display filter. Notice the green colour around DNS that shows the validity of it.
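Any simple resolver lookup generates this kind of traffic. One hypothetical way to produce it (the hostname is invented for illustration):

```shell
# One lookup: a single DNS query out, a single response back
host www.example.com
```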
The IP protocol
IP stands for Internet Protocol. The main
characteristic of IP is that it's not a reliable
protocol by design. Unreliable means that
packets may not reach its destination for various
reasons, including transmission errors, network
hardware failures and network congestion.
Networks may also deliver packets out of order,
deliver them after a substantial delay or deliver
duplicates. Nevertheless, a programmer can
program reliable applications that use IP by
implementing their own error-checking code but
this is a non-trivial task.
When the information doesn't need many
network packets, using a protocol that's based
on IP is more efficient than using TCP, even if you
have to re-transmit a network packet, because
there’s no three-way handshake traffic overhead.
IP encapsulates the data that travels in a
TCP/IP network, because it's responsible for
delivering packets from the source host to the
destination host according to the IP addresses.
IP has to find an addressing method to
effectively send the packet to its destination.
Dedicated devices that you'd recognise as
routers mainly perform IP routing but every
TCP/IP device has to do basic routing.
Each IP address is a sequence of four 8-bit
numbers, separated by dots. Each number has a
value between 0 (=2^0-1) and 255 (=2^8-1).
Example IP addresses are, and
IPv6 was developed by the IETF and its purpose is
to solve the problem of running out of IPv4
addresses. IPv4 uses 32-bit addresses whereas
IPv6 uses 128-bit addresses, offering more than
7.9×10^28 times as many as IPv4.
Awk Use the command line tool for data
extraction and reporting from lots of text
Awk: Extract
data from text
If you want to extract useful data from a whole lot of text, try using awk;
Neil Bothwick thinks it may be the tool that you're missing.
Neil Bothwick
Neil Bothwick has
a computer in
every room, but
won’t disclose the
location of his
central server for
security reasons.
Text handling has a long tradition in Unix and Linux.
Part of 'The Unix Way' is to use text-based
configuration and data files wherever possible, and be
able to feed data from one program to another by using
pipes. Most Linux users will have encountered grep
(Globally search a Regular Expression and Print), the text
search tool, and many will also use sed (Stream EDitor) to
edit text streams or files. There's another tool that
complements these two, and it's used by fewer people, but
those that do use awk tend to use it quite a lot. The reason
for both of these is the same: awk is a powerful tool that can
be quite scary at first sight – so much so that you'll try to
avoid using it. When you do bite the bullet, you'll understand
the power of awk and find yourself using it more and more.
There's a lot to awk and whole books have been written on
it. All we can do here is help you overcome the first hurdle,
take away the mystery of awk and help you get started.
To start with an example, let's say you want a list of logged in
users on your system, such as provided by w or who, but you
only want the usernames and the time they logged in.
The standard output from who is like this:
nelz   :0       2014-09-10 14:42
nelz   pts/0    2014-09-10 14:43 (:0)
root   pts/10   2014-09-14 12:32 (:0:S.0)
root   pts/9    2014-09-16 19:14 (shooty:S.1)
The output from w is even more verbose (ironically). With
awk, we can pick out just the information we need:
who | awk '{print $1,$3,$4}'
The input, either from a file or a pipe as shown here, is
processed line by line. In awk terms, a line is a record and
each record is split into fields. By default, these fields are
separated by white space. So what this simple use of the awk
command does is print the first, third and fourth fields from
each of the lines, which happen to be the name, login date
and time.
Awk or Gawk?
You may see references to gawk at
times. Awk has been around for
decades, and there have been various
implementations. Linux distributions
generally use GNU awk, gawk, and awk
is then a symbolic link to gawk. So while
we talk about the general program awk,
you are almost certainly using gawk.
Gawk complies with the POSIX
definition of awk so – except in some
unusual cases that you are unlikely to
encounter – the two are identical.
Extracting information. Here we’ve printed the names and
versions of packages installed on a remote server.
The string passed to awk tells it what to do with the
information it receives, in this case that’s to print some fields.
There are two main parts, the part enclosed in braces is
actually the second part and determines the action to be
taken. The first part is a pattern to match; only records
matching that pattern are processed. No pattern was given
before, so all records were processed. What if we wanted to
limit the output to users whose names start with n?
who | awk '/^n/ {print $1,$3,$4}'
Any records that don't match are simply discarded.
The part in braces is optional too, it defaults to {print $0}
where $0 is the whole record, so a pattern without a
command makes awk more like grep, and omitting both turns
it into cat. The quotes around the whole awk command string
are required.
Patterns and actions
This is fine when fields are separated by white space (this
includes tabs), but what if the input uses a different field
separator? It may be a CSV file from a database or
spreadsheet. You can tell awk to use the different field
separator in one of two ways. The -F or --field-separator
option sets it on the command line, for example:
awk -F, '{print $3}' data.csv
The other way to do it is to set the FS variable in an awk
statement, but you would normally only do that in more
complex cases. You can also set RS to change the record
separator from a new line, but that’s also an uncommon
need. If the separator is set to a single character, as above,
Awk is a scripting language
We have looked at awk mainly as a
command line tool here, which is how it
is used much of the time. However, there
is far more to awk – it's an interpreted
programming language with variables,
control loops, functions and everything
else you would expect. Clearly, we are
not going to fit all that into two pages
when there are complete books written
on the subject. If you want further
it is treated as such. If it's any longer, it is treated as
a regular expression. This can be a useful method if you don't
have full control over the input and need to cover various
alternative separators.
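Here's a side-by-side sketch of the two approaches, using invented CSV input; both forms produce identical output:

```shell
# -F on the command line...
printf 'alice,42\nbob,7\n' | awk -F, '{print $1 ": " $2}'
# ...or FS set in a BEGIN block, run before any input is read
printf 'alice,42\nbob,7\n' | awk 'BEGIN { FS = "," } {print $1 ": " $2}'
```

Either way, each input line comes out as name: value.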
So far, we've used a regular expression as the pattern.
We can match more precisely than this: for example, to
match a regular expression in a particular field, you could use
$1 ~ /John/. So to make our user list more useful, we could
list all non-root users with:
who | awk '$1 !~ /^root$/ {print $1,$3,$4}'
There's no guarantee that each line will have the same
number of fields. If you specify a field that doesn't exist, awk
simply uses the empty string, it doesn’t return an error.
This is important as you may be working with input that
contains blank lines. What if you want the last field of each
record but you can’t be sure that all lines have a set number
of fields? Try using
awk '{print NF}' sometext
which will show the number of fields on each line, while
awk '{print $NF}' sometext
will output the content of the last field on each line. You can
combine this with the above to exclude lines with less than
the required number of fields:
awk 'NF >=5 {print $0}'
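A quick demonstration with two invented lines, one of three fields and one of two, shows NF and $NF working together:

```shell
# Keep only lines with at least three fields,
# then print the last field of each
printf 'a b c\nd e\n' | awk 'NF >= 3 {print $NF}'
```

which prints just c.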
The print command outputs the contents of the field, or
whatever else you tell it to print, as a plain string. There’s also
a printf function that works the same as in other languages,
taking a format specifier to set the output format. For
example, to print a numeric field as a currency, you could do:
awk '{printf("£%0.2f\n", $1)}'
Compound actions
While awk can work with individual single fields, like $3 for
instance, it has no feature to directly deal with a range of
fields, such as $3 to $6. As this is commonly needed, here is
one way (there are others) to do it:
awk '{out = ""; for ( i = 2; i <= NF; i +=2 ) out = out $i " "; print out }'
A full programming language, awk can be used to write (and run) scripts like this one. For more information, read the awk info pages; if you need more detail, there are several online tutorials and books to choose from, such as O'Reilly's sed & awk or Effective Awk Programming, and there are plenty of other resources if you want to learn more about awk, including the tutorials on the IBM developerWorks website.
This illustrates a couple of features of awk actions.
The first is that you can have multiple statements, separated
by semi-colons. The second statement shows one of the loop
constructs available in awk. Here we are selecting only the
even numbered fields, building them into a string and then
printing it. There is also a sneaky trick for removing one or
more fields from the output:
awk '{$1 = $2 = ""; print $0 }'
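A quick demonstration of the blanking trick; note that the field separators are left behind, so the output keeps two leading spaces:

```shell
echo 'a b c d' | awk '{$1 = $2 = ""; print $0}'
# prints '  c d' (the separators for the blanked fields remain)
```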
Here we are assigning the empty string to the first and
second fields and then printing out the whole, modified,
record. These examples help to illustrate another important
point. Don't get hung up on trying to do everything with one
program. Awk isn’t a universal solution to all text processing
needs and this is a good example. If you want a specific range
of fields, sometimes it's far simpler to forget awk and use cut:
cat somefile | cut -d, -f 3-5,7
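For example, with a made-up CSV record:

```shell
# Keep fields 3 to 5 and field 7 of a comma-separated line
echo 'a,b,c,d,e,f,g,h' | cut -d, -f 3-5,7
# c,d,e,g
```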
The delimiter is set with -d and -f specifies the ranges of
fields to print. Cut is a simple program, so you may still want
to use awk to format or select from the input data before
sending it to cut for slicing into fields.
Awk, as a programming language, is well suited to one-liners, with patterns and actions on a single command line.
There comes a point, though, when a one-liner becomes too
cumbersome or unreadable (Perl programmers, take note).
When you have more complex needs, or simply want to be
able to re-use the same invocation, you can save it into a file
and pass that to awk with the -f argument. You can even call
an awk script directly if you make it executable and include
this shebang as the first line:
#!/usr/bin/awk -f
LXF
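Putting that together, here's a tiny executable awk script (the fields.awk name is made up for this example):

```shell
# Create the script file, make it executable and run it
cat > fields.awk <<'EOF'
#!/usr/bin/awk -f
# Print the first and last field of any line with at least two fields
NF >= 2 { print $1, $NF }
EOF
chmod +x fields.awk
printf 'alpha beta\nlonely\ngamma delta epsilon\n' | ./fields.awk
# alpha beta
# gamma epsilon
```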
December 2014 LXF191 75
Nginx Use a LEMP setup on the Raspberry
Pi to run queries via a MySQL server
MySQL: Link to
your database
Andrew Mallett finishes off this series covering the creation of dynamic
websites on the Raspberry Pi by slotting in MySQL.
Andrew Mallett is a Linux trainer
with over 700 videos on YouTube
(UrbPeng). You’ll also find his
courses online.
MySQL statements can be broken into three main types:
DML, the Data Manipulation Language, which makes up
most of our regular inserts and selects; DDL, the Data
Definition Language, where we create and drop objects
like tables; and DCL, the Data Control Language, where we
manage permissions.
This is the fourth and final instalment of our Nginx/
LEMP series. We've been crafting together a LEMP
solution, and our final step will be working with the
MySQL database server to create a simple contact database.
Once the database is running and we've mastered some
basic SQL query techniques, we'll glue this final piece of the
puzzle into our PHP-enabled website, and we'll then be able to
execute remote searches within the database using HTML
forms on the Nginx web server.
Creating a database
If you've not already installed your MySQL database server,
refer back to part one of the series (Tutorials, p72 LXF188).
The database server will hold the backend storage element of
the solution. The database that we'll create will act as a
context to house and locate tables, which ultimately define
and store our data. Before proceeding we can run some
simple pre-flight checks that will verify that the database
server is running. The simplest way to do this is to use the
service command line tool:
sudo service mysql status
The resulting output gives a really useful display
summarising the activity of the database service. We also
need to ensure that the database server can be connected to
only from the localhost. In other words, the web server, which
is also running on the same host, should be able to connect
to the database server, but remote clients should not. We can
display this using the Linux command ss, or show sockets:
ss -lt | grep mysql
The output from ss should show that the MySQL service
is listening on just the localhost and not the external
interface of the Pi. The actual port, or socket, being used by
the service is 3306. The port merely represents the address
of the service. You may know that the Nginx web service
listens on port 80; well, MySQL listens on port 3306. To see
the mappings of common ports to service names, take a look
in the file /etc/services. This has over 600 lines on the Pi;
however, we can quickly look for the required information
using the command grep from the Linux shell:
grep mysql /etc/services
We can remain logged into the Raspberry Pi as a standard
user and from the LXTerminal we can authenticate to the
MySQL service using the mysql command line client. As we
log in independently to MySQL there's no need for root
privileges within Linux. However, don't be confused, we'll be
logging into MySQL as the MySQL root account. Currently,
this is the only user account on the database server and has
full rights to the MySQL resources in the same way as the
Linux root account has full rights to the Linux system. Using
the client mysql we can log on using the following command:
mysql -u root -p
You will be prompted for the password and, with
successful authentication, we will be presented with the
MySQL prompt and we’re in. We can now concentrate on the
important task of creating a database. The database is a
place holder for tables (content). In many respects a
database is much like a folder, if we were to compare it to the
file system. Don't forget that when issuing SQL statements,
each line of code is terminated with a semi-colon. We will
begin by listing the current databases, then create the new
database and list all databases again. Are you ready?
SHOW DATABASES;
CREATE DATABASE contact;
SHOW DATABASES;
We can see that we start with just the standard system
databases in existence; our new database is then created
and listed in the output – we name the database contact. To
move into that database we can make use of the command:
USE contact;
We use USE in the same way as we use the cd command within
the file system. In this way we can refer to the tables that we
create by just the table name, and remove the need to include
the database name when referencing tables. As we now have a
database, we can go ahead and create a table. The table will
hold the data that we'll add for this simple contact
application, and will consist of columns that store the user ID,
first name and last name. To create the table we define these
columns and enforce the data type for each column, plus any
additional properties for the table. We will make use of the
CREATE TABLE statement in the SQL language to achieve
this first step:
USE contact;
CREATE TABLE users (
uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
fname varchar(50) NULL,
lname varchar(50) NOT NULL);
Once the table is created we'll be able to verify its success
from the display output by the command SHOW TABLES.
Moving forward, we can make use of the command DESCRIBE
to detail the columns created, as well as more detailed
information with SHOW CREATE TABLE:
SHOW TABLES;
DESCRIBE users;
SHOW CREATE TABLE users;
Let’s now turn to the detail behind the creation of the
table. We called the table users. The definition, though, is
encompassed within the parentheses and consists of a single
line of MySQL code until we close the parenthesis and end
the code line with a single semi-colon. Each column in the
table is delimited by a comma. To aid readability we spread
these column definitions over multiple lines, so each column
is represented by its own line. First, we create the uid column
and give it a data type of INT, or integer (whole number).
Setting this to unsigned means that we accept only positive
values and disallow negative ones. The storage used for
this is 4 bytes and allows values from 0 through to a massive
4,294,967,295. Maybe a little overkill for our needs (will we
ever have 4 billion users?) but it certainly allows for enough
user IDs in the long term. NOT NULL does exactly what it
says on the tin and ensures that a value must be added
to this column. We can, however, make the process a little
easier by allowing the option auto_increment, making sure
a value will be added automatically if we don't supply one.
We additionally set this column as the primary key, and in
doing so we enforce the uniqueness of the uid for each user in
the table and provide an easy way to locate each entry in the
table. This first column has a lot going on, so I hope it hasn't
put you off. We can see from the supplied code that
the fname and lname columns are much simpler in their
definitions. The data type for both columns is
varchar(50); this will allow for up to 50 characters to be
stored but will adjust to a lower amount if the names entered
are shorter. Just by way of example, we show that we can allow
the fname column to be nullable.
Web Services in a Pi
We have created something quite extraordinary and powerful
on the trusty little Raspberry Pi. The industry often talks
about embedded devices as if they are something really
special and expensive. Using the simplicity of the Raspberry
Pi we have demonstrated how we can create an embedded
web service that can easily be attached to anything. Imagine
this at a trade show, where you can carry your product
database and interface with people. The technological
knowhow learned, however, is not restricted to the Pi and
can be transferred to any Linux distribution. If this has taken
your interest you do not want to stop here; this is just the first
station on an exciting journey. Oracle maintains great
documentation on MySQL: http://dev. and there are great
resources on PHP, CSS and HTML, w3schools being one of
many sites to check out.
Inserting data into the table
From experience I can reliably inform you that having a table
without data is a little like having bread without butter; the
two inextricably go together. Let’s add some users to our
table, and this is achieved using the MySQL INSERT
statement. First we will add a single row and query the table
afterwards; then we can add multiple rows:
USE contact;
INSERT INTO users (fname, lname) VALUES ('Bob','Jones');
SELECT * from users;
We start by ensuring that the database context is correct
by issuing the USE statement. This is okay to do even if we
are already in the contact database and won't error, but it's a
great double-check before issuing any code. The INSERT
statement adds the row to the table. We describe which
elements we are adding, and the order they are provided in, in
the first set of parentheses; the values then follow in the second
set. We deliberately haven't included the uid column, as we can
leave this to AUTO_INCREMENT to complete. We then
retrieve the data with the SELECT statement: INSERT is
similar to a write and SELECT similar to a read. From the
resultant information we will be able to see Bob with the uid
of 1. The asterisk within the statement instructs all
matching columns to be displayed in the output.
Now we’re competent in inserting a single row of data, we
can see how to insert multiple rows using INSERT. The syntax
is similar, but we now have a comma-delimited list of value
groups, each within parentheses as in the previous example:
PHP files such as config.php are included from the
include_path; by default this will include the current
directory. Additional paths can be added using the
PHP function set_include_path. If you manage the
server you may also set this permanently in php.ini
via the include_path directive.
The output from ss can show the ports that are in use on your Raspberry Pi.
This is similar to the output from netstat but ss is more dedicated to this task.
INSERT INTO users (fname, lname) VALUES ('Jane','Jules'),('
Improving the SELECT
So far, with just three entries, the results we see are not
difficult for us to manage, but this is not going to be scalable
as more users are added to the contact database. To help
reduce the rows returned, we can use the WHERE clause in the
SELECT statement. In this way we could show users with a
certain ID: WHERE uid = 1, or a given surname: WHERE
lname = 'Smith'. We could also allow a little more flexibility
using WHERE and LIKE, such as: WHERE lname LIKE 'j%';
this would show users whose last name begins with a J. Note:
the strings we search for are, by default, NOT case-sensitive.
SELECT * from users WHERE uid = 1;
SELECT * from users WHERE lname = 'smith';
SELECT * from users WHERE lname LIKE 'j%';
That gives us enough knowledge now to be a little
dangerous and venture into PHP for our website. You may
remember we have a link to a search page that, so far, doesn't
exist, but given our new found knowledge we can return to
that now. First, don’t forget to lock up before you leave by
logging out of the MySQL client. You do this by typing exit at
the mysql> prompt.
Creating the HTML search page
Working in the file system directory /var/www – which we've
configured to be the Document Root of the Nginx server – we
will create the elusive page: /var/www/search.html. This will
be a pure XHTML page that will reference a PHP page that we
will create later. The web page we create will have a simple
form enabling us to search for users by their last name. The
page should look similar to the following code sample:
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="">
<head>
<meta name="Author" content="Andrew Mallett" />
<meta name="Description" content="Database Search" />
<link rel="stylesheet" href="/style.css" />
</head>
<body>
<div id="page">
<div id="header" class="frame">
<h1>Database Search</h1>
</div>
<div id="menu" class="frame">
<ul>
<li><a id="home" href="/index.html">Home</a></li>
<li><a id="search" href="/search.html">Search</a></li>
</ul>
</div>
<div id="content" class="frame">
<p>Search MySQL with PHP</p>
<form method="get" action="/search.php">
<label for="last">Enter a last name:</label>
<input type="text" name="last" />
<br />
<input type="submit" value="Search" name="submit" />
</form>
</div>
</div>
</body>
</html>
The HTML form now shows, and we can use this form to search the database
once we implement the back-end PHP code...
We can see that the form uses the GET method and the
action makes reference to the search.php located in the
server’s Document Root; it is this page that connects to the
MySQL server, executes the SQL search and displays the
result. At this stage we should be able to browse our website
and see the search page, although the button won't work
without the PHP page being in place.
Creating the PHP search code
To finalise the project, we will need to create the PHP page
that will execute the search and display the result. We will
rustle up the page /var/www/search.php so that it looks
similar to the following code; take care with your typing:
<html xmlns="">
<head>
<meta name="Author" content="Andrew Mallett" />
<meta name="Description" content="Database result" />
<link rel="stylesheet" href="/style.css" />
</head>
<body>
<div id="page">
<div id="header" class="frame">
<h1>Search Results</h1>
</div>
<div id="menu" class="frame">
<ul>
<li><a id="home" href="/index.html">Home</a></li>
<li><a id="search" href="/search.html">Search</a></li>
</ul>
</div>
<div id="content" class="frame">
<p>Results:</p>
<table>
<?php
$host = 'localhost';
$user = 'root';
$pwd = 'Password1';
$db = 'contact';
$dbh = mysqli_connect($host,$user,$pwd,$db) or die(
"ooooops" );
$last = $_GET['last'];
if ( $last != '' ) {
$query="SELECT * from users WHERE lname LIKE
'$last%'";
} else {
$query="SELECT * from users";
}
$result = mysqli_query($dbh, $query);
while ($row = mysqli_fetch_array($result,MYSQLI_
ASSOC)) {
$uid = $row['uid'];
$fn = $row['fname'];
$sn = $row['lname'];
print "<tr><td>" . $uid
. "</td><td>" . $fn
. "</td><td>" . $sn
. "</td></tr>";
}
mysqli_free_result($result);
mysqli_close($dbh);
?>
</table>
</div>
</div>
</body>
</html>
The main section of code to focus on is within the
"content" div. Within this div, we display the paragraph with
'Results:' and then open a table element; this is followed
by the PHP code. Once the PHP element is closed, we close
the table element. The PHP code will generate the body of the
table for us.
Within the PHP tag, the initial five lines establish a
connection to the database server. In reality these lines would
be best served from a separate file and accessed with an
include statement. In this way the user name and password
for the database would not need to be in our webpage, an
obvious security no-no. For ease we wrote it all in a single file
so it is easier to follow. On your own system, you would replace
those lines with something similar to: include_once('config.
php');. The code replaced with this line will be within its own
PHP tags within the config.php file. The object variable that's
returned by the database connection we name $dbh, and this
will be carried into the rest of the code to represent the
connection, or Database Handle.
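Following that advice, a hypothetical config.php could be created like this (the credentials match the listing above; on a real system use a dedicated, low-privilege MySQL account rather than root):

```shell
# Write the hypothetical config.php into the Document Root
cat > config.php <<'EOF'
<?php
// Connection details kept out of the web page itself
$host = 'localhost';
$user = 'root';
$pwd  = 'Password1';
$db   = 'contact';
$dbh  = mysqli_connect($host, $user, $pwd, $db) or die( "ooooops" );
?>
EOF
```

search.php then starts with include_once('config.php'); and carries on using $dbh exactly as before.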
From the query string in the URI we extract the value of
the last element within the array _GET and save this as a
local variable, aptly named $last. Using the conditional if
statement we check to see if $last has a value or not. Without
a value, our SQL query string will search for all users. With a
value, it will search for users with the last name starting with
the characters supplied within the variable entered in the
form field. The query is executed with the use of the mysqli_
query function and the results stored within the variable
inventively named: $result.
To populate the table, the PHP while loop is implemented
to iterate through the $result variable and add rows for each
record found. The cell delimiting HTML tags are embedded
into the PHP code. The last item of the row is the last name of
the user, and we follow this by terminating each row with the
tag </tr>. In this way, we can dynamically add to the
HTML-defined table as many rows as needed by the result of the
MySQL query.
Once we exit from the while loop we release memory used
by the MySQL result set and close the database connection.
Controlling resources used is vitally important to the reliability
of our solution and we should ensure that both of these tasks
are implemented.
Note: This is a simple way to do this process and is
designed to allow you to familiarise yourself with how PHP is
used to access the MySQL database. Treat it as a starting
point and proof of concept. A production system would
require more feedback and security to be truly effective.
Although the database is inaccessible from the outside world,
the web page can act as our portal to the data.
Testing the solution
With the chocks now removed and the engine running, we
can ensure that both the search.html and search.php pages
are saved to the directory /var/www on the Nginx server. We
can refresh the web browser and navigate to the search page.
Hopefully, you'll be presented with our beautifully crafted site.
On testing, we'll discover that we can search for users with or
without their last name. Entering the letter J in the last name
field should return the two users that have a last name
beginning with J. If you see this, give yourself a well-deserved
pat on the back. You've created something quite wonderful
and we hope this is the start of many more dynamic web
pages to come for you. If it doesn’t work then you will need to
check your log files on the server, which you'll find at
/var/log/nginx/error.log. LXF
Primary, Unique and Foreign Key constraints
We have seen in this example the use of a
PRIMARY KEY constraint. For each table we can
have a single primary key; the use of a key is not
mandatory but it is advised as the key can
uniquely identify each row in the table. A primary
key can be defined on one or more columns in a
table, the idea being that we need to create a
unique value for the key; usually this can be from
a single column but sometimes the key will need
to be defined across more than one column. For
example, a library table may have a primary key
defined on the ISBN and the Copy Number
columns. On its own the ISBN is not unique, nor is
the copy number, but together they produce a unique
value identifying the copy of a book. Similar to a
primary key constraint is a UNIQUE constraint;
unlike a primary key we can have more than one
unique constraint defined on a single table but
each value must, like a primary key, be unique.
Another type of key constraint that we didn't use
in this example is a FOREIGN KEY constraint.
These ensure that data added to one column
must exist in another. For example, in a customer
table the county column makes up part of
each customer’s address; it would make
sense that only existing counties could be added.
We would implement a foreign key constraint
from the customer table's county column to a
county table, ensuring that only valid counties
could be added.
Docker Set up Continuous Integration support
Docker: Jenkins and Dockerfiles
Jolyon Brown looks at some tasks relating to adopting Docker in a
development environment, including Continuous Integration with Jenkins.
Jolyon Brown, much to the dismay of his wife,
recently quit his stable, respectable job in order
to pursue a freelance career specialising in DevOps.
Last month's tutorial [p80, LXF190] introduced
Docker, an implementation of software containers on
Linux, and looked at some of the basic functions
and commands available to us. Building on that work, we’re
going to look at some of the steps involved in adopting
Docker in a development environment. We’ll look at options
for sharing Docker containers among team members, and
also look at how Docker can be used in a continuous
integration (CI) workflow, using the well-known tool
Jenkins, before taking a quick look at some of the
tasks a sysadmin would like to know about before running any
service: things like how to back things up, how to capture
logs and so on. We won’t cover installing Docker again; if you’re not
sure about this, take a look back at LXF190 or at the simple
instructions on the Docker website.
Companies running IT services usually have several
environments in which to run the same application at
different stages of its development. The ‘Dev’ environment
could be individual developers’ laptops, for example. Prod
equals ‘Production’ or ‘Live’. Others might be UAT (user
acceptance testing), DR (disaster recovery) or Pre-Prod (very
similar to production, used perhaps for testing production
fixes). Various versions of the application will make their way
through these environments (hopefully in a linear fashion, but
sometimes not) until they hit production. In old, traditional
infrastructures, each of these environments might have
consisted of one or more physical boxes running a complete
Linux installation from its local disk. Maintaining these
servers could be a real headache for a system administrator.
The environments would ideally need to be the same to
The full reference manual for commands and a best-practices
guide can be found at http://docs.
Rails, in Docker. It might not look like much, but Twitter’s
first page will have looked like this back in the day.
ensure that the applications hosted in them ran in a
consistent manner, and all kinds of barriers would need to be
overcome to achieve that. Despite all the tools at a sysadmin's
disposal, the familiar phrase ‘but it worked on my machine’
can still be heard throughout the land. Docker is aimed
directly at this problem, enabling apps to be quickly
assembled from components and having the same container
run in whatever environment we want.
Using Dockerfiles
While in the first article we used a lot of commands at the
prompt to spin up Docker containers, in practice almost all
development using Docker will make use of Dockerfiles.
These simple text files offer the benefit of being easily put
under version control (we can store them in Git, or whichever
source control tool we like) and, while usually simpler than
the average shell script, can be very powerful in terms of
building systems. Here’s an example of one which brings up a
full Ruby on Rails stack:
FROM phusion/passenger-ruby21:0.9.12
ENV HOME /root
CMD ["/sbin/my_init"]
RUN gem install rails
RUN cd $HOME; rails new lxf
RUN apt-get install git -y
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/
To test this Dockerfile, create it in a new (empty) directory. In
that directory, simply issue the command:
sudo docker build -t railstest .
This will download the base image from the Docker Hub –
the image repository run by Docker Inc – and apply its
various deltas to get to the version specified (0.9.12 in our
case). Passenger-ruby is a Docker image created by the
well-known (in Ruby circles at least) Phusion development team
(Passenger is a web/application server known best for
hosting Ruby on Rails apps, but it can do a lot more). Their
image provides some sensible defaults. We
have added the gem install rails and cd $HOME; rails new
lxf commands. Anyone who has installed Rails recently will
know it can be quite a time-consuming task with several
commands required. Docker handles this easily, thanks to our
reuse of the passenger image (although this can take quite
some time for this initial download).
After the downloads and installs have completed, we can
start our Docker container up by doing the following:
sudo docker run -p 3000:3000 --name lxf -t -i railstest /bin/bash
This command starts Docker up, binds container port
3000 to the same local port, calls the container lxf, gives us a
tty, makes the container interactive (ie it will close when
finished with), specifies the use of our railstest image and
finally drops us to a bash prompt in the container itself.
From here we can start rails up. In the Dockerfile, we asked
rails to create a new app under /root/lxf. If we now cd to that
directory we can issue the Rails server command:
cd /root/lxf
rails server
Rails is configured by default to use WEBrick, a small,
lightweight Ruby HTTP server suitable for development
environments. This starts on port 3000. As we issued a bind
command for that port in the container to our host, we can
connect to it via http://localhost:3000 from our desktop.
The familiar Rails default screen appears (pictured, p80).
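If you'd rather not start WEBrick by hand each time, the same steps can be baked into the Dockerfile itself. This is only a sketch (the railsauto tag below is made up, and it assumes WEBrick alone is enough for a quick test, skipping the base image's my_init):

```dockerfile
FROM phusion/passenger-ruby21:0.9.12
ENV HOME /root
RUN gem install rails
RUN cd $HOME; rails new lxf
WORKDIR /root/lxf
EXPOSE 3000
# Start the development server directly when the container runs
CMD ["rails", "server"]
```

Build it with sudo docker build -t railsauto . and run it with sudo docker run -p 3000:3000 railsauto, then browse to port 3000 as before.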
The power of Docker
While this probably doesn’t seem that impressive, the power
of Docker comes from being able to now take the container
we have created and use it in multiple places. While we’ve
only done the bare minimum in terms of Rails configuration,
we could add to it, creating a baseline for our development
teams to use for testing their code against. There are a few
options for sharing images created in this way. Docker Inc
offers its Hub, which has a free-to-use
option (with paid options for multiple private repositories).
However, in some situations code isn’t allowed outside of
Jenkins, a CI tool with a ton of options and plugins
(some might say too many).
company boundaries/networks. In this particular scenario,
images can be saved as a regular file which can, in turn, be
copied to a target machine and run from there quite easily
enough. Here we save a copy of the Rails test image in the
local directory:
sudo docker save -o ./railstest railstest
The output file (essentially a TAR archive) can be picked
up and dropped anywhere we fancy. If we want to run it on
another machine that has Docker installed we simply use the
load command.
sudo docker load -i railstest
From here we can do exactly the steps above to start the
Rails server on this new system. During tests we passed this
image between an Ubuntu 14.04 desktop and a CentOS 6.5
machine with no issues at all. There’s another option though: running
Docker’s registry locally within the bounds of a data centre or
company network. As you would expect, this is open source
and freely available for download, both as a standalone
application and easiest of all, as a Docker image.
sudo docker pull registry
gets us a copy. The Registry has a lot of options: it can use
object storage, such as OpenStack’s Swift module, to keep
Docker images in, but by default just uses the local filestore.
It’s a good idea for this basic test to provide it with a path to
some local storage. We can start it up as follows:
sudo docker run -p 5000:5000 -v /tmp/registry:/tmp/registry registry
This starts up our local Registry and has it listening on
port 5000. It also tells Docker to use a local directory as an
attached volume, so data written to it will persist after the
container is shut down. We can now store images in this
container – starting with our railstest development
environment. In order to store an image we use the docker
push command. However, by default this will push to the
If you’re not
comfortable using vi
within the container
to edit files at the
command line, you
can use another
editor on your
desktop and edit
them locally under
The continuing rise of Docker
Docker and Docker, Inc (the company formed
behind the project) have been in the headlines a
lot in the last year and this continued recently
with the announcement of a 40 million dollar
investment led by Sequoia, the well-known VC
firm who backed Google and many other
familiar names in the technology industry. In
startup terms (if Docker Inc still qualifies as
that) that’s a large amount of runway to use up,
allowing Docker to be enhanced beyond where it
is now – a format for containers, with a healthy
ecosystem of contributors enhancing it daily
(have a search for Docker on GitHub for
example). Docker is seeing more and more
adoption by other platforms and projects (Core
OS, Apache Mesos) and has some high profile
companies (eBay, Spotify, Baidu) using it.
Docker Inc’s focus appears to be to make the
software more ‘production ready’ and hence
more attractive for wider use – orchestration,
clustering, scheduling, storage and networking
being mentioned as areas for improvement.
As these improvements become available there
will undoubtedly be a clamour for enhanced
commercial support and management tools.
This will be where Docker aims to make its
money and its investors will hope to see some
return on their cash. The bet here is that Docker
becomes the new standard for deploying
applications in ‘the cloud’. Docker remains open
source, however, and the community will
continue to enhance and develop it in all kinds of
ways, some of them quite unexpected.
This is the first part of our Jenkins job, showing us
accessing Git via a fileshare.
global Docker repository. In order to use our local one, we
need to ‘tag’ the image with its hostname/ip address and
port. In a new terminal (as the Docker Registry will be running
in our original window), we need to type the following:
sudo docker tag railstest localhost:5000/railstest
sudo docker push localhost:5000/railstest
This will send off a whirl of activity in both windows –
a stream of HTTP PUT commands in the one running the
Registry and image upload status in the other. Once complete
running the command sudo docker images should show our
new localhost:5000/railstest image as being available to
us. We can also see that the Registry has been using its
volume by looking under /tmp/registry for its newly created
file structures. Of course, in a real situation we’d be looking to
have the Registry sitting on a proper server, available to our
whole development team. For this task the recommendation
is to have it fronted by an Nginx (or Apache) web server.
Take a look at the advanced features documentation at
http://bit.ly/DockerRegistryAdvanced.
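As a sketch of that recommendation (the server name here is hypothetical, and a real deployment would also want TLS and authentication in front of the Registry):

```nginx
# Minimal Nginx reverse proxy in front of a local Docker Registry
server {
    listen 80;
    server_name registry.example.internal;  # made-up internal hostname

    # Image layers can be large, so don't cap upload sizes
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:5000;   # the Registry container from above
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```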
Now that we have a way of sharing our Dockerfiles
between team members, we need to tackle
a common requirement in modern
development environments: Continuous
Integration, or CI.
This refers to the practice of (among
other things) running unit tests on our codebase
to ensure that when new code
is added to the system, it doesn’t break completely or throw
up errors. CI (and its close relation, Continuous Delivery) is a
fairly large subject, and an in-depth analysis of it is really
beyond the scope of this article. For the moment, however,
let’s assume that the task we have is to run one of the
common open source CI systems out there, and we’re going
to use Jenkins. This is in pretty widespread use and has a
large community behind it. In a pre-Docker project, this would
have meant standing up a new server, installing a JDK
(Jenkins being written in Java) and then downloading the
Jenkins software. However, with Docker available to us,
creating a basic Jenkins system is as simple as this:
sudo docker pull jenkins
sudo docker run --name localjenkins -p 8080:8080 -v /var/jenkins_home jenkins
After downloading the Jenkins image from the Docker
Hub (which can take a while), we’ve started a Jenkins server
up on port 8080 and added a persistent volume for its data.
Note that we haven’t pointed this at local storage (we could
easily have done so using the same syntax from the previous
example) but we can copy data out of our Jenkins system (or
any other Docker container for that matter) using the docker
cp command.
Continuously docking
In some environments, Jenkins can run whole suites of tests
while building an application. At the end of this process, some
packages or executables can be generated; in some cases
virtual machines are spun up and tests run against them
running this code. Wouldn’t it be neat if we could use the low
resource costs of Docker to spin up a container for this
purpose? And even better, could we then get Jenkins to
import a successful build into our local Docker Registry?
Why yes, yes it would! Shut down the Jenkins container for the moment (just hit Ctrl+C in its window). We'll come back
to it. Before going further we also need to allow Docker to
listen to remote commands as well as the socket it listens to
by default. In a real scenario we would need to add extra
security, but for the sake of this tutorial it’s fine. Using sudo,
edit the file /etc/default/docker and add the following line:
DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"
Let’s simulate our first check in of Ruby on Rails code
which our development team have written using our Rails
The term microservices is the name of a software
architecture, which has grown in popularity of
late. The idea of it is to replace monolithic
applications which are generally the middle layer
of enterprise ’n-tier’ applications; the bits that sit
between the client (often the browser) interface
and a back-end database with individual
elements that perform specific smaller tasks.
This, in a way, echoes the idea of Unix –
many small applications that do one thing very
well. With monolithic applications, proponents of
microservices would argue, changes become
much more difficult over time as even a small
change to one element can mean that the whole
app has to be rebuilt. Scaling can also be expensive for large applications, and somewhat wasteful, especially in cases where only one element of the whole needs to scale but everything must be deployed.
With the microservice approach, individual
elements are separate services, enabling them to
be independently deployed and scaled as
required. These services communicate across
the boundaries between them using well-known
published interfaces.
Microservice development can also be
handled by smaller teams, using whatever tools
or language they feel will get the best results for
their particular need. They are arguably more
resilient and much easier to replace, avoiding
‘legacy’ issues.
Opponents of this whole idea dismiss
microservices as ‘hipster SOA’ (service oriented
architecture) and point to the level of complexity
that they bring to infrastructure.
Disagreements over architectures aside,
it’s clear though that Docker has captured the
imagination of the microservice proponents and
is seeing rapid adoption in these kinds of
projects, which seems to make sense as Docker
is a natural fit for running these kinds of
dedicated applications.
More to explore this issue: Forking and 3-way merging in Git, page 88.
container, which in our case is the Rails skeleton structure we
created in the first Dockerfile.
First, let’s create a local directory for storing code in –
assuming we’re in our home directory, a simple mkdir code
command will do it. Then, let's reuse our railstest image which
we used earlier:
sudo docker run -p 3000:3000 --name railsdev -v ~/code:/
code -t -i railstest /bin/bash
This drops us back to a prompt in a new container, with
our ‘code’ directory shared to it as /code. Let’s copy over our
original Rails app and check the initial code into source
control. Don’t worry too much about the commands here –
this isn’t a Rails tutorial!
Checking in our app
We’ve another quick task to do before using Jenkins, namely
creating a Git repository to use with it. In real life, this would
likely be another server (or Docker container) or possibly an
internet-based service like GitHub. Making sure we’re in the
/test/dockerapp directory, we just need to issue the
following commands (substituting our email address and
name for the entries below if we wish):
cd /code
cp -r /root/dockerapp .
cd dockerapp
git init
touch git-daemon-export-ok
mkdir docker
git add .
git config --global "[email protected]"
git config --global "Sidney Sysadmin"
git commit -m "initial check in"
This creates a new Git repository on our local disk
containing the whole structure of our new Rails application.
Our plan here is that we’ll get Jenkins to read in our repo,
create a new Docker image from a Dockerfile we have in it,
and stand it up, checking the rest of our code in as it does so.
Again, in real life we’d likely have Dockerfiles and app code in
separate repositories, but for the sake of space this time, let's create a new Dockerfile in the docker subdirectory of our
existing repo:
FROM railstest
ENV HOME /root
RUN cd $HOME; rm -fr dockerapp
RUN git clone git://<ip address of our desktop>/dockerapp
Check this edit in with git add . and git commit -m "added Dockerfile".
We can use a simple Git server to enable these files to be
read by new Docker containers. Open a new terminal, cd to
the ~/code/dockerapp directory and run the following:
sudo git daemon --reuseaddr --base-path=/home/<your name>/code --export-all
Leave this window open for the time being.
Jenkins, build me a Docker container
In a separate window, start Jenkins back up again, removing
the container we used previously if we want to use the same
name for it:
sudo docker rm localjenkins
sudo docker run --name localjenkins -p 8080:8080 -v /var/jenkins_home -v ~/code:/code jenkins
Starting a browser on our local desktop, we can connect to port 8080 on our machine's address, and our Jenkins page will appear.
First, we need to install a plugin for Git for Jenkins to be able
to read our code. Click on the Manage Jenkins link on the left
hand side of the screen, then on Manage Plugins, which is the
fourth option down on the screen that appears. Click on the
Available tab, then in the filter box for search (top right), type
in Git Plugin. This filters out the many, many plugins available
for Jenkins. Choose the Git Plugin by clicking on the tick box
next to its name and then hit Install without restart. Once
complete, we also need to go to Plugins again (left-hand side
this time), select the Available tab and filter for docker. The
one we want is the Docker Build. Same drill, select it and
install without restart. Rinse and repeat for another plugin,
Once this is done, head back to the Manage Jenkins link
and choose configure system from the next screen. We need
to configure our Docker plugin and make sure we can
communicate with our Docker host (our desktop in this case).
Scroll way down to the bottom of the screen and enter the
Docker URL as follows – http://<ip address of your
desktop>:4243. We can test the connection here with the
appropriately named button, and should be rewarded with a
Connected to… message. If all looks OK, hit Save.
Now we can click on the create new jobs link, naming our
job dockertest and selecting the free-style software project
option before clicking on OK. For the Source Code
Management option, we can select Git thanks to our plugin
and enter the URL of file:///code/dockerapp (no
credentials needed).
Heading down to the bottom of the screen we can add a
build step, choosing Execute Docker Container from the drop
down list (pictured top, p82). We're going to select Create Image here. The default option for the context folder is OK; we just need to amend the tag by adding rails_ to the front of
it. Add a second build step, this time creating a container –
the Image name is the tag of the previous step. Hostname
here can be anything. Add another step, this time starting a
Docker container with the ID of $DOCKER_CONTAINER_IDS
(this is an environment variable from the plugin). Finally, add
a step with stop containers as the action. Again $DOCKER_
CONTAINER_IDS is the value of the field here. When everything is done, save the job and choose the Build Now
option from the left-hand side. Jenkins will check out our Dockerfile, build an image, run that image and, on confirmation of success, shut it down. Check the status of the
job – red is bad, blue is good! – and look at the console
output for the steps being run. A sudo docker images will
show the rails_* image now available for us to use. This
simple job can be used as the basis of a CI system involving
Docker and can be expanded to involve greater code and
application testing. Have fun! LXF
The second part of our Jenkins job, showing how we can interact with Docker.
You can run Jenkins build jobs at any point during their creation – simply save and hit Build Now. Experiment with the various options and see what errors appear!
Cython: Python
to native code
Cython is a Python to C compiler, Philip Herron will be your loyal guide
on how to make your code 12 times faster! Your mileage may vary…
Philip Herron
is a self-confessed Python nut, having taken part in Google's Summer of Code for the third year in a row.
He works for
Instil Software
in Belfast.
Cython doesn't support traditional imports of code, since a single Cython file is in itself an entire Python module. But what you can do is a C/C++-style include, in your main Cython file, of more Cython code, using the include directive.
You may have heard of Cython. It generally pops up in discussions about Python and speed. Cython is a
programming language which extends Python with
some keywords and constructs, so you can make use of
native C-Types. This means if you have the calculation ‘1+2’,
Cython can see this is two integers and compiles this to C:
int result = 1 + 2;
This has some interesting side effects which you'll see
later in this tutorial. But for now let's think of this as Python, but with the ability to use real types and even to type your Python code.
First, to install Cython you can use a system package manager such as apt-get, which holds a Cython package, or more portably use pip for the latest releases.
$ pip install cython
We also need the Python development headers and
libraries installed so you have the python-config script. On
Ubuntu you can do this by:
sudo apt-get install python-dev
Or on Fedora you can do this by:
sudo yum install python-devel
Now you are ready to use Cython. To get a feel for it in
action let's look at the classic Fibonacci function in Python:
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fib(n-1) + fib(n-2)
This is Python code you will be familiar with, we're sure. It returns the nth element of the Fibonacci sequence. But now in
Cython we can simply rewrite the function declaration as this:
cdef fib(int n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fib(n-1) + fib(n-2)
You can see it looks almost identical, but we declared the function as a cdef Cython function and we declared the parameter n to be an integer. Cython can use this information to generate optimal code, since we know we are definitely talking about an integer n here, and we know the scoping of the data within the suite, so using the C stack is possible. However,
to compile and run this code, the process is a little more
involved. Instead of simply running the Python interpreter on
the code, we need to run the Cython compiler and then use
GCC to compile the code into an executable:
$ cython --embed fib2.pyx -o fib2.c
$ gcc -g -O2 -o fib2 fib2.c `python-config --includes --ldflags`
Now let's compare the execution times of running both implementations of the function for the first 30 numbers. We can do this by adding this bit of Python code, which calls the function, to each version – Cython can compile it just fine!
result = []
for i in range(30):
    result.append(fib(i))
print result
Now to compare how long it takes to run each version:
$ time ./ # pure python version
real 0m0.537s
user 0m0.529s
sys 0m0.006s
$ time ./fib2
# cython version
Everything’s a module, everything is cool.
In Cython your single Cython source code file
represents an entire Python module. When
writing larger Cython extensions we use the
include directives to make code more modular.
When it comes to designing your Cython
extension module you need to think about how
your module will be executed. As seen in the examples, we pass the --embed argument to the Cython compiler. What this does is generate a boilerplate main method for the overall program, such that execution will start as you would expect in your Cython code.
But you don't have to do this, you could make
it more like a normal Python module – and we
see this when we don't pass this argument and
compile it as a shared library.
However, to interact with this shared library we must be careful about how we name it.
If your Cython code file is MyModule.pyx you will
need to compile this to MyModule.c and finally
real 0m0.045s
user 0m0.040s
sys 0m0.005s
The Cython version shows a notable speed-up – almost 12 times faster! As we said, the fact that we're using native types means the Cython compiler knows how to entirely bypass the Python runtime to evaluate this code, by generating raw C code to do the work. The interesting side effect is that, because Cython produces C code, we can call directly into pure C/C++ code and data structures – and from C we can call directly back into Python code!
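If you would rather benchmark from inside Python than with the shell's time command, the standard timeit module gives comparable numbers; a minimal sketch for the pure-Python version (the argument 25 and the repeat count are arbitrary choices):

```python
import timeit

def fib(n):
    # naive recursive Fibonacci, as used in the tutorial
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# best of 10 timed runs of fib(25)
best = min(timeit.repeat(lambda: fib(25), number=1, repeat=10))
print("fib(25) = %d, best run: %.4fs" % (fib(25), best))
```

Taking the best of several runs, rather than a single measurement, smooths out interference from the rest of the system.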
We believe this addresses an issue with writing complex systems in C/C++ compared with the JVM, where you otherwise lack a lot of dynamic abstraction: you can split software up into C/C++, with Python dealing with the dynamic behaviour that controls your data structures. Even for unit-testing C/C++ code, Cython provides a nice layer on top for mocking or setting up the test environment. To see this in action we will embed a Flask webapp into a C application and serve some in-memory data structures as JSON. The author has used this technique in high-performance trading systems, providing a web dashboard onto shared-memory tables using a C++ API to access the data, instead of writing Python code to do this.
There are several parts to doing this, but first let's compile the hello world Flask application from the Flask homepage – after we've installed it, of course:
$ pip install flask
Now on to Hello world:
from flask import Flask
app = Flask(__name__)
via gcc to When you are in your Python shell you will be able to import this simply via import MyModule as you would expect, so long as this module is in your PYTHONPATH.
Also, we can compile this to a shared library
or simply link this into a C/C++ program – but
we will need to initialise the Python runtime and
also initialise the Python module before you
interact with the Cython code, or you will get a
crash. You can see this in the flask example.
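The PYTHONPATH rule in the boxout works the same for compiled extension modules as it does for plain .py files; here is a quick, self-contained illustration using a throwaway pure-Python module (the module name mymod and its greet function are invented for the demo):

```python
import os
import sys
import tempfile

# create a throwaway module in a temporary directory
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "mymod.py"), "w") as f:
    f.write("def greet():\n    return 'Hello World!'\n")

# put that directory on the module search path, then import by name
sys.path.insert(0, moddir)
import mymod

print(mymod.greet())
```

A compiled would be picked up in exactly the same way, as long as its name matches the module you import.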
This means the app.pyx file will have the main method, and execution will run from there. If you don't specify this, we need to manually start the Python runtime and import the module... or the program will crash!
Imagine we wanted to use C to control when our Flask appserver was started. Then we could wrap the call inside its own method, which we could call from C. We can do
this by again making it a ‘cdef’ function but use another
keyword ‘public’:
cdef public void startFlaskApp():
The return type is void but more importantly this ‘public’
keyword will make the cython-compiler generate a header
app.h, which will have this function prototype declared for us.
So we can call directly into this Cython code from C as if it
was just another C function, and all we have to do is include
the header and work away.
Since we're going to have our main method inside a C program and call into Cython to start the webapp, we have a small piece of boilerplate to write before we do anything.
#include <Python.h>

int main (int argc, char **argv) {
    printf ("Initializing Python Runtime...\n");
    Py_Initialize ();
    // call into python here
@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
As you can see, Cython can compile pure Python code to
C, but it will be using the Python runtime here. To compile this we use the Cython compiler again to generate the respective C code, and then run gcc to compile it into an executable:
cython -2 --embed app.pyx -o app.c
gcc -g -O2 -o app app.c `python-config --includes --ldflags`
Just for clarity: of the options used when invoking the compiler, -2 means generate code for Python 2.x, and --embed means embed a main method into the output.
Running the Cythonized version of the Fibonacci program, with the code in Emacs underneath.
    printf ("Cleanup...\n");
    Py_Finalize ();
    return 0;
}

    strncpy (record->name, "test", sizeof (record->name));
    record->value = callCount++;
    return record;
}
So inside the main function we need to initialise the Python runtime and clean it up afterwards. In between these two calls you can call into Python code; outside of them, doing so will result in a seg-fault. Remember, because we declared startFlaskApp cdef public, we can include the header app.h after including Python.h and call into our Cython function directly, like so:
// make sure you include ‘Python.h’ first!
#include <Python.h>
#include “app.h”
Running the Flask app and using curl to access the JSON.
int main (int argc, char **argv) {
    printf ("Initializing Python Runtime...\n");
    Py_Initialize ();
    initapp (); // equivalent to Python 'import app'; part of app.h
    startFlaskApp (); // call the function!
    printf ("Cleanup...\n");
    Py_Finalize ();
    return 0;
}
Let’s compile this all together:
$ cython -2 app.pyx -o app.c
$ gcc -g -O2 -o app main.c app.c `python-config --includes --ldflags`
And we run ./app. Now you have a Flask application running from a C program! Next, define a function which simply returns some random data in a C struct – maybe a fixed string and a call count, just for simplicity's sake. Let's pretend it's a memory record for some kind of trading system, since the principle is exactly the same.
struct data {
    char name[10];
    int value;
};
This is our struct definition, and all we need now is a function we can call to get a new record.
static struct data * record = NULL;

struct data * getData (void)
{
    static int callCount = 1;
    if (!record)
        record = malloc (sizeof (struct data));
    memset (record, 0, sizeof (*record));
This function will simply fill out the record, and serving it as a JSON object from Python is pretty elegant. The best way
to go is put the struct definition into a header as well as the
prototype for the function to get the data. This is important,
since Cython compiles to C it needs to know to include a
header in the C app such that everything wires up correctly.
#ifndef DATA_H
#define DATA_H

struct data {
    char name[10];
    int value;
};

extern struct data * getData (void);

#endif //DATA_H
Inside the Cython code you need to declare this again so
the Cython compiler knows how to handle the types correctly.
Otherwise the Cython compiler won’t know about this struct
or the function and will lead to an error.
cdef extern from "data.h":
    struct data:
        char name[10]
        int value
    data * getData ()
As you can see, it’s actually very familiar code but it is
tedious to have to write it all out again. There are initiatives (auto-pxd generation) to generate this code for you, but they aren't really ready yet. The next
piece we need is a function to wrap calling into the C code
and return a dictionary of the returned C struct.
cpdef getBackendData():
    cdef data * resp = getData ()
    return {'name':, 'value': resp.value}
For this function we used cpdef; this keyword means you can call the function from Python code or C code, and the body can use the Cython extensions for native typing. If you're
going to call this from C the return type is always PyObject
(internal Python object type). This body simply calls directly
into that getData function we defined in our C code and
turns this struct into a Python dictionary. Finally, to bring this
all together we need to amend the Flask route to JSONify and
serve the response!
from flask import Flask, jsonify
Cython alternatives
This kind of language binding can also be
achieved via Swig. Where Cython shines is the
type coercion and the direct implementation it
offers. For example, in Swig you will need to write
quite a lot of type wrapper functions to coerce
between the different languages. You will also
need to write wrapper functions to handle more
complex data structures. Cython does this for
you, since it understands the types! This isn't to say Swig is bad, but it's very generalised, since it supports so many languages, like Java, Go, etc., taking out all of the boilerplate code required.
Cython is purely designed for Python and can
assume many more preconditions. Notable
app = Flask ('myapp')

cpdef getBackendData():
    cdef data * resp = getData ()
    return {'name':, 'value': resp.value}

@app.route("/")
def index ():
    return jsonify (getBackendData ())

cdef public void startFlaskApp():
Recapping: what we have so far is a single C file containing the main method to initialise the Python runtime and start the Flask app, a function to get some data with an accompanying header, and a Cython (*.pyx) file which contains the Flask app code to serve something. Compiling and linking it all together, we can get busy constructing a simple Makefile:
cython -2 flaskApp.pyx -o flaskApp.c
gcc -g -O2 -o example main.c flaskApp.c `python-config --includes --ldflags`
Let's run the app:
$ ./example
Initialising Python Runtime...
Importing Flask App...
Starting Flask App...
* Running on
Let's use curl to access the Flask route – notice how every request updates the JSON object.
$ curl -X GET "http://localhost:5000"
{
  "name": "test",
  "value": 1
}
$ curl -X GET "http://localhost:5000"
{
  "name": "test",
  "value": 2
}
And there we have it – we've successfully called Python code from C and called C code from Cython! There is so much more we can do from here; in other codebases the author has mapped RESTful resources to C++ classes. Cython has been getting better and better support for C++ – you can even define templates and operator overloads on classes, and Cython knows how to pick all of this up.
There are some caveats to be considered with Cython
before you begin coding. Since you are mixing languages,
debugging things can be confusing. Keeping interfaces
simple is absolutely key and handling exceptions from Python
to C++ can be clumsy as your exceptions will be changed
depending on what language you are going to and from. You
will not be able to magically compile an entire Python module
to a shared library, as Cython doesn’t follow imports. To
other projects in this space would be Parakeet, which uses LLVM to just-in-time compile code, via the use of decorators in Python, to speed it up; and then there's Nuitka, which is a Python to C++ compiler that tries to statically type your Python program at compile time, similar to PyPy.
handle multiple Cython files, you must use an include directive, and this works just like C/C++ includes, putting the included code in place. The reason for this is to conform to
the Python module system. So, for example, you might notice
we used --embed a lot in our code, or defined a main method
ourselves in Cython so we can create a module:
def myFunction ():
    return 'Hello World!'
And we can compile this with:
cython -2 mycode.pyx -o mycode.c
gcc -g -O2 -shared -o mycode.c `python-config --includes --ldflags`
So now you have a shared library and we can include this
like any other Python module:
>>> import mycode
>>> mycode.myFunction ()
'Hello World!'
Just make sure your shared library name is the same as
the name of the Cython code file, bar the extension. Cython doesn't end here: it provides extremely clean interfaces for working with C++ code, including classes and templates, and with Python and C++ exception handling; there's also great GDB (GNU Debugger) support.
The author used to work on big multi-processing trading systems, which extensively use custom shared-memory databases, such that all processing is in memory and there is no overhead in talking to a database. The downside of this is that debugging, and actually viewing what is in these tables, is usually difficult, among other issues such as growing the shared memory.
shared memory. But using Cython and Flask we were able to
use all of the techniques shown here to create a web
dashboard using Python and AngularJS to build an epic
realtime database viewer for transactions. What's more, instead of writing Python code to attach to the shared memory, we were able to simply reuse our C++ API to do all of this work in Cython directly, enabling seamless mixing of Python and C++ code.
This really was pretty impressive; in big financial institutions you will always get a better response if you embrace, extend and reuse, over writing new APIs or code.
This technique could also be applied very easily to other
areas, even when it comes to writing new C/C++ applications.
Why not write wrappers to use the Python logger or Python
config parser from inside C/C++ applications so you don't
need to reinvent the wheel again to do something as simple
as that? If you want to see a demonstration of some of these
techniques read Philip Herron’s magnum opus Learning
Cython, which is available on Amazon (see also and cython-book). Although Cython was primarily designed to
take advantage of native types for high performance
mathematical computations, you really can use this in other
areas, and not just science. LXF
Git: Get started
Jonni Bidwell introduces the basic Git nomenclature so you can ‘git’
to grips with forking and three-way merges.
Jonni Bidwell
is an amateur
algebraist and
cryptographer by
day. By night, he
is a vagabond of
the lowest order.
Git is a distributed source code and versioning management tool, though in many ways it's closer to a file system than it is to other versioning systems, such as CVS and Mercurial. Code is stored in a (local or
remote) repository. Users with the relevant permissions may
clone this repository to their own local working directory, and
thereafter will have access to not only the latest master
branch of the source code, but all other branches, and all of
their histories. They can make changes, commit them to their
The git {status,log,show} commands are useful for seeing where you’re at with
your commits and where you’ve been.
local repository and if necessary file a pull request to have
those changes incorporated into the project. Since very soon
after its release, Git has been self-hosted – its source code is
managed through a Git repository, too. Sourcery, we say.
Once you clone or initialise a repository, you will at first
glance see only the current state (master branch in Git
parlance) of the project. But if you look more closely you will
see there's a hidden directory named .git/ , where all the
magic happens. More specifically, this folder is where
repository configuration files and scripts are stored, as well as
the fundamentally important index (.git/index) and object
database (.git/objects/). The index is a staging area and
deals with the state of the current working directory, tracking
changes in preparation for the next commit. (See Object
databases: Dramatis Personae for details about the primitives
which populate the database.)
Despite source code management being Git's original
modus operandi, it can still oversee any project which
involves changing files. Maybe you're carefully photoshopping
(sorry, Gimping) your in-laws out of photos, or experimenting
with different storylines for that novel you've been working on,
either way Git can be used. With the second example here in
mind, let's make our very own Git repository. You'll need Git
installed – it will be easy to find in your distribution's package manager.
Let the committing commence
The first thing you should do is tell Git your name and public
email, these will be visible should you start committing to
other projects.
git config --global "Your name"
git config --global [email protected]
Now we'll make an initial project consisting of a couple of
text files, and then place the project's folder under Git's aegis:
mkdir ~/gitdemo
cd ~/gitdemo
echo "The butler done it" > chapter1
echo "TODO: write thrilling plot twist" > chapter2
git init
The last command just sets up the .git/ directory; Git doesn't start caretaking our files until we do:
git add .
Now all the files in our working directory are snapshotted
in the index file that we mentioned earlier. This stages any
changes in preparation for committing, which we will do in a
moment. First though, run this command just to see where
we're at:
git status
Note that the status command shows that we have only
the master branch; that we are preparing our initial commit;
and that this will consist of adding our newly created
chapter1 file. To proceed with our first commit, do:
Object databases: Dramatis Personae
The object database stores everything that is
and ever was about a project. It can only be
appended to – objects, once committed, are
never deleted otherwise we would not have a
complete history. Each object is assigned a
unique id that’s 40 hex digits (160 bits) long, and
generated by SHA1 hashing their contents and
other relevant data. Objects are stored according
to this id, making it a content-addressable store.
The first two characters of the id are used for the
subdirectory in .git/objects, while the remaining
38 are used for the filename. The Git object
model comprises four distinct entities:
Blobs (binary large objects) The contents of
files, and just amorphous data. It’s important to
note that a blob is not the same as a file, since by
itself it’s not associated with a filename, date or
permissions. Identical blobs will have identical
ids, so no space is wasted storing duplicates,
even if the filename associated with a given
blob changes.
Trees These are analogous to conventional
directories, containing a list of filenames and
other metadata together with associated blob
(or other tree) ids. Hence a tree contains a
snapshot of a given state of the project.
git commit -m 'Initial commit, butler'
Commits always need a descriptive message, if only so
people can shout at you when it breaks things. If you don't
specify one with the -m option an editor will open for you to
input one. As seen in the picture, our initial commit gets an id beginning with 2921a6 (yours will be different, as it depends on the time and user details). We generally only need the first
few digits of the id, since the chances of a collision are
infinitesimal. The full id can be seen in the history via the git
log command. Or you can see all the gory details with:
git show 2921a6
If you look in the .git/objects directory, you should see
two directories named info and pack as well as four two-digit
directories. This latter quartet of directories should each
contain one file, since our project at present consists of four
objects: two blobs (ids fd2b4b and 5a280b), a tree
(d0347a) and a commit (2921a6).
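Those directory names follow directly from the hashing scheme: a blob's id is the SHA1 of a header ("blob", the size, a NUL byte) plus the contents, and the id then determines the path under .git/objects. This is Git's documented object format, reproduced here with hashlib:

```python
import hashlib

def git_blob_id(content):
    # Git hashes b"blob <size>\0" followed by the raw contents
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

def object_path(oid):
    # first two hex digits pick the subdirectory, the other 38 the file
    return ".git/objects/%s/%s" % (oid[:2], oid[2:])

oid = git_blob_id(b"hello\n")
print(oid)               # the same id `git hash-object` reports for this data
print(object_path(oid))

# identical contents always produce the same id, so blobs deduplicate
assert git_blob_id(b"hello\n") == oid
```

This also makes concrete why identical files never waste space: the store is addressed purely by content, so a second file with the same bytes maps to the very same path.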
Now let's further develop our macabre tale, adding some
details of the butler's heinous alleged crime. Add a line to
chapter1 by doing:
echo "with the blunderbuss, in the study" >> chapter1
Of course, you can also use your favourite text editor to
make changes. Running git status tells you that these
changes are not staged for commit – we must either git add
the file again, or use the -a option to commit changes from all
files already in the index. Don't forget to add a description:
git commit -a -m 'detail muh-dur weapon and crime scene'
Git informs us that we have changed one file with one
insertion, and as before we can get the lowdown from git log
and git show. We've also just added another three objects to
the object store: a new blob containing the two lines of
chapter1 (Git doesn't initially use delta compression), a new
tree and our new commit (id ef6d55). The diagram (see
right) shows what has happened to the object structure
across our two commits. Note that our second commit has a
reference to its parent (the initial) commit (id 2921a6),
and that both trees refer to the same blob (id 5a280b) as
chapter2, since it’s unchanged.
If we decide that we're not sure about our latest commit,
then we can revert the working directory back to the previous
commit with:
git checkout 2921a6
Git tells you that you're in the (slightly unsavoury
sounding) 'detached HEAD' mode. This roughly translates as
having gone back in time. You can verify this by examining
chapter1 and seeing that it’s back to being the one-liner of a
few minutes ago. You can make changes without fear of
upsetting the project: unlike science fiction, Git has well-defined
methods for changing the past without affecting the
future, which we will look at in a moment. First, we'll go back
to the present with:
git checkout master
Commits These connect trees to form a
history. They contain a pointer to a tree that
represents the state of a repository after a
change was made. They may also contain
references to relevant previous commits, known
as parent references; these make commits easy
to revert. Furthermore, they store information
about who did the committing, when they did so
and a human-readable synopsis of the changes.
Tags Pointers to commit objects, they enable
you to assign a human-readable name to a given
commit. You can also add an additional and more
verbose descriptive message.
Branches and unreliable narrators
In Git speak, projects are divided into 'branches', which are
independent lines of development. The process of creating a
new branch is known as ‘forking’. It’s good practice to isolate
any changes you make to your project in their own branch,
primarily to prevent them from breaking/polluting the main
code base, which is known as the master branch.
Let us make a new branch to experiment with a new
storyline, and then tell Git that we want to work on this.
Branches are just pointers to commits, and we can navigate
them in the same way, using git checkout:
git branch plot-twist
git checkout plot-twist
Unlike commits, though, checking out branches will
enable you to save changes in the project history. Let's do
that with a radical second chapter:
echo "It was I, your narrator, that committed this vile act. I
framed the butler out of jealousy, and though you, dear
reader can see my confession, none in these pages shall do
so. Verily, I am a git of the lowest form." > chapter2
The name Git is another example of Linus Torvalds naming a project after himself. He felt that the English vernacular term, meaning an unpleasant person, summed him up nicely.
After just a couple of commits, the object database becomes quite
complicated. Fortunately Git abstracts all this complication away.
December 2014 LXF191 89
Larks, we bet you never saw that one coming, truly a Poe-worthy turn of events. We need to add this file to the staging
area in order to commit this change, but we can do it all on
one line with:
git commit -a -m "Surprising confession from author"
If at this stage we decide that we like this branch, then
it is straightforward to make it the master branch. This is
known as ‘merging’, and in this straightforward case (there’s a
linear path from the master branch tip to our plot-twist
branch) as fast-forward merging. Switch back to master
with git checkout master, then merge with:
git merge plot-twist
Our plot-twist branch lives on, but you can delete it with
git branch -d plot-twist since it would no longer be required.
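The whole fast-forward dance can be rehearsed in a disposable repository. A sketch, with the branch names from the tutorial and placeholder prose; the initial branch name is detected rather than assumed, since newer Git versions let you configure it:

```shell
# Sketch of a fast-forward merge in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "[email protected]"
git config user.name "Example"
main=$(git symbolic-ref --short HEAD)   # usually 'master'

echo "A butler, a blunderbuss" > chapter1
git add chapter1
git commit -q -m 'Initial commit, butler'

git branch plot-twist
git checkout -q plot-twist
echo "It was I, your narrator" > chapter2
git add chapter2
git commit -q -m 'Surprising confession from author'

# The old branch tip is an ancestor of plot-twist, so Git just moves the
# pointer forward: no new commit is created.
git checkout -q "$main"
git merge plot-twist
git log --oneline | wc -l   # still only two commits
```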
Three-way merging
However, supposing we returned to the master branch and
wrote a different second chapter, but then decided that, in
fact, we thought our unreliable narrator chapter was better,
and that we wanted to incorporate it into the master branch.
Merging becomes more complicated (there is no longer a
linear path between branch tips), but Git can resolve this with
a three-way merge. The name comes from the fact the
resolution takes place by considering the three commits
bolded in the diagram (pictured below): the two branch tips,
and their common ancestor.
The command is the same, but this time Git tells us that
there’s a merge conflict in chapter2. Indeed, if you look at the
contents of this file, you will see an amalgamation of both
versions. To resolve the conflict, we need only checkout the
plot-twist version of our conflicted chapter2, and then the
merge can proceed unhindered:
git checkout plot-twist chapter2
git commit -m "Merge branch plot-twist"
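The conflict-and-resolve sequence is also easy to try out safely in a scratch repository. A sketch following the scenario above, with placeholder prose standing in for the chapters:

```shell
# Sketch of a three-way merge with a conflict, resolved in plot-twist's favour.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "[email protected]"
git config user.name "Example"
main=$(git symbolic-ref --short HEAD)

echo "The butler did it." > chapter2
git add chapter2
git commit -q -m 'Initial commit, butler'

git checkout -q -b plot-twist
echo "It was I, your narrator." > chapter2
git commit -q -a -m 'Surprising confession from author'

git checkout -q "$main"
echo "A different second chapter." > chapter2
git commit -q -a -m 'Alternative chapter2'

# Both branches changed chapter2 since their common ancestor, so the
# merge stops and asks us to resolve the conflict.
git merge plot-twist || echo "conflict, as expected"

# Take plot-twist's version of the file, then conclude the merge.
git checkout plot-twist -- chapter2
git commit -q -m 'Merge branch plot-twist'
git log --oneline | wc -l   # four commits: the merge created a new one
```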
Git is able to auto-merge files if they differ in an easily
understood manner, eg if one contains an extra paragraph or
code block. There are also some preprogrammed strategies
(called 'ours’, 'theirs' and 'both') for naive merging. But most
of the time merging is more complicated and requires user
interaction. In yet more complicated cases that involve
multiple branches, Git has to resort to what is known as a
recursive three-way.
On that decidedly kinky sounding note we will sign off this
tutorial. Though we have barely scratched the surface of this
all-powerful versioning and management tool, hopefully your
appetite is whetted. We'll delve further into Git in a future
issue, but for now, happy forking. LXF
A fast-forward merge (top-left, resulting in bottom-left) and a three-way merge (right). In the latter case, we end up with a new commit.
A GitHub to call your own
GitHub (https://github.com) is a web-based
repository where anyone can store their projects
for free, provided they're okay with those
projects being public. A paid-for service is
available for those wanting to keep their code
private. Besides a Git repository, GitHub
provides users with other tools for managing
their projects, including wikis and bug trackers.
Once you've signed up to GitHub you can
push a local repository to it either by HTTPS or
SSH. The former requires password access
whereas the latter allows you to use SSH Keys.
Once you've initialised and got some files in
your repository (it's recommended to include at
least a README file that describes your
project) then you can add the remote location
with the following:
git remote add origin
and when you're ready to push the master
branch of your code, do so with:
git push -u origin master
You can grant access to your GitHub repo to
anyone you trust, and then they can add their
branches and contributions and grammatical
errors. A central store used in this way is referred
to as a ‘bare repository’, since nothing is
committed to it directly: all the content comes
from the non-bare repositories of you and your
cohort of contributors.
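A bare repository is simple to set up yourself, and the mechanics can be sketched entirely with local paths standing in for a real server such as GitHub:

```shell
# Sketch: a bare repository as a central store, with a working clone
# pushing into it.
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare central.git    # no working tree: nothing is edited here

git clone -q central.git novel    # Git warns the repo is empty; that's fine
cd novel
git config user.email "[email protected]"
git config user.name "Example"
echo "It was a dark and stormy night." > chapter1
git add chapter1
git commit -q -m 'Initial commit'
git push -q origin HEAD           # the content arrives from the clone

ls ../central.git                 # only Git internals; no chapter1 in sight
```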
Got a question about open source? Whatever your level, email it to [email protected] for a solution.
This month we
answer questions on:
1 Fixing hard
drive space
2 Finding
damaged files
3 Raspberry Pi
remote desktops
4 Getting started
with Linux
5 Upgrading
Ubuntu via DVD
+ Debian kernel
Hidden space hog
I have run Linux Mint on my PC for
over three years. First I used 11,
then upgraded to 13 about a year
ago. It's running really well, but I have a
serious problem with free space on my /
partition which I don't understand. Looking
at Control Centre > System Information >
Filesystems, I see that my / partition is
listed as 94.4% full (1.3GB left free out of
23.2GB). However, using Baobab, Filelight
etc. I simply cannot see where all this space
has gone. Looking at the properties of
individual folders in /, I have a
total of 8.3GB. So why does
System Info say I only have
1.3GB left in my 23.2GB
partition? My /home is on a
separate partition.
From the forums
Did your previous
installation have a
separate /home?
I suspect not. That could mean
the files from your old home
directory are still on your root
partition but no longer visible.
Linux mounts each filesystem at a 'mount point', a directory that then holds the contents of the filesystem. Normally, a mount point is an empty directory but this isn’t a requirement. When you mount a filesystem on a directory that already contains files, those files disappear from view, replaced by the contents of the newly mounted filesystem. You may not be able to see them, but the contents of that directory are still occupying the same amount of space, and they reappear when you unmount the other filesystem and free up the mount point.
A disk usage analyser shows where your space is used, but it cannot account for files that have been hidden by mounting another filesystem on top of them.
If you have all these old files but cannot see them, how can you go about deleting them? One option is to boot from a live CD, mount your root filesystem, but not the home filesystem, and then inspect its contents.
However, there's a clever trick that avoids the
need to reboot, called bind mounting. Open a
terminal and run:
sudo mkdir -p /mnt/root
sudo mount --bind / /mnt/root
This mounts / at /mnt/root. You’ll see the
same files in both, and any changes you make
in one affects both, with one important
difference. Anything mounted on a directory in
/ is not mounted on the corresponding
directory in the bind mount. Directories such
as proc and sys will be empty in /mnt/root,
as should the likes of mnt, media and home.
If you run Baobab (Disk Usage Analyzer) or Filelight on /mnt/root, you’ll see what your root filesystem actually contains, or you can do it in the terminal with du:
sudo du -sch /mnt/root/* | sort -h
I suspect that you’ll find a lot of old files in /mnt/root/home. You’ll probably want to delete these, they will be your home directory from your previous installation, but you may want to move them to a temporary directory in your home first. That will free up the space on / and enable you to browse and delete the old files at your leisure – all without rebooting.
Enter our competition
if ($letter == winner)
get $books
Get your questions answered and exploit our generosity.
Because we’re nice chaps as well as Linux gurus, and because our offices have very little storage space, we’re giving one lucky reader each issue the chance to win some of the books that get sent to us for review. They’re mostly reference books that will give your shelves an air of computational gravitas. For a chance to win, email your question to [email protected], or post it to our forums to seek help from our lively community of readers. See page 94 for our star question.
Linux Format is proud
to produce the biggest
and best magazine
about Linux and free
software that we can. A word count of
LXF185 showed it had 67,790 words. About
as much as Heart of Darkness and Animal
Farm combined, but with way more Linux,
coding and free software!
That’s more than most of our competitors,
and that’s how we know that we’re big. As for
the best, well… that’s a subjective claim, but it’s
one we’re happy to stand by.
Terminals and superusers
We often give a solution as commands to type in
a terminal. While it is usually possible to do the same
with a distro’s graphical tools, the differences between
these mean that such solutions are very specific.
The terminal commands are more flexible and, most
importantly, can be used with all distributions.
System configuration commands often have to
be run as the superuser, often called root. There are
two main ways of doing this, depending on your distro.
Many, especially Ubuntu and its derivatives, prefix the
command with sudo, which asks for the user password
and sets up root privileges for the duration of the
command only. Other distros use su, which requires the
root password and gives full root access until you type
logout. If your distro uses su, run this once and then run
any given commands without the preceding sudo.
Silent corruption
My hard drive started playing up,
giving read errors. I replaced it and
copied all of my files over, apart
from a couple that would not copy because
of the errors. I know I have lost those but can
I be sure everything else is correct? How do I
know that files weren't corrupted by the
dying drive? Is there a way of testing them
so at least I know now and don't get a nasty
surprise later?
Andrew Wright
Unless you use one of the latest
generation of filesystems, like ZFS or
btrfs, which automatically checksum
all files and check for integrity, there's no
universal solution. If you have backups of your
files, you can compare the files with the
backups. Copy or extract the backups
somewhere, such as an external drive, cd to
the directory and run md5deep like this:
md5deep -r . >all.md5
This generates an MD5 checksum of every
file in the current directory and below and
saves them all to a file. Now copy that file to
your home directory, cd to it and run:
md5deep -x all.md5 -r .
The -x option tells md5deep to compare the
files with the checksums in the all.md5 file and
output only those files that don’t match. It's
then up to you to decide whether each file is
corrupted or has simply changed since the
backup, but you will have a lot less work to
do, though.
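If md5deep isn't in your distro's repositories, plain coreutils can improvise a similar check with find and md5sum. A sketch in a scratch directory (the file names here are made up for the demonstration):

```shell
# A rough md5deep substitute: checksum the backup tree, then verify the
# live tree against it. File names are illustrative.
set -e
work=$(mktemp -d)
cd "$work"
mkdir backup live
echo "original text" | tee backup/good.txt > live/good.txt
echo "original text" > backup/changed.txt
echo "corrupted text" > live/changed.txt

# Generate checksums from the (trusted) backup copy...
( cd backup && find . -type f -exec md5sum {} + > ../all.md5 )

# ...and check the live files against them. With --quiet, only failures
# are listed, so good.txt stays silent and changed.txt is flagged.
cd live
md5sum --quiet -c ../all.md5 || true
```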
If you don't have a backup, there are still
some things you can do – apart from resolving
to make backups in the future. Some file types
include their own checks. For example any TAR
file, compressed or not, can be checked with
the -t option. This basically extracts the
contents of the TAR file but then discards them
instead of saving them. Using
tar tf sometarball.tar.gz >/dev/null
will give no output if the file is good. You can
use the find command to locate and test all
TAR files like this:
find -name '*.tar' -or -name '*.tar.*' -exec tar tf
{} + >/dev/null
The find command will find all files that
match either of the name patterns and run tar
tf on them, redirecting the output to /dev/null.
Only error messages will be displayed in the
terminal so no news is indeed good news.
The {} in the command is replaced by the list of
files matched. The + means add as many as
possible at once, so if you have three tarballs, it
will end up running:
tar tf file1.tar file2.tar.gz file3.tar.bz2
Replacing the + with a ; causes find to run
the command separately for each file it finds,
like this:
tar tf file1.tar
tar tf file2.tar.gz
tar tf file3.tar.bz2
JPEG images can be tested in a similar way
to this, using the jpeginfo command. This, like
md5deep, is probably not installed by default
but will be in your distro's package repositories.
When used with the --check option, jpeginfo
will check the integrity of all JPEG images
passed to it. Once again, use find to get the
names of the files to pass to the program:
find -iname '*.jpg' -exec jpeginfo --check {} + | grep -v '\[OK\]'
This time we use -iname to match both .jpg
and .JPG. We use grep to filter out the files that
are marked with [OK], so we only see the
errors and warnings.
RPi RDP Grrr!
My Raspberry Pi is running
Raspbian and I have xrdp installed.
Whenever I connect to my
Raspberry Pi from the Remote Desktop
Viewer in Mate 17, and open up a terminal
and type
emacs &
I get the following error message:
Xlib: extension "RANDR" missing on display
Emacs starts, but unlike when I start it
directly from the Raspberry Pi, it's only in
black and white, there's no colour splash
screen. I have googled this and looked at
various forums but cannot find an answer to
how to stop this from appearing.
Rob Mundin
There are a number of red herrings at
work here. First, while you are experiencing
this issue when running Emacs (and
it's good to see someone using a proper text
editor instead of that vi stuff) you will get this
message when running almost any program
that uses X. Second, the message is simply a
warning that your VNC server, which is used by
RDP, doesn't implement the RANDR extension
to X. RANDR is used to manipulate the root
window of a display, for example when resizing
your desktop, which is not normally needed for
remote connections. It's not an error message
and doesn't affect the running of any
programs. Yes, Emacs does look different when
run over RDP, probably because it's detecting
that it's on a remote desktop and displaying its
less fancy, text-only, initial screen. There's no
loss of colour: try loading a Python script and
you'll see colour highlighting. The Emacs
display is monochrome because it is showing a
monochrome page; it's nothing to do with the
missing extension.
A quick reference to...
File links
Linux makes extensive use of file links,
mainly symbolic (or soft) links. A file
link is a way of having the same file (or
directory) appear in two or more places, or
under different names. You could simply
copy the file to each location, but this has a
couple of disadvantages; it wastes space and
if the original file is updated, the copy is still
the old version.
Linux filesystems have a means of
accessing the same file or directory from a
number of places using links. There are two
types: hard links and symbolic links, or
symlinks. Technically, every file is a hard link,
as this is a link from the file's name to the
data structure on the disk that holds its
contents. When referring to hard links,
we tend to mean extra links, so that a file has
more than one name. Hard links have some
limitations, most importantly that they only
apply to files, not directories, and that all
links must be on the same filesystem.
A symbolic link is more flexible and you
will see them a lot in a Linux system. You can
see them when using the -l option with the ls
command and they show up in most file
managers. A classic example can be seen in
the /usr/src directory, where /usr/src/linux
is a symlink to the source for the current
kernel, say linux-2.6.15. This way, you can have
the sources of more than one kernel installed
and anything that needs the sources can
look in /usr/src/linux to find the relevant
one. You will also find symlinks used heavily
in /etc/rc.d. Symlinks can be created with:
ln -s /path/to/file /path/to/link
Either or both paths can be relative.
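The difference between the two link types is easy to demonstrate in a temporary directory:

```shell
# Hard link versus symlink: the former is a second name for the same data,
# the latter a small file pointing at a name.
set -e
work=$(mktemp -d)
cd "$work"
echo "hello" > original

ln original hardlink     # hard link: shares the original's inode
ln -s original symlink   # symbolic link: records the path 'original'

ls -li                   # original and hardlink show the same inode number
rm original
cat hardlink             # the data survives under its other name
cat symlink || true      # fails: the name the symlink points at is gone
```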
It's not necessary to run a full remote
desktop to display a single program remotely.
You’re already using a terminal so you may as
well use SSH to run a remote shell session in a
local terminal. If you run SSH with the -Y option
ssh -Y user@host
you enable X forwarding. Then you can either
run your program from that remote shell, or
start it with the ssh command:
ssh user@host emacs
Either way, Emacs, or any other program
you choose to run, will open its window on the
local desktop. You will notice that Emacs gives
you a colour intro screen when run this way.
Foreign language
I'm transferring from Windows XP to
Linux but find that I'm speaking a
foreign language in Linux Land.
How do I learn to understand Linux lingo?
Noel Stuart
It’s always good to see someone
trying an alternative OS, and the first
steps are never easy. Linux is different
from Windows (why bother otherwise?) and it
has, as you’ve found, its own terminology. The
good news is that there’s a large community of
Linux users, all of whom were newbies at some
time (even if they don't all admit it), so there’s
plenty of help and hand holding available.
The first place to start is close to hand,
the New to Linux section on the LXFDVD
explains many of the basics, open the DVD's
index.html in a web browser and click on the
penguin near the top left. One of your first
decisions is to choose a distro. That's one of
those Linux terms. A distro, short for
distribution, is a complete OS based on Linux.
There are many to choose from, some more
suitable for new users than others. It's not just
the content of the distro that matters but the
community around it. Distros like Ubuntu or
Linux Mint are good for new users and have
plenty of resources to help you. I’d suggest you
pick one and persevere with it, rather than
distro-hopping. That will give you time to come
to terms with the features and terminology of
that particular distro and get help from its
community. All distros have web forums or
mailing lists where users and developers can
help one another, and many have areas
specifically for new users, where you can ask
the most basic of questions without fear of ridicule.
I'll let you into a secret, there is not that
much difference between Linux distros
This month’s winner is Hampshire_Hog1. Get in touch with us to claim your glittering prize!
LMDE kernel update
I am running Linux Mint Debian
Edition on a 32-bit desktop PC and
I’d like to know how to update the
kernel to the latest stable version if possible...
From the forums
There are two main reasons for
needing, as opposed to wanting a later
kernel: hardware support and security
fixes. The distro will release an update if the
latter is an issue. Currently, there’s no newer
kernel package for LMDE than the one installed
by the 201403 installer. If you need a more
recent kernel you’ll need to compile it from
source, which isn't a particularly difficult task.
LMDE provides the kernel-package package
that makes the task even easier. This is a
collection of programs for creating Debian
kernel packages from standard kernel source
code. First go to www.kernel.org and
download the latest source code tarball, it’s
3.16.2 at the time of writing. You then need to
Running a desktop remotely is simple with RDP, but not the best way to run a text editor.
install both kernel-package and linux-source.
The former is the kernel builder, the latter is the
source for the current kernel – you don't need
the source code but you do need the
configuration file. Open a terminal and switch
to root with:
sudo bash --login
Switch to the kernel source directory and
unpack the tarball you downloaded:
cd /usr/src
tar xf ~user/Downloads/linux-3.16.2.tar.xz
ln -s linux-3.16.2 linux
The last command isn’t essential, but some
module installers expect to find the kernel
source in /usr/src/linux. The linux-source
package will install the older source, still as a
tarball, which you can ignore, and a set of
config files in linux-config-3.11. Pick the one
appropriate to your kernel, run uname -a if
unsure, and unpack it into the correct location
in the new source directory:
xzcat linux-config-3.11/config.i386_none_486.xz > linux/.config
Now you can run make-kpkg from the kernel
source directory:
cd /usr/src/linux
make-kpkg buildpackage
The config file you’ve installed is for an older
version so the first thing it does is run make
oldconfig to update it. This will prompt for
each new option with a suggested choice.
You can press ? for further details or press
Enter to accept the default, which is
generally the wise choice. There will be a lot
of changes between 3.11 and 3.16. Most of
the new options are for recently added
hardware, so keep an eye out for yours.
Then make-kpkg will compile the kernel and
build it into a Deb package in /usr/src. You
can install this with:
dpkg -i ../linux-image-3.16.2_3.16.2-10.00.
The exact numbering will vary, then you
can reboot into your new kernel, safe in the
knowledge that if anything goes wrong with
it, the old one is still there on the boot menu.
Help us to help you
We receive several questions each month that we are unable to answer, because they give insufficient detail about the problem. In order to give the best answers to your questions, we need to know as much as possible.
If you get an error message, please tell us the exact message and precisely what you did to invoke it. If you have a hardware problem, let us know about the hardware. If Linux is already running, you can use the Hardinfo program, which gives a full report on your hardware and system as an HTML file you can send us. Alternatively, the output from lshw is just as useful. One or both of these should be in your distro’s repositories. If you are unwilling, or unable, to install these, run the following commands in a root terminal and attach the system.txt file to your email. This will still be a great help in diagnosing your problem.
uname -a >system.txt
lspci >>system.txt
lspci -vv >>system.txt
The Reinstall option on Linux Format’s Ubuntu Remix, or any Ubuntu disc for that matter, installs the operating system again without affecting your personal files.
underneath, they are all Linux after all. That means that once you become comfortable with one distro, the knowledge you have gained will put you in good stead when you decide to try another. If you don't gain that initial knowledge, no amount of distro hopping will help you.
There are also more general-purpose resources, and you can always try our forums, where new users are always welcomed.
Linux is different, not difficult. Stick with a distro, use the community resources available and you will soon find it all makes sense. Before long you may find yourself on the other side of the fence, helping out other new users on your chosen distro's forums.
Upgrade to Remix
I have Ubuntu 14.04 installed on a 64-bit HP 250 laptop (with UEFI disabled) and would like to use a DVD I made from the Remix ISO on LXF185. Can I upgrade using the new DVD rather than reinstall as offered by the Remix installation process?
From the forums
Yes you can. Because of the way the Ubuntu installer works (basically formatting your hard disk and unpacking a large archive onto it), every installation is a reinstallation. The installer does a standard install but preserves the contents of certain directories, such as /home, so all your files and settings are untouched. This works
even if you don’t have the home directory on a
separate filesystem.
The installer recognises that you already
have the same version installed and offers
reinstallation, as opposed to upgrade, as the
first choice. However, although personal data is
preserved, system settings are not. You may
want to make a backup of /etc before
installing, so you can copy settings back
afterwards. The same goes for /var/lib if you
are running any services that store data there,
like databases or even BitTorrent software.
It doesn't matter which of the desktop
options you choose when booting from the
DVD. Those choices only determine the default
desktop presented when booting the live
environment. The installer puts them all on
your hard disk, regardless. LXF
Frequently asked questions…
What is a rootkit?
A rootkit is a collection of software
installed by a cracker to hide their
presence on your computer. This
includes programs, such as
modified versions of ls and ps that
don't show any files or processes
belonging to the cracker. The
programs are generally hidden as
trojans to avoid suspicion. The
name comes from the fact that
these programs are installed by,
and can be executed as, root. An
affected system is usually
described as having been 'rooted'.
What damage does a rootkit
usually do?
In itself, not that much. Its function
is to enable the cracker that
installed it to have his wicked way
without your computer arousing
your suspicions. It mainly does this
by modifying programs you may
use to monitor the health of your
system, so they report nothing
untoward. In this respect, they
differ from Trojans.
A trojan, what's that?
A trojan horse – often shortened
to trojan – is a malicious program
disguised as something legitimate.
The name comes from the Trojan
Horse ruse used by the Greeks to
infiltrate the city of Troy. Hence the
phrase 'beware of Greeks bearing
gifts and filesharers offering
cracked software'. Adware,
fortunately not a real problem for
Linux users, is an example of trojan
software, as is anything that
'phones home' without telling you.
Does this have anything to do
with music CDs?
Only in that some older Sony DRM
software was described as a
rootkit, albeit for Windows. The
fact that it was installed by such a
prominent company on so many
computers, without the users'
knowledge or consent, makes it
the highest profile (so far) rootkit.
How do I know if my system
has been rooted?
There are various programs that
will check your system for signs of
a rootkit. These work in two main
ways; looking for code sequences
from known rootkits and
comparing various files against
MD5 checksums made when the
system was known to be clean.
Because of the latter, it’s crucial
that such a program is installed
and run as soon as possible after
install, so it has a record of the
checksums for a clean system.
Two such programs are Rootkit
Hunter (rkhunter.sourceforge.net) and
Chkrootkit (www.chkrootkit.org).
They should be run regularly,
preferably from Cron. It helps to
run them when you’re online,
because they can automatically
update their databases.
Will these programs fix things
for me?
No. Once a rootkit has been
installed on your computer, it has
been well and truly compromised.
A rootkit is a way for a cracker to
cover their tracks, so anything else
on your computer could be
affected. The only safe course of
action is to backup your data and
configuration files and re-install.
On the disc
Distros, apps, games, podcasts, miscellany and more…
The best of the internet, crammed into a phantom-zone like 4GB DVD.
This month’s DVD is dominated by the Raspberry Pi and there’s no
doubt this tiny computer has
towered over the Linux scene, and
computing in general, since it was
launched in early 2012. You might
be forgiven for thinking that it’s the
only computer of its type available
but that’s not the case. For
example, the Beaglebone Black is
more powerful yet costs only £10
more – yet you hear very little
about it. There's no doubt that the
pre-launch publicity of the Pi
contributed to its initial success,
but that doesn't explain the
continued interest almost three
years later.
The factor that makes the
Raspberry Pi is the same one that’s
responsible for the continued
success of Linux in general – and
that’s the community. You can talk
hardware specs all you like but
what matters is the people. It’s
them that come up with the
incredible breadth of ideas for what
to do with the Pi and it’s them that
drives the small companies that
have sprung up – without them to
buy the products these companies
wouldn’t exist. The success of the
Pi mirrors what drives Linux and
open source, it’s
just that more
because there is
that small board
to focus on.
OS installation manager
NOOBS 1.3.10
When we say we have five Raspberry Pi distros on
this month's LXFDVD, that isn't exactly true.
We actually have four distros plus NOOBS. NOOBS
(New Out Of Box Software) gets around any
problems associated with the traditional method of
copying a Pi distro to an SD card with dd: such as
needing root access or the risk of giving the wrong
drive name (dd is very unforgiving). With NOOBS,
you just unpack the ZIP archive onto a SD card, as
a normal user, and then boot the Pi from that card.
So what is NOOBS if it’s not a distro? It’s a
collection of distros and an installer: boot from the
SD card and you will see a list of six different
distros. These include the standard Raspbian along
with ports of other distros familiar to desktop
users, such as Arch and Fedora. There are also the
specialist media centre distros OpenELEC and
RaspBMC and, for a rather different experience,
a port of RiscOS.
You can select the ones you want from the menu
and NOOBS will install them for you. When you
reboot, NOOBS presents you with a menu to
choose the distro to use, assuming you installed
more than one, just like dual booting with Grub…
only prettier. Even better, NOOBS stays on the SD
card and can be accessed later by holding down
the shift key while booting. This gives access to the
recovery interface, where you can fix a corrupted
system or install a different distro. Note that
some of the distros require an Internet connection,
and they are clearly indicated by the icon to the
right of the name.
NOOBS means you no longer need to juggle SD
cards to switch distros, cards that are rarely labelled
and often misplaced, particularly if you have a
Model B+ Pi, whose microSD cards are
impossible to label and almost as impossible to
avoid losing. One card contains all you need, and
NOOBS also has an option to set aside a partition
for shared data. Although NOOBS itself only
occupies around 700MB of space on your SD card,
it is recommended that you use a card of at least
4GB, preferably 8GB, to ensure there is plenty of
space for the installed distros.
Defective discs
In the unlikely event of your Linux Format
coverdisc being in any way defective,
please visit our support site for
further assistance. If you would prefer to
talk to a member of our reader support
team, email us at [email protected]
or telephone +44 (0) 1225 822743.
96 LXF191 December 2014
NOOBS offers six distros and lets you install one or more without resorting to dd or the CLI.
New to Linux?
Start here
What is Linux? How do I install it?
Is there an equivalent of MS Office?
What’s this command line all about? Are you reading
this on a tablet?
How do I install software?
Open Index.html on the disc to find out
Kali Linux, Jasper & RetroPie
More Pi goodness
The other distros on the DVD show the versatility
of the Raspberry Pi. In addition to the more general
purpose desktop environments found in NOOBS,
we have media servers, a jukebox, a security
penetration testing distro and a retro gaming
platform. Apart from NOOBS, the others are
supplied as compressed disk images. Uncompress
the tar or ZIP archive and you will find a .img file that
needs to be copied to an SD card. Make sure you
select the correct device, or you could wreck the
data on your hard drive. Then copy the image with:
sudo dd if=/path/to/image of=/dev/sdX bs=4k
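Since dd will happily overwrite whatever device you name, it's worth rehearsing on a scratch file first. This sketch (the paths and the sdX device name are placeholders, not taken from the disc) writes a dummy 1MiB image under /tmp, so you can see dd's block-copy behaviour without risking any disk:

```shell
# Safe rehearsal: dd copies raw bytes from 'if' to 'of' in 'bs'-sized
# blocks. Writing to a file in /tmp cannot harm any real disk.
dd if=/dev/zero of=/tmp/scratch.img bs=4k count=256   # 256 x 4k = 1MiB
ls -l /tmp/scratch.img

# For a real card, first identify the device (it is the small removable
# disk in this listing), then substitute it for sdX:
#   lsblk -d -o NAME,SIZE,TYPE
#   sudo dd if=/path/to/image of=/dev/sdX bs=4k && sync
```

Running sync after the copy ensures all buffered writes actually reach the card before you remove it.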
Download your DVD from
And more!
System tools
Checkinstall Install tarballs with your
package manager.
GNU Core Utils The basic utilities that
should exist on every operating system.
Hardinfo A system benchmarking tool.
Kernel Source code for the latest stable
kernel release, should you need it.
Memtest86+ Check for faulty memory.
Plop A simple manager for booting
OSes, from CD, DVD and USB.
RaWrite Create boot floppy disks under
MS-DOS in Windows.
SBM An OS-independent boot manager
with an easy-to-use interface.
WvDial Connect with a dial-up modem.
Reading matter
The Cathedral and the Bazaar Eric S
Raymond’s classic text explaining the
advantages of open development.
Linux Kernel in a Nutshell An
introduction to the kernel written by
master-hacker Greg Kroah-Hartman.
Debian Administrators’ Handbook
An essential guide for sysadmins.
Linux Dictionary The A-Z of everything
to do with Linux.
Dive Into Python A masterclass in this
popular language.
Bourne Shell Scripting Guide
Get started with shell scripting.
Advanced Bash Scripting Guide
Go further with shell scripting.
Intro to Linux A handy guide full of
useful pointers for new Linux users.
System Administrators Guide Get
your system to do what you want.
Tools Summary A complete overview
of GNU tools.
Music server
PiMusicBox 0.5
This month’s Roundup covers media players, and
one of the candidates is Music Player Daemon
(MPD). As it happens, MPD is also the basis for this
Pi distro. PiMusicBox uses Mopidy (www.mopidy.
com), which is in turn based on MPD, each layer
adding more to the already rich feature set of MPD.
This is a music server: it sits somewhere discreet
(the Pi is nothing if not discreet) connected to your
audio system and network. It can be controlled
directly through a web browser or through one of
the many MPD clients available for desktop Linux
and Android. PiMusicBox plays music from any
locally attached storage or from elsewhere on your
network, but it goes a lot further than that.
PiMusicBox also plays streamed audio from many
popular online services, including Spotify,
SoundCloud, Google Music,
Subsonic and Soma FM.
It can also play podcasts
from gPodder and iTunes.
Everything is controlled
from a web browser or the
remote clients; there’s no
need to delve into the
command line to set up
anything, with the possible
exception of connecting to
a wireless network.
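Even that wireless-network exception need not involve a terminal: PiMusicBox reads its settings from a plain settings.ini file on the card's FAT partition, which you can edit from any OS before first boot. The section and key names below are an assumption based on the 0.5 release and should be checked against the comments in the file itself; the SSID and passphrase are, of course, hypothetical:

```ini
; config/settings.ini on the card's FAT partition (editable from any OS)
[network]
wifi_network = MyHomeSSID       ; hypothetical SSID - use your own
wifi_password = s3cretpassw0rd  ; hypothetical passphrase

[musicbox]
name = MusicBox                 ; the name the server advertises
```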
1 for Free Software
Future Publishing, Quay House,
The Ambury, Bath, BA1 1UA Tel 01225 442244
Email [email protected]
20,238 January – December 2013
A member of the Audit Bureau of Circulations.
Editor Neil Mohr
[email protected]
Technical editor Jonni Bidwell
[email protected]
Operations editor Chris Thornett
[email protected]
Art editor Efrain Hernandez-Mendoza
[email protected]
Editorial contributors Neil Bothwick, Chris Brown,
Jolyon Brown, Richard Cobbett, Sean Conway, Kent
Elchuk, Matthew Hanson, Philip Herron, Jeremy Laird,
Andrew Mallett, Les Pounder, Jem Roberts, Mayank
Sharma, Shashank Sharma, Richard Smedley, Matt
Swider, Alexander Tolstoy, Mihalis Tsoukalos.
Art assistance Nick Aspell
Illustrations Shane Collinge,
Simon Middleweek, iStockphoto
Advertising sales director Nick Weatherall
020 7042 4155 [email protected]
Account sales manager Ricardo Sidoli
020 7042 4124 [email protected]
Campaign manager Sam Book
Next month
Distro power secrets revealed
LXF 192 will be on sale 25 Nov 2014
Take your knowledge to the next level with the
hidden secrets stuffed inside your favourite distro!
Build a Linux robot
We have the OS, code and hardware to build low-cost,
autonomous, kitchen-traversing robots. Here’s how…
Build an ultimate NAS
Why buy a NAS box when you can build one? Create a
24TB super-NAS powered by Arch; don’t be scared.
Light window managers
Roundup delves into the world of keyboard-driven
and alternative window managers. Emacs, anyone?
Contents of future issues subject to change – the robot overlords might take over again. We never learn.
Production co-ordinator Ian Wardle
Production manager Mark Constance
Trade marketing manager Colin Hornby
Distributed by Seymour Distribution Ltd, 2 East
Poultry Avenue, London EC1A 9PT Tel 020 7429 4000
Overseas distribution by Future Publishing Ltd
Tel +44 (0)1225 442244
Licensing & syndication manager Regina Erak
[email protected]
Tel +44 (0)1225 732359
International account manager Rebecca Hill
[email protected]
UK readers’ order line 0844 848 2852
General UK enquiries 0844 848 1602
Overseas readers +44 (0)1604 251045
Online enquiries
Email [email protected]
Head of content and marketing
for Tech Group Nick Merritt
Group editor-in-chief Paul Newman
Group art director Steve Gotobed
Editor-in-chief, Computing Brands Graham Barlow
LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux
throughout for brevity. All other trademarks are the property of their respective
owners. Where applicable code printed in this magazine is licensed under the GNU
GPL v2 or later. See
Copyright © 2014 Future Publishing Ltd. No part of this publication may be
reproduced without written permission from our publisher. We assume all letters
sent – by email, fax or post – are for publication unless otherwise stated, and reserve
the right to edit contributions. All contributions to Linux Format are submitted and
accepted on the basis of non-exclusive worldwide licence to publish or license others
to do so unless otherwise agreed in advance in writing. Linux Format recognises all
copyrights in this issue. Where possible, we have acknowledged the copyright holder.
Contact us if we haven’t credited your copyright and we will always correct any
oversight. We cannot be held responsible for mistakes or misprints.
All DVD demos and reader submissions are supplied to us on the assumption they
can be incorporated into a future covermounted DVD, unless stated to the contrary.
Disclaimer All tips in this magazine are used at your own risk. We accept no liability
for any loss of data or damage to your computer, peripherals or software through
the use of any tips or advice.
Printed in the UK by William Gibbons on behalf of Future
Future is an award-winning international media
group and leading digital business. We reach more
than 49 million international consumers a month
and create world-class content and advertising
solutions for passionate consumers online, on
tablet and smartphone and in print.
Future plc is a public
company quoted on the
London Stock Exchange
(symbol: FUTR).
Chief executive Zillah Byng-Maddick
Chairman Peter Allen
Chief financial officer Richard Haley
Tel +44 (0)207 042 4000 (London)
Tel +44 (0)1225 442 244 (Bath)
We are committed to only using magazine paper which is
derived from well managed, certified forestry and chlorine
free manufacture. Future Publishing and its paper suppliers
have been independently certified in accordance with the rules
of the FSC (Forest Stewardship Council).