11 PAGES OF THE BEST LIVE DISTROS
www.linuxuser.co.uk
THE ESSENTIAL MAGAZINE
FOR THE GNU GENERATION
12 FOR THE TOOLKIT
• Diagnostics • Maintenance
• Privacy • Repair • Security
BLOCKCHAIN
INVASION
How the tech behind Bitcoin
is taking over fintech
IoT EXPLOITS
The White Hat’s Guide
to the Internet of Things
TWITTER KILLER
Set up a server for the social
platform shaking the web
Qt5 & QML
Create great-looking
cross-platform GUIs
INTERVIEW
The rise of Mastodon
Eugen Rochko on the future of social
PI AUTOPILOT
Write and deploy Python programs to your drone
Projects to try
» Plotly: stop making boring graphs
» Set up a digital display system
Maui Linux
The KDE-based distro
focused on simplicity
ALSO INSIDE
Issue 177
» Program in Erlang
» Linksys Velop
» Top audio players
PRINTED IN THE UK
£6.49
THE MAGAZINE FOR
THE GNU GENERATION
Future Publishing Ltd
Quay House, The Ambury
Bath, BA1 1UA
+44 (0) 1225 442244
Web: www.linuxuser.co.uk
www.myfavouritemagazines.co.uk
www.futureplc.com
Editorial
Editor Chris Thornett
[email protected]
01202 442244
Senior Art Editor Jo Gulliver
Designer Rosie Webber
Production Editor Phil King
Editor in Chief Paul Newman
Contributors
Dan Aldred, Joey Bernard, Jonni Bidwell, Christian Cawley,
Toni Castillo Girona, Tam Hanna, Oliver Hill, Paul O’Brien, Jon
Masters, Calvin Robinson, Mayank Sharma, Richard Smedley
and Mihalis Tsoukalos
Advertising
Digital or printed media packs are available on request.
Commercial Sales Director Clare Dove
Advertising Director Richard Hemmings
[email protected]
01225 687615
Account Director Andrew Tilbury
[email protected]
01225 687144
Account Director Crispin Moller
[email protected]
01225 687335
International
Linux User & Developer is available for licensing. Contact the
International department to discuss partnership opportunities.
Head of International Licensing Cathy Blackman
+44 (0) 1202 586401
[email protected]
Subscriptions
For all subscription enquiries:
[email protected]
0844 249 0282
Overseas +44 (0)1795 418661
www.imaginesubs.co.uk
Head of subscriptions Sharon Todd
Circulation
Circulation Director Darren Pearce
01202 586200
Production
Production Controller Nola Cokely
Management
Finance & Operations Director Marco Peroni
Creative Director Aaron Asadi
Editorial Director Ross Andrews
Printing & Distribution
William Gibbons, 26 Planetary Road, Willenhall, West Midlands,
WV13 3XT
Distributed in the UK, Eire & the Rest of the World by
Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU
0203 787 9060 www.marketforce.co.uk
Distributed in Australia by Gordon & Gotch Australia Pty Ltd,
26 Rodborough Road, Frenchs Forest, New South Wales 2086
+ 61 2 9972 8800 www.gordongotch.com.au
Disclaimer
The publisher cannot accept responsibility for any unsolicited material lost or
damaged in the post. All text and layout is the copyright of Future Publishing Ltd.
Nothing in this magazine may be reproduced in whole or part without the written
permission of the publisher. All copyrights are recognised and used specifically
for the purpose of criticism and review. Although the magazine has endeavoured
to ensure all information is correct at time of print, prices and availability may
change. This magazine is fully independent and not affiliated in any way with the
companies mentioned herein.
If you submit material to Future Publishing via post, email, social network or any
other means, you automatically grant Future Publishing an irrevocable, perpetual,
royalty-free licence to use the material across its entire portfolio, in print, online
and digital, and to deliver the material to existing and future clients, including
but not limited to international licensees for reproduction in international,
licensed editions of Future Publishing products. Any material you submit is sent
at your risk and, although every care is taken, neither Future Publishing nor its
employees, agents or subcontractors shall be liable for the loss or damage.
© 2017 Future Publishing Ltd
ISSN 2041-3270
Welcome
to issue 179 of Linux User & Developer
This issue
» The best live booters for your toolkit, p18
» The White Hat’s guide to IoT exploits, p50
» Blockchain takes on banking, p60
Welcome to the UK and North America’s
favourite Linux and FOSS magazine. We start
this issue off with a somewhat loaded question:
Is Microsoft about to invest in Canonical, the
company behind Ubuntu, the most popular
Linux distro on the planet? It’s become clear that
Canonical’s change of heart over Unity had much
to do with dropping loss-making projects in a
bid to court outside investors for a race to IPO. A
Microsoft investment would make sense given
the way that Ubuntu and Azure, Microsoft’s cloud
computing platform, have been working together in so many ways
over the years: Ubuntu was first on the Azure Stack, first for the big
data offering on Azure and, just recently on Windows 10, the first
Linux distro to be supported on Windows Subsystem for Linux.
But let’s casually walk away from that live grenade and focus
on what’s in the magazine. We’re celebrating live-booting distros
and have pulled together a great selection you can stick on USB
for repairs, security work or just surfing the web (p18). Next, you’ll
need to roll your sleeves up for our dive into IoT vulnerabilities (p50)
and how to protect yourself. Of course, we also have a tempting
platter of tutorials: from mucking about with pretty graphs (p78); to
setting up a Mastodon instance (p36), the superstar decentralised
microblogging platform; programming in Erlang (p44); controlling
drones with a Pi (p74) and a whole lot more. Dig in!
Chris Thornett, Editor
Get in touch with the team:
[email protected]
Facebook: LinuxUserUK
Twitter: @linuxusermag
[email protected]
For the best subscription deal head to:
www.imaginesubs.co.uk/lud
Save 20%! Enter the promo code: PS172. See page 30 for details
Contents
TOTAL
DISTRO
TOOLKIT
Reviews
81 Audio players
Which one is music to the ears?
18 Total distro toolkit
The joy of live Linux distros and their many uses
Audacious
Quod Libet
Amarok
Banshee
OpenSource Tutorials
08 News
The biggest stories from the
open source world
12 Interview
Eugen Rochko chats about
his Mastodon social network
16 Kernel column
The latest on the Linux
kernel with Jon Masters
Features
18 Total distro toolkit
Try out the best
live-booting Linux distros
50 Internet of Things
Identify IoT security flaws
and protect your devices
32 Concerto
Set up a digital display server to stream
custom messages across multiple screens
34 Shell scripting
Spruce up your shell scripts with loops,
positional variables and user input
36 Mastodon
Create a decentralised, Twitter-like social
network on your own server
40 Qt5 development
Use the QML runtime environment to create
cool-looking cross-platform user interfaces
86 Linksys Velop
A mesh networking kit with a simple
setup but is it worth the money?
88 Maui Linux 17.3
We find out if simplicity and
functionality are a winning combination
90 Free software
We test Bokeh 0.12.5, adx 1.13,
Neoleo 5.0.0 and Xonsh 0.5.9
44 Program in Erlang:
Erlang and OTP
Develop Erlang applications that implement
the functionality of finite-state machines
60 Fintech
How blockchain is
conquering banking
Subscribe
& save!
30
Check out our
great new offer!
US customers
can subscribe
on page 64
06 Free downloads
65 Practical Raspberry Pi
We’ve uploaded a host of new FOSS this month.
Try out six live distros, including antiX and
Porteus. Plus grab all the tutorial project files –
all hosted on our secure repo.
Learn how to control a drone from your Pi,
draw 3D text in Minecraft, boot from USB
and monitor collected data with Plotly.
Plus the incredibly robust Tough Pi-ano.
Free with
your magazine
The best distros
and FOSS
Professional
video tutorials
Essential software for
your Linux PC
The Linux Foundation
shares its skills
Tutorial
project files
All the assets you’ll need
to follow our tutorials
Plus, all of this
is yours too…
• Download all the IT security tools that
are covered in our White Hat's guide
to the Internet of Things, including
Binwalk, Checksec, Radare2 and
Zap Proxy
• Enjoy 20 hours of expert video
tutorials from The Linux
Foundation
• Get the program code for our Erlang
and Pi tutorials
Log in to www.filesilo.co.uk/linuxuser
Register to get instant access
to this pack of must-have Linux
distros and software, how-to
videos and tutorial assets
Free
for digital
readers too!
Read on your tablet,
download on your
computer
The home of great
downloads – exclusive to
your favourite magazines
from Future Publishing
Secure and safe online
access, from anywhere
Free access for every
reader, print and digital
Download only the files
you want, when you want
All your gifts, from all
your issues, in one place
Get started
Everything you need to
know about accessing
your FileSilo account
Unlock
every
issue
01
Follow the instructions
on screen to create an
account with our secure FileSilo
system. Log in and unlock the
issue by answering a simple
question about the magazine.
Subscribe today & unlock the free
gifts from more than 40 issues
Access our entire library of resources with a money-saving
subscription to the magazine – that’s hundreds of free resources
02
You can access FileSilo
on any computer, tablet
or smartphone device using any
popular browser. However, we
recommend that you use a
computer to download content,
as you may not be able to
download files to other devices.
Over 20 hours
of video guides
The best
Linux distros
Free open
source software
Essential advice from the
Linux Foundation
Specialist Linux
operating systems
Must-have programs for
your Linux PC
Head to page 30 to subscribe now
If you have any
problems with
accessing content on FileSilo,
take a look at the FAQs online
or email our team at the
address below.
03
[email protected]
Already a print subscriber?
Here’s how to unlock FileSilo today…
Unlock the entire LU&D FileSilo library with your unique Web ID
– the eight-digit alphanumeric code that is printed above your
address details on the mailing label of your subscription copies.
It can also be found on any renewal letters.
More
added
every
issue
08 News & Opinion | 12 Interview | 96 FileSilo
HARDWARE
Google and Huawei launch potential
Raspberry Pi killer
The HiKey 960 is a high-end rival to the Pi
Thanks to continuous updates from
the Raspberry Pi Foundation, emerging
competitors have often fallen by the
wayside, unable to compete with both the
feature set and price point of what each Pi
model can offer. However, a new board from
two of the world’s biggest tech giants might
turn that on its head.
Google and Huawei have teamed up to
produce the HiKey 960, with further input
coming from seasoned Linux developer
Linaro (http://bit.ly/HiKey960). While support
for Linux comes as standard, the HiKey also
offers a high-end feature set for developing
on the Android platform.
Huawei has developed many of the board’s
core components, with its own octa-core
Kirin 960 chip powering each unit. Alongside
this, there’s a powerful 3GB of LPDDR4 RAM,
tailored for use on ARM technology.
“If you’re simply looking at the
specification sheet, Google’s HiKey certainly
is an appealing piece of kit,” says freelance
tech analyst Liam Triggs. “While the form
factor is the same as what the Raspberry Pi
offers, there are upgrades throughout and
there’s enough power here to compare it to a
high-end smartphone.”
Digging deeper into the HiKey 960, there’s
plenty of room here for integrating the
board into any number of projects. With
both 40-pin and 60-pin connectors, there’s
space for various customisations, while
an additional on-board PCIe M.2 slot gives
scope for upgrading to 32GB of on-board
storage. It doesn’t stop there either: there are
built-in wireless and Bluetooth options and it
includes both fastboot and recovery modes –
a first for a single-board computer of its kind.
There’s a lot to love about what the HiKey
brings to the table, especially when it comes
to possible development opportunities,
but it’s the fact that Google is directly
involved with the project that’s arguably
the most exciting element. “Whatever your
personal opinion on Google may be, they’ve
done wonders for the tech industry as a
whole. They’re also making big strides in
Above The HiKey 960 includes the same
processor as Huawei’s MATE smartphones
the open source industry, with their recent
DIY AI assistant kits being just one of their
ambitious projects,” says Liam Triggs. “What
is important to remember, however, is that
the Raspberry Pi has been the long-term king
of single-board computers, and it’s a near
impossible task for anything to overtake it.
But if anyone can do it, Google (with Huawei’s
help), must be a favourite.”
The HiKey 960 is available online in the
US, Europe and Japan for $239. Stock will be
limited, so you’ll need to get in there early.
TOP FIVE
Open source
backup clients
1 Bacula
When it comes to an all-in-one package for
backing up your computer data, Bacula is hard
to match. It’s perfectly tailored to use on low-resource machines due to its minimal build, but is
powerful enough to automate the entire backup
process for you, saving you time and effort.
2 Areca
Areca is without doubt a little complicated to set up, but remains the perfect choice for users wanting to back up multiple files simultaneously. It’s also remarkably versatile, enabling you to tailor it to the requirements of your system.
3 UrBackup
Automation plays a big role in UrBackup, and it’s ideal for business users. Timed backups are easy to create in UrBackup’s attractive UI, enabling you to effortlessly transfer your files online.
4 Box Backup
Box Backup is one of the few dedicated Linux backups that runs entirely online. Due to this, processing speeds are lightning quick, enabling you to move both small and large files to the cloud with the utmost of ease.
5 Zmanda
The primary reason why Zmanda has gained such a great following is that it works in tandem with many leading cloud backup services. Files can be distributed to different accounts, depending on what you need placed where.
HARDWARE
Intel patches seven-year remote hijacking bug
Serious security flaw is patched
Intel has long been considered one of the premier chip manufacturers on the market, with security playing a vital part in the firm’s overall success. However, it seems that one element had slipped through the net. Recently conducted research found that Intel processors shipped since 2010 contained remote management features. While from the outset this may not seem like a really big deal, the issue was found to give attackers full control over any computer that was connected to the same network. An official report from Intel explained the flaw further: “There is an escalation of privilege vulnerability in Intel Active Management Technology (AMT), Intel Standard Manageability (ISM), and Intel Small Business Technology versions firmware versions 6.x, 7.x, 8.x 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products. This vulnerability does not exist on Intel-based consumer PCs.”
The flaw is noted to have affected many core Intel manageability firmwares, covering both early and later variants of the firmware. Due to the flaw going unnoticed for several years, it’s unknown just how many machines have been affected. However, the ease of access an attacker would have to the network is alarming.
Due to the research, Intel delivered a patch in record-quick time. The patch, which resides in Intel’s Active Management Technology, is automatically downloaded onto affected machines, but will be unnoticeable to many users. It’s important, however, to note that Intel has rated the vulnerability as critical, and so users should look to perform a system scan of their desktop to eliminate any external issues.
In the same advisory posted by Intel, a second flaw is said to have been discovered. Although much less of a threat than the issue mentioned earlier, the second flaw is said to give attackers an alternative way to attack your desktop. Again, Intel has been quick to release all necessary patches. For those worried about whether their desktop could be at risk, it’s worthwhile checking out the official Intel Newsroom (https://newsroom.intel.com) for more information.
OpenSource
Your source of Linux news & views
OPEN SOURCE
Baidu open-sources self-driving tech
The ‘Apollo’ project could be on a road near you soon
Although best known for being China’s
largest search engine group, Baidu has
also been one of the pioneers of the self-driving automotive industry in recent years.
While its plans closely resemble those that
we’re hearing from the likes of Google, Apple
and Uber, the Chinese firm has taken the
decision to open-source its entire suite of
self-driving secrets.
Named ‘Apollo’, Baidu describes its open
source endeavours as a ‘complete and
reliable software platform for those within
the automotive and autonomous driving
industry.’ Although it’s unclear as to the
scale of the project, the company has gone
on record to discuss how it will open up its
technology. In July, Baidu will begin opening
its intellectual property for its ‘restricted
environment’ policies, quickly followed by the
research the firm has done on urban road
driving. By the end of 2020, the company has said that its fully autonomous driving capabilities will be readily available.
By the end of 2020, Baidu’s fully autonomous driving capabilities will be readily available
Above China’s largest search engine group has made big strides in its automotive tech
While Tesla has previously led the way
when it comes to open-sourcing its driving
technology, it’s yet to be done on the scale
that Baidu has promised. Until recently, the
company has been developing its technology
with the help of Chinese domestic car
makers, and it has already been road-tested
in California and Beijing. While certain legal
issues still need to be considered, Baidu
already boasts an Autonomous Vehicle
Testing Permit, something that many of its
competitors are yet to achieve, and early
plans are in place for further testing through
the US and China in the coming months.
KERNEL
Linux 4.11 has been released
Improved support for SSD swapping and much more
As far as updates go, the Linux 4.11 kernel
update is one of the biggest yet to come
from Linus Torvalds.
“So after that extra week with an rc8,
things were calm, and I’m much happier
releasing a final 4.11 now,” said Torvalds in an
official blog post.
At the top of the update list is improved
support for swapping to solid-state disks;
there’s also official support for OPAL
self-encrypting disk drives. For business
users, the integration of Intel’s Turbo Boost
Max Technology into the kernel will be a
major plus, as it’s ideal for automating
several key components of your CPU, and
tailored to work well on low-resource
machines. Virtual machine support has
also gained some much-needed attention,
with Shared Memory Communication over
RDMA allowing for quicker communications
between machines.
Digging a little deeper, graphical
enhancements have also been made clear in
the new update. There’s improved integration
of AMDGPU power management, while Intel’s
DRM driver can now handle DisplayPort
connectivity. Among the smaller updates
found in the 4.11 kernel, a new front-end for
the ftrace interface has been introduced,
named as the perf ftrace tool, alongside
support for pluggable I/O schedulers.
With the 4.11 update now readily available,
the merge window for the Linux kernel 4.12
is officially open. It’s believed that the next
update will be noticeably smaller than what’s
been featured here, with more of a focus on
fixing the errors found in 4.11. As ever, turn to
Jon Masters’ Kernel Column, on p16, for the
deep dive.
KERNEL
Pi Zero W celebrating 250,000 unit sales
Smallest Raspberry Pi model hits significant milestone
Integrating wireless connectivity directly into the tiny Raspberry Pi Zero was always going to be a big deal, so it’s no surprise that 250,000 units of the Pi Zero W have already been sold. It’s an impressive feat, and early indications show that by the end of 2017, it’s likely the Pi Zero W will have sold more than 500,000 units. However, the one downside of the Pi Zero W has been its limited availability, primarily due to a lack of core distributors. The latest news to come from the Pi Foundation is that 13 new Pi Zero distributors have now been added.
With the announcement of the new distributors, the overall reach of the Pi Zero W is set to be massively increased. Many of the new distributors will serve previously unreachable territories, including Australia, New Zealand, Malaysia and Japan. Alongside new territories, the Pi Foundation will also be improving its overall reach in established countries like the USA, Germany and Canada. Of course, to coincide with the improved number of distributors, the Pi Foundation will also be ramping up production of Pi Zero W units, hopefully to make sure that any increased demand is met. For more information on the new stockists, head across to the Raspberry Pi Foundation site (www.raspberrypi.org).
SOFTWARE
GRUB gets first update in years
Popular bootloader gets much-needed TLC in latest upgrade
Despite it being close to five years since its last update, GRUB continues to be one of the more popular bootloaders. GRUB 2.02 is now ready for release, and boasts an impressive changelog to boot. At the centre of the update is a new filesystem front-end, with Coreboot CBFS now being a central element of the tool. There’s also been an overhaul of the primary graphics menu, allowing for easier control and navigation of existing tools. However, at the time of writing, the menu can be a little awkward to get working. Aside from this, users will find better support for both FreeDOS and Arm64 EFI, while there are also multiple upgrades to the existing ZFS integration – enabling better file storage options to be refined. While an official release announcement has yet to be made by the GRUB team, a full run-through of changes, as well as all download links, can be found over on its official Git news page: http://bit.ly/GRUBnews.
DISTRO FEED
Top 10
(Average hits per day, 11 April–9 May 2017)
1. Mint 3080
2. Debian 2169
3. Manjaro 2086
4. Ubuntu 1751
5. Deepin 1316
6. Fedora 1160
7. openSUSE 1133
8. Antergos 1122
9. Solus 1048
10. CentOS 888
This month
■ Stable releases (15)
■ In development (7)
It’s been an up-and-down month for distros, with frontrunners Mint and Debian continuing to build a lead. Can any other distro offer a challenge next month?
Highlights
Mint
The Mint team provided a series of stability improvements this month, helping to polish off some of the smaller bugs that had played havoc with elements of its default KDE interface. More improvements are expected later this year.
Antergos
Despite only recently launching a new update, Antergos has slightly dropped in this month’s listings. Teething errors have plagued the latest update, so further work needs to be done.
CentOS
Initial feedback for CentOS’s new update has been good, helping the distribution slowly gain a following. It remains a great choice for Red Hat Enterprise Linux users.
Latest distros available: filesilo.co.uk
INTERVIEW MASTODON
The rise of Mastodon
Eugen Rochko, the creator of Mastodon, tells us what’s on the roadmap for the
microblogging social platform that’s trampling through Twitter’s backyard in a bid
to make a decentralised network the next big thing in social media
Eugen Rochko
is a 24-year-old computer
science graduate, who, after
leaving Friedrich Schiller
University Jena in Germany,
decided to begin work on
an open and federated
microblogging platform.
Called Mastodon, it is a
reimplementation of GNU
Social and uses the OStatus
suite of protocols.
https://mastodon.social
Centralised social networks were
famously described as “spying for free”
by Columbia University law professor
Eben Moglen, who, among many things, helped
defend Phil Zimmerman, the creator of Pretty
Good Privacy, against the US government in the
mid-1990s. The decentralised or federated model is
popular among free software and privacy advocates
for good reason: it’s generally non-commercial, open
source and returns privacy controls to the user.
Mastodon, which is built on GNU Social, is the latest
in a long line of decentralised platform launches –
from Ident.ica back in 2008 to Diaspora and others
– hoping to carve a big enough slice of the social pie
to kick-start a social media revolution.
Mastodon scales by allowing anyone to set up a
server, or ‘instance’, that can link to other instances
across the federated Mastodon network. How each
instance is run is defined by the admin, and users
can register on any instance and start posting (with
a much healthier 500-character limit) locally or
publicly across the federated network. It’s not what
many users are used to and much has already been
made of celebrities like William Shatner complaining
that he can’t find his friends, as you can only see
people in your federated timeline that have been
followed by another local user.
But Mastodon’s clean interface and cheeky
mascot have hit at the right time and the platform
is growing rapidly, so we collared its creator, Eugen
‘Gargron’ Rochko over Discord.
How did you get to the point where you decided to
develop a microblogging service?
I think every big thing is built from really small
things, and I never intended for this to get this big.
It was essentially me playing around with some
concepts that I was familiar with and seeing what I
could do. Eventually, with every step, it has become
more ambitious. I have a history going back to
The Japanese effect
One thing you can’t fail to notice when scrolling the
federated timeline on Mastodon is the number of
Japanese speakers (and their love of manga). The
growth and spread of Mastodon is a fascinating
investigation into how new ideas spread on the
internet, but aside from Rochko’s connections with
many artists, it’s unclear how Pixiv, a company
in Shibuya-ku, Tokyo decided to adopt Mastodon
as its social media platform. Pixiv currently runs
the largest Mastodon instance in the world for its
Pawoo.net community of artists. Pawoo.net is the
Japanese equivalent of DeviantArt and users tend
to use Mastodon to share their illustrations and
novels. The instance currently has 146,000 users
and released its own Android app at the end of
April. Pixiv is currently working on an iOS app.
decentralised microblogging stuff in 2010-2012 […].
I had a friend who was really into that stuff and he
introduced me to the concepts. I was just starting
my development career, before I went to university,
and I was just playing around back then.
So years later, I wondered how decentralised
microblogging was doing now, because I hadn’t
heard much about it and I checked and saw that it
was still around, but it hadn’t kept up with the times.
It wasn’t in a presentable state and I thought I need
something different. I’m a long-time TweetDeck user
so I thought I need a TweetDeck-like interface to use
this, so that was my initial plan. My plan was just
to make an interface for existing software, but then
after I ventured into the codebase of the existing
software, I decided that I needed to start from
scratch. That’s how Mastodon started.
So when you say it wasn’t up to date, are we
talking about GNU Social?
Yes, GNU Social is very ambitious and a very good
piece of software, but it’s from 2010-2012. [Note:
GNU Social, or Laconica as it was called initially, was
first deployed back in July 2008 for ident.ica.] It has
been maintained somewhat, but it was created some
time ago and it suffers from some systematic code
organisational issues, because it is a PHP project
and it also unites in itself two previous projects that
were merged. So there is a lot of stuff there that is
confusing to developers who are new to it.
So starting from scratch was easier for me and it
made sure that everything that is in there belongs
there and is not a remnant or legacy code. Yes, so
the initial iteration of Mastodon was just an API. My
design principles are basically: try to make one thing
and make it well. Initially, I only wanted to make an
Kazuhiro Taira, Asahi https://twitter.com/kaztaira
Above Mastodon caused a stir in the
Japanese mainstream media and even
gained coverage in the Asahi Shimbun
daily newspaper which has a circulation of
almost 7.9 million readers
Left Mastodon has a growing Japanese
following, which is mostly down to a
large instance managed by Pixiv for the
Pawoo.net community, the Japanese
equivalent of the popular artists and art
enthusiasts site, DeviantArt
API and delegate all the user interface stuff to apps,
because I’m a real fan of the third-party ecosystems.
I’m really sad that Twitter killed its own.
After the first prototype was done – it was an API
only and I had to use it from the command line using
curl – I thought I really need to put something in,
some default interface, so that’s when I started to
build the interface that you see now.
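(For readers who fancy poking at that same API today, publishing a status is still a single call. Below is a minimal sketch using Python’s requests library rather than curl; the instance URL and access token are placeholders you would swap for your own instance and a token obtained by registering an application there.)

import requests

INSTANCE = "https://mastodon.social"   # placeholder: any instance you have an account on
TOKEN = "YOUR_ACCESS_TOKEN"            # placeholder: an OAuth token for that account

# POST /api/v1/statuses is the endpoint Mastodon clients use to publish a toot
resp = requests.post(
    INSTANCE + "/api/v1/statuses",
    headers={"Authorization": "Bearer " + TOKEN},
    data={"status": "Posting from the command line, curl-style"},
)
print(resp.status_code, resp.json().get("url"))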
Are there any third-party projects that people are
doing that you particularly like?
There are Android and iOS apps for it that I’m a big
fan of. I don’t have an iOS device, so I can’t really
test it out, but Amaroq is supposed to be a good
one. Tusky for Android is one that I personally
contributed to – making it look prettier. There’s also
the Japanese company Pixiv [based in Shibuya-ku,
Tokyo], which has recently created [its] own app
for Mastodon called Pawoo. Unfortunately, it’s not
compatible with my device.
We must admit we struggled to get on board with
it quickly. Partly that’s because when you go to
the site (https://mastodon.social), you’re having to
make a choice over what instance to join.
Yes, we had the mastodon.social as the flagship
instance for registrations at the very start. The
problem with that is not just [extending] beyond
50,000 users. I only have a few really small servers
running the whole thing. The problem is more than
that; it’s ideological. We don’t really want everybody
to go on the one single server, because that kind of
defeats the purpose and while closing registrations
[on mastodon.social] has probably been confusing
for new people, it’s had an enormous effect on
federation itself. It’s caused about 1,000 instances
I think every
big thing is
built from
really small
things, and I
never intended
for this to get
this big
to spring up and people to be spread out between
them rather than be centred on mastodon.social –
and that’s a good thing, so I don’t feel too bad about
closing down registrations.
Did you expect Mastodon to blow up to that extent? Quite so rapidly?
No, I did not. Truth be told, I’ve been working hard on making it happen. In the previous months, starting from about October, I’d been writing articles all the time about why a federated approach was better than centralised. Writing why Mastodon was better than Twitter, telling people about it, saying one day they are going to push a feature that you don’t like and then you’ll need an alternative. Now here it is.
When did you start working on the project?
It started around April of last year, but it was that API-only prototype that I talked about. I was still studying at university. After I was done with the API, I had an exam session so I stopped working and after I graduated, I decided to go back to see what else I could do. I have lots of artist friends that do art for crowdfunded payback and I decided to see if this model would work for me as well [...] That’s how the Patreon thing started.
That seems to have built up to a good amount of cash. We appreciate you’ve not seen it yet, but it’s enough for you to work on the project full-time.
A few days more! Yeah, you also have to consider that it’s not just a wage. It has to cover not only my living expenses but the hosting expenses as well. I need to have more staff, more personnel working on the project, full- or part-time. For example, recently we’ve had a project manager to handle the coordination between developers, contributors and the community. I obviously can’t handle everything. My time gets thinned out by different things.
Yes, I was looking at the level of commits on the project in the last month and it’s shot up, hasn’t it? So the workload has grown massively?
Yes, people donate their work in terms of pull requests, but the funny thing is that by donating their work, they are creating more work from having to review that work. [Laughs.]
What features are you working on currently?
We’re working on some privacy-related and protocol-related things. There is a push for a new protocol to replace the one we’re using, called OStatus. It is from 2010 and it’s missing some things. Mastodon has to innovate on top of it with some other post-status platforms in the process just to be able to have more features. The newer protocol is the one that everyone apparently uses and it’s called ActivityPub (http://w3c.github.io/activitypub).
Above Mastodon’s error page has a wry dig at Twitter
Eugen Rochko
https://www.youtube.com/user/dopatwo
Have a #woollyweek
As mentioned in the interview, jumping onto Mastodon isn’t as seamless as it could be, as you need to decide what instance to join. Check out the full list of instances here: http://bit.ly/MastodonList. At the time of writing, Rochko’s flagship instance, https://mastodon.social, was open again for new registrations. Once registered, it’s worthwhile spending time on a 500-character introductory post and tagging it with #introduction.
Currently, the easiest way to kickstart your account is to use Mastodon Bridge (http://bit.ly/MastodonBridge), where you can log in via Twitter and Mastodon to see which people you follow are also on Mastodon. But we’ve also found the local and federated timelines easy enough to find people to follow.
There is a way to cross-post to Twitter and Mastodon, using a Python 3 script called MastodonToTwitter (http://bit.ly/MastodonToTwitter). This does involve joining dev.twitter.com, as you need a consumer key and access token, and creating an app. However, many people are trying a #woollyweek where they go cold turkey and drop Twitter entirely.
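The gist of a cross-poster like that is small. What follows is a rough sketch of the idea rather than the MastodonToTwitter script itself, assuming the Mastodon.py and tweepy libraries are installed and treating every credential as a placeholder for keys and tokens you generate yourself.

from mastodon import Mastodon   # pip install Mastodon.py
import tweepy                   # pip install tweepy

# Placeholders: generate your own tokens on your instance and at dev.twitter.com
masto = Mastodon(access_token="MASTODON_ACCESS_TOKEN",
                 api_base_url="https://mastodon.social")
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
twitter = tweepy.API(auth)

status = "Giving the fediverse a go this #woollyweek"
masto.toot(status)                   # Mastodon allows up to 500 characters
twitter.update_status(status[:140])  # Twitter is rather less generous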
Above Mastodon is non-commercial and funded
by donations, but artists have donated their fan
work and Mastodon plans to sell merchandise,
like these stickers. If you’d like to donate, go to
www.patreon.com/user?u=619786
Above Rochko is a fan of TweetDeck and used its column-based interface as inspiration for Mastodon’s UI
There’s a big push for Mastodon to adopt it and we’re
working on that right now.
Other privacy-related stuff involves making sure
people can mute conversations they don’t like. We’re
also working on organisational issues, in terms of
how releases are made; how they are tested; how the
contributors are organised and how the roadmap is
communicated – exactly what we’re talking about
right now. The plan is to basically have a roadmap
about every three months in terms of what we want
to do in the next three months and when that time
ends, come up with a new map.
Have you got a sense of what feature request is the
major one right now?
On the one hand, as I’ve mentioned, I’m a big fan of
the Unix philosophy of doing one thing and doing
it well, so I can’t think of any features that aren’t
already a part of a feature we have. I don’t think
there’s going to be any branching out into new areas.
There’s going to be improvements of the user
experience, of the features we already have and
expansions on those topics, but nothing entirely new.
People want to have the ability [...] to block
entire domains through blacklisting […]. We’re
actually working on a feature that allows us to filter
languages and add preferences for what languages
you want to see on your timelines and what ones you
want to filter out.
Just as an example, if you never wanted to see the
Japanese language (see The Japanese Effect, p12)
because you can’t understand it, a domain block
would allow you to do that very easily.
That’s interesting you say that, as we’ve noticed
a lot of people asking for that – is there a big
contingent of Japanese fans of the service?
Yes! It has absolutely exploded in Japan. The growth
started with information security Twitter, then
moved onto France and when it became popular in
France, it started getting media attention, but in
Japan it’s been even more crazy. It’s been picked
up by Pixiv, sort of like DeviantArt, and they have
their own instance and I think they have engineers
working on Mastodon full-time. That’s incredible
to hear about. They also have over 100,000 users
on their instance and there [are] at least two more
Japanese instances with more than 50,000 users
each, so it’s absolutely huge in Japan in comparison
with other instances.
Is there any sense that the growth is going to
continue? Early on you were interviewed about
gaining 40,000 users and today we were looking at
the figures and you’ve gone over half a million.
Yeah, it’s half a million. That’s amazing! I never
thought that was possible. You’ll be surprised, but
I don’t have as much data as you think I do. First
Above Rather than build official mobile apps for Mastodon, Rochko has encouraged the creation of
third-party apps. The current favourites are Amaroq for iOS and Tusky for Android
of all, I don’t do tracking, even on my instance. No
analytics, no anything. I’m limited to hardware-related tracking, like how many HTTP requests I
get; how many connections are open at any given
moment. I don’t look at that data very often – I just
focus on development.
Are you looking into an instance selection wizard?
Yes, we have that on our to-do list. It’s pretty
important. Making a website for the project instead
of having the [mastodon.social] instance as the
front page that many come by. We want to have one
place that presents the project as it is without being
affiliated with any particular instance. This will let
you choose from a list or from a selection wizard
where you want to go. That’s in the works right now.
We’re quite fascinated to see how the different
communities within the instances develop.
It’s an interesting dynamic. Initially, I expected
communities to form on instances like any other
centralised network, like Twitter which is divided
into different sub-groups based on interests, but
they are less connected between each other than
between their individual members. As it turns out
[on Mastodon], people have created instances for
particular communities and those communities are
interconnected. That makes more sense than my
initial assumption and it’s been interesting to watch.
By the time it’s taken us to transcribe this interview,
Mastodon has jumped up by another 150,000
users and now has 628,000 people spread across
1,600 instances. To put this into perspective,
Diaspora, the last great ‘decentralised hope’, has
about 667,000 users after almost seven years. For
a tutorial on how to set up your own Mastodon
instance, see p36.
OPINION
The kernel column
Jon Masters summarises the latest happenings in the kernel community
as Linux 4.11 is released, and the merge window for 4.12 opens up
Jon Masters
is a Linux-kernel hacker who has
been working on Linux for more
than 22 years, since he first
attended university at the age
of 13. Jon lives in Cambridge,
Massachusetts, and works for
a large enterprise Linux vendor,
where he is driving the creation
of standards for energy efficient
ARM-powered servers.
Linus Torvalds announced Linux 4.11,
noting that toward the end of the
development cycle “things were pretty
calm”. There had been a one-week delay (and an
extra -rc8 beyond the common rc7) due to a few
outstanding bugs, in particular related to NVMe
(Non-Volatile Memory Express) storage devices,
which are now common on contemporary Linux
laptops. The 4.11 kernel includes many new features,
as well as pre-enablement for work that will be
completed in the ongoing 4.12 cycle. An example of
the latter is support for Intel’s ‘la57’ (five-level
paging) feature on some future CPUs that will allow
for up to 56 bits of Virtual Address space (128
pebibytes – a pebibyte is 2^50 bytes, a lot more RAM
than most of us will see for many years to come).
With the growth in non-volatile storage memory
technologies that look like RAM (e.g. NV-DIMMs),
a greater address space becomes very convenient.
Linux kernel 4.11 includes a number of cool
features ‘under the hood’. One of those is a fix
for the (very) long-standing ‘write hole’ in mdraid.
This is where a power failure at exactly the wrong
moment during a write operation could result in
the striped RAID data and parity information for an
array being out of sync and Linux not knowing which
to rely upon if an array reconstruction were then
performed. Hardware RAID arrays ‘solve’ this by
having expensive battery-backed SRAMs and similar
mechanisms to journal updates to the array and
prevent a poorly timed power loss from destroying
any state. Linux 4.11 can also address this by using a
separate flash device to store a journal.
Another new feature in 4.11 is the ‘statx’ system
call. This is an extended form of the venerable ‘stat’
which dates back to the earliest days of Unix.
Back then, time on Unix was measured as it is
today, in terms of seconds since the ‘epoch’ (Unix
beginning of time): 1 January 1970. There were
many fewer seconds to worry about in those days,
but in the 21st century, we are concerned with the
‘impending’ arrival of the year 2038, during which
the traditional 32-bit Unix time overflows and wraps
to zero. While 2038 may seem like a long way away,
it is somewhat closer by the day, and practically
tomorrow for those calculating long-term mortgage
rates or other longer-term calendar events. So it is
perhaps unsurprising that a multi-year effort has
been underway to finally ‘solve’ the 2038 problem,
and adding a new 64-bit clean version of the older
‘stat’ is a part of the solution.
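The arithmetic behind that deadline is easy to check for yourself; the snippet below is plain Python, nothing kernel-specific, and simply prints the last second a signed 32-bit time_t can represent.

from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold: 2^31 - 1 seconds after the epoch
MAX_32BIT_TIME_T = 2**31 - 1

print(datetime.fromtimestamp(MAX_32BIT_TIME_T, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later, the counter wraps negative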
Incidentally, the Unix time epoch being 1 January
1970 is why your Linux machine might report that
date if its RTC (real-time clock) battery is removed
or the date stored is somehow corrupted during
a power event. Many other systems in the world
have epochs. The PC RTC stores time in terms of
seconds since 1952, 1980 or 2000, depending upon
configuration. The Iridium satellite constellation had
an epoch wrap a few years ago which required users
to manually update their handsets. And this author
admits he used to remember a former girlfriend’s
birthday since she shared it with the GPS epoch
(the trick is to never, ever, tell them that’s how you
remember their birthday).
Summit happening
Various CFPs (Calls for Papers) have gone out
concerning the Open Source Summit Europe, which
will be held between 23-25 October in Prague, and
will be colocated with this year’s Kernel Summit. The
latter is an invite-only event, but it has some overlap
with the broader Open Source Summit (which used
to be known as ‘LinuxCon’). As a consequence
of the sheer number of developers expected to
attend, various side conferences and events are
being organised. These include the KVM Forum,
and the RT (Real Time) Summit, as well as many
more. If you attend one conference this year, Open
Source Summit is not a bad choice. And if you miss
the European event, Open Source Summit North
America is in Los Angeles the month before, and is
colocated with the Linux Plumbers Conference 2017.
It was fun to see Andre Przywara (ARM) posting
about cross-compilers. There used to be (well)
maintained cross-compilers (GCC and friends built
for x86 but generating binaries for other targets,
such as ARM, MIPS, PowerPC, etc) available from
https://kernel.org, which were very popular among
embedded developers (including Raspberry Pi
users). These toolchains are in fact still used, but
they are now very long in the tooth. Andre was
planning to revive them properly.
Linux 4.12 merge window
With the release of 4.11 final came the opening of
the ‘merge window’ for Linux 4.12. This is the frantic
period of about two weeks during which very large
numbers of (potentially destabilising) patches are
posted for Linus to pull into his kernel tree. Most
of these (should) have been tested in Stephen
Rothwell’s daily ‘linux-next’ kernel during the
previous (4.11) kernel cycle and so are baked enough
that they can be hardened with fixes during the two
months that follow the closure of the merge window.
In fact, this time around, Stephen was kind enough
in one case to give Linus a heads-up about a ‘large
new drm driver’ (direct rendering manager, otherwise
known as hardware-assisted rendering). If you’re
interested in seeing what will be in Linux 4.13, do
take a look at the ‘linux-next’ kernel docs.
The 4.12 kernel so far will at least include the
removal of the deprecated ‘AVR32’ architecture;
new randomisation of the location of UEFI Runtime
Services (which might expose random firmware
bugs on some systems); and a new ‘corrected
Errors collector’ RAS feature (reliability, availability,
serviceability) which keeps track of memory errors
that were silently corrected by hardware (on non-consumer server and workstation-grade machines,
with ECC RAM) and offlines bad regions of memory
if they have persistent errors. There’s also new
support for USB 3.0 debug cables which allow
developers to reliably get at early console messages,
even on modern machines without traditional serial
ports; Bluetooth support for Intel’s ‘Edison’ IoT
development platform; new optimised gettimeofday
support for Microsoft HyperV (as used in Microsoft
Azure) via a new vDSO TSC page; more enablement
of KVM hypervisor support for Raspberry Pi 3; and
much more besides. It’s already shaping up to be a
larger kernel cycle.
Ongoing development
The present focus in Linux kernel development does
seem to be on VM scalability, interesting hardware
accelerator work (including use of FPGAs as we
described in last month’s issue – let us know if you’d
like a detailed FPGA piece), and lots of refinements.
In that vein, it was not surprising to see Laurent
Dufour’s ‘speculative page faults’ work. This builds
upon work done by a few others and seeks to allow
the kernel to handle a ‘page fault’ – the faulting
back in from swap or files on disk of data that is
temporarily not present in physical memory due to
resource constraints or for performance reasons
– in a lockless way by first attempting to handle
the fault directly and falling back to the old model
if locking is required due to other simultaneous
users of the backing virtual memory area (VMA). It
was also unsurprising to see VM work related to the
Intel five-level paging feature mentioned earlier, and
so on.
Ongoing filesystem work currently includes
an effort by Serge E Hallyn to implement ‘v3
namespaced file capabilities’ that allow for true
filesystem ‘capabilities’ in a safe fashion from
within containers. Traditional file capabilities are
used to allow binaries (such as ‘ping’) to be tagged
with special abilities (to open raw sockets, in the
case of ping) without making them run with full root
privileges. But allowing regular file capabilities in
containers is inherently unsafe: a user could use
them to create a file with more capabilities than
they have outside the container and share it with
the external environment. Serge’s work adds a
layer of translation that will aim to safely prevent
capabilities from leaking outside of the container.
Finally this month, Yann E Morin recently
relinquished ownership of kconfig, the tooling used
to handle kernel configuration (it’s the thing that
generates those .config files using Kconfig directives
in kernel source directories).
If you’re looking for a challenge, a fabulous
opportunity awaits!
Feature
Total Distro Toolkit
TOTAL
DISTRO
TOOLKIT
Paul O’Brien reveals the joy of live Linux distributions and how
they can provide the ability to boot a Linux environment directly
from a disc or USB stick for many different uses
A great feature of Linux is the
ability to run the operating
system without making
modifications to the main drive
on your machine, courtesy of live-boot
distros. For users who are exploring Linux for
the first time, this makes the platform
incredibly accessible. Being able to write an
Ubuntu image, for example, to a USB stick
and try the system out on your own hardware
is very powerful as it enables a user to both
get a feel for the OS and ensure hardware
compatibility in advance – you simply can’t
do the same with Windows or Mac OS X.
Although live distros originated for this
purpose, a large number of images are now
available that use this functionality for
different purposes such as system recovery
and maintenance, security testing, privacy
protection and much more.
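As a rough sketch of that first step, the snippet below copies a downloaded image straight onto a USB stick, which is essentially what tools such as dd or Etcher do. The image name and /dev/sdX device node are placeholders, it needs to be run as root, and it will overwrite whatever device you point it at, so check the name twice.

import os
import shutil

ISO = "ubuntu.iso"     # placeholder: the live image you downloaded
DEVICE = "/dev/sdX"    # placeholder: the whole USB device, not a partition like /dev/sdX1

with open(ISO, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4MB chunks
    dst.flush()
    os.fsync(dst.fileno())  # make sure every block reaches the stick before unplugging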
Most Linux distributions (distros) have
come on in leaps and bounds in recent
years and are generally very stable OSes
for general use. In reality, most of us will
agree that using a Linux distro as your main
platform, while not going overboard on the
modifications, will mean you're no more
likely to get into trouble than you are on
Windows. Even so, it’s still possible to get
into situations where you need to do some
repairs and the ability to boot a recovery
distro is invaluable, and it's very useful if
you need to fix someone else’s Windows
machine. Accidentally deleted files are
another great example of the value of the
concept. The first rule of file recovery is to
not use the disk where the files previously
resided, but if the disk in question also
contains your OS, this can be a tricky
problem and is something that live-boot
distros help overcome.
The privacy aspect of live-boot distros
is becoming more and more appreciated.
If you are using a machine that isn’t your
own, it’s very difficult to ensure that none
of your private data is saved locally, even if
you only use incognito windows and are very
careful – you can never be sure whether
a covert keylogger or similar is installed.
Carrying around a USB stick with a ready-to-go bootable distro means you can run the
system independent of any host OS and be
aware of exactly what you are running.
Although live-boot distros don’t save
anything to the main drive on the host
machine, this doesn’t mean that you have
to throw away any data that you’ve created
each time you boot. If booting from a USB
stick (the preferred method as more new
devices ship without DVD drives), a portion
of the available space can be partitioned for
persistent storage. As with a local drive, this
partition can be strongly encrypted such
that should your USB stick fall into the wrong
hands, the data will be unrecoverable.
One point to note when running a distro in
this way is that performance is limited by the
speed of your USB port or USB stick. Try to
use USB 3.0 compatible ports where possible
(the age of the machine permitting) and if
buying a drive specifically for this purpose,
look for the best price vs speed option.
Feature
TOP TIP
Debian-based
distro Netrunner
has its own
backports
channels
based on
Debian’s Tested
repositories,
allowing you
to either stay
on the install
channels or
activate the CI
repos to receive
continuously
tested updates.
Total Distro Toolkit: Portables
We’ll kick off this feature with a selection of some of
the best lightweight, general-purpose live-booters.
First, we have Porteus (www.porteus.org), a Slackware-based Linux OS optimised to run from CD, USB flash
drive, hard drive or other bootable storage media.
It’s designed to be small and it’s incredibly fast. This
is achieved by storing the distro in XZM files, which
decompress very quickly. As such it’s under 300MB,
allowing you to start up and get online as quickly as
possible (typically in under 25 seconds). Porteus comes
in both 32- and 64-bit versions and aims to keep on the
bleeding edge. It also supports several languages.
Porteus
Porteus started life as a bleeding-edge version of Slax,
a small and fast Linux OS based around a modular
approach. Slax, and by association Porteus, provides
a wide collection of pre-installed software for daily
use. Instead of using a package manager, adding new
features to Porteus is as simple as clicking on a module,
which then injects the required files straight into the
filesystem. Double-click the module again to remove it.
Both operations take only a few seconds and help avoid
having huge numbers of unused files within the distro.
Porteus previously provided a custom ISO online build
tool, but this has now been replaced with a standard
distribution, offering Cinnamon, KDE4, LXQt, MATE or
Xfce desktop environments with saved changes (or
use ‘always fresh’ to discard changes on shutdown),
simple installers for Linux and Windows if desired, a
‘Porteus Settings Centre’ for central management of
updating, installing, managing settings, viewing system
information, and much more.
Netrunner
Netrunner (www.netrunner.com) is a Debian-based
Linux OS that targets netbooks, desktops and ARM-based microcomputers. It uses the Plasma desktop
environment and other KDE software for an ultramodern look and feel and is available in two flavours: the
‘Desktop’ build for netbooks and desktop computers,
which ships with a complete set of software installed for
daily use; and a ‘Core’ build, which enables you to build
up your own system or run it on low-spec hardware.
Netrunner is an open source project that benefits
from a commercial backer, sponsoring development of
the KDE Plasma core. Plasma features contributed and
released early on in Netrunner include: the simplemenu
launcher; task manager with expanding icons; desktop
workspace (icons on a clean desktop, no overlays); hotspot ‘Show Desktop’ in lower right corner; auto-started
Kwallet; simplified system settings; Firefox-ESR and
Thunderbird with Plasma Integration and the unified look
for KDE and non-KDE-apps via GTK-Configuration.
Netrunner is a good-looking distro, even on less
powerful machines, shipping with several window
and desktop themes to choose from, so you can start
customising right away. The distro uses the Aurorae 3
engine of KWin, which allows blur and transparency even
on low-end hardware.
Above Porteus uses the KDE desktop, which gives it
a contemporary graphical feel. You'll also notice that
applications are easy to find and use
BunsenLabs
BunsenLabs (http://bunsenlabs.org) is another Debian-based distro offering a lightweight desktop environment,
this time based on Openbox. Previously known as
CrunchBang Linux, the current release of BunsenLabs is
Hydrogen1, based on Debian Jessie.
BunsenLabs stays completely true to a Debian core,
such that the distro completely consists of configuration
What are Porteus
cheatcodes?
After you’ve created your Porteus disc or USB
stick, you’ll find a /boot/docs/cheatcodes.txt file
detailing the Porteus ‘cheatcodes’. These are used
to affect the booting process of Porteus. You can
use them to disable desired kinds of hardware
detection, start Porteus from a specific location,
load additional modules, and much more.
To use the codes, reboot your computer and wait
several seconds until the graphical Porteus logo
appears with a boot menu. Choose your desired
menu entry and hit Tab, which will allow you to edit
the command line. Add your desired boot argument
to affect booting the way you like. These cheatcodes
can also be added to the APPEND line of your
/boot/syslinux/porteus.cfg entries (or other
bootloader config files) to apply them automatically
on every boot.
For example, the changes=/path/ cheatcode tells
Porteus to use a device (or a file or directory) other
than your memory for storing changes. You could
format your disk partition /dev/sdb2 with some
Linux filesystem (eg XFS), then use changes=/dev/sdb2 to store all changes to that partition.
This way you won’t lose your changes after reboot.
The changes-ro cheatcode tells Porteus to keep
your change area read only – ideal for use after
you’ve got your system set up as you like it.
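As a sketch of how that looks in practice – exact kernel and initrd file names vary between Porteus releases, and /dev/sdb2 is only an example – a persistent entry in /boot/syslinux/porteus.cfg might read:

LABEL persistent
MENU LABEL Porteus (save changes to /dev/sdb2)
KERNEL vmlinuz
APPEND initrd=initrd.xz changes=/dev/sdb2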
Above Cheatcodes provide great flexibility when
booting Porteus – use base_only to load only the
base set of modules
How to diagnose boot
issues with Porteus
Porteus’s cheatcodes can be used to persist
storage and change a number of settings in your
installation. This functionality becomes particularly
useful if you end up with a non-booting system.
As well as removing the option to use your saved
storage, the base_only cheatcode prevents the
system from loading any modules at startup other
than the ‘base’ modules included with the default
ISO. This is useful in debugging to see if problems
you are having are associated with some module
you’ve added to the system.
• cliexec=my_script allows you to run a script
before the graphical interface is loaded.
• debug will start the shell several times during the
boot to perform debugging actions.
• from=/dev/device loads Porteus from the
specified device, folder or ISO.
• fsck runs a filesystem check.
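For example, to rule out your own add-on modules while watching the boot process closely, press Tab on a menu entry and append a couple of the codes above to the end of the existing boot line – the kernel and initrd names below are placeholders:

# Edited boot line at the Porteus boot prompt (illustrative names)
vmlinuz initrd=initrd.xz base_only debug fsck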
and resource packages installed on top of Debian
and there are no changes to the way the underlying
Debian base system is administered. The system is
preconfigured with the Openbox window manager,
together with the tint2 panel and conky system monitor.
This is complemented by an assortment of harmonising GTK2/3 themes, wallpapers and conky configurations.
Above BunsenLabs' dark theme together with Conky's lightweight system monitor makes the community-organised CrunchBang successor feel unique
Conky is a lightweight system monitor for X that shows
any kind of information on your desktop. It can display
more than 300 built-in objects, such as a huge variety of
OS stats and mail via its built-in POP3/IMAP support, as
either text or graph widgets using Lua-based extensibility
features. A large community discussing Conky
customisation can be found on the BunsenLabs site.
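To give a flavour of the format, here's a minimal, hypothetical ~/.config/conky/conky.conf in the Lua style used by recent Conky releases – BunsenLabs ships far richer configurations of its own:

-- A bare-bones sketch only; every value here is just an example
conky.config = {
    alignment = 'top_right',
    update_interval = 2.0,
    own_window = true,
    own_window_type = 'desktop',
    use_xft = true,
}

conky.text = [[
${time %H:%M}  up ${uptime_short}
CPU ${cpu cpu0}%  ${cpubar cpu0}
RAM $mem / $memmax
]]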
While BunsenLabs uses a lightweight environment, it’s
not light on features. As well as custom configuration and
application utilities to maintain the system, extra desktop,
multimedia and hardware-related packages come pre-installed to offer a fuller ‘out-of-the-box’ experience.
Above AntiX’s iceWM
doesn’t quite have
the polish of modern
window managers,
but it does offer
great performance
AntiX
AntiX (http://antix.mepis.org) is a fast, lightweight and
easy-to-install systemd-free Linux distro based on
Debian Stable. The stated goal of antiX is to provide a
light, but fully functional and flexible free OS for both new
and experienced users of Linux. AntiX should run on most
hardware, with a minimum of 256MB RAM. The installer
needs at least 2.7GB of available storage, but antiX can
also be used directly without installation as a live distro.
The current release of antiX is ‘Berta Cáceres’ and
comes as a 695MB full distro, a 510MB base distro and
a 190MB ‘core-libre’ distro, all for both 32-bit and 64-bit
computers. If you wish to have total control over the
install, use the antiX-core and build up from there. Note
however that core-libre doesn’t ship with any window
managers at all, nor wireless support of any sort, so you
really are starting from scratch!
So what do you get with a full antiX install? You get a
4.4.10 kernel, LibreOffice, Firefox, Claws-mail, spacefm,
Wicd for network management and iceWM as the default
window manager. IceWM is unique in that it is particularly
keyboard friendly and despite being exceptionally light, it
still supports multiple workspaces and simple themes.
If iceWM isn’t for you, a special edition of antiX is
available called ‘the MX edition’ – this uses Xfce for its
desktop environment, which is just that little bit more
fully featured.
TOP TIP
Looking for a great themed wallpaper for your BunsenLabs install? Head on over to the dedicated BunsenLabs DeviantArt page here: http://bunsenlabs.deviantart.com
Feature
Total Distro Toolkit: Security
TOP TIP
When trying out security and privacy distributions, consider whether the tools and techniques are also portable for your everyday use.
Above Qubes OS provides an incredible level of security
via its use of compartmentalisation, yet it is still
accessible and genuinely usable every day
Qubes OS
Looking for a security- and privacy-aligned version
of Linux? There are a number of excellent distros to
choose from and we’ve included how to get a JonDo/
Tor-Secure-Live-DVD ISO up and running (see right), but
let’s start with the modest Qubes OS. Despite a tag-line
of ‘A reasonably secure operating system’, the Qubes OS
homepage (https://www.qubes-os.org) proudly includes
a tweet from none other than Edward Snowden, stating:
“If you’re serious about security, @QubesOS is the best
OS available today. It’s what I use, and free.” High praise
indeed and it doesn’t stop there – quotes are included
from other influential privacy advocates as well as a
number of big-name publications such as The Economist,
which states: “For those willing to put in the effort, Qubes
is more secure than almost any other operating system
What is Whonix?
Based on Debian, Whonix is an OS designed for
advanced security and privacy that runs directly
within Qubes. It addresses potential attacks while
still maintaining system usability with pre-installed,
preconfigured apps. Whonix makes online anonymity
possible via fail-safe, automatic and desktop-wide
use of the Tor network. The OS provides a two-VM split
security architecture with isolated ‘Whonix-Gateway’
(ProxyVM) for total Tor traffic routing and ‘Whonix-Workstation’ (AppVM) for user desktop apps, which
serves as a tailored OS environment for Tor-based
privacy/anonymity. Within Qubes, you can even create
multiple ProxyVM and AppVM instances to keep your
digital work and personal lives separate.
available today.” So what is it? Well, Qubes takes a
different approach to many other Linux distros, by using
a technique called ‘security by compartmentalisation’,
which allows you to arrange the various parts of
your digital life into securely isolated compartments
called ‘qubes’.
This approach allows you to keep the different things
you do on your computer isolated from each other so that
one qube getting compromised won’t affect your whole
system. You might have one qube for visiting untrusted
websites, such as your social media destinations, and
a completely different qube for a secure task such as
online banking. If your untrusted browsing qube gets
compromised by an infected website, your online banking
activities won’t be at risk. If you’re concerned about
malicious email attachments, Qubes can make it so
that every attachment gets opened in its own single-use disposable qube. In this way, Qubes allows you to
do everything on your same physical computer without
having to worry about a single compromised activity
affecting all of your digital life.
You might expect that compartmentalising in this way
would severely compromise the user experience, but
happily that is not the case with Qubes OS. Programs are
isolated in their own separate qube windows, but they
are all displayed in a single, unified desktop environment
with unforgeable coloured window borders so that
you can easily identify windows with different security
levels. Common attack vectors like network cards and
USB controllers are isolated in their own hardware
qubes while functionality is preserved through secure
networking, firewalls and USB device management.
Integrated file and clipboard copy-and-paste
operations make it easy to work across various qubes
without compromising security. A built-in template
system separates software installation from software
use, allowing qubes to share a root filesystem without
sacrificing security.
Unlike some of the distros we’ve mentioned in this
feature, Qubes is not designed for old or low-end
hardware. To run Qubes effectively, you’ll need at least
4GB of RAM, preferably with support for VT-x and VT-d.
Ideally you should go ‘all in’ with Qubes and not multiboot with another OS, for security reasons.
One thing we haven’t covered is which Linux distro is
Qubes based on. That’s because within Qubes’ templating
system, you can run a selection, including Fedora,
Debian, Arch Linux, Ubuntu and Whonix (see left) as well
as a number of tailored pen-testing distros.
One of the best aspects of Qubes is its extensive
documentation. At the official Qubes website you’ll find
extensive FAQs and guides for both users and developers,
as well as video tours, screenshots and a great getting
started guide.
The Qubes download page lists both full install
(recommended) and live USB versions of the OS. The live
version is officially out of support (and as such sitting
at version 3.1 rather than the latest version 3.2), but it
remains useful as a way to sample the OS, even if a full
install is recommended for maximum security.
Download, configure and use
JonDo/Tor-Secure-Live-DVD
01
Download the ISO
The JonDo/Tor-Secure-Live-DVD ISO can be downloaded from http://bit.ly/JonDo. The release itself is about 1.2GB in size, and includes a SHA256 hash on the site. You should verify this before installing to ensure that the file hasn't been tampered with (this is good practice, but particularly important with security-related distros!). Doing so is as simple as using the sha256sum command, followed by the filename. If the checksum doesn't match, don't use the image! An OpenPGP signature is also available on the page for additional security.

02
Select firewall mode
Upon booting the ISO, you'll first be asked to select a firewall mode. Three options are available: the 'Simple Firewall' blocks all inbound connections but allows all outbound ones. All installed applications will be able to connect to the internet. The second option is for a 'Restricted Firewall – Tor Only'; in this mode, a Tor daemon is automatically started if a network connection is being attempted. Local apps will only be able to connect if using Tor. The third option uses the same approach, but restricts to JonDo connections.

03
Set a password
As this is your first run, you'll be prompted to set an administrator password at this time. There are a few things to note here – first is that the password will not persist between sessions. Second is that if you do not choose to set a password, then you won't be able to use sudo to get administrative access until you reboot. Although this will limit your ability to change system settings, it may also make your system less prone to attack via administrative privileges.

04
Launch applications
The bottom bar is set to auto-hide, so if you're looking for where to launch additional applications from other than those on the desktop, that's where you'll find them. In the Internet section shown here, you'll find clients for all major services, as well as the Wireshark tool should you wish to sniff your network connection to be absolutely sure what information is going out from your machine. The Settings section contains additional secure configuration options such as DNScrypt and encryption settings.

05
Use JonDo proxy
The main JonDo app on the desktop manages the JAP/JonDo proxy tool on your system. If you have a premium code for the JonDo service, you can enter it here (free premium test codes are also available from the anonymous-proxy-servers site). Using the JonDo proxy provides a significant speed boost over the TOR option. Also on the desktop you'll find a link to JonDoFox – a profile for the Firefox browser optimised for anonymous, secure web surfing. By default JonDoFox uses restrictive settings.

06
Use Tor instead
If you prefer to use Tor, the Vidalia Tor GUI is pre-installed and on the desktop. Vidalia allows you to start, stop or view the status of your Tor proxy; view, filter or search log messages; monitor bandwidth usage and configure your settings. Should you wish, you can also use Vidalia to contribute to the Tor network by setting up a Tor relay. A cool feature of Vidalia is its Tor network map which shows the geographic location of relays on the Tor network, as well as where the user's application traffic is going.
TOP TIP
If you liked what you've experienced on JonDoLinux, JonDonym clients are available for other operating systems too, including Windows and Mac OS X.
JonDo
The JonDo/Tor secure live DVD (http://bit.ly/JonDo)
offers a secure, preconfigured environment for
anonymous web surfing and other online activities based
on Debian with Xfce.
The live system contains proxy clients for
JonDonym, Tor and the Mixmaster remailer. JonDoFox
is preconfigured as the web browser for anonymous
surfing, although TorBrowser is also installed.
Thunderbird is used for email; Pidgin for anonymous
instant messaging and chats. There’s also the Parole
media player, MAT for cleaning documents, plus TorChat,
LibreOffice, GIMP and a number of other useful tools.
At the heart of the distro (albeit optional) is
‘JonDonym’. JonDonym servers are operated by
independent entities committed to protecting your data.
Because these operators are independent from each
other, no single organisation has complete information
about you. Your anonymity is fully protected. When you
surf the web, your requests travel across different relay
points before serving the webpage. Each of JonDonym’s
premium services (there is a cost associated) consists of
several servers in several different countries.
Does this sound broadly similar to Tor? It is. However,
JonDonym’s premium services are as fast as VPN
services and the fastest web proxies, and up to a
hundred times faster than comparable services like Tor.
Above Kali Linux has a large number of security tools
built in, but they are all neatly organised by category,
making a tester’s life easier
The JonDo live DVD is very resource light – minimum
specs call for a 486 processor, 1GB RAM, a 1,024×768
screen and the ability to boot from CD/DVD or USB.
After booting the live DVD, choose your preferred firewall
configuration and you’re ready to go!
Kali Linux
Kali Linux (www.kali.org) is a live-boot distro funded
and maintained by Offensive Security, a provider of
information security training and penetration-testing
services. The Debian-based distro is designed for digital
forensics and penetration testing and supersedes
Backtrack, a previous Knoppix-based distro developed
and maintained by the same team. Kali uses the GNOME Window Manager (with alternate Xfce, MATE, LXDE etc versions also offered) and is available in ISO form as 32- and 64-bit images for both x86 and ARM in both weekly and less-regular stable versions.
Above The Kali boot menu offers the ability to run in regular modes, forensic modes and with USB persistence
Kali Linux is pre-installed with over 300 pen-testing
programs and is a supported platform of the Metasploit
Project’s Metasploit Framework, a tool for developing
and executing security exploits. Like its predecessor,
Kali contains a ‘forensic mode’ which ensures the system
doesn’t touch the internal hard drive or swap space and
auto mounting is disabled. The developers recommend
that users test these features extensively before using
Kali for real-world forensics, however.
Kali Linux is developed using a secure environment
with only a small number of trusted people that are
allowed to commit packages, with each package being
signed by the developer. Kali also uses a custom-built
kernel that is patched for injection. This was primarily
added because the development team found they
needed to do a lot of wireless assessments.
If you are looking for a distro to use for pen testing, Kali
is a logical choice – it’s employed daily by thousands of
users for this purpose and is also backed up by extensive
documentation on the Kali site as well as training courses from the developers. The team are currently in the process of implementing ‘Recipes’, which will allow you to construct ISOs refined to serve a specific purpose.
We couldn't cover live-booting distros without highlighting a selection of the superb range of backup, diagnostic, partitioning, rescue and repair tools that use live-booting technology. One of the most regularly used live-boot distros is the GParted (http://gparted.org/livecd.php) live CD. GParted is particularly popular because it's not just valuable to Linux users, but also a must-have for anyone who has to work with Windows machines, where partition management is traditionally very difficult.

Creative booters
As well as live-boot distros to cover operations such as system rescue, privacy and security, other distros are available to serve more creative purposes. An example of this is AV Linux, a Debian-based distro that includes a large collection of audio and video production software. It also includes a custom kernel with IRQ threading enabled for low-latency audio performance and the JACK audio connection kit. The hardware-efficient Xfce4 desktop is in place to save your machine's processing power for the tasks at hand. AV Linux's goal is to bring together the raft of high-quality, open source products in the media space to showcase exactly what's available.

The GParted interface
1 Hard disk partitions
This area shows a proportional graphical representation of your disk. You can see partition sizes here and partition types in the area below.
2 Right-click menu
Right-clicking in either window opens a menu from where you can add, delete or resize/move partitions.
3 Choice of filesystems
Partitions can be formatted also – note that GParted supports a huge number of different filesystems, including Windows NTFS.
4 Made a mistake?
The Undo button lets you undo the last step or all steps – note that changes are not applied immediately in GParted.
5 Apply changes
When you are confident you are happy with your partition layout, you can hit the Apply button. After doing so, don't interrupt operations!
6 Change disk
Use the indicator panel on the right-hand side of the panel to work with a different disk. The list can be reloaded from the GParted menu.
GParted Live
Released on average bi-monthly, GParted Live is small
at around 275MB, so will fit on any USB stick. With a UI
that’s reminiscent of the Partition Magic application of
old, GParted is powerful yet incredibly easy to use. It
uses a model where you prepare all the operations on
your disk, but they are only carried out when you hit the
‘Apply’ button; so you get an ‘Undo’ option – invaluable
when working with such a system-critical thing as disc
TOP TIP
Many live-boot distros include a mechanism to enable you to build custom ISO images, including and excluding features as they are needed.

TOP TIP
If you have an Ubuntu install disk, this can also function as a rescue disc, as the GParted utility comes pre-installed.

Feature
Total Distro Toolkit: Utilities
partitioning. GParted supports hard disk and flash
memory devices, as well as hardware and software RAID.
GParted live is based on Debian live, which means of
course that as well as having the GParted application,
you have a proper Linux distro (with terminal access via
LXTerminal) allowing you to drop to the command line if
needed to get serious with the system!
On the GUI side, GParted Live includes the pcmanfm
file manager, Leafpad text editor, NetSurf for basic web
browsing, GSmartControl for hard disk/SSD diagnosis,
and the Calcoo scientific calculator (for working out
those drive sizes and boundaries!). If you drop to the
command line, you’ll find a host of utilities that allow
Above GParted runs with a graphical interface, but the
advanced options will support even the oldest machines.
Use SystemRescueCD…
01
Download and burn ISO
To get started with SystemRescueCD,
download the ISO file for your architecture.
Generally speaking, you’ll need the x86
edition that supports both 32- and 64-bit
processors. Once the image is downloaded,
check the checksum using the md5
command. Next, either burn the image to a
CD or use isohybrid to convert to a bootable
image and use dd to copy to the appropriate
device using dd if=/path/filename of=/
dev/sdx, where sdx is the USB stick.
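As a rough example of that sequence – the version number and the sdx device name are placeholders, so double-check the target with lsblk before writing anything:

md5sum systemrescuecd-x86-4.9.6.iso       # compare against the published checksum
isohybrid systemrescuecd-x86-4.9.6.iso    # make the ISO image bootable from USB
sudo dd if=systemrescuecd-x86-4.9.6.iso of=/dev/sdx bs=4M status=progress && sync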
you to achieve everything you can in the GUI; highlights
include GRUB for repairing broken boot settings, nano
and vi for text editing, mc for file management, and the
TestDisk data recovery tool.
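As one hedged example of what that can mean in practice, reinstalling a broken GRUB from the live session usually looks something like this – /dev/sda1 and /dev/sda stand in for your own root partition and boot disk:

sudo mount /dev/sda1 /mnt                # the installed system's root partition
sudo mount --bind /dev /mnt/dev          # expose devices inside the chroot
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/sda   # reinstall the bootloader to the disk
sudo chroot /mnt update-grub             # regenerate the boot menu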
It goes without saying that GParted should be used
with care – there is a lot of scope for completely breaking
your system and once you hit that ‘Apply’ button, there’s
no going back!
Clonezilla Live
Clonezilla Live (http://clonezilla.org/clonezilla-live.php) is
a small bootable Linux distro for x86-based computers,
also coming in at just over 275MB. The distro is derived
from Clonezilla SE (Server Edition), which debuted
in 2004 and focused primarily on deploying multiple
machines simultaneously using a centralised server and
a PXE network boot system.
In order to provide a more general-purpose tool,
Clonezilla combined with Debian Live to form Clonezilla
Live, a live-boot distro that can be used to easily image
and clone individual machines without the need for
the centralised server. Clonezilla Live can be used to
image or clone computers using a CD/DVD or USB flash
drive. Images can be created directly to or restored
from attached physical media (e.g. a hard disk or USB
stick) or alternatively via the network by using a network
filesystem such as SSHFS or Samba.
A key point to note about Clonezilla Live is that it
doesn’t provide a rich GUI in the same way that GParted
does – when you boot Clonezilla Live, you’ll notice that
everything runs in text mode. This does mean that
system compatibility is very extensive and the user
interface itself is very intuitive.
Fundamentally, Clonezilla runs in two modes –
‘device-image’, which allows you to work with disks or
partitions using images; or ‘device-device’, which is
02
Boot it up
With the image ready to go, insert
the SystemRescueCd disk or USB stick and
boot your system. Press F2/F3/F4/F5/F6 and
read the advanced boot instructions or press
Enter at the prompt to boot with the default
options. There are two parts in the boot
command – the boot-image and the boot-options. For example, you may want to boot
with rescue64 as boot-image and docache
setkmap=uk as boot-options. Remember to
use spaces between the options.
03
Select boot image
There are four main boot images
with SystemRescueCd: rescue32 is for
32-bit systems, which is the default choice
if your processor doesn’t support 64-bit
instructions. If you have a 64-bit capable
processor, use rescue64. There’s also
an alternative kernel, altker32, for 32-bit
systems; use this if you have problems with
rescue32 or need a more recent kernel.
Finally, altker64 is an alternative kernel for
64-bit systems.
Above Clonezilla’s network support means images can be
read from and saved to a huge variety of locations
for working directly from a disk or partition to another
disk or partition. The former is more commonly used
for backup and restore purposes, while the latter is
ideal for scenarios such as replacing faulty hard disks
or migrating from a hard disk to a larger disk or SSD.
If selecting to work in ‘device-image’ mode, Clonezilla
will provide the choice of using local storage or SSH/
Samba/NFS shares for image storage. If needed, you
can also drop to the command line and perform manual
operations – the advantage of having Debian Live
underpinning the tool!
The latest version of Clonezilla adds WebDAV support
for network image storage retrieval as well as encryption
support – vital if you want your images to be secure.
SystemRescueCd
SystemRescueCd (www.system-rescue-cd.org) is a
Gentoo Linux-based system rescue disk available as
a live-boot distro for administrating or repairing your
04
Choose boot options
The docache boot option copies the files to RAMfs and permits the disc to be ejected. setkmap=xx sets the keyboard map, eg uk for British. root=/dev/idxn boots an existing Linux system. You can use ide=nodma or all-generic-ide if the kernel boot process hangs on a storage-related driver. Use doxdetect or forcevesa if you can't get the graphical interface to work. acpi=off, noapic and irqpoll are useful if you have a problem when the kernel boots.
system and data after a crash. It aims to provide an
easy way to carry out admin tasks on your computer
and, as such, comes with a lot of Linux system utilities
such as GParted, FSArchiver, filesystem tools and basic
tools (editors, Midnight Commander, network tools).
As with GParted Live, it can be used (and is useful!) for
both Linux and Windows computers, on desktops as
well as servers. SystemRescueCD is particularly well
suited to Windows emergencies, as it includes ntfs-3g
(third-generation NTFS driver) needed to support
Windows partitions.
Although the rescue system requires no installation,
it can be installed on the hard disk if you wish (and it
can be kept there, effectively dormant, ready for use
later). The kernel version used in the distro supports all
important file systems (ext3/ext4, XFS, Btrfs, ReiserFS,
JFS, VFAT, NTFS) as well as network filesystems such as
Samba and NFS.
The Xfce desktop environment is included along with
a basic web browser and a number of other tools. While
this might seem strange on a rescue distro, the thinking
behind it actually makes sense – while you’re recovering
a broken system, there’s a pretty good chance you’ll
need to refer to the internet for documentation or
assistance, and providing network access and a browser
makes this possible. That’s helpful!
Although the most common use of SystemRescueCD
is on a desktop in interactive mode, like Clonezilla, it
can be started across a network using PXE. This is
invaluable if you happen to manage a server that is in
a remote data centre and something has gone horribly
wrong. There’s no need to be sitting in front of the
physical machine: you can use a network boot server to
get SystemRescueCD up and running on your instance.
You can even generate a custom distro image to perform
specific functions at startup.
05
Mount partitions
From the console mode, you can
mount partitions to troubleshoot an installed
Linux or Windows system. You can mount
Linux filesystems (ext3, ext4, XFS, Btrfs,
ReiserFS, Reiser4, JFS) and Windows FAT
and NTFS. You can back up/restore data
or OS files. Midnight Commander (mc) can
copy/move/delete/edit files and directories.
The SystemRescueCD has a list of the main
system tools with documentation. Six virtual
consoles are available; press Alt+F1 to F6.
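As a quick, illustrative example from that console – the device names are placeholders, so check them with fdisk -l first:

mkdir -p /mnt/linux /mnt/windows
mount /dev/sda1 /mnt/linux               # a Linux root partition
mount -t ntfs-3g /dev/sda2 /mnt/windows  # an NTFS partition via the bundled ntfs-3g driver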
TOP TIP
Most live-boot distros will come with a lightweight window manager installed, but they also allow you to switch based on your preference.
06
Start graphical interface
If you want to use graphical tools,
you can start the graphical environment by
typing startx. The graphical environment
allows you to work with the GParted partition
manager and use graphical editors including
Geany or gVim, or to browse the web and
use terminals such as xfce-terminal. From
both console and GUI modes, you can set
up your network connection manually or by
using the automated wizard (net-setup in
console mode).
TOP TIP
Running a distro with a union-based mount means that you can always ensure it's up to date, just as you do on your main machine.
Knoppix
The grand-daddy of live-booting distros, Knoppix
(http://bit.ly/KnoppixLive) is an OS designed to be run
directly from a CD/DVD or a USB flash drive. There are
two main Knoppix editions: the traditional CD edition
that comes in at around 700MB, and the DVD ‘Maxi’
edition which is just under 4.7GB. Each main edition has
two language-specific editions – English and German.
Knoppix is based on Debian and uses the LXDE Window
Manager. More than 1,000 software packages are
included on the CD edition and more than 2,600 on the
DVD edition, with highlights being OpenOffice, Chrome
and Firefox, GIMP, Wine (for integration of Windows-based programs) and a wide range of other packages
from the Debian repositories. Up to 10GB can be stored
on the DVD in compressed form.
Knoppix can, of course, be installed locally, but it also enables persistent storage via a filesystem union model, previously UnionFS but now Aufs. The union mount allows virtual updates to the data on the read-only CD/DVD media (or USB stick partition) by storing changes on separate writable media (or a separate partition) and then representing the combination of the two as a single storage device.
A special version of Knoppix is available known as
ADRIANE (Audio Desktop Reference Implementation And
Networking Environment). This includes a talking menu
system, so it can be used entirely without a monitor,
making it ideal for blind or partially sighted users.
The latest versions of Knoppix include a tiny ‘boot
only’ CD image inside the ‘Knoppix’ directory for
computers that can only boot from CD, but not from DVD
or USB flash drive. The images initiate the boot process,
which then proceeds with an attached USB stick or USB
hard disk.
Zsh on GRML
01
Use Z shell
One of the key features of Grml is
Zsh (Z shell), the default interactive shell
when using the distribution. If you are a
seasoned Bash user, don’t be concerned –
99% of what you know about Bash works
the same way in Zsh. There are some great
additional features, though. How about Tab
completion on cd? What a time-saver!
Above Knoppix is one of the oldest live-boot distros in
town, but is still relevant, useful and effective today
Grml
Long overdue an update but still frequently used, Grml
(https://grml.org) is a Debian-based OS designed to
run mainly from a live CD, but can be made to run from
a USB flash drive. Grml is designed to be well suited for
sysadmins and other users of text tools, but does include
X Window and a few minimalist window managers, such
as wmii, Fluxbox, and Openbox to use the graphical
programs such as Firefox included in the distro.
Grml provides several utilities to make life easier:
• grml-x is a wrapper for configuring and using the X
window system. grml2usb is a tool for installing a Grml
ISO on a USB device for booting.
• grml-crypt provides an easy wrapper around
cryptsetup, mkfs, losetup and mount.
• grml-live is a build framework based on FAI (Fully
Automatic Installation) for generating a Grml and Debian-based Linux live system (CD/ISO).
• grml-tips provides useful tips and tricks for daily life on
the command line!
Is Grml dead? The project team insist not. While the
last general release was 2014.11, up-to-date versions are
available via a daily automated build system. The big hold-up for a new release has been systemd integration, but
it’s now progressing. Updates are few and far between,
but the team have posted in 2017… so fingers crossed!
02
Type partial commands
You probably already use Ctrl+R to
do a recursive search of your history in Bash.
This is a great way to reuse commands, but
Zsh is even smarter. You can type part of a
command, then press the up arrow to find
the last command that started with the
character you typed, also continuing back in
the history with further presses if required.
03
Use its killer features
If you’ve used the kill command,
you’ll know that generally speaking you’ll
use ps to find the right item first. Once again
the Tab key makes this much smarter when
using Zsh – type kill followed by a letter
to search for and you’ll see a navigable list
of processes! These are just a few of the
powerful Zsh features.
Subscribe
Never miss an issue
£6.49
£5.63
per issue
*
Subscribe and save 20%
Every issue, delivered straight to your door
Never miss an issue
Delivered to your home
Get the biggest savings
13 issues a year, and you’ll be
sure to get every single one
Free delivery of every issue,
direct to your doorstep
Get your favourite magazine for
less by ordering direct
What our readers are saying about us…
“I’ve only just found out about this
magazine today. It’s absolutely brilliant
and exactly what I was looking for.
I’m amazed!”
Donald Sleightholme via Facebook
“@LinuxUserMag just arrived by post.
Wow what a fantastic issue! I was just
about to start playing with mini-pcs and
a soldering iron. TY”
@businessBoris via Twitter
“Thanks for a great magazine. I’ve been a
regular subscriber now for a number
of years.”
Matt Caswell via email
Tutorial
Concerto
Set up an open source
digital display system
Mayank Sharma
is an expert tech journalist specialising in Linux and open source and a former contributing editor of Linux.com.
Grab eyeballs by streaming multimedia-rich custom
messages across multiple screens
Resources
Concerto
www.concerto-signage.org
There’s no better way to get your message out to an
audience than by using a digital display. Whether
you want to display menu boards at restaurants
or schedules at a conference, a digital signboard
will get your message across conveniently. Thanks
to the dipping prices of flat-panel monitors, a digital
display is cheaper than traditional signage and offers
several benefits. You can use it to display multiple
bits of real-time information and it can be easily
reprogrammed to show a different set of information at
the touch of a button.
Concerto is an open source piece of software that
is designed to manage content on digital signboards
and broadcast it to multiple displays from a web-based interface.
The software can display multiple types of content,
including static images, text, videos and RSS feeds.
You can configure Concerto to show different bits of
information on each display. To top it all, Concerto
can display content on any device that features a web
browser, which means that you can turn any monitor,
tablet or even a smartphone into a digital signboard!
The Concerto content server requires a web server
that’s capable of serving Ruby on Rails applications, as
well as a MySQL database for storing the data to display
and information about the various display screens.
You don’t need to pre-install these since Concerto will
fetch them during installation.
Before you install Concerto, assign
a static IP address to the machine
that you’ll use to serve content and
manage the displays. The easiest
way to do this is via your router’s
administration page. Most routers
will identify an Ubuntu installation
via the MAC address of its Ethernet
or wireless network card and allow
you to lock it down to a fixed IP
address. The exact procedure to
accomplish this varies from router
to router. For this tutorial, let’s
assume the machine has the IP
address 192.168.1.2.
Once you’ve set up a static IP
address for the server, it’s time to
install Concerto. There are a couple of different ways of
installing the software. The most convenient is to install
the Concerto package on a DEB-based distribution such
as Debian or Ubuntu. First, add Concerto’s repository
using curl get.concerto-signage.org/add_repo.sh
| sh. When the script has refreshed your repositories,
install Concerto with sudo apt install concerto-full; that will pull in all the required dependencies. After
all the packages have been downloaded and installed,
Concerto will set up each and every component. During
this process it will prompt you to set the password for the
app to register with the MySQL server.
Preparing your Concerto setup
At the end of the installation, the Concerto package will
place a Concerto site configuration file in /etc/apache2/
sites-available/concerto, along with the appropriate
Concerto virtual host configuration. This file points to the
directory that includes all the Concerto code, which by
default is /usr/share/concerto.
Before bringing up the Concerto installation, disable
the default virtual host configuration with sudo
a2dissite 000-default. Enable the Concerto virtual
host with sudo a2ensite concerto and then restart the
Apache web server to bring the new configuration into
effect with sudo apache2ctl restart.
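Gathered together, that Apache housekeeping is just three commands:

sudo a2dissite 000-default    # disable the default virtual host
sudo a2ensite concerto        # enable the Concerto virtual host
sudo apache2ctl restart       # restart Apache so the change takes effect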
Now fire up a browser on any computer on the network
and in the address field, enter the IP address of the
Concerto installation, which in our case is 192.168.1.2.
Since this is a new installation, you’ll be directed to the
first run wizard, where you’ll have to register a new admin
user for managing the installation. You can add more
users from within the Concerto administration panel.
Your Concerto server is set up and ready to serve
content to screens around the network. Log into the
Concerto administration panel using the credentials of
the user you’ve just created. This will take you to the
Dashboard, from where you can create feeds, upload
content and create different screens based on existing
templates or your own.
Create feeds
The first order of business is to create a feed. Think of
feeds as containers into which users insert their content.
Each feed is moderated by a group of privileged users
who can approve and deny messages submitted to the
feed. To create a feed, log into the Admin Panel, click on the 'Browse' button under the Content section and click on the 'New Feed' button on the right.
In the page that opens, provide some basic details
about the feed by giving it a name and a brief description.
You can also assign a feed to a particular group of
Concerto users. Next, you’ll have to pick the type of
content that’ll be allowed inside the stream. Currently,
Concerto supports seven types of content. When you’re
done, click on the 'Create Feed' button to save the feed.
Once a feed has been created, you can add content to it.
To review a feed, click on the Browse button, which will
display all feeds. Click on the name of a particular feed to
view its active contents. You can also edit or delete a feed
from this screen.
Once a feed has been created, you can upload content
and associate it with the feeds. To this end, click on the
'Add' button under the Content section in the top field
of Concerto’s admin interface. This takes you to a page
with multiple tabs where each tab represents a type of
content supported by Concerto, such as graphics, plain
text, video, calendar and so on.
The different tabs enable you to upload and customise
the different types of supported content. For example,
from the Video tab you can display either a self-hosted
video or fetch one from popular video sharing websites
such as YouTube, Vimeo and Daily Motion. Concerto will
also show you a preview of the content after it’s been
added. Once you have customised a type of content,
you’ll have to give it a name and add details about its
display date and duration. Remember that before you can
view the content, you’ll have to add it to an existing feed.
You can also add the same content to multiple feeds.
Arrange screens
The last order of business is to put the content-enriched
feed onto a screen. A screen in Concerto-speak is a
virtual arrangement of content. Screens are based on
templates and you can use one of the seven that come
with the default installation or create your own.
To place the elements on a new screen, click on the 'Screens' button in the top bar and then press the 'New Screen' button. In the page that opens, enter details about the screen including its name, location where it will be displayed (such as the lobby, porch and suchlike), and the owner of the screen. When you're done, select a template from the list of available templates and click on the 'Create Screen' button.
Above You can also use the customisable Concerto Player live CD to display your screens on remote machines
When you’ve created a screen, Concerto will give you
an overview of the arrangement based on its template.
From this screen you can get various details about the
screen such as its display location, as well as its current
status. The 'Edit Screen' button takes you back to the
previous screen, from where you can change its settings.
The 'Delete Screen' button is used to zap it out of
existence, so don’t do this right now.
You can now associate the added content to the screen
by assigning feeds to the various fields. Hover over a
field inside a screen to view its settings and click on the
Manage button next to the content you wish to add. Then,
click on the ‘Add a New Feed’ button to pick a feed you’ve
created earlier. You can add multiple feeds for each field
and Concerto will cycle through the content of the feeds.
To preview a screen, click on the Screens button at the
top, which lists all configured screens. Pick any screen
from the list and use the 'Preview Screen' button to see
how it’s going to appear to your audience. That’s all there
is to it. Each screen has a unique ID and you can use this
to beam the screens to displays across your network. To
display a screen, run a fullscreen browser session with
the ID of the screen on a remote machine in the network,
such as http://192.168.1.2/frontend/2.
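How you launch that full-screen session is up to you; one hedged option is a Chromium-style kiosk mode on the machine driving the display – the address and screen ID here are simply the ones used in this tutorial:

# Hypothetical kiosk launch on the display machine
chromium-browser --kiosk http://192.168.1.2/frontend/2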
Power a signboard with a Pi
The tiny Raspberry Pi is a wonderful option to display content streamed by the Concerto server. You also don't need to tinker with it too much. Just follow the instructions in the first reply in this thread (http://bit.ly/concertopi) at the Raspberry Pi forums. Then connect a display to the Pi via the HDMI port and hook it up to your network, either via the Ethernet port or a supported USB Wi-Fi adaptor (if needed), and start beaming your adverts.
Tutorial
Shell scripting
Shell scripting tips
Spruce up your scripts with loops, positional variables and
user input in our quick guide
Jason Cannon
started his career as a Unix and Linux system engineer in 1999. Since then he's used his skills at such companies as Xerox, UPS, Hewlett-Packard and Amazon.

Right Having an editor that does syntax highlighting makes writing scripts much easier to follow

When shell scripting, it's handy to perform an action on a list of items. We can do this using a for loop:

for VARIABLE_NAME in ITEM_1 … ITEM_N
do
  command 1
  …
  command N
done

ITEM_1 … ITEM_N is a space-separated list of items. The commands between do and done are executed for each item. The first item in the list is assigned to the variable, then the code block is executed. The next list item is then assigned to the variable and the commands are executed again, and so on until we reach the end of the list. Here's an example script which shows how a for loop works. To run this, save it as, say, colours.sh, make it executable with chmod +x colours.sh, and then run it with ./colours.sh.

#!/bin/bash
for COLOUR in red green blue
do
  echo "COLOUR: $COLOUR"
done

COLOUR: red
COLOUR: green
COLOUR: blue

We have three items in our list. For the first iteration, red is assigned to the variable $COLOUR and is printed on screen, preceded by the word COLOUR, using the echo command. The same thing happens for the values green and then blue and then the script completes.
It's also common practice for the list of items to be stored in a variable, as in this example:

#!/bin/bash
COLOURS="red green blue"
for COLOUR in $COLOURS
do
  echo "COLOUR: $COLOUR"
done

The result is exactly as before. We access the contents of a variable using a $ followed by the variable's name. The next script renames all files with the extension .jpg by inserting today's date before the original filename:

#!/bin/bash
PICTURES=$(ls *jpg)
DATE=$(date +%F)

for PICTURE in $PICTURES
do
  echo "Renaming ${PICTURE} to ${DATE}-${PICTURE}"
  mv ${PICTURE} ${DATE}-${PICTURE}
done

We use a $ and () to store the output of the ls and date commands in the variables $PICTURES and $DATE. The %F specifier is used to get the date in year-month-day format. We use {} to encapsulate our variables so that we don't insert rogue spaces or trip over special characters. If this script is named rename-pics.sh and we run it in a directory that also contains the three images bear.jpg, man.jpg and pig.jpg, we see the following:

$ ./rename-pics.sh
Renaming bear.jpg to 2017-05-01-bear.jpg
Renaming man.jpg to 2017-05-01-man.jpg
Renaming pig.jpg to 2017-05-01-pig.jpg

Belt and braces
If we don't encapsulate the variable name in curly braces, e.g. if we were to use $USER.tar.gz in our archive_user.sh script, then the interpreter treats the extra letters as part of the variable name. Since a variable with that name does not exist, nothing is put in its place and the tar command fails.

Positional parameters
Positional parameters are variables that contain the contents of the command line. For example, if we execute a script with some extra arguments:

$ script.sh parameter1 parameter2 parameter3

…the variables $0 through $9 take on these values:

$0  "script.sh"
$1  "parameter1"
$2  "parameter2"
$3  "parameter3"
The first variable $0 takes on the name of the script itself,
and each subsequent variable takes on each argument
passed to the script. This script called archive_user.sh
accepts a parameter which happens to be a username:
#!/bin/bash
echo "Executing script: $0"
echo "Archiving user: $1"

# Lock the account
passwd -l $1

# Create an archive of the home directory.
tar cf /archives/${1}.tar.gz /home/${1}

Anything that follows a # is a comment, with the exception of the shebang #! on the first line, which says that this script should be run by the Bash interpreter. Comments are ignored by the interpreter as they're only for the benefit of us humans.
Instead of referring to $1 throughout this script, let's assign its value to a more meaningful variable name. In this version of the script, we use the variable name USER. The output of the script remains the same.

#!/bin/bash
USER=$1 # The first parameter is the user.
echo "Executing script: $0"
echo "Archiving user: $USER"

# Lock the account
passwd -l $USER

# Create an archive of the home directory.
tar cf /archives/${USER}.tar.gz /home/${USER}

You can access all the positional parameters starting at $1 until the very last one on the command line by using the special variable $@. Here's how to update the script to accept one or more parameters:

#!/bin/bash
echo "Executing script: $0"
for USER in $@
do
  echo "Archiving user: $USER"

  # Lock the account
  passwd -l $USER

  # Create an archive of the home directory.
  tar cf /archives/${USER}.tar.gz /home/${USER}
done

Now you can pass in multiple users to the script and the for loop will execute for each user that you supplied on the command line.

Accepting user input
If you want to accept standard input (stdin), use the read command. Remember that standard input typically comes from a person typing at the keyboard, but it can also come from other sources such as the output of a command in a command pipeline. The format is:

read -p "PROMPT" VARIABLE

The -p option makes the command display a prompt, specified immediately after, and the input is stored in the variable name specified. This version of our script asks for the user account to archive:

#!/bin/bash
read -p "Enter a user name: " USER
echo "Archiving user: $USER"

# Lock the account
passwd -l $USER

# Create an archive of the home directory.
tar cf /archives/${USER}.tar.gz /home/${USER}

In this example, we run the script and type in the username mitch:

$ ./archive_user.sh
Enter a user name: mitch
Archiving user: mitch
passwd: password expiry information changed.
tar: Removing leading `/' from member names

Learn and save with Udemy
If you've enjoyed this small taste of the Shell Scripting Succinctly Udemy course, you can gain unrestricted access to the full course at http://udemy.com with an exclusive Linux User & Developer reader discount. You'll discover a full working knowledge of using the standard sysadmin terminal tools, setting up a test machine, Linux file permissions, standard text editors, manipulating files, using network transfers, controlling processes and much more. Get the discount, start learning! Visit http://bit.ly/SHELLSCRIPT10 to enrol in the course at a discounted price of £15 (92% off)! Click the 'Buy Now' button and sign up for an account on Udemy. Once signed up for an account, you will be asked to confirm your purchase. The discounted course price of £10 will be applied automatically with the above provided link. Input your credit card information and click on ‘Pay now’. You've successfully enrolled for the course! Enjoy a lifetime of access on the go.

Udemy was founded in 2010 with the aim of improving lives through learning. Udemy is a global marketplace for learning and teaching online, where more than 15 million students learn from a library of 45,000 courses taught by expert instructors.
This tutorial is an excerpt from Udemy’s Shell Scripting
Succinctly course. The rest of the course covers Exit
Statuses and Return Codes, Functions, Wildcards, Case
Statements, Logging, While Loops and Debugging. The
course includes some excellent real-world scripts, such
as the one pictured for generating random passwords, so
check out our exclusive reader offer!
Tutorial
Mastodon
Join the rise of Mastodon and run
your own microblogging instance
Avoid all that harassment or offensive content on Twitter by setting up an
open and decentralised social network on your own server
Christian Cawley
is a former IT and software support engineer and since 2010 has provided advice and inspiration to computer and mobile users online and in print. He covers the Raspberry Pi and Linux at http://makeuseof.com.
Resources
Ubuntu Server 16.04 x64
Domain name and admin access – this should already be pointing to the server's IP address
Free account from www.mailgun.com
Twitter, the home of microblogging, has problems.
In short, people just cannot help sharing offensive
comments and material. It’s a home for cyberbullies,
not to mention terrorist sympathisers. The idea behind
Twitter is great; the problem is moderation. So wouldn’t it
be great if Twitter could be moderated in a more effective
way? Based on GNU Social, Mastodon offers a possible
solution: community-based moderation.
To achieve this, Mastodon – a microblogging service
like Twitter, but with a 500-character limit – runs on
your own server, or ‘instance’. The result is a local
social network that can be administered with its own
rules and account privileges. Posts can be shared to
other (or all) instances, or kept local. Individual users,
meanwhile, can specify privacy settings for each post
that they make. There’s even a ‘content warning’ feature
to alert users to NSFW, spoilers and other sensitive
material. Anyone interested in joining Mastodon can
visit https://instances.mastodon.xyz to find a list of
instances to sign up to. Meanwhile, if you want to run
your own instance of Mastodon on your server, you can
set it up right now, using Docker.
Above Mastodon enables you to run a social network
with protections against bullying and NSFW content
01
Create a user and install Docker
Get started by connecting to your server via SSH and adding a user. Call it mastodon with adduser mastodon. The user will need root privileges; once these are given with usermod -aG sudo mastodon, you can switch to the new account with su - mastodon.
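Collected in one place, and assuming you are still connected with root (or sudo) rights, that sequence is:

adduser mastodon              # create the account
usermod -aG sudo mastodon     # give it sudo privileges
su - mastodon                 # switch to the new user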
Next, it’s time to install Docker. Begin by updating
the package database, then install repository
management tools.
sudo apt-get update
sudo apt-get install apt-transport-https software-properties-common
You’ll need a GPG key for the official Docker repo:
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
With this set, add the Docker repo to APT sources:

sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'

Now run an update again. We need to install Docker from the repo for this project, rather than the default Ubuntu 16.04 repo, so check which version will be installed with sudo apt-cache policy docker-engine.
You now have everything you need to run Docker installed. Move on, this time installing Docker itself with sudo apt-get install -y docker-engine. This will take a few moments to complete, and the process should automatically run its own daemon, with the process set to start when the server boots. You can check that it's running with sudo systemctl status docker.

02
Configure Docker Compose
Rather than bother inputting sudo with each and every Docker command in this Mastodon setup, it's simpler to add the mastodon user account to the new Docker group with sudo usermod -aG docker $(whoami). Now exit the SSH session, then log back in.
Docker Compose – a tool for running multi-container Docker applications – now needs to be installed. Head to http://bit.ly/DockerCompose to check the latest release number; substitute it for the version number in this command:

sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.12.0/docker-compose-$(uname -s)-$(uname -m)"

Make it executable:

sudo chmod +x /usr/local/bin/docker-compose

Verify the installation has worked by checking the installed version with docker-compose -v. If everything has worked as intended, you'll see the version number. This should read:

version [version-number] build [build number]

The version number should match the one you input.

03
Install Mastodon
With everything prepared, it's time to install Mastodon itself. Switch into the mastodon subdirectory under home with cd /home/mastodon. Clone mastodon.git into the /home/mastodon directory with:

git clone https://github.com/tootsuite/mastodon.git
cd mastodon

Here, make a copy of the production environment sample file:

cp .env.production.sample .env.production

This can be edited later, once a trio of secret keys have been created. To do this, you should first build the Docker image with docker-compose build. This can take a while to complete, but once the image is built, the secret keys can be created. Do this by running the following command three times, and making a note of each generated key. These are long strings, so temporarily pasting into a text editor might be a good idea.

docker-compose run --rm web rake secret

With the three keys safe, open the copy of the production environment file with nano .env.production. Find PAPERCLIP_SECRET, SECRET_KEY_BASE and OTP_SECRET and paste one of the keys against each in turn. The order doesn't matter; they just need to be different.

Social networking safety with Mastodon
Social networks should be better at protecting users. Microblogging should be about discussion and debate, not cyberbullying and offensive, NSFW images. This is Mastodon's strength: creating an online space where posters can manage their posts' visibility and readers can screen out offensive content. While it currently seems unlikely that it will overcome Twitter, we can at least hope that Mastodon's approach will be adopted by other social networks.

04
Set up a Mailgun account
To enable sign-ups to your Mastodon instance,
you'll need a Mailgun account. This enables your instance to send up to 10,000 emails per month for free, provided you input credit card details. Those emails are required for sign-up confirmation, so they're pretty vital!
Leaving the SSH window open for a moment,
switch to your computer’s browser and head to
https://app.mailgun.com/new/signup. From here,
follow the steps to create a new account, and use the
domain verification process to ensure the domain is
listed as active.
Now visit https://app.mailgun.com/app/domains, click
the domain you’re using for the instance, and identify the
Default SMTP Login and Default Password. Copy these
into the .env.production file in your SSH session, as the values for SMTP_LOGIN and SMTP_PASSWORD. Next, change SMTP_FROM_ADDRESS so that it reads something along the lines of notifications@ followed by your domain name. Also, look for LOCAL_DOMAIN and change this to match the domain name that you're using to point at the server.
Finally, press Ctrl+X to save the file and exit, tapping Y to confirm.
With the configuration changed, you'll need to rebuild Docker with docker-compose build. Next, run the database migrations and precompile the assets:
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
Once that's done, run the container with docker-compose up -d.
05
Set up NGINX
NGINX is used alongside Mastodon as a reverse proxy. To do this, it must be installed with the usual sudo apt-get install nginx and then configured. Next, remove the default profile:
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default
A new profile is required:
sudo touch /etc/nginx/sites-available/mastodon
Enable this profile with a symbolic link:
sudo ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon
Configuring NGINX requires copying the config file at http://bit.ly/NGINXconfig into the text editor:
sudo nano /etc/nginx/sites-available/mastodon
In the file, find each instance of example.com and replace it with the name of your own domain, without the 'www'. This includes changing the ssl_certificate lines. Again, hit Ctrl+X to save and exit.
06
Get an SSL certificate
As an SSL certificate is required to keep things secure, you'll need to install the certbot PPA:
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
With that done, install certbot itself with sudo apt-get install certbot.
To generate the SSL certificates for your Mastodon instance, you'll need to temporarily stop NGINX:
sudo systemctl stop nginx.service
Then, run the command to create a certificate, substituting your own domain name for example.com:
sudo letsencrypt certonly --standalone -d example.com
Several prompts will be displayed. Follow these instructions to complete the process.
07
Get Mastodon running
You're now ready to run Mastodon itself. Check you're in the correct directory with pwd. You should be in /home/mastodon/mastodon. If not, cd into there, and then stop Docker:
docker-compose down
Next, run the following one line at a time:
docker-compose build
docker-compose run --rm web rails assets:precompile
docker-compose run --rm web rails db:migrate
docker-compose up -d
This should take a couple of minutes to complete. Once you're done, bring NGINX back online:
sudo systemctl restart nginx.service
If all of the above worked, you should be able to open a web browser on your PC and visit your new Mastodon instance, via HTTPS!
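Before reaching for a browser, a quick check from the server itself is to ask for the response headers over HTTPS (substitute your own domain for example.com):
# expect an HTTP 200 – or a redirect into the web app – if NGINX, the certificate and the containers are all up
curl -sI https://example.com | head -n 1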
08
Use Cron for scheduled tasks
Some tasks on Mastodon need to be automated. You can configure these with cronjobs. Begin by changing directory with cd /home/mastodon. Next, create a file with the scheduled commands – nano mastodon_cron – and copy and paste the following into the file:
cd /home/mastodon/mastodon
docker-compose run --rm web rake mastodon:media:clear
docker-compose run --rm web rake mastodon:push:refresh
docker-compose run --rm web rake mastodon:push:clear
docker-compose run --rm web rake mastodon:feeds:clear
Hit Ctrl+X to exit, confirming with Y, then enter:
sudo chmod +x mastodon_cron && sudo crontab -e
This opens the crontab file. Scroll to the end, and insert:
0 0 * * * /home/mastodon/mastodon_cron > /home/mastodon/mastodon_log
Again, Ctrl+X will exit. As the SSL certificate from Let's Encrypt will expire after 90 days, you also need to schedule a cron job to auto-renew the certificate. This will happen periodically as a background task.
sudo crontab -e
Scroll to the end of the file and add this:
0 1 * * 1 /usr/bin/letsencrypt renew >> /home/mastodon/letsencrypt.log
5 1 * * 1 /bin/systemctl reload nginx
Press Ctrl+X to exit, tapping Y to confirm. This instruction will renew a certificate over 60 days old on a Monday at 1am, reloading NGINX five minutes later.
09
Administer Mastodon & manage accounts
With Mastodon now running on the server and accessible via your browser, you'll need to create an account. This will check if your Mailgun configuration is correct – if not, follow Mailgun's troubleshooting steps.
Next, you'll need to make your account into an admin account. First, make sure you're in the mastodon subdirectory with cd /home/mastodon/mastodon. Then, elevate the intended account username:
docker-compose run --rm web rails mastodon:make_admin USERNAME=yourusername
This will give you access to the various administrative tools that will let you manage the site. You'll find these by signing into Mastodon, selecting Preferences > Administration. For instance, you'll find a list of Accounts that have been set up (at this stage it will only list yours), as well as some statistics via Sidekiq. Meanwhile, the Site Settings page lets you specify contact information, a site title and description, and more.
If there is any problem with sending confirmation emails, you can issue a manual command to confirm a single account, substituting that account's email address:
docker-compose run --rm web rails mastodon:confirm_email USER_EMAIL=user@example.com
Meanwhile, unconfirmed users can be cleared in bulk with:
docker-compose run --rm web rake mastodon:users:clear
New to domain names?
If you're registering a new domain, once you've settled on a name (our domain was disused), the next thing to do is find out where the DNS settings can be found. Changing these settings can be difficult, and typos will be punished. Just remember to make sure you know where the 'Reset to Defaults' button is before starting – and use it when you run into problems!
10
Troubleshooting
There is a manual method to install Mastodon,
but it’s pretty buggy. We recommend using the Docker
installation to gain familiarity with the process, as there
are several potential ‘gotchas’ throughout. For example: if
your domain host is slow to update DNS records to point
at your Mastodon instance, then your entire project will
be left hanging until the change is applied, which could
take up to 48 hours. Another potential issue is a Let’s
Encrypt error that can occur when restarting NGINX,
which is resolved by generating a dhparam file:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
Missing stylesheets in Mastodon can be fixed by
rerunning the Docker compose commands (see Step 7).
Watch out for typos that can cause a config file to fail or software to install incorrectly. And save the manual install for when you're more familiar with Mastodon!
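When something does go wrong, the container logs are usually the quickest way to see why. Both of the following are standard Docker Compose commands, run from /home/mastodon/mastodon; service names such as web, db and streaming come from Mastodon's docker-compose.yml, so check that file for the full list:
# list the Mastodon containers and their current state
docker-compose ps
# show the output of the web container; swap in db, redis, streaming or sidekiq for the others
docker-compose logs web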
Tutorial
Tam Hanna
is the CEO of the
Bratislava-based
consulting company
Tamoggemon Holding
k.s. The firm’s focus
is consulting in the
development of
interdisciplinary
systems consisting
of software, HID
and hardware.
Figure 1: This wizard contains a variety of project skeletons for QML applications
Tutorial files
available:
filesilo.co.uk
Qt5 development
Qt5 development:
Use QML to connect
JavaScript and C++
The QML runtime environment provides a helpful way to create great-looking cross-platform user interfaces
Creating good-looking cross-platform user interface
designs has always been a challenging task, but given
that Nokia intended Qt to act as a cross-platform UI
development toolkit for various consumer-oriented
mobile devices, the ability to create svelte-looking user
interfaces was paramount. Since QtGUI wasn’t ideal for
mobile apps, Nokia’s answer was QML. The principles
behind it are simple: a JavaScript interpreter is tacked
onto the user interface stack, permitting developers to
reuse some or all of their knowledge when working on
mobile apps for various platforms.
In addition, the Qt runtime is accelerated as it can
make use of the browser’s rendering infrastructure
already found on the mobile device. Controls are
constructed by combining the various graphical
primitives; the widget can then be used as a constituent
for other user interfaces.
Changes in the ownership of Qt affected the QML
roll-out process: in many cases, modules were not ready
or shipped in marginal form. Fortunately, the last year
has seen the stabilisation of the QML embedded runtime
environment – it truly is the age of QML.
Alongside the framework, the QML Controls library
was also finalised. QML apps can now be created using a
WYSIWYG editor similar to the one from traditional QtGUI.
In order to avoid problems, the following steps assume
you’re working with the latest version of Qt Creator and
have Qt 5.8 installed. When clicking the new project
wizard, a dialog similar to that in Figure 1 will pop up. Developers have the choice between three QML-derived application skeletons. Qt Quick Application
provides a basic QML-based app, while the two other
templates – Qt Quick Controls 2 Application and Qt Quick
Controls Application – use the control libraries. As we
want to look at the basics of QML, we’ll create an app
based on the Qt Quick Application template.
The following steps assume that the name of the
application is QMLDemo1. The assistant will ask you
which version of QML should be used. Select Qt 5.6,
and leave the option for the creation of the .ui.qml-File
selected. Next, select the kits which will be responsible
for program execution. Finally, the overview dialog will be
shown, after which the project structure will be created.
Let’s start by looking at QMLDemo1.pro – the structure
of the file is largely as expected. The main change is
we’re now including two modules, and a resource file in
addition to the code file containing the entry point:
TEMPLATE = app
QT += qml quick
CONFIG += c++11
SOURCES += main.cpp
RESOURCES += qml.qrc
The first interesting change occurs in the main method.
Instead of instantiating the normal QApplication class,
we use QGuiApplication. Furthermore, it is provided with an instance of QQmlApplicationEngine, whose load function is invoked in order to populate it with a QML file:
int main(int argc, char *argv[]) {
QGuiApplication app(argc, argv);
QQmlApplicationEngine engine;
engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
return app.exec();
}
The parsing of the QML file is performed at runtime: this
is important not only for technical reasons, but also due
to the possibility that syntax errors might not be found
by Qt Creator. With that, the C++ part is finished: while
QML can be integrated into C++, this story focuses on the
JavaScript aspect.
Find the QML
QML files are edited like any other code file – WYSIWYG
support is not provided. As the Controls library evolved,
Qt Creator was expanded with a variety of features
intended to provide graphical editing. As of this writing,
the two editing modes are kept apart by the filename: a
.qml file is intended for plaintext mode, while .ui.qml files
will be opened in the editor shown in Figure 2.
Figure 2: This editor looks intentionally similar to the one used in QtGUI
Click main.qml to bring its code onto the screen. The
file starts out with a batch of import declarations:
import QtQuick 2.6
import QtQuick.Window 2.2
Imports inform the QML runtime about the libraries
needed to populate the declarations found in the rest of
the file. After that, we find the declaration of the actual
window object:
Window {
visible: true
width: 640
height: 480
title: qsTr("Hello World")
MainForm {
anchors.fill: parent
mouseArea.onClicked: {
console.log(qsTr('Clicked on background. Text: "' + textEdit.text + '"'))
}
}
}
Experienced JavaScript programmers may baulk at this syntax: it looks like JSON, but it's actually quite different. The key difference is that attribute values are introduced with a ':' character, but usually get terminated by a linefeed – using a semicolon is not commonly done.
In addition to that, the values written into an attribute
don’t necessarily have to be constants: in the case of our
window, its title is populated from a qsTr() function.
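As a quick aside (this snippet is ours, not part of the generated project), an attribute can hold a full JavaScript expression, and the runtime keeps it up to date as a binding:
Rectangle {
    width: parent.width / 2                           // re-evaluated whenever the parent resizes
    height: Math.max(50, width / 4)                   // arbitrary JavaScript expressions are allowed
    color: pressArea.pressed ? "red" : "steelblue"    // reacts to another element's property
    MouseArea { id: pressArea; anchors.fill: parent }
}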
The next interesting aspect involves the declaration
of children: tack them into the main JSON body as
if they were an attribute. In the case of our example
program, this is the lot of the main form element: its child
MouseArea is provided with an attribute of its own.
The next question involves the declaration of the main
form file. A .qml file is not only a QML file, but also a
component which can be accessed via its filename: in the
case of our MainForm, the corresponding filename could
be MainForm.qml or MainForm.ui.qml.
Know thy layout
By default, clicking a .ui.qml file opens its contents in
the editor shown in Figure 2. Clicking the edit symbol in
the command bar on the left-hand side of the Qt Creator
window allows you to take a look at the code behind it.
Be aware that editing .ui.qml files by hand is a stupid idea: it's only a matter of time before the graphical editor destroys your changes. If you have to do it for some reason, ensure the version control system has your back.
We must move on to a different topic. One of the most significant strengths of QtGUI is the ability of user interfaces to adjust themselves: thanks to the underlying layout subsystem, widgets are rearranged on the fly.
In QML, developers have to deal with two groups of
layouts: a good way to keep them apart involves thinking
of them as active and passive. Passive layouts, aka
positioners, simply arrange their child elements on the
screen. Active layouts – aka ‘layouts’ – differ in that their
activity modifies the size of their children.
To test out these features, we need to start out with
an example widget. For this, we will create a little push
button which changes its colour when clicked.
Magic localisation
qsTr() sends its parameter into the Qt localisation framework, replacing it with a translated version of itself. This complex-sounding system makes translating Qt applications easy: a program called Qt Linguist sniffs out strings found inside a qsTr() function and provides a central place where they can be translated. After that, developers simply load the resulting language file to translate their application.
www.linuxuser.co.uk
41
Tutorial
Qt5 development
Self-modifying code!
In theory, code
provided to the
QML runtime can
also be obtained by
the program during
its execution: this
is a nice way to create a self-modifying binary,
which gets around
the various app
stores’ compliance
directives. Sadly,
in practice, this is
unlikely to work:
most stores vet
for this, banning
accounts of
developers
who happen to
misbehave.
Figure 3: Anchors are points where components can be 'docked' to one another
Figure 4: The lack of resizing control leads to odd display output
Return to editing mode, and click the '/' below the file qml.qrc. Select the 'Add New...' option to open the file generation wizard. Found in the group Qt, the file template we need is QML File (Qt Quick 2). Choose the name FutureToggle and click 'Next'. The project and its version control settings can be left alone – when done, a new file will appear with the following content:
import QtQuick 2.0
Item {
}
Replace its content with the following, very basic QML file:
Rectangle {
width: 300
height: 300
color: "steelblue"
Text {
text: "The Future is Now!"
anchors.verticalCenter: parent.verticalCenter
anchors.horizontalCenter: parent.horizontalCenter
}
}
Rectangles are the bread and butter of QML: this element is best described by its name. We provide some attributes to describe its visual styling, and furthermore integrate a text element. As the text is intended to float in the middle of the element, we use its anchors to attach it – see Figure 3.
parent provides direct access to the parent element: in the case of our text box, attaching it centrally to the parent – using the verticalCenter and horizontalCenter anchors – ensures that the text is always in the middle of the rectangle. With that done, return to main.qml and replace the MainForm:
Window {
. . .
title: qsTr("Hello World")
FutureToggle{}
}
This code provides an empty form containing one instance of the toggle. Creating colour-changing logic requires us to accomplish two changes: first of all, a way to store state is needed. Second, we also need to track incoming click events delivered to the QML runtime.
Statefulness is best accomplished by properties: they behave just like in any other programming language. QML supports a variety of data types – for more info, see http://doc.qt.io/qt-5/qtqml-typesystem-basictypes.html.
The second part involves the creation of a constructor, which requires a workaround: creating an event handler for the Component.onCompleted event. Task one is accomplished like this:
Rectangle { . . .
property bool isToggleActive
Component.onCompleted: {
console.log("Completed Running!");
isToggleActive = false;
}
Solving the second problem requires us to deploy a MouseArea control. It is problematic, as it only handles taps inside of its allocated size budget.
Developers might worry about how to do this: in many languages, a pixel can only be home to one element at a time. Fortunately, QML is free from this restriction:
Rectangle { . . .
Component.onCompleted: { . . . }
MouseArea {
anchors.fill: parent
onClicked: {
isToggleActive = !isToggleActive;
updateColors();
}
}
Text { . . .
QML does not restrict developers to creating inline functions via attributes. One can also create a function to be shared across the entire file – in the case of our program, it looks like this:
Rectangle {
function updateColors()
{
if (!isToggleActive) { color = "steelblue" }
else { color = "red" }
}
With that done, run the program once again. The colour changes every time the element is clicked.
The next step involves creating multiple versions of our button. Start out by returning to main.qml, and modifying it as follows:
Window {
visible: true
width: 800
height: 700
title: qsTr("Hello World")
Grid
{
anchors.fill: parent
columns: 2
rows: 2
spacing: 10
FutureToggle{}
FutureToggle{}
FutureToggle{}
FutureToggle{}
}
}
A grid widget does what one would expect from its name: controls are arranged according to the size specified by the columns and rows properties. Simply providing children on a 'one by one' basis is but one way – a QML Repeater control can also be used for advanced use of layouts (see the short sketch at the end of this section).
Run the program and try to change the size of the window. Due to the 'staticness' of the positioner, it's easy to create an 'error display' such as in Figure 4. This can be solved by replacing the positioner with a real layout. First, include the module containing the various layouts:
import QtQuick.Layouts 1.1
Next, Grid must be replaced with GridLayout. As this does not support the spacing attribute, this one must go:
Window { . . .
GridLayout
{
anchors.fill: parent
columns: 2
rows: 2
FutureToggle{}
FutureToggle{}
FutureToggle{}
FutureToggle{}
}
}
Run the program again. Making the window larger now leads to adjustment of the design, as in Figure 5.
Figure 5: Using GridLayout improves quality
QML layouts need to be told about how their elements are to be resized. As a first experiment, set the Layout.fillWidth property on one of the elements to inform the layout engine that it is allowed to change the size of this element:
GridLayout{
anchors.fill: parent
columns: 2
rows: 2
FutureToggle{ Layout.fillWidth: true }
FutureToggle{}
FutureToggle{}
FutureToggle{}
}
Running the program once again leads to the results shown in Figure 6. Due to space constraints, we cannot provide any further information – find out more by visiting http://doc.qt.io/qt-5/qtquick-usecase-layouts.html.
Figure 6: Permitting modification leads to a changed output
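As mentioned above, a Repeater can generate those children instead of listing them one by one. The following is our own small sketch of the idea, using the positioner form described in the Repeater documentation rather than code from the tutorial files:
Grid {
    anchors.fill: parent
    columns: 2
    spacing: 10
    Repeater {
        model: 4         // the Repeater instantiates one delegate per model entry
        FutureToggle { } // the toggle component built earlier in this tutorial
    }
}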
Writing JavaScript code for QML is quite an interesting
experience: existing design knowledge can usually be
transferred to the world of cross-platform apps. The
true power of QML lies in its ability to integrate with the
underlying C++ runtime. Business logic can be written
in C++, while the presentation layer is handwritten
JavaScript, as we’ll see in the next instalment.
Tutorial
Mihalis
Tsoukalos
is a UNIX
administrator,
programmer (UNIX
and iOS), DBA and
mathematician.
He has been using
Linux since 1993.
You can reach him at
@mactsouk.
Erlang
Part 2:
Erlang and OTP
Learn how to develop Erlang
applications that implement the
functionality of finite-state machines!
Resources
A recent Erlang installation
Your favourite text editor
Tutorial files available: filesilo.co.uk
This is the second part in a series of Erlang tutorials
that talks about OTP, the most important part of the
Erlang programming language apart from the language
itself. This tutorial will begin with the necessary theory
in order to let you understand what finite-state machines
are and why they are so important that OTP has a
dedicated behaviour for supporting them. Then, the
Erlang way for supporting finite-state machines, which
is the gen_fsm behaviour and the gen_fsm module, will
be explained in more detail before you start developing
actual Erlang and OTP applications.
Next, the Erlang code of two handy and representative
examples will be presented. The first one implements
a finite-state machine with only two states; the other
example implements a finite-state machine with three
states, in order to let you understand how much the
complexity of an Erlang program is affected when you
have more states to process. This tutorial will also tell you about the Applications tab of the Observer graphical
application that allows you to see the tree of the Erlang
processes that run on your Linux system, which also
reveals the relationships between them.
The next tutorial in the Erlang series will talk about the
Mnesia database as well as the way Erlang deals with
data records, which allow you to have data persistency.
About finite-state machines
Every computer is a state machine: a device that stores
the status of something at a given time and can operate
according to its input to change the status and/or
cause an action or output for any given modification. A
finite-state machine is a state machine with a limited
number of possible states and is also called a finite-state
automaton. There are two types of finite-state machines:
deterministic and nondeterministic. What differentiates
a deterministic from a nondeterministic finite-state
machine is that on a nondeterministic machine, a
given state can have more than one possible transition
for the same input. Put simply, this means that on a
nondeterministic machine, you cannot always predict the
next state for a given input/event.
Figure 1 shows a simple finite-state machine that is
linear, which means that you should visit all of its states
in order to get to the last one. Additionally, you can only
get back to the first state from the last one in order to
start over. Generally speaking, such finite-state machines
are pretty simple and easy to understand, but as you will
see, most of the time things are not that straightforward!
So, in this finite-state machine you are obligated to visit
states A, B and C in order to get to state D. Additionally,
all states form a virtual circle.
Figure 1: A finite-state machine whose entry point is state A and which operates in a linear manner
Figure 2: A quite complex finite-state machine with many states that accepts many kinds of events, which in this case are natural numbers
Figure 2 shows a more complex example of a finite-state machine. The main difference between the finite-state machine shown in Figure 2 and the one in Figure 1
is that the finite-state machine of Figure 2 is not linear.
This means that from state S2 you can go to state S4 or
to state S6 or to state S1 or even stay at state S2! You are
also allowed to skip some states if you give the correct
input, which is the case with states S3, S2 and S6 where
you can go directly from S3 to state S6 and completely
bypass state S2 if you want.
Please bear in mind that finite-state machines that
model real-world problems tend to be complicated!
OTP and FSM
In this section, you are going to learn how OTP deals with
finite-state machines with the help of the necessary
functions and behaviours. But first you will learn about
the reason Erlang supports finite-state machines: if you
recall from the previous tutorials, Erlang was initially
used for supporting telephony equipment and networks.
The operation of such equipment is usually based on
finite-state machines. So, after the creators of Erlang
decided that finite-state machines were important, the
choice to support them in OTP was easy.
The gen_fsm behaviour
This is the core of every Erlang program that implements a finite-state machine using OTP. If you write the following code and save it as simple.erl and then try to compile it, Erlang will print various warning messages that will reveal the functions that you need to implement for the gen_fsm behaviour:
$ cat simple.erl
-module(simple).
-behaviour(gen_fsm).
-export([]).
$ erlc simple.erl
simple.erl:2: Warning: undefined callback function code_change/4 (behaviour 'gen_fsm')
simple.erl:2: Warning: undefined callback function handle_event/3 (behaviour 'gen_fsm')
simple.erl:2: Warning: undefined callback function handle_info/3 (behaviour 'gen_fsm')
simple.erl:2: Warning: undefined callback function handle_sync_event/4 (behaviour 'gen_fsm')
simple.erl:2: Warning: undefined callback function init/1 (behaviour 'gen_fsm')
simple.erl:2: Warning: undefined callback function terminate/3 (behaviour 'gen_fsm')
So, in the Erlang code of your application you will need
to implement the following functions:
• code_change/4: This function is used for live code
upgrades or downgrades. However, talking about this
subject is beyond the scope of this tutorial.
• handle_event/3: This function is for handling
asynchronous events that are sent to gen_fsm.
• handle_info/3: This allows the finite-state machine
to receive other messages than just events. However,
the presented examples will only handle events, so the
implementation of handle_info/3 will be minimalistic.
• handle_sync_event/4: This function handles the
events that are sent to the finite-state machine
synchronously using the gen_fsm:sync_send_all_state_
event function.
• init/1: This function is the entrance point to the
finite-state machine because this is where you define the
opening state of your machine.
• terminate/3: This function is called for performing
any cleanup operations that need to be done in your
Erlang application.
While it looks like a lot of effort to implement all these
functions, bear in mind that most of them have a
standard implementation.
Although with the gen_server behaviour, which was presented in the tutorial of the previous issue, you needed to have two separate files with Erlang code, this is not the case with gen_fsm, where the entire program can be in a single file. This happens mainly because gen_fsm does not offer any high availability features, so there is no need for a separate supervisor process.
Please bear in mind that the gen_fsm module is not
the most popular OTP module but when it fits the needs
of your application, it will save you lots of time because
it can simplify the development of the desired task and
make your Erlang code easier to read without adding
unnecessary complexity to it.
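To give a rough idea of what those standard implementations look like, here is a minimal sketch of a gen_fsm module with a single state. The module and state names are ours – this is not the article's simple.erl:
%% Minimal gen_fsm skeleton: one state (idle) plus the boilerplate
%% callbacks that erlc warned about above.
-module(skeleton).
-behaviour(gen_fsm).
-export([start_link/0]).
-export([init/1, idle/2, handle_event/3, handle_sync_event/4,
         handle_info/3, terminate/3, code_change/4]).

start_link() ->
    gen_fsm:start_link({local, ?MODULE}, ?MODULE, [], []).

%% init/1 picks the opening state and the initial state data.
init([]) ->
    {ok, idle, []}.

%% One callback per state: handle an asynchronous event while in 'idle'.
idle(_Event, StateData) ->
    {next_state, idle, StateData}.

%% The remaining callbacks are the "standard implementations".
handle_event(_Event, StateName, StateData) ->
    {next_state, StateName, StateData}.

handle_sync_event(_Event, _From, StateName, StateData) ->
    {reply, ok, StateName, StateData}.

handle_info(_Info, StateName, StateData) ->
    {next_state, StateName, StateData}.

terminate(_Reason, _StateName, _StateData) ->
    ok.

code_change(_OldVsn, StateName, StateData, _Extra) ->
    {ok, StateName, StateData}.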
An example with two states
Here we will present a small yet fully working example that uses a finite-state machine with only two states, which is not a very rare situation – imagine the light switches as well as the doors at your office or at home.
Figure 3 shows a finite-state machine with only two states, named yes and no, that will be used in the code example of this section. For each state, you will need to implement and export a separate function with the same name, which makes your code even easier to understand and maintain. The complete Erlang code of simple.erl can be seen in Figure 4.
Figure 3: A simple finite-state machine with just two states, named YES and NO. The Erlang code of simple.erl is based on this finite-state machine
Figure 4: The Erlang code of simple.erl, where a finite-state machine with two states, named yes and no, is defined and implemented using the gen_fsm behaviour
The execution of simple.erl must begin with the simple:start_link() function:
2> {ok, Pid} = simple:start_link().
{ok,<0.71.0>}
The only thing that you should take care of is keeping the returned process ID in a separate variable in order to be able to use it afterwards to send the desired messages to the process. Now that you have the process ID of the preferred process saved in the Pid variable, you can start sending messages to it:
5> ok = gen_fsm:sync_send_event(Pid, yes).
Remaining in yes state!
ok
7> ok = gen_fsm:sync_send_event(Pid, no).
Going to no state!
ok
As you can see, in order to send a message to the program, you will need to use a function provided by the gen_fsm module called gen_fsm:sync_send_event. Its first argument is the process ID of the process you want to pass the message to, whereas the second argument is the message/event itself.
For more examples of simple.erl in action, you can look at Figure 5.
Figure 5: Using the simple.erl application with the help of the gen_fsm:sync_send_event function, which sends events to the finite-state machine
Explaining the Erlang code
The most important Erlang code of simple.erl has to do with the implementations of the yes() and no() functions, which are the two states of the finite-state machine that is implemented. As the two functions are similar, only the yes() function will be explained here:
yes(no, _From, StateData) ->
io:format("Going to no state!~n"),
{reply, ok, no, StateData};
yes(_Event, _From, StateData) ->
io:format("Remaining in yes state!~n"),
{reply, ok, yes, StateData}.
The first branch of the yes() function allows it to process the no message in order to let the finite-state machine change state. The second branch allows the yes() function to accept every other message while being in the yes state without changing the state of the finite-state machine. The messages that are printed on the screen using the io:format() function allow you to understand what is going on behind the scenes, but they are not compulsory. The return values of the two branches of the yes() function are the most important part of the Erlang code because this is where you send the desired message to the gen_fsm module, return control to it and allow it to wait for the next event to come.
As you can see, the required Erlang code is much shorter than you could have imagined when you started reading this tutorial – this shows how good Erlang is at programming finite-state machines. Additionally, the export() statements at the beginning of the simple.erl module allow you to understand how the module works and what it does without having to read its entire code.
The next section will implement a finite-state machine with three states; it would be a good exercise for you to try to implement it on your own, based on the code of simple.erl, before looking at the actual code of threeStates.erl.
Why are finite-state machines important in OTP?
First and foremost, finite-state machines are important because they allow you to describe complex systems in a way that is easy to understand and implement – imagine having to describe what a finite-state machine with ten nodes does using just words! Second, they are important because they are computable, which means that finite-state machines can be fully implemented as computer programs. Moreover, they also allow you to create parsers for grammars. Last, they can be used in network protocols that keep the state of a connection – TCP is such a protocol!
Generally speaking, finite-state machines are used everywhere, especially in situations where you need to parse a message when it is received. So the next time you try to get a drink from a vending machine, feel free to assume that there might be a finite-state machine somewhere inside that machine!
Regular expressions and FSM
The term 'regular expression' comes from computational theory, which is a branch of computer science and mathematics. You can use a deterministic finite automaton (DFA) to implement regular expressions – a DFA is a finite-state machine that does not use backtracking. Backtracking occurs when the regular expression engine encounters a regex token that does not match the next character in the string.
The regex engine will then back up part of what it matched so far in order to try other alternatives. In other words, the regex engine knows it can backtrack to where it chose the option and can continue with the match by trying a different one. Perl regular expressions are implemented using a nondeterministic finite automaton (NFA) and therefore use backtracking.
Adding another state
This time you are going to deal with three states. Additionally, one of the three states will allow you to make a colour decision in order to make things even more interesting! After a successful colour selection, you will automatically return to the first state. Nevertheless, as you cannot always trust the user, this program will not wait forever for the user to choose a colour. After the defined timeout period expires, the finite-state machine will go to a different state instead of waiting forever! So, when you are at the colour state and the timeout period expires, the state machine will go back to the choose state. Once again, this feature does not require too much Erlang code to be implemented – just look for the TIMEOUT string in the code of threeStates.erl.
The finite-state machine for this project is presented in Figure 6 – you should clearly understand now that the graphical representation of a finite-state machine gives you almost all the information you need about its operation and makes things clear about the way a finite-state machine operates.
Figure 6: The finite-state machine that will be implemented in threeStates.erl – its states are ENTER, CHOOSE and COLOUR, with RED/GREEN colour choices and a TIMEOUT transition back to CHOOSE – which is much more advanced than the one presented in Figure 3
The related Erlang code is saved in threeStates.erl; you can see the full code in Figure 9. What is interesting in Figure 9 is the fact that most of the gen_fsm related functions have almost the same implementations as in simple.erl – only the format of the message returned by each function is different.
Figure 9: The Erlang code of threeStates.erl, where a finite-state machine with three states and timeout functionality is implemented
The most interesting part of threeStates.erl is the code that allows you to choose a colour using a function instead of having to remember the process ID of the Erlang process:
red() ->
gen_fsm:send_event(?MODULE, {clr,red}).
green() ->
gen_fsm:send_event(?MODULE, {clr,green}).
What the aforementioned Erlang code does is send the
appropriate message to the finite-state machine function
for you – the function of the active state will process the
message. As only the color() function can handle the
two events ({clr,red} and {clr,green}), this technique
does the desired job – the other two functions just ignore
both events.
The other interesting part of threeStates.erl is the
code of the color() function that handles the two
supported colours:
color(timeout, StateData) ->
io:format("Timeout! Going back to
CHOOSE state~n"),
{next_state, choose, StateData};
color({clr,red}, StateData) ->
io:format("Thanks for choosing the Red
color~n"),
io:format("Going to ENTER state!~n"),
{next_state, enter, StateData};
color({clr,green}, StateData) ->
io:format("Thanks for choosing the
Green color~n"),
io:format("Going to ENTER state!~n"),
{next_state, enter, StateData};
color(_Other, StateData) ->
{next_state, color, StateData,
?TIMEOUT}.
The color() function has four branches. The first is
for taking care of the timeout event, the second is for
handling the {clr,red} event, the third is for handling the
{clr,green} event, and the last is for managing everything
else. As you can understand, if at a given state you have
to process n events, you will need to have n+1 branches
at the respective function.
The last interesting part is the implementation of the
goToChoose() and the goToColor() functions:
goToChoose() ->
gen_fsm:send_event(?MODULE,choose).
goToColor() ->
gen_fsm:send_event(?MODULE,color).
Each of them sends the desired event to the gen_fsm
module, which is being processed by the function that
represents the active state of the finite-state machine.
This saves you from having to remember a more complex
command and from keeping the process ID of the Erlang
process that started with the start_link() function.
Running the code threeStates.erl, which as expected
starts with the execution of the start_link() function,
will help you test its operation:
2> threeStates:start_link().
Going to select a color!
{ok,<0.65.0>}
What you get from start_link() is a help message and
the process ID of the generated process, which you will
not need to keep. Then, you can go into a different state
by executing the goToChoose() function:
3> threeStates:goToChoose().
Going to CHOOSE state!
ok
While in the choose state, you can try selecting a colour, which will cause the finite-state machine to remain in its current state:
4> threeStates:red().
Remaining in CHOOSE state!
ok
In order to be able to select a colour, you will need to go to the colour state:
5> threeStates:goToColor().
Going to COLOR state!
ok
Last, you can select a colour from the available two and the finite-state machine will automatically return to its first state:
10> threeStates:green().
Thanks for choosing the Green color
ok
Going to ENTER state!
If the program waits too long for you to choose a colour, it will automatically return to the choose state and wait for you to call goToColor() again:
Timeout! Going back to CHOOSE state
8> threeStates:green().
Remaining in CHOOSE state!
ok
You should try using the Erlang code of threeStates.erl on your own in order to better understand its operation. As you can imagine, developing an Erlang application with even more states should be relatively easy as long as you have understood the presented Erlang code! So, now might be the right time to create your own finite-state machine and implement it in Erlang!
A deterministic and a nondeterministic finite-state machine
First of all, look at Figure 8 where you can see examples of the two different kinds of finite-state machines. If you look closely, you can easily recognise the nondeterministic finite-state machine even if you did not read the caption for it, because if you are in the S2 state on the right finite-state machine and get a c, you cannot know whether you will be staying in state S2 or going to state S1!
Although this might look like complicating things for no reason, nondeterministic finite-state machines have unique characteristics that make them necessary and handy – backtracking is just one of them!
Figure 8: A deterministic finite-state machine on the top and a nondeterministic one on the bottom. The S2 state on the latter makes all the difference
The Applications tab of Observer
As you might recall from the previous Erlang tutorials, the standard Erlang installation comes with the Observer graphical application that aims to make your life easier. The Observer has many tabs, including Applications. The purpose of this particular tab is to help you find information about processes and applications (Figure 7 shows the Applications tab in action).
The technique used for finding information about the process of the threeStates.erl program is as follows: first, you click on the process with a process ID of <0.50.0> (upper right). Then, a new window opens up (bottom right) and you go to the Dictionary tab, where you select the process ID of the Erlang shell. Then the upper-left window opens up and you go to the Dictionary tab. From the last tab, you select the process ID that was returned by the threeStates:start_link/0 function you executed and you can get the bottom-left window that shows information about process ID <0.64.0>, including memory and garbage collection data.
Such tools may not be suitable for everyday use, especially when you are busy doing actual coding, but when you have problems with your software, they might save the day, so do not underestimate their value.
Figure 7: The Applications tab of the Observer application will allow you to reveal information about any Erlang process you want
Other important things about OTP
At the time of writing, the latest Erlang version is 19, which came with a new behaviour for creating finite-state machines, called gen_statem. Although you can still use gen_fsm, gen_statem will replace gen_fsm in the near future, so be prepared for this transition. You can obtain more information about gen_statem at http://erlang.org/doc/man/gen_statem.html and at http://erlang.org/doc/design_principles/statem.html.
Should you wish to find out the functions needed for implementing the gen_statem behaviour, you can do the same trick you did earlier for finding out the required functions for the gen_fsm behaviour:
$ cat simple.erl
-module(simple).
-behaviour(gen_statem).
-export([]).
$ erlc simple.erl
simple.erl:2: Warning: undefined callback function callback_mode/0 (behaviour 'gen_statem')
However, this time the output is much less informative as it only lists one required function, so you will need to look at the documentation of gen_statem!
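For comparison, a minimal gen_statem module might look something like the following sketch – our own illustration based on the gen_statem documentation, assuming Erlang/OTP 19 or later, not code from this tutorial:
%% Minimal gen_statem sketch with one state (idle).
-module(statem_skeleton).
-behaviour(gen_statem).
-export([start_link/0]).
-export([callback_mode/0, init/1, idle/3, terminate/3, code_change/4]).

start_link() ->
    gen_statem:start_link({local, ?MODULE}, ?MODULE, [], []).

%% The single callback erlc complained about: choose how state
%% callbacks are dispatched (one function per state, like gen_fsm).
callback_mode() ->
    state_functions.

init([]) ->
    {ok, idle, []}.

%% State callback: EventType is cast | {call, From} | info | timeout.
idle(cast, _Event, Data) ->
    {keep_state, Data};
idle({call, From}, _Event, Data) ->
    {keep_state, Data, [{reply, From, ok}]}.

terminate(_Reason, _StateName, _Data) ->
    ok.

code_change(_OldVsn, StateName, Data, _Extra) ->
    {ok, StateName, Data}.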
Feature
IoT hacks
THE WHITE HAT’S
GUIDE TO
THE INTERNET
OF THINGS
Toni Castillo Girona shows you how to identify
security flaws in IoT devices and put mitigations
in place to protect yourself against the bad guys
Ethical hacking
This guide is solely for security-testing your own PC, IoT devices, server
or domain!
The Internet Of (Hackable)
Things (IoT) is here. It’s been
here for some years now and
it’s growing. Smart TVs, bulbs,
thermostats, pets, cars, wearables,
e-skin… you name it! If you want to see
how crazy things are becoming go to
http://iotlist.co and be amazed! Whereas
sometimes these devices are here to help
us all – i.e. medical and home security
applications – sometimes they are here
because of a tendency to put everything
on the net, even your oven or your favourite
coffee machine (http://bit.ly/IoT-coffee).
Most of these devices are poorly thought
out when it comes to security and more
often than not, security issues are present
in their firmware. It's here that you will
find the same sort of security flaws as in
any other piece of software, ranging from
command injection to memory corruption.
But the device is not the only hackable
thing here, of course. Some vendors have
no security at all in their cloud counterpart
implementations. The rush for having
everything connected to the internet is
the main culprit. Traditionally, engineers
designing and implementing fridges, TVs,
thermostats, bulbs and so on have never
worried about security vulnerabilities. They
did not need to. They have been dealing
all the time with safety issues instead,
adhering to standards in order to prevent
someone from being hurt by a device. This
market has evolved: almost everything you
can think of is (or is about to be) connected
to the internet, either directly or indirectly.
Now take all those engineers and put
them to work with microcontrollers (MCUs),
real-time operating systems (RTOSes),
web interfaces, network protocols and
cryptographic primitives in order to
smarten these devices and the handbasket
has arrived to take us all to hell. Besides,
some vendors do not implement security
policies during the design and, later on,
implementation of IoT devices either.
Lax security standards
As with any other sort of IT systems, IoT
vendors should adhere to well-known
security standards and follow a secure
implementation life cycle for their devices.
Unfortunately for us all, this is not so most
of the time. IoT devices blend the virtual
and the physical world together, so when a
vulnerability is detected and successfully
exploited, the consequences can be
harmful. This is why, when talking about
exploitation and hardening of IoT devices,
we need to take into account both safety
and security. Dr Barry Boehm (http://bit.ly/Boehm-wiki) beautifully expressed the
relationship between the two as follows:
• Safety: The system must not harm
the world.
• Security: The world must not harm
the system.
Have a quick look at your own house.
Go around its rooms and make a list of
all the IoT gadgets you have and how they
interact with the physical world: maybe
you’ve got a smart lock that takes care of
your garage’s door. A vulnerability there
could perfectly well expose your entire
house to burglars. Have you set up an IP
camera to watch your house when you are
away? A flaw in its web interface can let a
stranger spy on you and your family! Now
extrapolate this to the Industrial Internet
of Things (IIOT, see http://bit.ly/iiotattacks). IIOT devices are cyber-physical
systems (traditionally SCADA systems)
that now have similar capabilities to your
amazing last Christmas gizmo: internet
connectivity, a fancy app to control them,
a cloud infrastructure gathering tonnes
of data from them and complex machine
learning algorithms sitting in the middle
that compute trends and attempt to
foresee behavioural changes. What if a
vulnerability arises on one of these critical
systems? Forget about your garage’s door
or your cool IP camera with tilt capabilities
and nocturnal vision: we are talking about
real harm. Do we have your attention now?
Good; keep reading.
IoT devices are connected to the internet
not just to be monitored or controlled, but
to send data to the cloud. Most of these
devices aren't capable of storing and
processing the data themselves, so they
are constantly communicating with the
cloud. This data is either processed as soon
as it is received or later, in order to make
informed decisions or to keep analytical
data for further use. Apart from raw data,
there may be user credentials and sensitive
information. If this infrastructure is not
properly secured, it can be attacked and
exploited. So when you purchase a new
electronic gadget, it is not only the device
design and implementation flaws that
matter; it's the vendor’s cloud component
that needs to be taken into account, too.
In this feature, you will learn about some
poorly thought out IoT devices, the most
common vulnerabilities present in them
and how to go about finding and exploiting
flaws in their firmware and reverse
engineering them with radare2. Have fun!
The quick
IoT fix list
As soon as you
get a new IoT
gadget, change
the default
credentials,
update firmware
and protect it
behind a firewall.
If it doesn't need
internet access
keep it isolated.
IoT: Vulnerabilities explained
IoT devices are basically everyday objects with
embedded devices in them that interact with
our physical world. They have sensors to send to
and receive data from the outside world, such as
temperature, light, voice and so on. These devices
run on different operating systems, such as real-time operating systems (e.g. QNX) or Linux-based ones.
There are a bunch of APIs that allow these devices
to communicate with one another, e.g. REST-ful,
MQTT and CAN, and of course most of them can be
connected to the internet. Other devices may use
Bluetooth or ZigBee and so on to receive and transmit
data (the latter is widely used by smart home devices).
The vast majority of these devices run on MIPS or ARM
processors. Because they run software, IoT devices
also inherit the same issues found in computers.
Right You can obtain the mitigations applied to a bunch of binaries from a TP-Link NC200 camera's firmware by using the checksec.sh script (www.trapkit.de/tools/checksec.html)
Everything runs as root
Some IoT devices run different services as root. As a systems manager, the first thing you learn not to do is to run Apache as root, right? This is known as 'the principle of least privilege'. Well, it seems that some vendors have turned a blind eye to that and they keep running everything as root. So if you happen to find a classic command injection on, say, the web interface of your fancy IP camera, then it's exploitable.
Default credentials or no credentials at all
Some IoT devices ship with default usernames and
passwords that end-users do not care to change. In
some cases, the situation is even worse: there are no
credentials at all, so if you can communicate with the
device, then it’s yours! There’s a good example here:
http://bit.ly/coffee-hack.
Backdoors (software or hardware)
Backdoors can be intentional or unintentional. Some
devices may ship with non-documented means of
opening a remote shell with super-user privileges.
This is something intentional, either by the vendor
itself or by a third-party supplier: bear in mind
that most IoT devices are made of different parts,
such as the device itself, the cloud infrastructure
and sometimes the smartphone app. During the
development phase of an IoT device, sometimes the
developers have direct and special ways in to speed
up the debugging process. Once the system is ready,
it is not uncommon to forget about this completely and
ship the device without disabling these mechanisms
first. (For more details on these unintentional
backdoors go to: http://bit.ly/backdoor-goip.)
Software vulnerabilities in the firmware
Every single device (not only IoT devices) runs on
firmware: from your hard disk, non-smart GPS and
so on. Because firmware is software, it has the same
sort of security vulnerabilities you can find in any
piece of software running on your computer: such as
command injection and buffer overflows. Most of these
devices use HTTP-encapsulated APIs (ie REST), and
some of them even run actual web servers (lighttpd is
a winner here), so you may expect the same sort of web
vulnerabilities as well. Later on, you will see a couple
of real cases of web exploitation in some IP cameras.
For the time being, go read about a classic command-injection flaw here: http://bit.ly/root-camera.
No binary mitigations
Modern operating systems and processors implement a
bunch of binary mitigations to make memory corruption
flaws more difficult to exploit: ASLR, RELRO, Full
RELRO, Stack Canaries and NX [Tutorials, p32, LU&D168
cover modern software protections – go check it out!]
So, when it comes to IoT devices, you would expect to
have the same sort of protections. But due to hardware
constraints and poor design decisions, most IoT devices
ship with no mitigations at all. If anything, the binaries
are normally protected by the NX bit, so common buffer
overflows are more difficult to be exploited without
using Return to Libc or ROP techniques. But it's still
easier than on a modern computer…
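If you have unpacked a device's firmware, a quick way to see which of these mitigations are present is the checksec.sh script mentioned in the screenshot caption. A rough sketch, assuming the trapkit version of the script and an unpacked root filesystem under squashfs-root/:
# check a single binary extracted from the firmware for RELRO, stack canaries, NX and PIE
./checksec.sh --file squashfs-root/bin/busybox
# ...or sweep a whole directory of binaries at once
./checksec.sh --dir squashfs-root/usr/sbin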
Unsafe firmware upgrading
Upgrading the firmware is a common issue for these
devices. Some vendors allow public downloading of the
firmware as a binary blob from their website. This is
good for auditing the software, of course, and later on
you will learn how to unpack and look for low-hanging
fruit in firmware. However, the vast majority of IoT
devices blindly trust in whatever binary blob you are
supplying them with and happily go on with the upgrading process! This is because they do not use digitally signed firmware. So it is not uncommon to
download or extract a device’s firmware, backdoor
it, regenerate the binary blob and finally send it back
to the device by any means feasible. P0wned! A good
example of providing an ‘evil software upgrade’ for
a really curious IoT device, a rifle with Wi-Fi, can be
viewed here: https://youtu.be/ZcukSF7ruZY.
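A common first step when auditing a downloaded firmware blob – our suggestion, not something prescribed by this feature – is to carve it apart with binwalk, assuming the image has been saved as firmware.bin:
# scan the blob for embedded filesystems, kernels and compressed data
binwalk firmware.bin
# extract whatever was identified (typically a squashfs root filesystem)
binwalk -e firmware.bin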
Man-in-the-middle attacks
Related to the previous flaw, most IoT devices do
not use certificates – neither for authenticating
themselves (client certificates) nor for authenticating
the remote servers where they send data to or connect
(server certificates). This is really dangerous, because
it allows attackers to sit in the middle and happily
capture or redirect every single packet – man-in-the-middle attacks or MITMs – or impersonate servers
and clients too. Sometimes devices implement some
sort of encryption (e.g. TLS) to connect to the remote
server, normally the cloud, securely. But then they fail
at checking the certificate, either by accepting any
certificate, or by not checking whether it has expired
or has been revoked at all! So packet-sniffing, packet-replaying and packet-injection, among other well-known
network attacks, are possible. This applies to non-TCP
stacks as well, such as Bluetooth Low Energy (BLE, see
http://bit.ly/Bluetooth-hack).
Hardware flaws
IoT devices have serial ports and JTAG connectors
that allow developers to debug them during the
implementation process. They connect to these
devices using these special ports. Once the system
is ready, it is shipped more often than not with all
these ports enabled – either by mistake or because
they do not care at all. So if you are lucky enough, you
can attach a serial terminal to any of these ports (e.g.
Minicom) and extract the device’s firmware, interrupt
the U-boot booting process or gain a root shell. There
are myriad devices with a plain root shell already
sitting there for anyone with a null-modem cable to
connect to! See https://youtu.be/h5PRvBpLuJs.
Same
problems
IoT devices run
on firmware, so
you can expect to
see the same sort
of vulnerabilities
found in any
software running
on PCs. Therefore,
you can use
almost the
same tools and
techniques to find
and exploit them.
Cloud-counterpart issues
Most IoT devices send data to the cloud. If they do not
implement TLS and proper digital signatures, client and
server certificates and so on, they can be vulnerable
to MITM attacks. But sometimes they do implement
all these protections and yet, they fail at protecting
other parts. As the saying goes: a chain is only as
strong as its weakest link. Well, for vendor CloudPets,
its weakest link turned out to be the unprotected
MongoDB database that was publicly accessible from
the internet through an open URL. The company did not
protect this database at all, and Shodan (www.shodan.io), a search engine for IoT devices, indexed it. This also demonstrates that even if you are using well-known APIs and cloud providers for your IoT devices (Amazon AWS in the case of CloudPets), you can still get it wrong!
Shodan, Censys (https://censys.io) and Zmap (https://zmap.io) are common search engines and tools to map the entirety of the IPv4 range of connected hosts on the internet – and that includes, of course, IoT devices as well. Later on, we will use Censys to look for potential vulnerable IoT devices. [See Tutorials, p40 LU&D170 for how to use Zmap and masscan along with Censys and its Python API]. Troy Hunt wrote a detailed post on the CloudPets incident here: http://bit.ly/cloudpets-leak.
The Mirai botnet
This botnet took advantage of default
credentials of thousands of IoT devices
connected to the internet, exposing themselves
via Telnet. Whenever one of these devices was
publicly accessible, Mirai tried to log in using a
list of well-known usernames and passwords.
For those devices keeping the same default
credentials, Mirai was successfully logged in
with a privileged account, so it was able to
download and install the bot with administrative
privileges. This incident, which took down
a large section of the internet, would never
have happened in the first place if end users
had changed the default credentials and had
protected the devices behind their firewalls.
The Telnet service should never have been
accessible from the internet either. For those
cases where the device must be accessible,
an SSH tunnel can be considered. For
example, if you want to access your smart
thermostat from work, first establish an SSH
tunnel with your SSH server at home and use
your smartphone app to connect to the local
thermostat via the tunnel.
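If you want to check your own kit for the weakness Mirai abused, a few lines of Python are enough. This is only a sketch: the address is a placeholder for one of your devices, the credential list is a guess at common factory defaults, and it uses the standard library's telnetlib module (deprecated and removed in the newest Python releases):

import telnetlib

HOST = "192.168.1.60"                       # placeholder address of YOUR device
DEFAULTS = [(b"admin", b"admin"), (b"root", b"root"), (b"root", b"12345")]

for user, password in DEFAULTS:
    try:
        tn = telnetlib.Telnet(HOST, 23, timeout=5)
    except OSError:
        print("Telnet is not reachable at all - good news in this case")
        break
    tn.read_until(b"login: ", timeout=5)
    tn.write(user + b"\n")
    tn.read_until(b"Password: ", timeout=5)
    tn.write(password + b"\n")
    index, _, _ = tn.expect([rb"[#$] "], timeout=5)   # did a shell prompt appear?
    tn.close()
    if index != -1:
        print("Factory credentials still work:", user, password)
        break

If the script ever reaches a prompt, change the password and make sure Telnet is not exposed beyond your LAN.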
Left Opening some IoT devices may reveal serial ports or JTAG connectors. This picture shows the circuit board of a router with three pin connectors (a UART). Source: http://www.devttys0.com
(Not) safe search
The bad guys use well-known public databases like Shodan or Censys to find vulnerable IoT devices. Query these yourself to see if any of your devices pop up.
IoT: Reverse engineering
As with any other IT system, IoT devices can be
hacked and exploited to perform evil things. It does
not matter if we are talking here about a smart
toaster or a smart fridge; they’ve got MCUs and RAM,
a filesystem and network connectivity too. They run
an operating system, often GNU/Linux, and therefore
they can be located, port-scanned, exploited and then
used for whatever dark purposes the black hats have
decided. But, how do they go about finding vulnerable
devices in the first place?
Find potentially vulnerable IoT devices
Shodan and Censys are well-known databases and
search engines that perform massive port-scans of
the entirety of the IPv4 range of hosts. That includes
IoT devices, as long as they’re visible with a public
IP address. So let’s say that you’re a bad guy. You’re
looking for TP-Link NC200 IP cameras all over the
internet. So you go to Shodan (http://shodan.io) and
type this in its search text box: title:"NC200 Admin - Login". You are only interested in finding cameras located in, say, Italy. So you repeat the previous search, adding a new filter like this: title:"NC200 Admin - Login" country:"IT".
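The same query can be scripted. The sketch below uses the official shodan Python package (pip install shodan); the API key is a placeholder you would replace with your own, and the result fields are the ones the library documents for search matches:

import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"    # placeholder - free accounts come with a key
api = shodan.Shodan(API_KEY)

# The same query as in the text: NC200 login pages, narrowed down to Italy.
results = api.search('title:"NC200 Admin - Login" country:"IT"')
print("Total results:", results["total"])
for match in results["matches"][:10]:
    print(match["ip_str"], match["port"], match.get("org"))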
The Moxa Nport series allows serial devices to be made network-ready in an instant. These units are used mainly to connect industrial machines, such as gas equipment, to the internet in order to control and monitor them. There are a bunch
of well-known vulnerabilities affecting Moxa Nport
5110 devices running old firmware versions (see
http://bit.ly/ics-moxa). A bad guy would search for
these devices across the internet in order to exploit
them. It’s time to find these devices, now using Censys
instead of Shodan. Navigate to the Censys website
(http://censys.io), create an account if you don’t have
one already, then log in and type this one-liner in the
search text box: 23.telnet.banner.banner:"Model
name: NPort 5110" and 23.telnet.banner.
banner:"Firmware version: 1.1".
So there are still publicly accessible, vulnerable Nport devices out there! The bad guys could also look for already open devices that don't ask for credentials. We know – thanks to the Nport 5110 user's manual – that once you are authenticated, the system presents a menu with the string '<<Main Menu>>'. Let's find Nport devices that do not ask for a password! Go back to Censys and type this into the search box: 23.telnet.banner.banner:"<< Main menu >>".

Reverse-engineer with Radare2
For those devices with no apparent well-known vulnerabilities, black hats will spend quite a long time reverse-engineering some of their binaries. This is known as 'static binary analysis'. It is a complex and demanding area of expertise, with plenty of tools designed to aid the reverse engineer. For IoT devices, reverse engineering is more or less the same as for binaries meant for computers. The main difference here is the architecture: instead of X86_64 or i386, you will be dealing with ARM and MIPS most of the time.
IDA Pro is a classic disassembler widely used by reverse engineers, but it is not open source. Radare2 (https://github.com/radare/radare2) aims to be the de facto reverse engineering framework for UNIX systems. So, are you ready to reverse-engineer your first binary? First, install radare2 (if you prefer, you can install the Parrot or Kali distros; radare2 ships out-of-the-box with them). Then, use the simple C program on the cover disc and on FileSilo called pass.c and compile it with gcc -O0 pass.c -o pass.
As you can clearly see, this is a very simple C program that expects a 'key'; if it is not the right one, it shows an error message. When reverse engineering, it is better to start this way, doing what are known as 'crackmes': http://bit.ly/crackmes. It is legal, and you don't have to worry too much about breaking things. Now, compile it and imagine you don't have the source code. You don't even know that the key is actually inside the binary. What you do know is that the software always shows the message 'Invalid key', because you don't know what the right key is. Start radare2 with r2 pass, then type 'aaaa' at the prompt in order to analyse the binary:
[0x00400450]> aaaa
Look for the address where the 'Invalid key!' string is located using the iz command:
[0x00400416]> iz~Invalid key
vaddr=0x0040062c paddr=0x0000062c ordinal=002 sz=13 len=12 section=.rodata type=ascii string=Invalid key!
What you need now is to find those opcodes that reference this address (0x0040062c). That will probably be a position in the code segment outputting the string to stdout. The command axt will give you this info:
[0x00400416]> axt @0x0040062c
data 0x400583 mov edi, str.Invalid_key_ in main
The address is 0x400583 – bear in mind that these offsets may vary on your system. You can go to this address and then disassemble some bytes:
[0x00400583]> s 0x400583
[0x00400583]> pd 10
0x00400583      bf2c064000      mov edi, str.Invalid_key_
0x00400588      e883feffff      call sym.imp.puts
Look at offset 0x00400588: a call to sym.imp.puts is made. This is the imported function puts, and it will show the string stored in EDI. So after executing these instructions, the error message will be output. What you need to do, of course, is to avoid this execution path entirely.

Analyse binaries with Radare2's Visual Mode
Node code: in Visual Mode, the assembly code for the selected node in the mini-graph is shown here (offset 0x400583).
Pseudo-code: pseudo-code is shown beside the assembly, so you can easily locate strings and common function names, and understand what the code does in a C-like form.
Mini-graph: Radare2 lets you see a mini-graph showing all the possible execution paths – branching when a conditional jump is evaluated to TRUE (t) or FALSE (f).
Offset graph: the graph helps you spot any important branching in the code quickly, as in our example: the string comparison and the invalid key string.
Function code: the assembly code for any function is shown with offsets, op-codes and mnemonics. Fancy ASCII arrows help you follow the code between jumps, too.
Jump key: branches taken when a conditional jump evaluates to TRUE are shown as 't'; those for FALSE are marked as 'f', and unconditional jumps are 'v'.

Radare2 comes with the 'Visual Mode',
a graphical representation of the binary and its
branches to speed up the reverse engineering process;
type VV in the prompt to activate it. Use the cursors to
move about a bit; try to spot the imported function sym.
imp.strcmp. This is where (you assume) the key you
are inputting is compared to the right one. The branch
to ‘invalid key’ is shown (the jne instruction evaluates
to True). So you don’t want to branch to address
0x400583. Therefore, all you have to do is to patch the
jne instruction. Radare2 allows you to patch binaries
on-the-fly, and the changes are permanent! To ensure
jne evaluates to False whatever key you input, you
have to patch this instruction and replace it with the
opposite one: je. Type : to gain a prompt, and look for
the address for this instruction:
:> /c jne 0x400583
0x00400575   # 2: jne 0x400583
Go to the offset where the instruction is located:
:> s 0x00400575
Reopen the binary as read/write before attempting to
overwrite bytes:
:> oo+
File pass reopened in read-write mode
Now you are ready to patch the binary using:
:> wx 740c
Make sure that the instruction has been successfully patched before exiting radare2 (you can just disassemble it by using the pd command):
:> pd 1
0x00400575      740c      je 0x400583
Now, run the binary and pass whatever key you want:
./pass CRACKME
Login successful!
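Incidentally, the whole interactive session above can also be scripted from Python through radare2's r2pipe bindings. A minimal sketch, assuming pip install r2pipe and the same pass binary in the current directory (and remembering that the offsets may differ on your build):

import r2pipe  # pip install r2pipe; it drives the radare2 you already installed

r2 = r2pipe.open("./pass")
r2.cmd("aaaa")                           # analyse everything, as in the interactive session
print(r2.cmd("iz~Invalid key"))          # locate the 'Invalid key!' string
print(r2.cmd("axt @ 0x0040062c"))        # who references it? (offset taken from the step above)
info = r2.cmdj("ij")                     # commands ending in 'j' return JSON - handy in scripts
print(info["bin"]["arch"], info["bin"]["bits"], "bits")
r2.quit()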
So this is sort of a crash course on reverse engineering! Radare2 can also be used with IoT binaries, of course. Some vendors may strip their binaries, therefore making the names of functions and symbols vanish from the binary. Naturally, this would render the reverse engineering process far more difficult. Radare2 is a memory eater when it comes to analysing huge binaries, so be warned!

Binary cousins
The process of reverse-engineering IoT binaries is similar to reversing binaries compiled for OSes running on PCs; the key difference is they use ARM or MIPS architecture most of the time.

Check your MCUs
Most IoT vendors share the same software and MCUs/MPUs in their implementations, so if you know of a vulnerability affecting one IoT device, the chances are that other devices may have it.

IoT: Firmware analysis
Right Firmalyzer has detected dangerous shell invocations in /sbin/ipcamera along with default credentials in /etc/passwd
Audit a device’s firmware using Firmalyzer
You can still search for vulnerabilities even if you don’t
own an IoT device. In order to do so, all you need is
the IoT device’s firmware. Some vendors offer their
firmware for free on their website, even for non-owners of a particular device. So you can download the
firmware and then analyse it. Some vendors only offer
their firmware from the device itself; in such cases,
getting the firmware could be trickier but still feasible
by using proxies like Burp. In some particular cases, the
vendor requires valid credentials in order to obtain the
firmware. For those cases where there is no way to get
the device’s firmware, you have to be creative. Either
you delve into hardware hacking by looking for serial
ports (as described in this feature) in order to extract it,
or you exploit some vulnerability that can lead to obtaining
part of the files in its filesystem. It’s time to analyse
your first firmware! Download TP-Link IP Camera NC200
firmware (this is a well-known poorly implemented
piece of software) first:
wget http://static.tp-link.com/resources/software/
NC200_V1_141114.zip
Now, perform a quick analysis by using Firmalyzer; navigate to https://firmalyzer.com/#analyze and click on 'Upload a Firmware'. Choose the firmware just downloaded and wait for the analysis to complete. Those juicy files will be rendered in red; you can download them or view their contents directly on the web. Have a look at the /sbin/ipcamera binary (in red). Firmalyzer has marked this binary as potentially exploitable because of this call:
shellinvocations: /usr/local/sbin/autoupgrade.sh %s %s &
If you click on 'Binary Stats', you will find all the memory addresses where a call to system is performed, along with dangerous API calls (such as memcpy and so on). This binary is probably a good candidate for finding memory corruption flaws. What are you waiting for? It is time for you to start analysing this binary with radare2!

Extract a device's firmware using binwalk
Using online services such as Firmalyzer is quick. But sometimes you might want to extract all the files inside a firmware binary blob yourself, in order to perform a more precise analysis. Extracting the contents of a binary blob is easy with binwalk. You can combine binwalk with other tools, such as our old friend dd, to extract what you really need from the firmware. In the world of IoT security, extracting the firmware is a common task. Then, you can execute your own static binary analysis tools and even perform dynamic analysis, as well as running the binaries under the QEMU emulator. Because you have all the files at your disposal, you have access to configuration settings, the web interface source code and so on. Time to practise a bit: you will extract the same firmware you have downloaded before, so go unzip it and then run binwalk:
unzip NC200_V1_141114.zip
binwalk nc200_1.4.0_Build_141114_Rel.18261.bin
A list of the binary blob contents along with the offsets will be presented to you. As you can see, this camera is running a Linux operating system with a JFFS2 filesystem and the architecture is MIPS. Binwalk can extract all the files from this binary blob for you, but before that you need to install the Jefferson extractor – binwalk uses external extractors in order to extract almost everything you could think of when it comes to binary blobs. First, install its dependencies:
apt-get install python-lzma
pip install cstruct
Then, clone and install Jefferson itself:
git clone https://github.com/sviehb/jefferson.git
sudo python setup.py install
Extract all the files now from the firmware:
binwalk -reM nc200_1.4.0_Build_141114_Rel.18261.bin
The previous command will put all the extracted files inside the directory _nc200_1.4.0_Build_141114_Rel.18261.bin. The JFFS2 filesystem contents will be located under the jffs2-root directory.
Now that you have all the files extracted, you are free to look for interesting things, such as default credentials, web implementation flaws, backdoors and well-known vulnerable software versions.

Firmware auditing
You don't need to own an actual IoT device to audit its firmware. It is also feasible to interact with a device's binaries (using QEMU) to perform dynamic analysis, too.

Hack a couple of cameras
01 Secure against attacks
Secure your home against external IoT attacks. Use your computer to scan your network to determine which IoT devices are responding to a local IP: fping -g 192.168.1.0/24; for those devices that are alive, port-scan them: nmap -P0 --script banner IP; if they have open ports, try to determine if they are necessary – if not, close them. Make sure you don't have any port redirection set up in your home router to avoid external connections to your devices.
02 Test authentication
To test a TENVIS T6812 IP camera web interface authentication routine, set up a web proxy (such as Burp or ZAP) and make legitimate requests using valid credentials. After a successful login, Burp shows you that the authentication credentials are sent in the Authentication header, base64-encoded. This reveals a lot about the authentication mechanism used by your camera. Since authentication is sent in a header, it may be processed by the camera's web server itself.
03 Fuzz it
Fuzz the authentication mechanism by sending a huge number of characters in the Authentication header to see if, at some point, the camera stops responding. You can do that using Burp or ZAP, or a simple Python script (there's a minimal sketch after step 06). When fuzzing, you normally start with some characters in the payload and then, at each iteration, you increase this number until the system crashes (if it has a vulnerability). For this camera, when the payload is 188 bytes long, it reboots on its own!
04 Brute-force directories
Look for low-hanging fruit in your TENVIS web interface by brute-forcing directories. Open Burp, intercept some HTTP requests to your camera and then, pointing at its URL, right-click and choose Engagement Tools/Discover Content. Click 'Session is not running' and wait. You'll see a directory, 'tmpfs'. This directory should not be accessible from the web interface. Point your web browser to this URL and you will see a bunch of interesting files!
05 Find out more
The ipc_server file is a binary, probably the server controlling the CGI interface of the TENVIS camera through the web interface. Download it via your web browser and get some info from it: readelf -h ipc_server. Readelf tells you this is an ARM32 Little Endian executable. So, you know the camera runs on ARM. Before trying to execute it inside a QEMU chroot jail, get info about all the libraries this binary needs: readelf -d ipc_server|grep NEEDED.
06 Send to Firmalyzer
Send the ipc_server binary file to Firmalyzer to perform a static analysis. Once the file has been uploaded, click on Binary Stats and wait for a while until the analysis finishes. Scroll down a bit and you will see an interesting finding: apparently, there may be a code path allowing command injection! Have a look at the binary mitigations as well; as already introduced earlier in this feature, the only mitigation enabled for this binary is the non-executable bit (NX).
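The 'simple Python script' mentioned in step 03 could look something like the minimal sketch below, for use against your own camera only. The URL is a placeholder, the third-party requests module is assumed (pip install requests), and the header name (Authorization, which carries HTTP Basic auth) may need adjusting to whatever your proxy showed you in step 02:

import base64
import requests  # pip install requests

CAMERA_URL = "http://192.168.1.50/"   # placeholder address of YOUR camera
TIMEOUT = 5                           # seconds before we assume the device stopped answering

for size in range(8, 1025, 4):        # grow the payload a few bytes at a time
    payload = base64.b64encode(b"A" * size).decode()
    headers = {"Authorization": "Basic " + payload}
    try:
        r = requests.get(CAMERA_URL, headers=headers, timeout=TIMEOUT)
        print(size, "bytes ->", r.status_code)
    except requests.exceptions.RequestException as err:
        # No answer: the camera may have crashed or rebooted at this payload length
        print(size, "bytes -> no response (", type(err).__name__, ")")
        break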
Interact with the firmware dynamically
Another common task is to perform static,
dynamic or symbolic analysis of binaries that you don't know beforehand are vulnerable. Because this architecture
(MIPS) is different from the one you are
probably running (x86 or x86_64), you
need an emulator to be able to perform
dynamic analysis or simply to execute
and interact with the binaries. QEMU to
the rescue! Go to the directory where the
usual Linux root filesystem is located:
cd 100.extracted/432000.extracted/cpio-root
Here you will find BusyBox. Most IoT
devices ship with BusyBox. Some versions
have well-known vulnerabilities, so one
thing you can do is to determine which
BusyBox version this firmware ships with.
First, you need to know BusyBox binary
architecture; the readelf command will
provide you with this information:
readelf -h bin/busybox
…
Class:    ELF32
Data:     2's complement, little endian
Machine:  MIPS R3000
According to the readelf output, this is
a MIPS 32-bit Little Endian binary. In
order to execute it interactively on your
computer, you will need the appropriate
QEMU emulator, in this case qemu-mipsel-static (mipsel stands for MIPS, little-endian). First, install QEMU with apt-get install qemu-user-static. Copy the binary qemu-mipsel-static into the firmware
directory and then execute it within a
chroot environment like this:
cp /usr/bin/qemu-mipsel-static .
chroot . ./qemu-mipsel-static bin/busybox | head -1
BusyBox v1.12.1 (2014-11-03 00:35:15
PST) multi-call binary
According to the previous output, this
firmware is running BusyBox 1.12.1. There
are a couple of well-known vulnerabilities
affecting this version of BusyBox (see
http://bit.ly/busybox-stats).
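If you do this sort of triage often, you can read the relevant ELF header bytes yourself instead of calling readelf each time. A small sketch – the busybox path is whatever your own extraction produced:

import struct

MACHINES = {0x03: "x86", 0x08: "MIPS", 0x28: "ARM", 0x3E: "x86-64", 0xB7: "AArch64"}

with open("bin/busybox", "rb") as f:          # path inside the extracted firmware
    ident = f.read(16)                        # the e_ident block of the ELF header
    if ident[:4] != b"\x7fELF":
        raise SystemExit("not an ELF file")
    bits = 32 if ident[4] == 1 else 64        # EI_CLASS: 1 = ELF32, 2 = ELF64
    endian = "<" if ident[5] == 1 else ">"    # EI_DATA: 1 = little-endian, 2 = big-endian
    f.seek(18)                                # e_machine sits right after e_ident + e_type
    (machine,) = struct.unpack(endian + "H", f.read(2))

print(MACHINES.get(machine, hex(machine)), bits, "bit,",
      "little" if endian == "<" else "big", "endian")

For the NC200 firmware this reports MIPS, 32-bit, little-endian – exactly the information you need to pick qemu-mipsel-static.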
IoT: Camera hacks

07 Hack a second camera
Now let’s try the TP-Link NC200,
firmware v1.1. When a valid user exists and
the password is invalid, it shows ‘Wrong
password’. If the user doesn’t exist, ‘User
not valid’ is shown. So you can do user-enumeration and password brute-forcing! Use
Burp or ZAP to intercept the authentication
parameters to see how they are sent to the
web app (a simple POST request).
08 Get network info
Once you have access to a device, try to get as much info as possible about the network and other hosts/settings you can see from this compromised device. For example, if you've gained access to a TP-Link NC200 camera using its default credentials or by guessing the 'admin' password, you can get information about the Wi-Fi it is connected to from the web interface.
09 Try common passwords
You can write a Python script to guess the admin password (there's a minimal sketch after step 12). Get a list of the most-used passwords: http://bit.ly/password-list. The TP-Link camera does not have mitigation against password brute-forcing, so you can leave your script running for days. But try admin first: you never know!
10 Protect your devices
First, make sure port-forwarding is disabled. Now, set up a local secure shell server in your network, say 192.168.1.2, and enable only key-pair authentication and install fail2ban as well. Once this is done, set up port forwarding in your router to allow external connections to your server (set up a different port than 22/tcp, e.g. 2221/tcp).
11 Use an SSH tunnel
Connect to your IoT devices using an SSH tunnel: ssh -N -L your_port:local_iot_ip:iot_port user@your_public_ip. As
well as preventing attackers from connecting
to your devices, they won’t be indexed by
Shodan or Censys. In fact, you’re adding
another layer of authentication: only by using
your private key will you be able to connect to
your SSH server and start the tunnel.
12 Test it out
Ensure your devices are safe from the
external world by querying Censys (http://
censys.io) and Shodan (http://shodan.io)
using your public IP address. Type ‘YOUR_PUBLIC_IP’ in the Censys search box, and ‘ip:YOUR_PUBLIC_IP’ in the Shodan search
box. Port-scan your own public IP address
from the internet as well, from time to time.
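Here is a minimal sketch of the guessing script mentioned in step 09, for use against your own camera only. Everything device-specific is an assumption: replace the URL and the form field names with whatever you captured in Burp or ZAP in step 07, and point WORDLIST at the password list linked in step 09.

import requests  # pip install requests

LOGIN_URL = "http://192.168.1.50/login.fcgi"   # hypothetical endpoint - take the real one from Burp
WORDLIST = "passwords.txt"                     # the most-used-passwords list from step 09

with open(WORDLIST) as f:
    for password in (line.strip() for line in f):
        data = {"username": "admin", "password": password}   # field names are assumptions too
        r = requests.post(LOGIN_URL, data=data, timeout=10)
        if "Wrong password" not in r.text:     # the NC200 error string quoted in step 07
            print("Possible hit:", password)
            break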
Fintech
THE FUTURE OF MONEY
New open standards are quickly gaining ground for moving
digital money between multiple platforms alongside the
rise of the blockchain. Both herald a revolution in financial
technology – the future of money is open
In all the hype around Bitcoin,
the real story – blockchain
technology – has struggled to
be heard outside of tech
circles. There are a number of interesting
open source blockchain projects (see
Opening the Blockchain box, below), but the
fastest growing is Hyperledger.
Under the watchful care of the Linux
Foundation and with strong backing
from IBM, it’s built for real business use.
Hyperledger enables the same anonymity,
trackability and security as Bitcoin,
but allows the construction of private
blockchains with membership services
and assigned roles. Fabric is the first
Hyperledger implementation to be released
from incubator status.
We caught up with IBM’s Chris Ferris –
who sits on the project’s governing board,
as well as helping to manage the release
process for v1.0 of Fabric – about the
challenges posed by managing a project
across so many companies with competing
priorities: “It is fairly common in large
open source projects to have challenges
attendant to getting lots of stakeholders
engaged and aligned," he tells us. “With
Hyperledger Fabric, the greatest challenge
is helping prospective contributors
find something to do, or helping them
get their idea into the code base. This
is largely due to the fact that the pace
of IBM contributions can be somewhat
overwhelming to someone new to the
project and platform.
“Much of the code we published in v0.6
of Fabric has been refactored,” he adds,
“and in some cases completely replaced to
enable us to address the things we learned
from working with clients and partners with
early versions of the platform.”
Ferris chairs the Technical Steering
Committee, and the rapid growth – and
scale of collaboration – is a challenge:
“More and more people are joining and
contributing to the platform. We now have
129 contributors representing at least 24
companies (many aren’t self-identified
with an employer, which is common in open
source). IBM’s share has gone from over
90% of the contributors to 46%, while at
the same time increasing its number of
contributors from mid-30s to over 60!”
Fabric is written in Go, but a C++
implementation, Iroha, is in use by
Fujitsu, and in turn has libraries for Swift
and JavaScript. Intel has implemented
Sawtooth, with a ‘proof of elapsed time’
in Python. Hyperledger is an architecture,
and a set of requirements, rather than
any one software implementation. The
original white paper is still worth reading:
http://bit.ly/Hyperledger. Meanwhile, the
pace of new announcements coming from
the Hyperledger project is breathtaking.
Above Chris Ferris, busy chair of
Hyperledger’s Technical Steering Committee
“I am always energised when working
in a new technical domain that has huge
industry potential, such as blockchain,”
says Ferris. “Hyperledger has exceeded
my wildest expectations in the first year
and some. We now have over 129 members
and seven top-level projects and the
inter-project collaboration is starting to
take hold, which should inject even more
excitement in the community.”
Opening the blockchain
It's less than a decade since the blockchain appeared: a list of records (blocks), each timestamped and linked to the previous block. As the design makes them resistant to tampering, a blockchain provides an open, distributed ledger, recording transactions in a way that's taken them way beyond crypto currency and into supply chains, identity management and records management.
Ethereum is an open source, public, blockchain-based distributed computing platform created to provide a more generally scriptable platform than Bitcoin; it provides a Turing-complete virtual machine, and contracts can be scripted in Solidity, an entirely new language created for the purpose. Given the hard limit on Ether, the currency used on the Ethereum platform, there remain similar problems to Bitcoin with attacks and speculation.
Ethereum rival Ripple is a consensus ledger and open protocol which can represent any currency, and has been taken up by large banks and payment networks as a key settlement infrastructure technology. Ripple developers are the originators of the W3C Interledger project (see overleaf).
The Ripple protocol formed the initial foundation of Stellar.org, a platform to connect banks, payments systems and people, now running on its own code. In facilitating next-to-no-cost exchanges across currencies and borders, Stellar has attracted a lot of institutional users in the developing world: from the Praekelt Foundation's scheme to help young girls in sub-Saharan Africa start saving, to connecting microfinance initiatives. Transactions are processed through consensus between the decentralised set of ledgers made up of all Stellar servers. It takes between two and five seconds for the network to resolve its regular sync process and settle all payments.
Stellar's 'Finance with a Mission' is echoed in initiatives such as the Blockchains For Good think tank, and others concerned with using the blockchain for tracking the supply chain for ethical purposes, in fashion, food or electronics. Disberse, for example, is a company with a social mission to improve the efficiency and transparency of aid and humanitarian payments. All quite remarkable for a technology so young.
[Diagram: the W3C Interledger protocol stack – Application (SPSP), Transport (PSK), Interledger (ILP) and Ledger layers, with ledger plugins and connectors linking Sender to Receiver across Ledgers 1 to N]
Above The W3C Interledger stack looks a lot like the internet’s own protocol stack; this is not a coincidence
Rewarding openings
NESTA is running a £5m prize to reward the next generation of fintech services, apps and tools for small businesses. Have you got a winning idea for some software that won't get conventional backing? Apply online at: http://openup.challenges.org.
Open source is changing the face of financial
technology beyond blockchain. From GNU/Linux
containers keeping banks flexible through takeovers
to new open standards for banking information –
both voluntary, and mandated – even the staid and
conservative end of banking is adopting free and open
source software and its methodologies. And as for
the disruptive fintech (financial technology) startups,
they’ve grown up in a GitHub world where open APIs – if not always open source – are the standard.
To round up the state of fintech, we visited the JAX
Finance event in London and heard tales of Santander
Bank’s flexible setup with Linux containers, part of the
tech approach which has seen them absorb dozens
of banks without the infrastructure crumbling. We’ve
heard the fintech panel at IPExpo sing the praises of
machine learning, AI and the disruptive newcomers
to the fintech scene. At the recent Money & Society
Summit (see But What is This Money Thing Anyway?
box, p63), participants started work on designing the
currencies of tomorrow: free, open and blockchain-based, but also units of pure exchange, designed to
facilitate economic activity, not gravitate towards
speculators and hoarders, like Bitcoin, or today’s
conventional currencies.
Interledger
For all the innovation, the technology is in its infancy, with no standardisation and dozens of different payment methods, all incompatible with each other.
The World Wide Web Consortium (W3C) set up the Web
Payments Working Group to bring together payment
providers, and thrash out a common standard; first the
terminology, then for a wallet and API – where wallet is
any digital record of balance held.
The W3C’s approach in working towards a Secure Web Payments draft was not to create yet another payment network, but a network of networks. And in looking at
connecting payment providers, as well as last autumn’s
‘First Public Working Draft’ of the Web Payments HTTP
API 1.0, the W3C is incubating Interledger. In a packed
lightning talk at this year’s FOSDEM conference,
Ripple’s Evan Schwartz presented the concept, calling
it “an internet for payments”. Now incubated within
W3C’s Web Payments group, Ripple developer Adrian
Hope-Bailie describes it as a “browser polyfill” of the
Working Group’s API.
Credit Commons: A
money system for the
solidarity economy
There are many local, complementary currencies
in operation around the world. Some are recent, some decades old, but they all face the same problem –
how to trade with people in another local currency
or easily allow someone to convert their money to
a new currency.
Community Forge developer Matthew Slater
– currently working with 250 local currencies in
Switzerland, France and Belgium – is looking
for help to coordinate and develop a new
platform to meet the present and future needs of
local currencies for interoperation. Many of the
building blocks are there, but there’s plenty of
work to do – if you’re interested in helping, head
to: http://creditcommons.net.
But what is this money
thing anyway?
Fintech has had a boom in the past decade, but
these post-crash years have also seen a growing
awareness of fundamental flaws in the system of
banking, and indeed the nature of money – that
we take for granted – that stretches way beyond
acquisitive speculation in Bitcoin.
Innovative groups such as Positive Money
(http://positivemoney.org) work to educate
parliamentarians and the public on why the way
money is generated (since 1694) works against
the wider economy. As it stands, we have a
system where 97% of money is created by 46
banks (as debt), and 3% by the government (as
cash). Meanwhile, interest in local currencies and
complementary systems, such as time banks,
continues to grow. Cumbria University’s Institute
for Leadership & Sustainability (IFLAS) runs
a Money & Society MOOC (http://ho.io/mooc)
explaining the whole thing over four themed
lessons; it’s a seriously eye-opening experience
that will change the way you think about money.
But it’s not the only broad and open approach to
making payments: GNU Taler, developed with Inria –
the French National Institute for computer science
and applied mathematics – is an interesting twist
on electronic payment systems, and is designed to
address the concerns of both governments and citizens.
Customers wishing to buy something use their
traditional money to buy anonymised digital cash
from an exchange. This allows them to keep their
privacy when making purchases, yet the blindly signed
cryptographic currency is secure and the merchant
can exchange it for their local currency. But – and this
is where the interest of governments comes in – the
merchant is not anonymous, which means the exchange
can be taxed. It also avoids the environmental overhead
of blockchain transactions like Bitcoin.
Above A medieval blockchain – tally sticks recorded royal or government debt, and were split to provide a unique key for redemption. Image: Winchester City Council Museums

Your GNU Taler wallet stores all your receipts denominated in your local currency for stability
against exchange fluctuations. It also holds your
digital cash and acts as proof-of-purchase in case you
need a refund. Your Taler wallet exists on your own
computer and you are responsible for its security and
safekeeping, as with a cash wallet, but unlike cash, you
can make backups. Try the demo: https://demo.taler.net/index.html.
Few free software-related organisations are
missing the chance to ensure that software freedom is
baked into tomorrow’s money. For instance, Software
Freedom Conservancy, the charity which provides
infrastructure to dozens of FOSS projects from
BusyBox to Wine, has its own non-profit accountancy
project: https://sfconservancy.org/npoacct. At an
early stage, it’s a user-friendly set of reporting scripts
built on top of Ledger CLI. Designed for government
filings in the USA, it would be great if non-US
developers got involved to make versions usable in
other countries. Yes, this is far from the glamour of
disruptive fintech and crypto currencies, but it’s a very
real need in civil society or third-sector organisations
right across the globe.
Myth of barter
School textbooks still cite money as replacing unwieldy systems of barter, but evidence shows that the earliest systems of writing were used for ledgers of transactions, and exchange worked freely without money where there was community and trust – encryption and blockchains now allow us to take ledger systems worldwide.

A career move
For many, the idea of a programming career in financial
services was a lot less interesting a decade or so back,
when most of the jobs had moved from Smalltalk to
Java, but today’s fintech scene is a diverse one, with
one financial institution hosting hundreds of thousands
of lines of code in OCaml, while others are advertising
for programmers with skills in Golang, Julia, Clojure or
even Haskell.
At the recent JAX Finance event, Python guru Dr
Russel Winder was demonstrating the kind of near-instant data-wrangling that you can do in Python,
which he trains people in hedge funds and investment
banks to do using Matplotlib and Pandas, while fellow
Pythonista Burkhard Kloss demonstrated the power of
Numpy and Scipy.
These libraries, key to Python’s popularity across the
data science fields, combined with some of the deep
learning and general AI, have made strong inroads into
fintech recently.
If you love the Python language, and the challenges
of working with data, but don’t want to devote your
time and efforts to making rich people even richer,
then consider building open tools for everyone (see
Rewarding Openings box, p62), or looking at the role of
blockchain in philanthropy.
And hanging over everything fintech is the EU’s PSD2
– the Payment Services Directive – mandating the
opening of payment services to new players through
the effective opening of APIs. Expect a further boom in
fintech startups, but also the possibility of social banks
and NGOs getting involved to support the ‘unbanked’
of the UK and Europe. Oh, and there's lots of code on
GitHub to play with – it’s appearing already: the future
of money is open, and we look forward to telling you
more about it as it happens.
THE ESSENTIAL GUIDE FOR CODERS & MAKERS
PRACTICAL Raspberry Pi
“Learn how to automate the drone and program a predetermined set of flight actions” (p74)
Contents
66 My Pi project: the robust Tough Pi-ano
68 Use Python to draw 3D text in Minecraft
70 Save your SD card: boot your Pi from a USB stick
78 Monitor collected data with Plotly
Components list
• Raspberry Pi units
• Charging and OTG cable
• Dual sound cards
• Thumbwheel switch
• Headphone cable
• Powered speakers
• Headphone splitter
• Power adaptor
• Hardwood casing
• Buttons
Keeping it tidy
Due to the number of wires
involved, Bryan had to make sure
that each of them was fastened
to the wooden board as neatly as
possible. Doing this meant that if
a problem did arise, Bryan would
be able to diagnose and fix the
problem quickly and efficiently
Programming octaves
To reproduce the correct sound
levels, each octave of the Tough
Pi-ano had to be individually
tuned. To do this, Bryan used a
thumbwheel switch to help find
the exact levels that each octave
should be reproducing
Headphone connection
One of the last things Bryan
implemented here was a
headphone connection, which
meant all reproduced sound could
be enjoyed by just one person. This
required some slight tinkering with
the on-board sound cards, but
nothing overly complicated
Pygame and Python
Although traditionally used for
building games, Bryan tinkered
with Pygame so that it would
work in tandem with Python to
reproduce the sound files he
needed. The key benefit to Pygame
here is that it didn’t take a lot of
processing power away from the Pi
Zero units that he used
Right Due to the
number of octaves
and buttons
required, Bryan
needed to use
multiple Raspberry
Pi units to play
all necessary
sound files
Far right Bryan
had some initial
static problems
from the default
USB power
supply he used.
Switching to a
coaxial connector
seemed to fix the
issue, however
My Pi project
Tough Pi-ano
Bryan McEvoy’s tough-as-nails piano takes an all-new
approach to music exploration
Where did your inspiration for the
Tough Pi-ano come from?
I started the Tough Pi-ano after
talking with my aunt. My cousin loves
music therapy, particularly playing the
piano, but he has Down’s syndrome
and his excitable therapy sessions
can destroy instruments meant purely
for gentle fingers. While I thought of
a solution, he kept smashing every
garage sale piano they were able to
pick up and buy. I knew that each
of those pianos was really just a
series of inputs and a rather basic
sound generator.
Has it been easy getting the project
from concept to working device?
One year ago I started working on a
version which used hardwood keys
with the same proportions as a
standard piano, but they were difficult
to make in my local hackspace – so I
had to rethink this idea.
A couple months after the piano
had been left aside, I decided it would
be better to use commercial parts for
the keys even if they didn’t look like a
piano. After all, I was building this for
kids who wanted to smash buttons,
not concert pianists. I ordered a ton
of arcade buttons in black and white,
which obviously give that natural
piano look.
Replaceable octaves didn’t make
it to the final version, but using one
computer for each octave stayed.
Even if someone managed to destroy
one of the computers, three-quarters
of the piano would keep running.
It would also be possible to have
spares on hand until a proper repair
could be made. All the arcade
switches were connected with
slide-on terminals so they could
be replaced if someone wore it
out. Redundancy was probably the
strongest quality of the Pi-ano, but
you wouldn’t see it unless you looked
under the hood.
Designing a musical instrument
with the explicit intention of absorbing
abuse was a different kind of
brainstorming for me. Many of my
previous projects were only meant
to be strong enough to withstand
ordinary wear and tear. The Tough
Pi-ano had to take punishment all
day long without hurting anyone and
have lots of parts that were easy to
service. That’s why there’s a plastic
sheet over the keys: no slivers. It’ll
be wall mounted so it can’t fall on
anyone. There is no exposed metal
if knuckles start scraping. It was a
whole different way to approach the
user interface.
How accurately does the Tough
Pi-ano reproduce sound?
The sound files for this project were
legitimate piano samples and the keys
corresponded to the correct notes,
so anyone who played the piano could
sit down and perform a song, but they
would have to reach more since the
keys were spaced out so much further
than a standard piano. Of course,
certain limitations are noticeable,
but nothing major.
How happy are you with the finished
product? What would you say is its
best feature?
The Tough Pi-ano’s best quality, in
my opinion, was the simplicity. You
put power on it and then you have 48
buttons that make sounds. Nothing
extraneous, no settings that might
interrupt a jam session, no knobs
to break off, no hinges to pinch
fingers. One change I would welcome
would be a solid case made of
heavy-duty plastic.
My aunt and uncle are opening
a facility on their property where
other kids with Down’s, or on the
autism spectrum, can come with
their families to spend time in a safe
environment and be themselves.
I’m pleased that they will have a piano
they don’t have to be gentle with.
We’ve seen some mention of using
the piano tool within Pygame on your
Tough Pi-ano. Did this prove to be a
big help?
Pygame was a huge help with this
project. The community surrounding
Pygame was wonderful, I didn’t have
to ask them a single question, I was
"The Tough Pi-ano had to take
punishment all day long"
able to find everything I needed in
their guides and forums. That kind of
support is the same reason I program
with the Raspberry Pi. If someone
wanted to copy my project, they could
do it inexpensively and if they had
trouble, there’s help available. In fact,
someone has already started to build
their own Tough Pi-ano for a school
and together we discovered that my
code won’t work on a Pi Zero W.
How much of a learning curve has
this project been for you in terms of
developing with the Pi and Python?
I knew almost nothing about Python when I started this project and I learned a lot. It was honestly a very good project for a beginner, since you build the same switch circuit 12 times for each octave and then build a total of four octaves. One of my rookie mistakes was building a circuit board to hold pull-up resistors for each of the inputs. Later, I learned this could have been done in software. I won't repeat that mistake in the future.
Looking past the Tough Pi-ano, will
you be using the Pi to build any other
projects in the future?
I think the Raspberry Pi line
computers are great pieces of
hardware. In the past, I’ve built a
MAME box and a wearable computer
with a head-mounted display; it runs
on just a USB battery pack. Right now
though, I’m using them to build a laser
tag game with Wi-Fi connectivity. I
like to detail all my projects, including
how I build them, on my personal blog
(www.24hourengineer.com), as I find
it helps me to keep motivated and
keep on developing.
Bryan
McEvoy is
an automation
engineer by trade,
which has given
him the chance
to design and
build some pretty
incredible things.
Like it?
The Tough Pi-ano
is just one of a
number of musical
instruments that
use the Raspberry
Pi at their core.
Another popular
one is Dave
Sharples’s Joytone
(http://www.davesharpl.es),
which combines
Pi units and
a plethora of
ambient sounds
in one unique
package.
Further
reading
Bryan has been
kind enough to
detail the entire
build process of
his Tough Pi-ano
unit over on
his site (http://www.24hourengineer.com).
While you’re there,
take a look at
some of his other
projects, many
of which use the
Pi in some rather
unusual ways.
Tutorial
Draw 3D text in Minecraft
using Python code
Last issue we hooked directly into Minecraft with Python code.
This time we’re using Turtle Graphics to draw 3D text
Calvin
Robinson
is head of computing
at an all-through
state school.
Calvin also works
with schools
all over London
as a computing
consultant in
education, helping
create high-standard
computing specs.
What you’ll need
• Minecraft www.mojang.com/games
• Python www.python.org
• McPiFoMo http://rogerthat.co.uk/McPiFoMo.rar
• Minecraft Turtle http://bit.ly/MinecraftTurtle
Minecraft’s ‘Creative Mode’ is indeed a great mode to
get creative with, but it’s a time-consuming job placing
blocks down into your world one-by-one. That’s why
we put together a package which incorporates a range of
modifications that enable us to execute Python scripts
directly into a Minecraft world. McPiFoMo includes MCPiPy
by ‘fleap’ and ‘bluepillRabbit’ of https://mcpipy.wordpress.com, and the Raspberry Jam Mod, developed by Alexander
Pruss. We’ll also be using Minecraft Turtle, which was put
together by Martin O’Hanlon of http://stuffaboutcode.com.
That’s a lot of mods, but together they allow us to do
with Minecraft on Linux what would usually have only been
possible with the Raspberry Pi edition of Minecraft.
As a prerequisite, we assume you’ve installed McPiFoMo,
from last issue’s tutorial.
01
Minecraft Turtle Library
Turtle Graphics are bundled with Python, but to
get this kind of functionality in Minecraft we’ll need to
download a custom Minecraft Turtle Library. To install it:
cd ~
git clone https://github.com/martinohanlon/minecraft-turtle.git
cd ~/minecraft-turtle
python setup.py install
python3 setup.py install
If you’ve used Turtle Graphics before, you’ll have no
problem getting to grips with this incarnation.
Above Automate your in-game vandalism/signature
with a simple Python script
02 Getting to know Turtle Graphics
Imagine you stuck a paintbrush in the mouth of a real turtle; everywhere that turtle moves, it drags the brush along, drawing a path. That's the basis of Turtle Graphics. You're drawing vector graphics with a relative cursor, across a virtual canvas. We direct the turtle where to go, and it leaves a trail of lines behind it. In Minecraft, those lines are represented by blocks.
Above Here's a handy cheat sheet of all the basic commands used in Turtle Graphics

03 Third dimension
Traditionally, Turtle Graphics programs can move forward and backward, but not necessarily left or right. We instead rotate left/right in degrees, and move forward/backward to draw our lines in the direction we need. Minecraft Turtle, however, has an additional dimension, with up and down commands moving towards the sky/ground accordingly. Pictured above is a list of the basic commands we'll be using to traverse our virtual canvas.
Above Diamonds are forever. Make your custom signature pop by using fancy material blocks
Above Using up/down commands adds another dimension to your lettering
04
Test your turtle
Create a new Python script in the IDLE and input:
from mcturtle import minecraftturtle
from mcpi import minecraft
from mcpi import block
#Connect to Minecraft and find the player's current position
mc = minecraft.Minecraft.create()
pos = mc.player.getPos()
#Spawn a new turtle, giving the variable a
name of your choice
fred = minecraftturtle.MinecraftTurtle(mc, pos)
#Test your turtle
fred.forward(10)
fred.backward(20)
05 Draw your initials
Now we're going to draw out our initials. Some trial and error will be needed. Here's how we drew 'CR':
#Draw a sharp letter C
fred.backward(50)
fred.left(90)
fred.forward(100)
fred.right(90)
fred.forward(50)
#Provide a little space, by picking up the pen and moving along
fred.penup()
fred.forward(30)
fred.pendown()
#Draw the letter R
fred.right(90)
fred.forward(100)
fred.backward(100)
fred.left(90)
fred.forward(50)
fred.right(90)
fred.forward(50)
fred.right(90)
fred.forward(50)
fred.left(135)
fred.forward(75)
06
Spruce up your turtle
You’ll likely want to change the type of block your
turtle is drawing with: fred.penblock(block.DIRT.id) –
where DIRT is a Minecraft block type; others include WOOD,
GRASS, WOOL, TNT and DIAMOND.
To check if your pen (or paintbrush) is up or down at any
given time, use: print fred.isdown().
You can also change the speed of your turtle, 1 being a
turtle, 10 being a hare: fred.speed(10).
Position your 3D initials
You may want to change the location of your turtle, before or
during the drawing process. To do this, we can use print or set
commands to alter the position with Minecraft coordinates (x,y,z),
just as you would to teleport another player around your world.
Pairing these commands with penup() and pendown() allows you
to keep your writing neat, kerning your letters when necessary.
#Print your turtle’s position
turtlePos = fred.position
print(turtlePos.x)
print(turtlePos.y)
print(turtlePos.z)
#Reassign your turtle’s position
fred.setposition(0,0,0)
fred.setx(0)
fred.sety(0)
fred.setz(0)
There’s also a handy shortcut to send your turtle immediately
back to the location it started in: fred.home().
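If you find yourself repeating the same pen-up/move/pen-down dance between letters, it is worth wrapping it in a little helper. This is only a sketch: it recreates the same turtle setup as step 04 so it runs on its own, and the distances and turns are just an example to adapt to your own initials.

from mcturtle import minecraftturtle
from mcpi import minecraft
from mcpi import block

#Connect to Minecraft and spawn a turtle at the player's position, as in step 04
mc = minecraft.Minecraft.create()
pos = mc.player.getPos()
fred = minecraftturtle.MinecraftTurtle(mc, pos)
fred.penblock(block.DIAMOND.id)   #draw in something shiny (see step 06)
fred.speed(10)

def gap(turtle, distance=30):
    #Lift the pen, move along to leave a space between letters, then draw again
    turtle.penup()
    turtle.forward(distance)
    turtle.pendown()

#Draw a simple capital L, leave a gap, then send the turtle home
fred.right(90)
fred.forward(60)
fred.left(90)
fred.forward(30)
gap(fred)
fred.home()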
Tutorial
Preserve your Raspberry
Pi's Micro SD card
Use a few tweaks and USB-booting to prolong your SD card
and avoid losing important Raspberry Pi projects
Christian
Cawley
is a former IT and
software support
engineer and
since 2010 has
provided advice
and inspiration to
computer and mobile
users online and in
print. He covers the
Raspberry Pi and
Linux at http://www.
makeuseof.com.
What you’ll need
• USB storage device
• Ethernet cable
At some point during the lifetime of your Raspberry Pi,
you are likely to encounter a problem with your
SD/microSD card. If you’re lucky, this will be minimal;
perhaps a reboot will fix it.
If you’re unlucky, it could mean the end of your SD
card, and the loss of all data on your Pi, including your
latest project. The simple fact is that SD cards do not last
forever. Flash storage, by design, has a limited number
of read/write cycles. While error-correction software is
built-in, and cards ship with larger-than-described storage
to cover damaged blocks, eventually corruption will cause
a problem.
SD card corruption occurs in various ways. It might be
due to a sudden variation in voltage during a read/write
cycle or from being removed from the Raspberry Pi. Flash
storage is also susceptible to extreme temperatures and
physical damage. Cheap SD cards, meanwhile, are usually
unreliable; whatever the situation, you should rely on the
more expensive cards from SanDisk or Kingston.
While it might be useful to regularly back up your flash
storage to enable quick recovery, a more proactive option
is to bypass the SD card entirely by relying on other storage
mediums for booting the Raspberry Pi, but you should also
be aware of various tools that can be used to protect your
microSD card.
01 Don’t flash a fresh OS for every project
The read/write cycle limit is a physical hardware restriction preventing infinite reuse of an SD card. So look after it! While ‘swappiness’ (the kernel parameter defining how much and how often RAM is copied to storage) is apparently set low in Raspbian, there is more you can do. One way to extend the lifespan of the device is to avoid flashing a fresh version of Raspbian (or whatever your preferred OS is) every single time you start a new project.
Doing so applies a card-wide reduction in the remaining
read-write cycles. So, by maintaining a working microSD
card throughout several projects, you avoid the effect that
regular flashing has on the card.
This is not a perfect solution, but it will help you
get projects started without the initial flashing and
configuration that is typically required. Need to keep
things tidy? Remove software with sudo apt-get remove
APPNAME. This will uninstall the software you no longer
need, but may be time-consuming. In short, only flash
Raspbian when you really must.
02
Maintain a constant power supply
A reliable power supply is a major aspect in the
preservation of your Raspberry Pi’s SD storage. If there’s
much variation, data can be lost, ultimately causing
corruption of the card. At this point, your Pi probably won’t
boot, and a new OS will need flashing.
The Pi requires a constant voltage of 5V. This is available
via the approved mains adaptors, but keep in mind that
these devices cannot account for failings in the sockets
they’re plugged into. Don’t use cheap extension leads.
Instead, ensure your mains adaptor is connected either to
the wall (if you have modern surge protection built in) or
otherwise to an extension with surge protection.
It’s common to attempt to squeeze as much power out
of a Raspberry Pi through overclocking, but this too is a
possible cause of disk corruption. Rather than go through
the rigmarole – and card-degrading – act of flashing a new
ROM, it’s safer to avoid overclocking the Pi’s CPU if you
want to maintain a stable system.
03 Write to RAM, not storage
A great way to reduce the number of read/write cycles on your SD card is to not write to it in the first place. This doesn't mean leaving your Pi powered off! Instead, you can write everything to the computer's RAM. As such, nothing will be written to the microSD card, thereby extending its life.
Better still, this is easy to set up using tmpfs. Begin by opening the fstab in nano with sudo nano /etc/fstab. At the bottom of the file, add this line:
tmpfs /var/log tmpfs defaults,noatime,nosuid,mode=0755,size=100m 0 0
Press Ctrl+X to exit and save. This moves the /var/log directory to RAM, reducing the microSD card's read/write cycles. Other locations can be safely moved to RAM too:
tmpfs /tmp tmpfs defaults,noatime,nosuid,size=100m 0 0
tmpfs /var/tmp tmpfs defaults,noatime,nosuid,size=30m 0 0
tmpfs /var/log tmpfs defaults,noatime,nosuid,mode=0755,size=100m 0 0
tmpfs /var/run tmpfs defaults,noatime,nosuid,mode=0755,size=2m 0 0
tmpfs /var/spool/mqueue tmpfs defaults,noatime,nosuid,mode=0700,gid=12,size=30m 0 0
Beware: this will only last until you reboot your Pi, at which point everything is cleared from RAM.

04 Boot your Pi from a USB stick
Recent updates to Raspbian enable you to boot a Raspberry Pi 3 via an attached USB device (a flash drive, standard HDD or even an SSD), bypassing the microSD card entirely. Via SSH, begin with:
sudo apt-get update
sudo BRANCH=next rpi-update
Then enable USB boot mode:
echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
With the program_usb_boot_mode=1 instruction added to the end of the config.txt file, reboot with sudo reboot. When the Pi restarts, you'll need to check if the one-time programmable (OTP) memory has been changed:
vcgencmd otp_dump | grep 17:
If the previous steps have been successful, you should see something like '0x3020000a' (pictured, above). Your Raspberry Pi is now ready to boot from a USB device, so connect the one that you want to use; but note that it will be reformatted. You can identify your USB device with lsblk, which will list all block devices. Connected USB devices are usually called 'sda'. Enter the following to unmount the device and run the Parted tool:
sudo umount /dev/sda
sudo parted /dev/sda
This will take you to the Parted prompt.

Avoid SD card corruption with checks
It is possible to run checks to give you advance warning of impending SD card corruption. Begin with this command to check the filesystem every x number of reboots, changing [x] for your preferred number: tune2fs -c [x] /dev/mmcblk0p2. You might also enable autocorrection, to help with accurate data storage, with nano /etc/default/rcS --> FSCKFIX=yes.
Get the best value SD cards
Low-cost SD and microSD cards are generally unreliable. But how can you tell what's good quality or not? First, check the brand. Buy cards from SanDisk or Kingston as a rule. Second, look at the Speed Class rating. SDHC and SDXC cards are available, fast, reliable media. The higher the speed rating (upwards from 2MB/sec), the better the card.
Next, mount the target filesystems:
sudo
sudo
sudo
sudo
sudo
rsync
mkdir /mnt/target
mount /dev/sda2 /mnt/target/
mkdir /mnt/target/boot
mount /dev/sda1 /mnt/target/boot/
apt-get update; sudo apt-get install
To copy Raspbian to your USB device use:
sudo rsync -ax --progress / /boot /mnt
target
This will take a while to complete, so leave it to finish.
Once done, you’ll need to copy the SSH host keys from the
microSD card to the USB device to maintain the connection
via SSH. Enter the following a line at a time.
cd /mnt/target
sudo mount --bind /dev dev
sudo mount --bind /sys sys
sudo mount --bind /proc proc
sudo chroot /mnt/target
rm /etc/ssh/ssh_host*
dpkg-reconigure openssh-server
exit
sudo umount dev
sudo umount sys
sudo umount proc
06
Prepare to boot from USB
Before you reboot your Pi from the USB device, edit the cmdline.txt file again in the terminal:
sudo sed -i "s,root=/dev/mmcblk0p2,root=/dev/sda2," /mnt/target/boot/cmdline.txt
A similar change must also be made to /etc/fstab:
sudo sed -i "s,/dev/mmcblk0p,/dev/sda," /mnt/target/etc/fstab
You're now ready to unmount the filesystem:
cd ~
sudo umount /mnt/target/boot
sudo umount /mnt/target
At this stage, you can enter the poweroff command to shut down your Pi. Once the lights are off, disconnect the power supply and remove the microSD card. A few minutes later, you can reconnect the power and boot your Pi – from the USB device!
Your microSD card is now guaranteed a much longer lifespan. You might retain the Raspbian image, however, for preparing other Pis for USB booting in future.
Better still, this means that you can have as much storage as possible for your Raspberry Pi. USB flash storage usually stops around the 512GB mark (although there are larger devices), while mechanical hard disk drives can potentially add terabytes of storage to your Pi. Alternatively, an SSD device will speed things up considerably.
07
Boot Raspbian across your network!
Booting from USB can be taken to the next level – network boot. Using a Pi as a server, you can set up a Raspberry Pi 3 with Raspbian Lite and set it to initially boot from USB. With the server and new client configured correctly, the Pi can boot from the network, again reducing the impact on the SD card.
On your intended client, follow the previous steps up to the point of removing program_usb_boot_mode from /boot/config.txt, then running the poweroff command.
Next, remove the SD card and insert it into the Pi you'll be using as a server. Boot this device, then run sudo raspi-config. This will open the configuration options. Select the Expand Filesystem option. Then create a copy of the root filesystem:
sudo mkdir -p /nfs/client1
sudo apt-get install rsync
sudo rsync -xa --progress --exclude /nfs / /nfs/client1
This will take a while to complete, so be patient!
08
Find the addresses
Continue by maintaining your connection via SSH,
by regenerating the SSH host keys:
cd /nfs/client1
sudo mount --bind /dev dev
sudo mount --bind /sys sys
sudo mount --bind /proc proc
sudo chroot .
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
exit
sudo umount dev
sudo umount sys
sudo umount proc
Then find the address of your router. If you don't know this, run:
ip route | grep default | awk '{print $3}'
Check your Pi's own IP address with:
ip -4 addr show dev eth0 | grep inet
Then use cat /etc/resolv.conf to find the address of your DNS server. You should now have your device's IP, broadcast and DNS server addresses, so note these down.
Run sudo nano /etc/network/interfaces and edit the line iface eth0 inet manual so that it reads:
auto eth0
iface eth0 inet static
address [YOUR_IP_ADDRESS]
netmask 255.255.255.0
gateway [YOUR_BROADCAST_ADDRESS]
Press Ctrl+X to save and exit. To make this work, you'll need to disable DHCP networking, so run the following and then reboot your Pi:
sudo systemctl disable dhcpcd
sudo systemctl enable networking
09
Configure your server's network settings
Enter the following with the DNS IP address you noted earlier.
echo "nameserver [YOUR_NAMESERVER_IP]" | sudo tee -a /etc/resolv.conf
Prevent changes to this with:
sudo chattr +i /etc/resolv.conf
You then need to install some software:
sudo apt-get update
sudo apt-get install dnsmasq tcpdump
The dnsmasq tool can cause problems, so prevent this with:
sudo rm /etc/resolvconf/update.d/dnsmasq
Reboot again, then run tcpdump to detect DHCP from the client Pi.
sudo tcpdump -i eth0 port bootpc -v
At this stage, connect the client Pi to the network via Ethernet, and connect the power cable. After ten seconds, the LEDs should light, and a packet from the client will be received by the server and displayed in the tcpdump tool.
Press Ctrl+C to exit, then enter:
sudo echo | sudo tee /etc/dnsmasq.conf
sudo nano /etc/dnsmasq.conf
Delete everything in the file and add:
port=0
dhcp-range=[YOUR_BROADCAST_IP],proxy
log-dhcp
enable-tftp
tftp-root=/tftpboot
pxe-service=0,"Raspberry Pi Boot"
Next, create a new directory, tftpboot:
sudo mkdir /tftpboot
sudo chmod 777 /tftpboot
sudo systemctl enable dnsmasq.service
sudo systemctl restart dnsmasq.service
10
Prepare for network boot
Continue by monitoring the dnsmasq log with:
tail -f /var/log/daemon.log
If working, a 'not found' message will be displayed. Copy the necessary files with cp -r /boot/* /tftpboot. Then restart dnsmasq with sudo systemctl restart dnsmasq and the client Pi is ready to boot the root filesystem and then boot from the network. The /nfs/client1 filesystem must now be exported:
sudo apt-get install nfs-kernel-server
echo "/nfs/client1 *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo systemctl enable rpcbind
sudo systemctl restart rpcbind
sudo systemctl enable nfs-kernel-server
sudo systemctl restart nfs-kernel-server
Next, edit /tftpboot/cmdline.txt, changing the line beginning 'root=' to:
root=/dev/nfs nfsroot=[YOUR_DEVICE_IP]:/nfs/client1 rw ip=dhcp rootwait elevator=deadline
Open fstab (with sudo nano /nfs/client1/etc/fstab) and remove the /dev/mmcblk0p1 and /dev/mmcblk0p2 lines.
You're done! Now start the client and wait for it to boot from the network. This may be a little slower than what you're used to (depending on your network speed), but will extend the life of your Pi's microSD card considerably.
Use the Raspberry Pi to
take control of your drone
Create code which enables you to customise and automate drone
flights, retrieve flight data and respond to the drone tag
Dan Aldred
is a Raspberry Pi
Certified Educator
and a lead school
teacher for CAS.
He led the winning
team of the Astro
Pi Secondary
School contest.
What you'll need
• Wi-Fi enabled Raspberry Pi
• Parrot Drone 2.0 (or other)
Tutorial files available: filesilo.co.uk
Right The
PS-Drone
API is used to
control the
drone from the
Raspberry Pi
Drones are becoming ever more popular and
mainstream. You may already be aware that Amazon is
currently working on using drones to deliver packages
directly to your house within hours of you placing
an order, creating a new requirement for them to be
autonomous and programmable.
This tutorial offers a little taster of a Python module
which enables you to write and deploy programs to your
drone. The API used is now a few years old (2014) and
supports operation with Python 2.7. However, it offers a
number of simple methods for beginners to create their
own programs and spark their curiosity.
The tutorial begins with a quick walkthrough on installing
the required software and libraries. Then jump straight in
and create a simple but inspiring program which, when run,
automatically launches the drone and then lands it. The
tutorial covers the instructions to connect to the drone via
the Raspberry Pi’s Wi-Fi and then deploy your programs.
You will then learn how to automate the drone and program
a predetermined set of flight actions; for example, fly
forward, turn left, turn right and then land. You can
experiment with the various movements and create your
own versions and flight plans. The final section of the
tutorial introduces the use of the ‘tag’ symbol which can
be recognised by the drone’s front-facing camera. Once
recognised, flight data is sent back to the drone and you
can use these values to trigger an action or response such
as flying forward towards the tag or landing the drone.
01
Update Pi and install drone API
This project uses a Python API which enables
you to access and control the drone directly with Python
code. To get started, update your Raspberry Pi using the
standard update and upgrade commands (sudo apt-get
update, sudo apt-get upgrade). Then, using the web
browser on your Pi or other computer, head over to
www.playsheep.de/drone/downloads.html, right-click on
the PS-Drone API program and save it. (The API file is also
available in the FileSilo.)
02
Test program – part 1
The PS-Drone API file requires no installation, but it must be placed into the same folder as the Python files that you create to control your drone. Open the LXTerminal and type sudo idle to open the Python 2 programming environment. Begin by creating a simple test program which will launch the drone and then land it. Start a new file and import the time and PS-Drone modules.
import time
import ps_drone
03
Test program – part 2
The next step is to initialise the drone, setting up and opening a communication connection between it and the Raspberry Pi. Begin by initialising the API, then connect to the drone via the software, starting the subprocesses. Next, add the command to launch the drone; this uses the code drone.takeoff().
drone = ps_drone.Drone()
drone.startup()
drone.takeoff()
04
Test program – part 3
The last stage is to use the time library to add a short delay between the drone taking off and then landing. This can be used to check the connections are working by setting the delay to two seconds, which is enough time to start the rotors and stop them without the drone leaving the ground. To land the drone, use drone.land(). Add the following two lines to your program and save it, ensuring that the file is in the same folder as the PS-Drone API file.
time.sleep(2)
drone.land()
05
Connect to the drone via Wi-Fi
Before the program will interact with the drone, you need to connect to it via the wireless network that it creates. Each drone has an on-board router which creates and broadcasts an open network. Power up the drone and wait for it to run the preflight checks. Success is usually indicated by four green lights. Then use the network finder on your Pi to search for the drone. Double-click to connect. Note that it requires no key or password to connect.
06
Run the program
To run the program, you'll need to execute it from the LXTerminal window. Open the Terminal and use cd to navigate to the folder where your test program is saved. To run the program, type sudo python name_of_the_file.py and press Enter. (In this tutorial, the file is named take_off_land.) Ensure you have enough space to safely launch the drone. If not, then set the time value to one.
sudo python name_of_the_file.py
Hints and tips
When connecting to the drone via the Raspberry Pi, there are possible errors. Many have been resolved in the PS-Drone API version two.
1) Sometimes, running a program after a previous program has finished can result in the error 'address in use'. This is because the drone/Pi has not disconnected from the network socket. Simply reset your Pi.
2) Always ensure that the drone has completed its startup checks and is online, indicated by four green LEDs, before you attempt to connect to it.
3) When using the 'tag', it is easier to recognise if the background is in contrast; for example, white paper against a white shirt does not aid identification. Check out the tag hack in action at https://youtu.be/gXKdUhz7zAw.
07
Automate the drone
Now that you have a working connection and program, adapt it to move or fly the drone. Start a new Python file and import ps_drone. Then, as before, initialise the API and connect to the drone.
Hack other
drones
You may have a
different model
or make of drone
and there will be
compatibility issues
with the interfacing.
However, a quick
search of the
internet or GitHub
throws up a wide
range of software
for other makes
and models. For
example, this
project page
website (http://
bit.ly/DroneNav)
supports the Parrot
BeBop drone
and mini copter
drones. It covers
some exciting
hacks such as
image recognition,
following a
particular
colour and
streaming video.
Set the drone to take off, but this time set the time delay to 7.5 seconds.
This provides sufficient time for the drone to take off,
stabilise itself and wait for the next commands.
import time
import ps_drone
drone = ps_drone.Drone()
drone.startup()
drone.takeoff()
time.sleep(7.5)
08
Fly forward
Add the code to fly the drone forward; this is
simply drone.moveForward(). Then add a pause which
represents the duration that you want the drone to fly
forward for. In this example, the drone flies forward for two
seconds. Next, stop the drone. Like a car, this requires a
stopping time, so add a short time delay; you can match
the delay value used when flying forward. You can test this
program before moving onto the next steps.
drone.moveForward()
time.sleep(2)
drone.stop()
time.sleep(2)
09
Fly backwards
Once the drone has stopped, it will hover until it
receives another command. On the next line down, add the
code to fly the drone backwards. Again, you will need to
add a short time delay, then stop the drone. Finally, set the
code to land the drone.
drone.moveBackward()
time.sleep(1.5)
drone.stop()
time.sleep(2)
drone.land()
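Put together, the automated flight from Steps 7 to 9 becomes a single short file. The listing below is simply the snippets above assembled in one place, using the same timings; treat it as a sketch to adapt rather than a definitive flight plan.

import time
import ps_drone

drone = ps_drone.Drone()
drone.startup()

drone.takeoff()
time.sleep(7.5)      # allow the drone to stabilise after take-off

drone.moveForward()
time.sleep(2)        # fly forward for two seconds
drone.stop()
time.sleep(2)        # give it time to settle back into a hover

drone.moveBackward()
time.sleep(1.5)
drone.stop()
time.sleep(2)

drone.land()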
10
Run the program
Ensure that your Pi is connected to the drone’s
network, as shown in Step 5. Remember that the program
needs to be executed from the Terminal window. As before,
open it and use cd to navigate to the folder where the
program is saved. To run the program, type sudo python
name_of_the_file.py and press Enter.
11
Turning left or right
Modify the same program to alter the flight
direction of the drone. This uses the code lines,
drone.turnLeft() and drone.turnRight(). After each
call, remember to state the duration or time that the action
runs for. For example, using time.sleep(2) will turn the
drone left or right for two seconds. After the required time,
stop the turning action using the code drone.stop(). This
returns the drone to the hovering state. Add the relevant
code to your program and then save it. Connect and run as
previously demonstrated in Steps 5 and 6.
drone.turnLeft()
time.sleep(2)
drone.stop()
time.sleep(2)
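Once you are comfortable with the individual calls, you can chain them into longer flight plans of your own. The sketch below is our example rather than part of the tutorial: it flies two sides of a rectangle using only the moveForward, turnRight and stop calls shown above, and the turn delay will need tuning for your particular drone.

import time
import ps_drone

drone = ps_drone.Drone()
drone.startup()
drone.takeoff()
time.sleep(7.5)

for _ in range(2):
    drone.moveForward()
    time.sleep(2)        # one side of the rectangle
    drone.stop()
    time.sleep(2)
    drone.turnRight()
    time.sleep(2)        # roughly a quarter turn; adjust to suit your drone
    drone.stop()
    time.sleep(2)

drone.land()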
12
Tag detection
The drone software has the capability to use
the forward-facing camera to identify and read a ‘tag’.
(Downloadable from the website or available in the FileSilo.)
Once detected, this can then be used to trigger an event
such as landing the drone or flying it towards the tag. Begin
the program by starting a new Python file; import the time
and sys modules. Next, import ps_drone. As before, add
the startup and connect to the subprocesses.
import time, sys
import ps_drone
drone = ps_drone.Drone()
drone.startup()
13
Tag settings – part 1
There are four configuration settings to set.
First, reset the drone (line one) so that the status is set
to ‘good’, as indicated by four green LEDs. Use the code
drone.useDemoMode(True) to use a dataset of 15 when
transferring the data from the camera. Now set which
packets will be decoded. Finally, set a small time delay to
enable the drone to fully awake after the reset.
drone.reset()
drone.useDemoMode(True)
drone.getNDpackage(["demo","vision_detect"])
time.sleep(0.5)
14
Tag settings – part 2
In this second set of configurations, start by
enabling the universal detection by setting the detect
type to a value of 3. This triggers the drone to look for the
specific tag. Since you are using the front camera only,
disable detection from the ground camera. Then set the
drone configuration count.
drone.setConfig("detect:detect_type", "3")
drone.setConfig("detect:detections_select_h", "128")
drone.setConfig("detect:detections_select_v", "0")
CDC = drone.ConfigDataCount
while CDC == drone.ConfigDataCount:
    time.sleep(0.01)
15
Takeoff and taking a reading
The drone is now configured to recognise and read
the tag; add a small time delay, line one. This is useful if
you need to move the drone outside before it launches. Add
lines two and three to launch the drone; once deployed,
Left If the distance to the tag is greater than 300, the drone keeps flying forward
it will continue to hover. Next, create a loop to continually check for the tag and take the readings. On line nine, tagNum returns the number of tags found; tagX returns the horizontal position of the drone in relation to the tag. The vertical position of the drone is collected with the code on line 11 and stored in a variable called tagY. The distance of the drone from the tag, tagZ, is on the penultimate line, and the orientation of the drone is stored in tagRot. Remember to include the indentations when adding the lines of code.
time.sleep(10)
###take off###
drone.takeoff()
time.sleep(7.5)
# Get detections
stop = False
while not stop:
    NDC = drone.NavDataCount
    while NDC == drone.NavDataCount:
        time.sleep(0.01)
    if drone.getKey():
        stop = True
    # Loop ends when key was pressed
    tagNum = drone.NavData["vision_detect"][0]
    tagX = drone.NavData["vision_detect"][2]
    tagY = drone.NavData["vision_detect"][3]
    tagZ = drone.NavData["vision_detect"][6]
    tagRot = drone.NavData["vision_detect"][7]
16
Responding to the data
Now set up a conditional, an if statement to display the data and take action. Line three prints out the collected data; note that it is converted into a string as the original data format is returned as a float, ie a decimal. Create a new variable called distance to store the physical measurement of the distance of the drone from the tag. This is stored as an integer, line four. Check if this distance is greater than 300, line six; if it is then trigger an event. For example, set the drone to fly towards the tag for two seconds, lines seven and eight.
    if tagNum:
        for i in range (0,tagNum):
            print "Tag no "+str(i)+" : X= "+str(tagX[i])+" Y= "+str(tagY[i])+" Dist= "+str(tagZ[i])+" Orientation= "+str(tagRot[i])
            distance = int(tagZ[i])
            print (distance)
            if distance > 300:
                print ("Moving Forward")
                drone.moveForward()
                time.sleep(2)
17
Close enough
Once the drone is close enough to the tag, (in this
program less than 300), then it stops flying forward, line
one. Add a short time delay to allow it to stop, line two. Add
the two other conditions; if the drone is already less than
a distance of 300 from the tag then print ‘close enough’.
Finally, respond if the tag is not detected, lines five and six.
Save your program code and connect to the drone. (Ensure
that the indentation levels are correct; if not, this will cause
errors when you run the code.) Execute the program as
before, following the method described in Steps 5 and 6.
                drone.stop()
                #time.sleep(2)
            else:
                print ("close enough")
    else:
        print "No tag detected"
        #drone.stop()
This concludes a basic overview of getting started with the
drone hacks. Now get experimenting!
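For reference, here is one way the pieces from Steps 12 to 17 fit together in a single file. This is a sketch rather than the article's own listing: the indentation follows the structure described above, and the final drone.land() is an extra safety measure that the steps leave to you.

import time, sys
import ps_drone

drone = ps_drone.Drone()
drone.startup()

drone.reset()                 # wait for a 'good' status (four green LEDs)
drone.useDemoMode(True)
drone.getNDpackage(["demo", "vision_detect"])
time.sleep(0.5)

drone.setConfig("detect:detect_type", "3")
drone.setConfig("detect:detections_select_h", "128")
drone.setConfig("detect:detections_select_v", "0")
CDC = drone.ConfigDataCount
while CDC == drone.ConfigDataCount:
    time.sleep(0.01)          # wait for the configuration to be accepted

time.sleep(10)                # time to carry the drone outside
drone.takeoff()
time.sleep(7.5)

stop = False
while not stop:
    NDC = drone.NavDataCount
    while NDC == drone.NavDataCount:
        time.sleep(0.01)      # wait for fresh navdata
    if drone.getKey():
        stop = True           # any keypress ends the loop
    tagNum = drone.NavData["vision_detect"][0]
    tagX = drone.NavData["vision_detect"][2]
    tagY = drone.NavData["vision_detect"][3]
    tagZ = drone.NavData["vision_detect"][6]
    tagRot = drone.NavData["vision_detect"][7]
    if tagNum:
        for i in range(0, tagNum):
            print("Tag no " + str(i) + " : X= " + str(tagX[i]) + " Y= " + str(tagY[i]) +
                  " Dist= " + str(tagZ[i]) + " Orientation= " + str(tagRot[i]))
            distance = int(tagZ[i])
            if distance > 300:
                print("Moving Forward")
                drone.moveForward()
                time.sleep(2)
                drone.stop()
            else:
                print("close enough")
    else:
        print("No tag detected")

drone.land()                  # our addition: land once the loop ends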
Python column
Plot your data on a Raspberry Pi
Plotly is a great framework to use when you need to see what’s happening to the
data that is being collected by your Raspberry Pi
Joey Bernard
is a true renaissance
man, he splits
his time between
building furniture,
helping researchers
with scientific
computing problems
and writing
Android apps.
Why
Python?
It’s the official
language of the
Raspberry Pi.
Read the docs at
python.org/doc.
In several past articles, we
looked at ways that your
Raspberry Pi could be used
to do data collection. This might be a
scientific experiment, or you may be
running a weather station, or some
other monitoring system. In any case,
we had focused on how to do the
actual data collection and had not
really looked at how to make it useful
or interesting for any humans who
might be checking in on the system.
This month, we will look at one option
available to handle the visualisation of
all of this incoming data, namely the
Plotly module. Plotly provides a very
robust set of functions and classes to
generate visualisations of data.
We will assume that you have some
type of display or monitor attached to
your Raspberry Pi, and that you have
an X11 desktop set up. This way, we
can focus on how to actually generate
the data displays for your project. You
also need the Plotly module installed.
If using Raspbian, or a variant, you can install Plotly with the command:
sudo apt-get install python-plotly
If you are coding for Python 3, you can install the python3-plotly package instead. Or, if you absolutely need the latest and greatest version, you can use the following command:
sudo pip install plotly
This will install Plotly into the system Python library location.
So, now that you are ready, how do you get started with Plotly? Plotly comes in two flavours: online and offline. Since we are going to use Plotly as part of a monitoring project, we will focus on using Plotly in offline mode. To use it in this fashion, you need to be sure that you are using version 1.9.x or later. You can check this in an interactive Python session with the following code:
from plotly import __version__
print(__version__)
Since we will be using Plotly in an offline mode, we will need to use the offline versions of the main functions. You can import them with the following code:
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
This import statement loads the core functions that you will need. Plots are all handled by the main function, named plot. All graphs are created using this function, where the details are handled by parameters within the call. But, what can you hand in as parameters? There are several other classes that are used to create specific types of graphs. For example, if you wanted to create a scatter plot, you would need to import the relevant class, as shown below:
from plotly.graph_objs import Scatter
"Plotly will generate visualisations of data"
As a test, you can directly create and display a graph with the following code:
plot([Scatter(x=[1,2,3], y=[4,5,6])])
As you can see, the plot function
takes a list of objects to graph out.
In this case, we are handing in a
Scatter object; this is instantiated
with a list of values for the
x-coordinate and a list of values
for the y-coordinate. Once the plot
function is done, it will try to display
the generated graph as a webpage,
using the default web browser as
the displaying program. Assuming
everything is installed correctly,
you should see a fresh plot in your
browser. This plot is an interactive
one, enabling you to do things like
zoom in on the plot, select individual
data points and also save off a static image file of your data.
This is not likely the way you will
use Plotly, however. Assuming that
you are using your Raspberry Pi to
collect data, you will likely have large
amounts that you’ll be wanting to plot.
This means that we need another
method of loading data. There are
several different options. If you are
doing data analysis anyway, you
could load the data using pandas and
create a data object that can get
handed in to one of the graph objects.
For example, if you had your data in a
comma-separated-values (CSV) file,
the following code would load your
data and generate a scatter plot:
import pandas as pd
df = pd.read_csv('my_data.csv')
plot([Scatter(x=df['x'], y=df['y'])])
This assumes that the columns in
the CSV file are actually labelled
as ‘x’ and ‘y’. You can even connect
to a database to pull your data for
generating your graphs.
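As a concrete illustration of that idea, a monitoring script might simply rewrite an HTML file from the latest CSV each time it runs. The file name sensor_log.csv and its time and temp columns below are our own assumptions for the example, not part of the article.

import pandas as pd
from plotly.offline import plot
from plotly.graph_objs import Scatter

df = pd.read_csv('sensor_log.csv')
plot([Scatter(x=df['time'], y=df['temp'])],
     filename='sensor_log.html',   # regenerate this file whenever new data arrives
     auto_open=False)              # useful on a headless Pi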
You will likely also want to do
updates to your plots over time as
new data comes in. You can do this
through either IPython or Jupyter
worksheets. In this case, you first
need to initialise the system so that
the appropriate JavaScript code
is imported into your worksheet.
This is handled with the boilerplate
code below:
init_notebook_mode()
Once this call is finished, you can
use the function iplot(), rather than
plot(), to generate and display your
plots within your worksheet. If you
want to be able to update this plot,
you will need to include the parameter
filename, and hand in a unique
filename to store the graph image for
display. This way, when you want to
update the graph, you can regenerate
the plot with the updated data, using
the same filename; this new graph is
then refreshed within the worksheet.
These plots are also interactive,
similar to the interactivity you
get in the browser-based display.
The default display gives you a set of
axes, labelled with the value ranges
for each axis. There are several
other options available to add more
information to your plots. These
are handled as parameters to the
graphics objects, such as title, mode,
markers, hover text and several
others. It is well worth taking a look
at the full list of parameters available
when you start adding details to your
plots. When you start looking at all the
options, you'll also see that there are a
huge number of plots available. There
are a series of basic plots, statistical
plots, scientific charts, financial
charts, maps and 3D charts. There
are also custom controls that can add
more interactivity to your plots.
Plotly also interacts with other
Python modules rather well. In this
way, you could actually have some
type of numerical analysis happening
within your monitoring project. As a
longer example, say you had data for
a pendulum. You could do a numerical
integration of that data, where the
time index is your x variable and y is
the distance from the centre. That
code would look like:
import numpy as np
import plotly.graphics_objs as
go
trace1 = go.Scatter(x=x, y=y,
mode='lines’, )
dy = np.trapz(y, x)
annotation =
go.Annotation(x=4.5, y=1.25,
text='Numerical Integration of
sin(x) is approximately %s’ %
(dy), showarrow=False)
layout = go.Layout(annotations=[
annotation] )
trace_data = [trace1]
ig = Figure(data=trace_data,
layout=layout)
iplot(ig, ilename='1dnumerical-integration')
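The snippet assumes x and y already hold your time samples and displacements. For a quick standalone test you could fill them with stand-in values from NumPy, for example:

import numpy as np

x = np.linspace(0, 9, 200)   # time samples
y = np.sin(x)                # stand-in for the pendulum displacement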
There are a few new items in this
example. We are using an Annotation
object that contains the results of the
numerical integration. There is also
a Layout object, which is used to put
together multiple graphics objects
to obtain a more complicated plot
image. These other graphics objects
could be annotations, images, labels
and even rendered LaTeX code, which
would enable you to include pretty-printed equations.
The last item we’ll introduce is the
ability to do animations with Plotly.
In order to handle animations, you
will need to use a new object, called
a Frame, to store a list of images to
use in the animation. You hand in
this Frame to either plot or iplot
to generate the animation display
itself. By default, you get a simple
Play button that lets you start the
animation, but this is a great place
to use custom controls. If you use a
Layout object, you can add control
items, such as a slider, to give you
control over the animation playback.
As you can see, just from this
very short introduction, there is a
lot you can add to any Raspberry Pi
projects that manage data. If you are
building a house-monitoring system,
for example, you could add Plotly
graphs to show temperature trends
or power usage. Or maybe you might
want to track your network usage
and have a simple graphic of the
average bandwidth used. With a bit of
research, you will be able to find many
different plots and data visualisation
methods that will be useful for
your projects.
Going online with Plotly
In this article, we focused on using Plotly in the
offline mode so that we could use the Raspberry
Pi even when there is no internet access. However,
Plotly was designed as an online service that
handles the plotting of data on its servers. In order
to do this, you will, of course, need to be connected
to the internet, and you can use the plot and iplot
functions from the main Plotly module. In this way,
you can have your monitoring project set up as
a headless machine and be able to check on the
status from anywhere in the world. You will need
to have an account at the main Plotly site
(https://plot.ly). The free account enables you to create plots, but they will be public. If you wish to
have private plots, you’ll need to look into the paid
options and see which one best fits your needs. You
can then have your Python program authenticate
with the Plotly site with the following code.
import plotly
plotly.tools.set_credentials_file(username='my_name', api_key='my_key')
Your API key can be found from the account
settings page on the Plotly site. Now, when you call
the plot function, you get a URL back pointed at
the location of the rendered plot. By default, it will
try to open this URL within the default browser on
your Raspberry Pi. If you are creating a headless
monitoring system, you can use the parameter
auto_open=False to turn off that behaviour. If
new data comes in, but you want to use the same
plot URL, you have three options to do the update
by using the fileopt parameter. If it is set to
overwrite, a completely new plot will be generated
and be available at the same URL. If you use the
option extend, then the additional data is added to
the already existing data and the plot is redrawn.
The final option is append, which generates a new
data set and adds it as a separate plotted set
on the same graph. If you use the iplot function
instead, then the returned URL is used within
your IPython or Jupyter worksheet to embed the
associated graph within the worksheet.
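To make that concrete, a hedged sketch of the calls described here might look like the following; the trace data and the filename pi-temps are placeholders of our own, and it assumes you have already authenticated with set_credentials_file as above.

import plotly.plotly as py
from plotly.graph_objs import Scatter

fig = [Scatter(x=[1, 2, 3], y=[4, 5, 6])]   # any figure or list of traces will do
url = py.plot(fig, filename='pi-temps',
              fileopt='extend',     # adds to an existing plot of this name; 'overwrite' replaces it
              auto_open=False)      # don't try to open a browser on a headless Pi
print(url)                          # the rendered plot now lives at this URL on the Plotly servers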
The other option that becomes available when
you go online is to be able to use streaming data
sources. If your project has your Raspberry Pi
serving up a stream of measurements out to the
internet, you could use the streaming functionality
within Plotly to be able to render visualisations of
this data.
ON SALE NOW!
AVAILABLE AT WHSMITH, MYFAVOURITEMAGAZINES.CO.UK
OR SIMPLY SEARCH FOR T3 IN YOUR DEVICE’S APP STORE
SUBSCRIBE TODAY AND SAVE! SEE WWW.MYFAVOURITEMAGAZINES.CO.UK/T3
81 Group test | 86 Hardware | 88 Distro | 90 Free software
Audacious
Quod Libet
Amarok
Banshee
GROUP TEST
Audio players
When it comes to storing and enjoying your music library, there are a lot of
programs out there, but which should be the go-to choice for Linux users?
Audacious
Quod Libet
Amarok
Banshee
Priding itself on its low-resource
UI, Audacious is one of the more
well-known audio players in
this group. While simple in its
nature, it’s packed full of unique
features that help it stand out
from the crowd. Can it do enough
to top the other players listed
here, however?
Audacious-media-player.org
Don’t be fooled by its strange
name, because Quod Libet is a
GTK+ audio player that’s been
written almost exclusively in
Python. It’s another player that
prides itself on its simplicity,
but also looks to enhance
its reputation with a suite of
organisation tools.
Quadlibet.io
One look at Amarok’s official
development page will show how
the community around it plays an
important role when developing
for the project. External user
involvement has helped curate
one of the more user-friendly
audio players that’s currently on
the market.
Amarok.kde.org
Banshee looks to stand out
from the rest of the group with
its own storefront, making it
easier to not only import your
music, but also download new
tracks from leading audio
providers. It sounds great on
paper, but does it work well
enough in practice?
Banshee.fm
Audacious
Quod Libet
Simplicity remains key in
maintaining Audacious’s success
Unknown to many, but an impressive
piece of software nonetheless
n Metadata is automatically added to your imported music, but users
can also add their own when needed
n Hotkey support allows for users to customise and control their Quod
player through a range of keyboard shortcuts
Design and UI
Design and UI
The initial setup of Audacious can be a little tricky, especially when
it comes to finding initial import options. However, the rest of the
UI can be navigated with ease, and users won’t find any complex
menu systems to navigate. Certain refinement could be made to the
window system used, but it’s only a small issue in the grand scheme
of things.
Arguably Quod Libet’s best asset is its highly customisable back-end,
allowing for different audio add-ons to be implemented. It allows for
users to completely customise how Quod both looks and functions,
although we must say the default window arrangement works just
fine. Installation is relatively simple, if you stick to default options.
Music management
Import options are plentiful in Quod Libet, with auto-import options
particularly of use. In practice, these enable users to prioritise files
from certain directories, which is ideal if you’re downloading a lot of
music. Multi-file edits are also another possibility here, which can be
used if you’re intent on creating playlists and personalised track lists.
Importing can be a little laborious at times, with Audacious being
noticeably slower than some of its rivals. It does boast a good array
of management tools, however, allowing users to divide albums and
tracks into different categories. We also like the auto-tracking tool,
which will help find and delete duplicate tracks.
Playback control
All the basic options are accounted for, with additional options also
available through the tools section. While a nice addition, these extra
controls don’t necessarily do a whole lot for your music. The onboard
visual equaliser looks great, but during our time with it, seemed to
cause some stutter on our desktop.
Management options
Playback controls
While there’s a decent shuffle mode included within Quod Libet, the
rest of the playback controls can be a bit temperamental at times.
The rewind function only seems to work sporadically, while the
skip song option tends to skip entire albums at times. Considering
these are key features, we’re surprised to see how poorly they’ve
been implemented.
Extra features
Extra features
Audacious boasts a decent range of unique audtool commands,
which should suit Linux users perfectly. You'll be able to find
everything from enabling stream recordings, to disabling certain
playback controls through this audtool menu. Of course, these tools aren't for everyone, and they require some knowledge before testing.
Thanks to its Python-based heritage, users can implement a range
of Python enhancements into Quod Libet. One of the best is the
automatic tagging function from MusicBrainz for instantly adding
the metadata to your imported tracks. We also recommend the
duplicate finder, a great tool for discovering and removing duplicate
files from your library.
Overall
Overall
Thanks to its audtool support, Audacious is
arguably the best tailored player for Linux users.
However, certain UI elements are ropey and the
program needs a little polish here and there.
7
There’s a lot of things we like about Quod Libet, but
big improvements need to be made to its suite of
playback controls. It’s one to keep an eye on, but
not an essential download for now.
7
Amarok
Banshee
An all-in-one audio player that aims
to compete with Apple’s iTunes
A feature-packed player with a
storefront for buying new music
n If you’re missing any cover art, Amarok includes all the tools needed to
import and implement any artwork you need
n Being packed full of features doesn't necessarily work in Banshee's favour,
as its flaws are likely to make you want to scream
Design and UI
Design and UI
Amarok looks amazing from top to bottom, sporting a highly
intuitive UI that is easy to use. While it may look complicated from
the outset, advanced options have been cleverly hidden away,
allowing for beginner users to get used to the absolute basics at first.
However, its design-heavy UI isn't best suited for those using low-resource machines.
Considering the amount of content Banshee looks to cram into its
UI, it isn’t overpowering when you first set the program up. Sections
are segregated through a handy category system, which makes it
particularly useful to navigate through the player. However, the lack
of customisable options is a bit of an annoyance if the default design
doesn’t work for you.
Management options
Management options
Out of all the players here, Amarok boasts the best support for
those moving across from alternate players. Direct iTunes support
means users can import their entire library in minutes, while its
onboard metadata function will fill in all the blanks of your tracks.
It’s also one of the few players to offer CD ripping, but it can be
temperamental to use.
If you’re moving a large music collection across to Banshee, then the
lack of multi-file support will be an issue. Individual tracks need to
be edited one at a time, which is a massive annoyance if you need to
apply the same changes to multiple tracks. While sync support for
smartphones is also listed, we had trouble getting this to work during
our tests.
Playback controls
Playback controls
Keeping it simple seems to work for Amarok, with a top-mounted
toolbar being on hand with everything a user could need. Extra
options for monitoring volume levels can also be added, and
integrate well into the toolbar as well. Another plus is hotkey support,
perfect for users wanting to simplify their music control.
This is another audio player that keeps it basic, which does help if
you simply want to load up a playlist and enjoy. As in Amarok, hotkey
support is implemented here, so simplifying certain playback controls
is also easy. Banshee’s unique Smartly Shuffle feature is also great
for creating on-the-spot playlists out of your existing tracks.
Extra features
Extra features
Amarok has an ever-growing community of users, who have been
kind enough to develop a range of interchangeable scripts. While
they can be tricky to get working if you’re new to Linux, these scripts
can help optimise everything from the sound quality of your tracks,
to changing up the default library layout of the program.
Much of your audio library will lack artwork, so Banshee’s auto-cover
project will be a big help. This feature sources artwork from a variety
of online sources, downloading it and then applying it to your tracks.
It also has its own storefront for buying new music, but the links to its
embedded sites are on the slow side.
Overall
Overall
Apart from some minuscule annoyances, Amarok
delivers nearly everything users could want from an
audio player. This sets the bar high for other Linux-based players to follow.
9
Banshee sports some good features, but several
of its key areas aren’t implemented as well as they
should be. There are better players in this group
test, that’s for sure.
6
In brief: compare and contrast our verdicts
Audacious
Design and UI: Installation is a little tricky, but the UI is generally a pleasure to use. 7
Management options: Importing is slower than the competition, an annoyance for multi-file imports. 6
Playback controls: Basic controls are present, with extras available via the onboard tools menu. 7
Extra features: Audtool support is a great addition for seasoned Linux users to experiment with. 7
Overall: There's little wrong with it, but slow import times offer some annoyances. 7

Quod Libet
Design and UI: A highly customisable back-end allows support for a range of impressive add-ons. 8
Management options: Offers multi-file edits and the auto-import options generally work well for prioritising certain tracks. 8
Playback controls: All the basics are there. Certain controls seem to only work intermittently, a big problem for users. 5
Extra features: Python enhancements are great for changing up your Quod experience on-the-fly. 7
Overall: We'd rate Quod Libet higher if not for the playback control issues we encountered. 7

Amarok
Design and UI: Highly intuitive UI. Well designed, easy to navigate and a real pleasure to use as always. 8
Management options: Direct support for other players makes it easy to move your music library across. 8
Playback controls: Both a customisable toolbar and hotkey support enable easy control of your tunes. 9
Extra features: Interchangeable scripts enable users to customise a variety of sections. 8
Overall: It isn't perfect, but Amarok is the best overall audio player for Linux users. 9

Banshee
Design and UI: Its on-board category system prevents too much clutter from becoming an issue. 7
Management options: A lack of multi-file support will be a big issue for those with large libraries to import. 5
Playback controls: Check out Smartly Shuffle for on-the-spot playlist creation – a unique feature. 7
Extra features: Artwork can be pulled in from different sources – a nice touch, if a little slow. 6
Overall: Poor implementation of certain features leaves Banshee behind the competition. 6
AND THE WINNER IS…
Amarok
One of the most important things to
remember about audio players, especially
Linux ones, is that every user will want
something different from them. In many
ways, all the audio players we’ve featured
here will be suited to types of users, but
finding that all-in-one player will be key
for many. We should stress that Amarok
isn’t perfect, but it offers and implements
a wide range of features better than
the competition.
Importing your library of tracks will
arguably be the first thing that many people
do within any audio player, and Amarok’s
system is tailored to work flawlessly every time.
It’s the ideal solution for those moving
hundreds of tracks across at once, and
thanks to its metadata system, all the details
of your tracks are added instantly.
When it then comes to controlling your
playback, there’s nothing overly fancy about
what Amarok offers, but again, it just works
well. A customisable toolbar is on hand to
make playback as easy, or as convoluted as
you like, and it’s something that advanced
users can get their teeth stuck into. Looking
n Amarok also boasts direct lyric support, so you can sing along to your favourite tracks
past this, probably the most exciting thing
about using Amarok is being involved with its
ever-growing community of users. Existing
users have created a range of scripts that
can be implemented into the program,
enabling users to completely alter the way
Amarok works and looks. Trust us, you’ll need
to check it out.
If you download Amarok, and simply can’t
get on with it, then by all means give Quod
Libet a try. Its more simplistic build and
feature set will please some users, but we
implore you to stick with Amarok as it is
the best audio player that Linux users have
available to them.
Oliver Hill
HARDWARE
Linksys Velop
Price
£199 (1 unit), £349 (2 units) or
£499 (3 units)
Website
www.linksys.com
Specs
Processor: ARM Cortex A7
Memory: 512MB DDR3
On-board flash: 4GB
Wireless streams: 2×2 (with a third dynamically allocated for the connection to other nodes)
Supported standards:
802.11ac/n/g/b/a
Other: Distributed mesh Wi-Fi
system
Gigabit ports: 2 per node
USB: No
A mesh networking kit with elegantly designed
software and a simple setup process
Mesh networking is a bright idea to solve an old
problem with home wireless setups: dead spots.
These are areas of your house where the signal
drops out, slowing web access to a crawl. Rather
than relying on a single set of antennas, mesh
networks use multiple units placed around the
home, repeating and extending the coverage.
With Velop, Linksys has prioritised simplicity.
Configuration is via an app for iOS or Android that
guides the user through each step of the setup
process. Even less technical users will be able to get
up and running with Velop.
Instead of being a package of a router and
wireless extenders, each Velop unit is a router that
can operate independently, if so configured. Velop
nodes are tall and white, favouring minimalism, with
holes in the side to act as heat exhausts. The lean
array of ports – just two Gigabit Ethernet ports per
node, with no shared USB ports – are hidden from
view in the base of the unit, with a small gap in the
side to feed cables through.
When you first turn on one of the Velop nodes,
the app uses Bluetooth to configure basic settings,
first creating and securing a wireless network before
additional nodes connect to it. It suggests you give
a friendly name to that node to identify it, based on
its location in your house, which helps keep track of
the hardware.
Above There's a place in the market for both advanced, highly customisable routers and products such as the Linksys Velop mesh networking kits that make it child's play to get online.
Set up the next node and the app carefully advises
you on its positioning; press a button, give it a name
and it’s connected to your Velop mesh network
automatically. If a node goes offline, the app tells
you which one is down – but the network stays up.
Other features, too, are easy to set up and
manage. Parental controls are all controlled from
within the same app, without needing to log in to the
router’s internal software.
Delve a bit more under the hood and Velop doesn’t
offer the same vast feature set of other routers.
It only supports dual-stream 802.11ac, since the
third stream is dynamically allocated to providing a
connection to other nodes. It also lacks some of the
advanced functions of other routers. You can forget
about DD-WRT, or VPN support.
The wireless performance was sufficient,
but nothing special. We used a simple test from
speedtest.net to get an idea of how well our
200Mbit/s broadband connection was being utilised,
first at short range with just one node, then 15m
further away, then again with a second node added
to the mesh.
Speeds of over 117Mbit/s were achieved at
close range, but this dropped off heavily when we
moved the laptop further away, to just 7Mbit/s.
With a second node switched on, at the same far
distance, we got better speeds of 74 Mbit/s. That’s
roughly what you might expect from a high-quality
wireless extender. In general, the fastest possible
performance is not Velop’s strong point.
But it’s important to remember that Velop is not
a networking product aimed at the type of user who
wants to spend time configuring and controlling
every last parameter in the product’s software.
Linksys already sells a number of other routers that
allow for that, such as the advanced WRT series.
Instead, Velop is a product you could place in
the hands of technically illiterate friends or family,
and be confident they can get it up and running
without using you as free technical support. Few
other products truly offer this, so Velop is exploiting
a big gap in the market. But its simplicity carries a
significant price tag – considerably more than other
routers, or competing mesh products.
Orestis Bastounis
Pros
Successfully boosts network
coverage, when positioned
correctly. Well designed
software with minimalist and
unobtrusive appearance.
Cons
Expensive and lacks some
advanced features. Incapable
of delivering record-breaking
transfer speeds.
Summary
Linksys has achieved
ultimate simplicity
with Velop, one of
the easiest-to-use
networking products
we’ve ever seen. Some
users might sneer at
the lack of advanced
settings and ports
or the high pricing,
but for many others,
its approach to easy
configuration
will be a breath
of fresh air.
8
DISTRO
Maui Linux 17.3
Can Maui show that simplicity and functionality are a
winning combo for all KDE-based distros to follow?
RAM
1GB
Storage
15GB HD space
Specs
128MB VM
1.6GHz Intel Atom CPU
(Recommended)
Despite Ubuntu no longer being the only
distribution (distro) for Linux newcomers, it hasn’t
stopped an array of emerging distros based on
it. One of the newest on the scene is Maui Linux,
based on Netrunner (which is based on Ubuntu,
which is based on Debian) but with an all-new
focus on KDE at its core. Since its initial release, the
distro has gained a small following, but its recent
17.3 release has promised big changes.
Installation hasn’t changed much since the initial
release of Maui, and still focuses on the Calamares
installer. Now updated to version 3.1, Calamares
has proven to be a big aid in most modern distros
and remains a vital part of Maui’s overall appeal.
The install process is streamlined from beginning
to end, and it’s even easier to install subsequent
language packs when needed. One caveat, however,
is that users can find themselves needing to toggle a
desktop’s network access, which proves to be fiddly
through Calamares's window-based system.
Once installation is done and dusted, users are
greeted with the familiar KDE Plasma desktop.
Above Users can customise everything from the desktop layout to their window animations in Maui Linux
You’ll get all the joys of what KDE offers, so things like
simplicity and usability are at the forefront of Maui
For a long time now, KDE Plasma has been a focal
point for a good number of Ubuntu-based distros,
and while there’s nothing overly unique about its
implementation here, it works well. For users, you’ll
get all the joys of what KDE offers, so things like
simplicity and usability are at the forefront of Maui
– and we should mention at this point that Plasma
is a great desktop to have around on low-resource
machines. Of course, some users will prefer to make
it their own, which is possible thanks to numerous
options available when it comes to window
management, background and general layouts. They
don’t necessarily change up the user experience, but
they certainly look good.
The main new feature of Plasma’s integration
within Maui is a new volume applet, ideal for
controlling individual sound levels on your desktop.
There’s also a new Minimise All corner trigger,
used to gain instant access to your desktop. In
principle, it's a great inclusion, but can be a little
tricky to activate correctly at times. While not new,
X Window has also got some much-needed TLC
here, with the essential development tool now
providing support for a greater range of interface
toolkits. If you’re at all interested in tinkering
with Plasma or Maui in more detail, X Window
is essential.
When Maui first launched, one of its weakest
areas was its choice of bundled software. Thanks
to its KDE integration, that isn’t an issue now, with
most of the software choices hand-picked to work
flawlessly on the Plasma desktop. Firefox is the
native browser of choice and has been updated to
version 51, while the latest version of Thunderbird
is also included. Both Krita and Kdenlive are also
welcome additions, but the latter does seem to
cause some occasional slowdown. Managing
packages is a strong point in Maui, with the highly
successful Synaptic Package Manager taking centre
stage here. It makes light work of managing and
merging packages in one place, but installation is
another area where slight slowdown seems to occur.
One of the reasons we’ve grown to really like Maui
Linux is that it works out-of-the-box: you could
install it and really not need to do anything else
with it. That being said, it doesn’t bring anything
new to the distro scene, and more seasoned users
won’t find much here to enjoy. It’s a good distro, but
nothing that we haven’t seen already.
Oliver Hill
Pros
The installation process has
been massively simplified,
making it a suitable choice for
those moving across to Linux.
Cons
There’s nothing about Maui that
seasoned users wouldn’t have
seen before. We’d love to see it
be bold and different.
Summary
Maui offers all the
basic elements that
new Linux users could
want. It’s easy to use,
customisable and has
everything you need. But
it lacks that identity that
has helped take other
distros to the next level.
There’s nothing really
wrong with Maui,
but there’s little to
get excited about.
7
Free software
INTERACTIVE VISUALISATION LIBRARY
Bokeh 0.12.5
Generate impressive
interactive graphs
We live in a data-driven world, where
persuasive arguments come from
those who can show you a convincing
graph making the data and its
implications clear. With a little bit of Python, Bokeh
will quickly give you lovely graphs ready to embed
in reports – or produce them in Jupyter notebooks
showing your work in context.
Installation is simple with your favourite Python
tools: pip3 install bokeh, or use conda if you have an
Anaconda setup of Python and its scientific libraries.
The documentation will get you started quickly,
walking you through several different types of
examples, but you may first want to head directly to
the gallery page of the official Bokeh website to see
what can be achieved.
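If you would rather see a first plot before digging into the gallery, a minimal sketch along the following lines (our example, not taken from the Bokeh documentation) already produces an interactive, zoomable chart saved as an HTML page.

from bokeh.plotting import figure, output_file, show

x = [1, 2, 3, 4, 5]
y = [4, 7, 3, 8, 5]

output_file("readings.html")   # the interactive plot is written to this file
p = figure(title="Sensor readings", x_axis_label="sample", y_axis_label="value")
p.line(x, y, line_width=2)
show(p)                        # opens the page in your default browser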
Bokeh’s strengths are interactive displays,
working with very large or even streaming datasets.
This is in line with the aim of the project, addressing
the challenge of ‘How can scientists and data
analysts be empowered to use visualisation fluidly,
not merely as an output facility or one stage of a
pipeline, but as an entire mode of engagement with
data and models?’
Above Bokeh can produce a variety of colourful data visualisations
Pros
Very powerful, interactive
capabilities. Extensive
documentation, both for
users and developers
Cons
Browser-based output is
great, but will be a limitation
for some. There’s a lot in there
to learn
Great for…
Budding data scientists.
Anyone with a report to write
http://bokeh.pydata.org
MINIMALIST ADDRESS BOOK
adx 1.13
A lightweight XML-based address book that runs in the browser
You may not have considered the need
for an address book separate from your
email client (or nowadays, your mobile
phone’s email client). And XML is
sometimes seen as a clunky legacy standard, but for
holding address data it’s a sensible format, and adx
gives you a simple (easy to edit) text format, easy
web access and a very light, minimal load on your
disk space and system resources.
If you’ve decided that your contacts should be
listed on a server under your own control, not some
internet company, then adx is also worth a look.
It works well in Firefox, but Chrome users will need
to work around security restrictions to run it. You
can export all of your contacts to one VCF file – for
backup, or transferring to other software or your
telephone – or as individual VCF files. Or as QR
codes. Or ‘create an XSLT document to export the
contacts to the format of your choice.’
That DIY XSLT approach may also be the best
way to import contacts, working from adx’s Open
Contacts to adx XSLT file. Other options are
available. That done, things look up as you will get
to use adx and appreciate its features, such as
registering adx as a browser search engine, so that
you’re only a short step away from looking up a
contact. Given its small size and portable format,
sticking it on an old USB flash drive – or uploading to
a VPS – is an easy step to taking back control of your
important data.
Pros
Light and browser-based. Offers
2D barcodes and integrated
browser search
Cons
XML is a source of friction if
you’re not a fan, particularly
XSLT transformation
Great for…
Reclaiming data privacy from
intrusive online services
http://bit.ly/adx_1_13
LIGHTWEIGHT SPREADSHEET
Neoleo
5.0.0
Back to the 1990s, as the Oleo spreadsheet gets a fresh airing
FRESH FOSS
There is plenty of code that has been abandoned. Like the undersea parts of an iceberg, it makes up the majority of open source code available, but lies unnoticed, hidden in old repositories. Not all of it is
without merit, and occasionally someone will dust
off decades-old source code and get it working
again. Oleo was a spreadsheet that had its heyday
long before OpenOffice came along, before even the
Gnumeric spreadsheet, when office functionality
was a weakness for GNU/Linux distros and the GNU
project developed various pieces of office software.
Oleo was started in 1992 and came to a halt in 1999,
just before 2.0 was released. Neoleo is a relaunch
by Mark Carter – you could read it as neo (new) Oleo,
but ne- is Esperanto for ‘not’, so it’s also ‘Not Oleo’.
So what does it offer? Not all of the functionality
you’re used to, but enough for many common
spreadsheet tasks. It can run on quite old hardware
(or a Raspberry Pi Zero), too. While earlier releases
weren’t easy to get running, this version compiled
on our test machines without problem. With an
ncurses-based terminal interface, Neoleo zips along
speedily, provided you can get over one little bump.
The navigation is based on Emacs’s keys – Ctrl
plus a key for direction – so those who have never
used Emacs will flounder without a cheat sheet.
For everyone else, start with the examples included.
There is documentation, but which of it is fresh
and which is two decades old isn’t always obvious.
Features are being added, and this is a project worth
watching and getting involved with.
Pros
Wonderful to have another light
and flexible alternative for old
hardware – or your Pi!
Cons
Emacs key bindings may leave
novice users unable to move
between cells! Much still to do
Great for…
A distinctly old-school
interface for your accounts!
http://bit.ly/neoleo_5
SHELL LANGUAGE
Xonsh
0.5.9
Shell language and command prompt combining Bash and Python
Pronounced Conch, like the shell,
Xonsh is a drop-in /bin/bash
replacement that gives you a Python
shell, as well as the usual sh and bash
functionality. This makes it easy to perform simple
scripted tasks on the fly, entering short but multi-line snippets of Python code (pictured right).
The occasions where something would carry a
different meaning in Python or Bash are handled
quite well. For example, ls -l is a long list in Bash,
and typing it in Xonsh will give you that listing.
However, first define the variables ls and l, and
Xonsh will now take ls -l (with or without spaces
either side of the -) as a subtraction. Delete the
variable ls, and your list functionality returns.
Installation is straightforward, with pip or
conda. The tutorial on the website does a good job
of pointing out the areas where things aren’t the
same in Xonsh as in Bash – e.g. only in the former
are $NAME and ${NAME} different; in Bash they’re
syntactically equivalent. Initially, you’ll get tripped
up occasionally, when Python interprets something
you’d intended for the Bash subprocess, but the
first time you have occasion to import sys and do
something that Bash cannot do, you’ll be hooked.
Give it a go!
Above Entering multi-line Python commands into the shell is great, but watch your indentation
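As a flavour of that behaviour, here is the sort of throwaway session you might type at the Xonsh prompt – a sketch only, with the variable values invented purely for illustration:

ls -l                # no Python names defined, so this runs the usual long listing
ls = 44              # now bind two ordinary Python variables...
l = 2
ls - l               # ...and the same tokens become Python subtraction: 42
del ls               # remove the variable
ls -l                # and the long listing is back
import sys           # the Python standard library is always to hand
print(sys.platform)  # e.g. 'linux'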
Pros
The power of Python and the
Python Standard Library in
your shell. Offers IPython
inspection of objects
Cons
Can be perceptibly slow,
compared to Bash. Edge
cases between Bash and
Python may trip you up
Great for…
Pythonistas with sysadmin
duties. Anyone else too!
http://xonsh.org
OpenSource
RECOMMENDED
Hosting listings
Get your listing in our directory – to advertise here, contact Luke:
[email protected] | +44 (0)1202 586431
Featured host: www.thenames.co.uk | 0370 321 2027
Use our intuitive Control Panel to manage your domain name
About us
Part of a hosting brand started in 1999,
we’re well established, UK based,
independent and our mission is simple
– ensure your web presence ‘just works’.
We offer great-value domain names,
cPanel web hosting, SSL certificates,
business email, WordPress hosting,
cloud and VPS.
What we offer
• Free email accounts with fraud, spam and virus protection.
• Free DNS management.
• Easy-to-use Control Panel.
• Free email forwards – automatically redirect your email to existing accounts.
• Domain theft protection to prevent it being transferred out accidentally or without your permission.
• Easy-to-use bulk tools to help you register, renew, transfer and make other changes to several domain names in a single step.
• Free domain forwarding to point your domain name to another website.
5 Tips from the pros
01 Optimise your website images
When uploading your website to the internet, make sure all of your images are optimised for the web! Try using jpegmini.com software; or, if using WordPress, install the EWWW Image Optimizer plugin.
02 Host your website in the UK
Make sure your website is hosted in the UK, not just for legal reasons! If your server is overseas, you may be missing out on search engine rankings on google.co.uk – you can check where your site is hosted at www.check-host.net.
03 Do you make regular backups?
How would it affect your business if you lost your website today? It is essential to always make your own backups; even if your host offers you a backup solution, it's important to take responsibility for your own data and protect it.
04 Trying to rank on Google?
Google made some changes in 2015. If you're struggling to rank on Google, make sure that your website is mobile-responsive! Plus, Google now prefers secure (HTTPS) websites! Contact your host to set up and force HTTPS on your website.
05 Avoid cheap hosting
We're sure you've seen those TV adverts for domain and hosting for £1! Think about the logic… for £1, how many clients will be jam-packed onto that server? Surely they would use cheap £20 drives rather than £1k+ enterprise SSDs! Try to remember that you do get what you pay for!

Testimonials
David Brewer
"I bought an SSL certificate. Purchasing is painless, and only takes a few minutes. My difficulty is installing the certificate, which is something I can never do. However, I simply raise a trouble ticket and the support team are quickly on the case. Within ten minutes I hear from the certificate signing authority, and approve. The support team then installed the certificate for me."
Tracy Hops
"We have several servers from TheNames and the network connectivity is top-notch – great uptime and speed is never an issue. Tech support is knowledgeable and quick in replying – which is a bonus. We would highly recommend TheNames."
J Edwards
"After trying out lots of other hosting companies, you seem to have the best customer service by a long way, and all the features I need. Shared hosting is very fast, and the control panel is comprehensive…"
SSD web hosting
www.bargainhost.co.uk | 0843 289 2681
Since 2001, Bargain Host has campaigned to offer the lowest possible priced hosting in the UK. It has achieved this goal successfully and built up a large client database which includes many repeat customers. It has also won several awards for providing an outstanding hosting service.
• Shared hosting
• Cloud servers
• Domain names

Supreme hosting
www.cwcs.co.uk | 0800 1 777 000
CWCS Managed Hosting is the UK's leading hosting specialist. It offers a fully comprehensive range of hosting products, services and support. Its highly trained staff are not only hosting experts, they're also committed to delivering a great customer experience and passionate about what they do.
• Colocation hosting
• VPS
• 100% Network uptime

Enterprise hosting:

Value Linux hosting
www.2020media.com | 0800 035 6364
WordPress comes pre-installed for new users or with free managed migration. The managed WordPress service is completely free for the first year. We are known for our 'Knowledgeable and excellent service' and we serve agencies, designers, developers and small businesses across the UK.

Value hosting
elastichosts.co.uk | 02071 838250
ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Its team of engineers provide excellent support around the clock over the phone, email and ticketing system.
• Cloud servers on any OS
• Linux OS containers
• World-class 24/7 support

www.hostpapa.co.uk | 0800 051 7126
HostPapa is an award-winning web hosting service and a leader in green hosting. It offers one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability.
• Website builder
• Budget prices
• Unlimited databases

Small business host
patchman-hosting.co.uk | 01642 424 237
Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you.
• Student hosting deals
• Site designer
• Domain names

Budget hosting:
www.hetzner.de/us | +49 (0)9831 5050
Hetzner Online is a professional web hosting provider and experienced data centre operator. Since 1997 the company has provided private and business clients with high-performance hosting products, as well as the necessary infrastructure for the efficient operation of websites. A combination of stable technology, attractive pricing and flexible support and services has enabled Hetzner Online to continuously strengthen its market position both nationally and internationally.
• Dedicated and shared hosting
• Colocation racks
• Internet domains and SSL certificates
• Storage boxes

Fast, reliable hosting
www.bytemark.co.uk | 01904 890 890
Founded in 2002, Bytemark are "the UK experts in cloud & dedicated hosting". Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices.
• Managed hosting
• UK cloud hosting
• Linux hosting
OpenSource
Your source of Linux news & views
Contact us… [email protected]
COMMENT
Your letters
Questions and opinions about the mag, Linux and open source
Above The next time we cover VPNs, we promise we’ll cover a server setup
Cover disc emptiness
Dear LU&D,
I purchased a copy of LU&D 173 at my local
electronics store. I did not check for the
included Ultimate Distros & FOSS 2017 disc
at checkout. The disc is a major reason to
purchase. When I returned to the store two
days later, the issue had been replaced with
LU&D 174 and I did purchase that issue. So I
have LU&D173 with no disc. Can you please
provide me with the disc for LU&D173?
I have previously noticed your magazine
without a disc on the news stand. Apparently,
it is too easy to strip it from the front of the
magazine. Perhaps it would be wise to move
the disc inside. Also, because of my vision,
I had to get a very strong light to read the
address in your 'Problems with the disc?' box.
White on grey does not provide enough
contrast to read the contents of this sidebar.
Jack Sales
Chris: Jack, we’re very sorry to hear that a
nefarious open source snatcher stole your
DVD. Frustratingly, we don’t have any copies
of that disc in the office as we didn’t receive
much in the way of an archive when the
magazine moved from Bournemouth to Bath,
but we’ll see if we can pop something else in
the post to make up for your loss.
The DVD pages are due for a refresh very
soon, but in the meantime, we’re going to
increase the font weight of the white text
to see if that helps. Additionally, if readers
have any suggestions for what things they’d
like to see on the disc, please email us at
[email protected]
Virtual waste
Welcome aboard! I have been reading LU&D
since 2012 when Russell Barnes was at
the helm. The magazine has more or less
retained its appeal over the years, but the
one section that disappoints quite often is
the features – the latest example being the
VPN feature in LU&D177, which I felt was a
waste of space and time. If all you need is
to mask your web browsing, you can do so
with the latest version of Opera for free and
without jumping through any hoops. The
feature was only about running VPNs through
a VPS despite the much grander title. There
was nothing about setting up a VPN server on
your own hardware, such as with the PiVPN
script, which is what I use to connect to my
home network from public networks. Another
popular use of VPNs was explained in the
previous issue, but wasn’t mentioned here.
In terms of suggestions, while I appreciate
the step-by-step approach of the tutorials,
sometimes it feels forced and comes at the
expense of readable screenshots like in the
Raspberry Pi VPN feature in LU&D176. Also
in some other tutorials (Bitcoin cold store,
LU&D177) why do you show the full screen in
the screenshots instead of just zooming in
on the area of interest like in the rest of the
mag? And yes, a page that curates the best
indie games would be awesome.
Chris: Thanks! It’s good to be sailing the
open (source) seas at the helm of such an
illustrious magazine and thanks for writing
in, even if it is to give us a clip around the
ear hole. I must admit I wasn’t particularly
pleased with how any of the features came
out in LU&D177 (aside from our art editor,
Rosie Webber’s excellent artwork). However,
we did mention PiVPN a little at the end of
the VPN feature, albeit as a small section of
a walkthrough, but I appreciate what you’re
saying in regard to features generally.
The team have spent some time
considering how to approach features and
we felt it was about time a proportion of
them allowed for more long-form writing.
We’re grown-ups after all. We’ll still have a
proportion of sectionalised features with
walkthroughs and so on, but the long-form
approach frees up our writers to take deeper
dives into topics, an example being last
month’s Ubuntu 17.04 feature, and we believe
our readers want this kind of content.
Based on recent reader surveys, we know
a lot of you like to read something you can
get your teeth into during your commute or at
lunchtime, so we hope you enjoy the mix.
Please get in touch to let us know whether
we're getting it right and what you'd like us to cover.
Regarding tutorials, we've been beavering away at zooming in on screens wherever possible. Rest assured, we are making the main thing the focus rather than being fixated on showing the full desktop.
On the gaming front, we've currently got our eyeglass out, searching the horizon for someone who can handle all that Linux gaming has to offer.
Above What do you think of the new long-form features we've introduced?
Twitter: @linuxusermag
Facebook: Linux User & Developer

Kolibri clarification
In LU&D 178, we covered Kolibri, an open source education project highlighted by Black Duck in its Open Source Rookie of the Year awards. Afterwards, Learning Equality, the ed-tech non-profit behind the project, got in touch to clarify a few points that we made. In our write-up we said: “Kolibri uses the Khan Academy Lite (KA Lite) software but this will gradually be phased out in favour of public channels” and “the children (pictured left) are using Kolibri’s KA Lite deployment in Nalanda, India”. There are some misinterpretations, for the following reasons:
KA Lite and Kolibri are two separate open source projects (both created by Learning Equality). Kolibri is not based on the KA Lite software (although it has been designed based on things the non-profit organisation learned from building and deploying KA Lite).
KA Lite only included Khan Academy content, whereas Kolibri will include Khan Academy content along with a wide array of content from other sources. All of these will be made available as public channels to be downloaded into a Kolibri installation.
Once Kolibri has been publicly launched, Learning Equality will start to deprecate KA Lite, as Kolibri will do everything KA Lite could do and more.
Kolibri will launch this summer. It features a much bigger open source content library; a dashboard with which teachers can monitor and evaluate student progress and learning outcomes, as well as design classes and exams; and, finally, it enables content to be curated and aligned to local syllabi and standards, to then be downloaded into the platform for offline use.
To find out more about the Kolibri project, go to https://learningequality.org/kolibri.
Above Kolibri is an inspiring education project with the lofty goal of making high-quality education technology available in low-resource communities
Image: Thomas Van Den Driessche
BUILD A BETTER WEB
www.webdesignermag.co.uk
ON SALE NOW
FREE RESOURCE DOWNLOADS EVERY ISSUE
Available from all good newsagents and supermarkets
Industry interviews | Expert tutorials & opinion | Contemporary features | Behind the build
DESIGN INSPIRATION | PRACTICAL TIPS | BEHIND THE SCENES | STEP-BY-STEP ADVICE | INDUSTRY OPINION
BUY YOUR ISSUE TODAY
Print edition available at www.imagineshop.co.uk
Digital edition available at www.greatdigitalmags.com
Available on the following platforms
facebook.com/webdesignermag
twitter.com/webdesignermag