UBports Documentation
Release 1.0
Marius Gripsgard
Oct 09, 2017

Contents

1. About UBports
2. Install Ubuntu Touch
3. Daily use
4. Advanced use
5. Contributing to UBports
6. App development
Welcome to the official documentation of the UBports project!
UBports develops the mobile phone operating system Ubuntu Touch. Ubuntu Touch is a mobile operating system
focused on ease of use, privacy, and convergence.
On this website you will find instructions on how to install Ubuntu Touch on your mobile phone, user guides, and detailed
documentation on all system components. If this is your first time here, please consider reading our introduction.
Note: This documentation is currently in a quite volatile state, so don't be alarmed if pages have been shuffled around since
the last time you were here! If you want to help improve the docs, this will get you started.
Chapter 1: About UBports
Introduction
This is the documentation for the UBports project. Our goal is to create an open-source (GPL if possible) mobile
operating system that converges and respects your freedom.
About UBports
The UBports project was founded by Marius Gripsgard in 2015 and in its infancy was a place where developers could
share ideas and educate each other in hopes of bringing the Ubuntu Touch platform to more and more devices.
After Canonical suddenly announced plans to terminate support for Ubuntu Touch in April 2017, UBports and
its sister projects began work on the open-source code, maintaining and expanding its possibilities for the future.
About the Documentation
This documentation is always improving thanks to the members of the UBports community. It is written in reStructuredText and converted into this readable form by Sphinx, ReCommonMark, and Read the Docs. You can start
contributing by checking out the Documentation intro.
All documents are licensed under the Creative Commons Attribution ShareAlike 4.0 (CC-BY-SA 4.0) license. Please
give attribution to “The UBports Community”.
Attribution
This page was heavily modeled after the Godot Engine's documentation introduction, with attribution to Juan Linietsky,
Ariel Manzur, and the Godot community.
Chapter 2: Install Ubuntu Touch
There are many ways to install Ubuntu Touch on your supported device. To find out whether your device is supported, see
this page.
Back up your data
The data on your phone is important, and you don't need to lose it during the switch.
If you're already using Ubuntu Touch on your phone and any distro that supports snaps on your PC, use the magic-device-tool to back up your device.
Non-Canonical devices
These instructions will help you install our OS on the “Core Devices” such as the Nexus 5 or Fairphone 2.
Switch from Android to Ubuntu Touch
• On any Linux distro with Snaps: Use the magic-device-tool. Please read the instructions carefully!
• On Windows or MacOS (beta!): Use the UBports GUI installer
Official “Ubuntu for Devices” devices
These instructions will help you install UBports on a device that ran an official Canonical build of Ubuntu for Devices, such as
the BQ M10 or the Meizu MX4.
Switch from Canonical builds to UBports builds
• On any Linux distro with Snaps: Use the magic-device-tool. Please read the instructions carefully!
• On Windows or MacOS (beta!): Use the UBports GUI installer
Switch from Android to Ubuntu
BE VERY CAREFUL! This can permanently damage or brick your device. NEVER check the "Format All" option in
SP Flash Tool, and carefully read everything that it tells you. Some users have destroyed the partition that holds their
hardware IDs and can no longer connect to Wi-Fi or cellular networks.
• BQ devices: Download the official Ubuntu Edition firmware from here and use SP Flash Tool to flash it.
• Meizu devices: You are pretty much stuck on Flyme. For the MX4, there are some instructions floating around
for downgrading your OS, gaining root with an exploit, unlocking your bootloader, and so on. We aren't going
to link to them here for obvious reasons. The Pro5 is Exynos-based and has its own headaches. On these devices
you proceed even more at your own risk.
We are being vague with these instructions on purpose. While we appreciate that lots of people want to use our OS,
flashing a device with OEM tools shouldn’t be done without a bit of know-how and plenty of research. People have
destroyed their phones.
Chapter 3: Daily use
This section of the documentation details common tasks that users may want to perform while using their Ubuntu
Touch device.
Run desktop applications
Libertine allows you to use standard desktop applications in Ubuntu Touch.
To display and launch applications you need the Desktop Apps Scope, which is available in the Canonical App Store.
To install applications you need to use the command line, as described below.
Manage containers
Create a container
The first step is to create a container where applications can be installed:
libertine-container-manager create -i CONTAINER-IDENTIFIER
You can add extra options such as:
• -n NAME: a more user-friendly name for the container.
• -t TYPE: either chroot or lxc. The default is chroot, which is compatible with every device. If the
kernel of your device supports it, lxc is recommended.
The creation process can take some time, due to the size of the container (several hundred megabytes).
Note: The create command shown above cannot be run directly in the terminal app, due to AppArmor restrictions.
You can run it from another device using either an adb or an ssh connection. Alternatively, you can run it from the terminal
app over a loopback ssh connection, started with the command ssh localhost.
List containers
To list all containers created run: libertine-container-manager list
Destroy a container
libertine-container-manager destroy -i CONTAINER-IDENTIFIER
Manage applications
Once a container is set up, you can list the installed applications:
libertine-container-manager list-apps
Install a package:
libertine-container-manager install-package -p PACKAGE-NAME
Remove a package:
libertine-container-manager remove-package -p PACKAGE-NAME
Note: If you have more than one container, you can use the option -i CONTAINER-IDENTIFIER to specify
which container an operation applies to.
Files
Libertine applications have access to the following folders:
• Documents
• Music
• Pictures
• Downloads
• Videos
Tips
Locations
For every container you create, two directories will be created:
• a root directory, ~/.cache/libertine-container/CONTAINER-IDENTIFIER/rootfs/, and
• a user directory, ~/.local/share/libertine-container/user-data/CONTAINER-IDENTIFIER/.
Shell access
To execute any arbitrary command as root inside the container run:
libertine-container-manager exec -c COMMAND
For example, to get a shell into your container you can run:
libertine-container-manager exec -c /bin/bash
Note: When you launch bash in this way you will not get any specific feedback to confirm that you are now inside
the container. You can run ls / to confirm for yourself: the listing will differ inside and outside of the container.
To get a shell as user phablet run:
DISPLAY= libertine-launch -i CONTAINER-IDENTIFIER /bin/bash
Background
A display server coordinates input and output of an operating system. Most Linux distributions today use the X server.
Ubuntu Touch does not use X, but a new display server called Mir. This means that standard X applications are not
directly compatible with Ubuntu Touch. A compatibility layer called XMir resolves this. Libertine relies on XMir to
display desktop applications.
Another challenge is that Ubuntu Touch system updates are released as OTA images. A consequence of this is that
the root filesystem is read only. Libertine provides a container with a read-write filesystem to allow the installation of
regular Linux desktop applications.
Chapter 4: Advanced use
This section of the documentation details advanced tasks that power users may want to perform on their Ubuntu Touch
device.
Note: Some of these guides involve making your system image writable, which may break OTA updates. The guides
may also reduce the overall security of your Ubuntu Touch device. Please consider these ramifications before hacking
on your device too much!
Shell access via adb
You can put your UBports device into developer mode and access a Bash shell from your PC. This is useful for
debugging or more advanced shell usage.
Install ADB
First, you’ll need ADB installed on your computer.
On Ubuntu:
sudo apt install android-tools-adb
On Fedora:
sudo dnf install android-tools
And on MacOS with Homebrew:
brew install android-platform-tools
For Windows, grab the command-line tools only package from here.
Enable developer mode
Next, you’ll need to turn on Developer Mode.
1. Reboot your device
2. Place your device into developer mode (Settings - About - Developer Mode - check the box to turn it on)
3. Plug the device into a computer with adb installed
4. Open a terminal and run adb devices.
If there's a device in the list (the command prints a device entry after "List of devices attached", rather than just a blank line), you are
able to use ADB successfully. If not, continue to the next section.
Note: When you're done using the shell, it's a good idea to turn Developer Mode off again.
Add hardware IDs
ADB doesn't always know which devices on your computer it should or should not talk to. You can manually add the
devices that it does not recognize.
If your device is listed below, run the corresponding command. Then run adb kill-server, followed by the command
you were initially trying to run.
Fairphone 2:
printf "0x2ae5 \n" >> ~/.android/adb_usb.ini
Oneplus One:
printf "0x9d17 \n" >> ~/.android/adb_usb.ini
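As a sketch of the same pattern, here is how you would register a hypothetical vendor ID 0x1234 and confirm the append worked. 0x1234 is not a real device ID; substitute the value for your device from the list above.

```shell
# Append a hypothetical vendor ID (0x1234) to adb's known-vendor list,
# then print the last line of the file to confirm it was written
mkdir -p ~/.android
printf "0x1234\n" >> ~/.android/adb_usb.ini
tail -n 1 ~/.android/adb_usb.ini   # prints 0x1234
```

Remember to run adb kill-server afterwards so the adb server re-reads the file the next time it starts.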
Shell access via ssh
You can use ssh to access a shell from your PC. This is useful for debugging or more advanced shell usage.
You need an ssh key pair for this. Logging in via password is disabled by default.
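If you don't have a key pair yet, you can generate one first. The file name below is just an illustration; the rest of this guide assumes ~/.ssh/id_rsa.pub, so adapt the path to your setup.

```shell
# Remove any leftover example keys so ssh-keygen doesn't prompt to overwrite
rm -f ./ubports_example_key ./ubports_example_key.pub
# Generate an RSA key pair with no passphrase (-N ""); omit -N ""
# if you want to be prompted for a passphrase instead
ssh-keygen -q -t rsa -b 4096 -f ./ubports_example_key -N ""
ls ubports_example_key ubports_example_key.pub
```

This creates ubports_example_key (the private key, which stays on your PC) and ubports_example_key.pub (the public key, which is the file you copy to the device).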
Copy the public key to your device
First you need to transfer your public key to your device. There are multiple ways to do this. For example:
• Connect the UBports device and the PC with a USB cable, then copy the file using your file manager.
• Or transfer the key via the internet, by mailing it to yourself, uploading it to your own cloud storage or
webserver, etc.
• You can also connect via adb and use the following command to copy it:
adb push ~/.ssh/id_rsa.pub /home/phablet/
Configure your device
Now you have the public key on the UBports device. Let’s assume it’s stored as /home/phablet/id_rsa.pub.
Use the terminal app or an adb connection to perform the following steps on your phone.
mkdir /home/phablet/.ssh
chmod 700 /home/phablet/.ssh
cat /home/phablet/id_rsa.pub >> /home/phablet/.ssh/authorized_keys
chmod 600 /home/phablet/.ssh/authorized_keys
chown -R phablet:phablet /home/phablet/.ssh
Now start the ssh server:
sudo service ssh start
To make sure the ssh server is automatically started in the future, execute:
sudo setprop persist.service.ssh true
Connect
Now everything is set up and you can connect via ssh:
ssh phablet@<ip-address>
Of course you can now also use scp or sshfs to transfer files.
References
• askubuntu.com: How can I access my Ubuntu phone over ssh?
• gurucubano: BQ Aquaris E 4.5 Ubuntu phone: How to get SSH access to the ubuntu-phone via Wifi
Switch release channels
Chapter 5: Contributing to UBports
Welcome! You're probably here because you want to contribute to UBports. The pages below will
help you do this in a way that's helpful to both the project and yourself.
If you’re just getting started, we always need help with thorough bug reporting. If you are multilingual, translations
are also a great place to start.
If those aren't enough for you, see our contribute page for an introduction to our focus groups.
Bug reporting
This page contains information to help you help us by creating an actionable bug report for Ubuntu Touch. It does NOT contain
information on reporting bugs in apps; most of the time, an app's entry in the OpenStore will specify where and how to do
that.
Get the latest Ubuntu Touch
This might seem obvious, but it’s easy to miss. Go to (Settings - Updates) and make sure that your device doesn’t have
any Ubuntu updates available. If not, continue through this guide. If so, update your device and try to reproduce the
bug. If it still occurs, continue through this guide. If not, do a little dance! The bug has already been fixed and you
can continue using Ubuntu Touch.
Check if the bug is already reported
Open up the bug tracker for ubports-touch.
First, you’ll need to make sure that the bug you’re trying to report hasn’t been reported before. Search through the bugs
reported. When searching, use a few words that describe what you’re seeing. For example, “Lock screen transparent”
or “Lock screen shows activities”.
If you find that a bug report already exists, select the “Add your Reaction” button (it looks like a smiley face) and
select the +1 (thumbs up) reaction. This shows that you are also experiencing the bug.
If the report is missing any of the information specified later in this document, please add it yourself to help the
developers fix the bug.
Reproduce the issue you've found
Next, find out exactly how to recreate the bug that you've found. Document the exact steps that you took to find the
problem in detail. Then, reboot your phone and perform those steps again. If the problem still occurs, continue on to
the next step. If not...
Getting Logs
We appreciate as many good logs as we can get when you report a bug. In general, /var/log/dmesg and the output of
/android/system/bin/logcat are helpful when resolving an issue. This section shows you how to get these logs.
To get set up, follow the steps to set up ADB.
Now, you can get the two most important logs.
dmesg
1. Using the steps you documented earlier, reproduce the issue you’re reporting
2. cd to a folder where you’re able to write the log
3. Delete the file UTdmesg.txt if it exists
4. Run the command: adb shell "dmesg" > "UTdmesg.txt"
This log should now be located at UTdmesg.txt under your working directory, ready for uploading later.
logcat
1. Using the steps you documented earlier, reproduce the issue you’re reporting
2. cd to a folder where you’re able to write the log
3. Delete the file UTlogcat.txt if it exists
4. Run the command: adb shell "/android/system/bin/logcat -d" > "UTlogcat.txt"
This log will be located at UTlogcat.txt in your current working directory, so you’ll be able to upload it later.
Making the bug report
Now it’s time for what you’ve been waiting for, the bug report itself!
First, pull up the bug tracker and click “New Issue”. Log in to GitHub if you haven’t yet.
Next, you’ll need to name your bug. Pick a name that says what’s happening, but don’t be too wordy. Four to eight
words should be enough.
Now, write your bug report. A good bug report includes the following:
• What happened: A synopsis of the erroneous behavior
• What I expected to happen: A synopsis of what should have happened, if there wasn’t an error
• Steps to reproduce: You wrote these down earlier, right?
• Logs: Attach your logs by clicking and dragging them into your GitHub issue.
• Software Version: Go to (Settings - About) and list what appears on the “OS” line of this screen. Also include
the release channel that you used when you installed Ubuntu on this phone.
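For illustration, a minimal report following this outline might look like the sketch below. Every detail here is invented; fill in your own observations, logs, and version information.

```
Title: Lock screen stays transparent after rotating

What happened: After rotating the device while locked, the lock screen
background became transparent and showed my running apps.
What I expected to happen: The lock screen keeps its opaque background.
Steps to reproduce:
  1. Lock the device
  2. Rotate the device to landscape
  3. Press the power button twice
Logs: UTdmesg.txt and UTlogcat.txt attached
Software Version: (contents of the "OS" line in Settings - About),
stable channel, on a Nexus 5
```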
Once you’re finished with that, post the bug. You can’t add labels yourself, so please don’t forget to state the device
you’re experiencing the issue on in the description so a moderator can easily add the correct tags later.
A developer or triager will confirm and triage your bug, then work can begin on it. If you are missing any information,
you will be asked for it, so make sure to check in often!
Documentation
Tip: Documentation on this site is written in reStructuredText, or RST for short. Please check the RST Primer if
you are not familiar with RST.
This page will guide you through writing great documentation for the UBports project that can be featured on this site.
Documentation guidelines
These rules govern how you should write your documentation to avoid problems with style, format, or linking. If you
don’t follow these guidelines, we will not accept your document.
Title
All pages must have a document title, underlined with equals signs. The title is shown in the table of contents (to the left) and at the top of the page.
Titles should be sentence cased rather than Title Cased. For example:
Incorrect casing:
Writing A Good Bug Report
Correct casing:
Writing a good bug report
Correct casing when proper nouns are involved:
Installing Ubuntu Touch on your phone
There isn’t a single definition of title casing that everyone follows, but sentence casing is easy. This helps keep
capitalization in the table of contents consistent.
Reference
References create a permanent link. One should always appear as the first line of your document.
For example, take a look at this document’s first three lines:
.. _contribute-doc-index:
Documentation
=============
The reference name can be called in another document to easily link to a page:
For example, check out the :ref:`Documentation intro <contribute-doc-index>`
This will create a link to this page that won’t change if this page changes directories in a reorganization later.
Your reference should follow the naming scheme part-section-title. This document, for example, is the
index of the Documentation (doc) section in the Contribute part of the documentation.
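Following that scheme, a hypothetical page at the top of the bug reporting section in the Contribute part could begin with (the reference name here is invented for illustration):

```rst
.. _contribute-bugreporting-index:

Bug reporting
=============
```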
Table of contents
People can’t navigate to your new page if they can’t find it. Neither can Sphinx. That’s why you need to add new
pages to Sphinx’s table of contents.
You can do this by adding the page to the index.rst file in the same directory that you created it. For example, if
you create a file called “newpage.rst”, you would add the line marked with a chevron (>) in the nearest index:
.. toctree::
   :maxdepth: 1
   :name: example-toc

   oldpage
   anotheroldpage
>  newpage

The order matters. If you would like your page to appear in a certain place in the table of contents, place it there. In
the previous example, newpage would be added to the end of this table of contents.
Contribution workflow
Note: You will need a GitHub account to complete these steps. If you do not have one, click here to begin the process
of making an account.
Directly on GitHub
Read the Docs and GitHub make it fairly simple to contribute to this documentation. This section will show you the
basic workflow to get started by editing an existing page on GitHub.
1. Find the page you would like to edit
2. Click the “Edit on GitHub” link to the right of the title
3. Make your changes to the document. Remember to write in ReStructuredText!
4. Propose your changes as a Pull Request.
If there are any errors with your proposed changes, the documentation team will ask you to make some changes and
resubmit. This is as simple as editing the file on GitHub from your fork of the repository.
Manually forking the repository
You can make more advanced edits to our documentation by forking ubports/docs.ubports.com on GitHub. If you’re
not sure how to do this, check out the excellent GitHub guide on forking projects.
Building this documentation locally
If you’d like to build this documentation before sending a PR (which you should), follow these instructions on your
local copy of your fork of the repository.
Note: You must have pip installed before following these instructions. On Ubuntu, install the pip package by running
sudo apt install python-pip. This page has instructions for installing Pip on other operating systems and
distros.
1. Install the Read the Docs theme and ReCommonMark (for Markdown parsing):
pip install sphinx sphinx_rtd_theme recommonmark
2. Change into the docs.ubports.com directory:
cd path/to/docs.ubports.com
3. Build the documentation:
python -m sphinx . _build
This tells Sphinx to build the documentation found in the current directory and put it all into _build. There will be
a couple of warnings about README.md and a nonexistent static path. Watch out for warnings about anything else,
though; they could mean something has gone wrong.
If all went well, you can enter the _build directory and double-click on index.html to view the UBports documentation.
Translations
Although English is the official base language for all UBports projects, we believe you have the right to use them in any
language you want.
We are working hard to meet that goal, and you can help as well.
There are two levels for this:
• A casual approach, as a volunteer translator.
• A fully committed approach, as a UBports Member, by filling in this application.
Tools for Translation
• For everyone: a web-based translation tool called Weblate. This is the recommended way.
• For advanced users: working directly on .po files with the editor of your choice and a GitHub account. The .po
files for each project are in their repositories in our GitHub organization.
There is also a Translation Forum to discuss translating Ubuntu Touch and its core apps.
How-To
UBports Weblate
You can go to UBports Weblate, click on the "Dashboard" button, go to a project, and start making anonymous suggestions
without being registered. If you want to save your translations, you must be logged in.
For that, go to UBports Weblate and click on the “Register” button. Once in the “Registration” page, you’ll find two
options:
• Register using a valid email address, a username, and your full name. You’ll need to resolve an easy control
question too.
• Register using a third party registration. Currently the system supports accounts from openSUSE, GitHub,
Fedora, and Ubuntu.
Once you’re logged in, the site is self-explanatory and you’ll find there all the options and customization you can do.
Now, get on with it. The first step is to check whether your language already exists in the project of your choice.
If your language is not available for a specific project, you can add it yourself.
You decide how much time you can put into translation. From minutes to hours, everything counts.
.po file editor
As mentioned above, you need an editor of your choice and a GitHub account to translate .po files directly.
There are online gettext .po editors and others you can install on your computer.
You can choose whatever editor you want, but we prefer to work with free software only. There are too many plain-text
editors and .po translation tools to list them all here.
If you want to work with .po files directly, you surely know what you're doing.
Translation Team Communication
The straightforward and recommended way is to use the forum category that UBports provides for this task.
To use it you need to register, or login if you’re registered already.
The only requirement is to start your post by putting your language in brackets in the "Enter your topic title here"
field. For example: [Spanish] How to translate whatever?
Just for your information, some projects are using Telegram groups too, and some teams are still using the Ubuntu
Launchpad framework.
In your interactions with your team you’ll find the best way to coordinate your translations.
License
All the translation projects, and all your contributions to them, are under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, which you explicitly accept by contributing to the project.
Go to that link to learn exactly what this means.
Chapter 6: App development
Make the next generation of apps
Welcome to an open source and free platform under constant scrutiny and improvement by a vibrant global community,
whose energy, connectedness, talent and commitment is unmatched.
Ubuntu is also the third most deployed desktop OS in the world.
Get started with app development
Design comes first - build upon solid principles
From solid fundamental principles to refined UI building blocks and typography, all Ubuntu apps share a simple design
and superb functionality.
From top to bottom, they feel and behave as part of the same family, regardless of the implementation toolkit.
Learn more about our design values
The Ubuntu App platform - develop with seamless device integration
The list of Ubuntu App Platform APIs is long and constantly growing, integrating all Ubuntu apps seamlessly into the
device experience, whatever the app’s toolkit and coding language.
Security and privacy are not after-thoughts and are built at the core of our APIs to empower users and developers.
This tight integration also enables a true write-once, run-everywhere approach that conserves precious developer time.
Learn more about platform features
Getting started
Here you can install everything needed to start developing apps for Ubuntu.
1. Start by Installing the Ubuntu SDK.
2. Check out the Ubuntu installation guide for devices to install Ubuntu on a supported device.
Tip: A device is not required: you can develop and run apps and scopes using the Ubuntu emulator right on your
Ubuntu desktop. For more info, see the Ubuntu SDK.
The Ubuntu development model
Frameworks: targeting APIs
Ubuntu applications and scopes are packaged, distributed and deployed using a format called click packaging. A
framework is a set of Ubuntu APIs an app or scope is developed for.
When packaged, all apps and scopes must declare which API framework they are intending to use on the device.
Learn more about frameworks ›
Security and app isolation
All Ubuntu apps and scopes are confined, meaning they only have access to their own resources and are isolated from
other apps and parts of the system. The developer must declare which policy groups are needed for the app or scope
to function properly within the confinement rules that provide security and privacy.
Learn more about security policies
The build environment
A build environment, or click target, will be required to develop and test an app or scope. This environment will make
it possible to build the software for a different architecture if cross-compilation is required (e.g. an app that uses C++)
and to run it on different devices (the desktop, a phone/tablet or the emulator). Whenever a target is required the IDE
will help to configure it based on the framework and target architecture (e.g. i386 or armhf). The architecture will
correspond to the test environment the developer is using and ultimately what the products are built with.
Learn more about building for different architectures ›
Testing applications on devices
As for testing environments, the developer can choose an Ubuntu emulator, which can be x86 or armhf, or real
hardware with a reference device, such as the Nexus 4 or Nexus 7. While it is possible that simple apps may work in
the local desktop environment, it is only in one of these supported testing environments that the entire set of framework
APIs are available. It is generally recommended that an app or scope be packaged as a click and installed to the device
or emulator in order to properly test it. Again the IDE will assist with creating, validating, deploying and installing the
package.
Learn how to run apps with the Ubuntu SDK IDE
Your first app
Get the tools
Developers feel right at home and productive in the Ubuntu SDK IDE, whatever their experience.
This integrated development environment offers a richly featured and deeply integrated set of development tools that
gears up productivity and includes direct access to attached Ubuntu devices and Ubuntu emulators.
Install the Ubuntu SDK IDE
Pick your language
For the UI, you can choose either QML or HTML5 to write Ubuntu apps.
For the logic, JavaScript, Qt and other languages such as Python or Go can power refined QML UIs.
Note: for starters, we recommend QML and JavaScript, which are the languages used in most tutorials.
Write your first app
Design
Together we can design and build beautiful and usable apps for Ubuntu.
Get started
Familiarise yourself with the essentials before designing your app.
Design values ›
Style (coming soon)
Make your app look beautiful by using the uniquely designed Ubuntu fonts and colours.
Patterns
Use common patterns to allow users to get where they want to naturally and with little effort.
Gestures ›
Building blocks
See use cases and advice to get the best out of the Ubuntu toolkit.
Use the header ›
System integration (coming soon)
See how your app can integrate with the Ubuntu shell.
Resources (coming soon)
Download handy templates and the Ubuntu color palette to help you on your way.
Start building your app!
The toolkit contains all the important components you need to make your own unique Ubuntu experience. Follow the
link below for all the API and developer documentation.
Ubuntu SDK ›
Release phases
The new App Guide will be released in phases over the coming days and weeks.
• Phase 1 – Get started and Building blocks
• Phase 2 – Patterns
• Phase 3 – System integration
• Phase 4 – Resources and Style
See the Insights blog for more updates.
Or follow us on Google+ and see the Canonical Design blog for all the latest news and designs.
Get started overview
Understand the Ubuntu design values and how to achieve a seamless experience.
Convergence
See how convergence is achieved to provide a seamless experience across all devices.
Mapping interactions ›
Design values
Understand the Ubuntu design values and how they can be applied to your designs.
Focus on content ›
Why design for Ubuntu?
Discover how your designs can be part of a thriving community.
Get involved ›
Make it Ubuntu
Apply Ubuntu’s key components and patterns to achieve a great user experience inside your app.
Use the bottom edge ›
Convergence
Use one operating system for all devices to provide familiar experiences from phone to tablet to desktop, and back
again.
• What is convergence? ›
• Why are we doing it? ›
• How are we doing it? ›
• See for yourself ›
What is convergence?
Convergence is a single user experience that spans all form factors and adapts to the different contexts of use. It
means exactly the same operating system and applications run on phones, tablets and desktops. This is done by using
responsive layouts that adapt to different screen or window sizes.
Convergence supports all input types equally and simultaneously to allow users to interact using a pointer, touch or
keyboard; whenever and however they choose.
Why are we doing it?
Over the last twenty years computing has become exponentially faster, cheaper and more power efficient. As a result,
phones and tablets today have the processing power to undertake tasks that only a few years ago required PC hardware.
The boundaries between form factors are becoming blurred; there is very little difference in terms of hardware between
an ultrabook with a touchscreen and a 12-inch tablet with a keyboard attached.
By using convergence we break down the last barrier between form factors with a single operating system and app
ecosystem for all different types of hardware. This enables new forms of interaction. For example, drafting an email
on your phone during your journey to work, and then when you arrive at your desk you can plug the phone into a
monitor and continue composing the same email in a desktop environment.
How are we doing it?
In 2013, Ubuntu announced a crowdfunding effort to build a flagship device called the Ubuntu Edge. It was to be a
next-generation smartphone that also worked as a full desktop PC. Although the device was never realized, the vision
of a convergent operating system that shifts seamlessly from smartphone to desktop is still alive and well.
Responsiveness and consistency
When designing across different sized devices you have to bear in mind how an app will adapt to having more or less
real estate when presented on a small, medium or large screen.
Where possible place panels together to take full advantage of additional screen real estate on different devices, in
order to create a consistent and proportionate design that makes use of the available space.
Dekko app
The Dekko app responds to additional screen real estate and keeps its look and feel from mobile to tablet to desktop.
Adaptive layouts
Applications live in windows (in a windowed environment) or surfaces (in a non-windowed environment). Application
layouts change in a responsive manner depending on the size of their window or surface. One common method of
creating a responsive layout is to use panels. In a small window or surface, only a single panel needs to be displayed.
The user can navigate through the panels by tapping on items or going back. When the window or surface size gets
larger, the application can switch to displaying two or more panels side by side, reducing the number of
navigational actions the user needs to undertake.
Typical examples of this are applications like contacts, messages, and email. Of course, there can be any number of
combinations of panels depending on the specific app’s needs.
The AdaptivePageLayout API component eliminates guesswork for developers when adapting from one form factor
to another. It works by tracking an infinite number of virtual columns that may be displayed on a screen at once. For
example, an app will automatically switch between a 1-panel and 2-panel layout when the user changes the size of the
window or surface, by dragging the app from the main stage to the side stage.
Changing the size of the window or surface resizes one or more joined panels. Typically, the right-most panel resizes
and the left-most panel maintains its original dimensions. The dimensions of the right-most panel will normally be 40
grid units or 50 grid units, though this panel may itself be resizable depending on the developer’s requirements.
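As a rough sketch of this behaviour (the page titles and actions here are illustrative, not from a real app), an AdaptivePageLayout can add a page to the next column, and the toolkit decides whether the two panels are shown side by side or stacked, depending on the available width:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    width: units.gu(100)
    height: units.gu(75)

    AdaptivePageLayout {
        id: layout
        anchors.fill: parent
        primaryPage: listPage   // the left-most panel

        Page {
            id: listPage
            header: PageHeader { title: i18n.tr("Contacts") }

            Button {
                anchors.centerIn: parent
                text: i18n.tr("Show details")
                // On a narrow surface this stacks the page on top;
                // on a wide surface it opens in the adjacent column.
                onClicked: layout.addPageToNextColumn(listPage, detailPage)
            }
        }

        Page {
            id: detailPage
            header: PageHeader { title: i18n.tr("Details") }
        }
    }
}
```

Resizing the window across the breakpoint makes the layout switch between the 1-panel and 2-panel arrangements automatically; no extra code is needed.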
How it works
The developer specifies where panels should go and the breakpoint at which they can expand. The
adaptive layout places them automatically.
Minimal changes to functionality
For a consistent and familiar user experience, the SDK maps touch, pointer, and keyboard (focus) interactions to every
function.
Context menus
Using touch a user can swipe or long-press on a list item to reveal a contextual menu. Using a pointer (mouse or
trackpad) a user can right-click the item to reveal the contextual menu. Using a keyboard a user can focus the desired
item and press the MENU key to open the context menu. This is a great example of how each SDK component
supports all input types equally and simultaneously.
All the components in the toolkit adapt to a convergent environment. See how the header converges to provide more
room for actions within different surfaces.
See for yourself
Ubuntu devices are shipped with built-in apps that converge over multiple devices, such as Dekko, Calendar, Contacts
and Music. They all work in the same way on your phone, tablet and desktop, giving you a seamless experience across
all devices.
Design values
This guide is intended to help designers and developers create unique and valuable user experiences.
• All input types supported equally ›
• Fast and effortless interactions ›
• Action placement ›
• Meaning in colors ›
• Focus on content ›
All input types supported equally
In order to achieve convergence, the toolkit has adapted all components to work seamlessly across all devices with
minimal changes to functionality and visual appearance.
This means that touch, pointer and focus interactions are now mapped to perform similar functions across different
devices for a consistent and familiar user experience. No matter which input method is used, the UI responds to the
user's interaction in the way they expect.
Use case
Note: For more details on how a seamless experience can be achieved in your app, see Convergence.
Fast and effortless interactions
Allow users to move through your app with minimum effort, where it is both natural and logical to them.
Bottom edge
The bottom edge allows for a natural progressive swipe from the bottom of the screen. It can be opened using touch,
by clicking on the bottom edge tab with a pointer, or by pressing Return when the bottom edge tab is focused using
keyboard navigation.
Task switcher
The task switcher allows the user to easily switch between apps or scopes using a right edge swipe, by pushing the
pointer against the right edge of the screen, or by pressing SUPER+W.
Action placement
Throughout the Ubuntu platform positive actions, such as OK, Yes and Accept, are placed on the right, and negative
actions, such as Delete and Cancel, are placed on the left.
Note: The position of positive and negative actions is important to consider when designing your app, because it can
reinforce behavior when used in a consistent way.
Negative swipes left
The user swipes left to right to reveal a red deletion option when editing a contact.
Positive swipes right
The user swipes right to left to reveal contextual options, such as information and messaging.
Note: Users can access the same actions with a pointer or keyboard by pressing the right mouse button or MENU key to open
a context menu.
Meaning in colors
The Suru design language associates meanings with certain colours to help the user distinguish between actions.
Most color-blind people have difficulty distinguishing red from green. Don't use color in isolation; instead, combine
it with additional visual cues (e.g. text labels, button position and style).
Note: Think about how colors complement each other and how they can create a harmony that is pleasing on the eye.
Green
Positive actions, such as OK, new, add or call.
Red
Negative and destructive actions, such as delete or block contact.
Blue
Blue is an informative colour; it is neither positive nor negative. Use blue for selected activity states. It works with all
other elements, on both dark and light backgrounds, and stands out clearly and precisely when used in combination
with a focus state.
Note: For more information on how color is used across the platform see Color palette (coming soon).
Focus on content
Too much user interface can interfere with content, but too little can make your app difficult to use. By focusing
clearly on content, many pitfalls can be avoided.
Make it easy to find content
Allow users to access content easily through navigational methods by using the most appropriate components.
Do
The header can provide quick access to important actions and navigational options at the top of the screen or window.
Don’t
Drawers have low discoverability and can hide important views from the user. Consider using the header or header
section instead.
Design philosophy
The Ubuntu interface has been designed according to a philosophy called Suru.
• Suru meaning ›
• Translated in design ›
• Suru mood board ›
Note: See how the Suru visual language is integrated into the new Xerus 16.04 wallpaper here.
Suru meaning
Here at Ubuntu we believe that everyone should have access to free, reliable, and trusted software that can be shared
and developed by anyone. This has paved the way for the community to grow and prosper with freedom, trust, and
collaboration. This integral belief has been translated into our design philosophy, called Suru.
• Suru stems from the Ubuntu brand values alluding to Japanese culture.
• The design of Suru is inspired by origami, because it gives us a solid and tangible foundation.
• Paper can be used in all areas of the brand in two and three dimensional forms, because it is transferable and
diverse.
Translated in design
Suru brings a precise yet organic structure to the Ubuntu interface. The sharp lines and varying levels of transparency
evoke the edges and texture of paper. All elements are placed deliberately, with the express aim of being easy for the
user to identify and use.
When using a small layout, the information and functionality is folded into a compact object that can be refolded
to expose different areas. As the layout size increases, the object can become progressively larger, allowing more of
the information and functionality to be exposed at any one time.
Origami
Origami has long been associated with good fortune and represents the visual style for the Ubuntu Phone. Origami
folds are used to define the design.
Simple details
What is most important is that screen layouts retain a natural, rhythmic quality, and a neatness and clarity that helps
the user find things quickly and use them intuitively.
Using subtle grids for accuracy
The folds from origami produce simple graphical details that allow designers to create a subtle grid for positioning
brand elements and components, such as logos, icons or copylines. This helps maintain focus on the main image or
graphic element.
Suru mood board
Influences and inspiration
Why design for Ubuntu?
Design an app that will be part of a growing new eco-system which is powered by a thriving community.
• Your app will be part of the third most deployed desktop OS in the world, which is free and accessible to all
• Your app will be able to work seamlessly across all Ubuntu client platforms (desktop, phone, tablet)
• The list of Ubuntu App Platform APIs is ever expanding, integrating all Ubuntu apps seamlessly into the Unity
shell and user experience, whatever the app’s toolkit and coding language
• Join a vibrant global community whose enthusiasm, energy, connectedness, many talents and commitment is
unmatched
The Ubuntu App Platform
The list of Ubuntu apps is constantly growing and evolving. The platform's tight integration enables a true write-once,
run-everywhere approach that conserves precious developer and designer time.
Music
The Music app is a perfect example of how the Design Team has collaborated with the community to produce a well
designed and functional app that works on all form factors.
See how the Music app was put together from the people who made it in this video.
Clock
The Clock app encapsulates the clear and concise Suru language with its straight lines and fold out design.
Read about how the new Clock app reflects convergence design thinking.
Dekko
The Dekko app is an email client that was created in collaboration with the community and is one of the first showcase
convergent apps.
Read an interview with the community member who collaborated with the Ubuntu Design Team to achieve Dekko.
Get involved
Design
Put your creativity to work by improving the look and feel of Ubuntu. Help design graphics, backgrounds or themes
for the next release.
Contribute to design ›
Developer
Write and package new software or fix bugs in existing software. Write apps in QML or HTML5.
Write apps for Ubuntu ›
Documentation
Help produce official documentation, share the solution to a problem, or check, proof and test other documents for
accuracy.
Improve and assist with documentation ›
Make it Ubuntu
Consider the following features to create a truly unique and beautiful Ubuntu experience within your app.
1. Create a consistent look across your app
Simply place your designs over the layout by using the grid units to help you arrange your content in a more readable,
manageable way.
Layouts ›
2. Create a unique experience with the bottom edge
The bottom edge provides a more accessible way to obtain content or actions within your app. Use it to create
something special.
Bottom edge ›
3. Surface the most important features inside your app
Let the user know where they are, what they can do and where they can go by using the Ubuntu Header.
Ubuntu header ›
4. Make your app beautiful with our Ubuntu fonts and icon designs
The uniquely stylish Ubuntu font influences UI elements and icons, making them distinctive and consistent.
Building blocks overview
Start creating your app with components from the UI toolkit for the best user experience.
Header
Use the header for placing actions and navigational options inside your app.
Use the header ›
Bottom edge
Learn how you can create something special from the bottom of the screen.
Inspirational patterns ›
List items
Find recommendations for list item layouts and what type of actions a list item can contain.
List items ›
Selection controls
See the different components that can be used for selecting and controlling inside a form.
Use checkboxes ›
Header
Use the header to let the user know where they are, what they can do, and where they can go inside your application.
• Usage ›
• Slots ›
• Toolbar ›
• Edit mode ›
• Responsive layout ›
• Header appearance ›
• Header section ›
• Best practices ›
Note: The Header API includes the exposed, flickable and moving properties of the header.
The header area can contain the main navigation options and actions inside your app. It is used to enhance the user
experience in specific device layouts.
When should I use a header?
• Use a header if your app has multiple sections.
• Don't use a header if your app performs an action that requires the full screen, such as a camera.
Multiple panels may appear when the surface or window increases in size. When this happens, each panel can contain
its own header. For example, on a mobile surface, one panel is present at a time as the pages are stacked on top of each
other in a hierarchical order. However, when translated onto a medium to large surface the panels become adjacent to
each other and will contain their own header, while still remaining in a hierarchical order.
• **Navigational options** on the left
The navigation area can include a Back button, a title, a subtitle, or a navigation drawer for when there is no room to fit
all buttons for major views.
• **Actions** on the right
The action area can include actions such as settings, search, views, or an action drawer for when there's no room to
place further actions.
||Don’t use a navigation drawer and an action drawer at the same time, because users are unlikely to distinguish
between them.| | |—|—–|
Slots
The header contains a number of slots that can hold actions or navigational options. Depending on the surface or
window size, additional slots can be added to show the actions otherwise hidden in drawers.
Note: Think about the most important actions and views you want the user to perform, and make them easy to find by
using the header.
For smaller surfaces, such as on mobile, the SDK provides a maximum of four slots per header that can be arranged
in two ways.
Slot arrangement
Slots can be arranged in a variety of ways to surface actions and navigational options to best suit the user experience
of your application.
Slot A
• First position on the left hand side
• When slot A is not needed, slot B should move to this position
• A navigation drawer can display all the main views in an application
Slot B
• Mandatory title of your app or view, only one line
• An optional subtitle can sit below the title, which can be two lines
Slot C
Slot C can have any action inside it, such as ‘Add new contact’ or a ‘Call’ action.
Search
If you are using Slot C to place a Search icon, or any other action, then place it to the right of the title.
Settings
If you are using Slot C for Settings, then it should always be positioned last.
Action drawer
An action drawer can be used to hold actions when no other slots are available to show them. However, when your app is on a
larger surface, such as a desktop, the actions will appear in the slots.
Responsive layout
As the header gains width across screen sizes, additional slots become visible and actions in the drawer will appear
automatically.
3 slot layout
4 slot layout
5 slot layout
6 slot layout
Medium to large screens
The maximum number of visible action slots in a convergent environment is 6. If this is exceeded then additional
actions will migrate to the action drawer.
Note: If your header has no more slots for actions, then everything after Slot D goes into Slot E inside an action drawer.
Search inside the header
You can use search within the main header to filter the currently displayed content, or as a global search.
Multi-panel layout
Search can appear in both panels when two or more headers are present. For example, in a mail client you may want
a filter for your inbox in the first panel, and a search in the second panel to find a recipient.
Avoid placing search in both panels unless necessary, because it could confuse the user as to what content is being
filtered. For example, they may type in the wrong field when searching for a specific query if the panels are not in a hierarchical order.
Note: For more information on search in the header, see Navigation (coming soon).
Toolbar
The toolbar is an additional component that can be used to hold actions.
Note: The Toolbar API allows you to determine the actions or options you want to display in the toolbar.
Edit mode
Edit mode allows users to modify a particular item or multiple items at once. Users can enter edit mode
by directly interacting with a list item, title or card, or through an action inside the header.
When should I use edit mode?
Use a separate edit mode if making the information editable all the time would substantially interfere with viewing,
copying, or other tasks. For example, in the Notes app, if a note were editable all the time then the OSK would take
up valuable reading space, and hyperlinks in notes would be hard to click or tap.
A toolbar can be used below the header to provide additional actions associated with editing. When editing content,
the actions that appear inside the main header and toolbar are relevant to the edit state, allowing the user to perform
tasks on the content such as select, rearrange, or delete.
Use cases
Actions in the header: picking and editing content
If a primary action of your app is to allow users to select and move content in a list, such as a list of contacts, then
surface the editing action inside the main header.
Once the user has initiated the editing action, the toolbar will appear below the header with the associated editing
actions for the content.
If you only use one text button then place it on the left hand side, because it will be easier for the user to reach with
one gesture.
Note: The toolbar can contain additional actions other than editing ones, such as 'Share' or 'Forward'.
Edit mode in a multi-panel layout
Edit mode can be triggered through an action in the header, or by right-clicking or long-pressing to open the contextual menu.
An activated edit mode must always apply to the panel view it is triggered in. It should not affect any other panels.
If you need a delete icon, place it on the left of the toolbar. If the content being edited needs to be saved, then use
two text buttons instead, such as 'Cancel' and 'Save'.
Note: Place negative actions on the left and positive actions on the right in the main header for consistency across the
platform. See Design values for more information.
Toolbar placement
The toolbar appears below the main header when edit mode is initiated.
1. Main header
2. Toolbar
Header appearance
You can decide how you want the header to appear in four ways: Fixed, Fixed and Opaque, Fixed and Transparent,
and Hidden.
Note: When a header is displayed on a larger surface or in a window, such as on a desktop, it will be fixed, because there will
be more room to display content.
Fixed (default)
A fixed header will appear at all times until the user starts to scroll down within your app’s content. Having a fixed
header can be useful if you have a few sections or actions that need to be accessible even when the user scrolls. For
instance, in a photo editing app the user may want the editing tools to be fixed in the header for easier access.
If your app displays a header section below the main header, then it will follow the defined behavior of the main
header.
The header can be brought back into view by:
• scrolling up on the content
• tapping or interacting with the content.
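This default behaviour can be wired up by pointing the header at the flickable content; a sketch, assuming a simple ListView with placeholder items as the app's content:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Page {
    header: PageHeader {
        title: i18n.tr("Notes")
        flickable: noteList   // header hides on scroll down, returns on scroll up
    }

    ListView {
        id: noteList
        anchors.fill: parent
        model: 50
        delegate: ListItem {
            ListItemLayout { title.text: i18n.tr("Note") + " " + index }
        }
    }
}
```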
Fixed and transparent
The header will be available at all times and have a transparency of 80-90%. This type of header can be useful if you
don't want it to be the focus of attention, but still want it available if the user wishes to have quicker access to a view or action.
Multi-panel layout
If your app is presented in a multi-panel layout, then the headers that appear in each panel will remain fixed and
always visible when scrolling.
Overridden fixed header
If you choose to override the default header behavior, then the header should:
• react with its associated panel
• not affect other panels.
Hidden
Overlay
The header is not visible to the user. This type of header is useful for full-screen applications, such as the Camera app,
as it allows more content to be displayed on a single screen.
Apps without a header
If you choose not to have a header then think about how users will navigate through your UI in a different way.
Overview
Top level
For example, the Clock app has a customized header and uses icons at the top of the screen to take the user to different
modes of the app.
Header section
The header section allows users to easily shift between category views within the same page. It has the same visibility
as the main header. For example, if the header is set to default, it will slide away together with the sections when the user scrolls
down.
Note: The Section API displays a list of sections that the user can select. It is strongly recommended to limit the number of
sections to two or three to avoid a cluttered looking header.
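A header section can be sketched with the Sections component as the header's extension; the mail-style filter names here are illustrative:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Page {
    header: PageHeader {
        title: i18n.tr("Inbox")
        extension: Sections {
            model: [i18n.tr("All"), i18n.tr("Recent"), i18n.tr("Archive")]
            // React to the selected category; one option is always selected
            onSelectedIndexChanged: console.log("section:", selectedIndex)
        }
    }
}
```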
Dekko app
For example, if your app presents an inbox of emails under 'All', the sub-sections could display 'Recent' and
'Archive' to further filter the content. More sections can be brought into view by swiping right.
When a mouse is attached
More tabs are indicated by an arrow revealed when the user interacts with the header section using a mouse.
1. The main header is a separate component that can hold actions and navigational options
2. The header section sits below the main header and allows for sub-navigation or filtering within the screen,
which is indicated by the header above. One option is always selected.
Best practices
Header section
Do
Make your sections clear and concise.
Don’t
The header section can look cluttered if you make the titles too big.
Actions
Allow users quick access to the most important actions by placing them inside the header. For example, in the Contact
app: ‘Call’ and ‘Add Contact’ are available in the header to give quick access to the Dialler and Address book.
Bottom edge
Create something special with a unique bottom edge that belongs to your app, accessible from the bottom of the screen.
Quick access to new content
• Overview ›
• Use cases ›
• Hints ›
Hint: The BottomEdge API provides bottom edge content handling. See also the BottomEdgeHint API, which
displays a label or an icon, or both, at the bottom of the component it is attached to.
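A bottom edge with a hint might be wired up as follows; the "compose" content and labels are placeholders, not from a real app:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Page {
    id: page
    header: PageHeader { title: i18n.tr("Messages") }

    BottomEdge {
        id: bottomEdge
        height: page.height
        hint.text: i18n.tr("New message")  // label shown next to the icon
        hint.iconName: "compose"

        // The page revealed by the progressive swipe
        contentComponent: Page {
            width: bottomEdge.width
            height: bottomEdge.height
            header: PageHeader { title: i18n.tr("Compose") }
        }
    }
}
```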
Overview
The bottom edge allows for a very natural transition through a progressive gesture from the bottom of the screen. The
gesture should take logical steps to reach a point of interest for the user. It can provide access to a view via page stack,
important actions, or access to app settings and features.
Tip: You can create your own customised bottom edge and add different content depending on the context of your
app. See ‘Loving the bottom edge’ for more information.
Use cases
The bottom edge can be used to give access to the most important features inside your app.
Is your app often used to create new content?
Use the bottom edge to quickly create or draft new content, such as composing a new email or text message.
Does your app need access to a commonly used feature that needs a separate view?
Use the bottom edge to give the user quick access to an app setting or feature, such as setting a new alarm in the Clock
app.
Does your app allow the user to add information in a form?
Use the bottom edge to provide quick access to a form, such as adding a new contact or creating a new account.
Does your app allow users to access more views?
You can use the bottom edge to reveal all views or tabs currently open, allowing the user to switch between them easily
and quickly. For example, the bottom edge in the Browser app reveals all the tabs the user has open.
Hints
The toolkit provides a hint that consists of two elements: Hint 1 and Hint 2. The hint is used to let the user know that
there is something worth trying at the bottom of the screen.
Hint 1
When your application is launched for the first time, the user will see a floating icon, known as Hint 1.
Hint 2
After the user has interacted with Hint 1, the hint will morph into Hint 2, which contains a label, an icon, or a
combination of the two. Using a label with an icon gives the user more detail about the content it will show.
Hint labels
It is important that your hint label is concise and clear to avoid confusing the user.
Step 1. Unfolding hint
Hint 1 is visible when the user first interacts with your app. By short swiping up from Hint 1, Hint 2 starts to replace
Hint 1 and then becomes fully visible.
Step 2. Collapsing
Hint 2 is now fully visible; however, if the user doesn't interact with the content or screen for a period of time, Hint
1 will automatically fade in and replace Hint 2.
Hiding the hint
You can choose to have the bottom edge hint hidden from view when the user scrolls the content on the screen. This
would work well for apps that need the whole screen, such as the Camera app, because the primary goal is to take a
picture.
List items
List items can be used to make up a list of ordered scrollable items that are related to each other.
A list of emails
• Overview ›
• Contextual actions for list items ›
• Lists in edit mode ›
• Structure ›
• Actions ›
• Communicating feedback ›
• List item layouts ›
Note: See the ListItemLayout API that provides customisable templates, and the ListItem API that provides swiping actions.
Overview
Lists are displayed in a single column layout and are made up of items that can contain one or more controls. Items
should be grouped together in a logical way that makes sense to the user.
Items in a form
A list of settings
Use appropriately to the content
When images or icons are presented without text or actions, it makes more sense to show them inside a grid rather
than a list, as in a photo gallery.
Use search function
Consider adding a search function for lists that are likely to contain a large number of items, so users can quickly find
a particular item.
Contextual actions for list items
Items in a list can have actions that can be placed in a context menu. The context menu can be accessed in two ways:
by swiping or right-clicking the list item.
Touch and pointer interactions perform the same functions across convergent devices for consistency and familiarity
across the platform. Swiping right may reveal a button for the leading action, such as 'Delete' or something similar.
Swiping left may reveal buttons for up to three other important actions; these are the trailing actions. When the user
interacts with an item using a mouse, right-clicking will reveal the context menu, and click and drag will reveal the
leading and trailing actions on either side of the item. This gives the same experience as swiping.
The actions are placed within two categories: leading for negative actions and trailing for positive actions. Grouping
actions into positive and negative areas inside your list items reinforces familiarity inside your app, allowing users
to find and identify important actions easily.
Touch – Leading action
Swipe left to right
Touch – Trailing action
Swipe right to left
Pointer
A user can right-click to reveal the contextual menu, or drag right to left to reveal the leading or trailing options in an
item.
Focus
A user can reveal the contextual menu by focusing on an item using keyboard navigation and pressing a key.
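A minimal sketch of leading and trailing actions, assuming the Ubuntu.Components 1.3 ListItem API (the icon names chosen here are illustrative):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    ListItemLayout { title.text: i18n.tr("Alice Appleton") }

    // Leading action (revealed by swiping left to right):
    // the negative/destructive action.
    leadingActions: ListItemActions {
        actions: [
            Action { iconName: "delete" }
        ]
    }

    // Trailing actions (revealed by swiping right to left):
    // up to three important positive actions.
    trailingActions: ListItemActions {
        actions: [
            Action { iconName: "message" },
            Action { iconName: "call-start" },
            Action { iconName: "info" }
        ]
    }
}
```

With a pointer device attached, the same declarations back the right-click context menu and click-and-drag reveal; no extra code is needed.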
Lists in edit mode
Edit mode allows users to modify a particular item or multiple items at once.
You can use edit mode to allow users to multi-select, rearrange or delete items inside a list. When edit mode is
entered the whole screen becomes an edit state and the header will show associated editing actions for the content.
Alternatively, if the user long presses an item a context menu will show the associated editing actions too.
Use case
Edit contacts
In the Contacts app for example, the list of contacts is made editable to allow users to delete or edit a contact’s
information.
1. A user selects an item in the list by using the edit icon in the header.
2. The list becomes selectable, with checkboxes that provide swiping actions for multi-select mode.
3. The header changes to reveal editing actions, and the header section is replaced with a toolbar underneath the
main header with further editing actions.
Note: For more information about how edit mode is used, see Header.
Structure
The toolkit provides list item layouts that consist of 1 to 4 slots which can be arranged in a variety of ways. These
slots can contain components that allow the list item to perform actions and display content.
Slot A (mandatory)
Can only contain text, such as a title with an optional subtitle.
Slot B (optional)
For additional text, an icon or a component.
List items must always contain at least one slot.
Chevron (optional)
If your list item allows for navigation through to an associated view, then a ProgressionSlot (chevron) is used in a fixed
position in the right-most slot. No other action is displayed in this slot, because this would conflict with the chevron
navigation.
Note: The ProgressionSlot API is designed to provide an easy way for developers to add a progression symbol to a list item created using ListItemLayout or SlotsLayout.
Content
If you use the ListItemLayout API, Slot A can contain a one-line title, a subtitle, and a two-line summary. If you use the
SlotsLayout API, you can put whatever you choose into Slot A. A recommendation is to place the most distinguishing
content in the first line of your list item.
Text is always aligned according to the currently displayed language. For example, in the case of English it is left to
right, whereas Arabic is right to left.
ListItemLayout labels:
1. 1 line – Title
2. 1 line – Subtitle
3. 2 lines – Summary
Note: Developers are free to override the maximum number of lines for each label. See the Label API for more information.
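A sketch of the slot arrangement described above, assuming the Ubuntu.Components 1.3 ListItemLayout and SlotsLayout APIs (the contact details are placeholder data):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    ListItemLayout {
        // Slot A: the three labels provided by the layout.
        title.text: i18n.tr("Alice Appleton")              // 1 line
        subtitle.text: i18n.tr("alice@example.com")        // 1 line
        summary.text: i18n.tr("Last contacted yesterday")  // up to 2 lines

        // An optional slot: an icon placed in the leading position.
        Icon {
            name: "contact"
            width: units.gu(2)
            height: width
            SlotsLayout.position: SlotsLayout.Leading
        }

        // Chevron in the fixed right-most slot, for items that
        // navigate through to an associated view.
        ProgressionSlot {}
    }
}
```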
Actions
Primary
The primary action is the main action you want a user to perform.
Secondary
A secondary action is an action the user may wish to perform instead of the primary action.
One action
Primary action: a user wants to turn their dial pad sound on or off.
Two actions
Primary action: a user can call by tapping or clicking on a contact's name.
Secondary action: a user can message a contact by tapping or clicking on the message action icon.
Two actions – with primary icon
Primary action: call using tap or click on the dial action.
Secondary action: message using tap or click on the message action icon.
Note: Avoid creating visual noise by repeatedly using additional actions in list items.
Touch regions
Tapping anywhere in the list item should perform the primary action. The secondary action is only triggered by
touching a particular touch region where the action resides.
For example, a user will expect to tap on the contact name or call button (primary action) to call a contact. The secondary
action would be to message the contact using the message action icon.
Primary action – call
Secondary action – message
Communicating feedback
You can use a slot to communicate if something has changed within a list item. For example, a timestamp on a message
indicates when the message was received, and a tick shows that the message has been read.
Use text labels
If a list item needs to provide feedback from an associated action, then the list item should not be used to communicate
this.
In System Settings if a user has tried to connect to another device using Bluetooth and no device has been found, a
text label within the view is used to indicate feedback.
List item layouts
The toolkit provides a number of layouts when creating a list item to ensure users get the best experience from your
app across different surfaces.
Consider:
• Slot A is mandatory and should always contain text.
• The maximum number of slots is four.
Note: You can place what you wish inside the slots. However, these recommendations take cognitive familiarity into consideration to provide a clean and minimalist look.
One slot
Two slot
Three slot
Four slot
Note: Provide a caption under the title to give the user more information if necessary. For example, displaying a contact's email address saves the user clicking through to find the information.
Avoid cluttered list items
In this example, the list item is too overcrowded and it is not immediately apparent what the primary action is.
Selection controls
The following components are used to change the state of a property or setting from a set of predefined values.
• Checkbox ›
• Radio buttons ›
• Switches ›
• Date and time pickers ›
• Slider ›
Checkbox
Use a checkbox to enable or select an option from a list, or as a singular option. For example, a singular option that
is an instruction to the system, such as 'Show password'; or a property to be applied or unapplied to add or
change a setting, such as changing a font style to 'Bold' and 'Italic'.
Note: The Checkbox API is a component with two states: checked or unchecked. It can be used to set boolean options.
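A minimal sketch of a single-option checkbox, assuming the Ubuntu.Components 1.3 CheckBox API (the 'Show password' label is illustrative):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Row {
    spacing: units.gu(1)

    // A stand-alone checkbox for a singular boolean option.
    CheckBox {
        id: showPassword
        checked: false
        onCheckedChanged: console.log("Show password:", checked)
    }
    Label {
        text: i18n.tr("Show password")
        anchors.verticalCenter: showPassword.verticalCenter
    }
}
```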
Multiple options
Use multiple checkboxes to allow users to select more than one option. For example, selecting a number of contacts
from a list to delete at once.
Single option
Use stand-alone checkboxes for a single option where the user can turn it on or off, or select or unselect it. For
example, to select automatic brightness adjustment in System Settings you only need one checkbox.
Use cases
If you are asking the user to turn a setting or instruction on or off, then use a switch instead. For example, turning the
Bluetooth setting on or off.
Make the options clear
Do
Use radio buttons or a radio menu if you have enough space for both the label and the option at once. This gives the
user a clearer understanding of the choices they can make.
Don’t
Use a single checkbox when it is not clear what the alternative option is. For example, the user might want to set their
time zone manually, so give them that option as well.
Selection
Checkboxes are independent of each other and can be selected individually. However, checking or unchecking an
indeterminate (master) checkbox selects or unselects all of the checkboxes at once.
6.3. The Ubuntu App platform - develop with seamless device integration
53
UBports Documentation, Release 1.0
Confirmation
Use for single selection where users confirm an action, such as accepting Terms and Conditions of a setting.
Note: Use indeterminate checkboxes when the value is neither checked nor unchecked.
Make it obvious
Don’t make it hard for the user to understand the effect of the unchecked value.
Do
Don’t
Alignment
When aligning checkboxes with labels, or other dependent controls, it is important that the user knows which checkbox
corresponds to which explanation.
Note: For more guidance on using familiar language and the right tone of voice for labels, see Writing (coming soon).
Radio buttons
Use radio buttons when there is a list of two or more options that are exclusive of each other and only one choice can
be selected.
Choosing a message tone
Clicking a non-selected radio button will deselect whichever button was previously selected. For example, ‘Soft delay’
will be deselected if the user selects another option.
Note: Options presented with radio buttons require less mental effort, because users can easily compare options as they are all visible at once.
One selection – use radio buttons
Multiple selection – use checkboxes
Use other controls if necessary
If you have a selection of options that would take long to list and that the user could type faster, use a text field instead.
Do
Don’t
Don’t use a radio menu entirely for command items. If the menu never contains any radio items, then use a toolbutton
menu instead.
Note: A toolbutton is a borderless button, as found in the header or a bottom-edge panel. It usually consists of an icon, but may instead contain text. See Buttons (coming soon) for more details.
Radio list
If you have a large set of radio buttons then place them in a list. That way users can easily navigate and scroll through
the options.
A list of organizations
Don’t interrupt the user
When a user selects an option avoid hindering them from choosing another option by opening up a dialog or closing
the window.
Switches
The switch allows the user to perform an action by turning it on or off.
Note: The Switch API is a component with two states: checked or unchecked. It can be used to set boolean options. The behavior is the same as CheckBox; the only difference is the graphical style.
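A sketch of a switch inside a settings list item, assuming the Ubuntu.Components 1.3 Switch and ListItemLayout APIs (the Bluetooth setting is illustrative):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    ListItemLayout {
        title.text: i18n.tr("Bluetooth")

        // A switch for turning a setting on or off, placed in the
        // trailing slot of the list item.
        Switch {
            id: bluetoothSwitch
            checked: true
            SlotsLayout.position: SlotsLayout.Trailing
            onCheckedChanged: console.log("Bluetooth on:", checked)
        }
    }
}
```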
Use cases
If you are asking the user to turn a setting or instruction on or off, then use a switch.
When not to use
If you are asking the user to choose between options to set a value, use checkboxes or radio buttons instead. For
example, choosing a selection of font styles where a combination is possible.
Date and time pickers
The toolkit provides a combination of multiple pickers for you to use to show the time and date in your app.
Spinner
Use the spinner component to display a set of values on a reel that can be either flickable or draggable.
Note: The PickerPanel API is a component that provides date and time values with picking functionality.
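A sketch combining an embedded spinner and a panel-based picker, assuming the Ubuntu.Components.Pickers 1.3 DatePicker and PickerPanel APIs:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.Pickers 1.3

Column {
    spacing: units.gu(1)

    // Embedded spinner showing year, month and day reels;
    // the reels can be flicked or dragged.
    DatePicker {
        id: datePicker
        mode: "Years|Months|Days"
    }

    // Opening a time picker from a button; PickerPanel chooses a
    // presentation suited to the form factor (overlay or popover).
    Button {
        property date date: new Date()
        text: i18n.tr("Pick a time")
        onClicked: PickerPanel.openDatePicker(this, "date", "Hours|Minutes")
    }
}
```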
Display month, year and day
Display time
Layout
There are three possible ways you can layout pickers: fullscreen overlay, as a popover, or embedded into the UI.
Fullscreen overlay
Use a fullscreen overlay in larger screen environments, such as tablet or desktop.
Popover
Use for popup or inline calendars when you are short of space.
Embedded
Use when you want the picker to be expandable and always visible. For example, the Clock app uses an embedded
picker when you edit an alarm.
Using multi-spinners
The Time Picker supports hour, minute and second elements in any combination, except hours with seconds.
Three spinner time picker
Note: An AM/PM selector will be added if the 12-hour clock is used.
Slider
Use interactive sliders to select a value from a continuous or discrete range of values.
Note: The Slider API is a component that allows the user to select a value from a continuous range of values.
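A minimal sketch of a default slider, assuming the Ubuntu.Components 1.3 Slider API (the brightness range is illustrative):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Column {
    width: units.gu(40)

    Label { text: i18n.tr("Brightness") }

    // Default slider selecting a single value from a continuous range.
    Slider {
        id: brightness
        width: parent.width
        minimumValue: 0
        maximumValue: 100
        value: 60
        // With live: true the value updates while dragging;
        // by default it updates on release.
        onValueChanged: console.log("Brightness:", value.toFixed(0))
    }
}
```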
Slider types
You can choose between different slider types to allow the user to set different values.
Note: The interactive nature of the slider makes it a great choice for settings that reflect intensity levels, such as volume, brightness, or color saturation.
Default slider
You can use this slider to select a specific value or a maximum value in a range. For example, adjusting the screen's
brightness percentage.
Minimum value slider
Use to select a minimum value in a range, by providing two handles that can select between values. For example, set
a minimum price range to make it easier for the user to select between prices.
Interval slider
The interval slider has two handles that can select between values. For example, setting a price range between £20 and
£40 inside a Shopping app.
System volume control
A system volume control is a control that any app can embed in its UI. You should use this slider control when your
app needs only one volume control.
For example, if your app has a media player, or is a game that has sound effects but no background music. It consists of
a slider that automatically reflects and adjusts the audio volume for the current output role through the current output
device.
Note: The System volume control component is currently under heavy development because it might also include other audio features, so you won't have to worry about developing it yourself.
The advantages of using system volume control:
• People won’t be annoyed that your app is louder or quieter than others, because your app uses the system audio
volume
• Volume change notifications don’t appear in front of your app when the slider is altered (especially important
for a video player)
• You don’t need to implement your own volume-adjusting code, because Ubuntu changes the volume of your app
automatically
• Any future Ubuntu features for audio routing will become available to your app automatically, without any code
changes required
If your app plays multiple types of sound, then provide a mute button and separate volume control for each type. For
example, a game that plays background music as well as sound effects. Avoid labelling the system volume control
because it already includes icons that indicate its purpose.
Activity indicators
Use Activity Indicators to give the user an indication of how long a running task might take and how much work has
already been done.
Hint: The Activity Indicator API visually indicates that a task of unknown or known duration is in progress.
Types of indicators
The toolkit provides progress bars and spinners that can be either determinate or indeterminate. Use whichever state
matches whether the proportion completed is known.
Usage
Decide whether to use a progress bar or a spinner based on how important it is for the user to keep track of
progress, and how much space you have to show it.
For example, a download manager may use a progress bar so that you can easily tell that a download is continuing.
But a mail client may use spinners for sending mail or updating mail folders, since that is something that can happen
in the background.
Determinate indicators
Use a determinate progress bar or spinner for tasks where the activity can be determined at any point in time, such as
downloading or importing an item. If you have space within the same screen then place the progress bar below the
action that initiated it.
Progress bar – downloading
Spinner – transferring
Hint: The toolkit progress bars and spinners automatically handle presence for individual tasks by waiting for two
seconds. If the task takes less than that they won’t appear at all.
Indeterminate indicators
Use an indeterminate progress bar or spinner if the proportion complete is unknown. For instance, loading a
screen or re-caching a browser can happen in the background, and the user doesn't need further information
about it.
Progress bar – updating
Spinner – loading
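A sketch of both indicator types, assuming the Ubuntu.Components 1.3 ProgressBar and ActivityIndicator APIs (the example values are illustrative):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

Column {
    spacing: units.gu(1)

    // Determinate: the proportion completed is known,
    // e.g. a download manager tracking bytes received.
    ProgressBar {
        minimumValue: 0
        maximumValue: 100
        value: 42
    }

    // Indeterminate: duration unknown, e.g. updating in the background.
    ProgressBar { indeterminate: true }

    // Spinner for background work such as sending mail.
    ActivityIndicator { running: true }
}
```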
Best practices
Steps of completeness
A determinate progress bar or spinner with a known period of completion should always fill exactly once for a
successful task. For example, when a user is downloading a new music track, a filled progress bar acknowledges that
the download has completed.
Indeterminate steps
If the last step in a task is verifying its success, then allocate a fraction of the indicator to it. This communicates to the
user that the software is preparing to be complete.
Determinate steps
Never let an Activity Indicator go backwards. If the task size changes part-way through, reallocate the remaining
fraction of the indicator to that.
Use only for task progression
Don’t use an Activity Indicator for anything that isn’t progress of a task, such as waiting for user input or as a gauge
for anything else.
Avoid confusion
Don’t fill the indicator if the task has failed, because it could confuse the user.
Hint: See Communicating Progress (coming soon) for best practices on labelling Activity Indicators.
Context menus
Use a context menu to
|750w\_Menus\_MainImage|
provide
quick
access
to
important
actions
within
your
application.
• Overview ›
• Revealing actions ›
• Layouts ›
• Behavior ›
Overview
The toolkit includes convergent menu components that can be applied across all devices to provide a shortcut to the
most relevant actions within your app.
A context menu can contain shortcuts to primary actions or commands that are relevant to the user’s current context.
Staged
A contextual menu reveals relevant commands using long-press, such as saving an image in a web browser.
Windowed
The same context menu appears with more commands when a user right-clicks on a web image.
Note: See how context menus behave in List items.
Cascading menus
Cascading menus act as sub-menus within your main contextual or application menu.
Note: Try to limit nesting to one level deep, because it can be difficult for the user to navigate through multiple nested submenus in staged environments.
Use case
Larger screen (tablet, desktop)
Revealing actions
Touch and pointer interactions perform the same functions for familiarity and consistency across convergent devices.
On a touch screen, a context menu is revealed by long tapping or swiping the list item from left to right. Swiping right
may reveal a button for the leading action, such as ‘Delete’ or similar. Swiping left may reveal buttons for up to three
other important actions, which are the trailing actions. When a pointer device is attached, right-clicking an item will
reveal the context menu, and click and drag will reveal the leading and trailing actions either side of the item – giving
the same experience as swiping.
Context menu
For medium to large screens, long-press (touch) and right click (pointer) can be used to reveal a context menu. For
instance, if you have a touch screen desktop monitor, you can long press a list item to reveal a context menu, or if you
have a mouse connected then you can right click.
Right-click
Long-press
Focus
Leading and trailing actions
On smaller screens, such as mobile, users reveal leading and trailing actions by left or right swipe. The trailing actions
will contain the same contextual actions as the context menu on right-click. If there are more than three trailing actions
you can provide an overflow menu inside the header, or inside the list item itself.
Swipe right – Leading action
Swipe left – Trailing actions
Note: For more information about leading and trailing actions, see List items.
Layouts
It is important that each menu retains a consistency in its layout and content when used across different devices.
1. Select item
2. Region
3. Window
4. Application
Do
Place the most frequently used menu items at the top of the menu. Use sentence capitalisation for each command
name.
Don’t
Place negative actions close to positive actions, because users may accidentally trigger them.
Menu items
Each menu is made up of a set of items that can include text, an icon, or both, to best display your menu items.
Text labels
It is important that you accurately describe the associated action or option in a succinct manner when using text labels
inside your menus.
Do
Be concise and clear to avoid confusing or misinforming the user.
Don’t
Use over-long text labels that result in truncation (…).
Note: By default the SDK applies truncation to long text labels, so avoid truncating them manually.
Label examples
• Add
• Edit
• New (rather than ‘create’)
• Move
• Save/ Save As
• Delete/ Remove
• Send
• Share
Grouping menu items
Items should be grouped in a logical manner using dividers to separate related actions that have been grouped together.
Do
Don’t
Divide a predictable set of commands, such as clipboard commands (Cut, Copy, Paste), from app-specific or view-specific commands.
Placing actions
In cases where editable or configurable groups of similar items are presented to the user (for example, editing a List
of contacts or a Grid of application icons) actions are placed according to the user’s interaction with the item.
The top three actions inside your menu will appear as trailing actions when the user swipes left. Destructive actions
inside the menu, such as delete, will be available as a leading action when the user swipes right.
Note: Developers can choose to add a burger menu to store the actions inside the header rather than inside the list item, if they wish.
Avoid duplicating actions
Actions may be present within the app menu and elsewhere within the interface, such as actions within a toolbar.
Care should be taken to ensure that duplicate actions are as relevant and useful as possible and represent a small,
highly-relevant subset of the actions available.
When the user is using touch, the most primary actions are placed inside the header area. Other actions specific to a set
of list items can be found using swipe where possible. Care should be taken to avoid duplicating actions that appear
in the header area within contextual actions menus.
Disabling actions when inactive
Rather than removing the item completely, show the user that the action exists by disabling it within the menu, when
applicable.
In this example, ‘Rename’ is greyed out in order to indicate to the user that it is not possible to select this option at
this time (as no name has been given).
Flag gutters
The Flag Gutter will always be present in the context menu in order to allow flags for toggle or radio actions to be
displayed. For example, if you want the user to make a selection from your context menu, you can add checkboxes for
multiple selections within the flag gutter.
Note: For more information on checkboxes and radio buttons, see Selection controls.
Behavior
Keyboard shortcuts
Keyboard shortcuts allow users to quickly perform an action or navigate through your UI. Many shortcuts are inherently familiar to the user and should map precisely to the relevant action or option that appears within your menu.
Shortcut: Function
Ctrl+C: Copy the selected text/object.
Ctrl+X: Cut the selected text/object.
Pinch close (two finger): Zoom out on content.
Long press (one finger): Start selection of content or item.
Rotate (two finger): Move around a centre point simultaneously with two fingers.
Flick (one finger): Scroll in the direction you want the screen to move.
Long-press drag (one finger): Move, lift and rearrange content in a view or, in a multi-window environment, between windows whilst in edit mode.
Dismissing or closing menus
Once open, a context menu may be dismissed by either making a selection from the actions or by clicking or tapping
anywhere outside of the menu area.
Keyboard input
The Escape key (Esc) will dismiss the contextual actions menu, as will any user action that results in focus shifting
away from the application.
Default positioning
Context menus should be positioned in a consistent and predictable fashion across all device layouts. This is to aid
visibility and provide a clear touch target for when the user interacts with the screen with their finger.
Touch interaction
Context menus are centrally aligned on both horizontal and vertical axes.
Pointer interaction
The menu is aligned down and to the right of the pointer cursor, at the point where the user right-clicked or long-pressed.
Scrolling
The toolkit provides a ScrollView component that allows users to scroll content inside panels, text fields and lists
across all devices.
Note: The ScrollView API is a scrollable view that features scrollbars and scrolling when using keyboard keys.
ScrollView vs. Scrollbar APIs
The ScrollView API works by wrapping the Scrollbar API in a view and provides additional features such as:
• keyboard navigation and focus handling for a complete convergent experience
• automatic positioning of vertical and horizontal scrollbars, which prevents them from overlapping one another
when both are present on screen
The Scrollbar API doesn’t handle keyboard input and has the following requirements:
• the content position is driven through the attached Flickable item
• the alignment management has to adhere to the anchors for built-in alignment functionality
• every style implementation should drive the position through contentX/contentY properties, depending on
whether the orientation is vertical or horizontal
Handling overlay
A ScrollView handles scrollbar placement by automatically placing the scrollbars horizontally and vertically where
appropriate in the device layout.
Scrollbar
Do
ScrollView
Don’t
Use cases
Borderless content
If the content of your app is borderless, like the camera, it wouldn't be practical to have scrollbars, because they can
hinder the user's view and the primary task of taking a picture.
Borderless
|366w_Scrollbar_BorderlessContent_Good| |do_32|
Do
With scrollbars
|366w_Scrollbar_BorderlessContent_Bad| |dont_32|
Don’t
Avoid custom scrollers
Custom scrollers usually work poorly because they are hard to recognise, or they do not include all the functions
people expect.
Scrolling through a list
Place any ListView API inside a ScrollView to present a scrollbar when items have scrolled off-screen.
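As a minimal sketch in QML (the model size and labels here are placeholders, not part of the toolkit documentation), wrapping a list this way could look like:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

// Sketch: a ListView wrapped in a ScrollView, so a scrollbar is
// presented once items scroll off-screen. Model and labels are
// placeholder content.
ScrollView {
    anchors.fill: parent

    ListView {
        model: 50  // placeholder model with 50 items
        delegate: ListItem {
            Label {
                anchors.centerIn: parent
                text: i18n.tr("Item") + " " + index
            }
        }
    }
}
```

The ScrollView takes care of positioning the scrollbar and of keyboard navigation, so the ListView itself needs no extra scrolling logic.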
Note: Use the ListView API or see List Items for more guidance on using lists inside your application.
Scrolling within a text field
If your app allows for multi-line input inside a text field, then the user will expect to scroll the content.
Fig. 6.147: 366w_Menus_DefaultPositioning
Fig. 6.148: 750w_Scrollbar_MainImage
Fig. 6.149: 750w_Scrollbar_CustomScrollbar
In a text field, such as in the Messaging app, the field automatically displays a scrollbar that overlays the content to
allow users to scroll once they have entered more than five lines of text.
Scrolling inside panels
The toolkit provides panels that can be used to display anything from images and videos to large amounts of text. The user will expect to scroll vertically, horizontally, or both to view the content.
Wrapping the panel inside a ScrollView makes it adapt automatically to the content in any device layout.
Design patterns
Solve recurring design problems with common patterns to provide a familiar and usable interface.
Gestures
Apply natural and progressive gestures to your app to allow users to get where they want to be.
Gestural activities ›
Navigation
Allow users to navigate through your app in logical steps using components with innate behaviour.
Understand page stack ›
Layouts
Use predefined layouts to help you achieve a seamless experience across all devices.
Use an adaptive layout ›
Gestures
Make the most of Ubuntu’s gestures to establish consistency and familiarity within your application.
• Edge gesture ›
• Gestural activities ›
• Discoverability ›
Fig. 6.150: 366w_Scrollbar_List
Fig. 6.151: 366w_Scrollbar_Text
Fig. 6.152: 750w_Scrollbar_InsidePanel
Edge gestures
The edge gestures provide access to system features that work outside applications and cannot be overwritten. It is
important to consider these edge gestures to avoid conflicts with your own application gestures. For example, if you place a gesture at the top of the screen, it may conflict with the indicator menu that is revealed in the same area.
System
1. **Top edge swipe** reveals the indicator menu that contains settings and notifications when swiping down.
2. **Short left edge swipe** reveals favorited and frequently used apps from the launcher menu.
3. **Long left edge swipe** takes you back to the app screen (shows all the installed apps) when you are inside an application.
4. **Short right edge swipe** reveals the previous app used.
5. **Long right edge swipe** reveals an app stack to show all the apps that are currently open, like a stack of cards.
Application gestures
Application gestures happen within your application. They are how the user interacts with your app's content through commonly used gestural activities like flicking, dragging and tapping, such as selecting text to edit a message.
Application specific
The app specific area is reserved for the bottom edge, which can reveal actions or a view from the bottom of the screen.
Avoid using a stack of screens inside an app itself, because this would confuse the overall mental model of the system.
Instead consider a two-dimensional model for your app, where you pan or zoom between screens.
Gestural activities
Certain gestures are associated with a particular movement of the finger and often come naturally to the user. Functions
should map closely to the physical action implied by the gesture, such as flicking through content with one finger.
Tap (one finger)
Use to activate a screen element, like a button.
Fig. 6.153: 366w_Overview_Gestures (1)
Fig. 6.154: 366w_Ovreview_Navigation (1)
Fig. 6.155: 366w_GetStarted_BuildingBlock (2)
Double tap (one finger)
Use to double tap an item or select an area, such as selecting text in a message.
Long press drag (one finger)
Use to pick up, move and select multiple items.
Flick (one finger)
Use to scroll in the direction you want the screen to move.
Long press (one finger)
Use to start a selection of content or an item within the application window, such as selecting a URL to copy in the
Browser.
Rotate (two fingers)
Use to move around a centre point simultaneously with two fingers to rotate an object, such as editing an image.
Pinch in or out (two fingers)
Use to zoom in or out of something, such as an image or a view.
Discoverability
Although functions should be intuitive, sometimes users may need a hand to discover new features within your interface.
Bottom edge hint
The bottom edge is specific to your app and can be used to reveal the most important actions, or a frequently used
view.
The bottom edge is made discoverable to the user by a hint at the bottom of the screen. This indicates to the user that there is an area that can be revealed by swiping up from the floating hint.
Fig. 6.156: 750w_Getsures_MainImage (1)
Fig. 6.157: 750w_Gestures_EdgeGestures
Fig. 6.158: 750w_Gestures_StackScreen
Hints
The bottom edge hint comprises two elements: Hint 1 and Hint 2.
Hint 1
When your application is launched for the first time, the user will see a floating icon which is known as Hint 1.
Hint 2
After the user has interacted with Hint 1, the hint will morph to become Hint 2. This hint contains a label, an icon, or a combination of the two. Using both a label and an icon gives the user more detail about the content it will reveal, such as '+ New page'.
Note: For more information on the behavior of the bottom edge hint see Bottom edge.
Instructional overlays
When the user initially opens your app you can guide them through the different features and gestures with instructional
overlays to aid discoverability.
The SDK toolkit provides coach marks and tutorials that you can use to illustrate gestures using text and arrows.
The look and feel of an instructional overlay should differ from your UI visual style. Doing this will create a distinction
between what is permanently part of the app and what is an initial overlay feature.
Note: For more information on instructional overlays see Coach Marks (Coming soon).
Coach marks
Use coach marks as a single instructional overlay to point out a particular interaction or feature that may not be
obvious, or naturally discoverable.
Tutorial
Use tutorials on rare occasions where you feel you need to give the user further instructions to discover gestures or
features.
In an environment where the interface may be a little different, a sequence of instructions can be used to point out where different features live. For example, when moving from mobile to tablet, the bottom edge can be placed in both panels.
The bottom edge is highlighted in the left panel with instructional text above it, together with a 'Next' button to lead the user to the following instruction in the tutorial.
Fig. 6.159: gesture_1f_tap (1)
Fig. 6.160: gesture_1f_double-tap
Navigation
|750w_Navigation_MainImage (2)|
Consistent and effortless navigation is an essential element of the overall user experience.
• Usage ›
• Structure ›
• Components ›
Usage
Before building your application, think about the overall structure and organize the content, considering the device layout and the navigation that will need to happen.
Grouping content
Categorizing content and elements into related groups will allow the user to easily scan and perform actions within
your application in a way that is logical.
Signposting
Signposts are recurring UI elements that help indicate to the user where they can go to reach their goal within your app. Using the same elements throughout your interface will create a familiar environment for the user and minimise their learning curve.
Structure
Structure your app by organizing the content into a logical hierarchy.
1. Overview – the most accessible features you want the user to have instant access to, such as a list of emails.
2. Top level – filters of the overview, such as threads or recent emails.
3. Lower level – detailed views that show in-depth information, such as contact information.
4. App settings – a place for the settings of your app, such as notification settings for receiving emails.
Overview – Dialer
Top level – Contacts
Fig. 6.161: gesture_1f_drag-right (1)
Fig. 6.162: gesture_1f_touch
Fig. 6.163: gesture_2f_rotate
Lower detailed level – Contact information
Page stack
A typical structure may consist of one or more top level views, with detailed views on a lower level. Page stack
navigation allows users to drill down from the top level to the detailed views through actions and navigational options
within your UI.
On mobile devices, only one page is visible at a time. Once a page is chosen, it will stack over the previous page. A Back Button will appear in the header to take the user back to the previous page if they wish.
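A minimal sketch of this pattern using the toolkit's PageStack (the page titles and the button are placeholder content):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    width: units.gu(50)
    height: units.gu(75)

    PageStack {
        id: pageStack
        // Show the top-level page when the app starts
        Component.onCompleted: push(topLevel)

        Page {
            id: topLevel
            visible: false
            title: i18n.tr("Contacts")
            Button {
                anchors.centerIn: parent
                text: i18n.tr("Contact information")
                // Stacks the detail page over this one; a Back Button
                // appears in the header automatically
                onClicked: pageStack.push(details)
            }
        }

        Page {
            id: details
            visible: false
            title: i18n.tr("Contact information")
        }
    }
}
```

Pushing a page stacks it over the current one, and popping (or the automatic Back Button) returns the user to the previous page.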
Page stack in a multi-panel layout
Larger screens provide more screen real estate, so two or more panels can be visible at once. Page stack behaves in one of two ways, depending on which panel the action is placed in.
Page stack over both panels
If an action is triggered in the leftmost panel, then the new page will take over all panels.
Page stack over the right hand panel
If an action is triggered in the rightmost panel, then the page will stack over the same panel.
System Settings – right panel view change
In this example, ‘Brightness & Display’ has been selected inside the left panel and takes over the rightmost panel view.
Components
The Ubuntu toolkit provides a variety of components that can provide navigation within your application.
Header
Use the header to contain the most important actions and navigational options inside your app. This allows the user to
know where they are and what they can do.
Fig. 6.164: gesture_2f_pinch-in
Fig. 6.165: 366w_BottomEdge_BehaviourHints2
Fig. 6.166: 366w_BottomEdge_BehaviourHints1 (2)
Fig. 6.167: 366w_Gestures_CoachMarks1
Fig. 6.168: 750w_Gestures_CoachMarksTablet1
Fig. 6.169: 750w_Gestures_CoachMarksTablet
Fig. 6.170: 366w_Navigation_GroupingContent (2)
Fig. 6.171: 366w_Navigation_SignPosting
Fig. 6.172: 366w_Navigation_UserJoureny2 (2)
Fig. 6.173: 366w_Navigation_UserJourney1 (1)
Fig. 6.174: 366w_navigation_UserJourney3 (2)
Fig. 6.175: 750w_Navigation_PageStack_HowItWorks (4)
Fig. 6.176: 750w_Navigation_PageStackWithTwoPanelView (3)
Fig. 6.177: 750w_Navigation_PageStackWithJustRightPanelView (2)
Fig. 6.178: 750w_Navigation_SecondPanelView (1)
Fig. 6.179: 750w_Navigation_Header (3)
Slot arrangement
The header features a maximum of four slots that can be arranged and combined to fulfill the user's needs.
Slot A
• **Back** – use to navigate to a previous page of the app (if other pages are available)
• **Navigation drawer** – use to store more pages if there is no room in the header

Slot B
• **Title (mandatory)** – provide a one-line title of the app or view
• **Subtitle (optional)** – extra explanatory text up to two lines

Slots C/D
• **Search** – use to search for specific content
• **Settings** – use to navigate to your app's settings page

Use drawers sparingly, because they:
• Hide pages and actions from the user
• Conflict with the Back Button
• Require a tap to see available pages or actions, and two taps every time a user switches pages.
Note: A Back Button would be irrelevant if your app only has one page, because there would be no pages to go back from; so it is not required.
Headers in multi-panel layout
For a multi-panel layout, such as on a desktop, each panel can display its own header, which can contain additional
slots because more real estate is presented. This can be useful to reveal actions or views that were previously hidden
in drawers on a single-panel screen, like on mobile.
More actions revealed
In this example, Dekko displays an action for the bottom edge, search and settings inside the left-hand panel, while the rightmost panel shows delete, favourite and messaging actions.
Fig. 6.180: 750w_Navigation_ConvergentHeader3actions (2)
Note: For more slot layout examples see Header.
Header appearance
You can decide how you want the header to appear, in four ways: Fixed, Transparent, Hidden, or Overlay.
Fig. 6.181: 366w_Navigation_HeaderFixed (1)
Fixed (default)
Useful for keeping sections or actions always accessible while the user scrolls.
Transparent
Fig. 6.182: 366w_Navigation_HeaderTransparent (1)
Useful if you don't want the header to be the focus of attention, but want it readily available if the user needs it.
Hidden
Fig. 6.183: 366w_Navigation_HeaderHidden (1)
Useful for full-screen applications, such as the Camera App.
Overlay
Useful to display more content in a single screen.
Customised header
If you choose not to have a header, then think of how users will navigate through your UI in a different way.
Overview
Top level
For example, the Clock app has a customized header where it uses icons at the top of the screen to take the user to
different levels of the app.
Header sections
The header section allows users to easily shift between category views within the same page. If the main header is
set to default, then the sections will slide away when the user scrolls down.
1. The main header is a separate component that can hold actions and navigational options.
2. The header section sits below the main header and allows for sub-navigation or filtering within the screen
indicated by the header above. One option is always selected.
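As a rough sketch, this sub-navigation maps to the toolkit's Sections component (the category labels below are placeholder content):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.3

// Sketch of a header section for sub-navigation; one option is always
// selected (selectedIndex defaults to the first entry).
Sections {
    model: [i18n.tr("All"), i18n.tr("Recent"), i18n.tr("Archive")]
    onSelectedIndexChanged: console.log("Filter:", model[selectedIndex])
}
```

In a real app the handler would re-filter the page's content rather than log the selection.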
Fig. 6.184: 366w_Navigtaion_HeaderOverlay
Fig. 6.185: 366w_Navigation_HeaderCustumised1 (1)
Dekko app
For example, if your app was presenting an inbox of emails, from ‘All’, the sub-sections could display ‘Recent’ and
‘Archive’ to further filter the content for the user. More sections can be made visible by swiping, or by clicking the hint that appears when a mouse is attached.
Pointer environment
More tabs are indicated by an arrow, revealed when the pointer hovers over the sections.
Search in the main header
You can use search within the main header as an additional filter for your application, or as a global search. Search is invoked in a similar way in touch and pointer environments, by tapping or clicking on the search icon.
Multi-panel layout
Bottom edge
The bottom edge is specific to your application. It can give users quick access to the most important actions within
your app, or a related view from the bottom of the screen via a hint (on touch), or from an action inside the header
(pointer).
When the bottom edge is revealed the page stacks over the previous page and a chevron pointing down appears in the
header to allow the user to go back to the previous page.
Hint 1
Appears on application launch to hint to the user that there is a bottom edge available.
Hint 2
The bar is revealed after Hint 1 has been interacted with.
Reveal
Once the user starts to swipe up from the hint, the new view starts to be revealed.
Fig. 6.186: 366w_Navigation_HeaderCustumised2 (2)
Fig. 6.187: 750w_Navigation_HeaderSection (3)
Fig. 6.188: 366w_Navigation_Tabs (2)
New view
A new view stacks over the previous page once the user has committed to the swipe.
Layouts
Make your app consistent and adaptive across all screen sizes with just one API.
• Grid unit system ›
• Layouts ›
• Good practice ›
Note: The Adaptive Layout API allows you to add multiple columns to a page (under heavy development).
Grid Unit System
A Grid Unit (GU) is a virtual measure of screen space, calculated from the device's width in pixels and the predefined layout. Grid units have been designed to suit a range of screen sizes.
Placing elements
Use Grid Units to help visualise how much space you have in order to create a consistent and proportionate UI. This proves beneficial when you are placing components and labels within your app.
Predefined grid unit layouts
The layout is calculated by taking the short edge of the screen and dividing its number of pixels by the chosen predefined layout, which is one of:
• 40/50GU for mobile and phablet screens
• 90GU for tablets, desktops and larger screens.
Example of 50GU layout for mobile
A mobile device would typically suit a 50 GU-wide virtual portrait screen, because it offers the right balance of content
to screen real estate for palm-sized viewing.
Fig. 6.189: 366w_Navigation_TabsRecent (2)
Fig. 6.190: 750w_Header_Pointer environment
Fig. 6.191: 750w_Navigation_HeaderSearchV2 (3)
Example of 90GU layout on tablet in portrait mode
90GU is ideal for tablet-sized screens, because it offers more real estate for panels.
Note: See the design blog for developer specifications of Grid Units and layouts.
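Since QML's scripting language is JavaScript, the calculation described above can be sketched in a few lines (the device width below is illustrative, not an official specification):

```javascript
// Sketch of the grid-unit calculation: the pixel size of one grid unit
// is the short edge's pixel count divided by the chosen predefined
// layout (40, 50 or 90 GU).
function pixelsPerGridUnit(shortEdgePx, layoutGu) {
    return shortEdgePx / layoutGu;
}

// e.g. a hypothetical 720 px wide phone using the 50GU layout
console.log(pixelsPerGridUnit(720, 50)); // 14.4 px per grid unit
```

The toolkit performs this mapping internally; the sketch only illustrates why the same 50GU layout yields different pixel sizes on different screens.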
Layouts
Panels
A panel is a way of grouping together Grid Units to split the screen into different windows. Panels of predefined layouts can be joined together to create a multi-functional interface from portrait to landscape.
For example, placing a 50 and 40 grid unit layout in portrait mode can easily be translated to landscape mode for larger
surfaces, such as desktop.
Note: If your app can use multiple columns, use a single-screen layout on mobile touch that changes to a 2- or 3-panel layout on tablet and desktop.
If you think of it in screen sizes, the hierarchy would be:
• Mobile (50GU) – 1 panel (fixed panel)
• Tablet – 2 panels, very occasionally 3 panels on larger tablets
• Desktop – 2 or 3 panels
On a windowed environment, just like on a tablet, more than one panel can be displayed simultaneously. By joining
them in the same window, we get the familiar list panel and conjoined detail panel – a pattern typical in applications
like contacts, messages, and email. Of course, there can be any number of combinations of panels depending on the
specific app’s needs.
Note: Developers can choose to create completely adaptive 2 or 3 panel layouts for desktop if they desire.
Adaptive layout
Use the AdaptiveLayout API to display panels in one or more columns from left to right.
Note: The AdaptiveLayout API provides a flexible way of viewing a stack of pages in one or more columns. Unlike in PageStack, there can be more than one Page active at a time, depending on the number of columns in the view.
Fig. 6.192: 750w_Navigation_Convergence search
Fig. 6.193: 750w_Navigation_Convergence search box
Fig. 6.194: 366w_Navigation_BottomEdge1
Fig. 6.195: 366w_Navigation_BottomEdge2
Fig. 6.196: 366w_Navigation_BottomEdge4
Fig. 6.197: 366w_Navigation_BottomEdge4
Fig. 6.198: 750w_Layout_MainImage
Fig. 6.199: 750w_Layouts_GridUnitSystem
Fig. 6.200: 366w_layout_PanelsMusic_50gu
Fig. 6.201: 750w_Layouts_PanelsMusicPortrait
Fig. 6.202: 750w_Layouts_Panels
Fig. 6.203: 366w_Layouts_PanelsCalendar1
Fig. 6.204: 366w_Layouts_PanelsCalendar2
Fig. 6.205: 750_Layouts_Panels3
Changing the size of the window resizes one or more joined panels. Typically, the right-most panel resizes and the
left-most panel maintains its original dimensions. The dimensions of the right-most panel will normally be 40 or 50
grid units; though this panel may itself be resizable depending on the developer’s requirements.
Fig. 6.206: 750w_Layouts_AdaptiveLayout
Example – 50GU phone and 50GU/variable on a desktop screen
The panel that is defined as the main panel (for example 50GU) will initially be visible in the first (leftmost) column;
this will have to be specified by the developer. The subsequent columns can then be added depending on the device
layout.
Good practice
Use a fixed panel
Fig. 6.207: 750w_Layouts_GoodPractice
To provide a consistent user experience across the whole platform, leave at least one panel fixed at a minimum size of either 50 or 40GU in each screen size. This creates a familiar experience across mobile, tablet and desktop.
QML apps
QML - the best tool to unlock your creativity
QML is an extremely powerful JavaScript-based declarative language for designing intuitive, natural and responsive
user interfaces. The Ubuntu SDK provides fluid and natural user interface QML elements that blend into Ubuntu
without getting in the way. And with a rich framework and APIs, based on the cross-platform Qt framework, QML
features an extensive set of APIs that cover the needs of the most demanding developers.
Read the API documentation (coming soon...)
A powerful language to write compelling UIs
QML, the Qt Meta-Object Language, is a programming language with similarities in syntax to JavaScript. Being
a modern language designed with simplicity and extensibility in mind it has borrowed syntax and paradigms from
ubiquitous web technologies, such as CSS and JavaScript. It is built upon and constitutes a core component of the
cross-platform Qt framework. In essence, QML coupled with the underlying Qt libraries is a high-level UI technology
that enables developers to create animated, touch-enabled UIs and light-weight applications. It differs from traditional
procedural languages such as C or Python in the sense that it is declarative at its core, that is, it is used to describe UI
elements and their interaction. As such, traditionally complex visual motion transitions become a natural part of QML
and are extremely easy to implement.
QML tutorials
The tutorials below will help you get started writing your first QML applications, as well as learning how to better use
specific parts of the Ubuntu SDK.
Tutorials - building your first QML app
In this recipe you will learn how to write a currency converter app for Ubuntu on the phone. You will
be using several components from the Ubuntu QML toolkit: i18n, units, ItemStyle for theming,
Label, ActivityIndicator, Popover, Button, TextField, ListItems.Header and ListItems.Standard.
The application will show you how to use the QML declarative language to create a functional user interface and its
logic, and to communicate through the network and fetch data from a remote source on the Internet.
In practical terms, you will be writing an application that performs currency conversion between two selected currencies. The rates are fetched using the European Central Bank’s API. Currencies can be changed by pressing the buttons
and selecting the currency required from the list.
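Since every ECB reference rate is expressed against the Euro, converting between two arbitrary currencies goes through EUR. A sketch of the arithmetic in plain JavaScript, QML's scripting language (the rates below are illustrative, not real ECB data):

```javascript
// Convert an amount between two currencies using their EUR reference
// rates: first into Euros (divide by the source rate), then into the
// target currency (multiply by the target rate).
function convert(amount, rateFrom, rateTo) {
    return amount * rateTo / rateFrom;
}

// Illustrative rates: EUR is always 1.0 by definition
console.log(convert(10, 1.0, 2.0)); // 20 units of the target currency
```

This is exactly the calculation the finished app will perform each time the user edits an amount or picks a different currency.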
Requirements
• Ubuntu 14.04 or later – get Ubuntu
• The Ubuntu SDK – install the Ubuntu SDK
The tools
The focus of this tutorial will be on the Ubuntu UI toolkit preview and its components, rather than on the tools.
However, it is worth mentioning and giving an overview of the tools you will be using:
Development host
Ubuntu 14.04 (or later) will be used as the host machine for development. At the end of this recipe you will have
created a platform-agnostic QML app that can be run on the development host machine. Subjects such as cross-compiling for a different architecture and installation on a phone are more advanced topics that will be covered at a
later date when the full Ubuntu SDK is released.
Integrated Development Environment (IDE)
We will be writing declarative QML code, which does not need to be compiled to be executed, so you can use your
favourite text editor to write the actual code, which will consist of a single QML file.
For this tutorial, we recommend using Ubuntu SDK. Ubuntu SDK is a powerful IDE to develop applications based
on the Qt framework.
QML viewer
To start QML applications, either during the prototyping or final stages, we will use Ubuntu SDK and the Ctrl+R
shortcut.
However, as an alternative for quick app viewing, you can also use QML Scene without Ubuntu SDK. QML Scene is a command-line application that interprets and runs QML code.
To run a QML application with QML Scene, open up a terminal with the Ctrl+Alt+T key combination, and execute
the qmlscene command, followed by the path to the application:
$ qmlscene /path/to/application.qml
Learn more about QML Scene ›
Getting started
To start Ubuntu SDK, simply open the Dash, start typing "ubuntu sdk", and click on the Ubuntu SDK icon that appears in the search results.
Next stop: putting our developer hat on.
The main view
We’ll start off with a minimum QML canvas with our first Ubuntu component: a label inside the main view.
1. In Ubuntu SDK, press Ctrl+N to create a new project
2. Select the Projects > Ubuntu > App with Simple UI template and click Choose. . .
3. Give the project CurrencyConverter as a Name. You can leave the Create in: field as the default and then
click Next.
4. You can optionally set up a revision control system such as Bazaar in the final step, but that’s outside the scope
of this tutorial. Click on Finish.
5. Remove the Column component and all of its children, replace them with the Page as shown below, and then save with Ctrl+S:
import QtQuick 2.4
import Ubuntu.Components 1.3

/*!
    \brief MainView with a Label and Button elements.
*/
MainView {
    id: root

    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "currencyconverter.yourname"

    width: units.gu(100)
    height: units.gu(75)

    property real margins: units.gu(2)
    property real buttonWidth: units.gu(9)

    Page {
        title: i18n.tr("Currency Converter")
    }
}
Try to run it now to see the results:
1. Inside Ubuntu SDK, press the Ctrl+R key combination. It is a shortcut to the Build > Run menu entry
Or alternatively, from the terminal:
1. Open a terminal with Ctrl+Alt+T
2. Run the following command: qmlscene ~/CurrencyConverter/main.qml
Hooray! Your first Ubuntu app for the phone is up and running. Nothing very exciting yet, but notice how simple it
was to bootstrap it. You can close your app for now.
Now starting from the top of the file, let’s go through the code.
import QtQuick 2.4
import Ubuntu.Components 1.3
Every QML document consists of two parts: an imports section and an object declaration section. First of all we
import the QML types and components that we need, specifying the namespace and its version. In our case, we import
the built-in QML and Ubuntu types and components.
We now move on to declaring our objects. In QML, a user interface is specified as a tree of objects with properties.
JavaScript can be embedded as a scripting language in QML as well, but we’ll see this later on.
MainView {
    id: root

    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "currencyconverter.yourname"

    width: units.gu(100)
    height: units.gu(75)

    property real margins: units.gu(2)
    property real buttonWidth: units.gu(9)

    Page {
        title: i18n.tr("Currency Converter")
    }
}
Secondly, we create a MainView, the most essential SDK component, which acts as the root container for our application. It also provides the standard toolbar and Header.
With a syntax similar to JSON, we define its properties by giving it an id we can refer to it by (root), and then we define some visual properties (width and height). Notice how in QML properties are bound to values with the property: value syntax. We also define a custom property called margins, of type real (a number with a decimal point). Don't worry about the buttonWidth property for now; we'll use it later on. The rest of the properties available in the MainView are left at their default values by not declaring them.
Notice how we specify units as units.gu. These are grid units, which we are going to talk about in a minute. For now, you can consider them a form-factor-agnostic way to specify measurements. They return a pixel value that's dependent on the device the application is running on.
Inside our main view, we add a child Page, which will contain the rest of our components as well as provide a title.
We add title text to the page, ensuring it is enclosed in the i18n.tr() function, which will make it translatable.
Resolution independence
A key feature of the Ubuntu user interface toolkit is the ability to scale to all form factors in a world defined by users
with multiple devices. The approach taken has been to define a new unit type, the grid unit (gu in short). Grid units
translate to a pixel value depending on the type of screen and device the application is running on. Here are some
examples:
Device          Conversion
Most laptops    1 gu = 8 px
Retina laptops  1 gu = 16 px
Smart phones    1 gu = 18 px
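Using the example conversions above, a units.gu()-style helper can be sketched in JavaScript. The device categories and ratios follow the table; the helper itself is hypothetical, not the SDK's actual implementation:

```javascript
// Map a grid-unit measurement to pixels, using the example ratios from
// the table above (8 px/gu on most laptops, 16 on retina laptops,
// 18 on smart phones).
var PX_PER_GU = { laptop: 8, retina: 16, phone: 18 };

function gu(value, device) {
    return value * PX_PER_GU[device];
}

console.log(gu(2, "laptop")); // 16 px
console.log(gu(2, "phone"));  // 36 px
```

The same units.gu(2) measurement therefore stays visually proportionate on every device, even though the pixel value differs.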
Learn more about resolution independence
Internationalization
As part of the Ubuntu philosophy, internationalization and native language support are key features of the Ubuntu toolkit. We've chosen gettext, the most ubiquitous Free Software internationalization technology, which we've implemented in QML through the family of i18n.tr() functions.
Fetching and converting currencies
Now we will start adding the logic to our app, which will mean getting the currency and rates data and doing the actual
conversion.
Start by adding the following code around line 33 before the Page’s closing brace. We will mostly be appending
code in all subsequent steps, but any snippet will be contained inside our root MainView. So when you append code,
make sure it is still before the MainView’s closing brace at the end of the file.
ListModel {
    id: currencies

    ListElement {
        currency: "EUR"
        rate: 1.0
    }

    function getCurrency(idx) {
        return (idx >= 0 && idx < count) ? get(idx).currency : ""
    }

    function getRate(idx) {
        return (idx >= 0 && idx < count) ? get(idx).rate : 0.0
    }
}
What we are doing here is using currencies as a ListModel object that will contain a list of items consisting of
currency and rate pairs. The currencies ListModel will be used as a source for the view elements that will
display the data. We will be fetching the actual data from the Euro foreign exchange reference rates from the European
Central Bank. As such, the Euro itself is not defined there, so we’ll pre-populate our list with the EUR currency, with
a reference rate of 1.0.
The function statements in currencies illustrate another powerful feature of QML: integration with JavaScript. The two
JavaScript functions are used as glue code to retrieve a currency or rate from an index. They are required because currencies
may not be loaded yet when component property bindings use them for the first time. But do not worry much about their
inner workings. For now it’s just important to remember that you can transparently integrate JavaScript code in your QML
documents.
Now we’ll fetch the actual data with a QtQuick object designed to load XML data into a model: the XmlListModel.
To use it, we add an additional import statement at the top of the file, so that it looks like:
import QtQuick 2.4
import QtQuick.XmlListModel 2.0
import Ubuntu.Components 1.3
And then around line 49, add the actual rate exchange fetcher code:
XmlListModel {
    id: ratesFetcher
    source: "http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml"
    namespaceDeclarations: "declare namespace gesmes='http://www.gesmes.org/xml/2002-08-01';"
                           +"declare default element namespace 'http://www.ecb.int/vocabulary/2002-08-01/eurofxref';"
    query: "/gesmes:Envelope/Cube/Cube/Cube"
    onStatusChanged: {
        if (status === XmlListModel.Ready) {
            for (var i = 0; i < count; i++)
                currencies.append({"currency": get(i).currency, "rate": parseFloat(get(i).rate)})
        }
    }
    XmlRole { name: "currency"; query: "@currency/string()" }
    XmlRole { name: "rate"; query: "@rate/string()" }
}
The relevant properties are source, to indicate the URL where the data will be fetched from; query, to specify an absolute XPath query to use as the base query for creating model items from the XmlRoles below; and
namespaceDeclarations as the namespace declarations to be used in the XPath queries.
Chapter 6. App development
The onStatusChanged signal handler demonstrates another combination of versatile features: the signal and handler system together with JavaScript. Each QML property has a <property>Changed signal and its corresponding on<property>Changed signal handler. In this case, the statusChanged signal will be emitted to
notify of any change to the status property, and we define a handler to append all the currency/rate items to the
currencies ListModel once ratesFetcher has finished loading the data.
In summary, ratesFetcher will be populated with currency/rate items, which will then be appended to
currencies.
It is worth mentioning that in most cases we’d be able to use a single XmlListModel as the data source, but in our case
we use it as an intermediate container. We need to modify the data to add the EUR currency, and we put the result in
the currencies ListModel.
Notice how network access happens transparently so that you as a developer don’t have to even think about it!
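The hand-off between the two models can be sketched in plain JavaScript (with hypothetical arrays standing in for the two QML models): each fetched item carries its rate as a string, because XmlRole extracts text, so it is converted with parseFloat() before being appended to the pre-populated list.

```javascript
// Hypothetical stand-ins for the two models involved.
var currencies = [{ currency: "EUR", rate: 1.0 }];   // pre-populated, like the ListModel
var fetched = [                                      // what the XmlListModel would hold
  { currency: "USD", rate: "1.1770" },               // rates arrive as strings
  { currency: "GBP", rate: "0.8728" }
];

// Mirror of the onStatusChanged handler: append each item, converting the rate.
for (var i = 0; i < fetched.length; i++) {
  currencies.push({ currency: fetched[i].currency, rate: parseFloat(fetched[i].rate) });
}

console.log(currencies.length); // 3: EUR plus the two fetched currencies
```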
Around line 66, let’s add an ActivityIndicator component to show activity while the rates are being fetched:
ActivityIndicator {
    objectName: "activityIndicator"
    anchors.right: parent.right
    running: ratesFetcher.status === XmlListModel.Loading
}
We anchor it to the right of its parent (root) and it will show activity until the rates data has been fetched.
And finally, around line 32 (above and outside of the Page), we add the convert JavaScript function that will
perform the actual currency conversions:
function convert(from, fromRateIndex, toRateIndex) {
    var fromRate = currencies.getRate(fromRateIndex);
    if (from.length <= 0 || fromRate <= 0.0)
        return "";
    return currencies.getRate(toRateIndex) * (parseFloat(from) / fromRate);
}
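The math goes through the euro as a pivot: the input is first divided by the source currency’s reference rate (giving an amount in euros), then multiplied by the target currency’s rate. A standalone sketch with made-up rates (not real ECB data) shows the idea:

```javascript
// Made-up reference rates for illustration (EUR is the base, rate 1.0).
var rates = { EUR: 1.0, USD: 1.25, GBP: 0.8 };

function convert(amount, from, to) {
  // amount / rates[from] is the value in euros; multiply by the target rate.
  return rates[to] * (parseFloat(amount) / rates[from]);
}

console.log(convert("10", "USD", "EUR")); // 10 / 1.25 = 8 euros
console.log(convert("10", "USD", "GBP")); // 8 * 0.8 = 6.4 pounds
```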
Choosing currencies
At this point we’ve added all the backend code and we move on to user interaction. We’ll start off by creating a new
Component, a reusable block that is created by combining other components and objects.
Let’s first append two import statements at the top of the file, underneath the other import statements:
import Ubuntu.Components.ListItems 0.1
import Ubuntu.Components.Popups 1.3
And then add the following code around line 79:
Component {
    id: currencySelector

    Popover {
        Column {
            anchors {
                top: parent.top
                left: parent.left
                right: parent.right
            }
            height: pageLayout.height

            Header {
                id: header
                text: i18n.tr("Select currency")
            }
            ListView {
                clip: true
                width: parent.width
                height: parent.height - header.height
                model: currencies
                delegate: Standard {
                    objectName: "popoverCurrencySelector"
                    text: currency
                    onClicked: {
                        caller.currencyIndex = index
                        caller.input.update()
                        hide()
                    }
                }
            }
        }
    }
}
At this point, if you run the app, you will not yet see any visible changes, so don’t worry if all you see is an empty
rectangle.
What we’ve done is create the currency selector, based on a Popover and a standard Qt Quick ListView. The
ListView will display the data from the currencies ListModel. Notice how the Column object wraps the Header
and the list view to arrange them vertically, and how each item in the list view will be a Standard list item component.
The popover will show the selection of currencies. Upon selection, the popover will be hidden (see onClicked
signal) and the caller’s data is updated. We assume that the caller has currencyIndex and input properties, and
that input is an item with an update() function.
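In plain JavaScript terms, this contract is duck-typed: any object with those members will do. Here is a hypothetical caller that satisfies it, together with the essence of what the popover’s onClicked handler does (illustrative names, not the app’s actual objects):

```javascript
// Hypothetical object satisfying the contract the popover expects of its caller.
var caller = {
  currencyIndex: 0,
  input: {
    update: function () { this.updated = true; } // records that update() ran
  }
};

// What the popover's onClicked handler effectively does for a clicked index:
function selectCurrency(caller, index) {
  caller.currencyIndex = index;
  caller.input.update();
}

selectCurrency(caller, 2);
console.log(caller.currencyIndex); // 2
```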
Arranging the UI
Up until now we’ve been setting up the backend and building blocks for our currency converter app. Let’s move on to
the final step and the fun bit, putting it all together and seeing the result!
Add the final snippet of code around line 110:
Column {
    id: pageLayout
    anchors {
        fill: parent
        margins: root.margins
    }
    spacing: units.gu(1)

    Row {
        spacing: units.gu(1)
        Button {
            id: selectorFrom
            objectName: "selectorFrom"
            property int currencyIndex: 0
            property TextField input: inputFrom
            text: currencies.getCurrency(currencyIndex)
            onClicked: PopupUtils.open(currencySelector, selectorFrom)
        }
        TextField {
            id: inputFrom
            objectName: "inputFrom"
            errorHighlight: false
            validator: DoubleValidator {notation: DoubleValidator.StandardNotation}
            width: pageLayout.width - 2 * root.margins - root.buttonWidth
            height: units.gu(5)
            font.pixelSize: FontUtils.sizeToPixels("medium")
            text: '0.0'
            onTextChanged: {
                if (activeFocus) {
                    inputTo.text = convert(inputFrom.text, selectorFrom.currencyIndex, selectorTo.currencyIndex)
                }
            }
            function update() {
                text = convert(inputTo.text, selectorTo.currencyIndex, selectorFrom.currencyIndex)
            }
        }
    }

    Row {
        spacing: units.gu(1)
        Button {
            id: selectorTo
            objectName: "selectorTo"
            property int currencyIndex: 1
            property TextField input: inputTo
            text: currencies.getCurrency(currencyIndex)
            onClicked: PopupUtils.open(currencySelector, selectorTo)
        }
        TextField {
            id: inputTo
            objectName: "inputTo"
            errorHighlight: false
            validator: DoubleValidator {notation: DoubleValidator.StandardNotation}
            width: pageLayout.width - 2 * root.margins - root.buttonWidth
            height: units.gu(5)
            font.pixelSize: FontUtils.sizeToPixels("medium")
            text: '0.0'
            onTextChanged: {
                if (activeFocus) {
                    inputFrom.text = convert(inputTo.text, selectorTo.currencyIndex, selectorFrom.currencyIndex)
                }
            }
            function update() {
                text = convert(inputFrom.text, selectorFrom.currencyIndex, selectorTo.currencyIndex)
            }
        }
    }

    Button {
        id: clearBtn
        objectName: "clearBtn"
        text: i18n.tr("Clear")
        width: units.gu(12)
        onClicked: {
            inputTo.text = '0.0';
            inputFrom.text = '0.0';
        }
    }
}
This snippet is longer than the previous ones, but it is pretty simple and there is not much new in terms of
syntax. What we’re doing is arranging the visual components to provide user interaction within the root area and
defining signal handlers.
Notice how we use the onClicked signal handlers to define what happens when the user clicks on the currency selectors (i.e. the popovers are opened), how the onTextChanged handlers call the convert() function defined earlier to
do conversions as we type, and how we define the update() function that the list view items from the currencySelector
component defined earlier expect.
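One subtlety worth noting: each field only writes to the other while it has activeFocus, which is what stops the two onTextChanged handlers from triggering each other endlessly. The guard can be sketched in plain JavaScript with hypothetical field objects (not the real TextField type, and a stand-in conversion instead of the currency math):

```javascript
// Two hypothetical fields that mirror each other, as the TextFields do.
function makeField() {
  return { text: "0.0", activeFocus: false, other: null, toOther: null };
}
var from = makeField(), to = makeField();
from.other = to;
to.other = from;
from.toOther = function (v) { return v * 2; }; // stand-in conversion
to.toOther = function (v) { return v / 2; };

function setText(field, value) {
  field.text = value;
  // Guard: only propagate from the field the user is actually editing.
  // The mirrored field has no focus, so the recursion stops after one hop.
  if (field.activeFocus) {
    setText(field.other, String(field.toOther(parseFloat(value))));
  }
}

from.activeFocus = true;
setText(from, "3");   // user types into "from"
console.log(to.text); // "6" -- converted once, no feedback loop
```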
We are using a Column and two Rows to set up the layout, and each row contains a currency selector button and a text
field to display or input the currency conversion values. We’ve also added a button below them to clear both text fields
at once. Here’s a mockup to illustrate the layout:
Lo and behold
So that’s it! Now we can sit back and enjoy our creation. Just press the Ctrl+R shortcut within Ubuntu SDK, and
behold the fully functional and slick currency converter you’ve just written with a few lines of code.
Test it!
Now that the application is running, don’t forget about tests! The quality page for QML applications has you covered.
Learn about writing tests for every level of the testing pyramid by using the application you just built.
Conclusion
You’ve just learned how to write a form-factor-independent Ubuntu application for the phone. In doing so, you’ve
been exercising and combining the power of technologies such as QML, JavaScript and a variety of Ubuntu components to produce an app with a cohesive, crisp and clean Ubuntu look.
You’ll surely have noticed the vast array of possibilities these technologies open up, so it’s now up to you: help us
test the toolkit preview, write your own apps and give us your feedback to help put Ubuntu in the next billion
phones!
Learn more
If this tutorial has started whetting your appetite, you should definitely check out the Component Showcase app that
comes with the Ubuntu QML toolkit preview. With it, you’ll be able to see all of the Ubuntu components in action and
look at their code to learn how to use them in your apps.
If you want to study the Component Showcase code:
1. Start Ubuntu SDK by pressing the Ubuntu button in the Launcher. That will bring up the Dash.
2. Start typing ubuntu sdk and click on the Ubuntu SDK icon.
3. In Ubuntu SDK, then press the Ctrl+O key combination to open the file selection dialog.
4. Select the file /usr/lib/ubuntu-ui-toolkit/examples/ubuntu-ui-toolkit-gallery/ubuntu-ui-toolkit-gallery.qml and click on Open.
5. To run the code, you can select the Tools > External > Qt Quick > Qt Quick 2 Preview (qmlscene) menu
entry.
Alternatively, if you only want to run the Component Showcase:
1. Open the Dash
2. Type toolkit gallery and double click on the “Ubuntu Toolkit Gallery” result that appears to run it
Reference
• Code for this tutorial (use bzr branch lp:ubuntu-sdk-tutorials to get a local copy)
• Writing tests for Currency Converter
• Ubuntu UI Toolkit API documentation
• Qt Quick documentation
• Getting Started Programming with Qt Quick
• Syntax of the QML language
• Integrating JavaScript and QML
Questions?
If you’ve got any questions on this tutorial, or on the technologies that it uses, just get in touch with our App Developer
Community!
Tutorials - add a C++ backend to your QML app
Whether you are creating a new app or porting an existing one from another ecosystem, you may need more backend
power than the QML + JavaScript duo proposed in the QML app tutorial. Let’s have a peek at how to add a C++
backend to your application, using system libraries or your own, and vastly increase its performance and potential
features.
In this tutorial, you will learn how to use and invoke C++ classes from QML and integrate a 3rd party library into your
project.
The SDK template
Let’s start by opening the Ubuntu SDK and click on New Project to be presented with the project wizard. For this
tutorial, we are going to use the QML App with C++ plugin (cmake) template.
Continue through the wizard by picking:
• A project name. For the sake of consistency during this tutorial, let’s use “mycppapp”
• An app name (mycppapp)
• Enter your developer information
• Choose a framework (if you are unsure about which one to use, see the Frameworks guide). For this tutorial, we
are going to use ubuntu-sdk-15.04.
• A kit corresponding to the type of device and architecture your app will be published for. For this tutorial, we
are only going to use a desktop kit.
Template files
After creating the project, the SDK has now switched to the editor tab. If you are already used to QML-only apps, you
can see a slightly different file tree on the left pane:
An app folder for QML files and the desktop file, a backend folder for C++ modules and tests, and a po folder that
will hold generated translation templates.
Other files worth noting are manifest.json.in and mycppapp.apparmor, both needed for packaging.
Since we are using the CMake build tool, each directory also contains a CMakeLists.txt file.
manifest.json.in, app/mycppapp.desktop.in and mycppapp.apparmor have already been prefilled with the information you entered in the wizard. You can edit them to manage your app version and maintainer
info, and to change the framework and permission policies your app is going to use. We are not going to use them during this
tutorial, so you can safely ignore them.
Running the template
If you run the app provided by the template (by pressing Ctrl+R or clicking the green Play icon), you can see that it
looks similar to a standalone QML app, the big difference is that the QML object used to display the “Hello world”
string is actually a class imported from C++.
First example - Calling the command-line
In this example, we are going to learn how to call the command line (and get its output) from a QML UI.
Note that what you will be able to do (in terms of command line use) on a device other than the Ubuntu desktop will
be fairly limited due to our app confinement policies, but this will be a good introduction to exchanging data between
the backend and the UI.
backend/modules/Mycppapp/mytype.h
Currently, this file defines a MyType class with a very simple API: it receives a string and returns it.
Let’s change it to a Launcher class, using QProcess to run system commands.
Replace the content of the file with:
#ifndef LAUNCHER_H
#define LAUNCHER_H

#include <QObject>
#include <QProcess>

class Launcher : public QObject
{
    Q_OBJECT

public:
    explicit Launcher(QObject *parent = 0);
    ~Launcher();
    Q_INVOKABLE QString launch(const QString &program);

protected:
    QProcess *m_process;
};

#endif
mytype.cpp
Now, we are going to make use of this Launcher class. It will receive a string from QML, interpret it as a command,
wait for the command to finish and return the output.
Replace the content of mytype.cpp with:
#include "mytype.h"

Launcher::Launcher(QObject *parent) :
    QObject(parent),
    m_process(new QProcess(this))
{
}

QString Launcher::launch(const QString &program)
{
    m_process->start(program);
    m_process->waitForFinished(-1);
    QByteArray bytes = m_process->readAllStandardOutput();
    QString output = QString::fromLocal8Bit(bytes);
    return output;
}

Launcher::~Launcher() {
}
backend.cpp
This is where the QQmlExtensionPlugin is used to register C++ classes as QML Types, that you will be able to
use in your UI.
The syntax is fairly explicit, and the most important line of this file is where the type registration is made:
qmlRegisterType<MyType>(uri, 1, 0, "MyType");
Since our class is now called Launcher, we need to change it to:
qmlRegisterType<Launcher>(uri, 1, 0, "Launcher");
That means that from the QML side, you now have access to a Launcher type with a launch function taking a string
and returning the terminal output. How cool is that?
QML side
In Main.qml, let’s replace the content of our page component with a very simple UI:
Page {
    title: i18n.tr("mycppapp")

    // Here, we instantiate our Launcher component
    Launcher {
        id: launcher
    }
    Column {
        anchors.fill: parent
        spacing: units.gu(2)
        anchors.margins: units.gu(2)
        Row {
            spacing: units.gu(2)
            TextField {
                id: command
            }
            Button {
                id: button
                text: i18n.tr("Run")
                onClicked: {
                    // And we call its launch function
                    // when the Run button is clicked
                    txt.text = launcher.launch(command.text)
                }
            }
        }
        Text {
            id: txt
        }
    }
}
Run the app and enjoy a tiny shell access!
Second example - Integrate an external library
In this example, we are going to use a very straightforward SVG drawing library (simple-svg) which comes as a
standalone header file, and turn our first example above into an SVG graph plotting app.
Accessing the library
Let’s start by downloading simple_svg_1.0.0.hpp, renaming it to simplesvg.h and adding it to the rest of our source
files (in backend/modules/Mycppapp/).
Then, we edit mytype.h to slightly change the signature of our launch function.
From
Q_INVOKABLE QString launch(const QString &program);
To
Q_INVOKABLE void draw(const int &width, const int &height, const QString &array);
We now have a draw function that takes a width, a height and a string (which will be a stringified, space-separated
array of integers).
The draw function
It’s time to flesh out our SVG generating function in mytype.cpp.
First, a few more includes at the top of the file:
#include "mytype.h"
#include "simplesvg.h"
#include <iostream>
#include <string>
using namespace svg;
Then, we are going to replace our launch() function, that takes a string from QML:
QString Launcher::launch(const QString &program)
{
m_process->start(program);
m_process->waitForFinished(-1);
QByteArray bytes = m_process->readAllStandardOutput();
QString output = QString::fromLocal8Bit(bytes);
return output;
}
...with a draw() one, still using data sent from QML. It makes heavy use of functions and classes provided by the bundled SVG library:
void Launcher::draw(const int &width, const int &height, const QString &array)
{
    // Create the SVG doc
    Dimensions dimensions(width, height);
    Document doc("../mycppapp/app/graph.svg", Layout(dimensions, Layout::BottomLeft));
    // Parse our string into an array
    std::istringstream buf(array.toStdString());
    std::istream_iterator<std::string> beg(buf), end;
    std::vector<std::string> tokens(beg, end);
    // Create a line
    Polyline polyline_a(Stroke(1.5, Color::Cyan));
    // Iterate over our array to define line start/end points
    for( int a = 0; a < tokens.size(); a = a + 1 )
    {
        if (tokens.size() < 2) {
            polyline_a << Point(width/(tokens.size())*(a), atoi(tokens[a].c_str())) << Point(width/(tokens.size())*(a+1), atoi(tokens[a].c_str()));
        } else {
            polyline_a << Point(width/(tokens.size()-1)*(a), atoi(tokens[a].c_str()));
        }
    }
    doc << polyline_a;
    // Save the doc
    doc.save();
}
On the QML side of things, you can now call Launcher.draw() with (width, height, points) as arguments to generate an SVG file!
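The interesting part of the loop is how the X coordinates are laid out: with n tokens, points are spaced width/(n-1) apart so the polyline spans the full drawing width. Here is that tokenizing and spacing logic replicated as a standalone JavaScript sketch (plotPoints is a hypothetical helper, not part of the app):

```javascript
// Hypothetical helper mirroring the tokenizing and X-spacing done in draw().
function plotPoints(width, input) {
  // Split the space-separated string into tokens, dropping empty ones.
  var tokens = input.trim().split(/\s+/).filter(function (t) { return t.length > 0; });
  if (tokens.length < 2) return []; // the C++ code special-cases this branch
  return tokens.map(function (t, a) {
    return { x: width / (tokens.length - 1) * a, y: parseFloat(t) };
  });
}

console.log(plotPoints(100, "0 50 25")); // x coordinates 0, 50, 100
```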
QML UI
Here is our QML UI using the draw() function in Main.qml:
Page {
    title: i18n.tr("Graph")
    id: page

    Rectangle {
        anchors.fill: parent

        Launcher {
            id: launcher
        }
        TextField {
            id: txt
            width: parent.width - units.gu(4)
            anchors.top: parent.top
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.margins: units.gu(2)
        }
        Row {
            anchors.top: txt.bottom
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.margins: units.gu(2)
            spacing: units.gu(2)
            Button {
                id: drawButton
                anchors.margins: units.gu(2)
                text: i18n.tr("Draw")
                enabled: (txt.length)
                color: UbuntuColors.orange
                onClicked: {
                    launcher.draw(page.width, page.height, txt.text)
                    img.source = ""
                    img.source = Qt.resolvedUrl("graph.svg")
                }
            }
            Button {
                id: clearButton
                anchors.margins: units.gu(2)
                text: i18n.tr("Clear")
                onClicked: {
                    launcher.draw(page.width, page.height, "")
                    img.source = ""
                    img.source = Qt.resolvedUrl("graph.svg")
                }
            }
        }
        Image {
            id: img
            anchors.fill: parent
            anchors.margins: units.gu(2)
            cache: false
            source: Qt.resolvedUrl("graph.svg")
        }
    }
}
That’s it! Our simple QML app is ready to use: enter some Y-axis values in the input field and it will generate and
display an SVG graph.
Packaging
The packaging process is as simple as for other applications. The Publish tab allows you to package the app as a .click, and
the template CMakeLists files are there to make sure everything is included in your build.
If you add libraries bigger than standalone header files, you can use the existing CMakeLists files in the template
as a starting point for including them as modules.
As an example, here is the CMakeLists file for our backend directory:
include_directories(
    ${CMAKE_CURRENT_SOURCE_DIR}
)

set(
    Mycppappbackend_SRCS
    modules/Mycppapp/backend.cpp
    modules/Mycppapp/mytype.cpp
)

# Make the unit test files visible on qtcreator
add_custom_target(Mycppappbackend_UNITTEST_QML_FILES ALL SOURCES "tests/unit/tst_mytype.qml")

add_library(Mycppappbackend MODULE
    ${Mycppappbackend_SRCS}
)

set_target_properties(Mycppappbackend PROPERTIES
    LIBRARY_OUTPUT_DIRECTORY Mycppapp)

qt5_use_modules(Mycppappbackend Gui Qml Quick)

# Copy qmldir file to build dir for running in QtCreator
add_custom_target(Mycppappbackend-qmldir ALL
    COMMAND cp ${CMAKE_CURRENT_SOURCE_DIR}/modules/Mycppapp/qmldir ${CMAKE_CURRENT_BINARY_DIR}/Mycppapp
    DEPENDS ${QMLFILES}
)

# Install plugin file
install(TARGETS Mycppappbackend DESTINATION ${QT_IMPORTS_DIR}/Mycppapp/)
install(FILES
    modules/Mycppapp/qmldir DESTINATION ${QT_IMPORTS_DIR}/Mycppapp/)
Further reading
• This CMake tutorial should help you get started with complex CMake projects
• The Qt documentation on integrating QML and C++ will give you a high level overview of integration concepts,
such as data type conversion
Tutorials - QML unit testing
In this tutorial you will learn how to write a unit test to strengthen the quality of your Ubuntu QML application. It
builds upon the Currency Converter Tutorial.
Requirements
• Ubuntu 14.04 or later - Get Ubuntu
• The Currency Converter tutorial - if you haven’t already, complete the Currency Converter tutorial
• The QML test runner tool - open a terminal with Ctrl+Alt+T and run these commands to install all required
packages:
$ sudo apt-get install qtdeclarative5-dev-tools qtdeclarative5-test-plugin
What are unit tests?
To help ensure your application performs as expected it’s important to have a nice suite of unit tests. Unit tests are the
foundation of a good testing story for your application. Let’s learn more about them.
A unit test should generally test a specific unit of code. It should be able to pass or fail in only one way. This means
you should generally have one and only one assertion (or assert for short). An assertion is a statement about the expected
outcome of a series of actions. By limiting yourself to a single statement about the expected outcome, it is clear why
a test fails.
Unit tests are the base of the testing pyramid. The testing pyramid describes the three levels of testing an application,
going from low level tests at the bottom and increasing to high level tests at the top. As unit tests are the lowest level,
they should represent the largest number of tests for your project.
In Ubuntu, unit tests for your QML application:
• Are written in JavaScript within an UbuntuTestCase type. This makes it easy to write tests with only a few lines
of JavaScript
• Are executed with the qmltestrunner tool, which will determine whether they pass or fail
Testing with qmltestrunner
QML makes developing applications easy. Fortunately, it makes testing those applications easy too! The
qmltestrunner tool allows you to execute QML files as testcases. As we will learn later, these files should
contain test_ functions and use the UbuntuTestCase type.
Here’s an example of a very basic unit test:
import QtTest 1.0
import Ubuntu.Test 1.0

UbuntuTestCase {
    name: "MathTests"

    function test_math() {
        compare(2 + 2, 4, "2 + 2 = 4")
    }
}
Running the example
To help you see what unit tests look like in real life, grab a branch of the currency converter code from the tutorial.
Run this command in a terminal:
bzr branch lp:ubuntu-sdk-tutorials
This creates a new folder called ubuntu-sdk-tutorials. The code we’ll be looking at inside the branch is under
getting-started/CurrencyConverter. On the terminal, now switch to the tutorial folder:
cd ubuntu-sdk-tutorials/getting-started/CurrencyConverter
UBports Documentation, Release 1.0
If you navigate to that folder with the file browser, you can click on the CurrencyConverter.qmlproject file
and open it with the Ubuntu SDK IDE:
Inside you will notice a tests folder that contains three subfolders named unit, integration, and functional. You’ll notice
this corresponds to the testing pyramid.
Since we are interested in the unit tests, navigate to the unit folder. Inside you’ll find the tst_convert.qml file,
which is a QML unit test.
So let’s run it! Switch back to your terminal and run:
qmltestrunner -input tests/unit
If everything went successfully, you should see a small printout displaying all tests as passing.
What to test
Unit tests are a great way to ensure our code and functions react as we expect them to. The currency converter project
has a convert function, so let’s test it! First a quick look at the QML function in the Main.qml file:
function convert(from, fromRateIndex, toRateIndex) {
    var fromRate = currencies.getRate(fromRateIndex);
    if (from.length <= 0 || fromRate <= 0.0)
        return "";
    return currencies.getRate(toRateIndex) * (parseFloat(from) / fromRate);
}
The function converts currencies given a currency and the rate indexes to convert from and to. With that in mind, let’s
write some tests to test and ensure it’s working properly.
Our first test case
Since we want to test the convert function, let’s feed it some specific input and ensure it returns the proper results.
Let’s start simple enough and pass in a value of 1. You can see this test case written out in test_convert1()
inside of tst_convert.qml.
function test_convert1() {
    // convert 1.00 from currency 5 to currency 10
    var value = currencyConverter.convert("1.00", 5, 10)
    verify(value > 0)
}
This shows the basic format for a unit test. We call the function with a known value and then assert our expectations
about the result. If for some reason the result is different than our expectations, the test will fail.
Another great use of unit tests is to explore how your code will react in edge cases. For instance, what would happen
(and what should happen!) if 0 is passed to the function? What about -1? How about a string? Explore these edge
cases and test them!
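To sketch how such edge cases play out, here is the convert() logic replicated in plain JavaScript with a stubbed rate table (hypothetical values, for illustration only; the real function reads from the currencies model):

```javascript
// Stubbed stand-in for the currencies model's getRate().
var rates = [1.0, 1.25, 0.8];
function getRate(idx) { return (idx >= 0 && idx < rates.length) ? rates[idx] : 0.0; }

function convert(from, fromRateIndex, toRateIndex) {
  var fromRate = getRate(fromRateIndex);
  if (from.length <= 0 || fromRate <= 0.0)
    return "";
  return getRate(toRateIndex) * (parseFloat(from) / fromRate);
}

console.log(convert("", 0, 1));    // "" -- empty input is rejected
console.log(convert("1", -1, 1));  // "" -- invalid index yields rate 0.0
console.log(convert("abc", 0, 1)); // NaN -- parseFloat cannot parse it
```

The last case is exactly the kind of behavior a unit test should pin down: is NaN acceptable output, or should the function guard against it?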
Running a testcase
After you’ve written your set of test cases, it’s important to understand how they will be executed. UbuntuTestCase
contains some built-in methods that control execution. For example, here’s the actual order of execution for our
example unit test suite in tst_convert.qml.
initTestCase()
init()
test_convert0()
cleanup()
init()
test_convert1()
cleanup()
cleanupTestCase()
If you need to execute some code before or after running a test that is common to all tests, put it in init() / cleanup().
If you have a bit of code that needs to execute before any tests are run, or after the test suite is complete, place it in
initTestCase() / cleanupTestCase() respectively.
For our test suite you can see we do have some code in both initTestCase() and cleanupTestCase(). Since
our app requires an Internet connection, initTestCase() accounts for the data loading time. Loading data for an
application is a common use case of something that might need to be added to initTestCase().
Finally, you will also notice that the test cases run in ascending order, sorted by name. However, tests should generally
be self-contained and independent, so don’t rely on one test running after another based on that ordering.
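That execution order can be sketched with a toy runner in plain JavaScript — an illustration of the sequencing described above, not how qmltestrunner is actually implemented:

```javascript
// Toy runner illustrating the lifecycle sequencing: initTestCase once,
// then init / test / cleanup per test (name-sorted), then cleanupTestCase once.
function runSuite(suite) {
  var log = [];
  var testNames = Object.keys(suite)
    .filter(function (n) { return n.indexOf("test_") === 0; })
    .sort(); // tests run in ascending order by name
  log.push("initTestCase");
  testNames.forEach(function (name) {
    log.push("init");
    suite[name]();
    log.push(name);
    log.push("cleanup");
  });
  log.push("cleanupTestCase");
  return log;
}

var order = runSuite({ test_convert1: function () {}, test_convert0: function () {} });
console.log(order.join(" -> "));
```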
UBports Documentation, Release 1.0
Conclusion
You’ve just learned how to write unit tests for a form-factor-independent Ubuntu application for the phone. But there
is more to learn about writing QML tests. Check out the links below for more documentation and help.
Resources
• Ubuntu Test components reference
• Running tests with qml testrunner
• Get started with Qt Quick Test
Tutorials - QML integration testing
In this tutorial you will learn how to write an integration test to strengthen the quality of your Ubuntu QML application.
It builds upon the Currency Converter Tutorial.
Requirements
• Ubuntu 14.04 or later - Get Ubuntu
• The currency converter tutorial - if you haven’t already, complete the currency converter tutorial
• The unit testing tutorial for currency converter - if you haven’t already, complete the unit testing tutorial
• The QML test runner tool - open a terminal with Ctrl+Alt+T and run these commands to install all required
packages:
sudo apt-get install qtdeclarative5-dev-tools qtdeclarative5-test-plugin
What are integration tests?
An integration test tests interactions between pieces of your code. It can help ensure that data is passed properly
between functions, exceptions are handled properly and passed, etc.
Integration tests are the middle of the testing pyramid. They cover more code at once and at a higher level than unit
tests. As you remember, the testing pyramid describes the three levels of testing an application, going from low level
tests at the bottom and increasing to high level tests at the top.
In Ubuntu, like unit tests, integration tests for your QML application:
• Are written in JavaScript within an UbuntuTestCase type
• Are executed with the qmltestrunner tool, which will determine whether they pass or fail
Again, the qmltestrunner tool allows you to execute QML files as testcases consisting of test_ functions. We’ll
make use of it to run our tests.
Running the example
To help you see what integration tests look like in real life, grab a branch of the currency converter code from the
tutorial. Run this command in a terminal:
bzr branch lp:ubuntu-sdk-tutorials
This creates a new folder called ubuntu-sdk-tutorials. The code we’ll be looking at inside the branch is under
getting-started/CurrencyConverter. On the terminal, now switch to the tutorial folder:
cd ubuntu-sdk-tutorials/getting-started/CurrencyConverter
If you navigate to that folder with the file browser, you can click on the CurrencyConverter.qmlproject file
and open it with the Ubuntu SDK IDE:
So let’s run it! Switch back to your terminal and run:
qmltestrunner -input tests/integration
If everything went successfully, you should see a small window appear and disappear quickly and a printout displaying
all tests as passing.
Integration tests for Currency Converter
The currency converter application involves inputting data into TextFields on a Page and pressing a Button. We know
from our previous unit test tutorial that the convert function operates correctly, so it’s time to test
passing data from our TextFields to the convert function and vice versa.
Now let’s write some tests!
Preparing the testcase
Before we can test these QML components we’ll need to create an instance of the QML element we want to test. To do
this, open the tst_currencyconverter.qml file. In order to declare an instance of our main QML window, we’ll
need to import our QML files from the root folder. The import “../..” line imports all of the QML from the root
folder, ensuring we can declare a new instance. Our main window is called Main in our application, so let's declare a
new instance:
import "../.."
...
Item {
    width: units.gu(40)
    height: units.gu(30)

    // Create an instance of the QML element we want to test
    Main {
        id: currencyConverter
        anchors.fill: parent
    }
}
This will create a small (40 by 30 units) instance of our currency converter main window when we execute this QML.
We will use this to test.
Simulating mouse and keyboard input
We also need to think about how we will simulate mouse and keyboard input, since we intend to pass data into UI
elements. Fortunately, there are useful methods from Qt.TestCase to help us.
The keyPress(), keyRelease(), and keyClick() methods can be used to simulate keyboard events, while the
mousePress(), mouseRelease(), mouseClick(), mouseDoubleClick(), and mouseMove() methods can be used to
simulate mouse events.
These useful methods are self-describing and allow us to interact with the active QML element. Before using them,
however, we must ensure the window has loaded. To do this, we'll use the when and windowShown properties:

when: windowShown

Adding this simple check ensures our test won't begin until the window has loaded.
Our first testcase
With our test now all ready to launch and wait for our element to load, we can write our test for converting rates. Note
again that we simulate the mouse and keyboard as inputs for our test.
function test_convert(data) {
    var inputFrom = findChild(currencyConverter, "inputFrom")
    var inputTo = findChild(currencyConverter, "inputTo")
    // Click in the middle of the inputFrom TextField to focus it
    mouseClick(inputFrom, inputFrom.width / 2, inputFrom.height / 2)
    // Click at the right end of the inputFrom TextField to clear it
    mouseClick(inputFrom, inputFrom.width - units.gu(1), inputFrom.height / 2)
    // Press key from data set
    keyClick(data.inputKey)
    // Now the field should contain the value from the data set.
    // compare() also checks the type. We need to convert text to int if the
    // data set holds ints.
    compare(parseInt(inputFrom.text), data.value)
    // The output field should be 0 when the input is 0, otherwise it should
    // be greater than 0
    if (data.value == 0) {
        // Here we compare the text to the string "0"
        compare(inputTo.text, "0", "0 Euros is not 0 Dollars!?!?")
    } else {
        // With verify() automatic casting can happen.
        verify(inputTo.text > 0)
    }
}
This test case clears the input text field and enters values. We then make two assertions: first, that the text
field receives and properly reacts to our input; second, that the conversion field is properly updated with a
converted value.
Going deeper
Did you notice that our test function takes a data parameter? This lets us test several different values to make sure
all our edge cases are covered. We can do this by defining _data functions. Examine the following function in the test
case:
function test_convert_data() {
    return [
        { tag: "0", inputKey: Qt.Key_0, value: 0 },
        { tag: "5", inputKey: Qt.Key_5, value: 5 }
    ]
}
This function has the same name as our test_convert function, with _data appended to the end.
This instructs qmltestrunner to run our test_convert function with the given inputs: one run for each set of
values.
Another test
There’s an additional test we can code to ensure our input fields behave properly. The clear button is a part of the main
window and the text fields should react when it is pressed. Let’s write a testcase to ensure this behaves as expected.
function test_clearButton() {
    var inputFrom = findChild(currencyConverter, "inputFrom")
    var inputTo = findChild(currencyConverter, "inputTo")
    // Click in the middle of the inputFrom TextField to focus it
    mouseClick(inputFrom, inputFrom.width / 2, inputFrom.height / 2)
    // Press Key "5"
    keyClick(Qt.Key_5)
    // Now the field should contain the value 0.05 because 0.0 is already
    // in there in the beginning
    tryCompare(inputFrom, "text", "0.05")
    var clearBtn = findChild(currencyConverter, "clearBtn")
    mouseClick(clearBtn, clearBtn.width / 2, clearBtn.height / 2)
    // Now the field should be set back to "0.0"
    tryCompare(inputFrom, "text", "0.0")
}
In this testcase we use the tryCompare function to make assertions in reaction to our simulated input. Unlike the
compare function used above, it allows for an asynchronous event to occur: the assertion won't fail immediately,
since the input field needs a small amount of time to react to the button press. Notice the multiple assertions as
well: if we ever decide the clear button should perform additional functions, we can update this testcase.
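tryCompare's retry-until-timeout behaviour can be sketched in plain Python. This is a simplified model, not the Qt implementation:

```python
import time

def try_compare(get_value, expected, timeout=5.0, interval=0.05):
    """Poll get_value() until it equals expected or the timeout expires,
    instead of failing on the first mismatch like a plain compare()."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_value() == expected:
            return True
        time.sleep(interval)
    return get_value() == expected

# Simulate a field that reacts to a click only after a short delay.
state = {"text": "0.05", "cleared_at": time.monotonic() + 0.2}

def field_text():
    if time.monotonic() >= state["cleared_at"]:
        state["text"] = "0.0"
    return state["text"]

# Succeeds once the simulated field updates, even though the first
# few polls still see "0.05".
print(try_compare(field_text, "0.0"))
```

A plain compare would have failed on the first read; polling is what makes assertions on asynchronous UI state practical.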
Conclusion
You've just learned how to write integration tests for a form-factor-independent Ubuntu application. But
there is more to learn about writing QML tests. Check out the links below for more documentation
and help.
Resources
• Ubuntu Test components API reference
• Running tests with qmltestrunner
• Learn how to simulate mouse and keyboard input with Qt Quick Test
Tutorials - writing QML acceptance tests
In this tutorial you will learn how to write an autopilot test to strengthen the quality of your Ubuntu QML application.
It builds upon the Currency Converter Tutorial.
Requirements
• Ubuntu 14.04 or later – get Ubuntu.
• The Currency Converter tutorial – if you haven’t already, complete the Currency Converter tutorial.
• The lower level testing tutorials on unit testing, and integration testing.
Testing like a user with autopilot
Whew! Presuming we’ve written our QML application, and written some unit tests for it, we can now be assured our
program works properly, and if we break it we’ll know about it, right?
Well, from a logical level yes, we’ve now assured ourselves the program should behave reasonably. That is until a user
gets ahold of it.
How can we make sure when they press a button or interact with our application that it will respond properly? What
can we do to fill this final gap? The answer is a functional testing tool called Autopilot.
Preparing for launch
First things first, we need to make sure the autopilot tool is installed. This can be done using the autopilot PPA. Add
the PPA and install the packages:

sudo apt-add-repository ppa:autopilot/1.5
sudo apt-get update
sudo apt-get install python3-autopilot python3-autopilot-vis
Let’s also grab a branch of currency converter code from the tutorial with autopilot tests already written and in place
to look at.
bzr branch lp:ubuntu-sdk-tutorials
This creates a new folder called ubuntu-sdk-tutorials. The code we’ll be looking at inside the branch is under
getting-started/CurrencyConverter.
cd ubuntu-sdk-tutorials/getting-started/CurrencyConverter
Learning the basics of autopilot
A basic autopilot test consists of:
• a setup phase where we start the application and create any data we might need. Next,
• we interact with the application by pressing buttons, sending keystrokes and doing things a user would do.
Finally,
• we make some assertions about our actions to ensure the application responded appropriately.
If you've used other testing frameworks in the xUnit tradition, you will notice the similarities.
So what does a test look like? There is already an autopilot folder waiting for us inside the tests subfolder of the
CurrencyConverter directory. Inside is a folder aptly called currencyconverter, which represents our
testsuite name. Finally, inside this folder are the testcases and supporting code.
cd tests/functional/currencyconverter
So, let’s take a look and talk about how it works.
Looking at the __init__.py file:
from autopilot.testcase import AutopilotTestCase
...

class CurrencyConverterTestCase(AutopilotTestCase):
    ...

    def setUp(self):
        super(CurrencyConverterTestCase, self).setUp()
        self.launcher, self.test_type = self.get_launcher_and_type()
        self.app = currencyconverter.CurrencyConverter(
            self.launcher(), self.test_type)
And then the test_currencyconverter.py file:
from currencyconverter.tests import CurrencyConverterTestCase


class TestMainWindow(CurrencyConverterTestCase):

    def test_from_currency_convert(self):
        """ Setting from currency value should update to currency """
        self.app.main_view.set_random_from_value()
        to_value = self.app.main_view.get_to_currency_field().get_value()
        self.assertGreater(to_value, 0)

    def test_to_currency_convert(self):
        """ Setting to currency value should update from currency """
        self.app.main_view.set_random_to_value()
        from_value = self.app.main_view.get_from_currency_field().get_value()
        self.assertGreater(from_value, 0)

    def test_clear_button(self):
        """ Test if the clear button clears the screen """
        self.app.main_view.set_random_from_value()
        self.app.main_view.use_clear_button()
        self.assertEquals(
            self.app.main_view.get_from_currency_field().text, '0.0')
        self.assertEquals(
            self.app.main_view.get_to_currency_field().text, '0.0')
Back to __init__.py:
class CurrencyConverter(object):
    """Autopilot helper object for the currencyconverter application."""

    def __init__(self, app_proxy, test_type):
        self.app = app_proxy
        self.test_type = test_type
        self.main_view = self.app.select_single(Main)

    @property
    def pointing_device(self):
        return self.app.pointing_device


class Main(ubuntuuitoolkit.MainView):
    """Autopilot helper for the MainView."""

    def __init__(self, *args):
        super(Main, self).__init__(*args)
        self.visible.wait_for(True)
        self.wait_for_network()
We implement an AutopilotTestCase object (the CurrencyConverterTestCase class) and define a new method
for each test (e.g. test_clear_button).
You will also notice the setUp method inside __init__.py. This is called before each test is run by the testrunner.
In this case, our setup only consists of launching the application before we run each test and waiting for it to appear
before continuing.
After setUp runs, a test_* function is executed. Finally, tearDown runs, and the cycle continues with the
next testcase.
Since we're testing our UI on multiple form factors, you'll notice we include logic for a mouse or touch device in
__init__.py. Autopilot is display-server agnostic (Xorg, Mir, etc.). We simply initialize our pointing_device,
and we can then issue touch/click and movement commands generically. In this way our testcase can stay the same
across multiple form factors.
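The idea behind the pointing_device abstraction can be sketched like this (hypothetical classes, not autopilot's actual implementation):

```python
class Mouse:
    def click(self, x, y):
        return f"mouse click at ({x}, {y})"

class Touch:
    def click(self, x, y):
        return f"tap at ({x}, {y})"

def get_pointing_device(platform):
    # The testcase only ever sees the common click() interface, so the
    # same test runs unchanged on desktop (mouse) and phone (touch).
    return Touch() if platform == "phone" else Mouse()

device = get_pointing_device("phone")
print(device.click(10, 20))
```

Because the tests call only the shared interface, swapping the backend never requires touching the test code itself.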
The computer has eyes
To make things easier for us, we’ve also defined a class called Main inside of __init__.py with several helper
functions which you see utilized in the tests inside of test_currencyconverter.py. In fact, this class builds
upon an entire suite of helpers made just for autopilot testing of Ubuntu SDK applications.
These helper functions are the basis of the vision we have inside the application. This is because autopilot hooks into
the dbus session of our application to read the data behind the scenes. In this way we can then make assertions about
an object’s properties.
If you look closely, you'll notice something else about the QML source file for currency converter. To aid autopilot's
vision, we've added objectName properties to the objects we wish to inspect at runtime. Using an objectName, we can
issue a select_single or select_multiple call to autopilot to grab that specific object easily. Once we have
the object, we can examine its data structures to confirm application behavior at runtime using asserts.
Button {
    id: clearBtn
    objectName: "clearBtn"
    text: i18n.tr("Clear")
    width: units.gu(12)
    onClicked: {
        inputTo.text = '0.0';
        inputFrom.text = '0.0';
    }
}
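Conceptually, select_single walks the introspection tree looking for a matching objectName. A toy model in Python (the tree and helper are illustrative, not autopilot's API):

```python
# Toy widget tree, standing in for the dbus introspection data.
tree = {
    "objectName": "mainView",
    "children": [
        {"objectName": "inputFrom", "children": []},
        {"objectName": "clearBtn", "children": [], "text": "Clear"},
    ],
}

def select_single(node, object_name):
    """Depth-first search for the first node with a given objectName."""
    if node.get("objectName") == object_name:
        return node
    for child in node.get("children", []):
        found = select_single(child, object_name)
        if found is not None:
            return found
    return None

print(select_single(tree, "clearBtn")["text"])  # Clear
```

This is why unique objectName values matter: without them the search has no stable handle on the widget you want.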
Testing the clear button
So, let’s examine the testcase written to test the clear button for the application to see how this works.
1. First we utilize our helper method to set a random value in the from field. This is done using the
select_single method autopilot exposes to us. Given a named property type and object name, we can
retrieve the object during runtime and examine it.
2. Next we utilize a second helper method, which relies on the autopilot functions to tap or click on the clearButton.
You can see all of these helper methods inside the Main class in __init__.py.
3. Lastly, we need to assert the resulting text fields are zeroed out – just like we coded it.
from currencyconverter.tests import CurrencyConverterTestCase


class TestMainWindow(CurrencyConverterTestCase):

    def test_clear_button(self):
        """ Test if the clear button clears the screen """
        self.app.main_view.set_random_from_value()
        self.app.main_view.use_clear_button()
        self.assertEquals(
            self.app.main_view.get_from_currency_field().text, '0.0')
        self.assertEquals(
            self.app.main_view.get_to_currency_field().text, '0.0')
Computer, run my test!
We're now ready to execute the test and see what happens. Autopilot supports listing the testcases present in a
testsuite and executing them via the autopilot3 list [testsuite] and autopilot3 run [testsuite] commands respectively.
Autopilot also supports running in verbose mode via the -v argument, which helps us see the output as the test
executes. So, from the tests/autopilot subfolder, execute:
autopilot3 run -v currencyconverter
This will execute the entire testsuite. It's important that you execute this command from the tests/autopilot
subfolder, or python will fail to find your testsuite. We can also run a single test at a time by specifying the
test name in our run command. Use the list command to see what's available, and then run just one test:

autopilot3 list currencyconverter
autopilot3 run -v currencyconverter.tests.test_currencyconverter.TestCurrencyConverter.test_clear_button
Seeing what autopilot sees
Autopilot contains an additional tool that lets us see the entire dbus session autopilot has available, including
things we might not realize are defined by our application. This can be useful for adding more advanced testcases or
for debugging your existing tests further. This happens via the launch and vis commands. The launch
command prepares and launches the application with a hook for autopilot to introspect its data. The vis command
then launches a visualizer allowing us to examine the data autopilot gathers.

autopilot3 launch -i Qt qmlscene /path/to/file.qml
autopilot3 vis

Select the QtQmlViewer connection from the dropdown and presto: say hello to the entire set of dbus session
properties and values for our application.
Conclusion
You've just learned how to test a form-factor-independent Ubuntu application. But there is more to learn about the powers of autopilot. Check out the links below for more documentation and help.
Welcome to the world of testing!
Resources
• Official Autopilot Tutorial
• Autopilot API Documentation
• Autopilot SDK Helpers
Tutorials - Ubuntu UI toolkit palette
What the palette looks like now
The palette has been in need of updating for some time, and we've been working hard behind the scenes to update
it for OTA 10. We've stripped the palette back and rebuilt it from the ground up, considering every aspect as we
went along.
Below is an introduction to how the palette is constructed and how we apply it to components in the UI toolkit. The
majority of the elements are coming with the OTA 10 release; we will explicitly point out those which will come with
OTA 11.
How the palette is constructed
• Colour set
• Theme
• Palette value
• Palette value sets
The Ubuntu color set
The colors in the color set are the foundation of the palette. However, these values and names should never be
hard-coded into any component, as this will lead to conflicts and misrepresentation of color when other themes are
used. These colors are defined in the UbuntuColors singleton.
Colour name    Colour value
Jet            #111111
Inkstone       #3B3B3B
Slate          #5D5D5D
Graphite       #666666
Ash            #888888
Silk           #CDCDCD
Porcelain      #F7F7F7
Blue           #19B6EE
Green          #3EB34F
Red            #ED3146
Orange         #E95420
The theme
The theme is the visual look of the UI components. There are two themes which come with the SDK, Ambiance (light)
and Suru Dark (dark). The colors a component is painted with depend on the theme chosen by the developer.
Palette values
These are the normal color values given to a specific component. Each palette value name has a semantic meaning and
is applied on a certain layer within the application UI.
There are three layers in an Ubuntu UI and those are the following:
1. background - the base layer where the application window resides
2. foreground - a layer above the base, holding components brought into foreground
3. overlay - a layer floating above background, mostly contains popups and dialogs
There are also two sub-layers which can sit on top of any of these main layers:
1. base - sits flat on the surface of any main layer.
2. raised - sits proud of, but not detached from, the surface of any main layer.
In addition to these, there are palette values which are not applied on any particular layer but mostly color a
section of a component. Those will be described in more detail in the following chapters.
Each palette value follows the UI layer it is applied in, and each of them has at least one companion value suffixed
with "Text", which defines the color to be used when drawing on the base color or putting text above it.
Rectangle {
    color: theme.palette.normal.base
    border {
        width: 3
        color: theme.palette.normal.baseText
    }
    Text {
        text: "Hello world"
        anchors.centerIn: parent
        horizontalAlignment: Text.AlignHCenter
        color: theme.palette.normal.baseText
    }
}
Background
These are the colors applied to the bottom level (or background) of the application.
Palette value             Ambiance  Suru dark
.background               White     Jet
.backgroundText           Jet       White
.backgroundSecondaryText  Slate     Silk
.backgroundTertiaryText   Ash       Ash
.backgroundPositiveText   Green     Green
.backgroundNegativeText   Red       Red
Base
These are the colors applied to elements that sit flat on the main layers. For example the outline of a text field or the
bar of the slider.
Palette value  Ambiance  Suru dark
.base          Silk      Graphite
.baseText      Ash       Ash
Foreground
These are the colors applied to components that sit on top of the background layer. For example the background of a
neutral button.
Palette value    Ambiance   Suru dark
.foreground      Porcelain  Inkstone
.foregroundText  Jet        White
Raised
These are the colors applied to elements that are raised above the main layers. For example the thumb toggles for
sliders and switches.
Palette value         Ambiance  Suru dark
.raised               White     White
.raisedText           Slate     Slate
.raisedSecondaryText  Silk      Silk
Overlay
These are the colors applied to elements that float above the background layer. For example popovers, dialogs and
menus.
Palette value          Ambiance  Suru dark
.overlay               White     Inkstone
.overlayText           Slate     White
.overlaySecondaryText  Silk      Slate
Selection
These are the colors applied to components that have selected content. This should not be confused with the entire
component’s selected state. For example text in an editable text field.
Palette value   Ambiance            Suru dark
.selection      Blue (20% opacity)  Blue (40% opacity)
.selectionText  Jet                 White
Field
These are the colors applied to the background of input controls, for example the background of a text field, checkbox
or radio button.
Palette value  Ambiance  Suru dark
.field         White     Jet
.fieldText     Jet       White
Positive
These are the colors applied to positive actions. For example a positive button.
Palette value  Ambiance  Suru dark
.positive      Green     Green
.positiveText  White     White
Negative
These are the colors applied to negative actions. For example a negative button.
Palette value  Ambiance  Suru dark
.negative      Red       Red
.negativeText  White     White
Activity
These are the colors applied to active items. For example the indication of progress on a progress bar or a slider.
Palette value  Ambiance  Suru dark
.activity      Blue      Blue
.activityText  White     White
Palette value sets
In addition to the palette values above, an item can have a value set to control the look of the item as it enters or leaves
a state. The defined value sets are:
• theme.palette.disabled
• theme.palette.normal
• theme.palette.highlighted
• theme.palette.focused
• theme.palette.selected
• theme.palette.selectedDisabled
Each value set contains the color value for each of the color names listed above.
Note: the focused value set will land in OTA11.
How we define the color of an item
Each item is considered to have different states, though these are not specified explicitly through a given property
or enumeration. For instance, an Item, as well as a StyledItem, is in most cases in the normal state, being in normal
use. This state is represented by the enabled property, which can drive both the normal and disabled states. A
component can also be focused or not, which is driven through activeFocus and keyNavigationFocus for a StyledItem.
Some items which react to mouse or touch interaction have a property that drives the highlighted state of the
component: for example, AbstractButton has pressed and ListItem has highlighted. ListViews have a special state called
the selected state, which is used when a given ListView element is set as the current one through the
currentIndex/currentItem properties. A ListView can also have a selected element when disabled, in which case the
enabled and currentIndex properties together drive us to the selected disabled state.
These states led the palette to define a color set for each state, so a different color can be applied to the
component whenever a given state is entered. These color sets are called value sets. A component chooses a color
using the following formula:
theme.palette.valueSet.value
where valueSet corresponds to one of the states enumerated above, with camel case, and value is one of the palette
color values listed in Palette values.
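As a toy model of this formula (not the real ThemeSettings API), theme.palette.valueSet.value is essentially a two-level lookup; the color names here are the Ambiance values from the Base table above:

```python
# theme.palette.<valueSet>.<value> modelled as nested dictionaries.
palette = {
    "normal":   {"base": "Silk", "baseText": "Ash"},
    "disabled": {"base": "Silk", "baseText": "Ash"},
}

def palette_color(value_set, value):
    """Resolve a color the way theme.palette.valueSet.value does."""
    return palette[value_set][value]

print(palette_color("normal", "base"))  # Silk
```

The point of the model: the state picks the value set (first key), and the role within the layer picks the value (second key); a theme swap replaces the whole dictionary without touching component code.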
When coloring a component it is highly recommended to choose the value set corresponding to a given state of the
component, and never choose a different color value from the value sets.
The wrong way:
Rectangle {
    color: enabled ? theme.palette.normal.base : theme.palette.disabled.overlay
}
The right way:
Rectangle {
    color: enabled ? theme.palette.normal.base : theme.palette.disabled.base
}
For example, coloring a custom Button could be done in the following way:
Rectangle {
    signal clicked

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        onClicked: parent.clicked()
    }

    color: enabled ? (mouseArea.pressed
                          ? theme.palette.highlighted.base
                          : theme.palette.normal.base)
                   : theme.palette.disabled.base
}
Coloring the selected element of a ListView on the other hand is a lot different:
ListView {
    id: listView
    model: 10
    delegate: ListItem {
        // [...]
    }
    highlight: currentItem ? highlightComponent : null

    Component {
        id: highlightComponent
        Rectangle {
            color: listView.enabled ? (listView.activeFocus
                                           ? theme.palette.focused.background
                                           : theme.palette.selected.background)
                                    : theme.palette.selectedDisabled.background
        }
    }
}
The following diagram illustrates the state transitions of a component driving the colors.
Choosing the palette value set automatically
We are working on an API to choose the color value set based on the component's current state. This will be an
extension of the StyledItem and ThemeSettings components, and we hope it will reach the toolkit in OTA 11. With
the API available, component styles will no longer need huge bindings to find the color set to be used, but will
instead be able to use a simple binding. The API is in the prototyping phase, so this chapter will be updated
later.
Tutorials - Ubuntu screen keyboard tricks
Originally posted by Nekhelesh Ramananthan on his blog: https://web.archive.org/web/20160713053154/http://nik90.com/ubuntutouch-keyboard-tricks/
While working on Ubuntu Clock App and Flashback, I used to run into small issues like input fields being
hidden by the on-screen keyboard (OSK) on the phone. Back in April 2013 (for starters, I cannot believe it has been
8 months already!) I did not have an Ubuntu Phone, and hence it was rather difficult to identify these issues in time,
resulting in commits still being pushed to trunk. With time, I have learnt to address them, and so I thought I'd share
some of those tricks along with some new ones so that you don't have to start from scratch like me or some of the core
apps developers.
Preventing UI from being overlapped by OSK
If you look at most applications running on the phone, their UI elements are often given sufficient padding to ensure
they are aesthetically pleasing and have breathing space. As a result, when the OSK appears, it covers about 40% of
the screen and hides many application UI elements. This can leave the user searching for vital elements like buttons
and having a rather tough time.
Hence, while using input fields like TextField and TextArea, it is important to ensure that critical UI elements which
need to be visible to the user at all times are not hidden by the OSK. An important thing to keep in mind is that you
don't really face this issue while testing your application on the desktop. So do make it a point to test your
commit on a phone or an emulator before committing to trunk.
One way of doing this in QML is by using a Flickable, which allows the user to scroll the interface to find the
appropriate content. However, it is still cumbersome to make the user scroll through the remaining 60% of screen
height to get to the content they are looking for. A cleverer approach is to anchor critical UI elements like buttons
and labels to the OSK. The Ubuntu SDK offers a rather clean way of doing this.
In your main QML file, inside the MainView element, you can set the following property to true as shown below:

anchorToKeyboard: true
This allows any content or UI element that you anchor to the bottom of the application to automatically appear above
the OSK. Let’s look at an example. We are going to make an account creation page together. It is going to consist of a
username, password and email input field. Finally there will be a button to perform the account creation action.
Pay close attention to the anchors of the flickable and the createButton!
Page {
    id: accountPage

    Flickable {
        id: sampleFlickable
        clip: true
        contentHeight: mainColumn.height + units.gu(5)
        anchors {
            top: parent.top
            left: parent.left
            right: parent.right
            bottom: createButton.top
            bottomMargin: units.gu(2)
        }

        Column {
            id: mainColumn
            spacing: units.gu(2)
            anchors {
                top: parent.top
                left: parent.left
                right: parent.right
                margins: units.gu(2)
            }

            TextField {
                id: username
                width: parent.width
                placeholderText: "username"
            }

            TextField {
                id: password
                width: parent.width
                placeholderText: "password"
            }

            TextField {
                id: email
                width: parent.width
                placeholderText: "email"
            }
        }
    }

    Button {
        id: createButton
        text: "Create Account"
        anchors {
            horizontalCenter: parent.horizontalCenter
            bottom: parent.bottom
            margins: units.gu(2)
        }
        onClicked: {
            // perform the account creation action
        }
    }
}
When you use this login page without setting the anchorToKeyboard property to true, the create account button
will be hidden, as shown in the screenshot below.
However, by setting the anchorToKeyboard property to true, you will get a better result, as shown. Isn't that much
better? By using a flickable, the user can scroll the UI to see the other text fields, while the create account button
is always visible.
The example above is just one use case of many. Another example that comes to mind is a search page with a
search box below which a list view is shown. Ideally you want to anchor the list view to the bottom of the page, so
that when the OSK appears, the list view gets anchored to the OSK, providing a much better view.
Special Keyboards for specific purposes
Another minor detail that I think improves the user experience is showing the right keyboard. Here is a quote I
have read on several websites about good application design:
A good UI is one which guides and self corrects the user to perform the right action rather than one which
shows a notification prompt informing the user that he has made a mistake.
Though I am no designer, when I think about it from the user's perspective, I agree with it completely. Who likes to
see annoying pop-ups (from the Windows XP times) like these?
So when it comes to receiving input from the user, one step towards guiding the user is showing the correct on-screen
keyboard. Let me illustrate :-)
Let's say you want to get the user's phone number (commonly seen in messaging apps); it is better to show a keyboard
allowing only numeric input:
TextField {
    id: username
    width: parent.width
    placeholderText: "phone number"
    inputMethodHints: Qt.ImhDialableCharactersOnly
}
Let’s say you want to get the user’s email address, it is better to show a keyboard with common characters such as @
and .com.
TextField {
    id: email
    width: parent.width
    placeholderText: "email"
    inputMethodHints: Qt.ImhEmailCharactersOnly
}
As you may have noticed from the code examples above, you can control the OSK type shown using the
inputMethodHints property. The Ubuntu SDK is quite powerful (inheriting and improving on the upstream QML
widgets). If you would like more information about the different text-field method hints, I refer you to the
official documentation.
Good Luck with your app! Remember one achieves a great user experience by paying attention to such small details.
Update 1: Just out of curiosity, if you are interested in performing some special actions when the OSK appears and
disappears, you can do that using a Connections element as shown below:
Connections {
    target: Qt.inputMethod
    onVisibleChanged: console.log("OSK visible: " + Qt.inputMethod.visible)
}
Whenever the OSK appears or disappears, the onVisibleChanged handler will fire.
Update 2: Another inputMethodHints value I failed to mention is Qt.ImhNoAutoUppercase. By default the
OSK capitalises the first letter of a sentence, which is sometimes undesirable for input fields like usernames. By
setting this method hint, you can disable that behaviour.
Update 3: I just learned from Jamie Strandboge that the OSK can be hidden using Qt.inputMethod.hide().
Tutorials - performance and QML applications on Ubuntu
Tutorials - internationalizing your app
As a developer, you probably want to see your apps in many hands. One way to make it happen is to enable your
application for translation.
With minimal effort, you can mark your application strings for translation, expose them to community translators and
integrate these translations into your package. The translation build process is handled by the SDK itself, and if
you happen to use Launchpad, translators will quickly see your project and help you, but you still need to mark your
user-visible strings as translatable. Let’s get started.
Glossary
A few terms you need to understand before diving in.
• Gettext: the technology used by Ubuntu to translate applications
• Internationalization (i18n): what you will be doing in your app to enable translations
• Localization (l10n): what translators do
• User locale: for most cases, you can think of it as the language the user has chosen for the UI of their system.
However, locale is the broader term that includes the group of settings associated with a particular localized
configuration: language, date/time format, currency, etc.
• POT files: template files containing all your application strings, exposed to translators. There is generally only
one and it is also known as “Translation template.”
• PO files: what translators (or an online translation system) produce, they contain translated strings based on a
POT file. There is a .po file for each language a translation is available in, and the files are commonly called the
“Translations”.
• MO files: binary files loaded in your app at runtime, built from PO files. These are the only files that your
packaged app will need to ship to use translations.
Getting started
First, make sure you are up to date on what an SDK app needs in terms of preparation, for example adding click targets
to build for a specific device.
During this tutorial, we are going to use a sample app and see what it takes to get a translated version into users’ hands.
You can grab the code by running:
bzr branch lp:~davidc3/howmanyapples/no-i18n
Or read it online.
Or simply use one of your existing projects and try to follow along (Note: make sure to update your project to the
latest SDK template by following these steps).
This tutorial is generic, so that in practice you can apply the steps to any QML project that uses the Ubuntu UI Toolkit.
Before getting into the code, let’s have a look at some common methods.
Marking a string for translation
The SDK provides an i18n API with a very straightforward way to do that. Marking a string is as simple as calling
i18n.tr(string).
For example:
Label {
    text: i18n.tr("My Label")
}
Managing plural forms
In many Latin languages, such as English or French, putting a sentence into the plural form is, most of the time, simply
a matter of adding an “s”. But this is not the case in a lot of languages: for example, Arabic has six different plural
forms, Croatian and Russian have three, etc.
The i18n API gives you a clean solution:
i18n.tr("%1 cat", "%1 cats", nb_of_cats).arg(nb_of_cats)
In the example, the first argument to i18n.tr() is the English singular form, the second one is the English plural,
and then comes the integer which will trigger the change of form. Together they are used to generate a translation template
suited to all languages.
Working with the plural form and i18n has been extensively documented, for more information on that topic, such as
a guide to design a correct plural form, have a look here.
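The same singular/plural/count triple exists in plain gettext as ngettext. A minimal Python sketch (illustration only, not the SDK API) shows how the integer selects the form; with no catalog loaded, the English forms are returned:

```python
import gettext

# No catalog loaded: ngettext falls back to the English forms.
t = gettext.NullTranslations()

def cats(n):
    # first arg: singular form, second: plural form, third: the count
    return t.ngettext("%d cat", "%d cats", n) % n

print(cats(1))  # 1 cat
print(cats(3))  # 3 cats
```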
Providing context
Translators only see the strings enabled for translation, not the code around them, which means they can be confused about the
exact meaning or context of a string; most of the time, they don’t see the app in action at that specific step. Don’t
leave them in the dark: make sure they fully understand what the purpose of your strings is.
You can do that by adding translator comments to your code. Above any translatable string, you can add a special
comment starting with “TRANSLATORS:”. Note that the default Ubuntu SDK setup will only pick up translator
comments starting this way. For example:
// TRANSLATORS: %1 refers to the amount of animals, %2 to the species
text: i18n.tr("Do you want to buy %1 more %2?").arg(nb_animals).arg(species)
In practice
If you haven’t already, download the source of the sample app by opening a terminal with Ctrl+Alt+t and running
bzr branch lp:~davidc3/howmanyapples/no-i18n
Open it with the SDK
cd no-i18n
ubuntu-sdk .
It will open the editor. Click on Main.qml in the left column. The file should look like this: http://bazaar.launchpad.net/~davidc3/howmanyapples/no-i18n/view/head:/Main.qml
Now, try to change all user-visible strings to i18n.tr(string)
Hint: They are located at lines 27, 47, 49, 51, 53, 74, 92, 108, 132 & 140
Try to run the app with Ctrl+R, to see if it launches. If it doesn’t, make sure to check the error log for typos you
could have made.
That’s it, you know how to internationalize!
But that’s not all: you can see that at lines 74 and 92, you have the opportunity to use the plurals method.
Change line 74 to:
text: i18n.tr("You are making %1 for %2 guest",
              "You are making %1 for %2 guests",
              guests).arg(recipe).arg(guests)
The third argument of the tr() method (here “guests”) is the value that triggers the change to the plural form.
Therefore, at line 92, you can do:
text: i18n.tr("You will need %1 apple for this recipe",
              "You will need %1 apples for this recipe",
              apples).arg(apples)
Other internationalization features
In this app, you also have the opportunity to use localized currencies, with the toLocaleCurrencyString()
method.
Change line 108:
text: "This will cost you %1".arg(price)
To:
text: i18n.tr("This will cost you %1").arg(Number(price).toLocaleCurrencyString(Qt.locale()))
It will pick the correct currency symbol and the right number format depending on the locale.
This is a feature of the Locale QML Type, documented here, which provides a list of convenient methods for app
developers: metric and imperial units formats, nativeCountryName, nativeLanguageName, dateFormat, timeFormat,
etc.
The Date type is also worth looking into if your application is displaying dates or times. This will be the topic of
another tutorial.
Let’s have a look at the final internationalized version of our sample app
As you can see, I’ve also added a few translator comments. Make sure to use them for any strings needing context!
Building the POT file
The POT file will be located in a po/ folder in your app and will contain every string you have marked for translation
(including translator comments).
The SDK builds it automatically during the build step of your application: whenever you run the app, create your click
package, or click the build button, the template is generated or re-generated.
If you use Launchpad to get your app translated collaboratively by a community of translators, the Translations page
(https://translations.launchpad.net/<projectname>) will offer to use this POT file for
translations and automatically import available translations back into your project, as .po files, when they become available.
Remember to add the .pot file to revision control and to set up your project for translations.
Building translations before publishing your app
Once translators have worked on your app, make sure you run a last bzr pull to get all the translations (.po files)
from Launchpad before building the actual files (.mo) that will be shipped in your package.
Build your application one more time or simply create your click package from the Publish tab of the SDK to build
your translations.
This creates binary .mo files from the .po files provided by translators. They will be loaded at runtime depending on
the user locale.
That’s it, you are ready to publish a multi-language QML app!
Shipping translations
Translations are included in your click package in share/locale/$lang/LC_MESSAGES/$appname.mo. If
you have built your package outside of the SDK, make sure to check that they are included in this path, or your app won’t
be translated on users’ devices.
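If you need to double-check the layout by hand, the expected path can be sketched like this (Python for illustration only; the package root and app name below are hypothetical):

```python
from pathlib import Path

def mo_path(package_root, lang, appname):
    # share/locale/$lang/LC_MESSAGES/$appname.mo, as expected by the container
    return Path(package_root) / "share" / "locale" / lang / "LC_MESSAGES" / f"{appname}.mo"

print(mo_path("myapp-click", "fr", "myapp"))
```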
Testing your app in other languages
To evaluate the quality of your translations or just see how your UI looks in another language, the easiest way is to use
your target device (phone, tablet, emulator. . . ) and change its language from System Settings > Language & Text >
Language.
There is more to internationalization
Some areas of i18n are not covered by this tutorial. For example, the SDK doesn’t automatically mark for translation
the content of the .desktop file of your app (its name, description, etc.), which is handled separately by CMake.
This will be the topic of a more general i18n guide, stay tuned!
Optional: Updating your project SDK template
From time to time, the project templates provided by the SDK get updated. To get the changes needed for this tutorial
(released end of April 2015), you need to update your project template manually. If your project was created after
this date, you don’t have anything to do.
1. Make sure you have updated to the latest version of the Ubuntu SDK
2. Rename your project folder to something else
3. Create a new project with the SDK similar in all points to your original project
4. Copy everything from your renamed project folder to the new one, except the Makefile, .qmlproject and
.qmlproject.user files. Don’t forget to copy your .bzr folder if you use bzr.
Register your URL dispatcher
The URL dispatcher is a service which allows applications to launch other applications and to open specific URLs
with specific applications.
• The most common case is to open https://foo.com/* links in a specific foo app. If you have the YouTube
webapp installed, you will notice that YouTube links (from the Video scope, for example) are opened with it.
• Another use case would be to open an application or scope from another application.
In this tutorial, you are going to learn how to set up the URL dispatcher for your app or webapp and retrieve arguments
from QML. It assumes you already have a working app, or that you are starting to develop one using a QML template
provided by the SDK.
Note: the sample app we are going to use is called “urldispatcher-tutorial”; you will need to adapt the snippets
to your application name.
Basic setup
Registering your app as a URL opener is a matter of adding a JSON formatted file to your app metadata.
To do so, start by editing the hooks part of your manifest.json file.
{
    (...)
    "hooks": {
        "urldispatcher-tutorial": {
            "apparmor": "urldispatcher-tutorial.apparmor",
            "desktop": "urldispatcher-tutorial.desktop"
        }
    },
    (...)
}
It needs a new urls key, pointing to a .dispatcher file you are going to create at the root of your SDK project.
{
    (...)
    "hooks": {
        "urldispatcher-tutorial": {
            "apparmor": "urldispatcher-tutorial.apparmor",
            "desktop": "urldispatcher-tutorial.desktop",
            "urls": "urldispatcher-tutorial.dispatcher"
        }
    },
    (...)
}
This new JSON formatted file needs to define a series of protocols and domains; more specifically, an array of
protocol and optional domain-suffix dictionaries.
In this example, let’s make it a default opener for all http(s)://design.ubuntu.com/* links. Create an
<appname>.dispatcher file containing the following code:
[
    {
        "protocol": "http",
        "domain-suffix": "design.ubuntu.com"
    },
    {
        "protocol": "https",
        "domain-suffix": "design.ubuntu.com"
    }
]
That’s it: your app will be used as an opener when design.ubuntu.com links are activated by the user.
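The matching rule the file describes (equal protocol, host ending in the domain-suffix) can be sketched in a few lines of Python. This is a hypothetical restatement of the documented behavior, not the dispatcher’s actual implementation:

```python
import json
from urllib.parse import urlparse

# The same dispatcher entries as above, parsed from JSON.
entries = json.loads("""
[
    {"protocol": "http",  "domain-suffix": "design.ubuntu.com"},
    {"protocol": "https", "domain-suffix": "design.ubuntu.com"}
]
""")

def matches(url):
    # An entry matches when the scheme is equal and the host ends
    # with the (optional) domain-suffix.
    p = urlparse(url)
    return any(e["protocol"] == p.scheme and
               p.hostname.endswith(e.get("domain-suffix", p.hostname))
               for e in entries)

print(matches("https://design.ubuntu.com/apps"))  # True
print(matches("https://example.com/"))            # False
```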
Custom protocols
This simple syntax means you can create your own protocols and have your app opened when they are present in
activated links.
For example:
[
    {
        "protocol": "url-dispatcher"
    }
]
Opening URLs and apps
Opening other apps and URLs in general from QML can be done with Qt.openUrlExternally().
For example, you can create a button to open the camera app this way:
Button {
    id: openCameraApp
    text: i18n.tr("Camera")
    onClicked: {
        Qt.openUrlExternally("appid://com.ubuntu.camera/camera/current-user-version");
    }
}
As you can see, it’s using the appid protocol, followed by:
• the package name
• the app name
• and the version number (replaceable with the current-user-version wildcard).
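The three components can be pulled apart with any URL parser; here is a quick Python illustration of the structure (not how the dispatcher itself works):

```python
from urllib.parse import urlparse

# Split an appid:// URL into its package / app / version components.
url = "appid://com.ubuntu.camera/camera/current-user-version"
parts = urlparse(url)

package = parts.netloc                        # com.ubuntu.camera
app, version = parts.path.strip("/").split("/")

print(package, app, version)
# com.ubuntu.camera camera current-user-version
```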
You can also use the application protocol with a desktop file name:
application:///com.ubuntu.camera_camera_3.0.0.572.desktop
Handling command-line arguments
An Arguments component can be used to retrieve command-line launch arguments.
You can also use it to specify usage and help text.
Here is an example of what a music player could use:
Arguments {
    id: args
    defaultArgument.help: i18n.tr("Expects URL of the media to play.")
    defaultArgument.valueNames: ["URL"]
    Argument {
        name: "playlist"
        help: i18n.tr("Path of playlist to play")
        required: false
        valueNames: ["PATH"]
    }
}
Which will generate the following terminal help output:
Usage: <app> --playlist=PATH URL
Options:
--playlist=PATH
Path of playlist to play
Expects URL of the media to play.
Argument values can be retrieved by simply using the component id.
• The default argument value is retrieved with: args.defaultArgument.at(<position of the
argument>)
• Named argument values are retrieved with: args.values.<argument name>
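For comparison, the same command-line contract can be sketched with Python’s argparse (an analogy only, not the QML Arguments API; the sample values are hypothetical):

```python
import argparse

# A rough argparse analogue of the QML Arguments example above.
parser = argparse.ArgumentParser(prog="<app>")
parser.add_argument("URL", help="Expects URL of the media to play.")
parser.add_argument("--playlist", metavar="PATH", required=False,
                    help="Path of playlist to play")

args = parser.parse_args(["--playlist=mix.m3u", "song.mp3"])
print(args.URL, args.playlist)  # song.mp3 mix.m3u
```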
Known issues
When multiple applications register for the same protocol and domain, the last one installed takes precedence over
the others. A new UI to let the user pick which application to use is currently being worked on. You can follow its
progress on bug report #1378350
If you have questions about this tutorial or corner cases usage of the URL Dispatcher, make sure to ask your question
on AskUbuntu with the application-development tag.
Tutorials - using the Ubuntu thumbnailer
The Ubuntu Thumbnailer QML plugin gives you extremely fast access to thumbnails of pictures and videos stored
locally, as well as music artwork (albums and artists) using a third party backend.
Why use the thumbnailer
We believe this thumbnailer solves a whole range of issues, not only for galleries and media players, but for all
developers wanting to enrich their app with media content.
Speed
Using thumbnails instead of loading and reducing arbitrary images in your code will dramatically improve loading
speed of your components. This thumbnailer is heavily optimized for speed and caching.
Developer time
Video thumbnails often need to be generated with specific libraries (such as FFmpeg) that you would need to embed in
your package. A common service is a clean solution for all apps consuming it.
Consistency
Users get the same thumbnails for the same content between apps. A music album looks the same everywhere on their
device.
How to use it
Importing Ubuntu.Thumbnailer in your QML code will give new powers to Image components, via dedicated
URI schemes for your image source:
• "image://thumbnailer/<file URI>" : local videos, audio files and pictures
• "image://albumart/album=<album>&artist=<artist>" : any music album (online query)
• "image://artistart/album=<album>&artist=<artist>" : any music artist (online query)
Note that for privacy reasons, user devices won’t be talking directly to the third party backend (currently: 7digital), all
responses and queries are proxied by the musicproxy server.
Successful responses for such online queries are cached locally to avoid unnecessary network use.
Example
import QtQuick 2.0
import Ubuntu.Components 1.3
import Ubuntu.Thumbnailer 0.1
Image {
    source: "image://thumbnailer/" + Qt.resolvedUrl("data/videos/file.mp4")
    width: units.gu(20)
    height: width
    fillMode: Image.PreserveAspectFit
    sourceSize: Qt.size(width, height)
    anchors.centerIn: parent
}
Note that you need to specify sourceSize for the thumbnailer to know at which size to produce the thumbnail.
A few other examples
Size and ratio
• The thumbnailer always preserves aspect ratio.
• It never up-scales: returned thumbnails may be smaller than what was asked for. If the original artwork is
smaller than what was asked for, the largest possible thumbnail will be returned.
• Thumbnails are never larger than 1920 pixels in the larger dimension (even if the original artwork is larger).
• Requests with either dimension < 0 are invalid and return an error.
• Requests for (0,0) mean “as large as possible” (subject to the 1920 pixel limit).
• Requests for (x,0) or (0,y) mean “no larger than x or y”, while keeping the original aspect ratio.
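The sizing rules above can be condensed into a small sketch. This is a hypothetical restatement of the documented behavior, not the thumbnailer’s actual code:

```python
MAX_DIM = 1920  # documented upper bound on the larger dimension

def thumb_size(orig_w, orig_h, req_w, req_h):
    """Sketch of the documented sizing rules (not the real implementation)."""
    if req_w < 0 or req_h < 0:
        raise ValueError("negative dimensions are invalid")
    # (0,0) means "as large as possible"; (x,0)/(0,y) bound one side only.
    limit_w = req_w or MAX_DIM
    limit_h = req_h or MAX_DIM
    # Never up-scale, always preserve aspect ratio, cap at MAX_DIM.
    scale = min(1.0, limit_w / orig_w, limit_h / orig_h,
                MAX_DIM / max(orig_w, orig_h))
    return round(orig_w * scale), round(orig_h * scale)

print(thumb_size(4000, 3000, 0, 0))   # (1920, 1440): capped at 1920
print(thumb_size(800, 600, 1000, 0))  # (800, 600): never up-scaled
```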
QML API
The Ubuntu platform provides a rich set of technologies for your applications to integrate with and blend in with the
desktop. The API documentation will provide you with detailed technical information on how to interact with the
platform using the most popular programming languages.
Note: The API documentation has not yet been imported. The old Canonical documentation can be found here.
Welcome to HTML5 apps!
Note: Here be dragons! This part of the docs could be very outdated or incomplete and has not been completely
triaged. Refer to the Ubuntu docs for further reference.
Ubuntu embraces HTML5 as a first-class app toolkit. While its support is constantly evolving and you can expect a
lot of new things to come, most of the core parts are in good working order! So HTML5 developers can start making
true HTML5 applications (as opposed to web pages) that fit right into the dazzling Ubuntu experience.
What is an HTML5 app?
HTML5 is traditionally for web pages. CSS provides styling and animations, and JavaScript provides logic and control.
But now, these web technologies can be used to write apps for Ubuntu.
What is an Ubuntu HTML5 app?
It is written in HTML5, CSS and JavaScript and it runs in a web container. It is an app, just like any other Ubuntu app,
integrated into Unity in all the usual ways. The web container provides access to a wide and growing set of JavaScript
APIs the app can use. This includes Ubuntu platform APIs such as the Content Hub, Alarms and Online Accounts, as well as
others not specific to Ubuntu, such as the Cordova APIs, which provide access to system and device-level functionality
like the camera and the accelerometer.
Looks and feels like an Ubuntu app
An HTML5 app UI can be created with Ubuntu HTML5 widgets, like Tabs, Pages, Dialogs, Buttons, and more. When
you declare these widgets in your HTML5 code, they are automatically styled by Ubuntu CSS, so they fit right in
visually. This also includes a JavaScript runtime framework, which lets you control widgets using a convenient JS
API.
In addition to Ubuntu-specific HTML declarations, the app can use standard HTML5. Since the Ubuntu CSS provides
styling for most cases, even when using additional HTML5, the app still looks and feels like an Ubuntu app.
Ubuntu app design
Ubuntu puts design first and considers toolkits (HTML5, QML and others) as an implementation detail. It is design
that makes an Ubuntu app look and feel like an Ubuntu app.
A good first step before writing Ubuntu HTML5 apps is looking at the design section. You will find examples
and guidance on using Ubuntu UI layouts and building blocks (from a toolkit-agnostic viewpoint).
Questions?
There’s a lot to know, and fortunately the Ubuntu community is rich with sources of help and information. Here are a
couple good places to visit:
• http://askubuntu.com, a very active site. Check out the list of already answered questions. Feel free to ask your
own questions as well and make sure you use the HTML5 tag.
• Our app developer community, a great gathering place for people who share an interest in developing for Ubuntu
and sharing knowledge!
Next steps
Guides
Be sure to check out our HTML5 guides and others, like those for the Ubuntu App Platform. These focused articles
cover key topics of interest to app developers and are designed to give you a high level overview of critical topics.
After reading the guides, understanding APIs and platform features is much easier.
Tutorials
Definitely check out the HTML5 tutorials. These give you detailed steps, examples and explanations that let you leap
into productivity with Ubuntu HTML5 apps.
APIs
And of course, developers need the API reference docs for HTML5/JavaScript. These provide the implementation-level detail you need to make your HTML5 apps use the full suite of platform APIs.
HTML5 Tutorials - an introduction
Get productive in HTML5 app development with these tutorials that guide you through creating an app with step-by-step procedures and explanations of key points.
Guides - HTML5 guide
What is an HTML5 app?
HTML5 is traditionally for web pages. CSS provides styling and animations, and JavaScript provides logic and control.
But now, these “web” technologies can be used to write apps for Ubuntu. How does this work?
A few key points
• A web container, not a traditional browser: The HTML5 app runs in a web container. When the app is
launched, the container is launched preconfigured to display the app’s HTML5/CSS/JavaScript. That is, the
container’s default page is the app’s index.html file.
• First-class support of Web APIs: With a web engine scoring 511/555 on html5test.com, you get the familiar
landscape of Web APIs standardized by the W3C and can start building or porting your app on solid and well-known foundations.
• Container provides run-time access to APIs: The container is built to expose a growing set of APIs that let
your app’s JavaScript access the Ubuntu platform, for example the Content Hub.
• Ubuntu HTML5 UI toolkit: Ubuntu provides a set of HTML5 layouts and widgets (with associated CSS and
JavaScript) that you can use to build an HTML5 app that looks and behaves like other Ubuntu apps, for example
QML apps. We provide a high level introduction to key Ubuntu HTML5 layouts and widgets in a separate
mini-guide.
• Not just a web page, but a true Ubuntu app: An HTML5 app is just like any other app made with the Ubuntu SDK.
It includes the bits an Ubuntu app needs, for example a .desktop file for Unity integration, a manifest file for
click packaging, etc. Bottom line: HTML5 apps are now first-class citizens on Ubuntu.
How to create an HTML5 app from QtCreator
QtCreator is the preferred IDE for the Ubuntu SDK. It integrates with physical Ubuntu devices and emulators, allowing
you to package, run and debug your applications from it. The easiest way to create a new app from QtCreator is to
create a New project and select the HTML5 app template. You will be asked for a project name, then an app name.
Other required fields will be useful for packaging and integrating your application within the Ubuntu app confinement
model. You can have a look at this article to get a better grasp of what our security model is.
Next, you will need to select device kits for running your app. Kits are containers to run your app in the context of a
specific architecture (arm, x86) and framework (set of APIs available for each SDK release). For example, if you want
to test your app on your phone or in an arm emulator, you need to select at least one “armhf” kit. It is recommended to
have one desktop and one phone Kit: this should allow you to test, build and distribute your app without hassle on all
form factors. Click targets and device kits should give you all the details you need if you want to dive deeper or need
more help.
That’s it, your app template is created and ready to be edited. You can even run it right now by clicking the play button
at the bottom of the left pane (or press Ctrl+R).
How to structure your app
After creating your app with QtCreator, you will notice that a recommended file tree has been provided by the template,
here are the highlights of this tree:
• css: contains your css files
• img: for visual resources
• js: contains your javascript files
• index.html: homepage of your app
• icon.png: your app icon
• appname.apparmor: security policy groups declared by your app to access device functionalities (camera,
networking, access to user content, etc.)
• appname.desktop: your app declaration to the shell that will manage its launch, icon, etc.
• manifest.json: your package declaration to the system installer.
These last three files have been pre-filled by the SDK and you probably won’t have to edit their content. If you need
to, don’t worry, the system will warn you of any mistakes when you try to run or package your app.
How to use Web APIs in your application
You can expect the large majority of standard APIs to be supported and to be as easy to use as usual. Here is the
compatibility chart of our web engine on html5test.com (score of 511/555). For example, to play a song, you can use
the HTML5 audio element:
<audio id="demo" src="audio.mp3"></audio>
<div>
<button>Play the Audio</button>
<button>Pause the Audio</button>
<button>Increase Volume</button>
<button>Decrease Volume</button>
</div>
You can find more documentation on Web APIs at webplatform.org.
How to use Ubuntu APIs in your application
OS-specific features and design patterns pioneered in Ubuntu can be accessed very easily as well, such as the media-hub, online-accounts, content-hub, etc. For example, to know when your application is about to be closed, just use:
window.onload = function() {
    var api = external.getUnityObject('1.0');
    api.RuntimeApi.getApplication(function(application) {
        application.onAboutToQuit(function(killed) {
            console.log('killed: ' + killed);
        });
    });
};
You can find more documentation on Ubuntu HTML5 APIs in the API section.
How to add an Ubuntu style
Your app can use any visual style, but if you want to give a more native feel to it, Ubuntu provides a set of HTML5
layouts and widgets (with associated CSS and JavaScript) that you can use to build an HTML5 app that looks and
behaves like other Ubuntu apps, for example QML apps. You can import a complete theme simply by including:
<!-- Ubuntu UI Style imports - Ambiance theme -->
<link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>
And using <button> and <header> tags. This article provides a high level introduction to key Ubuntu HTML5
layouts and widgets.
Read the full API.
Guides - introduction to the HTML5 UI toolkit
Your app can use any visual style, but if you want to give a more native feel to it, Ubuntu provides a set of HTML5
layouts and widgets (with associated CSS and JavaScript) that you can use to build an HTML5 app that looks and
behaves like other platform apps.
To get started with the UI toolkit, let’s dive first into app layout options.
App Layouts
There are two main layout options that organize your app’s GUI at the highest level:
• A header with tabitems: when the user clicks a tabitem, the GUI switches to the associated tab content. This
is also called “Flat” navigation because the tabs are at the same “level” and the user switches between them
“horizontally” by clicking the header.
• A pagestack of pages. This is called “deep” navigation. With this, the user can drill further into the stack of
pages and use the “Back” button to climb out.
Be sure to check out design examples for further guidance on app layouts and widgets.
Widgets
Let’s take a look at some of the important Ubuntu HTML5 widgets you can use in your apps. For example, there are
tabs, pagestacks/pages, buttons, dialogs, lists, shapes, popovers, footers, and more.
Naturally, you declare these widgets in the app’s index.html file using Ubuntu-specific HTML element/attribute
combinations. This specific markup enables the Ubuntu framework to recognize and manage your app components
and to provide a convenient JavaScript API for use in your apps.
6.3. The Ubuntu App platform - develop with seamless device integration
145
UBports Documentation, Release 1.0
Most Ubuntu markup is declared with the data-role attribute indicating the widget type. An id attribute is often
required as well.
For example, you declare:
• a list with <div data-role="list">,
• a button with <button data-role="button" id="uniqueID">,
• and a dialog with <div data-role="Dialog" id="uniqueID">.
It’s all standard HTML5, of course. The Ubuntu HTML5 framework does not introduce any new elements.
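Because the markup is plain HTML5, any HTML parser can locate the widgets by their data-role attribute. A small Python sketch of that idea (illustration only, not part of the Ubuntu framework):

```python
from html.parser import HTMLParser

# Collect (data-role, id) pairs from Ubuntu-style widget markup.
class WidgetFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.widgets = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "data-role" in a:
            self.widgets.append((a["data-role"], a.get("id")))

finder = WidgetFinder()
finder.feed('<div data-role="content"><div data-role="tab" id="main"></div></div>')
print(finder.widgets)  # [('content', None), ('tab', 'main')]
```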
Next, let’s take a look at the overall structure of the index.html file itself.
Tip: Declare all the main markup and widgets in HTML instead of creating them dynamically in JavaScript in order
to minimize app launch time.
Head and imports
When you create a new app in the Ubuntu SDK, its index.html file <head>...</head> imports various JavaScript
and CSS files. Let’s review.
Ubuntu CSS and JavaScript
Naturally, the head imports Ubuntu CSS and Javascript. For example:
<link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css"/>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
<!-- [...] -->
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
Note: The Ubuntu JavaScript framework must also be initialized in your app’s JavaScript as explained below.
App-specific JavaScript
The app also imports its own app-specific JavaScript file:
<script src="js/app.js"></script>
Tip: This file is where you initialize the Ubuntu framework, as described below.
App-specific CSS
By default, there is no app-specific CSS file created for you. You can easily add one to your source tree and use and
import it like this:
<link href="app.css" rel="stylesheet" type="text/css"/>
Other app-specific JavaScript
The app may import and use other JavaScript libraries that are included in the app source tree. Here’s an example with
JQuery:
<!-- jquery lib -->
<script src="js/jquery.min.js"></script>
There are no surprises here: the app’s index.html imports everything it needs, including Ubuntu JavaScript and CSS
and app specific JavaScript and CSS. Let’s take a closer look at the rest of the index.html file and find out how layouts
and widgets are declared.
Body and mainview
Each HTML5 app has an Ubuntu mainview inside the <body>...</body>. This main view typically contains a
header and content:
<body>
  <div data-role="mainview">
    <header data-role="header">
      <!-- [...] -->
    </header>
    <div data-role="content">
      <!-- [...] -->
    </div>
  </div>
</body>
The header and content are used for both tab-style and pagestack-style app navigation. Let’s take a look.
Tabs for flat navigation
Here we look at how the header and content are used in tab-style (“flat”) app navigation.
• The header contains the tabitems
• The content section contains the tabs that correspond to each tabitem
Header with tabitems
The header element contains an unordered list. Each list item holds the tabitem’s text (this is displayed in the app).
When a user clicks a tabitem, the content identified by the tabitem’s data-page attribute displays. Here’s a sample
with two tabitems:
<header data-role="header">
  <ul data-role="tabs">
    <li data-role="tabitem" data-page="main">Main</li>
    <li data-role="tabitem" data-page="anotherpage">Another</li>
  </ul>
</header>
6.3. The Ubuntu App platform - develop with seamless device integration
Content contains tabs
For each tabitem, your app needs to declare corresponding tab content with the correct id. Here is a content section
with two tabs:
<div data-role="content">
  <div data-role="tab" id="main">
    <!-- [...] -->
  </div>
  <div data-role="tab" id="anotherpage">
    <!-- [...] -->
  </div>
</div>
Tip: You can make a single page app with a single tabitem and one corresponding tab.
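Putting the header and content snippets together, a complete two-tab body looks like this (the placeholder content is illustrative):

```html
<body>
  <div data-role="mainview">
    <header data-role="header">
      <ul data-role="tabs">
        <li data-role="tabitem" data-page="main">Main</li>
        <li data-role="tabitem" data-page="anotherpage">Another</li>
      </ul>
    </header>
    <div data-role="content">
      <div data-role="tab" id="main">
        <p>Main tab content</p>
      </div>
      <div data-role="tab" id="anotherpage">
        <p>Content of the other tab</p>
      </div>
    </div>
  </div>
</body>
```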
Pagestack for deep navigation
Some apps are a natural fit for deep navigation, so a pagestack of pages makes sense. For example, consider an RSS
reader app.
• The top page could list the feeds.
• When the user selects a feed, a child page displays with list of articles.
• When the user selects an article, a child page displays with the article text.
This is a hierarchical, or “deep” style: Feeds > Articles > Article
In Ubuntu HTML5, a pagestack is used by the framework to keep track of pages in “deep” navigation: which Pages
exist, and which one is on the top of the stack, that is, the one that is currently displayed. Pages are declared inside
pagestack markup.
A footer with a “Back” button is provided by the framework when needed. This allows the user to move from a page
up the pagestack to its parent page.
Naturally, the Ubuntu JavaScript API provides methods for manipulating the pagestack and pages. A simple pagestack
looks like this:
<body>
  <div data-role="mainview">
    <header data-role="header">
      <!-- [...] -->
    </header>
    <div data-role="content">
      <div data-role="pagestack">
        <div data-role="page" id="main">
          <!-- [...] -->
        </div> <!-- page: main -->
        <div data-role="page" id="anotherPage">
          <!-- [...] -->
        </div> <!-- page: anotherPage -->
      </div> <!-- pagestack -->
    </div> <!-- content -->
  </div> <!-- mainview -->
</body>
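The stack behaviour behind this markup can be pictured with a toy model in plain JavaScript. This is a conceptual sketch only, not the UbuntuUI implementation; see the JavaScript API reference for the real Pagestack methods.

```javascript
// A toy page stack: push displays a page, pop goes "Back" to the parent page.
var pagestack = {
    pages: [],
    push: function (id) { this.pages.push(id); return id; },
    pop: function () { this.pages.pop(); return this.current(); },
    current: function () { return this.pages[this.pages.length - 1]; }
};

pagestack.push('main');
pagestack.push('anotherPage'); // 'anotherPage' is now on top and displayed
pagestack.pop();               // "Back": 'main' is displayed again
```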
The footer
As noted, pages have a footer that runs across the bottom with a “Back” button that displays the parent page. No
markup is required for this default footer to display.
You can modify a pagestack’s default footer by adding footer markup inside the pagestack and outside of any of its
child pages. For example, you might want to add a button.
You can also add a page-specific footer to a page. If a page has a footer declared, it overrides both the default footer
and any pagestack footer you have declared.
Here is an example of a customized pagestack footer:
<div data-role="pagestack">
  <div data-role="page" id="page1">
    <!-- [...] -->
  </div>
  <div data-role="page" id="page2">
    <!-- [...] -->
  </div>
  <!-- this footer overrides
       the default pagestack footer -->
  <footer data-role="footer" id="footerID">
    <div data-role="list">
      <ul>
        <li>
          <a href="#" id="home">
            <img src="./back.png"/>
            <span>Tap me!</span>
          </a>
        </li>
      </ul>
    </div>
  </footer>
</div> <!-- end of pagestack -->
Here’s how to add a footer to a specific page that overrides the default footer:
<div data-role="page" id="anotherPage">
  <!-- [...] -->
  <footer data-role="footer">
    <!-- [...] -->
  </footer>
</div>
Note: A footer is represented by the Toolbar class in the Ubuntu JavaScript API.
Dialogs and buttons
An Ubuntu dialog displays maximized above the current page. It is “modal” in the traditional sense that it must be
dismissed before the app GUI continues. As such it is useful to obtain needed input from the user. You declare dialogs
inside the content as siblings to tabs or pagestacks.
Ubuntu buttons have a useful click() method to provide click event handling.
Here’s an example of declaring a dialog:
<body>
  <div data-role="mainview">
    <!-- [...] -->
    <div data-role="content">
      <div data-role="tab" id="main">
        <!-- [...] -->
      </div>
      <!-- [...] -->
      <div data-role="dialog" id="mydialog">
        <!-- [...] -->
        <button data-role="button" id="close">Close</button>
      </div>
    </div>
  </div>
</body>
Dialogs can contain arbitrary markup. They almost always contain a button to dismiss themselves. Such a button is
usually connected to a JavaScript click event handler that would call the Ubuntu JS method to hide the dialog.
Dialogs and buttons example
Here’s an example with:
• A button to show a dialog
• A dialog with a button that hides the dialog
<div data-role="content">
  <div data-role="tab" id="hello-page">
    <button data-role="button" id="show">show</button>
  </div>
  <div data-role="dialog" id="dialog">
    <button data-role="button" id="hide">Hide</button>
  </div>
</div>
The following JavaScript handles the button click events and shows/hides the dialog:
window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    var dialog = UI.dialog('dialog');
    var show = UI.button('show').click(function () {
        dialog.show();
    });
    var hide = UI.button('hide').click(function () {
        dialog.hide();
    });
};
Lists
The Ubuntu HTML5 framework provides flexible lists. A list can optionally have header text. Each list item supports
various options, including primary and secondary text labels, an icon, and more. Here’s a sample list declaration:
<div data-role="list" id="testlist">
  <header>My header text</header>
  <ul>
    <li>
      <a href="#">Main text, to the left</a>
    </li>
    <li>
      <a href="#">Main text</a>
      <label>Right text</label>
    </li>
    <li>
      <aside>
        <img src="someicon.png">
      </aside>
      <a href="#">Main text</a>
      <label>Right</label>
    </li>
  </ul>
</div>
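Lists are often populated from JavaScript by setting the list element’s inner HTML, as the Meanings tutorial later in this chapter does. A minimal sketch of that pattern (buildListMarkup is an illustrative helper, not part of the UbuntuUI API):

```javascript
// Build the inner markup for an Ubuntu list from a header and an array of labels.
// This mirrors the markup structure shown above: a <header> followed by a <ul>.
function buildListMarkup(header, labels) {
    var res = '<header>' + header + '</header><ul>';
    for (var i = 0; i < labels.length; i++) {
        res += '<li><a href="#">' + labels[i] + '</a></li>';
    }
    return res + '</ul>';
}

// Typical use in a page (id "testlist" as in the sample above):
//   document.getElementById('testlist').innerHTML =
//       buildListMarkup('My header text', ['One', 'Two']);
```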
More widgets
That’s a quick overview of some of the key Ubuntu widgets, but there are more, for example shapes and popups.
For a presentation of Ubuntu HTML5 widgets, check out the HTML5 Gallery App (installed by the ubuntu-html5-ui-toolkit-examples package). You can launch the gallery by searching the Ubuntu Applications scope for “Ubuntu HTML5 UI Gallery”.
Be sure to check out the JavaScript API reference docs for everything.
Initializing the Ubuntu JavaScript framework
As noted above, your index.html file imports Ubuntu JavaScript framework files. These bring the app to life as a true
Ubuntu app.
Your app must initialize the framework from JavaScript.
Note: When you create an HTML5 app in the Ubuntu SDK, your app already has the code needed for this. Here we
simply take a look at this code to understand why it exists.
The app’s JavaScript file
Your brand new app has a js/app.js file by default. It does a few key things after the DOM is loaded:
• Creates an UbuntuUI object: var UI = new UbuntuUI();
• Runs its init() method: UI.init();
• (Optional) Creates an event handler for the Cordova ready event (below).
This code runs when the window.onload event is received, which means when the DOM is fully loaded. Here’s an
example:
window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    document.addEventListener("deviceready", function() {
        if (console && console.log)
            console.log('Platform layer API ready');
    }, false);
};
As previous examples show, this onload event handler is where you initialize your own GUI, adding objects and event
handlers so the GUI is ready to respond to user interactions right from the start.
HTML5 Tutorials - Meanings app
This is a great starting place to learn the basics of writing an HTML5 app.
Here, you:
• Start with a new, default HTML5 app project in the Ubuntu SDK
• Implement a simple Ubuntu HTML5 GUI
• Add some Javascript
• Run and test the app
• Take a quick run through packaging the app as a click package
These are the steps you follow for most apps.
You will put together a simple app called “Meanings”. The app displays a simple Ubuntu HTML5 GUI with a header,
a text input box, and a button. When the user enters a word in the box and clicks the button, a web API is called that
returns the meanings of the word. They are displayed in an Ubuntu List.
This simple app does not use any Ubuntu Platform APIs. Nor does it use a Cordova API. It is a straightforward Ubuntu
App that happens to be written in HTML5. Be sure to check out other tutorials that dive into these important areas too.
Before getting started
There are a couple requirements:
• You need to install the Ubuntu SDK
• You need to know how to create an HTML5 app project in the SDK
• You should have some experience running apps from the SDK
Getting the app source
The completed app source tree is available as a Bazaar branch. You can get it as follows:
1. Open a terminal with Ctrl + Alt + T.
2. Ensure the bzr package is installed with: $ sudo apt install bzr
3. Get the branch with: $ bzr branch lp:ubuntu-sdk-tutorials
4. Move into the html5/html5-tutorial-meanings directory:
$ cd ubuntu-sdk-tutorials/html5/html5-tutorial-meanings
Now, let’s get developing!
Create your HTML5 app project in the SDK
Go ahead and create an HTML5 app project in the SDK.
Give the project any name you want. Here, we call it “meanings”. Later we give the app the proper title displayed to
users at runtime: “Meanings”.
Practise running the app
After creating an HTML5 app project in the SDK, you can run it directly from the SDK on the Ubuntu Desktop (and
on attached devices, including physical devices and Ubuntu emulators you have created with the SDK).
Get it running on the Desktop with: Build > Run.
Tip: The SDK has an icon for this (on the left side vertical panel) and a keyboard shortcut: Ctrl + R.
Here’s how a brand new app looks when run from the SDK (the actual GUI may vary as refinements are released):
The brand new HTML5 app project has the basic set of files you need. But, naturally, the GUI and control logic are
simply the defaults for any new app. We’ll implement a GUI and control logic that suit the needs of our Meanings app
below.
Note: If you have a physical device, you can try running it there by following the tips in the Ubuntu SDK section. You
can also try creating an emulator and running it there, again following those tips.
Run the app from the terminal
This is a great time to try running the unmodified app directly from the terminal. This can be convenient.
1. Open a terminal. There are many ways. A quick way is Ctrl + Alt + T.
2. Move to your app project directory.
3. Launch the app as follows: $ ubuntu-html5-app-launcher --www=www
Let’s take a closer look at that command:
• ubuntu-html5-app-launcher: This is the executable that launches the web container in which the
HTML5 app runs. The container exposes built-in Ubuntu App Platform APIs that your app’s JavaScript can call
directly.
• --www=www: This argument simply tells ubuntu-html5-app-launcher where to find the directory that contains
the app’s HTML5 files. Currently, the HTML5 files are required to be in the www/ directory of the app project.
Debugging the app’s JavaScript
Before taking a closer look at Ubuntu HTML5, let’s take a moment to learn how to debug the app’s JavaScript.
Many web developers are familiar with debugging a web page right in the browser that displays it, using the
browser’s own development tools. That’s the approach used with Ubuntu HTML5 apps.
The Ubuntu HTML5 app runtime container is based on WebKit, as is Chrome/Chromium. The approach used here is
to send the debug data behind the scenes to a URL. You then open that URL in a WebKit browser and can use its
debug capabilities, for example gaining direct access to the JavaScript console.
Add the --inspector argument to launch in debug mode
When you launch the app from the terminal with ubuntu-html5-app-launcher, you simply add the --inspector argument.
Then watch the output in the launch terminal for “Inspector server...” and open the stated URL with the Chrome,
Chromium (or other WebKit) browser.
For example, you would use a command like this:
$ ubuntu-html5-app-launcher --www=www --inspector
Now, watch the output for something like this:
Inspector server started successfully. Try pointing a WebKit browser to http://192.168.1.105:9221
Then, you would open the URL in a WebKit browser (like Chromium) and use its native development tools. In the
case of Chromium, the displayed web page has a link you click, which takes you to the debug tools for the running app
instance.
Tip: An app with a JavaScript error may fail to load the HTML GUI, so getting used to launching in inspector (debug)
mode and opening the URL in a WebKit browser is an essential skill.
Let’s move on and take a look at the key files in your new app project.
HTML5 app project files
index.html
Naturally, your new HTML5 app project has an index.html, the root file for the app.
Tip: Currently, all HTML5 files, including index.html, are expected to be in the www/ directory. The index.html
file imports all it needs, including Ubuntu CSS and Ubuntu JavaScript, which provides a convenient set of methods
to control your Ubuntu HTML5 widgets. By default, it also imports ./js/app.js, the app-specific JavaScript file.
And it may also import a Cordova JavaScript file (not needed for this app, so you can delete it if you want).
Let’s zero in on ./js/app.js.
App specific JavaScript: app.js
This is your app’s essential JavaScript file. You add your control code here.
But first, let’s take a quick look at some critical code it contains by default:
window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    [...]
}
This is the required code that creates an UbuntuUI object (locally named UI). This object is your entry point into the
UbuntuUI API. This API is used to control the Ubuntu HTML5 GUI.
Tip: Later, take a look at the HTML5 UbuntuUI API reference docs.
This is an event handler for the window.onload event. It provides an anonymous function that executes when the
event is received. This event is received after the DOM fully loads, which is the proper time to initialize the UbuntuUI.
Note: Another approach is to use the JQuery(document).ready() event handler method, as we do later in this
app.
After the UI object is created, the code runs the essential UI.init() method. This method is needed to initialize
the UI framework.
Other project files
Here’s a quick summary of other key files:
• APP.desktop: As noted, this is the file used by the system to launch the app. Check it out and note the critical
Exec line that shows the command line the system uses to start the app. Note also useful bits like the Icon line
that you use to name the icon file the system uses to represent the app in Unity. This is usually an icon in the
app’s source tree.
There are two files that are hidden in the SDK GUI:
• APP.ubuntuhtmlproject: This is the Ubuntu SDK (really, the QtCreator) project file. Select this when browsing
the file system from the SDK to open a project.
• APP.ubuntuhtmlproject.user: This contains per-project SDK settings. It is normally not edited directly;
use the SDK GUI to set preferences instead. Note that this file is normally not added to version control.
Other key files are added when you package the app, as we see below.
Let’s get on with the HTML5 development!
Ubuntu HTML5 markup intro
Ubuntu HTML5 apps use specific markup to implement the GUI.
Let’s take a super fast look at Ubuntu HTML5 highlights.
Tip: Check out the HTML5 Guide for a more detailed look.
App layout
You can have a “flat” organization with tab-style navigation or a “deep” organization with pagestack-style navigation. Our
app will use simple tab-style navigation with a single tabitem and a single corresponding tab (for content).
Ubuntu widgets
Ubuntu HTML5 provides a set of widgets you can declare in markup for things like buttons, lists, toolbars (also called
footers), dialogs, and more.
Our app will use:
• A header with a single tabitem with text: “Meanings”
• A corresponding tab that contains the main content, including:
• An input box where the user enters a word
• A button that looks up the word via the web API
• A list that displays the returned meanings of the word
Replacing the default HTML5
We don’t need most of the default HTML in index.html. So let’s replace the whole <body>[...]</body> with
HTML5 that declares our app’s GUI.
Copy the following into index.html, replacing the <body>[...]</body>:
<body>
  <div data-role="mainview">
    <header data-role="header">
      <ul data-role="tabs">
        <li data-role="tabitem" data-page="main-page">Meanings</li>
      </ul>
    </header>
    <div data-role="content">
      <div data-role="tab" id="main-page">
        <div><input type="text" id="word" placeholder="Enter a word"></div>
        <button data-role="button" id="lookup">Get</button>
        <div data-role="list" id="res"></div>
      </div> <!-- tab: main-page -->
    </div> <!-- content -->
  </div> <!-- mainview -->
</body>
Tip: It may be easier to copy and paste from the app source branch described above.
Let’s check out how the app looks if you run it now with Ctrl + R. Note that the GUI does not function yet because we
have not yet added the JavaScript control logic.
App HTML5 highlights
Let’s examine some highlights of this HTML.
Mainview
All the HTML5 inside the body is wrapped in a <div data-role="mainview">. This is standard for Ubuntu
HTML5 apps.
Header
• There is a header
• The header contains an unordered list (ul)
• The unordered list has a single list item (li) whose data-role is “tabitem”:
<li data-role="tabitem" data-page="main-page">Meanings</li>
This implements the header part of our tab-style layout:
• We have a single tab.
• The text that displays is “Meanings”
• Note the tabitem’s data-page attribute. This value (main-page) is what connects the tabitem to the tab declared
lower down whose id is the same: <div data-role="tab" id="main-page">.
When the user clicks the tabitem in the header, the corresponding tab displays. We have only a single tabitem/tab.
Content
Below the header, we have a content div, declared like this:
<div data-role="content">
[...]
</div> <!-- content -->
This div contains the tabs that correspond with each tabitem declared in the header (in our case, only one tab). Let’s
take a look at our tab.
Tab
Here is our one tab:
<div data-role="tab" id="main-page">
[...]
</div> <!-- tab: main-page -->
The data-role="tab" is what declares it as an Ubuntu tab.
As noted above, the id="main-page" is what causes this tab to be displayed when the user clicks the header’s
corresponding tabitem.
Let’s peer inside the tab.
Input box
There’s a single input box that the Ubuntu framework styles automatically:
<div><input type="text" id="word" placeholder="Enter a word"></div>
We put this in a div so it is rendered as a block, not inline, per normal HTML5.
Note the id="word". We will use this ID from JavaScript below to get the word the user has entered.
“Get” Button
There is one button that triggers the JavaScript code that calls the web API to look up meanings for the word the user
has entered:
<button data-role="button" id="lookup">Get</button>
This button is declared as an Ubuntu button, with a data-role of button. This means it is pulled into the framework, so
you get a convenient API for it. For example, you can easily add a click event handler using the id.
Tip: Ubuntu CSS provides styles for several button classes. Check out the actual Ubuntu CSS files to see what
is available. For example, check out: /usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/buttons.css
Empty List, populated later
We declare a list that starts off empty:
<div data-role="list" id="res"></div>
That’s an Ubuntu list. We will use the UbuntuUI framework to obtain the list in JavaScript and populate it with the
meanings for the word that are returned from the web API lookup.
That’s about it for the HTML5. Pretty straightforward. Now, let’s add the JavaScript we need to complete this app’s
basic pieces.
Implementing our Javascript
Adding the JQuery lib
This app uses JQuery to call the web API. We need to add the JQuery lib to our package, which takes a few steps:
• Ensure the libjs-jquery package is installed with: $ sudo apt-get install libjs-jquery
• Copy the lib into your app directory with: $ cp /usr/share/javascript/jquery/jquery.min.js .
• Tip: You might need to close and open the project for the jquery.min.js file to display in the SDK project.
• Include the jquery.min.js file in your index.html file by adding this line inside the main HTML <head> ... </head>:
<script src="js/jquery.min.js"></script>
Using the JQuery ready event handler
In js/app.js, find the default window.onload event handler:
window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    [...]
}
Change the first and last lines to use the JQuery ready method, like this:
$( document ).ready(function() {
    var UI = new UbuntuUI();
    UI.init();
    [...]
});
Note that the last line has changed!
Add the button event handler
As noted above, the button event handler code gets the word the user entered and calls the web API to get meanings
for it.
Start by deleting all the code inside the ready function except for the creation of the UI object and the running of its init()
method, so it looks like this:
$( document ).ready(function() {
    var UI = new UbuntuUI();
    UI.init();
    DELETE ALL THIS CODE
});
Now, after the UI.init(); line, add the following:
UI.button('lookup').click(function () {
    var lookup = document.getElementById('word').value;
    console.log('Looking up: ' + lookup);
    var url = "http://glosbe.com/gapi/translate?from=eng&dest=eng&format=json&phrase=" + lookup + "&pretty=true";
    $.ajax({
        type: 'GET',
        url: url,
        success: success,
        dataType: 'jsonp',
        contentType: "application/json"
    });
});
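Because the word comes straight from user input, it is safer to URL-encode it before appending it to the query string. A small refinement of the snippet above (the example input is invented):

```javascript
// 'free software' stands in for whatever the user typed into the input box.
var lookup = 'free software';
// encodeURIComponent escapes spaces and other reserved characters,
// so the query string stays well-formed (a space becomes %20).
var url = 'http://glosbe.com/gapi/translate?from=eng&dest=eng&format=json&phrase='
        + encodeURIComponent(lookup) + '&pretty=true';
```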
Examining the button’s event handling code
First, we see the button being found with its id (“lookup”) and its click event handling code being set by this Ubuntu
framework code:
UI.button('lookup').click(FUNCTION);
Tip: That’s a common and useful coding pattern in the Ubuntu framework. That is, most UbuntuUI objects that
correspond to a specific HTML element with an ID can be obtained in a similar way, for example: UI.dialog(ID).
The value of the user-entered word is obtained.
The URL required by the web API is constructed, inserting the word appropriately.
Then the JQuery ajax() method is called. The key points to note here are that the URL is provided and the function to
be called on success is named (it happens to be named “success”).
This success function is defined next, outside of the JQuery ready code.
The success function
Add the following success function at the end of the js/app.js file:
function success( data ){
    console.log('AJAX success.');
    var resEl = document.getElementById('res');
    var res = '<header>Meanings</header><ul>';
    for ( var idx1 = 0; idx1 < data.tuc.length; idx1++ ) {
        if ( data.tuc[idx1].meanings ) {
            console.log('meanings');
            for ( var idx2 = 0; idx2 < data.tuc[idx1].meanings.length; idx2++ ) {
                if ( data.tuc[idx1].meanings[idx2].text ) {
                    console.log( data.tuc[idx1].meanings[idx2].text );
                    res += '<li>' + data.tuc[idx1].meanings[idx2].text + '</li>';
                }
            }
        }
    }
    resEl.innerHTML = res + '</ul>';
}
This function is called when the web API returns a result. The result is passed into the function with the local name
data.
An object named resEl is created for the empty list declared in our HTML and it is given a header: “Meanings”.
Two nested loops iterate through the data to find and extract the contained meanings of the word. These are appended
to a string variable (res) as HTML listitems.
Finally the list (resEl) is populated with the built up meanings list by setting its inner HTML to res.
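To make the nested loops concrete, here is the same extraction logic run over a minimal hand-made stand-in for the web API response. The field names tuc, meanings, and text match the code above; the example values are invented, and the real response carries many more fields:

```javascript
// A minimal stand-in for the Glosbe JSON response: a "tuc" array whose
// entries may carry a "meanings" array of { text: ... } objects.
var data = {
    tuc: [
        { meanings: [ { text: 'to cut with rough blows' },
                      { text: 'to program quickly' } ] },
        { phrase: { text: 'hack' } }   // entries without meanings are skipped
    ]
};

// The same double loop as success(), collecting the texts into an array.
var texts = [];
for (var idx1 = 0; idx1 < data.tuc.length; idx1++) {
    if (data.tuc[idx1].meanings) {
        for (var idx2 = 0; idx2 < data.tuc[idx1].meanings.length; idx2++) {
            if (data.tuc[idx1].meanings[idx2].text) {
                texts.push(data.tuc[idx1].meanings[idx2].text);
            }
        }
    }
}
// texts now holds the two meaning strings from the first tuc entry.
```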
That’s it for app development!
With development done, time to run the app
You can now run the app using the methods referred to previously. For example, you can use the Ctrl + R shortcut
to run it in a window on the Desktop.
Here we see it running after the user has typed in the word ‘hack’ and clicked the Get button:
If you have problems, you may have accidentally introduced errors, so try debugging the app’s JavaScript as
described above.
Let’s package it!
Give the app a Name and an Icon
Open the app’s desktop file. This file contains key-value pairs that represent the app in the Unity shell and enables
displaying and launching the app from the Applications scope and the Unity launcher.
Tip: If you named your app project “meanings”, then the desktop file is named “meanings.desktop.”
Give the app a reasonable name, for example: Meanings
Name=Meanings
You may also want to add an icon to the desktop file. This icon is displayed in the Unity shell to represent the app.
Simply add the icon to the app’s source tree (and optionally add it to Bazaar revision control). Then in the desktop
file, set the filename:
Icon=meanings.png
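Putting these keys together, a minimal desktop file might look like the sketch below. The Exec line matches the launcher command used earlier; the other values are illustrative, and your generated file is the authoritative starting point.

```ini
[Desktop Entry]
Name=Meanings
Comment=Look up the meanings of words
Exec=ubuntu-html5-app-launcher --www=www
Icon=meanings.png
Terminal=false
Type=Application
X-Ubuntu-Touch=true
```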
Packaging the app as a click package
The Ubuntu SDK makes this really easy.
In the simplest case, all you need to do is navigate to the Publish tab on the left side of the SDK GUI.
Here you see a General tab that displays key info about the package, including:
• Name
• Maintainer: Verify this is you
• Title: set this to “Meanings”
• Version: this is the click package version. Be sure to increment it when appropriate, for example when
publishing a new version.
• Security policy groups: This is the list of apparmor policies your app needs. (Apparmor is the security/confinement tool used in Ubuntu.)
Tip: Don’t add any security policy groups you don’t really need. Apps are confined by these policies and we all
want Ubuntu app confinement to be the best available, which means developers use thoughtful discretion and only add
policies as absolutely necessary.
There are other tabs here that we can ignore for now.
Go ahead and click Create Package. This creates the manifest.json file for the first time. (The manifest.json
file may not display in the SDK until you close and reopen the app project.)
Tip: If you are managing your app project with revision control, for example Bazaar, you should add this file with
Tools > Bazaar > Add followed by Tools > Bazaar > Commit.
Manifest.json is the key file when it comes to packaging your app as a click package. It includes the information just
gathered. It also states the framework the app requires (this is a way to group and version APIs and runtime requirements).
It includes a section called “hooks” that lists the apparmor file stating the app’s confinement data and the
app’s desktop file (used by the system to launch the app).
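For orientation, a manifest.json along these lines is generated. This is a sketch: the names, framework string, and values shown here are illustrative, and the file the SDK creates is authoritative.

```json
{
    "name": "com.ubuntu.developer.yourname.meanings",
    "version": "0.1",
    "title": "Meanings",
    "description": "Looks up the meanings of words",
    "maintainer": "Your Name <you@example.com>",
    "framework": "ubuntu-sdk-13.10",
    "hooks": {
        "meanings": {
            "apparmor": "meanings.apparmor",
            "desktop": "meanings.desktop"
        }
    }
}
```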
Clicking Create Package also creates the actual installable click package file in the app’s parent directory. This file is
named based on the fields in the manifest and, by default, is something like this:
com.ubuntu.developer.MAINTAINER.PACKAGE_VERSION_all.click
Note that package review tools automatically run. You can see their results in the Publish tab in the Validate Click
Package section to the right. Check out the report there for errors and fix any you see.
Installing and running the app as a click package
Note: Click packages are currently only officially supported for installation on an Ubuntu for phones/tablets device and
in the Ubuntu Emulator.
Now that you have the app packaged, you can use the SDK to install it and run it on an attached device or emulator.
Once you have an attached device or emulator, try it out with: Build > Ubuntu > Install Application on Device
Now, use the device’s GUI to find the app and launch it.
Next Steps
Be sure to check out other HTML5 tutorials and guides in the HTML5 section.
Guides - creating Ubuntu applications with Cordova
This is a high-level guide to Cordova on Ubuntu. It contains information both for creating a new Cordova
application for Ubuntu and for adding Ubuntu as a distribution platform to an existing Cordova application.
What is Cordova?
Cordova is an Apache project that provides an HTML5 app build framework supporting multiple distribution platforms, including Android, iOS, and now Ubuntu. Cordova also provides a plugin framework that exposes system- and
device-level functionality to HTML5 apps through plugins and JavaScript APIs. These APIs are
available for popular mobile operating systems, like Android, iOS, and Ubuntu as well.
Let’s first take a quick look at using upstream Cordova with its built-in Ubuntu support and explain how existing
Cordova app developers can add Ubuntu as a build platform. Then we can take a look at using Cordova APIs to build
Ubuntu HTML5 apps.
If you already develop Cordova apps
Starting with upstream Cordova 3.3, existing Cordova app developers can use the usual Cordova CLI commands they
already know. This includes adding Ubuntu as a platform and building their project to create a click package that can
be used in Ubuntu.
Building a Cordova app for Ubuntu requires an Ubuntu system, much like building for iOS requires an Apple system.
We recommend the latest LTS version, which is Ubuntu 16.04 LTS at the time of this writing. You need to install the
Cordova toolset on this build system, as described in the section below: Configuring your environment.
Once you are ready to build apps on Ubuntu, recompiling your Cordova app is as simple as this:
• Add Ubuntu as a platform. Run this command from your application directory:
$ cordova platform add ubuntu
• Add a plugin, in this case Camera:
$ cordova plugin add cordova-plugin-camera
• Build the app for devices:
$ cordova build --device
That creates a click package suitable for use in Ubuntu.
Note: the cordova-ubuntu support code will generate a manifest.json file (and other standard click package bits) based
on configuration elements found in the project config.xml.
Run the app on your attached device, in debug mode:
$ cordova run --device --debug
Note: you need cordova-cli 4.3.x to have all of these options available.
Important: we do not recommend using cordova-cli > 4.x yet, because of a tool API change that is not fully tested
on Ubuntu. Check out the upstream Cordova docs for detailed information about Ubuntu platform support in the native
Cordova CLI workflow.
Naturally, you need to write some JavaScript to use the APIs for the plugins you have added too.
Creating a Cordova app for Ubuntu
Ubuntu now supports the core Cordova APIs to let developers create native Ubuntu applications that will use the
Cordova runtime and can be recompiled for other platforms as well.
So this section will be of interest for Ubuntu developers who want to distribute their applications on multiple app
stores, including the Ubuntu App Store, but also the stores for iOS or Android apps.
Let’s take a moment to see what this means at a high level, then we can dive into a few of the necessary details.
A few key points
1. The Cordova runtime comprises a simple webview and a set of plugins. The runtime loads the actual
application code and UI, made of JavaScript, HTML and CSS files.
2. Cordova Plugins: System and device access is provided by plugins. For example, there is a Camera plugin, an
Accelerometer plugin, and more.
3. Each Cordova app provides the Cordova runtime: each app bundles the Cordova runtime that it uses.
This gives developers full control over their app, by embedding all runtime dependencies inside their click
packages.
4. Cross-compilation required. Plugins are written in C++ and need to be compiled for the target runtime architecture (armhf, x86, x86_64). So to create a fully functional application, the runtime must be compiled. This
generally means cross-compiling armhf binaries from an x86 desktop system. The process is similar to the
requirements for native C++ applications and uses the same click chroot toolset.
Configuring your environment
As noted above, you need an Ubuntu desktop system to build your application for Ubuntu. We recommend Ubuntu
16.04 LTS as the base system to build your application from. Your application can then execute on either your desktop,
or an attached Ubuntu phone or tablet device.
Install cordova from the Ubuntu Cordova PPA
$ sudo apt-add-repository ppa:cordova-ubuntu/ppa; sudo apt-get update
$ sudo apt-get install cordova-cli
6.3. The Ubuntu App platform - develop with seamless device integration
Note: for expert Cordova developers: you can also install cordova manually via npm, but please stay with cordova 4.3.x until the platform API switch is fully tested on Ubuntu.
The build environment needs to be separated from the developer’s environment, to prevent unwanted side effects and
provide a clean, repeatable process.
Create a click chroot for the armhf architecture:
$ sudo apt-add-repository ppa:ubuntu-sdk-team/ppa
$ sudo apt-get update
# this will create a clean click chroot build environment
$ sudo apt-get install click-dev phablet-tools ubuntu-sdk-api-15.04
Add build dependencies for Cordova apps inside the chroot:
# add build dependencies inside the click chroot
$ sudo click chroot -a armhf -f ubuntu-sdk-15.04 install cmake libicu-dev:armhf pkg-config qtbase5-dev:armhf qtchooser qtdeclarative5-dev:armhf qtfeedback5-dev:armhf qtlocation5-dev:armhf qtmultimedia5-dev:armhf qtpim5-dev:armhf libqt5sensors5-dev:armhf qtsystems5-dev:armhf
Note: the ubuntu-sdk-15.04 framework is the recommended base framework to use for cordova apps. If you wish to
move to future revisions of the base framework, you will need to provide an extra option in the build step below.
Verify your environment by running the sample app.
$ cordova create myapp myapp.myid "My App"
$ cd myapp
$ cordova platform add ubuntu
$ vi config.xml
Note: be sure to have a default application icon in www/img/logo.png
Also, update the author email field with a valid one:
<author email="you@example.com" />
Then, you should build the application for the target device:
$ cordova build --device
Note: On first run, you may have to install some build dependencies in the click chroot. Check the section above for
details.
And then just start the application on the phone:
$ cordova run --device --debug
At this point, you should see the familiar Cordova logo in the application running on your phone.
Your Ubuntu system is ready for Cordova development.
Now, let’s take a high-level look at using the Cordova APIs.
So many APIs! Which to use?
There is overlap in APIs from various sources for use in HTML5 apps. Consider geolocation. Many web engines now
support a geolocation API. W3C has a proposed geolocation API as well. Cordova also provides a geolocation API.
Here we provide some guidelines for developers to align with Ubuntu directions:
First Choice: Ubuntu App Platform APIs
When an Ubuntu App Platform API is available and not deprecated, it is the best choice. This provides the best
integration with the platform. However, it will affect your ability to port to other platforms, if that is your goal. For
example, developers should use Content Hub, Online Accounts and Alarms APIs even if other APIs may exist that
provide similar functionality.
Second Choice: W3C
Working W3C standard APIs should be used when there is no Ubuntu App Platform API for the functionality. W3C
APIs are widely and well supported in browsers and web containers and are likely to provide the most stable and
standard interfaces, so they are the best choice when platform APIs do not exist.
Rocking with Cordova APIs
Cordova APIs provide key functionality not yet present in W3C standards or the Ubuntu Platform. Examples include
Splash Screen and Accelerometer. As such Cordova APIs are a great choice for these system and device level features
that can really make your HTML5 app rock!
Ubuntu HTML5, Cordova and Web APIs are in constant development, so the recommendations for the particular APIs
mentioned above may be updated. Please stay tuned.
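As a tiny illustration of this layered fallback, a feature-detection helper for geolocation might look like the following. This is only a sketch: the function name and its return values are invented for illustration and are not part of any Ubuntu, W3C, or Cordova API.

```javascript
// Pick a geolocation source following the guidance above: prefer a
// platform/W3C API when the webview exposes one, otherwise fall back
// to the Cordova plugin, otherwise report that none is available.
function pickGeolocationSource() {
    if (typeof navigator !== 'undefined' && navigator.geolocation) {
        return 'w3c';      // W3C Geolocation API provided by the webview
    }
    if (typeof cordova !== 'undefined') {
        return 'cordova';  // Cordova geolocation plugin is loaded
    }
    return 'none';         // no geolocation source detected
}

console.log(pickGeolocationSource());
```

In an Ubuntu HTML5 app's webview this would normally report the W3C API; outside any webview or Cordova runtime it reports 'none'.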
Programming with Cordova
Here we look at how your app knows that Cordova is loaded and ready. This is where you can place code that should
only run once Cordova has fully detected your device, for example event handlers that use Cordova navigator objects.
Handling Cordova’s deviceready event
Web developers are familiar with the window.onload event that signals when the DOM is fully loaded. This event
is useful for running event handler code right after the DOM is loaded.
In Ubuntu HTML5 apps, we use that event to run the code that initializes the Ubuntu UI framework. After that
initialization code, your Cordova app can set up an event handler for Cordova’s deviceready event. This event
signals that the Cordova runtime is fully ready for operations. For example, this is where you should place your event
handlers that invoke Cordova objects.
Let’s take a look at sample code that has these parts:
window.onload = function () {
    /* Optional: Initialize the Ubuntu UI framework */
    var UI = new UbuntuUI();
    UI.init();
    /* Handle the Cordova deviceready event */
    document.addEventListener("deviceready", function() {
        if (console && console.log)
            console.log('Platform layer API ready');
        /* Add event listeners that invoke Cordova here */
        // take picture with Cordova navigator.camera object
        UI.button("click").click( function() {
            navigator.camera.getPicture(onSuccess, onFail, {
                destinationType: Camera.DestinationType.DATA_URL
            });
            console.log("Take Picture button clicked");
        }); // "click" button event handler
    }, false);
};

function onSuccess(data) { /* do something with the image data */ }
function onFail(data) { /* handle the failure */ }
Here, inside the deviceready event handler, we add an event handler for an Ubuntu button that calls navigator.
camera.getPicture(...). That's a standard and straightforward pattern for a lot of what you can do with
Cordova APIs.
Next steps
Check out the Cordova Camera Tutorial, which provides all the steps you need to make a working HTML5 camera
app that lets you snap a picture and then displays it in the app.
You may also want to check out the HTML5 Guide for an overview of Ubuntu HTML5.
HTML5 Tutorials - Cordova camera app
This tutorial takes you through the steps needed to create an HTML5 app that uses the Cordova runtime and its Camera
API.
The app we develop here is quite simple:
• It provides a Take Picture button.
• When Take Picture is clicked, the Cordova Camera displays.
• The user takes a picture.
• The picture is returned through Cordova and is displayed in the app’s HTML.
Before getting started
Cordova guide
You may want to read the Cordova Guide. It contains all the info you need to set up your development environment.
The three prerequisites being:
• Installing cordova-cli from the Ubuntu Cordova PPA
• Creating a click chroot for the armhf architecture, to run and contain your application
• Installing build dependencies in the click chroot; refer to the corresponding section in the Cordova Guide
HTML5 UI Toolkit basics
This tutorial is not focused on the UI Toolkit. For help, see the Ubuntu HTML5 UI Toolkit Guide.
Getting the resources for this app
You can obtain the source tree for this app as follows:
Open a terminal with Ctrl+Alt+T and get the branch with:
$ bzr branch lp:ubuntu-sdk-tutorials
Creating your Cordova app project
We will be creating the application from scratch and copy-pasting parts from the reference code.
You will need to instantiate a new project, with the following Cordova command:
$ cordova create cordovacam cordovacam.mydevid
$ cd cordovacam
Tip: You may want to add app project files to revision control such as Bazaar and commit them (except the .user file,
which is typically not stored in VCS).
Define your application icon
To define the icon for your application, first copy the sample icon from the ubuntu-sdk-tutorials/
html5/html5-tutorial-cordova-camera directory:
$ cp ../ubuntu-sdk-tutorials/html5/html5-tutorial-cordova-camera/www/icon.png ./www/img/logo.png
Then you need to add this entry into the Cordova app configuration file.
Edit the config.xml file and add the line below:
<icon src="www/img/logo.png" />
Note: this is a mandatory step, to let the application pass the package validation tests.
Add the Ubuntu platform support code to your project
As explained in the Cordova Guide, you need to add platform support code to your project, which will be compiled
and integrated in the Cordova runtime shipped with your application.
Add the Cordova Ubuntu runtime files into your app project:
$ cordova platform add ubuntu
Now, your project contains some additional files, notably:
platforms/ubuntu/
Add support for the Camera API
Add the Cordova Ubuntu runtime files into your app project:
$ cordova plugin add cordova-plugin-camera
Tip: Put all of the files added by the previous commands into your version control system and commit them as appropriate.
Build the app
Use the standard Cordova command line tool to prepare the app for running on your Ubuntu phone.
Generally, you don’t need to build the app separately and then run it. The run command ensures the app is built and the click
package is sent to the phone before starting the application.
$ cordova run --device --debug
Tip: you may see warning messages after the build; for example, if you haven’t specified an icon for your application
yet.
As the application starts on the device, you should also notice that the output contains debug messages that let you
connect to the running JavaScript code and inspect the HTML5 UI.
At this point, the app GUI is still in its default unmodified state. We implement our app GUI in the next section.
Define the HTML5 GUI
Here we replace the GUI declared in the default app with one appropriate for this Camera app.
• In index.html, add the following stylesheet declarations in the <head> section of the document:
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">
<!-- Ubuntu UI Style imports - Ambiance theme -->
<link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />
<!-- Ubuntu UI javascript imports - Ambiance theme -->
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>
• Ensure you include the following two JavaScript files in the <head> section as well:
<!-- Cordova platform API access - Uncomment this to have access to the Javascript APIs -->
<script src="cordova.js"></script>
<!-- Application script and css -->
<script src="js/app.js"></script>
• Then, delete the entire div inside the <body> element and add the following new HTML fragment:
<div data-role="mainview">
  <header data-role="header">
    <ul data-role="tabs">
      <li data-role="tabitem" data-page="camera">Camera</li>
    </ul>
  </header>
  <div data-role="content">
    <div data-role="tab" id="camera">
      <div id="loading">
        <header>Loading...</header>
        <progress class="bigger">Loading...</progress>
      </div>
      <div id="loaded">
        <button data-role="button" class="ubuntu" id="click">Take Picture</button>
        <img id="image" src="" />
      </div>
    </div> <!-- tab: camera -->
  </div> <!-- content -->
</div> <!-- mainview -->
This is a simple implementation of an Ubuntu HTML5 app. It declares the following:
• A mainview div (required)
• A header with a single tabitem: “Camera”
• A content div with two internal divs: loading and loaded
• The loading div displays at launch time and includes a progress spinner; it is hidden by JavaScript code (which we
look at later) once Cordova is ready
• The loaded div is displayed by JavaScript once Cordova is ready and contains:
• A Take Picture button: We create an event listener for this below to popup the Cordova Camera
• An empty img element: When the camera takes a picture, it uses this element to display the return image
If you run the app now, the GUI appears as follows:
As noted above, that is the loading div, which displays until the Cordova deviceready event is received.
Tip: To isolate your application UI from future UI toolkit changes, we now recommend bundling a copy of the toolkit
inside your application package. There is a small tool documented here that will assist you in migrating your project.
See https://code.launchpad.net/~dbarth/ubuntu-html5-theme/cmdline-tool/+merge/253498
Note: at the end of the index.html file you should also see a reference to a cordova.js script, which is loaded at
the beginning of the page. This file is not present in the source ‘www’ directory; however, it is automatically copied
with the rest of the Cordova runtime startup code during the build phase. So don’t worry, the file will be present in the
resulting click package.
Let’s take the next step and add the JavaScript that responds to the Cordova deviceready event by hiding the loading
div, displaying the loaded div, and providing an event handler for the Take Picture button.
Adding JavaScript to display the Cordova Camera
Here we add an event handler for the Cordova deviceready event and, inside it, set up our Take Picture button to call
the Cordova Camera API so the user can take a picture.
You should mostly replace the default www/js/index.js file with a new file called app.js from the tutorial
branch. We will look at the key elements of this file below.
The first step is to initialize the UbuntuUI object to set up the main user interface parts. The following event listener is
triggered on the initial window load event and prepares the rest of the UI:
window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    document.addEventListener("deviceready", function() {
        if (console && console.log)
            console.log('Platform layer API ready');
        // hide the loading div and display the loaded div
        document.getElementById("loading").style.display = "none";
        document.getElementById("loaded").style.display = "block";
Inside this function you can install a listener to react to the main button press, and capture the image with the camera.
Here is how it looks:
        // event listener to take picture
        UI.button("click").click( function() {
            navigator.camera.getPicture(onSuccess, onFail, {
                quality: 100,
                targetWidth: 400,
                targetHeight: 400,
                destinationType: Camera.DestinationType.DATA_URL,
                correctOrientation: true
            });
            console.log("Take Picture button clicked");
        }); // "click" button event handler
    }, false); // deviceready event handler
}; // window.onload event handler
This is the first bit of new code that’s needed. Let’s take a look at it.
Examining the new event listener
• An event handler for the Cordova deviceready event is added. This is received when the Cordova system is
fully loaded and ready, so this is a great place to put code that uses Cordova objects. (See Cordova Guide for
information.)
• Inside the deviceready handler, first the loading div is hidden and then the loaded div is displayed.
• Then, the Take Picture button is obtained with: UI.button(“click”).
• Its click(FUNCTION) method provides the FUNCTION that runs when the button is clicked, the button’s event
handler code. (See HTML5 APIs for complete API reference docs.)
• This event handling function calls the navigator.camera.getPicture(. . . ) method.
• The navigator object is the base Cordova object and is available in the HTML5 runtime container when the app
includes Cordova as described above.
• getPicture(. . . ) takes three arguments: the name of the function to run when a picture is taken (called onSuccess
here and defined below), the name of a function to run when an attempt to take a picture fails (onFail
here, also defined below), and some optional arguments.
• In the optional arguments, we set the image quality, its size, the type of image returned to DATA_URL, which
enables passing the image directly in JavaScript as a base64 encoded piece of data (without saving it as a file),
and enable orientation correction.
Tip: The getPicture(. . . ) method and its arguments are defined in the Cordova API reference docs.
Defining the onSuccess function
As we saw above, Cordova getPicture is told to run onSuccess when the picture is taken. Cordova runs it and passes it
the actual picture, formatted as Cordova type DATA_URL.
So this app:
• Needs an onSuccess function
• That receives the passed image data
• And modifies the app’s HTML img element’s src attribute to actually display the image from the passed image
data
Here is code that does these things. You can paste this into the bottom of app.js:
function onSuccess(imageData) {
var image = document.getElementById('image');
image.src = "data:image/jpeg;base64," + imageData;
image.style.margin = "10px";
image.style.display = "block";
}
Defining the onFailure function
For this simple app, we simply log the message provided by Cordova to console. Paste this at the bottom of app.js:
function onFail(message) {
    console.log("Picture failure: " + message);
}
Running the app
With these pieces in place, the app should run and allow you to take a picture.
As usual, do:
$ cordova run --device --debug
Here is how the application looks after clicking Take Picture:
Once you validate the picture, the system will bring back your application and will display the photo below the button.
Polish
Add CSS
Let’s add some CSS styling:
• Make our Take Picture button Ubuntu orange
• Center it
• Center the “Loading. . . ” progress spinner
Create www/app.css with this content:
#loading {
    position: absolute;
    left: 45%;
}

#loaded {
    display: none;
}
Now, in index.html, simply add the following inside the <head>:
<link href="app.css" rel="stylesheet" type="text/css"/>
Now, the Loading page and the home page look like this:
Next steps
Check out the Cordova Guide for a high level review of using Cordova in Ubuntu HTML5 apps and for adding Ubuntu
as a build platform for native Cordova projects.
The Cordova APIs give your HTML5 apps access to other system and device-level things, so check these out by
visiting the Cordova API docs.
HTML5 Tutorials - online accounts
Here we provide and discuss two example HTML5 apps that use the Ubuntu App Platform JavaScript Online Accounts
API:
• html5-example-online-accounts app: This app lets you browse all currently enabled Online Accounts and lets
you drill down to see account details, including authorization status and token.
• html5-example-online-accounts-facebook-albums app: This app uses the Facebook authorization token derived from Online Accounts to browse your Facebook photo albums.
The discussion here is focused primarily on Online Accounts API usage from JavaScript. For help getting started
writing Ubuntu HTML5 apps, check out the Online accounts developer guide.
Online Accounts overview
Ubuntu has a service and a corresponding system settings utility for Online Accounts. The user may provide login
credentials for various online accounts such as Google, Facebook and others through Online Accounts settings. For
such user-enabled Online Accounts, the service logs in for the user and receives authorization tokens from the external
accounts. These tokens can be used to enhance the user experience in Ubuntu and in your app. For example, after
enabling the Facebook account, searches in the Ubuntu Photos scope also return photos your Facebook friends have
posted. And, you can write an app that obtains the Online Account authentication tokens from the Online Accounts
API and uses them.
Online Accounts API key points
Provider and Services
An Online Account is identified in the API by a Provider and a Service.
• Provider: An object that represents a web service provider. For example, Facebook is a Provider. Google is
another.
• Service: A Provider can offer one or more Services. For example, Facebook has several services:
facebook-contacts, facebook-sharing, and facebook-microblog.
The API call used to obtain the current accounts allows you to obtain a filtered set of accounts by specifying the
Provider or Service. (This is used in the example Facebook Albums app discussed below.)
Provider and Service files
In order to use the Online Accounts API and access account data, an application must declare the appropriate policy group in its manifest and create the necessary .provider
and .service files, as described in the following reference: Online accounts developer guide
Authorization data
When you have an object representing a particular account, you can use it to check the account authorization status
and obtain the authorization token.
Getting the source trees
The app source trees for these two example apps are available as subdirectories in the ubuntu-sdk-tutorials Bazaar
branch on launchpad.net. Get the branch as follows:
1. Open a terminal with Ctrl + Alt + T.
2. Ensure the bzr package is installed with: $ sudo apt-get install bzr
Tip: Tell bzr who you are with bzr whoami.
3. Get the branch with: $ bzr branch lp:ubuntu-sdk-tutorials
4. Move into the branch’s html5/ directory: $ cd ubuntu-sdk-tutorials/html5
The two apps are subdirectories named for the app:
• html5-example-online-accounts/
• html5-example-online-accounts-facebook-albums/
Run the apps
Run both apps to familiarize yourself with them:
1. Ensure you have enabled some Online Accounts with System Settings > Online Accounts
2. Move into the appropriate app subdirectory:
$ cd ubuntu-sdk-tutorials/html5/html5-example-online-accounts
or
$ cd ubuntu-sdk-tutorials/html5/html5-example-online-accounts-facebook-albums
3. Launch the app, for example on the Desktop with ubuntu-html5-app-launcher --www=www
App 1: Online Accounts browser
This app lets you browse and drill into currently available Online Accounts.
• The app’s home page provides optional input fields to limit the displayed accounts by filtering by Provider and
Service.
• There’s a Show Accounts button to list accounts.
• You can click an account to show the Account Details page, which includes the authorization status and token.
• When on Account Details, you can click to show the Raw Details page for the account, which is simply the
account details displayed as JSON.
• The app uses the “deep” navigation pattern, which means the HTML5 consists of a Pagestack of Pages, so a
toolbar with a Back button is available to remove the current Page from the top of the Pagestack and return to
the previous Page.
Now, let’s take a closer look at the relevant API calls.
Getting the OnlineAccounts object
Naturally, you need to get the Online Accounts JavaScript object.
This is done in the window.onload event handler (or equivalent):
window.onload = function () {
var UI = new UbuntuUI();
UI.init();
[...]
var api = external.getUnityObject('1.0');
var oa = api.OnlineAccounts;
Getting the list of providers for the current application
Through the .provider and .service files, an application defines the list of Online Accounts providers, and the specific
services from those providers, that it requires. Although this is a necessary step for the application to use Online
Accounts, it is not enough to start using the API to access account information.
Before an Ubuntu Touch application can access a given provider, the user must first grant it the right to do so. If no
account exists for that provider, the user should have the option to create one before being able to use it.
These important steps are taken care of by one specific Online Accounts HTML5 API function:

var api = external.getUnityObject('1.0');
var oa = api.OnlineAccounts;
oa.api.requestAccount(string short_application_id, string provider_id, function callback)

The requestAccount function does the work described above: it allows the user to grant access to a given provider
and, if applicable, to create a new account for that provider.
For the definitions of “short application id” and “provider_id”, please refer to the Online accounts developer
guide.
Getting and displaying a list of enabled accounts
This is done by providing a FILTERS object and a CALLBACK function to the oa.api.getAccounts(FILTERS, CALLBACK) function.
• The FILTERS object has two keys: ‘provider’ and ‘service’. When these keys have values, the returned accounts are limited to those that match.
• The CALLBACK runs and receives an object that is a list of the current accounts.
Let’s take a closer look at the CALLBACK.
oa.api.getAccounts(FILTERS, function(accounts) { [...] });
This defines an anonymous callback function that receives the list of accounts, here as accounts.
Tip: The app then checks whether there are no accounts and, if so, alerts the user through the app home page.
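The getAccounts(FILTERS, CALLBACK) pattern described above can be sketched like this. Since the real API is only available on a device, the stub below is an invented stand-in that mimics the filtering behaviour for illustration; on a device you would call the real oa.api.getAccounts.

```javascript
// Invented sample data standing in for real enabled accounts.
var sampleAccounts = [
    { provider: 'facebook', service: 'facebook-sharing' },
    { provider: 'google',   service: 'google-contacts'  }
];

// Stub mimicking getAccounts: empty filter values ('') match any account.
function getAccountsStub(filters, callback) {
    callback(sampleAccounts.filter(function (act) {
        return (!filters.provider || act.provider === filters.provider) &&
               (!filters.service  || act.service  === filters.service);
    }));
}

getAccountsStub({ provider: 'facebook', service: '' }, function (accounts) {
    if (accounts.length === 0) {
        // this mirrors the app's "no accounts" alert mentioned above
        console.log('No accounts found; alert the user here.');
    }
    console.log(accounts.length); // 1
});
```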
Populating the list of accounts
The app then populates an Ubuntu List with the accounts, where the displayed text is extracted from the particular
account, including its displayName, Provider ID and Service ID, obtained with the API as follows:
var info =
act.displayName() + ' '
+ JSON.stringify((act.provider()['id'])) + ' '
+ JSON.stringify(act.service()['id']);
This List is populated with the Ubuntu List.append() method. This uses the above info string and also takes the
name of a callback function to be executed when the user clicks the list item. So the app creates the mod object (a few
lines above) to store the values of the current account:
var mod = {
'name': act.displayName(),
[...]
'act': act
}
The callback function is created and passed the mod object with:
var dL = displayList(mod);
And the list is populated for each account with key account info and the callback dL function.
Account Details page
When the user clicks the account list item on the home page, the dL callback displays the Account Details page. This
page consists of four Ubuntu lists:
• The first displays a single item, the account’s displayName, obtained with ACCOUNT.displayName()
• The second iterates through the Provider object keys and adds a list item with the key and its value
• The third does the same, but for the Service object
• The fourth does the same for the Authorization object; here the authorization token and other data are
obtained through another API call, discussed next
Get authentication data for an account
To obtain current authentication data for an account, use the following.
ACCOUNT.authenticate(CALLBACK)
Where ACCOUNT is one of the items in the array returned by oa.api.getAccounts(...).
The callback function receives an object with the authorization data. In this case, we name it results:
Note: in the following, mod[‘act’] is the ACCOUNT object.
mod['act'].authenticate(function (results) {
    // CODE TO HANDLE THE RESULTS
});
The results object is parsed and added to the fourth list on the Account Details page.
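As a rough sketch of that parsing step, the results object can be flattened into "key: value" strings suitable for list items. The object shape below is an invented sample for illustration only, not guaranteed API output.

```javascript
// Mocked authentication result; real data comes from authenticate().
var results = { data: { AccessToken: 'abc123', ExpiresIn: 3600 } };

// Flatten the data object into displayable "key: value" strings,
// as the Account Details page does for its fourth list.
var items = Object.keys(results.data).map(function (key) {
    return key + ': ' + results.data[key];
});

console.log(items.join('\n'));
// AccessToken: abc123
// ExpiresIn: 3600
```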
App2: Facebook Albums browser
As noted, this app lets you browse and drill into your Facebook photo albums, displaying the photos for each.
• The app home page has a Get Albums button that displays a list of your Facebook albums
• You can click an album list item to display an Album page that displays photos in the album using the Ubuntu
Shapes widget
• You can click a photo shape on the Album page to display the Photo page that displays the photo in larger
format
• The app uses the “deep” navigation pattern, which means the HTML5 consists of a Pagestack of Pages, so a
toolbar with a Back button is available to remove the current Page from the top of the Pagestack and return to
the previous Page.
Getting the OnlineAccounts object
This app also obtains the OnlineAccounts object in the same way as the previous app:
window.onload = function () {
[...]
var api = external.getUnityObject('1.0');
var oa = api.OnlineAccounts;
Getting the list of enabled accounts
Then, the list of accounts is obtained. In this case, however, a filter object is provided that ensures only Facebook
accounts are returned.
var filters = {'provider': 'facebook', 'service': ''};
oa.api.getAccounts(filters, function(accounts){
    [...]
});
As you can see, the getAccounts method is passed an anonymous function as the callback, and this receives the
accounts array.
Authenticating
Next, the authenticate method of the first account in accounts is called, with a callback provided. All Facebook accounts
use the same authentication token, so it is sufficient to use the first Facebook account without checking the Service
type.
accounts[0].authenticate(authcallback);
The authcallback function receives the authentication data, here named res, and the authentication token is obtained
from it:
function authcallback(res){
token = res['data']['AccessToken'];
[...]
}
Getting albums and photos from the Facebook Graph API
Now that we have covered the Ubuntu Online Accounts API usage, let’s only touch on the highest points of the rest of
the code.
The app uses the token to get a list of the user’s Facebook albums through the Facebook Graph API with this function:
getFacebookAlbums(token, function(albums) {
    [...]
});
getFacebookAlbums is passed an anonymous function that receives the list of Facebook albums as albums.
The albums are iterated through and the home page GUI is constructed. It consists of an Ubuntu List, where the text
is the album name and album id. Each listitem has a click callback that on execution obtains the photos in the album
from Facebook and displays the Album page populated with photos as Ubuntu Shape widgets, each of which has a
click function to display the Photo page with the right photo.
Key points
• Online Accounts keeps track of user enabled web accounts, including authorization status and tokens
• The Online Accounts JavaScript API lets your HTML5 app obtain this information
• You can get a list of Accounts identified by Provider and Service
• You can get authorization data for each account for the current user
• You can use the authorization data to interact with the external web site with their API and build rich apps that
include personal content from protected external sources
HTML5 Tutorials - unit testing
In this tutorial you will learn how to write a unit test to strengthen the quality of your Ubuntu HTML5 application. It
builds upon the HTML5 development tutorials.
Requirements
• Ubuntu 14.10 or later
– Get Ubuntu
• The HTML5 development tutorials
– If you haven’t already, complete the HTML5 development tutorials
• nodejs
– Open a terminal with Ctrl+Alt+T and run this command to install the required package:
– sudo apt-get install nodejs
What are unit tests?
To help ensure your application performs as expected it’s important to have a nice suite of unit tests. Unit tests are the
foundation of a good testing story for your application. Let’s learn more about them.
A unit test should generally test a specific unit of code. It should be able to pass or fail in only one way, which means
it should generally contain one and only one assertion (assert for short). An assertion is a statement about the expected
outcome of a series of actions. By limiting yourself to a single statement about the expected outcome, it is clear why
a test fails.
Unit tests are the base of the testing pyramid. The testing pyramid describes the three levels of testing an application,
going from low level tests at the bottom and increasing to high level tests at the top. As unit tests are the lowest level,
they should represent the largest number of tests for your project.
In Ubuntu, unit tests for your HTML5 application:
• Are written in javascript
• Utilize jasmine, grunt and nodejs
Speaking Jasmine
A simple spec (testcase)
A basic spec is very simple.
• Declare a describe() function. This forms the test suite definition
• Using the it function, create test cases using javascript
• Utilize expect and matchers to make an assertion about results
describe("Testsuite", function() {
    it("testname", function() {
        expect(true).toBe(true);
    });
});
Example
For example, here’s a simple test suite for a function which reverses a string:
describe('String Tests', function() {
    beforeEach(function() {
        stringFunc = {
            reverse: function(string) {
                var reversed = '';
                for (var i = string.length - 1; i >= 0; i--) {
                    reversed += string[i];
                }
                return reversed;
            }
        };
    });
    it("string is reversed", function() {
        string = 'thisismystring';
        expect(stringFunc.reverse(string)).toEqual('gnirtsymsisiht');
    });
});
Building blocks of a spec
describe function
This defines the testsuite. It takes two parameters: a simple string argument which is utilized as the name of the suite,
and a function which contains the testsuite code.
it function
This defines the testcase. It also takes two parameters: a simple string argument which is utilized as the name of the
testcase, and a function which contains the testcase code.
expect function
This is used in unison with matchers to allow expectations or assertions to be made. It takes a single parameter, the
actual value, which serves as the first part of the assertion.
Matchers
Matchers are used to provide the logic for expect, as above. Jasmine makes a plethora of built-in matchers available
by default. Each matcher takes a single parameter that, combined with the matcher, serves as the second part of the
assertion.
Below is a list of built-in matchers:
• toBe
– compares with ===
• toEqual
– compares with ==
• toMatch
– for regular expressions
• toBeDefined
– compares against undefined
• toBeNull
– compares against null
• toBeTruthy
– for boolean casting testing
• toBeFalsy
– for boolean casting testing
• toContain
– for finding an item in an array
• toBeLessThan
– for mathematical comparisons
• toBeGreaterThan
– for mathematical comparisons
• toBeCloseTo
– for precision math comparison
• toThrow
– for testing if a function throws an exception
• toThrowError
– for testing a specific thrown exception
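To make the comparison semantics concrete, here is a plain-JavaScript sketch of what a few of these matchers check. This is illustrative only; the real matchers live inside the Jasmine runtime and produce pass/fail results plus failure messages.

```javascript
// Plain-JS predicates approximating a few built-in Jasmine matchers.
function toBe(actual, expected) { return actual === expected; }      // strict identity
function toMatch(actual, pattern) { return pattern.test(actual); }   // regular expressions
function toBeDefined(actual) { return actual !== undefined; }
function toContain(actual, item) { return actual.indexOf(item) !== -1; }
function toBeGreaterThan(actual, expected) { return actual > expected; }

console.log(toBe('a', 'a'));              // true
console.log(toMatch('jasmine', /mine$/)); // true
console.log(toContain([1, 2, 3], 2));     // true
console.log(toBeGreaterThan(3, 5));       // false
```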
Advanced Usage
Setup and Teardown
Should you need to perform actions before or after each testcase runs, or before or after an entire testsuite runs, you
can utilize the aptly named Each and All functions: beforeEach, afterEach, beforeAll, and afterAll. The All functions
run once before and after an entire testsuite, while the Each functions run before and after each testcase.
Here’s an example with two simple testcases:
describe("testsuite1", function() {
    beforeAll(function() {
        waybefore = 1;
    });
    beforeEach(function() {
        before = 1;
    });
    afterEach(function() {
        before = 0;
    });
    afterAll(function() {
        waybefore = 0;
    });
    it("test1", function() {
        expect(true).toBe(true);
    });
    it("test2", function() {
        expect(false).toBe(false);
    });
});
And finally here’s how they will be executed:
beforeAll
testsuite1
beforeEach
test1
afterEach
beforeEach
test2
afterEach
afterAll
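The ordering above can be reproduced with a minimal plain-JavaScript harness. This is a sketch; runSuite is an invented helper for illustration, not part of Jasmine.

```javascript
// Minimal harness reproducing the Each/All hook ordering shown above.
var log = [];

function runSuite(tests, hooks) {
    if (hooks.beforeAll) hooks.beforeAll();
    tests.forEach(function (test) {
        if (hooks.beforeEach) hooks.beforeEach();
        test();
        if (hooks.afterEach) hooks.afterEach();
    });
    if (hooks.afterAll) hooks.afterAll();
}

runSuite(
    [function () { log.push('test1'); }, function () { log.push('test2'); }],
    {
        beforeAll: function () { log.push('beforeAll'); },
        beforeEach: function () { log.push('beforeEach'); },
        afterEach: function () { log.push('afterEach'); },
        afterAll: function () { log.push('afterAll'); }
    }
);

console.log(log.join(' -> '));
```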
Custom Matchers
Sometimes you might need to make an assertion that isn’t readily covered by the built-in matchers. To alleviate this
problem, you can define your own custom matcher for later use. A custom matcher must contain a compare function
that returns a results object. This object must have a pass boolean that is set to true when successful, and false when
unsuccessful.
While optional, you should also define a message property that will be utilized when a failure occurs.
Example
Here’s an example custom matcher to check and ensure a value is even.
var customMatchers = {
    toBeEven: function() {
        return {
            compare: function(actual, expected) {
                var result = {};
                result.pass = (actual % 2) === 0;
                if (!result.pass) {
                    result.message = "Expected " + actual + " to be even";
                }
                return result;
            }
        };
    }
};
To include a custom matcher in your testcases, utilize the jasmine.addMatchers function. This can be done for each
testcase or testsuite using the aforementioned Each and All functions. For example, for our toBeEven custom matcher:
beforeEach(function() {
    jasmine.addMatchers(customMatchers);
});
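Because a matcher factory just returns an object with a compare function, its contract can be exercised directly in plain JavaScript. This is a sketch; toBeEvenFactory is a stand-in name used for illustration.

```javascript
// Exercising the compare() contract of a custom matcher directly,
// outside the Jasmine runtime: compare() must return an object with
// a boolean pass, and should set message on failure.
function toBeEvenFactory() {
    return {
        compare: function (actual) {
            var result = { pass: (actual % 2) === 0 };
            if (!result.pass) {
                result.message = "Expected " + actual + " to be even";
            }
            return result;
        }
    };
}

var matcher = toBeEvenFactory();
console.log(matcher.compare(4).pass);    // true
console.log(matcher.compare(3).pass);    // false
console.log(matcher.compare(3).message); // Expected 3 to be even
```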
Spies
A spy allows you to spy on any function, tracking all calls and arguments to that function. This allows you to easily
keep track of things and gain useful insight into what is happening inside of different functions.
This also allows you to fake any piece of a function you wish. For example, you can fake a return value from a
function, throw an error, or even call a different function.
• and.throwError
– force an error to be thrown
• and.callThrough
– calls the spy function before invoking the actual function
• and.callFake
– allows you to call a different function completely
• and.stub
– makes the spy do nothing when invoked (the default behavior), cancelling any callFake or callThrough
• and.returnValue
– forces the returned value from the function call
Here’s an example of changing a returned value via the and.returnValue function.
describe('Spy Fake Return', function() {
    beforeEach(function() {
        myFunc = {
            returnZero: function() {
                return 0;
            }
        };
    });
    it("spy changes value", function() {
        spyOn(myFunc, "returnZero").and.returnValue(1);
        expect(myFunc.returnZero()).toEqual(1);
    });
    it("normal value is zero", function() {
        expect(myFunc.returnZero()).toEqual(0);
    });
});
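The core bookkeeping a spy performs can be sketched in a few lines of plain JavaScript. makeSpy is an invented helper for illustration, not Jasmine's implementation.

```javascript
// Minimal sketch of spy bookkeeping: wrap a function, record every call's
// arguments, and force a return value (roughly what and.returnValue does).
function makeSpy(fakeReturn) {
    function spy() {
        spy.calls.push(Array.prototype.slice.call(arguments));
        return fakeReturn;
    }
    spy.calls = [];
    return spy;
}

var spy = makeSpy(1);
spy('a', 'b');
console.log(spy());            // 1: the forced return value
console.log(spy.calls.length); // 2: both calls were recorded
```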
Conclusion
Let me try!
Try Jasmine is an excellent web-based resource that lets you experiment with and learn Jasmine from the comfort of
your browser. Try it out!
You’ve just learned how to write unit tests for an Ubuntu HTML5 application. But there is more information to be
learned about how to write HTML5 tests. Check out the links below for more documentation and help.
Resources
• Jasmine
• Grunt
• NodeJS
• HTML5 SDK documentation
HTML5 tutorials - writing functional tests
In this tutorial you will learn how to write functional tests to strengthen the quality of your Ubuntu HTML5 application.
It builds upon the HTML5 development tutorials.
Requirements
• Ubuntu 14.10 or later
– Get Ubuntu
• The HTML5 development tutorials
– If you haven’t already, complete the HTML5 development tutorials
• autopilot, selenium
– Open a terminal with Ctrl+Alt+T and run these commands to install all required packages:
– sudo apt-add-repository ppa:canonical-platform-qa/selenium
– sudo apt-get update
– sudo apt-get install python3-autopilot python3-selenium oxideqt-chromedriver
What are acceptance tests?
Functional or acceptance tests help ensure your application behaves properly from a user perspective. The tests seek
to mimic the user as closely as possible. Acceptance tests are the pinnacle of the testing pyramid. The testing pyramid
describes the three levels of testing an application, going from low level tests at the bottom and increasing to high level
tests at the top. As acceptance tests are the highest level, they will represent the smallest number of tests, but will also
likely be the most complex.
In Ubuntu, functional tests for your HTML5 application:
• Are written in python
• Utilize selenium and autopilot
What is autopilot? selenium?
Autopilot is a tool for introspecting applications using dbus. What this means is autopilot can read application objects
and their properties, while also allowing you to mock user interactions like clicking, tapping and sending keystrokes.
Selenium is also a testing tool meant for testing web applications. Like autopilot, it allows you to find and interact
with page elements, but does this by driving a browser and providing programmatic access to it.
A simple testcase
The setup
Before you can run a testcase, you’ll need to set up your environment.
• Create a test class that inherits AutopilotTestCase
• Define your setUp() and tearDown() methods
• Launch the application with introspection via launch_test_application
Fortunately, this setup is taken care of for you by the testing templates provided by the SDK. Let’s break down a few
important pieces to understand.
First is how we launch the application. Autopilot is used to introspect the ubuntu-html5-app-launcher executable,
which will run the web app and contains the web view.
def launch_html5_app_inline(self, args):
    return self.launch_test_application(
        'ubuntu-html5-app-launcher',
        *args,
        emulator_base=uitk.UbuntuUIToolkitCustomProxyObjectBase)
Next, we define a webdriver for selenium that we can use to interact with the webview. A webdriver is an interface to
a browser that allows programmatic interaction with an application. Each browser has a separate browser driver.
Since our HTML5 application will be running on Blink, we launch a Chrome driver.
def launch_webdriver(self):
    options = Options()
    options.binary_location = ''
    options.debugger_address = '{}:{}'.format(
        DEFAULT_WEBVIEW_INSPECTOR_IP,
        DEFAULT_WEBVIEW_INSPECTOR_PORT)
    self.driver = webdriver.Chrome(
        executable_path=CHROMEDRIVER_EXEC_PATH,
        chrome_options=options)
Finally we are able to launch the application and start the webdriver once it’s loaded.
def launch_html5_app(self):
    self.app_proxy = self.launch_html5_app_inline()
    self.wait_for_app_to_launch()
    self.launch_webdriver()
Building blocks of a testcase
Testcase
• Create a Testcase class that inherits your test class
• Define your setUp() (and perhaps tearDown()) methods
• Launch the application with introspection via launch_test_application
Here’s a simple test example of testing an HTML5 app with 2 buttons.
def test_for_buttons(self):
    html5_doc_buttons = self.page.find_elements_by_css_selector(
        "#hello-page a")
    self.assertThat(len(html5_doc_buttons), Equals(2))
Making use of selenium
Once you’ve launched the application successfully, you will have access to the object tree as usual. You will find the
objects you need under the WebAppContainer object. A simple select will get you the object:
select_single(WebAppContainer)
Even further, you can also utilize the selenium webdriver methods to interact with the application.
For example, you will find it useful to search for objects using selenium, while interacting with the container will be
easier using autopilot (tapping the back button for example). As you see in the example above we are able to easily find
elements on the page using a find_elements_by_css_selector method which is provided by the selenium
webdriver. This is in contrast to introspecting for the object over the dbus tree via autopilot.
Finding and Selecting Objects
Fortunately selenium also makes it easy to find and introspect objects. You can issue a find by id, name, path, link,
tag, class, and css! You can also find multiple elements by most of the same attributes.
You can read more about finding elements in the Selenium documentation.
Once you have found an element you can interact with it by reading its properties or performing an action. Let’s talk
about each one.
Reading attributes
You can read element attributes by utilizing the get_attribute method. For example, we can read attributes of the button
from the previous example.
button.get_attribute("class")
Note that getting a list of all attributes isn’t possible via the API. Instead, you can inspect the element using web
developer tools or javascript to list its attributes.
You can also get values of css properties via the value_of_css_property method.
Action Chains
Now that we can find objects and get details about them, let’s interact with them as well. A user interacting with our
application will swipe and tap our UI elements. To do the same in selenium, we can utilize what is known as an action
chain. This is simply a set of actions that we ask selenium to perform in the same way as a user.
Let’s provide an example, by expanding the example testcase we gave above. After finding the buttons, let’s add an
action to click the first button.
First, let’s define a new actionchain for the main page.
actions = ActionChains(self.page)
Now we can add actions to perform. Selenium allows us to click on items, drag, move, etc. For our purposes let’s add
a single action to click the button.
actions.click(button)
Once all of our actions are added, we call the perform method to execute the actions. So putting it all together, here’s
our full testcase:
def test_click_button(self):
    button = self.page.find_elements_by_class_name("ubuntu")[0]
    actions = ActionChains(self.page)
    actions.click(button)
    actions.perform()
To find out about other useful methods, check out the Actions Chain documentation.
Assertions and Expectations
In addition to the suite of assertions that autopilot has, selenium allows for you to create expectations about elements.
These are called expected conditions. For example, we could wait for an element to be clickable before clicking on it.
wait.until(expected_conditions.element_to_be_clickable((By.CLASS_NAME, "ubuntu")))
Page Object Model
When you are architecting your test suite, it’s important to think about design. Functional tests are the most UI-sensitive
testcases in your project and are more likely to break than lower level tests. To address this issue, the page
object model can guide you towards writing tests that scale and deal with changes over time easily. Check out the
Page Object Model documentation for more information.
Conclusion
You’ve just learned how to write acceptance tests for an Ubuntu HTML5 application. But there is more information to
be learned about how to write HTML5 tests. Check out the links below for more documentation and help.
Resources
• Autopilot API
• Selenium Webdriver API
• HTML5 SDK documentation
HTML 5 API
Ubuntu HTML5 APIs enable a rich set of technologies for your applications to integrate with and blend in with the
platform. The documentation will provide you with detailed technical information and examples on how to make the
most of device and platform functionalities.
Note: The API documentation has not yet been imported. The old canonical documentation can be found here.
Autopilot
Note: Here be dragons! This part of the docs could be very outdated or incomplete and has not been completely
triaged. Refer to the Ubuntu docs for further reference.
ubuntuuitoolkit
Ubuntu UI Toolkit Autopilot tests and helpers.
class ubuntuuitoolkit.AppHeader(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
AppHeader Autopilot custom proxy object.
click_action_button(action_object_name)¶
Click an action button of the header.
Parameters:
action_object_name – The QML objectName property of the action.
Raises ToolkitException:
If there is no action button with that object name.
click_back_button()¶
click_custom_back_button()¶
ensure_visible()¶
get_selected_section_index()¶
switch_to_next_tab(instance, *args, **kwargs)¶
Open the next tab.
Raises ToolkitException:
If the main view has no tabs.
switch_to_section_by_index(instance, *args, **kwargs)¶
Select a section in the header divider
Parameters:
index – The index of the section to select
Raises ToolkitEmulatorException:
If the selection index is out of range or useDeprecatedToolbar is set.
switch_to_tab_by_index(instance, *args, **kwargs)¶
Open a tab. This only supports the new tabs in the header
Parameters:
index – The index of the tab to open.
Raises ToolkitException:
If the tab index is out of range or useDeprecatedToolbar is set.
wait_for_animation()¶
ubuntuuitoolkit.check_autopilot_version()¶
Check that the Autopilot installed version matches the one required.
Raises ToolkitException:
If the installed Autopilot version doesn’t match the one required by the custom proxy objects.
class ubuntuuitoolkit.CheckBox(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
CheckBox Autopilot custom proxy object.
change_state(instance, *args, **kwargs)¶
Change the state of a CheckBox.
If it is checked, it will be unchecked. If it is unchecked, it will be checked.
Parameters:
time_out – number of seconds to wait for the CheckBox state to change. Default is 10.
check(instance, *args, **kwargs)¶
Check a CheckBox, if it’s not already checked.
Parameters:
timeout – number of seconds to wait for the CheckBox to be checked. Default is 10.
uncheck(instance, *args, **kwargs)¶
Uncheck a CheckBox, if it’s not already unchecked.
Parameters:
timeout – number of seconds to wait for the CheckBox to be unchecked. Default is 10.
ubuntuuitoolkit.get_keyboard()¶
Return the keyboard device.
ubuntuuitoolkit.get_pointing_device()¶
Return the pointing device depending on the platform.
If the platform is Desktop, the pointing device will be a Mouse. If not, the pointing device will be Touch.
class ubuntuuitoolkit.Header(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._header.AppHeader
Autopilot helper for the deprecated Header.
class ubuntuuitoolkit.Dialog(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Autopilot helper for the Dialog component.
class ubuntuuitoolkit.UCListItem(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Base class to emulate swipe for leading and trailing actions.
toggle_selected(instance, *args, **kwargs)¶
Toggles selected state of the ListItem.
trigger_leading_action(instance, *args, **kwargs)¶
Swipe the item in from left to right to open leading actions and click on the button representing the requested action.
Parameters:
action_objectName – Object name of the action to be triggered.
wait_function – A custom wait function to wait until the action is triggered.
trigger_trailing_action(instance, *args, **kwargs)¶
Swipe the item in from right to left to open trailing actions and click on the button representing the requested action.
Parameters:
action_objectName – Object name of the action to be triggered.
wait_function – A custom wait function to wait until the action is triggered.
class ubuntuuitoolkit.MainView(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
MainView Autopilot custom proxy object.
click_action_button(instance, *args, **kwargs)¶
Click the specified button.
Parameters:
action_object_name – the objectName of the action to trigger.
Raises ToolkitException:
The requested button is not available.
close_toolbar(instance, *args, **kwargs)¶
Close the toolbar if it is opened.
Raises ToolkitException:
If the main view has no toolbar.
get_action_selection_popover(object_name)¶
Return an ActionSelectionPopover custom proxy object.
Parameters:
object_name – The QML objectName property of the popover.
get_header()¶
Return the AppHeader custom proxy object of the MainView.
get_tabs()¶
Return the Tabs custom proxy object of the MainView.
Raises ToolkitException:
If the main view has no tabs.
get_text_input_context_menu(object_name)¶
Return a TextInputContextMenu emulator.
Parameters:
object_name – The QML objectName property of the popover.
get_toolbar()¶
Return the Toolbar custom proxy object of the MainView.
Raises ToolkitException:
If the main view has no toolbar.
go_back(instance, *args, **kwargs)¶
Go to the previous page.
open_toolbar(instance, *args, **kwargs)¶
Open the toolbar if it is not already opened.
Returns:
The toolbar.
Raises ToolkitException:
If the main view has no toolbar.
switch_to_next_tab(instance, *args, **kwargs)¶
Open the next tab.
Returns:
The newly opened tab.
switch_to_previous_tab(instance, *args, **kwargs)¶
Open the previous tab.
Returns:
The newly opened tab.
switch_to_tab(instance, *args, **kwargs)¶
Open a tab.
Parameters:
object_name – The QML objectName property of the tab.
Returns:
The newly opened tab.
Raises ToolkitException:
If there is no tab with that object name.
switch_to_tab_by_index(instance, *args, **kwargs)¶
Open a tab.
Parameters:
index – The index of the tab to open.
Returns:
The newly opened tab.
Raises ToolkitException:
If the tab index is out of range.
classmethod validate_dbus_object(path, state)¶
class ubuntuuitoolkit.OptionSelector(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
OptionSelector Autopilot custom proxy object
get_current_label()¶
Gets the text of the currently selected item.
get_option_count()¶
Gets the number of items in the option selector.
get_selected_index()¶
Gets the current selected index of the QQuickListView.
get_selected_text()¶
Gets the text of the currently selected item.
select_option(*args, **kwargs)¶
Select a delegate in the option selector.
Example usage:
select_option(objectName="myOptionSelectorDelegate")
select_option('Label', text="some_text_here")
Parameters:
kwargs – keywords used to find property(s) of the delegate in the option selector.
class ubuntuuitoolkit.QQuickFlickable(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.Scrollable
pull_to_refresh(instance, *args, **kwargs)¶
Pulls the flickable down and triggers a refresh on it.
Raises ubuntuuitoolkit.ToolkitException:
If the flickable has no pull to release functionality.
swipe_child_into_view(instance, *args, **kwargs)¶
Make the child visible.
Currently it works only when the object needs to be swiped vertically. TODO implement horizontal swiping. –elopio
- 2014-03-21
swipe_to_bottom(instance, *args, **kwargs)¶
swipe_to_show_more_above(instance, *args, **kwargs)¶
swipe_to_show_more_below(instance, *args, **kwargs)¶
swipe_to_top(instance, *args, **kwargs)¶
class ubuntuuitoolkit.QQuickGridView(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.QQuickFlickable
Autopilot helper for the QQuickGridView component.
class ubuntuuitoolkit.QQuickListView(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.QQuickFlickable
click_element(instance, *args, **kwargs)¶
Click an element from the list.
It swipes the element into view if its center is not visible.
Parameters:
objectName – The objectName property of the element to click.
direction – The direction where the element is, it can be either ‘above’ or ‘below’. Default value is None, which means
we don’t know where the object is and we will need to search the full list.
drag_item(instance, *args, **kwargs)¶
enable_select_mode(instance, *args, **kwargs)¶
Default implementation to enable select mode. Performs a long tap over the first list item in the ListView. The
delegates must be the new ListItem components.
class ubuntuuitoolkit.TabBar(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
TabBar Autopilot custom proxy object.
switch_to_next_tab(instance, *args, **kwargs)¶
Open the next tab.
class ubuntuuitoolkit.Tabs(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Tabs Autopilot custom proxy object.
get_current_tab()¶
Return the currently selected tab.
get_number_of_tabs()¶
Return the number of tabs.
class ubuntuuitoolkit.TextArea(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._textfield.TextField
TextArea autopilot emulator.
clear()¶
Clear the text area.
class ubuntuuitoolkit.TextField(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
TextField Autopilot custom proxy object.
clear(instance, *args, **kwargs)¶
Clear the text field.
is_empty()¶
Return True if the text field is empty. False otherwise.
write(instance, *args, **kwargs)¶
Write into the text field.
Parameters:
text – The text to write.
clear – If True, the text field will be cleared before writing the text. If False, the text will be appended at the end of the
text field. Default is True.
class ubuntuuitoolkit.Toolbar(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Toolbar Autopilot custom proxy object.
click_back_button(instance, *args, **kwargs)¶
Click the back button of the toolbar.
click_button(instance, *args, **kwargs)¶
Click a button of the toolbar.
The toolbar should be opened before clicking the button, or an exception will be raised. If the toolbar is closed for some
reason (e.g., timer finishes) after moving the mouse cursor and before clicking the button, it is re-opened automatically
by this function.
Parameters:
object_name – The QML objectName property of the button.
Raises ToolkitException:
If there is no button with that object name.
close(instance, *args, **kwargs)¶
Close the toolbar if it’s opened.
open(instance, *args, **kwargs)¶
Open the toolbar if it’s not already opened.
Returns:
The toolbar.
exception ubuntuuitoolkit.ToolkitException¶
Bases: exceptions.Exception
Exception raised when there is an error with the custom proxy object.
class ubuntuuitoolkit.UbuntuListView11(*args)¶
Bases: ubuntuuitoolkit._custom_proxy_objects._qquicklistview.QQuickListView
Autopilot helper for the UbuntuListView 1.1.
manual_refresh_nowait()¶
manual_refresh_wait()¶
pull_to_refresh_enabled()¶
wait_refresh_completed()¶
class ubuntuuitoolkit.UbuntuUIToolkitCustomProxyObjectBase(*args)¶
Bases: autopilot.introspection.dbus.CustomEmulatorBase
A base class for all the Ubuntu UI Toolkit custom proxy objects.
is_flickable()¶
Check if the object is flickable.
If the object has a flicking attribute, we consider it as a flickable.
Returns:
True if the object is flickable. False otherwise.
swipe_into_view(instance, *args, **kwargs)¶
Make the object visible.
Currently it works only when the object needs to be swiped vertically. TODO implement horizontal swiping. –elopio
- 2014-03-21
tutorial-getting_started
This document contains everything you need to know to write your first autopilot test. It covers writing several simple
tests for a sample Qt5/Qml application. However, it’s important to note that nothing in this tutorial is specific to
Qt5/Qml, and will work equally well with any other kind of application.
Files and Directories
Your autopilot test suite will grow to several files, possibly spread across several directories. We recommend that you
follow this simple directory layout:
The autopilot folder can be anywhere within your project’s source tree. It will likely contain a setup.py file.
The autopilot/<projectname>/ folder is the base package for your autopilot tests. This folder, and all child folders, are
python packages, and so must contain an __init__.py file. If you ever find yourself writing custom proxy classes (this
is an advanced topic, covered here: Writing Custom Proxy Classes), they should be imported from this top-level
package.
Each test file should be named test_<component>.py, where <component> is the logical component you are testing in
that file. Test files must be written in the autopilot/<projectname>/tests/ folder.
A Minimal Test Case
Autopilot tests follow a similar pattern to other python test libraries: you must declare a class that derives from
AutopilotTestCase and give it at least one test method.
Autopilot Says
Make your tests expressive!
It’s important to make sure that your tests express your intent as clearly as possible. We recommend choosing long,
descriptive names for test functions and classes (even breaking PEP 8, if you need to), and give your tests a detailed
docstring explaining exactly what you are trying to test. For more detailed advice on this point, see Write Expressive
Tests
The Setup Phase
Before each test is run, the setUp method is called. Test authors may override this method to run any setup that needs
to happen before the test is run. However, care must be taken when using the setUp method: it tends to hide code from
the test case, which can make your tests less readable. It is our recommendation, therefore, that you use this feature
sparingly. A more suitable alternative is often to put the setup code in a separate function or method and call it from
the test function.
Should you wish to put code in a setup phase, override the setUp method in your test class.
Note
Any action you take in the setup phase must be undone if it alters the system state. See Cleaning Up for more details.
Starting the Application
At the start of your test, you need to tell autopilot to launch your application. To do this, call launch_test_application.
The minimum required argument to this method is the application name or path. If you pass in the application name,
autopilot will look in the current working directory, and then will search the PATH environment variable. Otherwise,
autopilot looks for the executable at the path specified. Positional arguments to this method are passed to the executable
being launched.
Autopilot will try and guess what type of application you are launching, and therefore what kind of introspection
libraries it should load. Sometimes autopilot will need some assistance however. For example, at the time of writing,
autopilot cannot automatically detect the introspection type for python / Qt4 applications. In that case, a RuntimeError
will be raised. To provide autopilot with a hint as to which introspection type to load, you can provide the app_type
keyword argument. For example:
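A sketch of such a launch; the executable name is hypothetical, and 'qt' is the hint that tells autopilot to load the Qt introspection backend:

```python
from autopilot.testcase import AutopilotTestCase


class Qt4AppTests(AutopilotTestCase):

    def test_launch(self):
        # autopilot cannot guess the introspection type of a python/Qt4
        # application, so pass app_type explicitly.
        app = self.launch_test_application('testapp.py', app_type='qt')
```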
See the documentation for launch_test_application for more details.
The return value from launch_test_application is a proxy object representing the root of the introspection tree of the
application you just launched.
Autopilot Says
What is a Proxy Object?
Whenever you launch an application, autopilot gives you a “proxy object”. These are instances of the ProxyBase class,
with all the data from your application mirrored in the proxy object instances. For example, if you have a proxy object
for a push button class (QPushButton, say), the proxy object will have an attribute matching every attribute
in the class within your application. Autopilot automatically keeps the data in these instances up to date, so you can
use them in your test assertions.
User interfaces are made up of a tree of widgets, and autopilot represents these widgets as a tree of proxy objects.
Proxy objects have a number of methods on them for selecting child objects in the introspection tree, so test authors
can easily inspect the parts of the UI tree they care about.
A Simple Test
To demonstrate the material covered so far, this section will outline a simple application, and a single test for it.
Instead of testing a third-party application, we will write the simplest possible application in Python and Qt4. The
application, named ‘testapp.py’, is listed below:
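The original listing is not reproduced here; a reconstruction along these lines gives an equivalent minimal Qt4 application (PyQt4 required):

```python
#!/usr/bin/env python
# testapp.py: show an empty main window titled "Hello World".
from sys import argv

from PyQt4 import QtGui


def main():
    app = QtGui.QApplication(argv)
    win = QtGui.QMainWindow()
    win.setWindowTitle("Hello World")
    win.show()
    app.exec_()


if __name__ == '__main__':
    main()
```

As the next paragraph notes, the file must also be made executable, e.g. with chmod +x testapp.py.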
As you can see, this is a trivial application, but it serves our purpose. For the upcoming tests to run this file must be
executable:
We will write a single autopilot test that asserts that the title of the main window is equal to the string “Hello World”.
Our test file is named “test_window.py”, and contains the following code:
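A hedged reconstruction of that test file; the relative path assumes the directory layout described below, and the method names are illustrative:

```python
# test_window.py: assert the main window title of testapp.py.
from os.path import abspath, dirname, join

from autopilot.testcase import AutopilotTestCase


class MainWindowTitleTests(AutopilotTestCase):

    def launch_application(self):
        # Hide the path-finding details from the test itself.
        full_path = abspath(join(dirname(__file__), '..', '..', 'testapp.py'))
        return self.launch_test_application(full_path, app_type='qt')

    def test_main_window_title_string(self):
        """The main window title must be 'Hello World'."""
        app = self.launch_application()
        main_window = app.select_single('QMainWindow')
        self.assertEqual(main_window.windowTitle, 'Hello World')
```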
Note that we have made the test method as readable as possible by hiding the complexities of finding the full path to
the application we want to test. Of course, if you can guarantee that the application is in PATH, then this step becomes
a lot simpler.
The entire directory structure looks like this:
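A layout consistent with the description above would be:

```
.
├── example
│   ├── __init__.py
│   └── tests
│       ├── __init__.py
│       └── test_window.py
└── testapp.py
```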
The __init__.py files are empty, and are needed to make these directories importable by Python.
Running Autopilot
From the root of this directory structure, we can ask autopilot to list all the tests it can find:
Note that on the first line, autopilot will tell you where it has loaded the test definitions from. Autopilot will look in
the current directory for a python package that matches the package name specified on the command line. If it does
not find any suitable packages, it will look in the standard python module search path instead.
To run our test, we use the autopilot ‘run’ command:
You will notice that the test application launches, and then disappears shortly afterwards. Since this test doesn’t
manipulate the application in any way, this is a rather boring test to look at. If you ever want more output from the run
command, you may specify the ‘-v’ flag:
You may also specify ‘-v’ twice for even more output (this is rarely useful for test authors however).
Both the ‘list’ and ‘run’ commands take a test id as an argument. You may be as generic or as specific as you like. In
the examples above, we list and run all tests in the ‘example’ package (i.e. all tests), but we could specify more
specific criteria if we only wanted to run some of the tests. For example, to run only the single test we’ve written,
we can execute:
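Assuming the test lives at example/tests/test_window.py in a class MainWindowTitleTests with a test method test_main_window_title_string (hypothetical names), the invocation would look like:

```shell
autopilot run example.tests.test_window.MainWindowTitleTests.test_main_window_title_string
```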
A Test with Interaction
Now let’s take a look at some simple tests with some user interaction. First, update the test application with some input
and output controls:
We’ve reorganized the application code into a class to make the event handling easier. Then we added two input
controls (the hello and goodbye buttons) and an output control (the response label).
The operation of the application is still very trivial, but now we can test that it actually does something in response to
user input. Clicking either of the two buttons will cause the response text to change. Clicking the Hello button should
result in Response: Hello while clicking the Goodbye button should result in Response: Goodbye.
Since we’re adding a new category of tests, button response tests, we should organize them into a new class. Our tests
module now looks like:
In addition to the new class, ButtonResponseTests, you’ll notice a few other changes. First, two new import lines were
added to support the new tests. Next, the existing MainWindowTitleTests class was refactored to subclass from a base
class, HelloWorldTestBase. The base class contains the launch_application method which is used for all test cases.
Finally, the object type of the main window changed from QMainWindow to AutopilotHelloWorld. The change in
object type is a result of our test application being refactored into a class called AutopilotHelloWorld.
Autopilot Says
Be careful when identifying user interface controls
Notice that our simple refactoring of the test application forced a change to the test for the main window. When
developing application code, put a little extra thought into how the user interface controls will be identified in the tests.
Identify objects with attributes that are likely to remain constant as the application code is developed.
The ButtonResponseTests class adds two new tests, one for each input control. Each test identifies the user interface
controls that need to be used, performs a single, specific action, and then verifies the outcome. In test_hello_response,
we first identify the QLabel control which contains the output we need to check. We then identify the Hello button. As
the application has two QPushButton controls, we must further refine the select_single call by specifying an additional
property. In this case, we use the button text. Next, an input action is triggered by instructing the mouse to click the
Hello button. Finally, the test asserts that the response label text matches the expected string. The second test repeats
the same process with the Goodbye button.
The Eventually Matcher
Notice that in the ButtonResponseTests tests above, the autopilot method Eventually is used in the assertion. This
allows the assertion to be retried continuously until it either becomes true or times out (the default timeout is 10
seconds). This is necessary because the application and the autopilot tests run in different processes. Autopilot could
test the assertion before the application has completed its action. Using Eventually allows the application to complete its
action without having to explicitly add delays to the tests.
Autopilot Says
Use Eventually when asserting any user interface condition
You may find that when running tests, the application is often ready with the outcome by the time autopilot is able
to test the assertion without using Eventually. However, this may not always be true when running your test suite on
different hardware.
Advanced Autopilot
This document covers advanced features in autopilot.
Cleaning Up
It is vitally important that every test you run leaves the system in exactly the same state as it found it. This means that:
Any files written to disk need to be removed.
Any environment variables set during the test run need to be un-set.
Any applications opened during the test run need to be closed again.
Any Keyboard keys pressed during the test need to be released again.
All of the methods on AutopilotTestCase that alter the system state will automatically revert those changes at the end
of the test. Similarly, the various input devices will release any buttons or keys that were pressed during the test.
However, for all other changes, it is the responsibility of the test author to clean up those changes.
For example, a test might require that a file with certain content be written to disk at the start of the test. The test case
might look something like this:
However this will leave the /tmp/datafile on disk after the test has finished. To combat this, use the addCleanup method.
The arguments to addCleanup are a callable, followed by zero or more positional or keyword arguments. The callable
will be called with those positional and keyword arguments after the test has ended.
Cleanup actions are called in the reverse order in which they are added, and are called regardless of whether the test
passed, failed, or raised an uncaught exception. To fix the above test, we might write something similar to:
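addCleanup comes from testtools, which AutopilotTestCase builds on, so the same API is available on the standard library's unittest.TestCase. A runnable sketch of the fixed test (the file name and contents are illustrative, and tempfile.gettempdir() stands in for /tmp):

```python
import os
import tempfile
import unittest


class DataFileTests(unittest.TestCase):

    def make_data_file(self, contents):
        # Write the file, and register a cleanup that removes it once the
        # test ends, whether it passes or fails.
        path = os.path.join(tempfile.gettempdir(), 'datafile')
        with open(path, 'w') as f:
            f.write(contents)
        self.addCleanup(os.remove, path)
        return path

    def test_reads_data_file(self):
        path = self.make_data_file('Test contents')
        with open(path) as f:
            self.assertEqual(f.read(), 'Test contents')


suite = unittest.TestLoader().loadTestsFromTestCase(DataFileTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```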
Note that by having the code to generate the /tmp/datafile file on disk in a separate method, the test itself can ignore
the fact that these resources need to be cleaned up. This makes the tests cleaner and easier to read.
Test Scenarios
Occasionally test authors will find themselves writing multiple tests that differ in one or two subtle ways. For example,
imagine a hypothetical test case that tests a dictionary application. The author wants to test that certain words return
no results. Without using test scenarios, there are two basic approaches to this problem. The first is to create many test
cases, one for each specific scenario (don’t do this):
The main problem here is that there’s a lot of typing in order to change exactly one thing (and this hypothetical test
is deliberately short for clarity; imagine a 100-line test case!). Another approach is to make the entire thing one
large test (don’t do this either):
This approach makes it easier to add new input strings, but what happens when just one of the input strings stops
working? It becomes very hard to find out which input string is broken, and the first string that breaks will prevent the
rest of the test from running, since tests stop running when the first assertion fails.
The solution is to use test scenarios. A scenario is a class attribute that specifies one or more scenarios to run on each
of the tests. This is best demonstrated with an example:
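A hedged sketch of such a test case; the dictionary application and its methods (enter_search_string, get_results) are hypothetical:

```python
from autopilot.matchers import Eventually
from autopilot.testcase import AutopilotTestCase
from testtools.matchers import Equals


class DictionarySearchTests(AutopilotTestCase):

    # Each tuple is (scenario name, dict of attributes to set on the test).
    scenarios = [
        ('empty string', {'input': ''}),
        ('whitespace', {'input': '   '}),
        ('punctuation', {'input': '!!!'}),
    ]

    def test_bad_strings_return_no_results(self):
        # self.input is injected from the scenario dictionary.
        self.app.enter_search_string(self.input)
        self.assertThat(self.app.get_results, Eventually(Equals([])))
```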
Autopilot will run the test_bad_strings_return_no_results once for each scenario. On each test, the values from the
scenario dictionary will be mapped to attributes of the test case class. In this example, that means that the ‘input’
dictionary item will be mapped to self.input. Using scenarios has several benefits over either of the other strategies
outlined above:
Tests that use scenarios will appear as separate tests in the test output. The test id will be the normal test id, followed
by the scenario name in parentheses. So in the example above, the list of test ids will be:
Since scenarios are treated as separate tests, it’s easier to debug which scenario has broken, and re-run just that one
scenario.
Scenarios get applied before the setUp method, which means you can use scenario values in the setUp and tearDown
methods. This makes them more flexible than either of the approaches listed above.
Test Logging
Autopilot integrates the python logging framework into the AutopilotTestCase class. Various autopilot components
write log messages to the logging framework, and all these log messages are attached to each test result when the test
completes. By default, these log messages are shown when a test fails, or if autopilot is run with the -v option.
Test authors are encouraged to write to the python logging framework whenever doing so would make failing tests
clearer. To do this, there are a few simple steps to follow:
Import the logging module:
Create a logger object. You can either do this at the file level scope, or within a test case class:
Log some messages. You may choose which level the messages should be logged at. For example:
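A runnable sketch of the three steps, using only the standard library; the log messages themselves are invented:

```python
import io
import logging

# Steps 1 and 2: import the module and create a file-level logger object.
logger = logging.getLogger(__name__)

# Capture output in a string so this example is self-contained; in a real
# autopilot run, the test runner attaches these records to the test result.
stream = io.StringIO()
root = logging.getLogger()
root.addHandler(logging.StreamHandler(stream))
root.setLevel(logging.DEBUG)

# Step 3: log messages at whichever levels you find useful.
logger.debug("searching for the save button")
logger.info("launching the application under test")
logger.warning("fell back to the default timeout")

output = stream.getvalue()
```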
Note
To view debug-level log messages, pass -vv when running autopilot.
For more information on the various logging levels, see the python documentation on Logger objects. All messages
logged in this way will be picked up by the autopilot test runner. This is a valuable tool when debugging failing tests.
Environment Patching
Sometimes you need to change the value of an environment variable for the duration of a single test. It is important
that the variable is changed back to its original value when the test has ended, so future tests are run in a pristine
environment. The fixtures module includes a fixtures.EnvironmentVariable fixture which takes care of this for you.
For example, to set the FOO environment variable to “Hello World” for the duration of a single test, the code would
look something like this:
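fixtures.EnvironmentVariable belongs to the third-party fixtures package, so the mechanics it automates are sketched here with the standard library instead; the variable name and helper method are illustrative:

```python
import os
import unittest


class EnvironmentPatchTests(unittest.TestCase):

    def patch_environment(self, name, value):
        # Remember the old value (or its absence) and register a cleanup to
        # restore it, mirroring what fixtures.EnvironmentVariable does.
        old = os.environ.get(name)
        os.environ[name] = value
        if old is None:
            self.addCleanup(os.environ.pop, name, None)
        else:
            self.addCleanup(os.environ.__setitem__, name, old)

    def test_foo_is_patched(self):
        self.patch_environment("FOO", "Hello World")
        self.assertEqual(os.environ["FOO"], "Hello World")


suite = unittest.TestLoader().loadTestsFromTestCase(EnvironmentPatchTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```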
The fixtures.EnvironmentVariable fixture will revert the value of the environment variable to its initial value, or will
delete it altogether if the environment variable did not exist when fixtures.EnvironmentVariable was instantiated. This
happens in the cleanup phase of the test execution.
Custom Assertions
Autopilot provides additional custom assertion methods within the AutopilotTestCase base class. These assertion
methods can be used for validating the visible window stack and also properties on objects whose attributes do not
have the wait_for method, such as Window objects (See In Proxy Classes for more information about wait_for).
autopilot.testcase.AutopilotTestCase.assertVisibleWindowStack
This assertion allows the test to check the start of the visible window stack by passing an iterable of Window
instances. Minimised windows will be ignored:
Note
The process manager is only available on environments that use bamf, i.e. desktop running Unity 7. There is currently
no process manager for any other platform.
autopilot.testcase.AutopilotTestCase.assertProperty
This assertion allows the test to check properties of an object that does not have a wait_for method (i.e. objects that
do not come from the autopilot DBus interface). For example the Window object:
Note
assertProperties is a synonym for this method.
Note
The process manager is only available on environments that use bamf, i.e. desktop running Unity 7. There is currently
no process manager for any other platform.
autopilot.testcase.AutopilotTestCase.assertProperties
See autopilot.testcase.AutopilotTestCase.assertProperty.
Note
assertProperty is a synonym for this method.
Platform Selection
Autopilot provides functionality that allows the test author to determine which platform a test is running on so that
they may either change behaviour within the test or skip the test altogether.
For examples and API documentation please see autopilot.platform.
Gestures and Multi-touch
Autopilot provides API support for both single-touch and multi-touch gestures which can be used to simulate user
input required to drive an application or system under test. These APIs should be used in conjunction with Platform
Selection to detect platform capabilities and ensure the correct input API is being used.
Single-Touch
autopilot.input.Touch provides single-touch input gestures, which includes:
tap which can be used to tap a specified [x,y] point on the screen
drag which will drag between 2 [x,y] points and can be customised by altering the speed of the action
press, release and move operations which can be combined to create custom gestures
tap_object can be used to tap the center point of a given introspection object, where the screen co-ordinates are taken
from one of several properties of the object
Autopilot additionally provides the class autopilot.input.Pointer as a means to provide a single unified API that can be
used with both Mouse input and Touch input. See the documentation for this class for further details, as not all
operations can be performed on both of these input types.
This example demonstrates swiping from the center of the screen to the left edge, which could for example be used in
Ubuntu Touch to swipe a new scope into view.
First calculate the center point of the screen (see: Display Information):
Then perform the swipe operation from the center of the screen to the left edge, using autopilot.input.Pointer.drag:
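Both steps can be sketched as follows; this is an illustration, and the exact Display accessor names may differ between autopilot versions:

```python
from autopilot.display import Display
from autopilot.input import Pointer, Touch

# Calculate the center point of the screen.
display = Display.create()
center_x = display.get_screen_width() // 2
center_y = display.get_screen_height() // 2

# Drag from the center of the screen to the left edge.
pointer = Pointer(Touch.create())
pointer.drag(center_x, center_y, 0, center_y)
```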
Multi-Touch
autopilot.gestures provides support for multi-touch input which includes:
autopilot.gestures.pinch provides a 2-finger pinch gesture centered around an [x,y] point on the screen
This example demonstrates how to use the pinch gesture, which could for example be used in the Ubuntu Touch web browser or gallery application to zoom in or out of the currently displayed content.
To zoom in, pinch vertically outwards from the center point by 100 pixels:
To zoom back out, pinch vertically 100 pixels back towards the center point:
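A sketch of both pinch calls; the signature pinch(center, vector_start, vector_end) and the center coordinates are assumptions to be checked against the autopilot.gestures documentation:

```python
from autopilot.gestures import pinch

center = (center_x, center_y)  # e.g. the middle of the displayed content

# Zoom in: both fingers move vertically outwards from the center by 100 px.
pinch(center, (0, 0), (0, 100))

# Zoom out: both fingers move 100 px back towards the center point.
pinch(center, (0, 100), (0, 0))
```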
Note
The multi-touch pinch method is intended for use on a touch enabled device. However, if run on a desktop environment
it will behave as if the mouse select button is pressed whilst moving the mouse pointer, for example to select some
text in a document.
Advanced Backend Picking
Several features in autopilot are provided by more than one backend. For example, the autopilot.input module contains
the Keyboard, Mouse and Touch classes, each of which can use more than one implementation depending on the
platform the tests are being run on.
For example, when running autopilot on a traditional ubuntu desktop platform, Keyboard input events are probably
created using the X11 client libraries. On a phone platform, X11 is not present, so autopilot will instead choose to
generate events using the kernel UInput device driver.
Other autopilot systems that make use of multiple backends include the autopilot.display and autopilot.process modules. Every class in these modules follows the same construction pattern:
Default Creation
By default, calling the create() method with no arguments will return an instance of the class that is appropriate to the
current platform. For example:
The code snippet above will create an instance of the Keyboard class that uses X11 on Desktop systems, and UInput
on other systems. On the rare occasion when test authors need to construct these objects themselves, we expect the
default creation pattern to be used.
Picking a Backend
Test authors may sometimes want to pick a specific backend. The possible backends are documented in the API
documentation for each class. For example, the documentation for the autopilot.input.Keyboard.create method says
there are three backends available: the X11 backend, the UInput backend, and the OSK backend. These backends can
be specified in the create method. For example, to specify that you want a Keyboard that uses X11 to generate its
input events:
Similarly, to specify that a UInput keyboard should be created:
Finally, for the Onscreen Keyboard:
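Taken together, the creation calls described above would look like this; which backends actually succeed depends on the platform:

```python
from autopilot.input import Keyboard

# Default: let autopilot pick the right backend for this platform.
kb = Keyboard.create()

# Or request a specific backend (names are case sensitive):
kb_x11 = Keyboard.create('X11')
kb_uinput = Keyboard.create('UInput')
kb_osk = Keyboard.create('OSK')
```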
Warning
Care must be taken when specifying specific backends. There is no guarantee that the backend you ask for is going to
be available across all platforms. For that reason, using the default creation method is encouraged.
Warning
The OSK backend has some known implementation limitations, please see autopilot.input.Keyboard.create method
documentation for further details.
Possible Errors when Creating Backends
Lots of things can go wrong when creating backends with the create method.
If autopilot is unable to create any backends for your current platform, a RuntimeError exception will be raised. Its
message attribute will contain the error message from each backend that autopilot tried to create.
If a preferred backend was specified, but that backend doesn’t exist (probably the test author mis-spelled it), a RuntimeError will be raised:
In this example, uinput was mis-spelled (backend names are case sensitive). Specifying the correct backend name
works as expected:
Finally, if the test author specifies a preferred backend, but that backend could not be created, an autopilot.BackendException will be raised. This is an important distinction to understand: while calling create() with
no arguments will try more than one backend, specifying a backend to create will only try to create that one backend
type. The BackendException instance will contain the original exception raised by the backend in its original_exception
attribute. In this example, we try to create a UInput keyboard, which fails because we don’t have the correct permissions (this is something that autopilot usually handles for you):
Keyboard Backends
A quick introduction to the Keyboard backends
Each backend has a different method of operating behind the scenes to provide the Keyboard interface.
Here is a quick overview of how each backend works.
Backend
Description
X11
The X11 backend generates X11 events using a mock input device which it then syncs with X to actually action the
input.
Uinput
The UInput backend injects events directly in to the kernel using the UInput device driver to produce input.
OSK
The Onscreen Keyboard backend uses the GUI pop-up keyboard to enter input. Using a pointer object it taps on the
required keys to get the expected output.
Limitations of the different Keyboard backends
While every effort has been made so that the Keyboard devices act the same regardless of which backend or platform
is in use, the simple fact is that there can be some technical limitations for some backends.
Some of these limitations are hidden when using the “create” method and won’t cause any concern (e.g. the X11 backend
on desktop, UInput on an Ubuntu Touch device), while others will raise exceptions (which are fully documented in the
API docs).
Here is a list of known limitations:
X11
Only available on desktop platforms
X11 isn’t available on Ubuntu Touch devices
UInput
Requires correct device access permissions
The user (or group) running the autopilot tests needs read/write access to the UInput device (usually /dev/uinput).
Specific kernel support is required
The system running the tests must use a kernel that includes UInput support, and must have the module loaded.
OSK
Currently only available on Ubuntu Touch devices
At the time of writing this the OSK/Ubuntu Keyboard is only supported/available on the Ubuntu Touch devices. It is
possible that it will be available on the desktop in the near future.
Unable to type ‘special’ keys e.g. Alt
This shouldn’t be an issue as applications running on Ubuntu Touch devices will be using the expected patterns of use
on these platforms.
The following methods have limitations or are not implemented:
autopilot.input.Keyboard.press: Raises NotImplementedError if called.
autopilot.input.Keyboard.release: Raises NotImplementedError if called.
autopilot.input.Keyboard.press_and_release: can only handle single keys/characters. Raises either ValueError if
passed more than a single character key or UnsupportedKey if passed a key that is not supported by the OSK backend
(or the current language layout).
Process Control
The autopilot.process module provides the ProcessManager class, a high-level interface for managing applications and windows during testing. Features of the ProcessManager allow the user to start and stop applications
easily and to query the current state of an application and its windows. It also provides automatic cleanup for apps that
have been launched during testing.
Note
ProcessManager is not intended for introspecting an application’s object tree; for that, see Launching Applications.
It also does not provide a method for interacting with an application’s UI or specific features.
Properties of an application and its windows can be accessed using the classes Application and Window, which also
allow the window instance to be focused and closed.
A list of known applications is defined in KNOWN_APPS and these can easily be referenced by name. This list can
also be updated using register_known_application and unregister_known_application for easier use during the test.
To use the ProcessManager, the static create method should be called; it returns an initialised object instance.
A simple example to launch the gedit text editor and check it is in focus:
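A sketch of that example; 'Text Editor' is the KNOWN_APPS entry for gedit, and the is_active check is an assumption about the Application class to verify against the autopilot.process documentation:

```python
from autopilot.process import ProcessManager

process_manager = ProcessManager.create()

# Launch gedit via its KNOWN_APPS name; it is cleaned up automatically.
app = process_manager.start_app('Text Editor')

# Check the application reports itself as active/focused.
assert app.is_active
```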
Note
ProcessManager is only available on environments that use bamf, i.e. desktop running Unity 7. There is currently no
process manager for any other platform.
Display Information
Autopilot provides the autopilot.display module to get information about the displays currently being used. This
information can be used in tests to implement gestures or input events that are specific to the current test environment.
For example a test could be run on a desktop environment with multiple screens, or on a variety of touch devices that
have different screen sizes.
The user must call the static create method to get an instance of the Display class.
This example shows how to get the size of each available screen, which could be used to calculate coordinates for a
swipe or input event (See the autopilot.input module for more details about generating input events).:
Writing Custom Proxy Classes
By default, autopilot will generate an object for every introspectable item in your application under test. These are
generated on the fly, and derive from ProxyBase. This gives you the usual methods of selecting other nodes in the
object tree, as well as the means to inspect all the properties in that class.
However, sometimes you want to customize the class used to create these objects. The most common reason to want to
do this is to provide methods that make it easier to inspect or interact with these objects. Autopilot allows test authors
to provide their own custom classes, through a couple of simple steps:
First, you must define your own base class, to be used by all custom proxy objects in your test suite. This base class
can be empty, but must derive from ProxyBase. An example class might look like this:
For Ubuntu applications using Ubuntu UI Toolkit objects, you should derive your custom proxy object from UbuntuUIToolkitCustomProxyObjectBase. This base class is also derived from ProxyBase and is used for all Ubuntu UI
Toolkit custom proxy objects. So if you are introspecting objects from Ubuntu UI Toolkit then this is the base class to
use.
Define the classes you want autopilot to use, instead of the default. The simplest method is to give the class the same
name as the type you wish to override. For example, if you want to define your own custom class to be used every
time autopilot generates an instance of a ‘QLabel’ object, the class definition would look like this:
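A sketch of both steps together; the base class name and the get_text helper are illustrative, not part of autopilot:

```python
from autopilot.introspection import ProxyBase


class CustomProxyObjectBase(ProxyBase):
    """Base class for all custom proxy objects in this test suite."""


class QLabel(CustomProxyObjectBase):
    """Used by autopilot whenever it generates a proxy for a QLabel."""

    def get_text(self):
        # Hypothetical convenience wrapper around the mirrored 'text'
        # property of the application's QLabel.
        return self.text
```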
If you wish to implement more specific selection criteria, your class can override the validate_dbus_object method,
which takes as arguments the dbus path and state. For example:
This method should return True if the object matches this custom proxy class, and False otherwise. If more than one
custom proxy class matches an object, a ValueError will be raised at runtime.
An example using Ubuntu UI Toolkit which would be used to swipe up a PageWithBottomEdge object to reveal its
bottom edge menu could look like this:
Pass the custom proxy base class as an argument to the launch_test_application method on your test class. This base
class should be the same base class that is used to write all of your custom proxy objects:
For applications using objects from Ubuntu UI Toolkit, the emulator_base parameter should be:
You can pass the custom proxy class to methods like select_single instead of a string. So, for example, the following
is a valid way of selecting the QLabel instances in an application:
If you are introspecting an application that already has a custom proxy base class defined, then this class can simply
be imported and passed to the appropriate application launcher method. See launching applications for more details
on launching an application for introspection. This will allow you to call all of the public methods of the application’s
proxy base class directly in your test.
This example will run on desktop and uses the webbrowser application to navigate to a url using the base class
go_to_url() method:
Launching Applications
Applications can be launched inside of a testcase using the application launcher methods from the AutopilotTestCase
class. The exact method required will depend upon the type of application being launched:
launch_test_application is used to launch regular executables
launch_upstart_application is used to launch upstart-based applications
launch_click_package is used to launch applications inside a click package
This example shows how to launch an installed click application from within a test case:
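A sketch of such a test; the click package name is illustrative:

```python
from autopilot.testcase import AutopilotTestCase


class ClickAppTests(AutopilotTestCase):

    def test_launch(self):
        # Launch an installed click package by its package id; autopilot
        # returns a proxy object for the application's introspection tree.
        app = self.launch_click_package('com.ubuntu.dropping-letters')
```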
Outside of testcase classes, the NormalApplicationLauncher, UpstartApplicationLauncher, and ClickApplicationLauncher fixtures can be used, e.g.:
or a similar example for an installed click package:
Within a fixture or a testcase, self.useFixture can be used:
or for an installed click package:
Additional options can also be specified to set a custom addDetail method, a custom proxy base, or a custom dbus bus
with which to patch the environment:
Note
You must pass the test case’s ‘addDetail’ method to these application launch fixtures if you want application logs to
be attached to the test result. This is due to the way fixtures are cleaned up, and is unavoidable.
The main qml file of some click applications can also be launched directly from source. This can be done using the
qmlscene application directly on the target application’s main qml file. This example uses launch_test_application
method from within a test case:
However, using this method it will not be possible to return an application specific custom proxy object, see Writing
Custom Proxy Classes.
Installing Autopilot
Contents
Installing Autopilot
Ubuntu
Other Linux Distributions
Autopilot is in continuous development, and the best way to get the latest version of autopilot is to run the latest
Ubuntu development image. The autopilot developers traditionally support the Ubuntu release immediately prior to
the development release via the autopilot PPA.
Ubuntu
I am running the latest development image!
In that case you can install autopilot directly from the repository and know you are getting the latest release. Check
out the packages below.
I am running a stable version of Ubuntu!
You may install the version of autopilot in the archive directly, however it will not be up to date. Instead, you should
add the latest autopilot ppa to your system (as of this writing, that is autopilot 1.5).
To add the PPA to your system, run the following command:
Once the PPA has been added to your system, you should be able to install the autopilot packages below.
Which packages should I install?
Are you working on ubuntu touch applications? The autopilot-touch metapackage is for you:
If you are sticking with gtk desktop applications, install the autopilot-desktop metapackage instead:
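The corresponding install commands would be (package names as described above):

```shell
sudo apt-get install autopilot-touch      # Ubuntu Touch application testing
sudo apt-get install autopilot-desktop    # GTK desktop application testing
```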
Feel free to install both metapackages to ensure you have support for all autopilot tests.
Other Linux Distributions
You may have to download the source code and either run from source or build the packages locally. Your best bet is
to ask in the autopilot IRC channel (see Q. Where can I get help / support?).
guides-running_ap
Autopilot test suites can be run with any python test runner (for example, the built-in testtools runner). However,
several autopilot features are only available if you use the autopilot runner.
List Tests
Autopilot can list all tests found within a particular module:
where <modulename> is the base name of the module you want to look at. The module must either be in the current
working directory, or be importable by python. For example, to list the tests inside autopilot itself, you can run:
Some results have been omitted for clarity.
The list command takes only one option:
-ro, --run-order
Display tests in the order in which they will be run, rather than in alphabetical order (the default).
Run Tests
Running autopilot tests is very similar to listing tests:
However, the run command has many more options to customize the run behavior:
-h, --help
Show this help message and exit.
-o OUTPUT, --output OUTPUT
Write the test result report to a file. Defaults to stdout. If given a directory instead of a file, the report is written to a
file in that directory named: <hostname>_<dd.mm.yyyy_HHMMSS>.log
-f FORMAT, --format FORMAT
Specify the desired output format. The default is “text”; the other option is “xml”, which produces JUnit XML output.
-r, --record
Record failing tests. Requires the ‘recordmydesktop’ application to be installed. Videos are stored in /tmp/autopilot.
-rd PATH, --record-directory PATH
Directory in which to put recorded tests (only if -r is specified).
-v, --verbose
If set, autopilot will output test log data to stderr during a test run.
Common use cases
Run autopilot and save the test log:
Run autopilot and record failing tests:
Videos are recorded as ogg-vorbis files with an .ogv extension, and are named with the id of the test that failed. All
videos are placed in the directory specified by the -rd option - in this case the current directory. If this option is
omitted, videos are placed in /tmp/autopilot/.
Save the test log in JUnit XML format:
Launching an Application to Introspect
In order to be able to introspect an application, it must first be launched with introspection enabled. Autopilot provides
the launch command to enable this:
The <application> parameter can be the full path to the application, or the name of an application located somewhere
in $PATH. For example:
autopilot3 launch gedit
A Qt example which passes on parameters to the application being launched:
Autopilot launch attempts to detect whether you are launching a Gtk or a Qt application so that it can enable the correct
libraries. If it is unable to determine this, you will need to specify the type of application by using the -i argument.
This allows the “Gtk” or “Qt” framework to be specified when launching the application. The default value (“Auto”)
will try to detect which interface to load automatically.
A typical error in this situation is “Error: Could not determine introspection type to use for application”, in which
case the -i option should be specified with the correct application framework type to fix the problem:
Once an application has launched with introspection enabled, it will be possible to launch autopilot vis and view the
introspection tree; see: Visualise Introspection Tree.
Visualise Introspection Tree
A very common thing to want to do while writing autopilot tests is see the structure of the application being tested. To
support this, autopilot includes a simple application to help visualize the introspection tree. To start it, make sure the
application you wish to test is running (see: Launching an Application to Introspect), and then run:
The result should be a window similar to below:
Selecting a connection from the drop-down box allows you to inspect different autopilot-supporting applications. If
Unity is running, the Unity connection should always be present. If other applications have been started with the
autopilot support enabled, they should appear in this list as well. Once a connection is selected, the introspection tree
is rendered in the left-hand pane, and the details of each object appear in the right-hand pane.
Autopilot vis also has the ability to search the object tree for nodes that match a given name (such as “LauncherController”, for example), and to draw a transparent overlay over a widget if it contains position information. These tools,
when combined, can make finding certain parts of an application’s introspection tree much easier.
guides-good_tests
This document is an introduction to writing good autopilot tests. This should be treated as additional material on
top of all the things you’d normally do to write good code. Put another way: test code follows all the same rules as
production code - it must follow the coding standards, and be of a professional quality.
Several points in this document are written with respect to the unity autopilot test suite. This is incidental, and doesn’t
mean that these points do not apply to other test suites!
Write Expressive Tests
Unit tests are often used as a reference for how your public API should be used. Functional (Autopilot) tests are no
different: they can be used to figure out how your application should work from a functional standpoint. However, this
only works if your tests are written in a clear, concise, and most importantly expressive style. There are many things
you can do to make your tests easier to read:
Pick Good Test Case Class Names
Pick a name that encapsulates all the tests in the class, but is as specific as possible. If necessary, break your tests into
several classes, so your class names can be more specific. This is important because when a test fails, the test id is the
primary means of identifying the failure. The more descriptive the test id is, the easier it is to find the fault and fix the
test.
Pick Good Test Case Method Names
Similar to picking good test case class names, picking good method names makes your test id more descriptive. We
recommend writing very long test method names, for example:
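A sketch with a hypothetical class and method name (the behaviour being tested is illustrative, not taken from the real Unity suite):

```python
import unittest


class LauncherSwitcherTests(unittest.TestCase):
    """Hypothetical test class showing a descriptive test method name."""

    def test_switcher_next_leaves_shortcut_hints_hidden(self):
        # The method name alone tells a reader exactly which behaviour is
        # covered, so a failing test id points straight at the broken feature.
        self.assertTrue(True)  # placeholder body for illustration
```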
Write Docstrings
You should write docstrings for your tests. Often the test method is enough to describe what the test does, but an
English description is still useful when reading the test code. For example:
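A minimal example (the test itself is illustrative, not from the Unity suite):

```python
import unittest


class CalculatorTests(unittest.TestCase):
    def test_divide_by_zero_must_raise_zero_division_error(self):
        """Dividing by zero must raise ZeroDivisionError.

        The one-line summary restates the method name in plain English;
        the body can add detail such as a related bug number.
        """
        with self.assertRaises(ZeroDivisionError):
            1 / 0
```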
We recommend following PEP 257 when writing all docstrings.
Test One Thing Only
Tests should test one thing, and one thing only. Since we’re not writing unit tests, it’s fine to have more than one assert
statement in a test, but the test should test one feature only. How do you tell if you’re testing more than one thing?
There are two primary ways:
Can you describe the test in a single sentence without using words like ‘and’, ‘also’, etc? If not, you should consider
splitting your tests into multiple smaller tests.
Tests usually follow a simple pattern:
a. Set up the test environment.
b. Perform some action.
c. Test things with assert statements.
If you feel you’re repeating steps ‘b’ and ‘c’ you’re likely testing more than one thing, and should consider splitting
your tests up.
Good Example:
This test tests one thing only. Its three lines match perfectly with the typical three stages of a test (see above), and it
only tests for things that it’s supposed to. Remember that it’s fine to assume that other parts of unity work as expected,
as long as they’re covered by an autopilot test somewhere else - that’s why we don’t need to verify that the dash really
did open when we called self.dash.ensure_visible().
Fail Well
Make sure your tests test what they’re supposed to. It’s very easy to write a test that passes. It’s much more difficult to
write a test that only passes when the feature it’s testing is working correctly, and fails otherwise. There are two main
ways to achieve this:
Write the test first. This is easy to do if you’re trying to fix a bug in Unity. In fact, having a bug that’s reproducible via
an autopilot test will help you fix the bug as well. Once you think you have fixed the bug, make sure the autopilot test
you wrote now passes. The general workflow will be:
Branch unity trunk.
Write autopilot test that reproduces the bug.
Commit.
Write code that fixes the bug.
Verify that the test now passes.
Commit. Push. Merge.
Celebrate!
If you’re writing tests for a bug-fix that’s already been written but is waiting on tests before it can be merged, the
workflow is similar but slightly different:
Branch unity trunk.
Write autopilot test that reproduces the bug.
Commit.
Merge code that supposedly fixes the bug.
Verify that the test now passes.
Commit. Push. Supersede the original merge proposal with your branch.
Celebrate!
Think about design
Much in the same way you might choose a functional or object-oriented paradigm for a piece of code, a test suite
can benefit from a good design pattern. One such design pattern is the page object model. The page object
model can reduce test case complexity and allow the test suite to grow and easily adapt to changes within the underlying
application. Check out Page Object Pattern.
Test Length
Tests should be short - as short as possible while maintaining readability. Longer tests are harder to read, harder to
understand, and harder to debug. Long tests are often symptomatic of several possible problems:
Your test requires complicated setup that should be encapsulated in a method or function.
Your test is actually several tests all jammed into one large test.
Bad Example:
This test can be simplified into the following:
Here’s what we changed:
Removed the set_unity_option lines, as they didn’t affect the test results at all.
Removed assertions that were duplicated from other tests. For example, there’s already an autopilot test that ensures
that new applications have their title displayed on the panel.
With a bit of refactoring, this test could be even smaller (the launcher proxy classes could have a method to click an
icon given a desktop id), but this is now perfectly readable and understandable within a few seconds of reading.
Good docstrings
Test docstrings are used to communicate to other developers what the test is supposed to be testing. Test Docstrings
must:
Conform to PEP 8 and PEP 257 guidelines.
Avoid words like “should” in favor of stronger words like “must”.
Contain a one-line summary of the test.
Additionally, they should:
Include the launchpad bug number (if applicable).
Good Example:
Within the context of the test case, the docstring is able to explain exactly what the test does, without any ambiguity.
In contrast, here’s a poorer example:
Bad Example:
The docstring explains what the desired outcome is, but not how we’re testing it. This style of sentence assumes
test success, which is not what we want! A better version of this code might look like this:
The difference between these two is subtle, but important.
Test Readability
The most important attribute for a test is that it is correct - it must test what it’s supposed to test. The second most
important attribute is that it is readable. Tests should be able to be examined by themselves by someone other than the
test author without any undue hardship. There are several things you can do to improve test readability:
Don’t abuse the setUp() method. It’s tempting to put code that’s common to every test in a class into the setUp
method, but it leads to tests that are not readable by themselves. For example, this test uses the setUp method to start
the launcher switcher, and tearDown to cancel it:
Bad Example:
This leads to a shorter test (which we’ve already said is a good thing), but the test itself is incomplete. Without scrolling
up to the setUp and tearDown methods, it’s hard to tell how the launcher switcher is started. The situation gets even
worse when test classes derive from each other, since the code that starts the launcher switcher may not even be in the
same class!
A much better solution in this example is to initiate the switcher explicitly, and use addCleanup() to cancel it when the
test ends, like this:
Good Example:
The code is longer, but it’s still very readable. It also follows the setup/action/test convention discussed above.
Appropriate uses of the setUp() method include:
Initialising test class member variables.
Setting unity options that are required for the test. For example, many of the switcher autopilot tests set a unity option
to prevent the switcher going into details mode after a timeout. This isn’t part of the test, but makes the test easier to
write.
Setting unity log levels. The unity log is captured after each test. Some tests may adjust the verbosity of different parts
of the Unity logging tree.
Put common setup code into well-named methods. If the “setup” phase of a test is more than a few lines long, it makes
sense to put this code into its own method. Pay particular attention to the name of the method you use. You need to
make sure that the method name is explicit enough to keep the test readable. Here’s an example of a test that doesn’t
do this:
Bad Example:
In contrast, we can refactor the test to look a lot nicer:
Good Example:
The test is now shorter, and the launch_test_apps method can be re-used elsewhere. Importantly - even though I’ve
hidden the implementation of the launch_test_apps method, the test still makes sense.
Hide complicated assertions behind custom assertXXX methods or custom matchers. If you find that you frequently
need to use a complicated assertion pattern, it may make sense to either:
Write a custom matcher. As long as you follow the protocol laid down by the testtools.matchers.Matcher class,
you can use a hand-written Matcher just like you would use an ordinary one. Matchers should be written in the
autopilot.matchers module if they’re likely to be reusable outside of a single test, or as local classes if they’re specific
to one test.
Write custom assertion methods. For example:
This test uses a custom method named assertSearchText that hides the complexity involved in getting the dash search
text and comparing it to the given parameter.
Prefer wait_for and Eventually to sleep
Early autopilot tests relied on extensive use of the python sleep call to halt tests long enough for unity to change its
state before the test continued. Previously, an autopilot test might have looked like this:
Bad Example:
This test uses two sleep calls. The first makes sure the dash has had time to open before the test continues, and the
second makes sure that the dash has had time to respond to our key presses before we start testing things.
There are several issues with this approach:
On slow machines (like a jenkins instance running on a virtual machine), we may not be sleeping long enough. This
can lead to tests failing on jenkins that pass on developers’ machines.
On fast machines, we may be sleeping too long. This won’t cause the test to fail, but it does make running the test
suite longer than it has to be.
There are two solutions to this problem:
In Tests
Tests should use the Eventually matcher. This can be imported as follows:
The Eventually matcher works on all attributes in a proxy class that derives from UnityIntrospectableObject (at the
time of writing that is almost all the autopilot unity proxy classes).
The Eventually matcher takes a single argument, which is another testtools matcher instance. For example, the bad
assertion from the example above could be rewritten like so:
Since we can use any testtools matcher, we can also write code like this:
Note that you can pass any object that follows the testtools matcher protocol (so you can write your own matchers, if
you like).
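Under the hood, both Eventually and wait_for boil down to polling a value until a condition is satisfied or a timeout expires. The following stand-alone sketch shows the idea; it is not the real autopilot API, and the names are illustrative:

```python
import time


def wait_for_value(get_value, matches, timeout=10.0, interval=0.1):
    """Poll get_value() until matches(value) is true, or raise on timeout.

    get_value: callable returning the current state (e.g. lambda: dash.visible)
    matches:   predicate standing in for a testtools matcher
    """
    deadline = time.monotonic() + timeout
    while True:
        value = get_value()
        if matches(value):
            return value
        if time.monotonic() >= deadline:
            raise AssertionError(
                'timed out waiting for condition; last value: %r' % (value,))
        time.sleep(interval)
```
Unlike a fixed sleep, this returns as soon as the condition holds on fast machines, and keeps waiting (up to the timeout) on slow ones.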
In Proxy Classes
Proxy classes are not test cases, and do not have access to the self.assertThat method. However, we want proxy class
methods to block until unity has had time to process the commands given. For example, the ensure_visible method on
the Dash controller should block until the dash really is visible.
To achieve this goal, all attributes on unity proxy classes have been patched with a wait_for method that takes a
testtools matcher (just like Eventually - in fact, the Eventually matcher just calls wait_for under the hood). For
example, previously the ensure_visible method on the Dash controller might have looked like this:
Bad Example:
In this example we’re assuming that two seconds is long enough for the dash to open. To use the wait_for feature, the
code looks like this:
Good Example:
Note that wait_for assumes you want to use the Equals matcher if you don’t specify one. Here’s another example
where we’re using it with a testtools matcher:
Scenarios
Autopilot uses the python-testscenarios package to run a test multiple times in different scenarios. A good example
of scenarios in use is the launcher keyboard navigation tests: each test is run once with the launcher hide mode set to
‘always show launcher’, and again with it set to ‘autohide launcher’. This allows test authors to write their test once
and have it execute in multiple environments.
In order to use test scenarios, the test author must create a list of scenarios and assign them to the test case’s scenarios
class attribute. The autopilot ibus test case classes use scenarios in a very simple fashion:
Good Example:
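A sketch of what such a scenario list can look like; the input/result values here are hypothetical stand-ins, not the real IBus test data:

```python
import unittest


class IBusTests(unittest.TestCase):
    # Each tuple is (scenario name, attributes set on the test instance).
    # The input/result pairs below are illustrative only.
    scenarios = [
        ('simple_lowercase', dict(input='abc', result='abc')),
        ('simple_uppercase', dict(input='ABC', result='ABC')),
        ('digits',           dict(input='123', result='123')),
        ('punctuation',      dict(input='!?',  result='!?')),
        ('mixed',            dict(input='a1!', result='a1!')),
    ]

    def test_simple_input_dash(self):
        # With the python-testscenarios loader in use, self.input and
        # self.result are filled in from each scenario before this runs.
        pass
```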
This is a simplified version of the IBus tests. In this case, the test_simple_input_dash test will be called 5 times. Each
time, the self.input and self.result attributes will be set to the values in the scenario list. The first part of the scenario
tuple is the scenario name - this is appended to the test id, and can be whatever you want.
Important
It is important to notice that the test does not change its behavior depending on the scenario it is run under. Exactly the
same steps are taken - the only difference in this case is what gets typed on the keyboard, and what result is expected.
Scenarios are applied before the test’s setUp or tearDown methods are called, so it’s safe (and indeed encouraged) to
set up the test environment based on these attributes. For example, you may wish to set certain unity options for the
duration of the test based on a scenario parameter.
Multiplying Scenarios
Scenarios are very helpful, but only represent a single dimension of parameters. For example, consider the launcher
keyboard navigation tests. We may want several different scenarios to come into play:
A scenario that controls whether the launcher is set to ‘autohide’ or ‘always visible’.
A scenario that controls which monitor the test is run on (in case we have multiple monitors configured).
We can generate two separate scenario lists to represent these two scenario axes, and then produce the dot-product of
the two lists like this:
(Please ignore the fact that we’re assuming that we always have two monitors!)
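The python-testscenarios package ships a multiply_scenarios helper for exactly this; a minimal stdlib equivalent, with hypothetical unity option names, looks like:

```python
from itertools import product


def multiply_scenarios(*scenario_lists):
    """Combine several scenario axes into their cross product."""
    combined = []
    for combo in product(*scenario_lists):
        # Join the per-axis names into one scenario name, and merge
        # the per-axis parameter dictionaries into one.
        name = ','.join(axis_name for axis_name, _ in combo)
        params = {}
        for _, axis_params in combo:
            params.update(axis_params)
        combined.append((name, params))
    return combined


launcher_modes = [
    ('autohide',       dict(launcher_autohide=True)),
    ('always_visible', dict(launcher_autohide=False)),
]
monitors = [
    ('monitor_0', dict(launcher_monitor=0)),
    ('monitor_1', dict(launcher_monitor=1)),
]

scenarios = multiply_scenarios(launcher_modes, monitors)
```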
In the test class’s setUp method, we can then set the appropriate unity option and make sure we’re using the correct
launcher:
Which allows us to write tests that work automatically in all the scenarios:
This works fine. So far we’ve not done anything to cause undue pain.... until we decide that we want to extend the
scenarios with an additional axis:
Now we have a problem: Some of the generated scenarios won’t make any sense. For example, one such scenario will
be (autohide, monitor_1, launcher on primary monitor only). If monitor 0 is the primary monitor, this will leave us
running launcher tests on a monitor that doesn’t contain a launcher!
There are two ways to get around this problem, and they both lead to terrible tests:
Detect these situations and skip the test. This is bad for several reasons - first, skipped tests should be viewed with the
same level of suspicion as commented out code. Test skips should only be used in exceptional circumstances. A test
skip in the test results is just as serious as a test failure.
Detect the situation in the test, and run different code using an if statement. For example, we might decide to do this:
As a general rule, tests shouldn’t have assert statements inside an if statement unless there’s a very good reason for
doing so.
Scenarios can be useful, but we must be careful not to abuse them. It is far better to spend more time typing and end
up with clear, readable tests than it is to end up with fewer, less readable tests. Like all code, tests are read far more
often than they’re written.
Do Not Depend on Object Ordering
Calls such as select_many return several objects at once. These objects are explicitly unordered, and test authors must
take care not to make assumptions about their order.
Bad Example:
This code may work initially, but there’s absolutely no guarantee that the order of objects won’t change in the future.
A better approach is to select the individual components you need:
Good Example:
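A sketch of the difference, using plain dictionaries as stand-ins for proxy objects; select_many and select_single here are hypothetical fakes, not the autopilot methods themselves:

```python
# Fake widget data standing in for an application's introspection tree.
widgets = [
    dict(objectName='cancel_button', label='Cancel'),
    dict(objectName='ok_button',     label='OK'),
]


def select_many():
    # Like autopilot's select_many, the order of results is NOT guaranteed.
    return list(widgets)


def select_single(objectName):
    # Selecting by a property is stable even if ordering changes.
    matches = [w for w in widgets if w['objectName'] == objectName]
    assert len(matches) == 1
    return matches[0]


# Bad: depends on an ordering that may change at any time.
# ok = select_many()[1]

# Good: selects the individual component by one of its properties.
ok = select_single(objectName='ok_button')
```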
This code will continue to work in the future.
guides-page_object
Contents
Page Object Pattern
Introducing the Page Object Pattern
1. The public methods represent the services that the page offers.
2. Try not to expose the internals of the page.
3. Methods return other PageObjects
4. Assertions should exist only in tests
5. Need not represent an entire page
6. Actions which produce multiple results should have a test for each result
Introducing the Page Object Pattern
Automated testing of an application through the Graphical User Interface (GUI) is inherently fragile. These tests
require regular review and attention during the development cycle. This is known as Interface Sensitivity (“even minor
changes to the interface can cause tests to fail”). Utilizing the page object pattern alleviates some of the problems
stemming from this fragility, allowing us to do automated user acceptance testing (UAT) in a sustainable manner.
The Page Object Pattern comes from the Selenium community and is the best way to turn a flaky and unmaintainable
user acceptance test into a stable and useful part of your release process. A page is what’s visible on the screen at a
single moment. A user story consists of a user jumping from page to page until they achieve their goal. Thus pages
are modeled as objects following these guidelines:
The public methods represent the services that the page offers.
Try not to expose the internals of the page.
Methods return other PageObjects.
Assertions should exist only in tests
Objects need not represent the entire page.
Actions which produce multiple results should have a test for each result
Let’s take the page objects of the Ubuntu Clock App as an example, with some simplifications. This application is
written in QML and JavaScript using the Ubuntu SDK.
1. The public methods represent the services that the page offers.
This application has a stopwatch page that lets users measure elapsed time. It offers services to start, stop and reset
the watch, so we start by defining the stop watch page object as follows:
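A sketch of such a page object; the class and method bodies are illustrative, since the real Clock App helpers drive the GUI through autopilot:

```python
class Stopwatch(object):
    """Page object for the stopwatch page.

    The public methods are the services the page offers. How they are
    performed (which widgets get tapped) is deliberately not exposed.
    """

    def start(self):
        raise NotImplementedError  # would tap the start button via autopilot

    def stop(self):
        raise NotImplementedError  # would tap the stop button

    def reset(self):
        raise NotImplementedError  # would tap the reset button
```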
2. Try not to expose the internals of the page.
The internals of the page are more likely to change than the services it offers. A stopwatch will keep the same three
services we defined above even if the whole design changes. In this case, we reset the stop watch by clicking a button
on the bottom-left of the window, but we hide that as an implementation detail behind the public methods. In Python,
we can indicate that a method is for internal use only by adding a single leading underscore to its name. So, let’s
implement the reset_stopwatch method:
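A sketch of this encapsulation; the widget interaction is replaced by a stand-in flag, since the real method would drive the GUI via autopilot:

```python
class Stopwatch(object):
    def reset_stopwatch(self):
        # Public service: callers never learn how the reset is performed.
        self._click_reset_button()

    def _click_reset_button(self):
        # Internal detail (note the leading underscore): today this is a
        # button in the bottom-left of the window; the design may change.
        self.reset_was_clicked = True  # stand-in for a real widget tap
```
If the design changes, only _click_reset_button needs updating; every test that calls reset_stopwatch keeps working.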
Now if the designers go crazy and decide that it’s better to reset the stopwatch in a different way, we will have to make
the change in only one place to keep all the tests working. Remember that this type of test has Interface Sensitivity;
that’s unavoidable, but we can reduce the impact of interface changes with proper encapsulation, turning these tests
into a useful way to verify that a change in the GUI didn’t introduce any regressions.
3. Methods return other PageObjects
A UAT checks a user story. It involves the journey of the user through the system, moving from one page
to another. Let’s take a look at what a journey to reset the stopwatch looks like:
In our sample application, the first page that the user will encounter is the Clock. One of the things the user can do
from this page is to open the stopwatch page, so we model that as a service that the Clock page provides. Then return
the new page object that will be visible to the user after completing that step.
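A sketch of this chaining; the names are illustrative, and _switch_to_tab here is a stand-in that merely records the requested tab:

```python
class StopwatchPage(object):
    """Page object returned after navigating to the stopwatch tab."""


class ClockPage(object):
    def open_stopwatch(self):
        # The service navigates, then returns the page object the user
        # will see next, so calls can be chained as a user journey.
        self._switch_to_tab('StopwatchTab')
        return StopwatchPage()

    def _switch_to_tab(self, tab_name):
        # Stand-in for the real header-driven tab switch.
        self.current_tab = tab_name
```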
Now the return value of open_stopwatch will make available to the caller all the available services that the stopwatch
exposes to the user. Thus it can be chained as a user journey from one page to the other.
4. Assertions should exist only in tests
A well-written UAT consists of a sequence of steps or user actions and ends with a single assertion that verifies that
the user achieved their goal. The page objects are the helpers for the user-action part of the test, so it’s better to leave
the check for success out of them. With that in mind, a test for the reset of the stopwatch would look like this:
We have to add a new method to the stopwatch page object: get_time. But it only returns the state of the GUI as the
user sees it. We leave in the test method the assertion that checks it’s the expected value.
5. Need not represent an entire page
The objects we are modeling here can represent just a part of the page. We then build the entire page the user sees
by composing page parts. This way we can reuse test code for parts of the GUI that are reused within the
application or between different applications. As an example, take the _switch_to_tab(‘StopwatchTab’) method that
we are using to open the stopwatch page. The Clock application uses the Header component provided by the
Ubuntu SDK, as all other Ubuntu applications do. The Ubuntu SDK therefore also provides helpers to make user
acceptance testing of applications easier, and you will find an object like this:
This object just represents the header of the page, and inside the object we define the services that the header provides
to the users. If you dig into the full implementation of the Clock test class you will find that in order to open the
stopwatch page we end up calling Header methods.
6. Actions which produce multiple results should have a test for each result
According to guideline 3 (Methods return other PageObjects), we return page objects every time a user
action opens the option for new actions to execute. Sometimes the same action has different results depending on the
context or the values used for the action. For example, the Clock app has an Alarm page. On this page you can add new
alarms, but if you try to add an alarm for some time in the past, it will result in an error. So, we will have two different
tests that look something like this:
Take a look at the methods add_alarm and add_alarm_with_error. The first one returns the Alarm page again, where
the user can continue their journey or finish the test by checking the result. The second one returns the error dialog
that’s expected when you try to add an alarm with the wrong values.
Porting your autopilot tests
This document contains hints as to what is required to port a test suite from any version of autopilot to any newer
version.
Contents
Porting Autopilot Tests
A note on Versions
Porting to Autopilot v1.4.x
Gtk Tests and Boolean Parameters
select_single Changes
DBus backends and DBusIntrospectionObject changes
Python 3
Porting to Autopilot v1.3.x
QtIntrospectionTestMixin and GtkIntrospectionTestMixin no longer exist
autopilot.emulators namespace has been deprecated
A note on Versions
Autopilot releases are reasonably tightly coupled with Ubuntu releases. However, the autopilot authors maintain
separate version numbers, with the aim of separating the autopilot release cadence from the Ubuntu platform release
cadence.
Autopilot versions earlier than 1.2 were not publicly announced, and were only used within Canonical. For that reason,
this document assumes that version 1.2 is the lowest version of autopilot present “in the wild”.
Porting to Autopilot v1.4.x
The 1.4 release contains several changes that required a break in the DBus wire protocol between autopilot and the
applications under test. Most of these changes require no change to test code.
Gtk Tests and Boolean Parameters
Version 1.3 of the autopilot-gtk backend contained a bug that caused all Boolean properties to be exported as integers
instead of boolean values. This in turn meant that test code would fail to return the correct objects when using selection
criteria such as:
and instead had to write something like this:
This bug has now been fixed, and using the integer selection will fail.
select_single Changes
The select_single method used to return None in the case where no object was found that matched the search criteria.
This led to rather awkward code in places where the object you are searching for is being created dynamically:
This makes the author’s intent harder to discern. To improve this situation, two changes have been made:
select_single raises a StateNotFoundError exception if the search terms returned no values, rather than returning None.
If the object being searched for is likely to not exist, there is a new method: wait_select_single will try to retrieve an
object for 10 seconds. If the object does not exist after that timeout, a StateNotFoundError exception is raised. This
means that the above code example should now be written as:
DBus backends and DBusIntrospectionObject changes
Due to a change in how DBusIntrospectionObject objects store their DBus backend, a couple of class methods have
now become instance methods.
These affected methods are:
get_all_instances
get_root_instance
get_state_by_path
For example, if your old code is something along the lines of:
You will instead need to write something like this:
Python 3
Starting from version 1.4, autopilot supports python 3 as well as python 2. Test authors can choose to target either
version of python.
Porting to Autopilot v1.3.x
The 1.3 release included many API breaking changes. Earlier versions of autopilot made several assumptions about
where tests would be run, that turned out not to be correct. Autopilot 1.3 brought several much-needed features,
including:
A system for building pluggable implementations for several core components. This system is used in several areas:
The input stack can now generate events using either the X11 client libraries, or the UInput kernel driver. This is
necessary for devices that do not use X11.
The display stack can now report display information for systems that use both X11 and the mir display server.
The process stack can now report details regarding running processes and their windows on desktop, tablet, and
phone platforms.
A large code cleanup and reorganisation. In particular, lots of code that came from the Unity 3D codebase has been
removed if it was deemed to not be useful to the majority of test authors. This code cleanup includes a flattening of
the autopilot namespace. Previously, many useful classes lived under the autopilot.emulators namespace. These have
now been moved into the autopilot namespace.
Note
There is an API breakage in autopilot 1.3. The changes outlined under the heading “DBus backends and DBusIntrospectionObject changes” apply to version 1.3.1+13.10.20131003.1-0ubuntu1 and onwards.
QtIntrospectionTestMixin and GtkIntrospectionTestMixin no longer exist
220
Chapter 6. App development
UBports Documentation, Release 1.0
In autopilot 1.2, tests enabled application introspection services by inheriting from one of two mixin classes: QtIntrospectionTestMixin to enable testing Qt4, Qt5, and Qml applications, and GtkIntrospectionTestMixin to enable testing
Gtk 2 and Gtk3 applications. For example, a test case class in autopilot 1.2 might look like this:
In Autopilot 1.3, the AutopilotTestCase class contains this functionality directly, so the QtIntrospectionTestMixin and
GtkIntrospectionTestMixin classes no longer exist. The above example becomes simpler:
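The structural change can be sketched with stand-in classes (hypothetical names and return values, not the real autopilot code): the capability that used to arrive via a mixin now lives on the base test-case class directly:

```python
class IntrospectionMixin:
    """autopilot 1.2 pattern: introspection support came from a mixin."""

    def launch_test_application(self, app):
        return "proxy-for-%s" % app


class TestCase12(IntrospectionMixin):
    """1.2 style: test cases inherited the mixin explicitly."""


class AutopilotTestCase13:
    """1.3 style: launch_test_application lives on the base class itself."""

    def launch_test_application(self, app, app_type=None):
        # app_type ('qt' or 'gtk') is only needed when autodetection fails.
        kind = app_type or "autodetected"
        return "proxy-for-%s-%s" % (app, kind)
```

The folded-in design removes the need to pick the right mixin per toolkit; the optional `app_type` hint mirrors how the real class lets you override autodetection.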
Autopilot will try to determine the introspection type automatically. If this process fails, you can specify the application type manually:
See also
Method autopilot.testcase.AutopilotTestCase.launch_test_application
Launch test applications.
autopilot.emulators namespace has been deprecated
In autopilot 1.2 and earlier, the autopilot.emulators package held several modules and classes that were used frequently
in tests. This package has been removed, and its contents merged into the autopilot package. Below is a table showing
the basic translations that need to be made:
Old module
New Module
autopilot.emulators.input
autopilot.input
autopilot.emulators.X11
Deprecated - use autopilot.input for input and autopilot.display for getting display information.
autopilot.emulators.bamf
Deprecated - use autopilot.process instead.
Contents
Contribute
Autopilot: Contributing
Q. How can I contribute to autopilot?
Q. Where can I get help / support?
Q. How do I download the code?
Q. How do I submit the code for a merge proposal?
Q. How do I list or run the tests for the autopilot source code?
Q. Which version of Python can Autopilot use?
Autopilot: Contributing
Q. How can I contribute to autopilot?
Documentation: We can always use more documentation.
If you don’t know how to submit a merge proposal on launchpad, you can write a bug with new documentation
and someone will submit a merge proposal for you. They will give you credit for your documentation in the merge
proposal.
6.3. The Ubuntu App platform - develop with seamless device integration
New Features: Check out our existing Blueprints or create some yourself... Then code!
Test and Fix: No project is perfect, log some bugs or fix some bugs.
Q. Where can I get help / support?
The developers hang out in the #ubuntu-autopilot IRC channel on irc.freenode.net.
Q. How do I download the code?
Autopilot uses Launchpad and Bazaar for source code hosting. If you’re new to Bazaar, or distributed version
control in general, take a look at the Bazaar mini-tutorial first.
To install bzr, open a terminal and type:
Download the code:
This will create an autopilot directory and place the latest code there. You can also view the autopilot code on the web.
Q. How do I submit the code for a merge proposal?
After making the desired changes to the code or documentation, and making sure the tests still pass, type:
Write a quick one-line description of the bug that was fixed or the documentation that was written.
Sign up for a launchpad account if you don’t have one. Then, using your launchpad id, type:
Example:
All new features should have unit and/or functional tests to make sure someone doesn’t remove or break your new code
with a future commit.
Q. How do I list or run the tests for the autopilot source code?
Running autopilot from the source code root directory (the directory containing the autopilot/ bin/ docs/ debian/ etc.
directories) will use the local copy and not the system installed version.
An example from branching to running:
Note
The ‘Loading tests from:’ or ‘Running tests from:’ line will inform you where autopilot is loading the tests from.
To run a specific suite or a single test in a suite, be more specific with the tests path.
For example, running all unit tests:
For example, running just the ‘InputStackKeyboardTypingTests’ suite:
Or running a single test in the ‘test_version_utility_fns’ suite:
Q. Which version of Python can Autopilot use?
Autopilot supports Python 3.4.
Contents
Frequently Asked Questions
Autopilot: The Project
Q. Where can I get help / support?
Q. Which version of autopilot should I install?
Q. Should I write my tests in python2 or python3?
Q: Should I convert my existing tests to python3?
Q. Where can I report a bug?
Q. What type of applications can autopilot test?
Autopilot Tests
Q. Autopilot tests often include multiple assertions. Isn’t this bad practice?
Q. How do I write a test that uses either a Mouse or a Touch device interchangeably?
Q. How do I use the Onscreen Keyboard (OSK) to input text in my test?
Autopilot Tests and Launching Applications
Q. How do I launch a Click application from within a test so I can introspect it?
Q. How do I access an already running application so that I can test/introspect it?
Autopilot Qt & Gtk Support
Q. How do I launch my application so that I can explore it with the vis tool?
Q. What is the impact on memory of adding objectNames to QML items?
Autopilot: The Project
Q. Where can I get help / support?
The developers hang out in the #ubuntu-autopilot IRC channel on irc.freenode.net.
Q. Which version of autopilot should I install?
Ideally you should adopt and utilize the latest version of autopilot. If your testcase requires you to utilize an older
version of autopilot for reasons other than Porting Autopilot Tests, please file a bug and let the development team
know about your issue.
Q. Should I write my tests in python2 or python3?
As Autopilot fully supports python3 (see Python 3), you should seek to use python3 for new tests. Before making a
decision, you should also ensure any 3rd party modules your test may depend on also support python3.
Q: Should I convert my existing tests to python3?
See above. In a word, yes. Converting python2 to python3 (see Python 3) is generally straightforward and converting
a testcase is likely much easier than a full python application. You can also consider retaining python2 compatibility
upon conversion.
Q. Where can I report a bug?
Autopilot is hosted on launchpad - bugs can be reported on the launchpad bug page for autopilot (this requires a
launchpad account).
Q. What type of applications can autopilot test?
Autopilot works with several different types of applications, including:
The Unity desktop shell.
Gtk 2 & 3 applications.
Qt4, Qt5, and Qml applications.
Autopilot is designed to work across all the form factors Ubuntu runs on, including the phone and tablet.
Autopilot Tests
Q. Autopilot tests often include multiple assertions. Isn’t this bad practice?
Maybe. But probably not.
Unit tests should test a single unit of code, and ideally be written such that they can fail in exactly a single way.
Therefore, unit tests should have a single assertion that determines whether the test passes or fails.
However, autopilot tests are not unit tests; they are functional tests. Functional test suites test features, not units of
code, so there are several very good reasons to have more than one assertion in a single test:
Some features require several assertions to prove that the feature is working correctly. For example, you may wish to
verify that the ‘Save’ dialog box opens correctly, using the following code:
Some tests need to wait for the application to respond to user input before the test continues. The easiest way to do
this is to use the Eventually matcher in the middle of your interaction with the application. For example, if testing the
Firefox browser’s ability to print a certain web comic, we might produce a test that looks similar to this:
In general, autopilot tests are more relaxed about the ‘one assertion per test’ rule. However, care should still be taken
to produce tests that are as small and understandable as possible.
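For instance, a feature-level test might bundle several related checks that together prove one feature works. FakeSaveDialog below is a hypothetical stand-in for the proxy object a real lookup would return, not autopilot code:

```python
import unittest


class FakeSaveDialog:
    """Hypothetical stand-in for a proxy object returned by select_single."""

    visible = True
    title = "Save"
    has_filename_field = True


class SaveDialogTests(unittest.TestCase):
    def test_save_dialog_opens(self):
        dialog = FakeSaveDialog()
        # Several assertions together verify one feature: the dialog
        # is shown, carries the right title, and offers a filename field.
        self.assertTrue(dialog.visible)
        self.assertEqual(dialog.title, "Save")
        self.assertTrue(dialog.has_filename_field)
```

Each assertion checks a different aspect of the same feature; splitting them into three separate tests would triple the setup cost without making failures any clearer.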
Q. How do I write a test that uses either a Mouse or a Touch device interchangeably?
The autopilot.input.Pointer class is a simple wrapper that unifies some of the differences between the Touch and Mouse
classes. To use it, pass in the device you want to use under the hood, like so:
Combined with test scenarios, this can be used to write tests that are run twice - once with a mouse device and once
with a touch device:
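The idea behind the wrapper can be sketched with stand-in device classes (hypothetical names and methods; the real autopilot.input classes differ):

```python
class FakeMouse:
    """Stand-in mouse device with its own native click API."""

    device = "mouse"

    def click_at(self, x, y):
        return ("mouse-click", x, y)


class FakeTouch:
    """Stand-in touch device with a different native tap API."""

    device = "touch"

    def tap(self, x, y):
        return ("tap", x, y)


class Pointer:
    """Sketch of the unifying wrapper: one click() call over both devices."""

    def __init__(self, device):
        self._device = device

    def click(self, x, y):
        # Dispatch to whichever native API the wrapped device exposes.
        if self._device.device == "mouse":
            return self._device.click_at(x, y)
        return self._device.tap(x, y)
```

A scenario list would then construct `Pointer(FakeMouse())` in one run and `Pointer(FakeTouch())` in the other, while the test body only ever calls `pointer.click(x, y)`.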
If you only want to use the mouse on certain platforms, use the autopilot.platform module to determine the current
platform at runtime.
Q. How do I use the Onscreen Keyboard (OSK) to input text in my test?
The OSK is a backend option for the autopilot.input.Keyboard.create method (see this Advanced Autopilot section
for details regarding backend selection).
Unlike the other backends (X11, UInput) the OSK has a GUI presence and thus can be displayed on the screen.
The autopilot.input.Keyboard class provides a context manager that handles any cleanup required when dealing with
the input backends.
For example in the instance when the backend is the OSK, when leaving the scope of the context manager the OSK
will be dismissed with a swipe:
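The cleanup behaviour can be sketched with a stand-in context manager (illustrative only; the names and methods below are assumptions, not the real OSK backend API):

```python
from contextlib import contextmanager


class FakeOSK:
    """Stand-in keyboard backend with a GUI that must be dismissed."""

    def __init__(self):
        self.visible = False

    def show(self):
        self.visible = True

    def dismiss_with_swipe(self):
        self.visible = False

    @contextmanager
    def focused_type(self):
        self.show()
        try:
            yield self
        finally:
            # Cleanup runs even if the test body raises,
            # mirroring how the real OSK is swiped away on exit.
            self.dismiss_with_swipe()
```

Using `with osk.focused_type() as kb:` guarantees the keyboard is dismissed when the block ends, which is exactly the kind of cleanup the real context manager handles.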
Autopilot Tests and Launching Applications
Q. How do I launch a Click application from within a test so I can introspect it?
Launching a Click application is similar to launching a traditional application and is as easy as using
launch_click_package:
Q. How do I access an already running application so that I can test/introspect it?
In instances where it’s impossible to launch the application-under-test from within the testsuite, use
get_proxy_object_for_existing_process to get a proxy object for the running application. In all other cases, the recommended way to launch and retrieve a proxy object for an application is by calling either launch_test_application or
launch_click_package.
For example, to access a long running process that is running before your test starts:
Autopilot Qt & Gtk Support
Q. How do I launch my application so that I can explore it with the vis tool?
Autopilot can launch applications with Autopilot support enabled, allowing you to explore and introspect the application using the vis tool.
For instance, launching gedit is as easy as:
Autopilot launch attempts to detect if you are launching either a Gtk or Qt application so that it can enable the correct
libraries. If it is unable to determine this, you will need to specify the type of application it is by using the -i argument.
For example, in our previous example Autopilot was able to automatically determine that gedit is a Gtk application
and thus no further arguments were required.
If we want to use the vis tool to introspect something like the testapp.py script from an earlier tutorial we will need to
inform autopilot that it is a Qt application so that it can enable the correct support:
Now that it has been launched with Autopilot support, we can introspect and explore our application using the vis tool.
Q. What is the impact on memory of adding objectNames to QML items?
The objectName is a QString property of QObject which defaults to an empty QString. QString uses a UTF-16 representation, and because it uses some general purpose optimisations it usually allocates twice the space it needs, to be
able to grow fast. It also uses implicit sharing with copy-on-write and other similar tricks to increase performance.
These properties make the memory used hard to predict. For example, copying an object with
an objectName shares the memory between both as long as neither is changed.
When measuring memory consumption, things like memory alignment come into play. Due to the fact that QML is
interpreted by a JavaScript engine, we are working in levels where lots of abstraction layers are in between the code
and the hardware and we have no chance to exactly measure consumption of a single objectName property. Therefore
the taken approach is to measure lots of items and calculate the average consumption.
Measurement of memory consumption of 10000 Items:

Without objectName: 65292 kB
With unique objectName: 66628 kB
With same objectName: 66480 kB
=> With 10000 different objectNames, 1336 kB of memory are consumed, which is around 137 bytes per item.
Indeed, this is more than only the string. Some of the memory is certainly lost due to memory alignment where certain
areas are just not perfectly filled in but left empty. However, certainly not all of the overhead can be blamed on that.
Additional memory is used by the QObject meta object information that is needed to do signal/slot connections. Also,
QML does some optimisations: It does not connect signals/slots when not needed. So the fact that the object name is
set could trigger some more connections.
Even if more than the actual string size is used and QString uses a large representation, this is very little compared
to the rest. A qmlscene with just the item is 27MB. One full screen image in the Nexus 10 tablet can easily consume
around 30MB of memory. So objectNames are definitely not the first places where to search for optimisations.
Writing the test code snippets, one interesting thing came up frequently: Just modifying the code around to set the
objectName often influences the results more than the actual string. For example, having a javascript function that
assigns the objectName definitely uses much more memory than the objectName itself. Unless it makes sense from
a performance point of view (frequently changing bindings can be slow), objectNames should be added by directly
binding the value to the property instead of using helper code to assign it.
Conclusion: if an objectName is needed for testing, it is definitely worth it. objectNames should obviously not be
added when not needed. When adding them, the general QML guidelines for performance should be followed.
Contents
Troubleshooting
General Techniques
Common Questions regarding Failing Tests
Q. Why is my test failing? It works some of the time. What causes “flakiness”?
StateNotFoundError Exception
General Techniques
The single hardest thing to do while writing autopilot tests is to understand the state of the application’s object tree.
This is especially important for applications that change their object tree during the lifetime of the test. There are three
techniques you can use to discover the state of the object tree:
Using Autopilot Vis
The Autopilot vis tool is a useful tool for exploring the entire structure of an application, and allows you to search for
a particular node in the object tree. If you want to find out what parts of the application to select to gain access to
certain information, the vis tool is probably the best way to do that.
Using print_tree
The print_tree method is available on every proxy class. This method will print every child of the proxy object
recursively, either to stdout or a file on disk. This technique can be useful when:
The application cannot easily be put into the state required before launching autopilot vis, so the vis tool is no longer
an option.
The application state that has to be captured only exists for a short amount of time.
The application only runs on platforms where the vis tool isn’t available.
The print_tree method often produces a lot of output. There are two ways this information overload can be handled:
Specify a file path to write to, so the console log doesn’t get flooded. This log file can then be searched with tools such
as grep.
Specify a maxdepth limit. This controls how many levels deep the recursive search will go.
Of course, these techniques can be used in combination.
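A minimal sketch of such a recursive printer, assuming a toy node type with `name` and `children` attributes (a stand-in for the real proxy-object API):

```python
class Node:
    """Toy stand-in for a proxy object in the introspection tree."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)


def print_tree(node, maxdepth=None, depth=0, out=None):
    """Collect an indented listing of `node` and its descendants.

    `maxdepth` limits how many levels deep the recursion goes,
    mirroring the real method's way of taming large trees.
    """
    lines = [] if out is None else out
    lines.append("  " * depth + node.name)
    if maxdepth is None or depth < maxdepth:
        for child in node.children:
            print_tree(child, maxdepth, depth + 1, lines)
    return lines
```

Writing the returned lines to a file instead of stdout gives the grep-friendly log described above; passing `maxdepth=1` keeps only the immediate children.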
Using get_properties
The get_properties method can be used on any proxy object, and will return a python dictionary containing all the
properties of that proxy object. This is useful when you want to explore what information is provided by a single
proxy object. The information returned by this method is exactly the same as is shown in the right-hand pane of
autopilot vis.
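A rough sketch of the idea, using plain attribute inspection on a stand-in object (the real method reads the state over DBus rather than from Python attributes):

```python
def get_properties(proxy):
    """Return the public, non-callable attributes of an object as a dict,
    mimicking the shape of what get_properties exposes for a proxy object."""
    return {
        name: value
        for name, value in vars(proxy).items()
        if not name.startswith("_")
    }


class FakeButton:
    """Hypothetical stand-in for a proxy object."""

    def __init__(self):
        self.visible = True
        self.label = "OK"
        self._backend = "dbus"  # private detail, excluded from the dict
```

The resulting dictionary can be dumped in a test's failure details to see exactly what a single object looked like at the moment of interest.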
Common Questions regarding Failing Tests
Q. Why is my test failing? It works some of the time. What causes “flakiness”?
Sometimes a test fails because the application under test has issues, but what happens when the failing test can’t be
reproduced manually? It means the test itself has an issue.
Here is a troubleshooting guide you can use with some of the common problems that developers can overlook while
writing tests.
StateNotFoundError Exception
Not waiting for an animation to finish before looking for an object. Did you add animations to your app recently?
problem:
solution:
Not waiting for an object to become visible before trying to select it. Is your app slower than it used to be for some
reason? Do its properties have null values? Do you see errors in stdout/stderr while using your app, if you run it
from the commandline?
Python code is executed in series and takes only milliseconds, whereas the actions it drives (clicking a button, etc.)
take much longer, as does each dbus query. This is why the wait_select_* methods are useful: click a button, then wait
for the effect of that click to appear (including the dbus query time taken).
problem:
solution:
Waiting for an item that is being destroyed to become not visible; sometimes the object is destroyed before it can report visible as False:
problem:
problem:
solution:
solution:
Trying to use select_many like a list. The order in which the objects are returned is non-deterministic.
problem:
solution:
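One common fix is to impose an explicit order on the returned objects before indexing into them, sketched here with plain dictionaries standing in for proxy objects (the property name used for sorting is an assumption for the example):

```python
def select_many_sorted(results, key):
    """Sort unordered select_many-style results by a stable property.

    `results` is any iterable of mapping-like objects; `key` names the
    property to order by (e.g. an objectName or a screen coordinate).
    """
    return sorted(results, key=lambda obj: obj[key])
```

With a deterministic order in hand, `sorted_results[0]` always refers to the same widget from run to run, which removes one common source of flaky tests.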
autopilot
autopilot.get_test_configuration()
Get the test configuration dictionary.
Tests can be configured from the command line when the autopilot tool is invoked. Typical use cases involve configuring the test suite to use a particular binary (perhaps a locally built binary or one installed to the system), or configuring
which external services are faked.
This dictionary is populated from the --config option to the autopilot run command. For example:
autopilot run --config use_local some.test.id
will result in a dictionary where the key use_local is present and evaluates to true, e.g.:
Values can also be specified. The following command:
autopilot run --config fake_services=login some.test.id
...will result in the key ‘fake_services’ having the value ‘login’.
Autopilot itself does nothing with the contents of this dictionary. It is entirely up to test authors to populate it, and to
use the values as they see fit.
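The parsing rule described above can be sketched as follows (an illustrative stand-in, not autopilot's actual parser; the comma-separated multi-key handling is an assumption):

```python
def parse_config(config_string):
    """Parse an autopilot-style --config value into a dictionary.

    Bare keys ("use_local") map to True; "key=value" pairs keep
    their string value.
    """
    result = {}
    for item in config_string.split(","):
        if not item:
            continue
        key, sep, value = item.partition("=")
        # A bare key has no '=' separator and becomes a truthy flag.
        result[key] = value if sep else True
    return result
```

So `parse_config("use_local")` yields a dictionary whose `use_local` key evaluates to true, and `parse_config("fake_services=login")` yields one where `fake_services` is the string `login`.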
autopilot.get_version_string()
Return the autopilot source and package versions.
autopilot.have_vis()
Return true if the vis package is installed.
autopilot.application.ClickApplicationLauncher
class autopilot.application.ClickApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus=’session’)
Fixture to manage launching a Click application. A class that knows how to launch an application with a certain type
of introspection enabled.
Parameters:
case_addDetail – addDetail method to use.
emulator_base – custom proxy base class to use, defaults to None
dbus_bus – dbus bus to use, if set to something other than the default (‘session’) the environment will be patched
launch(package_id, app_name=None, app_uris=[])
Launch a click package application with introspection enabled.
This method takes care of launching a click package with introspection enabled. You probably want to use this method
if your application is packaged in a click application, or is started via upstart.
Usage is similar to NormalApplicationLauncher.launch:
Parameters:
package_id – The Click package name you want to launch. For example: com.ubuntu.dropping-letters
app_name – Currently, only one application can be packaged in a click package, and this parameter can be left at
None. If specified, it should be the application name you wish to launch.
app_uris – Parameters used to launch the click package. This parameter will be left empty if not used.
Raises:
RuntimeError – If the specified package_id cannot be found in the click package manifest.
RuntimeError – If the specified app_name cannot be found within the specified click package.
Returns:
proxy object for the launched package application
autopilot.application.NormalApplicationLauncher
class autopilot.application.NormalApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus=’session’)
Fixture to manage launching an application. A class that knows how to launch an application with a certain type of
introspection enabled.
Parameters:
case_addDetail – addDetail method to use.
emulator_base – custom proxy base class to use, defaults to None
dbus_bus – dbus bus to use, if set to something other than the default (‘session’) the environment will be patched
launch(application, arguments=[], app_type=None, launch_dir=None, capture_output=True)
Launch an application and return a proxy object.
Use this method to launch an application and start testing it. The arguments passed in arguments are used as arguments
to the application to launch. Additional keyword arguments are used to control the manner in which the application is
launched.
This fixture is designed to be flexible enough to launch all supported types of applications. Autopilot can automatically
determine how to enable introspection support for dynamically linked binary applications. For example, to launch a
binary Gtk application, a test might start with:
For use within a testcase, use useFixture:
from autopilot.application import NormalApplicationLauncher
launcher = self.useFixture(NormalApplicationLauncher())
app_proxy = launcher.launch(‘gedit’)
Applications can be given command line arguments by supplying an arguments argument to this method. For example,
if we want to launch gedit with a certain document loaded, we might do this:
... a Qt5 Qml application is launched in a similar fashion:
If you wish to launch an application that is not a dynamically linked binary, you must specify the application type. For
example, a Qt4 python application might be launched like this:
Similarly, a python/Gtk application is launched like so:
Parameters:
application –
The application to launch. The application can be specified as:
A full, absolute path to an executable file. (/usr/bin/gedit)
A relative path to an executable file. (./build/my_app)
An app name, which will be searched for in $PATH (my_app)
arguments – If set, the list of arguments is passed to the launched app.
app_type – If set, provides a hint to autopilot as to which kind of introspection to enable. This is needed when the
application you wish to launch is not a dynamically linked binary. Valid values are ‘gtk’ or ‘qt’. These strings are case
insensitive.
launch_dir – If set to a directory that exists the process will be launched from that directory.
capture_output – If set to True (the default), the process output will be captured and attached to the test as test detail.
Returns:
A proxy object that represents the application. Introspection data is retrievable via this object.
autopilot.application
Base package for application launching and environment management.
Elements
ClickApplicationLauncher
Fixture to manage launching a Click application. A class that knows how to launch an application with a certain type of introspection enabled.
NormalApplicationLauncher
Fixture to manage launching an application. A class that knows how to launch an application with a certain type of introspection enabled.
UpstartApplicationLauncher
A launcher class that launches applications with UpstartAppLaunch. A class that knows how to launch an application
with a certain type of introspection enabled.
autopilot.application.UpstartApplicationLauncher
class autopilot.application.UpstartApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus=’session’)
A launcher class that launches applications with UpstartAppLaunch. A class that knows how to launch an application
with a certain type of introspection enabled.
Parameters:
case_addDetail – addDetail method to use.
emulator_base – custom proxy base class to use, defaults to None
dbus_bus – dbus bus to use, if set to something other than the default (‘session’) the environment will be patched
launch(app_id, app_uris=[])
Launch an application with upstart.
This method launches an application via the upstart-app-launch library, on platforms that support it.
Usage is similar to NormalApplicationLauncher:
Parameters:
app_id – name of the application to launch
app_uris – list of separate application uris to launch
Raises RuntimeError:
If the specified application cannot be launched.
Returns:
proxy object for the launched package application
autopilot.application.get_application_launcher_wrapper(app_path)
Return an instance of ApplicationLauncher that knows how to launch the application at ‘app_path’.
autopilot.display.Display
class autopilot.display.Display
The base class/interface for the display devices.
static create(preferred_backend=’’)
Get an instance of the Display class.
For more information on picking specific backends, see Advanced Backend Picking
Parameters:
preferred_backend –
A string containing a hint as to which backend you would like.
possible backends are:
X11 - Get display information from X11.
UPA - Get display information from the ubuntu platform API.
Raises:
RuntimeError if autopilot cannot instantiate any of the possible backends.
Raises:
RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class.
Raises:
BackendException if the preferred_backend is set, but that backend could not be instantiated.
Returns:
Instance of Display with appropriate backend.
exception BlacklistedDriverError
Cannot set primary monitor when running drivers listed in the driver blacklist.
Display.get_num_screens()
Get the number of screens attached to the PC.
Display.get_primary_screen()
Display.get_screen_width(screen_number=0)
Display.get_screen_height(screen_number=0)
Display.get_screen_geometry(monitor_number)
Get the geometry for a particular monitor.
Returns:
Tuple containing (x, y, width, height).
autopilot.display.get_screenshot_data(display_type)
Return a BytesIO object of the png data for the screenshot image.
display_type is the display server type. Supported values are:
“X11”
“MIR”
Raises:
RuntimeError – If attempting to capture an image on an unsupported display server.
RuntimeError – If saving image data to file-object fails.
autopilot.display
The display module contains support for getting screen information.
autopilot.display.is_rect_on_screen(screen_number, rect)
Return True if rect is entirely on the specified screen, with no overlap.
autopilot.display.is_point_on_screen(screen_number, point)
Return True if point is on the specified screen.
point must be an iterable type with two elements: (x, y)
autopilot.display.is_point_on_any_screen(point)
Return true if point is on any currently configured screen.
autopilot.display.move_mouse_to_screen(screen_number)
Move the mouse to the center of the specified screen.
Elements
Display
The base class/interface for the display devices.
autopilot.emulators
Autopilot Says
Deprecated Namespace!
This module contains modules that were in the autopilot.emulators package in autopilot version 1.2 and earlier, but
have now been moved to the autopilot package.
This module exists to ease the transition to autopilot 1.3, but is not guaranteed to exist in the future.
See also
Module autopilot.display
Get display information.
Module autopilot.input
Create input events to interact with the application under test.
autopilot.exceptions
Autopilot Exceptions.
This module contains exceptions that autopilot may raise in various conditions. Each exception is documented with
when it is raised: a generic description in this module, as well as a detailed description in the function or method that
raises it.
exception autopilot.exceptions.BackendException(original_exception)
An error occurred while trying to initialise an autopilot backend.
exception autopilot.exceptions.ProcessSearchError
Object introspection error occurred.
exception autopilot.exceptions.StateNotFoundError(class_name=None, **filters)
Raised when a piece of state information is not found.
This exception is commonly raised when the application has destroyed (or not yet created) the object you are trying to
access in autopilot. This typically happens for a number of possible reasons:
The UI widget you are trying to access with select_single or wait_select_single or select_many does not exist yet.
The UI widget you are trying to access has been destroyed by the application.
exception autopilot.exceptions.InvalidXPathQuery
Raised when an XPathselect query is invalid or unsupported.
autopilot.gestures
Gestural support for autopilot.
This module contains functions that can generate touch and multi-touch gestures for you. This is a convenience for
the test author - there is nothing to prevent you from generating your own gestures!
autopilot.gestures.pinch(center, vector_start, vector_end)
Perform a two finger pinch (zoom) gesture.
Parameters:
center – The coordinates (x,y) of the center of the pinch gesture.
vector_start – The (x,y) values to move away from the center for the start.
vector_end – The (x,y) values to move away from the center for the end.
The fingers will move in 100 steps between the start and the end points. If start is smaller than end, the gesture will
zoom in, otherwise it will zoom out.
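The described motion can be sketched as a linear interpolation of the two finger positions about the center (an illustrative calculation, not the real implementation; the function name and return shape are assumptions):

```python
def pinch_positions(center, vector_start, vector_end, steps=100):
    """Compute the two finger positions at each step of a pinch.

    Fingers move symmetrically about `center`, interpolated in `steps`
    increments from `vector_start` to `vector_end`.
    """
    cx, cy = center
    positions = []
    for i in range(steps + 1):
        t = i / steps
        # Interpolate the offset from the center for this step.
        dx = vector_start[0] + (vector_end[0] - vector_start[0]) * t
        dy = vector_start[1] + (vector_end[1] - vector_start[1]) * t
        finger_a = (cx + dx, cy + dy)
        finger_b = (cx - dx, cy - dy)
        positions.append((finger_a, finger_b))
    return positions
```

When the start offset is smaller than the end offset the fingers move apart (zoom in); when it is larger they move together (zoom out), matching the behaviour described above.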
autopilot.input.Keyboard
class autopilot.input.Keyboard
A simple keyboard device class.
The keyboard class is used to generate key events while in an autopilot test. This class should not be instantiated
directly. To get an instance of the keyboard class, call create instead.
static create(preferred_backend=’‘)
Get an instance of the Keyboard class.
For more information on picking specific backends, see Advanced Backend Picking
For details regarding backend limitations please see: Keyboard backend limitations
Warning
The OSK (On Screen Keyboard) backend option does not implement the press or release methods due to technical implementation details and will raise a NotImplementedError exception if they are used.
Parameters:
preferred_backend –
A string containing a hint as to which backend you would like. Possible backends are:
X11 - Generate keyboard events using the X11 client
libraries.
UInput - Use UInput kernel-level device driver.
OSK - Use the graphical On Screen Keyboard as a backend.
Raises:
RuntimeError if autopilot cannot instantiate any of the possible backends.
Raises:
RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class.
Raises:
BackendException if the preferred_backend is set, but that backend could not be instantiated.
focused_type(input_target, pointer=None)
Type into an input widget.
This context manager takes care of making sure a particular input_target UI control is selected before any text is
entered.
Some backends extend this method to perform cleanup actions at the end of the context manager block. For example,
the OSK backend dismisses the keyboard.
If the pointer argument is None (default) then either a Mouse or Touch pointer will be created based on the current
platform.
An example of using the context manager (with an OSK backend):
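A sketch of such usage; text_field is a hypothetical proxy object for a text input widget in the application under test:

```python
from autopilot.input import Keyboard

# Request the OSK backend explicitly; 'text_field' is a hypothetical
# proxy object for a text input in the application under test.
kb = Keyboard.create('OSK')
with kb.focused_type(text_field) as kb:
    kb.type("Hello World.")
# On exiting the block, the OSK backend dismisses the on-screen keyboard.
```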
press(keys, delay=0.2)
Send key press events only.
Parameters:
keys – Keys you want pressed.
delay – The delay (in seconds) after pressing the keys before returning control to the caller.
Raises:
NotImplementedError If called when using the OSK Backend.
Warning
The OSK backend does not implement the press method and will raise a NotImplementedError if called.
Example:
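A plausible snippet, assuming a keyboard instance kb obtained from Keyboard.create():

```python
kb.press('Alt+F2')
```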
presses the ‘Alt’ and ‘F2’ keys, but does not release them.
release(keys, delay=0.2)
Send key release events only.
Parameters:
keys – Keys you want released.
delay – The delay (in seconds) after releasing the keys before returning control to the caller.
Raises:
NotImplementedError If called when using the OSK Backend.
Warning
The OSK backend does not implement the release method and will raise a NotImplementedError if called.
Example:
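A plausible snippet, assuming the same keys were previously pressed on a keyboard instance kb:

```python
kb.release('Alt+F2')
```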
releases the ‘Alt’ and ‘F2’ keys.
press_and_release(keys, delay=0.2)
Press and release all items in ‘keys’.
This is the same as calling ‘press(keys);release(keys)’.
Parameters:
keys – Keys you want pressed and released.
delay – The delay (in seconds) after pressing and releasing each key.
Example:
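A plausible snippet, assuming a keyboard instance kb:

```python
kb.press_and_release('Alt+F2')
```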
presses both the ‘Alt’ and ‘F2’ keys, and then releases both keys.
type(string, delay=0.1)
Simulate a user typing a string of text.
Parameters:
string – The string of text to type.
delay – The delay (in seconds) after pressing and releasing each key. Note that the default value here is shorter than
for the press, release and press_and_release methods.
Note
Only ‘normal’ keys can be typed with this method. Control characters (such as ‘Alt’) will be interpreted as an ‘A’, an
‘l’, and a ‘t’.
on_test_end(*args)
on_test_start(*args)
autopilot.input.Mouse
class autopilot.input.Mouse
A simple mouse device class.
The mouse class is used to generate mouse events while in an autopilot test. This class should not be instantiated
directly. To get an instance of the mouse class, call create instead.
For example, to create a mouse object and click at (100,50):
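A minimal sketch of that sequence:

```python
from autopilot.input import Mouse

# Let autopilot pick a suitable backend, then move and click.
mouse = Mouse.create()
mouse.move(100, 50)
mouse.click()
```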
static create(preferred_backend='')
Get an instance of the Mouse class.
For more information on picking specific backends, see Advanced Backend Picking.
Parameters:
preferred_backend –
A string containing a hint as to which backend you would like. Possible backends are:
X11 - Generate mouse events using the X11 client libraries.
Raises:
RuntimeError if autopilot cannot instantiate any of the possible backends.
Raises:
RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class.
Raises:
BackendException if the preferred_backend is set, but that backend could not be instantiated.
x
Mouse position X coordinate.
y
Mouse position Y coordinate.
press(button=1)
Press mouse button at current mouse location.
release(button=1)
Releases mouse button at current mouse location.
click(button=1, press_duration=0.1, time_between_events=0.1)
Click mouse at current location.
Parameters:
time_between_events – a float representing the delay time between subsequent clicks. The default value 0.1
represents a tenth of a second.
click_object(object_proxy, button=1, press_duration=0.1, time_between_events=0.1)
Click the center point of a given object.
It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in
order):
globalRect (x,y,w,h)
center_x, center_y
x, y, w, h
Parameters:
time_between_events – a float representing the delay time between subsequent clicks. The default value 0.1
represents a tenth of a second.
Raises:
ValueError if none of these attributes are found, or if an attribute is of an incorrect type.
move(x, y, animate=True, rate=10, time_between_events=0.01)
Moves mouse to location (x,y).
Callers should avoid specifying the rate or time_between_events parameters unless they need a specific rate of movement.
move_to_object(object_proxy)
Attempts to move the mouse to ‘object_proxy’s centre point.
It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in
order):
globalRect (x,y,w,h)
center_x, center_y
x, y, w, h
Raises:
ValueError if none of these attributes are found, or if an attribute is of an incorrect type.
position()
Returns the current position of the mouse pointer.
Returns:
(x,y) tuple
drag(x1, y1, x2, y2, rate=10, time_between_events=0.01)
Perform a press, move and release.
This is to keep a common API between Mouse and Finger as long as possible.
The pointer will be dragged from the starting point to the ending point with multiple moves. The number of moves,
and thus the time that it will take to complete the drag can be altered with the rate parameter.
Parameters:
x1 – The point on the x axis where the drag will start from.
y1 – The point on the y axis where the drag will start from.
x2 – The point on the x axis where the drag will end at.
y2 – The point on the y axis where the drag will end at.
rate – The number of pixels the mouse will be moved per iteration. Default is 10 pixels. A higher rate will make the
drag faster, and lower rate will make it slower.
time_between_events – The number of seconds that the drag will wait between iterations.
on_test_end(*args)
on_test_start(*args)
autopilot.input.Pointer
class autopilot.input.Pointer(device)
A wrapper class that represents a pointing device which can either be a mouse or a touch, and provides a unified API.
This class is useful if you want to run tests with either a mouse or a touch device, and want to write your tests to use a
single API. Create this wrapper by passing it either a mouse or a touch device, like so:
or, like so:
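A sketch of both construction styles:

```python
from autopilot.input import Mouse, Pointer, Touch

# Wrap a touch device:
pointer = Pointer(Touch.create())
# or wrap a mouse device:
pointer = Pointer(Mouse.create())
```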
Warning
Some operations only make sense for certain devices. This class attempts to minimise the differences between the
Mouse and Touch APIs, but there are still some operations that will cause exceptions to be raised. These are documented in the specific methods below.
x
Pointer X coordinate.
If the wrapped device is a Touch device, this will return the last known X coordinate, which may not be a sensible
value.
y
Pointer Y coordinate.
If the wrapped device is a Touch device, this will return the last known Y coordinate, which may not be a sensible
value.
press(button=1)
Press the pointer at its current location.
If the wrapped device is a mouse, you may pass a button specification. If it is a touch device, passing anything other
than 1 will raise a ValueError exception.
release(button=1)
Releases the pointer at its current location.
If the wrapped device is a mouse, you may pass a button specification. If it is a touch device, passing anything other
than 1 will raise a ValueError exception.
click(button=1, press_duration=0.1, time_between_events=0.1)
Press and release at the current pointer location.
If the wrapped device is a mouse, the button specification is used. If it is a touch device, passing anything other than 1
will raise a ValueError exception.
Parameters:
time_between_events – a float representing the delay time between subsequent clicks/taps. The default value 0.1
represents a tenth of a second.
move(x, y)
Moves the pointer to the specified coordinates.
If the wrapped device is a mouse, the mouse will animate to the specified coordinates. If the wrapped device is a touch
device, this method will determine where the next release or click will occur.
click_object(object_proxy, button=1, press_duration=0.1, time_between_events=0.1)
Attempts to move the pointer to ‘object_proxy’s centre point and click a button.
It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in
order):
globalRect (x,y,w,h)
center_x, center_y
x, y, w, h
If the wrapped device is a mouse, the button specification is used. If it is a touch device, passing anything other than 1
will raise a ValueError exception.
Parameters:
time_between_events – a float representing the delay time between subsequent clicks/taps. The default value 0.1
represents a tenth of a second.
move_to_object(object_proxy)
Attempts to move the pointer to ‘object_proxy’s centre point.
It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in
order):
globalRect (x,y,w,h)
center_x, center_y
x, y, w, h
Raises:
ValueError if none of these attributes are found, or if an attribute is of an incorrect type.
position()
Returns the current position of the pointer.
Returns:
(x,y) tuple
drag(x1, y1, x2, y2, rate=10, time_between_events=0.01)
Perform a press, move and release.
This is to keep a common API between Mouse and Finger as long as possible.
The pointer will be dragged from the starting point to the ending point with multiple moves. The number of moves,
and thus the time that it will take to complete the drag can be altered with the rate parameter.
Parameters:
x1 – The point on the x axis where the drag will start from.
y1 – The point on the y axis where the drag will start from.
x2 – The point on the x axis where the drag will end at.
y2 – The point on the y axis where the drag will end at.
rate – The number of pixels the mouse will be moved per iteration. Default is 10 pixels. A higher rate will make the
drag faster, and lower rate will make it slower.
time_between_events – The number of seconds that the drag will wait between iterations.
autopilot.input
Autopilot unified input system.
This package provides input methods for various platforms. Autopilot aims to provide an appropriate implementation
for the currently running system. For example, not all systems have an X11 stack running: on those systems, autopilot
will instantiate input classes that use something other than X11 to generate events (possibly UInput).
Test authors should instantiate the appropriate class using the create method on each class. Calling create() with no
arguments will get an instance of the specified class that suits the current platform. In this case, autopilot will do its
best to pick a suitable backend. Calling create with a backend name will result in that specific backend type being
returned, or, if it cannot be created, an exception will be raised. For more information on creating backends, see
Advanced Backend Picking.
There are three basic input types available:
Keyboard - traditional keyboard devices.
Mouse - traditional mouse devices (currently only available on the desktop).
Touch - single point-of-contact touch device.
The Pointer class is a wrapper that unifies the API of the Mouse and Touch classes, which can be helpful if you want
to write a test that can use either a mouse or a touch device. A common pattern is to use a Touch device when running
on a mobile device, and a Mouse device when running on a desktop.
See also
Module autopilot.gestures
Multitouch and gesture support for touch devices.
Elements
Keyboard
A simple keyboard device class.
Mouse
A simple mouse device class.
Pointer
A wrapper class that represents a pointing device which can either be a mouse or a touch device.
Touch
A simple touch driver class.
autopilot.input.Touch
class autopilot.input.Touch
A simple touch driver class.
This class can be used for any touch events that require a single active touch at once. If you want to do complex
gestures (including multi-touch gestures), look at the autopilot.gestures module.
static create(preferred_backend='')
Get an instance of the Touch class.
Parameters:
preferred_backend –
A string containing a hint as to which backend you would like. If left blank, autopilot will pick a suitable backend for
you. Specifying a backend will guarantee that either that backend is returned, or an exception is raised.
Possible backends are:
UInput - Use UInput kernel-level device driver.
Raises:
RuntimeError if autopilot cannot instantiate any of the possible backends.
Raises:
RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class.
Raises:
BackendException if the preferred_backend is set, but that backend could not be instantiated.
pressed
Return True if this touch is currently in use (i.e. pressed on the ‘screen’).
tap(x, y, press_duration=0.1, time_between_events=0.1)
Click (or ‘tap’) at given x,y coordinates.
Parameters:
time_between_events – a float representing the delay time between subsequent taps. The default value 0.1
represents a tenth of a second.
tap_object(object, press_duration=0.1, time_between_events=0.1)
Tap the center point of a given object.
It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in
order):
globalRect (x,y,w,h)
center_x, center_y
x, y, w, h
Parameters:
time_between_events – a float representing the delay time between subsequent taps. The default value 0.1
represents a tenth of a second.
Raises:
ValueError if none of these attributes are found, or if an attribute is of an incorrect type.
press(x, y)
Press and hold at the given x,y coordinates.
move(x, y)
Move the pointer coords to (x,y).
Note
The touch ‘finger’ must be pressed for a call to this method to be successful. (see press for further details on touch
presses.)
Raises:
RuntimeError if called and the touch ‘finger’ isn’t pressed.
release()
Release a previously pressed finger.
drag(x1, y1, x2, y2, rate=10, time_between_events=0.01)
Perform a drag gesture.
The finger will be dragged from the starting point to the ending point with multiple moves. The number of moves, and
thus the time that it will take to complete the drag can be altered with the rate parameter.
Parameters:
x1 – The point on the x axis where the drag will start from.
y1 – The point on the y axis where the drag will start from.
x2 – The point on the x axis where the drag will end at.
y2 – The point on the y axis where the drag will end at.
rate – The number of pixels the finger will be moved per iteration. Default is 10 pixels. A higher rate will make the
drag faster, and lower rate will make it slower.
time_between_events – The number of seconds that the drag will wait between iterations.
Raises:
RuntimeError – if the finger is already pressed.
RuntimeError – if no more finger slots are available.
autopilot.introspection.ProxyBase
class autopilot.introspection.ProxyBase(state_dict, path, backend)
A class that supports transparent data retrieval from the application under test.
This class is the base class for all objects retrieved from the application under test. It handles transparently refreshing
attribute values when needed, and contains many methods to select child objects in the introspection tree.
This class must be used as a base class for any custom proxy classes.
See also
Tutorial Section Writing Custom Proxy Classes
Information on how to write custom proxy classes.
get_all_instances()
Get all instances of this class that exist within the Application state tree.
For example, to get all the LauncherIcon instances:
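A sketch, assuming LauncherIcon is a custom proxy class defined by the test suite:

```python
icons = LauncherIcon.get_all_instances()
```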
Warning
Using this method is slow - it requires a complete scan of the introspection tree. You should only use this when you’re
not sure where the objects you are looking for are located. Depending on the application you are testing, you may get
duplicate results using this method.
Returns:
List (possibly empty) of class instances.
get_children()
Returns a list of all child objects.
This returns a list of all children. To return only children of a specific type, use get_children_by_type. To get objects
further down the introspection tree (i.e. nodes that may not necessarily be immediate children), use select_single and
select_many.
get_children_by_type(desired_type, **kwargs)
Get a list of children of the specified type.
Keyword arguments can be used to restrict returned instances. For example:
will return only Launcher instances that have an attribute ‘monitor’ that is equal to 1. The type can also be specified
as a string, which is useful if there is no emulator class specified:
Note however that if you pass a string, and there is an emulator class defined, autopilot will not use it.
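A sketch of both forms; parent and Launcher are hypothetical (a proxy object from the tree and a custom proxy class):

```python
# Only Launcher children whose 'monitor' attribute equals 1:
launchers = parent.get_children_by_type(Launcher, monitor=1)
# The type may also be given as a string; no emulator class is used:
launchers = parent.get_children_by_type('Launcher', monitor=1)
```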
Parameters:
desired_type – Either a string naming the type you want, or a class of the type you want (the latter is used when
defining custom emulators)
See also
Tutorial Section Writing Custom Proxy Classes
get_parent(type_name='', **kwargs)
Returns the parent of this object.
One may also use this method to get a specific parent node from the introspection tree, with type equal to type_name
or matching the keyword filters present in kwargs. Note: The priority order is closest parent.
If no filters are provided and this object has no parent (i.e. it is the root of the introspection tree), then it returns itself.
Parameters:
type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden
emulator classes).
Raises StateNotFoundError:
if the requested object was not found.
get_path()
Return the absolute path of the dbus node.
get_properties()
Returns a dictionary of all the properties on this class.
This can be useful when you want to log all the properties exported from your application for a particular object. Every
property in the returned dictionary can be accessed as attributes of the object as well.
get_root_instance()
Get the object at the root of this tree.
This will return an object that represents the root of the introspection tree.
classmethod get_type_query_name()
Return the Type node name to use within the search query.
This allows for a Custom Proxy Object to be named differently to the underlying node type name.
For instance if you have a QML type defined in the file RedRect.qml:
You can then define a Custom Proxy Object for this type like so:
class RedRect(DBusIntrospectionObject):
    @classmethod
    def get_type_query_name(cls):
        return 'QQuickRectangle'
This is because the QML engine stores ‘RedRect’ as a QQuickRectangle in the UI tree, and the xpath query needs
a node type to query for. By default the query will use the class name (in this case RedRect), but this will not match
any node type in the tree.
is_moving(gap_interval=0.1)
Check if the element is moving.
Parameters:
gap_interval – Time in seconds to wait before re-querying the object coordinates to evaluate whether the element
is moving.
Returns:
True, if the element is moving, otherwise False.
no_automatic_refreshing()
Context manager function to disable automatic DBus refreshing when retrieving attributes.
Example usage:
with instance.no_automatic_refreshing():
    # access lots of attributes.
This can be useful if you need to check lots of attributes in a tight loop, or if you want to atomically check several
attributes at once.
print_tree(output=None, maxdepth=None, _curdepth=0)
Print properties of the object and its children to a stream.
When writing new tests, this can be called when it is too difficult to find the widget or property that you are interested
in in “vis”.
Warning
Do not use this in production tests, this is expensive and not at all appropriate for actual testing. Only call this
temporarily and replace with proper select_single/select_many calls.
Parameters:
output – A file object or path name where the output will be written to. If not given, write to stdout.
maxdepth – If given, limit the maximum recursion level to that number, i.e. only print children which have at most
maxdepth-1 intermediate parents.
refresh_state()
Refreshes the object’s state.
You should probably never have to call this directly. Autopilot automatically retrieves new state every time this object’s
attributes are read.
Raises StateNotFound:
if the object in the application under test has been destroyed.
select_many(type_name='*', ap_result_sort_keys=None, **kwargs)
Get a list of nodes from the introspection tree, with type equal to type_name and (optionally) matching the keyword
filters present in kwargs.
You must specify either type_name, keyword filters or both.
This method searches recursively from the instance this method is called on. Calling select_many on the application
(root) proxy object will search the entire tree. Calling select_many on an object in the tree will only search its
descendants.
Example Usage:
As mentioned above, this method searches the object tree recursively:
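A sketch of both usages, assuming app is the application (root) proxy object and the type name is hypothetical:

```python
# All QPushButton descendants of the root object:
buttons = app.select_many('QPushButton')
# Restrict the result with keyword filters; the search is still recursive:
enabled_buttons = app.select_many('QPushButton', enabled=True)
```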
Warning
The order in which objects are returned is not guaranteed. It is bad practice to write tests that depend on the order in
which this method returns objects (see Do Not Depend on Object Ordering for more information).
If you want to ensure a certain count of results retrieved from this method, use wait_select_many or if you only want
to get one item, use select_single instead.
Parameters:
type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden
emulator classes).
ap_result_sort_keys – list of object properties to sort the query result with (sort key priority starts with element 0 as
highest priority and then descends down the list).
Raises ValueError:
if neither type_name nor keyword filters are provided.
See also
Tutorial Section Writing Custom Proxy Classes
select_single(type_name='*', **kwargs)
Get a single node from the introspection tree, with type equal to type_name and (optionally) matching the keyword
filters present in kwargs.
You must specify either type_name, keyword filters or both.
This method searches recursively from the instance this method is called on. Calling select_single on the application
(root) proxy object will search the entire tree. Calling select_single on an object in the tree will only search its
descendants.
Example usage:
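A sketch, assuming app is a proxy object and the type and property names are hypothetical:

```python
button = app.select_single('QPushButton', objectName='clickme')
```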
If nothing is returned from the query, this method raises StateNotFoundError.
Parameters:
type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden
emulator classes).
Raises:
ValueError – if the query returns more than one item. If you want more than one item, use select_many instead.
ValueError – if neither type_name nor keyword filters are provided.
StateNotFoundError – if the requested object was not found.
See also
Tutorial Section Writing Custom Proxy Classes
classmethod validate_dbus_object(path, _state)
Return whether this class is the appropriate proxy object class for a given dbus path and state.
The default version matches the name of the dbus object and the class. Subclasses of CustomProxyObject can override
it to define a different validation method.
Parameters:
path – The dbus path of the object to check
state – The dbus state dict of the object to check (ignored in default implementation)
Returns:
Whether this class is appropriate for the dbus object
wait_select_many(type_name='*', ap_query_timeout=10, ap_result_count=1, ap_result_sort_keys=None, **kwargs)
Get a list of nodes from the introspection tree, with type equal to type_name and (optionally) matching the keyword
filters present in kwargs.
This method is identical to the select_many method, except that this method will poll the application under test for
ap_query_timeout seconds in the event that the search result count is not greater than or equal to ap_result_count.
You must specify either type_name, keyword filters or both.
This method searches recursively from the instance this method is called on. Calling wait_select_many on the application (root) proxy object will search the entire tree. Calling wait_select_many on an object in the tree will only search
its descendants.
Example Usage:
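A sketch, assuming app is a proxy object; the type name and counts are hypothetical:

```python
# Poll for up to 10 seconds until at least two matching buttons appear:
buttons = app.wait_select_many('QPushButton',
                               ap_query_timeout=10,
                               ap_result_count=2)
```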
Warning
The order in which objects are returned is not guaranteed. It is bad practice to write tests that depend on the order in
which this method returns objects (see Do Not Depend on Object Ordering for more information).
Parameters:
type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden
emulator classes).
ap_query_timeout – Time in seconds to wait for search criteria to match.
ap_result_count – Minimum number of results to return.
ap_result_sort_keys – list of object properties to sort the query result with (sort key priority starts with element 0 as
highest priority and then descends down the list).
Raises ValueError:
if neither type_name nor keyword filters are provided. Also raised if the search result count does not match the number
specified by ap_result_count within ap_query_timeout seconds.
See also
Tutorial Section Writing Custom Proxy Classes
wait_select_single(type_name='*', ap_query_timeout=10, **kwargs)
Get a proxy object matching some search criteria, retrying if no object is found until a timeout is reached.
This method is identical to the select_single method, except that this method will poll the application under test for 10
seconds in the event that the search criteria does not match anything.
This method will return single proxy object from the introspection tree, with type equal to type_name and (optionally)
matching the keyword filters present in kwargs.
You must specify either type_name, keyword filters or both.
This method searches recursively from the proxy object this method is called on. Calling select_single on the application (root) proxy object will search the entire tree. Calling select_single on an object in the tree will only search its
descendants.
Example usage:
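A sketch, assuming app is a proxy object and the names are hypothetical:

```python
# Retry for up to ap_query_timeout seconds until the object appears:
button = app.wait_select_single('QPushButton', objectName='clickme')
```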
If nothing is returned from the query, this method raises StateNotFoundError after ap_query_timeout seconds.
Parameters:
type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden
emulator classes).
ap_query_timeout – Time in seconds to wait for search criteria to match.
Raises:
ValueError – if the query returns more than one item. If you want more than one item, use select_many instead.
ValueError – if neither type_name nor keyword filters are provided.
StateNotFoundError – if the requested object was not found.
See also
Tutorial Section Writing Custom Proxy Classes
wait_until_destroyed(timeout=10)
Block until this object is destroyed in the application.
Block until the object this instance is a proxy for has been destroyed in the application under test. This is commonly
used to wait until a UI component has been destroyed.
Parameters:
timeout – The number of seconds to wait for the object to be destroyed. If not specified, defaults to 10 seconds.
Raises RuntimeError:
if the method timed out.
wait_until_not_moving(retry_attempts_count=20, retry_interval=0.5)
Block until this object is not moving.
Block until both x and y of the object stop changing. This is normally useful for cases where there is a need to ensure
an object is static before interacting with it.
Parameters:
retry_attempts_count – number of attempts to check if the object is moving.
retry_interval – time in fractional seconds to sleep between each attempt to check if the object is moving.
Raises RuntimeError:
if the DBus node is still moving after the number of retries specified in retry_attempts_count.
autopilot.introspection.CustomEmulatorBase
alias of ProxyBase
autopilot.introspection.is_element(ap_query_func, *args, **kwargs)
Call the ap_query_func with the args and indicate if it raises StateNotFoundError.
Param:
ap_query_func: The dbus query call to be evaluated.
Param:
args: The *ap_query_func positional parameters.
Param:
**kwargs: The ap_query_func optional parameters.
Returns:
False if the ap_query_func raises StateNotFoundError, True otherwise.
autopilot.introspection.get_classname_from_path(object_path)
Given an object path, return the class name component.
autopilot.introspection.get_path_root(object_path)
Return the name of the root node of specified path.
exception autopilot.introspection.ProcessSearchError
Object introspection error occurred.
autopilot.introspection.get_proxy_object_for_existing_process(**kwargs)
Return a single proxy object for an application that is already running (i.e. launched outside of Autopilot).
Searches the given bus (supplied by the kwarg dbus_bus) for an application matching the search criteria (also supplied
in kwargs; see further down for an explanation of what these can be). Returns a proxy object created using the supplied
custom emulator emulator_base (which defaults to None).
This function takes kwargs arguments containing search parameter values to use when searching for the target application.
Possible search criteria: (unless specified otherwise these parameters default to None)
Parameters:
pid – The PID of the application to search for.
process – The process of the application to search for. If provided, only the pid of the process is used in the search, but
if the process exits before the search is complete, it is used to supply details provided by the process object.
connection_name – A string containing the DBus connection name to use with the search criteria.
application_name – A string containing the application’s name to search for.
object_path – A string containing the object path to use as the search criteria. Defaults to
autopilot.introspection.constants.AUTOPILOT_PATH.
Non-search parameters:
Parameters:
dbus_bus – The DBus bus to search for the application. Must be a string containing either ‘session’, ‘system’ or the
custom bus’s name (e.g. ‘unix:abstract=/tmp/dbus-IgothuMHNk’). Defaults to ‘session’.
emulator_base – The custom emulator to use when creating the resulting proxy object. Defaults to None
Exceptions possibly thrown by this function:
Raises:
ProcessSearchError – If no search criteria match.
RuntimeError – If the search criteria result in more than one match.
RuntimeError – If both process and pid are supplied, but process.pid != pid.
Examples:
Retrieving an application on the session bus where the application’s PID is known:
Multiple criteria are allowed; for instance, you could search on pid and connection_name:
If the application from the previous example was on the system bus:
It is possible to search for the application given just the application’s name. An example for an application running on
a custom bus, searching using the application’s name:
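The examples above can be sketched as follows; the PID variable, application names and bus address are hypothetical:

```python
from autopilot.introspection import get_proxy_object_for_existing_process

# Session bus (the default), PID known:
app = get_proxy_object_for_existing_process(pid=app_pid)

# Multiple criteria, e.g. pid and connection_name:
app = get_proxy_object_for_existing_process(
    pid=app_pid,
    connection_name='org.example.MyApp',
)

# The same application on the system bus:
app = get_proxy_object_for_existing_process(pid=app_pid, dbus_bus='system')

# Searching by application name on a custom bus:
app = get_proxy_object_for_existing_process(
    application_name='MyApp',
    dbus_bus='unix:abstract=/tmp/dbus-IgothuMHNk',
)
```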
autopilot.introspection.get_proxy_object_for_existing_process_by_name(process_name, emulator_base=None)
Return the proxy object for a process by its name.
Parameters:
process_name – name of the process to get the proxy object for. This must be a string.
emulator_base – emulator base to use with the custom proxy object.
Raises ValueError:
if the process is not running, or more than one PID is associated with the process.
Returns:
proxy object for the requested process.
autopilot.introspection
Package for introspection object support and search.
This package contains the methods and classes that are of use for accessing dbus proxy objects and creating Custom
Proxy Object classes.
For retrieving proxy objects for already existing processes, use get_proxy_object_for_existing_process. This takes
search criteria and returns a proxy object that can be queried and introspected.
For creating your own Custom Proxy Classes use autopilot.introspection.CustomEmulatorBase
See also
The tutorial section Writing Custom Proxy Classes for further details on using ‘CustomEmulatorBase’ to write custom
proxy classes.
Elements
ProxyBase
A class that supports transparent data retrieval from the application under test.
autopilot.introspection.types.DateTime
class autopilot.introspection.types.DateTime(*args, **kwargs)
The DateTime class represents a date and time in the UTC timezone.
DateTime is constructed by passing a Unix timestamp to the constructor. The incoming timestamp is assumed to be
in UTC.
Note
This class expects the passed-in timestamp to be in UTC, but will display the resulting date and time in local time
(using the local timezone). This is done to mimic the behaviour of most applications, which display date and time in
local time by default.
Timestamps are expressed as the number of seconds since 1970-01-01T00:00:00 in the UTC timezone:
This timestamp can always be accessed either using index access or via a named property:
DateTime objects also expose the usual named properties you would expect on a date/time object:
Two DateTime objects can be compared for equality:
You can also compare a DateTime with any mutable sequence type containing the timestamp (although this probably
isn’t very useful for test authors):
Finally, you can also compare a DateTime instance with a python datetime instance:
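The behaviours listed above can be sketched as a doctest-style transcript (the timestamp is a placeholder; the named properties reflect local time, as noted below):

```
>>> dt = DateTime(1424708966)
>>> dt[0] == dt.timestamp
True
>>> dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second   # local time
>>> dt == DateTime(1424708966)
True
>>> dt == [1424708966]
True
```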
Note
Autopilot supports dates beyond 2038 on 32-bit platforms. To achieve this, the underlying mechanisms must work
with timezone-aware datetime objects.
This means that the following won’t always be true (due to the naive timestamp not having the correct daylight-savings
time details):
But this will work:
And this will always work too:
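The distinction the note draws can be reproduced with the standard datetime module alone (the timestamp is an arbitrary example):

```python
from datetime import datetime, timezone

ts = 1424708966  # an arbitrary Unix timestamp (UTC)

# Naive conversion: the wall-clock fields depend on the local timezone
# and its daylight-savings rules, so equality checks against it are
# not portable across machines.
naive = datetime.fromtimestamp(ts)

# Timezone-aware conversion: always yields the same UTC wall-clock time.
aware = datetime.fromtimestamp(ts, tz=timezone.utc)

# Round-tripping an aware datetime through .timestamp() is exact:
assert aware.timestamp() == ts

print(aware.isoformat())  # 2015-02-23T16:29:26+00:00
```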
Note
DateTime.timestamp() will not always equal the passed-in timestamp. To paraphrase a message from
http://bugs.python.org/msg229393: “datetime.timestamp is supposed to be the inverse of datetime.fromtimestamp(),
but since the latter is not monotonic, no such inverse exists in the strict mathematical sense.”
DateTime instances can be converted to datetime instances:
autopilot.introspection.types.PlainType
class autopilot.introspection.types.PlainType
Plain type support in autopilot proxy objects.
Instances of this class will be used for all plain attributes. The word “plain” in this context means anything that’s
marshalled as a string, boolean or integer type across DBus.
Instances of these classes can be used just like the underlying type. For example, given an object property called
‘length’ that is marshalled over dbus as an integer value, the following will be true:
However, a special case exists for boolean values: because you cannot subclass from the ‘bool’ type, the following
check will fail (object.visible is a boolean property):
However boolean values will behave exactly as you expect them to.
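As a sketch (obj and its length and visible properties are hypothetical stand-ins):

```
>>> obj.length == 123      # behaves like the underlying integer
True
>>> obj.length + 1
124
>>> isinstance(obj.visible, bool)    # this check fails ...
False
>>> obj.visible == True              # ... but comparisons behave as expected
True
```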
autopilot.introspection.types.Point
class autopilot.introspection.types.Point(*args, **kwargs)
The Point class represents a 2D point in cartesian space.
To construct a Point, pass in the x, y parameters to the class constructor:
These attributes can be accessed either using named attributes, or via sequence indexes:
Point instances can be compared using == and !=, either to another Point instance, or to any mutable sequence type
with the correct number of items:
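A sketch of the constructor, attribute access and comparisons described above:

```
>>> p = Point(3, 4)
>>> p.x, p[0]
(3, 3)
>>> p.y, p[1]
(4, 4)
>>> p == Point(3, 4)
True
>>> p == [3, 4]
True
>>> p != [4, 3]
True
```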
autopilot.introspection.types.Rectangle
class autopilot.introspection.types.Rectangle(*args, **kwargs)
The RectangleType class represents a rectangle in cartesian space.
To construct a rectangle, pass the x, y, width and height parameters in to the class constructor:
These attributes can be accessed either using named attributes, or via sequence indexes:
You may also access the width and height values using the width and height properties:
Rectangles can be compared using == and !=, either to another Rectangle instance, or to any mutable sequence type:
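A sketch of the behaviours above (the short attribute names w and h are an assumption inferred from the separate width/height properties mentioned in the text):

```
>>> r = Rectangle(0, 0, 100, 50)
>>> r[2] == r.w == r.width == 100
True
>>> r[3] == r.h == r.height == 50
True
>>> r == Rectangle(0, 0, 100, 50)
True
>>> r == [0, 0, 100, 50]
True
```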
autopilot.introspection.types
Autopilot proxy type support.
This module defines the classes that are used for all attributes on proxy objects. All proxy objects contain attributes
that transparently mirror the values present in the application under test. Autopilot takes care of keeping these values
up to date.
Object attributes fall into two categories. Attributes that are a single string, boolean, or integer property are sent
directly across DBus. These are called “plain” types, and are stored in autopilot as instances of the PlainType class.
Attributes that are more complex (a rectangle, for example) are called “complex” types, and are split into several
component values, sent across dbus, and are then reconstituted in autopilot into useful objects.
Elements
DateTime
The DateTime class represents a date and time in the UTC timezone.
PlainType
Plain type support in autopilot proxy objects.
Point
The Point class represents a 2D point in cartesian space.
Rectangle
The RectangleType class represents a rectangle in cartesian space.
Size
The Size class represents a 2D size in cartesian space.
Time
The Time class represents a time, without a date component.
autopilot.introspection.types.Size
class autopilot.introspection.types.Size(*args, **kwargs)
The Size class represents a 2D size in cartesian space.
To construct a Size, pass in the width, height parameters to the class constructor:
These attributes can be accessed either using named attributes, or via sequence indexes:
Size instances can be compared using == and !=, either to another Size instance, or to any mutable sequence type with
the correct number of items:
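A sketch of the behaviours above (attribute names assumed to parallel the other geometry types):

```
>>> s = Size(800, 600)
>>> s[0] == s.width == 800
True
>>> s[1] == s.height == 600
True
>>> s == Size(800, 600)
True
>>> s == [800, 600]
True
```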
autopilot.introspection.types.Time
class autopilot.introspection.types.Time(*args, **kwargs)
The Time class represents a time, without a date component.
You can construct a Time instance by passing the hours, minutes, seconds, and milliseconds to the class constructor:
The values passed in must be valid for their positions (i.e. 0-23 for hours, 0-59 for minutes and seconds, and 0-999
for milliseconds). Passing invalid values will cause a ValueError to be raised.
The hours, minutes, seconds, and milliseconds can be accessed using either index access or named properties:
Time instances can be compared to other Time instances, any mutable sequence containing four integers, or
datetime.time instances:
Note that the Time class stores milliseconds, while the datetime.time class stores microseconds.
Finally, you can get a datetime.time instance from a Time instance:
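The milliseconds-to-microseconds relationship mentioned in the note is a factor of 1000; a small hypothetical helper (not part of autopilot) makes the conversion concrete:

```python
from datetime import time

def to_datetime_time(hours, minutes, seconds, milliseconds):
    """Convert an (hours, minutes, seconds, milliseconds) tuple, as held
    by autopilot's Time type, into a datetime.time, which stores
    microseconds instead."""
    return time(hours, minutes, seconds, milliseconds * 1000)

t = to_datetime_time(12, 34, 56, 789)
print(t)  # 12:34:56.789000
```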
autopilot.matchers.Eventually
class autopilot.matchers.Eventually(matcher, **kwargs)
Asserts that a value will eventually equal a given Matcher object.
This matcher wraps another testtools matcher object. It makes that other matcher work with a timeout. This is
necessary for several reasons:
• Since most actions in a GUI application take some time to complete, the test may need to wait for the application to
enter the expected state.
• Since the test is running in a separate process from the application under test, test authors cannot make any
assumptions about when the application under test will receive CPU time to update to the expected state.
There are two main ways of using the Eventually matcher:
Attributes from the application:
Here, window is an object generated by autopilot from the application’s state. This pattern of usage will cover 90% (or
more) of the assertions in an autopilot test. Note that any matcher can be used - either from testtools or any custom
matcher that implements the matcher API:
Callable Objects:
In this example we’re using the autopilot.platform.model function as a callable. In this form, Eventually matches
against the return value of the callable.
This can also be used to use a regular python property inside an Eventually matcher:
Note
Using this form generally makes your tests less readable, and should be used with great care. It also relies on the
test author having knowledge about the implementation of the object being matched against. In this example, if
self.mouse.x were ever to change to be a regular python attribute, this test would likely break.
Timeout
By default, the timeout period is ten seconds. This can be altered by passing the timeout keyword:
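The two usage forms, plus the timeout keyword, might look like this (window, self.mouse and the expected values are illustrative placeholders):

```
>>> self.assertThat(window.maximized, Eventually(Equals(True)))

>>> self.assertThat(autopilot.platform.model, Eventually(Equals('Desktop')))

>>> self.assertThat(lambda: self.mouse.x, Eventually(LessThan(10)))

>>> self.assertThat(window.maximized, Eventually(Equals(True), timeout=30))
```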
Warning
The Eventually matcher does not work with any other matcher that expects a callable argument (such as testtools’
‘Raises’ matcher).
autopilot.matchers
Autopilot-specific testtools matchers.
Elements
Eventually
Asserts that a value will eventually equal a given Matcher object.
autopilot.platform
autopilot.platform.model()
Get the model name of the current platform.
For desktop / laptop installations, this will return “Desktop”. Otherwise, the current hardware model will be returned.
For example:
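For example, on a desktop installation (on a phone, the hardware model string would be returned instead):

```
>>> platform.model()
'Desktop'
```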
autopilot.platform.image_codename()
Get the image codename.
For desktop / laptop installations this will return “Desktop”. Otherwise, the codename of the image that was installed
will be returned. For example:
>>> platform.image_codename()
‘maguro’
autopilot.platform.is_tablet()
Indicate whether system is a tablet.
The ‘ro.build.characteristics’ property is checked for ‘tablet’. For example:
>>> platform.is_tablet()
True
Returns:
boolean indicating whether this is a tablet
autopilot.platform.get_display_server()
Returns display server type.
Returns:
string indicating display server type. Either “X11”, “MIR” or “UNKNOWN”
autopilot.process.Application
class autopilot.process.Application
desktop_file
Get the application desktop file.
This returns just the filename, not the full path. If the application no longer exists, this returns an empty string.
name
Get the application name.
Note
This may change according to the current locale. If you want a unique string to match applications against, use
desktop_file instead.
icon
Get the application icon.
Returns:
The name of the icon.
is_active
Is the application active (i.e. has keyboard focus)?
is_urgent
Is the application currently signalling urgency?
user_visible
Is this application visible to the user?
Note
Some applications (such as the panel) are hidden to the user but may still be returned.
get_windows()
Get a list of the application windows.
autopilot.process.ProcessManager
class autopilot.process.ProcessManager
A simple process manager class.
The process manager is used to handle processes, windows and applications. This class should not be instantiated
directly, however. To get an instance of the ProcessManager class, call create instead.
KNOWN_APPS = {
‘System Settings’: {‘process-name’: ‘unity-control-center’, ‘desktop-file’: ‘unity-control-center.desktop’},
‘Mahjongg’: {‘process-name’: ‘gnome-mahjongg’, ‘desktop-file’: ‘gnome-mahjongg.desktop’},
‘Text Editor’: {‘process-name’: ‘gedit’, ‘desktop-file’: ‘gedit.desktop’},
‘Terminal’: {‘process-name’: ‘gnome-terminal’, ‘desktop-file’: ‘gnome-terminal.desktop’},
‘Character Map’: {‘process-name’: ‘gucharmap’, ‘desktop-file’: ‘gucharmap.desktop’},
‘Remmina’: {‘process-name’: ‘remmina’, ‘desktop-file’: ‘remmina.desktop’},
‘Calculator’: {‘process-name’: ‘gnome-calculator’, ‘desktop-file’: ‘gcalctool.desktop’}}
static create(preferred_backend='')
Get an instance of the ProcessManager class.
For more information on picking specific backends, see Advanced Backend Picking.
Parameters:
preferred_backend –
A string containing a hint as to which backend you would like. Possible backends are:
• BAMF - Get process information using the BAMF Application Matching Framework.
Raises:
RuntimeError if autopilot cannot instantiate any of the possible backends.
Raises:
RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class.
Raises:
BackendException if the preferred_backend is set, but that backend could not be instantiated.
classmethod register_known_application(name, desktop_file, process_name)
Register an application with autopilot.
After calling this method, you may call start_app or start_app_window with the name parameter to start this application. You need only call this once within a test run - the application will remain registered until the test run ends.
Parameters:
name – The name to be used when launching the application.
desktop_file – The filename (without path component) of the desktop file used to launch the application.
process_name – The name of the executable process that gets run.
Raises:
KeyError if application has been registered already
classmethod unregister_known_application(name)
Unregister an application with the known_apps dictionary.
Parameters:
name – The name to be used when launching the application.
Raises:
KeyError if the application has not been registered.
start_app(app_name, files=[], locale=None)
Start one of the known applications, and kill it on tear down.
Warning
This method will clear all instances of this application on tearDown, not just the one opened by this method! We
recommend that you use the start_app_window method instead, as it is generally safer.
Parameters:
app_name – The application name. This name must either already be registered as one of the built-in applications that
are supported by autopilot, or must have been registered using register_known_application beforehand.
files – (Optional) A list of paths to open with the given application. Not all applications support opening files in this
way.
locale – (Optional) The locale to set when the application is launched. If you want to launch an application without
any localisation being applied, set this parameter to ‘C’.
Returns:
A Application instance.
start_app_window(app_name, files=[], locale=None)
Open a single window for one of the known applications, and close it at the end of the test.
Parameters:
app_name – The application name. This name must either already be registered as one of the built-in applications that
are supported by autopilot, or must have been registered with register_known_application beforehand.
files – (Optional) Should be a list of paths to open with the given application. Not all applications support opening
files in this way.
locale – (Optional) The locale to set when the application is launched. If you want to launch an application without
any localisation being applied, set this parameter to ‘C’.
Raises:
AssertionError if no window was opened, or more than one window was opened.
Returns:
A Window instance.
get_open_windows_by_application(app_name)
Get a list of autopilot.process.Window instances for the given application name.
Parameters:
app_name – The name of one of the well-known applications.
Returns:
A list of Window instances.
close_all_app(app_name)
get_app_instances(app_name)
app_is_running(app_name)
get_running_applications(user_visible_only=True)
Get a list of the currently running applications.
If user_visible_only is True (the default), only applications visible to the user in the switcher will be returned.
get_running_applications_by_desktop_file(desktop_file)
Return a list of applications with the desktop file desktop_file.
This method will return an empty list if no applications are found with the specified desktop file.
get_open_windows(user_visible_only=True)
Get a list of currently open windows.
If user_visible_only is True (the default), only applications visible to the user in the switcher will be returned.
The result is sorted to be in stacking order.
wait_until_application_is_running(desktop_file, timeout)
Wait until a given application is running.
Parameters:
desktop_file (string) – The name of the application desktop file.
timeout (integer) – The maximum time to wait, in seconds. If set to something less than 0, this method will wait
forever.
Returns:
True once the application is found, or False if the application was not found before the timeout was reached.
launch_application(desktop_file, files=[], wait=True)
Launch an application by specifying a desktop file.
Parameters:
files (List of strings) – List of files to pass to the application. Not all apps support this.
Note
If wait is True, this method will wait up to 10 seconds for the application to appear.
Raises:
TypeError on invalid files parameter.
Returns:
The Gobject process object.
autopilot.process
Elements
Application
Get the application desktop file.
ProcessManager
A simple process manager class.
Window
Get the X11 Window Id.
autopilot.process.Window
class autopilot.process.Window
x_id
Get the X11 Window Id.
x_win
Get the X11 window object of the underlying window.
get_wm_state
Get the state of the underlying window.
name
Get the window name.
Note
This may change according to the current locale. If you want a unique string to match windows against, use the x_id
instead.
title
Get the window title.
This may be different from the application name.
Note
This may change depending on the current locale.
geometry
Get the geometry for this window.
Returns:
Tuple containing (x, y, width, height).
is_maximized
Is the window maximized?
Maximized in this case means both maximized vertically and horizontally. If a window is only maximized in one
direction it is not considered maximized.
application
Get the application that owns this window.
This method may return None if the window does not have an associated application. The ‘desktop’ window is one
such example.
user_visible
Is this window visible to the user in the switcher?
is_hidden
Is this window hidden?
Windows are hidden when the ‘Show Desktop’ mode is activated.
is_focused
Is this window focused?
is_valid
Is this window object valid?
Invalid windows are caused by windows closing during the construction of this object instance.
monitor
Returns the monitor to which the window belongs.
closed
Returns True if the window has been closed
close()
Close the window.
set_focus()
autopilot.testcase.AutopilotTestCase
class autopilot.testcase.AutopilotTestCase(*args, **kwargs)
Wrapper around testtools.TestCase that adds significant functionality.
This class should be the base class for all autopilot test case classes. Not using this class as the base class disables
several important convenience methods, and also prevents the use of the failed-test recording tools.
launch_test_application(application, *arguments, **kwargs)
Launch application and return a proxy object for the application.
Use this method to launch an application and start testing it. The positional arguments are used as arguments to the
application to launch. Keyword arguments are used to control the manner in which the application is launched.
This method is designed to be flexible enough to launch all supported types of applications. Autopilot can automatically determine how to enable introspection support for dynamically linked binary applications. For example, to
launch a binary Gtk application, a test might start with:
Applications can be given command line arguments by supplying positional arguments to this method. For example,
if we want to launch gedit with a certain document loaded, we might do this:
... a Qt5 Qml application is launched in a similar fashion:
If you wish to launch an application that is not a dynamically linked binary, you must specify the application type. For
example, a Qt4 python application might be launched like this:
Similarly, a python/Gtk application is launched like so:
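The launch variants described above might look like this (application names and file paths are placeholders):

```
>>> app = self.launch_test_application('gedit')

>>> app = self.launch_test_application('gedit', '/tmp/test-document.txt')

>>> app = self.launch_test_application('qmlscene', 'application.qml')

>>> app = self.launch_test_application('my_qt4_app.py', app_type='qt')

>>> app = self.launch_test_application('my_gtk_app.py', app_type='gtk')
```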
Parameters:
application –
The application to launch. The application can be specified as:
A full, absolute path to an executable file. (/usr/bin/gedit)
A relative path to an executable file. (./build/my_app)
An app name, which will be searched for in $PATH (my_app)
app_type – If set, provides a hint to autopilot as to which kind of introspection to enable. This is needed when the
application you wish to launch is not a dynamically linked binary. Valid values are ‘gtk’ or ‘qt’. These strings are case
insensitive.
launch_dir – If set to a directory that exists the process will be launched from that directory.
capture_output – If set to True (the default), the process output will be captured and attached to the test as test detail.
emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Returns:
A proxy object that represents the application. Introspection data is retrievable via this object.
launch_click_package(package_id, app_name=None, app_uris=[], **kwargs)
Launch a click package application with introspection enabled.
This method takes care of launching a click package with introspection enabled. You probably want to use this method
if your application is packaged in a click application, or is started via upstart.
Usage is similar to the AutopilotTestCase.launch_test_application:
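For example (the package id is taken from the parameter description below; the uri is a placeholder):

```
>>> app = self.launch_click_package(
...     'com.ubuntu.dropping-letters',
...     app_uris=['/tmp/test.txt'],
... )
```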
Parameters:
package_id – The Click package name you want to launch. For example: com.ubuntu.dropping-letters
app_name – Currently, only one application can be packaged in a click package, and this parameter can be left at
None. If specified, it should be the application name you wish to launch.
app_uris – Parameters used to launch the click package. This parameter will be left empty if not used.
emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Raises:
RuntimeError – If the specified package_id cannot be found in the click package manifest.
RuntimeError – If the specified app_name cannot be found within the specified click package.
Returns:
proxy object for the launched package application
launch_upstart_application(application_name, uris=[], launcher_class=<class ‘autopilot.application._launcher.UpstartApplicationLauncher’>, **kwargs)
Launch an application with upstart.
This method launches an application via the ubuntu-app-launch library, on platforms that support it.
Usage is similar to the AutopilotTestCase.launch_test_application:
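A minimal sketch (the application name is a placeholder):

```
>>> app = self.launch_upstart_application('gallery-app')
```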
Parameters:
application_name – The name of the application to launch.
launcher_class – The application launcher class to use. Useful if you need to override the default to do something
custom (i.e. using AlreadyLaunchedUpstartLauncher).
Parameters:
emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Raises RuntimeError:
If the specified application cannot be launched.
take_screenshot(attachment_name)
Take a screenshot of the current screen and add it to the test as a detail named attachment_name.
If attachment_name already exists as a detail the name will be modified to remove the naming conflict (i.e. using
TestCase.addDetailUniqueName).
Returns True if the screenshot was taken and attached successfully, False otherwise.
patch_environment(key, value)
Patch environment using fixture.
This function is deprecated and planned for removal in autopilot 1.6. New implementations should use EnvironmentVariable from the fixtures module:
‘key’ will be set to ‘value’. During tearDown, it will be reset to a previous value, if one is found, or unset if not.
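A sketch of the suggested replacement using the fixtures library (assuming it is available; key and value are placeholders):

```
from fixtures import EnvironmentVariable

self.useFixture(EnvironmentVariable('MY_KEY', 'my-value'))
```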
assertVisibleWindowStack(stack_start)
Check that the visible window stack starts with the windows passed in.
Note
Minimised windows are skipped.
Parameters:
stack_start – An iterable of Window instances.
Raises AssertionError:
if the top of the window stack does not match the contents of the stack_start parameter.
assertProperty(obj, **kwargs)
Assert that obj has properties equal to the key/value pairs in kwargs.
This method is intended to be used on objects whose attributes do not have the wait_for method (i.e. objects that do
not come from the autopilot DBus interface).
For example, from within a test, to assert certain properties on an autopilot.process.Window instance:
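For instance (my_window is an illustrative Window instance):

```
>>> self.assertProperty(my_window, is_maximized=True)
>>> self.assertProperties(my_window, is_maximized=True, is_focused=True)
```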
Note
assertProperties is a synonym for this method.
Parameters:
obj – The object to test.
kwargs – One or more keyword arguments to match against the attributes of the obj parameter.
Raises:
ValueError – if no keyword arguments were given.
ValueError – if a named attribute is a callable object.
AssertionError – if any of the attribute/value pairs in kwargs do not match the attributes on the object passed in.
assertProperties(obj, **kwargs)
Assert that obj has properties equal to the key/value pairs in kwargs.
This method is intended to be used on objects whose attributes do not have the wait_for method (i.e. objects that do
not come from the autopilot DBus interface).
For example, from within a test, to assert certain properties on an autopilot.process.Window instance:
Note
assertProperties is a synonym for this method.
Parameters:
obj – The object to test.
kwargs – One or more keyword arguments to match against the attributes of the obj parameter.
Raises:
ValueError – if no keyword arguments were given.
ValueError – if a named attribute is a callable object.
AssertionError – if any of the attribute/value pairs in kwargs do not match the attributes on the object passed in.
autopilot.testcase
Quick Start
The AutopilotTestCase is the main class test authors will be interacting with. Every autopilot test case should derive
from this class. AutopilotTestCase derives from testtools.TestCase, so test authors can use all the methods defined in
that class as well.
Writing tests
Tests must be named: test_<testname>, where <testname> is the name of the test. Test runners (including autopilot
itself) look for methods with this naming convention. It is recommended that you make your test names descriptive of
what each test is testing. For example, possible test names include:
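Possible names, sketched with the standard unittest module (the names themselves are illustrative), showing that conforming methods are discovered by unittest-compatible runners:

```python
import unittest

class CalculatorAppTests(unittest.TestCase):
    # Test runners look for methods whose names start with "test_".
    def test_can_enter_simple_sum(self):
        self.assertEqual(2 + 2, 4)

    def test_clear_button_resets_display(self):
        self.assertTrue(True)

# unittest-compatible loaders discover exactly the conforming names:
names = unittest.TestLoader().getTestCaseNames(CalculatorAppTests)
print(names)
```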
Launching the Application Under Test
If you are writing a test for an application, you need to use the launch_test_application method. This will launch the
application, enable introspection, and return a proxy object representing the root of the application introspection tree.
Elements
AutopilotTestCase
Wrapper around testtools.TestCase that adds significant functionality.
App development cookbook
The App Developer Cookbook is a collection of short examples, how-tos, and answered questions from our developer
community. In the sections below you will find information about how to perform common tasks, answers to frequently
asked questions, and code snippets from real-world examples.
General App Development
• Basic QML tutorial
• Ubuntu Touch app development book
• Is there way to compile Qt5 programs, written with c++, to Ubuntu Touch?
• Is QML the only way to create apps in Ubuntu for tablets?
• Are Ubuntu Phone apps compatible across different devices? And if yes, how?
• Is it possible to write a Mobile app with a engine written in C?
• How can I install commonly used developer tools?
• Can I develop a hybrid native/HTML5 app for the Ubuntu Phone?
• Will developers be able to use ruby or python for apps on ubuntu mobile?
Platform and System Services
• When to use gconf vs dconf?
• How to add support for new services to Friends?
• How can I use the voice recognition used by Android on Ubuntu?
• How Will App Permissions be Handled in Ubuntu Touch?
• How do I programmatically detect presence changes in the messaging menu?
• Is there a digital protection system in place to prevent piracy of commercial applications?
• What is the best practice for saving data in Ubuntu one db from mobile?
• How to retrieve a list of devices connected to Ubuntu One?
• Ubuntu one API file upload maximum size
• How can I run a command from a qml script?
• Run system commands from QML App
UI Components and Shell Integration
• Is there any tutorial for programming Unity indicators?
• How do I create a working indicator with Qt/C++?
• What is the equivalent to Android Intent and BroadcastReceiver?
• How to use gettext to make QML files translatable
• What is the @ sign added as a suffix to images used for apps based in the Ubuntu SDK?
• How to remove my application’s appindicator programmatically?
• What Interface Toolkit is being recommended for Ubuntu on Nexus7/Mobile Devices?
• Unity Launcher API for C++
• How to use theming in QML for Ubuntu Phone
• How to create a dialog and set title and text dynamically
• What icon does Unity use for an application?
• How can I center an ActivityIndicator within the screen?
• How to create very very simple GUI application for Ubuntu?
• How to emit onDropped in QML drag n drop example?
• Re-use toolbar code for each tab
• Screen dependent image resolution
• Using AppIndicators with the Qt framework
• Where are the default Unity lenses stored?
• Large amount of scrollable text in Ubuntu touch
• How do I get an UbuntuShape to transition (fade) between different images?
• Bad color of backgroundColor in a MainView when fixed to “#F1E1A3”
• Set background for Page{} element in ubuntu touch
• Buttons in ubuntu touch toolbar
• How can I invoke the soft-keyboard widget on Ubuntu-touch?
Device Sensors
• Low-level 10-finger multi-touch data on the Nexus 7?
• Will location data be available to ubuntu mobile apps?
Games
• Is there a simple “Hello World” for making games?
• What 2D/3D engines and game SDKs are available?
• Which free 2D game engine for Ubuntu is the best choice for me?
• Where is the documentation for programming OpenGL ES for Ubuntu Touch?
Files and Storage
• Where do applications typically store data?
• What is the preferred way to store application settings?
• Can commercial applications use Gsettings?
Multimedia
• How to pass the image preview from the QML Camera component to a C++ plugin
• Is there a standard or recommended sound lib in Ubuntu?
• Playing Sound with Ubuntu QML Toolkit preview
• Problem with SVG image in QML
Networking
• How to programmatically get a list of wireless SSIDs in range from NetworkManager
• get text from a Website in javascript/qml