MIKELANGELO Project
Project No. 645402
Deliverable D6.2: Final Report on the Architecture and Implementation Evaluation
Workpackage
6
Author(s): John Kennedy, Marcin Spoczynski (INTEL); Gregor Berginc, Daniel Vladušič, Justin Činkelj (XLAB); Yosandra Sandoval, Nico Struckmann (HLRS); Peter Chronz, Maik Srba (GWDG); Shiqing Fan, Fang Chen (HUA); Benoît Canet, Nadav Har’El (SCYLLA); Kalman Meth, Yossi Kuperman (IBM); Niv Gilboa, Gabriel Scalosub (BGU); Matej Andrejašič (PIPISTREL)
Reviewers: Bastian Koller (HLRS), Holm Rauchfuss (HUAWEI)
Dissemination Level: PU
Infrastructure Integration
Date | Author | Comments | Version | Status
23 Oct 2017 | John Kennedy | Initial structure | V0.0 | Draft
17 Dec 2017 | All WP6 Partners | Initial draft for review | V0.1 | Review
27 Dec 2017 | All WP6 Partners | Final version | V1.0 | Final
Public deliverable
© Copyright Beneficiaries of the MIKELANGELO Project
Page 1 of 122
Executive Summary
The MIKELANGELO project[1] has sought to improve the I/O performance and security of
Cloud and HPC deployments running on the OSv[2] and sKVM[3] software stack. The project
has now concluded. Architectures and prototypes for all envisioned components and more
have been prepared, and all completed implementations have been successfully deployed
into appropriate testbeds: either local testbeds, the GWDG integrated Cloud testbed or the
HLRS integrated HPC testbed.
An evaluation of the various individual components developed by MIKELANGELO, and of the
integrated Cloud and HPC stacks, is presented in this document. The emphasis is on
reviewing their architecture, performance and implementation. For details on the
performance and integration of the components within the various MIKELANGELO use-cases,
the reader is directed to the complementary report D6.4, Final Report on the Use-Case
Implementations.
Regarding the individual components developed by MIKELANGELO, their architectures all proved successful at delivering performance or security improvements. Some improvements were substantial: vRDMA delivered up to 98% of the unvirtualized host-to-host performance for communications between virtual guests, SeaStar helped Scylla achieve 10x the throughput of Apache Cassandra, and SCAM detected 100% of side-channel attacks in evaluation. Other improvements were more modest, applied only to certain scenarios, or required significantly more effort to implement than originally anticipated (e.g. ZeCoRx, whose development is continuing at the time of writing).
The integration of the Cloud and HPC stacks has demonstrated how many of the MIKELANGELO components can be successfully integrated into production-grade deployments. Some incompatibilities in kernel versions and hardware support did mean that not all components could be deployed on both of these environments.
In terms of implementation evaluation, the successful upstreaming of code from
MIKELANGELO into many production-grade open-source projects such as OSv, SeaStar, Snap,
UniK and Virtlet demonstrates the quality of the code that has been developed. These
projects have embraced the MIKELANGELO contributions and included them in their
showcase demonstrations.
The architecture and implementation evaluation activities outlined in this deliverable confirm that MIKELANGELO has been significantly successful in delivering value-added enhancements for advanced Cloud and HPC environments. Partners are keen to build on their achievements to date in future work.
Acknowledgement
The work described in this document has been conducted within the Research & Innovation action MIKELANGELO (project no. 645402), started in January 2015 and co-funded by the European Commission under the Information and Communication Technologies (ICT) theme of the H2020 framework programme (H2020-ICT-07-2014: Advanced Cloud Infrastructures and Services).
Table of contents
1 Introduction .......................................................................................................................................... 11
2 Component Evaluation ..................................................................................................................... 13
2.1 Introduction ...................................................................................................................................... 13
2.2 Linux Hypervisor IO Core Management ................................................................................ 13
2.2.1 Architectural Evaluation ........................................................................................................... 14
2.2.2 Performance Evaluation ........................................................................................................... 15
2.2.3 Implementation Evaluation .................................................................................................... 19
2.3 Zero Copy on Receive ................................................................................................................... 19
2.3.1 Architectural Evaluation ........................................................................................................... 19
2.3.2 Performance Evaluation ........................................................................................................... 20
2.3.3 Implementation Evaluation .................................................................................................... 21
2.4 Virtual RDMA .................................................................................................................................... 22
2.4.1 Architectural Evaluation ........................................................................................................... 22
2.4.2 Performance Evaluation ........................................................................................................... 25
2.4.3 Implementation Evaluation .................................................................................................... 30
2.5 Unikernel Guest Operating System .......................................................................................... 31
2.5.1 Architectural Evaluation ........................................................................................................... 31
2.5.2 Performance Evaluation ........................................................................................................... 32
2.5.3 Implementation Evaluation .................................................................................................... 41
2.6 Lightweight Execution Environment Toolbox ...................................................................... 42
2.6.1 Architecture Evaluation ............................................................................................................ 43
2.6.1.1 UniK ............................................................................................................................................. 45
2.6.1.2 Kubernetes Virtlet ................................................................................................................... 46
2.6.2 Performance Evaluation ........................................................................................................... 46
2.6.2.1 First Time OSv and Application Users ............................................................................ 48
2.6.2.2 Application Package Authors ............................................................................................ 49
2.6.3 Implementation Evaluation .................................................................................................... 52
2.6.3.1 Package Metadata ................................................................................................................. 52
2.6.3.2 Volume Management ........................................................................................................... 53
2.6.3.3 Application Packages ........................................................................................................... 54
2.6.4 Key Performance Indicators ................................................................................................... 55
2.7 UNikernel Cross-Layer opTimisation - UNCLOT ................................................................. 58
2.7.1 Architecture Evaluation ............................................................................................................ 59
2.7.2 Performance Evaluation ........................................................................................................... 59
2.7.3 Implementation Evaluation .................................................................................................... 62
2.8 Full-Stack Monitoring .................................................................................................................... 63
2.8.1 Architecture Evaluation ............................................................................................................ 63
2.8.2 Performance Evaluation ........................................................................................................... 66
2.8.3 Implementation Evaluation .................................................................................................... 70
2.9 Efficient Asynchronous Applications - Seastar .................................................................... 71
2.9.1 Architectural Evaluation ........................................................................................................... 72
2.9.2 Performance Evaluation ........................................................................................................... 75
2.9.3 Implementation Evaluation .................................................................................................... 80
2.10 Side Channel Attack Mitigation .............................................................................................. 80
2.10.1 Architectural Evaluation ........................................................................................................ 81
2.10.2 Performance Evaluation ........................................................................................................ 81
2.10.2.1 System Setup ......................................................................................................................... 81
2.10.2.2 Execution Setup .................................................................................................................... 82
2.10.2.3 Monitoring and profiling evaluation ............................................................................ 82
2.10.2.4 Noisification evaluation ..................................................................................................... 84
2.10.3 Implementation Evaluation .................................................................................................. 87
3 Full Stack Evaluation ......................................................................................................................... 88
3.1 Introduction ...................................................................................................................................... 88
3.2 Full Stack for Cloud ........................................................................................................................ 88
3.2.1 KPIs relevant to the cloud integration ................................................................................ 88
3.2.1.1 KPI1.2 .......................................................................................................................................... 89
3.2.1.2 KPI2.1 .......................................................................................................................................... 89
3.2.1.3 KPI3.1 .......................................................................................................................................... 89
3.2.1.4 KPI3.2 .......................................................................................................................................... 90
3.2.1.5 KPI3.3 .......................................................................................................................................... 90
3.2.1.6 KPI4.1 .......................................................................................................................................... 90
3.2.1.7 KPI5.1 .......................................................................................................................................... 91
3.2.1.8 KPI6.1 .......................................................................................................................................... 91
3.2.2 Components Integrated .......................................................................................................... 91
3.2.3 Functional Tests .......................................................................................................................... 94
3.2.4 Use Cases ...................................................................................................................................... 94
3.3 Full Stack for HPC ........................................................................................................................... 96
3.3.1 KPIs relevant to HPC ................................................................................................................. 97
3.3.2 KPI Evaluation .............................................................................................................................. 98
3.3.3 Components Integrated ........................................................................................................ 100
3.3.3.1 Snap .......................................................................................................................................... 100
3.3.3.2 IOcm ......................................................................................................................................... 100
3.3.3.3 vRDMA ..................................................................................................................................... 100
3.3.3.4 UNCLOT .................................................................................................................................. 101
3.3.3.5 OSv ............................................................................................................................................ 101
3.3.3.6 LEET ........................................................................................................................................... 101
3.3.3.7 Scotty ....................................................................................................................................... 101
3.3.4 Integration Tests ...................................................................................................................... 101
3.3.5 Use Cases ................................................................................................................................... 102
3.3.5.1 Cancellous Bones ................................................................................................................. 102
3.3.5.2 Aerodynamic Maps with OpenFOAM .......................................................................... 103
3.3.6 Challenges ................................................................................................................................. 104
3.3.6.1 Security .................................................................................................................................... 104
3.3.6.2 Stability .................................................................................................................................... 104
3.3.6.3 Performance .......................................................................................................................... 105
3.3.6.4 Usability and User Experience ........................................................................................ 105
3.4 Unsupervised Exploratory Data Analysis ............................................................................. 108
3.4.1 All Metrics Analysis ................................................................................................................. 110
3.4.2 I/O Specific Analysis ............................................................................................................... 111
4 Observations ..................................................................................................................................... 115
4.1 Individual Components ............................................................................................................. 115
4.1.1 Linux Hypervisor IO Core Management - sKVM’s IOcm ........................................... 115
4.1.2 Zero Copy on Receive - sKVM’s ZeCoRx ........................................................................ 115
4.1.3 Virtual RDMA - sKVM’s virtual RDMA .............................................................................. 115
4.1.4 Unikernel Guest Operating System - OSv ...................................................................... 116
4.1.5 Lightweight Execution Environment Toolbox - LEET .................................................. 116
4.1.6 UNikernel Cross-Layer opTimisation - UNCLOT .......................................................... 116
4.1.7 Full-Stack Monitoring - Snap .............................................................................................. 117
4.1.8 Efficient Asynchronous Applications - Seastar ............................................................. 117
4.1.9 Side Channel Attack Mitigation - sKVM’s SCAM ......................................................... 118
4.2 Integrated Stacks ......................................................................................................................... 118
4.2.1 Full Stack for Cloud ................................................................................................................ 118
4.2.2 Full Stack for HPC ................................................................................................................... 118
5 Concluding Remarks ...................................................................................................................... 120
6 References and Applicable Documents .................................................................................. 121
Table of Figures
Figure 1: MIKELANGELO Final architecture................................................................................................... 11
Figure 2: IO Core Management video ............................................................................................................ 14
Figure 3: IO Core Management test system setup .................................................................................... 15
Figure 4: Performance comparison of netperf TCP stream between baseline and elvis-X......... 16
Figure 5: Performance comparison of netperf TCP stream between the baseline and the best
of elvis-X (per message size) .............................................................................................................................. 17
Figure 6: Performance comparison of Apache HTTP server between the baseline and elvis-X ......... 18
Figure 7: Comparing the performance of Apache HTTP server between the baseline and the
io-manager................................................................................................................................................................ 19
Figure 8: Overall Architecture of the final design of vRDMA based on Prototype II and III ...... 23
Figure 9: PerfTest Write Bandwidth ................................................................................................................. 26
Figure 10: NetPIPE bandwidth run with Open MPI using rdmacm and oob module ................... 27
Figure 11: NetPIPE latency run with Open MPI using rdmacm and oob module .......................... 28
Figure 12: NetPIPE bandwidth run using socket and rsocket ................................................................ 28
Figure 13: NetPIPE latency run with socket and rsocket.......................................................................... 29
Figure 14: Virtual Machine image size for Aerodynamic Maps............................................................. 40
Figure 15: Time to boot VMs for Aerodynamic Maps .............................................................................. 40
Figure 16: Final architecture of the MIKELANGELO Package Management. Components
marked with green colour are available in the final release of Capstan............................................ 43
Figure 17: OSU Micro-Benchmarks bandwidth test. ................................................................................. 60
Figure 18: OSU Micro-Benchmarks latency test. ........................................................................................ 61
Figure 19: OSU Micro-Benchmarks latency test for small message sizes. ........................................ 61
Figure 20: Nginx performance evaluation using wrk. ............................................................................... 62
Figure 21: Snap task workflow for 500-node test ...................................................................................... 68
Figure 22: Snap grafana dashboard illustrating mean CPU utilisation across 500 nodes........... 69
Figure 23: Go report page for Snap kvm collector .................................................................................... 71
Figure 24: YCSB benchmarking on ScyllaDB vs Cassandra. .................................................................... 77
Figure 25: Latency of SMF Write-Ahead-Log with Seastar vs Apache Kafka ................................... 78
Figure 26: QPS of SMF Write-Ahead-Log with Seastar vs Apache Kafka .......................................... 78
Figure 27: Seastar httpd performance evaluation ...................................................................................... 79
Figure 28: Trace of monitoring activity with inter-sample time of HPCs set to s=100us. The
figure shows, from top to bottom: (1) access count per sample, (2) miss count per sample, (3)
miss/access ratio per sample, and (4) moving average with window width of W=50 samples.
....................................................................................................................................................................................... 83
Figure 29: Trace of monitoring activity with inter-sample time of HPCs set to s=1000us. The
figure shows, from top to bottom: (1) access count per sample, (2) miss count per sample, (3)
miss/access ratio per sample, and (4) moving average with window width of W=10 samples.
....................................................................................................................................................................... 83
Figure 30: Distributions of LLC misses without noisification when the target is inactive (left)
and active (right). .................................................................................................................................................... 85
Figure 31: Distributions of LLC misses with noisification of six lines by the first core and five
lines by the second core when the target is inactive (left) and active (right)................................. 86
Figure 32: A comparison of the statistical distance between the distributions for different
combinations of accessed lines per cores participating in noisification (lower is better)........... 87
Figure 33: KPIs associated with the Full Stack for Cloud ......................................................................... 88
Figure 34: Components integrated in the Full-Stack Cloud Testbed.................................................. 93
Figure 35: MIKELANGELO software stack for HPC ..................................................................................... 97
Figure 36: Principal components showing the percentage of variation captured by each
component/dimension. ..................................................................................................................................... 109
Figure 37: Feature contribution analysis for a dimension / principal component. ..................... 110
Figure 38: Analysis of selected features from all metrics data ........................................................... 111
Figure 39: Analysis of correlations from vanilla data set ...................................................................... 111
Figure 40: Analysis of correlations from MIKELANGELO data set ..................................................... 111
Figure 41: Analysis of selected features from I/O specific data ......................................................... 112
Figure 42: Comparison of sdc disk mean value of write operation per second. ......................... 112
Figure 43: Comparison of trends of sdc disk mean value of write operation per second. ...... 113
Figure 44: Analysis of correlations of selected features........................................................................ 114
Table of Tables
Table 1: Boot time of OSv vs Linux .................................................................................................................. 35
Table 2: Image size of OSv vs Linux ................................................................................................................. 35
Table 3: Minimum memory requirements of OSv vs Linux ..................................................................... 37
Table 4: Time to compose images, LEET vs Developer Scripts .............................................................. 47
Table 5: Application Package Management tool comparison ............................................................... 49
Table 6: KPIs for LEET ............................................................................................................................................ 56
Table 7: Comparing telemetry frameworks collecting 10 probes ....................................................... 66
Table 8: Comparing telemetry frameworks collecting 50 probes...................................................... 66
Table 9: Comparing telemetry frameworks collecting 100 probes ................................................... 67
Table 10: KPIs relevant to HPC Integrations ................................................................................................. 97
Table 11: Job submission time, qsub vs. vsub .......................................................................................... 106
Table 12: Job submission time, qsub vs. vsub-specific logic............................................................... 107
Table 13: vTorque wrapper timings by phase........................................................................................... 107
1 Introduction
The MIKELANGELO project has sought to improve the I/O performance and security of Cloud
and HPC software running on top of the OSv and sKVM software stack.
The final architecture of the MIKELANGELO project is documented in Deliverable D2.21, The
Final MIKELANGELO Architecture[4]. The key components of this architecture are illustrated in
Figure 1.
Figure 1: MIKELANGELO Final architecture
This document presents an evaluation of this architecture and its implementation at the end
of the project.
The technical architecture, performance and implementation of all MIKELANGELO
components intended to improve performance and/or security are evaluated in Chapter 2.
Both benefits and limitations of the MIKELANGELO approaches are discussed. The results of
benchmarking are presented. Any Key Performance Indicators - KPIs - associated with the
components are considered.
The individual components evaluated in Chapter 2 are designed to complement each other
when integrated in stacks to address Cloud or HPC use-cases. Evaluations of both an
integrated Cloud stack and an integrated HPC stack are presented in Chapter 3. An
unsupervised exploratory data analysis on a cloud deployment is also described.
Chapter 4 summarises the key observations of the architecture and implementation
evaluation of both the individual and integrated components.
Chapter 5 provides some concluding remarks, and references are provided in Chapter 6.
2 Component Evaluation
2.1 Introduction
This chapter evaluates the technical architecture, performance and current implementation of individual components of the MIKELANGELO stack. The components
considered are:
● Linux Hypervisor IO Core Management - sKVM’s IOcm
● Zero Copy on Receive - sKVM’s ZeCoRx
● Virtual RDMA - sKVM’s virtual RDMA
● Unikernel Guest Operating System - OSv
● Lightweight Execution Environment Toolbox - LEET
● UNikernel Cross-Layer OpTimisation - UNCLOT
● Full-stack Monitoring - Snap
● Efficient Asynchronous Applications - Seastar
● Side Channel Attack Mitigation - sKVM’s SCAM
2.2 Linux Hypervisor IO Core Management
Work on IOcm was completed in year two of the project, and the evaluation was reported in
document D6.1[5], First report on the Architecture and Implementation Evaluation. We
reproduce those results here for completeness. Additional code clean-up was performed and
the code was packaged for easier deployment. Two educational videos were also prepared:
● https://www.youtube.com/watch?v=pku-_7Io4II
● https://www.youtube.com/watch?v=sShqUVe_JEc
Figure 2: IO Core Management video
2.2.1 Architectural Evaluation
In the current implementation of KVM, each virtual device gets its own vhost thread. This is a
very simple programming model since threads are a convenient abstraction, but not
necessarily the most efficient. In essence, as the number of virtual machines increases, so
does the number of virtual devices, and in turn the number of vhost threads. At some point,
all of these threads start to affect each other, and the overhead of switching between them
gets in the way of the threads doing useful work.
One idea that has been proposed to address this issue from an architectural point of view is
to use shared vhost threads. It turns out that sharing a vhost thread among multiple devices
can reduce overhead and improve efficiency. Moreover, each shared vhost thread occupies a core for the sole purpose of processing I/O. To further reduce contention between the threads, we prevent the virtual machines from sharing cores with the vhost threads. This approach
is described and evaluated in the ELVIS paper[6]. One major drawback of ELVIS is its inability
to dynamically adjust the number of cores according to the current workload.
In MIKELANGELO we enhanced ELVIS with a mechanism to modify the number of shared
vhost threads at runtime. Determining the optimal number of shared vhost threads is done
automatically at runtime by the I/O manager. The I/O manager is a user-space application
which continuously monitors the system, and adapts the number of shared threads according
to the current CPU load.
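As an illustrative sketch of such a policy (the thresholds and the sysfs control path are assumptions for illustration, not the actual IOcm interface):

```shell
#!/bin/sh
# Illustrative sketch of an I/O-manager policy loop: pick the number of
# dedicated vhost I/O cores from the observed I/O-core utilisation.
# The thresholds and the sysfs path are assumptions, not IOcm's real API.

MAX_IO_CORES=4
MIN_IO_CORES=1

# Decide the next core count from the current count and utilisation (%).
next_io_cores() {
    cores=$1
    util=$2
    if [ "$util" -gt 90 ] && [ "$cores" -lt "$MAX_IO_CORES" ]; then
        echo $((cores + 1))          # saturated: add an I/O core
    elif [ "$util" -lt 50 ] && [ "$cores" -gt "$MIN_IO_CORES" ]; then
        echo $((cores - 1))          # underused: give a core back to VMs
    else
        echo "$cores"                # keep the current configuration
    fi
}

# In the real manager this decision would run in a loop, reading the
# utilisation and writing the result to a sysfs control file, e.g.:
#   next=$(next_io_cores "$cur" "$util")
#   echo "$next" > /sys/class/vhost/io_cores    # hypothetical path
next_io_cores 1 95
```

The real I/O manager balances more inputs than a single utilisation figure; the sketch only shows the shape of the feedback loop.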
Next we evaluate our prototype with the aforementioned architectural enhancement.
2.2.2 Performance Evaluation
Our test system is composed of two physical machines: a load generator and a machine that
hosts the Virtual Machines (VMs). Both machines are identical and of type IBM System x3550
M4, equipped with two 8-core sockets of Intel Xeon E5-2660 CPU running at 2.2 GHz, 56GB
of memory and two Intel x520 dual port 10Gbps NICs. All machines run Ubuntu 14.04 with Linux 3.18 (guests, host, and load generator)[7]. The host’s hypervisor is KVM[8] with QEMU
2.2[9]. To minimize benchmarking noise, hyperthreading and all power management features
are disabled in the BIOS.
The machines are connected in a point-to-point fashion as depicted in Figure 3.
Figure 3: IO Core Management test system setup
Each experiment is executed 5 times, for 60 seconds each. We make sure the variance across
the results (both performance and CPU utilization) is negligible, and present their average.
Benchmark parameters were meticulously chosen in order to saturate the vCPU of each VM.
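The repeat-and-average procedure can be sketched as follows (the benchmark command is a stand-in that just prints a fixed throughput figure):

```shell
#!/bin/sh
# Sketch of the measurement procedure: run a benchmark 5 times for 60 s
# each and report the mean throughput. The benchmark command itself is a
# placeholder; any tool printing one number per run would fit.

RUNS=5
DURATION=60

run_benchmark() {
    # Placeholder: a real run would be e.g. `netperf -l $DURATION ...`
    # printing the achieved throughput in Mbps.
    echo "9400"
}

average() {
    # Mean of the numbers given as arguments, one decimal place.
    awk 'BEGIN { for (i = 1; i < ARGC; i++) s += ARGV[i];
                 printf "%.1f\n", s / (ARGC - 1) }' "$@"
}

results=""
i=0
while [ "$i" -lt "$RUNS" ]; do
    results="$results $(run_benchmark)"
    i=$((i + 1))
done
average $results
```

Checking the variance across the runs (as the text describes) would be a second awk pass over the same result list.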
The experiment evaluates the performance of three basic configurations:
● baseline: We use KVM virtio as the state-of-practice representative of paravirtualization. We denote it as the baseline configuration.
● elvis-X: Our modified version of vhost, with a different number of dedicated I/O cores (1-4), denoted by X.
● io-manager: Our modified version of vhost driven by the I/O manager, which automatically adjusts the number of I/O cores in response to the current load.
With all configurations, we set the number of VMs to be 12 (overcommit) throughout all the
benchmarks, utilizing only one 8-core socket. Each VM is configured with 1 vCPU, 2GB of
memory and a virtual network interface. All four physical ports are in use and assigned evenly
between VMs. The NICs are connected to the VMs using the standard Linux Bridge.
With the “elvis-X” configuration, we vary the number of I/O cores from 1 to 4 at the expense
of available VM cores. Given a number of I/O cores, the VMs are assigned in a cyclic fashion
to the remaining cores. For the “baseline” setup, there is no affinity between activities and cores: interrupts of the physical I/O devices, I/O threads (vhost), and vCPUs may all run on any core.
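A single VM from this setup might be launched roughly as follows; the disk image path and MAC address are placeholders, and the exact flags used on the testbed are not recorded here:

```shell
#!/bin/sh
# Sketch of launching one of the 12 VMs: 1 vCPU, 2GB of memory, and a
# virtio network interface backed by vhost, attached via a tap device to
# the Linux bridge. The image path and MAC address are placeholders.

IMG=${IMG:-/var/lib/images/guest.qcow2}    # placeholder disk image

launch_vm_cmd() {
    echo "qemu-system-x86_64 -enable-kvm -smp 1 -m 2048" \
         "-drive file=$IMG,if=virtio" \
         "-netdev tap,id=net0,vhost=on" \
         "-device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01"
}
launch_vm_cmd    # dry run: print the command instead of executing it
```

The `vhost=on` option is what hands packet processing to the vhost threads that IOcm manages.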
The experiment is executed using two workloads: Netperf[10] and Apache HTTP Server[11].
Netperf
Our first experiment evaluates a throughput-oriented application. We use the Netperf TCP
stream for this purpose, which measures network performance by maximizing the amount of
data sent over a single TCP connection, simulating an I/O-intensive workload. We vary the
message size between 64 and 16384 bytes. Similar results are obtained for messages larger
than 16KB.
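The message-size sweep can be sketched as a netperf invocation loop (the load generator address is a placeholder; `-t TCP_STREAM` and the test-specific `-m` option are standard netperf flags):

```shell
#!/bin/sh
# Sketch of the message-size sweep for the TCP stream experiment.
# LOAD_GEN is a placeholder for the load generator's address.

LOAD_GEN=${LOAD_GEN:-192.0.2.1}    # placeholder address
DURATION=60

gen_netperf_cmds() {
    for size in 64 128 256 512 1024 2048 4096 8192 16384; do
        # -t TCP_STREAM: bulk-throughput test; -m: send message size.
        echo "netperf -H $1 -l $DURATION -t TCP_STREAM -- -m $size"
    done
}
gen_netperf_cmds "$LOAD_GEN"    # dry run: print the commands
```

Each printed command would be run once per VM, repeated five times as described above.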
Figure 4: Performance comparison of netperf TCP stream between baseline and elvis-X
With elvis-X configurations, each additional I/O core comes at the expense of the cores that
are available for running VMs. For example, elvis-4 dedicates 4 I/O cores and only 4 cores are
shared among the 12 VMs. In the graph above we can see that elvis-3 underperforms elvis-1
for messages smaller than 1024 bytes, as the latter configuration allows 7 cores for the virtual
machines while the I/O core is not saturated.
Naturally, an I/O core has a limit to the amount of traffic it can handle in a given period. For
elvis-X, we can see the throughput curves become flatter at a certain point as message size
increases. In elvis-1, the I/O core is saturated with the smallest message size, while for elvis-2
both I/O cores reached their maximum capacity with a message size of 512 bytes.
Figure 5: Performance comparison of netperf TCP stream between the baseline and the best of elvis-X (per
message size)
The figure above presents the baseline result alongside the best of the elvis-X configurations, depicted as “optimum”. Additionally, we present our automatic I/O manager (denoted by io-manager), which switches between elvis-X configurations based on the current state of the system.
Apache HTTP Server
To evaluate the performance on a real application, we use the Apache HTTP Server. We drive it using ApacheBench[12] (also called “ab”), which is distributed with Apache and measures the number of requests per second that the web server can handle under concurrent load. We use 16 concurrent requests per VM for different file sizes, ranging from 64 bytes to 1 MB. The
results are shown in the following figure.
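The ApacheBench runs can be sketched as follows (`-c` and `-n` are standard ab options; the server address, request count and file-naming scheme are illustrative assumptions):

```shell
#!/bin/sh
# Sketch of the ApacheBench runs: 16 concurrent requests against files
# of increasing size. VM_ADDR, REQUESTS and the file naming scheme are
# illustrative assumptions, not the recorded testbed values.

VM_ADDR=${VM_ADDR:-192.0.2.10}
CONCURRENCY=16
REQUESTS=10000

gen_ab_cmds() {
    size=64
    while [ "$size" -le 1048576 ]; do    # 64 bytes up to 1 MB, doubling
        # file-${size}B is an assumed naming scheme on the server.
        echo "ab -c $CONCURRENCY -n $REQUESTS http://$1/file-${size}B"
        size=$((size * 2))
    done
}
gen_ab_cmds "$VM_ADDR"    # dry run: print the commands
```

ab reports the requests-per-second figure that the graphs in this section compare.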
Figure 6: Performance comparison of Apache HTTP server between the baseline and elvis-X
In baseline, KVM allocates one I/O thread per virtual device and one thread per VCPU. Thus,
24 threads compete for the available CPU cores. This contention increases the latency and is
most acute when using small files as there are more requests per second.
For elvis-X configurations, instead of 12 I/O threads, only X threads are allocated and run on
separate cores. This reduces the contention and improves the latency. From the above graph
it is clear that elvis-1 outperforms the baseline for smaller requests, where latency dominates. However, all configurations converge as the request size increases: the workload becomes throughput-oriented, and the concurrent requests hide the latency.
Figure 7: Comparing the performance of Apache HTTP server between the baseline and the io-manager
As with netperf, we present the baseline result for Apache compared to the io-manager, bounded by the optimum (Figure 7). This again shows the need for dynamic monitoring and management of the cores reserved for I/O operations based on the current workload.
2.2.3 Implementation Evaluation
The implementation is available from the MIKELANGELO git repository[13]. The kernel portion does not have unit tests per se, since the normal procedure for the Linux kernel is to test packages once they are integrated, not standalone. We test the code through a series of benchmarks which exercise various code paths for different packet types (virtio-net and virtio-scsi devices). The user-space portion, also available in the git repository, is a set of scripts that monitor resource utilization and configure the cores through sysfs. We can show that these scripts work as expected by comparing the throughput of the system at a given moment to what can be attained with a static ELVIS configuration under the same workload.
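The comparison just described amounts to a tolerance check; a minimal sketch (the throughput figures and the 5% tolerance are illustrative):

```shell
#!/bin/sh
# Sketch of the validation check: io-manager throughput should stay
# within a tolerance of the best static ELVIS configuration for the
# same workload. The numbers and tolerance here are illustrative.

# within_tolerance MEASURED REFERENCE PCT -> exit 0 if MEASURED is at
# least (100 - PCT)% of REFERENCE.
within_tolerance() {
    awk -v m="$1" -v r="$2" -v p="$3" \
        'BEGIN { exit !(m >= r * (100 - p) / 100) }'
}

if within_tolerance 9100 9400 5; then
    echo "io-manager matches the static optimum"
else
    echo "io-manager falls short of the static optimum"
fi
```

In practice the two throughput figures would come from the same benchmark run once under the io-manager and once under the best static elvis-X configuration.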
2.3 Zero Copy on Receive
2.3.1 Architectural Evaluation
During the first two years of the MIKELANGELO project, we worked on IOcm (IO Core Manager) to reduce I/O overhead for VMs. We noted that for large data transfers we paid a significant cost in CPU consumption to copy data between the hypervisor and VM buffers. This applies only to the receive path, as Zero-Copy Transmit for VMs has existed in the Linux kernel for a number of years. During the third year of the MIKELANGELO project we worked
on Zero-copy Receive, attempting to DMA network data directly into VM-supplied buffers. This turned out to be a more difficult task and took much longer to implement than we originally anticipated.
We aimed to implement Zero-copy Receive (ZeCoRx) with minimal changes to the existing network stack in Linux. In particular, we hoped to find a solution that did not require changes to the VM guest. After reading a lot of code, experimentation, and some reverse engineering, we understood that we needed to make more extensive changes than originally planned. It is no surprise that Zero-copy Receive is not yet in the Linux kernel, while Zero-copy Transmit was incorporated years ago. Changes were required to the VM guest, the vhost-net hypervisor component, the macvtap device driver, the macvlan driver, the Ethernet adapter (ixgbe) device driver, QEMU, and the virtio protocol itself. Details of the implementation are in document D3.3, The final Super KVM - Fast virtual I/O hypervisor.
2.3.2 Performance Evaluation
Initial experiments led us to believe that we can achieve about 20% savings in CPU usage by avoiding the copy between hypervisor and VM guest buffers. As of this writing, the Zero-copy Rx implementation is not yet sufficiently stable to execute a performance evaluation. Once the code is sufficiently stable, we intend to run the performance evaluation and publish the results.
The performance test plan is as follows:
● Measure resource usage (CPU usage, network bandwidth and latency) under different scenarios.
● Document under which circumstances we have a performance improvement or degradation.
● Run a network performance tool on the following configurations:
   o base Linux host with base Linux guest - this typically uses mergeable buffers. This is the baseline.
   o base Linux host, but force the guest to use big buffers (turn off mergeable buffers) - this uses a scatter-gather list of 64K bytes.
   o base Linux host, but force the guest to use small buffers - 1500 bytes, non-mergeable.
   o base Linux host, but run the guest using aligned 4K buffers (we need these for the zero-copy case).
   o zero-copy Rx host with the guest running aligned 4K buffers.
   o zero-copy Rx host, but not utilizing the zero-copy feature, with a base Linux guest. Verify that we did not degrade performance.
● Test how changing the MTU size impacts the results: 1500, 6000, 9000.
● Run netperf tests: tcp_stream, udp_rr.
● Measure CPU on the guest and the host.
● Run tests using different sized loads: 64K, 16K, 4K, 1K, 256, 64.
● Use an identical host and VM setup for all the comparisons. Use the same file system, etc., changing only the kernel running with the different features.
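The plan above amounts to a test matrix; a sketch that enumerates it (the configuration labels are shorthand for the bullets above):

```shell
#!/bin/sh
# Sketch enumerating the ZeCoRx test matrix: each guest buffer
# configuration from the plan is crossed with the MTU sizes and the
# netperf tests. Labels abbreviate the configurations in the text.

CONFIGS="mergeable big-64k small-1500 aligned-4k zcrx-4k zcrx-off"
MTUS="1500 6000 9000"
TESTS="tcp_stream udp_rr"

gen_matrix() {
    for c in $CONFIGS; do
        for m in $MTUS; do
            for t in $TESTS; do
                echo "config=$c mtu=$m test=$t"
            done
        done
    done
}
gen_matrix | wc -l    # 6 configs x 3 MTUs x 2 tests = 36 combinations
```

Crossing in the six load sizes as well would multiply the matrix again, which is why keeping the host/VM setup identical across runs matters.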
2.3.3 Implementation Evaluation
As the implementation progressed, additional issues were discovered, which required more
complexity to be added to the implementation. Some of the issues encountered are:
● Uniquely map the VM guest device (macvtap) to a queue on the Ethernet adapter. At first, this was not possible with the adapters we were using. We eventually found we could achieve the mapping by using a feature called l2-forward offloading and applying a patch to macvtap/macvlan that allowed the use of this feature.
● The Ethernet adapter driver assumes 4K page-aligned buffers, but virtio did not provide such buffers under any of its possible configurations. We had to implement such a configuration and the negotiation between hypervisor and guest to choose it.
● We needed a way to map between virtio descriptors and their corresponding buffers. At first we mimicked the mechanism used by Zero-copy Transmit of saving the descriptor number in an skb callback field, but this turned out to be insufficient since the network stack did not always give us back the same skb.
● The default virtio implementation provided 256 descriptors per queue, while the Ethernet adapter we used managed queues of 512 buffers. With buffers now being provided only by the VM guest (and not by the Ethernet adapter), we quickly ran out of available buffers for DMA and many packets were dropped. We were able to raise the number of virtio descriptors to 1024 by using a more recent version of QEMU, but the timing between the threads still causes us to sometimes run out of buffers in the Ethernet driver.
● When performing TCP reassembly, each Ethernet packet gets DMA-d into a separate 4K buffer, and all the buffers of a single TCP message are returned attached to a single skb. The intermediate Ethernet headers are chopped off the beginnings of each buffer and a new header is generated with the overall TCP checksum. This requires buffers to be consumed by virtio with data that is valid only from some offset within the buffer. There was previously no support in virtio for handling such data, as it was assumed that all data was copied into contiguous space in the virtio buffers. We had to extend the virtio protocol with an offset parameter and implement code to process buffers with offsets.
● Corrupted packets now take up buffers in the VM guest and need to be properly recycled. Previously they were recycled by the Ethernet adapter driver that originally allocated them. Since these buffers now belong to the VM guest, we need the VM guest to recycle them.
● We now need to return multiple values from macvtap to vhost-net for each consumed buffer: the buffer descriptor number, the number of bytes read, and the offset within the buffer where the data begins. We extended the interface between these components to accommodate all these return values.
● Sometimes (when we receive an ARP or other broadcast message) an skb arrives at macvtap that does not point to a buffer; the entire message was generated internally and fits in the skb. We need to grab an available VM guest buffer, copy the skb into it, and return the buffer to the VM guest. This required us to always provide macvtap with a spare virtio buffer and to implement a mechanism to indicate that this buffer was used.
As a result of these accumulated complications, the implementation is not as clean as we
hoped it would be. Parts of the code were re-written several times as issues were discovered.
In preparation for upstreaming the code, the code will probably have to go through several
more iterations of refinement to minimize impact on the existing network stack.
2.4 Virtual RDMA
By the end of the project we had designed and implemented three Virtual RDMA (vRDMA) prototypes for different scenarios. The goal of this section is to evaluate the vRDMA architecture, performance and implementation, highlighting the improvements in network performance. First we review and evaluate the final architecture of the three prototypes. Then we evaluate performance results using different scientific benchmarks. Finally we present an evaluation of the implementation.
2.4.1 Architectural Evaluation
As prototypes II and III are able to work independently, the final vRDMA architecture for this project (Figure 8) has been updated to integrate prototypes II and III, which are the main output of the vRDMA implementation.
Figure 8: Overall Architecture of the final design of vRDMA based on Prototype II and III
In Deliverables D2.21, The final MIKELANGELO architecture, and D4.6, OSv - Guest Operating System - final version, we described the detailed final architecture of vRDMA. It includes several communication modes: IB verbs with socket initialization, IB verbs with RDMA Communication Manager (CM) initialization, traditional socket with rsocket and RDMA CM, and finally shared memory using ivshmem for the case where the communicating VMs are on the same host. This design is compatible with the most common communication APIs, so a user application may benefit from the high performance of the underlying RDMA channel without any modification of its code. Next we discuss each supported communication mode and its capabilities:
● Mode 1: IB verbs with socket initialization
This communication mode supports applications implemented with the IB verbs API, with the RDMA setup performed over the socket API. Before communication starts, the application exchanges its RDMA information (e.g. RDMA hardware information and Queue Pair numbers) with its peers over a socket channel. During the RDMA communication, the socket channel may be kept alive as a separate path for sending control commands among the communication peers.
The limitation of this mode is that it does not support RoCE; it only supports the InfiniBand mode of the RDMA port.
Several available applications use this communication mode, e.g. PerfTest in its default communication mode, and Open MPI using the OOB module, i.e. “--mca btl openib --mca btl_openib_cpc_include oob”.
● Mode 2: IB verbs with RDMA CM initialization
Similar to the first communication mode, but instead of sockets it uses the RDMA CM API for RDMA setup and control. In this case, RDMA communication for the VM does not require any additional virtual Ethernet device; everything is done via RDMA channels.
This mode supports both the InfiniBand and RoCE modes of the RDMA port.
Supported applications in this mode include PerfTest with its RDMA CM communication mode, and Open MPI using the rdmacm module, i.e. “--mca btl openib --mca btl_openib_cpc_include rdmacm”.
● Mode 3: traditional socket with rsocket and RDMA CM
Socket is one of the most widely used communication APIs, and normally passes communication through the TCP/IP stack. The rsocket support within the librdmacm module in this architecture aims to support socket applications while transferring the communication onto the RDMA stack, i.e. the virtio-rdma driver. A user application implemented with the socket API can thus achieve much higher performance by switching to the RDMA stack without any modification of its code. Any application implemented with sockets is able to use this communication mode in the VM.
● Mode 4: shared memory using ivshmem
The shared memory mode is an additional support for the above modes. It is designed to be used by the virtio-rdma module: the virtio-rdma frontend driver checks whether the communication is intra-host in order to decide whether to use shared memory mode or another mode.
The RDMA memory regions are also initialized using ivshmem, so that all the VMs on the same host and in the same communication domain can access the same memory area. Meanwhile, the RDMA memory regions are also accessible and manageable by the backend driver on the host. The virtio-rdma frontend driver is able to modify the RDMA memory regions directly when necessary. For example, when a QP is sent to another VM on the same host, the communication data is copied out and the CQ is updated directly by the virtio-rdma of the receiving side. The RDMA batch operation allows the communication to be finished with only one memory copy. Due to the nondeterministic usage of the communication data buffer inside the RDMA region, it is not safe to give its address directly to the user application, as this might cause serious issues for the application and the virtual driver.
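For illustration, invocations exercising modes 1-3 can be sketched as follows (hostnames, the benchmark binary and the preload-library path are placeholders; the --mca flags are the ones quoted above):

```shell
#!/bin/sh
# Invocation sketches for communication modes 1-3. Hostnames, the
# benchmark binary (./NPmpi) and the preload-library path are
# placeholders; the --mca flags are those quoted in the text.

HOSTS="vm1,vm2"    # placeholder VM hostnames

print_mode_cmds() {
    # Mode 1: IB verbs with socket-based (OOB) connection setup.
    echo "mpirun -np 2 -host $HOSTS --mca btl openib --mca btl_openib_cpc_include oob ./NPmpi"
    # Mode 2: IB verbs with RDMA CM connection setup.
    echo "mpirun -np 2 -host $HOSTS --mca btl openib --mca btl_openib_cpc_include rdmacm ./NPmpi"
    # Mode 3: an unmodified socket application redirected onto rsocket
    # via librdmacm's preload library (path varies by distribution).
    echo "LD_PRELOAD=/usr/lib/rsocket/librspreload.so ./socket_app"
}
print_mode_cmds    # dry run: print the commands rather than executing them
```

Inside the VM, the same application binaries run unmodified; only the transport selection differs between the modes.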
We underestimated the actual effort of implementing the prototypes when we originally planned the work, and the work on shared memory has been postponed and converted into a technical white paper, which can serve as an in-depth reference for a future implementation. To still improve the performance of intra-host communication, another component, UNCLOT, has been introduced. The evaluation of UNCLOT is presented in a later section.
2.4.2 Performance Evaluation
Performance improvements are the most important evaluation factor of the vRDMA
component, considering that the ultimate purpose of developing vRDMA drivers was to
provide improved I/O performance of the underlying HPC or cloud infrastructure. In this
project, we have defined the following key performance indicators for vRDMA:
● KPI2.1: relative efficiency of virtualized I/O between KVM and sKVM (developed in the project).
● KPI3.1: The relative improvement of efficiency of MIKELANGELO OSv over the traditional guest OS.
● KPI3.2: The relative improvement of efficiency [size, speed of execution] between the baseline guest OS and the MIKELANGELO OSv.
In Deliverable D4.1, The First Report on I/O Aspects[14], we presented the benchmark performance of vRDMA prototype I, which showed a significant improvement in bandwidth and latency compared to the traditional virtio-based network interface. Prototype II was targeted at achieving additional improvements over prototype I. Regarding KPI 2.1, the hypercall implementation in the virtio-rdma device has been developed to improve virtualized I/O communication between the host and guest. For KPI 3.1 and KPI 3.2, the virtio-rdma component on OSv has been implemented, which improves network communication performance compared to a traditional guest OS.
The testbed used to run the performance benchmarks consisted of two Huawei RH2288H V3 servers, each with 24 CPU cores at 2.6GHz across two NUMA nodes (two sockets), with 32K L1d cache, 32K L1i cache, 256K L2 cache and 30M L3 cache. The benchmarks used were PerfTest[15] and NetPIPE[16] with Open MPI.
All the tests were run with three cases:
● Linux Host: Linux host-to-host communication test, i.e. the baseline performance
● Linux Guest: Linux guest-to-guest communication test
● OSv Guest: OSv guest-to-guest communication test. As the implementation of prototype III on OSv is still in the finalization phase, results for prototype III on an OSv guest are not available at the moment.
Figure 9 shows the bandwidth of running PerfTest using communication Mode 1. The overall results indicate that the Linux host has the best performance, the OSv guest is in second place, and the Linux guest third. Both the OSv and Linux guest results are very close to the Linux host baseline performance, especially for message sizes larger than 128 bytes. In the best case, the OSv guest achieves 98% of the baseline performance.
Figure 9: PerfTest Write Bandwidth
In Figures 10 and 11 we show the bandwidth and latency of running the NetPIPE MPI benchmark with the Open MPI oob/rdmacm modules using vRDMA prototypes II and III, i.e. communication modes 1 and 2, over InfiniBand and RoCE connections on a Linux host and a Linux guest.
Figure 10: NetPIPE bandwidth run with Open MPI using rdmacm and oob module
The overall results show that the performance of InfiniBand mode is better than RoCE (32 Gbps vs. 26 Gbps), due to differences in the hardware working modes and protocol capabilities. The performance of the rdmacm module and the oob module over InfiniBand is very close between the guest and host tests, i.e. the vRDMA performance on the guest achieves 96% of the host performance for large messages. The rdmacm module is the only one supported in the RoCE mode of the hardware, and the performance on the Linux guest and Linux host is also close, i.e. in the best case it achieves about 97% of the host performance.
Figure 11: NetPIPE latency run with Open MPI using rdmacm and oob module
Figure 12: NetPIPE bandwidth run using socket and rsocket
Figure 13: NetPIPE latency run with socket and rsocket
With RDMA socket (rsocket) support, we are able to benefit as much as possible from the hardware capabilities compared to sockets over RoCE mode (utilizing the normal TCP/IP stack). Figures 12 and 13 compare the rsocket support with vRDMA against traditional socket performance running over RoCE mode. The results show that performance using rsocket and vRDMA for small message sizes (1 to 4k bytes) is about 6-12 times better than using traditional sockets on the Linux host, and for large message sizes (4k to 64k bytes) about 3-5 times better. Using rsocket and vRDMA for small messages on a Linux guest achieves about 80% of the rsocket performance on the Linux host.
All the above performance results show that performance using vRDMA is close to the baseline performance on the host. Among the different modes of vRDMA, the IB verbs mode (using either oob or rdmacm) over InfiniBand has the best performance, and IB verbs mode over RoCE is second but very close to the first. As an additional support for socket applications, rsocket involves several translation layers, which slow down performance compared to the IB verbs mode, but its improvement for socket applications remains dramatic.
2.4.3 Implementation Evaluation
As MIKELANGELO targeted at providing contributions to open source and standardized
technology communities, the final source code including manuals and performance testing
documents will be published as part of the MIKELANGELO Stack 3.0 under the open-source
licenses GPL v2.0 (for vRDMA backend and frontend drivers on Linux) and BSD (for vRDMA
frontend driver on OSv). In the implementation progress, we have gained better
understanding of technical challenges of implementing different vRDMA solutions. More
details about the contributions to the implementation can be found in Deliverable D4.6. Here
we list briefly the accumulated accomplishments and evaluation for the implementation:
● We provided a solution integrating Open vSwitch and DPDK PMD in prototype I, which builds a concrete virtual RDMA environment with minimal effort. During the implementation, we noticed that this solution is difficult to manage and its performance does not fulfill the requirements, but the experience and knowledge gained proved invaluable for the subsequent implementations.
● When we moved on to implementing prototype II, we took advantage of, and simplified, the existing HyV[17] solution, which provides a simple hypercall mechanism and a very basic implementation of virtual RDMA. We used HyV as our base to implement more robust and complete frontend and backend drivers, and re-implemented the frontend driver specifically for OSv.
● In order to run scientific benchmarks and applications like OpenFOAM, Open MPI has to be supported with vRDMA on the guest. We implemented the missing user libraries and kernel modules for the guest, and implemented Open MPI extensions which allow MPI-based applications to benefit from vRDMA prototypes II and III.
● Apart from scientific applications, socket applications should also be supported by vRDMA in order to benefit from the high performance of RDMA hardware. We implemented additional extensions for the frontend and backend drivers in prototype III to provide RDMA CM and rsocket support. An asynchronous hypercall and a new RDMA CM frontend driver were implemented for OSv.
● To use rsocket in OSv, special care was needed to avoid confusing the traditional socket and rsocket APIs. We implemented and extended the socket wrapper in OSv to use rsocket only when the virtio-rdma virtual device is present.
● Due to underestimating the workload for prototypes II and III, the planned ivshmem shared memory support has been postponed and reduced to a white paper, which includes technical details on how the new shared memory module could be implemented and how it should work with the implemented vRDMA virtual drivers.
2.5 Unikernel Guest Operating System
In M30’s D2.21, “MIKELANGELO Final Architecture”[18], we documented in detail the design and architecture of OSv. We will therefore keep the introductory material here short, and just briefly remind the reader that OSv is a unikernel (i.e., a kernel for running a single application in a guest in the cloud) which can run existing Linux applications. The goal of this chapter is
to evaluate the success of OSv as MIKELANGELO’s guest operating system. The first section
provides a qualitative evaluation of the differences between OSv’s architecture and that of
other existing alternatives. The second section provides quantitative benchmarks
demonstrating OSv’s performance. The third section evaluates the process through which we
implemented OSv.
2.5.1 Architectural Evaluation
The Guest Operating System for the MIKELANGELO architecture is unikernel[19]-based: the
application runs in a virtual machine (VM) on top of a small and efficient Linux (POSIX)
compliant kernel, OSv. This contrasts with recent market trends where containers are
receiving a lot of attention as replacements for VMs. In containers, the host kernel’s resources
are segmented in order to isolate each hosted application. The obvious drawback of containers compared to virtual machines and unikernels is the huge attack surface of the host kernel: a unikernel run in a well-audited virtual machine offers fewer holes for an attacker to leverage, because the virtual machine hardware is small and heavily scrutinized. A paper published at SOSP this year, “My VM is Lighter (and Safer) than Your Container”[20], demonstrated that a unikernel running in a VM is not only more secure than a container, it may also not be far from the memory use and boot speed of containers - the two most often mentioned advantages of
containers.
OSv is the C++11 unikernel used in the MIKELANGELO project. The main OSv differentiator compared to alternatives (such as ClickOS, Clive, HaLVM, IncludeOS, LING, Mirage, Runtime.js and Rump Run) is that OSv is intended to be a comprehensive alternative to the Linux kernel, supporting existing Linux applications and multiple hypervisors. OSv acts as an almost drop-in GNU/Linux replacement optimized for virtual machine hardware. The unikernel closest to OSv is Rump Run, whose NetBSD-based sources make it akin to a real operating system and not a simple library. IncludeOS is another example of a unikernel that claims to run existing Linux software, but it actually implements only a very limited subset of Linux, reducing its compatibility significantly. The other unikernels in the above list all support applications written in just one programming language, run on just one type of hypervisor (usually Xen), or suffer from both limitations.
The main benefits of a unikernel are revealed where a DevOps team industrializes and
streamlines its virtual machine image construction. A combined unikernel-and-application
build, resulting in a virtual machine artifact, must be treated as a cloud-like process once
executed in a virtual machine. The initial industrialization steps are steeper with a unikernel
than with a container, because the DevOps team is forced to rethink its virtual machine
construction process, but the end result is much cleaner than the traditional approach of
relying on an unwieldy, difficult-to-maintain collection of shell scripts.
Users of a unikernel like OSv cannot make use of multiple processes in a single virtual
machine instance. One of the consequences is that shell scripts that inherently use process
forking are unavailable in OSv. OSv therefore provides a high-level REST API that allows
management of the entire lifecycle of the OSv instance and its application. We have
observed that this slightly limits the adoption of OSv: existing tools typically need to be
reconsidered and rewired to the alternative interface and, consequently, an
investment must be made in adapting existing processes.
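To give a flavour of such lifecycle management over HTTP, the sketch below queries a management endpoint and decodes the JSON reply. This is illustrative only: the /os/version path and the stub server merely stand in for a running OSv instance and should not be read as OSv's actual API surface.

```python
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for a running OSv instance's management API;
# the endpoint path used below is only illustrative.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps("osv-stub").encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

def query_stub(path):
    """Start the stub, fetch one resource, decode the JSON reply, shut down."""
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = "http://127.0.0.1:%d%s" % (server.server_address[1], path)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read())
    finally:
        server.shutdown()

print(query_stub("/os/version"))  # prints: osv-stub
```

The point of the pattern is that everything a shell script would normally do (start, inspect, stop) becomes an HTTP call against the single-process guest.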
Beyond the expected gaps in Linux compatibility and bugs, some additional challenges were
encountered when employing OSv in MIKELANGELO, but these have all been addressed by
the OSv team:
NFS[21] is currently used in an HPC context to get data in and out of the compute node. To
support this need, MIKELANGELO has ported a basic NFS client to the OSv kernel. It builds as
an additional option and is released under an LGPL license. Two additional OSv commands,
mount-nfs.so and unmount.so, were added so that users can easily mount their shares. This NFS
client implements NFSv3. The OSv architecture allowed the porting of NFS to be performed
quickly. A description of the porting process has been documented online[22].
To simulate UNIX processes in Open MPI[23], an additional mechanism to isolate the memory
space of a thread has been added to OSv. The isolation characteristics of these thread
namespaces are somewhat weaker than UNIX processes, but they allow users to successfully
run Open MPI payloads like OpenFOAM.
Some HPC payloads (such as Cancellous Bones, provided by USTUTT) need to be supplied with a
configuration file generated by the rest of the infrastructure, without using NFS. With a Linux
guest the classical method is to create an ISO9660 (CDROM) image[24] and pass it to the
guest. In MIKELANGELO we added the same ability, of reading a CDROM image for
configuration, to OSv.
2.5.2 Performance Evaluation
The MIKELANGELO Grant Agreement lists the following Key Performance Indicators which are
relevant to the OSv guest operating system:
● KPI 1.1: Time to start a number of virtual machines.
● KPI 3.1: The relative improvement of efficiency of MIKELANGELO OSv over the traditional guest OS.
● KPI 3.2: The relative improvement of efficiency [size, speed of execution] between the Baseline Guest OS vs. the MIKELANGELO OSv.
● KPI 3.3: The relative improvement of compatibility between baseline and MIKELANGELO versions of OSv.
● KPI 7.1: All use cases properly demonstrated.
We will now survey OSv’s progress on each of these KPIs, in order of importance:
KPI 7.1: All use cases properly demonstrated:
Throughout the project, we improved OSv’s support for running the four MIKELANGELO use
cases. We therefore satisfied KPI 7.1:
At the beginning of the project, none of the four MIKELANGELO use cases worked correctly
on OSv. OSv claimed to have implemented most of Linux’s ABIs, and could already run fairly
complex Linux programs (such as the Java runtime environment). And yet, OSv was still a
young project, and it turned out that every large program is likely to use some rare feature
that OSv had not yet implemented or fully tested. Much of the work we did on OSv during
the three years of MIKELANGELO was therefore about adding missing features and fixing
broken ones - some small (such as missing system calls) and some large (such as NFS support
and a partial implementation of Linux processes) - features which were needed for running
the four MIKELANGELO use cases, all of which run on OSv today.
We have put a special focus on making the Open MPI library work well on OSv, as this library
is used by two of the use cases (Aerodynamic Maps, and Cancellous Bones), as well as being
the basis for many other HPC applications. Today, the Aerodynamic Maps use case is fully
able to run on OSv, including parallel runs on multiple cores and nodes. The Cancellous
Bones use case also fully works on OSv today. For a third MIKELANGELO use case, “Big Data”,
we have already successfully run several applications (such as Hadoop HDFS and Apache
Spark). For the fourth MIKELANGELO use case, “Cloud Bursting”, based on the Cassandra
NoSQL database, we have seen that Seastar - described in a later section - provides a much
larger performance improvement than OSv does, so our main focus in that use case is Seastar.
Still, we have verified that Cassandra does work correctly on OSv and, without any tweaking,
does so with performance similar to Linux (with additional tweaking OSv outperformed
Linux, as we will see under KPI 3.1 below).
KPI 3.3: The relative improvement of compatibility with Linux, between baseline and
MIKELANGELO versions of OSv:
As already mentioned above, throughout the project we made continuous improvements in
OSv’s ability to run a large number of different Linux applications. We fixed a large number of
C library functions which were either missing, or had implementation bugs, which prevented
some applications from running correctly on OSv. We got several important application
frameworks and languages, including Go, Python, Erlang, Javascript and new variants of Java,
to work on OSv, which allows running a large number of new applications on OSv.
A quantitative way to demonstrate improvement of KPI 3.3 is the number of Linux
applications which have been tested to run correctly on OSv. osv-apps.git[25] is a public
repository of applications, libraries and runtime environments which are known to work on
OSv. We also have in MIKELANGELO our own repository of applications for the
MIKELANGELO use cases, such as Open MPI and OpenFOAM. Since the beginning of the
MIKELANGELO project, the number of these tested packages grew from 55 to 98.
KPI 3.1: The relative improvement of efficiency of MIKELANGELO OSv over the
traditional guest OS:
The “efficiency” of the guest OS which we would like to measure for this KPI includes four
aspects: boot time, image size, memory use, and run speed:
Boot time:
One of the benefits of replacing a general-purpose Linux distribution with a smaller unikernel
such as OSv, is the reduction of boot time. Reducing the boot time in turn has many useful
advantages, from more responsive manual cloud management (i.e., reducing the time from
when the administrator clicks on “start VM” until its service actually runs), to more effective
resizing of a cluster on the cloud in response to growing load. As an example application, we
published this year a blog post[26] explaining how OSv can be used to implement “serverless
computing”, a.k.a. “Function-as-a-Service” (FaaS). FaaS allows many clients to run small
functions written in a high-level language (e.g., Java) as a response to network requests. This
proposal hinges on OSv’s ability to boot quickly (although see also KPI 1.1 below - there is
more to VM startup time than just the boot time).
To quantitatively compare OSv’s boot time to that of Linux, we chose a typical cloud-optimized
Linux distribution, the Fedora 26 Cloud Base image[27]. Fedora describes this image as having
“the bare minimum packages required to run on any cloud environment”, so it is expected to
have minimal image size (we will check this below) and boot time. To measure boot speed,
both OSv and Linux images were converted to the “raw” image format (which is slightly faster
to use), the cloud-init service was disabled on both, and the “lighttpd” application - a small,
optimized HTTP server - was installed on both. We ran
these images on a local machine, using qemu and KVM, with identical parameters. We
measured the total time to boot the application, i.e., the time between starting the qemu
command to run the guest - until the lighttpd application prints the message saying that it is
ready for service. The boot times measured were:
Table 1: Boot time of OSv vs Linux
OS | OS+Lighttpd boot time
OSv | 0.39 seconds
Linux (Fedora 26 Cloud Base) | 3.4 seconds
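The timing methodology above - start the guest, then wait for the application's “ready” message - can be sketched as follows. This is an illustrative harness, not the exact script we used; the placeholder command stands in for the real qemu invocation, and the ready message is an assumption.

```python
import subprocess, sys, time

def time_to_ready(cmd, ready_marker):
    """Start `cmd` and return seconds elapsed until `ready_marker` appears
    on its output, or None if the process exits without printing it."""
    start = time.monotonic()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        for line in proc.stdout:
            if ready_marker in line:
                return time.monotonic() - start
        return None
    finally:
        proc.kill()

# Placeholder standing in for "qemu-system-x86_64 ... osv-lighttpd.img":
elapsed = time_to_ready(
    [sys.executable, "-c", "print('lighttpd: ready for service')"],
    "ready for service")
print(elapsed is not None)  # prints: True
```

Measuring from process launch, rather than from inside the guest, is what makes the figure comparable between the two operating systems.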
As this improvement was already impressive enough (a 9-fold increase in boot speed
compared to Linux), we did not continue to further optimize OSv’s boot speed during the
project, although we did gather evidence that this is possible. For example, we have seen that
a substantial part of OSv’s boot time is due to initialization of the ZFS file system, which can
be avoided by using a simpler read-only filesystem (this is relevant for stateless applications).
Also, other researchers have shown that another big part of the boot time we measured is
the work the host needs to do to start the VM, and that by improving the hypervisor this
overhead can be reduced (see for example the lightvm[28] and ukvm (Unikernel KVM)
projects).
Image size:
In addition to fast boot times, one of the promised benefits of unikernels - and OSv in
particular - is smaller images, where the “image” is the file which contains the operating
system and the application and is presented to the VM as its disk. There are several benefits
to having smaller images: the obvious one is lower storage costs, but an even more
interesting benefit is that smaller images are faster to build, faster to transmit to a new
host when we want to boot the image (discussed below in the context of KPI 1.1), and often
faster to boot, since less code needs to be read from disk.
If we look at the example from the previous section, the lighttpd application on OSv vs. the
minimal Fedora 26 Cloud Base, here are the image sizes in both cases (the images are in
qcow2 format, so empty parts of the image do not contribute to its size):
Table 2: Image size of OSv vs Linux
OS | OS+Lighttpd image size
OSv | 10,616,832 bytes
Linux (Fedora 26 Cloud Base) | 969,932,800 bytes
Here the OSv image, including both kernel and application, is just 10MB in size, almost 100
times smaller than the 1GB Linux image. The Linux image is so large because it includes a lot
of libraries, shells, tools, etc., which are commonly used in cloud software. 1 GB is actually
small compared to “standard” Linux distributions which are not optimized for the cloud, and
include by default a huge selection of irrelevant software and are often several gigabytes in
size.
With significant effort, a much smaller subset of the Fedora packages could be manually
assembled into a smaller image, and some people have been doing this, although reaching
OSv’s size will be very hard (e.g., just Fedora’s default shell, bash, is 1 MB in size). Moreover,
if absolutely minimal image size were critical, OSv’s size could be further reduced. For
example, big pieces of OSv’s kernel and C library could be compiled out for images which
do not need them - stateless images, for instance, do not need the ZFS filesystem. Also,
during the third year, we managed to make the OSv image somewhat smaller by recognizing
and removing duplicate or dead code. This effort could be continued to make OSv even
smaller.
Memory use:
Another important measure of the operating system efficiency is its memory use. For some
types of applications, the memory that the OS uses is a non-issue, because the application
itself takes huge amounts of memory, orders of magnitude more than any OS overhead. For
example, memcached, Cassandra and similar applications will take whatever amount of
memory is given to them and use it all for caching previously-served data. And yet, other
types of applications need only small amounts of memory. Such are, for example,
applications which operate on streams of data or packets - virtual routers, packet filters,
firewalls (that is the example used in the lightvm paper mentioned above), reverse proxies,
load balancers, SSL accelerators, and so on. When the application itself needs small amounts
of memory, the memory consumed by the OS can noticeably increase the total amount of
memory needed to run the VM, and accordingly lower the density - the number of VMs
which can be run on each host machine.
Let’s look at the same example we used above, of the lighttpd application. All operating
systems (including Linux and OSv) will happily consume all memory given to them for things
like caching disk blocks, so it is not always clear how to measure the amount of memory
overhead consumed by the operating system. What we decided to measure is the least
amount of memory which will allow the VM to successfully boot with 4 virtual CPUs and run
the intended application. If we give the VM less than this minimum, it will either fail to boot,
fail to start lighttpd, or start it but fail to serve pages.
The minimal amount of memory needed for each OS was as follows:
Table 3: Minimum memory requirements of OSv vs Linux
OS | OS+Lighttpd minimal VM memory
OSv | 49 MB
Linux (Fedora 26 Cloud Base) | 93 MB
So for this application, OSv could run with half the memory that Linux did, which translates to
double the density of OSv VMs (double the number of VMs could be run with the same
amount of total memory).
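The search for this minimal memory lends itself to a simple bisection, assuming boot success is monotonic in the memory size. The sketch below is illustrative only: `boots_ok` stands in for actually launching the VM with the given memory size and checking that lighttpd serves pages.

```python
def min_memory_mb(boots_ok, lo=16, hi=1024):
    """Smallest memory size in [lo, hi] MB for which boots_ok(mb) succeeds,
    assuming boot success is monotonic in the memory size."""
    if not boots_ok(hi):
        return None  # even the upper bound is not enough
    while lo < hi:
        mid = (lo + hi) // 2
        if boots_ok(mid):
            hi = mid  # mid worked; the answer is mid or below
        else:
            lo = mid + 1  # mid failed; the answer is above mid
    return lo

# Fake predicate standing in for "boot the VM and check it serves pages":
print(min_memory_mb(lambda mb: mb >= 49))  # prints: 49
```

Each probe costs one full boot attempt, so the bisection keeps the number of trial boots logarithmic in the search range.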
The amount of memory used by OSv was moderately reduced during MIKELANGELO work, as
we eliminated unnecessary duplication, unnecessary buffers, and made other improvements.
However, we believe that in the future, when this becomes important, OSv’s memory use can
still be significantly reduced. For example, in one experiment we noticed that as much as 20
MB can be shaved off OSv’s memory use by avoiding the ZFS filesystem, which comes
with a hefty cost in code size, caches and extra threads.
Run speed:
When the OSv project started, several months before the MIKELANGELO project
proposal, it was envisioned that the new, lighter, optimized-for-the-cloud OS would speed
up most cloud applications. OSv’s distinguishing features, such as faster context switches, a
more scalable network stack, lower system call overhead, smaller code, and a focus on the
cloud use case, were supposed to translate to faster runs on OSv than on Linux.
The increased speed was indeed visible in micro-benchmarks (e.g., context switches were
measured to be three times faster on OSv than on Linux), and we also were able to measure
good speedups on specific applications - such as Cassandra achieving 34% higher throughput
on OSv than on Linux, and memcached achieving 22% higher throughput (we reported
these numbers in a 2014 paper about OSv at the Usenix Annual Technical Conference[29]). In
the previous version of this deliverable (D6.1) we also compared the Redis in-memory
key-value store running on OSv and Ubuntu single-core guests, and measured 60%-100%
speedups for different types of small queries. These results have been very encouraging.
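As a toy illustration of the style of micro-benchmark referred to (the published three-times-faster context-switch figure came from native benchmarks on OSv and Linux; this Python thread ping-pong only demonstrates the measurement idea, not the reported numbers):

```python
import threading, time

def pingpong_switches(rounds=10_000):
    """Rough per-handoff cost: two threads pass a token back and forth,
    so each round forces two scheduler switches. Thread startup overhead
    is included, which is fine for a toy measurement."""
    a, b = threading.Semaphore(1), threading.Semaphore(0)

    def worker():
        for _ in range(rounds):
            b.acquire()
            a.release()

    t = threading.Thread(target=worker)
    start = time.monotonic()
    t.start()
    for _ in range(rounds):
        a.acquire()
        b.release()
    t.join()
    return (time.monotonic() - start) / (2 * rounds)

print(pingpong_switches() > 0)  # prints: True
```

A real kernel-level benchmark would use pinned native threads and high-resolution counters, but the ping-pong structure is the same.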
But unfortunately, reaching each of these performance increase results required extensive
and difficult profiling sessions by several OSv developers. The more usual experience was that
running an application for the first time on OSv resulted in a performance drop compared to
Linux. The problem is the “weakest link” problem: Each large application uses dozens of Linux
features. If just one of these features has not been optimized on OSv, and runs significantly
slower than it does on Linux, this will slow down the entire run. During the MIKELANGELO
project we solved some of these problems, but others remain unsolved. As an example, it
turns out that OSv’s ZFS filesystem, which was never optimized for speed, is slower than
Linux’s filesystem. This slows down every workload which is heavy in disk I/O and usually
offsets any other performance improvements.
Moreover, during the work with MIKELANGELO’s chosen use cases, it became evident that
some of them would never see major performance improvements by switching to OSv,
because they spend most of the time in the application - not in the kernel or doing I/O. The
“Cancellous Bones” use case, for example, is CPU bound and does relatively small amounts of
I/O or any other type of interaction with the kernel. It cannot be sped up by a different kernel
implementation. The configurations we tested of the “Aerodynamic Maps” use case were also
CPU-bound. The Cassandra-based “Cloud Bursting” use case could get minor speedups on
OSv (as noted above) but those were almost negligible compared to the speedups which
were achieved by rewriting Cassandra using the Seastar library (described in its own section
below, and in the D4.6 and D6.4 deliverables), so working on Seastar has proven a much
more effective research direction for speeding up that type of asynchronous server
application.
With this experience, we no longer believe that OSv should be chosen specifically for its
higher performance, although there are other important reasons to choose it - such as boot
time, image size and memory use, as shown above. OSv’s performance will undoubtedly
continue to improve over the next few years, but Linux is improving too - and a much larger
number of people are working on every part of Linux than are working on OSv.
That said, adopting MIKELANGELO’s OSv brings performance improvements beyond what
OSv alone offers: MIKELANGELO’s OSv also includes vRDMA and UNCLOT, two new
mechanisms for communication between VMs (on different hosts and on the same host,
respectively) which are more efficient than traditional TCP/IP. This can help performance for
workloads where such inter-VM communication is a significant part of the run time. We
evaluate the performance improvements these two techniques bring in two separate
sections below.
KPI 3.2: The relative improvement of efficiency between the Baseline Guest OS vs. the
MIKELANGELO OSv:
This KPI is similar to KPI 3.1 above, except that while KPI 3.1 compared MIKELANGELO’s
OSv to Linux, KPI 3.2 compares MIKELANGELO’s OSv to the 3-year-old version of OSv. As it
turns out, this comparison is both less interesting and much harder to perform than KPI 3.1.
It is less interesting because the 3-year-old OSv was an immature version which, although it
already ran several interesting applications, could not run any of MIKELANGELO’s use cases.
It is difficult to run the 3-year-old OSv for the comparison because the lighttpd application
we used as an example above did not work 3 years ago, and because the 3-year-old OSv
cannot be compiled with modern compilers and libraries - as explained in D4.6, new versions
of the compilers and libraries exposed various bugs and invalid assumptions in OSv, which
we fixed during the work on MIKELANGELO.
For these reasons, we do not report here precise numbers for KPI 3.2. We already mentioned
in the KPI 3.1 section above some qualitative results about memory use and image size being
reduced over the three years. Even more importantly, during the three years we discovered
several applications which had abysmal run speed on OSv because of specific bugs that we
fixed to get these applications back to acceptable (i.e. Linux-like) performance numbers.
KPI 1.1: Time to start a number of virtual machines:
This KPI is interesting because it involves several MIKELANGELO components, and not just
OSv. Yes, to start a VM running OSv and an application, the time to boot OSv (see KPI 3.1
above) is important. But actually starting an application involves more than just booting OSv.
Some of the things we may need to do include:
1. Compose a new image from a pre-built OSv kernel and application (or set of application components).
2. Send the image to the host which is about to run this VM.
3. Boot the image and start the application.
What we measured in KPI 3.1 was just step 3. Step 1 is important, and will be measured in the
context of the LEET component, elsewhere in this document, but does not have to be
repeated when starting a number of VMs as KPI 1.1 requires: We can compose an image
once and then start multiple copies of the same image, where each new VM figures out its
role in the cluster via cloud-init or a similar contextualization mechanism.
Therefore, for KPI 1.1, we would like to measure steps 2 and 3 together. What step 2 adds to
step 3, which we already evaluated above, is the time to send the image over the LAN. A big
difference in image size makes a big difference in the time to send it:
As a realistic benchmark for this KPI we took the MIKELANGELO Aerodynamic Maps use case,
for which we built an OSv-based image, and a Linux-based image. The image sizes were as
follows:
Figure 14: Virtual Machine image size for Aerodynamic Maps
As was the case for the lighttpd image, the OSv image is significantly smaller - about 20
times smaller in this case - than the Linux image. Unsurprisingly, when we wanted to start
VMs using these images in an OpenStack-based testbed, there was a big difference between
the startup time of the two images:
Figure 15: Time to boot VMs for Aerodynamic Maps
This graph shows image startup split into steps 1 (light blue), 2 (dark blue) and 3 (yellow), for
Linux and OSv. The image-building step (light blue) took roughly the same time for the OSv
and Linux images. The second step (dark blue) of starting a VM on one of the cloud’s nodes,
appears to have a fixed overhead of almost 20 seconds (!), plus whatever time it takes to send
the content of the image. For the tiny OSv image, almost nothing was added to this fixed
overhead, but the first time the large Linux image was run on a host, it took roughly 90
additional seconds to send the larger image. This is a very noticeable delay.
It is worth noting that in this OpenStack-based setup, the yellow part of the bar - the boot
time we measured in KPI 3.1 above - made little difference. When starting a VM always
incurs an unexplained 20-second delay, and sending the VM image may take around 90
seconds, it does not really matter whether the boot time was 3 seconds for Linux but only
1 second for OSv.
The key takeaway from this experiment is that having a small image is important not just for
saving disk space, but perhaps more importantly for allowing faster image distribution and,
accordingly, VM startup on new hosts.
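These observations can be folded into a back-of-the-envelope startup model. The 20-second fixed overhead is the one observed above, and the effective transfer rate of about 10.8 MB/s is derived from the roughly 90 seconds needed to send the roughly 970 MB Linux lighttpd image; both are rough assumptions, not precise measurements.

```python
def startup_seconds(image_bytes, boot_s,
                    fixed_overhead_s=20.0, transfer_bytes_per_s=10.8e6):
    """Model of VM startup on a fresh host:
    fixed scheduling overhead + image transfer + guest boot."""
    return fixed_overhead_s + image_bytes / transfer_bytes_per_s + boot_s

# Using the lighttpd image sizes and boot times measured earlier:
osv = startup_seconds(10_616_832, 0.39)
linux = startup_seconds(969_932_800, 3.4)
print(round(osv, 1), round(linux, 1))  # prints: 21.4 113.2
```

Under this model the transfer term dominates for the Linux image while it is almost negligible for OSv, which is exactly the effect visible in the OpenStack measurements.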
2.5.3 Implementation Evaluation
OSv is an open source project, and all patches to OSv that we developed for MIKELANGELO
(except the separate vRDMA and UNCLOT components) were contributed and accepted into
the main OSv repository. The benefit of this approach is that most OSv code that we wrote
was tested by external OSv users and contributors. The number of these isn’t as high as we
would have liked, but is not zero either: During the last year we had nine people outside the
MIKELANGELO project contributing patches to OSv, and a larger number of people are using
it or following its development (as of today, there are 407 people subscribed to the OSv
developers’ mailing list, and 315 people closely “watching” OSv’s development via GitHub). All
these extra eyes - and users - help improve the quality of OSv.
Beyond that, OSv also has an extensive set of unit tests which aim to verify that the large
number of Linux-compatible APIs OSv provides are indeed compatible with Linux. All changes
to the API start with a test validating the behavior from the perspective of Linux or the POSIX
standard. A developer may run each of these tests individually, or run all of them together via
the “make check” command. Moreover, we have a Jenkins installation which runs all these
tests after every commit to the OSv source code repository, as well as once every night, and
reports and tracks any failures.
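The flavour of such a compatibility test can be sketched in a few lines. OSv's actual tests are C++ and run via “make check”; this standalone Python analogue only shows the pattern of pinning down POSIX-specified behaviour so an alternative kernel can be checked against it:

```python
import os, tempfile

def test_lseek_past_end_then_read():
    # POSIX: seeking past EOF succeeds; reading there returns 0 bytes (EOF).
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"abc")
        assert os.lseek(fd, 100, os.SEEK_SET) == 100
        assert os.read(fd, 10) == b""
    finally:
        os.close(fd)
        os.unlink(path)

test_lseek_past_end_then_read()
print("ok")
```

Each such test encodes one small, externally specified behaviour, which is what makes it usable as a cross-kernel compatibility check.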
Because OSv is not a commercial product compiled by a single company, but an open-source
project with many developers using different build environments, it was very important
to support a large number of C++ compiler (gcc) and library (libstdc++, Boost, etc.) versions.
Throughout the three years of the MIKELANGELO project, gcc has undergone major changes
- we started with gcc 4.8, and ended with gcc 7.2.1. Supporting all these compiler versions
exposed pre-existing bugs in OSv which we needed to fix, but also bugs in those compilers
we needed to work around. The most difficult breakages involved new optimization
techniques which broke certain assumptions that low-level code in OSv made - e.g., OSv
assumed that a certain function does not use floating-point instructions, and yet a new
optimization in gcc 7 converted two integer assignments into one floating-point assignment.
OSv has now been built and tested on many different build environments, ranging all the way
from 5-year-old Linux distributions with gcc 4.8 (the first version of gcc to properly
support C++11) to the newest Linux distributions at the time of writing (Fedora 27, with
gcc 7.2.1).
The MIKELANGELO cloud is based on 64-bit x86 VMs and (an improved version of) the KVM
hypervisor, so OSv needs to support those. But we wanted OSv to not be limited to that
configuration, so today - with the help of external code contributors - OSv supports most
common cloud configurations: both x86 and ARM CPUs (both at 64-bit only) are supported,
and so are the most common hypervisors: KVM, Xen, VMware and VirtualBox. This year,
initial support for Hyper-V was added by an external contributor (who wanted to run OSv in
Microsoft’s Azure cloud), as was support for 64-bit ARM together with the Xen hypervisor.
Hangs when running under VMware and VirtualBox were also fixed.
2.6 Lightweight Execution Environment Toolbox
Evaluation of the application package management builds upon the preliminary evaluation
presented in report D6.1. That report focused mainly on the evaluation of the enhancements
made to the Capstan[30] tool, because Capstan was the main line of work for application
package management. It furthermore evaluated the application packages that had been
made available by the time of that report. All this allowed for further investigation of
potential improvements, gathering of additional requirements and prioritisation thereof.
Besides Capstan, this report also summarises the evaluation of the integration of Capstan
with UniK[31], the Unikernel Compilation and Deployment Platform, and Virtlet[32], a
Kubernetes extension for managing virtual machines. Although the UniK project has merged
all the changes the MIKELANGELO project proposed, this integration is nevertheless
presented as a lesson learned which might benefit other projects dealing with upstream
communities and the caveats this brings. In contrast, the integration with Virtlet and
Kubernetes offers a much more stable path for MIKELANGELO recognition in the area of
management of lightweight workloads.
2.6.1 Architecture Evaluation
The final Capstan architecture was described in detail in report D2.21. For completeness, the
following figure depicts different components of the Capstan architecture. Compared to the
original version of Capstan, MIKELANGELO introduced the package manager component and
support for cloud backends. Over the course of the project the architecture did not change
significantly. Some components were added (e.g. disk builder) while others changed the way
they are interconnected with the rest of the components (e.g. cloud backend).
As shown in the figure, most of the components have been successfully implemented (green
boxes). The only two remaining components were initially planned to support a service-based
package repository instead of a local one. To this end, the initial design included REST API and
authentication components that would allow role-based querying, retrieval and storage of
application packages. Although these components have clear benefits, we opted not to
include them in the implementation because of the overall complexity they would bring with
little real value, at least for the MIKELANGELO use cases. Instead, only a local repository is
employed, storing application packages and composed virtual machine images.
To allow end users to reuse pre-built application packages offered by the MIKELANGELO
consortium, support for Amazon S3 (object storage) repository from the original Capstan
project was extended to support access to a public S3 repository. These enhancements
enable querying and downloading of the application packages into users’ local repositories
from where they can be composed into target unikernels.
Figure 16: Final architecture of the MIKELANGELO Package Management. Components marked with green colour
are available in the final release of Capstan.
As the next section will compare the performance of the final Capstan against the two
approaches for building unikernels that OSv supported previously, the following paragraphs
introduce these two approaches and outline their caveats.
Developer scripts are provided in the OSv kernel source tree. They include a number of
Bash and Python scripts that help OSv developers build, test and validate their changes to
the kernel using real applications. The main script, scripts/build, builds the OSv kernel and
the requested modules, and ensures that the results of the compilation process (libraries,
applications, supporting files, etc.) are uploaded into the target QEMU virtual machine
image.
Existing applications are maintained in a central GitHub repository (osv-apps[33]), which is
also linked into the main OSv kernel repository as a submodule. Applications are maintained
as recipes: instead of prepackaged binary application packages, each application provides a
set of scripts that builds the target application in a way suitable for inclusion in the target
VM. The benefit of this is that it empowers end users to alter applications manually or
update them to newer versions while, at the same time, keeping the repository small and
well organised. It also prevents library mismatches that could occur if users built different
parts of the unikernel on different machines.
However, this approach also has several drawbacks:
●
The end user is required to have a full development environment to use applications,
even if only a single application is needed.
●
The user must always build the OSv core prior to being able to build the unikernel.
●
The user must build the entire application which, as will be seen in the next sections,
may take longer than expected.
●
It is up to the application maintainer to ensure cross-compilation in case system-wide
libraries are required for building the target application (for example, if a system
package is required, apt-get would have to be used for Ubuntu/Debian and yum for
Fedora/CentOS). There are no specific guidelines for building recipes of applications.
●
Very limited possibilities for validation/verification (no validation of packages, no
dependency management or resolution, etc.).
●
Lack of formalized structure and processes to be followed in building a complete OSv
virtual machine image.
These scripts are nevertheless an invaluable resource for software developers working on
changes to the kernel itself. For example, while working on UNCLOT, which changes the
networking stack of OSv, it was important to be able to rebuild the kernel and test
applications quickly. Because only small changes were made, the overall time required to
compile the most up-to-date version was significantly shorter than building the
corresponding packages.
Capstan (original version) is a specialised tool for building virtual machine images for
applications running on top of the OSv kernel. Capstan does not require recompilation of the
OSv kernel when the application is being built for an OSv image. Instead, it relies on base
images containing the kernel and one or more additional applications built into the image.
Capstan further formalizes the structure of the application description. However, its reliance
on the base images is also its biggest drawback as it only augments the base image by
uploading additional files. Base images are of fixed size (most of them 10 GB) which is
sufficient in most cases but does not provide the flexibility often required in the cloud.
Resizing the base image would require the use of the OSv developer scripts discussed
previously.
2.6.1.1 UniK
Following the introduction of an abstraction for the infrastructure (cloud) services, which was
built directly into Capstan, UniK was also considered for integration. The UniK project had a
more general-purpose goal than Capstan, as it intended to build a common platform for
different kinds of unikernels and infrastructures. It initially started with support for the
original version of Capstan, which simplified the creation of Java-based virtual machines on
top of OSv. The MIKELANGELO project submitted two contributions to the UniK project: full
integration of package management with the updated version of Capstan, and support for
OpenStack as a provider.
Architecturally, UniK offers several advantages over plain Capstan; the following three are
the most relevant:
●
REST API. Capstan does not expose its capabilities via a REST API. This complicates
integration with external systems, making it trickier and more error prone. UniK
provides an API through which composition, staging (of VM images) and provisioning
are significantly simpler.
●
Containerised environment. One of the drawbacks of Capstan is that it requires
certain system-wide packages to be available, e.g. QEMU. UniK wraps unikernel
compilation into a dedicated container that already includes all the necessary
packages. This simplification comes with a slight performance penalty, in particular
when KVM acceleration cannot be enabled.
●
Multi-cloud support. Although Capstan abstracted the cloud provider, only an
OpenStack provider was actually integrated. UniK on its own comes with support for
Amazon and Google; with OpenStack support added by MIKELANGELO, the set of
supported stacks is better represented.
Despite these advantages, integration with third-party applications, such as OpenFOAM
Cloud where unikernels are composed on the fly, is rather cumbersome and does not
achieve the performance of the final version of Capstan. The UniK project also started to
lose its momentum after the core team left EMC. This resulted in a decision not to pursue
further enhancements to the UniK project.
2.6.1.2 Kubernetes Virtlet
Virtlet is an extension for Kubernetes that allows deployment of standard virtual machines as
if they were Kubernetes-managed containers. Its architecture has been presented in detail in
report D5.3, along with the integrations made by the MIKELANGELO project. That report also
presented three use cases serving as acceptance criteria for this line of work: an Apache
Spark application, the OpenFOAM use case and a simple microservice-like application.
All three applications have been successfully configured and managed dynamically, either at
the time of service provisioning or during runtime. Furthermore, the use of Kubernetes
infrastructure simplifies the mixing of unikernels, standard Linux guests and Docker
containers. The above applications employ this technique to choose the most suitable
technology for each part of the application (e.g. a Docker container for the Spark driver
instance and a set of OSv-based unikernels to run the distributed workloads).
Regarding the architecture evaluation, it is safe to conclude that this final push on the
lightweight execution environment has been successful in bringing OSv-based unikernels,
composed by Capstan, closer to containers and standard Linux guests.
2.6.2 Performance Evaluation
This section presents an evaluation of virtual machine image composition performance,
comparing the developer scripts with the LEET version of Capstan. A comparison with the
original version of Capstan is omitted because it relies on the developer scripts to build
images for custom packages. Times shown in the table for developer scripts include the
time required to execute the recipe, i.e. the time to download the source distribution,
configure and compile it, and finally build the target VM image. The only exceptions are the
OpenFOAM packages, because the time required to compile OpenFOAM goes well beyond
one hour; their compilation time is therefore excluded. Times shown for LEET cover only the
composition of the target VM image.
The most important factor determining the overall composition time for LEET is not only the
size of the package, but also the number of files the package comprises. As an example, LEET
shows an impressive 268-fold improvement for the MySQL package. MySQL is a complex
application and compiling it requires a significant amount of time. However, the end result is
a small number of libraries and configuration files (274 files in total). On the other hand, the Erlang
package is not as complex to compile, but results in a total of 4,429 files, causing a rather
long overall composition time. This signals that the mechanism for uploading files to the
target VM could be improved.
Table 4: Time to compose images, LEET vs Developer Scripts

Package                                  | Developer scripts (s) | LEET (s) | Improvement
apache.spark-2.1.1                       | 1,077.57              | 19.61    | 55x
erlang-7.0                               | 164.56                | 48.64    | 3x
mysql-5.6.21                             | 2,353.45              | 8.78     | 268x
node-4.4.5                               | 309.52                | 4.15     | 75x
node-4.8.2                               | 309.81                | 4.16     | 75x
node-6.10.2                              | 483.94                | 4.55     | 106x
ompi-1.10                                | 512.75                | 4.74     | 108x
openfoam.core-2.4.0                      | 36.08                 | 10.89    | 3x
openfoam.pimplefoam-2.4.0                | 41.45                 | 11.11    | 4x
openfoam.pisofoam-2.4.0                  | 43.54                 | 11.52    | 4x
openfoam.poroussimplefoam-2.4.0          | 42.49                 | 11.26    | 4x
openfoam.potentialfoam-2.4.0             | 38.12                 | 11.18    | 3x
openfoam.rhoporoussimplefoam-2.4.0       | 42.23                 | 11.31    | 4x
openfoam.rhosimplefoam-2.4.0             | 42.87                 | 11.20    | 4x
openfoam.simplefoam-2.4.0                | 39.92                 | 11.20    | 4x
openjdk7                                 | 33.48                 | 13.85    | 2x
openjdk8-zulu-compact1                   | 76.55                 | 6.01     | 13x
openjdk8-zulu-compact3-with-java-beans   | 123.26                | 7.01     | 18x
osv.bootstrap                            | 13.16                 | 3.39     | 4x
osv.cloud-init                           | 57.03                 | 4.32     | 13x
osv.httpserver-api                       | 12.10                 | 3.98     | 3x
osv.httpserver-html5-gui                 | 11.31                 | 5.00     | 2x
osv.nfs                                  | 11.86                 | 3.31     | 4x
python-2.7                               | 251.38                | 22.73    | 11x
Besides the time required to compose an image for the first time, it is also interesting to
analyse the performance of updating existing images. This is important because, more
often than anticipated, it is necessary to compose an image several times, updating only
parts of it.
Updating an image with LEET (e.g. adding a single file) is approximately 2.5 times faster
than composing the image from scratch (averaged over all updates for the above packages).
Predictably, the biggest improvement is observed with the Erlang package: LEET updates the
original image 9 times faster than recomposing it from scratch. Similar behaviour is observed
with the Python and Spark packages (5.5 and 3 times faster, respectively).
From the perspective of LEET and the workflows it is designed to address, this is an
important advance. For example, when composing Python applications, end users will
typically modify only their Python scripts, while the runtime itself remains unchanged. Being
able to update the image in under 4 seconds, instead of 22, is an enormous improvement
that does not disrupt the developers’ workflow.
2.6.2.1 First Time OSv and Application Users
As already described, the developer scripts can only be used in conjunction with the
entire OSv kernel source tree. Consequently, prior to building an application image, the OSv
kernel must be compiled. Besides the time required to set up the development environment,
this adds 10 minutes or more to building the first application image. After the kernel and
the application are fully compiled, the times from the table above apply.
This difference is even more significant for large applications with complex compilation logic,
such as OpenFOAM. Because OpenFOAM is used in the Aerodynamics use case (as detailed
in Deliverable D2.10, The First Aerodynamic Map Use Case Implementation Strategy), the
MIKELANGELO consortium provides the OpenFOAM application, compatible with other OSv
applications, from the osv-apps repository. However, using the developer scripts, it still takes
several hours to compile before the application image can actually be used, and this does not
include the time needed to contextualise the image. Because Capstan uses the same build
command, it suffers from the same problem. In contrast, LEET is not affected in any way
because a pre-built package is available: users can immediately download and compose
their OpenFOAM simulation into an executable image.
2.6.2.2 Application Package Authors
The following table is an updated comparison evaluating the three tools from the perspective
of application authors interested in sharing their applications as OSv-compliant application
images or packages. The evaluation is based on the typical workflows of the corresponding
tool.
Table 5: Application Package Management tool comparison

Preparation of the application content
● Developer scripts: The developer is supposed to prepare a script that ensures the application is downloaded, patched and compiled automatically. There is no common infrastructure that would facilitate cross-platform builds (for example, apt-get vs. yum).
● Capstan: Besides the build script, which is the same as in the case of developer scripts, a dedicated Capstan image specification (Capstanfile) is also required.
● LEET: Application authors are encouraged to create a verbatim structure of the package. This can be done in any way they see fit (either via script or by manually placing application content into corresponding directories). The tool supports the author with the creation of an application manifest file which describes the application and its dependencies.

Dependency management
● Developer scripts: A Python module is available for OSv supporting the specification of dependent modules. These must be specified in a special Python script (module.py). Each of the dependencies links to a different module that is in turn compiled and bundled into a target image.
● Capstan: Capstan only supports the notion of a base image. This may contain arbitrary modules, however there is no way of composing several modules into a single application image (apart from iteratively building images until all modules are uploaded). With the increasing number of packages and combinations thereof this became a tedious task. No base image update has occurred for a long time.
● LEET: Required packages (modules) are specified in the application manifest file. Application packages are collected by the tool and uploaded onto the target application image. Automatic pulling of missing dependencies from a package hub is supported.

Image building workflow
● Developer scripts: The user needs to invoke the main OSv build script and specify the list of required modules. This script will in turn invoke scripts from the required modules and eventually include all of them in the target image. On every image build, the code is checked and rebuilt if necessary. For external packages it is the role of the package maintainer to define the recipe in such a way that multiple invocations produce the same result.
● Capstan: The Capstanfile is consulted for information about the build process and the file structure of the application. The build script is subject to the same limitations as in the case of developer scripts.
● LEET: The application manifest is used for basic metadata and other required packages. Content is retrieved directly from the application’s root directory.

Performance
● Developer scripts: The main build script is highly optimised. However, applications’ build scripts may not be efficient as they are provided by third-party application providers. Since the application user is required to recompile the application, this could affect performance.
● Capstan: Same as in the case of developer scripts, because the Capstanfile just references the build script.
● LEET: The application author prepares the package in a form suitable for execution on top of OSv, so the end user never needs to rebuild the package manually. Additionally, the tool supports efficient incremental updates to target images by uploading only the content that has changed.

Application execution
● Developer scripts: The image must be built before it is used. A script for running OSv-based images is included in the OSv source tree. Being developer-oriented, it provides the most configuration options. These options may also be extended or passed directly to the underlying hypervisor.
● Capstan: The image must be built before it can be used. Capstan only supports a subset of the configuration options of the developer script.
● LEET: An application may be launched directly from the application root directory; the tool will automatically ensure the image is updated with the latest content. Several enhancements to the hypervisor configuration options have been added based on the requirements. It also allows deploying applications on OpenStack directly.

Package and application repository
● Developer scripts: Applications and packages are available in the osv-apps GitHub repository, which is already referenced as a submodule in the OSv kernel source. Additional local repositories may be added and configured in OSv (config.json file). At the time of this report, 89 different applications are available.
● Capstan: Application images are available in a repository hosted on Amazon S3. 11 base images are available.
● LEET: MIKELANGELO provides a central hub with pre-packaged applications. Currently 24 packages are available; most of them are used by the internal use cases and demonstrators. LEET supports multiple package repositories, i.e. users can create their own remote repositories to be used throughout application composition.

Image updates
With the support of the REST server built into OSv, it is possible to update instances on the fly regardless of the way they were built. This category therefore evaluates how simple and efficient it is to update an image.
● Developer scripts: Any update to the image will first trigger checks whether code recompilation is required, followed by the creation of the base OSv image. Afterwards, all files are uploaded onto this image.
● Capstan: Similar to developer scripts. It is not possible to update existing images with only the modified content.
● LEET: LEET provides an option that allows its image composer to check for changes in the application content and upload only those to the existing image. This results in significantly improved image composition performance and user experience.
2.6.3 Implementation Evaluation
Management of lightweight applications addresses two target audiences of interest to the
MIKELANGELO project. On the one hand, there are application/package maintainers who are
interested in building reusable, self-contained packages suitable for inclusion into
unikernels. On the other hand, there are application users looking for a simple way of
composing their unikernels out of existing packages. Based on the internal use cases and the
demonstrators, it is clear that the enhancements made to LEET support both of these
audiences.
The following subsections provide an evaluation of different aspects of the current version of
the Lightweight Execution Environment Toolbox.
2.6.3.1 Package Metadata
LEET builds upon the previously developed Capstan tool. It was developed as a fork of the
original version, with the intention of eventually being merged back. However, during the
project lifetime the original Capstan slowly faded away, and the MIKELANGELO release
became the main Capstan version.
Capstan distinguished the image repository from the running instances allowing execution of
several instances from the same base image. LEET added another layer on top of this by
introducing the concept of an application package. An application package is a compressed
archive with additional metadata information used when composing a set of packages into
application virtual machine images. Package metadata plays an important role in the package
lifecycle. Information available in the metadata is used for querying remote (central package
hub) or local repositories. All information is also displayed to the users inspecting the
packages. The name of the package serves as a reference for dependency management, i.e.
package dependencies are specified by their full names.
Metadata is furthermore recursively used during the image composition. This means that
metadata of any included package is consulted and processed, resulting in a dependency
tree. Image composition starts at the root of this tree allowing more specialised packages to
override parts of their dependencies (generic packages). This is important for example when
the user needs to provide their own configuration files for their target environments.
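The recursive metadata processing described above can be sketched as follows. Package names and the `require` field are illustrative stand-ins, not Capstan's actual manifest format; the walk is post-order so generic packages are composed first and more specialised packages, processed later, can override overlapping files:

```python
# Hypothetical package metadata: each package lists the packages it
# requires.  The names and structure here are illustrative only.
METADATA = {
    "my-app":         {"require": ["node-6.10.2", "osv.cloud-init"]},
    "node-6.10.2":    {"require": ["osv.bootstrap"]},
    "osv.cloud-init": {"require": ["osv.bootstrap"]},
    "osv.bootstrap":  {"require": []},
}

def composition_order(root):
    """Post-order walk of the dependency tree: generic dependencies are
    composed first, so more specialised packages (composed later) can
    override parts of the files contributed by their dependencies."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in METADATA[name]["require"]:
            visit(dep)
        order.append(name)
    visit(root)
    return order

# osv.bootstrap is composed first, the application package last
order = composition_order("my-app")
```

A shared dependency such as osv.bootstrap is visited only once, even though two packages require it, matching the tree-flattening behaviour implied by the text.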
Version information stored in the application metadata currently only specifies the version of
the LEET application package and not the version of the underlying software. This means that
different versions of the underlying software must currently be encoded in the name, e.g.
node-4.4.5, node-4.8.2 and node-6.10.2. Once any of these packages is rebuilt, its version will
increase, indicating that a new package is available and can be updated through the
command line tool.
When packages are provided in standard operating systems, such as Linux, they are
traditionally accompanied by a set of scripts that simplify first steps with the application.
Similarly, the final version of the tool added support for run configurations. These are part of
the package manifest and allow package maintainers to define one or more standard
commands that are directly available to the users of the package. Run configurations for Java,
Node.JS, Python and native runtimes are supported. Users of the package can alter parts of
the commands through environment variables. This way it becomes trivial to, for example,
run a Spark master and Spark workers by invoking different run configurations.
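The idea of run configurations with environment-variable overrides can be sketched as follows. The configuration names, command templates and variable names are hypothetical, not taken from the actual Spark package or Capstan's manifest syntax:

```python
import os
from string import Template

# Hypothetical run configurations, as a package maintainer might define
# them in the package manifest; the real syntax may differ.
RUN_CONFIGS = {
    "master": Template("spark-class org.apache.spark.deploy.master.Master --port $PORT"),
    "worker": Template("spark-class org.apache.spark.deploy.worker.Worker spark://$MASTER:$PORT"),
}

def build_command(config_name, defaults):
    """Expand a run configuration, letting environment variables
    override the package-provided defaults."""
    values = {k: os.environ.get(k, v) for k, v in defaults.items()}
    return RUN_CONFIGS[config_name].substitute(values)

# Make the example deterministic regardless of the caller's environment.
os.environ.pop("PORT", None)
os.environ.pop("MASTER", None)

defaults = {"PORT": "7077", "MASTER": "192.168.1.10"}
master_cmd = build_command("master", defaults)   # uses the defaults

os.environ["PORT"] = "7177"                      # user overrides one value
worker_cmd = build_command("worker", defaults)
```

The same package thus yields different commands (master vs. worker) from different run configurations, with user customisation limited to well-defined variables.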
The final version of the tool also provides the ability to document the package and the way it
should be used. All run configurations are automatically processed and presented as part of
the package documentation. This allows users to inspect the commands and check whether
the default run configurations and their customisations are sufficient. If they are not, users
have a clear idea of how to tailor the command to their needs.
2.6.3.2 Volume Management
One of the most recent additions to Capstan is support for the management of block storage,
i.e. additional disk images suitable for OSv. OSv uses ZFS as its only general-purpose file
system. However, since ZFS has not been updated recently in OSv, disk images built with
standard Linux tools are not usable within OSv.
To this end, tools for creating, formatting (partitioning) and attaching volumes to OSv
instances have been introduced. These tools furthermore facilitate the use of cloud-init
contextualisation of OSv-based VMs, because it is no longer necessary to deploy an instance
in the cloud to test the cloud-init user data configuration: with the support of volume
management, all of this can be tested in the end user’s local environment.
2.6.3.3 Application Packages
Along with the updated tool for managing application packages and virtual machine images,
the MIKELANGELO consortium maintains several pre-built application packages. Current
packages are mainly required by internal use cases and demonstrators.
The OSv kernel has been compiled using the developer scripts. Two core artefacts have been
used and integrated into LEET:
●
Kernel loader: the OSv loader is responsible for loading the kernel when the virtual
machine is instantiated. Multiple loaders are provided to include different
functionalities (e.g., core kernel, kernel with NFS support, kernel with NFS and
UNCLOT).
●
Bootstrap package: a set of libraries and tools that are mandatory for any kind of
application running on top of OSv. These include a tool for formatting the target
partition (mkfs) and the tool to upload application content onto the OSv image.
Libraries include the ZFS filesystem support and some of the libraries used by the
kernel itself.
Both of these are available in the LEET package repository and are included into the target
application image automatically without the user having to specify any explicit reference.
Similar to these two components, LEET also provides several of the widely used OSv core
modules (HTTP REST server, Command Line Interface, Java and cloud-init to name a few).
These modules can easily be incorporated into any application image simply by specifying
them as a required package. The benefit of providing these modules as LEET packages is that
end users are not required to build them from source nor are they limited to the pre-built
virtual machine images containing a subset of these modules.
OpenFOAM provides a set of libraries (a framework) and applications (solvers) supporting
intensive computational fluid dynamics. It is used by the Aerodynamic Maps use case. In
order to verify the composability of applications, we have built several packages out of
OpenFOAM. First, OpenFOAM Core includes libraries that are used in all application solvers,
namely the underlying CFD functionalities. Second, for every solver that was of interest to the
use case owner, a specific package was built to include only the libraries that the solver needs
on top of the core functionalities. All these depend on the core OpenFOAM package ensuring
that all required files (configurations, libraries and binaries) are composed into the target
unikernel. The fact that application solvers are separated significantly decreases the overall
size of these unikernels.
As part of the big data or data analytics use cases, several packages were considered:
Hadoop HDFS, Apache Storm and Apache Spark. HDFS and Spark were turned into packages,
while Storm was not, mainly because it was not used by any of the use cases. Because these
products revolve around similar core concepts, similar changes were required in all of these
packages. Most importantly, none of them worked out of the box in OSv because they all
rely on process forks, which are not supported in OSv. To overcome this, wrappers for the
OSv API were created, allowing a simple transformation of process forks into thread
creation. This means that for every new process that would be started by these products, a
new thread is started within OSv instead. Analysis of Apache Storm revealed that the
approach to creating a package for Storm is exactly the same as in the case of Spark.
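The fork-to-thread substitution can be illustrated with the following sketch. The wrapper name and interface are invented for illustration; the real OSv API wrappers operate at the native (C) level inside the guest:

```python
import threading

def spawn_worker(target, *args):
    """Illustrative stand-in for a process fork: on a single-address-space
    kernel such as OSv, where fork() is unavailable, a 'new process' is
    started as a new thread in the same address space instead."""
    t = threading.Thread(target=target, args=args)
    t.start()
    return t

results = []
def task(n):
    results.append(n * n)

# What a Spark- or HDFS-like product would launch as child processes
# becomes a set of threads within the single OSv instance.
workers = [spawn_worker(task, n) for n in range(4)]
for w in workers:
    w.join()
```

This preserves the product's "one worker per process" model logically, while conforming to OSv's single-process execution model.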
Packages serving as runtime providers have also been updated and included in the central
package hub. These packages include different versions of Node.JS, Java and, very recently,
also Python. Once one of these runtime provider packages is included by the application, end
users only need to provide their artifacts and compose the unikernel. Artifacts can be scripts
(in the case of Node.JS and Python) or a complete binary distribution (a self-contained JAR
file for Java).
All these packages come with predefined run configurations for standard roles (for
example, HDFS namenode vs. HDFS datanode, and Spark master vs. Spark worker). These
commands allow basic customisation as well as the specification of custom commands. As
explained in the previous section, run configurations are displayed as part of the package
documentation.
Building all of the above packages proved a trivial task once Capstan was updated with the
additional functionalities. The common approach was to build the target application from
source, extract the relevant binaries and build the application image for testing. The most
time-consuming task in this approach is the actual validation procedure, guaranteeing that
the application and all of its required libraries are properly integrated into the package.
To improve the agility of this process, a platform[34] for the development of application
packages was built. This platform standardises the way packages are built and verified. It uses
Docker containers to separate the user’s environment from the environment of the package
building process. Package maintainers can use the platform to define their build tasks,
resulting in a package suitable for LEET Capstan. Recipes for all packages hosted by the
MIKELANGELO hub are openly available.
2.6.4 Key Performance Indicators
The Lightweight Execution Environment Toolbox addresses several of the key performance
indicators listed in the project proposal. The following table presents those that are being
addressed, with additional details for each of them following the table.
Table 6: KPIs for LEET

Objective O1: To improve responsiveness and agility of the virtual infrastructure. By responsiveness and agility we denote the efficiency of setting up a new set of virtual machines and pulling them back. The objective is to improve the time to "burst" the cloud or to set up and tear down a set of virtual machines, either with the same or with a different application.
● KPI 1.2: Relative improvement of time to change the installed application on a number of virtual machines. Status: Indirectly addressed.

Objective O4: To provide application packaging, improving portability and deployment of virtualized applications.
● KPI 4.1: A system for appropriately packaging applications in MIKELANGELO OSv is provided. Status: Addressed.
● KPI 4.2: A relative improvement of efficiency [time] between deploying a packaged and a non-packaged application. Status: Addressed.

Objective O7: To appropriately demonstrate the project’s outcomes on ground-breaking use cases.
● KPI 7.1: All use cases appropriately demonstrated. Status: Addressed for use cases using OSv.
A more detailed assessment of these KPIs, based on the current version of LEET, is presented
next. These assessments update previous reports with the most up-to-date state.
KPI 1.2 Relative improvement of time to change the installed application on a number
of virtual machines
The status of this KPI is marked as indirectly addressed because this task does not focus on it
explicitly. However, as the description of the KPI 4.2 status below shows, we have managed
to extend the application packaging tools to support application updates to the
existing images that are significantly faster than any of the previous techniques, by storing
information about the VM image content. This information is used during application
updates to deploy only the modified content onto target VM images.
KPI 4.1 A system for appropriately packaging applications in MIKELANGELO OSv is
provided.
Reports D4.6 and D5.3 present the status of the final version of the application package
management tools (Capstan and Virtlet). Most of the recent enhancements relate to this
particular KPI, with a focus on delivering a tool suitable for the management of lightweight
unikernel images using OSv.
KPI 4.2 A relative improvement of efficiency [time] between deploying a packaged and
non-packaged application
Report D4.7[35] introduced the efficiency improvements of deploying a packaged versus a
non-packaged application, comparing the existing tools (build scripts and Capstan) against
the LEET package manager. That comparison showed that LEET is slightly slower than
Capstan when using base images that are already built. Nevertheless, it is still significantly
faster than the build scripts when using pre-compiled packages.
Enhancements made to image composition, in particular package content caching, allow for
incremental updates of existing virtual machine images. The cache stores information about
previously uploaded files and directories and improves the performance of consecutive VM
builds, as only the content modified since the last build is uploaded. When the modifications
are minor, these builds are practically instantaneous. For example, the HDFS application is a
large Java application with many different libraries and configuration files. If the user only
changes the configuration of the deployed application, this mechanism will deploy only the
modified configuration file to the existing VM image. Alternatively, if only a subset of the
Java libraries (JAR files) needs to be modified, the caching mechanism will upload only the
changed files.
KPI 7.1 All use cases appropriately demonstrated
The application package management supports this KPI in two ways. First, it provides a tool for preparing packages suitable for execution in OSv-based virtual environments. Second, it supports the use cases in the preparation of their application packages. In doing so, it provides the mandatory base packages (for example, Open MPI for the HPC use cases and OpenFOAM solvers for the Aerodynamics use case).
At the time of writing this report, the packages required by the Aerodynamic Maps use case have been prepared using the latest version of the OSv source tree and the stable version of the OpenFOAM platform (2.4.0) used by the use case owner. These packages have been put to use in HPC and cloud environments. Packages for several of the most popular OpenFOAM
application solvers have been provided in a central package repository, allowing external users to reuse them in their own simulations. Packages are provided in such a way that the solvers are completely interchangeable, meaning that the same input data can be processed using any of the supported solvers.
The status of the remaining use cases is as follows:
● Bones simulation: base packages have been used to compose target OSv images; however, no packages have been provided for the application itself due to certain limitations in the application license.
● Big data: easily extendable packages for Hadoop HDFS and Apache Spark have been provided. These allow users to deploy a complete distributed file system using OSv-based master nodes and data nodes storing the data. Apache Spark provides master and worker capabilities, while the driver is omitted because it traditionally requires scripting capabilities which are not available in OSv.
● Cloud bursting: no additional applications have been required at this point.
2.7 UNikernel Cross-Layer opTimisation - UNCLOT
UNCLOT is a new component that was introduced in the last year of the project. It is motivated by the intermediate evaluation of real-world use cases (mainly the Aerodynamic Maps use case and some synthetic benchmarks), which demonstrated that OSv experiences performance issues when deployed over multiple NUMA nodes of modern processors. Unlike Linux, OSv does not use NUMA information. This means that when an OSv instance is provisioned on a CPU with multiple NUMA nodes, threads pinned to cores of one NUMA node may allocate memory from a different NUMA node, resulting in deteriorated performance.
UNCLOT addresses this by allowing OSv-based virtual machines running on the same host to
communicate via shared memory, effectively bypassing the standard virtual networking stack.
As a consequence, the cloud/HPC management layer can provision one instance per NUMA
node and establish shared memory between the instances. Besides addressing the HPC
workloads, the decision to design and implement UNCLOT was also supported by the idea
that lightweight unikernels could compete with containerised environments in the field of
serverless architectures, or FaaS, Function-as-a-Service. Their immutability and efficient
recomposition via LEET offer new ways of handling lightweight services in large clustered
applications.
The UNCLOT component is currently in a functional proof-of-concept state. It supports both synthetic workloads testing a very specific part of the networking stack and more complex workloads running real-world MPI applications.
2.7.1 Architecture Evaluation
The main architectural requirement for UNCLOT was that its changes be completely transparent to applications exploiting UNCLOT as the underlying communication channel. This means that the component must not change any of the existing APIs that applications use to fulfill their networking needs.
UNCLOT relies on the shared memory device which is allocated by the KVM hypervisor. It is
presented as a PCI device inside the guest operating system. The hypervisor can create
multiple shared memory devices and a single device can be shared between multiple VM
instances, allowing the infrastructure management layer to optimally configure guests that
should be able to communicate over shared memory. For example, when an MPI workload is
provisioned, the management layer knows which instances will be processing the same
workload. Similarly, when functions (or microservices) are deployed for a specific application,
it is known which functions work on the same application and thus should be allowed to
share their data through the shared memory.
UNCLOT augments the standard POSIX API: it automatically detects whether instances are collocated and dynamically routes their communication from the standard networking stack to the shared memory.
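The rerouting idea can be illustrated with a minimal sketch (illustrative only: UNCLOT operates inside the OSv kernel on an IVSHMEM PCI BAR, whereas here an anonymous mapping stands in for the shared region):

```python
import mmap
import struct

REGION_SIZE = 4096

# A shared region; with IVSHMEM this would be the PCI BAR mapped into
# both guests. An anonymous mapping stands in for it here.
region = mmap.mmap(-1, REGION_SIZE)

def shm_send(buf, payload):
    """Write a length-prefixed message into the shared region."""
    buf.seek(0)
    buf.write(struct.pack("<I", len(payload)))
    buf.write(payload)

def shm_recv(buf):
    """Read a length-prefixed message from the shared region."""
    buf.seek(0)
    (length,) = struct.unpack("<I", buf.read(4))
    return buf.read(length)

# The sender instance writes, the collocated receiver reads the same pages,
# bypassing the virtual networking stack entirely.
shm_send(region, b"MPI rank 0 -> rank 1")
assert shm_recv(region) == b"MPI rank 0 -> rank 1"
```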
One drawback of this architectural design is a potential security issue. Virtual machines presented with the same IVSHMEM memory region can read and write data anywhere in this region, which can lead to memory corruption, either intentional or unintentional. However, because in both use cases we wanted to evaluate UNCLOT against (HPC workloads and serverless architectures) the management layer knows exactly which VMs should share the same memory region, we opted for an architecture that is simpler from a security point of view. This allowed us to evaluate the performance with no unnecessary overheads.
2.7.2 Performance Evaluation
During the evaluation we focused primarily on comparing unmodified OSv against OSv with UNCLOT enabled. Two synthetic benchmarks were used to analyse the improvements and limitations of UNCLOT:
● OSU Micro-Benchmarks[36], a suite of benchmarks used to analyse the performance of MPI standard implementations.
● The Nginx[37] server with the wrk[38] workload generator, used to measure the performance of UNCLOT-enabled HTTP communication.
The testbed used one Supermicro server with two Intel Xeon E5540 CPUs at 2.53 GHz. Each
CPU has 4 real cores, 8 MB of shared L3 cache, 4*256 KB L2 cache, 4*32 KB L1i cache and
4*32 KB L1d cache. The system has 48 GB of DDR3 memory (16*4 GB). The system accesses
memory in two NUMA nodes (i.e. each CPU socket is in its own NUMA node, and has direct
access to 24 GB of memory).
Figure 17 shows the bandwidth results of the OSU Micro-Benchmarks. Three scenarios are considered. The ideal case is one where the benchmark is executed in a single OSv instance, thus using the optimal communication path between MPI workers. OSv with TCP/IP represents the case where two instances running the unmodified OSv kernel on the same host communicate over standard TCP/IP networking. OSv with UNCLOT also uses two OSv instances, but enables UNCLOT to exchange messages between MPI workers. Even though the ideal case outperforms both other scenarios, UNCLOT shows an improvement, in particular when the message size is larger than 1024 bytes. The bandwidth of UNCLOT consistently outperforms unmodified OSv by a factor of 3-6.
Figure 17: OSU Micro-Benchmarks bandwidth test.
The latency benchmarks shown in Figures 18 and 19 show that UNCLOT also reduces latency, by a factor of between 2.5 (larger messages) and 5 (smaller messages). As with bandwidth, the ideal case where workers run in the same OSv instance clearly outperforms UNCLOT.
Figure 18: OSU Micro-Benchmarks latency test.
Figure 19: OSU Micro-Benchmarks latency test for small message sizes.
The final benchmark evaluates UNCLOT using an Nginx HTTP server serving a simple static page to the wrk workload generator, varying the number of concurrent connections, i.e. sockets. This evaluation clearly shows the theoretical improvement UNCLOT can deliver in the case of one or two active sockets, and also indicates a limitation of the current implementation. The performance deterioration is assumed to be attributable to the way communication is synchronised in this proof of concept (through a busy-wait loop).
Figure 20: Nginx performance evaluation using wrk.
2.7.3 Implementation Evaluation
Technical details about the implementation of the UNCLOT component are presented in report D3.3, The final Super KVM - Fast virtual I/O hypervisor. The current implementation offers a proof of concept that allowed functional evaluation and a preliminary performance analysis. The implementation closely follows the coding standards of the OSv core source code in order to target eventual acceptance in the upstream project.
The remainder of this section discusses some implementation topics covered in the
aforementioned report. Some potential follow-on work is also identified.
The ring buffer is implemented as a single-producer single-consumer queue. It can be argued that in most applications only one thread sends data to a given socket, and only one thread reads from a given socket at any time. Even in applications with a worker thread pool (e.g. many HTTP servers), each thread is assigned to a socket and is then the exclusive user of that socket until the processing of the incoming request is finished. In general, however, the POSIX specification allows concurrent access from multiple threads, so the ring buffer should be modified to be multi-producer multi-consumer capable.
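The single-producer single-consumer discipline can be sketched as follows (an illustration of the queueing idea; the actual UNCLOT ring buffer lives in IVSHMEM and is written in C++):

```python
class SpscRing:
    """Bounded single-producer single-consumer byte queue.

    `head` is advanced only by the consumer and `tail` only by the
    producer, so with atomic index updates no lock is needed. One slot
    is kept empty to distinguish a full ring from an empty one.
    """

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0  # next index to read (consumer-owned)
        self.tail = 0  # next index to write (producer-owned)

    def push(self, data):
        """Producer side: returns False when the ring is full."""
        free = (self.head - self.tail - 1) % self.capacity
        if len(data) > free:
            return False
        for b in data:
            self.buf[self.tail] = b
            self.tail = (self.tail + 1) % self.capacity
        return True

    def pop(self, n):
        """Consumer side: return up to n available bytes."""
        out = bytearray()
        while self.head != self.tail and len(out) < n:
            out.append(self.buf[self.head])
            self.head = (self.head + 1) % self.capacity
        return bytes(out)
```

The SPSC guarantee is exactly that each index has a single writer; supporting multiple producers or consumers would require atomic compare-and-swap updates of the indices.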
During our evaluation experiments, we noticed that epoll is significantly faster when used on a pipe file descriptor than when used on a socket file descriptor. In this context, "faster" means how many one-character messages are exchanged between two threads in a standard ping-pong fashion, i.e. the test measures latency. If lower latency were required, epoll might be better implemented on top of the supporting functions from pipes. This would result in a different implementation that would not affect standard network sockets.
As discussed previously, ring buffers store whole messages, i.e. when 100 bytes of data are written to the socket, the data is copied into the memory of the ring buffer as a whole. As long as the target VM consumes and processes the entire data, this is fine (as happens for HTTP and database servers, for example). However, for applications that process only part of the data, such an approach makes redundant copies. Examples of such applications are network routers or load balancers: a network router needs to inspect only the IP header, modify it and forward the packet. Most of the data is thus left unmodified; more importantly, the application does not even look at it. In such cases it would be better to store the IP header separately from the data, so that the router can focus on the header alone and, once processed, move only the header towards the next VM. This use case is rather specific, but is aligned with the interesting and challenging area of software defined networks, where lightweight unikernels can offer a dynamic and versatile architecture supporting real-time scaling based on network demand.
When a hard VM shutdown or reset occurs, memory chunks in IVSHMEM are left allocated. During allocation, each VM marks allocated chunks with an owner tag, which allows dangling chunks from previous VMs to be detected and force-cleaned. A custom spinlock class is used to serialise allocations; this class also carries an owner tag. Thus, if a VM were killed while holding the lock, other VMs (or external processes on the host) could detect that the lock owner no longer exists and recover the situation.
Each VM has full access to the whole chunk of IVSHMEM memory. This is questionable from a security point of view, as VMs might corrupt memory belonging to other, unrelated VMs. It might be possible to implement an IVSHMEM device that offers smaller chunks of memory, each writable by one VM only (the TCP sender) and readable by a second VM only (the TCP receiver). Alternatively, we might provide one IVSHMEM device for each VM sender - VM receiver pair. The second option would be easier to implement. It would still allow a misbehaving VM to corrupt memory, but only the memory of one other VM. Another drawback of the second approach is that we would either have to know in advance which VMs are supposed to communicate with other VMs on the same host (so that the corresponding IVSHMEM devices are attached at VM boot), or we would have to support PCI hotplug.
2.8 Full-Stack Monitoring
2.8.1 Architecture Evaluation
An initial evaluation of the architecture of Snap was presented in Report D6.1, First Report on
the Architecture and Implementation Evaluation. As the core architecture of snap has not
changed fundamentally, all of the observations in that report still stand, albeit Snap now has
98 plugins published rather than the 51 it had then. To restate and refresh some of the key
observations it made:
● Snap is designed from the ground up to be scalable, secure, easily manageable and to operate at cloud scale.
● Snap can not only capture rich metrics from the hypervisor level, but can also tag the data in arbitrary ways. This can be used to highlight VM migrations, experiment identifiers, etc.
● Snap can collect over 260 metrics from OSv, with web server traces also available.
● Over 64 collector plugins have now been published for Snap, each collecting potentially hundreds of metrics from some part of the hardware, OS, middleware or workload.
● gRPC now enables distributed telemetry gathering workflows across the infrastructure. Thus CPU-intensive steps in a workflow, for example, can be offloaded to appropriate, possibly dedicated hardware if required.
● Metrics gathering can be dynamically reconfigured without requiring an application or service restart.
● Tribes are used to allow arbitrary groupings or classifications of nodes to be addressed and managed with a single command.
From an architectural point of view, a number of observations were made on the initial releases of Snap, and these encouraged some modifications which were published in Snap version 2.0 in September 2017.
For the first two years of Snap framework development, only one type of collector was supported. It collected metrics at specific intervals defined in the task manifest; the collection interval determined how frequently data was collected and was configured per data source in the task definition. While this was fine for long-term collection of metrics, it was not efficient for capturing one-off events. Additionally, if a use case required a metric event to be dealt with as soon as possible, the defined collection interval would limit how soon the metric could be handled.
To address these needs, support for a new type of collector, the streaming collector, was introduced. Streaming collector plugins use gRPC streams to allow the plugin to send data immediately, or after a certain period of time, instead of on an interval governed by Snap. Streaming in Snap enables:
● improved performance through event-based data flows
● runtime configuration controlling event throughput
● buffer configurations which dispatch events after a given duration and/or when a given event count has been reached
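The buffer configuration described above can be sketched as follows (names are illustrative and not part of the snap-plugin-lib API): events are queued and flushed either when the buffer reaches a configured count or when a configured duration has elapsed since the first buffered event.

```python
import time

class EventBuffer:
    """Dispatch buffered events when either max_count is reached or
    max_age seconds have passed since the first buffered event."""

    def __init__(self, dispatch, max_count=10, max_age=1.0):
        self.dispatch = dispatch      # callback receiving a list of events
        self.max_count = max_count
        self.max_age = max_age
        self.events = []
        self.first_at = None

    def add(self, event, now=None):
        now = time.monotonic() if now is None else now
        if not self.events:
            self.first_at = now
        self.events.append(event)
        if len(self.events) >= self.max_count or now - self.first_at >= self.max_age:
            self.flush()

    def flush(self):
        if self.events:
            self.dispatch(self.events)
            self.events = []
            self.first_at = None
```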
Another architectural gap in the initial Snap implementation was diagnostic support. There
was no convenient way for an administrator to verify if Snap was installed and configured
correctly, without having to turn on and start gathering all the metrics of interest from all the
nodes of interest.
To address this need, the concept of plugin diagnostics was introduced. Plugin diagnostics
provides a simple way to verify that a plugin is capable of running without requiring the
plugin to be loaded and a task started. This feature works for collector plugins written using
one of the new snap-plugin-libs.
Plugin diagnostics delivers the following information:
● runtime details (plugin name, version, etc.)
● configuration details
● the list of metrics exposed by the plugin
● the metrics, with their values, that can be collected right now
● the times required to retrieve this information
Whilst the original Snap release included powerful management functionality, installation and deployment typically required bespoke scripting solutions to be developed by the administrator. A new application, snap-deploy, was developed to address this challenge and remove this overhead. It automates the provisioning and configuration management of the Snap framework and its plugins.
The snap-deploy implementation includes the following functionality:
● Deploy - configure and deploy the Snap framework with the configured plugins
● Redeploy - reconfigure and re-deploy the Snap framework with the configured plugins
● Download - download the Snap framework binaries and the configured plugins
● Kill - kill the Snap service process along with its plugins
The final architectural change of significant note since D6.1 was the reimplementation of the various Snap RESTful APIs. To allow clients for these APIs to be developed using the developer's languages and tools of choice, the decision was made to rewrite the Snap APIs using Swagger 2.0, the OpenAPI specification.
Snap exposes a set of RESTful APIs to perform various actions. All of Snap's API requests return JSON-formatted responses, including errors. Any non-2xx HTTP status code may contain an error message. All API URLs listed in this documentation have the endpoint http://localhost:8181. Snap has now defined its API namespace categories as follows:
● Plugin API: to load, unload and retrieve plugin information
● Metric API: to retrieve all or specific information about the available metrics
● Task API: to create, start, stop, remove, enable, retrieve and watch scheduled tasks
● Tribe API: to manage tribe agreements and allow tribe members to join or leave tribe contracts.
These architectural enhancements have helped Snap mature into an even more flexible and
manageable tool for administrators of modern cloud and HPC deployments.
2.8.2 Performance Evaluation
To evaluate the performance of Snap and compare it to other telemetry systems, automatic
deployment scripts were written to gather equivalent sets of metrics from a local node using
Snap and the following systems, as described previously in Report D6.1:
● Collectd version 5.5.0[39]
● Ganglia 3.6.0-1ubuntu2, distributed with Ubuntu 14.04[40]
● Telegraf 1.0.0[41]
The experimental setup is described in D6.1, but a summary of the results is reproduced
below for completeness.
Table 7: Comparing telemetry frameworks collecting 10 probes

Telemetry Platform | Idle CPU Util | Idle Memory | Runtime CPU Util | Runtime Memory | Disk Util
Collectd           | 0.3 %         | 50 MB       | 1-4 %            | 50 MB          | 0 %
Telegraf           | 0.0 %         | 10 MB       | 0-0.25 %         | 30 MB          | 0 %
Ganglia            | 0.1 %         | 14 MB       | 0-0.25 %         | 14 MB          | 0 %
Snap               | 0.1 %         | 140 MB      | 2-3 %            | 160 MB         | 0 %
Table 8: Comparing telemetry frameworks collecting 50 probes

Telemetry Platform | Idle CPU Util | Idle Memory | Runtime CPU Util | Runtime Memory | Disk Util
Collectd           | 0.3 %         | 50 MB       | 3-4 %            | 80 MB          | 1 %
Telegraf           | 0.0 %         | 10 MB       | 3-4 %            | 38 MB          | 0 %
Ganglia            | -             | 14 MB       | -                | -              | -
Snap               | 0.1 %         | 140 MB      | 2-3 %            | 150 MB         | 0 %
Table 9: Comparing telemetry frameworks collecting 100 probes

Telemetry Platform | Idle CPU Util | Idle Memory | Runtime CPU Util | Runtime Memory | Disk Util
Collectd           | 0.3 %         | 50 MB       | 3-5 %            | 110 MB         | 2-3 %
Telegraf           | 0.0 %         | 10 MB       | 3-30 %           | 120-240 MB     | 0 %
Ganglia            | -             | 14 MB       | -                | -              | -
Snap               | 0.1 %         | 140 MB      | 2-3 %            | 150 MB         | 0 %
Exploring this data, it can be seen that all telemetry platforms except Snap show a noticeable increase in demand for local memory and CPU resources as the number of probes being gathered increases. Snap's relatively static overhead is possible due to an optimised allocation of memory and the very efficient scheduler enabled by the Go language, compiler and runtime[42].
Additionally, to validate the implementation of Snap clustering on a large-scale deployment, an Ansible script was written to deploy Snap on 500 compute nodes that were configured to continuously run a Monte Carlo simulation under various conditions. The Snap daemon was configured using tribe agreements to:
● collect 8 metrics:
○ compute utilization / saturation
○ memory capacity utilization / saturation
○ network card utilization / saturation
○ storage utilization / saturation
● process the data using automatic anomaly detection via the Tukey method[43]
● send the processed data to an InfluxDB[44] database.
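The Tukey method used in the processing step flags samples falling outside the inter-quartile fences; a minimal sketch of the rule:

```python
def tukey_outliers(samples, k=1.5):
    """Return the values lying outside [Q1 - k*IQR, Q3 + k*IQR]."""
    data = sorted(samples)

    def quartile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (len(data) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(data) - 1)
        return data[lo] + (pos - lo) * (data[hi] - data[lo])

    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if x < lower or x > upper]
```

Applied per node to a sliding window of, say, CPU utilisation samples, a single saturated reading stands out against the quartiles of the recent history and can be published as an anomaly event.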
Figure 21: Snap task workflow for 500-node test
Figure 22: Snap grafana dashboard illustrating mean CPU utilisation across 500 nodes
Results showed that the clustering features built into the Snap framework were able to manage a 500-node Monte Carlo simulation with no issues. Telemetry was successfully captured from all nodes, processed locally, published to a scalable InfluxDB backend, and made available for review via the Grafana dashboard graphical user interface.
Regarding MIKELANGELO KPIs, there were no KPIs defined at the start of the project that referred explicitly to telemetry. However, Snap has been successfully integrated into both the Cloud and HPC MIKELANGELO testbeds, and has been enhanced to collect data from many components of the MIKELANGELO stacks, from hardware through hypervisors and guest operating systems to end-user workloads. Thus, by helping to measure the performance of the various elements of MIKELANGELO, Snap has at least contributed to the evaluation of the following KPIs:
● KPI1.1: relative efficiency of bursting a number of virtual machines
● KPI1.2: relative improvement of time to change the installed application on a number of virtual machines
● KPI2.1: relative efficiency of virtualized I/O between KVM and sKVM (developed in the project)
● KPI3.1: the relative improvement of efficiency of MIKELANGELO OSv over the traditional guest OS
● KPI3.2: the relative improvement of efficiency [size, speed of execution] between the baseline guest OS vs. the MIKELANGELO OSv
● KPI4.2: a relative improvement of efficiency [time] between deploying a packaged and non-packaged application
● KPI5.1: demonstration of a management infrastructure on diverse physical infrastructures
● KPI5.2: relative efficiency [time, CPU, disk overheads] of traditional HPC over Cloud HPC offered in MIKELANGELO
● KPI7.1: all use cases appropriately demonstrated
2.8.3 Implementation Evaluation
As reported in D6.1, Snap was open-sourced in December 2015, and the open-source community continues to be actively encouraged to contribute via the project facilities, which have been enhanced over time. Best-in-class development practices and tools such as GitHub, Travis CI[45] and Jenkins[46] have been adopted to maximise the quality and robustness of the code.
At the core of the continuous integration of Snap lies the ability to automate the tests that are developed in parallel with the code. Code is tested for quality, functionality, integration and performance. Every plugin repository contains its own test files and configurations for Travis CI, the continuous integration tool employed by Snap. Every change to the code in a repository automatically triggers testing on both the Docker Travis environment and the Intel Jenkins server. Additionally, for every Snap collector, and for the framework package itself, we now check and refine the implementation using the following Go static code-verification tools:
● Gofmt - a tool that automatically formats Go source files to make code easy to read, write and maintain
● Go vet - a tool that examines Go source code and reports suspicious constructs, such as Printf calls whose arguments do not align with the format string
● Gocyclo - a tool that calculates the cyclomatic complexity of functions in Go source code. The cyclomatic complexity of a function is calculated according to the following rule: 1 is the base complexity of a function, +1 for each 'if', 'for', 'case', '&&' or '||'
● Ineffassign - a tool that detects ineffectual assignments in Go code
● Licence - a tool which checks for the presence of a LICENCE file
● Misspell - a tool that finds commonly misspelled English words
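The Gocyclo rule quoted above can be reproduced with a trivial counter (an approximation for illustration; the real tool walks the Go syntax tree rather than matching text):

```python
import re

def cyclomatic_complexity(go_source):
    """Apply the stated rule: base complexity 1, plus 1 for every
    'if', 'for', 'case', '&&' and '||' in the function body."""
    keywords = len(re.findall(r"\b(?:if|for|case)\b", go_source))
    operators = len(re.findall(r"&&|\|\|", go_source))
    return 1 + keywords + operators

src = """
func classify(n int) string {
    if n > 0 && n%2 == 0 {
        return "positive even"
    }
    for i := 0; i < n; i++ {
    }
    return "other"
}
"""
```

For the sample function above the rule yields 4: the base of 1, plus one each for 'if', '&&' and 'for'.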
To perform static analysis of the code automatically, a cloud service named "go report", designed to classify and rank source code, is used. All repositories contributed by
MIKELANGELO to the open-source project Snap have been analysed by this independent tool and were rated A+, as illustrated in Figure 23.
Figure 23: Go report page for Snap kvm collector
In the last year we also enhanced the landing page of every plugin contributed by MIKELANGELO by adding the build status of the plugin, generated from the output of Travis CI. The following badge statuses are reported:
● "build passing" - indicates that the project's tests all pass as expected
● "broken" - indicates that the software does not work as advertised
● "unknown" - the tests have not yet been evaluated on Travis CI
At the time of writing, all plugins contributed by the MIKELANGELO project have the green "build passing" status.
2.9 Efficient Asynchronous Applications - Seastar
In M30’s D2.21, “MIKELANGELO Final Architecture”, we documented in detail the design and
architecture of Seastar. In D4.6 “Guest O/S Final Version”, delivered in parallel with this
deliverable, we also include a long tutorial on how to use Seastar. We will not repeat all of
this here. Rather, the goal of this section is to evaluate the success of Seastar in fulfilling its
goals. The first section summarizes the goals of Seastar, and provides a qualitative evaluation
of the differences between its architecture and that of other existing alternatives. The second
section provides quantitative benchmarks demonstrating Seastar’s performance. The third
section evaluates the process through which we implemented Seastar.
2.9.1 Architectural Evaluation
In MIKELANGELO’s first architecture document, D2.16, we noted that certain Linux APIs, such
as the socket APIs, are inherently inefficient on modern multi-core hardware. New
implementations of these existing APIs - as we developed in OSv - have the potential of
improving application performance somewhat, but will not eliminate these inherent
inefficiencies. Moreover, traditional Linux applications frequently use techniques - such as
many threads and locks - which are inefficient and unscalable on modern machines with
many cores and non-uniform memory architecture (NUMA). We suggested that a new API
should be introduced which solves these problems, and new applications be written to make
use of this new API.
Seastar is this new API, a C++14 library, which we introduced in MIKELANGELO for writing
highly efficient and complex server applications on modern multi-core machines:
Traditionally, the programming languages, libraries and frameworks used for writing server
applications have been divided into two distinct camps: those focusing on efficiency, and
those focusing on complexity. Some frameworks are extremely efficient and yet allow
building only simple applications (e.g., DPDK allows applications which process packets
individually; libevent allows building efficient event-driven servers but it is very difficult to
handle complex chains of events). Other frameworks allow building extremely complex
applications, at the cost of run-time efficiency (examples include Node.js in Javascript, Vertx
in Java, Twisted in Python, EventMachine in Ruby). Seastar is our attempt to get the best of
both worlds: To create a library which allows building highly complex server applications, and
yet achieve superior performance.
The inspiration and first use case of Seastar was Scylla[47], the open-source rewrite of Apache
Cassandra being written by ScyllaDB, one of the MIKELANGELO partners. Cassandra is a very
complex application, and yet, with Seastar we were able to re-implement it with as much as
10-fold throughput increase, as well as significantly lower and more consistent latencies (we
will see some examples in the next section).
Server applications - e.g., the classic HTTP (Web) or SMTP (email) servers, or a networked
database - are inherently concurrent: Multiple clients send requests in parallel, and usually a
request cannot be completely handled before it blocks for various reasons (e.g., full TCP
window on a slow connection, or disk I/O, or even the client holding on to an inactive
connection) - and the server needs to move on to handling other requests. A request may
even block multiple times as it needs multiple blocks from disk, more data from the network
or other servers, and so on.
The classic approach to handling concurrent connections, employed by classic network
servers such as Inetd, Apache Httpd and Sendmail, was to use a separate process or thread
per connection. At each moment, each thread handles exclusively a single connection. The
server code is free to use blocking system calls. Programming such a server is known as
synchronous programming; The code is written linearly, one line of code starts to run after the
previous line finishes, and it is fairly easy to write such code.
But synchronous, thread-per-connection server programming came with a hefty performance penalty. Starting threads and switching between them is slow. Threads need their data on multiple CPUs, which requires slow locks and cache-line bouncing. And threads also come with fairly large stacks which consume memory even when a connection is idle. Servers with synchronous designs had unsatisfactory performance, and scaled badly as the number of concurrent connections grew, and as the number of CPU cores grew. In 1999, Dan Kegel popularized "the C10K problem"[48], the need for a single server to efficiently handle 10,000 concurrent connections - most of them slow or even inactive. This could not be done with a synchronous server design.
The solution, which became popular in the following decade, was to abandon the cozy but
inefficient synchronous server design, and switch to a new type of server design - the
asynchronous, or event-driven, server. An event-driven server has just one thread per CPU.
This single thread runs a tight loop which, at each iteration, checks, using poll() or epoll(), for
new events on many open file descriptors. For example, an event can be a socket becoming
readable (new data has arrived from the remote end) or becoming writable (we can send
more data on this connection). The application handles this event by doing some non-blocking operations, modifying one or more of the file descriptors, and maintaining its knowledge of the state of this connection.
Seastar offers a complete asynchronous programming framework, which uses two concepts - futures and continuations - to uniformly represent, and handle, every type of asynchronous event, including network I/O, disk I/O, and complex combinations of other events. A future is an event or value which will become available at some time in the future, and a continuation is a piece of code (a C++ lambda) to run when some future becomes available.
This uniform representation of every asynchronous event is very important: If certain basic
events could not be represented as futures, we would need to fall back to slow mechanisms
for running them in the background - such as starting additional threads. This is, for example,
what happened in Apache Cassandra: It was also written as an asynchronous application
using futures to manage asynchrony, but disk access is done via mmap() which can block the
current thread, thereby requiring many threads per core and slowing the server down. In contrast, Seastar provides a futures-based API for storage as well, which does not require additional threads (it uses the AIO features of the operating system).
The uniform representation of every asynchronous event as a future also allows the
application writer to represent complex combinations of events as a future, which appears to
the programmer just like any other future. This is one of the things which makes writing
complex servers in Seastar possible - once you understand how to program simple servers in
Seastar, writing complex servers is not significantly harder.
Modern machines are very different from those of just 10 years ago. They have many cores
and deep memory hierarchies (from L1 caches to NUMA) which reward certain programming
practices and penalize others: Unscalable programming practices, such as taking locks or
writing data on one core and reading it on another, can devastate performance on many-core machines. Shared memory and lock-free synchronization primitives are available (i.e.,
atomic operations and memory-ordering fences) but are dramatically slower than operations
that involve only data in a single core's cache, and also prevent the application from scaling
to many cores. For this reason, Seastar uses the share-nothing programming model: The
available memory is divided between the cores, each core works on data in its own part of
memory, and communication between cores happens via explicit message passing (which
itself happens using the SMP's shared memory hardware, of course).
In its share-nothing design, Seastar differs from many other future-based asynchronous
programming frameworks. For example, the C++11 standard added partial future support,
but its futures are meant to work across threads and require atomic operations - and are
therefore slower and less scalable than Seastar’s implementation.
Seastar’s choice of programming language - C++14 - is also important. Most high-level
programming languages (e.g., Java, Python, JavaScript) make the implicit assumption that all memory can be shared between threads, and therefore need to use slow atomic operations to protect potentially-shared memory objects (for, e.g., reference counting or garbage collection). Seastar is free from this worry, and free to use the fastest possible memory operations; for example, Seastar’s reference counting is often just an operation in the core’s cache. Additionally, high-level languages give the programmer less control over the performance of critical code. For truly optimal performance, we need a programming language which gives the programmer full control with zero run-time overheads, while also offering sophisticated compile-time code generation and optimization, and C++14 does well on all of these counts.
As explained above, Seastar is based on running small pieces of code, continuations or tasks,
when certain futures become available. Continuations are objects which contain (using C++
lambda capturing feature) the data they need to work on, and therefore have very low
switching overhead - significantly lower than OS-level threads and even lower than the “user-space threads” or “green threads” of other frameworks, which have separate stacks.
Continuations are never preempted, which together with the share-nothing design (meaning
that only one core works on any particular piece of data) means that continuations do not
need to use locking mechanisms, which simplifies them and makes them faster. Seastar can
further batch instances of the same continuation code to run together improving the
processor’s instruction-cache utilization and improving performance (this technique is known
as SEDA[49] - Staged Event-Driven Architecture).
One of the important consequences of Seastar’s allowing of complex chains of asynchronous
events to be represented as a “future”, just like more basic events, was that it was fairly
straightforward for Seastar to provide its own high-performance user-space TCP/IP stack
built on top of the DPDK network drivers as an alternative to the OS’s TCP/IP stack. This user-space TCP/IP stack is also share-nothing (each connection belongs to one core) and provides
zero-copy in both directions: you can process data directly from the TCP stack's buffers, and
send the contents of your own data structures as part of a message without incurring a copy.
A number of other projects implemented a TCP/IP stack over DPDK, including f-stack, mTCP,
ANS, TLDK, and others. However, Seastar’s implementation is unique in its emphasis on share-nothing design (and therefore high scalability to many cores, especially when the network card supports multiple queues), and also in providing a holistic framework in which not just
asynchronous network events - but also disk events, timers, and other types of events, can be
handled uniformly.
2.9.2 Performance Evaluation
The MIKELANGELO Grant Agreement lists the following Key Performance Indicators which are
relevant to Seastar:
● KPI 3.1: The relative improvement of efficiency of MIKELANGELO guest OS over the traditional guest OS.
● KPI 3.2: The relative improvement of efficiency [size, speed of execution] between the Baseline Guest OS vs. the MIKELANGELO guest OS.
● KPI 7.1: All use cases properly demonstrated.
KPI 7.1: All use cases properly demonstrated:
For the “Cloud Bursting” use case we have used Scylla, an open-source reimplementation of
Cassandra using Seastar. As we shall see in D6.4 (the use-case evaluation deliverable), Scylla
dramatically improves this workload’s performance. Therefore, Seastar contributes to KPI 7.1
(all use cases properly demonstrated).
KPI 3.1: The relative improvement of efficiency of MIKELANGELO OSv over the
traditional guest OS; KPI 3.2: The relative improvement of efficiency [size, speed of
execution] between the Baseline Guest OS vs. the MIKELANGELO OSv:
Here we need to compare performance of Seastar-based applications to the performance of
the traditional implementation of the same application on top of the traditional Linux APIs,
on Linux or OSv.
We claim that Seastar-based applications can be much faster than the traditional Linux
applications, and scale to a large number of cores. We have run a large number of benchmarks
to demonstrate this.
First, we already showed in previous versions of this report that a Seastar-based memcached
re-implementation achieved 78% higher throughput than the standard one (compare this to
22% improvement by baseline OSv), and that a Seastar-based re-implementation of
Cassandra achieved 900% higher throughput than the standard one (compared to just 34%
improvement by baseline OSv). This demonstrates both KPI 3.1 (Seastar is faster than the
traditional Linux approach) and KPI 3.2 (Seastar is faster than the OSv of 3 years ago).
A recent independent benchmark run by Samsung[50] showed even more dramatic speedups
by the Seastar-based Scylla. In this benchmark Scylla outperformed the traditional Cassandra
on a cluster of 24-core machines by a factor of 10x-37x depending on the workload (they
tried four different YCSB[51] workloads). These very high speedups are a consequence of
Scylla’s almost linear scaling to 24 cores, contrasted with traditional Cassandra’s much worse
scalability. This validates Seastar’s design goal of emphasizing many-core machines, which
are becoming more pervasive in the cloud.
Seastar aims to give programmers not only higher throughput, but also better control over
latency - i.e., to allow consistently low latency for important user requests while background operations (e.g., compaction operations in Scylla) are in progress. For example, Seastar’s IO
scheduler ensures that disk-using background operations do not significantly delay user
requests (which also need to read from disk). We demonstrate this in the following
benchmark:
Figure 24: YCSB benchmarking on ScyllaDB vs Cassandra.
In this benchmark, we ran the popular YCSB “update” benchmark many times, each time with the load generator configured to generate a different rate of requests per second. The result of
each of these many runs is a dot, plotting the measured throughput and tail latency (99th
percentile).
The orange dots are measurements on a cluster of 3 Cassandra nodes. We can see that as the
load increases, the throughput can increase up to about 130,000 requests per second (the
right-most orange dot) but that is it - a 3-node Cassandra cannot go beyond this throughput
even if the loader tries to send more. Then, in the gray, yellow, and dark-blue lines, we see
the performance of a cluster with 9, 15 and finally 30 Cassandra nodes. We see that 30
Cassandra nodes can handle a throughput of roughly 600,000 requests per second. But as
the light-blue (“3 Scylla”) line shows, just 3 nodes of Scylla (one tenth of the Cassandra
nodes!) can handle almost as many requests per second (about 550,000), and do so with
significantly lower 99th percentile latency (this is the Y axis in the graph).
Memcached and Cassandra were not the only applications that benefited from a Seastar-based rewrite. One of the users in the Seastar community outside MIKELANGELO wrote a Redis clone in Seastar known as Pedis[52] and reported it scaled well to 8 cores, while the
original Redis did not. Another user implemented SMF Write-Ahead-Log[53] with Seastar,
and reported 4-times higher throughput and 34x-48x (!) lower latency compared to their
baseline of Apache Kafka, as can be seen in the following charts from their site:
Figure 25: Latency of SMF Write-Ahead-Log with Seastar vs Apache Kafka
Figure 26: QPS of SMF Write-Ahead-Log with Seastar vs Apache Kafka
We also ran a benchmark to visualize Seastar’s scalability to many cores: We wrote a simple
Seastar-based HTTP server, and a Seastar-based HTTP load generator (we call the latter Seawreck), and measured a throughput of almost 7 million requests/second on a single 28-core
node. The benchmark was run using two identical Intel® Server System R2000WT servers - one for the server and one for the load generator. Each server was configured as follows:
● 2x Intel® Xeon® Processor E5-2695 v3: 2.3GHz base, 35M cache, 14 cores each (28 cores per host, 56 hardware threads with HyperThreading)
● 8x 8GB DDR4 Micron memory
● 12x 300GB Intel S3500 SSD (in RAID5 configuration, with 3TB of storage for the OS)
● 2x 400GB Intel NVMe P3700 SSD (not mounted for this benchmark)
● 2x Intel Ethernet CNA XL710-QDA1 (two cards per server, one card per CPU: card1: CPU1, card2: CPU2)
● OS info: Fedora Server 21, with updates as of February 19, 2015
● Kernel: Linux dpdk1 3.17.8-300.fc21.x86_64
● Default BIOS settings (TurboBoost enabled, HyperThreading enabled)
The results were as follows:
Figure 27: Seastar httpd performance evaluation
This figure reaffirms the scalability of the Seastar programming model. This is mainly due to
the sharded design of Seastar applications. Each request is assigned to a specific shard and all
its processing and the response generation remains within that shard, and non-scalable SMP
features such as atomic operations and moving data ownership between cores are used
sparingly (locks are not used at all).
2.9.3 Implementation Evaluation
Seastar is an open source project, and all improvements made to it were contributed directly
into the main Seastar repository. The benefit of this approach is that all Seastar code that we
wrote was tested by external Seastar users and contributors. We currently have 39
contributors to Seastar - almost all of them outside the MIKELANGELO project. Seastar is the
foundation of the Scylla database project[54], which has 38 developers that are constantly
testing and stressing Seastar. Also, as of today, there are 239 people subscribed to the
Seastar developers’ mailing list[55], and 302 people closely “watching” Seastar’s development
via github[56]. Many of these are working on their own Seastar-based projects, some we are
aware of and some we are not. All these extra eyes - and users - help improve the quality of
Seastar. As already mentioned above, Seastar powers the main product of ScyllaDB (called Scylla), so it gets heavily tested and stressed, and there are also several other companies known to be using and testing Seastar.
Seastar is written in about 70k lines of modern C++14 code (several observers have noted that the Seastar and Scylla code bases are a showcase of what a modern C++14 program should look like). Unit test code amounts to approximately 14% of the entire codebase; the tests can be run manually by developers, and are also run automatically by Jenkins on every new code commit, and nightly.
Additionally, on top of Seastar and Scylla there are multiple test suites:
● C++14 unit tests, with one suite for Seastar and one for Scylla
● A fork of Cassandra dtest, which does functional testing of the database using Python
● A small artifact result test suite to check that the various distributions (dpkg/rpm) of the database are well built
● The scylla-cluster Python test suite, which tests live clusters running on top of AWS EC2
Jenkins is used as a continuous integration tool for all the ScyllaDB projects, running test suites automatically for every commit. The development process of Seastar and Scylla is
modelled after the Linux kernel development process by using a public Google group as a
mailing list.
2.10 Side Channel Attack Mitigation
The following section provides a detailed description of the evaluation performed for the
SCAM module. All components of SCAM are evaluated, and specifically we show and discuss
the performance of (1) the monitoring and profiling service, and (2) the mitigation by
noisification functionality. We present results both for peacetime activity and during an
attack, giving some indication of SCAM’s function and performance. The complete design
details and motivation are described in D3.3.
2.10.1 Architectural Evaluation
SCAM is designed to be a software module that runs on any single host, as part of the sKVM
developed within the MIKELANGELO stack. As such, SCAM does not require any networking
or significant storage capabilities. As opposed to most measures developed for addressing
the security hazard of cache side-channel attacks, and specifically those targeting the LLC
(Last Level Cache), SCAM does not require any changes to the underlying application being
protected, nor does it require any specific hardware capabilities or support. A detailed
discussion of these potential alternative measures was provided in D6.1.
The SCAM module is implemented entirely in user space, and does not require any changes
on the kernel level. This enables simple adaptations of SCAM to other Linux kernels, which
allows for greater impact. SCAM components interact with various timers and hardware
registers and counters. For the monitoring and profiling service, the overhead incurred is
minimal (less than 1%, as was described in D3.2). We note that the current monitoring
approach, as described in D3.3, focuses solely on the target, which implies an even smaller
effect on performance. For the mitigation by noisification approach, our solution is reactive,
and is designed to start only once a potential attack is identified by the monitoring module.
As such, when no signs of attack are at hand, SCAM’s noisification solution does not incur
any overhead (other than that required for a short, one-time, reconnaissance phase that runs
once per target VM to be secured). When activated upon the identification of a potential
attack, the noisification module uses one of the cores on the host (or more - as specified by
the provider in the SCAM service configuration) to mitigate the attack.
In what follows we provide the detailed account of our evaluation of the components of
SCAM, within the framework of the above architectural considerations.
2.10.2 Performance Evaluation
2.10.2.1 System Setup
We perform our evaluation on a system equipped with a 4-core Intel Core(TM) i5-4590 CPU
(part of the Haswell architecture) running at 3.3GHz, with 8GB RAM. The CPU has an LLC of
size 6144KB, and L2, L1d and L1i caches of sizes 256KB, 32KB and 32KB, respectively. The
host runs Ubuntu 14.04 with Linux kernel 4.4.0-47.68. The hypervisor running on the host is
KVM with qemu 2.0.0. On the host we have 2 VMs running: (1) a target VM running a slightly
altered GnuTLS server, version 3.4 (changed so as to have constant time per bit, commonly
viewed as the best security practice to avoid simpler timing attacks), and (2) an attacker VM
capable of performing a prime-and-probe attack (the details of which are provided in D3.4,
sKVM security concepts – first version[57]). SCAM runs as a user space process on the host
OS, performing all of its functionalities (monitoring, profiling, and mitigation by noisification).
We note that the GnuTLS server takes ~70ms to perform a single RSA decryption, out of a
total ~100ms for performing the complete TLS authentication.
2.10.2.2 Execution Setup
SCAM is run first and performs a baseline cache activity scan using MASTIK[58]. This provides
a reference model for identifying cache activity by the various processes and VMs that will be
started on the host. This process takes between 3 and 10 minutes, depending on the number of
cores available on the host, but is performed only once upon running SCAM initially on the
host. The target VM is run next, along with a client repeatedly connecting to the GnuTLS
server in order to invoke the target’s RSA decryption procedure. This activity is observed by
the SCAM module, in order to identify the cache sets that are active when the target
performs decryption. The sets which exhibit significant difference compared to the baseline
activity observed initially are identified as the collection of cache sets to be secured by SCAM.
On our system, this amounts to ~300 LLC sets. This procedure of identifying the target’s
active sets takes 3 minutes. We recall that this is performed only once for every target that
requires the service on the host, upon setup. The PID of the target VM is then provided to the
SCAM monitoring service, and monitoring begins (for details, see D3.3). Once both the target
VM and SCAM are running, the attacker is initiated. This performs the prime-and-probe
attack in an attempt to extract the private RSA key used by the server running within the
target VM. SCAM’s monitoring service constantly monitors the target’s VM process on the
host. Upon identifying a potential attack, a message is sent via an internal TCP interface to
the noisification process pending within SCAM, and noisification begins (for details, see D3.3,
The final Super KVM - Fast virtual I/O hypervisor).
2.10.2.3 Monitoring and profiling evaluation
The following figure provides an example of an interval during which the target VM is
monitored by SCAM. The figure shows the results of the monitoring service for sampling the
access and miss High Performance Counters (HPC) of the target PID. A sample of the two
counters is taken every s=100us. The ratio between the two counters is averaged over a
window of width W=50 samples, corresponding to 5ms. If that average exceeds a threshold T
set by default to T=0.8 then a counter C is incremented. If the average is less than T then C is
set to zero. If C is larger than a threshold R set by default to 20 then SCAM decides that an
attack is taking place. To summarize, if there are relatively many LLC misses compared to LLC accesses over a long enough period, then SCAM monitoring declares that an attack is in progress.
Figure 28: Trace of monitoring activity with inter-sample time of HPCs set to s=100us. The figure shows, from top
to bottom: (1) access count per sample, (2) miss count per sample, (3) miss/access ratio per sample, and (4)
moving average with window width of W=50 samples.
Figure 29 provides an example of similar settings, where the inter-sample time is set to
s=1000us. Maintaining the same set of parameters as the one used for the evaluation
producing figure 28 indeed shows a 10-fold decrease in sampling intensity, but also implies a
minimum delay of 70ms before the monitoring service identifies an attack, which is
unacceptable, since this is on the order of the time it takes the GnuTLS server to complete an RSA
decryption. Figure 29 therefore shows the results of using a smaller window of width W=10,
which results in a minimum delay of 30ms which is sufficiently below the time it takes to
complete an RSA decryption by the GnuTLS server.
Figure 29: Trace of monitoring activity with inter-sample time of HPCs set to s=1000us. The figure shows, from top
to bottom: (1) access count per sample, (2) miss count per sample, (3) miss/access ratio per sample, and (4)
moving average with window width of W=10 samples.
In our evaluation we ran SCAM monitoring and profiling on sequences in which the attack was turned on and off numerous times, where each time the attack was turned on for 2 seconds (resulting in ~20 attempts to extract key samples) and then paused, repeatedly. While the SCAM monitoring and profiling service was operating, we were able to detect all attacks with perfect precision.
2.10.2.4 Noisification evaluation
A full description of the architecture and functionality of the noisification sub-module of
SCAM appears in deliverable D3.3. That deliverable also includes a more complete
description of definitions and terms used in SCAM.
Recall that SCAM knows (due to its reconnaissance phase during setup) the collection of Last
Level Cache (LLC) sets that are active while the target performs its sensitive operation (in our
case RSA decryption), referred to as the secured sets. In our setup this collection consists of
~300 LLC sets. Noisification is performed over all these sets, uniformly, in round-robin
fashion, by some of the cores available on the host. In our evaluation we studied the
performance of noisification when using either a single core, or two cores. We tested varying
values in the range 0,1,...,12 for the parameters ℓ1 and ℓ2, the number of lines each of the two
cores accesses in each of the secured sets.
To evaluate the noisification functionality of SCAM, we developed and implemented a
collection of testing and validation tools that allow us to study the effect noisification has on
the attacker. The attacker operates by collecting data on cache hits and misses from specific
cache set S*. It should be noted that SCAM has no knowledge of which set S* is; we recall
that the attacker focuses on identifying (and producing) misses which are due to the target
performing an operation that occurs twice for 1-bits of the private key, and once for 0-bits of
the private key. As described in D3.3, our noisification module is based on the concept of
introducing noise, i.e., forcing the attacker to have cache misses during periods of target inactivity, misleading the attacker into believing the current bit is a 1-bit when in fact it is a 0-bit.
Evaluation setup. The evaluation checks the accuracy of attacker measurements. In order to
do so the evaluation must know precisely when the target is active and compare that to the
measured hits and misses that the attacker records. To ensure precision and synchronization, instead of using the real target we used a “dummy” target that alternates between periods of
activity (accessing one line in each of 300 cache sets) and periods of inactivity. Each period is
about 100,000 clock cycles, enabling the attacker, who needs 6600 clock cycles for each
round of prime and probe, to fully probe the cache set 15 times. Since the clocks of the
attacker and the target are synchronized it is possible to exactly map attacker measurements
to periods of target activity or target inactivity.
In order to evaluate the effectiveness of the noisification we compare two probability
distributions, the number of cache misses when the target is active and the number of misses
when the target is inactive. We estimate the two distributions in the standard way by running
the experiment many times (in our case several hundreds of thousands) and using the
resulting samples as an approximation of the distribution. The closer these two distributions
are to each other the harder it is for an attacker to determine whether the target is active. We
measure the standard statistical distance between two distributions D and D’ over n values
x1,...,xn, i.e.,
½ ∑i|Pr[D=xi]-Pr[D’=xi]|.
Figure 30 shows the distributions of the number of cache-misses (as observed by an attacker
on a targeted set), between these two scenarios, without noisification. Measurements taken
when the target is inactive, on the left in Figure 30, are concentrated around the value 0, i.e.,
in this case the attacker almost never has even a single cache miss. In contrast, the
distribution of cache misses corresponding to target activity, on the right in Figure 30, has
almost all of its mass supported in values strictly greater than 0, i.e., target activity almost
always causes the attacker to encounter at least one LLC miss.
We have conducted 12 such tests, and the average total statistical distance between these
two distributions, with no noisification active, in our tests, was 0.824.
Figure 30: Distributions of LLC misses without noisification when the target is inactive (left) and active (right).
Figure 31 shows the corresponding distributions for the same scenarios, while noisification is
active. Specifically, these distributions are typical for the scenario of noisification using 2
cores, with the first core accessing 6 lines in each set, and the second core accessing 5 lines in
each set, for a total of k=11 lines accessed by the noisification in each set. It should be noted
that these are indeed distinct distributions, although the differences are hardly discernible.
The average total variation distance between these two distributions in these settings (i.e.,
while noisification is operating), over 12 tests, was 0.034.
Figure 31: Distributions of LLC misses with noisification of six lines by the first core and five lines by the second
core when the target is inactive (left) and active (right).
In order to evaluate the performance of the noisification module in various configurations, we
consider two main setups; one where we use a single core for performing the noisification,
and the other where we use two cores. In each of these setups, we tested various
combinations of the number of cache lines accessed by each core running noisification.
Using one or two cores for this purpose clearly implies a distinct performance toll for each of
these setups. The following figures and discussion shed light on the tradeoff between
performance and security.
Figure 32 shows the results for both setups, for various values of k. The figure shows 3 plots:
● 1 core avg: depicts, for every value of k, the average total statistical distance between the distributions of cache misses during periods of target activity and of target inactivity, over 12 tests, using a single core for noisification. Note that the average for k=0 is essentially the distance without noisification.
● 2 core min-avg: for every value of k lines of noise there are k+1 combinations of ℓ1 and ℓ2 such that the first core accesses ℓ1 lines in each cache set, the second core accesses ℓ2 lines in each cache set, and ℓ1+ℓ2=k. The value of this plot was computed, for every k, by calculating the average statistical distance between the two distributions for every ℓ1, ℓ2 such that ℓ1+ℓ2=k, and taking the minimum value. This value indicates which partition of k into two summands is the best on average. As can be seen from the figure, for most values of k the minimum average statistical distance was actually obtained by the setup where just one of the cores accessed all k lines (i.e., there was no need for an additional core).
● 2 core min-min: depicts, for every value of k, the best average total statistical distance between the distributions, in these 12 tests, using two cores which accessed k lines together. This serves as an empirical lower bound on the obtainable total variation distance using two cores.
Our evaluation shows that using two cores may provide some benefit in certain scenarios, but
overall a single core performs reasonably well compared to two cores. In particular, our
results show that the performance toll of using an additional core does not significantly
improve the security of the system when employing noisification.
Figure 32: A comparison of the statistical distance between the distributions for different combinations of
accessed lines per core participating in noisification (lower is better).
We note that all the tests were run assuming a list of 300 protected cache sets. As the
number of secured sets grows, the Noisification Module may be forced to reduce the value of
k, if it is to ensure it secures all potentially targeted sets. This, in turn, implies a larger attack
surface, and therefore might require more resources to secure.
2.10.3 Implementation Evaluation
Our implementation of SCAM and its various components is available at the MIKELANGELO
git repository[59]. In particular, each of the components is provided as a separate package,
with an integration envelope that ties the monitoring service and the noisification mitigator.
We note that each of the components may be used independently of the others; e.g., one can
use the monitoring capabilities of SCAM and combine them with other profiling
mechanisms, or one can use different monitoring/profiling approaches and apply noisification
reactively when required. We performed extensive testing of the code on several other
platforms, and were able to reproduce our results. We note that care should be taken when
configuring the parameters specified in the README files associated with SCAM’s
implementation available in the repository.
3 Full Stack Evaluation
3.1 Introduction
A separate evaluation has been presented in the previous chapter for each of the
MIKELANGELO components that have been implemented to date. The project has also
dedicated significant resources to creating complete software stacks for both Cloud and
HPC deployments. An evaluation of the architecture and implementation of these full stacks
is presented in this section.
3.2 Full Stack for Cloud
The full stack cloud evaluation considers the MIKELANGELO stack from four viewpoints. First,
the evaluation reviews the KPIs that are relevant to the cloud integration. Second, the
evaluation reviews the components that have been integrated in the cloud testbed. Third,
functional tests performed using Scotty are discussed. Fourth, the use cases performed on
the cloud testbed are presented. The different KPIs and the aspects that they reflect are
shown in Figure 33. The cloud integration evaluates mostly KPIs 1.2, 2.1, 3.1, 3.2, 3.3, 4.1, 5.1,
and 6.1. The following paragraphs elaborate on those KPIs.
3.2.1 KPIs relevant to the cloud integration
Figure 33: KPIs associated with the Full Stack for Cloud
3.2.1.1 KPI1.2
KPI1.2, relative improvement of time to change the installed application on a number of
virtual machines, refers to application management in the cloud. In this KPI, we assume that
the application in a set of virtual machines needs to be updated. The KPI has been tackled by
LEET, which offers application packaging with OSv and application management. LEET is
integrated with the cloud testbed and has been used there to deploy applications. The tests
were successful. Further evaluation of LEET can be
found in Section 2.6.
3.2.1.2 KPI2.1
KPI2.1, relative efficiency of virtualized I/O between KVM and sKVM (developed in the
project), refers to the hoped-for I/O performance improvements in sKVM. These
performance improvements were planned to originate from vRDMA and IOcm. However,
vRDMA has not completed the integration necessary to allow its evaluation in the cloud
testbed. The vRDMA drivers work solely with Infiniband and RoCE-capable NICs. These,
however, are rather atypical in the cloud and were not available in the cloud testbed. Thus,
there is no performance advantage through vRDMA. Furthermore, IOcm should have
provided performance improvements. Isolated tests have shown IOcm to improve IO
performance in certain scenarios with a large number of large packets. However, workloads
that one often encounters in the cloud do not gain from IOcm. Thus, ZeCoRx has been
proposed as another, more promising, approach. However, ZeCoRx was not finished by the
end of the project. Thus, no performance evaluation in the cloud could be performed. In
summary, no speedup for IO operations has been observed in the integrated cloud scenario.
3.2.1.3 KPI3.1
KPI3.1, the relative improvement of efficiency of MIKELANGELO OSv over the traditional
guest OS, refers to general improvements of efficiency between OSv and a traditional guest
OS. The project has agreed to use Ubuntu as reference OS for VMs. Various tests have been
performed to assess the efficiency of OSv. In the first year, HDFS was tested and revealed
significant slowdowns when using OSv. In the second year, Apache Storm and Spark were
tested with OSv, initially with negative results: OSv showed significantly worse
performance than Ubuntu. The performance bottlenecks have been identified and partially
resolved. At the project’s end OSv offers runtime performance for applications, such as Spark,
that is mostly on par with Ubuntu.
3.2.1.4 KPI3.2
KPI3.2, the relative improvement of efficiency, size, speed of execution, between the Baseline
Guest OS vs. the MIKELANGELO OSv, refers to a comparison in efficiency between the initial
version of OSv and the version after three years of development in MIKELANGELO. Several
significant performance gains have been achieved, as described in the paragraph on KPI3.1.
However, these improvements have merely reached a performance level similar to that of
Ubuntu. The size of the OSv image has not changed significantly. However, the size of a VM
image does not have a strong impact in the cloud. Cloud systems, like our cloud testbed,
use central storage to store VM images. These systems by default have deduplication and
replication features. VM images are usually cloned using copy-on-write. Thus, the footprint of
the OS’s size does not matter for practical considerations. Furthermore, even for scenarios
where the VM images are stored on local drives in the compute nodes, the VM image only
has minor relevance. In a typical scenario, the deployed applications and potentially datasets
in the VMs by far outstrip the size of the images, ranging from several gigabytes to several
terabytes.
It should be noted, however, that the size of VM images might play an important role when
dealing with microservice-like architectures or Function-as-a-Service. In these cases, it is
important to keep image sizes at the bare minimum allowing them to be moved across the
cloud efficiently. Smaller images also mean faster update times, e.g. changing a function is a
simple update of a small VM image. Functions can therefore live inside small, immutable
VMs, easily replaced when an update is requested.
3.2.1.5 KPI3.3
KPI3.3, the relative improvement of compatibility between baseline and MIKELANGELO
versions of OSv, refers to improved compatibility of OSv. Compatibility in this case refers
to the range of applications that run in OSv without major issues. The compatibility of
OSv has been improved significantly. The integration of cloudinit is beneficial for cloud
deployments and allows simpler packaging and seamless management of OSv. Furthermore,
OSv’s compatibility has been improved significantly regarding its runtimes. Apache Storm
and Spark have been made compatible, which allows a wider use of OSv in the cloud.
Furthermore, the Go and Python runtimes have been integrated with OSv, which again
increases OSv’s value for the cloud. Further details on the improved compatibility of OSv can
be seen in Section 2.5.
3.2.1.6 KPI4.1
KPI4.1, a system for appropriately packaging of applications in MIKELANGELO OSv is
provided, refers to improved packaging and management of applications in OSv. This work
has first been tackled by extending Capstan. In the second year, MPM has been developed
for application packaging. Finally, LEET has been developed for application packaging and
management for unikernel OSs. This work has made it significantly easier to deploy an
application in a unikernel in the cloud, especially on a large scale. Further details on the
mechanisms for application packaging and management can be found in Section 2.6.
3.2.1.7 KPI5.1
KPI5.1, demonstration of a management infrastructure on diverse physical infrastructures,
refers to the use of MIKELANGELO management systems that manage different physical
infrastructures. In the project, two testbed infrastructures exist: the cloud testbed and the
HPC testbed. MIKELANGELO has shown that it can deal with both to run experiments by
deploying Scotty. Scotty uses the cloud testbed and the HPC
testbed to run jobs. Scotty is used in this way by the data analytics use case, the cancellous
bones use case, and the OpenFOAM use case.
3.2.1.8 KPI6.1
KPI6.1, Enumerated security concerns solved on hypervisor level, refers to the identification
and mitigation of security issues in the hypervisor. This work has been carried out in the form
of SCAM. First, cache-based side-channel attacks have been identified as a major problem
that VMs face. Then, an attack has been developed as proof of concept. The attack shows
how an attacker can obtain a secret, a private key in this case, from a different VM. Then,
detection and mitigation methods have been developed and packaged in a Side Channel
Attack Mitigation (SCAM) module. SCAM is a user space process which can be simply
activated (or terminated) by the hypervisor. SCAM runs a light-weight monitoring system that
detects when a cache-based side-channel attack is in progress. SCAM also includes a
mitigation sub-module which works by “noisification”, i.e. adding noise to the cache,
reducing the effectiveness of the attack. Both monitoring and mitigation work best when
assigned to protect a specific VM. In this case SCAM has a mode of execution with only the
protected VM active which is designed to collect baseline data on the VM. This one-time
setup requires a few minutes. The attack shows a serious vulnerability and also a way to
detect and mitigate such an attack. Thus, for practical cloud deployment this work is very
important.
3.2.2 Components Integrated
The full stack evaluation for cloud computing is based on the integration of IOcm, SCAM,
Snap, OSv, LEET, Scotty, actuator.py, and MCM. Mainly missing in the cloud evaluation are
vTorque, vRDMA, Seastar, and ScyllaDB. The reason for not integrating vTorque is that
vTorque only makes sense in an HPC setting, with an installation of Torque. We have not
integrated vRDMA in the cloud testbed, since vRDMA is only applicable in HPC systems with
Infiniband. Seastar and ScyllaDB have been tested individually on AWS.
Figure 34 shows the components used in the full stack cloud evaluation. The testbed runs
multiple physical hosts, which host the cloud middleware. The cloud middleware is
OpenStack.
All but one physical host are used as Nova hosts. These hosts run virtual machines, and no
other services. One host is used as a controller, which runs a set of middleware services for
OpenStack. The MIKELANGELO components are distributed across this stack. The Nova hosts
run IOcm, SCAM, snap, and an actuator slave. IOcm and SCAM need to interact with the host
system directly. Snap is used to collect time series data from the host and the VMs via the
libvirt plugin. The actuator slave on the Nova host allows the host to be controlled directly
via its plugins. The actuator slave gets its commands from the actuator master. The control
host runs the actuator master, Scotty, MCM, and LEET. The actuator master controls the
actuator slaves in the other hosts and in the VMs. Scotty manages experiment execution in the cloud
testbed. MCM interacts with Nova to migrate VMs, and with the actuator master to control the
execution environment. LEET uses OpenStack Nova and Heat to deploy applications in VMs
with OSv installed.
The virtual machines run OSv and Ubuntu as guest OS. The Ubuntu systems additionally run
snap and an actuator slave. Snap is used to collect metrics directly from within the VMs. This
is especially useful to collect application-level data. The actuator slaves are there to control
configuration inside VMs, which can, again, be application-level parameters. Further details
on the integration of MIKELANGELO components with the cloud testbed can be found in
D5.3.
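The division of labour between the actuator master and its slaves can be illustrated with a minimal sketch. This is not the actual actuator.py code: the message format and the plugin name are hypothetical, and the real components exchange such messages over AMQP rather than a local queue.

```python
import json
import queue

class ActuatorSlave:
    """Minimal stand-in for an actuator slave: commands arrive as JSON
    messages and are dispatched to registered plugins (hypothetical
    names; the real slave receives them via AMQP)."""

    def __init__(self):
        self.plugins = {}

    def register(self, name, handler):
        self.plugins[name] = handler

    def handle(self, message):
        cmd = json.loads(message)
        return self.plugins[cmd["plugin"]](**cmd.get("args", {}))

# The master side reduces to serialising a command for a slave:
def make_command(plugin, **args):
    return json.dumps({"plugin": plugin, "args": args})

slave = ActuatorSlave()
# Illustrative plugin: pretend to set the number of IOcm I/O cores.
slave.register("iocm", lambda cores: f"iocm cores set to {cores}")

bus = queue.Queue()               # stands in for the AMQP broker
bus.put(make_command("iocm", cores=2))
print(slave.handle(bus.get()))    # -> iocm cores set to 2
```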
Figure 34: Components integrated in the Full-Stack Cloud Testbed
Most components were straightforward to integrate. IOcm poses the largest barrier to
adoption as it requires an out-of-date kernel. This kernel in turn pins the whole host OS and
the cloud middleware to old versions. Furthermore, a challenge with IOcm lies in the fact that
it does not deal gracefully with VM migrations. Once VM migrations have been performed,
IOcm would not turn on anymore. The development of ZeCoRx is expected to resolve these
issues.
SCAM can be installed relatively easily. The latest tutorials describe the build process and
how to run SCAM. The monitoring and noisification can be run without large effort. The
side-channel attack with SCAM, however, requires tweaking of parameters and is not trivial to
reproduce.
Snap has been easy to deploy. In particular, the new features developed as part of
MIKELANGELO allow for a simpler installation of plugins, which in turn simplifies deployment
significantly. Furthermore, the plugins have been extended, such that now there are plugins
for all typical needs in a cloud scenario.
OSv was easy to integrate with the cloud once cloudinit could be used. The downside of OSv
is a somewhat harder application integration and the sporadically poor performance that
appears in some applications after longer runtimes or when scaling to many nodes.
LEET is easy to deploy and only requires access to the OpenStack APIs. LEET significantly
reduces the barrier to using OSv in the cloud, because it addresses OSv’s largest obstacle to
adoption: application deployment and management.
Scotty is easy to install, but requires a GitLab server in addition to OpenStack. For cloud
experimentation, Scotty reduces the time to deploy a workload from weeks to
minutes. This automation makes it possible to perform a large number of experiments
quickly, which in turn helps to resolve inefficiencies in research.
Actuator is easily integrated as a Python package in hosts and VMs. Actuator furthermore
requires an AMQP broker, which can be installed and configured with a few commands
on a typical Ubuntu machine. In an OpenStack installation, OpenStack’s AMQP server can
even be reused. Actuator allows the agile control of the infrastructure across hosts and VMs.
MCM is easily integrated as well. MCM is a Python package that automatically installs its
dependencies with pip. MCM allows resource management strategies to be executed, using
Nova and actuator.py for control, and snap and InfluxDB for monitoring. Thus, the combination of
Scotty, actuator.py, and MCM has the potential to boost the development of new research in
infrastructures significantly.
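The shape of such a resource management strategy can be sketched as below. All names here are hypothetical illustrations: the real MCM pulls live metrics from InfluxDB (fed by snap) and enacts decisions through Nova and actuator.py.

```python
class LoadBalanceStrategy:
    """Hypothetical MCM-style strategy: if a host's CPU load exceeds a
    threshold, pick one of its VMs and ask the infrastructure to
    migrate it to the least loaded host."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold

    def decide(self, cpu_load, placement):
        """cpu_load: host -> load in [0, 1], as monitoring would report
        it; placement: host -> list of VM names. Returns a list of
        (vm, target_host) migration decisions."""
        decisions = []
        target = min(cpu_load, key=cpu_load.get)
        for host, load in cpu_load.items():
            if load > self.threshold and host != target and placement[host]:
                decisions.append((placement[host][0], target))
        return decisions

strategy = LoadBalanceStrategy(threshold=0.8)
moves = strategy.decide(
    cpu_load={"nova1": 0.95, "nova2": 0.30},
    placement={"nova1": ["vm-a", "vm-b"], "nova2": ["vm-c"]},
)
print(moves)  # -> [('vm-a', 'nova2')]
```

In the real system, the decisions returned by a strategy would be passed to Nova (for migration) or to an actuator plugin (for in-place reconfiguration).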
3.2.3 Functional Tests
Functional tests for the cloud integration have been created via Scotty. In fact, Scotty’s
experiments are functional tests. The tests are helped greatly by the integration with
actuator.py. Actuator can be used to control features developed in MIKELANGELO, such as
IOcm, SCAM, and MCM.
Scotty experiments run workloads to generate load on the hosts and to measure the impact
of MIKELANGELO components. The available workloads, which can be used to probe for
certain features in the cloud stack, are CloudSuite Data Caching, CloudSuite Web Serving,
OpenFOAM, MSR, DH, iperf, fio, CPU stressor, and stress-ng. The first three workloads have
been developed for the data analytics use case and the OpenFOAM use case.
The Scotty experiments test all of the integrated components. IOcm, SCAM, snap, and
actuator.py are tested with each experiment, because they are always present in the cloud
stack. OSv and LEET are tested by the OpenFOAM use case. MCM can be tested by loading a
custom strategy in MCM.
3.2.4 Use Cases
The integrated cloud stack is used by two use cases: data analytics and OpenFOAM. The data
analytics use case runs Scotty experiments with four workloads: CloudSuite Data Caching,
CloudSuite Web Serving, MSR, and DH. The OpenFOAM use case runs an additional, custom
workload, which computes aerodynamic maps.
The use case deployments have not shown significant performance improvements in the
cloud. These are lacking due to the nature of IOcm and OSv. ZeCoRx is expected to provide
performance improvements, but was not available at the time of writing this report.
Finally, vRDMA cannot be integrated in the cloud without Infiniband.
However, the cloud integration has shown improved agility in the cloud. LEET in combination
with OSv allows for quick deployment of applications. Snap, MCM and actuator.py allow
fine-grained, live control of resources in the cloud. Snap provides live data to MCM, while
actuator.py provides fine-grained control to MCM. MCM in turn provides a convenient
framework for writing resource management strategies.
Further details with performance measurements are available in D6.4.
3.3 Full Stack for HPC
The evaluation of the full HPC stack starts with the setup of vTorque in an HPC environment.
For the installation there is a setup script; however, all steps for a manual installation are
outlined in vTorque’s documentation and can also be found in D5.3. The setup and
installation of each of the individual components are outlined in the relevant work-package 3,
4 and 5 deliverables[60].
As soon as the installation and configuration of vTorque is done, a first test job running in a
virtual machine can be submitted to ensure everything is working as desired.
In a next step, each individual component can be independently selected and tested in
isolation, then enabled in vTorque. Some of the components require additional steps before they can
be used. Snap, for example, requires an InfluxDB backend and a Grafana frontend for our
purposes, while IOcm requires a custom kernel, and the vRDMA setup on the nodes also
requires custom adaptations.
vTorque works fine on its own, and is capable of running workloads in virtual
guests. vTorque lays the foundations for a virtualised job management layer on top of an
industry standard Torque system. However, its greatest power comes with the cross-layer
improvements enabled by the MIKELANGELO software stack - from hypervisor level through
guest OS up into the application level. Without these enhancements there is not much added
value for HPC center owners and their users to run workloads in virtual machines, as the
barrier of high overhead costs for virtualization would remain unaddressed.
Figure 35: MIKELANGELO software stack for HPC
Figure 35 provides an overview of all components integrated with vTorque. Light purple
coloured boxes are cloud-only components. All green coloured boxes are components
developed within the project that are applicable to HPC environments and are integrated
into vTorque.
More details about all (optional) components for vTorque and their integration into an HPC
environment are given in the subsequent sections.
3.3.1 KPIs relevant to HPC
For the integration of all components of the MIKELANGELO stack relevant to HPC with
vTorque, a list of key performance indicators (KPIs) has been defined as evaluation criteria.
These are designed to assess whether the overall system is accessible to administrators and users via
HPC batch system environments, and provides a measurable benefit. These KPIs have been
defined in the GA and are listed in the table below.
Table 10: KPIs relevant to HPC Integrations
KPI Description of KPI according to GA
1.1 Relative efficiency of bursting a number of virtual machines
1.2 Relative improvement of time to change the installed application on a number of virtual machines
2.1 Relative efficiency of virtualized I/O between KVM and sKVM (developed in the project)
3.1 The relative improvement of efficiency of MIKELANGELO OSv over the traditional guest OS.
3.2 The relative improvement of efficiency [size, speed of execution] between the Baseline Guest OS vs. the MIKELANGELO OSv.
3.3 The relative improvement of compatibility between baseline and MIKELANGELO versions of OSv
4.1 A system for appropriately packaging of applications in MIKELANGELO OSv is provided
4.2 A relative improvement of efficiency [time] between deploying a packaged and non-packaged application.
5.1 Demonstration of a management infrastructure on diverse physical infrastructures.
5.2 Relative efficiency [time, CPU, disk overheads] of traditional HPC over Cloud HPC offered in MIKELANGELO
6.1 Enumerated security concerns solved on hypervisor level
7.1 All use cases appropriately demonstrated
7.2 Documentation of using MIKELANGELO in Use cases - Best Practices tutorial
7.3 Documentation of using MIKELANGELO in Use cases - Documented Benefits
3.3.2 KPI Evaluation
KPIs relevant to the full HPC stack are evaluated here from a more abstract point of view, as
most KPIs are directly related to a specific component developed in the project, and
documented earlier in this deliverable.
KPI 1.1 Relative efficiency of bursting a number of virtual machines
The time from VM instantiation until a standard Linux guest becomes available mostly
depends on whether updates and additional packages are pulled and applied during boot. A
standard Linux cloud image without further modification takes just seconds. With prepared
OSv images the boot time is reduced even further. With workloads moving towards
immutable instances, introducing LEET with OSv contributes to this KPI. Details can be found
in section 2.5.
KPI 1.2 Relative improvement of time to change the installed application on a number of
virtual machines
Updating applications within virtual machines does not make sense for HPC environments:
either an image is capable of running a specific workload, or it is not, and a dedicated image
will be built and used.
KPI 2.1 Relative efficiency of virtualized I/O between KVM and sKVM (developed in the
project)
In sections 2.2-2.5 the individual components are evaluated. A combination of vRDMA, IOcm,
UNCLOT and OSv cannot be evaluated at present, due to vRDMA and IOcm being based on
different kernels. Furthermore, the latest vRDMA implementation is not production ready, but
at tech-preview maturity only. The only working combination of I/O improvements to validate
is “IOcm, UNCLOT and OSv”, while “standard Linux guests and IOcm” is evaluated in section
2.2 already.
KPI 3.1 The relative improvement of efficiency of MIKELANGELO OSv over the traditional
guest OS.
Please refer to section 2.5.
KPI 3.2 The relative improvement of efficiency [size, speed of execution] between the
Baseline Guest OS vs. the MIKELANGELO OSv.
Please refer to section 2.5.
KPI 3.3 The relative improvement of compatibility between baseline and MIKELANGELO
versions of OSv
Please refer to section 2.5.
KPI 4.1 A system for appropriately packaging of applications in MIKELANGELO OSv is
provided
Please refer to section 2.6.
KPI 4.2 A relative improvement of efficiency [time] between deploying a packaged and non-packaged application.
Please refer to D6.4 “Final report on the Use Case Implementations”.
KPI 5.1 Demonstration of a management infrastructure on diverse physical infrastructures.
The management infrastructure for HPC is provided by vTorque (vsub and vmgr cli tools) in
combination with MIKELANGELO components: LEET (application packaging), Scotty
(automated experiment execution) and Snap-Telemetry (monitoring and measurements).
KPI 5.2 Relative efficiency [time, CPU, disk overheads] of traditional HPC over Cloud HPC
offered in MIKELANGELO
Please refer to D6.4 “Final report on the Use Case Implementations” where the Aerodynamics
use case is described in-depth. It is the only use case that has been deployed on both Cloud
and HPC infrastructures.
KPI 6.1 Enumerated security concerns solved on hypervisor level
In shared HPC environments where nodes are allocated exclusively to users, the major
concern is to secure the physical host. On the level of the hypervisor there are no further
concerns in HPC environments.
KPI 7.1 All use cases appropriately demonstrated
Since not all use cases are eligible for HPC, the evaluation on HPC is limited to the two use
cases: Aerodynamic Maps and Cancellous Bones.
KPI 7.2 Documentation of using MIKELANGELO in Use cases - Best Practices tutorial
Please refer to D6.4 “Final report on the Use Case Implementations”.
KPI 7.3 Documentation of using MIKELANGELO in Use cases - Documented Benefits
Please refer to D6.4 “Final report on the Use Case Implementations”.
3.3.3 Components Integrated
Not all components developed in the MIKELANGELO software stack are relevant and useful
for HPC environments, and some were not available at the time of writing.
For example, ZeCoRx is not yet integrated because it is still under development. Additionally,
since nodes are usually allocated exclusively to a single user (otherwise workloads would
impact each other via networking, memory, context switches, etc.), the security module SCAM
doesn’t provide any benefit for typical HPC environments in return for the CPU cycles it
consumes. Thus, not all components are integrated and evaluated in an HPC environment.
3.3.3.1 Snap
Integration of snap in vTorque and its installation in an HPC environment is a straightforward
task. Some additional work, however, is required to set up an InfluxDB server and a Grafana
dashboard to display the live measurements captured.
3.3.3.2 IOcm
IOcm has the disadvantage that it requires a specific kernel version and configuration in
order to work. This kernel version cannot yet be used in conjunction with vRDMA.
3.3.3.3 vRDMA
The initial vRDMA prototype was a challenge to integrate due to a number of hardware
and software compatibility and dependency issues, but was successfully installed on the HPC
testbed. The subsequent vRDMA prototypes have proven even more challenging to integrate.
They are complex implementations with many dependencies (patched QEMU version, specific
kernel), and a local bridge for the guests has to be set up and torn down, including a
dedicated DHCP server for the virtual RDMA network. Many kernel modules need to be
unloaded and afterwards reloaded, which can be a challenge on its own. Further, opensm, the
Infiniband subnet manager, is also affected. The reality is that the vRDMA component
implementation is not yet appropriate for production environments, and its integration
into vTorque is a complex task.
3.3.3.4 UNCLOT
UNCLOT is straightforward to setup, and its integration in vTorque did not require much
work. Its integration into vTorque consists simply of the configuration and command line
options to enable it, and the addition of a small fragment for a corresponding placeholder in
the libvirt domain.xml template.
3.3.3.5 OSv
In order to support OSv guests, additional work was required compared to standard Linux
guests (SLG). OSv does not provide an SSH server, but instead uses HTTP to start applications
via RESTful calls, avoiding blocking command line calls. Also, building applications into OSv is
different from SLG and is highlighted in section 2.6.
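The REST-based application start can be sketched as follows. The `/app/` endpoint, its `command` query parameter, and the port are recalled from the OSv httpserver API and should be verified against the deployed OSv version; this is an illustration, not the vTorque integration code.

```python
import http.client
from urllib.parse import quote

OSV_API_PORT = 8000  # assumed port of the OSv httpserver

def app_start_path(command):
    # Assumed endpoint: PUT /app/ with a `command` query parameter.
    return "/app/?command=" + quote(command)

def start_app(host, command):
    """Start `command` on an OSv guest via its RESTful API, replacing
    the SSH session a standard Linux guest would use; the call returns
    immediately rather than blocking on the application."""
    conn = http.client.HTTPConnection(host, OSV_API_PORT, timeout=10)
    conn.request("PUT", app_start_path(command))
    status = conn.getresponse().status
    conn.close()
    return status
```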
3.3.3.6 LEET
Lightweight Execution Environment Toolbox requires no specific integration with vTorque or
the HPC environment because of its standalone nature. LEET is used to compose runnable
VM images built on top of OSv. These images are compliant with the HPC environment, i.e.
they include properly configured NFS, cloud-init, and RESTful API modules. NFS is used to
access shared storage, while the others allow for seamless contextualisation of workloads,
similar to how cloud-init and SSH support contextualisation of standard Linux guests.
3.3.3.7 Scotty
The CI component Scotty is not integrated directly into vTorque, but rather utilizes it for the
execution of experiments in HPC environments. Scotty is usually set up outside of the HPC
environment, i.e. running in a Cloud. All it needs is a connection to the submission front-end
via SSH.
3.3.4 Integration Tests
Integration tests have been carried out with our two real world use cases, described below,
utilizing different combinations of available vTorque components. Measurements for both
HPC use cases are presented in D6.4.
3.3.5 Use Cases
Two of the four use cases of the MIKELANGELO project target HPC systems. The Cancellous
Bones simulation (details to be found in D6.3) is a typical HPC application, while the second
one, Aerodynamic Maps, can also be run in the Cloud.
3.3.5.1 Cancellous Bones
The deployment of Cancellous Bones on bare metal does not introduce special constraints on
the location of the binaries. Once the software requirements are fulfilled and the user
compiles the source code of the application, the binaries can be located in any directory (e.g.
home directory). The path to the binaries is set in the job script that executes the use
case in the HPC batch system (Torque PBS).
For the virtualized execution with Standard Linux Guest (SLG) and vTorque a VM image is
needed to execute the job. The image contains the application binaries and its dependent
libraries needed to successfully run the simulation. The data, the configuration and the job
script are not part of the image. This modularity makes it easy to set up different
configurations without touching the images at all. This image can be understood as “fat”: a
single file which contains the smallest set of components set up to run the simulation. The
steps to create the images from scratch were defined in the Intermediate Use Cases
Implementation Strategy deliverable (D2.2).
To port the use case to OSv, the source code was included within the MIKELANGELO
applications directory, that is, the source code is currently under the path
“~/osv/mike-apps/cancellous-bones/serial_fmps”, where “serial_fmps” contains the code for the
Cancellous Bones solver.
The use case can run inside OSv without major rework of the code. The modifications were
mainly confined to the MPI implementation, which had previously been adapted to run
threads instead of processes. OSv is a single-user, single-process operating system, so MPI
cannot spawn processes and has to work with threads instead. This eased the porting, since
the handling of processes for this use case is managed entirely by MPI. The Cancellous Bones
use case was built using the default OSv build system.
Performance measurements and comparisons between bare-metal execution and virtualized
workloads in SLG and OSv, with different combinations of components enabled and for both
HPC-related use cases, can be found in D6.4 ”Final report on the Use Case Implementations”.
The general observation from an integration point of view is that OSv has benefits compared
to fat standard Linux guest operating systems, particularly in regard to boot times, guest
image size (crucial for staging) and security (reduced functionality means fewer attack
vectors).
3.3.5.2 Aerodynamic Maps with OpenFOAM
A bare-metal deployment of OpenFOAM will install binaries into a standard system path, to
make the software available to all users. If users wish to use a different version of OpenFOAM
and/or Open MPI, they have to compile and install it themselves. During compilation,
additional development libraries might be required. Installation of development libraries can
typically be done by cluster administrators only, making the process inconvenient.
Regular users are only able to install binaries into non-default directory paths (e.g. into their
home directory). To use custom binaries and libraries, the user has to adjust the PATH and
LD_LIBRARY_PATH environment variables. OpenFOAM is not started directly by the end user,
as Torque (or some other resource manager) is used. At that point the setting of
PATH/LD_LIBRARY_PATH becomes challenging, as ssh, Open MPI and Torque all have a
tendency to reset user environment variables to default values. To ensure that the correct
binaries and libraries were used, we had to log into compute hosts and analyze
/proc/PID/maps of the running OpenFOAM processes. In the first trials, even when the correct
(user-compiled) binaries were used, they still loaded incorrect (system default) libraries.
Additional Open MPI parameters had to be passed, and OpenFOAM started via an
exec-wrapper script, before the user-compiled OpenFOAM could be used.
Achieving the same goal in an SLG (Linux VM) was more straightforward, as users can install
the desired binaries and libraries into the default system path. Only one version of a binary or
library is present in the VM, so a running binary implies the correct libraries are loaded.
Building an SLG image and building an OSv image are both relatively simple. For SLG, we
start with a minimal Ubuntu or CentOS VM. We then follow the official installation
instructions, installing the required development tools and libraries to compile Open MPI and
OpenFOAM. Finally, the compiled binaries and libraries are installed into the VM, and the VM
image is saved.
The OSv image build process is slightly simpler, as the Open MPI and OpenFOAM build
recipes in the mike-apps repository already include the commands to install the required
development tools and libraries.
Both SLG and OSv VM images also have to be prepared to include cloud-init, so that
USTUTT/vTorque-specific initialization and customization takes place once the image is
deployed on a cluster. This part of the customization is automatic and transparent to the user.
For the specific evaluation of the Aerodynamics use case in an HPC environment, as well as in
a Cloud environment, please refer to D6.4 ”Final report on the Use Case Implementations”.
3.3.6 Challenges
During the HPC integration of the MIKELANGELO components, namely their integration into
vTorque, several challenges were identified and resolved. These impacted stability,
performance and security, all three crucial for adoption in production HPC environments.
3.3.6.1 Security
Users must not be able to break out of their assigned user level; however, virtualization
introduces several issues in this regard that need to be considered. Users can gain root access
in different ways. To begin with, they may simply have root access enabled in an image, or
they may use an image prepared with vulnerable software that can be exploited at runtime to
gain root or some other kind of backdoor. Thus users cannot be permitted to run their own
images, but only trusted ones provided by cluster administrators. With root access inside a
VM, a user would be able to access a shared file-system, common in HPC environments,
under another user’s uid/gid and so gain access to confidential data.
3.3.6.2 Stability
Since vTorque uses flags on the file-system for synchronization, a misconfigured NFS can
cause trouble: processes run into timeouts even though the remote side is ready. The basic
problem is that file updates need to be propagated quickly to the other nodes. NFS is not
designed as a parallel file system, and thus updates are immediately known only to the server
and the invoking node. All other nodes use a local attribute cache and so receive the updates
only on request, which in many cases happens only after 30 seconds.
There are, however, additional NFS mount options that reduce this delay by deactivating
parts of the attribute cache, causing any local request to fetch the information from the
server. The drawback is increased load on the server for all observed files.
According to an article[61], the following options should be used:
● noatime: disables updates to inode access times on the NFS server. As most
applications do not need this value, it can safely be disabled.
● nodiratime: disables updates to directory access times. This is the directory
equivalent of noatime.
● noac: disables all forms of attribute caching entirely. This exacts a significant
performance penalty, but it allows two different NFS clients to get reasonable results
when both are actively writing to a common export on the server.
By setting the three options “noatime,nodiratime,noac”, the caching of file attributes is
deactivated and changes become visible on all nodes as soon as they next check the
directory or files.
This change removes the timeouts we experienced before, which depended on the last cache
update; we achieved a delay of less than a second, which is easily covered by the timeout
limits.
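As an illustration, a client-side /etc/fstab entry combining these options might look as follows. This is a hypothetical sketch: the server name, export path and mount point are invented for the example and are not taken from the HLRS testbed.

```
# NFS share used for vTorque's file-system flags; attribute caching disabled
nfs-server:/export/vtorque  /var/spool/vtorque  nfs  noatime,nodiratime,noac  0  0
```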
3.3.6.3 Performance
With regard to performance there are several aspects to highlight. There is the user
submitting jobs, for whom performance may be relevant in terms of job submissions per
minute. There is the node preparation and tear-down phase, which blocks the nodes without
any workload being executed. And finally there is the performance of the virtualized
workloads themselves, from the hypervisor level through the stack up to the application level.
From an integration point of view, the first two can be tackled directly, while the third
requires careful tuning all the way down to the hypervisor.
A KVM installation can be tuned in many ways to improve the performance of virtual guests,
depending on the actual hardware and software environment. The enhancements to the
hypervisor, and how they can be fine-tuned, are described in the corresponding section 4.1.
3.3.6.4 Usability and User Experience
With regard to user experience, many aspects are relevant both to administrators maintaining
an HPC environment and to the end users running their workloads.
To begin with, HPC environments may provide their users with optimized proprietary
compilers or libraries. To make these accessible whenever they can be utilized (use if
available, otherwise skip), the HPC cluster’s shared application directory, usually mounted as
/opt, is made available to virtual guests as /opt-hpc.
Furthermore, mirroring Torque’s administrator experience, vTorque provides an equivalent of
Torque’s queue manager (qmgr) for the management of images, called vmgr. Details about
vmgr can be found in D5.3 and D2.21.
In addition to the vmgr CLI tool, vTorque provides a job submission CLI tool designed as an
equivalent to Torque’s qsub, in order to facilitate common working patterns. It is called
vsub, and is responsible for the submission of job scripts to be executed in a virtualized
environment. Like qsub, it allows all default resources allocated for a job to be overridden,
but falls back on defaults defined by the HPC cluster administrators if nothing is provided by
users at submission time, either on the command line or inline in the job script. The syntax of
vsub resource requests is based on qsub, e.g.
qsub -l nodes=1 jobscript
This example requests one node from Torque. vsub is used similarly, but with an additional
set of arguments:
vsub -vm img=bones-osv.img jobscript
Besides the newly introduced arguments for virtual resources, vsub accepts all qsub
arguments, as it simply forwards them on. If no VM-specific arguments are given,
administrator-defined defaults (image, vcpus, ram, etc.) are applied, i.e.
vsub -l nodes=1 jobscript
However, vsub introduces one disadvantage. As it is implemented as a bash script, evaluated
by an interpreter at runtime which then invokes qsub for the actual submission, it cannot
match the speed of qsub. This small overhead is considered acceptable given the benefits:
binary incompatibility is avoided and many different versions of Torque can easily be
supported. Since jobs take time to execute, there is little added benefit in being able to
submit millions of jobs per second: even huge clusters running small jobs would already be
saturated.
The following tables show the difference between a bare-metal job submission with PBS
qsub and vTorque’s vsub. vsub calls qsub after processing all VM-related arguments and
applying the default values defined by administrators. Time was measured over 100
iterations and averaged, to smooth out peaks.
The commands used for job submission, in combination with a simple ‘hostname’ command,
were:
qsub -l nodes=1 jobScript.sh
vsub -l nodes=1 jobScript.sh
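The measurement approach described above can be sketched as follows. This is an illustrative Python reconstruction, not the project's actual measurement harness; the commented-out command lines match the examples above and would only work on a cluster with Torque and vTorque installed.

```python
import statistics
import subprocess
import time

def average_submission_time_ms(cmd, iterations=100):
    """Run a submission command repeatedly and return the mean wall-clock
    time in milliseconds; averaging many iterations smooths out peaks."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples)

# Hypothetical usage on a cluster with both tools installed:
# qsub_ms = average_submission_time_ms(["qsub", "-l", "nodes=1", "jobScript.sh"])
# vsub_ms = average_submission_time_ms(["vsub", "-l", "nodes=1", "jobScript.sh"])
# overhead_pct = 100.0 * (vsub_ms - qsub_ms) / qsub_ms
```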
Table 11 below shows the difference in total execution time for the commands used.
Table 11: Job submission time, qsub vs. vsub
qsub in ms | vsub (+qsub) in ms | Difference in ms | Difference in %
66.2994 | 895.94896 | 829.64956 | 594.42385
Considering that the vsub execution time includes the call to qsub, the following table
compares the actual runtime of the vsub-specific operations against qsub.
Table 12: Job submission time, qsub vs. vsub-specific logic
qsub in ms | vsub in ms | Difference in ms | Difference in %
66.2994 | 829.6496 | 763.3502 | 509.0966
To summarize, the vsub bash script wrapper adds a noticeable overhead to the job
submission process. However, since job runtimes far exceed submission times, there is no
pressing need to improve this part for now; other work is more beneficial with regard to
overall application performance and runtime.
The following table shows the overhead added by vTorque for a single-node job. The table is
also representative of a multi-node job, since the prologue and prologue.parallel share some
common logic, although the root prologue runs longer due to additional work. The epilogue
and epilogue.parallel likewise have identical logic, except that the root epilogue also
performs some additional clean-up work. One has to keep in mind that the wrappers might
be replaced with a patch for Torque, providing better performance than a shell script
interpreter is able to deliver. However, this is considered possible future work, and has a low
priority since the runtime of HPC jobs usually lasts several hours, if not days or weeks. The
relatively minor overhead introduced by vTorque to make HPC environments ready to run
virtualized workloads is considered justifiable and acceptable.
Table 13: vTorque wrapper timings by phase
prologue | forked root process (boots VMs) | vmPrologue (prepares files) | vmPrologue.parallel (boots VMs) | JobWrapper and simple job script | epilogue
6.0348 s | 33.4683 s | 41.8211 s | 8.0933 s | 2.6695 s | 2.6896 s
3.4 Unsupervised Exploratory Data Analysis
Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the
goal of discovering useful information, suggesting conclusions, supporting verification and
validation, and aiding decision-making. For the MIKELANGELO architecture evaluation,
unsupervised exploratory data analysis techniques were used to investigate the changes in
performance driven by MIKELANGELO components. A three-phase approach was taken: Data
Extraction, Data Cleaning and Transformation, and Data Analysis.
In the data extraction phase, experimental data was retrieved in a structured format from a
dedicated MIKELANGELO performance analysis testbed, as both the Cloud and HPC testbeds
were under construction when the analysis was scheduled to be performed. The performance
analysis testbed consisted of 2 nodes with Intel® Xeon® Processor E5-2699 v3, 220 GB of
RAM and Intel® SSD DC S3710 for OpenStack Nova storage. The testbed was deployed
using Ubuntu 14.04 with Fuel 9.2.
The extracted data comprised two data sets: Vanilla and MIKELANGELO. Vanilla is the data
set with the default architectural settings and configurations; the MIKELANGELO data set was
derived after enabling the new MIKELANGELO components. 452 metrics were collected from
a variety of MIKELANGELO Snap collector plugins. The testbed was instrumented with 14
Snap collectors, configured to collect various information from the hardware and software:
● CPU collector
● Disk space collector
● Disk Info collector
● Network interface collector
● Iostat collector
● Libvirt collector
● Load collector
● Memory Info collector
● Pcm collector
● Processes collector
● Psutil collector
● Swap collector
● USE collector
● KVM collector
The data was published to an InfluxDB database and exported after the experiment to
comma-separated value (CSV) format.
In the data cleaning and transformation phase, the data was inspected carefully. Incomplete
values were omitted and unlabeled columns were removed. The data was then scaled to
normalize it and facilitate comparison: scaling standardizes the range of the independent
variables (features), transforming each into a normalized range.
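The report does not name the exact scaler used; as one common choice, min-max scaling maps each metric column into the [0, 1] range so that metrics with very different units become directly comparable. A minimal illustrative sketch:

```python
def min_max_scale(values):
    """Rescale one metric column into [0, 1]; a constant column maps to 0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Samples of different magnitudes land on a common scale:
print(min_max_scale([10.0, 20.0, 30.0]))  # [0.0, 0.5, 1.0]
```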
In the data analysis phase, principal component analysis (PCA) was used to select the
significant features of the data (dimensionality reduction) and to identify correlations. After
applying PCA to the data, each principal component and its contributing factors were
analysed.
The PCA identified the principal components of the variances between the Vanilla and
MIKELANGELO data sets, and the number of dimensions involved, as illustrated in Figure 36.
Figure 36: Principal components showing the percentage of variation captured by each component/dimension.
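The per-component variance fractions plotted in Figure 36 can be computed from a samples-by-metrics matrix via the singular value decomposition. The sketch below uses NumPy directly and is illustrative only; the report does not specify the actual tooling used.

```python
import numpy as np

def explained_variance_ratio(X):
    """PCA via SVD: fraction of total variance captured by each
    principal component of a (samples x metrics) matrix."""
    Xc = X - X.mean(axis=0)            # centre each metric column
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                       # variance along each component
    return var / var.sum()

# Toy data with one dominant direction of variance:
X = np.array([[1.0, 0.1], [2.0, 0.2], [3.0, 0.3], [4.0, 0.35]])
ratios = explained_variance_ratio(X)   # descending, summing to 1.0
```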
Figure 37: Feature contribution analysis for a dimension / principal component.
For every component, the features contributing the most were identified. For each of these,
the mean values in both the Vanilla and MIKELANGELO data sets, the difference of those
means (Mean_Diff), standard deviations (SD), minimum values (MIN) and maximum values
(MAX) were recorded. Figure 37 illustrates the features (metrics) that contributed the most to
the first dimension.
3.4.1 All Metrics Analysis
Analysing all 452 metrics gathered, the most significant metrics that differed between the
Vanilla and MIKELANGELO data sets are shown in Figure 38. Some of the major changes in
the mean values of selected features are highlighted in red (very high) and green (moderately
high).
Figure 38: Analysis of selected features from all metrics data
It can be seen that the CPU time is significantly higher in the MIKELANGELO data set than in
the Vanilla data set. This can be explained by the fact that the IOcm component of
MIKELANGELO can reduce the time blocked on I/O, allowing the CPU to process more data.
In terms of correlations, the following two figures describe the most correlated variables in
the Vanilla and MIKELANGELO data sets. It is interesting to observe that in the Vanilla data
set, the CPU time mean is perfectly correlated with the mean of bytes read, confirming that
the workload is I/O-constrained.
Figure 39: Analysis of correlations from vanilla data set
Figure 40: Analysis of correlations from MIKELANGELO data set
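The pairwise relationships behind Figures 39 and 40 are Pearson correlations. For reference, a minimal implementation is sketched below; this is illustrative and not the project's actual analysis code.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two metric series;
    +1.0 indicates a perfect linear relationship, such as the one
    observed between mean CPU time and mean bytes read."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```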
3.4.2 I/O Specific Analysis
Given the focus of IOcm on improving the performance of virtualised read and write
operations, the analysis was further refined to focus only on the I/O metrics gathered. Figure
41 below shows the analysis of selected features for the I/O-specific metrics data. The
features with the most significant changes are highlighted. The mean value difference
(Mean_Diff) shows the increase or decrease of the selected feature in the MIKELANGELO data
set compared to the Vanilla data set, and the standard deviation (SD) shows the variation of
the data in each data set. Together these highlight the significant changes between the data
sets.
Figure 41: Analysis of selected features from I/O specific data
The highlighted features in the analysis can be visualized to more readily observe any
interesting aspects. Figure 42, for example, shows the data gathered for a selected feature,
the mean value of disk write operations per second (disk.ops_write.mean).
Figure 42: Comparison of sdc disk mean value of write operation per second.
The mean value of the metric in the Vanilla data set is 1.23, while in the MIKELANGELO data
set it is 1.01. The mean number of disk writes per second thus dropped when the
MIKELANGELO IOcm component was enabled. This can be explained by the MIKELANGELO
architectural changes: more of the data is now retained in memory, requiring fewer disk
writes. Trend analysis can highlight how the behavior changes in both data sets, as shown in
the following figure.
Figure 43: Comparison of trends of sdc disk mean value of write operation per second.
Using PCA, correlations of selected features can also be reviewed. Each principal component’s
contributing features are highly correlated with one another. Some strong correlations that
arose in this exploratory data analysis are illustrated in Figure 44.
Figure 44: Analysis of correlations of selected features.
These unsupervised exploratory data analysis techniques illustrate some of the insight that
can be gained from automated analysis. MIKELANGELO’s Scotty has been designed not just
to configure, deploy and execute experiments on Cloud or HPC testbeds, but also to capture
the experimental results data and initiate analysis. As Scotty matures, the practicalities of
integrating this degree of automated analysis with Scotty will be explored.
4 Observations
The architecture and implementation of the MIKELANGELO software stack has been
evaluated in the previous chapters. This chapter summarises the key observations.
4.1 Individual Components
4.1.1 Linux Hypervisor IO Core Management - sKVM’s IOcm
● Automated dynamic allocation of cores to I/O has been successfully demonstrated.
● Performance improvements using both throughput-intensive (netperf) and real-world
(HTTP server) workloads have been confirmed.
● The implementation has been developed using Linux kernel best practices to simplify
upstreaming.
4.1.2 Zero Copy on Receive - sKVM’s ZeCoRx
● Initial experiments suggest a 20% saving in CPU usage may be achieved by avoiding
the copy between hypervisor and VM guest buffers.
● The implementation has proven to be much more involved than initially expected,
with a complex architecture requiring changes to the VM guest, the vhost-net
hypervisor component, the macvtap device driver, the macvlan driver, the Ethernet
adapter (ixgbe) device driver, qemu, and the virtio protocol.
● This component is still under development at the time of writing.
4.1.3 Virtual RDMA - sKVM’s virtual RDMA
● Three separate approaches to delivering vRDMA were prototyped, and a final vRDMA
architecture delivered. To accommodate different communication stacks and
technologies (e.g. Infiniband, RDMA, rsocket) it supports multiple modes.
● KPIs 2.1, 3.1 and 3.2 have all been enhanced by the vRDMA implementation.
● Using IB verbs with socket initialization, OSv guests can now achieve up to 98% of
the I/O performance of Linux hosts.
● Using Infiniband and RoCE, Linux guests can now achieve 80% of host performance
for small messages, and 96% for large messages. 97% of host performance is possible
when using rdmacm on RoCE-enabled hardware.
● Using rsocket and vRDMA over RoCE, Linux guests can achieve approximately 80% of
the rsocket performance of a Linux host, which itself is 6-12 times better than using
traditional network sockets on Linux hosts.
● Approaches using ivshmem shared memory to improve the performance of guests on
the same host have been documented in a white paper, but not implemented as part
of the vRDMA component.
4.1.4 Unikernel Guest Operating System - OSv
● MIKELANGELO has helped OSv expand the Linux API calls that it supports, and deliver
support for Open MPI, UNIX processes and NFS functionality. While OSv was not able
to host any of the use cases at the start of the project, all four can now be hosted.
● The OSv unikernel offers a significantly reduced attack surface compared to
containers, and so can be considered less vulnerable than containers whilst often
achieving equivalent performance.
● OSv offers significant improvements over Linux VMs in terms of boot time (9x faster),
image size (100x smaller) and memory usage (half the size).
● Applications hosted on OSv can usually be made to perform at least equivalently to
Linux, if not faster, but this may require detailed inspection and refactoring to identify
any previously unnoticed bottlenecks or bugs that may exist within OSv.
● KPIs 1.1, 3.1, 3.2, 3.3 and 7.1 have all been delivered or enhanced by the OSv
implementation.
4.1.5 Lightweight Execution Environment Toolbox - LEET
● LEET provides a powerful tool for the management of application packages:
self-sufficient building blocks that can be used for the composition of runnable
unikernels.
● LEET also enables integrations with different infrastructures, ranging from local
environments through OpenStack and Amazon Web Services clouds to container
clusters managed by Kubernetes.
● The MIKELANGELO use cases have demonstrated that LEET has the flexibility to
create and compose complex, nested, end-user images. The efficiencies of LEET allow
particularly complex images to be constructed faster than with conventional scripts
and tools.
● KPIs 1.2, 4.1, 4.2, and 7.1 have all benefited from the development of LEET, either
directly or indirectly.
4.1.6 UNikernel Cross-Layer opTimisation - UNCLOT
● UNCLOT delivers improved communication between OSv VMs on the same host, as
OSv itself cannot take full advantage of NUMA architectures.
● The shared memory approach has been implemented beneath the POSIX API, so user
applications do not need to be modified to take advantage of UNCLOT.
● UNCLOT does allow multiple VMs to write to shared memory, which may cause
instability or security issues if not used appropriately. It is assumed that the cloud
manager will only facilitate shared memory between VMs that can trust each other.
● Enabling UNCLOT for the OSU Micro-Benchmark improves bandwidth by a factor of
3-6, depending on message size. Latency is reduced by a factor of 2.5 for larger
messages and a factor of 5 for smaller messages.
4.1.7 Full-Stack Monitoring - Snap
● The flexible, extensible architecture of Snap has been successfully demonstrated with
numerous plugins developed by multiple partners, published and deployed across the
use cases.
● Recent incremental enhancements to the architecture have included the ability to
capture and process streaming events, plugin diagnostic functionality, tools to
simplify the provisioning and configuration management of Snap, and a rewrite of
the Snap RESTful APIs to simplify the development of custom clients by end users.
● The management and performance of Snap have been explored, and its overhead is
nearly always less than that of alternatives, including collectd, especially at scale.
Scaling to 500 nodes was verified.
● Snap has been developed using best-practice tools and processes, and all plugins
contributed by MIKELANGELO have green build status and an A+ rating as measured
by an independent third-party service.
● No MIKELANGELO KPIs refer directly to improvements of the monitoring system;
however, many of the KPIs have benefitted from Snap, as it has enabled their
evaluation.
4.1.8 Efficient Asynchronous Applications - Seastar
● Seastar is a powerful, modern C++14 library that has been architected with a
share-nothing philosophy to deliver significant performance improvements over
standard Linux libraries and APIs for cloud-hosted applications.
● A re-implemented memcached delivered approximately 3 times the throughput of
standard memcached running on Linux. Other tests have demonstrated that
performance scales linearly with resources.
● Seastar is a core library leveraged by the Scylla rewrite of Apache Cassandra, helping
it achieve not just approximately 10 times the throughput of Cassandra, but also
linear scalability.
● KPIs 3.1, 3.2 and 7.1 have all been advanced with the realisation and enhancement of
Seastar.
4.1.9 Side Channel Attack Mitigation - sKVM’s SCAM
● SCAM has been architected to be compatible with all Linux kernels, and to run in user
space with no changes required to the application being protected.
● The monitoring and profiling service has been developed to minimise overhead: less
than 1% of CPU. It does have a learning phase: a one-off 3-10 minute cache activity
scan to create a baseline model, and a period, perhaps 3 minutes depending on
setup, to identify the active cache sets to be secured.
● A proof-of-concept attack has been developed, and all attacks were detected
successfully.
● Effective mitigation of the attack using nosification has been developed. It only
becomes active when an attack is detected. The mitigation is also low-overhead:
using more than one core for nosification does not significantly improve the security
of the system.
4.2 Integrated Stacks
4.2.1 Full Stack for Cloud
● A full cloud deployment has successfully demonstrated the integration of IOcm,
SCAM, Snap, OSv, LEET, Scotty, Actuator and MCM.
● Improvements in KPIs 1.2, 3.2, 3.3, 4.1, 5.1, and 6.1 have all been demonstrated in the
integrated full stack for Cloud.
● KPI 2.1 could not be addressed on the MIKELANGELO Cloud testbed due to (1) the
lack of vRDMA support on the testbed, (2) Cloud workloads that do not benefit from
the performance enhancements delivered by IOcm (which works best with large
packets), and (3) ZeCoRx not being ready for integration with the Cloud testbed at
the time of writing.
● KPI 3.1 could not be satisfactorily demonstrated on the Cloud testbed due to
performance degradation for some workloads running on OSv, as explained in
Section 2.5.
● Although significant performance improvements have not been confirmed on the
Cloud to date, significant improvements in agility and flexibility have been delivered.
4.2.2 Full Stack for HPC
● A full HPC deployment has successfully demonstrated the integration of Snap,
versions of vRDMA, OSv, IOcm, UNCLOT, LEET and Scotty.
● Improvements in KPIs 1.1, 2.1, 3.1, 3.2, 3.3, 4.1, 4.2, 5.1, 5.2, 6.1, 7.1, 7.2, 7.3 have all
been demonstrated in the integrated full stack for HPC.
● At the time of writing, neither ZeCoRx nor the latest version of vRDMA was
integrated with the HPC stack, as development of both components was still in
progress. SCAM was not integrated, as it tackles a challenge for the Cloud that is less
relevant in HPC environments.
● HPC administrators should be aware of the potential for HPC users to browse the
shared file-systems that are commonly used in HPC deployments. It is proposed that
HPC administrators build and control any VMs that are hosted, to minimise the
security risk.
● NFS is not designed as a parallel file system, and delayed propagation of changes can
affect HPC workloads. Adjusting NFS mount options can remove any timeouts that
may otherwise occur.
● MIKELANGELO’s development of vTorque carefully mirrored the command line tools
that HPC administrators are familiar with, including qmgr and qsub. vmgr and vsub
wrap qmgr and qsub, sacrificing a negligible amount of performance to maximise
compatibility with the installed versions of qmgr and qsub.
5 Concluding Remarks
Over the three years of the MIKELANGELO project numerous components were architected
and implemented to help improve the performance and security of virtualised I/O for Cloud
and HPC workloads.
All individual components developed by MIKELANGELO, and both the Cloud and HPC
integrated stacks, were evaluated as part of this deliverable.
Regarding the individual components, all succeeded in delivering appreciable performance or
security improvements. Some of the originally expected performance improvements were not
realised for all scenarios, e.g. IOcm for small messages, and OSv hosting of particular
applications. Additionally, some implementation architectures were much more complex than
originally expected, due to the breadth of hardware technologies that needed to be
considered and the complexities of the internal architectures of modern hypervisors and
guest operating systems. The consortium gained valuable insights from this research, and
where fundamental barriers were met, it was able to redirect resources into alternative,
equally relevant areas with more potential: hence the progress in ZeCoRx, UNCLOT and
Seastar, where efforts proved much more rewarding.
Regarding the integrated full stacks for Cloud and HPC, MIKELANGELO has demonstrated how
many of the components developed can be successfully integrated into production-grade
deployments. Some components, including the final vRDMA and ZeCoRX implementations,
are still under development at the time of writing and so have not yet been deployed in these
testbeds.
The quality of the code developed by the consortium is evidenced by its successful
upstreaming to relevant production-grade open-source projects, including
OSv, Snap, Seastar, and Virtlet. Several of these projects have included MIKELANGELO-
developed code in their showcase demonstrations, and have built on it further in subsequent
work.
This evaluation of the architecture and implementation of MIKELANGELO has demonstrated
significant success in delivering value-added enhancements for advanced Cloud and HPC
environments. It has also helped identify future efforts that partners are keen to pursue to
maximise the value of this work.
6 References and Applicable Documents
[1] The MIKELANGELO project, http://www.mikelangelo-project.eu/
[2] The OSv guest operating system for the cloud, http://osv.io/
[3] sKVM, The First Super KVM - Fast virtual I/O hypervisor, https://www.mikelangelo-project.eu/wp-content/uploads/2016/06/MIKELANGELO-WP3.1-IBM-v1.0.pdf
[4] MIKELANGELO Deliverable D2.21, https://www.mikelangelo-project.eu/wp-content/uploads/2017/09/MIKELANGELO-WP2.21-USTUTT-v2.0.pdf
[5] MIKELANGELO Deliverable D6.1, https://www.mikelangelo-project.eu/wp-content/uploads/2016/07/MIKELANGELO-WP6.1-INTEL-v1.0.pdf
[6] ELVIS, Efficient and Scalable Paravirtual I/O System, N. Har’El et al., available at
http://www.hypervisorconsulting.com/pubs/eli/elvis-atc13.pdf
[7] Ubuntu 14.04, http://www.ubuntu.com/
[8] KVM, the Kernel Virtual Machine, http://www.linux-kvm.org/
[9] QEMU 2.2, http://wiki.qemu.org/
[10] Netperf, http://www.netperf.org/
[11] The Apache HTTP Server Project, https://httpd.apache.org/
[12] ab, the Apache HTTP server benchmarking tool, https://httpd.apache.org/docs/2.2/programs/ab.html
[13] The MIKELANGELO project GitHub, https://github.com/mikelangelo-project
[14] MIKELANGELO Deliverable D4.1, https://www.mikelangelo-project.eu/wp-content/uploads/2016/06/MIKELANGELO-WP4.1-Huawei-DE_v2.0.pdf
[15] https://community.mellanox.com/docs/DOC-2802
[16] https://linux.die.net/man/1/netpipe
[17] https://github.com/zrlio/hyv
[18] MIKELANGELO Deliverable D2.21, M30, “MIKELANGELO Final Architecture”
[19] http://unikernel.org/
[20] “My VM is Lighter (and Safer) than Your Container”, Symposium of Operating System
Principles (SOSP) 2017, http://cnp.neclab.eu/projects/lightvm/lightvm.pdf
[21] NFS, http://nfs.sourceforge.net/
[22] The Porting of NFS to OSv, https://www.mikelangelo-project.eu/2016/04/nfs_on_osv/
[23] Open MPI, https://www.open-mpi.org/
[24] ISO 9660, http://www.iso.org/iso/catalogue_detail.htm?csnumber=17505
[25] https://github.com/cloudius-systems/osv-apps
[26] http://blog.osv.io/blog/2017/06/12/serverless-computing-with-OSv/
[27] https://download.fedoraproject.org/pub/fedora/linux/releases/26/CloudImages/x86_64/images/Fedora-Cloud-Base-26-1.5.x86_64.raw.xz
[28] http://cnp.neclab.eu/projects/lightvm/lightvm.pdf
[29] Avi Kivity et al., “OSv—Optimizing the Operating System for Virtual Machines."
Proceedings of USENIX ATC’14: 2014 USENIX Annual Technical Conference. 2014.
https://www.usenix.org/node/184012
[30] Capstan Github repository, https://github.com/mikelangelo-project/capstan/
Public deliverable
© Copyright Beneficiaries of the MIKELANGELO Project
Page 121 of 122
Project No. 645402
MIKELANGELO Deliverable D6.2
[31] Unik Github repository, The Unikernel Compilation and Deployment Platform,
https://github.com/solo-io/unik
[32] Virtlet Github repository, https://github.com/mirantis/virtlet
[33] OSv Applications Github repository, https://github.com/cloudius-systems/osv-apps
[34] Capstan packages Github repository, https://github.com/mikelangelo-project/capstan-packages
[35] D4.7 First version of the application packages, https://www.mikelangelo-project.eu/wp-content/uploads/2016/06/MIKELANGELO-WP4.7-XLAB-v1.0.pdf
[36] OSU Micro-Benchmarks, http://mvapich.cse.ohio-state.edu/benchmarks/
[37] Nginx, https://nginx.org
[38] Wrk HTTP benchmarking, https://github.com/wg/wrk
[39] Collectd, https://collectd.org/
[40] Ganglia, http://ganglia.info/
[41] Telegraf, https://github.com/influxdata/telegraf
[42] Go, https://golang.org/
[43] Tukey Method, http://www.itl.nist.gov/div898/handbook/prc/section4/prc471.htm
[44] InfluxDB, https://influxdata.com/
[45] Travis CI, https://travis-ci.org/
[46] Jenkins, https://jenkins.io/
[47] http://www.scylladb.com/
[48] http://www.kegel.com/c10k.html
[49] www.sosp.org/2001/papers/welsh.pdf
[50] http://www.samsung.com/semiconductor/global/file/insight/2017/04/ScyllaDB_Report_v410.pdf
[51] https://en.wikipedia.org/wiki/YCSB
[52] https://github.com/fastio/pedis
[53] https://senior7515.github.io/smf/write_ahead_log/
[54] https://github.com/scylladb/scylla
[55] https://groups.google.com/forum/#!forum/seastar-dev
[56] https://github.com/scylladb/seastar
[57] Deliverable D3.4, sKVM security concepts – first version, available at https://www.mikelangelo-project.eu/wp-content/uploads/2016/06/MIKELANGELO-WP3.4-BGU-v1.0.pdf
[58] https://cs.adelaide.edu.au/~yval/Mastik/
[59] https://github.com/mikelangelo-project/SCAM
[60] MIKELANGELO Deliverables, available at https://www.mikelangelo-project.eu/deliverables/
[61] Linux: Tune NFS Performance, https://www.cyberciti.biz/faq/linux-unix-tuning-nfs-server-client-performance/