
An Empirical Investigation of the Impact of Server Virtualization on Energy Efficiency for Green Data Center
Yichao Jin (1), Yonggang Wen (1), Qinghua Chen (2) and Zuqing Zhu (3)
(1) Nanyang Technological University, Singapore
(2) Yangtze Delta Institute of Tsinghua University, PRC
(3) University of Science and Technology of China, PRC
Email: {yjin3,ygwen}@ntu.edu.sg, chenbill@gmail.com, zqzhu@ustc.edu.cn
The swift adoption of cloud services is accelerating the deployment of data
centers. These data centers are consuming a large amount of energy, which is
expected to grow dramatically under the existing technological trends. Therefore,
research efforts are greatly needed to architect green data centers with better
energy efficiency. The most prominent approach is the consolidation enabled by
virtualization. However, little attention has been paid to the potential overhead
in energy usage and the throughput reduction for virtualized servers. Clear
understanding of energy usage on virtualized servers lays out a solid foundation
for green data center architecture. This paper investigates how virtualization
affects the energy usage in servers under different task loads, aiming to understand
a fundamental trade-off between the energy saving from consolidation and
the detrimental effects from virtualization. We adopt an empirical approach
to measure the server energy usage with different configurations, including a
benchmark case and two alternative hypervisors. Based on the collected data,
we report a few findings on the impact of virtualization on server energy usage
and their implications to green data center architecture. We envision that these
technical insights would bring tremendous value propositions to green data center
architecture and operations.
Keywords: Virtualization; Server Consolidation; Green Data Center; Energy Efficiency
Received 26 June 2012; revised 04 January 2013
1. INTRODUCTION
Recent advances in cloud computing are transforming
the information and communication technologies (ICT)
[1]. The rapid adoption of cloud services is accelerating
the deployment of data centers. These data centers
in turn are contributing to a dramatic growth of energy
consumption in the ICT sector. It was estimated that
data centers in the United States consumed about 61
billion kWh of electricity in 2006, accounting for 1.5% of
all U.S. electricity consumption and more than double
the energy consumption in 2000 [2]. The consumption was
expected to double again, reaching 120 billion kWh,
or 4 kWh per person, by 2011 [3]. Moreover, an annual
growth rate of 30% in data center energy consumption
was predicted from 2012 to 2016 [4]. This exploding
energy cost, which would overshadow the capital cost of
data centers, must be tamed for a sustainable growth.
The energy usage in data centers consists of two
parts, including energy consumed by the ICT subsystem
(i.e., servers, storage and networking) and energy
consumed by infrastructure (e.g., Heating, Ventilation,
Air-Conditioning (HVAC)). This research focuses on
the energy consumed by the ICT subsystem in
data centers. Specifically, the energy usage by the
ICT subsystem depends on both the specifications of
individual building components and the operational
configurations (e.g., applications and load balancing
in the ICT subsystem) [5]. Previous research efforts
have explored both areas of improvements for energy
conservation. On the former aspect, early in 1992,
the US Environmental Protection Agency (EPA)
introduced the ENERGY STAR as a voluntary labeling
program to identify and promote energy-efficient
products and to reduce greenhouse gas emissions and
the electricity consumption of computers and monitors
[6]. Recently, new hardware designs, for example,
next-generation memory solutions (i.e., DDR3 SDRAM)
and solid state drive (SSD), are emerging to improve
both the computing speed and the energy efficiency
in the ICT subsystem [7]. On the latter aspect,
researchers aim to optimize the energy consumption
of computing systems via dynamic resource allocation
on an operation-system (OS) level and a data-center
level [8, 9]. It has been generally accepted that
server consolidation will improve the energy efficiency
in data center operations [10]. More specifically, it was
expected [11] that savings on the order of 20% can be
achieved in server and network energy consumption,
by optimizing the data center operations. Although
some works have addressed the energy issue for data
centers as a whole [12, 13], little effort has been devoted to
investigating the server energy usage in the context
of cloud computing, in particular, the impact of data-center virtualization.
The technical feasibility of curbing the growing
energy usage for the ICT sub-system in data centers is
enabled by the latest development of cloud computing
technologies.
The most prominent solution is to
consolidate applications from multiple servers to one
server, enabled by server virtualization.
To a first-order approximation, server virtualization could
potentially reduce energy usage by turning off those
idle servers. However, virtualization would also lead to
other potentially harmful effects, in particular, a possible
overhead in energy usage and a possible reduction in
maximum throughput. These detrimental effects, if
not well understood and controlled, could offset the
benefits of server virtualization. As a result, clear
understanding and precise modeling of server energy
usage in data centers, coupled with virtualization in
cloud computing, will provide a fundamental basis for
data center operational optimizations, to accomplish
the green ICT vision [14].
In this research, we investigate the impact of server
virtualization on energy usage for data centers, with an
ultimate goal to provide insights for optimizing data-center architecture and operations. In particular, we
adopt an empirical approach to measure the energy consumed by servers under different virtualization configurations, including a benchmark case (i.e., physical machine) and two alternative hypervisors (i.e., Xen and
KVM). Our experiments aim to characterize the energy overhead incurred by computing-intensive tasks
and networking-intensive applications, corresponding to
two important resources (computing and networking; energy
consumption in storage is not investigated in this research,
because it is comparatively smaller than that for computing
and networking [15]) in cloud computing [16], under the
situation in which a physical server is virtualized into
multiple virtual machines (VM). During the experiments, we
obtain statistics for the CPU usage, the power level, the elapsed
time, and the energy consumption, under both local
(internal) and network (external) traffic stresses.
Our empirical characterization generates a fundamental understanding of server energy usage in the context
of cloud computing and suggests engineering insights for
energy-efficient data center operations. In-depth analysis of the empirical results reveals a few fundamental
insights about the impact of virtualization on server energy usage, including:
• Server architecture is still far from being energy-proportional, in that a significant amount of power is consumed when the server is idle, thus opening an opportunity for server consolidation in data centers for reducing energy cost.
• Virtualized servers consume more energy than physical ones, for both computing- and networking-intensive traffic. The energy overhead from virtualized servers increases as the utilization of physical resources increases.
• The energy overhead resulting from server virtualization highly depends on the hypervisor used, which in turn is determined by the software architecture of the hypervisor (e.g., CPU scheduling, I/O design, etc.).
• For a given traffic load, the energy consumption can be minimized by launching an optimal number of virtual machines.
• In a multi-core server running multi-process applications, physical servers, if a multi-core optimization mechanism is absent, could consume more energy than virtualized servers.
These empirical insights suggest some operational
optimizations toward an energy-efficient data center
design and operations, including
• Server consolidation for green data centers should aim to balance a fundamental trade-off between the energy saving from shutting down idle servers and the detrimental effects (i.e., the energy overhead and the throughput reduction from the hypervisor) due to server virtualization.
• Hypervisors should be architected with energy consumption objectives, while providing the maximal flexibility in resource management.
• Resources should be allocated dynamically according to the real-time demand, with an objective to minimize the energy consumption.
• Multi-core scheduling algorithms should be incorporated in hypervisor design for virtualized servers and OS design for physical servers, to minimize the energy consumption.
The remainder of this paper is structured as follows.
We describe the virtualization model and its impact
on energy consumption in Section 2. Our detailed
experimental setup is illustrated in Section 3. Section
4 presents the empirical results and the fundamental
insights. Relevant engineering impacts on data-center
operations are outlined in Section 5.
In Section
6, we explain the fundamental trade-off in server
virtualization and its application to server consolidation
in data center operations. Section 7 presents previous
work in similar realms. Section 8 concludes our work
and discusses future research directions.
2. SERVER VIRTUALIZATION
One of the key technical enablers for server consolidation in cloud computing is virtualization. Specifically,
a server administrator uses a software application (i.e.,
hypervisor) to divide one physical server into multiple
isolated virtual machines (VM), each of which runs its
own guest operating system and specific applications.
In this section, we present two leading virtualization
models, including their implementation mechanisms in
I/O, CPU and networking resource management. The
impacts of these implementation details on server energy usage will be characterized with our proposed measurement objectives.
2.1. Virtualization Model
Hypervisor, also referred to as virtual machine
manager (VMM), is a hardware virtualization
technique that allows multiple operating systems to
run concurrently on a host server. The hypervisor
presents to the guest operating systems a virtual
operating platform and manages the execution of the
guest operating systems. Existing hypervisors, based
on their relationship with the hardware platform, can
be classified into two alternative types [17], including:
• Type 1: hypervisor runs directly on the host's hardware to control the underlying hardware and to manage the guest operating system. The guest operating system thus runs on the level above the hypervisor.
• Type 2: hypervisor runs as a module within the operating system environment. In this case, the hypervisor layer can be considered as a second software level, while the guest operating system runs at the third level above the hardware.
In Figure 1, we illustrate the logic structure of both
types of hypervisors.
In this research, we focus on the leading open-source hypervisors, including Xen [18] and KVM [19],
as exemplary hypervisors.
Specifically, Xen is a
type-1 hypervisor, which directly interfaces with the
underlying hardware and uses a special, privileged
domain 0 to manage other kernel modified guests
[18]. KVM is designed as a type-2 hypervisor, in
which the virtualization interface is designed to function
the same as the actual physical hardware [19]. As
such, their design principles and mechanisms are
different in the areas of hardware resource scheduling,
interrupt handling, and virtual machine management,
as elaborated in the following subsections.

FIGURE 1. Two Alternative Types of Hypervisors: type 1 hypervisor runs directly over the hardware, and type 2 hypervisor runs within the host system.
2.1.1. Virtualized I/O Mechanism
Xen exposes a hypercall mechanism (also known
as paravirtualization interface) in which all guest
operating systems have to be modified to perform
privileged operations (e.g., updating page table).
Moreover, an event notification mechanism is proposed
to deliver virtual interrupts derived from real device
interrupts and other notifications to VMs.
KVM is an extension of the QEMU emulator with
support for the x86 VT extensions (VTx), and it
typically uses full virtualization [19]. Guest operating
systems above KVM do not need to be modified, and they
appear as normal Linux processes instead of separate
domains. When I/O instructions are issued by a
guest operating system, a process context switch in
the hypervisor is triggered to allow I/O signals to pass
through the hypervisor and the host OS.
The difference in virtualized I/O mechanisms for Xen
and KVM directly impacts the energy consumption for
virtualized servers. Specifically, Xen allows guest VMs
to make system calls without invoking the kernel in the
host operating system, whereas KVM incurs additional
kernel operations to support the I/O behaviors of guest
systems. The set of additional operations translates to
extra CPU cycles and memory accesses. Consequently,
KVM is expected to consume more energy than Xen does,
for similar traffic patterns and loads.
2.1.2. Virtualized CPU Model
The current default CPU scheduler in Xen is a
proportional fair share scheduler, called Credit-Based
CPU scheduler [20]. This scheduler assigns each VM a
weight, and an optional cap. The weight indicates the
relative physical CPU allocation of a domain, and the
cap sets an absolute limit by percentage of a real CPU
on the amount of time a domain can consume. The
scheduler, running on a separate accounting thread in
the host system, transforms the weight into a credit
allocation for each virtualized CPU (VCPU). When a
VCPU is running, it consumes the credit allocated to
it. Once the VCPU runs out of the credit, it only runs
when other, more thrifty VCPUs have finished their
execution. Periodically, the accounting thread is re-activated and distributes more credits to VMs [21].
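To make the credit-accounting mechanism concrete, the following minimal Python sketch simulates proportional sharing with credits: an accounting routine periodically hands out credits in proportion to each VCPU's weight, running consumes credits, and a VCPU that has run out only executes when no more thrifty VCPU is available. The class names, the credit budget and the accounting period are illustrative assumptions of ours, not Xen's actual implementation.

# A minimal, illustrative sketch of credit-based proportional sharing
# (simplified from the description above; not Xen's actual implementation).

from dataclasses import dataclass

CREDITS_PER_PERIOD = 300  # total credits handed out each accounting period (arbitrary unit)

@dataclass
class VCPU:
    name: str
    weight: int   # relative share, like Xen's per-domain weight
    credit: int = 0
    runtime: int = 0

def distribute_credits(vcpus):
    """Accounting thread: hand out credits in proportion to each VCPU's weight."""
    total_weight = sum(v.weight for v in vcpus)
    for v in vcpus:
        v.credit += CREDITS_PER_PERIOD * v.weight // total_weight

def pick_next(vcpus):
    """Prefer VCPUs that still hold credit; a VCPU that ran out only runs
    when no 'more thrifty' VCPU is runnable."""
    with_credit = [v for v in vcpus if v.credit > 0]
    pool = with_credit if with_credit else vcpus
    return max(pool, key=lambda v: v.credit)

def run_one_tick(vcpus, cost=10):
    v = pick_next(vcpus)
    v.credit -= cost      # running consumes the credit allocated to the VCPU
    v.runtime += cost
    return v

if __name__ == "__main__":
    vms = [VCPU("dom1", weight=256), VCPU("dom2", weight=512)]
    for tick in range(60):
        if tick % 30 == 0:   # periodic re-activation of the accounting thread
            distribute_credits(vms)
        run_one_tick(vms)
    print({v.name: v.runtime for v in vms})  # dom2 gets roughly twice the CPU time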
As a comparison, KVM is a part of Linux and uses the
regular Linux CPU scheduler and memory management
[22]. It is known that by default, KVM makes use of
the Linux Kernel component, namely Completely Fair
Scheduler (CFS), to treat every KVM guest machine as
a normal thread. Every task running on KVM has a
priority from 0 to 139 in the kernel. The range from 0
to 99 is reserved for real-time processes, and the range
from 100 to 139 is for the user space. The smaller
the priority number, the more important the task is
considered. In contrast to Xen's Credit Scheduler, the CFS
scheduler implementation is not based on run queues.
Instead, a red-black tree implements a timeline of future
task execution. This data structure ensures that every
runnable task chases other tasks to maintain a balance
of execution across the set of runnable tasks including
guest domains.
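The fair-sharing behaviour of CFS can be illustrated with a short Python sketch that always runs the task with the smallest virtual runtime, weighting the accumulation of virtual runtime by task priority. The real kernel keeps runnable tasks in a red-black tree keyed by vruntime; the heap below merely reproduces the same ordering and is an illustrative simplification, not the kernel code.

# Illustrative CFS-style picking: always run the task that has received the
# least virtual runtime (vruntime). A binary heap stands in for the kernel's
# red-black tree, which provides the same smallest-first ordering.

import heapq

class Task:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight     # derived from the task's priority (nice value) in the kernel
        self.vruntime = 0.0

    def __lt__(self, other):     # heap ordering: smallest vruntime runs next
        return self.vruntime < other.vruntime

def run(tasks, ticks, slice_ms=10.0):
    heap = list(tasks)
    heapq.heapify(heap)
    usage = {t.name: 0.0 for t in tasks}
    for _ in range(ticks):
        t = heapq.heappop(heap)            # task that has received the least virtual time
        usage[t.name] += slice_ms
        t.vruntime += slice_ms / t.weight  # heavier-weighted tasks accumulate vruntime slower
        heapq.heappush(heap, t)
    return usage

if __name__ == "__main__":
    # Each KVM guest is just a normal thread competing with any other task.
    tasks = [Task("kvm-guest-1", weight=2), Task("kvm-guest-2", weight=1), Task("backup-job", weight=1)]
    print(run(tasks, ticks=400))  # kvm-guest-1 receives roughly half of the CPU time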
In spite of their different implementations, these
two CPU schedulers fairly allocate CPU cycles among
all the active guest systems. As a result, the CPU
scheduler of either Xen or KVM is capable of balancing
the global load across multiple processors, balancing both
the computing and the energy usage, which will be verified by
our measurements.
2.1.3. Virtualized Networking Model
Xen with the default configuration uses network
bridging and virtual firewall router (VFR) within
domain 0 to allow all domains to appear on the network
as individual hosts. Each guest domain attaches to one
or more virtual network interfaces (VIF), each of which consists
of two I/O rings of buffer descriptors for transmitting
and receiving, to identify its unique IP address.
In comparison, KVM inherits the networking
virtualization ability from QEMU, using TUN (network
TUNnel)/TAP (network tap) devices in the Linux kernel to create
virtual network bridges and routing.
The bridge
essentially emulates a software switch, allowing each
VM to have individual networking resources.
KVM would consume more energy than Xen when they
are exposed to networking-intensive tasks, because significant
software operations are required by KVM, while Xen takes
advantage of its modified interface, which needs relatively
less software involvement.
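For illustration, the kind of TAP-and-bridge plumbing that backs KVM guest networking can be set up with standard iproute2 commands, wrapped here in a small Python helper. The device names tap0 and br0 are arbitrary examples, and in practice a management stack such as libvirt or the QEMU scripts creates these devices automatically; the sketch assumes a Linux host and root privileges.

# Illustrative only: creating a software bridge and a TAP endpoint of the kind
# used for KVM guest networking (device names tap0/br0 are arbitrary examples;
# management stacks normally create these for you). Requires root on Linux.

import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

def create_bridge_with_tap(bridge="br0", tap="tap0"):
    sh(f"ip link add name {bridge} type bridge")   # software switch shared by the guests
    sh(f"ip tuntap add dev {tap} mode tap")        # per-guest virtual NIC endpoint
    sh(f"ip link set {tap} master {bridge}")       # plug the TAP port into the bridge
    sh(f"ip link set {bridge} up")
    sh(f"ip link set {tap} up")

if __name__ == "__main__":
    create_bridge_with_tap()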
2.2. Measurement Objectives
The objective of our experiment is twofold. First,
we aim to quantify and profile the energy consumption
of virtualized servers under different traffic patterns.
Second, our measurement aims to map out the energy
consumed in different virtualization components, as
shown in Table 1. Our measurements will be based
on a comparison between virtualized servers and physical
servers under local computing-intensive tasks and
networking-intensive applications.

FIGURE 2. Experimental Setup: three systems under test were configured, including a non-virtualized server, a Xen-virtualized server and a KVM-virtualized server.
3. EXPERIMENTAL SETUP
This section describes our experimental environment.
3.1. Physical Setup
Figure 2 illustrates our experimental setup, consisting
of three identical physical servers. The machines under
test are Inspur 3060 servers, each of which contains a
quad-core Intel 2.13 GHz Xeon processor with 8 MB L2
cache, 2 GB RAM, a 500 GB hard disk and a 1 Gigabit
Ethernet card. All of them are connected to our test
intranet over a D-link GDS-1024T 24-port 1000 Base-T switch. We use CentOS 5.6-final-x86_64 with Linux
kernel 2.6.18 as our OS platform for both host and guest
systems. Xen 3.0.3 and KVM 83 are installed from
CentOS packages on server machines B and C,
respectively. Three guest virtual machines running
on servers B and C are each allocated 4 VCPUs, 512
MB RAM and a 50 GB image, and also have CentOS
5.6-final-x86_64 installed as the guest OS. We leave all the software
parameters intact. In addition, a Kill-A-Watt power meter,
with a standard accuracy of 0.2%, is connected to each
physical server, measuring the energy usage. Finally,
our experiment is controlled by a desktop computer,
connected to the testbed intranet as a monitor.
3.2. Test Case Design
We begin with collecting the background energy
consumption when all the servers are idle. Following
that, we launch a set of different profiles of local and
network traffic to stress all three servers. During
these tests, statistics on the CPU usage, the power
consumption, and the task completion time, are
collected. Detailed test cases are explained as follows.
3.2.1. Local Computation Benchmark
For the computing task benchmark, an easy and widely
accepted approach is to calculate π to as many decimal
places as possible. In our experiment, the bc command in Linux is
utilized to calculate the mathematical constant π to an accuracy of 100,000 digits after the decimal point.

TABLE 1. Major Components of Virtualization Model
                 Xen                        KVM
Structure        Type 1 Hypervisor          Type 2 Hypervisor
I/O Mechanism    Hypercall & Virtual Event  Software Context Switch & VTx
CPU Scheduler    Credit-Based Scheduler     Completely Fair Scheduler
Networking       VFR & VIF                  TUN & TAP
We simultaneously run multiple instances of this π-based CPU/RAM-intensive application as concurrent
processes to generate different local computing loads.
Specifically, five cases with an increasing number of
concurrent processes, ranging from 3 to 7 instances,
are tested for two domain configurations (i.e., 2 or
3 active domains). On the physical machine, all the
instances are executed on the same host OS, while on
the virtualized servers, the concurrent instances are
distributed evenly across all the active domains. For
instance, when there are 3 guest domains running 7
instances in parallel, two domains will execute two
instances each and the last domain will execute 3
instances. The same configuration will rotate among
all three guest domains and the final result will be
calculated as the mean of three sets of data collected.
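A minimal Python sketch of this workload generator is given below: it launches N concurrent bc processes, each evaluating 4*a(1) (i.e., 4 arctan(1) = π) at a scale of 100,000 digits, and reports the elapsed time once all instances finish. The command-line handling and the default instance count are illustrative assumptions.

# A sketch of the local computation workload: N concurrent bc processes, each
# computing pi (4*atan(1)) to 100,000 decimal places, as described above.

import subprocess
import sys
import time

DIGITS = 100000

def launch_pi_instances(n):
    procs = []
    for _ in range(n):
        # 'bc -l' loads the math library; a(1) is arctan(1) = pi/4.
        p = subprocess.Popen(
            ["bc", "-l"],
            stdin=subprocess.PIPE,
            stdout=subprocess.DEVNULL,
            text=True,
        )
        p.stdin.write(f"scale={DIGITS}; 4*a(1)\n")
        p.stdin.close()
        procs.append(p)
    return procs

if __name__ == "__main__":
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 5   # 3 to 7 instances in the test cases
    start = time.monotonic()
    for p in launch_pi_instances(n):
        p.wait()
    print(f"{n} concurrent instances completed in {time.monotonic() - start:.1f} s")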
3.2.2. HTTP Request Benchmark
The network-intensive traffic benchmark is simulated
through HTTP requests, since HTTP-based traffic
is the most widely used, accounting for more
than 85% of the total web usage [23]. Our approach is
similar to the web-server workload benchmark used in
[24]. The configurations on the server and client side
are explained as follows.
On the server side, three Apache servers (version
2.2.18) are configured on all servers under test (SUT).
On the physical server (Server A), these three Apache
servers are executed on three TCP ports (80, 81 and 82)
for traffic segregation. For virtualized servers (Server
B and C), the three instances of HTTP servers are
uniformly distributed across all active guest domains,
for both domain configurations (i.e., 2 or 3 active
domains). The same TCP ports are used for fair
comparison. The contents stored on the HTTP servers
are 1000 unique files retrieved from one commercial web
site, with a mean file size of 10.8 KB. In addition, we
configure the Apache servers to allow a high
volume of web traffic, by increasing the MaxClients
value (the limit on the number of simultaneous requests that
will be served) from 256 to 2000, and the MaxRequestsPerChild
value (the limit on the number of requests that an individual
child server process will handle) from 100 to 10000.
On the client side, we use the ab (Apache Bench) tool [25]
to simulate real web traffic. Specifically, three clients
are configured to generate HTTP GET requests at specific
rates, each of which is dedicated to one instance of the
Apache server on one TCP port. Every client sends
5000 requests for each file, to cover two important test
cases of the HTTP benchmark: i) single unique
hit, in which the same content file is served to the client
consistently from memory, and ii) all unique hit, in which
all content files are read from the disk into memory to
serve the client. Note that the cache-miss test, in which
the content files are retrieved from remote servers to serve
the client, is not conducted because we are only interested in
networking energy consumption, which is covered
by the two cases tested. In this test profile, the overall
size of data transferred is approximately 150 GB, which
is large enough to thoroughly exercise the network
interface controller (NIC) device.
In our experiment, we gradually increase the request
rate by the clients to scope the energy consumption as
a function of the workload. Specifically, four increasing
request rates of 2,500 requests/sec, 5,000 requests/sec,
10,000 requests/sec and 15,000 requests/sec are used to
simulate low, moderate and peak web traffic workloads,
as suggested by the workload of a real commercial web
server [26].
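As an illustration, one client's load generation with ab can be scripted as follows; the host name, file path and concurrency level are placeholders rather than the exact values used in our testbed.

# A sketch of one client's load generation with ApacheBench (ab). The host
# name, port, path and concurrency level are illustrative placeholders; each
# of the three clients targets one Apache instance (port 80, 81 or 82).

import subprocess

def run_ab(host="server-b.example", port=80, path="/files/0001.html",
           requests=5000, concurrency=100):
    cmd = [
        "ab",
        "-n", str(requests),      # total number of requests for this file
        "-c", str(concurrency),   # number of requests issued in parallel
        f"http://{host}:{port}{path}",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # ab prints summary lines such as "Requests per second: ..." in its report.
    for line in out.splitlines():
        if line.startswith(("Requests per second", "Time taken for tests")):
            print(line)

if __name__ == "__main__":
    run_ab()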
3.3. Methodology for Result Gathering
3.3.1. CPU Utilization
We implement a Python script on each server to
obtain the average CPU utilization during the test
period. Specifically, by using the top command, it starts
monitoring as soon as the workloads are initialized,
and ends the data gathering once the workloads are
completed. Finally, it automatically records the
obtained results into a local file.
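A sketch of such a monitoring script is shown below: it samples top in batch mode at a fixed interval, averages the non-idle CPU percentage, and appends the result to a local file. The parsing of the Cpu(s) line is an assumption, since its exact format varies across top versions, and the sampling interval is illustrative.

# A sketch of the CPU-monitoring script described above: sample `top` in batch
# mode for the duration of a workload, average the utilization, and append the
# result to a local file. The Cpu(s)-line parsing is an assumption; its exact
# format varies between top versions.

import re
import subprocess
import time

IDLE_RE = re.compile(r"(\d+\.\d+)\s*%?\s*id")   # e.g. "97.3%id" or "97.3 id"

def sample_cpu_busy():
    out = subprocess.run(["top", "-b", "-n", "1"], capture_output=True, text=True).stdout
    m = IDLE_RE.search(out)
    return 100.0 - float(m.group(1)) if m else None

def monitor(workload_done, interval=2.0, logfile="cpu_usage.log"):
    samples = []
    while not workload_done():
        busy = sample_cpu_busy()
        if busy is not None:
            samples.append(busy)
        time.sleep(interval)
    average = sum(samples) / len(samples) if samples else 0.0
    with open(logfile, "a") as f:
        f.write(f"{time.ctime()}  avg CPU usage: {average:.1f}%\n")
    return average

if __name__ == "__main__":
    t_end = time.monotonic() + 10   # stand-in for "until the workload completes"
    print(monitor(lambda: time.monotonic() > t_end))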
3.3.2. Elapsed Time
For the local computation benchmark, the elapsed time
is obtained by the server via a Python script. The script
uses a timer to record the test period.
For the http request benchmark, the elapsed time is
obtained by the monitor via a Python script. By using
the iftop command, the script uses a timer to record the
period during which there is significant network traffic.
3.3.3. Power/Energy Consumption
The average power consumption and the overall energy
consumption are obtained from the readings on the power
meters. Specifically, we manually reset the power
meter before each test round, and record the readings,
including power and energy consumption, once each
specific benchmark has finished.

FIGURE 3. Background Power Consumption
4. EMPIRICAL RESULTS AND FUNDAMENTAL INSIGHTS
In this section, we present our empirical findings of the
impact of virtualization on server energy consumption.
Moreover, we generalize a few fundamental insights
from this set of empirical findings.
4.1. Background Energy Consumption
In Figure 3, we plot the background power consumption
of the physical machine A, the Xen-based server B
and the KVM-based server C. Specifically, we measure
the power consumed by all the servers when they are
turned off and when they are turned on but idle. For
the virtualized servers (i.e., B and C), two domain
configurations (i.e., 2 or 3 active domains) are tested.
The bar indicates the average power consumption, and
the line refers to the fluctuation based on our observed
maximum and minimum power during the test period.
From Figure 3, we have the following empirical
findings, including:
Finding (a): All the servers consume about the
same power (around 2.8W) when turned off, but
plugged into their power supplies. This amount of
power is simply wasted in supporting server internal
indicators (e.g., LED). Therefore, it is advantageous to
turn off the power supply, instead of the servers, when
servers are scheduled to shut down.
Finding (b): When servers are turned on and
active virtual machines stay idle, the power overhead of
different hypervisors against the physical server varies
significantly. The Xen-based server consumes almost
the same amount of power as the physical server does:
it consumes 63.1W (3 active Xen VMs) and 63.0W (2
active Xen VMs) of power, which is only 0.47% and
0.32% more than the 62.8W consumed by the physical
server. However, the KVM-based server incurs a much
higher overhead, consuming 70.1W (11.6% overhead)
for 3 active VMs and 68.8W (9.55% overhead) for 2
active VMs. Moreover, the power usage of the KVM-based server C fluctuates within a wider range.

FIGURE 4. CPU Usage Comparison for Local CPU-intensive Task Benchmark
Finding (b) can be explained by the different
virtualization models adopted by Xen and KVM, which
impact the usage of CPU and RAM resources. The
CPU utilization of the idle physical server is generally
less than 0.3%, compared with approximately 0.7%-0.8%
for the Xen-based server, and 0.8%-2.6% for the
KVM-based server. The extra CPU usage of virtualized
servers accounts for a portion of the energy overhead.
The larger energy overhead for the KVM-based server
can also be due to the larger memory footprint in KVM,
as indicated by the results of memory test in [27, 28].
Tests are being conducted to collect additional data for
principal component analysis to attribute the overhead
to extra CPU cycles and memory footprint.
4.2. Local Calculation Benchmark
Results from the local computation benchmark are illustrated in Figures 4-8 and Table 2. Figures 4-7 present
our measurements of the CPU usage, the power consumption, the task completion time, and the calculated
energy consumption, respectively. Figure 8 presents the
relative energy overhead of the virtualized
servers compared to the physical server. Table 2 summarizes the energy overhead for all the test cases.
In the following, a few key observations from these
empirical results are explained.

TABLE 2. Energy Consumed by CPU-Intensive Benchmark (unit: kWh)
Workload   3 Tasks   4 Tasks   5 Tasks   6 Tasks   7 Tasks
Baseline   0.0065    0.0072    0.0104    0.0109    0.0130
Overhead   0.0%      0.0%      0.0%      0.0%      0.0%
2 Xen      0.0065    0.0072    0.0093    0.0107    0.0130
Overhead   0.0%      0.0%      -10.6%    -1.8%     0.0%
3 Xen      0.0065    0.0073    0.0093    0.0110    0.0131
Overhead   0.0%      1.4%      -10.6%    0.9%      0.8%
2 KVM      0.0067    0.0082    0.0102    0.0122    0.0137
Overhead   3.1%      13.9%     -1.9%     11.9%     5.4%
3 KVM      0.0068    0.0083    0.0113    0.0147    0.0161
Overhead   4.6%      15.3%     8.7%      34.9%     23.8%

FIGURE 5. Power Consumption Comparison for Local CPU-intensive Task Benchmark

FIGURE 6. Completion Time Comparison for Local CPU-intensive Task Benchmark

Finding (c): It was observed that the virtualized
server could consume less energy than the physical
server does. Specifically, when 5 instances of the
bc application are executed in parallel (the number
of concurrent instances is one more than the number of
CPU cores in the server), the energy overhead is negative
for the Xen-based server, as shown by the valley point in
Figure 8. Such an observation can be understood as the
interplay between the concurrent processes and the CPU
cores in a multi-core server. For the physical server,
we observe that 4 bc instances complete first and
the last instance is completed much later. This is
further verified by the observation that the CPU usage
maintains nearly 100% until the first 4 instances are
completed, and then it drops to around 25% afterwards.
This observation can be attributed to a lack of a multi-core scheduling mechanism in Linux across different
processes, which can be compensated for via a multi-thread processing paradigm. As a comparison, in the
virtualized servers, all the instances complete almost
at the same time, due to the built-in CPU scheduler
in hypervisors allocating CPU cycles across active
VMs. Consequently, the CPU usage on the Xen-based
server remains at a high level of 99.8%, compared to
87.7% for the physical server. In this case, the Xen-based server, either running two or three VMs, takes
approximately 10% less time and consumes 11% less
energy than the physical server. For the KVM-based server, the advantage of the CPU scheduler is
reversed by the extra penalty of the hypervisor in most
cases, except for the case when 2 active VMs are
configured, resulting in a 2% energy saving compared
to the physical server. This observation
suggests that, if there is no binding between running
processes and CPU cores, the native operating system
cannot truly take advantage of the multi-core architecture; in
contrast, virtualized systems, based on either Xen or
KVM, are able to partition computing resources into
smaller pieces to achieve energy savings.
Finding (d): The KVM-based server consumes
more energy than the Xen-based server. For example,
when processing 7 concurrent tasks, the server based on
2 KVM VMs consumes 5.4% more energy than that based
on 2 Xen VMs, and the gap reaches 23% between 3
KVM VMs and 3 Xen VMs. This is due to the fact
that the KVM hypervisor consumes more CPU cycles
and occupies a larger memory footprint, compared to the
Xen hypervisor. The additional requirement for system
resources translates into higher energy consumption.
Finding (e): The number of active VMs affects
the energy consumption for the KVM-based server.
Specifically, when configuring three active VMs, the
KVM-based server consumes more energy than that
consumed by two active VMs configured on the same
server. Such an observation can be understood from the
Frequent Lock Holder Preemption (LHP) mechanism,
investigated in [22]. A guest VCPU in the KVM-based
server is a normal thread in the host operating system,
which may be preempted when the host system de-schedules the VCPU threads. If the preempted VCPU
is running in a critical section, the lock will be held for
a long time from the perspective of the guest operating
system. The probability of LHP increases with a higher
number of active VMs. As a result, CPU resources
are simply wasted in the lock-holding period, which in
turn increases the task completion time. It is observed
that the average power consumption for the KVM-based server with 3 active VMs is the lowest, but the
task completion time is the longest, resulting in a high
energy consumption. This suggests that the number of
VMs can be optimally configured to reduce the energy
consumption for the KVM-based server.

FIGURE 7. Energy Consumption Comparison for Local CPU-intensive Task Benchmark

FIGURE 8. Energy Overhead of CPU-Intensive Benchmark: the abnormality for 5 concurrent processes is explained in Finding (c).

FIGURE 9. CPU Usage Comparison for HTTP Benchmark
4.3. HTTP Request Benchmark
Results from the HTTP request benchmark are plotted
in Figures 9-13 and Table 3, respectively. Figures 9-12
present the statistics collected for our HTTP request
benchmark test, including the average CPU usage, the
average power, the task completion time, and the total
energy consumption. Figure 13 illustrates the energy
overhead for the virtualized servers, compared to the
physical server. Table 3 summarizes the results of the
energy consumption for different test cases.

FIGURE 10. Power Consumption Comparison for HTTP Benchmark
In the following, we highlight a few findings suggested
by the set of data collected for the HTTP request test.
Finding (f ): The virtualization overhead for
network-intensive traffic is significantly larger than that
for computing-intensive traffic. For the Xen-based
server, the energy overhead for computing-intensive
traffic is less than 5%, while the overhead for network-intensive traffic could rise up to 70%. The same
situation happens to the KVM-based server.
The cause of this finding is at least two-fold. First,
notice that, for network-intensive traffic, the CPU
usage for virtualized servers is higher than that for the
physical server; while for computing-intensive traffic,
the CPU usage for all the servers is almost equal
for the same test case. This difference suggests that
a significant amount of CPU usage is budgeted for
the hypervisors to handle the vNIC operations (i.e.,
VFR/VIF for Xen and TUN/TAP for KVM). Second,
it is shown in [29] that the probability of the occurrence
of Lock Holder Preemption (LHP) for I/O-intensive
workloads reaches 39% on average for a virtualized
machine. As a result, it takes much longer for the
virtualized system to complete the task, translating into
a higher energy cost.

FIGURE 11. Completion Time Comparison for HTTP Benchmark

FIGURE 12. Energy Consumption Comparison for HTTP Benchmark

FIGURE 13. Relative Energy Overhead of Networking Benchmark
Finding (g): The energy overhead for the
virtualized server is highly correlated with the number
of active VMs. Specifically, the energy overhead is
consistently higher for a larger number of active VMs.
The energy overhead for 3 active VMs in the KVM-based server is around 1.5 times higher than that for 2
active VMs; similarly, the energy overhead for 3 active
VMs in the Xen-based server is almost twice that for
the case of 2 active VMs. Moreover, the gap for the
KVM-based server is higher. For example, in the case
of 15,000 req/sec, the overhead gap between 3 active
VMs and 2 active VMs for the KVM-based server is
more than 80%, while it is around 20% for the Xen-based server. The explanation could be that more active
guests aggravate the impact of LHP. Therefore, it takes
much longer to complete the task with more active guest
domains, resulting in additional energy consumption.

FIGURE 14. Completion Time Curve for HTTP Benchmark
Finding (h): The network throughput for the
KVM-based server reaches its maximum between
10,000 reqs/sec and 15,000 reqs/sec, while the network
throughput for the physical server and the Xen-based
server continues to grow as the traffic rate increases.
This observation is made clearer in Figures 14 and 15,
where the task completion time and the energy cost are
plotted as functions of the request rate. Specifically,
when the request rate is 15,000 reqs/sec, the KVM-based server takes a longer time to complete the task and
thus consumes more energy, compared to the case of
10,000 reqs/sec; as a comparison, the task completion
time and the energy cost for the physical machine and
the Xen-based server monotonically decrease as the
request rate increases up to 15,000 reqs/sec.
This observation is largely due to the extra memory
footprint for the KVM hypervisor. In the Apache
web server, each serving request takes some amount
of memory. The maximum number of requests that
can be served simultaneously is thus proportional to
the amount of available resources. For the case of
the KVM hypervisor, the extra memory footprint shrinks
the amount of available memory for request serving. As
a result, the network throughput is smaller, compared
to the physical server and the Xen-based server.

TABLE 3. Energy Consumed by Networking Benchmark (unit: kWh)
Workload   2500 req/sec   5000 req/sec   10000 req/sec   15000 req/sec
Baseline   0.0456         0.0290         0.0181          0.0165
Overhead   0.0%           0.0%           0.0%            0.0%
2 Xen      0.0493         0.0298         0.0251          0.0203
Overhead   8.1%           2.8%           38.7%           23.0%
3 Xen      0.0532         0.0370         0.0308          0.0237
Overhead   16.7%          27.6%          70.2%           43.6%
2 KVM      0.0728         0.0538         0.0438          0.0478
Overhead   59.6%          85.5%          142.0%          189.7%
3 KVM      0.0830         0.0616         0.0543          0.0614
Overhead   82.0%          112.4%         200.0%          273.1%

FIGURE 15. Energy Consumption Curve for HTTP Benchmark

FIGURE 16. Expansive Energy Overhead due to Server Virtualization
Finding (i): The marginal power consumed by
the server under different load conditions is limited,
compared to the power consumed when the server is
idle. Specifically, the additional power consumed by
the server under different levels of networking requests
is at most 37.3% above the idle state, and the
maximum additional power consumption for the local
computation benchmark is 57.6%. Moreover, the
marginal power consumption is highly correlated with
the CPU usage observed in our experiments. As a result,
our experiment verifies a previously proposed power consumption
model for the server, in which the power consumption
of the server can be viewed almost as an affine function
of CPU usage, with the idle power consumption as the
y-intercept [30]. It is desirable for the y-intercept to be
as small as possible, to achieve an energy-proportional
architecture.
Finding (j): The energy overhead for the
virtualized servers is expansive as a function of the
traffic throughput. As illustrated in Figure 16, where
the lines are generated by one-degree polynomial
curve fitting based on the power consumption of
different configurations, the power gap between the
baseline and the virtualized servers for both Xen and
KVM hypervisors increases as the throughput increases,
before the maximum throughput of the KVM-based
server is reached. When there is no network traffic,
the gap between Xen and the baseline is around 1% (0.8
Watt), and the gap between the KVM-based server and
the baseline server is approximately 10% (6.9 Watt).
When the throughput grows to 10,000 reqs/sec, the
gap becomes 15.2% (10.8 Watt) for Xen and 11.2% (7.9
Watt) for KVM.
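The one-degree polynomial fitting behind Figure 16 can be reproduced with a short numpy sketch such as the one below; the observation arrays in the example are dummy placeholders standing in for the measured (request rate, average power) pairs of each configuration.

# A sketch of the one-degree (straight-line) polynomial fit used to draw the
# trend lines in Figure 16. The arrays below are dummy placeholder values,
# not the measured data.

import numpy as np

def fit_line(rates, watts):
    """Degree-1 least-squares fit; returns the [slope, intercept] coefficients."""
    return np.polyfit(rates, watts, deg=1)

def gap_at(rate, fit_high, fit_low):
    """Fitted power gap between two configurations at a given request rate."""
    return float(np.polyval(fit_high, rate) - np.polyval(fit_low, rate))

if __name__ == "__main__":
    rates = np.array([0.0, 2500.0, 5000.0, 10000.0])
    baseline_w = np.array([62.8, 66.0, 69.0, 73.0])   # dummy placeholder values
    xen_w = np.array([63.0, 68.0, 72.0, 83.0])        # dummy placeholder values
    gap = gap_at(10000, fit_line(rates, xen_w), fit_line(rates, baseline_w))
    print("fitted power gap at 10,000 req/s:", round(gap, 1), "W")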
4.4. Fundamental Insights
Generalizing from the list of empirical findings, we
present the following fundamental insights about the
impact of virtualization on server energy consumption,
as follows:
(1) The server is still far from energy-proportional.
The idle server even consumes approximately two
thirds of the energy consumed when its computing resources
are fully occupied. Hence, it will be advantageous to
consolidate applications from multiple servers to
one and turn off those idle servers to save energy.
(2) The virtualized server in general consumes
more energy than the physical server does,
for both computing-intensive and networking-intensive tasks, even when they are idle. Besides,
the energy overhead for the virtualized servers
increases as the resource utilization increases.
When the virtualized servers are idle, the
Xen hypervisor incurs less than 1% energy
overhead, and the KVM hypervisor contributes
around 10% extra energy cost. For the networking-intensive benchmark, Xen's virtual firewall router,
virtual network interface, and virtual event
mechanism add an energy overhead ranging from
2.8% to 70.2% for different workloads and VM
configurations; and KVM's Linux-kernel-based
virtual network bridge and x86 VTx invocation
result in an energy overhead between 59.6% and
273.1% for various combinations of configuration
and workload.
(3) The two types of hypervisors exhibit different
characteristics in energy cost for various tasks. The
KVM-based server consumes more energy than the
Xen-based server under the same test case, due to
their different architectural principles. Specifically,
KVM embeds the virtualization capability into
the native Linux kernel. As a result, it adds
one more layer into the software stack and
accordingly consumes more energy. Moreover,
KVM’s networking model consumes more energy
(on average nearly twice) than that of Xen, and
the KVM-based server consumes on average 12%
more energy than its counterpart with a Xen
hypervisor. On the other hand, the benefit of using
a type-2 hypervisor is that it requires almost no
modification to the host operating system; while
the host operating system needs to be modified to
work with a type-1 hypervisor.
(4) Significant energy saving can be achieved by
launching the traffic to an optimum number of
virtual machines.
In our measurement, the
virtualized server with 2 active VMs consumes
less energy than the one with 3 active VMs.
Specifically, approximately 20% energy for KVM
and 15% energy for Xen on average could be
conserved for all cases under network-intensive
traffic benchmark, by migrating tasks from one VM
to another and turning off the idle VM. For the
computing-intensive traffic benchmark, a similar
pattern is observed for the KVM-based server when
more than 5 bc instances are executed in parallel.
(5) When a multi-core server is running multi-process
applications, the physical machine could consume
more energy than the virtualized servers. It is
due to a lack of the multi-core optimization in
the physical machine. By default, each application
is executed with a single thread, which can be
processed only by one CPU core, even if other cores
are idle. As a comparison, both Xen and KVM are
able to distribute physical CPU cores into virtual
CPU cores, avoiding starvation. This demonstrates
one key advantage of virtualization in improving
resource utilization.

TABLE 4. Match-up between Findings and Insights (rows: insights (1)-(5); columns: findings (b)-(j); an 'M' entry marks the supporting relationship between a finding and an insight)
To connect these insights with our empirical findings,
we summarize them in a tabular format in Table 4.
In the table, an entry marked with ‘M’ indicates that
the corresponding finding in that column supports the
insight associated with the corresponding row.
5. ENGINEERING IMPLICATIONS
As mentioned previously, the purpose of our empirical
study is to develop engineering guidelines for energy-efficient architecture and operations of green data
centers. In this section, we leverage the empirical
findings and the derived fundamental insights to
suggest some engineering implications for data center
operations.
5.1. Hypervisor Optimization
Our measurements indicate that both hypervisors
incur an energy overhead, compared to the physical
server. This suggests that, in addition to the standard
performance optimization for hypervisors, minimizing
energy consumption should be another objective in their
software architecture.
Xen, as a type 1 hypervisor, is more energy
efficient than KVM under both the networking and local
computation traffic benchmarks. In some cases, Xen's
CPU scheduler even makes it more energy efficient than
the physical machine for the multi-process application.
However, the energy cost for networking-intensive
traffic on the Xen-based server is still relatively
high. Some studies [20, 31] have aimed to optimize
the mechanisms of Xen to improve its performance,
such as tuning the weight and cap values in Xen
domain 0, and optimizing the CPU scheduler algorithm and
the networking structure (e.g., netfilter on the bridge). Those
efforts have been shown to improve Xen's performance, but
still need verification when both performance and energy
consumption are concerned.
KVM runs inside the Linux kernel, making it easier
to deploy and manage than Xen. However, there is
always a trade-off between convenience and efficiency.
Based on our observations, it spends more energy on
coordinating all the guest OSes, even when they are idle.
As such, some kind of energy-efficient CPU scheduler
could be proposed for KVM. In addition, strategies
towards LHP avoidance are in order. Finally,
given the advantageous energy performance of the Xen
para-virtualization established in our study, introducing
a similar virtual hardware mechanism could improve the energy
efficiency of KVM.
5.2. Dynamic Resource Management
Our study suggests a two-tier dynamic resource
management framework for data center operations.
First, at a coarser granularity, physical servers can
be turned off, completely saving their energy cost, if the
total load is not high enough. This optimization is
derived from the fact that the idle server consumes
a large portion of power compared to its fully loaded
situation. Second, at a finer granularity, VMs can
be shut down and/or migrated onto other physical servers
to save energy. This optimization is derived from our
observation that energy consumption is sensitive to
the number of active VMs and is enabled by the live
migration mechanism in virtualization [32].
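A minimal sketch of this two-tier policy is given below: virtual machines are first packed onto as few hosts as possible, and any host left empty can then be powered off. The first-fit-decreasing heuristic, the capacity headroom and the example loads are illustrative assumptions, not the consolidation algorithm of any particular system.

# A minimal sketch of the two-tier policy described above: first consolidate
# VMs onto as few hosts as possible (fine granularity), then power off any
# host left empty (coarse granularity). Thresholds and the packing heuristic
# are illustrative assumptions.

def consolidate(vm_loads, host_capacity=1.0, headroom=0.2):
    """Greedy first-fit-decreasing packing of VM loads (fractions of one host)."""
    hosts = []  # each host is a list of VM loads
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity - headroom:
                host.append(load)   # live-migrate the VM onto an already-active host
                break
        else:
            hosts.append([load])    # no room anywhere: keep another host powered on
    return hosts

if __name__ == "__main__":
    vms = [0.30, 0.25, 0.20, 0.10, 0.05]   # current VM utilizations (illustrative)
    active_hosts = consolidate(vms, host_capacity=1.0)
    print(f"{len(active_hosts)} host(s) stay on; the rest can be powered off")
    for i, h in enumerate(active_hosts, 1):
        print(f"  host {i}: load {sum(h):.2f}")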
5.3. Multi-Thread Programming Paradigm for Multi-Core Systems
Modern servers are usually equipped with multiple CPU
cores. This provides an opportunity to enhance the raw
computing capability; however, harnessing such raw capability
requires additional care in programming
practice when deploying applications onto data centers.
As shown in our measurement, the energy consumption is sensitive to the binding between applications and
CPU cores. As a result, the software architect should
decide whether to bind applications with CPU cores
judiciously for the non-virtualized servers (or adopt a
multi-thread programming paradigm); while the hypervisor inherently provides the capability to distribute the
computing capability among different VMs.
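The following Linux-only Python sketch illustrates the explicit binding discussed above: a parent process pins one CPU-bound worker to each available core with os.sched_setaffinity, which is the kind of multi-core awareness that a non-virtualized deployment would otherwise have to provide itself. The busy-loop worker is a stand-in for a real application.

# An illustration (Linux-only) of explicitly binding worker processes to CPU
# cores with os.sched_setaffinity. The worker below is a stand-in busy loop.

import os
from multiprocessing import Process

def worker(core_id, iterations=10_000_000):
    os.sched_setaffinity(0, {core_id})   # pin this process (pid 0 = self) to one core
    x = 0
    for i in range(iterations):          # stand-in for a bc-like CPU-bound task
        x += i * i
    return x

if __name__ == "__main__":
    cores = sorted(os.sched_getaffinity(0))   # cores available to this process
    procs = [Process(target=worker, args=(core,)) for core in cores]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"ran {len(procs)} workers, one pinned to each of cores {cores}")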
6. FUNDAMENTAL TRADE-OFF IN SERVER VIRTUALIZATION
Our research has also revealed a fundamental trade-off
for server virtualization, as illustrated in Figure 17.

FIGURE 17. Fundamental Trade-Off in Server Virtualization
On one hand, the energy consumption in data centers
can be reduced by consolidating applications from
multiple servers to one server and shutting down the
idle servers. This benefit results from our observation of
an affine power consumption model for native servers.
On the other hand, for the virtualized servers, there
are two detrimental effects that would hamper their
energy efficiency. First, the hypervisor introduces a
potential energy overhead over the physical machine,
by allocating system resources for its execution. This
overhead is expansive as a function of the “goodput”,
which denotes the portion of computation capabilities
used to support applications. Second, the maximum
supportable goodput for the virtualized server is reduced,
compared to its native server. The combination of these
two detrimental effects would offset the energy benefit
of server consolidation. Moreover, the impact of these
detrimental effects depends on the type of hypervisor
chosen. Specifically, the relationship between the
goodput and the power consumption for native server
can be expressed as,
P_n = P_n^0 + (P_{\max} - P_n^0) \cdot r_n / r_n^\star,    (1)

where P_n is the power consumption for the native server,
P_n^0 is the power consumption when the native server is idle,
P_{\max} is the maximum power consumption, r_n is the
goodput, and r_n^\star is the maximum supportable goodput.
Similarly, for the virtualized server, the relationship is

P_v = P_v^0 + (P_{\max} - P_v^0) \cdot r_v / r_v^\star,    (2)

where P_v^0 > P_n^0 and r_v^\star < r_n^\star. The detailed values of P_v^0
and r_v^\star largely depend on the type of hypervisor.
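The trade-off captured by Eqs. (1) and (2) can be explored numerically with the short sketch below, which compares the power of spreading an aggregate goodput over native servers against consolidating it onto fewer virtualized servers; all parameter values are illustrative placeholders loosely based on the idle-power measurements reported in Section 4.1.

# A numerical sketch of the trade-off in Eqs. (1) and (2): energy of spreading
# an aggregate goodput over native servers versus consolidating it onto fewer
# virtualized servers. All parameter values are illustrative placeholders.

def power_native(r, p_idle=62.8, p_max=95.0, r_max=1.0):
    return p_idle + (p_max - p_idle) * r / r_max   # Eq. (1)

def power_virtual(r, p_idle=70.0, p_max=95.0, r_max=0.8):
    return p_idle + (p_max - p_idle) * r / r_max   # Eq. (2): higher idle power, lower max goodput

def total_power(aggregate_goodput, n_servers, power_fn, r_max):
    per_server = aggregate_goodput / n_servers
    if per_server > r_max:
        return float("inf")                        # load exceeds the supportable goodput
    return n_servers * power_fn(per_server)

if __name__ == "__main__":
    load = 1.2   # aggregate goodput, in units of one native server's maximum
    spread = total_power(load, n_servers=3, power_fn=power_native, r_max=1.0)
    packed = total_power(load, n_servers=2, power_fn=power_virtual, r_max=0.8)
    print(f"3 native servers: {spread:.1f} W, 2 virtualized servers: {packed:.1f} W")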
This fundamental trade-off dictates how server
consolidation should be designed to reduce energy usage
for green data centers. Specifically, the decision of
server consolidation should balance the two competing
forces, to reduce the energy usage by data centers.
7. RELATED WORK
The popularity of open-source hypervisors (i.e., Xen
and KVM) benefits from an easy access to their detailed
design. In [18], Xen was described as an x86 virtual
machine monitor that allows multiple commodity
operating systems to share conventional hardware in a
safe and resource managed fashion without sacrificing
either performance or functionality. Alternatively, the
Kernel-based Virtual Machine, namely, KVM, emerges
as a new Linux subsystem that leverages virtualization
extensions to add a virtual machine monitor (or
hypervisor) capability to Linux [19].
Deshane et al. presented the initial quantitative
analysis of these two hypervisors in [33], focusing
on the overall performance, performance isolation
and scalability of virtual machines running on them.
An extensive empirical performance measurement on
such evaluation was conducted in [27, 28]. In [22],
the authors analyzed the CFS CPU scheduler of
KVM on a multi-core environment, and proposed
optimizations for performance enhancement.
[34]
presented a framework for VM-based computing for
High Performance Computing (HPC) applications. It
built a computing cluster with virtual machines, and
evaluated the performance as well as the cost compared
to a native, non-virtualized environment.
Kim et al. investigated a power-aware provisioning
of virtual machines for real-time services, with
a dual objective of reducing operating costs and
improving system reliability [35]. The architectural
principles for energy-efficient management of clouds,
resource allocation policies, and scheduling algorithms were
discussed in [36], demonstrating the immense potential
of cloud computing to offer significant cost savings
under dynamic workload scenarios. [37] reviewed the
methods and technologies currently used for energy-efficient operation of computer hardware and network
infrastructure. It also concluded that cloud computing
with virtualization can greatly reduce the energy
consumption.
Another area of study in the green ICT focuses
on attributing energy cost to various components in
a computing system. [38] characterized the energy
consumed by desktop computers, suggesting that one
typical desktop PC consumes 80-110 W when active,
and 60-80 W when idle. Pat Bohrer et al. created a
power simulator for web serving workloads that is able
to estimate CPU energy consumption with a low error
rate [26]. Contreras and Martonosi used hardware event
counters to derive power estimates that are accurate at
sub-second time intervals [39]. Our approach is similar
to that of Economou et al. [40], which focuses on
the coarser activity metrics such as CPU load and I/O
activities by using a variety of workloads. The novelty
of our research is that we are the first group to cast this
research agenda in a data center under the cloud computing
context. The results obtained in our research shed new
light on energy-efficient architecture and operations of
data centers, which in turn would help to curb the
energy explosion predicted for ICT systems.
8. CONCLUSION AND FUTURE WORK
This paper reported an empirical study on the impact
of server virtualization on energy efficiency for green
data centers. Through intensive measurements, we
have obtained sufficient statistics for energy usage
for both the native server and the virtualized servers
with two alternative hypervisors (i.e., Xen and KVM).
Through an in-depth analysis of this data set, we
presented a few important findings regarding the
energy usage of the virtualized server, and crucial
implications of these empirical findings. Moreover, our
investigation suggested a few engineering implications
for architecting green data centers. Finally, the most
important result from our study is the fundamental
trade-off in virtualized servers, which would dictate how
server consolidation should be designed and deployed to
tame the explosive energy usage in data centers.
Our future work will concentrate on three domains,
including measurements, modeling and verification. First,
a green modular data center, which consists of more
than 270 physical servers, is under construction at
NTU. We will continue to measure the energy usage
with the virtualized servers under real web trace with
combined traffic features (including HTTP traffic, P2P
traffic, VoIP, etc.) in this data center. Second, based
on our fine-grained measurements, we will develop
an analytical model for energy usage of the virtualized
servers and focus on optimization techniques for server
consolidation in green data center operations. Finally,
our optimization strategies will be configured on our
data center and additional measurements will be taken
to verify their effectiveness. The ultimate goal of this
research is to develop an analytical framework for green
data centers and provide a list of best practices for
energy-efficient operations for data centers.
ACKNOWLEDGEMENTS
This research benefited from the discussions with a
few world-class researchers in the area of green ICT
research. Dr. Dan Kilper and Dr. Kyle Guan, from
Bell Laboratories, were inspirational to our research
and helped to make sure that our research was on the right
track. Our research also benefited from the discussions
with Mr. Keok Tong LING at IDA, Singapore.
REFERENCES
[1] Armbrust, M. and et al. (2010) A view of cloud
computing. Communications of the ACM, 53, 50–58.
[2] Brown, R. and et al. (2008) Report to congress on server
and data center energy efficiency: Public law 109-431.
Technical report., Berkeley, CA, USA.
[3] Le, K., Bianchini, R., Nguyen, T., Bilgir, O.,
and Martonosi, M. (2010) Capping the brown
energy consumption of internet services at low cost.
International Green Computing Conference, pp. 3–14.
IEEE, Washington.
[4] Corcoran, P. (2012) Cloud computing and consumer
electronics: A perfect match or a hidden storm. IEEE
Consumer Electronics Magazine, 1, 14–19.
[5] Beloglazov, A., Buyya, R., Lee, Y., Zomaya, A., et
al. (2011) A taxonomy and survey of energy-efficient
data centers and cloud computing systems. Advances
in Computers, 82, 47–111.
[6] Energy star. http://www.energystar.gov/.
[7] Samsung. Samsung green memory. http://www.
samsung.com/greenmemory.
[8] Wierman, A., Andrew, L., and Tang, A. (2009) Power-aware speed scaling in processor sharing systems. IEEE INFOCOM'09, Rio, Brazil, pp. 2007–2015. IEEE, Washington.
[9] Meisner, D., Gold, B., and Wenisch, T. (2009) PowerNap: eliminating server idle power. ACM Sigplan Notices, 44, 205–216.
[10] Hewitt, C. (2008) Orgs for scalable, robust, privacy-friendly client cloud computing. IEEE Internet Computing, 12, 96–99.
[11] Accenture (2008) Data centre energy forecast report - final report. Technical report, Oakland, CA, USA.
[12] Benini, L., Bogliolo, A., and De Micheli, G. (2000) A survey of design techniques for system-level dynamic power management. IEEE Trans. on Very Large Scale Integration (VLSI) Systems, 8, 299–316.
[13] Gandhi, A., Harchol-Balter, M., Das, R., and Lefurgy, C. (2009) Optimal power allocation in server farms. ACM SIGMETRICS'09, pp. 157–168. ACM.
[14] Greentouch. http://www.greentouch.org/.
[15] Economou, D., Rivoire, S., Kozyrakis, C., and Ranganathan, P. (2006) Full-system power analysis and modeling for server environments. Workshop on Modeling, Benchmarking, and Simulation, pp. 70–77.
[16] Mell, P. and Grance, T. (2011). The NIST definition of cloud computing (draft). http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.
[17] IBM. IBM system virtualization version 2 release 1. http://publib.boulder.ibm.com/infocenter/eserver/v1r2/topic/eicay/eicay.pdf.
[18] Barham, P. and et al. (2003) Xen and the art of virtualization. ACM SIGOPS Operating Systems Review, pp. 164–177. ACM.
[19] Kivity, A., Kamay, Y., Laor, D., Lublin, U., and Liguori, A. (2007) KVM: the Linux virtual machine monitor. Linux Symposium, pp. 225–230.
[20] Cherkasova, L., Gupta, D., and Vahdat, A. (2007) Comparison of the three CPU schedulers in Xen. ACM SIGMETRICS Performance Evaluation Review, 35, 42–51.
[21] Xen credit-based CPU scheduler. http://wiki.xensource.com/xenwiki/CreditScheduler.
[22] Jiang, W., Zhou, Y., Cui, Y., Feng, W., Chen, Y., Shi, Y., and Wu, Q. (2009) CFS optimizations to KVM threads on multi-core environment. International Conference on Parallel and Distributed Systems (ICPADS'09), pp. 348–354. IEEE, Washington.
[23] Schulze, H. and Mochalski, K. Internet study 2008/2009. http://www.ipoque.com/sites/default/files/mediafiles/documents/internet-study-2008-2009.pdf.
[24] Menon, A., Santos, J., Turner, Y., Janakiraman, G., and Zwaenepoel, W. (2005) Diagnosing performance overheads in the Xen virtual machine environment. International Conference on Virtual Execution Environments, pp. 13–23. ACM.
[25] ApacheBench. http://en.wikipedia.org/wiki/ApacheBench.
[26] Bohrer, P. and et al. (2002) Power Aware Computing, chapter: The case for power management in web servers, pp. 261–289. Kluwer Academic Publishers, Norwell, MA, USA.
[27] Che, J., He, Q., Gao, Q., and Huang, D.
(2008) Performance measuring and comparing of
virtual machine monitors. IEEE/IFIP International
Conference on Embedded and Ubiquitous Computing
(EUC’08), pp. 381–386. IEEE, Washington.
[28] Che, J., Yu, Y., Shi, C., and Lin, W. (2010) A synthetical performance evaluation of OpenVZ, Xen and
KVM. IEEE Asia-Pacific Services Computing Conference (APSCC’10), pp. 587–594. IEEE, Washington.
[29] Uhlig, V., LeVasseur, J., Skoglund, E., and Dannowski,
U. (2004) Towards scalable multiprocessor virtual
machines. Virtual Machine Research and Technology
Symposium, pp. 43–56.
[30] Fan, X., Weber, W., and Barroso, L. (2007) Power
provisioning for a warehouse-sized computer. ACM
SIGARCH Computer Architecture News, 35, 13–23.
[31] Santos, J., Janakiraman, G., Turner, Y., and
Pratt, I. (2007).
Netchannel 2:
Optimizing
network performance, XenSource/Citrix Xen Summit. http://www.xen.org/files/xensummit_fall07/
16_JoseRenatoSantos.
[32] Verma, A., Ahuja, P., and Neogi, A. (2008) pMapper:
power and migration cost aware application placement
in virtualized systems. International Conference on
Middleware, pp. 243–264. Springer-Verlag, New York.
[33] Deshane, T. and et al. (2008).
Quantitative
comparison of Xen and KVM, Xen Summit.
http://www.xen.org/files/xensummitboston08/
Deshane-XenSummit08-Slides.
[34] Huang, W., Liu, J., Abali, B., and Panda, D.
(2006) A case for high performance computing
with virtual machines. International Conference on
Supercomputing, pp. 125–134. ACM.
[35] Kim, K., Beloglazov, A., and Buyya, R. (2011)
Power-aware provisioning of virtual machines for realtime cloud services. Concurrency and Computation:
Practice and Experience, 23, 1491–1505.
[36] Beloglazov, A., Abawajy, J., and Buyya, R. (2012)
Energy-aware resource allocation heuristics for efficient
management of data centers for cloud computing.
Future Generation Computer Systems, 28, 755–768.
[37] Berl, A. and et al. (2010) Energy-efficient cloud
computing. The Computer Journal, 53, 1045–1051.
[38] Agarwal, Y., Hodges, S., Chandra, R., Scott,
J., Bahl, P., and Gupta, R. (2009) Somniloquy:
augmenting network interfaces to reduce PC energy
usage. USENIX Symposium on Networked Systems
Design and Implementation, pp. 365–380. USENIX.
[39] Contreras, G. and Martonosi, M. (2005) Power prediction for Intel XScale processors using performance monitoring unit events. International Symposium on Low Power Electronics and Design (ISLPED'05), pp. 221–226. IEEE, Washington.
[40] Economou, D., Rivoire, S., Kozyrakis, C., and Ranganathan, P. (2006) Full-system power analysis and
modeling for server environments. In Proceedings of
Workshop on Modeling, Benchmarking, and Simulation, pp. 70–77.