Virtuozzo 7
Installation Guide
August 24, 2017
Parallels International GmbH
Vordergasse 59
8200 Schaffhausen
Switzerland
Tel: + 41 52 632 0411
Fax: + 41 52 672 2010
https://virtuozzo.com
Copyright ©2016-2017 Parallels International GmbH. All rights reserved.
This product is protected by United States and international copyright laws. The product’s underlying technology, patents, and trademarks are listed at https://virtuozzo.com.
Microsoft, Windows, Windows Server, Windows NT, Windows Vista, and MS-DOS are registered trademarks of Microsoft Corporation.
Apple, Mac, the Mac logo, Mac OS, iPad, iPhone, iPod touch, FaceTime HD camera and iSight are trademarks of Apple Inc., registered in the US and other countries.
Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein may be trademarks of their respective owners.
Contents

1. Preparing for Installation  1
    1.1 Requirements for Standalone Installations  1
    1.2 Planning Infrastructure for Virtuozzo Storage with GUI Management  2
        1.2.1 Understanding Virtuozzo Storage Architecture  2
            Storage Role  3
            Metadata Role  3
            Network Roles (Storage Access Points)  4
            Supplementary Roles  4
        1.2.2 Planning Node Hardware Configurations  5
            Hardware Requirements  5
            Hardware Recommendations  6
            Hardware and Software Limitations  9
            Minimum Configuration  9
            Recommended Configuration  10
            Raw Disk Space Considerations  15
        1.2.3 Planning Network  15
            General Network Requirements  16
            Network Limitations  16
            Per-Node Network Requirements  17
            Network Recommendations for Clients  22
        1.2.4 Understanding Data Redundancy  24
        1.2.5 Understanding Failure Domains  28
        1.2.6 Understanding Storage Tiers  28
    1.3 Planning Infrastructure for Virtuozzo Storage with CLI Management  29
        1.3.1 Understanding Virtuozzo Storage Architecture  29
        1.3.2 Virtuozzo Storage Configurations  31
    1.4 Preparing for Installation from USB Storage Drives  38
2. Installing Virtuozzo  39
        2.1.1 Choosing Installation Type  39
        2.1.2 Enabling Forced Detection of SSDs  40
    2.2 Installing Virtuozzo with GUI Management  40
        2.2.3 Selecting the Keyboard Layout  43
            Creating Bonded and Teamed Connections  45
        2.2.5 Installing Virtuozzo Automator Components  51
            Installing Management Panel and Compute on the First Server  51
            Installing Compute on the Second and Other Servers  52
        2.2.6 Installing Virtuozzo Storage Components  53
            Installing Management Panel and Storage on the First Server  54
            Installing Storage on the Second and Other Servers  55
        2.2.7 Choosing Installation Destination  57
            Building Virtuozzo Infrastructure  60
    2.3 Installing Virtuozzo with CLI Management  60
        2.3.3 Selecting the Keyboard Layout  63
            Creating Bonded and Teamed Connections  65
        2.3.5 Choosing the Storage Type  71
        2.3.6 Setting Virtuozzo Storage Installation Options  71
            Creating a New Virtuozzo Storage Cluster  72
            Joining an Existing Virtuozzo Storage Cluster  73
            Virtuozzo Storage Server Roles  75
        2.3.7 Partitioning the Hard Drives  78
    2.4 Installing Virtuozzo in Basic Graphics Mode  84
    2.5 Installing Virtuozzo via VNC  85
    2.6 Installing Virtuozzo in Text Mode  85
    2.7 Configuring Server Ports for Virtuozzo  86
        2.7.1 Opened Ports on Standalone Servers  86
        2.7.2 Opened Ports on Servers in Virtuozzo Storage Clusters  87
3. Exploring Additional Installation Options  88
    3.2 Running Virtuozzo in Virtual Machines  89
        3.2.1 Restrictions and Peculiarities  89
CHAPTER 1
Preparing for Installation
This chapter describes how to plan the deployment of Virtuozzo Storage with GUI and CLI management. It also explains how to prepare for installing Virtuozzo from a USB flash drive.
1.1 Requirements for Standalone Installations
The recommended hardware requirements for running Virtuozzo 7 as a standalone installation are as follows:
• x86-64 platform with hardware virtualization support: Intel VT-x (with “unrestricted guest”),
Note: To check if the Intel processor supports the “unrestricted guest” feature: 1) Download vmxcap.py from https://github.com/qemu/qemu/blob/master/scripts/kvm/vmxcap. 2) Run python vmxcap.py | grep -i unrest. The result must be yes.
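If the host already runs Linux with the KVM modules loaded, the same capability can also be read from sysfs without downloading the script. A minimal sketch, assuming a stock kernel where kvm_intel exposes this parameter (the helper function and fallback message below are illustrative, not part of Virtuozzo):

```python
from pathlib import Path

def unrestricted_guest_supported(flag_text: str) -> bool:
    """Interpret the kvm_intel sysfs flag: 'Y' means supported."""
    return flag_text.strip().upper() == "Y"

# On stock kernels the kvm_intel module exposes the flag here:
flag = Path("/sys/module/kvm_intel/parameters/unrestricted_guest")
if flag.exists():
    print("unrestricted guest:", unrestricted_guest_supported(flag.read_text()))
else:
    print("kvm_intel not loaded; use the vmxcap.py check instead")
```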
• CPU: at least 4 cores, a 64-bit processor is required for running 64-bit guest operating systems,
• RAM: 4 GB or more,
• HDD: 64 GB or more,
• SSD (optional): at least 30 GiB (at least 32 GiB with /boot),
• Network: an Ethernet network adapter and a valid IP address.
The actual number of virtual machines and containers you can run on a physical server, and their performance, depend on the resources they will consume.
1.1.1 System Limits
The table below lists the current hardware limits for Virtuozzo 7 servers:
Hardware      Theoretical    Certified
RAM           64 TB          1 TB
HDD           1 EB           50 TB
1.2 Planning Infrastructure for Virtuozzo Storage with GUI Management
To plan your infrastructure for Virtuozzo Storage managed via the web-based management panel, you will need to decide on the hardware configuration of each storage node, plan the storage networks, decide on the redundancy method (and mode) to use, and decide which data will be kept on which storage tier.
Information in this section is meant to help you complete all of these tasks.
1.2.1 Understanding Virtuozzo Storage Architecture
The fundamental component of Virtuozzo Storage is a cluster: a group of physical servers interconnected by network. Each server in a cluster is assigned one or more roles and typically runs services that correspond to these roles:
• storage role: chunk service or CS
• metadata role: metadata service or MDS
• network roles:
    • iSCSI access point service (iSCSI)
    • Acronis Backup Gateway access point service (ABGW)
    • S3 gateway (access point) service (GW)
    • S3 name service (NS)
    • S3 object service (OS)
    • Web CP
    • SSH
• supplementary roles:
    • management
    • SSD cache
    • system
Any server in the cluster can be assigned a combination of storage, metadata, and network roles. For example, a single server can be an S3 access point, an iSCSI access point, and a storage node at once.
Each cluster also requires that a web-based management panel be installed on one (and only one) of the nodes. The panel enables administrators to manage the cluster.
1.2.1.1 Storage Role
Storage nodes run chunk services, store all the data in the form of fixed-size chunks, and provide access to these chunks. All data chunks are replicated and the replicas are kept on different storage nodes to achieve high availability of data. If one of the storage nodes fails, the remaining healthy storage nodes continue providing the data chunks that were stored on the failed node.
Only a server with disks of certain capacity can be assigned the storage role (see Planning Node Hardware Configurations on page 5).
1.2.1.2 Metadata Role
Metadata nodes run metadata services, store cluster metadata, and control how user files are split into chunks and where these chunks are located. Metadata nodes also ensure that chunks have the required amount of replicas and log all important events that happen in the cluster.
To ensure high availability of metadata, at least five metadata services must be running per cluster. In this case, if up to two metadata services fail, the remaining metadata services will still be controlling the cluster.
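The five-service, two-failure figures follow from majority quorum: the cluster stays manageable while more than half of its metadata services are alive. A sketch of the arithmetic, assuming a plain majority-quorum rule (the function name is illustrative):

```python
def tolerated_mds_failures(mds_count: int) -> int:
    # A majority (more than half) of the MDSes must survive for the
    # cluster to keep a quorum, so up to (n - 1) // 2 may fail.
    return (mds_count - 1) // 2

print(tolerated_mds_failures(5))  # five MDSes tolerate two failures
```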
1.2.1.3 Network Roles (Storage Access Points)
Storage access points enable you to access data stored in Virtuozzo Storage clusters via the standard iSCSI and S3 protocols and use the clusters as backend storage for Acronis Backup Cloud.
To benefit from high availability, access points should be set up on multiple nodes.
The following access points are currently supported:
• iSCSI, allows you to use Virtuozzo Storage as a highly available block storage for virtualization, databases, office applications, and other needs.
• S3, a combination of scalable and highly available services (collectively named Virtuozzo Object Storage) that allows you to use Virtuozzo Storage as a modern backend for solutions like OpenXchange AppSuite, Dovecot, and Acronis Access. In addition, Virtuozzo Object Storage offers developers of custom applications an Amazon S3-compatible API and compatibility with the S3 libraries for various programming languages, S3 browsers, and web browsers.
• Acronis Backup Gateway, allows you to connect Virtuozzo Storage to Acronis Backup Cloud via the Acronis FES API.
NFS, SMB, and other access point types are planned for future releases of Virtuozzo Storage.
The following remote management roles are supported:
• Web CP, allows you to access the web-based user interface from an external network.
• SSH, allows you to connect to Virtuozzo Storage nodes via SSH.
1.2.1.4 Supplementary Roles
• Management, provides a web-based management panel that enables administrators to configure, manage, and monitor Virtuozzo Storage clusters. Only one management panel is needed to create and manage multiple clusters (and only one is allowed per cluster).
• SSD cache, boosts chunk write performance by creating write caches on selected solid-state drives (SSDs). It is recommended to also use such SSDs for metadata (see Metadata Role on page 3). The use of a write cache may speed up write operations in the cluster by two or more times.
• System, one disk per node that is reserved for the operating system and unavailable for data storage.
1.2.2 Planning Node Hardware Configurations
Virtuozzo Storage works on top of commodity hardware, so you can create a cluster from regular servers, disks, and network cards. Still, to achieve optimal performance, a number of requirements must be met and a number of recommendations should be followed.
1.2.2.1 Hardware Requirements
The following table lists the minimal and recommended hardware for a single node in the cluster:
Minimal:
    CPU: dual-core CPU
    RAM: 2 GB
    System disk: see Storage disk below
    Storage disk: three 100 GB SATA HDDs (one system, one storage, one MDS) on the first five nodes; two 100 GB SATA HDDs (one system, one storage) on the sixth and other nodes
    Disk controller: none
    Network: 1 Gbps or faster network interface
    SSD: none

Recommended:
    CPU: Intel Xeon E5-2620V2 or faster; at least one CPU core per 8 HDDs
    RAM: 16 GB ECC or more, plus 0.5 GB ECC per each HDD
    System disk: 250 GB SATA HDD
    Storage disk: four or more HDDs or SSDs (1 DWPD endurance minimum)
    Disk controller: HBA or RAID
    Network: two 10 Gbps network interfaces; dedicated links for internal and public networks
    SSD: one or more recommended enterprise-grade SSDs with power loss protection; 100 GB or more capacity; at least 50-75 MB/s sequential write performance per each HDD serviced by the SSD (10 DWPD recommended)
    Sample configuration: Intel Xeon E5-2620V2, 32 GB, 2x ST1000NM0033, 32x ST6000NM0024, 2x MegaRAID SAS 9271/9201, Intel X540-T2, Intel P3700 800 GB
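The per-HDD sequential-write requirement for cache SSDs translates into a quick sizing check. A sketch under the 50-75 MB/s per-HDD rule stated above (the helper name and defaults are illustrative):

```python
def cache_ssd_write_range_mb_s(hdd_count: int, low: int = 50, high: int = 75):
    """Sequential write throughput (MB/s) a cache SSD should sustain
    when servicing hdd_count HDDs, per the 50-75 MB/s per-HDD rule."""
    return hdd_count * low, hdd_count * high

# An SSD caching 8 HDDs should sustain roughly 400-600 MB/s:
print(cache_ssd_write_range_mb_s(8))
```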
1.2.2.2 Hardware Recommendations
The following recommendations explain the benefits added by specific hardware in the hardware requirements table and are meant to help you configure the cluster hardware in an optimal way:
General Hardware Recommendations
• At least five nodes are required for a production environment. This is to ensure that the cluster can survive failure of two nodes without data loss.
• One of the strongest features of Virtuozzo Storage is scalability: the bigger the cluster, the better Virtuozzo Storage performs. It is recommended to create production clusters from at least ten nodes for improved resiliency, performance, and fault tolerance.
• Even though a cluster can be created on top of varied hardware, using similar hardware in each node will yield better cluster performance, capacity, and overall balance.
• Any cluster infrastructure must be tested extensively before it is deployed to production. Such common points of failure as SSD drives and network adapter bonds must always be thoroughly verified.
• It is not recommended for production to run Virtuozzo Storage in virtual machines or on top of SAN/NAS hardware that has its own redundancy mechanisms. Doing so may negatively affect performance and data availability.
• At least 20% of cluster capacity should be free to avoid possible data fragmentation and performance degradation.
• During disaster recovery, Virtuozzo Storage may need additional disk space for replication. Make sure to reserve at least as much space as available on a single storage node.
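The last two bullets (at least 20% free space, plus one node's worth of rebuild headroom) can be combined into a back-of-the-envelope capacity estimate. A sketch; the function and its inputs are illustrative, not an official Virtuozzo formula:

```python
def planning_capacity_tb(node_count: int, per_node_tb: float) -> float:
    total = node_count * per_node_tb
    free_reserve = 0.20 * total      # keep at least 20% of the cluster free
    rebuild_reserve = per_node_tb    # room to re-replicate one failed node
    return total - free_reserve - rebuild_reserve

# Ten nodes with 10 TB each leave about 70 TB for actual data:
print(planning_capacity_tb(10, 10.0))
```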
Storage Hardware Recommendations
• Using the recommended SSD models may help you avoid loss of data. Not all SSD drives can withstand enterprise workloads and may break down in the first months of operation, resulting in TCO spikes.
• SSD memory cells can withstand a limited number of rewrites. An SSD drive should be viewed as a consumable that you will need to replace after a certain time. Consumer-grade SSD drives can withstand a very low number of rewrites (so low, in fact, that these numbers are not shown in their technical specifications). SSD drives intended for Virtuozzo Storage clusters must offer at least 1 DWPD endurance (10 DWPD is recommended). The higher the endurance, the less often SSDs will need to be replaced, improving TCO.
• Many consumer-grade SSD drives can ignore disk flushes and falsely report to operating systems that data was written while it in fact was not. Examples of such drives include OCZ Vertex 3, Intel 520, Intel X25-E, and Intel X-25-M G2. These drives are known to be unsafe in terms of data commits, they should not be used with databases, and they may easily corrupt the file system in case of a power failure. For these reasons, use enterprise-grade SSD drives that obey the flush rules (for more information, see http://www.postgresql.org/docs/current/static/wal-reliability.html). Enterprise-grade SSD drives that operate correctly usually have the power loss protection property in their technical specification. Some of the market names for this technology are Enhanced Power Loss Data Protection (Intel), Cache Power Protection (Samsung), Power-Failure Support (Kingston), and Complete Power Fail Protection (OCZ).
• Consumer-grade SSD drives usually have unstable performance and are not suited to sustained enterprise workloads. For this reason, pay attention to sustained load tests when choosing SSDs. We recommend the following enterprise-grade SSD drives, which are the best in terms of performance, endurance, and return on investment: Intel S3710, Intel P3700, Huawei ES3000 V2, Samsung SM1635, and Sandisk Lightning.
• The use of SSDs for write caching improves random I/O performance and is highly recommended for all workloads with heavy random access (e.g., iSCSI volumes).
• Running metadata services on SSDs improves cluster performance. To also minimize CAPEX, the same SSDs can be used for write caching.
• If capacity is the main goal and you need to store infrequently accessed data, choose SATA disks over SAS ones. If performance is the main goal, choose SAS disks over SATA ones.
• The more disks per node, the lower the CAPEX. As an example, a cluster created from ten nodes with two disks in each will be less expensive than a cluster created from twenty nodes with one disk in each.
• Using SATA HDDs with one SSD for caching is more cost effective than using only SAS HDDs without such an SSD.
• Use HBA controllers as they are less expensive and easier to manage than RAID controllers.
• Disable all RAID controller caches for SSD drives. Modern SSDs have good performance that can be reduced by a RAID controller’s write and read cache. It is recommended to disable caching for SSD drives and leave it enabled only for HDD drives.
• If you use RAID controllers, do not create RAID volumes from HDDs intended for storage (you can still do so for system disks). Each storage HDD needs to be recognized by Virtuozzo Storage as a separate device.
• If you use RAID controllers with caching, equip them with backup battery units (BBUs) to protect against cache loss during power outages.
Network Hardware Recommendations
• Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so will prevent public traffic from affecting cluster I/O performance and also prevent possible denial-of-service attacks from the outside.
• Network latency dramatically reduces cluster performance. Use quality network equipment with low latency links. Do not use consumer-grade network switches.
• Do not use desktop network adapters like Intel EXPI9301CTBLK or Realtek 8129 as they are not designed for heavy load and may not support full-duplex links. Also use non-blocking Ethernet switches.
• To avoid intrusions, Virtuozzo Storage should be on a dedicated internal network inaccessible from outside.
• Use one 1 Gbit/s link per each two HDDs on the node (rounded up). For one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to sequential I/O performance of a single disk. Since several disks on a server can deliver higher throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
• For maximum sequential I/O performance, use one 1 Gbit/s link per each hard drive, or one 10 Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.
• For maximum overall performance, use one 10 Gbit/s link per node (or two bonded for high network availability).
• It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (e.g., 9000-byte jumbo frames). Such settings require additional configuration of switches and often lead to human error. 10 Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance.
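The link-count guidance above can be sketched as a small helper, assuming the one-link-per-two-HDDs rule with a two-link minimum for bonded availability (the function name is illustrative):

```python
import math

def gbit_links_needed(hdd_count: int) -> int:
    """One 1 Gbit/s link per two HDDs, rounded up; at least two
    bonded links are recommended for high network availability."""
    return max(2, math.ceil(hdd_count / 2))

print(gbit_links_needed(7))  # four links for seven HDDs
```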
1.2.2.3 Hardware and Software Limitations
Hardware limitations:
• Each physical server must have at least 3 disks: one for the operating system, one for metadata, and one for storage. Servers with fewer disks cannot be added to clusters.
• Five servers are required to test all the features of the product.
• The system disk must have at least 100 GB of space.
Software limitations:
• The maintenance mode is not supported. Use SSH to shut down or reboot a node.
• One node can be a part of only one cluster.
• Only one S3 cluster can be created on top of a storage cluster.
• Only predefined redundancy modes are available in the management panel.
• Thin provisioning is always enabled for all data and cannot be configured otherwise.
Note: For network limitations, see Network Limitations on page 16.
1.2.2.4 Minimum Configuration
The minimum configuration described in the table below will let you evaluate Virtuozzo Storage features:
Node #   1st disk role   2nd disk role   3rd and other disk roles   Access points
1        System          Metadata        Storage                    iSCSI, S3 (private and public), Acronis Backup Gateway
2        System          Metadata        Storage                    iSCSI, S3 (private and public), Acronis Backup Gateway
3        System          Metadata        Storage                    iSCSI, S3 (private and public), Acronis Backup Gateway
4        System          Metadata        Storage                    iSCSI, S3 (private), Acronis Backup Gateway
5        System          Metadata        Storage                    iSCSI, S3 (private), Acronis Backup Gateway

In total: 5 nodes, 5 MDSs, 5 or more CSs; access point services run on all five nodes.
Note: SSD disks can be assigned the metadata and cache roles at the same time, freeing up one more disk for the storage role.
Even though five nodes are recommended even for the minimal configuration, you can start evaluating Virtuozzo Storage with just one node and add more nodes later. At the very least, a Virtuozzo Storage cluster must have one metadata service and one chunk service running. However, such a configuration will have two key limitations:
1. Just one MDS will be a single point of failure. If it fails, the entire cluster will stop working.
2. Just one CS will be able to store just one chunk replica. If it fails, the data will be lost.
1.2.2.5 Recommended Configuration
The recommended configuration will help you create clusters for production environments:
Node #         1st disk role   2nd disk role          3rd and other disk roles   Access points
Nodes 1 to 5   System          SSD; metadata, cache   Storage                    iSCSI, S3 (private and public), Acronis Backup Gateway
Nodes 6+       System          SSD; cache             Storage                    iSCSI, S3 (private), Acronis Backup Gateway

In total: 5 or more nodes, 5 MDSs, 5 or more CSs; all nodes run the required access points.
Even though a production-ready cluster can be created from just five nodes with recommended hardware, it is still recommended to enter production with at least ten nodes if you are aiming to achieve significant performance advantages over direct-attached storage (DAS) or improved recovery times.
Important: To ensure high availability of metadata, at least five metadata services must be running per cluster in any production environment. In this case, if up to two metadata services fail, the remaining metadata services will still be controlling the cluster.
Following are a number of more specific configuration examples that can be used in production. Each configuration can be extended by adding chunk servers and nodes.
HDD Only
This basic configuration requires a dedicated disk for each metadata server.
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)
1          HDD         System
2          HDD         MDS
3          HDD         CS
...
N          HDD         CS

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)
1          HDD         System
2          HDD         CS
3          HDD         CS
...
N          HDD         CS
HDD + System SSD (No Cache)
This configuration is good for creating capacity-oriented clusters.
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)
1          SSD         System, MDS
2          HDD         CS
3          HDD         CS
...
N          HDD         CS

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)
1          SSD         System
2          HDD         CS
3          HDD         CS
...
N          HDD         CS
HDD + SSD
This configuration is good for creating performance-oriented clusters.
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)
1          HDD         System
2          SSD         MDS, cache
3          HDD         CS
...
N          HDD         CS

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)
1          HDD         System
2          SSD         Cache
3          HDD         CS
...
N          HDD         CS
SSD Only
This configuration does not require SSDs for cache.
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)
1          SSD         System, MDS
2          SSD         CS
3          SSD         CS
...
N          SSD         CS

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)
1          SSD         System
2          SSD         CS
3          SSD         CS
...
N          SSD         CS
HDD + SSD (No Cache), 2 Tiers
In this configuration example, tier 1 is for HDDs without cache and tier 2 is for SSDs. Tier 1 can store cold data (e.g., backups), tier 2 can store hot data (e.g., high-performance virtual machines).
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)   Tier
1          SSD         System, MDS    -
2          HDD         CS             1
3          SSD         CS             2
...
N          HDD/SSD     CS             1/2

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)   Tier
1          SSD         System         -
2          HDD         CS             1
3          SSD         CS             2
...
N          HDD/SSD     CS             1/2
HDD + SSD, 3 Tiers
In this configuration example, tier 1 is for HDDs without cache, tier 2 is for HDDs with cache, and tier 3 is for SSDs. Tier 1 can store cold data (e.g., backups), tier 2 can store regular virtual machines, and tier 3 can store high-performance virtual machines.
Nodes 1-5 (base):

Disk No.   Disk Type   Disk Role(s)    Tier
1          HDD/SSD     System          -
2          SSD         MDS, T2 cache   -
3          HDD         CS              1
4          HDD         CS              2
5          SSD         CS              3
...
N          HDD/SSD     CS              1/2/3

Nodes 6+ (extension):

Disk No.   Disk Type   Disk Role(s)   Tier
1          HDD/SSD     System         -
2          SSD         T2 cache       -
3          HDD         CS             1
4          HDD         CS             2
5          SSD         CS             3
...
N          HDD/SSD     CS             1/2/3
1.2.2.6 Raw Disk Space Considerations
When planning the Virtuozzo Storage infrastructure, keep in mind the following to avoid confusion:
• The capacity of HDDs and SSDs is measured and specified with decimal, not binary prefixes, so “TB” in disk specifications usually means “terabyte”. The operating system, however, displays drive capacity using binary prefixes, meaning that its “TB” is actually “tebibyte”, a noticeably larger unit. As a result, disks may show less capacity than marketed by the vendor. For example, a disk with 6 TB in its specifications may be shown to have 5.45 TB of actual disk space in Virtuozzo Storage.
• Virtuozzo Storage reserves 5% of disk space for emergency needs.
Therefore, if you add a 6TB disk to a cluster, the available physical space should increase by about 5.2 TB.
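The arithmetic behind the 6 TB example can be checked as follows (a sketch; the variable names are illustrative):

```python
marketed_bytes = 6 * 10**12      # vendors use decimal prefixes: 6 "TB"
os_tb = marketed_bytes / 2**40   # the OS counts in binary prefixes (TiB)
usable_tb = os_tb * 0.95         # minus the 5% emergency reserve

print(round(os_tb, 2))      # ~5.46, the ~5.45 TB figure shown by the OS
print(round(usable_tb, 1))  # ~5.2 TB added to the cluster
```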
1.2.3 Planning Network
Virtuozzo Storage uses two networks (e.g., Ethernet): a) an internal network that interconnects nodes and combines them into a cluster, and b) a public network for exporting stored data to users.
The figure below shows a top-level overview of the internal and public networks of Virtuozzo Storage. One network interface on each node is also used for management: through it, administrators can access the node from the management panel and via SSH.
1.2.3.1 General Network Requirements
• Make sure that time is synchronized on all nodes in the cluster via NTP. Doing so will make it easier for the support department to understand cluster logs.
1.2.3.2 Network Limitations
• Nodes are added to clusters by their IP addresses, not FQDNs. Changing the IP address of a node in the cluster will remove that node from the cluster. If you plan to use DHCP in a cluster, make sure that IP addresses are bound to the MAC addresses of nodes’ network interfaces.
• Fibre channel and InfiniBand networks are not supported.
• Each node must have Internet access so updates can be installed.
• MTU is set to 1500 by default.
• Network time synchronization (NTP) is required for correct statistics.
• The management role is assigned automatically during installation and cannot be changed in the management panel later.
• Even though the management node can be accessed from a web browser by the hostname, you still need to specify its IP address, not the hostname, during installation.
1.2.3.3 Per-Node Network Requirements
Network requirements for each cluster node depend on roles assigned to the node. If the node with multiple network interfaces has multiple roles assigned to it, different interfaces can be assigned to different roles to create dedicated networks for each role.
• Each node in the cluster must have access to the internal network and have port 8889 open to listen for incoming connections from the internal network.
• Each storage and metadata node must have at least one network interface for the internal network traffic. The IP addresses assigned to this interface must be either static or, if DHCP is used, mapped to the adapter’s MAC address. The figure below shows a sample network configuration for a storage and metadata node.
• The management node must have a network interface for internal network traffic and a network interface for the public network traffic (e.g., to the datacenter or a public network) so the management panel can be accessed via a web browser.
The following ports need to be open on a management node by default: 8888 for management panel access from the public network and 8889 for cluster node access from the internal network.
The figure below shows a sample network configuration for a storage and management node.
• A node that runs one or more storage access point services must have a network interface for the internal network traffic and a network interface for the public network traffic.
The figure below shows a sample network configuration for a node with an iSCSI access point. iSCSI access points use the TCP port 3260 for incoming connections from the public network.
The next figure shows a sample network configuration for a node with an S3 storage access point. S3 access points use ports 443 (HTTPS) and 80 (HTTP) to listen for incoming connections from the public network.
Note:
In the scenario pictured above, the internal network is used for both the storage and S3 cluster traffic.
The next figure shows a sample network configuration for a node with an Acronis Backup Gateway storage access point. Acronis Backup Gateway access points use port 44445 for incoming connections from both internal and public networks and ports 443 and 8443 for outgoing connections to the public network.
1.2.3.4 Network Interface Roles
For a Virtuozzo Storage cluster to function, the network interfaces of cluster nodes must be assigned one or more of the roles described below. Assigning roles automatically configures the necessary firewall rules.
• Internal. If one or more internal roles are assigned to a network interface, traffic on all ports is allowed to and from said interface.
  • Management. The network interface will be used for communication between the nodes and the management panel. To perform this role, the network interface must be connected to the internal network. This role must be assigned to at least one network interface in the cluster.
  • Storage. The network interface will be used for transferring data chunks between storage nodes. To perform this role, the network interface must be connected to the internal network. This role must be assigned to one network interface on each storage node.
  • S3 private. The network interface will be used by the S3 storage access point. To perform this role, the network interface must be connected to the internal network. This role must be assigned to one network interface on each node running the S3 storage access point service.
  • ABGW private. The network interface will be used by the Acronis Backup Gateway storage access point. To perform this role, the network interface must be connected to the internal network. This role must be assigned to one network interface on each node running the Acronis Backup Gateway storage access point service.
• Public. If one or more public roles (and no internal roles) are assigned to a network interface, only traffic on ports required by the public role(s) is allowed to and from said interface.
  • iSCSI. The network interface will be used by the iSCSI storage access point to provide access to user data. To perform this role, the network interface must be connected to the public network accessible by iSCSI clients.
  • S3 public. The network interface will be used by the S3 storage access point to provide access to user data. To perform this role, the network interface must be connected to the public network accessible by S3 clients.
  • ABGW public. The network interface will be used by the Acronis Backup Gateway storage access point to provide access to user data. To perform this role, the network interface must be connected to the public network accessible by Acronis Backup Cloud agents.
  • Web CP. The network interface will be used to transfer web-based user interface data. To perform this role, the network interface must be connected to the public network.
  • SSH. The network interface will be used to manage the node via SSH. To perform this role, the network interface must be connected to the public network.
• Custom. These roles allow you to open specific ports on public network interfaces.
1.2.3.5 Network Recommendations for Clients
The following table lists the maximum network performance a Virtuozzo Storage client can get with the specified network interface. Clients are recommended to use 10Gbps network hardware between any two cluster nodes and to minimize network latencies, especially if SSDs are used.
Storage network interface    Entire node maximum I/O throughput
1Gbps                        100MB/s
2 x 1Gbps                    ~175MB/s
3 x 1Gbps                    ~250MB/s
10Gbps                       1GB/s
2 x 10Gbps                   1.75GB/s
Storage network interface    Single VM maximum I/O       Single VM maximum I/O
                             throughput (replication)    throughput (erasure coding)
1Gbps                        100MB/s                     70MB/s
2 x 1Gbps                    100MB/s                     ~130MB/s
3 x 1Gbps                    100MB/s                     ~180MB/s
10Gbps                       1GB/s                       700MB/s
2 x 10Gbps                   1GB/s                       1.3GB/s
1.2.3.6 Sample Network Configuration
The figure below shows an overview of a sample Virtuozzo Storage network.
In this network configuration:
• The Virtuozzo Storage internal network is a network that interconnects all servers in the cluster. It can be used for the management, storage (internal), and S3 (private) roles. Each of these roles can be moved to a separate dedicated internal network to ensure high performance under heavy workloads.
This network cannot be accessed from the public network. All servers in the cluster are connected to this network.
Important:
Virtuozzo Storage does not offer protection from traffic sniffing. Anyone with access to the internal network can capture and analyze the data being transmitted.
• The Virtuozzo Storage public network is a network over which the storage space is exported.
Depending on where the storage space is exported to, it can be an internal datacenter network or an external public network:
  • An internal datacenter network can be used to manage Virtuozzo Storage and export the storage space over iSCSI to other servers in the datacenter, that is, for the management and iSCSI (public) roles.
  • An external public network can be used to export the storage space to outside services through S3 and Acronis Backup Gateway storage access points, that is, for the S3 (public) and Acronis Backup Gateway roles.
1.2.4 Understanding Data Redundancy
Virtuozzo Storage protects every piece of data by making it redundant. It means that copies of each piece of data are stored across different storage nodes to ensure that the data is available even if some of the storage nodes are inaccessible.
Virtuozzo Storage automatically maintains the required number of copies within the cluster and ensures that all the copies are up-to-date. If a storage node becomes inaccessible, the copies from it are replaced by new ones that are distributed among healthy storage nodes. If a storage node becomes accessible again after downtime, the copies on it which are out-of-date are updated.
The redundancy is achieved by one of two methods: replication or erasure coding (explained in more detail in the next section). The chosen method affects the size of one piece of data and the number of its copies that will be maintained in the cluster. In general, replication offers better performance while erasure coding leaves more storage space available for data (see table).
Virtuozzo Storage supports a number of modes for each redundancy method. The following table illustrates data overhead of various redundancy modes. The first three lines are replication and the rest are erasure coding.
Redundancy mode                 Minimum number of  How many nodes can fail  Storage       Raw space required to
                                nodes required     without data loss        overhead, %   store 100GB of data
1 replica (no redundancy)       1                  0                        0             100GB
2 replicas                      2                  1                        100           200GB
3 replicas                      3                  2                        200           300GB
Encoding 1+0 (no redundancy)    1                  0                        0             100GB
Encoding 1+2                    3                  2                        200           300GB
Encoding 3+2                    5                  2                        67            167GB
Encoding 5+2                    7                  2                        40            140GB
Encoding 7+2                    9                  2                        29            129GB
Encoding 17+3                   20                 3                        18            118GB
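The erasure coding rows of the table above follow from simple arithmetic: an M+N mode stores M data pieces plus N parity pieces, so D gigabytes of data consume D×(M+N)/M gigabytes of raw space, and the overhead is N/M. For example, for the 5+2 mode:

```shell
# Raw space and overhead for the 5+2 erasure coding mode
awk 'BEGIN {
    M = 5; N = 2; data = 100            # data pieces, parity pieces, GB stored
    raw = data * (M + N) / M            # raw space actually consumed
    overhead = 100 * N / M              # storage overhead in percent
    printf "%dGB raw, %d%% overhead\n", raw, overhead
}'
```

This prints 140GB raw, 40% overhead, matching the Encoding 5+2 row.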
Note:
The 1+0 and 1+2 encoding modes are meant for small clusters that have insufficient nodes for other erasure coding modes but will grow in the future. As the redundancy type cannot be changed once chosen (from replication to erasure coding or vice versa), these modes allow you to choose erasure coding even if your cluster is smaller than recommended. Once the cluster has grown, more beneficial redundancy modes can be chosen.
You choose a data redundancy mode when configuring storage access points and their volumes. In particular, when:
• creating LUNs for iSCSI storage access points,
• creating S3 clusters,
• configuring Acronis Backup Gateway storage access points.
No matter what redundancy mode you choose, it is highly recommended to be protected against a simultaneous failure of two nodes, as that happens often in real-life scenarios.
Note:
All redundancy modes allow write operations when one storage node is inaccessible. If two storage nodes are inaccessible, write operations may be frozen until the cluster heals itself.
1.2.4.1 Redundancy by Replication
With replication, Virtuozzo Storage breaks the incoming data stream into 256MB chunks. Each chunk is replicated and replicas are stored on different storage nodes, so that each node has only one replica of a given chunk.
The following diagram illustrates the 2 replicas redundancy mode.
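As a worked example (the 10 GB image size is a hypothetical figure, not one from this guide), a virtual disk stored in the 2 replicas mode breaks down as follows:

```shell
# A 10 GB image split into 256 MB chunks, stored with 2 replicas
awk 'BEGIN {
    image_mb = 10 * 1024          # image size in MB
    chunks = image_mb / 256       # number of 256 MB chunks
    printf "%d chunks, %d chunk replicas in the cluster\n", chunks, chunks * 2
}'
```

This prints 40 chunks, 80 chunk replicas in the cluster.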
Replication in Virtuozzo Storage is similar to the RAID rebuild process but has two key differences:
• Replication in Virtuozzo Storage is much faster than that of a typical online RAID 1/5/10 rebuild. The reason is that Virtuozzo Storage replicates chunks in parallel, to multiple storage nodes.
• The more storage nodes are in a cluster, the faster the cluster will recover from a disk or node failure.
High replication performance minimizes the periods of reduced redundancy for the cluster. Replication performance is affected by:
• The number of available storage nodes. As replication runs in parallel, the more available replication sources and destinations there are, the faster it is.
• Performance of storage node disks.
• Network performance. All replicas are transferred between storage nodes over the network. For example, 1 Gbps throughput can be a bottleneck (see page 17).
• Distribution of data in the cluster. Some storage nodes may have much more data to replicate than others and may become overloaded during replication.
• I/O activity in the cluster during replication.
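To see why the number of nodes matters, here is a rough estimate of re-replication time after a node failure. All figures are assumptions for illustration (2 TB of chunks on the failed node, ten healthy nodes, about 100 MB/s of usable throughput per node), not measurements from this guide:

```shell
# Rough recovery-time estimate: replication runs in parallel across nodes
awk 'BEGIN {
    data_gb = 2048      # chunks that lived on the failed node, in GB
    nodes = 10          # healthy nodes replicating in parallel
    rate = 100          # usable MB/s per node (1 Gbps network)
    secs = data_gb * 1024 / (nodes * rate)
    printf "%.0f s (~%.0f min)\n", secs, secs / 60
}'
```

Under these assumptions recovery takes about 35 minutes; doubling the node count roughly halves it.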
1.2.4.2 Redundancy by Erasure Coding
With erasure coding, Virtuozzo Storage breaks the incoming data stream into fragments of a certain size, then splits each fragment into a certain number (M) of 1-megabyte pieces and creates a certain number (N) of parity pieces for redundancy. All pieces are distributed among M+N storage nodes, that is, one piece per node. On storage nodes, pieces are stored in regular 256MB chunks, but such chunks are not replicated, as redundancy is already achieved. The cluster can survive a failure of any N storage nodes without data loss.
The values of M and N are indicated in the names of erasure coding redundancy modes. For example, in the 5+2 mode, the incoming data is broken into 5MB fragments, each fragment is split into five 1MB pieces, and two more 1MB parity pieces are added for redundancy. In addition, if N is 2, the data is encoded using the RAID6 scheme, and if N is greater than 2, erasure codes are used.
The diagram below illustrates the 5+2 mode.
1.2.4.3 No Redundancy
Warning:
Danger of data loss!
Without redundancy, single chunks are stored on storage nodes, one per node. If a node fails, its data may be lost. Having no redundancy is not recommended in any scenario, unless you only want to evaluate Virtuozzo Storage on a single server.
1.2.5 Understanding Failure Domains
A failure domain is a set of services which can fail in a correlated manner. To provide high availability of data,
Virtuozzo Storage spreads data replicas evenly across failure domains, according to a replica placement policy.
The following policies are available:
• Host as a failure domain (default). If a single host running multiple CS services fails (e.g., due to a power outage or network disconnect), all CS services on it become unavailable at once. To protect against data loss under this policy, Virtuozzo Storage never places more than one data replica per host. This policy is highly recommended for clusters of five nodes and more.
• Disk, the smallest possible failure domain. Under this policy, Virtuozzo Storage never places more than one data replica per disk or CS. While protecting against disk failure, this option may still result in data loss if data replicas happen to be on different disks of the same host and it fails. This policy can be used with small clusters of up to five nodes (down to a single node).
1.2.6 Understanding Storage Tiers
Storage tiers represent a way to organize storage space. You can use them to keep different categories of data on different chunk servers. For example, you can use high-speed solid-state drives to store performance-critical data instead of caching cluster operations.
When assigning disks to tiers, keep in mind that faster storage drives should be assigned to higher tiers. For example, you can use tier 0 for backups and other cold data (CS without SSD journals); tier 1 for virtual environments, which have a lot of cold data but fast random writes (CS with SSD journals); and tier 2 for hot data (CS on SSD), journals, caches, and specific disks.
This recommendation is related to how Virtuozzo Storage works with storage space. If a storage tier runs out of free space, Virtuozzo Storage will attempt to temporarily use a lower tier. If you add more storage to the original tier later, the data, temporarily stored elsewhere, will be moved to the tier where it should have been stored originally.
For example, if you try to write data to the tier 2 and it is full, Virtuozzo Storage will attempt to write that data to tier 1, then to tier 0. If you add more storage to tier 2 later, the aforementioned data, now stored on the tier 1 or 0, will be moved back to the tier 2 where it was meant to be stored originally.
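The fallback order described above can be sketched as a small shell function. This is a simplified model with made-up free-space figures, not the actual Virtuozzo Storage allocator:

```shell
#!/bin/sh
# Hypothetical free space per tier, in GB: tiers 2 and 1 are full
free_tier0=500; free_tier1=0; free_tier2=0

# pick_tier REQUESTED_TIER SIZE_GB: try the requested tier first,
# then fall back to the next lower tier that has enough free space
pick_tier() {
    t=$1
    while [ "$t" -ge 0 ]; do
        eval "avail=\$free_tier$t"
        if [ "$avail" -ge "$2" ]; then
            echo "$t"
            return 0
        fi
        t=$((t - 1))
    done
    echo "no space" >&2
    return 1
}

pick_tier 2 100    # tiers 2 and 1 are full, so this prints 0
```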
1.3 Planning Infrastructure for Virtuozzo Storage with CLI Management
To plan your infrastructure for Virtuozzo Storage managed by command-line tools, you will need to decide on the hardware configuration of each storage node and plan the storage networks.
Information in this section is meant to help you complete these tasks.
1.3.1 Understanding Virtuozzo Storage Architecture
Before starting the deployment process, you should have a clear idea of the Virtuozzo Storage infrastructure.
A typical Virtuozzo Storage cluster is shown below.
The basic component of Virtuozzo Storage is a cluster. The cluster is a group of physical computers connected to the same Ethernet network and performing the following roles:
• chunk servers (CS),
• metadata servers (MDS),
• client computers (or clients).
All data in a Virtuozzo Storage cluster, including virtual machine and container disk images, is stored in the form of fixed-size chunks on chunk servers. The cluster automatically replicates the chunks and distributes them across the available chunk servers to provide high availability of data.
To keep track of data chunks and their replicas, the cluster stores metadata about them on metadata (MDS) servers. The central MDS server, called the master MDS server, monitors all cluster activity and keeps metadata current.
Clients manipulate data stored in the cluster by sending different types of file requests, such as modifying an existing file or creating a new one.
• Chunk servers (CS). Chunk servers store all the data, including the contents of virtual machines and containers, in the form of fixed-size chunks and provide access to these chunks. All data chunks are replicated and the replicas are kept on different chunk servers to achieve high availability. If one of the chunk servers goes down, the other chunk servers will continue providing the data chunks that were stored on the failed server.
• Metadata servers (MDS). Metadata servers store metadata about chunk servers and control how files keeping the contents of virtual machines and containers are split into chunks and where these chunks are located. MDS servers also ensure that a cluster has enough chunk replicas and store a global log of all important events that happen in the cluster.
To provide high availability for a Virtuozzo Storage cluster, you need to set up several MDS servers in the cluster. In this case, if one MDS server goes offline, another MDS server will continue keeping control over the cluster.
Note:
MDS servers deal with processing metadata only and do not normally participate in any read/write operations related to data chunks.
• Clients. Clients are computers with Virtuozzo 7 from where you run virtual machines and containers stored in a Virtuozzo Storage cluster.
Note:
1. You can set up any computer in the cluster to perform the role of a metadata server, chunk server, or client. You can also assign two or all three roles to one and the same computer. For example, you can configure a computer to act as a client by installing Virtuozzo 7 on it and running virtual machines and containers from the computer. At the same time, if you want this computer to allocate its local disk space to the cluster, you can set it up as a chunk server.
2. Though Virtuozzo Storage can be mounted as a file system, it is not a POSIX-compliant file system and lacks some POSIX features, such as ACLs, user and group credentials, and hardlinks.
1.3.2 Virtuozzo Storage Configurations
This section provides information on two Virtuozzo Storage configurations:
• Minimum Configuration on page 32. You can create the minimum configuration for evaluating the Virtuozzo Storage functionality. This configuration, however, is not recommended for use in production environments.
• Recommended Configuration on page 33. You can use the recommended Virtuozzo Storage configuration in a production environment “as is” or adapt it to your needs.
1.3.2.1 Minimum Configuration
The minimum hardware configuration for deploying a Virtuozzo Storage cluster is given below:
Server Role                Number of Servers
Metadata Server            1 (can be shared with chunk servers and clients)
Chunk Server               1 (can be shared with metadata servers and clients)
Client                     1 (can be shared with chunk and metadata servers)
Total number of servers    with role sharing: 1; without role sharing: 3
Graphically, the minimum configuration can be represented as follows:
For a Virtuozzo Storage cluster to function, it must have at least one MDS server, one chunk server, and one client. The minimum configuration has two main limitations:
1. The cluster has one metadata server, which presents a single point of failure. If the metadata server fails, the entire Virtuozzo Storage cluster will become non-operational.
2. The cluster has one chunk server that can store only one chunk replica. If the chunk server fails, the cluster will suspend all operations with chunks until a new chunk server is added to the cluster.
1.3.2.2 Recommended Configuration
The table below lists two of the recommended configurations for deploying Virtuozzo Storage clusters.
Metadata Server          Chunk Server              Clients    Total Number of Servers
3 (can be shared with    5-9 (can be shared with   1 or more  5 or more (depending on the number
chunk servers and        metadata servers and                 of clients and chunk servers and
clients)                 clients)                             whether they share roles)
5 (can be shared with    10 or more (can be        1 or more  5 or more (depending on the number
chunk servers and        shared with metadata                 of clients and chunk servers and
clients)                 servers and clients)                 whether they share roles)

You can include any number of clients in the cluster. For example, if you have 5 servers with Virtuozzo, you can configure them all to act as clients.
You can share servers acting as clients with chunk and metadata servers. For example, you can have 5 physical servers and configure each of them to simultaneously act as an MDS server, a chunk server, and a client.
Even though new clusters are configured to have 1 replica for each data chunk by default, you need to configure each data chunk to have at least 3 replicas to provide high availability for your data.
In total, at least 9 machines running Virtuozzo Storage are recommended per cluster. Smaller clusters will work fine but will not provide significant performance advantages over direct-attached storage (DAS) or improved recovery times.
Note:
1. For large clusters, it is critically important to configure proper failure domains to improve data availability. For more information, see the Virtuozzo Storage Administrator’s Command Line Guide.
2. In small and medium clusters, MDS servers consume little resources and do not require being set up on dedicated Hardware Nodes.
3. A small cluster is 3 to 5 machines, a medium cluster is 6 to 15-20 machines, and a large cluster is 15-20 machines and more.
4. Time should be synchronized on all servers in the cluster via NTP. Doing so will make it easier for the support department to understand cluster logs (migrations, failovers, etc.). For more details, see http://kb.virtuozzo.com/en/3197 .
1.3.3 Hardware Requirements
Before setting up a Virtuozzo Storage cluster, make sure you have all the necessary equipment at hand.
You are also recommended to:
• Consult the SSD section of the Virtuozzo Storage Administrator’s Command Line Guide to learn how you can increase cluster performance by using solid-state drives for write journaling and data caching, and how many SSDs you may need depending on the number of HDDs in your cluster.
• Check the troubleshooting section of the Virtuozzo Storage Administrator’s Command Line Guide for common hardware issues and misconfigurations that may affect your cluster performance and lead to data inconsistency and corruption.
General
• Each service (be it MDS, CS or client) requires 1.5 GB of free space on root partition for logs. For example, to run 1 metadata server, 1 client, and 12 chunk servers on a host, you will need 21 GB of free space on the root partition.
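The 21 GB figure is just the per-service requirement multiplied out:

```shell
# 1 MDS + 1 client + 12 CS, at 1.5 GB of root-partition log space each
awk 'BEGIN { services = 1 + 1 + 12; printf "%g GB\n", services * 1.5 }'
```

This prints 21 GB.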
Metadata Servers
• 1 CPU core,
• 1 GB of RAM per 100 TB of data in the cluster,
• 3 GB of disk space per 100 TB of data in the cluster,
• 1 or more Ethernet adapters (1 Gbit/s or faster).
Note:
It is recommended to place the MDS journal on SSD, either dedicated or shared with CS and client caches, or at least on a dedicated HDD which has no CS services on it.
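MDS resource requirements scale linearly with cluster capacity. For a hypothetical 1 PB (1000 TB) cluster, the rules above work out to:

```shell
# 1 GB of RAM and 3 GB of disk per 100 TB of cluster data
awk 'BEGIN {
    cluster_tb = 1000              # hypothetical cluster capacity in TB
    units = cluster_tb / 100
    printf "%g GB RAM, %g GB disk per MDS\n", units * 1, units * 3
}'
```

This prints 10 GB RAM, 30 GB disk per MDS.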
Chunk Servers
• 1/8 of a CPU core (e.g., 1 CPU core per 8 CS),
• 1 GB of RAM,
• 100 GB or more of free disk space,
• 1 or more Ethernet adapters (1 Gbit/s or faster).
Note:
1. On using local RAID with Virtuozzo Storage, consult the Virtuozzo Storage Administrator’s Command Line Guide.
2. Using a shared JBOD array across multiple nodes running CS services may introduce a single point of failure and make the cluster unavailable if all data replicas happen to be allocated and stored on the failed JBOD. For more information, see here .
3. For large clusters, it is critically important to configure proper failure domains to improve data availability. For more information, see here .
4. Do not place chunk servers on disks already used in other I/O workloads, e.g., system or swap.
Sharing disks between CS and other sources of I/O will result in severe performance loss and high
I/O latencies.
Clients
• 1 CPU core per 30,000 IOPS,
• 1 GB of RAM,
• 1 or more Ethernet adapters (1 Gbit/s or faster).
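The client CPU requirement also scales linearly. For a hypothetical client expected to drive 90,000 IOPS:

```shell
# 1 CPU core per 30,000 IOPS
awk 'BEGIN { iops = 90000; printf "%g cores\n", iops / 30000 }'
```

This prints 3 cores.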
The following table lists the maximum network performance a Virtuozzo Storage client can get with the specified network interface. Clients are recommended to use 10Gbps network hardware between any two cluster nodes and to minimize network latencies, especially if SSDs are used.
Storage network interface   Entire node maximum    Single VM maximum I/O       Single VM maximum I/O
                            I/O throughput         throughput (replication)    throughput (erasure coding)
1Gbps                       100MB/s                100MB/s                     70MB/s
2 x 1Gbps                   ~175MB/s               100MB/s                     ~130MB/s
3 x 1Gbps                   ~250MB/s               100MB/s                     ~180MB/s
10Gbps                      1GB/s                  1GB/s                       700MB/s
2 x 10Gbps                  1.75GB/s               1GB/s                       1.3GB/s
Note:
For hard disk requirements and the recommended partitioning scheme for servers that run Virtuozzo and participate in clusters, see page 78.
1.3.4 Network Requirements
When planning your network, make sure that it
• operates at 1 Gbit/s or faster (for more details, see here ),
• has non-blocking Ethernet switches.
You should use separate networks and Ethernet adapters for user and cluster traffic. This will prevent possible I/O performance degradation in your cluster by external traffic. Besides, if a cluster is accessible from public networks (e.g., from the Internet), it may become a target of Denial-of-Service attacks, and the entire cluster I/O subsystem may get stuck.
The figure below shows a sample network configuration for Virtuozzo Storage.
In this network configuration:
• BackNet is a private network used solely for interconnection and intercommunication of servers in the cluster and is not available from the public network. All servers in the cluster are connected to this network via one of their network cards.
• FrontNet is a public network customers use to access their virtual machines and containers in the Virtuozzo Storage cluster.
Note:
1. Network switches are a very common point of failure, so it is critically important to configure proper failure domains to improve data availability. For more information, see here .
2. To learn more about Virtuozzo Storage networks (in particular, how to bind chunk servers to specific IP addresses), see here .
1.4 Preparing for Installation from USB Storage Drives
To install Virtuozzo from a USB storage drive, you will need a 2 GB or higher-capacity USB drive and the Virtuozzo 7 distribution ISO image.
Make a bootable USB drive by transferring the distribution image to it with dd.
Important:
Be careful to specify the correct drive to transfer the image to.
For example, on Linux:
# dd if=vz-iso-7.0.0-3391.iso of=/dev/sdb
And on Windows (with dd for Windows):
C:\>dd if=vz-iso-7.0.0-3391.iso of=\\?\Device\Harddisk1\Partition0
CHAPTER 2
Installing Virtuozzo
This chapter explains how to install Virtuozzo.
2.1 Starting Installation
Virtuozzo can be installed from:
• DVD discs
• USB drives (see Preparing for Installation from USB Storage Drives on page 38)
• PXE servers (see the Installation via PXE Server guide for information on installing Virtuozzo over the network)
Note:
Time synchronization via NTP is enabled by default.
To start the installation, do the following:
1. Configure the server to boot from a DVD or USB drive.
2. Boot the server from the chosen media and wait for the welcome screen:
2.1.1 Choosing Installation Type
You can install Virtuozzo 7 in one of the following modes:
• Graphics. In this mode, you can install Virtuozzo components with one of the following management interfaces:
  • Web-based user interface (default, recommended). The interface consists of the Virtuozzo Storage management panel and the Virtuozzo Automator management panel (Control Center), see Installing Virtuozzo with GUI Management on page 40.
  • Command-line interface, see Installing Virtuozzo with CLI Management on page 60.
• Basic graphics (in case of issues with video card drivers), see Installing Virtuozzo in Basic Graphics Mode on page 84.
• Graphics via VNC, see Installing Virtuozzo via VNC on page 85.
• Text (Virtuozzo Storage cannot be installed in this mode), see Installing Virtuozzo in Text Mode on page 85.
Your further installation steps will differ depending on the chosen mode.
2.1.2 Enabling Forced Detection of SSDs
Certain solid-state drives (SSDs) may not be autodetectable by the installer. This may result in issues when you create or join Virtuozzo Storage clusters. To avoid this problem, you can force the installer to identify the required drives as SSDs by doing the following:
1. Select the required installation option (e.g., Install Virtuozzo 7) and press E to start editing it.
2. Add ssd_hack=sd<N>[,...] at the end of the line starting with linux /images/pxeboot/vmlinuz. For example:
linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=vz-iso-7.0.0-3589.iso quiet ip=dhcp \
    ssd_hack=sdb,sdc
3. Press Ctrl-X to start booting the chosen installation option.
The installer should identify the specified drives as SSDs.
2.2 Installing Virtuozzo with GUI Management
To install Virtuozzo with GUI management in graphics mode, choose Install Virtuozzo 7 with GUI
management on the welcome screen. After the installation program loads, you will see the Installation
Summary screen. On this screen, you need to specify a number of parameters required to install Virtuozzo.
2.2.1 Accepting EULA
You need to accept the Virtuozzo End-User License Agreement to install Virtuozzo.
To accept the Virtuozzo EULA, open the EULA screen, select Accept, and click Done.
2.2.2 Setting Date and Time
If you need to set the date and time for your Virtuozzo installation, open the DATE & TIME screen and make the necessary changes.
2.2.3 Selecting the Keyboard Layout
The selected keyboard layout can be used during installation and, once the installation is complete, in the console (e.g., for entering localized descriptions, configuration file comments, and such).
If you need to change the default English (US) keyboard layout, open the KEYBOARD screen, click the plus sign to add a layout, and click Options to choose a key combination for switching layouts.
2.2.4 Configuring Network
Usually network is configured automatically by the installation program. If you need to modify network settings, you can do so on the NETWORK & HOST NAME screen.
To install Virtuozzo, you will need to have at least one network card configured, and you will also need to provide a hostname: either a fully qualified domain name (<hostname>.<domainname>) or a short name (<hostname>).
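Whether a name is fully qualified can be checked mechanically; a minimal sketch (the helper name is ours, not part of Virtuozzo):

```shell
# A fully qualified domain name contains at least one dot
# separating the host part from the domain part.
is_fqdn() {
    case "$1" in
        *.*) return 0 ;;  # e.g., node1.example.com
        *)   return 1 ;;  # e.g., node1
    esac
}
```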
2.2.4.1 Creating Bonded and Teamed Connections
Bonded and teamed connections offer increased throughput beyond the capabilities of a single network card as well as improved redundancy.
During installation, you can configure bonding on the NETWORK & HOSTNAME screen as described below.
Teaming can be configured in a similar way after choosing Team in step 1.
1. To add a new bonded connection, click the plus button at the bottom, select Bond from the drop-down list, and click Add.
2. In the Editing Bond connection... window, set the following parameters:
2.1. Mode to XOR.
2.2. Link Monitoring to MII (recommended).
2.3. Monitoring frequency, Link up delay, and Link down delay to 300.
Note: It is also recommended to manually set xmit_hash_policy to layer3+4 after the installation. For more information on network bonding, see the Red Hat Enterprise Linux Deployment Guide and the Linux Ethernet Bonding Driver HOWTO.
3. In the Bonded connections section on the Bond tab, click Add.
4. In the Choose a Connection Type window, select Ethernet from the drop-down list, and click Create.
5. In the Editing bond slave... window, select a network interface to bond from the Device drop-down list.
6. Configure other parameters if required and click Save.
7. Repeat steps 3 to 6 for each network interface you need to add to the bonded connection.
8. Configure other parameters if required and click Save.
The connection will appear in the list on the NETWORK & HOSTNAME screen.
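After installation, the xmit_hash_policy=layer3+4 setting recommended in the note above can be added to the bond's interface configuration. The following is a sketch of the relevant line in a RHEL-style ifcfg file; the bond device name and file path are illustrative assumptions:

```shell
# Fragment of /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative).
# Mirrors the installer settings: XOR mode, MII monitoring with a
# 300 ms frequency and up/down delays, plus the layer3+4 hash policy.
BONDING_OPTS="mode=balance-xor miimon=300 updelay=300 downdelay=300 xmit_hash_policy=layer3+4"
```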
2.2.4.2 Creating VLAN Adapters
While installing Virtuozzo, you can also create virtual local area network (VLAN) adapters on the basis of physical adapters on the NETWORK & HOSTNAME screen as described below.
1. To add a new VLAN adapter, click the plus button at the bottom, select VLAN from the drop-down list, and click Add.
2. In the Editing VLAN connection... window:
2.1. From the Parent interface drop-down list, select the physical adapter the VLAN adapter will be based on.
2.2. Specify a VLAN adapter identifier in the VLAN ID field. The value must be in the 1-4094 range.
3. Configure other parameters if required and click Save.
The VLAN adapter will appear in the list on the NETWORK & HOSTNAME screen.
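The VLAN ID range mentioned in step 2.2 can be validated with a small helper before entering it; a minimal sketch (the function name is ours, not part of Virtuozzo):

```shell
# Accept only integer VLAN IDs in the 1-4094 range.
valid_vlan_id() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;  # empty or not a plain number
    esac
    [ "$1" -ge 1 ] && [ "$1" -le 4094 ]
}
```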
2.2.5 Installing Virtuozzo Automator Components
To install Virtuozzo Automator along with Virtuozzo, you need to choose the respective component to install on the VIRTUOZZO AUTOMATOR screen.
The following options are available:
• Management Panel and Compute. Install the management panel and agent components of Virtuozzo Automator. The former is installed into a container on the node and allows you to manage registered nodes and virtual environments on them. The latter is installed on the node itself and lets the node be registered in the management panel.
• Compute. Install the agent on the node so the node can be registered in the management panel.
Important: Only one VA Management Node is needed, so choose Management Panel and Compute only for the first node in your Virtuozzo Automator infrastructure. For the second and other nodes, choose Compute. Detailed instructions are provided in the following subsections.
2.2.5.1 Installing Management Panel and Compute on the First Server
On the first server in your Virtuozzo Automator infrastructure, you will need to install both the management panel and compute components.
Do the following on the VIRTUOZZO AUTOMATOR screen:
1. Choose Management Panel and Compute.
2. Specify an IP address and, optionally, a Hostname.
Virtuozzo will create a container with the specified IP and install the management panel in it. The management panel will be available at the specified IP address (or hostname) on default port 4648.
3. Click Done and proceed to configure Virtuozzo Storage as described in Installing Virtuozzo Storage Components on page 53.
2.2.5.2 Installing Compute on the Second and Other Servers
On the second and other servers in your Virtuozzo Automator infrastructure, you will need to install the compute component only.
Do the following on the VIRTUOZZO AUTOMATOR screen:
1. Choose Compute.
2. To register the node in the management panel, specify the following:
2.1. in the Management node field, the IP address or hostname of the container with the management panel;
2.2. in the Password field, the password of the management panel administrator.
Note: Alternatively, you can register the node in Virtuozzo Automator manually later.
3. Click Done and proceed to configure Virtuozzo Storage as described in Installing Virtuozzo Storage Components on page 53.
2.2.6 Installing Virtuozzo Storage Components
To install Virtuozzo Storage along with Virtuozzo, you need to choose the respective component to install on the VIRTUOZZO STORAGE screen.
The following options are available:
• Management Panel and Storage. Install the Virtuozzo Storage management panel and prepare the node for data storage. The management panel is installed into a container on the node. Storage nodes run data storage services as well as store and provide access to user data.
• Storage. Prepare the node for data storage. Storage nodes run data storage services as well as store and provide access to user data.
• No Virtuozzo Storage. Install Virtuozzo on local hard drives. If you select this option, click Done, and proceed to set the installation destination as described in Choosing Installation Destination on page 57.
Important: Only one management panel is needed, so choose Management Panel and Storage only for the first node in your Virtuozzo Storage infrastructure. For the second and other nodes, choose Storage. Detailed instructions are provided in the following subsections.
2.2.6.1 Installing Management Panel and Storage on the First Server
On the first server in your Virtuozzo Storage infrastructure, you will need to install both the management panel and storage components. This node will also become a storage node. It will be added to the Virtuozzo Storage infrastructure and registered in the management panel during installation.
Do the following on the VIRTUOZZO STORAGE screen:
1. Choose Management Panel and Storage.
2. Specify an IP address and, optionally, a Hostname.
Virtuozzo will create a new container with the specified IP and install the Virtuozzo Storage management panel in it. The management panel will be available at the specified IP address (or hostname) on default port 8888.
3. Click Done and proceed to set the installation destination as described in Choosing Installation Destination on page 57.
2.2.6.2 Installing Storage on the Second and Other Servers
On the second and other servers in your Virtuozzo Storage infrastructure, you will need to install the Storage component only. Storage nodes will be added to the Virtuozzo Storage infrastructure and registered in the management panel during installation.
For security reasons, you will need to provide a token that can only be obtained from the management panel you have installed on the first server. A single token can be used to install the storage component on multiple servers in parallel.
To obtain a token:
1. Log in to the Virtuozzo Storage management panel with the password created while deploying the first server. To do this, on any computer with network access to the container with the management panel, open a web browser and visit the container’s IP address or hostname on port 8888.
2. In the management panel, click ADD NODE to open a screen with the token (you can generate a new one if needed; generating a new token invalidates the old one).
Having obtained the token, do the following on the VIRTUOZZO STORAGE screen:
1. Choose Storage.
2. Register the node in the Virtuozzo Storage management panel. To do this, specify the following:
2.1. in the Management node field, the IP address or hostname of the container with the management panel;
2.2. in the Token field, the acquired token.
Note: Alternatively, you can register the node in the management panel manually later by logging in to the node via SSH and running /usr/libexec/vstorage-ui-agent/bin/register-storage-node.sh -m <MP_addr> -t <token> in the node’s console, where <MP_addr> is the IP address of the management panel container and <token> is the token obtained in the management panel.
3. Click Done and proceed to set the installation destination as described in Choosing Installation Destination on page 57.
2.2.7 Choosing Installation Destination
Having chosen the storage type, open the INSTALLATION DESTINATION screen. If you have multiple disks, select which one will be the system disk. The other disks will be used according to the chosen storage type.
Warning: All disks recognized by the installation program will be cleared of partitions once you click Begin Installation.
Click Done and proceed to finish the installation.
2.2.8 Finishing Installation
Having configured everything necessary on the INSTALLATION SUMMARY screen, click Begin Installation.
While Virtuozzo is installing, click ROOT PASSWORD to create a password for the root account. Installation will not finish until the password is created.
While Virtuozzo is installing, you can also activate a Virtuozzo license and disable the automatic updating of Virtuozzo ReadyKernel patches if required.
2.2.8.1 Activating License
To activate a Virtuozzo license, click LICENSE KEY, enter a key, and click Done.
If you do not have a license, you can get one by clicking “Try Virtuozzo Today” at https://virtuozzo.com/ .
Alternatively, you can activate a license with the vzlicload command after the installation is complete. For more details, see the Virtuozzo 7 User’s Guide .
Note: If the license is not activated, containers and virtual machines will be deactivated (suspended without the possibility to resume) after a grace period. All newly created containers and virtual machines will run for 5 minutes. Containers with Virtuozzo Storage and Virtuozzo Automator management panels (if installed) will run for 24 hours.
2.2.8.2 Configuring ReadyKernel
With automatic updating enabled by default, ReadyKernel will check for new patches daily at 12:00 server time. If a patch is available, ReadyKernel will download, install, and load it for the current kernel. For more details, see the Virtuozzo 7 User’s Guide .
While Virtuozzo is installing, you can disable the automatic updating of Virtuozzo ReadyKernel patches if required. To do this, click READYKERNEL, uncheck ReadyKernel automatic update, and click Done.
2.2.8.3 Completing Installation
Once the installation is complete, click Reboot to restart the server.
Note: If you are installing Virtuozzo from a USB drive, remove the drive before restarting the server.
After restart, you can continue building your Virtuozzo infrastructure as described in Building Virtuozzo Infrastructure on page 60.
2.2.8.4 Building Virtuozzo Infrastructure
Your next steps will depend on which Virtuozzo Storage and Virtuozzo Automator components you have already installed and which you still need to install.
• If you installed Virtuozzo Storage and Virtuozzo Automator management panels on the first server, proceed to install storage and compute components, respectively, on the second and other servers.
• If you installed storage and compute components on a server and need to install them on more servers, repeat the performed installation steps. When on the VIRTUOZZO STORAGE or VIRTUOZZO AUTOMATOR screen, follow the instructions in Installing Storage on the Second and Other Servers on page 55 or Installing Compute on the Second and Other Servers on page 52.
• If you installed storage and compute components on the last server, do the following:
• Log in to the Virtuozzo Storage management panel and make sure that all storage nodes are present in the UNASSIGNED list on the NODES screen.
• Log in to the Virtuozzo Automator management panel and make sure that all compute nodes are present on the Infrastructure > Hardware Nodes tab.
With everything ready, you can start managing your infrastructure as described in the Virtuozzo Storage Administrator’s Guide and the Virtuozzo Automator Administrator’s Guide.
2.3 Installing Virtuozzo with CLI Management
To install Virtuozzo with CLI management in graphics mode, choose Install Virtuozzo 7 with CLI management on the welcome screen. After the installation program loads, you will see the Installation Summary screen. On this screen, you need to specify a number of parameters required to install Virtuozzo.
2.3.1 Accepting EULA
You need to accept the Virtuozzo End-User License Agreement to install Virtuozzo.
To accept the Virtuozzo EULA, open the EULA screen, select Accept, and click Done.
2.3.2 Setting Date and Time
If you need to set the date and time for your Virtuozzo installation, open the DATE & TIME screen and make the necessary changes.
2.3.3 Selecting the Keyboard Layout
The selected keyboard layout can be used during installation and, once the installation is complete, in the console (e.g., for entering localized descriptions, configuration file comments, and such).
If you need to change the default English (US) keyboard layout, open the KEYBOARD screen, click the plus sign to add a layout, and click Options to choose a key combination for switching layouts.
2.3.4 Configuring Network
Usually, the network is configured automatically by the installation program. If you need to modify the network settings, you can do so on the NETWORK & HOST NAME screen.
To install Virtuozzo, you will need to have at least one network card configured, and you will also need to provide a hostname: either a fully qualified domain name (<hostname>.<domainname>) or a short name (<hostname>).
2.3.4.1 Creating Bonded and Teamed Connections
Bonded and teamed connections offer increased throughput beyond the capabilities of a single network card as well as improved redundancy.
During installation, you can configure bonding on the NETWORK & HOSTNAME screen as described below.
Teaming can be configured in a similar way after choosing Team in step 1.
1. To add a new bonded connection, click the plus button at the bottom, select Bond from the drop-down list, and click Add.
2. In the Editing Bond connection... window, set the following parameters:
2.1. Mode to XOR.
2.2. Link Monitoring to MII (recommended).
2.3. Monitoring frequency, Link up delay, and Link down delay to 300.
Note: It is also recommended to manually set xmit_hash_policy to layer3+4 after the installation. For more information on network bonding, see the Red Hat Enterprise Linux Deployment Guide and the Linux Ethernet Bonding Driver HOWTO.
3. In the Bonded connections section on the Bond tab, click Add.
4. In the Choose a Connection Type window, select Ethernet from the drop-down list, and click Create.
5. In the Editing bond slave... window, select a network interface to bond from the Device drop-down list.
6. Configure other parameters if required and click Save.
7. Repeat steps 3 to 6 for each network interface you need to add to the bonded connection.
8. Configure other parameters if required and click Save.
The connection will appear in the list on the NETWORK & HOSTNAME screen.
2.3.4.2 Creating VLAN Adapters
While installing Virtuozzo, you can also create virtual local area network (VLAN) adapters on the basis of physical adapters on the NETWORK & HOSTNAME screen as described below.
1. To add a new VLAN adapter, click the plus button at the bottom, select VLAN from the drop-down list, and click Add.
2. In the Editing VLAN connection... window:
2.1. From the Parent interface drop-down list, select the physical adapter the VLAN adapter will be based on.
2.2. Specify a VLAN adapter identifier in the VLAN ID field. The value must be in the 1-4094 range.
3. Configure other parameters if required and click Save.
The VLAN adapter will appear in the list on the NETWORK & HOSTNAME screen.
2.3.5 Choosing the Storage Type
To choose the storage type, open the SELECT STORAGE TYPE screen:
Virtuozzo with CLI management can be installed on two types of storage:
• Basic storage, i.e. local hard drive(s). This option is chosen by default and requires no configuration.
• Virtuozzo Storage, as a part of a new or existing Virtuozzo Storage cluster. If you choose this option, you will need to set additional options described in Setting Virtuozzo Storage Installation Options on page 71.
Note: Virtuozzo Storage is a solution that transforms local hard drives into highly protected enterprise-level storage (like SAN or NAS) with data replication, high-availability, and self-healing features. Using Virtuozzo Storage, you can safely store and run virtual machines and containers, migrate them with zero downtime, provide high availability for your Virtuozzo installations, and much more. For more information on Virtuozzo Storage, see the Virtuozzo Storage Administrator’s Guide.
2.3.6 Setting Virtuozzo Storage Installation Options
If you choose to install on Virtuozzo Storage, you need to choose one of these options:
• join the server to an existing Virtuozzo Storage cluster (see Joining an Existing Virtuozzo Storage Cluster on page 73), or
• create a new Virtuozzo Storage cluster (see Creating a New Virtuozzo Storage Cluster on page 72).
Note: For detailed information on working with Virtuozzo Storage clusters, consult the Virtuozzo Storage Administrator’s Guide.
2.3.6.1 Creating a New Virtuozzo Storage Cluster
If you choose to create a new Virtuozzo Storage cluster, you will need to provide the cluster name and password and select one or more roles for your server.
First, in the Name field, specify a name for the cluster that will uniquely identify it among other clusters in your network. A cluster name may only contain the characters a-z, A-Z, 0-9, minus (-), and underscore (_), and must be no longer than 63 characters.
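The naming rules can be checked with a small helper before running the installer; a minimal sketch (the function name is ours, not part of Virtuozzo):

```shell
# Accept cluster names built only from a-z, A-Z, 0-9, minus, and
# underscore, at most 63 characters long.
valid_cluster_name() {
    case "$1" in
        ''|*[!a-zA-Z0-9_-]*) return 1 ;;  # empty or illegal character
    esac
    [ ${#1} -le 63 ]
}
```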
Next, click Configure security next to the Name field and specify a password for your cluster. The password must be at least 8 characters long. It is encrypted and saved to the file /etc/vstorage/clusters/<cluster_name>/auth_digest.key on the server.
Note: A server needs to be authenticated only once. After that, you can configure it as a metadata server, chunk server, or client. If you later decide to configure the server where you are setting up the first MDS as a chunk server, no additional authentication will be required.
Next, choose the cluster role(s) for your server. See Virtuozzo Storage Server Roles on page 75. When you create a new Virtuozzo Storage cluster, the Metadata Server Role option is selected by default.
2.3.6.2 Joining an Existing Virtuozzo Storage Cluster
If you choose to join an existing Virtuozzo Storage cluster, you will need to provide the cluster name and password and select one or more roles for your server.
First, in the Name field, specify the name of the cluster to join.
Next, click Configure security next to the Name field and specify a password for your cluster.
Next, choose the role(s) for your server. See Virtuozzo Storage Server Roles on page 75.
2.3.6.3 Virtuozzo Storage Server Roles
Each Virtuozzo server can have one, some, or all of the following roles.
• Metadata Server Role. MDS servers are an essential part of any Virtuozzo Storage cluster. They store metadata about chunk servers and control how files keeping the contents of virtual machines and containers are split into chunks and where these chunks are located.
As a static IP address is required for an MDS server, the IP address detected by the installation program is specified in the corresponding field by default. If multiple IP addresses are available, you need to choose one to assign to the MDS server. In some cases, you may need to enter a valid IP address manually.
Note: MDS servers require static IP addresses. If you are using DHCP, map an IP address to the MAC address of the MDS server.
• Chunk Server Role. Chunk servers (CS) store the contents of virtual machines and containers in the form of fixed-size chunks and provide access to these chunks. All data chunks are replicated and the replicas are kept on different chunk servers for high data availability. If one of the chunk servers goes down, other chunk servers continue providing the replicas of data chunks that were stored on the failed server.
Warning: Virtuozzo Storage has redundancy built in, so you should avoid running Virtuozzo Storage on redundant types of RAID like 1, 5, or 6 over local storage. In this case, a single write operation may affect a significant number of HDDs, resulting in very poor performance. For example, for 3 Virtuozzo Storage replicas and RAID5 on servers with 5 HDDs each, a single write operation may result in 15 I/O operations. For recommendations on optimal local storage configurations, consult the Virtuozzo Storage Administrator’s Guide.
By default, the installer does the following:
• If your server has several disk drives, the installer will automatically configure each disk drive except the system one to act as a separate chunk server.
• If one or more SSD drives are available on the server, they will be set up to store chunk server write journals (one journal per chunk server). By using SSD drives for write journaling, you can boost the performance of write operations in the cluster by two or more times. For more information on using SSD drives, consult the Virtuozzo Storage Administrator’s Guide. If you need to disable this setting, click Configure under the chunk server role checkbox and clear the Enable the use of SSD drives for the CS journal checkbox.
Note: If one or more SSDs are not detected automatically, find out their device names (for example, invoke the console by pressing Ctrl+Alt+F2 and run lsblk -d -o name,rota; zeroes in the ROTA column will indicate non-rotational drives, i.e., SSDs), reboot to the installer welcome screen, and follow the instructions in Enabling Forced Detection of SSDs on page 40.
• Client Server Role. Clients are computers with Virtuozzo 7 from where you run virtual machines and containers stored in your Virtuozzo Storage cluster.
By default, if one or more SSD drives are available on the server, the installer configures them to store a local cache of frequently accessed data. By having a local cache on an SSD drive, you can increase the overall cluster performance by 10 or more times. For more information on using SSD drives, consult the Virtuozzo Storage Administrator’s Guide.
Note: If one or more SSDs are not detected automatically, find out their device names (for example, invoke the console by pressing Ctrl+Alt+F2 and run lsblk -d -o name,rota; zeroes in the ROTA column will indicate non-rotational drives, i.e., SSDs), reboot to the installer welcome screen, and follow the instructions in Enabling Forced Detection of SSDs on page 40.
To change either of these settings, click Configure under the client role checkbox and set the corresponding checkboxes in the client settings window.
2.3.7 Partitioning the Hard Drives
Having chosen the storage type, you need to choose partitioning options on the INSTALLATION DESTINATION screen.
Warning: All the existing partitions on all disks will be deleted.
Firstly, you will need to choose which of the disks are marked as System, Datastore, and Cache:
• Use the System radio button to select a disk where the root partition with Virtuozzo system files (mounted to /) will be kept.
• Use the Datastore checkboxes to mark disks where virtual machines and containers will be kept. All such disks will be organized into a single volume group and mounted to the /vz mount point. At least one disk needs to be marked as a data store. If the Virtuozzo Storage chunk server role is selected, a chunk server will be created on each disk marked as a data store (in this case, the system disk cannot be marked as a data store).
• Use the Cache checkboxes to mark SSD drives where journals and cache will be kept. This option is only applicable to SSD drives to be used in Virtuozzo Storage clusters.
Secondly, at the bottom of the screen, you will need to choose either:
• Automatically configure partitioning and click Done to have the installation program create the default layout on the server.
• I will configure partitioning and click Done to manually partition your disk(s).
When partitioning the disks, keep in mind that Virtuozzo requires these partitions:
• Boot: mount point /boot, 1 GB, boot partition with Virtuozzo boot files, created on each HDD,
• Root: mount point /, 12-24 GB, root partition with Virtuozzo files, created on the HDD marked System,
• Swap: paging partition with the swap file system, created on the HDD marked System. The size depends on RAM:
• if RAM is below 2 GB, swap size should be twice the RAM,
• if RAM is 2-8 GB, swap size should be equal to the RAM,
• if RAM is 8-64 GB, swap size should be half the RAM,
• otherwise, swap size should be 32 GB.
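The swap sizing rules above can be expressed as a small calculator; a sketch assuming whole-GB RAM sizes (the helper is illustrative, not part of the installer):

```shell
# Recommended swap size in GB for a given RAM size in GB,
# following the sizing rules listed above.
swap_size() {
    ram=$1
    if   [ "$ram" -lt 2 ];  then echo $((ram * 2))   # below 2 GB: twice the RAM
    elif [ "$ram" -le 8 ];  then echo "$ram"         # 2-8 GB: equal to RAM
    elif [ "$ram" -le 64 ]; then echo $((ram / 2))   # 8-64 GB: half the RAM
    else                         echo 32             # above 64 GB: 32 GB
    fi
}

swap_size 16   # prints 8
```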
• Data storage, depends on the chosen storage type:
• Local: mount point /vz, at least 30 GB, storage for virtual machines, containers, and OS and application templates, an LVM volume group that spans all HDDs marked Datastore. The /vz partition must be formatted to ext4 tuned for VZ (which is ext4 with additional mount options that improve Virtuozzo performance).
• Virtuozzo Storage chunk server: mount point <cluster_name>-cs<N>, at least 100 GB, only required if the chunk server role is chosen. Chunk servers can only be created on partitions formatted to ext4 tuned for CS (which is ext4 with additional mount options that improve Virtuozzo Storage performance).
• In addition, either a 1 MB partition with the BIOS boot file system or a 200 MB partition with the EFI boot file system is required, depending on your server configuration.
A typical partition layout for Virtuozzo on basic storage may look like this:
A typical partition layout for Virtuozzo on Virtuozzo Storage may look like this:
2.3.8 Finishing Installation
Having configured everything necessary on the INSTALLATION SUMMARY screen, click Begin Installation.
While Virtuozzo is installing, click ROOT PASSWORD to create a password for the root account. Installation will not finish until the password is created.
While Virtuozzo is installing, you can also activate a Virtuozzo license and disable the automatic updating of Virtuozzo ReadyKernel patches if required.
2.3.8.1 Activating License
To activate a Virtuozzo license, click LICENSE KEY, enter a key, and click Done.
If you do not have a license, you can get one by clicking “Try Virtuozzo Today” at https://virtuozzo.com/ .
Alternatively, you can activate a license with the vzlicload command after the installation is complete. For more details, see the Virtuozzo 7 User’s Guide .
Note: If the license is not activated, containers and virtual machines will be deactivated (suspended without the possibility to resume) after a grace period. All newly created containers and virtual machines will run for 5 minutes. Containers with Virtuozzo Storage and Virtuozzo Automator management panels (if installed) will run for 24 hours.
2.3.8.2 Configuring ReadyKernel
With automatic updating enabled by default, ReadyKernel will check for new patches daily at 12:00 server time. If a patch is available, ReadyKernel will download, install, and load it for the current kernel. For more details, see the Virtuozzo 7 User’s Guide .
While Virtuozzo is installing, you can disable the automatic updating of Virtuozzo ReadyKernel patches if required. To do this, click READYKERNEL, uncheck ReadyKernel automatic update, and click Done.
2.3.8.3 Completing Installation
Once the installation is complete, click Reboot to restart the server.
Note: If you are installing Virtuozzo from a USB drive, remove the drive before restarting the server.
After restart, you will see the login prompt as well as the server IP address and hostname that you can use to connect to the server remotely.
To manage virtual machines and containers on the Virtuozzo server, you will need to log in as the root user.
After you do so, you will see a shell prompt and can start creating and managing your virtual machines and containers. For quick-start instructions, run man afterboot. More detailed information is provided in the Virtuozzo 7 User’s Guide.
2.4 Installing Virtuozzo in Basic Graphics Mode
If the installer cannot load the correct driver for your video card, you can try to install Virtuozzo in the basic graphics mode. To select this mode, on the welcome screen, choose Troubleshooting, then Install Virtuozzo 7 in basic graphics mode. The installation process itself is the same as that in the default graphics mode (see Installing Virtuozzo with GUI Management on page 40).
2.5 Installing Virtuozzo via VNC
To install Virtuozzo via VNC, boot to the welcome screen and do the following:
1. Select the required installation option (e.g., Install Virtuozzo 7) and press E to start editing it.
2. Add text at the end of the line starting with linux /images/pxeboot/vmlinuz. For example:
linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=vz-iso-7.0.0-3589.iso quiet ip=dhcp text
3. Press Ctrl-X to start booting the chosen installation option.
4. When presented with a choice of starting VNC or proceeding to the text mode, choose 1 for VNC.
5. When offered, enter a VNC password.
6. In the output that follows, look up the hostname or IP address and the VNC port to connect to, e.g., 192.168.0.10:1.
7. Connect to said address in a VNC client. You will see the usual Installation Summary screen.
The installation process itself is the same as that in the default graphics mode (see Installing Virtuozzo with GUI Management on page 40).
2.6 Installing Virtuozzo in Text Mode
To install Virtuozzo in the text mode, boot to the welcome screen and do the following:
1. Select the main installation option Install Virtuozzo 7 and press E to start editing it.
2. Add text at the end of the line starting with linux /images/pxeboot/vmlinuz. For example:
linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=vz-iso-7.0.0-3589.iso quiet ip=dhcp text
3. Press Ctrl-X to start booting the chosen installation option.
4. When presented with a choice of starting VNC or proceeding to the text mode, choose 2 for text mode.
5. In the installation menu that is shown, at least do the following: set or confirm the installation source (press 3), the installation destination (press 6), and the root password (press 11), and accept the EULA (press 5).
6. Press b to begin installation.
7. When installation ends, press Enter to reboot.
2.7 Configuring Server Ports for Virtuozzo
Virtuozzo enables the Linux kernel firewall during installation. This section lists the ports opened by default. The set of ports differs depending on your system configuration:
• If the server does not participate in a Virtuozzo Storage cluster, see Opened Ports on Standalone Servers on page 86 for information on ports used by Virtuozzo.
• If the server is part of a Virtuozzo Storage cluster, in addition, see Opened Ports on Servers in Virtuozzo Storage Clusters on page 87 for information on ports used by the cluster.
2.7.1 Opened Ports on Standalone Servers
The table below lists the ports for servers that do not participate in Virtuozzo Storage clusters. An "I" in the Description column means the port should be opened for incoming traffic; an "O", for outgoing traffic.

Port          Description
22            (IO) Used for secure logins via SSH.
80            (IO) Used for HTTP connections, e.g., to download Virtuozzo updates and EZ templates from remote repositories.
21            (O) Used to connect to the Debian repository to cache Debian EZ templates.
443           (O) Used to send problem reports to the support team.
5224          (O) Used to connect to the Key Administrator server to update Virtuozzo lease licenses.
64000         (IO) Used to connect SDK with the dispatcher running on the remote server, and for communication between the dispatchers on different servers.
1621, 1622    (O) Used to migrate containers to virtual machines on servers that run Virtuozzo hypervisor-based solutions.
67            Used to support host-only adapters in virtual machines. Virtuozzo does not use port 67 for any external connections.
<RPC ports>   Used by various RPC services (e.g., to support NFS shares). Port numbers may differ from system to system. To learn what RPC services are registered on your server and what ports they are using, run
# rpcinfo -p localhost
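If you manage the firewall yourself, the ports marked "I" above need matching incoming rules. As a sketch, assuming firewalld (your system may use iptables or nftables instead), the loop below prints the firewall-cmd invocations for the fixed ports opened for incoming traffic; run the printed commands as root to apply them:

```shell
# Print (not execute) firewall-cmd commands for the fixed ports marked
# "I" in the table above. firewalld is an assumption; adapt the output
# for iptables/nftables if your system uses those instead.
incoming_ports="22 80 64000"
for p in $incoming_ports; do
    echo "firewall-cmd --permanent --add-port=${p}/tcp"
done
echo "firewall-cmd --reload"
```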
Port          Description
647, 847      Reserved by the Linux portreserve program for the DHCP server, if you use one.
5700-6900     Range of ports used for VNC connections.
You may also need to open ports used to connect to remote yum repositories. Though most repositories can be accessed via HTTP, some may require access via HTTPS or FTP. To check which repositories are currently configured for your system and which protocols are used to connect to them, run the following commands and examine their output:
# yum repolist -v | egrep -e 'baseurl|mirrors'
# curl http://repo.cloudlinux.com/psbm/mirrorlists/psbm6-os.mirrorlist
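The idea behind these commands can be sketched on canned input: extract the URL schemes from the repository definitions to see which of HTTP (port 80), HTTPS (port 443), or FTP (port 21) must be reachable. The sample text and repo.example.com URLs below are placeholders standing in for real yum repolist output:

```shell
# Extract URL schemes from repository definitions to see which ports
# must be open. The here-string is a placeholder for real
# 'yum repolist -v' output.
repo_conf='baseurl: http://repo.example.com/os
baseurl: https://repo.example.com/updates
baseurl: ftp://repo.example.com/extras'
schemes=$(printf '%s\n' "$repo_conf" | grep -oE '(https|http|ftp)://' | tr -d ':/' | sort -u)
echo "$schemes"
```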
2.7.2 Opened Ports on Servers in Virtuozzo Storage Clusters
A Virtuozzo Storage cluster requires the ports listed below to be opened in addition to those on standalone servers. If you use the Virtuozzo Storage management panel to create clusters, all the necessary ports are opened automatically. Otherwise, open these ports manually on each node participating in the cluster.
Port           Description

MDS Servers
2510           (IO) Used for communication between MDS servers.
2511           (IO) Used for communication with chunk servers and clients.

Chunk Servers
2511           (O) Used for communication with MDS servers.
<random_port>  (I) Used for communication with clients. The chunk server management service automatically binds to any available port. You can also manually assign the service to a specific port.

Clients
2511           (O) Used for communication with MDS servers.
<random_port>  (O) Used for communication with chunk servers. The client management service automatically binds to any available port. You can also manually assign the service to a specific port.
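To verify from a client node that an MDS server accepts connections on port 2511, you can probe the port with bash's built-in /dev/tcp device, a quick sketch shown below. The address 127.0.0.1 is a placeholder; substitute the address of a real MDS server:

```shell
# Probe a host:port for reachability using bash's /dev/tcp device.
# 127.0.0.1 is a placeholder; use a real MDS server address.
check_port() {
    if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 not reachable"
    fi
}
result=$(check_port 127.0.0.1 2511)
echo "$result"
```

A "not reachable" result on a node that should serve the port usually points at a firewall rule or a service that is not running.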
CHAPTER 3
Exploring Additional Installation Options
This chapter describes how to boot into the rescue mode and run Virtuozzo in virtual machines.
3.1 Booting into Rescue Mode
If you experience problems with your system, you can boot into the rescue mode to troubleshoot them. Once you are in the rescue mode, your Virtuozzo installation is mounted under /mnt/sysimage. You can go to this directory and make the necessary changes to your system.
To enter the rescue mode, do the following:
1. Boot your system from a Virtuozzo DVD or USB drive.
2. On the welcome screen, click Troubleshooting, then Rescue system.
3. Once Virtuozzo boots into the emergency mode, press Ctrl+D to load the rescue environment.
4. In the rescue environment, you can choose one of the following options:
• Continue (press 1): mount the Virtuozzo installation in read and write mode under /mnt/sysimage.
• Read-only mount (press 2): mount the Virtuozzo installation in read-only mode under /mnt/sysimage.
• Skip to shell (press 3): load a shell if your file system cannot be mounted, for example, when it is corrupted.
• Quit (Reboot) (press 4): reboot the server.
5. Unless you press 4, a shell prompt will appear. In it, run chroot /mnt/sysimage to make the Virtuozzo installation the root environment. Now you can run commands and try to fix the problems you are experiencing.
6. After you fix the problem, run exit to exit the chroot environment, then reboot to restart the system.
3.2 Running Virtuozzo in Virtual Machines
Warning: Nested virtualization is an experimental feature and has been tested only on Linux guests. The operation of nested virtual machines may be unstable.
Installing Virtuozzo in virtual machines may prove useful if you want to evaluate Virtuozzo.
To run virtual machines with Virtuozzo, the physical server’s processor(s) must support Intel VT-x (with “unrestricted guest”) and EPT. The following hypervisors are supported: Parallels Desktop for Mac, VMware Fusion, VMware Workstation, and VMware ESXi. Make sure that nested virtualization support is enabled in your hypervisor.
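On a Linux host you can check whether the CPU exposes the required features by looking for the vmx (VT-x) and ept flags in /proc/cpuinfo. A quick sketch (inside a guest, missing flags usually mean nested virtualization is not enabled in the hypervisor):

```shell
# Check /proc/cpuinfo for the VT-x (vmx) and EPT (ept) flags required
# to run Virtuozzo inside a virtual machine. Both checks degrade to
# "no" if the file is absent or the flag is not advertised.
vtx=$(grep -qw vmx /proc/cpuinfo 2>/dev/null && echo yes || echo no)
ept=$(grep -qw ept /proc/cpuinfo 2>/dev/null && echo yes || echo no)
echo "VT-x: $vtx, EPT: $ept"
```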
The following virtual hardware is recommended for virtual machines with Virtuozzo:
• vCPU: 2 or more
• RAM: 2 GB or more
• HDD: 64 GB or more
To install Virtuozzo in a VM, copy the distribution ISO image to a local drive and create a new VM from it according to your virtualization software documentation. Start the VM, boot to the Virtuozzo installer, and follow the instructions in Installing Virtuozzo on page 39.
3.2.1 Restrictions and Peculiarities
When using Virtuozzo in a virtualized environment, keep in mind the following restrictions and specifics:
• Running Virtuozzo in a virtual machine is intended for evaluation purposes only, and not for production.
• If you change the configuration of a virtual machine where Virtuozzo is installed, you may need to reactivate Virtuozzo.
• When you start a virtual machine with Virtuozzo, VMware Fusion may warn you that it requires full access to the network traffic. Ignore this message, and proceed with booting the virtual machine.
• To run in a virtualized Virtuozzo environment, a virtual machine must have Virtuozzo guest tools installed.
• To enable full support for virtual machines created inside Virtuozzo, make sure to enable nested virtualization support for the Virtuozzo VM in your virtualization software. Otherwise virtual machines created in Virtuozzo will only support 32-bit operating systems and a single CPU.
• Containers created in Virtuozzo running inside a virtual machine have no limitations and work as usual.