Understand Microsoft Hyper Converged solution
2016
Romain Serre
Charbel Nemnom
MVP Cloud and Datacenter Management
Credit page
Romain Serre (author)
MVP Profile
Romain Serre works in Lyon as a Technical Architect. He focuses on Microsoft
technologies, especially Hyper-V, System Center, storage, networking and
Cloud OS technologies such as Microsoft Azure and Azure Stack.
He is a Microsoft Certified Solutions Expert (Server Infrastructure &
Private Cloud) and is also certified on Hyper-V and Microsoft Azure (Implementing a Microsoft
Azure Solution).
Romain blogs about all of these technologies at http://www.tech-coffee.net
Charbel Nemnom (author)
MVP Profile
Technical evangelist, totally a fan of the latest IT platform solutions.
Accomplished hands-on technical professional with over 13 years of broad IT
project management and Infrastructure experience, serving on and guiding
technical teams to optimize performance of mission-critical enterprise
systems. Excellent communicator, adept at identifying business needs and
bridging the gap between functional groups and technology to foster targeted
and innovative systems development. Well respected by peers for
demonstrating passion for technology and performance improvement.
Extensive practical knowledge of complex systems builds, network design and virtualization.
Charbel has extensive experience in various systems, focusing on Microsoft CloudOS Platform,
Datacenter Management, Hyper-V, Microsoft Azure, security, disaster recovery, and many types of
monitoring tools as well. He has a solid knowledge of technical reporting.
If you want to learn more about Charbel's activities, you are invited to visit his blog at
https://www.charbelnemnom.com
Warning: Technical Preview information
This whitepaper is based on the latest public Microsoft information and the latest build of Windows
Server 2016. Because of that, the information may change from one build to the next before this
documentation can be updated.
This document was written against Windows Server 2016 Technical Preview 5.
Warning: Please note that some functionalities may not yet work as expected because the current
preview release still contains bugs.
Table of contents
From disaggregated systems to Hyper-Convergence ........................................................................... 5
Microsoft disaggregated solution overview......................................................................................... 6
Microsoft Hyper Converged solution overview.................................................................................... 7
Network required ............................................................................................................................... 9
Network Convergence ........................................................................................................................ 9
Switch Embedded Teaming ............................................................................................................... 11
VMQueue and Receive Side Scaling................................................................................................... 12
Datacenter Bridging .......................................................................................................................... 12
Remote Direct Memory Access ......................................................................................................... 13
SMB Direct........................................................................................................................................ 13
Hyper-V ............................................................................................................................................ 14
Live-Migration .................................................................................................................................. 16
Hyper-V Replica ................................................................................................................................ 17
Storage Spaces.................................................................................................................................. 18
Storage Spaces resiliency .................................................................................................................. 19
Scale-Out File Server ......................................................................................................................... 21
Tiering .............................................................................................................................................. 21
Storage Spaces Direct ....................................................................................................................... 23
ReFS file system ................................................................................................................................ 25
Multi-Resiliency Virtual Disks and ReFS Real-Time Tiering ................................................................. 25
Three tiers configuration................................................................................................................... 26
Storage Replica ................................................................................................................................. 27
Storage Quality of Service ................................................................................................................. 28
Design Overview ............................................................................................................................... 34
Requirements ................................................................................................................................... 35
Nano Server’s deployments .............................................................................................................. 35
Virtual machines creation and configuration ................................................................................. 35
Nodes configuration...................................................................................................................... 37
Enable nested Hyper-V.................................................................................................................. 38
Network and OS configuration ...................................................................................................... 38
About Datacenter Bridging............................................................................................................ 40
Storage Spaces Direct deployment.................................................................................................... 40
Cluster deployment....................................................................................................................... 40
Enable Storage Spaces Direct ........................................................................................................ 42
Management storage.................................................................................................................... 43
Host a VM on the solution ................................................................................................................ 46
Storage Quality of Service ................................................................................................................. 48
Introduction
From disaggregated systems to Hyper-Convergence
Before virtualization was deployed in companies, every single application was installed on a specific
physical server. Most of the time, a physical server hosted a single application, usually installed on
local storage. Because only one application was installed on each server, the resources of the physical
server were largely unused (except in the case of big applications). For example, DHCP and WDS were
installed on the same physical server and still consumed few resources. Moreover, each physical
server had multiple Network Interface Controllers (NICs), but each NIC was dedicated to a specific
traffic type. For example, some NICs were dedicated to the cluster heartbeat, which used 1% of the bandwidth.
To avoid wasting resources such as CPU, RAM, storage and power, we adopted virtualization in the
datacenter. Thanks to this technology, we were able to run multiple servers, called Virtual Machines
(VMs), on a single physical server (the hypervisor). We installed a single application in each VM, and
so we were able to run multiple applications on a single physical server. Moreover, because each VM
runs its own operating system, we had a boundary between each VM and so between each application.
To run more and more virtual machines and to support High Availability, we installed hypervisors in
clusters, so that VMs were able to move from one cluster node to another without any downtime.
But a hypervisor cluster requires shared storage so that all nodes can access the same VM data store.
Therefore, Network Attached Storage (NAS) or Storage Area Network (SAN) appliances were used to
provide shared storage. Usually a SAN was used, which brought new challenges.
A SAN provides good performance and good resiliency, but it brings some difficulties. Firstly, a SAN is a
complex solution: it requires specific knowledge of zoning, masking, LUNs, RAID groups and so on.
Secondly, a SAN can be connected through several types of protocols, such as iSCSI, Fibre Channel (FC) or FC
over Ethernet (FCoE). Depending on the chosen solution, the price changes, as do the operational
difficulties. For example, Fibre Channel, which provides the best performance, requires specific
switches (SAN switches), SFP modules, fiber optics and specific peripherals on the physical servers (called
Host Bus Adapters or HBAs). Moreover, when you add new hardware (either SAN enclosures or physical
servers), you have to check that the device firmware is compatible with the other devices. Therefore, this
solution is not really flexible and is expensive at the same time.
For all of the above reasons, for the past few years we have been trying to bring Software-Defined Datacenter
(SDD) solutions into the datacenter. With SDD we use software instead of hardware devices,
which are usually not flexible. Software is flexible and scalable: a single update can bring new features.
First, we installed Software-Defined Compute, which is in fact the hypervisor or the hypervisor cluster.
There are several hypervisors on the market today, such as ESX (VMware), Hyper-V (Microsoft) or
XenServer (Citrix).
Next, we had to create virtual switches to interconnect virtual machines. Then, to avoid wasting network
bandwidth, Network Convergence was introduced. Thanks to Network Convergence,
we were able to carry several traffic types through a single NIC or a team. Moreover, we can now use
network virtualization (VXLAN or NVGRE) to segregate subnets in software without using VLANs. All of
this is called Software-Defined Networking.
More recently, we wanted to stop using SANs because they are too expensive, complex and not scalable enough.
Thus, some companies created software storage solutions such as vSAN (VMware Virtual SAN) or
Storage Spaces (Microsoft). This is called Software-Defined Storage.
However, some companies faced a scalability problem between storage and hypervisors. Because
the storage is shared, the more hypervisors you add, the more the available Input/Output operations Per Second (IOPS)
are divided among them. Therefore, you have to add devices to your storage solution. The solution
was scalable, but it required careful balancing of compute and storage resources.
Hyper-Convergence was introduced into the market by several vendors to tackle all those challenges.
To resolve this, the storage is placed in the same physical host as the hypervisor (local Direct-Attached
Storage, or DAS). Therefore, when you add a hypervisor, you add storage too. This makes the solution really
flexible and scalable, especially for on-premises cloud solutions. Moreover, because almost everything
is in the physical server, a hyper converged solution uses little space in the datacenter.
Welcome to the Hyper Converged world!
Microsoft disaggregated solution overview
Before talking about the Hyper Converged solution, it is important to understand the disaggregated solution
that we currently build with Windows Server 2012 R2.
Figure 1: Microsoft disaggregated solution – Hyper-V hosts (carrying Management, Backup, Cluster, Live-Migration, Storage (RDMA) and VM Networks traffic) connected through >10Gb network devices to Scale-Out File Server nodes attached to JBOD trays
In the Microsoft disaggregated solution, we use Software-Defined Storage (clustered Storage Spaces). In
Windows Server 2012 R2, all storage devices must be visible to each File Server cluster node to create
clustered Storage Spaces. Therefore, shared JBOD trays are usually required. They are connected
using SAS cables to two or more clustered file servers (up to 8) with the Scale-Out File Server role
installed.
Regarding the networks, the current NIC teaming technology does not allow us to converge the Storage,
Live-Migration and other networks (see the Software-Defined Networking part). Therefore, the networks
cannot be fully converged.
Microsoft Hyper Converged solution overview
Several companies have released their own Hyper Converged solutions, such as Nutanix, GridStore or SimpliVity.
However, in this whitepaper we will focus on the Microsoft Hyper Converged solution that will be
available with the release of Windows Server 2016.
At the time of writing this whitepaper, the Microsoft Hyper Converged solution must be composed of
at least three nodes (up to 16 nodes). However, in a three-node configuration some features are not available
(such as Multi-Resilient Virtual Disks). Each node must be identical (same CPU, RAM, storage
controllers, network adapters and so on). Each node has its own local storage, either internal disks or a SAS JBOD
enclosure attached locally. Disks can be SAS, SATA or NVMe (SSD connected to PCI-Express). On each
node, the Failover Clustering feature is installed as well as Hyper-V and File Server.
Figure 2: Microsoft Hyper Convergence solution overview – each node runs Hyper-V with its own local storage (SSD + hard disks, or NVMe + SSD) and carries the Storage, Management, Live-Migration, Backup and Cluster traffic through >10Gb network devices
Regarding the CPU configuration, Microsoft expects at least a dual-socket Intel Xeon E3 per node.
The CPU consumption is higher with a Hyper Converged solution than with disaggregated systems because,
in our case, the CPU is used both for the virtual machines and for storage processing. This is why more processor
cores are required and a high frequency is recommended.
Related to the CPU, you have to define the amount of memory per node. Microsoft recommends at
least 128GB per node for small deployments. Be careful not to install too much memory relative to
the number of available threads, to avoid wasting memory. Moreover, because it is a cluster, you have
to size with an N-1 factor to take into account the loss of one node. Example:
• You have four nodes with 128GB of RAM each (512GB of memory in the cluster)
• Only 384GB of RAM should be assigned to VMs; if a node goes down, all VMs can keep running
On the storage side, there are also some prerequisites. To implement the Microsoft Hyper Converged
solution, we will leverage a new Windows Server 2016 feature called Storage Spaces Direct (S2D).
Firstly, each node must have the same total number of storage devices. Secondly, if you mix disk types
(HDD, SSD or NVMe), each node must have the same number of disks of each type.
If you use a mix of storage device types, each node must have at least two performance storage devices
and four capacity drives. Below are examples of the minimum configuration required:
Solution                  Disks       Disk type      Number
SATA SSD + SATA HDD       SATA SSD    Performance    2
                          SATA HDD    Capacity       4
NVMe SSD + SATA HDD       NVMe SSD    Performance    2
                          SATA HDD    Capacity       4
NVMe SSD + SATA SSD       NVMe SSD    Performance    2
                          SATA SSD    Capacity       4
The performance disks are used for the write-back caching mechanism (for more information, read the
Storage Spaces Direct section).
If you use only a single type of flash storage device, either all NVMe or all SSD (not both), you can
disable the caching mechanism.
Regarding networking, there are other requirements too. Storage Spaces Direct requires at least one
10Gb NIC per node with RDMA capability (RoCE or iWARP). Nevertheless, to avoid a single point of failure,
I recommend using at least two 10Gb Ethernet controllers. If you implement network
convergence, I also recommend installing network adapters faster than 10Gb for better efficiency.
Finally, if you choose RoCE, the physical switches must support RoCE, Datacenter Bridging (DCB) and
Priority Flow Control (PFC).
Since Windows Server 2016 Technical Preview 5, Storage Spaces Direct can be implemented in a 3-node configuration. 3-node and 4-node configurations work in the same way, except that some features
are not available with 3 nodes. In the storage part we will talk about mirroring, parity and multi-resilient virtual disks. Some of these configurations are not available in a 3-node configuration:
                  Mirroring          Parity             Multi-Resilient
Optimized for     Performance        Efficiency         Balanced between performance and efficiency
Use case          All data is hot    All data is cold   Mix of hot and cold data
Efficiency        Least (33%)        Most (50%)         Medium (~50%)
File System       ReFS or NTFS       ReFS or NTFS       ReFS only
Minimum nodes     3+                 4+                 4+
As you can see in the above table, you can't implement parity or multi-resilient virtual disks in a 3-node
cluster.
In the next sections of this document, I will detail each component (Network, Compute and Storage)
in order to implement a Microsoft Hyper Converged solution.
Software-Defined Networking
In this section, I will describe the network designs and the key features required by a Hyper Converged
solution.
Network required
This kind of solution requires networks for the cluster, Hyper-V, storage and management. You may
also use a backup solution where the backup traffic is segregated on a specific network. So, to
implement the Hyper Converged solution, we need the following networks:
• Management (routed): This network carries traffic such as Active Directory, RDS, DNS and so on. On the Hyper-V side, the management NIC is created on this network and the fabric VMs are connected to it. A virtual NIC will be created in the Hyper-V parent partition.
• Cluster (private): This network carries the cluster heartbeat. A virtual NIC will be created in the Hyper-V parent partition.
• Live-Migration (private): This network carries the Hyper-V Live-Migration flows. A virtual NIC will be created in the Hyper-V parent partition.
• Storage (private): This network uses SMB 3.0 for all inter-node (also called east-west) communication, and takes advantage of all the powerful features of SMB 3.0, such as SMB Direct (RDMA-enabled NICs) for high-bandwidth, low-latency communication, and SMB Multichannel for bandwidth aggregation and network fault tolerance.
• Backup (private): If you are using a backup solution with a dedicated network for backup traffic, this network is needed. A virtual NIC will be created in the Hyper-V parent partition.
• VM Networks (routed): To interconnect VMs other than the fabric VMs, VM Networks are required. These can be networks isolated by VLAN, NVGRE networks (Provider Network) and so on, depending on your needs.
Network Convergence
To limit the number of NICs installed on each node and to avoid wasting bandwidth, we leverage
Network Convergence. Depending on the budget and the chosen NICs, several designs are possible.
You can, for example, install 10Gb Ethernet NICs for the Hyper-V host and storage needs and 1Gb Ethernet
NICs for VM traffic, or you can buy two 40Gb NICs and converge all the traffic. It is up to you.
Below you can find three different designs, each with its own advantages. For each solution, the expression
"network devices" means at least two switches to support High Availability.
As shown in the design in Figure 3, there are four NICs per node: two 1Gb Ethernet controllers and
two 10Gb Ethernet controllers that are RDMA capable. The two 1Gb NICs, in a team, are used to
carry the VM Networks, and the two 10Gb NICs, in a team, are used for the Storage, Live-Migration, Backup,
Cluster and Management traffic. This solution is great when you do not have many VM Networks and
when these VMs don't require a lot of bandwidth. 1Gb NICs are cheap, so if you want to segregate
fabric and VM traffic on different NICs, it is the least expensive solution. It is also the least scalable solution,
because if you reach the bandwidth limit, you have to add 1Gb NICs to the team, up to 8 with Switch
Embedded Teaming (please refer to the Switch Embedded Teaming section for more detail), or buy
10Gb network cards.
Figure 3: 1Gb Ethernet controllers for the VM Networks and >10Gb RDMA-capable Ethernet controllers for the fabric traffic (Storage, Management, Live-Migration, Backup, Cluster)
The second solution, shown in Figure 4, is the same as above but with 10Gb NICs for the VM Networks. The fabric
and VM traffic are segregated on different NICs and the solution is more scalable than the previous one.
If you reach the bandwidth limit, you can add 10Gb NICs to the team (up to 8 with Switch Embedded
Teaming).
Figure 4: 10Gb Ethernet controllers for the VM Networks and >10Gb RDMA-capable Ethernet controllers for the fabric traffic
If you want a fully converged network (after all, it is Hyper Convergence), you can converge all networks
in a single team. For this kind of solution, I recommend using at least 25Gb Ethernet controllers
to support all the traffic. With this design, you need just two NICs (plus one for the Baseboard
Management Controller) and, to support High Availability, at least two switches. It simplifies the
cabling management in the datacenter racks. However, good QoS management is mandatory to leave
enough bandwidth for each type of traffic.
Figure 5: Converged fabric and VM workloads on >25Gb RDMA-capable Ethernet controllers
Switch Embedded Teaming
Currently, we deploy teaming by using standard NIC teaming, either in LACP or in switch-independent
mode. When the team is created, we usually deploy virtual NICs (vNICs) in the
Hyper-V parent partition for Management, Live-Migration, Cluster and so on. However, the vNICs
deployed in the parent partition do not support some required features such as vRSS, VMQ
and RDMA. This is a problem when converging the storage network as well as the Live-Migration network.
In Windows Server 2016, we will be able to create a Switch Embedded Teaming (SET). SET enables us to
create vNICs in the parent partition that support the following technologies:
• Datacenter Bridging (DCB)
• Hyper-V Network Virtualization – NVGRE and VXLAN are both supported in Windows Server 2016
• Receive-side checksum offloads (IPv4, IPv6, TCP) – supported if any of the SET team members support them
• Remote Direct Memory Access (RDMA)
• SDN Quality of Service (QoS)
• Transmit-side checksum offloads (IPv4, IPv6, TCP) – supported if all of the SET team members support them
• Virtual Machine Queues (VMQ)
• Virtual Receive Side Scaling (vRSS)
P a g e 11 | 53
Understand Microsoft Hyper Converged solution
Thanks to SET, we have our key technologies required to converge networks for Storage Spaces Direct.
The below schema illustrates SET:
Figure 6 : Switch Embedded Teaming (illustration taken from TechNet)
SET supports only switch-independent mode; other modes such as LACP are not supported. Two load-balancing
algorithms are supported: Hyper-V Port and Dynamic. In addition, the total number of VM queues available in
the team is the sum of the VM queues of each NIC in the team; this is also called Sum-of-Queues mode.
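To illustrate, below is a minimal PowerShell sketch of how a SET team and the host vNICs could be created; the switch, adapter and vNIC names are examples only, not part of the original guide:

# Create a SET-enabled virtual switch on top of two RDMA-capable physical NICs
New-VMSwitch -Name "SW-Converged" -NetAdapterName "NIC1","NIC2" `
             -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Create the host vNICs in the parent partition
Add-VMNetworkAdapter -ManagementOS -SwitchName "SW-Converged" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SW-Converged" -Name "Storage"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SW-Converged" -Name "LiveMigration"

# Enable RDMA on the Storage vNIC so it can be used by SMB Direct
Enable-NetAdapterRdma -Name "vEthernet (Storage)"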
VMQueue and Receive Side Scaling
Receive Side Scaling (RSS) is a feature that spreads network I/O processing across several
physical processor cores. Without RSS, all network I/O is handled by core 0; in other words, you
cannot exceed roughly 3.5Gb/s, which is a problem when you have bought a 10Gb Ethernet NIC. The same
technology exists inside the VM, where it is called vRSS.
However, in the virtualization world the physical server hosts several virtual machines, and we need
a virtual network queue for each virtual machine to deliver packets directly to the VM. This is called a VM
Queue (VMQ). When VMQ is disabled, all traffic is handled by core 0, so a bottleneck can appear
with 10Gb Ethernet, because one processor core can deliver around 3.5Gb/s of
throughput (VMQ should be disabled on 1Gb Ethernet NICs). With static VMQ, each VMQ is associated
with a processor core using a round-robin algorithm. In Windows Server 2012 R2, Microsoft introduced
dynamic VMQ, which spreads the VMQs across several cores dynamically based on processor usage,
optimizing processor utilization.
Until now, we could not converge networks in the Hyper-V parent partition if we wanted to use RSS and
VMQ, because vRSS and VMQ are not supported on virtual NICs created in the parent
partition. With Switch Embedded Teaming, we are able to create vNICs in the parent partition with
vRSS and VMQ support.
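As a quick check, the following PowerShell commands (a sketch; the adapter name is an example) show the VMQ and RSS state of the physical NICs:

Get-NetAdapterVmq                 # shows whether VMQ is enabled on each physical NIC
Get-NetAdapterRss                 # shows the RSS state and the processor range used by each NIC
Enable-NetAdapterRss -Name "NIC1" # enable RSS on a NIC if it is disabled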
Datacenter Bridging
Datacenter Bridging (DCB) is a collection of open standards developed by the IEEE (Institute of
Electrical and Electronics Engineers). The main goal of DCB is to resolve the reliability problem of
Ethernet without using complex transport protocols such as TCP. Ethernet is best-effort by design, and
when the network is congested some packets can be lost. DCB makes it possible to avoid
packet loss for a given class of traffic. This is why it is very important to enable DCB when using RDMA over
Converged Ethernet (RoCE): in this way the storage traffic has priority and suffers no packet loss.
To give priority to a class of traffic, DCB uses Priority-Based Flow Control (PFC), which is defined in the
IEEE 802.1Qbb standard.
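As an example, here is a minimal PowerShell sketch that tags SMB Direct traffic with priority 3 and reserves bandwidth for it; the priority value, bandwidth percentage and NIC names are assumptions that must match your switch configuration:

# Tag SMB Direct traffic (port 445) with IEEE 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for that priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve 50% of the bandwidth for the SMB traffic class and apply DCB on the NICs
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "NIC1","NIC2"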
Remote Direct Memory Access
Remote Direct Memory Access (RDMA) is a technology that enables the network adapter to transfer data
directly to and from memory without involving buffers, the CPU or the operating system. RDMA therefore
increases throughput and reduces resource utilization.
This is a key feature of the Microsoft Hyper Converged solution, because Microsoft strongly recommends
the use of RDMA with Storage Spaces Direct.
SMB Direct
Since Windows Server 2012, workloads such as Hyper-V or SQL Server can leverage a feature called SMB
Direct. This feature uses RDMA-capable network adapters to reduce CPU utilization and latency while
increasing throughput. Thanks to SMB Direct, a remote file server performs like local storage.
Currently there are three types of RDMA network adapters: RoCE (RDMA over Converged Ethernet),
InfiniBand and iWARP.
In our solution, SMB Direct is used by Storage Spaces Direct.
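To verify that RDMA and SMB Direct are usable, the following PowerShell commands can help (a sketch, no parameters required):

Get-NetAdapterRdma              # RDMA state of each network adapter
Get-SmbClientNetworkInterface   # shows which interfaces SMB considers RDMA capable
Get-SmbMultichannelConnection   # lists active SMB Multichannel connections and whether they use RDMA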
Software-Defined Compute
The Software-Defined Compute part covers the technologies that run virtual machines, and therefore
multiple guest operating systems, on a single hypervisor or on a cluster of hypervisors. This short part
describes the key features used by the Microsoft Hyper Converged solution.
Hyper-V
Hyper-V is the Microsoft hypervisor implementation. It enables a physical server to become a
hypervisor and host several virtual machines. Hyper-V hosts can be added to a cluster to provide
High Availability for virtual machines.
Hyper-V isolates the physical server and the virtual machine operating systems (OS) in partitions. When
you deploy the Hyper-V role on a host, a parent partition (or root partition) is created where the host
OS is executed. Then, each time you create a VM, its OS is executed in a child partition. The parent
partition has direct access to the hardware devices.
Below is a high-level Hyper-V diagram (copied from MSDN):
Figure 7: Hyper-V High Level Architecture
Acronyms and terms used in the diagram above are described below:
• APIC – Advanced Programmable Interrupt Controller – A device which allows priority levels to be assigned to its interrupt outputs.
• Child Partition – Partition that hosts a guest operating system. All access to physical memory and devices by a child partition is provided via the Virtual Machine Bus (VMBus) or the hypervisor.
• Hypercall – Interface for communication with the hypervisor. The hypercall interface accommodates access to the optimizations provided by the hypervisor.
• Hypervisor – A layer of software that sits between the hardware and one or more operating systems. Its primary job is to provide isolated execution environments called partitions. The hypervisor controls and arbitrates access to the underlying hardware.
• IC – Integration Component – Component that allows child partitions to communicate with other partitions and the hypervisor.
• I/O stack – Input/output stack.
• MSR – Memory Service Routine.
• Root Partition – Manages machine-level functions such as device drivers, power management, and device hot addition/removal. The root (or parent) partition is the only partition that has direct access to physical memory and devices.
• VID – Virtualization Infrastructure Driver – Provides partition management services, virtual processor management services, and memory management services for partitions.
• VMBus – Channel-based communication mechanism used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is installed with Hyper-V Integration Services.
• VMMS – Virtual Machine Management Service – Responsible for managing the state of all virtual machines in child partitions.
• VMWP – Virtual Machine Worker Process – A user-mode component of the virtualization stack. The worker process provides virtual machine management services from the Windows Server instance in the parent partition to the guest operating systems in the child partitions. The Virtual Machine Management Service spawns a separate worker process for each running virtual machine.
• VSC – Virtualization Service Client – A synthetic device instance that resides in a child partition. VSCs utilize hardware resources that are provided by Virtualization Service Providers (VSPs) in the parent partition. They communicate with the corresponding VSPs in the parent partition over the VMBus to satisfy a child partition's device I/O requests.
• VSP – Virtualization Service Provider – Resides in the root partition and provides synthetic device support to child partitions over the Virtual Machine Bus (VMBus).
• WinHv – Windows Hypervisor Interface Library – WinHv is essentially a bridge between a partitioned operating system's drivers and the hypervisor, which allows drivers to call the hypervisor using standard Windows calling conventions.
• WMI – The Virtual Machine Management Service exposes a set of Windows Management Instrumentation (WMI)-based APIs for managing and controlling virtual machines.
Live-Migration
Live-Migration is a Hyper-V feature used to move a running VM from one Hyper-V node to
another in a cluster without downtime. Thanks to this feature, we can balance the resource utilization
across the nodes of the cluster. Moreover, when you have to update a node in a cluster (Microsoft
Update), you can move all the VMs to other nodes and reboot the host without impacting production.
In a Hyper-V cluster, the storage is usually shared and accessible from each node. Therefore, only
the VM memory has to be copied from one node to another. Below is the process to move a VM from one node
to another:
Figure 8: Live-Migration
1. Hyper-V will create a copy of the VM specification and configure dependencies on Node 2.
2. The memory of the source VM is divided up into a bitmap that tracks changes to the pages.
Each page is copied from Host Node 1 to Host Node 2, and each page is marked as clean after
it was copied.
3. The virtual machine on Node 1 is still running, so memory is changing. Each changed page is
marked as dirty in the bitmap. Live Migration copies the dirty pages again, marking them
clean after the copy. The virtual machine is still running, so some of the pages will change again
and be marked as dirty. The dirty copy process repeats until it has been done 10 times or
there is almost nothing left to copy.
4. What remains of the source VM that has not been copied to Node 2 is referred to as the state.
At this point in time, the source VM is paused on Node 1.
5. The state is copied from Node 1 to Node 2, thus completing the virtual machine copy.
6. The VM is resumed on Node 2.
7. If the VM runs successfully on Node 2, then all traces are removed from Node 1.
Once the VM is resumed, a small network outage can occur until the ARP (Address Resolution Protocol)
table is updated on the network devices (a loss of about two pings).
To speed up the Live-Migration process, the virtual machine memory transferred through the network
is compressed by default in Windows Server 2012 R2. However, it is also possible to leverage SMB
Direct for a faster Live-Migration.
By using SMB Direct, Live-Migration is able to use RDMA network acceleration and SMB Multichannel.
RDMA provides low latency, reduces CPU utilization and increases throughput. SMB
Multichannel enables the use of multiple network connections simultaneously, which increases
throughput and provides network fault tolerance.
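As a sketch, switching Live-Migration to SMB can be done with PowerShell as follows (the number of simultaneous migrations is an example value):

# Use SMB for Live-Migration so it benefits from SMB Direct (RDMA) and SMB Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Optionally raise the number of simultaneous live migrations
Set-VMHost -MaximumVirtualMachineMigrations 4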
Hyper-V Replica
Hyper-V Replica enables you to implement a Disaster Recovery Plan (DRP) between two Hyper-V hosts or
clusters. Thanks to Hyper-V Replica, VMs can be replicated to another hypervisor and synchronized every
30 seconds, 5 minutes or 15 minutes. When an outage occurs on the source, a failover can be executed to start
the VM on the target.
When Hyper-V Replica is used from or to a cluster, it is necessary to deploy the Hyper-V Replica
Broker cluster role. Because a VM can move within a cluster (Live-Migration or Quick-Migration), it is
necessary to locate the VM in the cluster; that is the role of the Hyper-V Replica Broker.
Figure 9: Hyper-V Replica
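For illustration, here is a minimal PowerShell sketch that enables replication of a VM to a replica server using Kerberos over HTTP; the VM name, server name, path and frequency are examples only:

# On the replica side: accept incoming replication
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
                        -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"

# On the primary side: replicate a VM every 30 seconds and start the initial copy
Enable-VMReplication -VMName "VM01" -ReplicaServerName "hv-replica01" -ReplicaServerPort 80 `
                     -AuthenticationType Kerberos -ReplicationFrequencySec 30
Start-VMInitialReplication -VMName "VM01"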
Software-Defined Storage
The Software-Defined Storage part aims to describe the software storage solution in the Microsoft
Hyper Converged infrastructure.
Storage Spaces
The Storage Spaces feature was released with Windows Server 2012. It enables you to aggregate
several storage devices together and create virtual disks on top of them. The virtual disks created can be
resilient, depending on your choice (mirroring or parity). This avoids using expensive and inflexible
hardware devices such as RAID controllers or SANs.
Figure 10: Storage Spaces explanation
Before creating any storage pool, you need several storage devices, which can be located in Just a Bunch
Of Disks (JBOD) trays or attached locally. These disks can be SAS, SATA, NVMe or SSD (SATA is supported since
Windows Server 2016). Next, you can create your storage pools by aggregating the selected storage
devices. You can mix types of storage devices (SSD + HDD, NVMe + SSD and so on).
Then you can create virtual disks, called Storage Spaces, on top of the storage pool. Storage Spaces
supports several resiliency modes, such as mirroring and parity (see the next section). Once the virtual disks are
created, you can mount them on the machine and create a volume inside.
The above explanation covers Storage Spaces on a single server. Nevertheless, you can also create
clustered Storage Spaces when you want High Availability for the storage.
Up to Windows Server 2012 R2, all storage devices had to be accessible from each server where the
clustered Storage Spaces was deployed. In Windows Server 2016, thanks to Storage Spaces Direct,
clustered Storage Spaces can be deployed using the local storage devices of each server. We will talk about
this later.
To prevent all nodes from taking ownership of a clustered Storage Space at the same time, the Failover
Cluster is used. After creating the virtual disks, it is necessary to convert them into Cluster Shared
Volumes. In this way, only one node owns a given virtual disk at a time.
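To make this concrete, here is a minimal PowerShell sketch that creates a storage pool and a two-way mirror virtual disk on a single server; the pool and disk names and the size are examples only:

# Gather the physical disks that can be pooled
$disks = Get-PhysicalDisk -CanPool $true

# Create the storage pool on the local storage subsystem
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create a two-way mirror Storage Space on top of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
                -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 500GB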
Figure 11: Clustered Storage Spaces explanation
Storage Spaces resiliency
What would a storage solution be without protection against data loss? If you work on storage, and
especially on the hardware side, you know that storage devices such as hard drives are sensitive and often fail.
In order to avoid losing data when a hard drive fails, the data must be redundant on several other
storage devices. With Storage Spaces, there are three resiliency modes:
• Simple: In this mode, there is no redundancy and so no resiliency. The data is striped across several disks without extra copies or parity. This mode provides the best performance level and you can use the total capacity of the physical disks in the storage pool.
• Mirror: In this mode the data is written across multiple physical disks and one or two extra copies are kept. When there is one extra copy, it is called Two-Way Mirroring; when the data is copied twice, it is called Three-Way Mirroring. This mode is a good fit for most workloads, such as Hyper-V and SQL, because it provides a good performance level, especially when using tiering.
• Parity: In this mode the data is written across multiple physical disks and parity information is written once or twice. Because of the parity computation, write performance is not high, but capacity is maximized. Thus, this mode is a good fit for backup workloads.
To have multiple disks reading and writing simultaneously, and so to combine the performance of each disk in a
storage pool, the data is divided into blocks that are distributed across several physical disks. This
is called striping.
Storage Spaces therefore stripes the data across a specific number of disks, also called the number
of columns (a column maps to a physical disk). The data is divided into small blocks of a specific size,
also called the interleave (default value: 256KB).
As I said before, for a storage solution designed to host VMs, Mirroring is preferred, usually
Two-Way Mirroring. The number of physical disks required is the number of columns x 2, because the
data exists twice in the Storage Space. In this case, you need at least two physical
disks. Below is an explanation schema:
Figure 12: Two-Way Mirroring, 1 data column
In the above example, the data is not striped across several disks, so the read/write performance
is equal to that of one physical disk. However, the data is copied one extra time, so this solution tolerates
one disk failure.
Figure 13: Two-Way Mirroring, 2 data columns
In the above solution, the data is striped across two columns and has one extra copy. The
read/write performance is the sum of two disks. The size of block 1 is equal to 256KB by default
(the interleave).
Finally, when a disk fails, it is possible to rebuild its data onto the other disks. A feature called Storage
Spaces parallel rebuild makes an extra copy of the blocks stored on the failed disk onto the
remaining disks. However, this requires some free space in the storage pool: you have to leave unprovisioned space equal to the size of a single disk in the storage pool.
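For example, a two-way mirror space striped across two columns could be created as follows (a sketch; the pool name, size and interleave are example values):

# Two-way mirror Storage Space with 2 columns and the default 256KB interleave
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Mirror2Col" `
                -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
                -NumberOfColumns 2 -Interleave 262144 -Size 1TB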
Scale-Out File Server
Scale-Out File Server is a Failover Cluster role that provides file shares that are continuously available
and online on each node of the cluster. This feature is designed for application workloads such as Hyper-V
and SQL over SMB 3.
Scale-Out File Server is useful in the disaggregated solution. In the Hyper-Converged solution we don't use it
because, at the time of writing, it is not supported to mix hyper convergence and Scale-Out File Server.
Therefore, you have to choose between a Scale-Out File Server and a Hyper-Converged solution.
Figure 14: Scale-Out File Server
Tiering
Tiering, or Storage Tiers, was released with Windows Server 2012 R2. It enables you to mix types of
storage devices in a storage pool, such as SSD + HDD. In this way, when you create a Storage Space,
you can select each disk type. Flash devices such as SSDs are faster than HDDs, but they are more
expensive for less capacity. Thus, when we built a Storage Spaces solution in Windows Server 2012 R2,
a small number of flash devices and a lot of HDDs were installed.
In this kind of solution, the faster disks are called performance disks and the others are called capacity disks.
When data is frequently used (also called hot data) in the Storage Space, it is copied to the
performance disks. When data is less used (cold data), it is moved to the HDDs. This technology
achieves a great performance level while maximizing capacity.
Regarding resiliency, the number of columns is the same for the performance and the capacity tiers.
So usually, the number of columns is set based on the number of storage devices in the performance
tier.
Figure 15: 2x Performance and 4x capacity disks, 1 data column
In the above example, there are two performance disks and four capacity disks in Two-Way Mirroring.
However, the maximum number of columns is 1 because there are just two performance disks and they
are in Two-Way Mirroring, so an extra copy is written. Therefore, the read/write
performance is equal to just one physical disk.
Figure 16: 4x performance and capacity disks, 2 data columns
In the above solution, the data is striped across two columns. Thus, the read/write performance is
equal to the sum of the columns. It is now up to you to mix disks to maximize performance
and capacity without blowing up the cost.
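For reference, a tiered two-way mirror space in Windows Server 2012 R2 could be created like this (a sketch; the pool name, tier names and sizes are examples only):

# Define an SSD tier and an HDD tier in the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Create a two-way mirror virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace" `
                -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
                -StorageTiers $ssdTier,$hddTier -StorageTierSizes 100GB,900GB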
In the Storage Spaces Direct solution, this tiering technology is not used anymore. Instead, the performance
devices are used for the caching mechanism (see the next section).
Storage Spaces Direct
Storage Spaces Direct is a new Windows Server 2016 feature that enables a new deployment model
for clustered Storage Spaces without using shared JBODs. Now you can use local storage devices,
either connected via a JBOD or installed internally.
This is possible because Microsoft has developed the Software Storage Bus (SSB), which enables each
server to see all the disks connected to every node in the cluster. Once each node sees all the storage devices,
it is easy to create the storage pools. SSB takes advantage of SMB3, SMB Direct and SMB Multichannel
to transfer data blocks between cluster nodes.
Figure 17: Software Storage Bus (SSB)
SSB uses SMB3 and SMB Direct to exchange information between the nodes in the cluster. Two
components in SSB enable the creation of clustered Storage Spaces with local storage:
• ClusPort: creates a virtual HBA, which connects to the storage devices of the other nodes in the cluster (initiator).
• ClusBflt: virtualizes the storage devices in each node so that ClusPort can connect to them (target).
P a g e 23 | 53
Understand Microsoft Hyper Converged solution
Figure 18: ClusPort and ClusBflt interactions
Regarding the resiliency, the Storage Spaces Direct cluster resiliency changes depending on the
resiliency mode chosen and the number of extra copies:
Resiliency mode                Cluster resiliency
2-Way mirroring                1 node
3-Way mirroring                2 nodes
Dual Parity (erasure coding)   1 node
SSB can use a caching mechanism called the Storage Bus Cache (SBC), which is protected against failure.
When Storage Spaces Direct is enabled, the storage devices are selected for caching or for capacity.
Storage Configuration            Caching devices   Capacity devices              Caching behavior
SATA SSD + SATA HDD              All SATA SSD      All SATA HDD                  Read + Write
NVMe SSD + SATA HDD              All NVMe SSD      All SATA HDD                  Read + Write
NVMe SSD + SATA SSD              All NVMe SSD      All SATA SSD                  Write only
NVMe SSD + SATA SSD + SATA HDD   All NVMe SSD      All SATA SSD + All SATA HDD   Read for SATA SSD; Read + Write for SATA HDD
Only NVMe SSD or only SATA SSD   / (no caching)    All                           No
Once the system has determined which devices are caching devices and which are capacity devices, each
capacity device is associated with a caching device using a round-robin algorithm.
Figure 19: Caching and Capacity devices association when SBC is enabled (Copied from TechNet)
If a caching device fails, the related capacity devices are automatically bound to a working caching
device. When SBC is enabled, it creates a partition on each caching device that uses, by default, all of its
capacity except 32GB; this reserved capacity is used for storage pool and virtual disk metadata.
SBC also requires some memory on each node: it needs roughly 10GB of memory (about 1%) per 1TB of
caching devices.
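To show how this comes together, here is a minimal PowerShell sketch that validates the nodes, builds the cluster, enables Storage Spaces Direct and creates a CSV volume; the cluster, node and volume names and the size are examples only:

# Validate the nodes, including the Storage Spaces Direct tests
Test-Cluster -Node "Node1","Node2","Node3","Node4" `
             -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without adding any shared storage
New-Cluster -Name "HCCluster" -Node "Node1","Node2","Node3","Node4" -NoStorage

# Enable Storage Spaces Direct; eligible devices are claimed and the pool is created automatically
Enable-ClusterStorageSpacesDirect

# Create a Cluster Shared Volume formatted with ReFS on the S2D pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 1TB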
ReFS file system
Resilient File System (ReFS) is a file system that was released with Windows Server 2012. ReFS is
designed to increase integrity, availability and scalability, and to provide proactive error correction. In
Windows Server 2012, ReFS was not widely used because it lacked features compared to NTFS (no disk
quotas, no compression, no EFS and so on).
However, in Windows Server 2016, ReFS brings accelerated VHDX operations: it can create fixed
VHDX files or merge checkpoints almost instantly. Therefore, in Windows Server 2016 it can be interesting
to store VMs on a ReFS volume.
Moreover, ReFS comes with integrity features and protection against corruption. ReFS does not need
disk checks such as those we run on NTFS partitions. Thus, when a Cluster Shared Volume is formatted with
ReFS, you will no longer be warned to run CHKDSK on the volume.
Multi-Resiliency Virtual Disks and ReFS Real-Time Tiering
Windows Server 2016 Technical Preview 4 introduced multi-resiliency virtual disks. This
enables two tiers within a single virtual disk: one in mirroring and the other in erasure-coded parity.
Figure 20: Multi-Resiliency Virtual Disks (copied from TechNet blog)
P a g e 25 | 53
Understand Microsoft Hyper Converged solution
In this configuration, ReFS operates on the virtual disk. Data is always written to the mirror
tier, even when it is an update of data that already resides in the parity tier; in that case, the older data in
the parity tier is invalidated. ReFS always writes to the mirror tier because it is the fastest tier,
which is especially important for virtual machines.
ReFS then rotates the data from the mirror tier to the parity tier and performs the erasure coding
computation. As the mirror tier fills up, data is moved automatically to the parity tier.
Figure 21: ReFS Real-Time Tiering (copied from TechNet blog)
This configuration is only available with a 4-node configuration and above.
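To illustrate, here is a minimal PowerShell sketch of how a multi-resiliency ReFS volume could be built from a mirror tier and a parity tier in the S2D pool; the tier names and sizes are examples only:

# Define a mirror tier and a parity tier in the Storage Spaces Direct pool
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "MirrorTier" `
                -MediaType HDD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "ParityTier" `
                -MediaType HDD -ResiliencySettingName Parity

# Create a multi-resiliency volume (ReFS is required) that uses both tiers
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MultiRes01" -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames "MirrorTier","ParityTier" -StorageTierSizes 200GB,800GB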
Three tiers configuration
In the last part, we saw how to create a virtual disk with two tiers to balance performance
and efficiency. When you implement this storage solution with SSD + HDD, the SSDs are
used for the cache and the HDDs for the multi-resiliency virtual disk.
Figure 22: SSD+HDD for the three tiers configuration – the SSDs provide the Storage Spaces Direct cache, while the HDDs provide the capacity used by the multi-resiliency virtual disk (a mirrored hot data tier and a parity cold data tier)
You can also mix NVMe, SSD and HDD disks for the three tiers configuration. NVMe is used for
the cache, the SSDs for the hot data tier and the HDDs for the cold data tier.
Figure 23: NVMe+SSD+HDD for the three tiers configuration – the NVMe devices provide the cache, the SSDs host the mirrored hot data tier and the HDDs host the parity cold data tier
With the above solution, the cache is handled by the high-performance NVMe disks. Data is then
written to the SSDs, which are fast. When the hot data tier fills up, the data rotation occurs
and the data is moved to the cold data tier. Because this tier is a parity tier, capacity is maximized.
To support this kind of solution, you need at least a 4-node cluster and you have to use the ReFS file
system.
Storage Replica
Windows Server 2016 comes with a new feature for Disaster Recovery Plans or for stretched clusters. In
the next release of Windows Server, you will be able to replicate one Storage Spaces volume to
another by using SMB.
At the time of writing this whitepaper, Storage Replica supports the following scenarios:
• Server-to-server storage replication using Storage Replica
• Storage replication in a stretch cluster using Storage Replica
• Cluster-to-cluster storage replication using Storage Replica
• Server-to-itself replication between volumes using Storage Replica
For the Microsoft Hyper Converged solution, the chosen scenario will usually be cluster-to-cluster
replication.
Figure 24: Cluster-to-cluster replication
In the above example, the Room 1 cluster is the active one while the Room 2 cluster is passive.
The storage in Room 1 is replicated to the storage in Room 2. Because Storage Replica uses
SMB, it can leverage RDMA to increase throughput and decrease CPU utilization.
Like most replication solutions, Storage Replica supports synchronous and asynchronous
replication. In synchronous mode, the application receives an acknowledgement only when the replication
has completed successfully, while in asynchronous mode the application receives the
acknowledgement as soon as the replication engine has captured the data. Synchronous mode
can therefore degrade application performance if the Storage Replica solution is not well designed.
Synchronous replication is suitable for HA and DR solutions.
For both modes, you need a log volume in each room. This log volume should be on a fast storage
device such as an SSD. These log volumes are used as a buffer for the replication process.
In Windows Server 2016 Technical Preview 5, the Storage Replica team added the following new
features:
• Asynchronous stretch clusters are now supported
• RoCE v2 RDMA networks are now supported
• Network constraint (control which networks SR runs on)
• Integration with the Cluster Health service
• Delegation (delegate access by adding users to the Storage Replica Administrators group)
• Thin provisioned storage is now supported
Storage Quality of Service
Storage Quality of Service (Storage QoS) was introduced in Windows Server 2012 R2 and enabled setting
the minimum and maximum IOPS on a single virtual hard disk (VHD/VHDX). That was great, but it
was not enough; Microsoft is investing more and more in this area to bring more functionality.
There are a couple of big challenges with Storage QoS in Windows Server 2012 R2. This
technology allows us to cap the amount of IOPS for each virtual machine or virtual hard disk individually,
and that works great on a single Hyper-V server!
However, to be honest, no one deploys a standalone Hyper-V host in production; of course, we
leverage a Hyper-V cluster for High Availability.
Storage QoS in Windows Server 2012 R2 doesn't really work well if you have dozens of Hyper-V servers
talking to the same piece of storage at the back end, because those Hyper-V servers are not
aware that they are competing with each other for storage bandwidth.
Thankfully, in Windows Server 2016, Microsoft introduced a distributed Storage QoS policy manager,
attached directly to the Failover Cluster as a Storage QoS Resource.
This enables us to create policies that set a minimum and a maximum IOPS value per flow.
Each file handle opened by a Hyper-V server to a VHD or VHDX file is considered a "flow".
Distributed Storage QoS makes it possible to provide multiple performance tiers across several VMs. For example,
you can create a "Gold" policy and a "Platinum" policy and associate them with the right group of virtual
machines and virtual hard disks.
Of course, we still have what we had in Windows Server 2012 R2, where you can go to Hyper-V and
configure Storage QoS properties for each virtual machine or virtual hard disk. But in Windows Server
2016 we can now go to the Scale-Out File Server cluster, or to a Hyper-V cluster using Cluster
Shared Volumes (CSV) for storage, and configure the Storage QoS policies there. This enables a couple
of interesting scenarios:
In the first scenario, if you have multiple Hyper-V servers talking to the same storage at the back end, all your Storage QoS policies get respected.
The second scenario allows us to do some interesting things: we can now pool
Storage QoS policies and have a single policy that applies to a group of virtual machines instead of
just one VM or one virtual hard disk.
Distributed Storage QoS in Windows Server 2016 supports two deployment scenarios:
1. Hyper-V using a Scale-Out File Server. This scenario requires both of the following:
• A storage cluster that is a Scale-Out File Server cluster and a compute cluster that has at least one server with the Hyper-V role enabled.
• For distributed Storage QoS, Failover Clustering is required on the storage side, but it is optional on the compute side.
2. Hyper-V using Cluster Shared Volumes, such as Storage Spaces Direct in the Hyper-Converged model described in this whitepaper. This scenario requires both of the following:
• A compute cluster with the Hyper-V role enabled.
• Hyper-V using Cluster Shared Volumes (CSV) for storage; Failover Clustering is required.
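To illustrate, here is a minimal PowerShell sketch, run on the cluster, that creates a "Gold" policy and applies it to the virtual hard disks of a VM; the policy limits and VM name are examples only:

# Create a dedicated policy: each flow gets at least 500 IOPS and at most 5000 IOPS
$gold = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

# Apply the policy to every virtual hard disk of the VM
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId

# Check the flows to verify that the policy is respected
Get-StorageQosFlow | Format-Table InitiatorName, Status, StorageNodeIOPs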
Nano Server
Nano Server is a new headless, 64-bit-only deployment option in Windows Server 2016 that has been optimized for private clouds and datacenters. It is similar to Windows Server in Server Core mode, but significantly smaller, has no local logon capability, and only supports 64-bit applications, tools, and agents. It takes up far less disk space, sets up significantly faster, requires far fewer updates, and restarts much faster than full Windows Server. In fact, you can only administer this server remotely, by using WinRM, WMI or PowerShell. In case of a network incident, you can also use Emergency Management Services by connecting a serial cable to the server.
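As a quick illustration of that remote management model, the following sketch connects to one of the lab nodes from a management machine; the node name, IP address and the fact that the node is not yet domain-joined are assumptions, so adapt them to your environment:
# Trust the remote Nano Server (only needed while it is not domain-joined)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.10.0.160" -Force
# Open a remote PowerShell session on the Nano Server
Enter-PSSession -ComputerName 10.10.0.160 -Credential "HC-Nano01\Administrator"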
Figure 25: Nano Server
The footprint on disk of Nano Server is 10x smaller than a standard Windows Server installation. Moreover, because fewer components are integrated into Nano Server, the number of deployed updates is significantly lower than on Windows Server Core or with a GUI, so fewer reboots caused by updates are required. Finally, Nano Server starts quickly. Therefore, you can fail hard but fail fast.
Figure 26: Nano Server states
Even if there is no GUI and fully remote administration can be off-putting, I am convinced that Nano Server is great for Hyper-V and storage solutions. The quick restart of a Nano Server and the small number of required reboots are great advantages from my point of view. Moreover, the deployment of a Nano Server on a physical server is really quick. Therefore, for a Private Cloud service or for a Hyper Converged solution, I think Nano Server is the way to go.
In the next part, I will provide an implementation guideline to show you how to deploy the solution that I described at the beginning of this document. This implementation will be done on four Nano Servers.
At the time of writing this whitepaper, Nano Server supports the following roles and features:
• Hyper-V role
• Failover Clustering
• Hyper-V guest drivers for hosting Nano Server as a virtual machine
• Basic drivers for a variety of network adapters and storage controllers (as in a standard server installation)
• File Server role and other storage components
• Windows Defender Antimalware, including a default signature file
• Reverse forwarders for application compatibility
• DNS Server role
• Desired State Configuration (DSC)
• Internet Information Services (IIS)
• Host support for Windows Containers
• System Center Virtual Machine Manager agent
• Network Performance Diagnostics Service (NPDS)
• Data Center Bridging
• Ability to boot and run from a RAM disk
• Deploying on a virtual machine
• Deploying on a physical machine
• Secure Startup
• Shielded VM
Therefore, Nano Server supports everything we need for the Microsoft Hyper Converged solution. We can also automate the Nano Server deployment by using Desired State Configuration (DSC), as in the sketch below.
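The following is a minimal, purely illustrative push-mode DSC sketch; the configuration name, node name, resource and paths are assumptions, and it presumes WinRM connectivity to the node and that the built-in File resource is available there:
# Compile a trivial configuration for one node and push it
Configuration NanoNodeBaseline {
    Node "HC-Nano01" {
        File ScriptsFolder {
            DestinationPath = "C:\Scripts"
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}
NanoNodeBaseline -OutputPath C:\DSC
# In push mode the MOF file name determines the target computer
Start-DscConfiguration -Path C:\DSC -Wait -Verbose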
Windows Server 2016 licensing
Microsoft introduced a new licensing model for Windows Server 2016: the licensing is now per core. In the table below, you can see the features available in each edition of Windows Server 2016. As you can see, the features required for Hyper Convergence are included in the Datacenter edition.
Microsoft will sell licenses in 2-core packs. You need at least 4 packs for an 8-core processor and at least 8 packs per physical server.
For two processors with 8 cores each, the price is the same as Windows Server 2012 R2 Datacenter. Beyond that, it will be more expensive than Windows Server 2012 R2. The sketch below shows one way to estimate the number of packs.
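As an illustration of the arithmetic above, here is a hypothetical helper that estimates the number of 2-core packs for one server; it assumes the rules stated in this section (a minimum of 8 licensed cores per processor, a minimum of 16 licensed cores per server, licenses sold in packs of 2 cores):
# Estimate the number of 2-core license packs for one physical server
function Get-CorePackCount {
    param([int]$ProcessorCount, [int]$CoresPerProcessor)
    # At least 8 cores are licensed per processor
    $LicensedCores = $ProcessorCount * [Math]::Max($CoresPerProcessor, 8)
    # At least 16 cores are licensed per server
    $LicensedCores = [Math]::Max($LicensedCores, 16)
    # Licenses are sold in packs of 2 cores
    [Math]::Ceiling($LicensedCores / 2)
}
Get-CorePackCount -ProcessorCount 2 -CoresPerProcessor 8   # returns 8 packs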
Implementation guideline
Design Overview
As a proof of concept, we will build the Hyper Converged cluster in virtual machines. Please note that S2D in virtual machines is not supported for production. This solution requires at least four nodes (so four VMs). Each VM will have 4 vCPU and 8GB of static memory. These nodes will run Nano Server, which supports all the features we need.
To create the Storage Pools, I will assign 10 VHDX files per VM. Then two NICs will be associated with each VM to create a SET composed of two NICs.
Warning: SET is not supported in virtual machines by Microsoft. The implementation of Hyper Convergence below is a Proof of Concept (POC) because it is deployed in virtual machines. If you create a SET in a VM without enabling nested Hyper-V, the VM will crash.
Figure 27: Implementation overview - the cluster "Hyperconverged" is composed of four nodes (HC-NANO01 to HC-NANO04); each node runs the hypervisor (Hyper-V), contributes its local storage (VHDX), and carries the Storage, Management, Live-Migration and Cluster networks through the network devices (>10GB).
Four virtual NICs will be created in the parent partition (Management, Cluster, Storage and Live-Migration). Because Hyper-V is installed in the VM, the nested Hyper-V mode will be enabled.
Below is the network configuration:
• Management: 10.10.0.0/24 (VLAN ID: 10) - Native VLAN
o Gateway: 10.10.0.1
o DNS: 10.10.0.20 and 10.10.0.21
• Cluster: 10.10.100.0/24 (VLAN ID: 100)
• Live-Migration: 10.10.101.0/24 (VLAN ID: 101)
• Storage: 10.10.102.68/26 (VLAN ID: 104)
Requirements
To follow this implementation guideline, you need the following:
• The latest ISO of Windows Server 2016 Technical Preview
• A Hyper-V host running the latest Windows Server Technical Preview
• 32GB of free memory
• 16 available vCPU
• Enough disk space to create multiple VHDX files
Nano Server deployment
Virtual machine creation and configuration
To create the four Nano Server virtual machines, we use the script below. This PowerShell script creates the VMs and configures them, builds the VHDX containing the operating system, creates the VHDX files for Storage Spaces Direct usage and attaches all the VHDX files to the VMs. We run this script from the Hyper-V host:
$Password       = Read-Host -Prompt "Please specify Administrator password of nodes" -AsSecureString
# Nano Server Name
$ServerName     = "HC-Nano"
# Processor per node
$ProcessorCount = 4
# Memory per node
$Memory         = 8GB
# Number of VHDX for Storage Spaces Direct per Node
$VHDXForS2D     = 10
# Size of each VHDX for Storage Spaces Direct (Dynamic)
$VHDXSize       = 10GB
# Number of Node in the Hyper Converged Cluster
$ServerCount    = 4
# Network ID for nodes
$NetworkID      = "10.10.0."
# IP address for nodes (NetworkID + HostIP++)
$HostIP         = 160
# Subnet mask
$SubnetMask     = "255.255.255.0"
# Gateway IP Address
$Gateway        = "10.10.0.1"
# Domain name for automatic integration
$DomainName     = "int.homecloud.net"
# Technical preview ISO Path
$MediaPath      = "D:"
# Base path where are copied temporary files to build Nano Server VHDX
$BasePath       = "c:\temp\NanoServer\Base"
# Path where are stored your Virtual Machines
$VMPath         = "X:"
# VM SwitchName where are connected your VM
$VMSwitchName   = "VMWorkloads"

if (!(Test-Path $BasePath)){New-Item -ItemType Directory -Path $BasePath}
$i = 1

Copy-Item -Path $($MediaPath + "\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1") -Destination . -Force
Copy-Item -Path $($MediaPath + "\NanoServer\NanoServerImageGenerator\Convert-WindowsImage.ps1") -Destination . -Force
Import-Module .\NanoServerImageGenerator.psm1 -Verbose

For ($i = 1; $i -le $ServerCount; $i++){
    If ($i -lt 10){$x = "0$i"}
    Else{$x = $i}

    # Building the servername (HC-Nano01, HC-Nano02 and so on)
    $NanoServer = $ServerName + $x

    # VM Creation in Gen2 without VHDX attached
    New-VM -Name $NanoServer `
           -Path $VMPath `
           -NoVHD `
           -Generation 2 `
           -MemoryStartupBytes $Memory `
           -SwitchName $VMSwitchName

    # Changing the number of processor and set to Static Memory
    Set-VM -Name $NanoServer `
           -ProcessorCount $ProcessorCount `
           -StaticMemory

    # Add a VM network adapter with macspoofing enabled and allowteaming enabled
    Add-VMNetworkAdapter -VMName $NanoServer -SwitchName $VMSwitchName
    Set-VMNetworkAdapter -VMName $NanoServer -MacAddressSpoofing On -AllowTeaming On

    # Building the IP Address
    $IP = $NetworkID + $HostIP

    # Creating the Nano Server VHDX in the VM Repository
    # Feature enabled: Clustering, VM drivers, File Server role, Hyper-V, SCVMM agent, DataCenter Bridging
    # SCVMM packages are added if you want add the Hyper Converged cluster to VMM to manage Hyper-V and storage from VMM
    New-NanoServerImage -MediaPath $Mediapath `
                        -BasePath $BasePath `
                        -TargetPath $($VMPath + "\" + $NanoServer + "\" + $NanoServer + ".vhdx") `
                        -ComputerName $NanoServer `
                        -AdministratorPassword $Password `
                        -InterfaceNameOrIndex Ethernet `
                        -Ipv4Address $IP `
                        -Ipv4SubnetMask $SubnetMask `
                        -Ipv4Gateway $Gateway `
                        -DomainName $DomainName `
                        -Clustering `
                        -DeploymentType Guest `
                        -Edition Datacenter `
                        -Storage `
                        -Packages Microsoft-NanoServer-Compute-Package, Microsoft-NanoServer-SCVMM-Compute-Package, Microsoft-NanoServer-SCVMM-Package, Microsoft-NanoServer-DCB-Package `
                        -EnableRemoteManagementPort

    $HostIP++

    # Attached VHDX to VM
    Add-VMHardDiskDrive -VMName $NanoServer `
                        -Path $($VMPath + "\" + $NanoServer + "\" + $NanoServer + ".vhdx")
    $VirtualDrive = Get-VMHardDiskDrive -VMName $NanoServer -ControllerNumber 0

    # Change the boot order to the VHDX first
    Set-VMFirmware -VMName $NanoServer -FirstBootDevice $VirtualDrive

    # Creating VHDX for Storage Spaces Direct
    For ($j = 1; $j -le $VHDXForS2D; $j++){
        if ($j -lt 10){$y = "0$j"}
        Else {$y = $j}

        New-VHD -Path $($VMPath + "\" + $NanoServer + "\" + $NanoServer + "-Disk" + $y + ".vhdx") `
                -SizeBytes $VHDXSize `
                -Dynamic

        # Attach disks to VM
        Add-VMHardDiskDrive -VMName $NanoServer `
                            -Path $($VMPath + "\" + $NanoServer + "\" + $NanoServer + "-Disk" + $y + ".vhdx")
    }

    # Starting VM
    Start-VM -Name $NanoServer
}
Once this script has run, you should have four Nano Server virtual machines configured and running. Now we can configure the operating system of each VM.
Figure 28: VM Configuration once the script is finished
Nodes configuration
Note: This part is needed because the Hyper-V nodes are nested. Nested Hyper-V requires multiple VLANs, so we have to configure their VM network adapters in trunk mode.
Set-VMNetworkAdapterVlan -VMName HC-Nano01 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,104"
Set-VMNetworkAdapterVlan -VMName HC-Nano02 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,104"
Set-VMNetworkAdapterVlan -VMName HC-Nano03 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,104"
Set-VMNetworkAdapterVlan -VMName HC-Nano04 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,104"
Thanks to these commands, the VM network adapters of the VMs will be able to handle several VLANs, which is required to converge the networks inside the VMs.
Figure 29: VM Network Configuration of Hyper Converged nodes
Enable nested Hyper-V
To enable nested Hyper-V on the VMs, just run the command below for each VM:
Set-VMProcessor -VMName "VMName" -ExposeVirtualizationExtensions $true
Be sure that the following settings are also in place:
• MAC spoofing enabled
• More than 4GB of static memory in each VM
Then you can restart your VMs. A sketch that applies all of this to the four lab nodes follows.
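Here is a possible sketch that applies the nested-Hyper-V prerequisites to the four nodes of this lab in one pass; it assumes the VM names created earlier and that a short downtime of the VMs is acceptable:
# Stop each node, expose the virtualization extensions, re-check MAC spoofing, then start it again
"HC-Nano01","HC-Nano02","HC-Nano03","HC-Nano04" | ForEach-Object {
    Stop-VM -Name $_ -Force
    Set-VMProcessor -VMName $_ -ExposeVirtualizationExtensions $true
    Set-VMNetworkAdapter -VMName $_ -MacAddressSpoofing On
    Start-VM -Name $_
}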
Network and OS configuration
Warning: Before running the script in this section, be sure that nested Hyper-V is enabled! Otherwise, you will get blue screens in a loop because of SET.
Now, it is time to configure the OS and the networks of the Hyper Converged nodes. First, the time zone has to be configured. Then the network will be configured by creating the SET and the virtual NICs. Finally, the required features will be installed. All of this is executed with a PowerShell script, leveraging PowerShell Direct. Copy and paste this script into a Windows PowerShell console (do not run it as a script file, because Enter-PSSession does not behave well that way):
$ServerName = "HC-Nano01"
#Specify the local administrator account
Enter-PSSession –VMName $ServerName –Credential $($ServerName + "\Administrator")
#Change the TimeZone to Romance Standard Time
tzutil /s "Romance Standard Time"
# Create a Swtich Embedded Teaming with both Network Adapters
New-VMSwitch -Name Management -EnableEmbeddedTeaming $True -AllowManagementOS $True
-NetAdapterName "Ethernet", "Ethernet 2"
# Add Virtual NICs for Storage, Cluster and Live-Migration
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName Management
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName Management
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName Management
# Set the IP Address for each virtual NIC
netsh interface ip set address "vEthernet (Management)" static 10.10.0.160
255.255.255.0 10.10.0.1
netsh interface ip set address "vEthernet (Storage)" static 10.10.102.160
255.255.255.192
netsh interface ip set address "vEthernet (LiveMigration)" static 10.10.101.16O
255.255.255.0
netsh interface ip set address "vEthernet (Cluster)" static 10.10.100.16O
255.255.255.0
# Enable RDMA on Storage and Live-Migration
Enable-NetAdapterRDMA -Name "vEthernet (Storage)"
Enable-NetAdapterRDMA -Name "vEthernet (LiveMigration)"
# Add DNS on Management vNIC
netsh interface ip set dns "vEthernet (Management)" static 10.10.0.20
Exit
# Install File Server and Storage Replica feature
install-WindowsFeature Storage-Replica -ComputerName $ServerName
# Restarting the VM
Restart-VM –Name $ServerName
Re-run this script for each node, changing the ServerName variable and the IP addresses.
Warning: If you lose connectivity or cannot contact your VMs, set one network adapter to Not Connected in the VM configuration.
Below you can see the IP configuration on HC-Nano03 after the script has run. We have four virtual NICs with their IP addresses correctly configured; a quick way to check this remotely is sketched below.
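For reference, one way to perform that check without a screenshot is PowerShell Direct from the physical Hyper-V host; the node name and credential are the ones used in this lab, and the property selection is only an example:
# List the IPv4 addresses of HC-Nano03 through PowerShell Direct (prompts for the password)
Invoke-Command -VMName HC-Nano03 -Credential "HC-Nano03\Administrator" -ScriptBlock {
    Get-NetIPAddress -AddressFamily IPv4 |
        Sort-Object InterfaceAlias |
        Format-Table InterfaceAlias, IPAddress, PrefixLength
}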
Figure 30: IP configuration on Nano Servers
You can see that all virtual NICs are attached to the switch called Management.
Figure 31: Virtual NIC’s details
Finally, you can see in the screenshot below the VM Switch configuration, with EmbeddedTeamingEnabled set to True (SET enabled).
Figure 32: VM Switch configuration
The nodes are ready to be added to a cluster and to deploy Storage Spaces Direct.
About Datacenter Bridging
If you have NICs and switches which support DCB, you can enable and configure it for SMB Direct with
the following script (copied from TechNet):
# Set a policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
#
# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3
#
# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
#
# Apply policy to the target adapters
Enable-NetAdapterQos -InterfaceAlias "SLOT 2*"
#
# Give SMB Direct 30% of the bandwidth minimum
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS
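After running the TechNet script above, you may want to check the resulting configuration; this quick verification is not part of the original script, but it only uses the standard NetQos cmdlets:
# Optional verification of the DCB/QoS configuration
Get-NetQosPolicy
Get-NetQosFlowControl
Get-NetQosTrafficClass
Get-NetAdapterQos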
Storage Spaces Direct deployment
Cluster deployment
First of all, the configuration must be validated for Storage Spaces Direct. For this purpose, the Test-Cluster command can be used (the Failover Clustering RSAT for Windows Server 2016 is required):
Test-Cluster -Node "HC-Nano01", "HC-Nano02", "HC-Nano03", "HC-Nano04" -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
Once the test is finished, the prompt indicates the path to the Failover Cluster Validation Report. If the whole report is OK (green), you can create the cluster.
Figure 33: Failover Cluster Validation Report
To create the cluster, you can run the command New-Cluster as below:
New-Cluster -Name HyperConverged -Node HC-Nano01, HC-Nano02, HC-Nano03, HC-Nano04 -NoStorage -StaticAddress 10.10.0.164
Once the cluster is created, you can open the Failover Cluster Manager (RSAT for Windows Server 2016). You should have a four-node cluster, which you can also verify from PowerShell as sketched below.
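A one-line check of the cluster membership (it assumes the Failover Clustering PowerShell module from the RSAT is installed on the management machine):
# List the nodes of the new cluster and their state
Get-ClusterNode -Cluster HyperConverged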
Figure 34: Four-Node hyper converged cluster
To ease the management of the cluster, the networks are renamed and the cluster use is set:
(Get-ClusterNetwork -Cluster HyperConverged -Name "Cluster Network 1").Name="Management"
(Get-ClusterNetwork -Cluster HyperConverged -Name "Cluster Network 2").Name="Storage"
(Get-ClusterNetwork -Cluster HyperConverged -Name "Cluster Network 3").Name="Cluster"
(Get-ClusterNetwork -Cluster HyperConverged -Name "Cluster Network 4").Name="Live-Migration"
(Get-ClusterNetwork -Cluster HyperConverged -Name "Storage").Role="ClusterAndClient"
Then, Live Migration is configured to use the Live-Migration and the Cluster networks in priority; one possible way to script this restriction is sketched below.
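The settings shown in Figure 35 can be configured in the Failover Cluster Manager GUI; as an alternative, a possible sketch that excludes the Management and Storage networks from Live Migration (so that only the Live-Migration and Cluster networks are used) relies on the MigrationExcludeNetworks cluster parameter:
# Build the list of cluster network IDs to exclude from Live Migration
$Exclude = (Get-ClusterNetwork -Cluster HyperConverged |
    Where-Object { $_.Name -in @("Management", "Storage") }).Id -join ";"
# Apply the exclusion to the Virtual Machine resource type
Get-ClusterResourceType -Cluster HyperConverged -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value $Exclude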
Figure 35: Live-Migration Settings
To follow best practices, we add a Quorum Witness. I use the new Windows Server 2016 feature that
allows me to use Microsoft Azure as a Witness (called Cloud Witness):
Set-ClusterQuorum -CloudWitness -Cluster HyperConverged -AccountName publichomecloudstorage -AccessKey <AccessKey>
Figure 36: Cluster configuration
Now the cluster is ready to host the Storage Spaces Direct.
Enable Storage Spaces Direct
Depending on the chosen storage devices, there are several ways to enable the Storage Spaces Direct.
If you have installed SATA SSD and SATA HDD, you should use the below command:
$cluster = New-CimSession -ComputerName hyperconverged
Enable-ClusterS2D -CimSession $cluster -CacheMode ReadWrite
If you have installed NVMe and SATA SSD storage devices, you should use the below command to set
the Storage Bus Cache on the NVMe devices:
Enable-ClusterS2D -CimSession $Cluster -CacheMode ReadOnly
If you have only NVMe or only SATA SSD, you should not use Storage Bus Cache:
Enable-ClusterS2D -CimSession $cluster -CacheMode Disabled
For our example, we need a special command that skips the disk eligibility checks, in order to deploy Storage Spaces Direct in VMs:
Enable-ClusterS2D -CimSession $cluster -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks -Confirm:$false
If you open the Enclosures tab in the Failover Cluster Manager, you should see four enclosures (one per node) and the number of disks that you have installed on each node. The same information can also be retrieved from PowerShell, as sketched below.
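A quick check from PowerShell, reusing the $cluster CIM session created above (the selected properties are only an example):
# List the storage enclosures exposed by the cluster
Get-StorageEnclosure -CimSession $cluster | Format-Table FriendlyName, HealthStatus, NumberOfSlots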
Figure 37: Storage Enclosures
Management storage
Before creating Storage Spaces, we need to register the storage subsystem on our management server (where the File Server RSAT tools are installed) because we are doing remote administration. To register the hyperconverged storage subsystem, run the command below:
Register-StorageSubSystem -ComputerName hyperconverged.int.homecloud.net -ProviderName *
Now we can manage the cluster storage as if it were local storage.
Get-StorageSubSystem -Name HyperConverged* | fl *
Figure 38: Storage system information
To show all physical disks ready to be aggregated in a Storage Pool, you can run the below command:
Get-StorageSubSystem -Name hyperconverged.int.homecloud.net | Get-PhysicalDisk
Figure 39: Physical disks installed on each node
To create a Storage Pool, the New-StoragePool command can be used. In the example below, a Storage Pool called VMStorage is created, without write cache. The default provisioning type is Fixed and the default resiliency is Mirror. All available physical disks are selected to be part of the Storage Pool.
New-StoragePool -StorageSubSystemName hyperconverged.int.homecloud.net `
                -FriendlyName VMStorage `
                -WriteCacheSizeDefault 0 `
                -ProvisioningTypeDefault Fixed `
                -ResiliencySettingNameDefault Mirror `
                -PhysicalDisk (Get-StorageSubSystem -Name hyperconverged.int.homecloud.net | Get-PhysicalDisk)
# If SATA SSD and SATA HDD are used, set journal to use SSD
Get-StoragePool VMStorage | Get-PhysicalDisk | ? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal
# If NVMe SSD and SATA SSD (flash configuration), set journal to NVMe devices
Get-StoragePool VMStorage | Get-PhysicalDisk | ? {$_.MediaType -eq "SSD" -and $_.BusType -eq "NVMe"} | Set-PhysicalDisk -Usage Journal
Figure 40: Storage Pool
Then we can create volumes to store the VMs. In the example below, I create two volumes of 50GB. The data on these volumes will be striped across two columns. The resiliency mode is 2-way mirroring. The volumes will be formatted with ReFS.
New-Volume -StoragePoolFriendlyName VMStorage `
           -FriendlyName VMStorage01 `
           -NumberOfColumns 2 `
           -PhysicalDiskRedundancy 1 `
           -FileSystem CSVFS_REFS `
           -Size 50GB
New-Volume -StoragePoolFriendlyName VMStorage `
           -FriendlyName VMStorage02 `
           -NumberOfColumns 2 `
           -PhysicalDiskRedundancy 1 `
           -FileSystem CSVFS_REFS `
           -Size 50GB
To create a multi-resiliency virtual disk, you can run the below commands:
## To create multi-resiliency Virtual Disks with 3-Way Mirroring
# Create two Storage Tiers level in 3-way mirroring
New-StorageTier -StoragePoolFriendlyName VMStorage -FriendlyName MirrorTier -MediaType HDD -ResiliencySettingName Mirror -NumberOfColumns 4 -PhysicalDiskRedundancy 2
New-StorageTier -StoragePoolFriendlyName VMStorage -FriendlyName ParityTier -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 7 -PhysicalDiskRedundancy 2
#Get the storage tier definitions
$MirrorTier = Get-StorageTier MirrorTier
$ParityTier = Get-StorageTier ParityTier
#Create mirror + parity virtual disks
New-Volume -StoragePoolFriendlyName VMStorage `
-FriendlyName VMStorage03 `
-FileSystem CSVFS_ReFS `
-StorageTiers $MirrorTier,$ParityTier `
-StorageTierSizes 200GB, 1800GB
Figure 41: Clustered Storage Spaces
Now we have two clustered virtual disks which can host VM data.
Figure 42: Storage QoS
Host a VM on the solution
To try the solution, we will host a VM on the cluster. We use a sysprep'd VHDX containing Windows Server 2016 Technical Preview 5. The VM can be created through the Failover Cluster Manager, as in the screenshots below, or with PowerShell, as sketched right after this paragraph.
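A possible PowerShell sketch; the VM name, memory size and paths are assumptions, and it presumes the sysprep'd VHDX has already been copied to C:\ClusterStorage\Volume1\VM01\VM01.vhdx on the cluster (adjust the generation to match the image):
# Create the VM directly on the first CSV volume of the Hyper Converged cluster
New-VM -ComputerName HC-Nano01 `
       -Name VM01 `
       -Path C:\ClusterStorage\Volume1 `
       -MemoryStartupBytes 1GB `
       -VHDPath C:\ClusterStorage\Volume1\VM01\VM01.vhdx `
       -SwitchName Management `
       -Generation 2
# Make the VM highly available in the cluster
Add-ClusterVirtualMachineRole -VMName VM01 -Cluster HyperConverged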
Figure 43: Create a VM on the cluster
In the screenshot below, you can see that the VM is started and running, thanks to nested Hyper-V.
Figure 44: VM running on the Hyper Converged cluster
Then we connect to the VM and try to ping Google to verify the network.
Figure 45: VM on Hyper Converged cluster running
Storage Quality of Service
First of all, it is necessary to open a CIM session to a cluster node because we are managing the cluster
from a management machine using RSAT Tools:
$CimSession = New-CimSession -Credential inthomecloud\rserre -ComputerName HC-Nano01
Then I use the below command to show the IOPS for each flow.
Get-StorageQosFlow -CimSession $CimSession |
Sort-Object StorageNodeIOPs -Descending |
ft InitiatorName,
@{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};
Label="InitiatorNodeName"}, StorageNodeIOPs, Status,
@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"}
-AutoSize
You can find the result of this command in the below screenshot. I have the VM name, the Hyper-V
host name, the IOPS and the name of the virtual hard disk file.
Figure 46: Get the Storage QoS per flow
To get more information such as the PolicyID, the minimum and the maximum IOPS, you can run the
below command. In the below example, no Storage QoS policy is applied to the flow.
Get-StorageQoSFlow -InitiatorName VM01 -CimSession $CimSession | Format-List
Figure 47: All information about Storage QoS applied to VM01 disks
You can also have a summary about the storage QoS applied to a volume:
Get-StorageQoSVolume -CimSession $CimSession | fl *
Figure 48: Result of Get-StorageQoSVolume PowerShell cmdlet
You can see in the above screenshot that no reservation is applied on this volume.
To run the test, I launched IOMeter in VM01. Below you can find the result without any policy applied.
Figure 49: IOMeter in VM01 without Storage QoS policy
Then I create a new policy called bronze with a minimum IOPS of 50 and a maximum IOPS of 150.
New-StorageQosPolicy -Name bronze -MinimumIops 50 -MaximumIops 150 -CimSession
$CimSession
Next, I apply the Storage QoS policy to the VM01 disks.
Get-VM -Name VM01 -ComputerName HC-Nano02|
Get-VMHardDiskDrive |
Set-VMHardDiskDrive -QoSPolicyID (Get-StorageQosPolicy -Name Bronze -CimSession
$CimSession).PolicyId
If I run Get-StorageQoSVolume again, you can see that a reservation of 50 IOPS is applied to the volume, as defined in the policy.
Figure 50: Storage QoS volume information after policy application
Then I run Get-StorageQoSFlow again. You can see that the StorageNodeIOPS value is reduced to 169 (instead of 435).
Figure 51: Storage Node IOPS after policy application
Then, if you display all the details, you can see that a policy is applied to the flow, with a minimum and a maximum IOPS.
Figure 52: Flow details after Storage QoS application
Finally, you can see in the screenshot below that the IOPS inside the VM are also reduced compared to before the Storage QoS policy was applied.
Figure 53: IOMeter inside the VM01 after Storage QoS policy application
Conclusion
Hyper Convergence is a great way to build a modern datacenter. First of all, because storage and compute are inside the same physical servers, there is no scalability problem, unless you want to scale the storage and the compute separately; that choice is up to you.
Secondly, the whole solution is software-based, whether for compute, network or storage. Therefore, it is easier to manage and to keep operational. Software solutions are also cheaper than hardware solutions such as a SAN.
In my opinion, Nano Server is a good technology for Hyper Convergence. Nano Server deployment is faster than standard Windows Server 2016 (Core or with GUI) and there are no legacy or unnecessary services. This means fewer crashes and fewer updates. In addition, because the restart is fast, even if the system fails, you can recover quickly.
However, as you have seen in this whitepaper, there are many technologies involved in Hyper Convergence, and each of them is necessary to obtain good performance from the solution. Serious design work is needed before the implementation. Companies that bypass the design phase will have problems, mainly regarding performance and resiliency.
Now that nested Hyper-V is available, try your solution, and try it again. When you are sure of your solution, deploy it in production.
References
charbelnemnom.com
External blog - Charbel Nemnom
DataCenter Bridging (DCB) overview
TechNet Topic
Getting Started with Nano Server
TechNet Topic
Hardware options for evaluating Storage Spaces Direct in Technical Preview 4
TechNet blog - Claus Joergensen
Hyper-V Architecture
MSDN
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
TechNet Topic
Storage Replica overview
TechNet Topic
Storage Spaces Direct in Windows Server 2016
TechNet Topic
Storage Spaces Direct in Technical Preview 4
TechNet blog - Claus Joergensen
Storage Spaces Striping and Mirroring
External blog - Llubo Brodaric
Storage Spaces Direct – Under the hood with the Software Storage Bus
TechNet blog - Claus Joergensen
Storage Quality of Service
TechNet Topic
VMQueue Deep Dive
TechNet blog - Ravi Rao
Tech-Coffee
External blog - Romain Serre