Microsoft HCI Solutions from Dell Technologies: End-to-End Single-Node Deployment

Abstract

This document describes the end-to-end single-node cluster deployment on the Azure Stack HCI operating system.

Part Number: H19404

December 2022


Single-node deployment

About this task

Microsoft added a feature to support single-node clusters on the Azure Stack HCI operating system. Single-node clusters are similar to stand-alone Storage Spaces nodes but are delivered as an Azure service. Single-node clusters support all the Azure services that a multi-node Azure Stack HCI cluster supports. On a single-node cluster, there is no automatic provision for failover to another node. A physical disk is the fault domain in a single-node cluster, and only a single-tier configuration (All-Flash or All-NVMe) is supported.

While node-level High Availability cannot be supported on a single-node cluster, you can choose application-level or VM-level replication to maintain High Availability in your infrastructure.

Topics:

Operating system deployment

Install roles and features

Change the hostname

Registry settings

PAL and DPOR registration for Azure Stack

Joining cluster nodes to an Active Directory domain

Single-node configuration

Updating a standalone node before adding it to the cluster

Network connectivity for single node clusters

Creating the host cluster

Enabling Storage Spaces Direct

Updating the page file settings

Azure onboarding for Azure Stack HCI operating system

Best practices and recommendations

Operating system deployment

These instructions are for manual deployment of the Azure Stack HCI operating system version 22H2 on AX nodes from Dell Technologies. Unless otherwise specified, perform the steps on each physical node in the infrastructure that will be a part of Azure Stack HCI.

NOTE: The steps in the subsequent sections are applicable to either the full operating system or Server Core.

Manual operating system deployment

Dell Lifecycle Controller and iDRAC provide operating system deployment options. Options include manual installation or unattended installation by using virtual media and the operating system deployment feature in Lifecycle Controller for Azure Stack HCI operating system version 22H2.

A step-by-step procedure for deploying the operating system is not within the scope of this guide.

This guide assumes that:

● Azure Stack HCI operating system version 22H2 installation on the physical server is complete.

● You have access to the iDRAC virtual console of the physical server.

NOTE: For information about installing the operating system using the iDRAC virtual media feature, see the Using the Virtual Media function on iDRAC 6, 7, 8 and 9 Dell Knowledge Base article.


NOTE: The Azure Stack HCI operating system is based on Server Core and does not have the full user interface. For more information about using the Server Configuration tool (Sconfig), see the Deploy the Azure Stack HCI operating system Microsoft article.

Factory-installed operating system deployment

If the cluster nodes are shipped from Dell Technologies with a preinstalled operating system, complete the out-of-box experience (OOBE):

● Select language and locale settings.

● Accept the Microsoft and OEM EULAs.

● Set up a password for the local administrator account.

● Update the operating system partition size, and shrink it as needed.

NOTE: Partition size adjustment is not available with the factory-installed Azure Stack HCI operating system.

The OEM operating system is preactivated and the Hyper-V role is predeployed for Windows Server 2019/2022. After completing the OOBE steps, perform the steps in Install roles and features to complete the cluster deployment and Storage Spaces Direct configuration. For the Azure Stack HCI operating system, roles and features are preinstalled.

The Azure Stack HCI operating system factory image has multilingual support for these languages: English, German, French, Spanish, Korean, Japanese, Polish, and Italian. To change the language:

● Run the following PowerShell commands. <LANGUAGE> can be en-US, fr-FR, ja-JP, ko-KR, de-DE, pl-PL, it-IT, or es-ES.

○ Set-WinUserLanguageList <LANGUAGE>

○ Set-WinSystemLocale -systemlocale <LANGUAGE>

● Reboot after running the commands.

NOTE: For all of the languages except Polish, the main screen changes to the appropriate font. For Polish, the English font is used.
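For example, a minimal sketch that switches the factory image to German and restarts (the -Force flag suppresses the confirmation prompt; substitute any of the supported language tags):

Set-WinUserLanguageList -LanguageList de-DE -Force
Set-WinSystemLocale -SystemLocale de-DE
Restart-Computer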

If you purchased a license for a secondary operating system to run your virtual machines, the VHD file is located in the C:\Dell_OEM\VM folder. Copy this VHD file to a virtual disk in your Azure Stack HCI cluster and create virtual machines using this VHD.

NOTE: Do not run your virtual machine with the VHD file residing on the BOSS device (for example, C:\). It should always be on C:\ClusterStorage\<VD1>.
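As an illustrative sketch only (the VHD file name, volume name, and VM settings are hypothetical examples), copying the VHD and creating a VM from it might look like this:

#Copy the OEM guest OS VHD from the BOSS device to a cluster virtual disk
Copy-Item -Path 'C:\Dell_OEM\VM\GuestOS.vhdx' -Destination 'C:\ClusterStorage\VD1\VM01\VM01.vhdx'
#Create a Generation 2 VM that boots from the copied VHD
New-VM -Name VM01 -MemoryStartupBytes 4GB -VHDPath 'C:\ClusterStorage\VD1\VM01\VM01.vhdx' -SwitchName S2DSwitch -Generation 2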

Upgrade using sConfig (Standalone systems)

Using the Sconfig menu (the Server Configuration tool, launched from a command prompt), you can update the servers one at a time. Customers who receive Azure Stack HCI OS 21H2 can update to 22H2 using this method, so that a cluster can be created on 22H2 without having to upgrade after cluster creation.

Steps

1. On the Sconfig menu, select option 6 and install all quality updates.

2. Once all quality updates are completed, go to Feature Updates on the Sconfig menu and perform an OS upgrade from 21H2 to 22H2. After completing the OS upgrade, repeat step 1 to install all the quality updates for 22H2. You may have to run this multiple times to get to the latest cumulative update.

3. Use Windows Admin Center to update each node to the latest hardware support matrix. See Dell Integrated System for Microsoft Azure Stack HCI: End-to-End Deployment - Cluster Creation Using Windows Admin Center.

4. When the operating system on all nodes is updated to the latest CU of 22H2, you may proceed to creating the cluster using PowerShell or Windows Admin Center.

Install roles and features

Deployment and configuration of a Windows Server 2016, Windows Server 2019, Windows Server 2022, or Azure Stack HCI operating system version 22H2 cluster requires enabling specific operating system roles and features.

Enable the following roles and features:


● Hyper-V service (not required if the operating system is factory-installed)

● Failover clustering

● Data center bridging (DCB) (required only when implementing fully converged network topology with RoCE and when implementing DCB for the fully converged topology with iWARP)

● BitLocker (optional)

● File Server (optional)

● FS-Data-Deduplication module (optional)

● RSAT-AD-PowerShell module (optional)

Enable these features by running the Install-WindowsFeature PowerShell cmdlet:

Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging, BitLocker, FS-FileServer, RSAT-Clustering-PowerShell, FS-Data-Deduplication -IncludeAllSubFeature -IncludeManagementTools -Verbose

NOTE: Install the storage-replica feature if Azure Stack HCI operating system is being deployed for a stretched cluster.
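As a sketch, for a stretched-cluster deployment that additional feature can be installed with the same cmdlet:

Install-WindowsFeature -Name Storage-Replica -IncludeManagementTools -Verbose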

NOTE: Hyper-V and the optional roles installation require a system restart. Because subsequent procedures also require a restart, the required restarts are combined into one (see the note in the Change the hostname section).

Change the hostname

By default, the operating system deployment assigns a random name as the host computer name. For easier identification and uniform configuration, Dell Technologies recommends that you change the hostname to something that is relevant and easily identifiable.

Change the hostname by using the Rename-Computer cmdlet:

Rename-Computer -NewName S2DNode01 -Restart

NOTE: This command induces an automatic restart at the end of rename operation.

Registry settings

NOTE: This registry key procedure applies only if the operating system is Azure Stack HCI OS. It does not apply to Windows Server OS (WS2019 or WS2022).

● Azure Stack HCI OS is available in multiple languages. You can download the ISO file of Azure Stack HCI OS (in any language) from Microsoft | Azure Stack HCI software download.

● After installing Azure Stack HCI OS on an AX node, you must perform the following registry configuration before Azure onboarding to ensure that the node is identified as one sold by Dell Technologies.

● PowerShell commands:

New-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation" -Name SupportProvider -Value DellEMC

To verify "DellEMC" is successfully entered into the registry, run the following command:

(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation").SupportProvider


Figure 1. Registry settings

PAL and DPOR registration for Azure Stack

Partner Admin Link (PAL) and Digital Partner of Record (DPOR) are customer association mechanisms used by Microsoft to measure the value a partner delivers to Microsoft by driving customer adoption of Microsoft Cloud services.

Currently, Dell Azure projects that are not associated through either of these mechanisms are not visible to Microsoft and, therefore, Dell does not get credit. Dell technical representatives should attempt to set both PAL and DPOR, with PAL being the priority.

To register the PAL or DPOR for the Azure Stack system, refer to PAL and DPOR Registration for Azure Stack under Deployment Procedures in the Azure Stack HCI generator in SolVe.

Joining cluster nodes to an Active Directory domain

Before you can create a cluster, the cluster nodes must be a part of an Active Directory domain.

NOTE: Connecting to Active Directory Domain Services by using the host management network might require routing to the Active Directory network. Ensure that this routing is in place before joining cluster nodes to the domain.

You can perform the domain join task by running the Add-Computer cmdlet on each host that will be a part of the Azure Stack HCI cluster.

NOTE: Optionally, you can add all newly created computer objects from the cluster deployment to a different Organizational Unit (OU) in Active Directory Domain Services. In this case, you can use the -OUPath parameter along with the Add-Computer cmdlet.

$credential = Get-Credential

Add-Computer -DomainName S2dlab.local -Credential $credential -Restart

NOTE: This command induces an automatic restart at the end of the domain join operation.
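If you are placing the computer objects in a specific OU, a sketch of the same command with the -OUPath parameter (the OU path shown is a hypothetical example):

Add-Computer -DomainName S2dlab.local -OUPath "OU=AzureStackHCI,DC=S2dlab,DC=local" -Credential $credential -Restart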

Single-node configuration

A single-node cluster has only one node in the cluster. Like a multi-node Azure Stack HCI OS cluster, a single-node cluster is delivered as an Azure service and has all the hybrid cloud features. Because the physical disk is the fault domain for a single-node cluster, it does not have to replicate any data over the network. Nevertheless, Dell Engineering still recommends having an RDMA NIC in the setup, as it can help in future expansion (once it is supported by Microsoft) or be used for virtual machine traffic. The 25 GbE or 100 GbE adapters can also be used for application replication, Hyper-V Replica, or Shared Nothing Live Migration of VMs to other clusters.

The following figures show network topology configurations that take advantage of the RDMA adapters in the system.


Figure 2. Configuration 1

Configuration 1 shows how to use the rNDC or RDMA PCIe adapter to create a SET team and use the SET team for Management, VM traffic, and Live Migration traffic (if desired).

Figure 3. Configuration 2

Configuration 2 shows how traffic can be bifurcated between the rNDC and the RDMA PCIe adapter in the system. Lower-bandwidth rNDCs can be used for Management traffic, while higher-bandwidth PCIe adapters can be used for VM Network, Hyper-V Replica, or Shared Nothing Live Migration to move VMs and the associated storage from one cluster to another.

Deployment instructions for single-node cluster

This section provides an example of the commands used to configure networking of a single-node cluster.

#Create VMSwitch for Management using rNDC
New-VMSwitch -Name S2DSwitch -AllowManagementOS 0 -NetAdapterName "NIC1","NIC2" -MinimumBandwidthMode Weight -Verbose

#Create VMSwitch for LiveMigration/Application Replication using RDMA adapters
New-VMSwitch -Name Migration_VM -AllowManagementOS 0 -NetAdapterName "SLOT 6 Port 1","SLOT 6 Port 2" -MinimumBandwidthMode Weight -Verbose

#Assign VLANs as needed
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName S2DSwitch -Passthru | Set-VMNetworkAdapterVlan -Access -VlanId 202 -Verbose
Add-VMNetworkAdapter -ManagementOS -Name "LM" -SwitchName Migration_VM -Passthru | Set-VMNetworkAdapterVlan -Access -VlanId 210 -Verbose

#Set Jumbo Frames as needed
Set-NetAdapterAdvancedProperty -Name "SLOT 6 Port 1" -DisplayName "Jumbo Packet" -DisplayValue "9014"
Set-NetAdapterAdvancedProperty -Name "SLOT 6 Port 2" -DisplayName "Jumbo Packet" -DisplayValue "9014"

#Host Management adapter
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 172.18.48.71 -DefaultGateway 172.18.32.1 -PrefixLength 19 -AddressFamily IPv4 -Verbose

#LiveMigration adapter
New-NetIPAddress -InterfaceAlias 'vEthernet (LM)' -IPAddress 192.168.200.71 -PrefixLength 24 -AddressFamily IPv4 -Verbose

#Set minimum bandwidth weight for all adapters. Helpful if you use the same adapter for all traffic. This can be changed as needed by the customer.
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name LM -MinimumBandwidthWeight 20
Set-VMSwitch -Name S2DSwitch -DefaultFlowMinimumBandwidthWeight 10
Set-VMSwitch -Name Migration_VM -DefaultFlowMinimumBandwidthWeight 10

#DNS server address
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 172.18.40.10,172.18.40.9

#Disable RDMA on the RDMA adapter as it won't be used in this configuration.
Disable-NetAdapterRdma 'SLOT 6*'
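After running these commands, you can sanity-check the resulting configuration. A minimal verification sketch (standard cmdlets; adapter and interface names follow the example above):

#Confirm the host virtual NICs and their VLAN assignments
Get-VMNetworkAdapterVlan -ManagementOS
#Confirm IP addressing on the management and Live Migration vNICs
Get-NetIPAddress -InterfaceAlias 'vEthernet (Management)','vEthernet (LM)' -AddressFamily IPv4
#Confirm that RDMA is disabled on the physical RDMA ports
Get-NetAdapterRdma -Name 'SLOT 6*'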

Updating a standalone node before adding it to the cluster

Before creating a cluster, ensure that each node is updated with the latest versions of firmware and drivers.

Steps

1. In Windows Admin Center, in the left pane, click Add.

2. In the Windows Server tile, click Add.

3. Enter the node name and click Add.

4. Under All connections, select the server and click Manage as.

5. Select Use another account for this connection, and then provide the credentials in the domain\username or hostname\username format.

6. Click Confirm.

7. In the Connections window, click the server name.

8. In the left pane of Windows Admin Center, under EXTENSIONS, click OpenManage Integration.

9. Review the Dell Software License Agreement and Customer Notice, and select the check box to accept the terms of the license agreement.

10. Click View > Compliance. In the menu that appears, select Hardware Updates.

11. Click Check compliance and select either the online catalog or the offline catalog.

12. Click Fix Compliance and select Update to update the node.

Updating a single-node cluster

Updating a single-node cluster is similar to updating a stand-alone node, because a Cluster Aware Updating process cannot be used for a single-node cluster. Cluster Aware Updating involves pausing a node and moving the physical drives into storage maintenance mode, which is not supported on a single-node cluster.


Single-node clusters must be updated from the Server Manager view in Windows Admin Center. Operating system and hardware updates must be made independently and are not supported in the same workflow. Single-node clusters cannot be paused or moved into Maintenance Mode, as there are no secondary nodes in the cluster to live-migrate the workloads.

NOTE: The OMIMSWAC Dell extension should be used to perform hardware updates from the Server Manager view.

Network connectivity for single node clusters

A single-node cluster has adapters configured only for management and VM traffic. However, Azure Stack HCI Engineering still recommends configuring a virtual network interface for Live Migration in case you intend to use Shared Nothing Live Migration to move workloads to other clusters at a later time. You can also use the adapter to configure application or VM replication. For guidance on network configurations and topologies, see Network integration overview.

Creating the host cluster

Verify that the nodes are ready for cluster creation, and then create the host cluster.

Steps

1. Run the Get-PhysicalDisk command on all cluster nodes.

Verify the output to ensure that all disks are in the healthy state and that the nodes have an equal number of disks. Verify that the nodes have a homogeneous hardware configuration.
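For example, a quick sketch that summarizes disk health and media type before cluster creation:

Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, MediaType, BusType, HealthStatus, Size -AutoSize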

2. Run the New-Cluster cmdlet to create the host cluster.

NOTE: For the -IgnoreNetwork parameter, specify all storage network subnets as arguments. Switchless configuration requires that all storage network subnets are provided as arguments to the -IgnoreNetwork parameter.

New-Cluster -Name S2DSystem -Node S2Dnode01 -StaticAddress 172.16.102.55 -NoStorage

In this command, the StaticAddress parameter is used to specify an IP address for the cluster in the same IP subnet as the host management network. The NoStorage switch parameter specifies that the cluster is to be created without any shared storage.

The New-Cluster cmdlet generates an HTML report of all performed configurations and includes a summary of the configurations. Review the report before enabling Storage Spaces Direct.

Enabling Storage Spaces Direct

After you create the cluster, run the Enable-ClusterS2D cmdlet to configure Storage Spaces Direct on the cluster. Do not run the cmdlet in a remote session; instead, use the local console session.

Run the Enable-ClusterS2D cmdlet as follows:

Enable-ClusterS2D -Verbose

The Enable-ClusterS2D cmdlet generates an HTML report of all configurations and includes a validation summary. Review this report, which is typically stored in the local temporary folder on the node where the cmdlet was run. The verbose output of the command shows the path to the cluster report. At the end of the operation, the cmdlet discovers and claims all the available disks into an auto-created storage pool. Verify the cluster creation by running any of the following commands:

Get-ClusterS2D

Get-StoragePool

Get-StorageSubSystem -FriendlyName *Cluster* | Get-StorageHealthReport


Updating the page file settings

To help ensure that the active memory dump is captured if a fatal system error occurs, allocate sufficient space for the page file. Dell Technologies recommends allocating at least 50 GB plus the size of the CSV block cache.

About this task

1. Determine the cluster CSV block cache size value by running the following command:

$blockCacheMB = (Get-Cluster).BlockCacheSize

NOTE: On Windows Server 2016, the default block cache size is 0. On Windows Server 2019, Windows Server 2022, and Azure Stack HCI operating system version 22H2, the default block cache size is 1 GB.

2. Run the following command to update the page file settings:

$blockCacheMB = (Get-Cluster).BlockCacheSize
$pageFilePath = "C:\pagefile.sys"
$initialSize = [Math]::Round(51200 + $blockCacheMB)
$maximumSize = [Math]::Round(51200 + $blockCacheMB)

$system = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
if ($system.AutomaticManagedPagefile) {
    $system.AutomaticManagedPagefile = $false
    $system.Put()
}

$currentPageFile = Get-WmiObject -Class Win32_PageFileSetting
if ($currentPageFile.Name -eq $pageFilePath) {
    $currentPageFile.InitialSize = $initialSize
    $currentPageFile.MaximumSize = $maximumSize
    $currentPageFile.Put()
}
else {
    $currentPageFile.Delete()
    Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{Name = $pageFilePath; InitialSize = $initialSize; MaximumSize = $maximumSize}
}
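After the node restarts, you can verify the result. A minimal sketch using the CIM equivalent of the class used above:

Get-CimInstance -ClassName Win32_PageFileSetting | Format-Table Name, InitialSize, MaximumSize -AutoSize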

Azure onboarding for Azure Stack HCI operating system

Clusters deployed using the Azure Stack HCI operating system must be onboarded to Microsoft Azure for full functionality and support. For more information about firewall requirements and about connecting Azure Stack HCI clusters to Azure, see Firewall requirements for Azure Stack HCI and Connect Azure Stack HCI to Azure, respectively.

After Microsoft Azure registration, use the Get-AzureStackHCI command to confirm the cluster registration and connection status.
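As a hedged sketch only (assuming the Az.StackHCI PowerShell module is installed, and with placeholder parameter values), registration and verification from a cluster node might look like:

#Register the cluster with Azure; subscription ID and region are placeholders
Register-AzStackHCI -SubscriptionId "<subscription-id>" -Region "EastUS"
#Confirm the registration and connection status
Get-AzureStackHCI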


Best practices and recommendations

Dell Technologies recommends that you follow the guidelines that are described in this section.

Disable SMB Signing

Storage Spaces Direct uses RDMA for SMB (storage) traffic for improved performance. When SMB Signing is enabled, the network performance of SMB traffic is significantly reduced.

For more information, see Reduced networking performance after you enable SMB Encryption or SMB Signing in Windows Server 2016.

NOTE: By default, SMB Signing is disabled. If SMB Signing is enabled in the environment through a Group Policy Object (GPO), you must disable it from the domain controller.
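To check whether SMB Signing is currently enabled or required on a node, a minimal verification sketch:

Get-SmbServerConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature
Get-SmbClientConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature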

Update the hardware timeout for the Spaces port

For performance optimization and reliability, update the hardware timeout configuration for the Spaces port.

The following PowerShell command updates the configuration in the Windows registry and induces a restart of the node at the end of the registry update. Perform this update on all Storage Spaces Direct nodes immediately after initial deployment. Update one node at a time and wait until each node rejoins the cluster.

Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\spaceport\Parameters -Name HwTimeout -Value 0x00002710 -Verbose

Restart-Computer -Force

