Cisco Universal Small Cell RAN Management System Installation Guide



RAN Management System Installation Guide, Release 4.1

First Published: September 10, 2014

Last Modified: May 25, 2015

Americas Headquarters

Cisco Systems, Inc.

170 West Tasman Drive

San Jose, CA 95134-1706

USA http://www.cisco.com

Tel: 408 526-4000

800 553-NETS (6387)

Fax: 408 527-0883

Text Part Number: May 25, 2015 HF

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

© 2014 Cisco Systems, Inc. All rights reserved.

Contents

Preface
    Objectives
    Audience
    Conventions
    Related Documentation
    Obtaining Documentation and Submitting a Service Request

Chapter 1: Installation Overview
    Cisco RAN Management System Overview
    Cisco RMS Deployment Modes
        All-in-One RMS
        Distributed RMS
            Central RMS Node
            Serving RMS Node
            Upload RMS Node
    Installation Flow
    Installation Image

Chapter 2: Installation Prerequisites
    Sample Network Sizes
    Hardware and Software Requirements
        Femtocell Access Point Requirement
        Cisco RMS Hardware and Software Requirements
            Cisco UCS C240 M3 Server
            Cisco UCS 5108 Chassis Based Blade Server
            Cisco UCS B200 M3 Blade Server
        FAP Gateway Requirements
        Virtualization Requirements
            Optimum CPU and Memory Configurations
            Data Storage for Cisco RMS VMs
                Central VM
                Serving VM
                Upload VM
                PMG Database VM
    Device Configurations
        Access Point Configuration
        Supported Operating System Services
        Cisco RMS Port Configuration
        Cisco UCS Node Configuration
            Central Node Port Bindings
            Serving and Upload Node Port Bindings
            All-in-One Node Port Bindings
        Cisco ASR 5000 Gateway Configuration
        NTP Configuration
        Public Fully Qualified Domain Names
    RMS System Backup

Chapter 3: Installing VMware ESXi and vCenter for Cisco RMS
    Prerequisites
    Configuring Cisco UCS C240 M3 Server and RAID
    Installing and Configuring VMware ESXi 5.5.0
    Installing the VMware vCenter 5.5.0
    Configuring vCenter
    Configuring NTP on ESXi Hosts for RMS Servers
    Installing the OVF Tool
        Installing the OVF Tool for Red Hat Linux
        Installing the OVF Tool for Microsoft Windows
    Configuring SAN for Cisco RMS
        Creating a SAN LUN
        Installing FCoE Software Adapter Using VMware ESXi
        Adding Data Stores to Virtual Machines
            Adding Central VM Data Stores
                Adding the DATA Datastore
                Adding the TX_LOGS Datastore
                Adding the BACKUP Datastore
                Validating Central VM Datastore Addition
            Adding Serving VM Data Stores
                Adding the SYSTEM_SERVING Datastore
            Adding Upload VM Data Stores
                Adding the SYSTEM_UPLOAD Datastore
                Adding PM_RAW and PM_ARCHIVE Datastores
                Validating Upload VM Datastore Addition
        Migrating the Data Stores
            Initial Migration on One Disk

Chapter 4: RMS Installation Tasks
    RMS Installation Procedure
    Preparing the OVA Descriptor Files
    Validation of OVA Files
    Deploying the RMS Virtual Appliance
        All-in-One RMS Deployment: Example
        Distributed RMS Deployment: Example
    RMS Redundant Deployment
        Configuring Serving and Upload Nodes on Different Subnets
        Configuring Redundant Serving Nodes
            Setting Up Redundant Serving Nodes
            Configuring the PNR for Redundancy
            Configuring the Security Gateway on the ASR 5000 for Redundancy
            Configuring the Security Gateway on ASR 5000 for Multiple Subnet or Geo-Redundancy
            Configuring the HNB Gateway for Redundancy
            Configuring DNS for Redundancy
    RMS High Availability Deployment
    Changing Default Routing Interface on the Central Node
    Optimizing the Virtual Machines
        Upgrading the VM Hardware Version
        Upgrading the VM CPU and Memory Settings
        Upgrading the Upload VM Data Sizing
    RMS Installation Sanity Check
        Sanity Check for the BAC UI
        Sanity Check for the DCC UI
        Verifying Application Processes

Chapter 5: Installation Tasks Post-OVA Deployment
    HNB Gateway and DHCP Configuration
        Modifying the DHCP IP Pool Subnet
        HNB GW Configuration on Redundant Serving Node
    Installing RMS Certificates
        Auto-Generated CA-Signed RMS Certificates
        Self-Signed RMS Certificates
            Self-Signed RMS Certificates in Serving Node
            Self-Signed RMS Certificates in Upload Node
    Enabling Communication for VMs on Different Subnets
    Configuring Default Routes for Direct TLS Termination at the RMS
    Post-Installation Configuration of BAC Provisioning Properties
    PMG Database Installation and Configuration
        PMG Database Installation Prerequisites
        PMG Database Installation
            Schema Creation
            Map Catalog Creation
            Load MapInfo Data
            Grant Access to MapInfo Tables
        Configuring the Central Node
            Configuring the PMG Database on the Central Node
        Area Table Data Population
    Configuring New Groups and Pools
    Optional Features
        Default Reserved Mode Setting for Enterprise APs
        ./configure_func1.sh
    Configuring Linux Administrative Users
    NTP Servers Configuration
        Central Node Configuration
        Serving Node Configuration
        Upload Node Configuration
    Configuring the SNMP and SNMP Trap Servers
    Centralized Logging Server Configuration
        SYSLog Servers Configuration
            Central Node Configuration
            Serving Node Configuration
            Upload Node Configuration
    LDAP Configuration
    TACACS Configuration
    Configuring INSEE SAC

Chapter 6: Verifying RMS Deployment
    Verifying Network Connectivity
    Verifying Network Listeners
    Log Verification
        Server Log Verification
        Application Log Verification
    End-to-End Testing
    Updating VMware Repository

Chapter 7: RMS Upgrade Procedure
    Upgrade Prerequisites
    Upgrade on Central Node
    Upgrade on Serving Node
    Upgrade on Upload Node
    Post RMS Upgrade
    Merging of Files Manually
    Rollback to Versions RMS 3.0/3.1 and 4.0

Chapter 8: Troubleshooting
    Regeneration of Certificates
        Certificate Regeneration for DPE
        Certificate Regeneration for Upload Server
    Deployment Troubleshooting
        CAR/PAR Server Not Functioning
        Unable to Access BAC and DCC UI
        DCC UI Shows Blank Page After Login
        DHCP Server Not Functioning
        DPE Processes are Not Running
        Connection to Remote Object Unsuccessful
        VLAN Not Found
        Unable to Get Live Data in DCC UI
        Installation Warnings about Removed Parameters
        Upload Server is Not Up
        OVA Installation Failures
        Update Failures in Group Type, Site - DCC UI Throws an Error

Appendix A: OVA Descriptor File Properties
    RMS Network Architecture
    Virtual Host Network Parameters
    Virtual Host IP Address Parameters
    Virtual Machine Parameters
    HNB Gateway Parameters
    Auto-Configuration Server Parameters
    OSS Parameters
    Administrative User Parameters
    BAC Parameters
    Certificate Parameters
    Deployment Mode Parameters
    License Parameters
    Password Parameters
    Serving Node GUI Parameters
    DPE CLI Parameters
    Time Zone Parameter
    Third-Party SeGW Parameter

Appendix B: Examples of OVA Descriptor Files
    Example of Descriptor File for All-in-One Deployment
    Example Descriptor File for Distributed Central Node
    Example Descriptor File for Distributed Serving Node
    Example Descriptor File for Distributed Upload Node
    Example Descriptor File for Redundant Serving/Upload Node

Appendix C: System Backup
    Full System Backup
        Back Up System Using VM Snapshot
            Using VM Snapshot
        Back Up System Using vApp Cloning
    Application Data Backup
        Backup on the Central Node
        Backup on the Serving Node
        Backup on the Upload Node

Appendix D: RMS System Rollback
    Full System Restore
        Restore from VM Snapshot
        Restore from vApp Clone
    Application Data Restore
        Restore from Central Node
        Restore from Serving Node
        Restore from Upload Node
    End-to-End Testing

Appendix E: Glossary

Preface

This section describes the objectives, audience, organization, and conventions of the Cisco RAN Management System (RMS) Installation Guide.

• Objectives
• Audience
• Conventions
• Related Documentation
• Obtaining Documentation and Submitting a Service Request

Objectives

This guide provides an overview of the Cisco RAN Management System (RMS) solution and the pre-installation, installation, post-installation, and troubleshooting information for the Cisco RMS installation.

Audience

The primary audience for this guide includes network operations personnel and system administrators. This guide assumes that you are familiar with the following products and topics:

• Basic internetworking terminology and concepts

• Network topology and protocols

• Microsoft Windows 2000, Windows XP, Windows Vista, and Windows 7

• Linux administration

• VMware vSphere Standard Edition v5.1 or v5.5

Conventions

This document uses the following conventions:

bold font: Commands, keywords, and user-entered text appear in bold font.
italic font: Document titles, new or emphasized terms, and arguments for which you supply values are in italic font.
courier font: Terminal sessions and information the system displays appear in courier font.
bold courier font: Bold courier font indicates text that the user must enter.
< >: Nonprinting characters, such as passwords, are in angle brackets.
[ ]: Elements in square brackets are optional.
[x]: Default responses to system prompts are in square brackets.
string: A nonquoted set of characters. Do not use quotation marks around the string, or the string will include the quotation marks.
!, #: An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.

Related Documentation

For additional information about the Cisco RAN Management System, refer to the following documents:

• Cisco RAN Management System Administration Guide

• Cisco RAN Management System API Guide

• Cisco RAN Management System SNMP/MIB Guide

• Cisco RAN Management System Release Notes

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What's New in Cisco Product Documentation, at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html.

Subscribe to What's New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation as an RSS feed and delivers content directly to your desktop using a reader application. The RSS feeds are a free service.

Chapter 1: Installation Overview

This chapter provides a brief overview of the Cisco RAN Management System (RMS) and explains how to install, configure, upgrade, and troubleshoot an RMS installation.

The following sections provide an overview of the Cisco RAN Management System installation process:

• Cisco RAN Management System Overview
• Installation Flow
• Installation Image

Cisco RAN Management System Overview

The Cisco RAN Management System (RMS) is a standards-based provisioning and management system for HNBs (3G femtocell access points [FAPs]). It is designed to provide and support all the operations required to transmit high-quality voice and data from Service Provider (SP) mobility users through the SP mobility core. The RMS solution can be implemented through SP-friendly deployment modes that lower the operational costs of femtocell deployments by automating all key activation and management tasks.

The following RMS solution architecture figure illustrates the various servers and their internal and external interfaces for Cisco RMS.

Figure 1: RMS Solution Architecture

Cisco RMS Deployment Modes

The Cisco RMS solution can be deployed in one of two deployment modes:

All-in-One RMS

In the All-in-One RMS deployment mode, the Cisco RMS solution is provided on a single host. It supports up to 50,000 FAPs.

Figure 2: All-in-One RMS Node

In an All-in-One RMS node, the Serving Node comprises a VM combining the BAC DPE, PNR, and PAR components; the Central Node comprises a VM combining the DCC UI, PMG, and BAC RDU components; and the Upload Node comprises the Upload Server component.

To deploy the All-in-One node, it is mandatory to procure and install VMware, with one VMware vCenter per deployment. For more information, see Installing VMware ESXi and vCenter for Cisco RMS.

Distributed RMS

In the Distributed RMS deployment mode, the following nodes are deployed:

• Central RMS Node
• Serving RMS Node
• Upload RMS Node

In the Distributed deployment mode, up to 250,000 APs are supported.

Central RMS Node

On a Central RMS node, the Cisco RMS solution is provided on a separate node. It provides an active-active geographical redundancy option. The Central node can be paired with any number of Serving nodes.

Figure 3: Central RMS Node

In any Cisco RMS deployment, it is mandatory to have at least one Central node.

To deploy the Central node, it is mandatory to procure and install VMware, with one VMware vCenter per deployment. For more information, see Installing VMware ESXi and vCenter for Cisco RMS.

Serving RMS Node

On a Serving RMS node, the Cisco RMS solution is provided on a separate node or host. It supports up to 125,000 FAPs and provides geographical redundancy with the active-active pair option. The Serving node must be combined with a Central node.

Figure 4: Serving RMS Node

To deploy the Serving node, it is mandatory to procure and install VMware.

For Serving node deployment failover, additional Serving nodes can be configured with the same Central node. To know more about the redundancy deployment option, see RMS Redundant Deployment.

Note: The RMS node deployments are supported on UCS hardware and use virtual machines (VMs) for performance and security isolation. To know how to procure and install VMware on the UCS hardware node, see Installing VMware ESXi and vCenter for Cisco RMS.

Upload RMS Node

In the Upload RMS node, the Upload Server is provided on a separate node. The Upload RMS node must be combined with the Serving node.

Figure 5: Upload RMS Node

Installation Flow

The following table provides the general flow in which to complete the Cisco RAN Management System installation. The table is only a general guideline. Your installation sequence might vary, depending on your specific network requirements.

Before you install Cisco RAN Management System, you need to determine and plan the following:

Step 1 (Mandatory): Install Cisco RAN Management System for the first time.
Action: Go to Step 3.

Step 2 (Optional): Upgrade Cisco RAN Management System from an earlier release to the latest release.
Action: Go to Step 11.

Step 3 (Mandatory): Plan how the Cisco RAN Management System installation will fit into your existing network, determine the number of femtocell access points (FAPs) that your network should support, and finalize the RMS deployment based on the network size and APs needed.
Action: Ensure that you follow the prerequisites listed in Installation Prerequisites, then proceed to Step 4.

Step 4 (Mandatory): Procure and install the recommended hardware and software required for the RMS deployment mode.
Action: Ensure that all the hardware and software listed in Cisco RMS Hardware and Software Requirements are procured and connected, then proceed to Step 5.

Step 5 (Mandatory): Ensure that all virtualization requirements for your installation are met.
Action: Follow the recommended virtualization requirements listed in Virtualization Requirements, then proceed to Step 6.

Step 6 (Mandatory): Complete all device configurations.
Action: Complete the device configurations recommended in Device Configurations and proceed to Step 7.

Step 7 (Mandatory): Create the configuration file (deployment descriptor).
Action: Prepare and create the Open Virtualization Format (OVF) file as described in Preparing the OVA Descriptor Files.

Step 8 (Mandatory): Install Cisco RAN Management System.
Action: Complete the appropriate procedures in RMS Installation Tasks and proceed to Step 9.

Step 9 (Mandatory): Complete the post-installation activities.
Action: Complete the appropriate procedures in Installation Tasks Post-OVA Deployment and proceed to Step 10.

Step 10: Start using Cisco RAN Management System.
Action: See the Cisco RAN Management System Administration Guide.

Step 11 (Mandatory): Upgrade to the latest Cisco RAN Management System release.
Action: Complete the appropriate procedures in RMS Upgrade Procedure.

Step 12 (Optional): Access troubleshooting information for the Cisco RAN Management System installation.
Action: Go to Troubleshooting to troubleshoot RMS installation issues.

Installation Image

The Cisco RAN Management System is packaged in Virtual Machine (VM) images (tar.gz format) that are deployed on the hardware nodes. The supported deployments are:

• Small Scale: Single AP per site
• Large Scale: Distributed with multiple APs per site

For more information about the deployment modes, see Cisco RMS Deployment Modes.

To access the image files (OVA), log in to https://software.cisco.com and navigate to Support > Downloads to open the Download Software page. Then, navigate to Products/Wireless/Mobile Internet/Universal Small Cells/Universal Small Cell RAN Management System to open the page where you can download the required image files.

The available OVA files are listed in the Release Notes for Cisco RAN Management System for your specific release.

The RMS image contains the following major components:

• Provisioning and Management Gateway (PMG) database (DB)
• PMG
• Operational Tools
• Log Upload
• Device Command and Control (DCC) UI
• Broadband Access Center (BAC) Configuration
• BAC
• Prime Network Registrar (PNR)
• Prime Access Registrar (PAR)

For information about the checksum value of the OVA files and the version of major components, see the Release Notes for Cisco RAN Management System for your specific release.
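To guard against a corrupted download, the image can be verified before deployment. A minimal sketch, assuming a Linux host and that the release notes publish an MD5 checksum (use the matching tool, such as sha512sum, if a SHA checksum is published instead; the file name is a placeholder):

Example:
# md5sum <downloaded-ova-file>

Compare the printed value with the checksum listed in the release notes before deploying the OVA.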


Chapter 2: Installation Prerequisites

This chapter provides the network size, hardware and software, and device configuration requirements that must be met before installing the Cisco RAN Management System (RMS).

Note: Ensure that all the requirements in the following sections are addressed.

• Sample Network Sizes
• Hardware and Software Requirements
• Device Configurations
• RMS System Backup

Sample Network Sizes

While planning the network size, you must consider the following:

• Number of femtocell access points (FAPs or APs, used interchangeably in this guide) in your network
• Current network capacity and additional capacity to meet future needs

For more information about the recommended deployment modes, see Cisco RMS Deployment Modes.

Hardware and Software Requirements

These topics describe the FAP, RMS hardware and software, gateway, and virtualization requirements.

Note: Consult with your Cisco account representative for specific hardware and configuration details for your APs, RMS, and gateway units. Hardware requirements assume that Cisco RMS does not share the hardware with additional applications. (This is the recommended installation.)


Femtocell Access Point Requirement

Cisco RMS supports the FAPs listed in the following table:

Hardware | Band    | Power  | GPS | Residential/Enterprise | Access Mode
USC 3330 | 2 and 5 | 20 mW  | Yes | Residential            | Closed
USC 3331 | 1       | 20 mW  | No  | Residential            | Closed
USC 3331 | 2 and 5 | 20 mW  | No  | Residential            | Closed
USC 5330 | 1       | 100 mW | No  | Enterprise             | Open
USC 5330 | 2 and 5 | 100 mW | No  | Enterprise             | Open
USC 7330 | 1       | 250 mW | No  | Enterprise             | Open
USC 7330 | 2 and 5 | 250 mW | Yes | Enterprise             | Open
USC 9330 | 1       | 1 W    | No  | Enterprise             | Open
USC 9330 | 2 and 5 | 1 W    | Yes | Enterprise             | Open

For information about the AP configuration, see Access Point Configuration.

Cisco RMS Hardware and Software Requirements

Cisco UCS x86 hardware is used for Cisco RAN Management System hardware nodes. The following server models are supported and recommended for the RMS solution:

Target RMS Nodes | Supported UCS Hardware
All RMS nodes    | Cisco UCS C240 M3 Rack Server; Cisco UCS 5108 Chassis Based Blade Server

Cisco UCS C240 M3 Server

The following hardware configuration is used for all RMS nodes:

• Cisco Unified Computing System (UCS) C240 M3 Rack Server
• Rack-mount
• 2 x 2.3 GHz 6-core x86 CPUs
• 128 GB RAM
• 12 disks: 4 x 15,000 RPM 300 GB, 8 x 10,000 RPM 300 GB
• RAID array with battery backup and 1 GB cache
• 4 + 1 built-in Ethernet ports
• 2 rack units (RU)
• Redundant AC power
• VMware vSphere Standard Edition v5.1 or v5.5
• VMware vCenter Standard Edition v5.1 or v5.5

Cisco UCS 5108 Chassis Based Blade Server

The following hardware configuration is used for all RMS nodes:

• Cisco UCS 5108 Chassis
• Rack-mount
• 6 rack units (RU)
• Redundant AC power
• VMware vSphere Standard Edition v5.1 or v5.5
• VMware vCenter Standard Edition v5.1 or v5.5
• SAN storage with sufficient disks (see Data Storage for Cisco RMS VMs)

Note: The Cisco UCS 5108 Chassis can house up to eight Cisco UCS B200 M3 Blade Servers.

Cisco UCS B200 M3 Blade Server

• Cisco UCS B200 M3 Blade Server
• Rack-mount
• 2 CPUs using 32 GB DIMMs
• 128 GB RAM

Note: Ensure that the selected UCS server is physically connected and configured with the appropriate software before proceeding with the Cisco RMS installation.

To install the UCS servers, see the following guides:

• Cisco UCS C240 M3 Server Installation and Service Guide
• Cisco UCS 5108 Server Chassis Installation Guide
• Cisco UCS B200 Blade Server Installation and Service Note

Note: The Cisco UCS servers must be pre-configured with standard user account privileges.

FAP Gateway Requirements

The Cisco ASR 5000 Small Cell Gateway serves as the HNB Gateway (HNB-GW) and Security Gateway (SeGW) for the FAP in the Cisco RAN Management System solution.

It is recommended that the hardware node with the Serving VM be co-located with the Cisco ASR 5000 Gateway. The Cisco ASR 5000 Gateway utilizes the Serving VM for DHCP and AAA services. This gateway provides scale that can exceed the 250,000 APs that can be handled by a single Serving VM (or redundant pair).

Ensure that the Cisco ASR 5000 Gateway is able to communicate with the Cisco UCS server (on which RMS will be installed) before proceeding with the Cisco RMS installation; a quick check is sketched below.

To install the Cisco ASR 5000 Small Cell Gateway, see the Cisco ASR 5000 Installation Guide.
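A reachability check between the gateway and the RMS host catches cabling or routing problems early. A minimal sketch from any host on the same network (the StarOS exec mode on the ASR 5000 also provides a ping command that can be used the same way; the address is a placeholder for your UCS server IP):

Example:
# ping <rms-ucs-server-ip>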

Virtualization Requirements

The Cisco RAN Management System solution, which is packaged in Virtual Machine (VM) images (.ova file), must be deployed on the Cisco UCS hardware nodes defined in Cisco RMS Hardware and Software Requirements.

The virtualization framework enables the resources of a computer to be divided into multiple execution environments by applying one or more concepts or technologies, such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, and quality of service.

The benefits of using VMs are load isolation, security isolation, and simplified administration:

• Load isolation ensures that a single service does not take over all the hardware resources and compromise other services.
• Security isolation enables flows between VMs to be routed via a firewall, if desired.
• Administration is simplified by centralizing the VM deployment and by monitoring and allocating the hardware resources among the VMs.

Before you deploy the Cisco RAN Management System .ova file, ensure that you install:

• VMware vSphere Standard Edition v5.1 or v5.5
• VMware vCenter Standard Edition v5.1 or v5.5

For the procedure to install VMware, see Installing VMware ESXi and vCenter for Cisco RMS.


Optimum CPU and Memory Configurations

The following are the optimal CPU and memory values for each VM of the All-In-One setup (supporting up to 50,000 devices) and the Distributed RMS setup (supporting up to 250,000 devices):

All-In-One Setup
    Central Node: 8 vCPU, 16 GB memory
    Serving Node: 16 vCPU, 64 GB memory
    Upload Node: 8 vCPU, 16 GB memory

Distributed Setup
    Central Node: 16 vCPU, 64 GB memory
    Serving Node: 16 vCPU, 64 GB memory
    Upload Node: 16 vCPU, 64 GB memory

Data Storage for Cisco RMS VMs

Before installing the VMware, consider the data storage or disk sizing for each of the Cisco RMS VMs:

• Central VM
• Serving VM
• Upload VM
• PMG Database VM

Central VM

The disk sizing of the Central VM is based on the following calculation logic and SAN disk space for each RAID set:

DATA (RAID set #1, minimum size 200 GB)
Purpose: Database.
Calculation logic: In lab tests, the database file size is 1 GB for 10,000 devices and 3,000 groups; static neighbors, if fully populated for each AP, require an additional database size of around 1.4 GB per 10,000 devices. Considering future expansion plans for 2 million devices and 30% for fragmentation, around 73 GB of disk space will be required; 200 GB is the recommended value.

TXN_LOG (RAID set #2, minimum size 200 GB)
Purpose: Database transaction logs.
Calculation logic: 25 MB is seen with residential deployments, but with Metrocell, transaction logs will be very high because of Q-SON. This does not depend on AP deployment population size. 200 GB is recommended.

SYSTEM (RAID set #3, minimum size 200 GB)
Purpose: OS and application image and application logs.
Calculation logic: Linux and the applications need around 16 GB and application logs need 50 GB; 200 GB is the recommended value, considering Ops-tools-generated logs and reports. It is independent of AP deployment size.

BACKUP (RAID set #4, minimum size 250 GB)
Purpose: Database backups.
Calculation logic: Maintain a minimum of four backups for upgrade considerations. 56 GB is the size of the database files for 2 million devices, so the minimum required is approximately 250 GB. For 10,000 devices, approximately 5 GB is required to maintain four backups. If more backups are needed, calculate the disk size accordingly.

Serving VM

The disk sizing of the Serving VM is based on the following calculation logic and SAN disk space for each RAID set:

SYSTEM (RAID set #1, minimum size 300 GB)
Purpose: OS and application image and application logs.
Calculation logic: Linux and the applications need approximately 16 GB; logs need 10 GB; backups, swap space, and additional copies for upgrades need 200 GB; 50 GB is for PAR and 150 GB is for PNR. It is independent of AP deployment size.

Upload VM

The disk sizing of the Upload VM is based on the following factors:

1. Approximate size of the performance monitoring (PM) statistics file in each log upload: 100 KB for an Enterprise FAP and 7.5 MB for a Residential FAP
2. Number of FAPs per ULS: 250,000 (50,000 Enterprise + 200,000 Residential)
3. Frequency of PM uploads: once every 15 minutes (4 x 24 = 96 per day) for Enterprise FAPs; once a day for Residential FAPs

The following disk sizing of the Upload VM is based on the calculation logic and SAN disk space for each RAID set:

PM_RAW (RAID set #1, minimum size 350 GB)
Purpose: Storing RAW files.
Calculation logic: The calculation is for 250,000 APs with the following assumptions:

• For Enterprise 3G FAP PM, the size of the uploaded file at a 15-minute sampling frequency and 15-minute upload interval is 100 KB.
• For Residential 3G FAP PM, the size of the uploaded file at a 1-hour sampling frequency and 1-day upload interval is 7.5 MB.
• The ULS has at most the last 2 hours of files in raw format.

For a single-mode AP, the disk space required for PM files is
(50000*4*2*100)/(1024*1024) + (200000*2*7.5)/(1024*24) = 39 + 122 = 161 GB.
Additional space for storage of other files, such as on-demand files, is 200 GB.

PM_ARCHIVE (RAID set #2, minimum size 1000 GB)
Purpose: Storing ARCHIVED files.
Calculation logic: Considering that compression brings files down to 15% of their total size and that the ULS starts purging after 60% of the disk is filled, the disk space required by the compressed files uploaded in 1 hour is
((50000*4*2*100)/(1024*1024) + (200000*2*7.5)/(1024*24))*0.15 = 25 GB.
To store 24 hours of data, the space required is 25*24 = 600 GB, which is 60% of the total disk space. Therefore, the total disk space for PM files is 1000 GB.

SYSTEM (RAID set #3, minimum size 200 GB)
Purpose: OS and application image and application logs.
Calculation logic: Linux and the applications need around 16 GB and logs need 10 GB; backups, swap space, and additional copies for upgrades bring the total to 200 GB. It is independent of AP deployment size.
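To adapt the PM_RAW sizing to a different AP population, the same arithmetic can be re-run in a shell. A minimal sketch using bc with the figures from the PM_RAW calculation above (the guide rounds the two terms to 39 + 122 = 161 GB):

Example:
# echo "scale=2; (50000*4*2*100)/(1024*1024) + (200000*2*7.5)/(1024*24)" | bc
160.21

Replace the AP counts, upload frequencies, and file sizes with your own deployment figures to estimate the raw PM storage requirement.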

PMG Database VM

SYSTEM (RAID set #1, minimum size 50 GB)
Purpose: OS and application image and application logs.
Calculation logic: Linux and the Oracle applications need around 25 GB. Considering backups and swap space, 50 GB is recommended. It is independent of AP deployment size.

Device Configurations

Before proceeding with the Cisco RAN Management System installation, it is mandatory to complete the following device configurations to enable the various components to communicate with each other and with the Cisco RMS system.

Access Point Configuration

It is mandatory for all small cell access points to have the minimal configuration needed to contact Cisco RMS within the service provider environment. This enables Cisco RMS to automatically install or upgrade the AP firmware and configure the AP as required for service.

USC 3000, 5000, and 7000 series access points initially connect to the public Ubiquisys cloud service, which configures the enablement data on the AP and then directs it to the service provider Hosted and Managed Services (HMS).

The minimum initial AP configuration includes the following:

• 1 to 3 Network Time Protocol (NTP) server IP addresses or fully qualified domain names (FQDNs). This must be a factory default because the AP has to obtain the time in order to perform certificate expiration verification during authentication with servers. HMS reconfigures the appropriate list of NTP servers on bootstrap.
• A unique AP private key and certificate signed by an appropriate Certificate Authority (CA).
• A Trust Store configured with the public certificate chains of the CA which signs server certificates.

After each factory recovery, the AP contacts the Ubiquisys cloud service and downloads the following four minimum parameters:

1. RMS public key (certificates)
2. RMS ACS URL
3. Public NTP servers
4. AP software

With these four parameters, the AP validates the RMS certificate, loads the AP software from the cloud server, and talks to RMS.

Supported Operating System Services

Only the following UNIX services are supported on Cisco RMS. The installer disables all other services.

RMS Central node: SSH, HTTPS, NTP, SNMP, SAN, RSYSLOG
RMS Serving node: SSH, HTTPS, NTP, SNMP, SAN, RSYSLOG
RMS Upload Server node: SSH, HTTPS, NTP, SNMP, SAN, RSYSLOG
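To confirm that only the supported services are enabled on a node after installation, a quick check from the Linux shell helps. A minimal sketch, assuming the Red Hat style init tools present on the RMS VMs:

Example:
# chkconfig --list | grep ':on'
# service sshd status

Any enabled service outside the list above indicates configuration drift that should be investigated.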

Cisco RMS Port Configuration

The following tables list the different ports used on the Cisco RMS nodes.

All servers

Port | Source | Protocol | Usage
22 | Administrator | TCP (SSH) | Remote login (SSH)
161 | NMS | UDP (SNMP) | SNMP agent used to support get/set
162 | NMS | UDP (SNMP) | SNMP agent used to support traps
123 | NTP Server | UDP | NTP for time synchronization
514 | Syslog | UDP | Syslog, used for system logging
5488 | Administrator | TCP | VMware VAMI (Virtual Appliance Management Infrastructure) services
5489 | Administrator | TCP | VMware VAMI (Virtual Appliance Management Infrastructure) services

RMS Central node

Port | Source | Protocol | Usage
8083 | OSS | TCP (HTTP) | OSS<->PMG communication
443 | UI | TCP (HTTPS) | DCC UI
49187 | DPE | TCP | Internal RMS communication; requests coming from the DPE
8090 | Administrator | TCP (HTTP) | DHCP administration
5435 | Administrator | TCP | Postgres database port
1244 | RDU/PNR | TCP | DHCP internal communication
8009 | Administrator | TCP | Tomcat AJP connector port
9006 | Administrator | TCP | BAC Tomcat server port
8015 | Administrator | TCP | PNR Tomcat server port
3799 | ASR5K (AAA) | UDP (RADIUS) | RADIUS Change-of-Authorization and Disconnect flows from the PMG to the ASR5K (default port)
8001 | RDU | UDP (SNMP) | SNMP internal
49887 | RDU | TCP | Listening port (for watchdog) for the RDU SNMP agent
4678 | PMG | TCP | Default listening port for the alarm handler to listen for PMG events
Random | RDU/PNR/Postgres/PMG | TCP/UDP | Random ports used by internal processes: java, postmaster, ccmsrv, cnrservagt, ruby, RPCBind, and NFS (Network File System)

RMS Serving node

Port | Source | Protocol | Usage
443 | HNB | TCP (HTTPS) | TR-069 management
7550 | HNB | TCP (HTTPS) | Firmware download
49186 | RDU | TCP | RDU<->DPE communication
2323 | DPE | TCP | DPE CLI
8001 | DPE | UDP (SNMP) | SNMP internal
7551 | DPE/PAR | TCP | DPE authorization service with PAR communication
Random | DPE/PNR/PAR | TCP/UDP | Random ports used by internal processes: java, arservagt, armcdsvr, cnrservagt, dhcp, cnrsnmp, ccmsrv, and dpe

RMS Serving node (PNR)

Port | Source | Protocol | Usage
61610 | HNB | UDP (DHCP) | IP address assignment
9005 | Administrator | TCP | Tomcat server port
9443 | Administrator | TCP (HTTPS) | PNR GUI port
1234 | RDU/PNR | TCP | DHCP internal communication

RMS Serving node (PAR)

Port | Source | Protocol | Usage
1812 | ASR5K (AAA) | UDP (RADIUS) | Authentication and authorization of the HNB during Iuh HNB register
1234 | RDU | TCP | DHCP internal communication
647 | RMS Serving node (PAR) | TCP | DHCP failover communication; only used when redundant RMS Serving instances are used
8005 | Administrator | TCP | Tomcat server port
8009 | Administrator | TCP | Tomcat AJP connector port
8443 | Administrator | TCP (HTTPS) | PAR GUI port

RMS Upload Server node

Port | Source | Protocol | Usage
443 | HNB | TCP (HTTPS) | PM and PED file upload
8082 | RDU | TCP | Availability check
8082 | Upload Server | TCP | North-bound traffic
Random | Internal | TCP/UDP | Random ports used by internal processes: java, ruby

Cisco UCS Node Configuration

Each Cisco UCS hardware node has a minimum of 4 + 1 Ethernet ports that connect different services to different networks as needed. It is recommended that the following bindings of IP addresses to Ethernet ports be followed:

Central Node Port Bindings

UCS Management Port: Cisco Integrated Management Controller (CIMC) IP address. (Note: CIMC is used to administer Cisco UCS hardware.)
Port 1: Hypervisor IP address (hypervisor access is used to administer VMs via vCenter); vCenter IP address
Port 2: Central VM southbound (SB) IP address
Port 3: Central VM northbound (NB) IP address

Serving and Upload Node Port Bindings

UCS Management Port: CIMC IP address
Port 1: Hypervisor IP address
Port 2: Serving VM northbound (NB) IP address; Upload VM NB IP address
Port 3: Serving VM southbound (SB) IP address; Upload VM SB IP address

All-in-One Node Port Bindings

UCS Management Port: CIMC IP address
Port 1: Hypervisor IP address; vCenter IP address
Port 2: Central VM SB IP address; Serving VM NB IP address; Upload VM NB IP address
Port 3: Serving VM SB IP address; Upload VM SB IP address
Port 4: Central VM NB IP address

Cisco ASR 5000 Gateway Configuration

The Cisco ASR 5000 Gateway utilizes the Serving VM for DHCP and AAA services. The blade-based architecture of the gateway provides scale that can exceed the 250,000 APs that can be handled by a single Serving VM (or redundant pair).

To scale beyond 250,000 APs, the ASR 5000 uses several instances of the SeGW and HNB-GW within the same Cisco ASR 5000 chassis to direct DHCP and AAA traffic to the correct Serving VM.

• SeGW instances: A separate SeGW instance must be created in the Cisco ASR 5000 for every 250,000 APs, or for every provisioning group (PG) if smaller PGs are used. Each SeGW instance must:
  ◦ Have a separate public IP address for APs to connect to
  ◦ Configure DHCP requests to be sent to a different set of Serving VMs

The SeGW can be co-located with the HNB-GW on the same physical ASR 5000 chassis; alternatively, the SeGW can be created on an external ASR 9000 or Cisco 7609 chassis.

• HNB-GW instances: A separate HNB-GW instance must be created in the Cisco ASR 5000 for every 250,000 APs, or for every PG if smaller PGs are used. Each HNB-GW instance must:
  ◦ Support different private IP addresses for APs to connect to via the IPSec tunnel
  ◦ Associate with one SeGW context
  ◦ Configure AAA traffic to be sent to a different set of Serving VMs
  ◦ Configure AAA traffic to be received from the Central VM (PMG) on a different port or IP

To configure the Cisco ASR 5000 Small Cell Gateway, see the Cisco ASR 5000 System Administration Guide.

NTP Configuration

Network Time Protocol (NTP) synchronization must be configured on all devices in the network as well as on the Cisco UCS servers. The NTP server can be specified during server installation. Failure to maintain time synchronization across your network can result in anomalous functioning and incorrect results in the Cisco RAN Management System. A quick way to verify synchronization on a Linux node is shown below.
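A minimal synchronization check, assuming the standard ntpd client running on the RMS VMs:

Example:
# ntpq -p

An asterisk (*) in the first column marks the peer the node is currently synchronized to; no asterisk or a large offset indicates a synchronization problem.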

Public Fully Qualified Domain Names

It is recommended to have fully qualified domain names (FQDNs) for all public and private IP addresses because this can simplify IP renumbering. The DNS used by the operator must be configured to resolve these FQDNs to the IP addresses of the RMS nodes; a resolution check is sketched below.

If FQDNs are used to configure target servers on the AP, then the server certificates must contain the FQDN to perform the appropriate security handshake for TLS.
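A minimal resolution check from any host that uses the operator DNS (the FQDN below is a hypothetical example, not a name defined by this guide):

Example:
# nslookup rms-serving.sp.example.com

The answer should return the RMS node IP address configured in your deployment.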

RMS System Backup

It is recommended to perform a backup of the system before proceeding with the RMS installation. For more details, see System Backup.


Chapter 3: Installing VMware ESXi and vCenter for Cisco RMS

This chapter explains how to install VMware ESXi and vCenter for the Cisco RAN Management System. The following topics are covered in this chapter:

• Prerequisites
• Configuring Cisco UCS C240 M3 Server and RAID
• Installing and Configuring VMware ESXi 5.5.0
• Configuring vCenter
• Configuring NTP on ESXi Hosts for RMS Servers
• Installing the OVF Tool
• Configuring SAN for Cisco RMS

Prerequisites

• Rack-mount the Cisco UCS server and ensure that it is cabled and connected to the network.
• Download the VMware ESXi 5.5.0 ISO to the local system.
  ◦ File name: VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
• Download the VMware vCenter 5.5.0 OVA appliance to the local system.
  ◦ File name: VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.OVA
• Download the OVF Tool image to the local system.
  ◦ File name: VMware-ovftool-3.0.1-801290-lin.x86_64.bundle (for Red Hat Linux)
  ◦ File name: VMware-ovftool-3.5.1-1747221-win.x86_64.msi (for Microsoft Windows 64-bit)

Note: The OVF Tool image name may change based on the OS version.

• Three sets of IP addresses

Note: You can download the above-mentioned packages from the VMware website using a valid account.

Configuring Cisco UCS C240 M3 Server and RAID

SUMMARY STEPS

1. Assign a Cisco Integrated Management Controller (CIMC) management IP address by physically accessing the Cisco UCS server.
2. Enter the CIMC IP in the browser to access the login page.
3. Enter the default login, Admin, and the password.
4. Select the Storage tab and then click the Create Virtual Drive from Unused Physical Drives option to open the dialog box. In the dialog box, four physical drives are shown as available. Configure a single RAID 5.
5. Choose the RAID level from the drop-down list, for example, 5.
6. Select the physical drive from the Physical Drives pane, for example, 1.
7. Click Create Virtual Drive to create the virtual drive.
8. In the Virtual Drive Info tab, click Initialize and Set as Boot Drive. This completes the Cisco UCS C240 server and RAID configuration.

DETAILED STEPS

Step 1: Assign a Cisco Integrated Management Controller (CIMC) management IP address by physically accessing the Cisco UCS server:
a) Boot up the server and press F8 to stop the booting.
b) Set the IP address and other configurations as shown in the following figure.
c) Press F10 to save the configurations and press Esc to exit and reboot the server.
The CIMC console can now be accessed via any browser from a system within the same network.

Step 2: Enter the CIMC IP in the browser to access the login page.

Step 3: Enter the default login, Admin, and the password.

Step 4: Select the Storage tab and then click the Create Virtual Drive from Unused Physical Drives option to open the dialog box. In the dialog box, four physical drives are shown as available. Configure a single RAID 5.
Note: If more disks are available, it is recommended that a RAID 1 drive be configured with two disks for the VMware ESXi OS and the rest of the disks as a RAID 5 drive for the VM datastore.

Step 5: Choose the RAID level from the drop-down list, for example, 5.

Step 6: Select the physical drive from the Physical Drives pane, for example, 1.

Step 7: Click Create Virtual Drive to create the virtual drive.

Step 8: In the Virtual Drive Info tab, click Initialize and Set as Boot Drive. This completes the Cisco UCS C240 server and RAID configuration.

Installing and Configuring VMware ESXi 5.5.0

SUMMARY STEPS

1. Log in to CIMC.
2. Select the Admin and NTP Settings tabs.
3. Set the available NTP servers and click Save.
4. Click the Server tab and click Launch KVM Console from Actions to launch the KVM console.
5. In the KVM console, click the Virtual Media tab and load the downloaded VMware ESXi 5.5.0 ISO image.
6. Click the KVM tab and reboot the server. Press F6 to open the Boot menu.
7. In the Boot menu, select the appropriate image device.
8. Select the ESXi image in the Boot menu to load it.
9. Click Continue to select the operation to perform.
10. Select the available storage.
11. Set the root credential for the ESXi OS and press F11 to proceed with the installation.
12. Reboot the system after installation and wait for the OS to boot completely.
13. Set the ESXi OS IP: press F2 to customize and select Configure Management Network.
14. Select the IP configuration and set the IP details.
15. Press Esc twice and Y to save the settings. You should now be able to ping the IP.
16. Download the vSphere client from http://<esxi-host-ip> and install it on top of the Windows OS. The installed ESXi host can be accessed via the vSphere client.

DETAILED STEPS

Step 1: Log in to CIMC.

Step 2: Select the Admin and NTP Settings tabs.

Step 3: Set the available NTP servers and click Save.
Note: If no NTP servers are available, this step can be skipped. However, these settings help synchronize the VMs with NTP.

Step 4: Click the Server tab and click Launch KVM Console from Actions to launch the KVM console.

Step 5: In the KVM console, click the Virtual Media tab and load the downloaded VMware ESXi 5.5.0 ISO image.

Step 6: Click the KVM tab and reboot the server. Press F6 to open the Boot menu.

Step 7: In the Boot menu, select the appropriate image device.

Step 8: Select the ESXi image in the Boot menu to load it.

Step 9: Click Continue to select the operation to perform.

Step 10: Select the available storage.

Step 11: Set the root credential for the ESXi OS and press F11 to proceed with the installation.

Step 12: Reboot the system after installation and wait for the OS to boot completely.

Step 13: Set the ESXi OS IP: press F2 to customize and select Configure Management Network.
Note: Set the VLAN ID if any underlying VLAN is configured on the router.

Step 14: Select the IP configuration and set the IP details.

Step 15: Press Esc twice and Y to save the settings. You should now be able to ping the IP.
Note: If required, the DNS server and host name can be set in the same window.

Step 16: Download the vSphere client from http://<esxi-host-ip> and install it on top of the Windows OS. The installed ESXi host can be accessed via the vSphere client.

This completes the VMware ESXi 5.5.0 installation and configuration.

Installing the VMware vCenter 5.5.0

SUMMARY STEPS

1. Log in to the VMware ESXi host via the vSphere client.
2. Select the Configuration tab and select Networking.
3. Select Properties, then select VM Network in the Properties dialog box and edit it.
4. Set the appropriate VLAN ID and click Save.
5. Go to File > Deploy OVF Template and provide the path of the downloaded vCenter 5.5.0 OVA.
6. Provide a vCenter name. The deployment settings summary is displayed in the next window.
7. Start the OVA deployment.
8. Power on the VM and open the console after successful OVA deployment.
9. Log in with the default credentials root/vmware and set the IP address, gateway, DNS name, and host name.
10. Access the vCenter IP https://<vcenter-ip>:5480 from the browser.
11. Log in with root/vmware. After login, accept the license agreement.
12. Select Configure with Default Settings, click Next, and then Start.
13. Access the vCenter via the vSphere client.

DETAILED STEPS

Step 1: Log in to the VMware ESXi host via the vSphere client.
Note: Skip steps 2 to 4 if no underlying VLAN is available.

Step 2: Select the Configuration tab and select Networking.

Step 3: Select Properties, then select VM Network in the Properties dialog box and edit it.

Step 4: Set the appropriate VLAN ID and click Save.

Step 5: Go to File > Deploy OVF Template and provide the path of the downloaded vCenter 5.5.0 OVA.

Step 6: Provide a vCenter name. The deployment settings summary is displayed in the next window.

Step 7: Start the OVA deployment.

Step 8: Power on the VM and open the console after successful OVA deployment.

Step 9: Log in with the default credentials root/vmware and set the IP address, gateway, DNS name, and host name.

Step 10: Access the vCenter IP https://<vcenter-ip>:5480 from the browser.

Step 11: Log in with root/vmware. After login, accept the license agreement.

Step 12: Select Configure with Default Settings, click Next, and then Start.
Note: Use the embedded database to store the vCenter inventory; it can handle up to ten hosts and fifty VMs. Usage of an external database such as Oracle is out of scope. It takes around 10 to 15 minutes to configure and mount the database. On completion, vCenter displays the summary.

Step 13: Access the vCenter via the vSphere client.

This completes the VMware vCenter 5.5.0 installation.

Configuring vCenter

SUMMARY STEPS

1. Log in to the vSphere client.
2. Rename the top-level directory and create a datacenter.
3. Click Add Host and add the same ESXi host to the vCenter inventory list.
4. Enter the host IP address and credentials (the same credentials set during the ESXi OS installation) in the Connection Settings window.
5. Add the ESXi license key, if any, in the Assign License window.
6. Click Next. The configuration summary window is displayed.
7. Click Finish. The ESXi host is now added to the vCenter inventory. You can also find the datastore and port group information in the summary window.
8. To add an ESXi host if another VLAN is available in your network, follow the sub-steps in the detailed procedure below.

DETAILED STEPS

Step 1: Log in to the vSphere client.

Step 2: Rename the top-level directory and create a datacenter.

Step 3: Click Add Host and add the same ESXi host to the vCenter inventory list.

Step 4: Enter the host IP address and credentials (the same credentials set during the ESXi OS installation) in the Connection Settings window.

Step 5: Add the ESXi license key, if any, in the Assign License window.

Step 6: Click Next. The configuration summary window is displayed.

Step 7: Click Finish. The ESXi host is now added to the vCenter inventory. You can also find the datastore and port group information in the summary window.

Step 8: To add an ESXi host if another VLAN is available in your network, follow these steps:
a) Select the ESXi host. Go to the Configuration tab and select Networking.
b) Select Properties and then click Add in the Properties window.
c) Select Virtual Machine in the Connection Type window.
d) Provide the VLAN ID.
e) Click Next and then Finish. The second port group will be available on the ESXi standard virtual switch.
Note: The network names (VM Network and VM Network 2) can be renamed and used in the OVF descriptor file.

This completes the vCenter configuration for the Cisco RMS installation.

Configuring NTP on ESXi Hosts for RMS Servers

Follow this procedure to configure the NTP server to communicate with all the connected hosts.

RAN Management System Installation Guide, Release 4.1

33 May 25, 2015 HF

Installing VMware ESXi and vCenter for Cisco RMS

Installing the OVF Tool

Before You Begin

Before configuring the ESXi to an external NTP server, ensure that the ESXi hosts can reach the required NTP server.

Step 1 Start the vSphere client.
Step 2 Go to Inventory > Hosts and Clusters and select the host.
Step 3 Select the Configuration tab.
Step 4 In the Software section of the Configuration tab, select Time Configuration to view the time configuration details. If the NTP Client shows "stopped" status, enable the NTP client by following these steps:
a) Click the Properties link (at the top right-hand corner) in the Configuration tab to open the Time Configuration window.

b) Check the NTP Client Enabled checkbox.

c) Click Options to open the NTP Daemon (ntpd) Options window.

d) Click Add to add the NTP server IP address in the Add NTP Server dialog box.

e) Click OK .

f) In the NTP Daemon (ntpd) Options window, check the Restart NTP service to apply changes checkbox.

g) Click OK to apply the changes.

h) Verify that the NTP Client status now is "running".
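Optionally, if SSH or the ESXi shell is enabled on the host, you can cross-check the NTP client from the host command line. This is a suggested verification only, assuming the standard ESXi 5.x service script and the ntpq utility bundled with ESXi:

# Check that the NTP daemon is running on the ESXi host
/etc/init.d/ntpd status
# Query the configured NTP servers; a nonzero "reach" value indicates responses
ntpq -p localhost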

Installing the OVF Tool

The OVF Tool application is used to deploy virtual appliances on vCenter using CLIs. You can install the OVF Tool for Red Hat Linux and Microsoft Windows as explained in the following procedures:

Installing the OVF Tool for Red Hat Linux, on page 34

Installing the OVF Tool for Microsoft Windows, on page 35

Installing the OVF Tool for Red Hat Linux

This procedure installs the OVF Tool for Red Hat Linux on the vCenter VM.

Step 1 Transfer the downloaded VMware-ovftool-3.0.1-801290-lin.x86_64.bundle to the vCenter VM via scp/ftp tools.
Note The OVF Tool image name may change based on the OS version.
Step 2 Check the permission of the file as shown below.
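For illustration only (the bundle name and output vary with the OS version and your download), the permission check might look like this; chmod is needed only if the transferred file is not already executable:

ls -l VMware-ovftool-3.0.1-801290-lin.x86_64.bundle
chmod +x VMware-ovftool-3.0.1-801290-lin.x86_64.bundle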


Step 3 Execute the bundle file and follow the on-screen instructions to complete the OVF Tool installation.

After the OVF Tool installation completes, you can use the following command to deploy an OVA.

Example:

# ovftool <location-of-ova-file> vi://root:<password>@<vcenter-ip>/blr-datacenter/host/<esxihost-ip>

Here, root is the vCenter user ID; substitute the password used to log in to the vCenter IP.
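For example, with hypothetical values substituted (the OVA path, vCenter IP, and ESXi host IP below are placeholders, not values from this guide; vmware is the default root password set during the vCenter installation):

# ovftool /tmp/RMS-Central-Node-4.1.0-1M.ova vi://root:[email protected]/blr-datacenter/host/10.105.233.81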

Installing the OVF Tool for Microsoft Windows

This procedure installs the OVF Tool for Microsoft Windows (64 bit) on the vCenter VM.

Before You Begin

Step 1 Double-click the Windows 64-bit VMware-ovftool-3.5.1-1747221-win.x86_64.msi on your local system to start the installer.
Note The OVF Tool image name may change based on the OS version.
Step 2 In the Welcome screen of the installer, click Next.
Step 3 In the License Agreement screen, read the license agreement, select I agree, and click Next.
Step 4 Accept the path suggested for the OVF Tool installation, or change to a path of your choice, and click Next.
Step 5 When you have finished choosing your installation options, click Install.
Step 6 When the installation is complete, click Next.
Step 7 Deselect the Show the readme file option if you do not want to view the readme file, and click Finish to exit.
Step 8 After installing the OVF Tool on Windows, run the OVF Tool from the DOS prompt. The OVF Tool folder must be in your PATH environment variable to run the OVF Tool from the command line. When running the utility, specify the target inventory path in the form <datacenter name>/host/<resource pool path>/<vm or vApp name>.


Configuring SAN for Cisco RMS

This section covers the procedure of adding SAN LUN discovery and data stores for RMS hosts on VMware ESXi 5.5.0. It also describes the procedure to associate desired data stores with VMs.

Creating a SAN LUN, on page 36

Installing FCoE Software Adapter Using VMware ESXi, on page 36

Adding Data Stores to Virtual Machines, on page 37

Migrating the Data Stores, on page 53

Creating a SAN LUN

In the following procedure, Oracle ZFS storage ZS3-2 is used as a reference storage. The actual procedure for creation of logical unit number (LUN) may vary depending on the storage used.

Step 1 Log in to the storage using the Oracle ZFS Storage ZS3-2 GUI.
Step 2 Click Shares.
Step 3 Click +LUNs to open the Create LUN window.
Step 4 Provide the Name, Volume size, and Volume block size. Select the default Target group and Initiator group(s), and click Apply. The new LUN is displayed in the LUN list.
Step 5 Follow Steps 1 to 4 to create another LUN.

What to Do Next

To install the FCoE Software Adapter, see Installing FCoE Software Adapter Using VMware ESXi, on page 36.

Installing FCoE Software Adapter Using VMware ESXi

Before You Begin

• SAN LUNs should be created based on the SAN requirement (see Creating a SAN LUN, on page 36) and connected via Fibre Channel over Ethernet (FCoE) to the UCS chassis and hosts with multipaths.

• The LUN is expected to be available on SAN storage as described in Data Storage for Cisco RMS VMs, on page 13. The LUN size can be different based on the Cisco RMS requirements for the deployment.


• The physical HBA cards should be installed and configured, the SAN attached to the server, and the LUN shared from the storage end.

Step 1 Log in to the VMware ESXi host via the vSphere client.
Step 2 Click the Configuration tab. In the Hardware area, click Storage Adapters to check if the FCoE software adapter is installed. In the Configuration tab, the installed HBA cards (vmhba1, vmhba2) are visible because there are two physical HBA cards present on the ESXi host. If you do not see the installed HBA cards, refresh the screen to view them.
Step 3 Click Rescan All, then select the HBA cards one by one to view the targets, devices, and paths.
Step 4 In the Hardware pane, click Storage.
Step 5 In the Configuration tab, click Add Storage to open the Add Storage wizard.
Step 6 In the Storage Type screen, select the Disk/LUN option. Click Next.
Step 7 In the Select Disk/LUN screen, select the available FC LUN from the list of available LUNs and click Next.
Step 8 In the File System Version screen, select the VMFS-5 option. Click Next.
Step 9 In the Current Disk Layout screen, review the selected disk layout. Click Next.
Step 10 In the Properties screen, enter a data store name in the field. For example, SAN-LUN-1. Click Next.
Step 11 In the Disk/LUN - Formatting screen, leave the default options as-is and click Next.
Step 12 In the Ready to Complete screen, view the summary of the disk layout and click Finish.
Step 13 Find the datastore added to the host in the Configuration tab. The added SAN is now ready to use.
Step 14 Repeat Steps 4 to 12 to add additional LUNs.
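If the ESXi shell is enabled, you can optionally confirm the adapters and paths from the host command line as well. This is a suggested cross-check, assuming the standard ESXi 5.5 esxcli namespaces:

# List all storage adapters, including the FCoE software adapter
esxcli storage core adapter list
# List the discovered paths to the FC LUNs
esxcli storage core path list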

Adding Data Stores to Virtual Machines

The following procedures manually associate datastores with VMs. During OVA installation, the corresponding SYSTEM datastore is provided from the OVA (for example, SYSTEM_CENTRAL for the Central VM, SYSTEM_SERVING for the Serving VM, and SYSTEM_UPLOAD for the Upload VM).

Adding Central VM Data Stores, on page 37

Adding Serving VM Data Stores, on page 50

Adding Upload VM Data Stores, on page 50

Adding Central VM Data Stores

Adding the DATA Datastore, on page 38

Adding the TX_LOGS Datastore, on page 41

Adding the BACKUP Datastore, on page 45

Validating Central VM Datastore Addition, on page 49


Adding the DATA Datastore

Step 1 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Central node.
Step 2 Right-click on the Central node and click Edit Settings to open the Central-Node Virtual Machine Properties dialog box.
Step 3 Click Add in the Hardware tab to open the Add Hardware wizard.
Step 4 In the Device Type screen, select Hard Disk from the Choose the type of device you wish to add list. Click Next.
Step 5 In the Select a Disk screen, select the Create a new virtual disk option. Click Next.
Step 6 In the Create a Disk screen, select the disk capacity or memory to be added. For example, 50 GB.
Step 7 Click Browse to specify a datastore or datastore cluster; this opens the Select a datastore or datastore cluster dialog box.
Step 8 In the Select a datastore or datastore cluster dialog box, select the DATA datastore and click Ok to return to the Create a Disk screen. The selected datastore is displayed in the Specify a datastore or datastore cluster field.
Step 9 Click Next.
Step 10 In the Advanced Options screen, leave the default options as-is and click Next.
Step 11 In the Ready to Complete screen, the options selected for the hardware are displayed. Click Finish to return to the Central-Node Virtual Machine Properties dialog box.
Step 12 Click Ok.
Note For lab purposes, the storage sizes to choose are 50 GB for 'DATA', 10 GB for 'TXN_LOGS', and 50 GB for 'BACKUPS'.
Step 13 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Central node.
Step 14 Right-click on the Central node and click Power > Restart Guest to restart the VM.
Step 15 Log in to the Central node VM by establishing an SSH connection to the VM.
ssh 10.32.102.68
The system responds by connecting the user to the Central VM.
Step 16 Use the sudo command to gain access to the root user account.
sudo su -
The system responds with a password prompt.
Step 17 Check the status of the newly added disk. The disk that is not partitioned is the newly added disk.
fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0005a3b3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          17      131072   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              17          33      131072   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3              33        6528    52165632   83  Linux

Disk /dev/sdb: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders


Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Step 18 Stop the RDU applications.
/etc/init.d/bprAgent stop
BAC Process Watchdog has stopped.
Step 19 Format the disk by partitioning the newly added disk.
fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0xcfa0e306.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): p

Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xcfa0e306

Device Boot Start End Blocks Id System

Command (m for help): n



Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 1

First cylinder (1-1305, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):

Using default value 1305

Command (m for help): v

Remaining 6757 unallocated 512-byte sectors

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Step 20 Mark the disk as an ext3 type of partition.
/sbin/mkfs -t ext3 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

6553600 inodes, 26214055 blocks

1310702 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

800 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first.

Use tune2fs -c or -i to override.

Step 21 Create backup folders for the 'data' partition.

mkdir /backups; mkdir /backups/data

The system responds with a command prompt.

Step 22 Back up the data.

mv /rms/data/ /backups/data/

The system responds with a command prompt.

40

RAN Management System Installation Guide, Release 4.1

May 25, 2015 HF

Installing VMware ESXi and vCenter for Cisco RMS

Adding Data Stores to Virtual Machines


Step 23 Create a new folder for the 'data' partition.

cd /rms; mkdir data; chown ciscorms:ciscorms data

The system responds with a command prompt.

Step 24 Mount the added partition to the newly created folder.

mount /dev/sdb1 /rms/data

The system responds with a command prompt.

Step 25 Move the copied folders back for the 'data' partition.
cd /backups/data/data; mv pools/ /rms/data/; mv CSCObac /rms/data; mv nwreg2 /rms/data; mv dcc_ui /rms/data

The system responds with a command prompt.

Step 26 Edit the /etc/fstab file, add the following /dev/sdb1 entry to the end of the file, and save it.
vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Apr 4 10:07:01 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3aa26fdd-1bd8-47cc-bd42-469c01dac313 /       ext3    defaults        1 1
UUID=ccc74e66-0c8c-4a94-aee0-1eb152502e3f /boot   ext3    defaults        1 2
UUID=f7d57765-abf4-4699-a0bc-f3175a66470a swap    swap    defaults        0 0
tmpfs       /dev/shm    tmpfs   defaults        0 0
devpts      /dev/pts    devpts  gid=5,mode=620  0 0
sysfs       /sys        sysfs   defaults        0 0
proc        /proc       proc    defaults        0 0
/dev/sdb1   /rms/data   ext3    rw              0 0
:wq

Step 27 Restart the RDU process.
/etc/init.d/bprAgent start
BAC Process Watchdog has started.
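You can optionally confirm that the new partition is mounted at the expected location (a suggested check, using the mount point created above):

mount | grep /rms/data
df -h /rms/data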

What to Do Next

To add the TX_LOGS datastore, see Adding the TX_LOGS Datastore, on page 41.

Adding the TX_LOGS Datastore


Step 1 Repeat Steps 1 to 14 of Adding the DATA Datastore, on page 38, for the partition of 'TX_LOGS'.
Step 2 Log in to the Central node VM by establishing an SSH connection to the VM.
ssh 10.32.102.68
The system responds by connecting the user to the Central VM.
Step 3 Use the sudo command to gain access to the root user account.
sudo su -
The system responds with a password prompt.


Step 4 Check the status of the newly added disk. The disk that is not partitioned is the newly added disk.
fdisk -l

[blr-rms-ha-central03] ~ # fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0005a3b3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          17      131072   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 17 33 131072 82 Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3 33 6528 52165632 83 Linux

Disk /dev/sdb: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xaf39a885

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        6527    52428096   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Step 5 Stop the RDU applications.
/etc/init.d/bprAgent stop
BAC Process Watchdog has stopped.
Step 6 Format the disk by partitioning the newly added disk.
fdisk /dev/sdc

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0xcfa0e306.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').


Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): p

Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xcfa0e306

Device Boot Start End Blocks Id System

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 1

First cylinder (1-1305, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):

Using default value 1305

Command (m for help): v

Remaining 6757 unallocated 512-byte sectors

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Step 7 Mark the disk as an ext3 type of partition.
/sbin/mkfs -t ext3 /dev/sdc1
mke2fs 1.41.12 (17-May-2010)

Filesystem label=


OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

6553600 inodes, 26214055 blocks

1310702 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

800 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first.

Use tune2fs -c or -i to override.


Step 8 Create backup folders for the 'txn' partition.

mkdir /backups/txn

The system responds with a command prompt.

Step 9 Back up the data.

mv /rms/txn/ /backups/txn

The system responds with a command prompt.

Step 10 Create a new folder for the 'txn' partition.

cd /rms; mkdir txn; chown ciscorms:ciscorms txn

The system responds with a command prompt.

Step 11 Mount the added partition to the newly created folder.

mount /dev/sdc1 /rms/txn

The system responds with a command prompt.

Step 12 Move the copied folders back for the 'txn' partition.
cd /backups/txn/txn; mv CSCObac/ /rms/txn/

The system responds with a command prompt.

Step 13 Edit the /etc/fstab file, add the following /dev/sdc1 entry to the end of the file, and save it.
vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon May 5 15:08:38 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=f2fc46ec-f5d7-4223-a1c0-b31476770dc7 /       ext3    defaults        1 1
UUID=8cb5ee90-63c0-4a00-967d-698644c5aa8c /boot   ext3    defaults        1 2
UUID=f1a0bf72-0d9e-4032-acd2-392df6eb1329 swap    swap    defaults        0 0
tmpfs       /dev/shm    tmpfs   defaults        0 0
devpts      /dev/pts    devpts  gid=5,mode=620  0 0
sysfs       /sys        sysfs   defaults        0 0
proc        /proc       proc    defaults        0 0
/dev/sdb1   /rms/data   ext3    rw              0 0
/dev/sdc1   /rms/txn    ext3    rw              0 0
:wq

Step 14 Restart the RDU process.
/etc/init.d/bprAgent start
BAC Process Watchdog has started.

What to Do Next

To add the BACKUP datastore, see Adding the BACKUP Datastore, on page 45.

Adding the BACKUP Datastore

Step 1 Repeat Steps 1 to 14 of Adding the DATA Datastore, on page 38, for the partition of 'BACKUPS'.
Step 2 Log in to the Central node VM by establishing an SSH connection to the VM.
ssh 10.32.102.68
The system responds by connecting the user to the Central VM.
Step 3 Use the sudo command to gain access to the root user account.
sudo su -
The system responds with a password prompt.
Step 4 Check the status of the newly added disk. The disk that is not partitioned is the newly added disk.
fdisk -l

[blr-rms-ha-central03] ~ # fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0005a3b3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          17      131072   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 17 33 131072 82 Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3 33 6528 52165632 83 Linux

Disk /dev/sdb: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk identifier: 0xaf39a885

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        6527    52428096   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xcfa0e306

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   83  Linux

Disk /dev/sdd: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table

Step 5 Stop the RDU applications.
/etc/init.d/bprAgent stop
BAC Process Watchdog has stopped.
Step 6 Format the disk by partitioning the newly added disk.
fdisk /dev/sdd

[blr-rms-ha-central03] ~ # fdisk /dev/sdd

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0xf35b26bc.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): p

Disk /dev/sdd: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xf35b26bc

Device Boot Start End Blocks Id System

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 1

First cylinder (1-6527, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):

Using default value 6527

Command (m for help): v

Remaining 1407 unallocated 512-byte sectors

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[blr-rms-ha-central03] ~ # /sbin/mkfs -t ext3 /dev/sdd1
mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

3276800 inodes, 13107024 blocks

655351 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

400 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424



Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or

180 days, whichever comes first.

Use tune2fs -c or -i to override.

Step 7 Mark the disk as an ext3 type of partition.
/sbin/mkfs -t ext3 /dev/sdd1
mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

6553600 inodes, 26214055 blocks

1310702 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

800 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first.

Use tune2fs -c or -i to override.

Step 8 Create backup folders for the 'backups' partition.

mkdir /backups/backups

The system responds with a command prompt.

Step 9 Back up the data.

mv /rms/backups /backups/backups

The system responds with a command prompt.

Step 10 Create a new folder for the 'backups' partition.

cd /rms; mkdir backups; chown ciscorms:ciscorms backups

The system responds with a command prompt.

Step 11 Mount the added partition to the newly created folder.

mount /dev/sdd1 /rms/backups

The system responds with a command prompt.

Step 12 Move the copied folders back for the 'backups' partition.

cd /backups/backups; mv * /rms/backups/

The system responds with a command prompt.

Step 13 Edit the /etc/fstab file, add the following /dev/sdd1 entry to the end of the file, and save it.
vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon May 5 15:08:38 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=f2fc46ec-f5d7-4223-a1c0-b31476770dc7 /       ext3    defaults        1 1
UUID=8cb5ee90-63c0-4a00-967d-698644c5aa8c /boot   ext3    defaults        1 2
UUID=f1a0bf72-0d9e-4032-acd2-392df6eb1329 swap    swap    defaults        0 0
tmpfs       /dev/shm       tmpfs   defaults        0 0
devpts      /dev/pts       devpts  gid=5,mode=620  0 0
sysfs       /sys           sysfs   defaults        0 0
proc        /proc          proc    defaults        0 0
/dev/sdb1   /rms/data      ext3    rw              0 0
/dev/sdc1   /rms/txn       ext3    rw              0 0
/dev/sdd1   /rms/backups   ext3    rw              0 0
:wq

Step 14 Restart the RDU process.
/etc/init.d/bprAgent start
BAC Process Watchdog has started.

What to Do Next

To validate the data stores added to the Central VM, see Validating Central VM Datastore Addition, on page 49.

Validating Central VM Datastore Addition

After datastores are added to the host and disks are mounted in the Central VM, validate the added datastores in the vSphere client and in an SSH session on the VM.

Step 1 Log in to the vSphere client.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Central VM.
Step 3 Click the General tab to view the datastores associated with the VM.
Step 4 Log in to the Central node VM and establish an SSH connection to the VM to see the four disks mounted.

[blrrms-central-22] ~ $ mount

/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")

/dev/sda1 on /boot type ext3 (rw)

/dev/sdb1 on /rms/data type ext3 (rw)

/dev/sdc1 on /rms/txn type ext3 (rw)

/dev/sdd1 on /rms/backups type ext3 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

[blrrms-central-22] ~ $

Adding Serving VM Data Stores

Adding the SYSTEM_SERVING Datastore, on page 50

Adding the SYSTEM_SERVING Datastore

In the OVA installation, assign a datastore from the available datastores based on your space requirement for installation. For example, SYSTEM_SERVING.

What to Do Next

To add data stores to the Upload VM, see Adding Upload VM Data Stores, on page 50.

Adding Upload VM Data Stores

Adding the SYSTEM_UPLOAD Datastore, on page 50

Adding PM_RAW and PM_ARCHIVE Datastores, on page 50

Validating Upload VM Datastore Addition, on page 53

Adding the SYSTEM_UPLOAD Datastore

In the OVA installation, provide SYSTEM_UPLOAD as the datastore for installation.

What to Do Next

To add the PM_RAW and PM_ARCHIVE datastores, see Adding PM_RAW and PM_ARCHIVE Datastores, on page 50.

Adding PM_RAW and PM_ARCHIVE Datastores

Step 1 Repeat Steps 1 to 14 of Adding the DATA Datastore, on page 38, to add the PM_RAW data store.
Step 2 Repeat Steps 1 to 14 of Adding the DATA Datastore, on page 38, to add the PM_ARCHIVE data store.
Step 3 Log in to the Central node VM and establish an SSH connection to the Upload VM using the Upload node hostname.
ssh admin1@blr-rms14-upload
The system responds by connecting the user to the Upload VM.
Step 4 Use the sudo command to gain access to the root user account.
sudo su -
The system responds with a password prompt.



Step 5 Run fdisk -l to display the new disks discovered by the system.
Step 6 Run fdisk /dev/sdb to create a new partition on the new disk and save it.
fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 1

First cylinder (1-52216, default 1): 1

Last cylinder, +cylinders or +size{K,M,G} (1-52216, default 52216): 52216

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks..

Follow the on-screen prompts carefully to avoid errors that may corrupt the entire system. The cylinder values may vary based on the machine setup.

Step 7 Repeat Step 6 to create a partition on /dev/sdc.

Step 8 Stop the LUS process.

god stop UploadServer

Sending 'stop' command

The following watches were affected:

UploadServer

Step 9 Create backup folders for the 'files' partition.
mkdir -p /backups/uploads
The system responds with a command prompt.
mkdir -p /backups/archives

The system responds with a command prompt.

Step 10 Back up the data.
mv /opt/CSCOuls/files/uploads/* /backups/uploads
mv /opt/CSCOuls/files/archives/* /backups/archives

The system responds with a command prompt.

Step 11 Create the file system on the new partitions.

mkfs.ext4 -i 4049 /dev/sdb1

The system responds with a command prompt.

Step 12 Repeat Step 11 to create the file system on /dev/sdc1.
Step 13 Mount the new partitions under the /opt/CSCOuls/files/uploads and /opt/CSCOuls/files/archives directories using the following commands.
mount -t ext4 -o noatime,data=writeback,commit=120 /dev/sdb1 /opt/CSCOuls/files/uploads/
mount -t ext4 -o noatime,data=writeback,commit=120 /dev/sdc1 /opt/CSCOuls/files/archives/



The system responds with a command prompt.

Step 14 Edit /etc/fstab and append the following entries to make the mount points persistent across reboots.
/dev/sdb1 /opt/CSCOuls/files/uploads/ ext4 noatime,data=writeback,commit=120 0 0
/dev/sdc1 /opt/CSCOuls/files/archives/ ext4 noatime,data=writeback,commit=120 0 0
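Optionally, confirm the new mounts and their sizes before restoring the data (a suggested check, using the directories mounted above):

df -h /opt/CSCOuls/files/uploads /opt/CSCOuls/files/archives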

Step 15 Restore the previously backed-up data.
mv /backups/uploads/* /opt/CSCOuls/files/uploads/
mv /backups/archives/* /opt/CSCOuls/files/archives/

The system responds with a command prompt.

Step 16 Check the ownership of the /opt/CSCOuls/files/uploads and /opt/CSCOuls/files/archives directories with the following command.
ls -l /opt/CSCOuls/files

Step 17 Change the ownership of the files/uploads and files/archives directories to ciscorms.

chown -R ciscorms:ciscorms /opt/CSCOuls/files/

The system responds with a command prompt.

Step 18 Verify the ownership of the mounted directories.
ls -al /opt/CSCOuls/files/
total 12
drwxr-xr-x. 7 ciscorms ciscorms 4096 Aug  5 06:03 archives
drwxr-xr-x. 2 ciscorms ciscorms 4096 Jul 25 15:29 conf
drwxr-xr-x. 5 ciscorms ciscorms 4096 Jul 31 17:28 uploads

Step 19 Edit the /opt/CSCOuls/conf/UploadServer.properties file.
cd /opt/CSCOuls/conf; sed -i 's/UploadServer.disk.alloc.global.maxgb.*/UploadServer.disk.alloc.global.maxgb=<Max limit>/' UploadServer.properties;
The system responds with a command prompt.
Replace <Max limit> with the maximum size of the partition mounted under the /opt/CSCOuls/files/uploads directory.
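For example, if the partition mounted under /opt/CSCOuls/files/uploads is 450 GB (a hypothetical size, used only for illustration), the sed command would leave this line in UploadServer.properties:

UploadServer.disk.alloc.global.maxgb=450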

Step 20 Start the LUS process.

god start UploadServer

Sending 'start' command

The following watches were affected:

UploadServer

Note For the Upload Server to work properly, the /opt/CSCOuls/files/uploads/ and /opt/CSCOuls/files/archives/ folders must be on different partitions.

What to Do Next

To validate the data stores added to the Upload VM, see Validating Upload VM Datastore Addition, on page 53.


Validating Upload VM Datastore Addition

After datastores are added to the host and disks are mounted in the Upload VM, validate the added datastores in the vSphere client and in an SSH session on the VM.

Step 1 Log in to the vSphere client.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Upload VM.
Step 3 Click the General tab to view the datastores associated with the VM.
Step 4 Log in to the Central node VM and establish an SSH connection to the Upload VM to see the two disks mounted.

Migrating the Data Stores

Initial Migration on One Disk, on page 53

Initial Migration on One Disk

Step 1 Log in to the VMware ESXi host via the vSphere client.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Central node.
Step 3 Right-click on the Central node and click Migrate to open the Migrate Virtual Machine wizard.
Step 4 In the Select Migration Type screen, select the Change datastore option. Click Next.
Step 5 In the Storage screen, select the required data store. Click Next.
Step 6 In the Ready to Complete screen, the options selected for the virtual machine migration are displayed. Click Finish.


CHAPTER 4

RMS Installation Tasks

Perform these tasks to install the RMS software.

RMS Installation Procedure, page 55

Preparing the OVA Descriptor Files, page 56

Deploying the RMS Virtual Appliance, page 60

RMS Redundant Deployment, page 64

RMS High Availability Deployment, page 83

Changing Default Routing Interface on the Central Node , page 83

Optimizing the Virtual Machines, page 84

RMS Installation Sanity Check, page 90

RMS Installation Procedure

The RMS installation procedure is summarized here with links to the specific tasks.

1. Perform all prerequisite installations. See Installation Prerequisites, on page 9, and Installing VMware ESXi and vCenter for Cisco RMS, on page 27. (Mandatory)
2. Create the Open Virtual Application (OVA) descriptor file. See Preparing the OVA Descriptor Files, on page 56. (Mandatory)
3. Deploy the OVA package. See Deploying the RMS Virtual Appliance, on page 60. (Mandatory)
4. Configure redundant Serving nodes. See RMS Redundant Deployment, on page 64. (Optional)
5. Run the configure_hnbgw.sh script to configure the HNB gateway properties. See HNB Gateway and DHCP Configuration, on page 95. (Mandatory if the HNB gateway properties were not included in the OVA descriptor file)
6. Optimize the VMs by upgrading the VM hardware version, the VM CPU and memory, and the Upload VM data size. See Optimizing the Virtual Machines, on page 84. (Mandatory)
7. Perform a sanity check of the system. See RMS Installation Sanity Check, on page 90. (Optional but recommended)
8. Install RMS certificates. See Installing RMS Certificates, on page 101. (Mandatory)
9. Configure the default route on the Upload and Serving nodes for TLS termination. See Configuring Default Routes for Direct TLS Termination at the RMS, on page 113. (Optional)
10. Install and configure the PMG database. See PMG Database Installation and Configuration, on page 116. (Optional; contact Cisco services to deploy the PMG DB)
11. Configure the Central node. See Configuring the Central Node, on page 121. (Mandatory)
12. Populate the PMG database. See Configuring the Central Node, on page 121. (Mandatory)
13. Verify the installation. See Verifying RMS Deployment, on page 145. (Optional but recommended)

Preparing the OVA Descriptor Files

The RMS requires Open Virtual Application (OVA) descriptor files, more commonly known as configuration files, that specify the configuration of various system parameters.

The easiest way to create these configuration files is to copy the example OVA descriptor files that are bundled as part of the RMS build deliverable itself. The RMS-ALL-In-One-Solution package contains the sample descriptor for all-in-one deployment and the RMS-Distributed-Solution package contains the sample descriptor for distributed deployment. It is recommended to use these sample descriptor files and edit them according to your needs.

Copy the files and rename them as ".ovftool" before deploying. You need one configuration file for the all-in-one deployment and three separate files for the distributed deployment.

When you are done creating the configuration files, copy them to the server where vCenter is hosted and the ovftool utility is installed. Alternatively, they can be copied to any other server where the ovftool utility by VMware is installed. In short, the configuration files must be copied as ".ovftool" to the directory where you can run the VMware ovftool command.

The following are mandatory properties that must be provided in the OVA descriptor file. These are the bare minimum properties required for successful RMS installation and operation. If any of these properties are missing or incorrectly formatted, an error is displayed. All other properties are optional and configured automatically with default values.

Note Make sure that all Network 1 (eth0) interfaces (Central, Serving, and Upload nodes) are in the same VLAN.

Only the .txt and .xml formats support copying the OVA descriptor file from a desktop to a Linux machine. Other formats, such as .xlsx and .docx, introduce garbage characters when copied to Linux and cause an error during installation.

In a .csv file, if a comma delimiter is present between two IPs, for example, prop:Upload_Node_Gateway=10.5.4.1,10.5.5.1, the property gets stored in double quotes when copied to a Linux machine: "prop:Upload_Node_Gateway=10.5.4.1,10.5.5.1". This causes an error during deployment.
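Before deploying, you can optionally inspect the copied descriptor for stray quotes or Windows line endings. This is a suggested check only; cat -A prints a '$' at each line end and shows carriage returns as '^M':

cat -A .ovftool | grep -i gateway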

Table 1: Mandatory Properties for OVA Descriptor File

• name: Name of the vApp that is deployed on the host. Valid values: text.
• datastore: Name of the physical storage to keep the VM files. Valid values: text.
• net:Upload-Node Network 1: VLAN for the connection between the Upload node (NB) and the Central node (SB). Valid values: VLAN #.
• net:Upload-Node Network 2: VLAN for the connection between the Upload node (SB) and the CPE network (FAPs). Valid values: VLAN #.
• net:Central-Node Network 1: VLAN for the connection between the Central node (SB) and the Upload node (NB) or Serving node (NB). Valid values: VLAN #.
• net:Central-Node Network 2: VLAN for the connection between the Central node (NB) and the OSS network. Valid values: VLAN #.
• net:Serving-Node Network 1: VLAN for the connection between the Serving node (NB) and the Central node (SB). Valid values: VLAN #.
• net:Serving-Node Network 2: VLAN for the connection between the Serving node (SB) and the CPE network (FAPs). Valid values: VLAN #.
• prop:Central_Node_Eth0_Address: IP address of the Southbound VM interface. Valid values: IPv4 address.
• prop:Central_Node_Eth0_Subnet: Network mask for the IP subnet of the Southbound VM interface. Valid values: network mask.
• prop:Central_Node_Eth1_Address: IP address of the Northbound VM interface. Valid values: IPv4 address.
• prop:Central_Node_Eth1_Subnet: Network mask for the IP subnet of the Northbound VM interface. Valid values: network mask.
• prop:Central_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Central_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Central_Node_Gateway: IP address of the gateway to the management network for the Northbound interface of the Central node. Valid values: IPv4 address.
• prop:Serving_Node_Eth0_Address: IP address of the Northbound VM interface. Valid values: IPv4 address.
• prop:Serving_Node_Eth0_Subnet: Network mask for the IP subnet of the Northbound VM interface. Valid values: network mask.
• prop:Serving_Node_Eth1_Address: IP address of the Southbound VM interface. Valid values: IPv4 address.
• prop:Serving_Node_Eth1_Subnet: Network mask for the IP subnet of the Southbound VM interface. Valid values: network mask.
• prop:Serving_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Serving_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Serving_Node_Gateway: IP address of the gateway to the management network; that is, the gateway from the Northbound interface of the Serving node towards the Central node southbound network and from the Southbound interface of the Serving node towards the CPE. Valid values: comma-separated IPv4 addresses of the form [Northbound GW],[Southbound GW]. Note It is recommended to specify both gateways.
• prop:Upload_Node_Eth0_Address: IP address of the Northbound VM interface. Valid values: IPv4 address.
• prop:Upload_Node_Eth0_Subnet: Network mask for the IP subnet of the Northbound VM interface. Valid values: network mask.
• prop:Upload_Node_Eth1_Address: IP address of the Southbound VM interface. Valid values: IPv4 address.
• prop:Upload_Node_Eth1_Subnet: Network mask for the IP subnet of the Southbound VM interface. Valid values: network mask.
• prop:Upload_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Upload_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Upload_Node_Gateway: IP address of the gateway from the Northbound interface of the Upload node for northbound traffic, and from the Southbound interface of the Upload node towards the CPE. Valid values: comma-separated IPv4 addresses of the form [Northbound GW],[Southbound GW]. Note It is recommended to specify both gateways.
• prop:Ntp1_Address: Primary NTP server. Valid values: IPv4 address.
• prop:Acs_Virtual_Fqdn: ACS virtual fully qualified domain name (FQDN); the Southbound FQDN or IP address of the Serving node. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT. This is the IP/FQDN that the AP uses to communicate with the RMS. Valid values: IPv4 address or FQDN. Note The recommended value is FQDN; FQDN is required in the case of a redundant setup.
• prop:Upload_SB_Fqdn: Southbound FQDN or IP address of the Upload node. Specify the Upload eth1 address if no FQDN exists. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT. Valid values: IPv4 address or FQDN. Note The recommended value is FQDN; FQDN is required in the case of a redundant setup.

Note For third-party SeGW support for allocating inner IPs (tunnel IPs), set the property "prop:Install_Cnr=False" in the descriptor file.

Refer to OVA Descriptor File Properties, on page 183, for a complete description of all required and optional properties for the OVA descriptor files.
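For orientation only, a fragment of a .ovftool descriptor built from the mandatory properties in Table 1 might look like the following. Every value shown is a placeholder, and the sample descriptor files bundled with the RMS packages remain the authoritative starting point:

name=RMS-AIO-Lab01
datastore=SAN-LUN-1
net:Central-Node Network 1=VM Network
net:Central-Node Network 2=VM Network 2
prop:Central_Node_Eth0_Address=10.5.1.35
prop:Central_Node_Eth0_Subnet=255.255.255.0
prop:Central_Node_Gateway=10.5.1.1
prop:Ntp1_Address=10.10.10.3
prop:Acs_Virtual_Fqdn=acs.rms.example.com
prop:Upload_SB_Fqdn=upload.rms.example.com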

Validation of OVA Files

If mandatory properties are missing from a descriptor file, the OVA installer displays an error on the installation console. If mandatory properties are incorrectly configured, an appropriate error is displayed on the installation console and the installation aborts.

An example validation failure message in the ova-first-boot.log is shown here:

"Alert!!! Invalid input for Acs_Virtual_Fqdn...Aborting installation..."

Log in to the relevant VM using root credentials (default password is Ch@ngeme1) to access the first-boot logs in the case of installation failures.

Wrongly configured properties include invalid IP addresses, invalid FQDN format, and so on. Validations are restricted to format/data-type validations. Incorrect IP addresses/FQDNs (for example, unreachable IPs) are not in the scope of validation.
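For example, after a failed installation you could log in to the affected VM as root and search the first-boot log for the validation messages quoted above (the grep pattern here is only a suggestion):

grep -i "alert\|abort" /root/ova-first-boot.log
tail -50 /root/ova-first-boot.log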

Deploying the RMS Virtual Appliance

All administrative functions are available through the vSphere client. A subset of those functions is available through the vSphere web client. The vSphere client users are virtual infrastructure administrators for specialized functions. The vSphere web client users are virtual infrastructure administrators, help desk staff, network operations center operators, and virtual machine owners.

Note All illustrations in this document are from the VMware vSphere client.

Before You Begin

You must be running VMware vSphere version 5.1 or 5.5. There are two ways to access the VMware vCenter:


All-in-One RMS Deployment: Example

• VMware vSphere Client locally installed application

• VMware vSphere Web Client

Step 1 Copy the OVA descriptor configuration files as ".ovftool" to the directory where you can run the VMware ovftool command.
Note If you are running from a Linux server, the .ovftool file should not be in the root directory because it takes precedence over other ".ovftool" files. While deploying the OVA package, the home directory takes precedence over the current directory.
Step 2 Deploy the OVA package:
./OVAdeployer.sh ova-filepath/ova-file vi://vcenter-user:password@vcenter-host/datacenter-name/host/host-folder-if-any/ucs-host

Example:

./OVAdeployer.sh /tmp/RMS-Provisioning-Solution-4.0.0-1E.ova

vi://myusername:mypass#[email protected]/BLR/host/UCS5K/blrrms-5108-09.cisco.com

Note The OVAdeployer.sh tool first validates the OVA descriptor file and then continues to install the RMS. If necessary, get the OVAdeployer.sh tool from the build package and copy it to the directory where the OVA descriptor file is stored.

If the vCenter user or password (or both) is not specified in the command, you are prompted to enter this information on the command line. Enter the user name and password to continue.

All-in-One RMS Deployment: Example

In an all-in-one RMS deployment, all the nodes (Central, Serving, and Upload) are deployed on a single host in the vSphere client.

In an all-in-one RMS deployment, the Serving and Upload nodes should be synchronized with the Central node during first boot-up. To synchronize these nodes, add the property "powerOn=False" in the descriptor file (.ovftool).

./OVAdeployer.sh /data/ova/RMS-All-In-One-Solution-4.1.0-1M/

RMS-All-In-One-Solution-4.1.0-1M.ova vi://ova:[email protected]/

BLR/host/RMS/blrrms-c240-05.cisco.com

Starting OVA installation. Network 1(eth0) interface of Central/Serving/Upload Nodes are recommended to be in same VLAN for AIO/Distributed deployments with an exception for

Geo Redundant Setups...

Reading OVA descriptor from path: ./.ovftool

Converting OVA descriptor to unix format..

Checking deployment type

Starting input validation prop:Admin1_Password not provided, will be taking the default value for RMS.

prop:RMS_App_Password not provided, will be taking the default value for RMS.

prop:Root_Password not provided, will be taking the default value for RMS.

Checking network configurations in descriptor...

Deploying OVA...

Opening OVA source:

/data/ova/RMS-All-In-One-Solution-4.1.0-1M/RMS-All-In-One-Solution-4.1.0-1M.ova

The manifest validates

Opening VI target: vi://[email protected]:443/BLR/host/RMS/blrrms-c240-05.cisco.com

Deploying to VI: vi://[email protected]:443/BLR/host/RMS/blrrms-c240-05.cisco.com


Transfer Completed

Warning:

- No manifest entry found for: '.ovf'.

- File is missing from the manifest: '.ovf'.

Completed successfully

Thu 28 Aug 2014 07:01:05 PM IST

OVA deployment took 234 seconds.

After OVA installation is completed, power on only the Central VM and wait until the login prompt appears on the VM console. Next, power on the Serving and Upload VMs and wait until the login prompt appears on the VM consoles.

The RMS all-in-one deployment in the vCenter appears similar to this illustration:

Figure 6: RMS All-In-One Deployment

Only after all hosts are powered on and the login prompt appears on the VM consoles should you proceed with the configuration changes (for example, creating groups, replacing certificates, adding routes, and so on). Otherwise, the system bring-up may overwrite your changes.

Distributed RMS Deployment: Example

In the distributed deployment, the RMS nodes (Central node, Serving node, and Upload node) are deployed on different hosts in the vSphere client. The RMS nodes must be deployed and powered on in the following sequence:

1 Central Node

2 Serving Node

3 Upload Node


Note Power on the Serving and Upload nodes after the Central node applications are up. To confirm this:

1 Log in to the Central node after ten minutes (from the time the nodes are powered on).

2 Switch to the root user and look for the following message in /root/ova-first-boot.log:

Central-first-boot script execution took [xxx] seconds

For example, Central-first-boot script execution took 360 seconds.

The .ovftool files for the distributed deployment differ slightly from those of the all-in-one deployment in terms of virtual host network values, as mentioned in Preparing the OVA Descriptor Files, on page 56. Here is an example of the distributed RMS deployment:

Central Node Deployment

./OVAdeployer.sh RMS-Central-Node-4.0.0-2I.ova

vi://ova:[email protected]/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Reading OVA descriptor from path: ./.ovftool

Checking deployment type

Starting input validation

Deploying OVA...

Opening OVA source: RMS-Central-Node-4.0.0-2I.ova

The manifest validates

Opening VI target: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Deploying to VI: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Transfer Completed

Warning:

- No manifest entry found for: '.ovf'.

- File is missing from the manifest: '.ovf'.

Completed successfully

Wed 28 May 2014 04:09:24 PM IST

OVA deployment took 335 seconds.

Serving Node Deployment

./OVAdeployer.sh RMS-Serving-Node-4.0.0-2I.ova

vi://ova:[email protected]/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Reading OVA descriptor from path: ./.ovftool

Checking deployment type

Starting input validation

Deploying OVA...

Opening OVA source: RMS-Serving-Node-4.0.0-2I.ova

The manifest validates

Opening VI target: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Deploying to VI: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Transfer Completed

Warning:

- No manifest entry found for: '.ovf'.

- File is missing from the manifest: '.ovf'.

Completed successfully

Wed 28 May 2014 04:09:24 PM IST

OVA deployment took 335 seconds.


Upload Node Deployment

./OVAdeployer.sh RMS-Upload-Node-4.0.0-2I.ova vi://ova:[email protected]/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Reading OVA descriptor from path: ./.ovftool

Checking deployment type

Starting input validation

Deploying OVA...

Opening OVA source: RMS-Upload-Node-4.0.0-2I.ova

The manifest validates

Opening VI target: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Deploying to VI: vi://[email protected]:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com

Transfer Completed

Warning:

- No manifest entry found for: '.ovf'.

- File is missing from the manifest: '.ovf'.

Completed successfully

Wed 28 May 2014 04:09:24 PM IST

OVA deployment took 335 seconds.

The RMS distributed deployment in vSphere appears similar to this illustration:

Figure 7: RMS Distributed Deployment

RMS Redundant Deployment

To provide failover for the Serving node and Upload node, additional Serving and Upload nodes can be configured with the same Central node.

This procedure describes how to configure additional Serving and Upload nodes with an existing Central node.


Note Redundant deployment does not mandate having both Serving and Upload nodes together. Each redundant node can be deployed individually without having the other node in the setup.

Step 1  Prepare the deployment descriptor (.ovftool file) for any additional Serving nodes as described in Preparing the OVA Descriptor Files, on page 56.

For Serving node redundancy, the descriptor file should have the same provisioning group as the primary Serving node.

For an example of a redundant OVA descriptor file, refer to Example Descriptor File for Redundant Serving/Upload Node, on page 213.

The following properties are different in the redundant Serving node and redundant Upload node descriptor files:

Redundant Serving Node:

• name

• Serving_Node_Eth0_Address

• Serving_Node_Eth1_Address

• Serving_Hostname

• Acs_Virtual_Address (should be same as Serving_Node_Eth1_Address)

• Dpe_Cnrquery_Client_Socket_Address (should be same as Serving_Node_Eth0_Address)

• Serving_Node_Eth0_Subnet

• Serving_Node_Eth1_Subnet

• Serving_Node_Gateway

• Upload_Node_Eth0_Address

• Upload_Node_Eth0_Subnet

• Upload_Node_Eth1_Address

• Upload_Node_Eth1_Subnet

• Upload_Node_Dns1_Address

• Upload_Node_Dns2_Address

• Upload_Node_Gateway

• Upload_SB_Fqdn

• Upload_Hostname

• Serving-Node Network 1

• Serving-Node Network 2

Redundant Upload Node:

• name

• Upload_Node_Eth0_Address



• Upload_Node_Eth1_Address

• Upload_Hostname

• Acs_Virtual_Address (should be same as Serving_Node_Eth1_Address)

• Dpe_Cnrquery_Client_Socket_Address (should be same as Serving_Node_Eth0_Address)

• Upload_Node_Eth0_Subnet

• Upload_Node_Eth1_Subnet

• Upload_Node_Gateway

• Serving_Node_Eth0_Address

• Serving_Node_Eth0_Subnet

• Serving_Node_Eth1_Address

• Serving_Node_Eth1_Subnet

• Serving_Node_Dns1_Address

• Serving_Node_Dns2_Address

• Serving_Node_Gateway

• Serving_Hostname

• Upload-Node Network 1

• Upload-Node Network 2

Step 2  Prepare an input configuration file with all the following properties of the redundant Serving and Upload nodes and name it appropriately, for example, ovadescriptorfile_CN_Config.txt. A sample file is sketched after this list.

• Central_Node_Eth0_Address

• Central_Node_Eth1_Address

• Serving_Node_Eth0_Address

• Serving_Node_Eth1_Address

• Upload_Node_Eth0_Address

• Upload_Node_Eth1_Address

• Serving_Node_Hostname

• Upload_Node_Hostname

• Acs_Virtual_Address

• Acs_Virtual_Fqdn

• Upload_SB_Fqdn
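As a sketch only, using the example values echoed by the script in Step 4 below (substitute your own addresses and hostnames; the Acs_Virtual_Address value here is an assumption, set equal to the Serving node eth1 address per the descriptor guidance earlier in this section), the input file might look like:

Central_Node_Eth0_Address=10.1.0.16
Central_Node_Eth1_Address=10.105.246.53
Serving_Node_Eth0_Address=10.4.0.14
Serving_Node_Eth1_Address=10.5.0.23
Upload_Node_Eth0_Address=10.4.0.15
Upload_Node_Eth1_Address=10.5.0.24
Serving_Node_Hostname=RMS51G-SERVING05
Upload_Node_Hostname=RMS51G-UPLOAD05
Acs_Virtual_Address=10.5.0.23
Acs_Virtual_Fqdn=femtoacs03.movistar.ec
Upload_SB_Fqdn=10.5.0.24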

Step 3  Take a backup of /rms/app/rms/conf/uploadServers.xml and /etc/hosts using these commands:

cp /etc/hosts /etc/orig_hosts


cp /rms/app/rms/conf/uploadServers.xml /rms/app/rms/conf/orig_uploadServers.xml

Step 4  As the "root" user, execute the utility shell script (central-multi-nodes-config.sh) to configure the network and application properties of the redundant nodes on the Central node. The script is located in the / directory. Provide the configuration text file created in Step 2 (ovadescriptorfile_CN_Config.txt) as input to the shell script.

Example:

./central-multi-nodes-config.sh <deploy-descr-filename with absolute path>

Example:

[RMS51G-CENTRAL03] / # ./central-multi-nodes-config.sh ovadescrip.txt

Deployment Descriptor file ovadescrip.txt found, continuing

Central_Node_Eth0_Address=10.1.0.16

Central_Node_Eth1_Address=10.105.246.53

Serving_Node_Eth0_Address=10.4.0.14

Serving_Node_Eth1_Address=10.5.0.23

Upload_Node_Eth0_Address=10.4.0.15

Upload_Node_Eth1_Address=10.5.0.24

Serving_Node_Hostname=RMS51G-SERVING05

Upload_Node_Hostname=RMS51G-UPLOAD05

Upload_SB_Fqdn=10.5.0.24

Acs_Virtual_Fqdn=femtoacs03.movistar.ec

Verify the input, Press Cntrl-C to exit

Script will start executing in next 15 seconds

...

......10 more seconds to execute

.........5 more seconds to execute
begin configure_iptables
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
end configure_iptables
begin configure_system
end configure_system
begin configure_files
end configure_files

Script execution completed.

Verify entries in following files:

/etc/hosts

/rms/app/rms/conf/uploadServers.xml

After execution of the script, a new fqdn/ip entry for the new Upload Server node is created in the /rms/app/rms/conf/uploadServers.xml file.
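A quick way to confirm that both files were updated (simple grep checks, using the hostname and address from the example above; substitute your own values):

grep RMS51G-UPLOAD05 /etc/hosts
grep 10.5.0.24 /rms/app/rms/conf/uploadServers.xml

Each command should print at least one matching entry.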

Step 5  Create an individual ovf file based on the redundant Serving node or Upload node as described in Step 1 and use it for deployment. Install additional Serving and Upload nodes as described in Deploying the RMS Virtual Appliance, on page 60.

Complete the following procedures on the redundant node before proceeding to the next step:

Installing RMS Certificates, on page 101

Enabling Communication for VMs on Different Subnets, on page 112


Configuring Default Routes for Direct TLS Termination at the RMS, on page 113

Step 6  Specify route and IP table configurations to establish proper inter-node communication after deploying the redundant Serving and Upload nodes, based on the subnet of the new nodes. For configuring different-subnet or geo-redundant Serving and Upload nodes, see Configuring Serving and Upload Nodes on Different Subnets, on page 68.

Step 7  Configure the Serving node redundancy as described in Setting Up Redundant Serving Nodes, on page 72.

Note  The redundant Upload node needs no further configuration.

Configuring Serving and Upload Nodes on Different Subnets

Note This section is applicable only if the Serving and Upload nodes have eth0 (NB) interface on a different subnet than that of the Central server eth0 IP.

In a multi-site or geo-redundant configuration, the Serving and Upload servers on site 2 (the redundant site) can be deployed with eth0/eth1 IPs on a different subnet than the eth0/eth1 IPs of the site 1 Central, Serving, and Upload servers. In such cases, a post-installation script must be executed on the site 2 Serving and Upload nodes. Follow this procedure to execute the post-installation script.

Step 1  Follow these steps on the Serving node deployed in a different subnet:

a) Post RMS installation, configure appropriate routes on the Serving node to communicate with the Central node. For more information, see Enabling Communication for VMs on Different Subnets, on page 112.

Note  Start the VM first if powerOn is set to 'false' in the descriptor file. Otherwise, adding routes is not possible.

b) Log in to the Serving node as admin user from the Central node.

c) Switch to root user using the required credentials.

d) Navigate to /rms/ova/scripts/post_install/.

e) Copy the Serving node OVA descriptor to a temporary directory or /home/admin1 and specify the complete path during script execution.

f) Switch back to the post_install directory: /rms/ova/scripts/post_install/

g) Run the following commands:

chmod +x redundant-serving-config.sh;
./redundant-serving-config.sh <diff_subnet_serving_ova_descriptor_filepath>

Example:

[root@blrrms-serving-19-2 post_install]# ./redundant-serving-config.sh ovftool_serving2

Deployment Descriptor file ovftool_serving2 found, continuing

INFO: Admin1_Username has no value, setting to default

Enter Password for admin user admin1 on Central Node:


Confirm admin1 Password:

Enter Password for root on Central Node:

Confirm root Password:
Function validateinputs starts at 1424262225

INFO: RMS_App_Password has no value, setting to default

INFO: Bac_Provisioning_Group has no value, setting to default

INFO: Ntp2_Address has no value, setting to default

INFO: Ntp3_Address has no value, setting to default

INFO: Ntp4_Address has no value, setting to default

INFO: Ip_Timing_Server_Ip has no value, setting to default

Starting ip input validation

Done ip input validation

Central_Node_Eth0_Address=10.5.1.208

Serving_Node_Eth1_Address=10.5.5.68

Upload_Node_Eth1_Address=10.5.5.69

Upload_SB_Fqdn=femtolus19.testlab.com

Acs_Virtual_Fqdn=femtoacs19.testlab.com

Acs_Virtual_Address=10.5.5.68

USEACE=

Admin1_Username=admin1

Bac_Provisioning_Group=pg01

Ntp1_Address=10.105.233.60

Ntp2_Address=10.10.10.2

Ntp3_Address=10.10.10.3

Ntp4_Address=10.10.10.4

Ip_Timing_Server_Ip=10.10.10.4

Verify the input, Press Cntrl-C to exit

Script will start executing in next 15 seconds

...

......10 more seconds to execute

.........5 more seconds to execute

Function configure_dpe_certs starts at 1424262242

Setting RMS CA signed DPE keystore
spawn scp [email protected]:/rms/data/rmsCerts/dpe.keystore /rms/app/CSCObac/dpe/conf/dpe.keystore

The authenticity of host '10.5.1.208 (10.5.1.208)' can't be established.

RSA key fingerprint is d5:fc:1a:af:c8:e0:f7:3a:10:10:4b:22:b6:3c:f2:95.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.5.1.208' (RSA) to the list of known hosts.

yes
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
dpe.keystore                100% 3959   3.9KB/s   00:00

Performing additional DPE configurations..

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

blrrms-serving-19-2 BAC Device Provisioning Engine

User Access Verification

Password:

.

.

.

blrrms-serving-19-2> enable

Password:
blrrms-serving-19-2# log level 6-info

% OK

File: ../ga_kiwi_scripts/addBacProvisionProperties.kiwi

Finished tests in 13990ms

Total Tests Run - 16

Total Tests Passed - 16

Total Tests Failed - 0

Output saved in file: /tmp/runkiwi.sh_root/addBacProvisionProperties.out.20150218_1755



__________________________________________________________________________________________

Post-processing log for benign error codes:

/tmp/runkiwi.sh_root/addBacProvisionProperties.out.20150218_1755

Revised Test Results

Total Test Count: 16

Passed Tests: 16

Benign Failures: 0

Suspect Failures: 0

Output saved in file: /tmp/runkiwi.sh_root/addBacProvisionProperties.out.20150218_1755-filtered

~

[blrrms-central-19] ~ # Done provisioning group configuration

[root@blrrms-serving-19-2 post_install]#

Step 2  Follow these steps on the Upload node deployed in a different subnet:

a) Log in to the Upload server (having its IPs in a different subnet) as admin user.

b) Switch to root user using the required credentials.

c) Navigate to /rms/ova/scripts/post_install/.

d) Copy the different-subnet Upload server OVA descriptor file to a temporary location or home directory and use this path during script execution.

e) Run the following commands to execute the script:

chmod +x redundant-upload-config.sh;
./redundant-upload-config.sh <diff_subnet_upload_ova_descriptor_filepath>

Example:

[root@blr-blrrms-lus-19-2 post_install]# ./redundant-upload-config.sh /home/admin1/ovftool_upload2

Deployment Descriptor file /home/admin1/ovftool_upload2 found, continuing

INFO: Admin1_Username has no value, setting to default

Enter Password for admin user admin1 on Central Node:

Confirm admin1 Password:
Function validateinputs starts at 1424263071

Starting ip input validation

Done ip input validation

Central_Node_Eth0_Address=10.5.1.208

Upload_Node_Eth0_Address=10.5.4.69

Admin1_Username=admin1

Verify the input, Press Cntrl-C to exit

Script will start executing in next 15 seconds

...

......10 more seconds to execute

.........5 more seconds to execute

Function configure_dpe_certs starts at 1424263088

Setting RMS CA signed LUS keystore
spawn scp [email protected]:/rms/data/rmsCerts/uls.keystore /opt/CSCOuls/conf/uls.keystore

The authenticity of host '10.5.1.208 (10.5.1.208)' can't be established.

RSA key fingerprint is d5:fc:1a:af:c8:e0:f7:3a:10:10:4b:22:b6:3c:f2:95.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.5.1.208' (RSA) to the list of known hosts.

yes
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
uls.keystore                100% 3960   3.9KB/s   00:00
[root@blr-blrrms-lus-19-2 post_install]#


Note: The scripts can be rerun if any error is observed (for example, a wrong password entered for the admin or root user).

Configuring Redundant Serving Nodes

After installing additional serving nodes, use this procedure to update the IP table firewall rules on the serving nodes so that the DPEs on the serving nodes can communicate with each other.

Step 1  Log in to the primary serving node using SSH.

Step 2  Change to root user: su -

Step 3  Update the IP table firewall rules on the primary serving node so that the serving nodes can communicate:

a) iptables -A INPUT -s serving-node-2-eth1-address/32 -d serving-node-1-eth1-address/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT

b) iptables -A OUTPUT -s serving-node-1-eth1-address/32 -d serving-node-2-eth1-address/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT

Port 49186 is used for inter-serving node communications.

Step 4  Save the configuration: service iptables save

Step 5  Log in to the secondary serving node using SSH.

Step 6  Change to root user: su -

Step 7  Update the IP table firewall rules on the secondary serving node:

a) iptables -A INPUT -s serving-node-1-eth1-address/32 -d serving-node-2-eth1-address/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT

b) iptables -A OUTPUT -s serving-node-2-eth1-address/32 -d serving-node-1-eth1-address/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT

Step 8  Save the configuration: service iptables save

Example:

This example assumes that the primary serving node eth1 address is 10.5.2.24 and the primary serving node hostname is blr-rms1-serving; the secondary serving node eth1 address is 10.5.2.20 and the secondary serving node hostname is blr-rms2-serving:

Primary Serving Node:

[root@blr-rms1-serving ~]# iptables -A INPUT -s 10.5.2.20/32 -d 10.5.2.24/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms1-serving ~]# iptables -A OUTPUT -s 10.5.2.24/32 -d 10.5.2.20/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms1-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Secondary Serving Node:

[root@blr-rms2-serving ~]# iptables -A INPUT -s 10.5.2.24/32 -d 10.5.2.20/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms2-serving ~]# iptables -A OUTPUT -s 10.5.2.20/32 -d 10.5.2.24/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms2-serving ~]# service iptables save


iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Setting Up Redundant Serving Nodes

This task enables the IP tables for ports 61610, 61611, 1234, and 647 on both serving nodes.

Step 1  Log in to the primary serving node using SSH.

Step 2  Change to root user: su -

Note  Make sure that you follow the port sequence 61610, 61611, 1234, and 647 while running the commands on both the primary and secondary serving nodes. Otherwise, the system throws an error.

Step 3  For ports 61610 and 61611, run this command:

iptables -A OUTPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -o eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT

Step 4  For ports 1234 and 647, run this command:

iptables -A OUTPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -o eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT

Step 5  For ports 61610 and 61611, run this command:

iptables -A INPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -i eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT

Step 6  For ports 1234 and 647, run this command:

iptables -A INPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -i eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT

Step 7  Save the results: service iptables save

Step 8  Log in to the secondary serving node using SSH.

Step 9  Change to root user: su -

Note  Follow the same port sequence (61610, 61611, 1234, and 647) on the secondary serving node.

Step 10  For ports 61610 and 61611, run this command:

iptables -A OUTPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -o eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT

Step 11  For ports 1234 and 647, run this command:

iptables -A OUTPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -o eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT

Step 12  For ports 61610 and 61611, run this command:

iptables -A INPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -i eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT

Step 13  For ports 1234 and 647, run this command:

iptables -A INPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -i eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT

Step 14  Save the results: service iptables save

This example assumes that the primary serving node eth0 address is 10.5.1.24 and that the secondary serving node eth0 address is 10.5.1.20:

Primary Serving Node

[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p tcp -m tcp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p tcp -m tcp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Secondary Serving Node

[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p tcp -m tcp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p tcp -m tcp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
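Because the same rule template repeats for each port, the additions can also be scripted. The following is a convenience sketch only, not part of the official procedure; it applies the primary-node rules in the required port order (61610, 61611, 1234, 647) within each chain, using the example addresses above:

# Run as root on the primary serving node (10.5.1.24 = primary eth0, 10.5.1.20 = secondary eth0)
for spec in "udp 61610" "udp 61611" "tcp 1234" "tcp 647"; do
  set -- $spec   # $1 = protocol, $2 = port
  iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p $1 -m $1 --dport $2 -m state --state NEW -j ACCEPT
  iptables -A INPUT  -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p $1 -m $1 --dport $2 -m state --state NEW -j ACCEPT
done
service iptables save

On the secondary serving node, swap the source and destination addresses.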

Configuring the PNR for Redundancy

Use this task to verify that all DPEs and the network registrar are ready in the BAC UI and that two DPEs and two PNRs are in one provisioning group in the BAC UI.

Step 1  Log in to the PNR on the primary PNR DHCP server via the Serving node CLI:

/rms/app/nwreg2/local/usrbin/nrcmd -N cnradmin

Enter the password when prompted.

Step 2  Configure the backup DHCP server (the second Serving node's eth0 IP):

cluster Backup-cluster create <Backup DHCP server IP address> admin=<admin username> password=<admin password> product-version=<version number> scp-port=<port number>

Example: nrcmd> cluster Backup-cluster create 10.5.1.20 admin=cnradmin password=Ch@ngeme1 product-version=8.3 scp-port=1234

100 Ok

Backup-cluster:
admin = cnradmin
atul-port =
cluster-id = 2
fqdn =
http-port =
https-port =
ipaddr = 10.5.1.20
licensed-services =
local-servers =
name = Backup-cluster
password =
password-secret = 00:00:00:00:00:00:00:5a
poll-lease-hist-interval =
poll-lease-hist-offset =
poll-lease-hist-retry =
poll-replica-interval = [default=4h]
poll-replica-offset = [default=4h]
poll-subnet-util-interval =
poll-subnet-util-offset =
poll-subnet-util-retry =
product-version = 8.1.3
remote-id =
replication-initialized = [default=false]
restore-state = [default=active]
scp-port = 1234
scp-read-timeout = [default=20m]
shared-secret =
tenant-id = 0 tag: core
use-https-port = [default=false]
use-ssl = [default=optional]

Step 3  Configure the DHCP servers:

failover-pair femto-dhcp-failover create <Main DHCP server IP address> <Backup DHCP server IP address> main=localhost backup=Backup-cluster backup-pct=20 mclt=57600

Example: nrcmd> failover-pair femto-dhcp-failover create 10.5.1.24 10.5.1.20

main=localhost backup=Backup-cluster backup-pct=20 mclt=57600

100 Ok
femto-dhcp-failover:
backup = Backup-cluster
backup-pct = 20%
backup-server = 10.5.1.20
dynamic-bootp-backup-pct =
failover = [default=true]
load-balancing = [default=disabled]
main = localhost
main-server = 10.5.1.24
mclt = 16h
name = femto-dhcp-failover
persist-lease-data-on-partner-ack = [default=true]
safe-period = [default=24h]
scopetemplate =
tenant-id = 0 tag: core
use-safe-period = [default=disabled]

Step 4  Save the configuration: save

Example: nrcmd> save

100 Ok

Step 5  Reload the primary DHCP server: server dhcp reload

Example: nrcmd> server dhcp reload

100 Ok

Step 6  Configure the primary-to-secondary synchronization:

a) cluster localhost set admin=<admin user> password=<admin password>

Example: nrcmd> cluster localhost set admin=cnradmin password=Ch@ngeme1

100 Ok

b) failover-pair femto-dhcp-failover sync exact main-to-backup



Example: nrcmd> failover-pair femto-dhcp-failover sync exact main-to-backup

101 Ok, with warnings

((ClassName RemoteRequestStatus)(error 2147577914)(exception-list

[((ClassName ConsistencyDetail)(error-code 2147577914)(error-object

((ClassName DHCPTCPListener)(ObjectID OID-00:00:00:00:00:00:00:42)

(SequenceNo 30)(name femto-leasequery-listener)(address 0.0.0.0)(port 61610)))

(classid 1155)(error-attr-list [((ClassName AttrErrorDetail)(attr-id-list [03 ])

(error-code 2147577914)(error-string DHCPTCPListener 'femto-leasequery-listener' address will be unset. The default value will apply.))]))]))

Note  The above error is due to the change in the secondary PNR dhcp-listener-address. Change the dhcp-listener-address in the secondary PNR as described in the next steps.

Step 7  Log in to the secondary PNR: /rms/app/nwreg2/local/usrbin/nrcmd -N cnradmin

Enter the password when prompted.

Step 8  Configure the femto lease query listener:

dhcp-listener femto-leasequery-listener set address=<Serving node eth0 IP address>

This address must be the secondary PNR IP address, which is the Serving node eth0 IP address.


Example: nrcmd> dhcp-listener femto-leasequery-listener set address=10.5.1.20

100 Ok
nrcmd> dhcp-listener list
100 Ok
femto-leasequery-listener:
address = 10.5.1.20
backlog = [default=5]
enable = [default=true]
ip6address =
leasequery-backlog-time = [default=120]
leasequery-idle-timeout = [default=60]
leasequery-max-pending-notifications = [default=120000]
leasequery-packet-rate-when-busy = [default=500]
leasequery-send-all = [default=false]
max-connections = [default=10]
name = femto-leasequery-listener
port = 61610
receive-timeout = [default=30]
send-timeout = [default=120]

Step 9  Save the configuration: save

Example: nrcmd> save

100 Ok

Step 10  Reload the secondary DHCP server: server dhcp reload

Example: nrcmd> server dhcp reload


100 Ok

Step 11  Verify communication: dhcp getRelatedServers

Example: nrcmd> dhcp getRelatedServers

100 Ok
Type   Name                         Address          Requests  Communications  State      Partner State  Partner Role
MAIN   -                            10.5.1.24        0         OK              NORMAL     NORMAL         MAIN
TCP-L  blrrms-Serving-02.cisco.com  10.5.1.20,61610  0         NONE            listening  ---

Note Scope list and Lease list are synchronized with the master Serving node.

Proceed to execute configure_PAR_hnbgw.sh to configure all the radius clients on the redundant Serving node.

Configuring the Security Gateway on the ASR 5000 for Redundancy

Step 1  Log in to the Cisco ASR 5000 that contains the HNB and security gateways.

Step 2  Check the context name for the security gateway: show context all

Step 3  Display the HNB gateway configuration: show configuration context security_gateway_context_name

Verify that there are two DHCP server addresses configured. See the dhcp server lines in the example.

Example:

[local]blrrms-xt2-03# show configuration context HNBGW
context HNBGW
  ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
  ipsec transform-set ipsec-vmct
  #exit
  ikev2-ikesa transform-set ikesa-vmct
  #exit
  crypto template vmct-asr5k ikev2-dynamic
    authentication local certificate
    authentication remote certificate
    ikev2-ikesa transform-set list ikesa-vmct
    keepalive interval 120
    payload vmct-sa0 match childsa match ipv4
      ip-address-alloc dynamic
      ipsec transform-set list ipsec-vmct
      tsr start-address 10.5.1.0 end-address 10.5.1.255
    #exit
    nai idr 10.5.1.91 id-type ip-addr
    ikev2-ikesa keepalive-user-activity
    certificate 10-5-1-91
    ca-certificate list ca-cert-name TEF_CPE_SubCA ca-cert-name Ubi_Cisco_Int_ca
  #exit
  interface Iu-Ps-Cs-H
    ip address 10.5.1.91 255.255.255.0
    ip address 10.5.1.92 255.255.255.0 secondary
    ip address 10.5.1.93 255.255.255.0 secondary
  #exit
  subscriber default
    dhcp service CNR context HNBGW
    ip context-name HNBGW
    ip address pool name ipsec
  exit
  radius change-authorize-nas-ip 10.5.1.92 encrypted key +A1rxtnjd9vom7g1ugk4buohqxtt073pbivjonsvn3olnz2wsl0sm5 event-timestamp-window 0 no-reverse-path-forward-check
  aaa group default
    radius max-retries 2
    radius max-transmissions 5
    radius timeout 1
    radius attribute nas-ip-address address 10.5.1.92
    radius server 10.5.1.20 encrypted key +A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj port 1812 priority 2
    radius server 1.4.2.90 encrypted key +A1z4194hjj9zvm24t0vdmob18b329iod1jj76kjh1pzsy3w46m9h4 port 1812 priority 1
  #exit
  gtpp group default
  #exit
  gtpu-service GTPU_FAP_1
    bind ipv4-address 10.5.1.93
  exit
  dhcp-service CNR
    dhcp client-identifier ike-id
    dhcp server 10.5.1.20
    dhcp server 10.5.1.24
    no dhcp chaddr-validate
    dhcp server selection-algorithm use-all
    dhcp server port 61610
    bind address 10.5.1.92
  #exit
  dhcp-server-profile CNR
  #exit
  hnbgw-service HNBGW_1
    sctp bind address 10.5.1.93
    sctp bind port 29169
    associate gtpu-service GTPU_FAP_1
    sctp sack-frequency 5
    sctp sack-period 5
    no sctp connection-timeout
    no ue registration-timeout
    hnb-identity oui discard-leading-char
    hnb-access-mode mismatch-action accept-aaa-value
    radio-network-plmn mcc 116 mnc 116
    rnc-id 116
    security-gateway bind address 10.5.1.91 crypto-template vmct-asr5k context HNBGW
  #exit
  ip route 0.0.0.0 0.0.0.0 10.5.1.1 Iu-Ps-Cs-H
  ip route 10.5.3.128 255.255.255.128 10.5.1.1 Iu-Ps-Cs-H
  ip igmp profile default
  #exit
#exit
end

Step 4  If the second DHCP server is not configured, run these commands to configure it:

a) configure
b) context HNBGW
c) dhcp-service CNR
d) dhcp server <dhcp-server-2-IP-Addr>
e) dhcp server selection-algorithm use-all

Verify that the second DHCP server is configured by examining the output from this step.


Note  Exit from the config mode and view the DHCP IP.

Example:

[local]blrrms-xt2-03# configure

[local]blrrms-xt2-03(config)# context HNBGW

[HNBGW]blrrms-xt2-03(config-ctx)# dhcp-service CNR

[HNBGW]blrrms-xt2-03(config-dhcp-service)# dhcp server 1.1.1.1

[HNBGW]blrrms-xt2-03(config-dhcp-service)# dhcp server selection-algorithm use-all

Step 5  To view the changes, execute the following command:

[local]blrrms-xt2-03# show configuration context HNBGW

Step 6  Save the changes by executing the following command:

[local]blrrms-xt2-03# save config /flash/xt2-03-aug12

Note  The saved filename can be of your choice; for example, xt2-03-aug12.

Configuring the Security Gateway on ASR 5000 for Multiple Subnet or Geo-Redundancy

In a different subnet or geo-redundant deployment, the Serving and Upload nodes are expected to be deployed with IPs on a different subnet. The new subnet must therefore be allowed in the IPsec traffic selector on the Security Gateway (SeGW).

In a deployment where the SeGW (ASR 5000) and RMS are on the same subnet, the output of the HNB GW is displayed as follows (the single subnet information is highlighted below):

[local]blrrms-xt2-03# show configuration context HNBGW
context HNBGW
  ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
  ipsec transform-set ipsec-vmct
  #exit
  ikev2-ikesa transform-set ikesa-vmct
  #exit
  crypto template vmct-asr5k ikev2-dynamic
    authentication local certificate
    authentication remote certificate
    ikev2-ikesa transform-set list ikesa-vmct
    keepalive interval 120
    payload vmct-sa0 match childsa match ipv4
      ip-address-alloc dynamic
      ipsec transform-set list ipsec-vmct
      tsr start-address 10.5.1.0 end-address 10.5.1.255
    #exit


Follow these steps to check and add the different subnet in the IPsec traffic selector of the SeGW (ASR 5000):

Step 1  Log in to the Cisco ASR 5000 that contains the HNB and security gateways.

Step 2  Check the context name for the security gateway: show context all

Step 3  Display the HNB gateway configuration: show configuration context security_gateway_context_name

Step 4  Update the SeGW (ASR 5000) configuration with the additional subnet using the following command:

tsr start-address <new subnet start IP address> end-address <new subnet end IP address>

Example:

tsr start-address 10.5.4.0 end-address 10.5.4.255

[local]blrrms-xt2-19# configure

[local]blrrms-xt2-19(config)# context HNBGW

[HNBGW]blrrms-xt2-19(config-ctx)# crypto template vmct-asr5k ikev2-dynamic

[HNBGW]blrrms-xt2-19(cfg-crypto-tmpl-ikev2-tunnel)# payload vmct-sa0 match childsa match ipv4

[HNBGW]blrrms-xt2-19(cfg-crypto-tmpl-ikev2-tunnel-payload)# tsr start-address 10.5.4.0 end-address 10.5.4.255

[HNBGW]blrrms-xt2-19(cfg-crypto-tmpl-ikev2-tunnel-payload)# exit

[HNBGW]blrrms-xt2-19(cfg-crypto-tmpl-ikev2-tunnel)# exit

[HNBGW]blrrms-xt2-19(config-ctx)# exit

[HNBGW]blrrms-xt2-19(config)# exit

[local]blrrms-xt2-19# save config /flash/xt2-03-aug12

Are you sure? [Yes|No]: yes

[local]blrrms-xt2-19#

Step 5  Verify the updated SeGW configuration using the command:

show configuration context security_gateway_context_name

The updated output is shown below:

[local]blrrms-xt2-03# show configuration context HNBGW
config
context HNBGW
  ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
  ipsec transform-set ipsec-vmct
  #exit
  ikev2-ikesa transform-set ikesa-vmct
  #exit
  crypto template vmct-asr5k ikev2-dynamic
    authentication local certificate
    authentication remote certificate
    ikev2-ikesa transform-set list ikesa-vmct
    keepalive interval 120
    payload vmct-sa0 match childsa match ipv4
      ip-address-alloc dynamic
      ipsec transform-set list ipsec-vmct
      tsr start-address 10.5.1.0 end-address 10.5.1.255
      tsr start-address 10.5.4.0 end-address 10.5.4.255
    #exit

#exit

Configuring the HNB Gateway for Redundancy

Step 1  Log in to the HNB gateway.

Step 2  Display the configuration context of the HNB gateway so that you can verify the radius information:

show configuration context HNBGW_context_name


If the radius parameters are not configured as shown in this example, configure them as in this procedure.

Example:

[local]blrrms-xt2-03# show configuration context HNBGW
context HNBGW
  ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
  ipsec transform-set ipsec-vmct
  #exit
  ikev2-ikesa transform-set ikesa-vmct
  #exit
  crypto template vmct-asr5k ikev2-dynamic
    authentication local certificate
    authentication remote certificate
    ikev2-ikesa transform-set list ikesa-vmct
    keepalive interval 120
    payload vmct-sa0 match childsa match ipv4
      ip-address-alloc dynamic
      ipsec transform-set list ipsec-vmct
      tsr start-address 10.5.1.0 end-address 10.5.1.255
    #exit
    nai idr 10.5.1.91 id-type ip-addr
    ikev2-ikesa keepalive-user-activity
    certificate 10-5-1-91
    ca-certificate list ca-cert-name TEF_CPE_SubCA ca-cert-name Ubi_Cisco_Int_ca
  #exit
  interface Iu-Ps-Cs-H
    ip address 10.5.1.91 255.255.255.0
    ip address 10.5.1.92 255.255.255.0 secondary
    ip address 10.5.1.93 255.255.255.0 secondary
  #exit
  subscriber default
    dhcp service CNR context HNBGW
    ip context-name HNBGW
    ip address pool name ipsec
  exit
  radius change-authorize-nas-ip 10.5.1.92 encrypted key +A1rxtnjd9vom7g1ugk4buohqxtt073pbivjonsvn3olnz2wsl0sm5 event-timestamp-window 0 no-reverse-path-forward-check
  aaa group default
    radius max-retries 2
    radius max-transmissions 5
    radius timeout 1
    radius attribute nas-ip-address address 10.5.1.92
    radius server 10.5.1.20 encrypted key +A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj port 1812 priority 2
    radius server 1.4.2.90 encrypted key +A1z4194hjj9zvm24t0vdmob18b329iod1jj76kjh1pzsy3w46m9h4 port 1812 priority 1
  #exit
  gtpp group default
  #exit
  gtpu-service GTPU_FAP_1
    bind ipv4-address 10.5.1.93
  exit
  dhcp-service CNR
    dhcp client-identifier ike-id
    dhcp server 10.5.1.20
    dhcp server 10.5.1.24
    no dhcp chaddr-validate
    dhcp server selection-algorithm use-all
    dhcp server port 61610
    bind address 10.5.1.92
  #exit
  dhcp-server-profile CNR
  #exit
  hnbgw-service HNBGW_1
    sctp bind address 10.5.1.93
    sctp bind port 29169
    associate gtpu-service GTPU_FAP_1
    sctp sack-frequency 5
    sctp sack-period 5
    no sctp connection-timeout
    no ue registration-timeout
    hnb-identity oui discard-leading-char
    hnb-access-mode mismatch-action accept-aaa-value
    radio-network-plmn mcc 116 mnc 116
    rnc-id 116
    security-gateway bind address 10.5.1.91 crypto-template vmct-asr5k context HNBGW
  #exit
  ip route 0.0.0.0 0.0.0.0 10.5.1.1 Iu-Ps-Cs-H
  ip route 10.5.3.128 255.255.255.128 10.5.1.1 Iu-Ps-Cs-H
  ip igmp profile default
  #exit
#exit
end

Step 3  If the radius server configuration is not as shown in the above example, perform the following configuration:

a) configure
b) context HNBGW_context_name
c) radius server <radius-server-ip-address> key secret port 1812 priority 2

Note  When two radius servers are configured, one server is assigned priority 1 and the other server is assigned priority 2. If radius server entries are already configured, check their priorities; otherwise, assign new server priorities.

Example:

[local]blrrms-xt2-03# configure

[local]blrrms-xt2-03(config)# context HNBGW

[HNBGW]blrrms-xt2-03(config-ctx)# radius server 10.5.1.20 key secret port 1812 priority 2
radius server 10.5.1.20 encrypted key +A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj port 1812 priority 2

Step 4  If the configuration of the radius server is not correct, delete it: no radius server <radius-server-ip-address>

Example:

[HNBGW]blrrms-xt2-03(config-ctx)# no radius server 10.5.1.20

Step 5  Configure the radius maximum retries and timeout settings:

a) configure
b) context hnbgw_context_name
c) radius max-retries 2
d) radius timeout 1

After configuring the radius settings, verify that they are correct as in the example.

Example:

[local]blrrms-xt2-03# configure

[local]blrrms-xt2-03(config)# context HNBGW

[HNBGW]blrrms-xt2-03(config-ctx)# radius max-retries 2

[HNBGW]blrrms-xt2-03(config-ctx)# radius timeout 1 radius max-retries 2

82

RAN Management System Installation Guide, Release 4.1

May 25, 2015 HF

RMS Installation Tasks

Configuring DNS for Redundancy radius max-transmissions 5 radius timeout 1

After the configuration is complete, the HNB GW sends the access request three times to the primary PAR, with a one-second delay between requests.

Configuring DNS for Redundancy

Configure the DNS with the newly added redundant configuration for the Serving and Upload nodes.
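For example, after updating the DNS you can verify from an RMS node that the redundant FQDNs resolve to the new node addresses (a simple check, assuming the host utility is installed; the FQDNs below are the example values used earlier in this chapter, so substitute your own):

host femtoacs19.testlab.com
host femtolus19.testlab.com

Each lookup should return the address of the corresponding redundant Serving or Upload node.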

RMS High Availability Deployment

The high availability feature for Cisco RMS is designed to ensure continued operation of Cisco RMS sites in case of network failures. High availability provides a redundant setup that is activated automatically or manually when an active Central node or Provisioning and Management Gateway (PMG) database (DB) fails at one RMS site. This setup ensures that the Central node and PMG DB are connected at all times.

To implement high availability, you need RMS site 1 with the primary Central, Serving, and Upload nodes, and RMS site 2 with redundant Serving and Upload nodes.

To learn more about high availability and how to configure it for Cisco RMS, see the High Availability for Cisco RAN Management Systems document.

Changing Default Routing Interface on the Central Node

Perform the following steps post-RMS installation:


Step 1  Log in to the Central node as an admin user and switch to "root" user.

Step 2  Edit the eth0 routing config file and comment out the default gateway.

Example:

[blrrms-central-22] ~ # vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=none

ONBOOT=yes

TYPE=Ethernet

NM_CONTROLLED="no"

IPV6INIT=no

IPADDR=10.5.1.220

NETMASK=255.255.255.0

#GATEWAY=10.105.233.11

BROADCAST=10.5.1.255

Step 3  Save the edited file.

Step 4  Check that the default gateway entry is present in the eth1 routing config file.



Example:

[blrrms-central-22] ~ # cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=none

ONBOOT=yes

TYPE=Ethernet

NM_CONTROLLED="no"

IPV6INIT=no

IPADDR=10.105.233.92

NETMASK=255.255.255.128

GATEWAY=10.105.233.11

BROADCAST=10.105.233.127

Step 5  Reboot the Central node with the reboot command and log back in to the server as an admin user.

Step 6  Check that default routing is via the eth1 interface.

[rms-aio-central] ~ $ route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
10.10.10.3       10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
72.163.128.140   10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
10.10.10.4       10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
10.105.233.11    10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
171.68.226.120   10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
10.105.233.60    10.105.233.1     255.255.255.255  UGH    0       0    0    eth1
10.105.233.0     0.0.0.0          255.255.255.128  U      0       0    0    eth1
10.5.1.0         0.0.0.0          255.255.255.0    U      0       0    0    eth0
0.0.0.0          10.105.233.11    0.0.0.0          UG     0       0    0    eth1
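On systems where the iproute2 utilities are available, the same verification can be done with the ip command (an equivalent check, not part of the original procedure):

ip route show

The default route should be listed via 10.105.233.11 on dev eth1, and only the 10.5.1.0/24 network should remain on eth0.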

Optimizing the Virtual Machines

To run the RMS software, you need to verify that the VMs that you are running are up-to-date and configured optimally. Use these tasks to optimize your VMs.

Upgrading the VM Hardware Version

To have better performance parameter options available (for example, more virtual CPUs and memory), the VMware hardware version needs to be upgraded to version 8 or above. You can upgrade the version using the vSphere client.


Note Prior to the VM hardware upgrade, make a note of the current hardware version from vSphere client.

Figure 8: VMware Hardware Version

Step 1  Start the vSphere client.

Step 2  Right-click the vApp for one of the RMS nodes and select Power Off.

Figure 9: Power Off the vApp


Step 3  Right-click the virtual machine for the RMS node (central, serving, upload) and select Upgrade Virtual Hardware. The software upgrades the virtual machine hardware to the latest supported version.

Note  The Upgrade Virtual Hardware option appears only if the virtual hardware on the virtual machine is not the latest supported version.

Step 4  Click Yes in the Confirm Virtual Machine Upgrade screen to continue with the virtual hardware upgrade.

Step 5  Verify that the upgraded version is displayed in the Summary screen of the vSphere client.

Step 6  Repeat this procedure for all remaining VMs (central, serving, and upload) so that all three VMs are upgraded to the latest hardware version.

Step 7  Right-click the respective vApp of the RMS nodes and select Power On.

Step 8  Make sure that all VMs are completely up with their new installation configurations.

Upgrading the VM CPU and Memory Settings

Before You Begin

Upgrade the VM hardware version as described in Upgrading the VM Hardware Version, on page 84.


Note  Upgrade the CPU/memory settings of the required RMS VMs using the below procedure to match the configurations defined in Optimum CPU and Memory Configurations, on page 13.

Step 1  Start the VMware vSphere web client.

Step 2  Right-click the vApp for one of the RMS nodes from the left panel and select Power Off.

Step 3  Right-click the virtual machine for an RMS node (central, serving, upload) and select Edit Settings.

Step 4  Select the Virtual Hardware tab. Click or expand Memory in the Virtual Hardware pane on the left of the screen and update the RAM.

Step 5  Click the Virtual Hardware tab and update the Number of CPUs.

Step 6  Click OK.

Step 7  Right-click the vApp and select Power On.

Step 8  Repeat this procedure for all remaining VMs (central, serving, and upload).

Upgrading the Upload VM Data Sizing

Note  Refer to Virtualization Requirements, on page 12, for more information on data sizing.

Step 1  Log in to the VMware vSphere web client and connect to a specific vCenter server.

Step 2  Select the Upload VM, click the Summary tab, and view the available free disk space in Virtual Hardware > Location. Make sure that there is sufficient disk space available to make a change to the configuration.

Figure 10: Upload Node Summary Tab


Step 3  Right-click the RMS upload virtual machine and select Power followed by Shut Down Guest.

Step 4  Right-click the RMS upload virtual machine again and select Edit Settings.

Step 5  In the Edit Settings page, click New Device and select New Hard Disk or Existing Hard Disk to add or select a new hard disk.

Step 6  Select one of the data stores based on the disk size needed, enter the required disk size, and create a new hard disk.

Step 7  Click OK.

Step 8  Repeat Steps 5 through 7 for Hard disk 2.

Step 9  Right-click the VM and select Power followed by Power On.

Step 10  Log in to the Upload node.

a) Log in to the Central node VM using the Central node eth1 address.

b) ssh to the Upload VM using the Upload node hostname.

Example: ssh admin1@blr-rms14-upload

Step 11  Check the effective disk space after expanding: fdisk -l

Step 12  Apply fdisk to the expanded disk, create the new partition on the disk, and save.

fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): n

Command action e extended p primary partition (1-4) p

Partition number (1-4): 1

First cylinder (1-52216, default 1): 1

Last cylinder, +cylinders or +size{K,M,G} (1-52216, default 52216): 52216

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks..

Follow the on-screen prompts carefully to avoid errors that may corrupt the entire system.

The cylinder values may vary based on the machine setup.

Step 13  Repeat the previous step to create the partition on the other disk.

Step 14  Stop the LUS process.

Example: god stop UploadServer

Sending 'stop' command

The following watches were affected:

UploadServer

Step 15  Create backup folders for the 'files' partition.

Example: mkdir -p /backups/uploads

The system responds with a command prompt.


mkdir -p /backups/archives

The system responds with a command prompt.

Step 16  Back up the data.

Example:
mv /opt/CSCOuls/files/uploads/* /backups/uploads
mv /opt/CSCOuls/files/archives/* /backups/archives

The system responds with a command prompt.

Step 17  Create the file system on the expanded partitions.

Example: mkfs.ext4 -i 4096 /dev/sdb1

The system responds with a command prompt.

Step 18  Repeat the previous step for the other partition.

Step 19  Mount the expanded partitions under the /opt/CSCOuls/files/uploads and /opt/CSCOuls/files/archives directories using the following commands:

mount -t ext4 -o noatime,data=writeback,commit=120 /dev/sdb1 /opt/CSCOuls/files/uploads/
mount -t ext4 -o noatime,data=writeback,commit=120 /dev/sdc1 /opt/CSCOuls/files/archives/

The system responds with a command prompt.

Step 20  Edit /etc/fstab and append the following entries to make the mount points persist across reboots.

/dev/sdb1 /opt/CSCOuls/files/uploads/ ext4 noatime,data=writeback,commit=120 0 0

/dev/sdc1 /opt/CSCOuls/files/archives/ ext4 noatime,data=writeback,commit=120 0 0
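To confirm that the new fstab entries are valid without waiting for a reboot, you can remount everything listed in /etc/fstab and check the result (a quick sanity check; mount -a mounts any fstab entry that is not already mounted):

mount -a
df -h /opt/CSCOuls/files/uploads /opt/CSCOuls/files/archives

Both paths should show the /dev/sdb1 and /dev/sdc1 filesystems respectively.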

Step 21  Restore the already backed-up data.

mv /backups/uploads/* /opt/CSCOuls/files/uploads/ mv /backups/archives/* /opt/CSCOuls/files/archives/

The system responds with a command prompt.

Step 22  Check ownership of the /opt/CSCOuls/files/uploads and /opt/CSCOuls/files/archives directories with the following command.

ls -l /opt/CSCOuls/files

Step 23  Change the ownership of the files/uploads and files/archives directories to ciscorms.

chown -R ciscorms:ciscorms /opt/CSCOuls/files/

The system responds with a command prompt.

Step 24  Verify ownership of the mounting directory.

ls -al /opt/CSCOuls/files/
total 12
drwxr-xr-x. 7 ciscorms ciscorms 4096 Aug  5 06:03 archives
drwxr-xr-x. 2 ciscorms ciscorms 4096 Jul 25 15:29 conf
drwxr-xr-x. 5 ciscorms ciscorms 4096 Jul 31 17:28 uploads

Step 25  Edit the /opt/CSCOuls/conf/UploadServer.properties file:

cd /opt/CSCOuls/conf;
sed -i 's/UploadServer.disk.alloc.global.maxgb.*/UploadServer.disk.alloc.global.maxgb=<Max limit>/' UploadServer.properties;

The system returns to the command prompt. Replace <Max limit> with the maximum size of the partition mounted under /opt/CSCOuls/files/uploads.
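For example, if the partition mounted under /opt/CSCOuls/files/uploads is 400 GB (an illustrative value only), the edit and a verification grep would look like:

cd /opt/CSCOuls/conf
sed -i 's/UploadServer.disk.alloc.global.maxgb.*/UploadServer.disk.alloc.global.maxgb=400/' UploadServer.properties
grep 'UploadServer.disk.alloc.global.maxgb' UploadServer.properties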


Step 26 Start the LUS process.

god start UploadServer

Sending 'start' command

The following watches were affected:

UploadServer

Note  For the Upload Server to work properly, the /opt/CSCOuls/files/uploads/ and /opt/CSCOuls/files/archives/ folders must be on different partitions.
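You can confirm that the two folders are on different partitions with a quick df check (each path should report a different filesystem, /dev/sdb1 and /dev/sdc1 in this procedure's example):

df -h /opt/CSCOuls/files/uploads/ /opt/CSCOuls/files/archives/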

RMS Installation Sanity Check

Note  Verify that there are no installation-related errors or exceptions in the ova-first-boot.log file in the /root directory. Proceed with the following procedures only after confirming from the logs that the installation of all the RMS nodes is successful.
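For example, a simple way to scan the log for problems (a convenience check, not an official verification step):

grep -iE 'error|exception' /root/ova-first-boot.log

No output means no obvious installation errors were logged.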

Sanity Check for the BAC UI

Following the installation, perform this procedure to ensure that all connections are established.

Note  The default user name is bacadmin. The password is as specified in the OVA descriptor file (prop:RMS_App_Password). The default password is Ch@ngeme1.

Step 1  Log in to the BAC UI using the URL https://<central-node-north-bound-IP>/adminui.

Step 2  Click Servers.

Step 3  Click the tabs at the top of the display to verify that all components are populated:


• DPEs — Should display respective serving node name given in the descriptor file used for deployment. Click on the serving node name. The display should indicate that this serving node is in the Ready state.

Figure 11: BAC: View Device Provisioning Engines Details

• NRs — Should display the NR (same as serving node name) given in the descriptor file used for deployment. Click on the NR name. The display should indicate that this node is in the Ready state.

• Provisioning Groups — Should display the respective provisioning group name given in the descriptor file used for deployment. Click on the provisioning group name. The display should indicate the ACS URL pointing to the value of the property prop:Acs_Virtual_Fqdn that you specified in the descriptor file.

• RDU — Should display the RDU in the Ready state.

If all of these screens display correctly as described, the BAC UI is communicating correctly.

Sanity Check for the DCC UI

Note  Before using the pmguser or pmgadmin usernames through the DCC UI to communicate with the PMG, ensure that you change their default passwords.

Following the installation, perform this procedure to ensure that all connections are established.

Step 1 Log in to DCC UI using the URL https://[central-node-northbound-IP]/dcc_ui.

The default username is dccadmin. The password is as specified in the OVA descriptor file (prop:RMS_App_Password).

The default password is Ch@ngeme1.


Step 2 Click the Groups and IDs tab and verify that the Group Types table shows Area, Femto Gateway, RFProfile, Enterprise and Site.

Verifying Application Processes

Verify the RMS virtual appliance deployment by logging in to each of the virtual servers for the Central, Serving, and Upload nodes. Note that these processes and network listeners are available for each of the servers:

Step 1  Log in to the Central node as a root user.

Step 2  Run: service bprAgent status

In the output, note that these processes are running:

[rtpfga-s1-central1] ~ # service bprAgent status

BAC Process Watchdog is running

Process [snmpAgent] is running

Process [rdu] is running

Process [tomcat] is running

Step 3  Run: /rms/app/nwreg2/regional/usrbin/cnr_status

Note  This step is not applicable in a third-party SeGW RMS deployment.

[rtpfga-ova-central06] ~ # /rms/app/nwreg2/regional/usrbin/cnr_status

Server Agent running (pid: 4564)

CCM Server running (pid: 4567)

WEB Server running (pid: 4568)

RIC Server Running (pid: 4569)

Step 4  Log in to the Serving node and run the command as root user.

Step 5  Run: service bprAgent status

[rtpfga-s1-serving1] ~ # service bprAgent status

BAC Process Watchdog is running.

Process [snmpAgent] is running.

Process [dpe] is running.

Process [cli] is running.

Step 6  Run: /rms/app/nwreg2/local/usrbin/cnr_status

Note  This step is not applicable in a third-party SeGW RMS deployment.

[rtpfga-s1-serving1] ~ # /rms/app/nwreg2/local/usrbin/cnr_status

DHCP server running (pid: 16805)



Server Agent running (pid: 16801)

CCM Server running (pid: 16804)

WEB Server running (pid: 16806)

CNRSNMP server running (pid: 16808)

RIC Server Running (pid: 16807)

TFTP Server is not running

DNS Server is not running

DNS Caching Server is not running

Step 7 Run: /rms/app/CSCOar/usrbin/arstatus

[root@rms-aio-serving ~]# /rms/app/CSCOar/usrbin/arstatus

Cisco Prime AR RADIUS server running (pid: 24272)

Cisco Prime AR Server Agent running (pid: 24232)

Cisco Prime AR MCD lock manager running (pid: 24236)

Cisco Prime AR MCD server running (pid: 24271)

Cisco Prime AR GUI running (pid: 24273)

[root@rms-aio-serving ~]#

Step 8 Log in to the Upload node as a root user.

Step 9 Run: service god status

[rtpfga-s1-upload1] ~ # service god status

UploadServer: up

Note If the UploadServer status is not up (start or unmonitor state), see Upload Server is Not Up, on page 176 for details.
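The per-node checks above can also be collected in a small shell script. The following is an illustrative sketch only (it is not part of the RMS distribution); it probes the status commands from Steps 1 through 9 and skips any that do not exist on the local node:

#!/bin/bash
# Sketch: summarize RMS process health on whichever node this runs on.
# Assumes the paths shown in Steps 1-9; commands absent on a node are skipped.
service bprAgent status 2>/dev/null
for f in /rms/app/nwreg2/regional/usrbin/cnr_status \
         /rms/app/nwreg2/local/usrbin/cnr_status \
         /rms/app/CSCOar/usrbin/arstatus; do
    if [ -x "$f" ]; then
        echo "--- $f ---"
        "$f"
    fi
done
service god status 2>/dev/null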


CHAPTER 5

Installation Tasks Post-OVA Deployment

Perform these tasks after deploying the OVA descriptor files.

• HNB Gateway and DHCP Configuration, page 95

• Installing RMS Certificates, page 101

• Enabling Communication for VMs on Different Subnets, page 112

• Configuring Default Routes for Direct TLS Termination at the RMS, page 113

• Post-Installation Configuration of BAC Provisioning Properties, page 115

• PMG Database Installation and Configuration, page 116

• Configuring the Central Node, page 121

• Area Table Data Population, page 124

• Configuring New Groups and Pools, page 126

• Optional Features, page 127

HNB Gateway and DHCP Configuration

If the HNB Gateway, DHCP, and RMS certificate information was not provided in the OVA descriptor file during RMS deployment, or if you need to change the configured HNB Gateway and DHCP values, perform these steps after installation to complete the RMS deployment.

The HNB Gateway and DHCP settings can be configured in the OVA descriptor files. However, if the HNB Gateway or DHCP configurations are missing or invalid, the IPsec tunnel is not established and the following error can appear in the troubleshooting logs of the AP:

[Device [001B67-357539019692488] Session [12f6ec053A146159b1aff53A80000049].

Extension [GA_DiscoveredCapabilities.js].]: [Alarm matched = EventType:Communications

Alarm ,ProbableCause:IPSec connectivity failure ,SpecificProblem:

The HNB has failed to establish or dropped the connection to the IPSec tunnel]


Use this procedure to configure the HNB Gateway and DHCP settings after deploying the OVA descriptor files.

Step 1 Log in to the Central node.

Step 2 Switch to root user: su -

Step 3 Locate the script configure_hnbgw.sh in the /rms/ova/scripts/post_install directory.

Step 4 Change the directory: cd /rms/ova/scripts/post_install

Step 5 Run the configuration script: ./configure_hnbgw.sh

The script prompts you for the HNB-GW and DHCP IP addresses to be configured with the RMS, as shown in this example.

Example:

[rms-distr-central] /rms/ova/scripts/post_install # ./configure_hnbgw.sh

*******************Post-installation script to configure HNB-GW with RMS*******************************

Enter the value of Asr5k_Dhcp_Address

10.5.4.152

Enter the value of Asr5k_Radius_Address

10.5.4.152

Enter the value of RMS_App_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Admin1_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Root_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Dhcp_Pool_Network

7.0.2.192

Enter the value of Dhcp_Pool_Subnet

255.255.255.240

Enter the value of Dhcp_Pool_FirstAddress

7.0.2.193

Enter the value of Dhcp_Pool_LastAddress

7.0.2.206

Enter yes To Configure the value of the Asr5k_Radius_CoA_Port.

Enter no to use the default value no

Configuring the Default Asr5k_Radius_CoA_Port 3799
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
spawn scp ./confighnbgw.sh admin1@10.5.4.18:/home/admin1/confighnbgw.sh
admin1@10.5.4.18's password:
confighnbgw.sh                              100% 2368     2.3KB/s   00:00
spawn ssh admin1@10.5.4.18
admin1@10.5.4.18's password:

Last login: Tue Aug 26 05:09:41 2014 from 10.5.4.17

This system is restricted for authorized users and for legitimate business purposes only. The actual or attempted unauthorized access, use, or modification of this system is strictly prohibited Unauthorized users are subject to

Company disciplinary proceedings and/or criminal and civil penalties under state, federal, or other applicable domestic and foreign laws. The use of this system may be monitored and recorded for administrative and security reasons.

[admin1@rms-distr-serving ~]$ su -

Password:

Setting firewall for CNR DHCP....


Setting firewall for CAR Radius
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Configuring CAR....

200 OK

Configuring CNR

100 Ok
session: cluster = localhost current-vpn = global default-format = user
dhcp-edit-mode = synchronous dns-edit-mode = synchronous groups = superuser
roles = superuser user-name = cnradmin visibility = 5
nrcmd> scope dummy-scope delete
100 Ok
nrcmd> scope femto-scope delete
100 Ok
nrcmd> scope dummy-scope create 10.5.4.152 255.255.255.255
100 Ok
dummy-scope: addr = 10.5.4.152
allocate-first-available = [default=false] allocation-priority = [default=0]
backup-pct = bootp = [default=disabled] deactivated = description =
dhcp = [default=enabled] dns-host-bytes = dynamic-bootp = [default=disabled]
embedded-policy = failover-backup-allocation-boundary = free-address-config =
ignore-declines = [default=false] mask = 255.255.255.255
ping-clients = ping-timeout = policy = default primary-subnet =
renew-only = restrict-to-reservations = [default=disabled] selection-tag-list =
subnet = 10.5.4.152/32 tenant-id = [default=0] vpn-id = 0 name: global
nrcmd> scope femto-scope create 7.0.2.192 255.255.255.240
100 Ok
femto-scope: addr = 7.0.2.192
allocate-first-available = [default=false] allocation-priority = [default=0]
backup-pct = bootp = [default=disabled] deactivated = description =
dhcp = [default=enabled] dns-host-bytes = dynamic-bootp = [default=disabled]
embedded-policy = failover-backup-allocation-boundary = free-address-config =
ignore-declines = [default=false] mask = 255.255.255.240
ping-clients = ping-timeout = policy = default primary-subnet =
renew-only = restrict-to-reservations = [default=disabled] selection-tag-list =
subnet = 7.0.2.192/28 tenant-id = [default=0] vpn-id = 0 name: global
nrcmd> scope femto-scope addRange 7.0.2.193 7.0.2.206
100 Ok
7.0.2.193 7.0.2.206
nrcmd> scope femto-scope set primary-subnet=10.5.4.152/32 primary-scope=dummy-scope policy=default
100 Ok
primary-subnet=10.5.4.152/32 primary-scope=dummy-scope policy=default
nrcmd> save
100 Ok
nrcmd> server DHCP reload
100 Ok
nrcmd> exit
[root@rms-distr-serving ~]# rm -f /home/admin1/confighnbgw.sh
[root@rms-distr-serving ~]# exit
logout
[admin1@rms-distr-serving ~]$ exit
logout
Connection to 10.5.4.18 closed.
*******Done************


Note Log in to the PNR CLI and check whether the DHCP pools were created successfully.

[admin1@blr-rms19-serving ~]$ nrcmd
username: cnradmin
password: <enter cnradmin password>
100 Ok
session: cluster = localhost current-vpn = global default-format = user
dhcp-edit-mode = synchronous dns-edit-mode = synchronous groups = superuser
roles = superuser user-name = cnradmin visibility = 5
nrcmd> scope listbrief
100 Ok
Name          Subnet          Policy
----          ------          ------
dummy-scope   10.5.1.172/32   default
femto-scope   7.0.2.48/28     default
nrcmd> lease listbrief

100 Ok - 14 leases found

Address State Lease Expiration MAC Address

------- ----- ---------------- -----------

7.0.2.49 available

7.0.2.50 available

7.0.2.51 available

7.0.2.52 available

7.0.2.53 available

7.0.2.54 available

7.0.2.55 available

7.0.2.56 available

7.0.2.57 available

7.0.2.58 available

7.0.2.59 available

7.0.2.60 available

7.0.2.61 available

7.0.2.62 available
nrcmd>

After running the configuration tool, IPsec connectivity should function as expected and the AP should move to the operational state.
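If you prefer to script the pool verification rather than type at the interactive prompt, nrcmd can also read commands from standard input in batch mode. The following is a minimal sketch under the assumption that the -N/-P/-b options of the Cisco Prime Network Registrar CLI are available in your release; the password shown is a placeholder:

/rms/app/nwreg2/local/usrbin/nrcmd -N cnradmin -P <cnradmin-password> -b <<'EOF'
scope listbrief
lease listbrief
EOF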


Modifying the DHCP IP Pool Subnet

The DHCP subnet is configured as /24 (255.255.255.0) by default, even if a different subnet is specified in the Central server descriptor file. Modify the subnet after installation using either Step 1 or Step 2 below:

Step 1 If the following properties were configured through the OVA descriptor during deployment, execute steps a and b:

• prop: Dhcp_Pool_Network

• prop: Dhcp_Pool_Subnet

• prop: Dhcp_Pool_FirstAddress

• prop: Dhcp_Pool_LastAddress

a) Log in to the Serving node and switch to root.

b) Run the following commands, replacing the angle-bracket placeholders with the correct values from the OVA descriptor (a worked example with sample values follows Step 2 below):

# iptables -D OUTPUT -p tcp -s <Serving_Node_Eth0_Address> -d <Dhcp_Pool_Network>/24 --dport 7547 -m state --state NEW -j ACCEPT

# iptables -A OUTPUT -p tcp -s <Serving_Node_Eth0_Address> -d <Dhcp_Pool_Network>/<Dhcp_Pool_Subnet> --dport 7547 -m state --state NEW -j ACCEPT

# service iptables save

Step 2 If the DHCP information was skipped in the OVA descriptor, run the post-installation utility configure_hnbgw.sh and provide the following DHCP information to create the required IP tables:

• prop: Dhcp_Pool_Network

• prop: Dhcp_Pool_Subnet

• prop: Dhcp_Pool_FirstAddress

• prop: Dhcp_Pool_LastAddress

a) Repeat Steps 1 to 5 of the HNB Gateway and DHCP Configuration, on page 95, procedure to update the DHCP information.

Example:

Enter the value of Asr5k_Dhcp_Address

10.5.4.152

Enter the value of Asr5k_Radius_Address

10.5.4.152

Enter the value of RMS_App_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Admin1_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Root_Password from OVA descriptor

(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Enter the value of Dhcp_Pool_Network

7.0.2.192

Enter the value of Dhcp_Pool_Subnet

255.255.255.240

Enter the value of Dhcp_Pool_FirstAddress

7.0.2.193


Enter the value of Dhcp_Pool_LastAddress

7.0.2.206
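As referenced in Step 1b, the following shows the iptables commands filled in with the sample values used in this guide; the Serving node eth0 address 10.5.4.18 is illustrative only, and the pool values match the example above (network 7.0.2.192, subnet 255.255.255.240):

# iptables -D OUTPUT -p tcp -s 10.5.4.18 -d 7.0.2.192/24 --dport 7547 -m state --state NEW -j ACCEPT
# iptables -A OUTPUT -p tcp -s 10.5.4.18 -d 7.0.2.192/255.255.255.240 --dport 7547 -m state --state NEW -j ACCEPT
# service iptables save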

HNB GW Configuration on Redundant Serving Node

The post-install script configure_hnbgw.sh mentioned in the previous section (HNB Gateway and DHCP Configuration, on page 95) configures the HNB GW information only on the active Serving node. Therefore, to configure the HNB GW information on a redundant Serving node, see the Method of Procedure for HNB Gateway Configuration on Redundant Serving Nodes.

Installing RMS Certificates

Two types of RMS certificates are supported. Use one of the following options, depending on the availability of your signing authority:

• Auto-generated CA-signed RMS certificates – if you do not have your own signing authority (CA) defined

• Self-signed RMS certificates (for manual signing) – if you have your own signing authority (CA) defined

Auto-Generated CA-Signed RMS Certificates

The RMS supports auto-generated CA-signed RMS certificates as part of the installation to avoid manual signing overhead. Based on the optional inputs in the OVA descriptor file, the RMS installation generates the customer-specific Root CA and Intermediate CA, and subsequently signs the RMS (DPE and ULS) certificates using these generated CAs. If these properties are not specified in the OVA descriptor file, the default values are used.

Table 2: Optional Certificate Properties in OVA Descriptor File

Property        Default Value
prop:Cert_C     US
prop:Cert_ST    NC
prop:Cert_L     RTP
prop:Cert_O     Cisco Systems, Inc.
prop:Cert_OU    MITG

The signed RMS certificates are located at the following destination by default:


• DPE — /rms/app/CSCObac/dpe/conf/dpe.keystore

• ULS — /opt/CSCOuls/conf/uls.keystore

The following example shows how to verify the contents of a keystore, for example, dpe.keystore:

Note The keystore password is Ch@ngeme1.

[root@blrrms-serving-08 ~]# keytool -keystore /rms/app/CSCObac/dpe/conf/dpe.keystore -list -v

Enter keystore password:

Keystore type: JKS

Keystore provider: SUN

Your keystore contains 1 entry

Alias name: dpe-key

Creation date: May 19, 2014

Entry type: PrivateKeyEntry

Certificate chain length: 3

Certificate[1]:

Owner: CN=10.5.2.44, OU=POC, O=Cisco Systems, ST=NC, C=US

Issuer: CN="Cisco Systems, Inc. POC Int", O=Cisco

Serial number: 1

Valid from: Mon May 19 17:24:31 UTC 2014 until: Tue May 19 17:24:31 UTC 2015

Certificate fingerprints:

MD5: C7:9D:E1:A1:E9:2D:4C:ED:EE:3E:DA:4B:68:B3:0D:0D

SHA1: D9:55:3E:6E:29:29:B4:56:D6:1F:FB:03:43:30:8C:14:78:49:A4:B8

Signature algorithm name: SHA256withRSA

Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: DC AB 02 FA 9A B2 5F 60 15 54 BE 9E 3B ED E7 B3 ......_`.T..;...

0010: AB 08 A5 68 ...h

]

]

#2: ObjectId: 2.5.29.37 Criticality=false

ExtendedKeyUsages [ serverAuth clientAuth ipsecEndSystem ipsecTunnel ipsecUser

]

#3: ObjectId: 2.5.29.35 Criticality=false

AuthorityKeyIdentifier [

KeyIdentifier [

0000: 43 0C 3F CF E2 B7 67 92 17 61 29 3F 8D 62 AE 94 C.?...g..a)?.b..

0010: F5 6A 5D 30 .j]0

]

]

Certificate[2]:

Owner: CN="Cisco Systems, Inc. POC Int", O=Cisco

Issuer: CN="Cisco Systems, Inc. POC Root", O=Cisco

Serial number: 1

Valid from: Mon May 19 17:24:31 UTC 2014 until: Thu May 13 17:24:31 UTC 2038

Certificate fingerprints:

MD5: 53:7E:60:5A:20:1A:D3:99:66:F4:44:F8:1D:F9:EE:52

SHA1: 5F:6A:8B:48:22:5F:7B:DE:4F:FC:CF:1D:41:96:64:0E:CD:3A:0C:C8

Signature algorithm name: SHA256withRSA

Version: 3

Extensions:

#1: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[


CA:true

PathLen:0

]

#2: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#3: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 43 0C 3F CF E2 B7 67 92 17 61 29 3F 8D 62 AE 94 C.?...g..a)?.b..

0010: F5 6A 5D 30 .j]0

]

]

#4: ObjectId: 2.5.29.35 Criticality=false

AuthorityKeyIdentifier [

KeyIdentifier [

0000: 1F E2 47 CF DE D5 96 E5 15 09 65 5B F5 AC 32 FE ..G.......e[..2.
0010: CE 3F AE 87 .?..
]
]

Certificate[3]:

Owner: CN="Cisco Systems, Inc. POC Root", O=Cisco

Issuer: CN="Cisco Systems, Inc. POC Root", O=Cisco

Serial number: e8c6b76de63cd977

Valid from: Mon May 19 17:24:30 UTC 2014 until: Fri May 13 17:24:30 UTC 2039

Certificate fingerprints:

MD5: 15:F9:CF:E7:3F:DC:22:49:17:F1:AC:FB:C2:7A:EB:59

SHA1: 3A:97:24:C2:A2:B3:73:39:0E:49:B2:3D:22:85:C7:C0:D8:63:E2:81

Signature algorithm name: SHA256withRSA

Version: 3

Extensions:

#1: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[

CA:true

PathLen:2147483647

]

#2: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#3: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 1F E2 47 CF DE D5 96 E5 15 09 65 5B F5 AC 32 FE ..G.......e[..2.

0010: CE 3F AE 87 .?..

]

]

*******************************************

*******************************************

You must manually upload the certificates to the ZDS server, as described in this procedure.

Step 1 Locate the RMS CA chain at the following location on the Central node: /rms/data/rmsCerts/ZDS_Upload.tar.gz

The ZDS_Upload.tar.gz file contains the following certificate files:

• hms_server_cert.pem

• download_server_cert.pem


• pm_server_cert.pem

• ped_server_cert.pem

Step 2 Upload the ZDS_Upload.tar.gz file to the ZDS.
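Before uploading, you can optionally confirm that the archive contains the four certificate files listed above; this is standard tar usage and does not modify the archive:

# tar -tzf /rms/data/rmsCerts/ZDS_Upload.tar.gz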

Self-Signed RMS Certificates

Before installing the certificates, create the security files on the Serving node and the Upload node. Each of these nodes includes unique keystore and CSR files that are created during the deployment process.

Procedure for creating security files:

Step 1 Locate each of the following certificate request files:

• Serving node: /rms/app/CSCObac/dpe/conf/self_signed/dpe.csr

• Upload node: /opt/CSCOuls/conf/self_signed/uls.csr

Step 2 Sign them using your certificate authority.

After the CSR is signed, you will get three files: client-ca.cer, server-ca.cer, and root-ca.cer.
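The exact signing procedure depends on your CA tooling. As an illustration only, with a simple OpenSSL-based CA the CSR could be signed as follows; the CA file names are hypothetical, and your security policy may require different extensions and validity periods:

# Illustrative only: sign dpe.csr with a local OpenSSL CA
openssl x509 -req -in dpe.csr \
    -CA my-ca.cer -CAkey my-ca.key -CAcreateserial \
    -out client-ca.cer -days 365 -sha256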

Self-Signed RMS Certificates in Serving Node

Step 1 Import the following three certificates (client-ca.cer, server-ca.cer, and root-ca.cer) into the keystore after getting the CSR signed by the signing tool to complete the security configuration for the Serving node:

a) Log in to the Serving node and then switch to root user: su

b) Place the certificates (client-ca.cer, server-ca.cer, and root-ca.cer) into the /rms/app/CSCObac/dpe/conf/self_signed folder.

c) Run the following commands in /rms/app/CSCObac/dpe/conf/self_signed:

Note The default password for /rms/app/CSCObac/jre/lib/security/cacerts is "changeit".

1 /rms/app/CSCObac/jre/bin/keytool -import -alias server-ca -file [server-ca.cer] -keystore /rms/app/CSCObac/jre/lib/security/cacerts

Sample Output

[root@blrrms-serving-22 self_signed]# /rms/app/CSCObac/jre/bin/keytool -import -alias server-ca -file server-ca.cer -keystore /rms/app/CSCObac/jre/lib/security/cacerts


Enter keystore password:

Owner: CN=rtp Femtocell CA, O=Cisco

Issuer: CN=Cisco Root CA M1, O=Cisco

Serial number: 610420e200000000000b

Valid from: Sat May 26 01:04:27 IST 2012 until: Wed May 26 01:14:27 IST 2032

Certificate fingerprints:

MD5: AF:0C:A0:D3:74:18:FE:16:A4:CA:87:13:A8:A4:9F:A1

SHA1: F6:CD:63:A8:B9:58:FE:7A:5A:61:18:E4:13:C8:DF:80:8E:F5:1D:A9

SHA256: 81:38:8F:06:7E:B6:13:87:90:D6:8B:72:A3:40:03:92:A4:8B:94:33:B8:3A:DD:2C:DE:8F:42:76:68:65:6B:DC

Signature algorithm name: SHA1withRSA

Version: 3

Extensions:

#1: ObjectId: 1.3.6.1.4.1.311.20.2 Criticality=false

0000: 1E 0A 00 53 00 75 00 62 00 43 00 41 ...S.u.b.C.A

#2: ObjectId: 1.3.6.1.4.1.311.21.1 Criticality=false
0000: 02 01 00 ...

#3: ObjectId: 1.3.6.1.5.5.7.1.1 Criticality=false
AuthorityInfoAccess [
[ accessMethod: caIssuers
accessLocation: URIName: http://www.cisco.com/security/pki/certs/crcam1.cer
]
]

#4: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: A6 03 1D 7F CA BD B2 91 40 C6 CB 82 36 1F 6B 98 ........@...6.k.
0010: 8F DD BC 29 ...)
]
]

#5: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[

CA:true

PathLen:0

]

#6: ObjectId: 2.5.29.31 Criticality=false
CRLDistributionPoints [
[DistributionPoint:
[URIName: http://www.cisco.com/security/pki/crl/crcam1.crl]
]]

#7: ObjectId: 2.5.29.32 Criticality=false

CertificatePolicies [

[CertificatePolicyId: [1.3.6.1.4.1.9.21.1.16.0]


[PolicyQualifierInfo: [ qualifierID: 1.3.6.1.5.5.7.2.1

qualifier: 0000: 16 35 68 74 74 70 3A 2F 2F 77 77 77 2E 63 69 73 .5http://www.cis

0010: 63 6F 2E 63 6F 6D 2F 73 65 63 75 72 69 74 79 2F co.com/security/

0020: 70 6B 69 2F 70 6F 6C 69 63 69 65 73 2F 69 6E 64 pki/policies/ind

0030: 65 78 2E 68 74 6D 6C ex.html

]] ]

]

#8: ObjectId: 2.5.29.37 Criticality=false

ExtendedKeyUsages [ serverAuth clientAuth ipsecEndSystem ipsecTunnel ipsecUser

1.3.6.1.4.1.311.10.3.1

1.3.6.1.4.1.311.20.2.1

1.3.6.1.4.1.311.21.6

]

#9: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#10: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 5B F4 8C 42 FE DD 95 41 A0 E8 C2 45 12 73 1B 68 [..B...A...E.s.h
0010: 42 6C 0D EF Bl..
]
]

Trust this certificate? [no]: yes

Certificate was added to keystore

2 /rms/app/CSCObac/jre/bin/keytool -import -alias root-ca -file [root-ca.cer] -keystore /rms/app/CSCObac/jre/lib/security/cacerts

Note The default password for /rms/app/CSCObac/jre/lib/security/cacerts is "changeit".

Sample Output

[root@blrrms-serving-22 self_signed]# /rms/app/CSCObac/jre/bin/keytool -import -alias root-ca -file root-ca.cer -keystore /rms/app/CSCObac/jre/lib/security/cacerts
Enter keystore password:

Owner: CN=Cisco Root CA M1, O=Cisco

Issuer: CN=Cisco Root CA M1, O=Cisco

Serial number: 2ed20e7347d333834b4fdd0dd7b6967e

Valid from: Wed Nov 19 03:20:24 IST 2008 until: Sat Nov 19 03:29:46 IST 2033


Certificate fingerprints:

MD5: F0:F2:85:50:B0:B8:39:4B:32:7B:B8:47:2F:D1:B8:07

SHA1: 45:AD:6B:B4:99:01:1B:B4:E8:4E:84:31:6A:81:C2:7D:89:EE:5C:E7

SHA256: 70:5E:AA:FC:3F:F4:88:03:00:17:D5:98:32:60:3E:EF:AD:51:41:71:B5:83:80:86:75:F4:5C:19:0E:63:78:F8

Signature algorithm name: SHA1withRSA

Version: 3

Extensions:

#1: ObjectId: 1.3.6.1.4.1.311.21.1 Criticality=false

0000: 02 01 00 ...


#2: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[

CA:true

PathLen:2147483647

]

#3: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#4: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: A6 03 1D 7F CA BD B2 91 40 C6 CB 82 36 1F 6B 98 ........@...6.k.
0010: 8F DD BC 29 ...)
]
]

Trust this certificate? [no]: yes
Certificate was added to keystore

d) Import the certificate reply into the DPE keystore:

/rms/app/CSCObac/jre/bin/keytool -import -trustcacerts -file [client-ca.cer] -keystore /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore -alias dpe-key

Note The password for the client certificate installation is specified in the OVA descriptor file (prop:RMS_App_Password). The default value is Ch@ngeme1.

Sample Output

[root@blrrms-serving-22 self_signed]# /rms/app/CSCObac/jre/bin/keytool -import -trustcacerts -file client-ca.cer -keystore /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore -alias dpe-key

Enter keystore password:

Certificate reply was installed in keystore

Step 2 Run the following commands to take a backup of the existing certificates and copy the new certificates:

a) cd /rms/app/CSCObac/dpe/conf


b) mv dpe.keystore dpe.keystore_org

c) cp self_signed/dpe.keystore .

d) chown bacservice:bacservice dpe.keystore

e) chmod 640 dpe.keystore

f) /etc/init.d/bprAgent restart dpe

Step 3 Verify the automatic installation of the Ubiquisys CA certificates to the cacerts file on the DPE by running these commands:

• /rms/app/CSCObac/jre/bin/keytool -keystore /rms/app/CSCObac/jre/lib/security/cacerts -alias UbiClientCa -list -v

• /rms/app/CSCObac/jre/bin/keytool -keystore /rms/app/CSCObac/jre/lib/security/cacerts -alias UbiRootCa -list -v

Note The default password for /rms/app/CSCObac/jre/lib/security/cacerts is changeit.

What to Do Next

If there are issues during the certificate generation process, refer to Regeneration of Certificates, on page 163.

Self-Signed RMS Certificates in Upload Node

Step 1 Import the following three certificates (client-ca.cer, server-ca.cer, and root-ca.cer) into the keystore after getting the CSR signed by the signing tool to complete the security configuration for the Upload node:

a) Log in to the Upload node and switch to root user: su

b) Place the certificates (client-ca.cer, server-ca.cer, and root-ca.cer) in the /opt/CSCOuls/conf/self_signed folder.

c) Run the following commands in /opt/CSCOuls/conf/self_signed:

1 keytool -importcert -keystore uls.keystore -alias root-ca -file [root-ca.cer]

Note The password for the keystore is specified in the OVA descriptor file (prop:RMS_App_Password). The default value is Ch@ngeme1.

Sample Output

[root@blr-blrrms-lus2-22 self_signed]# keytool -importcert -keystore uls.keystore -alias root-ca -file root-ca.cer

Enter keystore password:

Owner: CN=Cisco Root CA M1, O=Cisco

Issuer: CN=Cisco Root CA M1, O=Cisco

Serial number: 2ed20e7347d333834b4fdd0dd7b6967e

Valid from: Wed Nov 19 03:20:24 IST 2008 until: Sat Nov 19 03:29:46 IST 2033

Certificate fingerprints:

MD5: F0:F2:85:50:B0:B8:39:4B:32:7B:B8:47:2F:D1:B8:07

SHA1: 45:AD:6B:B4:99:01:1B:B4:E8:4E:84:31:6A:81:C2:7D:89:EE:5C:E7

SHA256: 70:5E:AA:FC:3F:F4:88:03:00:17:D5:98:32:60:3E:EF:AD:51:41:71:B5:83:80:86:75:F4:5C:19:0E:63:78:F8

Signature algorithm name: SHA1withRSA


Version: 3

Extensions:

#1: ObjectId: 1.3.6.1.4.1.311.21.1 Criticality=false

0000: 02 01 00 ...

#2: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[

CA:true

PathLen:2147483647

]

#3: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#4: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: A6 03 1D 7F CA BD B2 91 40 C6 CB 82 36 1F 6B 98 ........@...6.k.
0010: 8F DD BC 29 ...)
]
]

Trust this certificate? [no]: yes

Certificate was added to keystore

2 keytool -importcert -keystore uls.keystore -alias server-ca -file [server-ca.cer]

Note The password for the keystore is specified in the OVA descriptor file (prop:RMS_App_Password). The default value is Ch@ngeme1.

Sample Output

[root@blr-blrrms-lus2-22 self_signed]# keytool -importcert -keystore uls.keystore -alias server-ca -file server-ca.cer

Enter keystore password:

Owner: CN=rtp Femtocell CA, O=Cisco

Issuer: CN=Cisco Root CA M1, O=Cisco

Serial number: 610420e200000000000b

Valid from: Sat May 26 01:04:27 IST 2012 until: Wed May 26 01:14:27 IST 2032

Certificate fingerprints:

MD5: AF:0C:A0:D3:74:18:FE:16:A4:CA:87:13:A8:A4:9F:A1

SHA1: F6:CD:63:A8:B9:58:FE:7A:5A:61:18:E4:13:C8:DF:80:8E:F5:1D:A9

SHA256: 81:38:8F:06:7E:B6:13:87:90:D6:8B:72:A3:40:03:92:A4:8B:94:33:B8:3A:DD:2C:DE:8F:42:76:68:65:6B:DC

Signature algorithm name: SHA1withRSA

Version: 3

Extensions:


#1: ObjectId: 1.3.6.1.4.1.311.20.2 Criticality=false
0000: 1E 0A 00 53 00 75 00 62 00 43 00 41 ...S.u.b.C.A

#2: ObjectId: 1.3.6.1.4.1.311.21.1 Criticality=false
0000: 02 01 00 ...

#3: ObjectId: 1.3.6.1.5.5.7.1.1 Criticality=false
AuthorityInfoAccess [
[ accessMethod: caIssuers
accessLocation: URIName: http://www.cisco.com/security/pki/certs/crcam1.cer
]
]

#4: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: A6 03 1D 7F CA BD B2 91 40 C6 CB 82 36 1F 6B 98 ........@...6.k.
0010: 8F DD BC 29 ...)
]
]

#5: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[

CA:true

PathLen:0

]

#6: ObjectId: 2.5.29.31 Criticality=false

CRLDistributionPoints [

[DistributionPoint:

[URIName: http://www.cisco.com/security/pki/crl/crcam1.crl]

]]

#7: ObjectId: 2.5.29.32 Criticality=false

CertificatePolicies [

[CertificatePolicyId: [1.3.6.1.4.1.9.21.1.16.0]

[PolicyQualifierInfo: [ qualifierID: 1.3.6.1.5.5.7.2.1

qualifier: 0000: 16 35 68 74 74 70 3A 2F 2F 77 77 77 2E 63 69 73 .5http://www.cis

0010: 63 6F 2E 63 6F 6D 2F 73 65 63 75 72 69 74 79 2F co.com/security/

0020: 70 6B 69 2F 70 6F 6C 69 63 69 65 73 2F 69 6E 64 pki/policies/ind

0030: 65 78 2E 68 74 6D 6C ex.html

]] ]

]

#8: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [ serverAuth clientAuth ipsecEndSystem ipsecTunnel ipsecUser
1.3.6.1.4.1.311.10.3.1
1.3.6.1.4.1.311.20.2.1
1.3.6.1.4.1.311.21.6
]

#9: ObjectId: 2.5.29.15 Criticality=false

KeyUsage [

DigitalSignature

Key_CertSign

Crl_Sign

]

#10: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 5B F4 8C 42 FE DD 95 41 A0 E8 C2 45 12 73 1B 68 [..B...A...E.s.h
0010: 42 6C 0D EF Bl..
]
]

Trust this certificate? [no]: yes

Certificate was added to keystore

3 keytool -importcert -keystore uls.keystore -alias uls-key -file [client-ca.cer]

Note The password for the keystore is specified in the OVA descriptor file (prop:RMS_App_Password). The default value is Ch@ngeme1.

Sample Output

[root@blr-blrrms-lus2-22 self_signed]# keytool -importcert -keystore uls.keystore -alias uls-key -file client-ca.cer

Enter keystore password:

Certificate reply was installed in keystore

Step 2 Run the following commands to take a backup of the existing certificates and copy the new certificates:

a) cd /opt/CSCOuls/conf

b) mv uls.keystore uls.keystore_org

c) cp self_signed/uls.keystore .

d) chown ciscorms:ciscorms uls.keystore

e) chmod 640 uls.keystore

f) service god restart

Step 3 Run these commands to verify that the Ubiquisys CA certificates were placed in the Upload node truststore:

• keytool -keystore /opt/CSCOuls/conf/uls.truststore -alias UbiClientCa -list -v

• keytool -keystore /opt/CSCOuls/conf/uls.truststore -alias UbiRootCa -list -v

Note The password for uls.truststore is Ch@ngeme1.


What to Do Next

If there are issues during the certificate generation process, refer to Regeneration of Certificates, on page 163.

Enabling Communication for VMs on Different Subnets

As part of an RMS deployment, the Serving/Upload nodes' eth0 IP addresses may be in a different subnet than that of the Central node. This also applies if redundant Serving/Upload nodes have their eth0 IP on a different subnet than the Central node.

In such a situation, routing table entries must be added manually on each node, based on the subnets, to ensure communication between all nodes.

Perform the following procedure to add routing tables.

Note Follow these steps on the VM console on each RMS node.

Step 1 Central node:

This route addition ensures that the Central node can communicate successfully with Serving and Upload nodes present in different subnets.

route add -net <subnet of Serving/Upload Node eth0 IP> netmask <netmask IP> gw <gateway for Central Node eth0 IP>

For example: route add -net 10.5.4.0 netmask 255.255.255.0 gw 10.5.1.1

Step 2 Serving node, Upload node:

These route additions ensure Serving and Upload node communication with other nodes on different subnets.

a) Serving node: route add -net <subnet of Serving/Upload Node eth0 IP> netmask <netmask IP> gw <gateway for Serving Node eth0 IP>

For example: route add -net 10.5.4.0 netmask 255.255.255.0 gw 10.5.1.1

b) Upload node: route add -net <subnet of Serving/Upload Node eth0 IP> netmask <netmask IP> gw <gateway for Upload Node eth0 IP>

For example: route add -net 10.5.4.0 netmask 255.255.255.0 gw 10.5.1.1

Step 3 Repeat Step 2 for the other Serving and Upload nodes.

Step 4 Include the entry <destination subnet/netmask number> via <gw IP> in the /etc/sysconfig/network-scripts/route-eth0 file to make the added routes permanent. If the file is not present, create it. For example: 10.5.4.0/24 via 10.1.0.1
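For instance, to append the example route shown above and confirm the file contents (the subnet and gateway values are the same illustrative ones used in this procedure):

echo "10.5.4.0/24 via 10.1.0.1" >> /etc/sysconfig/network-scripts/route-eth0
cat /etc/sysconfig/network-scripts/route-eth0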


Configuring Default Routes for Direct TLS Termination at the RMS

Because transport layer security (TLS) termination is done at the RMS node, the default route on the Upload and Serving nodes must point to the southbound gateway to allow direct device communication with these nodes.

Note If the Northbound and Southbound gateways are already configured in the descriptor file, as shown in the example, then this section can be skipped.

• prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

• prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

Step 1 Log in to the Serving node and run the following command: netstat -nr

Example: netstat -nr

Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
10.81.254.202   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.105.233.81   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.10.10.4      10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
64.102.6.247    10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.5.1.9        10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.5.1.8        10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.105.233.60   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
7.0.1.176       10.5.1.1        255.255.255.240 UG    0   0      0    eth0
10.5.1.0        0.0.0.0         255.255.255.0   U     0   0      0    eth0
10.5.2.0        0.0.0.0         255.255.255.0   U     0   0      0    eth1
0.0.0.0         10.5.1.1        0.0.0.0         UG    0   0      0    eth0

Step 2 Use the following procedure to set the southbound gateway as the default gateway on the Serving node:

• To make the route settings temporary, execute the following commands on the Serving node:

◦ Delete the northbound gateway IP address. For example: route delete -net 0.0.0.0 netmask 0.0.0.0 gw 10.5.1.1

◦ Add the southbound gateway IP address. For example: route add -net 0.0.0.0 netmask 0.0.0.0 gw 10.5.2.1

• To make the route settings permanent, execute the following command on the Serving node: /opt/vmware/share/vami/vami_config_net

Example:

/opt/vmware/share/vami/vami_config_net


Main Menu

0) Show Current Configuration (scroll with Shift-PgUp/PgDown)
1) Exit this program
2) Default Gateway
3) Hostname
4) DNS
5) Proxy Server
6) IP Address Allocation for eth0
7) IP Address Allocation for eth1

Enter a menu number [0]: 2

Warning: if any of the interfaces for this VM use DHCP, the Hostname, DNS, and Gateway parameters will be overwritten by information from the DHCP server.

Type Ctrl-C to go back to the Main Menu

0) eth0
1) eth1

Choose the interface to associate with default gateway [0]: 1

Note: Provide the southbound gateway IP address as highlighted below

Gateway will be associated with eth1
IPv4 Default Gateway [10.5.1.1]: 10.5.2.1
Reconfiguring eth1...
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
RTNETLINK answers: File exists
Network parameters successfully changed to requested values

Main Menu

0) Show Current Configuration (scroll with Shift-PgUp/PgDown)
1) Exit this program
2) Default Gateway
3) Hostname
4) DNS
5) Proxy Server
6) IP Address Allocation for eth0
7) IP Address Allocation for eth1

Enter a menu number [0]: 1

Step 3 Verify that the southbound gateway IP address was added: netstat -nr

Example: netstat -nr

Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
10.81.254.202   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.105.233.81   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.10.10.4      10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
64.102.6.247    10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.5.1.9        10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.5.1.8        10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
10.105.233.60   10.5.1.1        255.255.255.255 UGH   0   0      0    eth0
7.0.1.176       10.5.1.1        255.255.255.240 UG    0   0      0    eth0
10.5.1.0        0.0.0.0         255.255.255.0   U     0   0      0    eth0
10.5.2.0        0.0.0.0         255.255.255.0   U     0   0      0    eth1
0.0.0.0         10.5.2.1        0.0.0.0         UG    0   0      0    eth1

Step 4 To add the southbound gateway IP address on the Upload node, repeat Steps 1 to 3 on the Upload node.

Post-Installation Configuration of BAC Provisioning Properties

The establishment of a connection between the Serving node and the Central node can fail during the installation due to network latency in SSH, or because the southbound IP of the Central node and the northbound IP of the Serving node are in different subnets. As a result, BAC provisioning properties such as the upload and ACS URLs are not added. If this occurs, you must configure the BAC provisioning properties after establishing connectivity between the Central node and the Serving node following the installation. RMS provides a script for this purpose. To add the BAC provisioning properties, perform this procedure:

Step 1 Log in to the Central node.

Step 2 Switch to root user: su -

Step 3 Change to the directory /rms/ova/scripts/post_install and run the script configure_bacproperies.sh:

a) Run these commands:

# cd /rms/ova/scripts/post_install
# ./configure_bacproperies.sh false

Sample Output

*******************Post-installation script to configure BAC

Properties*******************************

/rms/ova/scripts/post_install /rms/ova/scripts/post_install

/rms/ova/scripts/post_install

*******Done************

b) Run the command ./configure_bacproperies.sh true to add the properties in BAC provisioning properties.

Sample Output

File: /rms/ova/scripts/post_install/addBacProvisionProperties.kiwi

Finished tests in 339ms

Total Tests Run - 14

Total Tests Passed - 14

Total Tests Failed - 0

Output saved in file: /tmp/runkiwi.sh_admin1/addBacProvisionProperties.out.20140529_1752

__________________________________________________________________________________________

Post-processing log for benign error codes: /tmp/runkiwi.sh_admin1/addBacProvisionProperties.out.20140529_1752

Revised Test Results


Total Test Count: 14

Passed Tests: 14

Benign Failures: 0

Suspect Failures: 0

Step 4 After executing the scripts successfully, the BAC properties are added in the BAC Admin UI. To verify the properties that were added:

a) Log in to the BAC UI using the URL https://<central-node-north-bound-IP>/adminui

b) Click Servers.

c) Click the Provisioning Group tab at the top of the display to verify that all the properties, such as the ACS URL, Upload URL, NTP addresses, and Ip Timing_Server IP properties, are added.

PMG Database Installation and Configuration

PMG Database Installation Prerequisites

1 The minimum hardware requirements for the Linux server should be as per the Oracle 11gR2 documentation. In addition, 4 GB of disk space is required for the PMG DB data files.

Following are the recommendations for the VM:

• Memory: 8 GB

• Disk Space: 50 GB

• CPU: 8 vCPU

2 Ensure that the Oracle installation directory (for example, /u01/app/oracle) is owned by the Oracle OS user. For example:

# chown -R oracle:oinstall /u01/app/oracle

3 Ensure that Oracle 11gR2 is installed with database name=PMGDB and ORACLE_SID=PMGDB, and is running on the Oracle installation VM.

Following are the recommendations for the database initialization parameters:

• memory_max_target: 3200 MB

• memory_target: 3200 MB

• No. of Processes: 150 (Default value)

• No. of sessions: 248 (Default value)

4 Ensure that the ORACLE_HOME environment variable is created and that $ORACLE_HOME/bin is in the system path.

# echo $ORACLE_HOME

/u01/app/oracle/product/11.2.0/dbhome_1

#echo $PATH

/u01/app/oracle/product/11.2.0/dbhome_1/bin:/usr/lib64/qt-3.3/bin:

/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin

5 To populate Mapinfo data from the Mapinfo files:

a Ensure that the third-party tools "EZLoader" and the Oracle client (with the Administrator option selected in Installation Types) are installed on a Windows operating system.

b Ensure that tnsnames.ora has the PMGDB server entry. For example, in the file c:\oracle\product\10.2.0\client_3\NETWORK\ADMIN\tnsnames.ora, the following entry should be present:

PMGDB =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = <PMGDB Server IP>)(PORT = <PMGDB server oracle application port>))

)

(CONNECT_DATA =

(SID = PMGDB)

(SERVER = DEDICATED)

)

)

c Download the MapInfo files generated by the third-party tool.

d Ensure that the correct iptables entries are added on the PMGDB server to allow communication between the EZLoader application and the Oracle application on the PMGDB server.
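Before proceeding, it can be useful to confirm that the PMGDB instance from prerequisite 3 is actually open. The following is a minimal sketch using standard Oracle tools, run on the PMGDB server as the oracle user:

# Check that the PMGDB instance is OPEN and the listener is running
export ORACLE_SID=PMGDB
sqlplus -s / as sysdba <<'EOF'
select instance_name, status from v$instance;
EOF
lsnrctl status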

Note Perform the following procedures as an 'oracle' user.

PMG Database Installation

Schema Creation

Step 1 Download the RMS-PMGDB-<RMS build num>.tar.gz file from the release folder to your desktop.

Step 2 Log in to the database VM.

Step 3 Copy the downloaded RMS-PMGDB-<RMS build num>.tar.gz file from the desktop to the Oracle user home directory (for example, /home/oracle) on the PMGDB server as the oracle user.

Step 4 Log in to the PMGDB server as the oracle user. In the home directory (for example, /home/oracle), unzip and untar the RMS-PMGDB-<RMS build num>.tar.gz file:

# gunzip RMS-PMGDB-<RMS build num>.tar.gz
# tar -xvf RMS-PMGDB-<RMS build num>.tar

Step 5 Go to the PMGDB installation base directory ~/pmgdb_install/ and run the install script, providing input as prompted:

# ./install_pmgdb.sh

Input Parameters Required:

1 Full filepath and name of data file PMGDB tablespace.

2 Full filepath and name of data file MAPINFO tablespace.

3 Password for database user PMGDBADMIN.

4 Password for database user PMGUSER.

5 Password for database user PMGDB_READ.

6 Password for database user MAPINFO.


Password Validation:

• If the password value provided for any database user is blank, the respective username (for example, PMGDBADMIN) is used as the default value.

• The script does not validate password values against any password policy, because the policy can vary based on the Oracle password policy configured.

• Following is sample output for reference:

Note In the output, the system prompts you to change the file name if the file already exists. Change the file name, for example: pmgdb1_ts.dbf

[oracle@blr-rms-oracle2 pmgdb_install]$ ./install_pmgdb.sh

The script will get executed on database instance PMGDB

Enter PMGDB tablespace filename with filepath

(e.g. /u01/app/oracle/oradata/PMGDB/pmgdb_ts.dbf):

/u01/app/oracle/oradata/PMGDB/pmgdb_ts.dbf

File already exists, enter a new file name

[oracle@blr-rms-oracle2 pmgdb_install]$ ./install_pmgdb.sh

The script will get executed on database instance PMGDB

Enter PMGDB tablespace filename with filepath

(e.g. /u01/app/oracle/oradata/PMGDB/pmgdb_ts.dbf):

/u01/app/oracle/oradata/PMGDB/test_pmgdb_ts.dbf
You have entered /u01/app/oracle/oradata/PMGDB/test_pmgdb_ts.dbf as PMGDB table space.
Do you want to continue[y/n]y
filepath entered is /u01/app/oracle/oradata/PMGDB/test_pmgdb_ts.dbf

Enter MAPINFO tablespace filename with filepath

(e.g. /u01/app/oracle/oradata/PMGDB/mapinfo_ts.dbf):

/u01/app/oracle/oradata/PMGDB/test_mapinfo_ts.dbf

You have entered /u01/app/oracle/oradata/PMGDB/test_mapinfo_ts.dbf as MAPINFO table space.
Do you want to continue[y/n]y
filepath entered is /u01/app/oracle/oradata/PMGDB/test_mapinfo_ts.dbf

Enter password for user PMGDBADMIN :

Confirm Password:

Enter password for user PMGUSER :

Confirm Password:

Enter password for user PMGDB_READ :

Confirm Password:

Enter password for user MAPINFO :

Confirm Password:

*****************************************************************

*Connecting to database PMGDB

Script execution completed , verifying...

******************************************************************

No errors, Installation completed successfully!

Main log file created is /u01/oracle/pmgdb_install/pmgdb_install.log

Schema log file created is /u01/oracle/pmgdb_install/sql/create_schema.log

******************************************************************

Step 6 On successful completion, the script creates the schema on the PMGDB database instance.

Step 7 If the script output displays the error "Errors may have occurred during installation", see the following log files to find the errors:

a) ~/pmgdb_install/pmgdb_install.log

b) ~/pmgdb_install/sql/create_schema.log

Correct the reported errors and recreate the schema.

Map Catalog Creation

Note Creation of the Map Catalog is needed only for a fresh installation of the PMG DB.

Step 1 Ensure that the MapInfo files are downloaded and extracted on your computer. (See PMG Database Installation Prerequisites, on page 116.)

Step 2 Go to C:/ezldr/EazyLoader.exe and double-click EazyLoader.exe to open the MapInfo EasyLoader window to load the data.

Step 3 Click Oracle Spatial and log in to the PMGDB using MAPINFO as the user ID and password (which was provided during schema creation), and the server name as the tnsname given in tnsnames.ora (for example, PMGDB).

Step 4 Click Source Tables to load the MapInfo TAB file from the extracted location, for example, "C:\ezldr\FemtoData\v72\counties_gdt73.TAB".

Step 5 Click Map Catalog to create the map catalog. A system message "A Map Catalog was successfully created." is displayed on successful creation. Click OK.

Step 6 Click Options and verify that the following check boxes are checked in Server Table Processing:

• Create Primary Key

• Create Spatial Index

Step 7 Click Close to close the MapInfo EasyLoader window.

Load MapInfo Data

Step 1 Ensure that the MapInfo files are downloaded and extracted on your computer.

Step 2 Log in to the Central node as an admin user.

Step 3 Download and FTP the following file to your laptop under the EZLoader folder (for example, C:\ezldr): /rms/app/ops-tools/public/batch-files/loadRevision.bat

Step 4 Open the Windows command-line tool, change the directory to the EZLoader folder, and run the bat file:

# loadRevision.bat [mapinfo-revisionnumber] [input file path] [MAPINFO user password]

where


mapinfo-revisionnumber is the revision number of the MapInfo files that are downloaded.

input file path is the base path where the downloaded MapInfo files are extracted, that is, where the directory with the name "v<mapinfo-revisionnumber>" (such as v73) is located after extraction.

MAPINFO user password is the password given to the MAPINFO user during the schema creation. If no input is given, the default password is the same as the username, that is, MAPINFO.

C:\>
C:\>cd ezldr

c:\ezldr>loadRevision.bat 73 c:\ezldr\FemtoData MAPINFO

c:\ezldr>echo off
Command Line Parameters:
revision ID = "73"
path = "c:\ezldr\FemtoData"
mapinfo password = "<Not Displayed>"
-------
Note:
MAPINFO_MAPCATALAOG should be present in the database. If not, EasyLoader GUI can be used to create it.
-------
Calling easyloader...
Logs are created under EasyLoader.log
Done.

C:\ezldr>

Example: loadRevision.bat 73 c:\ezldr\FemtoData MAPINFO

Note 1 MAPINFO_MAPCATALOG should be present in the database. If not, to create it and load the MapInfo data again, see Map Catalog Creation, on page 119.

2 Logs are created in the EasyLoader.log file under the current directory (for example, C:\ezldr). Verify the logs if the table does not get created in the database.

3 Multiple revision tables can exist in the database. For example, COUNTIES_GDT72, COUNTIES_GDT73, and so on.

Step 5 Log in to the PMGDB as the MAPINFO user from the sqlplus client and verify that the tables are created and the data is uploaded.
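A minimal verification sketch from the sqlplus client follows; the table name assumes the COUNTIES_GDT<revision> pattern shown in the output above, with revision 73 as the example used in this guide:

sqlplus MAPINFO/<MAPINFO password>@PMGDB
SQL> select table_name from user_tables where table_name like 'COUNTIES%';
SQL> select count(*) from COUNTIES_GDT73;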

Grant Access to MapInfo Tables

Step 1 Log in to the PMGDB server as an oracle user.

Step 2 Go to the PMGDB installation base directory ~/pmgdb_install/.

Step 3 Run the grant script:

# ./grant_mapinfo.sh


Following is the sample output of the Grant access script for reference:

[oracle@blr-rms-oracle2 pmgdb_install]$ ./grant_mapinfo.sh

The script will get executed on database instance PMGDB


******************************************************************

Connecting to database PMGDB

Script execution completed , verifying...

******************************************************************

No errors, Executing grants completed successfully!

Log file created is /u01/oracle/pmgdb_install/grant_mapinfo.log

******************************************************************

[oracle@blr-rms-oracle2 pmgdb_install]$

Step 4 Verify ~/pmgdb_install/grant_mapinfo.log.

Configuring the Central Node

Configuring the PMG Database on the Central Node

Before You Begin

Verify that the PMG database is installed. If it is not, install it as described in PMG Database Installation and Configuration, on page 116.

Step 1 Log in to the Central node as an admin user.

[rms-aio-central] ~ $ pwd
/home/admin1

Step 2 Change from the admin user to the root user.

[rms-aio-central] ~ $ su -
Password:

Step 3 Check the current directory and the user.

[rms-aio-central] ~ # pwd
/root
[rms-aio-central] ~ # whoami
root


Step 4 Change to the install directory /rms/app/rms/install/:

# cd /rms/app/rms/install/

Step 5 Execute the configure script pmgdb_configure.sh with valid input. The input values are:

Pmgdb_Enabled -> To enable PMG DB, set it to "true".

Pmgdb_Primary_Dbserver_Address -> PMG DB primary server IP address, for example, 10.105.233.66.

Pmgdb_Primary_Dbserver_Port -> PMG DB primary server port, for example, 1521.

Pmgdb_Standby1_Dbserver_Address -> PMG DB standby 1 server (hot standby) IP address, for example, 10.105.242.64. Optional; if not specified, connection failover to the hot standby database will not be available. To enable the failover feature later, the script has to be executed again.

Pmgdb_Standby1_Dbserver_Port -> PMG DB standby 1 server (hot standby) port, for example, 1521. Do not specify this property if the previous property is not specified.

Pmgdb_Standby2_Dbserver_Address -> PMG DB standby 2 server (cold standby) IP address, for example, 10.105.242.64. Optional; if not specified, connection failover to the cold standby database will not be available. To enable the failover feature later, the script has to be executed again.

Pmgdb_Standby2_Dbserver_Port -> PMG DB standby 2 server (cold standby) port, for example, 1521. Do not specify this property if the previous property is not specified.

Enter DbUser PMGUSER Password -> You are prompted for the password of the database user PMGUSER. Provide the same password again when prompted for confirmation.

Usage: pmgdb_configure.sh <Pmgdb_enabled> <Pmgdb_Dbserver_Address> <Pmgdb_Dbserver_Port> [<Pmgdb_Stby1_Dbserver_Address>] [<Pmgdb_Stby1_Dbserver_Port>] [<Pmgdb_Stby2_Dbserver_Address>] [<Pmgdb_Stby2_Dbserver_Port>]

Example:

Following is an example where three PMGDB servers (primary, hot standby, and cold standby) are used:

[rms-distr-central] /rms/app/rms/install # ./pmgdb_configure.sh true 10.105.242.63 1521 10.105.233.64 1521 10.105.233.63 1521

Executing as root user

Enter DbUser PMGUSER Password:

Confirm Password:
Central_Node_Eth0_Address 10.5.4.35

Central_Node_Eth1_Address 10.105.242.86

Script input:

Pmgdb_Enabled=true

Pmgdb_Prim_Dbserver_Address=10.105.242.63

Pmgdb_Prim_Dbserver_Port=1521

Pmgdb_Stby1_Dbserver_Address=10.105.233.64

Pmgdb_Stby1_Dbserver_Port=1521

Pmgdb_Stby2_Dbserver_Address=10.105.233.63

Pmgdb_Stby2_Dbserver_Port=1521

Executing in 10 sec, enter <cntrl-C> to exit

.....

....

...

..

.

Start configure dcc props
dcc.properties already exists in conf dir


END configure dcc props

Start configure pmgdb props
pmgdb.properties already exists in conf dir

Changed jdbc url to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)

(HOST=10.105.242.63)(PORT=1521))

(ADDRESS=(PROTOCOL=TCP)(HOST=10.105.233.64)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)

(HOST=10.105.233.63)(PORT=1521))(FAILOVER=on)

(LOAD_BALANCE=off))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PMGDB_PRIMARY)))

End configure pmgdb props

Configuring iptables for Primary server

Start configure_iptables

Removing old entries first, may show error if rule does not exist

Removing done, add rules
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
end configure_iptables

Configuring iptables for Standby server

Start configure_iptables

Removing old entries first, may show error if rule does not exist

Removing done, add rules
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
end configure_iptables

Configuring iptables for Standby server

Start configure_iptables

Removing old entries first, may show error if rule does not exist

Removing done, add rules
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
end configure_iptables

Done PmgDb configuration

[rms-distr-central] /rms/app/rms/install #

Step 6 Restart the PMG application as a root user if the configuration is successful:

# service god stop
# service god start

Step 7 Verify that the PMG DB server is connected. Change to the user ciscorms and run the OpsTools script getAreas.sh. If the PMG DB configuration is successful, the script runs without any errors.

# su - ciscorms
# getAreas.sh -key 100

[rms-aio-central] /rms/app/rms/install # su -

[rms-aio-central] ~ # su - ciscorms

[rms-aio-central] ~ $ getAreas.sh -key 100

Config files script-props/private/GetAreas.properties or script-props/public/GetAreas.properties

not found. Continuing with default settings.

Execution parameters: key=100

GetAreas processing can take some time please do not terminate.

Received areas, total areas 0

Writing to file: /users/ciscorms/getAreas.csv

The report captured in csv file: /users/ciscorms/getAreas.csv

**** GetAreas End Script ***

[rms-aio-central] ~ $

Step 8 In case of an error, do the following:

a) Verify that pmgdb.enabled=true in /rms/app/rms/conf/dcc.properties.

b) In /rms/app/rms/conf/pmgdb.properties, verify the pmgdb.tomcat.jdbc.pool.jdbcUrl property and edit the values if necessary:

pmgdb.tomcat.jdbc.pool.jdbcUrl=jdbc:oracle:thin:@

(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=DBSERVER1)

(PORT=DBPORT1))(ADDRESS=(PROTOCOL=TCP)(HOST=DBSERVER2)(PORT=DBPORT2))

(ADDRESS=(PROTOCOL=TCP)(HOST=DBSERVER3)(PORT=DBPORT3))


(FAILOVER=on)(LOAD_BALANCE=off))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PMGDB_PRIMARY)))

c) If the pmgdb.tomcat.jdbc.pool.jdbcUrl property is edited, restart the PMG and run getAreas.sh again.

Note If a wrong password was given during "pmgdb_configure.sh" script execution., the script can be re-executed with the correct password following "Configuring the PMG Database on the Central Node". Restart the

PMG and run getAreas.sh again after the script execution.
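To confirm the value currently in effect, you can inspect the property directly (a simple check that is not part of the original procedure):

# grep jdbcUrl /rms/app/rms/conf/pmgdb.properties

The output should show the jdbc:oracle:thin URL with the correct database hosts and ports.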

Step 9    If you still cannot connect, check the iptables entries for the database server.

# iptables -S

Area Table Data Population

After the PMG database installation, the Area table, which is used to look up polygons, is empty. It needs to be populated from the MapInfo table. This task describes how to use the updatePolygons.sh script to populate the data.


Step 1    Log in to the Central node as admin user.

[rms-aio-central] ~ $ pwd

/home/admin1

Step 2    Change from the admin user to the root user.

[rms-aio-central] ~ $ su -

Password:

Step 3    Check the current directory and the user.

[rms-aio-central] ~ # pwd

/root

[rms-aio-central] ~ # whoami
root

Step 4    If the PMG database configuration is not done, configure the PMG database on the Central node as described in Configuring the PMG Database on the Central Node, on page 121.

Step 5    Change to user ciscorms.

# su - ciscorms

Step 6    Run the updatePolygons.sh script with the MapInfo revision number as input.

For example,

# updatePolygons.sh -rev 73

The -help option can be used to display script usage:

# updatePolygons.sh -help

[rms-aio-central] ~ $ updatePolygons.sh -rev 73

Config files script-props/private/UpdatePolygons.properties or script-props/public/UpdatePolygons.properties not found. Continuing with default settings.



Execution parameters: rev=73

Source table is mapinfo.counties_gdt73

Initializing PMG DB

Update Polygon processing can take some time please do not terminate.

Updated Polygon in PmgDB Change Id: 1

**** UpdatePolygons End Script ***

Step 7    Verify that the Area table is populated with data.

Step 8    Run the following command on the PMGDB server to connect to SQL: sqlplus PMGUSER/<PMGUSER password>

Sample Output

SQL>

Step 9    Run the SQL command as PMGUSER on the PMG database server: SQL> select count(*) from area;

Sample Output

COUNT(*)

----------

3232
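As an optional spot check (a hypothetical query that uses the AREA columns shown later in this task), you can confirm that a known county is present:

SQL> SELECT area_key, area_name FROM area WHERE area_key = 36061;

A single row for New York county should be returned if the polygon data was imported correctly.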

Step 10    To register from the DCC UI with latitude and longitude coordinates, an Area group whose name is a valid area key must be created.

For example, for New York county, where lat=40.714623 and long=-74.006605, an Area group named "36061" should be created, where 36061 is the area_key for New York county.

This can be done by running the Operational Tools script updatePolygonsInPmg.sh as the ciscorms user; the script creates all the area groups corresponding to the area_keys present in the Area table.

For example:

# updatePolygonsInPmg.sh -changeid <changeid of update transaction>

The change ID of the update transaction can be found in the logs of updatePolygons.sh when it is run to update the Area table from the MapInfo table. (See the output for Step 6 to obtain the Change Id value.) When the Area table is populated with data after the first-time installation of the PMG database, updatePolygonsInPmg.sh can be run with other optimization options, such as multiple threads. For more information on usage, see Operational Tools in the Cisco RAN Management System Administration Guide.

The newly created area group properties are fetched from the DefaultArea properties. The group-specific details are to be modified through the DCC UI, either from the GUI or by exporting and importing CSV files.

Note DCC UI may have performance issues when a large number of groups are created.

An alternate way to create area groups is to create them manually through the DCC UI: export an existing area as CSV, change the name to a valid area_key along with other property values, and import the file back into the DCC UI.

The valid areas (counties) and area_keys can be queried from the PMG database or with an OpsTools script. Use getAreas.sh with the -all option.

From SQL prompt, run the below SQL command as PMGUSER on PMGDB server:

SELECT area_key, area_name, area_region

FROM AREA

WHERE STATUS = 'A'

ORDER BY area_key;

From the OpsTools script:

# getAreas.sh -all


[rms-aio-central] ~ $ getAreas.sh -all

Config files script-props/private/GetAreas.properties or script-props/public/GetAreas.properties not found. Continuing with default settings.

Execution parameters: all

GetAreas processing can take some time please do not terminate.

Received areas, total areas 3232

Writing to file: /users/ciscorms/getAreas.csv

The report captured in csv file: /users/ciscorms/getAreas.csv

**** GetAreas End Script ***

[rms-aio-central] ~ $

Note    If no data is retrieved by the SQL query or the OpsTools script, the Area table may be empty. Ensure that you follow the steps in PMG Database Installation and Configuration, on page 116, and contact the next level of support.

Configuring New Groups and Pools

The default groups and pools cannot be used post installation. You must create new groups and pools. You can recreate your groups and pools using a previously exported CSV file. Alternatively, you can create completely new groups and pools as required. For more information, refer to the recommended order for working with pools and groups as described in the Cisco RAN Management System Administration Guide.

Note Default groups and pools are available for reference after deployment. Use these as examples to create new groups and pools.

You need to configure Enterprise and Site groups only for Enterprise support.

Ensure that you add the following groups and pools before registering a device, in the sequence shown below: CELL-POOL, SAI-POOL, LTE-CELL-POOL, AREA, Enterprise, FemtoGateway, HeNBGW, LTESecGateway, RFProfile, RFProfile-LTE, Region, Site, SubSite, and UMTSSecGateway.

1 CELL-POOL

2 SAI-POOL

3 FemtoGateway

4 Area

5 Enterprise

6 Site

7 SubSite

8 UMTSSecGateway — Applicable for 3G devices only

9 Region


Note    Provide the FC-PROV-GRP-NAME property in the FemtoGateway group with the provisioning group name, Bac_Provisioning_Group, that is provided during the deployment in the OVA descriptor file. The default value for the Bac_Provisioning_Group property is pg01.

Optional Features

The following sections explain how to configure the optional features:

Default Reserved Mode Setting for Enterprise APs

To enable the default reserved mode setting for enterprise APs, run configure_func1.sh:

./configure_func1.sh

Note    Run the script with the -h option to check which feature is enabled by this script.

This tool enables the Set default Reserved-mode setting to True for Enterprise APs configuration in RMS. The script is present in the /rms/ova/scripts/post_install path. To execute the script, log in as the 'root' user, navigate to the path, and execute ./configure_func1.sh.

Sample Output

[RMS51G-CENTRAL03] /rms/ova/scripts/post_install # ./configure_func1.sh

*************Enabling the following configurations in RMS*********************************

*************Setting default Reserved-mode setting to True for Enterprise APs*************

*************Applying screen configurations********************

*************Executing kiwis********************

/rms/app/baseconfig/bin /rms/ova/scripts/post_install

/rms/app/baseconfig/bin /rms/app/baseconfig/bin

Running 'apiscripter.sh /rms/app/baseconfig/ga_kiwi_scripts/custom1/setDefResMode.kiwi'...

BPR client version: BPR2.7

Parsing Args...

Parsing File: /rms/app/baseconfig/ga_kiwi_scripts/custom1/setDefResMode.kiwi

Executing 4 batches...

PASSED|batchID=Batch:RMS51G-CENTRAL03/10.1.0.16:7df65afa:14c68f9eeb6:80000002

Batch Activation mode = NoActivation|Confirmation mode = NoConfirmation|Publishing mode =

NoPublishing

EXPECT

ACTUAL|batch status code=BATCH_COMPLETED[0]|batch error msg=null

Cmd 0 Line 29|Proprietary.changeSystemDefaultsInternally({/pace/crs/start=false}, null)

EXPECT|expected command status code=CMD_OK[0]

ACTUAL|command status code=CMD_OK[0]|command error msg=null|data type=DATA_VOID[-1]|results=null

FAILED|batchID=Batch:RMS51G-CENTRAL03/10.1.0.16:7df65afa:14c68f9eeb6:80000004

Batch Activation mode = NoActivation|Confirmation mode = NoConfirmation|Publishing mode =

NoPublishing

EXPECT|expected batch status code=BATCH_COMPLETED[0]

ACTUAL|batch status code=BATCH_FAILED_WRITE[-121]|batch error msg=Batch failed. See status

of the failed command for details.

-ERROR|batch status code !=0

Cmd 0 Line 45|IPDevice.addNode( {Node Name = DefaultSite, Node Type = NodeType:Site},

{STATE=Active, FC-LAC-RAC-CL-GRID=4000:10, FC-CSON-STATUS-HSCO-INNER=Optimized, FC-SITE-ID=1,

GROUPS=FemtoGateway:DefaultFGW,Area:DefaultArea,Enterprise:DefaultEnterprise,

FC-PSC-CL-INNER=1..10, FC-GRID-ENABLE=false, FC-RESERVED-MODE=true})

EXPECT|expected command status code=CMD_OK[0]

ACTUAL|command status code=CMD_ERROR_NODE_EXISTS[-826]|command error msg=Node of type [Site] and name [DefaultSite] already exists in database.|data type=DATA_VOID[-1]|results=null

-ERROR|command status code !=0

PASSED|batchID=CHG_SITE_GROUP

Batch Activation mode = NoActivation|Confirmation mode = NoConfirmation|Publishing mode =

NoPublishing

EXPECT

ACTUAL|batch status code=BATCH_COMPLETED[0]|batch error msg=null

Cmd 0 Line 64|IPDevice.changeNodeProperties( {Node Name = DefaultSite, Node Type =

NodeType:Site}, {STATE=Active, FC-LAC-RAC-CL-GRID=4000:10,

FC-CSON-STATUS-HSCO-INNER=Optimized, FC-SITE-ID=1,

GROUPS=FemtoGateway:DefaultFGW,Area:DefaultArea,Enterprise:DefaultEnterprise,

FC-PSC-CL-INNER=1..10, FC-GRID-ENABLE=false, FC-RESERVED-MODE=true}, null)

EXPECT|expected command status code=CMD_OK[0]

ACTUAL|command status code=CMD_OK[0]|command error msg=null|data type=DATA_VOID[-1]|results=null

PASSED|batchID=Batch:RMS51G-CENTRAL03/10.1.0.16:7df65afa:14c68f9eeb6:80000007

Batch Activation mode = NoActivation|Confirmation mode = NoConfirmation|Publishing mode =

NoPublishing

EXPECT

ACTUAL|batch status code=BATCH_COMPLETED[0]|batch error msg=null

Cmd 0 Line 74|Proprietary.changeSystemDefaultsInternally({/pace/crs/start=true}, null)

EXPECT|expected command status code=CMD_OK[0]

ACTUAL|command status code=CMD_OK[0]|command error msg=null|data type=DATA_VOID[-1]|results=null

File: /rms/app/baseconfig/ga_kiwi_scripts/custom1/setDefResMode.kiwi

Finished tests in 174ms

Total Tests Run - 4

Total Tests Passed - 3

Total Tests Failed - 1

Output saved in file: /tmp/runkiwi.sh_admin1/setDefResMode.out.20150330_0009

__________________________________________________________________________________________

Post-processing log for benign error codes:

/tmp/runkiwi.sh_admin1/setDefResMode.out.20150330_0009

.

.

.

.

The following tasks were affected:

AlarmHandler

/etc/init.d /rms/ova/scripts/post_install

Process [tomcat] has been restarted. Encountered an error while stopping.

/rms/ova/scripts/post_install

***************************Done***********************************

The following procedure is the workaround if the PMG server status is in an unmonitored state.


DETAILED STEPS

Step 1    Check if the PMGServer status is up. To do this:

[rms-aio-central] /home/admin1 # god status PMGServer

PMGServer: up

Note    If the PMGServer status is up as shown in Step 1, skip Step 2. If the PMGServer status shows as "unmonitored" in Step 1, then proceed to Step 2.

Step 2    If the PMGServer status is unmonitored, run the following command.

Example: god start PMGServer

Sending 'start' command

The following watches were affected:

PMGServer

Check the status; PMGServer should be up and running after some time.

Configuring Linux Administrative Users

By default, the admin1 user is provided with the RMS deployment. Use the following steps post installation on the Central, Serving, and Upload nodes to add additional administrative users or to change the passwords of existing administrative users.

Note Changing the root user password is not supported with this post install script.

Use the following steps to configure users on the Central, Serving, or Upload nodes:


Step 1    Log in to the Central node.

Step 2    ssh to the Serving or Upload node as required. This step is required only when configuring users on the Serving or Upload node.

Step 3    Switch to root user: su -

Step 4    Change the directory: cd /rms/ova/scripts/post_install

Step 5    Run the configuration script: ./configureusers.sh

The script prompts you for the first name, last name, username, and password to be configured when adding a user or changing the password of an existing user, as shown in this example.


Note    "BAD PASSWORD" should be treated as a warning. If the password given does not adhere to the password policy, an error is displayed after the password is typed at the prompt. The password should be mixed case, alphanumeric, 8 to 127 characters long, contain one of the special characters (*, @, #), and have no spaces. In case of a wrong password, try again with a valid password.

Example:

[blrrms-central-02] /rms/ova/scripts/post_install # ./configureusers.sh

MENU

1 - Add linux admin

2 - Modify existing linux admin password

0 - exit program

Enter selection: 1

Enter users FirstName admin

Enter users LastName admin

Enter the username test adding user test to users

Enter the password

Test@2014

Changing password for user test.

New password: Retype new password: passwd: all authentication tokens updated successfully.

MENU

1 - Add linux admin

2 - Modify existing linux admin password

0 - exit program

Enter selection: 2

Enter the username test

Enter the password

Test@123

Changing the password of user test

Changing password for user test.

New password: BAD PASSWORD: it is based on a dictionary word

Retype new password: passwd: all authentication tokens updated successfully.

MENU

1 - Add linux admin

2 - Modify existing linux admin password

0 - exit program

Enter selection: 0

[blrrms-central-02] /rms/ova/scripts/post_install #

Enter users FirstName admin2

Enter users LastName admin2

Enter the username admin2 adding user admin2 to users

Enter the password

Ch@ngeme123


Changing password for user admin4.

New password: Retype new password: passwd: all authentication tokens updated successfully.

MENU

1 - Add linux admin

2 - Modify linux admin

0 - exit program

Enter selection:

NTP Servers Configuration

Note    • Follow these steps to configure NTP servers only for RMS.

• NTP addresses can be configured using scripts. For configuring FAP NTP servers, see the Cisco RAN Management System Administration Guide.

• If the ESXi host is unable to synchronize with an external NTP server due to network configuration constraints, use the following steps to configure the NTP server IP on the RMS nodes. The VMware-level checkbox for enabling synchronization with an external NTP server should be unchecked.

• For server-level NTP configuration, ensure that the NTP server is reachable from every RMS node (Central/Serving/Upload). Routes should be added to establish connectivity.

The following steps explain how to configure the NTP servers:

Central Node Configuration

Use the following steps post installation in the RMS deployment to configure the NTP servers on the Central node or to modify NTP IP address details if they exist in the descriptor file:


Step 1    Log in to the Central node.

Step 2    Switch to root user: su -

Step 3    Locate the script configurentpcentralnode.sh in the /rms/ova/scripts/post_install directory.

Step 4    Change the directory: cd /rms/ova/scripts/post_install

Step 5    Run the configuration script: ./configurentpcentralnode.sh

The script prompts you for the NTP Servers to be configured, as shown in this example.

*******************Post-installation script to configure NTP Servers on RMS Central

Node*********

**********************

To configure NTP Servers Enter yes

yes

Enter the value of Ntp1_Address

10.105.233.60

Enter the value of Ntp2_Address

10.10.10.2

Enter default value 10.10.10.3,if Ntp3_Address is not available

11.11.11.11

Enter default value 10.10.10.4,if Ntp4_Address is not available

12.12.12.12

Configuring NTP servers

Ntpaddress Ntp1_Address 10.105.233.60 Added Succesfully to IPTables

Ntpaddress Ntp2_Address 10.10.10.2 Added Succesfully to IPTables

Ntpaddress Ntp3_Address 11.11.11.11 Added Succesfully to IPTables

Ntpaddress Ntp4_Address 12.12.12.12 Added Succesfully to IPTables iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
NTP Servers configured Successfully.
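To verify that the node is actually synchronizing after the script completes, you can query the NTP peers (an optional check, assuming the standard ntpq utility from the ntp package is available):

# ntpq -p

An asterisk (*) in front of one of the configured server entries indicates that ntpd has selected it as the synchronization source.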

Serving Node Configuration

Use the following steps post installation in the RMS deployment to configure the NTP servers on the Serving node:


Step 1    Log in to the Central node.

Step 2    ssh to the Serving node.

Step 3    Switch to root user: su -

Step 4    Locate the script configurentpservingnode.sh in the /rms/ova/scripts/post_install directory.

Step 5    Change the directory: cd /rms/ova/scripts/post_install

Step 6    Run the configuration script: ./configurentpservingnode.sh

The script prompts you for the NTP server addresses, as shown in this example.

*******************Post-installation script to configure NTP Server on RMS Serving

Node*******************

************

To configure NTP Servers Enter yes yes

Enter the value of Ntp1_Address

10.105.233.60

Enter the value of Ntp2_Address

10.10.10.2

Enter default value 10.10.10.3,if Ntp3_Address is not available

11.11.11.11

Enter default value 10.10.10.4,if Ntp4_Address is not available

12.12.12.12

Enter the value of Root Password from OVA descriptor(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Ntpaddress Ntp1_Address 10.105.233.60 Added Succesfully to IPTables

Ntpaddress Ntp2_Address 10.10.10.2 Added Succesfully to IPTables

Ntpaddress Ntp3_Address 11.11.11.11 Added Succesfully to IPTables

Ntpaddress Ntp4_Address 12.12.12.12 Added Succesfully to IPTables

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

rms-aio-serving BAC Device Provisioning Engine


User Access Verification

Password: rms-aio-serving> enable

Password: rms-aio-serving# ntp server 10.105.233.60 10.10.10.2 11.11.11.11 12.12.12.12

Shutting down ntpd: [ OK ]

Starting ntpd: [ OK ]

% OK rms-aio-serving# no ip route 10.105.233.60 255.255.255.255

rms-aio-serving# no ip route 10.10.10.2 255.255.255.255

rms-aio-serving# no ip route 10.10.10.3 255.255.255.255

SIOCDELRT: No such process

% Exit value: 7 rms-aio-serving# no ip route 10.10.10.4 255.255.255.255

SIOCDELRT: No such process

% Exit value: 7 rms-aio-serving# ip route 10.105.233.60 255.255.255.255 10.5.4.1

rms-aio-serving# ip route 10.10.10.2 255.255.255.255 10.5.4.1

rms-aio-serving# ip route 11.11.11.11 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# ip route 12.12.12.12 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# dpe reload

Process [dpe] has been restarted.

% OK rms-aio-serving# eConnection closed by foreign host.

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
NTP Servers configured Successfully

Upload Node Configuration

Use the following steps post installation in the RMS deployment to configure the NTP servers on the Upload node:


Step 1    Log in to the Central node.

Step 2    ssh to the Upload node.

Step 3    Switch to root user: su -

Step 4    Locate the script configurentploguploadnode.sh in the /rms/ova/scripts/post_install directory.

Step 5    Change the directory: cd /rms/ova/scripts/post_install

Step 6    Run the configuration script: ./configurentploguploadnode.sh

The script prompts you for the NTP server addresses, as shown in this example.

*******************Post-installation script to configure NTP on RMS Log Upload

Node***********************

********

To configure NTP Servers Enter yes yes

Enter the value of Ntp1_Address

10.105.233.60

Enter the value of Ntp2_Address

10.10.10.2

Enter default value 10.10.10.3,if Ntp3_Address is not available


10.10.10.3

Enter default value 10.10.10.4,if Ntp4_Address is not available

10.10.10.4

Ntpaddress Ntp1_Address 10.105.233.60 Added Succesfully to IPTables

Ntpaddress Ntp2_Address 10.10.10.2 Added Succesfully to IPTables

Ntpaddress Ntp3_Address 10.10.10.3 Added Succesfully to IPTables

Ntpaddress Ntp4_Address 10.10.10.4 Added Succesfully to IPTables iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
NTP Servers configured Successfully

Configuring the SNMP and SNMP Trap Servers

Use the following procedure to run the post-install scripts in the RMS deployment to configure the SNMP and SNMP trap servers on the various nodes:


Step 1    Log in to the Central node.

Step 2    Log in via ssh to the Serving or Upload node if SNMP trap servers need to be configured there. Use this step to configure the SNMP servers and trap servers on the Serving or Upload nodes only.

Step 3    Switch to root user: su -

Step 4    Change the directory: cd /rms/ova/scripts/post_install

Step 5    Run the appropriate configuration script:

a) ./configuresnmpcentralnode.sh — on the Central node

b) ./configuresnmpservingnode.sh — on the Serving node

c) ./configuresnmpuploadnode.sh — on the Upload node

The script prompts you to configure the SNMP trap server optionally as shown in these examples.

Configuring SNMP on the Central Node

Example:

[blr-rms12-central_41N] /rms/ova/scripts/post_install # ./configuresnmpcentralnode.sh

*******************Post-installation script to configure SNMP on RMS Central Node

MENU

1 - Configure SNMP Servers

2 - Configure SNMPTrap Server

0 - exit program

Enter selection: 2

Enter the value of Snmptrap_Community public

Enter the value of Snmptrap1_Address

10.65.66.30

Enter default value 12.12.12.12,if Snmptrap2_Address is not available

14.14.14.15

OK

Please restart [stop and start] SNMP agent.

OK

Please restart [stop and start] SNMP agent.

Stopping snmpd: [ OK ]
Starting snmpd: [ OK ]

OK

Please restart [stop and start] SNMP agent.

OK

Please restart [stop and start] SNMP agent.

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]


Stopping snmpd: [ OK ]
Starting snmpd: [ OK ]

Sending 'restart' command

The following watches were affected:

PMGServer

Process [snmpAgent] has been restarted.

configured Snmp Trap Servers Successfully

Enter selection: 0

[rms-aio-central] /home/admin1 #

Configuring SNMP on the Serving Node: Example


Example:

[root@blr-rms12-serving_41N post_install]# ./configuresnmpservingnode.sh

*******************Post-installation script to configure SNMP on RMS Serving

Node*******************************

MENU

1 - Configure SNMP Servers

2 - Configure SNMPTrap Servers

0 - exit program

Enter selection: 2

Enter the value of Snmptrap_Community trap

Enter the value of Snmptrap1_Address

10.65.40.30

Enter default value 12.12.12.12,if Snmptrap2_Address is not available

14.14.14.14

Enter the value of Root Password from OVA descriptor(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

OK

Please restart [stop and start] SNMP agent.

Starting snmpd:

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

rms-aio-serving BAC Device Provisioning Engine

User Access Verification

Password: rms-aio-serving> enable

Password: rms-aio-serving# ip route 10.65.40.30 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# ip route 14.14.14.14 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# dpe reload

Process [dpe] has been restarted.

% OK rms-aio-serving# Connection closed by foreign host.

OK

Please restart [stop and start] SNMP agent.

OK

Please restart [stop and start] SNMP agent.

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Stopping snmpd: [ OK ]
Starting snmpd: [ OK ]
Process [snmpAgent] has been restarted.
configured Snmp Trap Servers Successfully

MENU

1 - Configure SNMP Servers

2 - Configure SNMPTrap Servers

0 - exit program


Enter selection: 0

[root@rms-aio-serving admin1]#

Configuring SNMP on the Upload Server: Example

Example:

[root@blr-rms12-upload_41N post_install]# ./configuresnmpuploadnode.sh

*******************Post-installation script to configure SNMP on RMS Upload

Node*******************************

MENU

1 - Configure SNMP Servers

2 - Configure SNMPTrap Servers

0 - exit program

Enter selection: 2

Enter the value of Snmptrap_Community trap

Enter the value of Snmptrap1_Address

14.14.14.15

Enter default value 12.12.12.12,if Snmptrap2_Address is not available

14.14.14.16

com.cisco.ca.rms.upload.server.UlsServer is running (pid: 29603)

Sending 'stop' command

The following watches were affected:

UploadServer com.cisco.ca.rms.upload.server.UlsServer is running (pid: 29603)

Sending 'start' command

The following watches were affected:

UploadServer com.cisco.ca.rms.upload.server.UlsServer is running (pid: 29603) iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Stopping snmpd: [ OK ]
Starting snmpd: [ OK ]
configured Snmp Trap Servers Successfully

MENU

1 - Configure SNMP Servers

2 - Configure SNMPTrap Servers

0 - exit program

Enter selection: 0
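To confirm that traps actually reach the configured trap server, a test trap can be sent from the node (an optional, illustrative check using the net-snmp snmptrap utility; the community string and address are placeholders):

# snmptrap -v 2c -c public <Snmptrap1_Address> '' 1.3.6.1.6.3.1.1.5.1

This sends a standard coldStart notification; it should appear in the trap server's logs if the path is open.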

Centralized Logging Server Configuration

Follow the steps to configure the centralized logging server:

Before You Begin

An external Linux server should be available. This server should be reachable from the Central, Serving, and Upload nodes.


Step 1    Log in as root to the centralized logging server.

Step 2    Create dynamic log files on the centralized logging server by editing and saving /etc/rsyslog.conf with the below configuration:

# provides support for local system logging (e.g. via logger command)

$ModLoad imuxsock.so

# provides kernel logging support (previously done by rklogd)


$ModLoad imklog.so

# Provides UDP syslog reception

$ModLoad imudp.so

$UDPServerRun 514

# This one is the template to generate the log filename dynamically, depending on the client's IP address.

$template FILENAME,"/var/log/%hostip%/syslog.log"

*.* ?FILENAME

Note    hostip is the eth0 IP address of each node.

Step 3    After adding the above entries to /etc/rsyslog.conf, restart the rsyslog process by executing the below command. The rsyslog server can then accept messages.

service rsyslog restart

Output:

Shutting down system logger: [ OK ]
Starting system logger: [ OK ]

Note    For details on configuring syslog servers on the Central, Serving, and Upload nodes, see SYSLog Servers Configuration, on page 137.
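To confirm that the centralized logging server accepts messages, a test message can be generated from any RMS node (an optional check using the standard logger utility; the tag is arbitrary):

# logger -t rms-syslog-test "centralized logging test message"

The message should then appear under /var/log/<hostip>/syslog.log on the centralized logging server, where <hostip> is the eth0 IP address of the sending node.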

SYSLog Servers Configuration

The following steps explain how to configure the SYSLog servers:

Central Node Configuration

Use the following steps post installation in the RMS deployment to configure the SysLog servers on the Central node.


Step 1    Log in to the Central node.

Step 2    Switch to root user: su -

Step 3    Locate the script configuresyslogcentralnode.sh in the /rms/ova/scripts/post_install directory.

Step 4    Change the directory: cd /rms/ova/scripts/post_install

Step 5    Run the configuration script: ./configuresyslogcentralnode.sh

The script prompts you for the SysLog addresses, as shown in this example.

*******************Post-installation script to configure SYSLog on RMS Central

Node***********************

********

To configure syslog Enter yes yes

Enter the value of Syslog1_Address

17.17.17.17


Enter default value 13.13.13.13,if Syslog2_Address is not available

15.15.15.15

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down system logger: [ OK ]

Starting system logger: [ OK ]

Configured SysLog Servers Successfully

[rms-aio-central] /home/admin1 #

Serving Node Configuration

Use the following steps post installation in the RMS deployment to configure the SysLog servers on the Serving node.


Step 1    Log in to the Central node.

Step 2    ssh to the Serving node.

Step 3    Switch to root user: su -

Step 4    Locate the script configuresyslogservingnode.sh in the /rms/ova/scripts/post_install directory.

Step 5    Change the directory: cd /rms/ova/scripts/post_install

Step 6    Run the configuration script: ./configuresyslogservingnode.sh

The script prompts you for the SysLog addresses, as shown in this example.

*******************Post-installation script to configure SYSLog Servers on RMS Serving

Node**************

*****************

To configure syslog Enter yes yes

Enter the value of Syslog1_Address

15.15.15.14

Enter default value 13.13.13.13,if Syslog2_Address is not available

15.15.15.15

Enter the value of Root Password from OVA descriptor(Enter default Ch@ngeme1 if not present in descriptor)

Ch@ngeme1

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

rms-aio-serving BAC Device Provisioning Engine

User Access Verification

Password: rms-aio-serving> enable

Password: rms-aio-serving# ip route 15.15.15.14 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# ip route 15.15.15.15 255.255.255.255 10.5.4.1

SIOCADDRT: File exists

% Exit value: 7 rms-aio-serving# dpe reload

Process [dpe] has been restarted.

% OK rms-aio-serving# Connection closed by foreign host.

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down system logger: [ OK ]

Starting system logger: [ OK ]

./configuresyslogservingnode.sh: line 146: Syslog_Enable: command not found

Configured SysLog Servers Successfully


[root@rms-aio-serving admin1]#

Upload Node Configuration

Use the following steps post installation in the RMS deployment to configure the SysLog servers on the Upload node:


Step 1    Log in to the Central node.

Step 2    ssh to the Upload node.

Step 3    Switch to root user: su -

Step 4    Locate the script configuresysloguploadnode.sh in the /rms/ova/scripts/post_install directory.

Step 5    Change the directory: cd /rms/ova/scripts/post_install

Step 6    Run the configuration script: ./configuresysloguploadnode.sh

The script prompts you for the SysLog addresses, as shown in this example.

*******************Post-installation script to configure SYS Log Servers on RMS Upload

Node***************

****************

To configure syslog Enter yes yes

Enter the value of Syslog1_Address

15.15.15.16

Enter default value 13.13.13.13,if Syslog2_Address is not available

15.15.15.17

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
Configured SysLog Servers Successfully

LDAP Configuration


Step 1    Log in to the RDU central node using the command ssh admin1@<RDU_central_node_ipaddress>. The system responds with a command prompt.

Step 2    Change to the root user and enter the root password, using the command: su -l root

Step 3    Check that the required rpm packages are available on the central node by using the command:

rpm -q pam_ldap.x86_64 nscd.x86_64 nfs-utils-1.2.3-7.el6.x86_64 autofs-5.0.5-31.el6.x86_64 NetworkManager-0.8.1-9.el6.x86_64 readline-6.0-3.el6.i686 sqlite-3.6.20-1.el6.i686 nss-softokn-3.12.9-3.el6.i686 nss-3.12.9-9.el6.i686 openldap-2.4.23-15.el6.i686 nss-pam-ldapd-0.7.5-7.el6.x86_64

Following is the output:

pam_ldap-185-11.el6.x86_64

nscd-2.12-1.25.el6.x86_64

nfs-utils-1.2.3-7.el6.x86_64

autofs-5.0.5-31.el6.x86_64


NetworkManager-0.8.1-9.el6.x86_64

readline-6.0-3.el6.i686

sqlite-3.6.20-1.el6.i686

nss-softokn-3.12.9-3.el6.i686

nss-3.12.9-9.el6.i686

openldap-2.4.23-15.el6.i686

nss-pam-ldapd-0.7.5-7.el6.x86_64

Step 4    Do a checksum on the file and verify it against the checksum below, by using the command: md5sum /lib/security/pam_ldap.so

Note    The checksum value should match the given output.

9903cf75a39d1d9153a8d1adc33b0fba /lib/security/pam_ldap.so

Step 5    Edit the nsswitch.conf file by using the command vi /etc/nsswitch.conf and edit the following entries: passwd: files ldap; shadow: files ldap; group: files ldap.

Step 6    Run authconfig-tui, by using the command: authconfig-tui

Step 7    Select:

• Cache Information

• Use LDAP

• Use MD5 Passwords

• Use Shadow Passwords

• Use LDAP Authentication

• Local authorization is sufficient

Step 8    Configure the LDAP settings by selecting Next and entering the below values:

LDAP Configuration ldap://ldap.cisco.com:389/

OU=active,OU=employees,OU=people,O=cisco.com

Note This LDAP configuration varies based on the customer set-up.

Step 9    Restart the services after the configuration changes, by selecting Ok:

service nfs start
service autofs start
service NetworkManager start

Note This LDAP configuration should be modified based on the customer set-up.

Step 10    Enable the LDAP configuration in dcc.properties by using the command vi /rms/app/rms/conf/dcc.properties.

Modify:

# PAM configuration
pam.service.enabled=true
pam.service=login

Step 11    Restart the RDU by using the command /etc/init.d/bprAgent restart.

Step 12    Log in to the DCC UI via dccadmin. Add the user name and enable External authentication.

Note    To be LDAP authenticated, the user must be selected as Externally Authenticated in the DCC UI.


Step 13    Create a UNIX user account on the Central VM to match the account on the LDAP server before trying to authenticate the user via the DCC UI, by using the command: /usr/sbin/useradd <username>

Step 14    Ensure that the username is correct on the LDAP server, DCC UI, and Central VM.

Note RMS does not apply the password policy for remote users. This is because LDAP servers manage their login information and passwords.

Step 15    Update the IP tables for the LDAP configuration:

a) Log in to the central node using SSH.

b) Change to root user.

c) iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport port-number -j ACCEPT

Updates the IP table firewall rules on the central node, as shown in this example:

iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport 389 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 389 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 636 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport 636 -j ACCEPT

Note These commands open port 389 for LDAP listening. If your installation uses another port, the commands need to be changed accordingly.

d) service iptables save

Saves the configuration.

e) service iptables restart

Restarts iptables service.

f) iptables --list

Verifies the IP tables. Sample output is shown here:

ACCEPT    udp  --  anywhere    anywhere    state NEW udp dpt:ldap
ACCEPT    tcp  --  anywhere    anywhere    state NEW tcp dpt:ldap
ACCEPT    udp  --  anywhere    anywhere    state NEW udp dpt:ldaps
ACCEPT    tcp  --  anywhere    anywhere    state NEW tcp dpt:ldaps

Note    Use the above commands to update iptables with default port values. The ports mentioned can be customized if required.
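To verify LDAP reachability from the central node after opening the ports, an anonymous base query can be attempted (an illustrative check; it assumes the ldapsearch client from the openldap-clients package is installed, and it reuses the example server and base DN from Step 8):

# ldapsearch -x -H ldap://ldap.cisco.com:389 -b "OU=active,OU=employees,OU=people,O=cisco.com" -s base

If the port is open and the server answers, a result entry (or at least a valid LDAP result code) is returned instead of a connection error.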

TACACS Configuration

Use this task to integrate the PAM_TAC library on the Central Node.


Step 1    ssh admin1@<RDU_central_node_ipaddress>

Logs on to the RDU Central Node. The system responds with a command prompt.

Step 2    su -l root

Changes to the root user.


Step 3    md5sum /lib/security/pam_tacplus.so

Performs a checksum on the file and verifies that it is correct.

4d4693bc5a0cb7f38dbadb579280eef3 /lib/security/pam_tacplus.so

Note The checksum value should match this output.

Step 4    vi /etc/pam.d/tacacs

Creates the TAC configuration file for PAM on the Central Node. Add the following to the TACACS file:

#%PAM-1.0
auth sufficient /lib/security/pam_tacplus.so debug server=<tacacs server ip> secret=<tacacs server secret> encrypt
account sufficient /lib/security/pam_tacplus.so debug server=<tacacs server ip> secret=<tacacs server secret> encrypt service=shell protocol=ssh
session sufficient /lib/security/pam_tacplus.so debug server=<tacacs server ip> secret=<tacacs server secret> encrypt service=shell protocol=ssh

Example:

#%PAM-1.0
auth sufficient /lib/security/pam_tacplus.so debug server=10.105.242.54 secret=cisco123 encrypt
account sufficient /lib/security/pam_tacplus.so debug server=10.105.242.54 secret=cisco123 encrypt service=shell protocol=ssh
session sufficient /lib/security/pam_tacplus.so debug server=10.105.242.54 secret=cisco123 encrypt service=shell protocol=ssh

Step 5    vi /etc/pam.d/sshd

Inserts the TACACS entry in the sshd PAM file. Add the following:

auth include tacacs

Step 6    vi /rms/app/rms/conf/dcc.properties

Enables the PAM service at dcc.properties, for the DCC UI configuration. Additionally, modify the following:

# PAM configuration
pam.service.enabled=true
pam.service=tacacs

Step 7    /etc/init.d/bprAgent restart

Restarts the RDU.

Step 8    Log in to the DCC UI via dccadmin.

Step 9    Add the user name and enable External authentication by checking the External authentication check box.

Note    To be TACACS authenticated, the user must be selected as Externally Authenticated in the DCC UI.

Step 10    /usr/sbin/useradd username

Creates a UNIX user account on the Central VM to match the account on the TACACS+ server. Do this before trying to authenticate the user via the DCC UI. The system responds with a command prompt.

Step 11    Ensure that the username is correct on the TACACS+ server, DCC UI, and Central VM.

Note    The password policy does not apply to non-local users because authentication servers, such as the TACACS server, manage their login information and passwords.

Step 12    Update the IP tables to open port 49 for TACACS:

a) Log in to the central node using SSH.

b) Change to root user.

c) iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport port-number -j ACCEPT

Updates the IP table firewall rules on the central node.

iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport 49 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 49 -j ACCEPT

Note If your installation uses another port for TACACS listening, this command needs to be changed accordingly.

d) service iptables save

Saves the configuration.

e) service iptables restart

Restarts the IP tables service.

f) iptables --list

Verifies the IP tables. Following is sample output:

ACCEPT

ACCEPT

Note tcp -anywhere udp -anywhere anywhere anywhere state NEW tcp dpt:tacacs state NEW udp dpt:tacacs

Use the above commands to update iptables with default port values. Ports mentioned can be customized if required.
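To confirm that TACACS authentication works end to end, an SSH login as the externally authenticated user can be attempted (an illustrative check; test is the example username used earlier in this guide):

# ssh test@localhost

If the PAM configuration is correct, the TACACS+ server validates the password; a failure here usually points at the secret, the server address in /etc/pam.d/tacacs, or the port 49 iptables rules.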

Configuring INSEE SAC

To configure INSEE SAC on the deployed system, run the configure_func2.sh script.

For more details on the location and usage of this script, see the "Configuring INSEE" section of the Cisco RAN Management System Administration Guide.


C H A P T E R

6

Verifying RMS Deployment

Verify if all the RMS Virtual hosts have the required network connectivity.

Verifying Network Connectivity, page 145

Verifying Network Listeners, page 146

Log Verification, page 147

End-to-End Testing, page 148

Verifying Network Connectivity


Step 1    Verify if the RMS Virtual host has network connectivity from the Central Node, using the following steps:

a) Ping the gateway (prop:vami.gateway.Central-Node or prop:Central_Node_Gateway).

b) Ping the DNS servers (prop:vami.DNS.Central-Node or prop:Central_Node_Dns1_Address & prop:Central_Node_Dns2_Address).

c) Ping the NTP servers (prop:Ntp1_Address, prop:Ntp2_Address, prop:Ntp3_Address & prop:Ntp4_Address).

Step 2    Verify if the RMS Virtual host has network connectivity from the Serving Node, using the following steps:

a) Ping the gateway (prop:vami.gateway.Serving-Node or prop:Serving_Node_Gateway).

b) Ping the DNS servers (prop:vami.DNS.Serving-Node or prop:Serving_Node_Dns1_Address & prop:Serving_Node_Dns2_Address).

c) Ping the NTP servers (prop:Ntp1_Address, prop:Ntp2_Address, prop:Ntp3_Address & prop:Ntp4_Address).

Step 3    Verify if the RMS Virtual host has network connectivity from the Upload Node, using the following steps:

a) Ping the gateway (prop:vami.gateway.Upload-Node or prop:Upload_Node_Gateway).

b) Ping the DNS servers (prop:vami.DNS.Upload-Node or prop:Upload_Node_Dns1_Address & prop:Upload_Node_Dns2_Address).

c) Ping the NTP servers (prop:Ntp1_Address, prop:Ntp2_Address, prop:Ntp3_Address & prop:Ntp4_Address).

Step 4    Perform the additional network connectivity testing on each of the nodes, for the following optional services:

a) Ping the Syslog servers (Optional).

b) Ping the SNMP servers (Optional).

c) Ping the SNMP trap servers (Optional).
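A minimal connectivity check from any node might look like this (an illustrative sketch; replace the address with the gateway, DNS, NTP, or optional server value from your descriptor file):

# ping -c 4 <gateway-or-server-address>

Four replies without loss indicate that the route and the remote host are healthy.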

Verifying Network Listeners

Verify that the RMS virtual hosts have opened the required network listeners. If the Upload server process is not up, see Upload Server is Not Up, on page 176, for more details.

RMS Node    Component    Network Listener

Central Node    BAC RDU

• netstat -an | grep 443
• netstat -an | grep 8005
• netstat -an | grep 8083
• netstat -an | grep 49187
• netstat -an | grep 8090

Serving Node    Cisco Prime Access Registrar (PAR)

• netstat -an | grep 1812
• netstat -an | grep 8443
• netstat -an | grep 8005

Cisco Prime Network Registrar (PNR)

• netstat -an | grep 61610
• netstat -an | grep 9005

BAC DPE

• netstat -an | grep 2323
• netstat -an | grep 49186

Upload Node    Upload Server

• netstat -an | grep 8082
• netstat -an | grep 443
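To run through all the expected listeners on a node in one pass, a small loop can be used (an illustrative sketch built from the netstat commands above, shown for the Central node ports):

# for port in 443 8005 8083 49187 8090; do netstat -an | grep "$port"; done

Each port should show at least one bound socket (typically in LISTEN state); a missing port indicates that the corresponding component has not come up.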


Log Verification

Server Log Verification

Post installation, the following server logs should be checked for verification of clean server start-up.

• Central Virtual Machine (VM):

◦ /rms/data/CSCObac/agent/logs/snmpAgent_console.log

◦ /rms/data/CSCObac/agent/logs/tomcat_console.log

◦ /rms/data/dcc_ui/postgres/dbbase/pgstartup.log

◦ /rms/log/pmg/PMGServer.console.log

◦ /rms/data/nwreg2/regional/logs/install_cnr_log

◦ /rms/log/dcc_ui/ui-debug.log

• Serving VM: /rms/data/nwreg2/local/logs/install_cnr_log
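A quick way to scan these logs for start-up problems is to search for error keywords (an optional check, not a substitute for reading the logs):

# grep -iE "error|exception" /rms/log/pmg/PMGServer.console.log

Repeat for each of the log files listed above; no output means no matching errors were logged.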

Note    Any errors in the above log files at the time of application deployment need to be reported to the operations support team.

Application Log Verification

Application-level logs can be referred to in case of application-level usage issues:

RMS Node    Component    Log Name

Central VM    DCC_UI    /rms/log/dcc_ui/ui-audit.log
                        /rms/log/dcc_ui/ui-debug.log

              PMG       /rms/log/pmg/pmg-debug.log
                        /rms/log/pmg/pmg-audit.log

              BAC/RDU   /rms/data/CSCObac/rdu/logs/audit.log
                        /rms/data/CSCObac/rdu/logs/rdu.log

Serving VM    PNR       /rms/data/nwreg2/local/logs/name_dhcp_1_log

              PAR       /rms/app/CSCOar/logs/name_radius_1_log
                        or
                        /rms/app/CSCOar/logs/name_radius_1_trace

              DPE       /rms/data/CSCObac/dpe/logs/dpe.log

Upload Server VM        /opt/CSCOuls/logs/*.log (uls.log, sb-events.log, nb-events.log)

End-to-End Testing

Perform the following processes for end-to-end testing of the Small Cell device:

Step 1    Register a Small Cell Device.

Step 2    Power on the Small Cell Device.

Step 3    Verify NTP Signal.

Step 4    Verify TR-069 Inform.

Step 5    Verify Discovered Parameters.

Step 6    Verify Class of Service selection.

Step 7    Perform Firmware Upgrade.

Step 8    Verify Updated Discovered Parameters.

Step 9    Verify Configuration Synchronization.

Step 10    Activate the Small Cell Device.

Step 11    Verify IPSec Connection.

Step 12    Verify Connection Request.

Step 13    Verify Live Data Retrieval.

Step 14    Verify HNB-GW Connection.

Step 15    Verify Radio is Activated.

Step 16    Verify User Equipment can Camp.

Step 17    Place First Call.

Step 18    Verify Remote Reboot.

Step 19    Verify On-Demand Log Upload.



Updating VMware Repository

All the system updates for the VMware Studio and the VMware vCenter are stored on the Update Repository and can be accessed either online through the Cisco DMZ or offline (delivered to the customer through the Services team or on DVD).

Perform the following procedures to apply updates on the RMS nodes:

Step 1    Disable the network interfaces for each virtual machine.

Step 2    Create a snapshot of each virtual machine.

Step 3    Mount the Update ISO on the vCenter server.

Step 4    Perform a check for new software availability.

Step 5    Install updates using the vSphere Console.

Step 6    Perform system tests to verify that the updated software features are operating properly.

Step 7    Enable network interfaces for each virtual machine in the appliance.

Step 8    Perform end-to-end testing.


C H A P T E R

7

RMS Upgrade Procedure

Version: RMS-UPGRADE-4.1.0-1X.tar.gz

Follow this procedure to upgrade from RMS 3.0 FCS to RMS 4.1 FCS, from RMS 3.0.1 MR to RMS 4.1 FCS, or from RMS 4.0 FCS to RMS 4.1 FCS. The procedure involves executing the upgrade_rms.sh script on the Central, Serving, and Upload nodes.

Upgrade Prerequisites, page 151

Upgrade on Central Node, page 152

Upgrade on Serving Node, page 155

Upgrade on Upload Node, page 158

Post RMS Upgrade, page 159

Merging of Files Manually, page 160

Rollback to Versions RMS 3.0/3.1 and 4.0 , page 162

Upgrade Prerequisites

1 The Central, Serving, and Upload nodes should be up before performing the upgrade.

2 Ensure that the existing hardware supports RMS 4.1. See section Cisco RMS Hardware and Software Requirements, on page 10, before proceeding with the upgrade.

3 Increase the Upload VM memory to 32 GB. See section Upgrading the VM CPU and Memory Settings, on page 86, for more details.

4 Make a clone of the existing RMS installation (RMS 3.0.0-1N/RMS 3.1.0-G/RMS 4.0.0-2N) for backup purposes. See Back Up System Using vApp Cloning for the detailed steps.

5 You must have root privileges on all RMS nodes.

6 Optionally, the small cell statistics can be collected through GDDT before performing the upgrade.

7 Ensure that you maintain a manual backup of any additional files created by you during deployment. The upgrade script does not back up any additional files.


8 Verify that the password property value in the /rms/app/BACCTools/conf/APIScripter.arguments file contains the same password as the BACadmin user password (used to log in to the BAC UI). If these are not in sync, change the password in the file to match.

9 Verify that there are no older upgrade files present in the /tmp directory on all nodes; if present, older upgrade files must be manually backed up (if necessary) and removed from the /tmp directory.
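For prerequisite 8, the property can be checked without opening an editor (an illustrative command; the exact property name in APIScripter.arguments may differ, so treat this as a sketch):

# grep -i password /rms/app/BACCTools/conf/APIScripter.arguments

Compare the value shown with the BACadmin password used to log in to the BAC UI.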

Upgrade on Central Node

Step 1    Copy the RMS-UPGRADE-4.1.0-1M.tar.gz file to the /rms directory of the central node.

Step 2    Execute the following commands as root user to perform the upgrade:

a) cd /rms

b) gtar -zxvf RMS-UPGRADE-4.1.0-1M.tar.gz

c) cd /rms/upgrade/preupgrade

Refer to the BAC_HOTFIX_README file and follow the instructions therein.

d) cd /

e) /rms/upgrade/upgrade_rms.sh

Note • This is a sample output for release 4.1. Your output may differ.

• In the output, when you are prompted to proceed with the upgrade, enter a response.

Sample Output:

[BLR17-Central] / # /rms/upgrade/upgrade_rms.sh

User : root

Detected RMS4.0 setup

Indentified VM Node: Central-Node

Upgrading the current RMS installation to 4.1.0 . Do you want to proceed? (y/n) : y

Stopping applications on Central Node

Stopping bprAgent ..

BAC stopped successfully

Stopping PMG ..

PMGServer[3327]: PMGServer has stopped by request (watchdog may restart it)

Taking RMS Central Node file backup as per the configuration in the

file:/rms/upgrade/backupfilelist/centralBackUpFileList

Taking the DCC-UI DB backup

Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]

Starting the BAC RDU DB backup ... Removing the old BAC RDU DB backup directory ..

BAC RDU DB backup done at /rmsbackupfiles/rdubackup

Running recovery script on backed up RDU DB ...

Filebackup tar is present at path : /rms-central.tar

Starting RPM Upgrade ..

Upgrading RUBY GEM GOD ...

RUBY GEM GOD upgraded to /rms/upgrade/rpms/rubygem-god-0.11.0-45.x86_64.rpm

BACTAR : /rms/upgrade/rpms/BAC_3.8.1.1_LinuxK9.gtar.gz

Verifying BAC RDU database before upgrade ..

Extracting the BAC tar file ..

Extracting the BAC migration tar file ..

Running BAC migration tool ...

Upgrading the BAC on RMS Central Node ....

Restoring the BAC RDU DB ...

Restarting the bprAgent ..

BACCTOOLSRPM : /rms/upgrade/rpms/CSCOrms-bacctools-3.8.1.0-32.el6_1.noarch.rpm

Currently installed BACCTOOLS is CSCOrms-bacctools-3.8.1.0-28.el6_1.noarch

Upgrading BACCTOOLS ...

BACCTOOLS upgraded to /rms/upgrade/rpms/CSCOrms-bacctools-3.8.1.0-32.el6_1.noarch.rpm


BACCONFIGRPM : /rms/upgrade/rpms/CSCOrms-baseline-config-ga-4.1.0-113.noarch.rpm

Currently installed BAC-CONFIG is CSCOrms-baseline-config-ga-4.0.0-168.noarch

Upgrading BASELINE CONFIG ...

BAC-CONFIG upgraded to /rms/upgrade/rpms/CSCOrms-baseline-config-ga-4.1.0-113.noarch.rpm

OPSTOOLSRPM : /rms/upgrade/rpms/CSCOrms-ops-tools-ga-4.1.0-168.noarch.rpm

Currently installed OPS-TOOLS is CSCOrms-ops-tools-ga-4.0.0-173.noarch

Upgrading OPS-TOOLS ...

OPS-TOOLS upgraded to /rms/upgrade/rpms/CSCOrms-ops-tools-ga-4.1.0-168.noarch.rpm

PMGRPM : /rms/upgrade/rpms/CSCOrms-pmg-ga-4.1.0-159.noarch.rpm

Currently installed PMG is CSCOrms-pmg-ga-4.0.0-306.noarch

Upgrading PMG ...

PMG upgraded to /rms/upgrade/rpms/CSCOrms-pmg-ga-4.1.0-159.noarch.rpm

DCCUIRPM : /rms/upgrade/rpms/CSCOrms-dcc-ui-ga-4.1.0-174.noarch.rpm

Currently installed DCC-UI is CSCOrms-dcc-ui-ga-4.0.0-269.noarch

Upgrading DCC-UI ...

DCC-UI upgraded to /rms/upgrade/rpms/CSCOrms-dcc-ui-ga-4.1.0-174.noarch.rpm

Restoring the DCC-UI DB ..

Restarting applications on Central Node

Restarting bprAgent ...

BAC is running

Restarting PMG ..

PMGServer[6958]: PMGServer has stopped by request (watchdog may restart it)

Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]

Finished upgrading RMS Central Node .


[BLR17-Central] / #

Step 3    Clear the browser cache and cookies before accessing the DCC-UI.

Step 4    Install the PMGDB if required. See PMG Database Installation and Configuration, on page 116, for more details.

Upgrade on Serving Node

Step 1    Copy the RMS-UPGRADE-4.1.0-1M.tar.gz file to the /rms directory of the serving node.

Step 2    Execute the following commands as root user to perform the upgrade:

a) cd /rms

b) gtar -zxvf RMS-UPGRADE-4.1.0-1M.tar.gz

c) cd /

d) /rms/upgrade/upgrade_rms.sh

Note • This is sample output for release 4.1. Your output may differ.

• In the output, when you are prompted to proceed with the upgrade, enter an appropriate response. When the script prompts you to enter the CAR password, type the password, press enter and wait. The script executes various commands in the background and it may take a few seconds or a minute before the next message is displayed to the console.

Sample Output:

[root@BLR17-Serving /]# /rms/upgrade/upgrade_rms.sh

User : root

Detected RMS4.0 setup

Indentified VM Node: Serving-Node

Upgrading the current RMS installation to 4.1.0 . Do you want to proceed? (y/n) : y

Stopping applications on Serving Node

Stopping bprAgent ..

BAC stopped successfully

Disabling the PNR extension points


Enter cnradmin Password:

Stopping PNR ..

Stopping CAR ..

No CAR CLI console open . Stopping CAR ..

Taking RMS Serving Node file backup as per the configuration in the file:/rms/upgrade/backupfilelist/servingBackUpFileList

Copying the DHCP files ..

Files are being moved to backup directory dpeext.jar is being moved to backup directory

Copying the DHCP files done

Filebackup tar is present at path : /rms-serving.tar

Starting RPM Upgrade ..

BACTAR : /rms/upgrade/rpms/BAC_3.8.1.1_LinuxK9.gtar.gz

Extracting the BAC TAR ..

Upgrading the BAC on RMS Serving Node ....

Copying the DHCP files to the original directory

Enable the PNR extensions

Starting bprAgent ..

Starting PNR ..


100 Ok session: cluster = localhost current-vpn = global default-format = user dhcp-edit-mode = synchronous dns-edit-mode = synchronous groups = superuser roles = sup

100 Ok dexdropras: entry = dexdropras file = libdexextension.so init-args = init-entry = lang = Dex name = dexdropras nrcmd> dhcp attachextension post-packet-decode

100 Ok preClientLookup: entry = bprClientLookup file = libbprextensions.so init-args = init-entry = lang = Dex name = preClientLookup nrcmd> dhcp attachExtension pre

100 Ok dexdropras: entry = dexdropras file = libdexextension.so init-args = init-entry = lang = Dex name = dexdropras extclientid: entry = clientID_trace file = libt race_clientid.so init-args = init-entry = lang = Dex name = extclientid preClientLookup: entry = bprClientLookup file = libbprextensions.so init-args = BPR_HOME=/rms/

100 Ok post-packet-decode: 1 dexdropras 2 extclientid pre-packet-encode: pre-client-lookup: preClientLookup post-client-lookup: post-send-packet: pre-dns-add-forward

100 Ok nrcmd>acceptable: post-class-lookup: lease-state-change: generate-lease: environment-destructor: pre-packet-decode: post-packet-encode: nrcmd>

Starting CAR ..

Enter car admin Password:

Restarting bprAgent ..

Restoring the Serving certs :

Restarting bprAgent ..

Finished upgrading RMS Serving Node .

[root@BLR17-Serving /]#


Upgrade on Upload Node

Step 1  Copy the RMS-UPGRADE-4.1.0-1M.tar.gz file to the /rms directory of the upload node.

Step 2  Execute the following commands as the root user to perform the upgrade:
a) cd /rms
b) gtar -zxvf RMS-UPGRADE-4.1.0-1M.tar.gz
c) cd /
d) /rms/upgrade/upgrade_rms.sh

Note • This is sample output. It may vary based on the release.

• In the output, when you are prompted to proceed with the upgrade, enter an appropriate response.

Sample Output:

[root@BLR17-Upload /]# /rms/upgrade/upgrade_rms.sh

User : root

Detected RMS4.0 setup

Indentified VM Node: Upload-Node

Upgrading the current RMS installation to 4.1.0 . Do you want to proceed? (y/n) : y

Stopping applications on Upload Node

Stopping Upload Server ..

..

Stopped all watches

Stopped god

The server is not available (or you do not have permissions to access it)

Taking RMS Upload Node file backup as per the configuration in the file:/rms/upgrade/backupfilelist/uploadBackUpFileList

Filebackup tar is present at path : /rms-upload.tar

Starting RPM Upgrade ..

Upgrading RUBY GEM GOD ...

RUBY GEM GOD upgraded to /rms/upgrade/rpms/rubygem-god-0.11.0-45.x86_64.rpm

Upload Node subroutine


UPLOAD-SERVER RPM : /rms/upgrade/rpms/CSCOrms-upload-server-9.2.0-40.noarch.rpm

Currently installed Upload Server is CSCOrms-upload-server-9.1.0-52.noarch

Upgrading UPLOAD SERVER ...

UPLOAD SERVER upgraded to /rms/upgrade/rpms/CSCOrms-upload-server-9.2.0-40.noarch.rpm

Restarting Upload Server

Finished upgrading RMS Upload Node .

[root@BLR17-Upload /]#

The upload server requires two partitions: one for archived files and one for raw files, mounted under /opt/CSCOuls/files/archives and /opt/CSCOuls/files/uploads respectively. For creating the data stores and disks, and creating partitions on them, see Adding PM_RAW and PM_ARCHIVE Datastores, on page 50.
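A quick way to confirm that both partitions are mounted as expected after the upgrade (an illustrative check using the standard df utility):

df -h /opt/CSCOuls/files/archives /opt/CSCOuls/files/uploads

Each path should report its own dedicated filesystem rather than the root filesystem.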

Post RMS Upgrade

1 For upgrades from RMS 3.0 and 3.1, after the upgrade the application password is reset to the default password 'Ch@ngeme1'. For upgrades from RMS 4.0, the application password is retained.

Note  It is mandatory to follow the LUS disk sizing concept. See Upgrading the Upload VM Data Sizing, on page 87 for more details.

• On the Central node, the password is reset for the DCC-UI, BAC-UI, DCC-UI DB, and regional PNR.

• On the Serving node, the password is reset for the CLI of PNR and PAR. The DPE CLI needs to be accessed using the password 'changeme', which then prompts you to change the password. When entering ENABLE mode in the DPE CLI, also use the password 'changeme'; you are again prompted to enter a new password. Enter the same password as in the PAR and PNR to be consistent.

• On the Upload node, the on-demand and stat admin passwords are reset in UploadServer.properties.
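For reference, a sketch of the first DPE CLI login after such an upgrade (the hostname is illustrative and the parenthetical notes are annotations, not CLI output):

telnet localhost 2323
Password:   (enter changeme; you are then prompted to set a new password)
blr-rms11-serving> enable
Password:   (enter changeme; you are again prompted to set a new password)
blr-rms11-serving#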

2 Post upgrade, update the newly added mandatory properties in DCC-UI groups and pools through the DCC-UI Groups and IDs screen. Refer to the Configuring the Central Node, on page 121 section of the RMS 4.x Installation Guide.

3 Execute the massCR script for the new data to be pushed onto the APs.

4 CMHS is no longer supported from RMS 4.0 and above. For upgrades from RMS 3.0 and 3.1, turn off the CMHS VM.

5 The DCC-UI dynamic screen XMLs, such as SDM Dashboard, Registration, Update, and Groups and IDs, are auto-replaced with the respective RMS 4.x versions and should be ready to use.


6 If any configuration from SDM Dashboard, Registration, Update, and Groups and IDs is required from RMS 3.0, 3.1, or 4.0, you have to manually merge the files from the /rmsbackupfiles directory. Refer to Merging of Files Manually, on page 160.

7 Post upgrade from RMS 3.0 or 3.1, install the PMGDB if required. See PMG Database Installation and Configuration, on page 116 for more details. Post upgrade from RMS 4.0 with PMGDB, refer to the section Configuring High Availability for the PMG DB: RMS Upgrade in the document High Availability for Cisco RAN Management Systems for installation details.

8 All FAPs must have firmware version 3.5.9.7 or above and must be compatible with the RMS 4.1 release.

9 To use LDAP, refer to LDAP Configuration, on page 139. To use TACACS, refer to TACACS Configuration, on page 141.

10 The default groups cannot be used post-upgrade. Create new groups as required. Refer to PMG Database Installation and Configuration, on page 116 in the Cisco RAN Management System Administration Guide, Release 4.x.

11 On the Upload VM, if the "UploadServer.files.upload.[file_type].raw.delete.threshexceeded" property in /opt/CSCOuls/conf/UploadServer.properties was manually modified or set to "false" before the upgrade, it needs to be set back to "true" after the upgrade.
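A quick post-upgrade check (illustrative; the actual [file_type] keys present depend on the file types configured in your deployment):

grep "raw.delete.threshexceeded" /opt/CSCOuls/conf/UploadServer.properties

Any entry still reporting "false" should be edited back to "true".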

Merging of Files Manually

Post upgrade, only the new RMS configuration is used. If you want a property that was manually configured in an earlier release of the RMS to be used in your new installation, specifically for the DCC-UI dynamic screens, you have to manually merge the files by copying the respective properties into the new XML of your release. The following files require manual change:

• sdm-update-screen-setup.xml

• sdm-register-screen-setup.xml

• deviceParamsDisplayConfig.xml

• bgmt-add-group-screen-setup-FemtoGateway.xml

• bgmt-add-group-screen-setup-Area.xml

• bgmt-add-pool-screen-setup-CELL-POOL.xml

• bgmt-add-pool-screen-setup-SAI-POOL.xml

All the above files are backed up in the /rmsbackupfiles directory of the central node by default. Copy and paste the specific property manually into the respective files under the /rms/app/rms/conf directory.

• sdm-update-screen-setup.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to sdm-update-residential-screen-setup.xml in RMS4.1

• sdm-register-screen-setup.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to sdm-register-residential-screen-setup.xml in RMS4.1

• deviceParamsDisplayConfig.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to deviceParamsDisplayConfig.xml in RMS4.1

• bgmt-add-group-screen-setup-FemtoGateway.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to bgmt-add-group-screen-setup-FemtoGateway.xml in RMS4.1

• bgmt-add-group-screen-setup-Area.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to bgmt-add-group-screen-setup-Area.xml in RMS4.1

• bgmt-add-pool-screen-setup-CELL-POOL.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to bgmt-add-pool-screen-setup-CELL-POOL.xml in RMS4.1

• bgmt-add-pool-screen-setup-SAI-POOL.xml in RMS3.0/RMS3.0MR/RMS4.0 corresponds to bgmt-add-pool-screen-setup-SAI-POOL.xml in RMS4.1

Step 1  If you want the following property, which was manually configured in RMS3.0/RMS3.0MR (to block the device through the update operation), to be in RMS 4.1 as well, copy the below configuration to /rms/app/rms/conf/sdm-update-residential-screen-setup.xml from /rmsbackupfiles/sdm-update-screen-setup.xml:

<ScreenElement>

<Id>blocked</Id>

<Required>false</Required>

<Label>Block</Label>

<LabelWidth>100px</LabelWidth>

<CheckBox>

</CheckBox>

<ToolTip>Controls if the device is blocked or not.</ToolTip>

<StoredKey>Blocked</StoredKey>

<StoredSection>element</StoredSection>

<StoredType>boolean</StoredType>

</ScreenElement>

Step 2  Navigate to /rms/app/rms/conf.

Step 3  Edit sdm-update-residential-screen-setup.xml using the vi editor as follows:
a) vi sdm-update-residential-screen-setup.xml
b) At the end of the file, before the </ScreenElements> tag, paste the copied <ScreenElement> configuration.
c) Save the changes: :wq!
d) Verify the changes in the Update screen of the DCC UI.


Rollback to Versions RMS 3.0/3.1 and 4.0

While executing the upgrade script, if you encounter any issues (such as incorrect passwords or the script terminating midway), or if a downgrade to the previous release is required, follow these steps:

Step 1  Power off the current VM.

Step 2  Right-click the VM and choose the option Delete from Disk.

Step 3  Right-click the cloned VM and choose the Clone option.

Step 4  Follow the steps in Back Up System Using vApp Cloning for cloning the VM.

Step 5  Use the new cloned VM for the upgrade, or as a previous release VM.


C H A P T E R  8

Troubleshooting

This chapter provides procedures for troubleshooting problems encountered during RMS installation.

Regeneration of Certificates, page 163

Deployment Troubleshooting, page 169

Regeneration of Certificates

Following are the scenarios that require regeneration of certificates:

• Certificate expiry (certificates have a validity of one year)

• Certificate import was not successful
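To confirm expiry before regenerating, you can inspect the validity dates recorded in the keystore (an illustrative check using the DPE keystore path; adjust the path for other nodes):

/rms/app/CSCObac/jre/bin/keytool -list -v -keystore /rms/app/CSCObac/dpe/conf/dpe.keystore | grep "Valid from"

Each certificate entry prints a "Valid from: ... until: ..." line; an "until" date in the past confirms expiry.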

Follow these steps to regenerate self-signed certificates:

Certificate Regeneration for DPE

To address problems faced during the certificate generation process in the Device Provisioning Engine (DPE), complete the following steps:

Step 1  Remove the root-ca.cer, server-ca.cer, and client-ca.cer certificates that are installed in the DPE.

SSH to Serving_Node_1 and change to the root user.

Navigate to the conf folder:
cd /rms/app/CSCObac/dpe/conf/self_signed
ls -lrt

Output:

[root@rms-aio-serving self_signed]# ls -lrt
total 20
-rw-r--r--. 1 bacservice bacservice 2239 Sep 23 11:08 dpe.keystore
-rw-r--r--. 1 bacservice bacservice 1075 Sep 23 11:08 dpe.csr
-rwxr-x---. 1 admin1 ciscorms 1742 Sep 23 11:51 server-ca.cer
-rwxr-x---. 1 admin1 ciscorms 1182 Sep 23 11:51 root-ca.cer
-rwxr-x---. 1 admin1 ciscorms 1626 Sep 23 11:51 client-ca.cer

Enter: rm root-ca.cer

Output:

[root@blr-rms11-serving conf]# rm root-ca.cer

rm: remove regular file `root-ca.cer'? Y

Enter: rm server-ca.cer

Output:

[root@blr-rms11-serving conf]# rm server-ca.cer

rm: remove regular file `server-ca.cer'? Y

Enter: rm client-ca.cer

Output:

[root@blr-rms11-serving conf]# rm client-ca.cer

rm: remove regular file `client-ca.cer'? Y

Enter: ls -lrt

Output:

[root@rms-aio-serving self_signed]# ls -lrt
total 8

-rw-r--r--. 1 bacservice bacservice 2239 Sep 23 11:08 dpe.keystore

-rw-r--r--. 1 bacservice bacservice 1075 Sep 23 11:08 dpe.csr

Step 2  Take a backup of the old DPE keystore and CSR:

Enter: mv /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore.bkup

Output:
System returns with command prompt

Enter: mv /rms/app/CSCObac/dpe/conf/self_signed/dpe.csr /rms/app/CSCObac/dpe/conf/self_signed/dpe.csr.bkup

Output:
System returns with command prompt

Step 3  Remove the existing server-ca and root-ca certificates from the cacerts file:

Enter:

/rms/app/CSCObac/jre/bin/keytool -delete -alias server-ca -keystore /rms/app/CSCObac/jre/lib/security/cacerts

Note  The default password for the keystore is "changeit".

Output:
Enter keystore password:

Enter:
/rms/app/CSCObac/jre/bin/keytool -delete -alias root-ca -keystore /rms/app/CSCObac/jre/lib/security/cacerts

Note  The default password for the keystore is "changeit".

Output:
Enter keystore password:

Step 4  Regenerate the keystore and CSR for the DPE node. Ensure that the CN field matches the FQDN or eth1 IP address of the DPE.

Enter:
/rms/app/CSCObac/jre/bin/keytool -keystore /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore -alias dpe-key -genkey -keyalg RSA

Note  The values must be as specified in the OVA descriptor file.

Output:

Enter keystore password:

Re-enter new password:

What is your first and last name?

[Unknown]: 10.5.2.217

What is the name of your organizational unit?

[Unknown]: CISCO

What is the name of your organization?

[Unknown]: CISCO

What is the name of your City or Locality?

[Unknown]: BLR

What is the name of your State or Province?

[Unknown]: KA

What is the two-letter country code for this unit?

[Unknown]: IN

Is CN=10.5.2.217, OU=CISCO, O=CISCO, L=BLR, ST=KA, C=IN correct?

[no]: yes

Enter key password for <dpe-key>

(RETURN if same as keystore password):

Re-enter new password:


Step 5  Enter:
/rms/app/CSCObac/jre/bin/keytool -keystore /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore -alias dpe-key -certreq -file dpe.csr

Output:

Enter keystore password:

Note It is important to use the keytool utility provided by DPE instead of the default java keytool as per BAC documentation.

Step 6  Copy the regenerated keystore and CSR to the /rms/app/CSCObac/dpe/conf/ folder:
cp /rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore /rms/app/CSCObac/dpe/conf/
cp /rms/app/CSCObac/dpe/conf/self_signed/dpe.csr /rms/app/CSCObac/dpe/conf/

Step 7  Set ownership:

Enter: chown bacservice:bacservice /rms/app/CSCObac/dpe/conf/dpe.keystore

Output:

System returns with command prompt

Enter: chown bacservice:bacservice /rms/app/CSCObac/dpe/conf/dpe.csr

Output:

System returns with command prompt

Step 8  Get the CSR signed by the signing authority and obtain the signed certificates and CA certificates (client-ca.cer, root-ca.cer, and server-ca.cer).

Step 9  Reinstall the certificates by following steps 4 and 5 in the "Installing RMS Certificates" section. Then reload the server process by following step 7 in the "Installing RMS Certificates" section.

Certificate Regeneration for Upload Server

Following are the keystore regeneration steps to be performed manually if something goes wrong with the certificate generation process in the LUS:

Note  Manually back up older keystores because the keystores are replaced whenever the script is executed.

Step 1  Open the generate_keystore.sh script in the /opt/CSCOuls/bin/ directory as the 'root' user using the below command.


Example: vi /opt/CSCOuls/bin/generate_keystore.sh

Step 2  Edit the below lines as per the OVA descriptor settings:

Cert_C="US"

Cert_ST="NC"

Cert_L="RTP"

Cert_O="Cisco Systems, Inc."

Cert_OU="SCTG"

Upload_SB_Fqdn="rtpfga-ova-upload"

Upload_Keystore_Password="Ch@ngeme1"

Upload_Node_Eth1_Address="10.5.2.17"
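If you prefer to script these edits rather than editing the file interactively, a sketch using sed (the values shown are illustrative and must match your OVA descriptor):

sed -i 's/^Upload_SB_Fqdn=.*/Upload_SB_Fqdn="rtpfga-ova-upload"/' /opt/CSCOuls/bin/generate_keystore.sh
sed -i 's/^Upload_Node_Eth1_Address=.*/Upload_Node_Eth1_Address="10.5.2.17"/' /opt/CSCOuls/bin/generate_keystore.sh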

Step 3  Run the script:

Enter:

./generate_keystore.sh

Output:

[root@BLR17-Upload-41N bin]# ./generate_keystore.sh

create uls keystore, private key and certificate request

Enter keystore password: Re-enter new password: Enter key password for <uls-key>

(RETURN if same as keystore password): Re-enter new password: Enter destination keystore password: Re-enter new password: Enter source keystore password: Adding UBI CA certs to uls truststore

Enter keystore password: Owner: O=Ubiquisys, CN=Co Int CA

Issuer: O=Ubiquisys, CN=Co Root CA

Serial number: 40d8ada022c1f52d

Valid from: Fri Mar 22 16:42:03 IST 2013 until: Tue Mar 16 16:42:03 IST 2038

Certificate fingerprints:

MD5: F0:F0:15:82:D3:22:A9:D7:4A:48:58:00:25:A9:E5:FC

SHA1: 38:45:74:77:61:08:A9:78:53:22:C1:29:7F:B8:8C:35:52:6F:31:79

SHA256:

DC:88:99:BE:A0:A3:BE:5F:49:11:DA:FB:85:83:05:CF:1E:A2:FA:E0:4F:4D:18:AF:0B:9B:23:3F:5F:D2:57:61

Signature algorithm name: SHA256withRSA

Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false

AuthorityKeyIdentifier [

KeyIdentifier [

0000: 4B 49 74 B3 E2 EF 41 BF

]

]

#2: ObjectId: 2.5.29.19 Criticality=false

BasicConstraints:[

CA:true

PathLen:0

]

#3: ObjectId: 2.5.29.15 Criticality=true

KIt...A.

May 25, 2015 HF

RAN Management System Installation Guide, Release 4.1

167

Troubleshooting

Certificate Regeneration for Upload Server

KeyUsage [

Key_CertSign

Crl_Sign

]

#4: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 4C 29 95 49 9D 27 44 86

]

]

L).I.'D.

Trust this certificate? [no]: Certificate was added to keystore

Enter keystore password: Owner: O=Ubiquisys, CN=Co Root CA

Issuer: O=Ubiquisys, CN=Co Root CA

Serial number: 99af1d71b488d88e

Valid from: Fri Mar 22 16:12:43 IST 2013 until: Tue Mar 16 16:12:43 IST 2038

Certificate fingerprints:

MD5: FA:FA:41:EF:2E:F1:83:B8:FD:94:9F:37:A2:8E:EE:7C

SHA1: 99:B0:FA:51:C7:B2:45:5B:44:22:C0:F6:24:CD:91:3F:0F:50:DE:AB

SHA256:

1C:64:6E:CB:27:2D:23:5C:B3:01:09:6B:02:F9:3E:B6:B2:59:42:50:CD:8C:75:A6:3F:8A:66:DF:A5:18:B6:74

Signature algorithm name: SHA256withRSA

Version: 3

Extensions:

]

]

#1: ObjectId: 2.5.29.35 Criticality=false

AuthorityKeyIdentifier [

KeyIdentifier [

0000: 4B 49 74 B3 E2 EF 41 BF

#2: ObjectId: 2.5.29.19 Criticality=false

BasicConstraints:[

CA:true

PathLen:2147483647

]

#3: ObjectId: 2.5.29.15 Criticality=true

KeyUsage [

Key_CertSign

Crl_Sign

]

KIt...A.

#4: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 4B 49 74 B3 E2 EF 41 BF

]

]

KIt...A.

Trust this certificate? [no]: Certificate was added to keystore


MAC verified OK

Changing permissions
fix permissions on secure files

[root@BLR17-Upload-41N bin]#

Step 4  The uls.keystore and uls.csr are regenerated in this directory: /opt/CSCOuls/conf

Step 5  After getting the uls.csr file, get it signed by the signing authority to obtain the client, server, and root certificates.

Step 6  Reinstall the certificates. For more information, see the "Installing RMS Certificates" section. Then reload the server process by following step 7 in the "Installing RMS Certificates" section.

Deployment Troubleshooting

To address problems faced during RMS deployment, complete the following steps. For more details on checking the status of the CN, ULS, and SN, see RMS Installation Sanity Check, on page 90.

CAR/PAR Server Not Functioning

Issue  The CAR/PAR server is not functioning. During login to aregcmd with user name 'admin' and the proper password, this message is seen: "Communication with the 'radius' server failed. Unable to obtain license from server."

Cause
1 The property "prop:Car_License_Base" is set incorrectly in the descriptor file, or
2 The CAR license has expired.


Solution
1 Log in to the Serving node as the root user.
2 Navigate to the /rms/app/CSCOar/license directory (cd /rms/app/CSCOar/license).
3 Edit the CSCOar.lic file (vi CSCOar.lic). Either overwrite the new license in the file, or comment out the existing one and add the fresh license on a new line:

Overwrite:

[root@rms-aio-serving license]# vi CSCOar.lic

INCREMENT PAR-SIG-NG-TPS cisco 6.0 28-feb-2015 uncounted

VENDOR_STRING=<count>1</count>

HOSTID=ANY

NOTICE="<LicFileID>20140818221132340</LicFileID><LicLineID>1</LicLineID>

<PAK></PAK>"

SIGN=E42AA34ED7C4

Comment the existing license and add the fresh license in the new line:

[root@rms-aio-serving license]# vi CSCOar.lic

#INCREMENT PAR-SIG-NG-TPS cisco 6.0 06-sept-2014 uncounted

VENDOR_STRING=<count>1</count> HOSTID=ANY NOTICE="

<LicFileID>20140818221132340</LicFileID><LicLineID>1</LicLineID>

<PAK></PAK>"

SIGN=E42AA34ED7C4

INCREMENT PAR-SIG-NG-TPS cisco 6.0 28-feb-2015 uncounted

VENDOR_STRING=<count>1</count>

HOSTID=ANY NOTICE="<LicFileID>20140818221132340</LicFileID>

<LicLineID>1</LicLineID> <PAK></PAK>" SIGN=E42AA34ED7C4

4 Navigate to the /home directory (cd /home) and repeat the previous step on the CSCOar.lic file in this directory.

5 Go to the Serving node console and restart the PAR server using the following commands:

/etc/init.d/arserver stop

/etc/init.d/arserver start

After restarting the PAR server, check the status using the following command:

/rms/app/CSCOar/usrbin/arstatus

Output:
Cisco Prime AR RADIUS server running      (pid: 1668)
Cisco Prime AR Server Agent running       (pid: 1655)
Cisco Prime AR MCD lock manager running   (pid: 1659)
Cisco Prime AR MCD server running         (pid: 1666)
Cisco Prime AR GUI running                (pid: 1669)

Unable to Access BAC and DCC UI

Issue  Not able to access the BAC UI and DCC UI due to expiry of certificates in the browser.

Cause  The certificate added to the browser has only three months of validity.


Solution

1 Delete the existing certificates from the browser. Go to Tools > Options. In the Options dialog, click Advanced > Certificates > View Certificates.
2 Select the RMS setup certificate and delete it.
3 Clear the browser history.
4 Access the DCC UI/BAC UI again. The message "This Connection is Untrusted" appears. Click Add Exception, and then click Confirm Security Exception in the Add Security Exception dialog.

DCC UI Shows Blank Page After Login

Issue  Unsupported plugins are installed in the browser.

Cause  Unsupported plugins cause conflicts with DCC UI operation.

Solution
1 Remove or uninstall all unsupported/incompatible third-party plugins from the browser, or
2 Reinstall the browser.

DHCP Server Not Functioning

Issue  The DHCP server is not functioning. During login to nrcmd with user name 'cnradmin' and the proper password, groups and roles are shown as 'superuser'; but if any command related to DHCP is entered, the following message is displayed: "You do not have permission to perform this action."

Cause  The property "prop:Cnr_License_IPNode" is set incorrectly in the descriptor file.


Solution
1 Log in to the central node and edit the following product.licenses file with the proper license key for PNR:
/rms/app/nwreg2/local/conf/product.licenses

Sample license file for reference:

INCREMENT count-dhcp cisco 8.1 permanent uncounted

VENDOR_STRING=<Count>10000</Count>

HOSTID=ANY

NOTICE="<LicFileID>20130715144658047</LicFileID><LicLineID>1</LicLineID>

<PAK></PAK><CompanyName></CompanyName>" SIGN=176CCF90B694

INCREMENT base-dhcp cisco 8.1 permanent uncounted

VENDOR_STRING=<Count>1000</Count>

HOSTID=ANY

NOTICE="<LicFileID>20130715144658047</LicFileID><LicLineID>2</LicLineID>

<PAK></PAK><CompanyName></CompanyName>" SIGN=0F10E6FC871E

INCREMENT base-system cisco 8.1 permanent uncounted

VENDOR_STRING=<Count>1</Count>

HOSTID=ANY

NOTICE="<LicFileID>20130715144658047</LicFileID><LicLineID>3</LicLineID>

<PAK></PAK><CompanyName></CompanyName>" SIGN=9242CBD0FED0

2 Log in to PNR GUI.

http://<central nb ip>:8090

User Name: cnradmin

Password: <prop:Cnradmin_Password> (Property value from the descriptor file)

3 Click Administration > Licenses from the Home page. The following three types of license keys should be present. If not present, add them using the browser:
• base-dhcp
• count-dhcp
• base-system

4 Click Administration > Clusters.
5 Click Resynchronize.
Go to the Serving node console and restart the PNR server using the following commands:

After restarting the PNR server, check the status using the following command:

/rms/app/nwreg2/local/usrbin/cnr_status

Output:
DHCP Server running               (pid: 8056)
Server Agent running              (pid: 8050)
CCM Server running                (pid: 8055)
WEB Server running                (pid: 8057)
CNRSNMP Server running            (pid: 8060)
RIC Server Running                (pid: 8058)
TFTP Server is not running
DNS Server is not running
DNS Caching Server is not running


DPE Processes are Not Running

Scenario 1:
DPE installation fails with this error log:

Issue  "This DPE is not licensed. Your request cannot be serviced."

Cause  The property prop:Dpe_Cnrquery_Client_Socket_Address must be set to the NB IP address of the serving node in the descriptor file. If anything other than the NB IP address of the serving node is given, the "DPE is not licensed" error appears in the OVA first-boot log.

Solution
1 Log in to the serving node ([admin1@blr-rms11-serving ~]$).
2 Execute the command telnet localhost 2323 to access the DPE CLI:

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
blr-rms11-serving BAC Device Provisioning Engine
User Access Verification
Password:
blr-rms11-serving> en
Password:
blr-rms11-serving# dpe cnrquery giaddr x.x.x.x
blr-rms11-serving# dpe cnrquery server-port 61610
blr-rms11-serving# dhcp reload

Scenario 2:

Issue  The DPE process might not run when the password of the keystore and key does not match the descriptor file.

Cause  The keystore was tampered with, or the password entered is incorrect, resulting in a password verification failure. This occurs when the password used to generate the keystore file is different from the one given for the property "prop:RMS_App_Password" in the descriptor file.


Solution
1 Navigate to /rms/app/CSCObac/dpe/conf and execute the below commands to change the password of the keystore file.

Input:
[root@rtpfga-s1-upload1 conf]# keytool -storepasswd -keystore dpe.keystore

Output:
Enter keystore password: OLD PASSWORD
New keystore password: NEW PASSWORD
Re-enter new keystore password: NEW PASSWORD

Input:
[root@rtpfga-s1-upload1 conf]# keytool -keypasswd -keystore dpe.keystore -alias dpe-key

Output:
Enter keystore password: NEW AS PER LAST COMMAND
Enter key password for <dpe-key>: OLD PASSWORD
New key password for <dpe-key>: NEW PASSWORD
Re-enter new key password for <dpe-key>: NEW PASSWORD

Note  The new keystore password should be the same as the one given in the descriptor file.
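Before restarting, you can confirm that the new password opens the keystore (an illustrative check using the standard keytool listing):

keytool -list -keystore dpe.keystore

Enter the new password at the prompt; the dpe-key entry should be listed without a password verification error.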

2 Restart the server process.

[root@rtpfga-s1-upload1 conf]# /etc/init.d/bprAgent restart dpe

[root@rtpfga-s1-upload1 conf]# /etc/init.d/bprAgent status dpe

BAC Process Watchdog is running.

Process [dpe] is running.

Broadband Access Center [BAC 3.8.1.2

(LNX_BAC3_8_1_2_20140918_1230_12)].

Connected to RDU [10.5.1.200].

Caching [3] device configs and [52] files.

188 sessions succeeded and 1 sessions failed.

6 file requests succeeded and 0 file requests failed.

68 immediate device operations succeeded, and 2 failed.

0 home PG redirections succeeded, and 0 failed.

Using signature key name [] with a validity of [3600].

Abbreviated ParamList is enabled.

Running for [4] hours [23] mins [17] secs.

Connection to Remote Object Unsuccessful

Issue  A connection to the remote object could not be made. OVF Tool does not support this server. Completed with errors.

Cause  The errors are triggered by the ovftool command during OVA deployment. The errors can be found in both the console and vCenter logs.

Solution  The user must have Administrator privileges on VMware vCenter and ESXi.


VLAN Not Found

Issue  VLAN not found.

Cause  The errors are triggered by the ovftool command during OVA deployment. The errors can be found in both the console and vCenter logs.

Solution  Check for the appropriate "portgroup" name on the virtual switch of the Elastic Sky X Integrated (ESXi) host or the Distributed Virtual Switch (DVS) on VMware vCenter.

Unable to Get Live Data in DCC UI

Issue  Live data of an AP does not appear and the connection request fails.

Cause
1 The device is offline.
2 The device does not have its radio activated (the device is registered but not activated).

Solution
1 In the Serving node, add one more route with the destination IP as the HNB-GW SCTP IP and the gateway as the Serving node northbound IP, as in the following example (a verification sketch follows this list):
Serving NB Gateway IP: 10.5.1.1
HNBGW SCTP IP: 10.5.1.83
Add the following route in the Serving node:
route add -net 10.5.1.83 netmask 255.255.255.0 gw 10.5.1.1

2 Activate the device from the DCC UI post-registration.
3 Verify troubleshooting logs in BAC.
4 Verify DPE logs and ZGTT logs from the ACS simulator.
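A sketch for verifying the route added in item 1 (addresses taken from the example above; note that routes added with the route command do not persist across reboots unless they are also added to the interface configuration):

route -n | grep 10.5.1.83

The HNB-GW SCTP network should be listed with gateway 10.5.1.1 on the northbound interface.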

Installation Warnings about Removed Parameters

These properties have been completely removed from the 4.0 OVA installation. A warning is given by the installer if these properties are found in the OVA descriptor file; however, the installation still continues.

prop:vami.gateway.Upload-Node
prop:vami.DNS.Upload-Node
prop:vami.ip0.Upload-Node
prop:vami.netmask0.Upload-Node
prop:vami.ip1.Upload-Node
prop:vami.netmask1.Upload-Node
prop:vami.gateway.Central-Node
prop:vami.DNS.Central-Node
prop:vami.ip0.Central-Node
prop:vami.netmask0.Central-Node
prop:vami.ip1.Central-Node
prop:vami.netmask1.Central-Node
prop:vami.gateway.Serving-Node
prop:vami.DNS.Serving-Node
prop:vami.ip0.Serving-Node
prop:vami.netmask0.Serving-Node
prop:vami.ip1.Serving-Node
prop:vami.netmask1.Serving-Node
prop:Debug_Mode
prop:Server_Crl_Urls
prop:Bacadmin_Password
prop:Dccapp_Password
prop:Opstools_Password
prop:Dccadmin_Password
prop:Postgresql_Password
prop:Central_Keystore_Password
prop:Upload_Stat_Password
prop:Upload_Calldrop_Password
prop:Upload_Demand_Password
prop:Upload_Lostipsec_Password
prop:Upload_Lostgwconnection_Password
prop:Upload_Nwlscan_Password
prop:Upload_Periodic_Password
prop:Upload_Restart_Password
prop:Upload_Crash_Password
prop:Upload_Lowmem_Password
prop:Upload_Unknown_Password
prop:Serving_Keystore_Password
prop:Cnradmin_Password
prop:Caradmin_Password
prop:Dpe_Cli_Password
prop:Dpe_Enable_Password
prop:Fc_Realm
prop:Fc_Log_Periodic_Upload_Enable
prop:Fc_Log_Periodic_Upload_Interval
prop:Fc_On_Nwl_Scan_Enable
prop:Fc_On_Lost_Ipsec_Enable
prop:Fc_On_Crash_Upload_Enable
prop:Fc_On_Call_Drop_Enable
prop:Fc_On_Lost_Gw_Connection_Enable
prop:Upload_Keystore_Password
prop:Dpe_Keystore_Password
prop:Bac_Secret
prop:Admin2_Username
prop:Admin2_Password
prop:Admin2_Firstname
prop:Admin2_Lastname
prop:Admin3_Username
prop:Admin3_Password
prop:Admin3_Firstname
prop:Admin3_Lastname
prop:Upgrade_Mode
prop:Asr5k_Hnbgw_Address

Upload Server is Not Up

The upload server fails with java.lang.ExceptionInInitializerError in the following scenarios. The errors can be seen in the /opt/CSCOuls/logs/uploadServer.console.log file.
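To locate the failure quickly in the log (an illustrative check):

grep -B2 -A5 "ExceptionInInitializerError" /opt/CSCOuls/logs/uploadServer.console.log

The lines following the match identify which of the scenarios below applies.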

Scenario 1:


Issue  Upload Server failed with java.lang.ExceptionInInitializerError:

java.lang.ExceptionInInitializerError

at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.getInstance

(UlsSouthBoundServer.java:58) at com.cisco.ca.rms.upload.server.UlsServer.<init>(UlsServer.java:123) at com.cisco.ca.rms.upload.server.UlsServer.<init>(UlsServer.java:25) at com.cisco.ca.rms.upload.server.UlsServer$SingleInstanceHolder.<clinit>

(UlsServer.java:70) at com.cisco.ca.rms.upload.server.UlsServer.getInstance(UlsServer.java:82) at com.cisco.ca.rms.upload.server.UlsServer.main(UlsServer.java:55)

Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to:

/10.6.22.12:8080 at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.<init>

(UlsSouthBoundServer.java:109) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.<init>

(UlsSouthBoundServer.java:22) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer$SingleInstanceHolder.<clinit>

(UlsSouthBoundServer.java:46)

... 6 more

Caused by: java.net.BindException: Cannot assign requested address at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Unknown Source) at sun.nio.ch.Net.bind(Unknown Source) at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind

(NioServerSocketPipelineSink.java:140) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket

(NioServerSocketPipelineSink.java:90) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk

(NioServerSocketPipelineSink.java:64) at org.jboss.netty.channel.Channels.bind(Channels.java:569) at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:189) at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(

ServerBootstrap.java:343) at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:170) at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>

(NioServerSocketChannel.java:80) at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel

(NioServerSocketChannelFactory.java:158) at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel

(NioServerSocketChannelFactory.java:86) at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:277)

... 9 more

Cause  The server failed to bind to the IP /10.6.22.12:8080 because the requested address was unavailable.

Solution  Navigate to /opt/CSCOuls/conf and modify the UploadServer.properties file with the proper SB and NB IP addresses.
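Before editing, confirm which addresses are actually assigned to the upload VM so the bind addresses in UploadServer.properties match one of them (an illustrative check):

ip addr show | grep "inet "

The configured SB and NB bind addresses must each match an address listed in this output.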

Scenario 2:

Issue  Upload Server failed with java.lang.ExceptionInInitializerError:

java.lang.ExceptionInInitializerError

at com.cisco.ca.rms.upload.server.security.UlsSbSslContextMgr.getInstance

(UlsSbSslContextMgr.java:65) at com.cisco.ca.rms.upload.server.UlsSouthBoundPipelineFactory.<init>

(UlsSouthBoundPipelineFactory.java:86) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.<init>

(UlsSouthBoundServer.java:102) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.<init>

(UlsSouthBoundServer.java:22) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer$SingleInstanceHolder.<clinit>

(UlsSouthBoundServer.java:46) at com.cisco.ca.rms.upload.server.UlsSouthBoundServer.getInstance

(UlsSouthBoundServer.java:58) at com.cisco.ca.rms.upload.server.UlsServer.<init>(UlsServer.java:123) at com.cisco.ca.rms.upload.server.UlsServer.<init>(UlsServer.java:25) at com.cisco.ca.rms.upload.server.UlsServer$SingleInstanceHolder.<clinit>

(UlsServer.java:70) at com.cisco.ca.rms.upload.server.UlsServer.getInstance(UlsServer.java:82) at com.cisco.ca.rms.upload.server.UlsServer.main(UlsServer.java:55)

Caused by: java.lang.IllegalStateException: java.io.IOException:

Keystore was tampered with, or password was incorrect at com.cisco.ca.rms.commons.security.SslContextManager.<init>

(SslContextManager.java:79) at com.cisco.ca.rms.upload.server.security.UlsSbSslContextMgr.<init>

(UlsSbSslContextMgr.java:72) at com.cisco.ca.rms.upload.server.security.UlsSbSslContextMgr.<init>

(UlsSbSslContextMgr.java:28) at com.cisco.ca.rms.upload.server.security.UlsSbSslContextMgr$SingleInstanceHolder.<clinit>

(UlsSbSslContextMgr.java:53)

... 11 more

Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect at sun.security.provider.JavaKeyStore.engineLoad(Unknown Source) at sun.security.provider.JavaKeyStore$JKS.engineLoad(Unknown Source) at java.security.KeyStore.load(Unknown Source) at com.cisco.ca.rms.upload.server.security.UlsSbSslContextMgr.loadKeyManagers

(UlsSbSslContextMgr.java:91) at com.cisco.ca.rms.commons.security.SslContextManager.<init>(SslContextManager.java:48)

... 14 more

Caused by: java.security.UnrecoverableKeyException: Password verification failed

... 19 more

Cause  The keystore was tampered with, or the password entered is incorrect, resulting in a password verification failure. This occurs when the password used to generate the keystore file is different from the one given for the property "Upload_Keystore_Password" in the descriptor file.


Solution
1 Navigate to /opt/CSCOuls/conf and execute the below command to change the password of the keystore file.

[root@rtpfga-s1-upload1 conf]# keytool -storepasswd -keystore uls.keystore

Output:
Enter keystore password: OLD PASSWORD
New keystore password: NEW PASSWORD
Re-enter new keystore password: NEW PASSWORD

Note  The new keystore password should be the same as the one given in the descriptor file.

2 Run another command before restarting the server to change the key password:

keytool -keypasswd -keystore uls.keystore -alias uls-key

Enter keystore password: NEW AS PER LAST COMMAND
Enter key password for <uls-key>: OLD PASSWORD
New key password for <uls-key>: NEW PASSWORD
Re-enter new key password for <uls-key>: NEW PASSWORD

3 Restart the server process.

[root@rtpfga-s1-upload1 conf]# service god restart

[root@rtpfga-s1-upload1 conf]# service god status

UploadServer: up

Scenario 3:

Issue  Upload Server failed with java.lang.ExceptionInInitializerError:

java.lang.ExceptionInInitializerError

at com.cisco.ca.rms.dcc.lus.server.LusNorthBoundServer.getInstance

(LusNorthBoundServer.java:65) at com.cisco.ca.rms.dcc.lus.server.LusServer.<init>(LusServer.java:98) at com.cisco.ca.rms.dcc.lus.server.LusServer.<init>(LusServer.java:17) at com.cisco.ca.rms.dcc.lus.server.LusServer$SingleInstanceHolder.<clinit>

(LusServer.java:45) at com.cisco.ca.rms.dcc.lus.server.LusServer.getInstance(LusServer.java:57) at com.cisco.ca.rms.dcc.lus.server.LusServer.main(LusServer.java:30)

Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to:

/0.0.0.0:8082 at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298) at com.cisco.ca.rms.dcc.lus.server.LusNorthBoundServer.<init>

(LusNorthBoundServer.java:120) at com.cisco.ca.rms.dcc.lus.server.LusNorthBoundServer.<init>

(LusNorthBoundServer.java:30) at com.cisco.ca.rms.dcc.lus.server.LusNorthBoundServer$SingleInstanceHolder.<clinit>

(LusNorthBoundServer.java:53)

... 6 more

Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind(Native Method) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind

(NioServerSocketPipelineSink.java:140) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket

(NioServerSocketPipelineSink.java:92) at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk

(NioServerSocketPipelineSink.java:66) at org.jboss.netty.channel.Channels.bind(Channels.java:462) at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:186) at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen

(ServerBootstrap.java:343) at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:170) at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>

(NioServerSocketChannel.java:77) at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel

(NioServerSocketChannelFactory.java:137) at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel

(NioServerSocketChannelFactory.java:85) at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:277)

... 9 more

Cause  The server failed to bind to the IP /0.0.0.0:8082 because the requested address is already in use.

Solution  Execute the command: netstat -anp | grep <port number>

For example:
[root@rtpfga-s1-upload1 conf]# netstat -anp | grep 8082
tcp        0      0 10.6.23.16:8082      0.0.0.0:*      LISTEN      26842/java

Kill the particular process:
[root@rtpfga-s1-upload1 conf]# kill -9 26842

Start the server:
[root@rtpfga-s1-upload1 conf]# service god start
[root@rtpfga-s1-upload1 conf]# service god status
UploadServer: up


OVA Installation failures

Issue  The OVA installer displays an error on the installation console.

Cause and Solution  If there are any issues during OVA installation, refer to the ova-first-boot.log file present on the Central node and Serving node. Validate the appropriate errors in the boot log files.

Update failures in group type, Site - DCC UI throws an error

Issue  SITE creation fails while importing all mandatory and optional parameters.

Cause  Invalid parameter value: FC-CSON-STATUS-HSCO-INNER with Optimised.

Solution  For the FC-CSON-STATUS-HSCO-INNER parameter, the allowed value is Optimized, not Optimised. Correct the spelling to Optimized.


A P P E N D I X  A

OVA Descriptor File Properties

All required and optional properties for the OVA descriptor file are described here.

RMS Network Architecture, page 183

Virtual Host Network Parameters, page 184

Virtual Host IP Address Parameters, page 186

Virtual Machine Parameters, page 190

HNB Gateway Parameters, page 191

Auto-Configuration Server Parameters, page 193

OSS Parameters, page 193

Administrative User Parameters, page 197

BAC Parameters, page 198

Certificate Parameters, page 199

Deployment Mode Parameters, page 200

License Parameters, page 200

Password Parameters, page 201

Serving Node GUI Parameters, page 203

DPE CLI Parameters, page 203

Time Zone Parameter, page 203

Third-Party SeGW Parameter, page 205

RMS Network Architecture

The descriptor files describe the network architecture to the RMS system so that all network entities can be accessed by the RMS. Before you create your descriptor files, you must have on hand the IP addresses of the various nodes in the system, the VLAN numbers, and all other information being configured in the descriptor files. Use this network architecture diagram as an example of a typical RMS installation. The examples in this document use the IP addresses defined in this architecture diagram. It might be helpful to map out your RMS architecture in a similar manner and thereby easily replace the values in the descriptor example files provided here with values applicable to your installation.

Figure 12: Example RMS Architecture

Virtual Host Network Parameters

This section of the OVA descriptor file specifies the virtual host network architecture. Information must be provided regarding the VLANs for the ports on the central node, the serving node and the upload node. The virtual host network property contains the parameters described in this table.

Note  VLAN numbers correspond to the network diagram in RMS Network Architecture, on page 183.

Central-Node Network 1: VLAN for the connection between the central node (southbound) and the upload node.
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the distributed central node descriptor file
  Example: net:Central-Node Network 1=VLAN 11

Central-Node Network 2: VLAN for the connection between the central node (northbound) and the serving node.
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the distributed central node descriptor file
  Example: net:Central-Node Network 2=VLAN 2335K

Serving-Node Network 1: VLAN for the connection between the serving node (northbound) and the central node.
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the serving node descriptor file for distributed deployment
  Example: net:Serving-Node Network 1=VLAN 11

Serving-Node Network 2: VLAN for the connection between the serving node (southbound) and the CPE network (FAPs).
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the distributed serving node descriptor file
  Example: net:Serving-Node Network 2=VLAN 12

Upload-Node Network 1: VLAN for the connection between the upload node (northbound) and the central node.
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the distributed upload node descriptor file
  Example: net:Upload-Node Network 1=VLAN 11

Upload-Node Network 2: VLAN for the connection between the upload node (southbound) and the CPE network (FAPs).
  Values: VLAN #
  Required: In the all-in-one deployment descriptor file; in the distributed upload node descriptor file
  Example: net:Upload-Node Network 2=VLAN 12

Virtual Host Network Example Configuration

Example of virtual host network section for all-in-one deployment:
net:Upload-Node Network 1=VLAN 11
net:Upload-Node Network 2=VLAN 12
net:Central-Node Network 1=VLAN 11
net:Central-Node Network 2=VLAN 2335K
net:Serving-Node Network 1=VLAN 11
net:Serving-Node Network 2=VLAN 12

Example of virtual host network section for distributed central node:
net:Central-Node Network 1=VLAN 11
net:Central-Node Network 2=VLAN 2335K

Example of virtual host network section for distributed upload node:
net:Upload-Node Network 1=VLAN 11
net:Upload-Node Network 2=VLAN 12

Example of virtual host network section for distributed serving node:
net:Serving-Node Network 1=VLAN 11
net:Serving-Node Network 2=VLAN 12

Virtual Host IP Address Parameters

This section of the OVA descriptor file specifies information regarding the virtual host. The Virtual Host IP Address property includes these parameters:

Note • In the Required column in the tables, Yes indicates a mandatory field, and No indicates a non-mandatory field.
• Underscore (_) cannot be used in the hostname for hostname parameters.

Hostname Parameters

Central_Hostname: Configured hostname of the server.
  Valid Values: Character string; no periods (.) allowed
  Default: rms-aio-central for the all-in-one descriptor file, rms-distr-central for the central node descriptor file
  Required: Yes

Serving_Hostname: Configured hostname of the serving node.
  Valid Values: Character string; no periods (.) allowed
  Default: rms-aio-serving for the all-in-one descriptor file, rms-distr-serving for the distributed descriptor file
  Required: Yes

Upload_Hostname: Configured hostname of the upload node.
  Valid Values: Character string; no periods (.) allowed
  Default: rms-aio-upload for the all-in-one descriptor file, rms-distr-upload for the distributed descriptor file
  Required: Yes

Example:
prop:Central_Hostname=hostname-central
prop:Serving_Hostname=hostname-serving
prop:Upload_Hostname=hostname-upload


Central Node Parameters

Central_Node_Eth0_Address: IP address of the southbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Central_Node_Eth0_Address=10.5.1.35

Central_Node_Eth0_Subnet: Network mask for the IP subnet of the southbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Central_Node_Eth0_Subnet=255.255.255.0

Central_Node_Eth1_Address: IP address of the northbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Central_Node_Eth1_Address=10.105.233.76

Central_Node_Eth1_Subnet: Network mask for the IP subnet of the northbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Central_Node_Eth1_Subnet=255.255.255.0

Central_Node_Gateway: IP address of the gateway to the management network for the northbound interface of the central node.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Central_Node_Gateway=10.105.233.1

Central_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Central_Node_Dns1_Address=72.163.128.140

Central_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Central_Node_Dns2_Address=171.68.226.120

Serving Node Parameters

Serving_Node_Eth0_Address: IP address of the northbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Serving_Node_Eth0_Address=10.5.1.36

Serving_Node_Eth0_Subnet: Network mask for the IP subnet of the northbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Serving_Node_Eth0_Subnet=255.255.255.0

Serving_Node_Eth1_Address: IP address of the southbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Serving_Node_Eth1_Address=10.5.2.36

Serving_Node_Eth1_Subnet: Network mask for the IP subnet of the southbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Serving_Node_Eth1_Subnet=255.255.255.0

Serving_Node_Gateway: IP address of the gateway to the management network for the southbound interface of the serving node.
  Valid Values: IP address; can be specified in comma-separated format in the form <NB GW>,<SB GW>
  Required: In all descriptor files
  Example: prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

Serving_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Serving_Node_Dns1_Address=10.105.233.60

Serving_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Serving_Node_Dns2_Address=72.163.128.140

Upload Node Parameters

Upload_Node_Eth0_Address: IP address of the northbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Upload_Node_Eth0_Address=10.5.1.38

Upload_Node_Eth0_Subnet: Network mask for the IP subnet of the northbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Upload_Node_Eth0_Subnet=255.255.255.0

Upload_Node_Eth1_Address: IP address of the southbound VM interface.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Upload_Node_Eth1_Address=10.5.2.38

Upload_Node_Eth1_Subnet: Network mask for the IP subnet of the southbound VM interface.
  Valid Values: Network mask
  Required: In all descriptor files
  Example: prop:Upload_Node_Eth1_Subnet=255.255.255.0

Upload_Node_Gateway: IP address of the gateway to the management network for the southbound interface of the upload node.
  Valid Values: IP address; can be specified in comma-separated format in the form <NB GW>,<SB GW>
  Required: In all descriptor files
  Example: prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

Upload_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Upload_Node_Dns1_Address=10.105.233.60

Upload_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator.
  Valid Values: IP address
  Required: In all descriptor files
  Example: prop:Upload_Node_Dns2_Address=72.163.128.140

Virtual Host IP Address Examples

All-in-one Descriptor File Example:
prop:Central_Node_Eth0_Address=10.5.1.35
prop:Central_Node_Eth0_Subnet=255.255.255.0
prop:Central_Node_Eth1_Address=10.105.233.76
prop:Central_Node_Eth1_Subnet=255.255.255.128
prop:Central_Node_Dns1_Address=72.163.128.140
prop:Central_Node_Dns2_Address=171.68.226.120
prop:Central_Node_Gateway=10.105.233.1
prop:Serving_Node_Eth0_Address=10.5.1.36
prop:Serving_Node_Eth0_Subnet=255.255.255.0
prop:Serving_Node_Eth1_Address=10.5.2.36
prop:Serving_Node_Eth1_Subnet=255.255.255.0
prop:Serving_Node_Dns1_Address=10.105.233.60
prop:Serving_Node_Dns2_Address=72.163.128.140
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1
prop:Upload_Node_Eth0_Address=10.5.1.38
prop:Upload_Node_Eth0_Subnet=255.255.255.0
prop:Upload_Node_Eth1_Address=10.5.2.38
prop:Upload_Node_Eth1_Subnet=255.255.255.0
prop:Upload_Node_Dns1_Address=10.105.233.60
prop:Upload_Node_Dns2_Address=72.163.128.140
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

Distributed Serving Node Descriptor File Example:
prop:Serving_Node_Eth0_Address=10.5.1.36
prop:Serving_Node_Eth0_Subnet=255.255.255.0
prop:Serving_Node_Eth1_Address=10.5.2.36
prop:Serving_Node_Eth1_Subnet=255.255.255.0
prop:Serving_Node_Dns1_Address=10.105.233.60
prop:Serving_Node_Dns2_Address=72.163.128.140
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

Distributed Upload Node Descriptor File Example:
prop:Upload_Node_Eth0_Address=10.5.1.38
prop:Upload_Node_Eth0_Subnet=255.255.255.0
prop:Upload_Node_Eth1_Address=10.5.2.38
prop:Upload_Node_Eth1_Subnet=255.255.255.0
prop:Upload_Node_Dns1_Address=10.105.233.60
prop:Upload_Node_Dns2_Address=72.163.128.140
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

Distributed Central Node Descriptor File Example:
prop:Central_Node_Eth0_Address=10.5.1.35
prop:Central_Node_Eth0_Subnet=255.255.255.0
prop:Central_Node_Eth1_Address=10.105.233.76
prop:Central_Node_Eth1_Subnet=255.255.255.128
prop:Central_Node_Dns1_Address=72.163.128.140
prop:Central_Node_Dns2_Address=171.68.226.120
prop:Central_Node_Gateway=10.105.233.1

Virtual Machine Parameters

The following virtual machine (VM) parameters can be configured.

Note Make sure that the value of the parameter powerOn is set to false as the VMware hardware version needs to be upgraded before starting the VMs.

acceptAllEulas: Specifies whether to accept license agreements.
  Values: True/False. Default: False
  Required: No
  Example: acceptAllEulas=False

skipManifestCheck: Specifies whether to skip validation of the OVF package manifest.
  Values: True/False. Default: False
  Required: No
  Example: skipManifestCheck=True

powerOn: Specifies the VM power state for the first time once deployed.
  Values: True/False. Default: False
  Required: No
  Example: powerOn=False

diskMode: Logical disk type of the VM.
  Values: thick/thin. Default: thin
  Required: No
  Example: diskMode=thin

vmFolder: Groups the virtual machine in a folder to add additional security.
  Values: folder name
  Required: Yes
  Example: vmFolder=FEM-GA-PEST

datastore: Name of the physical storage to keep VM files.
  Values: text name
  Required: Yes
  Example: datastore=ds-rtprms-c220-02

name (ovfid): Name of the vApp that will be deployed on the host.
  Values: text. Default: VSC
  Required: Yes
  Example: name=RMS-Provisioning-Solution

VM Parameter Configurations Example:
acceptAllEulas=True
skipManifestCheck=True
powerOn=False
diskMode=thin
vmFolder=FEM-GA-PEST
datastore=ds-rtprms-c220-02
name=RMS-Provisioning-Solution
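For orientation, these parameters correspond to standard VMware OVF Tool command-line options. A minimal sketch of an equivalent invocation (the OVA filename and the vi:// target locator are illustrative; omitting --powerOn leaves the vApp powered off, matching powerOn=False):

ovftool --acceptAllEulas --skipManifestCheck --diskMode=thin \
  --vmFolder=FEM-GA-PEST --datastore=ds-rtprms-c220-02 \
  --name=RMS-Provisioning-Solution \
  RMS-4.1.ova vi://administrator@vcenter.example.com/DC/host/Cluster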

HNB Gateway Parameters

These parameters can be configured for the Cisco ASR 5000 hardware that is running the central and serving nodes, in all descriptor files. A post-installation script is provided to configure correct values for these parameters. For more information, refer to Configuring the HNB Gateway for Redundancy, on page 80.

• IPSec address

• HNB-GW address

• DHCP pool information

• SCTP address

Parameter Name: Description

Asr5k_Dhcp_Address

DHCP IP address of the

ASR 5000

Values

IP address

Required

Yes, but can be configured with post-installation script

Example prop:Asr5k_Dhcp_Address=

172.23.27.152

RAN Management System Installation Guide, Release 4.1

191 May 25, 2015 HF

OVA Descriptor File Properties

HNB Gateway Parameters

Parameter Name: Asr5k_Radius_Address
Description: RADIUS IP address of the ASR 5000
Values: IP address
Required: Yes, but can be configured with the post-installation script
Example: prop:Asr5k_Radius_Address=172.23.27.152

Parameter Name: Asr5k_Radius_Secret
Description: RADIUS secret password as configured on the ASR 5000
Values: text; Default: secret
Required: No
Example: prop:Asr5k_Radius_Secret=***

Parameter Name: Dhcp_Pool_Network
Description: DHCP pool network address of the ASR 5000
Values: IP address
Required: Yes, but can be configured with the post-installation script
Example: prop:Dhcp_Pool_Network=6.0.0.0

Parameter Name: Dhcp_Pool_Subnet
Description: Subnet mask of the DHCP pool network of the ASR 5000
Values: Network mask
Required: Yes, but can be configured with the post-installation script
Example: prop:Dhcp_Pool_Subnet=255.255.255.0

Parameter Name: Dhcp_Pool_FirstAddress
Description: First IP address of the DHCP pool network of the ASR 5000
Values: IP address
Required: Yes, but can be configured with the post-installation script
Example: prop:Dhcp_Pool_FirstAddress=6.32.0.2

Parameter Name: Dhcp_Pool_LastAddress
Description: Last IP address of the DHCP pool network of the ASR 5000
Values: IP address
Required: Yes, but can be configured with the post-installation script
Example: prop:Dhcp_Pool_LastAddress=6.32.0.2

Parameter Name: Asr5k_Radius_CoA_Port
Description: Port for RADIUS Change-of-Authorization (with white list updates) and Disconnect flows from the PMG to the ASR 5000
Values: Port number; Default: 3799
Required: No
Example: prop:Asr5k_Radius_CoA_Port=3799

Parameter Name: Upload_SB_Fqdn
Description: Southbound fully qualified domain name or IP address for the upload node. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT.
Values: FQDN or IP address; Default: Upload eth1 address
Required: Yes
Example: prop:Upload_SB_Fqdn=<Upload_Server_Hostname>

HNB Gateway Configuration Example:

prop:Asr5k_Dhcp_Address=10.5.4.152

prop:Asr5k_Radius_Address=10.5.4.152

prop:Asr5k_Radius_Secret=secret
prop:Dhcp_Pool_Network=7.0.2.192

prop:Dhcp_Pool_Subnet=255.255.255.240

prop:Dhcp_Pool_FirstAddress=7.0.2.193

prop:Dhcp_Pool_LastAddress=7.0.2.206

prop:Asr5k_Radius_CoA_Port=3799
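The DHCP pool properties above must describe a consistent range: the first and last addresses have to fall inside the network defined by Dhcp_Pool_Network and Dhcp_Pool_Subnet. The following pre-deployment sanity check is an illustrative sketch only (it assumes a host with Python 3; the values are copied from the configuration example above):

#!/usr/bin/env python3
# Sanity-check sketch for the Dhcp_Pool_* descriptor properties.
import ipaddress

pool_network = ipaddress.ip_network("7.0.2.192/255.255.255.240")  # Dhcp_Pool_Network / Dhcp_Pool_Subnet
first = ipaddress.ip_address("7.0.2.193")  # Dhcp_Pool_FirstAddress
last = ipaddress.ip_address("7.0.2.206")   # Dhcp_Pool_LastAddress

# The pool boundaries must be usable host addresses (not the network or
# broadcast address), and the first address must not exceed the last.
usable = list(pool_network.hosts())
assert first in usable, "first address outside the usable pool range"
assert last in usable, "last address outside the usable pool range"
assert first <= last, "first address greater than last address"
print("DHCP pool properties are consistent:", pool_network, first, "-", last)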

Auto-Configuration Server Parameters

Configure the network virtual IP and the loopback address on the Central node, Serving node, and Upload node. The loopback address is configured as the default router for the Serving node and the Upload node.

The virtual IP address and the virtual fully qualified domain name (FQDN) are used as the Auto-Configuration Server (ACS) for TR-069 Informs, download of firmware files, and upload of diagnostic files. The virtual IP address and virtual FQDN should point either to the ACE server or, if ACE is not used, to the Serving node southbound address.

Note ACE hardware was used for TLS termination and load balancing. This hardware has reached end of life and is no longer deployed. For new installations, TLS is terminated on the DPE, thus bypassing the ACE.

The following parameters are used to configure the auto-configuration server information:

Parameter Name: Acs_Virtual_Address
Description: ACS virtual address; the southbound IP address of the Serving node
Values: IP address
Required: In all descriptor files
Example: prop:Acs_Virtual_Address=172.18.137.20

Parameter Name: Acs_Virtual_Fqdn
Description: ACS virtual fully qualified domain name (FQDN); the southbound FQDN or IP address of the Serving node. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT.
Values: Domain name or IP address
Required: In all descriptor files
Example: prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com

ACS Configuration Example:

prop:Acs_Virtual_Address=172.18.137.20

prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com
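Because CPEs reach the ACS through the virtual FQDN, it is worth confirming that Acs_Virtual_Fqdn actually resolves to Acs_Virtual_Address before deployment. The following sketch is illustrative only (it assumes Python 3 and working DNS on the install client; the values are taken from the ACS configuration example above):

#!/usr/bin/env python3
# Check that the ACS virtual FQDN resolves to the ACS virtual address.
import socket

acs_fqdn = "femtosetup11.testlab.com"  # prop:Acs_Virtual_Fqdn
acs_address = "172.18.137.20"          # prop:Acs_Virtual_Address

resolved = {info[4][0] for info in socket.getaddrinfo(acs_fqdn, None)}
if acs_address in resolved:
    print("OK: %s resolves to %s" % (acs_fqdn, acs_address))
else:
    print("WARNING: %s resolves to %s, expected %s"
          % (acs_fqdn, sorted(resolved), acs_address))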

OSS Parameters

Use these parameters to configure the integration points that are defined in the operations support systems (OSS). Only a few integration points must be configured; the others are optional. The optional integration points can be enabled or disabled using a Boolean flag.


NTP Servers

Use these parameters to configure the NTP server address defined for virtual hosts:

Note NTP servers can be configured after deploying the OVA files. Refer to NTP Servers Configuration, on page 131.

Parameter Name: Ntp1_Address
Description: Primary NTP server
Values: IP address
Required: No
Example: prop:Ntp1_Address=<ip address>

Parameter Name: Ntp2_Address
Description: Secondary NTP server
Values: IP address; Default: 10.10.10.2
Required: No
Example: prop:Ntp2_Address=<ip address>

Parameter Name: Ntp3_Address
Description: Alternative NTP server
Values: IP address; Default: 10.10.10.3
Required: No
Example: prop:Ntp3_Address=<ip address>

Parameter Name: Ntp4_Address
Description: Alternative NTP server
Values: IP address; Default: 10.10.10.4
Required: No
Example: prop:Ntp4_Address=<ip address>

NTP Configuration Example:

prop:Ntp1_Address=<ip address>
prop:Ntp2_Address=ntp-rtp2.cisco.com

prop:Ntp3_Address=ntp-rtp3.cisco.com

prop:Ntp4_Address=10.10.10.5

DNS Domain

Use these parameters to configure the DNS domain for virtual hosts:

Parameter Name: Dns_Domain
Description: Configures the domain address for virtual hosts
Values: Domain address; Default: cisco.com
Required: No
Example: prop:Dns_Domain=cisco.com


DNS Configuration Example:

prop:Dns_Domain=cisco.com

Syslog Servers

Use these parameters to configure the two syslog servers defined for remote logging support:

Parameter Name: Syslog_Enable
Description: Enables or disables syslog
Values: True/False; Default: False
Required: No
Example: prop:Syslog_Enable=True

Parameter Name: Syslog1_Address
Description: Primary syslog server IP address
Values: IP address of the syslog server; Default: 10.10.10.10
Required: No
Example: prop:Syslog1_Address=10.0.0.1

Parameter Name: Syslog2_Address
Description: Secondary syslog server IP address
Values: IP address of the syslog server; Default: 10.10.10.10
Required: No
Example: prop:Syslog2_Address=10.0.0.2

Note The syslog server configuration can be performed after the OVA file deployment. Refer to Syslog Servers Configuration, on page 137.

Syslog Configuration Example:

prop:Syslog_Enable=True
prop:Syslog1_Address=10.0.0.1

prop:Syslog2_Address=10.0.0.2

SNMP

Use these parameters to configure the two Simple Network Management Protocol (SNMP) hosts that are defined. The SNMP hosts are allowed through the host-based firewall and configured in each of the applications that support SNMP polling.

Note The SNMP read and SNMP write communities are defined only if the SNMP "Read" and SNMP "Write" options are enabled.


Note The SNMP configuration can be performed after the OVA file deployment. Refer to Configuring the SNMP and SNMP Trap Servers, on page 134.

Parameter Name: Snmp_Enable
Description: Enables or disables use of SNMP
Values: True/False; values other than True are treated as False; Default: False
Required: No
Example: prop:Snmp_Enable=True

Parameter Name: Snmp_Read_Community
Description: SNMP read password
Values: text; Default: snmp-read
Required: No
Example: prop:Snmp_Read_Community=***

Parameter Name: Snmp_Write_Community
Description: SNMP write password
Values: text; Default: snmp-write
Required: No
Example: prop:Snmp_Write_Community=***

Parameter Name: Snmp1_Address
Description: IP address of the primary SNMP server
Values: IP address; Default: 10.10.10.10
Required: No
Example: prop:Snmp1_Address=10.0.0.1

Parameter Name: Snmp2_Address
Description: IP address of the secondary SNMP server
Values: IP address; Default: 10.10.10.10
Required: No
Example: prop:Snmp2_Address=10.0.0.2

SNMP Configuration Example:

prop:Snmp_Enable=True
prop:Snmp_Read_Community=***
prop:Snmp_Write_Community=***
prop:Snmp1_Address=10.0.0.1

prop:Snmp2_Address=10.0.0.2

SNMP Traps

Use these parameters to configure the two SNMP trap hosts defined for real-time alarming. Each application that supports SNMP traps is configured with the hosts and the community string.

Note The SNMP trap configuration can be performed after the OVA file deployment. Refer to Configuring the SNMP and SNMP Trap Servers, on page 134.


Parameter Name: Snmptrap_Enable
Description: Enables or disables use of the SNMP trap service for real-time alarming
Values: True/False; values other than True are treated as False; Default: False
Required: No
Example: prop:Snmptrap_Enable=True

Parameter Name: Snmptrap1_Address
Description: IP address of the primary SNMP trap server
Values: IP address; Default: 10.10.10.10
Required: No
Example: prop:Snmptrap1_Address=10.0.0.1

Parameter Name: Snmptrap2_Address
Description: IP address of the secondary SNMP trap server
Values: IP address; Default: 10.10.10.10
Required: No
Example: prop:Snmptrap2_Address=10.0.0.2

Parameter Name: Snmptrap_Community
Description: SNMP trap community password
Values: text; Default: snmp-trap
Required: No
Example: prop:Snmptrap_Community=snmp-trap

SNMP Trap Configuration Example:

prop:Snmptrap_Enable=True
prop:Snmptrap1_Address=10.0.0.1

prop:Snmptrap2_Address=10.0.0.2

prop:Snmptrap_Community=snmp-trap
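To confirm that traps can reach a configured trap host before the applications are wired to it, a minimal listener on the standard trap port is often enough. The sketch below is illustrative only (it assumes Python 3; binding port 162 normally requires root privileges) and reports arriving UDP datagrams without decoding SNMP:

#!/usr/bin/env python3
# Minimal check that datagrams reach UDP port 162 on a trap host.
# This does not decode SNMP; it only confirms network reachability.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))  # standard SNMP trap port
print("listening for traps on UDP/162 ...")
while True:
    data, (src, port) = sock.recvfrom(4096)
    print("received %d bytes from %s:%d" % (len(data), src, port))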

Administrative User Parameters

Use these parameters to define the RMS administrative user. Configuring the administrative user ensures that accounts are created for all the software components, such as the Broadband Access Center database (BAC DB), Cisco Prime Network Registrar (PNR), Cisco Prime Access Registrar (PAR), and Secure Shell (SSH) system accounts. User administration is an important security feature and ensures that management of the system is performed using non-root access.

One admin user is defined by default during installation. You can change the default with these parameters. Other users can be defined after installation using the DCC UI.

Note Linux users can be added using the appropriate post-configuration script. Refer to Configuring Linux Administrative Users, on page 129.


Parameter Name: Admin1_Username
Description: System admin user 1 login ID
Values: text; Default: admin1
Required: No
Example: prop:Admin1_Username=Admin1

Parameter Name: Admin1_Password
Description: System admin user 1 password
Values: Passwords must be mixed case, alphanumeric, 8-127 characters long, contain one of the special characters (*, @, #) and at least one numeral, and have no spaces. Default: Ch@ngeme1
Required: No
Example: prop:Admin1_Password=***

Parameter Name: Admin1_Firstname
Description: System admin user 1 first name
Values: text; Default: admin1
Required: No
Example: prop:Admin1_Firstname=Admin1_Firstname

Parameter Name: Admin1_Lastname
Description: System admin user 1 last name
Values: text; Default: admin1
Required: No
Example: prop:Admin1_Lastname=Admin1_Lastname
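The password rule stated above (and repeated later for RMS_App_Password) can be checked before the descriptor is written. The following validator is an illustrative sketch of the stated policy, not the installer's own check (it assumes Python 3):

#!/usr/bin/env python3
# Sketch of the stated password policy: 8-127 characters, mixed case,
# at least one numeral, one of the special characters * @ #, no spaces.
import re

def password_ok(pw):
    return (8 <= len(pw) <= 127
            and " " not in pw
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"[0-9]", pw))
            and bool(re.search(r"[*@#]", pw)))

print(password_ok("Ch@ngeme1"))  # True: the documented default passes
print(password_ok("changeme"))   # False: no upper case, numeral, or special character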

BAC Parameters

These BAC parameters can be optionally configured in the descriptor file:

Parameter Name: Bac_Provisioning_Group
Description: Name of the default provisioning group that is created in BAC
Values: text; Default: pg01
Required: No
Example: prop:Bac_Provisioning_Group=default


Parameter Name: Ip_Timing_Server_Ip
Description: IP-TIMING-SERVER-IP property of the provisioning group specified in this descriptor. If no IP timing is configured, provide a dummy IP address for this parameter, such as 10.10.10.5.
Values: IP address; Default: 10.10.10.10
Required: No
Example: prop:Ip_Timing_Server_Ip=10.10.10.5

Certificate Parameters

The CPE-based security for the RMS solution is a private-key, certificate-based authentication system. Each small cell and server interface requires a unique signed certificate with the public DNS name and the defined IP address.

Parameter Name: System_Location
Description: System location used in the SNMP configuration
Values: text; Default: Production
Required: No
Example: prop:System_Location=Production

Parameter Name: System_Contact
Description: System contact used in the SNMP configuration
Values: email address; Default: [email protected]
Required: No
Example: prop:System_Contact=[email protected]

Parameter Name: Cert_C
Description: Certificate parameter to generate a Certificate Signing Request (CSR): Country name
Values: text; Default: US
Required: No
Example: prop:Cert_C=US

Parameter Name: Cert_ST
Description: Certificate parameter to generate a CSR: State or Province name
Values: text; Default: NC
Required: No
Example: prop:Cert_ST=North Carolina

Parameter Name: Cert_L
Description: Certificate parameter to generate a CSR: Locality name
Values: text; Default: RTP
Required: No
Example: prop:Cert_L=RTP

Parameter Name: Cert_O
Description: Certificate parameter to generate a CSR: Organization name
Values: text; Default: Cisco Systems, Inc.
Required: No
Example: prop:Cert_O=Cisco Systems, Inc.


Parameter Name: Cert_OU
Description: Certificate parameter to generate a CSR: Organization Unit name
Values: text; Default: MITG
Required: No
Example: prop:Cert_OU=SCTG

Deployment Mode Parameters

Use these parameters to specify deployment modes. Secure mode is set to True by default, and is a required setting for any production environment.

Parameter Name: Secure_Mode
Description: Ensures that all the security options are configured. The security options include IP tables and secured "sshd" settings.
Values: True/False; Default: True
Required: No
Example: prop:Secure_Mode=True

License Parameters

Use these parameters to configure the license information for Cisco BAC, Cisco Prime Access Registrar, and Cisco Prime Network Registrar. Default (mock) licenses are installed unless you specify actual license values with these parameters.

Parameter Name: Bac_License_Dpe
Description: License for BAC DPE
Values: text; a default dummy license is provided
Required: No
Example: prop:Bac_License_Dpe=AAfA...

Parameter Name: Bac_License_Cwmp
Description: License for BAC CWMP
Values: text; a default dummy license is provided
Required: No
Example: prop:Bac_License_Cwmp=AAfa...

Parameter Name: Bac_License_Ext
Description: License for BAC DPE extensions
Values: text; a default dummy license is provided
Required: No
Example: prop:Bac_License_Ext=AAfa...


Parameter Name: Bac_License_FemtoExt
Description: License for BAC DPE extensions. Note: the license should be of PAR type and not SUB type.
Values: text; a default dummy license is provided
Required: No
Example: prop:Bac_License_FemtoExt=AAfa...

Parameter Name: Car_License_Base
Description: License for Cisco PAR
Values: text; a default dummy license is provided
Required: No
Example: prop:Car_License_Base=AAfa...

Parameter Name: Cnr_License_IPNode
Description: License for Cisco PNR
Values: text; a default dummy license is provided
Required: No
Example: prop:Cnr_License_IPNode=AAfa...

Note For the PAR and PNR licenses, the descriptor properties Car_License_Base and Cnr_License_IPNode need to be updated in the case of a multi-line license file (put '\n' at the start of each new line of the license file).

For example:
prop:Cnr_License_IPNode=INCREMENT count-dhcp cisco 8.1 uncounted VENDOR_STRING=<Count>10000</Count> HOSTID=ANY NOTICE="<LicFileID>20130715144658047</LicFileID><LicLineID>1</LicLineID><PAK></PAK><CompanyName></CompanyName>" SIGN=176CCF90B694 \nINCREMENT base-dhcp cisco 8.1 uncounted VENDOR_STRING=<Count>1000</Count> HOSTID=ANY NOTICE="<LicFileID>20130715144658047</LicFileID><LicLineID>2</LicLineID><PAK></PAK><CompanyName></CompanyName>" SIGN=0F10E6FC871E
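A multi-line license file can be converted into the required single-line property value mechanically. The sketch below is illustrative only (it assumes Python 3, and the input file name is hypothetical); it joins the license lines with a literal "\n" marker as described in the note above:

#!/usr/bin/env python3
# Convert a multi-line PNR license file into a single-line descriptor value.
with open("cnr_license.lic") as f:  # hypothetical license file name
    lines = [line.strip() for line in f if line.strip()]

value = r"\n".join(lines)  # a literal backslash-n between the original lines
print("prop:Cnr_License_IPNode=" + value)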

Password Parameters

The password for the root user on all virtual machines (VMs) can be configured through the deployment descriptor. If this property is not set, the default root password is Ch@ngeme1. However, it is strongly recommended to set Root_Password through the deployment descriptor file.

The RMS_App_Password configures access to all of the following applications with one password:

• BAC admin password

• DCC application

• Operations tools

• ciscorms user password

• DCC administration


• Postgres database

• Central keystore

• Upload statistics files

• Upload call drop files

• Upload demand files

• Upload lost IPsec files

• Upload lost gateway connection files

• Upload network location scan files

• Upload periodic files

• Upload restart files

• Upload crash files

• Upload low memory files

• Upload unknown files

Parameter Name: Root_Password
Description: Password of the root user for all RMS VMs
Values: text; Default: Ch@ngeme1
Required: No
Example: prop:Root_Password=***

Parameter Name: RMS_App_Password
Description: Password for the RMS applications listed above
Values: Passwords must be mixed case, alphanumeric, 8-127 characters long, contain one of the special characters (*, @, #) and at least one numeral, and have no spaces. Default: Ch@ngeme1
Required: No
Example: prop:RMS_App_Password=***

Password Configuration Example:

prop:Root_Password=cisco123
prop:RMS_App_Password=Newpswd#123


Serving Node GUI Parameters

The serving node GUI for Cisco PAR and Cisco PNR is disabled by default. You can enable it with this parameter.

Parameter Name: Serving_Gui_Enable
Description: Option to enable/disable the GUI of PAR and PNR
Values: True/False; values other than "True" are treated as "False"; Default: False
Required: No
Example: prop:Serving_Gui_Enable=False

DPE CLI Parameters

The properties of the DPE command line interface (CLI) on the serving node can be configured through the deployment descriptor file with this parameter.

Parameter Name: Dpe_Cnrquery_Client_Socket_Address
Description: Address and port of the CNR query client configured in the DPE
Values: IP address followed by the port; Default: serving eth0 addr:61611
Required: No
Example: prop:Dpe_Cnrquery_Client_Socket_Address=127.0.0.1:61611

DPE CLI Configuration Example:

prop:Dpe_Cnrquery_Client_Socket_Address=10.5.1.48:61611

Time Zone Parameter

You can configure the time zone of the RMS installation with this parameter.


Parameter Name: vamitimezone
Description: Default time zone
Values: Default: Etc/UTC. Supported values:

• Pacific/Samoa
• US/Hawaii
• US/Alaska
• US/Pacific
• US/Mountain
• US/Central
• US/Eastern
• America/Caracas
• America/Argentina/Buenos_Aires
• America/Recife
• Etc/GMT-1
• Etc/UTC
• Europe/London
• Europe/Paris
• Africa/Cairo
• Europe/Moscow
• Asia/Baku
• Asia/Karachi
• Asia/Calcutta
• Asia/Dacca
• Asia/Bangkok
• Asia/Hong_Kong
• Asia/Tokyo
• Australia/Sydney
• Pacific/Noumea
• Pacific/Fiji

Required: No
Example: prop:vamitimezone=Etc/UTC


Time Zone Configuration Example:

prop:vamitimezone=Etc/UTC
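A candidate time zone can be checked against the supported list above before it is written into the descriptor. This is an illustrative sketch only (it assumes Python 3):

#!/usr/bin/env python3
# Check a candidate prop:vamitimezone value against the supported list.
SUPPORTED = {
    "Pacific/Samoa", "US/Hawaii", "US/Alaska", "US/Pacific", "US/Mountain",
    "US/Central", "US/Eastern", "America/Caracas",
    "America/Argentina/Buenos_Aires", "America/Recife", "Etc/GMT-1",
    "Etc/UTC", "Europe/London", "Europe/Paris", "Africa/Cairo",
    "Europe/Moscow", "Asia/Baku", "Asia/Karachi", "Asia/Calcutta",
    "Asia/Dacca", "Asia/Bangkok", "Asia/Hong_Kong", "Asia/Tokyo",
    "Australia/Sydney", "Pacific/Noumea", "Pacific/Fiji",
}

candidate = "Etc/UTC"
print("supported" if candidate in SUPPORTED else "NOT supported: " + candidate)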

Third-Party SeGW Parameter

You can configure the third-party security gateway (SeGW) with this parameter.

Parameter Name: Install_Cnr
Description: When this flag is set to "false", the Cisco Network Registrar component is not installed on the RMS Serving node.
Values: True/False; Default: True
Required: No
Example: prop:Install_Cnr=False


A P P E N D I X

B

Examples of OVA Descriptor Files

This appendix provides examples of descriptor files that you can copy and edit for your use. Use the ".ovftool" suffix for the file names and deploy them as described in Preparing the OVA Descriptor Files, on page 56.

Example of Descriptor File for All-in-One Deployment, page 207

Example Descriptor File for Distributed Central Node, page 209

Example Descriptor File for Distributed Serving Node, page 210

Example Descriptor File for Distributed Upload Node, page 212

Example Descriptor File for Redundant Serving/Upload Node, page 213

Example of Descriptor File for All-in-One Deployment

#Logical disk type of the VM. Recommended to use thin instead of thick to conserve VM disk utilization
diskMode=thin

#Name of the physical storage to keep VM files
datastore=ds-blrrms-240b-02

#Name of the vApp that will be deployed on the host
name=BLR-RMS40-AIO

#VLAN for communication between central and serving/upload node
net:Central-Node Network 1=VLAN 11

#VLAN for communication between central-node and management network
net:Central-Node Network 2=VLAN 233

#IP address of the northbound VM interface
prop:Central_Node_Eth0_Address=10.5.1.55

#Network mask for the IP subnet of the northbound VM interface
prop:Central_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Central_Node_Eth1_Address=10.105.233.81

#Network mask for the IP subnet of the southbound VM interface
prop:Central_Node_Eth1_Subnet=255.255.255.128

#IP address of primary DNS server provided by network administrator
prop:Central_Node_Dns1_Address=64.102.6.247


#IP address of secondary DNS server provided by network administrator
prop:Central_Node_Dns2_Address=10.105.233.60

#IP address of the gateway to the management network
prop:Central_Node_Gateway=10.105.233.1

#VLAN for the connection between the serving node (northbound) and the central node
net:Serving-Node Network 1=VLAN 11

#VLAN for the connection between the serving node (southbound) and the CPE network (FAPs)
net:Serving-Node Network 2=VLAN 12

#IP address of the northbound VM interface
prop:Serving_Node_Eth0_Address=10.5.1.56

#Network mask for the IP subnet of the northbound VM interface
prop:Serving_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Serving_Node_Eth1_Address=10.5.2.56

#Network mask for the IP subnet of the southbound VM interface
prop:Serving_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Serving_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Serving_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of serving node
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

#VLAN for the connection between the upload node (northbound) and the central node
net:Upload-Node Network 1=VLAN 11

#VLAN for the connection between the upload node (southbound) and the CPE network (FAPs)
net:Upload-Node Network 2=VLAN 12

#IP address of the northbound VM interface
prop:Upload_Node_Eth0_Address=10.5.1.58

#Network mask for the IP subnet of the northbound VM interface
prop:Upload_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Upload_Node_Eth1_Address=10.5.2.58

#Network mask for the IP subnet of the southbound VM interface
prop:Upload_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Upload_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Upload_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of upload node
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

#Southbound fully qualified domain name or IP address for the upload node for setting logupload URL on CPE
prop:Upload_SB_Fqdn=femtosetup11.testlab.com

#Primary RMS NTP server
prop:Ntp1_Address=10.105.233.60

#ACS virtual fully qualified domain name (FQDN). Southbound FQDN or IP address of the serving node.
prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com


#Central VM hostname
prop:Central_Hostname=blrrms-central-22

#Serving VM hostname
prop:Serving_Hostname=blrrms-serving-22

#Upload VM hostname
prop:Upload_Hostname=blr-blrrms-lus-22

#To keep IPv6 disabled on the Serving node in an All-In-One deployment:
prop:Serving_Node_IPv6_Enable=false

#To enable IPv6 on the Serving node in an All-In-One deployment:
prop:Serving_Node_IPv6_Enable=true
prop:Serving_Node_Eth0_IPv6_Address=2607:f0d0:1002:11::14
prop:Serving_Node_IPv6_Gateway_Address=2607:f0d0:1002:0011:0000:0000:0000:0001
prop:Serving_Node_IPv6_Lease_prefix=2607:f0d0:1002:11::
prop:IPv6_lease_prefix_length=64
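Typos in a descriptor file surface only at deployment time, so a simple syntax pre-check can save a failed run. The sketch below is illustrative only (it assumes Python 3): it parses key=value lines such as those in the example above and flags address-like properties that do not parse as IP addresses. It does not know which properties a given deployment mode requires, and NTP addresses, which may legitimately be host names, are not checked:

#!/usr/bin/env python3
# Syntax pre-check sketch for an .ovftool descriptor file.
import ipaddress
import sys

ADDRESS_SUFFIXES = ("_Eth0_Address", "_Eth1_Address", "_Subnet",
                    "_Dns1_Address", "_Dns2_Address", "_Gateway")

def check_descriptor(path):
    problems = []
    with open(path) as f:
        for n, raw in enumerate(f, 1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if "=" not in line:
                problems.append("line %d: no '=' found" % n)
                continue
            key, value = line.split("=", 1)
            if key.startswith("prop:") and key.endswith(ADDRESS_SUFFIXES):
                for part in value.split(","):  # gateways may be comma-separated
                    try:
                        ipaddress.ip_address(part.strip())
                    except ValueError:
                        problems.append("line %d: %r is not an IP address" % (n, part))
    return problems

for problem in check_descriptor(sys.argv[1]):
    print(problem)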

Example Descriptor File for Distributed Central Node

#Logical disk type of the VM. Recommended to use thin instead of thick to conserve VM disk utilization
diskMode=thin

#Name of the physical storage to keep VM files
datastore=ds-blrrms-240b-02

#Name of the vApp that will be deployed on the host
name=BLR-RMS40-CENTRAL

#VLAN for communication between central and serving/upload node
net:Central-Node Network 1=VLAN 11

#VLAN for communication between central-node and management network
net:Central-Node Network 2=VLAN 233

#IP address of the northbound VM interface
prop:Central_Node_Eth0_Address=10.5.1.55

#Network mask for the IP subnet of the northbound VM interface
prop:Central_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Central_Node_Eth1_Address=10.105.233.81

#Network mask for the IP subnet of the southbound VM interface
prop:Central_Node_Eth1_Subnet=255.255.255.128

#IP address of primary DNS server provided by network administrator
prop:Central_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Central_Node_Dns2_Address=10.105.233.60

#IP address of the gateway to the management network
prop:Central_Node_Gateway=10.105.233.1

#IP address of the northbound VM interface
prop:Serving_Node_Eth0_Address=10.5.1.56

#Network mask for the IP subnet of the northbound VM interface
prop:Serving_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Serving_Node_Eth1_Address=10.5.2.56


#Network mask for the IP subnet of the southbound VM interface
prop:Serving_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Serving_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Serving_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of serving node
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

#IP address of the northbound VM interface
prop:Upload_Node_Eth0_Address=10.5.1.58

#Network mask for the IP subnet of the northbound VM interface
prop:Upload_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Upload_Node_Eth1_Address=10.5.2.58

#Network mask for the IP subnet of the southbound VM interface
prop:Upload_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Upload_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Upload_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of upload node
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

#Southbound fully qualified domain name or IP address for the upload node for setting logupload URL on CPE
prop:Upload_SB_Fqdn=femtosetup11.testlab.com

#Primary RMS NTP server
prop:Ntp1_Address=10.105.233.60

#ACS virtual fully qualified domain name (FQDN). Southbound FQDN or IP address of the serving node.
prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com

#Central VM hostname
prop:Central_Hostname=blrrms-central-22

#Serving VM hostname
prop:Serving_Hostname=blrrms-serving-22

#Upload VM hostname
prop:Upload_Hostname=blr-blrrms-lus-22

Example Descriptor File for Distributed Serving Node

#Logical disk type of the VM. Recommended to use thin instead of thick to conserve VM disk utilization
diskMode=thin

#Name of the physical storage to keep VM files
datastore=ds-blrrms-240b-02

#Name of the vApp that will be deployed on the host
name=BLR-RMS40-SERVING

#IP address of the northbound VM interface

prop:Central_Node_Eth0_Address=10.5.1.55

#Network mask for the IP subnet of the northbound VM interface
prop:Central_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Central_Node_Eth1_Address=10.105.233.81

#Network mask for the IP subnet of the southbound VM interface
prop:Central_Node_Eth1_Subnet=255.255.255.128

#IP address of primary DNS server provided by network administrator
prop:Central_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Central_Node_Dns2_Address=10.105.233.60

#IP address of the gateway to the management network
prop:Central_Node_Gateway=10.105.233.1

#VLAN for the connection between the serving node (northbound) and the central node
net:Serving-Node Network 1=VLAN 11

#VLAN for the connection between the serving node (southbound) and the CPE network (FAPs)
net:Serving-Node Network 2=VLAN 12

#IP address of the northbound VM interface
prop:Serving_Node_Eth0_Address=10.5.1.56

#Network mask for the IP subnet of the northbound VM interface
prop:Serving_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Serving_Node_Eth1_Address=10.5.2.56

#Network mask for the IP subnet of the southbound VM interface
prop:Serving_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Serving_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Serving_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of serving node
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

#IP address of the northbound VM interface
prop:Upload_Node_Eth0_Address=10.5.1.58

#Network mask for the IP subnet of the northbound VM interface
prop:Upload_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Upload_Node_Eth1_Address=10.5.2.58

#Network mask for the IP subnet of the southbound VM interface
prop:Upload_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Upload_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Upload_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of upload node
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

#Southbound fully qualified domain name or IP address for the upload node for setting logupload URL on CPE
prop:Upload_SB_Fqdn=femtosetup11.testlab.com

#Primary RMS NTP server

prop:Ntp1_Address=10.105.233.60

#ACS virtual fully qualified domain name (FQDN). Southbound FQDN or IP address of the serving node.
prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com

#Central VM hostname
prop:Central_Hostname=blrrms-central-22

#Serving VM hostname
prop:Serving_Hostname=blrrms-serving-22

#Upload VM hostname
prop:Upload_Hostname=blr-blrrms-lus-22

#To keep IPv6 disabled on the Serving node in a Distributed deployment:
prop:Serving_Node_IPv6_Enable=false

#To enable IPv6 on the Serving node in a Distributed deployment:
prop:Serving_Node_IPv6_Enable=true
prop:Serving_Node_Eth0_IPv6_Address=2607:f0d0:1002:11::14
prop:Serving_Node_IPv6_Gateway_Address=2607:f0d0:1002:0011:0000:0000:0000:0001
prop:Serving_Node_IPv6_Lease_prefix=2607:f0d0:1002:11::
prop:IPv6_lease_prefix_length=64
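The IPv6 properties shown above must also be self-consistent: the lease prefix and prefix length must form a valid network, and the Eth0 IPv6 address is expected to sit inside it. The following sketch is illustrative only (it assumes Python 3; the values are copied from the example above):

#!/usr/bin/env python3
# Consistency sketch for the Serving node IPv6 descriptor properties.
import ipaddress

prefix = "2607:f0d0:1002:11::"      # prop:Serving_Node_IPv6_Lease_prefix
prefix_len = 64                     # prop:IPv6_lease_prefix_length
address = "2607:f0d0:1002:11::14"   # prop:Serving_Node_Eth0_IPv6_Address

network = ipaddress.ip_network("%s/%d" % (prefix, prefix_len))
print("lease network:", network)
print("eth0 address inside lease prefix:",
      ipaddress.ip_address(address) in network)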

Example Descriptor File for Distributed Upload Node

#Logical disk type of the VM. Recommended to use thin instead of thick to conserve VM disk utilization
diskMode=thin

#Name of the physical storage to keep VM files
datastore=ds-blrrms-240b-02

#Name of the vApp that will be deployed on the host
name=BLR-RMS40-UPLOAD

#IP address of the northbound VM interface
prop:Central_Node_Eth0_Address=10.5.1.55

#Network mask for the IP subnet of the northbound VM interface
prop:Central_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Central_Node_Eth1_Address=10.105.233.81

#Network mask for the IP subnet of the southbound VM interface
prop:Central_Node_Eth1_Subnet=255.255.255.128

#IP address of primary DNS server provided by network administrator
prop:Central_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Central_Node_Dns2_Address=10.105.233.60

#IP address of the gateway to the management network
prop:Central_Node_Gateway=10.105.233.1

#IP address of the northbound VM interface
prop:Serving_Node_Eth0_Address=10.5.1.56

#Network mask for the IP subnet of the northbound VM interface
prop:Serving_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Serving_Node_Eth1_Address=10.5.2.56


#Network mask for the IP subnet of the southbound VM interface
prop:Serving_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Serving_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Serving_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of serving node
prop:Serving_Node_Gateway=10.5.1.1,10.5.2.1

#VLAN for the connection between the upload node (northbound) and the central node
net:Upload-Node Network 1=VLAN 11

#VLAN for the connection between the upload node (southbound) and the CPE network (FAPs)
net:Upload-Node Network 2=VLAN 12

#IP address of the northbound VM interface
prop:Upload_Node_Eth0_Address=10.5.1.58

#Network mask for the IP subnet of the northbound VM interface
prop:Upload_Node_Eth0_Subnet=255.255.255.0

#IP address of the southbound VM interface
prop:Upload_Node_Eth1_Address=10.5.2.58

#Network mask for the IP subnet of the southbound VM interface
prop:Upload_Node_Eth1_Subnet=255.255.255.0

#IP address of primary DNS server provided by network administrator
prop:Upload_Node_Dns1_Address=64.102.6.247

#IP address of secondary DNS server provided by network administrator
prop:Upload_Node_Dns2_Address=10.105.233.60

#Comma separated northbound and southbound gateway of upload node
prop:Upload_Node_Gateway=10.5.1.1,10.5.2.1

#Southbound fully qualified domain name or IP address for the upload node for setting logupload URL on CPE
prop:Upload_SB_Fqdn=femtosetup11.testlab.com

#Primary RMS NTP server
prop:Ntp1_Address=10.105.233.60

#ACS virtual fully qualified domain name (FQDN). Southbound FQDN or IP address of the serving node.
prop:Acs_Virtual_Fqdn=femtosetup11.testlab.com

#Central VM hostname
prop:Central_Hostname=blrrms-central-22

#Serving VM hostname
prop:Serving_Hostname=blrrms-serving-22

#Upload VM hostname
prop:Upload_Hostname=blr-blrrms-lus-22

Example Descriptor File for Redundant Serving/Upload Node

datastore=ds-blrrms-5108-01
name=blrrms-central06-harsh
net:Upload-Node Network 1=VLAN 11
net:Upload-Node Network 2=VLAN 12

net:Central-Node Network 1=VLAN 11
net:Central-Node Network 2=VLAN 233
net:Serving-Node Network 1=VLAN 11
net:Serving-Node Network 2=VLAN 12
prop:Central_Node_Eth0_Address=10.5.1.35

prop:Central_Node_Eth0_Subnet=255.255.255.0

prop:Central_Node_Eth1_Address=10.105.233.76

prop:Central_Node_Eth1_Subnet=255.255.255.128

prop:Central_Node_Dns1_Address=72.163.128.140

prop:Central_Node_Dns2_Address=171.68.226.120

prop:Central_Node_Gateway=10.105.233.1

prop:Serving_Node_Eth0_Address=10.5.1.36

prop:Serving_Node_Eth0_Subnet=255.255.255.0

prop:Serving_Node_Eth1_Address=10.5.2.36

prop:Serving_Node_Eth1_Subnet=255.255.255.0

prop:Serving_Node_Dns1_Address=10.105.233.60

prop:Serving_Node_Dns2_Address=72.163.128.140

prop:Serving_Node_Gateway=10.5.1.1

prop:Upload_Node_Eth0_Address=10.5.1.38

prop:Upload_Node_Eth0_Subnet=255.255.255.0

prop:Upload_Node_Eth1_Address=10.5.2.38

prop:Upload_Node_Eth1_Subnet=255.255.255.0

prop:Upload_Node_Dns1_Address=10.105.233.60

prop:Upload_Node_Dns2_Address=72.163.128.140

prop:Upload_Node_Gateway=10.5.1.1

prop:Central_Hostname=rms-distr-central
prop:Serving_Hostname=rms-distr-serving2
prop:Upload_Hostname=rms-distr-upload2
prop:Ntp1_Address=10.105.233.60

prop:Acs_Virtual_Fqdn=femtoacs.testlab.com

prop:Asr5k_Dhcp_Address=10.5.1.107

prop:Asr5k_Radius_Address=10.5.1.107

prop:Asr5k_Hnbgw_Address=10.5.1.107

prop:Dhcp_Pool_Network=7.0.1.96

prop:Dhcp_Pool_Subnet=255.255.255.240

prop:Dhcp_Pool_FirstAddress=7.0.1.96

prop:Dhcp_Pool_LastAddress=7.0.1.111

prop:Upload_SB_Fqdn=femtouls.testlab.com


A P P E N D I X

C

System Backup

A full system backup of the VM is recommended before installing a new version of Cisco RMS so that if there is a failure while deploying the new version of Cisco RMS, the older version can be recovered.

Full system backups can be performed using the VMware snapshot features. Sufficient storage space must exist in the local data store for each server to perform a full system backup. For more information on storage space, see Virtualization Requirements, on page 12.

Full system backups should be deleted or transported to external storage for long-duration retention.

Application data backups can be performed using a set of "tar" and "gzip" commands. This document identifies the important data directories and the database backup commands. Sufficient storage space must exist within each virtual machine to perform an application data backup. For more information on storage space, see Virtualization Requirements, on page 12.

Performing application data backup directly to external storage requires an external volume to be mounted within each local VM; this configuration is beyond the scope of this document.

Both types of backups can support Online mode and Offline mode operations:

• Online mode backups are taken without affecting application services and are recommended for hot system backups.

• Offline mode backups are recommended when performing major system updates. Application services or network interfaces must be disabled before performing Offline mode backups. Full system restore must always be performed in Offline mode.

Full System Backup, page 215

Application Data Backup, page 218

Full System Backup

Full system backups can be performed using the VMware vSphere client and managed by the VMware vCenter server.

With VMware, there are two options for a full system backup:

• VM Snapshot


◦ A VM snapshot preserves the state and data of a virtual machine at a specific point in time. It is not a full backup of the VM: it creates a delta disk file and keeps the current state data, so if the full system is corrupted, it cannot be restored from the snapshot.

◦ Snapshots can be taken while the VM is running.

◦ Requires less disk space for storage than VM cloning.

• vApp/VM Cloning

◦ Copies the whole vApp/VM.

◦ While cloning, the vApp needs to be powered off.

Note It is recommended to clone the vApp instead of individual VMs.

◦ Requires more disk space for storage than VM snapshots.

Back Up System Using VM Snapshot

Note If an offline mode backup is required, disable the network interfaces for each virtual machine, and then create the snapshot using the VMware vSphere client.

Following are the steps to disable the network interfaces:

Step 1: Log in as the 'root' user to the RMS node through the vSphere Client console.

Step 2: Run the command: # service network stop

Using VM Snapshot

Step 1: Log in to vCenter using the vSphere client.

Step 2: Right-click on the VM and click Take Snapshot from the Snapshot menu.

Step 3: Specify the name and description of the snapshot and click OK.

Step 4: Verify that the snapshot taken is displayed in the Snapshot Manager. To do this, right-click on the VM and select Snapshot Manager from the Snapshot menu.


Back Up System Using vApp Cloning

Follow the procedure below to clone the Upload node with partitions. To clone the Central and Serving nodes (with or without partitions), skip the additional hard-disk steps as directed in Step 3.

Step 1: Log in to vCenter using the vSphere web client.

Step 2: Select the vApp of the VM to be cloned, right-click, and in the Getting Started tab, click Power off vApp.

Step 3: After the power-off, right-click on the VM and click Edit Settings. If there are no additional hard disks configured, skip Steps 4 to 17.

Step 4: Click on an additionally-configured hard disk (other than the default hard disk, Hard Disk 1) in the drop-down list, for example, Hard Disk 2.

Step 5: Make a note of the Disk File shown in the drop-down list. Repeat Steps 4 and 5 for all the additionally-configured hard disks, for example, Hard Disk 3, Hard Disk 4, and so on.

Step 6: Close the drop-down and remove the additional hard disks by clicking the "X" symbol against each additionally added hard disk, for example, Hard Disk 2, Hard Disk 3, Hard Disk 4, and so on. Click Ok.

Note Do not check the checkbox because that would delete the files from the datastore, which cannot be recovered.

Step 7: Right-click on the vApp, select All vCenter Actions, and click Clone. The New vApp Wizard is displayed.

Step 8: In the Select a creation type screen, select Clone an existing vApp and click Next.

Step 9: In the Select a destination screen, select the destination host and click Next.

Step 10: In the Select a name and location screen, provide a name and a target folder/datacenter for the clone and click Next.

Step 11: In the Select storage screen, select the virtual disk format (the same format as the source) and the destination datastore from the drop-down and click Next.

Step 12: Click Next in the Map Networks, vApp properties, and Resource allocation screens.

Step 13: In the Ready to complete screen, click Finish. The status of the clone is shown in the Recent Tasks section of the window.

Step 14: After the task is completed, to remount the additional hard disks removed in Step 6, right-click on the cloned VM and select Edit Settings.

Step 15: Select the new device as Existing Hard Disk and click Add.

Step 16: In the Select File screen, select the disk file as noted before the clone in Step 5 and click Ok. Repeat this step for each additional hard disk seen in Step 4.

Step 17: Repeat Steps 14 to 16 on the original VM.

Step 18: Select the vApp (either cloned or original) to be used and, in the Getting Started tab, click Power on vApp.

Note Make sure the Serving node and Upload node are powered on only after the Central node is completely up and running.


Application Data Backup

Application data backups are performed from the guest OS themselves. These backups create compressed tar files containing the required configuration files, database backups, and other required files. The backups and restores are performed as the root user.

Excluding Upload AP diagnostic files, a typical total size of all application configuration files is 2-3 MB. The Upload AP diagnostic files backup size varies depending on the size of the AP diagnostic files. The size of the RDU/Postgres DB backup files depends on the data and devices; a snapshot of backup files with 20 devices running has a total size of around 100 MB.

Perform the following procedure on each node to create an application data backup.

Note Copy all the backups created to the local PC or some other repository to store them.
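One way to follow this note is to push every archive to an external repository as soon as it is created. The sketch below is illustrative only (it assumes Python 3 and an scp-reachable repository; the destination host, user, and path are placeholders):

#!/usr/bin/env python3
# Copy backup archives off-box with the system scp binary.
import glob
import subprocess

DEST = "backupuser@backup-host:/repository/rms-backups/"  # hypothetical target

# The Central node writes to /rms/backups; the Serving and Upload nodes
# write to /rms/backup, so both locations are searched.
for archive in glob.glob("/rms/backups/*.tar.gz") + glob.glob("/rms/backup/*.tar.gz"):
    subprocess.check_call(["scp", archive, DEST])
    print("copied", archive)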

Backup on the Central Node

Follow this procedure to back up the RDU DB, Postgres DB, and configuration files on the Central node:

1 Log in to the Central node and switch to the 'root' user.

2 Execute the backup script to create the backup file. This script prompts for the following inputs:

• New backup directory: Provide a directory name that includes the date, so that the backup is easy to identify later when it is needed for a restore. For example, CentralNodeBackup_March20.

• PostgresDB password: Provide the password as defined in the descriptor file for the RMS_App_Password property during RMS installation. If the RMS_App_Password property is not defined in the descriptor file, use the default password Ch@ngeme1.


Enter:
cd /rms/ova/scripts/redundancy;
./backup_central_vm.sh

Output:

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #

./backup_central_vm.sh

Existing backup directories:

Enter name of new backup directory: CentralNodeBackup_March20

Enter password for postgresdb: Ch@ngeme1

Doing backup of Central VM configuration files.

tar: Removing leading `/' from member names

-rw-------. 1 root root 181089 Mar 20 05:13

/rms/backups/CentralNodeBackup_March20//central-config.tar.gz

Completed backup of Central VM configuration files.

Doing backup of Central VM Postgress DB.

-rw-------. 1 root root 4305935 Mar 20 05:13

/rms/backups/CentralNodeBackup_March20//postgres_db_bkup

Completed backup of Central VM Postgress DB.

Doing backup of Central VM RDU Berklay DB.

Database backup started

Back up to:

/rms/backups/CentralNodeBackup_March20/rdu-db/rdu-backup-20150320-051308

Copying DB_VERSION.

DB_VERSION: 100% completed.

Copied DB_VERSION. Size: 394 bytes.

Copying rdu.db.

rdu.db: 1% completed.

rdu.db: 2% completed.

.

.

.

rdu.db: 100% completed.

Copied rdu.db. Size: 5364383744 bytes.

Copying log.0000321861.

log.0000321861: 100% completed.

Copied log.0000321861. Size: 10485760 bytes.

Copying history.log.

history.log: 100% completed.

Copied history.log. Size: 23590559 bytes.

Database backup completed

Database recovery started

Recovering in:

/rms/backups/CentralNodeBackup_March20/rdu-db/rdu-backup-20150320-051308

This process may take a few minutes.

Database recovery completed rdu-db/ rdu-db/rdu-backup-20150320-051308/ rdu-db/rdu-backup-20150320-051308/DB_VERSION rdu-db/rdu-backup-20150320-051308/log.0000321861

rdu-db/rdu-backup-20150320-051308/history.log

rdu-db/rdu-backup-20150320-051308/rdu.db

-rw-------. 1 root root 664582721 Mar 20 05:14

/rms/backups/CentralNodeBackup_March20//rdu-db.tar.gz

Completed backup of Central VM RDU Berklay DB.

CentralNodeBackup_March20/

CentralNodeBackup_March20/rdu-db.tar.gz

CentralNodeBackup_March20/postgres_db_bkup

CentralNodeBackup_March20/.rdufiles_backup

CentralNodeBackup_March20/central-config.tar.gz

-rwxrwxrwx. 1 root root 649192608 Mar 20 05:16

/rms/backups/CentralNodeBackup_March20.tar.gz

backup done.

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #


3 Check for the backup file created in the /rms/backups/ directory.

Enter: ls -l /rms/backups

Output: [blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy # ls -l

/rms/backups total 634604

-rwxrwxrwx. 1 root root 649192608 Mar 20 05:16

CentralNodeBackup_March20.tar.gz

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #

Backup on the Serving Node

Perform the following commands to create a backup of RMS component data on the Serving node.

1 Back up Femtocell Firmware Files:

Enter:
cd /root
mkdir -p /rms/backup
tar cf /rms/backup/serving-firmware.tar /rms/data/CSCObac/dpe/files
gzip /rms/backup/serving-firmware.tar
ls /rms/backup/serving-firmware.tar.gz

Output: [root@rtpfga-ova-serving06 ~]# cd /root

[root@rtpfga-ova-serving06 ~]# mkdir -p /rms/backup

[root@rtpfga-ova-serving06 ~]# tar cf /rms/backup/serving-firmware.tar

/rms/data/CSCObac/dpe/files tar: Removing leading `/' from member names

[root@rtpfga-ova-serving06 ~]# gzip /rms/backup/serving-firmware.tar

[root@rtpfga-ova-serving06 ~]# ls /rms/backup/serving-firmware.tar.gz

/rms/backup/serving-firmware.tar.gz

[root@rtpfga-ova-serving06 ~]#

2 Back up Configuration Files:


Enter:
cd /root
mkdir -p /rms/backup
tar cf /rms/backup/serving-config.tar /rms/app/CSCOar/conf /rms/app/nwreg2/local/conf /rms/app/CSCObac/dpe/conf /rms/app/CSCObac/car_ep/conf /rms/app/CSCObac/cnr_ep/conf /rms/app/CSCObac/snmp/conf/ /rms/app/CSCObac/agent/conf /rms/app/CSCObac/jre/lib/security/cacerts
gzip /rms/backup/serving-config.tar
ls /rms/backup/serving-config.tar.gz

Output:

[root@rtpfga-ova-serving06 ~]# cd /root

[root@rtpfga-ova-serving06 ~]# mkdir -p /rms/backup

[root@rtpfga-ova-serving06 ~]# tar cf /rms/backup/serving-config.tar

/rms/app/CSCOar/conf /rms/app/nwreg2/local/conf

/rms/app/CSCObac/dpe/conf

/rms/app/CSCObac/car_ep/conf /rms/app/CSCObac/cnr_ep/conf

/rms/app/CSCObac/snmp/conf/ /rms/app/CSCObac/agent/conf

/rms/app/CSCObac/jre/lib/security/cacerts tar: Removing leading `/' from member names

[root@rtpfga-ova-serving06 ~]# gzip /rms/backup/serving-config.tar

[root@rtpfga-ova-serving06 ~]# ls /rms/backup/serving-config.tar.gz

/rms/backup/serving-config.tar.gz

[root@rtpfga-ova-serving06 ~]#

Backup on the Upload Node

Perform the following commands to create a backup of RMS component data on the Upload node.

1 Back up Configuration Files:

Enter:
cd /root
mkdir -p /rms/backup
tar cf /rms/backup/upload-config.tar /opt/CSCOuls/conf
gzip /rms/backup/upload-config.tar
ls /rms/backup/upload-config.tar.gz

Output: [root@rtpfga-ova-upload06 ~]# cd /root

[root@rtpfga-ova-upload06 ~]# mkdir -p /rms/backup

[root@rtpfga-ova-upload06 ~]# tar cf /rms/backup/upload-config.tar

/opt/CSCOuls/conf tar: Removing leading `/' from member names

[root@rtpfga-ova-upload06 ~]# gzip /rms/backup/upload-config.tar

[root@rtpfga-ova-upload06 ~]# ls /rms/backup/upload-config.tar.gz

/rms/backup/upload-config.tar.gz

2 Back up AP Files:


Enter:
cd /root
mkdir -p /rms/backup
tar cf /rms/backup/upload-node-apfiles.tar /opt/CSCOuls/files
gzip /rms/backup/upload-node-apfiles.tar
ls /rms/backup/upload-node-apfiles.tar.gz

Output:

[root@rtpfga-ova-upload06 ~]# cd /root

[root@rtpfga-ova-upload06 ~]# mkdir -p /rms/backup

[root@rtpfga-ova-upload06 ~]# tar cf

/rms/backup/upload-node-apfiles.tar /opt/CSCOuls/files tar: Removing leading `/' from member names

[root@rtpfga-ova-upload06 ~]# gzip /rms/backup/upload-node-apfiles.tar

[root@rtpfga-ova-upload06 ~]# ls /rms/backup/upload-node-apfiles.tar.gz

/rms/backup/upload-node-apfiles.tar.gz
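Before relying on any of the archives created in this appendix for a restore, it is worth confirming that each one is a readable gzip tar. The sketch below is illustrative only (it assumes Python 3):

#!/usr/bin/env python3
# Verify that each backup archive opens and lists cleanly.
import glob
import tarfile

for path in glob.glob("/rms/backup/*.tar.gz") + glob.glob("/rms/backups/*.tar.gz"):
    try:
        with tarfile.open(path, "r:gz") as tar:
            members = tar.getnames()
        print("%s: OK, %d entries" % (path, len(members)))
    except tarfile.TarError as exc:
        print("%s: FAILED (%s)" % (path, exc))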


A P P E N D I X

D

RMS System Rollback

This section describes the Restore procedure for the RMS provisioning solution.

Full System Restore, page 225

Application Data Restore, page 226

End-to-End Testing, page 234

Full System Restore

Restore from VM Snapshot

To perform a full system restore from a VM snapshot, follow these steps:

1 Restore the Snapshot from the VMware data store.

2 Restart the virtual appliance.

3 Perform end-to-end testing.

To restore a VM snapshot, follow these steps:

Step 1: Right-click on the VM and select Snapshot > Snapshot Manager.

Step 2: Select the snapshot to restore and click Go to.

Step 3: Click Yes to confirm the restore.

Step 4: Verify that the Snapshot Manager shows the restored state of the VM.

Step 5: Perform end-to-end testing.


Restore from vApp Clone

To perform a full system restore from a vApp clone, follow these steps:

Step 1: Select the running vApp, right-click, and click Power Off.

Step 2: Clone the backup vApp to restore, if required, by following the steps in Back Up System Using vApp Cloning.

Step 3: Right-click on the restored vApp and click Power on vApp to perform end-to-end testing.

Application Data Restore

Place the backups of all the nodes in the /rms/backup directory. Execute the restore steps on each node as the root user.

Restore from Central Node

Execute the following procedure to restore a backup of the RMS component data on the Central node. Take care to ensure that the application data backup is restored onto a system running the same software version as the one on which the backup was created.

Step 1: Log in to the Central node and switch to the 'root' user.

Step 2: Create a restore directory, /rms/backups/restore, if it does not exist, and copy the required backup file to the restore directory.

mkdir -p /rms/backups/restore
cp /rms/backups/CentralNodeBackup_March20.tar.gz /rms/backups/restore

Step 3: Run the script to restore the RDU database, Postgres database, and configuration on the primary Central VM using the backup file. This script lists all the available backups in the restore directory and prompts for the following:

• Backup file to restore: Provide one of the backup file names listed by the script.

• PostgresDB password: Provide the password as defined in the descriptor file for the RMS_App_Password property during RMS installation. If the RMS_App_Password property is not defined in the descriptor file, use the default password Ch@ngeme1.

Enter:
cd /rms/ova/scripts/redundancy/;

./restore_central_vm_from_bkup.sh

Output:

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy # ./restore_central_vm_from_bkup.sh

Existing backup files:

CentralNodeBackup_March20.tar.gz

CentralNodeBackup_March20_1.tar.gz


Enter name of backup file to restore from: CentralNodeBackup_March20.tar.gz

Enter password for postgresdb: Ch@ngeme1

CentralNodeBackup_March20/

CentralNodeBackup_March20/rdu-db.tar.gz

CentralNodeBackup_March20/postgres_db_bkup

CentralNodeBackup_March20/.rdufiles_backup

CentralNodeBackup_March20/central-config.tar.gz

Stopping RDU service

Encountered an error when stopping process [rdu].

Encountered an error when stopping process [tomcat].

ERROR: BAC Process Watchdog failed to exit after 90 seconds, killing processes.

BAC Process Watchdog has stopped.

RDU service stopped

Doing restore of Central VM RDU Berklay DB.

/ ~ rdu-db/ rdu-db/rdu-backup-20150320-051308/ rdu-db/rdu-backup-20150320-051308/DB_VERSION rdu-db/rdu-backup-20150320-051308/log.0000321861

rdu-db/rdu-backup-20150320-051308/history.log

rdu-db/rdu-backup-20150320-051308/rdu.db

Restoring RDU database...

Restoring from: /rms/backups/restore/temp/CentralNodeBackup_March20/rdu-db/rdu-backup-20150320-051308

Copying rdu.db.

rdu.db: 1% completed.

rdu.db: 2% completed.

.

.

.

Copied DB_VERSION. Size: 394 bytes.

Database was successfully restored

You can now start RDU server.

~

Completed restore of Central VM RDU Berklay DB.

Doing restore of Central VM Postgress DB.

/ ~

TRUNCATE TABLE

SET

SET

.

.

.

Completed restore of Central VM Postgress DB.

Doing restore of Central VM configuration files.

/ ~



.

.

rms/app/CSCObac/rdu/conf/ rms/app/CSCObac/rdu/conf/cmhs_nba_client_logback.xml

.

rms/app/rms/conf/dcc.properties

xuSrz6FQB9QSaiyB2GreKw== xuSrz6FQB9QSaiyB2GreKw==

Taking care of special characters in passwords xuSrz6FQB9QSaiyB2GreKw== xuSrz6FQB9QSaiyB2GreKw==

~

Completed restore of Central VM configuration files.

BAC Process Watchdog has started.

Restore done.

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #

Step 4: Check the status of the RDU and tomcat processes with the following command:

Enter:
/etc/init.d/bprAgent status
popd

Output:

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy # /etc/init.d/bprAgent status

BAC Process Watchdog is running.

Process [snmpAgent] is running.

Process [rdu] is running.

Process [tomcat] is running.

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #

Step 5: Restart the god service to restart the PMGServer and AlarmHandler components with the following command:

Enter:
service god restart

Output:

[blrrms-central-ucs240-ha] /home/admin # service god restart

..

Stopped all watches

Stopped god

[blrrms-central-ucs240-ha] /home/admin #

Step 6: Check whether the PMGServer and AlarmHandler components are up using the following command:

Enter:
service god status

Output:

[blrrms-central-ucs240-ha] /home/admin # service god status

AlarmHandler: up

PMGServer: up

[blrrms-central-ucs240-ha] /home/admin #

Step 7

It takes about 10 to 15 minutes (depending on the number of devices and groups) for the PMGServer to completely restore its service. Use the following command to check whether port 8083 is listening, which confirms that the PMG service is up:

Enter: netstat -an|grep 8083|grep LIST



Output:

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy # netstat -an|grep 8083|grep LIST
tcp 0 0 0.0.0.0:8083 0.0.0.0:* LISTEN

[blrrms-central50-ucs240-ha] /rms/ova/scripts/redundancy #

If the PMGServer status is unmonitored, run the following command:

Example: god start PMGServer

Sending 'start' command

The following watches were affected:

PMGServer

Check the status again; the PMGServer should be up and running after some time:

[rms-aio-central] /home/admin1 # god status PMGServer

PMGServer: up
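Because the PMGServer can take 10 to 15 minutes to restore service, a polling loop avoids repeated manual checks. This is a minimal illustrative sketch, not part of the documented procedure; the 900-second timeout and 30-second interval are assumptions, not Cisco defaults:

#!/bin/bash
# Illustrative wait loop: poll until port 8083 is listening or the
# timeout expires. Timeout and interval values are assumptions.
timeout=900
interval=30
elapsed=0
while ! netstat -an | grep 8083 | grep -q LIST; do
    if [ "$elapsed" -ge "$timeout" ]; then
        echo "PMG service did not come up within ${timeout}s" >&2
        exit 1
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
done
echo "PMG service is listening on port 8083."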

Restore from Serving Node

Step 1

Stop Application Services:

Enter:

cd /root
service bprAgent stop
service nwreglocal stop
service arserver stop

Output:

[root@rtpfga-ova-serving06 ~]# service bprAgent stop

Encountered an error when stopping process [dpe].

Encountered an error when stopping process [cli].

ERROR: BAC Process Watchdog failed to exit after 90 seconds, killing processes.

BAC Process Watchdog has stopped.

[root@rtpfga-ova-serving06 ~]# service nwreglocal stop

# Stopping Network Registrar Local Server Agent

INFO: waiting for Network Registrar Local Server Agent to exit ...

[root@rtpfga-ova-serving06 ~]# service arserver stop

Waiting for these processes to die (this may take some time):

AR RADIUS server running (pid: 4568)

AR Server Agent running (pid: 4502)

AR MCD lock manager running (pid: 4510)

AR MCD server running (pid: 4507)

AR GUI running (pid: 4517)

4 processes left.3 processes left.1 process left.0 processes left

Access Registrar Server Agent shutdown complete.

Step 2

Restore Femtocell Firmware Files:



Enter:

cd /root
pushd /
tar xfvz /rms/backup/serving-firmware.tar.gz
popd

Output:

[root@rtpfga-ova-serving06 ~]# pushd /

/ ~

[root@rtpfga-ova-serving06 /]# tar xfvz /rms/backup/serving-firmware.tar.gz

rms/data/CSCObac/dpe/files/

[root@rtpfga-ova-serving06 /]# popd

~
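Optionally, confirm that the firmware files were extracted into the target directory shown in the output above. This is an illustrative check, not a step from the procedure:

Enter: ls -l /rms/data/CSCObac/dpe/files/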

Step 3

Restore Configuration Files:

Enter:

cd /root
pushd /
tar xfvz /rms/backup/serving-config.tar.gz
popd

Output:

[root@rtpfga-ova-serving06 ~]# pushd /

/ ~

[root@rtpfga-ova-serving06 /]# tar xfvz /rms/backup/serving-config.tar.gz

rms/app/CSCOar/conf/
rms/app/CSCOar/conf/tomcat.csr

rms/app/CSCOar/conf/diaconfig.server.xml

rms/app/CSCOar/conf/tomcat.keystore

rms/app/CSCOar/conf/diaconfiguration.dtd

rms/app/CSCOar/conf/arserver.orig

rms/app/CSCOar/conf/car.conf

rms/app/CSCOar/conf/diadictionary.xml

rms/app/CSCOar/conf/car.orig

rms/app/CSCOar/conf/mcdConfig.txt

rms/app/CSCOar/conf/mcdConfig.examples

rms/app/CSCOar/conf/mcdConfigSM.examples

rms/app/CSCOar/conf/openssl.cnf

rms/app/CSCOar/conf/diadictionary.dtd

rms/app/CSCOar/conf/release.batch.ver

rms/app/CSCOar/conf/add-on/
rms/app/nwreg2/local/conf/
rms/app/nwreg2/local/conf/cnrremove.tcl

rms/app/nwreg2/local/conf/webui.properties

rms/app/nwreg2/local/conf/tomcat.csr

rms/app/nwreg2/local/conf/localBasicPages.properties

rms/app/nwreg2/local/conf/tomcat.keystore

rms/app/nwreg2/local/conf/nwreglocal
rms/app/nwreg2/local/conf/userStrings.properties

rms/app/nwreg2/local/conf/nrcmd-listbrief-defaults.conf

rms/app/nwreg2/local/conf/tramp-cmtssrv-unix.txt

rms/app/nwreg2/local/conf/localCorePages.properties

rms/app/nwreg2/local/conf/regionalCorePages.properties

rms/app/nwreg2/local/conf/cnr_cert_config
rms/app/nwreg2/local/conf/product.licenses

rms/app/nwreg2/local/conf/dashboardhelp.properties

rms/app/nwreg2/local/conf/cmtssrv.properties

rms/app/nwreg2/local/conf/tramp-tomcat-unix.txt

rms/app/nwreg2/local/conf/cert/
rms/app/nwreg2/local/conf/cert/pubkey.pem

rms/app/nwreg2/local/conf/cert/cert.pem

rms/app/nwreg2/local/conf/cnr_status.orig

rms/app/nwreg2/local/conf/localSitePages.properties

rms/app/nwreg2/local/conf/regionalBasicPages.properties

rms/app/nwreg2/local/conf/manifest
rms/app/nwreg2/local/conf/cnr.conf

rms/app/nwreg2/local/conf/basicPages.conf

rms/app/nwreg2/local/conf/openssl.cnf

rms/app/nwreg2/local/conf/regionalSitePages.properties

rms/app/nwreg2/local/conf/priv/
rms/app/nwreg2/local/conf/priv/key.pem

rms/app/nwreg2/local/conf/genericPages.conf

rms/app/nwreg2/local/conf/aicservagt.orig

rms/app/CSCObac/dpe/conf/
rms/app/CSCObac/dpe/conf/self_signed/
rms/app/CSCObac/dpe/conf/self_signed/dpe.keystore

rms/app/CSCObac/dpe/conf/self_signed/dpe.csr

rms/app/CSCObac/dpe/conf/dpeextauth.jar

rms/app/CSCObac/dpe/conf/dpe.properties.29052014

rms/app/CSCObac/dpe/conf/AuthResponse.xsd

rms/app/CSCObac/dpe/conf/dpe.properties_May31_before_increasing_alarmQuesize_n_session_timeout

rms/app/CSCObac/dpe/conf/bak_dpe.properties

rms/app/CSCObac/dpe/conf/dpe-genericfemto.properties

rms/app/CSCObac/dpe/conf/dpe.keystore_changeme1

rms/app/CSCObac/dpe/conf/bak_orig_dpe.keystore

rms/app/CSCObac/dpe/conf/AuthRequest.xsd

rms/app/CSCObac/dpe/conf/dpe-femto.properties

rms/app/CSCObac/dpe/conf/dpe-TR196v1.parameters

rms/app/CSCObac/dpe/conf/dpe.properties

rms/app/CSCObac/dpe/conf/dpe.keystore

rms/app/CSCObac/dpe/conf/dpe.properties.bak.1405

rms/app/CSCObac/dpe/conf/bak_no_debug_dpe.properties

rms/app/CSCObac/dpe/conf/dpe.csr

rms/app/CSCObac/dpe/conf/dpe.properties.org

rms/app/CSCObac/dpe/conf/dpe-TR196v2.parameters

rms/app/CSCObac/dpe/conf/server-certs
rms/app/CSCObac/dpe/conf/Apr4_certs_check.pcap

rms/app/CSCObac/car_ep/conf/
rms/app/CSCObac/car_ep/conf/AuthResponse.xsd

rms/app/CSCObac/car_ep/conf/AuthRequest.xsd

rms/app/CSCObac/car_ep/conf/car_ep.properties

rms/app/CSCObac/car_ep/conf/server-certs
rms/app/CSCObac/cnr_ep/conf/
rms/app/CSCObac/cnr_ep/conf/cnr_ep.properties

rms/app/CSCObac/snmp/conf/
rms/app/CSCObac/snmp/conf/sys_group_table.properties

rms/app/CSCObac/snmp/conf/trap_forwarding_table.xml

rms/app/CSCObac/snmp/conf/proxy_table.xml

rms/app/CSCObac/snmp/conf/access_control_table.xml

rms/app/CSCObac/snmp/conf/sys_or_table.xml

rms/app/CSCObac/snmp/conf/agent_startup_conf.xml

rms/app/CSCObac/agent/conf/
rms/app/CSCObac/agent/conf/agent.ini

rms/app/CSCObac/agent/conf/agent.conf

rms/app/CSCObac/jre/lib/security/cacerts

[root@rtpfga-ova-serving06 /]# popd

~

[root@rtpfga-ova-serving06 ~]#

Step 4

Start Application Services:

Enter:

cd /root
service arserver start
service nwreglocal start
service bprAgent start

Output:

[root@rtpfga-ova-serving06 ~]# service arserver start

Starting Access Registrar Server Agent...completed.

[root@rtpfga-ova-serving06 ~]# service nwreglocal start

# Starting Network Registrar Local Server Agent

[root@rtpfga-ova-serving06 ~]# service bprAgent start

BAC Process Watchdog has started.
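Steps 1 through 4 always run in the same stop, extract, start order, so they can be combined into a single script if desired. The following is a minimal illustrative sketch, not a Cisco-provided tool; it assumes both archives exist under /rms/backup/ with the names used in this section. Because the stop commands may report process errors (as in the sample output above), their exit status is not treated as fatal:

#!/bin/bash
# Illustrative wrapper for the Serving node restore (Steps 1-4 above).
# Assumes the backup archives exist under /rms/backup/.

# Step 1: stop application services.
service bprAgent stop
service nwreglocal stop
service arserver stop

# Steps 2 and 3: extract the firmware and configuration archives at /.
pushd / || exit 1
tar xfvz /rms/backup/serving-firmware.tar.gz || exit 1
tar xfvz /rms/backup/serving-config.tar.gz || exit 1
popd

# Step 4: restart application services.
service arserver start
service nwreglocal start
service bprAgent start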

Restore from Upload Node

Perform the following steps to restore a backup of the RMS component data on the Upload node.

Step 1

Stop Application Services:

Enter:

cd /root
service god stop

Output:

[root@rtpfga-ova-upload06 ~]# service god stop

..

Stopped all watches

Stopped god

Step 2

Restore Configuration Files:

Enter:

cd /root
pushd /
tar xfvz /rms/backup/upload-config.tar.gz
popd

Output:

[root@rtpfga-ova-upload06 ~]# pushd /

/ ~

[root@rtpfga-ova-upload06 /]# tar xfvz /rms/backup/upload-config.tar.gz

opt/CSCOuls/conf/

opt/CSCOuls/conf/CISCO-SMI.my

opt/CSCOuls/conf/proofOfLife.txt

opt/CSCOuls/conf/post_config_logback.xml

opt/CSCOuls/conf/god.dist

opt/CSCOuls/conf/UploadServer.properties

opt/CSCOuls/conf/server_logback.xml

opt/CSCOuls/conf/CISCO-MHS-MIB.my

[root@rtpfga-ova-upload06 /]# popd

~

Step 3

Restore AP Files:

Enter:

cd /root
pushd /
tar xfvz /rms/backup/upload-node-apfiles.tar.gz
popd

Output:

[root@rtpfga-ova-upload06 ~]# pushd /

/ ~

[root@rtpfga-ova-upload06 /]# tar xfvz /rms/backup/upload-node-apfiles.tar.gz

opt/CSCOuls/files/
opt/CSCOuls/files/uploads/
opt/CSCOuls/files/uploads/lost-ipsec/
opt/CSCOuls/files/uploads/lost-gw-connection/
opt/CSCOuls/files/uploads/stat/
opt/CSCOuls/files/uploads/unexpected-restart/
opt/CSCOuls/files/uploads/unknown/
opt/CSCOuls/files/uploads/nwl-scan-complete/
opt/CSCOuls/files/uploads/on-call-drop/
opt/CSCOuls/files/uploads/on-periodic/
opt/CSCOuls/files/uploads/on-demand/
opt/CSCOuls/files/conf/
opt/CSCOuls/files/conf/index.html

opt/CSCOuls/files/archives/
opt/CSCOuls/files/archives/lost-ipsec/
opt/CSCOuls/files/archives/lost-gw-connection/
opt/CSCOuls/files/archives/stat/
opt/CSCOuls/files/archives/unexpected-restart/
opt/CSCOuls/files/archives/unknown/
opt/CSCOuls/files/archives/nwl-scan-complete/
opt/CSCOuls/files/archives/on-call-drop/
opt/CSCOuls/files/archives/on-periodic/
opt/CSCOuls/files/archives/on-demand/

[root@rtpfga-ova-upload06 /]# popd

~

[root@rtpfga-ova-upload06 ~]#

Step 4

Start Application Services:

Enter:

cd /root
service god start
sleep 30
service god status


Output:

[root@rtpfga-ova-upload06 ~]# cd /root

[root@rtpfga-ova-upload06 ~]# service god start

[root@rtpfga-ova-upload06 ~]# sleep 30

[root@rtpfga-ova-upload06 ~]# service god status

UploadServer: up

[root@rtpfga-ova-upload06 ~]#
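If you prefer to confirm the result rather than rely on a fixed sleep, the status can be polled until the UploadServer watch reports "up". This is a minimal illustrative sketch; the retry budget of 12 attempts at 10-second intervals is an assumption, not a documented value:

#!/bin/bash
# Illustrative wait loop: poll "service god status" until the
# UploadServer watch reports "up", or give up after 120 seconds.
for attempt in $(seq 1 12); do
    if service god status 2>/dev/null | grep -q "UploadServer: up"; then
        echo "UploadServer is up."
        exit 0
    fi
    sleep 10
done
echo "UploadServer did not come up within 120 seconds" >&2
exit 1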

End-to-End Testing

To perform end-to-end testing of the Small Cell device, see End-to-End Testing, on page 148.


APPENDIX E

Glossary

3G: Refers to the 3G or 4G cellular radio connection.
ACE: Application Control Engine.
ACS: Auto Configuration Server. Also refers to the BAC server.
ASR5K: Cisco Aggregation Service Router 5000 series.
BAC: Broadband Access Center. Serves as the Auto Configuration Server (ACS) in the Small Cell solution.
CPE: Customer Premises Equipment.
CR: Connection Request. Used by the ACS to establish a TR-069 session.
DMZ: Demilitarized Zone.
DNB: Detected Neighbor Benchmark.
DNM: Detected Neighbor MCC/MNC.
DNS: Domain Name System.
DPE: Distributed Provisioning Engine.
DVS: Distributed Virtual Switch.
ESXi: Elastic Sky X Integrated.
FQDN: Fully Qualified Domain Name.
HNB-GW: Home Node B (base station) Gateway, also known as the Femto Gateway.
INSEE: Institute for Statistics and Economic Studies.
LAC: Location Area Code.
LDAP: Lightweight Directory Access Protocol.
LUS: Log Upload Server.
LV: Location Verification.
NB: North Bound.
NTP: Network Time Protocol.
OSS: Operations Support Systems.
OVA: Open Virtual Application.
PAR: Cisco Prime Access Registrar.
PMG: Provisioning and Management Gateway.
PMGDB: Provisioning and Management Gateway Database.
PNR: Cisco Prime Network Registrar.
RDU: Regional Distribution Unit.
RMS: Cisco RAN Management System.
RNC: Radio Network Controller.
SAC: Service Area Code.
SIB: System Information Block.
SNMP: Simple Network Management Protocol.
TACACS: Terminal Access Controller Access-Control System.
TLS: Transport Layer Security.
TR-069: Technical Report 069, a Broadband Forum (standards organization formerly known as the DSL Forum) technical specification entitled CPE WAN Management Protocol (CWMP).
UBI: Ubiquisys.
UCS: Unified Computing System.
USC: Ubiquisys Small Cell.
VM: Virtual Machine.
XMPP: Extensible Messaging and Presence Protocol.

