RMS Installation Tasks

Perform these tasks to install the RMS software.
• RMS Installation Procedure, page 1
• Preparing the OVA Descriptor Files, page 2
• Deploying the RMS Virtual Appliance, page 5
• RMS Redundant Deployment, page 9
• Optimizing the Virtual Machines, page 21
• RMS Installation Sanity Check, page 29
RMS Installation Procedure
The RMS installation procedure is summarized here with links to the specific tasks.
1. Perform all prerequisite installations (see Installation Prerequisites and Installing VMware ESXi and vCenter for Cisco RMS). Mandatory.
2. Create the Open Virtual Application (OVA) descriptor file (Preparing the OVA Descriptor Files, on page 2). Mandatory.
3. Deploy the OVA package (Deploying the RMS Virtual Appliance, on page 5). Mandatory.
4. Configure redundant Serving nodes (RMS Redundant Deployment, on page 9). Optional.
5. Run the configure_hnbgw.sh script to configure the HNB gateway properties (HNB Gateway and DHCP Configuration). Mandatory if the HNB gateway properties were not included in the OVA descriptor file.
6. Optimize the VMs by upgrading the VM hardware version, the VM CPU and memory, and the Upload VM data size (Optimizing the Virtual Machines, on page 21). Optional.
7. Perform a sanity check of the system (RMS Installation Sanity Check, on page 29). Optional but recommended.
8. Install RMS certificates (Installing RMS Certificates). Mandatory.
9. Configure the default route on the Upload and Serving nodes for TLS termination (Configuring Default Routes for Direct TLS Termination at the RMS). Optional.
10. Install and configure the PMG database (PMG Database Installation and Configuration). Optional; contact Cisco services to deploy the PMG DB.
11. Configure the Central node (Configuring the Central Node). Mandatory.
12. Populate the PMG database (Configuring the Central Node). Mandatory.
13. Verify the installation (Installation Verification). Optional but recommended.

RAN Management System Installation Guide, Release 4.0
February 9, 2015
Preparing the OVA Descriptor Files
The RMS requires Open Virtual Application (OVA) descriptor files, more commonly known as configuration
files, that specify the configuration of various system parameters.
The easiest way to create these configuration files is to copy the example OVA descriptor files that are bundled with the RMS build deliverable. Both RMS-Distributed-Solution-4.0.0-2M.tar.gz and RMS-Provisioning-Solution-4.0.0-2M.tar.gz contain sample descriptors for the distributed and all-in-one packages. We recommend starting from these sample descriptor files and editing them to suit your deployment.
Copy the files and rename them as ".ovftool" before deploying. You need one configuration file for the all-in-one deployment and three separate files for the distributed deployment.
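As a sketch of the copy-and-rename step, the commands below stage a descriptor as ".ovftool" in a working directory. The directory paths and the stand-in descriptor contents are assumptions for illustration; in practice, copy the sample descriptor from your RMS build deliverable.

```shell
# Working directory from which the VMware ovftool command will be run
# (the path is an assumption for this sketch).
mkdir -p /tmp/rms-deploy

# Stand-in for a sample descriptor copied out of the build tarball;
# in practice, copy the bundled example descriptor here instead.
printf 'datastore=datastore1\nprop:Ntp1_Address=10.0.0.10\n' \
  > /tmp/rms-deploy/sample-descriptor.txt

# The deployment reads the configuration from a file literally named
# ".ovftool" in the working directory, so rename the copy accordingly.
cp /tmp/rms-deploy/sample-descriptor.txt /tmp/rms-deploy/.ovftool
ls -A /tmp/rms-deploy
```

Edit the mandatory properties (IP addresses, VLANs, and so on) in the ".ovftool" file to match your environment before deploying.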
When you are done creating the configuration files, copy them to the server where vCenter is hosted and the ovftool utility is installed. Alternatively, they can be copied to any other server where the ovftool utility from VMware is installed. In short, the configuration files must be copied as ".ovftool" to the directory where you can run the VMware ovftool command.
The following are mandatory properties that must be provided in the OVA descriptor file. These are the bare
minimum properties required for successful RMS installation and operation. If any of these properties are
missing or incorrectly formatted, an error is displayed. All other properties are optional and configured
automatically with default values.
Note: Make sure that all Network 1 (eth0) interfaces (on the Central, Serving, and Upload nodes) are in the same VLAN.
Table 1: Mandatory Properties for OVA Descriptor File

• datastore: Name of the physical storage to keep the VM files. Valid values: text.
• net:Upload-Node Network 1: VLAN for the connection between the upload node (northbound) and the central node. Valid values: VLAN #.
• net:Upload-Node Network 2: VLAN for the connection between the upload node (southbound) and the CPE network (FAPs). Valid values: VLAN #.
• net:Central-Node Network 1: VLAN for the connection between the central node (southbound) and the upload node. Valid values: VLAN #.
• net:Central-Node Network 2: VLAN for the connection between the central node (northbound) and the OSS network. Valid values: VLAN #.
• net:Serving-Node Network 1: VLAN for the connection between the serving node (northbound) and the central node. Valid values: VLAN #.
• net:Serving-Node Network 2: VLAN for the connection between the serving node (southbound) and the CPE network (FAPs). Valid values: VLAN #.
• prop:Central_Node_Eth0_Address: IP address of the southbound VM interface. Valid values: IPv4 address.
• prop:Central_Node_Eth0_Subnet: Network mask for the IP subnet of the southbound VM interface. Valid values: network mask.
• prop:Central_Node_Eth1_Address: IP address of the northbound VM interface. Valid values: IPv4 address.
• prop:Central_Node_Eth1_Subnet: Network mask for the IP subnet of the northbound VM interface. Valid values: network mask.
• prop:Central_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Central_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Central_Node_Gateway: IP address of the gateway to the management network for the northbound interface of the central node. Valid values: IPv4 address.
• prop:Serving_Node_Eth0_Address: IP address of the northbound VM interface. Valid values: IPv4 address.
• prop:Serving_Node_Eth0_Subnet: Network mask for the IP subnet of the northbound VM interface. Valid values: network mask.
• prop:Serving_Node_Eth1_Address: IP address of the southbound VM interface. Valid values: IPv4 address.
• prop:Serving_Node_Eth1_Subnet: Network mask for the IP subnet of the southbound VM interface. Valid values: network mask.
• prop:Serving_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Serving_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Serving_Node_Gateway: IP address of the gateway to the management network. Valid values: comma-separated IPv4 addresses of the form [northbound GW],[southbound GW].
• prop:Upload_Node_Eth0_Address: IP address of the northbound VM interface. Valid values: IPv4 address.
• prop:Upload_Node_Eth0_Subnet: Network mask for the IP subnet of the northbound VM interface. Valid values: network mask.
• prop:Upload_Node_Eth1_Address: IP address of the southbound VM interface. Valid values: IPv4 address.
• prop:Upload_Node_Eth1_Subnet: Network mask for the IP subnet of the southbound VM interface. Valid values: network mask.
• prop:Upload_Node_Dns1_Address: IP address of the primary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Upload_Node_Dns2_Address: IP address of the secondary DNS server provided by the network administrator. Valid values: IPv4 address.
• prop:Upload_Node_Gateway: IP address of the gateway to the management network. Valid values: comma-separated IPv4 addresses of the form [northbound GW],[southbound GW].
• prop:Ntp1_Address: Primary NTP server. Valid values: IPv4 address or URL.
• prop:Acs_Virtual_Address: ACS virtual address; the southbound IP address of the Serving node. Valid values: IPv4 address.
• prop:Acs_Virtual_Fqdn: ACS virtual fully qualified domain name (FQDN); the southbound FQDN or IP address of the serving node. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT. Valid values: IPv4 address or FQDN.
• prop:Upload_SB_Fqdn: Southbound FQDN or IP address of the upload node. Specify the Upload eth1 address if no FQDN exists. For a NAT-based deployment, this can be set to the public IP/FQDN of the NAT. Valid values: IPv4 address or FQDN.
Refer to OVA Descriptor File Properties for a complete description of all required and optional properties for
the OVA descriptor files.
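For orientation, a minimal descriptor fragment covering a few of the mandatory properties might look like the following. All values are placeholders, and the exact key spellings should be taken from the sample descriptors bundled with the build; this is not a complete, deployable file.

```
# Placeholder values for illustration only.
datastore=datastore1
net:Central-Node Network 1=VLAN11
net:Central-Node Network 2=VLAN12
prop:Central_Node_Eth0_Address=10.5.1.10
prop:Central_Node_Eth0_Subnet=255.255.255.0
prop:Central_Node_Gateway=10.5.2.1
prop:Ntp1_Address=10.5.2.20
prop:Acs_Virtual_Address=10.5.2.35
prop:Acs_Virtual_Fqdn=femto-acs.example.com
prop:Upload_SB_Fqdn=femto-upload.example.com
```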
Validation of OVA Files
If mandatory properties are missing from a descriptor file, the OVA installer displays an error on the installation
console. If mandatory properties are incorrectly configured, an appropriate error is displayed on the installation
console or in the ova-first-boot.log.
An example validation failure message in the ova-first-boot.log is shown here:
"Alert!!! Invalid input for Acs_Virtual_Fqdn...Aborting installation..."
Log in to the relevant VM using root credentials (default password is Ch@ngeme1) to access the first-boot
logs in the case of installation failures.
Incorrectly configured properties include invalid IP addresses, invalid FQDN formats, and so on. Validations are restricted to format and data-type checks. Incorrect but well-formed IP addresses or FQDNs (for example, unreachable IPs) are outside the scope of validation.
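When an installation aborts, the validation message can be located by searching the first-boot log. The log path and the sample log line below are assumptions used to keep the sketch self-contained; on a real VM, search the ova-first-boot.log mentioned above.

```shell
# Stand-in first-boot log so this sketch runs anywhere; on the VM,
# point LOG at the actual ova-first-boot.log location instead.
LOG=/tmp/ova-first-boot.log
echo 'Alert!!! Invalid input for Acs_Virtual_Fqdn...Aborting installation...' > "$LOG"

# Validation failures are flagged with "Alert"; list any that occurred.
grep -i 'alert' "$LOG"
```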
Deploying the RMS Virtual Appliance
All administrative functions are available through the vSphere client. A subset of those functions is available through the vSphere web client. The vSphere client users are virtual infrastructure administrators for specialized functions. The vSphere web client users are virtual infrastructure administrators, help desk staff, network operations center operators, and virtual machine owners.
Note
All illustrations in this document are from the VMware vSphere client.
Before You Begin
You must be running VMware vSphere version 5.1. There are two ways to access VMware vCenter:
• VMware vSphere Client locally installed application
• VMware vSphere Web Client
Procedure
Step 1
Copy the OVA descriptor configuration files as ".ovftool" to the directory where you can run the VMware
ovftool command.
Note: If you are running from a Linux server, the .ovftool file should not be in the root directory because it takes precedence over other ".ovftool" files. While deploying the OVA package, the home directory takes precedence over the current directory.
Step 2
Change the mode of the OVA deployer file to executable: chmod +x ./OVAdeployer.sh
Step 3
Run the OVA deployer:
./OVAdeployer.sh ova-filepath/ova-file vi://vcenter-user:password@vcenter-host/datacenter-name/host/host-folder-if-any/ucs-host
Example:
./OVAdeployer.sh /tmp/RMS-Provisioning-Solution-4.0.0-1E.ova
vi://myusername:mypass#1@blr-rms-vcenter1.cisco.com/BLR/host/UCS5K/blrrms-5108-09.cisco.com
Note: The OVAdeployer.sh tool is new in RMS Release 4.0. It first validates the OVA descriptor file and then continues to install the RMS. If necessary, get the OVAdeployer.sh tool from the build package and copy it to the directory where the OVA descriptor file is stored.
If the vcenter-user and/or password are not specified in the command, you are prompted to enter this information on the command line. Enter the user name and password to continue.
All-in-One RMS Deployment: Example
In an all-in-one RMS deployment, all of the nodes (Central, Serving, and Upload) are deployed on a single host through the vSphere client.
chmod +x ./OVAdeployer.sh
./OVAdeployer.sh /data/ovf/OVA_Files_QA/RMS-Provisioning-Solution-4.0.0-2N/RMS-Provisioning-Solution-4.0.0-2N.ova vi://admin:admin123@blr-rms-vcenter1.cisco.com/BLR/host/RMS/blrrms-c240-05.cisco.com
Reading OVA descriptor from path: ./.ovftool
Checking deployment type
Starting input validation
prop:Admin1_Password not provided, will be taking the default value for RMS.
prop:RMS_App_Password not provided, will be taking the default value for RMS.
prop:Root_Password not provided, will be taking the default value for RMS.
Checking network configurations in descriptor...
Deploying OVA...
Opening OVA source:
/data/ovf/OVA_Files_QA/RMS-Provisioning-Solution-4.0.0-2N/RMS-Provisioning-Solution-4.0.0-2N.ova
The manifest does not validate
Opening VI target:
vi://admin@blr-rms-vcenter1.cisco.com:443/BLR/host/RMS/blrrms-c240-05.cisco.com
Deploying to VI:
vi://admin@blr-rms-vcenter1.cisco.com:443/BLR/host/RMS/blrrms-c240-05.cisco.com
Transfer Completed
Powering on vApp: BLR-RMS-AIO-17
Completed successfully
Tue 08 Jul 2014 11:41:34 AM IST
OVA deployment took 538 seconds.
-bash-4.1$
The RMS all-in-one deployment in the vCenter appears similar to this illustration:
Figure 1: RMS All-In-One Deployment
Distributed RMS Deployment: Example
In the distributed deployment, the RMS nodes (Central node, Serving node, and Upload node) are deployed on different hosts through the vSphere client. The RMS nodes must be deployed and powered on in the following sequence:
1. Central Node
2. Serving Node
3. Upload Node
The .ovftool files for the distributed deployment differ slightly from those of the all-in-one deployment in terms of virtual host network values, as described in Preparing the OVA Descriptor Files, on page 2. Here is an example of the distributed RMS deployment:
Central Node Deployment
chmod +x ./OVAdeployer.sh
./OVAdeployer.sh RMS-Central-Node-4.0.0-2I.ova
vi://ova:ova123@blr-rms-vcenter1.cisco.com/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Reading OVA descriptor from path: ./.ovftool
Checking deployment type
Starting input validation
Deploying OVA...
Opening OVA source: RMS-Central-Node-4.0.0-2I.ova
The manifest validates
Opening VI target:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Deploying to VI:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Transfer Completed
Warning:
- No manifest entry found for: '.ovf'.
- File is missing from the manifest: '.ovf'.
Completed successfully
Wed 28 May 2014 04:09:24 PM IST
OVA deployment took 335 seconds.
Serving Node Deployment
chmod +x ./OVAdeployer.sh
./OVAdeployer.sh RMS-Serving-Node-4.0.0-2I.ova
vi://ova:ova123@blr-rms-vcenter1.cisco.com/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Reading OVA descriptor from path: ./.ovftool
Checking deployment type
Starting input validation
Deploying OVA...
Opening OVA source: RMS-Serving-Node-4.0.0-2I.ova
The manifest validates
Opening VI target:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Deploying to VI:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Transfer Completed
Warning:
- No manifest entry found for: '.ovf'.
- File is missing from the manifest: '.ovf'.
Completed successfully
Wed 28 May 2014 04:09:24 PM IST
OVA deployment took 335 seconds.
Upload Node Deployment
chmod +x ./OVAdeployer.sh
./OVAdeployer.sh RMS-Upload-Node-4.0.0-2I.ova
vi://ova:ova123@blr-rms-vcenter1.cisco.com/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Reading OVA descriptor from path: ./.ovftool
Checking deployment type
Starting input validation
Deploying OVA...
Opening OVA source: RMS-Upload-Node-4.0.0-2I.ova
The manifest validates
Opening VI target:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Deploying to VI:
vi://ova@blr-rms-vcenter1.cisco.com:443/BLR/host/UCS5108-CH1-DEV/blrrms-5108-04.cisco.com
Transfer Completed
Warning:
- No manifest entry found for: '.ovf'.
- File is missing from the manifest: '.ovf'.
Completed successfully
Wed 28 May 2014 04:09:24 PM IST
OVA deployment took 335 seconds.
The RMS distributed deployment in the vSphere appears similar to this illustration:
Figure 2: RMS Distributed Deployment
RMS Redundant Deployment
To protect against Serving node and Upload node failures, additional serving/upload nodes can be configured with the same central node.
This procedure describes how to configure additional serving/upload nodes with an existing central node.
Note
Redundant Deployment does not mandate having both Serving Node and Upload Node together. Each
redundant node can be deployed individually without having the other node in the setup.
Procedure
Step 1
Prepare the deployment descriptor (.ovftool file) for any additional serving nodes as described in Preparing
the OVA Descriptor Files, on page 2.
For serving node redundancy, the descriptor file must have the same provisioning group as the primary serving node. For an example of a redundant OVA descriptor file, refer to Example Descriptor File for Redundant Serving/Upload Node. The descriptor file properties that change for the redundant Serving node and redundant Upload node are as follows:
Redundant Serving Node:
• Name
• Serving_Node_Eth0_Address
• Serving_Node_Eth1_Address
• Serving_Hostname
• Acs_Virtual_Address (should be same as Serving_Node_Eth1_Address)
• Dpe_Cnrquery_Client_Socket_Address (should be same as Serving_Node_Eth0_Address)
Redundant Upload Node:
• name
• Upload_Node_Eth0_Address
• Upload_Node_Eth1_Address
• Upload_Hostname
• Acs_Virtual_Address (should be same as Serving_Node_Eth1_Address)
• Dpe_Cnrquery_Client_Socket_Address (should be same as Serving_Node_Eth0_Address)
A configuration file must be copied to the Central node as part of the redundancy configuration. For this configuration, create an OVF file with all of the above property changes for both the redundant Upload and Serving nodes, and name it appropriately.
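As a sketch, the redundant Serving node entries in the new OVF file might differ from the primary's descriptor only in lines like the following. All values are placeholders; note that Acs_Virtual_Address mirrors Serving_Node_Eth1_Address and Dpe_Cnrquery_Client_Socket_Address mirrors Serving_Node_Eth0_Address, as required above.

```
# Placeholder values for illustration only.
name=RMS-Serving-Node-2
prop:Serving_Hostname=blr-rms2-serving
prop:Serving_Node_Eth0_Address=10.5.1.20
prop:Serving_Node_Eth1_Address=10.5.2.20
prop:Acs_Virtual_Address=10.5.2.20
prop:Dpe_Cnrquery_Client_Socket_Address=10.5.1.20
```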
Step 2
Copy and upload the above OVF file, ovadescriptorfile_CN_Config.txt, and save it as a .txt file in the / directory on the central node.
Step 3
Back up /etc/hosts and /rms/app/rms/conf/UploadServer.xml using these commands:
cp /etc/hosts /etc/hosts_orig
cp /rms/app/rms/conf/UploadServer.xml /rms/app/rms/conf/UploadServer.xml_orig
Step 4
Execute the utility shell script (central-multi-nodes-config.sh) to configure the network and application properties on the central node.
The script is located in the / directory. Provide the copied descriptor file ovadescriptorfile_CN_Config.txt as input to the shell script.
Example:
./central-multi-nodes-config.sh <deploy-descr-filename>
After the script executes, a new FQDN/IP entry for the new ULS node is created in the UploadServer.xml file.
Step 5
Install the additional serving node and upload node as per the instructions in Deploying the RMS Virtual Appliance, on page 5. Create an individual OVF file per redundant Serving node or redundant Upload node; these OVF files are used as input for the respective redundant node deployments.
Step 6
Configure the serving node VMs to update the IP table firewall rules so that the DPE servers on these VMs can communicate with each other. Refer to Configuring Redundant Serving Nodes, on page 11.
Step 7
Configure the serving node redundancy as described in Setting Up Redundant Serving Nodes, on page 12.
Note: The redundant Upload node needs no further configuration.
Configuring Redundant Serving Nodes
After installing additional serving nodes, use this procedure to update the IP table firewall rules on the serving
nodes so that the DPEs on the serving nodes can communicate with each other.
Procedure
Step 1
Log in to the primary serving node using SSH.
Step 2
Change to root user: su -
Step 3
Update the IP table firewall rules on the primary serving node so that the serving nodes can communicate:
a) iptables -A INPUT -s serving-node-2-eth1-address/32 -d serving-node-1-eth1-address/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
b) iptables -A OUTPUT -s serving-node-1-eth1-address/32 -d serving-node-2-eth1-address/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
Port 49186 is used for inter-serving node communications.
Step 4
Save the configuration: service iptables save
Step 5
Log in to the secondary serving node using SSH.
Step 6
Change to root user: su -
Step 7
Update the IP table firewall rules on the secondary serving node:
a) iptables -A INPUT -s serving-node-1-eth1-address/32 -d serving-node-2-eth1-address/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
b) iptables -A OUTPUT -s serving-node-2-eth1-address/32 -d serving-node-1-eth1-address/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
Step 8
Save the configuration: service iptables save
Example:
This example assumes that the primary serving node eth1 address is 10.5.2.24 and the primary serving node
hostname is blr-rms1-serving; the secondary serving node eth1 address is 10.5.2.20 and the secondary serving
node hostname is blr-rms2-serving:
Primary Serving Node:
[root@blr-rms1-serving ~]# iptables -A INPUT -s 10.5.2.20/32 -d 10.5.2.24/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms1-serving ~]# iptables -A OUTPUT -s 10.5.2.24/32 -d 10.5.2.20/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms1-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Secondary Serving Node:
[root@blr-rms2-serving ~]# iptables -A INPUT -s 10.5.2.24/32 -d 10.5.2.20/32 -i eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms2-serving ~]# iptables -A OUTPUT -s 10.5.2.20/32 -d 10.5.2.24/32 -o eth1 -p udp --dport 49186 -m state --state NEW -j ACCEPT
[root@blr-rms2-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Setting Up Redundant Serving Nodes
This task opens ports 61610, 61611, 1234, and 647 in the IP tables on both serving nodes.
Procedure
Step 1
Log in to the primary serving node using SSH.
Step 2
Change to root user: su -
Step 3
For each port 61610, 61611, and 647, run this command:
iptables -A OUTPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -o eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT
Step 4
For port 1234, run this command:
iptables -A OUTPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -o eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT
Step 5
For each port 61610, 61611, and 647, run this command:
iptables -A INPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -i eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT
Step 6
For port 1234, run this command:
iptables -A INPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -i eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT
Step 7
Save the results: service iptables save
Step 8
Log in to the secondary serving node using SSH.
Step 9
Change to root user: su -
Step 10 For each port 61610, 61611, and 647, run this command:
iptables -A OUTPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -o eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT
Step 11 For port 1234, run this command:
iptables -A OUTPUT -s serving-node-2-eth0-address/32 -d serving-node-1-eth0-address/32 -o eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT
Step 12 For each port 61610, 61611, and 647, run this command:
iptables -A INPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -i eth0 -p udp -m udp --dport port-number -m state --state NEW -j ACCEPT
Step 13 For port 1234, run this command:
iptables -A INPUT -s serving-node-1-eth0-address/32 -d serving-node-2-eth0-address/32 -i eth0 -p tcp -m tcp --dport port-number -m state --state NEW -j ACCEPT
Step 14 Save the results: service iptables save
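Because Steps 3 through 6 and 10 through 13 repeat the same rule per port, the rules can be generated with a short loop. The sketch below is a dry run for the primary serving node: it writes the commands to a file for review instead of applying them, and the addresses are placeholders for this illustration.

```shell
SN1=10.5.1.24   # primary serving node eth0 (placeholder)
SN2=10.5.1.20   # secondary serving node eth0 (placeholder)
RULES=/tmp/serving-node-rules.sh

{
  # UDP ports (Steps 3 and 5): one OUTPUT and one INPUT rule each.
  for port in 61610 61611 647; do
    echo "iptables -A OUTPUT -s $SN1/32 -d $SN2/32 -o eth0 -p udp -m udp --dport $port -m state --state NEW -j ACCEPT"
    echo "iptables -A INPUT -s $SN2/32 -d $SN1/32 -i eth0 -p udp -m udp --dport $port -m state --state NEW -j ACCEPT"
  done
  # Port 1234 (Steps 4 and 6) carries TCP, so only the protocol differs.
  echo "iptables -A OUTPUT -s $SN1/32 -d $SN2/32 -o eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT"
  echo "iptables -A INPUT -s $SN2/32 -d $SN1/32 -i eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT"
  echo "service iptables save"
} > "$RULES"

cat "$RULES"   # review, then run as root on the primary serving node
```

Swapping SN1 and SN2 produces the corresponding rules for the secondary serving node (Steps 10 through 14).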
Example:
This example assumes that the primary serving node eth0 address is 10.5.1.24 and that the secondary serving
node eth0 address is 10.5.1.20:
Serving Node Eth0:
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A OUTPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -o eth0 -p udp -m udp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# iptables -A INPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -i eth0 -p udp -m udp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms11-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Serving Node Eth1:
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A OUTPUT -s 10.5.1.20/32 -d 10.5.1.24/32 -o eth0 -p udp -m udp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p udp -m udp --dport 61610 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p udp -m udp --dport 61611 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p tcp -m tcp --dport 1234 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# iptables -A INPUT -s 10.5.1.24/32 -d 10.5.1.20/32 -i eth0 -p udp -m udp --dport 647 -m state --state NEW -j ACCEPT
[root@blr-rms12-serving ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Configuring the PNR for Redundancy
Use this task to verify that all DPEs and the network registrar are ready in the BAC UI and that two DPEs
and two PNRs are in one provisioning group in the BAC UI.
Procedure
Step 1
Log in to the PNR on the primary PNR DHCP server via the serving node CLI:
/rms/app/nwreg2/local/usrbin/nrcmd -N cnradmin
Enter the password when prompted.
Step 2
Configure the backup DHCP server (the secondary Serving node's eth0 IP address):
cluster Backup-cluster create backup-DHCP-server-IP-address admin=admin-username password=admin-password product-version=version-number
Example:
nrcmd> cluster Backup-cluster create 10.5.1.20 admin=cnradmin
password=Ch@ngeme1 product-version=8.1.3 scp-port=1234
100 Ok
Backup-cluster:
admin = cnradmin
atul-port =
cluster-id = 2
fqdn =
http-port =
https-port =
ipaddr = 10.5.1.20
licensed-services =
local-servers =
name = Backup-cluster
password =
password-secret = 00:00:00:00:00:00:00:5a
poll-lease-hist-interval =
poll-lease-hist-offset =
poll-lease-hist-retry =
poll-replica-interval = [default=4h]
poll-replica-offset = [default=4h]
poll-subnet-util-interval =
poll-subnet-util-offset =
poll-subnet-util-retry =
product-version = 8.1.3
remote-id =
replication-initialized = [default=false]
restore-state = [default=active]
scp-port = 1234
scp-read-timeout = [default=20m]
shared-secret =
tenant-id = 0 tag: core
use-https-port = [default=false]
use-ssl = [default=optional]
Step 3
Configure the DHCP servers:
failover-pair femto-dhcp-failover create main-DHCP-server-IP-address backup-DHCP-server-IP-address main=localhost backup=Backup-cluster backup-pct=20 mclt=57600
Example:
nrcmd> failover-pair femto-dhcp-failover create 10.5.1.24 10.5.1.20
main=localhost backup=Backup-cluster backup-pct=20 mclt=57600
100 Ok
femto-dhcp-failover:
backup = Backup-cluster
backup-pct = 20%
backup-server = 10.5.1.20
dynamic-bootp-backup-pct =
failover = [default=true]
load-balancing = [default=disabled]
main = localhost
main-server = 10.5.1.24
mclt = 16h
name = femto-dhcp-failover
persist-lease-data-on-partner-ack = [default=true]
safe-period = [default=24h]
scopetemplate =
tenant-id = 0 tag: core
use-safe-period = [default=disabled]
Step 4
Save the configuration: save
Example:
nrcmd>save
100 Ok
Step 5
Reload the primary DHCP server: server dhcp reload
Example:
nrcmd> server dhcp reload
100 Ok
Step 6
Configure the primary to secondary synchronization:
a) cluster localhost set admin=admin-username password=admin-password
Example:
nrcmd> cluster localhost set admin=cnradmin password=Ch@ngeme1
100 Ok
b) failover-pair femto-dhcp-failover sync exact main-to-backup
Example:
nrcmd> failover-pair femto-dhcp-failover sync exact main-to-backup
101 Ok, with warnings
((ClassName RemoteRequestStatus)(error 2147577914)(exception-list
[((ClassName ConsistencyDetail)(error-code 2147577914)(error-object
((ClassName DHCPTCPListener)(ObjectID OID-00:00:00:00:00:00:00:42)
(SequenceNo 30)(name femto-leasequery-listener)(address 0.0.0.0)(port 61610)))
(classid 1155)(error-attr-list [((ClassName AttrErrorDetail)(attr-id-list [03 ])
(error-code 2147577914)(error-string DHCPTCPListener 'femto-leasequery-listener'
address will be unset. The default value will apply.))]))]))
Note: The above error is due to the change in the secondary PNR dhcp-listener-address. Change the dhcp-listener-address in the secondary PNR as described in the next steps.
Step 7
Log in to the secondary PNR: /rms/app/nwreg2/local/usrbin/nrcmd -N cnradmin
Enter the password when prompted.
Step 8
Configure the femto lease query listener:
dhcp-listener femto-leasequery-listener set address=serving-node-eth0-IP-address
This address must be the secondary PNR IP address, which is the serving node eth0 IP address.
Example:
nrcmd> dhcp-listener femto-leasequery-listener set address=10.5.1.20
100 Ok
nrcmd> dhcp-listener list
100 Ok
femto-leasequery-listener:
address = 10.5.1.20
backlog = [default=5]
enable = [default=true]
ip6address =
leasequery-backlog-time = [default=120]
leasequery-idle-timeout = [default=60]
leasequery-max-pending-notifications = [default=120000]
leasequery-packet-rate-when-busy = [default=500]
leasequery-send-all = [default=false]
max-connections = [default=10]
name = femto-leasequery-listener
port = 61610
receive-timeout = [default=30]
send-timeout = [default=120]
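As a quick cross-check of the step above, the address printed by dhcp-listener list should match the Serving node eth0 address. The following is a minimal shell sketch using the example values from this guide; on a real node you would capture the line from nrcmd output and read the interface address from the node itself.

```shell
#!/bin/sh
# Cross-check sketch: the listener address from 'dhcp-listener list'
# should equal the Serving node eth0 address (example values assumed).
eth0_addr="10.5.1.20"                 # Serving node eth0 (example value)
listener_line="address = 10.5.1.20"   # line captured from nrcmd output
listener_addr=$(echo "$listener_line" | awk '{print $3}')
if [ "$listener_addr" = "$eth0_addr" ]; then
  echo "listener bound to eth0 address: OK"
else
  echo "listener address mismatch: $listener_addr" >&2
  exit 1
fi
```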
Step 9
Reload the secondary DHCP server: server dhcp reload
Example:
nrcmd> server dhcp reload
100 Ok
Step 10 Verify communication: dhcp getRelatedServers
Example:
nrcmd> dhcp getRelatedServers
100 Ok
Type   Name                         Address          Requests  Communications  State      Partner Role  Partner State
MAIN   -                            10.5.1.24        0         OK              NORMAL     MAIN          NORMAL
TCP-L  blrrms-Serving-02.cisco.com  10.5.1.20,61610  0         NONE            listening  ---
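A healthy report can also be checked mechanically: the MAIN partner row should show the NORMAL state on both sides. A hedged sketch that scans a saved copy of the report (the report text below is canned from the example above, not live output):

```shell
#!/bin/sh
# Sketch: verify the failover MAIN partner is in NORMAL state in a saved
# 'dhcp getRelatedServers' report (canned example text).
report='MAIN   -   10.5.1.24   0   OK   NORMAL   MAIN   NORMAL
TCP-L  blrrms-Serving-02.cisco.com  10.5.1.20,61610  0  NONE  listening  ---'
main_row=$(echo "$report" | grep '^MAIN')
echo "$main_row" | grep -q 'NORMAL.*NORMAL' \
  && echo "failover pair NORMAL: OK" \
  || { echo "failover pair not NORMAL" >&2; exit 1; }
```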
Configuring the Security Gateway on the ASR 5000 for Redundancy
Procedure
Step 1
Log in to the Cisco ASR 5000 that contains the HNB and security gateways.
Step 2
Check the context name for the security gateway: show context all.
Step 3
Display the security gateway configuration: show configuration context security_gateway_context_name.
Verify that there are two DHCP server addresses configured. See the dhcp-service section in the example.
Example:
[local]blrrms-xt2-03# show configuration context HNBGW config
context HNBGW
ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
ipsec transform-set ipsec-vmct
#exit
ikev2-ikesa transform-set ikesa-vmct
#exit
crypto template vmct-asr5k ikev2-dynamic
authentication local certificate
authentication remote certificate
ikev2-ikesa transform-set list ikesa-vmct
keepalive interval 120
payload vmct-sa0 match childsa match ipv4
ip-address-alloc dynamic
ipsec transform-set list ipsec-vmct
tsr start-address 10.5.1.0 end-address 10.5.1.255
#exit
nai idr 10.5.1.91 id-type ip-addr
ikev2-ikesa keepalive-user-activity
certificate 10-5-1-91
ca-certificate list ca-cert-name TEF_CPE_SubCA ca-cert-name Ubi_Cisco_Int_ca
#exit
interface Iu-Ps-Cs-H
ip address 10.5.1.91 255.255.255.0
ip address 10.5.1.92 255.255.255.0 secondary
ip address 10.5.1.93 255.255.255.0 secondary
#exit
subscriber default
dhcp service CNR context HNBGW
ip context-name HNBGW
ip address pool name ipsec
exit
radius change-authorize-nas-ip 10.5.1.92 encrypted key
+A1rxtnjd9vom7g1ugk4buohqxtt073pbivjonsvn3olnz2wsl0sm5
event-timestamp-window 0 no-reverse-path-forward-check
aaa group default
radius max-retries 2
radius max-transmissions 5
radius timeout 1
radius attribute nas-ip-address address 10.5.1.92
radius server 10.5.1.20 encrypted key
+A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj
port 1812 priority 2
radius server 1.4.2.90 encrypted key
+A1z4194hjj9zvm24t0vdmob18b329iod1jj76kjh1pzsy3w46m9h4
port 1812 priority 1
#exit
gtpp group default
#exit
gtpu-service GTPU_FAP_1
bind ipv4-address 10.5.1.93
exit
dhcp-service CNR
dhcp client-identifier ike-id
dhcp server 10.5.1.20
dhcp server 10.5.1.24
no dhcp chaddr-validate
dhcp server selection-algorithm use-all
dhcp server port 61610
bind address 10.5.1.92
#exit
dhcp-server-profile CNR
#exit
hnbgw-service HNBGW_1
sctp bind address 10.5.1.93
sctp bind port 29169
associate gtpu-service GTPU_FAP_1
sctp sack-frequency 5
sctp sack-period 5
no sctp connection-timeout
no ue registration-timeout
hnb-identity oui discard-leading-char
hnb-access-mode mismatch-action accept-aaa-value
radio-network-plmn mcc 116 mnc 116
rnc-id 116
security-gateway bind address 10.5.1.91 crypto-template vmct-asr5k context HNBGW
#exit
ip route 0.0.0.0 0.0.0.0 10.5.1.1 Iu-Ps-Cs-H
ip route 10.5.3.128 255.255.255.128 10.5.1.1 Iu-Ps-Cs-H
ip igmp profile default
#exit
#exit
end
Step 4
If the second DHCP server is not configured, run these commands to configure it:
a) configure
b) context HNBGW
c) dhcp-service CNR
d) dhcp server <dhcp-server-2-IP-Addr >
e) dhcp server selection-algorithm use-all
Verify that the second DHCP server is configured by examining the output from this step.
Note
Exit from configuration mode and view the DHCP server IP addresses to confirm the change.
Example:
[local]blrrms-xt2-03# configure
[local]blrrms-xt2-03(config)# context HNBGW
[HNBGW]blrrms-xt2-03(config-ctx)# dhcp-service CNR
[HNBGW]blrrms-xt2-03(config-dhcp-service)# dhcp server 1.1.1.1
[HNBGW]blrrms-xt2-03(config-dhcp-service)# dhcp server selection-algorithm use-all
Step 5
To view the changes, execute the following command:
[local]blrrms-xt2-03# show configuration context HNBGW config
Step 6
Save the changes by executing the following command:
[local]blrrms-xt2-03# save config /flash/xt2-03-aug12
Configuring the HNB Gateway for Redundancy
Procedure
Step 1
Log in to the HNB gateway.
Step 2
Display the configuration context of the HNB gateway so that you can verify the RADIUS information:
show configuration context HNBGW_context_name
If the RADIUS parameters are not configured as shown in this example, configure them as described in this procedure.
Example:
[local]blrrms-xt2-03# show configuration context HNBGW config
context HNBGW
ip pool ipsec range 7.0.1.48 7.0.1.63 public 0 policy allow-static-allocation
ipsec transform-set ipsec-vmct
#exit
ikev2-ikesa transform-set ikesa-vmct
#exit
crypto template vmct-asr5k ikev2-dynamic
authentication local certificate
authentication remote certificate
ikev2-ikesa transform-set list ikesa-vmct
keepalive interval 120
payload vmct-sa0 match childsa match ipv4
ip-address-alloc dynamic
ipsec transform-set list ipsec-vmct
tsr start-address 10.5.1.0 end-address 10.5.1.255
#exit
nai idr 10.5.1.91 id-type ip-addr
ikev2-ikesa keepalive-user-activity
certificate 10-5-1-91
ca-certificate list ca-cert-name TEF_CPE_SubCA ca-cert-name Ubi_Cisco_Int_ca
#exit
interface Iu-Ps-Cs-H
ip address 10.5.1.91 255.255.255.0
ip address 10.5.1.92 255.255.255.0 secondary
ip address 10.5.1.93 255.255.255.0 secondary
#exit
subscriber default
dhcp service CNR context HNBGW
ip context-name HNBGW
ip address pool name ipsec
exit
radius change-authorize-nas-ip 10.5.1.92 encrypted key
+A1rxtnjd9vom7g1ugk4buohqxtt073pbivjonsvn3olnz2wsl0sm5
event-timestamp-window 0 no-reverse-path-forward-check
aaa group default
radius max-retries 2
radius max-transmissions 5
radius timeout 1
radius attribute nas-ip-address address 10.5.1.92
radius server 10.5.1.20 encrypted key
+A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj
port 1812 priority 2
radius server 1.4.2.90 encrypted key
+A1z4194hjj9zvm24t0vdmob18b329iod1jj76kjh1pzsy3w46m9h4
port 1812 priority 1
#exit
gtpp group default
#exit
gtpu-service GTPU_FAP_1
bind ipv4-address 10.5.1.93
exit
dhcp-service CNR
dhcp client-identifier ike-id
dhcp server 10.5.1.20
dhcp server 10.5.1.24
no dhcp chaddr-validate
dhcp server selection-algorithm use-all
dhcp server port 61610
bind address 10.5.1.92
#exit
dhcp-server-profile CNR
#exit
hnbgw-service HNBGW_1
sctp bind address 10.5.1.93
sctp bind port 29169
associate gtpu-service GTPU_FAP_1
sctp sack-frequency 5
sctp sack-period 5
no sctp connection-timeout
no ue registration-timeout
hnb-identity oui discard-leading-char
hnb-access-mode mismatch-action accept-aaa-value
radio-network-plmn mcc 116 mnc 116
rnc-id 116
security-gateway bind address 10.5.1.91 crypto-template vmct-asr5k context HNBGW
#exit
ip route 0.0.0.0 0.0.0.0 10.5.1.1 Iu-Ps-Cs-H
ip route 10.5.3.128 255.255.255.128 10.5.1.1 Iu-Ps-Cs-H
ip igmp profile default
#exit
#exit
end
Step 3
If the radius server configuration is not as shown in the above example, perform the following configuration:
a) configure
b) context HNBGW_context_name
c) radius server radius-server-ip-address key secret port 1812 priority 2
Example:
[local]blrrms-xt2-03# configure
[local]blrrms-xt2-03(config)# context HNBGW
[HNBGW]blrrms-xt2-03(config-ctx)# radius server 10.5.1.20 key secret port 1812 priority 2
radius server 10.5.1.20 encrypted key +A3qji4gwxyne5y3s09r8uzi5ot70fbyzzzzgbso92ladvtv7umjcj
port 1812 priority 2
Step 4
If the configuration of the RADIUS server is not correct, delete it: no radius server radius-server-ip-address
Example:
[HNBGW]blrrms-xt2-03(config-ctx)# no radius server 10.5.1.20
Step 5
Configure the RADIUS maximum retries and timeout settings:
a) configure
b) context hnbgw_context_name
c) radius max-retries 2
d) radius timeout 1
After configuring the radius settings, verify that they are correct as in the example.
Example:
[local]blrrms-xt2-03# configure
[local]blrrms-xt2-03(config)# context HNBGW
[HNBGW]blrrms-xt2-03(config-ctx)# radius max-retries 2
[HNBGW]blrrms-xt2-03(config-ctx)# radius timeout 1
radius max-retries 2
radius max-transmissions 5
radius timeout 1
What to Do Next
After the configuration is complete, the HNBGW sends the access request twice to the primary PAR, with a
one-second delay between the two requests.
Optimizing the Virtual Machines
To run the RMS software, you need to verify that the VMs that you are running are up-to-date and configured
optimally. Use these tasks to optimize your VMs.
Upgrading the VM Hardware Version
To have better performance parameter options available (for example, more virtual CPUs and memory), the
VMware hardware version must be upgraded to version 8 or later. You can upgrade the version using
the vSphere client.
Note
Prior to the VM hardware upgrade, make a note of the current hardware version from vSphere client.
Figure 3: VMware Hardware Version
Procedure
Step 1
Start the vSphere client.
Step 2
Right-click the vApp for one of the RMS nodes and select Power Off.
Figure 4: Power Off the vApp
Step 3
Right-click the virtual machine for the RMS node (central, serving, upload) and select Upgrade Virtual
Hardware.
The software upgrades the virtual machine hardware to the latest supported version.
Note
The Upgrade Virtual Hardware option appears only if the virtual hardware on the virtual machine
is not the latest supported version.
Step 4
Click Yes in the Confirm Virtual Machine Upgrade screen to continue with the virtual hardware upgrade.
Step 5
Verify that the upgraded version is displayed in the Summary screen of the vSphere client.
Step 6
Repeat this procedure for all remaining VMs (central, serving, and upload) so that all three VMs are
upgraded to the latest hardware version.
Step 7
Right-click the respective vApp of the RMS nodes and select Power On.
Step 8
Make sure that all VMs are completely up with their new installation configurations.
Upgrading the VM CPU and Memory Settings
Before You Begin
Upgrade the VM hardware version as described in Upgrading the VM Hardware Version, on page 21.
Note
Upgrade the CPU/memory settings of the required RMS VMs using the following procedure to match the
configurations defined in the section Optimum CPU and Memory Configurations.
Procedure
Step 1
Start the vSphere client.
Step 2
Right-click the vApp for one of the RMS nodes and select Power Off.
Step 3
Right-click the virtual machine for an RMS node (central, serving, upload) and select Edit Settings.
Step 4
Select the Hardware tab. Click Memory in the hardware device list on the left side of the screen and update
the Memory Size.
Step 5
Click CPUs in the hardware device list on the left side of the screen and update the Number of virtual sockets.
Step 6
Click OK.
Step 7
Right-click the vApp and select Power On.
Step 8
Repeat this procedure for all remaining VMs (central, serving, and upload).
Upgrading the Upload VM Data Sizing
Note
Refer to Virtualization Requirements for more information on data sizing.
Procedure
Step 1
Log in to the vSphere client and connect to a specific vCenter server.
Step 2
Click the Upload VM and click the Summary tab to view the available free disk space. Make sure that there
is sufficient disk space available to make a change to the configuration.
Figure 5: Upload Node Summary Tab
Step 3
Right-click the RMS upload virtual machine and select Power followed by Shut Down Guest.
Step 4
Right-click the RMS upload virtual machine again and select Edit Settings.
Step 5
Click the Hardware tab. Click Hard disk 1 in the hardware device list on the left side of the screen and change
the Provisioned Size value to a minimum of 300 GB to retain one day of logs uploaded by 10,000 devices.
Step 6
Click OK.
Step 7
Right-click the VM and select Power followed by Power On.
Step 8
Log in to the Upload node.
a) Log in to the Central node VM using the central node eth1 address.
b) ssh to the Upload VM using the upload node hostname.
Example:
ssh admin1@blr-rms14-upload
Step 9
Check the effective disk space after expanding: fdisk -l
Example:
[admin1@blr-rms14-upload ~]$ su - root
Password:
[root@blr-rms14-upload ~]#
[root@blr-rms14-upload ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10
(wheel),501(wami) context=user_u:user_r:policykit_grant_t:s0
[root@blr-rms14-upload ~]# fdisk -l
Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000463d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          17      131072   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              17          33      131072   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3              33        6528    52165632   83  Linux
Step 10 Create the extended partition: fdisk /dev/sda
Example:
[root@blr-rms14-upload ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 4
First cylinder (6528-26108, default 6528): 6528
Last cylinder, +cylinders or +size{K,M,G} (6528-26108, default 26108):
Using default value 26108
Warning
Follow the on-screen prompts carefully as a small mistake can corrupt the entire system.
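The prompts above work in cylinder units. A quick arithmetic check (ours, not the guide's) confirms the new extended partition spans roughly 150 GiB, using the cylinder size of 16065 * 512 = 8225280 bytes reported by fdisk:

```shell
#!/bin/sh
# Arithmetic sketch: cylinders 6528-26108 at 8225280 bytes per cylinder.
cyl_bytes=$((16065 * 512))        # bytes per cylinder (from fdisk header)
cylinders=$((26108 - 6528 + 1))   # extent of the new partition
bytes=$((cylinders * cyl_bytes))
echo "partition spans $((bytes / 1073741824)) GiB"   # → partition spans 149 GiB
```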
Step 11 Create a logical partition on the extended partition.
Example:
Command (m for help): n
First cylinder (6528-26108, default 6528):
Using default value 6528
Last cylinder, +cylinders or +size{K,M,G} (6528-26108, default 26108):
Using default value 26108
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource
busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@blr-rms14-upload ~]#
Step 12 Reboot the system: reboot
After the reboot completes, log back into the server and switch to root user.
Step 13 Verify that the new partition was created: fdisk -l
Example:
[root@blr-rms14-upload ~]# fdisk -l
Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065*512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004b3d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          17      131072   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              17          33      131072   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3              33        6528    52165632   83  Linux
/dev/sda4            6528       26108   157283710    5  Extended
/dev/sda5            6528       26108   157283678+  83  Linux
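fdisk reports 1 KB blocks, so the /dev/sda5 figure above can be sanity-checked against the df -h output shown later. A simple conversion, not part of the procedure:

```shell
#!/bin/sh
# Conversion sketch: 157283678 one-KB blocks is ~149 GiB, consistent with
# the ~148G that df -h reports once the filesystem is created and mounted.
blocks=157283678
echo "/dev/sda5: $((blocks / 1024 / 1024)) GiB"   # → /dev/sda5: 149 GiB
```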
Step 14 Create an ext3 file system on the new partition: mkfs -t ext3 /dev/sda5
Example:
[root@blr-rms14-upload ~]# mkfs -t ext3 /dev/sda5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
9830400 inodes, 39320919 blocks
1966045 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1200 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@blr-rms14-upload ~]#
Step 15 Verify ownership of the mounting directory: ls -l /opt/CSCOuls/
Example:
[root@blr-rms14-upload ~]# ls -l /opt/CSCOuls/
total 28
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 bin
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 14:36 conf
drwxr-xr-x. 5 ciscorms ciscorms 4096 Mar  7 21:30 files
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 lib
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 15:00 logs
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 16:54 run
drwxr-xr-x. 3 root     root     4096 Mar  7 21:29 server-perf
[root@blr-rms14-upload ~]#
Step 16 Permanently mount the volume/partition by adding an entry for the new partition
(/dev/sda5 /opt/CSCOuls/files ext3 defaults 0 0) to the mount table and saving the file: vi /etc/fstab
Example:
#
# /etc/fstab
# Created by anaconda on Fri Mar  7 10:56:44 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=a415c4c0-9657-4548-b599-b338c2d815f6 /                  ext3    defaults        1 1
UUID=1bdc029d-ddc6-4130-bf78-2e8253bd85a4 /boot              ext3    defaults        1 1
UUID=086decf2-7e0a-445f-8775-b2377904f962 swap               swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/sda5               /opt/CSCOuls/files      ext3    defaults        0 0
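Before running mount -a, it is worth confirming that the new line parses into the six fields fstab expects. A small sketch over the literal entry added above:

```shell
#!/bin/sh
# Sketch: split the new fstab entry and confirm it has the six standard
# fields (device, mount point, fs type, options, dump, fsck pass).
entry='/dev/sda5 /opt/CSCOuls/files ext3 defaults 0 0'
set -- $entry
[ $# -eq 6 ] || { echo "malformed fstab entry" >&2; exit 1; }
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 pass=$6"
```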
Step 17 Mount the new volume and check it: mount -a
Example:
[root@blr-rms14-upload ~]# mount -a
[root@blr-rms14-upload ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        49G  1.4G   46G   3% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
/dev/sda1       124M   25M   94M  21% /boot
/dev/sda5       148G  188M  140G   1% /opt/CSCOuls/files
[root@blr-rms14-upload ~]#
Step 18 Check ownership of the files directory after the mount and change it to ciscorms: ls -l /opt/CSCOuls/
Example:
[root@blr-rms14-upload ~]# ls -l /opt/CSCOuls/
total 28
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 bin
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 14:36 conf
drwxr-xr-x. 3 root     root     4096 Mar 11 17:30 files
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 lib
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 15:00 logs
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 16:54 run
drwxr-xr-x. 3 root     root     4096 Mar  7 21:29 server-perf
[root@blr-rms14-upload ~]# chown -R ciscorms:ciscorms /opt/CSCOuls/files/
[root@blr-rms14-upload ~]# ls -l /opt/CSCOuls/
total 28
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 bin
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 14:36 conf
drwxr-xr-x. 3 ciscorms ciscorms 4096 Mar 11 17:30 files
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar  7 21:29 lib
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 15:00 logs
drwxr-xr-x. 2 ciscorms ciscorms 4096 Mar 11 16:54 run
drwxr-xr-x. 3 root     root     4096 Mar  7 21:29 server-perf
Step 19 Reboot the system: reboot
After the system restarts, log in to the server.
Upload Server Tuning for 15min Upload Interval
The following upload server properties can support:
• 10K APs for the PM/Stat file size of 500KB
• 35K APs for the PM/Stat file size of 150KB
• 58K APs for the PM/Stat file size of 90KB
The minimum disk space allocation required is 300GB, which retains raw files for 1 hour and archived files
for 24 hours for 10,000 devices.
The Upload Server periodically cleans up uploaded files when disk utilization exceeds the threshold value.
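The three capacity points above follow a simple pattern: the supported AP count times the PM/Stat file size is roughly constant per upload cycle. A quick arithmetic check (the per-cycle "budget" framing is ours, not the guide's):

```shell
#!/bin/sh
# Each line multiplies APs by per-AP file size; the products are all
# ~5.0-5.25 million KB, i.e. about 5 GB ingested per 15-minute cycle.
echo "10000 APs x 500 KB = $((10000 * 500)) KB"
echo "35000 APs x 150 KB = $((35000 * 150)) KB"
echo "58000 APs x  90 KB = $((58000 * 90)) KB"
```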
Procedure
Step 1
Log in to the Central node: ssh admin1@<rdu_ip_address>
Step 2
Log in to the Upload Server from the Central node: ssh admin1@<uls_ip_address>
Step 3
Change to root user: su -
Step 4
Change to the config directory: cd /opt/CSCOuls/conf
Step 5
Edit the file UploadServer.properties and update the properties listed below:
Note: The value maxgb=300 in the first property is the size, in GB, of the directory
/opt/CSCOuls/files. This value varies with the disk size.
UploadServer.disk.alloc.global.maxgb=300
UploadServer.server.filemanager.taskscheduler.dirmaint.initialdelay.minutes=1440
UploadServer.server.filemanager.taskscheduler.dirmaint.interval.minutes=1440
Note: The properties below need to be modified for each file_type, such as stat,
on-periodic, on-call-drop, and so on.
UploadServer.files.upload.<file_type>.archiverawfiles.interval.minutes=60
UploadServer.files.upload.<file_type>.archiving.compression.enabled=true
UploadServer.files.upload.<file_type>.archiving.enabled=true
UploadServer.files.upload.<file_type>.archive.delete.threshexceeded=true
UploadServer.files.upload.<file_type>.raw.delete.threshexceeded=false
UploadServer.files.upload.<file_type>.raw.delete.afterarchived=true
UploadServer.files.upload.<file_type>.pctoftotaldiskspacetofree=100
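Because the per-file_type block must be repeated for every file type, it can be generated rather than typed. A hedged sketch that writes the repeated properties for a few assumed file types to a scratch file; the real file is /opt/CSCOuls/conf/UploadServer.properties, so review the output before merging it in:

```shell
#!/bin/sh
# Sketch: emit the per-file_type properties for several file types into a
# scratch file. The file-type names are examples from the note above.
out=$(mktemp)
for ft in stat on-periodic on-call-drop; do
  cat >> "$out" <<EOF
UploadServer.files.upload.${ft}.archiverawfiles.interval.minutes=60
UploadServer.files.upload.${ft}.archiving.compression.enabled=true
UploadServer.files.upload.${ft}.archiving.enabled=true
UploadServer.files.upload.${ft}.archive.delete.threshexceeded=true
UploadServer.files.upload.${ft}.raw.delete.threshexceeded=false
UploadServer.files.upload.${ft}.raw.delete.afterarchived=true
UploadServer.files.upload.${ft}.pctoftotaldiskspacetofree=100
EOF
done
echo "$(grep -c '^UploadServer' "$out") properties written to $out"
```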
Step 6
Restart the ULS service: service god restart
RMS Installation Sanity Check
Sanity Check for the BAC UI
Following the installation, perform this procedure to ensure that all connections are established.
Note
The default user name is bacadmin. The password is as specified in the OVA descriptor file
(prop:RMS_App_Password). The default password is Ch@ngeme1.
Procedure
Step 1
Log in to the BAC UI using the URL https://<central-node-north-bound-IP>/adminui.
Step 2
Click on Servers.
Step 3
Click the tabs at the top of the display to verify that all components are populated:
• DPEs—Should display respective serving node name given in the descriptor file used for deployment.
Click on the serving node name. The display should indicate that this serving node is in the Ready state.
Figure 6: BAC: View Device Provisioning Engines Details
• NRs—Should display the NR (same as serving node name) given in the descriptor file used for
deployment. Click on the NR name. The display should indicate that this node is in the Ready state.
• Provisioning Groups—Should display the respective provisioning group name given in the descriptor
file used for deployment. Click on the provisioning group name. The display should indicate the ACS
URL pointing to the value of the property prop:Acs_Virtual_Fqdn that you specified in the descriptor
file.
• RDU—Should display the RDU in the Ready state.
If all of these screens display correctly as described, the BAC UI is communicating correctly.
Sanity Check for the DCC UI
Following the installation, perform this procedure to ensure that all connections are established.
Procedure
Step 1
Log in to the DCC UI using the URL https://<central-node-northbound-IP>/dcc_ui.
The default username is dccadmin. The password is as specified in the OVA descriptor file
(prop:RMS_App_Password). The default password is Ch@ngeme1.
Step 2
Click the Groups and IDs tab and verify that the Group Types table shows Area, Femto Gateway, RFProfile,
Enterprise and Site.
Verifying Application Processes
Verify the RMS virtual appliance deployment by logging onto each of the virtual servers for the Central,
Serving and Upload nodes. Note that these processes and network listeners are available for each of the servers:
Procedure
Step 1
Log in to the Central node as a root user.
Step 2
Run: service bprAgent status
In the output, note that these processes are running:
[rtpfga-s1-central1] ~ # service bprAgent status
BAC Process Watchdog is running
Process [snmpAgent] is running
Process [rdu] is running
Process [tomcat] is running
Step 3
Run: /rms/app/nwreg2/regional/usrbin/cnr_status
[rtpfga-ova-central06] ~ # /rms/app/nwreg2/regional/usrbin/cnr_status
Server Agent running (pid: 4564)
CCM Server running (pid: 4567)
WEB Server running (pid: 4568)
RIC Server Running (pid: 4569)
Step 4
Log in to the Serving node.
Step 5
Run: service bprAgent status
[rtpfga-s1-serving1] ~ # service bprAgent status
BAC Process Watchdog is running.
Process [snmpAgent] is running.
Process [dpe] is running.
Process [cli] is running.
Step 6
Run: /rms/app/nwreg2/local/usrbin/cnr_status
[rtpfga-s1-serving1] ~ # /rms/app/nwreg2/local/usrbin/cnr_status
DHCP server running (pid: 16805)
Server Agent running (pid: 16801)
CCM Server running (pid: 16804)
WEB Server running (pid: 16806)
CNRSNMP server running (pid: 16808)
RIC Server Running (pid: 16807)
TFTP Server is not running
DNS Server is not running
DNS Caching Server is not running
Step 7
Run: /rms/app/CSCOar/usrbin/arstatus
[root@rms-aio-serving ~]# /rms/app/CSCOar/usrbin/arstatus
Cisco Prime AR RADIUS server running (pid: 24272)
Cisco Prime AR Server Agent running (pid: 24232)
Cisco Prime AR MCD lock manager running (pid: 24236)
Cisco Prime AR MCD server running (pid: 24271)
Cisco Prime AR GUI running (pid: 24273)
[root@rms-aio-serving ~]#
Step 8
Log in to the Upload node.
Step 9
Run: service god status
[rtpfga-s1-upload1] ~ # service god status
UploadServer: up
Note
If the above status of UploadServer is not up (start or unmonitor state), see Upload Server is Not Up
for details.
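The checks in this procedure can also be scripted. A minimal sketch that scans captured status output for the expected process names; the status text below is canned from the Central node example above, and on a live node you would capture the output of service bprAgent status instead:

```shell
#!/bin/sh
# Sketch: fail if any expected BAC process is absent from the status
# output ('status' is canned from the Central node example).
status='BAC Process Watchdog is running
Process [snmpAgent] is running
Process [rdu] is running
Process [tomcat] is running'
for p in snmpAgent rdu tomcat; do
  echo "$status" | grep -q "\[$p\] is running" \
    || { echo "MISSING: $p" >&2; exit 1; }
done
echo "all expected Central node processes running"
```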