Configuring NTP - HPE Support Center

HP MSR Router Series
Network Management and Monitoring
Configuration Guide (V5)
Part number: 5998-6591
Software version: CMW520-R2511
Document version: 6PW105-20140813
Legal and notice information
© Copyright 2014 Hewlett-Packard Development Company, L.P.
No part of this documentation may be reproduced or transmitted in any form or by any means without
prior written consent of Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
HEWLETT-PACKARD COMPANY MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS
MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained
herein or for incidental or consequential damages in connection with the furnishing, performance, or use
of this material.
The only warranties for HP products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents
Legal and notice information ···· i
Configuring SNMP ···· 1
    Overview ···· 1
        SNMP framework ···· 1
        MIB and view-based MIB access control ···· 1
        SNMP operations ···· 2
        SNMP protocol versions ···· 2
    SNMP configuration task list ···· 2
    Configuring SNMP basic parameters ···· 2
        Configuring SNMPv3 basic parameters ···· 2
        Configuring SNMPv1 or SNMPv2c basic parameters ···· 4
    Configuring SNMP logging ···· 5
    Configuring SNMP traps ···· 6
        Enabling SNMP traps ···· 6
        Configuring the SNMP agent to send traps to a host ···· 7
    Displaying and maintaining SNMP ···· 8
    SNMP configuration examples ···· 8
        SNMPv1/SNMPv2c configuration example ···· 8
        SNMPv3 configuration example ···· 10
        SNMP logging configuration example ···· 11
Configuring RMON ···· 14
    Overview ···· 14
        Working mechanism ···· 14
        RMON groups ···· 14
    Configuring the RMON statistics function ···· 16
        Configuring the RMON Ethernet statistics function ···· 16
        Configuring the RMON history statistics function ···· 16
    Configuring the RMON alarm function ···· 17
    Displaying and maintaining RMON ···· 18
    Ethernet statistics group configuration example ···· 18
    History group configuration example ···· 19
    Alarm group configuration example ···· 21
Configuring NTP ···· 23
    Overview ···· 23
        NTP application ···· 23
        NTP advantages ···· 23
        How NTP works ···· 23
        NTP message format ···· 24
        NTP operation modes ···· 26
        NTP for VPNs ···· 28
    NTP configuration task list ···· 29
    Configuring NTP operation modes ···· 29
        Configuring NTP client/server mode ···· 30
        Configuring the NTP symmetric peers mode ···· 30
        Configuring NTP broadcast mode ···· 31
        Configuring NTP multicast mode ···· 31
    Configuring the local clock as a reference source ···· 32
    Configuring optional parameters for NTP ···· 33
        Specifying the source interface for NTP messages ···· 33
        Disabling an interface from receiving NTP messages ···· 33
        Configuring the allowed maximum number of dynamic sessions ···· 34
    Configuring access-control rights ···· 34
        Configuration prerequisites ···· 34
        Configuration procedure ···· 35
    Configuring NTP authentication ···· 35
        Configuring NTP authentication in client/server mode ···· 35
        Configuring NTP authentication in symmetric peers mode ···· 36
        Configuring NTP authentication in broadcast mode ···· 37
        Configuring NTP authentication in multicast mode ···· 38
    Displaying and maintaining NTP ···· 39
    NTP configuration examples ···· 40
        NTP client/server mode configuration example ···· 40
        NTP symmetric peers mode configuration example ···· 41
        NTP broadcast mode configuration example ···· 42
        NTP multicast mode configuration example ···· 44
        Configuration example for NTP client/server mode with authentication ···· 46
        Configuration example for NTP broadcast mode with authentication ···· 48
        Configuration example for MPLS VPN time synchronization in client/server mode ···· 51
        Configuration example for MPLS VPN time synchronization in symmetric peers mode ···· 52
Configuring cluster management ···· 54
    Overview ···· 54
        Roles in a cluster ···· 54
        How a cluster works ···· 55
        Configuration restrictions and guidelines ···· 58
    Cluster management configuration task list ···· 58
    Configuring the management device ···· 59
        Enabling NDP globally and for specific ports ···· 59
        Configuring NDP parameters ···· 60
        Enabling NTDP globally and for specific ports ···· 60
        Configuring NTDP parameters ···· 60
        Manually collecting topology information ···· 61
        Enabling the cluster function ···· 61
        Establishing a cluster ···· 62
        Enabling management VLAN autonegotiation ···· 62
        Configuring communication between the management device and the member devices within a cluster ···· 63
        Configuring cluster management protocol packets ···· 63
        Cluster member management ···· 64
    Configuring the member devices ···· 65
        Enabling NDP ···· 65
        Enabling NTDP ···· 65
        Manually collecting topology information ···· 65
        Enabling the cluster function ···· 65
        Deleting a member device from a cluster ···· 65
    Toggling between the CLIs of the management device and a member device ···· 65
    Adding a candidate device to a cluster ···· 66
    Configuring advanced cluster functions ···· 66
        Configuring topology management ···· 66
        Configuring interaction for a cluster ···· 67
        Configuring the SNMP configuration synchronization function ···· 68
        Configuring Web user accounts in batches ···· 68
    Displaying and maintaining cluster management ···· 69
    Cluster management configuration example ···· 70
Configuring CWMP (TR-069) ···· 73
    Overview ···· 73
        CWMP network framework ···· 73
        Basic CWMP functions ···· 73
        CWMP mechanism ···· 75
    CWMP configuration approaches ···· 76
        Configuring ACS and CPE attributes through ACS ···· 77
        Configuring ACS and CPE attributes through DHCP ···· 77
        Configuring CWMP at the CLI ···· 77
    Enabling CWMP ···· 78
    Configuring ACS attributes ···· 78
        Configuring the ACS URL ···· 79
        Configuring the ACS username and password ···· 79
    Configuring CPE attributes ···· 79
        Configuring the CPE username and password ···· 80
        Configuring the CWMP connection interface ···· 80
        Sending Inform messages ···· 80
        Configuring the maximum number of attempts made to retry a connection ···· 81
        Configuring the close-wait timer of the CPE ···· 81
        Configuring the CPE working mode ···· 82
        Specifying an SSL client policy for HTTPS connection to ACS ···· 82
    Displaying and maintaining CWMP ···· 83
Configuring IP accounting ···· 84
    Configuring IP accounting ···· 84
    Displaying and maintaining IP accounting ···· 85
    IP accounting configuration example ···· 85
        Network requirements ···· 85
        Configuration procedure ···· 86
Configuring NetStream ···· 87
    Overview ···· 87
    NetStream basic concepts ···· 87
        Flow ···· 87
        NetStream operation ···· 87
    NetStream key technologies ···· 88
        Flow aging ···· 88
        NetStream data export ···· 88
        NetStream export formats ···· 91
    NetStream sampling and filtering ···· 91
        NetStream sampling ···· 91
        NetStream filtering ···· 91
    NetStream configuration task list ···· 91
    Enabling NetStream on an interface ···· 92
    Configuring NetStream filtering and sampling ···· 93
        Configuring NetStream filtering ···· 93
        Configuring NetStream sampling ···· 93
    Configuring NetStream data export ···· 94
        Configuring NetStream traditional data export ···· 94
        Configuring NetStream aggregation data export ···· 94
    Configuring attributes of NetStream export data ···· 95
        Configuring NetStream export format ···· 95
        Configuring the refresh rate for NetStream version 9 templates ···· 97
        Configuring MPLS-aware NetStream ···· 97
    Configuring NetStream flow aging ···· 97
        Flow aging approaches ···· 97
        Configuration procedure ···· 98
    Displaying and maintaining NetStream ···· 99
    NetStream configuration examples ···· 99
        NetStream traditional data export configuration example ···· 99
        NetStream aggregation data export configuration example ···· 100
Configuring NQA ···· 102
    Overview ···· 102
        Collaboration ···· 102
        Threshold monitoring ···· 103
    NQA configuration task list ···· 104
    Configuring the NQA server ···· 104
    Configuring the NQA client ···· 105
        Enabling the NQA client ···· 105
        Configuring an ICMP echo operation ···· 105
        Configuring a DHCP operation ···· 106
        Configuring a DNS operation ···· 107
        Configuring an FTP operation ···· 107
        Configuring an HTTP operation ···· 108
        Configuring a UDP jitter operation ···· 109
        Configuring an SNMP operation ···· 111
        Configuring a TCP operation ···· 111
        Configuring a UDP echo operation ···· 112
        Configuring a voice operation ···· 113
        Configuring a DLSw operation ···· 115
        Configuring optional parameters for an NQA operation ···· 116
        Configuring the collaboration function ···· 117
        Configuring threshold monitoring ···· 117
        Configuring the NQA statistics function ···· 120
        Configuring NQA history records saving function ···· 120
        Scheduling an NQA operation ···· 121
    Displaying and maintaining NQA ···· 122
    NQA configuration examples ···· 123
        ICMP echo operation configuration example ···· 123
        DHCP operation configuration example ···· 125
        DNS operation configuration example ···· 126
        FTP operation configuration example ···· 127
        HTTP operation configuration example ···· 128
        UDP jitter operation configuration example ···· 130
        SNMP operation configuration example ···· 132
        TCP operation configuration example ···· 133
        UDP echo operation configuration example ···· 135
        Voice operation configuration example ···· 136
        DLSw operation configuration example ···· 139
        NQA collaboration configuration example ···· 140
Configuring IP traffic ordering ···· 143
    Enabling IP traffic ordering ···· 143
    Setting the IP traffic ordering interval ···· 143
    Displaying and maintaining IP traffic ordering ···· 143
    IP traffic ordering configuration example ···· 143
Configuring sFlow ···· 145
    Configuring the sFlow agent and sFlow collector information ···· 145
    Configuring flow sampling ···· 146
    Configuring counter sampling ···· 147
    Displaying and maintaining sFlow ···· 147
    sFlow configuration example ···· 147
    Troubleshooting sFlow configuration ···· 148
        The remote sFlow collector cannot receive sFlow packets ···· 148
Configuring samplers ···· 150
    Overview ···· 150
    Creating a sampler ···· 150
    Displaying and maintaining a sampler ···· 150
    Sampler configuration example ···· 151
Configuring PoE ···· 153
    Hardware
compatibility ··············································································································································· 153 Overview······································································································································································· 153 PoE configuration task list ··········································································································································· 153 Enabling PoE ································································································································································ 154 Enabling PoE for a PSE ······································································································································· 154 Enabling PoE on a PoE interface ······················································································································· 155 Detecting PDs ································································································································································ 156 Enabling the PSE to detect nonstandard PDs ··································································································· 156 Configuring a PD disconnection detection mode ···························································································· 156 Configuring the PoE power ········································································································································· 156 Configuring the maximum PSE power ·············································································································· 156 Configuring the maximum PoE interface power ······························································································ 157 Configuring PoE power management 
························································································································ 157 Configuring PSE power management ··············································································································· 157 Configuring PoE interface power management ······························································································· 158 Configuring the PoE monitoring function ··················································································································· 159 Configuring PSE power monitoring ··················································································································· 159 Monitoring PD ······················································································································································ 159 Configuring a PoE interface by using a PoE profile ································································································· 159 Configuring a PoE profile ··································································································································· 160 Applying a PoE profile········································································································································ 160 Upgrading PSE processing software in service ········································································································ 161 Displaying and maintaining PoE ································································································································ 161 PoE configuration example ········································································································································· 162 Troubleshooting PoE 
···················································································································································· 164 Failure to set the priority of a PoE interface to critical····················································································· 164 Failure to apply a PoE profile to a PoE interface ····························································································· 164 Configuring port mirroring ····································································································································· 165 Overview······································································································································································· 165 Terminologies of port mirroring ························································································································· 165 Port mirroring classification and implementation ····························································································· 166 Configuring local port mirroring ································································································································ 166 Configuring local port mirroring by using the mirror-group command ························································· 166 Creating a local mirroring group ······················································································································ 166 Configuring source ports for the local mirroring group ·················································································· 167 Configuring the monitor port for the local mirroring group ············································································ 168 Configuring local port mirroring by using the mirror command ···································································· 168 Configuring remote port mirroring 
····························································································································· 169 Displaying and maintaining port mirroring ··············································································································· 169 Local port mirroring configuration example ·············································································································· 169 Network requirements ········································································································································· 169 v
Configuration procedure ···································································································································· 170 Verifying the configuration ································································································································· 170 Configuring traffic mirroring ·································································································································· 171 Overview······································································································································································· 171 Traffic mirroring configuration task list ······················································································································ 171 Configuring traffic mirroring ······································································································································· 171 Configuring match criteria ································································································································· 171 Mirroring traffic to an interface ························································································································· 172 Configuring a QoS policy ·································································································································· 172 Applying a QoS policy ······································································································································· 172 Displaying and maintaining traffic mirroring ············································································································ 173 Traffic mirroring configuration example ···················································································································· 173 Network requirements 
········································································································································· 173 Configuration procedure ···································································································································· 173 Verifying the configuration ································································································································· 175 Configuring the information center ························································································································ 176 Overview······································································································································································· 176 Classification of system information ·················································································································· 176 System information levels ··································································································································· 176 Output channels and destinations ····················································································································· 177 Default output rules of system information ········································································································ 178 System information formats ································································································································ 179 FIPS compliance ··························································································································································· 181 Information center configuration task list ··················································································································· 182 Outputting system information to the console 
··········································································································· 182 Outputting system information to the monitor terminal ···························································································· 183 Outputting system information to a log host ············································································································· 184 Outputting system information to the trap buffer ······································································································ 185 Outputting system information to the log buffer ········································································································ 186 Outputting system information to the SNMP module ······························································································· 186 Outputting system information to the Web interface································································································ 187 Saving system information to a log file······················································································································ 188 Managing security logs ··············································································································································· 189 Saving security logs into the security log file···································································································· 190 Managing the security log file ··························································································································· 191 Enabling synchronous information output ················································································································· 193 Disabling an interface from generating link up/down logging information ························································· 194 Displaying and maintaining 
information center ······································································································· 194 Information center configuration examples ··············································································································· 195 Outputting log information to the console ········································································································ 195 Outputting log information to a UNIX log host ································································································ 196 Outputting log information to a Linux log host ································································································· 197 Using ping, tracert, and system debugging ·········································································································· 200 Ping ················································································································································································ 200 Using a ping command to test network connectivity ······················································································· 200 Ping example ······················································································································································· 200 Tracert ··········································································································································································· 202 Prerequisites ························································································································································· 203 Using a tracert command to identify failed or all nodes in a path ································································ 204 System debugging 
······················································································································································· 204 Debugging information control switches··········································································································· 204 Debugging a feature module ····························································································································· 205 Ping and tracert example ············································································································································ 206 vi
Configuring IPv6 NetStream ·································································································································· 208 Overview······································································································································································· 208 IPv6 NetStream basic concepts ·································································································································· 208 IPv6 flow ······························································································································································· 208 IPv6 NetStream operation ·································································································································· 208 IPv6 NetStream key technologies ······························································································································· 209 Flow aging ··························································································································································· 209 IPv6 NetStream data export ······························································································································· 209 IPv6 NetStream export format ···························································································································· 210 IPv6 NetStream configuration task list ······················································································································· 211 Enabling IPv6 NetStream ············································································································································ 211 Configuring IPv6 NetStream data export ·················································································································· 211 Configuring IPv6 
NetStream traditional data export ······················································································· 211 Configuring IPv6 NetStream aggregation data export ··················································································· 212 Configuring attributes of IPv6 NetStream data export ····························································································· 213 Configuring IPv6 NetStream export format ······································································································ 213 Configuring the refresh rate for IPv6 NetStream version 9 templates ··························································· 214 Configuring IPv6 NetStream flow aging ··················································································································· 214 Flow aging approaches ······································································································································ 214 Configuration procedure ···································································································································· 215 Displaying and maintaining IPv6 NetStream ············································································································ 216 IPv6 NetStream configuration examples ··················································································································· 216 IPv6 NetStream traditional data export configuration example ····································································· 216 IPv6 NetStream aggregation data export configuration example ································································· 217 Support and other resources ·································································································································· 219 Contacting HP 
······························································································································································ 219 Subscription service ············································································································································ 219 Related information ······················································································································································ 219 Documents ···························································································································································· 219 Websites······························································································································································· 219 Conventions ·································································································································································· 220 Index ········································································································································································ 222 vii
Configuring SNMP
This chapter provides an overview of the Simple Network Management Protocol (SNMP) and guides you
through the configuration procedure.
Overview
SNMP is an Internet standard protocol widely used by management stations to access and manage devices on a network, regardless of vendor, physical characteristics, or interconnect technology. SNMP enables network administrators to read and set variables on managed devices for state monitoring, troubleshooting, statistics collection, and other management purposes.
SNMP framework
The SNMP framework comprises the following elements:
• SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the network.
• SNMP agent—Works on a managed device to receive and handle requests from the NMS, and sends traps to the NMS when some events, such as an interface state change, occur.
• Management Information Base (MIB)—Specifies the variables (for example, interface status and CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.
Figure 1 Relationship between an NMS, agent and MIB
MIB and view-based MIB access control
A MIB stores variables called "nodes" or "objects" in a tree hierarchy and identifies each node with a
unique OID. An OID is a string of numbers that describes the path from the root node to a leaf node. For
example, object B in Figure 2 is uniquely identified by the OID {1.2.1.1}.
Figure 2 MIB tree
A MIB view represents a set of MIB objects (or MIB object hierarchies) with a certain access privilege and is identified by a view name. MIB objects included in the MIB view are accessible; objects excluded from it are not.
A MIB view can have multiple view records, each identified by a view-name oid-tree pair.
You control access to the MIB by assigning MIB views to SNMP groups or communities.
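As a sketch of this mechanism, the following commands (the view name and community name are illustrative) create a view that includes the mib-2 subtree but excludes its system group, then assign the view to a read-only community:

```
<Sysname> system-view
[Sysname] snmp-agent mib-view included mgmtview 1.3.6.1.2.1
[Sysname] snmp-agent mib-view excluded mgmtview 1.3.6.1.2.1.1
[Sysname] snmp-agent community read readonly mib-view mgmtview
```

An NMS using the readonly community can then read any object under mib-2 except those in the system subtree.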
SNMP operations
SNMP provides the following basic operations:
• Get—The NMS retrieves SNMP object nodes in an agent MIB.
• Set—The NMS modifies the value of an object node in an agent MIB.
• Notifications—Include traps and informs. The SNMP agent sends traps or informs to report events to the NMS. The difference between the two notification types is that informs require acknowledgment but traps do not. The device supports only traps.
SNMP protocol versions
HP supports SNMPv1, SNMPv2c, and SNMPv3. An NMS and an SNMP agent must use the same
SNMP version to communicate with each other.
• SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS must use the same community name as set on the SNMP agent. If the community name used by the NMS is different from that set on the agent, the NMS cannot establish an SNMP session to access the agent or receive traps from the agent.
• SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1, but supports more operation modes, data types, and error codes.
• SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets for integrity, authenticity, and confidentiality.
SNMP configuration task list
• Configuring SNMP basic parameters—Required.
• Configuring SNMP logging—Optional.
• Configuring SNMP traps—Optional.
Configuring SNMP basic parameters
SNMPv3 differs from SNMPv1 and SNMPv2c in many ways. Their configuration procedures are
described in separate sections.
Configuring SNMPv3 basic parameters
Step 1: Enter system view.
  Command: system-view
  Remarks: N/A

Step 2: Enable the SNMP agent.
  Command: snmp-agent
  Remarks: Optional. By default, the SNMP agent is disabled. You can also enable the SNMP agent by using any command that begins with snmp-agent except for the snmp-agent calculate-password command.

Step 3: Configure system information for the SNMP agent.
  Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { all | { v1 | v2c | v3 }* } }
  Remarks: Optional. The defaults are as follows: contact—null, location—null, version—SNMPv3.

Step 4: Configure the local engine ID.
  Command: snmp-agent local-engineid engineid
  Remarks: Optional. The default local engine ID is the company ID plus the device ID. After you change the local engine ID, the existing SNMPv3 users become invalid, and you must re-create the SNMPv3 users.

Step 5: Create or update a MIB view.
  Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
  Remarks: Optional. By default, the MIB view ViewDefault is predefined and its OID is 1. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB subtree masks multiple times, the most recent configuration takes effect. Except for the four subtrees in the default MIB view, you can create up to 16 unique MIB view records.

Step 6: Configure an SNMPv3 group.
  Command: snmp-agent group v3 group-name [ authentication | privacy ] [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
  Remarks: By default, no SNMP group exists.

Step 7: Convert a plaintext key to a ciphertext (encrypted) key.
  Command: snmp-agent calculate-password plain-password mode { 3desmd5 | 3dessha | md5 | sha } { local-engineid | specified-engineid engineid }
  Remarks: Optional.

Step 8: Add a user to the SNMPv3 group.
  Command: snmp-agent usm-user v3 user-name group-name [ [ cipher ] authentication-mode { md5 | sha } auth-password [ privacy-mode { 3des | aes128 | des56 } priv-password ] ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
  Remarks: N/A

Step 9: Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.
  Command: snmp-agent packet max-size byte-count
  Remarks: Optional. By default, the SNMP agent can receive and send SNMP packets up to 1500 bytes.
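As a hedged end-to-end sketch of the procedure above (the group name, username, and keys are illustrative), a minimal SNMPv3 configuration might look like this:

```
<Sysname> system-view
[Sysname] snmp-agent sys-info version v3
[Sysname] snmp-agent group v3 v3group privacy
[Sysname] snmp-agent usm-user v3 v3user v3group authentication-mode sha authkey123 privacy-mode aes128 privkey123
```

Because the group is created with the privacy keyword, the NMS must use both authentication and encryption when accessing the agent as v3user.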
Configuring SNMPv1 or SNMPv2c basic parameters
Step 1: Enter system view.
  Command: system-view
  Remarks: N/A

Step 2: Enable the SNMP agent.
  Command: snmp-agent
  Remarks: Optional. By default, the SNMP agent is disabled. You can also enable the SNMP agent service by using any command that begins with snmp-agent except for the snmp-agent calculate-password command.

Step 3: Configure system information for the SNMP agent.
  Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { all | { v1 | v2c | v3 }* } }
  Remarks: Optional. The defaults are as follows: contact—null, location—null, version—SNMPv3.

Step 4: Configure the local engine ID.
  Command: snmp-agent local-engineid engineid
  Remarks: Optional. The default local engine ID is the company ID plus the device ID.

Step 5: Create or update a MIB view.
  Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
  Remarks: Optional. By default, the MIB view ViewDefault is predefined and its OID is 1. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB subtree masks multiple times, the most recent configuration takes effect. Except for the four subtrees in the default MIB view, you can create up to 16 unique MIB view records.

Step 6: Configure the SNMP access right. Use either method:
  • (Method 1) Create an SNMP community:
    snmp-agent community { read | write } [ cipher ] community-name [ mib-view view-name ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
  • (Method 2) Create an SNMP group, and add a user to the SNMP group:
    a. snmp-agent group { v1 | v2c } group-name [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
    b. snmp-agent usm-user { v1 | v2c } user-name group-name [ acl acl-number | acl ipv6 ipv6-acl-number ] *
  Remarks: By default, no SNMP group exists. In method 2, the username is equivalent to the community name in method 1, and must be the same as the community name configured on the NMS.

Step 7: Configure the maximum size (in bytes) of SNMP packets for the SNMP agent.
  Command: snmp-agent packet max-size byte-count
  Remarks: Optional. By default, the SNMP agent can receive and send SNMP packets up to 1500 bytes.
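For example, a minimal SNMPv1/SNMPv2c sketch using method 1 (the community names are illustrative):

```
<Sysname> system-view
[Sysname] snmp-agent sys-info version v1 v2c
[Sysname] snmp-agent community read readcom
[Sysname] snmp-agent community write writecom
```

The NMS must be configured with the same community names to access the agent.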
Configuring SNMP logging
Leave SNMP logging disabled in normal cases to prevent a large volume of SNMP logs from degrading device performance.
The SNMP logging function logs Get requests, Set requests, and Set responses, but does not log Get
responses.
• Get operation—The agent logs the IP address of the NMS, the name of the accessed node, and the node OID.
• Set operation—The agent logs the NMS's IP address, the name of the accessed node, the node OID, the variable value, and the error code and index for the Set operation.
The SNMP module sends these logs to the information center as informational messages. You can
configure the information center to output these messages to certain destinations, for example, the
console and the log buffer. The total output size for the node field (MIB node name) and the value field
(value of the MIB node) in each log entry is 1024 bytes. If this limit is exceeded, the information center
truncates the data in the fields. For more information about the information center, see "Configuring the
information center."
To configure SNMP logging:
Step 1: Enter system view.
  Command: system-view
  Remarks: N/A

Step 2: Enable SNMP logging.
  Command: snmp-agent log { all | get-operation | set-operation }
  Remarks: By default, SNMP logging is disabled.
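For example, the following sketch enables logging for both Get and Set operations; the information center then controls where the resulting log messages are output:

```
<Sysname> system-view
[Sysname] snmp-agent log all
```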
Configuring SNMP traps
The SNMP agent sends traps to inform the NMS of important events, such as a reboot.
Traps include generic traps and vendor-specific traps. Available generic traps include authentication, coldstart, linkdown, linkup, and warmstart. All other traps are vendor-defined.
SNMP traps generated by a module are sent to the information center. You can configure the information
center to enable or disable outputting the traps from a module by severity and set output destinations. For
more information about the information center, see "Configuring the information center."
Enabling SNMP traps
Enable SNMP traps only if necessary. SNMP traps are memory-intensive and might affect device
performance.
To generate linkUp or linkDown traps when the link state of an interface changes, enable the linkUp or
linkDown trap function both globally by using the snmp-agent trap enable [ standard [ linkdown |
linkup ] * ] command and on the interface by using the enable snmp trap updown command.
After you enable a trap function for a module, whether the module generates traps also depends on the
configuration of the module. For more information, see the configuration guide for each module.
To enable traps:
1. Enter system view.
   Command: system-view
2. Enable traps globally.
   Command: snmp-agent trap enable [ acfp [ client | policy | rule | server ] | bfd | bgp | configuration | default-route | flash | fr | isdn [ call-clear | call-setup | lapd-status ] | mpls | ospf [ process-id ] [ ifauthfail | ifcfgerror | ifrxbadpkt | ifstatechange | iftxretransmit | lsdbapproachoverflow | lsdboverflow | maxagelsa | nbrstatechange | originatelsa | vifcfgerror | virifauthfail | virifrxbadpkt | virifstatechange | viriftxretransmit | virnbrstatechange ] * | pim [ candidatebsrwinelection | electedbsrlostelection | interfaceelection | invalidjoinprune | invalidregister | neighborloss | rpmappingchange ] * | posa | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system | voice dial | vrrp [ authfailure | newmaster ] | wlan ]
   Remarks: By default, the trap function of the voice module is disabled and the trap functions of all the other modules are enabled.
3. Enter interface view.
   Command: interface interface-type interface-number, or controller { cpos | e1 | e3 | e-cpos | t1 | t3 } number
   Remarks: Use either command depending on the interface type.
4. Enable link state traps.
   Command: enable snmp trap updown
   Remarks: By default, the link state traps are enabled.
Configuring the SNMP agent to send traps to a host
The SNMP module buffers the traps received from a module in a trap queue. You can set the size of the queue, the duration for which the queue holds a trap, and the trap target (destination) hosts, typically the NMS.
To successfully send traps, you must also perform the following tasks:
• Complete the basic SNMP settings and verify that they are the same as on the NMS. If SNMPv1 or SNMPv2c is used, you must configure a community name. If SNMPv3 is used, you must configure an SNMPv3 user and MIB view.
• Make sure the device and the NMS can reach each other.
To configure the SNMP agent to send traps to a host:
1. Enter system view.
   Command: system-view
2. Configure a target host.
   Command: snmp-agent target-host trap address udp-domain { ip-address | ipv6 ipv6-address } [ udp-port port-number ] [ vpn-instance vpn-instance-name ] params securityname security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
   Remarks: If the trap destination is a host, the ip-address argument must be the IP address of the host. The vpn-instance keyword is applicable in an IPv4 network.
3. Configure the source address for traps.
   Command: snmp-agent trap source interface-type { interface-number | interface-number.subnumber }
   Remarks: Optional. By default, SNMP chooses the IP address of an interface to be the source IP address of traps.
4. Extend the standard linkUp/linkDown traps.
   Command: snmp-agent trap if-mib link extended
   Remarks: Optional. By default, standard linkUp/linkDown traps are used. Extended linkUp/linkDown traps add the interface description and interface type to standard linkUp/linkDown traps. If the NMS does not support extended SNMP messages, use standard linkUp/linkDown traps.
5. Configure the trap queue size.
   Command: snmp-agent trap queue-size size
   Remarks: Optional. The default trap queue size is 100. When the trap queue is full, the oldest traps are automatically deleted to make room for new traps.
6. Configure the trap holding time.
   Command: snmp-agent trap life seconds
   Remarks: Optional. The default setting is 120 seconds. A trap is deleted when its holding time expires.
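The queue-size and holding-time behavior can be sketched as a bounded queue. This is an illustrative model, not the device implementation; the defaults of 100 entries and 120 seconds come from the steps above:

```python
import collections
import time

class TrapQueue:
    """Illustrative model of the SNMP trap queue: bounded size (oldest
    traps dropped when full) and a per-trap holding time."""
    def __init__(self, size=100, life=120):
        self.size = size
        self.life = life                      # holding time in seconds
        self.queue = collections.deque()      # (timestamp, trap) pairs

    def push(self, trap, now=None):
        now = time.monotonic() if now is None else now
        self._expire(now)
        if len(self.queue) >= self.size:
            self.queue.popleft()              # queue full: drop the oldest trap
        self.queue.append((now, trap))

    def _expire(self, now):
        while self.queue and now - self.queue[0][0] > self.life:
            self.queue.popleft()              # trap held longer than its life

q = TrapQueue(size=3, life=120)
for t0, name in enumerate(["coldStart", "linkDown", "linkUp", "authFailure"]):
    q.push(name, now=t0)
after_overflow = [trap for _, trap in q.queue]   # oldest trap was dropped
q.push("warmStart", now=200)                     # earlier traps exceed their life
after_expiry = [trap for _, trap in q.queue]
```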
Displaying and maintaining SNMP
All of the following display commands are available in any view.
• Display SNMP agent system information, including the contact, physical location, and SNMP version: display snmp-agent sys-info [ contact | location | version ] * [ | { begin | exclude | include } regular-expression ]
• Display SNMP agent statistics: display snmp-agent statistics [ | { begin | exclude | include } regular-expression ]
• Display the local engine ID: display snmp-agent local-engineid [ | { begin | exclude | include } regular-expression ]
• Display SNMP group information: display snmp-agent group [ group-name ] [ | { begin | exclude | include } regular-expression ]
• Display basic information about the trap queue: display snmp-agent trap queue [ | { begin | exclude | include } regular-expression ]
• Display the modules that can send traps and their trap status (enable or disable): display snmp-agent trap-list [ | { begin | exclude | include } regular-expression ]
• Display SNMPv3 user information: display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] * [ | { begin | exclude | include } regular-expression ]
• Display SNMPv1 or SNMPv2c community information: display snmp-agent community [ read | write ] [ | { begin | exclude | include } regular-expression ]
• Display MIB view information: display snmp-agent mib-view [ exclude | include | viewname view-name ] [ | { begin | exclude | include } regular-expression ]
SNMP configuration examples
This section gives examples of configuring SNMPv1 or SNMPv2c, SNMPv3, and SNMP logging.
SNMPv1/SNMPv2c configuration example
Network requirements
As shown in Figure 3, the NMS (1.1.1.2/24) uses SNMPv1 or SNMPv2c to manage the SNMP agent
(1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
Figure 3 Network diagram
Configuration procedure
1. Configure the SNMP agent:
# Configure the IP address of the agent, and make sure the agent and the NMS can reach each
other. (Details not shown.)
# Specify SNMPv1 and SNMPv2c, and create a read-only community public and a read and write
community private.
<Agent> system-view
[Agent] snmp-agent sys-info version v1 v2c
[Agent] snmp-agent community read public
[Agent] snmp-agent community write private
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP traps, set the NMS at 1.1.1.2 as an SNMP trap destination, and use public as the
community name. (To make sure the NMS can receive traps, specify the same SNMP version in the
snmp-agent target-host command as is configured on the NMS.)
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public v1
[Agent] quit
2. Configure the SNMP NMS:
# Configure the SNMP version for the NMS as v1 or v2c, create a read-only community and name
it public, and create a read and write community and name it private. For information about
configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
3. Verify the configuration:
# Try to get the count of sent traps from the agent. The attempt succeeds.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv1
Operation: Get
Request binding:
1: 1.3.6.1.2.1.11.29.0
Response binding:
1: Oid=snmpOutTraps.0 Syntax=CNTR32 Value=18
Get finished
# Use a wrong community name to get the value of a MIB node from the agent. You can see an
authentication failure trap on the NMS.
1.1.1.1/2934 V1 Trap = authenticationFailure
SNMP Version = V1
Community = public
Command = Trap
Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50
GenericID = 4
SpecificID = 0
Time Stamp = 8:35:25.68
SNMPv3 configuration example
Network requirements
As shown in Figure 4, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status of
the agent (1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
The NMS and the agent perform authentication when they set up an SNMP session. The authentication
algorithm is MD5 and the authentication key is authkey. The NMS and the agent also encrypt the SNMP
packets between them by using the DES algorithm and the privacy key prikey.
Figure 4 Network diagram
Configuration procedure
1. Configure the agent:
# Configure the IP address of the agent and make sure the agent and the NMS can reach each
other. (Details not shown.)
# Assign the NMS read and write access to the objects under the snmp node (OID
1.3.6.1.2.1.11), and deny its access to any other MIB object.
<Agent> system-view
[Agent] undo snmp-agent mib-view ViewDefault
[Agent] snmp-agent mib-view included test snmp
[Agent] snmp-agent group v3 managev3group read-view test write-view test
# Set the username to managev3user, authentication algorithm to MD5, authentication key to
authkey, encryption algorithm to DES56, and privacy key to prikey.
[Agent] snmp-agent usm-user v3 managev3user managev3group authentication-mode md5
authkey privacy-mode des56 prikey
# Configure contact person and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable traps, specify the NMS at 1.1.1.2 as a trap destination, and set the username to
managev3user for the traps.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
managev3user v3 privacy
2. Configure the SNMP NMS:
• Specify the SNMP version for the NMS as v3.
• Create two SNMP users: managev3user and public.
• Enable both authentication and privacy functions.
• Use MD5 for authentication and DES for encryption.
• Set the authentication key to authkey and the privacy key to prikey.
• Set the timeout time and maximum number of retries.
For information about configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
3. Verify the configuration:
# Try to get the count of sent traps from the agent. The get attempt succeeds.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv3
Operation: Get
Request binding:
1: 1.3.6.1.2.1.11.29.0
Response binding:
1: Oid=snmpOutTraps.0 Syntax=CNTR32 Value=18
Get finished
# Try to get the device name from the agent. The get attempt fails because the NMS has no access
right to the node.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv3
Operation: Get
Request binding:
1: 1.3.6.1.2.1.1.5.0
Response binding:
1: Oid=sysName.0 Syntax=noSuchObject Value=NULL
Get finished
# Execute the shutdown or undo shutdown command on an idle interface on the agent. You can
see the interface state change traps on the NMS:
1.1.1.1/3374 V3 Trap = linkdown
SNMP Version = V3
Community = managev3user
Command = Trap
1.1.1.1/3374 V3 Trap = linkup
SNMP Version = V3
Community = managev3user
Command = Trap
SNMP logging configuration example
Network requirements
Configure the SNMP agent (1.1.1.1/24) in Figure 5 to log the SNMP operations performed by the NMS.
Figure 5 Network diagram
Configuration procedure
This example assumes that you have configured all required SNMP settings for the NMS and the agent
(see "SNMPv1/SNMPv2c configuration example" or "SNMPv3 configuration example").
# Enable displaying log messages on the configuration terminal. (This function is enabled by default.
Skip this step if you are using the default.)
<Agent> terminal monitor
<Agent> terminal logging
# Enable the information center to output system information with severity level equal to or higher than
informational to the console port.
<Agent> system-view
[Agent] info-center source snmp channel console log level informational
# Enable logging GET and SET operations.
[Agent] snmp-agent log all
# Verify the configuration:
Use the NMS to get a MIB variable from the agent. The following is a sample log message displayed on
the configuration terminal:
%Nov 23 16:10:09:482 2011 Agent SNMP/6/SNMP_GET:
-seqNO=27-srcIP=1.1.1.2-op=GET-node=sysUpTime(1.3.6.1.2.1.1.3.0)-value=-node=ifHCOutOctets(1.3.6.1.2.1.31.1.1.1.10.1)-value=; The agent received a message.
Use the NMS to set a MIB variable on the agent. The following is a sample log message displayed on
the configuration terminal:
%Nov 23 16:16:42:581 2011 Agent SNMP/6/SNMP_SET:
-seqNO=37-srcIP=1.1.1.2-op=SET-errorIndex=0-errorStatus=noError-node=sysLocation(1.3.6.1.2.1.1.6.0)-value=beijing; The agent received a message.
Table 1 SNMP log message field description
• Nov 23 16:10:09:482 2011—Time when the SNMP log was generated.
• seqNO—Serial number automatically assigned to the SNMP log, starting from 0.
• srcIP—IP address of the NMS.
• op—SNMP operation type (GET or SET).
• node—MIB node name and OID of the node instance.
• errorIndex—Error index, with 0 meaning no error.
• errorStatus—Error status, with noError meaning no error.
• value—Value set by the SET operation. This field is null for a GET operation. If the value is a character string that has invisible characters or characters beyond the ASCII range 0 to 127, the string is displayed in hexadecimal format, for example, value = <81-43>[hex].
The information center can output system event messages to several destinations, including the terminal
and the log buffer. In this example, SNMP log messages are output to the terminal. To configure other
message destinations, see "Configuring the information center."
Configuring RMON
Overview
Remote Monitoring (RMON) is an enhancement to SNMP for remote device management and traffic
monitoring. An RMON monitor, typically the RMON agent embedded in a network device, periodically
or continuously collects traffic statistics for the network attached to a port, and when a statistic crosses a
threshold, logs the crossing event and sends a trap to the management station.
RMON uses SNMP traps to notify NMSs of exceptional conditions. RMON SNMP traps report various
events, including traffic events such as broadcast traffic threshold exceeded. In contrast, SNMP standard
traps report device operating status changes such as link up, link down, and module failure.
RMON enables proactive monitoring and management of remote network devices and subnets. The
managed device can automatically send a trap when a statistic crosses an alarm threshold, and the
NMS does not need to constantly poll MIB variables and compare the results. As a result, network traffic
is reduced.
Working mechanism
RMON monitors typically take one of the following forms:
• Dedicated RMON probes. NMSs can obtain management information from RMON probes directly and control network resources. By using this method, NMSs can obtain all RMON MIB information.
• RMON agents embedded in network devices. NMSs exchange data with RMON agents by using basic SNMP operations to gather network management information. Because this method is resource intensive, most RMON agent implementations provide only four groups of MIB information: alarm, event, history, and statistics.
HP devices provide the embedded RMON agent function. You can configure your device to collect and
report traffic statistics, error statistics, and performance statistics.
RMON groups
Among the RFC 2819 defined RMON groups, HP implements the statistics group, history group, event
group, and alarm group supported by the public MIB. HP also implements a private alarm group, which
enhances the standard alarm group.
Ethernet statistics group
The statistics group defines that the system collects various traffic statistics on an interface (only Ethernet
interfaces are supported), and saves the statistics in the Ethernet statistics table (ethernetStatsTable) for
future retrieval. The interface traffic statistics include network collisions, CRC alignment errors,
undersize/oversize packets, broadcasts, multicasts, bytes received, and packets received.
After you create a statistics entry for an interface, the statistics group starts to collect traffic statistics on the
interface. The statistics in the Ethernet statistics table are cumulative sums.
History group
The history group defines that the system periodically collects traffic statistics on interfaces and saves the
statistics in the history record table (ethernetHistoryTable). The statistics include bandwidth utilization,
number of error packets, and total number of packets.
The history statistics table records the traffic statistics collected during each sampling interval. The sampling interval is user-configurable.
Event group
The event group defines event indexes and controls the generation and notifications of the events
triggered by the alarms defined in the alarm group and the private alarm group. The events can be
handled in one of the following ways:
• Log—Logs event information (including event name and description) in the event log table of the RMON MIB, so the management device can get the logs through the SNMP Get operation.
• Trap—Sends a trap to notify an NMS of the event.
• Log-Trap—Logs event information in the event log table and sends a trap to the NMS.
• None—No action.
Alarm group
The RMON alarm group monitors alarm variables, such as the count of incoming packets (etherStatsPkts)
on an interface. After you define an alarm entry, the system gets the value of the monitored alarm
variable at the specified interval. If the value of the monitored variable is greater than or equal to the
rising threshold, a rising event is triggered. If the value of the monitored variable is smaller than or equal
to the falling threshold, a falling event is triggered. The event is then handled as defined in the event
group.
If an alarm entry crosses a threshold multiple times in succession, the RMON agent generates an alarm
event only for the first crossing. For example, if the value of a sampled alarm variable crosses the rising
threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm
event, as shown in Figure 6.
Figure 6 Rising and falling alarm events
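The first-crossing-only behavior described above can be sketched as a small state machine. This is an illustrative model of the rising/falling hysteresis, ignoring the startup-alarm options of the RMON MIB:

```python
class RmonAlarm:
    """Illustrative model of RMON alarm hysteresis: after a rising event
    fires, no further rising event fires until the sampled value has
    crossed the falling threshold, and the other way around."""
    def __init__(self, rising, falling):
        self.rising = rising
        self.falling = falling
        self.last = None          # last event type fired, if any

    def sample(self, value):
        if value >= self.rising and self.last != "rising":
            self.last = "rising"
            return "rising"       # first crossing of the rising threshold
        if value <= self.falling and self.last != "falling":
            self.last = "falling"
            return "falling"      # first crossing of the falling threshold
        return None               # repeated crossing: no event

alarm = RmonAlarm(rising=100, falling=50)
# 130 crosses the rising threshold a second time without an intervening
# falling crossing, so it triggers no event; 30 likewise after 45.
events = [alarm.sample(v) for v in [120, 130, 40, 110, 45, 30]]
```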
Private alarm group
The private alarm group calculates the values of alarm variables and compares the results with the
defined threshold for a more comprehensive alarming function.
The system handles the private alarm entry (as defined by the user) in the following ways:
• Periodically samples the private alarm variables defined in the private alarm formula.
• Calculates the sampled values based on the private alarm formula.
• Compares the result with the defined threshold and generates an appropriate event if the threshold value is reached.
If a private alarm entry crosses a threshold multiple times in succession, the RMON agent generates an
alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses the
rising threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising
alarm event.
Configuring the RMON statistics function
The RMON statistics function can be implemented by either the Ethernet statistics group or the history group, but the objects of the statistics are different, as follows:
• A statistics object of the Ethernet statistics group is a variable defined in the Ethernet statistics table, and the recorded content is a cumulative sum of the variable from the time the statistics entry is created to the current time. For more information, see "Configuring the RMON Ethernet statistics function."
• A statistics object of the history group is the variable defined in the history record table, and the recorded content is a cumulative sum of the variable in each period. For more information, see "Configuring the RMON history statistics function."
Configuring the RMON Ethernet statistics function
1. Enter system view.
   Command: system-view
2. Enter Ethernet interface view.
   Command: interface interface-type interface-number
3. Create an entry in the RMON statistics table.
   Command: rmon statistics entry-number [ owner text ]
You can create one statistics entry for each interface, and up to 100 statistics entries on the device. After the entry limit is reached, you cannot add new entries.
Configuring the RMON history statistics function
Follow these guidelines when you configure the RMON history statistics function:
• The entry-number for an RMON history control entry must be globally unique. If an entry number has been used on one interface, it cannot be used on another.
• You can configure multiple history control entries for one interface, but must make sure their entry numbers and sampling intervals are different.
• The device supports up to 100 history control entries.
• You can successfully create a history control entry even if the specified bucket size exceeds the history table size supported by the device. However, the effective bucket size will be the actual value supported by the device.
To configure the RMON history statistics function:
1. Enter system view.
   Command: system-view
2. Enter Ethernet interface view.
   Command: interface interface-type interface-number
3. Create an entry in the RMON history control table.
   Command: rmon history entry-number buckets number interval sampling-interval [ owner text ]
Configuring the RMON alarm function
Follow these guidelines when you configure the RMON alarm function:
• To send traps to the NMS when an alarm is triggered, configure the SNMP agent as described in "Configuring SNMP" before configuring the RMON alarm function.
• If the alarm variable is a MIB variable defined in the history group or the Ethernet statistics group, make sure the RMON Ethernet statistics function or the RMON history statistics function is configured on the monitored Ethernet interface. Otherwise, even if you can create the alarm entry, no alarm event can be triggered.
• You cannot create a new event, alarm, or private alarm entry that has the same set of parameters as an existing entry. For the parameters to be compared for duplication, see Table 2.
• After the maximum number of entries is reached, no new entry can be created. For the table entry limits, see Table 2.
To configure the RMON alarm function:
1. Enter system view.
   Command: system-view
2. Create an event entry in the event table.
   Command: rmon event entry-number [ description string ] { log | log-trap log-trapcommunity | none | trap trap-community } [ owner text ]
3. Create an entry in the alarm table or private alarm table. Use at least one of the following commands:
   • Create an entry in the alarm table: rmon alarm entry-number alarm-variable sampling-interval { absolute | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 [ owner text ]
   • Create an entry in the private alarm table: rmon prialarm entry-number prialarm-formula prialarm-des sampling-interval { absolute | changeratio | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 entrytype { forever | cycle cycle-period } [ owner text ]
Table 2 RMON configuration restrictions
• Event (maximum 60 entries)—Parameters to be compared: event description (description string), event type (log, trap, logtrap, or none), and community name (trap-community or log-trapcommunity).
• Alarm (maximum 60 entries)—Parameters to be compared: alarm variable (alarm-variable), sampling interval (sampling-interval), sampling type (absolute or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
• Prialarm (maximum 50 entries)—Parameters to be compared: alarm variable formula (alarm-variable), sampling interval (sampling-interval), sampling type (absolute, changeratio, or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
Displaying and maintaining RMON
All of the following display commands are available in any view.
• Display RMON statistics: display rmon statistics [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
• Display the RMON history control entry and history sampling information: display rmon history [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
• Display RMON alarm configuration: display rmon alarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
• Display RMON private alarm configuration: display rmon prialarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
• Display RMON event configuration: display rmon event [ entry-number ] [ | { begin | exclude | include } regular-expression ]
• Display log information for event entries: display rmon eventlog [ entry-number ] [ | { begin | exclude | include } regular-expression ]
Ethernet statistics group configuration example
Network requirements
Configure the RMON statistics group on the RMON agent in Figure 7 to gather cumulative traffic
statistics for Ethernet 1/1.
Figure 7 Network diagram
Configuration procedure
# Configure the RMON statistics group on the RMON agent to gather statistics for Ethernet 1/1.
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] rmon statistics 1 owner user1
# Display statistics collected by the RMON agent for Ethernet 1/1.
<Sysname> display rmon statistics ethernet 1/1
EtherStatsEntry 1 owned by user1-rmon is VALID.
  Interface : Ethernet1/1<ifIndex.3>
  etherStatsOctets         : 21657 , etherStatsPkts          : 307
  etherStatsBroadcastPkts  : 56    , etherStatsMulticastPkts : 34
  etherStatsUndersizePkts  : 0     , etherStatsOversizePkts  : 0
  etherStatsFragments      : 0     , etherStatsJabbers       : 0
  etherStatsCRCAlignErrors : 0     , etherStatsCollisions    : 0
  etherStatsDropEvents (insufficient resources): 0
  Packets received according to length:
  64     : 235 ,  65-127  : 67 ,  128-255  : 4
  256-511: 1   ,  512-1023: 0 ,  1024-1518: 0
# On the configuration terminal, get the traffic statistics through SNMP. (Details not shown.)
History group configuration example
Network requirements
Configure the RMON history group on the RMON agent in Figure 8 to gather periodical traffic statistics
for Ethernet 1/1 every minute.
Figure 8 Network diagram
Configuration procedure
# Configure the RMON history group on the RMON agent to gather traffic statistics every minute for
Ethernet 1/1. Retain up to eight records for the interface in the history statistics table.
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] rmon history 1 buckets 8 interval 60 owner user1
# Display the history data collected for Ethernet 1/1.
[Sysname-Ethernet1/1] display rmon history
HistoryControlEntry 2 owned by null is VALID
  Samples interface : Ethernet1/1<ifIndex.3>
  Sampled values of record 1 :
    dropevents        : 0 , octets               : 834
    packets           : 8 , broadcast packets    : 1
    multicast packets : 6 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 2 :
    dropevents        : 0  , octets               : 962
    packets           : 10 , broadcast packets    : 3
    multicast packets : 6  , CRC alignment errors : 0
    undersize packets : 0  , oversize packets     : 0
    fragments         : 0  , jabbers              : 0
    collisions        : 0  , utilization          : 0
  Sampled values of record 3 :
    dropevents        : 0 , octets               : 830
    packets           : 8 , broadcast packets    : 0
    multicast packets : 6 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 4 :
    dropevents        : 0 , octets               : 933
    packets           : 8 , broadcast packets    : 0
    multicast packets : 7 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 5 :
    dropevents        : 0 , octets               : 898
    packets           : 9 , broadcast packets    : 2
    multicast packets : 6 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 6 :
    dropevents        : 0 , octets               : 898
    packets           : 9 , broadcast packets    : 2
    multicast packets : 6 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 7 :
    dropevents        : 0 , octets               : 766
    packets           : 7 , broadcast packets    : 0
    multicast packets : 6 , CRC alignment errors : 0
    undersize packets : 0 , oversize packets     : 0
    fragments         : 0 , jabbers              : 0
    collisions        : 0 , utilization          : 0
  Sampled values of record 8 :
    dropevents        : 0  , octets               : 1154
    packets           : 13 , broadcast packets    : 1
    multicast packets : 6  , CRC alignment errors : 0
    undersize packets : 0  , oversize packets     : 0
    fragments         : 0  , jabbers              : 0
    collisions        : 0  , utilization          : 0
# On the configuration terminal, get the traffic statistics through SNMP. (Details not shown.)
Alarm group configuration example
Network requirements
Configure the RMON alarm group on the RMON agent in Figure 9 to send alarms in traps when the
5-second incoming traffic statistic on Ethernet 1/1 crosses the rising threshold or drops below the falling
threshold.
Figure 9 Network diagram
Configuration procedure
# Configure the SNMP agent with the same SNMP settings as the NMS at 1.1.1.2. This example uses
SNMPv1, read community public, and write community private.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public
# Configure the RMON statistics group to gather traffic statistics for Ethernet 1/1.
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] rmon statistics 1 owner user1
[Sysname-Ethernet1/1] quit
# Create an RMON event entry and an RMON alarm entry so the RMON agent sends traps when the
delta sampling value of node 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or drops below 50.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1
falling-threshold 50 1
# Display the RMON alarm entry configuration.
<Sysname> display rmon alarm 1
AlarmEntry 1 owned by null is Valid.
  Samples type         : delta
  Variable formula     : 1.3.6.1.2.1.16.1.1.1.4.1<etherStatsOctets.1>
  Sampling interval    : 5(sec)
  Rising threshold     : 100(linked with event 1)
  Falling threshold    : 50(linked with event 2)
  When startup enables : risingOrFallingAlarm
  Latest value         : 0
# Display statistics for Ethernet 1/1.
<Sysname> display rmon statistics ethernet 1/1
EtherStatsEntry 1 owned by user1-rmon is VALID.
  Interface : Ethernet1/1<ifIndex.3>
  etherStatsOctets         : 57329 , etherStatsPkts          : 455
  etherStatsBroadcastPkts  : 53    , etherStatsMulticastPkts : 353
  etherStatsUndersizePkts  : 0     , etherStatsOversizePkts  : 0
  etherStatsFragments      : 0     , etherStatsJabbers       : 0
  etherStatsCRCAlignErrors : 0     , etherStatsCollisions    : 0
  etherStatsDropEvents (insufficient resources): 0
  Packets received according to length:
  64     : 7 ,  65-127  : 413 ,  128-255  : 35
  256-511: 0 ,  512-1023: 0   ,  1024-1518: 0
# Query alarm events on the NMS. (Details not shown.)
On the RMON agent, alarm event messages are displayed when events occur. The following is a sample
output:
[Sysname]
#Aug 27 16:31:34:12 2005 Sysname RMON/2/ALARMFALL:Trap 1.3.6.1.2.1.16.0.2 Alarm table 1 monitors 1.3.6.1.2.1.16.1.1.1.4.1 with sample type 2,has sampled alarm value 0 less than(or =) 50.
Configuring NTP
You must synchronize your device with a trusted time source by using the Network Time Protocol (NTP) or by changing the system time before you run the device on a live network. Various tasks, including network management, charging, auditing, and distributed computing, depend on an accurate system time setting, because the timestamps of system messages and logs use the system time.
Overview
NTP is typically used in large networks to dynamically synchronize time among network devices. It
guarantees higher clock accuracy than manual system clock setting. In a small network that does not
require high clock accuracy, you can keep time synchronized among devices by changing their system
clocks one by one.
NTP runs over UDP and uses UDP port 123.
NTP application
An administrator cannot keep time synchronized among all the devices within a network by changing the system clock on each station, because this is a huge amount of work and does not guarantee clock precision. NTP, however, allows quick clock synchronization within the entire network and ensures high clock precision.
NTP is used when all devices within the network must be consistent in timekeeping, for example:
• In analysis of the log information and debugging information collected from different devices in network management, time must be used as the reference basis.
• All devices must use the same reference clock in a charging system.
• To implement certain functions, such as scheduled restart of all devices within the network, all devices must be consistent in timekeeping.
• When multiple systems process a complex event in cooperation, these systems must use the same reference clock to ensure the correct execution sequence.
• For incremental backup between a backup server and clients, timekeeping must be synchronized between the backup server and all the clients.
NTP advantages
• NTP uses a stratum to describe clock precision, and it can synchronize time among all devices within the network.
• NTP supports access control and MD5 authentication.
• NTP can unicast, multicast, or broadcast protocol messages.
How NTP works
Figure 10 shows how NTP synchronizes the system time between two devices, in this example, Device A
and Device B. Assume that:
• Prior to the time synchronization, the time of Device A is set to 10:00:00 am and that of Device B is set to 11:00:00 am.
• Device B is used as the NTP server. Device A is to be synchronized to Device B.
• It takes 1 second for an NTP message to travel from Device A to Device B, and from Device B to Device A.
Figure 10 Basic work flow of NTP
The synchronization process is as follows:
• Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The timestamp is 10:00:00 am (T1).
• When this NTP message arrives at Device B, it is timestamped by Device B. The timestamp is 11:00:01 am (T2).
• When the NTP message leaves Device B, Device B timestamps it. The timestamp is 11:00:02 am (T3).
• When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
Now, Device A can calculate the following parameters based on the timestamps:
• The roundtrip delay of an NTP message: Delay = (T4 - T1) - (T3 - T2) = 2 seconds.
• The time difference between Device A and Device B: Offset = ((T2 - T1) + (T3 - T4))/2 = 1 hour.
Based on these parameters, Device A can synchronize its own clock to the clock of Device B.
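Substituting the example timestamps into these formulas confirms the results:

```
Delay  = (T4 - T1) - (T3 - T2)
       = (10:00:03 - 10:00:00) - (11:00:02 - 11:00:01)
       = 3 s - 1 s = 2 s

Offset = ((T2 - T1) + (T3 - T4)) / 2
       = ((11:00:01 - 10:00:00) + (11:00:02 - 10:00:03)) / 2
       = (3601 s + 3599 s) / 2 = 3600 s = 1 hour
```

Note that the one-way transmission delays cancel out in the offset calculation as long as they are symmetric, which is why the offset comes out to exactly 1 hour despite the 1-second travel time in each direction.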
This is a rough description of how NTP works. For more information, see RFC 1305.
NTP message format
All NTP messages mentioned in this document refer to NTP clock synchronization messages.
NTP uses two types of messages: clock synchronization messages and NTP control messages. NTP
control messages are used in environments where network management is needed. Because NTP control
messages are not essential for clock synchronization, they are not described in this document.
A clock synchronization message is encapsulated in a UDP message, as shown in Figure 11.
Figure 11 Clock synchronization message format
The main fields are described as follows:
• LI (Leap Indicator)—A 2-bit leap indicator. If set to 11, it warns of an alarm condition (clock unsynchronized). If set to any other value, it is not processed by NTP.
• VN (Version Number)—A 3-bit version number that indicates the version of NTP. The latest version is version 4.
• Mode—A 3-bit code that indicates the work mode of NTP. This field can be set to these values:
  - 0—Reserved
  - 1—Symmetric active
  - 2—Symmetric passive
  - 3—Client
  - 4—Server
  - 5—Broadcast or multicast
  - 6—NTP control message
  - 7—Reserved for private use
• Stratum—An 8-bit integer that indicates the stratum level of the local clock, in the range of 1 to 16. Clock precision decreases from stratum 1 through stratum 16. A stratum 1 clock has the highest precision, and a stratum 16 clock is not synchronized.
• Poll—An 8-bit signed integer that indicates the maximum interval between successive messages, which is called the poll interval.
• Precision—An 8-bit signed integer that indicates the precision of the local clock.
• Root Delay—Roundtrip delay to the primary reference source.
• Root Dispersion—The maximum error of the local clock relative to the primary reference source.
• Reference Identifier—Identifier of the particular reference source.
• Reference Timestamp—The local time at which the local clock was last set or corrected.
• Originate Timestamp—The local time at which the request departed from the client for the service host.
• Receive Timestamp—The local time at which the request arrived at the service host.
• Transmit Timestamp—The local time at which the reply departed from the service host for the client.
• Authenticator—Authentication information.
NTP operation modes
Devices that run NTP can implement clock synchronization in one of the following modes:
• Client/server mode
• Symmetric peers mode
• Broadcast mode
• Multicast mode
You can select operation modes of NTP as needed. If the IP address of the NTP server or peer is unknown and many devices in the network need to be synchronized, you can adopt the broadcast or multicast mode. In client/server or symmetric peers mode, a device is synchronized from the specified server or peer, so clock reliability is enhanced.
Client/server mode
Figure 12 Client/server mode
When operating in client/server mode, a client sends a clock synchronization message to servers with
the Mode field in the message set to 3 (client mode). Upon receiving the message, the servers
automatically operate in server mode and send a reply, with the Mode field in the messages set to 4
(server mode). Upon receiving the replies from the servers, the client performs clock filtering and selection
and synchronizes its local clock to that of the optimal reference source.
In client/server mode, a client can be synchronized to a server, but not vice versa.
Symmetric peers mode
Figure 13 Symmetric peers mode
In symmetric peers mode, devices that operate in symmetric active mode and symmetric passive mode first exchange NTP messages with the Mode field set to 3 (client mode) and 4 (server mode). Then the device that operates in symmetric active mode periodically sends clock synchronization messages, with the Mode field in the messages set to 1 (symmetric active). The device that receives the messages automatically enters symmetric passive mode and sends a reply, with the Mode field in the message set to 2 (symmetric passive). This exchange of messages establishes symmetric peers mode between the two devices, so the two devices can synchronize, or be synchronized by, each other. If the clocks of both devices have been synchronized, the device whose local clock has the lower stratum level synchronizes the clock of the other device.
Broadcast mode
Figure 14 Broadcast mode
In broadcast mode, a server periodically sends clock synchronization messages to the broadcast address 255.255.255.255, with the Mode field in the messages set to 5 (broadcast mode). Clients listen to the broadcast messages from servers. When a client receives the first broadcast message, the client and the server start to exchange messages with the Mode field set to 3 (client mode) and 4 (server mode), to calculate the network delay between the client and the server. Then, the client enters broadcast client mode. The client continues listening to broadcast messages and synchronizes its local clock based on the received broadcast messages.
27
Multicast mode
Figure 15 Multicast mode
In multicast mode, a server periodically sends clock synchronization messages to the user-configured multicast address, or, if no multicast address is configured, to the default NTP multicast address 224.0.1.1, with the Mode field in the messages set to 5 (multicast mode). Clients listen to the multicast messages from servers. When a client receives the first multicast message, the client and the server start to exchange messages with the Mode field set to 3 (client mode) and 4 (server mode), to calculate the network delay between the client and the server. Then, the client enters multicast client mode. It continues listening to multicast messages and synchronizes its local clock based on the received multicast messages.
In symmetric peers mode, broadcast mode, and multicast mode, the client (or the symmetric-active peer) and the server (or the symmetric-passive peer) can operate in the specified NTP working mode only after they exchange NTP messages with the Mode field set to 3 (client mode) and 4 (server mode). During this message exchange, NTP clock synchronization can be implemented.
NTP for VPNs
The device supports multiple VPN instances when it functions as an NTP client or a symmetric active peer
to realize clock synchronization with the NTP server or symmetric passive peer in an MPLS VPN network.
For more information about MPLS L3VPN, VPN instance, and PE, see MPLS Configuration Guide.
As shown in Figure 16, users in VPN 1 and VPN 2 are connected to the MPLS backbone network through
PE devices, and services of the two VPNs are isolated. If you configure the PEs to operate in NTP client
or symmetric active mode, and specify the VPN to which the NTP server or NTP symmetric passive peer
belongs, the clock synchronization between PEs and CEs of the two VPNs can be realized.
Figure 16 Network diagram
NTP configuration task list
Task                                                 Remarks
Configuring NTP operation modes                      Required.
Configuring the local clock as a reference source    Optional.
Configuring optional parameters for NTP              Optional.
Configuring access-control rights                    Optional.
Configuring NTP authentication                       Optional.
Configuring NTP operation modes
Devices can implement clock synchronization in one of the following modes:
• Client/server mode—Configure only clients.
• Symmetric peers mode—Configure only symmetric-active peers.
• Broadcast mode—Configure both clients and servers.
• Multicast mode—Configure both clients and servers.
A single device can have a maximum of 128 associations at the same time, including static associations
and dynamic associations.
A static association refers to an association that a user has manually created by using an NTP command.
A dynamic association is a temporary association created by the system during operation. A dynamic association is removed if the system fails to receive messages from the peer within a specific period of time.
In client/server mode, for example, when you execute a command to synchronize the time to a server, the
system creates a static association, and the server just responds passively upon the receipt of a message,
rather than creating an association (static or dynamic). In symmetric mode, static associations are
created at the symmetric-active peer side, and dynamic associations are created at the symmetric-passive
peer side. In broadcast or multicast mode, static associations are created at the server side, and dynamic
associations are created at the client side.
Configuring NTP client/server mode
If you specify the source interface for NTP messages with the source-interface keyword, NTP uses the primary IP address of the specified interface as the source IP address of the NTP messages.
A device can act as a server to synchronize other devices only after it is synchronized. If a server has a
stratum level higher than or equal to a client, the client does not synchronize to that server.
In the ntp-service unicast-server command, ip-address must be a unicast address, rather than a
broadcast address, a multicast address or the IP address of the local clock.
To specify an NTP server on the client:
1. Enter system view:
   system-view
2. Specify an NTP server for the device:
   ntp-service unicast-server [ vpn-instance vpn-instance-name ] { ip-address | server-name } [ authentication-keyid keyid | priority | source-interface interface-type interface-number | version number ] *
   By default, no NTP server is specified. You can configure multiple servers by repeating the command. The clients will select the optimal reference source.
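The steps above can be sketched as a minimal client-side configuration; the server address 10.1.1.1 is a hypothetical example:

```
<Sysname> system-view
[Sysname] ntp-service unicast-server 10.1.1.1
```

Repeating the command with other server addresses adds more candidate reference sources, from which the client selects the optimal one.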
Configuring the NTP symmetric peers mode
For devices operating in symmetric mode, specify a symmetric-passive peer on a symmetric-active peer.
Follow these guidelines when you configure the NTP symmetric peers mode:
• Use the ntp-service refclock-master command or any NTP configuration command in "Configuring NTP operation modes" to enable NTP. Otherwise, a symmetric-passive peer does not process NTP messages from a symmetric-active peer.
• To ensure time synchronization accuracy, do not specify different reference sources for a symmetric-active peer and a symmetric-passive peer if they also operate as NTP clients in client/server, broadcast, or multicast mode.
• Either the symmetric-active peer or the symmetric-passive peer must be in synchronized state. Otherwise, clock synchronization does not proceed.
• After you specify the source interface for NTP messages with the source-interface keyword, the source IP address of the NTP messages is set to the primary IP address of the specified interface.
• You can configure multiple symmetric-passive peers by repeating the ntp-service unicast-peer command.
To specify a symmetric-passive peer on the active peer:
1. Enter system view:
   system-view
2. Specify a symmetric-passive peer for the device:
   ntp-service unicast-peer [ vpn-instance vpn-instance-name ] { ip-address | peer-name } [ authentication-keyid keyid | priority | source-interface interface-type interface-number | version number ] *
   By default, no symmetric-passive peer is specified. The ip-address argument must be a unicast address, rather than a broadcast address, a multicast address, or the IP address of the local clock.
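The steps above can be sketched as a minimal configuration on the active peer; the passive peer's address 10.1.1.2 is a hypothetical example:

```
<Sysname> system-view
[Sysname] ntp-service unicast-peer 10.1.1.2
```

No symmetric-peer command is needed on the passive side, but NTP must be enabled there, as noted in the guidelines above.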
Configuring NTP broadcast mode
The broadcast server periodically sends NTP broadcast messages to the broadcast address
255.255.255.255. After receiving the messages, the device operating in NTP broadcast client mode
sends a reply and synchronizes its local clock.
Configure the NTP broadcast mode on both the server and clients. The NTP broadcast mode can only be
configured in a specific interface view because an interface needs to be specified on the broadcast
server for sending NTP broadcast messages and on each broadcast client for receiving broadcast
messages.
For more information about tunnel interfaces, see Layer 3—IP Services Configuration Guide.
Configuring a broadcast client
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
   This command enters the view of the interface for receiving NTP broadcast messages.
3. Configure the device to operate in NTP broadcast client mode:
   ntp-service broadcast-client
Configuring the broadcast server
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
   This command enters the view of the interface for sending NTP broadcast messages.
3. Configure the device to operate in NTP broadcast server mode:
   ntp-service broadcast-server [ authentication-keyid keyid | version number ] *
   A broadcast server can synchronize broadcast clients only when its clock has been synchronized.
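The two procedures above can be sketched together as follows, assuming the NTP messages travel over interface Ethernet 1/1 (a hypothetical choice):

```
# On the broadcast server:
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] ntp-service broadcast-server

# On each broadcast client:
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] ntp-service broadcast-client
```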
Configuring NTP multicast mode
The multicast server periodically sends NTP multicast messages to multicast clients, which send replies
after receiving the messages and synchronize their local clocks.
Configure the NTP multicast mode on both the server and clients. The NTP multicast mode must be
configured in a specific interface view.
For more information about tunnel interfaces, see Layer 3—IP Services Configuration Guide.
Configuring a multicast client
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
   This command enters the view of the interface for receiving NTP multicast messages.
3. Configure the device to operate in NTP multicast client mode:
   ntp-service multicast-client [ ip-address ]
   You can configure up to 1024 multicast clients, of which 128 can take effect at the same time.
Configuring the multicast server
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
   This command enters the view of the interface for sending NTP multicast messages.
3. Configure the device to operate in NTP multicast server mode:
   ntp-service multicast-server [ ip-address ] [ authentication-keyid keyid | ttl ttl-number | version number ] *
   A multicast server can synchronize multicast clients only when its clock has been synchronized.
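The two procedures above can be sketched together as follows, assuming interface Ethernet 1/1 (hypothetical) and the default multicast address 224.0.1.1:

```
# On the multicast server:
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] ntp-service multicast-server

# On each multicast client:
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] ntp-service multicast-client
```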
Configuring the local clock as a reference source
A network device can get its clock synchronized in either of the following two ways:
• Synchronized to the local clock, which operates as the reference source.
• Synchronized to another device on the network in any of the four NTP operation modes previously described.
If you configure two synchronization modes, the device selects the optimal clock as the reference source.
Typically, the stratum level of the NTP server that is synchronized from an authoritative clock (such as an
atomic clock) is set to 1. This NTP server operates as the primary reference source on the network, and
other devices synchronize to it. The number of NTP hops that devices in a network are away from the
primary reference source determines the stratum levels of the devices.
If you configure the local clock as a reference clock, the local device can act as a reference clock to
synchronize other devices in the network. Perform this configuration with caution to avoid clock errors in
the devices in the network.
To configure the local clock as a reference source:
1. Enter system view:
   system-view
2. Configure the local clock as a reference source:
   ntp-service refclock-master [ ip-address ] [ stratum ]
   The value of the ip-address argument must be 127.127.1.u, where u represents the NTP process ID in the range of 0 to 3.
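As a minimal sketch of this procedure (the process ID 0 and stratum level 2 are illustrative values):

```
<Sysname> system-view
[Sysname] ntp-service refclock-master 127.127.1.0 2
```

Choose the stratum level carefully: other devices treat this device as being that many hops of precision away from an authoritative clock.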
Configuring optional parameters for NTP
This section explains how to configure the optional parameters for NTP.
Specifying the source interface for NTP messages
If you specify the source interface for NTP messages, the device sets the source IP address of the NTP
messages as the primary IP address of the specified interface when sending the NTP messages.
When the device responds to an NTP request received, the source IP address of the NTP response is
always the IP address of the interface that received the NTP request.
Configuration guidelines
• The source interface for NTP unicast messages is the interface specified in the ntp-service unicast-server or ntp-service unicast-peer command.
• The source interface for NTP broadcast or multicast messages is the interface where you configure the ntp-service broadcast-server or ntp-service multicast-server command.
• If the specified source interface goes down, NTP uses the primary IP address of the outgoing interface as the source IP address.
Configuration procedure
To specify the source interface for NTP messages:
1. Enter system view:
   system-view
2. Specify the source interface for NTP messages:
   ntp-service source-interface interface-type interface-number
   By default, no source interface is specified for NTP messages, and the system uses the IP address of the interface determined by the matching route as the source IP address of NTP messages.
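As a minimal sketch, assuming the device's stable management address sits on interface LoopBack0 (a hypothetical choice):

```
<Sysname> system-view
[Sysname] ntp-service source-interface loopback 0
```

Sourcing NTP messages from a loopback interface keeps the source address constant even when physical interfaces go up and down.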
Disabling an interface from receiving NTP messages
If NTP is enabled, all interfaces can receive NTP messages by default. You can disable an interface from receiving NTP messages through the following configuration.
To disable an interface from receiving NTP messages:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Disable the interface from receiving NTP messages:
   ntp-service in-interface disable
   By default, an interface is enabled to receive NTP messages.
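As a minimal sketch (the interface Ethernet 1/2 is a hypothetical example of an untrusted-facing interface):

```
<Sysname> system-view
[Sysname] interface ethernet 1/2
[Sysname-Ethernet1/2] ntp-service in-interface disable
```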
Configuring the allowed maximum number of dynamic sessions
To configure the maximum number of dynamic sessions allowed to be established locally:
1. Enter system view:
   system-view
2. Configure the maximum number of dynamic sessions:
   ntp-service max-dynamic-sessions number
   The default is 100.
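As a minimal sketch that lowers the limit from the default of 100 (the value 50 is illustrative):

```
<Sysname> system-view
[Sysname] ntp-service max-dynamic-sessions 50
```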
Configuring access-control rights
From the highest to the lowest, the NTP service access-control rights are peer, server, synchronization, and query. If a device receives an NTP request, it performs an access-control right match and uses the first matched right. If no matched right is found, the device drops the NTP request.
• Query—Control query permitted. This level of right permits the peer devices to perform control query to the NTP service on the local device, but does not permit a peer device to synchronize its clock to that of the local device. "Control query" refers to querying the states of the NTP service, including alarm information, authentication status, and clock source information.
• Synchronization—Server access only. This level of right permits a peer device to synchronize its clock to that of the local device, but does not permit the peer devices to perform control query.
• Server—Server access and query permitted. This level of right permits the peer devices to perform synchronization and control query to the local device, but does not permit the local device to synchronize its clock to that of a peer device.
• Peer—Full access. This level of right permits the peer devices to perform synchronization and control query to the local device, and also permits the local device to synchronize its clock to that of a peer device.
The access-control right mechanism provides only a minimum level of security protection for a system running NTP. A more secure method is identity authentication.
Configuration prerequisites
Before you configure the NTP service access-control right to the local device, create and configure an
ACL associated with the access-control right. For more information about ACLs, see ACL and QoS
Configuration Guide.
Configuration procedure
To configure the NTP service access-control right to the local device:
1. Enter system view:
   system-view
2. Configure the NTP service access-control right for a peer device to access the local device:
   ntp-service access { peer | query | server | synchronization } acl-number
   The default is peer.
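As a minimal sketch, assuming a basic ACL numbered 2001 (hypothetical) has already been configured to permit the source addresses of the trusted peers:

```
<Sysname> system-view
[Sysname] ntp-service access peer 2001
```

Requests from addresses not permitted by the ACL fall through to the next configured right, or are dropped if none matches.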
Configuring NTP authentication
Enable NTP authentication for a system running NTP in a network that demands high security. NTP authentication enhances network security by using client-server key authentication, which prohibits a client from synchronizing with a device that fails authentication.
To configure NTP authentication, do the following:
• Enable NTP authentication.
• Configure an authentication key.
• Configure the key as a trusted key.
• Associate the specified key with an NTP server or a symmetric peer.
These tasks are required. If any task is omitted, NTP authentication cannot function.
Configuring NTP authentication in client/server mode
Follow these instructions to configure NTP authentication in client/server mode:
• A client can synchronize to the server only when you configure all the required tasks on both the client and the server.
• On the client, if NTP authentication is not enabled or no key is specified to associate with the NTP server, the client is not authenticated. Regardless of whether NTP authentication is enabled on the server, clock synchronization between the server and the client can be performed.
• On the client, if NTP authentication is enabled and a key is specified to associate with the NTP server, but the key is not a trusted key, the client does not synchronize to the server, regardless of whether NTP authentication is enabled on the server.
Configuring NTP authentication for a client
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
5. Associate the specified key with an NTP server:
   ntp-service unicast-server { ip-address | server-name } authentication-keyid keyid
   You can associate a non-existing key with an NTP server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the NTP server.
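The five steps above can be sketched as follows on the client; the key ID 42, the key string aNiceKey, and the server address 10.1.1.1 are hypothetical examples:

```
<Sysname> system-view
[Sysname] ntp-service authentication enable
[Sysname] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
[Sysname] ntp-service reliable authentication-keyid 42
[Sysname] ntp-service unicast-server 10.1.1.1 authentication-keyid 42
```

The server must be configured with the same key ID and key string, as described in the next procedure, for authentication to succeed.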
Configuring NTP authentication for a server
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
Configuring NTP authentication in symmetric peers mode
Follow these instructions to configure NTP authentication in symmetric peers mode:
• An active symmetric peer can synchronize to the passive symmetric peer only when you configure all the required tasks on both the active symmetric peer and the passive symmetric peer.
• When the active peer has a greater stratum level than the passive peer:
  - On the active peer, if NTP authentication is not enabled or no key is specified to associate with the passive peer, the active peer synchronizes to the passive peer as long as NTP authentication is disabled on the passive peer.
  - On the active peer, if NTP authentication is enabled and a key is associated with the passive peer, but the key is not a trusted key, the active peer does not synchronize to the passive peer, regardless of whether NTP authentication is enabled on the passive peer.
• When the active peer has a smaller stratum level than the passive peer:
  - On the active peer, if NTP authentication is not enabled, no key is specified to associate with the passive peer, or the key is not a trusted key, the active peer can synchronize to the passive peer as long as NTP authentication is disabled on the passive peer.
Configuring NTP authentication for an active peer
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the active symmetric peer and passive symmetric peer.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
5. Associate the specified key with the passive peer:
   ntp-service unicast-peer { ip-address | peer-name } authentication-keyid keyid
   You can associate a non-existing key with a passive peer. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the passive peer.
Configuring NTP authentication for a passive peer
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the active symmetric peer and passive symmetric peer.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
Configuring NTP authentication in broadcast mode
Follow these instructions to configure NTP authentication in broadcast mode:
• A broadcast client can synchronize to the broadcast server only when you configure all the required tasks on both the broadcast client and server.
• If NTP authentication is not enabled on the client, the broadcast client can synchronize to the broadcast server regardless of whether NTP authentication is enabled on the server.
Configuring NTP authentication for a broadcast client
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
Configuring NTP authentication for a broadcast server
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
5. Enter interface view:
   interface interface-type interface-number
6. Associate the specified key with the broadcast server:
   ntp-service broadcast-server authentication-keyid keyid
   You can associate a non-existing key with the broadcast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the broadcast server.
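The six steps above can be sketched as follows on the broadcast server; the key ID 42, the key string aNiceKey, and interface Ethernet 1/1 are hypothetical examples:

```
<Sysname> system-view
[Sysname] ntp-service authentication enable
[Sysname] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
[Sysname] ntp-service reliable authentication-keyid 42
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] ntp-service broadcast-server authentication-keyid 42
```

Each broadcast client needs the same key configured, enabled, and marked as trusted, as shown in the preceding client procedure.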
Configuring NTP authentication in multicast mode
Follow these instructions to configure NTP authentication in multicast mode:
• A multicast client can synchronize to the multicast server only when you configure all the required tasks on both the multicast client and server.
• If NTP authentication is not enabled on the client, the multicast client can synchronize to the multicast server regardless of whether NTP authentication is enabled on the server.
Configuring NTP authentication for a multicast client
1. Enter system view:
   system-view
2. Enable NTP authentication:
   ntp-service authentication enable
   By default, NTP authentication is disabled.
3. Configure an NTP authentication key:
   ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key:
   ntp-service reliable authentication-keyid keyid
   By default, no authentication key is configured to be trusted.
Configuring NTP authentication for a multicast server

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable NTP authentication.
   Command: ntp-service authentication enable
   Remarks: By default, NTP authentication is disabled.
3. Configure an NTP authentication key.
   Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
   Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4. Configure the key as a trusted key.
   Command: ntp-service reliable authentication-keyid keyid
   Remarks: By default, no authentication key is configured to be trusted.
5. Enter interface view.
   Command: interface interface-type interface-number
   Remarks: N/A
6. Associate the specified key with the multicast server.
   Command: ntp-service multicast-server authentication-keyid keyid
   Remarks: You can associate a non-existing key with the multicast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the multicast server.
Displaying and maintaining NTP

Task: Display information about NTP service status.
Command: display ntp-service status [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.

Task: Display information about NTP sessions.
Command: display ntp-service sessions [ verbose ] [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.

Task: Display brief information about the NTP servers from the local device back to the primary reference source.
Command: display ntp-service trace [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
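The offset and delay columns of display ntp-service sessions come from the standard NTP on-wire calculation over four timestamps: t1 (client transmit), t2 (server receive), t3 (server transmit), and t4 (client receive). A sketch of that arithmetic, with made-up timestamps for illustration:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire calculation (RFC 5905):
    offset = ((t2 - t1) + (t3 - t4)) / 2   clock offset from the server
    delay  = (t4 - t1) - (t3 - t2)         round-trip path delay
    All times in seconds; offset > 0 means the server clock is ahead."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Server 10 ms ahead, 15 ms one-way path delay, 1 ms server processing.
off, dly = ntp_offset_delay(100.000, 100.025, 100.026, 100.031)
print(round(off * 1000, 3), round(dly * 1000, 3))  # → 10.0 30.0 (milliseconds)
```

The display output reports both values in milliseconds, which is why the delay column tracks the Root delay line of display ntp-service status.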
NTP configuration examples
NTP client/server mode configuration example
Network requirements
Perform the following configurations to synchronize the time between Device B and Device A:
• As shown in Figure 17, the local clock of Device A is to be used as a reference source, with the stratum level 2.
• Device B operates in client/server mode and Device A is to be used as the NTP server of Device B.
Figure 17 Network diagram
Figure 17 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 17. (Details not shown.)
2.
Configure Device A:
# Specify the local clock as the reference source, with the stratum level 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3.
Configure Device B:
# Display the NTP status of Device B before clock synchronization.
<DeviceB> display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 0.00 ms
Root dispersion: 0.00 ms
Peer dispersion: 0.00 ms
Reference time: 00:00:00.000 UTC Jan 1 1900 (00000000.00000000)
# Specify Device A as the NTP server of Device B so that Device B synchronizes to Device A.
<DeviceB> system-view
[DeviceB] ntp-service unicast-server 1.0.1.11
# Display the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
The output shows that Device B has synchronized to Device A. The stratum level of Device B is 3,
and that of Device A is 2.
# Display NTP session information for Device B, which shows that an association has been set up
between Device B and Device A.
[DeviceB] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[12345] 1.0.1.11       127.127.1.0     2     63     64    3    -75.5   31.0   16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
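The Reference time line in the status output pairs a human-readable UTC time with the raw 64-bit NTP timestamp: 32 bits of seconds since 1900-01-01 UTC, then a 32-bit binary fraction. The two representations can be cross-checked with a few lines of arithmetic:

```python
from datetime import datetime, timedelta

NTP_EPOCH = datetime(1900, 1, 1)   # NTP era 0 starts at 1900-01-01 UTC

def ntp_hex_to_utc(ts: str) -> datetime:
    """Convert a 64-bit NTP timestamp such as 'C6D94F67.5EF9DB22'
    (seconds since 1900 plus a 32-bit binary fraction) to UTC."""
    sec_hex, frac_hex = ts.split(".")
    seconds = int(sec_hex, 16)
    micros = int(frac_hex, 16) * 10**6 // 2**32   # fraction → microseconds
    return NTP_EPOCH + timedelta(seconds=seconds, microseconds=micros)

print(ntp_hex_to_utc("C6D94F67.5EF9DB22"))
# → 2005-09-19 14:53:27.370999, matching "14:53:27.371 UTC Sep 19 2005"
```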
NTP symmetric peers mode configuration example
Network requirements
Perform the following configurations to synchronize time among devices:
• As shown in Figure 18, the local clock of Device A is to be configured as a reference source, with the stratum level 2.
• The local clock of Device C is to be configured as a reference source, with the stratum level 1.
• Device B operates in client mode and Device A is to be used as the NTP server of Device B.
• Device C operates in symmetric-active mode and Device B acts as the peer of Device C.
Figure 18 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 18. (Details not shown.)
2.
Configure Device A:
# Specify the local clock as the reference source, with the stratum level 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3.
Configure Device B:
# Specify Device A as the NTP server of Device B.
<DeviceB> system-view
[DeviceB] ntp-service unicast-server 3.0.1.31
4.
Configure Device C (after Device B is synchronized to Device A):
# Specify the local clock as the reference source, with the stratum level 1.
<DeviceC> system-view
[DeviceC] ntp-service refclock-master 1
# Configure Device B as a symmetric peer after local synchronization.
[DeviceC] ntp-service unicast-peer 3.0.1.32
In the step above, Device B and Device C are configured as symmetric peers, with Device C in the symmetric-active mode and Device B in the symmetric-passive mode. Because the stratum level of Device C is 1 while that of Device B is 3, Device B synchronizes to Device C.
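The outcome illustrates NTP's preference for the lower-stratum source. As a deliberately simplified sketch (real NTP also weighs reachability, delay, and dispersion through its selection and clustering algorithms, not stratum alone):

```python
def choose_sync_source(peers):
    """Pick the eligible peer with the lowest stratum; a smaller
    stratum means fewer hops from the primary reference clock."""
    return min(peers, key=lambda p: p["stratum"])

# Device B's two candidates in this example:
peers = [{"name": "DeviceA", "stratum": 2},
         {"name": "DeviceC", "stratum": 1}]
print(choose_sync_source(peers)["name"])   # → DeviceC
```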
# Display the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 3.0.1.33
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: -21.1982 ms
Root delay: 15.00 ms
Root dispersion: 775.15 ms
Peer dispersion: 34.29 ms
Reference time: 15:22:47.083 UTC Sep 19 2005 (C6D95647.153F7CED)
The output shows that Device B has synchronized to Device C. The stratum level of Device B is 2,
and that of Device C is 1.
# Display NTP session information for Device B, which shows that an association has been set up
between Device B and Device C.
[DeviceB] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset   delay  disper
**************************************************************************
[245] 3.0.1.31         127.127.1.0     2     15     64    24   10535.0  19.6   16.0
[1234] 3.0.1.33        LOCL            1     14     64    27   -77.0    14.5   14.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  2
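The reach column in the session outputs is an 8-bit shift register: after each poll it shifts left and records a 1 in the low bit if the peer answered. A value of 255 means the last eight polls all succeeded; a value such as 14 or 15 means the association is young and only a few polls have completed. A sketch of the update:

```python
def update_reach(reach: int, responded: bool) -> int:
    """Shift the 8-bit reachability register left one poll interval,
    recording the newest poll result in the low-order bit."""
    return ((reach << 1) | int(responded)) & 0xFF

reach = 0
for _ in range(8):                  # eight successful polls in a row
    reach = update_reach(reach, True)
print(reach)                        # → 255

reach = update_reach(reach, False)  # one missed poll
print(reach)                        # → 254
```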
NTP broadcast mode configuration example
Network requirements
As shown in Figure 19, Router C functions as the NTP server for multiple devices on a network segment
and synchronizes the time among multiple devices.
• Router C's local clock is to be used as a reference source, with the stratum level 2.
• Router C operates in broadcast server mode and sends broadcast messages from Ethernet 1/1.
• Router B and Router A operate in broadcast client mode and receive broadcast messages through their respective Ethernet 1/1.
Figure 19 Network diagram (Router C: Eth1/1, 3.0.1.31/24; Router A: Eth1/1, 3.0.1.30/24; Router B: Eth1/1, 3.0.1.32/24)
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 19. (Details not shown.)
2.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to operate in broadcast server mode and send broadcast messages through
Ethernet 1/1.
[RouterC] interface ethernet 1/1
[RouterC-Ethernet1/1] ntp-service broadcast-server
3.
Configure Router A:
# Configure Router A to operate in broadcast client mode and receive broadcast messages on
Ethernet 1/1.
<RouterA> system-view
[RouterA] interface ethernet 1/1
[RouterA-Ethernet1/1] ntp-service broadcast-client
4.
Configure Router B:
# Configure Router B to operate in broadcast client mode and receive broadcast messages on
Ethernet 1/1.
<RouterB> system-view
[RouterB] interface ethernet 1/1
[RouterB-Ethernet1/1] ntp-service broadcast-client
Router A and Router B get synchronized upon receiving a broadcast message from Router C.
# Take Router A as an example. Display the NTP status of Router A after clock synchronization.
[RouterA-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 3,
and that of Router C is 2.
# Display NTP session information for Router A, which shows that an association has been set up
between Router A and Router C.
[RouterA-Ethernet1/1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0     2     254    64    62   -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
NTP multicast mode configuration example
Network requirements
As shown in Figure 20, Router C functions as the NTP server for multiple devices on different network
segments and synchronizes the time among multiple devices.
• Router C's local clock is to be used as a reference source, with the stratum level 2.
• Router C operates in multicast server mode and sends multicast messages from Ethernet 1/1.
• Router D and Router A operate in multicast client mode and receive multicast messages through their respective Ethernet 1/1.
Figure 20 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 20. (Details not shown.)
2.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to operate in multicast server mode and send multicast messages through
Ethernet 1/1.
[RouterC] interface ethernet 1/1
[RouterC-Ethernet1/1] ntp-service multicast-server
3.
Configure Router D:
# Configure Router D to operate in multicast client mode and receive multicast messages on
Ethernet 1/1.
<RouterD> system-view
[RouterD] interface ethernet 1/1
[RouterD-Ethernet1/1] ntp-service multicast-client
Because Router D and Router C are on the same subnet, Router D can receive the multicast messages from Router C, and synchronize to Router C, without the multicast functions being enabled.
# Display the NTP status of Router D after clock synchronization.
[RouterD-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router D has synchronized to Router C. The stratum level of Router D is 3,
and that of Router C is 2.
# Display NTP session information for Router D, which shows that an association has been set up
between Router D and Router C.
[RouterD-Ethernet1/1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0     2     254    64    62   -16.0   31.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
4.
Configure Router B:
Because Router A and Router C are on different subnets, you must enable the multicast functions on
Router B before Router A can receive multicast messages from Router C.
# Enable the IP multicast function.
<RouterB> system-view
[RouterB] multicast routing-enable
[RouterB] interface ethernet 1/1
[RouterB-Ethernet1/1] igmp enable
[RouterB-Ethernet1/1] igmp static-group 224.0.1.1
[RouterB-Ethernet1/1] quit
[RouterB] interface ethernet 1/2
[RouterB-Ethernet1/2] pim dm
5.
Configure Router A:
<RouterA> system-view
[RouterA] interface ethernet 1/1
# Configure Router A to operate in multicast client mode and receive multicast messages on
Ethernet 1/1.
[RouterA-Ethernet1/1] ntp-service multicast-client
# Display the NTP status of Router A after clock synchronization.
[RouterA-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 40.00 ms
Root dispersion: 10.83 ms
Peer dispersion: 34.30 ms
Reference time: 16:02:49.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 3,
and that of Router C is 2.
# Display NTP session information for Router A, which shows that an association has been set up
between Router A and Router C.
[RouterA-Ethernet1/1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0     2     255    64    26   -16.0   40.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
For more information about how to configure IGMP and PIM, see IP Multicast Configuration Guide.
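The igmp static-group command above joins Router B's interface to 224.0.1.1, the well-known NTP multicast group. On a general-purpose host, a multicast NTP listener would join the same group through a socket option; the sketch below only builds the membership-request structure and does not open a live socket:

```python
import socket

NTP_MULTICAST_GROUP = "224.0.1.1"   # well-known NTP multicast group address

def build_membership_request(group: str, local_ip: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq structure: the 4-byte group address followed by
    the 4-byte local interface address, as used with IP_ADD_MEMBERSHIP."""
    return socket.inet_aton(group) + socket.inet_aton(local_ip)

mreq = build_membership_request(NTP_MULTICAST_GROUP)
# A live listener would bind UDP port 123 and then call:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
print(len(mreq))   # → 8
```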
Configuration example for NTP client/server mode with
authentication
Network requirements
As shown in Figure 21, perform the following configurations to synchronize the time between Device B
and Device A and ensure network security.
• The local clock of Device A is to be configured as a reference source, with the stratum level 2.
• Device B operates in client mode, with Device A as its NTP server.
• NTP authentication is to be enabled on both Device A and Device B.
Figure 21 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 21. (Details not shown.)
2.
Configure Device A:
# Specify the local clock as the reference source, with the stratum level 2.
<DeviceA> system-view
[DeviceA] ntp-service refclock-master 2
3.
Configure Device B:
<DeviceB> system-view
# Enable NTP authentication on Device B.
[DeviceB] ntp-service authentication enable
# Set an authentication key.
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[DeviceB] ntp-service reliable authentication-keyid 42
# Specify Device A as the NTP server of Device B.
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
Before Device B can synchronize to Device A, enable NTP authentication for Device A.
4.
Perform the following configuration on Device A:
# Enable NTP authentication.
[DeviceA] ntp-service authentication enable
# Set an authentication key.
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 42
# Display the NTP status of Device B after clock synchronization.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
The output shows that Device B has synchronized to Device A. The stratum level of Device B is 3,
and that of Device A is 2.
# Display NTP session information for Device B, which shows that an association has been set up
between Device B and Device A.
[DeviceB] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[12345] 1.0.1.11       127.127.1.0     2     63     64    3    -75.5   31.0   16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
Configuration example for NTP broadcast mode with
authentication
Network requirements
As shown in Figure 22, Router C functions as the NTP server for multiple devices on different network
segments and synchronizes the time among multiple devices. Router B authenticates the reference source.
• Router C's local clock is to be used as a reference source, with the stratum level 3.
• Router C operates in broadcast server mode and sends broadcast messages from Ethernet 1/1.
• Router A and Router B operate in broadcast client mode and receive broadcast messages through Ethernet 1/1.
• Configure NTP authentication on both Router B and Router C.
Figure 22 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 22. (Details not shown.)
2.
Configure Router A:
# Configure Router A to operate in NTP broadcast client mode and receive NTP broadcast
messages on Ethernet 1/1.
<RouterA> system-view
[RouterA] interface ethernet 1/1
[RouterA-Ethernet1/1] ntp-service broadcast-client
3.
Configure Router B:
# Enable NTP authentication on Router B. Configure an NTP authentication key, with the key ID of
88 and key value of 123456. Specify the key as a trusted key.
<RouterB> system-view
[RouterB] ntp-service authentication enable
[RouterB] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterB] ntp-service reliable authentication-keyid 88
# Configure Router B to operate in broadcast client mode and receive NTP broadcast messages on
Ethernet 1/1.
[RouterB] interface ethernet 1/1
[RouterB-Ethernet1/1] ntp-service broadcast-client
4.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 3.
<RouterC> system-view
[RouterC] ntp-service refclock-master 3
# Configure Router C to operate in NTP broadcast server mode and use Ethernet 1/1 to send NTP
broadcast packets.
[RouterC] interface ethernet 1/1
[RouterC-Ethernet1/1] ntp-service broadcast-server
[RouterC-Ethernet1/1] quit
# Router A synchronizes its local clock based on the received broadcast messages sent from Router
C.
# Display NTP service status information on Router A.
[RouterA-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 4,
and that of Router C is 3.
# Display NTP session information for Router A, which shows that an association has been set up
between Router A and Router C.
[RouterA-Ethernet1/1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0     3     254    64    62   -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
# NTP authentication is enabled on Router B, but not enabled on Router C, so Router B cannot
synchronize to Router C.
[RouterB-Ethernet1/1] display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
Nominal frequency: 100.0000 Hz
Actual frequency: 100.0000 Hz
Clock precision: 2^18
Clock offset: 0.0000 ms
Root delay: 0.00 ms
Root dispersion: 0.00 ms
Peer dispersion: 0.00 ms
Reference time: 00:00:00.000 UTC Jan 1 1900(00000000.00000000)
# Enable NTP authentication on Router C. Configure an NTP authentication key, with the key ID of
88 and key value of 123456. Specify the key as a trusted key.
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterC] ntp-service reliable authentication-keyid 88
# Specify Router C as an NTP broadcast server, and associate the key 88 with Router C.
[RouterC] interface ethernet 1/1
[RouterC-Ethernet1/1] ntp-service broadcast-server authentication-keyid 88
# After NTP authentication is enabled on Router C, Router B can synchronize to Router C. Display
NTP service status information on Router B.
[RouterB-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router B has synchronized to Router C. The stratum level of Router B is 4, and
that of Router C is 3.
# Display NTP session information for Router B, which shows that an association has been set up
between Router B and Router C.
[RouterB-Ethernet1/1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0     3     254    64    62   -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
# Configuration of NTP authentication on Router C does not affect Router A. Router A still
synchronizes to Router C.
[RouterA-Ethernet1/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
Configuration example for MPLS VPN time synchronization in
client/server mode
Network requirements
As shown in Figure 23, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 are
devices in VPN 1. To synchronize the time between PE 2 and CE 1 in VPN 1, configure CE 1's local clock
as a reference source, with the stratum level 1; configure PE 2 to operate in client/server mode, with
CE 1 as its NTP server; and specify VPN 1 as the target VPN.
MPLS L3VPN time synchronization can be implemented only in the unicast mode (client/server mode or
symmetric peers mode), but not in the multicast or broadcast mode.
Figure 23 Network diagram

Device   Interface   IP address
CE 1     S2/0        10.1.1.1/24
CE 2     S2/0        10.2.1.1/24
CE 3     S2/0        10.3.1.1/24
CE 4     S2/0        10.4.1.1/24
P        S2/0        172.1.1.2/24
         S2/1        172.2.1.1/24
PE 1     S2/0        10.1.1.2/24
         S2/1        172.1.1.1/24
         S2/2        10.2.1.2/24
PE 2     S2/0        10.3.1.2/24
         S2/1        172.2.1.2/24
         S2/2        10.4.1.2/24
Configuration procedure
Before you perform the following configuration, complete the MPLS VPN-related configurations and
make sure CE 1 and PE 1, PE 1 and PE 2, and PE 2 and CE 3 can reach each other. For information
about configuring MPLS VPN, see MPLS Configuration Guide.
1.
Set the IP address for each interface as shown in Figure 23. (Details not shown.)
2.
Configure CE 1:
# Specify the local clock as the reference source, with the stratum level 1.
<CE1> system-view
[CE1] ntp-service refclock-master 1
3.
Configure PE 2:
# Specify CE 1 as the NTP server for VPN 1.
<PE2> system-view
[PE2] ntp-service unicast-server vpn-instance vpn1 10.1.1.1
# Display the NTP session information and status information on PE 2 a certain period of time later.
The information should show that PE 2 has been synchronized to CE 1, with the stratum level 2.
[PE2] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 10.1.1.1
Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 47.00 ms
Root dispersion: 0.18 ms
Peer dispersion: 34.29 ms
Reference time: 02:36:23.119 UTC Jan 1 2001(BDFA6BA7.1E76C8B4)
[PE2] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[12345]10.1.1.1        LOCL            1     7      64    15   0.0     47.0   7.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
[PE2] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.013500, synch distance 0.03154
server 10.1.1.1,stratum 1, offset -0.506500, synch distance 0.03429
refid 127.127.1.0
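Each line of display ntp-service trace names one server on the path from the local device back to the primary reference source, with its stratum and synchronization distance. A small parser illustrating the line format (the sample lines are copied from the output above):

```python
import re

TRACE = """\
server 127.0.0.1,stratum 2, offset -0.013500, synch distance 0.03154
server 10.1.1.1,stratum 1, offset -0.506500, synch distance 0.03429
"""

def parse_trace(text):
    """Extract a (server, stratum, offset) tuple for each hop of the
    trace output; stratum decreases toward the reference source."""
    hops = []
    for line in text.splitlines():
        m = re.match(r"server ([\d.]+),stratum (\d+), offset (-?[\d.]+)", line)
        if m:
            hops.append((m.group(1), int(m.group(2)), float(m.group(3))))
    return hops

print(parse_trace(TRACE)[0])   # → ('127.0.0.1', 2, -0.0135)
```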
Configuration example for MPLS VPN time synchronization in
symmetric peers mode
Network requirements
As shown in Figure 23, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3
belong to VPN 1. To synchronize the time between PE 1 and CE 1 in VPN 1, configure CE 1's local clock
as a reference source, with the stratum level 1; configure PE 1 to operate in symmetric-active mode, with
CE 1 as its symmetric-passive peer; and specify VPN 1 as the target VPN.
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 23. (Details not shown.)
2.
Configure CE 1:
# Specify the local clock as the reference source, with the stratum level 1.
<CE1> system-view
[CE1] ntp-service refclock-master 1
3.
Configure PE 1:
# Specify CE 1 as the symmetric-passive peer for VPN 1.
<PE1> system-view
[PE1] ntp-service unicast-peer vpn-instance vpn1 10.1.1.1
# Display the NTP session information and status information on PE 1 a certain period of time later.
The information should show that PE 1 has been synchronized to CE 1, with the stratum level 2.
[PE1] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 10.1.1.1
Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 32.00 ms
Root dispersion: 0.60 ms
Peer dispersion: 7.81 ms
Reference time: 02:44:01.200 UTC Jan 1 2001(BDFA6D71.33333333)
[PE1] display ntp-service sessions
       source          reference       stra  reach  poll  now  offset  delay  disper
**************************************************************************
[12345]10.1.1.1        LOCL            1     1      64    29   -12.0   32.0   15.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
[PE1] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.012000, synch distance 0.02448
server 10.1.1.1,stratum 1, offset 0.003500, synch distance 0.00781
refid 127.127.1.0
Configuring cluster management
Overview
Cluster management is an effective way to manage large numbers of dispersed network devices in
groups and offers the following advantages:
• Saves public IP address resources. You do not need to assign one public IP address for every cluster member device.
• Simplifies configuration and management tasks. By configuring a public IP address on one device, you can configure and manage a group of devices without having to log in to them one by one.
• Provides the topology discovery and display functions, which are useful for network monitoring and debugging.
• Enables concurrent software upgrading and configuration on multiple devices, free of topology and distance limitations.
This feature is more applicable to networks that do not require a high level of security.
Roles in a cluster
The devices in a cluster play different roles according to their different functions and status. You can
specify the following three roles for the devices:
• Management device (Administrator)—A device that provides management interfaces for all devices in a cluster and the only device configured with a public IP address. You can specify one and only one management device for a cluster. Any configuration, management, and monitoring of the other devices in a cluster can be implemented only through the management device. The management device collects topology data to discover and define candidate devices.
• Member device (Member)—A device managed by the management device in a cluster.
• Candidate device (Candidate)—A device that does not belong to any cluster but can be added to a cluster. The management device collects topology data of candidate devices but does not add them to the cluster.
Figure 24 Cluster example (a network manager at 69.110.1.1/24 reaches the cluster over the IP network through the administrator device at 69.110.1.100/24; the cluster contains the administrator and several member devices, with a candidate device outside the cluster)
As shown in Figure 24, the device configured with a public IP address and performing the management
function is the management device, the other managed devices are member devices, and the device that
does not belong to any cluster but can be added to a cluster is a candidate device. The management
device and the member devices form the cluster.
Figure 25 Role change in a cluster
As shown in Figure 25, a device in a cluster changes its role according to the following rules:
•
A candidate device becomes a management device when you create a cluster on it. A
management device becomes a candidate device only after the cluster is removed.
•
A candidate device becomes a member device after it is added to a cluster. A member device
becomes a candidate device after it is removed from the cluster.
How a cluster works
Cluster management is implemented through the HW Group Management Protocol version 2 (HGMPv2),
which comprises the following three protocols:
• Neighbor Discovery Protocol (NDP)
• Neighbor Topology Discovery Protocol (NTDP)
• Cluster
These protocols enable topology data collection and cluster establishment and maintenance.
The following is the topology data collection procedure:
• Every device uses NDP to collect data on the directly connected neighbors, including their software version, host name, MAC address, and port number.
• The management device uses NTDP to collect data on the devices within user-specified hops and their topology data, and identifies candidate devices based on the topology data.
• The management device adds or deletes a member device and modifies the cluster management configuration according to the candidate device information collected through NTDP.
About NDP
NDP discovers information about directly connected neighbors, including the device name, software
version, and connecting port of the adjacent devices. NDP works in the following ways:
• A device running NDP periodically sends NDP packets to its neighbors. An NDP packet carries NDP information (including the device name, software version, and connecting port) and the holdtime, which indicates how long the receiving devices will keep the NDP information. At the same time, the device also receives, but does not forward, the NDP packets from its neighbors.
• A device running NDP stores and maintains an NDP table, with one entry per neighbor. When the device receives an NDP packet from a new neighbor for the first time, it adds an entry to the NDP table. If the NDP information carried in a subsequent packet differs from the stored information, the corresponding entry and holdtime in the NDP table are updated; otherwise, only the holdtime of the entry is updated. If no NDP information is received from the neighbor before the holdtime expires, the corresponding entry is removed from the NDP table.
NDP runs on the data link layer and supports different network layer protocols.
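The NDP table behavior described above (create an entry on the first packet from a neighbor, rewrite it when the advertised information changes, refresh only the holdtime otherwise, and expire the entry when the holdtime runs out) can be sketched as a small state machine. The class and field names below are illustrative, not part of the device implementation:

```python
class NdpTable:
    """Toy model of an NDP neighbor table keyed by neighbor identity."""

    def __init__(self):
        self.entries = {}   # neighbor -> [info, expiry time]

    def receive(self, neighbor, info, holdtime, now):
        entry = self.entries.get(neighbor)
        if entry is None or entry[0] != info:
            # New neighbor, or changed NDP information: (re)write the entry.
            self.entries[neighbor] = [info, now + holdtime]
        else:
            # Unchanged information: refresh the holdtime only.
            entry[1] = now + holdtime

    def expire(self, now):
        """Remove entries whose holdtime has elapsed."""
        for n in [n for n, (_, t) in self.entries.items() if t <= now]:
            del self.entries[n]

table = NdpTable()
table.receive("RouterA", {"port": "Eth1/1"}, holdtime=180, now=0)
table.expire(now=100)
print(len(table.entries))   # → 1 (still within the holdtime)
table.expire(now=200)
print(len(table.entries))   # → 0 (holdtime elapsed, entry removed)
```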
About NTDP
NTDP provides information required for cluster management. It collects topology information about the
devices within the specified hop count. Based on the neighbor information stored in the neighbor table
maintained by NDP, NTDP on the management device advertises NTDP topology-collection requests to
collect the NDP information of all the devices in a specific network range as well as the connection
information of all its neighbors. The information collected will be used by the management device or the
network management software to implement required functions.
When a member device detects a change on its neighbors through its NDP table, it informs the
management device through handshake packets. The management device then triggers NTDP to
collect the specific topology information, so that topology changes are discovered in a timely manner.
The management device collects topology information periodically. You can also administratively launch
a topology information collection. The process of topology information collection is as follows:
•
The management device periodically sends NTDP topology-collection request from the
NTDP-enabled ports.
•
Upon receiving the request, the device sends an NTDP topology-collection response to the
management device, and forwards a copy of the request out of its NTDP-enabled ports to its
adjacent devices. The topology-collection response includes the basic information of the
NDP-enabled device and the NDP information of all its adjacent devices.
•
The adjacent device performs the same operation until the NTDP topology-collection request is sent
to all the devices within specified hops.
To prevent concurrent responses to an NTDP topology-collection request from causing congestion
and denial of service on the management device, a delay mechanism is introduced. You configure
the delay parameters for NTDP on the management device. As a result:
•
Each requested device waits for a period of time before forwarding an NTDP topology-collection
request on the first NTDP-enabled port.
•
After the first NTDP-enabled port forwards the request, all other NTDP-enabled ports on the
requested device forward the request in turn at a specific interval.
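The two delays combine into a simple forwarding schedule on each requested device. The following hypothetical Python sketch illustrates it, using the default 200 ms hop delay and 20 ms port delay; the function name is invented:

```python
def ntdp_forward_times(num_ports, hop_delay_ms=200, port_delay_ms=20):
    """Return the time (ms after the request arrives) at which each
    NTDP-enabled port on a requested device forwards the request:
    the first port waits hop_delay_ms, and each remaining port follows
    the previous one after port_delay_ms."""
    return [hop_delay_ms + i * port_delay_ms for i in range(num_ports)]
```

For example, a device with three NTDP-enabled ports would forward the request at 200 ms, 220 ms, and 240 ms, staggering the responses that eventually reach the management device.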
Cluster management maintenance
1. Adding a candidate device to a cluster
Specify the management device before creating a cluster. The management device
discovers and defines candidate devices through NDP and NTDP. A candidate
device can be added to the cluster automatically or manually.
After the candidate device is added to the cluster, it can obtain the member number assigned by
the management device and the private IP address used for cluster management.
2. Communication within a cluster
In a cluster the management device communicates with its member devices by sending handshake
packets to maintain connection between them. The management/member device state change is
shown in Figure 26.
Figure 26 Management/member device state change
A cluster manages the state of its member devices as follows:
•
After a candidate device is added to the cluster and becomes a member device, the management
device saves its state information and identifies it as Active. The member device also saves its state
information and identifies itself as Active.
•
The management device and member devices send handshake packets. Upon receiving the
handshake packets, the management device or a member device keeps its state as Active without
sending a response.
•
If the management device does not receive handshake packets from a member device within a
period that is three times the handshake interval, it changes the status of the member device from
Active to Connect. Likewise, if a member device fails to receive handshake packets within a period
that is three times the handshake interval, its state changes from Active to Connect.
•
During the information holdtime, if the management device receives handshake or management
packets from a member device that is in Connect state, it changes the state of the member device
to Active. Otherwise, it considers the member device to be disconnected, and changes the state of
the member device to Disconnect.
•
During the information holdtime, a member device in Connect state changes its state to Active if it
receives handshake or management packets from the management device. Otherwise, it changes
its state to Disconnect.
•
When the communication between the management device and a member device is recovered, the
member device is added to the cluster and its state changes from Disconnect to Active on itself and
the management device.
•
Besides, a member device sends handshake packets to inform the management device of neighbor
topology changes.
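The transitions above can be modeled as a small state function. This is a simplified, illustrative Python sketch with idealized timing; the function and parameter names are invented, and the defaults are the 10-second handshake interval and 60-second holdtime given later in this chapter:

```python
def next_state(state, handshake_seen, elapsed,
               handshake_interval=10, holdtime=60):
    """Simplified model of a cluster member's state as tracked by the
    management device. `elapsed` is the time (s) since the last packet
    was received; `handshake_seen` means a handshake or management
    packet has just arrived."""
    if handshake_seen:
        # Handshake/management packets keep or restore the Active state,
        # including re-adding a Disconnect member whose link recovered.
        return "Active"
    if state == "Active":
        # No packets for three handshake intervals: Active -> Connect.
        return "Connect" if elapsed >= 3 * handshake_interval else "Active"
    if state == "Connect":
        # No packets within the holdtime: Connect -> Disconnect.
        return "Disconnect" if elapsed >= holdtime else "Connect"
    return "Disconnect"
```

The same rules apply symmetrically on the member device's side, which is why both ends converge on the same view of the connection.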
Management VLAN
Management VLAN limits the cluster boundaries. All cluster control packets, including NDP, NTDP, and
handshake packets between the management device and member devices are restricted within the
cluster management VLAN.
To assign a candidate to a cluster, make sure all ports on the path from the candidate device to the
management device are in the management VLAN. If not, the candidate device cannot join the cluster.
You can manually assign ports to the management VLAN or use the management VLAN autonegotiation
function to enable automatic VLAN assignment on the management device.
To ensure the security of the cluster management VLAN, PCs and other network devices that do not
belong to the cluster are not allowed to join the management VLAN. Only ports on devices of the
cluster can join the management VLAN.
IMPORTANT:
To guarantee the communication within the cluster, ensure VLAN handling consistency on all ports on the
path from a member device or candidate device to the management device. To remove the VLAN tag of
outgoing management VLAN packets, set the management VLAN as the PVID on all the ports, including
hybrid ports. If the management VLAN is not the PVID, hybrid and trunk ports must send outgoing
management VLAN packets with the VLAN tag.
For more information about VLAN, see Layer 2—LAN Switching Configuration Guide.
Configuration restrictions and guidelines
•
Do not disable NDP or NTDP after a cluster is formed. Doing so on the cluster management device
or its member devices does not break up the cluster, but can affect operation of the cluster.
•
If an 802.1X- or MAC authentication-enabled member device is connected to any other member
device, enable HABP server on the device. Otherwise, the management device of the cluster cannot
manage the devices connected to it. For more information about HABP, see Security Configuration
Guide.
•
Before you establish a cluster or add a device to the cluster, verify that:
○ The management device's routing table can accommodate routes destined for the candidate
devices. A full routing table can cause continual additions and removals of all candidate
devices.
○ The candidate device's routing table can accommodate the route destined for the management
device. A full routing table can cause continual additions and removals of the candidate device.
Cluster management configuration task list
Before configuring a cluster, determine the roles and functions the devices play, and configure functions
required for the cluster member devices to communicate with one another.
Complete these tasks to configure cluster management functions:
Configuring the management device:
• Enabling NDP globally and for specific ports (required)
• Configuring NDP parameters (optional)
• Enabling NTDP globally and for specific ports (required)
• Configuring NTDP parameters (optional)
• Manually collecting topology information (optional)
• Enabling the cluster function (required)
• Establishing a cluster (required)
• Enabling management VLAN autonegotiation (required)
• Configuring communication between the management device and the member devices within a cluster (optional)
• Configuring cluster management protocol packets (optional)
• Cluster member management (optional)
Configuring the member devices:
• Enabling NDP (required)
• Enabling NTDP (required)
• Manually collecting topology information (optional)
• Enabling the cluster function (required)
• Deleting a member device from a cluster (optional)
Toggling between the CLIs of the management device and a member device (optional)
Adding a candidate device to a cluster (optional)
Configuring advanced cluster functions:
• Configuring topology management (optional)
• Configuring interaction for a cluster (optional)
• Configuring the SNMP configuration synchronization function (optional)
• Configuring Web user accounts in batches (optional)
Configuring the management device
Perform the tasks in this section on the management device.
Enabling NDP globally and for specific ports
For NDP to work correctly, enable NDP both globally and on specific ports.
To enable NDP globally and for specific ports:
1. Enter system view.
   Command: system-view
2. Enable NDP globally.
   Command: ndp enable
   Optional. By default, this function is enabled.
3. Enable the NDP feature on ports. Use either command:
   • In system view: ndp enable interface interface-list
   • In Ethernet interface view or Layer 2 aggregate interface view: execute
     interface interface-type interface-number, and then ndp enable.
   By default, NDP is enabled globally and also on all ports. To avoid the
   management device collecting unnecessary topology data, disable NDP on
   ports connected to non-candidate devices.
Configuring NDP parameters
An NDP-enabled port periodically sends NDP packets that have an aging time. If the receiving device
has not received any NDP packet before that aging time expires, the receiving device automatically
removes the neighbor entry for the sending device.
To avoid NDP table entry flapping, make sure the NDP aging timer is equal to or longer than the NDP
packet sending interval.
To configure NDP parameters:
1. Enter system view.
   Command: system-view
2. Configure the interval for sending NDP packets.
   Command: ndp timer hello hello-time
   Optional. The default setting is 60 seconds.
3. Configure the period for the receiving device to keep the NDP packets.
   Command: ndp timer aging aging-time
   Optional. The default setting is 180 seconds.
Enabling NTDP globally and for specific ports
For NTDP to work correctly, you must enable NTDP both globally and on specific ports.
To enable NTDP globally and for specific ports:
1. Enter system view.
   Command: system-view
2. Enable NTDP globally.
   Command: ntdp enable
   Optional. By default, this function is enabled.
3. Enter Ethernet interface view or Layer 2 aggregate interface view.
   Command: interface interface-type interface-number
4. Enable NTDP on the port.
   Command: ntdp enable
   Optional. By default, NTDP is enabled on all ports. To avoid the
   management device collecting unnecessary topology data, disable NTDP on
   ports connected to non-candidate devices.
Configuring NTDP parameters
NTDP parameter configuration includes the following:
•
Limiting the maximum number of hops (devices) from which topology data is collected.
•
Setting the topology data collection interval.
•
Setting the following topology request forwarding delays for requested devices' NTDP-enabled
ports:
○ Forwarding delay for the first NTDP-enabled port: After receiving a topology request, the
requested device forwards the request out of the first NTDP-enabled port when this forwarding
delay expires rather than immediately.
○ Forwarding delay for other NTDP-enabled ports: After the first NTDP-enabled port forwards
the request, all other NTDP-enabled ports forward the request in turn at this delay interval.
The delay settings are conveyed in topology requests sent to the requested devices. They help
prevent concurrent responses to an NTDP topology-collection request from causing congestion
and denial of service on the management device.
To configure NTDP parameters:
1. Enter system view.
   Command: system-view
2. Configure the maximum hops for topology collection.
   Command: ntdp hop hop-value
   Optional. The default setting is 3.
3. Configure the interval for collecting topology information.
   Command: ntdp timer interval
   Optional. The default setting is 1 minute.
4. Configure the delay for the first NTDP-enabled port to forward a topology-collection request.
   Command: ntdp timer hop-delay delay-time
   Optional. The default setting is 200 ms.
5. Configure the delay for other NTDP-enabled ports to forward a topology-collection request.
   Command: ntdp timer port-delay delay-time
   Optional. The default setting is 20 ms.
Manually collecting topology information
The management device collects topology information periodically after a cluster is created. In addition,
you can manually initiate topology information collection on the management device or an
NTDP-enabled device to manage and monitor devices in real time, regardless of whether a cluster
is created.
To manually collect topology information, execute the ntdp explore command.
Enabling the cluster function
1. Enter system view.
   Command: system-view
2. Enable the cluster function globally.
   Command: cluster enable
   Optional. By default, this function is enabled.
Establishing a cluster
To successfully establish a cluster:
•
Make sure UDP port 40000 is not used by any application. This port will be used by the cluster
management module for exchanging handshake packets.
•
Perform the following tasks before establishing the cluster:
○ Specify a management VLAN. You cannot change the management VLAN after a cluster is
created.
○ Configure a private IP address pool on the management device for cluster member devices. This
address pool must not include IP addresses that are on the same subnet as the IP address
assigned to any VLAN interface on the management device or a cluster candidate device.
When a candidate device is added to the cluster, the management device assigns it a private
IP address for inter-cluster communication.
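The overlap constraint on the private address pool can be checked mechanically. The following illustrative Python sketch (not a device feature) uses the standard ipaddress module; the function name is invented:

```python
import ipaddress

def pool_conflicts(pool_cidr, vlan_interface_cidrs):
    """Return the VLAN-interface subnets that overlap the proposed private
    IP address pool; an empty list means the pool is acceptable under the
    rule above."""
    pool = ipaddress.ip_network(pool_cidr, strict=False)
    return [c for c in vlan_interface_cidrs
            if pool.overlaps(ipaddress.ip_network(c, strict=False))]
```

For instance, a pool of 172.16.0.0/29 conflicts with a VLAN interface on 172.16.0.0/24 but not with one on 192.168.1.0/24.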
A cluster can be established manually or automatically. Using the automatic setup method:
1. You enter a name for the cluster you want to establish.
2. The system lists all candidate devices within your predefined hop count.
3. The system starts to add them to the cluster.
During this process, you can press Ctrl+C to stop the process. However, devices already added
into the cluster are not removed.
To manually establish a cluster:
1. Enter system view.
   Command: system-view
2. Specify the management VLAN.
   Command: management-vlan vlan-id
   By default, VLAN 1 is the management VLAN.
3. Enter cluster view.
   Command: cluster
4. Configure the private IP address range for member devices.
   Command: ip-pool ip-address { mask | mask-length }
   By default, no private IP address range is configured.
5. Establish a cluster. Use either method:
   • Manually establish a cluster: build cluster-name
   • Automatically establish a cluster: auto-build [ recover ]
   Optional. By default, the device is not the management device.
Enabling management VLAN autonegotiation
Management VLAN limits the cluster boundaries. To assign a device to a cluster, make sure the port
that directly connects the device to the management device, as well as any cascade ports on the path,
are in the management VLAN.
Management VLAN autonegotiation enables a cluster management device to add the ports directly
connected to it, and the cascade ports between cluster candidate devices, to the management VLAN.
To enable management VLAN autonegotiation on the management device:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Enable management VLAN autonegotiation.
   Command: management-vlan synchronization enable
   By default, this function is disabled.
Configuring communication between the management device
and the member devices within a cluster
In a cluster, the management device and its member devices communicate by sending handshake
packets to maintain a connection. You can configure the interval for sending handshake packets and the
holdtime of a device on the management device. This configuration applies to all member devices within
the cluster. For a member device in Connect state:
•
If the management device does not receive handshake packets from a member device within the
holdtime, it changes the state of the member device to Disconnect. When the communication is
recovered, the member device needs to be re-added to the cluster (this process is automatically
performed).
•
If the management device receives handshake packets from the member device within the holdtime,
the state of the member device remains Active.
To configure communication between the management device and the member devices within a cluster:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Configure the interval for sending handshake packets.
   Command: timer interval
   Optional. The default setting is 10 seconds.
4. Configure the holdtime of a device.
   Command: holdtime hold-time
   Optional. The default setting is 60 seconds.
Configuring cluster management protocol packets
By default, the destination MAC address of cluster management protocol packets (including NDP, NTDP
and HABP packets) is a multicast MAC address 0180-C200-000A, which IEEE reserved for later use.
Since some devices cannot forward the multicast packets with the destination MAC address of
0180-C200-000A, cluster management packets cannot traverse these devices. For a cluster to work
correctly in this case, you can modify the destination MAC address of a cluster management protocol
packet without changing the current networking.
The management device periodically sends MAC address negotiation broadcast packets to advertise the
destination MAC address of the cluster management protocol packets.
When you configure the destination MAC address for cluster management protocol packets:
•
If the interval for sending MAC address negotiation broadcast packets is 0, the system
automatically sets it to 1 minute.
•
If the interval for sending MAC address negotiation broadcast packets is not 0, the interval remains
unchanged.
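These two rules amount to a simple normalization, sketched here for illustration only (the function name is invented):

```python
def effective_syn_interval(configured_minutes):
    """Per the rules above: an interval of 0 is treated as 1 minute;
    any other configured value is used unchanged."""
    return 1 if configured_minutes == 0 else configured_minutes
```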
To configure the destination MAC address of the cluster management protocol packets:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Configure the destination MAC address for cluster management protocol packets.
   Command: cluster-mac mac-address
   The destination MAC address is 0180-C200-000A by default.
4. Configure the interval to send MAC address negotiation broadcast packets.
   Command: cluster-mac syn-interval interval
   Optional. The default setting is 1 minute.
Cluster member management
You can manually add a candidate device to a cluster, or remove a member device from a cluster.
If a member device needs to be rebooted for software upgrade or configuration update, you can
remotely reboot it through the management device.
Adding a member device
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Add a candidate device to the cluster.
   Command: add-member [ member-number ] mac-address mac-address [ password password ]
Removing a member device
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Remove a member device from the cluster.
   Command: delete-member member-number [ to-black-list ]
Rebooting a member device
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Reboot a specified member device.
   Command: reboot member { member-number | mac-address mac-address } [ eraseflash ]
Configuring the member devices
Enabling NDP
See "Enabling NDP globally and for specific ports."
Enabling NTDP
See "Enabling NTDP globally and for specific ports."
Manually collecting topology information
See "Manually collecting topology information."
Enabling the cluster function
See "Enabling the cluster function."
Deleting a member device from a cluster
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Delete a member device from the cluster.
   Command: undo administrator-address
Toggling between the CLIs of the management
device and a member device
In a cluster, you can access the CLI of a member device from the management device or access the CLI
of the management device from a member device.
Because CLI toggling uses Telnet, the following restrictions apply:
•
Authentication is required for toggling to the management device. If authentication is passed, you
are assigned the user privilege level predefined on the management device.
•
When a candidate device is added to the cluster, its super password for level-3 commands changes
to be the same as that on the management device. To avoid authentication failures, HP recommends
you not modify the super password settings of any member (including the management device and
member devices) in the cluster.
•
After toggling to a member device, you have the same user privilege level as on the management
device.
•
If the maximum number of Telnet users on the target device has been reached, you cannot toggle
to the device.
Perform the following tasks in user view:
• Access the CLI of a member device from the management device.
  Command: cluster switch-to { member-number | mac-address mac-address | sysname member-sysname }
• Access the CLI of the management device from a member device.
  Command: cluster switch-to administrator
  You can use this command only if you are not logged in to the member device from the CLI of the
  management device.
Adding a candidate device to a cluster
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Add a candidate device to the cluster.
   Command: administrator-address mac-address name name
Configuring advanced cluster functions
Configuring topology management
The concepts of blacklist and whitelist are used for topology management. An administrator can
diagnose the network by comparing the current topology (information about a node and its neighbors in
the cluster) and the standard topology.
•
Topology management whitelist (standard topology)—A whitelist is a list of topology information
that has been confirmed by the administrator as correct. You can get information about a node and
its neighbors from the current topology. Based on the information, you can manage and maintain
the whitelist by adding, deleting or modifying a node.
•
Topology management blacklist—Devices in a blacklist are not allowed to join a cluster. A blacklist
contains the MAC addresses of devices. If a blacklisted device is connected to a network through
another device not included in the blacklist, the MAC address and access port of the latter are also
included in the blacklist. The candidate devices in a blacklist can be added to a cluster only if the
administrator manually removes them from the list.
The whitelist and blacklist are mutually exclusive. A whitelist member cannot be a blacklist member, and
vice versa. However, a topology node can belong to neither the whitelist nor the blacklist. Nodes of this
type are usually newly added nodes, whose identities are to be confirmed by the administrator.
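The blacklist-extension rule described above can be sketched as follows. This is an illustrative Python model; the data shapes and names are invented for this sketch:

```python
def extend_blacklist(blacklist, connections):
    """Per the rule above: if a blacklisted device reaches the network
    through a device not in the blacklist, that upstream device's MAC
    address and access port are blacklisted too.

    `connections` maps a device MAC to the (upstream MAC, upstream port)
    it attaches through. Returns the extended MAC blacklist and the set
    of blocked (MAC, port) pairs."""
    extended = set(blacklist)
    blocked_ports = set()
    for mac, (up_mac, up_port) in connections.items():
        if mac in blacklist and up_mac not in blacklist:
            extended.add(up_mac)
            blocked_ports.add((up_mac, up_port))
    return extended, blocked_ports
```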
You can back up and restore the whitelist and blacklist in the following two ways:
•
Backing them up on the FTP server shared by the cluster. You can manually restore the whitelist and
blacklist from the FTP server.
•
Backing them up in the Flash of the management device. When the management device restarts,
the whitelist and blacklist will be automatically restored from the Flash. When a cluster is
re-established, you can choose whether to restore the whitelist and blacklist from the Flash
automatically, or you can manually restore them from the Flash of the management device.
To configure cluster topology management:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Add a device to the blacklist.
   Command: black-list add-mac mac-address
   Optional.
4. Remove a device from the blacklist.
   Command: black-list delete-mac { all | mac-address }
   Optional.
5. Confirm the current topology and save it as the standard topology.
   Command: topology accept { all [ save-to { ftp-server | local-flash } ] | mac-address mac-address | member-id member-number }
   Optional.
6. Save the standard topology to the FTP server or the local Flash.
   Command: topology save-to { ftp-server | local-flash }
   Optional.
7. Restore the standard topology.
   Command: topology restore-from { ftp-server | local-flash }
   Optional.
Configuring interaction for a cluster
You configure the FTP/TFTP server, NMS and log host settings for the cluster on the cluster management
device.
•
All cluster members access the FTP/TFTP server through the management device.
•
All cluster members output their log data to the management device, which converts the IP address
for the log data packets before forwarding the packets to the log host.
•
All cluster members send their traps to the SNMP NMS through the management device.
To isolate cluster management and control packets from the external networks for security, HP
recommends you configure the ports connected to the external networks as not allowing the
management VLAN to pass through. If the port connected to the NMS, FTP/TFTP server, or log host is
one of these ports, you must specify a VLAN interface other than the management VLAN interface as the
network management interface for communicating with these devices. Otherwise, communication failure
will occur.
To configure the interaction for the cluster:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Configure the FTP server shared by the cluster.
   Command: ftp-server ip-address [ user-name username password { cipher | simple } password ]
   By default, no FTP server is configured for a cluster.
4. Configure the TFTP server shared by the cluster.
   Command: tftp-server ip-address
   By default, no TFTP server is configured for a cluster.
5. Configure the log host shared by the member devices in the cluster.
   Command: logging-host ip-address
   By default, no log host is configured for a cluster.
6. Configure the SNMP NM host shared by the cluster.
   Command: snmp-host ip-address [ community-string read string1 write string2 ]
   By default, no SNMP host is configured.
7. Configure the NM interface of the management device.
   Command: nm-interface vlan-interface interface-name
   Optional.
Configuring the SNMP configuration synchronization function
SNMP configuration synchronization simplifies SNMP configuration in a cluster by enabling the
management device to propagate its SNMP settings to all member devices on a whitelist. These SNMP
settings are retained on the member devices after they are removed from the whitelist or the cluster is
dismissed. For more information about SNMP, see "Configuring SNMP."
To configure the SNMP configuration synchronization function:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Configure the SNMP community name shared by a cluster.
   Command: cluster-snmp-agent community { read | write } community-name [ mib-view view-name ]
4. Configure the SNMPv3 group shared by a cluster.
   Command: cluster-snmp-agent group v3 group-name [ authentication | privacy ] [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ]
5. Create or update information about the MIB view shared by a cluster.
   Command: cluster-snmp-agent mib-view included view-name oid-tree
   By default, the name of the MIB view shared by a cluster is ViewDefault and a cluster can access
   the ISO subtree.
6. Add a user for the SNMPv3 group shared by a cluster.
   Command: cluster-snmp-agent usm-user v3 user-name group-name [ authentication-mode { md5 | sha } [ cipher | simple ] auth-password ] [ privacy-mode des56 [ cipher | simple ] priv-password ]
6.
Configuring Web user accounts in batches
Configuring Web user accounts in batches enables you to do the following:
•
Through the Web interface, configure, on the management device, the username and password
used to log in to the cluster devices (including the management device and member devices).
•
Synchronize the configurations to the member devices on the whitelist.
This operation is equivalent to performing the configurations on the member devices. You need to enter
your username and password when you log in to the devices (including the management device and
member devices) in a cluster through the Web interface.
These Web user account settings are retained on the member devices after they are removed from the
whitelist or the cluster is dismissed.
To configure Web user accounts in batches:
1. Enter system view.
   Command: system-view
2. Enter cluster view.
   Command: cluster
3. Configure Web user accounts in batches.
   Command: cluster-local-user user-name [ password { cipher | simple } password ]
Displaying and maintaining cluster management
Task
Command
Remarks
Display NDP configuration
information.
display ndp [ interface interface-list ] [ |
{ begin | exclude | include }
regular-expression ]
Available in any view.
Display NTDP configuration
information.
display ntdp [ | { begin | exclude |
include } regular-expression ]
Available in any view.
Display device information
collected through NTDP.
display ntdp device-list [ verbose ] [ |
{ begin | exclude | include }
regular-expression ]
Available in any view.
Display detailed NTDP
information for a specified
device.
display ntdp single-device mac-address
mac-address [ | { begin | exclude |
include } regular-expression ]
Available in any view.
Display information about the
cluster to which the current
device belongs.
display cluster [ | { begin | exclude |
include } regular-expression ]
Available in any view.
Display the standard
topology.
display cluster base-topology
[ mac-address mac-address | member-id
member-number ] [ | { begin | exclude |
include } regular-expression ]
Available in any view.
Display the current blacklist of
the cluster.
display cluster black-list [ | { begin |
exclude | include } regular-expression ]
Available in any view.
Display information about
candidate devices.
display cluster candidates [ mac-address
mac-address | verbose ] [ | { begin |
exclude | include } regular-expression ]
Available in any view.
Display the current topology.
display cluster current-topology
[ mac-address mac-address
[ to-mac-address mac-address ] |
member-id member-number
[ to-member-id member-number ] ] [ |
{ begin | exclude | include }
regular-expression ]
Available in any view.
Display information about
cluster members.
display cluster members [ member-number
| verbose ] [ | { begin | exclude | include }
regular-expression ]
Available in any view.
Clear NDP statistics.
reset ndp statistics [ interface interface-list ]
Available in user view.
Cluster management configuration example
Network requirements
•
Three devices form cluster abc, whose management VLAN is VLAN 10. In the cluster, Device B
serves as the management device (Administrator), whose network management interface is
VLAN-interface 2; Device A and Device C are the member devices (Member).
•
All the devices in the cluster use the same FTP server and TFTP server on host 63.172.55.1/24, and
the same SNMP NMS and log host at 69.172.55.4/24.
•
Add the device whose MAC address is 00E0-FC01-0013 to the blacklist.
Figure 27 Network diagram
Configuration procedure
1.
Configure the member device Device A:
# Enable NDP globally and for port Ethernet 1/1.
<DeviceA> system-view
[DeviceA] ndp enable
[DeviceA] interface ethernet 1/1
[DeviceA-Ethernet1/1] ndp enable
[DeviceA-Ethernet1/1] quit
# Enable NTDP globally and for port Ethernet 1/1.
[DeviceA] ntdp enable
[DeviceA] interface ethernet 1/1
[DeviceA-Ethernet1/1] ntdp enable
[DeviceA-Ethernet1/1] quit
# Enable the cluster function.
[DeviceA] cluster enable
2.
Configure the member device Device C:
As the configurations of the member devices are the same, the configuration procedure of Device
C is not shown.
3.
Configure the management device Device B:
# Enable NDP globally and for ports Ethernet 1/2 and Ethernet 1/3.
<DeviceB> system-view
[DeviceB] ndp enable
[DeviceB] interface ethernet 1/2
[DeviceB-Ethernet1/2] ndp enable
[DeviceB-Ethernet1/2] quit
[DeviceB] interface ethernet 1/3
[DeviceB-Ethernet1/3] ndp enable
[DeviceB-Ethernet1/3] quit
# Configure the period for the receiving device to keep NDP packets as 200 seconds.
[DeviceB] ndp timer aging 200
# Configure the interval to send NDP packets as 70 seconds.
[DeviceB] ndp timer hello 70
# Enable NTDP globally and for ports Ethernet 1/2 and Ethernet 1/3.
[DeviceB] ntdp enable
[DeviceB] interface ethernet 1/2
[DeviceB-Ethernet1/2] ntdp enable
[DeviceB-Ethernet1/2] quit
[DeviceB] interface ethernet 1/3
[DeviceB-Ethernet1/3] ntdp enable
[DeviceB-Ethernet1/3] quit
# Configure the maximum hops for topology collection as 2.
[DeviceB] ntdp hop 2
# Configure the delay to forward topology-collection request packets on the first port as 150 ms.
[DeviceB] ntdp timer hop-delay 150
# Configure the delay to forward topology-collection request packets on other ports as 15 ms.
[DeviceB] ntdp timer port-delay 15
# Configure the interval to collect topology information as 3 minutes.
[DeviceB] ntdp timer 3
# Configure the management VLAN of the cluster as VLAN 10.
[DeviceB] vlan 10
[DeviceB-vlan10] quit
[DeviceB] management-vlan 10
# Configure ports Ethernet 1/2 and Ethernet 1/3 as Trunk ports and allow packets from the
management VLAN to pass.
[DeviceB] interface ethernet 1/2
[DeviceB-Ethernet1/2] port link-type trunk
[DeviceB-Ethernet1/2] port trunk permit vlan 10
[DeviceB-Ethernet1/2] quit
[DeviceB] interface ethernet 1/3
[DeviceB-Ethernet1/3] port link-type trunk
[DeviceB-Ethernet1/3] port trunk permit vlan 10
[DeviceB-Ethernet1/3] quit
# Enable the cluster function.
[DeviceB] cluster enable
# Configure a private IP address range for the member devices, which is from 172.16.0.1 to
172.16.0.7.
[DeviceB] cluster
[DeviceB-cluster] ip-pool 172.16.0.1 255.255.255.248
# Configure the current device as the management device, and establish a cluster named abc.
[DeviceB-cluster] build abc
Restore topology from local flash file,for there is no base topology.
(Please confirm in 30 seconds, default No). (Y/N)
N
# Enable management VLAN auto-negotiation.
[abc_0.DeviceB-cluster] management-vlan synchronization enable
# Configure the holdtime of the member device information as 100 seconds.
[abc_0.DeviceB-cluster] holdtime 100
# Configure the interval to send handshake packets as 10 seconds.
[abc_0.DeviceB-cluster] timer 10
# Configure the FTP Server, TFTP Server, Log host and SNMP host for the cluster.
[abc_0.DeviceB-cluster] ftp-server 63.172.55.1
[abc_0.DeviceB-cluster] tftp-server 63.172.55.1
[abc_0.DeviceB-cluster] logging-host 69.172.55.4
[abc_0.DeviceB-cluster] snmp-host 69.172.55.4
# Add the device whose MAC address is 00E0-FC01-0013 to the blacklist.
[abc_0.DeviceB-cluster] black-list add-mac 00e0-fc01-0013
[abc_0.DeviceB-cluster] quit
# Add port Ethernet 1/1 to VLAN 2, and configure the IP address of VLAN-interface 2.
[abc_0.DeviceB] vlan 2
[abc_0.DeviceB-vlan2] port ethernet 1/1
[abc_0.DeviceB-vlan2] quit
[abc_0.DeviceB] interface vlan-interface 2
[abc_0.DeviceB-Vlan-interface2] ip address 163.172.55.1 24
[abc_0.DeviceB-Vlan-interface2] quit
# Configure VLAN-interface 2 as the network management interface.
[abc_0.DeviceB] cluster
[abc_0.DeviceB-cluster] nm-interface vlan-interface 2
Configuring CWMP (TR-069)
Overview
CPE WAN Management Protocol (CWMP), also called "TR-069," is a DSL Forum technical specification
for remote management of home network devices. It defines the general framework, message format,
management method, and data model for managing and configuring home network devices.
CWMP applies mainly to DSL access networks, which are hard to manage because end-user devices are
dispersed and large in number. CWMP makes the management easier by using an autoconfiguration
server to perform remote centralized management of customer premises equipment.
CWMP network framework
Figure 28 shows a basic CWMP network framework.
Figure 28 CWMP network framework
The basic CWMP network elements include:
• ACS—Autoconfiguration server, the management device in the network.
• CPE—Customer premises equipment, the managed device in the network.
• DNS server—Domain name system server. CWMP defines that an ACS and a CPE use URLs to identify and access each other. DNS is used to resolve the URLs.
• DHCP server—Assigns IP addresses to CPEs, and uses the options field in DHCP packets to issue configuration parameters to the CPE.
Your device can work as the CPE but not the ACS.
Basic CWMP functions
Autoconnection between ACS and CPE
A CPE can connect to an ACS automatically by sending an Inform message. The following conditions
might trigger an autoconnection establishment:
• A CPE starts up. The CPE finds the corresponding ACS according to the acquired URL, and automatically initiates a connection to the ACS.
• A CPE is configured to send Inform messages periodically. The CPE automatically sends an Inform message at the configured interval to establish connections.
• A CPE is configured to send an Inform message at a specific time. The CPE automatically sends an Inform message at the configured time to establish a connection.
• The current session is interrupted abnormally. If the number of autoconnection retries has not reached the limit, the CPE automatically re-establishes the connection.
An ACS can initiate a connection request to a CPE at any time, and can establish a connection with the
CPE after passing CPE authentication.
Autoconfiguration
When a CPE logs in to an ACS, the ACS can automatically apply configurations to the CPE. Autoconfiguration parameters supported by the device include the following:
• Configuration file (ConfigFile)
• ACS address (URL)
• ACS username (Username)
• ACS password (Password)
• PeriodicInformEnable
• PeriodicInformInterval
• PeriodicInformTime
• CPE username (ConnectionRequestUsername)
• CPE password (ConnectionRequestPassword)
CPE system software image file and configuration file management
The network administrator can save CPE system software image files and configuration files on the ACS
and configure the ACS to automatically request the CPE to download any update made to these files.
After the CPE receives an update request, it automatically downloads the updated file from the file server
according to the filename and server address in the ACS request. After the CPE downloads the file, it
checks the file validity and reports the download result (success or failure) to the ACS.
To back up important data, a CPE can upload the current configuration file to the specified server
according to the requirement of an ACS. The device supports uploading only the vendor configuration
file or log file.
NOTE:
The device can download only system software images and configuration files from the ACS, and does not
support digital signatures.
CPE status and performance monitoring
An ACS can monitor the parameters of a CPE connected to it. Different CPEs have different performance and functionality, so the ACS must be able to identify each type of CPE and monitor the current configuration and configuration changes of each CPE. CWMP also allows the administrator to define monitored parameters and get their values through an ACS, so as to obtain CPE status and statistics.
The status and performance that can be monitored by an ACS include:
• Manufacturer name (Manufacturer)
• ManufacturerOUI
• SerialNumber
• HardwareVersion
• SoftwareVersion
• DeviceStatus
• UpTime
• Configuration file (ConfigFile)
• ACS address (URL)
• ACS username (Username)
• ACS password (Password)
• PeriodicInformEnable
• PeriodicInformInterval
• PeriodicInformTime
• CPE address (ConnectionRequestURL)
• CPE username (ConnectionRequestUsername)
• CPE password (ConnectionRequestPassword)
CWMP mechanism
RPC methods
CWMP provides the following major remote procedure call methods for an ACS to manage or monitor
a CPE:
• Get—The ACS gets the value of one or more parameters from the CPE.
• Set—The ACS sets the value of one or more parameters on the CPE.
• Inform—The CPE sends an Inform message to the ACS whenever it initiates a connection to the ACS, its underlying configuration changes, or it periodically reports its local information to the ACS.
• Download—The ACS requires the CPE to download a specific file from a specified URL, for example, to upgrade CPE software or download a vendor configuration file.
• Upload—The ACS requires the CPE to upload a specific file to a specified location.
• Reboot—The ACS remotely reboots the CPE when the CPE encounters a failure or completes a software upgrade.
How CWMP works
The following example illustrates how CWMP works. Suppose there are two ACSs in an area: main and
backup. The main ACS needs to restart for a system upgrade. To ensure a continuous monitoring of the
CPE, the main ACS needs to let all CPEs in the area connect to the backup ACS.
Figure 29 Example of the CWMP message interaction
The following steps show how CWMP works:
1. Establish a TCP connection.
2. Initialize SSL and establish a secure connection.
3. The CPE sends an Inform request message to initiate a CWMP connection. The Inform message carries the reason for sending this message in the EventCode field. In this example, the reason is "6 CONNECTION REQUEST," indicating that the ACS requires the CPE to establish a connection.
4. If the CPE passes the authentication of the ACS, the ACS returns an Inform response, and the connection is established.
5. Upon receiving the Inform response, the CPE sends an empty message if it has no other requests. The CPE does this to comply with the request/reply interaction model of HTTP/HTTPS, in which CWMP messages are conveyed.
6. The ACS queries the value of the ACS URL set on the CPE.
7. The CPE replies to the ACS with the value of the ACS URL.
8. The ACS finds that its local URL value is the same as the ACS URL on the CPE, so it sends a Set request to the CPE to modify the ACS URL of the CPE to the URL of the backup ACS.
9. The setting succeeds and the CPE sends a response.
10. The ACS sends an empty message to notify the CPE that it has no other requests.
11. The CPE closes the connection.
After this, the CPE initiates a connection to the backup ACS.
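The ACS-migration exchange above can be sketched as a toy simulation. The class and parameter names here (for example, ManagementServer.URL) are illustrative assumptions, not the actual CWMP SOAP schema or device API:

```python
# Toy simulation of steps 3 through 11 above: the main ACS reads the
# CPE's ACS URL and, if it points at itself, rewrites it to the backup.
# Message shapes and names are illustrative, not the real CWMP schema.

class CPE:
    def __init__(self, acs_url):
        self.acs_url = acs_url  # writable parameter, as in the Set step

    def inform(self):
        # Step 3: the CPE opens the session with an Inform request.
        return {"EventCode": "6 CONNECTION REQUEST"}

    def get_parameter(self, name):
        # Steps 6-7: the ACS reads a parameter value from the CPE.
        assert name == "ManagementServer.URL"
        return self.acs_url

    def set_parameter(self, name, value):
        # Steps 8-9: the ACS rewrites the parameter; the CPE acknowledges.
        assert name == "ManagementServer.URL"
        self.acs_url = value
        return "SetParameterValuesResponse"

def migrate_cpe(cpe, my_url, backup_url):
    """Main ACS redirects the CPE to the backup ACS."""
    cpe.inform()                                         # steps 3-4
    current = cpe.get_parameter("ManagementServer.URL")  # steps 6-7
    if current == my_url:                                # step 8
        cpe.set_parameter("ManagementServer.URL", backup_url)
    # Steps 10-11: empty message, CPE closes. On its next Inform,
    # the CPE contacts the backup ACS.
    return cpe.acs_url

cpe = CPE("http://main-acs.example/acs")
new_url = migrate_cpe(cpe, "http://main-acs.example/acs",
                      "http://backup-acs.example/acs")
```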
CWMP configuration approaches
To use CWMP, you must enable CWMP at the CLI. After that, you can configure ACS and CPE attributes
at the CLI. Alternatively, the CPE may obtain some ACS and CPE attributes from the DHCP server, or the
ACS may assign some ACS and CPE attributes to the CPE, depending on the CWMP implementation in
your network. Support for these configuration modes varies with attributes. For more information, see
"Configuring CWMP at the CLI."
Configuring ACS and CPE attributes through ACS
An ACS performs autoconfiguration of a CPE through remote management. For the primary configurable
parameters, see "Autoconfiguration."
Configuring ACS and CPE attributes through DHCP
You can configure ACS parameters for the CPE on the DHCP server by using DHCP Option 43. When
accessed by the CPE, the DHCP server sends the ACS parameters in DHCP Option 43 to the CPE. If the
DHCP server is an HP device that supports DHCP Option 43, you can configure the ACS parameters at
the CLI with the command option 43 hex 01length URL username password, where:
• length is a hexadecimal string that indicates the total length of the URL, username, and password arguments. No space is allowed between the 01 keyword and the length value.
• URL is the ACS address.
• username is the ACS username.
• password is the ACS password.
When you configure the ACS URL, username, and password, follow these guidelines:
• The three arguments take the hexadecimal format, and the ACS URL and username must each end with a space (20 in hexadecimal format) for separation.
• The three arguments must be entered in 2-digit, 4-digit, 6-digit, or 8-digit segments, each separated by a space.
For example, configure the ACS address as http://169.254.76.31:7547/acs, username as 1234, and
password as 5678, as follows:
<Sysname> system-view
[Sysname] dhcp server ip-pool 0
[Sysname-dhcp-pool-0] option 43 hex 0127 68747470 3A2F2F31 36392E32 35342E37 362E3331
3A373534 372F6163 73203132 33342035 3637 38
In the option 43 hex command:
• 27 indicates that the total length of the subsequent hexadecimal strings is 39 characters.
• 68747470 3A2F2F31 36392E32 35342E37 362E3331 3A373534 372F6163 73 corresponds to the ACS address http://169.254.76.31:7547/acs.
• 3132 3334 corresponds to the username 1234.
• 35 3637 38 corresponds to the password 5678.
• 20 is the delimiter (a space in hexadecimal format).
For more information about DHCP, DHCP Option 43, and the option command, see Layer 3—IP Services
Configuration Guide.
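Because the hex string is tedious to assemble by hand, the encoding rule above (the 01 keyword, a one-byte total length, then the ASCII bytes of the space-separated URL, username, and password) can be sketched in a short helper. The function name is ours; only the byte layout comes from the text:

```python
def option43_hex(url, username, password):
    """Build the value for 'option 43 hex' per the layout above:
    the 01 keyword, a one-byte total length in hex, then the ASCII
    bytes of 'URL username password' (space-separated)."""
    payload = f"{url} {username} {password}".encode("ascii")
    return "01" + format(len(payload), "02x") + payload.hex()

# Reproduces the example above: length byte 0x27 (39 characters).
value = option43_hex("http://169.254.76.31:7547/acs", "1234", "5678")
```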
Configuring CWMP at the CLI
Some tasks in this section can be performed on the ACS or DHCP server.
Complete the following tasks to configure CWMP at the CLI:
• Enabling CWMP: Required.
• Configuring ACS attributes:
  • Configuring the ACS URL: Required. Supports configuration through ACS, DHCP, and CLI.
  • Configuring the ACS username and password: Optional. Supports configuration through ACS, DHCP, and CLI.
• Configuring CPE attributes:
  • Configuring the CPE username and password: Optional. Supports configuration through ACS and CLI.
  • Configuring the CWMP connection interface: Optional. Supports configuration through CLI only.
  • Sending Inform messages: Optional. Supports configuration through ACS and CLI.
  • Configuring the maximum number of attempts made to retry a connection: Optional. Supports configuration through CLI only.
  • Configuring the close-wait timer of the CPE: Optional. Supports configuration through CLI only.
  • Configuring the CPE working mode: Optional. Supports configuration through CLI only.
  • Specifying an SSL client policy for HTTPS connection to ACS: Optional. Supports configuration through CLI only.
Enabling CWMP
CWMP configurations can take effect only after you enable CWMP.
To enable CWMP:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Enable CWMP.
   Command: cwmp enable
   By default, CWMP is disabled.
Configuring ACS attributes
ACS attributes include ACS URL, username and password. When the CPE initiates a connection to the
ACS, the ACS URL, username and password are carried in the connection request. After the ACS receives
the request, if the parameter values in the request are consistent with those configured locally, the
authentication succeeds, and the connection is allowed to be established. If not, the authentication fails,
and the connection is not allowed to be established.
Configuring the ACS URL
You can assign only one ACS to a CPE. A newly configured ACS URL overwrites the previous one, if any.
To configure the ACS URL:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Configure the ACS URL.
   Command: cwmp acs url url
   By default, no ACS URL is configured.
Configuring the ACS username and password
To pass ACS authentication, make sure the configured username and password are the same as those
configured for the CPE on the ACS.
To configure the ACS username and password:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Configure the ACS username for connection to the ACS.
   Command: cwmp acs username username
   By default, no ACS username is configured for connection to the ACS.
4. (Optional.) Configure the ACS password for connection to the ACS.
   Command: cwmp acs password [ cipher | simple ] password
   By default, no ACS password is configured for connection to the ACS. You can specify a username without a password for authentication, but you must make sure the ACS has the same authentication setting as the CPE.
Configuring CPE attributes
CPE attributes include CPE username and password, which a CPE uses to authenticate the validity of an
ACS. When an ACS initiates a connection to a CPE, the ACS sends a session request carrying the CPE
URL, username, and password. When the device (CPE) receives the request, it compares the CPE URL,
username, and password with those configured locally. If they are the same, the ACS passes the
authentication of the CPE, and the connection establishment proceeds. Otherwise, the authentication
fails, and the connection establishment is terminated.
Configuring the CPE username and password
To configure the CPE username and password:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Configure the CPE username for connection to the CPE.
   Command: cwmp cpe username username
   By default, no CPE username is configured for connection to the CPE.
4. (Optional.) Configure the CPE password for connection to the CPE.
   Command: cwmp cpe password [ cipher | simple ] password
   By default, no CPE password is configured for connection to the CPE. You can specify a username without a password for authentication, but you must make sure the ACS has the same authentication setting as the CPE.
Configuring the CWMP connection interface
The CWMP connection interface is the interface that the CPE uses to communicate with the ACS. The CPE
sends the IP address of this interface in the Inform messages and the ACS replies to this IP address for
setting up a CWMP connection.
If the interface that connects the CPE to the ACS is the only Layer 3 interface that has an IP address on the device, you do not need to specify the CWMP connection interface. If multiple Layer 3 interfaces are configured, specify the CWMP connection interface to make sure the IP address of the interface that connects to the ACS is sent to the ACS for setting up a CWMP connection.
To configure a CWMP connection interface:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Set the interface that connects the CPE to the ACS.
   Command: cwmp cpe connect interface interface-type interface-number
   By default, the CWMP connection interface is the interface that is assigned an IP address first among all interfaces on the CPE.
Sending Inform messages
Inform messages need to be sent during the connection establishment between a CPE and an ACS. You
can configure the Inform message sending parameter to trigger the CPE to initiate a connection to the
ACS.
To configure the CPE to periodically send Inform messages:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Enable the periodic sending of Inform messages.
   Command: cwmp cpe inform interval enable
   By default, this function is disabled.
4. (Optional.) Configure the interval between sending the Inform messages.
   Command: cwmp cpe inform interval seconds
   By default, the CPE sends an Inform message every 600 seconds.
To configure the CPE to send an Inform message at a specific time:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Configure the CPE to send an Inform message at a specific time.
   Command: cwmp cpe inform time time
   By default, no time is set, and the CPE does not send an Inform message at a specific time.
Configuring the maximum number of attempts made to retry a connection
If a CPE fails to establish a connection to an ACS or the connection is interrupted during the session (the
CPE does not receive a message indicating the normal close of the session), the CPE can automatically
reinitiate a connection to the ACS.
To configure the maximum number of attempts that the CPE can make to retry a connection:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. (Optional.) Configure the maximum number of attempts that the CPE can make to retry a connection.
   Command: cwmp cpe connect retry times
   By default, the CPE regularly sends connection requests to the ACS until a connection is set up.
Configuring the close-wait timer of the CPE
The close-wait timer is used mainly in the following cases:
• During the establishment of a connection, if the CPE sends a connection request to the ACS but does not receive a response within the configured close-wait timeout, the CPE considers the connection to have failed.
• After a connection is established, if there is no packet interaction between the CPE and the ACS within the configured close-wait timeout, the CPE considers the connection invalid and disconnects it.
To configure the close-wait timer for the CPE:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. (Optional.) Set the CPE close-wait timer.
   Command: cwmp cpe wait timeout seconds
   The default setting is 30 seconds.
Configuring the CPE working mode
Configure the device to operate in one of the following CPE modes depending on its position in the
network:
• Gateway mode—Enables the ACS to manage the device and any CPE attached to the device. Use this mode if the device is the egress to the WAN and has lower-level CPEs.
• Device mode—If no CPEs are attached to the device, configure the device to operate in device mode.
Disable CWMP before you change the CPE working mode.
To configure the working mode of the CPE:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Configure the working mode of the CPE.
   Command: cwmp device-type { device | gateway }
   By default, the device operates in gateway mode.
Specifying an SSL client policy for HTTPS connection to ACS
CWMP uses HTTP or HTTPS for data transmission. If the ACS uses HTTPS for secure access, its URL
begins with https://. You must configure an SSL client policy for the CPE to authenticate the ACS for
establishing an HTTPS connection. For more information about configuring SSL client policies, see
Security Configuration Guide.
To specify an SSL client policy for the CPE to establish an HTTPS connection to the ACS:
1. Enter system view.
   Command: system-view
2. Enter CWMP view.
   Command: cwmp
3. Specify an SSL client policy.
   Command: ssl client-policy policy-name
   By default, no SSL client policy is configured.
Displaying and maintaining CWMP
• Display CWMP configuration.
  Command: display cwmp configuration [ | { begin | exclude | include } regular-expression ]
  Available in any view.
• Display the current status of CWMP.
  Command: display cwmp status [ | { begin | exclude | include } regular-expression ]
  Available in any view.
Configuring IP accounting
IP accounting collects IP packet statistics on the device. It uses IP accounting rules to classify packets and
uses flow entries to store packet statistics in different tables.
Each IP accounting rule specifies a subnet and matches packets sourced from or destined to that subnet. Each flow entry records the source IP address, destination IP address, protocol number, packet count, and byte count for a flow. If a flow entry is not updated within the timeout time, IP accounting deletes it.
IP accounting stores different types of IP packet statistics in the following tables:
• Firewall-denied table—Stores statistics for incoming and outgoing IP packets that are denied by the firewall configured on an interface.
• Interior table—Stores statistics for IP packets that pass an interface or the firewall on the interface and match an IP accounting rule.
• Exterior table—Stores statistics for IP packets that pass an interface or the firewall on the interface but do not match any IP accounting rule.
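The three-table classification above can be sketched as follows. The rule match and table names mirror the description; the function and data structures are our own illustration, not device internals:

```python
from ipaddress import ip_address, ip_network

def classify(src, dst, denied_by_firewall, rules):
    """Return the IP accounting table a packet's statistics go to:
    firewall-denied packets go to the firewall-denied table; packets
    whose source or destination matches a rule subnet go to the
    interior table; all other packets go to the exterior table."""
    if denied_by_firewall:
        return "firewall-denied"
    for net in rules:
        if ip_address(src) in net or ip_address(dst) in net:
            return "interior"
    return "exterior"

# One rule subnet, e.g. configured with: ip count rule 1.1.1.1 24
rules = [ip_network("1.1.1.0/24")]
```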
Configuring IP accounting
Before performing this task, assign an IP address and mask to the interface on which you want to enable
IP accounting. If necessary, configure a firewall on the interface.
To configure IP accounting:
1. Enter system view.
   Command: system-view
2. Enable IP accounting.
   Command: ip count enable
   By default, this function is disabled.
3. (Optional.) Configure the timeout time for flow entries.
   Command: ip count timeout minutes
   The default setting is 720 minutes (12 hours).
4. (Optional.) Configure the maximum number of flow entries in the interior table.
   Command: ip count interior-threshold number
   The default setting is 512.
5. (Optional.) Configure the maximum number of flow entries in the exterior table.
   Command: ip count exterior-threshold number
   The default setting is 0.
6. (Optional.) Configure an IP accounting rule.
   Command: ip count rule ip-address { mask | mask-length }
   Up to 32 rules can be configured. If no rule is configured, all packet information is stored in the exterior table.
7. Enter interface view.
   Command: interface interface-type interface-number
8. Configure the type of packet accounting. Select at least one type; otherwise, IP accounting does not count any packets on the interface.
   • To enable IP accounting for valid incoming IP packets on the current interface: ip count inbound-packets
   • To enable IP accounting for valid outgoing IP packets on the current interface: ip count outbound-packets
   • To enable IP accounting for firewall-denied incoming packets on the current interface: ip count firewall-denied inbound-packets
   • To enable IP accounting for firewall-denied outgoing packets on the current interface: ip count firewall-denied outbound-packets
Displaying and maintaining IP accounting
• Display IP accounting rules.
  Command: display ip count rule [ | { begin | exclude | include } regular-expression ]
  Available in any view.
• Display IP accounting statistics.
  Command: display ip count { inbound-packets | outbound-packets } { exterior | firewall-denied | interior } [ | { begin | exclude | include } regular-expression ]
  Available in any view.
• Clear IP accounting statistics.
  Command: reset ip count { all | exterior | firewall | interior }
  Available in user view.
After you create a new IP accounting rule, it is possible that some originally rule-incompliant packets from
a subnet comply with the new rule. Information about these packets is then saved in the interior table. The
exterior table, however, might still contain information about the IP packets from the same subnet.
Therefore, in some cases, the interior and exterior tables contain statistics about the IP packets from the
same subnet. The statistics in the exterior table will be removed when the timeout time expires.
IP accounting configuration example
Network requirements
As shown in Figure 30, enable IP accounting on Ethernet 1/1 of the router to count IP packets between Host A and Host B. Set the timeout time for flow entries to 24 hours.
Figure 30 Network diagram
Configuration procedure
The two hosts can be replaced by other types of network devices such as routers.
1. Configure the router:
# Enable IP accounting.
<Router> system-view
[Router] ip count enable
# Configure an IP accounting rule.
[Router] ip count rule 1.1.1.1 24
# Set the timeout time to 1440 minutes (24 hours).
[Router] ip count timeout 1440
# Set the maximum number of flow entries in the interior table to 100.
[Router] ip count interior-threshold 100
# Set the maximum number of flow entries in the exterior table to 20.
[Router] ip count exterior-threshold 20
# Assign Ethernet 1/1 an IP address, and enable IP accounting for both incoming and outgoing IP
packets on it.
[Router] interface ethernet 1/1
[Router-Ethernet1/1] ip address 1.1.1.2 24
[Router-Ethernet1/1] ip count inbound-packets
[Router-Ethernet1/1] ip count outbound-packets
[Router-Ethernet1/1] quit
# Assign Ethernet 1/2 an IP address.
[Router] interface ethernet 1/2
[Router-Ethernet1/2] ip address 2.2.2.1 24
[Router-Ethernet1/2] quit
2. Configure a static route on Host A and Host B respectively so that they can reach each other. (Details not shown.)
3. Display IP accounting statistics on the router.
[Router] display ip count inbound-packets interior
1 Inbound streams information in interior list:
SrcIP            DstIP            Protocol     Pkts     Bytes
1.1.1.1          2.2.2.2          ICMP         4        240
[Router] display ip count outbound-packets interior
1 Outbound streams information in interior list:
SrcIP            DstIP            Protocol     Pkts     Bytes
2.2.2.2          1.1.1.1          ICMP         4        240
Configuring NetStream
Overview
Conventional ways to collect traffic statistics, like SNMP and port mirroring, cannot provide precise
network management because of inflexible statistical methods or the high cost of required dedicated
servers. This calls for a new technology to collect traffic statistics.
NetStream provides statistics about network traffic flows, and it can be deployed on access, distribution,
and core layers.
NetStream implements the following features:
• Accounting and billing—NetStream provides fine-grained data about network usage based on resources such as lines, bandwidth, and time periods. ISPs can use the data for billing based on time period, bandwidth usage, application usage, and QoS. Enterprise customers can use this information for department chargeback or cost allocation.
• Network planning—NetStream data provides key information, such as AS traffic information, for optimizing the network design and planning. This helps maximize network performance and reliability while minimizing the network operation cost.
• Network monitoring—Configured on the Internet interface, NetStream allows for monitoring traffic and bandwidth utilization in real time. Based on this information, administrators can understand how the network is used and where the bottlenecks are, so they can better plan resource allocation.
• User monitoring and analysis—NetStream data provides detailed information about network applications and resources. This information helps network administrators efficiently plan and allocate network resources, which helps ensure network security.
NetStream basic concepts
Flow
NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv4 flow is defined by the following 7-tuple elements: destination IP address, source IP address, destination port number, source port number, protocol number, ToS, and inbound or outbound interface. The 7-tuple uniquely defines a flow.
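As a sketch, the 7-tuple key and the per-flow counters in the NetStream cache might look like this. The field and function names are our own illustration:

```python
from collections import namedtuple

# The 7-tuple that uniquely identifies an IPv4 flow, per the text above.
FlowKey = namedtuple(
    "FlowKey", "dst_ip src_ip dst_port src_port protocol tos interface")

cache = {}  # NetStream cache: packet and byte counters per flow

def account(key, length):
    """Update the flow entry for one observed packet of `length` bytes;
    a new 7-tuple creates a new flow entry."""
    pkts, byts = cache.get(key, (0, 0))
    cache[key] = (pkts + 1, byts + length)

k = FlowKey("2.2.2.2", "1.1.1.1", 80, 51000, 6, 0, "Eth1/1")
account(k, 60)
account(k, 1500)   # same 7-tuple: same flow, counters accumulate
```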
NetStream operation
A typical NetStream system comprises the following parts:
• NetStream data exporter (NDE)—The NDE analyzes traffic flows that pass through it, collects necessary data from the target flows, and exports the data to the NSC. Before exporting data, the NDE might process the data, for example, by aggregation. A device configured with NetStream acts as an NDE.
• NetStream collector (NSC)—The NSC is usually a program running on UNIX or Windows. It parses the packets sent from the NDE and stores the statistics in a database for the NDA. The NSC gathers data from multiple NDEs, and then it filters and aggregates the received data.
• NetStream data analyzer (NDA)—The NDA is a tool for analyzing network traffic. It collects statistics from the NSC, performs further processing, and generates various types of reports for traffic billing, network planning, and attack detection and monitoring applications. Typically, the NDA features a web-based system for users to easily obtain, view, and gather the data.
Figure 31 NetStream system
As shown in Figure 31, NetStream uses the following procedure to collect and analyze data:
1. The NDE (the device configured with NetStream) periodically delivers the collected statistics to the NSC.
2. The NSC processes the statistics, and then it sends the results to the NDA.
3. The NDA analyzes the statistics for accounting, network planning, and the like.
NSC and NDA are usually integrated into a NetStream server. This document focuses on the description
and configuration of the NDE.
NetStream key technologies
Flow aging
NetStream uses flow aging to enable the NDE to export NetStream data to the NetStream server. NetStream creates an entry for each flow in the cache, and each entry stores the flow statistics. When the timer of an entry expires, the NDE exports the summarized data to the NetStream server in a specific NetStream version export format. For more information about flow aging types and configuration, see "Configuring NetStream flow aging."
NetStream data export
NetStream traditional data export
NetStream collects statistics about each flow, and, when the entry timer expires, it exports the data in
each entry to the NetStream server.
The data includes statistics about each flow, but this method consumes more bandwidth and CPU than
the aggregation method, and it requires a large cache size. In most cases, not all statistics are necessary
for analysis.
NetStream aggregation data export
NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and it sends the summarized data to the NetStream server. This process is the
NetStream aggregation data export, which uses less bandwidth than traditional data export.
For example, suppose the aggregation mode configured on the NDE is protocol-port, which means that it
aggregates statistics about flow entries by protocol number, source port, and destination port. Four
NetStream entries record four TCP flows with the same destination address, source port, and destination
port, but with different source addresses. In this aggregation mode, only one NetStream aggregation
flow is created and sent to the NetStream server.
Table 3 lists the 12 aggregation modes. In each mode, the system merges flows into one aggregation
flow if their aggregation criteria have the same values. The 12 aggregation modes work independently
and can be configured on the same interface.
Table 3 NetStream aggregation modes

AS aggregation:
•   Source AS number
•   Destination AS number
•   Inbound interface index
•   Outbound interface index

Protocol-port aggregation:
•   Protocol number
•   Source port
•   Destination port

Source-prefix aggregation:
•   Source AS number
•   Source address mask length
•   Source prefix
•   Inbound interface index

Destination-prefix aggregation:
•   Destination AS number
•   Destination address mask length
•   Destination prefix
•   Outbound interface index

Prefix aggregation:
•   Source AS number
•   Destination AS number
•   Source address mask length
•   Destination address mask length
•   Source prefix
•   Destination prefix
•   Inbound interface index
•   Outbound interface index

Prefix-port aggregation:
•   Source prefix
•   Destination prefix
•   Source address mask length
•   Destination address mask length
•   ToS
•   Protocol number
•   Source port
•   Destination port
•   Inbound interface index
•   Outbound interface index

ToS-AS aggregation:
•   ToS
•   Source AS number
•   Destination AS number
•   Inbound interface index
•   Outbound interface index

ToS-source-prefix aggregation:
•   ToS
•   Source AS number
•   Source prefix
•   Source address mask length
•   Inbound interface index

ToS-destination-prefix aggregation:
•   ToS
•   Destination AS number
•   Destination address mask length
•   Destination prefix
•   Outbound interface index

ToS-prefix aggregation:
•   ToS
•   Source AS number
•   Source prefix
•   Source address mask length
•   Destination AS number
•   Destination address mask length
•   Destination prefix
•   Inbound interface index
•   Outbound interface index

ToS-protocol-port aggregation:
•   ToS
•   Protocol type
•   Source port
•   Destination port
•   Inbound interface index
•   Outbound interface index

ToS-BGP-nexthop aggregation:
•   ToS
•   BGP next hop
•   Outbound interface index
In an aggregation mode with AS, if the packets are not forwarded according to the BGP routing table,
the statistics on the AS number cannot be obtained.
In the aggregation mode of ToS-BGP-nexthop, if the packets are not forwarded according to the BGP
routing table, the statistics on the BGP next hop cannot be obtained.
NetStream export formats
NetStream exports data in UDP datagrams in one of the following formats:
•   Version 5—Exports original statistics collected based on the 7-tuple elements. The packet format is fixed and cannot be extended flexibly.
•   Version 8—Supports NetStream aggregation data export. The packet formats are fixed and cannot be extended flexibly.
•   Version 9—The most flexible format. Users can define templates that have different statistics fields. The template feature supports different statistics, such as BGP next hop and MPLS information.
NetStream sampling and filtering
NetStream sampling
NetStream sampling reflects the network traffic information by collecting statistics on only a subset of the
packets. Reducing the volume of statistics to be transferred also reduces the impact on device performance.
For more information about sampling, see "Configuring sampler."
NetStream filtering
NetStream filtering is implemented by referencing an ACL or by applying a QoS policy to NetStream.
It enables a NetStream module to collect statistics only on packets that match the specified criteria, so
you can select specific data flows for statistics purposes. NetStream filtering by QoS policy is flexible
and suits various applications.
NetStream configuration task list
Before you configure NetStream, verify the following configurations, as needed:
•   Determine the device on which to enable NetStream.
•   If multiple service flows are passing through the NDE, use an ACL or QoS policy to select the target data.
•   If enormous traffic flows are on the network, configure NetStream sampling.
•   Decide which export format is used for NetStream data export.
•   Configure the timer for NetStream flow aging.
•   To reduce the bandwidth consumption of NetStream data export, configure NetStream aggregation.
Figure 32 NetStream configuration flow
As shown in Figure 32, you first enable NetStream, optionally configure filtering and sampling, configure
the export format and flow aging, and then configure either aggregation data export or common data
export.
Complete these tasks to configure NetStream:
•   Enabling NetStream on an interface: Required.
•   Configuring NetStream filtering: Optional.
•   Configuring NetStream sampling: Optional.
•   Configuring NetStream data export (traditional data export or aggregation data export): Required. Use at least one method.
•   Configuring attributes of NetStream export data: Optional.
•   Configuring NetStream flow aging: Optional.
Enabling NetStream on an interface
To enable NetStream on an interface:
1. Enter system view: system-view
2. Enter interface view: interface interface-type interface-number
3. Enable NetStream on the interface: ip netstream { inbound | outbound } (Disabled by default.)
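For example, the following sketch (the interface name is illustrative) enables NetStream for incoming traffic on an interface:
# Enable NetStream for incoming traffic on Ethernet 1/0.
<Sysname> system-view
[Sysname] interface ethernet 1/0
[Sysname-Ethernet1/0] ip netstream inbound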
Configuring NetStream filtering and sampling
Before you configure NetStream filtering and sampling, use the ip netstream command to enable
NetStream.
Configuring NetStream filtering
When you configure NetStream filtering, follow these guidelines:
•   The NetStream filtering function is not effective on MPLS packets.
•   When NetStream filtering and sampling are both configured, packets are filtered first, and then the matching packets are sampled.
•   The ACL referenced by NetStream filtering must already exist and cannot be empty. An ACL that is referenced by NetStream filtering cannot be deleted or modified. For more information about ACLs, see ACL and QoS Configuration Guide.
To configure NetStream filtering:
1. Enter system view: system-view
2. Enter interface view: interface interface-type interface-number
3. Enable ACL-based NetStream filtering on the interface: ip netstream filter acl acl-number { inbound | outbound } (Optional. By default, no ACL is referenced and IPv4 packets are not filtered.)
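For example, assuming a basic ACL 2001 already exists and matches the traffic of interest, the following sketch (interface and ACL numbers are illustrative) collects statistics only on incoming packets that match the ACL:
# Enable NetStream and ACL-based filtering for incoming traffic on Ethernet 1/0.
<Sysname> system-view
[Sysname] interface ethernet 1/0
[Sysname-Ethernet1/0] ip netstream inbound
[Sysname-Ethernet1/0] ip netstream filter acl 2001 inbound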
Configuring NetStream sampling
When you configure NetStream sampling, follow these guidelines:
•   When NetStream filtering and sampling are both configured, packets are filtered first, and then the matching packets are sampled.
•   A sampler must be created by using the sampler command before being referenced by NetStream sampling.
•   A sampler that is referenced by NetStream sampling cannot be deleted. For more information about samplers, see "Configuring sampler."
To configure NetStream sampling:
1. Enter system view: system-view
2. Enter interface view: interface interface-type interface-number
3. Configure NetStream sampling: ip netstream sampler sampler-name { inbound | outbound } (Disabled by default.)
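For example, assuming a sampler named samp1 has already been created with the sampler command, the following sketch (names are illustrative) samples the incoming traffic on an interface:
# Enable NetStream and sampling for incoming traffic on Ethernet 1/0.
<Sysname> system-view
[Sysname] interface ethernet 1/0
[Sysname-Ethernet1/0] ip netstream inbound
[Sysname-Ethernet1/0] ip netstream sampler samp1 inbound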
Configuring NetStream data export
To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.
Configuring NetStream traditional data export
1. Enter system view: system-view
2. Enter interface view: interface interface-type interface-number
3. Enable NetStream: ip netstream { inbound | outbound } (Disabled by default.)
4. Exit to system view: quit
5. Configure the destination address and the destination UDP port number for the NetStream traditional data export: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] (By default, no destination address or destination UDP port number is configured, so the NetStream traditional data is not exported.)
6. Configure the source interface for NetStream traditional data export: ip netstream export source interface interface-type interface-number (Optional. By default, the interface where the NetStream data is sent out, that is, the interface that connects to the NetStream server, is used as the source interface. HP recommends that you connect the network management interface to the NetStream server and configure it as the source interface.)
7. Limit the data export rate: ip netstream export rate rate (Optional. No limit by default.)
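For example, the following sketch (addresses, interface, and rate value are illustrative) exports traditional data to a NetStream server, uses a dedicated interface as the source interface, and limits the export rate:
# Configure traditional data export to the NetStream server at 12.110.2.2.
[Sysname] ip netstream export host 12.110.2.2 5000
[Sysname] ip netstream export source interface ethernet 1/2
[Sysname] ip netstream export rate 100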
Configuring NetStream aggregation data export
NetStream aggregation can be implemented by software.
Configuration restrictions and guidelines
Configurations in NetStream aggregation view apply to aggregation data export only, and those in
system view apply to NetStream traditional data export. If configurations in NetStream aggregation view
are not provided, the configurations in system view apply to the aggregation data export.
Configuration procedure
To configure NetStream aggregation data export:
1. Enter system view: system-view
2. Enter interface view: interface interface-type interface-number
3. Enable NetStream: ip netstream { inbound | outbound } (Disabled by default.)
4. Exit to system view: quit
5. Set a NetStream aggregation mode and enter its view: ip netstream aggregation { as | destination-prefix | prefix | prefix-port | protocol-port | source-prefix | tos-as | tos-destination-prefix | tos-prefix | tos-protocol-port | tos-source-prefix | tos-bgp-nexthop }
6. Configure the destination address and the destination UDP port number for the NetStream aggregation data export: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] (By default, no destination address or destination UDP port number is configured in NetStream aggregation view. If you expect to export only NetStream aggregation data, configure the destination address in the related aggregation view only.)
7. Configure the source interface for NetStream aggregation data export: ip netstream export source interface interface-type interface-number (Optional. By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view, if any, is used. HP recommends that you connect the network management interface to the NetStream server.)
8. Enable the NetStream aggregation configuration: enable (Disabled by default.)
Configuring attributes of NetStream export data
Configuring NetStream export format
NetStream data can be exported in the version 5 or version 9 format, and the data fields can be
expanded to contain more information, including the following:
•   Statistics about the source AS, destination AS, and peer ASs, in the version 5 or version 9 export format.
•   Statistics about the BGP next hop, in the version 9 export format only.
To configure the NetStream export format:
1. Enter system view: system-view
2. Configure the version for the NetStream export format, and specify whether to record AS and BGP next hop information, using one of the following commands:
   • ip netstream export version 5 [ origin-as | peer-as ]
   • ip netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
   (Optional. By default, NetStream traditional data export uses version 5, IPv4 NetStream aggregation data export uses version 8, MPLS flow data is not exported, the peer AS numbers are exported for the source and destination, and the BGP next hop is not exported.)
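For example, the following sketch sets the version 9 export format, records the origin AS numbers for the source and destination, and exports the BGP next hop:
# Export NetStream data in version 9 format with origin AS and BGP next hop information.
[Sysname] ip netstream export version 9 origin-as bgp-nexthop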
For more information about an AS, see Layer 3—IP Routing Configuration Guide.
A NetStream entry for a flow records two AS numbers for the source IP address and two for the
destination IP address. For the source IP address, these are the origin AS from which the flow originates
and the peer AS from which the flow travels to the NetStream-enabled device. For the destination IP
address, these are the destination AS to which the flow is destined and the peer AS to which the
NetStream-enabled device passes the flow.
To specify which AS numbers to record for the source and destination IP addresses, include the peer-as
or origin-as keyword. For example, as shown in Figure 33, a flow starts at AS 20, passes AS 21 through
AS 23, and then reaches AS 24. NetStream is enabled on the device in AS 22. If the peer-as keyword
is provided, the command records AS 21 as the source AS and AS 23 as the destination AS. If the
origin-as keyword is provided, the command records AS 20 as the source AS and AS 24 as the
destination AS.
Figure 33 Recorded AS information varies with different keyword configurations (the flow travels from
AS 20 through AS 21, AS 22, and AS 23 to AS 24, and NetStream is enabled on the device in AS 22)
Configuring the refresh rate for NetStream version 9 templates
Version 9 is template-based and supports user-defined formats, so the NetStream-enabled device must
resend the template to the NetStream server to keep it up to date. If the version 9 format is changed on
the NetStream-enabled device but not updated on the NetStream server, the server cannot associate the
received statistics with the proper fields. To avoid this situation, configure the refresh frequency and
interval for version 9 templates so that the NetStream server can refresh the templates on time.
Both the refresh frequency and the refresh interval can be configured, and the template is resent when
either condition is met.
To configure the refresh rate for NetStream version 9 templates:
1. Enter system view: system-view
2. Configure the refresh frequency for NetStream version 9 templates: ip netstream export v9-template refresh-rate packet packets (Optional. By default, the version 9 templates are sent every 20 packets.)
3. Configure the refresh interval for NetStream version 9 templates: ip netstream export v9-template refresh-rate time minutes (Optional. By default, the version 9 templates are sent every 30 minutes.)
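For example, the following sketch (values are illustrative) resends the version 9 templates every 100 packets or every 10 minutes, whichever condition is met first:
# Refresh the version 9 templates more frequently than the defaults.
[Sysname] ip netstream export v9-template refresh-rate packet 100
[Sysname] ip netstream export v9-template refresh-rate time 10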
Configuring MPLS-aware NetStream
An MPLS flow is identified by the same labels in the same position and the same 7-tuple elements.
MPLS-aware NetStream collects and exports statistics on labels (up to three) in the label stack, the FEC
corresponding to the top label, and traditional 7-tuple elements data.
To configure MPLS-aware NetStream:
1. Enter system view: system-view
2. Count and export statistics on MPLS packets: ip netstream mpls [ label-positions { label-position1 [ label-position2 ] [ label-position3 ] } ] [ no-ip-fields ] (By default, no statistics about MPLS packets are counted and exported. This command enables both IPv4 and IPv6 NetStream of MPLS packets.)
Configuring NetStream flow aging
Flow aging approaches
The following types of NetStream flow aging are available:
•   Periodical aging
•   Forced aging
•   TCP FIN- and RST-triggered aging (automatically triggered if a TCP connection is terminated)
Periodical aging
Periodical aging uses the following approaches:
•   Inactive flow aging—A flow is considered inactive if its statistics have not changed, meaning that no packet for this NetStream entry arrives within the time specified by the ip netstream timeout inactive command. The inactive flow entry remains in the cache until the inactive timer expires. Then the inactive flow is aged out and its statistics, which can no longer be displayed by the display ip netstream cache command, are sent to the NetStream server. Inactive flow aging makes sure that the cache is big enough for new flow entries.
•   Active flow aging—An active flow is aged out when the time specified by the ip netstream timeout active command is reached, and its statistics are exported to the NetStream server. The device continues to count the active flow statistics, which can be displayed by the display ip netstream cache command. Active flow aging exports the statistics of active flows to the NetStream server.
Forced aging
Use the reset ip netstream statistics command to age out all NetStream entries in the cache and to clear
the statistics. This is forced aging.
TCP FIN- and RST-triggered aging
For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
If a packet with a FIN or RST flag is recorded for a flow with the NetStream entry already created, the
flow is aged out immediately. However, if the packet with a FIN or RST flag is the first packet of a flow,
a new NetStream entry is created instead of being aged out. This type of aging is enabled by default,
and it cannot be disabled.
Configuration procedure
To configure flow aging:
1. Enter system view: system-view
2. Configure periodical aging (Optional):
   • Set the aging timer for active flows: ip netstream timeout active minutes (30 minutes by default.)
   • Set the aging timer for inactive flows: ip netstream timeout inactive seconds (30 seconds by default.)
3. Configure forced aging of the NetStream entries (Optional):
   a. Set the maximum number of entries that the cache can accommodate: ip netstream max-entry max-entries (By default, the cache can accommodate a maximum of 100 entries.)
   b. Exit to user view: quit
   c. Configure forced aging: reset ip netstream statistics (The reset ip netstream statistics command also clears the cache.)
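For example, the following sketch (timer values are illustrative) exports active flow statistics every 10 minutes and ages out flows after 60 seconds without traffic:
# Set the periodical aging timers.
[Sysname] ip netstream timeout active 10
[Sysname] ip netstream timeout inactive 60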
Displaying and maintaining NetStream
•   Display NetStream entry information in the cache: display ip netstream cache [ verbose ] [ | { begin | exclude | include } regular-expression ] (Available in any view.)
•   Display information about NetStream data export: display ip netstream export [ | { begin | exclude | include } regular-expression ] (Available in any view.)
•   Display the configuration and status of the NetStream flow record templates: display ip netstream template [ | { begin | exclude | include } regular-expression ] (Available in any view.)
•   Clear the cache, age out, and export all NetStream data: reset ip netstream statistics (Available in user view.)
NetStream configuration examples
NetStream traditional data export configuration example
Network requirements
As shown in Figure 34, configure NetStream on Router A to collect statistics on packets passing through
it. Enable NetStream for incoming traffic on Ethernet 1/0 and for outgoing traffic on Ethernet 1/1.
Configure the router to export NetStream traditional data to UDP port 5000 of the NetStream server at
12.110.2.2/16.
Figure 34 Network diagram
Configuration procedure
# Enable NetStream for incoming traffic on Ethernet 1/0.
<RouterA> system-view
[RouterA] interface ethernet 1/0
[RouterA-Ethernet1/0] ip address 11.110.2.1 255.255.0.0
[RouterA-Ethernet1/0] ip netstream inbound
[RouterA-Ethernet1/0] quit
# Enable NetStream for outgoing traffic on Ethernet 1/1.
[RouterA] interface ethernet 1/1
[RouterA-Ethernet1/1] ip address 12.110.2.1 255.255.0.0
[RouterA-Ethernet1/1] ip netstream outbound
[RouterA-Ethernet1/1] quit
# Configure the destination address and the destination UDP port number for the NetStream traditional
data export.
[RouterA] ip netstream export host 12.110.2.2 5000
NetStream aggregation data export configuration example
Network requirements
As shown in Figure 35, configure NetStream on Router A so that:
•
Router A exports NetStream traditional data in version 5 export format to port 5000 of the
NetStream server at 4.1.1.1/16.
•
Router A performs NetStream aggregation in the modes of AS, protocol-port, source-prefix,
destination-prefix, and prefix. Use version 8 export format to send the aggregation data of different
modes to the destination address at 4.1.1.1, with UDP ports 2000, 3000, 4000, 6000, and 7000,
respectively.
All the routers in the network are running EBGP. For more information about BGP, see Layer 3—IP Routing
Configuration Guide.
Figure 35 Network diagram
Configuration procedure
# Enable NetStream for incoming and outgoing traffic on Ethernet 1/0.
<RouterA> system-view
[RouterA] interface ethernet 1/0
[RouterA-Ethernet1/0] ip address 3.1.1.1 255.255.0.0
[RouterA-Ethernet1/0] ip netstream inbound
[RouterA-Ethernet1/0] ip netstream outbound
[RouterA-Ethernet1/0] quit
# In system view, configure the destination address and the destination UDP port number for the
NetStream traditional data export with IP address 4.1.1.1 and port 5000.
[RouterA] ip netstream export host 4.1.1.1 5000
# Configure the aggregation mode as AS, and then, in aggregation view, configure the destination
address and the destination UDP port number for the NetStream AS aggregation data export.
[RouterA] ip netstream aggregation as
[RouterA-ns-aggregation-as] enable
[RouterA-ns-aggregation-as] ip netstream export host 4.1.1.1 2000
[RouterA-ns-aggregation-as] quit
# Configure the aggregation mode as protocol-port, and then, in aggregation view, configure the
destination address and the destination UDP port number for the NetStream protocol-port aggregation
data export.
[RouterA] ip netstream aggregation protocol-port
[RouterA-ns-aggregation-protport] enable
[RouterA-ns-aggregation-protport] ip netstream export host 4.1.1.1 3000
[RouterA-ns-aggregation-protport] quit
# Configure the aggregation mode as source-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the NetStream source-prefix aggregation
data export.
[RouterA] ip netstream aggregation source-prefix
[RouterA-ns-aggregation-srcpre] enable
[RouterA-ns-aggregation-srcpre] ip netstream export host 4.1.1.1 4000
[RouterA-ns-aggregation-srcpre] quit
# Configure the aggregation mode as destination-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the NetStream destination-prefix
aggregation data export.
[RouterA] ip netstream aggregation destination-prefix
[RouterA-ns-aggregation-dstpre] enable
[RouterA-ns-aggregation-dstpre] ip netstream export host 4.1.1.1 6000
[RouterA-ns-aggregation-dstpre] quit
# Configure the aggregation mode as prefix, and then, in aggregation view, configure the destination
address and the destination UDP port number for the NetStream prefix aggregation data export.
[RouterA] ip netstream aggregation prefix
[RouterA-ns-aggregation-prefix] enable
[RouterA-ns-aggregation-prefix] ip netstream export host 4.1.1.1 7000
[RouterA-ns-aggregation-prefix] quit
Configuring NQA
Overview
Network quality analyzer (NQA) allows you to monitor link status, measure network performance, verify
the service levels for IP services and applications, and troubleshoot network problems. It provides the
following types of operations:
•   ICMP echo
•   DHCP
•   DNS
•   FTP
•   HTTP
•   UDP jitter
•   SNMP
•   TCP
•   UDP echo
•   Voice
•   Data Link Switching (DLSw)
As shown in Figure 36, the NQA source device (NQA client) sends data to the NQA destination device
by simulating IP services and applications to measure network performance. The obtained performance
metrics include the one-way latency, jitter, packet loss, voice quality, application performance, and
server response time.
All types of NQA operations require the NQA client, but only the TCP, UDP echo, UDP jitter, and voice
operations require the NQA server. Operations for services that the destination device already provides,
such as FTP, do not need the NQA server.
You can configure the NQA server to listen and respond on specific ports to meet various test needs.
Figure 36 Network diagram
Collaboration
NQA can collaborate with the Track module to notify application modules of state or performance
changes so that the application modules can take predefined actions.
Figure 37 Collaboration
As shown in Figure 37, NQA (the detection module) associates with a detection entry and sends the
detection result to the Track module. The Track module associates with a track entry and sends the track
entry status to application modules, such as VRRP, static routing, policy-based routing, interface backup,
traffic redirection, and WLAN uplink detection.
The following describes how a static route destined for 192.168.0.88 is monitored through collaboration:
1. NQA monitors the reachability to 192.168.0.88.
2. When 192.168.0.88 becomes unreachable, NQA notifies the Track module of the change.
3. The Track module notifies the static routing module of the state change.
4. The static routing module sets the static route as invalid according to a predefined action.
For more information about collaboration, see High Availability Configuration Guide.
Threshold monitoring
Threshold monitoring enables the NQA client to display results or send trap messages to the network
management station (NMS) when the performance metrics that an NQA operation gathers violate the
specified thresholds.
Table 4 describes the relationships between performance metrics and NQA operation types.
Table 4 Performance metrics and NQA operation types
•   Probe duration: all NQA operation types except UDP jitter and voice.
•   Number of probe failures: all NQA operation types except UDP jitter and voice.
•   Round-trip time: UDP jitter and voice.
•   Number of discarded packets: UDP jitter and voice.
•   One-way jitter (source-to-destination and destination-to-source): UDP jitter and voice.
•   One-way latency (source-to-destination and destination-to-source): UDP jitter and voice.
•   Calculated Planning Impairment Factor (ICPIF) (see "Configuring a voice operation"): voice.
•   Mean Opinion Scores (MOS) (see "Configuring a voice operation"): voice.
NQA configuration task list
Complete the following task to configure the NQA server:
•   Configuring the NQA server: Required for the TCP, UDP echo, UDP jitter, and voice operation types.
Complete these tasks to configure the NQA client:
•   Enabling the NQA client: Required.
•   Configuring an NQA operation (ICMP echo, DHCP, DNS, FTP, HTTP, UDP jitter, SNMP, TCP, UDP echo, voice, or DLSw): Required. Configure at least one operation type.
•   Configuring optional parameters for an NQA operation: Optional.
•   Configuring the collaboration function: Optional.
•   Configuring threshold monitoring: Optional.
•   Configuring the NQA statistics function: Optional.
•   Configuring the NQA history records saving function: Optional.
•   Scheduling an NQA operation: Required.
Configuring the NQA server
To perform TCP, UDP echo, UDP jitter, and voice operations, you must enable the NQA server on the
destination device. The NQA server listens and responds to requests on the specified IP addresses and
ports.
You can configure multiple TCP (or UDP) listening services on an NQA server, each of which corresponds
to a specific destination IP address and port number. The destination IP address and port number must
be the same as those configured on the NQA client and must be different from those of an existing
listening service.
To configure the NQA server:
1. Enter system view: system-view
2. Enable the NQA server: nqa server enable (Disabled by default.)
3. Configure a listening service, using at least one of the following commands:
   • nqa server tcp-connect ip-address port-number
   • nqa server udp-echo ip-address port-number
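For example, the following sketch (the address and port are illustrative) enables the NQA server on the destination device and configures a TCP listening service:
# Enable the NQA server and listen for TCP operations at 10.2.2.2 on port 9000.
<Sysname> system-view
[Sysname] nqa server enable
[Sysname] nqa server tcp-connect 10.2.2.2 9000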
Configuring the NQA client
Enabling the NQA client
1. Enter system view: system-view
2. Enable the NQA client: nqa agent enable (Optional. Enabled by default.)
Configuring an ICMP echo operation
An ICMP echo operation measures the reachability of a destination device. It has the same function as
the ping command, but provides more output information. In addition, if multiple paths exist between the
source and destination devices, you can specify the next hop for the ICMP echo operation.
The ICMP echo operation is not supported in IPv6 networks. To test the reachability of an IPv6 address,
use the ping ipv6 command. For more information about the command, see Network Management and
Monitoring Command Reference.
To configure an ICMP echo operation:
1. Enter system view: system-view
2. Create an NQA operation and enter NQA operation view: nqa entry admin-name operation-tag (By default, no NQA operation is created.)
3. Specify the ICMP echo type and enter its view: type icmp-echo
4. Specify the destination address of ICMP echo requests: destination ip ip-address (By default, no destination IP address is configured.)
5. Specify the payload size of each ICMP echo request: data-size size (Optional. 100 bytes by default.)
6. Configure the string to be filled in the payload of each ICMP echo request: data-fill string (Optional. By default, the string is the hexadecimal number 00010203040506070809.)
7. Specify the VPN where the operation is performed: vpn-instance vpn-instance-name (Optional. By default, the operation is performed on the public network.)
8. Specify the source interface or the source IP address of ICMP echo requests: source interface interface-type interface-number, or source ip ip-address (Optional. By default, no source interface or source IP address is configured, and the requests take the primary IP address of the outgoing interface as their source IP address. If you configure both the source ip command and the source interface command, the source ip command takes effect. The specified source interface must be up. The source IP address must be the IP address of a local interface, and the interface must be up.)
9. Configure the next hop IP address for ICMP echo requests: next-hop ip-address (Optional. By default, no next hop IP address is configured.)
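For example, the following sketch (the operation name, tag, and destination address are illustrative, and the view prompts may vary) creates a minimal ICMP echo operation:
# Create an ICMP echo operation to destination 10.1.1.2.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 10.1.1.2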
Configuring a DHCP operation
A DHCP operation measures the time the NQA client uses to get an IP address from a DHCP server.
The specified interface simulates the DHCP client to acquire an IP address and it does not change its IP
address.
When the DHCP operation completes, the NQA client sends a packet to release the obtained IP address.
To configure a DHCP operation:
1. Enter system view: system-view
2. Create an NQA operation and enter NQA operation view: nqa entry admin-name operation-tag (By default, no NQA operation is created.)
3. Specify the DHCP type and enter its view: type dhcp
4. Specify an interface to perform the DHCP operation: operation interface interface-type interface-number (By default, no interface is specified to perform a DHCP operation. The specified interface must be up; otherwise, no probe packets can be sent out.)
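For example, the following sketch (the names and interface are illustrative, and the view prompts may vary) creates a DHCP operation that simulates a DHCP client on an interface:
# Create a DHCP operation that uses Ethernet 1/1 to acquire an IP address.
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type dhcp
[Sysname-nqa-admin-test-dhcp] operation interface ethernet 1/1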
Configuring a DNS operation
A DNS operation measures the time the NQA client uses to translate a domain name into an IP address
through a DNS server.
A DNS operation simulates domain name resolution and does not save the obtained DNS entry.
To configure a DNS operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the DNS type and enter its view.
   Command: type dns
4. Specify the IP address of the DNS server as the destination address of DNS packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the domain name that needs to be translated.
   Command: resolve-target domain-name
   Remarks: By default, no domain name is configured.
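As a sketch of the steps above, the following commands create a DNS operation. The server address 10.2.2.2 and the domain name host.com are hypothetical placeholders for your own network:
# Create a DNS operation, specify DNS server 10.2.2.2 as the destination, and set the domain name to be resolved.
<Sysname> system-view
[Sysname] nqa entry admin dns1
[Sysname-nqa-admin-dns1] type dns
[Sysname-nqa-admin-dns1-dns] destination ip 10.2.2.2
[Sysname-nqa-admin-dns1-dns] resolve-target host.com
[Sysname-nqa-admin-dns1-dns] quit
# Start the operation.
[Sysname] nqa schedule admin dns1 start-time now lifetime forever
After the operation runs, use the display nqa result admin dns1 command to view the resolution time.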
Configuring an FTP operation
An FTP operation measures the time the NQA client uses to transfer a file to or download a file from an
FTP server.
Follow these guidelines when you configure an FTP operation:
•
Before you perform an FTP operation, obtain the username and password for logging in to the FTP server.
•
When you execute the put command, the NQA client creates a file named file-name of fixed size on the FTP server. The file is generated by the client rather than transferred from the device. When you execute the get command, the client does not save the file obtained from the FTP server.
•
If you get a file that does not exist on the FTP server, the FTP operation fails.
•
Use the get command to download only small files. A big file might result in transfer failure because of timeout, or might affect other services by occupying much network bandwidth.
To configure an FTP operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the FTP type and enter its view.
   Command: type ftp
4. Specify the IP address of the FTP server as the destination address of FTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of FTP request packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no FTP requests can be sent out.
6. Specify the operation type.
   Command: operation { get | put }
   Remarks: Optional. By default, the operation type for the FTP operation is get, which means obtaining files from the FTP server.
7. Configure a login username.
   Command: username name
   Remarks: By default, no login username is configured.
8. Configure a login password.
   Command: password [ cipher | simple ] password
   Remarks: Optional. By default, no login password is configured.
9. Specify the name of a file to be transferred.
   Command: filename file-name
   Remarks: By default, no file is specified.
10. Set the data transmission mode.
   Command: mode { active | passive }
   Remarks: Optional. active by default.
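For example, the following commands sketch an FTP operation that uploads a file. The server address, username, password, and file name are hypothetical; substitute values valid for your FTP server:
# Create an FTP operation that puts a file named test.txt on the FTP server at 10.2.2.2.
<Sysname> system-view
[Sysname] nqa entry admin ftp1
[Sysname-nqa-admin-ftp1] type ftp
[Sysname-nqa-admin-ftp1-ftp] destination ip 10.2.2.2
[Sysname-nqa-admin-ftp1-ftp] operation put
[Sysname-nqa-admin-ftp1-ftp] username nqauser
[Sysname-nqa-admin-ftp1-ftp] password simple nqapwd
[Sysname-nqa-admin-ftp1-ftp] filename test.txt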
Configuring an HTTP operation
An HTTP operation measures the time the NQA client uses to obtain data from an HTTP server.
The TCP port number of the HTTP server must be 80. Otherwise, the HTTP operation fails.
To configure an HTTP operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the HTTP type and enter its view.
   Command: type http
4. Configure the IP address of the HTTP server as the destination address of HTTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of request packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no request packets can be sent out.
6. Configure the operation type.
   Command: operation { get | post }
   Remarks: Optional. By default, the operation type for the HTTP operation is get, which means obtaining data from the HTTP server.
7. Specify the destination website URL.
   Command: url url
8. Specify the HTTP version.
   Command: http-version v1.0
   Remarks: Optional. By default, HTTP 1.0 is used.
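The steps above can be sketched as follows. The server address 10.2.2.2 and the URL /index.htm are hypothetical:
# Create an HTTP operation that gets /index.htm from the HTTP server at 10.2.2.2.
<Sysname> system-view
[Sysname] nqa entry admin http1
[Sysname-nqa-admin-http1] type http
[Sysname-nqa-admin-http1-http] destination ip 10.2.2.2
[Sysname-nqa-admin-http1-http] operation get
[Sysname-nqa-admin-http1-http] url /index.htm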
Configuring a UDP jitter operation
CAUTION:
Do not perform the UDP jitter operation to well-known ports from 1 to 1023. Otherwise, the UDP jitter operation might fail or the service on the well-known port might become unavailable.
Jitter means inter-packet delay variance. A UDP jitter operation measures unidirectional and bidirectional jitter so that you can verify whether the network can carry jitter-sensitive services such as real-time voice and video services.
The UDP jitter operation works as follows:
1. The NQA client sends UDP packets to the destination port at a regular interval.
2. The destination device adds a time stamp to each packet that it receives, and then sends the packet back to the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the time stamps.
The UDP jitter operation requires both the NQA server and the NQA client. Before you perform the UDP
jitter operation, configure the UDP listening service on the NQA server. For more information about UDP
listening service configuration, see "Configuring the NQA server."
To configure a UDP jitter operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the UDP jitter type and enter its view.
   Command: type udp-jitter
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Specify the source port number of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
7. Configure the payload size of each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
8. Configure the string to be filled in the payload of each UDP packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
9. Configure the number of UDP packets sent in one UDP jitter probe.
   Command: probe packet-number packet-number
   Remarks: Optional. 10 by default.
10. Configure the interval for sending UDP packets.
   Command: probe packet-interval packet-interval
   Remarks: Optional. 20 milliseconds by default.
11. Configure how long the NQA client waits for a response from the server before it regards the response as timed out.
   Command: probe packet-timeout packet-timeout
   Remarks: Optional. 3000 milliseconds by default.
12. Configure the source IP address for UDP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no UDP packets can be sent out.
NOTE:
The display nqa history command does not show the results of the UDP jitter operation. Use the display
nqa result command to display the results, or use the display nqa statistics command to display the
statistics of the operation.
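For example, assuming an NQA server at the hypothetical address 10.2.2.2, the following sketch configures a UDP listening service on port 9000 and a matching UDP jitter operation on the client:
# On the NQA server, enable the NQA server and configure a UDP listening service.
<SysnameB> system-view
[SysnameB] nqa server enable
[SysnameB] nqa server udp-echo 10.2.2.2 9000
# On the NQA client, create a UDP jitter operation toward the listening service.
<SysnameA> system-view
[SysnameA] nqa entry admin jitter1
[SysnameA-nqa-admin-jitter1] type udp-jitter
[SysnameA-nqa-admin-jitter1-udp-jitter] destination ip 10.2.2.2
[SysnameA-nqa-admin-jitter1-udp-jitter] destination port 9000
After scheduling the operation, use the display nqa result or display nqa statistics command to view the jitter results.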
Configuring an SNMP operation
An SNMP operation measures the time the NQA client uses to get a value from an SNMP agent.
To configure an SNMP operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the SNMP type and enter its view.
   Command: type snmp
4. Configure the destination address of SNMP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Specify the source port of SNMP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
6. Configure the source IP address of SNMP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no SNMP packets can be sent out.
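A minimal sketch of the steps above, assuming an SNMP agent is running at the hypothetical address 10.2.2.2:
# Create an SNMP operation destined for the SNMP agent at 10.2.2.2.
<Sysname> system-view
[Sysname] nqa entry admin snmp1
[Sysname-nqa-admin-snmp1] type snmp
[Sysname-nqa-admin-snmp1-snmp] destination ip 10.2.2.2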
Configuring a TCP operation
A TCP operation measures the time the NQA client uses to establish a TCP connection to a specific port
on the NQA server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP
operation, configure a TCP listening service on the NQA server. For more information about the TCP
listening service configuration, see "Configuring the NQA server."
To configure a TCP operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the TCP type and enter its view.
   Command: type tcp
4. Configure the destination address of TCP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of TCP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the source IP address of TCP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no TCP packets can be sent out.
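As a sketch, the commands below configure a TCP listening service on a hypothetical NQA server at 10.2.2.2 and a matching TCP operation on the client:
# On the NQA server, configure a TCP listening service on port 9000.
<SysnameB> system-view
[SysnameB] nqa server enable
[SysnameB] nqa server tcp-connect 10.2.2.2 9000
# On the NQA client, create a TCP operation toward the listening service.
<SysnameA> system-view
[SysnameA] nqa entry admin tcp1
[SysnameA-nqa-admin-tcp1] type tcp
[SysnameA-nqa-admin-tcp1-tcp] destination ip 10.2.2.2
[SysnameA-nqa-admin-tcp1-tcp] destination port 9000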
Configuring a UDP echo operation
A UDP echo operation measures the round-trip time between the client and a specific UDP port on the
NQA server.
The UDP echo operation requires both the NQA server and the NQA client. Before you perform a UDP
echo operation, configure a UDP listening service on the NQA server. For more information about the
UDP listening service configuration, see "Configuring the NQA server."
To configure a UDP echo operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the UDP echo type and enter its view.
   Command: type udp-echo
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the payload size of each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
7. Configure the string to be filled in the payload of each UDP packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
8. Specify the source port of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
9. Configure the source IP address of UDP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be that of an interface on the device, and the interface must be up. Otherwise, no UDP packets can be sent out.
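For example, the following sketch configures a UDP listening service on a hypothetical NQA server at 10.2.2.2 and a matching UDP echo operation on the client:
# On the NQA server, configure a UDP listening service on port 8000.
<SysnameB> system-view
[SysnameB] nqa server enable
[SysnameB] nqa server udp-echo 10.2.2.2 8000
# On the NQA client, create a UDP echo operation toward the listening service.
<SysnameA> system-view
[SysnameA] nqa entry admin udp1
[SysnameA-nqa-admin-udp1] type udp-echo
[SysnameA-nqa-admin-udp1-udp-echo] destination ip 10.2.2.2
[SysnameA-nqa-admin-udp1-udp-echo] destination port 8000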
Configuring a voice operation
CAUTION:
Do not perform a voice operation to a well-known port from 1 to 1023. Otherwise, the NQA operation might fail or the service on that port might become unavailable.
A voice operation measures voice over IP (VoIP) network performance.
A voice operation works as follows:
1. The NQA client sends voice packets of G.711 A-law, G.711 μ-law, or G.729 A-law codec type at a specific interval to the destination device (NQA server).
2. The destination device adds a time stamp to each voice packet it receives and sends it back to the source.
3. Upon receiving the packet, the source device calculates the jitter and one-way delay based on the time stamps.
The following parameters, which reflect VoIP network performance, can be calculated by using the metrics gathered by the voice operation:
• Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality in a VoIP network. It is determined by packet loss and delay. A higher value represents a lower service quality.
• Mean Opinion Score (MOS)—A MOS value can be evaluated by using the ICPIF value, in the range of 1 to 5. A higher value represents a higher service quality.
The evaluation of voice quality depends on users' tolerance for voice quality. For users with higher tolerance for voice quality, use the advantage-factor command to configure the advantage factor. When the system calculates the ICPIF value, it subtracts the advantage factor to modify the ICPIF and MOS values, so that both objective and subjective factors are considered.
The voice operation requires both the NQA server and the NQA client. Before you perform a voice
operation, configure a UDP listening service on the NQA server. For more information about UDP
listening service configuration, see "Configuring the NQA server."
A voice operation cannot repeat.
To configure a voice operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the voice type and enter its view.
   Command: type voice
4. Configure the destination address of voice packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of voice packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Specify the codec type.
   Command: codec-type { g711a | g711u | g729a }
   Remarks: Optional. By default, the codec type is G.711 A-law.
7. Configure the advantage factor for calculating MOS and ICPIF values.
   Command: advantage-factor factor
   Remarks: Optional. By default, the advantage factor is 0.
8. Specify the source IP address of voice packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no voice packets can be sent out.
9. Specify the source port number of voice packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
10. Configure the payload size of each voice packet.
   Command: data-size size
   Remarks: Optional. By default, the voice packet size depends on the codec type. The default packet size is 172 bytes for the G.711 A-law and G.711 μ-law codec types, and 32 bytes for the G.729 A-law codec type.
11. Configure the string to be filled in the payload of each voice packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
12. Configure the number of voice packets to be sent in a voice probe.
   Command: probe packet-number packet-number
   Remarks: Optional. 1000 by default.
13. Configure the interval for sending voice packets.
   Command: probe packet-interval packet-interval
   Remarks: Optional. 20 milliseconds by default.
14. Configure how long the NQA client waits for a response from the server before it regards the response as timed out.
   Command: probe packet-timeout packet-timeout
   Remarks: Optional. 5000 milliseconds by default.
NOTE:
The display nqa history command cannot show the results of the voice operation. Use the display nqa
result command to display the results, or use the display nqa statistics command to display the statistics of
the operation.
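As a sketch, the commands below configure a UDP listening service for voice packets on a hypothetical NQA server at 10.2.2.2, and a voice operation on the client. The codec type and advantage factor are illustrative values:
# On the NQA server, configure a UDP listening service on port 9000.
<SysnameB> system-view
[SysnameB] nqa server enable
[SysnameB] nqa server udp-echo 10.2.2.2 9000
# On the NQA client, create a voice operation using the G.729 A-law codec and an advantage factor of 10.
<SysnameA> system-view
[SysnameA] nqa entry admin voice1
[SysnameA-nqa-admin-voice1] type voice
[SysnameA-nqa-admin-voice1-voice] destination ip 10.2.2.2
[SysnameA-nqa-admin-voice1-voice] destination port 9000
[SysnameA-nqa-admin-voice1-voice] codec-type g729a
[SysnameA-nqa-admin-voice1-voice] advantage-factor 10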
Configuring a DLSw operation
A DLSw operation measures the response time of a DLSw device.
To configure a DLSw operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the DLSw type and enter its view.
   Command: type dlsw
4. Configure the destination address of probe packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of probe packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no probe packets can be sent out.
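A minimal sketch of the steps above, assuming a DLSw device at the hypothetical address 10.2.2.2:
# Create a DLSw operation destined for the DLSw device at 10.2.2.2.
<Sysname> system-view
[Sysname] nqa entry admin dlsw1
[Sysname-nqa-admin-dlsw1] type dlsw
[Sysname-nqa-admin-dlsw1-dlsw] destination ip 10.2.2.2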
Configuring optional parameters for an NQA operation
Unless otherwise specified, the following optional parameters apply to all NQA operation types.
To configure optional parameters for an NQA operation:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Enter a specified NQA operation type view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure a description.
   Command: description text
   Remarks: Optional. By default, no description is configured.
5. Specify the interval at which the NQA operation repeats.
   Command: frequency interval
   Remarks: Optional. By default, the interval between two consecutive voice tests is 60000 milliseconds. For other tests, the interval is 0 milliseconds, and only one operation is performed. If the operation is not completed when the interval expires, the next operation does not start.
6. Specify the probe times.
   Command: probe count times
   Remarks: Optional. By default, an NQA operation performs one probe. The voice operation can perform only one probe, and does not support this command.
7. Specify the probe timeout time.
   Command: probe timeout timeout
   Remarks: Optional. By default, the timeout time is 3000 milliseconds. This setting is not available for the UDP jitter or voice operation.
8. Specify the TTL for probe packets.
   Command: ttl value
   Remarks: Optional. 20 by default. This setting is not available for the DHCP operation.
9. Specify the ToS value in the IP packet header of probe packets.
   Command: tos value
   Remarks: Optional. 0 by default. This setting is not available for the DHCP operation.
10. Enable the routing table bypass function.
   Command: route-option bypass-route
   Remarks: Optional. Disabled by default. This setting is not available for the DHCP operation.
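For example, the following sketch applies several of these optional parameters to an ICMP echo operation. The destination address and parameter values are illustrative:
# Create an ICMP echo operation, and tune its description, frequency, probe count, probe timeout, and TTL.
<Sysname> system-view
[Sysname] nqa entry admin icmp1
[Sysname-nqa-admin-icmp1] type icmp-echo
[Sysname-nqa-admin-icmp1-icmp-echo] destination ip 10.2.2.2
[Sysname-nqa-admin-icmp1-icmp-echo] description icmp-to-server
[Sysname-nqa-admin-icmp1-icmp-echo] frequency 5000
[Sysname-nqa-admin-icmp1-icmp-echo] probe count 3
[Sysname-nqa-admin-icmp1-icmp-echo] probe timeout 1000
[Sysname-nqa-admin-icmp1-icmp-echo] ttl 64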
Configuring the collaboration function
Collaboration is implemented by associating a reaction entry of an NQA operation with a track entry. The
reaction entry monitors the NQA operation. If the number of operation failures reaches the specified
threshold, the configured action is triggered.
To configure the collaboration function:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo }
   Remarks: The collaboration function is not available for the UDP jitter and voice operations.
4. Configure a reaction entry.
   Command: reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trigger-only
   Remarks: Not configured by default. You cannot modify the content of an existing reaction entry.
5. Exit to system view.
   Command: quit
6. Associate Track with NQA. See High Availability Configuration Guide.
7. Associate Track with an application module. See High Availability Configuration Guide.
Configuring threshold monitoring
Introduction
1. Threshold types
An NQA operation supports the following threshold types:
• average—If the average value for the monitored performance metric either exceeds the upper threshold or goes below the lower threshold, a threshold violation occurs.
• accumulate—If the total number of times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
• consecutive—If the number of consecutive times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
Threshold violations for the average or accumulate threshold type are determined on a per NQA operation basis, and threshold violations for the consecutive type are determined from the time the NQA operation starts.
2. Triggered actions
The following actions might be triggered:
• none—NQA displays results only on the terminal screen. It does not send traps to the NMS.
• trap-only—NQA displays results on the terminal screen, and meanwhile it sends traps to the NMS.
The DNS operation does not support the action of sending trap messages.
3. Reaction entry
In a reaction entry, a monitored element, a threshold type, and an action to be triggered are configured to implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold.
• Before an NQA operation starts, the reaction entry is in invalid state.
• If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of the entry is set to below-threshold.
If the action to be triggered is configured as trap-only for a reaction entry, when the state of the entry changes, a trap message is generated and sent to the NMS.
Configuration prerequisites
Before you configure threshold monitoring, configure the destination address of the trap messages by
using the snmp-agent target-host command. For more information about the command, see Network
Management and Monitoring Command Reference.
Configuration procedure
To configure threshold monitoring:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure threshold monitoring, using the following commands as needed:
   • Enable sending traps to the NMS when specified conditions are met:
     reaction trap { probe-failure consecutive-probe-failures | test-complete | test-failure cumulate-probe-failures }
   • Configure a reaction entry for monitoring the duration of an NQA operation (not supported in UDP jitter and voice operations):
     reaction item-number checked-element probe-duration threshold-type { accumulate accumulate-occurrences | average | consecutive consecutive-occurrences } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring failure times (not supported in UDP jitter and voice operations):
     reaction item-number checked-element probe-fail threshold-type { accumulate accumulate-occurrences | consecutive consecutive-occurrences } [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the round-trip time (only supported in UDP jitter and voice operations):
     reaction item-number checked-element rtt threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring packet loss (only supported in UDP jitter and voice operations):
     reaction item-number checked-element packet-loss threshold-type accumulate accumulate-occurrences [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring one-way jitter (only supported in UDP jitter and voice operations):
     reaction item-number checked-element { jitter-ds | jitter-sd } threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the one-way delay (only supported in UDP jitter and voice operations):
     reaction item-number checked-element { owd-ds | owd-sd } threshold-value upper-threshold lower-threshold
   • Configure a reaction entry for monitoring the ICPIF value (only supported in the voice operation):
     reaction item-number checked-element icpif threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the MOS value (only supported in the voice operation):
     reaction item-number checked-element mos threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   Remarks: Configure the trap sending method as needed. No traps are sent to the NMS by default. The reaction trap command in voice operation view supports only the test-complete keyword.
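As a sketch, the commands below add a reaction entry to an ICMP echo operation that sends a trap to the NMS after three consecutive probe failures. The destination address is hypothetical, and the trap destination is assumed to be already configured with the snmp-agent target-host command:
# Create an ICMP echo operation with a probe-fail reaction entry.
<Sysname> system-view
[Sysname] nqa entry admin icmp1
[Sysname-nqa-admin-icmp1] type icmp-echo
[Sysname-nqa-admin-icmp1-icmp-echo] destination ip 10.2.2.2
[Sysname-nqa-admin-icmp1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trap-only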
Configuring the NQA statistics function
NQA collects statistics for an operation in a statistics group. To view information about the statistics
groups, use the display nqa statistics command. To set the interval for collecting statistics, use the
statistics interval command.
When the number of statistics groups reaches the upper limit and a new statistics group is to be saved, the
oldest statistics group is deleted. To set the maximum number of statistics groups that can be saved, use
the statistics max-group command.
A statistics group is formed after an operation is completed. Statistics groups have an aging mechanism.
A statistics group is deleted when its hold time expires. To set the hold time, use the statistics hold-time
command.
The DHCP operation does not support the NQA statistics function.
If you use the frequency command to set the interval between two consecutive operations to 0, only one
operation is performed, and no statistics group information is generated.
To configure the NQA statistics collection function:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Configure the interval for collecting the statistics.
   Command: statistics interval interval
   Remarks: Optional. 60 minutes by default.
5. Configure the maximum number of statistics groups that can be saved.
   Command: statistics max-group number
   Remarks: Optional. 2 by default. To disable collecting NQA statistics, set the maximum number to 0.
6. Configure the hold time of statistics groups.
   Command: statistics hold-time hold-time
   Remarks: Optional. 120 minutes by default.
Configuring NQA history records saving function
Perform this task to enable the system to save the history records of NQA operations. To display NQA
history records, use the display nqa history command.
This task also configures the following parameters:
• Lifetime of the history records—The records are removed when the lifetime is reached.
• Maximum number of history records that can be saved for an NQA operation—If the maximum number is reached, the earliest history records are removed.
To configure the history records saving function:
1. Enter system view.
   Command: system-view
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Enter NQA operation type view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
4. Enable saving history records for the NQA operation.
   Command: history-record enable
   Remarks: By default, this feature is not enabled.
5. Set the lifetime of history records.
   Command: history-record keep-time keep-time
   Remarks: Optional. By default, the history records in the NQA operation are kept for 120 minutes.
6. Configure the maximum number of history records that can be saved.
   Command: history-record number number
   Remarks: Optional. By default, the maximum number of records that can be saved for the NQA operation is 50.
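For example, the following sketch enables history records for an ICMP echo operation and raises the lifetime and record limits. The values are illustrative:
# Enable saving history records, keep them for 240 minutes, and allow up to 100 records.
<Sysname> system-view
[Sysname] nqa entry admin icmp1
[Sysname-nqa-admin-icmp1] type icmp-echo
[Sysname-nqa-admin-icmp1-icmp-echo] history-record enable
[Sysname-nqa-admin-icmp1-icmp-echo] history-record keep-time 240
[Sysname-nqa-admin-icmp1-icmp-echo] history-record number 100
Use the display nqa history admin icmp1 command to view the saved records.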
Scheduling an NQA operation
The NQA operation works between the specified start time and the end time (the start time plus
operation duration). If the specified start time is ahead of the system time, the operation starts
immediately. If both the specified start and end time are ahead of the system time, the operation does not
start. To view the current system time, use the display clock command.
You can configure the maximum number of NQA operations that can work simultaneously as needed to
avoid excessive system resource consumption.
You cannot enter the operation type view or the operation view of a scheduled NQA operation.
A system time adjustment does not affect started or completed NQA operations. It only affects the NQA
operations that have not started.
To schedule an NQA operation:
1. Enter system view.
   Command: system-view
2. Configure the scheduling parameters for an NQA operation.
   Command: nqa schedule admin-name operation-tag start-time { hh:mm:ss [ yyyy/mm/dd ] | now } lifetime { lifetime | forever }
3. Configure the maximum number of NQA operations that can work simultaneously.
   Command: nqa agent max-concurrent number
   Remarks: Optional.
All MSR routers support the nqa agent max-concurrent command, but they have different value ranges
and default values:
• MSR900—Value range: 1 to 50. Default: 5.
• MSR93X—Value range: 1 to 50. Default: 5.
• MSR20-1X—Value range: 1 to 50. Default: 5.
• MSR20—Value range: 1 to 50. Default: 5.
• MSR30—Value range: 1 to 200. Default: 20.
• MSR50—Value range: 1 to 500. Default: 80.
• MSR1000—Value range: 1 to 50. Default: 5.
Displaying and maintaining NQA
All of the following display commands are available in any view:
• Display history records of NQA operations:
  display nqa history [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display the current monitoring results of reaction entries:
  display nqa reaction counters [ admin-name operation-tag [ item-number ] ] [ | { begin | exclude | include } regular-expression ]
• Display the result of the specified NQA operation:
  display nqa result [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display NQA statistics:
  display nqa statistics [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
• Display NQA server status:
  display nqa server status [ | { begin | exclude | include } regular-expression ]
NQA configuration examples
ICMP echo operation configuration example
Network requirements
As shown in Figure 38, configure and schedule an ICMP echo operation from the NQA client Device A
to Device B through Device C to test the round-trip time.
Figure 38 Network diagram: NQA client Device A (10.1.1.1/24, 10.4.1.1/24) reaches Device B (10.2.2.2/24, 10.3.1.2/24) over two paths, one through Device C (10.1.1.2/24, 10.2.2.1/24) and one through Device D (10.4.1.2/24, 10.3.1.1/24).
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an ICMP echo operation, and specify 10.2.2.2 as the destination IP address.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2
# Configure 10.1.1.2 as the next hop. The ICMP echo requests are sent through Device C to Device B.
[DeviceA-nqa-admin-test1-icmp-echo] next-hop 10.1.1.2
# Configure the ICMP echo operation to perform 10 probes.
[DeviceA-nqa-admin-test1-icmp-echo] probe count 10
# Specify the probe timeout time for the ICMP echo operation as 500 milliseconds.
[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500
# Configure the ICMP echo operation to repeat at an interval of 5000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-echo] frequency 5000
# Enable saving history records and configure the maximum number of history records that can be saved
as 10.
[DeviceA-nqa-admin-test1-icmp-echo] history-record enable
[DeviceA-nqa-admin-test1-icmp-echo] history-record number 10
[DeviceA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the ICMP echo operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the ICMP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 10
Receive response times: 10
Min/Max/Average round trip time: 2/5/3
Square-Sum of round trip time: 96
Last succeeded probe time: 2011-08-23 15:00:01.2
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the ICMP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
370    3         Succeeded  2011-08-23 15:00:01.2
369    3         Succeeded  2011-08-23 15:00:01.2
368    3         Succeeded  2011-08-23 15:00:01.2
367    5         Succeeded  2011-08-23 15:00:01.2
366    3         Succeeded  2011-08-23 15:00:01.2
365    3         Succeeded  2011-08-23 15:00:01.2
364    3         Succeeded  2011-08-23 15:00:01.1
363    2         Succeeded  2011-08-23 15:00:01.1
362    3         Succeeded  2011-08-23 15:00:01.1
361    2         Succeeded  2011-08-23 15:00:01.1
The output shows that the packets sent by Device A can reach Device B through Device C. No packet loss
occurs during the operation. The minimum, maximum, and average round-trip times are 2, 5, and 3
milliseconds, respectively.
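The Min/Max/Average and Square-Sum fields in the output can be reproduced from the individual probe round-trip times. The Python sketch below uses a hypothetical set of 10 samples chosen to be consistent with the example output (the actual per-probe values are not shown in the manual, and the integer truncation of the average is an assumption):

```python
# Hypothetical per-probe RTTs (ms), consistent with the example output:
# min 2, max 5, average 3, square-sum 96.
rtts = [2, 3, 3, 3, 3, 3, 3, 2, 5, 3]

def rtt_stats(samples):
    """Return (min, max, integer average, square-sum) as NQA reports them."""
    return (min(samples),
            max(samples),
            sum(samples) // len(samples),        # assumed truncated integer
            sum(r * r for r in samples))         # "Square-Sum of round trip time"

print(rtt_stats(rtts))  # -> (2, 5, 3, 96)
```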
DHCP operation configuration example
Network requirements
As shown in Figure 39, configure and schedule a DHCP operation to test the time required for Router A
to obtain an IP address from the DHCP server (Router B).
Figure 39 Network diagram
Configuration procedure
# Create a DHCP operation to be performed on interface Ethernet 1/1.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type dhcp
[RouterA-nqa-admin-test1-dhcp] operation interface ethernet 1/1
# Enable the saving of history records.
[RouterA-nqa-admin-test1-dhcp] history-record enable
[RouterA-nqa-admin-test1-dhcp] quit
# Start the DHCP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DHCP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the DHCP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 512/512/512
Square-Sum of round trip time: 262144
Last succeeded probe time: 2011-11-22 09:54:03.8
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DHCP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      512       Succeeded  2011-11-22 09:54:03.8
The output shows that Router A uses 512 milliseconds to obtain an IP address from the DHCP server.
DNS operation configuration example
Network requirements
As shown in Figure 40, configure a DNS operation to test whether Device A can translate the domain
name host.com into an IP address through the DNS server, and test the time required for resolution.
Figure 40 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create a DNS operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dns
# Specify the IP address of the DNS server 10.2.2.2 as the destination address.
[DeviceA-nqa-admin-test1-dns] destination ip 10.2.2.2
# Specify the domain name to be translated as host.com.
[DeviceA-nqa-admin-test1-dns] resolve-target host.com
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dns] history-record enable
[DeviceA-nqa-admin-test1-dns] quit
# Start the DNS operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DNS operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the DNS operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 62/62/62
Square-Sum of round trip time: 3844
Last succeeded probe time: 2008-11-10 10:49:37.3
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DNS operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      62        Succeeded  2008-11-10 10:49:37.3
The output shows that Device A uses 62 milliseconds to translate domain name host.com into an IP
address.
FTP operation configuration example
Network requirements
As shown in Figure 41, configure an FTP operation to test the time required for Device A to upload a file
to the FTP server. The login username is admin, the login password is systemtest, and the file to be
transferred to the FTP server is config.txt.
Figure 41 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an FTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type ftp
# Specify the IP address of the FTP server 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-ftp] destination ip 10.2.2.2
# Specify 10.1.1.1 as the source IP address.
[DeviceA-nqa-admin-test1-ftp] source ip 10.1.1.1
# Set the FTP username to admin, and password to systemtest.
[DeviceA-nqa-admin-test1-ftp] username admin
[DeviceA-nqa-admin-test1-ftp] password simple systemtest
# Configure the device to upload file config.txt to the FTP server.
[DeviceA-nqa-admin-test1-ftp] operation put
[DeviceA-nqa-admin-test1-ftp] filename config.txt
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-ftp] history-record enable
[DeviceA-nqa-admin-test1-ftp] quit
# Start the FTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the FTP operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the FTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 173/173/173
Square-Sum of round trip time: 29929
Last succeeded probe time: 2011-11-22 10:07:28.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the FTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      173       Succeeded  2011-11-22 10:07:28.6
The output shows that Device A uses 173 milliseconds to upload a file to the FTP server.
HTTP operation configuration example
Network requirements
As shown in Figure 42, configure an HTTP operation on the NQA client to test the time required to obtain
data from the HTTP server.
Figure 42 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an HTTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type http
# Specify the IP address of the HTTP server 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-http] destination ip 10.2.2.2
# Configure the HTTP operation to get data from the HTTP server. By default, the HTTP operation type is
get.
[DeviceA-nqa-admin-test1-http] operation get
# Specify /index.htm as the URL the HTTP operation visits.
[DeviceA-nqa-admin-test1-http] url /index.htm
# Configure the operation to use HTTP 1.0. By default, the HTTP version is 1.0.
[DeviceA-nqa-admin-test1-http] http-version v1.0
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-http] history-record enable
[DeviceA-nqa-admin-test1-http] quit
# Start the HTTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the HTTP operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the HTTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 64/64/64
Square-Sum of round trip time: 4096
Last succeeded probe time: 2011-11-22 10:12:47.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the HTTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      64        Succeeded  2011-11-22 10:12:47.9
The output shows that Device A uses 64 milliseconds to obtain data from the HTTP server.
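The get operation retrieves /index.htm using an HTTP/1.0 request. As a rough illustration (not the router's actual implementation, and the exact headers it sends are an assumption), a sketch of building such a request:

```python
def build_get_request(host, url, version="1.0"):
    """Build a minimal HTTP GET request resembling what an NQA HTTP
    get operation sends (illustrative; actual headers may differ)."""
    return (f"GET {url} HTTP/{version}\r\n"
            f"Host: {host}\r\n"
            f"\r\n").encode("ascii")

req = build_get_request("10.2.2.2", "/index.htm")
print(req.decode())
```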
UDP jitter operation configuration example
Network requirements
As shown in Figure 43, configure a UDP jitter operation to test the jitter, delay, and round-trip time
between Device A and Device B.
Figure 43 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 9000.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a UDP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-jitter
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[DeviceA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2
[DeviceA-nqa-admin-test1-udp-jitter] destination port 9000
# Configure the operation to repeat at an interval of 1000 milliseconds.
[DeviceA-nqa-admin-test1-udp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-udp-jitter] quit
# Start the UDP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the UDP jitter operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the UDP jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 10
Receive response times: 10
Min/Max/Average round trip time: 15/32/17
Square-Sum of round trip time: 3235
Last succeeded probe time: 2008-05-29 13:56:17.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4
Min positive DS: 1
Max positive SD: 21
Max positive DS: 28
Positive SD number: 5
Positive DS number: 4
Positive SD sum: 52
Positive DS sum: 38
Positive SD average: 10
Positive DS average: 10
Positive SD square sum: 754
Positive DS square sum: 460
Min negative SD: 1
Min negative DS: 6
Max negative SD: 13
Max negative DS: 22
Negative SD number: 4
Negative DS number: 5
Negative SD sum: 38
Negative DS sum: 52
Negative SD average: 10
Negative DS average: 10
Negative SD square sum: 460
Negative DS square sum: 754
One way results:
Max SD delay: 15
Max DS delay: 16
Min SD delay: 7
Min DS delay: 7
Number of SD delay: 10
Number of DS delay: 10
Sum of SD delay: 78
Sum of DS delay: 85
Square sum of SD delay: 666
Square sum of DS delay: 787
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
# Display the statistics of the UDP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410
Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3
Min positive DS: 1
Max positive SD: 30
Max positive DS: 79
Positive SD number: 186
Positive DS number: 158
Positive SD sum: 2602
Positive DS sum: 1928
Positive SD average: 13
Positive DS average: 12
Positive SD square sum: 45304
Positive DS square sum: 31682
Min negative SD: 1
Min negative DS: 1
Max negative SD: 30
Max negative DS: 78
Negative SD number: 181
Negative DS number: 209
Negative SD sum: 181
Negative DS sum: 209
Negative SD average: 13
Negative DS average: 14
Negative SD square sum: 46994
Negative DS square sum: 3030
One way results:
Max SD delay: 46
Max DS delay: 46
Min SD delay: 7
Min DS delay: 7
Number of SD delay: 410
Number of DS delay: 410
Sum of SD delay: 3705
Sum of DS delay: 3891
Square sum of SD delay: 45987
Square sum of DS delay: 49393
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
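The positive and negative SD (source-to-destination) and DS (destination-to-source) figures above are derived from the differences between consecutive one-way delays: a positive difference means a packet took longer than the one before it, a negative difference means it arrived sooner. A sketch of that bookkeeping over hypothetical delay samples (not taken from the example output):

```python
def jitter_stats(delays):
    """Split consecutive delay differences into positive/negative jitter,
    the way NQA reports Positive/Negative SD (or DS) statistics."""
    pos, neg = [], []
    for prev, cur in zip(delays, delays[1:]):
        d = cur - prev
        if d > 0:
            pos.append(d)
        elif d < 0:
            neg.append(-d)               # reported as a magnitude

    def summary(values):
        return {"number": len(values),
                "min": min(values) if values else 0,
                "max": max(values) if values else 0,
                "sum": sum(values),
                "square_sum": sum(v * v for v in values)}

    return summary(pos), summary(neg)

# Hypothetical SD one-way delays (ms).
pos, neg = jitter_stats([7, 11, 9, 15, 8, 10])
print(pos["number"], neg["number"])   # 3 positive, 2 negative differences
```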
SNMP operation configuration example
Network requirements
As shown in Figure 44, configure an SNMP operation to test the time the NQA client uses to get a value
from the SNMP agent.
Figure 44 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure the SNMP agent (Device B):
# Set the SNMP version to all.
<DeviceB> system-view
[DeviceB] snmp-agent sys-info version all
# Set the read community to public.
[DeviceB] snmp-agent community read public
# Set the write community to private.
[DeviceB] snmp-agent community write private
4. Configure Device A:
# Create an SNMP operation, and configure 10.2.2.2 as the destination IP address.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type snmp
[DeviceA-nqa-admin-test1-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-snmp] history-record enable
[DeviceA-nqa-admin-test1-snmp] quit
# Start the SNMP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the SNMP operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the SNMP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 50/50/50
Square-Sum of round trip time: 2500
Last succeeded probe time: 2011-11-22 10:24:41.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the SNMP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      50        Succeeded  2011-11-22 10:24:41.1
The output shows that Device A uses 50 milliseconds to receive a response from the SNMP agent.
TCP operation configuration example
Network requirements
As shown in Figure 45, configure a TCP operation to test the time the NQA client uses to establish a TCP
connection to the NQA server on Device B.
Figure 45 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and TCP port 9000.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create a TCP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type tcp
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[DeviceA-nqa-admin-test1-tcp] destination ip 10.2.2.2
[DeviceA-nqa-admin-test1-tcp] destination port 9000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-tcp] history-record enable
[DeviceA-nqa-admin-test1-tcp] quit
# Start the TCP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the TCP operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the TCP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 13/13/13
Square-Sum of round trip time: 169
Last succeeded probe time: 2011-11-22 10:27:25.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the TCP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      13        Succeeded  2011-11-22 10:27:25.1
The output shows that Device A uses 13 milliseconds to establish a TCP connection to port 9000
on the NQA server.
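The TCP operation simply times the three-way handshake to the listening port. The same measurement can be sketched in Python against a local listener (the loopback address and OS-chosen port are illustrative, not part of the example network):

```python
import socket
import threading
import time

def serve_once(server_sock):
    conn, _ = server_sock.accept()   # complete one handshake, then close
    conn.close()

# Stand-in for 'nqa server tcp-connect': an ordinary listening socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# The probe: time how long the connection setup takes.
start = time.monotonic()
cli = socket.create_connection(("127.0.0.1", port), timeout=2)
rtt_ms = (time.monotonic() - start) * 1000
cli.close()
t.join()
srv.close()
print(f"TCP connect time: {rtt_ms:.2f} ms")
```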
UDP echo operation configuration example
Network requirements
As shown in Figure 46, configure a UDP echo operation to test the round-trip time between Device A and
Device B. The destination port number is 8000.
Figure 46 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 8000.
[DeviceB] nqa server udp-echo 10.2.2.2 8000
4. Configure Device A:
# Create a UDP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-echo
# Configure 10.2.2.2 as the destination IP address and port 8000 as the destination port.
[DeviceA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2
[DeviceA-nqa-admin-test1-udp-echo] destination port 8000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-udp-echo] history-record enable
[DeviceA-nqa-admin-test1-udp-echo] quit
# Start the UDP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the UDP echo operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the UDP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 25/25/25
Square-Sum of round trip time: 625
Last succeeded probe time: 2011-11-22 10:36:17.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the UDP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      25        Succeeded  2011-11-22 10:36:17.9
The output shows that the round-trip time between Device A and port 8000 on Device B is 25
milliseconds.
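Functionally, the UDP echo server returns each received datagram to its sender, and the client measures the round-trip time of one send/receive cycle. A local sketch of both sides (loopback address and OS-chosen port are illustrative):

```python
import socket
import threading
import time

def echo_once(server_sock):
    data, addr = server_sock.recvfrom(1024)   # echo one datagram back
    server_sock.sendto(data, addr)

# Stand-in for 'nqa server udp-echo': a socket that echoes one datagram.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))                    # port 0: OS picks a free port
port = srv.getsockname()[1]
t = threading.Thread(target=echo_once, args=(srv,))
t.start()

# The probe: send a payload and time the echoed reply.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
start = time.monotonic()
cli.sendto(b"probe", ("127.0.0.1", port))
reply, _ = cli.recvfrom(1024)
rtt_ms = (time.monotonic() - start) * 1000
cli.close()
t.join()
srv.close()
print(f"UDP echo RTT: {rtt_ms:.2f} ms")
```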
Voice operation configuration example
Network requirements
As shown in Figure 47, configure a voice operation to test the jitters between Device A and Device B.
Figure 47 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on IP address 10.2.2.2 and UDP port 9000.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a voice operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type voice
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[DeviceA-nqa-admin-test1-voice] destination ip 10.2.2.2
[DeviceA-nqa-admin-test1-voice] destination port 9000
[DeviceA-nqa-admin-test1-voice] quit
# Start the voice operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the voice operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the voice operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1000
Receive response times: 1000
Min/Max/Average round trip time: 31/1328/33
Square-Sum of round trip time: 2844813
Last succeeded probe time: 2008-06-13 09:49:31.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 1000
Min positive SD: 1
Min positive DS: 1
Max positive SD: 204
Max positive DS: 1297
Positive SD number: 257
Positive DS number: 259
Positive SD sum: 759
Positive DS sum: 1797
Positive SD average: 2
Positive DS average: 6
Positive SD square sum: 54127
Positive DS square sum: 1691967
Min negative SD: 1
Min negative DS: 1
Max negative SD: 203
Max negative DS: 1297
Negative SD number: 255
Negative DS number: 259
Negative SD sum: 759
Negative DS sum: 1796
Negative SD average: 2
Negative DS average: 6
Negative SD square sum: 53655
Negative DS square sum: 1691776
One way results:
Max SD delay: 343
Max DS delay: 985
Min SD delay: 343
Min DS delay: 985
Number of SD delay: 1
Number of DS delay: 1
Sum of SD delay: 343
Sum of DS delay: 985
Square sum of SD delay: 117649
Square sum of DS delay: 970225
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
MOS value: 4.38
ICPIF value: 0
# Display the statistics of the voice operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-06-13 09:45:37.8
Life time: 331 seconds
Send operation times: 4000
Receive response times: 4000
Min/Max/Average round trip time: 15/1328/32
Square-Sum of round trip time: 7160528
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 4000
Min positive SD: 1
Min positive DS: 1
Max positive SD: 360
Max positive DS: 1297
Positive SD number: 1030
Positive DS number: 1024
Positive SD sum: 4363
Positive DS sum: 5423
Positive SD average: 4
Positive DS average: 5
Positive SD square sum: 497725
Positive DS square sum: 2254957
Min negative SD: 1
Min negative DS: 1
Max negative SD: 360
Max negative DS: 1297
Negative SD number: 1028
Negative DS number: 1022
Negative SD sum: 1028
Negative DS sum: 1022
Negative SD average: 4
Negative DS average: 5
Negative SD square sum: 495901
Negative DS square sum: 5419
One way results:
Max SD delay: 359
Max DS delay: 985
Min SD delay: 0
Min DS delay: 0
Number of SD delay: 4
Number of DS delay: 4
Sum of SD delay: 1390
Sum of DS delay: 1079
Square sum of SD delay: 483202
Square sum of DS delay: 973651
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
Max MOS value: 4.38
Min MOS value: 4.38
Max ICPIF value: 0
Min ICPIF value: 0
DLSw operation configuration example
Network requirements
As shown in Figure 48, configure a DLSw operation to test the response time of the DLSw device.
Figure 48 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create a DLSw operation, and configure 10.2.2.2 as the destination IP address.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dlsw
[DeviceA-nqa-admin-test1-dlsw] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dlsw] history-record enable
[DeviceA-nqa-admin-test1-dlsw] quit
# Start the DLSw operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DLSw operation after a period of time.
[DeviceA] undo nqa schedule admin test1
# Display the results of the DLSw operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 19/19/19
Square-Sum of round trip time: 361
Last succeeded probe time: 2011-11-22 10:40:27.7
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DLSw operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index  Response  Status     Time
1      19        Succeeded  2011-11-22 10:40:27.7
The output shows that the response time of the DLSw device is 19 milliseconds.
NQA collaboration configuration example
Network requirements
As shown in Figure 49, configure a static route to Router C with Router B as the next hop on Router A.
Associate the static route, a track entry, and an NQA operation to monitor the state of the static route.
Figure 49 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. On Router A, configure a static route, and associate the static route with track entry 1.
<RouterA> system-view
[RouterA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3. On Router A, configure an ICMP echo operation:
# Create an NQA operation with the administrator name admin and operation tag test1.
[RouterA] nqa entry admin test1
# Configure the NQA operation type as ICMP echo.
[RouterA-nqa-admin-test1] type icmp-echo
# Configure 10.2.1.1 as the destination IP address.
[RouterA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1
# Configure the operation to repeat at an interval of 100 milliseconds.
[RouterA-nqa-admin-test1-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is
triggered.
[RouterA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail
threshold-type consecutive 5 action-type trigger-only
[RouterA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
4. On Router A, create track entry 1, and associate it with reaction entry 1 of the ICMP echo operation.
[RouterA] track 1 nqa entry admin test1 reaction 1
Verifying the configuration
# On Router A, display information about all the track entries.
[RouterA] display track all
Track ID: 1
Status: Positive
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 5        Routes : 5
Destination/Mask    Proto   Pre  Cost  NextHop      Interface
10.1.1.0/24         Static  60   0     10.2.1.1     Eth1/1
10.2.1.0/24         Direct  0    0     10.2.1.2     Eth1/1
10.2.1.2/32         Direct  0    0     127.0.0.1    InLoop0
127.0.0.0/8         Direct  0    0     127.0.0.1    InLoop0
127.0.0.1/32        Direct  0    0     127.0.0.1    InLoop0
The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry
is positive.
# Remove the IP address of Ethernet 1/1 on Router B.
<RouterB> system-view
[RouterB] interface ethernet 1/1
[RouterB-Ethernet1/1] undo ip address
# On Router A, display information about all the track entries.
[RouterA] display track all
Track ID: 1
Status: Negative
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 4        Routes : 4
Destination/Mask    Proto   Pre  Cost  NextHop      Interface
10.2.1.0/24         Direct  0    0     10.2.1.2     Eth1/1
10.2.1.2/32         Direct  0    0     127.0.0.1    InLoop0
127.0.0.0/8         Direct  0    0     127.0.0.1    InLoop0
127.0.0.1/32        Direct  0    0     127.0.0.1    InLoop0
The output shows that the static route does not exist, and the status of the track entry is Negative.
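The collaboration hinges on the reaction entry: five consecutive probe failures flip the track entry from Positive to Negative, which withdraws the associated static route. A toy model of that state machine (the class and method names are illustrative, not Comware internals):

```python
class TrackEntry:
    """Toy model of 'reaction 1 checked-element probe-fail
    threshold-type consecutive 5': five consecutive probe failures
    turn the track entry Negative; one success resets it."""
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.consecutive_failures = 0
        self.status = "Positive"

    def record_probe(self, succeeded):
        if succeeded:
            self.consecutive_failures = 0
            self.status = "Positive"
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.threshold:
                self.status = "Negative"
        return self.status

track = TrackEntry()
for ok in [True, False, False, False, False]:
    track.record_probe(ok)
print(track.status)                # still Positive: only 4 consecutive failures
print(track.record_probe(False))  # 5th consecutive failure -> Negative
```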
Configuring IP traffic ordering
IP traffic ordering enables a device to collect and rank statistics for IP flows.
An interface can be specified as an external or internal interface to collect traffic statistics:
• External interface—Collects only inbound traffic statistics (classified by source IP addresses).
• Internal interface—Collects both inbound and outbound traffic statistics (classified by source and destination IP addresses respectively), including total inbound and outbound traffic statistics, inbound and outbound TCP packet statistics, inbound and outbound UDP packet statistics, and inbound and outbound ICMP packet statistics.
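Conceptually, an internal interface keeps per-host in/out counters for each protocol. A simplified sketch of that classification over hypothetical packet records (addresses and sizes are made up for illustration):

```python
from collections import defaultdict

def tally(packets, local_net="192.168.1."):
    """Classify packets into per-host in/out byte counters by protocol,
    roughly as an internal interface does (simplified illustration)."""
    stats = defaultdict(lambda: defaultdict(int))
    for src, dst, proto, size in packets:
        if dst.startswith(local_net):          # inbound: count against dst
            stats[dst][f"{proto}-IN"] += size
        if src.startswith(local_net):          # outbound: count against src
            stats[src][f"{proto}-OUT"] += size
    return stats

# Hypothetical packets: (source, destination, protocol, bytes).
pkts = [("192.168.1.1", "8.8.8.8", "UDP", 120),
        ("8.8.8.8", "192.168.1.1", "UDP", 240),
        ("192.168.1.2", "10.0.0.5", "TCP", 1500)]
s = tally(pkts)
print(s["192.168.1.1"]["UDP-IN"], s["192.168.1.1"]["UDP-OUT"])
```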
Enabling IP traffic ordering
Step                                      Command                                      Remarks
1. Enter system view.                     system-view                                  N/A
2. Enter interface view.                  interface interface-type interface-number   N/A
3. Enable IP traffic ordering and         ip flow-ordering { external | internal }    Optional. By default, IP traffic
   specify its mode.                                                                  ordering is disabled.
Setting the IP traffic ordering interval
Step                                      Command                                      Remarks
1. Enter system view.                     system-view                                  N/A
2. Set the IP traffic ordering interval.  ip flow-ordering stat-interval { 5 | 10 |   Optional. The default setting is
                                          15 | 30 | 45 | 60 }                          10 seconds.
Displaying and maintaining IP traffic ordering
Task                                      Command                                      Remarks
Display IP traffic ordering statistics.   display ip flow-ordering statistic           Available in any view.
                                          { external | internal } [ | { begin |
                                          exclude | include } regular-expression ]
IP traffic ordering configuration example
Network requirements
As shown in Figure 50, enable IP traffic ordering for IP packets sourced from Host A, Host B, and Host C.
Figure 50 Network diagram
Configuration procedure
1. Configure IP traffic ordering:
# Enable IP traffic ordering on Ethernet 1/1 and specify the interface as an internal interface to collect statistics.
<Device> system-view
[Device] interface ethernet 1/1
[Device-Ethernet1/1] ip address 192.168.1.4 24
[Device-Ethernet1/1] ip flow-ordering internal
[Device-Ethernet1/1] quit
# Set the statistics interval to 30 seconds.
[Device] ip flow-ordering stat-interval 30
2. Display IP traffic ordering statistics.
[Device] display ip flow-ordering statistic internal
Unit: kilobytes/second
User IP        TOTAL IN  TOTAL OUT  TCP-IN  TCP-OUT  UDP-IN  UDP-OUT  ICMP-IN  ICMP-OUT
192.168.1.1    0.2       0.1        0.1     0.1      0.0     0.0      0.1      0.0
192.168.1.2    0.1       0.0        0.1     0.0      0.0     0.0      0.0      0.0
192.168.1.3    0.0       0.0        0.0     0.0      0.0     0.0      0.0      0.0
Configuring sFlow
Sampled Flow (sFlow) is a traffic monitoring technology used to collect and analyze traffic statistics.
As shown in Figure 51, the sFlow system involves an sFlow agent embedded in a device and a remote
sFlow collector. The sFlow agent collects interface counter information and packet content information
and encapsulates the sampled information in sFlow packets. When the sFlow packet buffer is full, or the
aging timer of sFlow packets expires, the sFlow agent sends the sFlow packets in UDP datagrams to the
specified sFlow collector. The sFlow collector analyzes the information and displays the results.
sFlow provides the following sampling mechanisms:
• Flow sampling—Obtains packet content information.
• Counter sampling—Obtains interface counter information.
Figure 51 sFlow system
sFlow has the following advantages:
• Supports traffic monitoring on Gigabit and higher-speed networks.
• Provides good scalability to allow one sFlow collector to monitor multiple sFlow agents.
• Saves money by embedding the sFlow agent in a device, instead of using a dedicated sFlow agent device.
The device supports only the sFlow agent function.
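Because the agent-to-collector transport described above is plain UDP, the receiving side can be sketched with standard sockets. The Python sketch below is a toy stand-in for a collector, not a real one: the standard sFlow port 6343 comes from the text, the function names are invented for this example, and a real collector would still have to decode the binary sFlow v5 datagram payload.

```python
import socket

def open_collector(port=6343):
    """Bind a UDP socket on the sFlow collector port and return it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))  # loopback here; a real collector binds a routable address
    return sock

def receive_one(sock):
    """Receive one sFlow datagram; each datagram fits in a single UDP packet."""
    data, (agent_ip, _port) = sock.recvfrom(9000)
    # A real collector would decode the sFlow v5 structures in `data` here.
    return agent_ip, data

# Demonstrate the transport on loopback (port 0 = any free port).
sock = open_collector(0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"sflow-sample", sock.getsockname())  # plays the role of the agent
agent_ip, data = receive_one(sock)
tx.close()
sock.close()
```

This only shows why the agent needs a reachable collector IP address and why the UDP path matters for troubleshooting later in this chapter; the sampled content itself is opaque at this layer.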
Configuring the sFlow agent and sFlow collector information
Step 1: Enter system view.
  Command: system-view
Step 2: Configure an IP address for the sFlow agent.
  Command: sflow agent { ip ip-address | ipv6 ipv6-address }
  Remarks: Not specified by default. The device periodically checks whether the sFlow agent has an IP address. If the sFlow agent has no IP address configured, the device automatically selects an interface IP address for the sFlow agent but does not save the IP address.
  NOTE:
  • HP recommends that you configure an IP address manually for the sFlow agent.
  • Only one IP address can be specified for the sFlow agent on the device.
Step 3: Configure the sFlow collector information.
  Command: sflow collector collector-id { { ip ip-address | ipv6 ipv6-address } | datagram-size size | description text | port port-number | time-out seconds } *
  Remarks: By default, the device presets a certain number of sFlow collectors. Use the display sflow command to display the parameters of the preset sFlow collectors.
Step 4: Specify the sFlow version.
  Command: sflow version { 4 | 5 }
  Remarks: Optional. The default sFlow version is 5.
Step 5: Specify the source IP address of sFlow packets.
  Command: sflow source { ip ip-address | ipv6 ipv6-address } *
  Remarks: Optional. Not specified by default.
Configuring flow sampling
Step 1: Enter system view.
  Command: system-view
Step 2: Enter Ethernet interface view.
  Command: interface interface-type interface-number
Step 3: Set the flow sampling mode.
  Command: sflow sampling-mode { determine | random }
  Remarks: Optional.
Step 4: Specify the number of packets out of which flow sampling samples a packet on the interface.
  Command: sflow sampling-rate rate
  Remarks: Required.
Step 5: Set the maximum number of bytes of a packet (starting from the packet header) that flow sampling can copy.
  Command: sflow flow max-header length
  Remarks: Optional. The default setting is 128 bytes. HP recommends that you use the default value.
Step 6: Specify the sFlow collector for flow sampling.
  Command: sflow flow collector collector-id
  Remarks: No collector is specified for flow sampling by default.
Configuring counter sampling
Step 1: Enter system view.
  Command: system-view
Step 2: Enter interface view.
  Command: interface interface-type interface-number
Step 3: Set the interval for counter sampling.
  Command: sflow counter interval seconds
  Remarks: Counter sampling is disabled by default.
Step 4: Specify the sFlow collector for counter sampling.
  Command: sflow counter collector collector-id
  Remarks: No collector is specified for counter sampling by default.
Displaying and maintaining sFlow
Task: Display sFlow configuration information.
  Command: display sflow [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
sFlow configuration example
Network requirements
As shown in Figure 52, enable flow sampling and counter sampling on Ethernet 1/1 of the device to
monitor traffic on the port and configure the device to send sampled information to the sFlow collector
through Ethernet 1/3.
Figure 52 Network diagram (Device: Eth1/1 at 1.1.1.2/16 toward Host A at 1.1.1.1/16, Eth1/2 at 2.2.2.1/16 toward Server at 2.2.2.2/16, Eth1/3 at 3.3.3.1/16 toward the sFlow collector at 3.3.3.2/16)
Configuration procedure
1.
Configure the sFlow agent and sFlow collector information:
# Configure the IP address of Ethernet 1/3 on the device as 3.3.3.1/16.
<Device> system-view
[Device] interface ethernet 1/3
[Device-Ethernet1/3] ip address 3.3.3.1 16
[Device-Ethernet1/3] quit
# Configure the IP address for the sFlow agent.
[Device] sflow agent ip 3.3.3.1
# Configure parameters for an sFlow collector: specify sFlow collector ID 2, IP address 3.3.3.2,
the default port number, and description of netserver for the sFlow collector.
[Device] sflow collector 2 ip 3.3.3.2 description netserver
2.
Configure counter sampling:
# Set the counter sampling interval to 120 seconds.
[Device] interface ethernet 1/1
[Device-Ethernet1/1] sflow counter interval 120
# Specify sFlow collector 2 for counter sampling.
[Device-Ethernet1/1] sflow counter collector 2
3.
Configure flow sampling:
# Set the flow sampling mode and sampling rate.
[Device-Ethernet1/1] sflow sampling-mode determine
[Device-Ethernet1/1] sflow sampling-rate 4000
# Specify sFlow collector 2 for flow sampling.
[Device-Ethernet1/1] sflow flow collector 2
# Display the sFlow configuration and operation information.
[Device-Ethernet1/1] display sflow
sFlow Version: 5
sFlow Global Information:
Agent IP:3.3.3.1(CLI)
Collector Information:
ID   IP         Port   Aging   Size   Description
1               6343   0       1400
2    3.3.3.2    6543   N/A     1400   netserver
3               6343   0       1400
4               6343   0       1400
5               6343   0       1400
6               6343   0       1400
7               6343   0       1400
8               6343   0       1400
9               6343   0       1400
10              6343   0       1400
sFlow Port Information:
Interface  CID  Interval(s)  FID  MaxHLen  Rate   Mode       Status
Eth1/1     2    120          2    128      4000   Determine  Active
The output shows that sFlow is active on Ethernet 1/1, the counter sampling interval is 120 seconds, and the flow sampling rate is 4000 (one packet is sampled out of every 4000 packets).
Troubleshooting sFlow configuration
The remote sFlow collector cannot receive sFlow packets
Symptom
The remote sFlow collector cannot receive sFlow packets.
Analysis
• The sFlow collector is not specified.
• sFlow is not configured on the interface.
• The IP address of the sFlow collector specified on the sFlow agent is different from that of the remote sFlow collector.
• No IP address is configured for the Layer 3 interface on the device, or the IP address is configured but the UDP packets that have the IP address as the source cannot reach the sFlow collector.
• The physical link between the device and the sFlow collector fails.
Solution
1. Check that sFlow is correctly configured by using the display sflow command.
2. Check that a correct IP address is configured for the device to communicate with the sFlow collector.
3. Check the physical link between the device and the sFlow collector.
Configuring samplers
Overview
A sampler samples packets: it selects one packet from among a group of sequential packets and sends the packet to the service module for processing.
The following sampling modes are available:
• Fixed mode—The first packet is selected from among sequential packets in each sampling.
• Random mode—Any packet might be selected from among sequential packets in each sampling.
A sampler can be used to sample packets for NetStream. Only the sampled packets are sent and
processed by the traffic monitoring module. Sampling is useful if you have too much traffic and want to
limit how much traffic is to be analyzed. The sampled data is statistically accurate and decreases the
impact on the forwarding capacity of the device.
For more information about NetStream, see "Configuring NetStream."
Creating a sampler
Step 1: Enter system view.
  Command: system-view
Step 2: Create a sampler.
  Command: sampler sampler-name mode { fixed | random } packet-interval rate
  Remarks: The sampling rate is calculated by using the formula 2 to the nth power, where n is the rate. For example, if the rate is 8, each sampling selects one packet from among 256 packets (2 to the 8th power); if the rate is 10, each sampling selects one packet from among 1024 packets (2 to the 10th power).
Displaying and maintaining a sampler
Task: Display configuration and running information about the sampler.
  Command: display sampler [ sampler-name ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Clear running information about the sampler.
  Command: reset sampler statistics [ sampler-name ]
  Remarks: Available in user view.
Sampler configuration example
Network requirements
As shown in Figure 53, configure IPv4 NetStream on Device to collect statistics on incoming and
outgoing traffic on Ethernet 1/2. The NetStream data is sent to port 5000 on the NSC at 12.110.2.2/16.
Do the following:
• Configure fixed sampling in the inbound direction to select the first packet from among 256 packets.
• Configure random sampling in the outbound direction to select one packet randomly from among 1024 packets.
Figure 53 Network diagram
Configuration procedure
# Create sampler 256 in fixed sampling mode, and set the rate to 8. The first packet of 256 (2 to the 8th
power) packets is selected.
<Device> system-view
[Device] sampler 256 mode fixed packet-interval 8
# Create sampler 1024 in random sampling mode, and set the sampling rate to 10. One packet from
among 1024 (2 to the 10th power) packets is selected.
[Device] sampler 1024 mode random packet-interval 10
# Configure Ethernet 1/2, enable IPv4 NetStream to collect statistics about the incoming traffic, and then configure the interface to use sampler 256.
[Device] interface ethernet 1/2
[Device-Ethernet1/2] ip address 12.110.2.1 255.255.0.0
[Device-Ethernet1/2] ip netstream inbound
[Device-Ethernet1/2] ip netstream sampler 256 inbound
# On the same interface, enable IPv4 NetStream to collect statistics about outgoing traffic, and then configure the interface to use sampler 1024.
[Device-Ethernet1/2] ip netstream outbound
[Device-Ethernet1/2] ip netstream sampler 1024 outbound
[Device-Ethernet1/2] quit
# Configure the address and port number of NSC as the destination host for the NetStream data export,
leaving the default for the source interface.
[Device] ip netstream export host 12.110.2.2 5000
Verification
# Execute the display sampler command on Device to view the configuration and running information about sampler 256. The output shows that Device received and processed 256 packets, which reached the number of packets for one sampling, and Device selected the first packet from among the 256 packets received on Ethernet 1/2.
<Device> display sampler 256
Sampler name: 256
Index: 1,
Mode: Fixed,
Packet counter: 0,
Packet-interval: 8
Random number: 1
Total packet number (processed/selected): 256/1
# Execute the display sampler command on Device to view the configuration and running information about sampler 1024. The output shows that Device processed and sent out 1024 packets, which reached the number of packets for one sampling, and Device selected a packet randomly from among the 1024 packets sent out of Ethernet 1/2.
<Device> display sampler 1024
Sampler name: 1024
Index: 2,
Mode: Random,
Packet counter: 0,
Packet-interval: 10
Random number: 370
Total packet number (processed/selected): 1024/1
Configuring PoE
Hardware compatibility
PoE is available only for MSR50 routers that are installed with the MPU-G2, and MSR30-16, MSR30-20,
MSR30-40, MSR30-60, MSR50-40, and MSR50-60 routers that are installed with a PoE-capable
switching module.
Overview
IEEE 802.3af-compliant power over Ethernet (PoE) enables a power sourcing equipment (PSE) to supply
power to powered devices (PDs) through Ethernet interfaces over twisted pair cables. Examples of PDs
include IP telephones, wireless APs, portable chargers, card readers, Web cameras, and data collectors.
A PD can also use a different power source from the PSE at the same time for power redundancy.
As shown in Figure 54, a PoE system comprises the following elements:
• PoE power—The entire PoE system is powered by the PoE power.
• PSE—A PSE supplies power to PDs and can also examine the Ethernet cables connected to PoE interfaces, detect and classify PDs, monitor the power supplying state, and detect connections to PDs. On the device, a PoE-capable interface module is a PSE. The device uses PSE IDs to identify PSEs. To display PSE ID and interface module slot number mappings, use the display poe device command.
• PI—An Ethernet interface with the PoE capability is called a PoE interface. A PoE interface can be an FE or GE interface.
• PD—A PD receives power from the PSE. You can also connect a PD to a redundant power source for reliability.
Figure 54 PoE system diagram
PoE configuration task list
You can configure a PoE interface directly at the CLI or by configuring a PoE profile and applying the PoE
profile to the PoE interface.
To configure a single PoE interface, configure it at the CLI. To configure several PoE interfaces in batches, use a PoE profile. Use only one of the two methods to configure, modify, or remove a PoE configuration parameter on a PoE interface.
Before configuring PoE, make sure the PoE power supply and PSE are operating properly. Otherwise,
either you cannot configure PoE or the PoE configuration does not take effect.
If the PoE power supply is turned off while a device is starting up, the PoE configuration in the PoE profile
might become invalid.
Complete these tasks to configure PoE:
Enabling PoE:
  • Enabling PoE for a PSE—Required.
  • Enabling PoE on a PoE interface—Required.
Detecting PDs:
  • Enabling the PSE to detect nonstandard PDs—Optional.
  • Configuring a PD disconnection detection mode—Optional.
Configuring the PoE power:
  • Configuring the maximum PSE power—Optional.
  • Configuring the maximum PoE interface power—Optional.
Configuring PoE power management:
  • Configuring PSE power management—Optional.
  • Configuring PoE interface power management—Optional.
Configuring the PoE monitoring function:
  • Configuring PSE power monitoring—Optional.
  • Monitoring PD—Optional. The device automatically monitors PDs when supplying power to them, so no configuration is required.
Configuring a PoE interface by using a PoE profile:
  • Configuring a PoE profile—Optional.
  • Applying a PoE profile—Optional.
Upgrading PSE processing software in service—Optional.
Enabling PoE
Enabling PoE for a PSE
If PoE is not enabled for a PSE, the system does not supply power or reserve power for the PSE.
You can enable PoE for a PSE if doing so will not result in PoE power overload. Otherwise, whether you can enable PoE for the PSE depends on whether the PSE is enabled with the PoE power management function.
For more information about PSE power management, see "Configuring PSE power management."
• If the PSE is not enabled with the PoE power management function, you cannot enable PoE for the PSE.
• If the PSE is enabled with the PoE power management function, you can enable PoE for the PSE. Whether the PSE can supply power depends on other factors, such as the power supply priority of the PSE.
When the sum of the power consumption of all PSEs exceeds the maximum power of PoE, the system
considers the PoE to be overloaded. The maximum power of PoE depends on the hardware specifications
of the PoE power supply and the user configuration.
To enable PoE for a PSE:
Step 1: Enter system view.
  Command: system-view
Step 2: Enable PoE for the PSE.
  Command: poe enable pse pse-id
  Remarks: By default, this function is disabled.
Enabling PoE on a PoE interface
The system does not supply power to or reserve power for the PDs connected to a PoE interface unless the
PoE interface is enabled with the PoE function.
You can enable PoE on a PoE interface if the action does not result in power overload on the PSE.
Otherwise, whether you can enable PoE for the PoE interface depends on whether the PoE interface is
enabled with the PoE power management function. For more information about PoE interface power
management, see "Configuring PoE interface power management."
• If the PoE interface is not enabled with the PoE power management function, you cannot enable PoE on the PoE interface.
• If the PoE interface is enabled with the PoE power management function, you can enable PoE on the PoE interface. Whether the PDs can be powered depends on other factors, such as the power supply priority of the PoE interface.
The PSE uses data pairs (pins 1, 2 and 3, 6) of category 3/5 twisted pair cable to supply DC power to
PDs.
When the sum of the power consumption of all powered PoE interfaces on a PSE exceeds the maximum
power of the PSE, the system considers the PSE as overloaded. The maximum PSE power is user
configurable.
To enable PoE for a PoE interface:
Step 1: Enter system view.
  Command: system-view
Step 2: Enter PoE interface view.
  Command: interface interface-type interface-number
Step 3: Enable PoE for the PoE interface.
  Command: poe enable
  Remarks: By default, this function is disabled.
Step 4: Configure the PoE interface power supply mode.
  Command: poe mode signal
  Remarks: Optional. By default, power is supplied to PDs over signal wires (data pairs).
Step 5: Configure a description for the PD connected to the PoE interface.
  Command: poe pd-description text
  Remarks: Optional. By default, no description for the PD connected to the PoE interface is available.
Detecting PDs
Enabling the PSE to detect nonstandard PDs
There are standard PDs and nonstandard PDs. Usually, the PSE can detect only standard PDs and supply
power to them. The PSE can detect nonstandard PDs and supply power to them only if you enable the PSE
to detect nonstandard PDs.
To enable the PSE to detect nonstandard PDs:
Step 1: Enter system view.
  Command: system-view
Step 2: Enable the PSE to detect nonstandard PDs.
  Command: poe legacy enable pse pse-id
  Remarks: By default, the PSE can detect only standard PDs.
Configuring a PD disconnection detection mode
CAUTION:
If you change the PD disconnection detection mode while the device is running, the connected PDs are
powered off.
To detect the PD connection with a PSE, PoE provides two detection modes: AC detection and DC
detection. The AC detection mode uses less energy than the DC detection mode.
To configure a PD disconnection detection mode:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure a PD disconnection detection mode.
  Command: poe disconnect { ac | dc }
  Remarks: Optional. The default is AC.
Configuring the PoE power
Configuring the maximum PSE power
The maximum PSE power is the sum of power that the PDs connected to the PSE can get.
To avoid PSE power interruption due to PoE power overload, make sure the sum of the power of all PSEs is less than the maximum PoE power.
The maximum power of the PSE must be greater than or equal to the total maximum power of all critical PoE interfaces on the PSE to guarantee power to these PoE interfaces.
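The two budget rules above can be restated as small checks. The Python sketch below is an illustration of the arithmetic only (function names and example wattages are ours, not device features; the 247 W and 370 W figures are the module defaults from the table that follows):

```python
def pse_budget_ok(max_poe_power, pse_max_powers):
    """The sum of the maximum power of all PSEs must stay below the maximum PoE power."""
    return sum(pse_max_powers) < max_poe_power

def critical_guarantee_ok(max_pse_power, critical_if_max_powers):
    """The PSE maximum must cover the total maximum power of its critical PoE interfaces."""
    return max_pse_power >= sum(critical_if_max_powers)

# Example: two PSEs at their module defaults (watts) under an 800 W PoE budget,
# and one PSE guaranteeing three critical interfaces at the 15.4 W interface default.
budget_ok = pse_budget_ok(800, [247, 370])
guarantee_ok = critical_guarantee_ok(247, [15.4, 15.4, 15.4])
```

If either check fails, the corresponding configuration step in this chapter is rejected or the new PSE/interface is simply not powered.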
To configure the maximum PSE power:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure the maximum power for the PSE.
  Command: poe max-power max-power pse pse-id
  Remarks: Default maximum power of the PSE:
  • MIM/FIC 16FSW—247 W.
  • MIM/FIC 24FSW—370 W.
Configuring the maximum PoE interface power
The maximum PoE interface power is the maximum power that the PoE interface can provide to the
connected PD. If the PD requires more power than the maximum PoE interface power, the PoE interface
does not supply power to the PD.
To configure the maximum PoE interface power:
Step 1: Enter system view.
  Command: system-view
Step 2: Enter PoE interface view.
  Command: interface interface-type interface-number
Step 3: Configure the maximum power for the PoE interface.
  Command: poe max-power max-power
  Remarks: Optional. The default is 15400 milliwatts.
Configuring PoE power management
PoE power management involves PSE power management and PoE interface power management.
Configuring PSE power management
If the maximum PoE power is lower than the sum of the maximum power that all PSEs require, PSE power
management is applied to decide whether the PSE can enable PoE, whether to supply power to a specific
PSE, and the power-allocation method. If the maximum PoE power of the device is higher than the sum
of the maximum power that all PSEs require, it is unnecessary to enable PSE power management.
If PoE supplies power to PSEs, the following actions occur:
• If the PoE power is overloaded and PSE power management is not enabled, no power is supplied to a new PSE.
• If the PoE power is overloaded and a PSE power-management-priority policy is enabled, the PSE that has a lower priority is first disconnected to guarantee the power supply to a new PSE that has a higher priority.
In descending order, the power-supply priority levels of a PSE are critical, high, and low.
The guaranteed remaining PoE power is the maximum PoE power minus the power allocated to the critical PSE, regardless of whether PoE is enabled for the PSE. If this value is lower than the maximum power of the PSE, you cannot set the power priority of the PSE to critical. Otherwise, you can set the power priority to critical, and this PSE preempts the power of the PSE that has a lower priority level. In this case, the PSE whose power is preempted is disconnected, but its configuration remains unchanged. If you change the priority of the PSE from critical to a lower level, other PSEs have an opportunity to be powered.
To configure PSE power management:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure a PSE power management priority policy.
  Command: poe pse-policy priority
  Remarks: By default, this policy is not configured.
Step 3: Configure the power supply priority for the PSE.
  Command: poe priority { critical | high | low } pse pse-id
  Remarks: Optional. By default, the power supply priority for the PSE is low.
NOTE:
• The guaranteed PoE power is used to guarantee that the key PSEs in the device can be supplied with power all the time, without being influenced by a change of PSEs.
• The guaranteed maximum PoE power is equal to the maximum PoE power.
Configuring PoE interface power management
The power supply priority of a PD depends on the priority of the PoE interface. In descending order, the
power-supply priority levels of a PoE interface are critical, high, and low. Power supply to a PD is subject
to PoE interface power management policies.
All PSEs implement the same PoE interface power management policies. If PoE supplies power to a PD,
the following actions occur:
• If the PoE power is overloaded and PSE power management is not enabled, no power is supplied to a new PD.
• If the PoE power is overloaded and a PSE power-management-priority policy is enabled, the PD that has a lower priority is first disconnected to guarantee the power supply to a new PD that has a higher priority.
The guaranteed remaining PoE power is the maximum PoE power minus the power allocated to the
critical PoE interface, regardless of whether PoE is enabled for the PoE interface. If this is lower than the
maximum power of the PoE interface, you cannot set the power priority of the PoE interface to critical.
Otherwise, you can set the power priority to critical, and this PoE interface preempts the power of the PoE
interface that has a lower priority level. In this case, the PoE interface whose power is preempted is
disconnected, but its configuration remains unchanged. If you change the priority of the PoE interface
from critical to a lower level, the PDs connecting to other PoE interfaces have an opportunity to be
powered.
A guard band of 19 watts is reserved for each PoE interface on the device to prevent a PD from being powered off because of a sudden increase of power. If the remaining power of the PSE is lower than 19 watts and no priority is configured for a PoE interface, the PSE does not supply power to the new PD. If the remaining power of the PSE is lower than 19 watts, but priorities are configured for PoE interfaces, the PoE interface that has a higher priority can preempt the power of a PoE interface that has a lower priority to ensure normal operation of the higher priority PoE interface.
If a sudden increase of the PD power results in PSE power overload, power supply to the PD on the PoE interface that has a lower priority is stopped to ensure power supply to the PD that has a higher priority.
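The priority and guard-band behavior described above can be modeled with a short Python sketch. This is our own simplified toy model of the policy (not device code): a new PD is admitted only if its power plus the 19 W guard band fits the remaining budget, and lower-priority interfaces are preempted one at a time to make room.

```python
PRIORITY = {"critical": 0, "high": 1, "low": 2}  # lower number = higher priority
GUARD_BAND = 19  # watts reserved when admitting a new PD

def admit(pd_power, pd_priority, remaining, powered):
    """Try to power a new PD. `powered` is a list of (priority, watts) entries.
    Returns (admitted, remaining_watts, powered)."""
    while pd_power + GUARD_BAND > remaining:
        # Only interfaces with strictly lower priority can be preempted.
        victims = [p for p in powered if PRIORITY[p[0]] > PRIORITY[pd_priority]]
        if not victims:
            return False, remaining, powered  # no power for the new PD
        victim = max(victims, key=lambda p: PRIORITY[p[0]])  # lowest priority first
        powered.remove(victim)
        remaining += victim[1]  # preempted interface gives its power back
    powered.append((pd_priority, pd_power))
    return True, remaining - pd_power, powered

# A 10 W low-priority PD fits a 40 W budget; a later 15 W critical PD
# then preempts it because 15 + 19 > 30 remaining.
ok1, rem1, powered = admit(10, "low", 40, [])
ok2, rem2, powered = admit(15, "critical", rem1, powered)
```

The preempted interface keeps its configuration in the real device, so it regains power as soon as the budget allows, which this sketch does not model.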
Configuration prerequisites
Enable PoE for PoE interfaces.
Configuration procedure
To configure PoE interface power management:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure a PoE interface power management priority policy.
  Command: poe pd-policy priority
  Remarks: By default, this policy is not configured.
Step 3: Enter PoE interface view.
  Command: interface interface-type interface-number
Step 4: Configure the power supply priority for a PoE interface.
  Command: poe priority { critical | high | low }
  Remarks: Optional. By default, the power supply priority for a PoE interface is low.
Configuring the PoE monitoring function
If the PoE monitoring function is enabled, the system monitors the parameter values related to PoE power
supply, PSE, PD, and device temperature in real time. If a specific value exceeds the limited range, the
system automatically takes measures to protect itself.
Configuring PSE power monitoring
If the PSE power exceeds or drops below the specified threshold, the system sends trap messages.
To configure a power alarm threshold for a PSE:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure a power alarm threshold for the PSE.
  Command: poe utilization-threshold utilization-threshold-value pse pse-id
  Remarks: Optional. 80% by default.
Monitoring PD
If a PSE starts or ends power supply to a PD, the system sends a trap message.
Configuring a PoE interface by using a PoE profile
You can configure a PoE interface either at the CLI or by using a PoE profile and applying the PoE profile
to the PoE interfaces.
To configure a single PoE interface, configure it at the CLI. To configure PoE interfaces in batches, use a
PoE profile.
A PoE profile is a collection of configurations that contain multiple PoE features. On large networks, you
can apply a PoE profile to multiple PoE interfaces, and these interfaces have the same PoE features. If the
PoE interface connecting to a PD changes to another one, instead of reconfiguring the features defined
in the PoE profile one by one, you can apply the PoE profile from the original interface to the current one,
simplifying the PoE configurations.
The device supports up to 100 PoE profiles. You can define PoE configurations based on each PD, save
the configurations for different PDs into different PoE profiles, and apply the PoE profiles to the access
interfaces of PDs accordingly.
Configuring a PoE profile
If a PoE profile is applied, it cannot be deleted or modified before you cancel its application.
The poe max-power max-power and poe priority { critical | high | low } commands must be configured in only one way, either at the CLI or through a PoE profile.
A PoE parameter on a PoE interface must be configured, modified, and deleted in only one way. If a parameter configured in one way (for example, at the CLI) is then configured in the other way (for example, through a PoE profile), the latter configuration fails and the original one remains effective. To make the latter configuration take effect, you must cancel the original one first.
To configure a PoE profile:
Step 1: Enter system view.
  Command: system-view
Step 2: Create a PoE profile, and enter PoE profile view.
  Command: poe-profile profile-name [ index ]
Step 3: Enable PoE for the PoE interface.
  Command: poe enable
  Remarks: By default, this function is disabled.
Step 4: Configure the maximum power for the PoE interface.
  Command: poe max-power max-power
  Remarks: Optional. The default is 15400 milliwatts.
Step 5: Configure the PoE power supply mode for the PoE interface.
  Command: poe mode signal
  Remarks: Optional. The default is signal (power over signal cables).
Step 6: Configure the power supply priority for the PoE interface.
  Command: poe priority { critical | high | low }
  Remarks: Optional. The default is low.
Applying a PoE profile
You can apply a PoE profile in either system view or interface view. If you apply a PoE profile to a PoE interface in both views, the most recent application takes effect. To apply a PoE profile to multiple PoE interfaces, the system view is more efficient.
A PoE profile can be applied to multiple PoE interfaces, but only one PoE profile can be applied to a PoE interface.
To apply the PoE profile in system view:
Step 1: Enter system view.
  Command: system-view
Step 2: Apply the PoE profile to one or multiple PoE interfaces.
  Command: apply poe-profile { index index | name profile-name } interface interface-range
To apply the PoE profile in interface view:
Step 1: Enter system view.
  Command: system-view
Step 2: Enter PoE interface view.
  Command: interface interface-type interface-number
Step 3: Apply the PoE profile to the current PoE interface.
  Command: apply poe-profile { index index | name profile-name }
Upgrading PSE processing software in service
You can upgrade the PSE processing software in service in either of the following two modes:
• Refresh mode—This mode enables you to update the PSE processing software without deleting it. You can upgrade the PSE processing software in refresh mode at the CLI.
• Full mode—This mode deletes the PSE processing software and reloads it. If the PSE processing software is damaged (so you cannot execute any PoE commands), you can upgrade the PSE processing software in full mode to restore the PSE function.
An in-service PSE processing software upgrade might be unexpectedly interrupted (for example, if an
error causes the device to reboot). If you cannot upgrade the PSE processing software in full mode after
a reboot, you can power off the device and restart it before you upgrade it in full mode again. After the
upgrade, restart the device manually to make the new PSE processing software take effect.
To upgrade the PSE processing software in service:
Step 1: Enter system view.
  Command: system-view
Step 2: Upgrade the PSE processing software in service.
  Command: poe update { full | refresh } filename pse pse-id
Displaying and maintaining PoE
All of the following display commands are available in any view.
Task: Display PSE information.
  Command: display poe device [ | { begin | exclude | include } regular-expression ]
Task: Display the power supply state of the specified PoE interface.
  Command: display poe interface [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
Task: Display power information for PoE interfaces.
  Command: display poe interface power [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
Task: Display power information for the PoE power supply and all PSEs.
  Command: display poe power-usage [ | { begin | exclude | include } regular-expression ]
Task: Display PSE information.
  Command: display poe pse [ pse-id ] [ | { begin | exclude | include } regular-expression ]
Task: Display the power supply states of all PoE interfaces connected with the PSE.
  Command: display poe pse pse-id interface [ | { begin | exclude | include } regular-expression ]
Task: Display power information for all PoE interfaces connected with the PSE.
  Command: display poe pse pse-id interface power [ | { begin | exclude | include } regular-expression ]
Task: Display information about the PoE power supply.
  Command: display poe-power [ | { begin | exclude | include } regular-expression ]
Task: Display all information about the configurations and applications of the PoE profile.
  Command: display poe-profile [ index index | name profile-name ] [ | { begin | exclude | include } regular-expression ]
Task: Display all information about the configurations and applications of the PoE profile applied to the specified PoE interface.
  Command: display poe-profile interface interface-type interface-number [ | { begin | exclude | include } regular-expression ]
PoE configuration example
Network requirements
As shown in Figure 55, the device supplies power to PDs through its PoE interfaces, as follows:
•
The device is equipped with two cards that support PoE and that are inserted in Slot 3 and Slot 5
respectively. The PSE IDs are 10 and 16.
•  Allocate a maximum of 400 watts to PSE 10. The default maximum PSE power of PSE 16 can meet its requirements.
•
GigabitEthernet 3/1 and GigabitEthernet 3/2 connect to IP telephones.
•
GigabitEthernet 5/1 and GigabitEthernet 5/2 connect to AP devices.
•  The power supply priority of GigabitEthernet 3/2 is critical. Under the default PoE interface power management priority policy, if a new PD would overload the PSE, the PSE does not supply power to that new PD.
•
The power of the AP device connected to GigabitEthernet 5/2 does not exceed 9000 milliwatts.
Figure 55 Network diagram
Configuration procedure
# Enable PoE for the PSE.
<Sysname> system-view
[Sysname] poe enable pse 10
[Sysname] poe enable pse 16
# Set the maximum power of PSE 10 to 400 watts.
[Sysname] poe max-power 400 pse 10
# Enable PoE on GigabitEthernet 3/1 and GigabitEthernet 5/1.
[Sysname] interface gigabitethernet 3/1
[Sysname-GigabitEthernet3/1] poe enable
[Sysname-GigabitEthernet3/1] quit
[Sysname] interface gigabitethernet 5/1
[Sysname-GigabitEthernet5/1] poe enable
[Sysname-GigabitEthernet5/1] quit
# Enable PoE on GigabitEthernet 3/2, and set its power priority to critical.
[Sysname] interface gigabitethernet 3/2
[Sysname-GigabitEthernet3/2] poe enable
[Sysname-GigabitEthernet3/2] poe priority critical
[Sysname-GigabitEthernet3/2] quit
# Enable PoE on GigabitEthernet 5/2, and set its maximum power to 9000 milliwatts.
[Sysname] interface gigabitethernet 5/2
[Sysname-GigabitEthernet5/2] poe enable
[Sysname-GigabitEthernet5/2] poe max-power 9000
Verifying the configuration
After the configuration takes effect, the IP telephones and AP devices are powered and can operate
correctly.
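To check the result, you might use the display commands described earlier, for example (the interface name is taken from this example's network requirements):

```
[Sysname] display poe interface gigabitethernet 3/2
[Sysname] display poe power-usage
```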
Troubleshooting PoE
Failure to set the priority of a PoE interface to critical
Analysis
•  The guaranteed remaining power of the PSE is lower than the maximum power of the PoE interface.
•  The priority of the PoE interface is already set.
Solution
•  In the first case, either increase the maximum PSE power, or reduce the maximum power of the PoE interface if the guaranteed remaining power of the PSE cannot be modified.
•  In the second case, remove the priority that is already configured.
Failure to apply a PoE profile to a PoE interface
Analysis
•  Some configurations in the PoE profile are already configured.
•  Some configurations in the PoE profile do not meet the configuration requirements of the PoE interface.
•  Another PoE profile is already applied to the PoE interface.
Solution
•  In the first case, remove the original configurations.
•  In the second case, modify the configurations in the PoE profile.
•  In the third case, remove the application of the undesired PoE profile from the PoE interface.
Configuring port mirroring
You cannot configure a Layer 2 mirroring group whose source ports and monitor port are located on
different cards of the same device, but you can do so for a Layer 3 mirroring group.
The HP MSR routers do not support configuring source ports in CPOS interface view.
The HP MSR routers do not support using an aggregate interface as the monitor port.
SIC-4FSW modules, DSIC-9FSW modules, MSR20-1X routers, and fixed Layer 2 Ethernet ports do not
support inter-VLAN mirroring. Before configuring a mirroring group, make sure all ports in the mirroring
group belong to the same VLAN. If a port in an effective mirroring group leaves a mirroring VLAN, the
mirroring function does not take effect. You must remove the mirroring group and configure a new one.
Overview
Port mirroring refers to copying packets that are passing through a port to a monitor port that is
connected to a monitoring device for packet analysis.
Port mirroring terminology
Mirroring source
The mirroring source can be one or more monitored ports. Packets (called "mirrored packets") passing
through them are copied to a port that is connected to a monitoring device for packet analysis. This type
of port is called a "source port" and the device where the mirroring source resides is called a "source
device."
Mirroring destination
The mirroring destination is the destination port (also known as the monitor port) of mirrored packets. It
connects to the data monitoring device. The device where the monitor port resides is called the
"destination device." The monitor port forwards mirrored packets to its connected monitoring device.
A monitor port might receive multiple duplicates of a packet in some cases because it can monitor
multiple mirroring sources. For example, assume that Port 1 is monitoring bidirectional traffic on Port 2
and Port 3 on the same device. If a packet travels from Port 2 to Port 3, two duplicates of the packet will
be received on Port 1.
Mirroring direction
The mirroring direction specifies which traffic (inbound, outbound, or bidirectional) is copied on a
mirroring source:
•
Inbound—Copies packets received on a mirroring source.
•
Outbound—Copies packets sent out of a mirroring source.
•
Bidirectional—Copies packets both received on and sent out of a mirroring source.
Local mirroring group
Port mirroring is implemented through mirroring groups. The mirroring source and the mirroring
destination belong to the same mirroring group.
Port mirroring classification and implementation
Port mirroring includes local port mirroring and remote port mirroring based on whether the mirroring
source and the mirroring destination are on the same device.
Local port mirroring
In local port mirroring, the mirroring source and mirroring destination are on the same device. You can
configure local port mirroring by using the mirroring-group command or the mirror command. These
two methods are implemented in different ways:
•
The mirroring-group command mirrors packets at the network layer, whereas the mirror command
mirrors packets at the physical layer, which provides more packet information.
•
The mirroring-group command does not support mirroring on AUX, Bridge-Aggregation, cellular,
Encrypt, null, WLAN-BSS, or WLAN-Ethernet interfaces, whereas the mirror command supports
mirroring on all interfaces on a router.
Remote port mirroring
In remote port mirroring, the mirroring source and the mirroring destination are on different devices. The
source device copies mirrored packets to a remote destination device, which forwards them to the data
monitoring device. Remote port mirroring is implemented by using the mirror command and supports
mirroring on all interfaces on a router.
NOTE:
The mirror command is configured differently in voice interface view. For more information, see Voice
Configuration Guide.
Configuring local port mirroring
Configuring local port mirroring by using the mirroring-group
command
Local port mirroring configuration task list
Create a local mirroring group and then configure one or multiple source ports and a monitor port for the
local mirroring group.
Complete these tasks to configure local port mirroring (all are required):
1. Creating a local mirroring group
2. Configuring source ports for the local mirroring group
3. Configuring the monitor port for the local mirroring group
Creating a local mirroring group
Step 1. Enter system view.
   Command: system-view
Step 2. Create a local mirroring group.
   Command: mirroring-group group-id local
   Remarks: No local mirroring group exists by default.
NOTE:
A local mirroring group takes effect only after you configure a monitor port and source port for it.
The following matrix shows the feature and router compatibility. All models (MSR900, MSR93X, MSR20-1X, MSR20, MSR30, MSR50, and MSR1000) support creating a local mirroring group. The value range for the group number is 1 to 5 on all models except the MSR50, where it is 1 to 10.
Configuring source ports for the local mirroring group
You can either configure a list of source ports for a mirroring group in system view, or assign the
current port to the mirroring group as a source port in interface view. The two methods lead to the
same result.
Configuration restrictions and guidelines
When you configure source ports for a local mirroring group, follow these restrictions and guidelines:
•
A mirroring group can contain multiple source ports.
•
A port can belong to only one mirroring group. On devices that support mirroring groups with
multiple monitor ports, a port can serve as a source port for multiple mirroring groups, but the port
cannot be a monitor port at the same time.
Configuring source ports
Step 1. Enter system view.
   Command: system-view
Step 2. Configure source ports. Use either method:
   •  (Method 1) In system view:
      mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }
      By default, no source port is configured for a local mirroring group.
   •  (Method 2) In interface view:
      a. interface interface-type interface-number
      b. mirroring-group group-id mirroring-port { both | inbound | outbound }
      By default, a port is not a source port of any local mirroring group.
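As a sketch of Method 2, the following assumed session adds Ethernet 1/1 to local mirroring group 1 as a source port for bidirectional traffic (the interface and group number are illustrative):

```
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] mirroring-group 1 mirroring-port both
```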
Configuring the monitor port for the local mirroring group
CAUTION:
Do not enable the spanning tree feature on the monitor port.
You can either configure the monitor port for a mirroring group in system view, or assign the current
port to a mirroring group as the monitor port in interface view. The two methods lead to the same
result.
Configuration restrictions and guidelines
When you configure the monitor port for a local mirroring group, follow these restrictions and guidelines:
•
A mirroring group contains only one monitor port.
•
HP recommends that you use a monitor port for port mirroring only. This is to make sure that the
data monitoring device receives and analyzes only the mirrored traffic rather than a mix of mirrored
traffic and other traffic.
•  For Layer 3 port mirroring, the device mirrors only the Layer 3 and upper-layer information of packets and cannot mirror the original Layer 2 information. In mirrored packets, the source MAC address is the MAC address of the local device, and the destination MAC address is 00-0F-E2-41-5E-5B.
Configuring the monitor port
Step 1. Enter system view.
   Command: system-view
Step 2. Configure the monitor port. Use either method:
   •  (Method 1) In system view:
      mirroring-group group-id monitor-port monitor-port-id
      By default, no monitor port is configured for a local mirroring group.
   •  (Method 2) In interface view:
      a. interface interface-type interface-number
      b. [ mirroring-group group-id ] monitor-port
      By default, a port is not the monitor port of any local mirroring group.
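As a sketch of Method 2, the following assumed session configures Ethernet 1/3 as the monitor port of local mirroring group 1 (the interface and group number are illustrative):

```
<Sysname> system-view
[Sysname] interface ethernet 1/3
[Sysname-Ethernet1/3] mirroring-group 1 monitor-port
```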
Configuring local port mirroring by using the mirror command
Step 1. Enter system view.
   Command: system-view
Step 2. Enter interface view.
   Command: interface interface-type interface-number
Step 3. Mirror the traffic on the interface to another local interface.
   Command: mirror number number { all | in | out } to local-interface interface-type interface-number [ mac H-H-H ]
   Remarks: By default, the traffic on an interface is not mirrored.
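For example, the following sketch mirrors all traffic on Ethernet 1/1 to Ethernet 1/3 at the physical layer (the interface names and mirror number are illustrative):

```
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] mirror number 1 all to local-interface ethernet 1/3
```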
Configuring remote port mirroring
To configure remote port mirroring by using the mirror command:
Step 1. Enter system view.
   Command: system-view
Step 2. Enter interface view.
   Command: interface interface-type interface-number
Step 3. Mirror the traffic on the interface to a remote host.
   Command: mirror number number { all | in | out } to remote-ip ip-address [ port port ]
   Remarks: By default, the traffic on an interface is not mirrored.
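For example, the following sketch mirrors inbound traffic on Ethernet 1/1 to a remote monitoring host (the IP address and port are placeholders):

```
<Sysname> system-view
[Sysname] interface ethernet 1/1
[Sysname-Ethernet1/1] mirror number 1 in to remote-ip 192.168.10.100 port 9000
```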
Displaying and maintaining port mirroring
Task: Display mirroring group information.
Command: display mirroring-group { group-id | all | local } [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
Local port mirroring configuration example
Network requirements
As shown in Figure 56, Device A connects to the marketing department through Ethernet 1/1 and to the
technical department through Ethernet 1/2. It connects to the server through Ethernet 1/3.
Configure local port mirroring in source port mode to enable the server to monitor the bidirectional traffic
of the marketing department and the technical department.
Figure 56 Network diagram
Configuration procedure
# Create local mirroring group 1.
<DeviceA> system-view
[DeviceA] mirroring-group 1 local
# Configure Ethernet 1/1 and Ethernet 1/2 as source ports, and configure port Ethernet 1/3 as the
monitor port.
[DeviceA] mirroring-group 1 mirroring-port ethernet 1/1 ethernet 1/2 both
[DeviceA] mirroring-group 1 monitor-port ethernet 1/3
# Disable the spanning tree feature on the monitor port Ethernet 1/3.
[DeviceA] interface ethernet 1/3
[DeviceA-Ethernet1/3] undo stp enable
[DeviceA-Ethernet1/3] quit
Verifying the configuration
# Display the configuration of all mirroring groups.
[DeviceA] display mirroring-group all
mirroring-group 1:
type: local
status: active
mirroring port:
Ethernet1/1
both
Ethernet1/2
both
mirroring vlan:
mirroring CPU:
monitor port: Ethernet1/3
You can monitor all the packets received and sent by the marketing department and the technical
department on the server.
Configuring traffic mirroring
The following matrix shows the feature and router compatibility. Configuring traffic mirroring is supported on the MSR900, MSR30, and MSR50, and is not supported on the MSR93X, MSR20-1X, MSR20, or MSR1000.
Overview
Traffic mirroring copies specified packets to a specific destination for packet analysis and monitoring.
Traffic mirroring is implemented through QoS policies. In other words, you define traffic classes and
configure match criteria to classify packets to be mirrored, and then you configure traffic behaviors to
mirror packets that fit the match criteria to the specified destination. You can use traffic mirroring to
flexibly classify packets by defining match criteria and obtain accurate statistics. The MSR routers support
mirroring traffic to an interface, which is to copy the matching packets to a destination interface.
For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS
Configuration Guide.
Traffic mirroring configuration task list
Complete these tasks to configure traffic mirroring (all are required):
1. Configuring match criteria
2. Mirroring traffic to an interface
3. Configuring a QoS policy
4. Applying a QoS policy
On some Layer 2 interfaces, traffic mirroring might conflict with traffic redirecting and port mirroring.
Configuring traffic mirroring
Configuring match criteria
Step 1. Enter system view.
   Command: system-view
Step 2. Create a class, and then enter class view.
   Command: traffic classifier tcl-name [ operator { and | or } ]
   Remarks: By default, no traffic class exists.
Step 3. Configure match criteria.
   Command: if-match [ not ] match-criteria
   Remarks: By default, no match criterion is configured in a traffic class.
For more information about the traffic classifier and if-match commands, see ACL and QoS Command
Reference.
Mirroring traffic to an interface
Step 1. Enter system view.
   Command: system-view
Step 2. Create a behavior, and enter behavior view.
   Command: traffic behavior behavior-name
   Remarks: By default, no traffic behavior exists. For more information about the traffic behavior command, see ACL and QoS Command Reference.
Step 3. Specify the destination interface for traffic mirroring.
   Command: mirror-to interface interface-type interface-number
   Remarks: By default, traffic mirroring is not configured in a traffic behavior.
Configuring a QoS policy
Step 1. Enter system view.
   Command: system-view
Step 2. Create a policy and enter policy view.
   Command: qos policy policy-name
   Remarks: By default, no policy exists.
Step 3. Associate a class with a traffic behavior in the QoS policy.
   Command: classifier tcl-name behavior behavior-name
   Remarks: By default, no traffic behavior is associated with a class.
For more information about the qos policy and classifier behavior commands, see ACL and QoS
Command Reference.
Applying a QoS policy
For more information about applying a QoS policy, see ACL and QoS Configuration Guide.
By applying a QoS policy to a Layer 2 interface, you can mirror the traffic in a specific direction on the
interface. A policy can be applied to multiple interfaces, but in one direction (inbound or outbound) of
an interface, only one policy can be applied.
To apply a QoS policy to a Layer 2 interface:
Step 1. Enter system view.
   Command: system-view
Step 2. Enter Layer 2 interface view.
   Command: interface interface-type interface-number
   Remarks: Settings in interface view take effect on the current interface.
Step 3. Apply a policy to the interface.
   Command: qos apply policy policy-name { inbound | outbound }
   Remarks: For more information about the qos apply policy command, see ACL and QoS Command Reference.
Displaying and maintaining traffic mirroring
Both of the following display commands are available in any view.

Task: Display user-defined traffic behavior configuration.
Command: display traffic behavior user-defined [ behavior-name ] [ | { begin | exclude | include } regular-expression ]

Task: Display user-defined QoS policy configuration.
Command: display qos policy user-defined [ policy-name [ classifier tcl-name ] ] [ | { begin | exclude | include } regular-expression ]
For more information about the display traffic behavior and display qos policy commands, see ACL and
QoS Command Reference.
Traffic mirroring configuration example
Network requirements
As shown in Figure 57, different departments of a company use IP addresses on different subnets. The
marketing and technical departments use the IP addresses on subnets 192.168.1.0/24 and
192.168.2.0/24, respectively. The working hours of the company are from 8:00 to 18:00 on weekdays.
Configure traffic mirroring so that the server can monitor the following traffic:
•
All traffic that the technical department sends to access the Internet.
•
IP traffic that the technical department sends to the marketing department during working hours.
Figure 57 Network diagram
Configuration procedure
1.
Monitor the traffic sent by the technical department to access the Internet:
# Create ACL 3000 to allow packets from the technical department (on subnet 192.168.2.0/24)
to access the Internet.
<DeviceA> system-view
[DeviceA] acl number 3000
[DeviceA-acl-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port
eq www
[DeviceA-acl-adv-3000] quit
# Create traffic class tech_c, and then configure the match criterion as ACL 3000.
[DeviceA] traffic classifier tech_c
[DeviceA-classifier-tech_c] if-match acl 3000
[DeviceA-classifier-tech_c] quit
# Create traffic behavior tech_b, and then configure the action of mirroring traffic to port Ethernet
1/3.
[DeviceA] traffic behavior tech_b
[DeviceA-behavior-tech_b] mirror-to interface ethernet 1/3
[DeviceA-behavior-tech_b] quit
# Create QoS policy tech_p, and then associate traffic class tech_c with traffic behavior tech_b in
the QoS policy.
[DeviceA] qos policy tech_p
[DeviceA-qospolicy-tech_p] classifier tech_c behavior tech_b
[DeviceA-qospolicy-tech_p] quit
# Apply QoS policy tech_p to the outgoing packets of Ethernet 1/1.
[DeviceA] interface ethernet 1/1
[DeviceA-Ethernet1/1] qos apply policy tech_p outbound
[DeviceA-Ethernet1/1] quit
2.
Monitor the traffic that the technical department sends to the marketing department:
# Configure a time range named work to cover the time from 8:00 to 18:00 on working days.
[DeviceA] time-range work 8:0 to 18:0 working-day
# Create ACL 3001 to allow packets sent from the technical department (on subnet
192.168.2.0/24) to the marketing department (on subnet 192.168.1.0/24).
[DeviceA] acl number 3001
[DeviceA-acl-adv-3001] rule permit ip source 192.168.2.0 0.0.0.255 destination
192.168.1.0 0.0.0.255 time-range work
[DeviceA-acl-adv-3001] quit
# Create traffic class mkt_c, and then configure the match criterion as ACL 3001.
[DeviceA] traffic classifier mkt_c
[DeviceA-classifier-mkt_c] if-match acl 3001
[DeviceA-classifier-mkt_c] quit
# Create traffic behavior mkt_b, and then configure the action of mirroring traffic to port Ethernet
1/3.
[DeviceA] traffic behavior mkt_b
[DeviceA-behavior-mkt_b] mirror-to interface ethernet 1/3
[DeviceA-behavior-mkt_b] quit
# Create QoS policy mkt_p, and then associate traffic class mkt_c with traffic behavior mkt_b in
the QoS policy.
[DeviceA] qos policy mkt_p
[DeviceA-qospolicy-mkt_p] classifier mkt_c behavior mkt_b
[DeviceA-qospolicy-mkt_p] quit
# Apply QoS policy mkt_p to the outgoing packets of Ethernet 1/2.
[DeviceA] interface ethernet 1/2
[DeviceA-Ethernet1/2] qos apply policy mkt_p outbound
Verifying the configuration
# Verify that you can monitor the following traffic through the server:
•
All traffic sent by the technical department to access the Internet.
•
All IP traffic that the technical department sends to the marketing department during working hours.
Configuring the information center
Overview
The information center collects and classifies system information as follows:
•
Receives system information including log, trap, and debug information from source modules.
•
Outputs system information to different information channels, according to user-defined output
rules.
•
Outputs system information to different destinations, based on channel-to-destination associations.
Figure 58 Information center diagram (for devices that support the log file feature)
By default, the information center is enabled. It affects system performance to some degree when
processing large amounts of information. If the system resources are insufficient, disable the information
center to save resources.
Classification of system information
System information is divided into the following types:
•
Log information—Describes user operations and interface state changes.
•
Trap information—Describes device faults such as authentication and network failures.
•
Debug information—Displays device running status for troubleshooting.
Source modules refer to protocol modules, board drivers, and configuration modules which generate
system information. You can classify, filter, and output system information based on source modules. To
view the supported source modules, use the info-center source ? command.
System information levels
System information is classified into eight severity levels, from 0 through 7. The smaller the value, the
higher the severity. The device outputs system information with a severity at or above the specified
level. For example, if you configure an output rule with a severity level of 6 (informational), information
that has a severity level from 0 to 6 is output.
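As an assumed sketch of such an output rule (see the info-center source command reference for the exact syntax), the following limits log output of all source modules on the console channel to severity 4 (warnings) and higher:

```
<Sysname> system-view
[Sysname] info-center source default channel console log level warnings
```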
Table 5 System information levels
•  Emergency (severity value 0; command keyword emergencies): The system is unusable. For example, the system authorization has expired.
•  Alert (1; alerts): Action must be taken immediately to solve a serious problem. For example, traffic on an interface exceeds the upper limit.
•  Critical (2; critical): Critical condition. For example, the device temperature exceeds the upper limit, the power module fails, or the fan tray fails.
•  Error (3; errors): Error condition. For example, the link state changes or a storage card is unplugged.
•  Warning (4; warnings): Warning condition. For example, an interface is disconnected, or the memory resources are used up.
•  Notification (5; notifications): Normal but significant condition. For example, a terminal logs in to the device, or the device reboots.
•  Informational (6; informational): Informational message. For example, a command or a ping operation is executed.
•  Debug (7; debugging): Debug message.
Output channels and destinations
Table 6 shows the output channels and destinations.
The system supports ten channels. By default, channels 0 through 6, and channel 9 are configured with
channel names and output destinations. You can change these default settings as needed. You can also
configure channels 7 and 8 and associate them with specific output destinations as needed.
You can use the info-center channel name command to change the name of an information channel.
Each output destination receives information from only one information channel, but each information
channel can output information to multiple output destinations.
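For example, the following assumed sketch renames channel 7 (the new name mychannel is arbitrary):

```
<Sysname> system-view
[Sysname] info-center channel 7 name mychannel
```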
Table 6 Default information channels and output destinations
•  Channel 0 (console): Console. Receives log, trap, and debug information by default.
•  Channel 1 (monitor): Monitor terminal. Receives log, trap, and debug information.
•  Channel 2 (loghost): Log host. Receives log, trap, and debug information.
•  Channel 3 (trapbuffer): Trap buffer. Receives trap information.
•  Channel 4 (logbuffer): Log buffer. Receives log and debug information.
•  Channel 5 (snmpagent): SNMP module. Receives trap information.
•  Channel 6 (channel6): Web interface. Receives log information.
•  Channel 7 (channel7): No output destination specified. Receives log, trap, and debug information.
•  Channel 8 (channel8): No output destination specified. Receives log, trap, and debug information.
•  Channel 9 (channel9): Log file. Receives log, trap, and debug information.
The following matrix shows the feature and router compatibility. All models (MSR900, MSR93X, MSR20-1X, MSR20, MSR30, MSR50, and MSR1000) support the eight output destinations and ten channels, except the MSR20-1X, which does not support the log file output destination.
Default output rules of system information
A default output rule specifies the system information source modules, information type, and severity
levels for an output destination. Table 7 shows the default output rules.
Table 7 Default output rules
For every output destination, the source modules are all supported modules. The list below gives, for each destination, the default status and severity of log, trap, and debug information:
•  Console: log enabled (informational); trap enabled (debug); debug enabled (debug).
•  Monitor terminal: log enabled (informational); trap enabled (debug); debug enabled (debug).
•  Log host: log enabled (informational); trap enabled (debug); debug disabled (debug).
•  Trap buffer: log disabled (informational); trap enabled (informational); debug disabled (debug).
•  Log buffer: log enabled (informational); trap disabled (debug); debug disabled (debug).
•  SNMP module: log disabled (debug); trap enabled (informational); debug disabled (debug).
•  Web interface: log enabled (debug); trap enabled (debug); debug disabled (debug).
•  Log file: log enabled (debug); trap enabled (debug); debug disabled (debug).
System information formats
Formats
The system information format varies with output destinations. See Table 8.
Table 8 System information formats

Output destination: console, monitor terminal, log buffer, trap buffer, SNMP module, or log file
Format: timestamp sysname module/level/digest: content
Example: %Jun 26 17:08:35:809 2008 Sysname SHELL/4/LOGIN: VTY login from 1.1.1.1.

Output destination: log host
•  HP format: <PRI>timestamp Sysname %%vvmodule/level/digest: source content
   Example: <189>Oct 9 14:59:04 2009 Sysname %%10SHELL/5/SHELL_LOGIN(l): VTY logged in from 192.168.1.21.
•  UNICOM format: <PRI>timestamp Sysname vvmodule/level/serial_number: content
   Examples:
   <186>Oct 13 16:48:08 2000 Sysname 10IFNET/2/210231a64jx073000020: log_type=port;content=Vlan-interface1 link status is DOWN.
   <186>Oct 13 16:48:08 2000 Sysname 10IFNET/2/210231a64jx073000020: log_type=port;content=Line protocol on the interface Vlan-interface1 is DOWN.
Field description

PRI (priority): The priority is calculated by using this formula: facility*8+level, where:
•  facility is the facility name. It can be configured with info-center loghost. It is used to identify different log sources on the log host, and to query and filter logs from specific log sources.
•  level ranges from 0 to 7. See Table 5 for more information.
The priority field is available only for information that is sent to the log host.

Timestamp: Records the time when the system information was generated. System information sent to the log host and system information sent to the other destinations have different precisions, and their timestamp formats are configured with different commands. See Table 9 and Table 10.

Sysname (host name or host IP address):
•  If the system information that is sent to a log host is in the UNICOM format, and the info-center loghost source command is configured, or the vpn-instance vpn-instance-name option is provided in the info-center loghost command, the sysname field is displayed as the IP address of the device that generated the system information.
•  If the system information is in the HP format, the field is displayed as the system name of the device that generated the system information. You can use the sysname command to modify the local system name. For more information, see Fundamentals Command Reference.

%% (vendor ID): Indicates that the information was generated by an HP device. It exists only in system information sent to a log host.

vv (version information): Identifies the version of the log, and has a value of 10. It exists only in system information sent to the log host.

Module: Specifies the source module name. You can execute the info-center source ? command in system view to view the module list.

Level (severity): System information is divided into eight severity levels, from 0 to 7. See Table 5 for more information about severity levels. You cannot change the severity levels of system information generated by modules. However, you can use the info-center source command to control the output of system information based on severity levels.

Digest: Briefly describes the content of the system information. It contains a string of up to 32 characters. For system information destined for the log host:
•  If the string ends with (l), the information is log information.
•  If the string ends with (t), the information is trap information.
•  If the string ends with (d), the information is debug information.

Serial number: Indicates the serial number of the device that generated the system information. It is displayed only if the system information is sent to the log host in the UNICOM format.

source: This optional field identifies the source of the information. It is displayed only if the system information is sent to a log host in the HP format. It can take one of the following values:
•  Slot number of a card.
•  IP address of the log sender.

content: Contains the content of the system information.
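As a worked example of the PRI formula, consider the <189> in the HP-format log host example in Table 8. Assuming the facility is local7, whose standard syslog numeric value is 23 (a value not stated in this guide), and the severity level is 5:

```
PRI = facility * 8 + level = 23 * 8 + 5 = 189
```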
Table 9 Timestamp precisions and configuration commands
•  Destined to the log host: precision is seconds; set the timestamp format with the info-center timestamp loghost command.
•  Destined to the console, monitor terminal, log buffer, and log file: precision is milliseconds; set the timestamp format with the info-center timestamp command.
Table 10 Description of the timestamp parameters

boot
Description: Time since system startup, in the format of xxx.yyy. xxx represents the
higher 32 bits, and yyy represents the lower 32 bits, of the milliseconds elapsed.
System information sent to all destinations other than the log host supports this
parameter.
Example: %0.109391473 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23)
has logged in successfully.
0.109391473 is a timestamp in the boot format.

date
Description: Current date and time, in the format of mm dd hh:mm:ss:xxx yyyy. All
system information supports this parameter.
Example: %May 30 05:36:29:579 2003 Sysname FTPD/5/FTPD_LOGIN: User ftp
(192.168.1.23) has logged in successfully.
May 30 05:36:29:579 2003 is a timestamp in the date format.

iso
Description: Timestamp format stipulated in ISO 8601. Only system information that
is sent to the log host supports this parameter.
Example: <189>2003-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN(l):
User ftp (192.168.1.23) has logged in successfully.
2003-05-30T06:42:44 is a timestamp in the iso format.

none
Description: No timestamp is included. All system information supports this
parameter.
Example: % Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in
successfully.

no-year-date
Description: Current date and time without year information, in the format of mm dd
hh:mm:ss:xxx. Only system information that is sent to the log host supports this
parameter.
Example: <189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp
(192.168.1.23) has logged in successfully.
May 30 06:44:22 is a timestamp in the no-year-date format.
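The layout differences among these formats can be made concrete with a short Python sketch that renders one log instant in the date, iso, and no-year-date layouts of Table 10. This is only an illustration of the string layouts, not device code:

```python
from datetime import datetime

# Render one instant in the timestamp layouts of Table 10.
# Illustration only; the device formats these strings itself.
t = datetime(2003, 5, 30, 6, 42, 44, 579000)

# date: mm dd hh:mm:ss:xxx yyyy (milliseconds, then year)
date_fmt = t.strftime("%b %d %H:%M:%S") + ":%03d %d" % (t.microsecond // 1000, t.year)
# iso: ISO 8601, seconds precision
iso_fmt = t.strftime("%Y-%m-%dT%H:%M:%S")
# no-year-date: like date but without the year
no_year = t.strftime("%b %d %H:%M:%S") + ":%03d" % (t.microsecond // 1000)

print(date_fmt)  # May 30 06:42:44:579 2003
print(iso_fmt)   # 2003-05-30T06:42:44
print(no_year)   # May 30 06:42:44:579
```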
FIPS compliance
Table 11 shows the support of devices for the FIPS mode that complies with NIST FIPS 140-2 requirements.
Support for features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For
more information about FIPS mode, see Security Configuration Guide.
Table 11 Hardware and FIPS mode compatibility matrix

Hardware    FIPS mode
MSR900      No
MSR93X      No
MSR20-1X    No
MSR20       Yes
MSR30       Yes, except on the MSR30-16
MSR50       Yes
MSR1000     Yes
Information center configuration task list
All of the following tasks are optional, and the configurations for the information
output destinations function independently:

• Outputting system information to the console
• Outputting system information to the monitor terminal
• Outputting system information to a log host
• Outputting system information to the trap buffer
• Outputting system information to the log buffer
• Outputting system information to the SNMP module
• Outputting system information to the Web interface
• Saving system information to a log file
• Managing security logs
• Enabling synchronous information output
• Disabling an interface from generating link up/down logging information
Outputting system information to the console
To output system information to the console:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the console.
   Command: info-center console channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the console through channel 0 (console).
5. Configure an output rule for the console.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. By default, the timestamp format for log, trap, and debug information is date.
7. Return to user view.
   Command: quit
   Remarks: N/A
8. Enable system information output to the console.
   Command: terminal monitor
   Remarks: Optional. The default setting is enabled.
9. Enable the display of system information on the console.
   Commands:
   • Enable the display of debug information on the console: terminal debugging
   • Enable the display of log information on the console: terminal logging
   • Enable the display of trap information on the console: terminal trapping
   Remarks: Optional. By default, the console displays log and trap information, and discards debug information.
Outputting system information to the monitor terminal

Monitor terminals refer to terminals that log in to the device through the AUX, VTY, or TTY user interface.

To output system information to the monitor terminal:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the monitor terminal.
   Command: info-center monitor channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the monitor terminal through channel 1 (known as monitor).
5. Configure an output rule for the monitor terminal.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. By default, the timestamp format for log, trap, and debug information is date.
7. Return to user view.
   Command: quit
   Remarks: N/A
8. Enable system information output to the monitor terminal.
   Command: terminal monitor
   Remarks: The default setting is disabled. You must first execute this command before you can enable the display of debug, log, and trap information on the monitor terminal.
9. Enable the display of system information on the monitor terminal.
   Commands:
   • Enable the display of debug information on the monitor terminal: terminal debugging
   • Enable the display of log information on the monitor terminal: terminal logging
   • Enable the display of trap information on the monitor terminal: terminal trapping
   Remarks: Optional. By default, the monitor terminal displays log and trap information, and discards debug information.
For more information about terminal access, see Terminal Access Configuration Guide.
Outputting system information to a log host
To output system information to a log host:

1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output rule for the log host.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
5. Specify the source IP address for the log information.
   Command: info-center loghost source interface-type interface-number
   Remarks: Optional. By default, the source IP address of log information is the primary IP address of the matching route's egress interface.
6. Configure the timestamp format for system information output to the log host.
   Command: info-center timestamp loghost { date | iso | no-year-date | none }
   Remarks: Optional. date by default.
7. Set the format of the system information sent to a log host.
   Commands:
   • Set the format to UNICOM: info-center format unicom
   • Set the format to HP: undo info-center format
   Remarks: Optional. HP by default.
8. Specify a log host and configure related parameters.
   Command: info-center loghost [ vpn-instance vpn-instance-name ] { host-ipv4-address | ipv6 host-ipv6-address } [ port port-number ] [ channel { channel-number | channel-name } | facility local-number ] *
   Remarks: By default, no log host or related parameters are specified. If no channel is specified when outputting system information to a log host, the system uses channel 2 (loghost) by default. The value of the port-number argument must be the same as the value configured on the log host; otherwise, the log host cannot receive system information.
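The facility local-number parameter of the info-center loghost command combines with each message's severity to form the <PRI> value that prefixes syslog messages sent to a log host, such as the <189> shown in the Table 10 examples. Per RFC 3164, PRI = facility code × 8 + severity, where local0 through local7 map to facility codes 16 through 23. A minimal sketch (the assumption that the Table 10 examples used local7 is the author's, based on the arithmetic):

```python
# PRI = facility * 8 + severity (RFC 3164).
# local0..local7 correspond to facility codes 16..23.

def syslog_pri(facility_code: int, severity: int) -> int:
    return facility_code * 8 + severity

LOCAL7 = 23   # assumed facility behind the <189> prefix in Table 10
LOCAL4 = 20

print(syslog_pri(LOCAL7, 5))  # 189, i.e. the <189> prefix at severity 5
print(syslog_pri(LOCAL4, 6))  # 166, local4 at informational severity
```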
Outputting system information to the trap buffer
The trap buffer only receives trap information, and discards log and debug information.
To output system information to the trap buffer:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the trap buffer and set the buffer size.
   Command: info-center trapbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the trap buffer through channel 3 (known as trapbuffer), and the default buffer size is 256.
5. Configure an output rule for the trap buffer.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debug information is date by default.
Outputting system information to the log buffer
The log buffer only receives log information, and discards trap and debug information.
To output system information to the log buffer:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the log buffer and set the buffer size.
   Command: info-center logbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the log buffer through channel 4 (known as logbuffer), and the default buffer size is 512.
5. Configure an output rule for the log buffer.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debug information is date by default.
Outputting system information to the SNMP module
The SNMP module only receives trap information, and discards log and debug information.
To monitor the device running status, trap information is usually sent to the SNMP network management
system (NMS). For this purpose, you must configure output of traps to the SNMP module, and set the trap
sending parameters for the SNMP module. For more information about SNMP, see "Configuring SNMP."
To output system information to the SNMP module:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the SNMP module.
   Command: info-center snmp channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the SNMP module through channel 5 (known as snmpagent).
5. Configure an output rule for the SNMP module.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debug information is date by default.
Outputting system information to the Web interface
The Web interface only receives log information, and discards trap and debug information.
This feature allows you to control whether to output system information to the Web interface and, if so,
which system information can be output to the Web interface. The Web interface provides abundant
search and sorting functions. If you output system information to the Web interface, you can view the
system information by clicking corresponding tabs after logging in to the device through the Web
interface.
To output system information to the Web interface:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 6 for default channel names.
4. Configure an output channel for the Web interface.
   Command: info-center syslog channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the Web interface through channel 6.
5. Configure an output rule for the Web interface.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debug information is date by default.
Saving system information to a log file
By default, the log file feature saves system information from the log file buffer to a log file every 24 hours.
You can adjust the saving interval or manually save system information to a log file. After saving
information into a log file, the system clears the log file buffer.
The router supports multiple log files. Each log file has a specific capacity. When the capacity is reached,
the system creates a new log file to save new messages. The log files are named as logfile1.log,
logfile2.log, and so on. If the number of log files reaches the upper limit, or the storage device runs out
of space, the system deletes the earliest log file and creates a new one.
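The naming and rollover behavior just described can be sketched as follows. The function name, the file-count limit, and the directory layout here are illustrative assumptions, not device internals:

```python
import os
import re

# Sketch of the rollover policy described above: log files are named
# logfile1.log, logfile2.log, and so on; when the file count reaches
# the limit, the earliest file is deleted before a new one is created.
def next_logfile(directory: str, max_files: int) -> str:
    pat = re.compile(r"logfile(\d+)\.log$")
    # Collect (index, name) pairs, sorted by numeric index.
    files = sorted(
        (int(m.group(1)), f)
        for f in os.listdir(directory)
        for m in [pat.match(f)] if m
    )
    if files and len(files) >= max_files:
        os.remove(os.path.join(directory, files[0][1]))  # delete the earliest
        files.pop(0)
    n = (files[-1][0] + 1) if files else 1
    return os.path.join(directory, "logfile%d.log" % n)
```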
Saving system information to a log file (MSR900, MSR93X, MSR20, MSR30, or MSR50)
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Enable the log file feature.
   Command: info-center logfile enable
   Remarks: Optional. Enabled by default.
4. Configure the interval at which the system saves logs in the log buffer to a log file.
   Command: info-center logfile frequency freq-sec
   Remarks: Optional. The default saving interval is 86400 seconds.
5. Enable log file overwrite-protection.
   Command: info-center logfile overwrite-protection [ all-port-powerdown ]
   Remarks: By default, log file overwrite-protection is disabled. This feature enables the system to stop saving new system information into log files when the last log file is full or the storage device runs out of space. This feature is supported only in FIPS mode.
6. Configure the maximum size of the log file.
   Command: info-center logfile size-quota size
   Remarks: Optional. The default setting is 10 MB.
7. Configure the directory to save the log files.
   Command: info-center logfile switch-directory dir-name
   Remarks: Optional. By default, the log file is saved in the logfile directory under the root directory of the storage device (the root directory of a storage device varies with devices). The configuration made by this command cannot survive a system reboot.
8. Manually save logs in the log file buffer to a log file.
   Command: logfile save
   Remarks: Optional. Available in any view. By default, the system saves logs in the log file buffer to a log file at the interval configured by the info-center logfile frequency command.

The following matrix shows feature and router compatibility:

Feature           MSR900  MSR93X  MSR20-1X  MSR20  MSR30  MSR50  MSR1000
Log file feature  Yes     Yes     No        Yes    Yes    Yes    Yes
Saving system information to a log file (MSR20-1X)
Task: Enable the log file feature.
  Command: logfile { enable | disable }
  Remarks: Optional. Disabled by default. To make the new configuration take effect, reboot the router.
Task: Display whether the log file feature is enabled.
  Command: display logfile status
  Remarks: Optional.

The following matrix shows the feature and router compatibility:

Feature           MSR900  MSR93X  MSR20-1X  MSR20  MSR30  MSR50  MSR1000
Log file feature  No      No      Yes       No     No     No     No
Managing security logs
Security logs are important for locating and troubleshooting network problems. Because security
logs are generally output together with other logs, it is difficult to identify them among all the logs.
To solve this problem, you can save security logs into a security log file without affecting the current log
output rules. After logging in to the device, the system administrator can enable the saving of security
logs into the security log file and configure related parameters. However, the system administrator cannot
perform any operations on the security log file. Only the security log administrator who has passed AAA
local authentication and logged in to the device can manage the security log file.
A security log administrator is a local user who is authorized by AAA to play the security log
administrator role.
For more information about local user and AAA local authentication, see Security Configuration Guide.
Saving security logs into the security log file
If this feature is enabled, the system first outputs security logs to the security log file buffer, and then saves
the logs in the security log file buffer into the security log file at a specified interval (the security log
administrator can also manually save security logs into the log file). After the logs are saved, the buffer
is cleared immediately.
The size of the security log file is limited. If the maximum size is reached, the system deletes the oldest log
and writes the new log into the security log file. To avoid losing security logs, you can set an alarm
threshold. When the alarm threshold is reached, the system outputs a message to inform the
administrator. The administrator can log in to the device as the security log administrator and back up
the security log file.
By default, security logs are not saved into the security log file. The parameters, such as the saving
interval, the maximum size, and the alarm threshold, have default settings. To modify these parameters,
log in to the device as the system administrator, and then follow the steps in the following table to
configure the related parameters:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Enable the saving of the security logs into the security log file.
   Command: info-center security-logfile enable
   Remarks: Disabled by default.
4. Set the interval for saving security logs to the security log file.
   Command: info-center security-logfile frequency freq-sec
   Remarks: Optional.
5. Set the maximum size of the security log file.
   Command: info-center security-logfile size-quota size
   Remarks: Optional.
6. Set the alarm threshold of the security log file usage.
   Command: info-center security-logfile alarm-threshold usage
   Remarks: Optional. 80 by default. That is, when the usage of the security log file reaches 80%, the system informs the user.

The following matrix shows the feature and router compatibility:

Feature                                          MSR900  MSR93X  MSR20-1X  MSR20  MSR30  MSR50  MSR1000
Saving security logs into the security log file  Yes     Yes     No        Yes    Yes    Yes    Yes
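The alarm-threshold behavior described in this section can be sketched as follows. The names are illustrative; the device performs this check internally:

```python
# When security log file usage reaches the alarm threshold (80% by
# default), the system warns the administrator so the file can be
# backed up before old logs are overwritten.

def usage_alarm(used_bytes: int, quota_bytes: int, threshold_pct: int = 80) -> bool:
    return 100 * used_bytes >= threshold_pct * quota_bytes

print(usage_alarm(8 * 1024 * 1024, 10 * 1024 * 1024))  # True: 80% reached
print(usage_alarm(7 * 1024 * 1024, 10 * 1024 * 1024))  # False: only 70%
```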
Managing the security log file
Task: Display a summary of the security log file.
  Command: display security-logfile summary [ | { begin | exclude | include } regular-expression ]
  Remarks: Optional. Available in user view.
Task: Change the directory of the security log file.
  Command: info-center security-logfile switch-directory dir-name
  Remarks: Optional. By default, the security log file is saved in the seclog directory under the root directory of the storage device. Available in user view.
Task: Display contents of the security log file buffer.
  Command: display security-logfile buffer [ | { begin | exclude | include } regular-expression ]
  Remarks: Optional. Available in user view.
Task: Manually save security logs from the security log file buffer into the security log file.
  Command: security-logfile save
  Remarks: Optional. By default, the system saves security logs from the security log file buffer into the security log file at the interval specified by the info-center security-logfile frequency command. The directory to save the security log file is specified by the info-center security-logfile switch-directory command. Available in user view.
Task: Perform these operations on the security log file.
  Commands:
  • Display the contents of the specified file: more file-url
  • Display information about all files and folders: dir [ /all ] [ file-url ]
  • Create a folder in a specified directory on the storage medium: mkdir directory
  • Change the current working directory: cd { directory | .. | / }
  • Display the current path: pwd
  • Copy a file: copy fileurl-source fileurl-dest
  • Rename a file or a folder: rename fileurl-source fileurl-dest
  • Move a file: move fileurl-source fileurl-dest
  • Move a specified file from a storage medium to the Recycle Bin: delete [ /unreserved ] file-url
  • Remove a folder: rmdir directory
  • Format a storage medium: format device [ FAT16 | FAT32 ]
  • Restore a file from the Recycle Bin: undelete file-url
  Remarks: Optional. Available in user view. For more information about these commands, see Fundamentals Command Reference.
Task: (Optional) Upload the security log file to the FTP server.
  Commands:
  • Establish an FTP connection: ftp [ server-address [ service-port ] [ [ vpn-instance vpn-instance-name ] | [ source { interface interface-type interface-number | ip source-ip-address } ] ] ]
  • Establish an FTP connection in an IPv6 network: ftp ipv6 [ server-address [ service-port ] [ source ipv6 source-ipv6-address ] [ -i interface-type interface-number ] ]
  • Upload a file on the client to the remote FTP server: put localfile [ remotefile ]
  • Download a file from a remote FTP server and save it: get remotefile [ localfile ]
  Remarks: Optional. The ftp and ftp ipv6 commands are available in user view; the other commands are available in FTP client view. For more information about these commands, see Fundamentals Command Reference. For all other operations supported by the device acting as an FTP client, see Fundamentals Configuration Guide.
Enabling synchronous information output
System log output interrupts ongoing configuration operations, obscuring previously entered commands.
Synchronous information output shows the obscured commands. It also provides a command prompt in
command editing mode, or a [Y/N] string in interaction mode so you can continue your operation from
where you were stopped.
If system information, such as log information, is output before you input any information under the
current command line prompt, the system does not display the command line prompt after the system
information output.
If system information is output when you are inputting some interactive information (non-Y/N
confirmation information), the system displays your previous input in a new line but does not display the
command line prompt.
To enable synchronous information output:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable synchronous information output.
   Command: info-center synchronous
   Remarks: Disabled by default.
Disabling an interface from generating link up/down logging information
By default, all interfaces generate link up or link down log information when the state changes. In some
cases, you might want to disable specific interfaces from generating this information. For example:
• You are concerned only about the states of some interfaces. In this case, you can use this function
to disable the other interfaces from generating link up and link down log information.
• An interface is unstable and continuously outputs log information. In this case, you can disable the
interface from generating link up and link down log information.
Use the default setting in normal cases to avoid affecting interface status monitoring.
To disable an interface from generating link up/down logging information:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enter interface view.
   Command: interface interface-type interface-number
   Remarks: N/A
3. Disable the interface from generating link up or link down logging information.
   Command: undo enable log updown
   Remarks: By default, all interfaces generate link up and link down logging information when the state changes.
Displaying and maintaining information center
Task: Display information about information channels.
  Command: display channel [ channel-number | channel-name ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display information center configuration information.
  Command: display info-center [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the state and the log information of the log buffer.
  Command: display logbuffer [ reverse ] [ level severity | size buffersize ] * [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display a summary of the log buffer.
  Command: display logbuffer summary [ level severity ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the content of the log file buffer.
  Command: display logfile buffer [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the configuration of the log file.
  Command: display logfile summary [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the state and the trap information of the trap buffer.
  Command: display trapbuffer [ reverse ] [ size buffersize ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Clear the log buffer.
  Command: reset logbuffer
  Remarks: Available in user view.
Task: Clear the trap buffer.
  Command: reset trapbuffer
  Remarks: Available in user view.
The following matrix shows the commands and router compatibility:
Command                  MSR900  MSR93X  MSR20-1X  MSR20  MSR30  MSR50  MSR1000
display logfile buffer   Yes     Yes     No        Yes    Yes    Yes    Yes
display logfile summary  Yes     Yes     No        Yes    Yes    Yes    Yes
Information center configuration examples
Outputting log information to the console
Network requirements
Configure the device to send ARP and IP log information that has a severity level of at least informational
to the console.
Figure 59 Network diagram
Configuration procedure
# Enable the information center.
<Sysname> system-view
[Sysname] info-center enable
# Use channel console to output log information to the console. By default, log information is output to the
console through channel console.
[Sysname] info-center console channel console
# Disable the output of log, trap, and debug information of all modules on channel console.
[Sysname] info-center source default channel console debug state off log state off trap
state off
To avoid output of unnecessary information, disable the output of log, trap, and debug information of all
modules on the specified channel (console in this example), and then configure the output rule as
needed.
# Configure an output rule to enable the ARP and IP modules to send log information that has a severity
level of at least informational to the console. (The supported source modules depend on the device
model.)
[Sysname] info-center source arp channel console log level informational state on
[Sysname] info-center source ip channel console log level informational state on
[Sysname] quit
# Enable the display of log information on the console. By default, the display of log information on the
console is enabled.
<Sysname> terminal monitor
Info: Current terminal monitor is on.
<Sysname> terminal logging
Info: Current terminal logging is on.
Now, if the ARP and IP modules generate log information, the information center automatically sends the
log information to the console.
Outputting log information to a UNIX log host
Network requirements
Configure the device to send ARP and IP log information that has a severity level of at least informational
to the UNIX log host at 1.2.0.1/16.
Figure 60 Network diagram
Configuration procedure
Before the configuration, make sure the device and the log host can reach each other. (Details not
shown.)
1. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify the log host 1.2.0.1/16, use channel loghost to output log information, and specify
local4 as the logging facility. By default, log information is output to a log host through channel
loghost.
[Device] info-center loghost 1.2.0.1 channel loghost facility local4
# Disable the output of log, trap, and debug information of all modules on channel loghost.
[Device] info-center source default channel loghost debug state off log state off trap
state off
To avoid outputting unnecessary information, disable the output of log, trap, and debug
information on the specified channel (loghost in this example) before you configure an output rule.
# Configure an output rule to output to the log host ARP and IP log information that has a severity
level of at least informational.
[Device] info-center source arp channel loghost log level informational state on trap
state off
[Device] info-center source ip channel loghost log level informational state on trap
state off
2. Configure the log host:
The following configuration was performed on Solaris, whose syslog configuration is similar to that of
the UNIX operating systems of other vendors.
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and then create file info.log in
the Device directory to save logs from Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit the file syslog.conf in directory /etc/ and add the following contents.
# Device configuration messages
local4.info    /var/log/Device/info.log
In this configuration, local4 is the name of the logging facility that the log host uses to receive
logs. info is the informational level. The UNIX system records the log information that has a
severity level of at least informational to the file /var/log/Device/info.log.
NOTE:
Be aware of the following issues while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the device using the info-center loghost and info-center source
commands. Otherwise the log information might not be output properly to the log host.
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd using the -r
option to make the new configuration take effect.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &
Now, the system can record log information into the log file.
Outputting log information to a Linux log host
Network requirements
Configure the device to send log information that has a severity level of at least informational to the Linux
log host at 1.2.0.1/16.
Figure 61 Network diagram
Configuration procedure
Before the configuration, make sure the device and the log host can reach each other. (Details not
shown.)
1. Configure the device:
# Enable the information center.
<Sysname> system-view
[Sysname] info-center enable
# Specify the host 1.2.0.1/16 as the log host, use the channel loghost to output log information,
and specify local5 as the logging facility. By default, log information is output to a log host through
channel loghost.
[Sysname] info-center loghost 1.2.0.1 channel loghost facility local5
# Configure an output rule to output to the log host the log information that has a severity level of
at least informational.
[Sysname] info-center source default channel loghost log level informational state
on debug state off trap state off
The debug state off and trap state off keywords disable the output of unnecessary debugging and trap information of all modules on the channel specified in the output rule.
2. Configure the log host:
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in the directory /var/log/, and create file info.log in the
Device directory to save logs of Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit the file syslog.conf in the directory /etc/ and add the following contents.
# Device configuration messages
local5.info    /var/log/Device/info.log
In this configuration, local5 is the name of the logging facility that the log host uses to receive
logs. info is the informational level. The Linux system records the log information that has a
severity level of at least informational to the file /var/log/Device/info.log.
NOTE:
Be aware of the following issues while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the device using the info-center loghost and info-center source
commands. Otherwise, the log information might not be output properly to the log host.
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd using the -r option to make the new configuration take effect.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &
Make sure the syslogd process is started with the -r option on a Linux log host.
Now, the system can record log information into the log file.
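The facility and severity configured above map directly onto the PRI field of the syslog datagram the device sends to the log host. The following Python sketch (not from this guide; the function names and message text are illustrative) shows how a local5, informational message is encoded and sent to UDP port 514:

```python
import socket

# Facility and severity codes from the BSD syslog protocol: local5 is
# facility 21 and informational is severity 6, matching the example above.
FACILITY_LOCAL5 = 21
SEVERITY_INFO = 6

def syslog_datagram(facility, severity, message):
    # PRI = facility * 8 + severity, enclosed in angle brackets.
    pri = facility * 8 + severity
    return ("<%d>%s" % (pri, message)).encode()

def send_to_log_host(host, message, port=514):
    # Log hosts started with syslogd -r listen for these datagrams on UDP 514.
    datagram = syslog_datagram(FACILITY_LOCAL5, SEVERITY_INFO, message)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(datagram, (host, port))
```

A datagram whose PRI does not match the local5.info selector in /etc/syslog.conf is ignored by that rule, which is why the facility configured with info-center loghost must match the facility in the selector.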
Using ping, tracert, and system debugging
Use the ping, tracert, and system debugging utilities to test network connectivity and identify network
problems.
Ping
The ping utility sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving
the requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source device.
The source device outputs statistics about the ping operation, including the number of packets sent,
number of echo replies received, and the round-trip time. You can measure the network performance by
analyzing these statistics.
Using a ping command to test network connectivity
Execute ping commands in any view.
Task: Test the network connectivity to an IP address.
Command:
• For an IPv4 network:
ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t timeout | -tos tos | -v | -vpn-instance vpn-instance-name ] * host
• For an IPv6 network:
ping ipv6 [ -a source-ipv6 | -c count | -m interval | -s packet-size | -t timeout | -vpn-instance vpn-instance-name ] * host [ -i interface-type interface-number ]
Remarks:
• Set a larger value for the timeout timer (indicated by the -t parameter in the command) when you configure the ping command for a low-speed network.
• Only the directly connected segment address can be pinged if the outgoing interface is specified with the -i keyword.
• Disabling the echo reply function on the firewall installed on the destination affects the ping function.
For more information about the ping ipx command, see IPX Command Reference.
For more information about the ping lsp command, see MPLS Command Reference.
Ping example
Network requirements
Test the network connectivity between Device A and Device C in Figure 62. If they can reach each other,
get detailed information about routes from Device A to Device C.
Figure 62 Network diagram
Configuration procedure
# Use the ping command on Device A to test connectivity to Device C.
<DeviceA> ping 1.1.2.2
PING 1.1.2.2: 56  data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=205 ms
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/41/205 ms
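The round-trip summary line can be reproduced from the individual reply lines. A minimal Python sketch (the truncating integer average is an assumption about the device's rounding, consistent with the 1/41/205 result above):

```python
import re

def rtt_summary(reply_lines):
    # Pull the millisecond value out of each "time=N ms" reply line.
    matches = (re.search(r"time=(\d+) ms", line) for line in reply_lines)
    times = [int(m.group(1)) for m in matches if m]
    # min/avg/max as printed in the ping statistics; average truncated.
    return min(times), sum(times) // len(times), max(times)
```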
# Get detailed information about routes from Device A to Device C.
<DeviceA> ping -r 1.1.2.2
PING 1.1.2.2: 56  data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=53 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/11/53 ms
The test procedure with the ping -r command (see Figure 62) is as follows:
1. The source device (Device A) sends an ICMP echo request with the RR option being empty to the destination device (Device C).
2. The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to the RR option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination device sends an ICMP echo reply.
4. The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option in the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1) to the RR option. Finally, you can get the detailed information of routes from Device A to Device C: 1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
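The accumulation of the RR option in these steps can be sketched as a simple simulation (the device roles and addresses are taken from Figure 62; the data structures are illustrative, not a protocol implementation):

```python
def record_route(request_hops, destination_out, reply_hops, source_in):
    # The RR option starts empty; each device appends one interface address.
    rr = []
    rr.extend(request_hops)     # step 2: Device B's outbound interface
    rr.append(destination_out)  # step 3: Device C's outbound interface
    rr.extend(reply_hops)       # step 4: Device B's outbound interface (reply)
    rr.append(source_in)        # step 5: Device A's inbound interface
    return rr
```

The result matches the order of the Record Route lines in the ping -r output above.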
Tracert
Tracert (also called "Traceroute") enables you to get the IP addresses of Layer 3 devices in the path to a
specific destination. You can use tracert to test network connectivity and identify failed nodes.
Figure 63 Traceroute operation
Tracert uses received ICMP error messages to get the IP addresses of devices. As shown in Figure 63,
tracert works as follows:
1. The source device (Device A) sends a UDP packet with a TTL value of 1 to the destination device (Device D). The destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. In this way, the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the source device the address of the second Layer 3 device (1.1.2.2).
5. The process continues until the packet sent by the source device reaches the ultimate destination device. Because no application uses the destination port specified in the packet, the destination device responds with a port-unreachable ICMP message to the source device, with its IP address encapsulated. This way, the source device gets the IP address of the destination device (1.1.3.2).
6. After receiving the port-unreachable ICMP message, the source device determines that the packet has reached the destination device, and that the path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.
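The TTL-increment logic above can be sketched as follows (a simulation over a known hop list, not a raw-socket implementation; the function name is illustrative):

```python
def trace_path(intermediate_hops, destination):
    # intermediate_hops: addresses of the Layer 3 devices between source
    # and destination, in order; each answers a probe whose TTL expires
    # at it with a TTL-expired ICMP error carrying its own address.
    path = []
    ttl = 1
    while ttl <= len(intermediate_hops):
        path.append(intermediate_hops[ttl - 1])  # TTL expired at this hop
        ttl += 1
    # The destination answers the final probe with a port-unreachable
    # ICMP message, revealing its own address and ending the trace.
    path.append(destination)
    return path
```

With the addresses from Figure 63, the trace yields the path 1.1.1.2, 1.1.2.2, 1.1.3.2.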
Prerequisites
Before you use a tracert command, perform the tasks in this section.
For an IPv4 network:
• Enable sending of ICMP timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are HP devices, execute the ip ttl-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMP destination unreachable packets on the destination device. If the destination device is an HP device, execute the ip unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
• If there is an MPLS network between the source and destination devices and you need to display the MPLS information during the tracert process, enable support for ICMP extensions on the source and intermediate devices. If the source and intermediate devices are HP devices, execute the ip icmp-extensions compliant command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
For an IPv6 network:
• Enable sending of ICMPv6 timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are HP devices, execute the ipv6 hoplimit-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMPv6 destination unreachable packets on the destination device. If the destination device is an HP device, execute the ipv6 unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
Using a tracert command to identify failed or all nodes in a path
Execute tracert commands in any view.
Task: Display the routes from source to destination.
Command (use either approach):
• For an IPv4 network:
tracert [ -a source-ip | -f first-ttl | -m max-ttl | -p port | -q packet-number | -vpn-instance vpn-instance-name | -w timeout ] * host
• For an IPv6 network:
tracert ipv6 [ -f first-ttl | -m max-ttl | -p port | -q packet-number | -vpn-instance vpn-instance-name | -w timeout ] * host
For more information about the tracert lsp command, see MPLS Command Reference.
System debugging
The device supports debugging for the majority of protocols and features and provides debugging
information to help users diagnose errors.
Debugging information control switches
The following switches control the display of debugging information:
• Protocol debugging switch—Controls whether to generate the protocol-specific debugging information.
• Screen output switch—Controls whether to display the debugging information on a certain screen.
As shown in Figure 64, assume that the device can provide debugging for the three modules 1, 2, and 3. The debugging information can be output on a terminal only when both the protocol debugging switch and the screen output switch are turned on.
Output of debugging information depends on the configurations of the information center and the debugging commands of each protocol and functional module. Debugging information is typically displayed on a terminal (including console or VTY). You can also send debugging information to other destinations. For more information, see "Configuring the information center."
Figure 64 Relationship between the protocol and screen output switch
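The relationship shown in Figure 64 reduces to an AND of the two switches. A minimal sketch (the function name is illustrative):

```python
def debug_output_reaches_terminal(protocol_switch_on, screen_switch_on):
    # Debugging information is generated only if the module's protocol
    # debugging switch is on, and displayed only if the terminal's screen
    # output switch is also on; both must be true for output to appear.
    return protocol_switch_on and screen_switch_on
```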
Debugging a feature module
Output of debugging commands is memory intensive. To guarantee system performance, enable
debugging only for modules that are in an exceptional condition. When debugging is complete, use the
undo debugging all command to disable all the debugging functions.
You must configure the debugging, terminal debugging, and terminal monitor commands before you can display detailed debugging information on the terminal. For more information about the terminal debugging and terminal monitor commands, see Network Management and Monitoring Command Reference.
To debug a feature module and display the debugging information on a terminal:
1. Enable the terminal monitoring of system information:
   terminal monitor
   Optional. By default, the terminal monitoring on the console port is enabled and that on the monitoring terminal is disabled. Available in user view.
2. Enable the terminal to display debugging information:
   terminal debugging
   By default, terminal display of debugging information is disabled. Available in user view.
3. Enable debugging for a specified module:
   debugging { all [ timeout time ] | module-name [ option ] }
   By default, debugging for a specified module is disabled. Available in user view.
4. Display the enabled debugging functions:
   display debugging [ interface interface-type interface-number ] [ module-name ] [ | { begin | exclude | include } regular-expression ]
   Optional. Available in any view.
Ping and tracert example
Network requirements
As shown in Figure 65, Device A failed to Telnet to Device C. Determine whether Device A and Device C can reach each other. If they cannot reach each other, locate the failed nodes in the network.
Figure 65 Network diagram
Configuration procedure
1. Use the ping command to test connectivity between Device A and Device C.
<DeviceA> ping 1.1.2.2
PING 1.1.2.2: 56  data bytes, press CTRL_C to break
Request time out
Request time out
Request time out
Request time out
Request time out
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
0 packet(s) received
100.00% packet loss
The output shows that Device A and Device C cannot reach each other.
2. Use the tracert command to identify failed nodes.
# Enable sending of ICMP timeout packets on Device B.
<DeviceB> system-view
[DeviceB] ip ttl-expires enable
# Enable sending of ICMP destination unreachable packets on Device C.
<DeviceC> system-view
[DeviceC] ip unreachables enable
# Execute the tracert command on Device A.
<DeviceA> tracert 1.1.2.2
traceroute to 1.1.2.2(1.1.2.2) 30 hops max,40 bytes packet, press CTRL_C to break
 1  1.1.1.2 14 ms 10 ms 20 ms
 2  * * *
 3  * * *
 4  * * *
 5
<DeviceA>
The output shows that Device A and Device C cannot reach each other, Device A and Device B can reach each other, and an error occurred on the connection between Device B and Device C.
# Use the debugging ip icmp command on Device A and Device C to verify that they can send and receive the specific ICMP packets, or use the display ip routing-table command to verify the availability of active routes between Device A and Device C.
Configuring IPv6 NetStream
Overview
Legacy methods of collecting traffic statistics, such as SNMP and port mirroring, cannot provide precise network management because of inflexible statistical methods or the high cost of dedicated servers. This calls for a new technology to collect traffic statistics.
IPv6 NetStream provides statistics about network traffic flows, and it can be deployed on access,
distribution, and core layers.
IPv6 NetStream implements the following features:
• Accounting and billing—IPv6 NetStream provides fine-grained data about network usage based on resources such as lines, bandwidth, and time periods. ISPs can use the data for billing based on time period, bandwidth usage, application usage, and QoS. Enterprise customers can use this information for department chargeback or for cost allocation.
• Network planning—IPv6 NetStream data provides key information, such as AS traffic information, for optimizing the network design and planning. This helps maximize network performance and reliability while minimizing the network operation cost.
• Network monitoring—Configured on the Internet interface, IPv6 NetStream allows monitoring of traffic and bandwidth utilization in real time. By using this information, administrators can understand how the network is used and where the bottlenecks are, so that they can better plan the resource allocation.
• User monitoring and analysis—The IPv6 NetStream data provides detailed information about network applications and resources. This information helps network administrators efficiently plan and allocate network resources, which helps ensure network security.
IPv6 NetStream basic concepts
IPv6 flow
IPv6 NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv6 flow is defined by the following 7-tuple elements: destination IP address, source IP address, destination port number, source port number, protocol number, ToS, and inbound or outbound interface. The 7-tuple elements define a unique flow.
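Per-flow accounting on the 7-tuple can be sketched as a cache keyed on those elements (a conceptual Python sketch; the packet field names are illustrative, not from the guide):

```python
from collections import defaultdict

def flow_key(pkt):
    # The 7-tuple that defines a unique IPv6 flow.
    return (pkt["dst_ip"], pkt["src_ip"], pkt["dst_port"], pkt["src_port"],
            pkt["protocol"], pkt["tos"], pkt["interface"])

def account(packets):
    # One cache entry per flow; each entry accumulates flow statistics.
    cache = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        entry = cache[flow_key(pkt)]
        entry["packets"] += 1
        entry["bytes"] += pkt["length"]
    return cache
```

Packets that differ in any one of the seven elements land in separate entries, which is what makes the tuple a unique flow identifier.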
IPv6 NetStream operation
A typical IPv6 NetStream system comprises the following parts:
• NetStream data exporter (NDE)—The NDE analyzes traffic flows that pass through it, collects data from the target flows, and then exports the data to the NSC. Before exporting data, the NDE might process the data, for example, by aggregation. A device configured with IPv6 NetStream acts as an NDE.
• NetStream collector (NSC)—The NSC is usually a program running in UNIX or Windows. It parses the packets sent from the NDE, and then stores the statistics to a database for the NDA. The NSC gathers the data from multiple NDEs.
• NetStream data analyzer (NDA)—The NDA is a tool for analyzing network traffic. It collects statistics from the NSC, performs further processing, and generates various types of reports for applications such as traffic billing, network planning, and attack detection and monitoring. Typically, the NDA features a Web-based system for users to easily obtain, view, and gather the data.
Figure 66 IPv6 NetStream system
As shown in Figure 66, IPv6 NetStream uses the following procedure to collect and analyze data:
1. The NDE (the device configured with IPv6 NetStream) periodically delivers the collected statistics to the NSC.
2. The NSC processes the statistics, and then sends the results to the NDA.
3. The NDA analyzes the statistics for accounting, network planning, and the like.
NSC and NDA are usually integrated into a NetStream server. This document focuses on the description
and configuration of the NDE.
IPv6 NetStream key technologies
Flow aging
IPv6 NetStream uses flow aging to enable the NDE to export IPv6 NetStream data to the NetStream
server. IPv6 NetStream creates an IPv6 NetStream entry for each flow in the cache, and each entry stores
the flow statistics. When the timer of the entry expires, the NDE exports the summarized data to the IPv6
NetStream server in a specific IPv6 NetStream version export format. For information about flow aging
types and configuration, see "Configuring IPv6 NetStream flow aging."
IPv6 NetStream data export
IPv6 NetStream traditional data export
IPv6 NetStream collects statistics about each flow and, when the entry timer expires, it exports the data
in each entry to the NetStream server.
The data includes statistics about each flow, but this method consumes more bandwidth and CPU than
the aggregation method, and it requires a large cache size. In most cases, not all statistics are necessary
for analysis.
IPv6 NetStream aggregation data export
IPv6 NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and it sends the summarized data to the IPv6 NetStream server. This process is the
IPv6 NetStream aggregation data export, which uses less bandwidth than traditional data export.
Table 12 lists the six supported IPv6 NetStream aggregation modes. In each mode, the system merges flows into one aggregation flow if the values of the aggregation criteria are the same. These six aggregation modes work independently and can be configured on the same interface.
Table 12 IPv6 NetStream aggregation modes

AS aggregation:
• Source AS number
• Destination AS number
• Inbound interface index
• Outbound interface index

Protocol-port aggregation:
• Protocol number
• Source port
• Destination port

Source-prefix aggregation:
• Source AS number
• Source address mask length
• Source prefix
• Inbound interface index

Destination-prefix aggregation:
• Destination AS number
• Destination address mask length
• Destination prefix
• Outbound interface index

Prefix aggregation:
• Source AS number
• Destination AS number
• Source address mask length
• Destination address mask length
• Source prefix
• Destination prefix
• Inbound interface index
• Outbound interface index

BGP-nexthop aggregation:
• BGP next hop
• Outbound interface index
In an aggregation mode that includes AS numbers, if the packets are not forwarded according to the BGP routing table, the statistics on the AS numbers cannot be obtained.
In the BGP-nexthop aggregation mode, if the packets are not forwarded according to the BGP routing table, the statistics on the BGP next hop cannot be obtained.
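Aggregation merges per-flow records whose criteria values match. A sketch of the protocol-port mode (a conceptual Python sketch; the record field names are illustrative):

```python
from collections import defaultdict

def protocol_port_aggregate(flow_records):
    # Merge flow records that share protocol number, source port, and
    # destination port -- the protocol-port aggregation criteria in Table 12.
    merged = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for rec in flow_records:
        key = (rec["protocol"], rec["src_port"], rec["dst_port"])
        merged[key]["packets"] += rec["packets"]
        merged[key]["bytes"] += rec["bytes"]
    return merged
```

Because the merged records omit addresses and interfaces, the aggregation data export sends fewer and smaller records than the traditional data export, which is the bandwidth saving described above.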
IPv6 NetStream export format
IPv6 NetStream exports data in UDP datagrams in version 9 format.
The template-based version 9 format supports different types of statistics, such as BGP next hop and MPLS information.
IPv6 NetStream configuration task list
Before you configure IPv6 NetStream, verify that the following configurations are proper, as needed:
• Determine the device on which you want to enable IPv6 NetStream.
• Configure the timer for IPv6 NetStream flow aging.
• To reduce the bandwidth that IPv6 NetStream data export uses, configure IPv6 NetStream aggregation.
Complete these tasks to configure IPv6 NetStream:
• Enabling IPv6 NetStream—Required.
• Configuring IPv6 NetStream data export (traditional data export or aggregation data export)—Use either as required.
• Configuring attributes of IPv6 NetStream data export—Optional.
• Configuring IPv6 NetStream flow aging—Optional.
Enabling IPv6 NetStream
To enable IPv6 NetStream:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Enable IPv6 NetStream on the interface:
   ipv6 netstream { inbound | outbound }
   Disabled by default.
Configuring IPv6 NetStream data export
To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.
Configuring IPv6 NetStream traditional data export
To configure IPv6 NetStream traditional data export:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Enable IPv6 NetStream:
   ipv6 netstream { inbound | outbound }
   Disabled by default.
4. Exit to system view:
   quit
5. Configure the destination address and the destination UDP port number for the IPv6 NetStream traditional data export:
   ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
   By default, no destination address or destination UDP port number is configured, so the IPv6 NetStream traditional data is not exported.
6. Configure the source interface for IPv6 NetStream traditional data export:
   ipv6 netstream export source interface interface-type interface-number
   Optional. By default, the interface where the NetStream data is sent out (the interface that connects to the NetStream server) is used as the source interface. HP recommends that you connect the network management interface to the NetStream server and configure it as the source interface.
7. Limit the data export rate:
   ipv6 netstream export rate rate
   Optional. No limit by default.
Configuring IPv6 NetStream aggregation data export
IPv6 NetStream aggregation can be implemented by software.
Configuration restrictions and guidelines
Configurations in IPv6 NetStream aggregation view apply to aggregation data export only, and those in
system view apply to traditional data export. If configurations in IPv6 NetStream aggregation view are
not provided, the configurations in system view apply to the aggregation data export.
Configuration procedure
To configure IPv6 NetStream aggregation data export:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Enable IPv6 NetStream:
   ipv6 netstream { inbound | outbound }
   Disabled by default.
4. Exit to system view:
   quit
5. Set an IPv6 NetStream aggregation mode and enter its view:
   ipv6 netstream aggregation { as | bgp-nexthop | destination-prefix | prefix | protocol-port | source-prefix }
6. Configure the destination address and destination UDP port number for the IPv6 NetStream aggregation data export:
   ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
   By default, no destination address or destination UDP port number is configured in IPv6 NetStream aggregation view. If you expect to export only IPv6 NetStream aggregation data, configure the destination address in the related aggregation view only.
7. Configure the source interface for IPv6 NetStream aggregation data export:
   ipv6 netstream export source interface interface-type interface-number
   Optional. By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view, if any, is used. HP recommends you connect the network management interface to the NetStream server.
8. Enable the current IPv6 NetStream aggregation configuration:
   enable
   Disabled by default.
Configuring attributes of IPv6 NetStream data export
Configuring IPv6 NetStream export format
The IPv6 NetStream export format exports IPv6 NetStream data in version 9 format, and the data fields can be expanded to contain more information, including the following:
• Statistics about source AS, destination AS, and peer ASs in version 9 format.
• Statistics about BGP next hop in version 9 format.
To configure the IPv6 NetStream export format:
1. Enter system view:
   system-view
2. Configure the version for the IPv6 NetStream export format, and specify whether to record AS and BGP next hop information:
   ipv6 netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
   Optional. By default:
   • Version 9 format is used to export IPv6 NetStream traditional data, IPv6 NetStream aggregation data, and MPLS flow data with IPv6 fields.
   • The peer AS numbers are recorded.
   • The BGP next hop is not recorded.
Configuring the refresh rate for IPv6 NetStream version 9 templates
Version 9 is template-based and supports user-defined formats, so the NetStream device must resend an updated template to the NetStream server. If the version 9 format is changed on the NetStream device but not updated on the NetStream server, the server cannot associate the received statistics with the proper fields. To avoid this situation, configure the refresh frequency and interval for version 9 templates so that the NetStream server can refresh the templates on time.
Both the refresh frequency and the refresh interval can be configured, and the template is resent when either condition is met.
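With the default values described in this section (20 packets, 30 minutes), the resend decision reduces to an OR of the two conditions. A minimal sketch (the function name is illustrative):

```python
def template_resend_due(packets_since_resend, minutes_since_resend,
                        refresh_packets=20, refresh_minutes=30):
    # The template is resent when either the packet-count threshold or
    # the time-interval threshold is reached, whichever comes first.
    return (packets_since_resend >= refresh_packets
            or minutes_since_resend >= refresh_minutes)
```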
To configure the refresh rate for IPv6 NetStream version 9 templates:
1. Enter system view:
   system-view
2. Configure the refresh frequency for NetStream version 9 templates:
   ipv6 netstream export v9-template refresh-rate packet packets
   Optional. By default, the version 9 templates are sent every 20 packets.
3. Configure the refresh interval for NetStream version 9 templates:
   ipv6 netstream export v9-template refresh-rate time minutes
   Optional. By default, the version 9 templates are sent every 30 minutes.
Configuring IPv6 NetStream flow aging
Flow aging approaches
The following types of IPv6 NetStream flow aging are available:
• Periodical aging
• Forced aging
• TCP FIN- and RST-triggered aging (automatically triggered if a TCP connection is terminated)
Periodical aging
Periodical aging uses the following approaches:
• Inactive flow aging—A flow is considered inactive if no packet for its IPv6 NetStream entry arrives in the time specified by the ipv6 netstream timeout inactive command. The inactive flow entry remains in the cache until the inactive timer expires. Then the inactive flow is aged out, and its statistics, which can no longer be displayed by the display ipv6 netstream cache command, are sent to the NetStream server. Inactive flow aging ensures that the cache is big enough for new flow entries.
• Active flow aging—An active flow is aged out when the time specified by the ipv6 netstream timeout active command is reached, and its statistics are exported to the NetStream server. The device continues to count the active flow statistics, which can be displayed by the display ipv6 netstream cache command. Active flow aging periodically exports the statistics of active flows to the NetStream server.
Forced aging
Use the reset ipv6 netstream statistics command to age out all IPv6 NetStream entries in the cache and
to clear the statistics. This is forced aging. Alternatively, use the ipv6 netstream max-entry command to
set the maximum entries that the cache can accommodate.
TCP FIN- and RST-triggered aging
For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
If a packet with a FIN or RST flag is recorded for a flow with the IPv6 NetStream entry already created,
the flow is aged out immediately. However, if the packet with a FIN or RST flag is the first packet of a flow,
a new IPv6 NetStream entry is created instead of being aged out. This type of aging is enabled by
default, and it cannot be disabled.
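The aging triggers described above can be summarized in one decision function (a conceptual Python sketch; timestamps are in seconds, the entry fields are illustrative, and the default timers of 30 minutes active and 30 seconds inactive match this guide):

```python
def aging_reason(entry, now, active_timeout=30 * 60, inactive_timeout=30):
    # entry: {"created": ..., "last_packet": ..., "tcp_fin_or_rst": bool}
    if entry.get("tcp_fin_or_rst"):
        return "fin-rst"    # TCP session ended; entry aged out immediately
    if now - entry["created"] >= active_timeout:
        return "active"     # statistics exported; counting continues
    if now - entry["last_packet"] >= inactive_timeout:
        return "inactive"   # statistics exported; entry leaves the cache
    return None             # entry stays in the cache
```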
Configuration procedure
To configure flow aging:
1. Enter system view:
   system-view
2. Configure periodical aging:
   • Set the aging timer for active flows:
     ipv6 netstream timeout active minutes
   • Set the aging timer for inactive flows:
     ipv6 netstream timeout inactive seconds
   Optional. By default, the aging timer for active flows is 30 minutes, and the aging timer for inactive flows is 30 seconds.
3. Configure forced aging of the IPv6 NetStream entries:
   a. Set the maximum entries that the cache can accommodate:
      ipv6 netstream max-entry max-entries
   b. Exit to user view:
      quit
   c. Configure forced aging:
      reset ipv6 netstream statistics
   Optional. By default, the cache can accommodate a maximum of 10000 entries. The reset ipv6 netstream statistics command also clears the cache.
Displaying and maintaining IPv6 NetStream
To display and maintain IPv6 NetStream:
• Display IPv6 NetStream entry information in the cache (available in any view):
  display ipv6 netstream cache [ verbose ] [ | { begin | exclude | include } regular-expression ]
• Display information about IPv6 NetStream data export (available in any view):
  display ipv6 netstream export [ | { begin | exclude | include } regular-expression ]
• Display the configuration and status of the NetStream flow record templates (available in any view):
  display ipv6 netstream template [ | { begin | exclude | include } regular-expression ]
• Clear the cache, and age out and export all IPv6 NetStream data (available in user view):
  reset ipv6 netstream statistics
IPv6 NetStream configuration examples
IPv6 NetStream traditional data export configuration example
Network requirements
As shown in Figure 67, configure IPv6 NetStream on Router A to collect statistics on packets passing
through it. Enable IPv6 NetStream in the inbound direction on Ethernet 1/0 and in the outbound
direction of Ethernet 1/1. Configure the router to export IPv6 NetStream traditional data to UDP port
5000 of the NetStream server at 12.110.2.2/16.
Figure 67 Network diagram
Configuration procedure
# Enable IPv6 NetStream in the inbound direction of Ethernet 1/0.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface ethernet 1/0
[RouterA-Ethernet1/0] ipv6 address 10::1/64
[RouterA-Ethernet1/0] ipv6 netstream inbound
[RouterA-Ethernet1/0] quit
# Enable IPv6 NetStream in the outbound direction of Ethernet1/1.
[RouterA] interface ethernet 1/1
[RouterA-Ethernet1/1] ip address 12.110.2.1 255.255.0.0
[RouterA-Ethernet1/1] ipv6 address 20::1/64
[RouterA-Ethernet1/1] ipv6 netstream outbound
[RouterA-Ethernet1/1] quit
# Configure the destination address and the destination UDP port number for the IPv6 NetStream
traditional data export.
[RouterA] ipv6 netstream export host 12.110.2.2 5000
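The NetStream server in this example is simply a host listening on the configured UDP port for the exported datagrams. As a hypothetical illustration of the collector side (not part of the router configuration, and not an HP-provided tool), a minimal Python listener for the export port might look like this:

```python
import socket

def receive_export_packet(port, timeout=5.0, bufsize=65535):
    """Wait for one NetStream export datagram on the given UDP port
    and return the sender address and the raw payload."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("", port))  # listen on all interfaces
        data, (src_ip, src_port) = sock.recvfrom(bufsize)
        return src_ip, data

# Example: collect one datagram on the export port configured above.
# source, payload = receive_export_packet(5000)
```

Any NetFlow/NetStream-capable collector can be used instead; the sketch only shows that the export is plain UDP sent to the address and port given in the ipv6 netstream export host command.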
IPv6 NetStream aggregation data export configuration
example
Network requirements
As shown in Figure 68, configure IPv6 NetStream on Router A so that:
• Router A exports IPv6 NetStream traditional data to UDP port 5000 of the NetStream server at 4.1.1.1/16.
• Router A performs IPv6 NetStream aggregation in the AS, protocol-port, source-prefix, destination-prefix, and prefix modes, and exports the aggregation data for these modes to UDP ports 2000, 3000, 4000, 6000, and 7000, respectively, of the same destination address.
All the routers in the network are running IPv6 EBGP. For more information about IPv6 BGP, see Layer
3—IP Routing Configuration Guide.
Figure 68 Network diagram
Configuration procedure
# Enable IPv6 NetStream in the inbound and outbound directions of Ethernet 1/0.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface ethernet 1/0
[RouterA-Ethernet1/0] ipv6 address 10::1/64
[RouterA-Ethernet1/0] ipv6 netstream inbound
[RouterA-Ethernet1/0] ipv6 netstream outbound
[RouterA-Ethernet1/0] quit
# In system view, configure the destination address and the destination UDP port number for the IPv6
NetStream traditional data export with IP address 4.1.1.1 and port 5000.
[RouterA] ipv6 netstream export host 4.1.1.1 5000
# Configure the aggregation mode as AS, and then, in aggregation view, configure the destination
address and the destination UDP port number for the IPv6 NetStream AS aggregation data export.
[RouterA] ipv6 netstream aggregation as
[RouterA-ns6-aggregation-as] enable
[RouterA-ns6-aggregation-as] ipv6 netstream export host 4.1.1.1 2000
[RouterA-ns6-aggregation-as] quit
# Configure the aggregation mode as protocol-port, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream protocol-port
aggregation data export.
[RouterA] ipv6 netstream aggregation protocol-port
[RouterA-ns6-aggregation-protport] enable
[RouterA-ns6-aggregation-protport] ipv6 netstream export host 4.1.1.1 3000
[RouterA-ns6-aggregation-protport] quit
# Configure the aggregation mode as source-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream source-prefix
aggregation data export.
[RouterA] ipv6 netstream aggregation source-prefix
[RouterA-ns6-aggregation-srcpre] enable
[RouterA-ns6-aggregation-srcpre] ipv6 netstream export host 4.1.1.1 4000
[RouterA-ns6-aggregation-srcpre] quit
# Configure the aggregation mode as destination-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream destination-prefix
aggregation data export.
[RouterA] ipv6 netstream aggregation destination-prefix
[RouterA-ns6-aggregation-dstpre] enable
[RouterA-ns6-aggregation-dstpre] ipv6 netstream export host 4.1.1.1 6000
[RouterA-ns6-aggregation-dstpre] quit
# Configure the aggregation mode as prefix, and then, in aggregation view, configure the destination
address and the destination UDP port number for the IPv6 NetStream prefix aggregation data export.
[RouterA] ipv6 netstream aggregation prefix
[RouterA-ns6-aggregation-prefix] enable
[RouterA-ns6-aggregation-prefix] ipv6 netstream export host 4.1.1.1 7000
[RouterA-ns6-aggregation-prefix] quit
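Each export stream configured above arrives at the collector as UDP datagrams in the version 9 packet format, which is the NetFlow-style format that carries IPv6 fields. As a hypothetical collector-side sketch (assuming the RFC 3954 V9 layout, not an HP-provided tool), the fixed 20-byte packet header can be decoded as follows:

```python
import struct

# NetFlow/NetStream V9 packet header (RFC 3954): version, record count,
# sysUptime (ms), UNIX seconds, sequence number, source ID.
V9_HEADER = struct.Struct("!HHIIII")

def parse_v9_header(datagram):
    """Decode the 20-byte V9 export packet header from a received datagram."""
    version, count, uptime_ms, unix_secs, sequence, source_id = \
        V9_HEADER.unpack_from(datagram)
    return {"version": version, "count": count, "sysuptime_ms": uptime_ms,
            "unix_secs": unix_secs, "sequence": sequence, "source_id": source_id}
```

The FlowSets that follow this header carry the templates and data records; on the router side, the display ipv6 netstream template command shows the configuration and status of those templates.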
Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/wwalerts
After registering, you will receive email notification of product enhancements, new driver versions,
firmware updates, and other product resources.
Related information
Documents
To find related documents, browse to the Manuals page of the HP Business Support Center website:
http://www.hp.com/support/manuals
• For related documentation, navigate to the Networking section, and select a networking category.
• For a complete list of acronyms and their definitions, see HP FlexNetwork Technology Acronyms.
Websites
• HP.com: http://www.hp.com
• HP Networking: http://www.hp.com/go/networking
• HP manuals: http://www.hp.com/support/manuals
• HP download drivers and software: http://www.hp.com/support/downloads
• HP software depot: http://www.software.hp.com
• HP Education: http://www.hp.com/learn
Conventions
This section describes the conventions used in this documentation set.
Command conventions
Boldface: Bold text represents commands and keywords that you enter literally as shown.
Italic: Italic text represents arguments that you replace with actual values.
[ ]: Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... }: Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.
[ x | y | ... ]: Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.
{ x | y | ... } *: Asterisk-marked braces enclose a set of required syntax choices separated by vertical bars, from which you select at least one.
[ x | y | ... ] *: Asterisk-marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.
&<1-n>: The argument or keyword-and-argument combination before the ampersand (&) sign can be entered 1 to n times.
#: A line that starts with a pound (#) sign is a comment.
GUI conventions
Boldface: Window names, button names, field names, and menu items are in bold text. For example, the New User window appears; click OK.
>: Multi-level menus are separated by angle brackets. For example, File > Create > Folder.
Symbols
WARNING: An alert that calls attention to important information that, if not understood or followed, can result in personal injury.
CAUTION: An alert that calls attention to important information that, if not understood or followed, can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT: An alert that calls attention to essential information.
NOTE: An alert that contains additional or supplementary information.
TIP: An alert that provides helpful information.
Network topology icons
Represents a generic network device, such as a router, switch, or firewall.
Represents a routing-capable device, such as a router or Layer 3 switch.
Represents a generic switch, such as a Layer 2 or Layer 3 switch, or a router that supports
Layer 2 forwarding and other Layer 2 features.
Represents an access controller, a unified wired-WLAN module, or the switching engine
on a unified wired-WLAN switch.
Represents an access point.
Represents a security product, such as a firewall, a UTM, or a load-balancing or security
card that is installed in a device.
Represents a security card, such as a firewall card, a load-balancing card, or a
NetStream card.
Port numbering in examples
The port numbers in this document are for illustration only and might be unavailable on your device.
Index

A
Adding a candidate device to a cluster,66
Alarm group configuration example,21
C
Cluster management configuration example,70
Cluster management configuration task list,58
Configuring a PoE interface by using a PoE profile,159
Configuring access-control rights,34
Configuring ACS attributes,78
Configuring advanced cluster functions,66
Configuring attributes of IPv6 NetStream data export,213
Configuring attributes of NetStream export data,95
Configuring counter sampling,147
Configuring CPE attributes,79
Configuring flow sampling,146
Configuring IP accounting,84
Configuring IPv6 NetStream data export,211
Configuring IPv6 NetStream flow aging,214
Configuring local port mirroring,166
Configuring NetStream data export,94
Configuring NetStream filtering and sampling,93
Configuring NetStream flow aging,97
Configuring NTP authentication,35
Configuring NTP operation modes,29
Configuring optional parameters for NTP,33
Configuring PoE power management,157
Configuring remote port mirroring,169
Configuring SNMP basic parameters,2
Configuring SNMP logging,5
Configuring SNMP traps,6
Configuring the local clock as a reference source,32
Configuring the management device,59
Configuring the member devices,65
Configuring the NQA client,105
Configuring the NQA server,104
Configuring the PoE monitoring function,159
Configuring the PoE power,156
Configuring the RMON alarm function,17
Configuring the RMON statistics function,16
Configuring the sFlow agent and sFlow collector information,145
Configuring traffic mirroring,171
Contacting HP,219
Conventions,220
Creating a sampler,150
CWMP configuration approaches,76
D
Detecting PDs,156
Disabling an interface from generating link up/down logging information,194
Displaying and maintaining a sampler,150
Displaying and maintaining cluster management,69
Displaying and maintaining CWMP,83
Displaying and maintaining information center,194
Displaying and maintaining IP accounting,85
Displaying and maintaining IP traffic ordering,143
Displaying and maintaining IPv6 NetStream,216
Displaying and maintaining NetStream,99
Displaying and maintaining NQA,122
Displaying and maintaining NTP,39
Displaying and maintaining PoE,161
Displaying and maintaining port mirroring,169
Displaying and maintaining RMON,18
Displaying and maintaining sFlow,147
Displaying and maintaining SNMP,8
Displaying and maintaining traffic mirroring,173
E
Enabling CWMP,78
Enabling IP traffic ordering,143
Enabling IPv6 NetStream,211
Enabling NetStream on an interface,92
Enabling PoE,154
Enabling synchronous information output,193
Ethernet statistics group configuration example,18
F
FIPS compliance,181
H
Hardware compatibility,153
History group configuration example,19
I
Information center configuration examples,195
Information center configuration task list,182
IP accounting configuration example,85
IP traffic ordering configuration example,143
IPv6 NetStream basic concepts,208
IPv6 NetStream configuration examples,216
IPv6 NetStream configuration task list,211
IPv6 NetStream key technologies,209
L
Local port mirroring configuration example,169
M
Managing security logs,189
N
NetStream basic concepts,87
NetStream configuration examples,99
NetStream configuration task list,91
NetStream key technologies,88
NetStream sampling and filtering,91
NQA configuration examples,123
NQA configuration task list,104
NTP configuration examples,40
NTP configuration task list,29
O
Outputting system information to a log host,184
Outputting system information to the console,182
Outputting system information to the log buffer,186
Outputting system information to the monitor terminal,183
Outputting system information to the SNMP module,186
Outputting system information to the trap buffer,185
Outputting system information to the Web interface,187
Overview,1
Overview,14
Overview,23
Overview,54
Overview,73
Overview,87
Overview,102
Overview,150
Overview,153
Overview,165
Overview,171
Overview,176
Overview,208
P
Ping,200
Ping and tracert example,206
PoE configuration example,162
PoE configuration task list,153
R
Related information,219
S
Sampler configuration example,151
Saving system information to a log file,188
Setting the IP traffic ordering interval,143
sFlow configuration example,147
SNMP configuration examples,8
SNMP configuration task list,2
System debugging,204
T
Toggling between the CLIs of the management device and a member device,65
Tracert,202
Traffic mirroring configuration example,173
Traffic mirroring configuration task list,171
Troubleshooting PoE,164
Troubleshooting sFlow configuration,148
U
Upgrading PSE processing software in service,161