HP 6600/HSR6600 Routers

Network Management and Monitoring
Configuration Guide
Part number: 5998-1511
Software version: A6602-CMW520-R3303P05
A6600-CMW520-R3303P05-RPE
A6600-CMW520-R3303P05-RSE
HSR6602_MCP-CMW520-R3303P05
Document version: 6PW105-20140507
Legal and notice information
© Copyright 2014 Hewlett-Packard Development Company, L.P.
No part of this documentation may be reproduced or transmitted in any form or by any means without
prior written consent of Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
HEWLETT-PACKARD COMPANY MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS
MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained
herein or for incidental or consequential damages in connection with the furnishing, performance, or
use of this material.
The only warranties for HP products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained
herein.
Contents
Using ping, tracert, and system debugging
    Ping
        Using a ping command to test network connectivity
        Ping example
    Tracert
        Prerequisites
        Using a tracert command to identify failed or all nodes in a path
    System debugging
        Debugging information control switches
        Debugging a feature module
    Ping and tracert example
Configuring NQA
    Overview
        Collaboration
        Threshold monitoring
    NQA configuration task list
    Configuring the NQA server
    Configuring the NQA client
        Enabling the NQA client
        Configuring an ICMP echo operation
        Configuring a DHCP operation
        Configuring a DNS operation
        Configuring an FTP operation
        Configuring an HTTP operation
        Configuring a UDP jitter operation
        Configuring an SNMP operation
        Configuring a TCP operation
        Configuring a UDP echo operation
        Configuring a voice operation
        Configuring a DLSw operation
        Configuring optional parameters for an NQA operation
        Configuring the collaboration function
        Configuring threshold monitoring
        Configuring the NQA statistics function
        Configuring NQA history records saving function
        Scheduling an NQA operation
    Displaying and maintaining NQA
    NQA configuration examples
        ICMP echo operation configuration example
        DHCP operation configuration example
        DNS operation configuration example
        FTP operation configuration example
        HTTP operation configuration example
        UDP jitter operation configuration example
        SNMP operation configuration example
        TCP operation configuration example
        UDP echo operation configuration example
        Voice operation configuration example
        DLSw operation configuration example
        NQA collaboration configuration example
Configuring NTP
    Overview
        NTP application
        NTP advantages
        How NTP works
        NTP message format
        NTP operation modes
        NTP for VPNs
    NTP configuration task list
    Configuring NTP operation modes
        Configuring NTP client/server mode
        Configuring the NTP symmetric peers mode
        Configuring NTP broadcast mode
        Configuring NTP multicast mode
    Configuring the local clock as a reference source
    Configuring optional parameters for NTP
        Specifying the source interface for NTP messages
        Disabling an interface from receiving NTP messages
        Configuring the allowed maximum number of dynamic sessions
    Configuring access-control rights
        Configuration prerequisites
        Configuration procedure
    Configuring NTP authentication
        Configuring NTP authentication in client/server mode
        Configuring NTP authentication in symmetric peers mode
        Configuring NTP authentication in broadcast mode
        Configuring NTP authentication in multicast mode
    Displaying and maintaining NTP
    NTP configuration examples
        NTP client/server mode configuration example
        NTP symmetric peers mode configuration example
        NTP broadcast mode configuration example
        NTP multicast mode configuration example
        Configuration example for NTP client/server mode with authentication
        Configuration example for NTP broadcast mode with authentication
        Configuration example for MPLS VPN time synchronization in client/server mode
        Configuration example for MPLS VPN time synchronization in symmetric peers mode
Configuring IPC
    Overview
        Node
        Link
        Channel
        Packet sending modes
    Enabling IPC performance statistics
    Displaying and maintaining IPC
Configuring SNMP
    Overview
        SNMP framework
        MIB and view-based MIB access control
        SNMP operations
        SNMP protocol versions
    SNMP configuration task list
    Configuring SNMP basic parameters
        Configuring SNMPv3 basic parameters
        Configuring SNMPv1 or SNMPv2c basic parameters
    Configuring SNMP logging
    Configuring SNMP traps
        Enabling SNMP traps
        Configuring the SNMP agent to send traps to a host
    Displaying and maintaining SNMP
    SNMP configuration examples
        SNMPv1/SNMPv2c configuration example
        SNMPv3 configuration example
        SNMP logging configuration example
Configuring RMON
    Overview
        Working mechanism
        RMON groups
    Configuring the RMON statistics function
        Configuring the RMON Ethernet statistics function
        Configuring the RMON history statistics function
    Configuring the RMON alarm function
    Displaying and maintaining RMON
    Ethernet statistics group configuration example
    History group configuration example
    Alarm group configuration example
Configuring sampler
    Overview
    Creating a sampler
    Displaying and maintaining a sampler
    Sampler configuration example
        Network requirements
        Configuration procedure
        Verifying the configuration
Configuring port mirroring
    Overview
        Terminologies of port mirroring
        Port mirroring classification and implementation
    Configuring local port mirroring
        Local port mirroring configuration task list
        Creating a local mirroring group
        Configuring source ports for the local mirroring group
        Configuring the monitor port for the local mirroring group
    Configuring Layer 2 remote port mirroring
        Configuration task list
        Configuring a remote source group
        Configuring a remote destination group
    Displaying and maintaining port mirroring
    Port mirroring configuration examples
        Local port mirroring configuration example
        Layer 2 remote port mirroring configuration example
Configuring traffic mirroring
    Overview
    Traffic mirroring configuration task list
    Configuring match criteria
    Configuring different types of traffic mirroring
        Mirroring traffic to an interface
        Mirroring traffic to the CPU
    Configuring a QoS policy
    Applying a QoS policy
        Applying a QoS policy to an interface or a port group
        Applying a QoS policy to a VLAN
    Displaying and maintaining traffic mirroring
    Traffic mirroring configuration example
        Network requirements
        Configuration procedure
        Verifying the configuration
Configuring NetStream
    Overview
    Basic NetStream concepts
        Flow
        NetStream operation
    NetStream key technologies
        Flow aging
        NetStream data export
        NetStream export formats
    NetStream sampling and filtering
        NetStream sampling
        NetStream filtering
    NetStream configuration task list
    Enabling NetStream
    Configuring NetStream filtering and sampling
        Configuring NetStream filtering
        Configuring NetStream sampling
    Configuring NetStream data export
        Configuring NetStream traditional data export
        Configuring NetStream aggregation data export
    Configuring attributes of NetStream export data
        Configuring NetStream export format
        Configuring the refresh rate for NetStream version 9 templates
        Configuring MPLS-aware NetStream
    Configuring NetStream flow aging
        Flow aging methods
        Configuration procedure
    Displaying and maintaining NetStream
    NetStream configuration examples
        NetStream traditional data export configuration example
        NetStream aggregation data export configuration example
Configuring IPv6 NetStream
    Overview
    Basic IPv6 NetStream concepts
        IPv6 flow
        IPv6 NetStream operation
    IPv6 NetStream key technologies
        Flow aging
        IPv6 NetStream data export
        IPv6 NetStream export format
    IPv6 NetStream configuration task list
    Enabling IPv6 NetStream
    Configuring IPv6 NetStream data export
        Configuring IPv6 NetStream traditional data export
        Configuring IPv6 NetStream aggregation data export
    Configuring attributes of IPv6 NetStream data export
        Configuring IPv6 NetStream export format
        Configuring the refresh rate for IPv6 NetStream version 9 templates
    Configuring IPv6 NetStream flow aging
        Flow aging methods
        Configuration procedure
    Displaying and maintaining IPv6 NetStream
IPv6 NetStream configuration examples ··················································································································· 155 IPv6 NetStream traditional data export configuration example ····································································· 155 IPv6 NetStream aggregation data export configuration example ································································· 156 Configuring the information center ························································································································ 158 Overview······································································································································································· 158 Classification of system information ·················································································································· 158 System information levels ··································································································································· 158 Output channels and destinations ····················································································································· 159 Default output rules of system information ········································································································ 160 System information formats ································································································································ 161 Information center configuration task list ··················································································································· 163 Outputting system information to the console ··········································································································· 164 Outputting system information to the monitor terminal 
···························································································· 165 Outputting system information to a log host ············································································································· 166 Outputting system information to the trap buffer ······································································································ 166 Outputting system information to the log buffer ········································································································ 167 Outputting system information to the SNMP module ······························································································· 168 Saving system information to the log file ··················································································································· 169 Enabling synchronous information output ················································································································· 170 Disabling an interface from generating link up/down logging information ························································· 170 Configuring the minimum age of syslog messages ·································································································· 171 Displaying and maintaining information center ······································································································· 171 Information center configuration examples ··············································································································· 172 Outputting log information to the console ········································································································ 172 Outputting log information to a UNIX log host ································································································ 173 Outputting log information to a Linux log host 
································································································· 174 Configuring Flow Logging ······································································································································ 176 Configuring flow logging ············································································································································ 176 Flow logging configuration task list ··························································································································· 177 Configuring the flow logging version ························································································································ 177 Configuring the source address for flow log packets ······························································································ 178 Exporting flow logs ······················································································································································ 178 Exporting flow logs to a log server···················································································································· 178 Exporting flow logs to the information center··································································································· 179 Configuring the timestamp for flow logs···················································································································· 179 Displaying and maintaining flow logging ················································································································· 180 Flow logging configuration examples························································································································ 180 Configuring flow logging on the 6602 router 
································································································· 180 Configuring flow logging on the HSR6602/6604/6608/6616 router ······················································ 181 Troubleshooting flow logging ····································································································································· 182 v
Configuring sFlow ··················································································································································· 183 Configuring the sFlow agent and sFlow collector information ················································································ 183 Configuring flow sampling·········································································································································· 184 Configuring counter sampling ···································································································································· 185 Displaying and maintaining sFlow ····························································································································· 185 sFlow configuration example ······································································································································ 185 Network requirements ········································································································································· 185 Configuration procedure ···································································································································· 185 Troubleshooting sFlow configuration ························································································································· 187 The remote sFlow collector cannot receive sFlow packets ·············································································· 187 Configuring gateway mode ··································································································································· 188 Configuring gateway mode ········································································································································ 188 Displaying and maintaining gateway mode 
············································································································· 188 Gateway mode configuration example ····················································································································· 188 Network requirements ········································································································································· 188 Configuration procedure ···································································································································· 189 Configuring Host-monitor········································································································································ 190 Overview······································································································································································· 190 Configuration prerequisites ········································································································································· 190 Host-monitor configuration task list ···························································································································· 190 Enabling Host-monitor·················································································································································· 190 Freezing legitimate flow entries ·································································································································· 191 Adding legitimate flow entries ···································································································································· 191 Deleting a legitimate flow entry ·································································································································· 191 Deleting unfixed flow 
entries······································································································································· 191 Deleting illegitimate flow entries································································································································· 192 Displaying and maintaining Host-monitor ················································································································· 192 Host-monitor configuration example ·························································································································· 193 Network requirements ········································································································································· 193 Configuration procedure ···································································································································· 193 Support and other resources ·································································································································· 195 Contacting HP ······························································································································································ 195 Subscription service ············································································································································ 195 Related information ······················································································································································ 195 Documents ···························································································································································· 195 Websites······························································································································································· 195 Conventions 
·································································································································································· 196 Index ········································································································································································ 198 vi
Using ping, tracert, and system debugging
Use the ping, tracert, and system debugging utilities to test network connectivity and identify network
problems.
Ping
The ping utility sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving
the requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source device.
The source device outputs statistics about the ping operation, including the number of packets sent,
number of echo replies received, and the round-trip time. You can measure the network performance by
analyzing these statistics.
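The statistics line that ends a ping run can be reproduced from the raw reply times. The following minimal Python sketch (the function name and return shape are illustrative, not part of the device software) computes the received count, the packet loss percentage, and the min/avg/max round-trip times the way the router reports them:

```python
def ping_stats(rtts_ms, transmitted):
    """Summarize a ping run.

    rtts_ms: round-trip times (ms) of the echo replies that came back.
    transmitted: number of echo requests sent.
    Returns (received, loss percentage, (min, avg, max) or None).
    """
    received = len(rtts_ms)
    loss_pct = 100.0 * (transmitted - received) / transmitted
    if received:
        # The device truncates the average to a whole millisecond.
        summary = (min(rtts_ms), sum(rtts_ms) // received, max(rtts_ms))
    else:
        summary = None
    return received, loss_pct, summary
```

Feeding it the five reply times from the ping example later in this chapter (205, 1, 1, 1, and 1 ms) reproduces "0.00% packet loss" and "round-trip min/avg/max = 1/41/205 ms".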
Using a ping command to test network connectivity
Execute ping commands in any view.
Task: Test the network connectivity to an IP address.
Command:
• For an IPv4 network:
  ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t timeout | -tos tos | -v | { -mt topology-name | -vpn-instance vpn-instance-name } ] * host
• For an IPv6 network:
  ping ipv6 [ -a source-ipv6 | -c count | -m interval | -s packet-size | -t timeout | -vpn-instance vpn-instance-name ] * host [ -i interface-type interface-number ]
Remarks:
• Set a larger value for the timeout timer (indicated by the -t parameter in the command) when you configure the ping command for a low-speed network.
• Disabling the echo reply function on the destination affects the ping function.
For more information about the ping lsp command, see MPLS Command Reference.
Ping example
Network requirements
Test the network connectivity between Device A and Device C in Figure 1. If they can reach each other,
get detailed information about routes from Device A to Device C.
Figure 1 Network diagram
Configuration procedure
# Use the ping command on Device A to test connectivity to Device C.
<DeviceA> ping 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=205 ms
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/41/205 ms
# Get detailed information about routes from Device A to Device C.
<DeviceA> ping -r 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Reply from 1.1.2.2: bytes=56 Sequence=1 ttl=254 time=53 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=2 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=3 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=4 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
Reply from 1.1.2.2: bytes=56 Sequence=5 ttl=254 time=1 ms
Record Route:
1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/11/53 ms
The test procedure with the ping -r command (see Figure 1) is as follows:
1. The source device (Device A) sends an ICMP echo request with the RR option being blank to the destination device (Device C).
2. The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to the RR option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination device sends an ICMP echo reply.
4. The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option in the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1) to the RR option. Finally, you can get the detailed information of routes from Device A to Device C: 1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
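The five RR steps above amount to appending to a shared list: the RR option starts blank, each stamping device appends one interface address on the request leg, and then each appends one on the reply leg. A short Python sketch (hypothetical function, for illustration only):

```python
def record_route(request_stamps, reply_stamps):
    """Accumulate the IP Record Route option as in steps 1-5.

    request_stamps: addresses stamped on the echo request (steps 2-3).
    reply_stamps: addresses stamped on the echo reply (steps 4-5).
    """
    rr = []                        # step 1: the RR option starts blank
    for addr in request_stamps:    # Device B, then Device C
        rr.append(addr)
    for addr in reply_stamps:      # Device B again, then Device A
        rr.append(addr)
    return rr

# Addresses from Figure 1; the result matches each Record Route
# block in the ping -r output above.
path = record_route(["1.1.2.1", "1.1.2.2"], ["1.1.1.2", "1.1.1.1"])
```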
Tracert
Tracert (also called "Traceroute") enables you to get the IP addresses of Layer 3 devices in the path to a
specific destination. You can use tracert to test network connectivity and identify failed nodes.
Figure 2 Traceroute operation
Tracert uses received ICMP error messages to get the IP addresses of devices. As shown in Figure 2,
tracert works as follows:
1. The source device (Device A) sends a UDP packet with a TTL value of 1 to the destination device (Device D). The destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. In this way, the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the source device the address of the second Layer 3 device (1.1.2.2).
5. The process continues until the packet sent by the source device reaches the ultimate destination device. Because no application uses the destination port specified in the packet, the destination device responds with a port-unreachable ICMP message to the source device, with its IP address encapsulated. This way, the source device gets the IP address of the destination device (1.1.3.2).
6. The source device concludes that the packet has reached the destination device after receiving the port-unreachable ICMP message, and the path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.
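The TTL-probing loop in these steps can be sketched in Python. This is an idealized model (one probe per TTL, no timeouts or retries; the function name is illustrative): each probe elicits a TTL-expired message from the hop where the TTL runs out, until the destination itself answers with port-unreachable.

```python
def tracert(hops, destination):
    """Simulate the traceroute discovery loop of Figure 2.

    hops: addresses of the Layer 3 devices on the path, in order,
    ending with the destination's own address.
    """
    discovered = []
    for ttl in range(1, len(hops) + 1):
        responder = hops[ttl - 1]      # device where this TTL expires
        discovered.append(responder)
        if responder == destination:   # port-unreachable received: done
            break
    return discovered

# The path from Figure 2: Device B, Device C, then Device D.
path = tracert(["1.1.1.2", "1.1.2.2", "1.1.3.2"], "1.1.3.2")
```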
Prerequisites
Before you use a tracert command, perform the tasks in this section.
For an IPv4 network:
• Enable sending of ICMP timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are HP devices, execute the ip ttl-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMP destination unreachable packets on the destination device. If the destination device is an HP device, execute the ip unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
• If there is an MPLS network between the source and destination devices and you need to display the MPLS information during the tracert process, enable support for ICMP extensions on the source and intermediate devices. If the source and intermediate devices are HP devices, execute the ip icmp-extensions compliant command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
For an IPv6 network:
• Enable sending of ICMPv6 timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are HP devices, execute the ipv6 hoplimit-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
• Enable sending of ICMPv6 destination unreachable packets on the destination device. If the destination device is an HP device, execute the ipv6 unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
Using a tracert command to identify failed or all nodes in a path
Execute tracert commands in any view.
Task: Display the routes from source to destination.
Command:
• For an IPv4 network:
  tracert [ -a source-ip | -f first-ttl | -m max-ttl | -p port | -q packet-number | { -mt topology-name | -vpn-instance vpn-instance-name } | -w timeout ] * host
• For an IPv6 network:
  tracert ipv6 [ -f first-ttl | -m max-ttl | -p port | -q packet-number | -vpn-instance vpn-instance-name | -w timeout ] * host
Remarks: Use either method.
For more information about the tracert lsp command, see MPLS Command Reference.
System debugging
The device supports debugging for the majority of protocols and features and provides debugging
information to help users diagnose errors.
Debugging information control switches
The following switches control the display of debugging information:
• Protocol debugging switch—Controls whether to generate the protocol-specific debugging information.
• Screen output switch—Controls whether to display the debugging information on a certain screen.
As shown in Figure 3, assume that the device can provide debugging for the three modules 1, 2, and 3. The debugging information can be output on a terminal only when both the protocol debugging switch and the screen output switch are turned on.
Output of debugging information depends on the configurations of the information center and the
debugging commands of each protocol and functional module. Debugging information is typically
displayed on a terminal (including console or VTY). You can also send debugging information to other
destinations. For more information, see "Configuring the information center."
Figure 3 Relationship between the protocol and screen output switch
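The gating rule in Figure 3 reduces to a two-input AND: a message reaches the terminal only when both its protocol debugging switch and the screen output switch are on. A minimal Python model (names are illustrative, not the device's implementation):

```python
def debug_to_terminal(protocol_switch_on, screen_switch_on, message):
    """Return the message if it would reach the terminal, else None.

    Both the per-protocol debugging switch and the screen output
    switch must be on for the message to be displayed.
    """
    if protocol_switch_on and screen_switch_on:
        return message
    return None
```

For example, enabling debugging for a module (protocol switch on) has no visible effect until terminal debugging is also enabled (screen switch on).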
Debugging a feature module
Output of debugging commands is memory intensive. To guarantee system performance, enable
debugging only for modules that are in an exceptional condition. When debugging is complete, use the
undo debugging all command to disable all the debugging functions.
Use the debugging, terminal debugging, and terminal monitor commands together to display detailed debugging information on the terminal. For more information about the terminal debugging and terminal monitor commands, see Network Management and Monitoring Command Reference.
To debug a feature module and display the debugging information on a terminal:
1. Enable the terminal monitoring of system information.
   Command: terminal monitor
   Remarks: Optional. By default, the terminal monitoring on the console port is enabled and that on the monitoring terminal is disabled. Available in user view.
2. Enable the terminal to display debugging information.
   Command: terminal debugging
   Remarks: By default, terminal display of debugging information is disabled. Available in user view.
3. Enable debugging for a specified module.
   Command: debugging { all [ timeout time ] | module-name [ option ] }
   Remarks: By default, debugging for a specified module is disabled. Available in user view.
4. Display the enabled debugging functions.
   Command: display debugging [ interface interface-type interface-number ] [ module-name ] [ | { begin | exclude | include } regular-expression ]
   Remarks: Optional. Available in any view.
Ping and tracert example
Network requirements
As shown in Figure 4, Router A failed to Telnet to Router C. Determine whether Router A and Router C can reach each other. If they cannot reach each other, locate the failed nodes in the network.
Figure 4 Network diagram
Configuration procedure
1. Use the ping command to test connectivity between Router A and Router C.
<RouterA> ping 1.1.2.2
PING 1.1.2.2: 56 data bytes, press CTRL_C to break
Request time out
Request time out
Request time out
Request time out
Request time out
--- 1.1.2.2 ping statistics ---
5 packet(s) transmitted
0 packet(s) received
100.00% packet loss
The output shows that Router A and Router C cannot reach each other.
2. Use the tracert command to identify failed nodes.
# Enable sending of ICMP timeout packets on Router B.
<RouterB> system-view
[RouterB] ip ttl-expires enable
# Enable sending of ICMP destination unreachable packets on Router C.
<RouterC> system-view
[RouterC] ip unreachables enable
# Execute the tracert command on Router A.
<RouterA> tracert 1.1.2.2
traceroute to 1.1.2.2(1.1.2.2) 30 hops max,40 bytes packet, press CTRL_C to break
 1  1.1.1.2 14 ms 10 ms 20 ms
 2  * * *
 3  * * *
 4  * * *
 5
The output shows that Router A and Router C cannot reach each other, Router A and Router B can reach each other, and an error has occurred on the connection between Router B and Router C.
Use the debugging ip icmp command on Router A and Router C to verify that they can send and
receive the specific ICMP packets.
Or use the display ip routing-table command to verify the availability of active routes between
Router A and Router C.
Configuring NQA
Overview
Network quality analyzer (NQA) allows you to monitor link status, measure network performance, verify
the service levels for IP services and applications, and troubleshoot network problems. It provides the
following types of operations:
• ICMP echo
• DHCP
• DNS
• FTP
• HTTP
• UDP jitter
• SNMP
• TCP
• UDP echo
• Voice
• Data Link Switching (DLSw)
As shown in Figure 5, the NQA source device (NQA client) sends data to the NQA destination device
by simulating IP services and applications to measure network performance. The obtained performance
metrics include the one-way latency, jitter, packet loss, voice quality, application performance, and
server response time.
All types of NQA operations require the NQA client, but only the TCP, UDP echo, UDP jitter, and voice
operations require the NQA server. The NQA operations for services that are already provided by the
destination device such as FTP do not need the NQA server.
You can configure the NQA server to listen and respond on specific ports to meet various test needs.
Figure 5 Network diagram
Collaboration
NQA can collaborate with the Track module to notify application modules of state or performance
changes so that the application modules can take predefined actions. For more information about
collaboration, see High Availability Configuration Guide.
Figure 6 Collaboration
(The figure shows the collaboration relationships: application modules such as VRRP, static routing, policy-based routing, interface backup, and traffic redirection associate with a track entry, and the track entry associates with an NQA entry. The NQA detection module sends the detection results to the Track module, and the Track module sends the track entry status to the application modules.)
The following describes how a static route destined for 192.168.0.88 is monitored through collaboration:
1. NQA monitors the reachability to 192.168.0.88.
2. When 192.168.0.88 becomes unreachable, NQA notifies the Track module of the change.
3. The Track module notifies the static routing module of the state change.
4. The static routing module sets the static route as invalid according to a predefined action.
For more information about collaboration, see High Availability Configuration Guide.
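The notification chain in steps 1 through 4 can be modeled in a few lines of Python. The class and method names below are invented for illustration and do not correspond to the device software; the point is the relay: NQA reports to Track, and Track relays the status to its subscriber, which applies its predefined action.

```python
class StaticRoute:
    """Application module: a static route that tracks reachability."""
    def __init__(self, dest):
        self.dest = dest
        self.valid = True

    def on_track_change(self, reachable):
        # Predefined action (step 4): invalidate the route when the
        # track entry reports the destination unreachable.
        self.valid = reachable


class TrackEntry:
    """Track module: relays NQA detection results to a subscriber."""
    def __init__(self, subscriber):
        self.subscriber = subscriber

    def on_nqa_result(self, reachable):
        # Step 3: notify the application module of the state change.
        self.subscriber.on_track_change(reachable)


route = StaticRoute("192.168.0.88")
track = TrackEntry(route)
track.on_nqa_result(False)   # steps 1-2: NQA reports unreachable
```

After the call, route.valid is False; a later on_nqa_result(True) restores it.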
Threshold monitoring
Threshold monitoring enables the NQA client to display results or send trap messages to the network
management station (NMS) when the performance metrics that an NQA operation gathers violate the
specified thresholds.
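A Python sketch of the idea (hypothetical names; real NQA supports several threshold types and reaction actions): compare a gathered metric against its configured threshold, and notify the NMS when the threshold is violated.

```python
def check_threshold(metric, value, threshold, send_trap):
    """Report a threshold violation to the NMS.

    send_trap: callable that delivers the trap message.
    Returns True if the threshold was violated (simple upper-bound
    check; only one of several possible threshold types).
    """
    if value > threshold:
        send_trap("%s %s exceeded threshold %s" % (metric, value, threshold))
        return True
    return False

traps = []
check_threshold("probe-duration", 820, 500, traps.append)  # violated
check_threshold("probe-duration", 120, 500, traps.append)  # within bound
```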
Table 1 describes the relationships between performance metrics and NQA operation types.
Table 1 Performance metrics (each entry lists the metric and the NQA operation types that can gather it)
• Probe duration: all NQA operation types excluding UDP jitter and voice.
• Number of probe failures: all NQA operation types excluding UDP jitter and voice.
• Round-trip time: UDP jitter and voice.
• Number of discarded packets: UDP jitter and voice.
• One-way jitter (source-to-destination and destination-to-source): UDP jitter and voice.
• One-way latency (source-to-destination and destination-to-source): UDP jitter and voice.
• Calculated Planning Impairment Factor (ICPIF) (see "Configuring a voice operation"): voice.
• Mean Opinion Score (MOS) (see "Configuring a voice operation"): voice.
NQA configuration task list
Complete the following task to configure the NQA server:
• Configuring the NQA server: required for NQA operation types of TCP, UDP echo, UDP jitter, and voice.
Complete these tasks to configure the NQA client:
• Enabling the NQA client: required.
• Configuring at least one of the following operation types (required):
  Configuring an ICMP echo operation
  Configuring a DHCP operation
  Configuring a DNS operation
  Configuring an FTP operation
  Configuring an HTTP operation
  Configuring a UDP jitter operation
  Configuring an SNMP operation
  Configuring a TCP operation
  Configuring a UDP echo operation
  Configuring a voice operation
  Configuring a DLSw operation
• Configuring optional parameters for an NQA operation: optional.
• Configuring the collaboration function: optional.
• Configuring threshold monitoring: optional.
• Configuring the NQA statistics function: optional.
• Configuring the NQA history records saving function: optional.
• Scheduling an NQA operation: required.
Configuring the NQA server
To perform TCP, UDP echo, UDP jitter, and voice operations, you must enable the NQA server on the
destination device. The NQA server listens and responds to requests on the specified IP addresses and
ports.
You can configure multiple TCP (or UDP) listening services on an NQA server, each of which corresponds
to a specific destination IP address and port number. The destination IP address and port number must
be the same as those configured on the NQA client and must be different from those of an existing
listening service.
To configure the NQA server:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the NQA server.
   Command: nqa server enable
   Remarks: Disabled by default.
3. Configure a listening service.
   Command (use at least one method):
   • Method 1: nqa server tcp-connect ip-address port-number
   • Method 2: nqa server udp-echo ip-address port-number
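The steps above can be sketched as follows (the device name SysB, the address 10.1.1.2, and port 9000 are hypothetical examples):
<SysB> system-view
[SysB] nqa server enable
[SysB] nqa server udp-echo 10.1.1.2 9000
The address and port configured here must match the destination address and port configured on the NQA client.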
Configuring the NQA client
Enabling the NQA client
To enable the NQA client:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Enable the NQA client.
   Command: nqa agent enable
   Remarks: Optional. Enabled by default.
Configuring an ICMP echo operation
An ICMP echo operation measures the reachability of a destination device. It has the same function as
the ping command, but provides more output information. In addition, if multiple paths exist between the
source and destination devices, you can specify the next hop for the ICMP echo operation.
The ICMP echo operation is not supported in IPv6 networks. To test the reachability of an IPv6 address,
use the ping ipv6 command. For more information about the command, see Network Management and
Monitoring Command Reference.
To configure an ICMP echo operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the ICMP echo type and enter its view.
   Command: type icmp-echo
   Remarks: N/A
4. Specify the destination address of ICMP echo requests.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Specify the payload size in each ICMP echo request.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
6. Configure the string to be filled in the payload of each ICMP echo request.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
7. Specify the VPN where the operation is performed.
   Command: vpn-instance vpn-instance-name
   Remarks: Optional. By default, the operation is performed on the public network.
8. Specify the source interface and the source IP address of ICMP echo requests.
   Command (use either method):
   • Method 1: source interface interface-type interface-number
   • Method 2: source ip ip-address
   Remarks: Optional. By default, no source interface or source IP address is configured, and the requests take the primary IP address of the outgoing interface as their source IP address. If you configure both the source ip command and the source interface command, the source ip command takes effect. The specified source interface must be up. The source IP address must be the IP address of a local interface, and the interface must be up.
9. Configure the next hop IP address for ICMP echo requests.
   Command: next-hop ip-address
   Remarks: Optional. By default, no next hop IP address is configured.
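For example, the following commands create an ICMP echo operation (the device name SysA, the entry name admin test, and the address 10.1.1.2 are hypothetical):
<SysA> system-view
[SysA] nqa entry admin test
[SysA-nqa-admin-test] type icmp-echo
[SysA-nqa-admin-test-icmp-echo] destination ip 10.1.1.2
[SysA-nqa-admin-test-icmp-echo] data-size 64
Then start the operation as described in "Scheduling an NQA operation."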
Configuring a DHCP operation
A DHCP operation measures the time the NQA client uses to get an IP address from a DHCP server.
The specified interface simulates a DHCP client to acquire an IP address; the interface's own IP address does not change.
When the DHCP operation completes, the NQA client sends a packet to release the obtained IP address.
To configure a DHCP operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the DHCP type and enter its view.
   Command: type dhcp
   Remarks: N/A
4. Specify an interface to perform the DHCP operation.
   Command: operation interface interface-type interface-number
   Remarks: By default, no interface is specified to perform a DHCP operation. The specified interface must be up; otherwise, no probe packets can be sent out.
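A minimal sketch of a DHCP operation (the entry name and the interface number are hypothetical; use an interface on your device that connects to the DHCP server's network):
<SysA> system-view
[SysA] nqa entry admin dhcp1
[SysA-nqa-admin-dhcp1] type dhcp
[SysA-nqa-admin-dhcp1-dhcp] operation interface gigabitethernet 3/0/1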
Configuring a DNS operation
A DNS operation measures the time the NQA client uses to translate a domain name into an IP address
through a DNS server.
A DNS operation simulates domain name resolution and does not save the obtained DNS entry.
To configure a DNS operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the DNS type and enter its view.
   Command: type dns
   Remarks: N/A
4. Specify the IP address of the DNS server as the destination address of DNS packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the domain name that needs to be translated.
   Command: resolve-target domain-name
   Remarks: By default, no domain name is configured.
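For example, the following commands configure a DNS operation (the server address 10.2.2.2 and the domain name are hypothetical):
<SysA> system-view
[SysA] nqa entry admin dns1
[SysA-nqa-admin-dns1] type dns
[SysA-nqa-admin-dns1-dns] destination ip 10.2.2.2
[SysA-nqa-admin-dns1-dns] resolve-target host.example.com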
Configuring an FTP operation
An FTP operation measures the time the NQA client uses to upload a file to or download a file from an FTP server.
Follow these guidelines when you configure an FTP operation:
• Before you perform an FTP operation, obtain the username and password for logging in to the FTP server.
• When you execute the put command, the NQA client creates a fixed-size test file named file-name on the FTP server, not a real user file. When you execute the get command, the client does not save the file obtained from the FTP server.
• If you get a file that does not exist on the FTP server, the FTP operation fails.
• Use the get command only to download small files. A large file might cause the transfer to fail because of timeout, or might affect other services by occupying much network bandwidth.
To configure an FTP operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the FTP type and enter its view.
   Command: type ftp
   Remarks: N/A
4. Specify the IP address of the FTP server as the destination address of FTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of FTP request packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no FTP requests can be sent out.
6. Specify the operation type.
   Command: operation { get | put }
   Remarks: Optional. By default, the operation type for the FTP operation is get, which means obtaining files from the FTP server.
7. Configure a login username.
   Command: username name
   Remarks: By default, no login username is configured.
8. Configure a login password.
   Command: password [ cipher | simple ] password
   Remarks: By default, no login password is configured.
9. Specify the name of a file to be transferred.
   Command: filename file-name
   Remarks: By default, no file is specified.
10. Set the data transmission mode.
   Command: mode { active | passive }
   Remarks: Optional. active by default.
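The steps above can be sketched as follows (the server address, username, password, and file name are hypothetical):
<SysA> system-view
[SysA] nqa entry admin ftp1
[SysA-nqa-admin-ftp1] type ftp
[SysA-nqa-admin-ftp1-ftp] destination ip 10.3.3.3
[SysA-nqa-admin-ftp1-ftp] operation get
[SysA-nqa-admin-ftp1-ftp] username nqauser
[SysA-nqa-admin-ftp1-ftp] password simple nqapass
[SysA-nqa-admin-ftp1-ftp] filename test.txt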
Configuring an HTTP operation
An HTTP operation measures the time the NQA client uses to obtain data from an HTTP server.
The TCP port number of the HTTP server must be 80. Otherwise, the HTTP operation fails.
To configure an HTTP operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the HTTP type and enter its view.
   Command: type http
   Remarks: N/A
4. Configure the IP address of the HTTP server as the destination address of HTTP request packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of request packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no request packets can be sent out.
6. Configure the operation type.
   Command: operation { get | post }
   Remarks: Optional. By default, the operation type for the HTTP operation is get, which means obtaining data from the HTTP server.
7. Specify the destination website URL.
   Command: url url
   Remarks: N/A
8. Specify the HTTP version.
   Command: http-version v1.0
   Remarks: Optional. By default, HTTP 1.0 is used.
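For example, the following commands configure an HTTP operation that fetches a page (the server address and URL are hypothetical):
<SysA> system-view
[SysA] nqa entry admin http1
[SysA-nqa-admin-http1] type http
[SysA-nqa-admin-http1-http] destination ip 10.4.4.4
[SysA-nqa-admin-http1-http] operation get
[SysA-nqa-admin-http1-http] url /index.htm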
Configuring a UDP jitter operation
CAUTION:
Do not perform the UDP jitter operation to well-known ports from 1 to 1023. Otherwise, the UDP jitter operation might fail or the service on the well-known port might become unavailable.
Jitter means inter-packet delay variance. A UDP jitter operation measures unidirectional and bidirectional jitter so that you can verify whether the network can carry jitter-sensitive services such as real-time voice and video services.
The UDP jitter operation works as follows:
1. The NQA client sends UDP packets to the destination port at a regular interval.
2. The destination device adds a time stamp to each packet that it receives, and then sends the packet back to the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the time stamps.
The UDP jitter operation requires both the NQA server and the NQA client. Before you perform the UDP
jitter operation, configure the UDP listening service on the NQA server. For more information about UDP
listening service configuration, see "Configuring the NQA server."
To configure a UDP jitter operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the UDP jitter type and enter its view.
   Command: type udp-jitter
   Remarks: N/A
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Specify the source port number of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
7. Configure the payload size in each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
8. Configure the string to be filled in the payload of each UDP packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
9. Configure the number of UDP packets sent in one UDP jitter probe.
   Command: probe packet-number packet-number
   Remarks: Optional. 10 by default.
10. Configure the interval for sending UDP packets.
   Command: probe packet-interval packet-interval
   Remarks: Optional. 20 milliseconds by default.
11. Configure how long the NQA client waits for a response from the server before it regards the response as timed out.
   Command: probe packet-timeout packet-timeout
   Remarks: Optional. 3000 milliseconds by default.
12. Configure the source IP address for UDP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no UDP packets can be sent out.
NOTE:
The display nqa history command does not show the results of the UDP jitter operation. Use the display
nqa result command to display the results, or use the display nqa statistics command to display the
statistics of the operation.
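A minimal sketch of the client side (addresses and ports are hypothetical; the NQA server at 10.1.1.2 must already have a matching UDP listening service on port 9000, as described in "Configuring the NQA server"):
<SysA> system-view
[SysA] nqa entry admin jitter1
[SysA-nqa-admin-jitter1] type udp-jitter
[SysA-nqa-admin-jitter1-udp-jitter] destination ip 10.1.1.2
[SysA-nqa-admin-jitter1-udp-jitter] destination port 9000
[SysA-nqa-admin-jitter1-udp-jitter] probe packet-number 20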
Configuring an SNMP operation
An SNMP operation measures the time the NQA client uses to get a value from an SNMP agent.
To configure an SNMP operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the SNMP type and enter its view.
   Command: type snmp
   Remarks: N/A
4. Configure the destination address of SNMP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Specify the source port of SNMP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
6. Configure the source IP address of SNMP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no SNMP packets can be sent out.
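For example, the following commands configure an SNMP operation toward an SNMP agent (the agent address 10.5.5.5 is hypothetical):
<SysA> system-view
[SysA] nqa entry admin snmp1
[SysA-nqa-admin-snmp1] type snmp
[SysA-nqa-admin-snmp1-snmp] destination ip 10.5.5.5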
Configuring a TCP operation
A TCP operation measures the time the NQA client uses to establish a TCP connection to a specific port
on the NQA server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP
operation, configure a TCP listening service on the NQA server. For more information about the TCP
listening service configuration, see "Configuring the NQA server."
To configure a TCP operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the TCP type and enter its view.
   Command: type tcp
   Remarks: N/A
4. Configure the destination address of TCP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of TCP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the source IP address of TCP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no TCP packets can be sent out.
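The steps above can be sketched as follows (addresses and ports are hypothetical; the server must have a matching TCP listening service configured with nqa server tcp-connect):
<SysA> system-view
[SysA] nqa entry admin tcp1
[SysA-nqa-admin-tcp1] type tcp
[SysA-nqa-admin-tcp1-tcp] destination ip 10.1.1.2
[SysA-nqa-admin-tcp1-tcp] destination port 9000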
Configuring a UDP echo operation
A UDP echo operation measures the round-trip time between the client and a specific UDP port on the
NQA server.
The UDP echo operation requires both the NQA server and the NQA client. Before you perform a UDP
echo operation, configure a UDP listening service on the NQA server. For more information about the
UDP listening service configuration, see "Configuring the NQA server."
To configure a UDP echo operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the UDP echo type and enter its view.
   Command: type udp-echo
   Remarks: N/A
4. Configure the destination address of UDP packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination address must be the same as the IP address of the listening service configured on the NQA server.
5. Configure the destination port of UDP packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port number must be the same as that of the listening service on the NQA server.
6. Configure the payload size in each UDP packet.
   Command: data-size size
   Remarks: Optional. 100 bytes by default.
7. Configure the string to be filled in the payload of each UDP packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
8. Specify the source port of UDP packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
9. Configure the source IP address of UDP packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be that of an interface on the device, and the interface must be up; otherwise, no UDP packets can be sent out.
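For example (addresses and ports are hypothetical; the server must have a matching UDP listening service configured with nqa server udp-echo):
<SysA> system-view
[SysA] nqa entry admin udpecho1
[SysA-nqa-admin-udpecho1] type udp-echo
[SysA-nqa-admin-udpecho1-udp-echo] destination ip 10.1.1.2
[SysA-nqa-admin-udpecho1-udp-echo] destination port 8000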
Configuring a voice operation
CAUTION:
Do not perform a voice operation to a well-known port from 1 to 1023. Otherwise, the NQA operation might fail or the service on that port might become unavailable.
A voice operation measures voice over IP (VoIP) network performance.
A voice operation works as follows:
1. The NQA client sends voice packets of G.711 A-law, G.711 μ-law, or G.729 A-law codec type at a specific interval to the destination device (NQA server).
2. The destination device adds a time stamp to each voice packet it receives and sends it back to the source.
3. Upon receiving the packet, the source device calculates the jitter and one-way delay based on the time stamp.
The following parameters, which reflect VoIP network performance, can be calculated by using the metrics gathered by the voice operation:
• Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality in a VoIP network. It is determined by packet loss and delay. A higher value represents a lower service quality.
• Mean Opinion Scores (MOS)—A MOS value can be evaluated by using the ICPIF value, and is in the range of 1 to 5. A higher value represents a higher service quality.
The evaluation of voice quality depends on users' tolerance for voice quality degradation, which you should consider. For users with higher tolerance, use the advantage-factor command to configure the advantage factor. When the system calculates the ICPIF value, it subtracts the advantage factor to modify the ICPIF and MOS values, so that both objective and subjective factors are considered.
The voice operation requires both the NQA server and the NQA client. Before you perform a voice operation, configure a UDP listening service on the NQA server. For more information about UDP listening service configuration, see "Configuring the NQA server."
A voice operation cannot be repeated.
To configure a voice operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the voice type and enter its view.
   Command: type voice
   Remarks: N/A
4. Configure the destination address of voice packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured. The destination IP address must be the same as that of the listening service on the NQA server.
5. Configure the destination port of voice packets.
   Command: destination port port-number
   Remarks: By default, no destination port number is configured. The destination port must be the same as that of the listening service on the NQA server.
6. Specify the codec type.
   Command: codec-type { g711a | g711u | g729a }
   Remarks: Optional. By default, the codec type is G.711 A-law.
7. Configure the advantage factor for calculating MOS and ICPIF values.
   Command: advantage-factor factor
   Remarks: Optional. By default, the advantage factor is 0.
8. Specify the source IP address of voice packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no voice packets can be sent out.
9. Specify the source port number of voice packets.
   Command: source port port-number
   Remarks: Optional. By default, no source port number is specified.
10. Configure the payload size in each voice packet.
   Command: data-size size
   Remarks: Optional. By default, the voice packet size depends on the codec type: 172 bytes for the G.711 A-law and G.711 μ-law codec types, and 32 bytes for the G.729 A-law codec type.
11. Configure the string to be filled in the payload of each voice packet.
   Command: data-fill string
   Remarks: Optional. By default, the string is the hexadecimal number 00010203040506070809.
12. Configure the number of voice packets to be sent in a voice probe.
   Command: probe packet-number packet-number
   Remarks: Optional. 1000 by default.
13. Configure the interval for sending voice packets.
   Command: probe packet-interval packet-interval
   Remarks: Optional. 20 milliseconds by default.
14. Configure how long the NQA client waits for a response from the server before it regards the response as timed out.
   Command: probe packet-timeout packet-timeout
   Remarks: Optional. 5000 milliseconds by default.
NOTE:
The display nqa history command cannot show the results of the voice operation. Use the display nqa
result command to display the results, or use the display nqa statistics command to display the statistics of
the operation.
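A minimal sketch of a voice operation (addresses, port, and advantage factor are hypothetical; the server must have a matching UDP listening service):
<SysA> system-view
[SysA] nqa entry admin voice1
[SysA-nqa-admin-voice1] type voice
[SysA-nqa-admin-voice1-voice] destination ip 10.1.1.2
[SysA-nqa-admin-voice1-voice] destination port 9000
[SysA-nqa-admin-voice1-voice] codec-type g729a
[SysA-nqa-admin-voice1-voice] advantage-factor 10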
Configuring a DLSw operation
A DLSw operation measures the response time of a DLSw device.
To configure a DLSw operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify the DLSw type and enter its view.
   Command: type dlsw
   Remarks: N/A
4. Configure the destination address of probe packets.
   Command: destination ip ip-address
   Remarks: By default, no destination IP address is configured.
5. Configure the source IP address of probe packets.
   Command: source ip ip-address
   Remarks: Optional. By default, no source IP address is specified. The source IP address must be the IP address of a local interface. The local interface must be up; otherwise, no probe packets can be sent out.
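For example, the following commands configure a DLSw operation (the destination address 10.6.6.6 is hypothetical):
<SysA> system-view
[SysA] nqa entry admin dlsw1
[SysA-nqa-admin-dlsw1] type dlsw
[SysA-nqa-admin-dlsw1-dlsw] destination ip 10.6.6.6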
Configuring optional parameters for an NQA operation
Unless otherwise specified, the following optional parameters apply to all NQA operation types.
The following describes how different NQA operation types operate:
• A TCP or DLSw operation sets up a connection.
• A UDP jitter or a voice operation sends a specific number of probe packets. The number of probe packets is configurable with the probe packet-number command.
• An FTP, HTTP, DHCP, or DNS operation uploads or downloads a file, gets a web page, gets an IP address through DHCP, or translates a domain name to an IP address.
• An ICMP echo or UDP echo operation sends an ICMP echo request or a UDP packet.
• An SNMP operation sends one SNMPv1 packet, one SNMPv2c packet, and one SNMPv3 packet.
To configure optional parameters for an NQA operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
   Remarks: N/A
4. Configure a description.
   Command: description text
   Remarks: Optional. By default, no description is configured.
5. Specify the interval at which the NQA operation repeats.
   Command: frequency interval
   Remarks: Optional. By default, the interval is 0 milliseconds, and only one operation is performed. If the operation is not completed when the interval expires, the next operation does not start.
6. Specify the probe times.
   Command: probe count times
   Remarks: Optional. By default, an NQA operation performs one probe. The voice operation can perform only one probe and does not support this command.
7. Specify the probe timeout time.
   Command: probe timeout timeout
   Remarks: Optional. By default, the timeout time is 3000 milliseconds. This setting is not available for the UDP jitter or voice operation.
8. Specify the TTL for probe packets.
   Command: ttl value
   Remarks: Optional. 20 by default. This setting is not available for the DHCP operation.
9. Specify the ToS value in the IP packet header of probe packets.
   Command: tos value
   Remarks: Optional. 0 by default. This setting is not available for the DHCP operation.
10. Enable the routing table bypass function.
   Command: route-option bypass-route
   Remarks: Optional. Disabled by default. This setting is not available for the DHCP operation.
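For example, the following commands tune an ICMP echo operation to repeat every 5000 milliseconds with three probes per operation (the entry name, address, and values are hypothetical):
<SysA> system-view
[SysA] nqa entry admin test
[SysA-nqa-admin-test] type icmp-echo
[SysA-nqa-admin-test-icmp-echo] destination ip 10.1.1.2
[SysA-nqa-admin-test-icmp-echo] frequency 5000
[SysA-nqa-admin-test-icmp-echo] probe count 3
[SysA-nqa-admin-test-icmp-echo] probe timeout 2000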
Configuring the collaboration function
Collaboration is implemented by associating a reaction entry of an NQA operation with a track entry.
The reaction entry monitors the NQA operation. If the number of operation failures reaches the specified
threshold, the configured action is performed.
Before you configure a reaction entry for the trigger and trap action, configure the destination address of
the trap messages by using the snmp-agent target-host command. For more information about the
command, see Network Management and Monitoring Command Reference.
To configure the collaboration function:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo }
   Remarks: The collaboration function is not available for the UDP jitter and voice operations.
4. Configure a reaction entry.
   Command (choose one):
   • The trigger action only: reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trigger-only
   • The trigger and trap action: reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trap-and-trigger
   Remarks: Not configured by default. The trigger-only keyword is not supported by the UDP jitter or voice operation. The trap-and-trigger keyword is not supported by the DNS, UDP jitter, or voice operation. You cannot modify the content of an existing reaction entry. To change the attributes in a reaction entry, use the undo reaction command to delete the entry first, and then configure a new one.
5. Exit to system view.
   Command: quit
   Remarks: N/A
6. Associate Track with NQA.
   Command: See High Availability Configuration Guide.
   Remarks: N/A
7. Associate Track with an application module.
   Command: See High Availability Configuration Guide.
   Remarks: N/A
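A sketch of steps 2 through 5, using an ICMP echo operation whose reaction entry triggers after three consecutive probe failures (the entry name and address are hypothetical; the Track association is done separately as described in the High Availability Configuration Guide):
<SysA> system-view
[SysA] nqa entry admin track1
[SysA-nqa-admin-track1] type icmp-echo
[SysA-nqa-admin-track1-icmp-echo] destination ip 10.1.1.2
[SysA-nqa-admin-track1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trigger-only
[SysA-nqa-admin-track1-icmp-echo] quit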
Configuring threshold monitoring
Introduction
1. Threshold types
An NQA operation supports the following threshold types:
• average—If the average value for the monitored performance metric either exceeds the upper threshold or goes below the lower threshold, a threshold violation occurs.
• accumulate—If the total number of times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
• consecutive—If the number of consecutive times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
Threshold violations for the average or accumulate threshold type are determined on a per-NQA-operation basis, and threshold violations for the consecutive type are determined from the time the NQA operation starts.
2. Triggered actions
The following actions might be triggered:
• none—NQA displays results only on the terminal screen. It does not send traps to the NMS.
• trap-only—NQA displays results on the terminal screen, and meanwhile sends traps to the NMS.
The DNS operation does not support the action of sending trap messages.
3. Reaction entry
In a reaction entry, a monitored element, a threshold type, and an action to be triggered are configured to implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold:
• Before an NQA operation starts, the reaction entry is in invalid state.
• If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of the entry is set to below-threshold.
If the action to be triggered is configured as trap-only for a reaction entry, when the state of the entry changes, a trap message is generated and sent to the NMS.
Configuration prerequisites
Before you configure threshold monitoring, configure the destination address of the trap messages by
using the snmp-agent target-host command. For more information about the command, see Network
Management and Monitoring Command Reference.
Configuration procedure
To configure threshold monitoring:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
   Remarks: N/A
4. Configure threshold monitoring. Configure the trap sending method as needed:
   • Enable sending traps to the NMS when specified conditions are met:
     reaction trap { probe-failure consecutive-probe-failures | test-complete | test-failure cumulate-probe-failures }
   • Configure a reaction entry for monitoring the duration of an NQA operation (not supported in UDP jitter and voice operations):
     reaction item-number checked-element probe-duration threshold-type { accumulate accumulate-occurrences | average | consecutive consecutive-occurrences } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring failure times (not supported in UDP jitter and voice operations):
     reaction item-number checked-element probe-fail threshold-type { accumulate accumulate-occurrences | consecutive consecutive-occurrences } [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the round-trip time (only supported in UDP jitter and voice operations):
     reaction item-number checked-element rtt threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring packet loss (only supported in UDP jitter and voice operations):
     reaction item-number checked-element packet-loss threshold-type accumulate accumulate-occurrences [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring one-way jitter (only supported in UDP jitter and voice operations):
     reaction item-number checked-element { jitter-ds | jitter-sd } threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the one-way delay (only supported in UDP jitter and voice operations):
     reaction item-number checked-element { owd-ds | owd-sd } threshold-value upper-threshold lower-threshold
   • Configure a reaction entry for monitoring the ICPIF value (only supported in the voice operation):
     reaction item-number checked-element icpif threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   • Configure a reaction entry for monitoring the MOS value (only supported in the voice operation):
     reaction item-number checked-element mos threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]
   Remarks: No traps are sent to the NMS by default. The reaction trap command in voice operation view supports only the test-complete keyword.
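For example, the following commands monitor probe duration on an ICMP echo operation and send a trap when the average duration is outside the 5 to 50 millisecond range (the entry name, address, and thresholds are hypothetical):
<SysA> system-view
[SysA] nqa entry admin thr1
[SysA-nqa-admin-thr1] type icmp-echo
[SysA-nqa-admin-thr1-icmp-echo] destination ip 10.1.1.2
[SysA-nqa-admin-thr1-icmp-echo] reaction 1 checked-element probe-duration threshold-type average threshold-value 50 5 action-type trap-only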
Configuring the NQA statistics function
NQA collects statistics for an operation in a statistics group. To view information about the statistics
groups, use the display nqa statistics command. To set the interval for collecting statistics, use the
statistics interval command.
When the number of statistics groups reaches the upper limit and a new statistics group is to be saved, the oldest statistics group is deleted. To set the maximum number of statistics groups that can be saved, use the statistics max-group command.
A statistics group is formed after an operation is completed. Statistics groups have an aging mechanism.
A statistics group is deleted when its hold time expires. To set the hold time, use the statistics hold-time
command.
The DHCP operation does not support the NQA statistics function.
If you use the frequency command to set the interval between two consecutive operations to 0, only one
operation is performed, and no statistics group information is generated.
To configure the NQA statistics collection function:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Specify an NQA operation type and enter its view.
   Command: type { dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
   Remarks: N/A
4. Configure the interval for collecting the statistics.
   Command: statistics interval interval
   Remarks: Optional. 60 minutes by default.
5. Configure the maximum number of statistics groups that can be saved.
   Command: statistics max-group number
   Remarks: Optional. 2 by default. To disable collecting NQA statistics, set the maximum number to 0.
6. Configure the hold time of statistics groups.
   Command: statistics hold-time hold-time
   Remarks: Optional. 120 minutes by default.
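The statistics commands are entered in the operation type view. For example (a TCP operation here; the entry name and values are hypothetical):
<SysA> system-view
[SysA] nqa entry admin tcp1
[SysA-nqa-admin-tcp1] type tcp
[SysA-nqa-admin-tcp1-tcp] statistics interval 30
[SysA-nqa-admin-tcp1-tcp] statistics max-group 5
[SysA-nqa-admin-tcp1-tcp] statistics hold-time 60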
Configuring the NQA history records saving function
Perform this task to enable the system to save the history records of NQA operations. To display NQA history records, use the display nqa history command.
This task also configures the following parameters:
•   Lifetime of the history records. The records are removed when the lifetime expires.
•   Maximum number of history records that can be saved for an NQA operation. If the maximum number is reached, the earliest history records are removed.
To configure the history records saving function:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Create an NQA operation and enter NQA operation view.
   Command: nqa entry admin-name operation-tag
   Remarks: By default, no NQA operation is created.
3. Enter NQA operation type view.
   Command: type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-jitter | voice }
   Remarks: N/A
4. Enable saving history records for the NQA operation.
   Command: history-record enable
   Remarks: By default, this feature is not enabled.
5. Set the lifetime of history records.
   Command: history-record keep-time keep-time
   Remarks: Optional. By default, the history records in the NQA operation are kept for 120 minutes.
6. Configure the maximum number of history records that can be saved.
   Command: history-record number number
   Remarks: Optional. By default, the maximum number of records that can be saved for the NQA operation is 50.
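For example, the following commands sketch the steps above for a hypothetical ICMP echo operation named admin test1 (the operation name and the keep-time and record-count values are illustrative):
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type icmp-echo
# Enable saving history records for the operation.
[RouterA-nqa-admin-test1-icmp-echo] history-record enable
# Keep history records for 60 minutes (120 minutes by default).
[RouterA-nqa-admin-test1-icmp-echo] history-record keep-time 60
# Save at most 20 history records (50 by default).
[RouterA-nqa-admin-test1-icmp-echo] history-record number 20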
Scheduling an NQA operation
The NQA operation works between the specified start time and the end time (the start time plus
operation duration). If the specified start time is ahead of the system time, the operation starts
immediately. If both the specified start and end time are ahead of the system time, the operation does not
start. To view the current system time, use the display clock command.
To avoid excessive system resource consumption, you can limit the maximum number of NQA operations that can run simultaneously.
You cannot enter the operation type view or the operation view of a scheduled NQA operation.
A system time adjustment does not affect started or completed NQA operations. It only affects the NQA
operations that have not started.
To schedule an NQA operation:
1. Enter system view.
   Command: system-view
   Remarks: N/A
2. Configure the scheduling parameters for an NQA operation.
   Command: nqa schedule admin-name operation-tag start-time { hh:mm:ss [ yyyy/mm/dd ] | now } lifetime { lifetime | forever }
   Remarks: N/A
3. Configure the maximum number of NQA operations that can work simultaneously.
   Command: nqa agent max-concurrent number
   Remarks: Optional. By default, a maximum of 40 NQA operations can work simultaneously.
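For example, the following commands sketch the steps above for a hypothetical operation named admin test1 (the operation name, start time, lifetime, and concurrency limit are illustrative):
<RouterA> system-view
# Start the operation at 08:00:00 on 2014/06/01 and run it for 3600 seconds.
[RouterA] nqa schedule admin test1 start-time 08:00:00 2014/06/01 lifetime 3600
# Allow at most 10 NQA operations to run simultaneously (40 by default).
[RouterA] nqa agent max-concurrent 10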
Displaying and maintaining NQA
The following display commands are available in any view:
•   Display history records of NQA operations:
    display nqa history [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
•   Display the current monitoring results of reaction entries:
    display nqa reaction counters [ admin-name operation-tag [ item-number ] ] [ | { begin | exclude | include } regular-expression ]
•   Display the result of the specified NQA operation:
    display nqa result [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
•   Display NQA statistics:
    display nqa statistics [ admin-name operation-tag ] [ | { begin | exclude | include } regular-expression ]
•   Display NQA server status:
    display nqa server status [ | { begin | exclude | include } regular-expression ]
NQA configuration examples
ICMP echo operation configuration example
Network requirements
As shown in Figure 7, configure and schedule an ICMP echo operation from the NQA client Router A to
Router B through Router C to test the round-trip time.
Figure 7 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an ICMP echo operation and specify 10.2.2.2 as the destination IP address.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type icmp-echo
[RouterA-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2
# Configure 10.1.1.2 as the next hop. The ICMP echo requests are sent through Router C to Router B.
[RouterA-nqa-admin-test1-icmp-echo] next-hop 10.1.1.2
# Configure the ICMP echo operation to perform 10 probes, specify the probe timeout time as 500
milliseconds, and configure the operation to repeat at an interval of 5000 milliseconds.
[RouterA-nqa-admin-test1-icmp-echo] probe count 10
[RouterA-nqa-admin-test1-icmp-echo] probe timeout 500
[RouterA-nqa-admin-test1-icmp-echo] frequency 5000
# Enable saving history records and configure the maximum number of history records that can be saved
as 10.
[RouterA-nqa-admin-test1-icmp-echo] history-record enable
[RouterA-nqa-admin-test1-icmp-echo] history-record number 10
[RouterA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the ICMP echo operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the ICMP echo operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 10
Receive response times: 10
Min/Max/Average round trip time: 2/5/3
Square-Sum of round trip time: 96
Last succeeded probe time: 2011-08-23 15:00:01.2
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the ICMP echo operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
370      3           Succeeded     2011-08-23 15:00:01.2
369      3           Succeeded     2011-08-23 15:00:01.2
368      3           Succeeded     2011-08-23 15:00:01.2
367      5           Succeeded     2011-08-23 15:00:01.2
366      3           Succeeded     2011-08-23 15:00:01.2
365      3           Succeeded     2011-08-23 15:00:01.2
364      3           Succeeded     2011-08-23 15:00:01.1
363      2           Succeeded     2011-08-23 15:00:01.1
362      3           Succeeded     2011-08-23 15:00:01.1
361      2           Succeeded     2011-08-23 15:00:01.1
The output shows that the packets sent by Router A can reach Router B through Router C. No packet loss
occurs during the operation. The minimum, maximum, and average round-trip times are 2, 5, and 3
milliseconds, respectively.
DHCP operation configuration example
Network requirements
As shown in Figure 8, configure and schedule a DHCP operation to test the time required for Router A to
obtain an IP address from the DHCP server (Router B).
Figure 8 Network diagram
(Router A, the NQA client, connects through GigabitEthernet 2/0/1 at 10.1.1.1/16 to Router B, the DHCP server, at 10.1.1.2/16 on GigabitEthernet 2/0/1.)
Configuration procedure
# Create a DHCP operation to be performed on interface GigabitEthernet 2/0/1.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type dhcp
[RouterA-nqa-admin-test1-dhcp] operation interface gigabitethernet 2/0/1
# Enable the saving of history records.
[RouterA-nqa-admin-test1-dhcp] history-record enable
[RouterA-nqa-admin-test1-dhcp] quit
# Start the DHCP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DHCP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the DHCP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 512/512/512
Square-Sum of round trip time: 262144
Last succeeded probe time: 2011-11-22 09:54:03.8
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DHCP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        512         Succeeded     2011-11-22 09:54:03.8
The output shows that Router A uses 512 milliseconds to obtain an IP address from the DHCP server.
DNS operation configuration example
Network requirements
As shown in Figure 9, configure a DNS operation to test whether Router A can translate the domain name
host.com into an IP address through the DNS server, and test the time required for resolution.
Figure 9 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create a DNS operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type dns
# Specify the IP address of the DNS server 10.2.2.2 as the destination address, and specify the domain
name to be translated as host.com.
[RouterA-nqa-admin-test1-dns] destination ip 10.2.2.2
[RouterA-nqa-admin-test1-dns] resolve-target host.com
# Enable the saving of history records.
[RouterA-nqa-admin-test1-dns] history-record enable
[RouterA-nqa-admin-test1-dns] quit
# Start the DNS operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DNS operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the DNS operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 62/62/62
Square-Sum of round trip time: 3844
Last succeeded probe time: 2011-11-10 10:49:37.3
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DNS operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        62          Succeeded     2011-11-10 10:49:37.3
The output shows that Router A uses 62 milliseconds to translate domain name host.com into an IP
address.
FTP operation configuration example
Network requirements
As shown in Figure 10, configure an FTP operation to test the time required for Router A to upload a file
to the FTP server. The login username is admin, the login password is systemtest, and the file to be
transferred to the FTP server is config.txt.
Figure 10 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an FTP operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type ftp
# Specify the IP address of the FTP server 10.2.2.2 as the destination IP address.
[RouterA-nqa-admin-test1-ftp] destination ip 10.2.2.2
# Specify 10.1.1.1 as the source IP address.
[RouterA-nqa-admin-test1-ftp] source ip 10.1.1.1
# Set the FTP username to admin, and password to systemtest.
[RouterA-nqa-admin-test1-ftp] username admin
[RouterA-nqa-admin-test1-ftp] password systemtest
# Configure the device to upload file config.txt to the FTP server.
[RouterA-nqa-admin-test1-ftp] operation put
[RouterA-nqa-admin-test1-ftp] filename config.txt
# Enable the saving of history records.
[RouterA-nqa-admin-test1-ftp] history-record enable
[RouterA-nqa-admin-test1-ftp] quit
# Start the FTP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the FTP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the FTP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 173/173/173
Square-Sum of round trip time: 29929
Last succeeded probe time: 2011-11-22 10:07:28.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the FTP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        173         Succeeded     2011-11-22 10:07:28.6
The output shows that Router A uses 173 milliseconds to upload a file to the FTP server.
HTTP operation configuration example
Network requirements
As shown in Figure 11, configure an HTTP operation on the NQA client to test the time required to obtain
data from the HTTP server.
Figure 11 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create an HTTP operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type http
# Specify the IP address of the HTTP server 10.2.2.2 as the destination IP address.
[RouterA-nqa-admin-test1-http] destination ip 10.2.2.2
# Configure the HTTP operation to get data from the HTTP server. (The default HTTP operation type is get,
and this step can be omitted.)
[RouterA-nqa-admin-test1-http] operation get
# Configure the HTTP operation to visit the URL /index.htm.
[RouterA-nqa-admin-test1-http] url /index.htm
# Configure the operation to use HTTP version 1.0. (Version 1.0 is the default version, and this step can
be omitted.)
[RouterA-nqa-admin-test1-http] http-version v1.0
# Enable the saving of history records.
[RouterA-nqa-admin-test1-http] history-record enable
[RouterA-nqa-admin-test1-http] quit
# Start the HTTP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the HTTP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the HTTP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 64/64/64
Square-Sum of round trip time: 4096
Last succeeded probe time: 2011-11-22 10:12:47.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the HTTP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        64          Succeeded     2011-11-22 10:12:47.9
The output shows that Router A uses 64 milliseconds to obtain data from the HTTP server.
UDP jitter operation configuration example
Network requirements
As shown in Figure 12, configure a UDP jitter operation to test the jitter, delay, and round-trip time
between Router A and Router B.
Figure 12 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Router B:
# Enable the NQA server and configure a listening service to listen on the IP address 10.2.2.2 and
UDP port 9000.
<RouterB> system-view
[RouterB] nqa server enable
[RouterB] nqa server udp-echo 10.2.2.2 9000
4. Configure Router A:
# Create a UDP jitter operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type udp-jitter
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[RouterA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2
[RouterA-nqa-admin-test1-udp-jitter] destination port 9000
# Configure the operation to repeat at an interval of 1000 milliseconds.
[RouterA-nqa-admin-test1-udp-jitter] frequency 1000
[RouterA-nqa-admin-test1-udp-jitter] quit
# Start the UDP jitter operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the UDP jitter operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the UDP jitter operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 10
Receive response times: 10
Min/Max/Average round trip time: 15/32/17
Square-Sum of round trip time: 3235
Last succeeded probe time: 2008-05-29 13:56:17.6
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4
Min positive DS: 1
Max positive SD: 21
Max positive DS: 28
Positive SD number: 5
Positive DS number: 4
Positive SD sum: 52
Positive DS sum: 38
Positive SD average: 10
Positive DS average: 10
Positive SD square sum: 754
Positive DS square sum: 460
Min negative SD: 1
Min negative DS: 6
Max negative SD: 13
Max negative DS: 22
Negative SD number: 4
Negative DS number: 5
Negative SD sum: 38
Negative DS sum: 52
Negative SD average: 10
Negative DS average: 10
Negative SD square sum: 460
Negative DS square sum: 754
One way results:
Max SD delay: 15
Max DS delay: 16
Min SD delay: 7
Min DS delay: 7
Number of SD delay: 10
Number of DS delay: 10
Sum of SD delay: 78
Sum of DS delay: 85
Square sum of SD delay: 666
Square sum of DS delay: 787
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
# Display the statistics of the UDP jitter operation.
[RouterA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410
Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3
Min positive DS: 1
Max positive SD: 30
Max positive DS: 79
Positive SD number: 186
Positive DS number: 158
Positive SD sum: 2602
Positive DS sum: 1928
Positive SD average: 13
Positive DS average: 12
Positive SD square sum: 45304
Positive DS square sum: 31682
Min negative SD: 1
Min negative DS: 1
Max negative SD: 30
Max negative DS: 78
Negative SD number: 181
Negative DS number: 209
Negative SD sum: 181
Negative DS sum: 209
Negative SD average: 13
Negative DS average: 14
Negative SD square sum: 46994
Negative DS square sum: 3030
One way results:
Max SD delay: 46
Max DS delay: 46
Min SD delay: 7
Min DS delay: 7
Number of SD delay: 410
Number of DS delay: 410
Sum of SD delay: 3705
Sum of DS delay: 3891
Square sum of SD delay: 45987
Square sum of DS delay: 49393
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
SNMP operation configuration example
Network requirements
As shown in Figure 13, configure an SNMP operation to test the time the NQA client uses to get a value
from the SNMP agent.
Figure 13 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure the SNMP agent (Router B):
# Enable the SNMP agent, and set the SNMP version to all, the read community to public, and the
write community to private.
<RouterB> system-view
[RouterB] snmp-agent sys-info version all
[RouterB] snmp-agent community read public
[RouterB] snmp-agent community write private
4. Configure Router A:
# Create an SNMP operation and configure 10.2.2.2 as the destination IP address.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type snmp
[RouterA-nqa-admin-test1-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[RouterA-nqa-admin-test1-snmp] history-record enable
[RouterA-nqa-admin-test1-snmp] quit
# Start the SNMP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the SNMP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the SNMP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 50/50/50
Square-Sum of round trip time: 2500
Last succeeded probe time: 2011-11-22 10:24:41.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the SNMP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        50          Succeeded     2011-11-22 10:24:41.1
The output shows that Router A uses 50 milliseconds to receive a response from the SNMP agent.
TCP operation configuration example
Network requirements
As shown in Figure 14, configure a TCP operation to test the time the NQA client uses to establish a TCP
connection to the NQA server on Router B.
Figure 14 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Router B:
# Enable the NQA server, and configure a listening service to listen on the IP address 10.2.2.2
and TCP port 9000.
<RouterB> system-view
[RouterB] nqa server enable
[RouterB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Router A:
# Create a TCP operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type tcp
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[RouterA-nqa-admin-test1-tcp] destination ip 10.2.2.2
[RouterA-nqa-admin-test1-tcp] destination port 9000
# Enable the saving of history records.
[RouterA-nqa-admin-test1-tcp] history-record enable
[RouterA-nqa-admin-test1-tcp] quit
# Start the TCP operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the TCP operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the TCP operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 13/13/13
Square-Sum of round trip time: 169
Last succeeded probe time: 2011-11-22 10:27:25.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the TCP operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        13          Succeeded     2011-11-22 10:27:25.1
The output shows that Router A uses 13 milliseconds to establish a TCP connection to port 9000 on
the NQA server.
UDP echo operation configuration example
Network requirements
As shown in Figure 15, configure a UDP echo operation to test the round-trip time between Router A and
Router B. The destination port number is 8000.
Figure 15 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Router B:
# Enable the NQA server, and configure a listening service to listen on the IP address 10.2.2.2
and UDP port 8000.
<RouterB> system-view
[RouterB] nqa server enable
[RouterB] nqa server udp-echo 10.2.2.2 8000
4. Configure Router A:
# Create a UDP echo operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type udp-echo
# Configure 10.2.2.2 as the destination IP address and port 8000 as the destination port.
[RouterA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2
[RouterA-nqa-admin-test1-udp-echo] destination port 8000
# Enable the saving of history records.
[RouterA-nqa-admin-test1-udp-echo] history-record enable
[RouterA-nqa-admin-test1-udp-echo] quit
# Start the UDP echo operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the UDP echo operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the UDP echo operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 25/25/25
Square-Sum of round trip time: 625
Last succeeded probe time: 2011-11-22 10:36:17.9
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the UDP echo operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index    Response    Status        Time
1        25          Succeeded     2011-11-22 10:36:17.9
The output shows that the round-trip time between Router A and port 8000 on Router B is 25
milliseconds.
Voice operation configuration example
Network requirements
As shown in Figure 16, configure a voice operation to test the jitters between Router A and Router B.
Figure 16 Network diagram
Configuration procedure
1. Assign each interface an IP address. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Router B:
# Enable the NQA server, and configure a listening service to listen on IP address 10.2.2.2 and
UDP port 9000.
<RouterB> system-view
[RouterB] nqa server enable
[RouterB] nqa server udp-echo 10.2.2.2 9000
4. Configure Router A:
# Create a voice operation.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type voice
# Configure 10.2.2.2 as the destination IP address and port 9000 as the destination port.
[RouterA-nqa-admin-test1-voice] destination ip 10.2.2.2
[RouterA-nqa-admin-test1-voice] destination port 9000
[RouterA-nqa-admin-test1-voice] quit
# Start the voice operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the voice operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the voice operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1000
Receive response times: 1000
Min/Max/Average round trip time: 31/1328/33
Square-Sum of round trip time: 2844813
Last succeeded probe time: 2008-06-13 09:49:31.1
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 1000
Min positive SD: 1
Min positive DS: 1
Max positive SD: 204
Max positive DS: 1297
Positive SD number: 257
Positive DS number: 259
Positive SD sum: 759
Positive DS sum: 1797
Positive SD average: 2
Positive DS average: 6
Positive SD square sum: 54127
Positive DS square sum: 1691967
Min negative SD: 1
Min negative DS: 1
Max negative SD: 203
Max negative DS: 1297
Negative SD number: 255
Negative DS number: 259
Negative SD sum: 759
Negative DS sum: 1796
Negative SD average: 2
Negative DS average: 6
Negative SD square sum: 53655
Negative DS square sum: 1691776
One way results:
Max SD delay: 343
Max DS delay: 985
Min SD delay: 343
Min DS delay: 985
Number of SD delay: 1
Number of DS delay: 1
Sum of SD delay: 343
Sum of DS delay: 985
Square sum of SD delay: 117649
Square sum of DS delay: 970225
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
MOS value: 4.38
ICPIF value: 0
# Display the statistics of the voice operation.
[RouterA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Destination IP address: 10.2.2.2
Start time: 2008-06-13 09:45:37.8
Life time: 331 seconds
Send operation times: 4000
Receive response times: 4000
Min/Max/Average round trip time: 15/1328/32
Square-Sum of round trip time: 7160528
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
Voice results:
RTT number: 4000
Min positive SD: 1
Min positive DS: 1
Max positive SD: 360
Max positive DS: 1297
Positive SD number: 1030
Positive DS number: 1024
Positive SD sum: 4363
Positive DS sum: 5423
Positive SD average: 4
Positive DS average: 5
Positive SD square sum: 497725
Positive DS square sum: 2254957
Min negative SD: 1
Min negative DS: 1
Max negative SD: 360
Max negative DS: 1297
Negative SD number: 1028
Negative DS number: 1022
Negative SD sum: 1028
Negative DS sum: 1022
Negative SD average: 4
Negative DS average: 5
Negative SD square sum: 495901
Negative DS square sum: 5419
One way results:
Max SD delay: 359
Max DS delay: 985
Min SD delay: 0
Min DS delay: 0
Number of SD delay: 4
Number of DS delay: 4
Sum of SD delay: 1390
Sum of DS delay: 1079
Square sum of SD delay: 483202
Square sum of DS delay: 973651
SD lost packet(s): 0
DS lost packet(s): 0
Lost packet(s) for unknown reason: 0
Voice scores:
Max MOS value: 4.38
Min MOS value: 4.38
Max ICPIF value: 0
Min ICPIF value: 0
DLSw operation configuration example
Network requirements
As shown in Figure 17, configure a DLSw operation to test the response time of the DLSw device.
Figure 17 Network diagram
Configuration procedure
# Assign each interface an IP address. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not
shown.)
# Create a DLSw operation and configure 10.2.2.2 as the destination IP address.
<RouterA> system-view
[RouterA] nqa entry admin test1
[RouterA-nqa-admin-test1] type dlsw
[RouterA-nqa-admin-test1-dlsw] destination ip 10.2.2.2
# Enable the saving of history records.
[RouterA-nqa-admin-test1-dlsw] history-record enable
[RouterA-nqa-admin-test1-dlsw] quit
# Start the DLSw operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
# Stop the DLSw operation after a period of time.
[RouterA] undo nqa schedule admin test1
# Display the results of the DLSw operation.
[RouterA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Destination IP address: 10.2.2.2
Send operation times: 1
Receive response times: 1
Min/Max/Average round trip time: 19/19/19
Square-Sum of round trip time: 361
Last succeeded probe time: 2011-11-22 10:40:27.7
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to sequence error: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packet(s) arrived late: 0
# Display the history records of the DLSw operation.
[RouterA] display nqa history admin test1
NQA entry (admin admin, tag test1) history record(s):
Index      Response     Status           Time
1          19           Succeeded        2011-11-22 10:40:27.7
The output shows that the response time of the DLSw device is 19 milliseconds.
NQA collaboration configuration example
Network requirements
As shown in Figure 18, configure a static route to Router C with Router B as the next hop on Router A.
Associate the static route, a track entry, and an NQA operation to monitor the state of the static route.
Figure 18 Network diagram
Configuration procedure
1.  Assign each interface an IP address. (Details not shown.)
2.  On Router A, configure a unicast static route, and associate the static route with a track entry:
# Configure a static route and associate the static route with track entry 1.
<RouterA> system-view
[RouterA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3.  On Router A, configure an ICMP echo operation:
# Create an NQA operation with the administrator name admin and the operation tag test1.
[RouterA] nqa entry admin test1
# Configure the NQA operation type as ICMP echo.
[RouterA-nqa-admin-test1] type icmp-echo
# Configure 10.2.1.1 as the destination IP address.
[RouterA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1
# Configure the operation to repeat at an interval of 100 milliseconds.
[RouterA-nqa-admin-test1-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is triggered.
[RouterA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 5 action-type trigger-only
[RouterA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[RouterA] nqa schedule admin test1 start-time now lifetime forever
4.  On Router A, create the track entry:
# Create track entry 1, and associate it with reaction entry 1 of the ICMP echo operation admin-test1.
[RouterA] track 1 nqa entry admin test1 reaction 1
Verifying the configuration
# On Router A, display information about all the track entries.
[RouterA] display track all
Track ID: 1
Status: Positive
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 5        Routes : 5
Destination/Mask    Proto  Pre  Cost        NextHop         Interface
10.1.1.0/24         Static 60   0           10.2.1.1        GE2/0/1
10.2.1.0/24         Direct 0    0           10.2.1.2        GE2/0/1
10.2.1.2/32         Direct 0    0           127.0.0.1       InLoop0
127.0.0.0/8         Direct 0    0           127.0.0.1       InLoop0
127.0.0.1/32        Direct 0    0           127.0.0.1       InLoop0
The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry
is Positive.
# Remove the IP address of GigabitEthernet 2/0/1 on Router B.
<RouterB> system-view
[RouterB] interface gigabitethernet 2/0/1
[RouterB-GigabitEthernet2/0/1] undo ip address
# On Router A, display information about all the track entries.
[RouterA] display track all
Track ID: 1
Status: Negative
Notification delay: Positive 0, Negative 0 (in seconds)
Reference object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Router A.
[RouterA] display ip routing-table
Routing Tables: Public
Destinations : 4        Routes : 4
Destination/Mask    Proto  Pre  Cost        NextHop         Interface
10.2.1.0/24         Direct 0    0           10.2.1.2        GE2/0/1
10.2.1.2/32         Direct 0    0           127.0.0.1       InLoop0
127.0.0.0/8         Direct 0    0           127.0.0.1       InLoop0
127.0.0.1/32        Direct 0    0           127.0.0.1       InLoop0
The output shows that the static route does not exist, and the status of the track entry is Negative.
Configuring NTP
Before you run the device on a live network, you must synchronize it with a trusted time source by using the Network Time Protocol (NTP) or by setting the system time manually. Various tasks, including network management, charging, auditing, and distributed computing, depend on an accurate system time setting, because the timestamps of system messages and logs use the system time.
Overview
NTP is typically used in large networks to dynamically synchronize time among network devices. It
guarantees higher clock accuracy than manual system clock setting. In a small network that does not
require high clock accuracy, you can keep time synchronized among devices by changing their system
clocks one by one.
NTP runs over UDP and uses UDP port 123.
NTP application
An administrator can hardly keep time synchronized among all the devices within a network by changing the system clock on each station, because this is a huge amount of work and does not guarantee clock precision. NTP, however, allows quick clock synchronization within the entire network and ensures high clock precision.
NTP is used when all devices within the network must be consistent in timekeeping, for example:
•   In analysis of the log information and debugging information collected from different devices in network management, time must be used as a reference basis.
•   All devices must use the same reference clock in a charging system.
•   To implement certain functions, such as scheduled restart of all devices within the network, all devices must be consistent in timekeeping.
•   When multiple systems process a complex event in cooperation, these systems must use the same reference clock to ensure the correct execution sequence.
•   For incremental backup between a backup server and clients, timekeeping must be synchronized between the backup server and all the clients.
NTP advantages
•   NTP uses a stratum to describe clock precision, and it can synchronize time among all devices within the network.
•   NTP supports access control and MD5 authentication.
•   NTP can unicast, multicast, or broadcast protocol messages.
How NTP works
Figure 19 shows how NTP synchronizes the system time between two devices, in this example, Device A
and Device B. Assume that:
•   Prior to the time synchronization, the time of Device A is set to 10:00:00 am and that of Device B is set to 11:00:00 am.
•   Device B is used as the NTP server. Device A is to be synchronized to Device B.
•   It takes 1 second for an NTP message to travel from Device A to Device B, and 1 second from Device B to Device A.
Figure 19 Basic work flow of NTP
The synchronization process is as follows:
•   Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The timestamp is 10:00:00 am (T1).
•   When this NTP message arrives at Device B, it is timestamped by Device B. The timestamp is 11:00:01 am (T2).
•   When the NTP message leaves Device B, Device B timestamps it. The timestamp is 11:00:02 am (T3).
•   When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
Now, Device A can calculate the following parameters based on the timestamps:
•   The roundtrip delay of an NTP message: Delay = (T4 - T1) - (T3 - T2) = 2 seconds.
•   The time difference between Device A and Device B: Offset = ((T2 - T1) + (T3 - T4))/2 = 1 hour.
Based on these parameters, Device A can synchronize its own clock to the clock of Device B.
This is a rough description of how NTP works. For more information, see RFC 1305.
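The delay and offset calculations above can be reproduced in a few lines. The following Python sketch is illustrative only: the function name ntp_delay_and_offset and the calendar date are invented for the example, and the four timestamps are taken from the Device A and Device B scenario.

```python
from datetime import datetime, timedelta

def ntp_delay_and_offset(t1, t2, t3, t4):
    """Compute NTP roundtrip delay and clock offset from the four timestamps.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    """
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2
    return delay, offset

# Timestamps from the Device A / Device B example above.
t1 = datetime(2014, 1, 1, 10, 0, 0)   # message leaves Device A
t2 = datetime(2014, 1, 1, 11, 0, 1)   # message arrives at Device B
t3 = datetime(2014, 1, 1, 11, 0, 2)   # reply leaves Device B
t4 = datetime(2014, 1, 1, 10, 0, 3)   # reply arrives back at Device A

delay, offset = ntp_delay_and_offset(t1, t2, t3, t4)
print(delay)   # 0:00:02 (2-second roundtrip delay)
print(offset)  # 1:00:00 (Device B is 1 hour ahead of Device A)
```

With the example timestamps, the result matches the values above: a 2-second roundtrip delay and a 1-hour offset, which Device A applies to bring its clock in line with Device B.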
NTP message format
NTP uses two types of messages: clock synchronization messages and NTP control messages. NTP control messages are used in environments where network management is needed. Because NTP control messages are not essential for clock synchronization, they are not described in this document. All NTP messages mentioned in this document refer to NTP clock synchronization messages.
A clock synchronization message is encapsulated in a UDP message, as shown in Figure 20.
Figure 20 Clock synchronization message format
The main fields are described as follows:
•   LI (Leap Indicator)—A 2-bit leap indicator. If set to 11, it warns of an alarm condition (clock unsynchronized). If set to any other value, it is not to be processed by NTP.
•   VN (Version Number)—A 3-bit version number that indicates the version of NTP. The latest version is version 4.
•   Mode—A 3-bit code that indicates the work mode of NTP. This field can be set to these values:
    ◦ 0—Reserved
    ◦ 1—Symmetric active
    ◦ 2—Symmetric passive
    ◦ 3—Client
    ◦ 4—Server
    ◦ 5—Broadcast or multicast
    ◦ 6—NTP control message
    ◦ 7—Reserved for private use
•   Stratum—An 8-bit integer that indicates the stratum level of the local clock, in the range of 1 to 16. Clock precision decreases from stratum 1 through stratum 16. A stratum 1 clock has the highest precision, and a stratum 16 clock is not synchronized.
•   Poll—An 8-bit signed integer that indicates the maximum interval between successive messages, which is called the poll interval.
•   Precision—An 8-bit signed integer that indicates the precision of the local clock.
•   Root Delay—Roundtrip delay to the primary reference source.
•   Root Dispersion—The maximum error of the local clock relative to the primary reference source.
•   Reference Identifier—Identifier of the particular reference source.
•   Reference Timestamp—The local time at which the local clock was last set or corrected.
•   Originate Timestamp—The local time at which the request departed from the client for the service host.
•   Receive Timestamp—The local time at which the request arrived at the service host.
•   Transmit Timestamp—The local time at which the reply departed from the service host for the client.
•   Authenticator—Authentication information.
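To make the field layout concrete, the following Python sketch decodes the first 16 bytes of a clock synchronization message with the standard struct module. This is an illustration, not device code: the function name parse_ntp_header is invented here, and only the fields through Reference Identifier are unpacked.

```python
import struct

def parse_ntp_header(packet: bytes) -> dict:
    """Decode the fixed-format NTP clock synchronization header.

    Byte 0 packs LI (2 bits), VN (3 bits), and Mode (3 bits), as in the
    message format described above.
    """
    if len(packet) < 48:
        raise ValueError("an NTP clock synchronization message is at least 48 bytes")
    # !: network byte order; B: unsigned byte; b: signed byte (Poll and
    # Precision are signed 8-bit integers per the field descriptions above).
    li_vn_mode, stratum, poll, precision = struct.unpack("!BBbb", packet[:4])
    root_delay, root_dispersion, ref_id = struct.unpack("!III", packet[4:16])
    return {
        "li": li_vn_mode >> 6,           # leap indicator
        "vn": (li_vn_mode >> 3) & 0x7,   # version number
        "mode": li_vn_mode & 0x7,        # 3 = client, 4 = server, 5 = broadcast/multicast
        "stratum": stratum,
        "poll": poll,
        "precision": precision,
        "root_delay": root_delay,
        "root_dispersion": root_dispersion,
        "reference_id": ref_id,
    }

# A minimal client request: LI = 0, VN = 3, Mode = 3, all other fields zero.
request = bytes([0b00_011_011]) + bytes(47)
fields = parse_ntp_header(request)
print(fields["vn"], fields["mode"])  # 3 3
```

The same byte layout explains why, for example, a server reply is recognized purely by Mode being 4 in the low three bits of the first byte.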
NTP operation modes
Devices that run NTP can implement clock synchronization in one of the following modes:
•   Client/server mode
•   Symmetric peers mode
•   Broadcast mode
•   Multicast mode
You can select operation modes of NTP as needed. If the IP address of the NTP server or peer is unknown
and many devices in the network need to be synchronized, you can adopt the broadcast or multicast
mode. In client/server or symmetric peers mode, a device is synchronized from the specified server or
peer, so clock reliability is enhanced.
Client/server mode
Figure 21 Client/server mode
When operating in client/server mode, a client sends a clock synchronization message to servers, with the Mode field in the message set to 3 (client mode). Upon receiving the message, the servers automatically operate in server mode and send a reply, with the Mode field in the messages set to 4 (server mode). Upon receiving the replies from the servers, the client performs clock filtering and selection, and synchronizes its local clock to that of the optimal reference source.
In client/server mode, a client can be synchronized to a server, but not vice versa.
Symmetric peers mode
Figure 22 Symmetric peers mode
In symmetric peers mode, devices that operate in symmetric active mode and symmetric passive mode
exchange NTP messages with the Mode field set to 3 (client mode) and 4 (server mode). Then the device that
operates in symmetric active mode periodically sends clock synchronization messages, with the Mode
field in the messages set to 1 (symmetric active). The device that receives the messages automatically
enters symmetric passive mode and sends a reply, with the Mode field in the message set to 2 (symmetric
passive). This exchange of messages establishes symmetric peers mode between the two devices, so the
two devices can synchronize, or be synchronized by, each other. If the clocks of both devices have been
synchronized, the device whose local clock has a lower stratum level synchronizes the clock of the other
device.
Broadcast mode
Figure 23 Broadcast mode
In broadcast mode, a server periodically sends clock synchronization messages to broadcast address
255.255.255.255, with the Mode field in the messages set to 5 (broadcast mode). Clients listen to the
broadcast messages from servers. When a client receives the first broadcast message, the client and the
server start to exchange messages with the Mode field set to 3 (client mode) and 4 (server mode), to
calculate the network delay between client and the server. Then, the client enters broadcast client mode.
The client continues listening to broadcast messages and synchronizes its local clock based on the
received broadcast messages.
Multicast mode
Figure 24 Multicast mode
In multicast mode, a server periodically sends clock synchronization messages to the user-configured
multicast address, or, if no multicast address is configured, to the default NTP multicast address 224.0.1.1,
with the Mode field in the messages set to 5 (multicast mode). Clients listen to the multicast messages
from servers. When a client receives the first multicast message, the client and the server start to
exchange messages with the Mode field set to 3 (client mode) and 4 (server mode), to calculate the
network delay between client and server. Then, the client enters multicast client mode. It continues
listening to multicast messages and synchronizes its local clock based on the received multicast
messages.
In symmetric peers mode, broadcast mode and multicast mode, the client (or the symmetric active peer)
and the server (the symmetric passive peer) can operate in the specified NTP working mode only after
they exchange NTP messages with the Mode field being 3 (client mode) and the Mode field being 4
(server mode). During this message exchange process, NTP clock synchronization can be implemented.
NTP for VPNs
The device supports multiple VPN instances when it functions as an NTP client or a symmetric active peer
to realize clock synchronization with the NTP server or symmetric passive peer in an MPLS VPN network.
For more information about MPLS L3VPN, VPN instance, and PE, see MPLS Configuration Guide.
As shown in Figure 25, users in VPN 1 and VPN 2 are connected to the MPLS backbone network through
PE devices, and services of the two VPNs are isolated. If you configure the PEs to operate in NTP client
or symmetric active mode, and specify the VPN to which the NTP server or NTP symmetric passive peer
belongs, the clock synchronization between PEs and CEs of the two VPNs can be realized.
Figure 25 Network diagram
NTP configuration task list
Task                                                  Remarks
Configuring NTP operation modes                       Required.
Configuring the local clock as a reference source     Optional.
Configuring optional parameters for NTP               Optional.
Configuring access-control rights                     Optional.
Configuring NTP authentication                        Optional.
Configuring NTP operation modes
Devices can implement clock synchronization in one of the following modes:
•   Client/server mode—Configure only clients.
•   Symmetric mode—Configure only symmetric-active peers.
•   Broadcast mode—Configure both clients and servers.
•   Multicast mode—Configure both clients and servers.
Configuring NTP client/server mode
If you specify the source interface for NTP messages with the source-interface option, NTP uses the primary IP address of the specified interface as the source IP address of the NTP messages.
A device can act as a server to synchronize other devices only after it is synchronized. If a server has a stratum level higher than or equal to that of a client, the client does not synchronize to that server.
In the ntp-service unicast-server command, ip-address must be a unicast address, rather than a
broadcast address, a multicast address or the IP address of the local clock.
To specify an NTP server on the client:
1.  Enter system view.
    system-view
2.  Specify an NTP server for the device.
    ntp-service unicast-server [ vpn-instance vpn-instance-name ] { ip-address | server-name } [ authentication-keyid keyid | priority | source-interface interface-type interface-number | version number ] *
    By default, no NTP server is specified. You can configure multiple servers by repeating the command. The clients select the optimal reference source.
Configuring the NTP symmetric peers mode
Follow these guidelines when you configure the NTP symmetric peers mode:
•   For devices operating in symmetric mode, specify a symmetric-passive peer on a symmetric-active peer.
•   Use the ntp-service refclock-master command or any NTP configuration command in "Configuring NTP operation modes" to enable NTP. Otherwise, a symmetric-passive peer does not process NTP messages from a symmetric-active peer.
•   Either the symmetric-active peer or the symmetric-passive peer must be in synchronized state. Otherwise, clock synchronization does not proceed.
•   After you specify the source interface for NTP messages with the source-interface option, the source IP address of the NTP messages is set to the primary IP address of the specified interface.
To specify a symmetric-passive peer on the active peer:
1.  Enter system view.
    system-view
2.  Specify a symmetric-passive peer for the device.
    ntp-service unicast-peer [ vpn-instance vpn-instance-name ] { ip-address | peer-name } [ authentication-keyid keyid | priority | source-interface interface-type interface-number | version number ] *
    By default, no symmetric-passive peer is specified. The ip-address argument must be a unicast address, rather than a broadcast address, a multicast address, or the IP address of the local clock.
Configuring NTP broadcast mode
The broadcast server periodically sends NTP broadcast messages to the broadcast address
255.255.255.255. After receiving the messages, the device operating in NTP broadcast client mode
sends a reply and synchronizes its local clock.
Configure the NTP broadcast mode on both the server and clients. The NTP broadcast mode can only be
configured in a specific interface view because an interface needs to be specified on the broadcast
server for sending NTP broadcast messages and on each broadcast client for receiving broadcast
messages.
Configuring a broadcast client
1.  Enter system view.
    system-view
2.  Enter interface view.
    interface interface-type interface-number
    This command enters the view of the interface for receiving NTP broadcast messages.
3.  Configure the device to operate in NTP broadcast client mode.
    ntp-service broadcast-client
Configuring the broadcast server
1.  Enter system view.
    system-view
2.  Enter interface view.
    interface interface-type interface-number
    This command enters the view of the interface for sending NTP broadcast messages.
3.  Configure the device to operate in NTP broadcast server mode.
    ntp-service broadcast-server [ authentication-keyid keyid | version number ] *
    A broadcast server can synchronize broadcast clients only when its clock has been synchronized.
Configuring NTP multicast mode
The multicast server periodically sends NTP multicast messages to multicast clients, which send replies
after receiving the messages and synchronize their local clocks.
Configure the NTP multicast mode on both the server and clients. The NTP multicast mode must be
configured in a specific interface view.
Configuring a multicast client
1.  Enter system view.
    system-view
2.  Enter interface view.
    interface interface-type interface-number
    This command enters the view of the interface for receiving NTP multicast messages.
3.  Configure the device to operate in NTP multicast client mode.
    ntp-service multicast-client [ ip-address ]
    You can configure up to 1024 multicast clients, of which 128 can take effect at the same time.
Configuring the multicast server
1.  Enter system view.
    system-view
2.  Enter interface view.
    interface interface-type interface-number
    This command enters the view of the interface for sending NTP multicast messages.
3.  Configure the device to operate in NTP multicast server mode.
    ntp-service multicast-server [ ip-address ] [ authentication-keyid keyid | ttl ttl-number | version number ] *
    A multicast server can synchronize multicast clients only when its clock has been synchronized.
Configuring the local clock as a reference source
A network device can get its clock synchronized in either of the following two ways:
•   Synchronized to the local clock, which operates as the reference source.
•   Synchronized to another device on the network in any of the four NTP operation modes previously described.
If you configure two synchronization modes, the device selects the optimal clock as the reference source.
Typically, the stratum level of the NTP server that is synchronized from an authoritative clock (such as an
atomic clock) is set to 1. This NTP server operates as the primary reference source on the network, and
other devices synchronize to it. The number of NTP hops that devices in a network are away from the
primary reference source determines the stratum levels of the devices.
If you configure the local clock as a reference clock, the local device can act as a reference clock to
synchronize other devices in the network. Perform this configuration with caution to avoid clock errors in
the devices in the network.
To configure the local clock as a reference source:
1.  Enter system view.
    system-view
2.  Configure the local clock as a reference source.
    ntp-service refclock-master [ ip-address ] [ stratum ]
    The value of the ip-address argument must be 127.127.1.u, where u is in the range of 0 to 3, representing the NTP process ID.
Configuring optional parameters for NTP
This section explains how to configure the optional parameters of NTP.
Specifying the source interface for NTP messages
If you specify the source interface for NTP messages, the device sets the source IP address of the NTP
messages as the primary IP address of the specified interface when sending the NTP messages.
When the device responds to an NTP request received, the source IP address of the NTP response is
always the IP address of the interface that received the NTP request.
Configuration guidelines
•   The source interface for NTP unicast messages is the interface specified in the ntp-service unicast-server or ntp-service unicast-peer command.
•   The source interface for NTP broadcast or multicast messages is the interface where you configure the ntp-service broadcast-server or ntp-service multicast-server command.
•   If the specified source interface goes down, NTP uses the primary IP address of the outgoing interface as the source IP address.
Configuration procedure
To specify the source interface for NTP messages:
1.  Enter system view.
    system-view
2.  Specify the source interface for NTP messages.
    ntp-service source-interface interface-type interface-number
    By default, no source interface is specified for NTP messages, and the system uses the IP address of the interface determined by the matching route as the source IP address of NTP messages.
Disabling an interface from receiving NTP messages
By default, when NTP is enabled, all interfaces can receive NTP messages. You can disable an interface from receiving NTP messages through the following configuration.
1.  Enter system view.
    system-view
2.  Enter interface view.
    interface interface-type interface-number
3.  Disable the interface from receiving NTP messages.
    ntp-service in-interface disable
    By default, an interface is enabled to receive NTP messages.
Configuring the allowed maximum number of dynamic
sessions
A single device can have a maximum of 128 associations at the same time, including static associations
and dynamic associations.
A static association refers to an association that a user has manually created by using an NTP command. A dynamic association is a temporary association created by the system during operation. A dynamic association is removed if the system fails to receive messages from the peer over a specific period of time.
In client/server mode, for example, when you execute a command to synchronize the time to a server, the system creates a static association, and the server simply responds passively upon receipt of a message, rather than creating an association (static or dynamic). In symmetric mode, static associations are created at the symmetric-active peer side, and dynamic associations are created at the symmetric-passive peer side. In broadcast or multicast mode, static associations are created at the server side, and dynamic associations are created at the client side.
To configure the allowed maximum number of dynamic sessions:
1.  Enter system view.
    system-view
2.  Configure the maximum number of dynamic sessions allowed to be established locally.
    ntp-service max-dynamic-sessions number
    The default is 100.
Configuring access-control rights
From the highest to lowest, the NTP service access-control rights are peer, server, synchronization, and
query. If a device receives an NTP request, it performs an access-control right match and uses the first
matched right. If no matched right is found, the device drops the NTP request.
•   Query—Control query permitted. This level of right permits the peer devices to perform control query to the NTP service on the local device, but does not permit a peer device to synchronize its clock to that of the local device. "Control query" refers to query of some states of the NTP service, such as alarm information, authentication status, and clock source information.
•   Synchronization—Server access only. This level of right permits a peer device to synchronize its clock to that of the local device, but does not permit the peer devices to perform control query.
•   Server—Server access and query permitted. This level of right permits the peer devices to perform synchronization and control query to the local device, but does not permit the local device to synchronize its clock to that of a peer device.
•   Peer—Full access. This level of right permits the peer devices to perform synchronization and control query to the local device, and also permits the local device to synchronize its clock to that of a peer device.
The access-control right mechanism provides only a minimum level of security protection for a system
running NTP. A more secure method is identity authentication.
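The first-match behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not device code: the names RIGHT_ORDER and match_access_right and the callable ACL checks are invented here; on the router, each right is bound to an ACL with the ntp-service access command.

```python
# Rights are tried from the highest (peer) to the lowest (query); the first
# right whose ACL permits the source address is used, and a request that
# matches no right is dropped.
RIGHT_ORDER = ["peer", "server", "synchronization", "query"]

def match_access_right(source_ip, acls):
    """Return the first matched right for source_ip, or None (drop).

    acls maps a right name to a callable ACL check (True = permit).
    """
    for right in RIGHT_ORDER:
        acl = acls.get(right)
        if acl is not None and acl(source_ip):
            return right
    return None  # no matched right: the device drops the NTP request

# Example rules: full access for one peer, control query for 10.0.0.0/8.
acls = {
    "peer": lambda ip: ip == "10.1.1.1",
    "query": lambda ip: ip.startswith("10."),
}
print(match_access_right("10.1.1.1", acls))     # peer (highest right matched first)
print(match_access_right("10.2.2.2", acls))     # query
print(match_access_right("192.168.0.1", acls))  # None
```

Note that 10.1.1.1 also matches the query rule, but it receives the peer right because the match proceeds from the highest right down.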
Configuration prerequisites
Before you configure the NTP service access-control right to the local device, create and configure an
ACL associated with the access-control right. For more information about ACLs, see ACL and QoS
Configuration Guide.
Configuration procedure
To configure the NTP service access-control right to the local device:
1.  Enter system view.
    system-view
2.  Configure the NTP service access-control right for a peer device to access the local device.
    ntp-service access { peer | query | server | synchronization } acl-number
    The default is peer.
Configuring NTP authentication
Enable NTP authentication for a system running NTP in networks that require high security. NTP authentication enhances network security by using client-server key authentication, which prohibits a client from synchronizing with a device that fails authentication.
To configure NTP authentication, do the following:
•   Enable NTP authentication.
•   Configure an authentication key.
•   Configure the key as a trusted key.
•   Associate the specified key with an NTP server or a symmetric peer.
These tasks are required. If any task is omitted, NTP authentication cannot function.
Configuring NTP authentication in client/server mode
Follow these instructions to configure NTP authentication in client/server mode:
•   A client can synchronize to the server only when you configure all the required tasks on both the client and server.
•   On the client, if NTP authentication is not enabled or no key is specified to associate with the NTP server, the client is not authenticated. No matter whether NTP authentication is enabled or not on the server, the clock synchronization between the server and client can be performed.
•   On the client, if NTP authentication is enabled and a key is specified to associate with the NTP server, but the key is not a trusted key, the client does not synchronize to the server, no matter whether NTP authentication is enabled or not on the server.
Configuring NTP authentication for a client
1.  Enter system view.
    system-view
2.  Enable NTP authentication.
    ntp-service authentication enable
    By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    ntp-service reliable authentication-keyid keyid
    By default, no authentication key is configured to be trusted.
5.  Associate the specified key with an NTP server.
    ntp-service unicast-server { ip-address | server-name } authentication-keyid keyid
    You can associate a non-existing key with an NTP server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the NTP server.
Configuring NTP authentication for a server
1.  Enter system view.
    system-view
2.  Enable NTP authentication.
    ntp-service authentication enable
    By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    ntp-service reliable authentication-keyid keyid
    By default, no authentication key is configured to be trusted.
Configuring NTP authentication in symmetric peers mode
Follow these instructions to configure NTP authentication in symmetric peers mode:
•   An active symmetric peer can synchronize to the passive symmetric peer only when you configure all the required tasks on both the active symmetric peer and the passive symmetric peer.
•   When the active peer has a greater stratum level than the passive peer:
    ◦ On the active peer, if NTP authentication is not enabled or no key is specified to associate with the passive peer, the active peer synchronizes to the passive peer as long as NTP authentication is disabled on the passive peer.
    ◦ On the active peer, if NTP authentication is enabled and a key is associated with the passive peer, but the key is not a trusted key, the active peer does not synchronize to the passive peer, no matter whether NTP authentication is enabled or not on the passive peer.
•   When the active peer has a smaller stratum level than the passive peer:
    ◦ On the active peer, if NTP authentication is not enabled, no key is specified to associate with the passive peer, or the key is not a trusted key, the active peer can synchronize to the passive peer as long as NTP authentication is disabled on the passive peer.
Configuring NTP authentication for an active peer
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the active symmetric peer and passive symmetric peer.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
5.  Associate the specified key with the passive peer.
    Command: ntp-service unicast-peer { ip-address | peer-name } authentication-keyid keyid
    Remarks: You can associate a non-existing key with a passive peer. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the passive peer.
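For example, the steps above can be combined as follows on the active peer. The key ID 10, key value aNtpKey, and peer address 3.0.1.32 are illustrative only and must be adapted to your network:
# Enable NTP authentication, configure key 10 as a trusted key, and associate it with the passive peer.
<RouterA> system-view
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication-keyid 10 authentication-mode md5 aNtpKey
[RouterA] ntp-service reliable authentication-keyid 10
[RouterA] ntp-service unicast-peer 3.0.1.32 authentication-keyid 10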
Configuring NTP authentication for a passive peer
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the active symmetric peer and passive symmetric peer.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
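The matching configuration on the passive peer can be sketched as follows. The key ID 10 and key value aNtpKey are illustrative and must be identical to those configured on the active peer:
# Enable NTP authentication and configure the same key as a trusted key.
<RouterB> system-view
[RouterB] ntp-service authentication enable
[RouterB] ntp-service authentication-keyid 10 authentication-mode md5 aNtpKey
[RouterB] ntp-service reliable authentication-keyid 10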
Configuring NTP authentication in broadcast mode
Follow these instructions to configure NTP authentication in broadcast mode:
•   A broadcast client can synchronize to the broadcast server only when you configure all the required tasks on both the broadcast client and server.
•   If NTP authentication is not enabled on the client, the broadcast client can synchronize to the broadcast server no matter whether NTP authentication is enabled on the server.
Configuring NTP authentication for a broadcast client
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
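For example, a broadcast client might combine the steps above as follows. The key ID 88, key value 123456, and interface are illustrative; the ntp-service broadcast-client command in interface view is the separate step that makes the device a broadcast client:
<RouterA> system-view
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterA] ntp-service reliable authentication-keyid 88
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ntp-service broadcast-client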
Configuring NTP authentication for a broadcast server
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
5.  Enter interface view.
    Command: interface interface-type interface-number
    Remarks: N/A
6.  Associate the specified key with the broadcast server.
    Command: ntp-service broadcast-server authentication-keyid keyid
    Remarks: You can associate a non-existing key with the broadcast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the broadcast server.
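A sketch of the corresponding broadcast server configuration, with an illustrative key ID 88, key value 123456, and interface:
<RouterC> system-view
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterC] ntp-service reliable authentication-keyid 88
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service broadcast-server authentication-keyid 88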
Configuring NTP authentication in multicast mode
Follow these instructions to configure NTP authentication in multicast mode:
•   A multicast client can synchronize to the multicast server only when you configure all the required tasks on both the multicast client and server.
•   If NTP authentication is not enabled on the client, the multicast client can synchronize to the multicast server no matter whether NTP authentication is enabled on the server.
Configuring NTP authentication for a multicast client
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
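For example, a multicast client might combine the steps above as follows. The key ID 88, key value 123456, and interface are illustrative; the ntp-service multicast-client command in interface view is the separate step that makes the device a multicast client:
<RouterA> system-view
[RouterA] ntp-service authentication enable
[RouterA] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterA] ntp-service reliable authentication-keyid 88
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ntp-service multicast-client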
Configuring NTP authentication for a multicast server
1.  Enter system view.
    Command: system-view
    Remarks: N/A
2.  Enable NTP authentication.
    Command: ntp-service authentication enable
    Remarks: By default, NTP authentication is disabled.
3.  Configure an NTP authentication key.
    Command: ntp-service authentication-keyid keyid authentication-mode md5 [ cipher | simple ] value
    Remarks: By default, no NTP authentication key is configured. Configure the same authentication key on the client and server.
4.  Configure the key as a trusted key.
    Command: ntp-service reliable authentication-keyid keyid
    Remarks: By default, no authentication key is configured to be trusted.
5.  Enter interface view.
    Command: interface interface-type interface-number
    Remarks: N/A
6.  Associate the specified key with the multicast server.
    Command: ntp-service multicast-server authentication-keyid keyid
    Remarks: You can associate a non-existing key with the multicast server. To enable NTP authentication, you must configure the key and specify it as a trusted key after associating the key with the multicast server.
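A sketch of the corresponding multicast server configuration, with an illustrative key ID 88, key value 123456, and interface:
<RouterC> system-view
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterC] ntp-service reliable authentication-keyid 88
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service multicast-server authentication-keyid 88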
Displaying and maintaining NTP
Display information about NTP service status.
    Command: display ntp-service status [ | { begin | exclude | include } regular-expression ]
    Remarks: Available in any view.
Display information about NTP sessions.
    Command: display ntp-service sessions [ verbose ] [ | { begin | exclude | include } regular-expression ]
    Remarks: Available in any view.
Display brief information about the NTP servers from the local device back to the primary reference source.
    Command: display ntp-service trace [ | { begin | exclude | include } regular-expression ]
    Remarks: Available in any view.
NTP configuration examples
NTP client/server mode configuration example
Network requirements
Perform the following configurations to synchronize the time between Router B and Router A:
•   As shown in Figure 26, the local clock of Router A is to be used as a reference source, with the stratum level 2.
•   Router B operates in client/server mode and Router A is to be used as the NTP server of Router B.
Figure 26 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 26. (Details not shown.)
2.
Configure Router A:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
3.
Configure Router B:
# Display the NTP status of Router B before clock synchronization.
<RouterB> display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 0.00 ms
Root dispersion: 0.00 ms
Peer dispersion: 0.00 ms
Reference time: 00:00:00.000 UTC Jan 1 1900 (00000000.00000000)
# Specify Router A as the NTP server of Router B so that Router B synchronizes to Router A.
<RouterB> system-view
[RouterB] ntp-service unicast-server 1.0.1.11
# Display the NTP status of Router B after clock synchronization.
[RouterB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
The output shows that Router B has synchronized to Router A. The stratum level of Router B is 3, and
that of Router A is 2.
# Display the NTP session information of Router B, which shows that an association has been set
up between Router B and Router A.
[RouterB] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[12345] 1.0.1.11       127.127.1.0        2    63   64    3  -75.5   31.0   16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
NTP symmetric peers mode configuration example
Network requirements
Perform the following configurations to synchronize time among devices:
•   As shown in Figure 27, the local clock of Router A is to be configured as a reference source, with the stratum level 2.
•   The local clock of Router C is to be configured as a reference source, with the stratum level 1.
•   Router B operates in client mode and Router A is to be used as the NTP server of Router B.
•   Router C operates in symmetric-active mode and Router B acts as the peer of Router C.
Figure 27 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 27. (Details not shown.)
2.
Configure Router A:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
3.
Configure Router B:
# Specify Router A as the NTP server of Router B.
<RouterB> system-view
[RouterB] ntp-service unicast-server 3.0.1.31
4.
Configure Router C (after Router B is synchronized to Router A):
# Specify the local clock as the reference source, with the stratum level 1.
<RouterC> system-view
[RouterC] ntp-service refclock-master 1
# Configure Router B as a symmetric peer after local synchronization.
[RouterC] ntp-service unicast-peer 3.0.1.32
In the step above, Router B and Router C are configured as symmetric peers, with Router C in the symmetric-active mode and Router B in the symmetric-passive mode. Because the stratum level of Router C is 1 while that of Router B is 3, Router B synchronizes to Router C.
# Display the NTP status of Router B after clock synchronization.
[RouterB] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 3.0.1.33
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: -21.1982 ms
Root delay: 15.00 ms
Root dispersion: 775.15 ms
Peer dispersion: 34.29 ms
Reference time: 15:22:47.083 UTC Sep 19 2005 (C6D95647.153F7CED)
The output shows that Router B has synchronized to Router C. The stratum level of Router B is 2, and
that of Router C is 1.
# Display the NTP session information of Router B, which shows that an association has been set
up between Router B and Router C.
[RouterB] display ntp-service sessions
       source          reference       stra reach poll  now  offset  delay disper
**************************************************************************
[245] 3.0.1.31         127.127.1.0        2    15   64   24 10535.0   19.6   14.5
[1234] 3.0.1.33        LOCL               1    14   64   27   -77.0   16.0   14.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  2
NTP broadcast mode configuration example
Network requirements
As shown in Figure 28, Router C functions as the NTP server for multiple devices on a network segment
and synchronizes the time among multiple devices.
•   Router C's local clock is to be used as a reference source, with the stratum level 2.
•   Router C operates in broadcast server mode and sends broadcast messages from GigabitEthernet 2/0/1.
•   Router B and Router A operate in broadcast client mode and receive broadcast messages through their respective GigabitEthernet 2/0/1.
Figure 28 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 28. (Details not shown.)
2.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to operate in broadcast server mode and send broadcast messages through
GigabitEthernet 2/0/1.
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service broadcast-server
3.
Configure Router A:
# Configure Router A to operate in broadcast client mode and receive broadcast messages on
GigabitEthernet 2/0/1.
<RouterA> system-view
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ntp-service broadcast-client
4.
Configure Router B:
# Configure Router B to operate in broadcast client mode and receive broadcast messages on
GigabitEthernet 2/0/1.
<RouterB> system-view
[RouterB] interface gigabitethernet 2/0/1
[RouterB-GigabitEthernet2/0/1] ntp-service broadcast-client
Router A and Router B get synchronized upon receiving a broadcast message from Router C.
# Take Router A as an example. Display the NTP status of Router A after clock synchronization.
[RouterA-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 3,
and that of Router C is 2.
# Display the NTP session information of Router A, which shows that an association has been set
up between Router A and Router C.
[RouterA-GigabitEthernet2/0/1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0        2   254   64   62  -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
NTP multicast mode configuration example
Network requirements
As shown in Figure 29, Router C functions as the NTP server for multiple devices on different network
segments and synchronizes the time among multiple devices.
•   Router C's local clock is to be used as a reference source, with the stratum level 2.
•   Router C operates in multicast server mode and sends multicast messages from GigabitEthernet 2/0/1.
•   Router D and Router A operate in multicast client mode and receive multicast messages through their respective GigabitEthernet 2/0/1.
Figure 29 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 29. (Details not shown.)
2.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterC> system-view
[RouterC] ntp-service refclock-master 2
# Configure Router C to operate in multicast server mode and send multicast messages through
GigabitEthernet 2/0/1.
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service multicast-server
3.
Configure Router D:
# Configure Router D to operate in multicast client mode and receive multicast messages on
GigabitEthernet 2/0/1.
<RouterD> system-view
[RouterD] interface gigabitethernet 2/0/1
[RouterD-GigabitEthernet2/0/1] ntp-service multicast-client
Because Router D and Router C are on the same subnet, Router D can receive the multicast messages from Router C and synchronize to Router C without having any multicast functions enabled.
# Display the NTP status of Router D after clock synchronization.
[RouterD-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router D has synchronized to Router C. The stratum level of Router D is 3,
and that of Router C is 2.
# Display the NTP session information of Router D, which shows that an association has been set
up between Router D and Router C.
[RouterD-GigabitEthernet2/0/1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0        2   254   64   62  -16.0   31.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
4.
Configure Router B:
Because Router A and Router C are on different subnets, you must enable the multicast functions on
Router B before Router A can receive multicast messages from Router C.
# Enable the IP multicast function.
<RouterB> system-view
[RouterB] multicast routing-enable
[RouterB] interface gigabitethernet 2/0/1
[RouterB-GigabitEthernet2/0/1] igmp enable
[RouterB-GigabitEthernet2/0/1] igmp static-group 224.0.1.1
[RouterB-GigabitEthernet2/0/1] quit
[RouterB] interface gigabitethernet 2/0/2
[RouterB-GigabitEthernet2/0/2] pim dm
5.
Configure Router A:
<RouterA> system-view
[RouterA] interface gigabitethernet 2/0/1
# Configure Router A to operate in multicast client mode and receive multicast messages on
GigabitEthernet 2/0/1.
[RouterA-GigabitEthernet2/0/1] ntp-service multicast-client
# Display the NTP status of Router A after clock synchronization.
[RouterA-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 40.00 ms
Root dispersion: 10.83 ms
Peer dispersion: 34.30 ms
Reference time: 16:02:49.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 3,
and that of Router C is 2.
# Display the NTP session information of Router A, which shows that an association has been set
up between Router A and Router C.
[RouterA-GigabitEthernet2/0/1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0        2   255   64   26  -16.0   40.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
For more information about how to configure IGMP and PIM, see IP Multicast Configuration Guide.
Configuration example for NTP client/server mode with
authentication
Network requirements
As shown in Figure 30, perform the following configurations to synchronize the time between Router B
and Router A and ensure network security.
•   The local clock of Router A is to be configured as a reference source, with the stratum level 2.
•   Router B operates in client mode and Router A is to be used as the NTP server of Router B.
•   NTP authentication is to be enabled on both Router A and Router B.
Figure 30 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 30. (Details not shown.)
2.
Configure Router A:
# Specify the local clock as the reference source, with the stratum level 2.
<RouterA> system-view
[RouterA] ntp-service refclock-master 2
3.
Configure Router B:
<RouterB> system-view
# Enable NTP authentication on Router B.
[RouterB] ntp-service authentication enable
# Set an authentication key.
[RouterB] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[RouterB] ntp-service reliable authentication-keyid 42
# Specify Router A as the NTP server of Router B.
[RouterB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
Before Router B can synchronize to Router A, enable NTP authentication for Router A.
4.
Perform the following configuration on Router A:
# Enable NTP authentication.
[RouterA] ntp-service authentication enable
# Set an authentication key.
[RouterA] ntp-service authentication-keyid 42 authentication-mode md5 aNiceKey
# Specify the key as a trusted key.
[RouterA] ntp-service reliable authentication-keyid 42
# Display the NTP status of Router B after clock synchronization.
[RouterB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
Reference clock ID: 1.0.1.11
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 1.05 ms
Peer dispersion: 7.81 ms
Reference time: 14:53:27.371 UTC Sep 19 2005 (C6D94F67.5EF9DB22)
The output shows that Router B has synchronized to Router A. The stratum level of Router B is 3, and
that of Router A is 2.
# Display the NTP session information of Router B, which shows that an association has been set
up between Router B and Router A.
[RouterB] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[12345] 1.0.1.11       127.127.1.0        2    63   64    3  -75.5   31.0   16.5
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
Configuration example for NTP broadcast mode with
authentication
Network requirements
As shown in Figure 31, Router C functions as the NTP server for multiple devices on different network
segments and synchronizes the time among multiple devices. Router B authenticates the reference source.
•   Router C's local clock is to be used as a reference source, with the stratum level 3.
•   Router C operates in broadcast server mode and sends broadcast messages from GigabitEthernet 2/0/1.
•   Router A and Router B operate in broadcast client mode and receive broadcast messages through GigabitEthernet 2/0/1.
•   Configure NTP authentication on both Router B and Router C.
Figure 31 Network diagram
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 31. (Details not shown.)
2.
Configure Router A:
# Configure Router A to operate in NTP broadcast client mode and receive NTP broadcast
messages on GigabitEthernet 2/0/1.
<RouterA> system-view
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ntp-service broadcast-client
3.
Configure Router B:
# Enable NTP authentication on Router B. Configure an NTP authentication key, with the key ID of
88 and key value of 123456. Specify the key as a trusted key.
<RouterB> system-view
[RouterB] ntp-service authentication enable
[RouterB] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterB] ntp-service reliable authentication-keyid 88
# Configure Router B to operate in broadcast client mode and receive NTP broadcast messages on
GigabitEthernet 2/0/1.
[RouterB] interface gigabitethernet 2/0/1
[RouterB-GigabitEthernet2/0/1] ntp-service broadcast-client
4.
Configure Router C:
# Specify the local clock as the reference source, with the stratum level 3.
<RouterC> system-view
[RouterC] ntp-service refclock-master 3
# Configure Router C to operate in NTP broadcast server mode and use GigabitEthernet 2/0/1 to
send NTP broadcast packets.
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service broadcast-server
[RouterC-GigabitEthernet2/0/1] quit
# Router A synchronizes its local clock based on the broadcast messages received from Router C.
# Display NTP service status information on Router A.
[RouterA-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router A has synchronized to Router C. The stratum level of Router A is 4,
and that of Router C is 3.
# Display the NTP session information of Router A, which shows that an association has been set
up between Router A and Router C.
[RouterA-GigabitEthernet2/0/1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0        3   254   64   62  -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
# NTP authentication is enabled on Router B, but not enabled on Router C, so Router B cannot
synchronize to Router C.
[RouterB-GigabitEthernet2/0/1] display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
Nominal frequency: 100.0000 Hz
Actual frequency: 100.0000 Hz
Clock precision: 2^18
Clock offset: 0.0000 ms
Root delay: 0.00 ms
Root dispersion: 0.00 ms
Peer dispersion: 0.00 ms
Reference time: 00:00:00.000 UTC Jan 1 1900(00000000.00000000)
# Enable NTP authentication on Router C. Configure an NTP authentication key, with the key ID of
88 and key value of 123456. Specify the key as a trusted key.
[RouterC] ntp-service authentication enable
[RouterC] ntp-service authentication-keyid 88 authentication-mode md5 123456
[RouterC] ntp-service reliable authentication-keyid 88
# Specify Router C as an NTP broadcast server, and associate the key 88 with Router C.
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] ntp-service broadcast-server authentication-keyid 88
# After NTP authentication is enabled on Router C, Router B can synchronize to Router C. Display
NTP service status information on Router B.
[RouterB-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
The output shows that Router B has synchronized to Router C. The stratum level of Router B is 4, and that of Router C is 3.
# Display the NTP session information of Router B, which shows that an association has been set
up between Router B and Router C.
[RouterB-GigabitEthernet2/0/1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[1234] 3.0.1.31        127.127.1.0        3   254   64   62  -16.0   32.0   16.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
# Configuration of NTP authentication on Router C does not affect Router A. Router A still
synchronizes to Router C.
[RouterA-GigabitEthernet2/0/1] display ntp-service status
Clock status: synchronized
Clock stratum: 4
Reference clock ID: 3.0.1.31
Nominal frequency: 64.0000 Hz
Actual frequency: 64.0000 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 31.00 ms
Root dispersion: 8.31 ms
Peer dispersion: 34.30 ms
Reference time: 16:01:51.713 UTC Sep 19 2005 (C6D95F6F.B6872B02)
Configuration example for MPLS VPN time synchronization in
client/server mode
Network requirements
As shown in Figure 32, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 are devices in VPN 1. To synchronize the time between PE 2 and CE 1 in VPN 1, configure CE 1's local clock as a reference source, with the stratum level 1, configure PE 2 to operate in client/server mode, and specify VPN 1 as the target VPN.
MPLS L3VPN time synchronization can be implemented only in the unicast mode (client/server mode or symmetric peers mode), but not in the multicast or broadcast mode.
Figure 32 Network diagram
Device   Interface   IP address
CE 1     GE2/0/0     10.1.1.1/24
CE 2     GE2/0/0     10.2.1.1/24
CE 3     GE2/0/0     10.3.1.1/24
CE 4     GE2/0/0     10.4.1.1/24
P        GE2/0/0     172.1.1.2/24
         GE2/0/1     172.2.1.1/24
PE 1     GE2/0/0     10.1.1.2/24
         GE2/0/1     172.1.1.1/24
         GE2/0/2     10.2.1.2/24
PE 2     GE2/0/0     10.3.1.2/24
         GE2/0/1     172.2.1.2/24
         GE2/0/2     10.4.1.2/24
Configuration procedure
Before you perform the following configuration, make sure you have completed the MPLS VPN-related configurations and that CE 1 and PE 1, PE 1 and PE 2, and PE 2 and CE 3 can reach each other. For information about configuring MPLS VPN, see MPLS Configuration Guide.
1.
Set the IP address for each interface as shown in Figure 32. (Details not shown.)
2.
Configure CE 1:
# Specify the local clock as the reference source, with the stratum level 1.
<CE1> system-view
[CE1] ntp-service refclock-master 1
3.
Configure PE 2:
# Specify CE 1 as the NTP server for VPN 1.
<PE2> system-view
[PE2] ntp-service unicast-server vpn-instance vpn1 10.1.1.1
# Display the NTP session information and status information on PE 2 a certain period of time later.
The information should show that PE 2 has been synchronized to CE 1, with the stratum level 2.
[PE2] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 10.1.1.1
Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 47.00 ms
Root dispersion: 0.18 ms
Peer dispersion: 34.29 ms
Reference time: 02:36:23.119 UTC Jan 1 2001(BDFA6BA7.1E76C8B4)
[PE2] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[12345]10.1.1.1        LOCL               1     7   64   15    0.0   47.0    7.8
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
[PE2] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.013500, synch distance 0.03154
server 10.1.1.1,stratum 1, offset -0.506500, synch distance 0.03429
refid 127.127.1.0
Configuration example for MPLS VPN time synchronization in
symmetric peers mode
Network requirements
As shown in Figure 32, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 belong to VPN 1. To synchronize the time between PE 1 and CE 1 in VPN 1, configure CE 1's local clock as a reference source, with the stratum level 1, configure PE 1 to operate in symmetric peers mode, and specify VPN 1 as the target VPN.
Configuration procedure
1.
Set the IP address for each interface as shown in Figure 32. (Details not shown.)
2.
Configure CE 1:
# Specify the local clock as the reference source, with the stratum level 1.
<CE1> system-view
[CE1] ntp-service refclock-master 1
3.
Configure PE 1:
# Specify CE 1 as the symmetric-passive peer for VPN 1.
<PE1> system-view
[PE1] ntp-service unicast-peer vpn-instance vpn1 10.1.1.1
# Display NTP session information and status information on PE 1 a certain period of time later.
The information should show that PE 1 has been synchronized to CE 1, with the stratum level 2.
[PE1] display ntp-service status
Clock status: synchronized
Clock stratum: 2
Reference clock ID: 10.1.1.1
Nominal frequency: 63.9100 Hz
Actual frequency: 63.9100 Hz
Clock precision: 2^7
Clock offset: 0.0000 ms
Root delay: 32.00 ms
Root dispersion: 0.60 ms
Peer dispersion: 7.81 ms
Reference time: 02:44:01.200 UTC Jan 1 2001(BDFA6D71.33333333)
[PE1] display ntp-service sessions
       source          reference       stra reach poll  now offset  delay disper
**************************************************************************
[12345]10.1.1.1        LOCL               1     1   64   29  -12.0   32.0   15.6
note: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured
Total associations :  1
[PE1] display ntp-service trace
server 127.0.0.1,stratum 2, offset -0.012000, synch distance 0.02448
server 10.1.1.1,stratum 1, offset 0.003500, synch distance 0.00781
refid 127.127.1.0
Configuring IPC
This chapter provides an overview of Inter-Process Communication (IPC) and describes the IPC
monitoring commands.
Overview
IPC provides a reliable communication mechanism among processing units, typically CPUs. IPC is
typically used on a distributed device to provide reliable inter-card or inter-device transmission. This
section describes the basic IPC concepts.
Node
An IPC node is an independent IPC-capable processing unit, typically, a CPU.
The HSR6602/6604/6608/6616 routers have multiple IPC nodes, because each card in them has at
least one CPU.
The 6602 router has only one IPC node, because it has only one CPU.
Link
An IPC link is a connection between any two IPC nodes. Any two IPC nodes have one and only one IPC
link for sending and receiving packets. All IPC nodes are fully meshed.
The system creates IPC links when it is initialized. An IPC node, upon startup, sends handshake packets
to other nodes. If the handshake succeeds, a connection is established.
The system uses link status to identify the link connectivity between two nodes. An IPC node can have
multiple links, and each link has its own status.
Channel
A channel is the communication interface between peer upper layer application modules that use
different IPC nodes. Each node assigns a locally unique channel number to each upper layer application
module for identification.
An upper layer application module sends data to an IPC module across a channel, and the IPC module
sends the data to a peer node across a link, as shown in Figure 33.
Figure 33 Relationship between a node, link and channel
Packet sending modes
IPC uses one of the following modes to send packets for upper layer application modules:
•   Unicast—One node sends packets to another node.
•   Multicast—One node sends packets to several other nodes. This mode includes broadcast, a special multicast. To use multicast mode, an application module must create a multicast group that includes a set of nodes. Multicasts destined for this group are sent to all the nodes in the group. An application module can create multiple multicast groups. Creation and deletion of a multicast group or group member depend on the application module.
•   Mixcast—Supports both unicast and multicast.
IPC assigns one queue for each mode. An upper layer application module automatically selects one mode as needed.
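The three sending modes can be modeled in a few lines of Python. This is an illustrative sketch only: the IpcNode class, the node IDs, and the method names are invented here and do not correspond to any device API.

```python
# Model of the IPC packet sending modes: unicast to one node, multicast to a
# group of nodes created by an application module. All names are hypothetical.

class IpcNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.groups = {}   # multicast groups created by application modules
        self.sent = []     # (destination, payload) log for this example

    def create_group(self, name, members):
        self.groups[name] = set(members)

    def unicast(self, dest, payload):
        self.sent.append((dest, payload))

    def multicast(self, group, payload):
        # one copy per group member; broadcast is simply a group
        # containing all other nodes
        for dest in sorted(self.groups[group]):
            self.sent.append((dest, payload))

node = IpcNode(0)
node.create_group("g1", [1, 2, 3])
node.unicast(1, "hello")
node.multicast("g1", "update")
print(node.sent)
```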
Enabling IPC performance statistics
The IPC performance statistics function provides the most recent 10-second, 1-minute, and 5-minute traffic
input and output statistics for IPC nodes. If this function is disabled, the display ipc performance
command displays the statistics collected before IPC performance statistics was disabled.
Perform the following task in user view:
Task: Enable IPC performance statistics.
Command: ipc performance enable { node node-id | self-node } [ channel channel-id ]
Remarks: By default, the function is disabled.
Displaying and maintaining IPC
All display commands are available in any view; the reset command is available in user view.

Task: Display IPC node information.
Command: display ipc node [ | { begin | exclude | include } regular-expression ]

Task: Display channel information for a node.
Command: display ipc channel { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]

Task: Display queue information for a node.
Command: display ipc queue { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]

Task: Display multicast group information for a node.
Command: display ipc multicast-group { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]

Task: Display packet information for a node.
Command: display ipc packet { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]

Task: Display link status information for a node.
Command: display ipc link { node node-id | self-node } [ | { begin | exclude | include } regular-expression ]

Task: Display IPC performance statistics for a node.
Command: display ipc performance { node node-id | self-node } [ channel channel-id ] [ | { begin | exclude | include } regular-expression ]

Task: Clear IPC performance statistics for a node.
Command: reset ipc performance [ node node-id | self-node ] [ channel channel-id ]
Configuring SNMP
This chapter provides an overview of the Simple Network Management Protocol (SNMP) and guides you
through the configuration procedure.
Overview
SNMP is an Internet standard protocol widely used for a management station to access and operate the
devices on a network, regardless of their vendors, physical characteristics and interconnect technologies.
SNMP enables network administrators to read and set the variables on managed devices for state
monitoring, troubleshooting, statistics collection, and other management purposes.
SNMP framework
The SNMP framework comprises the following elements:
•   SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the network.
•   SNMP agent—Works on a managed device to receive and handle requests from the NMS, and sends traps to the NMS when some events, such as an interface state change, occur.
•   Management Information Base (MIB)—Specifies the variables (for example, interface status and CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.
Figure 34 Relationship between an NMS, agent and MIB
MIB and view-based MIB access control
A MIB stores variables called "nodes" or "objects" in a tree hierarchy and identifies each node with a
unique OID. An OID is a string of numbers that describes the path from the root node to a leaf node. For
example, object B in Figure 35 is uniquely identified by the OID {1.2.1.1}.
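An OID travels on the wire in BER-encoded form: the first two arcs share one octet (40x + y) and every later arc is written base-128 with the high bit set on all but its last octet. The sketch below illustrates that standard encoding; it is a teaching aid, not code from any SNMP library.

```python
def encode_oid(oid: str) -> bytes:
    """BER-encode the value octets of an OBJECT IDENTIFIER
    (tag and length octets omitted for brevity)."""
    arcs = [int(a) for a in oid.split(".")]
    out = [40 * arcs[0] + arcs[1]]        # first two arcs share one octet
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]              # base-128; high bit set on all but last
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

print(encode_oid("1.3.6.1.2.1.1.5.0").hex())  # sysName.0 -> 2b06010201010500
```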
Figure 35 MIB tree
A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privilege and
is identified by a view name. The MIB objects included in the MIB view are accessible while those
excluded from the MIB view are inaccessible.
A MIB view can have multiple view records, each identified by a view-name oid-tree pair.
You control access to the MIB by assigning MIB views to SNMP groups or communities.
SNMP operations
SNMP provides the following basic operations:
•   Get—The NMS retrieves SNMP object nodes in an agent MIB.
•   Set—The NMS modifies the value of an object node in an agent MIB.
•   Notifications—Include traps and informs. The SNMP agent sends traps or informs to report events to the NMS. The difference between these two types of notification is that informs require acknowledgment but traps do not. The device supports only traps.
SNMP protocol versions
HP supports SNMPv1, SNMPv2c, and SNMPv3. An NMS and an SNMP agent must use the same
SNMP version to communicate with each other.
•   SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS must use the same community name as set on the SNMP agent. If the community name used by the NMS is different from that set on the agent, the NMS cannot establish an SNMP session to access the agent or receive traps from the agent.
•   SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1, but supports more operation modes, data types, and error codes.
•   SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets for integrity, authenticity, and confidentiality.
SNMP configuration task list
Task                                      Remarks
Configuring SNMP basic parameters         Required.
Configuring SNMP logging                  Optional.
Configuring SNMP traps                    Optional.
Configuring SNMP basic parameters
SNMPv3 differs from SNMPv1 and SNMPv2c in many ways. Their configuration procedures are
described in separate sections.
Configuring SNMPv3 basic parameters

1.  Enter system view.
    Command: system-view

2.  Enable the SNMP agent.
    Command: snmp-agent
    Optional. By default, the SNMP agent is disabled. You can also enable the SNMP agent by using any command that begins with snmp-agent except for the snmp-agent calculate-password command.

3.  Configure system information for the SNMP agent.
    Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { all | { v1 | v2c | v3 }* } }
    Optional. The defaults are as follows:
    • Contact—Null.
    • Location—Null.
    • Version—SNMPv3.

4.  Configure the local engine ID.
    Command: snmp-agent local-engineid engineid
    Optional. The default local engine ID is the company ID plus the device ID. After you change the local engine ID, the existing SNMPv3 users become invalid, and you must re-create the SNMPv3 users.

5.  Create or update a MIB view.
    Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
    Optional. By default, the MIB view ViewDefault is predefined and its OID is 1. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB subtree masks multiple times, the last configuration takes effect. Except the four subtrees in the default MIB view, you can create up to 16 unique MIB view records.

6.  Configure an SNMPv3 group.
    Command: snmp-agent group v3 group-name [ authentication | privacy ] [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
    By default, no SNMP group exists.

7.  Convert a plaintext key to a ciphertext (encrypted) key.
    Command: snmp-agent calculate-password plain-password mode { 3desmd5 | 3dessha | md5 | sha } { local-engineid | specified-engineid engineid }
    Optional.

8.  Add a user to the SNMPv3 group.
    Command: snmp-agent usm-user v3 user-name group-name [ [ cipher ] authentication-mode { md5 | sha } auth-password [ privacy-mode { 3des | aes128 | des56 } priv-password ] ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
    The md5, des56, and 3des keywords are supported only in non-FIPS mode.

9.  Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.
    Command: snmp-agent packet max-size byte-count
    Optional. By default, the SNMP agent can receive and send SNMP packets up to 1500 bytes.
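The included/excluded view records combine by most-specific match: an OID is accessible only if the deepest view record covering it is an included one. The sketch below models that rule with plain prefix matching; it is a simplified illustration, and the device's actual handling of MIB subtree masks is not modeled here.

```python
# Simplified model of view-based MIB access control. Mask handling and other
# device specifics are intentionally omitted; this only shows how included
# and excluded records interact, with the most specific record winning.

def arcs(oid):
    return [int(a) for a in oid.split(".")]

def under_subtree(oid, subtree):
    """True if oid falls under subtree (plain prefix match, no mask)."""
    o, s = arcs(oid), arcs(subtree)
    return len(o) >= len(s) and o[:len(s)] == s

def view_permits(oid, records):
    """records: list of (subtree, 'included' | 'excluded') pairs."""
    best = None
    for subtree, kind in records:
        if under_subtree(oid, subtree):
            if best is None or len(arcs(subtree)) > len(arcs(best[0])):
                best = (subtree, kind)   # deeper subtree wins
    return best is not None and best[1] == "included"

view = [("1.3.6.1", "included"), ("1.3.6.1.2.1.11", "excluded")]
print(view_permits("1.3.6.1.2.1.1.5.0", view))    # sysName.0 is visible
print(view_permits("1.3.6.1.2.1.11.29.0", view))  # snmp subtree is excluded
```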
Configuring SNMPv1 or SNMPv2c basic parameters

1.  Enter system view.
    Command: system-view

2.  Enable the SNMP agent.
    Command: snmp-agent
    Optional. By default, the SNMP agent is disabled. You can also enable the SNMP agent by using any command that begins with snmp-agent.

3.  Configure system information for the SNMP agent.
    Command: snmp-agent sys-info { contact sys-contact | location sys-location | version { all | { v1 | v2c | v3 }* } }
    Optional. The defaults are as follows:
    • Contact—Null.
    • Location—Null.
    • Version—SNMPv3.

4.  Configure the local engine ID.
    Command: snmp-agent local-engineid engineid
    Optional. The default local engine ID is the company ID plus the device ID.

5.  Create or update a MIB view.
    Command: snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]
    Optional. By default, the MIB view ViewDefault is predefined and its OID is 1. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB subtree masks multiple times, the last configuration takes effect. Except the four subtrees in the default MIB view, you can create up to 16 unique MIB view records.

6.  Configure the SNMP access right. Use either method:
    • (Method 1) Create an SNMP community:
      snmp-agent community { read | write } [ cipher ] community-name [ mib-view view-name ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
    • (Method 2) Create an SNMP group, and add a user to the SNMP group:
      a. snmp-agent group { v1 | v2c } group-name [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number | acl ipv6 ipv6-acl-number ] *
      b. snmp-agent usm-user { v1 | v2c } user-name group-name [ acl acl-number | acl ipv6 ipv6-acl-number ] *
    By default, no SNMP group exists. The username configured by using method 2 is equivalent to the community name configured by using method 1, and it must be the same as the community name configured on the NMS.

7.  Configure the maximum size (in bytes) of SNMP packets for the SNMP agent.
    Command: snmp-agent packet max-size byte-count
    Optional. By default, the SNMP agent can receive and send SNMP packets up to 1500 bytes.

Configuring SNMP logging
IMPORTANT:
Disable SNMP logging in normal cases to prevent large volumes of SNMP logs from degrading device performance.
The SNMP logging function logs Get requests, Set requests, and Set responses, but does not log Get
responses.
•   Get operation—The agent logs the IP address of the NMS, name of the accessed node, and node OID.
•   Set operation—The agent logs the NMS' IP address, name of accessed node, node OID, variable value, and error code and index for the Set operation.
The SNMP module sends these logs to the information center as informational messages. You can
configure the information center to output these messages to certain destinations, for example, the
console and the log buffer. The total output size for the node field (MIB node name) and the value field
(value of the MIB node) in each log entry is 1024 bytes. If this limit is exceeded, the information center
truncates the data in the fields. For more information about the information center, see "Configuring the
information center."
To enable SNMP logging:

1.  Enter system view.
    Command: system-view

2.  Enable SNMP logging.
    Command: snmp-agent log { all | get-operation | set-operation }
    By default, SNMP logging is disabled.
Configuring SNMP traps
The SNMP agent sends traps to inform the NMS of important events, such as a reboot.
Traps include generic traps and vendor-specific traps. Available generic traps include authentication,
coldstart, linkdown, linkup and warmstart. All other traps are vendor-defined.
SNMP traps generated by a module are sent to the information center. You can configure the information
center to enable or disable outputting the traps from a module by severity and set output destinations. For
more information about the information center, see "Configuring the information center."
Enabling SNMP traps
Enable SNMP traps only if necessary. SNMP traps are memory-intensive and might affect device
performance.
To generate linkUp or linkDown traps when the link state of an interface changes, enable the linkUp or
linkDown trap function both globally by using the snmp-agent trap enable [ standard [ linkdown |
linkup ] * ] command and on the interface by using the enable snmp trap updown command.
After you enable a trap function for a module, whether the module generates traps also depends on the
configuration of the module. For more information, see the configuration guide for each module.
To enable traps:
1.  Enter system view.
    Command: system-view

2.  Enable traps globally.
    Command: snmp-agent trap enable [ acfp [ client | policy | rule | server ] | bfd | bgp | configuration | default-route | flash | fr | mpls | ospf [ process-id ] [ ifauthfail | ifcfgerror | ifrxbadpkt | ifstatechange | iftxretransmit | lsdbapproachoverflow | lsdboverflow | maxagelsa | nbrstatechange | originatelsa | vifcfgerror | virifauthfail | virifrxbadpkt | virifstatechange | viriftxretransmit | virnbrstatechange ] * | pim [ candidatebsrwinelection | electedbsrlostelection | interfaceelection | invalidjoinprune | invalidregister | neighborloss | rpmappingchange ] * | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system | vrrp [ authfailure | newmaster ] ]
    Optional. By default, the trap function of modules is enabled globally.

3.  Enter interface view.
    Command: interface interface-type interface-number, or controller { cpos | e1 | e3 | e-cpos | t1 | t3 } number
    Use either command depending on the interface type.

4.  Enable link state traps.
    Command: enable snmp trap updown
    By default, the link state traps are enabled.
Configuring the SNMP agent to send traps to a host
The SNMP module buffers the traps received from a module in a trap queue. You can set the size of the
queue, the duration that the queue holds a trap, and trap target (destination) hosts, typically the NMS.
To successfully send traps, you must also perform the following tasks:
•   Complete the basic SNMP settings and verify that they are the same as on the NMS. If SNMPv1 or SNMPv2c is used, you must configure a community name. If SNMPv3 is used, you must configure an SNMPv3 user and MIB view.
•   Make sure the device and the NMS can reach each other.
To configure the SNMP agent to send traps to a host:
1.  Enter system view.
    Command: system-view

2.  Configure a target host.
    Command: snmp-agent target-host trap address udp-domain { ip-address | ipv6 ipv6-address } [ udp-port port-number ] [ vpn-instance vpn-instance-name ] params securityname security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
    To send the traps to the NMS, this command is required, and you must specify the ip-address argument as the IP address of the NMS.

3.  Configure the source address for traps.
    Command: snmp-agent trap source interface-type { interface-number | interface-number.subnumber }
    Optional. By default, SNMP chooses the IP address of an interface to be the source IP address of traps.

4.  Extend the standard linkUp/linkDown traps.
    Command: snmp-agent trap if-mib link extended
    Optional. By default, standard linkUp/linkDown traps are used. Extended linkUp/linkDown traps add interface description and interface type to standard linkUp/linkDown traps. If the NMS does not support extended SNMP messages, use standard linkUp/linkDown traps.

5.  Configure the trap queue size.
    Command: snmp-agent trap queue-size size
    Optional. The default trap queue size is 100. When the trap queue is full, the oldest traps are automatically deleted for new traps.

6.  Configure the trap holding time.
    Command: snmp-agent trap life seconds
    Optional. The default setting is 120 seconds. A trap is deleted when its holding time expires.
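The trap queue described above is a bounded FIFO: when it is full, the oldest trap is dropped to make room for a new one. A minimal sketch of that behavior, assuming a queue size of 3 for brevity (the TrapQueue class is invented for illustration; the trap holding time is not modeled):

```python
from collections import deque

# Bounded FIFO model of the SNMP trap queue: the oldest traps are dropped
# when the queue is full. The default device queue size is 100.

class TrapQueue:
    def __init__(self, size=100):
        self.q = deque(maxlen=size)  # deque discards from the head when full

    def enqueue(self, trap):
        self.q.append(trap)

    def pending(self):
        return list(self.q)

tq = TrapQueue(size=3)
for t in ["coldStart", "linkDown", "linkUp", "authenticationFailure"]:
    tq.enqueue(t)
print(tq.pending())  # the oldest trap (coldStart) was dropped
```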
Displaying and maintaining SNMP
All commands are available in any view.

Task: Display SNMP agent system information, including the contact, physical location, and SNMP version.
Command: display snmp-agent sys-info [ contact | location | version ]* [ | { begin | exclude | include } regular-expression ]

Task: Display SNMP agent statistics.
Command: display snmp-agent statistics [ | { begin | exclude | include } regular-expression ]

Task: Display the local engine ID.
Command: display snmp-agent local-engineid [ | { begin | exclude | include } regular-expression ]

Task: Display SNMP group information.
Command: display snmp-agent group [ group-name ] [ | { begin | exclude | include } regular-expression ]

Task: Display basic information about the trap queue.
Command: display snmp-agent trap queue [ | { begin | exclude | include } regular-expression ]

Task: Display the modules that can send traps and their trap status (enable or disable).
Command: display snmp-agent trap-list [ | { begin | exclude | include } regular-expression ]

Task: Display SNMPv3 user information.
Command: display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] * [ | { begin | exclude | include } regular-expression ]

Task: Display SNMPv1 or SNMPv2c community information.
Command: display snmp-agent community [ read | write ] [ | { begin | exclude | include } regular-expression ]

Task: Display MIB view information.
Command: display snmp-agent mib-view [ exclude | include | viewname view-name ] [ | { begin | exclude | include } regular-expression ]
SNMP configuration examples
This section gives examples of configuring SNMPv1 or SNMPv2c, SNMPv3, and SNMP logging.
SNMPv1/SNMPv2c configuration example
Network requirements
As shown in Figure 36, the NMS (1.1.1.2/24) uses SNMPv1 or SNMPv2c to manage the SNMP agent
(1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
Figure 36 Network diagram
Configuration procedure
1.  Configure the SNMP agent:
# Configure the IP address of the agent, and make sure the agent and the NMS can reach each
other. (Details not shown.)
# Specify SNMPv1 and SNMPv2c, and create a read-only community public and a read and write
community private.
<Agent> system-view
[Agent] snmp-agent sys-info version v1 v2c
[Agent] snmp-agent community read public
[Agent] snmp-agent community write private
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP traps, set the NMS at 1.1.1.2 as an SNMP trap destination, and use public as the
community name. (To make sure the NMS can receive traps, specify the same SNMP version in the
snmp-agent target-host command as is configured on the NMS.)
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public v1
[Agent] quit
2.  Configure the SNMP NMS:
# Configure the SNMP version for the NMS as v1 or v2c, create a read-only community and name
it public, and create a read and write community and name it private. For more information about
configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
3.  Verify the configuration:
# Try to get the count of sent traps from the agent. The attempt succeeds.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv1
Operation: Get
Request binding:
1: 1.3.6.1.2.1.11.29.0
Response binding:
1: Oid=snmpOutTraps.0 Syntax=CNTR32 Value=18
Get finished
# Use a wrong community name to get the value of a MIB node from the agent. You can see an
authentication failure trap on the NMS.
1.1.1.1/2934 V1 Trap = authenticationFailure
SNMP Version = V1
Community = public
Command = Trap
Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50
GenericID = 4
SpecificID = 0
Time Stamp = 8:35:25.68
SNMPv3 configuration example
Network requirements
As shown in Figure 37, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status
of the agent (1.1.1.1/24), and the agent automatically sends traps to report events to the NMS.
The NMS and the agent perform authentication when they set up an SNMP session. The authentication
algorithm is MD5 and the authentication key is authkey. The NMS and the agent also encrypt the SNMP
packets between them by using the DES algorithm and the privacy key prikey.
Figure 37 Network diagram
Configuration procedure
1.  Configure the agent:
# Configure the IP address of the agent and make sure the agent and the NMS can reach each
other. (Details not shown.)
# Assign the NMS read and write access to the objects under the snmp node (OID
1.3.6.1.2.1.11), and deny its access to any other MIB object.
<Agent> system-view
[Agent] undo snmp-agent mib-view ViewDefault
[Agent] snmp-agent mib-view included test interfaces
[Agent] snmp-agent group v3 managev3group read-view test write-view test
# Set the username to managev3user, authentication algorithm to MD5, authentication key to
authkey, encryption algorithm to DES56, and privacy key to prikey.
[Agent] snmp-agent usm-user v3 managev3user managev3group authentication-mode md5
authkey privacy-mode des56 prikey
# Configure contact person and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable traps, specify the NMS at 1.1.1.2 as a trap destination, and set the username to
managev3user for the traps.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
managev3user v3 privacy
2.  Configure the SNMP NMS:
    • Specify the SNMP version for the NMS as v3.
    • Create two SNMP users: managev3user and public.
    • Enable both authentication and privacy functions.
    • Use MD5 for authentication and DES for encryption.
    • Set the authentication key to authkey and the privacy key to prikey.
    • Set the timeout time and maximum number of retries.
For more information about configuring the NMS, see the NMS manual.
NOTE:
The SNMP settings on the agent and the NMS must match.
3.  Verify the configuration:
# Try to get the count of sent traps from the agent. The get attempt succeeds.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv3
Operation: Get
Request binding:
1: 1.3.6.1.2.1.11.29.0
Response binding:
1: Oid=snmpOutTraps.0 Syntax=CNTR32 Value=18
Get finished
# Try to get the device name from the agent. The get attempt fails because the NMS has no access
right to the node.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv3
Operation: Get
Request binding:
1: 1.3.6.1.2.1.1.5.0
Response binding:
1: Oid=sysName.0 Syntax=noSuchObject Value=NULL
Get finished
# Execute the shutdown or undo shutdown command on an idle interface on the agent. You can
see the interface state change traps on the NMS:
1.1.1.1/3374 V3 Trap = linkdown
SNMP Version = V3
Community = managev3user
Command = Trap
1.1.1.1/3374 V3 Trap = linkup
SNMP Version = V3
Community = managev3user
Command = Trap
SNMP logging configuration example
Network requirements
Configure the SNMP agent (1.1.1.1/24) in Figure 38 to log the SNMP operations performed by the NMS.
Figure 38 Network diagram
Configuration procedure
This example assumes that you have configured all required SNMP settings for the NMS and the agent
(see "SNMPv1/SNMPv2c configuration example" or "SNMPv3 configuration example").
# Enable displaying log messages on the configuration terminal. (This function is enabled by default.
Skip this step if you are using the default.)
<Agent> terminal monitor
<Agent> terminal logging
# Enable the information center to output system information with severity level equal to or higher than
informational to the console port.
<Agent> system-view
[Agent] info-center source snmp channel console log level informational
# Enable logging GET and SET operations.
[Agent] snmp-agent log get-operation
[Agent] snmp-agent log set-operation
# Verify the configuration:
Use the NMS to get a MIB variable from the agent. The following is a sample log message displayed on
the configuration terminal:
%Nov 23 16:10:09:482 2011 Agent SNMP/6/SNMP_GET:
-seqNO=27-srcIP=1.1.1.2-op=GET-node=sysUpTime(1.3.6.1.2.1.1.3.0)-value=-node=ifHCOutOctets(1.3.6.1.2.1.31.1.1.1.10.1)-value=; The agent received a message.
Use the NMS to set a MIB variable on the agent. The following is a sample log message displayed on
the configuration terminal:
%Nov 23 16:16:42:581 2011 Agent SNMP/6/SNMP_SET:
-seqNO=37-srcIP=1.1.1.2-op=SET-errorIndex=0-errorStatus=noError-node=sysLocation(1.3.6.1.2.1.1.6.0)-value=beijing; The agent received a message.
Table 2 SNMP log message field description

Field: Nov 23 16:10:09:482 2011
Description: Time when the SNMP log was generated.

Field: seqNO
Description: Serial number automatically assigned to the SNMP log, starting from 0.

Field: srcIP
Description: IP address of the NMS.

Field: op
Description: SNMP operation type (GET or SET).

Field: node
Description: MIB node name and OID of the node instance.

Field: errorIndex
Description: Error index, with 0 meaning no error.

Field: errorStatus
Description: Error status, with noError meaning no error.

Field: value
Description: Value set by the SET operation. This field is null for a GET operation. If the value is a character string that has invisible characters or characters beyond the ASCII range 0 to 127, the string is displayed in hexadecimal format, for example, value = <81-43>[hex].
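Because the log fields follow a regular -name=value layout, they are easy to extract with a regular expression. The sketch below pulls seqNO, srcIP, and op from one of the sample log lines above; the regex is keyed to exactly the format shown and should be treated as a starting point, not a parser for every possible log line.

```python
import re

# Sample SET log line from the text above (reassembled on one line).
LOG = ("%Nov 23 16:16:42:581 2011 Agent SNMP/6/SNMP_SET: "
       "-seqNO=37-srcIP=1.1.1.2-op=SET-errorIndex=0-errorStatus=noError"
       "-node=sysLocation(1.3.6.1.2.1.1.6.0)-value=beijing;")

m = re.search(r"-seqNO=(\d+)-srcIP=([\d.]+)-op=(GET|SET)", LOG)
seq, src, op = int(m.group(1)), m.group(2), m.group(3)
print(seq, src, op)
```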
The information center can output system event messages to several destinations, including the terminal
and the log buffer. In this example, SNMP log messages are output to the terminal. To configure other
message destinations, see "Configuring the information center."
Configuring RMON
Overview
Remote Monitoring (RMON) is an enhancement to SNMP for remote device management and traffic
monitoring. An RMON monitor, typically the RMON agent embedded in a network device, periodically
or continuously collects traffic statistics for the network attached to a port, and when a statistic crosses a
threshold, it logs the crossing event and sends a trap to the management station.
RMON uses SNMP traps to notify NMSs of exceptional conditions. RMON SNMP traps report various
events, including traffic events such as broadcast traffic threshold exceeded. In contrast, SNMP standard
traps report device operating status changes such as link up, link down, and module failure.
RMON enables proactive monitoring and management of remote network devices and subnets. The
managed device can automatically send a trap when a statistic crosses an alarm threshold, and the
NMS does not need to constantly poll MIB variables and compare the results. As a result, network traffic
is reduced.
Working mechanism
RMON monitors typically take one of the following forms:
•   Dedicated RMON probes. NMSs can obtain management information from RMON probes directly and control network resources. This method enables NMSs to obtain all RMON MIB information.
•   RMON agents embedded in network devices. NMSs exchange data with RMON agents by using basic SNMP operations to gather network management information. Because this method is resource intensive, most RMON agent implementations provide only four groups of MIB information: alarm, event, history, and statistics.
HP devices provide the embedded RMON agent function. You can configure your device to collect and
report traffic statistics, error statistics, and performance statistics.
RMON groups
Among the RFC 2819 defined RMON groups, HP implements the statistics group, history group, event
group, and alarm group supported by the public MIB. HP also implements a private alarm group, which
enhances the standard alarm group.
Ethernet statistics group
The statistics group defines that the system collects various traffic statistics on an interface (only Ethernet
interfaces are supported), and saves the statistics in the Ethernet statistics table (etherStatsTable) for future
retrieval. The interface traffic statistics include network collisions, CRC alignment errors,
undersize/oversize packets, broadcasts, multicasts, bytes received, and packets received.
After you create a statistics entry for an interface, the statistics group starts to collect traffic statistics on the
interface. The statistics in the Ethernet statistics table are cumulative sums.
History group
The history group defines that the system periodically collects traffic statistics on interfaces and saves the
statistics in the history record table (etherHistoryTable). The statistics include bandwidth utilization,
number of error packets, and total number of packets.
The history statistics table records traffic statistics collected for each sampling interval. The sampling interval is user-configurable.
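The bandwidth utilization figure recorded by the history group can be understood as the bytes sampled in one interval converted to a percentage of link capacity. The function below shows one plausible way to compute it; the exact formula the device uses is not specified in this guide, so treat the sketch as illustrative.

```python
# Illustrative bandwidth utilization calculation for one history bucket.
# octets: bytes counted in the bucket; interval_s: sampling interval in
# seconds; if_speed_bps: link speed in bits per second.

def utilization_pct(octets: int, interval_s: int, if_speed_bps: int) -> float:
    bits = octets * 8
    return 100.0 * bits / (interval_s * if_speed_bps)

# 1,250,000 bytes in a 10-second bucket on a 10 Mb/s link
print(utilization_pct(1_250_000, 10, 10_000_000))  # -> 10.0 (percent)
```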
Event group
The event group defines event indexes and controls the generation and notifications of the events
triggered by the alarms defined in the alarm group and the private alarm group. The events can be
handled in one of the following ways:
•   Log—Logs event information (including event name and description) in the event log table of the RMON MIB, so the management device can get the logs through the SNMP Get operation.
•   Trap—Sends a trap to notify an NMS of the event.
•   Log-Trap—Logs event information in the event log table and sends a trap to the NMS.
•   None—No action.
Alarm group
The RMON alarm group monitors alarm variables, such as the count of incoming packets (etherStatsPkts)
on an interface. After you define an alarm entry, the system gets the value of the monitored alarm
variable at the specified interval. If the value of the monitored variable is greater than or equal to the
rising threshold, a rising event is triggered. If the value of the monitored variable is smaller than or equal
to the falling threshold, a falling event is triggered. The event is then handled as defined in the event
group.
If an alarm entry crosses a threshold multiple times in succession, the RMON agent generates an alarm
event only for the first crossing. For example, if the value of a sampled alarm variable crosses the rising
threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm
event, as shown in Figure 39.
Figure 39 Rising and falling alarm events
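The alternating behavior in Figure 39 can be sketched as a small state machine: after a rising event fires, further rising crossings are suppressed until the variable crosses the falling threshold, and vice versa. This is a simplified model (real RMON alarm entries also have startup-alarm and interval settings not shown here):

```python
# Simplified rising/falling alarm hysteresis: events alternate, so repeated
# crossings of the same threshold trigger only one event.

def alarm_events(samples, rising, falling):
    events = []
    armed = "rising"                      # which event may fire next
    for i, v in enumerate(samples):
        if armed == "rising" and v >= rising:
            events.append((i, "rising"))
            armed = "falling"
        elif armed == "falling" and v <= falling:
            events.append((i, "falling"))
            armed = "rising"
    return events

# 85, 90, 88 all exceed the rising threshold, but only the first crossing
# (index 1) triggers a rising event; index 5 fires again only because the
# falling threshold was crossed at index 4.
print(alarm_events([10, 85, 90, 88, 20, 95], rising=80, falling=30))
```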
Private alarm group
The private alarm group calculates the values of alarm variables and compares the results with the
defined threshold for a more comprehensive alarming function.
The system handles the private alarm entry (as defined by the user) in the following ways:
•   Periodically samples the private alarm variables defined in the private alarm formula.
•   Calculates the sampled values based on the private alarm formula.
•   Compares the result with the defined threshold and generates an appropriate event if the threshold value is reached.
If a private alarm entry crosses a threshold multiple times in succession, the RMON agent generates an alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses the rising threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm event. In other words, rising alarms and falling alarms alternate.
Configuring the RMON statistics function
The RMON statistics function can be implemented by either the Ethernet statistics group or the history
group, but the objects of the statistics are different, as follows:
•	A statistics object of the Ethernet statistics group is a variable defined in the Ethernet statistics table, and the recorded content is a cumulative sum of the variable from the time the statistics entry is created to the current time. For more information, see "Configuring the RMON Ethernet statistics function."
•	A statistics object of the history group is a variable defined in the history record table, and the recorded content is a cumulative sum of the variable in each sampling period. For more information, see "Configuring the RMON history statistics function."
Configuring the RMON Ethernet statistics function
Step 1: Enter system view.
Command: system-view

Step 2: Enter Ethernet interface view.
Command: interface interface-type interface-number

Step 3: Create an entry in the RMON statistics table.
Command: rmon statistics entry-number [ owner text ]
You can create one statistics entry for each interface, and up to 100 statistics entries on the device. After
the entry limit is reached, you cannot add new entries.
Configuring the RMON history statistics function
Follow these guidelines when you configure the RMON history statistics function:
•	The entry-number for an RMON history control entry must be globally unique. If an entry number has been used on one interface, it cannot be used on another.
•	You can configure multiple history control entries for one interface, but you must make sure their entry numbers and sampling intervals are different.
•	The device supports up to 100 history control entries.
•	You can successfully create a history control entry even if the specified bucket size exceeds the history table size supported by the device. However, the effective bucket size will be the actual value supported by the device.
To configure the RMON history statistics function:
Step 1: Enter system view.
Command: system-view

Step 2: Enter Ethernet interface view.
Command: interface interface-type interface-number

Step 3: Create an entry in the RMON history control table.
Command: rmon history entry-number buckets number interval sampling-interval [ owner text ]
Configuring the RMON alarm function
Follow these guidelines when you configure the RMON alarm function:
•	To send traps to the NMS when an alarm is triggered, configure the SNMP agent as described in "Configuring SNMP" before configuring the RMON alarm function.
•	If the alarm variable is a MIB variable defined in the history group or the Ethernet statistics group, make sure the RMON Ethernet statistics function or the RMON history statistics function is configured on the monitored Ethernet interface. Otherwise, even if you can create the alarm entry, no alarm event can be triggered.
•	You cannot create a new event, alarm, or private alarm entry that has the same set of parameters as an existing entry. For the parameters to be compared for duplication, see Table 3.
•	After the maximum number of entries is reached, no new entry can be created. For the table entry limits, see Table 3.
To configure the RMON alarm function:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Create an event entry in the event table.
Command: rmon event entry-number [ description string ] { log | log-trap log-trapcommunity | none | trap trap-community } [ owner text ]
Remarks: N/A

Step 3: Create an entry in the alarm table or private alarm table. Use at least one command.
•	Create an entry in the alarm table:
	rmon alarm entry-number alarm-variable sampling-interval { absolute | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 [ owner text ]
•	Create an entry in the private alarm table:
	rmon prialarm entry-number prialarm-formula prialarm-des sampling-interval { absolute | changeratio | delta } rising-threshold threshold-value1 event-entry1 falling-threshold threshold-value2 event-entry2 entrytype { forever | cycle cycle-period } [ owner text ]
Table 3 RMON configuration restrictions

Entry: Event
Parameters to be compared: Event description (description string), event type (log, trap, log-trap, or none), and community name (trap-community or log-trapcommunity).
Maximum number of entries: 60

Entry: Alarm
Parameters to be compared: Alarm variable (alarm-variable), sampling interval (sampling-interval), sampling type (absolute or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
Maximum number of entries: 60

Entry: Prialarm
Parameters to be compared: Alarm variable formula (prialarm-formula), sampling interval (sampling-interval), sampling type (absolute, changeratio, or delta), rising threshold (threshold-value1), and falling threshold (threshold-value2).
Maximum number of entries: 50
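The duplicate-entry and table-size restrictions above amount to a uniqueness check on a parameter tuple plus a size limit. A minimal Python sketch (the helper name and sample values are invented for illustration):

```python
def can_create(existing_entries, params, limit):
    """An entry is rejected if its parameter tuple duplicates an
    existing entry, or if the table has reached its entry limit."""
    return params not in existing_entries and len(existing_entries) < limit

# One alarm entry already exists, keyed by
# (variable, sampling interval, sampling type, rising threshold, falling threshold).
alarms = {("1.3.6.1.2.1.16.1.1.1.4.1", 5, "delta", 100, 50)}

# Identical parameter set: rejected as a duplicate.
print(can_create(alarms, ("1.3.6.1.2.1.16.1.1.1.4.1", 5, "delta", 100, 50), 60))     # False
# Same variable but a different sampling type: accepted.
print(can_create(alarms, ("1.3.6.1.2.1.16.1.1.1.4.1", 5, "absolute", 100, 50), 60))  # True
```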
Displaying and maintaining RMON
All of the following display commands are available in any view:

•	Display RMON statistics:
	display rmon statistics [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
•	Display the RMON history control entry and history sampling information:
	display rmon history [ interface-type interface-number ] [ | { begin | exclude | include } regular-expression ]
•	Display RMON alarm configuration:
	display rmon alarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
•	Display RMON private alarm configuration:
	display rmon prialarm [ entry-number ] [ | { begin | exclude | include } regular-expression ]
•	Display RMON event configuration:
	display rmon event [ entry-number ] [ | { begin | exclude | include } regular-expression ]
•	Display log information for event entries:
	display rmon eventlog [ entry-number ] [ | { begin | exclude | include } regular-expression ]
Ethernet statistics group configuration example
Network requirements
Configure the RMON statistics group on the RMON agent in Figure 40 to gather cumulative traffic
statistics for GigabitEthernet 2/0/1.
Figure 40 Network diagram
Configuration procedure
# Configure the RMON statistics group on the RMON agent to gather statistics for GigabitEthernet
2/0/1.
<Sysname> system-view
[Sysname] interface gigabitethernet 2/0/1
[Sysname-GigabitEthernet2/0/1] rmon statistics 1 owner user1
# Display statistics collected by the RMON agent for GigabitEthernet 2/0/1.
<Sysname> display rmon statistics gigabitethernet 2/0/1
EtherStatsEntry 1 owned by user1-rmon is VALID.
Interface : GigabitEthernet2/0/1<ifIndex.3>
etherStatsOctets         : 21657     , etherStatsPkts          : 307
etherStatsBroadcastPkts  : 56        , etherStatsMulticastPkts : 34
etherStatsUndersizePkts  : 0         , etherStatsOversizePkts  : 0
etherStatsFragments      : 0         , etherStatsJabbers       : 0
etherStatsCRCAlignErrors : 0         , etherStatsCollisions    : 0
etherStatsDropEvents (insufficient resources): 0
Packets received according to length:
64     : 235       ,  65-127  : 67        ,  128-255  : 4
256-511: 1         ,  512-1023: 0         ,  1024-1518: 0
# On the configuration terminal, get the traffic statistics through SNMP. (Details not shown.)
History group configuration example
Network requirements
Configure the RMON history group on the RMON agent in Figure 41 to gather periodic traffic statistics
for GigabitEthernet 2/0/1 every minute.
Figure 41 Network diagram
Configuration procedure
# Configure the RMON history group on the RMON agent to gather traffic statistics every 1 minute for
GigabitEthernet 2/0/1. Retain up to eight records for the interface in the history statistics table.
<Sysname> system-view
[Sysname] interface gigabitethernet 2/0/1
[Sysname-GigabitEthernet2/0/1] rmon history 1 buckets 8 interval 60 owner user1
# Display the history data collected for GigabitEthernet 2/0/1.
[Sysname-GigabitEthernet2/0/1] display rmon history
HistoryControlEntry 2 owned by null is VALID
Samples interface : GigabitEthernet2/0/1<ifIndex.3>
Sampled values of record 1 :
  dropevents        : 0   , octets               : 834
  packets           : 8   , broadcast packets    : 1
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 2 :
  dropevents        : 0   , octets               : 962
  packets           : 10  , broadcast packets    : 3
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 3 :
  dropevents        : 0   , octets               : 830
  packets           : 8   , broadcast packets    : 0
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 4 :
  dropevents        : 0   , octets               : 933
  packets           : 8   , broadcast packets    : 0
  multicast packets : 7   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 5 :
  dropevents        : 0   , octets               : 898
  packets           : 9   , broadcast packets    : 2
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 6 :
  dropevents        : 0   , octets               : 898
  packets           : 9   , broadcast packets    : 2
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 7 :
  dropevents        : 0   , octets               : 766
  packets           : 7   , broadcast packets    : 0
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
Sampled values of record 8 :
  dropevents        : 0   , octets               : 1154
  packets           : 13  , broadcast packets    : 1
  multicast packets : 6   , CRC alignment errors : 0
  undersize packets : 0   , oversize packets     : 0
  fragments         : 0   , jabbers              : 0
  collisions        : 0   , utilization          : 0
# On the configuration terminal, get the traffic statistics through SNMP. (Details not shown.)
Alarm group configuration example
Network requirements
Configure the RMON alarm group on the RMON agent in Figure 42 to send alarms in traps when the
5-second incoming traffic statistic on GigabitEthernet 2/0/1 crosses the rising threshold or drops below
the falling threshold.
Figure 42 Network diagram
Configuration procedure
# Configure the SNMP agent with the same SNMP settings as the NMS at 1.1.1.2. This example uses
SNMPv1, read community public, and write community private.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname
public
# Configure the RMON statistics group to gather traffic statistics for GigabitEthernet 2/0/1.
[Sysname] interface gigabitethernet 2/0/1
[Sysname-GigabitEthernet2/0/1] rmon statistics 1 owner user1
[Sysname-GigabitEthernet2/0/1] quit
# Create an RMON event entry and an RMON alarm entry so the RMON agent sends traps when the
delta sampling value of node 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or drops below 50.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1
falling-threshold 50 1
# Display the RMON alarm entry configuration.
<Sysname> display rmon alarm 1
AlarmEntry 1 owned by null is Valid.
Samples type         : delta
Variable formula     : 1.3.6.1.2.1.16.1.1.1.4.1<etherStatsOctets.1>
Sampling interval    : 5(sec)
Rising threshold     : 100(linked with event 1)
Falling threshold    : 50(linked with event 2)
When startup enables : risingOrFallingAlarm
Latest value         : 0
# Display statistics for GigabitEthernet 2/0/1.
<Sysname> display rmon statistics gigabitethernet 2/0/1
EtherStatsEntry 1 owned by user1-rmon is VALID.
Interface : GigabitEthernet2/0/1<ifIndex.3>
etherStatsOctets         : 57329     , etherStatsPkts          : 455
etherStatsBroadcastPkts  : 53        , etherStatsMulticastPkts : 353
etherStatsUndersizePkts  : 0         , etherStatsOversizePkts  : 0
etherStatsFragments      : 0         , etherStatsJabbers       : 0
etherStatsCRCAlignErrors : 0         , etherStatsCollisions    : 0
etherStatsDropEvents (insufficient resources): 0
Packets received according to length:
64     : 7         ,  65-127  : 413       ,  128-255  : 35
256-511: 0         ,  512-1023: 0         ,  1024-1518: 0
# Query alarm events on the NMS. (Details not shown.)
On the RMON agent, alarm event messages are displayed when events occur. The following is a sample
output:
[Sysname]
#Aug 27 16:31:34:12 2005 Sysname RMON/2/ALARMFALL:Trap 1.3.6.1.2.1.16.0.2 Alarm table 1
monitors 1.3.6.1.2.1.16.1.1.1.4.1 with sample type 2,has sampled alarm value 0 less than(or
=) 50.
Configuring sampler
Overview
A sampler samples packets. The sampler selects a packet from among sequential packets, and it sends
the packet to the service module for processing.
The following sampling modes are available:
•	Fixed mode—The first packet is selected from among sequential packets in each sampling.
•	Random mode—Any packet might be selected from among sequential packets in each sampling.
A sampler can be used to sample packets for NetStream. Only the sampled packets are sent to and
processed by the traffic monitoring module. Sampling is useful if you have too much traffic and want to
limit how much traffic is to be analyzed. The sampled data is statistically accurate and sampling
decreases the impact on the forwarding capacity of the device.
For more information about NetStream, see "Configuring NetStream."
Creating a sampler
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Create a sampler.
Command: sampler sampler-name mode { fixed | random } packet-interval rate
Remarks: The sampling rate is calculated by using the formula 2 to the nth power, where n is the rate. For example, if the rate is 8, each sampling selects one packet from among 256 packets (2 to the 8th power); if the rate is 10, each sampling selects one packet from among 1024 packets (2 to the 10th power).
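As a quick check of the formula above, a one-line Python helper (illustrative only) maps the rate argument to the number of packets per sampling:

```python
def packets_per_sample(rate):
    """Number of sequential packets from which one packet is selected.

    A rate of n means one packet is picked out of every 2**n packets.
    """
    return 2 ** rate

print(packets_per_sample(8))   # 256
print(packets_per_sample(10))  # 1024
```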
Displaying and maintaining a sampler
•	Display configuration and running information for the sampler (available in any view):
	display sampler [ sampler-name ] [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
•	Clear running information for the sampler (available in user view):
	reset sampler statistics [ sampler-name ]
Sampler configuration example
Network requirements
As shown in Figure 43, configure IPv4 NetStream on Router A to collect statistics on incoming and
outgoing traffic on GigabitEthernet 2/0/0. The NetStream data is sent to port 5000 on the NSC at
12.110.2.2/16. Do the following:
•	Configure fixed sampling in the inbound direction to select the first packet from among 256 packets.
•	Configure random sampling in the outbound direction to select one packet randomly from among 1024 packets.
Figure 43 Network diagram
Configuration procedure
# Create sampler 256 in fixed sampling mode, and set the rate to 8. The first packet of 256 (2 to the 8th
power) packets is selected.
<RouterA> system-view
[RouterA] sampler 256 mode fixed packet-interval 8
# Create sampler 1024 in random sampling mode, and set the sampling rate to 10. One packet from
among 1024 (2 to the 10th power) packets is selected.
[RouterA] sampler 1024 mode random packet-interval 10
# Configure GigabitEthernet 2/0/0, enable IPv4 NetStream to collect statistics about the incoming
traffic, and then configure the interface to use sampler 256.
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ip address 11.110.2.1 255.255.0.0
[RouterA-GigabitEthernet2/0/0] ip netstream inbound
[RouterA-GigabitEthernet2/0/0] ip netstream sampler 256 inbound
[RouterA-GigabitEthernet2/0/0] quit
# Configure interface GigabitEthernet 2/0/1, enable IPv4 NetStream to collect statistics about outgoing
traffic, and then configure the interface to use sampler 1024.
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ip address 12.110.2.1 255.255.0.0
[RouterA-GigabitEthernet2/0/1] ip netstream outbound
[RouterA-GigabitEthernet2/0/1] ip netstream sampler 1024 outbound
[RouterA-GigabitEthernet2/0/1] quit
# Configure the address and port number of NSC as the destination host for the NetStream data export,
leaving the default for the source interface.
[RouterA] ip netstream export host 12.110.2.2 5000
Verifying the configuration
# Execute the display sampler command on Router A to view the configuration and running information
about sampler 256. The output shows that Router A received and processed 256 packets, which reached
the number of packets for one sampling, and Router A selected the first packet from among the 256
packets received on GigabitEthernet 2/0/0.
<RouterA> display sampler 256
Sampler name: 256
Index: 1,
Mode: Fixed,
Packet counter: 0,
Packet-interval: 8
Random number: 1
Total packet number (processed/selected): 256/1
# Execute the display sampler command on Router A to view the configuration and running information
about sampler 1024. The output shows that Router A processed and sent out 1024 packets, which
reached the number of packets for one sampling, and Router A randomly selected one packet from
among the 1024 packets sent out of GigabitEthernet 2/0/1.
<RouterA> display sampler 1024
Sampler name: 1024
Index: 2,
Mode: Random,
Packet counter: 0,
Packet-interval: 10
Random number: 370
Total packet number (processed/selected): 1024/1
Configuring port mirroring
Overview
Port mirroring refers to copying packets that are passing through a port to a monitor port that is
connected to a monitoring device for packet analysis.
Port mirroring terminology
Mirroring source
The mirroring source can be one or more monitored ports. Packets (called "mirrored packets") passing
through them are copied to a port that is connected to a monitoring device for packet analysis. This type
of port is called a "source port" and the device where the mirroring source resides is called a "source
device."
Mirroring destination
The mirroring destination is the destination port (also known as the monitor port) of mirrored packets. It
connects to the data monitoring device. The device where the monitor port resides is called the
"destination device." The monitor port forwards mirrored packets to its connected monitoring device.
A monitor port can receive multiple duplicates of a packet in some cases because it can monitor multiple
mirroring sources. For example, assume that Port 1 is monitoring bidirectional traffic on Port 2 and Port
3 on the same device. If a packet travels from Port 2 to Port 3, two duplicates of the packet will be
received on Port 1.
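The two-duplicates scenario above can be modeled with a short Python sketch (an illustration of the counting logic only, not device software; the port names are hypothetical):

```python
def mirrored_copies(path, monitored_ports):
    """Count the copies a monitor port receives for one packet.

    `path` is (ingress_port, egress_port). With bidirectional
    monitoring, the monitor port gets one copy for each monitored
    port the packet passes through.
    """
    return sum(1 for port in path if port in monitored_ports)

# Port 1 monitors bidirectional traffic on Port 2 and Port 3.
# A packet entering on Port 2 and leaving on Port 3 is mirrored twice.
print(mirrored_copies(("Port 2", "Port 3"), {"Port 2", "Port 3"}))  # 2
# The same packet leaving on an unmonitored port is mirrored once.
print(mirrored_copies(("Port 2", "Port 4"), {"Port 2", "Port 3"}))  # 1
```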
Mirroring direction
The mirroring direction specifies which traffic is copied on a mirroring source:
•	Inbound—Copies packets received on a mirroring source.
•	Outbound—Copies packets sent out of a mirroring source.
•	Bidirectional—Copies packets both received on and sent out of a mirroring source.
Mirroring group
Port mirroring is implemented through mirroring groups. Mirroring groups include local, remote source,
and remote destination mirroring groups. For more information about mirroring groups, see "Port
mirroring classification and implementation."
Egress port and remote probe VLAN
A remote probe VLAN and an egress port are used for Layer 2 remote port mirroring. The remote probe
VLAN specially transmits mirrored packets to the destination device. The egress port resides on a source
device and sends mirrored packets to the remote probe VLAN. For more information about the egress
port, remote probe VLAN, and Layer 2 remote port mirroring, see "Port mirroring classification and
implementation."
On port mirroring devices, ports other than the source, monitor, and egress ports are called common ports.
Port mirroring classification and implementation
Port mirroring includes local port mirroring and remote port mirroring based on whether the mirroring
source and the mirroring destination are on the same device.
Local port mirroring
In local port mirroring, the mirroring source and mirroring destination are on the same device, and the
source device is directly connected to the data monitoring device and can act as the destination device
to forward mirrored packets to the data monitoring device. A mirroring group that contains the mirroring
source and the mirroring destination on the device is called a "local mirroring group."
Figure 44 Local port mirroring implementation
As shown in Figure 44, configure local port mirroring to copy inbound packets on the source port
Ethernet 1/1 to the monitor port Ethernet 1/2, which then forwards the packets to the data monitoring
device for analysis.
Layer 2 remote port mirroring
In remote port mirroring, the source device is not directly connected to the data monitoring device. The
source device copies mirrored packets to the destination device, which forwards them to the data
monitoring device. The mirroring source and the mirroring destination are on different devices and in
different mirroring groups. The mirroring group that contains the mirroring source or the mirroring
destination is called a "remote source group" or "remote destination group," respectively. The devices
between the source devices and the destination device are intermediate devices.
In Layer 2 remote port mirroring, the mirroring source and the mirroring destination are located on
different devices on the same Layer 2 network.
Figure 45 Layer 2 remote port mirroring implementation
As shown in Figure 45, the source device copies packets received on the source port Ethernet 1/1 to the
egress port Ethernet 1/2. The egress port forwards the packets to the intermediate devices, which then
broadcast the packets in the remote probe VLAN and transmit the packets to the destination device.
Upon receiving the mirrored packets, the destination device checks whether their VLAN IDs are the same
as the remote probe VLAN ID. If yes, the device forwards the packets to the data monitoring device
through the monitor port Ethernet 1/2.
To make sure the source device and the destination device can communicate at Layer 2 through the
remote probe VLAN, assign the intermediate devices' ports along the path between the source and
destination devices to the remote probe VLAN.
To monitor both the received and sent packets of a port in a remote mirroring group, you must make some
special configurations on the intermediate devices.
Configuring local port mirroring
Local port mirroring configuration task list
The following matrix shows the feature and router compatibility:
•	6602: Local port mirroring is supported.
•	HSR6602: Supported only on fixed interfaces.
•	6604/6608/6616: Supported only when SAP modules are operating in bridge mode.
Local port mirroring takes effect only when the source ports and the monitor port are configured.
A port can belong to only one mirroring group. However, on devices that support mirroring groups with
multiple monitor ports, a port can serve as a source port for multiple mirroring groups, but cannot be an
egress port or monitor port at the same time.
Complete these tasks to configure local port mirroring:
•	Creating a local mirroring group (required).
•	Configuring source ports for the local mirroring group (required).
•	Configuring the monitor port for the local mirroring group (required).
Creating a local mirroring group
To create a local mirroring group:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Create a local mirroring group.
Command: mirroring-group group-id local
Remarks: No local mirroring group exists by default.
Configuring source ports for the local mirroring group
CAUTION:
Do not assign a source port to a source VLAN.
You can configure a list of source ports for a mirroring group in system view, or assign the current port
to the mirroring group as a source port in interface view. The two configuration modes lead to the same
result.
Configuring source ports in system view
To configure source ports for a local mirroring group in system view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure source ports.
Command: mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }
Remarks: By default, no source port is configured for a local mirroring group.
Configuring a source port in interface view
To configure a source port for a local mirroring group in interface view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A

Step 3: Configure the current port as a source port.
Command: [ mirroring-group group-id ] mirroring-port { both | inbound | outbound }
Remarks: By default, a port does not serve as a source port for any mirroring group.
Configuring the monitor port for the local mirroring group
CAUTION:
Do not enable the spanning tree feature on the monitor port.
You can configure the monitor port for a mirroring group in system view, or assign the current port to a
mirroring group as the monitor port in interface view. The two methods lead to the same result.
Configuration restrictions and guidelines
•	A mirroring group contains only one monitor port.
•	HP recommends that you use a monitor port for port mirroring only. This is to make sure the data monitoring device receives and analyzes only the mirrored traffic rather than a mix of mirrored traffic and correctly forwarded traffic.
Configuring the monitor port in system view
To configure the monitor port of a local mirroring group in system view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the monitor port.
Command: mirroring-group group-id monitor-port monitor-port-id
Remarks: By default, no monitor port is configured for a local mirroring group.
Configuring the monitor port in interface view
To configure the monitor port of a local mirroring group in interface view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A

Step 3: Configure the current port as the monitor port.
Command: [ mirroring-group group-id ] monitor-port
Remarks: By default, a port does not serve as the monitor port for any local mirroring group.
Configuring Layer 2 remote port mirroring
The following matrix shows the feature and router compatibility:
•	6602: Layer 2 remote port mirroring is not supported.
•	HSR6602: Not supported.
•	6604/6608/6616: Supported only when SAP modules are operating in bridge mode.
Configuration task list
To configure Layer 2 remote port mirroring, configure remote mirroring groups. When doing that,
configure the remote source group on the source device, and configure the cooperating remote
destination group on the destination device. If an intermediate device exists, configure the intermediate
devices to allow the remote probe VLAN to pass through.
When you configure Layer 2 remote port mirroring, follow these guidelines:
•	A port can belong to only one mirroring group. However, on devices that support mirroring groups with multiple monitor ports, a port can serve as a source port for multiple mirroring groups, but cannot be an egress port or monitor port at the same time.
•	HP recommends not enabling GVRP. If you enable GVRP, GVRP might register the remote probe VLAN to unexpected ports, resulting in undesired duplicates. For more information about GVRP, see Layer 2—LAN Switching Configuration Guide.
First, configure the source ports, the egress port, and the remote probe VLAN for the remote source group
on the source device. Then, configure the remote probe VLAN and the monitor port for the remote
destination group on the destination device.
Complete these tasks to configure Layer 2 remote port mirroring:

Configuring a remote source group:
•	Creating a remote source group (required).
•	Configuring source ports for the remote source group (required).
•	Configuring the egress port for the remote source group (required).
•	Configuring the remote probe VLAN for the remote source group (required).

Configuring a remote destination group:
•	Creating a remote destination group (required).
•	Configuring the monitor port for the remote destination group (required).
•	Configuring the remote probe VLAN for the remote destination group (required).
•	Assigning the monitor port to the remote probe VLAN (required).
Configuring a remote source group
Creating a remote source group
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Create a remote source group.
Command: mirroring-group group-id remote-source
Remarks: By default, no remote source group exists on a device.
Configuring source ports for the remote source group
CAUTION:
• A mirroring group can contain multiple source ports.
• Do not assign a source port to the remote probe VLAN.
You can configure a list of source ports for a mirroring group in system view, or assign the current port
as a source port in interface view. The two configuration modes lead to the same result.
To configure source ports for the remote source group in system view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure source ports for the remote source group.
Command: mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }
Remarks: By default, no source port is configured for a remote source group.
To configure a source port for the remote source group in interface view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A

Step 3: Configure the current port as a source port.
Command: [ mirroring-group group-id ] mirroring-port { both | inbound | outbound }
Remarks: By default, a port does not serve as a source port for any remote source group.
Configuring the egress port for the remote source group
You can configure the egress port for a mirroring group in system view, or assign the current port to the
mirroring group as the egress port in interface view. The two configuration modes lead to the same
result.
When you configure the egress port for the remote source group, follow these guidelines:
•	Disable these functions on the egress port: the spanning tree feature, 802.1X, IGMP snooping, static ARP, and MAC address learning.
•	A mirroring group contains only one egress port.
•	A port of an existing mirroring group cannot be configured as an egress port.
To configure the egress port for the remote source group in system view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the egress port for the remote source group.
Command: mirroring-group group-id monitor-egress monitor-egress-port
Remarks: By default, no egress port is configured for a remote source group.
To configure the egress port for the remote source group in interface view:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A

Step 3: Configure the current port as the egress port.
Command: mirroring-group group-id monitor-egress
Remarks: By default, a port does not serve as the egress port for any remote source group.
Configuring the remote probe VLAN for the remote source group
Before configuring a remote probe VLAN, create a static VLAN that serves as the remote probe VLAN for
the remote source group.
When you configure the remote probe VLAN for the remote source group, follow these guidelines:
•	When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring exclusively.
•	The remote mirroring groups on the source device and destination device must use the same remote probe VLAN.
To configure the remote probe VLAN for the remote source group:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the remote probe VLAN.
Command: mirroring-group group-id remote-probe vlan rprobe-vlan-id
Remarks: By default, no remote probe VLAN is configured for a mirroring group.
Configuring a remote destination group
To configure a remote destination group, make the following configurations on the destination device:
Creating a remote destination group
To create a remote destination group:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Create a remote destination group.
Command: mirroring-group group-id remote-destination
Remarks: By default, no remote destination group exists on a device.
Configuring the monitor port for the remote destination group
You can configure the monitor port for a mirroring group in system view, or assign the current port to a mirroring group as the monitor port in interface view. The two methods lead to the same result.
When you configure the monitor port for the remote destination group, follow these guidelines:
•   Do not enable the spanning tree feature on the monitor port.
•   HP recommends that you use a monitor port only for port mirroring. This makes sure that the data monitoring device receives and analyzes only the mirrored traffic rather than a mix of mirrored traffic and correctly forwarded traffic.
•   A mirroring group contains only one monitor port.
To configure the monitor port for the remote destination group in system view:
1.  Enter system view.
    Command: system-view
2.  Configure the monitor port.
    Command: mirroring-group group-id monitor-port monitor-port-id
    By default, no monitor port is configured for a remote destination group.
To configure the monitor port for the remote destination group in interface view:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  Configure the current port as the monitor port.
    Command: [ mirroring-group group-id ] monitor-port
    By default, a port does not serve as the monitor port for any remote destination group.
Configuring the remote probe VLAN for the remote destination group
When you configure the remote probe VLAN for the remote destination group, follow these guidelines:
•   When a VLAN is configured as a remote probe VLAN, use it for port mirroring exclusively.
•   Configure the same remote probe VLAN for the remote destination group on the source device and the destination device.
To configure the remote probe VLAN for the remote destination group:
1.  Enter system view.
    Command: system-view
2.  Configure the remote probe VLAN.
    Command: mirroring-group group-id remote-probe vlan rprobe-vlan-id
    By default, no remote probe VLAN is configured for a remote destination group.
Assigning the monitor port to the remote probe VLAN
To assign the monitor port to the remote probe VLAN:
1.  Enter system view.
    Command: system-view
2.  Enter the interface view of the monitor port.
    Command: interface interface-type interface-number
3.  Assign the port to the probe VLAN. Use one of the following commands:
    •   For an access port: port access vlan vlan-id
    •   For a trunk port: port trunk permit vlan vlan-id
    •   For a hybrid port: port hybrid vlan vlan-id { tagged | untagged }
    For more information about the port access vlan, port trunk permit vlan, and port hybrid vlan commands, see Layer 2—LAN Switching Command Reference.
Displaying and maintaining port mirroring
To display mirroring group information, use the following command (available in any view):
display mirroring-group { group-id | all | local | remote-destination | remote-source } [ | { begin | exclude | include } regular-expression ]
Port mirroring configuration examples
Local port mirroring configuration example
Network requirements
As shown in Figure 46:
•   Router A connects to the marketing department through GigabitEthernet 2/0/1, to the technical department through GigabitEthernet 2/0/2, and to the server through GigabitEthernet 2/0/3.
•   Configure local port mirroring in source port mode to enable the server to monitor the bidirectional traffic of the marketing department and the technical department.
Figure 46 Network diagram
Configuration procedure
# Create local mirroring group 1.
<RouterA> system-view
[RouterA] mirroring-group 1 local
# Configure GigabitEthernet 2/0/1 and GigabitEthernet 2/0/2 as source ports, and configure
GigabitEthernet 2/0/3 as the monitor port.
[RouterA] mirroring-group 1 mirroring-port gigabitethernet 2/0/1 gigabitethernet 2/0/2
both
[RouterA] mirroring-group 1 monitor-port gigabitethernet 2/0/3
# Disable the spanning tree feature on the monitor port GigabitEthernet 2/0/3.
[RouterA] interface gigabitethernet 2/0/3
[RouterA-GigabitEthernet2/0/3] undo stp enable
[RouterA-GigabitEthernet2/0/3] quit
Verifying the configuration
# Display the configuration of all mirroring groups.
[RouterA] display mirroring-group all
mirroring-group 1:
type: local
status: active
mirroring port:
GigabitEthernet2/0/1
both
GigabitEthernet2/0/2
both
monitor port: GigabitEthernet2/0/3
After the configurations are completed, you can monitor all packets received and sent by the marketing
department and the technical department on the server.
Layer 2 remote port mirroring configuration example
Network requirements
As shown in Figure 47, configure Layer 2 remote port mirroring to enable the server to monitor the
bidirectional traffic of the marketing department.
Figure 47 Network diagram
Configuration procedure
1.  Configure Router A (the source device):
# Create a remote source group.
<RouterA> system-view
[RouterA] mirroring-group 1 remote-source
# Create VLAN 2.
[RouterA] vlan 2
[RouterA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN, GigabitEthernet 2/0/1 as a source port in the
mirroring group, and GigabitEthernet 2/0/2 as the egress port.
[RouterA] mirroring-group 1 remote-probe vlan 2
[RouterA] mirroring-group 1 mirroring-port gigabitethernet 2/0/1 both
[RouterA] mirroring-group 1 monitor-egress gigabitethernet 2/0/2
# Configure port GigabitEthernet 2/0/2 as a trunk port to permit the packets of VLAN 2 to pass
through.
[RouterA] interface gigabitethernet 2/0/2
[RouterA-GigabitEthernet2/0/2] port link-type trunk
[RouterA-GigabitEthernet2/0/2] port trunk permit vlan 2
[RouterA-GigabitEthernet2/0/2] quit
2.  Configure Router B (the intermediate device):
# Create VLAN 2.
<RouterB> system-view
[RouterB] vlan 2
[RouterB-vlan2] quit
# Configure GigabitEthernet 2/0/1 as a trunk port that permits the packets of VLAN 2 to pass
through.
[RouterB] interface gigabitethernet 2/0/1
[RouterB-GigabitEthernet2/0/1] port link-type trunk
[RouterB-GigabitEthernet2/0/1] port trunk permit vlan 2
[RouterB-GigabitEthernet2/0/1] quit
# Configure GigabitEthernet 2/0/2 as a trunk port that permits the packets of VLAN 2 to pass
through.
[RouterB] interface gigabitethernet 2/0/2
[RouterB-GigabitEthernet2/0/2] port link-type trunk
[RouterB-GigabitEthernet2/0/2] port trunk permit vlan 2
[RouterB-GigabitEthernet2/0/2] quit
3.  Configure Router C (the destination device):
# Configure GigabitEthernet 2/0/1 as a trunk port that permits the packets of VLAN 2 to pass
through.
<RouterC> system-view
[RouterC] interface gigabitethernet 2/0/1
[RouterC-GigabitEthernet2/0/1] port link-type trunk
[RouterC-GigabitEthernet2/0/1] port trunk permit vlan 2
[RouterC-GigabitEthernet2/0/1] quit
# Create a remote destination group.
[RouterC] mirroring-group 1 remote-destination
# Create VLAN 2.
[RouterC] vlan 2
[RouterC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN, configure GigabitEthernet 2/0/2 as the
monitor port in the mirroring group, disable the spanning tree feature on GigabitEthernet 2/0/2,
and assign the port to VLAN 2.
[RouterC] mirroring-group 1 remote-probe vlan 2
[RouterC] interface gigabitethernet 2/0/2
[RouterC-GigabitEthernet2/0/2] mirroring-group 1 monitor-port
[RouterC-GigabitEthernet2/0/2] undo stp enable
[RouterC-GigabitEthernet2/0/2] port access vlan 2
[RouterC-GigabitEthernet2/0/2] quit
Verifying the configuration
After the configurations are completed, you can monitor all packets received and sent by the marketing
department on the server.
Configuring traffic mirroring
This feature is supported only when SAP modules are operating in bridge mode.
Overview
Traffic mirroring copies specified packets to a specific destination for packet analysis and monitoring.
Traffic mirroring is implemented through QoS policies: you define traffic classes and configure match criteria to classify the packets to be mirrored, and then you configure traffic behaviors to mirror the packets that match the criteria to the specified destination. Traffic mirroring lets you flexibly classify packets by defining match criteria and obtain accurate statistics.
You can mirror traffic to the following destinations:
•   Interface—Copies the matching packets to a destination interface.
•   CPU—Copies the matching packets to the CPU of the card where the interfaces configured with traffic mirroring reside.
When you mirror outgoing traffic to an interface, only known unicast traffic is supported; broadcast, multicast, and unknown unicast traffic is not.
For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS
Configuration Guide.
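The class/behavior/policy chain described above can be pictured with a small model. The following Python sketch is only an illustration of the relationship, not the device implementation; the class names and packet fields are assumptions:

```python
# Simplified model of how a QoS policy drives traffic mirroring:
# a class holds match criteria, a behavior holds the mirror action,
# and a policy binds classes to behaviors.
class TrafficClass:
    def __init__(self, name, match):
        self.name = name
        self.match = match          # predicate over a packet dict

class TrafficBehavior:
    def __init__(self, name, mirror_to):
        self.name = name
        self.mirror_to = mirror_to  # destination interface or "cpu"

class QosPolicy:
    def __init__(self, name):
        self.name = name
        self.bindings = []          # (class, behavior) pairs

    def classifier_behavior(self, tcl, behavior):
        self.bindings.append((tcl, behavior))

    def apply(self, packet):
        """Return mirror destinations for every class the packet matches."""
        return [b.mirror_to for c, b in self.bindings if c.match(packet)]

# Mirror HTTP traffic from 192.168.2.0/24 to GigabitEthernet 2/0/3.
tech_c = TrafficClass("tech_c", lambda p: p["src"].startswith("192.168.2.")
                      and p["dport"] == 80)
tech_b = TrafficBehavior("tech_b", "GigabitEthernet2/0/3")
policy = QosPolicy("tech_p")
policy.classifier_behavior(tech_c, tech_b)

print(policy.apply({"src": "192.168.2.5", "dport": 80}))
```

A packet that matches no class produces no mirror action, which mirrors how a QoS policy leaves unmatched traffic untouched.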
Traffic mirroring configuration task list
Complete these tasks to configure traffic mirroring:
•   Configuring match criteria (required).
•   Configuring different types of traffic mirroring (use either configuration): Mirroring traffic to an interface, or Mirroring traffic to the CPU.
•   Configuring a QoS policy (required).
•   Applying a QoS policy (use either method): Applying a QoS policy to an interface or a port group, or Applying a QoS policy to a VLAN.
Configuring match criteria
To configure match criteria:
1.  Enter system view.
    Command: system-view
2.  Create a class, and enter class view.
    Command: traffic classifier tcl-name [ operator { and | or } ]
    By default, no traffic class exists.
3.  Configure match criteria.
    Command: if-match [ not ] match-criteria
    By default, no match criterion is configured in a traffic class.
For more information about the traffic classifier and if-match commands, see ACL and QoS Command
Reference.
Configuring different types of traffic mirroring
In a traffic behavior, you can configure only one type of traffic mirroring.
Mirroring traffic to an interface
To mirror traffic to an interface:
1.  Enter system view.
    Command: system-view
2.  Create a behavior, and enter behavior view.
    Command: traffic behavior behavior-name
    By default, no traffic behavior exists. For more information about the traffic behavior command, see ACL and QoS Command Reference.
3.  Specify the destination interface for traffic mirroring.
    Command: mirror-to interface interface-type interface-number
    By default, traffic mirroring is not configured in a traffic behavior.
Mirroring traffic to the CPU
To mirror traffic to the CPU:
1.  Enter system view.
    Command: system-view
2.  Create a behavior, and enter behavior view.
    Command: traffic behavior behavior-name
    By default, no traffic behavior exists. For more information about the traffic behavior command, see ACL and QoS Command Reference.
3.  Mirror traffic to the CPU.
    Command: mirror-to cpu
    By default, no traffic mirroring is configured in a traffic behavior. The CPU refers to the CPU of the card where the interfaces configured with traffic mirroring reside.
Configuring a QoS policy
To configure a QoS policy:
1.  Enter system view.
    Command: system-view
2.  Create a policy, and enter policy view.
    Command: qos policy policy-name
    By default, no policy exists.
3.  Associate a class with a traffic behavior in the QoS policy.
    Command: classifier tcl-name behavior behavior-name
    By default, no traffic behavior is associated with a class.
For more information about the qos policy and classifier behavior commands, see ACL and QoS
Command Reference.
Applying a QoS policy
For more information about applying a QoS policy, see ACL and QoS Configuration Guide.
Applying a QoS policy to an interface or a port group
By applying a QoS policy to an interface, you can mirror the traffic in a specific direction on the interface. A policy can be applied to multiple interfaces, but only one policy can be applied in each direction (inbound or outbound) of an interface.
To apply a QoS policy to an interface:
1.  Enter system view.
    Command: system-view
2.  Enter interface view or port group view. Use one of the following commands:
    •   Enter interface view: interface interface-type interface-number
    •   Enter port group view: port-group manual port-group-name
    Settings in interface view take effect on the current interface. Settings in port group view take effect on all ports in the port group.
3.  Apply a policy to the interface or to all ports in the port group.
    Command: qos apply policy policy-name { inbound | outbound }
    For more information about the qos apply policy command, see ACL and QoS Command Reference.
Applying a QoS policy to a VLAN
You can apply a QoS policy to a VLAN to mirror the traffic in a specific direction on all ports in the
VLAN.
To apply the QoS policy to a VLAN:
1.  Enter system view.
    Command: system-view
2.  Apply a QoS policy to a VLAN.
    Command: qos vlan-policy policy-name vlan vlan-id-list { inbound | outbound }
For more information about the qos vlan-policy command, see ACL and QoS Command Reference.
Displaying and maintaining traffic mirroring
The following commands are available in any view:
•   Display user-defined traffic behavior configuration:
    display traffic behavior user-defined [ behavior-name ] [ | { begin | exclude | include } regular-expression ]
•   Display user-defined QoS policy configuration:
    display qos policy user-defined [ policy-name [ classifier tcl-name ] ] [ | { begin | exclude | include } regular-expression ]
For more information about the display traffic behavior and display qos policy commands, see ACL and
QoS Command Reference.
Traffic mirroring configuration example
Network requirements
As shown in Figure 48:
•   Different departments of the company use IP addresses on different subnets: the marketing department uses subnet 192.168.1.0/24, and the technical department uses subnet 192.168.2.0/24. The working hours of the company are from 8:00 to 18:00 on weekdays.
•   Configure traffic mirroring so that the server can monitor the traffic that the technical department sends to access the Internet and the IP traffic that the technical department sends to the marketing department.
Figure 48 Network diagram
Configuration procedure
1.  Monitor the traffic sent by the technical department to access the Internet:
# Create ACL 3000 to allow packets from the technical department (on subnet 192.168.2.0/24)
to access the Internet.
<RouterA> system-view
[RouterA] acl number 3000
[RouterA-acl-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port
eq www
[RouterA-acl-adv-3000] quit
# Create traffic class tech_c, and then configure the match criterion as ACL 3000.
[RouterA] traffic classifier tech_c
[RouterA-classifier-tech_c] if-match acl 3000
[RouterA-classifier-tech_c] quit
# Create traffic behavior tech_b, and then configure the action of mirroring traffic to port
GigabitEthernet 2/0/3.
[RouterA] traffic behavior tech_b
[RouterA-behavior-tech_b] mirror-to interface gigabitethernet 2/0/3
[RouterA-behavior-tech_b] quit
# Create QoS policy tech_p, and then associate traffic class tech_c with traffic behavior tech_b in
the QoS policy.
[RouterA] qos policy tech_p
[RouterA-qospolicy-tech_p] classifier tech_c behavior tech_b
[RouterA-qospolicy-tech_p] quit
# Apply QoS policy tech_p to the outgoing packets of GigabitEthernet 2/0/1.
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] qos apply policy tech_p outbound
[RouterA-GigabitEthernet2/0/1] quit
2.  Monitor the traffic that the technical department sends to the marketing department:
# Configure a time range named work to cover the time from 8:00 to 18:00 on working days.
[RouterA] time-range work 8:0 to 18:0 working-day
# Create ACL 3001 to allow packets sent from the technical department (on subnet
192.168.2.0/24) to the marketing department (on subnet 192.168.1.0/24).
[RouterA] acl number 3001
[RouterA-acl-adv-3001] rule permit ip source 192.168.2.0 0.0.0.255 destination
192.168.1.0 0.0.0.255 time-range work
[RouterA-acl-adv-3001] quit
# Create traffic class mkt_c, and then configure the match criterion as ACL 3001.
[RouterA] traffic classifier mkt_c
[RouterA-classifier-mkt_c] if-match acl 3001
[RouterA-classifier-mkt_c] quit
# Create traffic behavior mkt_b, and then configure the action of mirroring traffic to port
GigabitEthernet 2/0/3.
[RouterA] traffic behavior mkt_b
[RouterA-behavior-mkt_b] mirror-to interface gigabitethernet 2/0/3
[RouterA-behavior-mkt_b] quit
# Create QoS policy mkt_p, and then associate traffic class mkt_c with traffic behavior mkt_b in
the QoS policy.
[RouterA] qos policy mkt_p
[RouterA-qospolicy-mkt_p] classifier mkt_c behavior mkt_b
[RouterA-qospolicy-mkt_p] quit
# Apply QoS policy mkt_p to the outgoing packets of GigabitEthernet 2/0/2.
[RouterA] interface gigabitethernet 2/0/2
[RouterA-GigabitEthernet2/0/2] qos apply policy mkt_p outbound
Verifying the configuration
After completing the configurations, you can use the server to monitor all traffic that the technical department sends to access the Internet and all IP traffic that the technical department sends to the marketing department during working hours.
Configuring NetStream
Overview
Conventional ways to collect traffic statistics, like SNMP and port mirroring, cannot provide precise
network management because of inflexible statistical methods or the high cost of required dedicated
servers. This calls for a new technology to collect traffic statistics.
NetStream provides statistics about network traffic flows, and it can be deployed on access, distribution,
and core layers.
NetStream implements the following features:
•   Accounting and billing—NetStream provides fine-grained data about network usage based on resources such as lines, bandwidth, and time periods. ISPs can use the data for billing based on time period, bandwidth usage, application usage, and QoS. Enterprise customers can use this information for department chargeback or cost allocation.
•   Network planning—NetStream data provides key information, such as AS traffic information, for optimizing network design and planning. This helps maximize network performance and reliability while minimizing the network operation cost.
•   Network monitoring—Configured on the Internet interface, NetStream allows for monitoring traffic and bandwidth utilization in real time. Based on this information, administrators can understand how the network is used and where the bottlenecks are, so they can better plan resource allocation.
•   User monitoring and analysis—NetStream data provides detailed information about network applications and resources. This information helps network administrators efficiently plan and allocate network resources, which helps ensure network security.
Basic NetStream concepts
Flow
NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv4 flow is defined by the following 7-tuple elements: destination IP address, source IP address, destination port number, source port number, protocol number, ToS, and inbound or outbound interface. The 7-tuple elements define a unique flow.
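The 7-tuple can be pictured as a dictionary key: packets that share all seven values are accounted in the same flow entry. A minimal Python sketch (the field names are illustrative, not the device's internal format):

```python
from collections import defaultdict

def flow_key(pkt):
    """Build the 7-tuple that identifies an IPv4 NetStream flow."""
    return (pkt["dst_ip"], pkt["src_ip"], pkt["dst_port"], pkt["src_port"],
            pkt["protocol"], pkt["tos"], pkt["interface"])

# flow key -> per-flow statistics
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(pkt):
    entry = flows[flow_key(pkt)]
    entry["packets"] += 1
    entry["bytes"] += pkt["length"]

# Two packets with identical 7-tuples fall into the same flow entry.
p = {"dst_ip": "10.0.0.1", "src_ip": "10.0.0.2", "dst_port": 80,
     "src_port": 32768, "protocol": 6, "tos": 0,
     "interface": "GE2/0/1-in", "length": 1500}
account(p)
account(p)
print(len(flows), flows[flow_key(p)]["packets"])
```

A packet that differs in any one element (for example, a different ToS value) would create a second entry instead.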
NetStream operation
A typical NetStream system comprises the following parts:
•   NetStream data exporter (NDE)—The NDE analyzes the traffic flows that pass through it, collects the necessary data from the target flows, and exports the data to the NSC. Before exporting data, the NDE might process the data, for example, by aggregation. A device configured with NetStream acts as an NDE.
•   NetStream collector (NSC)—The NSC is usually a program running on UNIX or Windows. It parses the packets sent from the NDE and stores the statistics in a database for the NDA. The NSC gathers data from multiple NDEs, and then filters and aggregates the received data.
•   NetStream data analyzer (NDA)—The NDA is a tool for analyzing network traffic. It collects statistics from the NSC, performs further processing, and generates various types of reports for traffic billing, network planning, and attack detection and monitoring applications. Typically, the NDA features a web-based system for users to easily obtain, view, and gather the data.
Figure 49 NetStream system
As shown in Figure 49, NetStream uses the following procedure to collect and analyze data:
1.  The NDE (the device configured with NetStream) periodically delivers the collected statistics to the NSC.
2.  The NSC processes the statistics, and then sends the results to the NDA.
3.  The NDA analyzes the statistics for accounting, network planning, and other applications.
NSC and NDA are usually integrated into a NetStream server. This document focuses on the description
and configuration of the NDE.
NetStream key technologies
Flow aging
NetStream uses flow aging to enable the NDE to export NetStream data to the NetStream server. NetStream creates an entry for each flow in the cache, and each entry stores the flow statistics. When the timer of an entry expires, the NDE exports the summarized data to the NetStream server in a specific NetStream version export format. For more information about flow aging types and configuration, see "Configuring NetStream flow aging."
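Flow aging can be pictured as a periodic sweep over the cache: entries whose timers have expired are exported and removed. The following Python sketch models inactive-timeout aging only; the timeout value and data layout are illustrative assumptions, not the device defaults:

```python
class FlowCache:
    """Toy model of a NetStream cache with inactive-timeout aging."""
    def __init__(self, inactive_timeout=30.0):
        self.inactive_timeout = inactive_timeout
        self.entries = {}   # flow key -> [stats dict, last_seen timestamp]

    def update(self, key, nbytes, now):
        stats, _ = self.entries.setdefault(key, [{"bytes": 0}, now])
        stats["bytes"] += nbytes
        self.entries[key][1] = now

    def age(self, now):
        """Export and remove entries idle for at least the timeout."""
        exported = []
        for key in list(self.entries):
            stats, last_seen = self.entries[key]
            if now - last_seen >= self.inactive_timeout:
                exported.append((key, stats))   # would be sent to the server
                del self.entries[key]
        return exported

cache = FlowCache(inactive_timeout=30.0)
cache.update(("10.0.0.1", "10.0.0.2", 6), 1500, now=0.0)
cache.update(("10.0.0.3", "10.0.0.4", 17), 200, now=25.0)
aged = cache.age(now=40.0)   # only the first flow has been idle >= 30 s
print([k for k, _ in aged])
```

A real NDE also applies active timeouts and forced aging; those would be additional conditions in the `age` sweep.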
NetStream data export
NetStream traditional data export
NetStream collects statistics about each flow, and, when the entry timer expires, it exports the data in
each entry to the NetStream server.
The data includes statistics about each flow, but this method consumes more bandwidth and CPU than
the aggregation method, and it requires a large cache size. In most cases, not all statistics are necessary
for analysis.
NetStream aggregation data export
NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and it sends the summarized data to the NetStream server. This process is the
NetStream aggregation data export, which uses less bandwidth than traditional data export.
For example, suppose the aggregation mode configured on the NDE is protocol-port, which means that it aggregates flow statistics by protocol number, source port, and destination port. Four NetStream entries record four TCP flows with the same destination address, source port, and destination port, but different source addresses. In this aggregation mode, only one NetStream aggregation flow is created and sent to the NetStream server.
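The protocol-port example above can be sketched as a merge over flow entries. This Python fragment is an illustration of the aggregation idea only; the field names are assumptions:

```python
from collections import defaultdict

# Four TCP flows: same protocol and ports, different source addresses.
entries = [
    {"src": "10.0.0.1", "proto": 6, "sport": 1024, "dport": 80, "packets": 10},
    {"src": "10.0.0.2", "proto": 6, "sport": 1024, "dport": 80, "packets": 20},
    {"src": "10.0.0.3", "proto": 6, "sport": 1024, "dport": 80, "packets": 30},
    {"src": "10.0.0.4", "proto": 6, "sport": 1024, "dport": 80, "packets": 40},
]

def aggregate_protocol_port(flows):
    """Merge flows that share protocol number, source port, and destination port."""
    merged = defaultdict(int)
    for f in flows:
        merged[(f["proto"], f["sport"], f["dport"])] += f["packets"]
    return merged

agg = aggregate_protocol_port(entries)
print(len(agg), agg[(6, 1024, 80)])   # one aggregation flow
```

Only the single aggregated record is exported, which is why aggregation export uses less bandwidth than traditional export.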
Table 4 lists the 12 aggregation modes. In each mode, the system merges flows into one aggregation flow if all the aggregation criteria have the same values. The 12 aggregation modes work independently of one another and can be configured on the same interface.
Table 4 NetStream aggregation modes

•   AS aggregation: source AS number, destination AS number, inbound interface index, and outbound interface index.
•   Protocol-port aggregation: protocol number, source port, and destination port.
•   Source-prefix aggregation: source AS number, source address mask length, source prefix, and inbound interface index.
•   Destination-prefix aggregation: destination AS number, destination address mask length, destination prefix, and outbound interface index.
•   Prefix aggregation: source AS number, destination AS number, source address mask length, destination address mask length, source prefix, destination prefix, inbound interface index, and outbound interface index.
•   Prefix-port aggregation: source prefix, destination prefix, source address mask length, destination address mask length, ToS, protocol number, source port, destination port, inbound interface index, and outbound interface index.
•   ToS-AS aggregation: ToS, source AS number, destination AS number, inbound interface index, and outbound interface index.
•   ToS-source-prefix aggregation: ToS, source AS number, source prefix, source address mask length, and inbound interface index.
•   ToS-destination-prefix aggregation: ToS, destination AS number, destination address mask length, destination prefix, and outbound interface index.
•   ToS-prefix aggregation: ToS, source AS number, source prefix, source address mask length, destination AS number, destination address mask length, destination prefix, inbound interface index, and outbound interface index.
•   ToS-protocol-port aggregation: ToS, protocol type, source port, destination port, inbound interface index, and outbound interface index.
•   ToS-BGP-nexthop aggregation: ToS, BGP next hop, and outbound interface index.
In aggregation modes that include AS numbers, if the packets are not forwarded according to the BGP routing table, the AS number statistics cannot be obtained. In the ToS-BGP-nexthop aggregation mode, if the packets are not forwarded according to the BGP routing table, the BGP next hop statistics cannot be obtained.
NetStream export formats
NetStream exports data in UDP datagrams in one of the following formats:
•   Version 5—Exports original statistics collected based on the 7-tuple elements. The packet format is fixed and cannot be extended.
•   Version 8—Supports NetStream aggregation data export. The packet formats are fixed and cannot be extended.
•   Version 9—The most flexible format. Users can define templates with different statistics fields, so the template-based export supports additional statistics such as BGP next hop and MPLS information.
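The fixed version 5 layout is compatible with the widely documented NetFlow v5 datagram format. The sketch below assumes that 24-byte header layout; it is for illustration only and does not cover the flow records that follow the header:

```python
import struct

# Assumed NetFlow/NetStream v5-style export header:
# version, record count, sysuptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval.
V5_HEADER = struct.Struct("!HHIIIIBBH")   # 24 bytes, network byte order

def parse_v5_header(datagram):
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_seq, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    return {"version": version, "count": count, "flow_sequence": flow_seq}

# Build a sample header for a datagram carrying 3 flow records.
sample = V5_HEADER.pack(5, 3, 100000, 1_400_000_000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample))
```

Version 9, by contrast, would require parsing template FlowSets before any data records can be decoded, which is what makes it extensible.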
NetStream sampling and filtering
NetStream sampling
NetStream sampling reflects the overall network traffic by collecting statistics on only a subset of the packets. Transferring fewer statistics also reduces the impact on device performance. For more information about sampling, see "Configuring sampler."
NetStream filtering
NetStream filtering is implemented by referencing an ACL in NetStream. NetStream filtering enables a NetStream module to collect statistics only on packets that match the ACL criteria, which allows you to select specific data flows for statistics collection.
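When both features are configured, packets are filtered first and the matching packets are then sampled. That ordering can be sketched as a two-stage pipeline; the ACL predicate and the fixed 1-in-N sampler below are illustrative assumptions:

```python
import itertools

def netstream_pipeline(packets, acl_match, sample_rate):
    """Yield packets that pass the ACL filter, then keep 1 of every
    sample_rate matching packets (filtering happens before sampling)."""
    counter = itertools.count()
    for pkt in packets:
        if not acl_match(pkt):                 # stage 1: ACL-based filtering
            continue
        if next(counter) % sample_rate == 0:   # stage 2: fixed-rate sampling
            yield pkt

packets = [{"src": f"192.168.2.{i}", "seq": i} for i in range(10)] + \
          [{"src": "10.0.0.1", "seq": 99}]
matched = list(netstream_pipeline(packets,
                                  lambda p: p["src"].startswith("192.168.2."),
                                  sample_rate=4))
print([p["seq"] for p in matched])
```

Note that the non-matching packet (seq 99) never reaches the sampler, so it cannot consume a sampling slot.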
NetStream configuration task list
Before you configure NetStream, make the following preparations as needed:
•   Determine the device on which you want to enable NetStream.
•   If multiple service flows are passing through the NDE, use an ACL to select the target data.
•   If there are large traffic volumes on the network, configure NetStream sampling.
•   Decide which export format to use for NetStream data export.
•   Configure the timer for NetStream flow aging.
•   To reduce the bandwidth consumed by NetStream data export, configure NetStream aggregation.
Figure 50 NetStream configuration flow
(The flow chart shows the following sequence: enable NetStream; configure filtering if needed; configure sampling if needed; configure the export format; configure flow aging; and then configure either aggregation data export or common data export.)
Complete these tasks to configure NetStream:
•   Enabling NetStream (required).
•   Configuring NetStream filtering (optional).
•   Configuring NetStream sampling (optional).
•   Configuring NetStream data export (required; use at least one method): Configuring NetStream traditional data export, or Configuring NetStream aggregation data export.
•   Configuring attributes of NetStream export data (optional).
•   Configuring NetStream flow aging (optional).
Enabling NetStream
To enable NetStream:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  Enable NetStream on the interface.
    Command: ip netstream { inbound | outbound }
    By default, NetStream is disabled.
Configuring NetStream filtering and sampling
Before you configure NetStream filtering and sampling, use the ip netstream command to enable
NetStream.
Configuring NetStream filtering
When you configure ACL-based NetStream filtering, follow these guidelines:
•   The NetStream filtering function does not take effect on MPLS packets.
•   When NetStream filtering and sampling are both configured, packets are filtered first, and then the matching packets are sampled.
•   The ACL referenced by NetStream filtering must already exist, and the function takes effect only after the ACL is created. An ACL that is referenced by NetStream filtering cannot be deleted or modified. For more information about ACLs, see ACL and QoS Configuration Guide.
To configure NetStream filtering:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  (Optional.) Enable NetStream filtering on the interface.
    Command: ip netstream filter acl acl-number { inbound | outbound }
    By default, no ACL is referenced and IPv4 packets are not filtered.
4.  Enable NetStream.
    Command: ip netstream { inbound | outbound }
Configuring NetStream sampling
When you configure NetStream sampling, follow these guidelines:
•   When NetStream filtering and sampling are both configured, packets are filtered first, and then the matching packets are sampled.
•   A sampler must be created by using the sampler command before it can be referenced by NetStream sampling.
•   A sampler that is referenced by NetStream sampling cannot be deleted. For more information about samplers, see "Configuring sampler."
To configure NetStream sampling:
To configure NetStream sampling:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  Configure NetStream sampling.
    Command: ip netstream sampler sampler-name { inbound | outbound }
    By default, NetStream sampling is disabled. You can also execute this command in system view to enable NetStream sampling on all interfaces.
4.  Enable NetStream.
    Command: ip netstream { inbound | outbound }
Configuring NetStream data export
To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.
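The export itself is simply UDP datagrams sent from the NDE to the configured collector address and port. The following Python sketch stands in for both sides on the loopback address; the addresses and payload bytes are placeholders, not a real NetStream record:

```python
import socket

def export_records(payload, host, udp_port):
    """Send one NetStream export datagram to the collector (NSC)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, udp_port))
    finally:
        sock.close()

# Stand-in for the collector, bound to an ephemeral loopback port.
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 0))
collector.settimeout(2.0)
port = collector.getsockname()[1]

export_records(b"\x00\x05flow-records", "127.0.0.1", port)
data, _ = collector.recvfrom(2048)
collector.close()
print(len(data))
```

This also shows why the source interface setting matters: the collector identifies the NDE by the source address of these datagrams.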
Configuring NetStream traditional data export
To configure NetStream traditional data export:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  Enable NetStream.
    Command: ip netstream { inbound | outbound }
    By default, NetStream is disabled.
4.  Exit to system view.
    Command: quit
5.  Configure the destination address and the destination UDP port number for the NetStream traditional data export.
    Command: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
    By default, no destination address or destination UDP port number is configured, so the NetStream traditional data is not exported.
6.  (Optional.) Configure the source interface for NetStream traditional data export.
    Command: ip netstream export source interface interface-type interface-number
    By default, the interface where the NetStream data is sent out (the interface that connects to the NetStream server) is used as the source interface. HP recommends that you connect the network management interface to the NetStream server and configure it as the source interface.
7.  (Optional.) Limit the data export rate.
    Command: ip netstream export rate rate
    By default, the export rate is not limited.
Configuring NetStream aggregation data export
Configurations in NetStream aggregation view apply to aggregation data export only, and those in
system view apply to NetStream traditional data export. If configurations in NetStream aggregation view
are not provided, the configurations in system view apply to the aggregation data export.
To configure NetStream aggregation data export:
1.  Enter system view.
    Command: system-view
2.  Enter interface view.
    Command: interface interface-type interface-number
3.  Enable NetStream.
    Command: ip netstream { inbound | outbound }
    By default, NetStream is disabled.
4.  Exit to system view.
    Command: quit
5.  Set a NetStream aggregation mode and enter its view.
    Command: ip netstream aggregation { as | destination-prefix | prefix | prefix-port | protocol-port | source-prefix | tos-as | tos-destination-prefix | tos-prefix | tos-protocol-port | tos-source-prefix | tos-bgp-nexthop }
6.  Configure the destination address and the destination UDP port number for the NetStream aggregation data export.
    Command: ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
    By default, no destination address or destination UDP port number is configured in NetStream aggregation view. If you expect to export only NetStream aggregation data, configure the destination address in the related aggregation view only.
7.  (Optional.) Configure the source interface for NetStream aggregation data export.
    Command: ip netstream export source interface interface-type interface-number
    By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view, if any, is used. HP recommends that you connect the network management interface to the NetStream server.
8.  Enable the NetStream aggregation configuration.
    Command: enable
    By default, the NetStream aggregation configuration is disabled.
Configuring attributes of NetStream export data
Configuring NetStream export format
NetStream data can be exported in version 5 or version 9 format, and the data fields can be expanded to carry more information:
• Statistics about source AS, destination AS, and peer ASs, in version 5 or version 9 export format.
• Statistics about the BGP next hop, in version 9 export format only.
To configure the NetStream export format:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Configure the version for NetStream export format, and specify whether to record AS and BGP next hop information.
Command: ip netstream export version 5 [ origin-as | peer-as ], or ip netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
Remarks: Optional. By default:
• NetStream traditional data export uses version 5.
• IPv4 NetStream aggregation data export uses version 8.
• MPLS flow data is not exported.
• The peer AS numbers are exported for the source and destination.
• The BGP next hop is not exported.
For more information about an AS, see Layer 3—IP Routing Configuration Guide.
A NetStream entry for a flow records two AS numbers for the source IP address and two for the destination IP address. For the source IP address, these are the source AS from which the flow originates and the peer AS from which the flow enters the NetStream-enabled device. For the destination IP address, these are the destination AS for which the flow is destined and the peer AS to which the NetStream-enabled device passes the flow.
To specify which AS numbers to record for the source and destination IP addresses, include the peer-as
or origin-as keyword. For example, as shown in Figure 51, a flow starts at AS 20, passes AS 21 through
AS 23, and then reaches AS 24. NetStream is enabled on the device in AS 22. If the peer-as keyword
is provided, the command records AS 21 as the source AS and AS 23 as the destination AS. If the
origin-as keyword is provided, the command records AS 20 as the source AS and AS 24 as the
destination AS.
Figure 51 Recorded AS information varies with different keyword configurations
(The figure shows a flow that starts in AS 20, passes through AS 21, AS 22, and AS 23, and ends in AS 24, with NetStream enabled on the device in AS 22. With the peer-as keyword, AS 21 is recorded as the source AS and AS 23 as the destination AS. With the origin-as keyword, AS 20 is recorded as the source AS and AS 24 as the destination AS.)
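For example, the following commands select the version 9 export format, record the original AS numbers, and export the BGP next hop. This is a minimal sketch based on the command syntax above:
# Enter system view.
<RouterA> system-view
# Use version 9 export format, record origin AS numbers, and export the BGP next hop.
[RouterA] ip netstream export version 9 origin-as bgp-nexthop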
Configuring the refresh rate for NetStream version 9 templates
Version 9 is template-based and supports user-defined formats, so the NetStream-enabled device needs
to resend a new template to the NetStream server for an update. If the version 9 format is changed on
the NetStream-enabled device and is not updated on the NetStream server, the server cannot associate
the received statistics with the proper fields. To avoid this situation, configure the refresh frequency and
interval for version 9 templates so that the NetStream server can refresh the templates on time.
Both the refresh frequency and the refresh interval can be configured, and the template is resent when either condition is met.
To configure the refresh rate for NetStream version 9 templates:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Configure the refresh frequency for NetStream version 9 templates.
Command: ip netstream export v9-template refresh-rate packet packets
Remarks: Optional. By default, the version 9 templates are sent every 20 packets.
3. Configure the refresh interval for NetStream version 9 templates.
Command: ip netstream export v9-template refresh-rate time minutes
Remarks: Optional. By default, the version 9 templates are sent every 30 minutes.
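For example, to resend the templates more often than the defaults (the values 10 and 15 are illustrative only):
# Resend the version 9 templates every 10 packets.
[RouterA] ip netstream export v9-template refresh-rate packet 10
# Also resend the version 9 templates every 15 minutes.
[RouterA] ip netstream export v9-template refresh-rate time 15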
Configuring MPLS-aware NetStream
An MPLS flow is identified by the same labels at the same positions in the label stack plus the same 7-tuple elements. MPLS-aware NetStream collects and exports statistics on the labels (up to three) in the label stack, the FEC corresponding to the top label, and the traditional 7-tuple elements.
To configure MPLS-aware NetStream:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Count and export statistics on MPLS packets.
Command: ip netstream mpls [ label-positions { label-position1 [ label-position2 ] [ label-position3 ] } ] [ no-ip-fields ]
Remarks: By default, no statistics about MPLS packets are counted or exported. This command enables both IPv4 and IPv6 NetStream for MPLS packets.
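A minimal sketch, assuming you want statistics keyed on the first two labels in the stack without the IP fields (the label positions chosen here are illustrative):
# Count and export statistics on MPLS packets, using the first and second labels and excluding IP fields.
[RouterA] ip netstream mpls label-positions 1 2 no-ip-fields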
Configuring NetStream flow aging
Flow aging methods
The following types of NetStream flow aging are available:
• Periodical aging
• Forced aging
• TCP FIN- and RST-triggered aging (automatically triggered if a TCP connection is terminated)
Periodical aging
Periodical aging uses the following methods:
• Inactive flow aging—A flow is considered inactive if its statistics have not changed, that is, no packet for this NetStream entry arrives in the time specified by the ip netstream timeout inactive command. The inactive flow entry remains in the cache until the inactive timer expires. Then the inactive flow is aged out and its statistics, which can no longer be displayed by the display ip netstream cache command, are sent to the NetStream server. Inactive flow aging makes sure that the cache is big enough for new flow entries.
• Active flow aging—An active flow is aged out when the time specified by the ip netstream timeout active command is reached, and its statistics are exported to the NetStream server. The device continues to count the active flow statistics, which can be displayed by the display ip netstream cache command. Active flow aging periodically exports the statistics of active flows to the NetStream server.
Forced aging
Use the reset ip netstream statistics command to age out all NetStream entries in the cache and to clear
the statistics. This is forced aging. Alternatively, use the ip netstream max-entry command to configure
aging out of entries in the cache when the maximum number of entries is reached.
TCP FIN- and RST-triggered aging
For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
If a packet with a FIN or RST flag is recorded for a flow with the NetStream entry already created, the
flow is aged out immediately. However, if the packet with a FIN or RST flag is the first packet of a flow,
a new NetStream entry is created instead of being aged out.
Configuration procedure
To configure flow aging:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Enable periodical aging and TCP FIN- and RST-triggered aging.
Command: ip netstream aging
Remarks: Optional. By default, periodical aging and TCP FIN- and RST-triggered aging are enabled.
3. Configure periodical aging.
Commands:
• Set the aging timer for active flows: ip netstream timeout active seconds
• Set the aging timer for inactive flows: ip netstream timeout inactive seconds
Remarks: Optional. By default, the aging timer for active flows is 1800 seconds, and the aging timer for inactive flows is 30 seconds.
4. Configure forced aging of the NetStream entries.
Commands:
a. Set the maximum number of entries that the cache can accommodate, and the processing method when the upper limit is reached: ip netstream max-entry { max-entries | aging | disable-caching }
b. Exit to user view: quit
c. Configure forced aging: reset ip netstream statistics
Remarks: Optional. By default, the cache can accommodate a maximum of 620000 entries, and the device ages out entries when the upper limit is reached. The ip netstream max-entry command is supported on only 6602 and HSR6602 routers. The reset ip netstream statistics command also clears the cache.
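For example, to export flow statistics more frequently than the defaults (the timer values below are illustrative only):
# Age out and export active flows every 600 seconds.
[RouterA] ip netstream timeout active 600
# Age out inactive flows after 60 seconds without traffic.
[RouterA] ip netstream timeout inactive 60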
Displaying and maintaining NetStream
• Display NetStream entry information in the cache.
Command: display ip netstream cache [ verbose ] [ destination ip-address | interface interface-type interface-number | source ip-address ] * [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Display information about NetStream data export.
Command: display ip netstream export [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Display the configuration and status of the NetStream flow record templates.
Command: display ip netstream template [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Clear the cache, age out, and export all NetStream data.
Command: reset ip netstream statistics
Remarks: Available in user view.
NetStream configuration examples
NetStream traditional data export configuration example
Network requirements
As shown in Figure 52, configure NetStream on Router A to collect statistics on packets passing through
it. Enable NetStream for incoming traffic on GigabitEthernet 2/0/0 and for outgoing traffic on
GigabitEthernet 2/0/1. Configure the router to export NetStream traditional data to UDP port 5000 of
the NetStream server at 12.110.2.2/16.
Figure 52 Network diagram
Configuration procedure
# Enable NetStream for incoming traffic on GigabitEthernet 2/0/0.
<RouterA> system-view
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ip address 11.110.2.1 255.255.0.0
[RouterA-GigabitEthernet2/0/0] ip netstream inbound
[RouterA-GigabitEthernet2/0/0] quit
# Enable NetStream for outgoing traffic on GigabitEthernet 2/0/1.
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ip address 12.110.2.1 255.255.0.0
[RouterA-GigabitEthernet2/0/1] ip netstream outbound
[RouterA-GigabitEthernet2/0/1] quit
# Configure the destination address and the destination UDP port number for the NetStream traditional
data export.
[RouterA] ip netstream export host 12.110.2.2 5000
NetStream aggregation data export configuration example
Network requirements
As shown in Figure 53, configure NetStream on Router A to meet the following requirements:
• Router A exports NetStream traditional data in version 5 export format to port 5000 of the NetStream server at 4.1.1.1/16.
• Router A performs NetStream aggregation in the modes of AS, protocol-port, source-prefix, destination-prefix, and prefix. Use version 8 export format to send the aggregation data of the different modes to the destination address at 4.1.1.1, with UDP ports 2000, 3000, 4000, 6000, and 7000, respectively.
All routers in the network are running EBGP. For more information about BGP, see Layer 3—IP Routing
Configuration Guide.
Figure 53 Network diagram
Configuration procedure
# Enable NetStream for incoming and outgoing traffic on GigabitEthernet 2/0/0.
<RouterA> system-view
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ip address 3.1.1.1 255.255.0.0
[RouterA-GigabitEthernet2/0/0] ip netstream inbound
[RouterA-GigabitEthernet2/0/0] ip netstream outbound
[RouterA-GigabitEthernet2/0/0] quit
# In system view, configure the destination address and the destination UDP port number for the
NetStream traditional data export with IP address 4.1.1.1 and port 5000.
[RouterA] ip netstream export host 4.1.1.1 5000
# Configure the aggregation mode as AS, and then in aggregation view, configure the destination
address and the destination UDP port number for the NetStream AS aggregation data export.
[RouterA] ip netstream aggregation as
[RouterA-ns-aggregation-as] enable
[RouterA-ns-aggregation-as] ip netstream export host 4.1.1.1 2000
[RouterA-ns-aggregation-as] quit
# Configure the aggregation mode as protocol-port, and then in aggregation view, configure the
destination address and the destination UDP port number for the NetStream protocol-port aggregation
data export.
[RouterA] ip netstream aggregation protocol-port
[RouterA-ns-aggregation-protport] enable
[RouterA-ns-aggregation-protport] ip netstream export host 4.1.1.1 3000
[RouterA-ns-aggregation-protport] quit
# Configure the aggregation mode as source-prefix, and then in aggregation view, configure the
destination address and the destination UDP port number for the NetStream source-prefix aggregation
data export.
[RouterA] ip netstream aggregation source-prefix
[RouterA-ns-aggregation-srcpre] enable
[RouterA-ns-aggregation-srcpre] ip netstream export host 4.1.1.1 4000
[RouterA-ns-aggregation-srcpre] quit
# Configure the aggregation mode as destination-prefix, and then in aggregation view, configure the
destination address and the destination UDP port number for the NetStream destination-prefix
aggregation data export.
[RouterA] ip netstream aggregation destination-prefix
[RouterA-ns-aggregation-dstpre] enable
[RouterA-ns-aggregation-dstpre] ip netstream export host 4.1.1.1 6000
[RouterA-ns-aggregation-dstpre] quit
# Configure the aggregation mode as prefix, and then in aggregation view, configure the destination
address and the destination UDP port number for the NetStream prefix aggregation data export.
[RouterA] ip netstream aggregation prefix
[RouterA-ns-aggregation-prefix] enable
[RouterA-ns-aggregation-prefix] ip netstream export host 4.1.1.1 7000
[RouterA-ns-aggregation-prefix] quit
Configuring IPv6 NetStream
Overview
Legacy methods of collecting traffic statistics, such as SNMP and port mirroring, cannot provide precise network management because of inflexible statistical methods or the high cost of dedicated servers. This calls for a new technology to collect traffic statistics.
IPv6 NetStream provides statistics about network traffic flows, and it can be deployed on access,
distribution, and core layers.
IPv6 NetStream implements the following features:
• Accounting and billing—IPv6 NetStream provides fine-grained data about network usage based on resources such as lines, bandwidth, and time periods. ISPs can use the data for billing based on time period, bandwidth usage, application usage, and QoS. Enterprise customers can use this information for department chargeback or for cost allocation.
• Network planning—IPv6 NetStream data provides key information, such as AS traffic information, for optimizing the network design and planning. This helps maximize network performance and reliability while minimizing the network operation cost.
• Network monitoring—Configured on the Internet interface, IPv6 NetStream allows for monitoring traffic and bandwidth utilization in real time. By using this information, administrators can understand how the network is used and where the bottlenecks are, so that they can better plan the resource allocation.
• User monitoring and analysis—The IPv6 NetStream data provides detailed information about network applications and resources. This information helps network administrators efficiently plan and allocate network resources, which helps ensure network security.
Basic IPv6 NetStream concepts
IPv6 flow
IPv6 NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv6 flow is defined by the following 7-tuple elements: destination IP address, source IP address, destination port number, source port number, protocol number, ToS, and inbound or outbound interface. The 7-tuple elements define a unique flow.
IPv6 NetStream operation
A typical IPv6 NetStream system comprises the following parts:
• NetStream data exporter (NDE)—The NDE analyzes traffic flows that pass through it, collects data from the target flows, and then exports the data to the NSC. Before exporting data, the NDE might process the data, for example, by aggregation. A device with IPv6 NetStream configured acts as an NDE.
• NetStream collector (NSC)—The NSC is usually a program running in UNIX or Windows. It parses the packets sent from the NDE, and then it stores the statistics to the database for the NDA. The NSC gathers the data from multiple NDEs.
• NetStream data analyzer (NDA)—The NDA is a tool for analyzing network traffic. It collects statistics from the NSC, performs further processing, and generates various types of reports for applications such as traffic billing, network planning, and attack detection and monitoring. Typically, the NDA features a Web-based system for users to easily obtain, view, and gather the data.
Figure 54 IPv6 NetStream system
As shown in Figure 54, IPv6 NetStream uses the following procedure to collect and analyze data:
1. The NDE (the device configured with IPv6 NetStream) periodically delivers the collected statistics to the NSC.
2. The NSC processes the statistics, and then it sends the results to the NDA.
3. The NDA analyzes the statistics for accounting, network planning, and the like.
The NSC and NDA are usually integrated into a NetStream server. This document focuses on the description and configuration of the NDE.
IPv6 NetStream key technologies
Flow aging
IPv6 NetStream uses flow aging to enable the NDE to export IPv6 NetStream data to the NetStream
server. IPv6 NetStream creates an IPv6 NetStream entry for each flow in the cache, and each entry stores
the flow statistics. When the timer of the entry expires, the NDE exports the summarized data to the IPv6
NetStream server in a specific IPv6 NetStream version export format. For information about flow aging
types and configuration, see "Configuring IPv6 NetStream flow aging."
IPv6 NetStream data export
IPv6 NetStream traditional data export
IPv6 NetStream collects statistics about each flow and, when the entry timer expires, it exports the data
in each entry to the NetStream server.
The data includes statistics about each flow, but this method consumes more bandwidth and CPU than
the aggregation method, and it requires a large cache size. In most cases, not all statistics are necessary
for analysis.
IPv6 NetStream aggregation data export
IPv6 NetStream aggregation merges the flow statistics according to the aggregation criteria of an
aggregation mode, and it sends the summarized data to the IPv6 NetStream server. This process is the
IPv6 NetStream aggregation data export, which uses less bandwidth than traditional data export.
Table 5 lists the six supported IPv6 NetStream aggregation modes. In each mode, the system merges flows into one aggregation flow if their aggregation criteria have the same values. The six aggregation modes work independently and can be configured on the same interface.
Table 5 IPv6 NetStream aggregation modes
• AS aggregation—Source AS number, destination AS number, inbound interface index, and outbound interface index.
• Protocol-port aggregation—Protocol number, source port, and destination port.
• Source-prefix aggregation—Source AS number, source address mask length, source prefix, and inbound interface index.
• Destination-prefix aggregation—Destination AS number, destination address mask length, destination prefix, and outbound interface index.
• Prefix aggregation—Source AS number, destination AS number, source address mask length, destination address mask length, source prefix, destination prefix, inbound interface index, and outbound interface index.
• BGP-nexthop aggregation—BGP next hop and outbound interface index.
In aggregation modes that involve AS numbers, if the packets are not forwarded according to the BGP routing table, the statistics on the AS numbers cannot be obtained. Similarly, in BGP-nexthop aggregation mode, if the packets are not forwarded according to the BGP routing table, the statistics on the BGP next hop cannot be obtained.
IPv6 NetStream export format
IPv6 NetStream exports data in UDP datagrams in version 9 format.
The template-based version 9 format supports additional statistics, such as the BGP next hop and MPLS information.
IPv6 NetStream configuration task list
Before you configure IPv6 NetStream, verify the following items as needed:
• Determine the device on which to enable IPv6 NetStream.
• Configure the timers for IPv6 NetStream flow aging.
• To reduce the bandwidth that IPv6 NetStream data export uses, configure IPv6 NetStream aggregation.
Complete these tasks to configure IPv6 NetStream:
• Enabling IPv6 NetStream—Required.
• Configuring IPv6 NetStream data export (traditional data export, aggregation data export)—Use at least one method.
• Configuring attributes of IPv6 NetStream data export—Optional.
• Configuring IPv6 NetStream flow aging—Optional.
Enabling IPv6 NetStream
1. Enter system view.
Command: system-view
Remarks: N/A
2. Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A
3. Enable IPv6 NetStream on the interface.
Command: ipv6 netstream { inbound | outbound }
Remarks: Disabled by default.
Configuring IPv6 NetStream data export
To allow the NDE to export collected statistics to the NetStream server, configure the source interface out
of which the data is sent and the destination address to which the data is sent.
Configuring IPv6 NetStream traditional data export
1. Enter system view.
Command: system-view
Remarks: N/A
2. Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A
3. Enable IPv6 NetStream.
Command: ipv6 netstream { inbound | outbound }
Remarks: Disabled by default.
4. Exit to system view.
Command: quit
Remarks: N/A
5. Configure the destination address and the destination UDP port number for the IPv6 NetStream traditional data export.
Command: ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
Remarks: By default, no destination address or destination UDP port number is configured, so the IPv6 NetStream traditional data is not exported.
6. Configure the source interface for IPv6 NetStream traditional data export.
Command: ipv6 netstream export source interface interface-type interface-number
Remarks: Optional. By default, the interface out of which the NetStream data is sent (the interface that connects to the NetStream server) is used as the source interface. HP recommends that you connect the network management interface to the NetStream server and configure it as the source interface.
7. Limit the data export rate.
Command: ipv6 netstream export rate rate
Remarks: Optional. No limit by default.
Configuring IPv6 NetStream aggregation data export
Configurations in IPv6 NetStream aggregation view apply to aggregation data export only, and those in
system view apply to traditional data export. If configurations in IPv6 NetStream aggregation view are
not provided, the configurations in system view apply to the aggregation data export.
To configure IPv6 NetStream aggregation data export:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Enter interface view.
Command: interface interface-type interface-number
Remarks: N/A
3. Enable IPv6 NetStream.
Command: ipv6 netstream { inbound | outbound }
Remarks: Disabled by default.
4. Exit to system view.
Command: quit
Remarks: N/A
5. Set an IPv6 NetStream aggregation mode and enter its view.
Command: ipv6 netstream aggregation { as | bgp-nexthop | destination-prefix | prefix | protocol-port | source-prefix }
Remarks: N/A
6. Configure the destination address and the destination UDP port number for the IPv6 NetStream aggregation data export.
Command: ipv6 netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ]
Remarks: By default, no destination address or destination UDP port number is configured in IPv6 NetStream aggregation view. If you expect to export only IPv6 NetStream aggregation data, configure the destination address in the related aggregation view only.
7. Configure the source interface for IPv6 NetStream aggregation data export.
Command: ipv6 netstream export source interface interface-type interface-number
Remarks: Optional. By default, the interface connecting to the NetStream server is used as the source interface. Source interfaces in different aggregation views can be different. If no source interface is configured in aggregation view, the source interface configured in system view, if any, is used. HP recommends that you connect the network management interface to the NetStream server.
8. Enable the current IPv6 NetStream aggregation configuration.
Command: enable
Remarks: Disabled by default.
Configuring attributes of IPv6 NetStream data
export
Configuring IPv6 NetStream export format
IPv6 NetStream data is exported in version 9 format, and the data fields can be expanded to carry more information:
• Statistics about source AS, destination AS, and peer ASs.
• Statistics about the BGP next hop.
To configure the IPv6 NetStream export format:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Configure the version for IPv6 NetStream export format, and specify whether to record AS and BGP next hop information.
Command: ipv6 netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
Remarks: Optional. By default:
• Version 9 format is used to export IPv6 NetStream traditional data, IPv6 NetStream aggregation data, and MPLS flow data with IPv6 fields.
• The peer AS numbers are recorded.
• The BGP next hop is not recorded.
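For example, to record the original AS numbers and the BGP next hop instead of the defaults (a minimal sketch based on the syntax above):
# Record origin AS numbers and the BGP next hop in the version 9 export data.
[RouterA] ipv6 netstream export version 9 origin-as bgp-nexthop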
Configuring the refresh rate for IPv6 NetStream version 9
templates
Version 9 is template-based and supports user-defined formats, so the NetStream device needs to resend
a new template to the NetStream server for an update. If the version 9 format is changed on the
NetStream device and not updated on the NetStream server, the server cannot associate the received
statistics with the proper fields. To avoid this situation, configure the refresh frequency and interval for version 9 templates so that the NetStream server can refresh the templates on time.
Both the refresh frequency and the refresh interval can be configured, and the template is resent when either condition is met.
To configure the refresh rate for IPv6 NetStream version 9 templates:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Configure the refresh frequency for IPv6 NetStream version 9 templates.
Command: ipv6 netstream export v9-template refresh-rate packet packets
Remarks: Optional. By default, the version 9 templates are sent every 20 packets.
3. Configure the refresh interval for IPv6 NetStream version 9 templates.
Command: ipv6 netstream export v9-template refresh-rate time minutes
Remarks: Optional. By default, the version 9 templates are sent every 30 minutes.
Configuring IPv6 NetStream flow aging
Flow aging methods
The following types of IPv6 NetStream flow aging are available:
• Periodical aging
• Forced aging
• TCP FIN- and RST-triggered aging (automatically triggered if a TCP connection is terminated)
Periodical aging
Periodical aging uses the following methods:
• Inactive flow aging—A flow is considered inactive if its statistics have not changed, that is, no packet for this IPv6 NetStream entry arrives in the time specified by the ipv6 netstream timeout inactive command. The inactive flow entry remains in the cache until the inactive timer expires. Then, the inactive flow is aged out and its statistics, which can no longer be displayed by the display ipv6 netstream cache command, are sent to the NetStream server. Inactive flow aging ensures that the cache is big enough for new flow entries.
• Active flow aging—An active flow is aged out when the time specified by the ipv6 netstream timeout active command is reached, and its statistics are exported to the NetStream server. The device continues to count the active flow statistics, which can be displayed by the display ipv6 netstream cache command. Active flow aging periodically exports the statistics of active flows to the NetStream server.
Forced aging
Use the reset ipv6 netstream statistics command to age out all IPv6 NetStream entries in the cache and
to clear the statistics. This is forced aging. Alternatively, use the ipv6 netstream max-entry command to
configure aging out of entries in the cache if the maximum number of entries is reached.
TCP FIN- and RST-triggered aging
For a TCP connection, when a packet with a FIN or RST flag is sent out, it means that a session is finished.
If a packet with a FIN or RST flag is recorded for a flow with the IPv6 NetStream entry already created,
the flow is aged out immediately. However, if the packet with a FIN or RST flag is the first packet of a flow,
a new IPv6 NetStream entry is created instead of being aged out. This type of aging is enabled by
default, and it cannot be disabled.
Configuration procedure
To configure flow aging:
1. Enter system view.
Command: system-view
Remarks: N/A
2. Configure periodical aging.
Commands:
• Set the aging timer for active flows: ipv6 netstream timeout active seconds
• Set the aging timer for inactive flows: ipv6 netstream timeout inactive seconds
Remarks: Optional. By default, the aging timer for active flows is 1800 seconds, and the aging timer for inactive flows is 30 seconds.
3. Configure forced aging of the IPv6 NetStream entries.
Commands:
a. Set the maximum number of entries that the cache can accommodate, and the processing method when the upper limit is reached: ipv6 netstream max-entry { max-entries | aging | disable-caching }
b. Exit to user view: quit
c. Configure forced aging: reset ipv6 netstream statistics
Remarks: Optional. By default, the cache can accommodate a maximum of 620000 entries, and the device ages out entries when the upper limit is reached. The ipv6 netstream max-entry command is supported only on 6602 and HSR6602 routers. The reset ipv6 netstream statistics command also clears the cache.
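For example, to adjust the cache limit and stop caching new flows when it fills, rather than aging entries out (the entry limit below is illustrative; the command is supported only on 6602 and HSR6602 routers, and each keyword is assumed to be configured in a separate invocation per the syntax above):
# Set the maximum number of cache entries.
[RouterA] ipv6 netstream max-entry 500000
# Stop caching new entries when the upper limit is reached.
[RouterA] ipv6 netstream max-entry disable-caching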
Displaying and maintaining IPv6 NetStream
• Display IPv6 NetStream entry information in the cache.
Command: display ipv6 netstream cache [ verbose ] [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Display information about IPv6 NetStream data export.
Command: display ipv6 netstream export [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Display the configuration and status of the NetStream flow record templates.
Command: display ipv6 netstream template [ slot slot-number ] [ | { begin | exclude | include } regular-expression ]
Remarks: Available in any view.
• Clear the cache, age out, and export all IPv6 NetStream data.
Command: reset ipv6 netstream statistics
Remarks: Available in user view.
IPv6 NetStream configuration examples
IPv6 NetStream traditional data export configuration example
Network requirements
As shown in Figure 55, configure IPv6 NetStream on Router A to collect statistics on packets passing through it. Enable IPv6 NetStream in the inbound direction on GigabitEthernet 2/0/0 and in the outbound direction on GigabitEthernet 2/0/1. Configure the router to export IPv6 NetStream traditional data to UDP port 5000 of the NetStream server at 12.110.2.2/16.
Figure 55 Network diagram
Configuration procedure
# Enable IPv6 NetStream in the inbound direction of GigabitEthernet 2/0/0.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ipv6 address 10::1/64
[RouterA-GigabitEthernet2/0/0] ipv6 netstream inbound
[RouterA-GigabitEthernet2/0/0] quit
# Enable IPv6 NetStream in the outbound direction of GigabitEthernet 2/0/1.
[RouterA] interface gigabitethernet 2/0/1
[RouterA-GigabitEthernet2/0/1] ip address 12.110.2.1 255.255.0.0
[RouterA-GigabitEthernet2/0/1] ipv6 address 20::1/64
[RouterA-GigabitEthernet2/0/1] ipv6 netstream outbound
[RouterA-GigabitEthernet2/0/1] quit
# Configure the destination address and the destination UDP port number for the IPv6 NetStream
traditional data export.
[RouterA] ipv6 netstream export host 12.110.2.2 5000
IPv6 NetStream aggregation data export configuration
example
Network requirements
As shown in Figure 56, configure IPv6 NetStream on Router A to meet the following requirements:
• Router A exports IPv6 NetStream traditional data to port 5000 of the NetStream server at 4.1.1.1/16.
• Router A performs IPv6 NetStream aggregation in the modes of AS, protocol-port, source-prefix, destination-prefix, and prefix. Send the aggregation data to the destination address with UDP ports 2000, 3000, 4000, 6000, and 7000 for the different modes, respectively.
All routers in the network are running IPv6 EBGP. For more information about IPv6 BGP, see Layer 3—IP
Routing Configuration Guide.
Figure 56 Network diagram
Configuration procedure
# Enable IPv6 NetStream in the inbound and outbound directions of GigabitEthernet 2/0/0.
<RouterA> system-view
[RouterA] ipv6
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] ipv6 address 10::1/64
[RouterA-GigabitEthernet2/0/0] ipv6 netstream inbound
[RouterA-GigabitEthernet2/0/0] ipv6 netstream outbound
[RouterA-GigabitEthernet2/0/0] quit
# In system view, configure the destination address and the destination UDP port number for the IPv6
NetStream traditional data export with IP address 4.1.1.1 and port 5000.
[RouterA] ipv6 netstream export host 4.1.1.1 5000
# Configure the aggregation mode as AS, and then, in aggregation view, configure the destination
address and the destination UDP port number for the IPv6 NetStream AS aggregation data export.
[RouterA] ipv6 netstream aggregation as
[RouterA-ns6-aggregation-as] enable
[RouterA-ns6-aggregation-as] ipv6 netstream export host 4.1.1.1 2000
[RouterA-ns6-aggregation-as] quit
# Configure the aggregation mode as protocol-port, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream protocol-port
aggregation data export.
[RouterA] ipv6 netstream aggregation protocol-port
[RouterA-ns6-aggregation-protport] enable
[RouterA-ns6-aggregation-protport] ipv6 netstream export host 4.1.1.1 3000
[RouterA-ns6-aggregation-protport] quit
# Configure the aggregation mode as source-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream source-prefix
aggregation data export.
[RouterA] ipv6 netstream aggregation source-prefix
[RouterA-ns6-aggregation-srcpre] enable
[RouterA-ns6-aggregation-srcpre] ipv6 netstream export host 4.1.1.1 4000
[RouterA-ns6-aggregation-srcpre] quit
# Configure the aggregation mode as destination-prefix, and then, in aggregation view, configure the
destination address and the destination UDP port number for the IPv6 NetStream destination-prefix
aggregation data export.
[RouterA] ipv6 netstream aggregation destination-prefix
[RouterA-ns6-aggregation-dstpre] enable
[RouterA-ns6-aggregation-dstpre] ipv6 netstream export host 4.1.1.1 6000
[RouterA-ns6-aggregation-dstpre] quit
# Configure the aggregation mode as prefix, and then, in aggregation view, configure the destination
address and the destination UDP port number for the IPv6 NetStream prefix aggregation data export.
[RouterA] ipv6 netstream aggregation prefix
[RouterA-ns6-aggregation-prefix] enable
[RouterA-ns6-aggregation-prefix] ipv6 netstream export host 4.1.1.1 7000
[RouterA-ns6-aggregation-prefix] quit
Configuring the information center
Overview
The information center collects and outputs system information as follows:
• Receives system information, including log, trap, and debugging information, from source modules.
• Assigns the system information to different information channels according to user-defined output rules.
• Outputs the information to different destinations based on channel-to-destination associations.
In summary, the information center assigns log, trap, and debugging information to ten information channels according to eight severity levels, and then outputs the information to different destinations.
Figure 57 Information center diagram
By default, the information center is enabled. It affects system performance to some degree when
processing large amounts of information. If the system resources are insufficient, disable the information
center to save resources.
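A minimal way to picture the channel-to-destination model is a lookup table: each destination reads from exactly one channel, while one channel may feed several destinations. The sketch below is only an illustration; the names and channel numbers mirror Table 7 of this guide.

```python
# Each output destination reads from exactly one channel (default bindings
# from Table 7); routing a message on a channel fans out to every
# destination bound to that channel.
DESTINATION_CHANNEL = {
    "console": 0,
    "monitor": 1,
    "loghost": 2,
    "trapbuffer": 3,
    "logbuffer": 4,
    "snmpagent": 5,
    "logfile": 9,
}

def route(message_channel):
    """Return every destination fed by the given channel, sorted by name."""
    return sorted(d for d, ch in DESTINATION_CHANNEL.items()
                  if ch == message_channel)

console_consumers = route(0)  # only the console reads channel 0 by default
```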
Classification of system information
System information is divided into the following types:
• Log information—Describes user operations and interface state changes.
• Trap information—Describes device faults, such as authentication and network failures.
• Debugging information—Displays device running status for troubleshooting.
Source modules are the protocol modules, board drivers, and configuration modules that generate system information. You can classify, filter, and output system information based on source modules. To view the supported source modules, use the info-center source ? command.
System information levels
System information is classified into eight severity levels, numbered 0 through 7 in order of decreasing severity. The device outputs system information with a severity at or above a specified level, that is, with a level number less than or equal to the configured value. For example, if you configure an output rule with a severity level of 6 (informational), information that has a severity level from 0 to 6 is output.
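The rule can be restated numerically: a message passes the filter when its level number does not exceed the configured threshold. A short sketch (illustrative only):

```python
# Severity filtering as described above: a numerically lower level is MORE
# severe, so a message passes if its level number is <= the threshold.
SEVERITIES = ["emergencies", "alerts", "critical", "errors",
              "warnings", "notifications", "informational", "debugging"]

def passes(msg_level, configured_level):
    """True if a message of msg_level is output under configured_level."""
    return msg_level <= configured_level

# With the threshold at 6 (informational), levels 0-6 pass and 7 does not.
allowed = [lvl for lvl in range(8) if passes(lvl, 6)]
```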
Table 6 System information levels

Severity       Level  Keyword in commands  Description
Emergency      0      emergencies          The system is unusable. For example, the system authorization has expired.
Alert          1      alerts               Action must be taken immediately to solve a serious problem. For example, traffic on an interface exceeds the upper limit.
Critical       2      critical             Critical condition. For example, the device temperature exceeds the upper limit, the power module fails, or the fan tray fails.
Error          3      errors               Error condition. For example, the link state changes or a storage card is unplugged.
Warning        4      warnings             Warning condition. For example, an interface is disconnected, or the memory resources are used up.
Notification   5      notifications        Normal but significant condition. For example, a terminal logs in to the device, or the device reboots.
Informational  6      informational        Informational message. For example, a command or a ping operation is executed.
Debug          7      debugging            Debugging message.
Output channels and destinations
Table 7 shows the output channels and destinations.
The system supports ten channels. By default, channels 0 through 6 and channel 9 are configured with channel names and output destinations. You can change these default settings as needed. You can also configure channels 7 and 8 and associate them with specific output destinations as needed.
You can use the info-center channel name command to change the name of an information channel.
Each output destination receives information from only one information channel, but each information
channel can output information to multiple output destinations.
Table 7 Default information channels and output destinations

Channel  Default name  Default output destination  Information received by default
0        console       Console                     Log, trap, and debugging information
1        monitor       Monitor terminal            Log, trap, and debugging information
2        loghost       Log host                    Log, trap, and debugging information
3        trapbuffer    Trap buffer                 Trap information
4        logbuffer     Log buffer                  Log and debugging information
5        snmpagent     SNMP module                 Trap information
6        channel6      Web interface               Log information
7        channel7      Not specified               Log, trap, and debugging information
8        channel8      Not specified               Log, trap, and debugging information
9        channel9      Log file                    Log, trap, and debugging information
Default output rules of system information
A default output rule specifies the system information source modules, information type, and severity
levels for an output destination. Table 8 shows the default output rules.
Table 8 Default output rules

All destinations receive system information from all supported source modules. For each destination and information type, the table shows the default output switch and severity level.

Destination       Log switch / severity    Trap switch / severity   Debug switch / severity
Console           Enabled / Informational  Enabled / Debug          Enabled / Debug
Monitor terminal  Enabled / Informational  Enabled / Debug          Enabled / Debug
Log host          Enabled / Informational  Enabled / Debug          Disabled / Debug
Trap buffer       Disabled / Informational Enabled / Informational  Disabled / Debug
Log buffer        Enabled / Informational  Disabled / Debug         Disabled / Debug
SNMP module       Disabled / Debug         Enabled / Informational  Disabled / Debug
Web interface     Enabled / Debug          Enabled / Debug          Disabled / Debug
Log file          Enabled / Debug          Enabled / Debug          Disabled / Debug
System information formats
The following shows the original format of system information, which might be different from what you see.
The actual system information format depends on the log resolution tool you use.
Formats
The system information format varies with output destinations. See Table 9.
Table 9 System information formats

Output destination: console, monitor terminal, log buffer, trap buffer, SNMP module, or log file
  Format: timestamp sysname module/level/digest: content
  Example: %Jun 26 17:08:35:809 2008 Sysname SHELL/4/LOGIN: VTY login from 1.1.1.1.

Output destination: log host, HP format
  Format: <PRI>timestamp Sysname %%vvmodule/level/digest: source content
  Example: <189>Oct 9 14:59:04 2009 Sysname %%10SHELL/5/SHELL_LOGIN(l): VTY logged in from 192.168.1.21.

Output destination: log host, UNICOM format
  Format: <PRI>timestamp Sysname vvmodule/level/serial_number: content
  Examples:
  <186>Oct 13 16:48:08 2000 Sysname 10IFNET/2/210231a64jx073000020: log_type=port;content=GigabitEthernet4/0/1 link status is DOWN.
  <186>Oct 13 16:48:08 2000 Sysname 10IFNET/2/210231a64jx073000020: log_type=port;content=Line protocol on the interface GigabitEthernet4/0/1 is DOWN.
Field description

PRI (priority)—Calculated with the formula facility*8+level, where:
• facility is the facility name. It can be configured with info-center loghost. It is used to identify different log sources on the log host, and to query and filter logs from specific log sources.
• level ranges from 0 to 7. See Table 6 for more information.
The priority field is available only for information that is sent to the log host.

Timestamp—Records the time when the system information was generated. System information sent to the log host and that sent to the other destinations have different precisions, and their timestamp formats are configured with different commands. See Table 10 and Table 11 for more information.

Sysname (host name or host IP address)—If the system information that is sent to a log host is in the UNICOM format, and the info-center loghost source command is configured, or the vpn-instance vpn-instance-name option is provided in the info-center loghost command, the sysname field is displayed as the IP address of the device that generated the system information. If the system information is in the HP format, the field is displayed as the system name of the device that generated the system information. You can use the sysname command to modify the local system name. For more information, see Fundamentals Command Reference.

%% (vendor ID)—Indicates that the information was generated by an HP device. It exists only in system information sent to a log host.

vv (version information)—Identifies the version of the log, and has a value of 10. It exists only in system information sent to a log host.

Module—Specifies the source module name. You can execute the info-center source ? command in system view to view the module list.

Level (severity)—System information is divided into eight severity levels from 0 to 7. See Table 6 for more information about severity levels. You cannot change the system information levels generated by modules. However, you can use the info-center source command to control the output of system information based on severity levels.

Digest—Briefly describes the content of the system information. It contains a string of up to 32 characters. For system information destined to the log host:
• If the string ends with (l), the information is log information.
• If the string ends with (t), the information is trap information.
• If the string ends with (d), the information is debugging information.

Serial number—Indicates the serial number of the device that generated the system information. It is displayed only if the system information sent to the log host is in the UNICOM format.

source—This optional field identifies the source of the information. It is displayed only if the system information is sent to a log host in the HP format. It can take one of the following values:
• Slot number of a card.
• IP address of the log sender.

content—Contains the content of the system information.
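The HP log-host format above can be split mechanically into these fields, and the PRI value decoded with the stated formula facility*8+level. The sketch below is illustrative only; the regular expression is our own simplification and covers the example line, not every corner case of the format.

```python
import re

# Split an HP-format log-host line into the fields described above and
# decode PRI with divmod(pri, 8) -> (facility, severity).
HP_FORMAT = re.compile(
    r"<(?P<pri>\d+)>"                # PRI: present only in log-host output
    r"(?P<timestamp>.+?) "
    r"(?P<sysname>\S+) "
    r"%%(?P<vv>\d\d)"                # vendor flag %% plus version (10)
    r"(?P<module>\w+)/(?P<level>\d)/(?P<digest>\w+\(\w\)): "
    r"(?P<content>.*)")

def parse(line):
    fields = HP_FORMAT.match(line).groupdict()
    pri = int(fields["pri"])
    fields["facility"], fields["severity"] = divmod(pri, 8)
    return fields

msg = ("<189>Oct 9 14:59:04 2009 Sysname %%10SHELL/5/SHELL_LOGIN(l): "
       "VTY logged in from 192.168.1.21.")
parsed = parse(msg)
# PRI 189 = 23*8 + 5: facility 23 (local7), severity 5 (notifications)
```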
Table 10 Timestamp precisions and configuration commands

Destined to the log host: precision is in seconds; set the timestamp format with the info-center timestamp loghost command.
Destined to the console, monitor terminal, log buffer, and log file: precision is in milliseconds; set the timestamp format with the info-center timestamp command.
Table 11 Description of the timestamp parameters

boot—Time since system startup, in the format xxx.yyy, where xxx represents the higher 32 bits and yyy the lower 32 bits of the milliseconds elapsed. System information sent to all destinations other than the log host supports this parameter.
  Example: %0.109391473 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully. (0.109391473 is a timestamp in the boot format.)

date—Current date and time, in the format mm dd hh:mm:ss:xxx yyyy. All system information supports this parameter.
  Example: %May 30 05:36:29:579 2003 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully. (May 30 05:36:29:579 2003 is a timestamp in the date format.)

iso—Timestamp format stipulated in ISO 8601. Only system information that is sent to the log host supports this parameter.
  Example: <189>2003-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully. (2003-05-30T06:42:44 is a timestamp in the iso format.)

none—No timestamp is included. All system information supports this parameter.
  Example: % Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.

no-year-date—Current date and time without year information, in the format mm dd hh:mm:ss:xxx. Only system information that is sent to the log host supports this parameter.
  Example: <189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully. (May 30 06:44:22 is a timestamp in the no-year-date format.)
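The date-style timestamp above (mm dd hh:mm:ss:xxx yyyy, millisecond precision) can be reproduced with standard date formatting. The sketch below is illustrative and assumes an English month-name locale:

```python
from datetime import datetime

def date_timestamp(t):
    """Render a datetime in the guide's "date" style:
    abbreviated month, day, hh:mm:ss, milliseconds, then year."""
    ms = t.microsecond // 1000
    return t.strftime("%b %d %H:%M:%S") + ":%03d %d" % (ms, t.year)

stamp = date_timestamp(datetime(2003, 5, 30, 5, 36, 29, 579000))
```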
Information center configuration task list

Complete the following tasks as needed (all tasks are optional):
• Outputting system information to the console
• Outputting system information to the monitor terminal
• Outputting system information to a log host
• Outputting system information to the trap buffer
• Outputting system information to the log buffer
• Outputting system information to the SNMP module
• Saving system information to the log file
• Enabling synchronous information output
• Configuring the minimum age of syslog messages
• Disabling an interface from generating link up/down logging information

Configurations for the information output destinations function independently.
Outputting system information to the console
To output system information to the console:

1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output channel for the console.
   Command: info-center console channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the console through channel 0 (console).
5. Configure an output rule for the console.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. By default, the timestamp format for log, trap, and debugging information is date.
7. Return to user view.
   Command: quit
8. Enable system information output to the console.
   Command: terminal monitor
   Remarks: Optional. The default setting is enabled.
9. Enable the display of system information on the console.
   Commands:
   terminal debugging (enables the display of debugging information on the console)
   terminal logging (enables the display of log information on the console)
   terminal trapping (enables the display of trap information on the console)
   Remarks: Optional. By default, the console displays log and trap information and discards debugging information.
Outputting system information to the monitor terminal
Monitor terminals are terminals that log in to the device through the AUX, VTY, or TTY user interface.
To output system information to the monitor terminal:

1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output channel for the monitor terminal.
   Command: info-center monitor channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the monitor terminal through channel 1 (known as monitor).
5. Configure an output rule for the monitor terminal.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. By default, the timestamp format for log, trap, and debugging information is date.
7. Return to user view.
   Command: quit
8. Enable system information output to the monitor terminal.
   Command: terminal monitor
   Remarks: The default setting is disabled. You must execute this command before you can enable the display of debugging, log, and trap information on the monitor terminal.
9. Enable the display of system information on the monitor terminal.
   Commands:
   terminal debugging (enables the display of debugging information on the monitor terminal)
   terminal logging (enables the display of log information on the monitor terminal)
   terminal trapping (enables the display of trap information on the monitor terminal)
   Remarks: Optional. By default, the monitor terminal displays log and trap information and discards debugging information.
Outputting system information to a log host
To output system information to a log host:

1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output rule for the log host.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
5. Specify the source IP address for the log information.
   Command: info-center loghost source interface-type interface-number
   Remarks: Optional. By default, the source IP address of log information is the primary IP address of the matching route's egress interface.
6. Configure the timestamp format for system information output to a log host.
   Command: info-center timestamp loghost { date | iso | no-year-date | none }
   Remarks: Optional. date by default.
7. Set the format of the system information sent to a log host.
   Commands (use either method):
   info-center format unicom (sets the format to UNICOM)
   undo info-center format (sets the format to HP)
   Remarks: Optional. HP by default.
8. Specify a log host and configure related parameters.
   Command: info-center loghost [ vpn-instance vpn-instance-name ] { host-ipv4-address | ipv6 host-ipv6-address } [ port port-number ] [ channel { channel-number | channel-name } | facility local-number ] *
   Remarks: By default, no log host or related parameters are specified. If no channel is specified when outputting system information to a log host, the system uses channel 2 (loghost) by default. The value of the port-number argument must be the same as the value configured on the log host. Otherwise, the log host cannot receive system information.
Outputting system information to the trap buffer
The trap buffer only receives trap information, and discards log and debug information.
To output system information to the trap buffer:
1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output channel for the trap buffer and set the buffer size.
   Command: info-center trapbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the trap buffer through channel 3 (known as trapbuffer), and the default buffer size is 256.
5. Configure an output rule for the trap buffer.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debugging information is date by default.
Outputting system information to the log buffer
The log buffer only receives log information, and discards trap and debug information.
To output system information to the log buffer:
1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output channel for the log buffer and set the buffer size.
   Command: info-center logbuffer [ channel { channel-number | channel-name } | size buffersize ] *
   Remarks: Optional. By default, system information is output to the log buffer through channel 4 (known as logbuffer), and the default buffer size is 10240.
5. Configure an output rule for the log buffer.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debugging information is date by default.
Outputting system information to the SNMP module
The SNMP module only receives trap information, and discards log and debug information.
To monitor the device running status, trap information is usually sent to the SNMP network management
system (NMS). For this purpose, you must configure output of traps to the SNMP module, and set the trap
sending parameters for the SNMP module. For more information about SNMP, see "Configuring SNMP."
To output system information to the SNMP module:
1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Name the channel with a specified channel number.
   Command: info-center channel channel-number name channel-name
   Remarks: Optional. See Table 7 for default channel names.
4. Configure an output channel for the SNMP module.
   Command: info-center snmp channel { channel-number | channel-name }
   Remarks: Optional. By default, system information is output to the SNMP module through channel 5 (known as snmpagent).
5. Configure an output rule for the SNMP module.
   Command: info-center source { module-name | default } channel { channel-number | channel-name } [ debug { level severity | state state } * | log { level severity | state state } * | trap { level severity | state state } * ] *
   Remarks: Optional. See "Default output rules of system information."
6. Configure the timestamp format.
   Command: info-center timestamp { debugging | log | trap } { boot | date | none }
   Remarks: Optional. The timestamp format for log, trap, and debugging information is date by default.
Saving system information to the log file
Perform this task to enable saving system information to the log file at a specific interval, or to manually save system information to the log file.
System information is first saved to the log file buffer. The system writes the information from the log file buffer to the log file at the specified interval. You can also manually save the information while the device is not busy. After saving information from the log file buffer to the log file, the system clears the log file buffer.
The log file has a specific capacity. When the capacity is reached, the system deletes the earliest
messages and writes new messages into the log file.
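The capacity behavior described above is a bounded, first-in-first-out store: once the quota is reached, the earliest messages are dropped to make room for new ones. A minimal sketch (illustrative only; the real quota is in bytes, set with info-center logfile size-quota, whereas a message count stands in for it here):

```python
from collections import deque

class LogFile:
    """Bounded log store: when full, the oldest entries are discarded
    so that new messages can always be written."""
    def __init__(self, capacity):
        self.messages = deque(maxlen=capacity)  # oldest entries fall off

    def write(self, msg):
        self.messages.append(msg)

log = LogFile(capacity=3)
for i in range(5):
    log.write("msg%d" % i)
# Only the 3 newest messages remain: msg2, msg3, msg4
```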
To save system information to the log file:
1. Enter system view.
   Command: system-view
2. Enable the information center.
   Command: info-center enable
   Remarks: Optional. Enabled by default.
3. Enable saving system information to the log file.
   Command: info-center logfile enable
   Remarks: Optional. Enabled by default.
4. Configure the interval at which the system saves system information in the log file buffer to the log file.
   Command: info-center logfile frequency freq-sec
   Remarks: Optional. The default saving interval is 60 seconds.
5. Enable log file overwrite-protection.
   Command: info-center logfile overwrite-protection [ all-port-powerdown ]
   Remarks: Optional. By default, log file overwrite-protection is disabled. This feature is only supported in FIPS mode.
6. Configure the maximum size of the log file.
   Command: info-center logfile size-quota size
   Remarks: Optional. The default setting is 2 MB. To ensure normal operation, set the size argument to a value between 1 MB and 10 MB.
7. Configure the directory to save the log file.
   Command: info-center logfile switch-directory dir-name
   Remarks: Optional. By default, the log file is saved in the logfile directory under the root directory of the storage device (the root directory varies with devices). The configuration made by this command cannot survive a reboot or an active/standby switchover.
8. Manually save the log file buffer content to the log file.
   Command: logfile save
   Remarks: Optional. Available in any view. By default, the system saves logs in the log file buffer to the log file at the interval configured by the info-center logfile frequency command.
Enabling synchronous information output
The output of system logs interrupts ongoing configuration operations, forcing you to scroll back to find the commands you previously entered. Synchronous information output redisplays your previous input after the log output, followed by a command prompt in command editing mode or a [Y/N] string in interaction mode, so you can continue your operation from where you were interrupted.
If system information, such as log information, is output before you enter any information at the current command line prompt, the system does not redisplay the command line prompt after the system information output.
If system information is output while you are entering interactive information (other than Y/N confirmation information), the system displays your previous input in a new line but does not display the command line prompt.
To enable synchronous information output:

1. Enter system view.
   Command: system-view
2. Enable synchronous information output.
   Command: info-center synchronous
   Remarks: Disabled by default.
Disabling an interface from generating link up/down logging information
By default, all interfaces generate link up or link down log information when the interface state changes. In some cases, you might want to disable specific interfaces from generating this information. For example:
• You are concerned only about the states of some interfaces. In this case, you can use this function to disable the other interfaces from generating link up and link down log information.
• An interface is unstable and continuously outputs log information. In this case, you can disable the interface from generating link up and link down log information.
Use the default setting in normal cases to avoid affecting interface status monitoring.
To disable an interface from generating link up/down logging information:

1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view, Layer 3 Ethernet interface view, or VLAN interface view.
   Command: interface interface-type interface-number
3. Disable the interface from generating link up or link down logging information.
   Command: undo enable log updown
   Remarks: By default, all interfaces generate link up and link down logging information when the state changes.
Configuring the minimum age of syslog messages
The minimum age specifies how long a syslog message must be kept before it can be overwritten by new
messages. The default setting is 0.
To specify the minimum age of syslog messages:

1. Enter system view.
   Command: system-view
2. Configure the minimum age of syslog messages.
   Command: info-center syslog min-age hours
   Remarks: The default minimum age is 0.
Displaying and maintaining information center
Task: Display information about information channels.
  Command: display channel [ channel-number | channel-name ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display information center configuration information.
  Command: display info-center [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the state and the log information of the log buffer.
  Command: display logbuffer [ reverse ] [ level severity | size buffersize | slot slot-number ] * [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the summary of the log buffer.
  Command: display logbuffer summary [ level severity | slot slot-number ] * [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the content of the log file buffer.
  Command: display logfile buffer [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the configuration of the log file.
  Command: display logfile summary [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Display the state and the trap information of the trap buffer.
  Command: display trapbuffer [ reverse ] [ size buffersize ] [ | { begin | exclude | include } regular-expression ]
  Remarks: Available in any view.
Task: Clear the log buffer.
  Command: reset logbuffer
  Remarks: Available in user view.
Task: Clear the trap buffer.
  Command: reset trapbuffer
  Remarks: Available in user view.
Information center configuration examples
Outputting log information to the console
Network requirements
Configure the router to send ARP and IP log information that has a severity level of at least informational
to the console.
Figure 58 Network diagram
Configuration procedure
# Enable the information center.
<Sysname> system-view
[Sysname] info-center enable
# Use channel console to output log information to the console. (This step is optional because it is the
default setting.)
[Sysname] info-center console channel console
# Disable the output of log, trap, and debugging information of all modules on channel console.
[Sysname] info-center source default channel console debug state off log state off trap
state off
To avoid output of unnecessary information, disable the output of log, trap, and debugging information
of all modules on the specified channel (console in this example), and then configure the output rule as
needed.
# Configure an output rule to enable the device to send ARP and IP log information that has a severity
level of at least informational to the console.
[Sysname] info-center source arp channel console log level informational state on
[Sysname] info-center source ip channel console log level informational state on
[Sysname] quit
# Enable the display of log information on the console. (This function is enabled by default.)
<Sysname> terminal monitor
Info: Current terminal monitor is on.
<Sysname> terminal logging
Info: Current terminal logging is on.
Now, if the ARP and IP modules generate log information, the information center automatically sends the
log information to the console.
Outputting log information to a UNIX log host
Network requirements
Configure the router to send ARP and IP log information that has a severity level of at least informational
to the UNIX log host at 1.2.0.1/16.
Figure 59 Network diagram
Configuration procedure
Before the configuration, make sure the router and the log host can reach each other. (Details not shown.)
1.
Configure the router:
# Enable the information center.
<Sysname> system-view
[Sysname] info-center enable
# Specify the log host 1.2.0.1/16, use channel loghost to output log information (optional,
loghost by default), and specify local4 as the logging facility.
[Sysname] info-center loghost 1.2.0.1 channel loghost facility local4
# Disable the output of log, trap, and debugging information of all modules on channel loghost.
[Sysname] info-center source default channel loghost debug state off log state off
trap state off
To avoid outputting unnecessary information, disable the output of log, trap, and debugging
information on the specified channel (loghost in this example) before you configure an output rule.
# Configure an output rule to output to the log host ARP and IP log information that has a severity
level of at least informational.
[Sysname] info-center source arp channel loghost log level informational state on trap
state off
[Sysname] info-center source ip channel loghost log level informational state on trap
state off
2.
Configure the log host:
The following configuration was performed on Solaris. The configuration on UNIX systems from other vendors is similar.
a. Log in to the log host as a root user.
b. Create a subdirectory named Router in directory /var/log/, and then create file info.log in
the Router directory to save logs from the router.
# mkdir /var/log/Router
# touch /var/log/Router/info.log
c. Edit the file syslog.conf in directory /etc/ and add the following contents.
# Router configuration messages
local4.info    /var/log/Router/info.log
In this configuration, local4 is the name of the logging facility that the log host uses to receive
logs. info is the informational level. The UNIX system records the log information that has a
severity level of at least informational to the file /var/log/Router/info.log.
NOTE:
Be aware of the following issues while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the router by using the info-center loghost and info-center source
commands. Otherwise the log information might not be output properly to the log host.
d. Display the process ID of syslogd, and then restart or signal the syslogd process so that it rereads the configuration and the settings take effect.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &
Now, the system can record log information into the log file.
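The facility (local4) and severity (informational) configured on the router and in /etc/syslog.conf must match because the syslog daemon combines them into a single priority value when matching rules. The following Python sketch is purely illustrative (it is not part of the device or log host configuration) and shows the standard encoding from RFC 3164:

```python
# Syslog priority encoding (RFC 3164): PRI = facility * 8 + severity.
# A selector such as "local4.info" in /etc/syslog.conf matches messages
# whose facility is local4 and whose severity is informational or higher.
FACILITIES = {"local4": 20, "local5": 21}   # local0 starts at code 16
SEVERITIES = {"informational": 6, "warning": 4, "error": 3}

def pri(facility: str, severity: str) -> int:
    """Compute the <PRI> value carried in a syslog packet header."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

print(pri("local4", "informational"))  # 166
```

If the router sends facility local4 but the host rule names local5, the computed priorities differ and the rule never matches, which is why the NOTE above requires identical settings on both sides.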
Outputting log information to a Linux log host
Network requirements
Configure the router to send log information that has a severity level of at least informational to the Linux
log host at 1.2.0.1/16.
Figure 60 Network diagram
Configuration procedure
Before the configuration, make sure the router and the log host can reach each other. (Details not shown.)
1.
Configure the router:
# Enable the information center.
<Sysname> system-view
[Sysname] info-center enable
# Specify the host 1.2.0.1/16 as the log host, use the channel loghost to output log information
(optional, loghost by default), and specify local5 as the logging facility.
[Sysname] info-center loghost 1.2.0.1 channel loghost facility local5
# Configure an output rule to output to the log host the log information that has a severity level of
at least informational.
[Sysname] info-center source default channel loghost log level informational state
on debug state off trap state off
Disable the output of unnecessary information of all modules on the specified channel in the output
rule.
2.
Configure the log host:
a. Log in to the log host as a root user.
b. Create a subdirectory named Router in the directory /var/log/, and create file info.log in the
Router directory to save logs from the router.
# mkdir /var/log/Router
# touch /var/log/Router/info.log
c. Edit the file syslog.conf in the directory /etc/ and add the following contents.
# Router configuration messages
local5.info    /var/log/Router/info.log
In this configuration, local5 is the name of the logging facility that the log host uses to receive
logs. info is the informational level. The Linux system records log information with a severity
level of informational or higher to the file /var/log/Router/info.log.
NOTE:
Be aware of the following issues while editing file /etc/syslog.conf:
• Comments must be on a separate line and must begin with a pound sign (#).
• No redundant spaces are allowed after the file name.
• The logging facility name and the information level specified in the /etc/syslog.conf file must be
identical to those configured on the router by using the info-center loghost and info-center source
commands. Otherwise the log information might not be output properly to the log host.
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd using the -r
option for the configuration to take effect.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &
Make sure the syslogd process is started with the -r option on a Linux log host.
Now, the system can record log information into the log file.
Configuring flow logging
Flow logging records users' access to external networks. The device classifies flows by 5-tuple
information and generates flow logs. The 5-tuple information includes source IP address, destination IP
address, source port, destination port, and protocol number. The flow logs contain the 5-tuple
information of flows and the numbers of received and sent bytes.
Flow logging has two versions: version 1.0 and version 3.0. They differ slightly in log format, as
shown in Table 12 and Table 13.
Table 12 Log format for flow logging 1.0

SIP: Source IP address.
DIP: Destination IP address.
SPORT: TCP/UDP source port number.
DPORT: TCP/UDP destination port number.
STIME: Start time of the flow, in seconds, counted from 1970/01/01 00:00.
ETIME: End time of the flow, in seconds, counted from 1970/01/01 00:00.
PROT: Protocol number.
OPERATOR: Indicates the reason why the flow ended.
RESERVED: For future applications.
Table 13 Log format for flow logging version 3.0
Prot: Protocol number.
Operator: Indicates the reason why the flow ended.
IpVersion: IP packet version.
TosIPv4: ToS field of the IPv4 packet.
SourceIP: Source IP address.
SrcNatIP: Source IP address after Network Address Translation (NAT).
DestIP: Destination IP address.
DestNatIP: Destination IP address after NAT.
SrcPort: TCP/UDP source port number.
SrcNatPort: TCP/UDP source port number after NAT.
DestPort: TCP/UDP destination port number.
DestNatPort: TCP/UDP destination port number after NAT.
StartTime: Start time of the flow, in seconds, counted from 1970/01/01 00:00.
EndTime: End time of the flow, in seconds, counted from 1970/01/01 00:00.
InTotalPkg: Number of packets received.
InTotalByte: Number of bytes received.
OutTotalPkg: Number of packets sent.
OutTotalByte: Number of bytes sent.
Reserved1: Reserved in version 0x02 (FirewallV200R001). In version 0x03 (FirewallV200R005), the first byte is the source VPN ID, the second byte is the destination VPN ID, and the third and fourth bytes are reserved for future applications.
Reserved2: For future applications.
Reserved3: For future applications.
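A collector that receives version 3.0 flow logs decodes them field by field in the order listed above. This guide does not give the on-wire width of each field, so the fixed-width layout in the following Python sketch is an assumption for illustration only; a real decoder must use the widths from the flow logging protocol documentation:

```python
import struct
from datetime import datetime, timezone

# Hypothetical fixed-width layout for a flow logging 3.0 record.
# Field order follows Table 13; the byte widths are ASSUMED, not specified
# in this guide: one byte each for Prot/Operator/IpVersion/TosIPv4, 4-byte
# IPv4 addresses, 2-byte ports, and 4-byte times and counters.
RECORD = struct.Struct("!BBBB4s4s4s4sHHHHIIIIII")

def _ipv4(raw: bytes) -> str:
    return ".".join(str(b) for b in raw)

def parse(record: bytes) -> dict:
    (prot, operator, ip_ver, tos,
     src, src_nat, dst, dst_nat,
     sport, sport_nat, dport, dport_nat,
     start, end, in_pkts, in_bytes, out_pkts, out_bytes) = RECORD.unpack(record)
    return {
        "Prot": prot,
        "SourceIP": _ipv4(src),
        "DestIP": _ipv4(dst),
        "SrcPort": sport,
        "DestPort": dport,
        # StartTime/EndTime are seconds counted from 1970/01/01 00:00.
        "StartTime": datetime.fromtimestamp(start, tz=timezone.utc).isoformat(),
        "InTotalByte": in_bytes,
        "OutTotalByte": out_bytes,
    }
```

The sketch decodes only the fields a monitoring application typically needs; the NAT and reserved fields are unpacked but ignored.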
Flow logging configuration task list
Configuring the flow logging version: Optional.
Configuring the source address for flow log packets: Optional.
Exporting flow logs (to a log server or to the information center): Required. Use either method.
Configuring the timestamp for flow logs: Optional.
Configuring the flow logging version
Configure the flow logging version that the log receiver supports. The receiver cannot resolve flow logs
if it does not support the specified flow logging version.
To configure the flow logging version:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the flow logging version.
Command: userlog flow export version version-number
Remarks: Optional. The default version is 1.0. If you configure the flow logging version multiple times, only the most recent configuration takes effect.
Configuring the source address for flow log packets
A source IP address uniquely identifies the sender of a packet. Suppose Device A sends flow logs to
Device B. Device A uses the specified IP address instead of the actual egress interface address as the
source IP address of the packets. Even though Device A sends packets to Device B through different
ports, Device B can identify packets from Device A by their source IP address. This function also
simplifies the configuration of ACLs and security policies: you only need to specify one address to
filter packets from or to the device.
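The mechanism above can be demonstrated with ordinary UDP sockets: binding the sending socket pins the source address of every outgoing datagram, and the receiver filters on that address. This Python sketch is an illustration of the principle only (it uses loopback addresses, not the router's commands):

```python
import socket

# Receiver standing in for Device B: it accepts only datagrams whose source
# address matches the expected logging source (an assumption of this sketch).
EXPECTED_SOURCE = "127.0.0.1"

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))           # OS picks a free port
recv.settimeout(5)
port = recv.getsockname()[1]

# Sender standing in for Device A: bind() fixes the source address of the
# datagrams, mimicking the effect of "userlog flow export source-ip".
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.bind(("127.0.0.1", 0))
send.sendto(b"flow-log-record", ("127.0.0.1", port))

data, (src_ip, src_port) = recv.recvfrom(1024)
accepted = (src_ip == EXPECTED_SOURCE)
print(accepted)  # True
```

Filtering on one stable source address is exactly what lets you write a single ACL rule for all flow log traffic from the device.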
To configure the source address for flow log packets:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Specify the source IP address of flow log packets.
Command: userlog flow export source-ip ip-address
Remarks: Optional. By default, the source IP address of flow log packets is the IP address of the egress interface.
Exporting flow logs
Flow logs can be exported in two ways:
•	Flow logs are encapsulated into UDP packets and sent to a remote log server, as shown in Figure 62. The log server analyzes the flow logs and displays them by class, enabling remote monitoring.
•	Flow logs in the format of system information are exported to the information center of the device. You can set system information output parameters for the information center to control the output destinations of the flow logs. For more information about the information center, see "Configuring the information center."
The two export approaches are mutually exclusive. If you configure both approaches, the system
automatically exports flow logs to the information center.
Exporting flow logs to a log server
On the 6602 router:
You can specify at most two log servers of the same or different types. There are three types of log
servers: the VPN flow logging server, the IPv4 flow logging server, and the IPv6 flow logging server. If
you have already specified two servers, you must delete one before you can specify a new one. If you
specify a new server that has the same IP address as the current server but differs in other settings, the
new configuration overwrites the previous one.
On the HSR6602/6604/6608/6616 router:
You must specify flow logging servers for each interface card separately. The router supports at most
two log servers of the same or different types per interface card. The server types are the same as on
the 6602. If you have already specified two servers for an interface card, you must delete one before
you can specify a new one. If you specify a new server that has the same IP address as the current
server but differs in other settings, the new configuration overwrites the previous one.
Exporting flow logs to an IPv4 log server

Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the IPv4 address and UDP port number of the log server.
Command: userlog flow export [ slot slot-number ] [ vpn-instance vpn-instance-name ] host ipv4-address udp-port
Remarks: By default, no IPv4 log server is configured.

Exporting flow logs to an IPv6 log server

Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the IPv6 address and UDP port number of the log server.
Command: userlog flow export [ slot slot-number ] [ vpn-instance vpn-instance-name ] host ipv6 ipv6-address udp-port
Remarks: By default, no IPv6 log server is configured.
Exporting flow logs to the information center
Exporting flow logs to the information center occupies device storage space, so use this export approach
only when the log volume is small. Flow logs exported to the information center have a severity level
of informational.
To export flow logs to the information center:
Step
Command
Remarks
1.
Enter system view.
system-view
N/A
2.
Export flow logs to the
information center.
userlog flow syslog
Flow logs are exported to the log
server by default.
Configuring the timestamp for flow logs
Perform this task to timestamp flow logs in the local time or Coordinated Universal Time (UTC).
To configure the timestamp for flow logs:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure the device to timestamp flow logs in the local time.
Command: userlog flow export timestamps localtime
Remarks: By default, flow logs are timestamped in Coordinated Universal Time (UTC). Flow logs exported to the information center are always timestamped in the local time, regardless of this configuration.
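The StartTime/EndTime fields in flow logs are plain epoch-second counters, so the UTC-versus-local-time choice only affects how a timestamp is rendered, not the instant it names. A small illustrative Python sketch (independent of the device commands) makes the distinction concrete:

```python
from datetime import datetime, timezone

# A flow EndTime value: seconds counted from 1970/01/01 00:00 (the Unix epoch).
end_time = 1_400_000_000

# Rendered in UTC (the device default for exported flow logs).
utc = datetime.fromtimestamp(end_time, tz=timezone.utc)

# Rendered in the host's local time zone (what "timestamps localtime" selects).
local = datetime.fromtimestamp(end_time).astimezone()

print(utc.isoformat())    # 2014-05-13T16:53:20+00:00
print(local.isoformat())  # same instant, shown with the local UTC offset
```

Both renderings refer to the same moment; only the displayed offset changes, which is why mixing UTC and local timestamps across collectors can make flows appear shifted in time.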
Displaying and maintaining flow logging
Task
Command
Remarks
Display the configuration and
statistics about flow logging.
display userlog export slot
slot-number [ | { begin | exclude |
include } regular-expression ]
Available in any view.
Clear statistics of all logs.
reset userlog flow export slot
slot-number
Available in user view.
Clear flow logs in the cache.
reset userlog flow logbuffer slot
slot-number
Available in user view.
Flow logging configuration examples
Configuring flow logging on the 6602 router
Network requirements
As shown in Figure 61, configure flow logging on the device so that the log server can monitor the user's
access to the network.
Figure 61 Network diagram
Loop0
2.2.2.2/24
1.1.1.1/24
Router
1.1.1.4/24
1.2.3.1/24
IP network
Logs
Logs
169.1.1.1/24
Logs
Log server
1.2.3.6/24
Traffic
User
169.1.1.2/24
Configuration procedure
# Configure IP addresses for the interfaces according to the network diagram. Make sure the devices can
reach each other. (Details not shown.)
# Set the flow logging version to 3.0.
<Router> system-view
[Router] userlog flow export version 3
# Export flow logs to the log server with IP address 1.2.3.6:2000.
[Router] userlog flow export host 1.2.3.6 2000
# Configure the source IP address of UDP packets carrying flow logs as 2.2.2.2.
[Router] userlog flow export source-ip 2.2.2.2
Configuration verification
# Display the configuration and statistics about flow logs.
<Router> display userlog export
nat:
  No userlog export is enabled
flow:
  Export Version 3 logs to log server : enabled
  Source address of exported logs     : 2.2.2.2
  Address of log server               : 1.2.3.6 (port: 2000)
  total Logs/UDP packets exported     : 112/87
  Logs in buffer                      : 6
Configuring flow logging on the HSR6602/6604/6608/6616 router
Network requirements
As shown in Figure 62, configure flow logging on the router so the log server can monitor the user's
access to the network.
Figure 62 Network diagram
(Router with Loop0 2.2.2.2/24; user at 169.1.1.2/24; log server at 1.2.3.6/24, reached across an IP network)
Configuration procedure
# Set the flow logging version to 3.0.
<Router> system-view
[Router] userlog flow export version 3
# Export flow logs of the interface card in slot 2 to the log server with IP address 1.2.3.6:2000.
[Router] userlog flow export slot 2 host 1.2.3.6 2000
# Configure the source IP address of UDP packets carrying flow logs as 2.2.2.2.
[Router] userlog flow export source-ip 2.2.2.2
Configuration verification
# Display the configuration and statistics about flow logs of the interface card in slot 2.
<Router> display userlog export slot 2
nat:
  No userlog export is enabled
flow:
  Export Version 3 logs to log server : enabled
  Source address of exported logs     : 2.2.2.2
  Address of log server               : 1.2.3.6 (port: 2000)
  total Logs/UDP packets exported     : 128/91
  Logs in buffer                      : 10
Troubleshooting flow logging
Symptom 1: No flow log is exported.
•	Analysis: No export approach is specified.
•	Solution: Configure flow logging to export flow logs to the information center or to the log server.

Symptom 2: Flow logs cannot be exported to the log server.
•	Analysis: Both export approaches are configured.
•	Solution: Restore the default configuration, and then configure the IP address and UDP port number of the log server.
Configuring sFlow
This feature is available only on SAP interface modules that are operating in bridge mode.
Sampled Flow (sFlow) is a traffic monitoring technology used to collect and analyze traffic statistics.
As shown in Figure 63, the sFlow system involves an sFlow agent embedded in a device and a remote
sFlow collector. The sFlow agent collects interface counter information and packet content information
and encapsulates the sampled information in sFlow packets. When the sFlow packet buffer is full, or the
aging timer of sFlow packets expires, the sFlow agent sends the sFlow packets in UDP datagrams to the
specified sFlow collector. The sFlow collector analyzes the information and displays the results.
sFlow provides the following sampling mechanisms:
•	Flow sampling—Obtains packet content information.
•	Counter sampling—Obtains interface counter information.
Figure 63 sFlow system
sFlow has the following advantages:
•	Supports traffic monitoring on Gigabit and higher-speed networks.
•	Provides good scalability to allow one sFlow collector to monitor multiple sFlow agents.
•	Saves money by embedding the sFlow agent in a device, instead of using a dedicated sFlow agent
device.
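Because sFlow is an open specification, any collector can decode the UDP datagrams the agent sends. The sketch below parses only the fixed sFlow version 5 datagram header (agent address, sequence number, uptime, sample count) for IPv4 agent addresses; it is a simplified illustration following the sFlow v5 specification, not a complete collector:

```python
import struct

def parse_sflow_header(datagram: bytes) -> dict:
    """Parse the fixed header of an sFlow version 5 datagram.

    Layout per the sFlow v5 specification (big-endian):
    version, agent address type (1 = IPv4), agent address,
    sub-agent ID, sequence number, switch uptime (ms), sample count.
    Only IPv4 agent addresses are handled in this sketch.
    """
    version, addr_type = struct.unpack_from("!II", datagram, 0)
    if version != 5 or addr_type != 1:
        raise ValueError("not an sFlow v5 datagram with an IPv4 agent address")
    agent = ".".join(str(b) for b in datagram[8:12])
    sub_agent, seq, uptime_ms, n_samples = struct.unpack_from("!IIII", datagram, 12)
    return {"agent": agent, "sub_agent": sub_agent, "sequence": seq,
            "uptime_ms": uptime_ms, "samples": n_samples}
```

After the header, the datagram carries `samples` flow or counter sample records, which a full collector would decode next.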
Configuring the sFlow agent and sFlow collector information

Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Configure an IP address for the sFlow agent.
Command: sflow agent { ip ip-address | ipv6 ipv6-address }
Remarks: Optional. Not specified by default. The device periodically checks whether the sFlow agent has an IP address. If the sFlow agent has no IP address configured, the device automatically selects an interface IP address for the sFlow agent but does not save the IP address.
NOTE: HP recommends configuring an IP address manually for the sFlow agent. Only one IP address can be specified for the sFlow agent on the device.

Step 3: Configure the sFlow collector information.
Command: sflow collector collector-id { { ip ip-address | ipv6 ipv6-address } | datagram-size size | description text | port port-number | time-out seconds } *
Remarks: By default, the device presets a certain number of sFlow collectors. Use the display sflow command to display the parameters of the preset sFlow collectors.

Step 4: Specify the source IP address of sFlow packets.
Command: sflow source { ip ip-address | ipv6 ipv6-address } *
Remarks: Optional. Not specified by default.
Configuring flow sampling
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enter Ethernet interface view.
Command: interface interface-type interface-number
Remarks: N/A

Step 3: Set the flow sampling mode.
Command: sflow sampling-mode { determine | random }
Remarks: Optional.

Step 4: Specify the number of packets out of which flow sampling samples a packet on the interface.
Command: sflow sampling-rate rate
Remarks: N/A

Step 5: Set the maximum number of bytes of a packet (starting from the packet header) that flow sampling can copy.
Command: sflow flow max-header length
Remarks: Optional. The default setting is 128 bytes. HP recommends using the default value.

Step 6: Specify the sFlow collector for flow sampling.
Command: sflow flow collector collector-id
Remarks: No collector is specified for flow sampling by default.
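In random sampling mode, a sampling rate of N means each packet is sampled with probability 1/N, so the sample count over a traffic run is only statistically close to packets/N. This Python sketch (a simulation, not device code) illustrates that behavior for the rate 4000 used in the example later in this chapter:

```python
import random

def flow_sample(packets, rate, seed=42):
    """Simulate random 1-in-N flow sampling: each packet is picked
    independently with probability 1/rate, as configured with
    the sflow sampling-rate command in random mode."""
    rng = random.Random(seed)  # seeded so the simulation is repeatable
    return [p for p in packets if rng.randrange(rate) == 0]

# Over 1,000,000 packets at rate 4000, expect roughly 250 samples,
# with normal statistical spread around that mean.
samples = flow_sample(range(1_000_000), 4000)
```

This is why collectors scale sampled byte and packet counts back up by the sampling rate when estimating total traffic.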
Configuring counter sampling
Step
Command
Remarks
1.
Enter system view.
system-view
N/A
2.
Enter interface view.
interface interface-type
interface-number
N/A
3.
Set the interval for counter
sampling.
sflow counter interval seconds
Counter sampling is disabled by
default.
4.
Specify the sFlow collector for
counter sampling.
sflow counter collector collector-id
No collector is specified for
counter sampling by default.
Displaying and maintaining sFlow
Task
Command
Remarks
Display sFlow configuration
information.
display sflow [ slot slot-number ] [ |
{ begin | exclude | include }
regular-expression ]
Available in any view.
sFlow configuration example
Network requirements
As shown in Figure 64, enable flow sampling and counter sampling on GigabitEthernet 4/0/1 of the
router to monitor traffic on the port and configure the device to send sampled information to the sFlow
collector through GigabitEthernet 4/0/0.
Figure 64 Network diagram
(Host A 1.1.1.1/16 connects to GE4/0/1 1.1.1.2/16 on the router; the server 2.2.2.2/16 connects to GE4/0/2 2.2.2.1/16; the sFlow collector 3.3.3.2/16 is reached through GE4/0/0 3.3.3.1/16)
Configuration procedure
1.
Configure the sFlow agent and sFlow collector information:
# Configure the IP address of GigabitEthernet 4/0/0 on the device as 3.3.3.1/16.
<Router> system-view
[Router] interface gigabitethernet 4/0/0
[Router-GigabitEthernet4/0/0] ip address 3.3.3.1 16
[Router-GigabitEthernet4/0/0] quit
# Configure the IP address for the sFlow agent.
[Router] sflow agent ip 3.3.3.1
# Configure parameters for an sFlow collector: specify sFlow collector ID 2, IP address 3.3.3.2,
the default port number, and description of netserver for the sFlow collector.
[Router] sflow collector 2 ip 3.3.3.2 description netserver
2.
Configure counter sampling:
# Set the counter sampling interval to 120 seconds.
[Router] interface gigabitethernet 4/0/1
[Router-GigabitEthernet4/0/1] sflow counter interval 120
# Specify sFlow collector 2 for counter sampling.
[Router-GigabitEthernet4/0/1] sflow counter collector 2
3.
Configure flow sampling:
# Set the flow sampling mode and the sampling rate.
[Router-GigabitEthernet4/0/1] sflow sampling-mode random
[Router-GigabitEthernet4/0/1] sflow sampling-rate 4000
# Specify sFlow collector 2 for flow sampling.
[Router-GigabitEthernet4/0/1] sflow flow collector 2
# Display the sFlow configuration and operation information.
[Router-GigabitEthernet4/0/1] display sflow
sFlow Version: 5
sFlow Global Information:
Agent   IP: 3.3.3.1(CLI)
Source  Address:
Collector Information:
ID   IP          Port   Aging   Size   Description
1                6343   0       1400
2    3.3.3.2     6343   N/A     1400   netserver
3                6343   0       1400
4                6343   0       1400
5                6343   0       1400
6                6343   0       1400
7                6343   0       1400
8                6343   0       1400
9                6343   0       1400
10               6343   0       1400
sFlow Port Information:
Interface   CID   Interval(s)   FID   MaxHLen   Rate   Mode     Status
GE4/0/1     2     120           2     128       4000   Random   Active
The output shows that GigabitEthernet 4/0/1 enabled with sFlow is active, the counter sampling
interval is 120 seconds, and the packet sampling rate is 4000.
Troubleshooting sFlow configuration
The remote sFlow collector cannot receive sFlow packets
Symptom
The remote sFlow collector cannot receive sFlow packets.
Analysis
•	The sFlow collector is not specified.
•	sFlow is not configured on the interface.
•	The IP address of the sFlow collector specified on the sFlow agent is different from that of the remote sFlow collector.
•	No IP address is configured for the Layer 3 interface on the device, or the IP address is configured but the UDP packets that have the IP address as the source cannot reach the sFlow collector.
•	The physical link between the device and the sFlow collector fails.
Solution
1.	Check that sFlow is correctly configured by using display sflow.
2.	Check that a correct IP address is configured for the device to communicate with the sFlow collector.
3.	Check the physical link between the device and the sFlow collector.
Configuring gateway mode
Gateway mode is a specific operating mode for the 6600/HSR6600 routers. It enables the router to
deliver more than 10 Gbps of NAT performance, so the router can provide better gateway services for
campus, enterprise, and cybercafé networks.
On a 6600 router that has two MCP MPUs, after you enable or disable gateway mode on the active
MPU, the standby MPU automatically reboots to synchronize its operating mode with the active MPU.
Configuring gateway mode
To enable gateway mode:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enable gateway mode.
Command: gateway-mode
Remarks: By default, the 6600/HSR6600 router operates in normal mode. After you enable gateway mode, you must reboot the router to apply the configuration.
Displaying and maintaining gateway mode

Task: Display gateway-mode forwarding entries.
Command: display gateway-mode forwarding-cache slot slot-number [ source-ip ip-address [ mask-length ] | source-port port-number ] [ destination-ip ip-address [ mask-length ] | destination-port port-number ] [ protocol { tcp | udp | protocol-type } ] *
Command: display gateway-mode forwarding-cache summary slot slot-number
Remarks: Available in any view.

Task: Clear the packet statistics of the gateway-mode forwarding entries.
Command: reset gateway-mode forwarding-cache statistics slot slot-number
Remarks: Available in user view.
Gateway mode configuration example
Network requirements
As shown in Figure 65, enable gateway mode on the router to improve NAT performance.
Figure 65 Network diagram
Configuration procedure
1.
Configure the router.
# Enable gateway mode.
[HP] gateway-mode
Info: Please reboot the device to make the configuration take effect.
# Reboot the router.
2.
Display device information to check if the router is in gateway mode.
<HP>display device
System-mode(Current/After Reboot): Gateway/Gateway
Slot No.   Board type   Status   Primary   SubSlots
----------------------------------------------------
0          HSR6602-XG   Normal   Master    0
1          FIP-10       Normal   N/A       4
The output shows that the router is operating in gateway mode.
3.
Configure routing and NAT.
The configuration commands and procedures in gateway mode are the same as those in normal
mode, and are not shown.
Configuring Host-monitor
Overview
Host-monitor is a traffic monitoring feature that helps quickly identify sources of illegitimate traffic flows
in your network and access basic traffic flow statistics.
This feature automatically imports flow data from NetStream to a host monitor table and allows you to
manually add and delete flow entries. In this table, each flow entry is identified by its source IPv4 address,
destination IPv4 address, IP protocol number, traffic direction (inbound or outbound), interface where the
traffic is monitored, and MPLS L3VPN (optional).
After the host monitor table is populated with all legitimate traffic flows in your network, you can perform
the "fix" action to freeze host monitor table entries. Before you perform that action, all traffic flow entries,
automatically or manually added, are legitimate and set in Unfixed state. After you perform that action,
these flow entries are legitimate with their state changed to Fixed, and all new flow entries automatically
imported from NetStream are illegitimate (or "invalid"). To add new legitimate flows, you must do that
manually.
Handle illegitimate flow entries as appropriate. Host-monitor does not take any action on them.
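The entry life cycle described above (Unfixed before the fix action, Fixed afterwards, Invalid for flows learned after the freeze) can be modeled in a few lines of Python. All class and method names here are illustrative stand-ins for the behavior this section describes, not the device's internal structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """Key fields that identify a host monitor table entry."""
    source: str
    destination: str
    protocol: int
    direction: str      # "inbound" or "outbound"
    interface: str
    vpn: str = ""       # optional MPLS L3VPN

class HostMonitorTable:
    """Toy model of the host monitor table semantics described above."""
    def __init__(self):
        self.entries = {}   # Flow -> "Unfixed", "Fixed", or "Invalid"
        self.fixed = False

    def import_from_netstream(self, flow):
        # Before the fix action, imported flows are legitimate (Unfixed);
        # afterwards, newly learned flows are flagged illegitimate (Invalid).
        if flow not in self.entries:
            self.entries[flow] = "Invalid" if self.fixed else "Unfixed"

    def fixup(self):
        # The "fix" action freezes all legitimate entries.
        for f, state in self.entries.items():
            if state == "Unfixed":
                self.entries[f] = "Fixed"
        self.fixed = True

    def add(self, flow):      # manual "host-monitor add"
        self.entries[flow] = "Fixed" if self.fixed else "Unfixed"

    def delete(self, flow):   # manual "host-monitor delete"
        self.entries.pop(flow, None)
```

The model mirrors the key point of this feature: after the freeze, only manual additions create legitimate entries, so anything NetStream discovers on its own stands out as a candidate source of illegitimate traffic.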
Configuration prerequisites
NetStream has been enabled in the same traffic direction as Host-monitor. For more information about
NetStream, see "Configuring NetStream."
Host-monitor configuration task list
Task
Remarks
Enabling Host-monitor
Required.
Freezing legitimate flow entries
Optional.
Adding legitimate flow entries
Optional.
Deleting a legitimate flow entry
Optional.
Deleting unfixed flow entries
Optional.
Deleting illegitimate flow entries
Optional.
Enabling Host-monitor
To enable Host-monitor:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Enable Host-monitor.
Command: host-monitor { inbound | outbound }
Remarks: By default, Host-monitor is disabled. Make sure Host-monitor is enabled in the same traffic direction as NetStream.
Freezing legitimate flow entries
To freeze legitimate flow entries in the host monitor table:
Step 1: Enter system view.
Command: system-view
Remarks: N/A

Step 2: Freeze legitimate flow entries in the host monitor table.
Command: host-monitor fixup [ slot slot-number ]
Remarks: By default, the flow entries are in Unfixed state.
Adding legitimate flow entries
Step 1: Enter system view.
Command: system-view

Step 2: Add a legitimate flow entry.
Command: host-monitor add source source-ip destination destination-ip protocol protocol interface interface-type interface-number { inbound | outbound } [ vpn-instance vpn-instance-name ] [ slot slot-number ]
Deleting a legitimate flow entry
To delete a fixed or unfixed legitimate flow entry:
Step 1: Enter system view.
Command: system-view

Step 2: Delete a legitimate flow entry.
Command: host-monitor delete source source-ip destination destination-ip protocol protocol interface interface-type interface-number { inbound | outbound } [ vpn-instance vpn-instance-name ] [ slot slot-number ]
Deleting unfixed flow entries
To delete the unfixed flow entries on one or all cards, perform the following task in user view:

Task: Delete unfixed flow entries.
Command:
•	6602: reset host-monitor entry
•	HSR6600/6604/6608/6616: reset host-monitor entry [ slot slot-number [ source-slot source-slot-number ] ]
Deleting illegitimate flow entries
To delete the illegitimate flow entries on one or all cards, perform the following task in user view:

Task: Delete illegitimate flow entries.
Command:
•	6602: reset host-monitor entry invalid
•	HSR6600/6604/6608/6616: reset host-monitor entry invalid [ slot slot-number [ source-slot source-slot-number ] ]
Displaying and maintaining Host-monitor
Task
Command
Remarks
Display the flow entries in the host
monitor table (6602).
display host-monitor [ invalid ]
[ verbose ] [ destination ip-address |
interface interface-type interface-number
| source ip-address ] * [ | { begin |
exclude | include } regular-expression ]
Available in any view.
Display the flow entries in the host
monitor table
(HSR6600/6604/6608/6616).
display host-monitor [ invalid ]
[ verbose ] [ destination ip-address |
interface interface-type interface-number
| source ip-address ] * [ slot slot-number ]
[ | { begin | exclude | include }
regular-expression ]
Available in any view.
Clear the flow statistics in the host
monitor table (6602).
reset host-monitor statistics
Available in user view.
Clear the flow statistics in the host
monitor table
(HSR6600/6604/6608/6616).
reset host-monitor statistics [ slot
slot-number ]
Available in user view.
Host-monitor configuration example
Network requirements
Configure Host-monitor on the router in Figure 66 to monitor the incoming traffic on GigabitEthernet
2/1/1 and outgoing traffic on GigabitEthernet 2/1/2.
Figure 66 Network diagram
Router
GE2/1/3
GE2/1/1
GE2/1/2
Configuration procedure
# Configure the IP address of GigabitEthernet 2/1/1.
<Router> system-view
[Router] interface GigabitEthernet 2/1/1
[Router-GigabitEthernet2/1/1] ip address 192.168.40.1 255.255.255.0
# Configure the IP address of GigabitEthernet 2/1/2.
[Router] interface GigabitEthernet 2/1/2
[Router-GigabitEthernet2/1/2] ip address 192.168.80.1 255.255.255.0
# Enable Host-monitor.
[Router] host-monitor inbound
[Router] host-monitor outbound
# Display brief information about the flow entries automatically imported from NetStream.
[Router] display host-monitor
Total 9 flow(s).
State: Unfixed
Source           Destination      Protocol  Direction  Interface  VPN
-------------------------------------------------------------------------------
192.168.1.102    192.168.1.255    17        Inbound    GE2/1/1
192.168.1.1      239.255.255.250  17        Inbound    GE2/1/1
192.168.20.65    239.255.255.250  17        Inbound    GE2/1/1
56.56.56.44      224.0.0.5        89        Inbound    GE2/1/1
192.168.20.167   192.168.20.255   17        Inbound    GE2/1/1
192.168.20.170   192.168.20.255   17        Inbound    GE2/1/1
192.168.20.191   192.168.20.255   17        Inbound    GE2/1/1
192.168.80.133   192.168.80.131   1         Outbound   GE2/1/2
40.0.0.3         40.0.0.255       17        Inbound    GE2/1/1
# Make sure the host monitor table has been populated with all legitimate flows and perform the fix action.
[Router] host-monitor fixup
# Add new legitimate flow entries into the host monitor table.
[Router] host-monitor add source 192.168.40.2 destination 192.168.80.2 protocol 17 interface GigabitEthernet 2/1/1 inbound
[Router] host-monitor add source 192.168.40.2 destination 192.168.80.2 protocol 17 interface GigabitEthernet 2/1/2 outbound
# Delete a fixed flow entry.
[Router] host-monitor delete source 40.0.0.3 destination 40.0.0.255 protocol 17 interface GigabitEthernet 2/1/1 inbound
# Display brief information about the current legitimate flow entries.
[Router] display host-monitor
Total 10 flow(s).
State: Fixed
Source           Destination      Protocol  Direction  Interface  VPN
-------------------------------------------------------------------------------
192.168.1.102    192.168.1.255    17        Inbound    GE2/1/1
192.168.1.1      239.255.255.250  17        Inbound    GE2/1/1
192.168.20.65    239.255.255.250  17        Inbound    GE2/1/1
56.56.56.44      224.0.0.5        89        Inbound    GE2/1/1
192.168.20.167   192.168.20.255   17        Inbound    GE2/1/1
192.168.20.170   192.168.20.255   17        Inbound    GE2/1/1
192.168.20.191   192.168.20.255   17        Inbound    GE2/1/1
192.168.40.2     192.168.80.2     17        Inbound    GE2/1/1
192.168.40.2     192.168.80.2     17        Outbound   GE2/1/2
192.168.80.133   192.168.80.131   1         Outbound   GE2/1/2
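The procedure above follows the Host-monitor entry lifecycle: entries auto-imported from NetStream start out Unfixed, host-monitor fixup freezes the current entries as the legitimate set, and fixed entries are then maintained explicitly with host-monitor add and host-monitor delete. A minimal Python model of this lifecycle (an illustrative sketch only — the class, its methods, and the assumption that auto-import applies only before fixup are this example's own, not device code):

```python
class HostMonitorTable:
    """Illustrative model of the Host-monitor flow table lifecycle.

    Each entry is a (source, destination, protocol, interface, direction)
    tuple. Entries learned from NetStream are held while the table is
    Unfixed; fixup() freezes the current entries as the legitimate set,
    after which entries change only through add() and delete().
    """

    def __init__(self):
        self.entries = set()
        self.state = "Unfixed"

    def learn(self, flow):
        # Auto-import from NetStream; assumed to apply only before fixup.
        if self.state == "Unfixed":
            self.entries.add(flow)

    def fixup(self):
        # Freeze the current entries as the legitimate flow set.
        self.state = "Fixed"

    def add(self, flow):
        # Explicitly add a legitimate flow entry.
        self.entries.add(flow)

    def delete(self, flow):
        # Explicitly remove a flow entry.
        self.entries.discard(flow)
```

Replaying the example with this model (learn 9 flows, fix, add 2, delete 1) leaves 10 fixed entries, matching the final display output.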
Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
•   Product model names and numbers
•   Technical support registration number (if applicable)
•   Product serial numbers
•   Error messages
•   Operating system type and revision level
•   Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/wwalerts
After registering, you will receive email notification of product enhancements, new driver versions,
firmware updates, and other product resources.
Related information
Documents
To find related documents, browse to the Manuals page of the HP Business Support Center website:
http://www.hp.com/support/manuals
•   For related documentation, navigate to the Networking section, and select a networking category.
•   For a complete list of acronyms and their definitions, see HP FlexNetwork Technology Acronyms.
Websites
•   HP.com: http://www.hp.com
•   HP Networking: http://www.hp.com/go/networking
•   HP manuals: http://www.hp.com/support/manuals
•   HP download drivers and software: http://www.hp.com/support/downloads
•   HP software depot: http://www.software.hp.com
•   HP Education: http://www.hp.com/learn
Conventions
This section describes the conventions used in this documentation set.
Command conventions
Convention         Description
Boldface           Bold text represents commands and keywords that you enter literally as shown.
Italic             Italic text represents arguments that you replace with actual values.
[ ]                Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... }    Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.
[ x | y | ... ]    Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.
{ x | y | ... } *  Asterisk-marked braces enclose a set of required syntax choices separated by vertical bars, from which you select at least one.
[ x | y | ... ] *  Asterisk-marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.
&<1-n>             The argument or keyword-and-argument combination before the ampersand (&) sign can be entered 1 to n times.
#                  A line that starts with a pound (#) sign is a comment.
GUI conventions
Convention  Description
Boldface    Window names, button names, field names, and menu items are in bold text. For example, the New User window appears; click OK.
>           Multi-level menus are separated by angle brackets. For example, File > Create > Folder.
Symbols
Convention  Description
WARNING     An alert that calls attention to important information that if not understood or followed can result in personal injury.
CAUTION     An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT   An alert that calls attention to essential information.
NOTE        An alert that contains additional or supplementary information.
TIP         An alert that provides helpful information.
Network topology icons
Represents a generic network device, such as a router, switch, or firewall.
Represents a routing-capable device, such as a router or Layer 3 switch.
Represents a generic switch, such as a Layer 2 or Layer 3 switch, or a router that supports
Layer 2 forwarding and other Layer 2 features.
Represents an access controller, a unified wired-WLAN module, or the switching engine
on a unified wired-WLAN switch.
Represents an access point.
Represents a security product, such as a firewall, a UTM, or a load-balancing or security
card that is installed in a device.
Represents a security card, such as a firewall card, a load-balancing card, or a
NetStream card.
Port numbering in examples
The port numbers in this document are for illustration only and might be unavailable on your device.
Index
ABCDEFGHINOPRST
A
Adding legitimate flow entries,191
Alarm group configuration example,106
Applying a QoS policy,127
B
Basic IPv6 NetStream concepts,147
Basic NetStream concepts,131
C
Configuration prerequisites,190
Configuring a QoS policy,127
Configuring access-control rights,62
Configuring attributes of IPv6 NetStream data export,152
Configuring attributes of NetStream export data,140
Configuring counter sampling,185
Configuring different types of traffic mirroring,126
Configuring flow logging,176
Configuring flow sampling,184
Configuring gateway mode,188
Configuring IPv6 NetStream data export,150
Configuring IPv6 NetStream flow aging,153
Configuring Layer 2 remote port mirroring,116
Configuring local port mirroring,114
Configuring match criteria,125
Configuring NetStream data export,138
Configuring NetStream filtering and sampling,137
Configuring NetStream flow aging,142
Configuring NTP authentication,63
Configuring NTP operation modes,57
Configuring optional parameters for NTP,60
Configuring SNMP basic parameters,88
Configuring SNMP logging,90
Configuring SNMP traps,91
Configuring the flow logging version,177
Configuring the local clock as a reference source,60
Configuring the minimum age of syslog messages,171
Configuring the NQA client,12
Configuring the NQA server,11
Configuring the RMON alarm function,102
Configuring the RMON statistics function,101
Configuring the sFlow agent and sFlow collector information,183
Configuring the source address for flow log packets,178
Configuring the timestamp for flow logs,179
Contacting HP,195
Conventions,196
Creating a sampler,109
D
Deleting a legitimate flow entry,191
Deleting illegitimate flow entries,192
Deleting unfixed flow entries,191
Disabling an interface from generating link up/down logging information,170
Displaying and maintaining a sampler,109
Displaying and maintaining flow logging,180
Displaying and maintaining gateway mode,188
Displaying and maintaining Host-monitor,192
Displaying and maintaining information center,171
Displaying and maintaining IPC,85
Displaying and maintaining IPv6 NetStream,154
Displaying and maintaining NetStream,143
Displaying and maintaining NQA,30
Displaying and maintaining NTP,67
Displaying and maintaining port mirroring,121
Displaying and maintaining RMON,103
Displaying and maintaining sFlow,185
Displaying and maintaining SNMP,93
Displaying and maintaining traffic mirroring,128
E
Enabling Host-monitor,190
Enabling IPC performance statistics,84
Enabling IPv6 NetStream,150
Enabling NetStream,136
Enabling synchronous information output,170
Ethernet statistics group configuration example,104
Exporting flow logs,178
F
Flow logging configuration examples,180
Flow logging configuration task list,177
Freezing legitimate flow entries,191
G
Gateway mode configuration example,188
H
History group configuration example,105
Host-monitor configuration example,193
Host-monitor configuration task list,190
I
Information center configuration examples,172
Information center configuration task list,163
IPv6 NetStream configuration examples,155
IPv6 NetStream configuration task list,150
IPv6 NetStream key technologies,148
N
NetStream configuration examples,144
NetStream configuration task list,135
NetStream key technologies,132
NetStream sampling and filtering,135
NQA configuration examples,31
NQA configuration task list,11
NTP configuration examples,68
NTP configuration task list,57
O
Outputting system information to a log host,166
Outputting system information to the console,164
Outputting system information to the log buffer,167
Outputting system information to the monitor terminal,165
Outputting system information to the SNMP module,168
Outputting system information to the trap buffer,166
Overview,9
Overview,51
Overview,83
Overview,86
Overview,99
Overview,109
Overview,112
Overview,125
Overview,131
Overview,147
Overview,158
Overview,190
P
Ping,1
Ping and tracert example,7
Port mirroring configuration examples,121
R
Related information,195
S
Sampler configuration example,110
Saving system information to the log file,169
sFlow configuration example,185
SNMP configuration examples,94
SNMP configuration task list,88
System debugging,5
T
Tracert,3
Traffic mirroring configuration example,128
Traffic mirroring configuration task list,125
Troubleshooting flow logging,182
Troubleshooting sFlow configuration,187