ASR1000 and Cisco Intelligent WAN (IWAN)

ASR1000 System & Solution Architectures
Steven Wood - Principal Engineer, Enterprise Network Group
BRKARC-2001
Session Abstract
Many Service Provider and Enterprise customers are looking to converge their network edge architectures. On the Service Provider side, firewall, security or deep-packet inspection functionality is being integrated into Provider Edge or BNG systems. Similarly, on the Enterprise side, multiple functions are activated in a converged WAN edge router, yielding operational savings and efficiencies.
The Cisco ASR 1000 takes this convergence to the next level. Based on the Cisco Quantum Flow Processor, the ASR 1000 enables the integration of voice, firewall, security or deep-packet inspection services in a single system, with exceptional performance and high-availability support. The processing power of the Quantum Flow Processor allows this integration without the need for additional service modules. This technical seminar describes the system architecture of the ASR 1000. The different hardware modules (route processor, forwarding processor, interface cards) and Cisco IOS XE software modules are described in detail. Examples of how different packet flows traverse an ASR 1000 illustrate how the hardware and software modules work in conjunction. The session also discusses the expected performance characteristics in converged service deployments. Particular attention is given to sample use cases showing how the ASR 1000 can be deployed in different Service Provider and Enterprise architectures in a converged services role. The session is targeted at network engineers and network architects who seek an in-depth understanding of the ASR 1000 system architecture for operational or design purposes. Attendees from both the Service Provider and Enterprise market segments are welcome.
Glossary
AAA: Authentication, Authorization and Accounting
ACL: Access Control List
ACT: Active; referring to the ESP or RP in an ASR 1006
AF1-AF4: Assured Forwarding Per-Hop Behaviour classes 1 to 4
ALG: Application Layer Gateway
ASR: Aggregation Services Router, as in ASR1000
B2B: Business to Business, in the context of WebEx or Telepresence
BB: Broadband
BGP: Border Gateway Protocol
BITS: Building Integrated Timing Supply
BNG: Broadband Network Gateway
BQS: Buffer, Queuing and Scheduling chip on the QFP
BRAS: Broadband Remote Access Server
BW: Bandwidth
CAC: Connection Admission Control
CCO: Cisco Connection Online (www.cisco.com)
CDR: Call Detail Records
CF: Checkpointing Facility
CLI: Command Line Interface
CM: Chassis Manager
CPE: Customer Premises Equipment
CPU: Central Processing Unit
CRC: Cyclic Redundancy Check
Ctrl: Control
DBE: Data Border Element (in a Session Border Controller)
DMVPN: Dynamic Multipoint Virtual Private Network
DPI: Deep Packet Inspection
DSCP: Diffserv Code Point (see also AF, EF)
DSLAM: Digital Subscriber Line Access Multiplexer
DST: Destination
EF: Expedited Forwarding (see also DSCP)
EOBC: Ethernet out-of-band control channel on the ASR 1000
ESI: Enhanced SerDes Interface
ESP: Embedded Services Processor on the ASR 1000
FECP: Forwarding Engine (ESP) Control Processor
FH: Full Height (SPA)
FIB: Forwarding Information Base
FM: Forwarding Manager
FPM: Flexible Packet Matching
FR-DE: Frame Relay Discard Eligible
FW: Firewall
GigE: Gigabit Ethernet
GRE: Generic Routing Encapsulation
HA: High Availability
HDTV: High Definition TV
HH: Half Height (SPA)
HQF: Hierarchical Queuing Framework
H-QoS: Hierarchical Quality of Service
HW: Hardware
I2C: Inter-Integrated Circuit
IOCP: Input/Output Control Processor
IOS XE: Internet Operating System XE (on the ASR 1000)
IPC: Inter-Process Communication
IPS: Intrusion Prevention System
ISG: Intelligent Services Gateway
ISP: Internet Service Provider
ISSU: In-Service Software Upgrade
L2TP CC: Layer 2 Tunneling Protocol Control Connection
LAC: L2TP Access Concentrator
LNS: L2TP Network Server
MFIB: Multicast FIB
mGRE: Multipoint GRE
MPLS: Multiprotocol Label Switching
MPLS-EXP: EXP bits in the MPLS header
MPV Video
MQC: Modular QoS CLI
mVPN: Multicast VPN
NAPT: Network Address Port Translation
NAT: Network Address Translation
NBAR: Network-Based Application Recognition
NF: Netflow
Nr: Receive sequence number (field in TCP header)
Ns: Send sequence number (field in TCP header)
NSF: Non-Stop Forwarding
OBFL: On-Board Failure Logging
OIR: Online Insertion and Removal
OLT: Optical Line Termination
P1: Priority 1 queue
P2: Priority 2 queue
PAL: Platform Adaptation Layer (middleware in the ASR 1000)
PE: Provider Edge
POST: Power-On Self-Test
POTS: Plain Old Telephony System
PQ: Priority Queue
PSTN: Public Switched Telephone Network
PTA: PPP Termination and Aggregation
PWR: Power
QFP: Quantum Flow Processor
QFP-PPE: QFP Packet Processing Elements
QFP-TM: QFP Traffic Manager (see also BQS)
QoS: Quality of Service
RACS: Resource and Admission Control Subsystem
RA-MPLS: Remote Access into MPLS
RF: Redundancy Facility (see also CF)
RIB: Routing Information Base
RP: Route Processor
RP1: First-generation RP on the ASR 1000
RP2: Second-generation RP on the ASR 1000
RR: Route Reflector
RU: Rack Unit
SBC: Session Border Controller
SBE: Signaling Border Element (of an SBC)
SBY: Standby
SDTV: Standard Definition TV (see also HDTV)
SIP: Session Initiation Protocol; also SPA Interface Processor on the ASR 1000
SPA: Shared Port Adapter
SPA-SPI: SPA Serial Peripheral Interface
SPV Video
SRC: Source
SSL: Secure Socket Layer
SSO: Stateful Switchover
SW: Software
TC: Traffic Class (field in the IPv6 header)
TCAM: Ternary Content-Addressable Memory
TOS: Type of Service (field in the IPv4 header)
VAI: Virtual Access Interface
VLAN: Virtual Local Area Network
VOD: Video on Demand
VTI: Virtual Tunnel Interface
WAN: Wide Area Network
WRED: Weighted Random Early Discard
Agenda
• Introducing the ASR1000
• ASR1000 System Architecture
• ASR 1000 Building Blocks
• ASR 1000 Software Architecture
• ASR 1000 Packet Flows
• QoS on the ASR 1000
• High Availability on the ASR 1000
• Applications & Solutions

Companion Session: BRKARC-2019 - Operating an ASR1000
Introducing the ASR1000
ASR1000 Integrated Services Router
Key Design Principles
• Application performance optimization (AVC, PfR)
• Best-in-class ASIC technology: the Quantum Flow Processor (QFP) for high-scale services and sophisticated QoS with minimal performance impact
• Voice and video services (CUBE)
• Best-in-class security services (firewall, VPN, encryption)
• Availability: enterprise IOS features with a modular OS, plus software redundancy or hardware redundancy and ISSU
• Ethernet WAN and Provider Edge services
Together these deliver multi-service, secure WAN aggregation services.
Cisco ASR 1000 Series Routers: Overview
2.5 Gbps to 200 Gbps Range, Designed Today for up to 400 Gbps in the Future

COMPACT, POWERFUL ROUTERS
• Line-rate performance from 2.5G to 200G+ with services enabled
• Investment protection with modular engines, IOS CLI, and SPAs and Ethernet line cards for I/O
• Hardware-based QoS engine with up to 464K queues

BUSINESS-CRITICAL RESILIENCY
• Resilient, high-performance services router
• Fully separated control and forwarding planes
• Hardware and software redundancy
• In-service software upgrades

INSTANT-ON SERVICE DELIVERY
• Integrated firewall, VPN, encryption, Application Visibility and Control, Session Border Controller
• Scalable on-chip service provisioning through software licensing

Chassis range:
• ASR 1001-X: 2.5 to 20 Gbps
• ASR 1002: 5 to 10 Gbps
• ASR 1002-X: 5 to 36 Gbps
• ASR 1004: 10 to 40 Gbps
• ASR 1006: 10 to 100 Gbps
• ASR 1013: 40 to 360 Gbps
ASR1000 Positioning

Enterprise edge and managed services routers:
• ISR Series: 850 Mbps per system, 350 Mbps with services
• ISR4000 Series: 1-2 Gbps per system; separate services planes for continuity
• ASR1000: 2.5-200 Gbps per system, pay-as-you-grow; broadband, route reflector, distributed PE, firewall/IPSec, SBC/VoIP

Service Provider edge routers:
• 7600 Series: up to 2 Tbps per system; Carrier Ethernet, IP RAN, mobile gateways, SBC/VoIP, broadband, video monitoring
• ASR 9000: up to 48 Tbps per system; Carrier Ethernet, IP RAN, L2/L3 VPNs, IPSec, BNG
ASR1000 Enterprise Applications
Flexible WAN Services Edge and CPE

Deployment points: mobile worker, WAN aggregation, data center interconnect, corporate office, Internet gateway, high-end branch, and cloud.

Typical roles:
• High-speed CPE and high-end branch
• Campus edge and WAN aggregation
• IPSec VPN, L2 and L3 VPN
• IWAN
• Data Center Interconnect
• Internet gateway with Zone-Based Firewall
• Cloud services edge
ASR1000 Service Provider Applications
A Wide Variety of Use Cases

[Diagram: business and residential CPE reach the IP/MPLS core through access and aggregation. Wireline access (ETTx, xDSL, xPON) terminates on DSLAMs and OLTs, cable access on DOCSIS M-CMTS, and mobile/wireless subscribers on an iWAG. The ASR1000 fills the edge roles: BNG, PE (PPP or IP aggregation over ATM or Ethernet), Intelligent Services Gateway, WiFi Access Gateway, LNS, route reflector, and SBC for SIP trunking, with peering toward ISPs and VoD/TV content farms.]

Edge services include L2/L3 VPNs, firewall/NAT/IPSec, SBC with SIP trunking, NBAR2, and CGN.
ASR1000 SYSTEM ARCHITECTURE
ASR 1000 Series Building Blocks
[Diagram: active and standby Route Processors (RP), active and standby Embedded Services Processors (each with an FECP, crypto assist and QFP subsystem), and SIPs (SPA aggregation ASIC, IOCP and SPAs), all joined across a passive midplane. Link types: ESI (Enhanced SerDes) at 11.5 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps.]
• Centralized forwarding architecture: all traffic flows through the active ESP; the standby is synchronized with all flow state over a dedicated 10-Gbps link
• Distributed control architecture: all major system components have a powerful control processor dedicated to the control and management planes
• Route Processor (RP): handles control-plane traffic and manages the system
• Embedded Services Processor (ESP): handles forwarding-plane traffic
• SPA Interface Processor (SIP): Shared Port Adapters provide interface connectivity
ASR 1000 Data Plane Links
[Diagram: the same building blocks as above, highlighting the data-plane links between RPs, ESPs and SIPs across the passive midplane. Link types: ESI (Enhanced SerDes) at 11.5 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps.]
• Enhanced SerDes Interconnect (ESI) links provide high-speed serial communication; ESIs can run at 11.5 Gbps or 23 Gbps
• ESIs run over the midplane and carry:
  – packets between the ESP and the other cards (SIPs, RP and the other ESP)
  – network traffic to/from SPAs in the SIPs
  – punt/inject traffic to/from the RP (e.g. network control packets)
  – state synchronization to/from the standby ESP
• Two ESIs between the ESPs and to every card in the system
• An additional full set of ESI links to/from the standby ESP
• CRC protection of packet contents
• ESP-10G: 1 x 11.5G ESI to each SIP slot
• ESP-20G: 2 x 11.5G ESI to two SIP slots; 1 x 11.5G to the third SIP slot
• ESP-40G/100G/200G: 2 x 23G ESI to all SIP slots
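The per-ESP ESI figures above can be turned into a quick capacity check. A minimal Python sketch (illustrative only, not a Cisco tool; the table just encodes the per-slot link counts quoted in the bullets):

```python
# Aggregate ESI bandwidth an ESP presents to its SIP slots, built from the
# per-slot figures in the bullets above (3 SIP slots assumed, as in an ASR 1006).
ESI_LINKS_PER_SIP_SLOT = {
    # esp: list of (links, gbps_per_link), one entry per SIP slot
    "ESP-10G": [(1, 11.5), (1, 11.5), (1, 11.5)],
    "ESP-20G": [(2, 11.5), (2, 11.5), (1, 11.5)],
    "ESP-40G": [(2, 23.0), (2, 23.0), (2, 23.0)],
}

def total_esi_bandwidth(esp: str) -> float:
    """Sum of ESI link capacity from one ESP toward all SIP slots, in Gbps."""
    return sum(links * rate for links, rate in ESI_LINKS_PER_SIP_SLOT[esp])

for esp in ESI_LINKS_PER_SIP_SLOT:
    print(esp, total_esi_bandwidth(esp), "Gbps of ESI capacity toward SIPs")
```

Note how the ESI capacity toward the SIPs deliberately exceeds the ESP's forwarding bandwidth, so the midplane links are never the bottleneck.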
ASR 1000 Control Plane Links
Ethernet out-of-band channel (EOBC):
• Runs between ALL components
• Indicates whether cards are installed and ready
• Used for loading images and statistics collection
• State information exchange for L2 or L3 protocols

I2C:
• Monitors the health of hardware components
• Controls resets
• Communicates active/standby state, real-time presence and ready indicators
• Controls the other RP (reset, power-down, interrupt, reporting power-supply status, signaling ESP active/standby)
• EEPROM access

SPA control links:
• Run between the IOCP and the SPAs
• Detect SPA OIR
• Reset and power control for SPAs (via I2C)
• Read EEPROMs
[Diagram: the same building blocks, highlighting the control-plane links: GE (1 Gbps) EOBC, I2C, SPA control and SPA bus connections between the RPs, the Forwarding Processors (active and standby) and the SIPs across the midplane.]
ASR1000 Building Blocks:
Under the Hood
ASR1000 Modular Systems Overview

                      ASR 1004        ASR 1006         ASR 1013
SPA slots             8               12               24
RP slots              1               2                2
ESP slots             1               2                2
SIP slots             2               3                6
Redundancy            Software        Hardware         Hardware
Height                7" (4RU)        10.5" (6RU)      22.7" (13RU)
Bandwidth             10 to 40 Gbps   10 to 100 Gbps   40 to 200+ Gbps
Maximum output power  765W            1695W            3390W
Airflow               Front to back   Front to back    Front to back
ASR1000 Series SPA Interface Processor
SIP10 and SIP40
• Physical termination of SPAs
• 10 or 40 Gbps aggregate throughput options
• Supports up to 4 SPAs (4 half-height, 2 full-height, or 2 HH + 1 FH) with full OIR support
• Does not participate in forwarding; limited QoS:
  – Ingress packet classification into high/low priority
  – Ingress oversubscription buffering (low priority) until the ESP can service the packets; up to 128 MB of ingress oversubscription buffering
  – Captures statistics on dropped packets
• Network clock distribution to SPAs, reference selection from SPAs
• The IOCP manages the midplane links, SPA OIR and SPA drivers
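A back-of-the-envelope sketch of what that 128 MB of ingress buffering buys (an assumption-laden fluid model, ignoring per-queue carving and drops, purely to illustrate the arithmetic):

```python
# How long a SIP's ingress buffer can absorb oversubscribed traffic while the
# ESP drains it more slowly than it arrives. Simple fluid model: buffer fills
# at (arrival - drain) rate until full.
def buffer_survival_ms(buffer_mb: float, arrival_gbps: float,
                       drain_gbps: float) -> float:
    """Milliseconds until the ingress buffer fills at a given overload."""
    excess_gbps = arrival_gbps - drain_gbps
    if excess_gbps <= 0:
        return float("inf")  # no oversubscription: the buffer never fills
    buffer_gbits = buffer_mb * 8 / 1000.0
    return buffer_gbits / excess_gbps * 1000.0

# e.g. four 10G SPAs bursting at line rate into a SIP10 (10 Gbps toward the ESP):
print(round(buffer_survival_ms(128, 40.0, 10.0), 2), "ms")
```

Even at the worst-case 4:1 oversubscription, the buffer rides out bursts of a few tens of milliseconds; sustained overload still ends in (low-priority) drops.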
ASR1000 SIP40 and SIP10
Major Functional Differences
• Sustained throughput of 40 Gbps vs 10 Gbps for the SIP10
• Different ESI modes depending on the ESP being used (1x10G vs 2x20G)
• Packet classification enhancements to support more L2 transport types (e.g. PPP, HDLC, FR, ATM)
• Support for more queues (96 vs 64), allowing up to 12 Ethernet ports per half-height SPA
• 3-level priority scheduler (Strict, Min, Excess) vs 2-level (Min, Excess)
• Addition of per-port and per-VLAN/VC ingress policers
• Network clocking support: DTI clock distribution to SPAs, timestamp and time-of-day clock distribution
SIP40 Block Diagram

[Diagram: the card infrastructure carries the IOCP complex (CPU, memory, boot flash with OBFL) and the SPA aggregation ASIC. On ingress, an enhanced classifier (Eth, PPP, HDLC, ATM, FR) feeds per-port ingress buffers in 128 MB of ingress memory, and a hardware 3-priority ingress scheduler (Strict, Min, Excess; Min and Excess only on the SIP10) sends packets to the ESPs over ESI links (2x20G to each ESP, 2x10G for the SIP10). On egress, 8 MB of per-port egress buffering reports egress buffer status back to the scheduler. Network/interface clock selection distributes reference clocks to and from the 4 SPAs, with input and output reference clocks and chassis-management, GE (1 Gbps), I2C, SPA control and SPA bus connections to the RPs. Link types: ESI at 11.5 or 23 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps.]
Shared Port Adapters (SPA) and SFPs (For Your Reference)

Optics:
SFP-OC3-MM, SFP-OC3-SR, SFP-OC3-IR1, SFP-OC3-LR1, SFP-OC3-LR2,
SFP-OC12-MM, SFP-OC12-SR, SFP-OC12-IR1, SFP-OC12-LR1, SFP-OC12-LR2,
SFP-OC48-SR, SFP-OC48-IR1, SFP-OC48-LR2,
SFP-GE-S / GLC-SX-MMD, SFP-GE-L / GLC-LH-SMD, SFP-GE-Z, SFP-GE-T,
GLC-GE-100FX, GLC-BX-U, GLC-BX-D, CWDM,
XFP-10GLR-OC192SR / XFP10GLR-192SR-L, XFP-10GER-192IR+ / XFP10GER-192IR-L,
XFP-10GZR-OC192LR, XFP-10G-MM-SR, XFP-10GER-OC192IR,
DWDM-XFP (32 fixed channels)

POS SPAs:
SPA-2XOC3-POS, SPA-4XOC3-POS, SPA-8XOC3-POS, SPA-1XOC12-POS, SPA-2XOC12-POS,
SPA-4XOC12-POS, SPA-8XOC12-POS, SPA-1XOC48-POS/RPR, SPA-2XOC48POS/RPR,
SPA-4XOC48POS/RPR, SPA-OC192POS-XFP

Serial/Channelized/Clear-Channel SPAs:
SPA-4XT-Serial, SPA-8XCHT1/E1, SPA-4XCT3/DS0, SPA-2XCT3/DS0,
SPA-1XCHSTM1/OC3, SPA-1XCHOC12/DS0, SPA-2XT3/E3, SPA-4XT3/E3

Service SPAs:
SPA-WMA-K9, SPA-DSP

ATM SPAs:
SPA-1XOC3-ATM-V2, SPA-3XOC3-ATM-V2, SPA-1XOC12-ATM-V2

CEoP SPAs:
SPA-1CHOC3-CE-ATM, SPA-24CHT1-CE-ATM, SPA-2CHT3-CE-ATM

Ethernet SPAs:
SPA-4X1FE-TX-V2, SPA-8X1FE-TX-V2, SPA-2X1GE-V2, SPA-5X1GE-V2, SPA-8X1GE-V2,
SPA-10X1GE-V2, SPA-1X10GE-L-V2, SPA-1X10GE-WL-V2, SPA-2X1GE-SYNCE
Route Processors: RP1, RP2 and ASR1001 RP
Two Generations of ASR1000 Route Processor

First generation (RP1):
• 1.5 GHz PowerPC architecture
• Up to 4 GB IOS memory
• 1 GB bootflash
• 33 MB NVRAM
• 40 GB hard drive

Second generation (RP2):
• 2.66 GHz Intel dual-core architecture
• 64-bit IOS XE
• Up to 16 GB IOS memory
• 2 GB bootflash (eUSB)
• 33 MB NVRAM
• Hot-swappable 80 GB hard drive (HDD enclosure)
ASR 1000 Route Processor Architecture
Highly Scalable Control Plane Processor

[Diagram: the RP CPU (1.5 GHz on RP1, 2.66 GHz dual-core on RP2) runs IOS and the Linux OS and manages board and chassis functions. IOS memory (4 GB on RP1, 8 or 16 GB on RP2) holds the RIB, FIB and other processes and determines route scale. The board carries a bootdisk (1 GB on RP1, 2 GB on RP2), 33 MB NVRAM, a 2.5" hard disk used for system logging and core dumps, USB, console and aux ports, BITS clock input and output with a Stratum-3 network clock circuit, the ESI interconnect, a GE switch, and the chassis-management bus toward the SIPs and ESPs. The GE port is management only, not a traffic interface.]
Route Processors (RP)

ASR1001-X (built-in):
• CPU: built-in dual-core 2.0 GHz processor
• Memory: 8 GB default (4x2GB), 16 GB maximum (4x4GB)
• Built-in eUSB bootflash: 8 GB
• Storage: SSD (200G or 400G)
• Cisco IOS XE: 64-bit
• Chassis support: integrated in the ASR1001-X chassis

ASR1002-X (built-in):
• CPU: built-in quad-core 2.13 GHz processor
• Memory: 4 GB default; 8 GB and 16 GB options
• Built-in eUSB bootflash: 8 GB
• Storage: 160 GB HDD (optional) and external USB
• Cisco IOS XE: 64-bit
• Chassis support: integrated in the ASR1002-X chassis

RP1 (For Your Reference):
• CPU: general-purpose CPU based on a 1.5 GHz processor
• Memory: 2 GB default (2x1GB), 4 GB maximum (2x2GB); RP1 ships with 4 GB built into the ASR 1002
• Built-in eUSB bootflash: 1 GB (8 GB on the ASR 1002)
• Storage: 40 GB HDD and external USB
• Cisco IOS XE: 32-bit
• Chassis support: ASR1002 (integrated), ASR1004, and ASR1006

RP2 (recommended purchase):
• CPU: dual-core processor, 2.66 GHz
• Memory: 8 GB default (4x2GB), 16 GB maximum (4x4GB)
• Built-in eUSB bootflash: 2 GB
• Storage: 80 GB HDD and external USB
• Cisco IOS XE: 64-bit
• Chassis support: ASR1004, ASR1006, and ASR1013
ASR1000 GE Management Port Design
• The ASR 1000 has a dedicated, out-of-band GE management port attached to the RP
• The interface (GigE0) is in a VRF (mgmt-intf) but is not tied to general MPLS VPNs; it has its own dedicated routing and FIB tables and can be associated with connected, static and dynamic routes
• Designed to prevent any routing/forwarding between RP and ESP
• No other interfaces can join the management VRF
• Many management features must be configured with VRF options or use GigE0 as their source interface (e.g. TFTP, NTP, SNMP, syslog, TACACS/RADIUS)
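A minimal configuration sketch of that pattern (the VRF name on shipping IOS XE is typically Mgmt-intf; the addresses and the NTP server here are illustrative assumptions, not values from this session):

```
! GigabitEthernet0 lives in the dedicated management VRF
interface GigabitEthernet0
 vrf forwarding Mgmt-intf
 ip address 192.0.2.10 255.255.255.0
 no shutdown
!
! Management reachability is routed inside the VRF only
ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 192.0.2.1
!
! Management services must be pointed at the VRF or GigE0 explicitly
ip tftp source-interface GigabitEthernet0
ntp server vrf Mgmt-intf 192.0.2.50
```

Because no other interface can join Mgmt-intf, traffic can never leak between the management plane and the forwarding plane.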
Embedded Services Processors (ESP)
Scalable Bandwidth from 5 Gbps to 200 Gbps+
• Centralized, programmable, multiprocessor forwarding engine providing full-packet processing
• Packet buffering and queuing/scheduling (BQS):
  – For output traffic to carrier cards/SPAs
  – For special features such as input shaping, reassembly, replication, punt to RP, etc.
  – 5 levels of HQoS scheduling, up to 464K queues, priority propagation
• Dedicated crypto co-processor
• Interconnect providing data-path links (ESI) to/from the other cards:
  – Transports traffic into and out of the Cisco Quantum Flow Processor (QFP)
  – Input scheduler for allocating QFP bandwidth among the ESIs
• FECP CPU managing the QFP, crypto device, midplane links, etc.
ESP40: 40 Gbps Services Processor
The Choice for Many Enterprise and SP-Edge Applications
• Centralized, programmable forwarding engine (QFP subsystem (PPE) and crypto engine) providing full-packet processing
• Packet buffering and queuing/scheduling (BQS)
• 40G total throughput
• 13 Gbps maximum crypto throughput (1400B packets)
• Up to two ESI links to each SIP slot: 1 x 11G to a SIP10, 2 x 23G to a SIP40
• FECP CPU (1.86 GHz dual-core with 8 GB memory) managing the QFP, crypto device, midplane links, etc.
ASR 1000 Forwarding Processor
Quantum Flow Processor (QFP) Drives Integrated Services & Scalability
[Diagram: the ESP carries the FECP with its memory and boot flash, the crypto engine, a TCAM, resource DRAM, packet-buffer DRAM, and the QFP itself with its processor pool of packet processing engines (PPE0 through PPE40), a dispatcher/packet buffer, and the buffer/queue/schedule (BQS) stage, connected to the RPs and SIPs over the interconnect and chassis-management bus.]

What the main components hold:
• TCAM: class/policy maps (QoS, DPI, FW), ACL/ACE storage, IPSec Security Association class groups, classes and rules, NAT tables
• FECP: runs Linux, performs board management, programs the QFP and crypto engine, collects statistics
• FECP memory: QFP client/driver, OBFL, QoS class maps, FM FP, statistics
• Resource DRAM: ACL ACEs copy, NAT configuration objects, IPSec/IKE SAs, NF configuration data, ZB-FW configuration objects, plus per-feature runtime state: QoS mark/police, NAT sessions, IPSec SAs, the Netflow cache, firewall hash tables and per-session data (FW, NAT, Netflow, SBC)
• Packet-buffer DRAM: QoS queuing, NAT VFR reassembly, IPSec headers
• System bandwidth options: 5, 10, 20, 40, 100, 200 Gbps

Abbreviations: NF = Netflow; ZB-FW = zone-based firewall; FW = firewall; SA = Security Association; VFR = Virtual Fragmentation Reassembly; OBFL = on-board failure logging.
ESP100 and ESP200
Larger Enterprise Aggregation and Service Provider Edge

ESP-100G:
• Total bandwidth: 100 Gbps
• Performance: up to 32 Mpps
• QuantumFlow Processors: 2 (TCAM: 80 Mb; packet buffer: 1024 MB)
• Control CPU: dual-core 1.73 GHz, 16 GB memory
• Broadband: up to 58K sessions
• QoS: up to 232K queues
• IPSec bandwidth (1400B): 25 Gbps
• FW/NAT: 6M sessions
• Chassis: ASR 1006, ASR 1013
• Route Processor: RP2 + future

ESP-200G:
• Total bandwidth: 200 Gbps
• Performance: up to 64 Mpps
• QuantumFlow Processors: 4 (TCAM: 2 x 80 Mb; packet buffer: 2048 MB)
• Control CPU: dual-core 1.73 GHz, 32 GB memory
• Broadband: up to 128K sessions
• QoS: up to 464K queues
• IPSec bandwidth (1400B): 50 Gbps
• FW/NAT: 13M sessions
• Chassis: ASR 1013
• Route Processor: RP2 + future

Both support NSA "Suite-B" security.
Embedded Services Processors (ESP), For Your Reference
Based on the Quantum Flow Processor (QFP)

• ESP-2.5G: 2.5 Gbps, 3 Mpps; 10 processors at 900 MHz; 1 Gbps crypto (1400 bytes); 256 MB QFP resource memory, 64 MB packet buffer; single-core 800 MHz control CPU, 1 GB control memory; 5 Mb TCAM; ASR 1001 (integrated)
• ESP-5G: 5 Gbps, 8 Mpps; 20 processors at 900 MHz; 1.8 Gbps crypto; 256 MB resource memory, 64 MB packet buffer; single-core 800 MHz control CPU, 1 GB control memory; 5 Mb TCAM; ASR 1001 (integrated), ASR 1002
• ESP-10G: 10 Gbps, 17 Mpps; 40 processors at 900 MHz; 4.4 Gbps crypto; 512 MB resource memory, 128 MB packet buffer; single-core 800 MHz control CPU, 2 GB control memory; 10 Mb TCAM; ASR 1002, 1004, 1006
• ESP-20G: 20 Gbps, 24 Mpps; 40 processors at 1.2 GHz; 8.5 Gbps crypto; 1 GB resource memory, 256 MB packet buffer; single-core 1.2 GHz control CPU, 4 GB control memory; 40 Mb TCAM; ASR 1004, 1006
• ASR1001-X ESP: 2.5/5/10/20 Gbps, 13 Mpps; 31 processors at 1.5 GHz; 8 Gbps crypto; 4 GB resource memory, 512 MB packet buffer; quad-core 2.0 GHz control CPU, 8 GB control memory; 10 Mb TCAM; ASR 1001-X
• ASR1002-X ESP: 5/10/20/36 Gbps, 30 Mpps; 8/16/32/62 processors at 1.2 GHz; 4 Gbps crypto; 1 GB resource memory, 512 MB packet buffer; quad-core 2.13 GHz control CPU, 4/8/16 GB control memory; 40 Mb TCAM; ASR1002-X
• ESP-40G: 40 Gbps, 24 Mpps; 40 processors at 1.2 GHz; 11 Gbps crypto; 1 GB resource memory, 256 MB packet buffer; dual-core 1.8 GHz control CPU, 8 GB control memory; 40 Mb TCAM; ASR 1004, 1006, 1013
• ESP-100G: 100 Gbps, 58 Mpps; 128 processors at 1.5 GHz; 29 Gbps crypto; 4 GB resource memory, 1 GB packet buffer; dual-core 1.73 GHz control CPU, 16 GB control memory; 80 Mb TCAM; ASR 1006, 1006X, 1009X, 1013
• ESP-200G: 200 Gbps, 130 Mpps; 256 processors at 1.5 GHz; 78 Gbps crypto; 8 GB resource memory, 2 GB packet buffer; dual-core 1.73 GHz control CPU, 32 GB control memory; 2 x 80 Mb TCAM; ASR 1006X, 1009X, 1013
Cisco Quantum Flow Processor (QFP)
ASR1000 Series Innovation
• Five-year design effort and continued evolution; now in its 3rd generation
• Architected to scale beyond 100 Gbit/s
• Multiprocessor with 64 multi-threaded cores, 4 threads per core: 256 packet-processing contexts per chip available to handle traffic
• High-priority traffic is prioritised
• Packet replication capabilities for multicast
• Many hardware assists for accelerated processing
• The 3rd-generation QFP is capable of 70 Gbit/s and 32 Mpps of processing
• Mesh-able: 1, 2 or 4 chips build higher-capacity ESPs
• Latency: tens of microseconds with features enabled
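A quick arithmetic sketch of what those parallelism figures imply (illustrative only; the averaging hides that individual packets take different paths):

```python
# 3rd-generation QFP figures from the bullets above: 64 cores x 4 threads,
# 32 Mpps. With 256 packet contexts in flight, each context has on average
# threads/pps seconds to finish a packet.
cores, threads_per_core = 64, 4
threads = cores * threads_per_core   # concurrent packet contexts
pps = 32_000_000                     # packets per second

per_packet_budget_us = threads / pps * 1e6
print(threads, "threads,", per_packet_budget_us, "us average budget per packet")
```

An 8-microsecond average budget per context is what lets the QFP run heavyweight features (firewall, NAT, crypto lookups) in software without dedicated service modules.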
QFP Chip Set
• Cisco QFP Packet Processor
• Cisco QFP Traffic Manager (buffering, queuing, scheduling)

Cisco Enterprise Routing NPU Leadership
Continuing Investment in Network Processor Technology, Driven by Increasing Branch and Network Edge Requirements (over 100 patents awarded)
• Gen1 (2005): QFP1, 20G NPU, with a high-speed backplane aggregation ASIC and an I/O oversubscription and aggregation ASIC
• Gen2 (2010): QFP2, 40G
• Gen3 (2015): QFP3 family, 200G; lower-cost, fully integrated NPU and I/O device
• Gen4: QFP4 family, >200G; next-gen emphasis on line-rate security and advanced feature processing
(#cores = number of packet processing engines; #threads = concurrent, parallel threads processed)
ASR1000 Fixed Platforms

ASR1002-X: 5 Gbps to 36 Gbps Soft-Upgradable 2RU Platform
• System management: RJ45 console, auxiliary port, 2x USB ports, RJ45 GE Ethernet
• Clocking: BITS, GPS input, built-in Stratum 3, SyncE
• Built-in I/O: 6x1GE
• Pay as you grow: 5 Gbps default, upgradeable to 10, 20, or 36 Gbps
• Storage: optional 160 GB hard disk
• Memory: 4 GB default, 8/16 GB optional
• Shared Port Adapters: 3x SPA slots
• Multi-core QFP: 62 cores, 248 simultaneous threads, 40 Mb TCAM
• Control plane: quad cores clocked at 2.13 GHz, 4G/8G/16G memory options, secure boot, FIPS 140-3 certification
• Cryptography: 4 Gbps crypto throughput, Suite-B crypto support
ASR 1001-X: 5 Gbps to 20 Gbps Soft-Upgradable 1RU Platform with Built-in 10GE
• Pay as you grow: 2.5G default, upgradeable to 5G, 10G, and 20G; up to 8G crypto throughput
• Control plane: quad cores at 2.0 GHz, 8G/16G memory options
• Built-in I/O: 2x10G and 6x1G, multipoint MACsec capable
• Management: RJ45 management GE, 2x USB ports, auxiliary port, RJ45 console, 1x mini-USB console
• Network interface modules: ISR 4K modules; SSD drive option
• Multi-core network processor: 32 cores, 4 packet processing engines per core, 128 processing threads, 10 Mb TCAM
• Shared Port Adapters: 1x SPA slot
ASR 1001-X Block Diagram

[Diagram: a 3rd-generation QFP (20 Gbps forwarding and feature processing) with a processor pool of 32 PPEs, a dispatcher/packet buffer, 4 GB of combined resource/packet-buffer DDR3 memory and a 10 Mbit TCAM; an Octeon II encryption coprocessor (8G crypto, Suite-B support) with its SA table DRAM; an integrated control plane with a quad-core 2.0 GHz CPU, DDR3 memory, NVRAM, bootdisk and boot flash (OBFL); an integrated SIP and Ethernet I/O subsystem feeding the built-in 2x10GE and 6x1GE ports, one NIM slot and one SPA slot (PCIe, SPA control and SPA bus); an optional 200G or 400G solid-state drive, optionally in the NIM slot; a Stratum-3 network clock circuit; and USB, management Ethernet, console and aux ports.]
ASR1000 High-Density Ethernet Line Cards

GE+10GE fixed Ethernet line card:
• Port density: 2x10GE + 20x1GE
• Throughput: 40G
• Key features: feature parity with SIP40 + GE/10GE SPA; SyncE, Y.1731, IEEE 1588 capable (future); no SIP needed
• Chassis: ASR1004, ASR1006, ASR1013; RP: RP2; ESP: ESP40, ESP100, ESP200

High-density 10GE fixed Ethernet line card:
• Port density: 6x10GE
• Throughput: 60G I/O with 40G throughput
• Key features: feature parity with SIP40 + 10GE SPA (exception: MDR not supported); no SIP needed
• Chassis: ASR1004, ASR1006, ASR1013; RP: RP2; ESP: ESP40, ESP100, ESP200
New ASR Modular Chassis
High-Performance Chassis Upgrades

ASR 1006-X (FCS target: Nov 2015):
• Height: 6RU
• RP slots: 2; ESP slots: 2 (regular); SIP/MIP slots: 2 (SIP40)
• SPA slots: 8; EPA slots: 4; NIM slots: none
• Slot bandwidth: 200G
• Forwarding bandwidth (based on the current QFP): 40 to 100G+
• Power: 1100W power modules (power shelf), N+1, max 6

ASR 1009-X (FCS target: Sep 2015):
• Height: 9RU
• RP slots: 2; ESP slots: 2 (super); SIP/MIP slots: 3 (SIP40)
• SPA slots: 12; EPA slots: 6; NIM slots: none
• Slot bandwidth: 200G
• Forwarding bandwidth (based on the current QFP): 40 to 200G+
• Power: 1100W power modules (power shelf), N+1, max 6
ASR 1000 System Oversubscription
Key Oversubscription Points
• The total bandwidth of the system is determined by two factors:
  1. The type of ESP (e.g. ESP10 through ESP200)
  2. The type of SIP: SIP10 or SIP40 (the link bandwidth between one SIP and the ESP)
• Step 1: SPA-to-SIP oversubscription (the ratio of configured SPA bandwidth to SIP capacity)
  – Up to 4 x 10 Gbps SPAs per SIP10 = 4:1 oversubscription max
  – No oversubscription for SIP40 = 1:1
• Step 2: SIP-to-ESP oversubscription (the ratio of configured SIP bandwidth to ESP capacity)
  – Up to 2, 3 or 6 SIPs share the ESP bandwidth, depending on the ASR1000 chassis used
• Total oversubscription = Step 1 x Step 2
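The two-step calculation above can be sketched in a few lines of Python (illustrative only; it models just the maxima quoted on this slide, not the per-slot exceptions):

```python
# Total I/O-to-ESP oversubscription = (SPA-to-SIP ratio) x (SIP-to-ESP ratio).
def oversubscription(spa_gbps: float, sip_gbps: float,
                     n_sips: int, esp_gbps: float) -> float:
    """spa_gbps: configured SPA bandwidth per SIP; sip_gbps: SIP capacity;
    n_sips: SIPs sharing the ESP; esp_gbps: ESP forwarding bandwidth."""
    step1 = spa_gbps / sip_gbps             # SPA-to-SIP
    step2 = (n_sips * sip_gbps) / esp_gbps  # SIP-to-ESP
    return step1 * step2

# Three SIP10s, each fully loaded with 4 x 10G SPAs, on an ESP10:
print(oversubscription(40, 10, 3, 10))  # -> 12.0, i.e. 12:1
# Three SIP40s, 1:1 loaded, on an ESP40:
print(oversubscription(40, 40, 3, 40))  # -> 3.0, i.e. 3:1
```

The first example reproduces the worst case in the reference table that follows: 12 x 10G of I/O funneled into a 10 Gbps ESP.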
SIP-to-ESP Oversubscription: Important Exceptions
ESPs have different interconnect ASICs with different numbers of ESI ports, so a few rules apply:
• ESP-10G: 10G to all slots
• ESP-20G: 20G to all slots except ASR1006 slot 3 (10G only to SIP slot 3)
• ESP-40G: 40G to all slots except ASR1013 slots 4 and 5 (20G only to SIP slots 4 and 5)
• ESP-100G: 40G to all slots
• ESP-200G: 40G to all slots
Keep these exceptions in mind when planning I/O capacity.
[Reference table, flattened in extraction: for each chassis and ESP combination (ASR 1001 with ESP2.5/ESP5, ASR 1001/1002 and ASR 1002-X with their integrated ESPs, and ASR 1004/1006/1013 with ESP10 through ESP100), the original slide lists the SIP version, the number of SIP slots, the maximum bandwidth per SIP slot, the bandwidth on the ESP, and the resulting SPA-to-SIP, SIP-to-ESP and I/O-to-ESP oversubscription ratios. SPA-to-SIP ratios range from 1:1 (SIP40) to 4:1 (a fully loaded SIP10); I/O-to-ESP ratios range from 9:10 up to 12:1.]

ASR 1000 System Oversubscription Example
The footnoted example behind the worst-case figures:
1. Up to 4 x 10G SPAs per SIP (4:1 SPA-to-SIP)
2. Up to 3 SIPs per ESP
3. Up to 12 x 10G SPAs per ESP
SOFTWARE ARCHITECTURE

Software Architecture: IOS XE
• IOS XE = IOS + IOS XE middleware + platform software. It is not a new OS!
• Operational consistency: the same look and feel as an IOS router
• IOS runs as a Linux process for the control plane (routing, SNMP, CLI etc.), with 64-bit operation
• Linux kernel with multiple processes running in protected memory, giving:
  – fault containment
  – restartability
  – ISSU of individual software packages
ASR 1000 HA Innovations
• Zero-packet-loss RP failover
• <50 ms ESP failover
• "Software Redundancy"

[Diagram: on the Route Processor, active and standby IOS instances run on the IOS XE Platform Adaptation Layer (PAL) alongside the Chassis Manager and Forwarding Manager on a common kernel. Control messaging connects to the SPA Interface Processor (SPA drivers, Chassis Manager, kernel) and to the Embedded Services Processor (QFP client/driver, Forwarding Manager, Chassis Manager, kernel).]
ASR 1000 Software Architecture

Chassis Manager (RP):
• Initialization and boot of RP processes
• Detects OIR of other cards and coordinates initialization
• Manages system/card status, environmentals, power control, EOBC

IOS (RP):
• Runs the control plane
• Generates configurations
• Populates and maintains the routing tables (RIB, FIB, ...)

Forwarding Manager (RP):
• Provides the abstraction layer between hardware and IOS
• Manages ESP redundancy
• Maintains a copy of the FIB and interface list
• Communicates FIB state to the active and standby ESPs (or bulk-downloads state in case of restart)

Forwarding Manager (ESP):
• Maintains a copy of the FIBs
• Programs the QFP forwarding plane and QFP DRAM
• Collects statistics and communicates them to the RP
• Communicates with the Forwarding Manager on the RP
• Provides the interface to the QFP client/driver

QFP client/driver (ESP):
• Implements the forwarding plane
• Programs the PPEs with feature-processing information

SPA drivers (SIP):
• Driver software for SPA interface cards, loaded separately and independently
• Failure or upgrade of a driver does not affect other SPAs in the same or different SIPs
Software Sub-Packages
1. RPBase: RP OS. Why? Upgrading the OS requires a reload of the RP and expects minimal changes.
2. RPIOS: IOS. Why? Facilitates the Software Redundancy feature.
3. RPAccess (K9 and non-K9): software required for router access; two versions are available, one that contains open SSH and SSL and one without. Why? To facilitate software packaging for export-restricted countries.
4. RPControl: control-plane processes that interface between IOS and the rest of the platform. Why? IOS XE middleware.
5. ESPBase: ESP OS + control processes + QFP client/driver/ucode. Why? Any software upgrade of the ESP requires a reload of the ESP.
6. SIPBase: SIP OS + control processes. Why? An OS upgrade requires a reload of the SIP.
7. SIPSPA: SPA drivers and FPD (SPA FPGA image). Why? Facilitates SPA driver upgrades for specific SPA slots.
Standard Release Timeline (XE 3.12 example)
Standard releases are supported for 18 months:
6 months active bug fix, 6 months limited bug fix, and 6 months PSIRT as needed
Rebuilds (.1 through .4) are done at 3-, 3-, 6- and 6-month intervals
Two standard releases per year
Milestones:
EoSA – End of Sale Announcement
EoS – End of Sale
EoSM – End of SW Maintenance
EoVS – End of Vulnerability & Security support
[Timeline: .0 release in 2013 with rebuilds .1 through .4, followed by the EoSA, EoS, EoSM, EoL and EoVS milestones through 2016]
IOS XE Extended Release (XE 3.13 example)
Extended throttle release with up to 48 months of support
10 rebuilds over the lifetime; the last two are PSIRT-only as needed
Rebuilds (.1 through .10) are done at 3-, 3-, 4-, 4-, 4-, 6-, 6-, 6-, 6- and 6-month intervals
One extended release per year (every 3rd release is extended)
Milestones:
EoSA – End of Sale Announcement
EoS – End of Sale
EoSM – End of SW Maintenance
EoVS – End of Vulnerability & Security support
[Timeline: .0 release in 2014 with rebuilds .1 through .10, followed by the EoSA, EoS, EoSM, EoL and EoVS milestones through 2018]
Packet Flows – Data Plane
Data Packet Flow: From SPA Through SIP
[Diagram: SIP internals: SPA aggregation ASIC with ingress classifier, per-port ingress buffers, ingress scheduler, per-port egress buffers and egress buffer status, and the interconnect toward the ESPs; 4 SPAs per SIP]
1. SPA receives packet data from its network interfaces and transfers the packet to the SIP
2. SPA aggregation ASIC classifies the packet into high/low priority
3. SIP writes packet data to external 128MB memory (at 40Gbps from 4 full-rate SPAs)
4. Ingress buffer memory is carved into 96 queues on SIP40 (64 queues on SIP10). The queues are arranged by SPA-SPI channel and optionally high/low priority. Channels on "channelized" SPAs share the same queue.
5. The SPA aggregation ASIC selects among the ingress queues for the next packet to send to the ESP over ESI, and prepares the packet for internal transmission
6. The interconnect transmits the selected packet data over ESI to the active ESP at up to 11.5 Gbps
7. The active ESP can backpressure the SIP via an ESI control message to slow packet transfer over ESI if overloaded (separate backpressure for high- vs. low-priority packet data)
Data Packet Flow: Through ESP40
[Diagram: ESP40 Quantum Flow Processor: TCAM4 (40Mbit), resource DRAM (1GB), processor pool of PPEs (PPE0 through PPE64), packet-buffer DRAM (256MB), partition length/BW SRAM, dispatcher/packet buffer, BQS (buffer, queue, schedule) blocks and interconnect]
1. Packet arrives on the QFP
2. Packet is assigned to a PPE thread
3. The PPE thread processes the packet in a feature chain similar to 12.2S IOS (very basic view of an IPv4 use case):
• Input features applied: NetFlow, MQC/NBAR classify, FW, RPF, mark/police, NAT, WCCP, etc.
• Forwarding decision is made: IPv4 FIB, load balance, MPLS, MPLSoGRE, multicast, etc.
• Output features applied: NetFlow, FW, NAT, crypto, MQC/NBAR classify, police/mark, etc.
4. Packet is released from on-chip memory to the Traffic Manager (queued)
5. The Traffic Manager schedules which traffic to send to which SIP interface (or RP or crypto chip) based on priority and the configured MQC policy
6. The SIP can independently backpressure the ESP via an ESI control message to pace the packet transfer if overloaded
Data Packet Flow: Through SIP to SPA
[Diagram: SIP egress path: interconnect, SPA aggregation ASIC with ingress classifier and scheduler, per-port ingress and egress buffers, and egress buffer status]
1. Interconnect receives packet data over ESI from the active ESP at up to 46 Gbps
2. SPA aggregation ASIC receives the packet and writes it to external egress buffer memory
3. Egress buffer memory is carved into 64/96 queues. The queues are arranged by egress SPA-SPI channel and high/low priority. Channels on "channelized" SPAs share the same queue.
4. SPA aggregation ASIC selects and transfers packet data from eligible queues to the SPA-SPI channel (high-priority queues are selected before low)
5. The SPA can backpressure the transfer of packet data independently for each SPA-SPI channel using SPI FIFO status
6. SPA transmits the packet data on the network interface
ASR1000 QoS
ASR 1000 Forwarding Path
QoS View
1. SPA classification
2. Ingress SIP packet buffering
3. Port rate limiting & weighting for forwarding to ESP
4. Advanced classification
5. Ingress MQC-based QoS
6. Egress MQC-based QoS
7. Hierarchical packet scheduling & queuing
8. Egress SIP packet buffering
[Diagram: the eight QoS touch points mapped onto the active/backup RP (IOS process), active/backup ESP (TCAM, buffers, Cisco QFP) and the SIPs (ingress classifier, scheduler & buffers; egress packet buffers) across the midplane; ESI 40Gbps each direction, SPA-SPI 11.2Gbps each direction, HyperTransport 8Gbps each direction]
ASR 1000 ESP QoS
QFP Processing
• The following QoS functions are handled by PPEs: classification, marking, policing, WRED
• QoS functions (along with other packet-forwarding features such as NAT, NetFlow, etc.) are handled by the QFP
• The packet is then sent to the QFP Traffic Manager for queueing & scheduling
• All ESP QoS functions are configured using the MQC CLI
ASR 1000 QoS
The QFP Traffic Manager (BQS) performs all packet scheduling decisions.
• The Cisco QFP Traffic Manager implements a 3-parameter scheduler, which gives advanced flexibility. Only 2 parameters can be configured at any level (min/max or max/excess):
Minimum – bandwidth
Excess – bandwidth remaining
Maximum – shape
• Priority propagation ensures that high-priority packets are forwarded first without loss
• Packet memory is one large pool; interfaces do not reserve a specific amount of memory
• Out-of-resources (memory exhaustion) behavior:
Non-priority user data dropped at 85% packet memory utilization
Priority user data dropped at 97% packet memory utilization
Selected IOS control-plane packets and internal control packets dropped at 100% memory utilization
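As a sketch of how the three scheduler parameters surface in MQC (class names and rates here are illustrative, not from the slides): `bandwidth` configures the minimum, `shape average` the maximum, and `bandwidth remaining ratio` the excess share, with only two of the three at any one level:

```
! min + max on a class: guarantee 20 Mbps, cap at 50 Mbps
policy-map MINMAX-EXAMPLE
 class BUSINESS
  bandwidth 20000            ! minimum, in kbps
  shape average 50000000     ! maximum, in bps
!
! max + excess on a class: cap at 50 Mbps, 4:1 share of leftover bandwidth
policy-map MAXEXCESS-EXAMPLE
 class BUSINESS
  shape average 50000000
  bandwidth remaining ratio 4
```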
ASR 1000 QoS
Queuing Hierarchy
• Multilayer hierarchies (5 layers in total): SIP, interface, and 3 layers of MQC QoS
• Two levels of priority traffic (1 and 2)
• Strict and conditional priority rate limiting
• 3-parameter scheduler (min, max & excess)
• Priority propagation for no-loss priority forwarding via the minimum parameter
• Shaping: average and peak options; burst parameters are accepted but not used
• Backpressure mechanism between hardware components to deal with external flow control
Hierarchy (bottom up): level 3 "class" queues feed level 2 "class" schedules, then the level 1 "VLAN" schedule, the interface/port schedule, and finally the SIP schedule
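A minimal hierarchical policy illustrating these layers (names and rates are illustrative; the parent shaper plays the "VLAN"-level role and the child classes form the class-level queues):

```
policy-map CHILD-QUEUES
 class VOICE
  priority level 1               ! priority traffic, level 1
 class VIDEO
  priority level 2               ! priority traffic, level 2
 class CRITICAL
  bandwidth remaining ratio 10   ! excess-weighted class queue
 class class-default
  bandwidth remaining ratio 1
!
policy-map PER-VLAN-SHAPER
 class class-default
  shape average 100000000        ! 100 Mbps "VLAN"-level shaper
  service-policy CHILD-QUEUES
!
interface GigabitEthernet0/0/0.100
 service-policy output PER-VLAN-SHAPER
```

Priority propagation means the VOICE/VIDEO packets are serviced first at every level of the hierarchy, not just within the child policy.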
ASR 1000 QoS
Classification and Marking
• Classification:
IPv4 precedence/DSCP, IPv6 precedence/DSCP, MPLS EXP, FR-DE, ACL, packet length, ATM CLP, CoS, inner/outer CoS (QinQ), VLAN, input interface, qos-group, discard-class
The QFP is assisted in hardware by the TCAM
• Marking:
IPv4 precedence/DSCP, IPv6 precedence/DSCP, MPLS EXP, FR-DE, discard-class, qos-group, ATM CLP, CoS, inner/outer CoS
• Enhanced match and marker statistics may be enabled with the global configuration options
platform qos marker-statistics
platform qos match-statistics per-filter
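For example (class, policy and interface names are hypothetical; the two `platform qos` commands are the global knobs named on the slide):

```
class-map match-any REALTIME
 match dscp ef
 match mpls experimental topmost 5
!
policy-map MARK-INGRESS
 class REALTIME
  set qos-group 1        ! internal marking, usable by the egress policy
  set discard-class 1
!
interface GigabitEthernet0/0/1
 service-policy input MARK-INGRESS
!
platform qos marker-statistics
platform qos match-statistics per-filter
```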
ASR 1000 Policing and Congestion Avoidance
• Policing:
1R2C – 1 rate, 2 color
1R3C – 1 rate, 3 color
2R2C – 2 rate, 2 color
2R3C – 2 rate, 3 color
Color-blind; color-aware in XE 3.2 and higher
Supports RFC 2697 and RFC 2698
Explicit-rate and percent-based configuration
Dedicated policer block in QFP hardware
• WRED:
Precedence (implicit MPLS EXP), DSCP, and discard-class based
ECN marking
Byte-, packet-, and time-based CLI
Packet-based configurations limited to exponential constant values 1 through 6
Dedicated WRED block in QFP hardware
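A sketch combining a 2R3C policer with DSCP-based WRED (rates, thresholds and class names are illustrative):

```
policy-map POLICE-AND-WRED
 class BULK
  ! 2R3C: CIR 10 Mbps, PIR 20 Mbps; re-mark exceeding traffic, drop violating
  police cir 10000000 pir 20000000 conform-action transmit exceed-action set-dscp-transmit af13 violate-action drop
 class class-default
  bandwidth remaining ratio 1
  random-detect dscp-based
  random-detect dscp af11 100 200   ! per-DSCP min/max thresholds
```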
INTEGRATED SECURITY
ON ASR1000
ASR1000 Cryptography Support
Improved Octeon-II crypto processor on X-series chassis
ESP-100 / ESP-200:
• 24-core processor
• 800MHz clock frequency
• 2GB DDR3 SDRAM
• Up to 60Gbps (512B packets)
ASR 1002-X:
• 6-core processor
• 1.1 GHz clock frequency
• Up to 4Gbps (512B packets)
ASR 1001-X:
• Up to 4 Gbps crypto
Crypto support:
• AES, SHA-1, ARC4, DES, 3DES
• IKEv1 or IKEv2
Next-gen "Suite B" crypto support:
• Encryption: AES-128-GCM
• Authentication: HMAC-SHA-256
• Hashing: SHA-256
• Protocol: IKEv2
• NOTE: In-box high availability in an ASR1006 configuration: ESP to ESP is stateful; RP to RP is stateless
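A hedged IKEv2 "Suite B"-style sketch (names are illustrative, and exact algorithm keywords depend on the IOS XE release):

```
crypto ikev2 proposal SUITEB-PROPOSAL
 encryption aes-gcm-128     ! combined-mode cipher (AES-128-GCM)
 prf sha256                 ! a PRF is specified when using a combined-mode cipher
 group 19                   ! ECDH over the P-256 curve
!
crypto ipsec transform-set TS-GCM esp-gcm 128
 mode tunnel
```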
ASR 1000 Forwarding Processor
IPSec Processing is done with Crypto Co-processor Assist
Function partitioning across the ESP:
FECP:
• IPSec SA database
• IKE SA database
• Crypto-map
• DH key pairs
QFP:
• IPSec SA class groups, classes, rules (ACE or IPSec SA)
• Outbound packet classification
• Formatting of packets for the crypto chip (internal header)
• Receiving packets from the crypto chip; removal of the internal crypto header
• Re-assembly of fragmented IPSec packets
Crypto co-processor:
• IPSec SA database, IPSec headers
• Anti-replay check
• Encryption / decryption (Diffie-Hellman)
• NAT traversal
• Traffic-based lifetime expiry
[Diagram: ESP internals: FECP with boot flash and memory, QFP subsystem (TCAM4, resource DRAM, PPE pool, packet-buffer DRAM, dispatcher, BQS) and crypto co-processor, with links to the RPs and SIPs; GE 1Gbps, ESI 10/40Gbps, SPA-SPI 11.2Gbps, HyperTransport 10Gbps]
ASR 1000 IPSec Software Architecture
Function Partitioning
RP (IOS):
• Creation of IPSec Security Associations (SAs)
• IKE control plane (IKE negotiation, expiry, tunnel setup)
• Communicates FIB status to active & standby ESP (or bulk-download state info in case of restart)
ESP:
• Communicates with the Forwarding Manager on the RP; provides the interface to the QFP client / driver
• Copy of IPSec SAs and IKE SAs; synchronization of the SA databases with the standby ESP
• Punting of encrypted packets to the crypto assist
• Encryption / decryption of packets
[Diagram: the same RP / ESP / SIP software stack shown earlier: IOS, Chassis Manager and Forwarding Manager on the RP; FECP, QFP client/driver, QFP subsystem and crypto assist on the ESP; IOCP, SPA drivers and Chassis Manager on the SIP]
ASR Integrated Zone-Based Firewall Protection
DoS, DDoS and Application-Layer Detection and Prevention
TCP SYN attack prevention:
• Protects against TCP SYN flood attacks on the FW session database
• SYN cookie protection: per zone, per VRF, per box
Half-open session limit:
• Protects the firewall session table from attacks that could be based on UDP, TCP and ICMP
• Half-open session limits are configurable per box and VRF level; per class supported initially
• FW resources are managed effectively with half-open session limit configuration knobs
• Logs are generated when limits are crossed
Basic threat detection:
• Enables detection of possible threats, anomalies and attacks per zone
• Monitors the rate of pre-defined events in the system; alerts sent to Sys/HSL logs
• Reports drops due to basic FW check failures, L4 inspection failures, and a count of dropped SYNs
Application-layer protocol inspection:
• Conformance checking, state tracking, security checks with granular policy control
• Over 20 inspection engines:
UC: SIP, Skinny, H.323, RTSP, ...
Enterprise apps: video/soft phones, H.323, FTP64
Core protocols: FTP, SNMP, DNS, POP3, ...
Database & O/S: LDAP, NetBIOS, Microsoft RPC, ...
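A minimal zone-based firewall sketch tying these pieces together (zone, class and policy names are hypothetical):

```
class-map type inspect match-any CM-ALLOWED
 match protocol ftp
 match protocol sip
!
policy-map type inspect PM-IN-OUT
 class type inspect CM-ALLOWED
  inspect                     ! stateful inspection; ALG engines engage per protocol
 class class-default
  drop
!
zone security INSIDE
zone security OUTSIDE
zone-pair security ZP-IN-OUT source INSIDE destination OUTSIDE
 service-policy type inspect PM-IN-OUT
!
interface GigabitEthernet0/0/0
 zone-member security INSIDE
interface GigabitEthernet0/0/1
 zone-member security OUTSIDE
```

Traffic between zones is dropped unless a zone-pair policy permits it; the half-open and threat-detection limits described above are layered on top of this session table.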
Cisco Router Security Certifications
[Table: FIPS 140-2 Level 2, Common Criteria EAL4, and NSA Suite B hardware-assist certifications across the Cisco ISR 890, 1900, 2900, 3900 and 3900E series and the Cisco ASR 1000 series]
** RP2 is only supported in ASR1004, ASR1006, and ASR1013
ASR 1000 IPSec Performance & Scale (For Your Reference)
Per platform: encryption throughput (Max/IMIX); VRFs (RP2/RP1); total site-to-site IPSec tunnels; tunnel setup rate per second with RP2 / with RP1; DMVPN BGP / EIGRP adjacencies (RP2/RP1, 5 routes per peer); FlexVPN + dVTI
ASR 1001: 1.8/1 Gbps; 4000 VRFs; 4000 tunnels; 130 / N/A; 3500 / 3500; 10,000
ASR 1001-X: 8/5.8 Gbps; 4000 VRFs; 8000 tunnels; 130 / N/A; 4000 / 4000; 10,000
ASR 1002-X: 4/4 Gbps; 8000 VRFs; 8000 tunnels; 130 / N/A; 4000 / 4000; 10,000
ESP5 (ASR 1002): 1.8/1 Gbps; 1000 VRFs; 4000 tunnels; N/A / 90; 3000 / 3000; 10,000
ESP10 (ASR 1002, 1004, 1006): 3.5/2.5 Gbps; 1000 VRFs; 4000 tunnels; 130 / 90; 3000 / 3000; 10,000
ESP20 (ASR 1004 & 1006): 9.2/6.3 Gbps; 8000/1000 VRFs; 8000 tunnels; 130 / 90; 4000 / 4000; 10,000
ESP40 (ASR 1004, 1006 & 1013): 12.9/7.4 Gbps; 8000/1000 VRFs; 8000 tunnels; 130 / 90; 4000 / 4000; 10,000
ESP100 (ASR 1006 & 1013): 29/16 Gbps; 8000 VRFs; 8000 tunnels; 130 / 90; 4000 / 4000; 10,000
ESP200 (ASR 1013): 78/59 Gbps; 8000 VRFs; 8000 tunnels; 130 / N/A; 4000 / 4000; 10,000
* RP2 is not recommended with ESP10; RP1 is not recommended with ESP20
HIGH AVAILABILITY
High Availability on the ASR 1000
ASR1000 Built for Carrier-Grade HA
• Redundant ESP / RP on ASR 1006 and ASR 1013
• Software redundancy on ASR 1001, ASR 1002, ASR 1004
• Zero packet loss on RP failover! Max 100ms loss on ESP failover
• Intra-chassis Stateful Switchover (SSO) support for:
Protocols: FR, ML(PPP), HDLC, VLAN, IS-IS, BGP, CEF, SNMP, MPLS, MPLS VPN, LDP, VRF-lite
Stateful features: PPPoX, AAA, DHCP, IPSec, NAT, Firewall
• IOS XE also provides full support for network resiliency:
NSF/GR for BGP, OSPFv2/v3, IS-IS, EIGRP, LDP
IP event dampening; BFD (BGP, IS-IS, OSPF)
GLBP, HSRP, VRRP
• Support for ISSU
• Stateful inter-chassis redundancy available for NAT, Firewall, SBC
[Diagram: ASR 1006 with active and standby Forwarding Processors and SPA carrier cards: zero packet loss on switchover]
Software Redundancy – IOS XE
ASR1002 and ASR1004
• IOS runs as its own Linux process for the control plane (routing, SNMP, CLI, etc.)
• The Linux kernel runs the IOS process in protected memory for:
Fault containment
Restartability of individual SW processes
• Software redundancy helps when there is an RP-IOS failure
• The active process will switch over to the standby, while forwarding continues with zero packet loss
• Can be used for ISSU of the RP-IOS package for control-plane bug fixes and PSIRTs
• Other software upgrades (for example SIP or ESP) cannot benefit from software redundancy
[Diagram: the active IOS process fails and the standby IOS process becomes active]
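In sketch form, enabling the standby IOS process on one of these single-RP systems is just the redundancy mode (behavior and any required reload vary by release):

```
redundancy
 mode sso    ! starts a standby IOS process alongside the active one
```

`show redundancy states` can then be used to confirm that the standby process has reached the STANDBY HOT state.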
IOS XE Platform Adaptation Layer (PAL)
[Diagram: Route Processor (Chassis Manager, Forwarding Manager, kernel) linked by control messaging to the SPA Interface Processor (SPA drivers, Chassis Manager, kernel) and the Embedded Services Processor (QFP client/driver, Forwarding Manager, Chassis Manager, kernel)]
ASR 1006 High Availability Infrastructure
Infrastructure for Stateful Redundancy
[Diagram: active and standby RPs, each running IOS with CF/RF services, IPC message queues and IDB state-update messages for HA-aware and non-HA-aware applications (config, MLD, CEF, Mcast, driver/media layer), synchronized over the interconnect (used for IPC and checkpointing); Forwarding Managers on active/standby RP and ESP with their QFP clients]
Provides hitless or near hitless switchover
Reliable IPC transport used for synchronization
HA operates in a similar manner to other protocols on the ASR 1000
ASR 1000 In-Service Software Upgrade
• Ability to upgrade the IOS image on single-engine systems
• Support for upgrade or downgrade
• One-shot ISSU procedure available for H/W-redundant platforms
• Hitless upgrade of some software packages: "in service" component upgrades (SIPBase, SIPSPA, ESPBase)
• RP portability: installing & configuring hardware that is not physically present in the chassis
Software Release Compatibility
From \ To      3.1.0          3.1.1          3.1.2       3.2.1          3.2.2
3.1.0          N/A            SSO Tested     SSO         SSO via 3.1.2  SSO via 3.1.2
3.1.1          SSO Tested     N/A            SSO Tested  SSO via 3.1.2  SSO via 3.1.2
3.1.2          SSO            SSO Tested     N/A         SSO Tested     SSO Tested
3.2.1          SSO via 3.1.2  SSO via 3.1.2  SSO Tested  N/A            SSO Tested
3.2.2          SSO via 3.1.2  SSO via 3.1.2  SSO Tested  SSO Tested     N/A
ASR1000
APPLICATIONS
ASR1000 Network Applications: 2700+ Features!
Routing, PE, Broadband, WiFi:
• IPv4 / IPv6 routing, transition
• BGP, RIP, IS-IS, OSPF, static routes
• GRE, MPLSoGRE, EoMPLSoGREoIPSec, ATMoMPLS
• MPLS L3 VPN
• L2VPN (ATM, Circuit Emulation)
• VPLS, H-VPLS PE; Carrier Ethernet services
• Route reflector, Internet peering
• Internet & WAN edge
• Broadband & WiFi aggregation
• Subscriber management
Multicast:
• IPv4 / IPv6 multicast router
• MVPN (GRE, mLDP), MVPN extranet
• IGMPv2/v3
• NAT & CAC
Secure WAN and PE:
• IPSec VPN – DES, 3DES, AES-128-GCM
• DMVPN, GETVPN, FlexVPN
• Secure group tagging (SGT)
• VRF-lite, MPLS-VPN over DMVPN
• IOS zone-based firewall, many ALGs
• Carrier-grade NAT
• VRF-aware
• Hardware accelerated (crypto + TCAM)
Application-layer services:
• SBC: CUBE Enterprise, CUBE SP (HCS, CTX)
• SIP, NAPT, Megaco/H.248, topology hiding
• AppNav – advanced WAAS redirection
• AVC: NBAR2, hardware-accelerated DPI
• Application-aware QoS policy
ASR1000 Unified Communications Applications
Session Border Controller:
• Cisco Unified Border Element (ENT) (CUBE(ENT))
• Full trunk-side SBC functionality
• Session management, demarcation, security, interworking
• Connect CUCM to SIP trunks
• Connect 3rd-party IP PBX to SIP trunks
• DSP-based transcoding up to 9000 calls with the DSP SPA module; noise cancellation
• High-density media forking
• UC Service API
• 3rd-party API for call control
• SRTP encryption in hardware (ESP)
• Line-side SBC functionality for voice endpoints
Cisco Unified Call Manager (CUCM):
• Software Media Termination Point (MTP)
• Scales to 5000 sessions
Media performance aware:
• Performance-aware statistics based on media traffic analysis
• Packet loss, jitter, delay, metadata for media flows
• Media trace (traceroute for media flows)
• Class-specific threshold crossing alerts
• NetFlow and SNMP/MIB based reporting
• Compatible with Cisco Media architecture and equipment
Routing baseline:
• IPv4 / IPv6 routing, transition
• BGP, RIP, IS-IS, OSPF, static routes
• MPLS L3 VPN, L2VPN, GRE, IPSec
• VPLS, H-VPLS PE; Carrier Ethernet services
• IPv4 / IPv6 multicast router
• MVPN (GRE, mLDP), IGMPv2/v3
• Rich connectivity options
ASR1000 Applications: Carrier Ethernet & MPLS VPN
MPLS L3 VPN Applications
Extensive MPLS feature set:
• VRF-lite / Multi-VRF CE
Sub-interface per VRF for the CE/PE interface
Up to 4000 VRFs
• MPLS VPN (RFC 2547)
IPv4 & IPv6; 6PE/6VPE support
MPLS over GRE overlay for large Enterprise VPNs
• MPLS TE/FRR
FRR link, path & node protection
RSVP- & BFD-triggered FRR
Path-first tunnel computation
• Multicast VPN
Per-VRF unicast and multicast forwarding
PIM or BGP customer signalling; PIM, MLDP or P2MP TE core
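A minimal PE-side VRF sketch of the building blocks above (RD/RT values and the interface are illustrative):

```
vrf definition CUST-A
 rd 65000:1
 address-family ipv4
  route-target export 65000:1
  route-target import 65000:1
!
interface GigabitEthernet0/0/1.100   ! sub-interface per VRF on the CE/PE link
 encapsulation dot1q 100
 vrf forwarding CUST-A
 ip address 192.168.100.1 255.255.255.0
```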
[Diagram: 2547oDMVPN: hub as P or PE, with mGRE/GRE tunnels from remote branches/customers over MPLS to the campus/MAN, route reflectors, E-P, E-PE and Campus-PE nodes providing the IP service]
[Diagram: Multicast VPN: a multicast source behind a CE, PEs with PMSI instances across the provider network, and a multicast receiver behind a remote CE]
MPLS VPN Multi-Service Edge
• Layer 3 routing protocols available on PE-CE: static, RIP, OSPF, EIGRP, eBGP
• IP services can be configured on a per-VPN basis on the PE router
• Traffic engineering for bandwidth protection and restoration
• QoS: HQoS and policing at CE and PE routers
• Layer 2 circuits available: Ethernet, ATM CRoMPLS (VP and VC mode), ATM AAL5, Frame Relay
L3 VPNs, L2VPNs, traffic engineering, QoS + IP services coexist on a single infrastructure
Carrier Ethernet Applications
Optimized for 40-Gbps to 200-Gbps Requirements
[Diagram: residential, business and mobile access (ETTx, 802.1q, 802.1ad, DSL, LAN, PON, cable, MSPP) aggregated as untagged/single-tagged/double-tagged traffic through L2 point-to-point, L2 multipoint bridged, L2 multipoint VPLS and L3 routed services at the provider edge, with service-edge integration (BRAS, DPI, SR/PE) toward an IP/MPLS core and content farms (VoD, TV, SIP)]
ASR 1000 Carrier Ethernet Capabilities
• Support for EVC infrastructure
VLAN tags (single, double, ambiguous, untagged)
802.1ad S-VLANs
Custom EtherType (e.g. IPv4/v6, PPPoE Discovery, PPPoE Session)
CoS support (802.1p bits)
• Flexible EVC forwarding services
Pseudowire headend, bridge domain interface
• Ethernet OAM support
Link OAM, CFM, 802.1ag + Y.1731 extensions, 802.3ah, loopback, ELMI
• Support for E-Line, E-LAN, E-Tree
Port/VLAN/.1q modes with interworking and local switching!
• Strong UNI features
HQoS, security ACL, MAC security
• Flexible tag matching and manipulation
[Diagram: EVC infrastructure: EFPs on ports mapped to xconnect (pseudowire), local connect / hair-pin connect, bridge domains (L2 multipoint bridging, L2 VFI with pseudowires), and routed sub-interfaces (L3/VRF); ATM/FR L2 interworking not yet supported]
* EVC = Ethernet Virtual Circuit
* UNI = User to Network Interface
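A small EVC sketch: one EFP popping a single 802.1Q tag into a bridge domain, with a BDI providing the routed L3 service (identifiers are illustrative):

```
interface GigabitEthernet0/0/2
 service instance 10 ethernet          ! EFP
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric  ! flexible tag manipulation
  bridge-domain 100
!
interface BDI100                       ! routed bridge-domain interface
 ip address 10.10.10.1 255.255.255.0
```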
Can ASR1000 Be a Layer 2 Switch?
Yes!
• EVC addresses flexible Ethernet edge requirements
• Flexible VLAN manipulation
• Virtual interface (BDI) similar to an SVI on a switch
• Supports spanning-tree protocols (MST, PVST, RPVST+)
• Supports various Ethernet encapsulations (802.1Q, 802.1ad, Q-in-Q, 802.1ah)
• VLAN to forwarding service (L3/BDI, P2P, P2MP)
• Supports E-OAM capabilities (CFM, Y.1731, link EOAM, etc.)
No!
• LAN-switch port density
• Lowest cost per port
• Rich IOS LAN-switch functionality & capability
Answer: a handy solution to absorb a switch/trunk in some situations, especially for integrated L3 edge applications
ASR1000 VPLS Services
• A common VC ID between PEs creates a Virtual Switching Instance
• VPLS full-mesh, hub/spoke & H-VPLS Provider Edge (PE)
• 128K MAC addresses, broadcast storm control
• VPLS over GRE + IPSec
• VPLS auto-discovery
LDP signalled (RFC 6074)
BGP signalled (RFC 4761)
• Inter-AS support
Option A (BGP signalled)
Option B, C (LDP signalled)
• U-PE dual-homing
Multiple Spanning Tree with control pseudowire
• Routed pseudowire
VPLS circuit terminated on a Bridge Domain Interface
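In legacy CLI, a VFI ties the pseudowire mesh to a bridge domain roughly like this (VC ID, neighbor addresses and the EFP are illustrative):

```
l2 vfi CUST-A manual
 vpn id 100                           ! common VC ID across all PEs
 bridge-domain 100
 neighbor 192.0.2.2 encapsulation mpls
 neighbor 192.0.2.3 encapsulation mpls
!
interface GigabitEthernet0/0/3
 service instance 20 ethernet         ! attachment circuit into the same BD
  encapsulation dot1q 20
  bridge-domain 100
```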
[Diagram: N-PEs around an MPLS-enabled core: a full mesh of targeted LDP sessions exchanges VC labels, the core forms tunnel LSPs, attachment VCs are port mode or VLAN ID, and an H-VPLS PE connects a U-PE]
Acronyms: CE = Customer Edge device; N-PE = Network-facing Provider Edge; U-PE = User-facing Provider Edge; VSI/VFI = Virtual Switching/Forwarding Instance
ASR1000 Applications: Secure VPN
IPSec VPN Applications
• GETVPN
MPLS-VPN, VRF-lite, SP multicast replication
Group key management, centralized key server
• DMVPN
RFC-2547oDMVPN, VRF-aware DMVPN
Supports BGP, EIGRP & per-tunnel QoS
• FlexVPN
Remote-access VPN with policy control
User or device security policy through AAA
Great for SPs! VRF-awareness; NSA Suite B cryptography
[Diagrams: 2547oDMVPN with the hub as P or PE over mGRE/GRE tunnels across MPLS to the campus/MAN (route reflectors, E-P, E-PE, Campus-PE); and VRF-lite over DMVPN with one mGRE tunnel per VRF from multi-VRF CEs at remote branches]
Dynamic Multipoint VPN (DMVPN)
Site-to-Site, Dynamic Full-Mesh VPN
• Highly scalable VPN overlay over any transport network; ideal for hybrid MPLS/Internet
• Branch spoke sites establish an IPsec tunnel to, and register with, the hub site
• IP routing exchanges prefix information for each site; BGP or EIGRP for scale
• With the WAN interface IP address as the tunnel address, the provider network does not need to route customer internal IP prefixes
• Data traffic flows over the DMVPN tunnels
• When traffic flows between spoke sites, dynamic site-to-site tunnels are established
• Per-tunnel QoS is applied to prevent hub-site oversubscription of spoke sites
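A minimal DMVPN hub tunnel sketch (addresses, NHRP network-id and the IPsec profile name are illustrative):

```
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0    ! overlay address
 ip nhrp network-id 1
 ip nhrp map multicast dynamic        ! replicate routing-protocol multicast to registered spokes
 tunnel source GigabitEthernet0/0/0   ! WAN-facing interface
 tunnel mode gre multipoint           ! mGRE: one tunnel interface serves all spokes
 tunnel protection ipsec profile DMVPN-PROFILE
```

Spokes use a similar tunnel with a static NHRP mapping toward the hub; spoke-to-spoke tunnels are then built on demand via NHRP resolution.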
[Diagram: secure on-demand tunnels: an ASR 1000 hub with an IPsec VPN to ISR G2 branches 1 through n; traditional static tunnels require static, known IP addresses, while DMVPN on-demand tunnels work with dynamic, unknown IP addresses]
ASR1000 WAN Applications: Performance Routing (PfR)
What is Performance Routing (PfR)? Tooling for Intelligent Path Control
"Performance Routing (PfR) provides additional intelligence to classic routing technologies to track the performance of, or verify the quality of, a path between two devices over a Wide Area Networking (WAN) infrastructure to determine the best egress or ingress path for application traffic...."
• Cisco IOS technology
• Two components: Master Controller (MC) and Border Router (BR)
[Diagram: data center with an MC and BRs on DSL and cable links; a branch running a combined MC+BR]
Performance Routing — Components
The Policy Controller: Domain Controller (DC)
• Discovers site peers and connected networks
• Advertises policy and services; discovers topology and prefixes
• One per domain, co-located with an MC
The Decision Maker: Master Controller (MC)
• Discovers BRs, collects statistics
• Applies policy, verification, reporting
• No packet forwarding/inspection required
The Forwarding Path: Border Router (BR)
• Gains network visibility in the forwarding path (learn, measure)
• Enforces the MC's decision (path enforcement)
• Does all packet forwarding
[Diagram: data center with an MC/DC and BRs on DSL and cable links; a branch running a combined MC+BR]
Intelligent Path Control with PfR
Voice and video use case
• VOICE/VIDEO takes the best delay, jitter, and/or loss path (e.g. MPLS toward the private cloud), and will be rerouted if the current path degrades below policy thresholds
• OTHER TRAFFIC is load-balanced (e.g. over the Internet path toward a virtual private cloud) to maximize bandwidth
• PfR monitors network performance and routes applications based on application performance policies
• PfR load-balances traffic based upon link utilization levels to efficiently utilize all available WAN bandwidth
ASR1000 and Cisco Intelligent WAN (IWAN)
IWAN sessions this week:
BRKARC-2000 IWAN Architecture
BRKCRS-2002 IWAN Design and Deployment Workshop
BRKRST-2514 Application Optimization and Provisioning the Intelligent WAN (IWAN)
BRKNMS-2845 IWAN and AVC Management with Cisco Prime Infrastructure
BRKRST-2362 Implementing Next Generation Performance Routing (PfRv3)
BRKRST-2041 WAN Architecture and Design Principles
BRKRST-2042 Highly Available Wide-Area Network Design
Intelligent WAN: Leveraging the Internet
Secure WAN Transport and Internet Access
• Secure WAN transport for private and virtual private cloud access
• Leverage the local Internet path for public cloud and Internet access
• Increased WAN transport capacity, cost-effectively!
• Improved application performance (right flows to right places)
[Diagram: branch with hybrid WAN transport: an IPsec-secured MPLS (IP-VPN) path to private and virtual private clouds, plus direct Internet access to the public cloud]
Intelligent WAN Solution Components
Transport Independent:
• Consistent operational model
• Simple provider migrations
• Scalable and modular design
• DMVPN IPsec overlay design
Intelligent Path Control:
• Application best path based on delay, loss, jitter, path preference
• Load balancing for full utilization of all bandwidth
• Improved network availability
• Performance Routing (PfR)
Application Optimization:
• Akamai caching and best-path selection
• Performance monitoring with Application Visibility and Control (AVC)
• Acceleration and bandwidth savings with WAAS
Secure Connectivity:
• Certified strong encryption
• Comprehensive threat defense with ASA and IOS firewall/IPS
• Cloud Web Security (CWS) for scalable, secure direct Internet access
[Diagram: branch connected over MPLS, Internet and 3G/4G-LTE transports to private, virtual private and public clouds, with AVC, WAAS and PfR applied]
SD-WAN Automation with IWAN
• APIC-EM centralized policy expression & distribution
• Distributed policy enforcement
• Automated application & topology discovery
• Application & network performance monitoring
• Adaptive path selection and QoS to sustain policy
• Performance analytics collected network-wide and reported centrally
[Diagram: APIC-EM / IWAN domain controller in the data center or POP handling policy expression, rendering, and distribution & domain control; MCs at data centers/POPs, large sites, campus and branches over MPLS (IP-VPN), Internet and 4G LTE transports]
IWAN Automated Secure VPN (1H2015)
• Embedded trust devices (AX routers) across campus, large site and branches over Metro-E, MPLS, ISP and 4G transports
• Secure bootstrap: automatic configuration and trust establishment
• Automatic session key refresh (IKEv2); trust revocation
• Key and certificate controller in the DC; deploy, search, retrieve and revoke via the IWAN App, Prime, or 3rd-party configuration orchestration
• Optional external Certificate Authority
Start with Cisco AX Routers
Embedded IWAN capabilities: 3900 | 2900 | 1900 | 890 | 4000 | ASR1000
One network, one IOS XE, unified services (ASR1000-AX, ISR-4000 AX, ISR-AX):
• Visibility, control, optimization
• Simplified application delivery
• Transport independence
• Secure routing
SUMMARY
Summary and Key Takeaways
• ASR 1000 is the Swiss Army knife for your tough network problems
• Reduce complexity in your network edge
• ASR 1000 is positioned for both Service Provider and Enterprise architectures
• ASR1000 is at the heart of Cisco IWAN for SP and Enterprise applications
• One IOS XE everywhere with ISR4000
• Come see it live at our WoS booth!
Participate in the "My Favorite Speaker" Contest
Promote your favorite speaker and you could be a winner
• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)
• Send a tweet and include:
Your favorite speaker's Twitter handle: @swood0214
Two hashtags: #CLUS #MyFavoriteSpeaker
• You can submit an entry for more than one of your "favorite" speakers
• Don't forget to follow @CiscoLive and @CiscoPress
• View the official rules at http://bit.ly/CLUSwin
Complete Your Online Session Evaluation
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.
Don't forget: Cisco Live sessions will be available for viewing on demand after the event at CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco campus
• Walk-in self-paced labs
• Table topics
• Meet the Engineer 1:1 meetings
• Related sessions