Building Next Generation Datacenter Networks
Boris Germashev
Regional Director, Eastern & Central Europe, CIS, Russia
Thank You!
• To our customer SRCE and our partner S&T for providing this opportunity!
Three Corners of the Data Center Triangle
Compute, Data, and I/O
Moore’s Law For Data Centers
• Compute doubles every 1.5 years
• Data doubles every 1.5 years
• I/O doubles every 4 years
View from Open Networking Foundation (ONF)
• The Need for a New Network Architecture
– Changing traffic patterns
– The “consumerization of IT”
– The rise of cloud services
– “Big data” means more bandwidth
• Limitations of Current Networking Technologies
– Complexity that leads to stasis
– Inconsistent policies
– Inability to scale
– Vendor dependence
• ONF touches two major topics
– Scalability
– Flexibility/automation
The SDN Model
• Separate the Control/Management Plane from the Data/Forwarding Plane
• Centralize Network Intelligence and State
• Abstract the Network Infrastructure from Applications
• Make the Control and Management Plane Programmable
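The four points above can be sketched as a toy model. All class and method names below are invented for illustration; this is not a real controller API, just the shape of the split: switches hold only a forwarding table, while a central controller owns the network state and programs the switches.

```python
class Switch:
    """Data/forwarding plane: only applies rules the controller installed."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                  # match (dst IP) -> action (port)

    def install(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # A real switch would punt unknown traffic to the controller.
        return self.flow_table.get(dst, "punt-to-controller")


class Controller:
    """Control/management plane: centralized state, programmable by apps."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def set_route(self, dst, switch_name, port):
        # Applications express intent; the controller programs the hardware.
        self.switches[switch_name].install(dst, port)


ctl = Controller()
sw1 = Switch("sw1")
ctl.register(sw1)
ctl.set_route("10.1.1.2", "sw1", "port24")
print(sw1.forward("10.1.1.2"))    # -> port24
print(sw1.forward("10.9.9.9"))    # -> punt-to-controller
```

The point of the abstraction: applications talk to `set_route` (intent), never to a switch directly.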
Network Technology to Meet Future Demand
Scalable
• Bandwidth aggregation to multiply I/O
• Seamless migration to higher speeds and feeds
Flexible
• I/O diversity and mix-n-match
• Auto-provisioning and configuration
Economical
• Minimal cost increase with speed migrations
• Reusable in terms of infrastructure and training
Reliable
• Is resilient and time tested
• Provides required level of service up time
Example of “Complexity which leads to stasis”
VM Mobility Issues Today: the Network has Zero Visibility into the VM Lifecycle

[Diagram: the server admin initiates a move through the Virtual Machine Manager. The network admin has configured the source switch port (e.g. IP: 1.1.1.2, MAC: 00:0A, QoS: QP7, ACL: Deny HTTP), while the destination switch port config is none or disabled.]

When a VMotion or Live Migration occurs automatically or is initiated by the server admin, the network admin has NO visibility into the VM location or when the movement occurs.

Result: The VM (VM1, IP: 1.1.1.2, MAC: 00:0A) moves to a destination switch port that is incorrectly configured to deliver network services to that specific VM.
Example of a “Network Application”
Extreme Networks XNV – VM Lifecycle Management
Network Visibility into the VM Lifecycle: location-based VM awareness at the network level for efficient VM mobility.

[Diagram: the server admin initiates a move through the Virtual Machine Manager. Ridgeline queries the VM manager for VM info, and the Virtual Port Profile (IP: 1.1.1.2, MAC: 00:0A, QoS: QP7, ACL: Deny HTTP) follows the VM from the XNV™-enabled source switch port to the destination port, whose static config was none or disabled.]

Ridgeline™, through XML integration, can:
• Pull inventory from the virtual machine manager
• Locate VMs on network switches
• Show inventory of VM-to-switch-port mapping
• Define Virtual Port Profiles (VPPs)
• Assign VPPs to VMs and distribute them to switches
• Respond to VM motion occurrences

Result: Both the VM (VM1, IP: 1.1.1.2, MAC: 00:0A) and the Virtual Port Profile move to the destination switch port. Network-level visibility into VM movement is achieved to deliver a better SLA.
How is it related to SDN?
Proactive Dynamic Networks: Enabling the Move from a Static Network to a Dynamic Network

Static:
• Limited visibility of User, Device, Location, and Presence
• Network provisioning and monitoring based on: IP address, TCP/UDP port information, static ACLs
• Manual configuration
• Reactive management

Dynamic:
• Awareness of User, Device, Location, and Presence
• Network provisioning and monitoring based on: user identity, device identity, virtual machine identity, role-based access, dynamic ACLs
• Automated configuration
• Proactive management
Transparent Authentication with AD Login
User and Device Awareness through Transparent Authentication
• No software agents required – utilize existing authentication methods
• No need to retrain users on logging on to the network

Example entry: Username John_Smith, IP 10.1.1.101, MAC 00:00:00:00:01, Computer Name Laptop_1011, VLAN 1, Switch Port 24.

1. The user logs into the Active Directory domain with user name and password.
2. The ExtremeXOS® network “snoops” the Kerberos login by capturing the user name.
3. Active Directory validates and approves the user credentials and responds to the host (“Success”).
4. ExtremeXOS grants network access based on the AD server response.

[Diagram also shows the surrounding infrastructure: Internet, Intranet, Mail Servers, Active Directory Server, RADIUS Server, LDAP Server, CRM Database.]
Why does it matter for the DC?
Role-based Access
Turning bits and bytes of information into “rich content” (users, devices, and their location) and achieving automatic provisioning with Role-based Policies.

Username/Device    IP          MAC                Computer Name   Role        VLAN  Port  Switch Location
John_Smith         10.1.1.101  00:00:00:00:00:01  John’s_Laptop   Employee    1     24    Wiring closet, building 2
Alice_Jones        10.1.1.200  00:00:00:00:00:02  Science_PC      Contractor  1     1     3rd floor, building 3
Cisco VoIP Phone   10.1.2.100  00:00:00:00:00:03  n/a             Voice       10    2     3rd floor, building 4
Dell iSCSI_Array   10.3.1.111  00:00:22:00:00:10  n/a             Storage     20    8     Data Center
<unknown>          10.1.1.50   00:00:00:00:00:50  n/a             Guest       1     1     Media building
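The table above amounts to a role lookup: identity attributes resolve to a role, and the role (not the raw IP or MAC) determines the VLAN and ACLs, so policy follows the user or device. A minimal sketch, with illustrative role names and ACL strings not taken from any real product:

```python
# Role-based provisioning sketch (roles/ACLs are illustrative only).
ROLE_POLICY = {
    "Employee":   {"vlan": 1,  "acls": ["permit any"]},
    "Contractor": {"vlan": 1,  "acls": ["deny intranet", "permit internet"]},
    "Voice":      {"vlan": 10, "acls": ["permit sip", "permit rtp"]},
    "Storage":    {"vlan": 20, "acls": ["permit iscsi"]},
    "Guest":      {"vlan": 1,  "acls": ["permit internet only"]},
}

def provision(identity):
    """Resolve a detected identity to a dynamic per-port configuration."""
    policy = ROLE_POLICY[identity["role"]]
    return {"port": identity["port"], **policy}

cfg = provision({"user": "John_Smith", "role": "Employee", "port": 24})
print(cfg["vlan"], cfg["port"])   # 1 24
```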
Differentiator: The Power of ExtremeXOS®
Resilient and Proven, Modular, Intelligent and Personalized
• Adaptable across platforms
• Predictable performance
• Memory protected
• Scripting and open interfaces
• Distributed policies
• Virtualization
• Loadable modules

The Power and Service Predictability of a Single OS: from the Service Provider, through the Enterprise Edge and Core, and into the Data Center.
Extreme Networks Open Fabric and SDN

SDN Applications:
• VM Lifecycle Management (XNV)
• User Identity Management (IDM)
• Collaborative Programming (XKIT)
• Application Performance Management
• Bring Your Own Device (BYOD)

Centralized Management/Orchestration Platform: Ridgeline

EXOS – Extensible, Open, Secure Network OS: XML, modular scripts, external app SDK, predictable performance, OpenStack Quantum plugin, memory protected, hardware abstracted.

High Performance Converged Open Fabric: low latency, high capacity, MLAG, DCB, OpenFlow.
xKit: XOS Extensibility and Tools
Crowd-Sourced Knowledge Base to Empower IT
New SDN Applications from Big Switch
• Supported by Extreme Networks
• Tested interoperability of Extreme Networks switches and the Big Network Controller
• Big Tap delivers traffic monitoring and dynamic network visibility with flow filtering
• Big Virtual Switch virtualizes the network: it provisions the physical network into multiple logical networks from Layer 2 to Layer 7
Flexibility of SDN requires a new level of scalability
• Each flow can have many attributes
  – This is what provides the flexibility
  – Yet it requires the network switches to store hundreds of thousands of flows in hardware!
• Extreme Networks has been doing this for years with ExtremeXOS!
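A back-of-envelope calculation shows why per-flow state explodes: a flow entry keys on several header fields at once, so the entry count grows with the product of the distinct values seen, not with the number of subnets. All the numbers below are hypothetical.

```python
# Illustrative only: distinct values per match field in a busy DC segment.
fields = {
    "src_ip":   200,   # distinct active sources (hypothetical)
    "dst_ip":    50,   # distinct destination services
    "proto":      2,   # TCP / UDP
    "dst_port":  10,   # distinct service ports
}

worst_case_flows = 1
for n in fields.values():
    worst_case_flows *= n   # every combination can be its own flow entry

print(worst_case_flows)     # 200 * 50 * 2 * 10 = 200000
```

Even modest per-field diversity lands in the hundreds of thousands of hardware entries the slide refers to.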
Network to support SDN
High Density 10 GbE Server Migration (Source: Dell’Oro, 2011)

[Chart residue: switch attach rate on servers doubling (2x); server bandwidth growth of 15%, 45%, and 80%; infrastructure costs driven by power per rack and by cables and switch ports.]

Dell’Oro 10GbE Forecast
• RJ-45 10Gbit Ethernet!
• Segments: standalone server LOM, blade server LOM, adapter cards

Two Factors Driving the Dell’Oro Forecast
• Increased IT spending
• Rapid integration of Blade LOM
40 and 100G Market Evolution (Dell’Oro)
• Initial adoption of 100GbE was limited to Service Providers and Internet Exchange Points.
• In Data Centers, 100GbE adoption will depend on relative cost-per-port and 40GbE adoption on the server side.
• 40GbE server access will start gaining traction after 2014, driving 40GbE port volume.
• 40GbE for aggregation will grow.
• CFP2 in 2H 2013 drives an inflection point in 100GbE cost-per-port and density – a catalyst for data center 100GbE deployment.
Barriers to wide adoption of 40G/100G (Source: Infonetics Research, 9 Nov 2012)
• Cost
• Reliability of new physical components (especially optics)
Extensive Portfolio for Wired and Wireless Networks
SDN is supported across all families of
Extreme Networks Switches!
BlackDiamond X8 - Introduction

Highest Consolidation
• 14.5 RU – 1/3rd of a rack
• 768 x 1/10G wire-speed
• 192 x 40G wire-speed
• High-Density 100G

Unmatched Performance
• 20 Tbps capacity/switch
• 2.56 Tbps bandwidth/slot
• 30.5 Bpps throughput
• 1 Million L2/L3 scale

Ultra-Low Latency
• 2.3 µs – Unicast*
• 2.4 µs – Multicast*

End-to-End Virtualization
• 128K Virtual Machines
• VM Lifecycle Management
• VEPA, VPP, XNV™
• VR, MLAG, VPLS

Full Convergence
• iSCSI, NFS, CIFS
• DCBx (PFC, FS, ETS)
• FCoE Transit
• FC SAN Connectivity**

High Availability
• 1+1 Management
• N+1 Fabric, Power & Fan
• N+N Power Grid
• EAPS, LAG, VRRP

Lowest TCO
• Front-to-Back Cooling
• Variable Fan Speed
• 5.6W per 10GbE port*
• Intelligent Power Mgmt.

* Based on Lippis Test Report
** Through QLogic UA5900
BlackDiamond X8 in Different Data Center Verticals

[Diagram: three deployment examples built around the BDX-8: High Performance Computing (40Gb LAG, 40Gb iSCSI, X670 leaf switches), Internet Exchange Points (multiple ISPs at 10Gb over DWDM), and Virtualized Multi-Tenant data centers (Customers A, B, and C).]
VBLOCK & Extreme for DR & Avoidance
• Ring-based interconnect between VBlocks within a Data Center as a high-speed bus
• Ethernet Ring Protection Switching (ERPS) for Business Continuity and Disaster Recovery
  o Commoditizing 100% application availability – at 10Gb Ethernet costs
  o Simplifying the Data Center network – true convergence without DCB/FCoE
  o Extending the Metro Data Center over Ethernet – at much lower cost

[Diagram: BlackDiamond X8 chassis connected in an ERPS ring.]
High Performance Open Fabric - BlackDiamond X8

Open Fabric modules:
• 48-Port 100/1000/10000Mb RJ45 Module
• 48-Port 10GbE SFP+ Module
• 12-Port 40GbE QSFP+ Module
• 12-Port 40GbE-XL QSFP+ Module (New)
• 24-Port 40GbE QSFP+ Module
• 4-Port 100GbE-XL CFP2 Module (New)
BlackDiamond X8 Chassis – Rear View

Rear Configuration
• 4 Fabric Module slots
• 5 Fan Tray slots
• 8 Power Supply sockets

Fabric Modules
• Orthogonal direct coupling
• 3+1 Switch Fabric Modules
• 20.48Tbps switching capacity
• 2.56Tbps bandwidth per slot

Fan Trays
• 4+1 Fan Trays
• 5+1 fans per tray
• Variable fan speed control
• Front-to-back airflow pull
Density to Performance Proportionality
• Direct Connect data path for ultimate performance
• Future-proof chassis architecture: no mid-plane in the data path
• Fabric Modules and I/O Modules mate directly; the mid-plane is used only for the management path
Highest Capacity for Traffic Handling
BlackDiamond X8 provides the highest Return on Investment!
20Tbps is just the beginning (no limitation in the chassis!)

[Chart: switching capacity vs. current and future traffic – Dell E600i, HP 12500, Arista 7500, Juniper EX8200, Cisco Nexus 7K, Brocade MLXe16, and Juniper QFabric in the 2–10 Tbps range; Extreme BDX8 at 20 Tbps.]
BlackDiamond X – 4-Port 100GBaseX-XL Module
Main Features:
• 4 ports of wire-speed 100GbE (CFP2)
• Non-blocking performance
• Choice of 100G-SR (100m) / LR (10km) optics
• Large scale: 1 Million L2/L3 entries
• N+1 power support with a fully populated chassis
• Supported with 10T and 20T fabrics
• Availability: CY13
• Price: thanks to the new design, about 1/4 of current market 100G pricing
100GbE Optics Comparison
• CFP: Cisco Nexus 7000, Juniper MX
• CFP2: Extreme BDX
• CFP4: late 2014
Unified Forwarding Table: Unprecedented Deployment Flexibility
• Legacy modules: fixed, separate tables for L2 MAC, L3 IPv4/v6, IP Multicast, and ACL/Flow
• New XL modules: a Unified Forwarding Table (UFT) with optimal table utilization
• Network-deployment-based profiles: L2/L3 Balanced, L3 Heavy, Flow/ACL Heavy
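The profile idea can be modeled in a few lines: one shared pool of hardware entries is carved into per-type partitions according to the chosen deployment profile. The pool size and percentage splits below are illustrative, not BDX8 datasheet values.

```python
# Toy model of a Unified Forwarding Table (all numbers hypothetical).
UFT_SIZE = 1_000_000   # shared entry pool

PROFILES = {
    # percent of the pool given to each table type
    "l2_l3_balanced": {"l2_mac": 40, "l3_routes": 40, "mcast": 10, "acl_flow": 10},
    "l3_heavy":       {"l2_mac": 15, "l3_routes": 70, "mcast": 5,  "acl_flow": 10},
    "flow_acl_heavy": {"l2_mac": 15, "l3_routes": 15, "mcast": 10, "acl_flow": 60},
}

def carve(profile):
    shares = PROFILES[profile]
    assert sum(shares.values()) == 100   # must account for exactly the pool
    return {t: UFT_SIZE * pct // 100 for t, pct in shares.items()}

tables = carve("l3_heavy")
print(tables["l3_routes"])   # 700000 route entries in an L3-heavy DC
```

The same hardware serves an L2-heavy virtualization pod or a flow-heavy SDN deployment just by picking a different profile.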
What is coming to your DC
Expansion of Ethernet
• Ethernet vs. InfiniBand and Fibre Channel
  – The major driver for Fibre Channel is the high risk of making changes in an existing SAN
  – The major driver for InfiniBand is cost/performance
• Metro Ethernet
  – Active expansion of 10Gb aggregation in the metro!
  – Vympelcom Russia – agreement for 48,000 ports of 10Gb on Extreme X670
• Mobile backhauling
  – Replacement of TDM with optical Ethernet
  – Extreme Networks E4G with TDM and Ethernet (SyncE, 1588v2 in HW)
• Audio Video Bridging
  – A new wave of standards around AVB
  – Also requires clock synchronization
  – Replacement of legacy audio interfaces
  – Extreme Networks is the only vendor supporting AVB in an Enterprise-class switch
What is a Data Center? Barco Digital Operating Room
An Extreme Networks X670 is at the heart of each room!
• A fully IP-centric solution for image distribution in the operating room. The system architecture has been specifically designed to meet the performance demands and unique requirements of the surgical suite, such as high-quality imaging, ultra-low latency, and real-time communication.
• Dozens of 10G ports in each room!
Audio Video Bridging (AVB) Overview
Coming to your datacenter!
• AVB networks are used to:
  – Support time- and bandwidth-sensitive applications
  – Over standard Ethernet, while
  – Coexisting with other “legacy” (non-AV) Ethernet traffic
• Goal: synchronization & QoS for multiple streams
  – Voice, video & control
  – Multiple audio streams for a multi-digital-speaker deployment in a large venue
  – Multiple video streams in a security surveillance application
• Applications: hospitals, broadcast/production studios, conference halls, security and monitoring, live performance stages, theme parks, high-end residential AV installations, restaurants, hotels, airports, industrial control, automotive
Properties of AVB Systems
• Time Synchronization
  – It must be possible to synchronize multiple streams with respect to each other.
  – Clocks should be capable of being synchronized to within approximately 1 µs.
• QoS for AV Streams
  – Domain detection.
  – Bandwidth guarantees.
  – Determine, guarantee, and report worst-case delay bounds.
  – Prioritization.
  – Traffic shaping.
  – Protection of AVB traffic from non-AVB traffic.
• Network Convergence
  – Allow AV traffic to coexist with other non-AV traffic on the same network.
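The sub-microsecond clock sync above relies on a timestamped two-way message exchange (IEEE 1588/802.1AS style). With symmetric path delay assumed, the standard formulas recover both the slave's clock offset and the one-way delay; the nanosecond values below are made up for illustration.

```python
# Two-way timestamp exchange behind AVB clock sync (symmetric delay assumed).
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.

def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Example: slave clock runs 500 ns ahead; true one-way delay is 1000 ns.
off, d = offset_and_delay(t1=0, t2=1500, t3=2000, t4=2500)
print(off, d)   # 500.0 1000.0
```

The slave then steers its clock by `-offset`, which is how a whole AVB domain converges to within about 1 µs.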
You can do it without a Network Media System, but expect:
• Diminished DSP resources
• Greater cost
• Different software for design and control
• More training and education
• Miles of cable and conduit
Using a Network Media System
• A single-platform system operating on a single network
• Maximized processing resources
• Control from one central location
• Efficient expansion to new areas and buildings as you grow
• Easy to upgrade features and functionality
• Simplified design, installation and support
• Greater profitability
Audio-Video Bridging
Why Audio Video Bridging (AVB)?
• Moving to Ethernet solves cabling issues, but:
  – It introduces latency/guaranteed-bandwidth and synchronization issues
• AVB
  – Is based on new IEEE Ethernet standards
  – Is open and interoperable, so any vendor can support it
  – With the AVnu Alliance, that interoperability is certifiable
• The interoperability test included 15 member companies:
  – Pro A/V equipment manufacturers Avid, Biamp, Bosch, Harman, Meyer Sound, Riedel Communications, Sennheiser, and Yamaha
  – Network equipment vendor Extreme Networks
  – Platform providers Analog Devices, Audinate, Lab X, Marvell, UMAN, and XMOS
• Extreme Networks is now the only Enterprise-class vendor supporting AVB!
What does Wi-Fi have to do with your DC?
802.11ac coverage and throughput
Summary
Extreme Networks Open Fabric and SDN

SDN Applications:
• VM Lifecycle Management (XNV)
• User Identity Management (IDM)
• Collaborative Programming (XKIT)
• Application Performance Management
• Bring Your Own Device (BYOD)

Centralized Management/Orchestration Platform: Ridgeline

EXOS – Extensible, Open, Secure Network OS: XML, modular scripts, external app SDK, predictable performance, OpenStack Quantum plugin, memory protected, hardware abstracted.

High Performance Converged Open Fabric: low latency, high capacity, MLAG, DCB, OpenFlow.
Extreme Networks Open Fabric
A Pragmatic Fabric on Best-of-Breed Hardware Platforms

[Diagram: the Open Fabric connects Storage and Compute and is characterized by:]
• Standards-based, open & interoperable: VEPA, OpenFlow, OpenStack, TRILL, DCB
• Open ecosystem
• Automation & virtualization: inventory, provisioning, virtualization, security
• Intelligence: East/West traffic, history
• Scale, performance, flexibility, latency
Network Technology to Meet Future Demand
Scalable
• Bandwidth aggregation to multiply I/O
• Seamless migration to higher speeds and feeds
Flexible
• I/O diversity and mix-n-match
• Auto-provisioning and configuration
Economical
• Minimal cost increase with speed migrations
• Reusable in terms of infrastructure and training
Reliable
• Is resilient and time tested
• Provides required level of service up time
Thank You!
bgermashev@extremenetworks.com