SDN/NFV Core Networks
Meng-Hsun Tsai, CSIE, NCKU
[email protected]
Ministry of Education Inter-University Teaching Alliance on Advanced Mobile Broadband Technologies:
Mobile Broadband Networks and Applications - Small Cell Base Station Project
Seed Teacher Training Camp
2017/1/19
Alliance Project Organization Chart
Course Map Planning
https://www.facebook.com/groups/1620326401589839/
Main Reference Materials
• Courses
• Software Defined Networking, by Prof. Nick Feamster at Princeton
University
https://www.coursera.org/course/sdn1
• Software-Defined Networks, by Prof. Shie-Yuan Wang at National
Chiao Tung University
http://people.cs.nctu.edu.tw/~shieyuan/course/softnetwork/2015/
• Book
• Paul Goransson, Chuck Black, Software Defined Networks: a
comprehensive approach, 1st edition, Morgan Kaufmann, 2014
5
SDN/NFV Core Networks - Trial Course
 Course type: graduate-level elective
 Offerings: fall semester of academic year 104 (2015) at National Cheng Kung University (80 students enrolled);
spring semester of academic year 104 (2016) at National University of Kaohsiung (4 students enrolled)
 Course schedule:
The semester runs 18 weeks in total
- 12 weeks of lectures given by faculty or industry experts
- 4 weeks of lab sessions
- 1 week for the midterm exam and 1 week for the final project demo
Week 1 - course promotion session
 80 students attended (2 undergraduates, 5 from Engineering Science,
1 from Manufacturing Information and Systems, 52 from CSIE,
19 from Computer and Communication Engineering, and 1 cross-university enrollee)
 SDN really is a hot topic!
Lecture Content
 The lectures are organized into the following four topics:
 Topic 1: Introduction to SDN/NFV and network simulation methods
 Topic 2: Controller technologies, data plane technologies, and network emulation
 Topic 3: Network Function Virtualization
 Topic 4: Network security, traffic management, testing and debugging
Lab Materials (1/5)
Four lab modules:
Lab 1: Setting up an Open vSwitch and Mininet simulation environment
Lab 2: Ryu controller and the EstiNet emulation environment
Lab 3: Building an OpenFlow switch based on OpenWRT
Lab 4: Wireless SDN application experiment (continuation of Lab 3)
Lab Materials (2/5)
Lab 1: Setting up an Open vSwitch and Mininet simulation environment
 Install the Ubuntu execution environment.
 Install Mininet and learn how to use it.
 Open vSwitch concepts, principles, and usage.
 Controller: install and run OpenDaylight, and connect it to Mininet (a minimal Mininet sketch follows below).
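As a rough illustration of what Lab 1 exercises, the following hedged Python sketch builds a two-host, single-switch Mininet topology and attaches it to an external controller listening on localhost; the controller IP, port, and topology size are placeholder choices, not the lab's prescribed settings.

#!/usr/bin/env python
# Minimal Mininet sketch: one OVS switch, two hosts, external (remote) controller.
# Assumes Mininet and Open vSwitch are installed; the controller address/port are placeholders.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import SingleSwitchTopo
from mininet.cli import CLI

def run():
    topo = SingleSwitchTopo(k=2)                      # s1 with hosts h1, h2
    net = Mininet(topo=topo, switch=OVSSwitch, controller=None)
    net.addController('c0', controller=RemoteController,
                      ip='127.0.0.1', port=6633)      # e.g., OpenDaylight or Ryu on localhost
    net.start()
    net.pingAll()                                     # quick connectivity check
    CLI(net)                                          # drop into the Mininet CLI
    net.stop()

if __name__ == '__main__':
    run()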
Lab Materials (3/5)
Lab 2: Ryu controller and the EstiNet emulation environment
 Install the Fedora execution environment.
 Install the EstiNet simulator.
 Controller: install and run Ryu, and connect it to EstiNet.
 Introduction to the EstiNet simulator's environment and GUI.
 Build both a legacy network and an SDN network in EstiNet, send packets, and observe the differences.
Lab Materials (4/5)
Lab 3: Building an OpenFlow switch based on OpenWRT
 Introduction to OpenWRT and the hardware (TL-WR1043ND).
 Install the Ubuntu execution environment.
 Build an OpenWRT image with OpenFlow support that matches the hardware.
 Flash the TL-WR1043ND firmware with the OpenWRT image to turn it into an OpenFlow switch.
 Edit the switch's network and OpenFlow protocol configuration files.
 Controller: install and run Ryu, and connect it to the newly built OpenFlow switch.
Lab Materials (5/5)
Lab 4: Wireless SDN application experiment (continuation of Lab 3)
Use the Ryu controller to install rules on the OpenFlow switches so that a user can roam across APs
while keeping the same IP address (a hedged sketch of such a rule update follows below).
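One way such a handover could be expressed, sketched with Ryu's OpenFlow 1.3 API under the assumption that a single upstream switch forwards downstream traffic toward whichever AP the host is currently behind; the function name, priority, and port argument are illustrative only.

# Hedged sketch (Ryu, OpenFlow 1.3): when a host reappears behind a new AP,
# rewrite the downstream flow entry so traffic to its (unchanged) IP follows it.
def repoint_host_flow(datapath, host_ip, new_out_port):
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=host_ip)
    actions = [parser.OFPActionOutput(new_out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath,
                            command=ofp.OFPFC_MODIFY,   # overwrite the previous output port
                            priority=100, match=match, instructions=inst)
    datapath.send_msg(mod)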
Lab Equipment (1/4)
 Hardware
Name: Computer
Description: A personal computer, laptop, or server capable of running a 64-bit Ubuntu operating
system. This machine hosts the controller.
Lab Equipment (2/4)
Name: TP-Link TL-WR1043ND
Description: A wired/wireless networking device that integrates an Internet-sharing router with a
4-port switch. The wireless network it creates, at rates of up to 300 Mbps, can comfortably carry
several high-bandwidth, interruption-sensitive applications at the same time. A USB storage port on
the back lets a USB storage device be attached to the network and shared by every user.
Hardware specifications:
- Interfaces: 4 x 10/100/1000 Mbps LAN ports, 1 x 10/100/1000 Mbps WAN port, 1 x USB 2.0 port
- Wireless standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
- Wireless frequency: 2.4-2.4835 GHz
- Signal rate: 11n up to 450 Mbps (dynamic), 11g up to 54 Mbps (dynamic), 11b up to 11 Mbps (dynamic)
- Buttons: WPS / factory reset (RESET) button, power button, Wi-Fi button
- Antennas: 3 x 5 dBi detachable omnidirectional antennas (RP-SMA connectors)
- Supported connection types: dynamic IP (DHCP), static IP, PPPoE, PPTP/L2TP
Lab Equipment (3/4) - Software
Open vSwitch: An open-source, software-based virtual switch. It supports many standard management
interfaces and protocols (e.g., NetFlow, sFlow, SPAN, LACP, 802.1ag), provides OpenFlow support, and
integrates with many virtualization platforms (e.g., Mininet).
Mininet: A Linux-based virtualization platform developed by Nick McKeown's team at Stanford
University. With Mininet, a software-defined network can easily be tested on a laptop for developing
and verifying OpenFlow, Open vSwitch, and related components.
EstiNet network simulator: Network simulation software developed by Prof. Shie-Yuan Wang's team at
National Chiao Tung University together with EstiNet Technologies. It supports the OpenFlow protocol,
covers simulation and emulation of a wide range of wired and wireless protocols, and can reproduce
the protocol interactions and packet forwarding behavior of real network equipment.
OpenWrt: A Linux distribution for embedded devices. Compared with the vendor firmware shipped with
embedded devices, OpenWrt provides a fully writable file system: users can freely install packages
and choose applications and configurations without being constrained by the vendor firmware, and use
the device however they wish.
Wireshark: A free, open-source network packet analyzer. It captures network packets and displays
transmission information and detailed packet contents. For the SDN labs, a plug-in supporting the
OpenFlow protocol is additionally installed.
Lab Equipment (4/4)
Controllers (to be installed on Ubuntu 12.04 or later)
OpenDaylight (Java): A project formed jointly by major IT vendors such as Cisco, Intel, and IBM to
define a common open-source SDN framework. From top to bottom, the framework covers network
applications and services, the northbound interface, the central control platform, and the
southbound interface.
NOX (C++): Developed by Nicira; the industry's first OpenFlow controller and the foundation of many
SDN research projects.
Ryu (Python): Developed by NTT. It integrates with the OpenStack platform, offers a rich set of
controller APIs, and supports building network management and control applications.
Course Outline - Original Plan
Week 1: Syllabus
Week 2: The Road to SDN
Week 3: SDN Simulation
Week 4: Lab1: SDN Simulation (Open vSwitch/Mininet/OpenDayLight)
Week 5: OpenFlow (+ paper presentation x 2)
Week 6: Control Plane (+ paper presentation x 2)
Week 7: EstiNet
Week 8: Lab2: SDN Simulation (EstiNet/Ryu)
Week 9: Midterm Exam
Week 10: Lab3: OpenFlow-ize a real AP (OpenWrt)
Week 11: NFV (+ paper presentation x 2)
Week 12: NFV (+ paper presentation x 2)
Week 13: Paper presentation x 6 (no lecture)
Week 14: Lab4: Supporting mobility through OpenFlow (Quanta switch)
Week 15: Security Issues (paper presentation x 2)
Week 16: Network Traffic Management (paper presentation x 2)
Week 17: Testing and Debugging (paper presentation x 2)
Week 18: Final Project Demo
Course Outline - What Actually Happened
Week 1: Syllabus
Week 2: The Road to SDN
Week 3: SDN Simulation
Week 4: Lab1: SDN Simulation (Open vSwitch/Mininet/OpenDayLight)
Week 5: Lab1: SDN Simulation (Open vSwitch/Mininet/OpenDayLight) (+ paper presentation x 2)
Week 6: OpenFlow (+ paper presentation x 2)
Week 7: EstiNet (by 周智良, CTO, EstiNet Technologies)
Week 8: Lab2: SDN Simulation (EstiNet/Ryu)
Week 10: Lab3: OpenFlow-ize a real AP (OpenWrt)
Week 11: OpenFlow (+ paper presentation x 2)
Week 12: SDN switch sharing session (by 翁陸峰, Director, Quanta) (+ paper presentation x 2)
Week 13: Paper presentation x 6 (no lecture)
Week 14: Control Plane
Week 15: Lab4: Supporting mobility through OpenFlow (Quanta switch) (paper presentation x 2)
Week 16: Lab4: Supporting mobility through OpenFlow (Quanta switch) (paper presentation x 2)
Week 17: NFV (paper presentation x 2)
Recommendations for Offering the Course
Do not start by offering SDN/NFV as a standalone course; instead, add SDN material to existing
courses.
Allocate more time for the labs, but do not try to cover too much breadth: fewer topics in greater
depth helps learning more.
What? SDN? OpenFlow?
• OpenFlow, the major protocol
used in Software Defined
Networking (SDN), was introduced
at ACM SIGCOMM 2008.
• OpenFlow is considered an
enabler of SDN.
• In 2011, InformationWeek
magazine introduced OpenFlow
as the biggest thing since Ethernet
(developed by Robert Metcalfe in
1973).
21
SDN is Emerging
• MIT Technology Review
listed software-defined
networking as one of its ten
breakthrough technologies in
2009.
Software-defined Networking (SDN)
• Software-defined networking (SDN) is an architecture
purporting to be dynamic, manageable, cost-effective, and
adaptable, seeking to be suitable for the high-bandwidth,
dynamic nature of today's applications.
• SDN architectures decouple network control and
forwarding functions, enabling network control to become
directly programmable and the underlying infrastructure to
be abstracted from applications and network services.
Source: https://en.wikipedia.org/wiki/Software-defined_networking
23
Why is SDN so hot today?
Google has deployed OpenFlow in its datacenters since 2010,
and later announced many benefits and large improvements in
utilization at ONS 2012.
24
SDN History
25
From Legacy Network to SDN
27
28
From Legacy Network to SDN (1/5)
control plane: distributed algorithms
data plane: packet processing
29
From Legacy Network to SDN (2/5)
decouple control and data planes
30
From Legacy Network to SDN (3/5)
decouple control and data planes
by providing open standard API
31
From Legacy Network to SDN (4/5)
- (Logically) Centralized Controller
Controller Platform /
Network Operating System
32
From Legacy Network to SDN (5/5)
- Protocols -> Applications
Controller Application
Controller Platform
33
A Major Trend in Networking
Entire backbone runs on SDN
Bought for $1.2 x 10^9 (mostly cash)
An Opportunity to Rethink
• How should future networks be
• Designed
• Managed
• Programmed
• What are the right abstractions
• Simple
• Powerful
• Reusable
35
How SDN Works
Meng-Hsun Tsai
Department of Computer Science & Information Engineering
National Cheng Kung University
Ethane: Centralized, reactive, per-flow control
[Figure: a centralized controller managing several flow switches that connect Host A and Host B]
OpenFlow: a pragmatic compromise
• + Speed, scale, fidelity of vendor hardware
• + Flexibility and control of software and simulation
• Vendors don’t need to expose implementation
• Leverages hardware inside most switches today (ACL tables)
38
Working Groups in ONF (old)
39
Areas in ONF
40
Members in ONF
41
Members in ONF (cont.)
42
Three Layers in SDN
43
OpenFlow: the Southbound Interface
[Figure: the controller speaks the OpenFlow protocol (over SSL/TCP) to the OpenFlow client in the switch, which programs the hardware data path]
44
Centralized vs Distributed Control
Both models are possible with OpenFlow
Centralized Control: a single controller manages all of the OpenFlow switches.
Distributed Control: several controllers each manage a subset of the OpenFlow switches.
45
Flow Routing vs. Aggregation
Both models are possible with OpenFlow
Flow-Based
• Every flow is individually set up by the controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine-grain control, e.g. campus networks
Aggregated
• One flow entry covers large groups of flows
• Wildcard flow entries
• Flow table contains one entry per category of flows
• Good for large numbers of flows, e.g. backbone
46
Reactive vs. Proactive (pre-populated)
Both models are possible with OpenFlow
Reactive
• First packet of a flow triggers the controller to insert flow entries
• Efficient use of flow table
• Every flow incurs a small additional flow setup time
• If the control connection is lost, the switch has limited utility
Proactive
• Controller pre-populates the flow table in the switch
• Zero additional flow setup time
• Loss of the control connection does not disrupt traffic
• Essentially requires aggregated (wildcard) rules
47
OpenFlow
Meng-Hsun Tsai
Department of Computer Science & Information Engineering
National Cheng Kung University
OpenFlow Overview
Standardization of OpenFlow
• The nonprofit Internet organization openflow.org
was created in 2008 as a mooring to promote and
support OpenFlow. The physical organization was
really just a group of people that met informally at
Stanford University.
OpenFlow 1.0.0 and 1.1.0
• The first release, OpenFlow 1.0.0, appeared on Dec.
31, 2009. Later, OpenFlow 1.1.0 was released on
Feb. 28, 2011.
• On March 21, 2011, the Open Networking Foundation
(ONF) was created for the purpose of accelerating
the delivery and commercialization of SDN.
50
OpenFlow Overview
• OpenFlow defines both the communications protocol
between the SDN data plane and the SDN control plane
and part of the behavior of the data plane.
• The OpenFlow behavior
specifies how the device
should react in various
situations and how it should
respond to commands from
the controller.
• There is always an OpenFlow
controller that communicates
to one or more OpenFlow
switches.
51
OpenFlow Switch
• The packet-matching function tries to
match the incoming packet (X) with an
entry in flow table, and then directs the
packet to an action box.
• The action box has three fundamental
options:
(A) Forward the packet out, possibly
modifying certain header fields
first.
(B) Drop the packet.
(C) Pass the packet to the controller
through an OpenFlow PACKET_IN
message.
52
OpenFlow Switch (cont.)
• Packets are transferred between the
controller and the switch through the
secure channel.
• When the controller has a data packet to
forward out through the switch, it uses
the OpenFlow PACKET_OUT message.
Two paths (Y) are possible:
The controller directly specifies the output
port.
The controller defers the forwarding
decision to the packet-matching logic.
53
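To make the PACKET_IN / PACKET_OUT exchange concrete, here is a hedged sketch of a Ryu (OpenFlow 1.3) application that receives a PACKET_IN and answers with a PACKET_OUT that floods the packet; the class name and the flood-everything policy are illustrative rather than part of the slide's example. Later sketches in this deck reuse the same convention, taking ofp and parser from a connected datapath dp.

# Hedged sketch: minimal Ryu app showing the PACKET_IN -> PACKET_OUT exchange.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PacketInEcho(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg                                  # PACKET_IN from the switch
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # PACKET_OUT: ask the switch to flood the packet on all ports
        # except the one it arrived on (a deliberately naive policy).
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        dp.send_msg(out)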
OpenFlow Controller
• The OpenFlow control plane differs from the legacy control
plane in three key ways:
It can program different data plane elements with a
common and standard language, OpenFlow.
It exists on a separate hardware device from the
forwarding plane.
The controller can program multiple data plane elements
from a single control plane instance.
54
The Controller-Switch Secure Channel
• Generally, the communications
between controller and switch are
secured by TLS-based asymmetrical
encryption, though unencrypted TCP
connections are allowed.
• The connections may be in-band or
out-of-band.
• In the in-band example, the flow
tables are constructed so that the
OpenFlow traffic is forwarded to the
LOCAL virtual port, where the
messages are passed to the secure
channel process.
55
OpenFlow 1.0
Ports and Port Queues
• An OpenFlow V.1.0 port
corresponds to a physical port.
• Sophisticated switches have
supported multiple queues per
physical port for different QoS
levels.
• OpenFlow 1.0 embraces the QoS
concept and permits a flow to be
mapped to an already defined
queue at an output port.
57
Flow Table
• A flow table consists of flow entries. A flow entry consists of:
header fields used as match criteria to determine whether an
incoming packet matches this entry,
counters used to track statistics relative to this flow, and
actions prescribing what the switch should do for a matched
packet.
58
Packet Matching
• Twelve match fields may be used:
Switch input port,
VLAN ID, VLAN priority,
Ethernet source address, Ethernet destination address,
Ethernet frame type
IP source address, IP destination address, IP protocol, IP
Type of Service (ToS) bits
TCP/UDP source port, TCP/UDP destination port
• The match fields may be wildcarded.
59
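For illustration, the sketch below builds a match over all twelve fields using Ryu, written with the OpenFlow 1.3 keyword names for readability (OpenFlow 1.0 encodes the same tuple with a wildcard bitmask); every value shown is an arbitrary example, and any field left out would simply be wildcarded.

# Hedged sketch: the classic 12-tuple expressed as a Ryu OFPMatch (OF 1.3 field names).
# Omitted fields act as wildcards, as described above; parser comes from a connected datapath.
match = parser.OFPMatch(
    in_port=1,                                   # switch input port
    vlan_vid=(0x1000 | 10), vlan_pcp=0,          # VLAN ID 10 (presence bit set), VLAN priority
    eth_src='00:00:00:00:00:01',                 # Ethernet source address
    eth_dst='00:00:00:00:00:02',                 # Ethernet destination address
    eth_type=0x0800,                             # Ethernet frame type: IPv4
    ipv4_src='10.0.0.1', ipv4_dst='10.0.0.2',    # IP source / destination address
    ip_proto=6, ip_dscp=0,                       # IP protocol (TCP), ToS/DSCP bits
    tcp_src=12345, tcp_dst=80)                   # TCP source / destination port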
Packet Matching (cont.)
• Flow entries are processed in order, and once a match is found,
no further match attempts are made against that flow table.
• OpenFlow 1.0 is silent about which of these 12 match fields are
required versus those that are optional. The ONF has clarified
this confusion by defining three types of conformance:
Full conformance means that all 12 match fields are supported.
Layer two conformance means that only layer two header
fields are supported.
Layer three conformance means that only layer three header
fields are supported.
60
Packet Matching (cont.)
• If no flow entry is matched, it is called a table miss. In this
case, the packet is forwarded to the controller.
• Note that OpenFlow V1.0 was designed as an abstraction of
the way that existing switching hardware works. In later
versions, we will see that the specification outpaces the
reality of today’s hardware.
61
Actions and Packet Forwarding
• Five special virtual ports defined in
V.1.0: LOCAL, ALL, CONTROLLER,
IN_PORT and TABLE.
LOCAL indicates that the packet
needs to be processed by the local
OpenFlow control software. LOCAL
is used for in-band OpenFlow
messages.
ALL is used to send a packet out all
ports except the input port.
62
Actions and Packet Forwarding (cont.)
CONTROLLER indicates that the
switch should forward this
packet to the controller.
IN_PORT instructs the switch to
forward the packet back out of
the port on which it arrived.
TABLE only applies to packets
that the controller sends to the
switch. The packets are then
processed by normal OpenFlow
packet processing pipeline.
63
Actions and Packet Forwarding (cont.)
• In V.1.0, there are two optional
virtual ports: NORMAL and
FLOOD.
A packet forwarded to
NORMAL port is sent to legacy
forwarding logic in the case of a
hybrid switch.
FLOOD instructs the switch to
send a copy of the packet along
minimum spanning tree, except
the input port.
64
Actions and Packet Forwarding (cont.)
• For a table miss, the virtual port CONTROLLER is used.
• There are two optional actions in V.1.0: enqueue and
modify field.
Enqueue specifies a queue to a particular port.
Modify-field informs the switch how to modify certain
header fields.
65
Messaging Between Controller and Switch
• If the switch knows the IP address of the controller, the
switch will initiate this connection.
• Each message between controller and switch starts with
the OpenFlow header.
OpenFlow Header
66
OpenFlow Message Types and Protocol Session
OpenFlow
Header
67
Initialization Phase
• The HELLO messages are
exchanged to determine the
highest OpenFlow version
supported by the peers, where
the lower of the two is used.
• The controller uses the
FEATURES message pair to
interrogate the switch about the
supported features.
• The controller modifies existing
flow entries in the switch via the
FLOW_MOD message.
68
Operation Phase
• The PACKET_IN message is
the way the switch passes
data packets back to the
controller for exception
handling.
• The controller uses
PACKET_OUT to send data
packets to the switch for
forwarding out through the
data plane.
69
Monitoring Phase
• ECHO messages are used by
either side to ascertain that
the connection is still alive
and to measure the current
latency or bandwidth of the
connection.
• PORT_STATUS is used to
communicate changes in port
status.
• Statistics are obtained from
the switch via the STATS
message pair.
70
Example: Controller Programming Flow Table
Adding a flow
entry
71
Example: Controller Programming Flow Table (cont.)
Modifying a flow
entry
72
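The two figures above correspond to FLOW_MOD messages with different command codes; a hedged Ryu (OpenFlow 1.3) sketch of both operations might look like the following, where the helper name, priority, and example match are placeholders.

# Hedged sketch: programming a flow entry from a Ryu app (OpenFlow 1.3).
# Assumes dp is a connected datapath, with ofp, parser = dp.ofproto, dp.ofproto_parser.
def program_flow(dp, match, actions, priority=10, add=True):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    command = ofp.OFPFC_ADD if add else ofp.OFPFC_MODIFY   # add vs. modify an entry
    dp.send_msg(parser.OFPFlowMod(datapath=dp, command=command,
                                  priority=priority, match=match,
                                  instructions=inst))

# Example use: forward HTTP traffic destined to 10.0.0.2 out port 2.
# match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2', ip_proto=6, tcp_dst=80)
# program_flow(dp, match, [parser.OFPActionOutput(2)])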
Example: Basic Packet Forwarding
73
Example: Switch Forwarding Packet to Controller
74
OpenFlow 1.1 Additions
OpenFlow V.1.1
• OpenFlow 1.1 had little impact other than as
a stepping stone to OpenFlow 1.2.
• SDN community waited for V.1.2 (the first
version by ONF) before creating
implementations.
• Five major new features
Multiple flow tables
Groups
MPLS and VLAN tag support
Virtual ports
Controller connection failure
76
Multiple Flow Tables
• In V.1.1., it is possible to defer further packet processing to
subsequent matching in other flow tables.
• Instruction is introduced to be associated with a flow entry.
• Flow tables can be chained by GOTO instructions.
77
Multiple Flow Tables (cont.)
• For an incoming packet, an action set is initialized and then
modified by instructions through the processing pipeline.
• When the pipeline ends, the actions in the action set are
executed in the following order:
1. Copy TTL inward
2. Pop
3. Push
4. Copy TTL outward
5. Decrement TTL
6. Set: apply set_field actions
7. QoS: apply QoS actions (e.g. set_queue)
8. Group: apply actions to relevant action buckets if a group action is specified
9. Output to specified port
• If there is neither a group nor an output action, the packet is
dropped.
78
Multiple Flow Tables (cont.)
• Note that actions specified by an Apply-Actions instruction
are immediately executed.
• When a matched flow entry does not specify a GOTO flow
table, the processing pipeline completes, and actions in the
current action set are then executed.
• In normal cases, the final action is to forward a packet to an
output port, to the controller, or to a group table.
• The flexibility of multiple flow tables comes at a price for
adapting existing hardware switches.
79
Example: Forwarding with
Multiple Flow Tables
• The packet first matches entry
1 in Table 0. An action “output
to port P”is added to the
action set. The pipeline then
jumps to Table K according to
the GOTO instruction.
• The packet matches entry F in
Table K. Since there is no GOTO
instruction here, actions in the
action set are executed.
80
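A hedged Ryu (OpenFlow 1.3) sketch of the pattern in the figure above: the table-0 entry writes an output action into the action set and jumps to a later table, whose matching entry has no GOTO and therefore ends the pipeline; the table numbers, match, and port are illustrative.

# Hedged sketch: multi-table pipeline with a GOTO instruction (Ryu, OpenFlow 1.3).
# Assumes dp is a connected datapath, with ofp, parser = dp.ofproto, dp.ofproto_parser.
# Table 0: write "output to port 2" into the action set, then continue in table 1.
match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2')
inst = [parser.OFPInstructionActions(ofp.OFPIT_WRITE_ACTIONS,
                                     [parser.OFPActionOutput(2)]),
        parser.OFPInstructionGotoTable(1)]
dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0, priority=10,
                              match=match, instructions=inst))

# Table 1: a matching entry with no GOTO ends the pipeline, so the
# accumulated action set (here, output to port 2) is executed.
dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=1, priority=10,
                              match=parser.OFPMatch(eth_type=0x0800),
                              instructions=[]))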
Groups
• Group table consists of
group entries. Each entry
consists of one or more
action buckets.
• Refinements on flooding,
such as multicast, can be
achieved in V.1.1 by defining
groups as specific sets of
ports.
• One group’s buckets may
forward to other groups,
providing the capability to
chain groups together.
81
Example: Multicast Using V.1.1 Groups
82
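A hedged sketch of that multicast idea with Ryu (OpenFlow 1.3, whose group model matches V.1.1's): an ALL-type group whose buckets replicate the packet to several ports, plus a flow entry that points at the group; the group ID, ports, and multicast address are invented for the example.

# Hedged sketch: multicast via an ALL group (every bucket gets a copy of the packet).
# Assumes dp is a connected datapath, with ofp, parser = dp.ofproto, dp.ofproto_parser.
buckets = [parser.OFPBucket(actions=[parser.OFPActionOutput(p)]) for p in (2, 3, 4)]
dp.send_msg(parser.OFPGroupMod(datapath=dp, command=ofp.OFPGC_ADD,
                               type_=ofp.OFPGT_ALL, group_id=50, buckets=buckets))

# Flow entry that hands matching packets to the group.
match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='224.1.1.1')
actions = [parser.OFPActionGroup(group_id=50)]
inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=20,
                              match=match, instructions=inst))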
MPLS and VLAN Tag Support
• MPLS and VLAN are supported by adding PUSH and POP
actions.
• When a PUSH action is executed, a new header (or tag) is
inserted in front of the current outermost header. On the
other hand, POP is used to remove the current outermost
header.
• The matching logic is replaced by V.1.2, and will be
discussed later.
83
Virtual Ports
• A V.1.1 switch classifies ports into standard ports and
reserved virtual ports.
• Standard ports consist of
Physical ports
Switch-defined virtual ports
• Reserved virtual ports consist of ALL, CONTROLLER, TABLE,
IN_PORT, LOCAL (optional), NORMAL (optional) and FLOOD
(optional).
84
Controller Connection Failure
• Two modes are introduced for loss of connection between
switch and controller:
In fail secure mode, the switch continues to operate as a
normal V.1.1 switch except that all messages destined for
the controller are dropped.
In fail standalone mode, the switch additionally ceases
its OpenFlow pipeline processing and continues to
operate in its native, underlying switch or router mode.
85
OpenFlow 1.2 Additions
OpenFlow V.1.2
• Eight major new features
Extensible match support
Extensible set_field packet-rewriting support
Extensible context expression in “packet-in”
Multiple controller enhancements
Extensible error messages via experimenter error type
IPv6 support
Simplified behavior of flow-mod request
Removed packet parsing specification
87
Extensible Match Support
• A generic and extensible packet-matching capability has been
added in V.1.2 via the Openflow Extensible Match (OXM)
descriptors.
• OXM defines a set of type-length-value (TLV) pairs that can
describe or define virtually any of the header fields.
OpenFlow 1.1
…
OpenFlow
1.2
88
Extensible Match Support (cont.)
OXM TLV Layout
89
Extensible Match Support (cont.)
• The ability to match on any combination of header fields is
provided within the OPENFLOW_BASIC match class.
• The EXPERIMENTER match class opens up the opportunity for
matching on fields in the packet payload, providing a near-limitless
horizon for new definitions of flows.
90
Extensible set_field Packet Rewriting Support
• It is allowed to set the value of any packet header field that
may be used for matching.
• For EXPERIMENTER match class, it is possible to modify any
packet fields (including the payload) that is not part of the
standard OXM header fields.
91
Extensible Context Expression in PACKET_IN
• The OXM encoding is also used to extend the PACKET_IN
message sent from the switch to the controller.
OpenFlow 1.1
OpenFlow 1.2
92
Multiple Controllers
• In V.1.2, the switch may be configured to maintain
simultaneous connections to multiple controllers.
• If a message pertains to multiple controllers, it is duplicated
and a copy sent to each controller.
• Three different roles of controllers: slave, master and equal
In slave mode, the controller may only request data from
the switch.
Both master and equal modes allow the controller the full
ability to program the switch.
Only one controller in master mode is allowed, while
other controllers are in slave mode.
93
OpenFlow 1.3 Additions
OpenFlow V.1.3
• OpenFlow V.1.3 was released on April 13, 2012. This release
was a major milestone, especially for ASIC designers.
• It is likely that the real-life chips that support V.1.3 will have to
limit the number of flow tables to a manageable number.
• Thirteen major new features:
1. Refactored capabilities negotiation
2. More flexible table-miss support
3. IPv6 extension header-handling support
4. Per-flow meters
5. Per-connection event filtering
6. Auxiliary connections
7. MPLS BoS matching
8. Provider backbone bridging tagging
9. Reworked tag order
10. Tunnel-ID metadata
11. Cookies in PACKET_IN
12. Duration for stats
13. On-demand flow counters
95
More Flexible Table-miss Support
• Formally, a table miss was configurable as one of three options:
dropping, forwarding to the controller, and continuing matching in
the next table.
• V.1.3 expands on this limited handling capability via the
introduction of the table-miss flow entry.
• The table-miss flow entry is of the lowest priority (zero) and all
match fields are wildcards.
• The advantage of this approach is that full semantics of flow entry
(including instructions and actions) are applicable for a table miss.
• For example, a table-missed packet may be passed to another flow
table by a GOTO instruction for further processing.
96
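A hedged Ryu (OpenFlow 1.3) sketch of installing such a table-miss entry at switch connection time: priority 0, an all-wildcard match, and a send-to-controller action, as the slide describes; placing it in the switch-features handler is the usual convention rather than anything mandated here, and it reuses the imports from the earlier PACKET_IN sketch.

# Hedged sketch: install a table-miss flow entry (priority 0, all fields wildcarded)
# that forwards unmatched packets to the controller.
# (Method of a RyuApp subclass, as in the earlier PACKET_IN sketch.)
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    dp = ev.msg.datapath
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch()                             # wildcard everything
    actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                      ofp.OFPCML_NO_BUFFER)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                  match=match, instructions=inst))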
Per-Flow Meters
• Meters are defined on a per-flow basis and reside in a
meter table.
• V.1.3 instructions may direct
packets to a meter identified
by a meter ID.
• V.1.3 meters are only rate-limiting meters.
• There may be multiple meter
bands attached to a given
meter.
97
Per-Flow Meters (cont.)
• When a packet is processed by a meter, at most one band is
used.
• This band is selected based on the highest bandwidth rate
band that is lower than the current measured bandwidth for
that flow.
• If the current measured rate is lower than all bands, no band
is selected, and no action is taken.
• If a band is selected, the action prescribed by the band type
field is taken.
98
Example: Enforcing QoS via Meter Bands
• The packet from Port 2 matches a
flow entry with an instruction which
directs the packet to Meter 3.
• If the bandwidth limits are
exceeded, the packet is dropped.
• Otherwise, the packet undergoes
further processing in the pipeline.
99
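That example could be expressed in a hedged Ryu (OpenFlow 1.3) sketch along these lines: a meter with a single drop band, and a flow entry whose instructions send matching packets through the meter before forwarding; the meter ID, rate, and ports are invented for illustration.

# Hedged sketch: a drop-band meter plus a flow entry that uses it (Ryu, OpenFlow 1.3).
# Assumes dp is a connected datapath, with ofp, parser = dp.ofproto, dp.ofproto_parser.
bands = [parser.OFPMeterBandDrop(rate=10000, burst_size=1000)]   # ~10 Mbps (rate in kb/s)
dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                               flags=ofp.OFPMF_KBPS, meter_id=3, bands=bands))

# Flow entry: traffic arriving on port 2 goes through meter 3, then out port 1.
match = parser.OFPMatch(in_port=2)
inst = [parser.OFPInstructionMeter(meter_id=3),
        parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                     [parser.OFPActionOutput(1)])]
dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                              match=match, instructions=inst))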
Per Connection Event Filtering
• In the previous versions, all controllers must receive the same
kind and quantity of asynchronous notifications from the
switch.
• V.1.3 introduces a SET_ASYNC message that allows the
controller to specify the sorts of async messages it is willing
to receive from a switch.
100
Auxiliary Connections
• As deployment of OpenFlow grows, performance considerations
have become increasingly important.
• V.1.3 allows multiple connections per controller-switch channel.
• The major advantage is to achieve greater overall throughput,
both in control plane and data plane.
101
Cookies in PACKET_IN
• In the case of PACKET_IN messages, it is somewhat wasteful to
require the switch to match over and over for the same flow.
• V.1.3 allows the switch to pass a cookie with the PACKET_IN
message.
• The switch maintains the cookie in a new
field in the flow entry.
OpenFlow 1.3 Flow Entry
• The cookie allows the switch to cache the
flow entry pointed to by this cookie, and
prevent the full packet-matching process.
102
Controllers / Network
Operating Systems
Meng-Hsun Tsai
Department of Computer Science & Information Engineering
National Cheng Kung University
A bit of history
OpenFlow Controller
SDN Controller
SDN Framework
Network Operating System
104
A bit of history (cont.)
NOX (C++ & Python)
Ryu (Python)
NOX-MT (C++)
POX (Python)
OpenDayLight (Java + OSGi)
Trema (Ruby & C)
Beacon -> Floodlight (Java)
ONOS (Java + OSGi)
2008
2010
2011
2012
2013
2014
105
Background
• Networks have so far been managed and
configured using lower level, device-specific
instruction sets and mostly closed proprietary
NOSs (e.g., Cisco IOS and Juniper JunOS).
• SDN is promised to facilitate network management
and ease the burden of solving networking
problems by means of the logically centralized
control offered by a NOS.
• With NOSs, to define network policies a developer
no longer needs to care about the low-level details
of data distribution among routing elements.
106
How many flows exist in real
networks/datacenters?
• NOX handles around 30k flow initiation events per
second while maintaining a sub-10ms flow install
time.
• Kandula et al. found that a 1500-server cluster has
a median flow arrival rate of 100k flows per second.
• Benson et al. show that a network with 100
switches can have spikes of 10M flows arrivals per
second in the worst case.
107
Centralized Controllers
• A centralized controller is a single entity that manages all
forwarding devices of the network.
• Naturally, it represents a single point of failure and may
have scaling limitations.
• Centralized controllers are designed as highly concurrent
systems (i.e., multithreaded design for multicore computer)
to achieve required throughput.
• Beacon can deal with more than 12 million flows per
second by using Amazon cloud service.
• List of centralized controllers:
NOX-MT, Maestro, Beacon, Floodlight, Trema, Ryu,
Meridian, ProgrammableFlow, Rosemary
108
Effect of Multi-threading on Throughput
Source: A. Tootoonchian, S. Gorbunov, Y. Ganjali, M. Casado, and R. Sherwood. On controller performance
in software-defined networks. In USENIX Workshop on Hot Topics in Management of Internet, Cloud, and
Enterprise Networks and Services (Hot-ICE), 2012.
109
Distributed Controllers
• A distributed NOS can be scaled up to meet the
requirements of potentially any environment.
• Most distributed controllers offer weak consistency
semantics, which implies that there is a period of time in
which distinct nodes may read different values.
• Another common property is fault tolerance. However,
SDN resiliency as a whole is an open challenge.
• List of distributed controllers:
Onix, HyperFlow, HP VAN SDN, ONOS, DISCO, yanc, PANE,
SMaRt-Light, Fleet
110
Architectural and Design Elements of SDN
Controllers
111
112
Many Different SDN Controllers
• NOX/POX
• Ryu
• Floodlight
• OpenDaylight
• Pyretic
• Frenetic
• Procera
• RouteFlow
• Trema
113
http://www.noxrepo.org/
NOX: Overview
• First-generation OpenFlow controller
• Open source, stable, widely used
• Two “flavors” of NOX
• NOX-Classic: C++/Python. No longer supported.
• NOX (the “new NOX”)
• C++ only
• Fast, clean codebase
• Well maintained and supported (?)
114
NOX: Characteristics
• Users implement control in C++
• Supports OpenFlow v.1.0
• A fork (CPqD) supports 1.1, 1.2, and 1.3
• Programming model
• Controller registers for events
• Programmer writes event handler
When to Use NOX
• You know C++
• You are willing to use
low-level facilities and
semantics of
OpenFlow
• You need good
performance
115
POX: Overview
• NOX in Python
• Supports OpenFlow v. 1.0 only
• Advantages
• Widely used, maintained, supported
• Relatively easy to read and write code
• Disadvantages: Performance
When to Use POX
• You know Python
• You are not
concerned about
controller
performance
• Rapid prototyping
and experimentation
116
http://osrg.github.io/ryu/
Ryu
• Open source Python controller
• Ryu means "flow" in Japanese; it is pronounced "ree-yooh"
• Supports OpenFlow 1.0, 1.2, 1.3, 1.4, 1.5, Nicira extensions
• Works with OpenStack
• Aims to be an "Operating System" for SDN
• Advantages
• OpenStack integration
• OpenFlow 1.2, 1.3, 1.4, 1.5
• Good documentation
• Disadvantages: Performance
117
http://www.projectfloodlight.org/floodlight/
Floodlight
• Open-source Java controller
• Supports OpenFlow v. 1.0 and v. 1.3
• Fork from the Beacon Java OpenFlow controller
• Maintained by Big Switch Networks
• Advantages
• Good documentation
• Integration with REST API
• Production-level, OpenStack/Multi-Tenant Clouds
• Disadvantages: Steep learning curve
118
OpenDaylight
OpenDaylight Consortium
• Heavy industry involvement and backing
• Focused on having an open framework for building
upon SDN/NFV innovations
• Not limited to OpenFlow innovations
120
Hydrogen Release
121
Beryllium Release
122
123
Java, Maven, OSGi, Interface
• Java chosen as an enterprise-grade,
cross-platform compatible language
• Maven – build system for Java
• OSGi:
App1 App2 … SAL
OSGi Framework
(Equinox)
• Allows dynamically loading bundles
• Allows registering dependencies and services exported
• For exchanging information across bundles
• Java Interfaces are used for event listening,
specifications, and forming patterns
124
OpenDaylight Web Interface
125
Importance of ODL
• Supports a wide variety of SBI protocols versions
• Active community
• Aligned with vendors and telcos
• Easy proposal of projects
• Easy deployment (OSGi)
• However…!
• Not so good documentation :<
• Development of modules requires a deep knowledge of
ODL
Source: Telcaria (http://www.slideshare.net/opennebula/clash-of-titans-insdn-opendaylight-vs-onos-elisa-rojas)
126
ONOS – A Carrier Grade Controller
• Features of Open Network Operating System (ONOS)
• Highly available
• modular
• extensible,
• distributed,
• scalable
• multi-protocol controller infrastructure
• Supports OpenFlow (1.0, 1.3), NETCONF, OVSDB
• Protocol and device behavior independence
• Written in Java
• Apache 2.0 license
127
ONOS Board
• Service providers
• AT&T, China Unicom, NTT Communications, SK Telecom, Verizon
• Vendors
• Alcatel Lucent, Ciena, Cisco, Ericsson, Fujitsu, Huawei, Intel, NEC,
ON.Lab
128
ONOS Architecture
129
ONOS Releases
Release Name
Release Date
Avocet
December 5, 2014
Blackbird
February 28, 2015
Cardinal
May 31, 2015
Drake
September 18, 2015
Emu
December 18, 2015
Falcon
March 10, 2016
Goldeneye
Under development
Hummingbird
Under development
130
ONOS Web GUI
131
Global SDN-IP Deployment
132
Global SDN-IP Deployment Team Member
1. Internet2
• 40 OF switches around US, 5 sites connected
2. AmLight
• 4 OF switches around South America and Miami
3. GEANT
• Multiple end-points all around Europe
4. KREONET
• 4 OF switches around US, 5 sites connected
5. AARNET
• 40 OF switches distributed in two cities in Korea
6. NCTU
• 4 OF switches in Taiwan
133
About SDN-IP
• Allows an SDN to connect to external networks on the
Internet using standard Border Gateway Protocol (BGP)
• SDN-IP is just an ONOS application
• Uses ONOS services to install and update appropriate data
forwarding rules
• Design Goal of SDN-IP
• Protocol Compatibility and Vendor Independence
• High Availability (HA) – Provides HA within SDN-IP itself
• Scalability – Using multiple ONOS clusters
134
Taiwan on ONOS
135
136
ONOS Voting Community 2016
137
Importance of ONOS
• Supports a wide variety of SBI protocols versions
• Active community
• Aligned with vendors and telcos
• Good documentation
• Easy deployment (OSGi)
• However…!
• Still in its early phases (some projects are still under
development and not fully supported)
Source: Telcaria (http://www.slideshare.net/opennebula/clash-of-titans-insdn-opendaylight-vs-onos-elisa-rojas)
138
ODL vs. ONOS
• Cloud vs. Carrier-grade networks
• Legacy vs. “Pure” SDN
• Private companies vs. Academic
• Both in Linux Foundation!
• ONOS and ODL focused on different problems.
• ONOS has focused on service providers’ needs, which landed it a role as
a local controller for AT&T.
• ODL was created to be the Linux of networking: one platform to have a
very long life and enable people to build a wide range of solutions to solve
a wide range of problems. AT&T is using ODL framework as the basis for its
global SDN controller.
Source: Linux Foundation (https://www.sdxcentral.com/articles/news/onosjoins-the-linux-foundation-becoming-an-opendaylight-sibling/2015/10/)
139
SDN Use Cases
OpenFlow Example
[Figure: a PC running the controller (software layer) programs the flow table of an OpenFlow switch
(hardware layer, ports 1-4) through the switch's OpenFlow client. The example flow entry matches
MAC src *, MAC dst *, IP Src *, IP Dst 5.6.7.8, TCP sport *, TCP dport *, with action "forward to
port 1", so traffic from host 1.2.3.4 destined to host 5.6.7.8 is forwarded out port 1.]
OpenFlow Basics: Flow Table Entries
A flow table entry has three parts:
- Rule (header fields): Switch Port, VLAN ID, VLAN pcp, MAC src, MAC dst, Eth type, IP Src, IP Dst, IP ToS, IP Prot, L4 sport, L4 dport
- Action:
1. Forward packet to zero or more ports
2. Encapsulate and forward to controller
3. Send to normal processing pipeline
4. Modify fields
5. Any extensions you add!
- Stats: packet + byte counters
142
Examples
Switching
Switch Port: *, MAC src: *, MAC dst: 00:1f:.., Eth type: *, VLAN ID: *, IP Src: *, IP Dst: *, IP Prot: *, TCP sport: *, TCP dport: * -> Action: port6
Flow Switching
Switch Port: port3, MAC src: 00:20.., MAC dst: 00:1f.., Eth type: 0800, VLAN ID: vlan1, IP Src: 1.2.3.4, IP Dst: 5.6.7.8, IP Prot: 4, TCP sport: 17264, TCP dport: 80 -> Action: port6
Firewall
Switch Port: *, MAC src: *, MAC dst: *, Eth type: *, VLAN ID: *, IP Src: *, IP Dst: *, IP Prot: *, TCP sport: *, TCP dport: 22 -> Action: drop
143
Examples (cont.)
Routing
Switch Port: *, MAC src: *, MAC dst: *, Eth type: *, VLAN ID: *, IP Src: *, IP Dst: 5.6.7.8, IP Prot: *, TCP sport: *, TCP dport: * -> Action: port6
VLAN Switching
Switch Port: *, MAC src: *, MAC dst: 00:1f.., Eth type: *, VLAN ID: vlan1, IP Src: *, IP Dst: *, IP Prot: *, TCP sport: *, TCP dport: * -> Action: port6, port7, port9
144
Usage examples
• Alice's code:
• Simple learning switch
• Per-flow switching
• Network access control/firewall
• Static "VLANs"
• Her own new routing protocol: unicast, multicast, multipath
• Home network manager
• Packet processor (in controller)
• IPvAlice
– VM migration
– Server load balancing
– Mobility manager
– Power management
– Network monitoring and visualization
– Network debugging
– Network slicing
… and much more you can create!
145
Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP.
The VM hosted a video game server with active network connections.
146
SDN Example: Seamless Mobility
1. The controller/application detects a host sending traffic at a new location
2. The controller/application modifies rules on the switches to reroute the traffic
147
SDN Example: Server Load Balancing
• Pre-install a load-balancing policy
• Split traffic based on source IP
Server 10.0.0.1 handles src=0*, dst=1.2.3.4; server 10.0.0.2 handles src=1*, dst=1.2.3.4
148
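A hedged Ryu (OpenFlow 1.3) sketch of that proactive split, assuming the slide's addressing (virtual IP 1.2.3.4, servers 10.0.0.1 and 10.0.0.2 reached via ports 1 and 2): two wildcard entries keyed on the first bit of the source address. Whether the destination IP also needs rewriting depends on the deployment, so the set-field action is shown as one possible choice, not the slide's prescription.

# Hedged sketch: pre-installed load balancing keyed on the top bit of the source IP.
# Assumes dp is a connected datapath, with ofp, parser = dp.ofproto, dp.ofproto_parser.
VIP = '1.2.3.4'
servers = [(('0.0.0.0', '128.0.0.0'),   '10.0.0.1', 1),   # src = 0* -> server 1 on port 1
           (('128.0.0.0', '128.0.0.0'), '10.0.0.2', 2)]   # src = 1* -> server 2 on port 2

for src_prefix, server_ip, out_port in servers:
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=VIP, ipv4_src=src_prefix)
    actions = [parser.OFPActionSetField(ipv4_dst=server_ip),   # optional rewrite to the real server
               parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                  match=match, instructions=inst))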
2016 Alliance Main Activities
• Bi-monthly technical exchange meetings (throughout 2016)
• Occasional courses, workshops, and international SDN/NFV exchange activities co-organized with other institutions
• Exhibition at the 2016 Global Future Networks and SDN Technology Conference
• Annual member assembly
• SDN competition
ITRI ICL SDN Research Topics
 SDN Common Platform
 OpenDaylight Controller + Plug-in Bundles
 Common Northbound Interface
 Network Virtualization
 Bandwidth Slicing
 Traffic Engineering/Traffic Monitoring
• SDN Enterprise/Campus Solution
• SDN Migration (Mixed Network)
• Edge Ethernet Switch Replacement
• VLAN Management
• Critical Point Replacement
• Enterprise/Campus Applications
• Wi-Fi Network Management / Access Control
• Surveillance/ uBike
• Load Balance
• Security
• SDN Mobile Backhaul Solution
• SDN-enabled Access Network
• SDN Mobile Backhaul Field Trial
• SDN Controller for LTE Mobile Backhaul
• Network Traffic Flow Management
• SDN Backhaul Virtualization
150
Use Case: VLAN Management in ITRI
Problem in the legacy network: port-based VLANs are not flexible, and complicated configuration is
required.
[Figure: switches carrying VLANs 20 and 50; moving a port between VLANs requires reconfiguring each switch by hand]
Modify Configuration
show vlans
show vlan ports 18
config vlan vlan20 del port 18 untagged
config vlan vlan50 add port 18 untagged
save
151
Use Case: SDN-enabled Mobile Backhaul in ITRI
• SDN In-band Channel Deployment
• Meeting Mobile Network QoS Requirements in SDN-enabled backhaul
[Figure: ITRI campus trial topology spanning buildings 11, 51, and 52, with SDN switches deployed in the FY104 and FY105 phases over fiber and copper links, plus a 1588 grandmaster, Cisco EPC, NMS server, SDN controller, VLC server, and Internet uplink]
152
New Era of Networking
• Guru Parulkar (Executive Director of ON.Lab) said at ONS
2016 that the new era of networking is defined by
disaggregation, softwarization, virtualization and open
source.
153
State of the Industry
154
Community Challenge
155
SDN Migration
2
Areas in ONF
157
Architecture Requirements for Migration to SDN
158
Three Stages of SDN Migration
159
SDN Migration Tools
160
Use Case: Google Inter-Datacenter WAN (B4)
• Google’s WAN is organized as two backbones – an Internet
facing (I-scale) network that carries user traffic and an internal
(G-scale) network that carries traffic between datacenters.
• Google has deployed OpenFlow powered solution in G-scale
network.
Google controls the applications, servers, and the LANs all the
way to the edge of the network.
The most bandwidth-intensive
applications perform large-scale
data copies from one site to
another.
161
Features of I-scale Network
• The user-facing networking connects with a range of gear and
providers, and hence must support a wide range of protocols.
• Its physical topology will necessarily be denser than a network
connecting a modest number of datacenters.
• In delivering content to end users, it must support the highest
levels of availability.
162
G-scale Network (B4)
• Google categorizes applications running across B4 target network
into three classes:
User data copies (e.g., email, documents) to remote datacenters
for availability/durability.
Remote storage access for computation over inherently
distributed data sources.
Large-scale data push synchronizing state across multiple
datacenters.
• These three classes are ordered in increasing volume, decreasing
latency sensitivity, and decreasing overall priority.
163
Pre-Migration Assessment
• Elastic bandwidth demands: the majority of Google's datacenter
traffic involves synchronizing large data sets across sites. These
applications benefit from as much bandwidth as they can get but
can tolerate periodic failures with temporary bandwidth reductions.
• Moderate number of sites: While B4 must scale among multiple
dimensions, targeting the datacenter deployments means that the
total number of WAN sites would be a few dozens.
• End application control: Google controls both the applications and
the site networks connected to B4. Hence, it can enforce relative
application priorities and control bursts at the network edge, rather
than through over provisioning or complex functionality in B4.
• Cost sensitivity: B4’s capacity targets and growth rate led to
unsustainable cost projections. The traditional approach of
provisioning WAN links at 30-40% (or 2-3x the cost of a fully utilized
WAN) to protect against failures and packet loss, combined with
prevailing per-port router cost, would make the network
prohibitively expensive.
164
Google SDN Architecture
• Many B4 links run at near
100% utilization and all
links to average 70%
utilization over long time
periods, corresponding to
2-3x efficiency
improvements relative to
standard practice.
165
Migration – Starting Network
Google viewed BGP integration as a step toward deploying new
protocols customized to the requirements of, for instance, a
private WAN setting.
166
Migration – Phased Deployment
A subset of the nodes in the network were OpenFlow-enabled
and controlled by the logically centralized controller utilizing
Paxos, OpenFlow controller, and Quagga.
167
Migration – Target Network
• There is no direct correspondence between the Datacenter and the
network.
• The controller has also TE server that guides the Traffic Engineering
in the network.
168
Use Case: ITRI SDN Edge Switch Migration Solution
1. Subnet configuration & monitoring
2. OF-switch configuration & monitoring
3. Topology view, host information
4. Flow and port statistics
[Figure: the ITRI SDN Controller (ODL Hydrogen) and ITRI SDN NMS on the 10.101.xx.xx control network manage VLANs on SDN edge switches in the 10.1.xx.xx data network; authorized users/devices reach the Internet through an L3 switch while unauthorized users/devices are blocked. Surrounding elements include a Wi-Fi AP controller, meeting-room user authentication, the ITRI IP-Cam server, L2 switches, a DHCP server, and ITRI e-Service.]
169
Major Considerations in ITRI SDN Migration
• Simplify VLAN configuration and maintenance
 Simplify VLAN configuration (port/IP/MAC/user-based) and switch replacement
 Topology view includes SDN switches, legacy switches adjacent to OF-switches, links, and hosts
• Network security access control
 Let each SDN switch redirect guests/staff to web-based identity authentication
 Set ACLs only in the SDN controller (not in each switch)
 802.1x authentication, DHCP snooping, dynamic ARP inspection, loop protection, etc.
• Traffic monitoring
 Port/flow statistics and path monitoring
170
Migration Mixed Network Topology View
SDN Switch
Legacy Switch
171
Subnet Configuration GUI
172
ITRI SDN Migration Results
 Feedback from MIS staff
– Reduced legacy switch port configuration time: from N x M x 2 minutes to 2 minutes
(N = number of switch ports, M = number of switches)
– Reduced edge switch replacement time: from hours to within 5 minutes
– Unified access and policy control
Seamless Synergy of Mixed SDN/Legacy Networks
[Figure: the ITRI SDN controller manages VLANs across mixed SDN and legacy L2/L3 switches on the 10.101.xx.xx and 10.1.xx.xx subnets (office subnet 10, wireless subnet 20, camera subnet 30), with DHCP servers and an Internet uplink]
173
Network Architecture of NCTU Trial
174
ITRI SDN Migration Solution @NCTU
Mixed
Topology
OF Switch
State
Statistics
IP source
binding
175
EstiNet SDN Migration Solution @NCTU
Devices list
Search agent
EstiNet SDN Switch
SS-4028R
Real time Statistics
176
III SDN Migration Solution @NCTU
• Switch:
• EstiNet OF-4048C
• Controller:
• Ryu
• Applications independently developed by III
177
Features of III SDN Migration Solution
• Network host management
• Supports static port, IP, and MAC binding
• Supports dynamic hosts
• Broadcast-free network
• Removes broadcasting behaviors
• Accommodates user requirements for services that depend on broadcast
• OpenFlow network debugger
• Dynamic network snapshots
• Supports step-by-step flow entry tracing
• Multiple links to the EC core network
• Concurrent utilization of multiple links
• Compatible with existing STP devices
178
Network Function
Virtualization (NFV)
Meng-Hsun Tsai
Department of Computer Science & Information Engineering
National Cheng Kung University
Source: http://www.bloomberg.com/news/articles/2013-09-23/at-t-shifting-to-lower-cost-software-defined-networkin
180
180
NFV Overview
2
Background
• Network operators’ networks are populated with a large and
increasing variety of proprietary hardware appliances.
• Finding the space and power to accommodate these boxes is
becoming increasingly difficult:
 Increasing costs of energy
 Capital investment
 Rarity of skills necessary to design, integrate and operate complex
hardware-based appliances
 Hardware-based appliances rapidly reach end of life
184
Introduction to NFV
• Network Function Virtualization (NFV) is a network architecture concept
that proposes using the technologies of IT virtualization to virtualize
entire classes of network node functions into building blocks that may be
connected, or chained, to create communication services.
• Implementation of network functions in software can run on a range of
industry standard server hardware without the need for installation of
new equipment.
• The concept of network functions virtualization was first introduced in
2012 as part of the ETSI NFV ISG to provide hardware-related CAPEX
and OPEX reductions.
• As more developments have been made with NFV, service agility has
become one of the main drivers for the development of network functions
virtualization.
ETSI: European Telecommunications Standards Institute ISG: Industry Specification Group
185
Basic Concept
• Network Functions Virtualization (NFV) is an approach to
telecommunications networking where network entities that
traditionally used dedicated hardware are replaced with
commodity computers running software that provides the same functionality.
• This makes the network easier to expand and modify, provides
considerably more flexibility, and allows much of the hardware to be
standardized as generic computing capacity. In this way costs can be
considerably reduced.
186
Virtualization Techniques
• NFV is a concept that virtualizes major elements of a network.
• Entire classes of network node functions can be set up as building blocks
that can be connected to create overall telecommunications networks.
• NFV utilizes traditional server virtualization, but extends the concept
significantly. One or more virtual machines running different software on
top of industry standard high volume servers.
• Examples of the virtualized functions that can be provided include:
virtualized load balancers, firewalls, intrusion detection devices, WAN
accelerators, routers, access control and billing.
187
Vision for NFV
Source: NFV white paper from ETSI
188
What Does it Mean to be NFV?
189
Source: Alcatel-Lucent
189
Relationship with SDN
• NFV is highly complementary
to SDN, but not dependent
on it (or vice-versa).
• NFV and SDN can be
combined for potentially
greater value.
• NFV is able to support SDN
by providing the
infrastructure upon which the
SDN software can be run.
190
SDN/NFV Common Feature: Decoupling
• NFV decouples the software and hardware of network function equipment
Firewalls, IDS, etc.
Programmability & scalability of network functions
• SDN decouples the control intelligence from the data forwarding of switches
L2/L3/L4 switches
Programmability of the network
• SDN + NFV
Programmability of services
191
ETSI NFV
2
Early Contributors for ETSI NFV
• All of them are
from telecom
operators / ISPs.
Why?
Source: NFV white paper from ETSI
193
NFV Architectural Framework
[Figure: the ETSI NFV architectural framework. NFV Management and Orchestration (NFV Orchestrator, VNF Manager(s), Virtualised Infrastructure Manager(s), plus service, VNF, and infrastructure descriptions) sits alongside the OSS/BSS, the element managers (EM 1-3) with their VNFs (VNF 1-3), and the NFVI (virtual computing, storage, and network over a virtualisation layer on computing, storage, and network hardware). Main reference points: Os-Ma, Or-Vnfm, Ve-Vnfm, Vn-Nf, Vi-Vnfm, Nf-Vi, Or-Vi, Vl-Ha.]
BSS: Business Support System
EM: Element Management
NFV: Network Functions Virtualization
NFVI: Network Functions Virtualization Infrastructure
OSS: Operations Support System
Source: NFV white paper from ETSI
194
NFV Architectural Framework (cont.)
The NFV framework consists of three main components:
• Virtualized network functions (VNFs) are software implementations of
network functions that can be deployed on a network function
virtualization infrastructure (NFVI).
• Network function virtualization infrastructure (NFVI) is the totality of all
(Commercial-Off-The-Shelf (COTS)) hardware and software components
that build the environment in which VNFs are deployed.
• Network functions virtualization management and orchestration
architectural framework (NFV-MANO Architectural Framework) is the
collection of all functional blocks, data repositories used by these blocks,
and reference points and interfaces through which these functional blocks
exchange information for the purpose of managing and orchestrating
NFVI and VNFs.
195
ETSI NFV ISG
• Although ETSI is a Standards Development
Organization (SDO), the object of NFV ISG is
not to produce standards.
• The key objectives are to achieve industry
consensus on business and technical
requirements for NFV, and to agree
common approaches to meeting these
requirements.
• NFV ISG will collaborate with other SDOs if
any standardization is necessary to meet
the requirements.
196
Source: https://portal.etsi.org/TBSiteMap/NFV/NFVLiaisonMatrix.aspx
196
ETSI NFV ISG Working Groups
• Six working groups are established in NFV ISG:
 Infrastructure Architecture
 Management & Orchestration
 Reliability & Availability
 Software Architecture
 Performance & Portability
 Security
• Coordination is provided by the Technical Steering Committee
(TSC), and the Network Operator Council provides guidance on
business priorities.
Source: https://portal.etsi.org/tb.aspx?tbid=789&SubTB=789
197
Timeline for NFV ISG work program
198
NFV Use Cases
2
Top NFV Drivers
• Service velocity
• Simpler provisioning
of multi-vendor
networks
• Lower OPEX
Source: Infonetics Research: IMS Service Strategies and Vendor Leadership:
Global Service Provider Survey, 2014
202
202
Real Market: Operators Are Moving IMS to NFV
• Moving to NFV is
happening
• Transition to software is
underway
• 76% moving IMS
elements to NFV by 2016
Source: Infonetics Research: IMS Service Strategies and Vendor Leadership:
Global Service Provider Survey, 2014
205
205
Business Revenue Drives Operator NFV
• Nearly all service
providers will buy NFV
products from telecom
equipment and data
center network vendors
• With VoLTE plans coming
into focus, IMS is a key
area for NFV
Source: Infonetics Research
206
vIMS Technology Challenge #1
• Most IMS functions are control plane only – no problem
implementing in software
• SBC is a key exception to this – most current solutions are
hardware-based
• SBC in software is HARD
 DDoS protection
 Media processing functions: media relay, encryption, transcoding
Source: MetaSwitch
207
vIMS Technology Challenge #2
• NFV promises massively and dynamically scalable and
resilient solutions
• Porting software designed for classic 1+1 telco appliance to
Cloud won’t deliver
Cloud is only 99.9% available – need to design around this
Web-scale application software architecture looks very
different
(e.g., stateless processing elements, distributed state stores,
N+k HA)
Most vIMS solutions today are simple ports of appliance-based software.
Source: MetaSwitch
208
208
NFV Use Cases: Service Chaining
209
Service Chaining
• Virtualization eliminates the dependency between a network function (NF)
and its hardware by creating standardized execution environment and
management interfaces for the Virtualized Network Functions (VNFs).
• This results in the sharing of the physical hardware by multiple VNFs in
the form of virtual machines (VMs).
• Further pooling of the hardware facilitates a massive and agile sharing of
NFV Infrastructure (NFVI) resources by the VNFs.
• Once new network functions are virtualized into VNFs, it is necessary to
organize the VNFs into an ordered graph to realize a network service.
• In NFV, such graphs are called VNF Forwarding Graphs.
210
POC#23: E2E Orchestration of Virtualized LTE Core-Network
Functions & SDN-based Dynamic Service Chaining of VNFs
using VNF-FG
•
Multi-vendor project: SK Telecom, HP, Telcoware, Samsung
211
POC#23: T-OVEN GUI
NFV-FG for Service Deployment
Rapid service deployment -> shorter time to market
Automated configuration -> reduced OPEX
[Figure: GUI showing VNFs (vVAS, vEPC, vCDN, vIMS), PNFs, and their connectivity]
212
POC#23: A Scenario for
Dynamic Service Chaining via SDN-driven VNF-FG
• vVAS functions located at Gi interfaces
• QoE Mgmt, TCP Acc., Video Opt.
• w/ SDN and VNF-FG, packets may be optionally forwarded to the VAS
functions dynamically as needed.
213213
Use Case: BT One Phone
Fixed-Mobile Convergence service for business launched by BT in July
2014
Time-to-market for new services is significantly
reduced.
Source: Metaswitch
214
Use Case: Indoona
Free mobile VoIP and messaging service provided by
Tiscali
• Service is delivered from an IMS network
• Clearwater Core + Perimeta SBC from
Metaswitch
• App Servers + Client Apps developed by
Tiscali
• Everything running virtualized on Vmware
• Capex and Opex low enough to support a
free calling service
• Service went live in December 2014
Capex and Opex are significantly reduced.
Source: Metaswitch
215
Source: ONS 2015
216
Use Case: Central Office Re-architected as a
Datacenter (CORD)
• CORD is an end-to-end solution POC that combines SDN, NFV,
Cloud with commodity infrastructure and open building blocks to
deliver datacenter economies of scale and cloud-style agility to
service provider networks.
• CORD enables service providers to build an underlying common
infrastructure in Central Office with white boxes, ONOS, OpenStack,
and XOS with a diversity of organizations building the services and
solutions above.
• CORD open building blocks include ONOS (SDN Control Plane),
Openstack (Virtual infrastructure mgmt), open commodity hardware,
OF-enabled OLT MAC and G.fast DPU.
Source: ONS 2015
217
Source: ONS 2015
218
Legacy Broadband Access using GPON
Source: ONS 2016
219
GPON in CORD
Source: ONS 2016
220
GPON in CORD (cont.)
Source: ONS 2016
221
It is NFVing in 4G
• At MWC 2015, AT&T announced that it would start deploying SDN and NFV technologies that
year as part of its Domain 2.0 vision for a next-generation network.
• It expects 5% and 75% of network functions to be virtualized by 2015 and 2020, respectively.
• At MWC 2014, China Mobile demonstrated a multivendor NFV POC.
• A-Lu vRAN + 1 Huawei vEPC + 1 multivendor vEPC + ZTE vIMS
• NTT DoCoMo + A-Lu + Cisco + NEC completed PoC trials
• To virtualize the EPC and offer commercial services by early 2016.
• Vodafone Hutchison Australia (VHA) has a five-year network evolution program to
adopt NFV in the core network, in which Ericsson is the sole supplier.
• At the NFV World Congress 2015, Huawei's Xi'an NFV/SDN Open Lab completed an OPNFV test
environment, becoming one of eight OPNFV verification labs worldwide. Huawei planned to work with
more than 150 partners in 2015 on deeper joint exploration to advance the NFV ecosystem together.
Source: ITRI
VNF Instantiation Message Flow
Source: ETSI GS NFV-MAN 001 (V1.1.1): "Network Function Virtualisation (NFV); Management and Orchestration".
Mixed Network Management between 3GPP and NFV-MANO
[Figure: the 3GPP management system (NM in the OSS/BSS, DMs, and EMs over Itf-N) alongside the NFV-MANO stack (NFV Orchestrator, VNF Manager, Virtualized Infrastructure Manager). VNFs and physical NEs (PNFs) sit on the NFVI; reference points include Os-Ma-nfvo, Or-Vnfm, Ve-Vnfm-em, Ve-Vnfm-vnf, Or-Vi, Vi-Vnfm, Nf-Vi, and Vn-Nf.]
Source: 3GPP TR 32.842 (V13.1.0)
Use Case from 3GPP: MME VNF Instantiation in a Mixed Network
[Figure: a mixed-network NM (legacy NM plus NFV NM) drives MME VNF instantiation through the NFV Orchestrator (NFVO), VNF Manager (VNFM), and Virtualized Infrastructure Manager (VIM); the numbered steps run from the NM request down through NFV-MANO to the NFVI, where the MME VNF is brought up alongside the existing physical MME NE serving the eNBs over S1]
Source: ITRI
METIS' 5G baseline system
[Figure: 5G baseline architecture with D2D, offloading, and SDN/NFV (vEPC, vIMS, ...) elements]
 5G focuses on meeting the needs of different application domains. Main 5G applications:
mobile broadband, virtual-reality multimedia content delivery, machine-type communication (MTC), remote machine control, intelligent transportation, etc.
 Key 5G technology options:
massive antenna arrays (Massive MIMO), non-orthogonal transmission, high-frequency communications, ultra-dense networks (UDN), C-RAN, D2D,
software-defined networking (SDN), network functions virtualization (NFV), content delivery networks (CDN), etc.
Source: METIS, Mobile and wireless communications Enablers for the Twenty-twenty Information Society (METIS), Final report on architecture, 31/01/2015
226
SDN/NFV in 5G (from NGMN point of view)
• NGMN envisions an architecture that leverages the structural
separation of hardware and software, as well as the
programmability offered by SDN and NFV.
227
Src: NGMN, “5G White Paper”, Feb. 17, 2015
Summary of NFV
Pros
• Reduced OPEX & CAPEX
• Increased speed of time to market
• Services can be rapidly scaled up/down as required
• Availability of network appliance multi-version and multi-tenancy
• Targeted service introduction based on geography or customer sets is possible
• Shortened transmission delay
• Enables a wide variety of eco-systems and encourages openness
• Market is shared by multiple vendors
Cons
• Achieving high-performance virtualized network appliances that are portable between different hardware vendors and different hypervisors
• Achieving co-existence with bespoke hardware-based network platforms while enabling an efficient migration path to fully virtualized network platforms that re-use network operator OSS/BSS
• Managing and orchestrating many virtual network appliances (particularly alongside legacy management systems) while ensuring security from attack and misconfiguration
• NFV will only scale if all of the functions can be automated
• Ensuring the appropriate level of resilience to hardware and software failures
• Integrating multiple virtual appliances from different vendors
Source: ITRI
229