On the Feasibility of “Breadcrumb” Trails Within OpenFlow Switches

Giuseppe Bianchi, Marco Bonola, Salvatore Pontarelli
University of Rome Tor Vergata, Email: {name.surname}@uniroma2.it
Abstract—Several network protocols require the ability to dynamically deploy, along a network path, stateful data, nicknamed "breadcrumbs", used to forward packets in the reverse direction. This is the case for classical reverse path forwarding schemes as well as for more recent information-centric networking approaches. Perhaps surprisingly, this paper shows that such a capability is already within reach of current OpenFlow switch architectures: its support requires only a very marginal modification of the existing OpenFlow hardware. We support our claim with a concrete hardware proof-of-concept implementation, and we show, with the help of both traditional reverse path schemes and original approaches, how such functionality can be programmed via a platform-agnostic abstraction.
I. Introduction

Coined in 2009 [1], the term Software Defined Networking (SDN) has gained significant momentum in recent years. SDN promises to enable easier and faster network innovation by making networks programmable and more agile, and by centralizing and simplifying their control: network administrators can handle heterogeneous multi-vendor network nodes without having to deal with a plethora of proprietary configuration interfaces.
Even if some of SDN's programmable networking ideas date back to the mid-1990s [2], and are by no means restricted to device-level programmability or to OpenFlow [3], it is fair to say that OpenFlow is the technology which brought SDN to the real world [1]. Quoting [2], "Before OpenFlow, the ideas underlying SDN faced a tension between the vision of fully programmable networks and pragmatism that would enable real-world deployment. OpenFlow struck a balance between these two goals by enabling more functions than earlier route controllers and building on existing switch hardware, through the increasing use of merchant-silicon chipsets in commodity switches". Indeed, a key asset of OpenFlow was the identification of a pragmatic compromise for configuring the forwarding
behavior of switching fabrics, in the form of a vendor-neutral
“match/action” programming abstraction. Starting from the
recognition that several different network devices implement
somewhat similar flow tables for a broad range of networking
functionalities (L2/L3 forwarding, firewall, NAT, etc), the
authors of OpenFlow proposed an abstract model of a programmable flow table which permits the device programmer to broadly specify a flow via a header matching rule, associate forwarding/processing actions (natively implemented in the device) with the matching packets, and access statistics associated with the specified flow. The success of the OpenFlow abstraction stems from the fact that (quoting [3]) it is "amenable to high-performance and low-cost implementations; it is capable of
supporting a broad range of research; and it is consistent with
vendors’ need for closed platforms”.
Still, a broad class of applications requires the switch to dynamically set a forwarding state triggered by a packet-level event, a feature that OpenFlow switches do not provide. For concreteness, in this paper we address the special (but still very broad) case of applications which we refer to, in full generality, by the descriptive name "breadcrumb" forwarding, and which require the ability to dynamically update (on a per-packet basis) forwarding states for the opposite direction of the flow the arriving packet belongs to; in essence, for a different flow. The simplest and most striking example is MAC
learning: an incoming packet is forwarded on the basis of its
MAC destination address, but the forwarding table must be
updated with a new entry mapping the MAC source address
to the switch port the packet has entered from. Despite being
a fundamental (and historical!) feature in Layer 2 switches,
the MAC learning function cannot be expressed in terms of a
match/action primitive, and hence it is not natively supported
by a basic OpenFlow implementation. And, to a greater extent,
OpenFlow is unable to support more general reverse path
forwarding schemes, as mandated by legacy protocols such as
RSVP or PIM-SM, or as needed in bidirectional load balancing
schemes which aim at pinning a flow to both an upward and a
downward path, or as an emerging fundamental component in many Information-Centric proposals, which deliver data using the interface from which request (interest) packets arrived.
The point is that OpenFlow does not natively permit forwarding states (i.e. flow-mod commands) to be deployed dynamically without the intervention of an external controller.
Thus, stateful tasks (such as setting a new forwarding table entry "learned" from an incoming packet) are forcibly and unnecessarily centralized, even when they rely only on local switch/link states. In the best case, the explicit involvement of the controller for any stateful processing and for any update of the match/action rules yields extra signaling load and processing delay, and calls for a finely distributed implementation of the "logically" centralized controller. In the worst case, the slow control plane operation prevents, a priori, the support of network control algorithms which require fast-path data plane forwarding updates.
Our contribution.
In this work we specifically show that "breadcrumb" forwarding, a key primitive in several applications and protocols, can be supported by an OpenFlow switch with minimal modifications to the underlying architecture, and via commodity hardware technologies already widely used in OpenFlow switches. In detail, we show that the solution to this issue is surprisingly simple: it suffices to deploy a state table (in practice, a cheap and scalable hash table) which i) is read and rewritten for (in principle) each packet, and which ii) uses two different flow keys for the read and write operations.
We prove the viability of the proposed concept i) by showing that it requires minimal modifications with respect to OpenFlow version 1.3 [4], and ii) by demonstrating, via a proof-of-concept hardware implementation (minimal in scale and features, but sufficient to identify implementation issues and limitations), that it can operate at wire speed. We finally
conclude the paper by sketching a few simple use cases which
show how the proposed operation can be configured on the
switch, while retaining platform independence.
Related Work.
OpenFlow, now at version 1.5 [5], has so far pragmatically
evolved by means of very specific extensions focusing on
critical needs, such as more flexible header matching, action
bundles, pipelined tables, support for multiple controllers, and
many more. Consequently, state management inside the switch has been addressed only in relation to specific needs (e.g. group ports for fast failover and load balancing, synchronized tables for sharing the same table across the pipeline, etc.). In other words, OpenFlow lacks the general, explicit state management primitives which our work instead tries to foster.
This paper builds upon our previous OpenState proposal
[6], where we showed that OpenFlow could support the execution of Finite State Machines with Output (Mealy Machines).
Many applications, and most notably all those which rely on
“breadcrumb” forwarding, cannot be described as a Mealy
Machine, as they require modifying the states of flows which differ from the one to which an arriving packet belongs. Even though this problem was anticipated in [6], this paper makes a further step in terms of both technical design (including actual hardware support) and exploitation by applications, including new ones, which make use of such a facility.
In terms of research area, the present work belongs to a relatively recent research trend pursuing improved data plane programmability. Proposals such as POF [7], [8], although not yet targeting stateful flow processing, significantly improve header matching flexibility and programmability, freeing it from any specific structure of the packet header. The
already mentioned OpenState proposal [6], [9] and FAST
[10] explicitly add support for per-flow state handling inside
OpenFlow switches, although they are still limited in the type
of supported applications. Perhaps the most influential work in
data plane programmability is the P4 programming language
[11], [12], which improves the programming flexibility of
an OpenFlow-type pipeline by leveraging advanced hardware
technology, namely dedicated processing architectures [13] or
Reconfigurable Match Tables [14] as an extension of TCAMs
(Ternary Content Addressable Memories). Still, P4 has so far only marginally addressed support for stateful primitives: even though registers for (dynamic) state handling were introduced in the latest 1.0.2 language specification [15], the underlying technical support and HW design have not yet been specified.
II. "Breadcrumb" Forwarding

Let us start from the layman's definition of "breadcrumb" forwarding given in the introduction, i.e., the ability to dynamically update (on a packet-by-packet basis) forwarding states for the opposite direction of the flow the arriving packet belongs to. Can we formally specify and express it in terms of an OpenFlow "match/action" primitive, or a combination thereof?
Fig. 1: High level architecture (Lookup Key Extractor, State Table, TCAM Table, Update Key Extractor)
As will become clear in section II-A, the answer is positive and extremely simple, as long as we i) make the forwarding state of a flow explicit, i.e. associate a state label with a flow, and dedicate a table "just" to maintaining such flow states, and ii) use the forwarding state of the flow, rather than the flow identity, to associate forwarding actions with an incoming packet (and to trigger state updates). In other words, recall that a (basic) OpenFlow "match/action" takes the form of a "map"

Match(flow id, other pkt hdr fields) → Action(s)

which associates one or more forwarding actions, to be performed on the considered packet, with a match performed on the header fields of that same packet. In the above expression we have made explicit that, among the various header fields used in an OpenFlow match, there might be one field (or a combination thereof) which, in relation to the specific application's purpose, might take on the meaning of a Flow Identity. Even if apparently disadvantageous, we may split the above single-stage map into two distinct match stages
1) Match(flow id) → flow state
2) Match(flow state, other pkt hdr fields) → Action(s)
so as to decouple the association between a flow and its
state from the actual forwarding rule. This split brings several
key advantages. First, the table which maps flow identities to
flow states, and which we refer to as State Table, does not
require wildcard matches, and hence can be very efficiently
and scalably implemented in RAM (e.g. as a hash table).
Second, being in RAM, this table is trivially updated in
O(1) time (actually, one clock cycle), unlike a TCAM whose
updates are very demanding, and require in the worst case
an O(N ) complexity, with N the number of entries. Such
decoupling, and the resulting ability to dynamically update a
flow state during the packet processing pipeline (i.e., on the
fast path), permits us to get back to our initial goal, and support
“breadcrumb” forwarding by means of the following, standard,
single OpenFlow TCAM match, plus one read (before the
match) and one write (output of the match) State Table access:
Match(FK:S, pkt hdr fields) → Action(s), write(FK’:S’)
where FK:S is the state S associated with the flow key FK the packet belongs to, write (into the state table) can be considered a supplementary OpenFlow action/instruction, and S' is the state that shall be written (or updated) for a different flow identity key FK'. We now turn this idea into a concrete architecture.
A. Architecture
As discussed above, "breadcrumb" forwarding ultimately requires "just" supporting rules summarized by the template <Match(FK:S, pkt hdr fields) → Action(s), write(FK':S')>. This can be done via the conceptual switch architecture
depicted in Figure 1. A packet entering the switch is first
processed by a module called Lookup Key Extractor, which is programmatically configured (section III) to extract from the packet header a bit string which represents the Flow Key FK, i.e. the "entity" (a MAC address, IP address, 5-tuple flow identifier, etc.) which may be attributed a (forwarding) state.
The physical extraction of a bit string FK permits us to implement the subsequent lookup in the State Table via an exact match, i.e. via a RAM-based hash table, as opposed to a more costly and less scalable TCAM-based wildcard match, which would have been necessary if the whole packet header were used for the match (as in a generic OpenFlow table).
The State Table returns a state label associated with the key FK. For flows whose state is not yet stored in the table (e.g. new flows), a DEFAULT state is conventionally returned.
The retrieved state label S associated with the flow key FK is appended as metadata to the packet, and passed to a subsequent TCAM table. This name stresses the fact that this table is technically implemented as a standard (usually TCAM-based) OpenFlow wildcard match table. However, the semantics of this table is more precise than in the general OpenFlow case: the TCAM table receives two conceptually distinct inputs, namely the flow state and the packet header, and returns two outputs, namely the associated forwarding actions (e.g. DROP, FORWARD, FLOOD, etc., as per standard OpenFlow), plus a supplementary datum which we call the target state S'.
The last step in our construction consists in updating the state table, by saving S' for a different Flow Key FK', which is extracted from the packet header by the Update Key Extractor module. This step is the crucial one for implementing the desired "breadcrumb" forwarding mechanism. Indeed, while the packet is forwarded using the state associated with a given flow (e.g. identified by the MAC destination address), state updates are performed on a different flow (e.g. identified by the MAC source address).
B. Programming Abstraction
The above proposed architecture permits reverse path forwarding states to be set via configurations which can be expressed in terms of abstract forwarding rules, i.e. in a way analogous to the vendor-neutral OpenFlow match/action rules. The key difference is that the programmer is free to define the states which describe a desired application's behavior, as well as how states should be set on the basis of actual packet-level matches. Specifically, a configuration is completely identified by: i) the Flow Key FK used for state lookup; ii) the flow key FK' used for state updates; iii) the arbitrary set of state labels (plus a default state) which describe the desired behavior; and iv) the pairs <state, header match → actions, target state> which formalize the rules according to which states are set. We refer the reader to section IV for programming examples.
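Since a configuration is fully identified by these four items, it can be written down as plain data. The following encoding is purely illustrative (ours, not an OpenFlow schema), shown for a two-port MAC learning switch:

```python
# Hypothetical, illustrative encoding of a "breadcrumb" configuration
# for a 2-port MAC learning switch; "*" denotes a wildcard.
mac_learning_config = {
    "lookup_scope": ["eth_dst"],          # FK : key used to read the state table
    "update_scope": ["eth_src"],          # FK': key used to write the state table
    "states": ["DEF", 1, 2],              # DEF plus one label per switch port
    "rules": [
        # (state, header match)      -> (actions, target state)
        (("DEF", {"in_port": "*"}),    (["flood"],  "in_port")),
        ((1,     {"in_port": "*"}),    (["fwd(1)"], "in_port")),
        ((2,     {"in_port": "*"}),    (["fwd(2)"], "in_port")),
    ],
}
```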
C. OpenFlow Compatibility
Interestingly, most of the features provided by the architecture described in section II-A are already supported by the latest OpenFlow version 1.5 [5]. Indeed, OpenFlow has supported pipelined tables since version 1.1, as well as matches on metadata fields, which are required by our TCAM Table.
Fig. 2: Scheme of the hardware prototype (ingress queues, delay queue, state table, TCAM table, egress queues)
Moreover, the update of the target state could be implemented as an OpenFlow instruction, and hence integrated into the set of actions. The trickiest feature is perhaps the architecture's backward loop, which requires accessing the State Table at the beginning of the processing pipeline and updating the same table at the end of the pipeline: this is actually made possible since OpenFlow 1.4 by the flow table synchronisation feature (whose details are, however, in part left to the implementation; we believe that our proposed approach has the advantage of being unambiguous). In conclusion, the only non-standard feature we require from an OpenFlow switch is the differentiation between the Lookup and Update Flow keys, which is however a minimal modification in terms of HW design, as shown next.
III. Hardware Implementation

The architecture presented in section II-A has been implemented in a proof-of-concept hardware prototype using an FPGA platform. Since the aim of the prototype is to show the feasibility of the proposed architecture, the current implementation has a small TCAM and limited-size hash tables, but provides all the functionalities needed for reverse path applications: i) an (exact) match table to associate the flow id with the flow state; ii) a (wildcard-based) match table to decide the action and the state update based on the flow state and on other packet header fields; iii) a flexible/configurable (POF-like) extractor block to select the packet header fields that build the flow identities FK and FK'; and iv) a set of actions to apply to the packets (forward, drop, flood).
The hardware prototype has been designed using as target development board the NetFPGA SUME [16], an x8 Gen3 PCIe adapter card incorporating a Xilinx Virtex-7 690T FPGA [17], four SFP+ transceivers providing four 10GbE links, three 72 Mbit QDR II SRAMs and two 4 GB DDR3 memories. The board also provides a USB connector for FPGA programming and debugging, and a UART interface. The FPGA is clocked at 156.25 MHz, with a 64-bit data path from the Ethernet ports, corresponding to a 10 Gb/s throughput per port. The blocks composing the hardware prototype are depicted in Fig. 2. The data coming from the 10GbE links are aggregated by a mixer with a 320-bit wide output data bus. A delay queue stores the packet during the time needed by the system to operate. The system is able to provide an overall throughput of 50 Gb/s.
State table. The State table is implemented as a d-left hash table (with d=4) sized for 4K entries. In order to support the target throughput, the RAMs composing the d-left table are realized as dual-port RAMs, so as to provide one read and one write operation per clock cycle. The state table is further equipped with a small TCAM with 32 entries of 128 bits (TCAM1), and an associated Block RAM (RAM1) of 32 entries of 32 bits that reads the output of the TCAM and provides the state associated with the specific TCAM row. The TCAM is needed to handle special (wildcard) cases, such as static state assignment to a pool of flows (e.g. ACLs), flow categories which are out of the scope of the machine and must be processed in a different way (if necessary by another stage, either stateless or stateful), etc. The table is accessed with 128-bit keys provided, alternately, by a lookup extractor and an update extractor during state read and (re)write, respectively.
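For readers unfamiliar with d-left hashing: the table is split into d sub-tables with independent hash functions, and each key is inserted into the least-loaded of its d candidate buckets, which keeps bucket occupancy balanced. A toy software model (ours; the bucket sizes and hash function are illustrative assumptions, not the prototype's parameters):

```python
import hashlib

class DLeftTable:
    """Toy d-left hash table: d sub-tables, each bucket holds a few entries;
    an insert goes to the least-loaded candidate bucket among the d choices."""

    def __init__(self, d=4, buckets_per_table=256, bucket_size=2):
        self.d = d
        self.bucket_size = bucket_size
        self.tables = [[[] for _ in range(buckets_per_table)] for _ in range(d)]

    def _bucket(self, i, key):
        # independent hash per sub-table, obtained by salting with the index
        h = hashlib.blake2b(key.encode(), salt=bytes([i])).digest()
        return self.tables[i][int.from_bytes(h[:4], "big") % len(self.tables[i])]

    def write(self, key, state):
        candidates = [self._bucket(i, key) for i in range(self.d)]
        for b in candidates:                      # update in place if present
            for entry in b:
                if entry[0] == key:
                    entry[1] = state
                    return True
        target = min(candidates, key=len)         # d-left: least-loaded bucket
        if len(target) < self.bucket_size:
            target.append([key, state])
            return True
        return False                              # table overflow

    def read(self, key, default="DEF"):
        for i in range(self.d):
            for k, s in self._bucket(i, key):
                if k == key:
                    return s
        return default
```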
Lookup/update extractors. These extractors are a key element for the flexible selection of the flow identities FK and FK'. Each block selects a subset of the incoming packet header fields to build FK or FK'. Both extractors are configurable at the application level (i.e. the selection of the fields composing the key is defined at configuration time). The internal structure of the extractors is based on the use of elementary shift-and-mask (SaM) blocks. A SaM block first selects the beginning of the header (i.e. performs the shift of the input key), and then performs a bit-wise mask operation. The shift-and-mask block is configured using two configuration values, one that defines the starting pointer and another that provides the bit-wise mask. The number of SaM blocks and the output size of each block is a design choice that depends on the number of fields and on the complexity of the FK and FK' to be extracted. For example, in our FPGA prototype the extractors are built using two 64-bit wide SaM blocks. The outputs of the two extracted fields are concatenated to form a 128-bit vector. Since typical flow identity keys can refer to multiple fields, but these are usually contiguous or at a limited distance from each other, we assume that two SaM blocks are sufficient for a wide range of applications.
The logical operations composing the extractor are easily implementable in hardware and are able to process the incoming data at the required clock frequency, while maintaining the high degree of flexibility needed to select different kinds of protocol fields. For example, if the lookup FK is the SRC,DST MAC pair and the update FK' is the DST,SRC MAC pair, the lookup will use 0 and 48 as starting pointers, while the update will use 48 and 0, thus providing cross-flow state management of the Ethernet flows. We remark that the use of a configuration mask allows multiple fields to be combined to form the flow identity key (e.g. the source IP address field can be combined with the destination port field of a TCP packet), with the limitation that the bits composing these fields must lie within the same 128-bit window defined by the index configuration register.
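The SaM operation itself is easy to model in software. In the sketch below (ours; offsets are bit positions from the start of the header, as in the MAC example above, and MAC48 is an illustrative mask keeping the top 48 bits of the 64-bit window):

```python
MAC48 = ((1 << 48) - 1) << 16   # keep the top 48 bits of a 64-bit window

def sam(header: bytes, shift_bits: int, mask: int) -> int:
    """Shift-and-mask block: skip shift_bits from the start of the header,
    take the next 64 bits, then apply the bit-wise mask."""
    val = int.from_bytes(header, "big")
    window = (val >> (8 * len(header) - shift_bits - 64)) & ((1 << 64) - 1)
    return window & mask

def extract_key(header: bytes, cfg) -> int:
    """Concatenate the outputs of two 64-bit SaM blocks into a 128-bit key."""
    (s0, m0), (s1, m1) = cfg
    return (sam(header, s0, m0) << 64) | sam(header, s1, m1)
```

With a 14-byte Ethernet header, the configurations [(0, MAC48), (48, MAC48)] and [(48, MAC48), (0, MAC48)] produce the swapped DST/SRC key pair used for cross-flow state management.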
As a last remark, we notice that the packet extractor field configuration is somewhat similar to the format of a Protocol-Oblivious Forwarding (POF) element [7]. While the extractor configuration is composed of an offset and a mask, the corresponding POF element is formed by an offset and a length. However, the mask configuration register can be used to shorten the length of the extracted key (i.e. by setting the last n bits of the incoming input to 0). Therefore this simple block can provide a viable hardware implementation of the operations provided by a POF element.
TCAM table. The TCAM (TCAM2) has 128 entries of 160 bits, associated with a Block RAM (RAM2) of 128 entries of 64 bits, storing the next state used to update the state table and the specific action to perform on the packet. This TCAM takes as input the aggregated (unmasked) lookup scope and the retrieved flow state, providing as output the row associated with the highest-priority matching rule. The limited number of entries of the TCAMs is due to the inefficient mapping of these structures on an FPGA. This number, however, is similar to that of other FPGA-based TCAM implementations [18].
Packet output. A final Action Block applies the retrieved action to the packet coming from the delay queue. Since our prototype is a proof of concept, as of now only a basic subset of OpenFlow actions has been implemented: drop, flood, forward. This block then provides as output the four 64-bit data buses for the four 10 Gb/s egress ports.
Microcontroller. The NetFPGA SUME infrastructure that we used for the development of our prototype uses MicroBlaze, a Xilinx soft-core microcontroller, for configuration/management tasks. In our implementation this microcontroller is used for three main tasks: i) to retrieve debug/status information; ii) to configure the memories (TCAM and block RAM) and the extractors; and iii) to provide an aging mechanism to delete unused entries in the state table. For this purpose, two activity flag bits are stored in each state table entry and are set to the value 11 (ACTIVE) each time the flow is read or written. At a configurable time interval Taging, the microcontroller scans the state table, moving all ACTIVE entries to the value 10 (INACTIVE) and removing (i.e. setting the activity bits to 00) all entries that are already INACTIVE. This mechanism removes all entries that have not been accessed for a time greater than Taging. While active flows keep oscillating between the ACTIVE and INACTIVE states, flows that are not seen by the switch for a time greater than Taging are first moved from ACTIVE to INACTIVE, and then removed, freeing space for new flow identity keys.
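This two-bit scheme behaves like a clock-style approximation of LRU; a behavioral model of ours, with symbolic flag values:

```python
ACTIVE, INACTIVE = 0b11, 0b10

class AgingStateTable:
    """State table with two activity-flag bits per entry; a periodic sweep
    demotes ACTIVE entries and evicts entries that are already INACTIVE."""

    def __init__(self):
        self.entries = {}  # key -> [state, flags]

    def access(self, key, state=None):
        """Any read or write marks the entry ACTIVE."""
        if key in self.entries:
            if state is not None:
                self.entries[key][0] = state
            self.entries[key][1] = ACTIVE
            return self.entries[key][0]
        if state is not None:
            self.entries[key] = [state, ACTIVE]
        return None

    def aging_sweep(self):
        """Run every T_aging: ACTIVE -> INACTIVE, INACTIVE -> evicted."""
        for key in list(self.entries):
            if self.entries[key][1] == INACTIVE:
                del self.entries[key]        # untouched for more than T_aging
            else:
                self.entries[key][1] = INACTIVE
```

A flow touched at least once per interval oscillates between ACTIVE and INACTIVE; a silent flow survives exactly one sweep and is evicted by the next.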
Synthesis results. The whole system has been synthesized using the standard Xilinx design flow; the resource occupation of the implemented system, in terms of used logic resources, is reported in the table below.

Type of resource       # of used resources
Number of Slice LUTs   10,691 out of 24,320
Block RAMs             53 out of 212 (25%)
The FPGA prototype confirms the feasibility of the hardware implementation. The additional hardware needed to support cross-flow state management (namely, the extractor modules) uses a negligible amount of logic resources and does not raise any critical implementation issues. Similarly, the limited number of actions and TCAM entries implemented in the prototype is just due to the proof-of-concept nature of our prototype (and to the lack of OpenFlow hardware from which we could directly inherit these parts).
IV. Use Cases

In this section we present three use cases that rely on "learning facilities" to implement various forwarding applications. These applications exploit the proposed OpenFlow extension, which leaves the programmer complete freedom to decide what should be learned and by which flow identifiers (see e.g. the example of switching based on MPLS labels presented in section IV-B).
A. MAC Learning
The configuration of a MAC learning switch consists of the following steps:
1) Set the lookup-scope to the Ethernet destination address and the update-scope to the Ethernet source address;
2) Assuming a switch with N ports, configure the TCAM tables with the following (N + 1) × N OpenFlow messages:

for i in [PORT1, PORT2, ..., PORTN]:
    flow_mod([match=[state=DEF, inport=i],
              action=[flood, set-state(i)]])
    for j in [PORT1, PORT2, ..., PORTN]:
        flow_mod([match=[state=j, inport=i],
                  action=[fwd(j), set-state(i)]])
The MAC learning TCAM table will thus be configured with the following (N + 1) × N entries, for i, j ∈ (1, N):

match: state=DEF, in_port=i  →  actions: set_state(i), flood
match: state=j,   in_port=i  →  actions: set_state(i), fwd(j)
The above configuration uses port numbers as state labels. A lookup in the State Table using the Ethernet destination address will retrieve the port to which the packet should be forwarded, or a DEF (default) state if the address is not stored in the table. The state label (switch output port) and the ingress port are provided as input to the TCAM table, which returns a flood action on all switch ports in the case of the default state, or a forward action to the port returned as state otherwise. In both cases, a set_state(ingress_port) instruction is triggered, updating the entry corresponding to the (different) update-scope, i.e. the Ethernet source address.
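The resulting behavior can be checked with a tiny software model (ours, not the prototype): the state table starts empty, the first packet toward an unknown MAC is flooded, and the reverse packet is then unicast thanks to the breadcrumb left by the first one.

```python
state_table = {}  # keyed by eth_dst (lookup) / eth_src (update) -> learned port
DEF = None

def process(pkt):
    """One pass through the two tables of the MAC learning configuration."""
    state = state_table.get(pkt["eth_dst"], DEF)        # lookup-scope: eth_dst
    action = "flood" if state is DEF else f"fwd({state})"
    state_table[pkt["eth_src"]] = pkt["in_port"]        # update-scope: eth_src
    return action

# Host A (port 1) talks to host B (port 2) and back.
a1 = process({"eth_src": "A", "eth_dst": "B", "in_port": 1})  # B unknown: flood, learn A
a2 = process({"eth_src": "B", "eth_dst": "A", "in_port": 2})  # A known: unicast, learn B
a3 = process({"eth_src": "A", "eth_dst": "B", "in_port": 1})  # B now known: unicast
```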
B. Ingress/egress switching based on MPLS labels

As the MAC learning example has shown, the addresses used by the learning approach are configured via the lookup and update scopes, and forwarding to switch ports is performed according to state labels. By changing the definition of the states and of the lookup/update scopes, the same learning construction can be cast into widely different scenarios. As an example, consider a data center network where a subset of edge switches, directly connected to the end hosts via edge ports, act as ingress/egress nodes and are connected to each other via a core network engineered via MPLS paths. Core switches are connected to edge switches via transport ports. Once a packet, say a layer 2 Ethernet frame, arrives at an ingress switch, the ingress node must identify which path brings the packet to the egress switch, before adding the appropriate MPLS label and forwarding the packet on the relevant path. A frequently recurring idea is to use an identifier of the egress switch as the label itself. The problem, ordinarily solved by dedicated control protocols or by a centralized SDN controller, is to create and maintain a mapping between a destination host address and the relevant egress switch identifier (MPLS label).

At least in principle (because of obvious emerging inefficiencies), it could be possible to get rid of any control protocol, and adopt the following simple solution mimicking a (hierarchical) layer 2 operation. Each ingress switch maintains a forwarding database mapping destination MAC addresses to egress switch identifiers. When a packet arrives at an ingress switch, a state table is queried using its MAC destination as key; if an entry is found, a label comprising both the egress and the ingress switch identifiers is added to the packet, and the core network will forward the packet based on the egress switch identifier. Conversely, if no entry is found in the forwarding database, the packet is broadcast to all the egress switches1. In turn, egress switches can use the received packets to learn the mapping between the source MAC address of the packet and its ingress switch (whose identifier is included in the MPLS label). Note that the egress switches will also need to store the layer 2 forwarding information, i.e. through which edge port the packet shall be forwarded.

Such an operation is trivially implemented with OpenState by an FSM program conceptually very similar to that used for MAC learning, but cast in the new hierarchical port/label setting; specifically, it would suffice2 to set in the edge switches (the notation in the tables being self-explanatory):

1) lookup-scope = MAC dst and update-scope = MAC src;

2) state label = the pair [Pi, Sj], with Pi an edge port of the local ingress/egress switch, and Sj an egress switch identifier;

3) TCAM table: entries for packets incoming from an edge port (outbound packets; the packet is labeled and forwarded to the egress switch(es); the state table learns the edge port of the arriving packet):

match: state=[*,DEF], in_port=Pi  →  actions: set_state(Pi,*), flood
match: state=[*,Sj],  in_port=Pi  →  actions: set_state(Pi,*), push([Pi,Sj]), fwd(Sj)

4) TCAM table: entries for packets incoming from a transport port (inbound packets; the packet is decapsulated and forwarded to the edge port(s); the state table learns the ingress switch from which the packet arrives):

match: state=[DEF,*], label=Sj  →  actions: set_state(*,Sj), flood
match: state=[Pi,*],  label=Sj  →  actions: set_state(*,Sj), pop, fwd(Pi)

1 The broadcast will also rely on labels, so that core switches need only maintain forwarding states for egress switch labels rather than per-MAC states.
2 We have not actually implemented it, as it is conceptually equivalent to the MAC learning example and would give limited extra insight; indeed we do not claim that such an approach is effective (at least as presented here), but we use it as a hopefully compelling example of how OpenState permits learning to be deployed in different settings.
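A behavioral sketch of a single edge switch may clarify the two rule sets. This is our own model, with DEF components encoded as None and push/pop rendered as adding/removing a label field:

```python
# Edge-switch model: state is the pair [edge_port, egress_switch_id];
# outbound packets match on eth_dst, inbound packets carry an MPLS-like label.
state_table = {}            # MAC -> [edge_port or None, egress_switch or None]

def get_state(mac):
    return state_table.get(mac, [None, None])

def outbound(pkt):
    """Packet from an edge port: label it toward the learned egress switch."""
    _, egress = get_state(pkt["eth_dst"])
    st = get_state(pkt["eth_src"])
    st[0] = pkt["in_port"]                    # learn: source MAC behind this edge port
    state_table[pkt["eth_src"]] = st
    if egress is None:
        return "flood"                        # state=[*,DEF]
    pkt["label"] = (pkt["in_port"], egress)   # push([Pi, Sj])
    return f"fwd({egress})"

def inbound(pkt):
    """Packet from a transport port: pop the label, forward to the edge port."""
    port, _ = get_state(pkt["eth_dst"])
    _, ingress_switch = pkt.pop("label")      # pop
    st = get_state(pkt["eth_src"])
    st[1] = ingress_switch                    # learn: source MAC via its ingress switch
    state_table[pkt["eth_src"]] = st
    return "flood" if port is None else f"fwd({port})"
```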
C. A simple Content-Centric Networking implementation
Content-Centric Networking (CCN) [19] is a communication architecture based on named data. In CCN, resources are not addressed by IP addresses but by content names. CCN considers two kinds of packets: interest packets and data packets. When a host wants to retrieve a specific content, it generates an interest packet and sends it through its output interface. When a host storing the required content receives a CCN interest packet, it simply sends the content back encapsulated in a CCN data packet. When a CCN router receives an interest packet from an input interface I, it performs the following actions: (1) it stores the association between the required content name N and I in a local table called the Pending Interest Table (PIT); (2) it forwards the interest packet according to the name-based prefixes stored in the Forwarding Information Base (FIB). If another interest packet for the same content name is received from a different input interface, in the time window before the content is actually received, the CCN router adds the new interface to the list of interfaces associated with the pending interest, without forwarding it. When a CCN router receives a data packet, it checks for a pending interest in the PIT and, if a match is found, the data packet is forwarded via the interface list associated with that content name.
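The PIT behaviour just described can be sketched as a few lines of Python (an illustrative model of standard CCN forwarding, not the switch implementation; class and method names are ours, and the FIB lookup is reduced to flooding, as in the simplified configuration discussed below):

```python
# Minimal model of CCN interest aggregation and data forwarding via the PIT.

class CCNRouter:
    def __init__(self, ports):
        self.ports = set(ports)
        self.pit = {}  # content name N -> set of interfaces that asked for it

    def on_interest(self, name, in_port):
        if name not in self.pit:
            # First interest for this name: record it in the PIT and
            # forward it (here: flood on all other interfaces).
            self.pit[name] = {in_port}
            return ("forward", self.ports - {in_port})
        # Same name already pending: aggregate the new interface,
        # do not forward the interest again.
        self.pit[name].add(in_port)
        return ("aggregate", set())

    def on_data(self, name):
        # Data satisfies the pending interest: send it back over every
        # interface recorded in the PIT, then remove the PIT entry.
        out = self.pit.pop(name, set())
        return ("forward", out)

r = CCNRouter(ports=[1, 2, 3, 4])
r.on_interest("/video/seg1", in_port=1)  # first interest: flooded on 2, 3, 4
r.on_interest("/video/seg1", in_port=2)  # same name pending: aggregated only
r.on_data("/video/seg1")                 # data sent back on interfaces 1 and 2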
Assuming CCN packet parsing capabilities, the proposed architecture can easily implement the basic CCN interest and data forwarding approach, or in other words, a simple case of CCN in which: (1) the CCN routers have no content cache; (2) the forwarding strategy is simple flooding (i.e., the interest packet is forwarded via all interfaces except the one from which it was received)3. In detail, the CCN approach described above can be configured in our proposed switch according to the following steps:
1) set the lookup-scope and update-scope to the CCN content name (a match field currently not implemented in OpenFlow);
2) the state label S is the set of ports from which an interest has been received, represented as a bit vector in which the i-th bit corresponds to the i-th port number;
3) assuming an N-port switch, send the following messages:

for i in [PORT1, PORT2, ..., PORTN]:
    flow_mod([match=[ccn.type=intr, state=DEF],
              action=[flood, set_state(i)]])
    flow_mod([match=[ccn.type=intr, state!=DEF],
              action=[drop, set_state(S OR i)]])

3 More complex prefix matching strategies are the object of ongoing work.

Conclusions

In this work we proposed how to extend OpenFlow to provide the necessary primitives to implement "breadcrumbs"-based protocols in the data plane, and presented an FPGA implementation of the proposed extension. The most important lesson learned is that the configurable/flexible logic for the selection of the flow identification keys has a very limited impact in terms of resource requirements (and the same would hold if configurable/flexible logic were added in the action block). The overall FPGA prototype has performance and resource requirements practically identical to those of a standard OpenFlow implementation. Finally, the paper describes use cases in which the differentiation between lookup and update flow keys is exploited to implement "breadcrumbs"-based protocols.

This work is partially supported by the EU Commission in the frame of the Horizon 2020 ICT-05 BEBA (BEhavioural BAsed forwarding) project, grant # 644122.
References

[1] K. Greene, "TR10: Software-defined networking," MIT Technology Review, 2009.
[2] N. Feamster, J. Rexford, and E. Zegura, "The road to SDN: an intellectual history of programmable networks," ACM SIGCOMM Computer Communication Review, vol. 44, no. 2, pp. 87–98, 2014.
[3] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.
[4] "OpenFlow 1.3 Software Switch," http://cpqd.github.io/ofsoftswitch13/.
[5] Open Networking Foundation, "OpenFlow Switch Specification ver. 1.5," Tech. Rep., 2015.
[6] G. Bianchi, M. Bonola, A. Capone, and C. Cascone, "OpenState: programming platform-independent stateful OpenFlow applications inside the switch," ACM SIGCOMM Computer Communication Review, vol. 44, no. 2, pp. 44–51, 2014.
[7] H. Song, "Protocol-oblivious forwarding: unleash the power of SDN through a future-proof forwarding plane," in Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN '13), 2013, pp. 127–132.
[8] H. Song, J. Gong, H. Chen, and J. Dustzadeh, "Unified POF programming for diversified SDN data plane devices," in ICNS 2015, 2015.
[9] S. Pontarelli, M. Bonola, G. Bianchi, A. Capone, and C. Cascone, "Stateful OpenFlow: hardware proof of concept," in IEEE High Performance Switching and Routing (HPSR), 2015.
[10] M. Moshref, A. Bhargava, A. Gupta, M. Yu, and R. Govindan, "Flow-level state transition as a new switch primitive for SDN," in 3rd Workshop on Hot Topics in Software Defined Networking, 2014, pp. 61–.
[11] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese et al., "P4: programming protocol-independent packet processors," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 87–95, 2014.
[12] L. Jose, L. Yan, G. Varghese, and N. McKeown, "Compiling packet programs to reconfigurable switches," in USENIX NSDI, 2015.
[13] "Intel Ethernet Switch FM6000 Series - Software Defined Networking." [Online]. Available: http://www.intel.com/content/dam/www/public/
[14] P. Bosshart, G. Gibb, H.-S. Kim, G. Varghese, N. McKeown, M. Izzard, F. Mujica, and M. Horowitz, "Forwarding metamorphosis: fast programmable match-action processing in hardware for SDN," in ACM SIGCOMM Conference, 2013, pp. 99–110.
[15] The P4 Language Consortium, "The P4 Language Specification, version 1.0.2," March 2015.
[16] N. Zilberman, Y. Audzevich, G. Covington, and A. Moore, "NetFPGA SUME: toward 100 Gbps as research commodity," IEEE Micro, vol. 34, no. 5, pp. 32–41, Sept. 2014.
[17] "Virtex-7 Family Overview," http://www.xilinx.com.
[18] J. Naous, D. Erickson, G. A. Covington, G. Appenzeller, and N. McKeown, "Implementing an OpenFlow switch on the NetFPGA platform," in 4th ACM/IEEE Symposium on Architectures for Networking and Communications Systems, 2008, pp. 1–9.
[19] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, "Networking named content," in Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, ACM, 2009, pp. 1–12.