Deliverable D4.1 Demonstrators design and prototyping


Ref. Ares(2016)3477241 - 15/07/2016

Converged Heterogeneous Advanced 5G Cloud-RAN Architecture for Intelligent and Secure Media Access

Project no. 671704

Research and Innovation Action

Co-funded by the Horizon 2020 Framework Programme of the European Union

Call identifier: H2020-ICT-2014-1

Topic: ICT-14-2014 - Advanced 5G Network Infrastructure for the Future Internet

Start date of project: July 1st, 2015

Deliverable D4.1

Demonstrators design and prototyping

Due date: 01/06/2016

Submission date: 15/07/2016

Deliverable leader: NCSRD

Editor: Eleni Trouva (NCSRD)

Dissemination Level

PU: Public
PP: Restricted to other programme participants (including the Commission Services)
RE: Restricted to a group specified by the consortium (including the Commission Services)
CO: Confidential, only for members of the consortium (including the Commission Services)

CHARISMA – D4.1 – Demonstrators design and prototyping Page 1 of 64

Executive Summary

This deliverable D4.1 “Demonstrators design and prototyping” is a technical report presenting the initial deployment of the intermediate CHARISMA demonstrators. The work on the demonstrators during the first year of the CHARISMA project has focused on building upon and demonstrating the key drivers of the

CHARISMA project: open access, virtualised security, and low latency. In this report, an early view on the deployed infrastructure is provided. In addition, the physical topologies of the demonstrators’ setups are given as well as the specifications of the software and hardware components used in two different pilot locations: Athens and Barcelona.

In particular, in chapter 3 we describe in detail the demonstrator defined to showcase the work on virtualised security. We demonstrate the deployment of virtual security functions (VSFs) within an

Intelligent Management Unit (IMU) placed near the eNodeB (first Converged Aggregation Level, CAL1) of an

LTE testbed. To demonstrate the use of the VSFs, we perform a Denial of Service (DoS) attack over the LTE network, originating from a rogue user equipment (UE), and showcase the mitigation of such an attack through the automated configuration of the VSFs. In chapter 4 we introduce the initial description of the low latency demonstrator that has been developed within CHARISMA. The deployed hardware devices

(TrustNode router, mm-wave high capacity link, and MoBcache distributed caching system) that comprise the infrastructure of the low latency demonstrator were chosen for providing physical access technologies with both low end-to-end latency and low access time latency. Based on the bus/tram use case (UC) previously defined in the deliverable D1.1, we demonstrate a video caching UC with low latency physical layer implementation based on 60-GHz wireless and GbE through various CHARISMA CALs, each with their own local intelligence and data processing functionalities. Finally, in chapter 5 we provide the details of the open access demonstrator. The main focus of the open access demonstrator is to showcase CHARISMA’s support for network slicing and multi-tenancy. The demonstrated scenario shows the creation of network slices that belong to different virtual network operators (VNOs) over a common FTTH infrastructure and the deployment of services on their assigned network slice.

In each chapter dedicated to a specific demonstrator, we also describe planned and possible steps for further experimentation, while at the end of this report, we provide the first draft plan for the integration of the work into the two final CHARISMA demonstrators (pilot and field-trial). More specific detail for these end-of-project demonstrators will be described in the subsequent WP4 deliverable D4.2, “Demonstrators infrastructure setup and validation”, which is due at M24 at the end of the second year of the CHARISMA project.


List of Contributors

Participant | Short Name | Contributor

National Centre for Scientific Research Demokritos | NCSRD | Eleni Trouva, Yanos Angelopoulos
COSMOTE Kinites Tilepikoinonies AE | COSMOTE | George Lyberopoulos, Eleni Theodoropoulou, Konstantinos Filis
ERICSSON | ERICSSON | Carolina Canales
Fundacio i2CAT | I2CAT | Shuaib Siddiqui, Amaia Legarrea, Eduard Escalona
University of Essex | UESSEX | Michael Parker, Geza Koczian, Stuart Walker
Heinrich Hertz Institute | HHI | Kai Habel
InnoRoute GmbH | InnoRoute | Marian Ulbricht
Ethernity | ETH | Eugene Zetserov
JCP Connect | JCP-C | Yaning Liu
Altice Labs | Altice Labs | Victor Marques
APFutura Internacional Soluciones SL | APFutura | Orio Riba
Intracom Telecom | ICOM | Dimitrios Kritharidis, Konstantinos Chartsias, Konstantinos Katsaros


Table of Contents

List of Contributors ........................................................ 3
1. Introduction .............................................................. 7
2. CHARISMA Demonstrators Definition ......................................... 9
3. Security Demonstrator .................................................... 11
3.1. Motivation and Scope ................................................... 11
3.2. Scenario Description ................................................... 12
3.3. Infrastructure Architecture ............................................ 16
3.3.1. Physical Level Architecture .......................................... 16
3.3.2. Logical Level Architecture ........................................... 18
3.4. Configuration and Deployment ........................................... 22
3.5. Future Work ............................................................ 25
4. Low Latency Demonstrator ................................................. 26
4.1. Motivation and Scope ................................................... 26
4.2. Scenario Description ................................................... 26
4.2.1. Scenario Story-line .................................................. 27
4.3. Infrastructure Architecture ............................................ 29
4.3.1. Physical Level Architecture .......................................... 29
4.3.2. Logical Level Architecture ........................................... 30
4.4. Configuration and Deployment ........................................... 31
4.4.1. Inventory list ....................................................... 31
4.4.2. Assembly and disassembly ............................................. 32
4.5. System / caching end-to-end latency .................................... 35
4.6. Future Work ............................................................ 36
5. Open Access Demonstrator ................................................. 38
5.1. Motivation and Scope ................................................... 38
5.2. Scenario Description ................................................... 39
5.3. Infrastructure Architecture ............................................ 42
5.3.1. Software Components .................................................. 42
5.3.2. Physical Level Components ............................................ 48
5.4. Configuration and Deployment ........................................... 53
5.5. Future Work ............................................................ 55
6. Strategy for Integration ................................................. 56
6.1. CHARISMA vision ........................................................ 56
6.2. Final CHARISMA demonstrators ........................................... 58
6.2.1. NCSRD demonstrator, Athens ........................................... 58
6.2.2. ApFutura demonstrator, Centelles ..................................... 59
7. Conclusions .............................................................. 61
References .................................................................. 62
Acronyms .................................................................... 63


List of Figures

Figure 1: Overview of the three CHARISMA intermediate demonstrators ...................................................... 10

Figure 2: Security demonstration setup ........................................................................................................... 12

Figure 3: Attacker performing a DoS attack ..................................................................................................... 13

Figure 4: Rate of received bytes on the network interface of the VNO’s application server during the DoS attack ....................................................................................................................................................... 14

Figure 5: Attacker UE performing a DoS attack and traffic routed in the NFVI-PoP ........................................ 15

Figure 6: Steps for attack neutralisation .......................................................................................................... 15

Figure 7: Rate of received bytes on the network interface of the VNO’s application server during the DoS attack mitigation ...................................................................................................................................... 16

Figure 8: VNO customer accessing the web application after the attack mitigation ....................................... 16

Figure 9: Physical view of the CHARISMA security demonstrator.................................................................... 17

Figure 10: Logical view of the CHARISMA security demonstrator ................................................................... 19

Figure 11: GTP encapsulation/decapsulation server and its configuration ..................................................... 22

Figure 12: SDN switch and its configuration .................................................................................................... 23

Figure 13: Networking configuration within the OpenStack deployment ....................................................... 24

Figure 14: Bus use case scenario ...................................................................................................................... 27

Figure 15: Bus use case scenario mapped to CHARISMA PHY architecture ..................................................... 29

Figure 16: Diagram of the logical architecture ................................................................................................. 30

Figure 17: Table top set-up of Low Latency intermediate demo ..................................................................... 33

Figure 18: Schematic of experimental point-to-point latencies in LL demo .................................................... 35

Figure 19: CHARISMA network slice and multi-tenancy concept ..................................................................... 39

Figure 20: High-level open access/multi-tenancy demonstrator scenario ...................................................... 40

Figure 21: Centelles headend topology ............................................................................................................ 41

Figure 22: CHARISMA O&M GUI ....................................................................................................................... 42

Figure 23: CHARISMA O&M InfP View.............................................................................................................. 43

Figure 24: CHARISMA O&M InfP View Service Provider tab ............................................................................ 43

Figure 25: CHARISMA O&M InfP View Virtual Slices tab .................................................................................. 44

Figure 26: CHARISMA O&M InfP View Network Slice Request Edit ................................................................. 44

Figure 27: CHARISMA O&M VNO Dashboard ................................................................................................... 45

Figure 28: CHARISMA O&M VNO view Virtual Slice Information ..................................................................... 45

Figure 29: CHARISMA O&M VNO view end-to-end service provisioning ......................................................... 46

Figure 30: QnQ VLAN tags ................................................................................................................................ 46

Figure 31: CHARISMA O&M VNO view end-to-end service test and VNF deployment ................................... 46

Figure 32: CHARISMA Server VNF deployment ................................................................................................ 47

Figure 33: OLT 1T1 3 D chassis schematic ........................................................................................................ 48

Figure 34: OLT services Usage Scenario ........................................................................................................... 50

Figure 35: GPON TG16G line board front view................................................................................................. 51

Figure 36: CXO160G Front panel ...................................................................................................................... 51

Figure 37: ONU (ONT-RGW) Overview ............................................................................................................ 53

Figure 38: Virtualised CHARISMA server .......................................................................................................... 53

Figure 39: N:1 MAC Bridge Services ................................................................................................................. 54

Figure 40: Schematic of integrated 5G CHARISMA pilot demonstrator network ............................................ 56

Figure 41: NCSRD demonstrator ...................................................................................................................... 59

Figure 42: Initial proposal for end-of-project ApFutura demonstrator ........................................................... 60

Figure 43: CHARISMA demonstrator deployment in APFUTURA network....................................................... 60


List of Tables

Table 1: Specification of physical devices used in the security demonstrator ................................................. 17

Table 2: Inventory list for LL demo ................................................................................................................... 31

Table 3: List of required connections for LL demo ........................................................................................... 33

Table 4: Tabulated expected and experimental round trip latencies of LL demo ........................................... 36

Table 5: Weight & Physical Dimensions ........................................................................................................... 48

Table 6: OLT Equipment features ..................................................................................................................... 49

Table 7: GPON TG16G line board Slot position in OLT shelf ............................................................................ 51

Table 8: CXO160G Slot position in OLT shelf .................................................................................................... 51


1. Introduction

One of the key measurable objectives of CHARISMA is the building of secure end-to-end pilot demonstrators that provide multi-tenant, multi-user, and virtualized open access infrastructures based on the CHARISMA low-latency and virtual security developments. These demonstrators will serve two purposes: (i) to test and validate enabling technologies for the novel CHARISMA concepts; and (ii) to showcase the low-latency, open access, and virtual security aspects of the project from both the academic/research and the industry perspectives.

This deliverable is the report arising out of WP4, the work package that showcases the CHARISMA technologies and technical solutions designed and developed in the parallel WPs 1-3, and that provides information on the end-to-end secure demonstrators and field trials. The validations will be based on the use cases and scenarios laid down in WP1 and already described in the earlier deliverable D1.1. Specific objectives for WP4 include:

- Laboratory prototype validation of the specified scenarios from WP1;
- Two small-scale test-bed validations of selected solutions in 5G laboratory setups;
- 5G field-trial pilots of the CHARISMA low-latency security converged cellular network architecture;
- Results evaluation of prototypes and pilot validators.

In particular, this deliverable D4.1 presents in detail the first phase of the activities associated with the definition and deployment of the CHARISMA demonstrators. During this first phase, in the first year of the project, the focus has been on the key drivers of the CHARISMA architecture: virtualised security, low latency and open access. An early view of the deployed infrastructure is also provided in this report. Specifically, the physical topology of the deployed components is given, as well as the specification of the software and hardware components used.

The aim of this deliverable is not only to present the technical progress of the project, but also to constitute a rough technical guide for the deployment of the CHARISMA demonstrators. It is therefore addressed to any members of the wider research/industrial community who wish to replicate (all or part of) the CHARISMA demonstrators in their own lab infrastructures. Specifically, during the first year of the project, three demonstrators were defined, showcasing the virtualized security, low latency and open access features of CHARISMA. For each demonstrator, the installation and configuration procedures are detailed, along with any hardware and software dependencies. As the project proceeds, the expectation is that, as each demonstrator reaches an acceptable maturity level, its components will be integrated into the final project demonstrators. In addition, experimentation and work on defining the testing procedures that can be used to validate the proper function and performance of the developed testbeds will also be undertaken.

The rest of this document is structured as follows: Chapter 2 provides an overview description of the three defined demonstrators. Chapters 3, 4 and 5 respectively describe in detail the three demonstrators that have been developed to specifically demonstrate progress in the three key technology areas: virtualized security, low latency and open access. In discussing the CHARISMA technology solutions for these technology areas, the demonstrators are also put into the relevant 5G context by describing the associated scenario story-line use case (as already outlined in the earlier deliverable D1.1), as well as specifying the physical and logical infrastructure, configuration and deployment of the system components. Future work and technology development of the three demonstrators to be addressed in the second year of the project are also elaborated upon in these chapters. Chapter 6 presents an initial draft view of the integrated pilots, briefly discussing the integration strategy by which the technology components of each of the three intermediate demonstrators will be brought together to create an overall CHARISMA architecture solution, and providing details on various aspects of such an integration. Finally, Chapter 7 concludes this deliverable D4.1 report.


2. CHARISMA Demonstrators Definition

For the definition of the most appropriate intermediate demonstrators at the end of the first year of the project, an assessment of CHARISMA’s main contributions to 5G networking was performed within WP4 in terms of technical added-value and technology readiness. Taking into account the infrastructure and devices being developed by the CHARISMA partners within the framework of the project’s implementation scheme, the analysis focused upon the selection of three demonstrators showcasing CHARISMA contributions to virtualized security, low latency, and open access.

Specifically, the following demonstrator scenarios were chosen aiming to emphasize and demonstrate the specific added-value characteristics of the CHARISMA architectural concept, and provide a suitable snapshot showcase of the progress of the project after the first year of research:

Security demonstrator

The security demo showcases the detection and neutralization of a DoS attack originating from a UE and performed over a 5G network, using the virtualised security functions (VSFs) that have been developed thus far.

Low Latency demonstrator

Development of physical access technologies for low-latency end-to-end transmission and low-latency access times for a video caching application. This demo demonstrates a hierarchical and integrated video caching use case, with additional low-latency physical layer implementation based on 60-GHz wireless and TrustNode routing, located at various CHARISMA Converged Aggregation Levels (CALs), each with their respective local intelligence and data processing functionalities.

Open Access demonstrator

Showcase of multi-tenancy in open access networks using the CHARISMA control, management and orchestration (CMO) plane being developed within the project. This demo showcases an open access scenario with network (virtualised) slicing isolation, multi-tenancy functionalities, and an active node at the Optical Line Termination (OLT) for distributed computing.


Figure 1: Overview of the three CHARISMA intermediate demonstrators

A brief diagrammatic overview of the three intermediate (i.e. end of year 1) demonstrators is given in Figure 1, while implementation and configuration details are provided in the following chapters of this deliverable. For the needs of the three scenarios, individual experimental topologies have been deployed at three different partner sites. The security demonstrator was co-ordinated by NCSRD and deployed in the infrastructure of the NCSRD lab, Athens, Greece. The low latency demonstrator was co-ordinated by the laboratory of the University of Essex, UK, integrating hardware devices provided by Essex, InnoRoute and JCPC. The open access demonstrator was co-ordinated by i2CAT, Spain, using infrastructure provided by ApFutura and Altice Labs and the CMO plane developed by i2CAT.


3. Security Demonstrator

3.1. Motivation and Scope

5G enables innovative scenarios and applications making use of ultra-high-speed, low-latency telecommunication networks for fixed and mobile users and machine-to-machine communications, as already supported by the CHARISMA architecture, e.g. as described in the use cases of deliverable D1.1. These scenarios, together with the introduction of the new paradigm for computing and network infrastructure, which decouples the actual control functionality from the underlying hardware and middleware functions (cloud computing, virtualization, Software Defined Networking, etc.), further reinforce the need for automated management and control of the 5G telecommunications infrastructure.

In particular, since a cloud-based paradigm promotes the ability for infrastructure to be highly accessible and shared by multiple users (i.e. Virtual Network Operators, VNOs), the importance of ensuring highly secure networking is becoming ever more relevant. It is of utmost importance to be able to provide robust, flexible and proactive mechanisms to detect and prevent security issues, and to be able to do so in real time and in an automated fashion.

We cannot foresee all the new and ever-changing threats that 5G networks will have to protect against, but we do have the tools to create a robust, flexible and autonomous network management solution that can cope with the constantly changing security environment: fed with insights from policy-governed real-time analytics systems on the one hand, and actuating network resources in order to minimize or prevent the effects of detected threats in real time on the other. CHARISMA is therefore proposing a real-time, automated Security Management Framework for 5G telecommunications networking, implementing a continuous, closed-loop, real-time environment inspection regime based on analytics, policy-based decisions, and actuation/enforcement via cloud and SDN orchestration procedures. In particular, the virtualized nature of 5G networking itself allows the automated instantiation, deployment, configuration and management of Virtual Security Functions (VSFs) in real time, with a centralized orchestration approach. In CHARISMA, the introduction of Converged Aggregation Levels (CALs) enables the de-centralization of network intelligence, which also contributes to early detection and neutralization of attacks, placing the diagnostic and neutralizing systems as close as possible to the malicious entities originating the attack, and preventing hostile traffic from entering the network backhaul.
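The detect-decide-actuate loop described above can be sketched in a few lines. The sketch below is purely illustrative: the class and function names, the byte-rate metric and the 50 MB/s threshold are all assumptions made for the example, not CHARISMA's actual analytics or policy interfaces.

```python
from dataclasses import dataclass

# Illustrative closed loop: analytics feed measurements to a policy
# engine, whose decisions are enforced through the orchestrator.
# All names and thresholds here are hypothetical.

@dataclass
class Measurement:
    source_ip: str
    rx_bytes_per_sec: float

def policy_decision(m: Measurement, threshold: float = 50e6) -> str:
    """Policy step: flag a source once its traffic rate exceeds the threshold."""
    return "block" if m.rx_bytes_per_sec > threshold else "allow"

def control_loop(measurements, enforce) -> None:
    """Actuation step: enforce() stands in for pushing a rule to the
    virtual firewall via the SDN/NFV orchestration layer."""
    for m in measurements:
        if policy_decision(m) == "block":
            enforce(m.source_ip)

blocked = []
control_loop(
    [Measurement("10.0.0.5", 120e6), Measurement("10.0.0.7", 1e6)],
    blocked.append,
)
print(blocked)  # → ['10.0.0.5']: only the flooding source is selected
```

In a deployed system the measurements would arrive continuously from the analytics pipeline and enforcement would be idempotent, but the loop structure is the same.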

Our focus within the security demonstrator is the setup of a mobile DoS attack and its mitigation through a Software Defined Network (SDN) architecture. We make use of emerging technologies such as SDN and NFV to demonstrate how security threats can be addressed in future mobile networks. This work serves as a first step towards the implementation of an automated security management framework for 5G networks. Through our work, a number of key innovations have been achieved:

- Placement of security functions in the access network, as opposed to the current implementation of LTE networks, where security systems are located in the EPC. For attacks originating from mobile users, this placement decision results in early identification and mitigation of the attack, thereby preventing malicious traffic from entering the backhaul or core network.

- Use of Virtualised Security Functions (VSFs) for the identification and mitigation of a DoS attack over an LTE infrastructure. NFV provides a number of advantages to mobile operators, such as significant cost reduction, optimised resource allocation and provisioning, flexible and elastic resource management, as well as energy efficiency. The VSFs developed for the purposes of this CHARISMA demo are: i) an Intrusion Detection System VNF (vIDS), configured to be off-path to avoid introducing latency due to processing; and ii) a Firewall VNF (vFW), configured to be in-line to perform actions on the passing traffic.

- Automated security policy enforcement in case of threatening events, such as the identification of a DoS attack, based on network programmability. SDN provides the means towards policy-based automation for network devices, which can serve well to achieve adaptive security. In our demo, we use the implemented virtualised firewall (vFW) to selectively block malicious traffic originating from a specific UE based on OpenFlow rules, while still allowing normal traffic flows coming from other devices.
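As a concrete illustration of such a per-UE rule, the snippet below builds an OpenFlow match in the textual syntax accepted by Open vSwitch's ovs-ofctl tool: the rule drops IP traffic from a single source address while leaving all other flows untouched. The helper name, bridge name, address and priority are hypothetical; the actual rules installed by the CHARISMA vFW are not reproduced in this report.

```python
def drop_rule_for_ue(ue_ip: str, priority: int = 100) -> str:
    """Build an ovs-ofctl style flow spec that drops IP traffic from one
    source address. Lower-priority default rules continue to forward all
    other traffic. (Hypothetical helper for illustration.)"""
    return f"priority={priority},ip,nw_src={ue_ip},actions=drop"

print(drop_rule_for_ue("10.0.0.5"))
# priority=100,ip,nw_src=10.0.0.5,actions=drop
#
# With Open vSwitch, such a rule could be installed on the vFW's bridge as:
#   ovs-ofctl add-flow br0 "priority=100,ip,nw_src=10.0.0.5,actions=drop"
```

Because the match keys on the attacker's source address only, legitimate UEs attached to the same eNodeB are unaffected, which is exactly the selective-blocking behaviour described above.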

3.2. Scenario Description

Amongst CHARISMA’s various objectives is to provide a 5G open access network architecture that allows the slicing of the virtualised resources to different Virtual Network Operators (VNOs) over the same common physical infrastructure. The security aspects of the project focus on allowing the VNOs to provide their services securely over this virtualised open access network. In the scenario for demonstrating these security features of CHARISMA, we assume that a Virtual Network Operator (VNO) purchases a slice over the shared infrastructure and provides services to its customers. A cyber-criminal exploits a vulnerability found in the mobile device of one of the VNO's customers in order to perform a Denial of Service (DoS) attack on the VNO's services, with the intention of making the service unavailable to customers.

The attack originates from an exploited customer’s UE and targets an end-user application that belongs to the VNO. We demonstrate the detection and neutralisation of a DoS attack performed over an LTE network, such as that of CHARISMA, through the use of Virtual Security Functions (VSFs).

Figure 2: Security demonstration setup


Figure 2 illustrates the setup of our security demonstration. We have set up an LTE testbed consisting of an EPC and an eNodeB connected to the EPC. The VNO’s customers can connect through their User Equipment (UE) to the mobile network and access services offered by the VNO, or access any service on the Internet. As shown in the figure, the UEs that connect to the eNodeB might be laptop computers or mobile phones with LTE connectivity. To simulate the services offered by VNOs to customers, we have set up a web server connected to the EPC, running a web application accessible to the VNO’s customers.

Denial of Service (DoS) attack

The following figure (Figure 3) shows a case in which the UE of one of the VNO’s customers is hijacked (exploited) and performs a DoS attack on the VNO application server. We refer to the exploited UE of the VNO customer as the “attacker”. The attacker overwhelms the VNO application server by sending a large number of requests. During such a flood of messages, the processing of messages arriving at the application server consumes the entire server’s CPU and memory, which eventually forces the server to crash.

Figure 3: Attacker performing a DoS attack

As illustrated in Figure 3, during the attack the other VNO customers are unable to use the application server that is under attack. The rate of received bytes on the network interface of the application server increases dramatically, and the application server can no longer process legitimate connection requests.

Figure 4 shows the rate of received bytes on the network interface of the VNO’s application server during the DoS attack, as visualized by Speedometer, a Unix network traffic monitoring tool.


Figure 4: Rate of received bytes on the network interface of the VNO’s application server during the DoS attack

Attack identification and mitigation

As we have already described in deliverable D1.1 [1], the CHARISMA architecture is designed to be hierarchical, with a set of intelligent aggregation nodes located between the CO and the end users. Each node is labeled a Converged Aggregation Level (CAL) and is designated with a number to signify its level in the hierarchy. Each active node (i.e. CAL) has its own scalable intelligent management unit (IMU) performing data storage/caching, security and routing functionalities. In this demo we showcase the use of an IMU running security functions at CAL1. The security functions are placed near the eNodeB (CAL1) so as to be located as close as possible to the malicious devices and prevent hostile traffic from entering the network backhaul. The purpose of these security functions is to identify and mitigate the security threats originating from the customers’ UEs.

In order to provide security, CHARISMA implements two VSFs: a) a virtualised Intrusion Detection System (IDS) equipped with advanced traffic analysis and monitoring capabilities for attack detection; and b) a virtualised firewall (vFW) able to filter the passing traffic based on a predetermined set of security rules.

To enable the deployed VSFs and provide security, we intercept the packets flowing between the EPC and the eNB servers and forward them to the cloud infrastructure, where we have set up a Network Functions Virtualisation Infrastructure Point of Presence (NFVI-PoP) in which the two VSFs, the vIDS and the vFW, are deployed. However, packets flowing in LTE networks are encapsulated using the GPRS Tunnelling Protocol (GTP), while the implemented security functions can only process IP packets. For this reason, we insert between the EPC and the eNB an extra node whose purpose is to intercept the packets flowing in the GTP tunnel and strip their GTP headers. As depicted in Figure 5, we refer to this machine as the GTP encapsulation/decapsulation server. From there, the IP traffic is forwarded to the security functions deployed in the cloud. Specifically, using traffic engineering, packets are directed to pass through the deployed firewall; in parallel, traffic is mirrored towards the IDS for analysis. For the traffic that is not blocked by the firewall rules, the reverse process (GTP encapsulation) has to be followed to re-route the legitimate traffic passing the firewall back into the GTP tunnel. Again, the GTP encapsulation/decapsulation server is responsible for this procedure.


Figure 5: Attacker UE performing a DoS attack and traffic routed in the NFVI-PoP

The following figure (Figure 6) shows the three steps that take place for attack mitigation. First, the IDS VSF, based on threat signatures, identifies that a DoS attack is taking place towards the VNO application server. Following the attack identification, the firewall VSF deployed near the eNodeB is configured appropriately. Finally, once the appropriate rules are enforced on the firewall, the attacker’s traffic is blocked and the attack neutralised.

Figure 6: Steps for attack neutralisation

Figure 7 shows the rate of received bytes on the network interface of the VNO’s application server during the DoS attack neutralisation. As illustrated, the application server receives a traffic spike at the moment the attacker’s packets reach its network interface. Within seconds (< 2 seconds), the vIDS identifies the attack and sends the vFW the appropriate configuration to block the traffic originating from the specific VNO customer. When the new firewall rules are enforced, the rate of received bytes drops immediately to normal levels. During this demonstration no other application is using the specific network interface of the application server, which is why the rate of received bytes drops to zero.


Figure 7: Rate of received bytes on the network interface of the VNO’s application server during the DoS attack mitigation

After the mitigation of the threat, we also access the web application provided by the VNO’s application server through a second UE connected to the eNB. As shown in Figure 8, traffic originating from this second UE is allowed to pass through the firewall, reach the web server and retrieve the requested information.

Figure 8: VNO customer accessing the web application after the attack mitigation

3.3. Infrastructure Architecture

3.3.1. Physical Level Architecture

Figure 9 illustrates the physical architecture and topology of the CHARISMA security demonstration. The LTE testbed comprises two servers: one for the EPC, and a second for the eNodeB. The eNodeB features an external RF interface (USRP B210), allowing the connection of LTE User Equipment. For the needs of our security demonstration we have used two different UE devices: a laptop as the attacker UE, and a mobile phone serving as the UE of a second VNO customer. The EPC server is connected to a router that provides access to the Internet and to the VNO application server. For capturing the packets flowing between the EPC and the eNodeB, we have connected between them a server that performs GTP encapsulation/decapsulation. This server is connected to an SDN switch that has access to the cloud infrastructure, allowing the forwarding of traffic to the deployed security functions.

Figure 9: Physical view of the CHARISMA security demonstrator

The specifications of the physical devices used are enumerated in the table below.

Table 1: Specification of physical devices used in the security demonstrator

| ID | Role | Vendor | CPU Model | CPU cores | RAM | Storage | Other Features |
|----|------|--------|-----------|-----------|-----|---------|----------------|
| 1 | EPC | HP | Intel i5-2400 | 4 x 3.1 GHz | 16 GB | 500 GB | |
| 2 | eNodeB | HP | Intel i7-4790 | 8 x 3.6 GHz | 8 GB | 500 GB | USRP B210 external RF interface |
| 3 | User Equipment A (laptop) | Toshiba | Intel i5-4200M | 4 x 2.5 GHz | 4 GB | 360 GB | |
| 4 | User Equipment B (mobile phone) | Samsung Galaxy S4 | Exynos 5 Octa 5410 | 4 x 1.6 GHz & 4 x 1.2 GHz | 2 GB | 16 GB | |
| 5 | VNO application server | DELL | Intel i7-2600 | 4 x 3.8 GHz | 16 GB | 320 GB | |
| 6 | Cloud infrastructure | Turbo-X | Intel Core i5-2500K | 4 x 3.30 GHz | 32 GB | 1 TB | |
| 7 | GTP encapsulation/decapsulation server | Turbo-X | Intel i5-4460 | 4 x 3.2 GHz | 16 GB | 1 TB | 4 network interface cards |
| 8 | SDN switch | Shuttle | AMD Sempron 2800+ | 1 | 1.5 GB | 80 GB | 3 network interface cards |
| 9 | Router | Cisco | - | - | - | - | RV-180 |
| 10 | 4G LTE USB adapter | HUAWEI | - | - | - | - | E3372h |

3.3.2. Logical Level Architecture

Figure 10 provides a logical view of the security demonstrator setup with the software components that were installed. The logical level architecture comprises:

1. The LTE testbed, with the EPC and eNodeB servers running OpenAirInterface (OAI) software.

2. The attacker UE (laptop), running BoNeSi software to simulate the DoS attack. The laptop is connected to the LTE network through a Huawei 4G LTE USB adapter.

3. The GTP encapsulation/decapsulation server, an Ubuntu 14.04 server running GTP encapsulation and decapsulation software based on the packet-handling library PF_RING.

4. The SDN switch providing access to the cloud infrastructure, implemented using Open vSwitch software.

5. The cloud infrastructure, an Ubuntu 14.04 server running the OpenStack cloud platform in an “all-in-one” deployment scheme.

6. The IDS VSF, an OpenStack VM with Ubuntu 14.04 running Snort, Barnyard2 and Snorby.

7. The firewall VSF, an OpenStack VM with Ubuntu 14.04 running Open vSwitch.

8. The VNO application server, an Ubuntu 14.04 server running the lighttpd web server. A web application was created to test accessibility during DoS attacks.

The following sections provide a more detailed description of the software components that were installed and deployed for the purposes of the security demonstration.


Figure 10: Logical view of the CHARISMA security demonstrator

3.3.2.1. Open Air Interface (OAI) software

EURECOM has created the OpenAirInterface (OAI) Software Alliance (OSA) [3], which aims to provide an open-source ecosystem for the core (EPC) and access-network (EUTRAN) protocols of 3GPP cellular systems with the possibility of interoperating with closed-source equipment in either portion of the network. OAI provides a standard-compliant implementation of a subset of Release 10 LTE for UE, eNB, MME, HSS, SGw and PGw on standard Linux-based computing equipment (Intel x86 PC/ARM architectures), distributed under an Apache v2.0 license.

3.3.2.2. GTP encapsulation and decapsulation software

The traffic exchanged between the EPC and the eNB is encapsulated using the GPRS Tunnelling Protocol (GTP) for reasons of multiplexing and scalability. The test case under consideration requires the user traffic to be analysed from a security perspective. The analysis of the traffic is performed in an NFV-enabled environment, in order to exploit the flexibility and scalability provided by the CHARISMA architecture. The VNFs require the traffic to be stripped of its GTP headers and forwarded to them as plain IP packets. NCSRD has implemented GTP encapsulation and decapsulation software running on top of the widely used packet-processing library PF_RING [4].

The software runs as an intermediate node between the EPC and the eNB, forwarding traffic both ways and preserving the connectivity of the two nodes. The user traffic is filtered from the rest of the control traffic, the filtered packets are stripped of their GTP header, and they are forwarded directly to the security VNFs for analysis or further filtering. The analysed packets that re-enter the flow are re-encapsulated with the correct GTP header and forwarded along their original path.

For multiplexing, the GTP protocol uses a tunnel identifier for each direction of a flow. The challenging part is to maintain the GTP header as the packets flow by, and to re-encapsulate them as they come back from the VNF security part of the CHARISMA architecture. As the decapsulation and re-encapsulation operations can introduce a performance penalty, they are performed in parallel with the rest of the processing. The control and other traffic are forwarded using the zero-copy PF_RING library, and only the filtered packets are copied to memory, as they need to be modified. The storage of GTP headers for re-encapsulation is considered insignificant, as the mandatory GTP header occupies merely 8 bytes of memory.
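The store-and-reattach logic can be illustrated with a minimal Python sketch (the actual implementation is written in C on top of PF_RING; this model covers only the 8-byte mandatory GTPv1-U header and ignores optional extension fields):

```python
import struct

GTP_HDR_LEN = 8  # mandatory GTPv1-U header size, as noted above

def decapsulate(gtp_payload: bytes):
    """Split a GTP-U UDP payload into its 8-byte header (saved so the
    packet can rejoin the tunnel later) and the inner IP packet."""
    flags, msg_type, length, teid = struct.unpack(
        "!BBHI", gtp_payload[:GTP_HDR_LEN])
    if (flags >> 5) != 1 or msg_type != 0xFF:   # expect GTPv1, G-PDU
        raise ValueError("not a GTPv1-U G-PDU")
    return gtp_payload[:GTP_HDR_LEN], gtp_payload[GTP_HDR_LEN:]

def reencapsulate(saved_header: bytes, inner_ip: bytes) -> bytes:
    """Re-attach the stored header so the packet re-enters the GTP tunnel."""
    return saved_header + inner_ip

# Round trip on a synthetic G-PDU carrying a 4-byte dummy inner packet
hdr = struct.pack("!BBHI", 0x30, 0xFF, 4, 0x12345678)
saved, inner = decapsulate(hdr + b"\x45\x00\x00\x04")
assert reencapsulate(saved, inner) == hdr + b"\x45\x00\x00\x04"
```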

3.3.2.3. BoNeSi traffic generator

We have used BoNeSi [5] software for generating traffic that resembles a Denial of Service attack. BoNeSi is an open source network traffic generator for different protocol types. The attributes of the created packets and connections can be controlled by several parameters, such as send rate or payload size, or they are determined randomly. BoNeSi spoofs the source IP addresses even when generating TCP traffic, and it therefore also includes a simple TCP-stack to handle TCP connections in promiscuous mode. BoNeSi generates ICMP, UDP and TCP (HTTP) flooding attacks from a defined botnet size (different IP addresses), and is highly configurable, such that rates, data volume, source IP addresses, URLs and other parameters can all be configured. BoNeSi software is licensed under the Apache License v2.0.

3.3.2.4. OpenStack Cloud Platform

For the implementation of the NFVI-PoP, the OpenStack cloud computing software platform [6] was selected and installed on a server, following an “all-in-one” deployment scheme. OpenStack has a modular design that enables integration with legacy and third-party technologies. An overview of OpenStack has already been given in CHARISMA deliverable D3.1 [2]. For the needs of the security demo, the following OpenStack components have been installed: Nova, Neutron, Glance, Cinder, Horizon and Keystone. In addition, the RabbitMQ open-source message broker was installed for communication between the OpenStack components, and a MariaDB server as the database backend for the OpenStack services.

The following subsections briefly discuss the most important OpenStack components that are of high interest for the CHARISMA security demonstrator, namely the Nova and Neutron components, which have been successfully deployed and tested.

OpenStack Nova

The primary component of the cloud operating environment, and the one of highest importance for VNF deployment on the CHARISMA prototype platform, is the Nova compute service. Nova orchestrates the creation and deletion of VM instances, which are used as carriers/enablers for VNF image deployment.

In the CHARISMA security testbed, the following components of Nova are used:

 The nova-api service accepts and responds to end-user compute API calls. It also initiates most of the orchestration activities (such as running an instance) and enforces some policies.

 The nova-compute process is primarily a worker daemon that creates and terminates VM instances via hypervisor APIs. In CHARISMA, the KVM hypervisor is used.

 The nova-scheduler process keeps a queue of VM instance requests and, for each request, determines where (specifically, on which compute node) the VM instance should run.

OpenStack Neutron

OpenStack Neutron is the OpenStack module focused on delivering Networking-as-a-Service (NaaS). Neutron makes it easier to deliver NaaS in the cloud and provides REST APIs to manage network connections for the resources managed by other OpenStack services.


Neutron provides native multi-tenancy support (isolation, abstraction and full control over virtual networks), letting tenants create multiple private networks and control the IP addressing on them, and exposes vendor-specific network virtualisation and SDN technologies.

The core Neutron API to be used in CHARISMA includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. It is based on a simple model of virtual network, subnet and port abstractions to describe networking resources. A network is an isolated Layer 2 segment, analogous to a VLAN in the physical networking world; more specifically, it is a broadcast domain reserved for the tenant that created it, unless it is explicitly configured as shared.

3.3.2.5. Intrusion Detection System VSF (vIDS)

The Intrusion Detection System VSF is implemented using Snort [7], accompanied by other software systems, namely Barnyard2 [8], PulledPork [9] and Snorby [10]. Snort is an open-source intrusion detection system capable of performing real-time traffic analysis and packet logging on IP networks. Barnyard2 is a software tool that takes Snort output and writes it to an SQL database, in order to reduce load on the system. PulledPork automatically downloads the latest Snort rules (threat signatures). Snorby is a web-based graphical interface for viewing and clearing events logged by Snort. A detailed overview of Snort has already been provided in D3.2 [16].

3.3.2.6. Firewall VSF (vFW)

The Firewall VSF is implemented using Open vSwitch (OVS) [11] software on an Ubuntu 14.04 operating system. Open vSwitch is a multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols, among others, OpenFlow for SDN programmability.

Open vSwitch is well suited to function as a virtual switch in VM environments. In addition to exposing standard control and visibility interfaces to the virtual networking layer, it has been designed to support distribution across multiple physical servers.

The current release of Open vSwitch supports the following features:

Standard 802.1Q VLAN model with trunk and access ports;

NIC bonding with or without LACP on upstream switch;

NetFlow, sFlow(R), and mirroring for increased visibility;

QoS (Quality of Service) configuration, plus policing;

Geneve, GRE, GRE over IPSEC, VXLAN, and LISP tunnelling;

802.1ag connectivity fault management;

OpenFlow 1.0 plus numerous extensions;

Transactional configuration database with C and Python bindings;

High-performance forwarding using a Linux kernel module.

The step-by-step implementation and detailed description of the vFW is provided in D3.2 [16].

3.3.2.7. SDN switch

We have implemented an SDN switch using OVS running on an Ubuntu 14.04 machine. An overview of OVS was provided in the previous section.


3.3.2.8. lighttpd web server

lighttpd [12] is a secure, fast, compliant, and very flexible web server that has been optimised for high-performance environments. It has a very low memory footprint compared to other web servers, and manages CPU load effectively. Its advanced feature set (FastCGI, SCGI, Auth, Output-Compression, URL-Rewriting, and others) makes lighttpd an ideal web server for any server that suffers load problems. Since our security demo scenario involves a DoS attack towards a web server, we have chosen lighttpd for its ability to serve pages quickly under load, and for its straightforward setup. lighttpd is open source under the revised BSD license.

3.4. Configuration and Deployment

GTP Encapsulation/Decapsulation Server

Figure 11: GTP encapsulation/decapsulation server and its configuration

As depicted above, the GTP encapsulation/decapsulation server contains 4 network interface cards. The role of this component is to forward incoming traffic from the eNodeB towards the SDN switch, and incoming traffic from the SDN switch towards the EPC. In order for the traffic forwarded to the SDN switch to be readable, the GTP header must be removed from each packet before sending; for the packet to be readable by the EPC, the removed GTP header must be reattached. These actions are implemented on the Ubuntu 14.04 operating system in the C programming language, in combination with the PF_RING library.

PF_RING polls packets from NICs by means of Linux NAPI (the extension to the device-driver packet-processing framework offering higher speed). This means that NAPI copies packets from the NIC to the PF_RING circular buffer, and the userland application then reads packets from the ring. In this scenario there are two pollers, the application and NAPI, which costs CPU cycles; the advantage is that PF_RING can distribute incoming packets to multiple rings (hence multiple applications) simultaneously.

In order to achieve even higher packet capture and transmission speed, the developed application makes use of Linux hugepages. Linux typically uses memory pages of 4 KBytes, but provides an explicit interface to allocate pages of bigger size, called hugepages. Amongst the advantages is the reservation of large amounts of physical memory for allocations that would otherwise fail (especially when contiguous memory is required). Another benefit is reduced overhead: the TLB (Translation Lookaside Buffer) contains per-page virtual-to-physical address mappings, so using a large amount of memory with the default page size leads to processing overhead in managing the TLB entries.

As packets arrive from the eNodeB via the eth3 interface, the GTP header is removed and the destination MAC address is changed from the EPC interface MAC to the SDN switch interface MAC, so that the packet can be forwarded to the virtual security components through the eth0 interface. If the packet is not blocked by the firewall (vFW), it returns to the GTP encapsulation/decapsulation server through the eth1 interface, where it must regain its initial form: the destination MAC address is changed to the EPC interface MAC address, and the GTP header is added back to the packet. The exact reverse process is applied to packets sent from the EPC to the eNodeB.

SDN Switch

Figure 12: SDN switch and its configuration

The SDN switch acts as the mediator between the GTP encapsulation/decapsulation server and the OpenStack cloud infrastructure. It consists of a server with 3 network interface cards running Ubuntu 14.04 and Open vSwitch software. A bridge is created and each interface is added to it as a port; in this way, traffic entering the switch can be manipulated using OpenFlow rules. Packets entering the switch from either interface connected to the GTP encapsulation/decapsulation server (eth0 or eth2) are directed to the OpenStack cloud infrastructure via the eth3 interface by adding the appropriate OpenFlow rule. Modification of the destination MAC address is also required, to match the MAC address of the OpenStack Neutron router to which the security virtual machines are connected. In the other direction, packets coming from the OpenStack interface (eth3) are directed to the interface leading to their destination IP (dest_ip:EPC -> eth2, dest_ip:eNB -> eth0), and MAC address modification is applied accordingly.

Additionally, configuration automation scripts have been developed to easily redeploy the system, requiring only the MAC addresses of the external components as input.
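The steering rules described above can be sketched as follows; the OpenFlow port numbers, IP addresses and MAC addresses are placeholders for the values discovered at deployment time:

```python
def steering_flows(of_ports, neutron_mac, epc_ip, enb_ip, epc_mac, enb_mac):
    """Build the OpenFlow rules described above: traffic arriving from
    either GTP-server-facing port is sent to the cloud with the Neutron
    router MAC as destination; returning traffic is split by destination
    IP with the corresponding MAC rewritten."""
    to_cloud = [
        f"in_port={of_ports[p]},"
        f"actions=mod_dl_dst:{neutron_mac},output:{of_ports['eth3']}"
        for p in ("eth0", "eth2")
    ]
    from_cloud = [
        f"in_port={of_ports['eth3']},ip,nw_dst={epc_ip},"
        f"actions=mod_dl_dst:{epc_mac},output:{of_ports['eth2']}",
        f"in_port={of_ports['eth3']},ip,nw_dst={enb_ip},"
        f"actions=mod_dl_dst:{enb_mac},output:{of_ports['eth0']}",
    ]
    return to_cloud + from_cloud

# Placeholder port numbers and addresses, for illustration only
flows = steering_flows({"eth0": 1, "eth2": 2, "eth3": 3},
                       "fa:16:3e:00:00:01", "10.0.1.1", "10.0.1.2",
                       "52:54:00:aa:bb:01", "52:54:00:aa:bb:02")
for f in flows:
    print(f"ovs-ofctl add-flow br0 '{f}'")
```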


Cloud Infrastructure

Figure 13: Networking configuration within the OpenStack deployment

The cloud infrastructure consists of a server with one network interface card running Ubuntu 14.04, on top of which the OpenStack Liberty release is deployed in an all-in-one scheme. Having instantiated the required vFW and vIDS virtual security functions on OpenStack VMs, traffic must be directed to pass through the firewall so that security rules can be enforced, allowing or blocking traffic; traffic with the eNodeB’s source IP must also be mirrored to the IDS, so that potential attacks can be detected. To achieve this, OpenFlow rules are applied on the Open vSwitch instances that implement OpenStack networking: all traffic arriving at the br-ex bridge from the external physical interface is sent directly to the br-int bridge via the phy-br-ex/int-br-ex patch-port connection. Packets arriving at the br-int bridge from the int-br-ex port are then sent both to the vFW (Firewall LAN tap) and vIDS (IDS tap) ports connected to the same bridge. Packets that are allowed to pass through the firewall VM enter br-int from the Firewall WAN tap port and are directed to br-ex via the int-br-ex port. Finally, packets arriving at br-ex from the phy-br-ex port are forwarded to the physical external interface. The networking configuration in the OpenStack deployment is illustrated in Figure 13.
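The tap-out step can be pictured as a single br-int rule with two output actions; the port numbers below are placeholders for the actual patch- and tap-port numbers:

```python
def tap_out_flow(in_port: int, fw_lan_tap: int, ids_tap: int) -> str:
    """br-int rule sketched from the description above: every packet
    arriving from the int-br-ex patch port is copied to both the
    firewall LAN tap and the IDS tap."""
    return f"in_port={in_port},actions=output:{fw_lan_tap},output:{ids_tap}"

print(f"ovs-ofctl add-flow br-int '{tap_out_flow(1, 2, 3)}'")
```

Listing two output actions in one OpenFlow rule makes the switch duplicate the packet, which is how the firewall sees the original while the IDS receives a mirror copy.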

vFW and vIDS

Both the firewall and IDS virtual machines run the Ubuntu 14.04 operating system. The vFW is implemented with Open vSwitch and the vIDS using the Snort platform. Snort rules may consist of one or many conditions; whenever the conditions of a rule are found to be true, Snort records an event for that rule. The rule defined in the current demonstration must detect more than 190 requests per second coming from the same source IP in order to record an event. Snort saves the events in the unified2 file format.
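A rate-based rule of this kind can be expressed with Snort's detection_filter option; the fragment below is illustrative (the SID, destination port and TCP-flag match are assumptions, not the exact rule used in the demo):

```
alert tcp any any -> $HOME_NET 80 (msg:"Possible DoS flood"; flags:S; \
    detection_filter: track by_src, count 190, seconds 1; \
    sid:1000001; rev:1;)
```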

When such an event is detected by the vIDS, it must be communicated to the vFW along with the IP address that caused it. For this purpose, a Python web application was developed that transforms the Snort logs from unified2 to JSON format; whenever the JSON file is modified, it parses the file and, if there are new events, sends them via a POST request to a RESTful web application running in the vFW VM. The web application in the vFW accepts requests containing JSON data that carries information about rules to be enforced, allowing or blocking traffic for specific IP addresses.
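A minimal sketch of the vIDS-side translation step might look as follows; the JSON field names and the endpoint path in the comment are illustrative assumptions, since the actual schema is not specified in this report:

```python
import json

def event_to_rule_payload(event: dict) -> bytes:
    """Translate a parsed Snort event into the JSON body POSTed to the
    vFW's REST endpoint. Field names are illustrative placeholders."""
    return json.dumps({
        "action": "block",
        "src_ip": event["source_ip"],
        "reason": event.get("signature", "dos-detected"),
    }).encode()

body = event_to_rule_payload(
    {"source_ip": "192.168.5.23", "signature": "Possible DoS flood"})
print(body.decode())
# The POST itself could then be sent with the standard library, e.g.:
# urllib.request.urlopen(urllib.request.Request(
#     "http://<vfw-address>/rules", data=body,
#     headers={"Content-Type": "application/json"}))
```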

3.5. Future Work

The next steps for improving the CHARISMA security demonstrator involve the management of the implemented virtual network functions through the CHARISMA Control, Management and Orchestration (CMO) platform developed within WP3. As a result, this work has primary dependencies on tasks 3.2 and 3.3, with regard to the implementation of the key components of the CMO platform, such as the Service Orchestration, Monitoring and Analytics, Security Policy Mapper and Security Controller. For the continuation of this work, our efforts will focus on the following activities:

a) Successful deployment of the implemented VSFs (vFW, vIDS) through the selected orchestrator platform.

b) Aggregation of the events/notifications produced by the vFW and vIDS through the Monitoring and Analytics component.

c) Combined aggregation of the events/notifications produced by the vFW and vIDS and of generic network and system metrics (CPU/RAM utilization) through the Monitoring and Analytics component, to reinforce threat identification.

d) Offering of the vFW and vIDS VSFs through the CHARISMA GUI as security services available to VNOs, with selection and appropriate configuration provided as options.

e) Generation of a security policy description that includes the VSFs selected by each VNO and their desired configuration.

f) Configuration of the VSFs by the Security Policy Controller as an immediate reaction to threatening events and conditions, thereby increasing the automation of security management.

g) Measurement of the time required to identify and react to an attack.

Another aspect of future work will be to investigate the feasibility of moving the GTP encapsulation and decapsulation procedures into the cloud infrastructure, by implementing these functions as another CHARISMA VNF.

Finally, three other directions towards 5G security that we are currently considering exploring are: i) attack identification through SDN log analysis; ii) attack amplification through service elasticity; and iii) Denial of Service attacks towards infrastructure services such as the Service Orchestrator.

Ideally, the security-related services and scenarios deployed in the context of the CHARISMA project will also be demonstrated and validated over COSMOTE’s live commercial LTE network, appropriately adapted to serve the needs of the CHARISMA experiments. In order to avoid any impact on the commercial operation of the network and its subscribers, a fully functional commercial eNB (connected to COSMOTE’s core network) will be made available exclusively for CHARISMA traffic; that is, no traffic from COSMOTE’s subscribers will be served by this eNB. The integration of these security demonstrator technologies into the final CHARISMA pilot and field-trial demonstrators is discussed in chapter 6 of this deliverable.


4. Low Latency Demonstrator

4.1. Motivation and Scope

A key performance improvement of 5G networks over 4G is the data rate, which has to be many times higher than what 4G offers today [13]. A related and important KPI for 5G performance is bringing latency down to less than one millisecond, the motivation being that low latency has been identified as one of the key performance criteria influencing the successful deployment of future 5G networks [14].

There are two key aspects to latency: end-to-end latency and access-time latency. End-to-end or user-plane latency (the delay seen by an application in exchanging data with a server, commonly known as the PING delay) is crucial for live events and for remote surgery, for example. Access-time or control-plane latency, also known as call-setup latency, is the time taken for a User Equipment (UE) to transition to a state in which it can send and receive data.

Various technologies currently being used in CHARISMA, such as smart NICs, caching boxes and routers, will contribute to reducing latency. The low latency demonstrator aims to send data using the lowest common aggregation node in the Converged Aggregation Level (CAL) architecture.

4.2. Scenario Description

To show the CHARISMA low latency improvements in a realistic scenario, the Public Transport use case was selected, as it represents the CHARISMA low latency features in the most advantageous way. Public transport services such as buses or trams are generally operated along a regular route with a published timetable. This allows the provision of network services such as caching and routing to offer optimized Internet access on public transport, reducing resource usage and service latency.

In the bus use case scenario shown in Figure 14, we assume a number of users commuting on a bus. Each bus is equipped with a wireless/mobile cache device (MoBcache, MB) at the CAL0 gateway that:

 accesses the Internet by connecting to a WiFi AP located at the bus stop through the radio or WiFi network, or by connecting to an eNodeB BS (CAL2) through the LTE network;

 provides Internet access (WiFi) to the end users;

 enables caching functionality and provides content to end users through local caches.

At the bus stops (CAL1), a WiFi AP with cache functionality is deployed to provide Internet connections for the passing buses. The MBs either connect to the AP through WiFi when the bus is approaching the bus stop, or switch to an eNodeB through LTE when WiFi services are not available on the road between two bus stops. Traffic is intelligently offloaded between the WiFi and LTE networks to provide seamless handover and service continuity. Moreover, the MBs deployed on different buses automatically create an ad-hoc network (a “mobile CDN”), which ensures continuous availability of content even when disconnected, thereby providing end users with continuity of service.

Figure 14: Bus use case scenario

4.2.1. Scenario Story-line

A key feature of 5G networking is seamless connectivity as perceived by end-users while on the move. The other key 5G features exhibited by the CHARISMA architecture are low latency, multi-tenancy (open access), and security. These four aspects are key features of the use case scenario being demonstrated, both by the intermediate demos and by the final CHARISMA demonstrators. We have adopted the Automotive – Buses/Trams use case as the scenario exhibiting the key CHARISMA and 5G features that our architecture has been designed to support, for the following reasons:

1) Seamless connectivity:

There are various aspects to the seamless connectivity that the bus use case can demonstrate. For example, a passenger on a journey is watching a streamed video on his mobile device (tablet, smartphone, etc.). After he requests the video, its data is downloaded and cached locally at the CAL0 gateway of the bus he is travelling on. The traveller needs to change buses at an intermediate point of his journey. When the bus arrives at the intermediate stop where he has to change, he gets off the bus (while still watching his video), and the video caching source is transferred from the CAL0 of the bus to the CAL1 cache at the bus stop. This requires appropriate management and control by the CAL2 node, which is the lowest common aggregation level connecting all the various actors. Without the traveller noticing any interruption, the video therefore continues to stream at the bus stop while he is directly connected to the CAL1 micro BS and its own local cache. When his connecting bus arrives, he boards and takes his seat, his tablet/smartphone automatically connects to the CAL0 gateway of the new bus, the video is downloaded to the cache located at the CAL0 IMU, and he continues to watch without interruption. This seamless handover is again coordinated by the same CAL2 node as the previous handover.

2) Low latency

There are two aspects to low latency: i) end-to-end latency, e.g. for live video; and ii) access time. For the first, the traveller on the bus is watching, for example, a live football match being streamed over the Internet. In this situation, the hierarchical caching system of the CHARISMA architecture is not used; rather, the ultra-low latency aspects of the CHARISMA architecture (i.e. accelerated NIC cards and TrustNode routing) act together to ensure as low a delay as possible for the streamed video. Again, if the traveller needs to change buses at an intermediate location, the lowest common CAL point (e.g. CAL2) co-ordinates the handover from the CAL0 of the bus to the CAL1 of the bus stop, to enable a seamless video experience. For the low access time, the traveller may decide that he wants to watch a different video, and so requests his video of choice. In this case, the hierarchical caching system is designed to transfer popular videos to the lowest video cache storage locations. In addition, an intelligent caching system knows the traveller's usage profile and preferences, and can transfer content in advance to the CAL IMU caches nearest to the traveller, in anticipation of him exercising those preferences. Together, these measures minimise the access time to the video content.

3) Multi-tenancy

The CHARISMA architecture is optimised to allow multi-tenancy (open access) operation. For the Automotive – Bus/Tram scenario, this is relevant where multiple bus operators run their buses on the same public roads. In this case, the same physical infrastructure from CAL3 to CAL1 is shared by the different bus operators, while each operator has its own set of CAL0 gateways in its buses. The multi-tenancy operation of the CHARISMA network then allows seamless handover between the CAL0 gateways of different bus operators over the same (common lowest aggregation) CAL2 point, for the case where a traveller making a journey with bus operator A changes buses at an intermediate stop and subsequently boards a bus belonging to a different operator B. The appropriate network slicing, virtualisation and tenancy isolation (privacy, security, etc.) all need to be observed by the overall CHARISMA infrastructure.

4) Security

With regard to the security aspects of the Automotive – Buses/Trams use case scenario, specific security considerations include:

a) In-bus communications: The CHARISMA in-bus networking infrastructure can also be used as part of the in-bus monitoring and control system for the bus driver, e.g. to monitor the status of individual compartments (e.g. upstairs and downstairs on double-decker buses), exit and entrance doors (whether open or shut), the integrity of bus windows, CCTV cameras, etc. Such an in-bus signalling network needs to be safeguarded against intrusion, hijacking, data corruption (including jamming, interception and spoofing), and denial of service.

b) Bus-to-bus-station control centre (e.g. CAL2): There are two aspects to the bus-to-central-control signalling:

i) Communications with bus controllers at the bus station, and with other bus drivers (e.g. on different buses travelling along the same route, or in the vicinity). Here, the integrity and reliability of the communications between bus controllers and other bus drivers mean that low latency is also critical for time-critical signalling and safety alerts.

ii) Data (Internet) communications between bus and base stations: this relates to the use of the bus-to-street communication links by the bus passengers for their own private data consumption (e.g. downloading video content, emailing, web surfing, telephone/Skype conversations, etc.). To offer the required QoS/QoE with appropriate privacy and integrity, the link needs to be secure against intrusion, hijacking, data corruption (including jamming, interception and spoofing), denial of service, etc.

Special attention also needs to be paid to potential breaks in bus-to-street signalling, e.g. when a bus enters a tunnel, enters a long and deep cutting, goes under a bridge, etc. A different technology (e.g. distributed antennas, such as leaky feeders) can be employed for within-tunnel bus-to-street communications, but appropriate hand-over technology needs to ensure the avoidance of interception, spoofing and jamming as the hand-over is effected.

c) Multi-tenancy: Appropriate isolation between the virtual networks offered to different bus operators needs to be maintained. Safeguarding and security against malicious or renegade operators (e.g. operators that have been hijacked, spoofed, intercepted, etc.) therefore need to be in place.

4.3. Infrastructure Architecture

4.3.1. Physical Level Architecture

The proposed physical layer (PHY) architecture described in this section has been successfully set up and tested. Figure 15 below shows the schematic diagram of the test bed and how it is mapped onto the bus use case scenario described in section 4.2.

Figure 15: Bus use case scenario mapped to CHARISMA PHY architecture


CAL0 consists of a laptop (UE) that connects wirelessly to the mobile cache (MB on the bus), which has a Gigabit Ethernet connection to the 60 GHz RF modem. The data is then sent over 60 GHz (802.11ad) to the compatible docking station, where the cache function is deployed at CAL1. The docking station connects via Gigabit Ethernet to the TrustNode router, which handles and accelerates the IPv6 traffic. Through this path the data is routed to CAL3, which represents the Telco Cloud (and the Internet).

4.3.2. Logical Level Architecture

The components of the low latency demonstrator and their various interactions are described in turn in the following section, starting at CAL3 and working down the CHARISMA hierarchy towards CAL0, closest to the end-users:

Figure 16: Diagram of the logical architecture

Server with hardware accelerated network interface card

The Server hosts several virtualised services:

- Cache Controller – coordination of the caching entities in the network;
- Content server – provides web content accessible to the users;
- DHCP server – handles addresses for all attached devices below.

The server is equipped with a hardware-accelerated network interface card (NIC), which has several physical connections. Configured by the server process, the NIC automatically selects traffic that is not intended for the server and forwards it to the Internet without CPU interaction. This directly forwarded traffic is therefore not delayed by CPU processing.

TrustNode router

The TrustNode is a network element that handles IPv6 traffic with precedence. Traffic handling can be done in hardware or software:


- Software processed: all other packets, handled using Open vSwitch;
- Hardware accelerated: IPv6 traffic with the special prefix “AF”, and special flows specified using OpenFlow.

Routing the IPv6 traffic in hardware dramatically reduces the introduced delay in comparison to other IPv6 network elements. For fast-forwarding IPv6 packets, a special tree-like address schema is needed, which is provided by the DHCP server process running on the server. The TrustNode is configured to process a predefined part of the network address; in this case, the first byte following the “AF” fast prefix. The value of this byte directly indicates the output port of the processed packet. So, for example, all traffic with a destination address starting “AF1” is directed to port 1 of the TrustNode, which is connected to the 60 GHz radio equipment, while all traffic with the fast prefix “AF2” is directed to the layer-1 cache.

L1 Cache

The caching system consists of several levels of caches and a cache controller, which is located in the server. The caches store content that is often requested by users. The location and migration of the stored content is controlled by the cache controller, which follows a strategy of storing requested content as near as possible to the end user. For physical reasons, not every cache is able to store all the content, so a hierarchical cache topology is used.
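A minimal sketch of such a hierarchical lookup follows. The class and function names are hypothetical, and the real cache controller follows a far more sophisticated migration strategy; the sketch only shows the "store as near as possible to the end user" idea.

```python
class CacheLevel:
    """One level of the cache hierarchy with a fixed capacity."""
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity, self.store = name, capacity, {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, content):
        if len(self.store) >= self.capacity:       # simplistic eviction
            self.store.pop(next(iter(self.store)))
        self.store[key] = content

def fetch(key, levels, origin):
    """Walk the hierarchy from the user upwards (L0 -> L1 -> origin)
    and migrate a hit down to the levels nearest the end user."""
    for i, level in enumerate(levels):
        content = level.get(key)
        if content is not None:
            for lower in levels[:i]:               # promote towards the user
                lower.put(key, content)
            return content, level.name
    content = origin[key]                          # miss everywhere: origin server
    for level in levels:
        level.put(key, content)
    return content, "origin"

l0, l1 = CacheLevel("L0", 8), CacheLevel("L1", 64)
origin = {"video42": b"..."}
print(fetch("video42", [l0, l1], origin)[1])   # origin (first request)
print(fetch("video42", [l0, l1], origin)[1])   # L0 (served from the nearest cache)
```

After the first request the content sits at every level, so subsequent requests are served from L0, closest to the user.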

60GHz transmission equipment

The 60 GHz transmission link acts as a transparent layer-2 tunnel between the TrustNode and the L0 caching box.

L0 Cache

The L0 cache is also part of the hierarchical caching system. Caches at the L0 level additionally act as wireless access points, providing IEEE 802.11 access to the users.

User equipment

The UE is used as the measurement endpoint for the different demo scenarios.

4.4. Configuration and Deployment

The following subchapter lists the required inventory and guides the reader through the assembly and disassembly of the various hardware components used in the low latency demonstrator. The tables provided can be used as checklists.

4.4.1. Inventory list

In order to set up the LL demonstrator, a variety of equipment is required. The following table lists all components sorted by CAL. It can be used as a checklist before setup.

Table 2: Inventory list for LL demo

| # | submodule | quantity | component/device |
| 1 | CAL3 | 1 | Content server (e.g. laptop) |
| 2 | CAL3 | 1 | Power supply (only required in case of laptop); primary: 240 V |
| 3 | CAL3 | 1 | Unmanaged Ethernet switch (GbE) |
| 4 | CAL3 | 1 | Power supply (Ethernet switch); primary: 240 V |
| 5 | CAL2 | 1 | TrustNode board |
| 6 | CAL2 | 1 | Power supply (TrustNode); primary: 240 V; secondary: 12 V/5 A |
| 7 | CAL2 | 1 | Content cache |
| 8 | CAL2 | 1 | Power supply (content cache); primary: 240 V |
| 9 | CAL1 | 1 | 60 GHz link terminal #1 (“60 GHz box”) |
| 10 | CAL1 | 1 | Power supply (“60 GHz box”); primary: 240 V |
| 11 | CAL0 | 1 | 60 GHz link terminal #2 (“60 GHz laptop”) |
| 12 | CAL0 | 1 | Power supply (“60 GHz laptop”); primary: 240 V; secondary: 19.5 V |
| 13 | CAL0 | 1 | MoBcache |
| 14 | CAL0 | 1 | Power supply (MoBcache); primary: 240 V |
| 15 | CAL0 | 1 | Laptop for end-user device |
| 16 | CAL0 | 1 | Power supply (end-user laptop); primary: 240 V |
| 17 | General | >6 | Cat5e cable (length depends on location) |
| 18 | General | 1 | Extension lead (8 sockets required) |

4.4.2. Assembly and disassembly

In this section we describe the assembly of the LL demo, divided into the parts for the physical, IP, and application layers.

4.4.2.1. Physical assembly

The setup of the physical layer requires no specific sequence. All components should be placed at their respective locations, with the length of the Ethernet cables and power cables including extension leads to be chosen accordingly. An example setup of the LL demo is shown in the figure below:

[Figure content: table-top arrangement of the MoBcache & 60 GHz terminal #2 (CAL0) and 60 GHz terminal #1 (CAL1) joined by the 60 GHz link, with the Content cache (@CAL2), TrustNode (CAL2), Switch (@CAL3) and Content Server (@CAL3)]

Figure 17: Table top set-up of Low Latency intermediate demo

The following table lists the required connections, which can be set up in arbitrary order.

Table 3: List of required connections for LL demo

| # | from | to | type of cable |
| 1 | “Internet” | Switch@CAL3, any port | Cat 5e, long |
| 2 | Switch@CAL3, any port | Content server, RJ45 socket | Cat 5e |
| 3 | Switch@CAL3, any port | TrustNode@CAL2, port 0 | Cat 5e |
| 4 | TrustNode@CAL2, port 1 | 60 GHz box@CAL1, RJ45 socket | Cat 5e |
| 5 | TrustNode@CAL2, port 2 | Content cache@CAL2, RJ45 socket | Cat 5e |
| 6 | 60 GHz laptop, RJ45 socket | MobCache, RJ45 socket | Cat 5e |

For disassembly no specific sequence is required.


4.4.2.2. IP layer configuration

The IP layer configuration requires a running DHCP server. Consequently, all connected devices should be set to obtain their IP address automatically.

IPv6:

To enable the acceleration features of the TrustNode router, the IPv6 addresses should be configured in a tree-like topology as described in section 4.2.3 of deliverable D2.1 [15] (produced in parallel to this deliverable). In order to indicate fast routing, the network prefix needs to start with “AF”. The TrustNode router acts as a DHCP relay agent, distributing the addresses throughout the network from the DHCP server, which should be placed at the root of the tree.
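The tree-like addressing can be illustrated as follows. This is a sketch under the assumption that each level of the tree consumes one byte of the address after the 0xAF fast prefix; the exact schema is the one defined in D2.1.

```python
import ipaddress

def tree_address(path: list[int], host: int) -> ipaddress.IPv6Address:
    """Build a fast-routable IPv6 address: 0xAF prefix, then one byte
    per branching decision from the root towards the leaf, with the
    host identifier in the final byte (assumed layout)."""
    data = bytearray(16)
    data[0] = 0xAF
    for i, branch in enumerate(path, start=1):
        data[i] = branch
    data[-1] = host
    return ipaddress.IPv6Address(bytes(data))

# e.g. a host in the subtree behind TrustNode port 1 (the 60 GHz link):
print(tree_address([1], 5))   # af01::5
```

Because the branching decision at each node is a fixed byte of the address, each router on the path can forward the packet without any table lookup.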

IPv4:

Remark: if IPv4 is used, the TrustNode cannot demonstrate its latency reduction; IPv4 traffic is simply forwarded in the default way. If for some reason it is not possible to run the DHCP server, a manual IP setup is required. Addresses from the private IP ranges according to RFC 1918 should be used in this case; for this LL demo setup, the range 10.11.12.100…110 is suggested. The DNS server should be chosen according to the suggestions of the Internet provider. Otherwise, a list of free DNS servers is available at e.g. http://public-dns.info (Google provides DNS servers with good connectivity: 8.8.8.8 and 8.8.4.4).

4.4.2.3. Application setup

The link technologies and the intelligent nodes described below require specific software.

Cloud and Content Server:

The application server provides video content. A content streamer, provided by JCP-C, sends video to the network and is activated once the network is ready. The solution uses a Linux OS, as this is most suitable for an NFV/SDN environment. First, the Cloud/Content server runs a DHCP server to provide an IP address to each device in the network. We use an IPv6 address pool for data forwarding and IPv4 for the control flow, so DHCP supports both. After start-up, each device in the network obtains an IP address, and a short ping procedure tests the connectivity between any two network points. Another application running on the server is the cache manager, which is provided by JCP-C and is responsible for configuring the cache solution at any network point. The third main component managed on the server is the NIC, responsible for forwarding data both to the CPU (local content server) and to the Internet (an Internet content server, such as YouTube). The NIC can be managed in different ways, but we have decided to use the CLI (also accessible remotely via Telnet and SSH); Ethernity provides a script tailored to the demo use cases, which is activated for the demo and includes network-to-network forwarding and colouring of the content traffic to give it high priority. Other applications, such as the SDN controller managing the network devices, can also run on the cloud server.

TrustNode:

The TrustNode is configured to act as an autarkic (self-contained) device. Further information about its current status can be extracted via the hardware root shell, from the log files placed in /var/log/*. The parameters of the serial interface are 115200 baud, 8N1.

MoBcache and Content Cache:

The MoBcache and the content cache each have the following control features:


- a manager, which controls the device connectivity and can configure the wireless mesh network;
- a cache controller, which interfaces with the network operator or OTT/content provider and controls the caching and pre-fetching strategy of the MoBcache device, in an SDN framework. Caching and pre-fetching strategies can be set in a D2D manner (managed by the MoBcaches themselves) or in network-to-device configurations (controlled by the cache controller in that case).

4.5. System / caching end-to-end latency

At this point in time of the CHARISMA project, only a simple end-to-end latency measurement of the network has been performed. The experimental setup is as shown in Figure 16 and Figure 17, with the test performed on the Internet-to-end-users caching data solution, where the content data is located at the server at CAL3 and requested by the end-user device. The resulting point-to-point latencies measured in the network of the LL demo are shown in Figure 18 below and tabulated in Table 4. Of particular note is that many of the individual link delays are well below 1 ms, in keeping with the desire to achieve an end-to-end latency of below 1 ms when summed across the various links. This also includes the processing delay at the TrustNode of CAL1 and the internal delay of the cache box. However, the 60 GHz link has shown a higher than anticipated latency of about 3 ms, due to the need to translate between the Ethernet and wireless 802.11ad protocols. Reducing the mm-wave link delay to a value closer to the intrinsic flight time of the data will be a feature of ongoing work in the project. The current latencies are also measured to be higher than the target delay times, and this will need further investigation in the next year of the project; in particular, solutions that continue to reduce the latency associated with the various link and node technologies discussed here will be investigated, e.g. the use of more FPGA-based solutions to replace current CPU processing. In addition, the accelerated NIC card technology (from Ethernity) has not yet been deployed as a further means to reduce end-to-end latency. Each test also needs to show results and a comparison of the latencies with and without the CHARISMA solution, as well as an extrapolation to a full telco environment (distances, number of users). However, these measurements are not yet available, and will be investigated and reported upon during the second year of the project.
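Point-to-point round-trip times of the kind reported here can be collected with a simple software probe such as the following sketch. This is illustrative only: the UDP echo endpoint is an assumption, and software timestamping is too coarse for reliable sub-millisecond figures, for which hardware measurement equipment is needed.

```python
import socket
import statistics
import time

def udp_rtt_ms(host: str, port: int, probes: int = 10) -> float:
    """Median round-trip time in milliseconds against a UDP echo service.
    Software timestamps limit accuracy; sub-ms links need hardware probes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    samples = []
    try:
        for _ in range(probes):
            t0 = time.perf_counter()
            sock.sendto(b"probe", (host, port))
            sock.recvfrom(64)  # wait for the echoed payload
            samples.append((time.perf_counter() - t0) * 1e3)
    finally:
        sock.close()
    return statistics.median(samples)
```

Taking the median over several probes suppresses occasional scheduling outliers, which would otherwise dominate single-shot measurements at these time scales.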

Figure 18: Schematic of experimental point-to-point latencies in LL demo


Table 4: Tabulated expected and experimental round-trip latencies of LL demo

| # | Test description | Target time | Measured round-trip time | Comments |
| 1 | PC to Cache box (L0) | <0.3 ms | 1.128 ms | |
| 2 | PC to Content server | <0.5 ms | 3.12 ms | Without 60 GHz |
| 3 | PC to Content server (end-to-end latency) | <1 ms | 6.69 ms | With 60 GHz (resulting latency due to Ethernet-to-wireless protocol conversion) |
| 4 | Cache box to TrustNode | <0.2 ms | 0.292 ms | Without 60 GHz |
| 5 | Cache box to Content server | <0.2 ms | 0.541 ms | Without 60 GHz |
| 6 | Cache box to Content server | <0.5 ms | 3.471 ms | With 60 GHz |
| 7 | TrustNode to Cache box (L3) | <0.2 ms | 0.382 ms | |
| 8 | TrustNode to WiFi | <0.2 ms | 0.406 ms | |
| 9 | TrustNode to Content server | <0.2 ms | 0.414 ms | |
| 10 | WiFi to Content server | <0.2 ms | 0.415 ms | |

4.6. Future Work

To show the impact of the two low-latency-optimised traffic transfer technologies, the TrustNode and the 60 GHz link, the devices should be tested separately using a hardware traffic generator/analyser with delay measurement capabilities. The test results need to be compared with conventional hardware to show the overall CHARISMA improvement in low latency performance. In particular, the 60 GHz link is showing higher than anticipated latencies, due to the need to translate between the Ethernet transmission protocol and the 802.11ad protocol intrinsic to the mm-wave link. Future work will investigate faster conversion technologies, e.g. faster CPU processing capabilities, FPGAs, or even dedicated ASIC solutions, as a means to achieve the high-speed conversion between the protocols and to help meet the overall 1 ms 5G KPI for end-to-end latency. In addition, the CHARISMA end-to-end network latency also needs to be assessed with respect to the access times for end-users initiating a request for content from the hierarchical caching system, either from the cache box at CAL1 or the cache box at CAL3. The relative access-time delays for each scenario need to be measured, and this will also be a feature of future work during the second year of the project. As already mentioned, the current measured latencies indicated in Table 4 are higher than the target delay times, and technical solutions to reduce the link and node processing delays in the CHARISMA architecture will continue to be investigated as the project progresses.

Another aspect of the low latency system testing that needs further investigation is demonstrating the effect on the overall system advantage, in throughput and latency, of many users accessing the same data content from the Telco cloud/Internet. These more detailed characterisations of the low latency distributed caching demonstrator will be carried out during the second year of the CHARISMA project. Finally, the hardware solutions discussed here will also be integrated with the CMO plane (including the virtualised security capability) and the multi-tenancy functionalities being developed in the other intermediate demonstrators described elsewhere in this deliverable. The full integration plan of the low latency technologies is discussed in greater detail in chapter 6.


5. Open Access Demonstrator

5.1. Motivation and Scope

The telecommunications industry is currently investigating numerous challenges and opportunities in the continuous process of incorporating new digital application paradigms such as IoT, Industry 4.0, Smart Grids, Smart Cities, Autonomous Driving, etc. into the next generation of telecommunications systems, in pursuit of the common ambition to get “everything connected”. Within this context, there is growing interest in the development of telecommunication platforms that support logical resource sharing on the same physical infrastructure. This interest is driven by two key features of a 5G network, namely multi-tenancy and network slicing. In 5G networks, it is foreseen that a physical infrastructure is sliced into multiple standalone and independent virtual networks, each called a network slice. A network slice may consist of virtual as well as physical resources. These logical network slices can belong to different network operators running their customised services for their customers/users. This accommodation of multiple network operators on the same physical infrastructure is termed multi-tenancy, where each tenant is in charge and control of its network slice and the respective services. A converged 5G infrastructure intrinsically possesses natural monopolistic characteristics, so enabling open access operation by multiple virtual network operators (VNOs) has multiple social, economic and environmental benefits. Network sharing and multi-tenancy imply a single infrastructure provider (InfP) serving several service providers, with the physical infrastructure shared through the C&M systems; this becomes a fundamental enabler of the flexibility, elasticity and programmability required for 5G access-core networking. As such, multi-tenancy and network slicing are expected to be fundamental areas of development in the 5G era. Furthermore, telecommunications architectures have become multi-technology, multi-purpose, multi-vendor networks, which can be a demanding amalgam of networks and functions that is difficult to virtualise, manage and control.

As mentioned earlier, CHARISMA has been designed to emphasise three specific functionalities considered key to several vertical sectors and to the provisioning of their supporting 5G services. One of CHARISMA's objectives is the development of an open access (multi-tenant), converged 5G network, via virtualised slicing of network as well as IT resources to different Service Providers (SPs), with network intelligence distributed out towards end-users over a hierarchical architecture. The CHARISMA open access (multi-tenancy) solution thus allows infrastructure providers to share resources among multiple SPs, thereby driving down CapEx and OpEx, as well as achieving more efficient operation of the network using a centralised control and management system.


Figure 19: CHARISMA network slice and multi-tenancy concept

The main motivation behind CHARISMA's open access demonstrator is to showcase progress and achievements in line with the open access/multi-tenancy objectives of the project. The scope of the demonstrator is twofold: firstly, to exhibit the creation of network slices and their assignment to different network operators on a subset of the CHARISMA physical infrastructure; and secondly, to deploy a service on these network slices. To achieve this, we use an FTTH solution (designed by Altice Labs and based on the GPON protocol) to prove the provisioning of slices dedicated to VNOs, in addition to an innovative concept at the CAL2 (Converged Aggregation Level) site, namely the inclusion of a server that provides IT capabilities to the end-to-end system. This setup is in line with the CHARISMA architecture as already explained in D1.1 [1]. The ultimate objective of this open access demo is to prove the concept of multi-tenancy by adding the CHARISMA-specific IMU (Intelligent Management Unit) capability at CAL2, so as to be able to deploy additional services per user slice.

5.2. Scenario Description

The main focus of the open access/multi-tenancy demonstrator is to prove CHARISMA's support for network slicing and multi-tenancy. For this purpose, a simple scenario is considered in the initial open access/multi-tenancy demonstrator, as shown in Figure 20. The scenario consists of an InfP which owns the FTTH solution, including GPON network devices along with storage and CPU resources. The InfP creates virtual network operators (VNOs) and their respective network slices, and enables them to run a service on their assigned network slice. The network slice of a VNO consists of storage, CPU and network resources that allow the VNO to offer an end-to-end network service to its customers. In this manner, multiple VNOs have access to their own network slices only, and can offer their customised services on the same physical infrastructure, hence demonstrating network slicing and multi-tenancy. Figure 20 shows two VNOs, namely VNO1 and VNO2, along with their network slices highlighted in different colours.
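The InfP-side bookkeeping behind this scenario can be sketched as a toy model (class and field names are our own assumptions, not the CHARISMA CMO data model): each slice bundles network, CPU and storage resources, and a VNO can only see and manage its own slices.

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    slice_id: str
    vno: str               # owning virtual network operator
    vlans: list            # VLAN IDs (S/C-tags) assigned to this slice
    cpu_cores: int
    storage_gb: int

class SliceRegistry:
    """Toy model of InfP-side slice bookkeeping: each VNO may list
    only its own slices (multi-tenancy isolation)."""
    def __init__(self):
        self._slices = {}

    def create(self, s: NetworkSlice):
        self._slices[s.slice_id] = s

    def slices_for(self, vno: str):
        return [s for s in self._slices.values() if s.vno == vno]

reg = SliceRegistry()
reg.create(NetworkSlice("sl-1", "VNO1", [100], 4, 200))
reg.create(NetworkSlice("sl-2", "VNO2", [200], 2, 100))
print([s.slice_id for s in reg.slices_for("VNO1")])   # ['sl-1']
```

The real demonstrator enforces this visibility rule in the CHARISMA O&M GUI and through VLAN-based traffic isolation on the GPON infrastructure.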


Figure 20: High-level open access/multi-tenancy demonstrator scenario

The above open access/multi-tenancy demonstrator is deployed on the premises of APFutura in Centelles, Barcelona, in a controlled deployment for testing purposes. The physical topology of the demonstrator is illustrated in Figure 21; the logical and physical descriptions of the setup are detailed later in this section. The network could form part of a real GPON deployment, as it is located at a real APFutura deployment site; at the moment, however, it is a lab-controlled environment with no real clients.

Apart from the main objective of creating network slices and enabling multi-tenancy on the same physical infrastructure, this demonstrator has the following two side objectives:

1) Isolation concept evaluation

The aim is to prove that isolation exists both at VNO level and at user level, so that each VNO can only see the slices of the users that belong to it, and users cannot retrieve traffic that is not directed to them. This isolation is provided via VLANs, using a QinQ packetisation format (described later). Services identified by a combination of S- and C-tags are provided over the FTTH system, and the traffic is then examined with testing equipment to make sure it is indeed received on the appropriate ports of the ONT. The S-tag represents the service provider, whereas the C-tag represents the user of that SP or the service being delivered to the user. To verify this, testing equipment (from the vendor MikroTik) is used to check the different tags at the different interfaces.
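The QinQ tag check can be sketched as follows. The tag values in the example are hypothetical; 0x88A8 is the standard IEEE 802.1ad S-tag TPID and 0x8100 the C-tag TPID, with the VLAN ID in the low 12 bits of each TCI field.

```python
import struct

def parse_qinq(frame: bytes):
    """Extract (S-VID, C-VID) from a QinQ Ethernet frame:
    the S-VID identifies the VNO, the C-VID the user/service of that VNO."""
    s_tpid, s_tci, c_tpid, c_tci = struct.unpack_from("!HHHH", frame, 12)
    if s_tpid != 0x88A8 or c_tpid != 0x8100:
        raise ValueError("not a QinQ frame")
    return s_tci & 0x0FFF, c_tci & 0x0FFF  # low 12 bits carry the VLAN ID

# dummy frame: dst+src MAC (12 zero bytes) + S-tag (VNO 100) + C-tag (user 42)
frame = bytes(12) + struct.pack("!HHHH", 0x88A8, 100, 0x8100, 42) + b"\x08\x00"
print(parse_qinq(frame))   # (100, 42)
```

A verification script on the ONT port would apply such a parse to captured frames and assert that only the expected (S-VID, C-VID) pairs appear on each interface.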

2) End-to-end service delivery

The aim here is to demonstrate ordinary service delivery with the enhanced slicing features (such as firewall, DPI, parental control applications, etc.) over the same physical infrastructure, with these features delivered over an NFV-based system. The implemented topology is shown in Figure 21, where we have included the FTTH system based upon GPON, together with the IT resources capability provided by the Ubuntu server.


Figure 21: Centelles headend topology


5.3. Infrastructure Architecture

5.3.1. Software Components

There are two main software components in this demonstrator: the CHARISMA Operation & Management GUI and the CHARISMA server, as shown in Figure 20.

CHARISMA Operation & Management GUI

The CHARISMA O&M GUI has been developed as part of the efforts in WP3, which is dedicated to the development of the CHARISMA Control, Management and Orchestration (CMO) plane. The CHARISMA CMO has already been described in more detail in deliverable D3.1.

The CHARISMA O&M GUI is developed using Bootstrap [17] for the frontend and Angular [18] for the backend logic. It provides an operation and management web portal for both the InfP and the VNOs.

Figure 22: CHARISMA O&M GUI

The InfP view of the CHARISMA O&M provides the infrastructure provider with all the information about the physical infrastructure and topology, as shown in Figure 23. It also provides all the related information about the VNOs (i.e. the service providers (SPs)), the network slices, and the related monitoring information (see the left menu in Figure 23). The Service Provider tab in the left menu provides the list of VNOs operating within the CHARISMA physical infrastructure (see Figure 24), and offers the flexibility of creating more VNOs depending on the availability of physical and virtual resources. At the bottom of Figure 24, the list of admin users of the VNOs (represented as SPs in the screenshots) is also shown; these accounts allow a VNO to log into the CHARISMA O&M and control and manage its respective slice.

Figure 25 shows a screenshot of the Virtual Slices tab. As shown, it lists the virtual slice requests and the virtual networks created in response to these requests. It allows one to create, edit and process a new network slice, which can be provisioned for any existing or new VNO. The Edit option allows the design of a particular network slice request, as shown in Figure 26; in this option, the mapping of physical to virtual resources is performed. Once the network slice design is complete, the Process button provisions the configured virtual slice request into a virtual network, which can be assigned to an existing or new VNO.

Figure 23: CHARISMA O&M InfP View

Figure 24: CHARISMA O&M InfP View Service Provider tab

CHARISMA – D4.1 – Demonstrators design and prototyping Page 43 of 64

Figure 25: CHARISMA O&M InfP View Virtual Slices tab

Figure 26: CHARISMA O&M InfP View Network Slice Request Edit

The CHARISMA O&M also provides an operation and management web portal for the VNOs. Figure 27 shows the dashboard of a VNO which has one network slice assigned to it, whereas Figure 28 displays the detailed information of the network slice. The end-to-end network service over the allocated network slice can be provisioned by clicking the Create Service button in Figure 28. Figure 29 shows the configuration of the end-to-end network service using QinQ VLANs (802.1ad). This type of configuration allows the unique and unequivocal identification of the users of the infrastructure. The configured setup identifies each SP with the outer VLAN and each customer with the inner VLAN, as shown in Figure 30. The number of VNOs in this case is limited to 4094, and the number of customers per VNO is also limited to that number. For the purposes of this open access demo, this configuration fulfils our requirements; however, in a real deployment of services and for a real infrastructure owner, a more elaborate VLAN allocation scheme can be designed in order to allocate VNOs, customers and services more efficiently and to allow this network topology to scale.
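As a back-of-the-envelope illustration of the limits just described, a hypothetical allocation helper (the names are ours, not part of the CHARISMA O&M) maps each VNO to an outer S-VID and each of its customers to an inner C-VID:

```python
VLAN_MIN, VLAN_MAX = 1, 4094  # VIDs 0 and 4095 are reserved by 802.1Q

def qinq_pair(vno_index, customer_index):
    """Map (VNO, customer) indices to an (outer S-VID, inner C-VID) pair."""
    s_vid = VLAN_MIN + vno_index
    c_vid = VLAN_MIN + customer_index
    if s_vid > VLAN_MAX or c_vid > VLAN_MAX:
        raise ValueError("flat QinQ VLAN space exhausted")
    return s_vid, c_vid

print(qinq_pair(0, 0))      # (1, 1): first customer of the first VNO
print(VLAN_MAX * VLAN_MAX)  # 16760836 distinct (S-VID, C-VID) pairs
```

A production allocation scheme would additionally reserve VLAN ranges per service type and per aggregation level, which is exactly the more elaborate engineering alluded to above.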

Furthermore, virtual network functions (VNFs) such as a virtual firewall (vFW) can be deployed on the provisioned end-to-end service, as shown in Figure 31. The virtual machine containing the VNF is created inside the CHARISMA server and configured with the respective network slice VLAN tags to ensure end-to-end service.

Figure 27: CHARISMA O&M VNO Dashboard

Figure 28: CHARISMA O&M VNO view Virtual Slice Information


Figure 29: CHARISMA O&M VNO view end-to-end service provisioning

Figure 30: QinQ VLAN tags

Figure 31: CHARISMA O&M VNO view end-to-end service test and VNF deployment


CHARISMA Server

The purpose of the CHARISMA server in the initial open access/multi-tenancy demonstrator is to emulate the storage and compute resources assigned to the created VNOs. As mentioned earlier, the compute and storage resources required to deploy the VNF for a particular network slice are provided by the CHARISMA server.

The CHARISMA server is deployed on top of a VMware ESXi hypervisor. This allows the deployment of a virtual machine for validation purposes, in order to test the performance of the CHARISMA network. The virtual machine configuration is presented in Figure 32.

Figure 32: CHARISMA Server VNF deployment

The actual CHARISMA Open Access implementation includes additional virtual machines to host the current and future functions needed in the deployment of the CHARISMA architecture.

The system has been designed to be flexible enough to accommodate new services and alternative CHARISMA platforms in the future, simply by adding new features as VNFs on the server; the system is thus scalable enough to host many of the VNFs required by CHARISMA, both now and in the future. In that sense, it is expected that this demonstration will take into account future-generation FTTH solutions and will be able to cope with fixed and wireless network convergence, managed from the unified platform that WP3 is responsible for implementing: the Control, Management and Orchestration (CMO) of services. D3.1 provides further details about this platform. D3.2 and D3.3, submitted simultaneously with this deliverable D4.1, provide further detailed information about the capabilities of the CMO system.


5.3.2. Physical Level Components

5.3.2.1. The OLT

One of the key components that has been incorporated in this demo is the OLT of a GPON system provided by Altice Labs. Based on the rec. ITU-T G.984.x (GPON), this active network equipment solution is nowadays evolving towards next-generation PON architectures as defined by the recs. ITU-T G.987.x, ITU-T G.988.x and ITU-T G.698.x.

The Optical Line Termination (OLT) is a high-availability, high-reliability and high-density item of equipment, specially designed for fibre network infrastructures in Point-to-Multipoint (P2MP) FTTx Gigabit Passive Optical Network (GPON) architectures in urban deployment scenarios. Supporting up to 48 PON ports, the OLT can be considered among the best-of-breed, future-proofed central office optical equipment that the market currently offers.

This equipment is intended to address fibre access needs for retail as well as wholesale clients. It complies with ITU-T G.984.x and is able to serve multiple-play services, namely voice (VoIP), data (High Speed Internet, HSI) and TV (IPTV and RF Overlay), to up to 3072 client premises (assuming a 1:64 splitting ratio) from a single chassis. Wholesale and enterprise services are addressed through Business Ethernet VBES/TLS services (VLAN Business Ethernet Services/Transparent VLAN Services). Up to 144 client premises can be supported in a P2P topology from a single chassis. During the first year of the CHARISMA project, this OLT has been designed to support NG-PON2, using the most advanced line cards developed by Altice. A complete description of the product and of the different scenarios in which it could be used is out of the scope of this document; hence only the relevant information is presented below.

OLT: Physical Description

The OLT used within the CHARISMA framework is a 3 RU unit, shown in Figure 33:

Figure 33: OLT 1T1 3D chassis schematic

Dimension               OLT 1T1
Height (mm) / RU        133 / 3
Width (mm) / "          483 / 19
Depth (mm)              248
Weight (max, kg)        12 (with cards)

Table 5: Weight & Physical Dimensions

OLT: Technical Description


The OLT system is a reliable, modular OLT specially designed for fibre network infrastructures, for either Point-to-Point (P2P) Ethernet or Point-to-Multipoint (P2MP) FTTx Gigabit Passive Optical Network (GPON) architectures.

Features                            OLT system
Chassis                             3U
Number of line card slots           3
Backplane: physical BW per slot     12 x 10GbE
GPON ports (max)                    48
Active Ethernet GbE ports (max)     144
10GbE interfaces                    4

Table 6: OLT equipment features

OLT: Usage Scenarios

The OLT system usage scenarios include the provision of:

NGN voice services: VoIP and ToIP, softswitch-controlled, including IP Centrex services and SIP Trunk services;

Internet services: high-speed internet in the order of Mbit/s to Gbit/s, with traffic prioritisation and differentiation;

Enhanced multimedia communications, such as voice, presence, unified messaging, localisation, and Caller ID with IPTV, controlled by an IMS CSCF (Call Session Control Function) platform;

Mobile backhaul services:
o Applicable to WiMAX, 2G, 3G, 4G (LTE) and 5G networks;
o Transparent transport of synchronisation signals (frequency, phase and time) according to the NTP and PTP protocols (IEEE 1588 v2 and telecom profile);
o BITS interface for the connection of an external MHz or Mbps clock reference.

Video services:
o IPTV;
o RF video.

Residential multiple-play services:
o Voice: Voice over IP (VoIP);
o Internet: high-speed internet services;
o RF Overlay: analogue video using a different wavelength (1550 nm);
o IPTV: transport of digital video services.

Business services:
o TDM emulation services for E1 transport according to MEF-8 (CESoETH);
o Carrier Ethernet services based on MEF 10.1 (E-LINE, E-LAN);
o Business Ethernet Services TLS (Transparent VLAN Services);
o BitStream (enterprise).

OLT: GPON service scenarios

The OLT system supports the following Ethernet services over GPON (see Figure 34):

N:1
o Multiple clients using the same service tag;
o Traffic forwarding based on S-TAG + DMAC.

1:1
o Unique S-TAG, or S-TAG + C-TAG, per client service;
o Traffic switching based on S-TAG + C-TAG or S-TAG.

Transparent LAN Services (TLS)
o The traffic is not processed;
o The CPU is not part of the EVC;
o Both 1:1 and N:1 topologies can be used.

Figure 34: OLT services usage scenario
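The three service scenarios above differ mainly in the key used to switch a frame. The following schematic decision function (our illustration, not vendor firmware) summarises them:

```python
def forwarding_key(mode, s_vid, dmac, c_vid=None):
    """Return the lookup key an OLT bridge would use per scenario:
    N:1 shares one S-tag among clients and disambiguates by DMAC;
    1:1 gives each client service a unique S-tag (or S+C pair);
    TLS forwards transparently without processing the frame."""
    if mode == "N:1":
        return (s_vid, dmac)
    if mode == "1:1":
        return (s_vid, c_vid) if c_vid is not None else (s_vid,)
    if mode == "TLS":
        return None  # traffic is not processed
    raise ValueError(f"unknown service mode: {mode}")

print(forwarding_key("N:1", 100, "aa:bb:cc:dd:ee:ff"))  # (100, 'aa:bb:cc:dd:ee:ff')
print(forwarding_key("1:1", 100, None, c_vid=10))       # (100, 10)
```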

OLT: GPON LINE BOARD: TG16G

The TG16G board provides sixteen Class B+ or C+ GPON interfaces. Each interface can serve up to 64 clients, up to a maximum of 1024 clients per board.



Figure 35: GPON TG16G line board front view

Table 7: GPON TG16G line board slot position in OLT shelf

Board     OLT system    Slot
TG16G     OLT 1T1       2 to 4

OLT: SWITCH FABRIC BOARDS: CXO160G

The Switch Fabric boards provide the operation and maintenance functions of the system, and also switch the client traffic between the client and uplink boards. The OLT system must be equipped with two Switch Fabric boards in order to guarantee 1+1 redundancy for switching and management.

The CXO160G is a Switch Fabric board for OLT 1T1 systems, with a switching capacity of 160 Gbps. The board has four 10GbE internal connections and one O&M connection for each of the uplink and client boards in the system.


Figure 36: CXO160G Front panel

Table 8: CXO160G Slot position in OLT shelf

Board       OLT system    Slot
CXO160G     OLT 1T1       1 and 5

It has the following interfaces:

2x ETH management interfaces, G1 and G2 (RJ45);
1x USB debug port, only for factory configuration;
MISC (alarms/conditions and ACK indicators) contacts;
1x GPS synchronisation interface;
2x 10GbE uplink interfaces (SFP+ modules).

ONU

The ONU used within the framework of the CHARISMA project is an Optical Network Terminal (ONT) unit for Passive Optical Network (PON) termination in an FTTH (Fibre-To-The-Home) service delivery architecture. It communicates with the OLT (Optical Line Terminal) on the PON side and with the customer's premises equipment on the client side. This equipment supports triple-play services: high speed internet (HSI), voice (VoIP) and video (IPTV and RF Overlay), as well as WPS (Wi-Fi Protected Setup). The use of the GPON fibre access technology allows a significant increase in service delivery when compared with traditional xDSL technologies.


The equipment technology is based on GEM (GPON Encapsulation Method) and complies with the ITU-T G.984.x recommendations, including G.984.4 (OMCI), ensuring interoperability with the major GPON OLT vendors (BBF.247).

These base functionalities, together with support for bit rates of up to 2.5 Gbps (downstream) and 1.24 Gbps (upstream), an optical network splitting ratio of up to 1:64 on a single fibre, and a distance range of up to 60 km, make GPON the most efficient technology option for passive optical network topologies when integrated service delivery is required.
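For a rough sense of these figures, the worst-case per-subscriber rates at the full 1:64 split work out as follows (raw line rates only, ignoring GPON framing overhead):

```python
down_gbps, up_gbps, split_ratio = 2.5, 1.24, 64  # figures quoted above

per_user_down = down_gbps * 1000 / split_ratio  # Mbit/s per subscriber
per_user_up = up_gbps * 1000 / split_ratio      # Mbit/s per subscriber
print(f"{per_user_down:.1f} Mbit/s down, {per_user_up:.1f} Mbit/s up per subscriber")
# 39.1 Mbit/s down, 19.4 Mbit/s up per subscriber
```

In practice GPON bandwidth is statistically multiplexed, so each subscriber usually sees far more than this worst-case share.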

Together with multi-vendor OLT interoperability (BBF.247 certified), other differentiating features of the product are the embedded RF video overlay, as well as the ability to offer several TV channel packs by means of remotely managed analogue RF video overlay filters. The use of an embedded optical reflective component also increases probing resolution in the case of FTTH probing. This ONU is also one of the first single-household integrated CPE solutions (ONT + gateway).

As opposed to the point-to-point architecture, in which there is one physical port per client in the Central Office, in the GPON point-to-multipoint architecture there is only a single laser and photo-detector in the Central Office (CO) to serve up to 64 CPEs. The entire Optical Distribution Network is built from passive equipment modules, with long-MTBF components and very low OPEX.

ONU: Technical Description

Gateway Main Functionalities

The Gateway is aimed at customer premises and complies with the ITU-T G.984.x recommendation, in order to transport (over GPON) and deliver (to the premises domain) the full broadband service pack. Broadband service applications commonly include:

High speed internet (HSI);
Voice (VoIP) services (SIP/MEGACO H.248);
TV (either IPTV or analogue RF video overlay);
Wi-Fi.

The multiple-play environment is thus reinforced when the above services are combined.

ONU: Interfaces

Client interface options are of the following types:

4x 100/1000Base-T Ethernet network connections (RJ45 connectors);
2x FXS channels (RJ11 connectors);
2x2 @ 2.4/5.0 GHz wireless interfaces (802.11 b/g/n);
2x USB 2.0 master ports for printer sharing, media sharing and 3G/4G backup uplink;
RF Overlay interface;
Control switches for power and Wi-Fi.

The network interface option is of the following type:

GPON SC/APC optical connector (B+/C+).

ONT: Connections


ONT-RGW connections are distributed over two side faces of the device. A general view of the ONT-RGW connections is shown in Figure 37.

Figure 37: ONU (ONT-RGW) Overview

5.4. Configuration and Deployment

The CHARISMA Open Access demonstrator testbed topology is shown in Figure 21. It shows two separate planes: one dedicated to management, connecting the management ports of each and every device (OLT and virtual machines); the second depicts the data plane.

Two PON ports (PON1 and PON2) have been utilised, and two ONTs have been provisioned on each of the ports in order to demonstrate multi-tenant isolation on separate ports. Figure 21 shows the initial topology used in the testbed in Centelles to demonstrate that CHARISMA can support multi-tenancy.

In more detail, there are three relevant elements of the testbed on which we focus in this sub-section:

1. Server

The server is deployed on top of a VMware ESXi hypervisor. This allows the deployment of several virtual machines that help to test the performance and isolation of CHARISMA. The initial configuration of the server is presented in Figure 38.

Figure 38: Virtualised CHARISMA server

2. ONT

Figure 37 shows the external view of the ONT used for the testbed. It is an L2/L3 ONT, providing GPON with router capabilities such as routing, NAT, parental control, etc. We have configured this terminal in two different modes, in order to verify that CHARISMA supports both configurations. The first is bridge mode, which uses only the GPON interface and an Ethernet port, with the ONT acting as an L2 device; this mode needs an additional L3 device (we use a laptop). The other is L3 mode, in which the ONT itself is configured with an IP address and routing.

3. OLT

Figure 33 presents the general external view of the OLT. The OLT used in CHARISMA is equipped with two switch fabric boards (Figure 36) in a 1+1 configuration (stand-by mode) and one 16-port GPON line card (Figure 35). Other line cards are also available for this system, but are not (yet) used within CHARISMA. As soon as an NG-PON2 line card is available, it will be used instead of the GPON card, in order to improve the available bandwidth per customer/terminal/ONU. The OLT, in its current hardware configuration, features 4 uplink interfaces (two on each switch fabric board, Figure 36). Each of these interfaces is an SFP+ cage, providing either 1 GbE or 10 GbE (depending on the SFP used). These 4 interfaces may be used either separately, or in any link aggregation combination (2, 3 or 4 interfaces per link). Within the framework of CHARISMA, the 4 interfaces are used in a single link group that may provide up to 40 Gbps of aggregated uplink bandwidth.

The CHARISMA Open Access demonstrator uses the N:1 services usage scenario (see Figure 39) with MAC bridge services (refer to the section "OLT: GPON service scenarios"), that is:

Forwarding based on S-VID + DMAC or S-VID + C-VID + DMAC;

Unknown unicast, multicast and broadcast MAC addresses are replicated to all the GEM ports that belong to the same S-VID or S-VID + C-VID.
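The listed MAC-bridge behaviour can be sketched as a toy learning bridge keyed on the service VLAN (an illustration of the forwarding logic only, not the OLT implementation):

```python
from collections import defaultdict

class N1Bridge:
    """Toy N:1 MAC bridge: frames to known DMACs forward to a single
    GEM port; unknown unicast, multicast and broadcast frames are
    replicated to all GEM ports of the same S-VID."""
    def __init__(self):
        self.mac_table = defaultdict(dict)  # s_vid -> {mac: gem_port}
        self.gem_ports = defaultdict(set)   # s_vid -> {gem_port, ...}

    def learn(self, s_vid, smac, gem_port):
        """Learn a source MAC on a GEM port of this service VLAN."""
        self.gem_ports[s_vid].add(gem_port)
        self.mac_table[s_vid][smac] = gem_port

    def forward(self, s_vid, dmac):
        """Return the set of GEM ports to which a frame is sent."""
        known = self.mac_table[s_vid].get(dmac)
        return {known} if known is not None else set(self.gem_ports[s_vid])

br = N1Bridge()
br.learn(100, "aa:aa", 1)
br.learn(100, "bb:bb", 2)
print(br.forward(100, "bb:bb"))  # {2}: known DMAC, single GEM port
print(br.forward(100, "ff:ff"))  # {1, 2}: unknown -> replicated
```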

Figure 39: N:1 MAC bridge services

In the N:1 service model with a single service VLAN, one VLAN tag is used to identify the service/provider on the uplink, while each customer is differentiated by the CPE MAC address. Security is ensured by encryption on the PON, anti-MAC-spoofing and MAC filtering. Additionally, according to Broadband Forum Technical Report TR-156, the OLT performs customer isolation, preventing L2 traffic between customers on the same OLT. For two customers to communicate with each other, it is necessary to use an L3 device on the OLT uplink.


The N:1 service model is also being tested with Q-in-Q (double VLAN tagging) on the upstream. Using this option, each customer/service/provider is uniquely identified by a pair of VLANs (Q-in-Q), and thus client isolation and security are guaranteed at L2.

5.5. Future Work

As mentioned earlier, this is an initial demonstrator to prove CHARISMA's support for network slicing and multi-tenancy in an access network based on GPON technology. The open access/multi-tenancy demonstrator is planned to be improved in the following two ways, in order to ensure smooth integration with the other two CHARISMA demonstrators by the end of the project:

The CHARISMA O&M GUI shall be further developed, both at the frontend and the backend, to extend the network slicing and multi-tenancy features to the other planned access network technologies, including:
o Wireless access (eNodeB),
o Point-to-point wireless backhaul,
o OFDM-PON,
o TrustNode, and
o the Ethernity NIC.
This is to ensure that multi-tenancy and network slicing reach all CAL levels.

Instead of the manual deployment of VNFs, as is currently done, we will also aim to achieve this in an automated manner via an Orchestrator, which is part of the CHARISMA CMO. Overall, the CHARISMA O&M GUI shall be extended to incorporate the complete feature set of the CHARISMA CMO.


6. Strategy for Integration

6.1. CHARISMA vision

Figure 40 presents the project's vision for the end-of-project integration of work building on the three key CHARISMA drivers: low latency, virtualised security, and open access, including both the data plane and the control plane. In the figure, a hierarchical and quasi-distributed network is depicted via the four CHARISMA converged aggregation levels (CALs), from CAL0 to CAL3. At CAL0, CPE and vCPE supporting multiple wireless technologies like WiFi and LTE are deployed to provide Internet access to end-users. Caching is enabled in the CPE in order to provide good QoS, especially in mobile scenarios, by caching popular content close to end-users. eNodeB, ONT nodes and WiFi APs, e.g. using mm-wave 802.11ad routers, are deployed at CAL1, together with a dedicated server on which virtualised functions related to security and caching run. The server is equipped with a smart NIC to provide packet acceleration features using acceleration software such as SR-IOV and Intel DPDK. At CAL2, an OLT node, the TrustNode router (employing 6-Tree routing) and a local server with smart NIC and caching are deployed to accelerate routing and caching functions, in order to meet the low-latency objective. A wireless PtP backhaul is used to transport data from the eNodeB to the EPC. The EPC and a central caching server with smart NIC are present at CAL3, where the SDN-based CHARISMA CMO platform is deployed to manage the VNFs and physical equipment located at the different CAL levels, in order to provide open access services.

Figure 40: Schematic of integrated 5G CHARISMA pilot demonstrator network


For the implementation of a CHARISMA 5G network like the one illustrated in Figure 40, a number of additional requirements are set. The first is support for IPv6. The public IPv4 address space managed by IANA was completely depleted by February 2011, yet new technologies like the Internet of Things (IoT), cloud computing and 5G require an upgrade of the global Internet infrastructure in order to keep the Internet growing. IPv6 has the capacity to provide about 3.4x10^38 addresses, or 340 trillion trillion trillion addresses. The manageability, end-to-end connectivity and scalability provided by IPv6 are promising for IoT and 5G technologies. From a low-latency perspective, IPv6 support is an important feature: the new IP protocol includes many features which reduce latency, e.g. flow labels, and its much larger address space avoids network address translation and other software-based packet processing. InnoRoute's TrustNode hardware device focuses on fast IPv6 packet forwarding in hardware.
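The 3.4x10^38 figure quoted above is simply the size of the 128-bit IPv6 address space, easily checked:

```python
ipv6_addresses = 2 ** 128  # IPv6 addresses are 128 bits wide
print(ipv6_addresses)            # 340282366920938463463374607431768211456
print(f"{ipv6_addresses:.1e}")   # 3.4e+38, i.e. about 3.4 x 10^38
```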

Second, existing wireless/mobile technologies need to be integrated. One of the main objectives of 5G radio access is to assist the evolution of existing wireless technologies like LTE, HSPA, GSM and WiFi. CHARISMA's vision of 5G networking involves WiFi (802.11n, 802.11ac and 802.11ad) at higher carrier frequencies, together with LTE network technologies. Hence, user equipment is able to connect to the CHARISMA network by WiFi and LTE, enabling service continuity through seamless handover and traffic offloading between the WiFi and LTE networks.

In 5G infrastructures, Cloud, SDN and NFV technologies are required to reshape the entire mobile management infrastructure and the entire mobile ecosystem. Network functions like NAT, firewalling, intrusion detection, routing and caching will be virtualised and implemented as VNFs. For example, CHARISMA proposes the implementation of cache nodes and a cache controller as VNFs, and the deployment of virtualised caches at the network edge so that content can be cached locally. Moreover, as explained in this deliverable D4.1, the virtualisation of security functions such as firewalls and intrusion detection systems is also foreseen. Technologies like SDN can be used for the management of traffic and communication between the caching devices. With the multiple hierarchies, SDN controller and cache virtualisation, virtualised caches located in different parts of the network can be dynamically created with the required resources, such as network bandwidth, CPU, memory and hard-disk storage, and network traffic can be shifted dynamically based on need.

Two kinds of caching methods are proposed: a dedicated cache box called MoBcache, and virtualised caches (vCache). MoBcache is a mobile router-server prototype with an autonomous battery and auto-configuration within a tree-topology radio network. It is specially designed for mobile scenarios in the CHARISMA architecture, to maintain service continuity while a user moves and hands over between networks. vCache will be deployed as a VNF at the different CAL levels and managed by the SDN-based CHARISMA management system. The main idea here is to separate the data plane from the control plane that orchestrates the caching and distribution functionalities and transparently pushes content as close as possible to the end-users. The vCache software components can be moved around the entire network while being managed by a centralised control plane. In this way, caching functions can be easily and quickly installed and provisioned within the network, as well as dynamically configured or adapted to the network's needs.


6.2. Final CHARISMA demonstrators

Having now completed month 12 of the 30-month CHARISMA project, we are actively considering the design of the final demonstrators for the end of the project. At the moment, two different demonstrators appear feasible to implement to showcase many of the technologies being developed within CHARISMA. The first demonstrator will be located at NCSRD in Athens, Greece, and the second at ApFutura's premises in Centelles (Barcelona), Spain.

6.2.1. NCSRD demonstrator, Athens

The following figure (Figure 41) depicts the infrastructure comprising the NCSRD demonstrator. An LTE testbed is foreseen, comprising two servers: one for the EPC and a second for the eNodeB. The eNodeB features an external RF interface, allowing the connection of LTE user equipment; the connection of an LTE CPE providing connectivity to other wireless devices is also possible. Between the EPC and the eNB, we will place a wireless backhaul link with SDN management functionalities. The implementation of two IMUs at CAL1 and CAL2 is planned, using cloud infrastructure to support both the implemented virtualised functions (security and caching at CAL1) and the CHARISMA CMO services (CAL3).

The objectives of this setup will be to demonstrate the following CHARISMA aspects:

a) Virtualised security, through the use of virtual security functions such as the vIDS and the vFW, in combination with traffic and resource monitoring. Security policy management through the CHARISMA CMO will also be demonstrated;

b) Low-latency features, through the use of virtual caches and cache controllers implemented within the context of WP3, and packet acceleration through the use of the FPGA-based smart NIC;

c) Management of the underlying infrastructure (virtual or physical) and the developed VNFs through the CHARISMA CMO platform, and the ability for on-demand provisioning of the resources assigned to different VNOs. The creation of virtual slices and their isolation is also planned to be demonstrated.


Figure 41: NCSRD demonstrator

6.2.2. ApFutura demonstrator, Centelles

A potential final demonstrator at the ApFutura premises is depicted in Figure 42. It is built upon the initial open access/multi-tenancy demonstrator, with the following main improvements planned in order to demonstrate the key features (open access, low latency, and virtualised security) of the CHARISMA project:

1. It includes the TrustNode (a high-speed routing device from InnoRoute) and the Ethernity NIC card at the CO. The main purpose of their inclusion is to showcase low latency.

2. The demonstrator will include wireless links placed at different positions:
a. Point-to-point wireless backhaul equipment, envisioned to demonstrate the extension of multi-tenancy and network slicing to the wireless access network; this link will run from the CO (TrustNode) to CAL2.
b. A point-to-point wireless link located between the ONT and a CAL1 IMU element.

3. The inclusion of IMUs at CAL2 and CAL1, in order to demonstrate low latency by moving the VNFs near to the end user.


Figure 42: Initial proposal for end-of-project ApFutura demonstrator

Figure 43 describes the existing equipment that APFUTURA has in Centelles, the city where the testing will be performed.

Figure 43: CHARISMA demonstrator deployment in APFUTURA network

As the reader can see, this validation will be carried out in a real environment, using a production network, in order to bring the technical developments very close to the market and commercial exploitation.


7. Conclusions

Work conducted within WP4 during the first year of the CHARISMA project has allowed us to select the most promising technologies being developed in CHARISMA across the different fields and areas, and to define initial demonstrators that exhibit these innovations. In particular, in this deliverable D4.1, we have presented three individual scenarios that implement realistic networking testbeds showcasing the contributions towards the three key innovation directions of CHARISMA, namely virtualised security, low latency and open access. The testbeds already available at the various partners' sites in Athens and Centelles, as well as the defined demonstrators, constitute a solid foundation for the assessment and evaluation of the technology solutions being developed within CHARISMA, under diverse setups and configurations.

We have defined three individual intermediate demonstrators (at the 12-month stage of the project): the security demonstrator deployed at the NCSRD premises; the low latency demonstrator coordinated by the University of Essex; and the open access demonstrator deployed on the ApFutura infrastructure. In this deliverable, we have presented the physical and logical architectures of each demonstrator, as well as detailed instructions for the deployment and configuration of the individual components comprising the demonstrators. For this reason, the present deliverable also serves as a technical report that can be used as a guide by outside parties interested in deploying all or part of the 5G CHARISMA technical solutions.

The three demonstrators laid out in the present document are subject to on-going elaboration, development and overall integration as the project advances, depending on the progress of implementation, on the evolution of the technical architecture, and on possible adjustments of technical details. They may also be affected by the continuous evolution of the wider 5G networking area and the various associated 5G PPP working groups; this is especially true for the open-source projects that have been selected for use in the CHARISMA demonstrators. The next deliverable of WP4, D4.2 "Demonstrators infrastructure setup and validation", due in month 24, will reflect all these developments and evolutions and present the final CHARISMA demonstrator designs that will integrate all the required features. Finally, deliverable D4.3 "Validation field and test results and analysis evaluation" will present the validation results of the aforementioned field and lab trials at the conclusion of the project in month 30.




Acronyms

CAL		Converged Aggregation Level
CMO		Control, Management and Orchestration
CO		Central Office
DHCP		Dynamic Host Configuration Protocol
DoS		Denial of Service
eNB		Evolved Node B
eNodeB		Evolved Node B
EPC		Evolved Packet Core
FPGA		Field Programmable Gate Array
FTTH		Fibre-to-the-Home
GUI		Graphical User Interface
GPON		Gigabit Passive Optical Network
GSM		Global System for Mobile communication
GTP		GPRS Tunnelling Protocol
HSPA		High Speed Packet Access
IDS		Intrusion Detection System
InP		Infrastructure Provider
IoT		Internet of Things
LTE		Long-Term Evolution
MB		MoBcache
NFV		Network Function Virtualisation
NFVI-PoP	Network Function Virtualisation Infrastructure Point of Presence
NIC		Network Interface Card
OAI		Open Air Interface
OLT		Optical Line Termination
ONU		Optical Network Unit
OTT		Over-The-Top
OVS		Open vSwitch
P2MP		Point-to-Multi-Point
PtP		Point-to-Point
PHY		Physical layer
Q-in-Q		IEEE 802.1 Q-in-Q VLAN tagging
SDN		Software Defined Networking
UE		User Equipment
vIDS		virtualised Intrusion Detection System
vFW		virtualised Firewall
VM		Virtual Machine
VNF		Virtual Network Function
VNO		Virtual Network Operator
VSF		Virtual Security Function

