Contrail™ Feature Guide

Release 2.21
Modified: 2016-06-13

Juniper Networks, Inc.
1133 Innovation Way
Sunnyvale, California 94089
USA
408-745-2000
www.juniper.net
Copyright © 2016, Juniper Networks, Inc. All rights reserved.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United
States and other countries. The Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other
trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
Contrail™ Feature Guide, Release 2.21
Copyright © 2016, Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
YEAR 2000 NOTICE
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
END USER LICENSE AGREEMENT
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
http://www.juniper.net/support/eula.html. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
Table of Contents

About the Documentation
    Documentation and Release Notes
    Documentation Conventions
    Documentation Feedback
    Requesting Technical Support
        Self-Help Online Tools and Resources
        Opening a Case with JTAC

Part 1: Overview

Chapter 1: Understanding Contrail Controller
    Contrail Overview
    Contrail Description
        Contrail Major Components
            Contrail Control Nodes
            Contrail Compute Nodes – XMPP Agent and vRouter
        Contrail Solution

Part 2: Installing and Upgrading Contrail

Chapter 2: Supported Platforms and Server Requirements
    Supported Platforms
    Server Requirements

Chapter 3: Installing Contrail and Provisioning Roles
    Installation Overview
    Downloading Installation Software
    Installing the Operating System and Contrail Packages
    Configuring System Settings
    Installing the Contrail Packages, Part One (CentOS or Ubuntu)
    Populating the Testbed Definitions File
    Testbed Definitions File Settings for Deploying Contrail with an Existing OpenStack Node
    Supporting Multiple Interfaces on Servers and Nodes
        Support for Multiple Interfaces
            Number of cfgm Nodes Supported
            Uneven Number of Database Nodes Required
            Support for VLAN Interfaces
            Support for Bonding Options
            Support for Static Route Options
        Server Interface Examples
        Interface Naming and Configuration Management
        Setting Up Interfaces and Installing
        Sample testbed.py File With Exclusive Interfaces
    Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines
    Configuring the Control Node
    Adding or Removing a Compute Node in an Existing Contrail Cluster
    Configuring MD5 Authentication for BGP Sessions
    Setting Up and Using a Simple Virtual Gateway with Contrail
        Introduction to the Simple Gateway
        How the Simple Gateway Works
        Setup Without Simple Gateway
        Setup With Simple Gateway
        Simple Gateway Configuration Features
        Packet Flows with the Simple Gateway
        Packet Flow Process From the Virtual Network to the Public Network
        Packet Flow Process From the Public Network to the Virtual Network
        Four Methods for Configuring the Simple Gateway
        Using Fab Provisioning to Configure the Simple Gateway
        Using the vRouter Configuration File to Configure the Simple Gateway
        Using Thrift Messages to Dynamically Configure the Simple Gateway
            How to Dynamically Create a Virtual Gateway
            How to Dynamically Delete a Virtual Gateway
            Using Devstack to Configure the Simple Gateway
        Common Issues with Simple Gateway Configuration
    Configuring Contrail on VMware ESXi
        Introduction
        Using Server Manager to Configure Contrail Compute Nodes on VMware ESXi
        Using Fab Commands to Configure Contrail Compute Nodes on VMware ESXi
        Fab Installation Guidelines for ESXi
    Installing Contrail with Red Hat OpenStack
        Overview: Contrail with Red Hat OpenStack
        Procedure for Installing RHOSP5
        Install and Configure Contrail
        Download and Install Contrail-Install-Packages to the First Config Node
        Update the Testbed.py
        Complete the Installation on Remaining Nodes
        Appendix: Installing with RDO
        Install All-in-One OpenStack
    Configuring OpenStack Nova Docker with Contrail
        Overview: Nova Docker Support in Contrail
            Platform Support: Contrail and OpenStack Nova Docker
            Deploying a Contrail Compute Node to Work With OpenStack Nova Docker
            Saving the Docker Image to Glance
        Launching a Docker Container
        Limitations

Chapter 4: Using Contrail with VMware vCenter
    Installing Contrail with VMware vCenter
        Overview: Integrating Contrail with vCenter Server
        Installation of a Contrail Integration with VMware vCenter
        Preparing the Installation Environment
        Installing the Contrail for vCenter Components
        Provisioning
        Verification
        Deployment Scenarios
        Sample Testbed.py for Contrail vCenter
        User Interfaces for Configuring Features
    Using the Contrail and VMware vCenter User Interfaces to Configure and Manage the Network
        Overview: User Interfaces for Contrail Integration with VMware vCenter
        Contrail User Interface
        Contrail vCenter User Interface
        Feature Configuration for Contrail vCenter
            Creating a Virtual Network
            Delete Virtual Networks – vCenter UI
        Creating a Virtual Machine
            Create a Virtual Machine – vCenter UI
        Configuring the vCenter Network in Contrail UI

Chapter 5: Using Server Manager to Automate Provisioning
    Installing Server Manager
        Overview: Installation Requirements for Server Manager
            Platform Support
            Installation Prerequisites
        Installing Server Manager
            Finishing the Provisioning
            Starting the Server Manager Service
        Upgrading Server Manager Software
            Prerequisite to Upgrading
            Use Steps for New Installation
        Server Manager Installation Completion Checks
            Server Manager Checks
            Server Manager Client Checks
            Server Manager Webui Checks
        Sample Configurations for Server Manager Templates
            Sample Settings
            dhcp.template
            named.conf.options
            named.template
            sendmail.cf
    Using Server Manager to Automate Provisioning
        Overview of Server Manager
        Server Manager Requirements and Assumptions
        Server Manager Component Interactions
        Configuring Server Manager
        Configuring the Cobbler DHCP Template
        User-Defined Tags for Server Manager
        Server Manager Client Configuration File
        Restart Services
        Accessing Server Manager
        Communicating with the Server Manager Client
        Server Manager Commands for Configuring Servers
            Create New Servers or Update Existing Servers
            Delete Servers
            Show Server Configuration
            Server Manager Commands for Managing Clusters
            Server Manager Commands for Managing Tags
            Server Manager Commands for Managing Images
            Server Manager Operational Commands for Managing Servers
            Reimaging Server(s)
            Provisioning and Configuring Roles on Servers
            Restarting Server(s)
            Show Status of Server(s)
        Server Manager REST API Calls
            REST APIs for Server Manager Configuration Database Entries
            API: Add a Server
            API: Delete Servers
            API: Retrieve Server Configuration
            API: Add an Image
            API: Upload an Image
            API: Get Image Information
            API: Delete an Image
            API: Add or Modify a Cluster
            API: Delete a Cluster
            API: Get Cluster Configuration
            API: Get All Server Manager Configurations
            API: Reimage Servers
            API: Provision Servers
            API: Restart Servers
        Example: Reimaging and Provisioning a Server
    Using the Server Manager Web User Interface
        Log In to Server Manager
        Create a Cluster for Server Manager
        Working with Servers in the Server Manager User Interface
        Add a Server
        Edit Tags for Servers
        Using the Edit Config Option for Multiple Servers
        Filter Servers by Tag
        Viewing Server Details
        Configuring Images and Packages
        Add New Image or Package
        Selecting Server Manager Actions for Clusters
        Reimage a Cluster
        Provision a Cluster

Chapter 6: Extending Contrail to Physical Routers, Bare Metal Servers, Switches, and Interfaces
    Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances
        Overview: Support for TOR Switch and OVSDB
        TOR Services Node (TSN)
        Contrail TOR Agent
            Configuration Model
            Control Plane
            Data Plane
        Using the Web Interface to Configure TOR Switch and Interfaces
        Provisioning with Fab Commands
        Prerequisite Configuration for QFX5100 Series Switch
            Debug QFX5100 Configuration
        Changes to Agent Configuration File
        REST APIs
    Configuring High Availability for the Contrail OVSDB TOR Agent
        Overview: High Availability for a TOR Switch
        High Availability Solution for Contrail TOR Agent
        Failover Methodology Description
        Failure Scenarios
        Redundancy for HAProxy
        Configuration for TOR Agent High Availability
        Testbed.py and Provisioning for High Availability
    Using Device Manager to Manage Physical Routers
        Overview: Support for Physical Routers
        Configuration Model
            Configuring a Physical Router
        Alternate Ways to Configure a Physical Router
        Device Manager Configurations
        Prerequisite Configuration Required on MX Series Device
            Debugging Device Manager Configuration
        Configuration Scenarios
            Configuring Physical Routers Using REST APIs
            Sample Python Script Using Rest API for Configuring an MX Device
        Device Manager Functionality
        Dynamic Tunnels
            Web UI Configuration
            BGP Groups
            Extending the Private Network
        Extending the Public Network
        Ethernet VPN Configuration
        Floating IP Addresses and Source Network Address Translation for Guest Virtual Machines and Bare Metal Servers
        Samples of Generated Configurations for an MX Series Device
            Scenario 1: Physical Router With No External Networks
            Scenario 2: Physical Router With External Network, Public VRF
            Scenario 3: Physical Router With External Network, Public VRF, and EVPN
            Scenario 4: Physical Router With External Network, Public VRF, and Floating IP Addresses for a Bare Metal Server
    REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces
        Introduction: REST APIs for Extending Contrail Cluster
        REST API for Physical Routers
        REST API for Physical Interfaces
        REST API for Logical Interfaces

Chapter 7: Installing and Using Contrail Storage
    Installing and Using Contrail Storage
        Overview of the Contrail Storage Solution
        Basic Storage Functionality with Contrail
        Ceph Block and Object Storage Functionality
        Using the Contrail Storage User Interface
        Hardware Specifications
        Software Files for Compute Storage Nodes
        Contrail OpenStack Nova Modifications
        Installing the Contrail Storage Solution
        Using Fabric Commands to Install and Configure Storage
        Fabric Installation Procedure
        Using Server Manager to Install and Configure Storage
        Server Manager Installation Procedure for Storage
        Example Configurations for Storage for Reimaging and Provisioning a Server
        Storage Installation Limits

Chapter 8: Upgrading Contrail Software
    Upgrading Contrail Software from Release 2.00 or Greater to Release 2.20
    Adding or Removing a Compute Node in an Existing Contrail Cluster
    DKMS for vRouter Kernel Module

Part 3: Configuring Contrail

Chapter 9: Configuring Virtual Networks
    Creating Projects in OpenStack for Configuring Tenants in Contrail
    Creating a Virtual Network—Juniper Networks Contrail
    Deleting a Virtual Network—Juniper Networks Contrail
    Creating a Virtual Network—OpenStack Contrail
    Deleting a Virtual Network—OpenStack Contrail
    Creating an Image
    Launching a Virtual Machine (Instance)
    Creating a Network Policy—Juniper Networks Contrail
    Associating a Network to a Policy—Juniper Networks Contrail
        Associating Network Policies Overview
        Associating a Network Policy to a Network from the Edit Network
        Associating Networks with Network Policies from the Edit Policy
    Creating a Network Policy—OpenStack Contrail
    Associating a Network to a Policy—OpenStack Contrail
        Associating Network Policies Overview
        Associating a Network Policy to a Network
    Creating a Floating IP Address Pool
    Allocating a Floating IP Address to a Virtual Machine
    Using Security Groups with Virtual Machines (Instances)
        Security Groups Overview
        Creating Security Groups and Adding Rules
    Support for IPv6 Networks in Contrail
        Overview: IPv6 Networks in Contrail
        Creating IPv6 Virtual Networks in Contrail
            Address Assignments
        Adding IPv6 Peers
    Configuring EVPN and VXLAN
        Configuring Forwarding
        Configuring the VXLAN Identifier Mode
        Configuring the VXLAN Identifier
        Configuring Encapsulation Methods

Chapter 10: Example of Deploying a Multi-Tier Web Application Using Contrail
    Example: Deploying a Multi-Tier Web Application
        Multi-Tier Web Application Overview
        Example: Setting Up Virtual Networks for a Simple Tiered Web Application
        Verifying the Multi-Tier Web Application
        Sample Addressing Scheme for Simple Tiered Web Application
        Sample Physical Topology for Simple Tiered Web Application
        Sample Physical Topology Addressing
    Sample Network Configuration for Devices for Simple Tiered Web Application

Chapter 11: Configuring Services
    Configuring DNS Servers
        DNS Overview
        Defining Multiple Virtual Domain Name Servers
        IPAM and Virtual DNS
        DNS Record Types
        Configuring DNS Using the Interface
        Configuring DNS Using Scripts
    Configuring Discovery Service
        Contrail Discovery Service Introduction
        Discovery Service Registration and Publishing
        Discovery Service Subscription
        Discovery Service REST API
        Discovery Service Heartbeats
        Discovery Service Internal Databases
        Discovery Service Client Library
        Discovery Service Debugging
    Support for Multicast
        Subnet Broadcast
        All-Broadcast/Limited-Broadcast and Link-Local Multicast
        Host Broadcast
    Using Static Routes with Services
        Static Routes for Service Instances
        Configuring Static Routes on a Service Instance
        Configuring Static Routes on Service Instance Interfaces
        Configuring Static Routes as Host Routes
    Configuring Metadata Service
    Configuring Load-Balancing-as-a-Service in Contrail
        Overview: Load-Balancing-as-a-Service
        Contrail LBaaS Implementation

Chapter 12: Configuring High Availability
    High Availability Support
        Contrail High Availability Features
        Configuration Options for Enabling Contrail High Availability
        Supported Cluster Topologies for High Availability
        Deploying OpenStack and Contrail on the Same Highly Available Nodes
        Deploying OpenStack and Contrail on Different Highly Available Nodes
        Deploying Contrail Only on Highly Available Nodes
    Juniper OpenStack High Availability
        Introduction
        Contrail High Availability
        OpenStack High Availability
        Supported Platforms
        Juniper OpenStack High Availability Architecture
        Juniper OpenStack Objectives
        Limitations
        Solution Components
        Virtual IP with Load Balancing
        Failure Handling
        Deployment
        Minimum Hardware Requirement
        Compute
        Network
        Installation
        Testbed File for Fab
    Example: Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster
        Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster
        Purging a Controller From an Existing Cluster
        Replacing a Node With a Node That Has the Same IP Address
        Known Limitations and Configuration Guidelines
        Understanding How the System Adds a New Node to an Existing Cluster

Chapter 13: Configuring Service Chaining
    Service Chaining
        Service Chaining Basics
        Service Chaining Configuration Elements
    Service Chaining MX Series Configuration
    Example: Creating an In-Network or In-Network-NAT Service Chain
        Creating an In-Network or In-Network-NAT Service Chain
    Example: Creating a Transparent Service Chain
        Creating a Transparent Mode Service Chain
    ECMP Load Balancing in the Service Chain
    Example: Creating a Service Chain With the CLI
        CLI for Creating a Service Chain
        CLI for Creating a Service Template
        CLI for Creating a Service Instance
        CLI for Creating a Service Policy
        Example: Creating a Service Chain with VSRX and In-Network or Routed Mode
    Using the Juniper Networks Heat Template with Contrail
        Introduction to Heat
        Heat Architecture
        Juniper Heat Plugin
        Example: Creating a Service Template Using Heat

Chapter 14: Configuring Multitenancy Support
    Configuring Multitenancy Support
        Multitenancy Permissions
        API Server
        API Library Keystone Integration
        Supporting Utilities
    Configuring Network QoS Parameters
        Overview
        QoS Configuration Examples
        Limitations

Chapter 15: Optimizing Contrail
    Using a Headless vRouter to Improve Redundancy
        Overview: vRouter Agent Redundancy
        Headless vRouter Function
        Configuring the Headless vRouter
            Using CLI to Configure Headless vRouter
            Using contrail-vrouter-agent.conf to Configure Headless vRouter
    vRouter Command Line Utilities
        Overview
        vif Command
        flow Command
        vrfstats Command
        rt Command
        dropstats Command
        mpls Command
        mirror Command
        vxlan Command
        nh Command
    Route Target Filtering
        Introduction
        Debugging and Troubleshooting Route Target Filtering
        RTF Limitations in Contrail 1.10
    Source Network Address Translation (SNAT)
        Overview
        Neutron APIs for Routers
        Network Namespace
        Using Web UI to Configure Routers with SNAT

Part 4: Monitoring and Troubleshooting Contrail

Chapter 16: Configuring Traffic Mirroring to Monitor Network Traffic
    Configuring Traffic Analyzers and Packet Capture for Mirroring
        Traffic Analyzer Images
        Configuring Traffic Analyzers
        Setting Up Traffic Mirroring Using Monitor > Debug > Packet Capture
        Setting Up Traffic Mirroring Using Configure > Networking > Services
    Configuring Interface Monitoring and Mirroring
    Analyzer Service Virtual Machine (analyzer-vm-console.qcow2)
        Packet Format for Analyzer
        Metadata Format
        Wireshark Changes
        Troubleshooting Packet Display

Chapter 17: Using Contrail Analytics to Monitor and Troubleshoot the Network
    Contrail Analytics Overview
    Analytics Scalability
    High Availability for Analytics
    Ceilometer Support in a Contrail Cloud
        Overview
        Ceilometer Details
        Verification of Ceilometer Operation
        Contrail Ceilometer Plugin
        Ceilometer Installation and Provisioning
    Underlay Overlay Mapping in Contrail
        Overview: Underlay Overlay Mapping using Contrail Analytics
        Underlay Overlay Analytics Available in Contrail
        Architecture and Data Collection
        New Processes/Services for Underlay Overlay Mapping
        External Interfaces Configuration for Underlay Overlay Mapping
        Physical Topology
        SNMP Configuration
        Link Layer Discovery Protocol (LLDP) Configuration
        IPFIX and sFlow Configuration
        Sending pRouter Information to the SNMP Collector in Contrail
        pRouter UVEs
        Contrail User Interface for Underlay Overlay Analytics
        Viewing Topology to the Virtual Machine Level
        Viewing the Traffic of Any Link
        Trace Flows
        Search Flows and Map Flows
        Overlay to Underlay Flow Map Schemas
        Module Operations for Overlay Underlay Mapping
        SNMP Collector Operation
        Topology Module Operation
        IPFIX and sFlow Collector Operation
        Troubleshooting Underlay Overlay Mapping
        Script to Add pRouter Objects
    Monitoring the System
    Debugging Processes Using the Contrail Introspect Feature
    Monitor > Infrastructure > Dashboard
        Monitor Dashboard
        Monitor Individual Details from the Dashboard
        Using Bubble Charts
        Color-Coding of Bubble Charts
    Monitor > Infrastructure > Control Nodes
        Monitor Control Nodes Summary
        Monitor Individual Control Node Details
        Monitor Individual Control Node Console
        Monitor Individual Control Node Peers
        Monitor Individual Control Node Routes
    Monitor > Infrastructure > Virtual Routers
        Monitor vRouters Summary
        Monitor Individual vRouters Tabs
        Monitor Individual vRouter Details Tab
        Monitor Individual vRouters Interfaces Tab
        Configuring Interface Monitoring and Mirroring
        Monitor Individual vRouters Networks Tab
        Monitor Individual vRouters ACL Tab
        Monitor Individual vRouters Flows Tab
        Monitor Individual vRouters Routes Tab
        Monitor Individual vRouter Console Tab
    Monitor > Infrastructure > Analytics Nodes
        Monitor Analytics Nodes
        Monitor Analytics Individual Node Details Tab
        Monitor Analytics Individual Node Generators Tab
        Monitor Analytics Individual Node QE Queries Tab
        Monitor Analytics Individual Node Console Tab
    Monitor > Infrastructure > Config Nodes
        Monitor Config Nodes
        Monitor Individual Config Node Details
        Monitor Individual Config Node Console
    Monitor > Networking
        Monitor > Networking Menu Options
        Monitor > Networking > Dashboard
        Monitor > Networking > Projects
        Monitor Projects Detail
        Monitor > Networking > Networks
    Query > Flows
        Query > Flows > Flow Series
        Example: Query Flow Series
        Query > Flow Records
        Query > Flows > Query Queue
    Query > Logs
        Query > Logs Menu Options
        Query > Logs > System Logs
        Sample Query for System Logs
        Query > Logs > Object Logs
    System Log Receiver in Contrail Analytics
        Overview
        Redirecting System Logs to Contrail Collector
        Exporting Logs from Contrail Analytics
    Example: Debugging Connectivity Using Monitoring for Troubleshooting
        Using Monitoring to Debug Connectivity

Chapter 18: Common Support Answers
    Debugging Ping Failures for Policy-Connected Networks
    Debugging BGP Peering and Route Exchange in Contrail
        Example Cluster
        Verifying the BGP Routers
        Verifying the Route Exchange
        Debugging Route Exchange with Policies
        Debugging Peering with an MX Series Router
        Debugging a BGP Peer Down Error with Incorrect Family
        Configuring MX Peering (iBGP)
        Checking Route Exchange with an MX Series Peer
        Checking the Route in the MX Series Router
    Troubleshooting the Floating IP Address Pool in Contrail
        Example Cluster
        Example
        Example: MX80 Configuration for the Gateway
        Ping the Floating IP from the Public Network
        Troubleshooting Details
        Get the UUID of the Virtual Network
        View the Floating IP Object in the API Server
        View floating-ips in floating-ip-pools in the API Server
        Check Floating IP Objects in the Virtual Machine Interface
        View Floating IP Objects in the IFMAP Server View
        View the BGP Peer Status on the Control Node
        Querying Routes in the Public Virtual Network
        Verification from the MX80 Gateway
        Viewing the Compute Node Vnsw Agent
        Advanced Troubleshooting
Removing Stale Virtual Machines and Virtual Machine Interfaces . . . . . . . . . . . . 516
Problem Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
Show Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Show Virtual Machines Using Python API . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Delete Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Troubleshooting Link-Local Services in Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Overview of Link-Local Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Troubleshooting Procedure for Link-Local Services . . . . . . . . . . . . . . . . . . . . 521
Metadata Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Troubleshooting Procedure for Link-Local Metadata Service . . . . . . . . . . . . 523
Part 5
Contrail Commands and APIs
Chapter 19
Contrail Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
contrail-logs (Accessing Log File Messages) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Command-Line Options for Contrail-Logs . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Option Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
Example Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
contrail-status (Viewing Node Status) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
contrail-version (Viewing Version Information) . . . . . . . . . . . . . . . . . . . . . . . . . 531
service (Managing Services) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Backing Up and Restoring Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Backup Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Restore Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Restore Steps Continued . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Finishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Chapter 20
Contrail Application Programming Interfaces (APIs) . . . . . . . . . . . . . . . . . 547
Contrail Analytics Application Programming Interfaces (APIs) and User-Visible
Entities (UVEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
User-Visible Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Common UVEs in Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Virtual Network UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Virtual Machine UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
vRouter UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
UVEs for Contrail Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Wild Card Query of UVEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Filtering UVE Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Contrail Node Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
UVE for NodeStatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Node Status Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Using Introspect to Get Process Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
contrail-status script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Log and Flow Information APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
HTTP GET APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
HTTP POST API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
POST Data Format Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
Query Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Examining Query Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Examining Query Chunks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Example Queries for Log and Flow Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Working with Neutron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Data Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Network Sharing in Neutron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
Commands for Neutron Network Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
Support for Neutron APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
Contrail Neutron Plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
DHCP Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
Incompatibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
Support for Amazon VPC APIs on Contrail OpenStack . . . . . . . . . . . . . . . . . . . . 576
Overview of Amazon Virtual Private Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Mapping Amazon VPC Features to OpenStack Contrail Features . . . . . . . . 577
VPC and Subnets Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
Euca2ools CLI for VPC and Subnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Security in VPC: Network ACLs Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Euca2ools CLI for Network ACLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Security in VPC: Security Groups Example . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Euca2ools CLI for Security Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Elastic IPs in VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Euca2ools CLI for Elastic IPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Euca2ools CLI for Route Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Supported Next Hops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Internet Gateway Next Hop Euca2ools CLI . . . . . . . . . . . . . . . . . . . . . . . . . . 583
NAT Instance Next Hop Euca2ools CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Example: Creating a NAT Instance with Euca2ools CLI . . . . . . . . . . . . . . . . 584
Part 6
Index
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
List of Figures
Part 2
Installing and Upgrading Contrail
Chapter 3
Installing Contrail and Provisioning Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Figure 1: Configure > Infrastructure > BGP Routers . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 2: BGP Routers Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 3: Create BGP Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 4: Control Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 5: Control Node Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Figure 6: Control Node Peers Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 4
Using Contrail with VMware vCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 7: Contrail VMware vCenter Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Chapter 5
Using Server Manager to Automate Provisioning . . . . . . . . . . . . . . . . . . . . . . 93
Figure 8: Server Manager Component Interactions . . . . . . . . . . . . . . . . . . . . . . . 100
Chapter 6
Extending Contrail to Physical Routers, Bare Metal Servers, Switches,
and Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Figure 9: Configuration Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Figure 10: Add Physical Router Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Figure 11: Add Interface Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Figure 12: High Availability Solution for Contrail TOR Agent . . . . . . . . . . . . . . . . . 154
Figure 13: Failure Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Figure 14: Redundancy for HAProxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Figure 15: Contrail Configuration Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Figure 16: Add Physical Router Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Figure 17: Add Interface Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Figure 18: Edit Global Config Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Figure 19: Edit BGP Router Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Figure 20: Edit Physical Router Window for BGP Groups . . . . . . . . . . . . . . . . . . . 166
Figure 21: Edit Physical Router Window for Extending Private Networks . . . . . . . 167
Figure 22: Edit Network Gateway Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Figure 23: Logical Topology for Floating IP and SNAT . . . . . . . . . . . . . . . . . . . . . . 172
Part 3
Configuring Contrail
Chapter 9
Configuring Virtual Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Figure 24: OpenStack Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Figure 25: Add Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Figure 26: Add IP Address Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Figure 27: Configure Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Figure 28: Create Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Figure 29: Configure Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Figure 30: Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Figure 31: Create Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Figure 32: OpenStack Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Figure 33: OpenStack Network Detail, Associated Instances Tab . . . . . . . . . . . . 221
Figure 34: Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Figure 35: Images & Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Figure 36: Create An Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Figure 37: OpenStack Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Figure 38: Launch Instance, Details Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 39: Launch Instance, Networking Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Figure 40: Network Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Figure 41: Create Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Figure 42: Configure > Networking > Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Figure 43: Edit Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Figure 44: Configure > Networking > Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Figure 45: Edit Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Figure 46: Network Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Figure 47: Create Network Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Figure 48: Network Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Figure 49: Edit Policy Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Figure 50: Networks Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Figure 51: Edit Network Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Figure 52: Configure > Networking > Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Figure 53: Edit Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Figure 54: Allocate Floating IPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Figure 55: Allocate Floating IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Figure 56: Associate Floating IP to Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Figure 57: Security Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Figure 58: Edit Security Group Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Figure 59: Add Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Figure 60: Create Security Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Figure 61: Associate Security Group at Launch Instance . . . . . . . . . . . . . . . . . . . 244
Chapter 10
Example of Deploying a Multi-Tier Web Application Using Contrail . . . . . 255
Figure 62: Simple Tiered Web Use Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Figure 63: Create Floating IP Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 64: Allocate Floating IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 65: Sample Physical Topology for Simple Tiered Web Application . . . . . 260
Figure 66: Sample Physical Topology Addressing . . . . . . . . . . . . . . . . . . . . . . . . . 261
Chapter 11
Configuring Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Figure 67: DNS Servers Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Figure 68: IPAM and Virtual DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Figure 69: Example Usage for NS Record Type . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Figure 70: Configure DNS Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Figure 71: Add DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Figure 72: Add DNS Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Figure 73: Associate IPAMs to DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Figure 74: Configure IP Address Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Figure 75: DNS Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Chapter 13
Configuring Service Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Figure 76: Service Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Figure 77: Contrail Service Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Figure 78: Create Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Figure 79: Add Service Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Figure 80: Add Service Template Shared IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Figure 81: Service Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Figure 82: Create Service Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Figure 83: Create Service Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Figure 84: Service Instance Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Figure 85: Service Instance Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Figure 86: Create Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Figure 87: Edit Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Figure 88: Launch Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Figure 89: Create Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 90: Add Service Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Figure 91: Create Service Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Figure 92: Service Instance Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Figure 93: Create Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Figure 94: Launch Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Figure 95: Load Balancing a Service Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Chapter 15
Optimizing Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Figure 96: Virtual Network With a Private Subnet . . . . . . . . . . . . . . . . . . . . . . . . 359
Figure 97: Edit Router Window to Enable SNAT . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Figure 98: Router Status for SNAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Figure 99: Instance Details Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Part 4
Monitoring and Troubleshooting Contrail
Chapter 16
Configuring Traffic Mirroring to Monitor Network Traffic . . . . . . . . . . . . . . 365
Figure 100: Packet Capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Figure 101: Create Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Figure 102: Analyzer Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Figure 103: Create Analyzer Associate Networks . . . . . . . . . . . . . . . . . . . . . . . . . 368
Figure 104: Launch Analyzer VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Figure 105: Packet Capture Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Figure 106: Service Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Figure 107: Add Service Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Figure 108: Create Service Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Figure 109: Create Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Figure 110: Policy Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Figure 111: Service Instances View Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Figure 112: Individual vRouter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Figure 113: Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Figure 114: Wireshark Packet Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Chapter 17
Using Contrail Analytics to Monitor and Troubleshoot the Network . . . . . 379
Figure 115: Analytics Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Figure 116: Analytics Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Figure 117: Add Physical Router Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Figure 118: Sample Output From a pRouter REST API . . . . . . . . . . . . . . . . . . . . . 394
Figure 119: Sample Output From a pRouter UVE . . . . . . . . . . . . . . . . . . . . . . . . . 395
Figure 120: Physical Topology Related to a vRouter . . . . . . . . . . . . . . . . . . . . . . . 396
Figure 121: Traffic Statistics Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Figure 122: List of Active Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Figure 123: Underlay Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Figure 124: Monitor Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Figure 125: Control Nodes Details Tab Window . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Figure 126: Sandesh Modules for the Contrail Control Process . . . . . . . . . . . . . . 410
Figure 127: Controller Introspect Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Figure 128: BGP Peer Introspect Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Figure 129: BGP Neighbor Summary Introspect Page . . . . . . . . . . . . . . . . . . . . . . 411
Figure 130: s042491 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Figure 131: Agent Introspect Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Figure 132: Monitor > Infrastructure > Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . 413
Figure 133: Dashboard Summary Boxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Figure 134: Bubble Summary Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
Figure 135: Control Nodes Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Figure 136: Individual Control Node—Details Tab . . . . . . . . . . . . . . . . . . . . . . . . . 416
Figure 137: Individual Control Node—Console Tab . . . . . . . . . . . . . . . . . . . . . . . . 418
Figure 138: Individual Control Node—Peers Tab . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Figure 139: Individual Control Node—Routes Tab . . . . . . . . . . . . . . . . . . . . . . . . . 421
Figure 140: vRouters Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Figure 141: Individual vRouters—Details Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Figure 142: Individual vRouters—Interfaces Tab . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Figure 143: Individual vRouter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Figure 144: Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Figure 145: Individual vRouters—Networks Tab . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Figure 146: Individual vRouters—ACL Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Figure 147: Individual vRouters—Flows Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Figure 148: Individual vRouters—Routes Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Figure 149: Individual vRouter—Console Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Figure 150: Analytics Nodes Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Figure 151: Monitor Analytics Individual Node Details Tab . . . . . . . . . . . . . . . . . . 435
Figure 152: Individual Analytics Node—Generators Tab . . . . . . . . . . . . . . . . . . . . 436
Figure 153: Individual Analytics Node—QE Queries Tab . . . . . . . . . . . . . . . . . . . 436
Figure 154: Analytics Individual Node—Console Tab . . . . . . . . . . . . . . . . . . . . . . . 437
Figure 155: Config Nodes Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure 156: Individual Config Nodes—Details Tab . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure 157: Individual Config Node—Console Tab . . . . . . . . . . . . . . . . . . . . . . . . . 440
Figure 158: Monitor Networking Menu Options . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Figure 159: Traffic Statistics for Domain Window . . . . . . . . . . . . . . . . . . . . . . . . . 442
Figure 160: Monitor > Networking > Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Figure 161: Monitor Projects Connectivity Details . . . . . . . . . . . . . . . . . . . . . . . . . 444
Figure 162: Traffic Statistics Between Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 444
Figure 163: Projects Instances Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Figure 164: Instance Traffic Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Figure 165: Network Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Figure 166: Individual Network Connectivity Details—Summary Tab . . . . . . . . . 447
Figure 167: Individual Network—Port Map Tab . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Figure 168: Individual Network—Port Distribution Tab . . . . . . . . . . . . . . . . . . . . 448
Figure 169: Individual Network Instances Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Figure 170: Individual Network Details Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Figure 171: Query Flow Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Figure 172: Flow Series Select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Figure 173: Flow Series Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Figure 174: Example: Query Flow Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Figure 175: Query Flow Series Tabular Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Figure 176: Query Flow Series Graphical Results . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Figure 177: Flow Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Figure 178: Flow Records Select Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Figure 179: Where Clause Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Figure 180: Flows Query Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Figure 181: Query > Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Figure 182: Query > Logs > System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Figure 183: Edit Where Clause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Figure 184: Sample Query System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Figure 185: Query > Logs > Object Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Figure 186: Navigate to Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Figure 187: Traffic Statistics for Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Figure 188: Navigate to Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Figure 189: Traffic Statistics for Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figure 190: Navigate to a3s18 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figure 191: Navigate to a3s19 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figure 192: ACL Connectivity a3s18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figure 193: ACL Connectivity a3s19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Figure 194: Routes default-domain:demo:vn0:vn0 . . . . . . . . . . . . . . . . . . . . . . . 464
Figure 195: Routes default-domain:demo:vn16:vn16 . . . . . . . . . . . . . . . . . . . . . . 464
Figure 196: Verify Route and Next Hop a3s18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Figure 197: Verify Route and Next Hop a3s19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Figure 198: Flows for a3s18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Figure 199: Flows for a3s19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Chapter 18
Common Support Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Figure 200: Virtual Machine Status Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Figure 201: Tap Interface Status Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Figure 202: Policies, Attachments, and Traffic Rule Status Window . . . . . . . . . 469
Figure 203: Virtual Network Policy Configuration Window . . . . . . . . . . . . . . . . . 469
Figure 204: Virtual Network Route Information Window . . . . . . . . . . . . . . . . . . . 469
Figure 205: Flow and Dropstats Command List . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Figure 206: Flow Command Output Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Figure 207: Fetch Flow Record Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Figure 208: Unresolved IP Address Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Figure 209: Unresolved Flow Details Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Figure 210: Protocol-Specific Flow Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Figure 211: Protocol-Specific Flow Sample With Deny Action . . . . . . . . . . . . . . . 473
Figure 212: Sample Output, BGP Routers: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Figure 213: Sample Output, BGP Router References: . . . . . . . . . . . . . . . . . . . . . . 476
Figure 214: Sample Output, BGP Neighbor Config: . . . . . . . . . . . . . . . . . . . . . . . . 476
Figure 215: Sample Output, BGP Peering Config: . . . . . . . . . . . . . . . . . . . . . . . . . 477
Figure 216: Sample Output, BGP Neighbor States: . . . . . . . . . . . . . . . . . . . . . . . . 477
Figure 217: Sample Output, Show Routing Instance: . . . . . . . . . . . . . . . . . . . . . . 478
Figure 218: Sample Output, Validate Route: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Figure 219: Sample Output, Validate L3vpn Table: . . . . . . . . . . . . . . . . . . . . . . . . 479
Figure 220: Sample Output, Validate L3vpn Table, Scrolled: . . . . . . . . . . . . . . . . 479
Figure 221: Create Policy Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
Figure 222: Sample Output, Validate Import Target: . . . . . . . . . . . . . . . . . . . . . . 480
Figure 223: Sample Output, Route Import: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
Figure 224: Edit Global ASN Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
Figure 225: Create BGP Peer Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
Figure 226: Sample BGP Peer UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
Figure 227: Sample Established BGP Peer UVE . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Figure 228: Edit Global ASN Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Figure 229: Create BGP Peer Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Figure 230: Sample Established IBGP Peer UVE . . . . . . . . . . . . . . . . . . . . . . . . . 485
Figure 231: Sample Established IBGP Peer Introspect Window . . . . . . . . . . . . . . 485
Figure 232: Routing Instance Route Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Figure 233: Routing Instance Route Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Figure 234: Routing Instance Public IPv4 Route Table . . . . . . . . . . . . . . . . . . . . . 486
Figure 235: Virtual Machine Routing Instance Public IPv4 Route Table . . . . . . . 486
Figure 236: BGP Routing Instance Route Table . . . . . . . . . . . . . . . . . . . . . . . . . . 487
List of Tables
About the Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Table 1: Notice Icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Table 2: Text and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Part 2
Installing and Upgrading Contrail
Chapter 3
Installing Contrail and Provisioning Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Table 3: Create BGP Router Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Chapter 5
Using Server Manager to Automate Provisioning . . . . . . . . . . . . . . . . . . . . . . 93
Table 4: Server Manager Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Table 5: Server Manager Add Server Command Options . . . . . . . . . . . . . . . . . . . 106
Table 6: Server Manager Delete Server Command Options . . . . . . . . . . . . . . . . . 107
Table 7: Server Manager Show Server Command Options . . . . . . . . . . . . . . . . . . 108
Table 8: Server Manager Add Cluster Command Options . . . . . . . . . . . . . . . . . . 109
Table 9: Server Manager Delete Cluster Command Options . . . . . . . . . . . . . . . . . 111
Table 10: Server Manager Show Cluster Command Options . . . . . . . . . . . . . . . . . 111
Part 3
Configuring Contrail
Chapter 9
Configuring Virtual Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Table 11: Add IP Address Management Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Table 12: Create Network Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Table 13: Create Network Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Table 14: Create An Image Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Table 15: Launch Instance Details Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Table 16: Create Policy Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Table 17: Edit Policy Rules Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Table 18: Add Rule Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Chapter 10
Example of Deploying a Multi-Tier Web Application Using Contrail . . . . . 255
Table 19: Sample Addressing Scheme for Example . . . . . . . . . . . . . . . . . . . . . . . 259
Chapter 11
Configuring Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Table 20: DNS Record Types Supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Table 21: Add DNS Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Table 22: Add DNS Record Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Table 23: Associate IPAMs to DNS Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Table 24: DNS Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Table 25: DNS Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Chapter 13
Configuring Service Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Table 26: Add Service Template Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Table 27: Create Service Instances Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Table 28: Add Service Template Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Table 29: Create Service Instances Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Chapter 15
Optimizing Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Table 30: vif Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Part 4
Monitoring and Troubleshooting Contrail
Chapter 16
Configuring Traffic Mirroring to Monitor Network Traffic . . . . . . . . . . . . . . 365
Table 31: Analyzer Rule Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Table 32: Add Service Template Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Table 33: Create Service Instances Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Table 34: Add Rule Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Chapter 17
Using Contrail Analytics to Monitor and Troubleshoot the Network . . . . . 379
Table 35: Monitor Menu Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Table 36: Dashboard Summary Boxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
Table 37: Control Nodes Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Table 38: Individual Control Node—Details Tab Fields . . . . . . . . . . . . . . . . . . . . . 417
Table 39: Control Node: Console Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Table 40: Control Node: Peers Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Table 41: Control Node: Routes Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
Table 42: vRouters Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Table 43: vRouters Details Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Table 44: vRouters: Interfaces Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Table 45: vRouters: Networks Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Table 46: vRouters: ACL Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Table 47: vRouters: Flows Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Table 48: vRouters: Routes Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Table 49: Control Node: Console Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Table 50: Fields on Analytics Nodes Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Table 51: Monitor Analytics Individual Node Details Tab Fields . . . . . . . . . . . . . . 435
Table 52: Monitor Analytics Individual Node Generators Tab Fields . . . . . . . . . . 436
Table 53: Analytics Node QE Queries Tab Fields . . . . . . . . . . . . . . . . . . . . . . . . . 436
Table 54: Monitor Analytics Individual Node Console Tab Fields . . . . . . . . . . . . . 437
Table 55: Config Nodes Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Table 56: Individual Config Nodes—Details Tab Fields . . . . . . . . . . . . . . . . . . . . 440
Table 57: Individual Config Node—Console Tab Fields . . . . . . . . . . . . . . . . . . . . 440
Table 58: Projects Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Table 59: Projects Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Table 60: Projects Instances Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Table 61: Network Summary Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Table 62: Query Flow Series Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Table 63: Query Flow Records Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Table 64: Query Flow Records Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Table 65: Query System Logs Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Table 66: Object Logs Query Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Part 5
Contrail Commands and APIs
Chapter 20
Contrail Application Programming Interfaces (APIs) . . . . . . . . . . . . . . . . . 547
Table 67: Amazon VPC and OpenStack Contrail Feature Comparison . . . . . . . . 577
About the Documentation
• Documentation and Release Notes on page xxvii
• Documentation Conventions on page xxvii
• Documentation Feedback on page xxix
• Requesting Technical Support on page xxx
Documentation and Release Notes
To obtain the most current version of all Juniper Networks® technical documentation,
see the product documentation page on the Juniper Networks website at
http://www.juniper.net/techpubs/.
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at http://www.juniper.net/books.
Documentation Conventions
Table 1 on page xxviii defines notice icons used in this guide.
Table 1: Notice Icons

Meaning              Description
Informational note   Indicates important features or instructions.
Caution              Indicates a situation that might result in loss of data or hardware damage.
Warning              Alerts you to the risk of personal injury or death.
Laser warning        Alerts you to the risk of personal injury from a laser.
Tip                  Indicates helpful information.
Best practice        Alerts you to a recommended use or implementation.
Table 2 on page xxviii defines the text and syntax conventions used in this guide.
Table 2: Text and Syntax Conventions

Bold text like this
  Represents text that you type.
  Example: To enter configuration mode, type the configure command:
    user@host> configure

Fixed-width text like this
  Represents output that appears on the terminal screen.
  Example:
    user@host> show chassis alarms
    No alarms currently active

Italic text like this
  • Introduces or emphasizes important new terms.
  • Identifies guide names.
  • Identifies RFC and Internet draft titles.
  Examples:
  • A policy term is a named structure that defines match conditions and actions.
  • Junos OS CLI User Guide
  • RFC 1997, BGP Communities Attribute

Italic text like this
  Represents variables (options for which you substitute a value) in commands or
  configuration statements.
  Example: Configure the machine’s domain name:
    [edit]
    root@# set system domain-name domain-name

Text like this
  Represents names of configuration statements, commands, files, and directories;
  configuration hierarchy levels; or labels on routing platform components.
  Examples:
  • To configure a stub area, include the stub statement at the [edit protocols
    ospf area area-id] hierarchy level.
  • The console port is labeled CONSOLE.

< > (angle brackets)
  Encloses optional keywords or variables.
  Example: stub <default-metric metric>;

| (pipe symbol)
  Indicates a choice between the mutually exclusive keywords or variables on either
  side of the symbol. The set of choices is often enclosed in parentheses for clarity.
  Examples: broadcast | multicast
            (string1 | string2 | string3)

# (pound sign)
  Indicates a comment specified on the same line as the configuration statement
  to which it applies.
  Example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets)
  Encloses a variable for which you can substitute one or more values.
  Example: community name members [ community-ids ]

Indention and braces ( { } )
  Identifies a level in the configuration hierarchy.
  Example:
    [edit]
    routing-options {
      static {
        route default {
          nexthop address;
          retain;
        }
      }
    }

; (semicolon)
  Identifies a leaf statement at a configuration hierarchy level.

GUI Conventions

Bold text like this
  Represents graphical user interface (GUI) items you click or select.
  Examples:
  • In the Logical Interfaces box, select All Interfaces.
  • To cancel the configuration, click Cancel.

> (bold right angle bracket)
  Separates levels in a hierarchy of menu selections.
  Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback, comments, and suggestions so that we can
improve the documentation. You can provide feedback by using either of the following
methods:
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content, and use the pop-up form to provide us with information about your experience. Alternately, you can use the online feedback form at http://www.juniper.net/techpubs/feedback/.
• E-mail—Send your comments to techpubs-comments@juniper.net. Include the document or topic name, URL or page number, and software version (if applicable).
Requesting Technical Support
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC policies—For a complete understanding of our JTAC procedures and policies, review the JTAC User Guide located at http://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.
• Product warranties—For product warranty information, visit http://www.juniper.net/support/warranty/.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day, 7 days a week, 365 days a year.
Self-Help Online Tools and Resources
For quick and easy problem resolution, Juniper Networks has designed an online
self-service portal called the Customer Support Center (CSC) that provides you with the
following features:
• Find CSC offerings: http://www.juniper.net/customers/support/
• Search for known bugs: http://www2.juniper.net/kb/
• Find product documentation: http://www.juniper.net/techpubs/
• Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
• Download the latest versions of software and review release notes: http://www.juniper.net/customers/csc/software/
• Search technical bulletins for relevant hardware and software notifications: http://kb.juniper.net/InfoCenter/
• Join and participate in the Juniper Networks Community Forum: http://www.juniper.net/company/communities/
• Open a case online in the CSC Case Management tool: http://www.juniper.net/cm/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://tools.juniper.net/SerialNumberEntitlementSearch/
Opening a Case with JTAC
You can open a case with JTAC on the Web or by telephone.
• Use the Case Management tool in the CSC at http://www.juniper.net/cm/.
• Call 1-888-314-JTAC (1-888-314-5822 toll-free in the USA, Canada, and Mexico).
For international or direct-dial options in countries without toll-free numbers, see
http://www.juniper.net/support/requesting-support.html.
PART 1
Overview
• Understanding Contrail Controller on page 3
CHAPTER 1
Understanding Contrail Controller
• Contrail Overview on page 3
• Contrail Description on page 4
Contrail Overview
Juniper Networks Contrail is an open, standards-based software solution that delivers
network virtualization and service automation for federated cloud networks. It provides
self-service provisioning, improves network troubleshooting and diagnostics, and enables
service chaining for dynamic application environments across enterprise virtual private
cloud (VPC), managed Infrastructure as a Service (IaaS), and Network Functions
Virtualization use cases.
Contrail simplifies the creation and management of virtual networks to enable
policy-based automation, greatly reducing the need for physical and operational
infrastructure typically required to support network management. In addition, it uses
mature technologies to address key challenges of large-scale managed environments,
including multitenancy, network segmentation, network access control, and IP service
enablement. These challenges are particularly difficult in evolving dynamic application
environments such as the Web, gaming, big data, cloud, and the like.
Contrail allows a tenant or a cloud service provider to abstract virtual networks at a higher
layer to eliminate device-level configuration and easily control and manage policies for
tenant virtual networks. A browser-based user interface enables users to define virtual
network and network service policies, then configure and interconnect networks simply
by attaching policies. Contrail also extends native IP capabilities to the hosts (compute
nodes) in the data center to address the scale, resiliency, and service enablement
challenges of traditional orchestration platforms.
Using Contrail, a tenant can define, manage, and control the connectivity, services, and
security policies of the virtual network. The tenant or other users can use the self-service
graphical user interface to easily create virtual network nodes, add and remove IP services
(such as firewall, load balancing, DNS, and the like) to their virtual networks, then connect
the networks using traffic policies that are simple to create and apply. Once created,
policies can be applied across multiple network nodes, changed, added, and deleted, all
from a simple browser-based interface.
Contrail can be used with open cloud orchestration systems such as OpenStack or
CloudStack. It can also interact with other systems and applications based on Operations
Support System (OSS) and Business Support Systems (BSS), using northbound APIs.
Contrail allows customers to build elastic architectures that leverage the benefits of
cloud computing — agility, self-service, efficiency, and flexibility — while providing an
interoperable, scale-out control plane for network services within and across network
domains.
Related Documentation
• Contrail Description on page 4
Contrail Description
• Contrail Major Components on page 4
• Contrail Solution on page 4
Contrail Major Components
The following are the major components of Contrail.
Contrail Control Nodes
• Responsible for the routing control plane, configuration management, analytics, and the user interface.
• Provide APIs to integrate with an orchestration system or a custom user interface (see the sketch after this list).
• Horizontally scalable; can run on multiple servers.
Contrail Compute Nodes – XMPP Agent and vRouter
• Responsible for managing the data plane.
• Functionality can reside on a host OS.
Contrail Solution
Contrail architecture takes advantage of the economics of cloud computing and simplifies
the physical network (IP fabric) with a software virtual network overlay that delivers
service orchestration, automation, and intercloud federation for public and hybrid clouds.
Similar to the native Layer 3 designs of web-scale players in the market and public cloud
providers, the Contrail solution leverages IP as the abstraction between dynamic
applications and networks, ensuring smooth migration from existing technologies, as
well as support of emerging dynamic applications.
The Contrail solution is software running on x86 Linux servers, focused on enabling
multitenancy for enterprise Information Technology as a Service (ITaaS). Multitenancy
is enabled by the creation of multiple distinct Layer 3-enabled virtual networks with
traffic isolation, routing between tenant groups, and network-based access control for
each user group. To extend the IP network edge to the hosts and accommodate virtual
machine workload mobility while simplifying and automating network (re)configuration,
Contrail maintains a real-time state across dynamic virtual networks, exposes the
network-as-a-service to cloud users, and enables deep network diagnostics and analytics
down to the host.
In this paradigm, users of cloud-based services can take advantage of services and
applications and assume that pooled, elastic resources will be orchestrated, automated,
and optimized across compute, storage, and network nodes in a converged architecture
that is application-aware and independent of underlying hardware and software
technologies.
Related Documentation
• Contrail Overview on page 3
• Installation Overview on page 11
PART 2
Installing and Upgrading Contrail
• Supported Platforms and Server Requirements on page 9
• Installing Contrail and Provisioning Roles on page 11
• Using Contrail with VMware vCenter on page 65
• Using Server Manager to Automate Provisioning on page 93
• Extending Contrail to Physical Routers, Bare Metal Servers, Switches, and Interfaces on page 145
• Installing and Using Contrail Storage on page 189
• Upgrading Contrail Software on page 205
CHAPTER 2
Supported Platforms and Server Requirements
• Supported Platforms on page 9
• Server Requirements on page 9
Supported Platforms
Contrail Release 2.21 is supported on the OpenStack Juno and Icehouse releases. Juno is supported on Ubuntu 14.04.2 and CentOS 7.1.
Contrail networking is supported on Red Hat RHOSP 5.0, which is supported only on OpenStack Icehouse.
Contrail Release 2.21 supports VMware vCenter 5.5. vCenter support is limited to Ubuntu 14.04.2 (Linux kernel version 3.13.0-40-generic).
Other supported platforms include:
• CentOS 6.5 (Linux kernel version 2.6.32-358.el6.x86_64)
• CentOS 7.1 (Linux kernel version 3.10.0-229.el7)
• Red Hat 7/RHOSP 5.0 (Linux kernel version 3.10.0-123.el7.x86_64)
• Ubuntu 12.04.4 (Ubuntu kernel version 3.13.0-34-generic)
• Ubuntu 14.04 (Linux kernel version 3.13.0-40-generic)
Server Requirements
The minimum requirement for a proof-of-concept (POC) system is three servers, either physical or virtual machines. All non-compute roles can be configured on each controller node. For scalability and availability reasons, it is highly recommended to use physical servers.
Each server must have a minimum of:
• 64 GB memory
• 300 GB hard drive
• 4 CPU cores
• At least one Ethernet port
For a production environment, each server must have a minimum of:
• 256 GB memory
• 500 GB hard drive
• 16 CPU cores
NOTE: If you are using Contrail Storage, additional hardware requirements can be found in “Installing and Using Contrail Storage” on page 189, Hardware Specifications.
Related Documentation
• Installation Overview on page 11
• Downloading Installation Software on page 12
CHAPTER 3
Installing Contrail and Provisioning Roles
• Installation Overview on page 11
• Downloading Installation Software on page 12
• Installing the Operating System and Contrail Packages on page 13
• Configuring System Settings on page 13
• Installing the Contrail Packages, Part One (CentOS or Ubuntu) on page 14
• Populating the Testbed Definitions File on page 16
• Testbed Definitions File Settings for Deploying Contrail with an Existing OpenStack Node on page 19
• Supporting Multiple Interfaces on Servers and Nodes on page 20
• Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines on page 24
• Configuring the Control Node on page 27
• Adding or Removing a Compute Node in an Existing Contrail Cluster on page 32
• Configuring MD5 Authentication for BGP Sessions on page 33
• Setting Up and Using a Simple Virtual Gateway with Contrail on page 34
• Configuring Contrail on VMware ESXi on page 44
• Installing Contrail with Red Hat OpenStack on page 55
• Configuring OpenStack Nova Docker with Contrail on page 62
Installation Overview
The Contrail Controller is typically installed on multiple servers. The base software image
is installed on all servers to be used, then provisioning scripts are run that launch
role-based components of the software.
The roles used for the installed system include:
• cfgm—Runs the Contrail configuration manager (config-node)
• openstack—Runs OpenStack services such as Nova, Quantum, and the like
• collector—Runs monitoring and analytics services
• compute—Runs the vRouter service and launches tenant virtual machines (VMs)
• control—Runs the control plane service
• database—Runs analytics and configuration database services
• webui—Runs the administrator web-based user interface service
The roles are run on multiple servers in an operating installation. A single node can have
multiple roles. The roles can also run on a single server for testing or demonstration
purposes.
“Installing the Operating System and Contrail Packages” on page 13 describes installing
the Contrail Controller software onto multiple servers.
Your account team can help you determine the number of servers needed for your specific
implementation.
Related Documentation
• Server Requirements on page 9
• Downloading Installation Software on page 12
• Installing the Operating System and Contrail Packages on page 13
Downloading Installation Software
All components necessary for installing the Contrail Controller are available as:
• an RPM file (contrail-install-packages-1.xx-xxx.el6.noarch.rpm) that can be used to install the Contrail system on an appropriate CentOS operating system
• a Debian file (contrail-install-packages-1.xx-xxx~xxxxxx_all.deb) that can be used to install the Contrail system on an appropriate Ubuntu operating system
Versions are available for each Contrail release, for the supported Linux operating systems and versions, and for the supported versions of OpenStack.
All installation images can be downloaded from http://www.juniper.net/support/downloads/?p=contrail#sw.
The Contrail image includes the following software:
• All dependent software packages needed to support installation and operation of OpenStack and Contrail
• Contrail Controller software – all components
• OpenStack release currently in use for Contrail
Related Documentation
• Installing the Operating System and Contrail Packages on page 13
• Configuring System Settings on page 13
• Populating the Testbed Definitions File on page 16
• Installing the Contrail Packages, Part One (CentOS)
• Download Software
Installing the Operating System and Contrail Packages
Install the stock CentOS or Ubuntu operating system image appropriate for your version
of Contrail (CentOS 6.4 or 6.5 or Ubuntu 12.04.4 for Contrail 1.2 and greater) onto the
server, then install Contrail packages separately.
The following are general guidelines for installing the operating system and preparing to
install Contrail.
1. Install a CentOS or Ubuntu minimal distribution as desired on all servers. Follow the published operating system installation procedure for the selected operating system; refer to the website for the operating system.
2. After installation, reboot all of the servers and verify that you can log in to each of them using the root password defined during installation.
3. After the initial installations on all servers, configure some items specific to your systems (see “Configuring System Settings” on page 13), then begin the first part of the installation (see “Installing the Contrail Packages, Part One (CentOS or Ubuntu)” on page 14).
Related Documentation
• Configuring System Settings on page 13
• Installing the Contrail Packages, Part One (CentOS or Ubuntu) on page 14
• Populating the Testbed Definitions File on page 16
• Download Software
Configuring System Settings
After installing the base image to all servers being used in the installation, and before
running role provisioning scripts, perform the following steps to configure items specific
to your environment.
Perform these configuration steps each time you perform an initial installation or an
upgrade to a new release.
To configure system settings:
1. Update /etc/resolv.conf with nameserver information specific to your system.
2. Update /etc/sysconfig/network with the hostname and domain information specific to your system.
3. Configure the LAN port with network information specific to your system:
a. Use ifconfig -a to determine which LAN port you are using, as this might not be obvious on some systems due to the ways interfaces can be named.
b. Update the appropriate interface configuration file in /etc/sysconfig/network-scripts/ifcfg-<int name> using the following guidelines (sample file contents follow this procedure):
• IPADDR = <IP of the host you want to assign>
• NETMASK = <e.g., 255.255.255.0>
• GATEWAY = <gateway router address>
• BOOTPROTO — delete this, or change dhcp to static
• Other settings can remain as is
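For illustration only, hypothetical contents for the three files named in this procedure might look like the following; every address and name shown is a placeholder, not a value from this guide:

/etc/resolv.conf:
nameserver 10.1.1.10
search example.net

/etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=server1.example.net

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.1.1.1
NETMASK=255.255.255.0
GATEWAY=10.1.1.254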
Related Documentation
• Installing the Contrail Packages, Part One (CentOS)
• Populating the Testbed Definitions File on page 16
Installing the Contrail Packages, Part One (CentOS or Ubuntu)
This procedure includes instructions for installing Contrail for either a CentOS-based
system or an Ubuntu-based system. In each step, be sure to follow the instructions for
your operating system type.
All installation files are available from http://www.juniper.net/support/downloads/?p=contrail#sw.
CentOS Systems
Contrail packages for CentOS are provided either as part of the Contrail ISO installation
or separately in an RPM file with the format:
contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm, where
xx-xxx~openstack_version represents the release number, build number, and OpenStack
common version name (such as Havana or Icehouse) for the included Contrail install
packages.
If you already have a compatible operating system installed, you can choose to copy only
the Contrail packages after the base operating system installation is complete. The base
operating system can be installed using netboot or a USB, using installation instructions
for that operating system.
Ubuntu Systems
Contrail packages for Ubuntu are provided only as packages in a Debian file of the format:
contrail-install-packages-1.xx-xxx~openstack_version_all.deb, where
xx-xxx~openstack_version represents the release number, build number, and OpenStack
common version name (such as Havana or Icehouse) for the included Contrail install
packages.
It is expected that you already have a compatible Ubuntu operating system installed, such as Ubuntu 12.04.3 LTS with kernel version 3.13.0-34-generic, before installing the Contrail packages.
NOTE: The stock kernel version shipped with 12.04.3 or 12.04.4 LTS is older than 3.13.0-34. In such cases, the following Fabric task can be used to upgrade the kernel version to 3.13.0-34 on all nodes:
cd /opt/contrail/utils; fab upgrade_kernel_all
Installing Contrail Packages for CentOS or Ubuntu
This procedure provides instructions for installing Contrail packages onto either a
CentOS-based system or an Ubuntu-based system.
1. Ensure that a compatible base operating system has been installed, using the installation instructions for that system.
2. Download the appropriate Contrail install packages file from http://www.juniper.net/support/downloads/?p=contrail#sw:
CentOS: contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm
Ubuntu: contrail-install-packages-1.xx-xxx~openstack_version_all.deb
3. Copy the downloaded Contrail install packages file to /tmp/ on the first server for your system installation.
4. On one of the config nodes in your cluster, copy the Contrail packages as follows:
CentOS: scp <id@server>:/path/to/contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm /tmp
Ubuntu: scp <id@server>:/path/to/contrail-install-packages-1.xx-xxx~openstack_version_all.deb /tmp
5. Install the Contrail packages:
CentOS: yum localinstall /tmp/contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm
Ubuntu: dpkg -i /tmp/contrail-install-packages-1.xx-xxx~openstack_version_all.deb
6. Run the setup.sh script. This step creates the Contrail packages repository as well as the Fabric utilities (located in /opt/contrail/utils) needed for provisioning:
cd /opt/contrail/contrail_packages; ./setup.sh
7. Populate the testbed.py definitions file; see “Populating the Testbed Definitions File” on page 16.
NOTE: As of Contrail Release 1.10, Apache ZooKeeper resides on the database node. Because a ZooKeeper ensemble operates most effectively with an odd number of nodes, it is required to have an odd number (3, 5, 7, and so on) of database nodes in a Contrail system.
Related Documentation
• Populating the Testbed Definitions File on page 16
• Download Software
• Supporting Multiple Interfaces on Servers and Nodes on page 20
• Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines on page 24
• Configuring the Control Node on page 27
Populating the Testbed Definitions File
Populate a testbed definitions file, /opt/contrail/utils/fabfile/testbeds/testbed.py, with
parameters specific to your system, then run the fab commands as provided in “Installing
the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining
Machines” on page 24 to launch the role-based provisioning script tasks.
You can view example testbed files on any node in the controller at:
• /opt/contrail/utils/fabfile/testbeds/testbed_multibox_example.py for a multiple server system
For a list of all available Fabric commands, refer to the file /opt/contrail/utils/README.fabric.
Define the following parameters within the testbed.py file:
1. Provide host strings for the nodes in the cluster. Replace the addresses shown in the example with the actual IP addresses of the hosts in your system.
host1 = 'root@1.1.1.1'
host2 = 'root@1.1.1.2'
host3 = 'root@1.1.1.3'
host4 = 'root@1.1.1.4'
host5 = 'root@1.1.1.5'
2. Define external routers (MX Series routers and the like) to which the virtual network controller control nodes will be peered:
ext_routers = [('mx1', '1.1.1.253'), ('mx2', '1.1.1.252')]
If there are no external routers, define:
ext_routers = []
3. Provide the BGP autonomous system number.
router_asn = 64512
NOTE: The default ASN 64512 is a private ASN. A private ASN should be used if an AS is only required to communicate via BGP with a single provider. Because the routing policy between the AS and the provider is not visible in the Internet, a private ASN can be used for this purpose. IANA has reserved AS 64512 through AS 65535 for use as private ASNs. If these circumstances do not apply, you cannot use the default or any other private ASN.
4. Define the host on which the Fabric tasks will be invoked. Replace the address shown
in the example with the actual IP address of the host in your system.
host_build = 'user@10.10.10.10'
5. Define which hosts will operate with which roles.
For multinode setups:
env.roledefs = {
    'all': [host1, host2, host3, host4, host5],
    'database': [host1, host2, host3],
    'cfgm': [host1, host2],
    'control': [host1, host2],
    'compute': [host4, host5],
    'collector': [host1, host2, host3],
    'webui': [host1],
    'build': [host_build],
}
For single node all-in-one setups:
env.roledefs = {
    'all': [host1],
    'database': [host1],
    'cfgm': [host1],
    'control': [host1],
    'compute': [host1],
    'collector': [host1],
    'webui': [host1],
    'build': [host_build],
}
6. Define password credentials for each of the hosts.
env.password = 'secret' # Required only for releases prior to 1.10
env.passwords = {
host1: 'secret',
host2: 'secret',
host3: 'secret',
host4: 'secret',
host5: 'secret',
}
NOTE: Ensure both env.password and env.passwords are set for releases
prior to Contrail Release 1.10.
NOTE: Set appropriate permissions for the testbed.py file because it
contains host credentials.
If your system servers and nodes have multiple interfaces, refer to “Supporting Multiple
Interfaces on Servers and Nodes” on page 20 for information about setting up the
testbed.py for your system.
To deploy a Contrail high availability cluster, refer to “Juniper OpenStack High Availability” on page 294 for information about setting up the testbed.py for your system.
To deploy with an existing OpenStack node, refer to “Testbed Definitions File Settings for Deploying Contrail with an Existing OpenStack Node” on page 19 for testbed.py definitions.
When finished, continue on to “Installing the Contrail Packages, Part Two (CentOS or
Ubuntu) — Installing on the Remaining Machines” on page 24.
Related Documentation
• Supporting Multiple Interfaces on Servers and Nodes on page 20
• Testbed Definitions File Settings for Deploying Contrail with an Existing OpenStack Node on page 19
• Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines on page 24
Testbed Definitions File Settings for Deploying Contrail with an Existing OpenStack Node
It is possible to deploy Contrail when there is already an existing OpenStack node on your system. The following additional testbed.py definitions are required in that case.
1. Update the OpenStack admin password in the testbed.py file:
env.openstack_admin_password = '<password>'
2. Update the keystone environment section as in the following:
env.keystone = {
    'keystone_ip'   : 'x.y.z.a',     # same IP as the OpenStack IP address
    'auth_protocol' : 'http',        # default is http
    'auth_port'     : '35357',       # default is 35357
    'admin_token'   : '$ABC123',     # the admin_token from /etc/keystone/keystone.conf of the OpenStack node
    'admin_user'    : 'admin',       # default is admin
    'admin_password': '<password>',
    'service_tenant': 'service',     # default is service
    'admin_tenant'  : 'admin',       # default is admin
    'region_name'   : 'RegionOne',   # default is RegionOne
    'insecure'      : 'True',        # default is False; "insecure" is applicable only when the protocol is https
    'manage_neutron': 'no',          # default is 'yes'; configures the neutron user/role in keystone if required
}
3. Update the openstack environment section as in the following, where:
• service_token is the common service token for all services such as nova, neutron, glance, cinder, and so on.
• amqp_host is the IP address of the AMQP server to be used for OpenStack.
• manage_amqp defaults to 'no'. If set to 'yes', AMQP is provisioned in the OpenStack nodes, and the OpenStack services use the AMQP in the OpenStack nodes instead of in the config nodes. The amqp_host setting is ignored if manage_amqp is set.
env.openstack = {
    'service_token': '$ABC123',      # the admin_token from keystone.conf of the OpenStack node
    'amqp_host'    : '<ip address>', # same as the IP address of the OpenStack node
    'manage_amqp'  : 'yes',
}
Related Documentation
• Supporting Multiple Interfaces on Servers and Nodes on page 20
• Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines on page 24
Supporting Multiple Interfaces on Servers and Nodes
This section describes how to set up and manage multiple interfaces.
• Support for Multiple Interfaces on page 20
• Server Interface Examples on page 21
• Interface Naming and Configuration Management on page 22
• Setting Up Interfaces and Installing on page 22
• Sample testbed.py File With Exclusive Interfaces on page 23
Support for Multiple Interfaces
Servers and nodes with multiple interfaces should be deployed with exclusive
management and control and data networks. In the case of multiple interfaces per server,
the expectation is that the management network provides only management connectivity
to the cluster, and the control and data network carries the control plane information
and the guest traffic data.
Examples of control traffic include the following:
• XMPP traffic between the control nodes and the compute nodes.
• BGP protocol messages across the control nodes.
• Statistics, monitoring, and health check data collected by the analytics engine from different parts of the system.
For Contrail, control and data must share the same interface, configured in the testbed.py
in a section named control_data.
Number of cfgm Nodes Supported
The Contrail system can have any number of cfgm nodes.
Odd Number of Database Nodes Required
In Contrail, Apache ZooKeeper resides on the database node. Because a ZooKeeper
ensemble operates most effectively with an odd number of nodes, it is required to have
an odd number (3, 5, 7, and so on) of database nodes in a Contrail system.
Support for VLAN Interfaces
A VLAN ID can also be specified in the testbed.py file under the control_data section, similar to the following example:
control_data = {
    host1: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'bond0', 'vlan': '20' },
    host2: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'bond0', 'vlan': '20' },
}
Support for Bonding Options
Contrail provides support for all available bond interface options.
The default bond interface options are:
miimon=100, mode=802.3ad(lacp), xmit_hash_policy=layer3+4
In the testbed.py bond section, anything other than name and member is treated as a bond interface option and provisioned as such. The following is an example:
bond = { host1: { 'name': 'bond0', 'member': ['p2p0p2', 'p2p0p3'], 'lacp_rate': 'slow' } }
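For illustration, a bond stanza that explicitly sets the default option values listed above might look like the following sketch; the host variable and member interface names are placeholders. Because anything other than name and member is passed through as a bond option, no additional syntax is needed for new options:

bond = {
    host1: { 'name': 'bond0', 'member': ['p2p0p2', 'p2p0p3'],
             'mode': '802.3ad', 'miimon': '100', 'xmit_hash_policy': 'layer3+4' },
}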
Support for Static Route Options
Contrail Release 1.04 and later provides support for adding static routes on target systems.
This option is ideal for use cases in which a system has servers with multiple interfaces
and has control_data or management connections that span multiple networks.
The following shows the use of the static_route stanza in the testbed.py file to configure static routes in host2 and host5:
static_route = {
    host2: [
        { 'ip': '<ip address>', 'netmask': '<ip address>', 'gw': '<ip address>', 'intf': 'bond0' },
        { 'ip': '<ip address>', 'netmask': '<ip address>', 'gw': '<ip address>', 'intf': 'bond0' },
    ],
    host5: [
        { 'ip': '<ip address>', 'netmask': '<ip address>', 'gw': '<ip address>', 'intf': 'bond0' },
    ],
}
Server Interface Examples
For Contrail Release 1.10 and later, control and data are required to share the same interface. A set of servers can be deployed in any of the following combinations for management, control, and data:
• mgmt=control=data -- single interface use case
• mgmt, control=data -- exclusive management access, with control and data sharing a single network
The following server interface combinations are no longer allowed for Contrail Release 1.10 and later:
• mgmt=control, data -- dual interfaces in Layer 3 mode, with management and control shared on a single network
• mgmt, control, data -- complete exclusivity across management, control, and data traffic
Interface Naming and Configuration Management
On a standard Linux installation there is no guarantee that a physical interface will come
up with the same name after a system reboot. Linux NetworkManager tries to
accommodate this behavior by linking the interface configurations to the hardware
addresses of the physical ports. However, Contrail avoids using hardware-based
configuration files because this type of solution cannot scale when using remote
provisioning and management techniques.
The Contrail alternative is a threefold interface-naming scheme based on <bus, device,
port (or function)>. As an example, on a server operating system that typically gives
interface names such as p4p0 and p4p1 for onboard interfaces, the Contrail system
generates those names as p4p0p0 and p4p0p1, when using the optional package
contrail-interface-name.
When the contrail-interface-name package is installed, it uses the threefold naming
scheme to provide consistent interface naming after reboots. The contrail-interface-name
package is installed by default when a Contrail ISO image is installed. If you are using an
RPM-based installation, you should install the contrail-interface-name package before
doing any network configuration.
If your system already has another mechanism for getting consistent interface names
after a reboot, it is not necessary to install the contrail-interface-name package.
Setting Up Interfaces and Installing
As part of the provisioning scheme, there are two additional commands that the
administrator can use to set up control and data interfaces.
The fab setup_interface command creates bond interface configurations, if there is a
corresponding configuration in the testbed.py file (see sample testbed.py file in the next
section).
When you use the fab setup_interface command, the interface configurations are generated as ifcfg-* files, using the syntax needed by the network service.
The fab add_static_route command creates static routes in a node, if there is a
corresponding configuration in the testbed.py file (see sample testbed.py file in the next
section).
A typical workflow for setting up a cluster with multiple interfaces follows:
1. Set env.interface_rename = True in the testbed.py file. (This installs the contrail-interface-name package on the compute nodes.)
2. fab install_contrail (Then update the testbed.py file with the renamed interface names.)
3. fab setup_interface
4. fab add_static_route
5. fab setup_all
NOTE: The fab setup_interface command and fab add_static_route command
can be executed simultaneously by using the fab setup_network command.
In cases where the fab setup_interface command is not used for setting up the interfaces,
configurations for the data interface are migrated as part of the vrouter installation on
the compute nodes.
If the data interface is a bond interface, the bond member interfaces are reconfigured
into network service based configurations using appropriate ifcfg script files.
Sample testbed.py File With Exclusive Interfaces
The following is a sample testbed.py definitions file that shows the configuration for
exclusive interfaces for management and control and for data networks.
# testbed file
from fabric.api import env

os_username = 'admin'
os_password = '<password>'
os_tenant_name = 'demo'

host1 = 'host@<ip address>'
host2 = 'host@<ip address>'
host3 = 'host@<ip address>'
host4 = 'host@<ip address>'
host5 = 'host@<ip address>'
host6 = 'host@<ip address>'
host7 = 'host@<ip address>'
host8 = 'host@<ip address>'

ext_routers = [('mx1', '<ip address>')]
router_asn = <asn>
public_vn_rtgt = 10003
public_vn_subnet = '<ip address set>'
host_build = 'host@<ip address>'

env.roledefs = {
    'all': [host1, host2, host3, host4, host5, host6, host7, host8],
    'cfgm': [host1],
    'openstack': [host6],
    'webui': [host7],
    'control': [host4, host3],
    'compute': [host2, host5],
    'collector': [host2, host3],
    'database': [host8],
    'build': [host_build],
}

env.hostnames = {
    'all': ['nodea10', 'nodea4', 'nodea2', 'nodeb2', 'nodeb12', 'nodea32', 'nodec36', 'nodec31']
}

bond = {
    host2: { 'name': 'bond0', 'member': ['p2p0p0', 'p2p0p1', 'p2p0p2', 'p2p0p3'], 'mode': 'balance-xor' },
    host5: { 'name': 'bond0', 'member': ['p4p0p0', 'p4p0p1', 'p4p0p2', 'p4p0p3'], 'mode': 'balance-xor' },
}

control_data = {
    host1: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth0' },
    host2: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'p0p25p0' },
    host3: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth0' },
    host4: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth3' },
    host5: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'p6p0p1' },
    host6: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth0' },
    host7: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth1' },
    host8: { 'ip': '<ip address set>', 'gw': '<ip address>', 'device': 'eth1' },
}

env.password = 'secret'  # Required only for releases prior to 1.10
env.passwords = {
    host1: 'secret',
    host2: 'secret',
    host3: 'secret',
    host4: 'secret',
    host5: 'secret',
    host6: 'secret',
    host7: 'secret',
    host8: 'secret',
    host_build: 'secret',
}
Related Documentation
• Juniper OpenStack High Availability on page 294
Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines
Preinstallation Checklist
NOTE: This procedure assumes that you have first completed the following procedures:
• Installing the Contrail Packages, Part One (CentOS or Ubuntu) on page 14
• Populating the Testbed Definitions File on page 16
And that the following system tasks are accomplished:
• All of the servers are time synced.
• All servers can ping from one to another, both on management and on data and control, if part of the system.
• All servers can ssh and scp between one another.
• All host names are resolvable.
• If using CentOS or RHEL, SELinux has been disabled (/etc/sysconfig/selinux).
Each step in this procedure contains instructions for installing on a CentOS system or an Ubuntu system. Be sure to follow the instructions specific to your operating system.
To copy and install Contrail packages on the remaining machines in your cluster, you can use scp and yum localinstall as on the first server (for CentOS) or scp and dpkg -i (for Ubuntu), as in “Installing the Contrail Packages, Part One (CentOS or Ubuntu)” on page 14, or you can use a Fabric utility to copy onto all machines at once, as follows:
1. Ensure that the testbed.py file has been created at /opt/contrail/utils/fabfile/testbeds and populated with information specific to your cluster. See “Populating the Testbed Definitions File” on page 16.
2. Run Fabric commands to install packages as follows:
CentOS: /opt/contrail/utils/fab install_pkg_all:/tmp/contrail-install-packages-1.xx-xxx~openstack_version.el6.noarch.rpm
Ubuntu: /opt/contrail/utils/fab install_pkg_all:/tmp/contrail-install-packages-1.xx-xxx~openstack_version_all.deb
NOTE: Fab commands are always run from /opt/contrail/utils/.
3. Ubuntu: The recommended kernel version for Ubuntu-based systems is 3.13.0-40. Nodes can be upgraded to kernel version 3.13.0-40 using the following Fabric command:
fab upgrade_kernel_all
NOTE: This step upgrades the kernel version to 3.13.0-40 on all nodes and reboots them. Reconnect before performing the remaining tasks.
4. Install the required Contrail packages in each node of the cluster:
fab install_contrail
NOTE: To install Contrail with an existing OpenStack node, use one of the following:
fab install_without_openstack     # Script installs nova-compute in the compute node
fab install_without_openstack:no  # User installs nova-compute in the compute node
5. If your installation has multiple interfaces (see “Supporting Multiple Interfaces on
Servers and Nodes” on page 20), run setup_interface:
fab setup_interface
6. Provision the entire cluster:
fab setup_all
NOTE: To provision Contrail with an existing OpenStack node, use one of the following:
• fab setup_without_openstack # Script provisions vrouter and nova-compute services in the compute nodes; the compute nodes are rebooted on completion
• fab setup_without_openstack:no # Only vrouter services are provisioned; the nova-compute service is not provisioned, and compute nodes are rebooted on completion
• fab setup_without_openstack:no,False # Only vrouter services are provisioned; the nova-compute service is not provisioned, and the compute nodes are not rebooted on completion
7. CentOS only: Alternatively, if the contrail-install-packages file is already installed (as part of installing the Contrail ISO via netboot), follow steps 2, 4, and 5.
When finished, you can proceed to “Configuring the Control Node” on page 27.
Related Documentation
• Configuring the Control Node on page 27
Configuring the Control Node
An important task after a successful installation is to configure the control node. This
procedure shows how to configure basic BGP peering between one or more virtual network
controller control nodes and any external BGP speakers. External BGP speakers, such
as Juniper Networks MX80 routers, are needed for connectivity to instances on the virtual
network from an external infrastructure or a public network.
Before you begin, ensure that the following tasks are completed:
• The Contrail Controller base system image has been installed on all servers.
• The role-based services have been assigned and provisioned.
• IP connectivity has been verified between all nodes of the Contrail Controller.
• You can access the Contrail user interface at http://nn.nn.nn.nn:8080, where nn.nn.nn.nn is the IP address of the configuration node server that is running the contrail-webui service.
To configure BGP peering in the control node:
1. From the Contrail Controller module control node (http://nn.nn.nn.nn:8080), select Configure > Infrastructure > BGP Routers; see Figure 1 on page 28.
Figure 1: Configure > Infrastructure > BGP Routers
A summary screen of the control nodes and BGP routers appears; see Figure 2 on page 28.
Figure 2: BGP Routers Summary
2. (Optional) The global AS number is 64512 by default. To change the global AS number,
on the BGP Router summary screen click Global ASN and enter the new number.
3. To configure control nodes and BGP routers, on the BGP Routers summary screen, click the create icon. The Create BGP Router window is displayed; see Figure 3 on page 29.
Figure 3: Create BGP Router
4. In the Create BGP Router window, click BGP Router to add a new BGP peer or click
Control Node to add control nodes.
For each node you want to add, populate the fields with values for your system. See
Table 3 on page 29.
Table 3: Create BGP Router Fields

Field                 Description
Hostname              Enter a name for the node being added.
IP Address            The IP address of the node.
Autonomous System     Enter the AS number for the node. (BGP peer only)
Router ID             Enter the router ID.
Vendor ID             Required for external peers. Populate with a text identifier, for example, “MX-0”. (BGP peer only)
Address Families      Enter the address family, for example, inet-vpn.
Hold Time             BGP session hold time. The default is 90 seconds; change if needed.
BGP Port              The default is 179; change if needed.
Authentication Mode   Enable MD5 authentication if desired.
Authentication Key    Enter the Authentication Key value.
Physical Router       The type of the physical router.
Available Peers       Displays peers currently available.
Configured Peers      Displays peers currently configured.
5. Click Save to add each node that you configure.
6. To configure an existing node as a peer, select it from the list in the Available Peers
box, then click >> to move it into the Configured Peers box.
Click << to remove a node from the Configured Peers box.
7. You can check for peers by selecting Monitor > Infrastructure > Control Nodes; see Figure 4 on page 30.
Figure 4: Control Nodes
In the Control Nodes screen, click any hostname from the memory map to view its details; see Figure 5 on page 31.
Figure 5: Control Node Details
8. Click the Peers tab to view the peers of a control node; see Figure 6 on page 31.
Figure 6: Control Node Peers Tab
Related Documentation
• Creating a Virtual Network—Juniper Networks Contrail on page 215
• Creating a Virtual Network—OpenStack Contrail on page 219
Adding or Removing a Compute Node in an Existing Contrail Cluster
Use the following procedure to add one or more new compute nodes to an existing
Contrail cluster.
1. Add information about the new compute node(s) to your existing testbed.py file (a sketch of this change follows this procedure).
NOTE: For convenience, this procedure assumes you are adding a node at 1.1.1.1; replace 1.1.1.1 with the correct IP address for the node or nodes that you are adding.
2. Copy the contrail-install-packages file for CentOS or Ubuntu to the /tmp directory of the cfgm node where the fab commands are triggered:
CentOS: scp <id@server>:/path/to/contrail-install-packages-xxx-xxx.el6.noarch.rpm /tmp
Ubuntu: scp <id@server>:/path/to/contrail-install-packages_xxx-xxx~havana_all.deb /tmp
3. For an Ubuntu 12.04.4 or 12.04.3 server with a kernel version older than 3.13.0-34, upgrade the kernel by using the following fab command:
cd /opt/contrail/utils; fab upgrade_kernel_node:root@1.1.1.1
where 1.1.1.1 should be replaced with the server’s actual IP address.
4. Install the contrail-install-packages onto the new compute node (or nodes):
CentOS: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xxx.xxx.noarch.rpm,root@1.1.1.1
Ubuntu: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xxx~havana_all.deb,root@1.1.1.1
5. Use fab commands to add the new compute node (or nodes):
fab add_vrouter_node:root@1.1.1.1
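For illustration only, the testbed.py change in step 1 might look like the following sketch, assuming the host variables and role definitions described in “Populating the Testbed Definitions File” on page 16; the host variable name and address are placeholders:

# new compute node (hypothetical address)
host9 = 'root@1.1.1.1'
env.roledefs['all'].append(host9)
env.roledefs['compute'].append(host9)
env.passwords[host9] = 'secret'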
Removing a Node
Use the following procedure to remove one or more compute nodes from an existing
Contrail cluster.
NOTE: For convenience, this procedure assumes you are removing a node at 1.1.1.1; replace 1.1.1.1 with the correct IP address for the node or nodes that you are removing.
1. Use the following fab command to remove the compute node:
fab detach_vrouter_node:root@1.1.1.1
2. Remove the information about this detached compute node from the existing
testbed.py file.
Configuring MD5 Authentication for BGP Sessions
Contrail Release 2.20 implements MD5 authentication for BGP peering based on RFC
2385.
The primary motivation for this option is to allow BGP to protect itself against the
introduction of spoofed TCP segments into the connection stream. Both of the BGP peers
must be configured with the same MD5 key. Once configured, each BGP peer adds a
16-byte MD5 digest to the TCP header of every segment that it sends. This digest is
produced by applying the MD5 algorithm on various parts of the TCP segment. Upon
receiving a signed segment, the receiver validates it by calculating its own digest from
the same data (using its own key) and compares the two digests. For valid segments,
the comparison is successful since both sides know the key.
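The digest computation itself is standard RFC 2385 behavior rather than anything Contrail-specific. The following minimal Python sketch (not part of Contrail) illustrates what the RFC specifies: an MD5 hash over the TCP pseudo-header, the fixed TCP header with the checksum zeroed, the segment data, and the shared key:

import hashlib
import socket
import struct

def tcp_md5_digest(src_ip, dst_ip, tcp_header, payload, key):
    # RFC 2385 digest: MD5 over the pseudo-header, the TCP header
    # (checksum zeroed, options excluded), the segment data, and the
    # key, in that order. Assumes tcp_header is the fixed 20-byte
    # header with no options; key and payload are bytes.
    pseudo = struct.pack('!4s4sBBH',
                         socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_TCP,
                         len(tcp_header) + len(payload))
    header = tcp_header[:16] + b'\x00\x00' + tcp_header[18:]
    return hashlib.md5(pseudo + header + payload + key).digest()  # 16 bytes

Both peers produce the same 16-byte digest only when they hold the same key, which is why a receiver can validate a signed segment simply by recomputing the digest locally and comparing.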
There are three ways to enable BGP MD5 authentication and set the keys on the Contrail node.
1. To configure MD5 authentication for a BGP peer using an environment dictionary:
Before provisioning the node, include an environment dictionary (env dict) in the testbed.py file as shown. In this example, juniper is the md5 key that is configured on the host1 and host2 nodes.
env.md5 = {
    host1: 'juniper',
    host2: 'juniper',
}
Specify the desired key value on the node. The key must be of type string.
2. Alternatively, if the md5 key is not included in the testbed.py file and the node is already provisioned, you can run the script contrail-controller/src/config/utils/provision_control.py with an argument for md5:
root@<your_node>:/opt/contrail/utils# python provision_control.py --host_name <host_name> --host_ip <host_ip> --router_asn <asn> --api_server_ip <api_ip> --api_server_port <api_port> --oper add --md5 "juniper" --admin_user admin --admin_password <password> --admin_tenant_name admin
3. Another alternative is to use the web user interface:
a. Connect to the node’s IP address at port 8080 (<node_ip>:8080) and select Configure > Infrastructure > BGP Routers. A list of BGP peers is displayed.
b. For a BGP peer, click the gear icon on the right-hand side of the peer entry, then click Edit. The Edit BGP Router dialog box is displayed.
c. Scroll down the window and select Advanced Options.
d. Configure MD5 authentication by selecting Authentication Mode > MD5 and entering the Authentication Key value.
Related Documentation
• Configuring the Authentication Key Update Mechanism for BGP and LDP Routing Protocols
• Creating a Virtual Network—Juniper Networks Contrail on page 215
• Creating a Virtual Network—OpenStack Contrail on page 219
Setting Up and Using a Simple Virtual Gateway with Contrail
• Introduction to the Simple Gateway on page 35
• How the Simple Gateway Works on page 35
• Setup Without Simple Gateway on page 35
• Setup With Simple Gateway on page 36
• Simple Gateway Configuration Features on page 37
• Packet Flows with the Simple Gateway on page 38
• Packet Flow Process From the Virtual Network to the Public Network on page 38
• Packet Flow Process From the Public Network to the Virtual Network on page 39
• Four Methods for Configuring the Simple Gateway on page 39
• Using Fab Provisioning to Configure the Simple Gateway on page 39
• Using the vRouter Configuration File to Configure the Simple Gateway on page 41
• Using Thrift Messages to Dynamically Configure the Simple Gateway on page 41
• Common Issues with Simple Gateway Configuration on page 44
Introduction to the Simple Gateway
Every virtual network has a routing instance associated with it. The routing instance
defines the network connectivity for the virtual machines in the virtual network. By default,
the routing instance contains routes only for virtual machines spawned within the virtual
network. Connectivity between virtual networks is controlled by defining network policies.
The public network is the IP fabric or the external networks across the IP fabric. The virtual
networks do not have access to the public network, and a gateway is used to provide
connectivity to the public network from a virtual network. In traditional deployments, a
routing device such as a Juniper Networks MX Series router can act as a gateway.
The simple virtual gateway for Contrail is a restricted implementation of a gateway that
can be used for experimental purposes. The simple gateway provides the Contrail virtual
networks with access to the public network, and is represented as vgw.
How the Simple Gateway Works
The following sections illustrate how the simple gateway works, first, by showing a virtual
network setup with no simple gateway, then illustrating the same setup with a simple
gateway configured.
Setup Without Simple Gateway
The following shows a virtual network setup when the simple gateway is not configured.
• A virtual network, default-domain:admin:net1, is configured with the subnet 192.168.1.0/24.
• The routing instance default-domain:admin:net1:net1 is associated with the virtual network default-domain:admin:net1.
• A virtual machine with the IP 192.168.1.253 is spawned in net1.
• The virtual machine is spawned on compute server 1.
• An interface, vhost0, is in the host OS of server 1 and is assigned the IP 10.1.1.1/24.
• The interface vhost0 is added to the vRouter in the routing instance fabric.
• The simple gateway is not configured.
Setup With Simple Gateway
The following diagram shows a virtual network setup with the simple gateway configured
for the virtual network default-domain:admin:net1.
The simple gateway configuration uses a gateway interface (vgw) to provide connectivity
between the routing instance Fabric and the default-domain:admin:net1:net1.
The following shows the packet flows between Fabric and the
default-domain:admin:net1:net1.
In the diagram, routes marked with (*) are added by the simple gateway feature.
Simple Gateway Configuration Features
The simple gateway configuration has the following features:
• The simple gateway is configured for the virtual network default-domain:admin:net1.
• The gateway interface vgw provides connectivity between the routing instance default-domain:admin:net1:net1 and the fabric.
• An IP address is not configured for the gateway interface vgw.
• The host OS is configured with the following:
  • Two INET interfaces are added to the host OS: vgw and vhost0.
  • The host OS is not aware of the routing instances, so vgw and vhost0 are part of the same routing instance in the host OS.
  • The simple gateway adds the route 192.168.1.0/24, pointing to the vgw interface, and that setup is added to the host OS. This route ensures that any packet destined to the virtual machine is sent to the vRouter on the vgw interface.
• The vRouter is configured with the following:
  • The routing instance named Fabric is created for the fabric network.
  • The interface vhost0 is added to the routing instance Fabric.
  • The interface eth0, which is connected to the fabric network, is added to the routing instance named Fabric.
  • The simple gateway adds the route 192.168.1.0/24 => vhost0; consequently, packets destined to the virtual network default-domain:admin:net1 are sent to the host OS.
  • The routing instance default-domain:admin:net1:net1 is created for the virtual network default-domain:admin:net1.
  • The interface vgw is added to the routing instance default-domain:admin:net1:net1.
  • The simple gateway adds a default route 0.0.0.0/0 that points to the interface vgw. Packets in the routing instance default-domain:admin:net1:net1 that hit this route are sent to the host OS on the vgw interface. The host OS routes the packets to the Fabric network over the vhost0 interface.
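For illustration, the route that the simple gateway installs in the host OS for net1 is equivalent to what the following command would create; the gateway adds it automatically, so this is shown only to clarify the effect:

ip route add 192.168.1.0/24 dev vgw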
Simple Gateway Restrictions
The following are restrictions of the simple gateway:
• A single compute node can have the simple gateway configured for multiple virtual networks; however, there cannot be overlapping subnets. The host OS does not support routing instances; therefore, all gateway interfaces in the host OS are in the same routing instance. Consequently, the subnets in the virtual networks must not overlap.
• Each virtual network can have a single simple gateway interface. ECMP is not supported.
Packet Flows with the Simple Gateway
The following sections describe the packet flow process when the simple gateway is
configured on a Contrail system.
First, the packet flow process from the virtual network to the public network is described.
Next, the packet flow process from the public network to the virtual network is described.
Packet Flow Process From the Virtual Network to the Public Network
The following describes the procedure used to move a packet from the virtual network
(net1) to the public network.
1. A packet with source-ip=192.168.1.253 and destination-ip=10.1.1.253 comes from a virtual machine and is received by the vRouter on interface tap0.
2. The interface tap0 is in the routing instance of default-domain:admin:net1:net1.
3. The route lookup for 10.1.1.253 in the routing instance default-domain:admin:net1:net1
finds the default route pointing to the tap interface named vgw.
4. The vRouter transmits the packet toward vgw and it is received by the networking
stack of the host OS.
5. The host OS performs forwarding based on its routing table and forwards the packet
on the vhost0 interface.
6. Packets transmitted on vhost0 are received by the vRouter.
7. The vhost0 interface is added to the routing instance Fabric.
8. The routing table for 10.1.1.253 in the routing instance Fabric indicates that the packet
is to be transmitted on the eth0 interface.
9. The vRouter transmits the packet on the eth0 interface.
10. The host 10.1.1.253 on Fabric receives the packet.
Packet Flow Process From the Public Network to the Virtual Network
The following describes the procedure used to move a packet from the public network
to the virtual network (net1).
1. A packet with source-ip=10.1.1.253 and destination-ip=192.168.1.253 coming from the public network is received on interface eth0.
2. The interface tap0 is in the routing instance of default-domain:admin:net1:net1.
3. The vRouter receives the packet from eth0 in the routing instance Fabric.
4. The route lookup for 192.168.1.253 in Fabric points to the interface vhost0.
5. The vRouter transmits the packet on vhost0 and it is received by the networking stack
of the host OS.
6. The host OS performs forwarding according to its routing table and forwards the
packet on the vgw interface.
7. The vRouter receives the packet on the vgw interface into the routing instance
default-domain:admin:net1:net1.
8. The route lookup for 192.168.1.253 in the routing instance
default-domain:admin:net1:net1 points to the tap0 interface.
9. The vRouter transmits the packet on the tap0 interface.
10. The virtual machine receives the packet destined to 192.168.1.253.
Four Methods for Configuring the Simple Gateway
There are four different methods that can be used to configure the simple gateway. Each
of the methods is described in the following sections.
Using Fab Provisioning to Configure the Simple Gateway
You can provision the simple virtual gateway (vgw) during system provisioning with fab
commands by enabling the vgw knob in the Contrail testbed.py file. Select some or all
of the compute nodes to be configured as vgw by identifying vgw roles in the env.roledefs
section, along with other role definitions.
The following example configuration shows three host nodes (host4, host5, and host6)
configured as compute nodes. Two of the compute nodes (host4 and host5) are also
configured for vgw.
In the file section env.vgw, two vgw interfaces (vgw1, vgw2) are configured in host4, and
the two interfaces are associated with virtual network public and public1, respectively.
For each vgw interface, the key ipam-subnets designates the subnets used by each virtual
network. If the same vgw interface is configured in a different compute node, it must be
associated with the same virtual network ipam-subnets. This is illustrated in the following
example, where vgw2 is configured in two compute nodes, host4 and host5. In both host4
and host5, vgw2 is associated with the same ipam-subnets.
The key gateway-routes is an optional parameter. If gateway-routes is configured, the
corresponding vgw will only publish the list of routes identified for gateway routes.
If the vgw interfaces are defined in env.roledefs, when provisioning the system nodes
with the command fab setup_all, the vgw interfaces will be provisioned, along with all
of the other nodes.
Example: testbed.py env.roledefs for vgw
env.roledefs = {
    'all': [host1, host2, host3, host4, host5, host6],
    'cfgm': [host1, host2, host3],
    'openstack': [host2],
    'webui': [host3],
    'control': [host1, host3],
    'compute': [host4, host5, host6],
    'vgw': [host4, host5],  # add the vgw role to one or multiple compute nodes
    'collector': [host1, host3],
    'database': [host1],
    'build': [host_build],
}
env.vgw = {
    host4: {
        'vgw1': {
            'vn': 'default-domain:admin:public:public',
            'ipam-subnets': ['10.204.220.128/29', '10.204.220.136/29'],
            'gateway-routes': ['8.8.8.0/24', '1.1.1.0/24']
        },
        'vgw2': {
            'vn': 'default-domain:admin:public1:public1',
            'ipam-subnets': ['10.204.220.144/29']
        }
    },
    host5: {
        'vgw2': {
            'vn': 'default-domain:admin:public1:public1',
            'ipam-subnets': ['10.204.220.144/29']
        }
    }
}
Using the vRouter Configuration File to Configure the Simple Gateway
Another way to enable a simple gateway is to configure one or more vgw interfaces within
the contrail-vrouter-agent.conf file.
Any changes made in this file for simple gateway configuration are implemented upon
the next restart of the vrouter agent. To configure the simple gateway in the
contrail-vrouter-agent.conf file, each simple gateway interface uses the following
parameters:
• interface=vgwxx—The simple gateway interface name.
• routing_instance=default-domain:admin:publicxx:publicxx—The name of the routing instance for which the simple gateway is being configured.
• ip_block=1.1.1.0/24—A list of the subnet addresses allocated for the virtual network. Routes within this subnet are added to both the host OS and the routing instance for the fabric instance. Represent multiple subnets in the list by separating each with a space.
• routes=10.10.10.1/24 11.11.11.1/24—A list of subnets in the public network that are reachable from the virtual network. Routes within this subnet are added to the routing instance configured for the vgw interface. Represent multiple subnets in the list by separating each with a space.
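For illustration, a gateway entry in contrail-vrouter-agent.conf might look like the following sketch, built from the parameters listed above; the section header and all values are assumptions for this example, not taken from this guide, and the vrouter agent must be restarted for any change to take effect:

[GATEWAY-0]
interface=vgw1
routing_instance=default-domain:admin:public1:public1
ip_block=1.1.1.0/24
routes=10.10.10.1/24 11.11.11.1/24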
Using Thrift Messages to Dynamically Configure the Simple Gateway
Another way to configure the simple gateway is to dynamically send create and delete
thrift messages to the vrouter agent.
In Contrail Release 1.10 and greater, the following thrift messages are available:
• AddVirtualGateway—adds a virtual gateway.
• DeleteVirtualGateway—deletes a virtual gateway.
• ConnectForVirtualGateway—allows audit of the virtual gateway configuration by stateful clients. Upon a new ConnectForVirtualGateway request, one minute is allowed for the configuration to be redone. Any older virtual gateway configuration remaining after this time is deleted.
• How to Dynamically Create a Virtual Gateway on page 42
• How to Dynamically Delete a Virtual Gateway on page 42
• Using Devstack to Configure the Simple Gateway on page 43
How to Dynamically Create a Virtual Gateway
To dynamically create a simple virtual gateway, you run a script on the compute node
where the virtual gateway will be created.
When run, the script does the following:
1. Enables forwarding on the node.
2. Creates the required interface.
3. Adds the interface to the vRouter.
4. Adds required routes to the host OS.
5. Sends the thrift message AddVirtualGateway to the vRouter agent telling it to create the virtual gateway.
Example: Dynamically
Create a Virtual
Gateway
The following procedure dynamically creates the interface vgw1, with subnets
20.30.40.0/24 and 30.40.50.0/24 in the vrf default-domain:admin:vn1:vn1.
1. Set the PYTHONPATH to the location of InstanceService.py and types.py. Use whichever of the following matches your installation:
export PYTHONPATH=/usr/lib/python2.7/dist-packages/nova_contrail_vif/gen_py/instance_service
export PYTHONPATH=/usr/lib/python2.6/site-packages/contrail_vrouter_api/gen_py/instance_service
2. Run the vgw provision command with the oper create option.
Use the option subnets to specify the subnets defined for virtual network vn1.
Use the option routes to specify the routes in the public network that are injected into
vn1.
In the following example, the virtual machines in vn1 can access subnets 8.8.8.0/24
and 9.9.9.0/24 in the public network:
python /opt/contrail/utils/provision_vgw_interface.py --oper create --interface vgw1 \
    --subnets 20.30.40.0/24 30.40.50.0/24 --routes 8.8.8.0/24 9.9.9.0/24 \
    --vrf default-domain:admin:vn1:vn1
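You can optionally verify the result: because the script creates the interface and adds routes to the host OS (steps 2 and 4 above), standard Linux tools can confirm both. The exact output varies by release:
ip link show vgw1          # the vgw interface should now exist in the host OS
ip route | grep 20.30.40.  # routes for the configured subnets should be present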
How to Dynamically Delete a Virtual Gateway
To dynamically delete a virtual gateway, you run a script on the compute node where
the virtual gateway was created.
When run, the script does the following:
1. Sends the DeleteVirtualGateway thrift message to the vRouter agent, telling it to delete the virtual gateway.
2. Deletes the vgw interface from the vRouter.
3. Deletes the vgw routes that were added in the host OS when the vgw was created.
Example: Dynamically Delete a Virtual Gateway
The following procedure dynamically deletes the interface vgw1, and also deletes the subnets 20.30.40.0/24 and 30.40.50.0/24 in the vrf default-domain:admin:vn1:vn1.
1. Set the PYTHONPATH to the location of InstanceService.py and types.py. Use whichever of the following matches your installation:
export PYTHONPATH=/usr/lib/python2.7/dist-packages/nova_contrail_vif/gen_py/instance_service
export PYTHONPATH=/usr/lib/python2.6/site-packages/contrail_vrouter_api/gen_py/instance_service
2. Run the vgw provision command with the oper delete option.
python /opt/contrail/utils/provision_vgw_interface.py --oper delete --interface vgw1 \
    --subnets 20.30.40.0/24 30.40.50.0/24 --routes 8.8.8.0/24 9.9.9.0/24
3. (optional) If using a stateful client, send the ConnectForVirtualGateway thrift message
to the vRouter agent when the client starts.
NOTE: If the vRouter agent restarts or the compute node reboots, the client is expected to reapply its configuration.
Using Devstack to Configure the Simple Gateway
Another way to configure the simple gateway is to set configuration parameters in the
devstack localrc file.
The following parameters are available:
• CONTRAIL_VGW_PUBLIC_NETWORK—The name of the routing instance for which the simple gateway is being configured.
• CONTRAIL_VGW_PUBLIC_SUBNET—A list of subnet addresses allocated for the virtual network. Routes containing these addresses are added to both the host OS and the routing instance for the fabric. List multiple subnets by separating each with a space.
• CONTRAIL_VGW_INTERFACE—The name of the simple gateway interface to create (for example, vgw1), as used in the example that follows.
This method can only add the default route 0.0.0.0/0 into the routing instance specified
in CONTRAIL_VGW_PUBLIC_NETWORK.
Example: Devstack Configuration for Simple Gateway
Add the following lines in the localrc file for stack.sh:
CONTRAIL_VGW_INTERFACE=vgw1
CONTRAIL_VGW_PUBLIC_SUBNET=192.168.1.0/24
CONTRAIL_VGW_PUBLIC_NETWORK=default-domain:admin:net1:net1
NOTE: This method can only add default route 0.0.0.0/0 into the routing
instance specified in CONTRAIL_VGW_PUBLIC_NETWORK.
Common Issues with Simple Gateway Configuration
The following are common problems you might encounter when a simple gateway is
configured.
• Packets from the external network are not reaching the compute node.
The devices in the fabric network must be configured with static routes for the IP addresses defined in the public subnet (192.168.1.0/24 in the example) to reach the compute node that is running as a simple gateway.
• Packets are reaching the compute node, but are not routed from the host OS to the virtual machine.
Check whether firewall_driver in the /etc/nova/nova.conf file is set to nova.virt.libvirt.firewall.IptablesFirewallDriver, which enables IPTables. IPTables can discard packets.
Resolutions include disabling IPTables during runtime or setting the firewall_driver in localrc: LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
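A few quick checks on the compute node can narrow down which case applies; this is a minimal sketch using standard Linux tools, and the grep patterns are only illustrative:
grep firewall_driver /etc/nova/nova.conf   # confirm which firewall driver Nova is using
iptables -L -n                             # inspect rules that might be discarding packets
ip route | grep 192.168.1.                 # confirm the public-subnet routes exist in the host OS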
Configuring Contrail on VMware ESXi
• Introduction on page 44
• Using Server Manager to Configure Contrail Compute Nodes on VMware ESXi on page 45
• Using Fab Commands to Configure Contrail Compute Nodes on VMware ESXi on page 53
• Fab Installation Guidelines for ESXi on page 53
Introduction
A Contrail cluster of nodes consists of one or more servers, each configured to provide
certain role functionalities for the cluster, including control, config, database, analytics,
web-ui, and compute. The cluster provides virtualized network functionality in a cloud
computing environment, such as OpenStack. Typically, the servers running Contrail components use the Linux operating system and the KVM hypervisor.
As of Contrail Release 2.0 and greater, limited capability is provided for extending the
Contrail compute node functionality to servers running the VMware ESXi virtualization
platform. To run Contrail on ESXi, a virtual machine is spawned on a physical ESXi server,
and a compute node is configured on the virtual machine. For Contrail on ESXi, only
compute node functionality is provided at this time.
There are two methods for configuring and provisioning nodes in a Contrail cluster: using
the Contrail Server Manager to automate provisioning or using fab (Fabric) commands.
Both methods can also be used to configure Contrail compute nodes on VMware ESXi.
Using Server Manager to Configure Contrail Compute Nodes on VMware ESXi
The following procedure provides guidelines and steps used to configure compute nodes
on ESXi when using the Contrail Server Manager.
Using Server Manager to provision nodes for ESXi is similar to the procedure for
provisioning nodes on the KVM hypervisor. However, because an ESXi server is represented
by a virtual machine for compute nodes in the Contrail environment, there are additional
items to be identified in the setup. The following procedure describes how to configure
the ESXi servers using Server Manager.
For more details regarding the use and functionality of the Server Manager, refer to Using
Server Manager to Automate Provisioning.
Installation Guidelines Using Server Manager
The following procedure provides guidelines for using the normal Server Manager
installation process and applying it to an environment that includes compute nodes on
ESXi server(s).
1. Define the cluster and the cluster parameters. There are no additional cluster parameters needed for ESXi hosts.
2. Define servers and configure them to be part of the cluster already defined. For a
KVM-only environment, server entries are needed only for physical servers. However,
when there is one or more ESXi servers in the cluster, in addition to a server entry for
each physical ESXi server, there must also be an entry for a virtual machine on each
ESXi server. The virtual machine is used to configure a Contrail compute node in
OpenStack. Refer to the sample file following this procedure for examples of the
additional entry and fields needed for ESXi servers and their virtual machines.
3. Add images for Ubuntu and ESXi to the server manager database. These images are
the base images for configuring the KVM and ESXi servers.
4. Use the Server Manager add image command to add the Contrail package to be used
to provision the Contrail nodes.
5. In addition to the base OS (ESXi) for the ESXi server, also provide a modified Ubuntu
VMDK image that will be used to spawn the virtual machine on ESXi. The Contrail
compute node functionality runs on the spawned virtual machine. The location of the
VMDK image is provided as part of the parameters of the server entry that corresponds
to the ESXi virtual machine.
6. Add all additional configuration objects that are needed as described in the Server
Manager installation instructions. See Using Server Manager to Automate Provisioning.
7. When all configuration objects are created in the Server Manager database, issue the
reimage command, as described in Using Server Manager to Automate Provisioning,
to boot the Ubuntu and ESXi hosts.
8. Issue the Server Manager provision command to provision the nodes on Ubuntu and
ESXi hosts. During provisioning, the virtual machine is spawned on the ESXi server,
using the VMDK image, then that node is configured as a compute node.
9. Upon completion of provisioning, all the nodes on the Ubuntu machine are up and operational. On the ESXi server, only compute node functionality is supported; consequently, only the virtual machine is seen as one of the compute nodes within the OpenStack cluster.
Example: JSON Files for Configuring with Server Manager
The following example demonstrates sample configuration parameters needed to
configure ESXi hosts along with Ubuntu servers. Only compute node functionality is
supported for ESXi at this time.
1. Create a file, cluster.json, for the cluster configuration. The following shows sample parameters to include.
"cluster" : [
{
"id" : "clusteresx",
"email" : "test@testco.net",
"parameters" : {
"router_asn": "<as>",
"database_dir": "/home/cassandra",
"db_initial_token": "",
"openstack_mgmt_ip": "",
"use_certs": "False",
"multi_tenancy": "False",
"encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
"service_token": "<password>",
"keystone_user": "admin",
"keystone_passwd": "<password>",
"keystone_tenant": "admin",
"openstack_password": "<password>",
"analytics_data_ttl": "168",
"compute_non_mgmt_ip": "",
"compute_non_mgmt_gway": "",
"haproxy": "disable",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"external_bgp": "",
"domain": "<domain name>"
}
}
Copyright © 2016, Juniper Networks, Inc.
47
Contrail Feature Guide
}
}
2. Add the cluster, using the JSON file just created:
server-manager add cluster -f cluster.json
3. Create a file, server.json, for the servers configuration.
The following shows sample parameters to include. ESXi-specific parameters are annotated inline. Note that the Ubuntu server is configured with all Contrail role definitions; however, the ESXi virtual machine is configured for the compute role only:
"server": [
{
"id": "nodea10",
"mac_address": "<mac address>",
"ip_address": "<ip address>",
"parameters" : {
"interface_name": "eth1"
},
"roles" :
["config","openstack","control","compute","collector","webui","database"],
"cluster_id": "clusteresx",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"domain": "<domain name>",
"ipmi_address": "<ip address>"
},
{
"id": "nodeh6",
"mac_address": "<mac address>",
"ip_address": "<ip address>",
"parameters": {
"interface_name": "eth0",
"server_license": "",
"esx_nicname": "vmnic0"
},
"roles": [
],
Copyright © 2016, Juniper Networks, Inc.
49
Contrail Feature Guide
"cluster_id": "clusteresx",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"ipmi_address": "<ip address>",
"domain": "<domain name>"
},
{
"id": "ContrailVM",
"host_name": "ContrailVM", <<<<<<<<< Provide a hostname for VM,
otherwise the hostname "nodeb2-contrail-vm" is created and hardcoded.
"mac_address": "<mac address>",<<<<<<<< The mac_address should be
in the range 00:50:56:*:*:*
"ip_address": "<ip address>",
"parameters": {
"interface_name": "eth0",
"esx_server": "nodeh6",
"esx_uplink_nic": "vmnic0",
"esx_fab_vswitch": "vSwitch0",
"esx_vm_vswitch": "vSwitch1",
"esx_fab_port_group": "contrail-fab-pg",
"esx_vm_port_group": "contrail-vm-pg",
"esx_vmdk": "/home/smgr_files/json/ContrailVM-disk1.vmdk",
"vm_deb":
"/home/smgr_files/json/contrail-install-packages_1.10-34~havana_all.deb"
},
"roles": [
"compute"
],
"cluster_id": "clusteresx",
"subnet_mask": "<ip address>",
50
Copyright © 2016, Juniper Networks, Inc.
Chapter 3: Installing Contrail and Provisioning Roles
"gateway": "<ip address>",
"password": "<password>",
"domain": "<domain name>"
}
]
}
4. Add the servers, using the server.json file:
server-manager add server -f server.json
5. Create the file image.json to add the needed images (Ubuntu, ESXi, and Contrail
Ubuntu package) to the Server Manager database.
The following sample shows the parameters to include.
{
    "image": [
        {
            "id": "esx",
            "type": "esxi5.5",
            "version": "5.5",
            "path": "/home/smgr_files/json/esx5.5_x86_64.iso"
        },
        {
            "id": "Ubuntu-12.04.3",
            "type": "ubuntu",
            "version": "12.04.3",
            "path": "/home/smgr_files/json/Ubuntu-12.04.3-server-amd64.iso"
        },
        {
            "id": "contrail-uh-r110-b34",
            "type": "contrail-ubuntu-package",
            "version": "1.10-34",
            "path": "/home/smgr_files/json/contrail-install-packages_1.10-34~havana_all.deb"
        }
    ]
}
6. Add the images:
server-manager add image -f image.json
7. Reimage nodes.
Issue server-manager reimage --server_id nodea10 Ubuntu-12.04.3 to reimage nodea10 with Ubuntu 12.04.3.
Issue server-manager reimage --server_id nodeh6 esx to reimage nodeh6 with ESXi 5.5 (esx is the image ID defined in image.json).
8. Provision roles.
Issue server-manager provision --server_id nodea10 contrail-uh-r110-b34 to configure and provision all the roles on nodea10.
9. Ensure that the DHCP server on Server Manager is configured to provide an IP address to the virtual machine that will be spawned on ESXi as part of the next provisioning command. Modify the dhcp.template on the Server Manager (Cobbler) machine to hand out an IP address for the virtual MAC address that is configured on that virtual machine, as in the sketch that follows this step.
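A minimal sketch of such a DHCP entry, in standard ISC DHCP syntax; the host label, MAC address, and IP address shown are hypothetical placeholders:
host contrail-vm {
    hardware ethernet 00:50:56:01:02:03;   # the virtual MAC configured on the ContrailVM
    fixed-address <ip address>;            # the address the ContrailVM should receive
}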
10. Provision the ESXi server.
Issue server-manager provision --server_id ContrailVM contrail-uh-r110-b34 to configure and provision the compute role on a virtual machine that will be created on the ESXi.
NOTE: Provisioning on the ESXi consists of specifying the server id
corresponding to the virtual machine on ESXi and not the ESXi server itself.
Upon completion of provisioning, the two compute nodes can be seen in the OpenStack
cluster. One of the compute nodes is on a physical Ubuntu server and the other compute
node is on an ESXi virtual machine.
The system is now ready for users to launch virtual machine instances and virtual networks
and use them to communicate with each other and with external networks.
Using Fab Commands to Configure Contrail Compute Nodes on VMware ESXi
As of Contrail Release 2.0 and greater, you can use fab (Fabric) commands to configure
the VMware ESXi hypervisor as a Contrail compute node. Refer to Contrail Installation
and Provisioning Roles for details of using fab commands for installation.
Requirements Before You Begin
Using fab commands to configure a Contrail compute node on an ESXi server has the following prerequisites:
• The testbed.py must be populated with both the ESXi hypervisor information and the Contrail virtual machine information.
• The ESXi hypervisor must be up and running with an appropriate ESXi version.
NOTE: ESXi cannot be installed using fab commands in Contrail.
Fab Installation Guidelines for ESXi
Use the following guidelines when using fab commands to set up ESXi as a compute
node for Contrail.
1. Issue fab prov_esxi to provision the ESXi with the required vswitches, port groups, and the contrail-compute-vm.
The ESXi hypervisor information is provided in the esxi_hosts stanza, as shown in the following.
#Following are the ESXi hypervisor details.
esxi_hosts = {
    #IP address of the hypervisor
    'esxi_host1' : {'ip': '<ip address>',
        #Username and password of the ESXi hypervisor
        'username': 'user',
        'password': '<password>',
        #Uplink port of the hypervisor through which it is connected to the external world
        'uplink_nic': 'vmnic2',
        #Vswitch on which the above uplink exists
        'fabric_vswitch' : 'vSwitch0',
        #Port group on 'fabric_vswitch' through which the ContrailVM connects to the external world
        'fabric_port_group' : 'contrail-fab-pg',
        #Vswitch to which all OpenStack virtual machines are hooked
        'vm_vswitch': 'vSwitch1',
        #Port group on 'vm_vswitch', a member of all VLANs, through which the ContrailVM connects to all OpenStack VMs
        'vm_port_group' : 'contrail-vm-pg',
        #Links the 'host2' ContrailVM to the esxi_host1 hypervisor
        'contrail_vm' : {
            'name' : 'ContrailVM2',    # Name for the contrail-compute-vm
            'mac'  : '<mac address>',  # VM's eth0 MAC address; the same address should be configured on the DHCP server
            'host' : host2,            # host string for the VM, as specified in env.roledefs['compute']
            'vmdk' : 'file.vmdk'       # local path of the VMDK file
        },
    },
    # Another ESXi hypervisor follows
}
NOTE: The VMDK for contrail-compute-vm (ESXi-v5.5-Contrail-host-Ubuntu-precise-12.04.3-LTS.vmdk) can be downloaded from https://www.juniper.net/support/downloads/?p=contrail#sw
NOTE: The contrail-compute-vm gets the IP address and its host name
from the DHCP server.
2. The standard installation fab commands, such as fab install_pkg_all, fab install_contrail, and fab setup_all, can now be used to finish the setup of the entire cluster.
Refer to Installing the Contrail Packages, Part Two for details of using fab commands for installation.
Installing Contrail with Red Hat OpenStack
• Overview: Contrail with Red Hat OpenStack on page 55
• Procedure for Installing RHOSP5 on page 56
• Install and Configure Contrail on page 57
• Download and Install Contrail-Install-Packages to the First Config Node on page 57
• Update the Testbed.py on page 58
• Complete the Installation on Remaining Nodes on page 60
• Appendix: Installing with RDO on page 61
• Install All-in-One OpenStack on page 61
Overview: Contrail with Red Hat OpenStack
If you are planning to use Contrail with Red Hat OpenStack, be sure to first install Red Hat
OpenStack, using either RDO or RHOSP packages.
Procedure for Installing RHOSP5
The following provides general steps for installing Red Hat OpenStack and configuring
the setup for Contrail.
1. Install OpenStack.
To provision an OpenStack node with RHOSP5 packages, refer to the official Red Hat documents: DEPLOYING OPENSTACK: LEARNING ENVIRONMENTS (MANUAL SETUP)
NOTE: Configure a password for Keystone; the same password must be used in the Contrail testbed.py file.
2. Update the OpenStack password in the testbed.py. Copy the OpenStack Keystone
password to the testbed.py. Refer to the Testbed section in the following.
3. Stop the nova-compute and Neutron services in the OpenStack node:
# service openstack-nova-compute stop
# service neutron-server stop
# nova service-disable $(hostname) nova-compute
4. Update nova.conf as follows:
# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class
nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url
http://<FIRST_CFGM_IP>:9696
(FIRST_CFGM_IP is the IP address of the first node in the CFGM role defined in the
testbed.py.)
# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url
http://<KEYSTONE_IP_ADDRESS>:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver
nova.virt.libvirt.LibvirtDriver
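To confirm that the values took effect, you can read them back with openstack-config, using its standard --get option:
# openstack-config --get /etc/nova/nova.conf DEFAULT network_api_class
# openstack-config --get /etc/nova/nova.conf DEFAULT neutron_url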
5. Restart the Nova services:
# service openstack-nova-api restart
# service openstack-nova-conductor restart
# service openstack-nova-scheduler restart
# service openstack-nova-consoleauth restart
6. (optional) Configure the novncproxy_port.
Contrail uses port 5999 for the novncproxy_port. If the same port is preferred for any
openstack node, update the novncproxy_port as in the following:
# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_port 5999
# service openstack-nova-novncproxy restart
Install and Configure Contrail
After Red Hat OpenStack is installed and the testbed.py has been configured with items
for Red Hat, you can install Contrail.
Repositories for Third-Party Packages
The Contrail installation depends on a number of third-party open source packages that are not included in the contrail-install-packages file; they are downloaded and installed from the Internet when the respective repositories have been enabled on the nodes.
• For registering third-party applications and subscribing with Red Hat, refer to the Red Hat documentation: How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager at: https://access.redhat.com/solutions/253273
• For enabling EPEL repositories, the respective EPEL packages might require installation. Refer to the EPEL documentation at: https://fedoraproject.org/wiki/EPEL
Ensure that the contrail_install_repo has the highest priority.
The following are the least-required RHEL repositories to be enabled in all nodes.
subscription-manager repos --enable=rhel-7-server-extras-rpms
subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms
rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
yum -y install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
Priority for a repository can be set manually by editing the corresponding sections in
/etc/yum.repos.d/*.repo files or yum-config-manager can be used.
To change priority of repos using yum-config-manager:
yum install yum-plugin-priorities
yum-config-manager --enable [reponame] --setopt="[reponame].priority=1"
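For example, setting the priority by hand would look roughly like the following sketch of a .repo entry; the file name and baseurl shown are assumptions about a typical local Contrail repository:
# /etc/yum.repos.d/contrail_install_repo.repo (hypothetical layout)
[contrail_install_repo]
name=Contrail Install Repository
baseurl=file:///opt/contrail/contrail_install_repo
enabled=1
gpgcheck=0
priority=1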
Download and Install Contrail-Install-Packages to the First Config Node
The following steps show how to copy and install the contrail-install-packages, before
updating the testbed.py.
1. Copy the contrail-install-packages to /root/ on the first config node. The first config node is the first cfgm node defined in the env.roledefs['cfgm'] section of the testbed.py.
2. Install the contrail-install-packages:
# yum --disablerepo=* localinstall <package file>
3. Set up contrail_install_repo and install fabric-utils:
# cd /opt/contrail/contrail_packages/
# ./setup.sh
4. Create the testbed.py file in the testbeds directory, with host details under different
Contrail roles.
cd /opt/contrail/utils/fabfile/testbeds/
Refer to example testbed.py files available in the testbeds directory.
Update the Testbed.py
The OpenStack node is not provisioned by the Contrail fabric-utils; instead, the testbed.py carries the node information. Use the following steps to update the testbed.py to make the Contrail nodes aware of the OpenStack node information.
1. Update the OpenStack admin password in the testbed.py:
#Openstack admin password
env.openstack_admin_password = '<password>'
and
env.keystone = {
'keystone_ip' : '<ip address>',
'auth_protocol' : 'http',
#Default is http
'auth_port' : '35357',
#Default is 35357
'admin_token' : '$ABC123',
'admin_user' : 'admin',
#Default is admin
'admin_password': '<password>',
'service_tenant': 'service',
#Default is service
'admin_tenant' : 'admin',
#Default is admin
'region_name' : 'RegionOne',
#Default is RegionOne
'insecure' : 'True',
#Default = False
}
2. Update the keystone_ip in the testbed.py. The Keystone IP address is the same as
the OpenStack IP address.
env.keystone = {
'keystone_ip' : '<ip address>',
'auth_protocol' : 'http',
#Default is http
'auth_port' : '35357',
#Default is 35357
'admin_token' : '$ABC123',
'admin_user' : 'admin',
#Default is admin
'admin_password': '<password>',
'service_tenant': 'service',
#Default is service
'admin_tenant' : 'admin',
#Default is admin
'region_name' : 'RegionOne',
#Default is RegionOne
'insecure' : 'True',
#Default = False
}
3. Update the admin_token in the testbed.py.
The admin_token is available in the /etc/keystone/keystone.conf of the OpenStack
node.
env.keystone = {
'keystone_ip' : '<ip address>',
'auth_protocol' : 'http',
#Default is http
'auth_port' : '35357',
#Default is 35357
'admin_token' : '$ABC123',
'admin_user' : 'admin',
#Default is admin
'admin_password': '<password>',
'service_tenant': 'service',
#Default is service
'admin_tenant' : 'admin',
#Default is admin
'region_name' : 'RegionOne',
#Default is RegionOne
'insecure' : 'True',
#Default = False
}
4. (optional) If a different keystone user or tenant for neutron service is preferred, update
the keystone settings as in the following:
env.keystone = {
'keystone_ip' : '<ip address>',
'auth_protocol' : 'http',
#Default is http
'auth_port' : '35357',
#Default is 35357
'admin_token' : '$ABC123',
'admin_user' : 'admin',
#Default is admin
'admin_password': '<password>',
'service_tenant': 'service',
#Default is service
'admin_tenant' : 'admin',
#Default is admin
'region_name' : 'RegionOne',
#Default is RegionOne
'insecure' : 'True',
#Default = False
'manage_neutron': 'no',
#Default is 'yes'; set to 'no' if the neutron user/role need not be configured in Keystone.
}
5. Update the service_token in the testbed.py.
Copy the admin_token from the /etc/keystone/keystone.conf of the OpenStack node,
and enter it as the value for the service_token, as in the following:
env.openstack = {
'service_token' : '$ABC123',
'amqp_host' : '<ip address>',
}
6. Update the amqp_host in the testbed.py.
Enter the value for the amqp_host to be the same as the IP address of the OpenStack
node, as in the following:
env.openstack = {
'service_token' : '$ABC123',
'amqp_host' : '<ip address>',
}
7. Precheck: Issue a precheck to make sure all nodes are reachable and properly updated in the testbed.py.
One easy way to precheck is to issue the following command and see if it passes.
# fab all_command:"uname -a"
Complete the Installation on Remaining Nodes
Use the following procedure to complete the installation.
1. Copy and install contrail-install-packages to all other nodes, except the OpenStack node:
# fab install_pkg_all_without_openstack:</path/to/contrail-install-packages.rpm>
2. Disable iptables. It is currently required to permanently disable iptables. This is a known issue with Contrail installation and setup; disabling iptables is the current resolution.
Iptables can be disabled by issuing the following fab commands. The fab all_command, as used in the following, executes the given command on all nodes configured in the testbed.py.
# fab all_command:"iptables --flush"
# fab all_command:"sudo service iptables stop; echo pass"
# fab all_command:"sudo service ip6tables stop; echo pass"
# fab all_command:"sudo systemctl stop firewalld; echo pass"
# fab all_command:"sudo systemctl status firewalld; echo pass"
# fab all_command:"sudo chkconfig firewalld off; echo pass"
# fab all_command:"sudo /usr/libexec/iptables/iptables.init stop; echo pass"
# fab all_command:"sudo /usr/libexec/iptables/ip6tables.init stop; echo pass"
# fab all_command:"sudo service iptables save; echo pass"
# fab all_command:"sudo service ip6tables save; echo pass"
3. Install Contrail without OpenStack.
Because the OpenStack node is set up with RDO or RHOSP, and the relevant details are updated in the testbed.py, Contrail must be installed without OpenStack. The following fab command installs Contrail without OpenStack, assuming that iptables is permanently disabled on all nodes.
# fab install_without_openstack
4. Set up Contrail.
Use the following fab command to set up Contrail without OpenStack, again assuming that iptables is permanently disabled on all nodes.
# fab setup_without_openstack
5. Verify the setup.
Use the contrail-status and openstack-status commands to verify the setup status.
# fab all_command:"contrail-status; echo pass"
# fab all_command:"openstack-status"
NOTE: The contrail-status command is not available from the Red Hat
OpenStack node.
Appendix: Installing with RDO
You can provision the OpenStack node by using RDO packages instead of RHOSP. This section provides guidelines for installation and preparation when using RDO instead of RHOSP.
For more RDO installation information, refer to the Red Hat OpenStack documentation
at https://openstack.redhat.com/Quickstart.
Prerequisites
The following are the prerequisite steps.
1. You must already have the Red Hat OpenStack repositories set up and enabled.
2. Install RDO.
# yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
3. Install Packstack.
# yum install -y openstack-packstack
Install All-in-One OpenStack
Use the following procedure to prepare to install Contrail when using RDO instead of RHOSP for Red Hat.
1. Install as an all-in-one node using Packstack.
# packstack --allinone --mariadb-pw=juniper123 --use-epel=y
NOTE: Packstack can use an answers file to determine where a service
can be enabled or disabled based on your preference. Refer to Packstack
documentation for more information.
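For example, the standard Packstack answers-file workflow looks like the following sketch; the file path is illustrative:
# packstack --gen-answer-file=/root/answers.txt
# (edit /root/answers.txt to enable or disable services)
# packstack --answer-file=/root/answers.txt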
2. Update the Keystone password to use a predictable password.
Packstack usually sets up Keystone with a random password. Use the following to set a predictable password for Keystone.
# source /root/keystonerc_admin && keystone user-password-update --pass <password> admin
Then update the same password in OS_PASSWORD in /root/keystonerc_admin.
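After the update, the keystonerc_admin file should carry the new password; the following is a minimal sketch of the standard RDO layout, with placeholder values:
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://<keystone ip>:5000/v2.0/
export OS_TENANT_NAME=admin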
Configuring OpenStack Nova Docker with Contrail
• Overview: Nova Docker Support in Contrail on page 62
• Deploying a Contrail Compute Node to Work With OpenStack Nova Docker on page 62
• Launching a Docker Container on page 63
• Limitations on page 63
Overview: Nova Docker Support in Contrail
Introduced: Contrail Release 2.20
OpenStack Nova Docker containers can be used instead of virtual machines for specific
use cases. DockerDriver is a driver for launching Docker containers.
Starting with Contrail Release 2.20, it is possible to configure a compute node in a Contrail
cluster to support Docker containers.
This section describes how to set up a compute node in a Contrail cluster to support
Docker containers.
For more details about OpenStack Nova Docker, refer to the OpenStack wiki at: https://wiki.openstack.org/wiki/Docker.
Platform Support: Contrail and OpenStack Nova Docker
The following table provides the platforms and versions supported for Contrail and
OpenStack Nova Docker.
Platform    Version    OpenStack SKU    Contrail Release
Ubuntu      Precise    Icehouse         2.20 and greater
Ubuntu      Trusty     Icehouse         2.20 and greater
Ubuntu      Trusty     Juno             2.20 and greater
Deploying a Contrail Compute Node to Work With OpenStack Nova Docker
To deploy a compute node to work with Nova Docker in a Contrail cluster, the Nova
DockerDriver is defined in place of the LibvirtDriver in the env.hypervisor dictionary in the
testbed.py.
Example:
The following example uses a testbed.py with the following compute nodes listed in the
env.roledefs dictionary.
env.roledefs = {
    'compute' : [host5, host6, host7],
}
Changes are made to env.hypervisor to deploy the Nova DockerDriver on host7 and deploy host5 and host6 with the default LibvirtDriver.
Once the settings are changed in the testbed.py as in the following, use the steps in “Installing the Contrail Packages, Part Two (CentOS or Ubuntu) — Installing on the Remaining Machines” on page 24, to have the compute host7 deployed with the Nova DockerDriver.
env.hypervisor = {
    host7 : 'docker',
}
Saving the Docker Image to Glance
The Docker image must be pulled from the compute node that is deployed with
DockerDriver (host7) and added to the Glance configuration, as in the following.
host7$ source /etc/contrail/openstackrc
host7$ docker pull cirros
host7$ docker save cirros | glance image-create --is-public=True --container-format=docker
--disk-format=raw --name cirros
NOTE: Always use the same name for the Docker image and its configuration
in Glance. Otherwise, the Docker image cannot be launched.
Launching a Docker Container
Contrail creates a separate Nova availability zone (nova/docker) for compute nodes
deployed with DockerDriver.
To launch a Docker container, use a Nova command that includes the --availability-zone option, as in the following.
host7$ source /etc/contrail/openstackrc
host7$ nova boot --flavor 1 --nic net-id=<netId> --image <imageId> --availability-zone nova/docker <dockerContainerName>
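To confirm that the nova/docker availability zone exists before booting, you can list the zones with the standard Nova CLI; the output is only indicative:
host7$ nova availability-zone-list    # the nova/docker zone should appear for DockerDriver hosts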
Limitations
The following are limitations with the deployment of Nova Docker and Contrail.
• The Docker containers cannot be seen in the virtual network controller console, because console access is not supported with Nova Docker; see https://bugs.launchpad.net/nova-docker/+bug/1321818.
• Docker images must be saved in Glance using the same name as used in the docker images command. If the same name is not used, you will not be able to launch the Docker container.
CHAPTER 4
Using Contrail with VMware vCenter
• Installing Contrail with VMware vCenter on page 65
• Using the Contrail and VMWare vCenter User Interfaces to Configure and Manage the Network on page 78
Installing Contrail with VMware vCenter
• Overview: Integrating Contrail with vCenter Server on page 65
• Installation of a Contrail Integration with VMware vCenter on page 66
• Preparing the Installation Environment on page 67
• Installing the Contrail for vCenter Components on page 68
• Provisioning on page 69
• Verification on page 69
• Deployment Scenarios on page 72
• Sample Testbed.py for Contrail vCenter on page 72
• User Interfaces for Configuring Features on page 77
Overview: Integrating Contrail with vCenter Server
Starting with Contrail Release 2.20, it is possible to install Contrail to work with the
VMware vCenter Server in various vSphere environments.
This topic describes how to install and provision Contrail Release 2.20 and later so that
it works with existing or already provisioned vSphere deployments that use VMware
vCenter as the main orchestrator.
The Contrail VMware vCenter solution comprises the following two main components:
1. Control and management, which runs the following components as needed per Contrail system:
a. A VMware vCenter Server independent installation that is not managed by Juniper Contrail. The Contrail software provisions vCenter with Contrail components and creates the entities required to run Contrail.
b. The Contrail controller, including the configuration nodes, control nodes, analytics,
database, and Web UI, which is installed, provisioned, and managed by Contrail
software.
c. A VMware vCenter plugin provided with Contrail that typically resides on the Contrail
configuration node.
2. VMware ESXi virtualization platforms forming the compute cluster, with Contrail data plane (vRouter) components running inside an Ubuntu-based virtual machine. The virtual machine, named ContrailVM, forms the compute personality while performing Contrail installs. The ContrailVM is set up and provisioned by Contrail. There is one ContrailVM running on each ESXi host.
The following figure shows various components of the Contrail VMware vCenter solution.
Figure 7: Contrail VMware vCenter Solution
Installation of a Contrail Integration with VMware vCenter
This section lists the basic installation procedure and the assumptions and prerequisites
necessary before starting the installation of any VMware vCenter Contrail integration.
Installation: Assumptions and Prerequisites
The following assumptions and prerequisites are required for a successful installation of a VMware vCenter Contrail integrated system.
1. VMware vCenter Server version 5.5 is installed and running on Windows.
2. A cluster of ESXi hosts, VMware version 5.5, is managed by vCenter.
3. The recommended hardware and virtual machines to run the Contrail controller are available. The recommended minimum for a high-availability-capable deployment is three nodes.
4. The software installation packages are downloaded:
a) “.deb” file of Contrail install packages
b) VMDK image of ContrailVM
5. Because Contrail vRouter runs as a virtual machine on each of the ESXi hosts, it needs
an IP address assigned from the same underlay network as the host, all of which must
be specified appropriately in the testbed.py file.
Basic Installation Steps
Before beginning the installation, familiarize yourself with the following basic installation steps for vCenter with Contrail.
1. Spawn a ContrailVM on every ESXi compute host. Set up each ESXi ContrailVM with the resources needed prior to Contrail installation.
2. Install Contrail with all necessary roles defined.
3. Install the vCenter plugin on the Contrail config nodes.
4. Provision the Contrail controller nodes and the vCenter plugin.
5. Provision vCenter with the necessary state (entities), to plumb the data path in the
ESXi hosts to operate in the manner necessary for the Contrail integration.
Software Images Distributed for Installation
The following are the Contrail software image types required for installing a VMware vCenter Server Contrail integrated system:
1. Debian *.deb package for Contrail installation components
2. Contrail Virtual Machine Disk (VMDK) image for the ContrailVM
3. Debian *.deb package for the Contrail vCenter plugin
Preparing the Installation Environment
Use the standard Contrail installation procedure to install Contrail on one of the target
boxes or servers, so that Fabric (fab) scripts can be used to install and provision the entire
cluster.
Follow the steps in the Installing Contrail Packages for CentOS or Ubuntu section in “Installing the Contrail Packages, Part One (CentOS or Ubuntu)” on page 14.
NOTE: The fab scripts require a file named testbed.py, which holds all of the key attributes for the provisioning, including the IP addresses of the Contrail roles. Ensure that the testbed.py file is updated with the correct parameters for your system, including parameters to define the vCenter participation. Refer to the sample testbed.py file for Contrail vCenter provided in this topic.
At the end of the procedure, ensure that the Contrail install Debian package is available
on all of the controller nodes.
If your system has multiple controller nodes, you might also need to run the following
command.
fab install_pkg_all:<Contrail deb package>
Installing the Contrail for vCenter Components
Use the steps in this section to install the Contrail for vCenter components. Refer to the
sample testbed.py file for Contrail vCenter for specific examples.
Step 1: Ensure all information in the esxi_hosts section of the testbed.py file is accurate.
The esxi_hosts = { } section of the testbed.py file drives the spawning of the ContrailVM from the bundled VMDK file.
Ensure all required information in the section is specific to your environment and the
VMDK file can be accessed by the machine running the fab task.
If the IP address and the corresponding MAC address of the Contrail VM are statically
mapped in the DHCP server, specify the static IP address in the host field and the MAC
address in the mac field in the contrail_vm subsection.
Provision the ESXi hosts using the following command:
fab prov_esxi:<contrail deb package path>
When finished, ping each of the Contrail VMs to make sure they respond.
Step 2: Ensure the IP addresses for controller and compute correctly point to the controller nodes and the ContrailVMs (on the ESXi hosts).
Specify the orchestrator to be vCenter for proper provisioning of vCenter related
components, as in the following:
env.orchestrator = 'vcenter'
Run:
fab setup_vcenter
When finished, verify that you can see the ESXi hosts and ContrailVMs in the vCenter user interface. This step also creates the DVSwitch and the required port groups for proper functioning with the Contrail components.
Step 3: Install the vCenter plugin on all of the Contrail config nodes:
fab install_contrail_vcenter_plugin:<contrail-vcenter-plugin deb package>
Step 4: Install the Contrail components into the desired roles on the specified nodes,
using the following command:
fab install_contrail
SR-IOV-Passthrough-for-Networking Setup
If you are using an SR-IOV-Passthrough-for-Networking device with your Contrail setup,
one additional change is necessary.
In the testbed.py file, an optional parameter uplink is provided under the section
ContrailVM. Use uplink to identify the PCI-ID of the SR-IOV-Passthrough device.
Provisioning
Provisioning performs the steps required to create an initial state on the system, including
database and other changes. After performing provisioning, the vCenter is set up with a
datacenter that has a host cluster, a distributed vSwitch, and a distributed port group.
Run the following commands to provision all of the Contrail components, along with
vCenter and the vcenter-plugin.
cd /opt/contrail/utils
fab setup_all
Verification
When the provisioning completes, run the contrail-status command to view a health
check of the Contrail configuration and control components. See the following example:
contrail-status
== Contrail Control ==
supervisor-control:           active
contrail-control             active
contrail-control-nodemgr     active
contrail-dns                 active
contrail-named               active
== Contrail Analytics ==
supervisor-analytics:         active
contrail-analytics-api       active
contrail-analytics-nodemgr   active
contrail-collector           active
contrail-query-engine        active
== Contrail Config ==
supervisor-config:            active
contrail-api:0               active
contrail-config-nodemgr      active
contrail-device-manager      active
contrail-discovery:0         active
contrail-schema              active
contrail-svc-monitor         active
contrail-vcenter-plugin      active
ifmap                        active
== Contrail Database ==
supervisor-database:          active
contrail-database            active
== Contrail Support Services ==
supervisor-support-service:   active
rabbitmq-server              active
Check the vRouter status by logging in to the ContrailVM and running contrail-status. The
following is a sample of the output.
== Contrail vRouter ==
supervisor-vrouter:           active
contrail-vrouter-agent       active
contrail-vrouter-nodemgr     active
Add ESXi Host to vCenter Cluster
It is possible to provision and add an ESXi host to an existing vCenter cluster.
Add an ESXi host by using the following commands. These commands spawn the compute virtual machine on an ESXi host, install and set up Contrail roles, and add the ESXi host to the vCenter cluster and the distributed switch.
-> fab prov_esxi:esxi_host (as specified in the esxi_hosts{} stanza in testbed.py)
-> fab install_pkg_node:<contrail-deb>,root@ContrailVM-ip (as specified by ‘host’ in the
contrail_vm stanza of testbed.py)
-> fab add_esxi_to_vcenter:esxi_host
-> fab add_vrouter_node:root@ContrailVM-ip
Deployment Scenarios
vCenter Blocks in the testbed.py File
Populate the testbed.py file with the new stanzas (env.vcenter and env.esxi_hosts).
The env.vcenter stanza contains information regarding the vCenter server, the datacenter within the vCenter, and the clusters present underneath it.
Contrail requires a DVSwitch key and the corresponding trunk port group to be configured inside the datacenter for internal use, so that information is also provided in the vCenter environment section.
The esxi_hosts stanza enumerates information regarding all ESXi hosts present within the datacenter. For each host, in addition to the access information for the host (IP, password, datastore, and so on), it is required to provide the necessary attributes for its ContrailVM, for example, the MAC address, the IP address, and the path to the VMDK to use while creating the VM.
The vmdk key is used to provide the absolute value of a local path (on the build or target machine where the install is being run) of the VMDK.
Alternatively, use the vmdk_download_path key for a remote path on a server.
Parameterizing the Underlay Connection for Contrail VM
The ContrailVM has two networks: the underlay network, used to carry traffic in and out of the host, and the trunk port group network on the internal DVSwitch (created by specifying the dv_switch key inside the env.vcenter stanza), used to talk to the tenant VMs on the host.
You can create the underlay connection to the ContrailVM in the following ways:
1. Through the standard switch. This is the default. It creates a standard switch with the name vSwitch0 and a port group of contrail-fab-pg. This connection can also be stitched through any other standard switch by explicitly specifying fabric_vswitch and fabric_port_group values under the esxi_hosts stanza.
2. Through a Distributed Virtual Switch, possibly with LAG. If LAG support is needed for an underlay connection into the ContrailVM, use the DVSwitch key. Add the dv_switch_fab and dv_port_group_fab values under the env.vcenter section. Note that this is a vCenter-level resource, hence it needs to be done at that level.
3. Passthrough NIC. Use the uplink specification inside the ContrailVM stanza with the ID of the NIC that has been configured (a priori) as passthrough.
4. SR-IOV. Use the uplink specification inside the ContrailVM stanza with the ID of the NIC that has been configured (a priori) as SR-IOV.
Federation Between VMware and KVM Using Two Contrail Controllers
Note the following considerations:
• To ensure that the two controllers become BGP peers, configure the same router_asn value before running the BGP peer provisioning.
• When creating networks, use the same value for the route target so that BGP can copy the routes from a network in one controller instance to another.
Sample Testbed.py for Contrail vCenter
The following sample testbed.py begins with from fabric.api import env:
#Management ip addresses of hosts in the cluster
host1 = 'root@10.84.7.32'
host2 = 'root@10.84.7.33'
host3 = 'root@10.84.7.34'
host4 = 'root@10.84.23.24'
#External routers if any
#for eg.
#ext_routers = [('mx1', '10.204.216.253')]
ext_routers = []
#Autonomous system number
router_asn = 64512
#Host from which the fab commands are triggered to install and provision
host_build ='root@10.84.7.32'
env.orchestrator = 'vcenter'
#Role definition of the hosts.
env.roledefs = {
'all': [host1, host2, host3, host4],
'cfgm': [host1, host2, host3],
'control': [host1, host2, host3],
'compute': [host4],
'collector': [host1, host2, host3],
'webui': [host1],
'database': [host1, host2, host3],
'build': [host_build],
'storage-master': [host1],
'storage-compute': [host4],
# 'vgw': [host4, host5],  # Optional, only to enable VGW; only compute nodes can support vgw
# 'backup': [backup_node],  # only if the backup_node is defined
}
env.hostnames = {
'all': ['a0s1', 'a0s2', 'a0s3','a0s4', 'a0s5', 'a0s6', 'a0s7', 'a0s8', 'a0s9',
'a0s10','backup_node']
}
env.password = '<password>'
#Passwords of each host
env.passwords = {
host1: '<password>',
host2: '<password>',
host3: '<password>',
host4: '<password>',
# backup_node: 'secret',
host_build: '<password>',
}
#For reimage purpose
env.ostypes = {
host1: 'ubuntu',
host2: 'ubuntu',
host3: 'ubuntu',
host4: 'ubuntu',
}
#######################################
#vcenter provisioning
#server is the vcenter server ip
#port is the port on which vcenter is listening for connection
#username is the vcenter username credentials
#password is the vcenter password credentials
#auth is the authentication type used to talk to vcenter, http or https
#datacenter is the datacenter name we are operating on
#cluster is the clustername we are operating on
#ipfabricpg is the ip fabric port group name
#    if unspecified, the default port group name is contrail-fab-pg
#dv_switch_fab section contains distributed switch related params for the fab network
#    dv_switch_name
#dv_port_group_fab section contains the distributed port group info for the fab network
#    dv_portgroup_name and the number of ports the group has
#dv_switch section contains distributed switch related params
#    dv_switch_name
#dv_port_group section contains the distributed port group info
#    dv_portgroup_name and the number of ports the group has
######################################
env.vcenter = {
    'server': '<ip address>',
    'port': '443',
    'username': 'administrator@vsphere.local',
    'password': '<password>',
    'auth': 'https',
    'datacenter': 'kd_dc',
    'cluster': 'kd_cluster',
    'ipfabricpg': 'kd_ipfabric_pg',
    'dv_switch_fab': { 'dv_switch_name': 'dvs-lag' },
    'dv_port_group_fab': {
        'dv_portgroup_name': 'contrail-fab-pg',
        'number_of_ports': '3',
        'uplink': 'lag1',
    },
    'dv_switch': { 'dv_switch_name': 'kd_dvswitch' },
    'dv_port_group': {
        'dv_portgroup_name': 'kd_dvportgroup',
        'number_of_ports': '3',
    },
}
#######################################
# The compute vm provisioning on the ESXi host
# This section is used to copy a vmdk onto the ESXi box and bring it up as
# the ContrailVM, which is set up as a compute node with only
# vrouter running on it. Each host has an associated ESXi.
#
# esxi_host information:
#     ip: the ESXi IP on which the ContrailVM (host/compute) runs
#     username: username used to log in to ESXi
#     password: password for ESXi
#     fabric_vswitch: the name of the underlay vswitch that runs on ESXi
#         optional, defaults to 'vSwitch0'
#     fabric_port_group: the name of the underlay port group for ESXi
#         optional, defaults to 'contrail-fab-pg'
#     uplink_nic: the NIC used for the underlay
#         optional, defaults to None
#     data_store: the datastore on ESXi where the vmdk is copied to
#     cluster: the cluster to which this ESXi is added
#     contrail_vm information:
#         uplink: the SR-IOV or passthrough PCI ID (for example, 04:10.1); if not provided,
#             defaults to a vmxnet3-based fabric uplink
#         mac: the virtual MAC address for the ContrailVM
#         host: the ContrailVM IP in the form 'user@contrailvm_ip'
#         vmdk: the absolute path of the contrail vmdk used to spawn the VM
#             optional, if vmdk_download_path is specified
#         vmdk_download_path: download path of the contrail vmdk used to spawn the VM
#             optional, if vmdk is specified
#         deb: absolute path of the contrail package to be installed on the ContrailVM
#             optional, if the contrail package is specified in the command line
##############################################
#esxi_hosts = {
#    'esxi': {
#        'ip': '<ip address>',
#        'username': 'user',
#        'password': '<password>',
#        'cluster': 'kd_cluster1',
#        'datastore': "/vmfs/volumes/ds1",
#        'contrail_vm': {
#            'mac': "<mac address>",
#            'uplink': '<ip address>',
#            'host': "host@<ip address>",
#            'vmdk_download_path': "http://<ip address>/vmware/vmdk/ContrailVM-disk1.vmdk",
#        }
#    }
#}
User Interfaces for Configuring Features
The Contrail integration with VMware vCenter provides two user interfaces for configuring and managing features for this type of Contrail system.
Refer to “Using the Contrail and VMWare vCenter User Interfaces to Configure and Manage the Network” on page 78.
Related Documentation
• Using the Contrail and VMWare vCenter User Interfaces to Configure and Manage the Network on page 78
Using the Contrail and VMWare vCenter User Interfaces to Configure and Manage the
Network
Starting with Contrail Release 2.20, you can install Contrail to work with the VMware
vCenter Server in various vSphere environments and use the Contrail user interface and
the vCenter user interface to configure and manage the integrated Contrail system.
• Overview: User Interfaces for Contrail Integration with VMware vCenter on page 78
• Contrail User Interface on page 78
• Contrail vCenter User Interface on page 78
• Feature Configuration for Contrail vCenter on page 79
• Creating a Virtual Machine on page 86
• Configuring the vCenter Network in Contrail UI on page 92
Overview: User Interfaces for Contrail Integration with VMware vCenter
This topic shows how to use the Contrail user interface and the vCenter user interface
to configure and manage features of a Contrail VMware integrated system.
The two user interfaces are available after installing the integrated Contrail system, see
“Installing Contrail with VMware vCenter” on page 65.
When Contrail is integrated with VMware vCenter, the following two user interfaces are
used to manage and configure features of the system.
1. Contrail administration user interface (Contrail UI).
2. Contrail vCenter user interface (vCenter UI).
Contrail User Interface
The Contrail UI is an administrator’s user interface. It provides a view of all components
managed by the Contrail controller.
To log in to the Contrail UI, use your Contrail server main IP address URL as follows:
http://<Contrail IP>:8143
Then log in using your registered Contrail account administrator credentials.
Contrail vCenter User Interface
The Contrail vCenter user interface (vCenter UI) is a subset of the Contrail administration
UI. The Contrail vCenter UI provides a view of all of the virtual components within a
Contrail vCenter project.
To access the log in page for the Contrail vCenter UI, use your Contrail IP address URL
as follows:
http://<Contrail URL>:8143/vcenter
Then use the vCenter registered account login name and password to access the Contrail vCenter UI.
Upon successful login, the Contrail vCenter user interface appears.
Feature Configuration for Contrail vCenter
This section shows how to use the Contrail UI and the Contrail vCenter UI to configure
features for the Contrail vCenter integrated system.
Creating a Virtual Network
This section describes how to create a virtual network using the Contrail UI and the
Contrail vCenter UI.
Create Virtual Network – Contrail UI
After logging in to the Contrail UI, click Configure > Networking > Networks to access
the Networks screen.
At Networks, click the Create (+, plus icon) button to access the Create Network window.
Complete the fields in the Create Network screen. Provide a Primary VLAN value and a
Secondary VLAN value as part of a Private VLAN configuration. Private VLAN pairs are
configured on a Distributed Virtual Switch. Select the values for the Primary and Secondary
VLANs from one of the configured, isolated private-vlan pairs.
The following figure shows the creation of a virtual network named Green-VN.
Click the Save button to create the virtual network.
The virtual network just created (Green-VN) is displayed with its details, as in the following
figure.
Create Virtual Networks – Contrail vCenter UI
You can also create a virtual network in the vCenter UI, and view and manage it from
either the vCenter UI or the Contrail UI.
In vCenter, a virtual network is called a port group, which is part of a distributed switch.
Log in to the vCenter client UI (http://<Contrail URL>:8143/vcenter).
To start creating a virtual network (distributed port group), click the distributed virtual
switch (dvswitch) on the left panel.
The following figure shows the demo_dvswitch has been selected for this example.
To create a virtual network (vCenter port group), at the bottom of the screen, click Create
a new port group.
When you click Create a new port group, the Create Distributed Port Group window
appears, as in the following figure.
Enter the name of the virtual network. Select the VLAN type, then select other details
for the selected VLAN type.
The following figure shows the Create Distributed Port Group window with the example
creation of a virtual network named Red-VN, with a Private VLAN and isolated private
VLAN ports 102, 103.
Click Next when finished.
The Ready to Complete screen appears (see the following figure), displaying the details
entered for the virtual network (distributed port group).
If changes are needed, click the Back button. If the details are correct, click the Finish
button to verify the port group details and complete its creation.
Next, create IP pools for the virtual network port group. Click the datacenter name in the
left side panel, then click the IP Pools tab.
The following figure shows the IP Pools tab for the datacenter named demo_dc.
Near the top of the IP Pools screen, click Add to open the New IP Pool Properties window,
as in the following figure. The IP Pool Properties window has several tabs across the
upper area. Ensure the IPv4 tab is selected, enter a name for the IP pool, then enter
the IP pool IPv4 details, including subnet, gateway, and ranges. Click the check box to
Enable IP Pool.
On the New IP Pool Properties window, click the Associations tab to select the networks
that should use the IP pool you are creating. This tab enables you to associate the IP pool
with the port group.
The following figure of the Associations tab shows that the IP pool being created should
be associated with the virtual network port group named Red-VN.
Click OK when finished.
To verify that the virtual network is created and visible to Contrail, in the Contrail UI, go
to Configure > Networking > Networks to display Contrail network information.
The virtual network just created (Red-VN in this example) appears in the Networks
window; see the following figure.
Delete Virtual Network – Contrail UI
It is possible to delete a virtual network in either the Contrail UI or in the vCenter UI. This
section shows how to delete a virtual network in the Contrail UI.
In the Contrail UI, go to Configure > Networking > Networks to display Contrail network
information.
Click the check box to select the network you want to delete, then click the Delete Virtual
Networks (trash icon) button.
A Confirm window pops up. Click the Confirm button to delete the selected network.
Delete Virtual Networks – vCenter UI
You can also delete a virtual network from the vCenter UI. From the vCenter UI, in the
left side panel, right-click the port-group (virtual-network) you want to delete. In the
menu popup, select Delete to delete the selected port group. An example is shown in
the following.
When deleting a port group (virtual network) using the vCenter UI, you must also delete
the IP pool associated with the port group. Select the IP Pools tab, and right-click the
name of the IP pool associated with the port group being deleted. From the popup menu,
select Remove to delete the IP pool.
The following shows the deletion of the ip-pool-for-Red-VN from the vCenter UI.
Creating a Virtual Machine
Use the vCenter client interface to create a virtual machine for your VMware vCenter
Contrail integrated system. This section describes how to create a virtual machine using
a virtual machine template from the vCenter client interface.
Create a Virtual Machine – vCenter UI
From the vCenter UI, select the virtual machine template from the left side panel. At the
bottom of the right side screen, click Deploy to a new virtual machine.
The following image shows the user has selected the vm-template-ubuntu-12.04.2.
The Deploy Template Name and Location window appears, as in the following. Specify
a name for the virtual machine and select the datacenter on which the virtual machine
is to be spawned.
When finished, click the Next button.
In the Host/Cluster screen that appears, as in the following, select the cluster on which
to spawn the virtual machine.
When finished, click the Next button.
On the Specify a Specific Host screen that appears, as in the following, select the ESXi
host on which to spawn the virtual machine.
When finished, click the Next button.
On the Storage screen, select the destination storage location for the virtual machine.
When finished, click the Next button.
On the Guest Customization screen, the typical selection is Do not customize. Click the
Do not customize button.
When finished, click the Next button.
On the Ready to Complete screen, review all of the virtual machine definitions that you
have selected for the template.
If all selections are correct, click the Finish button to spawn the virtual machine.
To complete the settings for the virtual machine, from the main screen of the vCenter
UI, from the left column select the virtual machine to be edited, then click Edit virtual
machine settings.
The Virtual Machine Properties window appears, as in the following, where you can
update the virtual machine properties.
Click the Hardware tab on the Virtual Machine Properties window, and click Add to add
a NIC and select the appropriate network. Click the check box to Connect at power
on. See the following example figure.
When finished, click OK.
You are returned to the main vCenter UI screen. On the Getting Started tab, as in the
following, click the task Power on the virtual machine. The virtual machine launches.
Once the virtual machine is launched, you can view it from the Contrail UI. Go to Monitor
> Networking > Instances. You should see the virtual machine listed on the Instances
Summary screen, as in the following.
You can also see real-time running information for the virtual machine in the vCenter UI.
Select the Console tab for the virtual machine to view real-time information, including
ping statistics, as in the following.
Configuring the vCenter Network in Contrail UI
The following items can be configured for the vCenter network by using the Contrail UI.
• Network policy is configured by using the Contrail UI.
• Security policy is configured by using the Contrail UI.
• Public network, floating IP pool, and floating IPs are configured by using the Contrail Administrator UI.
When a user configures a virtual network using the administrator UI, the network is a
Contrail-only network. No resources are consumed on vCenter to implement this type
of network. The user can configure a floating IP pool on the network, allocate
floating IPs, and associate floating IPs with virtual machine interfaces (ports).
CHAPTER 5
Using Server Manager to Automate Provisioning
• Installing Server Manager on page 93
• Using Server Manager to Automate Provisioning on page 98
• Using the Server Manager Web User Interface on page 132
Installing Server Manager
• Overview: Installation Requirements for Server Manager on page 93
• Installing Server Manager on page 94
• Upgrading Server Manager Software on page 95
• Server Manager Installation Completion Checks on page 96
• Sample Configurations for Server Manager Templates on page 97
Overview: Installation Requirements for Server Manager
This document provides details for installing Server Manager, starting with Contrail
Release 2.10.
Platform Support
Server Manager can be installed on the following platform operating systems:
• Ubuntu 12.04.3
• Ubuntu 14.04
• Ubuntu 14.04.1
Server Manager can be used to reimage and provision the following target platform
operating systems:
• Ubuntu 12.04.3
• Ubuntu 14.04
• Ubuntu 14.04.1
• VMware ESXi 5.5
Installation Prerequisites
Before installing Server Manager, ensure the following prerequisites have been met.
• Internet access is required to get dependent packages. Ensure access is available to the Ubuntu archive mirrors/repos at /etc/apt/sources.list.
NOTE: Server Manager is tested only with these versions of dependent
packages: Puppet 3.7.3-1 and Cobbler 2.6.3. As part of the installation,
these versions will be installed.
• Puppet Master requires the fully-qualified domain name (FQDN) of the server manager for key generation. The domain name is taken from /etc/hosts. If the server is part of multiple domains, specify the domain name by using the --domain option during the installation.
• On multi-interface systems, specify the interface on which the server manager needs to listen by using the --hostip option. If the listening interface is not specified, the first available interface from the ifconfig list is used.
Installing Server Manager
Server Manager and all of its components (server manager, monitoring, server manager
client, server manager Web user interface) are provided together in a wrapper installation
package:
Ubuntu: contrail-server-manager-installer_<version~sku>.deb
The user can choose to install all components at once or install individual components
one at a time.
Use the following steps to install and set up the Server Manager components.
1. Install the server manager packages:
Ubuntu: dpkg -i contrail-server-manager-installer_<version-sku>.deb
NOTE: Make sure to select the correct version package that corresponds
to the platform for which you are installing.
2. Set up the server manager components. You use the setup.sh command to install all
of the components, or you can install individual components.
cd /opt/contrail/contrail_server_manager
./setup.sh [--hostip=<ip address>] [--domain=<domain name>]
• To set up all components:
./setup.sh --all
• To set up only the server manager server:
./setup.sh --sm=contrail-server-manager_<version-sku>.deb
• To set up only the server manager client:
./setup.sh --sm-client=contrail-server-manager_<version-sku>.deb
• To set up only the server manager user interface:
./setup.sh --webui=contrail-server-manager_<version-sku>.deb
• To set up only server manager monitoring:
./setup.sh --sm-mon=contrail-server-manager_<version-sku>.deb
3. Installation logs are located at /var/log/contrail/install_logs/.
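For example, a complete installation might look like the following session; the IP address, domain, and version string are placeholder values for illustration only:

dpkg -i contrail-server-manager-installer_<version-sku>.deb
cd /opt/contrail/contrail_server_manager
./setup.sh --all --hostip=10.10.10.10 --domain=demo.company.net
ls /var/log/contrail/install_logs/    # review the logs if setup reports errors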
Finishing the Provisioning
The Server Manager service does not start automatically upon successful installation.
The user must finish the provisioning by modifying the following templates. Refer to the
sample configuration section included in this topic for details on configuring these files.
/etc/cobbler/dhcp.template
/etc/cobbler/named.template
/etc/bind/named.conf.options
/etc/cobbler/settings
/etc/cobbler/modules.conf
/etc/sendmail.cf
Starting the Server Manager Service
When finished modifying the templates to match your environment, start the Server
Manager service using the following command:
service contrail-server-manager start
Upgrading Server Manager Software
If you are upgrading server manager software from a previous version to the current
version, use the following guidelines to ensure successful installation.
Prerequisite to Upgrading
Before upgrading, you must remove the previous version of the server manager installer.
Remove any existing server manager installer package from the system using the following
steps.
1. dpkg -P contrail-server-manager-installer
2. rm -rf /opt/contrail/contrail-server-manager
Use Steps for New Installation
Once the existing server manager installer package has been removed, use the installation
steps for a new installation; see details in the previous section:
1. dpkg -i <contrail-server-manager-installer*.deb>
2. cd /opt/contrail/contrail-server-manager
3. ./setup.sh --all
It is not necessary to reconfigure the templates of DHCP, bind, and so on. Previous
template configurations and configured data are preserved during the upgrade.
Server Manager Installation Completion Checks
The following are various checks you can use to investigate the status of your server
manager installation.
Server Manager Checks
Use the following to check that the Server Manager installation is complete.
• The following services should all be running:
service contrail-server-manager status
service cobblerd status
service bind9 status
service isc-dhcp-server status
• Also check:
ps auwx | grep Passenger
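As a convenience, all four service checks can be run in one short shell loop (a sketch using the service names listed above):

for svc in contrail-server-manager cobblerd bind9 isc-dhcp-server; do
    service $svc status    # each service should report running
done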
Server Manager Client Checks
• Check:
which server-manager
• Check the client configuration at:
/opt/contrail/server-manager/client/sm-client-config.ini
• Make sure listen_ip_addr is configured with the server manager IP address.
Server Manager Webui Checks
• Use the following to check the status of the server manager webui:
service supervisor-webui status
• Check webui access from a browser: http://<server-manager-ip>:8080
Sample Configurations for Server Manager Templates
The following are sample parameters for the server manager templates. Use settings
specific for your environment. Typically you will pay attention to parameters for DHCP,
bind, and email services.
Sample Settings
bind_master: <ip address>
manage_forward_zones: ['contrail.juniper.net']
manage_reverse_zones: ['<ip address>']
next_server: <ip address>
server: <ip address>
dhcp.template
Add server manager hooks into the DHCP template so that the server manager is notified
when DHCP commit, release, or expire actions occur. The DHCP servers are detected
by the server manager and kept with the status ‘Discovered’.
Define subnet blocks that the DHCP server needs to support, using the sample format
given in the /etc/cobbler/dhcp.template.
named.conf.options
Be sure to configure:

forwarders {
    x.x.x.x;
};
allow-query { any; };
recursion yes;
named.template
Include the following at the beginning of the named.template file:
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
sendmail.cf
The sendmail.cf template ships with a juniper.net configuration. Populate it with
configuration specific to your environment. The server manager uses the template to
generate emails when reimaging or provisioning is completed.
Related Documentation
• Using Server Manager to Automate Provisioning on page 98
• Using the Server Manager Web User Interface on page 132
Using Server Manager to Automate Provisioning
• Overview of Server Manager on page 98
• Server Manager Requirements and Assumptions on page 98
• Server Manager Component Interactions on page 100
• Configuring Server Manager on page 101
• Configuring the Cobbler DHCP Template on page 102
• User-Defined Tags for Server Manager on page 103
• Server Manager Client Configuration File on page 103
• Restart Services on page 104
• Accessing Server Manager on page 104
• Communicating with the Server Manager Client on page 105
• Server Manager Commands for Configuring Servers on page 105
• Server Manager REST API Calls on page 123
• Example: Reimaging and Provisioning a Server on page 129
Overview of Server Manager
Starting with Contrail Release 1.10, the Contrail Server Manager can be used to provision,
configure, and reimage a Contrail virtual network system of servers, clusters, and nodes.
Server Manager is an alternative to using Fabric commands to provision a Contrail system.
This section describes the functions and usage guidelines for the Contrail server manager.
The server manager provides a simple, centralized way for users to manage and configure
components of a virtual network system running across multiple physical and virtual
servers in a cloud infrastructure.
You can use the server manager to configure, provision, and reimage servers with the
correct software version and packages for the nodes that are running on each server in
multiple virtual network system clusters.
The server manager:
• Provides REST APIs to handle customer requests.
• Manages its own database to store information about the servers.
• Interacts with other open source products such as Cobbler and Puppet to configure servers based on user requests.
Server Manager Requirements and Assumptions
The following requirements are assumed for the server manager:
• The server manager runs on a Linux server (bare metal or virtual machine) and assumes availability of several software products with which it interacts to provide the functionality of managing servers.
• The server manager has network connectivity to the servers it is trying to manage.
• The server manager has access to a remote power management tool to power cycle the servers that it manages.
• The server manager uses Cobbler software for Linux provisioning to configure and download software to physical servers. Cobbler resides on the same server that is running the server manager daemon.
• Contrail 1.10 server manager assumes that DNS and DHCP servers embedded with Cobbler provide IP addresses and names to the servers being managed, although it is possible to use external DNS and DHCP servers.
• The server manager uses Puppet software, an open source configuration management tool, to accomplish the configuration management of target servers, including the installation and configuration of different software packages and the launching of various services.
• SQLite3 database management software is used to maintain and manage server configurations, and it runs on the same machine where the server manager daemon is running.
Server Manager Component Interactions
The server manager runs as a daemon and provides REST APIs for interaction with the
client. The server manager accepts user input in the form of REST API requests, performs
the requested function on the resources, and responds to the user with a REST API
response.
Configuration parameters required by the server manager are provided in the server
manager configuration file, however, the parameters can be overridden by server manager
command line parameters.
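For example, assuming the server manager is listening on its default port (9001), any generic REST client such as curl can exercise the API; a GET against the /server URL used by the configuration APIs is shown here purely for illustration:

curl -s "http://<SM-IP-Address>:9001/server"    # returns configured servers as JSON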
Figure 8 on page 100 illustrates several high-level components with which the server
manager interacts.
Figure 8: Server Manager Component Interactions
Internally, the server manager uses a SQLite3 database to hold server configuration
information. The server manager coordinates the database configuration information
and user requests to manage the servers defined in the database.
While managing the servers, the server manager also communicates with other software
components. It uses Cobbler for reimaging target servers and it uses Puppet for
provisioning, thereby ensuring necessary software packages are installed and configured,
required services are running, and so on.
A server manager agent runs on each of the servers and communicates with the server
manager, providing the information needed to monitor the operation of the servers. The
server manager agent also uses REST APIs to communicate with the server manager,
and it can use other software tools to fetch other information, such as Nagios
infrastructure monitoring, Razor spam filtering, Intelligent Platform Management Interface
(IPMI), and so on. Monitoring functionality is not part of server manager for the Contrail
release. It will be available at a later date.
Configuring Server Manager
When the installation of all server manager components and dependent packages is
finished, configure the server manager with parameters that identify your environment
and make it available for clients to serve REST API requests.
Upon installation, a sample server manager configuration file is created at:
/opt/contrail/server_manager/sm-config.ini
Modify the sm-config.ini configuration file to include parameter values specific to your
environment.
The environment-specific configuration section of the sm-config.ini file is named
SERVER-MANAGER.
The following example shows the format and parameters of the SERVER-MANAGER
section. Typically, only the listen_ip_addr, cobbler_user, and cobbler_passwd values need
to be modified.
[SERVER-MANAGER]
listen_ip_addr = <IP-Address-of-SM>
listen_port = <port-number>
database_name = <database-file-name>
server_manager_base_dir = <base-dir-where-SM-files-are-created>
html_root_dir = <html-root-dir>
cobbler_ip_address = <cobbler-ip-address>
cobbler_port = <cobbler-port-number>
cobbler_username = <cobbler-username>
cobbler_password = <cobbler-password>
puppet_dir = <puppet-directory>
ipmi_username = <IPMI username>
ipmi_password = <IPMI password>
ipmi_type = <IPMI type>
Table 4 on page 102 provides details for each of the parameters in SERVER-MANAGER
section.
Table 4: Server Manager Parameters

listen_ip_addr
  Specify the IP address of the server on which the server manager is listening for REST API requests.
listen_port
  Specify the port number on which the server manager is listening for REST API requests. The default is 9001.
database_name
  The name of the database file where the server manager stores configuration information. This file is created under server_manager_base_dir.
server_manager_base_dir
  The base directory where all of the server manager configuration files are created. The default is /etc/contrail.
html_root_dir
  The HTML root directory, /var/www/html.
cobbler_ip_address
  The IP address used to access Cobbler. This address MUST be the same address as the listen_ip_addr. The server manager assumes that the Cobbler service is running on the same server as the server manager service.
cobbler_port
  The port on which Cobbler listens for user requests. Leave this field blank.
cobbler_username
  The user name to access the Cobbler service. Specify cobbler unless your Cobbler settings have been modified to use a different user name.
cobbler_password
  The password to access the Cobbler service. Specify cobbler unless your Cobbler settings have been modified to use a different password.
puppet_dir
  The directory where the Puppet manifests and templates are created. This should be /etc/puppet, unless your Puppet configuration has been modified to use another directory.
ipmi_username
  The IPMI username for power management.
ipmi_password
  The IPMI password for power management.
ipmi_type
  The IPMI type (ipmilan, or other Cobbler-supported types).
Configuring the Cobbler DHCP Template
In addition to configuring the sm-config.ini file, you must manually change the settings
in the /etc/cobbler/dhcp.template file to use the correct subnet address, mask, and DNS
domain name for your environment. Optionally, you can also restrict the use of the current
instance of the server manager and Cobbler to a subset of servers in the network.
Below is a snippet from the dhcp.template file showing the fields to be modified.
NOTE: The IP addresses and other values in the following are shown for
example purposes only. Be sure to use values that are correct for your
environment.
subnet <ip address> netmask <ip address> {
    option routers              <ip address>;
    option subnet-mask          <ip address>;
    option domain-name-servers  $next_server, <ip address>;
    option domain-search        "demo.company.net", "company.net";
    option domain-name          "demo.company.net";
    option ntp-servers          $next_server;
    default-lease-time          21600;
    max-lease-time              43200;
    next-server                 $next_server;
    filename                    "/pxelinux.0";
}
User-Defined Tags for Server Manager
Server manager provides the user the ability to define tags that can be used to group
servers for performing a particular operation such as show information, reimage, provision,
and so on. The server manager supports up to seven different tags that can be configured
and used for grouping servers.
The names of user-defined tags are kept in the tags.ini file, at /etc/contrail_smgr/tags.ini.
It is possible to modify tag names, and add or remove tags dynamically using the server
manager REST API interface. However, if a tag is already being used to group servers,
the tag must be removed from the servers before tag modification is allowed.
The following is a sample tags.ini file that is copied on installation. In the sample file, the
user has defined five tags – datacenter, floor, hall, rack, and user_tag. The tags can be
used to group servers together.
[TAGS]
tag1 = datacenter
tag2 = floor
tag3 = hall
tag4 = rack
tag5 = user_tag
Server Manager Client Configuration File
The server manager client application installation copies a sample configuration file,
/opt/contrail/server_manager/client/sm-client-config.ini, that contains parameter values
such as the IP address to reach the server manager, the port used by server manager,
default values for cluster table entries, default values for server table entries, and so on.
You must modify the values in the sm-client-config.ini file to match your environment.
Use values from the CLUSTER and SERVER subsections in the ini file during creation of
new clusters and servers, unless you explicitly override the values at the time of creation.
The following is a sample client configuration file.
[SERVER-MANAGER]
; ip address of the server manager
; replace the following with proper server manager address
listen_ip_addr = <ip address>
; server manager listening port
listen_port = 9001

[CLUSTER]
subnet_mask = <ip address>
domain = contrail.juniper.net
database_dir = /home/cassandra
encapsulation_priority = MPLSoUDP,MPLSoGRE,VXLAN
router_asn = <asn>
keystone_username = admin
keystone_password = <password>
password = <password>
analytics_data_ttl = 168
haproxy = disable
use_certificates = False
multi_tenancy = False
database_token =
service_token = <password>

[SERVER]
Restart Services
When all user changes have been made to the configuration files, restart the server
manager to run with the modifications by issuing the following:
service contrail-server-manager restart
Accessing Server Manager
When the server manager configuration has been customized to your environment, and
the required daemon services are running, clients can request and use services of the
server manager by using REST APIs. Any standard REST API client can be used to
construct and send REST API requests and process server manager responses.
The following steps are typically required to fully implement a new cluster of servers
being managed by the server manager.
1. Configure elements such as servers, clusters, and images in the server manager.
Before managing servers, the server manager needs to have configuration details of
the servers that the server manager is managing.
2. Specify the name and location of boot images, packages, and repositories used to
bring up the servers with needed software.
Currently, the servers can be imaged with CentOS, Ubuntu Linux, or VMware ESXi
distributions.
3. Provision or configure the servers by installing necessary packages, creating
configuration files, and bringing up the correct services so that each server can perform
the functions or role(s) configured for that server.
A Contrail system of servers has several components or roles that work together to
provide the functionality of the virtual network system, including: control, config,
analytics, compute, web-ui, openstack, and database. Each of the roles has different
requirements for software and services needed. The provisioning REST API enables
the client to configure the roles on servers using the server manager.
4. Set up API calls for monitoring servers.
Once the servers in the Contrail system are correctly reimaged and provisioned to run
configured roles, the server monitoring REST API calls allow clients to monitor
performance of the servers as they provide one or more role functions. Monitoring
functionality is not available in server manager for Contrail Release 1.10.
Communicating with the Server Manager Client
Server Manager provides a REST API interface for clients to talk to the server manager
software. Any client that can send and receive REST API requests and responses can be
used to communicate with server manager, for example, Curl or Postman. Additionally,
the server manager software provides a client with a simplified CLI interface, in a separate
package. The server manager client can be installed and run on the server manager
machine itself or on another server with an IP connection to the server manager machine.
Prior to using the server manager client CLI commands, you need to modify the
sm-client-config.ini file to specify the IP address and the port for the server manager,
along with other parameters.
Each of the commands described in this section takes a set of parameters from the user,
constructs a REST API request to the server manager, and provides the server’s response
to the user.
The following describes each server manager client CLI command in detail.
Server Manager Commands for Configuring Servers
This section describes commands that are used to configure servers and server parameters
in the server manager database. These commands allow you to add, modify, delete, or
view servers.
• Create New Servers or Update Existing Servers on page 106
• Delete Servers on page 107
• Show Server Configuration on page 108
• Server Manager Commands for Managing Clusters on page 109
• Server Manager Commands for Managing Tags on page 112
• Server Manager Commands for Managing Images on page 114
• Server Manager Operational Commands for Managing Servers on page 117
• Reimaging Server(s) on page 118
• Provisioning and Configuring Roles on Servers on page 119
• Restarting Server(s) on page 121
• Show Status of Server(s) on page 122
Create New Servers or Update Existing Servers
Use the server-manager add command to create a new server or update a server in the
server manager database.
Usage:
server-manager add [-h] [--config_file CONFIG_FILE] server [-f FILE_NAME]
Table 5 on page 106 lists the optional arguments.
Table 5: Server Manager Add Server Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--file_name FILE_NAME, -f FILE_NAME
  The JSON file that contains the server parameter values.
If no JSON file is specified, the client program accepts all the needed server parameter
values interactively from the user, then builds a JSON file and makes a REST API call to
the server manager. The JSON file contains a number of server entries, in the format
shown in the following example:
{
    "server": [
        {
            "id": "demo2-server",
            "mac_address": "<mac address>",
            "ip_address": "<ip address>",
            "parameters" : {
                "interface_name": "eth1",
                "partition": ""
            },
            "roles" : ["config","openstack","control","compute","collector","webui","database"],
            "cluster_id": "demo-cluster",
            "subnet_mask": "<ip address>",
            "gateway": "<ip address>",
            "password": "<password>",
            "domain": "demo.company.net",
            "ipmi_address": "<ip address>",
            "tag" : {
                "datacenter" : "demo-dc",
                "floor" : "demo-floor",
                "hall" : "demo-hall",
                "rack" : "demo-rack",
                "user_tag" : "demo-user"
            }
        }
    ]
}
Most of the parameters in the JSON sample file are self-explanatory. The cluster_id defines
the cluster to which the server belongs. The interface_name is the Ethernet interface
name on the server used to configure the server, and roles defines the roles that can be
configured for the server. The sample roles array in the example lists all valid role values.
The tag object defines the list of tag values for grouping and classifying the server.
The server-manager add command adds a new entry if the server with the given ID or
mac_address does not exist in the server manager database. If an entry already exists,
the add command modifies the fields in the existing entry with any new parameters
specified.
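For example, if the JSON shown above is saved in a hypothetical file named demo2-server.json, the server can be added (or updated) with:

server-manager add server -f demo2-server.json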
Delete Servers
Use the server-manager delete command to delete one or more servers from the server
manager database.
Table 6 on page 107 lists the optional arguments.
Table 6: Server Manager Delete Server Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--server_id SERVER_ID
  The server ID for the server or servers to be deleted.
--mac MAC
  The MAC address for the server or servers to be deleted.
--ip IP
  The IP address for the server or servers to be deleted.
--cluster_id CLUSTER_ID
  The cluster ID for the server or servers to be deleted.
--tag TagName=TagValue
  The TagName that is to be matched with the TagValue. Up to seven TagName and TagValue pairs separated by commas can be provided.
The criteria to identify servers to be deleted can be specified by providing the server_id,
or the server's mac address, ip, cluster_id, or a TagName=TagValue pair.
Provide one of the server matching criteria to display a list of servers available to be
deleted.
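For example, using the sample values from the add command above, either of the following invocations would delete the matching server(s):

server-manager delete server --server_id demo2-server
server-manager delete server --tag datacenter=demo-dc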
Show Server Configuration
Use the server-manager show command to display the configuration of servers from the
server manager database.
Usage:
server-manager show [--config_file CONFIG_FILE]
server (--server_id SERVER_ID | --mac MAC | --ip IP | --cluster_id CLUSTER_ID
| --tag <tag_name=tag_value>.. ) [--detail]
Table 7 on page 108 lists the optional arguments.
Table 7: Server Manager Show Server Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--server_id SERVER_ID
  The server ID for the server or servers to be displayed.
--mac MAC
  The MAC address for the server or servers to be displayed.
--ip IP
  The IP address for the server or servers to be displayed.
--cluster_id CLUSTER_ID
  The cluster ID for the server or servers to be displayed.
--tag TagName=TagValue
  The TagName that is to be matched with the TagValue. Up to seven TagName and TagValue pairs separated by commas can be provided.
--detail, -d
  Flag to indicate if details are requested.
The criteria to identify servers to be displayed can be specified by providing the server_id,
or the server's mac address, ip, VNS_id, cluster_id, or a TagName=TagValue pair.
Provide one or more of the server matching criteria to display a list of servers.
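For example, to display detailed configuration for every server in the sample cluster used earlier:

server-manager show server --cluster_id demo-cluster --detail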
Server Manager Commands for Managing Clusters
A cluster is used to store parameter values that are common to all servers belonging to
that cluster. The commands in this section facilitate managing clusters in the server
manager database, enabling the user to add, modify, delete, and view clusters.
NOTE: Whenever a server is created with a specific cluster_id, Server Manager
checks to see if a cluster with that ID has already been created. If there is no
matching cluster_id already in the database, an error is returned.
• Create a New Cluster or Update an Existing Cluster on page 109
• Delete a Cluster on page 111
• Show Cluster Configuration on page 111
Create a New Cluster or Update an Existing Cluster
Use the server-manager add command to create a new cluster or update an existing
cluster in the server manager database.
Usage:
server-manager add [-h] [--config_file CONFIG_FILE]
cluster [--file_name FILE_NAME]
Table 8 on page 109 lists the optional arguments.
Table 8: Server Manager Add Cluster Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--file_name FILE_NAME, -f FILE_NAME
  The JSON file that contains the cluster parameter values.
If no JSON file is specified, the client program accepts all the needed cluster parameter
values interactively from the user, then builds a JSON file and makes a REST API call to
the server manager. The JSON file contains a number of cluster entries, in the format
shown in the following example:
{
    "cluster" : [
        {
            "id" : "demo-cluster",
            "parameters" : {
                "router_asn": "<asn>",
                "database_dir": "/home/cassandra",
                "database_token": "",
                "use_certificates": "False",
                "multi_tenancy": "False",
                "encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
                "service_token": "<password>",
                "keystone_username": "admin",
                "keystone_password": "<password>",
                "keystone_tenant": "admin",
                "analytics_data_ttl": "168",
                "haproxy": "disable",
                "subnet_mask": "<ip address>",
                "gateway": "<ip address>",
                "password": "<password>",
                "external_bgp": "",
                "domain": "<domain name>"
            }
        }
    ]
}
Server membership to a cluster is determined by specifying the ID corresponding to the
cluster when defining the server. All of the cluster parameters are available to the server
when provisioning roles on the server.
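For example, if the cluster JSON shown above is saved in a hypothetical file named demo-cluster.json, the cluster can be added (or updated) with:

server-manager add cluster -f demo-cluster.json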
Delete a Cluster
Use the server-manager delete command to delete a cluster that is no longer needed
from the server manager database, after all servers in the cluster have been deleted.
Usage:
server-manager delete [-h] [--config_file CONFIG_FILE]
cluster [--cluster_id CLUSTER_ID]
Table 9 on page 111 lists the optional arguments.
Table 9: Server Manager Delete Cluster Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--cluster_id CLUSTER_ID
  The cluster ID for the cluster to be deleted.
Show Cluster Configuration
Use the server-manager show command to list the configuration of a cluster.
Usage:
server-manager show [-h] [ --config_file CONFIG_FILE]
cluster [--cluster_id CLUSTER_ID] [--detail]
Table 10 on page 111 lists the optional arguments.
Table 10: Server Manager Show Cluster Command Options

-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--detail, -d
  Flag to indicate if details are requested.
--cluster_id CLUSTER_ID
  The cluster ID for the cluster or clusters.
You can optionally specify a cluster ID to get information about a particular cluster. If the
optional parameter is not specified, information about all clusters in the system is returned.
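For example, to display detailed configuration for the sample cluster:

server-manager show cluster --cluster_id demo-cluster --detail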
Server Manager Commands for Managing Tags
Tags are used for grouping servers together so that an operation such as get, reimage,
provision, status, and so on can be easily performed on servers that have matching tags.
The server manager provides a flexible way for users to define their own tags, then use
those tags to assign values to servers. Servers with matching tag values can be easily
grouped together. The server manager can store a maximum of seven tag values. At
initialization, the server manager reads the tag names from the configuration file. The
tag names can be retrieved or modified using CLI commands. When modifying tag names,
the server manager ensures that the tag name being modified is not used by any of the
server entries.
• Create a New Tag or Update an Existing Tag on page 112
• Show Tag Configuration on page 113
Create a New Tag or Update an Existing Tag
Use the server-manager add command to create a new tag or update an existing tag in
the server manager database.
Usage:
server-manager add [-h] [--config_file CONFIG_FILE]
tag [--file_name FILE_NAME]
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--file_name FILE_NAME, -f FILE_NAME
  The JSON file that contains the tag names.
If no JSON file is specified, the client program prompts the user for tag names, then builds
a JSON file and makes a REST API call to the server manager. The JSON file contains a
number of tag entries, in the format shown in the following example:
{
    "tag1" : "data-center",
    "tag2" : "floor",
    "tag3" : "",
    "tag4" : "pod",
    "tag5" : "rack"
}
In the example, the user is specifying JSON to add or modify tags tag1 through tag5. For
tag3, the "" value specifies that if the tag was defined prior to the CLI command, it is removed
on execution of the command. The tag name for tag1 is set to data-center. This is allowed
if, and only if, none of the server entries are using tag1.
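For example, if the tag JSON shown above is saved in a hypothetical file named tags.json, the tags can be added or modified with:

server-manager add tag -f tags.json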
Show Tag Configuration
Use the server-manager show command to list the configuration of a tag.
Usage:
server-manager show [-h] [ --config_file CONFIG_FILE] tag
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
The following is sample output for the show tag command.
{
    "tag1": "datacenter",
    "tag2": "floor",
    "tag3": "hall",
    "tag4": "rack",
    "tag5": "user_tag"
}
Server Manager Commands for Managing Images
In addition to servers and clusters, the server manager also manages information about
images and packages that can be used to reimage and configure servers. Images and
packages are both stored in the database as images. When new images are added to
the database, or existing images are modified or deleted, the server manager interfaces
with Cobbler to make corresponding modifications in the Cobbler distribution profile for
the specified image.
The image types currently supported are summarized in the following table:
centos
  Manages the CentOS stock ISO, and does not include the Contrail packages repository packaged with the ISO.
contrail-centos-package
  Maintains a repository of the package to be installed on the CentOS system image.
ubuntu
  Manages the base Ubuntu ISO.
contrail-ubuntu-package
  Maintains a repository of packages that contain Contrail and dependent packages to be installed on an Ubuntu base system.
ESXi5.1/ESXi5.5
  Manages the VMware ESXi 5.1 or 5.5 ISO.
• Creating New Images or Updating Existing Images on page 114
• Add an Image on page 114
• Upload an Image on page 116
• Delete an Image on page 116
• Show Image Configuration on page 117
Creating New Images or Updating Existing Images
The server manager maintains five types of images – CentOS ISO, Ubuntu ISO, ESXi hypervisor
ISO, Contrail CentOS package, and Contrail Ubuntu package.
Use the server-manager add command or the server-manager upload command to add
new images to the server manager database.
• Use add when the new image is present locally on the server manager machine. The path provided is the image path on the server manager machine.
• Use upload_image when the new image is present on the machine where the client program is being invoked. The path provided is the image path on the client machine.
Add an Image
Usage:
server-manager add [-h] [ --config_file CONFIG_FILE]
image [--file_name FILE_NAME]
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--file_name FILE_NAME, -f FILE_NAME
  The name of the JSON file that contains the image parameter values.
If no JSON file is specified, the client program accepts parameter values from the user
interactively, then builds a JSON file and makes a REST API call to the server manager.
The JSON file contains an array of possible entries, in the following sample format. The
sample shows three images: one Ubuntu base ISO, one CentOS base ISO, and one Contrail
Ubuntu package. When the images were added, corresponding distribution, profile, and
repository entries were created in Cobbler by the server manager.
{
    "image": [
        {
            "id": "ubuntu-12.04.3",
            "type": "ubuntu",
            "version": "ubuntu-12.04.3",
            "path": "/iso/ubuntu-12.04.3-server-amd64.iso"
        },
        {
            "id": "centos-6.4",
            "type": "centos",
            "version": "centos-6.4",
            "path": "/iso/CentOS-6.4-x86_64-minimal.iso"
        },
        {
            "id": "contrail-ubuntu-r11-b33",
            "type": "contrail-ubuntu-package",
            "version": "contrail-ubuntu-r11-b33",
            "path": "/iso/contrail-install-packages_1.11-33_all.deb"
        }
    ]
}
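For example, if the image JSON shown above is saved in a hypothetical file named images.json, the three images can be added with:

server-manager add image -f images.json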
Upload an Image
The server-manager upload_image command is similar to the server-manager add
command, except that the path provided for the image being added is the local path on
the client machine. This command is useful if the client is being run remotely, not on the
server manager machine, and the image being added is not physically present on the
server manager machine.
Usage:
server-manager upload_image [-h]
[--config_file CONFIG_FILE]
image_id image_version image_type file_name
Positional arguments include the following:
image_id
  Name of the new image.
image_version
  Version number of the new image.
image_type
  Type of the image: fedora, centos, ubuntu, contrail-ubuntu-package, contrail-centos-package.
file_name
  Complete path for the file.
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
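For example, to upload the Contrail Ubuntu package from the sample above when it resides on the client machine (the /tmp path is a placeholder):

server-manager upload_image contrail-ubuntu-r11-b33 contrail-ubuntu-r11-b33 contrail-ubuntu-package /tmp/contrail-install-packages_1.11-33_all.deb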
Delete an Image
Use the server-manager delete command to delete an image from the server manager
database. When an image is deleted from the server manager database, the distribution,
profile, or repository corresponding to the image is also deleted from the Cobbler database.
Usage:
server-manager delete [-h] [ --config_file CONFIG_FILE]
image image_id
Positional arguments include the following:
image_id
  The image ID for the image to be deleted.
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
Show Image Configuration
Use the server-manager show command to list the configuration of images from the
server manager database. If the detail flag is specified, detailed information about the
image is returned. If the optional image_id is not specified, information about all the
images is returned.
Usage:
server-manager show [-h] [--config_file CONFIG_FILE] image [--image_id IMAGE_ID] [--detail]
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--image_id IMAGE_ID
  The image ID for the image or images.
--detail, -d
  Flag to indicate if details are requested.
Server Manager Operational Commands for Managing Servers
The server manager commands in the following sections are operational commands for
performing a specific operation on a server or a group of servers. These commands
assume that the base configuration of entities required to execute the operational
commands has already been completed using the configuration CLI commands.
Reimaging Server(s)
Use the server-manager reimage command to reimage an identified server or servers with
a provided base ISO and package. Servers are specified by providing match conditions
to select servers from the database.
Before issuing the reimage command, the images must be added to the server manager
using the create image command, which also adds the images to Cobbler. The set of
servers to be reimaged can be specified by providing match criteria for servers already
added to the server manager database, using the server_id, or the server's vns_id,
cluster_id, pod_id, or rack_id.
You must identify the base image ID to be used to reimage, plus any optional Contrail
package to be used. When a Contrail package is provided, a local repository is created
that can be used for subsequent provisioning of reimaged servers.
The command asks for a confirmation before making the REST API call to the server
manager to start reimaging the servers. This confirmation message can be bypassed by
specifying the optional --no_confirm or -F parameter on the command line.
Usage:
server-manager reimage [-h]
[ --config_file CONFIG_FILE]
[--package_image_id PACKAGE_IMAGE_ID]
[--no_reboot]
(--server_id SERVER_ID | --cluster_id CLUSTER_ID | --tag <tag_name=tag_value>)
[--no_confirm]
base_image_id
Positional arguments include the following:
base_image_id
  The image ID of the base image to be used.
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--package_image_id PACKAGE_IMAGE_ID, -p PACKAGE_IMAGE_ID
  The optional Contrail package to be used to reimage the server or servers.
--no_reboot, -n
  Optional parameter to indicate that the server should not be rebooted following the reimage setup.
--server_id SERVER_ID
  The server ID for the server or servers to be reimaged.
--cluster_id CLUSTER_ID
  The cluster ID for the server or servers to be reimaged.
--tag TagName=TagValue
  The TagName to be matched with the TagValue.
--no_confirm, -F
  Flag to bypass the confirmation message; default = do NOT bypass.
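For example, to reimage every server in the sample cluster with the Ubuntu base image and Contrail package defined earlier (the IDs are the sample values used above, not defaults):

server-manager reimage --cluster_id demo-cluster --package_image_id contrail-ubuntu-r11-b33 ubuntu-12.04.3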
Provisioning and Configuring Roles on Servers
Use the server-manager provision command to provision identified server(s) with
configured roles for the virtual network system. The servers can be selected from the
database configuration (using standard server match criteria), identified in a JSON file,
or provided interactively by the user.
From the configuration of servers in the database, the server manager determines which
roles to configure on which servers and uses this information along with other server and
VNS parameters from the database to achieve the task of configuring the servers with
specific roles.
When the server-manager provision command is used, the server manager builds the
manifest files corresponding to each of the servers and pushes them to the Puppet agent
for execution upon the servers.
Usage:
server-manager provision [-h]
[--config_file CONFIG_FILE]
(--server_id SERVER_ID | --cluster_id CLUSTER_ID | --tag <tag_name=tag_value> |
--provision_params_file PROVISION_PARAMS_FILE | --interactive)
[--no_confirm]
package_image_id
Positional arguments include the following:
package_image_id
  The Contrail package image ID to be used for provisioning.
Copyright © 2016, Juniper Networks, Inc.
119
Contrail Feature Guide
Optional arguments include the following:
-h, --help
  Show the options available for the current command and exit.
--config_file CONFIG_FILE, -c CONFIG_FILE
  The name of the server manager client configuration file. The default file is /opt/contrail/server_manager/client/sm-client-config.ini.
--server_id SERVER_ID
  The server ID for the server or servers to be provisioned.
--cluster_id CLUSTER_ID
  The cluster ID for the server or servers to be provisioned.
--tag TagName=TagValue
  The TagName to be matched with the TagValue.
--provision_params_file PROVISION_PARAMS_FILE, -f PROVISION_PARAMS_FILE
  Optional JSON file containing the parameters for provisioning the server(s).
--interactive, -I
  Flag indicating that the user will manually enter the server parameters for provisioning.
--no_confirm, -F
  Flag to bypass the confirmation message; default = do NOT bypass.
You can specify roles different from what is configured in the database by using the JSON
file option parameter. When using the file option, the rest of the server parameters, the
cluster parameters, and the list of servers must be configured before using the provision
command. The following is a sample format for the file option:
{
    "roles" : {
        "database" : ["demo2-server"],
        "openstack" : ["demo2-server"],
        "config" : ["demo2-server"],
        "control" : ["demo2-server"],
        "collector" : ["demo2-server"],
        "webui" : ["demo2-server"],
        "compute" : ["demo2-server"]
    }
}
The final option for specifying roles for provisioning servers is to specify the --interactive
option flag. When the provision command is used with this flag, the user is prompted to
enter role definitions interactively.
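For example, to provision all servers in the sample cluster with the Contrail package added earlier:

server-manager provision --cluster_id demo-cluster contrail-ubuntu-r11-b33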
Restarting Server(s)
Use the server-manager restart command to reboot identified server(s). Servers can be
specified from the database by providing standard match conditions. The restart
command provides a way to reboot or power-cycle the servers, using the server manager
REST API interface. If reimaging is intended, use the restart command with the net-boot
flag enabled. When netbooted, the Puppet agent is also installed and configured on the
servers. If there are Puppet manifest files created for the server prior to rebooting, the
agent pulls those from the server manager and executes the configured Puppet manifests.
The restart command uses an IPMI mechanism to power cycle the servers, if available
and configured. Otherwise, the restart command uses SSH to the server and the existing
reboot command mechanism is used.
Usage:
server-manager restart [-h]
[ --config_file CONFIG_FILE]
(--server_id SERVER_ID | --cluster_id CLUSTER_ID | --tag <tag_name=tag_value>)
[--net_boot]
[--no_confirm]
Optional arguments include the following:

-h, --help
    Show the options available for the current command and exit.

--config_file CONFIG_FILE, -c CONFIG_FILE
    The name of the server manager client configuration file. The default file is
    /opt/contrail/server_manager/client/sm-client-config.ini.

--server_id SERVER_ID
    The server ID for the server or servers to be restarted.

--cluster_id CLUSTER_ID
    The cluster ID for the server or servers to be restarted.

--tag TagName=TagValue
    TagName to be matched with TagValue.

--net_boot, -n
    Optional parameter to indicate that the server should be netbooted.

--no_confirm, -F
    Flag to bypass the confirmation message; by default, confirmation is NOT bypassed.
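For example, to netboot a server in preparation for reimaging (illustrative server ID):
server-manager restart --server_id demo2-server --net_boot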
Show Status of Server(s)
Use the server-manager status command to view the reimaging or provisioning status of
server(s).
Usage:
server-manager status server [-h]
[--config_file CONFIG_FILE]
(--server_id SERVER_ID | --cluster_id CLUSTER_ID | --tag <tag_name=tag_value>)
Optional arguments include the following:

-h, --help
    Show the options available for the current command and exit.

--config_file CONFIG_FILE, -c CONFIG_FILE
    The name of the server manager client configuration file. The default file is
    /opt/contrail/server_manager/client/sm-client-config.ini.

--server_id SERVER_ID
    The server ID for the server whose status is to be fetched.

--cluster_id CLUSTER_ID
    The cluster ID for the server or servers whose status is to be fetched.

--tag TagName=TagValue
    TagName to be matched with TagValue.
The status CLI provides a way to fetch the current status of a server.
Status outputs include the following:
restart_issued
reimage_started
provision_started
provision_completed
database_started
database_completed
openstack_started
openstack_completed
config_started
config_completed
control_started
control_completed
collector_started
collector_completed
webui_started
webui_completed
compute_started
compute_completed
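For example, to fetch the status of a single server (illustrative server ID):
server-manager status server --server_id demo2-server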
Server Manager REST API Calls
This section describes all of the REST API calls to the server manager. Each description
includes an example configuration.
• REST APIs for Server Manager Configuration Database Entries on page 124
• API: Add a Server on page 124
• API: Delete Servers on page 125
• API: Retrieve Server Configuration on page 125
• API: Add an Image on page 125
• API: Upload an Image on page 126
• API: Get Image Information on page 126
• API: Delete an Image on page 127
• API: Add or Modify a Cluster on page 127
• API: Delete a Cluster on page 128
• API: Get Cluster Configuration on page 128
• API: Get All Server Manager Configurations on page 128
• API: Reimage Servers on page 128
• API: Provision Servers on page 128
• API: Restart Servers on page 129
REST APIs for Server Manager Configuration Database Entries
The REST API calls in this section help in configuring different elements in the server
manager database.
NOTE: The IP addresses and other values in the following are shown for
example purposes only. Be sure to use values that are correct for your
environment.
API: Add a Server
To add a new server to the server manager configuration database:
URL : http://<SM-IP-Address>:<SM-Port>/server
Method: PUT
Payload: JSON payload containing an array of servers to be added. For each server in the
array, all the parameters are specified as JSON fields. The fields mask, gateway, password,
and domain are optional; if not specified, the values of these fields are taken from
the cluster to which the server belongs.
The following is a sample JSON file format for adding a server.
{
"server": [
{
"id": "demo2-server",
"mac_address": "<mac address>",
"ip_address": "<ip address>",
"parameters" : {
"interface_name": "eth1"
},
"roles" : [
"config",
"openstack",
"control",
"compute",
"collector",
"webui",
"database"
],
"cluster_id": "demo-cluster",
"mask": "<ip address>",
"gateway": "<ip address>",
"password": "juniper",
"domain": "demo.company.net",
"email": "id@company.net",
"tag" : {
"datacenter": "demo-dc",
124
Copyright © 2016, Juniper Networks, Inc.
Chapter 5: Using Server Manager to Automate Provisioning
“rack” : “demo-rack”
},
}
]
}
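For example, assuming the payload above is saved as server.json, the server can be added
with curl (the JSON content type header is an assumption about the API server):
curl -X PUT -H "Content-Type: application/json" -d @server.json \
    http://<SM-IP-Address>:<SM-Port>/server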
API: Delete Servers
Use one of the following formats to delete a server.
URL: http://<SM-IP-Address>:<SM-Port>/server?server_id=SERVER_ID
http://<SM-IP-Address>:<SM-Port>/server?cluster_id=CLUSTER_ID
http://<SM-IP-Address>:<SM-Port>/server?mac=MAC
http://<SM-IP-Address>:<SM-Port>/server?ip=IP
http://<SM-IP-Address>:<SM-Port>/server[?tag=<tag_name>=<tag_value>,.]
Method : DELETE
Payload : None
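For example, to delete a single server by its server ID (illustrative value):
curl -X DELETE "http://<SM-IP-Address>:<SM-Port>/server?server_id=demo2-server"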
API: Retrieve Server Configuration
Use one of the following methods to retrieve a server configuration. The detail argument
is optional, and specified as part of the URL if details of the server entry are requested.
URL: http://<SM-IP-Address>:<SM-Port>/server[?server_id=SERVER_ID&detail]
http://<SM-IP-Address>:<SM-Port>/server[?cluster_id=CLUSTER_ID&detail]
http://<SM-IP-Address>:<SM-Port>/server[?mac=MAC&detail]
http://<SM-IP-Address>:<SM-Port>/server[?ip=IP&detail]
http://<SM-IP-Address>:<SM-Port>/server[?tag=<tag_name>=<tag_value>,.]
Method : GET
Payload : None
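For example, to retrieve the detailed configuration of all servers in a cluster (illustrative
cluster ID):
curl "http://<SM-IP-Address>:<SM-Port>/server?cluster_id=demo-cluster&detail"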
API: Add an Image
Use the following to add a new image to the server manager configuration database
from the server manager machine.
An image is either an ISO for a CentOS or Ubuntu distribution or an Ubuntu Contrail
package repository. When adding an image, the image file is assumed to be available on
the server manager machine.
URL : http://<SM-IP-Address>:<SM-Port>/image
Method: PUT
Payload: Specifies all the parameters that define the image being added.
{
"image": [
{
"id": "Image-id",
"type": "image_type", <ubuntu or centos or esxi5.1 or esxi5.5 or
contrail-ubuntu-package or contrail-centos-package>
"version": "image_version",
"path": "path-to-image-on-server-manager-machine"
}
]
}
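For example, assuming the payload above is saved as image.json (with the type annotation
removed so the file is valid JSON), the image can be added with curl:
curl -X PUT -H "Content-Type: application/json" -d @image.json \
    http://<SM-IP-Address>:<SM-Port>/image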
API: Upload an Image
Use the following to upload a new image from a client to the server manager configuration
database.
An image is an ISO for a CentOS or Ubuntu distribution, or an Ubuntu Contrail package
repository. Add image assumes the file is available on the server manager machine,
whereas upload image transfers the image file from the client machine to the server
manager machine.
URL : http://<SM-IP-Address>:<SM-Port>/image/upload
Method: PUT
Payload: Specifies all the parameters that define the image being added.
{
"image": [
{
" id": "Image-id",
"type": "image_type", <ubuntu or centos or esxi5.1 or esxi5.5 or
contrail-ubuntu-package or contrail-centos-package>
"version": "image_version",
"path":"path-to-image-on-client-machine"
}
]
}
API: Get Image Information
Use the following to get image information.
URL : http://<SM-IP-Address>:<SM-Port>/image[?image_id=IMAGE_ID&detail]
Method: GET
Payload: Specifies criteria for the image being sought. If no match criteria are specified,
information about all the images is provided. The optional detail argument specifies
whether details of the image entry in the database are requested.
API: Delete an Image
Use the following to delete an image.
URL : http://<SM-IP-Address>:<SM-Port>/image?image_id=IMAGE_ID
Method: DELETE
Payload: Specifies criteria for the image being deleted.
API: Add or Modify a Cluster
Use the following to add a cluster to the server manager configuration database. A cluster
maintains parameters for a set of servers that work together in different roles to provide
complete functions for a Contrail cluster.
URL : http://<SM-IP-Address>:<SM-Port>/cluster
Method: PUT
Payload: Contains the definition of the cluster, including all the global parameters needed
by all the servers in the cluster. The fields subnet_mask, gateway, password, and domain
define parameters that apply to all servers in the VNS. These parameter values can be
individually overridden for a server by specifying different values in the server entry.
{
"cluster" : [
{
"id" : "demo-cluster",
"parameters" : {
"router_asn": "<asn>",
"database_dir": "/home/cassandra",
"database_token": "",
"use_certificates": "False",
"multi_tenancy": "False",
"encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
"service_token": "<password>",
"keystone_user": "admin",
"keystone_password": "<password>",
"keystone_tenant": "admin",
"analytics_data_ttl": "168",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"haproxy": "disable",
"external_bgp": "",
"domain": "demo.company.net"
}
}
]
}
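For example, assuming the payload above is saved as cluster.json:
curl -X PUT -H "Content-Type: application/json" -d @cluster.json \
    http://<SM-IP-Address>:<SM-Port>/cluster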
API: Delete a Cluster
Use this API to delete a cluster from the server manager database.
URL : http://<SM-IP-Address>:<SM-Port>/cluster?cluster_id=CLUSTER_ID
Method: DELETE
Payload: None
API: Get Cluster Configuration
Use this API to get a cluster configuration.
URL : http://<SM-IP-Address>:<SM-Port>/cluster[?cluster_id=CLUSTER_ID&detail]
Method: GET
Payload: None
The optional detail argument is specified as part of the URL if details of the VNS entry
are requested.
API: Get All Server Manager Configurations
Use this API to get all configurations of server manager objects, including servers, clusters,
images, and tags.
URL : http://<SM-IP-Address>:<SM-Port>/all[?detail]
Method: GET
Payload: None
The optional detail argument is specified as part of the URL if details of the server manager
configuration are requested.
API: Reimage Servers
Use one of the following API formats to reimage one or more servers.
URL : http://<SM-IP-Address>:<SM-Port>/server/reimage?server_id=SERVER_ID
http://<SM-IP-Address>:<SM-Port>/server/reimage?cluster_id=CLUSTER_ID
http://<SM-IP-Address>:<SM-Port>/server/reimage?mac=MAC
http://<SM-IP-Address>:<SM-Port>/server/reimage?ip=IP
http://<SM-IP-Address>:<SM-Port>/server/reimage[?tag=<tag_name>=<tag_value>,.]
Method: POST
Payload: None
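For example, to reimage a single server by server ID (illustrative value):
curl -X POST "http://<SM-IP-Address>:<SM-Port>/server/reimage?server_id=demo2-server"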
API: Provision Servers
Use this API to provision or configure one or more servers for roles configured on them.
URL : http://<SM-IP-Address>:<SM-Port>/server/provision
Method: POST
Payload: Specifies the criteria to be used to identify servers which are being provisioned.
The servers can be identified by server_id, mac, cluster_id or tags. See the following
example.
{
server_id : <server_id> OR
mac : <server_mac_address> OR
cluster_id : <cluster_id> OR
tag : {"data-center" : "dc1"} OR
provision_parameters : {
"roles" : {
"database" : ["demo2-server"],
"openstack" : ["demo2-server"],
"config" : ["demo2-server"],
"control" : ["demo2-server"],
"collector" : ["demo2-server"],
"webui" : ["demo2-server"],
"compute" : ["demo2-server"]
}
}
}
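For example, to provision all servers in a cluster by naming the cluster in the payload (a
sketch; the JSON content type header is an assumption about the API server):
curl -X POST -H "Content-Type: application/json" \
    -d '{"cluster_id" : "demo-cluster"}' \
    http://<SM-IP-Address>:<SM-Port>/server/provision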
API: Restart Servers
This REST API is used to power cycle the servers and reboot either with net-booting
enabled or disabled.
If the servers are to be reimaged and reprovisioned, the net-boot flag should be set.
If servers are only being reprovisioned, the net-boot flag is not needed; however, the
Puppet agent must be running on the target systems with the correct Puppet configuration
to talk to the Puppet master running on the server manager.
URL : http://<SM-IP-Address>:<SM-Port>/server/restart?server_id=SERVER_ID
http://<SM-IP-Address>:<SM-Port>/server/restart?[netboot&]cluster_id=CLUSTER_ID
http://<SM-IP-Address>:<SM-Port>/server/restart?[netboot&]mac=MAC
http://<SM-IP-Address>:<SM-Port>/server/restart?[netboot&]ip=IP
http://<SM-IP-Address>:<SM-Port>/server/restart?[netboot&]tag=<tag_name>=<tag_value>
Method: POST
Payload: Specifies the criteria to be used to identify servers which are being restarted.
The servers can be identified by their server_id, mac, cluster_id, or tag. The netboot
parameter specifies if the servers being power-cycled are to be booted from Cobbler or
locally.
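For example, to power-cycle a server with netbooting enabled (illustrative server ID):
curl -X POST "http://<SM-IP-Address>:<SM-Port>/server/restart?netboot&server_id=demo2-server"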
Example: Reimaging and Provisioning a Server
This example shows the steps used in server manager software to configure, reimage,
and provision a server running all roles of the Contrail system in a single-node
configuration.
NOTE: Component names and IP addresses in the following are used for
example only. To use this example in your own environment, be sure to use
addresses and names specific to your environment.
The server manager client configuration file used for the following CLI commands is
/opt/contrail/server_manager/client/sm-client-config.ini. It contains the values for the
server IP address and port number as follows:
[SERVER-MANAGER]
listen_ip_addr = <ip address> (Server Manager IP address)
listen_port = 9001
The steps to be followed include:
1. Configure cluster.
2. Configure servers.
3. Configure images.
4. Reimage servers (either using servers configured above or using explicitly specified
reimage parameters with the request).
5. Provision servers (either using servers configured above or using explicitly specified
provision parameters with the request).
1. Configure a cluster.
server-manager add cluster -f cluster.json
Where cluster.json contains:
{
"cluster" : [
{
"id" : "demo-cluster",
"parameters" : {
"router_asn": "<asn>",
"database_dir": "/home/cassandra",
"database_token": "",
"use_certificates": "False",
"multi_tenancy": "False",
"encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
"service_token": "<password>",
"keystone_user": "admin",
"keystone_password": "<password>",
"keystone_tenant": "admin",
"analytics_data_ttl": "168",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"haproxy": "disable",
"external_bgp": "",
"domain": "demo.company.net"
}
}
]
}
2. Configure the server.
server-manager add server -f server.json
Where server.json contains:
{
"server": [
{
"id": "demo2-server",
"mac_address": "<mac address>",
"ip_address": "<ip address>",
"parameters" : {
"interface_name": "eth1",
},
"roles" : ["config","openstack","control","compute","collector","webui","database"],
"cluster_id": "demo-cluster",
"subnet_mask": "<ip address>",
"gateway": "<ip address>",
"password": "<password>",
"domain": "demo.company.net",
"ipmi_address": "<ip address>"
}
]
}
3. Configure images.
In the example, the image files for ubuntu 12.04.3 and contrail-ubuntu-164 are located
at the corresponding image path specified on the server manager.
server-manager add -c smgr_client_config.ini image –f image.json
Where image.json contains:
{
"image": [
{
" id": "ubuntu-12.04.3",
"type": "ubuntu",
"version": "ubuntu-12.04.3",
"path":"/iso/ubuntu-12.04.3-server-amd64.iso"
},
{
"id": "contrail-ubuntu-164",
"type": "contrail-ubuntu-package",
"version": "contrail-ubuntu-164",
"path":"/iso/contrail-install-packages_1.05-164~havana_all.deb"
}
]
}
4. Reimage servers.
This step can be performed after the configuration in the previous steps is in the server
manager database.
server-manager reimage --server_id demo2-server -r contrail-ubuntu-164 ubuntu-12.04.3
5. Provision servers.
server-manager provision --server_id demo2-server contrail-ubuntu-164
NOTE: Optionally, the Contrail package to be used can be specified with the
reimage command, in which case the repository with the Contrail packages
is created and made available to the target nodes as part of the reimage
process. It is a mandatory parameter of the provision command.
Using the Server Manager Web User Interface
When the Server Manager is installed on your Contrail system, you can also install a
Server Manager Web user interface that you can use to access the features of Server
Manager.
• Log In to Server Manager on page 132
• Create a Cluster for Server Manager on page 133
• Working with Servers in the Server Manager User Interface on page 139
• Add a Server on page 139
• Edit Tags for Servers on page 141
• Using the Edit Config Option for Multiple Servers on page 141
• Filter Servers by Tag on page 142
• Viewing Server Details on page 142
• Configuring Images and Packages on page 142
• Add New Image or Package on page 143
• Selecting Server Manager Actions for Clusters on page 143
• Reimage a Cluster on page 143
• Provision a Cluster on page 144
Log In to Server Manager
The server manager user interface can be accessed using:
http://<server-manager-user-interface-ip>:8080
Where <server-manager-user-interface-ip> is the IP address of the server on which the
server manager web user interface is installed.
From the Contrail user interface, select Setting > Server Manager to access the Server
Manager home page. From this page you can manage server manager settings for clusters,
servers, images, and packages.
Create a Cluster for Server Manager
Select Add Cluster to identify a cluster to be managed by the server manager. Navigate
to Setting > Server Manager > Clusters to access the Clusters page, as in the following.
To create a new cluster, click the plus icon in the upper right of the Clusters page. The
Add Cluster window appears, where you can add a new cluster ID and the domain email
address of the cluster.
When you are finished adding information about the new cluster in the Add Clusters page,
click Save & Next . Now you can add servers to the cluster, as shown in the following
image.
Click the check box of each server to be added to the cluster.
When finished, click the Next button, and the selected servers are added to the cluster.
When finished adding servers, click the Save & Next button. Now you can assign roles to
servers that you select in the cluster. Roles available are the Contrail roles of Config,
Openstack, Control, Compute, and Collector. Click the check box for each role assignment
for the selected server. You can also click a check box to remove the check mark for any
assigned role. The assigned roles correspond to the role functions in operation on the
server.
When finished selecting roles for the selected server in the Roles window, click the Apply
button to save your choices.
Click the Save & Next button to view your selections, which display as check marks in the
columns on the Add Cluster window; see the following image.
The next step after roles are assigned is to enter the cluster configuration information
for OpenStack. After viewing the assigned roles, click the Save & Next button. A window
opens where you can click an icon that opens a set of fields where you can enter
OpenStack or Contrail configuration information for the cluster. In the following image,
the Openstack icon is open, with fields where you can enter configuration information, such
as Openstack or Keystone passwords and usernames.
In the following image, the Contrail controller icon is open, where the user can enter
configuration information for Contrail, such as External BGP, Multi Tenancy, Router ASN,
HA Proxy, and so on.
In the following image, the High Availability (HA) icon is open, where the user can configure
high availability parameters such as HA Proxy, Internal and External VIP, and so on.
In the following image, the Analytics icon is open, where the user can configure parameters
for Contrail Analytics, including TTL, Syslog Port, Data Dir, and so on.
In the following image, the Contrail Storage icon is open, where the user can configure
parameters for Contrail Storage, including Monitoring Key, OSD Bootstrap Key, Admin
Key, and so on.
When finished entering all of the cluster configuration information, click Save to submit
the configurations. You can view all configured clusters at the Clusters screen (Setting
> Server Manager > Clusters).
To perform an action on one of the configured clusters, click the cog icon at the right to
pick from a list of actions available for that cluster, including Add Servers, Remove Servers,
Assign Roles, Edit Config, Reimage, Provision, and Delete, as shown in the following.
You can also click the expansion icon on the left side of the cluster name to display the
details of that cluster in an area below the name line, as in the following.
Click the upper right icon to switch to the JSON view to see the contents of the JSON file
for the cluster.
The cluster name is a link; click the cluster name to go to the cluster Details page, as in
the following.
Working with Servers in the Server Manager User Interface
Click on the Servers link in the left sidebar at Setting > Server Manager to view a list of all
servers.
Add a Server
To add a new server, from Setting > Server Manager > Servers, click the plus (+) icon at
the upper right side in the header line. The Add Server window pops up, as in the following.
In the following image, the Interfaces icon is open, where the user can add new interfaces
or edit existing ones. To enable editing for any field, hover the cursor over the selected
field to open it.
In the following image, the Contrail Storage icon is open, where the user can configure
parameters for Contrail Storage, including selecting a package and adding storage disks
locations.
When finished entering new server details at the Add Server window, click Save to add
the new server configuration to the list of servers at Setting > Server Manager > Servers.
You can change details of the new server by clicking the cog icon to the right side to get
a list of actions available, including Edit Config, Edit Tags, Reimage, Provision, and Delete.
In the following, the user is preparing to edit tags for the new server.
Edit Tags for Servers
The following shows the Edit Tags window that pops up when you select Edit Tags from
the list at the cog icon for a server. Enter any user-defined tags that are to be associated
with the selected server, then click Save to add the tags to the server configuration.
Using the Edit Config Option for Multiple Servers
You can also edit the configuration of multiple servers at one time. From the Servers
page at Setting > Server Manager > Servers, click the check box of each of the servers you
will edit, then click a cog icon at the right to open the action list, and select Edit Config.
The Edit Config window pops up, as in the following.
Click a pencil icon to open fields of configuration areas that can be edited, including
System Management, Contrail Controller, Contrail Storage, and so on.
Filter Servers by Tag
You can filter servers according to the tags defined for them. At the Servers page, click
the Filter Tags field at the upper right heading, and a list of configured tags appears. Click
the check box to select a tag by which to filter the list of servers.
Viewing Server Details
Each server name on the Servers page is a link to the details page for that server. Click
any server name to open the details for that server, as in the following image.
For each server, the Sensors Information area shows the Name, Type, Reading, and Status
for the following sensor types: temperature (degrees Celsius), fan (rpm), and power
(watts).
Configuring Images and Packages
Use the sidebar options Images and Packages to configure the software images and
packages to be used by the server manager. Images are used to reimage clusters, typically
with an operating system version, and packages are used to provision clusters with a
Contrail setup.
Both areas of the server manager user interface operate in similar fashion. Examples
given here are for the Images section, and the Packages section has similar options.
Click on Images in the sidebar to open the list of images configured on the Images page,
as in the following.
Add New Image or Package
To add a new image or package, on the respective Images or Packages page, click the
plus (+) icon in the upper right header to open the Add dialog box. The Add Image dialog
box is shown in the following. Enter the information for the new image (or package) and
click Save to add the new item to the list of configured items.
NOTE: The path field requires the path of the image where it is located on
the server upon which the server-manager process is running.
Selecting Server Manager Actions for Clusters
Once all aspects of a cluster are configured, you can select actions for the server manager
to perform on the cluster, such as Reimage or Provision.
Reimage a Cluster
From the Clusters page (click Clusters in the sidebar or navigate to Setting > Servers >
Clusters), click the right side cog icon of the cluster to be reimaged, then select Reimage
from the action list.
The Reimage dialog box pops up, as in the following. Ensure that the correct image for
reimaging is selected in the Default Image field, then click Save to initiate the reimage
action.
Provision a Cluster
The process to provision a cluster is similar to the process to reimage a cluster. From the
Clusters page (click Clusters in the sidebar or navigate to Setting > Servers > Clusters),
click the right side cog icon of the cluster to be provisioned, then select Provision from
the action list.
The Provision Cluster dialog box pops up, as in the following. Ensure that the correct
package for provisioning is selected in the Default Package field, then click Save to initiate
the provision action.
CHAPTER 6
Extending Contrail to Physical Routers,
Bare Metal Servers, Switches, and
Interfaces
• Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances on page 145
• Configuring High Availability for the Contrail OVSDB TOR Agent on page 153
• Using Device Manager to Manage Physical Routers on page 158
• REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces on page 183
Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances
• Overview: Support for TOR Switch and OVSDB on page 145
• TOR Services Node (TSN) on page 146
• Contrail TOR Agent on page 146
• Using the Web Interface to Configure TOR Switch and Interfaces on page 148
• Provisioning with Fab Commands on page 149
• Prerequisite Configuration for QFX5100 Series Switch on page 151
• Changes to Agent Configuration File on page 152
• REST APIs on page 153
Overview: Support for TOR Switch and OVSDB
Contrail Release 2.1 and later supports extending a cluster to include bare metal servers
and other virtual instances connected to a top-of-rack (TOR) switch that supports the
Open vSwitch Database Management (OVSDB) Protocol. The bare metal servers and
other virtual instances can belong to any of the virtual networks configured in the Contrail
cluster, facilitating communication with the virtual instances running in the cluster. Contrail
policy configurations can be used to control this communication.
OVSDB protocol is used to configure the TOR switch and to import dynamically-learned
addresses. VXLAN encapsulation is used in the data plane communication with the TOR
switch.
TOR Services Node (TSN)
A new node, the TOR services node (TSN), is introduced and provisioned as a new role
in the Contrail system. The TSN acts as the multicast controller for the TOR switches.
The TSN also provides DHCP and DNS services to the bare metal servers or virtual
instances running behind TOR ports.
The TSN receives all the broadcast packets from the TOR, and replicates them to the
required compute nodes in the cluster and to other EVPN nodes. Broadcast packets from
the virtual machines in the cluster are sent directly from the respective compute nodes
to the TOR switch.
The TSN can also act as the DHCP server for the bare metal servers or virtual instances,
leasing IP addresses to them, along with other DHCP options configured in the system.
The TSN also provides a DNS service for the bare metal servers. Multiple TSN nodes can
be configured in the system based on the scaling needs of the cluster.
Contrail TOR Agent
A TOR agent provisioned in the Contrail cluster acts as the OVSDB client for the TOR
switch, and all of the OVSDB interactions with the TOR switch are performed by using
the TOR agent. The TOR agent programs the different OVSDB tables onto the TOR switch
and receives the local unicast table entries from the TOR switch.
The typical practice is to run the TOR agent on the TSN node.
Configuration Model
The following figure depicts the configuration model used in the system.
Figure 9: Configuration Model
The TOR agent receives the configuration information for the TOR switch. The TOR agent
translates the Contrail configuration to OVSDB and populates the relevant OVSDB table
entries in the TOR switch.
The following table maps the Contrail configuration objects to the OVSDB tables.

Contrail Object                OVSDB Table
Physical device                Physical switch
Physical interface             Physical port
Virtual networks               Logical switch
Logical interface              <VLAN, physical port> binding to logical switch
Layer 2 unicast route table    Unicast remote and local table
                               Multicast remote table
                               Multicast local table
                               Physical locator table
                               Physical locator set table
Control Plane
The TOR agent receives the EVPN route entries for the virtual networks in which the TOR
switch ports are members, and adds the entries to the unicast remote table in the OVSDB.
MAC addresses learned in the TOR switch for different logical switches (entries from
the local table in OVSDB) are propagated to the TOR agent. The TOR agent exports the
addresses to the control node in the corresponding EVPN tables, which are further
distributed to other controllers and subsequently to compute nodes and other EVPN
nodes in the cluster.
The TSN node receives the replication tree for each virtual network from the control
node. It adds the required TOR addresses to the received replication tree, forming its
complete replication tree. The other compute nodes receive the replication tree from the
control node, whose tree includes the TSN node.
Data Plane
The data plane encapsulation method is VXLAN. The virtual tunnel endpoint (VTEP) for
the bare metal end is on the TOR switch.
Unicast traffic from bare metal servers is VXLAN-encapsulated by the TOR switch and
forwarded, if the destination MAC address is known within the virtual switch.
Unicast traffic from the virtual instances in the Contrail cluster is forwarded to the TOR
switch, where VXLAN is terminated and the packet is forwarded to the bare metal server.
Broadcast traffic from bare metal servers is received by the TSN node. The TSN node uses
the replication tree to flood the broadcast packets in the virtual network.
Broadcast traffic from the virtual instances in the Contrail cluster is sent to the TSN node,
which replicates the packets to the TORs.
Using the Web Interface to Configure TOR Switch and Interfaces
The Contrail Web user interface can be used to configure a TOR switch and the interfaces
on the switch.
Select Configure > Physical Devices > Physical Routers and create an entry for the TOR
switch, providing the TOR's IP address and VTEP address. Also configure the TSN and
TOR agent addresses for the TOR.
Figure 10: Add Physical Router Window
Select Configure > Physical Devices > Interfaces and add the logical interfaces to be
configured on the TOR. The name of the logical interface must match the name on the
TOR, for example, ge-0/0/0.10. Also enter other logical interface configurations, such
as VLAN ID, MAC address, and IP address of the bare metal server and the virtual network
to which it belongs.
Figure 11: Add Interface Window
Provisioning with Fab Commands
The TSN can be provisioned using fab commands.
To provision with fab commands, the following changes are required in the testbed.py
file.
1. In env.roledefs, add hosts for the roles tsn and toragent.
2. Configure the TSN node in the compute role.
3. Use the following example to configure the TOR agent. The TOR agent node should
also be configured as a compute node.
env.tor_agent = {
host2: [{
'tor_ip':'<ip address>',          # IP address of the TOR
'tor_id':'<1>',                   # Numeric value to uniquely identify TOR
'tor_type':'ovs',                 # always ovs
'tor_ovs_port':'9999',            # the TCP port to connect on the TOR
'tor_ovs_protocol':'tcp',         # always tcp, for now
'tor_tsn_ip':'<ip address>',      # IP address of the TSN for this TOR
'tor_tsn_name':'<name>',          # Name of the TSN node
'tor_name':'<switch name>',       # Name of the TOR switch
'tor_tunnel_ip':'<ip address>',   # IP address of data tunnel endpoint
'tor_vendor_name':'<name>',       # Vendor name for TOR switch
'tor_http_server_port':'<port>',  # HTTP port for TOR introspect
}]
}
4. Two TOR agents provisioned on different hosts are considered redundant to each
other if the tor_name and tor_ovs_port in their respective configurations are the same.
Note that this means the TOR agents are listening on the same port for SSL
connections on both the nodes.
Use the tasks add_tsn and add_tor_agent to provision the TSN and TOR agents. The
following fab tasks are available for managing TSNs and TOR agents:
add_tsn : Provision all the TSNs given in the testbed.
add_tor_agent : Add all the tor-agents given in the testbed.
add_tor_agent_node : Add all tor-agents in specified node
(e.g., fab add_tor_agent_node:root@<ip>).
add_tor_agent_by_id : Add the specified tor-agent, identified by tor_agent_id
(e.g., fab add_tor_agent_by_id:1,root@<ip>).
add_tor_agent_by_index : Add the specified tor-agent,
identified by index/position in testbed
(e.g., fab add_tor_agent_by_index:0,root@<ip>).
add_tor_agent_by_index_range : Add a group of tor-agents,
identified by indices in testbed
(e.g., fab add_tor_agent_by_index_range:0-2,root@<ip>).
delete_tor_agent : Remove all tor-agents in all nodes.
delete_tor_agent_node : Remove all tor-agents in specified node
(e.g., fab delete_tor_agent_node:root@<ip>).
delete_tor_agent_by_id : Remove the specified tor-agent,
identified by tor-id
(e.g., fab delete_tor_agent_by_id:2,root@<ip>).
delete_tor_agent_by_index : Remove the specified tor-agent,
identified by index/position in testbed (e.g., fab
delete_tor_agent_by_index:0,root@<ip>).
delete_tor_agent_by_index_range : Remove a group of tor-agents,
identified by indices in testbed
(e.g., fab delete_tor_agent_by_index_range:0-2,root@<ip>).
setup_haproxy_config : provision HA Proxy.
To configure an existing compute node as a TSN or a TOR Agent, use the following
fab tasks:
fab add_tsn_node:True,user@<ip>
fab add_tor_agent_node:True,user@<ip>
Note that running fab setup_all with the updated testbed file also provisions these roles appropriately.
5. Vrouter limits on the TSN node must be configured to suit the scaling requirements
of the setup. The following can be updated in the testbed file, before the setup, so
that appropriate vrouter options are configured by the fab task.
env.vrouter_module_params = {
host4:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536',
'macs':'1000000'},
host5:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536',
'macs':'1000000'}
}
The following applies:
• mpls_labels = (max number of VNs * 3) + 4000
• nexthops = (max number of VNs * 4) + number of TORs + number of compute nodes + 100
• vrfs = max number of VNs
• macs = maximum number of MACs in a VN
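For example, a TSN sized for 1000 virtual networks, 20 TORs, and 100 compute nodes
(illustrative numbers) would need at least the following:
mpls_labels = (1000 * 3) + 4000 = 7000
nexthops = (1000 * 4) + 20 + 100 + 100 = 4220
vrfs = 1000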
On a TSN node or on a compute node, the currently configured limits can be seen using
vrouter --info. If they must be changed, update them by editing the
/etc/modprobe.d/vrouter.conf file with the desired values, as in the following, and
restarting the node.
options vrouter vr_mpls_labels=196000 vr_nexthops=521000 vr_vrfs=65536
vr_bridge_entries=1000000
Prerequisite Configuration for QFX5100 Series Switch
When using the Juniper Networks QFX5100 Series switches, ensure the following
configurations are made on the switch before configuring to extend the Contrail cluster.
1. Enable OVSDB.
2. Set the connection protocol.
3. Indicate the interfaces that will be managed via OVSDB.
4. Configure the controller in case pssl is used. If HAProxy is used, use the address of
the HAProxy node, and use the vIP when VRRP is used between multiple nodes running
HAProxy.
set interfaces lo0 unit 0 family inet address <loopback-ip>
set switch-options ovsdb-managed
set switch-options vtep-source-interface lo0.0
set protocols ovsdb interfaces <interface-name>
set protocols ovsdb passive-connection protocol tcp port <port>
set protocols ovsdb controller <tor-agent-ip> inactivity-probe-duration 10000
protocol ssl port <tor-agent-port>
5. When using SSL to connect, CA-signed certificates must be copied to the /var/db/certs
directory on the QFX device. One way to generate them is with the following commands
(which can be run on any server).
apt-get install openvswitch-common
ovs-pki init
ovs-pki req+sign vtep
scp vtep-cert.pem root@<qfx>:/var/db/certs
scp vtep-privkey.pem root@<qfx>:/var/db/certs
When the above are done, the cacert.pem file is available in
/var/lib/openvswitch/pki/switchca. This is the file to be provided in the testbed (in
env.ca_cert_file).
Debug QFX5100 Configuration
On the QFX, use the following commands to show the OVSDB configuration.
show ovsdb logical-switch
show ovsdb interface
show ovsdb mac
show ovsdb controller
show vlans
Using the agent introspect on the TOR agent and TSN nodes shows the configuration and
operational state of these modules.
The TSN module is like any other contrail-vrouter-agent on a compute node, with
introspect access available on port 8085 by default. Use the introspect on port 8085 to
view operational data such as interfaces, virtual network, and VRF information, along
with their routes.
The port on which the TOR agent introspect access is available is set in the configuration
file provided to contrail-tor-agent. The introspect provides the OVSDB data available
through the client interface, in addition to the other data available in a Contrail agent.
Changes to Agent Configuration File
Changes are made to the agent features in the configuration file. In the
/etc/contrail/contrail-vrouter-agent.conf file for a TSN, the agent_mode option is now
available in the DEBUG section to configure the agent to be in TSN mode:
agent_mode = tsn
The following are typical configuration items in a TOR agent configuration file:
[DEFAULT]
agent_name = noded2-1 # Name (formed with hostname and TOR id from below)
agent_mode = tor # Agent mode
http_server_port=9010 # Port on which Introspect access is available
[DISCOVERY]
server=<ip> # IP address of discovery server
[TOR]
tor_ip=<ip> # IP address of the TOR to manage
tor_id=1 # Identifier for TOR agent
tor_type=ovs # TOR management scheme - only "ovs" is supported
tor_ovs_protocol=tcp # IP-transport protocol used to connect to TOR, can be tcp or pssl
tor_ovs_port=port # OVS server port number on the TOR
tsn_ip=<ip> # IP address of the TSN
tor_keepalive_interval=10000 # keepalive timer in ms
ssl_cert=/etc/contrail/ssl/certs/tor.1.cert.pem # path to SSL certificate on TOR agent, needed for pssl
ssl_privkey=/etc/contrail/ssl/private/tor.1.privkey.pem # path to SSL private key on TOR agent, needed for pssl
ssl_cacert=/etc/contrail/ssl/certs/cacert.pem # path to SSL CA cert on the node, needed for pssl
REST APIs
For information regarding REST APIs for physical routers and physical and logical
interfaces, see “REST APIs for Extending the Contrail Cluster to Physical Routers, and
Physical and Logical Interfaces” on page 183.
Related Documentation
• REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces on page 183
• Using Device Manager to Manage Physical Routers on page 158
Configuring High Availability for the Contrail OVSDB TOR Agent
This topic describes how high availability can be configured for the Contrail TOR agent.
• Overview: High Availability for a TOR Switch on page 153
• High Availability Solution for Contrail TOR Agent on page 154
• Failover Methodology Description on page 155
• Failure Scenarios on page 155
• Redundancy for HAProxy on page 157
• Configuration for TOR Agent High Availability on page 157
• Testbed.py and Provisioning for High Availability on page 158
Overview: High Availability for a TOR Switch
Starting with Contrail Release 2.20, high availability can be configured for the Contrail
TOR agent.
When a top-of-rack (TOR) switch is managed through the Open vSwitch Database
Management (OVSDB) Protocol by using a TOR agent on Contrail, a high availability
configuration is necessary to maintain TOR agent redundancy. With TOR agent
redundancy, if the TOR agent responsible for a TOR switch is unable to act as the vRouter
agent for the TOR switch, due to any failure condition in the network or the node, then
another TOR agent takes over and manages the TOR switch.
TOR agent redundancy (high availability) for Contrail Release 2.20 and greater is achieved
using HAProxy. HAProxy is an open source, reliable solution that offers high availability
and proxy service for TCP applications. The solution uses HAProxy to initiate an SSL
connection from the TOR switch to the TOR agent. This configuration ensures that the
TOR switch is connected to exactly one active TOR agent at any given point in time.
High Availability Solution for Contrail TOR Agent
The following figure illustrates the method for achieving high availability for the TOR
agent in Contrail.
Figure 12: High Availability Solution for Contrail TOR Agent
The following describes the events shown in the figure:
• TOR agent redundancy is achieved using HAProxy.
• Two TOR agents are provisioned on different TSN nodes, to manage the same TOR switch.
• Both TOR agents created in the cluster are active and get the same information from the control node.
• HAProxy monitors these TOR agents.
• An SSL connection is established from the TOR switch to the TOR agent, via HAProxy.
• HAProxy selects one TOR agent to establish the SSL connection (e.g., TOR Agent 1 running on TSN A).
• Upon connection establishment, this TOR agent adds the ff:ff:ff:ff:ff:ff broadcast MAC in the OVSDB with its own TSN IP address.
• The TOR agent sends the MAC addresses of the bare metal servers learned by the TOR switch to the control node using XMPP.
• The control node reflects the addresses to other TOR agents and vRouter agents.
Failover Methodology Description
The TOR switch connects to the HAProxy that is configured to use one of the TOR agents
on the two TOR services nodes (TSNs). An SSL connection is established from the TOR
switch to the TOR agent, making that agent the active TOR agent. The active TOR agent
is responsible for managing the OVSDB on the TOR switch. It configures the OVSDB
tables based on the configuration. It advertises the MAC routes learned on the TOR switch
as Ethernet VPN (EVPN) routes to the Contrail controller. It also programs any routes
learned by means of EVPN over XMPP, southbound into OVSDB on the TOR switch.
The active TOR agent also advertises the multicast route (ff:ff:ff:ff:ff:ff) to the TOR
switch, ensuring that there is only one multicast route in OVSDB pointing to the active
TSN.
Both the TOR agents, active and standby, receive the same configuration from the control
node, and all routes are synchronized by means of BGP.
After the SSL connection is established, keepalive messages are exchanged between
the TOR switch and the TOR agent. The messages can be sent from either end and are
responded to from the other end. When any message exchange is seen on the connection,
the keepalive message is skipped for that interval. When the TOR switch sees that
keepalive has failed, it closes the current SSL session and attempts to reconnect. When
the TOR agent side sees that keepalive has failed, it closes the SSL session and retracts
the routes it exported to the control node.
Failure Scenarios
Whenever the HAProxy cannot communicate with the TOR agent, a new SSL connection
from the TOR switch is established to the other TOR agent.
HAProxy communication failures can occur under several scenarios, including:
• The node on which the TOR agent is running goes down or fails.
• The TOR agent crashes.
• A network or other issue prevents or interrupts HAProxy communication with the TOR agent.
Figure 13: Failure Scenarios
When a connection is established to the other TOR agent, the new TOR agent does the
following:
• Updates the multicast route in OVSDB to point to the new TSN.
• Gets all of the OVSDB entries.
• Audits the data with the configurations available.
• Updates the database.
• Exports entries from the OVSDB local table to the control node.
Because the configuration and routes from control node are already synchronized to the
new TOR Services Node (TSN), the new TSN can immediately act on the broadcast
traffic from the TOR switch. Any impact to the service is only for the time needed for the
SSL connection to be set up and for programming the multicast and unicast routes in
OVSDB.
When the SSL connection goes down, the TOR agent retracts the routes it exported. Also,
if the Extensible Messaging and Presence Protocol (XMPP) connection between the TOR
agent and the control node goes down, the control node removes the routes exported by
the TOR agent. In these scenarios, the entries from the OVSDB local table are retracted
and then added back from the new TOR agent.
Redundancy for HAProxy
In a high availability configuration, multiple HAProxy nodes are configured, with Virtual
Router Redundancy Protocol (VRRP) running between them. The TOR agents are
configured to use the virtual IP address of the HAProxy nodes to make the SSL connection
to the controller. The active TCP connections go to the virtual IP master node, which
proxies them to the chosen TOR agent. A TOR agent is chosen based on the number of
connections from the HAProxy to that node (the node with the lower number of connections
gets the new connection); this behavior can be controlled through HAProxy configuration.
Figure 14: Redundancy for HAProxy
If the HAProxy node fails, a standby node becomes the virtual IP master and sets up the
connections to the TOR agents. The SSL connections are reestablished following the
same methods discussed earlier.
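The following is a minimal sketch of an HAProxy stanza for this arrangement; the VIP,
TSN addresses, and port (9999, matching the tor_ovs_port example earlier) are
placeholders, and in practice the stanzas are generated during provisioning (for example,
by the setup_haproxy_config fab task).
listen contrail-tor-agent-1
    bind <vip>:9999
    mode tcp
    balance leastconn
    server tsn-a <tsn-a-ip>:9999 check
    server tsn-b <tsn-b-ip>:9999 check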
Configuration for TOR Agent High Availability
To get the required configuration downloaded from the control node to the TSN agent
and to the TOR agent, the physical router node must be linked to the virtual router nodes
that represent the two TOR agents and the two TSNs.
The Contrail Web user interface can be used to configure this. Go to Configure > Physical
Devices > Physical Routers and create an entry for the TOR switch, providing the TOR
switch IP address and the virtual tunnel endpoint (VTEP) address. The router name
should match the hostname of the TOR switch. Both TOR agents and their respective
TSN nodes can be configured here.
Testbed.py and Provisioning for High Availability
The same testbed configuration used for provisioning the TSN and TOR agents is used
to provision high availability. The redundant TOR agents should have the same tor_name
and tor_ovs_port in their respective stanzas, for them to be considered as a pair.
Related Documentation
• Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances on page 145
Using Device Manager to Manage Physical Routers
• Overview: Support for Physical Routers on page 158
• Configuration Model on page 159
• Alternate Ways to Configure a Physical Router on page 161
• Device Manager Configurations on page 161
• Prerequisite Configuration Required on MX Series Device on page 162
• Configuration Scenarios on page 162
• Device Manager Functionality on page 162
• Dynamic Tunnels on page 163
• Extending the Public Network on page 169
• Ethernet VPN Configuration on page 170
• Floating IP Addresses and Source Network Address Translation for Guest Virtual Machines and Bare Metal Servers on page 171
• Samples of Generated Configurations for an MX Series Device on page 179
Overview: Support for Physical Routers
There is a configuration node daemon named Device Manager, used to manage physical
routers in the Contrail system.
The Device Manager daemon listens to configuration events from the API server, creates
any necessary configurations for all physical routers it is managing, and programs those
physical routers.
In Contrail Release 2.10 and later, it is possible to extend a cluster to include physical
Juniper Networks MX Series routers and other physical routers that support the Network
Configuration (NETCONF) Protocol. You can configure physical routers to be part of any
of the virtual networks configured in the Contrail cluster, facilitating communication
between the physical routers and the Contrail control nodes. Contrail policy configurations
can be used to control this communication.
Configuration Model
Figure 15 on page 159 depicts the configuration model used in the system. The Physical
Router, Physical Interface, and Logical Interface all represent physical router entities.
Figure 15: Contrail Configuration Model
Configuring a Physical Router
The Contrail Web user interface can be used to configure a physical router into the Contrail
system. Select Configure > Physical Devices > Physical Routers to create an entry for the
physical router and provide the router's management IP address and user credentials.
The following shows how a Juniper Networks MX Series device can be configured from
the Contrail Web user interface.
Figure 16: Add Physical Router Window
Select Configure > Physical Devices > Interfaces to add the logical interfaces to be
configured on the router. The name of the logical interface must match the name on the
router (for example, ge-0/0/0.10).
Figure 17: Add Interface Window
Alternate Ways to Configure a Physical Router
You can also configure a physical router by using a Contrail REST API, see “REST APIs
for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces”
on page 183.
Device Manager Configurations
In Contrail Release 2.10 and later, Device Manager can configure all of the following on
a Juniper Networks MX Series device and other physical routers:
• Create configurations for physical interfaces and logical interfaces as needed.
• Create VRF table entries as needed by the configuration.
• Add interfaces to VRF tables as needed.
• Create public VRF tables corresponding to external virtual networks.
• Create BGP protocol configuration for internal or external BGP groups as needed, and add iBGP and eBGP peers in the appropriate groups.
• Program route-target import and export rules as needed by policy configurations.
• Create policies and firewalls as needed.
• Configure Ethernet VPNs (EVPNs).
Prerequisite Configuration Required on MX Series Device
Before using Device Manager to manage the configuration for an MX Series device, use
the following Junos CLI commands to enable NETCONF on the device:
set system services netconf ssh
set system services netconf traceoptions file nc
set system services netconf traceoptions flag all
Debugging Device Manager Configuration
If there is any failure during a Device Manager configuration, the failed configuration is
left on the MX Series device as a candidate configuration. An appropriate error message
is logged in the local system log by the Device Manager.
The log level in the Device Manager configuration file should be set to INFO for logging
NETCONF XML messages sent to physical routers.
Configuration Scenarios
This section presents different configuration scenarios and shows snippets of generated
MX Series configurations.
Configuring Physical Routers Using REST APIs
For information regarding configurations using REST APIs, see “REST APIs for Extending
the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces” on page 183.
Sample Python Script Using Rest API for Configuring an MX Device
Refer to the following link for a Python-based script for configuring required MX Series
device resources in the Contrail system, using the VNC Rest API provided by Contrail.
https://github.com/Juniper/contrail-controller/blob/master/src/config/utils/provision_physical_router.py
Device Manager Functionality
Device Manager auto configures physical routers when it detects associations in the
Contrail database.
The following naming conventions are used for generating MX Series router configurations:
• Device Manager generated configuration group name: __contrail__
• BGP groups:
  • Internal group name: __contrail__
  • External group name: __contrail_external__
• VRF name: _contrail_{l2|l3}_[vn-id]_[vn-name]
• NAT VRF name: _contrail_{l2|l3}_[vn-id]_[vn-name]-nat
• Import policy: [vrf-name]-import; Export policy: [vrf-name]-export
• Service set: sv-[vrf-name]
• NAT rules, SNAT: sv-[vrf-name]-sn-rule, DNAT: sv-[vrf-name]-dn-rule
• SNAT term name: term_[private_ip], DNAT term name: term_[public_ip]
• Firewall filters:
  • Public VRF filter: redirect_to_public_vrf_filter
  • Private VRF filter: redirect_to_[vrf_name]_vrf
• Logical interface unit numbers:
  • Service ports: 2*vn_id - 1, 2*vn_id
  • IRB interface: vn_id
Dynamic Tunnels
Dynamic tunnel configuration in Contrail VNC allows you to configure GRE tunnels on
the Contrail web user interface. When Contrail VNC detects this configuration, the device
manager module constructs GRE tunnel configuration and pushes it to the MX Series
router. A property named ip-fabric-subnets is used in the global system configuration of
the Contrail schema. Each IP fabric subnet and BGP router is configured as a dynamic
tunnel destination point in the MX Series router. The physical router data plane IP address
is considered the source address for the dynamic tunnel. You must configure the data
plane IP address for auto configuring dynamic tunnels on a physical router. Note that the
IP fabric subnets are a global configuration; all of them are configured on all the physical
routers in the cluster that have a data plane IP configuration.
The following naming conventions are used for generating the VNC API configuration:
• Global System Config: ip-fabric-subnets
• Physical Router: data-plane-ip
Web UI Configuration
Figure 18 on page 164 shows the web user interface used to configure dynamic tunnels.
Figure 18: Edit Global Config Window
In the Edit Global Config window the VTEP address is used for the data-plane-ip address.
The following is an example of the MX Series router configuration generated by the Device
Manager.
root@host# show groups __contrail__ routing-options
router-id 172.16.184.200;
route-distinguisher-id 10.87.140.107;
autonomous-system 64512;
dynamic-tunnels {
    __contrail__ {
        source-address 172.16.184.200;
        gre;
        destination-networks {
            172.16.180.0/24;
            172.16.180.8/32;
            172.16.185.200/32;
            172.16.184.200/32;
            172.16.180.5/32;
            172.16.180.7/32;
        }
    }
}
BGP Groups
When Device Manager detects BGP router configuration and its association with a physical
router, it configures BGP groups on the physical router.
Figure 19 on page 165 shows the web user interface used to configure BGP groups.
Figure 19: Edit BGP Router Window
Figure 20 on page 166 shows the web user interface used to configure the physical router.
Figure 20: Edit Physical Router Window for BGP Groups
The following is an example of the MX Series router configuration generated by the Device
Manager.
root@host# show groups __contrail__ protocols bgp
group __contrail__ {
    type internal;
    multihop;
    local-address 172.16.184.200;
    hold-time 90;
    keep all;
    family inet-vpn {
        unicast;
    }
    family inet6-vpn {
        unicast;
    }
    family evpn {
        signaling;
    }
    family route-target;
    neighbor 172.16.180.8;
    neighbor 172.16.185.200;
    neighbor 172.16.180.5;
    neighbor 172.16.180.7;
}
group __contrail_external__ {
    type external;
    multihop;
    local-address 172.16.184.200;
    hold-time 90;
    keep all;
    family inet-vpn {
        unicast;
    }
    family inet6-vpn {
        unicast;
    }
    family evpn {
        signaling;
    }
    family route-target;
}
Extending the Private Network
Device Manager allows you to extend a private network and its ports to a physical router.
When Device Manager detects the VNC configuration, it pushes the Layer 2 (EVPN) and
Layer 3 VRFs, the import and export rules, and the interface configuration to the physical
router.
Figure 21 on page 167 shows the web user interface for configuring the physical router for
extending the private network.
Figure 21: Edit Physical Router Window for Extending Private Networks
The following is an example of the MX Series router configuration generated by the Device
Manager.
/* L2 VRF */
root@host# show groups __contrail__ routing-instances _contrail_l2_147_vn_private-x1-63
vtep-source-interface lo0.0;
instance-type virtual-switch;
vrf-import _contrail_l2_147_vn_private-x1-63-import;
vrf-export _contrail_l2_147_vn_private-x1-63-export;
protocols {
    evpn {
        encapsulation vxlan;
        extended-vni-list all;
    }
}
bridge-domains {
    bd-147 {
        vlan-id none;
        routing-interface irb.147;
        vxlan {
            vni 147;
        }
    }
}
/* L3 VRF */
root@host# show groups __contrail__ routing-instances _contrail_l3_147_vn_private-x1-63
instance-type vrf;
interface irb.147;
vrf-import _contrail_l3_147_vn_private-x1-63-import;
vrf-export _contrail_l3_147_vn_private-x1-63-export;
vrf-table-label;
routing-options {
    static {
        route 1.0.63.0/24 discard;
    }
    auto-export {
        family inet {
            unicast;
        }
    }
}
/* L2 Import Policy */
root@host# ...cy-options policy-statement _contrail_l2_147_vn_private-x1-63-import
term t1 {
    from community target_64512_8000066;
    then accept;
}
then reject;
/* L2 Export Policy */
root@host# ...ail__ policy-options policy-statement _contrail_l2_147_vn_private-x1-63-export
term t1 {
    then {
        community add target_64512_8000066;
        accept;
    }
}
/* L3 Import Policy */
root@host# ...ail__ policy-options policy-statement _contrail_l3_147_vn_private-x1-63-import
term t1 {
    from community target_64512_8000066;
    then accept;
}
then reject;
/* L3 Export Policy */
root@host# ...ail__ policy-options policy-statement _contrail_l3_147_vn_private-x1-63-export
term t1 {
    then {
        community add target_64512_8000066;
        accept;
    }
}
Extending the Public Network
When a public network is extended to a physical router, a static route is configured on
the MX Series router that copies the next hop from the public.inet.0 routing table to the
inet.0 default routing table, and a forwarding table filter is configured to redirect traffic
from the inet.0 routing table to the public.inet.0 routing table. The filter is applied to all
packets being looked up in the inet.0 routing table and matches destinations that are in
the subnets of the public virtual network; the filter action performs the lookup in the
public.inet.0 routing table.
Figure 22 on page 169 shows the web user interface for extending the public network.
Figure 22: Edit Network Gateway Window
The following is an example of the MX Series router configuration generated by the Device
Manager.
/* Forwarding options */
root@host# show groups __contrail__ forwarding-options
family inet {
    filter {
        input redirect_to_public_vrf_filter;
    }
}
/* Firewall filter configuration */
root@host# show groups __contrail__ firewall family inet filter redirect_to_public_vrf_filter
term term-_contrail_l3_184_vn_public-x1- {
    from {
        destination-address {
            20.1.0.0/16;
        }
    }
    then {
        routing-instance _contrail_l3_184_vn_public-x1-;
    }
}
term default-term {
    then accept;
}
/* L3 VRF static route 0.0.0.0/0 configuration */
root@host# ...instances _contrail_l3_184_vn_public-x1- routing-options static route 0.0.0.0/0
next-table inet.0;
Ethernet VPN Configuration
For every private network, a Layer 2 Ethernet VPN (EVPN) instance is configured on the
MX Series router. If any Layer 2 interfaces are associated with the virtual network, logical
interfaces are also created under the bridge domain.
The following is an example of the MX Series router configuration generated by the Device
Manager.
root@host# show groups __contrail__ routing-instances _contrail_l2_147_vn_private-x1-63
vtep-source-interface lo0.0;
instance-type virtual-switch;
vrf-import _contrail_l2_147_vn_private-x1-63-import;
vrf-export _contrail_l2_147_vn_private-x1-63-export;
protocols {
    evpn {
        encapsulation vxlan;
        extended-vni-list all;
    }
}
bridge-domains {
    bd-147 {
        vlan-id none;
        interface ge-1/0/5.0;
        routing-interface irb.147;
        vxlan {
            vni 147;
        }
    }
}
Floating IP Addresses and Source Network Address Translation for Guest Virtual Machines and
Bare Metal Servers
This section describes a bare metal server deployment scenario in which the servers are
connected to a TOR QFX device inside a private network and an MX Series router is the
gateway for the public network connection.
The MX Series router provides the NAT capability that allows traffic from a public network
to enter a private network, and also allows traffic from the private network to reach the
public network. To do this, NAT rules must be configured on the MX Series router. Device
Manager programs these NAT rules on the MX Series routers when it detects that a bare
metal server is connected to a public network.
You must configure, in VNC, the TOR device, the MX Series router, the private network,
and the public network, including the floating IP address pool. Contrail detects this
configuration; when a logical interface on the TOR device is associated with a virtual
machine interface (VMI) and a floating IP address is assigned to the same VMI, the Device
Manager configures the necessary floating IP NAT rules on each of the MX Series routers
associated with the private network.
As illustrated in Figure 23 on page 172, the Device Manager configures two special logical
interfaces, called service ports, on the MX Series router for NAT translation from the
private network to the public network.
Figure 23: Logical Topology for Floating IP and SNAT
The Contrail VNC schema allows a user to specify a service port name using the VNC
API. The service port is a string attribute of the physical router object in the VNC schema.
The service port must be a physical link on the MX Series router, and its administrative
and operational states must be up. The Device Manager creates two logical interfaces
on this service port for each private virtual network and applies the NAT rules.
The private network routing instance on the MX Series router has a default static route
(0.0.0.0/0) with the next hop pointing to the inside service interface, and the public
network routing instance on the MX Series router has a route for the private IP prefix
with the next hop pointing to the outside service interface. The public-to-private NAT
rules and the reverse private-to-public rules are configured on the MX Series router.
A special routing instance is created on the MX Series router for each association of a
private network with one or more public networks. This VRF has two interfaces: one
allowing traffic to and from the public network, and another allowing traffic to and from
the private network. Firewall filters on the MX Series router are configured so that if the
public network has floating IP addresses associated with a guest VM (managed by the
Contrail vRouter), the Contrail vRouter performs the floating IP address functionality;
otherwise, the MX Series router performs the NAT functions to send and receive the
traffic to and from the bare metal server.
As illustrated in Figure 23 on page 172, you must create the necessary physical device,
interface, and virtual network configuration that is pushed to the MX Series router.
Contrail configuration can be done using the Web UI or the VNC API. The required
configuration is:
• Create the private virtual network.
• Create one or more TOR physical routers. (No Junos OS configuration needs to be
  pushed to these devices by Contrail, so set the vnc managed attribute to False.)
• Extend the private virtual network to the TOR device.
• Create physical and logical interfaces on the TOR device.
• Create the VMI on the private network for the bare metal server and associate the VMI
  with the logical interface. Doing so indicates that the bare metal server is connected
  to the TOR device through the logical interface. An instance IP address must be assigned
  to this VMI; the VMI uses a private IP address for the bare metal server.
• Create the gateway router. This is a physical router that is managed by the Device
  Manager.
• Configure the service-port parameter for the physical router. The service port is a
  physical interface on the MX Series router. Device Manager configures two logical
  service interfaces on the MX Series router for each private network associated with
  the device, and automatically configures NAT rules on these interfaces for the
  private-to-public IP address translation and SNAT rules for the opposite direction.
  The logical port ID is calculated incrementally from the virtual network ID allocated
  by Contrail VNC; two logical ports are required for each private network. For example,
  virtual network ID 147 yields service units 293 (2*147 - 1, inside) and 294 (2*147,
  outside), as shown in the configuration that follows.
• Associate the floating IP address: create the public network, a floating IP address
  pool, and a floating IP address in Contrail, and associate the floating IP address with
  the bare metal server's VMI (see the sketch after this list).
• Extend the private network and the public network to the physical router.
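The two key associations from this list (logical interface to VMI, and floating IP to VMI)
can also be made through the VNC Python API, as in the following hedged sketch; all
fq_names and the UUID here are placeholders, so substitute the objects from your own
cluster:

from vnc_api import vnc_api

vnc = vnc_api.VncApi('admin', 'password', 'admin',
                     api_server_host='127.0.0.1', api_server_port='8082')

# 1. Attach the bare metal server's VMI to the TOR logical interface.
li = vnc.logical_interface_read(fq_name=['default-global-system-config',
                                         'tor-1', 'ge-0/0/1', 'ge-0/0/1.0'])
vmi = vnc.virtual_machine_interface_read(
    id='4a2edbb8-b69e-48ce-96e3-7226c57e5241')
li.add_virtual_machine_interface(vmi)
vnc.logical_interface_update(li)

# 2. Associate a floating IP from the public pool with the same VMI.
fip = vnc.floating_ip_read(fq_name=['default-domain', 'demo',
                                    'vn_public-x1-', 'public-pool', 'fip-1'])
fip.add_virtual_machine_interface(vmi)
vnc.floating_ip_update(fip)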
When the required configuration is present in Contrail, the Device Manager pushes the
generated Junos OS configuration to the MX Series device.
/* NAT VRF configuration */
root@host# show groups __contrail__ routing-instances _contrail_l3_147_vn_private-x1-63-nat
instance-type vrf;
interface si-2/0/0.293;
vrf-import _contrail_l3_147_vn_private-x1-63-nat-import;
vrf-export _contrail_l3_147_vn_private-x1-63-nat-export;
vrf-table-label;
routing-options {
    static {
        route 0.0.0.0/0 next-hop si-2/0/0.293;
    }
    auto-export {
        family inet {
            unicast;
        }
    }
}
/* NAT VRF import policy */
root@host# ...y-statement _contrail_l3_147_vn_private-x1-63-nat-import
term t1 {
    from community target_64512_8000066;
    then accept;
}
then reject;
/* NAT VRF export policy */
root@host# ..._ policy-options policy-statement _contrail_l3_147_vn_private-x1-63-nat-export
term t1 {
    then reject;
}
/* The following additional configuration is generated for the public L3 VRF */
root@host# show groups __contrail__ routing-instances _contrail_l3_184_vn_public-x1-
interface si-2/0/0.294;
routing-options {
    static {
        route 20.1.252.8/32 next-hop si-2/0/0.294;
        route 20.1.252.9/32 next-hop si-2/0/0.294;
    }
}
/* Service set configuration */
root@host# show groups __contrail__
services {
    service-set sv-_contrail_l3_147_vn_ {
        nat-rules sv-_contrail_l3_147_vn_-sn-rule;
        nat-rules sv-_contrail_l3_147_vn_-dn-rule;
        next-hop-service {
            inside-service-interface si-2/0/0.293;
            outside-service-interface si-2/0/0.294;
        }
    }
}
/* Source NAT rules */
root@host# show groups __contrail__ services nat rule sv-_contrail_l3_147_vn_-sn-rule
match-direction input;
term term_1_0_63_248 {
    from {
        source-address {
            1.0.63.248/32;
        }
    }
    then {
        translated {
            source-prefix 20.1.252.8/32;
            translation-type {
                basic-nat44;
            }
        }
    }
}
term term_1_0_63_249 {
    from {
        source-address {
            1.0.63.249/32;
        }
    }
    then {
        translated {
            source-prefix 20.1.252.9/32;
            translation-type {
                basic-nat44;
            }
        }
    }
}
/* Destination NAT rules */
root@host# show groups __contrail__ services nat rule sv-_contrail_l3_147_vn_-dn-rule
match-direction output;
term term_20_1_252_8 {
    from {
        destination-address {
            20.1.252.8/32;
        }
    }
    then {
        translated {
            destination-prefix 1.0.63.248/32;
            translation-type {
                dnat-44;
            }
        }
    }
}
term term_20_1_252_9 {
    from {
        destination-address {
            20.1.252.9/32;
        }
    }
    then {
        translated {
            destination-prefix 1.0.63.249/32;
            translation-type {
                dnat-44;
            }
        }
    }
}
/* Public VRF filter */
root@host# show groups __contrail__ firewall family inet filter redirect_to_public_vrf_filter
term term-_contrail_l3_184_vn_public-x1- {
    from {
        destination-address {
            20.1.0.0/16;
        }
    }
    then {
        routing-instance _contrail_l3_184_vn_public-x1-;
    }
}
term default-term {
    then accept;
}
/* NAT VRF filter */
root@host# ...all family inet filter redirect_to__contrail_l3_147_vn_private-x1-63-nat_vrf
term term-_contrail_l3_147_vn_private-x1-63-nat {
    from {
        source-address {
            1.0.63.248/32;
            1.0.63.249/32;
        }
    }
    then {
        routing-instance _contrail_l3_147_vn_private-x1-63-nat;
    }
}
term default-term {
    then accept;
}
/* IRB interface for the NAT VRF */
root@host# show groups __contrail__ interfaces
irb {
    gratuitous-arp-reply;
    unit 147 {
        family inet {
            filter {
                input redirect_to__contrail_l3_147_vn_private-x1-63-nat_vrf;
            }
            address 1.0.63.254/24;
        }
    }
}
/* Service interfaces configuration */
root@host# show groups __contrail__ interfaces si-2/0/0
unit 293 {
    family inet;
    service-domain inside;
}
unit 294 {
    family inet;
    service-domain outside;
}
Samples of Generated Configurations for an MX Series Device
This section provides several scenarios and samples of MX Series device configurations
generated using the Python script provided.
Scenario 1: Physical Router With No External Networks
The following scenario describes the use case of a basic virtual network (vn), virtual
machine interface (vmi), logical interface (li), physical router (pr), and physical interface
(pi) configuration with no external virtual networks. When the Python script (as described
in the previous example) is executed with the parameters of this scenario, the following
configuration is applied on the MX Series physical router.
Script executed on the Contrail controller:
# python provision_physical_router.py --api_server_ip 127.0.0.1 --api_server_port
8082 --admin_user user1 --admin_password password1 --admin_tenant_name
default-domain --op add_basic
Generated configuration for MX Series device:
root@a2-mx80-2# show groups __contrail__
routing-options {
    route-distinguisher-id 10.84.63.133;
    autonomous-system 64512;
}
protocols {
    bgp {
        group __contrail__ {
            type internal;
            multihop;
            local-address 10.84.63.133;
            keep all;
            family inet-vpn {
                unicast;
            }
            family inet6-vpn {
                unicast;
            }
            family evpn {
                signaling;
            }
            family route-target;
        }
        group __contrail_external__ {
            type external;
            multihop;
            local-address 10.84.63.133;
            keep all;
            family inet-vpn {
                unicast;
            }
            family inet6-vpn {
                unicast;
            }
            family evpn {
                signaling;
            }
            family route-target;
        }
    }
}
policy-options {
    policy-statement __contrail__default-domain_default-project_vn1-export {
        term t1 {
            then {
                community add target_64200_8000008;
                accept;
            }
        }
    }
    policy-statement __contrail__default-domain_default-project_vn1-import {
        term t1 {
            from community target_64200_8000008;
            then accept;
        }
        then reject;
    }
    community target_64200_8000008 members target:64200:8000008;
}
routing-instances {
    __contrail__default-domain_default-project_vn1 {
        instance-type vrf;
        interface ge-1/0/5.0;
        vrf-import __contrail__default-domain_default-project_vn1-import;
        vrf-export __contrail__default-domain_default-project_vn1-export;
        vrf-table-label;
        routing-options {
            static {
                route 10.0.0.0/24 discard;
            }
            auto-export {
                family inet {
                    unicast;
                }
            }
        }
    }
}
Scenario 2: Physical Router With External Network, Public VRF
The scenario in this section describes the use case of vn, vmi, li, pr, and pi configuration
with an external virtual network (public VRF). When the Python script (as in the previous
examples) is executed with the parameters of this scenario, the following configuration
is applied on the MX Series physical router.
Script executed on the Contrail controller:
# python provision_physical_router.py --api_server_ip 127.0.0.1 --api_server_port
8082 --admin_user user1 --admin_password password1 --admin_tenant_name
default-domain --op add_basic --public_vrf_test True
Generated configuration for MX Series device:
The following additional configuration is pushed to the MX Series device, in addition to
the configuration generated in Scenario 1.
forwarding-options {
    family inet {
        filter {
            input redirect_to___contrail__default-domain_default-project_vn1_vrf;
        }
    }
}
firewall {
    filter redirect_to___contrail__default-domain_default-project_vn1_vrf {
        term t1 {
            from {
                destination-address {
                    10.0.0.0/24;
                }
            }
            then {
                routing-instance __contrail__default-domain_default-project_vn1;
            }
        }
        term t2 {
            then accept;
        }
    }
}
routing-instances {
    __contrail__default-domain_default-project_vn1 {
        routing-options {
            static {
                route 0.0.0.0/0 next-table inet.0;
            }
        }
    }
}
Scenario 3: Physical Router With External Network, Public VRF, and EVPN
The scenario in this section describes the use case of vn, vmi, li, pr, and pi physical router
configuration with an external virtual network (public VRF) and EVPN configuration.
When the Python script (as in the previous examples) is executed with the parameters
of this scenario, the following configuration is applied on the MX Series physical router.
Script executed on the Contrail controller:
# python provision_physical_router.py --api_server_ip 127.0.0.1 --api_server_port
8082 --admin_user user1 --admin_password password1 --admin_tenant_name
default-domain --op add_basic --public_vrf_test True --vxlan 2002
Generated configuration for MX Series device:
The following additional configuration is pushed to the MX Series device, in addition to
the configuration generated in Scenario 1.
protocols {
    mpls {
        interface all;
    }
}
firewall {
    filter redirect_to___contrail__default-domain_default-project_vn1_vrf {
        term t1 {
            from {
                destination-address {
                    10.0.0.0/24;
                }
            }
            then {
                routing-instance __contrail__default-domain_default-project_vn1;
            }
        }
        term t2 {
            then accept;
        }
    }
}
routing-instances {
    __contrail__default-domain_default-project_vn1 {
        vtep-source-interface lo0.0;
        instance-type virtual-switch;
        vrf-target target:64200:8000008;
        protocols {
            evpn {
                encapsulation vxlan;
                extended-vni-list all;
            }
        }
        bridge-domains {
            bd-2002 {
                vlan-id 2002;
                interface ge-1/0/5.0;
                routing-interface irb.2002;
                vxlan {
                    vni 2002;
                    ingress-node-replication;
                }
            }
        }
    }
}
Scenario 4: Physical Router With External Network, Public VRF, and Floating IP
Addresses for a Bare Metal Server
The scenario in this section describes the use case of vn, vmi, li, pr, and pi physical router
configuration with an external virtual network (public VRF) and floating IP addresses
for a bare metal server.
Script executed on the Contrail controller:
# python provision_physical_router.py --api_server_ip 127.0.0.1 --api_server_port
8082 --admin_user admin --admin_password <password> --admin_tenant_name
default-domain --op {fip_test|delete_fip_test}
Related Documentation
• REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and
  Logical Interfaces on page 183
• Using Device Manager to Manage Physical Routers on page 158
REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and
Logical Interfaces
Introduction: REST APIs for Extending Contrail Cluster
Use the following REST APIs when extending the Contrail cluster to include physical
routers, physical interfaces, and logical interfaces.
REST API for Physical Routers
Use the following REST API when extending the Contrail cluster to include physical
routers.
{
    u'physical-router': {
        u'physical_router_management_ip': u'100.100.100.100',
        u'virtual_router_refs': [],
        u'fq_name': [
            u'default-global-system-config',
            u'test-router'
        ],
        u'name': u'test-router',
        u'physical_router_vendor_name': u'juniper',
        u'parent_type': u'global-system-config',
        u'virtual_network_refs': [],
        'id_perms': {
            u'enable': True,
            u'uuid': None,
            u'creator': None,
            u'created': 0,
            u'user_visible': True,
            u'last_modified': 0,
            u'permissions': {
                u'owner': u'cloud-admin',
                u'owner_access': 7,
                u'other_access': 7,
                u'group': u'cloud-admin-group',
                u'group_access': 7
            },
            u'description': None
        },
        u'bgp_router_refs': [],
        u'physical_router_user_credentials': {
            u'username': u'',
            u'password': u''
        },
        'display_name': u'test-router',
        u'physical_router_dataplane_ip': u'101.1.1.1'
    }
}
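For illustration, a payload like the one above can be posted to the Contrail API server
with a short Python sketch. This is a hedged example: the API server address is a
placeholder, the body is abbreviated to a few fields, and Keystone authentication headers,
if enabled in your cluster, are omitted.

import json
import requests

# Abbreviated physical-router body, as shown above.
payload = {
    "physical-router": {
        "fq_name": ["default-global-system-config", "test-router"],
        "parent_type": "global-system-config",
        "physical_router_management_ip": "100.100.100.100",
        "physical_router_vendor_name": "juniper",
        "physical_router_dataplane_ip": "101.1.1.1"
    }
}
# The API server listens on port 8082 by default; collection URLs are plural.
resp = requests.post('http://127.0.0.1:8082/physical-routers',
                     data=json.dumps(payload),
                     headers={'Content-Type': 'application/json'})
print(resp.status_code, resp.text)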
REST API for Physical Interfaces
Use the following REST API when extending the Contrail cluster to include physical
interfaces.
{
    u'physical-interface': {
        u'parent_type': u'physical-router',
        'id_perms': {
            u'enable': True,
            u'uuid': None,
            u'creator': None,
            u'created': 0,
            u'user_visible': True,
            u'last_modified': 0,
            u'permissions': {
                u'owner': u'cloud-admin',
                u'owner_access': 7,
                u'other_access': 7,
                u'group': u'cloud-admin-group',
                u'group_access': 7
            },
            u'description': None
        },
        u'fq_name': [
            u'default-global-system-config',
            u'test-router',
            u'ge-0/0/1'
        ],
        u'name': u'ge-0/0/1',
        'display_name': u'ge-0/0/1'
    }
}
REST API for Logical Interfaces
Use the following REST API when extending the Contrail cluster to include logical
interfaces.
{
    u'logical-interface': {
        u'fq_name': [
            u'default-global-system-config',
            u'test-router',
            u'ge-0/0/1',
            u'ge-0/0/1.0'
        ],
        u'parent_uuid': u'6608b8ef-9704-489d-8cbc-fed4fb5677ca',
        u'logical_interface_vlan_tag': 0,
        u'parent_type': u'physical-interface',
        u'virtual_machine_interface_refs': [
            {
                u'to': [
                    u'default-domain',
                    u'demo',
                    u'4a2edbb8-b69e-48ce-96e3-7226c57e5241'
                ]
            }
        ],
        'id_perms': {
            u'enable': True,
            u'uuid': None,
            u'creator': None,
            u'created': 0,
            u'user_visible': True,
            u'last_modified': 0,
            u'permissions': {
                u'owner': u'cloud-admin',
                u'owner_access': 7,
                u'other_access': 7,
                u'group': u'cloud-admin-group',
                u'group_access': 7
            },
            u'description': None
        },
        u'logical_interface_type': u'l2',
        'display_name': u'ge-0/0/1.0',
        u'name': u'ge-0/0/1.0'
    }
}
Related Documentation
• Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances on
  page 145
• Using Device Manager to Manage Physical Routers on page 158
CHAPTER 7
Installing and Using Contrail Storage
• Installing and Using Contrail Storage on page 189

Installing and Using Contrail Storage

• Overview of the Contrail Storage Solution on page 189
• Basic Storage Functionality with Contrail on page 190
• Ceph Block and Object Storage Functionality on page 190
• Using the Contrail Storage User Interface on page 191
• Hardware Specifications on page 192
• Software Files for Compute Storage Nodes on page 192
• Contrail OpenStack Nova Modifications on page 192
• Installing the Contrail Storage Solution on page 193
• Using Fabric Commands to Install and Configure Storage on page 193
• Fabric Installation Procedure on page 194
• Using Server Manager to Install and Configure Storage on page 196
• Server Manager Installation Procedure for Storage on page 196
• Example Configurations for Storage for Reimaging and Provisioning a Server on page 197
• Storage Installation Limits on page 203
Overview of the Contrail Storage Solution
Starting with Contrail Release 2.00, Contrail provides a storage support solution using
OpenStack Cinder configured to work with Ceph. Ceph is a unified, distributed storage
system whose infrastructure provides storage services to Contrail. The Contrail solution
provides a validated Network File System (NFS) storage service; however, it does not
use the CephFS distributed file system.
The Contrail storage solution has the following features:
• Provides storage class features to Contrail clusters, including replication, reliability,
  and robustness.
• Uses open source components.
• Uses Ceph block and object storage functionality.
• Integrates with OpenStack Cinder functionality.
• Does not require virtual machines (VMs) to configure mirrors for replication.
• Allows nodes to provide both compute and storage services.
• Provides easy installation of basic storage functionality based on Contrail roles.
• Provides services necessary to perform virtual machine migrations between compute
  nodes, and supports both migratable and non-migratable virtual machines.
• Provides a Contrail-integrated user interface from which the user can monitor Ceph
  components and drill down for more information about components.
Basic Storage Functionality with Contrail
The following are the basic interaction points between Contrail and the storage solution:
• Cinder volumes must be manually configured prior to installing the Contrail storage
  solution. The Cinder volumes can be attached to virtual machines (VMs) to provide
  additional storage.
• The storage solution stores virtual machine boot images and snapshots in Glance,
  using Ceph object storage functionality.
• All storage nodes can be monitored through a graphical user interface (GUI).
• It is possible to migrate virtual machines that have ephemeral storage in Ceph.
Ceph Block and Object Storage Functionality
Installing the Contrail storage solution creates the following Ceph configurations:
• Each disk is configured as a standalone storage device, which enhances performance
  and creates proper failure boundaries. Ceph allocates and assigns a process called
  an object storage daemon (OSD) to each disk.
• A replication factor of 2 is configured, consisting of one original instance plus one
  replica copy. Ceph ensures that each replica is on a different storage node.
• A Ceph monitor process (mon) is configured on each storage node.
• The correct number of placement groups is automatically configured, based on the
  number of disk drives in the cluster.
• Properly identified SSD drives are set up for use as Ceph OSD journals to reduce write
  latencies.
• An NFS server is created in a virtual machine within the cluster to support virtual
  machine migration. The NFS file system is mounted on all storage nodes, and every
  storage node has a shared Nova directory under /var/lib/nova/instances. By default,
  this NFS file system is configured to utilize 30% of the total initial Contrail storage
  capacity.
Using the Contrail Storage User Interface
The Contrail storage solution provides a user interface integrated into the Contrail user
interface. The storage solution user interface displays the following:
• Customer usable space, which is different from Ceph total space. The displayed usable
  space does not include the space used by replication and other Ceph functions.
• Monitoring of OSDs (disks), monitor processes (MON), and state changes, enabling
  quick identification of resource failures within storage components.
• Total cluster I/O statistics and individual drive statistics.
• Ceph-specific information about each OSD (disk).
• Ceph logs, Ceph nodes, and Ceph alerts.
Use Monitor > Infrastructure > Dashboard to get an “at-a-glance” view of the system
infrastructure components, including the numbers of virtual routers, control nodes,
analytics nodes, config nodes, and storage nodes currently operational, and a bubble
chart of storage nodes showing the Available (%) and Total storage (GB). See the
following figure.
Bubble charts use the following color-coding scheme for storage nodes:
• Blue: working as configured.
• Red: error, the node is down.
• Yellow: one of the node's disks is down.
Select Monitor > Storage > Dashboard to see a summary of cluster health, usage, pools,
and disk status, and to gain insight into activity statistics for all nodes. See the following
figure.
Hardware Specifications
The following are additional hardware specifications needed for the Contrail storage
solution.
Additional minimum specifications:
• Two 500 GB, 7200 RPM drives in the server 4 and server 5 cluster positions (those
  with the compute storage role) in the Contrail installation. This configuration provides
  1 TB of clustered, replicated storage.

Recommended compute storage configuration:
• For every 4-5 HDD devices on one compute storage node, use one SSD device to provide
  the OSD journals for that set of HDD devices.
Software Files for Compute Storage Nodes
The Contrail storage solution is only supported with the Ubuntu operating system.
For each compute storage node, ensure the following software is downloaded:
• The storage Debian package: contrail-storage-packages_x.xx-xx~xxxxxx_all.deb
• The NFS VM qcow2 image from Juniper.
Contrail OpenStack Nova Modifications
Contrail's OpenStack Nova function has been modified to spawn both migratable and
non-migratable virtual machines.
• Nova's typical virtual machine storage directory, /var/lib/nova/instances, is used for
  the ephemeral storage of non-migratable virtual machines.
• Contrail storage creates a new directory, /var/lib/nova/instances/global, used for the
  ephemeral storage of migratable virtual machines. The /var/lib/nova/instances/global
  directory must be mounted on a shared storage device (NFS with Contrail Storage)
  that is accessible from all the compute nodes.
• To start a non-migratable virtual machine with the Nova CLI command nova boot,
  the additional argument --meta storage_scope=local must be provided.
• To start a migratable virtual machine with nova boot, the additional argument --meta
  storage_scope=global must be provided, as shown in the example after this list. To
  force Nova and the Horizon UI to spawn migratable virtual machines by default, the
  storage scope must be set to global. This task is described in the next section.
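For example, a migratable virtual machine could be started as follows (the image and
flavor names here are placeholders):

source /etc/contrail/openstackrc
nova boot --image ubuntu-image --flavor m1.small --meta storage_scope=global my-vm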
Installing the Contrail Storage Solution
The Contrail storage solution can be installed using the same tools used to install Contrail,
either by using Fabric (fab) commands or by using the Contrail Server Manager.
Both installation methods are described in the following sections.
Installation Notes
• When installing a base operating system on any compute storage node, the operating
  system must be installed only on a single drive. The other drives must be configured
  as individual devices, and should not be concatenated together in a logical volume
  manager (LVM) device.
• For best performance, it is recommended to use solid state devices (SSD) for the Ceph
  OSD journals. Each SSD device can provide OSD journal support for 3-6 HDD OSD
  devices, depending on the model of the SSD device. Most SSD devices can support
  up to 4 HDDs, assuming the HDDs are running at capacity.
Using Fabric Commands to Install and Configure Storage
Use the information in this section to install storage using Fabric (fab) commands.
When installing the operating system on a compute storage node, install the operating
system on a single drive and leave all other drives as unbundled.
Installing the Contrail storage solution with Fabric commands provides the following:
• Base Ceph block device and object support.
• Easy configuration of SSD devices for OSD journals.
• Virtual machine migration support.
• Limited Cinder multi-backend support.
Cautions
Before installing, ensure the following:
• Manually ensure that the UID and GID of the Nova user are identical on all compute
  nodes before provisioning any software.
• Manually ensure that the time is identical on all nodes by configuring NTP.
Fabric Installation Procedure
This section provides guidelines and steps for using Fabric (fab) commands to install
the Contrail storage solution. The installation is similar to a regular Contrail fab
installation; however, you will define additional storage information in the testbed.py
file, including:
• Define the new roles: storage-master and storage-compute.
• Define how each additional non-root drive is used in the cluster.
• Define potential additional virtual machine migration variables.
• Copy and install the additional storage package to systems.
1. Install the storage Debian package on all nodes:
   fab install_storage_pkg_all:/YYYY/contrail-storage-package-XXX.deb
2. After issuing fab install_contrail, issue fab install_storage.
3. After issuing fab setup_all, issue fab setup_storage.
4. If Contrail-based live virtual machine migration needs to be enabled, issue fab
   setup_nfs_livem or fab setup_nfs_livem_global, as described in the following.

   NOTE: If virtual machine migration is not needed, do not issue either command.

   • Use fab setup_nfs_livem to store the virtual machines' ephemeral storage on local
     drives.
   • Use fab setup_nfs_livem_global to store the virtual machines' ephemeral storage
     within Contrail's storage (using Ceph). This command sets the cluster storage scope
     to global.
5. Add the two new Contrail storage roles: storage-compute and storage-master.
   • Define the storage-master role on all nodes running OpenStack. Although Ceph has
     no notion of a master, define this role because Ceph must be run on the node that
     runs the OpenStack software. OpenStack nodes typically do not have any cluster
     storage defined, only local storage.
   • The storage-compute role is an add-on role, which means that compute nodes have
     the option of providing storage functionality. Standalone storage nodes are not
     supported.
6. Change the testbed.py details as needed for your environment. In the base
   configuration, define the storage_node_config, which gives the device details. See
   the following example.

   storage_node_config = {
       host2 : { 'disks' : ['/dev/sdb', '/dev/sdc'], 'ssd-disks': ['/dev/sdd', '/dev/sde'],
                 'local-disks' : ['/dev/sdf', '/dev/sdh'],
                 'nfs' : ['10.87.140.156:/test', '10.87.140.156:/test1']},
       host3 : { 'disks' : ['/dev/sdb:/dev/sde', '/dev/sdc:/dev/sde', '/dev/sdd:/dev/sde',
                            '/dev/sdf:/dev/sdj', '/dev/sdg:/dev/sdj', '/dev/sdh:/dev/sdj',
                            '/dev/sdi:/dev/sdj'],
                 'local-ssd-disks' : ['/dev/sdk', '/dev/sdl']},
   }
Available device details parameters include:
• disks and ssd-disks are Ceph disks.
• local-disks and local-ssd-disks are LVM disks.
• host2 in the example shows all the storage types that can be configured using Cinder
  multi-backend:
  • disks is a list of HDD disks used for a Ceph HDD pool.
  • ssd-disks is a list of SSD disks used for a Ceph SSD pool.
  • local-disks is a list of disks used for local LVM storage.
  • nfs is an NFS device.
• host3 in the example is a more typical configuration:
  • /dev/sde and /dev/sdj are SSD disks that are used as OSD journals for the other
    HDD drives.
  • local-ssd-disks is a list of disks used for local SSD LVM storage.
7. Add virtual machine migration as needed, using the following parameters.

   live_migration = True
   ceph_nfs_livevm = True
   ceph_nfs_livem_subnet = '<ip address subnet>' # Private subnet for the live migration VM
   ceph_nfs_livem_image = '/Ubuntu/libmnfs.qcow2' # Path of the live migration qcow2 image, provided by Juniper
   ceph_nfs_livem_host = host3 # Host on which the NFS VM will run

   For external NFS server based live migration, use the following configuration, valid
   for Contrail Release 2.20 and greater.

   live_migration = True
   ext_nfs_livevm = True
   ext_nfs_livem_mount = '10.10.10.10:/nfsmount' # External NFS server mount path

   NOTE: When using an external NFS server, make sure the NFS server maps the UIDs
   and GIDs correctly and provides read/write access for all the UIDs. If there is any
   permission-related issue, either the VM launch will error out or the live migration
   will fail with permission errors.
Using Server Manager to Install and Configure Storage
This section provides notes and guidelines for installing the storage solution using the
Contrail Server Manager. Installing the Contrail storage solution using Server Manager
provides:
• Base Ceph block device and object support.
• Easy configuration of SSD journals.
• Support for live migration configuration, starting with Contrail Release 2.10.
Before installing the base operating system with Server Manager, ensure that the compute
storage nodes have been configured with single operating system device installs.
Cautions
• Virtual machine migration support uses a fixed IP address (192.168.101.3) for the
  livemnfs virtual machine, starting with Contrail Release 2.10.
• There is no Cinder multi-backend support.
• There is no support for single-server provisioning; the entire cluster must be
  provisioned.
Server Manager Installation Procedure for Storage
This section provides notes and guidelines if you choose to install the storage solution
using the Contrail Server Manager.
1. Upload the storage package:

   server-manager add image -f <filename.json>

   where <filename.json> has content similar to the following example:

   {
       "image": [
           {
               "id": "contrail-storage-packages_1.10-xx~xxxxxx_all",
               "parameters": "{}",
               "path": "/store/contrail-storage-packages_1.10-xx~xxxxxx_all.deb",
               "type": "contrail-storage-ubuntu-package",
               "version": "1.10-xx"
           }
       ]
   }
2. Run ceph-authtool if you need to generate unique keys for administration, monitor,
   and OSD.

   a. To install ceph-authtool on CentOS:
      yum install http://ceph.com/rpm/el6/x86_64/ceph-common-0.80.5-0.el6.x86_64.rpm

   b. To install ceph-authtool on Ubuntu:
      apt-get install ceph-common

   c. Run this command once for each key:
      ceph-authtool --gen-print-key

   d. Add the generated keys to the cluster.json file:

      "storage_mon_secret": "AQBDCdpTsB5FChAAOzI2++uosfmtj7tjmhPu0g==",
      "osd_bootstrap_key": "AQBKCdpTmN+HGRAAl6rmStq5iYoPnANzSXLcXA==",
      "admin_key": "AQBLCdpTuOS6FhAAfDW0SsdzyDAUeuwOr/h61A=="

   e. Starting with Release 2.10, add the live-migration configuration to the cluster.json
      file, if you are using live migration:

      "live_migration": "enable",
      "live_migration_nfs_vm_host": "compute-node-01",
      "live_migration_storage_scope": "global",
Example Configurations for Storage for Reimaging and Provisioning a Server
Use the following example configurations as guidelines for reimaging and provisioning
a server for storage. Examples are given for releases prior to Release 2.10 and for
Release 2.10 and greater.
1. Define storage in the cluster. The following example configurations show the new
   key-value pairs added to the configuration. The "cluster" section should appear similar
   to the following when storage is defined in a cluster.
Example: Storage and key-value pairs defined in releases prior to 2.10:

{
    "cluster" : [
        {
            "id" : "demo-cluster",
            "parameters" : {
                "router_asn": "<asn>",
                "database_dir": "/home/cassandra",
                "database_token": "",
                "use_certificates": "False",
                "multi_tenancy": "False",
                "encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
                "service_token": "<password>",
                "keystone_user": "admin",
                "keystone_password": "<password>",
                "keystone_tenant": "admin",
                "analytics_data_ttl": "168",
                "subnet_mask": "<ip address>",
                "gateway": "<ip address>",
                "password": "<password>",
                "haproxy": "disable",
                "external_bgp": "",
                "domain": "demo.company.net",
                "storage_mon_secret": "$ABC123",
                "osd_bootstrap_key": "$ABC123",
                "admin_key": "$ABC123"
            }
        }
    ]
}
Example: Storage and key-value pairs defined in releases 2.10 and greater:

{
    "cluster" : [
        {
            "id" : "demo-cluster",
            "parameters" : {
                "router_asn": "<asn>",
                "database_dir": "/home/cassandra",
                "database_token": "",
                "use_certificates": "False",
                "multi_tenancy": "False",
                "encapsulation_priority": "MPLSoUDP,MPLSoGRE,VXLAN",
                "service_token": "<password>",
                "keystone_user": "admin",
                "keystone_password": "<password>",
                "keystone_tenant": "admin",
                "analytics_data_ttl": "168",
                "subnet_mask": "<ip address>",
                "gateway": "<ip address>",
                "password": "<password>",
                "haproxy": "disable",
                "external_bgp": "",
                "domain": "demo.company.net",
                "storage_mon_secret": "$ABC123",
                "osd_bootstrap_key": "$ABC123",
                "admin_key": "$ABC123",
                "live_migration" : "enable",
                "live_migration_nfs_vm_host": "compute-host-01",
                "live_migration_storage_scope": "global"
            }
        }
    ]
}
2. Add the disks key, the storage-compute role value, and the storage_repo_id.
   • The storage_repo_id key must be added to servers with the storage-master or
     storage-compute roles.
   • The disks key-value pair must be added to servers with the storage-compute role.
   • The storage-master value must be added to the roles key for the server that has
     the storage-master role.
   • The storage-compute value must be added to the roles key for the servers that
     have the storage-compute role.

   The following server section is an example showing the new keys storage_repo_id
   and disks, and the new values storage-compute and storage-master. In the example,
   one server has the storage-compute role and three HDD drives (/dev/sdb, /dev/sdc,
   /dev/sdd), supporting three OSDs. Each OSD uses one partition of an SSD drive
   (/dev/sde) as its OSD journal. The Server Manager software correctly partitions
   /dev/sde and assigns one partition to each OSD. The storage_repo_id contains the
   base name of the Contrail storage package that has been added as an image to
   Server Manager.
Example: Server.json updates defined in releases prior to 2.10:

{"server": [
    {
        "id": "demo2-server",
        "mac_address": "<mac address>",
        "ip_address": "<ip address>",
        "parameters" : {
            "interface_name": "eth1",
            "compute_non_mgmt_ip": "",
            "compute_non_mgmt_gway": "",
            "storage_repo_id": "contrail-storage-packages",
            "disks": ["/dev/sdb:/dev/sde", "/dev/sdc:/dev/sde", "/dev/sdd:/dev/sde"]
        },
        "roles" : ["config","openstack","control","compute","collector","webui","database","storage-compute","storage-master"],
        "cluster_id": "demo-cluster",
        "subnet_mask": "<ip address>",
        "gateway": "<ip address>",
        "password": "<password>",
        "domain": "demo.company.net",
        "email": "id@company.net"
    }]
}
Example: Server.json updates defined in releases 2.10 and greater:

{"server": [
    {
        "id": "demo2-server",
        "mac_address": "<mac address>",
        "ip_address": "<ip address>",
        "parameters" : {
            "interface_name": "eth1",
            "compute_non_mgmt_ip": "",
            "compute_non_mgmt_gway": "",
            "storage_repo_id": "contrail-storage-packages",
            "disks": ["/dev/sdb:/dev/sde", "/dev/sdc:/dev/sde", "/dev/sdd:/dev/sde"]
        },
        "roles" : ["config","openstack","control","compute","collector","webui","database","storage-compute","storage-master"],
        "contrail": {
            "control_data_interface": "p3p2"
        },
        "network": {
            "interfaces": [
                {
                    "default_gateway": "<ip address>",
                    "dhcp": true,
                    "ip_address": "<ip address>",
                    "mac_address": "<mac address>",
                    "member_interfaces": "",
                    "name": "eth1",
                    "tor": "",
                    "tor_port": "",
                    "type": "physical"
                },
                {
                    "default_gateway": "<ip address>",
                    "dhcp": "",
                    "ip_address": "<ip address>",
                    "mac_address": "<mac address>",
                    "member_interfaces": "",
                    "name": "p3p2",
                    "tor": "",
                    "tor_port": "",
                    "type": "physical"
                }
            ],
            "management_interface": "eth1"
        },
        "cluster_id": "demo-cluster",
        "subnet_mask": "<ip address>",
        "gateway": "<ip address>",
        "password": "<password>",
        "domain": "demo.company.net",
        "email": "id@company.net"
    }]
}
3. Use the following command to provision the entire cluster:

   # /opt/contrail/server_manager/client/server-manager -c /opt/contrail/server_manager/smgr_config.ini provision --cluster_id test-cluster contrail_test_pkg
Storage Installation Limits
General Limitations:
• Minimum number of storage nodes to configure: 2.
• The number of storage nodes should always be an even number (2, 4, 12, 22, and so on).

Fab Storage Install Limitations:
There are no limitations to installation when using fab commands.

Server Manager Storage Install Limitations:
• There is no integrated way to add OSDs or drives to a storage node.
• There is no integrated way to add new storage nodes to a cluster.
• Provisioning a single server is not supported. You can add a server to Server Manager
  and then provision the entire cluster.
• The live migration overlay network is preset to use 192.168.101.0/24.
• The user must copy the image livemnfs.qcow2.gz to the folder
  /var/www/html/contrail/images before provisioning live migration using Server
  Manager, for example as shown below.
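For example (the source path here is a placeholder):

cp /path/to/livemnfs.qcow2.gz /var/www/html/contrail/images/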
CHAPTER 8
Upgrading Contrail Software
• Upgrading Contrail Software from Release 2.00 or Greater to Release 2.20 on page 205
• Adding or Removing a Compute Node in an Existing Contrail Cluster on page 208
• DKMS for vRouter Kernel Module on page 209
Upgrading Contrail Software from Release 2.00 or Greater to Release 2.20
Use the following procedure to upgrade an installation of Contrail software from one
release to a more recent release. This procedure is valid starting from Contrail Release
2.00 and greater.
NOTE: If you are installing Contrail for the first time, refer to the full
documentation and installation instructions in “Installing the Operating
System and Contrail Packages” on page 13.
Instructions are given for both CentOS and Ubuntu versions. The only Ubuntu versions
supported for upgrading are Ubuntu 12.04 and 14.04.2.
To upgrade Contrail software from Contrail Release 2.00 or greater:
1. Download the file contrail-install-packages-x.xx-xxx.xxx.noarch.rpm (CentOS) or the
   corresponding .deb (Ubuntu) from
   http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the /tmp
   directory on the config node, as follows:

   CentOS: scp <id@server>:/path/to/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm /tmp

   Ubuntu: scp <id@server>:/path/to/contrail-install-packages-x.xx-xx~havana_all.deb /tmp

   NOTE: The variables x.xx-xxx and so on represent the release and build numbers
   that are present in the names of the installation packages that you download.

2. Install the contrail-install-packages, using the correct command for your operating
   system:

   CentOS: yum localinstall /tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm

   Ubuntu: dpkg -i /tmp/contrail-install-packages_x.xx-xxx~icehouse_all.deb
3. Set up the local repository by running setup.sh:

   cd /opt/contrail/contrail_packages; ./setup.sh

4. Ensure that the testbed.py file that was used to set up the cluster with Contrail is
   intact at /opt/contrail/utils/fabfile/testbeds/.
   • Ensure that testbed.py has been set up with a combined control_data section
     (required as of Contrail Release 1.10).
   • Ensure that the do_parallel flag is set to True in the testbed.py file; see bug 1426522
     in Launchpad.net.

   See "Populating the Testbed Definitions File" on page 16.
5. Upgrade the software, using the correct set of commands to match your operating
   system and vRouter, as described in the following.

   Change directory to the utils folder:

   cd /opt/contrail/utils

   Select the correct upgrade procedure from the following to match your operating
   system and vRouter. In the following, <from> refers to the currently installed release
   number, such as 2.0, 2.01, or 2.1.

   CentOS Upgrade Procedure:

   fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm;

   Ubuntu 12.04 Upgrade Procedure:

   fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;

   Ubuntu 14.04 Upgrade, Two Procedures:

   There are two different procedures for upgrading an Ubuntu 14.04-based system to
   Contrail Release 2.20, depending on which vRouter (contrail-vrouter-3.13.0-35-generic
   or contrail-vrouter-dkms) is installed in your current setup. As of Contrail Release
   2.20, the recommended kernel version for an Ubuntu 14.04-based system is 3.13.0-40.
   Both procedures use the command fab upgrade_kernel_all to upgrade the kernel.

   Ubuntu 14.04 Upgrade Procedure for a System with contrail-vrouter-3.13.0-35-generic:

   Use the following upgrade procedure for Contrail Release 2.20 systems based on
   Ubuntu 14.04 with contrail-vrouter-3.13.0-35-generic installed. The command
   sequence upgrades the kernel version and also reboots the compute nodes when
   finished.

   fab install_pkg_all:/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
   fab migrate_compute_kernel;
   fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
   fab upgrade_kernel_all;
   fab restart_openstack_compute;

   Ubuntu 14.04 Upgrade Procedure for a System with contrail-vrouter-dkms:

   Use the following upgrade procedure for Contrail Release 2.20 systems based on
   Ubuntu 14.04 with contrail-vrouter-dkms installed. The command sequence upgrades
   the kernel version and also reboots the compute nodes when finished.

   fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;

   All nodes in the cluster can be upgraded to kernel version 3.13.0-40 by using the
   following fab command:

   fab upgrade_kernel_all;
6. On the OpenStack node, soft reboot all of the virtual machines. You can do this in
   the OpenStack dashboard, or log in to the node that has the openstack role and issue
   the following commands:

   source /etc/contrail/openstackrc ; nova reboot <vm-name>

   You can also use the following fab command to reboot all virtual machines:

   fab reboot_vm

7. Check to ensure that the nova-novncproxy service is still running:

   service nova-novncproxy status

   If necessary, restart the service:

   service nova-novncproxy restart
8. (For the Contrail Storage option only.) Contrail Storage has its own packages. To
   upgrade Contrail Storage, download the file contrail-storage-packages_x.x-xx*.deb
   from http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the
   /tmp directory on the config node, as follows:

   Ubuntu: scp <id@server>:/path/to/contrail-storage-packages_x.x-xx*.deb /tmp

   NOTE: Use only Icehouse packages (for example,
   contrail-storage-packages_2.0-22~icehouse_all.deb) because OpenStack Havana
   is no longer supported.

   Use the following statements to upgrade the software:

   cd /opt/contrail/utils
   Ubuntu: fab upgrade_storage:<from>,/tmp/contrail-storage-packages_2.0-22~icehouse_all.deb;

   When upgrading to Contrail Release 2.10, add the following steps if you have live
   migration configured. Upgrades to Release 2.0 do not require these steps. Select the
   command that matches your live migration configuration.

   fab setup_nfs_livem

   or

   fab setup_nfs_livem_global
Related Documentation
• Contrail Getting Started Guide, Release 2.21
• Contrail Feature Guide, Release 2.21
Adding or Removing a Compute Node in an Existing Contrail Cluster
Use the following procedure to add one or more new compute nodes to an existing
Contrail cluster.
1. Add the information about the new compute node(s) to your existing testbed.py file.

   NOTE: For convenience, this procedure assumes you are adding a node at 1.1.1.1;
   replace 1.1.1.1 with the correct IP address of the node or nodes that you are adding.

2. Copy the contrail-install-packages file for CentOS or Ubuntu to the /tmp directory
   of the cfgm node where the fab commands are triggered:

   CentOS: scp <id@server>:/path/to/contrail-install-packages-xxx-xxx.el6.noarch.rpm /tmp

   Ubuntu: scp <id@server>:/path/to/contrail-install-packages_xxx-xxx~havana_all.deb /tmp

3. For an Ubuntu 12.04.4 or 12.04.3 server with a kernel version older than 3.13.0-34,
   upgrade the kernel by using the following fab command:

   cd /opt/contrail/utils; fab upgrade_kernel_node:root@1.1.1.1

   where 1.1.1.1 should be replaced with the server's actual IP address.

4. Install the contrail-install-packages on the new compute node (or nodes):

   CentOS: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xxx.xxx.noarch.rpm,root@1.1.1.1

   Ubuntu: fab install_pkg_node:/tmp/contrail-install-packages_x.xx-xxx~havana_all.deb,root@1.1.1.1

5. Use fab commands to add the new compute node (or nodes):

   fab add_vrouter_node:root@1.1.1.1
Removing a Node
Use the following procedure to remove one or more compute nodes from an existing
Contrail cluster.

NOTE: For convenience, this procedure assumes you are removing a node at 1.1.1.1;
replace 1.1.1.1 with the correct IP address of the node or nodes that you are removing.

1. Use the following fab command to detach the compute node:

   fab detach_vrouter_node:root@1.1.1.1

2. Remove the information about the detached compute node from the existing
   testbed.py file.
DKMS for vRouter Kernel Module
Dynamic Kernel Module Support (DKMS) is a framework provided by Linux to
automatically build out-of-tree driver modules for Linux kernels whenever the Linux
distribution upgrades the existing kernel to a newer version.
In Contrail, the vRouter kernel module is an out-of-tree, high performance packet
forwarding module that provides advanced packet forwarding functionality in a reliable
and stable manner. Contrail provides a DKMS-compatible source package for Ubuntu
so that a customer who deploys an Ubuntu-based Contrail system does not need to
manually compile the kernel module each time the Linux deployment gets upgraded.
The contrail-vrouter-dkms package provides the DKMS compatibility for Contrail. Prior
to installing the contrail-vrouter-dkms package, both the DKMS package and the
contrail-vrouter-utils package must be installed, because the contrail-vrouter-dkms
package is dependent on both. Installing the contrail-vrouter-dkms package adds the
vrouter sources to the DKMS database, builds the vrouter module, and installs it in the
existing kernel modules tree. When a kernel upgrade occurs, DKMS ensures that the
module is compiled for the newer kernel and installed in the proper location so that upon
reboot, the newer module can be used with the upgraded kernel.
This feature is supported as of Contrail Release 1.10 on Ubuntu distributions. Support for
CentOS is in the product roadmap.
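As a quick sanity check, the following Ubuntu command sequence sketches a typical install-and-verify flow; it assumes the Contrail packages are available from a configured repository or as local package files.

apt-get install dkms contrail-vrouter-utils   # prerequisites for the DKMS package
apt-get install contrail-vrouter-dkms         # registers and builds the vrouter module
dkms status vrouter                           # confirm the module is built for the running kernel
lsmod | grep vrouter                          # confirm the module is loaded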
For more information about DKMS, refer to:
• DKMS Ubuntu documentation at https://help.ubuntu.com/community/DKMS
• DKMS Ubuntu manual pages at http://manpages.ubuntu.com/manpages/lucid/man8/dkms.8.html
• Linux Journal article on DKMS at http://www.linuxjournal.com/article/6896
PART 3
Configuring Contrail
• Configuring Virtual Networks on page 213
• Example of Deploying a Multi-Tier Web Application Using Contrail on page 255
• Configuring Services on page 267
• Configuring High Availability on page 291
• Configuring Service Chaining on page 309
• Configuring Multitenancy Support on page 333
• Optimizing Contrail on page 339
CHAPTER 9
Configuring Virtual Networks
• Creating Projects in OpenStack for Configuring Tenants in Contrail on page 214
• Creating a Virtual Network—Juniper Networks Contrail on page 215
• Deleting a Virtual Network–Juniper Networks Contrail on page 218
• Creating a Virtual Network—OpenStack Contrail on page 219
• Deleting a Virtual Network–OpenStack Contrail on page 220
• Creating an Image on page 222
• Launching a Virtual Machine (Instance) on page 224
• Creating a Network Policy—Juniper Networks Contrail on page 227
• Associating a Network to a Policy—Juniper Networks Contrail on page 229
• Creating a Network Policy—OpenStack Contrail on page 233
• Associating a Network to a Policy—OpenStack Contrail on page 236
• Creating a Floating IP Address Pool on page 238
• Allocating a Floating IP Address to a Virtual Machine on page 240
• Using Security Groups with Virtual Machines (Instances) on page 241
• Support for IPv6 Networks in Contrail on page 244
• Configuring EVPN and VXLAN on page 247
Creating Projects in OpenStack for Configuring Tenants in Contrail
In Contrail, a tenant configuration is called a project. A project is created for each set of
virtual machines (VMs) and virtual networks (VNs) that are configured as a discrete
entity for the tenant.
Projects are created, managed, and edited at the OpenStack Projects screen.
1. Click the Admin tab on the OpenStack dashboard, then click the Projects link to access the Projects screen; see Figure 24 on page 214.
Figure 24: OpenStack Projects
2. In the upper right, click the Create Project button to access the Add Project screen; see
Figure 25 on page 214.
Figure 25: Add Project
3. In the Add Project window, on the Project Info tab, enter a Name and a Description for
the new project, and select the Enabled check box to activate this project.
4. In the Add Project window, select the Project Members tab, and assign users to this
project. Designate each user as admin or as Member.
As a general rule, one person should be a super user in the admin role for all projects
and a user with a Member role should be used for general configuration purposes.
5. Click Finish to create the project.
6. Refer to OpenStack documentation for more information about creating and managing
projects.
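If you prefer the command line, a project (tenant) can also be created with the keystone client from this OpenStack release series; the project and user names below are examples only.

keystone tenant-create --name demo --description "demo project" --enabled true
keystone user-role-add --user <username> --role Member --tenant demo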
Related Documentation

• Creating a Virtual Network—Juniper Networks Contrail on page 215
• Creating a Virtual Network—OpenStack Contrail on page 219
• OpenStack documentation
Creating a Virtual Network—Juniper Networks Contrail
Contrail makes creating a virtual network very easy for a self-service user. You create
networks and network policies at the user dashboard, then associate policies with each
network. The following procedure shows how to create a virtual network when using
Juniper Networks Contrail.
1. Before creating a virtual network, create an IP Address Management (IPAM) for your project. Select Configure > Networking > IP Address Management, then click the Create button.

The Add IP Address Management window appears; see Figure 26 on page 215.
Figure 26: Add IP Address Management
2. Complete the fields in Add IP Address Management: see field descriptions in
Table 11 on page 215.
Table 11: Add IP Address Management Fields

Name: Enter a name for the IPAM you are creating.

DNS Method: Select from a drop-down list the domain name server method for this IPAM: Default, Virtual DNS, Tenant, or None.

NTP Server IP: Enter the IP address of an NTP server to be used for this IPAM.

Domain Name: Enter a domain name to be used for this IPAM.
3. Click Configure > Networking > Networks to access the Configure Networks screen; see
Figure 27 on page 216.
Figure 27: Configure Networks
4. Verify that your project is displayed as active in the upper right field, then click the Create button to display the Create Network window; see Figure 28 on page 216. Use the scroll bar to access all sections of this window.
Figure 28: Create Network
216
Copyright © 2016, Juniper Networks, Inc.
Chapter 9: Configuring Virtual Networks
5. Complete the fields in the Create Network window with values that identify the network
name, network policy, and IP options as needed. See field descriptions in
Table 12 on page 217.
Table 12: Create Network Fields

Name: Enter a name for the virtual network you are creating.

Network Policy(s): Select the policy to be applied to this network from the drop-down list of available policies. You can select more than one policy by clicking each one needed.

Subnets: Use this area to identify and manage subnets for this virtual network. Click the + icon to open fields for IPAM, CIDR, Allocation Pools, Gateway, DNS, and DHCP. Select the subnet to be added from a drop-down list in the IPAM field. Complete the remaining fields as necessary. You can add multiple subnets to a network. When finished, click the + icon to add the selections into the columns below the fields, or click the - icon to remove the selections.

Host Routes: Use this area to add or remove host routes for this network. Click the + icon to open fields where you can enter the Route Prefix and the Next Hop. Click the + icon to add the information, or click the - icon to remove the information.

Advanced Options: Use this area to add or remove advanced options, including setting the Admin State to Up or Down, identifying the network as Shared or External, adding DNS servers, and defining a VxLAN Identifier.

Floating IP Pools: Use this area to identify and manage the floating IP address pools for this virtual network. Click the + icon to open fields where you can enter the Pool Name and Projects. Click the + icon to add the information, or click the - icon to remove the information.

Route Target(s): Move the scroll bar down to access this area, then specify one or more route targets for this virtual network. Click the + icon to open fields where you can enter route target identifiers. Click the + icon to add the information, or click the - icon to remove the information.
6. To save your network, click the Save button, or click Cancel to discard your work and start over.

Now you can create a network policy; see "Creating a Network Policy—Juniper Networks Contrail" on page 227.
Related Documentation

• Creating an Image on page 222
• Launching a Virtual Machine (Instance) on page 224
• Creating a Network Policy—Juniper Networks Contrail on page 227
• Deleting a Virtual Network–Juniper Networks Contrail on page 218
Deleting a Virtual Network–Juniper Networks Contrail
You can delete any of the virtual networks in your system. However, you must first
disassociate any virtual machines (instances) that are associated with that network.
Use OpenStack to view and delete the virtual machines associated with a virtual network; see "Deleting a Virtual Network–OpenStack Contrail" on page 220. When you are finished deleting the virtual machines associated with a virtual network, you can delete the network in OpenStack, or you can delete the network in Juniper Networks Contrail, using the following procedure.
1. To view the virtual networks in the current project, click Configure > Networks to access the Configure Networks screen in Juniper Networks Contrail; see Figure 29 on page 218.
Figure 29: Configure Networks
2. Click the network you want to delete, then click the Delete (trashcan) icon at the top right. A confirmation window is displayed.
3. Click Confirm to delete the network, or click Cancel to quit the delete activity.
Related Documentation

• Creating a Virtual Network—Juniper Networks Contrail on page 215
Creating a Virtual Network—OpenStack Contrail
Contrail makes creating a virtual network very easy for a self-service user. You create
networks and network policies at the user dashboard, then associate policies with each
network. The following procedure shows how to create a virtual network when using
OpenStack.
1. Click Project > Networking to access the Networks screen; see Figure 30 on page 219.
Figure 30: Networks
2. Verify that the correct project is displayed in the Current Project box, then click the
Create Network button to access the Create Network window; see Figure 31 on page 219.
Figure 31: Create Network
3. Complete the fields in the Create Network window with values that identify the network
name, IP block, policy, and IP options as needed. See field descriptions in
Table 13 on page 219.
Table 13: Create Network Fields

Name: Enter a name for the network.

Description: Enter a description for the network.

IP Block (optional): Enter the IP address of the address block assigned to this project.

IPAM (optional): For new projects, an IPAM can be added while creating the virtual network. VM instances created in this virtual network will be assigned an address from this address block automatically by the system when a VM is launched.

Gateway (optional): Optionally, enter an explicit gateway IP for the IP address block.

Network Policy: Any policies already created are listed. To select a policy, click the check box for the policy.
4. To save your network, click the Create Network button, or click Cancel to discard your
work and start over.
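Equivalently, a network and its subnet can be created with the neutron CLI from this OpenStack release series; the network name and CIDR below are examples only.

neutron net-create frontend
neutron subnet-create frontend 192.168.1.0/24 --name frontend-subnet --gateway 192.168.1.1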
Related Documentation

• Deleting a Virtual Network–OpenStack Contrail on page 220
Deleting a Virtual Network–OpenStack Contrail
You can delete any of the virtual networks in your system. However, you must first
disassociate any virtual machines (instances) that are associated with that network.
The following procedure shows how to delete a virtual network when using OpenStack.
1. To view virtual machines that are associated with a virtual network, in the OpenStack module, access the Project tab and click Networking. The Networks screen appears; see Figure 32 on page 220.
Figure 32: OpenStack Networks
2. At the Networks screen, click the network to be deleted.
The Network Detail screen appears; see Figure 33 on page 221.
Figure 33: OpenStack Network Detail, Associated Instances Tab
3. Click the Associated Instances tab to see the instances associated with this network.
Make note of the IP addresses of any instances that are associated with this network.
4. In the Project tab, select Instances.
The Instances screen appears, displaying the instances associated with the current
project; see Figure 34 on page 221.
Figure 34: Instances
5. On the Instances screen, click the check box for any instance that is associated with
the network that you want to delete, then click the Terminate Instances button to
delete the instance.
6. When all instances that are associated with the network to be deleted have been
terminated, delete the network.
To delete a network, return to the Networks screen (see Figure 32 on page 220), select
the network to be deleted, then click the Delete Networks button in the upper right.
Related Documentation

• Creating a Virtual Network—OpenStack Contrail on page 219
Creating an Image
You can use the OpenStack dashboard to specify an image to upload to the Image Service
for a project in your system.
1. Make sure you have selected the correct project to which you will associate an image.
a. In OpenStack, in the left column, select the Project tab.
b. Make sure your project is selected as the project in the Current Project box, then
click Images & Snapshots in the left column.
The Images & Snapshots screen appears; see Figure 35 on page 222.
Figure 35: Images & Snapshots
2. Click Create Image.
The Create An Image window appears; see Figure 36 on page 223.
Figure 36: Create An Image
3. Complete the fields in this window to specify your image. Table 14 on page 223 describes
each of the fields on this screen.
NOTE: Only images available via an HTTP URL are supported, and the
image location must be accessible to the Image Service. Compressed
image binaries are supported (*.zip and *.tar.gz).
Table 14: Create An Image Fields

Name: Required field. Enter a name for this image.

Image Location: Required field. Enter an external HTTP URL from which to load the image. The URL must be a valid and direct URL to the image binary. URLs that redirect or serve error pages will result in unusable images.

Format: Required field. Select the format of the image from a list: AKI (Amazon Kernel Image), AMI (Amazon Machine Image), ARI (Amazon Ramdisk Image), ISO (Optical Disk Image), QCOW2 (QEMU Emulator), Raw, VDI, VHD, VMDK.

Minimum Disk (GB): Enter the minimum disk size required to boot the image. If you do not specify a size, the default is 0 (no minimum).

Minimum Ram (MB): Enter the minimum RAM required to boot the image. If you do not specify a size, the default is 0 (no minimum).

Public: Check the box if this is a public image. Leave unchecked for a private image.
4. When finished, click Create Image.
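For reference, an image can also be registered from the command line with the glance client; the image name and URL below are examples only.

glance image-create --name ubuntu-server \
  --copy-from http://example.com/images/ubuntu-server.qcow2 \
  --disk-format qcow2 --container-format bare --is-public True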
Related Documentation

• Launching a Virtual Machine (Instance) on page 224
Launching a Virtual Machine (Instance)
After you have created virtual networks for your project, you can create and launch virtual
machines. Virtual networks (VNs) are populated with virtual machines (VMs), also called
instances. A VM is a simulation of a physical machine, such as a workstation or a server,
that runs on a host that supports virtualization. Many VMs can run on the same host,
sharing its resources. A VM has its own operating system image that can be different
from that of other VMs running on the same host.
You use the OpenStack module to define and launch VMs (instances).
1. On the OpenStack dashboard Project tab, make sure your project is selected in the left column in the Current Project box, then click Instances.
The Instances screen appears, displaying all instances (VMs) currently in the selected
project; see Figure 37 on page 224.
Figure 37: OpenStack Instances
2. To create and launch a new instance, click the Launch Instance button in the upper
right corner.
The Launch Instance window appears, where you can define and launch a new instance.

Figure 38: Launch Instance, Details Tab

3. Make sure the Details tab is active (see Figure 38 on page 225), then define your instance using the fields shown in Table 15 on page 225.
Table 15: Launch Instance Details Tab Fields

Instance Source: Select from the list the source type: Image or Snapshot.

Image: Select from a list the image to use for this instance. The images represent the operating systems and applications available for this project.

Instance Name: Enter a name for this instance.

Flavor: Select from a list the OpenStack flavor for this instance. Flavors provide general definitions for sizing VMs. The Flavor Details of the flavor you select are displayed in the right column of this window.

Instance Count: Enter the number of instances you want to launch using the details defined in this screen. In the right column, Project Quotas displays the number of instances currently active and the number still available for this project.

Compute Hostname: To launch a VM on a specific compute node, enter the name of the compute node. This functionality is only available to administrators.
4. Click the Networking tab on the Launch Instance screen to identify one or more networks
to associate with this instance; see Figure 39 on page 226.
Figure 39: Launch Instance, Networking Tab
5. When finished defining this instance, click the Launch button at the lower right.
Your new VM instance is launched as part of your project.
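For reference, an equivalent instance can be launched with the nova CLI; the instance name, image, flavor, and network UUID below are examples only.

nova boot web-server-1 --image ubuntu-server --flavor m1.small \
  --nic net-id=<frontend-network-uuid>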
Related Documentation

• OpenStack documentation
Creating a Network Policy—Juniper Networks Contrail
The Contrail Controller makes creating network traffic policies very simple. You work
from the self-service user interface to define a policy, then define a rule or rules to be
applied in that policy. You can define such things as the type and direction of traffic for
the rule, the source and destination of that traffic, traffic originating from or destined for
specific ports, the sequence in which to apply a rule, and so on. The following procedure
shows how to create a network policy when using Juniper Networks Contrail.
1. In the Contrail module, start at Configure > Networking > Policies to display the Network Policy screen; see Figure 40 on page 227.
Figure 40: Network Policy
2. Click the Create button at the upper right.
The Create Policy window appears; see Figure 41 on page 227.
Figure 41: Create Policy
3. Complete the fields in the Create Policy window, using the guidelines in
Table 16 on page 228.
Table 16: Create Policy Fields

Policy Name: Enter a name for the policy you are creating.

Associate Networks: Click this field to select, from a list of available networks, the networks to be associated with this policy. Click one network at a time to add one or more networks to the field. The selected networks are listed in the field. To remove any selected network, click the X to the right of the network.

Policy Rules: Use this area to define the rules for the policy you are creating. Click the + (plus sign) to open the fields for defining the rules. Click the - (minus sign) to delete any rule. Multiple rules can be added to a policy. Each policy rule field is described in the following rows.

Action: Define the action to take with traffic that matches the current rule. Select from a list: Pass, Deny.

Protocol: Define the protocol associated with traffic for this policy rule. Select from a list of available protocols (or ANY): ANY, TCP, UDP, ICMP.

Source Network: Select the source network for traffic associated with this policy rule. Choose ANY or select from the drop-down list of all available sources, in the form domain-name:project-name:network-name.

Source Ports: Use this field to specify that traffic from particular source port(s) is associated with this policy rule. Accept traffic from any port, or enter a specific port, a list of ports separated by commas, or a range of ports in the form nnnn-nnnnn.

Direction: Define the direction of traffic to match the rule, for example, traffic moving in and out, or only traffic moving in one direction. Select from a list: <> (bidirectional), > (unidirectional).

Destination Network: Select the destination network for traffic to match this rule. Choose ANY or select from the drop-down list of all available destinations, in the form domain-name:project-name:network-name.

Destination Ports: Define the destination port for traffic to match this rule. Enter any for any destination port, or enter a specific port, a list of ports separated by commas, or a range of ports in the form nnnn-nnnnn.

Apply Service: Check the box to open a field where you can select, from a list of available services, the services to apply to this policy. The services are applied in the order in which they are selected. There is a restricted set of options that can be selected when applying services. For more information about services, see "Service Chaining" on page 309.

Mirror to: Check the box to open a field where you can select, from the list of configured services, the services that you want to mirror in this policy. You can select a maximum of two services to mirror. For more information about mirroring, see "Configuring Traffic Analyzers and Packet Capture for Mirroring" on page 365.
4. When you are finished selecting the rules for this policy, click the Save button.
The policy you just defined displays in the Network Policy column.
Next you can associate the policy with a network; see "Associating a Network to a Policy—Juniper Networks Contrail" on page 229.

Related Documentation

• Associating a Network to a Policy—Juniper Networks Contrail on page 229
• Creating a Virtual Network—Juniper Networks Contrail on page 215
Associating a Network to a Policy—Juniper Networks Contrail
• Associating Network Policies Overview on page 229
• Associating a Network Policy to a Network from the Edit Network on page 229
• Associating Networks with Network Policies from the Edit Policy on page 232
Associating Network Policies Overview
Contrail helps you create and manage virtual networks (VNs). By default, all traffic in a
VN is isolated to that VN. Traffic can only leave a VN by means of network policies that
are defined for the VN.
This procedure shows how to associate a network policy with a network, using the Juniper
Networks Contrail interface.
If you did not associate an existing network policy when you created your virtual network,
you can use the Network Policy(s) field from the Edit Network window, or you can use
the Associate Networks field from the Edit Policy window to associate or disassociate
network policies with networks. The following procedures demonstrate both methods.
Associating a Network Policy to a Network from the Edit Network
This procedure shows how to attach (associate) a network policy to a network when
starting from the Edit Network window.
1. Start at Configure > Networking > Networks; see Figure 42 on page 230. Make sure your project is the active project in the upper right.

Figure 42: Configure > Networking > Networks
2. Click the network you will associate with a policy, then in the Action column, click the
action icon and select Edit.
The Edit Network window for the selected network appears; see Figure 43 on page 231.
Any policies already associated with the selected network appear in the Network
Policy(s) field.
Figure 43: Edit Network
3. Click the Network Policy(s) field to show a list of existing policies, and click to select
a policy to associate with the selected network.
You can also disassociate a selected policy by clicking the X next to its name when it
appears configured in the Network Policy(s) field.
4. When finished, click the Save button, or click the Cancel button to undo your selections.
Associating Networks with Network Policies from the Edit Policy
If you did not associate a network when you created a network policy, you can use Edit
Policy to associate a selected policy with a network. This procedure shows how to
associate network policies when starting from the Edit Policy window.
1. Start at Configure > Networking > Policies. Make sure your project is selected in the field in the upper right; see Figure 44 on page 232.
Figure 44: Configure > Networking > Policies
2. Click the policy you will associate with a network, then in the Action column, click the
action icon and select Edit.
The Edit Policy window for the selected policy appears; see Figure 45 on page 232.
Any networks already associated with the selected policy appear in the Associate
Networks field.
Figure 45: Edit Policy
3. Click in the Associate Networks field to show a list of existing networks, and click to
select a network to associate with the selected policy.
You can also disassociate a selected network by clicking the X next to its name when
it appears configured in the Associate Networks field.
4. When finished, click the Save button, or click the Cancel button to undo your selections.
Related Documentation

• Creating a Floating IP Address Pool on page 238
• Allocating a Floating IP Address to a Virtual Machine on page 240
Creating a Network Policy—OpenStack Contrail
Contrail makes creating network traffic policies very simple. You work from the self-service
user interface to define a policy, then define a rule or rules to be applied in that policy.
You can define such things as the type and direction of traffic for the rule, the source and
destination of that traffic, traffic originating from or destined for specific ports, the
sequence in which to apply a rule, and so on. The following procedure shows how to
create a network policy when using OpenStack.
1. On the OpenStack dashboard, make sure your project is displayed in the Current Project box, click Networking, and then click the Network Policy tab to display the Network Policy screen; see Figure 46 on page 233.
Figure 46: Network Policy
2. Click the Create Policy button at the upper right.
The Create Network Policy window appears; see Figure 47 on page 234.
Figure 47: Create Network Policy
3. Enter a name and a description for this policy. Names cannot include spaces.
4. When finished, click the Create Policy button on the lower right.
Your policy is created and it appears on the Network Policy screen; see
Figure 48 on page 234.
Figure 48: Network Policy
5. On the Network Policy window, click the check box for your new policy, then click the
Edit Rules button for that policy.
The Edit Policy Rules window appears; see Figure 49 on page 235.
Figure 49: Edit Policy Rules
6. Define the rules for your policy, using the guidelines in Table 17 on page 235.
Table 17: Edit Policy Rules Fields

Policy Rules Details: This section of the window displays any rules that have already been created for this policy.

Id: Displays a sequential number identifier for each rule within a policy.

Rule Details: Displays a description of the rule on this line.

Actions: Available actions for the rule on this line appear in this column. Currently you can use the Delete button in this column to delete a rule.

Sequence Id: This field lets you define the order in which to apply the current rule. Select from a list: Last Rule, First Rule, After Rule.

Action: Define the action to take with traffic that matches the current rule. Select from a list: Pass, Deny.

Direction: Define the direction in which to apply the rule, for example, to traffic moving in and out, or only to traffic moving in one direction. Select from a list: Bidirectional, Unidirectional.

IP Protocol: Select from a list of available protocols (or ANY): ANY, TCP, UDP, ICMP.

Source Net: Select the source network for this rule. Choose Local (any network to which this policy is associated), Any (all networks created under the current project), or select from the drop-down list of all available sources, in the form domain-name:project-name:network-name.

Source Ports: Accept traffic from any port, or enter a specific port, a list of ports separated by commas, or a range of ports in the form nnnn-nnnnn.

Destination Net: Select the destination network for this rule. Choose Local (any network to which this policy is associated), Any (all networks created under the current project), or select from the drop-down list of all available destinations, in the form domain-name:project-name:network-name.

Destination Ports: Send traffic to any port, or enter a specific port, a list of ports separated by commas, or a range of ports in the form nnnn-nnnnn.
7. When you are finished selecting the rules for this policy, click the Add Rule button on
the lower right of the Edit Policy Rules window.
Next you can associate the policy with a network; see "Associating a Network to a Policy—OpenStack Contrail" on page 236.

Related Documentation

• Associating a Network to a Policy—OpenStack Contrail on page 236
Associating a Network to a Policy—OpenStack Contrail
• Associating Network Policies Overview on page 237
• Associating a Network Policy to a Network on page 237
Associating Network Policies Overview
Contrail helps you create and manage virtual networks (VNs). By default, all traffic in a
VN is isolated to that VN. Traffic can only leave a VN by means of network policies that
are defined for the VN.
This procedure shows how to associate a network policy with a network when using
OpenStack.
Associating a Network Policy to a Network
1. Using the OpenStack Networking module, start at the Project tab and click Networking. The Networks screen appears; see Figure 50 on page 237.
Figure 50: Networks Screen
2. Click the check box to select the network you will associate with a policy, then click
the drop-down box in the Actions column and select Edit Policy.
The Edit Network Policy window appears; see Figure 51 on page 237.
Available network policies are listed in the Edit Network Policy window.
Figure 51: Edit Network Policy
3. Click the check box of any policies to be associated with the selected network.
4. When finished, click the Save Changes button.
Creating a Floating IP Address Pool
A floating IP address is an IP address (typically public) that can be dynamically assigned
to a running virtual instance. You can configure floating IP address pools in project
networks in Contrail, then allocate floating IP addresses from the pool to virtual machine
instances in other virtual networks.
1. Start at Configure > Networking > Networks; see Figure 52 on page 238. Make sure your project is the active project in the upper right.
Figure 52: Configure > Networking > Networks
2. Click the network you will associate with a floating IP pool, then in the Action column,
click the action icon and select Edit.
The Edit Network window for the selected network appears; see Figure 53 on page 239.
Figure 53: Edit Network
3. In the Floating IP Pools section, click the Pool Name field, enter a name for your floating IP pool, and click the + (plus sign) to add the IP pool to the table below the field.

• Multiple floating IP pools can be created at the same time.
• A floating IP pool can be associated with multiple projects.
4. Click the Save button to create the floating IP address pool, or click Cancel to remove
your work and start over.
Related Documentation

• Allocating a Floating IP Address to a Virtual Machine on page 240
Allocating a Floating IP Address to a Virtual Machine
If you have configured a floating IP address pool, you can use the following procedure to
allocate the pool to a VM instance.
1. In the Contrail controller module, start at Configure > Networking > Allocate Floating IPs. Make sure your project is displayed (active) in the upper right. Click to select the virtual network that has the floating IP pool; see Figure 54 on page 240.
Figure 54: Allocate Floating IPs
2. In Allocated Floating IPs, click the Allocate button.
The Allocate Floating IP window appears; see Figure 55 on page 240.
Figure 55: Allocate Floating IP
3. Select from a drop-down list the name of the floating IP pool. The floating IP pool is
shared among multiple projects. Click Save.
4. Once the floating IP pool has been allocated, you can associate or disassociate it from
instance addresses. In Allocate Floating IPs, click to select the floating IP pool you
want to use, then in the Actions column, click and select the Associate option.
The Associate Floating IP to Instance window appears; see Figure 56 on page 241.
Figure 56: Associate Floating IP to Instance
5. In the Instance field, select from a drop-down list the UUID of the VM instance to
associate with the selected floating IP, and click Save to finish.
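The nova CLI from this OpenStack release series offers an equivalent workflow; the pool name, instance name, and address below are examples only.

nova floating-ip-create public_pool              # allocate an address from the pool
nova add-floating-ip web-server-1 10.84.41.5     # associate it with an instance
nova remove-floating-ip web-server-1 10.84.41.5  # disassociate it again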
Related Documentation

• Creating a Floating IP Address Pool on page 238
Using Security Groups with Virtual Machines (Instances)
• Security Groups Overview on page 241
• Creating Security Groups and Adding Rules on page 241
Security Groups Overview
A security group is a container for security group rules. Security groups and security group
rules allow administrators to specify the type of traffic that is allowed to pass through a
port. When a virtual machine (VM) is created in a virtual network (VN), a security group
can be associated with the VM when it is launched. If a security group is not specified, a
port will be associated with a default security group. The default security group allows
both ingress and egress traffic. Security rules can be added to the default security group
to change the traffic behavior.
Creating Security Groups and Adding Rules
A default security group is created for each project. You can add security rules to the
default security group and you can create additional security groups and add rules to
them. The security groups are then associated with a VM when the VM is launched or at a later date.
To add rules to a security group:
1. From the OpenStack interface, click the Project tab, select Access & Security, and click the Security Groups tab.
Any existing security groups are listed under the Security Groups tab, including the
default security group; see Figure 57 on page 242.
Figure 57: Security Groups
2. Select the default-security-group and click the Edit Rules button in the Actions column.
The Edit Security Group Rules screen appears; see Figure 58 on page 242. Any rules
already associated with the security group are listed.
Figure 58: Edit Security Group Rules
3. Click the Add Rule button to add a new rule; see Figure 59 on page 243.
Figure 59: Add Rule
Table 18: Add Rule Fields

IP Protocol: Select the IP protocol to apply for this rule: TCP, UDP, ICMP.

From Port: Select the port from which traffic originates to apply this rule. For TCP and UDP, enter a single port or a range of ports. For ICMP rules, enter an ICMP type code.

To Port: The port to which traffic is destined that applies to this rule, using the same options as in the From Port field.

Source: Select the source of traffic to be allowed by this rule. Specify a subnet (the CIDR IP address or address block of the inter-domain source of the traffic that applies to this rule), or choose a security group as the source. Selecting a security group as the source allows any other instance in that security group access to any other instance via this rule.
4. Click the Create Security Group button to create additional security groups.
The Create Security Group window appears; see Figure 60 on page 244.
Each new security group has a unique 32-bit security group ID and an ACL is associated
with the configured rules.
Copyright © 2016, Juniper Networks, Inc.
243
Contrail Feature Guide
Figure 60: Create Security Group
5. When an instance is launched, there is an opportunity to associate a security group;
see Figure 61 on page 244.
In the Security Groups list, click the check box next to a security group name to associate
with the instance.
Figure 61: Associate Security Group at Launch Instance
6. You can verify that security groups are attached by viewing the SgListReq and IntfReq
associated with the agent.xml.
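The same kind of group and rules can also be created with the nova CLI; the group name, ports, and CIDR below are examples only.

nova secgroup-create web-sg "allow web and ping"
nova secgroup-add-rule web-sg tcp 80 80 0.0.0.0/0
nova secgroup-add-rule web-sg icmp -1 -1 0.0.0.0/0
nova boot web-server-1 --image ubuntu-server --flavor m1.small --security-groups web-sg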
Support for IPv6 Networks in Contrail
As of Contrail Release 2.0, support for IPv6 overlay networks is provided.
• Overview: IPv6 Networks in Contrail on page 245
• Creating IPv6 Virtual Networks in Contrail on page 245
• Adding IPv6 Peers on page 246
Overview: IPv6 Networks in Contrail
As of Contrail Release 2.0, support for IPv6 overlay networks is provided, including:
• Configuring IPv6 subnets from the Contrail user interface or by using Neutron APIs
• IPv6 address assignment to virtual machine interfaces over DHCPv6
• IPv6 forwarding in overlay networks between virtual machines, and between virtual machines and BGP peers
• IPv6-to-VPN peering with other BGP peers
• IPv6 forwarding in Layer 2-only networks
• IPv6 interface static routes
Creating IPv6 Virtual Networks in Contrail
You can create an IPv6 virtual network from the Contrail user interface in the same way
you create an IPv4 virtual network. When you create a new virtual network at Configure
> Networking > Networks, the Edit fields accept IPv6 addresses, as shown in the following
image.
Address Assignments
When virtual machines are launched with an IPv6 virtual network created in the Contrail
user interface, the virtual machine interfaces get assigned addresses from all the families
configured in the virtual network.
The following is a sample of IPv6 instances with address assignments, as listed in the
OpenStack Horizon user interface.
Enabling DHCPv6 In Virtual Machines
To allow IPv6 address assignment using DHCPv6, the virtual machine network interface
configuration must be updated appropriately.
For example, to enable DHCPv6 for Ubuntu-based virtual machines, add the following
line in /etc/network/interfaces:
iface eth0 inet6 dhcp
Also, dhclient -6 can be run from within the virtual machine to get IPv6 addresses using
DHCPv6.
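For example, a minimal /etc/network/interfaces stanza that requests both an IPv4 and an IPv6 address over DHCP might look like the following; the interface name eth0 is an assumption about the guest.

auto eth0
iface eth0 inet dhcp     # IPv4 address via DHCP
iface eth0 inet6 dhcp    # IPv6 address via DHCPv6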
Adding IPv6 Peers
The procedure to add an IPv6 BGP peer in Contrail is similar to adding an IPv4 peer. At
Configure > Infrastructure > BGP Peers, include inet6-vpn in the Address Family list to
allow advertisement of IPv6 addresses.
A sample is shown in the following.
NOTE: Additional configuration is required on the peer router to allow
inet6-vpn peering.
Configuring EVPN and VXLAN
Contrail supports Ethernet VPNs (EVPN) and Virtual Extensible Local Area Networks
(VXLAN).
EVPN is a flexible solution that uses Layer 2 overlays to interconnect multiple edges
(virtual machines) within a data center. Traditionally, the data center is built as a flat
Layer 2 network with issues such as flooding, limitations in redundancy and provisioning,
and high volumes of MAC addresses learned, which cause churn at node failures. EVPN
is designed to address these issues without disturbing flat MAC connectivity.
In EVPN, MAC address learning is driven by the control plane, rather than by the data
plane, which helps control learned MAC addresses across virtual forwarders, thus avoiding
flooding. The forwarders advertise locally learned MAC addresses to the controllers. The
controllers use MP-BGP to communicate with peers. The peering of controllers using
BGP for EVPN results in better and faster convergence.
With EVPN, MAC learning is confined to the virtual networks to which the virtual machine
belongs, thus isolating traffic between multiple virtual networks. In this manner, virtual
networks can share the same MAC addresses without any traffic crossover.
Unicast in EVPN
Unicast forwarding is based on MAC addresses where traffic can terminate on a local
endpoint or is encapsulated to reach the remote endpoint. Encapsulation can be
MPLS/UDP, MPLS/GRE, or VXLAN.
BUM Traffic in EVPN
Multicast and broadcast traffic is flooded in a virtual network. The replication tree is built
by the control plane, based on the advertisements of end nodes (virtual machines) sent
by forwarders. Each virtual network has one distribution tree, a method that avoids
maintaining multicast states at fabric nodes, so the nodes are unaffected by multicast.
The replication happens at the edge forwarders. Per-group subscription is not provided.
Broadcast, unknown unicast, and multicast (BUM) traffic are all handled the same way,
and get flooded in the virtual network to which the virtual machine belongs.
VXLAN
VXLAN is an overlay technology that encapsulates MAC frames into a UDP header at
Layer 2. Communication is established between two virtual tunnel endpoints (VTEPs).
VTEPs encapsulate the virtual machine traffic into a VXLAN header, as well as strip off
the encapsulation. Virtual machines can only communicate with each other when they
belong to the same VXLAN segment. A 24-bit virtual network identifier (VNID) uniquely
identifies the VXLAN segment. This enables having the same MAC frames across multiple
VXLAN segments without traffic crossover. Multicast in VXLAN is implemented as Layer
3 multicast, in which endpoints subscribe to groups.
Design Details of EVPN and VXLAN
With Contrail Release 1.03, EVPN is enabled by default. The supported forwarding modes
include:
• Fallback bridging—IPv4 traffic lookup is performed using the IP FIB. All non-IPv4 traffic is directed to a MAC FIB.
• Layer 2-only—All traffic is forwarded using a MAC FIB lookup.
The forwarding mode can be configured individually on each virtual network.
EVPN is used to share MACs across different control planes in both forwarding models.
The result of a MAC lookup is a next hop, which, similar to IP forwarding, points to a local
virtual machine or a tunnel to reach the virtual machine on a remote server. The tunnel
encapsulation methods supported for EVPN are MPLSoGRE, MPLSoUDP, and VXLAN.
The encapsulation method selected is based on a user-configured priority.
In VXLAN, the VNID is assigned uniquely for every virtual network carried in the VXLAN header. The VNID uniquely identifies a virtual network. When the VXLAN header is received
from the fabric at a remote server, the VNID lookup provides the VRF of the virtual
machine. This VRF is used for the MAC lookup from the inner header, which then provides
the destination virtual machine.
Non-IP multicast traffic uses the same multicast tree as for IP multicast
(255.255.255.255). The multicast is matched against the all-broadcast prefix in the
bridging table (FF:FF:FF:FF:FF:FF). VXLAN is not supported for IP/non-IP multicast traffic.
The following table summarizes the traffic and encapsulation types supported for EVPN.
Traffic Type      MPLS-GRE    MPLS-UDP    VXLAN
IP unicast        Yes         Yes         No
IP-BUM            Yes         Yes         No
non IP unicast    Yes         Yes         Yes
non IP-BUM        Yes         Yes         No
• Configuring Forwarding on page 249
• Configuring the VXLAN Identifier Mode on page 250
• Configuring the VXLAN Identifier on page 251
• Configuring Encapsulation Methods on page 251
Configuring Forwarding
With Contrail 1.03, the default forwarding mode is enabled for fallback bridging (IP FIB
and MAC FIB). The mode can be changed, either through the Contrail web UI or by using
python provisioning commands.
From the Contrail web UI, select the virtual network for which you will change the
forwarding mode, select Edit Network, then select Advanced Options to change the
forwarding mode. In the following, the network named vn is being edited. Under Advanced
Options->Forwarding Mode, two options are available:
• Select L2 and L3 to enable IP and MAC FIB (fallback bridging).
• Select L2 to enable only MAC FIB.
Copyright © 2016, Juniper Networks, Inc.
249
Contrail Feature Guide
Alternatively, you can use the following python provisioning command to change the
forwarding mode:
python provisioning_forwarding_mode --project_fq_name 'default-domain:admin' --vn_name vn1 --forwarding_mode <l2_l3|l2>
Options:
l2_l3 = Enable IP FIB and MAC FIB (fallback bridging)
l2 = Enable MAC FIB only (Layer 2 only)
Configuring the VXLAN Identifier Mode
The VXLAN identifier mode can be configured to select an auto-generated VNID or a
user-generated VXLAN ID, either through the Contrail web UI or by modifying a python
file.
The following Contrail web UI shows the location for configuring the VXLAN identifier
mode at Configure > Infrastructure > Forwarding Options. The user can select one of these
options:
• Automatic—The VXLAN identifier is automatically assigned for the virtual network.
• Configured—The VXLAN identifier must be provided by the user for the virtual network.
NOTE: When Configured is selected, if the user does not provide an identifier,
then VXLAN encapsulation is not used and the mode falls back to MPLS.
Alternatively, the VXLAN identifier mode can be set by using python to modify the file /opt/contrail/utils/encap.py, as follows:

python encap.py <add|update|delete> <username> <password> <tenant_name> <config_node_ip>
250
Copyright © 2016, Juniper Networks, Inc.
Chapter 9: Configuring Virtual Networks
Configuring the VXLAN Identifier
The VXLAN identifier can be set only if the VXLAN network identifier mode has been set
to configured. You can then set the VXLAN ID either by using the Contrail web UI or by
using python commands.
The following shows the web UI location of the VXLAN identifier field in Edit Network,
Advanced Options.
Alternatively, you can use the following python provisioning command to configure the VXLAN identifier:

python provisioning_forwarding_mode --project_fq_name 'default-domain:admin' --vn_name vn1 --forwarding_mode <vxlan_id>
Configuring Encapsulation Methods
The default encapsulation mode for EVPN is MPLS over UDP. All packets on the fabric
are encapsulated with the label allocated for the virtual machine interface. The label
encoding and decoding is the same as for IP forwarding. Additional encapsulation methods
supported for EVPN include MPLS over GRE and VXLAN. MPLS over UDP is different
from MPLS over GRE only in the method of tunnel header encapsulation.
VXLAN has its own header and uses a VNID label to carry the traffic over the fabric. A
VNID is assigned to every virtual network and is shared by all virtual machines in the
virtual network. The VNID is mapped to the VRF of the virtual network to which it belongs.
Copyright © 2016, Juniper Networks, Inc.
251
Contrail Feature Guide
The priority order in which to apply encapsulation methods is determined by the sequence
of methods set either from the web UI or in the file encap.py.
The following shows the web UI location for setting the encapsulation method priorities
from Configure > Infrastructure > Forwarding Options, in which the user has selected the
priority order to be VXLAN first, then MPLS over GRE, and last priority is MPLS over UDP.
Use the following procedure to change the default encapsulation method to VXLAN.
NOTE: VXLAN is only supported for EVPN unicast. It is not supported for IP traffic or multicast traffic. The VXLAN priority, whether present in encap.py or configured in the web UI, is ignored for traffic not supported by VXLAN.
To set the priority of encapsulation methods to VXLAN:

1. Modify the file encap.py found in /opt/contrail/utils/.

The default encapsulation line is:

encap_obj=EncapsulationPrioritiesType(encapsulation=['MPLSoUDP','MPLSoGRE'])

Modify the line to:

encap_obj=EncapsulationPrioritiesType(encapsulation=['VXLAN','MPLSoUDP','MPLSoGRE'])
252
Copyright © 2016, Juniper Networks, Inc.
Chapter 9: Configuring Virtual Networks
2. Once the file is modified, execute the following script:
python encap_set.py <add|update|delete> <username> <password> <tenant_name>
<config_node_ip>
The configuration is applied globally for all virtual networks.
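A hypothetical invocation, assuming an admin user and tenant and a config node at 10.84.11.100, would be:

python encap_set.py update admin <password> admin 10.84.11.100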
CHAPTER 10
Example of Deploying a Multi-Tier Web
Application Using Contrail
• Example: Deploying a Multi-Tier Web Application on page 255
• Sample Network Configuration for Devices for Simple Tiered Web Application on page 261
Example: Deploying a Multi-Tier Web Application
• Multi-Tier Web Application Overview on page 255
• Example: Setting Up Virtual Networks for a Simple Tiered Web Application on page 256
• Verifying the Multi-Tier Web Application on page 258
• Sample Addressing Scheme for Simple Tiered Web Application on page 258
• Sample Physical Topology for Simple Tiered Web Application on page 259
• Sample Physical Topology Addressing on page 260
Multi-Tier Web Application Overview
A common requirement for a cloud tenant is to create a tiered web application in leased
cloud space. The tenant enjoys the favorable economics of a private IT infrastructure
within a shared services environment. The tenant seeks speedy setup and simplified
operations.
The following example shows how to set up a simple tiered web application using Contrail.
The example has a web server that a user accesses by means of a public floating IP
address. The front-end web server gets the content it serves to customers from
information stored in a SQL database server that resides on a back-end network. The
web server can communicate directly with the database server without going through
any gateways. The public (or client) can only communicate to the web server on the
front-end network. The client is not allowed to communicate directly with any other parts
of the infrastructure. See Figure 62 on page 256.
Figure 62: Simple Tiered Web Use Case
Example: Setting Up Virtual Networks for a Simple Tiered Web Application
This example provides basic steps for setting up a simple multi-tier network application.
Basic creation steps are provided, along with links to the full explanation for each of the
creation steps. Refer to the links any time you need more information about completing
a step.
1. Working with a system that has the Contrail software installed and provisioned, create a project named demo.

For more information, see "Creating Projects in OpenStack for Configuring Tenants in Contrail" on page 214.
2. In the demo project, create three virtual networks:
a. A network named public with IP address 10.84.41.0/24
This is a special use virtual network for floating IP addresses— it is assigned an
address block from the public floating address pool that is assigned to each web
server. The assigned block is the only address block advertised outside of the data
center to clients that want to reach the web services provided.
b. A network named frontend with IP address 192.168.1.0/24
This network is the location where the web server virtual machine instances are
launched and attached. The virtual machines are identified with private addresses
that have been assigned to this virtual network.
c. A network named backend with IP address 192.168.2.0/24
This network is the location where the database server virtual machines instances
are launched and attached. The virtual machines are identified with private
addresses that have been assigned to this virtual network.
For more information, see "Creating a Virtual Network—OpenStack Contrail" on page 219
or “Creating a Virtual Network—Juniper Networks Contrail” on page 215.
3. Create a floating IP pool named public_pool for the public network within the demo
project; see Figure 63 on page 257.
Figure 63: Create Floating IP Pool
4. Allocate the floating IP pool public_pool to the demo project; see Figure 64 on page 257.
Figure 64: Allocate Floating IP
5. Verify that the floating IP pool has been allocated; see Configure > Networking >
Allocate Floating IPs.
For more information, see "Creating a Floating IP Address Pool" on page 238 and
“Allocating a Floating IP Address to a Virtual Machine” on page 240.
Copyright © 2016, Juniper Networks, Inc.
257
Contrail Feature Guide
6. Create a policy that allows any host to talk to any host using any IP address, protocol,
and port, and apply this policy between the frontend network and the backend network.
This now allows communication between the web servers in the front-end network
and the database servers in the back-end network.
For more information, see "Creating a Network Policy—Juniper Networks Contrail" on
page 227, “Associating a Network to a Policy—Juniper Networks Contrail” on page 229,
or “Creating a Network Policy—OpenStack Contrail” on page 233, and “Associating a
Network to a Policy—OpenStack Contrail” on page 236.
7. Launch the virtual machine instances that represent the web server and the database
server.
NOTE: Your installation might not include the virtual machines needed
for the web server and the database server. Contact your account team
if you need to download the VMs for this setup.
On the Instances tab for this project, select Launch Instance and, for each instance that you launch, complete the fields to make the following associations:

• Web server VM: select the frontend network and the policy created to allow communication between the frontend and backend networks. Apply the floating IP address pool to the web server.
• Database server VM: select the backend network and the policy created to allow communication between the frontend and backend networks.

For more information, see "Launching a Virtual Machine (Instance)" on page 224.
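For readers who script their environments, the following is a minimal CLI sketch of the same setup; all names, images, and flavors are assumptions, and the floating IP pool and network policy steps remain Contrail-specific tasks performed in the UI as described above.

neutron net-create public
neutron subnet-create public 10.84.41.0/24
neutron net-create frontend
neutron subnet-create frontend 192.168.1.0/24
neutron net-create backend
neutron subnet-create backend 192.168.2.0/24
nova boot web-1 --image <web-server-image> --flavor m1.small --nic net-id=<frontend-net-uuid>
nova boot db-1 --image <db-server-image> --flavor m1.small --nic net-id=<backend-net-uuid>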
Verifying the Multi-Tier Web Application
Verify your web setup.
• To demonstrate this web application setup, go to the client machine, open a browser, and navigate to the address in the public network that is assigned to the web server in the frontend network.
The result will display the Contrail interface with various data populated, verifying
that the web server is communicating with the database server in the backend network
and retrieving data.
The client machine only has access to the public IP address. Attempts to browse to
any of the addresses assigned to the frontend network or to the backend network
should fail.
Sample Addressing Scheme for Simple Tiered Web Application
Use the information in Table 19 on page 259 as a guide for addressing devices in the simple
tiered web example.
Table 19: Sample Addressing Scheme for Example

System Name                          Address Allocation
System001                            10.84.11.100
System002                            10.84.11.101
System003                            10.84.11.102
System004                            10.84.11.103
System005                            10.84.11.104
MX80-1                               10.84.11.253, 10.84.45.1 (public connection)
MX80-2                               10.84.11.252, 10.84.45.2 (public connection)
EX4200                               10.84.11.254, 10.84.45.254 (public connection), 10.84.63.259 (public connection)
frontend network                     192.168.1.0/24
backend network                      192.168.2.0/24
public network (floating address)    10.84.41.0/24
Sample Physical Topology for Simple Tiered Web Application
Figure 65 on page 260 provides a guideline diagram for the physical topology for the simple
tiered web application example.
Figure 65: Sample Physical Topology for Simple Tiered Web Application
Sample Physical Topology Addressing
Figure 66 on page 261 provides a guideline diagram for addressing the physical topology
for the simple tiered web application example.
Figure 66: Sample Physical Topology Addressing
Sample Network Configuration for Devices for Simple Tiered Web Application
This section shows sample device configurations that can be used to create the “Example:
Deploying a Multi-Tier Web Application” on page 255. Configurations are shown for Juniper
Networks devices: two MX80s and one EX4200.
MX80-1 Configuration
version 12.2R1.3;
system {
root-authentication {
encrypted-password "xxxxxxxxxx"; ## SECRET-DATA
}
services {
ssh {
root-login allow;
}
}
syslog {
user * {
any emergency;
}
file messages {
any notice;
authorization info;
}
}
}
chassis {
fpc 1 {
pic 0 {
tunnel-services;
}
}
}
interfaces {
ge-1/0/0 {
unit 0 {
family inet {
address 10.84.11.253/24;
}
}
}
ge-1/1/0 {
description "IP Fabric interface";
unit 0 {
family inet {
address 10.84.45.1/24;
}
}
}
lo0 {
unit 0 {
family inet {
address 127.0.0.1/32;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 10.84.45.254;
}
route-distinguisher-id 10.84.11.253;
autonomous-system 64512;
dynamic-tunnels {
setup1 {
source-address 10.84.11.253;
gre;
destination-networks {
10.84.11.0/24;
}
}
}
}
protocols {
bgp {
group mx {
type internal;
local-address 10.84.11.253;
family inet-vpn {
unicast;
}
neighbor 10.84.11.252;
}
group contrail-controller {
type internal;
local-address 10.84.11.253;
family inet-vpn {
unicast;
}
neighbor 10.84.11.101;
neighbor 10.84.11.102;
}
}
}
routing-instances {
customer-public {
instance-type vrf;
interface ge-1/1/0.0;
vrf-target target:64512:10000;
routing-options {
static {
route 0.0.0.0/0 next-hop 10.84.45.254;
}
}
}
}
MX80-2 Configuration
version 12.2R1.3;
system {
root-authentication {
encrypted-password "xxxxxxxxx"; ## SECRET-DATA
}
services {
ssh {
root-login allow;
}
}
syslog {
user * {
any emergency;
}
file messages {
any notice;
authorization info;
}
}
}
chassis {
fpc 1 {
pic 0 {
tunnel-services;
}
}
}
interfaces {
ge-1/0/0 {
unit 0 {
family inet {
address 10.84.11.252/24;
}
}
}
ge-1/1/0 {
description "IP Fabric interface";
unit 0 {
family inet {
address 10.84.45.2/24;
}
}
}
lo0 {
unit 0 {
family inet {
address 127.0.0.1/32;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 10.84.45.254;
}
route-distinguisher-id 10.84.11.252;
autonomous-system 64512;
dynamic-tunnels {
setup1 {
source-address 10.84.11.252;
gre;
destination-networks {
10.84.11.0/24;
}
}
}
}
protocols {
bgp {
group mx {
type internal;
local-address 10.84.11.252;
family inet-vpn {
unicast;
}
neighbor 10.84.11.253;
}
group contrail-controller {
type internal;
local-address 10.84.11.252;
family inet-vpn {
unicast;
}
neighbor 10.84.11.101;
neighbor 10.84.11.102;
}
}
}
routing-instances {
customer-public {
instance-type vrf;
interface ge-1/1/0.0;
vrf-target target:64512:10000;
routing-options {
static {
route 0.0.0.0/0 next-hop 10.84.45.254;
}
}
}
}
EX4200 Configuration
system {
host-name EX4200;
time-zone America/Los_Angeles;
root-authentication {
encrypted-password "xxxxxxxxxxxxx"; ## SECRET-DATA
}
login {
class read {
permissions [ clear interface view view-configuration ];
}
user admin {
uid 2000;
class super-user;
authentication {
encrypted-password "xxxxxxxxxxxx"; ## SECRET-DATA
}
}
user regress {
uid 2002;
class read;
authentication {
encrypted-password "xxxxxxxxxxxxxx"; ## SECRET-DATA
}
}
}
services {
ssh {
root-login allow;
}
telnet;
netconf {
ssh;
}
web-management {
http;
}
}
syslog {
user * {
any emergency;
}
file messages {
any notice;
authorization info;
}
file interactive-commands {
interactive-commands any;
}
}
}
chassis {
aggregated-devices {
ethernet {
device-count 64;
}
}
}
CHAPTER 11
Configuring Services
• Configuring DNS Servers on page 267
• Configuring Discovery Service on page 276
• Support for Multicast on page 280
• Using Static Routes with Services on page 282
• Configuring Metadata Service on page 286
• Configuring Load-Balancing-as-a-Service in Contrail on page 287
Configuring DNS Servers
• DNS Overview on page 267
• Defining Multiple Virtual Domain Name Servers on page 268
• IPAM and Virtual DNS on page 268
• DNS Record Types on page 269
• Configuring DNS Using the Interface on page 270
• Configuring DNS Using Scripts on page 275
DNS Overview
Domain Name System (DNS) is the standard protocol for resolving domain names into
IP addresses so that traffic can be routed to its destination. DNS provides the translation
between human-readable domain names and their IP addresses. The domain names
are defined in a hierarchical tree, with a root followed by top-level and next-level domain
labels.
A DNS server stores the records for a domain name and responds to queries from clients
based on these records. The server is authoritative for the domains for which it is
configured to be the name server. For other domains, the server can act as a caching
server, fetching the records by querying other domain name servers.
The following are the key attributes of domain name service in a virtual world:
• It should be possible to configure multiple domain name servers to provide name resolution service for the virtual machines spawned in the system.
• It should be possible to configure the domain name servers to form DNS server hierarchies required by each tenant.
• The hierarchies can be independent and completely isolated from other similar hierarchies present in the system, or they can provide naming service to other hierarchies present in the system.
• DNS records for the virtual machines spawned in the system should be updated dynamically when a virtual machine is created or destroyed.
• The service should be scalable to handle an increase in servers and the resulting increased numbers of virtual machines and DNS queries handled in the system.
Defining Multiple Virtual Domain Name Servers
Contrail provides the flexibility to define multiple virtual domain name servers under each
domain in the system. Each virtual domain name server is an authoritative server for the
DNS domain configured. Figure 67 on page 268 shows examples of virtual DNS servers
defined in default-domain, providing the name service for the DNS domains indicated.
Figure 67: DNS Servers Examples
IPAM and Virtual DNS
Each IP address management (IPAM) service in the system can refer to one of the virtual
DNS servers configured. The virtual networks and virtual machines spawned are
associated with the DNS domain specified in the corresponding IPAM. When the VMs
are configured with DHCP, they receive the domain assignment in the DHCP domain-name
option. Examples are shown in Figure 68 on page 269.
Figure 68: IPAM and Virtual DNS
DNS Record Types
DNS records can be added statically. DNS record types A, CNAME, PTR, and NS are
currently supported in the system. Each record includes the type, class (IN), name, data,
and TTL values. See Table 20 on page 269 for descriptions of the record types.
Table 20: DNS Record Types Supported
A – Used for mapping hostnames to IPv4 addresses. Name refers to the name of the virtual machine, and data is the IPv4 address of the virtual machine.
CNAME – Provides an alias to a name. Name refers to the name of the virtual machine, and data is the new name (alias) for the virtual machine.
PTR – A pointer to a record; it provides reverse mapping from an IP address to a name. Name refers to the IP address, and data is the name for the virtual machine. The address in the PTR record should be part of a subnet configured for a VN within one of the IPAMs referring to this virtual DNS server.
NS – Used to delegate a subdomain to another DNS server. The DNS server could be another virtual DNS server defined in the system or the IP address of an external DNS server reachable via the infrastructure. Name refers to the subdomain being delegated, and data is the name of the virtual DNS server or IP address of an external server.
Figure 69 on page 270 shows an example usage for the DNS record type of NS.
Figure 69: Example Usage for NS Record Type
Configuring DNS Using the Interface
DNS can be configured by using the user interface or by using scripts. The following
procedure shows how to configure DNS through the Juniper Networks Contrail interface.
1. Access Configure > DNS > Servers to create or delete virtual DNS servers and records.
The Configure DNS Records screen appears; see Figure 70 on page 270.
Figure 70: Configure DNS Records
2. To add a new DNS server, click the Create button.
Enter DNS server information in the Add DNS window; see Figure 71 on page 271.
Figure 71: Add DNS
Complete the fields for the new server; see Table 21 on page 271.
Table 21: Add DNS Fields
Server Name – Enter a name for this server.
Domain Name – Enter the name of the domain for this server.
Time To Live – Enter the TTL in seconds.
Next DNS Server – Select from a list the name of the next DNS server to process DNS requests if they cannot be processed at this server, or None.
Load Balancing Order – Select the load-balancing order from a drop-down list (Random, Fixed, Round Robin). When a name has multiple matching records, the configured record order determines the order in which the records are sent in the response. Select Random to have the records sent in random order, Fixed to have them sent in the order of creation, or Round Robin to have the record order cycled for each request.
OK – Click OK to create the server.
Cancel – Click Cancel to clear the fields and start over.
3. To add a new DNS record, from the Configure DNS Records screen, click the Add Record
button in the lower right portion of the screen.
The Add DNS Record window appears; see Figure 72 on page 272.
Figure 72: Add DNS Record
4. Complete the fields for the new record; see Table 22 on page 272.
Table 22: Add DNS Record Fields
Record Name – Enter a name for this record.
Type – Select the record type from a drop-down list (A, CNAME, PTR, NS).
IP Address – Enter the IP address for the location for this record.
Class – Select the record class from a drop-down list; IN is the default.
Time To Live – Enter the TTL in seconds.
OK – Click OK to create the record.
Cancel – Click Cancel to clear the fields and start over.
5. To associate an IPAM to a virtual DNS server, from the Configure DNS Records screen,
select the Associated IPAMs tab in the lower right portion of the screen and click the
Edit button.
The Associate IPAMs to DNS window appears; see Figure 73 on page 273.
Figure 73: Associate IPAMs to DNS
Complete the IPAM associations, using the field descriptions in Table 23 on page 273.
Table 23: Associate IPAMs to DNS Fields
Associate to All IPAMs – Select this box to associate the selected DNS server to all available IPAMs.
Available IPAMs – This column displays the currently available IPAMs.
Associated IPAMs – This column displays the IPAMs currently associated with the selected DNS server.
>> – Select an available IPAM in the left column and click this button to move it to the Associated IPAMs column. The selected IPAM is now associated with the selected DNS server.
<< – Select an associated IPAM in the right column and click this button to move it to the Available IPAMs column. The selected IPAM is now disassociated from the selected DNS server.
OK – Click OK to commit the changes indicated in the window.
Cancel – Click Cancel to clear all entries and start over.
6. Use the IP Address Management screen (Configure > Networking > IP Address
Management; see Figure 74 on page 274) to configure the DNS mode for any DNS server
and to associate an IPAM to DNS servers of any mode or to tenants’ IP addresses.
Figure 74: Configure IP Address Management
7. To associate an IPAM to a virtual DNS server or to tenant’s IP addresses, at the IP
Address Management screen, select the network associated with this IPAM, then click
the Action button in the last column, and click Edit.
The Edit IP Address Management window appears; see Figure 75 on page 274.
Figure 75: DNS Server
8. In the first field, select the DNS Method from a drop-down list (None, Default DNS, Tenant DNS, Virtual DNS); see Table 24 on page 274.
Table 24: DNS Modes
None – Select None when no DNS support is required for the VMs.
Default – In default mode, DNS resolution for VMs is performed based on the name server configuration in the server infrastructure. The subnet default gateway is configured as the DNS server for the VM, and the DHCP response to the VM has this DNS server option. DNS requests sent by a VM to the default gateway are sent to the name servers configured on the respective compute nodes. The responses are sent back to the VM.
Tenant – Configure this mode when a tenant wants to use its own DNS servers. Configure the list of servers in the IPAM. The server list is sent in the DHCP response to the VM as DNS servers. DNS requests sent by the VMs are routed the same as any other data packet based on the available routing information.
Virtual DNS – Configure this mode to support virtual DNS servers (VDNS) to resolve the DNS requests from the VMs. Each IPAM can have a virtual DNS server configured in this mode.
9. Complete the remaining fields on this screen, and click OK to commit the changes, or
click Cancel to clear the fields and start over.
Configuring DNS Using Scripts
DNS can be configured via the user interface or by using scripts that are available in the /opt/contrail/utils directory. The scripts are described in Table 25 on page 275.
CAUTION: Be aware of the following cautions when using scripts to configure
DNS:
• DNS doesn’t allow special characters in the names, other than - (dash) and . (period). Any records that include special characters in the name will be discarded by the system.
• The IPAM DNS mode and association should only be edited when there are no virtual machine instances in the virtual networks associated with the IPAM.
Table 25: DNS Scripts

Add a virtual DNS server – Script: add_virtual_dns.py
Sample usage: python add_virtual_dns.py --api_server_ip 10.204.216.21 --api_server_port 8082 --name vdns1 --domain_name default-domain --dns_domain juniper.net --dyn_updates --record_order random --ttl 1200 --next_vdns default-domain:vdns2

Delete a virtual DNS server – Script: del_virtual_dns.py
Sample usage: python del_virtual_dns.py --api_server_ip 10.204.216.21 --api_server_port 8082 --fq_name default-domain:vdns1

Add a DNS record – Script: add_virtual_dns_record.py
Sample usage: python add_virtual_dns_record.py --api_server_ip 10.204.216.21 --api_server_port 8082 --name rec1 --vdns_fqname default-domain:vdns1 --rec_name one --rec_type A --rec_class IN --rec_data 1.2.3.4 --rec_ttl 2400

Delete a DNS record – Script: del_virtual_dns_record.py
Sample usage: python del_virtual_dns_record.py --api_server_ip 10.204.216.21 --api_server_port 8082 --fq_name default-domain:vdns1:rec1

Associate a virtual DNS server with an IPAM – Script: associate_virtual_dns.py
Sample usage: python associate_virtual_dns.py --api_server_ip 10.204.216.21 --api_server_port 8082 --ipam_fqname default-domain:demo:ipam1 --vdns_fqname default-domain:vdns1

Disassociate a virtual DNS server from an IPAM – Script: disassociate_virtual_dns.py
Sample usage: python disassociate_virtual_dns.py --api_server_ip 10.204.216.21 --api_server_port 8082 --ipam_fqname default-domain:demo:ipam1 --vdns_fqname default-domain:vdns1
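Because records with disallowed characters are silently discarded (see the caution above), it can help to validate names before calling the add scripts. The following is a minimal Python sketch; the helper name is hypothetical, and it assumes the accepted set is letters, digits, - (dash), and . (period) only.

import re

# Accept only letters, digits, dash, and period, per the caution above.
VALID_DNS_NAME = re.compile(r'^[A-Za-z0-9.-]+$')

def is_valid_record_name(name):
    return bool(VALID_DNS_NAME.match(name))

print(is_valid_record_name("rec1.juniper.net"))  # True
print(is_valid_record_name("rec_1"))             # False: '_' would be discarded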
Configuring Discovery Service
The Contrail Discovery Service publishes the IP address and port of the multiple
components of the configuration node. The system runs multiple instances of each
process for high availability and load balancing purposes.
• Contrail Discovery Service Introduction on page 276
• Discovery Service Registration and Publishing on page 277
• Discovery Service Subscription on page 277
• Discovery Service REST API on page 278
• Discovery Service Heartbeats on page 280
• Discovery Service Internal Databases on page 280
• Discovery Service Client Library on page 280
• Discovery Service Debugging on page 280
Contrail Discovery Service Introduction
The following ports are used by the discovery service:
• API port: 5998 TCP
• Heartbeat port: 5998 TCP
To display the publishers, connect to the http://discovery-server-ip:5998/services URL.
To display the subscribers, connect to the http://discovery-server-ip:5998/clients URL.
The Contrail Discovery Service uses the following configuration file and log file:
/etc/contrail/discovery.conf
/var/log/contrail/discovery.log
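The same endpoints can be fetched from a script. A minimal Python 3 sketch follows, assuming the discovery server address 10.204.216.21 that is used in examples elsewhere in this chapter.

import urllib.request

DISCOVERY = "http://10.204.216.21:5998"  # assumed discovery server address

# /services lists publishers; /clients lists subscribers.
for path in ("/services", "/clients"):
    with urllib.request.urlopen(DISCOVERY + path, timeout=5) as resp:
        print(path, resp.getcode())
        print(resp.read(200))  # first bytes of the page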
Discovery Service Registration and Publishing
The Discovery Service publishers send registration requests to the discovery server using
a REST API.
The Discovery Service publishers send periodic heartbeats to the discovery server. The default interval for the heartbeat is 5 seconds.
If three successive heartbeats are missed, the Discovery Service is marked down.
The Discovery Service status is maintained internally. It indicates if the service is up or
down based on the received heartbeat messages.
The discovery server currently supports three policies for selecting what information to return. The three policies are:
Load Balance—The service is returned based on the in-use count (how many subscribers
are currently using the service).
Round Robin—The service is assigned based on a timestamp. The earliest (oldest) publisher is selected for the next assignment.
Fixed—An ordered list of available servers is always sent. If a service goes offline and
comes back again, that service moves to the bottom of the list.
The three policies are configured in the /etc/contrail/discovery.conf file under the service type section.
The response to a publish request is a cookie that must be sent back in the heartbeats.
Discovery Service Subscription
Clients needing a service send requests to the discovery server using a REST API.
The client can specify how many instances of a service to be returned. The default is 1.
If the requested number of instances is 0, the information about all of the publishers of
that service type is returned. Use this to display all the providers of a particular service.
A client is identified by a token (UUID). The token is typically sent as part of a subscription request. The client information is removed from the discovery server database when the time-to-live (TTL) expires.
A response to a client includes a TTL value. When the TTL expires, the client refreshes
the information by sending another subscription request. The TTL sent to the client is a
random value in the range of 5 to 30 minutes.
If a service is overloaded and a new one is started, new clients are assigned the new service instance automatically. To distribute the new servers to existing subscribers, use the discovery_cli.py script to reassign them on demand.
Clients find the discovery service by using the configured IP address and port.
Discovery Service REST API
A REST API is available for registering and publishing services to the Contrail Discovery Server.
A publish request uses the following values:
• POST /publish or POST /publish/<publisher-id>
• Content-Type: application/json or application/xml
• Body: information to be published (service type and data)
The following examples show the REST API for registering and publishing a discovery service.
JSON simple:
{
"control-node": {"ip_addr": "192.168.2.0", "port":1682 }
}
JSON verbose:
{
"service-type" : "foobar",
"foobar" : {"ip_addr": "192.168.2.0", "port":1682 }
}
XML simple:
<foobar2>
<ip-addr>1.1.1.1</ip-addr>
<port>4289</port>
</foobar2>
XML verbose:
<publish>
<foobar2>
<ip-addr>1.1.1.1</ip-addr>
<port>4289</port>
</foobar2>
<oper-state>down</oper-state>
<service-type>foobar2</service-type>
</publish>
JSON Response: {"cookie": c76716813f4b}
XML Response: <response><cookie>c76716813f4b</cookie></response>
The following fields are allowed in the body of the request:
service-type—Name of the service to publish
admin-state—Up or down state
remote-addr—IP address of the client
remote-version—Version number of the client
remote-name—Hostname of the client
oper-state—Each published service can set the oper-state up or down based on its internal
state. You can display the reason the oper-state is up or down using the port 5998
URL.
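The publish call can be exercised with any HTTP client. The following sketch uses the third-party requests library (an assumption; urllib works equally well), posting the "JSON simple" body shown above:

import requests

DISCOVERY = "http://10.204.216.21:5998"  # assumed discovery server address

body = {"control-node": {"ip_addr": "192.168.2.0", "port": 1682}}
resp = requests.post(DISCOVERY + "/publish", json=body)
cookie = resp.json()["cookie"]  # must be echoed back in every heartbeat
print("publish cookie:", cookie)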
A REST API is available for subscribing to the Contrail Discovery Server.
A subscribe request uses the following values:
• POST http://discovery-server-ip:5998/subscribe
• Content-Type: application/json or application/xml
• Body: Service type, instance count, client ID
The following example shows the REST API for subscribing to the discovery service.
JSON: {
"service": "control-node",
"instances": 1,
"client": "6c3f48bf-1098-46e8-8117-5cc745b45983",
"remote-addr" : '1.1.1.'
}
XML:
<control-node>
<instances>1</instances>
<client>UUID</client>
<remote-addr>1.1.1.1</remote-addr>
</control-node>
Response: TTL, List of <service type, Blob>
JSON: {
"Apiservice": [{"ip_addr": "10.84.13.34", "port": "8082"}],
"ttl": 357
}
XML:
<response>
<ttl>300</ttl>
<control-node>
<ip_addr>192.168.2.0</ip_addr>
<port>1682</port>
</control-node>
</response>
The following fields are allowed in the body of the request:
Service Type—This is a string denoting what service is being requested (Apiservice). The
instance count is the number of servers needed.
Client ID—This is a unique ID for the subscriber. Typically it is constructed from the UUID
and the name of the subscriber.
NOTE: The subscription response includes a list of the services.
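A matching subscribe sketch follows, mirroring the JSON body above; the client token is the illustrative value from the example.

import requests

DISCOVERY = "http://10.204.216.21:5998"  # assumed discovery server address

body = {
    "service": "control-node",
    "instances": 1,  # 0 would return all publishers of this service type
    "client": "6c3f48bf-1098-46e8-8117-5cc745b45983",
    "remote-addr": "1.1.1.1",
}
resp = requests.post(DISCOVERY + "/subscribe", json=body).json()
print("ttl:", resp["ttl"])  # re-subscribe before this many seconds elapse
print("assigned:", resp.get("control-node"))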
Discovery Service Heartbeats
A cookie is returned in response to a publish API request. It is sent in the heartbeat
message to the discovery server. If three heartbeat messages are missed, the discovery
server marks the service down and it is no longer assigned to the subscribers.
The heartbeat responses from the discovery server are either 200 OK or 401. The 401
response is sent if the discovery server does not recognize the cookie. This could happen
if the discovery server is restarted with the reset_config option. In this case the client
should plan on republishing the information.
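A heartbeat loop might look like the following sketch. The document does not name the heartbeat URL, so the /heartbeat path is an assumption; the cookie is the value returned by the publish call.

import time
import requests

DISCOVERY = "http://10.204.216.21:5998"  # assumed discovery server address
cookie = "c76716813f4b"                  # cookie from the publish response

while True:
    # The /heartbeat path is an assumption; the document only says the
    # cookie must be sent back in the heartbeats.
    resp = requests.post(DISCOVERY + "/heartbeat", json={"cookie": cookie})
    if resp.status_code == 401:
        # The server no longer recognizes the cookie (for example, after a
        # restart with reset_config): republish, then resume heartbeats.
        break
    time.sleep(5)  # default heartbeat interval is 5 seconds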
Discovery Service Internal Databases
The database is maintained in Cassandra. There is a persistent copy so that the discovery
service can maintain state across restarts.
Discovery Service Client Library
Python and C++ client libraries are available that allow publishing and subscription of
services.
Discovery Service Debugging
To see a list of Discovery Service publishers, connect to the
http://discovery-server-ip:5998/services URL.
To see a list of Discovery Service subscribers, connect to the
http://discovery-server-ip:5998/clients URL.
To see the log messages for the Discovery Service, display the
/var/log/contrail/discovery.log file.
Related Documentation
• Configuring Load-Balancing-as-a-Service in Contrail on page 287
Support for Multicast
This section describes how the Contrail Controller supports broadcast and multicast.
• Subnet Broadcast on page 281
• All-Broadcast/Limited-Broadcast and Link-Local Multicast on page 281
• Host Broadcast on page 282
Subnet Broadcast
Multiple subnets can be attached to a virtual network when it is spawned. Each of the
subnets has one subnet broadcast route installed in the unicast routing table assigned
to that virtual network. The recipient list for the subnet broadcast route includes all of
the virtual machines that belong to that subnet. Packets originating from any VM in that
subnet are replicated to all members of the recipient list, except the originator. Because
the next hop is the list of recipients, it is called a composite next hop.
If there is no virtual machine spawned under a subnet, the subnet routing entry discards
the packets received. If all of the virtual machines in a subnet are turned off, the routing
entry points to discard. If the IPAM is deleted, the subnet route corresponding to that
IPAM is deleted. If the virtual network is turned off, all of the subnet routes associated
with the virtual network are removed.
Subnet Broadcast Example
The following configuration is made:
Virtual network name – vn1
Unicast routing instance – vn1.uc.inet
Subnets (IPAM) allocated – 1.1.1.0/24; 2.2.0.0/16; 3.3.0.0/16
Virtual machines spawned – vm1 (1.1.1.253); vm2 (1.1.1.252); vm3 (1.1.1.251); vm4
(3.3.1.253)
The following subnet route additions are made to the routing instance vn1.uc.inet.0:
1.1.1.255 -> forward to NH1 (composite next hop)
2.2.255.255 -> DROP
3.3.255.255 -> forward to NH2
The following entries are made to the next-hop table:
NH1 – 1.1.1.253; 1.1.1.252; 1.1.1.251
NH2 – 3.3.1.253
If traffic originates for 1.1.1.255 from vm1 (1.1.1.253), it will be forwarded to vm2 (1.1.1.252)
and vm3 (1.1.1.251). The originator vm1 (1.1.1.253) will not receive the traffic even though
it is listed as a recipient in the next hop.
All-Broadcast/Limited-Broadcast and Link-Local Multicast
The address group 255.255.255.255 is used with all-broadcast (limited-broadcast) and
multicast traffic. The route is installed in the multicast routing instance. The source
address is recorded as ANY, so the route is ANY/255.255.255.255 (*,G). It is unique per
routing instance, and is associated with its corresponding virtual network. When a virtual
network is spawned, it usually contains multiple subnets, in which virtual machines are
added. All of the virtual machines, regardless of their subnets, are part of the recipient
list for ANY/255.255.255.255. The replication is sent to every recipient except the originator.
Link-local multicast also uses the all-broadcast method for replication. The route is
deleted when all virtual machines in this virtual network are turned off or the virtual
network itself is deleted.
All-Broadcast Example
The following configuration is made:
Virtual network name – vn1
Unicast routing instance – vn1.uc.inet
Subnets (IPAM) allocated – 1.1.1.0/24; 2.2.0.0/16; 3.3.0.0/16
Virtual machines spawned – vm1 (1.1.1.253); vm2 (1.1.1.252); vm3 (1.1.1.251); vm4
(3.3.1.253)
The following subnet route addition is made to the routing instance vn1.uc.inet.0:
255.255.255.255/* -> NH1
The following entries are made to the next-hop table:
NH1 – 1.1.1.253; 1.1.1.252; 1.1.1.251; 3.3.1.253
If traffic originates for 1.1.1.255 from vm1 (1.1.1.253), the traffic is forwarded to vm2 (1.1.1.252),
vm3 (1.1.1.251), and vm4 (3.3.1.253). The originator vm1 (1.1.1.253) will not receive the traffic
even though it is listed as a recipient in the next hop.
Host Broadcast
The host broadcast route is present in the host routing instance so that the host operating
system can send a subnet broadcast/all-broadcast (limited-broadcast). This type of
broadcast is sent to the fabric by means of a vhost interface. Additionally, any subnet
broadcast/all-broadcast received from the fabric will be handed over to the host operating
system.
Using Static Routes with Services
• Static Routes for Service Instances on page 282
• Configuring Static Routes on a Service Instance on page 283
• Configuring Static Routes on Service Instance Interfaces on page 284
• Configuring Static Routes as Host Routes on page 286
Static Routes for Service Instances
Static routes can be configured in a virtual network to direct traffic to a service virtual
machine.
The following figure shows a virtual network with subnet 10.1.1.0/24. All of the traffic from
a virtual machine that is directed to subnet 11.1.1.0/24 can be configured to be routed by
means of a service machine, by using the static route 11.1.1.252 configured on the service
virtual machine interface.
Configuring Static Routes on a Service Instance
To configure static routes on a service instance, first enable the static route option in the
service template to be used for the service instance.
To enable the static route option in a service template:
1. Go to Configure > Services > Service Templates and click Create.
2. At Add Service Template, complete the fields for Name, Service Mode, and Image Name.
3. Select the Interface Types to use for the template, then for each interface type that
might have a static route configured, click the check box under the Static Routes
column to enable the static route option for that interface.
The following figure shows a service template in which the left and right interfaces
of service instances have the static routes option enabled. Now a user can configure
a static route on a corresponding interface on a service instance that is based on the
service template shown.
Configuring Static Routes on Service Instance Interfaces
To configure static routes on a service instance interface:
1. Go to Configure > Services > Service Instances and click Create.
2. At Create Service Instances, complete the fields for Instance Name and Services
Template.
3. Select the virtual network for each of the interfaces.
4. Click the Static Routes dropdown menu under each interface field for which the static
routes option is enabled to open the Static Routes menu and configure the static
routes in the fields provided.
NOTE: If the Auto Configured option is selected, traffic destined to the
static route subnet is load balanced across service instances.
The following figure shows a configuration to apply a service instance between VN1
(10.1.1.0/24) and VN2 (11.1.1.0/24). The left interface of the service instance is configured
with VN1 and the right interface is configured to be VN2 (11.1.1.0/24). The static route
11.1.1.0/24 is configured on the left interface, so that all traffic from VN1 that is destined
to VN2 reaches the left interface of the service instance.
The following figure shows static route 10.1.1.0/24 configured on the right interface, so
that all traffic from VN2 that is destined to VN1 reaches the right interface of the service
virtual machine.
When the static routes are configured for both the left and the right interfaces, all
inter-virtual network traffic is forwarded through the service instance.
Configuring Static Routes as Host Routes
You can also use static routes for host routes for a virtual machine, by using the classless
static routes option in the DHCP server response that is sent to the virtual machine.
The routes to be sent in the DHCP response to the virtual machine can be configured for
each virtual network as it is created.
To configure static routes as host routes:
1. Go to Configure > Networking > Networks and click Create.
2. At Create Network, click the Host Routes option and add the host routes to be sent to
the virtual machines.
An example is shown in the following figure.
Configuring Metadata Service
OpenStack enables virtual machines to access metadata by sending an HTTP request
to the link-local address 169.254.169.254. The metadata request from the virtual machine
is proxied to Nova with additional HTTP header fields that Nova uses to identify the
source instance, then responds with appropriate metadata.
In Contrail, the vRouter acts as the proxy, by trapping the metadata requests, adding the
necessary header fields, and sending the requests to the Nova API server.
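From inside a virtual machine, the proxied service can be exercised directly. The following is a minimal Python 3 sketch; the meta_data.json path is the standard Nova metadata location and is an assumption about the Nova version in use.

import urllib.request

# Link-local metadata address trapped and proxied by the vRouter.
META_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

with urllib.request.urlopen(META_URL, timeout=5) as resp:
    print(resp.read().decode())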
The metadata service is configured by setting the linklocal-services property on the
global-vrouter-config object.
Use the following elements to configure the linklocal-services element for metadata
service:
• linklocal-service-name = metadata
• linklocal-service-ip = 169.254.169.254
• linklocal-service-port = 80
• ip-fabric-service-ip = [server-ip-address]
• ip-fabric-service-port = [server-port]
The linklocal-services properties can be set from the Contrail UI (Configure > Infrastructure
> Link Local Services) or by using the following command:
python /opt/contrail/utils/provision_linklocal.py --admin_user <user> --admin_password <passwd> --linklocal_service_name metadata --linklocal_service_ip 169.254.169.254 --linklocal_service_port 80 --ipfabric_service_ip <server-ip-address> --ipfabric_service_port 8775
Configuring Load-Balancing-as-a-Service in Contrail
• Overview: Load-Balancing-as-a-Service on page 287
• Contrail LBaaS Implementation on page 288
Overview: Load-Balancing-as-a-Service
Load-Balancing-as-a-Service (LBaaS) is a feature available through OpenStack Neutron.
Contrail Release 1.20 and greater allows the use of the Neutron API for LBaaS to apply
open source load balancing technologies to provision a load balancer in the Contrail
system.
The LBaaS load balancer enables the creation of a pool of virtual machines serving
applications, all front-ended by a virtual-ip. The LBaaS implementation has the following
features:
• Load balancing of traffic from clients to a pool of backend servers. The load balancer proxies all connections to its virtual IP.
• Provides load balancing for HTTP, TCP, and HTTPS.
• Provides health monitoring capabilities for applications, including HTTP, TCP, and ping.
• Enables floating IP association to virtual-ip for public access to the backend pool.
In the following figure, the load balancer is launched with the virtual IP address 20.1.1.1.
The backend pool of virtual machine applications (App Pool) is on the subnet 30.1.1.0/24.
Each of the application virtual machines gets an IP address from the pool subnet. When a client connects to the virtual-ip for accessing the application, the load
balancer proxies the TCP connection on its virtual-ip, then creates a new TCP connection
to one of the virtual machines in the pool.
The pool member is selected using one of the following methods:
• weighted round robin (WRR), based on the weight assignment
• least connection, which selects the member with the fewest connections
• source IP, which selects based on the source IP of the packet
Additionally, the load balancer monitors the health of each pool member using the
following methods:
• Monitors TCP by creating a TCP connection at intervals.
• Monitors HTTP by creating a TCP connection and issuing an HTTP request at intervals.
• Monitors ping by checking whether a member can be reached by pinging.
Contrail LBaaS Implementation
Contrail supports the OpenStack LBaaS Neutron APIs and creates relevant objects for
LBaaS, including virtual-ip, loadbalancer-pool, loadbalancer-member, and
loadbalancer-healthmonitor. Contrail creates a service instance when a loadbalancer-pool
is associated with a virtual-ip object. The service scheduler then launches a namespace
on a randomly selected virtual router and spawns HAProxy into that namespace. The
configuration for HAProxy is picked up from the load balancer objects. Contrail supports high availability of namespaces and HAProxy by spawning active and standby instances on two different vRouters.
Example: Configuring LBaaS
This feature is enabled on Contrail through Neutron API calls. The following is an example of creating a pool network and a VIP network. The VIP network is created in the public network and members are added in the pool network.
Creating a Load Balancer
Use the following steps to create a load balancer in Contrail.
1. Create a VIP network.
neutron net-create vipnet
neutron subnet-create --name vipsubnet vipnet 20.1.1.0/24
2. Create a pool network.
neutron net-create poolnet
neutron subnet-create --name poolsubnet poolnet 10.1.1.0/24
3. Create a pool for HTTP.
neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP
--subnet-id poolsubnet
4. Add members to the pool.
neutron lb-member-create --address 10.1.1.2 --protocol-port 80 mypool
neutron lb-member-create --address 10.1.1.3 --protocol-port 80 mypool
5. Create a VIP for HTTP and associate it to the pool.
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id vipsubnet mypool
Deleting a Load Balancer
Use the following steps to delete a load balancer in Contrail.
1. Delete the VIP.
neutron lb-vip-delete <vip-uuid>
2. Delete members from the pool.
neutron lb-member-delete <member-uuid>
3. Delete the pool.
neutron lb-pool-delete <pool-uuid>
Managing Healthmonitor for Load Balancer
Use the following commands to create a healthmonitor, associate a healthmonitor to a pool, disassociate a healthmonitor, and delete a healthmonitor.
• Create a healthmonitor.
neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 --type HTTP
• Associate a healthmonitor to a pool.
neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
• Disassociate a healthmonitor from a pool.
neutron lb-healthmonitor-disassociate <healthmonitor-uuid> mypool
Configuring an SSL VIP with an HTTP Backend Pool
Use the following steps to configure an SSL VIP with an HTTP backend pool.
1. Copy an SSL certificate to all compute nodes.
scp ssl_certificate.pem <compute-node-ip>:<certificate-path>
2. Update the information in /etc/contrail/contrail-vrouter-agent.conf.
# SSL certificate path haproxy
haproxy_ssl_cert_path=<certificate-path>
3. Restart contrail-vrouter-agent.
service contrail-vrouter-agent restart
4. Create a VIP for port 443 (SSL).
neutron lb-vip-create --name myvip --protocol-port 443 --protocol HTTP --subnet-id
vipsubnet mypool
A Note on Installation
To use the LBaaS feature, HAProxy version 1.5 or greater and iproute2 version 3.10.0 or greater must both be installed on the Contrail compute nodes.
If you are using fab commands for installation, the haproxy and iproute2 packages will
be installed automatically with LBaaS if you set the following:
env.enable_lbaas=True
Use the following to check the version of the iproute2 package on your system:
root@nodeh5:/var/log# ip -V
ip utility, iproute2-ss130716
root@nodeh5:/var/log#
Limitations
LBaaS currently has these limitations:
• A pool should not be deleted before deleting the VIP.
• Multiple VIPs cannot be associated with the same pool. If a pool needs to be reused, create another pool with the same members and bind it to the second VIP.
• Members cannot be moved from one pool to another. If needed, first delete the members from one pool, then add them to a different pool.
• In case of active-standby failover, namespaces might not get cleaned up when the agent restarts.
• The floating IP association needs to select the VIP port, not the service ports.
CHAPTER 12
Configuring High Availability
• High Availability Support on page 291
• Juniper OpenStack High Availability on page 294
• Example: Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster on page 303
High Availability Support
This section describes how to set up Contrail options for high availability support.
• In Ubuntu setups, OpenStack high availability and Contrail high availability are both supported, for Contrail Release 1.10 and greater.
• In CentOS setups, only Contrail high availability is supported, and only for Contrail Release 1.20 and greater.
• Contrail High Availability Features on page 291
• Configuration Options for Enabling Contrail High Availability on page 292
• Supported Cluster Topologies for High Availability on page 292
• Deploying OpenStack and Contrail on the Same High Available Nodes on page 292
• Deploying OpenStack and Contrail on Different High Available Nodes on page 293
• Deploying Contrail Only on High Available Nodes on page 293
Contrail High Availability Features
The Contrail OpenStack high availability design and implementation provides:
• A high availability active-active implementation for scale-out of the cloud operation and for flexibility to expand the controller nodes to service the compute fabric.
• Anytime availability of the cloud for operations, monitoring, and workload monitoring and management.
• Self-healing of the service and states.
• VIP-based access to the cloud operations API, which provides an easy way to introduce new controllers and an API to the cluster with zero downtime.
• Improved capital efficiencies compared with dedicated hardware implementations, by using nodes assigned to controllers and making them federated nodes in the cluster.
• Operational load distribution across the nodes in the cluster.
For more details about high availability implementation in Contrail, see “High Availability
Support” on page 291.
Configuration Options for Enabling Contrail High Availability
The following are options available to configure high availability within the Contrail
configuration file (testbed.py).
internal_vip – The virtual IP of the OpenStack high availability nodes in the control data network. In a single interface setup, the internal_vip is in the management data control network.
external_vip – The virtual IP of the OpenStack high availability nodes in the management network. In a single interface setup, the external_vip is not required.
contrail_internal_vip – The virtual IP of the Contrail high availability nodes in the control data network. In a single interface setup, the contrail_internal_vip is in the management data control network.
contrail_external_vip – The virtual IP of the Contrail high availability nodes in the management network. In a single interface setup, the contrail_external_vip is not required.
nfs_server – The IP address of the NFS server that is mounted to /var/lib/glance/images of the OpenStack node. The default is env.roledefs['compute'][0].
nfs_glance_path – The NFS server path for saving images. The default is /var/tmp/glance-images/.
manage_amqp – A flag that tells the setup_all task to provision separate rabbitmq setups for OpenStack services on the OpenStack nodes.
Supported Cluster Topologies for High Availability
This section describes configurations for the cluster topologies supported, including:
• OpenStack and Contrail on the same high available nodes
• OpenStack and Contrail on different high available nodes
• Contrail only on high available nodes
Deploying OpenStack and Contrail on the Same High Available Nodes
OpenStack and Contrail services can be deployed in the same set of high available nodes
by setting the internal_vip parameter in the env.ha dictionary of the testbed.py.
Because the high available nodes are shared by both OpenStack and Contrail services, it is sufficient to specify only internal_vip. However, if the nodes have multiple interfaces, with management and data control traffic separated by provisioning multiple interfaces, then the external_vip also needs to be set in the testbed.py.
Example
env.ha = {
    'internal_vip' : 'an-ip-in-control-data-network',
    'external_vip' : 'an-ip-in-management-network',
}
Deploying OpenStack and Contrail on Different High Available Nodes
OpenStack and Contrail services can be deployed on different high available nodes by
setting the internal_vip and the contrail_internal_vip parameter in the env.ha dictionary
of the testbed.py.
Because the OpenStack and Contrail services use different high available nodes, it is
required to separately specify internal_vip for OpenStack high available nodes and
contrail_internal_vip for Contrail high available nodes. If the nodes have multiple interfaces,
with management and data control traffic separated by provisioning multiple interfaces,
then the external_vip and contrail_external_vip options also must be set in the testbed.py.
Example
env.ha = {
    'internal_vip' : 'an-ip-in-control-data-network',
    'external_vip' : 'an-ip-in-management-network',
    'contrail_internal_vip' : 'another-ip-in-control-data-network',
    'contrail_external_vip' : 'another-ip-in-management-network',
}
To manage separate rabbitmq clusters in the OpenStack high available nodes for
OpenStack services to communicate, specify manage_amqp in the env.openstack
dictionary of testbed.py. If manage_amqp is not specified, the default is for the OpenStack
services to use the rabbitmq cluster available in the Contrail high available nodes for
communication.
Example:
env.openstack = {
    'manage_amqp' : 'yes'
}
Deploying Contrail Only on High Available Nodes
Contrail services can be deployed only on a set of high available nodes by setting the
contrail_internal_vip parameter in the env.ha dictionary of the testbed.py.
Because the high available nodes are used by only Contrail services, it is sufficient to specify only contrail_internal_vip. If the nodes have multiple interfaces, with management and data control traffic separated by provisioning multiple interfaces, the contrail_external_vip also needs to be set in the testbed.py.
Example
env.ha = {
    'contrail_internal_vip' : 'an-ip-in-control-data-network',
    'contrail_external_vip' : 'an-ip-in-management-network',
}
To manage separate rabbitmq clusters in the OpenStack nodes for the OpenStack services to communicate, specify manage_amqp in the env.openstack dictionary of the testbed.py. If manage_amqp is not specified, the OpenStack services use the rabbitmq cluster available in the Contrail high available nodes for communication.
Example:
env.openstack = {
    'manage_amqp' : 'yes'
}
Related Documentation
• Juniper OpenStack High Availability on page 294
• Example: Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster on page 303
Juniper OpenStack High Availability
• Introduction on page 295
• Contrail High Availability on page 295
• OpenStack High Availability on page 295
• Supported Platforms on page 295
• Juniper OpenStack High Availability Architecture on page 295
• Juniper OpenStack Objectives on page 296
• Limitations on page 296
• Solution Components on page 297
• Virtual IP with Load Balancing on page 297
• Failure Handling on page 297
• Deployment on page 298
• Minimum Hardware Requirement on page 298
• Compute on page 298
• Network on page 299
• Installation on page 299
• Testbed File for Fab on page 300
Introduction
The Juniper Networks software-defined network (SDN) controller has two major
components: OpenStack and Contrail. High availability (HA) of the controller requires
that both OpenStack and Contrail are resistant to failures. Failures can range from a
service instance failure, node failure, link failure, to all nodes down due to a power outage.
The basic expectation from a highly available SDN controller is that when failures occur,
already provisioned workloads continue to work as expected without any traffic drop,
and the controller is available to perform operations on the cluster. Juniper Networks
OpenStack is a distribution from Juniper Networks that combines OpenStack and Contrail
into one product.
Contrail High Availability
Contrail has high availability already built into various components, including support for
the Active-Active model of high availability, which works by deploying the Contrail node
component with an appropriate required level of redundancy.
The Contrail control node runs BGP and maintains adjacency with the vRouter module
in the compute nodes. Additionally, every vRouter maintains a connection with all available
control nodes.
Contrail uses Cassandra as the database. Cassandra inherently supports fault tolerance
and replicates data across the nodes participating in the cluster. A highly available
deployment of Contrail requires at least two control nodes, three config nodes (including
analytics and webui) and three database nodes.
OpenStack High Availability
High availability of OpenStack is supported by deploying the OpenStack controller nodes
in a redundant manner on multiple nodes. Previous releases of Contrail supported only
a single instance of the OpenStack controller, and running multiple instances of OpenStack posed new problems that needed to be solved, including:
• State synchronization of stateful services (for example, MySQL) across multiple instances.
• Load balancing of requests across the multiple instances of services.
Supported Platforms
Juniper OpenStack Controller high availability has been tested on the following platforms:
• Linux - Ubuntu 12.04 with kernel version 3.13.0-34
• OpenStack Havana
Juniper OpenStack High Availability Architecture
A typical cloud infrastructure deployment consists of a pool of resources of compute,
storage, and networking infrastructure, all managed by a cluster of controller nodes.
The following figure illustrates a high-level reference architecture of a high availability
deployment using Juniper OpenStack deployed as a cluster of controller nodes.
Juniper OpenStack Objectives
The main objectives and requirements for Juniper OpenStack high availability are:
• 99.999% availability for tenant traffic.
• Anytime availability for cloud operations.
• Provide VIP-based access to the API and UI services.
• Load balance network operations across the cluster.
• Management and orchestration elasticity.
• Failure detection and recovery.
Limitations
The following are limitations of Juniper OpenStack high availability:
• Only one failure is supported.
• During failover, a REST API call may fail. The application or user must reattempt the call.
• Although zero packet drop is the objective, in a distributed system such as Contrail, a few packets may drop during ungraceful failures.
• Juniper OpenStack high availability is not tested with any third-party load balancing solution other than HAProxy.
Solution Components
Juniper OpenStack's high availability active-active model provides scale out of the infrastructure and orchestration services. The model makes it very easy to introduce new services in the controller and in the orchestration layer.
Virtual IP with Load Balancing
HAProxy runs on all nodes to load balance the connections across multiple instances of the services. To provide a virtual IP (VIP), Keepalived (an open source health check framework and hot standby protocol) runs and elects a master based on the VRRP protocol. The VRRP master owns the VIP. If the master node fails, the VIP moves to a new master elected by VRRP.
The following figure shows OpenStack services provisioned to work with HAProxy and
Keepalived, with HAProxy at the front of OpenStack services in a multiple operating
system node deployment. The OpenStack database is deployed in clustered mode and
uses Galera for replicating data across the cluster. RabbitMQ has clustering enabled as
part of a multinode Contrail deployment. The RabbitMQ configuration is further tuned
to support high availability.
Failure Handling
This section describes how various types of failures are handled, including:
• Service failures
• Node failures
• Networking failures
Service Failures
When an instance of a service fails, HAProxy detects the failure and load-balances any subsequent requests across other active instances of the service. The supervisord process
monitors for service failures and brings up the failed instances. As long as there is one
instance of a service operational, the Juniper OpenStack controller continues to operate.
This is true for both stateful and stateless services across Contrail and OpenStack.
Node Failures
The Juniper OpenStack controller supports single-node failures, both graceful shutdowns or reboots and ungraceful power failures. When a node that is the VIP master
fails, the VIP moves to the next active node as it is elected to be the VRRP master. HAProxy
on the new VIP master sprays the connections over to the active service instances as before, while the failed node is brought back online. Stateful services
(MySQL/Galera, Zookeeper, and so on) require a quorum to be maintained when a node
fails. As long as a quorum is maintained, the controller cluster continues to work without
problems. Data integrity is also inherently preserved by Galera, Rabbit, and other stateful
components in use.
Network Failures
A connectivity break, especially in the control/data network, causes the controller cluster to partition in two. As long as the minimum number of nodes is maintained in one of the partitions, the controller cluster continues to work. Stateful services including MySQL Galera and RabbitMQ detect the partitioning and reorganize their clusters around the reachable nodes. Existing workloads continue to function and pass traffic, and new workloads can be provisioned. When connectivity is restored, the joining node becomes part of the working cluster, and the system is restored to its original state.
Deployment
Minimum Hardware Requirement
A minimum of 3 servers (physical or virtual machines) are required to deploy a highly
available Juniper OpenStack Controller. In Active-Active mode, the Controller cluster
uses Quorum-based consistency management for guaranteeing transaction integrity
across its distributed nodes. This translates to the requirement of deploying 2n+1 nodes
to tolerate n failures.
Juniper OpenStack Controller offers a variety of deployment choices. Depending on the use case, the roles can be deployed either independently or in some combined manner. The type of deployment determines the sizing of the infrastructure. The numbers below represent the minimum requirements across compute, storage, and network.
Compute
• Quad-core Intel(R) Xeon 2.5 GHz or higher
• 32 GB or higher RAM for the controller hosts (increases with the number of hypervisors being supported)
• Minimum 1 TB disk, SSD or HDD
Network
A typical deployment separates control/data traffic from the management traffic.
• Dual 10 GE, bonded (using LAG 802.3ad), for a redundant control/data connection.
• Dual 1 GE, bonded (using LAG 802.3ad), for a redundant management connection.
• A single 10 GE and 1 GE will also work if link redundancy is not desired.
Virtual IP (VIP) addresses are needed from the networks in which the above NICs participate: an external VIP on the management network and an internal VIP on the control/data network. External-facing services are load-balanced using the external VIP, and the internal VIP is used for communication between other services.
Packaging
High availability support brought in new components to the Contrail OpenStack
deployment, which are packaged in a new package called contrail-openstack-ha. It
primarily contains HAProxy, Keepalived, Galera, and their requisite dependencies.
Installation
Installation is supported through fabric (fab) scripts. Externally, there is very little
change, mostly to incorporate multiple OpenStack roles and the VIP configuration. The
testbed.py file has new sections to incorporate the external and internal VIPs. If the
OpenStack and Contrail roles are co-located on the nodes, one set of external and
internal VIPs is enough.
The installation also supports separating the OpenStack and Contrail roles onto
physically different servers. In this case, the external and internal VIPs specified are
used for the OpenStack controller, and a separate set of VIPs, contrail_external_vip and
contrail_internal_vip, are used for the Contrail controller nodes. It is also possible to
specify separate RabbitMQ clusters for the OpenStack and Contrail controllers.
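For example, when the roles are separated, the env.ha section of testbed.py might look
like the following sketch (the Contrail VIP key names follow the names given above; all
addresses are placeholders):

env.ha = {
    'internal_vip' : '<openstack internal vip>',
    'external_vip' : '<openstack external vip>',
    'contrail_internal_vip' : '<contrail internal vip>',
    'contrail_external_vip' : '<contrail external vip>',
}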
When multiple OpenStack roles are specified along with VIPs, the ‘install-contrail’ target
treats the installation as a high availability installation and additionally installs the new
‘contrail-openstack-ha’ package.
Similarly, setup_all treats the setup as a Contrail high availability setup and provisions
the following services using the respective newly-created fab tasks:
• Keepalived — fab setup_keepalived
• HAProxy — fab fixup_restart_haproxy_in_openstack
• Galera — fab setup_galera_cluster, fab fix_wsrep_cluster_address
• Glance — fab setup_glance_images_loc
• Keystone — fab sync_keystone_ssl_certs
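These tasks can also be run individually from the build server if a single service needs
to be re-provisioned; for example (assuming the fab scripts are in the standard
/opt/contrail/utils location):

cd /opt/contrail/utils
fab setup_keepalived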
Also, all of the provisioning scripts are changed to use VIPs instead of the physical IP
addresses of the nodes in all OpenStack-related and Contrail-related configuration files.
The following figure shows
a typical three-node deployment where OpenStack and Contrail roles are co-located on
three servers.
Testbed File for Fab
A sample file is available at:
https://github.com/Juniper/contrail-fabric-utils/blob/R1.10/fabfile/testbeds/testbed_multibox_example.py
You can use the sample file by uncommenting and changing the high availability section
to match your deployment.
The contents of the sample testbed.py file for the minimum high availability configuration
are the following:
from fabric.api import env
#Management ip addresses of hosts in the cluster
host1 = 'host@<ip address>'
host2 = 'host@<ip address>'
host3 = 'host@<ip address>'
host4 = 'host@<ip address>'
host5 = 'host@<ip address>'
#External routers if any
#for eg.
#ext_routers = [('mx1', '<ip address>')]
ext_routers = [('mx1','5.5.5.1')]
public_vn_rtgt = 20000
public_vn_subnet = "<ip address>"
#Autonomous system number
router_asn = <asn>
#Host from which the fab commands are triggered to install and provision
host_build = 'host@<ip address>'
#Role definition of the hosts.
env.roledefs = {
'all': [host1, host2, host3, host4, host5],
'cfgm': [host1, host2, host3],
'openstack': [host1, host2, host3],
'control': [host2, host3 ],
'compute': [host4, host5 ],
'collector': [host1, host2, host3],
'webui': [host1,host2,host3],
'database': [host1,host2,host3],
'build': [host_build],
}
env.hostnames = {
'all': ['vse2100-2', 'vse2100-3', 'vse2100-4','vse2100-5','vse2100-6']
}
#Openstack admin password
env.openstack_admin_password = '<password>'
env.password = '<password>'
#Passwords of each host
env.passwords = {
host1: '<password>',
host2: '<password>',
host3: '<password>',
host4: '<password>',
host5: '<password>',
host_build: '<password>',
}
#For reimage purpose
env.ostypes = {
host1: 'ubuntu',
host2: 'ubuntu',
host3: 'ubuntu',
host4: 'ubuntu',
host5: 'ubuntu',
}
#OPTIONAL BONDING CONFIGURATION
#==============================
#Interface Bonding
bond= {
host1 : { 'name': 'bond0', 'member': ['eth1','eth2'], 'mode':'802.3ad' },
host2 : { 'name': 'bond0', 'member': ['eth1','eth2'], 'mode':'802.3ad' },
host3 : { 'name': 'bond0', 'member': ['eth1','eth2'], 'mode':'802.3ad' },
host4 : { 'name': 'bond0', 'member': ['eth1','eth2'], 'mode':'802.3ad' },
host5 : { 'name': 'bond0', 'member': ['eth1','eth2'], 'mode':'802.3ad' },
}
#OPTIONAL SEPARATION OF MANAGEMENT AND CONTROL + DATA
#====================================================
#Control Interface
control_data = {
host1 : { 'ip': '<ip address>', 'gw' : '5.5.5.1', 'device':'bond0' },
host2 : { 'ip': '<ip address>', 'gw' : '5.5.5.1', 'device':'bond0' },
host3 : { 'ip': '<ip address>', 'gw' : '5.5.5.1', 'device':'bond0' },
host4 : { 'ip': '<ip address>', 'gw' : '5.5.5.1', 'device':'bond0' },
host5 : { 'ip': '<ip address>', 'gw' : '5.5.5.1', 'device':'bond0' },
}
# VIP
env.ha = {
'internal_vip' : '<ip address>',
'external_vip' : '<ip address>'
}
#To disable installing contrail interface rename package
env.interface_rename = False
#To enable multi-tenancy feature
#multi_tenancy = True
#To Enable parallel execution of task in multiple nodes
do_parallel = True
# To configure the encapsulation priority. Default: MPLSoGRE
#env.encap_priority = "'MPLSoUDP','MPLSoGRE','VXLAN'"
NOTE: The management interface configuration happens outside of fab, so
if the user needs a bond interface, the user needs to create a bond and assign
the management NICs to it.
The management network must be a routable network.
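For example, a bonded management interface on Ubuntu could be defined in
/etc/network/interfaces similar to the following sketch (interface names, addresses,
and bonding options are placeholders that must be adapted to your environment):

auto bond1
iface bond1 inet static
    address <management ip>
    netmask <netmask>
    gateway <management gateway>
    bond-slaves eth0 eth3
    bond-mode 802.3ad
    bond-miimon 100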
Related Documentation
• High Availability Support on page 291
• Example: Adding New OpenStack or Contrail Roles to an Existing High Availability
Cluster on page 303
Example: Adding New OpenStack or Contrail Roles to an Existing High Availability
Cluster
This section provides an example of adding new nodes or roles to an existing cluster with
high availability enabled. It is organized in the following sections:
• Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster on
page 303
• Purging a Controller From an Existing Cluster on page 304
• Replacing a Node With a Node That Has the Same IP Address on page 305
• Known Limitations and Configuration Guidelines on page 305
• Understanding How the System Adds a New Node to an Existing Cluster on page 305
Adding New OpenStack or Contrail Roles to an Existing High Availability Cluster
To add new nodes or roles to an existing cluster with high availability enabled:
1. Install the new server with the new base OS image (currently Ubuntu 14.04 is
supported).
2. Download the latest Contrail installer package to the new server.
3. Run the dpkg -i <contrail_install_deb.deb> command on the new server.
4. Run the /opt/contrail/contrail_packages/setup.sh script on the new server to install
the Contrail repository.
5. Modify the testbed.py file in the build server as follows:
a. Add the new host in the list of hosts as host<x>.
b. In the env.roledefs list, add the new node as appropriate. For example, add the
node to the openstack role list so that the new node is configured as an OpenStack
server. Each role can be added independently of the others.
c. Add the hostname of the new server in the env.hostnames field.
d. Add the password for the new server in the env.passwords field.
e. Add the control/data interface of the new server, if applicable.
6. In the build server, go to the /opt/contrail/utils directory and use the fab
install_new_contrail:new_ctrl='root@<host_ip>' command to install all the required
packages in the new server based on the roles configured in the testbed.py file.
7. If you have two interfaces, the control/data interface and the management interface,
the control/data interface can be set up using the fab
setup_interface_node:'root@<host_ip>' command.
8. In the build server, go to the /opt/contrail/utils directory and use the fab
join_cluster:'root@<host_ip>' command. This adds the new server to the corresponding
cluster based on the role that is configured in the testbed.py file.
The new node is added to the existing cluster. See Understanding How the System Adds
a New Node to an Existing Cluster.
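Putting the commands together, the sequence run from the build server looks like the
following sketch (the new node address 10.1.1.10 is hypothetical):

cd /opt/contrail/utils
fab install_new_contrail:new_ctrl='root@10.1.1.10'
fab setup_interface_node:'root@10.1.1.10'   # only if a separate control/data interface is used
fab join_cluster:'root@10.1.1.10'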
Purging a Controller From an Existing Cluster
To purge a controller node from an existing cluster with high availability enabled:
1. Open the testbed.py file, which contains the topology information of the cluster.
2. In the env.roledefs list, remove the role as appropriate. For example, if the node should
no longer act as an OpenStack controller, remove the node from the openstack role
list.
NOTE: Each role can be removed independently of the others. However,
there are certain minimum node requirements for the OpenStack and
database roles (at least three nodes), and if the remaining cluster does
not meet these minimum requirements after purging the node, deleting
the node is not allowed.
NOTE: The node should not be deleted from the host list (or the
passwords) but only from the env.roledefs list. The node can be removed
from the host list after the purge operation is completed.
3. In the build server, go to the /opt/contrail/utils directory and use the fab
purge_node:'root@<ip_address>' command. This removes all the configuration and
stops the relevant services on the node that you need to purge.
4. Remove the rest of the configuration related to the node from the testbed.py file after
the previous command completes.
CAUTION: In the event that the node that needs to be deleted is already
down (non-operational), it should not be brought up again since it would
join the cluster again with unknown consequences.
Replacing a Node With a Node That Has the Same IP Address
To replace a node in an existing cluster with high availability enabled with a node that
has the same IP address:
1. Make sure that the cluster continues to operate when the node being replaced is taken
off the cluster (for example, it meets the minimum failure requirements).
2. Reimage the node that is being replaced and make sure it gets the same IP address.
3. Follow the steps to add a node to an existing cluster.
Known Limitations and Configuration Guidelines
The following lists some known limitations and some configuration guidelines for when
you add new nodes or roles to an existing cluster with high availability enabled:
• Adding a new node to an existing high availability cluster is only supported in Contrail
Release 2.21 and later.
• Converting a single node or cluster that does not have high availability enabled to a
cluster that does have high availability enabled is not supported.
• New nodes must be appended at the end of the existing lists of nodes in the testbed.py
file.
• We recommend that you maintain a cluster with an odd number of controllers, because
high availability is based on a quorum and supporting n failures requires (2n + 1) nodes.
• The new node must be running the same release as the other nodes in the cluster.
• You need to run the nodetool cleanup command after a new node joins the Cassandra
cluster. You can safely schedule this for low-usage hours to prevent disruption of
cluster operation.
• When deleting a node from an existing cluster, the remaining cluster must be
operational and meet the high availability cluster requirements; if not, purging the
node is not allowed.
Understanding How the System Adds a New Node to an Existing Cluster
The following lists the actions the system takes when adding a new node to an existing
cluster with high availability enabled:
• If a new OpenStack server is configured, the system:
• Adds the new node as a participant in the VRRP mastership election.
• Generates a keepalived configuration similar to other nodes in the existing cluster.
• Restarts the keepalived process in all nodes so the configuration takes effect.
• Modifies the haproxy configuration in all the existing nodes to add the new node as
another backend for roles like keystone, nova, glance, and cinder.
• Restarts the haproxy process in all the nodes so that the new configuration takes
effect.
• Modifies the mysql configuration in the /etc/mysql/conf.d/wsrep.conf file to add the
new node into the Galera cluster in all the existing controllers.
• Restarts MySQL in every node sequentially and waits until they come back up.
• Generates a MySQL configuration similar to other existing controllers and waits
until the new OpenStack server syncs all the data from the existing MySQL donors.
• Installs CMON and generates the CMON configuration.
• Adds a CMON user in all the MySQL databases so that CMON is able to monitor the
MySQL server, and regenerates the CMON configuration in all the nodes so that the
CMON in the other nodes can also monitor the new MySQL database.
• Generates cmon_param files based on the new configuration and invokes the monitor
scripts.
• Instructs the new node to join the RabbitMQ cluster (if RabbitMQ is not externally
managed).
• Modifies the rabbitmq.config file in all the existing nodes to include the new node
and restarts the RabbitMQ server sequentially so that the new node joins the cluster.
• If a new database node is configured, the system:
• Generates the database configuration (cassandra.yaml file) for the new node and
uses the nodetool command to make the new node join the existing cluster (uses
the fab setup_database_node command).
• Generates a new zookeeper configuration in the existing node, adds the new node
into the existing zookeeper configuration, and restarts the zookeeper nodes
sequentially.
• Starts the Cassandra database in the new node.
• If a new Configuration Manager node is configured, the system:
• Generates all the configuration required for a new Configuration Manager (cfgm)
node and starts the server (uses the fab setup_config_node command).
• Modifies the HAProxy configuration to add the new node into the existing list of
backends.
• If required, modifies the zookeeper and Cassandra server lists in the
/etc/contrail/contrail-api.conf file in the existing config nodes.
• Restarts the config nodes so that the new configuration takes effect.
• If a new controller node is configured, the system:
• Generates all the configuration required for a new control node and starts the control
node (uses the fab setup_control_node command).
• Instructs the control node to peer with the existing control nodes.
• If a new collector, analytics, or WebUI node is configured, the system:
• Generates all the configuration required for a new collector node (uses the fab
setup_collector_node and fab setup_webui_node commands).
• Updates the existing configuration in all the new nodes to add a new database node,
if required.
• Starts the collector process in the new node.
• Starts the webui process in the new node.
Related Documentation
• High Availability Support on page 291
• Juniper OpenStack High Availability on page 294
CHAPTER 13
Configuring Service Chaining
• Service Chaining on page 309
• Service Chaining MX Series Configuration on page 313
• Example: Creating an In-Network or In-Network-NAT Service Chain on page 314
• Example: Creating a Transparent Service Chain on page 321
• ECMP Load Balancing in the Service Chain on page 325
• Example: Creating a Service Chain With the CLI on page 326
• Using the Juniper Networks Heat Template with Contrail on page 329
Service Chaining
Contrail Controller supports chaining of various Layer 2 through Layer 7 services such as
firewall, NAT, IDP, and so on.
• Service Chaining Basics on page 309
• Service Chaining Configuration Elements on page 310
Service Chaining Basics
Services are offered by instantiating service virtual machines to dynamically apply single
or multiple services to virtual machine (VM) traffic. It is also possible to chain physical
appliance-based services.
Figure 76 on page 309 shows the basic service chain schema, with a single service. The
service VM spawns the service, using the convention of left interface (left IF) and right
interface (right IF). Multiple services can also be chained together.
Figure 76: Service Chaining
When you create a service chain, the Contrail software creates tunnels across the underlay
network that span through all services in the chain. Figure 77 on page 310 shows two end
points and two compute nodes, each with one service instance and traffic going to and
from one end point to the other.
Figure 77: Contrail Service Chain
The following are the modes of services that can be configured:
• Transparent or bridge mode: Used for services that do not modify the packet. Also
known as bump-in-the-wire or Layer 2 mode. Examples include Layer 2 firewall, IDP,
and so on.
• In-network or routed mode: Provides a gateway service where packets are routed
between the service instance interfaces. Examples include NAT, Layer 3 firewall, load
balancer, HTTP proxy, and so on.
• In-network-nat mode: Similar to in-network mode; however, return traffic does not
need to be routed to the source network. In-network-nat mode is particularly useful
for NAT service.
Service Chaining Configuration Elements
Service chaining requires the following configuration elements in the solution:
• Service template
• Service instance
• Service policy
Service Template
Service templates are always configured in the scope of a domain, and the templates
can be used on all projects within a domain. A template can be used to launch multiple
service instances in different projects within a domain.
The following are the parameters to be configured for a service template:
• Service template name
• Domain name
• Service mode:
• Transparent
• In-Network
• In-Network NAT
• Image name (for virtual service)
• If the service is a virtual service, the name of the image to be used must be
included in the service template. In an OpenStack setup, the image must be added
to the setup by using Glance.
• Interface list:
• Ordered list of interfaces; this determines the order in which interfaces are
created on the service instance.
• Most service templates will have management, left, and right interfaces. For service
instances requiring more interfaces, “other” interfaces can be added to the interface
list.
• Shared IP attribute, per interface
• Static routes enabled attribute, per interface
• Advanced options:
• Service scaling: Use this attribute to enable a service instance to have more than
one instance of the service instance virtual machine.
• Flavor: Assign an OpenStack flavor to be used when launching the service instance.
Flavors are defined in OpenStack Nova with attributes such as assignments of CPU
cores, memory, and disk space.
Service Instance
A service instance is always maintained within the scope of a project. A service instance
is launched using a specified service template from the domain to which the project
belongs.
The following are the parameters to be configured for a service instance:
• Service instance name
• Project name
• Service template name
• Number of virtual machines that will be spawned
• Enable service scaling in the service template for multiple virtual machines
• Ordered virtual network list:
• Interfaces listed in the order specified in the service template
• Identify the virtual network for each interface
• Assign static routes for virtual networks that have static routes enabled in the
service template for their interface
• Traffic that matches an assigned static route is directed to the service instance
on the interface created for the corresponding virtual network
Service Policy
The following are the parameters to be configured for a service policy:
• Policy name
• Source network name
• Destination network name
• Other policy match conditions, for example, direction and source and destination ports
• Policy configured in “routed/in-network” or “bridged/transparent” mode
• An action type called apply_service is used. Example:
'apply_service': [DomainName:ProjectName:ServiceInstanceName]
Related Documentation
• Example: Creating an In-Network or In-Network-NAT Service Chain on page 314
• Example: Creating a Service Chain With the CLI on page 326
• ECMP Load Balancing in the Service Chain on page 325
Service Chaining MX Series Configuration
This topic shows how to extend service chaining to the MX Series routers.
To configure service chaining for MX Series routers, extend the virtual networks to the
MX Series router and program routes so that traffic generated from a host connected to
the router can be routed through the service.
1. The following configuration snippet for an MX Series router has a left virtual network
called enterprise and a right virtual network called public. The configuration creates
two routing instances with loopback interfaces and route targets.
routing-instances {
    enterprise {
        instance-type vrf;
        interface lo0.1;
        vrf-target target:100:20000;
    }
    public {
        instance-type vrf;
        interface lo0.2;
        vrf-target target:100:10000;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 10.84.20.1;
            }
        }
        interface xe-0/0/0.0;
    }
}
2. The following configuration snippet shows the configuration for the loopback
interfaces.
interfaces {
    lo0 {
        unit 1 {
            family inet {
                address 2.1.1.100/32;
            }
        }
        unit 2 {
            family inet {
                address 200.1.1.1/32;
            }
        }
    }
}
3. The following configuration snippet shows the configuration to enable BGP. Neighbors
10.84.20.39 and 10.84.20.40 are control nodes.
protocols {
    bgp {
        group demo_contrail {
            type internal;
            description "To Contrail Control Nodes & other MX";
            local-address 10.84.20.252;
            keep all;
            family inet-vpn {
                unicast;
            }
            neighbor 10.84.20.39;
            neighbor 10.84.20.40;
        }
    }
}
4. The final step is to add target:100:10000 to the public virtual network and
target:100:20000 to the enterprise virtual network, using the Contrail Juniper Networks
interface.
A full MX Series router configuration for Contrail can be seen in “Sample Network
Configuration for Devices for Simple Tiered Web Application” on page 261.
Example: Creating an In-Network or In-Network-NAT Service Chain
This section provides an example of creating an in-network service chain and an
in-network-nat service chain using the Contrail Juniper Networks user interface. This
service chain example also shows scaling of service instances.
• Creating an In-Network or In-Network-NAT Service Chain on page 314
Creating an In-Network or In-Network-NAT Service Chain
To create an in-network or in-network-nat service chain:
1. Create a left and a right virtual network. Select Configure > Networking > Networks
and create left_vn and right_vn; see Figure 78 on page 314.
Figure 78: Create Networks
2. Configure a service template for an in-network service template for NAT. Navigate to
Configure > Services > Service Templates and click the Create button on Service
Templates. The Add Service Template window appears; see Figure 79 on page 315.
Figure 79: Add Service Template
Table 26: Add Service Template Fields
Name: Enter a name for the service template.
Service Mode: Select the service mode: In-Network (for firewall service), In-Network-NAT
(for NAT service), or Transparent.
Service Scaling: If you will be using multiple virtual machines for a single service instance
to scale out the service, select the Service Scaling check box. When scaling is selected,
you can choose to use the same IP address for a particular interface on each virtual
machine interface or to allocate new addresses for each virtual machine. For a NAT
service, the left (inner) interface should have the same IP address, and the right (outer)
interface should have a different IP address.
Image Name: Select the image for the service from the list of available images.
Interface Types: Select the interface type or types for this service:
• For firewall or NAT services, both Left Interface and Right Interface are required.
• For an analyzer service, only a Left Interface is required.
• For Juniper Networks virtual images, Management Interface is also required, in addition
to any left or right requirement.
3. On Add Service Template, complete the following for the in-network service template:
• Name: nat-template
• Service Mode: In-Network
• Service Scaling: select from Advanced
• Image Name: nat-service
• Interface Types: select Left Interface and Right Interface. For Juniper Networks virtual
images, select Management Interface as the first interface.
• The Left Interface is automatically marked for sharing the same IP address.
4. If multiple instances are to be launched for a particular service instance, select the
Service Scaling check box, which enables the Shared IP feature. Figure 80 on page 316
shows the Left interface selected, with the Shared IP check box selected, so the left
interface will share the IP address.
Figure 80: Add Service Template Shared IP
5. When finished, click Save.
The service template is created and appears on the Service Templates screen; see
Figure 81 on page 317.
Figure 81: Service Templates
6. Now create the service instance. Navigate to Configure > Services > Service Instances,
and click Create, then select the template to use and select the corresponding left,
right, or management networks; see Figure 82 on page 317.
Figure 82: Create Service Instances
Table 27: Create Service Instances Fields
Instance Name: Enter a name for the service instance.
Services Template: Select the service template to use for this instance from the list of
available service templates.
Number of Instances: If scaling is enabled, enter a value in the Number of Instances field
to define the number of instances of service virtual machines to launch.
Interface List and Virtual Networks: An ordered list of interfaces as defined in the Service
Template. If you are using the Management Interface, select Auto Configured; the software
will use an internally-created virtual network. For Left Interface, select left_vn, and for
Right Interface, select right_vn.
7. If static routes are enabled for specific interfaces, open the Static Routes field below
each enabled interface and enter the static route address details; see
Figure 83 on page 318.
Figure 83: Create Service Instances
8. The console for the service instances can be viewed. At Configure > Services > Service
Instances, click the arrow next to the name of the service instance to reveal the details
panel for that instance, then click View Console to see the console details; see
Figure 84 on page 318 and Figure 85 on page 319.
Figure 84: Service Instance Details
Figure 85: Service Instance Console
9. Next, configure the network policy. Navigate to Configure > Networking > Policies.
• Name the policy and associate it with the networks created earlier, left_vn and
right_vn.
• Set the source network as left_vn and the destination network as right_vn.
• Check Apply Service and select the service (nat-ecmp).
Figure 86: Create Policy
10. Next, associate the policy with both the left_vn and the right_vn. Navigate to Configure
> Networking > Networks.
• On the right side of left_vn, click the gear icon to enable Edit Network.
• In the Edit Network dialog box for left_vn, select nat-policy in the Network Policy(s)
field.
• Repeat the same process for the right_vn.
Figure 87: Edit Network
11. Next, launch virtual machines (from OpenStack) and test the traffic through the
service chain by doing the following:
a. Navigate to Configure > Networking > Policies.
b. Launch left_vm in virtual network left_vn.
c. Launch right_vm in virtual network right_vn.
d. Ping from left_vm to right_vm IP address (2.2.2.252 in Figure 88 on page 320).
e. A TCPDUMP on the right_vm should show that packets are NAT-enabled and have
the source IP set to 2.2.2.253.
Figure 88: Launch Instances
Related Documentation
• Service Chaining on page 309
• Example: Creating a Transparent Service Chain on page 321
• ECMP Load Balancing in the Service Chain on page 325
Example: Creating a Transparent Service Chain
This section provides an example of creating a transparent mode service chain using the
Contrail Controller Juniper Networks user interface. Also called bridge mode, transparent
mode is used for services that do not modify the packet, such as Layer 2 firewall, Intrusion
Detection and Prevention (IDP), and so on. The following service chain example also
shows scaling of service instances.
• Creating a Transparent Mode Service Chain on page 321
Creating a Transparent Mode Service Chain
To create a transparent mode service chain:
1. First create a left and a right virtual network. Select Configure > Networking > Networks
and create left_vn and right_vn; see Figure 89 on page 321.
Figure 89: Create Networks
2. Next, configure a service template for a transparent mode. Navigate to Configure >
Services > Service Templates and click the Create button on Service Templates. The
Add Service Template window appears; see Figure 90 on page 322.
Figure 90: Add Service Template
Table 28: Add Service Template Fields
Name: Enter a name for the service template.
Service Mode: Select the service mode: In-Network or Transparent.
Service Scaling: If you will be using multiple virtual machines for a single service instance
to scale out the service, select the Service Scaling check box. When scaling is selected,
you can choose to use the same IP address for a particular interface on each virtual
machine interface or to allocate new addresses for each virtual machine. For a NAT
service, the left (inner) interface should have the same IP address, and the right (outer)
interface should have a different IP address.
Image Name: Select the image for the service from the list of available images.
Interface Types: Select the interface type or types for this service:
• For firewall or NAT services, both Left Interface and Right Interface are required.
• For an analyzer service, only a Left Interface is required.
• For Juniper Networks virtual images, Management Interface is also required, in addition
to any left or right requirement.
Copyright © 2016, Juniper Networks, Inc.
Chapter 13: Configuring Service Chaining
3. On Add Service Template, complete the following for the transparent mode service
template:
• Name: firewall-template
• Service Mode: Transparent
• Service Scaling: select
• Image Name: vsrx-bridge
• Interface Types: select Left Interface, Right Interface, and Management Interface
4. If multiple instances are to be launched for a particular service instance, select the
Service Scaling check box, which enables the Shared IP feature.
5. When finished, click Save.
6. Now create the service instance. Navigate to Configure > Services > Service Instances,
and click Create, then select the template to use and select the corresponding left,
right, or management networks; see Figure 91 on page 323.
Figure 91: Create Service Instances
Table 29: Create Service Instances Fields
Instance Name: Enter a name for the service instance.
Services Template: Select the service template to use for this instance from the list of
available service templates.
Left Network: Select the network to use for the left interface from the list of available
virtual networks. For transparent mode, select Auto Configured.
Right Network: Select the network to use for the right interface from the list of available
virtual networks. For transparent mode, select Auto Configured.
Management Network: If you are using the Management Interface, select Auto Configured;
the software will use an internally-created virtual network. For transparent mode, select
Auto Configured.
7. If scaling is enabled, enter a value in the Number of Instances field to define the number
of instances of service virtual machines to launch; see Figure 92 on page 324.
Figure 92: Service Instance Details
8. Next, configure the network policy. Navigate to Configure > Networking > Policies.
• Name the policy fw-policy.
• Set the source network as left_vn and the destination network as right_vn.
• Check Apply Service and select the service (fw-instance).
Figure 93: Create Policy
9. Next, associate the policy with the networks created earlier, left_vn and right_vn.
Navigate to Configure > Networking > Networks.
• On the right side of left_vn, click the gear icon to enable Edit Network.
• In the Edit Network dialog box for left_vn, select fw-policy in the Network Policy(s)
field.
• Repeat the process for the right_vn.
10. Next, launch virtual machines (from OpenStack) and test the traffic through the
service chain by doing the following:
a. Navigate to Configure > Networking > Policies.
b. Launch left_vm in virtual network left_vn.
c. Launch right_vm in virtual network right_vn.
d. Ping from left_vm to right_vm IP address (2.2.2.252 in Figure 94 on page 325).
e. A TCPDUMP on the right_vm should show that packets have the source IP set to
2.2.2.253.
Figure 94: Launch Instances
Related Documentation
• Service Chaining on page 309
ECMP Load Balancing in the Service Chain
Traffic flowing through a service chain can be load-balanced by distributing traffic streams
to multiple service virtual machines (VMs) that are running identical applications. This
is illustrated in Figure 95 on page 325, where the traffic streams between VM-A and VM-B
are distributed between Service VM-1 and Service VM-2. If Service VM-1 goes down, then
all streams that are dependent on Service VM-1 will be moved to Service VM-2.
Figure 95: Load Balancing a Service Chain
The following are the major features of load balancing in the service chain:
• Load balancing can be configured at every level of the service chain.
• Load balancing is supported in routed and bridged service chain modes.
• Load balancing can be used to achieve high availability—if a service VM goes down,
the traffic passing through that service VM can be distributed through another service
VM.
• A load-balanced traffic stream always follows the same path through the chain of
service VMs.
Related Documentation
• Service Chaining on page 309
• Example: Creating a Service Chain With the CLI on page 326
Example: Creating a Service Chain With the CLI
This section provides syntax and examples for creating service chaining objects for
Contrail Controller.
• CLI for Creating a Service Chain on page 326
• CLI for Creating a Service Template on page 326
• CLI for Creating a Service Instance on page 327
• CLI for Creating a Service Policy on page 327
• Example: Creating a Service Chain with VSRX and In-Network or Routed
Mode on page 327
CLI for Creating a Service Chain
All of the commands needed to create service chaining objects are located in
/opt/contrail/utils.
CLI for Creating a Service Template
The following commands are used to create a service template:
./service-template.py add
[--svc_type {firewall, analyzer}]
[--image_name IMAGE_NAME]
template_name
./service-template.py del
template_name
CLI for Creating a Service Instance
The following commands are used to create a service instance:
./service-instance.py add
[--proj_name PROJ_NAME]
[--mgmt_vn MGMT_VN]
[--left_vn LEFT_VN]
[--right_vn RIGHT_VN]
instance_name
template_name
./service-instance.py del
[--proj_name PROJ_NAME]
instance_name
template_name
CLI for Creating a Service Policy
The following commands are used to create a service policy:
./service-policy.py add
--svc_list SVC_LIST [SVC_LIST ...]
--vn_list VN_LIST [VN_LIST ...]
[--proj_name PROJ_NAME]
policy_name
./service-policy.py del
[--proj_name PROJ_NAME]
policy_name
Example: Creating a Service Chain with VSRX and In-Network or Routed Mode
The following example creates a VSRX firewall service in a virtual network named test,
using a project named demo and a template, an instance, and a policy, all named test.
1. Add images to Glance (the OpenStack image service).
a. Download the following images:
precise-server-cloudimg-amd64-disk1.img
junos-vsrx-12.1-nat.img
b. Add the images to Glance, using the names ubuntu and vsrx.
(source /etc/contrail/openstackrc; glance add name='ubuntu' is_public=true
container_format=ovf disk_format=qcow2 <
precise-server-cloudimg-amd64-disk1.img)
(source /etc/contrail/openstackrc; glance add name='vsrx' is_public=true
container_format=ovf disk_format=qcow2 < junos-vsrx-12.1-dhcp.img)
2. Create a service template of type firewall that uses the vsrx image.
./service-template.py add test_template --svc_type firewall --image_name vsrx
3. Create virtual networks.
VN1
VN2
4. Create a service template.
./service-template.py add --svc_scaling ecmp-template
5. Create a service instance.
./service-instance.py add --proj_name admin --left_vn VN1 --right_vn VN2
--max_instances 3 ecmp-instance ecmp-template
6. Create a service policy.
./service-policy.py add --proj_name admin --svc_list ecmp-instance --vn_list VN1 VN2
ecmp-policy
7. Create virtual machines and attach them to virtual networks.
VM1 (attached to VN1)—use ubuntu image
VM2 (attached to VN2)—use ubuntu image
8. Launch the instances VM1 and VM2.
9. Send ping traffic from VM1 to VM2.
10. Send traffic from VM1 in VN1 to VM2 in VN2.
11. You can use the Contrail Juniper Networks interface to monitor the ping traffic flows.
Select Monitor > Infrastructure > Virtual Routers and select an individual vRouter. Click
through to view the vRouter details, where you can click the Flows tab to view the
flows.
Related Documentation
• Service Chaining on page 309
Using the Juniper Networks Heat Template with Contrail
Heat is the orchestration engine of the OpenStack Orchestration program. Heat enables
launching multiple cloud applications based on templates that are comprised of text
files.
• Introduction to Heat on page 329
• Heat Architecture on page 329
• Juniper Heat Plugin on page 329
• Example: Creating a Service Template Using Heat on page 330
Introduction to Heat
A Heat template describes the infrastructure for a cloud application, such as networks,
servers, floating IP addresses, and the like, and can be used to manage the entire life
cycle of that application.
When the application infrastructure changes, the Heat templates can be modified to
automatically reflect those changes. Heat also can delete all application resources if the
system is finished with an application.
Heat templates can also record the relationships between resources, for example, which
networks are connected by means of policy enforcements, and consequently call
OpenStack REST APIs that create the necessary infrastructure, in the correct order, that
is needed to launch the application managed by the Heat template.
Heat Architecture
Heat is implemented by means of Python applications, including the following:
• heat-client --- The CLI tool that communicates with the heat-api application to run
Heat APIs.
• heat-api --- Provides an OpenStack native REST API that processes API requests by
sending them to the Heat engine over remote procedure calls (RPC).
• heat-engine --- Responsible for orchestrating the launch of templates and providing
events back to the API consumer.
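For example, once the Heat services are running, you can interact with the engine through
the heat-client CLI (a simple illustration; it assumes OpenStack credentials have been
sourced into the environment):

heat stack-list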
Juniper Heat Plugin
The Juniper Heat plugin enables the use of some resources not currently included in the
OpenStack Heat orchestration engine, including network policy, service template, and
service instances. Resources are the specific objects that Heat creates or modifies as
part of its operation. The Heat plugin resources are loaded into the /usr/lib/heat/resources
directory by the heat-engine service as it starts up. The names of the resource types in
the Juniper Heat plugin include:
• OS::Contrail::NetworkPolicy
• OS::Contrail::ServiceTemplate
• OS::Contrail::AttachPolicy
• OS::Contrail::ServiceInstance
Example: Creating a Service Template Using Heat
The following is an example of how to create a service template using Heat.
1. Define a template to create the service template.
service_template.yaml
heat_template_version: 2013-05-23
description: >
  HOT template to create a service template
parameters:
  name:
    type: string
    description: Name of service template
  mode:
    type: string
    description: service mode
  type:
    type: string
    description: service type
  image:
    type: string
    description: Name of the image
  flavor:
    type: string
    description: Flavor
  service_interface_type_list:
    type: string
    description: List of interface types
  shared_ip_list:
    type: string
    description: List of shared ip enabled-disabled
  static_routes_list:
    type: string
    description: List of static routes enabled-disabled
resources:
  service_template:
    type: OS::Contrail::ServiceTemplate
    properties:
      name: { get_param: name }
      service_mode: { get_param: mode }
      service_type: { get_param: type }
      image_name: { get_param: image }
      flavor: { get_param: flavor }
      service_interface_type_list: { "Fn::Split" : [ ",", Ref: service_interface_type_list ] }
      shared_ip_list: { "Fn::Split" : [ ",", Ref: shared_ip_list ] }
      static_routes_list: { "Fn::Split" : [ ",", Ref: static_routes_list ] }
outputs:
  service_template_fq_name:
    description: FQ name of the service template
    value: { get_attr: [ service_template, fq_name ] }
2. Define an environment file to give input to the Heat template.
service_template.env
parameters:
  name: contrail_svc_temp
  mode: transparent
  type: firewall
  image: cirros
  flavor: m1.tiny
  service_interface_type_list: management,left,right,other
  shared_ip_list: True,True,False,False
  static_routes_list: False,True,False,False
3. Create the Heat stack using the following command:
heat stack-create stack1 -f service_template.yaml -e service_template.env
CHAPTER 14
Configuring Multitenancy Support
• Configuring Multitenancy Support on page 333
• Configuring Network QoS Parameters on page 335
Configuring Multitenancy Support
The following sections describe enabling and viewing multitenancy support.
• Multitenancy Permissions on page 333
• API Server on page 334
• API Library Keystone Integration on page 334
• Supporting Utilities on page 334
Multitenancy Permissions
The multitenancy feature of the API server enables multiple tenants to coexist on the
system without interfering with each other. This is achieved by encoding ownership
information and permissions with each resource, allowing fine-grained control over create,
read, update, and delete (CRUD) operations on those resources.
The Contrail api-server enforces resource permissions in a manner similar to Unix files.
Each resource has an owner and group. Permissions associated with owner, group, and
"others" are:
R - reading resource
W - create/update resource
X - link (refer to) object
CRUD permission requirements for resources managed by api-server are as follows:
C - write on parent object
For example, creating a virtual network requires write permission on the project.
R - read on object (parent if a collection)
U - write on object
D - write on parent
ref(link) - execute on object
For example, on a virtual network using network-ipam, network-ipam should have X
permissions for owner, group, or "others".
API Server
If multitenancy is enabled, api-server deploys keystone middleware in its pipeline. The
keystone middleware architecture supports a common authentication protocol in use
between OpenStack projects.
The keystone middleware works in conjunction with api-server to derive the user name
and role for each incoming request. Once obtained, the user name and role are matched
against resource ownership and permissions. If the ownership matches or the permissions
allow access, access is granted.
For example, assume Tenant A has the following attributes:
• owner = Bob
• group = Staff
• permissions = 750
In this example, only Bob can create a virtual network in Tenant A. Other staff members
can view the virtual networks in Tenant A. No others can create or view any virtual
networks in Tenant A.
Clients can obtain an auth_token by posting credentials to the keystone admin API
(/v2.0/tokens). The VncApi client library does this automatically. If an auth_token is
present in an incoming request, api-server validates credentials derived from the token
against object permissions. If an incoming request has an invalid or missing auth_token,
a 401 error is returned.
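For example, a client script using the VncApi library might connect as follows (a minimal
sketch; the constructor arguments reflect common usage and can vary by release):

from vnc_api import vnc_api

# VncApi posts the supplied credentials to keystone, caches the resulting
# auth_token, and fetches a new token if api-server returns a 401 error.
vnc = vnc_api.VncApi(username='admin', password='<password>',
                     tenant_name='admin', api_server_host='<ip address>')
projects = vnc.projects_list()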
Notes:
• Multitenancy is enabled by the flag multi_tenancy in /etc/contrail/api-server.conf.
• If multitenancy is enabled, memcaching is automatically enabled, to improve token
validation response time.
API Library Keystone Integration
VncApi has been updated to check for any 401 error that api-server returns as a result of
a missing or invalid token. This forces VncApi to connect with the keystone middleware
and fetch an auth_token. All subsequent requests to api-server include the auth_token.
Supporting Utilities
• /opt/contrail/utils/chmod.py --- To change permissions and ownership (user or group
membership) of a resource. Requires the resource type (for example, virtual-network)
and the resource FQN (for example,
default-domain:default-project:default-virtual-network).
Invoke python /opt/contrail/utils/chmod.py -h to see usage information.
Example 1 - See current permissions:
[root@host]# python /opt/contrail/utils/chmod.py <ip address>:8082 project
default-domain:default-project
Type = project
Name = default-domain:default-project
API Server = <ip address>:8082
Keystone credentials admin/<password>/admin
Obj uuid = $ABC123
Obj perms = cloud-admin/cloud-admin-group 777
Example 2 - Change permissions and ownership:
[root@host]# python /opt/contrail/utils/chmod.py <ip address>:8082 --owner foo
--group bar --perms 555 project default-domain:default-project
Type = project Name = default-domain:default-project
API Server = <ip address>
Owner = foo
Group = bar
Perms = 555
Keystone credentials admin/<password>/admin
Obj uuid = $ABC123
Obj perms = cloud-admin/cloud-admin-group 777
New perms = foo/bar 555
• /opt/contrail/utils/multi_tenancy.py --- Show whether multitenancy is enabled or
disabled. Also used to turn multitenancy on or off. Requires admin credentials.
Invoke python /opt/contrail/utils/multi_tenancy.py -h to see usage information.
Example 1: View multitenancy status
[root@host]# python /opt/contrail/utils/multi_tenancy.py <ip address>:8082
API Server = <ip address>:8082
Keystone credentials admin/<password>/admin
Multi Tenancy is enabled
Example 2: Turn multitenancy off
[root@host]# python /opt/contrail/utils/multi_tenancy.py 10.84.13.34:8082 --off
API Server = <ip address>:8082
Keystone credentials admin/<password>/admin
Multi Tenancy is disabled
Configuring Network QoS Parameters
• Overview on page 336
• QoS Configuration Examples on page 336
• Limitations on page 337
Copyright © 2016, Juniper Networks, Inc.
335
Contrail Feature Guide
Overview
You can use the OpenStack Nova command-line interface (CLI) to specify a quality of
service (QoS) setting for a virtual machine’s network interface, by setting the quota of a
Nova flavor. Any virtual machine created with that Nova flavor will inherit all of the
specified QoS settings. Additionally, if the virtual machine that was created with the QoS
settings has multiple interfaces in different virtual networks, the same QoS settings will
be applied to all of the network interfaces associated with the virtual machine. The QoS
settings can be specified in unidirectional or bidirectional mode.
The quota driver in Neutron converts QoS parameters into libvirt network settings of the
virtual machine.
The QoS parameters available in the quota driver only cover rate limiting the network
interface. There are no specifications available for policy-based QoS at this time.
QoS Configuration Examples
Although the QoS settings can be specified in a quota by using either Horizon or the CLI,
quota creation using the CLI is more robust and stable; therefore, the CLI is the
recommended method.
Example
The CLI for a Nova flavor has the following format:
nova flavor-key <flavor_name> set quota:vif_<direction>_<param_name>=<value>
where:
<flavor_name> is the name of an existing Nova flavor.
vif_<direction>_<param_name> is the inbound or outbound QoS data name.
QoS vif types include the following:
• vif_inbound_average lets you specify the average rate of inbound (receive) traffic, in
kilobytes/sec.
• vif_outbound_average lets you specify the average rate of outbound (transmit) traffic,
in kilobytes/sec.
• Optional: vif_inbound_peak and vif_outbound_peak specify the maximum rate of inbound
and outbound traffic, respectively, in kilobytes/sec.
• Optional: vif_inbound_burst and vif_outbound_burst specify the amount of kilobytes
that can be received or transmitted, respectively, in a single burst at the peak rate.
Details for various QoS parameters for libvirt can be found at
http://libvirt.org/formatnetwork.html.
The following example shows an inbound average of 800 kilobytes/sec, a peak of 1000
kilobytes/sec, and a burst amount of 30 kilobytes.
nova flavor-key m1.small set quota:vif_inbound_average=800
nova flavor-key m1.small set quota:vif_inbound_peak=1000
nova flavor-key m1.small set quota:vif_inbound_burst=30
336
Copyright © 2016, Juniper Networks, Inc.
Chapter 14: Configuring Multitenancy Support
The following is an example of specified outbound parameters:
nova flavor-key m1.small set quota:vif_outbound_average=800
nova flavor-key m1.small set quota:vif_outbound_peak=1000
nova flavor-key m1.small set quota:vif_outbound_burst=30
After the Nova flavor is configured for QoS, a virtual machine instance can be created,
using either Horizon or CLI. The instance will have network settings corresponding to the
nova flavor-key, as in the following:
<interface type="ethernet">
  <mac address="02:a3:a0:87:7f:61"/>
  <model type="virtio"/>
  <script path=""/>
  <target dev="tapa3a0877f-61"/>
  <bandwidth>
    <inbound average="800" peak="1000" burst="30"/>
    <outbound average="800" peak="1000" burst="30"/>
  </bandwidth>
</interface>
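To verify that the settings were applied, you can inspect the instance's libvirt definition
on the compute node (a sketch; the libvirt instance name is hypothetical and depends
on your deployment):

virsh dumpxml instance-00000001 | grep -A 3 '<bandwidth>'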
Limitations
• The stock libvirt does not support rate limiting of ethernet interface types. Consequently,
settings like those in the example for the guest interface will not result in any tc qdisc
settings for the corresponding tap device in the host. For more details, refer to issue
#1367095 on Launchpad.net, where you can find patches and instructions to make libvirt
work for network rate limiting of virtual machine interfaces.
• The nova flavor-key rxtx_factor takes a float as an input and acts as a scaling factor
for receive (inbound) and transmit (outbound) throughputs. This key is only available
to Neutron extensions (private extensions). The Contrail Neutron plugin doesn’t
implement this private extension. Consequently, setting the nova flavor-key rxtx_factor
will not have any effect on the QoS setting of the network interface(s) of any virtual
machine created with that nova flavor.
• The outbound rate limits of a virtual machine interface are not strictly achieved. The
outbound throughput of a virtual machine network interface is always less than the
average outbound limit specified in the virtual machine's libvirt configuration file. The
same behavior is also seen when using a Linux bridge.
CHAPTER 15
Optimizing Contrail
• Using a Headless vRouter to Improve Redundancy on page 339
• vRouter Command Line Utilities on page 340
• Route Target Filtering on page 356
• Source Network Address Translation (SNAT) on page 358
Using a Headless vRouter to Improve Redundancy
• Overview: vRouter Agent Redundancy on page 339
• Headless vRouter Function on page 339
• Configuring the Headless vRouter on page 340
Overview: vRouter Agent Redundancy
The Contrail vRouter agent downloads routes, configurations, and the multicast tree
from the control node. For redundancy, the vRouter agent connects to two control nodes.
However, in some circumstances, the vRouter agent can lose its connection to both
control nodes; in this case, the information provided by the control nodes can get flushed.
If the vRouter agent loses its connection to the control nodes, the multicast and unicast
routes are flushed immediately; the configuration agent, however, waits a while for a
control node to come up, and only after that time is the configuration information flushed.
Headless vRouter Function
When the headless vRouter feature is enabled, if the agent's connection to the control
nodes is lost, the last known information about routes, configurations, and the multicast
tree is retained and marked as stale entries. In the meantime, the system can remain in
a working state. When a control node comes up again, the agent waits a while to flush
out the stale entries, and the newly-connected control node sends information about
the routes, configurations, and multicast tree to the agent.
If the control node is in an unstable state, coming up and going down again, the agent
retains the stale information until the control node becomes stable again.
Copyright © 2016, Juniper Networks, Inc.
339
Contrail Feature Guide
Configuring the Headless vRouter
By default, the vRouter agent runs in non-headless mode. You can enable the headless
vRouter feature by using the command-line interface or by configuring the agent
configuration file. In headless mode, the vRouter agent retains the last known good
configuration from the control node if all control nodes are lost.
Using CLI to Configure Headless vRouter
Use the following command-line parameter to enable the headless vRouter. The
following argument runs the compute node in headless mode:
--DEFAULT.headless arg
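For example, the argument is passed on the agent command line similar to the following
sketch (the binary path and exact argument form can vary by release):

/usr/bin/contrail-vrouter-agent --DEFAULT.headless true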
Using contrail-vrouter-agent.conf to Configure Headless vRouter
Use the following line in the DEFAULT section of the contrail-vrouter-agent.conf file to
enable the headless vRouter.
headless_mode=true
Possible values include true (enable) and false (disable).
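In context, the setting belongs in the DEFAULT section of the agent configuration file;
for example, a sketch of the relevant fragment of /etc/contrail/contrail-vrouter-agent.conf:

[DEFAULT]
# Retain the last known good configuration if all control nodes are lost
headless_mode=true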
vRouter Command Line Utilities
• Overview on page 340
• vif Command on page 341
• flow Command on page 343
• vrfstats Command on page 345
• rt Command on page 345
• dropstats Command on page 346
• mpls Command on page 350
• mirror Command on page 352
• vxlan Command on page 353
• nh Command on page 354
Overview
This section describes the shell prompt utilities available for examining the state of the
vrouter kernel module in Contrail.
The most useful commands for inspecting the Contrail vrouter module are summarized
in the following table.
vif: Inspect vrouter interfaces associated with the vrouter module.
flow: Display active flows in a system.
vrfstats: Display next hop statistics for a particular VRF.
rt: Display routes in a VRF.
dropstats: Inspect packet drop counters in the vrouter.
mpls: Display the input label map programmed into the vrouter.
mirror: Display the mirror table entries.
vxlan: Display the vxlan table entries.
nh: Display the next hops that the vrouter knows.
--help: Display all command options available for the current command.
The following sections describe each of the vrouter utilities in detail.
vif Command
The vrouter requires vrouter interfaces (vif) to forward traffic. Use the vif command to
see the interfaces that are known by the vrouter.
NOTE: Having interfaces only in the OS (Linux) is not sufficient for forwarding.
The relevant interfaces must be added to vrouter. Typically, the setup of
interfaces is handled by components such as nova-compute or the vrouter agent.
Example: vif --list
# vif --list
vif0/0 OS: pkt0
Type:Agent HWaddr:00:00:5e:00:01:00 IPaddr:0
Vrf:65535 Flags:L3 MTU:1514 Ref:2
RX packets:6591 bytes:648577 errors:0
TX packets:12150 bytes:1974451 errors:0
vif0/1 OS: vhost0
Type:Host HWaddr:00:25:90:c3:08:68 IPaddr:0
Vrf:0 Flags:L3 MTU:1514 Ref:3
RX packets:3446598 bytes:4478599344 errors:0
TX packets:851770 bytes:1337017154 errors:0
vif0/2 OS: p1p0p0 (Speed 1000, Duplex 1)
Type:Physical HWaddr:00:25:90:c3:08:68 IPaddr:0
Vrf:0 Flags:L3 MTU:1514 Ref:22
RX packets:1643238 bytes:1391655366 errors:2812
TX packets:3523278 bytes:6806058059 errors:0
vif0/18 OS: tap3214fc7e-88
Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
Vrf:13 Flags:PL3L2 MTU:9160 Ref:6
RX packets:60 bytes:4873 errors:0
TX packets:21 bytes:2158 errors:0
Table 30: vif Fields

vif0/X
The vrouter-assigned name, where 0 is the router ID and X is the index allocated to the
interface within the vrouter.

OS: pkt0
The name of the actual OS (Linux) visible interface; pkt0 in this case. For physical
interfaces, the speed and duplex settings are also displayed.

Type:xxxxx (for example, Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0)
The type of interface and its IP address, as defined by vrouter. The values can be different
from what is seen in the OS. Types defined by vrouter include:
• Virtual – Interface of a virtual machine (VM).
• Physical – Physical interface (NIC) in the system.
• Host – An interface toward the host.
• Agent – An interface used to trap packets to the vrouter agent when decisions need to
be made for the forwarding path.

Vrf:xxxxx (for example, Vrf:65535 Flags:L3 MTU:1514 Ref:2)
The identifier of the vrf to which the interface is assigned, the flags set on the interface,
the MTU as understood by vrouter, and a reference count of how many individual entities
actually hold a reference to the interface (mainly of debugging value).
Flag options identify what is enabled for the interface:
• P - Policy
• L3 - Layer 3 forwarding
• L2 - Layer 2 bridging
• X - Cross connect mode; set only on physical and host interfaces, indicating that packets
are moved between physical and host directly, with minimal intervention by vrouter.
Typically set when the agent is not alive or not in good shape.
• Mt - Mirroring transmit direction
• Mr - Mirroring receive direction
• Tc - Checksum offload on the transmit side. Valid only on the physical interface.

Rx (for example, RX packets:60 bytes:4873 errors:0)
Packets received by vrouter from this interface.

Tx (for example, TX packets:21 bytes:2158 errors:0)
Packets transmitted out by vrouter on this interface.
vif Options
Use vif --help to display all options available for the vif command. The following is a
brief description of each option.
NOTE: It is not recommended to use the following options unless you are
very experienced with the system utilities.
# vif --help
Usage: vif [--create <intf_name> --mac <mac>]
[--add <intf_name> --mac <mac> --vrf <vrf>
--type [vhost|agent|physical|virtual][--policy, --mode <mode:x>]]
[--delete <intf_id>]
[--get <intf_id>][--kernel]
[--set <intf_id> --vlan <vlan_id> --vrf <vrf_id>]
[--list]
[--help]
Option Descriptions

--create
Creates a 'Host' interface with name <intf_name> and MAC <mac> on the host kernel.
The 'vhost0' interface that you see on Linux is a typical example of an invocation of this
command.

--add
Adds existing interfaces in the host OS to vrouter, with type and flag options.

--delete
Deletes the interface from vrouter. The <intf_id> is the vrouter interface ID as given by
vif0/X, where X is the interface index.

--get
Displays a specific interface. The <intf_id> is the vrouter interface ID, unless the command
is appended with the --kernel option, in which case the ID can be the kernel ID.

--set
Sets working parameters of an interface. The only ones supported are the VLAN ID and the
vrf. The VLAN ID as understood by vrouter differs from what one typically expects, and is
currently relevant only for interfaces of service instances.

--list
Displays all of the interfaces of which the vrouter is aware.

--help
Displays all options available for the current command.
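As an illustration only, the following sketch adds an existing kernel tap interface to
vrouter and then confirms the result; the interface name, MAC, and vrf values are taken
from the earlier vif --list example and stand in for your own:

# vif --add tap3214fc7e-88 --mac 00:00:5e:00:01:00 --vrf 13 --type virtual
# vif --list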
flow Command
Use the flow command to display all active flows in a system.
Example: flow -l
Use -l to list everything in the flow table. The -l is the only relevant debugging option.
# flow –l
Flow table
Index
Source:Port
Destination:Port Proto(V)
------------------------------------------------------------------------------------------------263484
Copyright © 2016, Juniper Networks, Inc.
1.1.1.252:1203
1.1.1.253:0
1 (3)
343
Contrail Feature Guide
(Action:F, S(nh):91, Statistics:22/1848)
379480
1.1.1.253:1203
1.1.1.252:0
1 (3)
(Action:F, S(nh):75, Statistics:22/1848)
Each record in the flow table listing displays the index of the record, the source ip: source
port, the destination ip: destination port, the inet protocol, and the source vrf to which
the flow belongs.
Each new flow has to be approved by the vrouter agent. The agent does this by setting
actions for each flow. There are three main actions associated with a flow table entry:
Forward (‘F’), Drop (‘D’), and Nat (‘N’).
For NAT, there are additional flags indicating the type of NAT to which the flow is subject,
including: SNAT (S), DNAT (D), source port translation (Ps), and destination port
translation (Pd).
S(nh) indicates the source nexthop index used for the RPF check to validate that the
traffic is from a known source. If the packet must go to an ECMP destination, E:X is also
displayed, where ‘X’ indicates the destination to be used through the index within the
ECMP next hop.
The Statistics field indicates the Packets/Bytes that hit this flow entry.
There is a Mirror Index field if the traffic is mirrored, listing the indices into the mirror table
(which can be dumped by using mirror --dump).
If there is an explicit association between the forward and the reverse flows, as is the
case with NAT, you will see a double arrow in each of the records with either side of the
arrow displaying the flow index for that direction.
Example: flow -r
Use -r to view the flow setup rates.

# flow -r
New = 2, Flow setup rate = 3 flows/sec, Flow rate = 3 flows/sec, for last 548 ms
New = 2, Flow setup rate = 3 flows/sec, Flow rate = 3 flows/sec, for last 543 ms
New = -2, Flow setup rate = -3 flows/sec, Flow rate = -3 flows/sec, for last 541 ms
New = 2, Flow setup rate = 3 flows/sec, Flow rate = 3 flows/sec, for last 544 ms
New = -2, Flow setup rate = -3 flows/sec, Flow rate = -3 flows/sec, for last 542 ms

Example: flow --help
Use --help to display all options available for the flow command.
# flow --help
Usage: flow [-f flow_index][-d flow_index][-i flow_index]
            [--mirror=mirror table index]
            [-l]
-f <flow_index>    Set forward action for flow at flow_index <flow_index>
-d <flow_index>    Set drop action for flow at flow_index <flow_index>
-i <flow_index>    Invalidate flow at flow_index <flow_index>
--mirror           mirror index to mirror to
-l                 List all flows
-r                 Start dumping flow setup rate
--help             Print this help
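For example, to manually force a drop action on a flow seen in an earlier listing (the
index 263484 is from the flow -l example above and is illustrative), then re-list to
confirm that the Action field now shows D:

# flow -d 263484
# flow -l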
vrfstats Command
Use vrfstats to display statistics per next hop for a vrf. It is typically used to determine if
packets are hitting the expected next hop.
Example: vrfstats --dump
The --dump option displays the statistics for all vrfs that have seen traffic. In the following
example, there was traffic only in Vrf 0 (the public vrf). Receives shows the number of
packets that came in from the fabric destined to this location. Encaps shows the number of
packets destined to the fabric.
If there is VM traffic going out on the fabric, the respective tunnel counters will increment.
# vrfstats --dump
Vrf: 0
Discards 414, Resolves 3, Receives 165334
Ecmp Composites 0, L3 Mcast Composites 0, L2 Mcast Composites 0, Fabric Composites 0, Multi Proto Composites 0
Udp Tunnels 0, Udp Mpls Tunnels 0, Gre Mpls Tunnels 0
L2 Encaps 0, Encaps 130955
Example: vrfstats --get 0
Use --get 0 to retrieve statistics for a particular vrf.

# vrfstats --get 0
Vrf: 0
Discards 418, Resolves 3, Receives 166929
Ecmp Composites 0, L3 Mcast Composites 0, L2 Mcast Composites 0, Fabric Composites 0, Multi Proto Composites 0
Udp Tunnels 0, Udp Mpls Tunnels 0, Gre Mpls Tunnels 0
L2 Encaps 0, Encaps 132179

Example: vrfstats --help

Usage: vrfstats --get <vrf>
       vrfstats --dump
       vrfstats --help
--get <vrf>    Displays packet statistics for the vrf <vrf>
--dump         Displays packet statistics for all vrfs
--help         Displays this help message
rt Command
Use the rt command to display all routes in a vrf.
Example: rt --dump
The following example displays inet family routes for vrf 0.

# rt --dump 0
Kernel IP routing table 0/0/unicast
Destination    PPL    Flags    Label    Nexthop
0.0.0.0/8      0      -                 5
1.0.0.0/8      0      -                 5
2.0.0.0/8      0      -                 5
3.0.0.0/8      0      -                 5
4.0.0.0/8      0      -                 5
5.0.0.0/8      0      -                 5
In this example output, the first line displays the routing table that is being dumped. In
0/0/unicast, the first 0 is the router ID, the next 0 is the vrf ID, and unicast identifies
the unicast table. The vrouter maintains separate tables for unicast and multicast routes.
By default, if the --table option is not specified, only the unicast table is dumped.
Each record in the table output specifies the destination prefix length, the parent route
prefix length from which this route has been expanded, the flags for the route, the MPLS
label if the destination is a VM in another location, and the next hop ID. To understand
the second field, PPL, it helps to keep in mind that the unicast routing table is internally
implemented as an mtrie.
The Flags field can have two values: L indicates that the label field is valid, and H indicates
that vrouter should proxy ARP for this IP.
The Nexthop field indicates the next hop ID to which the route points.
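Because every route resolves to a next hop ID, you can chain rt and nh to see where
traffic for a prefix actually goes. A sketch using the next hop ID 5 from the example
output above:

# rt --dump 0
# nh --get 5

The nh --get output shows the type, outgoing interface, and rewrite data of the next
hop that these routes share.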
Example: rt --dump --table mcst
To dump the multicast table, use the --table option with mcst as the argument.

# rt --dump 0 --table mcst
Kernel IP routing table 0/0/multicast
(Src,Group)                  Nexthop
0.0.0.0,255.255.255.255
dropstats Command
Use the dropstats command to see packet drop counters in vrouter.
Example: dropstats

# dropstats
GARP                          0
ARP notme                     12904
Invalid ARPs                  0
Invalid IF                    0
Trap No IF                    0
IF TX Discard                 0
IF Drop                       49
IF RX Discard                 0
Flow Unusable                 0
Flow No Memory                0
Flow Table Full               0
Flow NAT no rflow             0
Flow Action Drop              0
Flow Action Invalid           0
Flow Invalid Protocol         0
Flow Queue Limit Exceeded     0
Discards                      34
TTL Exceeded                  0
Mcast Clone Fail              0
Cloned Original               0
Invalid NH                    2
Invalid Label                 0
Invalid Protocol              0
Rewrite Fail                  0
Invalid Mcast Source          0
Push Fails                    0
Pull Fails                    0
Duplicated                    0
Head Alloc Fails              0
Head Space Reserve Fails      0
PCOW fails                    0
Invalid Packet                0
Misc                          0
Nowhere to go                 0
Checksum errors               0
No Fmd                        0
Invalid VNID                  0
Fragment errors               0
Invalid Source                0
dropstats ARP Block
GARP packets from VMs are dropped by vrouter, an expected behavior. In the example
output, the first counter GARP indicates how many packets were dropped.
ARP requests that are not handled by vrouter are dropped, for example, requests for a
system that is not a host. These drops are counted by ARP notme counters.
The Invalid ARPs counter is incremented when the Ethernet protocol is ARP, but the ARP
operation was neither a request nor a response.
dropstats Interface Block
Invalid IF counters are incremented normally during transient conditions, and should not
be a concern.
Trap No IF counters are incremented when vrouter is not able to find the interface to trap
the packets to vrouter agent, and should not happen in a working system.
IF TX Discard and IF RX Discard counters are incremented when vrouter is not in a state
to transmit and receive packets, and typically happens when vrouter goes through a reset
state or when the module is unloaded.
IF Drop counters indicate packets that are dropped in the interface layer. The increase
can typically happen when interface settings are wrong.
dropstats Flow Block
When packets go through flow processing, the first packet in a flow is cached and the
vrouter agent is notified so it can take actions on the packet according to the policies
configured. If more packets arrive after the first packet but before the agent makes a
decision on the first packet, then those new packets are dropped. The dropped packets
are tracked by the Flow unusable counter.
The Flow No Memory counter increments when the flow block doesn't have enough
memory to perform internal operations.
The Flow Table Full counter increments when the vrouter cannot install a new flow due
to a lack of available slots. A particular flow can go only into certain slots, and if all of
those slots are occupied, packets are dropped; as a result, the counter can increment
even when the flow table as a whole is not full.
The Flow NAT no rflow counter tracks packets that are dropped when there is no reverse
flow associated with a forward flow that had action set as NAT. For NAT, the vrouter
needs both forward and reverse flows to be set properly. If they are not set, packets are
dropped.
The Flow Action Drop counter tracks packets that are dropped due to policies that prohibit
a flow.
The Flow Action Invalid counter usually does not increment in the normal course of time,
and can be ignored.
The Flow Invalid Protocol usually does not increment in the normal course of time, and
can be ignored.
The Flow Queue Limit Exceeded usually does not increment in the normal course of time,
and can be ignored.
dropstats Miscellaneous Operational Block
The Discard counter tracks packets that hit a discard next hop. For various reasons
interpreted by the agent and during some transient conditions, a route can point to a
discard next hop. When packets hit that route, they are dropped.
The TTL Exceeded counter increments when the MPLS time-to-live goes to zero.
The Mcast Clone Fail counter increments when the vrouter is not able to replicate a
packet for flooding.
The Cloned Original is an internal tracking counter. It is harmless and can be ignored.
The Invalid NH counter tracks the number of packets that hit a next hop that was not in
a state to be used (usually in transient conditions) or a next hop that was not expected,
or no next hops when there was a next hop expected. Such increments happen rarely,
and should not continuously increment.
The Invalid Label counter tracks packets with an MPLS label unusable by vrouter because
the value is not in the expected range.
The Invalid Protocol typically increments when the IP header is corrupt.
The Rewrite Fail counter tracks the number of times vrouter was not able to write next
hop rewrite data to the packet.
The Invalid Mcast Source tracks the multicast packets that came from an unknown or
unexpected source and thus were dropped.
The Invalid Source counter tracks the number of packets that came from an invalid or
unexpected source and thus were dropped.
The remaining counters are of value only to developers.
mpls Command
The mpls utility command displays the input label map that has been programmed in
the vrouter.
Example: mpls --dump
The --dump option dumps the complete label map. The output is divided into two
columns. The first field is the label and the second is the next hop corresponding to the
label. When an MPLS packet with the specified label arrives in the vrouter, it uses the
next hop corresponding to the label to forward the packet.

# mpls --dump
MPLS Input Label Map
Label    NextHop
----------------------
16       9
17       11
You can inspect the operation on nh 9 as follows:
# nh --get 9
Id:009 Type:Encap
Fmly: AF_INET Flags:Valid, Policy, Rid:0 Ref_cnt:4
EncapFmly:0806 Oif:3 Len:14 Data:02 d0 60 aa 50 57 00 25 90 c3 08 69 08 00
The nh output shows that the next hop directs the packet to go out on the interface with
index 3 (Oif:3) with the given rewrite data.
To inspect the interface at index 3, use the following:
# vif --get 3
vif0/3 OS: tapd060aa50-57
Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
Vrf:1 Flags:PL3L2 MTU:9160 Ref:6
RX packets:1056 bytes:103471 errors:0
TX packets:1041 bytes:102372 errors:0
The --get 3 output shows that index 3 corresponds to a tap interface that goes to a VM.
You can also dump individual entries in the map using the --get option, as follows:

# mpls --get 16
MPLS Input Label Map
Label    NextHop
----------------------
16       9

Example: mpls --help
# mpls --help
Usage: mpls --dump
       mpls --get <label>
       mpls --help
--dump    Dumps the mpls incoming label map
--get     Dumps the entry corresponding to label <label> in the label map
--help    Prints this help message
mirror Command
Use the mirror command to dump the mirror table entries.
Example: Inspect Mirroring
The following example inspects a mirror configuration where traffic is mirrored from
network vn1 (1.1.1.0/24) to network vn2 (2.2.2.0/24). A ping is run from 1.1.1.253 to 2.2.2.253,
where both IPs are valid VM IPs, then the flow table is listed:
# flow -l
Flow table
Index    Source:Port      Destination:Port   Proto(V)
-------------------------------------------------------------------------
135024   2.2.2.253:1208   1.1.1.253:0        1 (1)
         (Action:F, S(nh):17, Statistics:208/17472 Mirror Index : 0)
387324   1.1.1.253:1208   2.2.2.253:0        1 (1)
         (Action:F, S(nh):8, Statistics:208/17472 Mirror Index : 0)
In the example output, Mirror Index:0 is listed; it is the index into the mirror table. The
mirror table can be dumped with the --dump option, as follows:

# mirror --dump
Mirror Table
Index    NextHop    Flags    References
------------------------------------------------
0        18                  3
The mirror table entries point to next hops. In the example, the index 0 points to next hop
18. The References indicate the number of flow entries that point to this entry.
A next hop get operation on ID 18 is performed as follows:
# nh --get 18
Id:018 Type:Tunnel Fmly: AF_INET Flags:Valid, Udp, Rid:0 Ref_cnt:2
Oif:0 Len:14 Flags Valid, Udp, Data:00 00 00 00 00 00 00 25 90 c3 08 69 08 00
Vrf:-1 Sip:192.168.1.10 Dip:250.250.2.253
Sport:58818 Dport:8099
The nh --get output shows that mirrored packets go to a system with IP 250.250.2.253.
The packets are tunneled as a UDP datagram and sent to the destination. Vrf:-1 indicates
that a lookup has to be done in the source Vrf for the destination.
You can also get an individual mirror table entry using the --get option, as follows:

# mirror --get 10
Mirror Table
Index    NextHop    Flags    References
-----------------------------------------------
10       1                   1

Example: mirror --help
# mirror --help
Usage: mirror --dump
       mirror --get <index>
       mirror --help
--dump    Dumps the mirror table
--get     Dumps the mirror entry corresponding to index <index>
--help    Prints this help message
vxlan Command
The vxlan command can be used to dump the vxlan table. The vxlan table maps a network
ID to a next hop, similar to an MPLS table.
If a packet comes with a vxlan header and if the VNID is one of those in the table, the
vrouter will use the next hop identified to forward the packet.
Example: vxlan --dump

# vxlan --dump
VXLAN Table
VNID    NextHop
---------------------
4       16
5       16

Example: vxlan --get
You can use the --get option to dump a specific entry, as follows:
# vxlan --get 4
VXLAN Table
VNID    NextHop
---------------------
4       16

Example: vxlan --help
# vxlan --help
Usage: vxlan --dump
vxlan --get <vnid>
vxlan --help
--dump Dumps the vxlan table
--get Dumps the entry corresponding to <vnid>
--help Prints this help message
nh Command
The nh command enables you to inspect the next hops that are known by the vrouter.
Next hops tell the vrouter the next location to send a packet in the path to its final
destination. The processing of the packet differs based on the type of the next hop. The
next hop types are described in the following table.
Next Hop Type Descriptions

Receive
Indicates that the packet is destined for itself and the vrouter should perform Layer 4
protocol processing. As an example, all packets destined to the host IP will hit the receive
next hop in the default VRF. Similarly, all traffic destined to the VMs hosted by the server
and tunneled inside a GRE will hit the receive next hop in the default VRF first, because
the outer packet that carries the traffic to the VM is that of the server.

Encap (Interface)
Used only to determine the outgoing interface and the Layer 2 information. As an example,
when two VMs on the same server communicate with each other, the routes for each of
them point to an encap next hop, because the only information needed is the Layer 2
information to send the packet to the tap interface of the destination VM. A packet destined
to a VM hosted on one server from a VM on a different server will also hit an encap next
hop, after tunnel processing.

Tunnel
Encapsulates VM traffic in a tunnel and sends it to the server that hosts the destination
VM. There are different types of tunnel next hops, based on the type of tunnels used.
Vrouter supports two main tunnel types for Layer 3 traffic: MPLSoGRE and MPLSoUDP. For
Layer 2 traffic, a VXLAN tunnel is used. A typical tunnel next hop indicates the kind of
tunnel, the rewrite information, the outgoing interface, and the source and destination
server IPs.

Discard
A catch-all next hop. If there is no route for a destination, the packet hits the discard
next hop, which drops the packet.

Resolve
Used by the agent to lazily install Layer 2 rewrite information.

Composite
Groups a set of next hops, called component next hops or sub next hops. Typically used
when multi-destination distribution is needed, for example for multicast, ECMP, and so on.

Vxlan
A VXLAN tunnel is used for Layer 2 traffic. A typical tunnel next hop indicates the kind
of tunnel, the rewrite information, the outgoing interface, and the source and destination
server IPs.
Example: nh --list
Id:000 Type:Drop
Fmly: AF_INET Flags:Valid, Rid:0 Ref_cnt:1781
Id:001 Type:Resolve Fmly: AF_INET Flags:Valid, Rid:0 Ref_cnt:244
Id:004 Type:Receive Fmly: AF_INET Flags:Valid, Policy, Rid:0
Ref_cnt:2 Oif:1
Id:007 Type:Encap
Fmly: AF_INET Flags:Valid, Multicast, Rid:0 Ref_cnt:3
EncapFmly:0806 Oif:3 Len:14 Data:ff ff ff ff ff ff 00 25 90 c4 82 2c 08 00
Id:010 Type:Encap
Fmly:AF_BRIDGE Flags:Valid, L2, Rid:0 Ref_cnt:3
EncapFmly:0000 Oif:3 Len:0 Data:
Id:012 Type:Vxlan Vrf Fmly: AF_INET Flags:Valid, Rid:0 Ref_cnt:2
Vrf:1
Id:013 Type:Composite Fmly: AF_INET Flags:Valid, Fabric, Rid:0 Ref_cnt:3
Sub NH(label): 19(1027)
Id:014 Type:Composite Fmly: AF_INET Flags:Valid, Multicast, L3, Rid:0 Ref_cnt:3
Sub NH(label): 13(0) 7(0)
Id:015 Type:Composite Fmly:AF_BRIDGE Flags:Valid, Multicast, L2, Rid:0 Ref_cnt:3
Sub NH(label): 13(0) 10(0)
Id:016 Type:Tunnel Fmly: AF_INET Flags:Valid, MPLSoGRE, Rid:0 Ref_cnt:1
Oif:2 Len:14 Flags Valid, MPLSoGRE, Data:00 25 90 aa 09 a6 00 25 90 c4 82 2c 08 00
Vrf:0 Sip:10.204.216.72 Dip:10.204.216.21
Id:019 Type:Tunnel Fmly: AF_INET Flags:Valid, MPLSoUDP, Rid:0 Ref_cnt:7
Oif:2 Len:14 Flags Valid, MPLSoUDP, Data:00 25 90 aa 09 a6 00 25 90 c4 82 2c 08 00
Vrf:0 Sip:10.204.216.72 Dip:10.204.216.21
Id:020 Type:Composite Fmly:AF_UNSPEC Flags:Valid, Multi Proto, Rid:0 Ref_cnt:2
Sub NH(label): 14(0) 15(0)
Example: nh --get
Use the --get option to display information for a single next hop.
# nh --get 9
Id:009 Type:Encap
Fmly:AF_BRIDGE Flags:Valid, L2, Rid:0 Ref_cnt:4
EncapFmly:0000 Oif:3 Len:0 Data:
Example: nh --help
# nh --help
Usage: nh --list
nh --get <nh_id>
nh --help
--list Lists All Nexthops
--get <nh_id> Displays nexthop corresponding to <nh_id>
--help Displays this help message
Route Target Filtering
• Introduction on page 356
• Debugging and Troubleshooting Route Target Filtering on page 357
• RTF Limitations in Contrail 1.10 on page 358
Introduction
BGP route target filtering (RTF) is a method for limiting the distribution of VPN routes to
only those systems in the network for which the routes are necessary. If RTF is not active,
the Contrail control node advertises all VPN routes to all of its VPN peers, which are either
other control nodes or gateway routers such as an MX Series router. On the receiving
side, the control node stores all VPN routes it receives from peers in the VPN table (for
example, bgp.l3vpn.0). Any routes that do not include a route target extended community
that is referenced by the local vrf-import policies are discarded by Junos.
The control node must send all route updates to its peers, even for unnecessary routes
that are discarded. Continuous route updates are both CPU- and memory-intensive. The
only routes that are necessary to advertise to gateway routers are those that belong to
the virtual networks that are configured for public access. It is not necessary to advertise
VM routes belonging to other virtual networks to gateway routers.
If a datacenter has more than two control nodes, the vrouter-agent only subscribes to
two of the control nodes, indicated by the discovery service. When a VM is initially launched
in a virtual network, it sends an XMPP subscribe request for the virtual network VRF and
publishes the VM route to the connected control node. It is not necessary to advertise
routes belonging to this type of VRF to control nodes that don’t have the vrouter-agent
subscribed in that VRF.
RTF is used to optimize route distribution among control nodes and to the gateway
routers, avoiding unwanted route updates. If a BGP peer has not advertised or been
configured with the RTF address family, all routes in the VPN table are advertised to
that peer. The RTF implementation in the control node does not support advertising
and receiving of default route targets.
Constrained route distribution using route target reachability information is defined in
RFC 4684, “Constrained Route Distribution for Border Gateway Protocol/MultiProtocol
Label Switching (BGP/MPLS) Internet Protocol (IP) Virtual Private Networks (VPNs)“.
Debugging and Troubleshooting Route Target Filtering
Use the tips in this section to troubleshoot issues with RTF. Various http introspect
pages reveal details about BGP neighbors for RTF.
When you access an introspect page, only the first panel of detail columns appears; use
a scroll bar or the arrow keys to reveal more columns to the right.
• Use the following http introspect URL to display the details of each peer:
http://(your_node_name):8083/Snh_BgpNeighborReq
For BGP peers, verify the configured and negotiated capability and the BGP table
registration.
For XMPP peers, look at the routing_instances column to get details about the VRF to
which the displayed vrouter-agent has subscribed and to see the import rtargets of the
VRFs.
• Use the following http introspect URL to dump the bgp.rtarget.0 table to display the
RTargetRoutes:
http://(your_node_name):8083/Snh_ShowRouteReq?x=bgp.rtarget.0
• Use the following http introspect URL to dump the details for each of the route targets
configured on the control node:
http://(your_node_name):8083/Snh_ShowRtGroupReq?
For any given route target, this introspect displays the BGP table that imports and
exports the route, the BGP peers that have shown interest in this route, and all
dependent routes (when this route target has the extended community BGP attribute).
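Because the introspect pages are served over plain HTTP, they can also be fetched from
a shell for scripting or offline inspection; a sketch, where control-node-1 is a placeholder
for your control node name:

curl -s 'http://control-node-1:8083/Snh_BgpNeighborReq'
curl -s 'http://control-node-1:8083/Snh_ShowRouteReq?x=bgp.rtarget.0'
curl -s 'http://control-node-1:8083/Snh_ShowRtGroupReq?'

The responses are XML documents containing the same data the browser view renders.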
RTF Limitations in Contrail 1.10
The following are RTF limitations in Contrail 1.10.
• The control node does not support advertising a default route target, which is an rtarget
route with target:0:0 or 0/0 as the prefix. This type of rtarget route enables a BGP peer
to receive all VPN routes without rtarget filtering.
• The control node does not support receiving a default route target. If rtarget routes
with a default rtarget prefix are received, they are silently ignored.
• A keep all configuration, typical for BGP peering with a control node on an MX Series
router, has no impact, because all VPN routes with an extended community route target
for which the MX Series router has advertised the rtarget route are sent to it anyway. An
example of this typical configuration is the following:
set protocols bgp group contrail-control-nodes type internal
set protocols bgp group contrail-control-nodes local-address 10.204.216.253
set protocols bgp group contrail-control-nodes keep all
set protocols bgp group contrail-control-nodes family inet-vpn unicast
set protocols bgp group contrail-control-nodes family route-target
set protocols bgp group contrail-control-nodes neighbor 10.204.216.16
Source Network Address Translation (SNAT)
• Overview on page 358
• Neutron APIs for Routers on page 359
• Network Namespace on page 360
• Using Web UI to Configure Routers with SNAT on page 360
Overview
Source Network Address Translation (source-nat or SNAT) allows traffic from a private
network to go out to the internet. Virtual machines launched on a private network can
get to the internet by going through a gateway capable of performing SNAT. The gateway
has one arm on the public network and as part of SNAT, it replaces the source IP of the
originating packet with its own public side IP. As part of SNAT, the source port is also
updated so that multiple VMs can reach the public network through a single gateway
public IP.
The following diagram shows a virtual network with the private subnet of 10.1.1.0/24. The
default route for the virtual network points to the SNAT gateway. The gateway replaces
the source-ip from 10.1.1.0/24 and uses its public address 172.21.1.1 for outgoing packets.
To maintain unique NAT sessions, the source port of the traffic also needs to be replaced.
Figure 96: Virtual Network With a Private Subnet
Neutron APIs for Routers
OpenStack supports SNAT gateway implementation through its Neutron APIs for routers.
The SNAT flag can be enabled or disabled on the external gateway of the router. The
default is True (enabled).
The OpenContrail plugin supports the Neutron APIs for routers and creates the relevant
service-template and service-instance objects in the API server. The service scheduler
in OpenContrail instantiates the gateway on a randomly-selected virtual router.
OpenContrail uses network namespace to support this feature.
Example Configuration: SNAT for Contrail
The SNAT feature is enabled on OpenContrail through Neutron API calls.
The following configuration example shows how to create a test network and a public
network, allowing the test network to reach the public domain through the SNAT gateway.
1. Create the public network and set the router external flag.
neutron net-create public
neutron subnet-create public 172.21.1.0/24
neutron net-update public -- --router:external=True
2. Create the test network.
neutron net-create test
neutron subnet-create --name test-subnet test 10.1.1.0/24
3. Create the router with one interface in test.
neutron router-create r1
neutron router-interface-add r1 test-subnet
4. Set the external gateway for the router.
neutron router-gateway-set r1 public
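To confirm the gateway was set, you can inspect the router with the standard Neutron
CLI; exact field names can vary by release, but the external gateway information should
reference the public network with SNAT enabled:

neutron router-show r1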
Network Namespace
Setting the external gateway is the trigger for OpenContrail to set up the Linux network
namespace for SNAT.
The network namespace can be cleared by issuing the following Neutron command:
neutron router-gateway-clear r1
Using Web UI to Configure Routers with SNAT
You can use the Contrail user interface to configure routers for SNAT and to check the
SNAT status of routers.
To enable SNAT for a router, go to Configure > Networking > Routers. In the list of routers,
select the router for which SNAT should be enabled. Click the Edit cog to reveal the Edit
Routers window. Click the check box for SNAT to enable SNAT on the router.
The following shows a router for which SNAT has been Enabled.
Figure 97: Edit Router Window to Enable SNAT
When a router has SNAT enabled, the configuration can be seen by selecting Configure
> Networking > Routers. In the list of routers, click to open the router of interest. In the
list of features for that router, the status of SNAT is listed. The following shows a router
that has been opened in the list; the status shows that SNAT is Enabled.
Figure 98: Router Status for SNAT
You can view the real time status of a router with SNAT by viewing the instance console,
as in the following.
Figure 99: Instance Details Window
PART 4
Monitoring and Troubleshooting Contrail
• Configuring Traffic Mirroring to Monitor Network Traffic on page 365
• Using Contrail Analytics to Monitor and Troubleshoot the Network on page 379
• Common Support Answers on page 467
CHAPTER 16
Configuring Traffic Mirroring to Monitor
Network Traffic
• Configuring Traffic Analyzers and Packet Capture for Mirroring on page 365
• Configuring Interface Monitoring and Mirroring on page 374
• Analyzer Service Virtual Machine (analyzer-vm-console.qcow2) on page 375
Configuring Traffic Analyzers and Packet Capture for Mirroring
Contrail provides traffic mirroring so you can mirror specified traffic to a traffic analyzer
where you can perform deep traffic inspection. Traffic mirroring enables you to designate
certain traffic flows to be mirrored to a traffic analyzer, where you can view traffic flows
in great detail.
Use Monitor > Debug > Packet Capture to configure packets to be captured and “mirrored”
to a virtual machine configured as a traffic analyzer. The packet activity can then be
inspected for monitoring and troubleshooting purposes. This section demonstrates how
to set up packet capture to mirror traffic packets to an analyzer.
• Traffic Analyzer Images on page 365
• Configuring Traffic Analyzers on page 366
• Setting Up Traffic Mirroring Using Monitor > Debug > Packet Capture on page 366
• Setting Up Traffic Mirroring Using Configure > Networking > Services on page 369
Traffic Analyzer Images
Before using the Contrail interface to configure traffic analyzers and packet capture for
mirroring, make sure that the following analyzer images are available in the VM image
list for your system. The traffic analyzer images are enhanced for viewing details of
captured packets in Wireshark. When creating a policy for the traffic analyzer, the traffic
analyzer instance should always have the Mirror to field selected in the policy; do not
select the Apply Service field for a traffic analyzer.
• analyzer-vm-console-qcow2—Standard traffic analyzer; should be named analyzer in
the image list. This type of traffic analyzer is always configured with a single interface,
and the interface should be a Left interface.
• analyzer-vm-console-two-if qcow2—This type of traffic analyzer has two interfaces,
Left and Management. This traffic analyzer can have any name except the name
analyzer, which is reserved for the single interface analyzer.
Configuring Traffic Analyzers
In Contrail Controller, you use a two-part configuration to mirror captured packet traffic
to a traffic analyzer, where the traffic details can be inspected. The configuration has the
following steps:
1. Configure analyzer(s) on the host.
2. Set up rules for packet capture.
Additionally, there are two ways to configure the packet capture for the analyzers from
within the Contrail interface:
• Configure from Monitor > Debug > Packet Capture
• Configure from Configure > Networking > Services
Setting Up Traffic Mirroring Using Monitor > Debug > Packet Capture
The following are the steps needed to set up packet capture in order to “mirror” the traffic
to an analyzer VM for the purpose of reviewing various aspects of packet traffic moving
through the system.
1. Select Monitor > Debug > Packet Capture. The Packet Capture screen appears; see
Figure 100 on page 366.
Figure 100: Packet Capture
2. Click Create to add an analyzer; see Figure 101 on page 366.
Figure 101: Create Analyzer
3. In the Analyzer Name field, enter a name for the analyzer and in the Virtual Network
field, select Automatic or select a specific virtual network from the drop-down list of
available networks; click Save when finished.
4. To create rules for the analyzer, in the lower portion of the Create Analyzer screen,
click the + button to add a rule.
The Analyzer Rules fields appear; see Figure 102 on page 367.
Figure 102: Analyzer Rules
5. Select the rules to apply to determine which packets should be “mirrored”—sent to
the analyzer for monitoring.
See Table 31 on page 367 for guidelines for completing the rule fields.
Table 31: Analyzer Rule Fields

IP Protocol
Select from a list to define from which protocol packets are to be captured: ANY, TCP,
UDP, or ICMP.

Source Network
Select from a list the source network from which packets are to be captured: any, local,
or a specific domain:network.

Source Ports
If you want to capture only those packets that originate from a specific port number,
enter the port number.

Direction
Select the direction of flow for the packets to be captured: Bidirectional or Unidirectional.

Destination Network
Select from a list the destination network for the packets to be captured: any, local, or
a specific domain:network.

Destination Ports
If you want to capture only those packets that are destined to a specific port number,
enter the port number.

Cancel, Save
When finished, click Save to commit your selections, or click Cancel to clear the entries
and start over.
6. To associate virtual networks with the analyzer, click the Associate Networks field in
the center portion of the screen. Select from a drop-down list of available networks
the networks to associate with this analyzer; see Figure 103 on page 368.
Figure 103: Create Analyzer Associate Networks
NOTE: If there is already a network policy attached to the virtual network
selected, any conflicting rules configured for the analyzer will not take
effect.
7. View the analyzer activity from Monitor > Debug > Packet Capture. For the selected
analyzer, click in the Action column and select View Analyzer; see Figure 104 on page 369.
Figure 104: Launch Analyzer VM
8. The Wireshark Packet Capture Display appears; see Figure 105 on page 369.
Figure 105: Packet Capture Display
Setting Up Traffic Mirroring Using Configure > Networking > Services
You can set up packet capture for mirroring to an analyzer within a service chain utilizing
more than one interface by starting with a service template. The following procedure
provides the steps needed.
1. Access Configure > Networking > Services > Service Templates. The Service Templates
screen appears; see Figure 106 on page 370.
Figure 106: Service Templates
2. To create a new service template, click the Create button.
The Add Service Template window appears; see Figure 107 on page 370.
Figure 107: Add Service Template
3. Complete the fields by using the guidelines in Table 32 on page 370.
Table 32: Add Service Template Fields

Name
Enter a descriptive text name for this service template.

Service Mode
Select Transparent from the drop-down list to indicate that this service template is for
purposes of mirroring.

Service
Select Analyzer from the drop-down list to indicate that this service template is for a
traffic analyzer.

Image Name
Select from a drop-down list of available images the analyzer image to use for this analyzer
service template. You should select the analyzer named analyzer two interfaces if you used
the recommended naming for the image analyzer-vm-console-two-if qcow2 in the image list.

Interface Types
From the drop-down list, click the check boxes to indicate which two interface types are
used for this analyzer service template: Left, Right, or Management.

Save
When finished, click OK to commit the changes.

Cancel
Click Cancel to clear the fields and start over.
4. Create a service instance by clicking the Service Instances link and clicking the Create
button.
The Create Service Instances window appears; see Figure 108 on page 371.
Figure 108: Create Service Instances
5. Complete the fields by using the guidelines in Table 33 on page 371.
Table 33: Create Service Instances Fields

Services Template
Select from a drop-down list of available service templates the template to use for this
service instance (e.g. AnalyzerTemplate).

Instance Name
Enter a text name for this service instance.

Left Virtual Network
Select from a drop-down list of available networks the network for the left interface, or
select Automatic.

Right Virtual Network
Select from a drop-down list of available networks the network for the right interface,
or select Automatic.

Management Virtual Network
Select from a drop-down list of available networks the network for the management
interface, or select Automatic.

Save
Click Save to commit your changes.

Cancel
Click Cancel to clear your changes and start over.
6. To create a network policy rule for this service instance, click Configure > Networking
> Policies.
The Policies window appears.
7. Click Create to get to the Create Policy window; see Figure 109 on page 372.
Figure 109: Create Policy
8. Click the + button in the lower portion of the screen to open the Policy Rules fields;
see Figure 110 on page 372.
Figure 110: Policy Rules
9. To add policy rules, complete the fields, using the guidelines in Table 34 on page 373.
NOTE: When there is a network policy attached to the virtual network,
any conflicting rules configured for the analyzer will not take effect.
Table 34: Add Rule Fields

Action
Select the action to apply to matching traffic, for example pass.

Protocol
Select from a list the protocol for the packets to be captured: ANY, TCP, UDP, or ICMP.

Source Network
Select from a list the source network from which packets are to be captured: any, local,
or a specific domain:network.

Source Ports
If you want to capture only those packets that originate from a specific port number,
enter the port number.

Direction
Select the direction of flow for the packets to be captured: Bidirectional or Unidirectional.

Destination Network
Select from a list the destination network for the packets to be captured: any, local, or
a specific domain:network.

Destination Ports
If you want to capture only those packets that are destined to a specific port number,
enter the port number.

Apply Service
Check this box to open a field where you can select a service to apply.

Mirror to
Check this box to open a field where you can select a service to accept the mirrored
packets.

Save
Click Save to commit your changes.

Cancel
Click Cancel to clear your changes and start over.
10. Click the Mirror to box and select the available analyzer service instance, then click
Save.
11. To verify packet capture, at Configure > Services > Service Instances, select the analyzer
service instance and click View Console.
The packet capture displays; see Figure 111 on page 374. The analyzer service VM
launches the Contrail enhanced Wireshark as it starts and captures the mirrored
packets destined to this service.
Figure 111: Service Instances View Console
NOTE: When using the Firefox web browser, you may have difficulty
viewing the mixed content presented by the View Console enhanced
Wireshark option. To fix this, please enable mixed content in Firefox.
Alternatively, you can select Click here to show only console to view the
console information in a separate window.
Configuring Interface Monitoring and Mirroring
Contrail supports user monitoring of traffic on any guest virtual machine interface when
using the Juniper Contrail user interface.
When interface monitoring (packet capture) is selected, a default analyzer is created
and all traffic from the selected interface is mirrored and sent to the default analyzer. If
a mirroring instance is already launched, the traffic will be redirected to the selected
instance. The interface traffic is only mirrored during the time that the monitor packet
capture interface is in use. When the capture screen is closed, interface mirroring stops.
To configure interface mirroring:
1. Select Monitor > Infrastructure > Virtual Routers, then select the vRouter that has the
interface to mirror.
2. In the list of attributes for the vRouter, select Interfaces; see Figure 112 on page 375.
Figure 112: Individual vRouter
A list of interfaces for that vRouter appears.
3. For the interface to mirror, click the Action icon in the last column and select the option
Packet Capture; see Figure 113 on page 375.
Figure 113: Interfaces
The mirror packet capture starts and displays at this screen.
The mirror packet capture stops when you exit this screen.
Analyzer Service Virtual Machine (analyzer-vm-console.qcow2)
The analyzer service virtual machine launches a Contrail-enhanced version of the network
protocol analyzer Wireshark as the analyzer starts capturing mirror packets destined to
the analyzer service.
• Packet Format for Analyzer on page 376
• Metadata Format on page 376
• Wireshark Changes on page 377
• Troubleshooting Packet Display on page 377
Packet Format for Analyzer
The analyzer uses the PCAP format, which has these parts:
• Global header
• PCAP packet header
• Packet data (original packet data)
The global header is added by the analyzer service by means of the Wireshark instance.
The vRouter DP uses the configured UDP session to send mirrored packets to the analyzer,
adding the PCAP packet header to the packet data as it sends it over the UDP socket to
the analyzer.
The following additional information is also added to the packet data as metadata:
• Captured host (IP address)
• Ingress or egress
• Action (Pass/Deny/...)
• Source VN (fully qualified name)
• Destination VN (fully qualified name)
In the existing PCAP, a network ID is added in the global header. The metadata (additional
flow information) is added in front of the existing packet as follows.
+---------------+---------------+-----------+-------------+---------------+-----------+-------------+
| Global header | Packet header | Meta data | Packet data | Packet header | Meta data | Packet data |
+---------------+---------------+-----------+-------------+---------------+-----------+-------------+
Metadata Format
The metadata is in type-length-value (TLV) format as follows.
Type: 1 byte
Length: 1 byte
Value: up to Length bytes
Type values:
1 – Captured host IPv4 address
2 – Action field
3 – Source VN
4 – Destination VN
255 – TLV end
Captured host address
Length is 4 or 16 bytes based on IP address type
Action field
Length is 2 bytes. Multiple bits might be turned on, if there are more actions. Ingress or
egress bit will be present in the Action field.
Source VN or Destination VN
Length is variable and up to 256 characters
TLV end
A special type 255 (0xFF) is used to identify the end of TLV entries. The TLV end must
be last, at the end of the metadata.
Wireshark Changes
A plugin is added to the Wireshark code. The plugin parses the metadata and displays
the packet fields; see example in Figure 114 on page 377.
Figure 114: Wireshark Packet Display
Troubleshooting Packet Display
Follow these steps if the packets are not displaying:
1. Use tcpdump on the tap interfaces to see whether packets are going toward the
analyzer VM.
2. Check introspect to see whether the flow action has mirror activity in it.
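A sketch of the first step, assuming the analyzer tunnel uses the UDP destination port
seen in the earlier nh --get output (8099) and a hypothetical tap interface name:

# tcpdump -ni tap3214fc7e-88 udp port 8099

If mirrored packets are being delivered, they appear here as UDP datagrams destined
to the analyzer.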
CHAPTER 17
Using Contrail Analytics to Monitor and
Troubleshoot the Network
• Contrail Analytics Overview on page 379
• Analytics Scalability on page 381
• High Availability for Analytics on page 382
• Ceilometer Support in a Contrail Cloud on page 383
• Underlay Overlay Mapping in Contrail on page 388
• Monitoring the System on page 405
• Debugging Processes Using the Contrail Introspect Feature on page 407
• Monitor > Infrastructure > Dashboard on page 412
• Monitor > Infrastructure > Control Nodes on page 415
• Monitor > Infrastructure > Virtual Routers on page 422
• Monitor > Infrastructure > Analytics Nodes on page 433
• Monitor > Infrastructure > Config Nodes on page 438
• Monitor > Networking on page 441
• Query > Flows on page 449
• Query > Logs on page 456
• System Log Receiver in Contrail Analytics on page 461
• Example: Debugging Connectivity Using Monitoring for Troubleshooting on page 462
Contrail Analytics Overview
Contrail is a distributed system of compute nodes, control nodes, configuration nodes,
database nodes, web UI nodes, and analytics nodes.
The analytics nodes are responsible for the collection of system state information, usage
statistics, and debug information from all of the software modules across all of the nodes
of the system. The analytics nodes store the data gathered across the system in a
database that is based on the Apache Cassandra open source distributed database
management system. The database is queried by means of an SQL-like language and
representational state transfer (REST) APIs.
System state information collected by the analytics nodes is aggregated across all of
the nodes, and comprehensive graphical views allow the user to get up-to-date system
usage information easily.
Debug information collected by the analytics nodes includes the following types:
• System log (syslog) messages—informational and debug messages generated by
system software components.
• Object log messages—records of changes made to system objects such as virtual
machines, virtual networks, service instances, virtual routers, BGP peers, routing
instances, and the like.
• Trace messages—records of activities collected locally by software components and
sent to analytics nodes only on demand.
Statistics information related to flows, CPU and memory usage, and the like is also
collected by the analytics nodes and can be queried at the user interface to provide
historical analytics and time-series information. The queries are performed using REST
APIs.
Analytics data is written to a database in Contrail. The data expires after the default
time-to-live (TTL) period of 48 hours. You can change this default TTL as needed by
changing the database_ttl value in the testbed.py file.
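For example, to reduce the retention period to 24 hours, a hypothetical testbed.py
fragment; the exact placement within your testbed.py may differ:

database_ttl = 24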
Related Documentation
• Monitoring the System on page 405
Analytics Scalability
The Contrail monitoring and analytics services (collector role) collect and store data
generated by various system components and provide the data to the Contrail interface
by means of representational state transfer (REST) application program interface (API)
queries.
The Contrail components are horizontally scalable to ensure consistent performance as
the system grows. Scalability is provided for the generator components (control and
compute roles) and for the REST API users (webui role).
This section provides a brief description of the recommended configuration of analytics
in Contrail to achieve horizontal scalability.
The following are the recommended locations for the various component roles of the
Contrail system in a 5-node configuration:
• Node 1—config role, web-ui role
• Node 2—control role, analytics role, database role
• Node 3—control role, analytics role, database role
• Node 4—compute role
• Node 5—compute role
Figure 115 on page 382 illustrates scalable connections for analytics in a 5-node system,
with the nodes configured for roles as recommended above. The analytics load is
distributed between the two analytics nodes. This configuration can be extended to any
number of analytics nodes.
Figure 115: Analytics Scalability
The analytics nodes collect and store data and provide this data through various REST
API queries. Scalability is provided for the control nodes, the compute nodes, and the
REST API users, with the API output displayed in the Contrail user interface. As the number
of control and compute nodes increase in the system, the analytics nodes can also be
increased.
High Availability for Analytics
Contrail supports multiple instances of analytics for high availability and load balancing.
Contrail analytics provides two broad areas of functionality:
• contrail-collector—Receives status, logs, and flow information from all Contrail
processing elements (for example, generators) and records them.
Every generator is connected to one of the contrail-collector instances at any given
time. If an instance fails (or is shut down), all the generators that are connected to it
are automatically moved to another functioning instance, typically in a few seconds
or less. Some messages may be lost during this movement. UVEs are resilient to
message loss, so the state shown in a UVE is kept consistent with the state in the
generator.
• contrail-opserver—Provides an external API to report UVEs and to query logs and flows.
Each analytics component exposes a northbound REST API represented by the
contrail-opserver service (port 8081) so that the failure of one analytics component
or one contrail-opserver service does not impact the operation of other instances.
These are the ways to manage connectivity to the contrail-opserver endpoints:
• Periodically poll the contrail-opserver service on a set of analytics nodes to determine
the list of functioning endpoints, then make API requests from one or more of the
functioning endpoints.
• Subscribe to the Contrail Discovery Service to get a list of functioning endpoints. If
there are any failures, it can take 5-30 minutes for the Contrail Discovery Service to
send an update.
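A minimal sketch of the polling approach, assuming two analytics nodes whose names
are placeholders; any node that returns HTTP 200 on its contrail-opserver port is a
functioning endpoint:

for node in analytics-1 analytics-2; do
  curl -s -o /dev/null -w "$node %{http_code}\n" http://$node:8081/
done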
The Contrail user interface makes use of the same northbound REST API to present
dashboards, and reacts to any contrail-opserver high availability event automatically,
using the Contrail Discovery Service.
Ceilometer Support in a Contrail Cloud
This topic describes how to configure the Ceilometer services.
• Overview on page 383
• Ceilometer Details on page 383
• Verification of Ceilometer Operation on page 384
• Contrail Ceilometer Plugin on page 386
• Ceilometer Installation and Provisioning on page 388
Overview
Contrail Release 2.20 and later supports the OpenStack Ceilometer services. OpenStack
Ceilometer is supported on the OpenStack release Juno on Ubuntu 14.04.1 LTS.
The prerequisites for installing Ceilometer are:
• Contrail Cloud installation
• Provisioned using Fabric with enable_ceilometer = True in the testbed.py file (see the snippet following this list).
• Alternatively, provisioned using Server Manager with enable_ceilometer = True in the cluster.json file or cluster configuration.
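For the Fabric path, the flag is a plain Python assignment in the testbed definition. A minimal sketch, assuming it sits alongside the other global settings in testbed.py:

# In testbed.py (sketch; exact placement alongside other global flags is assumed)
enable_ceilometer = True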
NOTE: Ceilometer services are only installed on the first OpenStack controller
node and do not support high availability in Contrail Release 2.20.
Ceilometer Details
Ceilometer is used to reliably collect measurements of the utilization of the physical and
virtual resources comprising deployed clouds, persist these data for subsequent retrieval
and analysis, and trigger actions when defined criteria are met.
The Ceilometer architecture consists of:
Polling agent—Agent designed to poll OpenStack services and build meters. The polling
agents are also run on the compute nodes in addition to the OpenStack controller.
Notification agent—Agent designed to listen to notifications on message queue and
convert them to events and samples.
Collector—Gathers and records event and metering data created by the notification and polling agents.
API server—Provides a REST API to query and view data recorded by the collector service.
Alarms—Daemons to evaluate and notify based on defined alarming rules.
Database—Stores the metering data, notifications, and alarms. The supported databases
are MongoDB, SQL-based databases compatible with SQLAlchemy, and HBase.
The recommended database is MongoDB, which has been thoroughly tested with
Contrail and deployed on a production scale.
Verification of Ceilometer Operation
The Ceilometer services are named slightly differently on Ubuntu and on RHEL Server 7.0.
On Ubuntu, the service names are:
Polling agent—ceilometer-agent-central and ceilometer-agent-compute
Notification agent—ceilometer-agent-notification
Collector—ceilometer-collector
API Server—ceilometer-api
Alarms—ceilometer-alarm-evaluator and ceilometer-alarm-notifier
On RHEL Server 7.0, the service names are:
Polling agent—openstack-ceilometer-central and openstack-ceilometer-compute
Notification agent—openstack-ceilometer-notification
Collector—openstack-ceilometer-collector
API server—openstack-ceilometer-api
Alarms—openstack-ceilometer-alarm-evaluator and openstack-ceilometer-alarm-notifier
To verify the Ceilometer installation, users can check that the Ceilometer services are up and running by using the openstack-status command.
For example, using the openstack-status command on an all-in-one node running Ubuntu
14.04.1 LTS with release 2.2 of Contrail installed shows the following Ceilometer services
as active:
== Ceilometer services ==
ceilometer-api:                active
ceilometer-agent-central:      active
ceilometer-agent-compute:      active
ceilometer-collector:          active
ceilometer-alarm-notifier:     active
ceilometer-alarm-evaluator:    active
ceilometer-agent-notification: active
You can issue the ceilometer meter-list command on the OpenStack controller node to
verify that meters are being collected, stored, and reported via the REST API. The following
is an example of the output:
root@a7s37:~# (source /etc/contrail/openstackrc; ceilometer meter-list)
+------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| Name                         | Type       | Unit    | Resource ID                          | User ID                          | Project ID                       |
+------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| ip.floating.receive.bytes    | cumulative | B       | a726f93a-65fa-4cad-828b-54dbfcf4a119 | None                             | None                             |
| ip.floating.receive.packets  | cumulative | packet  | a726f93a-65fa-4cad-828b-54dbfcf4a119 | None                             | None                             |
| ip.floating.transmit.bytes   | cumulative | B       | a726f93a-65fa-4cad-828b-54dbfcf4a119 | None                             | None                             |
| ip.floating.transmit.packets | cumulative | packet  | a726f93a-65fa-4cad-828b-54dbfcf4a119 | None                             | None                             |
| network                      | gauge      | network | 7fa6796b-756e-4320-9e73-87d4c52ecc83 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network                      | gauge      | network | 9408e287-d3e7-41e2-89f0-5c691c9ca450 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network                      | gauge      | network | b3b72b98-f61e-4e1f-9a9b-84f4f3ddec0b | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network                      | gauge      | network | cb829abd-e6a3-42e9-a82f-0742db55d329 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network.create               | delta      | network | 7fa6796b-756e-4320-9e73-87d4c52ecc83 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network.create               | delta      | network | 9408e287-d3e7-41e2-89f0-5c691c9ca450 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network.create               | delta      | network | b3b72b98-f61e-4e1f-9a9b-84f4f3ddec0b | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| network.create               | delta      | network | cb829abd-e6a3-42e9-a82f-0742db55d329 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| port                         | gauge      | port    | 0d401d96-c2bf-4672-abf2-880eecf25ceb | 01edcedd989f43b3a2d6121d424b254d | 82ab961f88994e168217ddd746fdd826 |
| port                         | gauge      | port    | 211b94a4-581d-45d0-8710-c6c69df15709 | 01edcedd989f43b3a2d6121d424b254d | 82ab961f88994e168217ddd746fdd826 |
| port                         | gauge      | port    | 2287ce25-4eef-4212-b77f-3cf590943d36 | 01edcedd989f43b3a2d6121d424b254d | 82ab961f88994e168217ddd746fdd826 |
| port.create                  | delta      | port    | f62f3732-222e-4c40-8783-5bcbc1fd6a1c | 01edcedd989f43b3a2d6121d424b254d | 82ab961f88994e168217ddd746fdd826 |
| port.create                  | delta      | port    | f8c89218-3cad-48e2-8bd8-46c1bc33e752 | 01edcedd989f43b3a2d6121d424b254d | 82ab961f88994e168217ddd746fdd826 |
| port.update                  | delta      | port    | 43ed422d-b073-489f-877f-515a3cc0b8c4 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| subnet                       | gauge      | subnet  | 09105ed1-1654-4b5f-8c12-f0f2666fa304 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| subnet                       | gauge      | subnet  | 4bf00aac-407c-4266-a048-6ff52721ad82 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| subnet.create                | delta      | subnet  | 09105ed1-1654-4b5f-8c12-f0f2666fa304 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
| subnet.create                | delta      | subnet  | 4bf00aac-407c-4266-a048-6ff52721ad82 | 15c0240142084d16b3127d6f844adbd9 | ded208991de34fe4bb7dd725097f1c7e |
+------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
NOTE: The ceilometer meter-list command lists the meters only if images have been created, instances have been launched, or subnets, ports, or floating IP addresses have been created; otherwise, the meter list is empty. You also need to source the /etc/contrail/openstackrc file when executing the command.
Contrail Ceilometer Plugin
The Contrail Ceilometer plugin adds the capability to meter the traffic statistics of floating
IP addresses in Ceilometer. The following meters for each floating IP resource are added
by the plugin in Ceilometer.
ip.floating.receive.bytes
ip.floating.receive.packets
ip.floating.transmit.bytes
ip.floating.transmit.packets
The Contrail Ceilometer plugin configuration is done in the /etc/ceilometer/pipeline.yaml
file when Contrail is installed by the Fabric provisioning scripts.
The following example shows the configuration that is added to the file:
sources:
    - name: contrail_source
      interval: 600
      meters:
          - "ip.floating.receive.packets"
          - "ip.floating.transmit.packets"
          - "ip.floating.receive.bytes"
          - "ip.floating.transmit.bytes"
      resources:
          - contrail://<IP-address-of-Contrail-Analytics-Node>:8081
      sinks:
          - contrail_sink
sinks:
    - name: contrail_sink
      publishers:
          - rpc://
      transformers:
The following example shows the Ceilometer meter list output for the floating IP meters:
+-------------------------------+------------+--------+--------------------------------------+----------------------------------+----------------------------------+
| Name                          | Type       | Unit   | Resource ID                          | User ID                          | Project ID                       |
+-------------------------------+------------+--------+--------------------------------------+----------------------------------+----------------------------------+
| ip.floating.receive.bytes     | cumulative | B      | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None                             | None                             |
| ip.floating.receive.bytes     | cumulative | B      | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None                             | None                             |
| ip.floating.receive.packets   | cumulative | packet | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None                             | None                             |
| ip.floating.receive.packets   | cumulative | packet | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None                             | None                             |
| ip.floating.transmit.bytes    | cumulative | B      | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None                             | None                             |
| ip.floating.transmit.bytes    | cumulative | B      | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None                             | None                             |
| ip.floating.transmit.packets  | cumulative | packet | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None                             | None                             |
| ip.floating.transmit.packets  | cumulative | packet | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None                             | None                             |
+-------------------------------+------------+--------+--------------------------------------+----------------------------------+----------------------------------+
In the meter-list output, the Resource ID refers to the floating IP.
The following example shows the output from the ceilometer resource-show -r
451c93eb-e728-4ba1-8665-6e7c7a8b49e2 command:
+-------------+-------------------------------------------------------------------------+
| Property    | Value                                                                   |
+-------------+-------------------------------------------------------------------------+
| metadata    | {u'router_id': u'None', u'status': u'ACTIVE', u'tenant_id':             |
|             | u'ceed483222f9453ab1d7bcdd353971bc', u'floating_network_id':            |
|             | u'6d0cca50-4be4-4b49-856a-6848133eb970', u'fixed_ip_address':           |
|             | u'2.2.2.4', u'floating_ip_address': u'3.3.3.4', u'port_id': u'c6ce2abf- |
|             | ad98-4e56-ae65-ab7c62a67355', u'id':                                    |
|             | u'451c93eb-e728-4ba1-8665-6e7c7a8b49e2', u'device_id':                  |
|             | u'00953f62-df11-4b05-97ca-30c3f6735ffd'}                                |
| project_id  | None                                                                    |
| resource_id | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2                                    |
| source      | openstack                                                               |
| user_id     | None                                                                    |
+-------------+-------------------------------------------------------------------------+
The following example shows the output from the ceilometer statistics command and
the ceilometer sample-list command for the ip.floating.receive.packets meter:
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Count | Min | Max   | Sum    | Avg            | Duration   | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+
| 0      | 2015-02-13T19:50:40.795000 | 2015-02-13T19:50:40.795000 | 2892  | 0.0 | 325.0 | 1066.0 | 0.368603042877 | 439069.674 | 2015-02-13T19:50:40.795000 | 2015-02-18T21:48:30.469000 |
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+

+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
| Resource ID                          | Name                        | Type       | Volume | Unit   | Timestamp                  |
+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
| 9cf76844-8f09-4518-a09e-e2b8832bf894 | ip.floating.receive.packets | cumulative | 208.0  | packet | 2015-02-18T21:48:30.469000 |
| 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | ip.floating.receive.packets | cumulative | 325.0  | packet | 2015-02-18T21:48:28.354000 |
| 9cf76844-8f09-4518-a09e-e2b8832bf894 | ip.floating.receive.packets | cumulative | 0.0    | packet | 2015-02-18T21:38:30.350000 |
+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
Ceilometer Installation and Provisioning
There are two possible scenarios for Contrail Ceilometer plugin installation:
1. If you install your own OpenStack distribution, you can install the Contrail Ceilometer plugin on the OpenStack controller node.
2. When using Contrail Cloud services, the Ceilometer controller services are installed and provisioned automatically as part of the OpenStack controller node, and the compute agent service is installed as part of the compute node.
The following fabric tasks are added to facilitate the installation and provisioning:
fab install_ceilometer—Installs the Ceilometer packages on the OpenStack controller node.
fab install_ceilometer_compute—Installs the Ceilometer packages on the compute node.
fab setup_ceilometer—Provisions the Ceilometer controller services on the OpenStack controller node.
fab setup_ceilometer_compute—Provisions the Ceilometer compute services on the compute node.
fab install_contrail_ceilometer_plugin—Installs the Contrail Ceilometer plugin package on the OpenStack controller node.
fab setup_contrail_ceilometer_plugin—Provisions the Contrail Ceilometer plugin package on the OpenStack controller node.
NOTE: The fabric tasks are automatically called as part of the fab install_openstack and fab setup_openstack commands for the OpenStack controller node, and as part of the fab install_vrouter and fab setup_vrouter commands for the compute node.
Underlay Overlay Mapping in Contrail
• Overview: Underlay Overlay Mapping using Contrail Analytics on page 389
• Underlay Overlay Analytics Available in Contrail on page 389
• Architecture and Data Collection on page 390
• New Processes/Services for Underlay Overlay Mapping on page 390
• External Interfaces Configuration for Underlay Overlay Mapping on page 391
• Physical Topology on page 391
• SNMP Configuration on page 392
• Link Layer Discovery Protocol (LLDP) Configuration on page 392
• IPFIX and sFlow Configuration on page 392
• Sending pRouter Information to the SNMP Collector in Contrail on page 393
• pRouter UVEs on page 394
• Contrail User Interface for Underlay Overlay Analytics on page 395
• Viewing Topology to the Virtual Machine Level on page 395
• Viewing the Traffic of any Link on page 396
• Trace Flows on page 396
• Search Flows and Map Flows on page 397
• Overlay to Underlay Flow Map Schemas on page 398
• Module Operations for Overlay Underlay Mapping on page 400
• SNMP Collector Operation on page 400
• Topology Module Operation on page 401
• IPFIX and sFlow Collector Operation on page 402
• Troubleshooting Underlay Overlay Mapping on page 403
• Script to add pRouter Objects on page 403
Overview: Underlay Overlay Mapping using Contrail Analytics
Today’s cloud data centers consist of large collections of interconnected servers that provide computing and storage capacity to run a variety of applications. The servers are connected with redundant TOR switches, which, in turn, are connected to spine routers. The cloud deployment is typically shared by multiple tenants, each of whom usually needs multiple isolated networks. Multiple isolated networks can be provided by overlay networks, which are created by forming tunnels (for example, GRE, IP-in-IP, MAC-in-MAC) over the underlay, or physical, connectivity.
As data flows in the overlay network, Contrail can provide statistics and visualization of
the traffic in the underlay network.
Underlay Overlay Analytics Available in Contrail
Starting with Contrail Release 2.20, you can view a variety of analytics related to underlay
and overlay traffic in the Contrail Web user interface. The following are some of the
analytics that Contrail provides for statistics and visualization of overlay underlay traffic.
• View the topology of the underlay network.
A user interface view of the physical underlay network with a drill-down mechanism to show connected servers (Contrail compute nodes) and the virtual machines on the servers.
• View the details of any element in the topology.
You can view details of a pRouter, vRouter, or virtual machine, or of a link between two elements. You can also view traffic statistics in a graphical view corresponding to the selected element.
• View the underlay path of an overlay flow.
Given an overlay flow, get the underlay path used for that flow and map the path in the topology view.
Architecture and Data Collection
Accumulation of the data to map an overlay flow to its underlay path is performed in
several steps across Contrail modules.
The following outlines the essential steps:
1. The SNMP collector module polls physical routers.
The SNMP collector module receives the authorizations and configurations of the physical routers from the contrail-config module, polls all of the physical routers using the SNMP protocol, and then uploads the data to the Contrail analytics collectors. The SNMP information is stored in the pRouter UVEs (physical router user visible entities).
2. IPFIX and sFlow protocols are used to collect the flow statistics.
The physical router is configured to send flow statistics to the collector, using one of the collection protocols: Internet Protocol Flow Information Export (IPFIX) or sFlow (an industry standard for sampled flow of packet export at Layer 2).
3. The topology module reads the SNMP information.
The Contrail topology module reads SNMP information from the pRouter UVEs from the analytics API, computes the neighbor list, and writes the neighbor information into the pRouter UVEs. This neighbor list is used by the Contrail WebUI to display the physical topology.
4. The Contrail user interface reads and displays the topology and statistics.
The Contrail user interface module reads the topology information from the Contrail analytics and displays the physical topology. It also uses information stored in the analytics to display graphs for link statistics, and to show the map of the overlay flows on the underlay network.
New Processes/Services for Underlay Overlay Mapping
The contrail-snmp-collector and the contrail-topology are new daemons that are both
added to the contrail-analytics node. The contrail-analytics package contains these new
features and their associated files. The contrail-status command displays the new services.
Example: contrail-status
The following is an example of using contrail-status to show the status of the new process and service for underlay overlay mapping.

root@a7s37:~# contrail-status
== Contrail Control ==
supervisor-control:          active
contrail-control            active
…
== Contrail Analytics ==
supervisor-analytics:        active
…
contrail-query-engine       active
contrail-snmp-collector     active
contrail-topology           active

Example: Service Command
The service command can be used to start, stop, and restart the new services. See the following example.

root@a7s37:~# service contrail-snmp-collector status
contrail-snmp-collector     RUNNING   pid 12179, uptime 1 day, 14:59:11
External Interfaces Configuration for Underlay Overlay Mapping
This section outlines the external interface configurations necessary for successful
underlay overlay mapping for Contrail analytics.
Physical Topology
The typical physical topology includes:
• Servers connected to the TOR switches
• TOR switches connected to spine switches
• Spine switches connected to core switches
The following is an example of how the topology is depicted in the Contrail WebUI
analytics.
Figure 116: Analytics Topology
SNMP Configuration
Configure SNMP on the physical routers so that the contrail-snmp-collector can read
SNMP data.
The following shows an example SNMP configuration from a Juniper Networks router.
set snmp community public authorization read-only
Link Layer Discovery Protocol (LLDP) Configuration
Configure LLDP on the physical router so that the contrail-snmp-collector can read the
neighbor information of the routers.
The following is an example of LLDP configuration on a Juniper Networks router.
set protocols lldp interface all
set protocols lldp-med interface all
IPFIX and sFlow Configuration
Flow samples are sent to the contrail-collector by the physical routers. Because the
contrail-collector supports the sFlow and IPFIX protocols for receiving flow samples, the
physical routers must be configured to send samples using one of those protocols.
Example: sFlow Configuration
The following shows a sample sFlow configuration.
root@host> show configuration protocols sflow | display set
set protocols sflow polling-interval 0
set protocols sflow sample-rate ingress 10
set protocols sflow source-ip 10.84.63.114
set protocols sflow collector 10.84.63.130 udp-port 6343
set protocols sflow interfaces ge-0/0/0.0
set protocols sflow interfaces ge-0/0/1.0
set protocols sflow interfaces ge-0/0/2.0
set protocols sflow interfaces ge-0/0/3.0
set protocols sflow interfaces ge-0/0/4.0
Example: IPFIX Configuration
The following is a sample IPFIX configuration from a Juniper Networks router.
root@host> show configuration chassis | display set
set chassis tfeb slot 0 sampling-instance sample-ins1
set chassis network-services all-ethernet
root@host> show configuration chassis tfeb | display set
set chassis tfeb slot 0 sampling-instance sample-ins1
root@host > show configuration services flow-monitoring | display set
set services flow-monitoring version-ipfix template t1 flow-active-timeout 30
set services flow-monitoring version-ipfix template t1 flow-inactive-timeout 30
set services flow-monitoring version-ipfix template t1 template-refresh-rate packets 10
set services flow-monitoring version-ipfix template t1 ipv4-template
root@host > show configuration interfaces | display set | match sampling
set interfaces ge-1/0/0 unit 0 family inet sampling input
set interfaces ge-1/0/1 unit 0 family inet sampling input
root@host> show configuration forwarding-options sampling | display set
set forwarding-options sampling instance sample-ins1 input rate 1
set forwarding-options sampling instance sample-ins1 family inet output flow-server
10.84.63.130 port 4739
set forwarding-options sampling instance sample-ins1 family inet output flow-server
10.84.63.130 version-ipfix template t1
set forwarding-options sampling instance sample-ins1 family inet output inline-jflow
source-address 10.84.27.41
Sending pRouter Information to the SNMP Collector in Contrail
Information about the physical routers must be sent to the SNMP collector before the
full analytics information can be read and displayed. Typically, the pRouter information
is taken from the contrail-config file, but the information can also be sent to the SNMP
collector by means of a device.ini file.
SNMP collector getting pRouter information from contrail-config file
The physical routers are added to the contrail-config by using the Contrail user interface
or by using direct API, through provisioning or other scripts. Once the configuration is in
the contrail-config, the contrail-snmp-collector gets the physical router information from
contrail-config. The SNMP collector uses this list and the other configuration parameters
to perform SNMP queries and to populate pRouter UVEs.
Figure 117: Add Physical Router Window
pRouter UVEs
pRouter UVEs are accessed from the REST APIs on your system from
contrail-analytics-api, using a URL of the form:
http://<ip>:8081/analytics/uves/prouters
The following is sample output from a pRouter REST API:
Figure 118: Sample Output From a pRouter REST API
Details of a pRouter UVE can be obtained from your system, using a URL of the following
form.
http://<ip>:8081/analytics/uves/prouter/a7-ex3?flat
The following is sample output of a pRouter UVE.
Figure 119: Sample Output From a pRouter UVE
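These UVE endpoints can be queried programmatically as well as from a browser. The following is a minimal sketch, assuming Python with the requests library and a hypothetical analytics-api address; the pRouter name a7-ex3 is taken from the example URL above, and the shape of the list response (name/href entries) is an assumption for illustration.

# Minimal sketch: fetch the pRouter UVE list, then one flattened UVE.
# Assumptions: 'requests' is available; the analytics-api address is hypothetical.
import requests

ANALYTICS_API = 'http://10.1.1.2:8081'

# List all pRouter UVEs known to analytics.
prouters = requests.get(ANALYTICS_API + '/analytics/uves/prouters').json()
for entry in prouters:
    print(entry.get('name'), entry.get('href'))

# Fetch the flattened UVE for one physical router.
uve = requests.get(ANALYTICS_API + '/analytics/uves/prouter/a7-ex3?flat').json()
print(uve.keys())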
Contrail User Interface for Underlay Overlay Analytics
The topology view and related functionality is accessed from the Physical Topology link
in the Monitor tab of the Contrail user interface, Monitor > Physical Topology.
Viewing Topology to the Virtual Machine Level
In the Contrail user interface, it is possible to drill down through the displayed topology to the virtual machine level. The following diagram shows the virtual machines instantiated on the a7s36 vRouter and the full physical topology related to each.
Figure 120: Physical Topology Related to a vRouter
Viewing the Traffic of any Link
At Monitor > Physical Topology, double-click any link on the topology to display the traffic statistics graph for that link. The following is an example.
Figure 121: Traffic Statistics Graph
Trace Flows
Click the Trace Flows tab to see a list of active flows. To see the path of a flow, click a flow in the active flows list, then click the Trace Flow button. The path taken in the underlay by the selected flow is displayed. The following is an example.
Figure 122: List of Active Flows
Limitations of Trace Flow Feature
Because the Trace Flow feature uses IP traceroute to determine the path between the two vRouters involved in the flow, it has the same limitations as IP traceroute, including that Layer 2 routers in the path are not listed and therefore do not appear in the topology.
Search Flows and Map Flows
Click the Search Flows tab to open a search dialog, then click the Search button to list
the flows that match the search criteria. You can select a flow from the list and click Map
Flow to display the underlay path taken by the selected flow in the topology. The following
is an example.
Figure 123: Underlay Path
Overlay to Underlay Flow Map Schemas
The schema to query the underlay mapping information for an overlay flow is obtained
from a REST API, which can be accessed on your system using a URL of the following
form.
http://<ip>:8081/analytics/table/OverlayToUnderlayFlowMap/schema
Example: Overlay to Underlay Flow Map Schema
{"type": "FLOW",
 "columns": [
  {"datatype": "string", "index": true, "name": "o_svn", "select": false, "suffixes": ["o_sip"]},
  {"datatype": "string", "index": false, "name": "o_sip", "select": false, "suffixes": null},
  {"datatype": "string", "index": true, "name": "o_dvn", "select": false, "suffixes": ["o_dip"]},
  {"datatype": "string", "index": false, "name": "o_dip", "select": false, "suffixes": null},
  {"datatype": "int", "index": false, "name": "o_sport", "select": false, "suffixes": null},
  {"datatype": "int", "index": false, "name": "o_dport", "select": false, "suffixes": null},
  {"datatype": "int", "index": true, "name": "o_protocol", "select": false, "suffixes": ["o_sport", "o_dport"]},
  {"datatype": "string", "index": true, "name": "o_vrouter", "select": false, "suffixes": null},
  {"datatype": "string", "index": false, "name": "u_prouter", "select": null, "suffixes": null},
  {"datatype": "int", "index": false, "name": "u_pifindex", "select": null, "suffixes": null},
  {"datatype": "int", "index": false, "name": "u_vlan", "select": null, "suffixes": null},
  {"datatype": "string", "index": false, "name": "u_sip", "select": null, "suffixes": null},
  {"datatype": "string", "index": false, "name": "u_dip", "select": null, "suffixes": null},
  {"datatype": "int", "index": false, "name": "u_sport", "select": null, "suffixes": null},
  {"datatype": "int", "index": false, "name": "u_dport", "select": null, "suffixes": null},
  {"datatype": "int", "index": false, "name": "u_protocol", "select": null, "suffixes": null},
  {"datatype": "string", "index": false, "name": "u_flowtype", "select": null, "suffixes": null},
  {"datatype": "string", "index": false, "name": "u_otherinfo", "select": null, "suffixes": null}]}
The schema for underlay data across pRouters is available on your system at a URL of the following form:
http://<ip>:8081/analytics/table/StatTable.UFlowData.flow/schema
Example: Flow Data Schema for Underlay
{"type": "STAT",
 "columns": [
  {"datatype": "string", "index": true, "name": "Source", "suffixes": null},
  {"datatype": "int", "index": false, "name": "T", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(T)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "T=", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(T=)", "suffixes": null},
  {"datatype": "uuid", "index": false, "name": "UUID", "suffixes": null},
  {"datatype": "int", "index": false, "name": "COUNT(flow)", "suffixes": null},
  {"datatype": "string", "index": true, "name": "name", "suffixes": ["flow.pifindex"]},
  {"datatype": "int", "index": false, "name": "flow.pifindex", "suffixes": null},
  {"datatype": "int", "index": false, "name": "SUM(flow.pifindex)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(flow.pifindex)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "flow.sport", "suffixes": null},
  {"datatype": "int", "index": false, "name": "SUM(flow.sport)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(flow.sport)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "flow.dport", "suffixes": null},
  {"datatype": "int", "index": false, "name": "SUM(flow.dport)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(flow.dport)", "suffixes": null},
  {"datatype": "int", "index": true, "name": "flow.protocol", "suffixes": ["flow.sport", "flow.dport"]},
  {"datatype": "int", "index": false, "name": "SUM(flow.protocol)", "suffixes": null},
  {"datatype": "int", "index": false, "name": "CLASS(flow.protocol)", "suffixes": null},
  {"datatype": "string", "index": true, "name": "flow.sip", "suffixes": null},
  {"datatype": "string", "index": true, "name": "flow.dip", "suffixes": null},
  {"datatype": "string", "index": true, "name": "flow.vlan", "suffixes": null},
  {"datatype": "string", "index": false, "name": "flow.flowtype", "suffixes": null},
  {"datatype": "string", "index": false, "name": "flow.otherinfo", "suffixes": null}]}
Example: Typical Query for Flow Map
The following is a typical query. Internally, the analytics-api performs a query into the FlowRecordTable, then into the StatTable.UFlowData.flow, to return a list of (prouter, pifindex) pairs that give the underlay path taken for the given overlay flow.
FROM
OverlayToUnderlayFlowMap
SELECT
prouter, pifindex
WHERE
o_svn, o_sip, o_dvn, o_dip, o_sport, o_dport, o_protocol = <overlay flow>
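Such a query can also be issued programmatically through the analytics-api query endpoint. The following is a minimal sketch, assuming Python with the requests library, a POST to /analytics/query (the general analytics query interface), and hypothetical addresses and overlay-flow match values; the select fields use the u_prouter and u_pifindex names from the schema shown above, and op=1 is taken to mean "equals" in the query API.

# Minimal sketch: map an overlay flow to its underlay path via analytics-api.
# Assumptions: 'requests' is available; addresses and flow values are
# hypothetical; op=1 is assumed to mean "equals".
import requests, time

ANALYTICS_API = 'http://10.1.1.2:8081'
now_us = int(time.time() * 1e6)          # analytics timestamps are microseconds

query = {
    'table': 'OverlayToUnderlayFlowMap',
    'start_time': now_us - 600 * 1000000,   # last 10 minutes
    'end_time': now_us,
    'select_fields': ['u_prouter', 'u_pifindex'],
    'where': [[
        {'name': 'o_svn', 'value': 'default-domain:demo:vn1', 'op': 1},
        {'name': 'o_sip', 'value': '10.0.0.3', 'op': 1},
        {'name': 'o_dvn', 'value': 'default-domain:demo:vn2', 'op': 1},
        {'name': 'o_dip', 'value': '10.0.1.4', 'op': 1},
        {'name': 'o_sport', 'value': 34567, 'op': 1},
        {'name': 'o_dport', 'value': 80, 'op': 1},
        {'name': 'o_protocol', 'value': 6, 'op': 1},
    ]],
}
resp = requests.post(ANALYTICS_API + '/analytics/query', json=query)
print(resp.json())   # expected: the (u_prouter, u_pifindex) hops for the flow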
Module Operations for Overlay Underlay Mapping
SNMP Collector Operation
The Contrail SNMP collector uses the Net-SNMP library to talk to a physical router or any SNMP agent. Upon receiving SNMP packets, the data is translated into Python dictionaries, and the corresponding UVE objects are created. The UVE objects are then posted to the collector.
The SNMP module sleeps for some configurable period, then forks a collector process
and waits for the process to complete. The collector process goes through a list of devices
to be queried. For each device, it forks a greenlet task (Python coroutine), accumulates
SNMP data, writes the summary to a JSON file, and exits. The parent process then reads
the JSON file, creates UVEs, sends the UVEs to the collector, then goes to sleep again.
The pRouter UVE sent by the SNMP collector carries only the raw MIB information.
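The fan-out described above (one lightweight task per device, with the results gathered into a summary file) can be sketched conceptually. The following is a simplified illustration, not the daemon's actual code, assuming the gevent library for greenlets and a hypothetical poll_device function standing in for the SNMP walk:

# Conceptual sketch of the SNMP collector's poll cycle (not the daemon's code).
# Assumptions: gevent is installed; poll_device is a hypothetical stand-in
# for the per-device SNMP walk.
import json
import gevent

def poll_device(device):
    """Hypothetical: walk the device's MIBs and return a dict of results."""
    return {'name': device, 'ifTable': [], 'lldpTable': {}}

def poll_cycle(devices, summary_path='/tmp/snmp-summary.json'):
    # One greenlet per device, mirroring the collector's per-device tasks.
    jobs = [gevent.spawn(poll_device, d) for d in devices]
    gevent.joinall(jobs)
    summary = [j.value for j in jobs]
    # Write the accumulated SNMP data to a JSON summary file; the parent
    # process would read this file, build pRouter UVEs, and send them on.
    with open(summary_path, 'w') as f:
        json.dump(summary, f)

poll_cycle(['a7-mx80-1', 'a7-ex3'])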
Example: pRouter Entry Carried in pRouter UVE
The definition below shows the pRouterEntry carried in the pRouterUVE. Additionally, an example LldpTable definition is shown.
The following definitions will create a virtual table as defined by:
http://<ip>:8081/analytics/table/StatTable.UFlowData.flow/schema
struct LldpTable {
    1: LldpLocalSystemData lldpLocalSystemData
    2: optional list<LldpRemoteSystemsData> lldpRemoteSystemsData
}

struct PRouterEntry {
    1: string name (key="ObjectPRouter")
    2: optional bool deleted
    3: optional LldpTable lldpTable
    4: optional list<ArpTable> arpTable
    5: optional list<IfTable> ifTable
    6: optional list<IfXTable> ifXTable
    7: optional list<IfStats> ifStats (tags="name:.ifIndex")
    8: optional list<IpMib> ipMib
}

uve sandesh PRouterUVE {
    1: PRouterEntry data
}
Topology Module Operation
The topology module reads the UVEs posted by the SNMP collector and computes the neighbor table, populating the table with the remote system name, local and remote interface names, the remote type (pRouter or vRouter), and local and remote ifIndices. The topology module sleeps for a while, reads UVEs, then computes the neighbor table and posts the UVE to the collector.
The pRouter UVE sent by the topology module carries the neighbor list, so the clients can put together all of the pRouter neighbor lists to compute the full topology.
The corresponding pRouter UVE definition is the following.

struct LinkEntry {
    1: string remote_system_name
    2: string local_interface_name
    3: string remote_interface_name
    4: RemoteType type
    5: i32 local_interface_index
    6: i32 remote_interface_index
}

struct PRouterLinkEntry {
    1: string name (key="ObjectPRouter")
    2: optional bool deleted
    3: optional list<LinkEntry> link_table
}

uve sandesh PRouterLinkUVE {
    1: PRouterLinkEntry data
}
IPFIX and sFlow Collector Operation
An IPFIX and sFlow collector has been implemented in contrail-collector. The collector
receives the IPFIX and sFlow samples and stores them as statistics samples in the
analytics database.
Example: IPFIX sFlow Collector Data
The following definition shows the data stored for the statistics samples and the indices that can be used to perform queries.

struct UFlowSample {
    1: u64 pifindex
    2: string sip
    3: string dip
    4: u16 sport
    5: u16 dport
    6: u16 protocol
    7: u16 vlan
    8: string flowtype
    9: string otherinfo
}

struct UFlowData {
    1: string name (key="ObjectPRouterIP")
    2: optional bool deleted
    3: optional list<UFlowSample> flow (tags="name:.pifindex, .sip, .dip, .protocol:.sport, .protocol:.dport, .vlan")
}
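Because the samples are stored as a standard stat table, they can be queried through the same analytics query interface used earlier. The following is a minimal sketch, assuming Python with the requests library, hypothetical addresses, a hypothetical pRouter IP as the table key, and the StatTable.UFlowData.flow field names from the schema shown above:

# Minimal sketch: query underlay flow samples for one pRouter.
# Assumptions: 'requests' is available; the analytics-api address and the
# pRouter IP are hypothetical; op=1 is assumed to mean "equals".
import requests, time

ANALYTICS_API = 'http://10.1.1.2:8081'
now_us = int(time.time() * 1e6)

query = {
    'table': 'StatTable.UFlowData.flow',
    'start_time': now_us - 600 * 1000000,   # last 10 minutes
    'end_time': now_us,
    'select_fields': ['name', 'flow.sip', 'flow.dip', 'flow.sport',
                      'flow.dport', 'flow.protocol'],
    'where': [[{'name': 'name', 'value': '10.84.63.114', 'op': 1}]],
}
resp = requests.post(ANALYTICS_API + '/analytics/query', json=query)
for row in resp.json().get('value', []):
    print(row)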
Troubleshooting Underlay Overlay Mapping
This section provides a variety of links where you can research errors that may occur with
underlay overlay mapping.
System Logs
Logs for contrail-snmp-collector and contrail-topology are in the following locations on
an installed Contrail system:
/var/log/contrail/contrail-snmp-collector-stdout.log
/var/log/contrail/contrail-topology.log
Introspect Utility
Use URLs of the following forms on your Contrail system to access the introspect utilities
for SNMP data and for topology data.
• SNMP data introspect: http://<ip>:5920/Snh_SandeshUVECacheReq?x=PRouterEntry
• Topology data introspect: http://<ip>:5921/Snh_SandeshUVECacheReq?x=PRouterLinkEntry
Script to add pRouter Objects
The usual mechanism for adding pRouter objects to contrail-config is through the Contrail UI, but you can also add these objects by using the Contrail vnc-api. To add the pRouters, save the following content in a file named cfg-snmp.py, and then execute the command as shown:
python cfg-snmp.py
Example: Content for cfg-snmp.py
#!python
from vnc_api import vnc_api
from vnc_api.gen.resource_xsd import SNMPCredentials
vnc = vnc_api.VncApi('admin', 'abcde123', 'admin')
apr = vnc_api.gen.resource_client.PhysicalRouter(name='a7-mx80-1')
apr.set_physical_router_management_ip('1.1.1.105')
apr.set_physical_router_dataplane_ip('1.1.1.41')
apr.set_physical_router_snmp_credentials(SNMPCredentials(version=2,
v2_community='public'))
vnc.physical_router_create(apr)
#u'd4b817fb-7885-4649-bad7-89302dde12e1'
apr = vnc_api.gen.resource_client.PhysicalRouter(name='a7-mx80-2')
apr.set_physical_router_management_ip('1.1.1.117')
apr.set_physical_router_dataplane_ip('1.1.1.43')
apr.set_physical_router_snmp_credentials(SNMPCredentials(version=2,
v2_community='public'))
vnc.physical_router_create(apr)
#u'b60c2d36-4a6d-408b-bb26-054e9c18453a'
apr = vnc_api.gen.resource_client.PhysicalRouter(name='a7-ex3')
apr.set_physical_router_management_ip('1.1.1.114')
apr.set_physical_router_dataplane_ip('1.1.1.114')
apr.set_physical_router_snmp_credentials(SNMPCredentials(version=2,
v2_community='public'))
vnc.physical_router_create(apr)
#u'28107445-2aa4-4c7f-91ed-3146af6f163d'
apr = vnc_api.gen.resource_client.PhysicalRouter(name='a7-ex2')
apr.set_physical_router_management_ip('1.1.1.106')
apr.set_physical_router_dataplane_ip('1.1.1.106')
apr.set_physical_router_snmp_credentials(SNMPCredentials(version=2,
v2_community='public'))
vnc.physical_router_create(apr)
#u'e2d2ddc6-4e0f-4cd4-b846-3bad53093ec6'
Monitoring the System
The Monitor icon on the Contrail Controller provides numerous options so you can view
and analyze usage and other activity associated with all nodes of the system, through
the use of reports, charts, and detailed lists of configurations and system activities.
Monitor pages support monitoring of infrastructure components—control nodes, virtual
routers, analytics nodes, and config nodes. Additionally, users can monitor networking
and debug components.
Use the menu options available from the Monitor icon to configure and view the statistics you need for a better understanding of the activities in your system. See Figure 124 on page 405.
Figure 124: Monitor Menu
See Table 35 on page 406 for descriptions of the items available under each of the menu
options from the Monitor icon.
Table 35: Monitor Menu Options

Infrastructure > Dashboard—Shows “at-a-glance” status view of the infrastructure components, including the numbers of virtual routers, control nodes, analytics nodes, and config nodes currently operational, and a bubble chart of virtual routers showing the CPU and memory utilization, log messages, system information, and alerts. See “Monitor > Infrastructure > Dashboard” on page 412.

Infrastructure > Control Nodes—View a summary for all control nodes in the system, and for each control node, view:
• Graphical reports of memory usage and average CPU load.
• Console information for a specified time period.
• A list of all peers with details about type, ASN, and the like.
• A list of all routes, including next hop, source, local preference, and the like.
See “Monitor > Infrastructure > Control Nodes” on page 415.

Infrastructure > Virtual Routers—View a summary of all vRouters in the system, and for each vRouter, view:
• Graphical reports of memory usage and average CPU load.
• Console information for a specified time period.
• A list of all interfaces with details such as label, status, associated network, IP address, and the like.
• A list of all associated networks with their ACLs and VRFs.
• A list of all active flows with source and destination details, size, and time.
See “Monitor > Infrastructure > Virtual Routers” on page 422.

Infrastructure > Analytics Nodes—View activity for the analytics nodes, including memory and CPU usage, analytics host names, IP address, status, and more. See “Monitor > Infrastructure > Analytics Nodes” on page 433.

Infrastructure > Config Nodes—View activity for the config nodes, including memory and CPU usage, config host names, IP address, status, and more. See “Monitor > Infrastructure > Config Nodes” on page 438.

Networking > Networks—For all virtual networks for all projects in the system, view graphical traffic statistics, including:
• Total traffic in and out.
• Inter VN traffic in and out.
• The most active ports, peers, and flows for a specified duration.
• All traffic ingress and egress from connected networks, including their attached policies.
See “Monitor > Networking” on page 441.

Networking > Dashboard—For all virtual networks for all projects in the system, view graphical traffic statistics, including:
• Total traffic in and out.
• Inter VN traffic in and out.
You can view the statistics in varying levels of granularity, for example, for a whole project, or for a single network. See “Monitor > Networking” on page 441.

Networking > Projects—View essential information about projects in the system, including name, associated networks, and traffic in and out.

Networking > Networks—View essential information about networks in the system, including name and traffic in and out.

Networking > Instances—View essential information about instances in the system, including name, associated networks, interfaces, vRouters, and traffic in and out.

Debug > Packet Capture—
• Add and manage packet analyzers.
• Attach packet captures and configure their details.
• View a list of all packet analyzers in the system and the details of their configurations, including source and destination networks, ports, and IP addresses.
Related Documentation
• Monitor > Infrastructure > Dashboard on page 412
• Monitor > Infrastructure > Control Nodes on page 415
• Monitor > Infrastructure > Virtual Routers on page 422
• Monitor > Networking on page 441
• Query > Logs on page 456
• Query > Flows on page 449
Debugging Processes Using the Contrail Introspect Feature
This topic describes how to use the Sandesh infrastructure and the Contrail Introspect
feature to debug processes.
Introspect is a mechanism for taking a program object and querying information about
it.
Sandesh is the name of a unified infrastructure in the Contrail Virtual Networking solution.
Sandesh is a way for the Contrail daemons to provide a request-response mechanism.
Requests and responses are defined in Sandesh format and the Sandesh compiler
generates code to process the requests and send responses.
Sandesh also provides a way to use a Web browser to send Sandesh requests to a Contrail
daemon and get the Sandesh responses. This feature is used to debug processes by
looking into the operational status of the daemons.
Each Contrail daemon starts an HTTP server, with the following page types:
• The main index.html, listing all Sandesh modules and the links to them.
• Sandesh module pages that present HTML forms for each Sandesh request.
• XML-based, dynamically generated pages that display Sandesh responses.
• An automatically generated page that shows all code needed for rendering and all HTTP server-client interactions.
You can display the HTTP introspect of a Contrail daemon directly by accessing the following Introspect ports:
• <controller-ip>:8083—the contrail-control introspect port.
• <compute-ip>:8085—the contrail-vrouter-agent introspect port.
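Because these introspect pages return XML, they can be scripted as well as browsed. The following is a minimal sketch, assuming Python with the requests library and a hypothetical controller address; the specific Sandesh request name used here (a UVE cache request for NodeStatus) is an assumption for illustration, mirroring the Snh_SandeshUVECacheReq URLs shown in the Troubleshooting section above.

# Minimal sketch: pull a Sandesh introspect page from contrail-control.
# Assumptions: 'requests' is available; the controller IP is hypothetical.
import requests
import xml.etree.ElementTree as ET

CONTROL_INTROSPECT = 'http://10.1.1.2:8083'

# The root page lists the available Sandesh modules.
index = requests.get(CONTROL_INTROSPECT + '/index.html')
print(index.status_code)

# Sandesh requests are plain HTTP GETs that return XML responses.
resp = requests.get(CONTROL_INTROSPECT + '/Snh_SandeshUVECacheReq?x=NodeStatus')
tree = ET.fromstring(resp.content)
print(tree.tag)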
Another way to launch the Introspect page is by browsing to a particular node page using
the Contrail Web user interface.
Figure 125 on page 409 shows the contrail-control infrastructure page. Notice the Introspect
link at the bottom of the Control Nodes Details tab window.
Figure 125: Control Nodes Details Tab Window
Figure 126 on page 410 shows the Sandesh modules for the Contrail control process
(contrail-control) Introspect port.
Figure 126: Sandesh Modules for the Contrail Control Process
Figure 127 on page 410 shows the Controller Introspect window.
Figure 127: Controller Introspect Window
Figure 128 on page 411 shows an example of the BGP Peer (bgp_peer.xml) Introspect
page.
Figure 128: BGP Peer Introspect Page
Figure 129 on page 411 shows an example of the BGP Neighbor Summary Introspect page.
Figure 129: BGP Neighbor Summary Introspect Page
Figure 130 on page 412 shows the Sandesh modules for the Contrail vRouter agent (contrail-vrouter-agent) Introspect port.
Figure 130: Sandesh Modules for the Contrail vRouter Agent Process
Figure 131 on page 412 shows an example of the Agent (agent.xml) Introspect page.
Figure 131: Agent Introspect Page
Related Documentation

Monitor > Infrastructure > Dashboard
Use Monitor > Infrastructure > Dashboard to get an “at-a-glance” view of the system
infrastructure components, including the numbers of virtual routers, control nodes,
analytics nodes, and config nodes currently operational, a bubble chart of virtual routers
showing the CPU and memory utilization, log messages, system information, and alerts.
• Monitor Dashboard on page 413
• Monitor Individual Details from the Dashboard on page 413
• Using Bubble Charts on page 414
• Color-Coding of Bubble Charts on page 414
Monitor Dashboard
Click Monitor > Infrastructure > Dashboard on the left to view the Dashboard. See
Figure 132 on page 413.
Figure 132: Monitor > Infrastructure > Dashboard
Monitor Individual Details from the Dashboard
Across the top of the Dashboard screen are summary boxes representing the components
of the system that are shown in the statistics. See Figure 133 on page 413. Any of the
control nodes, virtual routers, analytics nodes, and config nodes can be monitored
individually and in detail from the Dashboard by clicking an associated box, and drilling
down for more detail.
Figure 133: Dashboard Summary Boxes
Detailed information about monitoring each of the areas represented by the boxes is
provided in the links in Table 36 on page 414.
Table 36: Dashboard Summary Boxes

vRouters—“Monitor > Infrastructure > Virtual Routers” on page 422
Control Nodes—“Monitor > Infrastructure > Control Nodes” on page 415
Analytics Nodes—“Monitor > Infrastructure > Analytics Nodes” on page 433
Config Nodes—“Monitor > Infrastructure > Config Nodes” on page 438
Using Bubble Charts
Bubble charts show the CPU and memory utilization of components contributing to the
current analytics display, including vRouters, control nodes, config nodes, and the like.
You can hover over any bubble to get summary information about the component it
represents; see Figure 134 on page 414. You can click through the summary information
to get more details about the component.
Figure 134: Bubble Summary Information
Color-Coding of Bubble Charts
Bubble charts use the following color-coding scheme:

Control Nodes
• Blue—working as configured.
• Red—error, at least one configured peer is down.

vRouters
• Blue—working, but no instance is launched.
• Green—working with at least one instance launched.
• Red—error, there is a problem with connectivity or a vRouter is in a failed state.
Related Documentation
• Monitor > Infrastructure > Virtual Routers on page 422
• Monitor > Infrastructure > Control Nodes on page 415
• Monitor > Infrastructure > Analytics Nodes on page 433
• Monitor > Infrastructure > Config Nodes on page 438

Monitor > Infrastructure > Control Nodes
Use Monitor > Infrastructure > Control Nodes to gain insight into usage statistics for control
nodes.
• Monitor Control Nodes Summary on page 415
• Monitor Individual Control Node Details on page 416
• Monitor Individual Control Node Console on page 417
• Monitor Individual Control Node Peers on page 419
• Monitor Individual Control Node Routes on page 420
Monitor Control Nodes Summary
Select Monitor > Infrastructure > Control Nodes to see a graphical chart of average memory
usage versus average CPU percentage usage for all control nodes in the system. Also on
this screen is a list of all control nodes in the system. See Figure 135 on page 415. See
Table 37 on page 415 for descriptions of the fields on this screen.
Figure 135: Control Nodes Summary
Table 37: Control Nodes Summary Fields

Host name—The name of the control node.
IP Address—The IP address of the control node.
Version—The software version number that is installed on the control node.
Status—The current operational status of the control node, Up or Down.
CPU (%)—The CPU percentage currently in use by the selected control node.
Memory—The memory in MB currently in use and the total memory available for this control node.
Total Peers—The total number of peers for this control node.
Established in Sync Peers—The total number of peers in sync for this control node.
Established in Sync vRouters—The total number of vRouters in sync for this control node.
Monitor Individual Control Node Details
Click the name of any control nodes listed under the Control Nodes title to view an array
of graphical reports of usage and numerous details about that node. There are several
tabs available to help you probe into more details about the selected control node. The
first tab is the Details tab; see Figure 136 on page 416.
Figure 136: Individual Control Node—Details Tab
The Details tab provides a summary of the status and activity on the selected node, and
presents graphical displays of CPU and memory usage. See Table 38 on page 417 for
descriptions of the fields on this tab.
Table 38: Individual Control Node—Details Tab Fields

Hostname—The host name defined for this control node.
IP Address—The IP address of the selected node.
Status—The operational status of the control node.
Control Node Manager—The operational status of the control node manager.
Config Node—The IP address of the configuration node associated with this control node.
Analytics Node—The IP address of the node from which analytics (monitor) information is derived.
Analytics Messages—The total number of analytics messages in and out from this node.
Peers—The total number of peers established for this control node, how many are in sync, and of what type.
CPU—The average percent of CPU load incurred by this control node.
Memory—The average memory usage incurred by this control node.
Last Log—The date and time of the last log message issued about this control node.
Control Node CPU/Memory Utilization—A graphical x, y chart of the average CPU load and memory usage incurred by this control node over time.
Monitor Individual Control Node Console
Click the Console tab for an individual control node to display system logging information
for a defined time period, with the last 5 minutes of information as the default display.
See Figure 137 on page 418.
Figure 137: Individual Control Node—Console Tab
See Table 39 on page 418 for descriptions of the fields on the Console tab screen.

Table 39: Control Node: Console Tab Fields

Time Range—Select a timeframe for which to review logging information as sent to the console. There are 11 options, ranging from the Last 5 mins through the Last 24 hrs. The default display is for the Last 5 mins.
Log Category—Select a log category to display: All, _default_, XMPP, IFMap, TCP.
Log Type—Select a log type to display.
Log Level—Select a log severity level to display: SYS_EMERG, SYS_ALERT, SYS_CRIT, SYS_ERR, SYS_WARN, SYS_NOTICE, SYS_INFO, SYS_DEBUG.
Search—Enter any text string to search for and display logs containing that string.
Limit—Select an amount from a list to limit the number of messages displayed: No Limit, Limit 10 messages, Limit 50 messages, Limit 100 messages, Limit 200 messages, Limit 500 messages.
Auto Refresh—Click the check box to automatically refresh the display if more messages occur.
Display Logs—Click this button to refresh the display if you change the display criteria.
Reset—Click this button to clear any selected display criteria and reset all criteria to their default settings.
Time—This column lists the time received for each log message displayed.
Category—This column lists the log category for each log message displayed.
Log Type—This column lists the log type for each log message displayed.
Log—This column lists the log message for each log displayed.
Monitor Individual Control Node Peers
The Peers tab displays the peers for an individual control node and their peering state.
Click the expansion arrow next to the address of any peer to reveal more details. See
Figure 138 on page 420.
Figure 138: Individual Control Node—Peers Tab
See Table 40 on page 420 for descriptions of the fields on the Peers tab screen.

Table 40: Control Node: Peers Tab Fields

Peer—The hostname of the peer.
Peer Type—The type of peer.
Peer ASN—The autonomous system number of the peer.
Status—The current status of the peer.
Last flap—The last flap detected for this peer.
Messages (Recv/Sent)—The number of messages sent and received from this peer.
Monitor Individual Control Node Routes
The Routes tab displays active routes for this control node and lets you query the results.
Use horizontal and vertical scroll bars to view more results. Click the expansion icon next
to a routing table name to reveal more details about the selected route. See
Figure 139 on page 421.
Figure 139: Individual Control Node—Routes Tab
See Table 41 on page 421 for descriptions of the fields on the Routes tab screen.

Table 41: Control Node: Routes Tab Fields

Routing Instance—You can select a single routing instance from a list of all instances for which to display the active routes.
Address Family—Select an address family for which to display the active routes: All (default), l3vpn, inet, inetmcast.
(Limit Field)—Select to limit the display of active routes: Limit 10 Routes, Limit 50 Routes, Limit 100 Routes, Limit 200 Routes.
Peer Source—Select from a list of available peers the peer for which to display the active routes, or select All.
Prefix—Enter a route prefix to limit the display of active routes to only those with the designated prefix.
Protocol—Select a protocol for which to display the active routes: All (default), XMPP, BGP, ServiceChain, Static.
Display Routes—Click this button to refresh the display of routes after selecting different display criteria.
Reset—Click this button to clear any selected criteria and return the display to default values.

The route display includes the following columns:

Routing Table—The name of the routing table that stores this route.
Prefix—The route prefix for each active route displayed.
Protocol—The protocol used by the route.
Source—The host source for each active route displayed.
Next hop—The IP address of the next hop for each active route displayed.
Label—The label for each active route displayed.
Security—The security value for each active route displayed.
Origin VN—The virtual network from which the route originates.
AS Path—The AS path for each active route displayed.
Monitor > Infrastructure > Virtual Routers
• Monitor vRouters Summary on page 423
• Monitor Individual vRouters Tabs on page 424
• Monitor Individual vRouter Details Tab on page 424
• Monitor Individual vRouters Interfaces Tab on page 425
• Configuring Interface Monitoring and Mirroring on page 426
• Monitor Individual vRouters Networks Tab on page 427
• Monitor Individual vRouters ACL Tab on page 428
• Monitor Individual vRouters Flows Tab on page 429
• Monitor Individual vRouters Routes Tab on page 430
• Monitor Individual vRouter Console Tab on page 431
Monitor vRouters Summary
Click Monitor > Infrastructure > Virtual Routers to view the vRouters summary screen.
See Figure 140 on page 423.
Figure 140: vRouters Summary
See Table 42 on page 423 for descriptions of the fields on the vRouters Summary screen.
Table 42: vRouters Summary Fields
Field
Description
Host name
The name of the vRouter. Click the name of any vRouter to reveal more details.
IP Address
The IP address of the vRouter.
Version
The version of software installed on the system.
Status
The current operational status of the vRouter — Up or Down.
CPU (%)
The CPU percentage currently in use by the selected vRouter.
Memory (MB)
The memory currently in use and the total memory available for this vRouter.
Networks
The total number of networks for this vRouter.
Instances
The total number of instances for this vRouter.
Interfaces
The total number of interfaces for this vRouter.
Monitor Individual vRouters Tabs
Click the name of any vRouter to view details about performance and activities for that
vRouter. Each individual vRouters screen has the following tabs.
• Details—a display of information similar to that on the individual control node Details tab. See Figure 141 on page 424.
• Console—a display of information similar to that on the individual control node Console tab. See Figure 149 on page 432.
• Interfaces—details about associated interfaces. See Figure 142 on page 426.
• Networks—details about associated networks. See Figure 145 on page 428.
• ACL—details about access control lists. See Figure 146 on page 429.
• Flows—details about associated traffic flows. See Figure 147 on page 430.
• Routes—details about associated routes. See Figure 148 on page 431.
Monitor Individual vRouter Details Tab
The Details tab provides a summary of the status and activity on the selected node, and
presents graphical displays of CPU and memory usage; see Figure 141 on page 424.
See Table 43 on page 424 for descriptions of the fields on this tab.
Figure 141: Individual vRouters—Details Tab
Table 43: vRouters Details Tab Fields
Field
Description
Hostname
The hostname of the vRouter.
IP Address
The IP address of the selected vRouter.
Status
The operational status of the vRouter.
vRouter Node Manager
The operational status of the vRouter node manager.
Analytics Node
The IP address of the node from which analytics (monitor) information is derived.
Control Nodes
The IP address of the control node associated with this vRouter.
Analytics Messages
The total number of analytics messages in and out from this node.
XMPP Messages
The total number of XMPP messages that have gone in and out of this vRouter.
Flow
The number of active flows and the total flows for this vRouter.
Networks
The number of networks associated with this vRouter.
Interfaces
The number of interfaces associated with this vRouter.
Instances
The number of instances associated with this vRouter.
Last Log
The date and time of the last log message issued about this vRouter.
vRouter CPU/Memory Utilization
Graphs (x, y) displaying CPU and memory utilization averages over time for this vRouter,
in comparison to system utilization averages.
Monitor Individual vRouters Interfaces Tab
The Interfaces tab displays details about the interfaces associated with an individual
vRouter. Click the expansion arrow next to any interface name to reveal more details.
Use horizontal and vertical scroll bars to access all portions of the screen. See
Figure 142 on page 426. See Table 44 on page 426 for descriptions of the fields on the
Interfaces tab screen.
Figure 142: Individual vRouters—Interfaces Tab
Table 44: vRouters: Interfaces Tab Fields
Field
Description
Name
The name of the interface.
Label
The label for the interface.
Status
The current status of the interface.
Network
The network associated with the interface.
IP Address
The IP address of the interface.
Floating IP
Displays any floating IP addresses associated with the interface.
Instance
The name of any instance associated with the interface.
Configuring Interface Monitoring and Mirroring
Contrail supports monitoring of traffic on any guest virtual machine interface through
the Contrail user interface.
When interface monitoring (packet capture) is selected, a default analyzer is created
and all traffic from the selected interface is mirrored and sent to the default analyzer. If
a mirroring instance is already launched, the traffic will be redirected to the selected
instance. The interface traffic is only mirrored during the time that the monitor packet
capture interface is in use. When the capture screen is closed, interface mirroring stops.
To configure interface mirroring:
1. Select Monitor > Infrastructure > Virtual Routers, then select the vRouter that has the
interface to mirror.
2. In the list of attributes for the vRouter, select Interfaces; see Figure 143 on page 427.
Figure 143: Individual vRouter
A list of interfaces for that vRouter appears.
3. For the interface to mirror, click the Action icon in the last column and select the option
Packet Capture; see Figure 144 on page 427.
Figure 144: Interfaces
The mirror packet capture starts and displays at this screen.
The mirror packet capture stops when you exit this screen.
Monitor Individual vRouters Networks Tab
The Networks tab displays details about the networks associated with an individual
vRouter. Click the expansion arrow at the name of any network to reveal more details.
See Figure 145 on page 428. See Table 45 on page 428 for descriptions of the fields on the
Networks tab screen.
Figure 145: Individual vRouters—Networks Tab
Table 45: vRouters: Networks Tab Fields
Field
Description
Name
The name of each network associated with this vRouter.
ACLs
The name of the access control list associated with the listed network.
VRF
The identifier of the VRF associated with the listed network.
Action
Click the icon to select the action: Edit, Delete
Monitor Individual vRouters ACL Tab
The ACL tab displays details about the access control lists (ACLs) associated with an
individual vRouter. Click the expansion arrow next to the UUID of any ACL to reveal more
details. See Figure 146 on page 429. See Table 46 on page 429 for descriptions of the fields
on the ACL tab screen.
Figure 146: Individual vRouters—ACL Tab
Table 46: vRouters: ACL Tab Fields
Field
Description
UUID
The universal unique identifier (UUID) associated with the listed ACL.
Flows
The flows associated with the listed ACL.
Action
The traffic action defined by the listed ACL.
Protocol
The protocol associated with the listed ACL.
Source Network or Prefix
The name or prefix of the source network associated with the listed ACL.
Source Port
The source port associated with the listed ACL.
Destination Network or Prefix
The name or prefix of the destination network associated with the listed ACL.
Destination Port
The destination port associated with the listed ACL.
ACE Id
The ACE ID associated with the listed ACL.
Monitor Individual vRouters Flows Tab
The Flows tab displays details about the flows associated with an individual vRouter.
Click the expansion arrow next to any ACL/SG UUID to reveal more details. Use the
horizontal and vertical scroll bars to access all portions of the screen. See
Figure 147 on page 430. See Table 47 on page 430 for descriptions of the fields on the Flows
tab screen.
Figure 147: Individual vRouters—Flows Tab
Table 47: vRouters: Flows Tab Fields
Field
Description
ACL UUID
The default is to show All flows; however, you can select any single flow from a drop-down
list to view its details.
ACL / SG UUID
The universal unique identifier (UUID) associated with the listed ACL or SG.
Protocol
The protocol associated with the listed flow.
Src Network
The name of the source network associated with the listed flow.
Src IP
The source IP address associated with the listed flow.
Src Port
The source port of the listed flow.
Dest Network
The name of the destination network associated with the listed flow.
Dest IP
The destination IP address associated with the listed flow.
Dest Port
The destination port associated with the listed flow.
Bytes/Pkts
The number of bytes and packets associated with the listed flow.
Setup Time
The setup time associated with the listed flow.
Monitor Individual vRouters Routes Tab
The Routes tab displays details about unicast and multicast routes in specific VRFs for
an individual vRouter. Click the expansion arrow next to the route prefix to reveal more
details. See Figure 148 on page 431. See Table 48 on page 431 for descriptions of the fields
on the Routes tab screen.
Figure 148: Individual vRouters—Routes Tab
Table 48: vRouters: Routes Tab Fields
Field
Description
VRF
Select from a drop-down list the virtual routing and forwarding (VRF) instance to view.
Show Routes
Select to show the route type: Unicast or Multicast.
Prefix
The IP address prefix of a route.
Next hop
The next hop method for this route.
Next hop details
The next hop details for this route.
Monitor Individual vRouter Console Tab
Click the Console tab for an individual vRouter to display system logging information for
a defined time period, with the last 5 minutes of information as the default display. See
Figure 149 on page 432. See Table 49 on page 432 for descriptions of the fields on the
Console tab screen.
Figure 149: Individual vRouter—Console Tab
Table 49: Control Node: Console Tab Fields
Field
Description
Time Range
Select a timeframe for which to review logging information as sent to the console. There are
several options, ranging from Last 5 mins through Last 24 hrs, plus a Custom time range.
From Time
If you select Custom in Time Range, enter the start time.
To Time
If you select Custom in Time Range, enter the end time.
Log Category
Select a log category to display:
• All
• _default_
• XMPP
• IFMap
• TCP
Log Type
Select a log type to display.
Log Level
Select a log severity level to display:
• SYS_EMERG
• SYS_ALERT
• SYS_CRIT
• SYS_ERR
• SYS_WARN
• SYS_NOTICE
• SYS_INFO
• SYS_DEBUG
Limit
Select from a list an amount to limit the number of messages displayed:
• No Limit
• Limit 10 messages
• Limit 50 messages
• Limit 100 messages
• Limit 200 messages
• Limit 500 messages
Auto Refresh
Click the check box to automatically refresh the display if more messages occur.
Display Logs
Click this button to refresh the display if you change the display criteria.
Reset
Click this button to clear any selected display criteria and reset all criteria to their default settings.
Columns
Time
This column lists the time received for each log message displayed.
Category
This column lists the log category for each log message displayed.
Log Type
This column lists the log type for each log message displayed.
Log
This column lists the log message for each log displayed.
Monitor > Infrastructure > Analytics Nodes
Select Monitor > Infrastructure > Analytics Nodes to view the console logs, generators,
and query expansion (QE) queries of the analytics nodes.
• Monitor Analytics Nodes on page 433
• Monitor Analytics Individual Node Details Tab on page 434
• Monitor Analytics Individual Node Generators Tab on page 435
• Monitor Analytics Individual Node QE Queries Tab on page 436
• Monitor Analytics Individual Node Console Tab on page 437
Monitor Analytics Nodes
Select Monitor > Infrastructure > Analytics Nodes to view a summary of activities for the
analytics nodes; see Figure 150 on page 434. See Table 50 on page 434 for descriptions of
the fields on the analytics summary.
Figure 150: Analytics Nodes Summary
Table 50: Fields on Analytics Nodes Summary
Field
Description
Host name
The name of this node.
IP address
The IP address of this node.
Version
The version of software installed on the system.
Status
The current operational status of the node — Up or Down — and the length of time it has
been in that state.
CPU (%)
The average CPU percentage usage for this node.
Memory
The average memory usage for this node.
Generators
The total number of generators for this node.
Monitor Analytics Individual Node Details Tab
Click the name of any analytics node displayed on the analytics summary to view the
Details tab for that node. See Figure 151 on page 435.
See Table 51 on page 435 for descriptions of the fields on this screen.
Figure 151: Monitor Analytics Individual Node Details Tab
Table 51: Monitor Analytics Individual Node Details Tab Fields
Field
Description
Hostname
The name of this node.
IP Address
The IP address of this node.
Version
The installed version of the software.
Overall Node Status
The current operational status of the node — Up or Down — and the length of time it has
been in this state.
Processes
The current status of each analytics process, including Collector, Query Engine, and
OpServer.
CPU (%)
The average CPU percentage usage for this node.
Memory
The average memory usage of this node.
Messages
The total number of messages for this node.
Generators
The total number of generators associated with this node.
Last Log
The date and time of the last log message issued about this node.
Monitor Analytics Individual Node Generators Tab
The Generators tab displays information about the generators for an individual analytics
node; see Figure 152 on page 436. Click the expansion arrow next to any generator name
to reveal more details. See Table 52 on page 436 for descriptions of the fields on the
Generators tab screen.
Figure 152: Individual Analytics Node—Generators Tab
Table 52: Monitor Analytics Individual Node Generators Tab Fields
Field
Description
Name
The host name of the generator.
Status
The current status of the generator — Up or Down — and the length of time it has been in that state.
Messages
The number of messages sent and received from this generator.
Bytes
The total message size in bytes.
Monitor Analytics Individual Node QE Queries Tab
The QE Queries tab displays the number of query expansion (QE) messages that are in
the queue for this analytics node. See Figure 153 on page 436.
See Table 53 on page 436 for descriptions of the fields on the QE Queries tab screen.
Figure 153: Individual Analytics Node—QE QueriesTab
Table 53: Analytics Node QE Queries Tab Fields
Field
Description
Enqueue Time
The length of time this message has been in the queue waiting to be delivered.
Query
The query message.
Progress (%)
The percentage progress for the message delivery.
Monitor Analytics Individual Node Console Tab
Click the Console tab for an individual analytics node to display system logging information
for a defined time period. See Figure 154 on page 437. See Table 54 on page 437 for
descriptions of the fields on the Console tab screen.
Figure 154: Analytics Individual Node—Console Tab
Table 54: Monitor Analytics Individual Node Console Tab Fields
Field
Description
Time Range
Select a timeframe for which to review logging information as sent to the console. There are 11
options, ranging from Last 5 mins through Last 24 hrs. The default display is for the
Last 5 mins.
Log Category
Select a log category to display:
All
_default_
XMPP
IFMap
TCP
Log Type
Select a log type to display.
Log Level
Select a log severity level to display:
SYS_EMERG
SYS_ALERT
SYS_CRIT
SYS_ERR
SYS_WARN
SYS_NOTICE
SYS_INFO
SYS_DEBUG
Keywords
Enter any text string to search for and display logs containing that string.
(Limit field)
Select the number of messages to display:
No Limit
Limit 10 messages
Limit 50 messages
Limit 100 messages
Limit 200 messages
Limit 500 messages
Auto Refresh
Click the check box to automatically refresh the display if more messages occur.
Display Logs
Click this button to refresh the display if you change the display criteria.
Reset
Click this button to clear any selected display criteria and reset all criteria to their default settings.
Time
This column lists the time received for each log message displayed.
Category
This column lists the log category for each log message displayed.
Log Type
This column lists the log type for each log message displayed.
Log
This column lists the log message for each log displayed.
Monitor > Infrastructure > Config Nodes
Select Monitor > Infrastructure > Config Nodes to view information about the config
nodes in the system.
• Monitor Config Nodes on page 438
• Monitor Individual Config Node Details on page 439
• Monitor Individual Config Node Console on page 440
Monitor Config Nodes
Select Monitor > Infrastructure > Config Nodes to view a summary of activities for the
config nodes. See Figure 155 on page 439.
Figure 155: Config Nodes Summary
Table 55 on page 439 describes the fields in the Config Nodes summary.
Table 55: Config Nodes Summary Fields
Field
Description
Host name
The name of this node.
IP address
The IP address of this node.
Version
The version of software installed on the system.
Status
The current operational status of the node — Up or Down — and the length of time it has
been in that state.
CPU (%)
The average CPU percentage usage for this node.
Memory
The average memory usage for this node.
Monitor Individual Config Node Details
Click the name of any config node displayed on the config nodes summary to view the
Details tab for that node; see Figure 156 on page 439.
Figure 156: Individual Config Nodes— Details Tab
Table 56 on page 440 describes the fields on the Details screen.
Table 56: Individual Config Nodes— Details Tab Fields
Field
Description
Hostname
The name of the config node.
IP Address
The IP address of this node.
Version
The installed version of the software.
Overall Node Status
The current operational status of the node — Up or Down — and the length of time it has
been in this state.
Processes
The current operational status of the processes associated with the config node, including
API Server, Schema Transformer, Service Monitor, Discovery, and IFMap.
Analytics Node
The analytics node associated with this node.
CPU (%)
The average CPU percentage usage for this node.
Memory
The average memory usage by this node.
Monitor Individual Config Node Console
Click the Console tab for an individual config node to display system logging information
for a defined time period. See Figure 157 on page 440.
Figure 157: Individual Config Node—Console Tab
See Table 57 on page 440 for descriptions of the fields on the Console tab screen.
Table 57: Individual Config Node-Console Tab Fields
Field
Description
Time Range
Select a timeframe for which to review logging information as sent to the console. Use the
drop-down calendar in the From Time and To Time fields to select the dates and times to include in
the time range for viewing.
Log Category
Select a log category to display from the drop-down menu. The option to view All is also available.
Log Type
Select a log type to display.
Log Level
Select a log severity level to display.
Limit
Select from a list an amount to limit the number of messages displayed:
All
Limit 10 messages
Limit 50 messages
Limit 100 messages
Limit 200 messages
Limit 500 messages
Keywords
Enter any keywords by which to filter the log messages displayed.
Auto Refresh
Click the check box to automatically refresh the display if more messages occur.
Display Logs
Click this button to refresh the display if you change the display criteria.
Reset
Click this button to clear any selected display criteria and reset all criteria to their default settings.
Monitor > Networking
The Monitor > Networking pages give an overview of the networking traffic statistics and
health of domains, projects within domains, virtual networks within projects, and virtual
machines within virtual networks.
• Monitor > Networking Menu Options on page 441
• Monitor > Networking > Dashboard on page 442
• Monitor > Networking > Projects on page 443
• Monitor Projects Detail on page 444
• Monitor > Networking > Networks on page 446
Monitor > Networking Menu Options
Figure 158 on page 442 shows the menu options available under Monitor > Networking.
Figure 158: Monitor Networking Menu Options
Monitor > Networking > Dashboard
Select Monitor > Networking > Dashboard to gain insight into usage statistics for domains,
virtual networks, projects, and virtual machines. When you select this option, the Traffic
Statistics for Domain window is displayed as shown in Figure 159 on page 442.
Figure 159: Traffic Statistics for Domain Window
Table 58 on page 443 describes the fields in the Traffic Statistics for Domain window.
Table 58: Traffic Statistics for Domain Fields
Field
Description
Total Traffic In
The volume of traffic into this domain.
Total Traffic Out
The volume of traffic out of this domain.
Inter VN Traffic In
The volume of inter-virtual network traffic into this domain.
Inter VN Traffic Out
The volume of inter-virtual network traffic out of this domain.
Projects
This chart displays the networks and interfaces for projects with the most throughput over
the past 30 minutes. Click Projects, then select Monitor > Networking > Projects to display
more detailed statistics.
Networks
This chart displays the networks for projects with the most throughput over the past 30
minutes. Click Networks, then select Monitor > Networking > Networks to display more detailed
statistics.
Monitor > Networking > Projects
Select Monitor > Networking > Projects to see information about projects in the system.
See Figure 160 on page 443.
Figure 160: Monitor > Networking > Projects
See Table 59 on page 443 for descriptions of the fields on this screen.
Table 59: Projects Summary Fields
Field
Description
Projects
The name of the project. You can click the name to access details about connectivity for
this project.
Networks
The number of networks associated with this project.
Traffic In
The volume of traffic into this project.
Traffic Out
The volume of traffic out of this project.
Monitor Projects Detail
You can click any of the projects listed on the Projects Summary to get details about
connectivity, source and destination port distribution, and instances. When you click an
individual project, the Summary tab for Connectivity Details is displayed as shown in
Figure 161 on page 444. Hover over any of the connections to get more details.
Figure 161: Monitor Projects Connectivity Details
In the Connectivity Details window you can click the links between the virtual networks
to view the traffic statistics between the virtual networks.
The Traffic Statistics information is also available when you select Monitor > Networking
> Networks as shown in Figure 162 on page 444.
Figure 162: Traffic Statistics Between Networks
In the Connectivity Details window you can click the Instances tab to get a summary of
details for each of the instances in this project.
Figure 163: Projects Instances Summary
See Table 60 on page 445 for descriptions of the fields on this screen.
Table 60: Projects Instances Summary Fields
Field
Description
Instance
The name of the instance. Click the name then select Monitor > Networking > Instances to
display details about the traffic statistics for this instance.
Virtual Network
The virtual network associated with this instance.
Interfaces
The number of interfaces associated with this instance.
vRouter
The name of the vRouter associated with this instance.
IP Address
Any IP addresses associated with this instance.
Floating IP
Any floating IP addresses associated with this instance.
Traffic (In/Out)
The volume of traffic in KB or MB that is passing in and out of this instance.
Select Monitor > Networking > Instances to display instance traffic statistics as shown in
Figure 164 on page 446.
Figure 164: Instance Traffic Statistics
Monitor > Networking > Networks
Select Monitor > Networking > Networks to view a summary of the virtual networks in
your system. See Figure 165 on page 446.
Figure 165: Network Summary
Table 61: Network Summary Fields
Field
Description
Network
The domain and network name of the virtual network. Click the arrow next to the name to
display more information about the network, including the number of ingress and egress flows,
the number of ACL rules, the number of interfaces, and the total traffic in and out.
Instances
The number of instances launched in this network.
Traffic (In/Out)
The volume of inter-virtual network traffic in and out of this network.
Throughput (In/Out)
The throughput of inter-virtual network traffic in and out of this network.
At Monitor > Networking > Networks you can click the name of any of the listed networks
to get details about network connectivity, traffic statistics, port distribution, instances,
and other details by clicking the tabs across the top of the page.
Figure 166 on page 447 shows the Summary tab for an individual network, which displays
connectivity details and traffic statistics for the selected network.
Figure 166: Individual Network Connectivity Details—Summary Tab
Figure 167 on page 447 shows the Port Map tab for an individual network, which displays
the relative distribution of traffic for this network by protocol, by port.
Figure 167: Individual Network—Port Map Tab
Figure 168 on page 448 shows the Port Distribution tab for an individual network, which
displays the relative distribution of traffic in and out by source port and destination port.
Figure 168: Individual Network—Port Distribution Tab
Figure 169 on page 448 shows the Instances tab for an individual network, which displays
details for each instance associated with this network, including the number of interfaces,
the associated vRouter, the instance IP address, and the volume of traffic in and out.
Additionally, you can click the arrow near the instance name to reveal even more details
about the instance—the interfaces and their addresses, UUID, CPU usage, and memory
used out of the total amount available.
Figure 169: Individual Network Instances Tab
Figure 170 on page 449 shows the Details tab for an individual network, which displays the
code used to define this network, the User-Visible Entity (UVE) code.
Figure 170: Individual Network Details Tab
Query > Flows
Use the Query > Flows option to perform rich and complex SQL-like queries on flows in
the Contrail Controller. You can use the query results for such things as gaining insight
into the operation of applications in a virtual network, performing historical analysis of
flow issues, and pinpointing problem areas with flows.
• Query > Flows > Flow Series on page 449
• Example: Query Flow Series on page 452
• Query > Flow Records on page 453
• Query > Flows > Query Queue on page 455
Query > Flows > Flow Series
Use Query > Flows > Flow Series to create queries on the flow series table, with results
in the form of time series data for flow series. See Figure 171 on page 450.
Figure 171: Query Flow Series
The query fields available on the screen for the Flow Series tab are described in
Table 62 on page 450. Enter query data into the fields to create a SQL-like query to display
and analyze flows.
Table 62: Query Flow Series Fields
Field
Description
Time Range
Select a range of time for which to see the flow series:
• Last 10 Mins
• Last 30 Mins
• Last 1 Hr
• Last 6 Hrs
• Last 12 Hrs
• Custom
Click Custom to enter a specific custom time range in two new fields: From Time and To Time.
Select
Click the edit button (pencil icon) to open a Select window (Figure 172 on page 451), where you can click
one or more boxes to select the fields to display from the flow series, such as Source VN, Dest VN, Bytes,
Packets, and more.
Where
Click the edit button (pencil icon) to open a query-writing window, where you can specify query values
for variables such as sourcevn, sourceip, destvn, destip, protocol, sport, dport.
Direction
Select the desired flow direction: INGRESS or EGRESS.
Filter
Click the edit button (pencil icon) to open a Filter window (Figure 173 on page 452), where you can select
filter items by which to sort, sort order, and limits to the number of results returned.
Run Query
Click this button to retrieve the flows that match the query you have created. The flows are listed on
the lower portion of the screen in a box with columns identifying the selected fields for each flow.
(graph buttons)
When Time Granularity is selected, you have the option to view results in graph or flowchart form. Graph
buttons appear on the screen above the Export button. Click a graph button to transform the tabular
results into a graphical chart display.
Export
This button appears after you click Run Query, allowing you to export the list of flows to a text/csv file.
The Select window allows you to select one or more attributes of a flow series by clicking
the check box for each attribute desired, see Figure 172 on page 451. The upper section of
the Select window includes field names, and the lower portion lets you select units. Use
the Time Granularity feature to aggregate bytes and packets in intervals by selecting
SUM(Bytes) or SUM(Packets).
Figure 172: Flow Series Select
Use the Filter window to refine the display of query results for flows, by defining an
attribute by which to sort the results, the sort order of the results, and any limit needed
to restrict the number of results. See Figure 173 on page 452.
Figure 173: Flow Series Filter
Example: Query Flow Series
The following is an example flow series query that returns the time series of the
summation of traffic in bytes for all combinations of source VN and destination VN for
the last 10 minutes, with the bytes aggregated in 10 second intervals. See
Figure 174 on page 452.
Figure 174: Example: Query Flow Series
The query returns tabular time series data, see Figure 175 on page 453, for the following
combinations of Source VN and Dest VN:
1. Flow Class 1: Source VN = default-domain:demo:front-end, Dest VN = __UNKNOWN__
2. Flow Class 2: Source VN = default-domain:demo:front-end, Dest VN = default-domain:demo:back-end
Figure 175: Query Flow Series Tabular Results
Because the Time Granularity option was selected, the results can also be displayed as
graphical charts. Click the graph button on the right side of the tabular results. The results
are now displayed in a graphical flow chart. See Figure 176 on page 453.
Figure 176: Query Flow Series Graphical Results
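For comparison, the same kind of flow series query can also be submitted programmatically. The following is a minimal sketch, assuming the analytics API listens on port 8081 and accepts a POST to /analytics/query, as in typical Contrail deployments; <analytics_ip> is a placeholder, and the field names and op code (1 for equals) follow the analytics query conventions:
curl -X POST -H "Content-Type: application/json" http://<analytics_ip>:8081/analytics/query -d '{"table": "FlowSeriesTable", "start_time": "now-10m", "end_time": "now", "select_fields": ["T=10", "sourcevn", "destvn", "sum(bytes)"], "where": [[{"name": "sourcevn", "value": "default-domain:demo:front-end", "op": 1}]]}'
The T=10 entry in select_fields requests 10-second aggregation intervals, mirroring the Time Granularity selection in the UI.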
Query > Flow Records
Click Query > Flow Records to create queries on individual flow records for detailed
debugging of connectivity issues between applications and virtual machines. Queries at
this level return records of the active flows within a given time period.
Figure 177: Flow Records
The query fields available on the screen for the Flow Records tab are described in
Table 63 on page 454. Enter query data into the fields to create a SQL-like query to display
and analyze flows.
Table 63: Query Flow Records Fields
Field
Description
Time Range
Select a range of time for which to see the flow records:
• Last 10 Mins
• Last 30 Mins
• Last 1 Hr
• Last 6 Hrs
• Last 12 Hrs
• Custom
Click Custom to enter a specified custom time range in two new fields: From Time and To Time.
Select
Click the edit button (pencil icon) to open a Select window (Figure 178 on page 455), where you can click
one or more boxes to select attributes to display for the flow records, including Setup Time, Teardown
Time, Aggregate Bytes, and Aggregate Packets.
Where
Click the edit button (pencil icon) to open a query-writing window where you can specify query values
for sourcevn, sourceip, destvn, destip, protocol, sport, and dport.
Direction
Select the desired flow direction: INGRESS or EGRESS.
Run Query
Click this button to retrieve the flow records that match the query you have created. The records are
listed on the lower portion of the screen in a box with columns identifying the fields for each flow.
Export
This button appears after you click Run Query, allowing you to export the list of flows to a text/csv file.
The Select window allows you to select one or more attributes to display for the flow
records selected, see Figure 178 on page 455.
Figure 178: Flow Records Select Window
The user can restrict the query to a particular source VN and destination VN combination
using the Where section.
The Where Clause supports logical AND and logical OR operations, and is modeled as a
logical OR of multiple AND terms: ((term1 AND term2 AND term3 ...) OR (term4 AND
term5) OR ...).
Each term is a single variable expression, such as Source VN = VN1.
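For example, a clause restricting results to traffic in either direction between the two sample networks used earlier in this chapter could be written as:
(sourcevn = default-domain:demo:front-end AND destvn = default-domain:demo:back-end) OR (sourcevn = default-domain:demo:back-end AND destvn = default-domain:demo:front-end)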
Figure 179: Where Clause Window
Query > Flows > Query Queue
Click Query > Flows > Query Queue to view queries that are in queue, waiting to be
performed on the data. See Figure 180 on page 456.
Figure 180: Flows Query Queue
The fields displayed on the Query Queue screen are described in
Table 64 on page 456.
Table 64: Flows Query Queue Fields
Field
Description
Date
The date and time the query was started.
Query
A display of the parameters set for the query.
Progress
The percentage completion of the query to date.
Records
The number of records matching the query to date.
Status
The status of the query, such as completed.
Time Taken
The amount of time in seconds it has taken the query to return the matching records.
(Action icon)
Click the Action icon and select View Results to view a list of the records that match the query,
or click Delete to remove the query from the queue.
Query > Logs
The Query > Logs option allows you to access the system log and object log activity of
any Contrail Controller component from one central location.
• Query > Logs Menu Options on page 456
• Query > Logs > System Logs on page 457
• Sample Query for System Logs on page 458
• Query > Logs > Object Logs on page 459
Query > Logs Menu Options
Click Query > Logs to access the Query Logs menu, where you can select System Logs to
view system log activity, Object Logs to view object logs activity, and Query Queue to
create custom queries of log activity; see Figure 181 on page 457.
Figure 181: Query > Logs
Query > Logs > System Logs
Click Query > Logs > System Logs to access the Query System Logs menu, where you can
view system logs according to criteria that you determine. See Figure 182 on page 457.
Figure 182: Query > Logs > System Logs
The query fields available on the Query System Logs screen are described in
Table 65 on page 457.
Table 65: Query System Logs Fields
Field
Description
Time Range
Select a range of time for which to see the system logs:
• Last 10 Mins
• Last 30 Mins
• Last 1 Hr
• Last 6 Hrs
• Last 12 Hrs
• Custom
If you click Custom, enter a desired time range in two new fields: From Time and To Time.
Where
Click the edit button (pencil icon) to open a query-writing window, where you can specify query values
for variables such as Source, Module, MessageType, and the like, in order to retrieve specific information.
Level
Select the message severity level to view:
• SYS_NOTICE
• SYS_EMERG
• SYS_ALERT
• SYS_CRIT
• SYS_ERR
• SYS_WARN
• SYS_INFO
• SYS_DEBUG
Run Query
Click this button to retrieve the system logs that match the query. The logs are listed in a box with
columns showing the Time, Source, Module Id, Category, Log Type, and Log message.
Export
This button appears after you click Run Query, allowing you to export the list of system messages to a
text/csv file.
Sample Query for System Logs
This section shows a sample system logs query designed to show all System Logs from
ModuleId = VRouterAgent on Source = b1s16 and filtered by Level = SYS_DEBUG.
1. At the Query System Logs screen, click in the Where field to access the Where query
screen, enter information defining the location to query in the Edit Where Clause
section, and click OK; see Figure 183 on page 459.
Figure 183: Edit Where Clause
2. The information you defined at the Where screen is displayed on the Query System Logs
screen. Enter any other defining information needed; see Figure 184 on page 459. When finished,
click Run Query to display the results.
Figure 184: Sample Query System Logs
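The same query can also be approximated from the analytics node with the contrail-logs utility. The following is a sketch; the source, module, level, and time-window option names are assumptions about the utility's filters, so verify them with contrail-logs --help on your system:
contrail-logs --last 10m --source b1s16 --module VRouterAgent --level SYS_DEBUG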
Query > Logs > Object Logs
Object logs allow you to search for logs associated with a particular object, for example,
all logs for a specified virtual network. Object logs record information related to
modifications made to objects, including creation, deletion, and other modifications; see
Figure 185 on page 460.
Figure 185: Query > Logs > Object Logs
The query fields available on the Object Logs screen are described in Table 66 on page 460.
Table 66: Object Logs Query Fields
Field
Description
Time Range
Select a range of time for which to see the logs:
• Last 10 Mins
• Last 30 Mins
• Last 1 Hr
• Last 6 Hrs
• Last 12 Hrs
• Custom
If you click Custom, enter a desired time range in two new fields: From Time and To Time.
Object Type
Select the object type for which to show logs:
• Virtual Network
• Virtual Machine
• Virtual Router
• BGP Peer
• Routing Instance
• XMPP Connection
Object Id
Select from a list of available identifiers the name of the object you wish to use.
Select
Click the edit button (pencil icon) to open a window where you can select searchable types by clicking
a check box:
• ObjectLog
• SystemLog
Where
Click the edit button (pencil icon) to open the query-writing window, where you can specify query
values for variables such as Source, ModuleId, and MessageType, in order to retrieve information as
specific as you wish.
Run Query
Click this button to retrieve the system logs that match the query. The logs are listed in a box with
columns showing the Time, Source, Module Id, Category, Log Type, and Log message.
Export
This button appears after you click Run Query, allowing you to export the list of system messages to
a text/csv file.
System Log Receiver in Contrail Analytics
• Overview on page 461
• Redirecting System Logs to Contrail Collector on page 461
• Exporting Logs from Contrail Analytics on page 461
Overview
The contrail-collector process on the Contrail Analytics node can act as a system log
receiver.
Redirecting System Logs to Contrail Collector
You can enable the contrail-collector to receive system logs by giving a valid syslog_port
as a command-line option:
--DEFAULT.syslog_port <arg>
or by adding syslog_port in the DEFAULT section of the configuration file at
/etc/contrail/contrail-collector.conf.
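For example, a minimal sketch of the configuration-file approach, assuming the standard syslog port 514 is appropriate for your deployment:
[DEFAULT]
syslog_port = 514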
For nodes to send system logs to the contrail-collector, each node's system log configuration
must direct its system logs to the contrail-collector.
Example
Add the following line in /etc/rsyslog.d/50-default.conf on an Ubuntu system to redirect
the system logs to the contrail-collector, using @ for UDP or @@ for TCP:
*.* @<collector_ip>:<collector_syslog_port>
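For instance, assuming a collector at 10.84.14.38 listening on TCP port 514 (illustrative values), the line would read:
*.* @@10.84.14.38:514
Restart the rsyslog service afterward (for example, service rsyslog restart on Ubuntu) so the change takes effect.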
The logs can then be retrieved by using Contrail tools, either the contrail-logs utility
on the analytics node or the Contrail user interface on the system log query page.
Exporting Logs from Contrail Analytics
You can also export logs stored in Contrail analytics to another system log receiver by
using the contrail-logs utility.
The contrail-logs utility accepts the options --send-syslog, --syslog-server, and --syslog-port
to query Contrail analytics and then send the results as system logs to a system log server.
This is an on-demand command; to send logs continuously to another system log server,
you can write a cron job or another job that repeatedly invokes contrail-logs.
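A sketch of such an invocation, assuming a receiver at 10.84.14.40 on port 514 and a 10-minute query window (the time-range option name is an assumption; verify with contrail-logs --help):
contrail-logs --last 10m --send-syslog --syslog-server 10.84.14.40 --syslog-port 514
Scheduling this command every 10 minutes from cron approximates continuous forwarding.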
Example: Debugging Connectivity Using Monitoring for Troubleshooting
• Using Monitoring to Debug Connectivity on page 462
Using Monitoring to Debug Connectivity
This example shows how you can use monitoring to debug connectivity in your Contrail
system. You can use the demo setup in Contrail to use these steps on your own.
1. Navigate to Monitor > Networking > Networks > default-domain:demo:vn0 and locate
instance ed6abd16-250e-4ec5-a382-5cbc458fb0ca with IP address 192.168.0.252 in the virtual
network vn0; see Figure 186 on page 462.
Figure 186: Navigate to Instance
2. Click the instance to view Traffic Statistics for Instance; see Figure 187 on page 462.
Figure 187: Traffic Statistics for Instance
3. Repeat for instance d26c0b31-c795-400e-b8be-4d3e6de77dcf with IP address 192.168.0.253 in
the virtual network vn16; see Figure 188 on page 462 and Figure 189 on page 463.
Figure 188: Navigate to Instance
Figure 189: Traffic Statistics for Instance
4. From Monitor > Infrastructure > Virtual Routers > a3s18 > Interfaces, we can see that
Instance ed6abd16-250e-4ec5-a382-5cbc458fb0ca is hosted on Virtual Router a3s18;
see Figure 190 on page 463.
Figure 190: Navigate to a3s18 Interfaces
5. From Monitor > Infrastructure > Virtual Routers > a3s19 > Interfaces, we can see that
Instance d26c0b31-c795-400e-b8be-4d3e6de77dcf is hosted on Virtual Router a3s19;
see Figure 191 on page 463.
Figure 191: Navigate to a3s19 Interfaces
6. Virtual Routers a3s18 and a3s19 have the ACL entries to allow connectivity between
default-domain:demo:vn0 and default-domain:demo:vn16 networks; see
Figure 192 on page 463 and Figure 193 on page 464.
Figure 192: ACL Connectivity a3s18
Figure 193: ACL Connectivity a3s19
7. Next, verify the routes on the control node for routing instances
default-domain:demo:vn0:vn0 and default-domain:demo:vn16:vn16; see
Figure 194 on page 464 and Figure 195 on page 464.
Figure 194: Routes default-domain:demo:vn0:vn0
Figure 195: Routes default-domain:demo:vn16:vn16
8. We can see that VRF default-domain:demo:vn0:vn0 on Virtual Router a3s18 has the
appropriate route and next hop to reach VRF default-domain:demo:vn16:vn16 on Virtual
Router a3s19; see Figure 196 on page 465.
Figure 196: Verify Route and Next Hop a3s18
9. We can see that VRF default-domain:demo:vn16:vn16 on Virtual Router a3s19 has the
appropriate route and next hop to reach VRF default-domain:demo:vn0:vn0 on Virtual
Router a3s18; see Figure 197 on page 465.
Figure 197: Verify Route and Next Hop a3s19
10. Finally, flows between instances (IPs 192.168.0.252 and 192.168.16.253) can be verified
on Virtual Routers a3s18 and a3s19; see Figure 198 on page 466 and
Figure 199 on page 466.
Figure 198: Flows for a3s18
Figure 199: Flows for a3s19
CHAPTER 18
Common Support Answers
• Debugging Ping Failures for Policy-Connected Networks on page 467
• Debugging BGP Peering and Route Exchange in Contrail on page 473
• Troubleshooting the Floating IP Address Pool in Contrail on page 488
• Removing Stale Virtual Machines and Virtual Machine Interfaces on page 516
• Troubleshooting Link-Local Services in Contrail on page 520
Debugging Ping Failures for Policy-Connected Networks
This topic presents troubleshooting scenarios and steps for resolving reachability issues
(ping failures) when working with policy-connected virtual networks.
These are the methods used to configure reachability for a virtual network or virtual
machine:
• Use network policy to exchange virtual network routes.
• Use a floating IP address pool to associate an IP address from a destination virtual network to virtual machine(s) in the source virtual network.
• Use an ASN/RT configuration to exchange virtual network routes with an MX Series router gateway.
• Use a service instance static route configuration to route between service instances in two virtual networks.
This topic focuses on troubleshooting reachability for the first method: using network
policy to exchange routes between virtual networks.
Troubleshooting Procedure for Policy-Connected Network
1. Check the state of the virtual machine and interface.
Before doing anything else, check the status of the source and destination virtual
machines.
• Is the Status of each virtual machine Up?
• Are the corresponding tap interfaces Active?
Check the virtual machine status in the Contrail UI:
Figure 200: Virtual Machine Status Window
Check the tap interface status in the http agent introspect, for example:
http://nodef1.englab.juniper.net:8085/Snh_ItfReq?name=
Figure 201: Tap Interface Status Window
When the virtual machine status is verified as Up and the tap interface is Active, you can
focus on other factors that affect traffic, including routing, network policy, security
policy, and service instances with static routes.
2. Check reachability and routing.
Use the following troubleshooting guidelines whenever you are experiencing ping
failures on virtual network routes that are connected by means of network policy.
Check the network policy configuration:
• Verify that the policy is attached to each of the virtual networks.
• Each attached policy should have either an explicit rule allowing traffic from one virtual network to the other, or an allow all traffic rule.
• Verify that the order of the actions in the policy rules is correct, because the actions are applied in the order in which they are listed.
• If there are multiple policies attached to a virtual network, verify that the policies are attached in a logical order. The first policy listed is applied first, and its rules are applied first; then the next policy is applied.
• Finally, if either of the virtual networks does not have an explicit rule to allow traffic from the other virtual network, the traffic flow will be treated as an UNRESOLVED or SHORT flow and all packets will be dropped.
Use the following sequence in the Contrail UI to check policies, attachments, and
traffic rules:
Check VN1-VN2 ACL information from the compute node:
Figure 202: Policies, Attachments, and Traffic Rule Status Window
Check the virtual network policy configuration with route information:
Figure 203: Virtual Network Policy Configuration Window
Check the VN1 route information for VN2 routes:
Figure 204: Virtual Network Route Information Window
If a route is missing, ping fails. Flow inspection in the compute node displays Action:
D(rop).
Repeated dropstats commands confirm the drop: the Flow Action Drop counter
increments with each iteration of dropstats.
Flow and dropstats commands issued at the compute node:
Figure 205: Flow and Dropstats Command List
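For reference, typical invocations on the compute node look like the following (a sketch; both utilities ship with the vRouter tools):
flow -l      # list the active flow table entries
dropstats    # display drop counters; repeat to watch Flow Action Drop increment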
To help in debugging flows, you can use the detailed flow query from the agent
introspect page for the compute node.
Fields of interest include:
• Inputs [from flow -l output]: src/dest ip, src/dest ports, protocol, and vrf
• Output from detailed flow query: short_flow, src_vn, action_str->action
Flow command output:
Figure 206: Flow Command Output Window
Fetching details of a single flow:
Figure 207: Fetch Flow Record Window
Output from FetchFlowRecord shows unresolved IP addresses:
Figure 208: Unresolved IP Address Window
You can also retrieve information about unresolved flows from the Contrail UI, as
shown in the following:
Figure 209: Unresolved Flow Details Window
3. Check for protocol-specific network policy action.
If you are still experiencing reachability issues, troubleshoot any protocol-specific
action, where routes are exchanged, but only specific protocols are allowed.
The following shows a sample query on a protocol-specific flow in the agent introspect:
Figure 210: Protocol-Specific Flow Sample
The following shows that although the virtual networks are resolved (not
__UNKNOWN__) and the flow is not a short flow (the flow entry exists for a defined aging time),
the policy action clearly displays deny.
Figure 211: Protocol-Specific Flow Sample With Deny Action
Summary
This topic explores one area: debugging for policy-based routing. However, in a complex
system, a virtual network might combine several configuration methods that
influence reachability and routing.
For example, an environment might have a virtual network VN-X configured with
policy-based routing to another virtual network VN-Y. At the same time, there are a few
virtual machines in VN-X that have a floating IP to another virtual network VN-Z, which
is connected to VN-XX via a NAT service instance. This is a complex scenario, and you
need to debug step-by-step, taking into account all of the features working together.
Additionally, there are other considerations beyond routing and reachability that can
affect traffic flow. For example, the rules of network policies and security groups can
affect traffic to the destination. Also, if multi-path is involved, then ECMP and RPF need
to be taken into account while debugging.
Debugging BGP Peering and Route Exchange in Contrail
Use the troubleshooting steps and guidelines in this topic when you have errors with
Contrail BGP peering and route exchange.
• Example Cluster on page 474
• Verifying the BGP Routers on page 474
• Verifying the Route Exchange on page 477
• Debugging Route Exchange with Policies on page 479
• Debugging Peering with an MX Series Router on page 481
• Debugging a BGP Peer Down Error with Incorrect Family on page 482
• Configuring MX Peering (iBGP) on page 484
• Checking Route Exchange with an MX Series Peer on page 485
• Checking the Route in the MX Series Router on page 487
Example Cluster
Examples in this document refer to a virtual cluster that is set up as follows:
Config Nodes : ['nodea22', 'nodea20']
Control Nodes : ['nodea22', 'nodea20']
Compute Nodes : ['nodea22', 'nodea20']
Collector : ['nodea22']
WebU : nodea22
Openstack : nodea22
Verifying the BGP Routers
Use this procedure to launch various introspects to verify the setup of the BGP routers
in your system.
1. Verify the BGP routers.
All of the configured control nodes and external BGP routers are visible from the
following location, shown using the sample node setup.
http://nodea22.testlab.juniper.net:8082/bgp-routers
NOTE: Throughout this procedure, replace the example node name with
the correct location for your system.
Figure 212: Sample Output, BGP Routers:
2. Verify the BGP peering.
The following statement is entered to check the bgp_router_refs object on the API
server to validate the peering on the sample setup.
http://<ip address>:8082/bgp-router/1da579c5-0907-4c98-a7ad-37671f00cf60
Figure 213: Sample Output, BGP Router References:
3. Verify the command-line arguments that are passed to the control-node.
On the control-node, use ps aux | grep control-node to see the arguments that are
passed to the control-node.
Example
/usr/bin/control-node --map-user <ip address> --map-password <ip address>
--hostname nodea22 --host-ip <ip address> --bgp-port 179 --discovery-server <ip
address>
The host name is the bgp-router name. Ensure that the bgp-router config can be found
for the hostname, using the procedure in Step 1.
4. Validate the BGP neighbor config and the BGP peering config object.
http://<ip address>:8083/Snh_ShowBgpNeighborConfigReq?
Figure 214: Sample Output, BGP Neighbor Config:
http://<ip address>:8083/Snh_ShowBgpPeeringConfigReq?
Figure 215: Sample Output, BGP Peering Config:
5. Check the BGP neighbor states on the sample setup.
http://<ip address>:8083/Snh_BgpNeighborReq?ip_address=&domain=
Figure 216: Sample Output, BGP Neighbor States:
If the peer is not in an established state, check the last_error and the flap_count. Debug
the BGP state machine by using information displayed in the output, such as last_state
and last_event.
NOTE: The image displayed is truncated to fit this page. On the console
screen you can scroll horizontally to see more columns and data.
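The introspect and API checks in Steps 1 through 5 can also be scripted. The following is a minimal Python 2 sketch (matching the Python 2.7 environment shown elsewhere in this guide) that lists the bgp-router objects from the API server and dumps each neighbor's state from the control-node introspect. The Sandesh element names (BgpNeighborResp, peer, state, last_error, flap_count) are assumptions based on the fields visible in the sample output; verify them against your release.

import json
import urllib2
import xml.etree.ElementTree as ET

API_SERVER = 'http://nodea22.testlab.juniper.net:8082'    # replace with your config node
CONTROL_NODE = 'http://nodea22.testlab.juniper.net:8083'  # replace with your control node

# Step 1: list all configured BGP routers (control nodes and external peers).
routers = json.load(urllib2.urlopen(API_SERVER + '/bgp-routers'))
for router in routers['bgp-routers']:
    print ':'.join(router['fq_name']), router['uuid']

# Step 5: dump the BGP neighbor states from the control-node introspect.
tree = ET.parse(urllib2.urlopen(CONTROL_NODE + '/Snh_BgpNeighborReq?ip_address=&domain='))
for nbr in tree.iter('BgpNeighborResp'):    # assumed element name
    peer, state = nbr.findtext('peer'), nbr.findtext('state')
    print peer, state
    if state != 'Established':
        # These are the same fields the text recommends checking for a stuck peer.
        print '  last_error:', nbr.findtext('last_error'), 'flap_count:', nbr.findtext('flap_count')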
Verifying the Route Exchange
The following two virtual networks are used in the sample debugging session for route
exchange.
vn1 -> 1.1.1.0/24
vn2 -> 2.2.2.0/24
Example Procedure for Verifying Route Exchange
1. Validate the presence of the routing instance for each virtual network in the sample system.
http://<ip address>:8083/Snh_ShowRoutingInstanceReq?name=
NOTE: Throughout this example, replace <ip address> with the correct address
of the control node on your system.
Figure 217: Sample Output, Show Routing Instance:
In the sample output, you can see the import_target and the export_target configured
on the routing instance. Also shown are the XMPP peers (vRouters) registered to the
table.
The user can click on the inet table of the required routing instance to display the
routes that belong to the instance.
Use the information in Step 2 to validate a route.
2. Validate a route in a given routing instance in the sample setup:
http://<ip address>:8083/Snh_ShowRouteReq?x=default-domain:demo:vn1:vn1.inet.0
In the following sample output (truncated), the user can validate the BGP paths for
the protocol and for the source of the route to verify which XMPP agent or vRouter
has pushed the route. If the path source is BGP, the route is imported to the VRF table
from a BGP peer, either another control-node or an external bgp router such as an MX
Series router. BGP paths are displayed in the order of path selection.
Figure 218: Sample Output, Validate Route:
3. Validate the l3vpn table.
http://<ip address>:8083/Snh_ShowRouteReq?x=bgp.l3vpn.0
Figure 219: Sample Output, Validate L3vpn Table:
The following sample output has been scrolled horizontally to display the BGP path
attributes of each route.
The extended community (communities column) determines the VRF table to which
this VPN route is imported. The origin_vn shows the virtual network where this route
was created, information useful for applying ACL policies.
The label (MPLS) and tunnel encap columns can be used for debugging data path
issues.
Figure 220: Sample Output, Validate L3vpn Table, Scrolled:
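A route query such as the one in Step 2 can be scripted against the same introspect URL. The following is a minimal Python 2 sketch; Sandesh introspect pages return XML, and the element names used here (ShowRoute, prefix, ShowRoutePath, protocol, source) are assumptions inferred from the columns in the sample output, so confirm them on your release.

import urllib2
import xml.etree.ElementTree as ET

CONTROL_NODE = 'http://<ip address>:8083'   # replace with your control node
TABLE = 'default-domain:demo:vn1:vn1.inet.0'

tree = ET.parse(urllib2.urlopen(CONTROL_NODE + '/Snh_ShowRouteReq?x=' + TABLE))

# Print each prefix with the protocol and source of its best path.
# Paths are listed in path-selection order, so the first path is the best one.
for route in tree.iter('ShowRoute'):        # assumed element name
    prefix = route.findtext('prefix')
    best = route.find('.//ShowRoutePath')   # assumed element name
    if best is not None:
        print prefix, best.findtext('protocol'), best.findtext('source')
    else:
        print prefix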
Debugging Route Exchange with Policies
This section uses the sample output and the sample vn1 and vn2 to demonstrate methods
of debugging route exchange with policies.
1. Create a network policy to allow vn1 and vn2 traffic, and associate the policy with both virtual networks.
Figure 221: Create Policy Window
2. Validate that the routing instances have the correct import_target configuration.
http://<ip address>:8083/Snh_ShowRoutingInstanceReq?name=
Figure 222: Sample Output, Validate Import Target:
3. Validate that the routes are imported from the VRF.
Use the BGP path attribute to check the replication status of the path. The route from
the destination VRF should be replicated; also validate the origin-vn.
Figure 223: Sample Output, Route Import:
Debugging Peering with an MX Series Router
This section sets up an example BGP MX Series peer and provides some troubleshooting
scenarios.
1. Set the global AS number of the control node for an MX Series BGP peer, using the Contrail WebUI (eBGP).
Figure 224: Edit Global ASN Window
2. Configure the eBGP peer for the MX Series router. Use the Contrail Web UI or Python
provisioning.
Figure 225: Create BGP Peer Window
Configuring the MX Series BGP peer with the Python provision utility:
python ./provision_mx.py --router_name mx --router_ip <ip address> --router_asn
12345 --api_server_ip <ip address> --api_server_port 8082 --oper add --admin_user
admin --admin_password <password> --admin_tenant_name admin
3. Configure a control-node peer on the MX Series router, using Junos CLI:
set protocols bgp group contrail-control-nodes type external
set protocols bgp group contrail-control-nodes local-address <ip address>
set protocols bgp group contrail-control-nodes keep all
set protocols bgp group contrail-control-nodes peer-as 54321
set protocols bgp group contrail-control-nodes local-as 12345
set protocols bgp group contrail-control-nodes neighbor <ip address>
Debugging a BGP Peer Down Error with Incorrect Family
Use this procedure to identify and resolve errors that arise from mismatched address
family configurations.
NOTE: This example uses locations of the form http://<ip address>:<port>. Be sure to
replace <ip address> with the correct address for your environment.
1. Check the BGP peer UVE.
http://<ip address>:8081/analytics/uves/bgp-peers
2. Search for the MX Series BGP peer by name in the list.
In the sample output, families is the set of address families advertised by the peer, and
configured_families is what is provisioned. The families configured on the peer do not
match, so the peer does not move to the Established state. You can verify this in the
peer UVE. (A scripted version of this check appears at the end of this section.)
Figure 226: Sample BGP Peer UVE
3. Fix the families mismatch in the sample by updating the configuration on the MX
Series router, using Junos CLI:
set protocols bgp group contrail-control-nodes family inet-vpn unicast
4. After committing the CLI configuration, the peer comes up. Verify this with the UVE.
http://<ip address>:8081/analytics/uves/bgp-peers
Figure 227: Sample Established BGP Peer UVE
5. Verify the peer status on the MX Series router, using Junos CLI:
run show bgp neighbor 10.204.216.16
Peer: 10.204.216.16+46924 AS 54321 Local: 10.204.216.253+179 AS 12345
Type: External
State: Established
Last State: OpenConfirm
Flags: <ImportEval Sync>
Last Event: RecvKeepAlive
Last Error: None
Options: <Preference LocalAddress KeepAll AddressFamily PeerAS LocalAS
Rib-group Refresh>
Address families configured: inet-vpn-unicast
Local Address: <ip address> Holdtime: 90 Preference: 170 Local AS: 12345
Local System AS: 64512
Number of flaps: 0
Error: 'Cease' Sent: 0 Recv: 2
Peer ID: <ip address>
Local ID: <ip address>
Keepalive Interval: 30
Group index: 1
Active Holdtime: 90
Peer index: 0
BFD: disabled, down
Local Interface: ge-1/0/2.0
NLRI for restart configured on peer: inet-vpn-unicast
NLRI advertised by peer: inet-vpn-unicast
NLRI for this session: inet-vpn-unicast
Copyright © 2016, Juniper Networks, Inc.
483
Contrail Feature Guide
Peer does not support Refresh capability
Stale routes from peer are kept for: 300
Peer does not support Restarter functionality
Peer does not support Receiver functionality
Peer does not support 4 byte AS extension
Peer does not support Addpath
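The family-mismatch check above lends itself to a quick script. The following Python 2 sketch pulls the bgp-peers UVE list from the analytics API and flags any peer whose advertised families differ from its provisioned ones. The exact nesting of the families and configured_families keys inside the UVE JSON varies by release, so treat the key paths below (including the BgpPeerInfoData struct name) as assumptions to adapt.

import json
import urllib2

ANALYTICS = 'http://<ip address>:8081'   # replace with your analytics node

peers = json.load(urllib2.urlopen(ANALYTICS + '/analytics/uves/bgp-peers'))
for entry in peers:
    uve = json.load(urllib2.urlopen(entry['href']))
    info = uve.get('BgpPeerInfoData', {})          # assumed UVE struct name
    families = info.get('families')
    configured = info.get('configured_families')
    if families and configured and sorted(families) != sorted(configured):
        print 'MISMATCH:', entry['name']
        print '  advertised :', families
        print '  provisioned:', configured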
Configuring MX Peering (iBGP)
1. Edit the global ASN.
Figure 228: Edit Global ASN Window
2. Configure the MX Series IBGP peer, using Contrail WebUI or Python provisioning.
Figure 229: Create BGP Peer Window
Configuring the MX Series BGP peer with the Python provision utility:
python ./provision_mx.py --router_name mx --router_ip <ip address> --router_asn <asn>
--api_server_ip <ip address> --api_server_port 8082 --oper add --admin_user admin
--admin_password <password> --admin_tenant_name admin
3. Verify the peer from UVE.
http://<ip address>:8081/analytics/uves/bgp-peers
Figure 230: Sample Established IBGP Peer UVE
4. You can verify the same information on the HTTP introspect page of the control node
(port 8083 in this example).
http://<ip address>:8083/Snh_BgpNeighborReq?ip_address=&domain=
Figure 231: Sample Established IBGP Peer Introspect Window
Checking Route Exchange with an MX Series Peer
1. Check the routes in the bgp.l3vpn.0 table.
Figure 232: Routing Instance Route Table
2. Configure a public virtual network.
Figure 233: Routing Instance Route Table
3. Verify the routes in the public.inet.0 table.
http://<ip address>:8083/Snh_ShowRouteReq?x=default-domain:admin:public:public.inet.0
Figure 234: Routing Instance Public IPv4 Route Table
4. Launch a virtual machine in the public network and verify the route in the public.inet.0
table.
http://<ip address>:8083/Snh_ShowRouteReq?x=default-domain:admin:public:public.inet.0
Figure 235: Virtual Machine Routing Instance Public IPv4 Route Table
5. Verify the route in the bgp.l3vpn.0 table.
http://<ip address>:8083/Snh_ShowRouteReq?x=bgp.l3vpn.0
Figure 236: BGP Routing Instance Route Table
Checking the Route in the MX Series Router
Use Junos CLI show commands from the router to check the route.
run show route table public.inet.0
public.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0
*[Static/5] 15w6d 08:50:34
> to <ip address> via ge-1/0/1.0
<ip address set> *[Direct/0] 15w6d 08:50:35
> via ge-1/0/1.0
<ip address set> *[Local/0] 15w6d 08:50:51
Local via ge-1/0/1.0
<ip address set> *[BGP/170] 01:13:34, localpref 100, from <ip address>
AS path: ?, validation-state: unverified
> via gr-1/0/0.32771, Push 16
[BGP/170] 01:13:34, localpref 100, from <ip address>
AS path: ?, validation-state: unverified
> via gr-1/0/0.32771, Push 16
<ip address set>
*[BGP/170] 00:03:20, localpref 100, from <ip address>
AS path: ?, validation-state: unverified
> via gr-1/0/0.32769, Push 16
run show route table bgp.l3vpn.0 receive-protocol bgp <ip address> detail
bgp.l3vpn.0: 92 destinations, 130 routes (92 active, 0 holddown, 0 hidden)
* 10.xxx.xxx.70:1:11.2.3.253/32 (1 entry, 0 announced)
Import Accepted
Route Distinguisher: <ip address>
VPN Label: 16
Nexthop: <ip address>
Localpref: 100
AS path: ?
Communities: target:64512:1 target:64512:10003 unknown iana 30c unknown iana
30c unknown type 8004 value fc00:1 unknown type 8071 value fc00:4
Troubleshooting the Floating IP Address Pool in Contrail
This document provides troubleshooting methods to use when you have errors with the
floating IP address pool when using Contrail.
• Example Cluster on page 489
• Example on page 489
• Example: MX80 Configuration for the Gateway on page 490
• Ping the Floating IP from the Public Network on page 493
• Troubleshooting Details on page 493
• Get the UUID of the Virtual Network on page 493
• View the Floating IP Object in the API Server on page 494
• View floating-ips in floating-ip-pools in the API Server on page 497
• Check Floating IP Objects in the Virtual Machine Interface on page 500
• View Floating IP Objects in the IFMAP Server View on page 503
• View the BGP Peer Status on the Control Node on page 507
• Querying Routes in the Public Virtual Network on page 508
• Verification from the MX80 Gateway on page 509
• Viewing the Compute Node Vnsw Agent on page 511
• Advanced Troubleshooting on page 514
Example Cluster
Examples in this document refer to a virtual cluster that is set up as follows:
Config Nodes : ['nodec6', 'nodec7', 'nodec8']
Control Nodes : ['nodec7', 'nodec8']
Compute Nodes : ['nodec9', 'nodec10']
Collector : ['nodec6', 'nodec8']
WebUI : nodec7
Openstack : nodec6
The following virtual networks are used in the examples in this document:
Public virtual network:
• Virtual network name: public_vn
• Public address range: 10.204.219.32 to 10.204.219.37
• Route Target: 64512:10003
• Floating IP pool name: public_pool
Private virtual network:
• Virtual network name: vn1
• Subnet: 10.1.1.0/24
Example
A virtual machine is created in the virtual network VN1 with the name VN1_VM1 and with
the IP address 10.1.1.253. A floating IP address of 10.204.219.37 is associated to the
VN1_VM1 instance.
An MX80 router is configured as a gateway to peer with control nodes nodec7 and nodec8.
Example: MX80 Configuration for the Gateway
The following is the Junos OS configuration for the MX80 gateway. The static route
through next hop 10.204.218.254 provides reachability to the external world.
chassis {
fpc 1 {
pic 0 {
tunnel-services;
}
}
}
interfaces {
ge-1/0/1 {
unit 0 {
family inet {
address 10.204.218.1/24;
}
}
}
ge-1/0/2 {
unit 0 {
family inet {
address 10.204.216.253/24;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 10.204.216.254;
}
router-id 10.204.216.253;
route-distinguisher-id 10.204.216.253;
autonomous-system 64512;
dynamic-tunnels {
tun1 {
source-address 10.204.216.253;
gre;
destination-networks {
10.204.216.0/24;
10.204.217.0/24;
}
}
}
}
protocols {
bgp {
group control-nodes {
type internal;
local-address 10.204.216.253;
keep all;
family inet-vpn {
unicast;
}
neighbor 10.204.216.64;
neighbor 10.204.216.65;
}
}
}
routing-instances {
public {
instance-type vrf;
interface ge-1/0/1.0;
vrf-target target:64512:10003;
vrf-table-label;
routing-options {
static {
route 0.0.0.0/0 next-hop 10.204.218.254;
}
}
}
}
Ping the Floating IP from the Public Network
From the public network, ping the floating IP 10.204.219.37.
user1-test:~ user1$ ping 10.204.219.37
PING 10.204.219.37 (10.204.219.37): 56 data bytes
64 bytes from 10.204.219.37: icmp_seq=0 ttl=54 time=62.439 ms
64 bytes from 10.204.219.37: icmp_seq=1 ttl=54 time=56.018 ms
64 bytes from 10.204.219.37: icmp_seq=2 ttl=54 time=55.915 ms
64 bytes from 10.204.219.37: icmp_seq=3 ttl=54 time=57.755 ms
^C
--- 10.204.219.37 ping statistics ---
5 packets transmitted, 4 packets received, 20.0% packet loss
round-trip min/avg/max/stddev = 55.915/58.032/62.439/2.647 ms
Troubleshooting Details
The following sections show details of ways to get related information, view, troubleshoot,
and validate floating IP addresses in a Contrail system.
Get the UUID of the Virtual Network
Use the following to get the universal unique identifier (UUID) of the virtual network.
[root@nodec6 ~]# (source /etc/contrail/openstackrc; quantum net-list -F id -F name)
2>/dev/null
+--------------------------------------+-------------------------+
| id                                   | name                    |
+--------------------------------------+-------------------------+
| 43707766-75f3-4d48-80d9-1b7240fb161d | public_vn               |
| 2ab7ea04-8f5f-4b8d-acbf-a7c29c9b4112 | VN1                     |
| 1c59ded0-38e8-4168-b91f-4c51aba10d30 | default-virtual-network |
| 5b0a1040-91e4-47ff-bd4c-0a81e1901a1f | ip-fabric               |
| 7efddf64-ff3c-44d2-aeb2-45d7472b7a64 | __link_local__          |
+--------------------------------------+-------------------------+
View the Floating IP Object in the API Server
Use the following to view the floating IP pool information in the API server. API server
requests can be made on http port 8082.
The Contrail API servers have the virtual-network public_vn object that contains floating
IP pool information. Use the following to view the floating-ip-pools object information.
curl http://<API-Server_IP>:8082/virtual-network/<UUID_of_VN>
Example
[root@nodec6 ~]# curl http://nodec6:8082/virtual-network/43707766-75f3-4d48-80d9-1b7240fb161d | python -m json.tool
{
"virtual-network": {
"floating_ip_pools": [
{
"href":
"http://127.0.0.1:8095/floating-ip-pool/663737c1-f3ab-40ff-9442-bdb6c225e3c3",
"to": [
"default-domain",
"admin",
"public_vn",
"public_pool"
],
"uuid": "663737c1-f3ab-40ff-9442-bdb6c225e3c3"
}
],
"fq_name": [
"default-domain",
"admin",
"public_vn"
],
"href":
"http://127.0.0.1:8095/virtual-network/43707766-75f3-4d48-80d9-1b7240fb161d",
"id_perms": {
"created": "2014-02-07T08:58:40.892803",
"description": null,
"enable": true,
"last_modified": "2014-02-07T10:06:42.234423",
"permissions": {
"group": "admin",
"group_access": 7,
"other_access": 7,
"owner": "admin",
"owner_access": 7
},
"uuid": {
"uuid_lslong": 9284482284331406877,
"uuid_mslong": 4859515279882014024
}
},
"name": "public_vn",
"network_ipam_refs": [
{
"attr": {
"ipam_subnets": [
{
"default_gateway": "10.204.219.38",
"subnet": {
"ip_prefix": "10.204.219.32",
"ip_prefix_len": 29
}
}
]
},
"href":
"http://127.0.0.1:8095/network-ipam/39b0e8da-fcd4-4b35-856c-8d18570b1483",
"to": [
"default-domain",
"default-project",
"default-network-ipam"
],
"uuid": "39b0e8da-fcd4-4b35-856c-8d18570b1483"
}
],
"parent_href":
"http://127.0.0.1:8095/project/deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f",
"parent_type": "project",
"parent_uuid": "deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f",
"route_target_list": {
"route_target": [
"target:64512:10003"
]
},
"routing_instances": [
{
"href":
"http://127.0.0.1:8095/routing-instance/3c6254ac-cfde-417e-916d-e7a1c0efad92",
"to": [
"default-domain",
"admin",
"public_vn",
"public_vn"
],
"uuid": "3c6254ac-cfde-417e-916d-e7a1c0efad92"
}
],
"uuid": "43707766-75f3-4d48-80d9-1b7240fb161d",
"virtual_network_properties": {
"extend_to_external_routers": null,
"forwarding_mode": "l2_l3",
"network_id": 4,
"vxlan_network_identifier": null
}
}
}
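The object walk shown in this section and the two that follow (virtual network to floating-ip pool, pool to floating IPs, floating IP to its virtual-machine-interface references) can be automated against the same REST URLs. The following is a minimal Python 2 sketch; the node name and UUID are the ones from this example and must be replaced for your setup.

import json
import urllib2

API = 'http://nodec6:8082'                        # replace with your config node
VN_UUID = '43707766-75f3-4d48-80d9-1b7240fb161d'  # public_vn, from quantum net-list

def get(url):
    return json.load(urllib2.urlopen(url))

vn = get(API + '/virtual-network/' + VN_UUID)['virtual-network']
for pool_ref in vn.get('floating_ip_pools', []):
    pool = get(API + '/floating-ip-pool/' + pool_ref['uuid'])['floating-ip-pool']
    print 'pool:', ':'.join(pool['fq_name'])
    for fip_ref in pool.get('floating_ips', []):
        fip = get(API + '/floating-ip/' + fip_ref['uuid'])['floating-ip']
        vmis = [r['uuid'] for r in fip.get('virtual_machine_interface_refs', [])]
        print '  floating ip:', fip['floating_ip_address'], '-> VMIs:', vmis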
View floating-ips in floating-ip-pools in the API Server
Once you have located the floating-ip-pools object, use the following to review its
floating-ips object.
The floating-ips object should display the floating IP that is shown in the Contrail UI. The
floating IP should have a reference to the virtual machine interface (VMI) object that is
bound to the floating IP.
Example
[root@nodec6 ~]# curl http://nodec6:8082/floating-ip-pool/663737c1-f3ab-40ff-9442-bdb6c225e3c3 | python -m json.tool
{
"floating-ip-pool": {
"floating_ips": [
{
"href":
"http://127.0.0.1:8095/floating-ip/f3eec4d6-889e-46a3-a8f0-879dfaff6ca0",
"to": [
"default-domain",
"admin",
"public_vn",
"public_pool",
"f3eec4d6-889e-46a3-a8f0-879dfaff6ca0"
],
"uuid": "f3eec4d6-889e-46a3-a8f0-879dfaff6ca0"
}
],
"fq_name": [
"default-domain",
"admin",
"public_vn",
"public_pool"
],
"href":
"http://127.0.0.1:8095/floating-ip-pool/663737c1-f3ab-40ff-9442-bdb6c225e3c3",
"id_perms": {
"created": "2014-02-07T08:58:41.136572",
"description": null,
"enable": true,
"last_modified": "2014-02-07T08:58:41.136572",
"permissions": {
"group": "admin",
"group_access": 7,
"other_access": 7,
"owner": "admin",
"owner_access": 7
},
"uuid": {
"uuid_lslong": 10683309858715198403,
"uuid_mslong": 7365417021744038143
}
},
"name": "public_pool",
"parent_href":
"http://127.0.0.1:8095/virtual-network/43707766-75f3-4d48-80d9-1b7240fb161d",
"parent_type": "virtual-network",
"parent_uuid": "43707766-75f3-4d48-80d9-1b7240fb161d",
"project_back_refs": [
{
"attr": {},
"href": "http://127.0.0.1:8095/project/deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f",
"to": [
"default-domain",
"admin"
],
"uuid": "deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f"
}
],
"uuid": "663737c1-f3ab-40ff-9442-bdb6c225e3c3"
}
}
Check Floating IP Objects in the Virtual Machine Interface
Use the following to retrieve the virtual machine interface of the virtual machine from
either the quantum port-list command or from the Contrail UI. Then get the virtual machine
interface identifier and check its floating IP object associations.
• Using quantum port-list to get the virtual machine interface:
Example
[root@nodec6 ~]# quantum port-list -F id -F fixed_ips
+--------------------------------------+------------------------------------------------------------------------------------+
| id                                   | fixed_ips                                                                          |
+--------------------------------------+------------------------------------------------------------------------------------+
| cdca35ce-84ad-45da-9331-7bc67b7fcca6 | {"subnet_id": "e80f480b-98d4-43cc-847c-711e637295db", "ip_address": "10.1.1.253"} |
+--------------------------------------+------------------------------------------------------------------------------------+
• Using the Contrail UI to get the virtual machine interface:

Checking Floating IP Objects on the Virtual Machine Interface
Once you have obtained the virtual machine interface identifier, check the floating-ip
objects that are associated with the virtual machine interface.
[root@nodec6 ~]# curl http://127.0.0.1:8095/floating-ip/f3eec4d6-889e-46a3-a8f0-879dfaff6ca0 | python -m json.tool
{
"floating-ip": {
"floating_ip_address": "10.204.219.37",
"fq_name": [
"default-domain",
"admin",
"public_vn",
"public_pool",
"f3eec4d6-889e-46a3-a8f0-879dfaff6ca0"
],
"href": "http://127.0.0.1:8095/floating-ip/f3eec4d6-889e-46a3-a8f0-879dfaff6ca0",
"id_perms": {
"created": "2014-02-07T10:07:05.869899",
"description": null,
"enable": true,
"last_modified": "2014-02-07T10:36:36.820926",
"permissions": {
"group": "admin",
"group_access": 7,
"other_access": 7,
"owner": "admin",
"owner_access": 7
},
"uuid": {
"uuid_lslong": 12173378905373109408,
"uuid_mslong": 17577202821367744163
}
},
"name": "f3eec4d6-889e-46a3-a8f0-879dfaff6ca0",
"parent_href":
"http://127.0.0.1:8095/floating-ip-pool/663737c1-f3ab-40ff-9442-bdb6c225e3c3",
"parent_type": "floating-ip-pool",
"parent_uuid": "663737c1-f3ab-40ff-9442-bdb6c225e3c3",
"project_refs": [
{
"attr": null,
"href": "http://127.0.0.1:8095/project/deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f",
"to": [
"default-domain",
"admin"
],
"uuid": "deef6549-8e6c-4e3e-9cde-c9bc2b72ce6f"
}
],
"uuid": "f3eec4d6-889e-46a3-a8f0-879dfaff6ca0",
"virtual_machine_interface_refs": [
{
"attr": null,
"href":
"http://127.0.0.1:8095/virtual-machine-interface/cdca35ce-84ad-45da-9331-7bc67b7fcca6",
"to": [
"54bb44e1-50e4-43d7-addd-44be809f1e40",
"cdca35ce-84ad-45da-9331-7bc67b7fcca6"
],
"uuid": "cdca35ce-84ad-45da-9331-7bc67b7fcca6"
}
]
}
}
View Floating IP Objects in the IFMAP Server View
Use the following to view the output of /usr/bin/ifmap_view.py on the config-nodes. The
IFMAP server example output shows the BGP peering configurations and the
configurations of the virtual networks VN1 and public_vn.
[root@nodec6 ~]# (source /opt/contrail/api-venv/bin/activate ; python
/usr/bin/ifmap_view.py nodec6 8443 test3 test3 -v 2 )
....
....
....
project = admin
floating-ip = f3eec4d6-889e-46a3-a8f0-879dfaff6ca0
project = admin
floating-ip-pool = public_pool
security-group = default
access-control-list = default-access-control-list
virtual-network = VN1
network-ipam = default-network-ipam
{
"ipam_subnets": [
{
"subnet": {
"ip_prefix": "10.1.1.0",
"ip_prefix_len": 24
},
"default_gateway": "10.1.1.254"
}
],
"host_routes": null
}
routing-instance = VN1
route-target = 2
{
"import_export": null
}
virtual-network = public_vn
floating-ip-pool = public_pool
floating-ip = f3eec4d6-889e-46a3-a8f0-879dfaff6ca0
virtual-machine-interface = cdca35ce-84ad-45da-9331-7bc67b7fcca6
network-ipam = default-network-ipam
{
"ipam_subnets": [
{
"subnet": {
"ip_prefix": "10.204.219.32",
"ip_prefix_len": 29
},
"default_gateway": "10.204.219.38"
}
],
"host_routes": null
}
routing-instance = public_vn
route-target = 10003
{
"import_export": null
}
route-target = 1
{
"import_export": null
}
....
....
project = default-project
virtual-network = ip-fabric
routing-instance = __default__
bgp-router = nodec8
bgp-router = nodec7
{
"session": [
{
"attributes": [
{
"bgp_router": null,
"address_families": {
"family": [
"inet-vpn",
"e-vpn"
]
}
}
],
"uuid": null
}
]
}
bgp-router = nodec7
bgp-router = mx1
bgp-router = nodec7
{
"session": [
{
"attributes": [
{
"bgp_router": null,
"address_families": {
"family": [
"inet-vpn"
]
}
}
],
"uuid": null
}
]
}
bgp-router = nodec8
{
"session": [
{
"attributes": [
{
"bgp_router": null,
"address_families": {
"family": [
"inet-vpn"
]
}
}
],
"uuid": null
}
]
}
....
....
....
View the BGP Peer Status on the Control Node
Use the Contrail UI or the control node http introspect on port 8083 to view the BGP peer
status. In the following example, the control nodes are nodec7 and nodec8.
Ensure that the BGP peering state is displayed as Established for the control nodes and
the gateway MX.
Example
• Using the Contrail UI:
• Using the control-node introspect:
http://nodec7:8083/Snh_BgpNeighborReq?ip_address=&domain=
http://nodec8:8083/Snh_BgpNeighborReq?ip_address=&domain=
Querying Routes in the Public Virtual Network
On each control-node, a query on the routes in the public_vn lists the routes that are
pushed by the MX gateway, which in the following example are 0.0.0.0/0 and
10.204.218.0/24.
In the following results, the floating IP route 10.204.219.37 is installed by the compute
node (nodec10) that hosts the virtual machine.
Example
• Using the Contrail UI:
• Using the http introspect:
Following is the format for using an introspect query.
http://<nodename/ip>:8083/Snh_ShowRouteReq?x=<RoutingInstance of public VN>.inet.0
Example
http://nodec8:8083/Snh_ShowRouteReq?x=default-domain:admin:public_vn:public_vn.inet.0
View Corresponding BGP L3VPN Routes
Use the Contrail UI or the http introspect to view the public route's corresponding BGP
L3VPN routes, as in the following.
Example
• Using the Contrail UI:
• Using the control-node introspect:
http://nodec7:8083/Snh_ShowRouteReq?x=bgp.l3vpn.0
http://nodec8:8083/Snh_ShowRouteReq?x=bgp.l3vpn.0
Verification from the MX80 Gateway
This section provides options for verifying floating IP pools from the MX80 gateway.
Verify BGP Sessions are Established
Use the following commands from the gateway to verify that BGP sessions are established
with the control nodes nodec7 and nodec8:
root@mx-host> show bgp neighbor 10.204.216.64
Peer: 10.204.216.64+59287 AS 64512 Local: 10.204.216.253+179 AS 64512
Type: Internal State: Established Flags: <Sync>
Last State: OpenConfirm Last Event: RecvKeepAlive
Last Error: Hold Timer Expired Error
Options: <Preference LocalAddress KeepAll AddressFamily Rib-group Refresh>
Address families configured: inet-vpn-unicast
Local Address: 10.204.216.253 Holdtime: 90 Preference: 170
Number of flaps: 216
Last flap event: HoldTime
Error: 'Hold Timer Expired Error' Sent: 68 Recv: 0
Error: 'Cease' Sent: 0 Recv: 43
Peer ID: 10.204.216.64 Local ID: 10.204.216.253 Active Holdtime: 90
Keepalive Interval: 30
Group index: 0 Peer index: 3
BFD: disabled, down
NLRI for restart configured on peer: inet-vpn-unicast
NLRI advertised by peer: inet-vpn-unicast
NLRI for this session: inet-vpn-unicast
Peer does not support Refresh capability
Stale routes from peer are kept for: 300
Peer does not support Restarter functionality
Peer does not support Receiver functionality
Peer does not support 4 byte AS extension
Peer does not support Addpath
Show Routes Learned from Control Nodes
From the MX80, use show route to display the routes for the virtual machine 10.204.219.37
that are learned from both control nodes.
In the following example, the route is learned from 10.204.216.64 and 10.204.216.65,
pointing to a dynamic GRE tunnel next hop with a label of 16 (the label of the virtual
machine).
public.inet.0: 4 destinations, 5 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0
*[Static/5] 10w6d 18:47:50
> to 10.204.218.254 via ge-1/0/1.0
10.204.218.0/24 *[Direct/0] 10w6d 18:47:51
> via ge-1/0/1.0
10.204.218.1/32 *[Local/0] 10w6d 18:48:07
Local via ge-1/0/1.0
10.204.219.37/32 *[BGP/170] 09:42:43, localpref 100, from 10.204.216.64
AS path: ?, validation-state: unverified
> via gr-1/0/0.32779, Push 16
[BGP/170] 09:42:43, localpref 100, from 10.204.216.65
AS path: ?, validation-state: unverified
> via gr-1/0/0.32779, Push 16
Viewing the Compute Node Vnsw Agent
The compute node introspect can be accessed from port 8085. In the following examples,
the compute nodes are nodec9 and nodec10.
View Routing Instance Next Hops
On the routing instance of VN1, the routes 0.0.0.0/0 and 10.204.218.0/24 should have
the next hop pointing to the MX gateway (10.204.216.253).
Example
Using the Contrail UI:
Using the Unicast Route Table Index to View Next Hops
Alternatively, from the agent introspect, you can view the next hops at the unicast route
table.
First, use the following to get the unicast route table index (ucindex) for the routing
instance default-domain:admin:public_vn:public_vn.
http://nodec10:8085/Snh_VrfListReq?x=default-domain:admin:public_vn:public_vn
Example
In the following example, the unicast route table index is 2.
Next, perform a route request query on ucindex 2, as shown in the following. The tunnel
detail indicates the source and destination endpoints of the tunnel and the MPLS label
16 (the label of the virtual machine).
The query should also show a route for 10.204.219.37 with an interface next hop of the
tap interface.
http://nodec10:8085/Snh_Inet4UcRouteReq?x=2
A ping from the MX gateway to the virtual machine’s floating IP in the public
routing-instance should work.
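The two-step lookup above (find the ucindex for the routing instance, then query its unicast route table) can be chained in a short script. The following Python 2 sketch does this against the agent introspect; the Sandesh element names (VrfSandeshData, ucindex, RouteUcSandeshData, src_ip, src_plen) are assumptions based on the fields visible in the introspect pages, so confirm them on your release.

import urllib2
import xml.etree.ElementTree as ET

AGENT = 'http://nodec10:8085'   # replace with your compute node
VRF = 'default-domain:admin:public_vn:public_vn'

# Step 1: find the unicast route table index (ucindex) for the VRF.
tree = ET.parse(urllib2.urlopen(AGENT + '/Snh_VrfListReq?x=' + VRF))
ucindex = None
for vrf in tree.iter('VrfSandeshData'):              # assumed element name
    if vrf.findtext('name') == VRF:
        ucindex = vrf.findtext('ucindex')
print 'ucindex:', ucindex

# Step 2: query the unicast route table and print each prefix.
if ucindex is not None:
    tree = ET.parse(urllib2.urlopen(AGENT + '/Snh_Inet4UcRouteReq?x=' + ucindex))
    for route in tree.iter('RouteUcSandeshData'):    # assumed element name
        print '%s/%s' % (route.findtext('src_ip'), route.findtext('src_plen'))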
Advanced Troubleshooting
If you still have reachability problems after performing all of the tests in this topic (for
example, a ping between the virtual machine and the MX IP or to public addresses is
failing), try the following:
• Validate that all the required Contrail processes are running by using the contrail-status command on all of the nodes.
• On the compute node where the virtual machine is present (nodec10 in this example), perform a tcpdump on the tap interface (tcpdump -ni tapcdca35ce-84). The output should show the incoming packets from the virtual machine.
• Check to see if any packet drops occur in the kernel vrouter module:
http://nodec10:8085/Snh_KDropStatsReq?
In the output, scroll down to find any drops. NOTE: You can ignore any ds_invalid_arp increments.
• On the physical interface where packets transmit onto the compute node, perform a tcpdump matching the host IP of the MX to show the GRE-encapsulated packets, as in the following.
[root@nodec10 ~]# cat /etc/contrail/agent.conf |grep -A 1 eth-port
<eth-port>
<name>p1p0p0</name>
</eth-port>
<metadata-proxy>
[root@nodec10 ~]# tcpdump -ni p1p0p0 host 10.204.216.253 -vv
tcpdump: WARNING: p1p0p0: no IPv4 address assigned
tcpdump: listening on p1p0p0, link-type EN10MB (Ethernet), capture size 65535 bytes
02:06:51.729941 IP (tos 0x0, ttl 64, id 57430, offset 0, flags [DF], proto GRE (47),
length 112)
10.204.216.253 > 10.204.216.67: GREv0, Flags [none], length 92
MPLS (label 16, exp 0, [S], ttl 54)
IP (tos 0x0, ttl 54, id 35986, offset 0, flags [none], proto ICMP (1), length 84)
172.29.227.6 > 10.204.219.37: ICMP echo request, id 53240, seq 242, length 64
02:06:51.730052 IP (tos 0x0, ttl 64, id 324, offset 0, flags [none], proto GRE (47),
length 112)
10.204.216.67 > 10.204.216.253: GREv0, Flags [none], length 92
MPLS (label 16, exp 0, [S], ttl 64)
IP (tos 0x0, ttl 64, id 33909, offset 0, flags [none], proto ICMP (1), length 84)
10.204.219.37 > 172.29.227.6: ICMP echo reply, id 53240, seq 242, length 64
02:06:52.732283 IP (tos 0x0, ttl 64, id 12675, offset 0, flags [DF], proto GRE (47),
length 112)
10.204.216.253 > 10.204.216.67: GREv0, Flags [none], length 92
MPLS (label 16, exp 0, [S], ttl 54)
IP (tos 0x0, ttl 54, id 54155, offset 0, flags [none], proto ICMP (1), length 84)
172.29.227.6 > 10.204.219.37: ICMP echo request, id 53240, seq 243, length 64
02:06:52.732355 IP (tos 0x0, ttl 64, id 325, offset 0, flags [none], proto GRE (47),
length 112)
10.204.216.67 > 10.204.216.253: GREv0, Flags [none], length 92
MPLS (label 16, exp 0, [S], ttl 64)
IP (tos 0x0, ttl 64, id 33910, offset 0, flags [none], proto ICMP (1), length 84)
10.204.219.37 > 172.29.227.6: ICMP echo reply, id 53240, seq 243, length 64
^C
4 packets captured
5 packets received by filter
0 packets dropped by kernel
[root@nodec10 ~]#
• On the MX gateway, use the following to inspect the GRE tunnel rx/tx (received/transmitted) packet counts:
root@mx-host> show interfaces gr-1/0/0.32779 |grep packets
Input packets : 542
Output packets: 559
root@blr-mx1> show interfaces gr-1/0/0.32779 |grep packets
Input packets : 544
Output packets: 561
• Look for any packet drops in the FPC, as in the following:
show pfe statistics traffic fpc <id>
• Also inspect the dynamic tunnels, using the following:
show dynamic-tunnels database
Removing Stale Virtual Machines and Virtual Machine Interfaces
This topic gives examples for removing stale VMs (virtual machines) and VMIs (virtual
machine interfaces). Before you can remove a stale VM or VMI, you must first remove
any back references associated to the VM or VMI.
• Problem Example on page 516
• Show Virtual Machines on page 517
• Show Virtual Machines Using Python API on page 519
• Delete Methods on page 520
Problem Example
The troubleshooting examples in this topic are based on the following problem example.
A net-delete of the virtual network 2a8120ec-bd18-49f4-aca0-acfc6e8fe74f returned
the following messages showing that two VMIs still have back-references to it.
The two VMIs must be deleted first; then the Neutron net-delete <net_ID> command
completes without errors.
From neutron.log:
2014-03-10 14:18:05.208 DEBUG [urllib3.connectionpool]
"DELETE /virtual-network/2a8120ec-bd18-49f4-aca0-acfc6e8fe74f HTTP/1.1" 409 203
2014-03-10 14:18:05.278 ERROR [neutron.api.v2.resource] delete failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in resource
    result = method(request=request, **args)
  File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 432, in delete
    obj_deleter(request.context, id, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/plugins/juniper/contrail/contrailplugin.py", line 294, in delete_network
    raise e
RefsExistError: Back-References from
http://127.0.0.1:8082/virtual-machine-interface/51daf6f4-7366-4463-a819-bd117fe3a8c8,
http://127.0.0.1:8082/virtual-machine-interface/30882e66-e175-4fbb-862e-354bb700b579 still exist
Show Virtual Machines
Use the following command to show all of the virtual machines known to the Contrail
API server. Replace the variable <config-node-IP> shown in the example with the IP
address of the config-node in your setup.
http://<config-node-IP>:8082/virtual-machines
Example
In the following example, 03443891-99cc-4784-89bb-9d1e045f8aa6 is a stale VM that
needs to be removed.
virtual-machines:
[
  {
    href: "http://example-node:8082/virtual-machine/03443891-99cc-4784-89bb-9d1e045f8aa6",
    fq_name: [
      "03443891-99cc-4784-89bb-9d1e045f8aa6"
    ],
    uuid: "03443891-99cc-4784-89bb-9d1e045f8aa6"
  },
When the user attempts to delete the stale VM, a message displays that children of the
VM still exist:
root@example-node:~# curl -X DELETE -H "Content-Type: application/json; charset=UTF-8" http://127.0.0.1:8082/virtual-machine/03443891-99cc-4784-89bb-9d1e045f8aa6
Children http://127.0.0.1:8082/virtual-machine-interface/0c32a82a-7bd3-46c7-b262-6d85b9911a0d still exist
root@example-node:~#
The user opens http://example-node:8082/virtual-machine/03443891-99cc-4784-89bb-9d1e045f8aa6
and sees a virtual-machine-interface (VMI) attached to the VM. The VMI must be removed
before the VM can be removed.
However, when the user attempts to delete the VMI from the stale VM, a message displays
that there is still a back-reference:
root@example-node:~# curl -X DELETE -H "Content-Type: application/json; charset=UTF-8" http://<example-IP>:8082/virtual-machine-interface/0c32a82a-7bd3-46c7-b262-6d85b9911a0d
Back-References from http://<example-IP>:8082/instance-ip/6ffa29a1-023f-462b-b205-353da8e3a2a4 still exist
root@example-node:~#
Because a back-reference from an instance-ip object is still present, the instance-ip
object must be deleted first, as follows:
root@example-node:~# curl -X DELETE -H "Content-Type: application/json; charset=UTF-8" http://<example-IP>:8082/instance-ip/6ffa29a1-023f-462b-b205-353da8e3a2a4
root@example-node:~#
Once the instance-ip is deleted, the VMI and then the VM can be deleted.
NOTE: To prevent inconsistency, be certain that the VM is not present in the
Nova database before deleting the VM.
Show Virtual Machines Using Python API
The following example shows how to view virtual machines using a Python API. This
example shows virtual machines and back-references. Once you identify back-references
and existing children, you can delete them first, then delete the stale VM.
root@example-node:~# source /opt/contrail/api-venv/bin/activate
(api-venv)root@example-node:~# python
Python 2.7.5 (default, Mar 10 2014, 03:55:35)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from vnc_api.vnc_api import VncApi
>>> vh=VncApi()
>>>
vh.virtual_machine_interface_delete(id='0c32a82a-7bd3-46c7-b262-6d85b9911a0d')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/contrail/api-venv/lib/python2.7/site-packages/vnc_api/gen/vnc_api_client_gen.py", line 3793, in virtual_machine_interface_delete
    content = self._request_server(rest.OP_DELETE, uri)
  File "/opt/contrail/api-venv/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 342, in _request_server
    raise RefsExistError(content)
cfgm_common.exceptions.RefsExistError: Back-References from http://<example-IP>:8082/instance-ip/6ffa29a1-023f-462b-b205-353da8e3a2a4 still exist
>>>
Delete Methods
Use help(vh) to show all supported delete methods.
Typical commands for deleting VMs and VMIs include the following (a combined cleanup
sketch appears after the list):
• virtual_machine_delete() to delete a virtual machine
• instance_ip_delete() to delete an instance-ip
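Putting the procedure together, a stale VM is cleaned up bottom-up: delete its instance-ip objects, then its VMIs, then the VM itself. The following Python 2 sketch uses the same vnc_api calls shown above. The read methods and the _refs accessors follow the vnc_api naming convention but are assumptions here; verify them with help(vh), and make sure the VM is absent from the Nova database before deleting it.

from vnc_api.vnc_api import VncApi

vh = VncApi()
VM_ID = '03443891-99cc-4784-89bb-9d1e045f8aa6'   # stale VM from the example above

vm = vh.virtual_machine_read(id=VM_ID)
# The VM's children are its virtual-machine-interfaces.
for vmi_ref in vm.get_virtual_machine_interfaces() or []:
    vmi = vh.virtual_machine_interface_read(id=vmi_ref['uuid'])
    # Back-references from instance-ip objects must be removed first.
    for iip_ref in vmi.get_instance_ip_back_refs() or []:
        vh.instance_ip_delete(id=iip_ref['uuid'])
    vh.virtual_machine_interface_delete(id=vmi_ref['uuid'])
vh.virtual_machine_delete(id=VM_ID)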
Troubleshooting Link-Local Services in Contrail
Use the troubleshooting steps and guidelines in this topic when you have errors with
Contrail link-local services.
• Overview of Link-Local Services on page 520
• Troubleshooting Procedure for Link-Local Services on page 521
• Metadata Service on page 523
• Troubleshooting Procedure for Link-Local Metadata Service on page 523
Overview of Link-Local Services
Virtual machines might be set up to access specific services hosted on the fabric
infrastructure. For example, a virtual machine might be a Nova client that requires access
to the Nova API service running in the fabric network. Access to services hosted on the
fabric network can be provided by configuring the services as link-local services.
A link-local address and a service port are chosen for the specific service, which runs on
a TCP or UDP port on a server in the fabric. With the link-local service configured, virtual
machines can access the service using the link-local address. For link-local services,
Contrail uses the address range 169.254.169.x.
Link-local services can be configured using the Contrail WebUI: Configure > Infrastructure
> Link Local Services. They can also be provisioned programmatically, as in the sketch below.
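The following Python 2 sketch, modeled on the provisioning utilities shipped with Contrail, updates the global vrouter configuration with one link-local entry. The generated type and method names (LinklocalServiceEntryType, LinklocalServicesTypes, GlobalVrouterConfig, global_vrouter_config_update) are assumptions to verify against your installed vnc_api, and the addresses are examples only.

from vnc_api.vnc_api import (VncApi, GlobalVrouterConfig,
                             LinklocalServiceEntryType, LinklocalServicesTypes)

vh = VncApi()

# VMs reach 169.254.169.10:80, which maps to an example fabric server
# at 10.204.216.100:8080 (replace both with your own addresses).
entry = LinklocalServiceEntryType(
    linklocal_service_name='example-service',
    linklocal_service_ip='169.254.169.10',
    linklocal_service_port=80,
    ip_fabric_service_ip=['10.204.216.100'],
    ip_fabric_service_port=8080)

conf = GlobalVrouterConfig(linklocal_services=LinklocalServicesTypes([entry]))
vh.global_vrouter_config_update(conf)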
Troubleshooting Procedure for Link-Local Services
Use the following steps when you are troubleshooting link-local services errors.
1. Verify the reachability, from the compute node, of the fabric server that is hosting the link-local service.
2. Check the state of the virtual machine and the interface:
• Is the status of the virtual machine Up?
• Is the corresponding tap interface Active?
Checking the virtual machine status in the Contrail UI:
Checking the tap interface status in the http agent introspect:
http://<compute-node-ip>:8085/Snh_ItfReq?name=
3. Check the link-local configuration in the vrouter agent. Make sure the configured
link-local service is displayed.
http://<compute-node-ip>:8085/Snh_LinkLocalServiceInfo?
4. Check the flows for the link-local traffic. When the virtual machine communicates
with the configured link-local service, a forward and reverse flow for the communication
is set up. Check that the flow for this communication is created and that the flow action
is NAT.
http://<compute-node-ip>:8085/Snh_KFlowReq?flow_idx=
Check that all flow entries display NAT action programmed and display flags for the
fields (source or destination IP and ports) that have NAT programmed. Also shown
are the number of packets and bytes transmitted in the respective flows.
The forward flow displays the source IP of the virtual machine and the destination IP
of the link-local service. The reverse flow displays the source IP of the fabric host and
the destination IP of the compute node’s vhost interface. If the service is hosted on
the same compute node, the destination address of the reverse flow displays the
metadata address allocated to the virtual machine.
Note that the index and rflow index for the two flows are reversed.
You can also view similar information in the vrouter agent introspect page, where you
can see the policy and security group for the flow. Check that the flow actions display
as pass.
http://<compute-node-ip>:8085/Snh_FetchAllFlowRecords?
Metadata Service
OpenStack allows virtual instances to access metadata by sending an HTTP request to
the link-local address 169.254.169.254. The metadata request from the instance is proxied
to Nova, with additional HTTP header fields added, which Nova uses to identify the source
instance. Then Nova responds with appropriate metadata.
The Contrail vrouter acts as the proxy, trapping the metadata requests, adding the
necessary header fields, and sending the requests to the Nova API server.
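From inside a virtual machine, the proxy path can be exercised directly by requesting the well-known link-local address. The following is a minimal Python 2 sketch using the standard EC2-style metadata paths that Nova serves; a failure here points at the flow setup or the proxy configuration covered in the next section.

import urllib2

# Run inside the virtual machine. Each request is trapped by the vrouter,
# proxied to the Nova API server, and answered with this instance's data.
for path in ('latest/meta-data/instance-id', 'latest/meta-data/local-ipv4'):
    url = 'http://169.254.169.254/' + path
    try:
        print path, '->', urllib2.urlopen(url, timeout=5).read()
    except urllib2.URLError as err:
        print path, 'FAILED:', err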
Troubleshooting Procedure for Link-Local Metadata Service
Metadata service is also a link-local service, with a fixed service name (metadata), a
fixed service address (169.254.169.254:80), and a fabric address pointing to the server
where the OpenStack Nova API server is running. All of the configuration and
troubleshooting procedures for Contrail link-local services also apply to the metadata
service.
However, for metadata service, the flow is always set up to the compute node, so the
vrouter agent will update and proxy the HTTP request. The vrouter agent listens on a
local port to receive the metadata requests. Consequently, the reverse flow has the
compute node as the source IP, the local port on which the agent is listening is the source
port, and the instance’s metadata IP is the destination IP address.
After performing all of the troubleshooting procedures for link-local services, the following
additional steps can be used to further troubleshoot metadata service.
1. Check the metadata statistics for the number of metadata requests received by the vrouter agent, the number of proxy sessions set up with the Nova API server, and the number of internal errors encountered.
http://<compute-node-ip>:8085/Snh_MetadataInfo?
The port on which the vrouter agent listens for metadata requests is also displayed.
2. Check the metadata trace messages, which show the trail of metadata requests and
responses.
http://<compute-node-ip>:8085/Snh_SandeshTraceRequest?x=Metadata
3. Check the Nova configuration. On the server running the OpenStack service, inspect
the nova.conf file.
• Ensure that the metadata proxy is enabled, as follows:
service_neutron_metadata_proxy = True
service_quantum_metadata_proxy = True (on older installations)
• Check to see if the metadata proxy shared secret is set:
neutron_metadata_proxy_shared_secret
quantum_metadata_proxy_shared_secret (on older installations)
If the shared secret is set in nova.conf, the same secret must be configured on each
compute node in the METADATA section of the file /etc/contrail/contrail-vrouter-agent.conf,
as metadata_proxy_secret=<secret>.
4. Restart the vrouter agent after modifying the shared secret:
service contrail-vrouter restart
PART 5
Contrail Commands and APIs
• Contrail Commands on page 527
• Contrail Application Programming Interfaces (APIs) on page 547
CHAPTER 19
Contrail Commands
• contrail-logs (Accessing Log File Messages) on page 527
• contrail-status (Viewing Node Status)
• contrail-version (Viewing Version Information)
• service (Managing Services)
• Backing Up and Restoring Configurations on page 534
contrail-logs (Accessing Log File Messages)
A command-line utility, contrail-logs, uses REST APIs to retrieve system log messages,
object log messages, and trace messages.
• Command-Line Options for Contrail-Logs on page 527
• Option Descriptions on page 528
• Example Uses on page 529
Command-Line Options for Contrail-Logs
The command-line utility for accessing log file information is contrail-logs on the analytics
node. The following are the options supported at the command line for contrail-logs, as
viewed using the --help option.
[root@host]# contrail-logs --help
usage: contrail-logs [-h]
[--opserver-ip OPSERVER_IP]
[--opserver-port OPSERVER_PORT]
[--start-time START_TIME]
[--end-time END_TIME]
[--last LAST]
[--source SOURCE]
[--module {ControlNode, VRouterAgent, ApiServer, Schema, OpServer, Collector,
QueryEngine, ServiceMonitor, DnsAgent}]
[--category CATEGORY]
[--level LEVEL]
[--message-type MESSAGE_TYPE]
[--reverse]
[--verbose]
[--all]
[--object {ObjectVNTable, ObjectVMTable, ObjectSITable, ObjectVRouter,
ObjectBgpPeer, ObjectRoutingInstance, ObjectBgpRouter, ObjectXmppConnection,
ObjectCollectorInfo, ObjectGeneratorInfo, ObjectConfigNode}]
[--object-id OBJECT_ID]
[--object-select-field {ObjectLog,SystemLog}]
[--trace TRACE]
Option Descriptions
The following are the descriptions for each of the option arguments available for
contrail-logs.
optional arguments:
-h, --help
show this help message and exit
--opserver-ip OPSERVER_IP
IP address of OpServer (default: 127.0.0.1)
--opserver-port OPSERVER_PORT
Port of OpServer (default: 8081)
--start-time START_TIME
Logs start time (format now-10m, now-1h) (default: now-10m)
--end-time END_TIME
Logs end time (default: now)
--last LAST
Logs from last time period (format 10m, 1d) (default: None)
--source SOURCE
Logs from source address (default: None)
--module {ControlNode, VRouterAgent, ApiServer, Schema, OpServer, Collector,
QueryEngine, ServiceMonitor, DnsAgent}
Logs from module (default: None)
--category CATEGORY
Logs of category (default: None)
--level LEVEL
Logs of level (default: None)
--message-type MESSAGE_TYPE
Logs of message type (default: None)
--reverse
Show logs in reverse chronological order (default: False)
--verbose
Show internal information (default: True)
--all
Show all logs (default: False)
--object {ObjectVNTable, ObjectVMTable, ObjectSITable, ObjectVRouter,
ObjectBgpPeer, ObjectRoutingInstance, ObjectBgpRouter, ObjectXmppConnection,
ObjectCollectorInfo, ObjectGeneratorInfo, ObjectConfigNode}
Logs of object type (default: None)
--object-id OBJECT_ID
Logs of object name (default: None)
--object-select-field {ObjectLog,SystemLog}
Select field to filter the log (default: None)
--trace TRACE
Dump trace buffer (default: None)
Example Uses
The following examples show how you can use the option arguments available for
contrail-logs to retrieve the information you specify.
1. View only the system log messages from all boxes for the last 10 minutes.
contrail-logs
2. View all log messages (systemlog, objectlog, uve, ...) from all boxes for the last 10
minutes.
contrail-logs --all
3. View only the control node system log messages from all boxes for the last 10 minutes.
contrail-logs --module ControlNode
--module accepts the following values - ControlNode, VRouterAgent, ApiServer, Schema,
ServiceMonitor, Collector, OpServer, QueryEngine, DnsAgent
4. View the control node system log messages from source a6s23.contrail.juniper.net
for the last 10 minutes.
contrail-logs --module ControlNode --source a6s23.contrail.juniper.net
5. View the XMPP category system log messages from all modules on all boxes for the
last 10 minutes.
contrail-logs --category XMPP
6. View the system log messages from all the boxes from the last hour.
contrail-logs --last 1h
7. View the system log messages from the VN object named demo:admin:vn1 from all
boxes for the last 10 minutes.
contrail-logs --object ObjectVNTable --object-id demo:admin:vn1
--object accepts the following values - ObjectVNTable, ObjectVMTable, ObjectSITable,
ObjectVRouter, ObjectBgpPeer, ObjectRoutingInstance, ObjectBgpRouter,
ObjectXmppConnection, ObjectCollectorInfo
8. View the system log messages from all boxes for the last 10 minutes in reverse
chronological order:
contrail-logs --reverse
9. View the system log messages from a specific time interval and display them in a
specified date format.
contrail-logs --start-time "2013 May 12 18:30:27.0" --end-time "2013 May 12 18:31:27.0"
contrail-status (Viewing Node Status)
Syntax
[root@host ~]# contrail-status

Release Information
Command introduced in Contrail Release 1.0.

Description
Display a list of all components of a Contrail server node (such as control, configuration,
database, Web-UI, analytics, or vrouter) and report their current status of active or inactive.

Required Privilege Level
admin
Sample Output
The following example usage displays on a server that is configured for the roles of
control, analytics, configuration, web-ui, and database. It is not configured with the
vrouter role.
Sample Output
root@host> contrail-status
VRouter is NOT PRESENT
Agent is NOT PRESENT
== Control node ==
supervisor-control:            active
contrail-control               active
supervisor-dns:                active
contrail-dns                   active
contrail-named                 active
== Analytics ==
supervisor-analytics:          active
contrail-analytics-nodemgr     active
contrail-collector             active
contrail-opserver              active
contrail-qe                    active
redis-query                    active
redis-sentinel                 active
redis-uve                      active
== Contrail API server ==
supervisor-config:             active
contrail-api                   active
contrail-discovery             active
contrail-schema                active
contrail-svc-monitor           active
ifmap                          active
== Contrail quantum ==
== Contrail Web UI ==
supervisor-webui:              active
contrail-webui                 active
contrail-webui-middleware      active
== Contrail Database ==
supervisord-contrail-database: active
contrail-database              active
contrail-version (Viewing Version Information)

Syntax
[root@host]# contrail-version

Release Information
Command introduced in Contrail Release 1.0.

Description
Display a list of all installed components with their version and build numbers.

Required Privilege Level
admin
Sample Output
The following example shows version and build information for all installed components.
Sample Output
root@host> contrail-version
Package                        Version                  Build-ID | Repo | RPM Name
-----------------------------  -----------------------  -----------------------
contrail-analytics             1-1309090026.el6         141
contrail-analytics-venv        0.1-1309062310.el6       141
contrail-api                   0.1-1309090026.el6       141
contrail-api-lib               0.1-1309090026.el6       141
contrail-api-venv              0.1-1309080539.el6       141
contrail-control               2012.0-1309090026.el6    141
contrail-database              0.1-1309050028           141
contrail-dns                   1-1309090026.el6         141
contrail-fabric-utils          1-1309090026             141
contrail-libs                  1-1309090026.el6         141
contrail-nodejs                0.8.15-1309090026.el6    141
contrail-openstack-analytics   0.1-1309090026.el6       141
contrail-openstack-cfgm        0.1-1309090026.el6       141
contrail-openstack-control     0.1-1309090026.el6       141
Sample Output
The following example shows version and build information for only the installed contrail
components.
Sample Output
root@host> contrail-version | grep contrail
Package                        Version                  Build-ID | Repo | RPM Name
-----------------------------  -----------------------  -----------------------
contrail-analytics             1-1309090026.el6         141
contrail-analytics-venv        0.1-1309062310.el6       141
contrail-api                   0.1-1309090026.el6       141
contrail-api-lib               0.1-1309090026.el6       141
contrail-api-venv              0.1-1309080539.el6       141
contrail-control               2012.0-1309090026.el6    141
contrail-database              0.1-1309050028           141
contrail-dns                   1-1309090026.el6         141
contrail-fabric-utils          1-1309090026             141
contrail-libs                  1-1309090026.el6         141
contrail-nodejs                0.8.15-1309090026.el6    141
contrail-openstack-analytics   0.1-1309090026.el6       141
contrail-openstack-cfgm        0.1-1309090026.el6       141
contrail-openstack-control     0.1-1309090026.el6       141
contrail-openstack-database    0.1-1309090026.el6       141
contrail-openstack-webui       0.1-1309090026.el6       141
contrail-setup                 1-1309090026.el6         141
contrail-webui                 1-1309090026             141
openstack-quantum-contrail     2013.2-1309090026        141
service (Managing Services)
Syntax
service contrail-service ( start | stop | restart | status )

Release Information
Standard Linux command used for managing and viewing services in Contrail Controller Release 1.0.

Description
Start, stop, or restart a Contrail service. Display the status of a Contrail service.
All contrail services are managed by the process supervisord, which is open source
software written in Python. Each Contrail node type, such as compute, control, and so
on, has an instance of supervisord that, when running, launches Contrail services as child
processes. All supervisord instances display in contrail-status output with the prefix
supervisor. If the supervisord instance of a particular node type is not up, none of the
services for that node type are up. For more details about the open source supervisord
process, see http://www.supervisord.org.

Options
• start—start a named service.
• stop—stop a named service.
• restart—stop and restart a named service.
• status—display the status of a named service.

Required Privilege Level
admin
Sample Output
The following examples show usage for the contrail-collector service, which is configured only on nodes that have the roles of analytics, configuration, web-ui, or database.

[root@host]# service supervisor-analytics status
supervisord (pid 32116) is running...
[root@host]# service contrail-collector restart
contrail-collector: stopped
contrail-collector: started
[root@host]# service contrail-collector stop
contrail-collector: stopped
[root@host]# service contrail-collector start
contrail-collector: started
[root@host]# service contrail-collector status
contrail-collector               RUNNING    pid 20071, uptime 0:00:04
Backing Up and Restoring Configurations
• Back up Procedure on page 534
• Restore Procedure on page 535
• Restore Steps Continued on page 545
• Finishing on page 545
Back up Procedure
Configuration Backup and Restore
1. Take a snapshot of the Cassandra database on all database nodes. Copy it to a different host if you intend to reimage or reset the same servers.
root@a4s1:~# nodetool -h localhost -p 7199 snapshot
Requested creating snapshot for: all keyspaces
Snapshot directory: 1403160262349
NOTE: The snapshot could be in /home/cassandra/ or /var/lib/cassandra. Zip the cassandra directory and store it on a remote host.
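For example, a minimal sketch of archiving the snapshot and copying it off the host. The snapshot ID matches the sample above; the data path and the backup-host name are assumptions that you should adjust for your environment:

root@a4s1:~# tar -czf cassandra-snapshot-1403160262349.tar.gz /var/lib/cassandra    # or /home/cassandra
root@a4s1:~# scp cassandra-snapshot-1403160262349.tar.gz backup-host:/backups/      # backup-host is a placeholder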
2. Back up the MySQL database on the OpenStack node.
root@a4s1:~# cat /etc/contrail/mysql.token
422beb8ab4e9a6bdc5e7
root@a4s1:~# mysqldump -u root --password=422beb8ab4e9a6bdc5e7 --all-databases > openstack.sql
3. For testing purposes only: Bring the servers to a clean state.
fab reset_config
NOTE: This step is for testing purposes ONLY. It is not necessary if you are bringing up another node.
Restore Procedure
1. Stop Nova services.
root@a4s1:~# service nova-api stop
nova-api stop/waiting
root@a4s1:~# service nova-compute stop
nova-compute stop/waiting
root@a4s1:~# service nova-scheduler stop
nova-scheduler stop/waiting
root@a4s1:~# service nova-conductor stop
nova-conductor stop/waiting
2. Stop Glance service.
root@a4s1:~# service glance-api stop
glance-api stop/waiting
root@a4s1:~# service glance-registry stop
glance-registry stop/waiting
3. Stop Keystone service.
root@a4s1:~# service keystone stop
keystone stop/waiting
4. Stop Config service.
fab stop_cfgm
5. Stop Collector service.
fab stop_collector
6. Stop Database service.
fab stop_database
7. Restore all OpenStack databases.
root@a4s1:~# cat /etc/contrail/mysql.token
e5814139795b0c06e90a
root@a4s1:~# mysql -u root --password=e5814139795b0c06e90a < openstack.sql
8. Restore the Cassandra database using the script cass-db-restore-v4.sh.
NOTE: Copy the backed-up Cassandra database to this server before running the script.
root@a4s1:~# ./cass-db-restore-v4.sh
NAME
Script to restore Cassandra database from snapshot
SYNOPSIS
cass-db-restore-v4.sh [--help|-h] [--base_db_dir|-b] [--snapshot_dir|-s]
[--snapshot_name|-n]
MUST OPTIONS: base_db_dir, snapshot_dir, snapshot_name
DESCRIPTION
--base_db_dir, -b
Location of running Cassandra database
--snapshot_dir, -s
Snapshot location of Cassandra database
--snapshot_name, -n
Snapshot name
Restore Example
cass-db-restore-v4.sh -b /var/lib/cassandra/data -s /root/data.ss -n 1403068337967
root@a4s1:~# ./cass-db-restore-v4.sh -b /home/cassandra/data -s
/root/data.ss/cassandra/data -n 1403160262349
Snapshot available...continuing..
----------------dirs to be restored-------------
to_bgp_keyspace/route_target_table/
ContrailAnalytics/MessageTableSource/
ContrailAnalytics/MessageTableMessageType/
ContrailAnalytics/StatsTableByU64StrTag/
ContrailAnalytics/MessageTableModuleId/
ContrailAnalytics/ObjectValueTable/
ContrailAnalytics/MessageTable/
ContrailAnalytics/StatsTableByStrStrTag/
ContrailAnalytics/MessageTableTimestamp/
ContrailAnalytics/MessageTableCategory/
ContrailAnalytics/SystemObjectTable/
ContrailAnalytics/ObjectTable/
config_db_uuid/obj_fq_name_table/
config_db_uuid/obj_uuid_table/
system/schema_columns/
system/local/
system/schema_columnfamilies/
system/schema_keyspaces/
----------db files in snapshots--------------
=======check /home/cassandra/data/to_bgp_keyspace/route_target_table// ===============
to_bgp_keyspace-route_target_table-ic-1-CompressionInfo.db
to_bgp_keyspace-route_target_table-ic-2-CompressionInfo.db
to_bgp_keyspace-route_target_table-ic-1-Data.db
to_bgp_keyspace-route_target_table-ic-2-Data.db
to_bgp_keyspace-route_target_table-ic-1-Filter.db
to_bgp_keyspace-route_target_table-ic-2-Filter.db
to_bgp_keyspace-route_target_table-ic-1-Index.db
to_bgp_keyspace-route_target_table-ic-2-Index.db
to_bgp_keyspace-route_target_table-ic-1-Statistics.db
to_bgp_keyspace-route_target_table-ic-2-Statistics.db
to_bgp_keyspace-route_target_table-ic-1-Summary.db
to_bgp_keyspace-route_target_table-ic-2-Summary.db
to_bgp_keyspace-route_target_table-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/MessageTableSource//
===============
ContrailAnalytics-MessageTableSource-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTableSource-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTableSource-ic-1-Data.db
ContrailAnalytics-MessageTableSource-ic-2-Data.db
ContrailAnalytics-MessageTableSource-ic-1-Filter.db
ContrailAnalytics-MessageTableSource-ic-2-Filter.db
ContrailAnalytics-MessageTableSource-ic-1-Index.db
ContrailAnalytics-MessageTableSource-ic-2-Index.db
ContrailAnalytics-MessageTableSource-ic-1-Statistics.db
ContrailAnalytics-MessageTableSource-ic-2-Statistics.db
ContrailAnalytics-MessageTableSource-ic-1-Summary.db
ContrailAnalytics-MessageTableSource-ic-2-Summary.db
ContrailAnalytics-MessageTableSource-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/MessageTableMessageType//
===============
ContrailAnalytics-MessageTableMessageType-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTableMessageType-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTableMessageType-ic-1-Data.db
ContrailAnalytics-MessageTableMessageType-ic-2-Data.db
ContrailAnalytics-MessageTableMessageType-ic-1-Filter.db
ContrailAnalytics-MessageTableMessageType-ic-2-Filter.db
ContrailAnalytics-MessageTableMessageType-ic-1-Index.db
ContrailAnalytics-MessageTableMessageType-ic-2-Index.db
ContrailAnalytics-MessageTableMessageType-ic-1-Statistics.db
ContrailAnalytics-MessageTableMessageType-ic-2-Statistics.db
ContrailAnalytics-MessageTableMessageType-ic-1-Summary.db
ContrailAnalytics-MessageTableMessageType-ic-2-Summary.db
ContrailAnalytics-MessageTableMessageType-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/StatsTableByU64StrTag//
===============
ContrailAnalytics-StatsTableByU64StrTag-ic-1-CompressionInfo.db
ContrailAnalytics-StatsTableByU64StrTag-ic-1-Index.db
ContrailAnalytics-StatsTableByU64StrTag-ic-1-Data.db
ContrailAnalytics-StatsTableByU64StrTag-ic-1-Statistics.db
ContrailAnalytics-StatsTableByU64StrTag-ic-1-Filter.db
ContrailAnalytics-StatsTableByU64StrTag-ic-1-Summary.db
=======check /home/cassandra/data/ContrailAnalytics/MessageTableModuleId//
===============
ContrailAnalytics-MessageTableModuleId-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTableModuleId-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTableModuleId-ic-1-Data.db
ContrailAnalytics-MessageTableModuleId-ic-2-Data.db
ContrailAnalytics-MessageTableModuleId-ic-1-Filter.db
ContrailAnalytics-MessageTableModuleId-ic-2-Filter.db
ContrailAnalytics-MessageTableModuleId-ic-1-Index.db
ContrailAnalytics-MessageTableModuleId-ic-2-Index.db
ContrailAnalytics-MessageTableModuleId-ic-1-Statistics.db
ContrailAnalytics-MessageTableModuleId-ic-2-Statistics.db
ContrailAnalytics-MessageTableModuleId-ic-1-Summary.db
ContrailAnalytics-MessageTableModuleId-ic-2-Summary.db
ContrailAnalytics-MessageTableModuleId-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/ObjectValueTable//
===============
ContrailAnalytics-ObjectValueTable-ic-1-CompressionInfo.db
ContrailAnalytics-ObjectValueTable-ic-2-CompressionInfo.db
ContrailAnalytics-ObjectValueTable-ic-1-Data.db
ContrailAnalytics-ObjectValueTable-ic-2-Data.db
ContrailAnalytics-ObjectValueTable-ic-1-Filter.db
ContrailAnalytics-ObjectValueTable-ic-2-Filter.db
ContrailAnalytics-ObjectValueTable-ic-1-Index.db
ContrailAnalytics-ObjectValueTable-ic-2-Index.db
ContrailAnalytics-ObjectValueTable-ic-1-Statistics.db
ContrailAnalytics-ObjectValueTable-ic-2-Statistics.db
ContrailAnalytics-ObjectValueTable-ic-1-Summary.db
ContrailAnalytics-ObjectValueTable-ic-2-Summary.db
ContrailAnalytics-ObjectValueTable-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/MessageTable//
===============
ContrailAnalytics-MessageTable-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTable-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTable-ic-1-Data.db
ContrailAnalytics-MessageTable-ic-2-Data.db
ContrailAnalytics-MessageTable-ic-1-Filter.db
ContrailAnalytics-MessageTable-ic-2-Filter.db
ContrailAnalytics-MessageTable-ic-1-Index.db
ContrailAnalytics-MessageTable-ic-2-Index.db
ContrailAnalytics-MessageTable-ic-1-Statistics.db
ContrailAnalytics-MessageTable-ic-2-Statistics.db
ContrailAnalytics-MessageTable-ic-1-Summary.db
ContrailAnalytics-MessageTable-ic-2-Summary.db
ContrailAnalytics-MessageTable-ic-1-TOC.txt
ContrailAnalytics-MessageTable-ic-2-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/StatsTableByStrStrTag//
===============
ContrailAnalytics-StatsTableByStrStrTag-ic-1-CompressionInfo.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-CompressionInfo.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-Data.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-Data.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-Filter.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-Filter.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-Index.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-Index.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-Statistics.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-Statistics.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-Summary.db
ContrailAnalytics-StatsTableByStrStrTag-ic-2-Summary.db
ContrailAnalytics-StatsTableByStrStrTag-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/MessageTableTimestamp//
===============
ContrailAnalytics-MessageTableTimestamp-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTableTimestamp-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTableTimestamp-ic-1-Data.db
ContrailAnalytics-MessageTableTimestamp-ic-2-Data.db
ContrailAnalytics-MessageTableTimestamp-ic-1-Filter.db
ContrailAnalytics-MessageTableTimestamp-ic-2-Filter.db
ContrailAnalytics-MessageTableTimestamp-ic-1-Index.db
ContrailAnalytics-MessageTableTimestamp-ic-2-Index.db
ContrailAnalytics-MessageTableTimestamp-ic-1-Statistics.db
ContrailAnalytics-MessageTableTimestamp-ic-2-Statistics.db
ContrailAnalytics-MessageTableTimestamp-ic-1-Summary.db
ContrailAnalytics-MessageTableTimestamp-ic-2-Summary.db
ContrailAnalytics-MessageTableTimestamp-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/MessageTableCategory//
===============
ContrailAnalytics-MessageTableCategory-ic-1-CompressionInfo.db
ContrailAnalytics-MessageTableCategory-ic-2-CompressionInfo.db
ContrailAnalytics-MessageTableCategory-ic-1-Data.db
ContrailAnalytics-MessageTableCategory-ic-2-Data.db
ContrailAnalytics-MessageTableCategory-ic-1-Filter.db
ContrailAnalytics-MessageTableCategory-ic-2-Filter.db
ContrailAnalytics-MessageTableCategory-ic-1-Index.db
ContrailAnalytics-MessageTableCategory-ic-2-Index.db
ContrailAnalytics-MessageTableCategory-ic-1-Statistics.db
ContrailAnalytics-MessageTableCategory-ic-2-Statistics.db
ContrailAnalytics-MessageTableCategory-ic-1-Summary.db
ContrailAnalytics-MessageTableCategory-ic-2-Summary.db
ContrailAnalytics-MessageTableCategory-ic-1-TOC.txt
=======check /home/cassandra/data/ContrailAnalytics/SystemObjectTable//
===============
ContrailAnalytics-SystemObjectTable-ic-1-CompressionInfo.db
ContrailAnalytics-SystemObjectTable-ic-1-Statistics.db
ContrailAnalytics-SystemObjectTable-ic-1-Data.db
ContrailAnalytics-SystemObjectTable-ic-1-Summary.db
ContrailAnalytics-SystemObjectTable-ic-1-Filter.db
ContrailAnalytics-SystemObjectTable-ic-1-TOC.txt
ContrailAnalytics-SystemObjectTable-ic-1-Index.db
=======check /home/cassandra/data/ContrailAnalytics/ObjectTable//
===============
ContrailAnalytics-ObjectTable-ic-1-CompressionInfo.db
ContrailAnalytics-ObjectTable-ic-2-CompressionInfo.db
ContrailAnalytics-ObjectTable-ic-1-Data.db
ContrailAnalytics-ObjectTable-ic-2-Data.db
ContrailAnalytics-ObjectTable-ic-1-Filter.db
ContrailAnalytics-ObjectTable-ic-2-Filter.db
ContrailAnalytics-ObjectTable-ic-1-Index.db
ContrailAnalytics-ObjectTable-ic-2-Index.db
ContrailAnalytics-ObjectTable-ic-1-Statistics.db
ContrailAnalytics-ObjectTable-ic-2-Statistics.db
ContrailAnalytics-ObjectTable-ic-1-Summary.db
ContrailAnalytics-ObjectTable-ic-2-Summary.db
ContrailAnalytics-ObjectTable-ic-1-TOC.txt
=======check /home/cassandra/data/config_db_uuid/obj_fq_name_table//
===============
config_db_uuid-obj_fq_name_table-ic-1-CompressionInfo.db
config_db_uuid-obj_fq_name_table-ic-2-CompressionInfo.db
config_db_uuid-obj_fq_name_table-ic-1-Data.db
config_db_uuid-obj_fq_name_table-ic-2-Data.db
config_db_uuid-obj_fq_name_table-ic-1-Filter.db
config_db_uuid-obj_fq_name_table-ic-2-Filter.db
config_db_uuid-obj_fq_name_table-ic-1-Index.db
config_db_uuid-obj_fq_name_table-ic-2-Index.db
config_db_uuid-obj_fq_name_table-ic-1-Statistics.db
config_db_uuid-obj_fq_name_table-ic-2-Statistics.db
config_db_uuid-obj_fq_name_table-ic-1-Summary.db
config_db_uuid-obj_fq_name_table-ic-2-Summary.db
config_db_uuid-obj_fq_name_table-ic-1-TOC.txt
=======check /home/cassandra/data/config_db_uuid/obj_uuid_table//
===============
config_db_uuid-obj_uuid_table-ic-1-CompressionInfo.db
config_db_uuid-obj_uuid_table-ic-2-CompressionInfo.db
config_db_uuid-obj_uuid_table-ic-1-Data.db
config_db_uuid-obj_uuid_table-ic-2-Data.db
config_db_uuid-obj_uuid_table-ic-1-Filter.db
config_db_uuid-obj_uuid_table-ic-2-Filter.db
config_db_uuid-obj_uuid_table-ic-1-Index.db
config_db_uuid-obj_uuid_table-ic-2-Index.db
config_db_uuid-obj_uuid_table-ic-1-Statistics.db
config_db_uuid-obj_uuid_table-ic-2-Statistics.db
config_db_uuid-obj_uuid_table-ic-1-Summary.db
config_db_uuid-obj_uuid_table-ic-2-Summary.db
config_db_uuid-obj_uuid_table-ic-1-TOC.txt
=======check /home/cassandra/data/system/schema_columns// ===============
system-schema_columns-ic-5-CompressionInfo.db
system-schema_columns-ic-5-Data.db
system-schema_columns-ic-5-Filter.db
system-schema_columns-ic-5-Index.db
system-schema_columns-ic-5-Statistics.db
system-schema_columns-ic-5-Summary.db
system-schema_columns-ic-5-TOC.txt
system-schema_columns-ic-6-CompressionInfo.db
system-schema_columns-ic-6-Data.db
system-schema_columns-ic-6-Filter.db
system-schema_columns-ic-6-Index.db
system-schema_columns-ic-6-Statistics.db
system-schema_columns-ic-6-Summary.db
=======check /home/cassandra/data/system/local// ===============
system-local-ic-6-CompressionInfo.db
system-local-ic-6-Data.db
system-local-ic-6-Filter.db
system-local-ic-6-Index.db
system-local-ic-6-Statistics.db
system-local-ic-6-Summary.db
system-local-ic-6-TOC.txt
system-local-ic-7-CompressionInfo.db
system-local-ic-7-Data.db
system-local-ic-7-Filter.db
system-local-ic-7-Index.db
system-local-ic-7-Statistics.db
system-local-ic-7-Summary.db
system-local-ic-7-TOC.txt
system-local-ic-8-CompressionInfo.db
system-local-ic-8-Data.db
system-local-ic-8-Filter.db
system-local-ic-8-Index.db
system-local-ic-8-Statistics.db
system-local-ic-8-Summary.db
system-local-ic-8-TOC.txt
=======check /home/cassandra/data/system/schema_columnfamilies//
===============
system-schema_columnfamilies-ic-45-CompressionInfo.db
system-schema_columnfamilies-ic-47-CompressionInfo.db
system-schema_columnfamilies-ic-45-Data.db
system-schema_columnfamilies-ic-47-Data.db
system-schema_columnfamilies-ic-45-Filter.db
system-schema_columnfamilies-ic-47-Filter.db
system-schema_columnfamilies-ic-45-Index.db
system-schema_columnfamilies-ic-47-Index.db
system-schema_columnfamilies-ic-45-Statistics.db
system-schema_columnfamilies-ic-47-Statistics.db
system-schema_columnfamilies-ic-45-Summary.db
system-schema_columnfamilies-ic-47-Summary.db
system-schema_columnfamilies-ic-45-TOC.txt
system-schema_columnfamilies-ic-47-TOC.txt
system-schema_columnfamilies-ic-46-CompressionInfo.db
system-schema_columnfamilies-ic-48-CompressionInfo.db
system-schema_columnfamilies-ic-46-Data.db
system-schema_columnfamilies-ic-48-Data.db
system-schema_columnfamilies-ic-46-Filter.db
system-schema_columnfamilies-ic-48-Filter.db
system-schema_columnfamilies-ic-46-Index.db
system-schema_columnfamilies-ic-48-Index.db
system-schema_columnfamilies-ic-46-Statistics.db
system-schema_columnfamilies-ic-48-Statistics.db
system-schema_columnfamilies-ic-46-Summary.db
system-schema_columnfamilies-ic-48-Summary.db
system-schema_columnfamilies-ic-46-TOC.txt
=======check /home/cassandra/data/system/schema_keyspaces//
===============
system-schema_keyspaces-ic-5-CompressionInfo.db
system-schema_keyspaces-ic-6-CompressionInfo.db
system-schema_keyspaces-ic-7-CompressionInfo.db
system-schema_keyspaces-ic-5-Data.db
system-schema_keyspaces-ic-6-Data.db
system-schema_keyspaces-ic-7-Data.db
system-schema_keyspaces-ic-5-Filter.db
system-schema_keyspaces-ic-6-Filter.db
system-schema_keyspaces-ic-7-Filter.db
system-schema_keyspaces-ic-5-Index.db
system-schema_keyspaces-ic-6-Index.db
system-schema_keyspaces-ic-7-Index.db
system-schema_keyspaces-ic-5-Statistics.db
system-schema_keyspaces-ic-6-Statistics.db
system-schema_keyspaces-ic-7-Statistics.db
system-schema_keyspaces-ic-5-Summary.db
system-schema_keyspaces-ic-6-Summary.db
system-schema_keyspaces-ic-7-Summary.db
system-schema_keyspaces-ic-5-TOC.txt
system-schema_keyspaces-ic-6-TOC.txt
root@a4s1:~#
Restore Steps Continued
9. Start Nova services.
root@a4s1:~# service nova-api start
nova-api start/running, process 25075
root@a4s1:~# service nova-compute start
nova-compute start/running, process 25527
root@a4s1:~# service nova-scheduler start
nova-scheduler start/running, process 25509
root@a4s1:~# service nova-conductor start
nova-conductor start/running, process 25545
10. Start Glance service.
root@a4s1:~# service glance-api start
glance-api start/running, process 25779
root@a4s1:~# service glance-registry start
glance-registry start/running, process 25793
11. Start Keystone service.
root@a4s1:~# service keystone start
keystone start/running, process 25806
12. Start Config service.
fab start_cfgm
13. Start Collector service.
fab start_collector
14. Start Database service.
fab start_database
Finishing
Purpose
After all of the services are started again, all of the virtual networks and policies should be restored in the new node setup.
CHAPTER 20
Contrail Application Programming Interfaces (APIs)
• Contrail Analytics Application Programming Interfaces (APIs) and User-Visible Entities (UVEs) on page 547
• Contrail Node Status on page 558
• Log and Flow Information APIs on page 567
• Working with Neutron on page 573
• Support for Amazon VPC APIs on Contrail OpenStack on page 576
Contrail Analytics Application Programming Interfaces (APIs) and User-Visible Entities (UVEs)
The Contrail analytics-api server provides a REST API interface to extract the operational state of the Contrail system.
The APIs are used by the Contrail Web user interface to present the operational state to users. Other applications might also use the server's REST APIs for analytics or other purposes.
This section describes some of the more common APIs and their uses. To see all of the available APIs, navigate the URL tree at the REST interface, starting at the root:
http://<ip>:<analytics-api-port>
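For example, you can walk the tree with curl. This sketch assumes the analytics-api server listens on port 8081, as in the sample commands later in this section:

curl http://<analytics-api-ip>:8081/analytics | python -mjson.tool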
• User-Visible Entities on page 547
• Common UVEs in Contrail on page 549
• Virtual Network UVE on page 549
• Virtual Machine UVE on page 549
• vRouter UVE on page 549
• UVEs for Contrail Nodes on page 550
• Wild Card Query of UVEs on page 550
• Filtering UVE Information on page 551
User-Visible Entities
In Contrail, a User-Visible Entity (UVE) is an object entity that might span multiple
components in Contrail and might require aggregation before the complete information
of the UVE is presented. Examples of UVEs in Contrail are virtual network, virtual machine,
vRouter, and similar objects. Complete operational information for a virtual network
might span multiple vRouters, config nodes, control nodes, and the like. The analytics-api
server aggregates all of this information through REST APIs.
To get information about a UVE, you must have the UVE type and the UVE key. In Contrail,
UVEs are identified by type, such as virtual network, virtual machine, vRouter, and so on.
A system-wide unique key is associated with each UVE. The key type could be different,
based on the UVE type. For example, perhaps a virtual network uses its name as its UVE
key, and in the same system, a virtual machine uses its UUID as its key.
The URL /analytics/uves shows the list of all UVE types available in the system.
The following is sample output from /analytics/uves:
[
    {
        href: "http://10.84.13.45:8081/analytics/uves/xmpp-peers",
        name: "xmpp-peers"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/service-instances",
        name: "service-instances"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/config-nodes",
        name: "config-nodes"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/virtual-machines",
        name: "virtual-machines"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/bgp-routers",
        name: "bgp-routers"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/collectors",
        name: "collectors"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/service-chains",
        name: "service-chains"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/generators",
        name: "generators"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/bgp-peers",
        name: "bgp-peers"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/virtual-networks",
        name: "virtual-networks"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/vrouters",
        name: "vrouters"
    },
    {
        href: "http://10.84.13.45:8081/analytics/uves/dns-nodes",
        name: "dns-nodes"
    }
]
Common UVEs in Contrail
This section presents descriptions of some common UVEs in Contrail.
Virtual Network UVE
This UVE provides information associated with a virtual network, such as:
• list of networks connected to this network
• list of virtual machines spawned in this network
• list of access control lists (ACLs) associated with this virtual network
• global input and output statistics
• input and output statistics per virtual network pair
The REST API to get a UVE for a specific virtual network is through HTTP GET, using the URL:
/analytics/uves/virtual-network/<key>
The REST API to get UVEs for all virtual networks is through HTTP GET, using the URL:
/analytics/uves/virtual-networks
Virtual Machine UVE
This UVE provides information associated with a virtual machine, such as:
• list of interfaces in this virtual machine
• list of floating IPs associated with each interface
• input and output statistics
The REST API to get a UVE for a specific virtual machine is through HTTP GET, using the URL:
/analytics/uves/virtual-machine/<key>
The REST API to get UVEs for all virtual machines is through HTTP GET, using the URL:
/analytics/uves/virtual-machines
vRouter UVE
This UVE provides information associated with a vRouter, such as:
• virtual networks present on this vRouter
• virtual machines spawned on the server of this vRouter
• statistics of the traffic flowing through this vRouter
The REST API to get a UVE for a specific vRouter is through HTTP GET, using the URL:
/analytics/uves/vrouter/<key>
The REST API to get UVEs for all vRouters is through HTTP GET, using the URL:
/analytics/uves/vrouters
UVEs for Contrail Nodes
There are multiple node types in Contrail (including the node type vRouter previously described). Other node types include control node, config node, analytics node, and compute node.
There is a UVE for each node type. The common information associated with each node UVE includes:
• the IP address of the node
• a list of processes running on the node
• the CPU and memory utilization of the running processes
Each UVE also has node-specific information, such as:
• the control node UVE has information about its connectivity to the vRouter and other control nodes
• the analytics node UVE has information about the number of generators connected
The REST API to get a UVE for a specific config node is through HTTP GET, using the URL:
/analytics/uves/config-node/<key>
The REST API to get UVEs for all config nodes is through HTTP GET, using the URL:
/analytics/uves/config-nodes
NOTE: Use similar syntax to get UVEs for each of the different types of nodes, substituting the node type that you want in place of config-node.
Wild Card Query of UVEs
You can use wildcard queries when you want to get multiple UVEs at the same time.
Example queries are the following:
The following HTTP GET with wildcard retrieves all virtual network UVEs:
/analytics/uves/virtual-network/*
The following HTTP GET with wildcard retrieves all virtual network UVEs with name
starting with project1:
/analytics/uves/virtual-network/project1*
Filtering UVE Information
It is possible to retrieve filtered UVE information. The following flags enable you to retrieve
partial, filtered information about UVEs.
Supported filter flags include:
sfilt : filter by source (usually the hostname of the generator)
mfilt : filter by module (the module name of the generator)
cfilt : filter by content, useful when only part of a UVE needs to be retrieved
kfilt : filter by UVE keys, useful to get multiple, but not all, UVEs of a particular type
Examples
The following HTTP GET with filter retrieves information about virtual network vn1 as
provided by the source src1:
/analytics/uves/virtual-network/vn1?sfilt=src1
The following HTTP GET with filter retrieves information about virtual network vn1 as
provided by all ApiServer modules:
/analytics/uves/virtual-network/vn1?mfilt=ApiServer
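The following sketch retrieves only the UVEs for two specific keys; it assumes that kfilt accepts a comma-separated list of UVE keys, and the key names vn1 and vn2 are illustrative:
/analytics/uves/virtual-network/*?kfilt=vn1,vn2
Similarly, a cfilt sketch that retrieves only one content structure of the vn1 UVE; it assumes that cfilt takes the name of a content structure, such as the UveVirtualNetworkConfig structure that appears in the sample output that follows:
/analytics/uves/virtual-network/vn1?cfilt=UveVirtualNetworkConfig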
Example Output: Virtual Network UVE
Example output for a virtual network UVE:
[root@a3s14 ~]# curl
127.0.0.1:8081/analytics/virtual-network/default-domain:demo:front-end | python
-mjson.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2576 100 2576 0 0 152k 0 --:--:-- --:--:-- --:--:-- 157k
{
"UveVirtualNetworkAgent": {
"acl": [
[
{
"@type": "string"
},
"a3s18:VRouterAgent"
]
],
"in_bytes": {
"#text": "2232972057",
"@aggtype": "counter",
"@type": "i64"
},
"in_stats": {
"@aggtype": "append",
"@type": "list",
"list": {
"@size": "3",
"@type": "struct",
"UveInterVnStats": [
{
"bytes": {
"#text": "2114516371",
"@type": "i64"
},
"other_vn": {
"#text": "default-domain:demo:back-end",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "5122001",
"@type": "i64"
}
},
{
"bytes": {
"#text": "1152123",
"@type": "i64"
},
"other_vn": {
"#text": "__FABRIC__",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "11323",
"@type": "i64"
}
},
{
"bytes": {
"#text": "8192",
"@type": "i64"
},
"other_vn": {
"#text": "default-domain:demo:front-end",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "50",
"@type": "i64"
}
}
]
}
},
"in_tpkts": {
"#text": "5156342",
"@aggtype": "counter",
"@type": "i64"
},
"interface_list": {
"@aggtype": "union",
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"tap2158f77c-ec"
]
}
},
"out_bytes": {
"#text": "2187615961",
"@aggtype": "counter",
"@type": "i64"
},
"out_stats": {
"@aggtype": "append",
"@type": "list",
"list": {
"@size": "4",
"@type": "struct",
"UveInterVnStats": [
{
"bytes": {
"#text": "2159083215",
"@type": "i64"
},
"other_vn": {
"#text": "default-domain:demo:back-end",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "5143693",
"@type": "i64"
}
},
{
"bytes": {
"#text": "1603041",
"@type": "i64"
},
"other_vn": {
"#text": "__FABRIC__",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "9595",
"@type": "i64"
}
},
{
"bytes": {
"#text": "24608",
"@type": "i64"
},
"other_vn": {
"#text": "__UNKNOWN__",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "408",
"@type": "i64"
}
},
{
"bytes": {
"#text": "8192",
"@type": "i64"
},
"other_vn": {
"#text": "default-domain:demo:front-end",
"@aggtype": "listkey",
"@type": "string"
},
"tpkts": {
"#text": "50",
"@type": "i64"
}
}
]
}
},
"out_tpkts": {
"#text": "5134830",
"@aggtype": "counter",
"@type": "i64"
},
"virtualmachine_list": {
"@aggtype": "union",
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"dd09f8c3-32a8-456f-b8cc-fab15189f50f"
]
}}
},
"UveVirtualNetworkConfig": {
"connected_networks": {
"@aggtype": "union",
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"default-domain:demo:back-end"
]
}
},
"routing_instance_list": {
"@aggtype": "union",
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"front-end"
]
}
},
"total_acl_rules": [
[
{
"#text": "3",
"@type": "i32"
},
":",
"a3s14:Schema"
]
]
}
}
Example Output: Virtual Machine UVE
Example output for a virtual machine UVE:
[root@a3s14 ~]# curl
127.0.0.1:8081/analytics/virtual-machine/f38eb47e-63d2-4b39-80de-8fe68e6af1e4 |
python -mjson.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 736 100 736 0 0 160k 0 --:--:-- --:--:-- --:--:-- 179k
{
"UveVirtualMachineAgent": {
"interface_list": [
[
{
"@type": "list",
"list": {
"@size": "1",
"@type": "struct",
"VmInterfaceAgent": [
{
"in_bytes": {
"#text": "2188895907",
"@aggtype": "counter",
"@type": "i64"
},
"in_pkts": {
"#text": "5130901",
"@aggtype": "counter",
"@type": "i64"
},
"ip_address": {
"#text": "192.168.2.253",
"@type": "string"
},
"name": {
"#text":
"f38eb47e-63d2-4b39-80de-8fe68e6af1e4:ccb085a0-c994-4034-be0f-6fd5ad08ce83",
"@type": "string"
},
"out_bytes": {
"#text": "2201821626",
"@aggtype": "counter",
"@type": "i64"
},
"out_pkts": {
"#text": "5153526",
"@aggtype": "counter",
"@type": "i64"
},
"virtual_network": {
"#text": "default-domain:demo:back-end",
"@aggtype": "listkey",
"@type": "string"
}
}
]
}
},
"a3s19:VRouterAgent"
]
]
}
}
Example Output: vRouter UVE
Example output for a vRouter UVE:
[root@a3s14 ~]# curl 127.0.0.1:8081/analytics/vrouter/a3s18 | python -mjson.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 706 100 706 0 0 142k 0 --:--:-- --:--:-- --:--:-- 172k
{
"VrouterAgent": {
"collector": [
[
{
"#text": "10.84.17.1",
"@type": "string"
},
"a3s18:VRouterAgent"
]
],
"connected_networks": [
[
{
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"default-domain:demo:front-end"
]
}
},
"a3s18:VRouterAgent"
]
],
"interface_list": [
[
{
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"tap2158f77c-ec"
]
}
},
"a3s18:VRouterAgent"
]
],
"virtual_machine_list": [
[
{
"@type": "list",
"list": {
"@size": "1",
"@type": "string",
"element": [
"dd09f8c3-32a8-456f-b8cc-fab15189f50f"
]
}
},
"a3s18:VRouterAgent"
]
],
"xmpp_peer_list": [
[
{
"@type": "list",
"list": {
"@size": "2",
"@type": "string",
"element": [
"10.84.17.2",
"10.84.17.3"
]
}
},
"a3s18:VRouterAgent"
]
]
}
}
Contrail Node Status
• Overview on page 558
• UVE for NodeStatus on page 558
• Node Status Features on page 559
• Using Introspect to Get Process Status on page 564
• contrail-status script on page 565
Overview
This topic describes how to view the status of a Contrail node on a physical server. Contrail
nodes include config, control, analytics, compute, and so on.
UVE for NodeStatus
The User-Visible Entity (UVE) mechanism is used to aggregate and send the status
information. All node types send a NodeStatus structure in their respective node UVEs.
The following is a control node UVE of NodeStatus:
struct NodeStatus {
1: string name (key="ObjectBgpRouter")
2: optional bool deleted
3: optional string status
// Sent by process
4: optional list<process_info.ProcessStatus> process_status (aggtype="union")
// Sent by node manager
5: optional list<process_info.ProcessInfo> process_info (aggtype="union")
6: optional string description
}
uve sandesh NodeStatusUVE {
1: NodeStatus data
}
Node Status Features
The most important features of NodeStatus include:
• ProcessStatus
• ProcessInfo

ProcessStatus
ProcessStatus, also sent as process_status, is sent by the processes corresponding to the virtual node and displays the status of the process, including an aggregate state indicating whether the process is functional or non-functional. The process_status includes the state of the process connections (ConnectionInfo) to important services and other information necessary for the process to be functional. Each process sends its NodeStatus information, which is aggregated as union (aggtype="union") at the analytics node. The following is the ProcessStatus structure:
struct ProcessStatus {
    1: string module_id
    2: string instance_id
    3: string state
    4: optional list<ConnectionInfo> connection_infos
    5: optional string description
}

struct ConnectionInfo {
    1: string type
    2: string name
    3: optional list<string> server_addrs
    4: string status
    5: optional string description
}
ProcessInfo
ProcessInfo, also sent as process_info, is sent by the node manager, /usr/bin/contrail-nodemgr. The node manager is a monitor process, one per Contrail virtual node, that tracks the running state of the processes. The following is the ProcessInfo structure:
struct ProcessInfo {
    1: string                 process_name
    2: string                 process_state
    3: u32                    start_count
    4: u32                    stop_count
    5: u32                    exit_count
    // time when the process last entered running stage
    6: optional string        last_start_time
    7: optional string        last_stop_time
    8: optional string        last_exit_time
    9: optional list<string>  core_file_list
}
Example: NodeStatus
The following is an example output of NodeStatus obtained from the REST API:
http://:8081/analytics/uves/control-...ilt=NodeStatus
{
NodeStatus:
{
process_info:
[
{
process_name: "contrail-control",
process_state: "PROCESS_STATE_RUNNING",
last_stop_time: null,
start_count: 1,
core_file_list: [ ],
last_start_time: "1409002143776558",
stop_count: 0,
last_exit_time: null,
exit_count: 0
},
{
process_name: "contrail-control-nodemgr",
process_state: "PROCESS_STATE_RUNNING",
last_stop_time: null,
start_count: 1,
core_file_list: [ ],
last_start_time: "1409002141773481",
stop_count: 0,
last_exit_time: null,
exit_count: 0
},
{
process_name: "contrail-dns",
process_state: "PROCESS_STATE_RUNNING",
last_stop_time: null,
start_count: 1,
core_file_list: [ ],
last_start_time: "1409002145778383",
stop_count: 0,
last_exit_time: null,
exit_count: 0
},
{
process_name: "contrail-named",
process_state: "PROCESS_STATE_RUNNING",
last_stop_time: null,
start_count: 1,
core_file_list: [ ],
last_start_time: "1409002147780118",
stop_count: 0,
last_exit_time: null,
exit_count: 0
}
],
process_status:
[
{
instance_id: "0",
module_id: "ControlNode",
state: "Functional",
description: null,
connection_infos:
[
{
server_addrs:
[
"10.84.13.45:8443"
],
status: "Up",
type: "IFMap",
name: "IFMapServer",
description: "Connection with IFMap Server (irond)"
},
{
server_addrs:
[
"10.84.13.45:8086"
],
status: "Up",
type: "Collector",
name: null,
description: "Established"
},
{
server_addrs:
[
"10.84.13.45:5998"
],
status: "Up",
type: "Discovery",
name: "Collector",
description: "SubscribeResponse"
},
{
server_addrs:
[
"10.84.13.45:5998"
],
status: "Up",
type: "Discovery",
name: "IfmapServer",
description: "SubscribeResponse"
},
{
server_addrs:
[
"10.84.13.45:5998"
],
status: "Up",
type: "Discovery",
name: "xmpp-server",
description: "Publish Response - HeartBeat"
}
]
}
]
}
}
Using Introspect to Get Process Status
You can also view the state of a specific process by using the introspect mechanism.

Example: Introspect of NodeStatus
The following is an example of the process state of contrail-control obtained by using:
http://server-ip:8083/Snh_SandeshUVECacheReq?x=NodeStatus
NOTE: The example output is the ProcessStatus of only one process of contrail-control. It does not show the full aggregated status of the control node through its UVE (as in the previous example).
root@a6s45:~# curl http://10.84.13.45:8083/Snh_SandeshU...q?x=NodeStatus
<?xml-stylesheet type="text/xsl" href="/universal_parse.xsl"?><__NodeStatusUVE_list
type="slist"><NodeStatusUVE type="sandesh"><data type="struct"
identifier="1"><NodeStatus><name type="string" identifier="1"
key="ObjectBgpRouter">a6s45</name><process_status type="list" identifier="4"
aggtype="union"><list type="struct" size="1"><ProcessStatus><module_id type="string"
identifier="1">ControlNode</module_id><instance_id type="string"
identifier="2">0</instance_id><state type="string"
identifier="3">Functional</state><connection_infos type="list" identifier="4"><list
type="struct" size="5"><ConnectionInfo><type type="string"
identifier="1">IFMap</type><name type="string"
identifier="2">IFMapServer</name><server_addrs type="list" identifier="3"><list
type="string"
size="1"><element>10.84.13.45:8443</element></list></server_addrs><status
type="string" identifier="4">Up</status><description type="string"
identifier="5">Connection with IFMap Server
(irond)</description></ConnectionInfo><ConnectionInfo><type type="string"
identifier="1">Collector</type><name type="string"
identifier="2"></name><server_addrs type="list" identifier="3"><list type="string"
size="1"><element>10.84.13.45:8086</element></list></server_addrs><status
type="string" identifier="4">Up</status><description type="string"
identifier="5">Established</description></ConnectionInfo><ConnectionInfo><type
type="string" identifier="1">Discovery</type><name type="string"
identifier="2">Collector</name><server_addrs type="list" identifier="3"><list
type="string"
size="1"><element>10.84.13.45:5998</element></list></server_addrs><status
type="string" identifier="4">Up</status><description type="string"
identifier="5">SubscribeResponse</description></ConnectionInfo><ConnectionInfo><type
type="string" identifier="1">Discovery</type><name type="string"
identifier="2">IfmapServer</name><server_addrs type="list" identifier="3"><list
type="string"
size="1"><element>10.84.13.45:5998</element></list></server_addrs><status
type="string" identifier="4">Up</status><description type="string"
identifier="5">SubscribeResponse</description></ConnectionInfo><ConnectionInfo><type
type="string" identifier="1">Discovery</type><name type="string"
identifier="2">xmpp-server</name><server_addrs type="list" identifier="3"><list
type="string"
size="1"><element>10.84.13.45:5998</element></list></server_addrs><status
type="string" identifier="4">Up</status><description type="string"
identifier="5">Publish Response HeartBeat</description></ConnectionInfo></list></connection_infos><description
type="string"
identifier="5"></description></ProcessStatus></list></process_status></NodeStatus></data></NodeStatusUVE><SandeshUVECacheResp
type="sandesh"><returned type="u32" identifier="1">1</returned><more type="bool"
identifier="0">false</more></SandeshUVECacheResp></__NodeStatusUVE_list>
contrail-status script
The contrail-status script is used to give the status of the Contrail processes on a server.
The contrail-status script first checks if a process is running, and if it is, performs introspect
into the process to get its functionality status, then outputs the aggregate status.
The possible states to display include:
• active - the process is running and functional; the internal state is good
• inactive - stopped by user
• failed - the process exited too quickly and has not restarted
• initializing - the process is running, but the internal state is not yet functional

Example Output: Contrail-Status Script
The following is an example output from the contrail-status script.
root@a6s45:~# contrail-status
== Contrail vRouter ==
supervisor-vrouter:            active
contrail-vrouter-agent         active
contrail-vrouter-nodemgr       active

== Contrail Control ==
supervisor-control:            active
contrail-control               active
contrail-control-nodemgr       active
contrail-dns                   active
contrail-named                 active

== Contrail Analytics ==
supervisor-analytics:          active
contrail-analytics-api         active
contrail-analytics-nodemgr     active
contrail-collector             active
contrail-query-engine          active

== Contrail Config ==
supervisor-config:             active
contrail-api:0                 active
contrail-config-nodemgr        active
contrail-discovery:0           active
contrail-schema                active
contrail-svc-monitor           active
ifmap                          active
rabbitmq-server                active

== Contrail Web UI ==
supervisor-webui:              active
contrail-webui                 active
contrail-webui-middleware      active
redis-webui                    active

== Contrail Database ==
supervisord-contrail-database: active
contrail-database              active
contrail-database-nodemgr      active
Log and Flow Information APIs
In Contrail, log and flow analytics information is collected and stored using a horizontally scalable Contrail collector and NoSQL database. The analytics-api server provides REST APIs to extract this information using queries. The queries use well-known SQL syntax, hiding the underlying complexity of the NoSQL tables.
• HTTP GET APIs on page 567
• HTTP POST API on page 568
• POST Data Format Example on page 568
• Query Types on page 570
• Examining Query Status on page 570
• Examining Query Chunks on page 570
• Example Queries for Log and Flow Data on page 570
HTTP GET APIs
Use the following GET APIs to identify tables and APIs available for querying.
/analytics/tables -- lists the SQL-type tables available for querying, including the hrefs
for each of the tables
/analytics/table/<table> -- lists the APIs available to get information for a given table
/analytics/table/<table>/schema -- lists the schema for a given table
HTTP POST API
Use the following POST API information to extract data from a table.
/analytics/query -- format your query using the following SQL syntax:
SELECT field1, field2 ...
FROM table1
WHERE field1 = value1 AND field2 = value2 ...
FILTER BY ...
SORT BY ...
LIMIT n
Additionally, it is mandatory to include the start time and the end time for the data range
to define the time period for the query data. The parameters of the query are passed
through POST data, using the following fields:
start_time — the start of the time period
end_time — the end of the time period
table — the table from which to extract data
select_fields — the columns to display in the extracted data
where — the list of match conditions
POST Data Format Example
The POST data is in JSON format, and its structure is defined in an IDL file. A sample of the IDL file is displayed in the following.
NOTE: The result of the query API is also in JSON format.
/*
* Copyright (c) 2013 Juniper Networks, Inc. All rights reserved.
*/
/*
* query_rest.idl
*
* IDL definitions for query engine REST API
*
* PLEASE NOTE: After updating this file, do update json_parse.h
*
*/
enum match_op {
EQUAL = 1,
NOT_EQUAL = 2,
IN_RANGE = 3,
NOT_IN_RANGE = 4,
// not supported currently
// following are only for numerical column fields
LEQ = 5, // column value is less than or equal to filter value
GEQ = 6, // column value is greater than or equal to filter value
PREFIX = 7, // column value has the "value" field as prefix
REGEX_MATCH = 8 // for filters only
}
enum sort_op {
ASCENDING = 1,
DESCENDING = 2,
}
struct match {
    1: string name;
    2: string value;
    3: match_op op;
    4: optional string value2;    // this is for only RANGE match
}

typedef list<match> term; (AND of match)
enum flow_dir_t {
EGRESS = 0,
INGRESS = 1
}
struct query {
1: string table; // Table to query (FlowSeriesTable, MessageTable,
ObjectVNTable, ObjectVMTable, FlowRecordTable)
2: i64 start_time; // Microseconds in UTC since Epoch
3: i64 end_time; // Microseconds in UTC since Epoch
4: list<string> select_fields; // List of SELECT fields
5: list<term> where; // WHERE (OR of terms)
6: optional sort_op sort;
7: optional list<string> sort_fields;
8: optional i32 limit;
9: optional flow_dir_t dir; // direction of flows being queried
10: optional list<match> filter; // filter the processed result by value
}
struct flow_series_result_entry {
1: optional i64 T; // Timestamp of the flow record
2: optional string sourcevn;
3: optional string sourceip;
4: optional string destvn;
5: optional string destip;
6: optional i32 protocol;
7: optional i32 sport;
8: optional i32 dport;
9: optional flow_dir_t direction_ing;
10: optional i64 packets; // mutually exclusive to 12,13
11: optional i64 bytes; // mutually exclusive to 12,13
12: optional i64 sum_packets; // represented as "sum(packets)" in JSON
13: optional i64 sum_bytes; // represented as "sum(bytes)" in JSON
};
typedef list<flow_series_result_entry> flow_series_result;
Query Types
The analytics-api supports two types of queries. Both types use the same POST parameters as described in HTTP POST API.
• sync — Default query mode. The results are sent inline with the query processing.
• async — To execute a query in async mode, attach the following header to the POST request: Expect: 202-accepted. (A sketch follows this list.)
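For example, a minimal sketch of submitting the query file shown later in this section in async mode; the filename and host match the sync example that follows, and only the extra header differs:

root@a6s45:~# curl -X POST --data @filename 127.0.0.1:8081/analytics/query --header "Content-Type:application/json" --header "Expect: 202-accepted"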
Examining Query Status
For an asynchronous query, the analytics-api responds with the code: 202 Accepted. The
response contents are a status entity href URL of the form: /analytics/query/<QueryID>.
The QueryID is assigned by the analytics-api. To view the response contents, poll the
status entity by performing a GET action on the URL. The status entity has a variable
named progress, with a number between 0 and 100, representing the approximate
percentage completion of the query. When progress is 100, the query processing is
complete.
Examining Query Chunks
The status entity has an element named chunks that lists portions (chunks) of query
results. Each element of this list has three fields: start_time, end_time, href. The
analytics-api determines how many chunks to list to represent the query data. A chunk
can include an empty string ("") to indicate that the data query is not yet available. If a
partial result is available, the chunk href is of the form:
/analytics/query/<QueryID>/chunk-partial/<chunk number>. When the final result of a
chunk is available, the href is of the form: /analytics/query/<QueryID>/chunk-final/<chunk
number>.
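For example, a sketch of polling an asynchronous query and then fetching the first final chunk; the QueryID placeholder and the chunk number 0 are illustrative:

root@a6s45:~# curl 127.0.0.1:8081/analytics/query/<QueryID> | python -mjson.tool
(repeat until the progress variable reaches 100)
root@a6s45:~# curl 127.0.0.1:8081/analytics/query/<QueryID>/chunk-final/0 | python -mjson.tool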
Example Queries for Log and Flow Data
The following example query lists the tables available for query.
[root@host ~]# curl 127.0.0.1:8081/analytics/tables | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   846  100   846    0     0   509k      0 --:--:-- --:--:-- --:--:--  826k
[
{
"href": "http://127.0.0.1:8081/analytics/table/MessageTable",
"name": "MessageTable"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectVNTable",
"name": "ObjectVNTable"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectVMTable",
"name": "ObjectVMTable"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectVRouter",
"name": "ObjectVRouter"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectBgpPeer",
"name": "ObjectBgpPeer"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectRoutingInstance",
"name": "ObjectRoutingInstance"
},
{
"href": "http://127.0.0.1:8081/analytics/table/ObjectXmppConnection",
"name": "ObjectXmppConnection"
},
{
"href": "http://127.0.0.1:8081/analytics/table/FlowRecordTable",
"name": "FlowRecordTable"
},
{
"href": "http://127.0.0.1:8081/analytics/table/FlowSeriesTable",
"name": "FlowSeriesTable"
}
]
The following example query lists details for the table named MessageTable.
[root@host ~]# curl 127.0.0.1:8081/analytics/table/MessageTable | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   192  100   192    0     0   102k      0 --:--:-- --:--:-- --:--:--  187k
[
{
"href": "http://127.0.0.1:8081/analytics/table/MessageTable/schema",
"name": "schema"
},
{
"href": "http://127.0.0.1:8081/analytics/table/MessageTable/column-values",
"name": "column-values"
}
]
The following example query lists the schema for the table named MessageTable.
[root@host ~]# curl 127.0.0.1:8081/analytics/table/MessageTable/schema | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   630  100   630    0     0   275k      0 --:--:-- --:--:-- --:--:--  307k
{
"columns": [
{
"datatype": "int",
"index": "False",
"name": "MessageTS"
},
{
"datatype": "string",
"index": "True",
"name": "Source"
},
{
"datatype": "string",
"index": "True",
"name": "ModuleId"
},
{
"datatype": "string",
"index": "True",
"name": "Category"
},
{
"datatype": "int",
"index": "True",
"name": "Level"
},
{
"datatype": "int",
"index": "False",
"name": "Type"
},
{
"datatype": "string",
"index": "True",
"name": "Messagetype"
},
{
"datatype": "int",
"index": "False",
"name": "SequenceNum"
},
{
"datatype": "string",
"index": "False",
"name": "Context"
},
{
"datatype": "string",
"index": "False",
"name": "Xmlmessage"
}
],
"type": "LOG"
}
The following set of example queries explores a message table. Note that, per the IDL shown earlier, the where field is a list of terms (OR), each of which is a list of match conditions (AND):
root@a6s45:~# cat filename
{ "end_time": "now" , "select_fields": ["MessageTS", "Source", "ModuleId",
"Category", "Messagetype", "SequenceNum", "Xmlmessage", "Type", "Level",
"NodeType", "InstanceId"] , "sort": 1 , "sort_fields": ["MessageTS"] ,
"start_time": "now-10m" , "table": "MessageTable" , "where": [[{"name": "ModuleId",
"value": "contrail-control", "op": 1, "suffix": null, "value2": null}, {"name":
"Messagetype", "value": "BGPRouterInfo", "op": 1, "suffix": null, "value2": null}]]
}
root@a6s45:~#
root@a6s45:~# curl -X POST --data @filename 127.0.0.1:8081/analytics/query --header "Content-Type:application/json" | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9765    0  9297  100   468   9168    461  0:00:01  0:00:01 --:--:--  9177
{
"value": [
{
"Category": null,
"InstanceId": "0",
"Level": 2147483647,
"MessageTS": 1428442589947392,
"Messagetype": "BGPRouterInfo",
"ModuleId": "contrail-control",
"NodeType": "Control",
"SequenceNum": 1302,
"Source": "a6s45",
"Type": 6,
"Xmlmessage": "<BGPRouterInfo type=""><data
type=""><BgpRouterState><name type=""
>a6s45</name><cpu_info type=""><CpuLoadInfo><num_cpu type="">4</num_cpu
><meminfo type=""><MemInfo><virt type="">438436</virt><peakvirt type=""
>561048</peakvirt><res type="">12016</res></MemInfo></meminfo><cpu_share
type="">0.0416667</cpu_share></CpuLoadInfo></cpu_info><cpu_share type=""
>0.0416667</cpu_share></BgpRouterState></data></BGPRouterInfo>"
},
{
"Category": null,
"InstanceId": "0",
"Level": 2147483647,
...
Working with Neutron
OpenStack's networking solution, Neutron, has elements that correspond to Contrail elements: Network (VirtualNetwork), Port (VirtualMachineInterface), Subnet (IpamSubnets), and Security-Group. The Neutron plugin translates the elements from one representation to the other.
• Data Structure on page 573
• Network Sharing in Neutron on page 574
• Commands for Neutron Network Sharing on page 574
• Support for Neutron APIs on page 575
• Contrail Neutron Plugin on page 575
• DHCP Options on page 576
• Incompatibilities on page 576
Data Structure
Although the actual data between Neutron and Contrail is similar, the listings of the elements differ significantly. In the Contrail API, the networking elements list is a summary, containing only the UUID, FQ name, and an href; in Neutron, however, all details of each resource are included in the list.
The Neutron plugin has an inefficient list retrieval operation, especially at scale, because it:
• reads a list of resources (for example, GET /virtual-networks), then
• iterates and reads in the details of each resource (GET /virtual-network/<uuid>).
As a result, the API server spends most of the time in this type of GET operation just waiting for results from the Cassandra database.
The following features in Contrail improve performance with Neutron:
• An optional detail query parameter is added in the GET of collections so that the API server returns details of all the resources in the list, instead of just a summary. This is accompanied by changes in the Contrail API library so that a caller gets returned a list of the objects. (See the sketch after this list.)
• The existing Contrail list API takes an optional parent_id query parameter to return information about the resources anchored by that parent.
• The Contrail API server reads objects from Cassandra in a multiget format from obj_uuid_cf, where object contents are stored, instead of reading in an xget/get format. This reduces the number of round-trips to and from the Cassandra database.
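For example, a sketch of the detail and parent_id query parameters against the Contrail API server; the port 8082 and the project UUID placeholder are assumptions to adjust for your deployment:

curl http://<api-server-ip>:8082/virtual-networks?detail=True | python -mjson.tool
curl http://<api-server-ip>:8082/virtual-networks?parent_id=<project-uuid> | python -mjson.tool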
Network Sharing in Neutron
Using Neutron, a deployer can make a network accessible to other tenants or projects by using one of two attributes on a network:
• set the shared attribute to allow sharing
• set the router:external attribute, when the plugin supports an external_net extension
Using the Shared Attribute
When a network has the shared attribute set, users in other tenants or projects, including non-admin users, can access that network, using:
neutron net-list --shared
Users can also launch a virtual machine directly on that network, using:
nova boot <other-parameters> --nic net-id=<shared-net-id>
Using the Router:External Attribute
When a network has the router:external attribute set, users in other tenants or projects, including non-admin users, can use that network for allocating floating IPs, using:
neutron floatingip-create <router-external-net-id>
then associating the allocated floating IP addresses with their instances, as shown in the sketch that follows.
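For example, a sketch of the association step using the standard Neutron CLI; the floating IP ID and port ID are placeholders for the values returned by floatingip-create and port-list:

neutron floatingip-associate <floatingip-id> <port-id>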
Commands for Neutron Network Sharing
The following table summarizes the most common Neutron commands used with Contrail.

Action                                                       Command
List all shared networks.                                    neutron net-list --shared
Create a network that has the shared attribute.              neutron net-create <net-name> --shared
Set the shared attribute on an existing network.             neutron net-update <net-name> --shared
List all router:external networks.                           neutron net-list --router:external
Create a network that has the router:external attribute.     neutron net-create <net-name> --router:external
Set the router:external attribute on an existing network.    neutron net-update <net-name> --router:external
Support for Neutron APIs
The OpenStack Neutron project provides virtual networking services among devices that are managed by the OpenStack compute service. Software developers create applications by using the OpenStack Networking API v2.0 (Neutron).
Contrail provides the following features to increase support for OpenStack Neutron:
• Creating a port independently of a virtual machine.
• Support for more than one subnet on a virtual network.
• Support for allocation pools on a subnet.
• Per-tenant quotas.
• Enabling DHCP on a subnet.
• Using an external router for floating IPs.
For more information about using OpenStack Networking API v2.0 (Neutron), refer to http://docs.openstack.org/api/openstack-network/2.0/content/ and the OpenStack Neutron Wiki at http://wiki.openstack.org/wiki/Neutron.
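As an illustration of the first item in the preceding list, a port can be created on its own and attached to a virtual machine afterward. The names below are placeholders; both commands are standard OpenStack CLI:
neutron port-create <net-name>
nova boot --image <image> --flavor <flavor> --nic port-id=<port-uuid> <vm-name>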
Contrail Neutron Plugin
The Contrail Neutron plugin provides an implementation for the following core resources:
• Network
• Subnet
• Port
It also implements the following standard and upstreamed Neutron extensions:
• Security group
• Router IP and floating IP
• Per-tenant quota
• Allowed address pair
The following Contrail-specific extensions are implemented:
• Network IPAM
• Network policy
• VPC table and route table
• Floating IP pools
The plugin does not implement native bulk, pagination, or sort operations and relies on emulation provided by the Neutron common code.
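As a sketch only (the class and the list contents are assumptions, not the actual Contrail source), a Neutron core plugin advertises the extensions it implements and its native capabilities through class attributes; Neutron’s common code emulates bulk, pagination, and sorting when a plugin does not support them natively:

class ContrailPluginSketch(object):
    """Hypothetical skeleton; a real core plugin derives from Neutron's
    plugin base class."""
    # Extension aliases this plugin claims to implement (values assumed).
    supported_extension_aliases = [
        "security-group",          # security group
        "router",                  # router IP and floating IP
        "quotas",                  # per-tenant quota
        "allowed-address-pairs",   # allowed address pair
    ]
    # With these left False, Neutron emulates bulk, pagination, and
    # sorting on top of the plugin's per-resource operations.
    __native_bulk_support = False
    __native_pagination_support = False
    __native_sorting_support = False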
DHCP Options
In Neutron commands, DHCP options can be configured by using extra-dhcp-options in port-create.
Example:
neutron port-create net1 --extra-dhcp-opt opt_name=<dhcp_option_name>,opt_value=<value>
The opt_name and opt_value pairs that can be used are maintained in GitHub: https://github.com/Juniper/contrail-controller/wiki/Extra-DHCP-Options
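For instance, a DNS server might be pushed to a port with the standard DHCP option code 6. The option name and value here are hypothetical; consult the GitHub list above for the supported pairs:
neutron port-create net1 --extra-dhcp-opt opt_name=6,opt_value=8.8.8.8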
Incompatibilities
In the Contrail architecture, the following are known incompatibilities with the Neutron API:
• Filtering based on any arbitrary key in the resource is not supported. The only supported filters are id, name, and tenant_id.
• To use a floating IP, it is not necessary to connect the public subnet and the private subnet to a Neutron router. Marking a public network with router:external is sufficient for a floating IP to be created and associated, and packet forwarding to it will work.
• The default values for quotas are sourced from /etc/contrail/contrail-api.conf and not from /etc/neutron/neutron.conf (see the sketch after this list).
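For illustration, such defaults might be set in the Contrail API configuration file similar to the following; the section and key names are assumptions, so check the installed file for the authoritative names:

[QUOTA]
virtual_network = 5
subnet = 10
floating_ip = 50
security_group = 20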
Support for Amazon VPC APIs on Contrail OpenStack
• Overview of Amazon Virtual Private Cloud on page 577
• Mapping Amazon VPC Features to OpenStack Contrail Features on page 577
• VPC and Subnets Example on page 578
• Euca2ools CLI for VPC and Subnets on page 579
• Security in VPC: Network ACLs Example on page 579
• Euca2ools CLI for Network ACLs on page 580
• Security in VPC: Security Groups Example on page 580
• Euca2ools CLI for Security Groups on page 581
• Elastic IPs in VPC on page 582
• Euca2ools CLI for Elastic IPs on page 582
• Euca2ools CLI for Route Tables on page 582
• Supported Next Hops on page 583
• Internet Gateway Next Hop Euca2ools CLI on page 583
• NAT Instance Next Hop Euca2ools CLI on page 583
• Example: Creating a NAT Instance with Euca2ools CLI on page 584
Overview of Amazon Virtual Private Cloud
The current Grizzly release of OpenStack supports Elastic Compute Cloud (EC2) API translation to OpenStack Nova, Quantum, and Keystone calls. EC2 APIs are used in Amazon Web Services (AWS) and virtual private clouds (VPCs) to launch virtual machines, assign IP addresses to virtual machines, and so on. A VPC provides a container where applications can be launched and resources can be accessed over the networking services provided by the VPC.
Contrail enhances its use of EC2 APIs to support the Amazon VPC APIs.
The Amazon VPC supports networking constructs such as subnets, DHCP options, elastic IP addresses, network ACLs, security groups, and route tables. The Amazon VPC APIs are now supported on the OpenStack Contrail distribution, so users of the Amazon EC2 APIs for their VPC can use the same scripts to move to an OpenStack Contrail solution.
Euca2ools are command-line tools for interacting with Amazon Web Services (AWS) and other AWS-compatible web services, such as OpenStack. Euca2ools have been extended in OpenStack Contrail to add support for the Amazon VPC, similar to the support that already exists for the Amazon EC2 CLI.
For more information about Amazon VPC and AWS EC2, see:
• Amazon VPC documentation: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
• Amazon VPC API list: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-apis.html
Mapping Amazon VPC Features to OpenStack Contrail Features
The following table compares Amazon VPC features to their equivalent features in OpenStack Contrail.

Table 67: Amazon VPC and OpenStack Contrail Feature Comparison

Amazon VPC Feature    OpenStack Contrail Feature
VPC                   Project
Subnets               Networks (Virtual Networks)
DHCP options          IPAM
Elastic IP            Floating IP
Network ACLs          Network ACLs
Security Groups       Security Groups
Route Table           Route Table
VPC and Subnets Example
When creating a new VPC, the user must provide a classless interdomain routing (CIDR) block of which all subnets in the VPC will be part.
In the following example, a VPC is created with a CIDR block of 10.1.0.0/16. A subnet is then created within the VPC CIDR block, with a CIDR block of 10.1.1.0/24. The VPC has a default network ACL named acl-default.
All subnets created in the VPC are automatically associated with the default network ACL. This association can be changed when a new network ACL is created. The last command in the listing below creates a virtual machine using the image ami-00000003 and launches it with an interface in subnet-5eb34ed2.
# euca-create-vpc 10.1.0.0/16
VPC VPC:vpc-8352aa59 created

# euca-describe-vpcs
VpcId           CidrBlock      DhcpOptions
-----           ---------      -----------
vpc-8352aa59    10.1.0.0/16    None

# euca-create-subnet -c 10.1.1.0/24 vpc-8352aa59
Subnet: subnet-5eb34ed2 created

# euca-describe-subnets
Subnet-id          Vpc-id          CidrBlock
---------          ------          ---------
subnet-5eb34ed2    vpc-8352aa59    10.1.1.0/24

# euca-describe-network-acls
AclId
-----
acl-default(def)    vpc-8352aa59

Rule     Dir        Action    Proto    Port    Range    Cidr
----     ---        ------    -----    ----    -----    ----
100      ingress    allow     -1       0       65535    0.0.0.0/0
100      egress     allow     -1       0       65535    0.0.0.0/0
32767    ingress    deny      -1       0       65535    0.0.0.0/0
32767    egress     deny      -1       0       65535    0.0.0.0/0

Association          SubnetId           AclId
-----------          --------           -----
aclassoc-0c549d66    subnet-5eb34ed2    acl-default

# euca-run-instances -s subnet-5eb34ed2 ami-00000003
Euca2ools CLI for VPC and Subnets
The following euca2ools CLI commands are used to create, define, and delete VPCs and subnets (see the teardown sketch after this list):
• euca-create-vpc
• euca-delete-vpc
• euca-describe-vpcs
• euca-create-subnet
• euca-delete-subnet
• euca-describe-subnets
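For example, the objects created in the preceding example could be removed with the delete commands; the argument form (passing the object ID) is an assumption mirroring the describe output above:
# euca-delete-subnet subnet-5eb34ed2
# euca-delete-vpc vpc-8352aa59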
Security in VPC: Network ACLs Example
Network ACLs support ingress and egress rules for traffic classification and filtering. The network ACLs are applied at the subnet level.
In the following example, a new ACL, acl-ba7158c, is created, and the existing subnet is then associated with the new ACL.
# euca-create-network-acl vpc-8352aa59
acl-ba7158c

# euca-describe-network-acls
AclId
-----
acl-default(def)    vpc-8352aa59

Rule     Dir        Action    Proto    Port    Range    Cidr
----     ---        ------    -----    ----    -----    ----
100      ingress    allow     -1       0       65535    0.0.0.0/0
100      egress     allow     -1       0       65535    0.0.0.0/0
32767    ingress    deny      -1       0       65535    0.0.0.0/0
32767    egress     deny      -1       0       65535    0.0.0.0/0

Association          SubnetId           AclId
-----------          --------           -----
aclassoc-0c549d66    subnet-5eb34ed2    acl-default

AclId
-----
acl-ba7158c    vpc-8352aa59

Rule     Dir        Action    Proto    Port    Range    Cidr
----     ---        ------    -----    ----    -----    ----
32767    ingress    deny      -1       0       65535    0.0.0.0/0
32767    egress     deny      -1       0       65535    0.0.0.0/0

# euca-replace-network-acl-association -a aclassoc-0c549d66 acl-ba7158c
aclassoc-0c549d66

# euca-describe-network-acls
AclId
-----
acl-default(def)    vpc-8352aa59

Rule     Dir        Action    Proto    Port    Range    Cidr
----     ---        ------    -----    ----    -----    ----
100      ingress    allow     -1       0       65535    0.0.0.0/0
100      egress     allow     -1       0       65535    0.0.0.0/0
32767    ingress    deny      -1       0       65535    0.0.0.0/0
32767    egress     deny      -1       0       65535    0.0.0.0/0

Association          SubnetId           AclId
-----------          --------           -----

AclId
-----
acl-ba7158c    vpc-8352aa59

Rule     Dir        Action    Proto    Port    Range    Cidr
----     ---        ------    -----    ----    -----    ----
32767    ingress    deny      -1       0       65535    0.0.0.0/0
32767    egress     deny      -1       0       65535    0.0.0.0/0

Association          SubnetId           AclId
-----------          --------           -----
aclassoc-0c549d66    subnet-5eb34ed2    acl-ba7158c
Euca2ools CLI for Network ACLs
The following euca2ools CLI commands are used to create, define, and delete network ACLs:
• euca-create-network-acl
• euca-delete-network-acl
• euca-replace-network-acl-association
• euca-describe-network-acls
• euca-create-network-acl-entry
• euca-delete-network-acl-entry
• euca-replace-network-acl-entry
Security in VPC: Security Groups Example
Security groups provide virtual machine-level ingress and egress controls. Security groups are applied to virtual machine interfaces.
In the following example, a new security group is created. Rules can be added to or removed from the security group by using the commands listed for euca2ools. The last line launches a virtual machine using the newly created security group.
# euca-describe-security-groups
GroupId        VpcId           Name       Description
-------        -----           ----       -----------
sg-6d89d7e2    vpc-8352aa59    default

Direction    Proto    Start    End      Remote
---------    -----    -----    ---      ------
Ingress      any      0        65535    [0.0.0.0/0]
Egress       any      0        65535    [0.0.0.0/0]

# euca-create-security-group -d "TestGroup" -v vpc-8352aa59 testgroup
GROUP    sg-c5b9d22a    testgroup    TestGroup

# euca-describe-security-groups
GroupId        VpcId           Name       Description
-------        -----           ----       -----------
sg-6d89d7e2    vpc-8352aa59    default

Direction    Proto    Start    End      Remote
---------    -----    -----    ---      ------
Ingress      any      0        65535    [0.0.0.0/0]
Egress       any      0        65535    [0.0.0.0/0]

GroupId        VpcId           Name         Description
-------        -----           ----         -----------
sg-c5b9d22a    vpc-8352aa59    testgroup    TestGroup

Direction    Proto    Start    End      Remote
---------    -----    -----    ---      ------
Egress       any      0        65535    [0.0.0.0/0]

# euca-run-instances -s subnet-5eb34ed2 -g testgroup ami-00000003
Euca2ools CLI for Security Groups
The following euca2ools CLI commands are used to create, define, and delete security groups:
• euca-create-security-group
• euca-delete-security-group
• euca-describe-security-groups
• euca-authorize-security-group-egress
• euca-authorize-security-group-ingress
• euca-revoke-security-group-egress
• euca-revoke-security-group-ingress
Elastic IPs in VPC
Elastic IPs in VPCs are equivalent to the floating IPs in the OpenStack Contrail solution.
In the following example, a floating IP is requested from the system and assigned to a particular virtual machine. The prerequisite is that the provider or Contrail administrator has provisioned a network named “public” and allocated a floating IP pool to it. This “public” floating IP pool is then used internally by the tenants to request public IP addresses that they can attach to virtual machines.
# euca-allocate-address --domain vpc
ADDRESS 10.84.14.253    eipalloc-78d9a8c9

# euca-describe-addresses --filter domain=vpc
Address         Domain    AllocationId         InstanceId(AssociationId)
-------         ------    ------------         -------------------------
10.84.14.253    vpc       eipalloc-78d9a8c9

# euca-associate-address -a eipalloc-78d9a8c9 i-00000008
ADDRESS eipassoc-78d9a8c9

# euca-describe-addresses --filter domain=vpc
Address         Domain    AllocationId         InstanceId(AssociationId)
-------         ------    ------------         -------------------------
10.84.14.253    vpc       eipalloc-78d9a8c9    i-00000008(eipassoc-78d9a8c9)
Euca2ools CLI for Elastic IPs
The following euca2ools CLI commands are used to create, define, and delete elastic IPs (a release sketch follows this list):
• euca-allocate-address
• euca-release-address
• euca-describe-addresses
• euca-associate-address
• euca-disassociate-address
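A release flow for the example above might look like the following sketch; the -a flag for disassociation is an assumption that mirrors the flag used with euca-associate-address:
# euca-disassociate-address -a eipassoc-78d9a8c9
# euca-release-address eipalloc-78d9a8c9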
Euca2ools CLI for Route Tables
Route tables can be created in an Amazon VPC and associated with subnets. Traffic exiting a subnet is then looked up in the route table, and, based on the route lookup result, the next hop is chosen.
The following euca2ools CLI commands are used to create, define, and delete route tables:
• euca-create-route-table
• euca-delete-route-table
• euca-describe-route-tables
• euca-associate-route-table
• euca-disassociate-route-table
• euca-replace-route-table-association
• euca-create-route
• euca-delete-route
• euca-replace-route
Supported Next Hops
The supported next hops for the current release are:
• Local Next Hop
Designating a local next hop indicates that all subnets in the VPC are reachable for the destination prefix.
• Internet Gateway Next Hop
This next hop is used for traffic destined to the Internet. All virtual machines using the Internet gateway next hop are required to use an elastic IP to reach the Internet, because the subnet IPs are private IPs.
• NAT Instance Next Hop
To create this next hop, the user launches a virtual machine that provides the network address translation (NAT) service. The virtual machine has two interfaces, one internal and one external, both of which are created automatically. The only requirement is that a “public” network has been provisioned by the administrator, because the second interface of the virtual machine is created in the “public” network.
Internet Gateway Next Hop Euca2ools CLI
The following euca2ools CLI commands are used to create, define, and delete Internet gateway next hops:
• euca-attach-internet-gateway
• euca-create-internet-gateway
• euca-delete-internet-gateway
• euca-describe-internet-gateways
• euca-detach-internet-gateway
NAT Instance Next Hop Euca2ools CLI
The following euca2ools CLI commands are used to create and delete NAT instance next hops:
• euca-run-instances
• euca-terminate-instances
Example: Creating a NAT Instance with Euca2ools CLI
The following example creates a NAT instance and creates a default route pointing to the NAT instance.
# euca-describe-route-tables
RouteTableId    Main    VpcId           Prefix         AssociationId        SubnetId           NextHop
------------    ----    -----           ------         -------------        --------           -------
rtb-default     yes     vpc-8352aa59    10.1.0.0/16    rtbassoc-0c549d66    subnet-5eb34ed2    local

# euca-describe-images
IMAGE    ami-00000003    None (ubuntu)         2c88a895fdea4461a81e9b2c35542130
IMAGE    ami-00000005    None (nat-service)    2c88a895fdea4461a81e9b2c35542130

# euca-run-instances ami-00000005
# euca-create-route --cidr 0.0.0.0/0 -i i-00000006 rtb-default

# euca-describe-route-tables
RouteTableId    Main    VpcId           Prefix         AssociationId        SubnetId           NextHop
------------    ----    -----           ------         -------------        --------           -------
rtb-default     yes     vpc-8352aa59    10.1.0.0/16    rtbassoc-0c549d66    subnet-5eb34ed2    local
                                        0.0.0.0/0                                              i-00000006
PART 6
Index
• Index on page 587
Index

Symbols
#, comments in configuration statements.................xxix
( ), in syntax descriptions..................................................xxix
< >, in syntax descriptions................................................xxix
[ ], in configuration statements......................................xxix
{ }, in configuration statements......................................xxix
| (pipe), in syntax descriptions........................................xxix

A
ASN
  global...................................................................................16

B
BGP peers............................................................................27, 33
braces, in configuration statements.............................xxix
brackets
  angle, in syntax descriptions...................................xxix
  square, in configuration statements....................xxix

C
comments, in configuration statements.....................xxix
compute nodes
  vRouter.................................................................................4
  XMPP agent........................................................................4
configure custom
  hostname...........................................................................13
  IP address...........................................................................13
  LAN port..............................................................................13
  nameserver........................................................................13
Contrail ISO................................................................................14
Contrail packages....................................................................14
contrail-logs...........................................................20, 291, 527
control node
  configuring.........................................................................27
control nodes..............................................................................4
conventions
  text and syntax...........................................................xxviii
curly braces, in configuration statements...................xxix
customer support..................................................................xxx
  contacting JTAC............................................................xxx

D
dashboard..............................................................................405
DHCP........................................................................................268
DKMS........................................................................................209
DNS............................................................................................267
  configuring......................................................................270
  DHCP...............................................................................268
  IPAM.................................................................................268
  record types...................................................................269
  scripts...............................................................................275
  See also Domain Name System
documentation
  comments on................................................................xxix
Domain Name System.......................................................267
  See also DNS

E
EX 4200....................................................................................261
existing OpenStack.................................................................19

F
font conventions.................................................................xxviii

H
hardware requirements..........................................................9
heat template........................................................................329
high availability..................................................291, 294, 303
hostname
  configure custom............................................................13

I
image
  creating............................................................................222
infrastructure.........................................................................405
install............................................................................................14
instance
  virtual machine.............................................................224
IP address
  configure custom............................................................13
IP Address Management...................................................267
  See also IPAM
IP address pool
  allocating........................................................................240
  creating............................................................................238
  floating..................................................................238, 240
IPAM.................................................................................267, 268
  See also IP Address Management

L
LAN port
  configure custom............................................................13

M
manuals
  comments on................................................................xxix
MD5 authentication
  configuring........................................................................33
monitor....................................................................................405
multi-tier example...............................................................255
multitenancy..........................................................................333
MX 80........................................................................................261

N
nameserver
  configure custom............................................................13
network
  create................................................................................219
  delete......................................................................218, 220
  Juniper...............................................................................218
  OpenStack.....................................................................220
network policy
  associating to a network.................................229, 236
  creating...................................................................227, 233
  Juniper.....................................................................227, 229
  OpenStack............................................................233, 236

O
object log.................................................................20, 291, 527
OpenStack...................................................................................3

P
parentheses, in syntax descriptions..............................xxix
policy
  associating to a network.................................229, 236
  creating...................................................................227, 233
  Juniper.....................................................................227, 229
  OpenStack............................................................233, 236
projects
  creating.............................................................................214

R
REST API
  physical routers.............................................................183
roles
  cfgm......................................................................................11
  collector...............................................................................11
  compute..............................................................................11
  control..................................................................................11
  webui.....................................................................................11
rpm................................................................................................14

S
security groups
  associating to an instance........................................241
service chain
  creating..........................................................314, 321, 326
  example.........................................................314, 321, 326
service instance
  commands...................................................314, 321, 326
service policy
  commands...................................................314, 321, 326
service template
  commands...................................................314, 321, 326
support, technical See technical support
syntax conventions............................................................xxviii
syslog........................................................................20, 291, 527

T
technical support
  contacting JTAC............................................................xxx
testbed definitions............................................................16, 19
testbed.py............................................................................16, 19
trace messages.....................................................20, 291, 527

V
virtual machine
  instance...........................................................................224
  launching........................................................................224
virtual network
  creating....................................................................215, 219
  Juniper...............................................................................215
  OpenStack......................................................................219
VSRX........................................................................314, 321, 326