
TIBCO Mashery® Local
Installation and Configuration Guide for Docker

Software Release 4.0.0
November 2016

Two-Second Advantage®


Important Information

SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED OR BUNDLED TIBCO SOFTWARE IS SOLELY TO ENABLE THE FUNCTIONALITY (OR PROVIDE LIMITED ADD-ON FUNCTIONALITY) OF THE LICENSED TIBCO SOFTWARE. THE EMBEDDED OR BUNDLED SOFTWARE IS NOT LICENSED TO BE USED OR ACCESSED BY ANY OTHER TIBCO SOFTWARE OR FOR ANY OTHER PURPOSE.

USE OF TIBCO SOFTWARE AND THIS DOCUMENT IS SUBJECT TO THE TERMS AND CONDITIONS OF A LICENSE AGREEMENT FOUND IN EITHER A SEPARATELY EXECUTED SOFTWARE LICENSE AGREEMENT, OR, IF THERE IS NO SUCH SEPARATE AGREEMENT, THE CLICKWRAP END USER LICENSE AGREEMENT WHICH IS DISPLAYED DURING DOWNLOAD OR INSTALLATION OF THE SOFTWARE (AND WHICH IS DUPLICATED IN THE LICENSE FILE) OR IF THERE IS NO SUCH SOFTWARE LICENSE AGREEMENT OR CLICKWRAP END USER LICENSE AGREEMENT, THE LICENSE(S) LOCATED IN THE “LICENSE” FILE(S) OF THE SOFTWARE. USE OF THIS DOCUMENT IS SUBJECT TO THOSE TERMS AND CONDITIONS, AND YOUR USE HEREOF SHALL CONSTITUTE ACCEPTANCE OF AND AN AGREEMENT TO BE BOUND BY THE SAME.

This document contains confidential information that is subject to U.S. and international copyright laws and treaties. No part of this document may be reproduced in any form without the written authorization of TIBCO Software Inc.

TIBCO and TIBCO Mashery are either registered trademarks or trademarks of TIBCO Software Inc. in the United States and/or other countries.

All other product and company names and marks mentioned in this document are the property of their respective owners and are mentioned for identification purposes only.

THIS SOFTWARE MAY BE AVAILABLE ON MULTIPLE OPERATING SYSTEMS. HOWEVER, NOT ALL OPERATING SYSTEM PLATFORMS FOR A SPECIFIC SOFTWARE VERSION ARE RELEASED AT THE SAME TIME. SEE THE README FILE FOR THE AVAILABILITY OF THIS SOFTWARE VERSION ON A SPECIFIC OPERATING SYSTEM PLATFORM.

THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.

THIS DOCUMENT COULD INCLUDE TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS. CHANGES ARE PERIODICALLY ADDED TO THE INFORMATION HEREIN; THESE CHANGES WILL BE INCORPORATED IN NEW EDITIONS OF THIS DOCUMENT. TIBCO SOFTWARE INC. MAY MAKE IMPROVEMENTS AND/OR CHANGES IN THE PRODUCT(S) AND/OR THE PROGRAM(S) DESCRIBED IN THIS DOCUMENT AT ANY TIME.

THE CONTENTS OF THIS DOCUMENT MAY BE MODIFIED AND/OR QUALIFIED, DIRECTLY OR INDIRECTLY, BY OTHER DOCUMENTATION WHICH ACCOMPANIES THIS SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY RELEASE NOTES AND "READ ME" FILES.

Copyright © 2004-2016 TIBCO Software Inc. All rights reserved.

TIBCO Software Inc. Confidential Information

Contents

TIBCO Documentation and Support Services
Introduction
    Assumptions
    Conventions
    Deployment Topology
    Overview of Installation and Configuration Process
Installing and Configuring Mashery Local for Docker
    Docker Images
    Installation
    Additional Installation Tips
    Installation Troubleshooting Tips
        Changing the Traffic Manager Port
        How to Enable Additional Features That Require a New Port
        How to Use NFS for Verbose Log
    Managing Docker Containers
    Configuring the Mashery Local Cluster
        Configuring a Mashery Local Master
        Configuring Slaves to the Local Master
    Configuring the Load Balancer
    Configuring the Instance
Advanced Configuration and Maintenance
    Configuring Quota Notifications
    Configuring JMX Reporting Access
Using the Adapter SDK
    Adapter SDK Package
        TIBCO Mashery Domain SDK
        TIBCO Mashery Infrastructure SDK
    SDK Domain Model
        Extended Attributes
    Pre and Post Processor Extension Points
        Listener Pattern
        Event Types and Event
        Event Listener API
    Implementing and Registering Processors
        Downloading the SDK
        Implementing the Event Listener
        Implementing Lifecycle Callback Handling
        Adding Libraries to Classpath
    Deploying Processors to Runtime
        Packaging the Custom Processor
        Uploading the Custom Processor
        Enabling Debugging
    Caching Content
Configuring Trust Management
Configuring Identity Management
Testing the New Instance
    Testing a New Instance
    Tracking the Database Restore and Replication Status
Troubleshooting
    Verbose Logs
        Using the Verbose Logs Feature
        Working with Verbose Logs
        Mapping Endpoint IDs
    Debugging Utility
        Running the Debug Utility
        Collect Logs
        Test Connectivity to Cloud Sync
        Show Slave Status
        Check IP Address
        Update Record of Master IP Address in Master
        Fix Slave Corruption
        Update Record of Master IP Address in Old Slave Node
        System Manager (Remove Non-functional or Unused Slaves from Master)
    System Level Troubleshooting

TIBCO Documentation and Support Services

Documentation for this and other TIBCO products is available on the TIBCO Documentation site. This site is updated more frequently than any documentation that might be included with the product. To ensure that you are accessing the latest available help topics, visit: https://docs.tibco.com

Product-Specific Documentation

The following document for this product can be found on the TIBCO Documentation site:

TIBCO Mashery Local Installation and Configuration Guide

How to Contact TIBCO Support

For comments or problems with this manual or the software it addresses, contact TIBCO Support:

For an overview of TIBCO Support, and information about getting started with TIBCO Support, visit this site: http://www.tibco.com/services/support

If you already have a valid maintenance or support contract, visit this site: https://support.tibco.com

Entry to this site requires a user name and password. If you do not have a user name, you can request one.


Introduction

This guide provides an overview of the installation, requirements and configuration for Mashery Local for Docker.

Mashery Local for Docker is a set of Docker images for running Mashery Local. These images can be customized for custom configurations. Mashery Local for Docker allows customers to perform hybrid traffic management on premises, running API traffic inside their own data centers. Mashery Local securely interacts with the Mashery Cloud-hosted Developer Portal, Administration Dashboard, and API Reporting and Analytics modules.

Assumptions

This guide assumes that you are using Docker version 1.12 or later. If you have an internal cloud, it is assumed that established best practices (for example, disk alignment) are applied. If you are using different servers and clients, the underlying concepts implied by the installation and configuration steps still apply.

Conventions

This guide uses the following conventions:

● Keys you press simultaneously appear with a plus (+) sign between them (for example, Ctrl+P means press the Ctrl key first, and while holding it down, press the P key).

● Field, list, folder, window, and dialog box names have initial caps (for example, City, State).

● Tab names are bold and have initial caps (for example, People tab).

● Names of buttons and keys that you press on your keyboard are in bold and have initial caps (for example, Cancel, OK, Enter, Y).

Deployment Topology

The following diagram depicts a typical deployment topology for Mashery Local.


Overview of Installation and Configuration Process

This section provides a roadmap of the installation process for Mashery Local.

Procedure

1. Download the Mashery Local Docker artifact from TIBCO® eDelivery and create the Mashery Local Docker image set as described in Installing and Configuring Mashery Local for Docker.

2. Configure a Mashery Local Master as described in Configuring a Mashery Local Master.

3. Configure slaves to the Mashery Local Master as described in Configuring Slaves to the Local Master. It is best practice to set up production with no less than 2 slaves per master.

4. Configure the load balancer as described in Configuring the Load Balancer.

5. Perform advanced configuration, such as enabling notifications, as described in Advanced Configuration and Maintenance.


Installing and Configuring Mashery Local for Docker

The following sections describe how to install and configure some basic environments complete with a master, one or more slaves, and load balancing.

Mashery Local for Docker includes a script that will download and install third-party software from third-party websites, including but not necessarily limited to CentOS and EPEL repositories located here:

● https://hub.docker.com/_/centos/

● http://vault.centos.org/

● https://dl.fedoraproject.org/pub/epel/

Such third-party software is subject to third-party software licenses that may be available on such third-party websites. For more information on CentOS repositories and EPEL, see:

● https://wiki.centos.org/AdditionalResources/Repositories

● https://fedoraproject.org/wiki/EPEL

Docker Images

Three images are needed to install Mashery Local for Docker:

1. On-premise database: ml_db
2. Memcache: ml_mem
3. TIBCO Mashery® Local Core (Traffic Manager plus Cluster Manager UI): ml_core

Installation

To install Mashery Local for Docker:

Procedure

1. Install Docker Engine and docker-compose on your operating system.

Refer to the Docker documentation for the operating system of your choice:

● https://docs.docker.com/engine/

● https://docs.docker.com/compose/

2. TIBCO Mashery® Local for Docker is available as a TIB_mash-local**.tar.gz file. Download this file from TIBCO® eDelivery and extract the contents.

3. Navigate to the root folder of the extracted contents and run the following command to build the Mashery Local image set (comprising three images):

   a) ./build-docker.sh
   b) Verify that three images are created: ml_db.tar.gz, ml_mem.tar.gz, ml_core.tar.gz.

4. Navigate to the examples folder and copy the docker-compose.yml file and the three image .tar.gz files to the target Docker host machine.

   The docker-compose.yml may need additional edits, depending on what ports need to be exposed or for other customization. Refer to the README.md for details.


In order for NTP to work, the following modification is necessary:

1. In the docker-compose.yml file, under the services:/ml_tm: section, add: privileged: true
   (use the same indent as container_name: ml_tm)
2. Under the services:/ml_tm:/ports: section, add: - "123:123"
   (use the same indent as - "80:80")

Note: The indents and dash are important.
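For reference, a minimal sketch of the relevant docker-compose.yml fragment after both edits (the service and port entries are named as in the example compose file; all other keys are omitted):

services:
  ml_tm:
    container_name: ml_tm
    # required so ntpd inside the container can adjust the system clock
    privileged: true
    ports:
      - "80:80"
      # expose NTP
      - "123:123"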

Run the following commands:

● docker load -i <each of the three image files, one by one>
● docker-compose up -d

5. Verify that the four Docker containers are up by running docker ps.

6. Repeat Steps 4-7 for each Docker host that will run a Mashery Local instance.

7. Go to the instance in a browser: https://<docker host-IP>:5480.

8. Complete Master registration to TIBCO MOM (Mashery On-Prem Manager) or complete Slave registration to Master.
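Taken together, a typical first install on a single Docker host might look like the following shell session (a sketch only; the extracted folder name is a placeholder):

# on the build machine
cd TIB_mash-local-4.0.0              # placeholder for the extracted folder
./build-docker.sh                    # produces ml_db.tar.gz, ml_mem.tar.gz, ml_core.tar.gz

# after copying docker-compose.yml and the three archives to the Docker host
docker load -i ml_db.tar.gz
docker load -i ml_mem.tar.gz
docker load -i ml_core.tar.gz
docker-compose up -d
docker ps                            # expect four containers running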

Additional Installation Tips

Docker Toolbox is a tool that lets you manage Docker engines on multiple virtual instances, and is used with Docker Machine. If you need to set up slaves for the cluster on different virtual instances, the images built in the previous set of instructions (Step 3 of Installation) can be reused below.

Installation steps with Docker Toolbox

1. Install Docker Toolbox from https://www.docker.com/products/docker-toolbox.

2. Use the docker-machine create command to create Docker engines on virtual instances.

   Drivers are available for various cloud provider platforms. Refer to https://docs.docker.com/machine/ for the latest information. Also refer to individual cloud provider documentation for more details on authentication and other parameters you can use to customize your Docker Machine.

   Some example commands are below:

   a. To create a Docker Machine on a VirtualBox setup on your machine (prerequisite: VirtualBox 5+ ideal):
      docker-machine create --driver virtualbox <docker machine name>

   b. To create a Docker Machine on a VMware Fusion setup on your machine:
      docker-machine create --driver vmwarefusion <docker machine name>

   c. To create a Docker Machine on AWS (prerequisites: AWS signup; create an IAM administrator user and a key pair: AWS access key, AWS secret key):
      docker-machine create --driver amazonec2 --amazonec2-access-key <your aws access key> --amazonec2-secret-key <your aws secret key> <name for your new AWS instance>

   d. To create a Docker Machine on Microsoft Azure (prerequisite: Microsoft Azure signup):
      docker-machine create --driver azure --azure-subscription-id <your subscription id> <name for your new azure instance>

   e. To create a Docker Machine on Google Cloud (prerequisites: Google Cloud signup; installing and configuring the gcloud tools locally to manage authentication is recommended; refer to GCE documentation):
      docker-machine create --driver google --google-project <google project id> --google-zone "us-west1-a" <name for your new google instance>

3. List all your available machines and make sure the one you just created shows up:
   docker-machine ls

4. Set environment variables for the machine you would like to deploy the Mashery Local images to:
   docker-machine env <docker machine name>

5. Connect your shell to the new machine:
   eval $(docker-machine env <docker machine name>)
   docker-machine ls
   (confirm the machine you are connecting to has an * next to it to show that it's active)

6. You can use the three images you created by running the build-docker.sh script above:

   a. Run: docker load -i <each of the three ml.....tar.gz files>
   b. Obtain the latest docker-compose.yml file.
   c. Run: docker-compose up -d
   d. Use the following commands to access logs, etc., within the Mashery Local containers:
      docker exec -it ml_tm /bin/bash
      docker exec -it ml_cm /bin/bash
   e. Log in to the Cluster Manager UI to complete the Master registration to MOM process (https://<docker host ip>:5480).

Installation Troubleshooting Tips

Use the tips in this section to troubleshoot your installation.

Changing the Traffic Manager Port

To change the Traffic Manager port in Mashery Local for Docker, modify the docker-compose.yml file to change the 80:80 under services:/ml_tm:/ports: to <host port>:<container port>, where the container port is the port you configured for the proxy.

Note that the host port can be different from the container port. The host port is the port used to access the proxy from outside. After changing the ports in the docker-compose.yml, you need to run docker-compose down and up for them to take effect. If you know the ports you are planning to switch to in the future, you can add them in advance. Then, later, when you decide to switch the port, you can simply change it from the UI (under Instance Management > Instance Settings > HTTP/HTTPS port).
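For example, to serve the proxy on host port 8080 while the container still listens on its configured port 80 (8080 here is a placeholder host port), the mapping would look like:

services:
  ml_tm:
    ports:
      # <host port>:<container port>
      - "8080:80"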

There could be two scenarios for changing the proxy port:


Scenario 1

1. Add the new port mapping to docker-compose.yml.
2. If the Mashery Local Docker instance is running, execute: docker-compose down
3. Execute: docker-compose up -d
4. Change the port from the UI.
5. Check whether the port is in effect: docker exec -it ml_tm netstat -nlp | grep LISTEN | grep tcp
6. If the new port is not being listened on, execute: docker exec -it ml_tm nohup service javaproxy restart

Scenario 2

1. Change the port from the UI.
2. Add the new port mapping to docker-compose.yml.
3. Execute: docker-compose down
4. Execute: docker-compose up -d

How to Enable Additional Features That Require a New Port

To enable features, such as HTTPS, that require a new port, the port must be mapped in the docker-compose.yml file. If it is not, add it to the .yml file. Normally, it would be associated with the Traffic Manager, so add it under services:/ml_tm:/ports:. Then, you access it from outside through the Docker host IP address.

The example docker-compose.yml file already has most needed ports mapped. However, to change the ports to be used (for example, the HTTP/HTTPS ports), it is better to make the changes in the docker-compose.yml file before starting the containers so that the mappings are in place. Later, you can modify the UI to change the ports. However, if the new port is not in effect after the UI change, try restarting the javaproxy. This can be done with the command: docker exec -it ml_tm nohup service javaproxy restart

How to Use NFS for Verbose Log

To use NFS for verbose logs:

1. Mount the NFS share to a host directory, for example, /mnt/nfs.

2. Add the volume mapping in the docker-compose.yml file under services:/ml_tm:/volumes:, for example:
   - /mnt/nfs:/var/log/tm_verbose_log
   (Use the same indent as the existing entry - mldata:/mnt.)

3. Execute docker-compose down

4. Execute docker-compose up -d

5. In the UI, set the Verbose Logs Location to /var/log/tm_verbose_log but leave the Use NFS flag unchecked.

6. Enable the verbose log.

7. Execute docker exec -it ml_tm nohup service javaproxy restart
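A sketch of the resulting volumes section for ml_tm (the mldata entry is the existing one from the example compose file; /mnt/nfs is the host mount point from step 1):

services:
  ml_tm:
    volumes:
      - mldata:/mnt
      # NFS-backed host directory for verbose logs
      - /mnt/nfs:/var/log/tm_verbose_log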

Managing Docker Containers

Use the following commands to manage the Docker containers:

Action                                      Command
Pause                                       docker-compose pause
Unpause                                     docker-compose unpause
Restart                                     docker-compose restart
Shut down                                   docker-compose down
Complete Cleanup (remove persistent data)   docker volume rm $(docker volume ls -q)

The Complete Cleanup command cleans up all the database content and configurations. You will then need to redo and re-register the master and slave after rerunning Mashery Local for Docker.

Note that this command removes all volumes on a Docker host. If you have other volumes besides those used by Mashery Local for Docker, you must remove the volumes for Mashery Local for Docker individually.
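For example, to clean up only the Mashery Local volumes on a host that has other volumes, the safer name-by-name approach looks like the following sketch (the exact volume names depend on your compose project and are placeholders here):

docker-compose down                  # stop and remove the containers first
docker volume ls                     # identify the Mashery Local volume names
docker volume rm <mashery local volume name>    # repeat for each volume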

Configuring the Mashery Local Cluster

Mashery Local may run configured in a cluster of one master and multiple slaves. To configure the Mashery® Local cluster, you need to:

● Configure a Mashery Local master
● Configure slave(s) to the local master

Configuring a Mashery Local Master

To configure a Mashery Local master:

Procedure

1. Browse to the Mashery Local Cluster Manager of the master by using the Docker Host IP address of the instance: https://<IP_address_of_instance>:5480

2. Login with username administrator and the password configured in set-user-variables.sh. Click Master.

   The Configure Master window appears. Enter an instance name (this name will eventually display in the Mashery Admin Dashboard) that is meaningful to your operation, the Mashery Cloud Key and shared secret provided by TIBCO Mashery, and the NTP server address, if used.

3. Click Commence Initiation Sequence.

   After the Master initializes with the Mashery cloud service, a completion page appears.

4. Click Continue.

5. Navigate to the Cloud Sync page and perform manual syncs for API Settings and Developers by clicking the adjacent icons.

6. Test the instance as described in Testing a New Instance.

7. See the instructions in Advanced Configuration for how to enable notifications, if desired.

Configuring Slaves to the Local Master

Mashery Local may run configured in a cluster of one master and multiple slaves.

To configure slaves to the master:

Procedure

1. Browse to the Mashery Local Cluster Manager of the slave by using the Docker Host IP address of the instance: https://<IP_address_of_instance>:5480

2. Login with username administrator and the password provided by TIBCO Mashery.

3. Click Slave.

4. Enter an instance name (this name will eventually display in the Mashery Admin Dashboard) that is meaningful to your operation, the Mashery Cloud Key and shared secret provided by TIBCO Mashery, and the NTP server address, if used.

5. Click Register with Mashery and Master.

6. Click Continue.

7. Test the instances as described in Testing a New Instance.

8. See the instructions in Advanced Configuration for how to enable notifications, and API and JMX reporting access, if desired.

Configuring the Load Balancer

TIBCO Mashery recommends using a Load Balancer to best utilize the cluster, although this is not required because you may route your API traffic directly to each instance.

Each instance hosts a service called /mashping. Configure the Load Balancer to access the following address, without the host header: http://<IP_address_of_instance>/mashping

If the Load Balancer and the cluster are working correctly, /mashping returns the following response:

HTTP/1.1 200 OK
Server: Mashery Proxy
Content-Type: application/json; charset=UTF-8
Transfer-Encoding: chunked

{"status":200,"time":1315510300,"message":"success"}

If /mashping returns any other response, then the load balancer should remove the instance from the cluster and either retry after a period of time or alert operations to investigate.
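As a sketch, you can run an equivalent health probe from a shell on any host that can reach the instance (assuming curl is available; plain HTTP, no Host header needed for /mashping):

curl -is http://<IP_address_of_instance>/mashping
# expect HTTP/1.1 200 OK and a JSON body with "status":200 and "message":"success"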

Mashery Local has two instance types: Master and Slave. Should the Load Balancer pull the Master out of the cluster pool, an Operations engineer should immediately investigate whether it can be recovered, and, if not, promote a Slave to Master. If no Master exists in the pool, data synchronization with the Mashery Cloud Service will not occur, with the exception of API event activity. Access Tokens, Keys, Applications, Classes and Services will not be synchronized.

Configuring the Instance

The Instance Management tab allows you to configure additional settings for that particular instance. You can edit the instance name, configure instance settings, and update software and custom adapters. Additional system-level parameters can be tuned here, such as application memory allocation, configuration cache size, maximum concurrent connections, and the database connection pool size.

To configure an instance:

Procedure

1. On the Mashery Cluster Manager tab, click Instance Management.

2. Click the Management Options for which you want to configure the settings.

A text box is displayed for the selected Management Options.

3. Enter the details for the following fields to configure the instance.

Field: Use NTP (recommended)
Description: NTP server address.

Field: Memory Allocation
Description: Specify application memory size as a fraction of the available memory.

Field: Concurrent Connections
Description: Sets the maximum number of concurrent connections to the service instance.

Field: Database Connector
Description: Sets the maximum number of concurrent connections the instance will make to its database.

Field: Configuration Cache
Description: Specify the memory (in MB) to use for the configuration cache.

Field: Disable IPv6
Description: Select this option to disable IPv6 if IPv6 traffic should not be allowed to the backend. By default, Mashery Local supports both IPv4 and IPv6.

4. Select the appropriate HTTP Server Security Level:

● Enable HTTP only: If selected, the default HTTP Port for HTTP Server Security Settings is 80.

● Enable HTTPS only: If selected, enter the details for the following fields:

  HTTPS Port: Specify the HTTPS port. The default is 443.

  Certificate Common Name (display only): Automatically displays the name of the selected certificate.

  Certificate # (display only): Automatically displays the number of the selected certificate.

  New SSL Certificate: Select from:
    - Create new certificate: If selected, enter a Certificate Common Name in the Create SSL Certificate window, then click Create.
    - Upload new certificate: If selected, in the Upload SSL Certificate window, browse to the SSL certificate using the Click here to select file link, enter the Password for Certificate, then click Upload.

  Download SSL Certificate: Select from:
    - Download certificate in PEM: downloads the current certificate in PEM format.
    - Download certificate in DER: downloads the current certificate in DER format.

● Enable HTTP and HTTPS: If selected, enter the details for the following fields:

  HTTP Port: Specify the HTTP port. The default is 80.

  HTTPS Port: Specify the HTTPS port. The default is 443.

  Certificate Common Name (display only): Displays the name of the selected certificate.

  Certificate # (display only): Displays the number of the selected certificate.

  New SSL Certificate: Select from:
    - Create new certificate: If selected, enter a Certificate Common Name in the Create SSL Certificate window, then click Create.
    - Upload new certificate: If selected, in the Upload SSL Certificate window, browse to the SSL certificate using the Click here to select file link, enter the Password for Certificate, then click Upload.

  Download SSL Certificate: Select from:
    - Download certificate in PEM: downloads the current certificate in PEM format.
    - Download certificate in DER: downloads the current certificate in DER format.

5. Click Save.

You may be reminded that Mashery Local needs to restart the proxy service.

The instance is configured for the specified settings.


Advanced Configuration and Maintenance

This section describes how you can extend your installation by adding the following capabilities:

● Quota Notifications
● JMX Reporting

Configuring Quota Notifications

You can configure Mashery Local to send Over Throttle Limit, Over Quota Limit, and Near Quota Limit Warning notifications when an API key exceeds or nears its limits.

To configure quota notifications, follow the instructions in the illustration. Similar notification settings are available on the slave instances as well.

Configuring JMX Reporting Access

JMX Monitoring is not supported in Mashery Local for Docker 4.0.0.


Using the Adapter SDK

This section outlines the development process for writing custom adapters using the Adapter SDK for Mashery Local Traffic Manager. This section also provides the list of areas of extension provided in the SDK, along with code samples to illustrate the extension process.

Adapter SDK Package

The Adapter SDK defines the Traffic Manager domain model, tools, and APIs, and provides extension points to inject custom code into the processing of a call made to the Traffic Manager.

DIY SDK adapters need to be coded and compiled using JDK 1.6 or lower.

The Adapter SDK package contains the following:

● TIBCO Mashery Domain SDK
● TIBCO Mashery Infrastructure SDK

TIBCO Mashery Domain SDK

The TIBCO Mashery Domain SDK, packaged in com.mashery.trafficmanager.sdk, identifies the Traffic Manager SDK and provides access to the TIBCO Mashery domain model, which includes key objects such as Members, Applications, Developer Classes, Keys, and Packages.

TIBCO Mashery Infrastructure SDK

TIBCO Mashery Infrastructure SDK provides the ability to handle infrastructure features and contains the following:

TIBCO Mashery HTTP Provider

The HTTP provider packaged as com.mashery.http provides HTTP Request/Response processing capability and tools to manipulate the HTTP Request, Response, their content and headers.

TIBCO Mashery Utility

The utility packaged as com.mashery.util provides utility code which handles frequently occurring logic such as string manipulations, caching, specialized collection handling, and logging.

SDK Domain Model

The Traffic Manager domain model defines the elements of the Traffic Manager runtime.

The following table highlights some of the key elements:

Element: User
Description: A user or member subscribing to APIs and accessing the APIs.
Usage: com.mashery.trafficmanager.model.User

Element: API
Description: An API represents the service definition. A service definition has endpoints defined for it.
Usage: com.mashery.trafficmanager.model.API

Element: Endpoint
Description: An Endpoint is a central resource of an API managed within Mashery. It is a collection of configuration options that defines the inbound and outbound URIs, rules, transformations, cache control, security, etc. of a unique pathway of your API. An Endpoint is specialized as either an API Endpoint or a Plan Endpoint. This specialization provides context to whether or not the Endpoint is being used as part of a Plan.
Usage: Generic endpoint entity representation: com.mashery.trafficmanager.model.Endpoint; API endpoint entity representation: com.mashery.trafficmanager.model.APIEndpoint; Plan endpoint entity representation: com.mashery.trafficmanager.model.PlanEndpoint

Element: Method
Description: A method is a function that can be called on an endpoint and represents the method currently being accessed/requested from the API request. A method can have rate and throttle limits specified on it to dictate the volume of calls made using a specific key to that method. A Method is specialized as either an API Method or a Plan Method. The specialization provides context to whether or not the Method belongs to a Plan.
Usage: Generic method entity representation: com.mashery.trafficmanager.model.Method; API method entity representation: com.mashery.trafficmanager.model.APIMethod; Plan method entity representation: com.mashery.trafficmanager.model.PlanMethod

Element: Package
Description: A Package is a mechanism to bundle or group API capability, allowing the API Manager to then offer these capabilities to customers/users based on various access levels and price points. A Package represents a group of Plans.
Usage: com.mashery.trafficmanager.model.Package

Element: Plan
Description: A Plan is a collection of API endpoints, methods and response filters to group functionality so that API Product Managers can manage access control and provide access to appropriate Plans to different users.
Usage: com.mashery.trafficmanager.model.Plan

Element: API Call
Description: The API Call object is the complete transaction of the incoming request received by the Traffic Manager and the outgoing response as processed by the Traffic Manager. It provides an entry point into all other entities used in the execution of the request.
Usage: com.mashery.trafficmanager.model.core.APICall

Element: Key
Description: A key is an opaque string allowing a developer to access the API functionality. A key has rate and throttle controls defined on it and dictates the volume of calls that can be made to the API by the caller. A Key can be specialized as an API Key or a Package Key. This specialization provides context to whether the key provides access to an API or to a specific Plan in a Package.
Usage: Generic key entity representation: com.mashery.trafficmanager.model.Key; API key entity representation: com.mashery.trafficmanager.model.APIKey; Package key entity representation: com.mashery.trafficmanager.model.PackageKey

Element: Application
Description: An application is a developer artifact that is registered by the developer when he subscribes to an API or a Package.
Usage: com.mashery.trafficmanager.model.Application

Element: Rate Constraint
Description: A Rate Constraint specifies how the amount of traffic is managed by limiting the number of calls per time period (hours, days, months) that may be received.
Usage: com.mashery.trafficmanager.model.RateConstraint

Element: Throttle Constraint
Description: A Throttle Constraint specifies how the velocity of traffic is managed by limiting the number of calls per second that may be received.
Usage: com.mashery.trafficmanager.model.ThrottleConstraint

Element: Customer Site
Description: A customer-specific area configured through the developer portal.
Usage: com.mashery.trafficmanager.model.CustomerSite

Extended Attributes

The Traffic Manager model allows defining name-value pairs on different levels of the model. The levels are identified here:

● Application
● Customer Site
● Key (both API Key and Package Key)
● Package
● Plan
● User

Pre and Post Processor Extension Points

This version of the SDK allows extensions for Processors only. This means that only pre-processing of requests prior to invocation of the target host, and post-processing after its response, are allowed.

Listener Pattern

The extension API leverages a listener pattern to deliver callbacks to extension points to allow injecting custom logic.

A call made to the traffic manager is an invocation to a series of tasks. Each step in the workflow accomplishes a specific task to fulfill the call. The current API release only allows customization of the tasks prior to invoking the API server (pre-process) and post receipt of the response from the API server (post-process). The callback API handling these extensions is called a Processor.

The pre-process step allows a processor to receive a fully-formed HTTP request targeted to the API server. The processor is allowed to alter the headers or the body of the request prior to the request being made to the server. Upon completion of the request and receiving the response the Traffic Manager allows the processor to alter the response content and headers prior to the response flowing back through a series of exit tasks out to the client.

Event Types and Event

The transition of the call from one task to the next is triggered through ‘events’, and an event is delivered to any subscriber interested in receiving it. The SDK supports two event types, which are delivered synchronously:

● Pre-Process Event type: This event is used to trigger any pre-process task.
● Post-Process Event type: This event is used to trigger any post-process task.

The subscribers in this case are Processors registered in a specific manner with the Traffic Manager API.

Event Listener API

The Traffic Manager SDK provides the following interface, which is implemented by custom processors to receive Processor events:

package com.mashery.trafficmanager.event.listener;

import com.mashery.trafficmanager.event.model.TrafficEvent;

/**
 * Event listener interface which is implemented by listeners which wish to
 * handle Traffic events. Traffic events are delivered via this callback
 * synchronously to handlers implementing the interface.
 *
 * Implementers of this interface subscribe to events via annotations, e.g.
 * processor events are handled by using the annotations in
 * com.mashery.proxy.sdk.event.processor.annotation.
 */
public interface TrafficEventListener {

    /**
     * The event is delivered to this API.
     * @param event the traffic event
     */
    void handleEvent(TrafficEvent event);
}

Implementing and Registering Processors

Writing custom processors involves the following general steps:

● Downloading the SDK
● Implementing the event listener
● Implementing lifecycle callback handling
● Adding libraries to the classpath

The following sections describe these steps in more detail.

Downloading the SDK

To download the SDK:

Procedure

1. Click Download SDK, as shown in the screen shot:

2. Use your favorite IDE and put the SDK jars in your classpath.

3. Create a project and a new Java class. The details of that process are omitted here; refer to the relevant IDE documentation to accomplish this.

Implementing the Event Listener

To implement the event listener:

Procedure

1. Implement the TrafficEventListener interface (introduced in Event Listener API), as shown in the following example:

package com.company.extension;

public class CustomProcessor implements TrafficEventListener {

    public void handleEvent(TrafficEvent event) {
        // write your custom code here
    }
}

2. Annotate your code to ensure that the processor is identified correctly for callbacks on events related to the specific endpoints it is written to handle:

@ProcessorBean(enabled=true, name="com.company.extension.CustomProcessor", immediate=true)
public class CustomProcessor implements TrafficEventListener {

    public void handleEvent(TrafficEvent event) {
        // write your custom code here
    }
}

The annotation identifies the following properties:

● enabled: Identifies if the processor is to be enabled.
● name: Identifies the unique name of the processor as configured in API Settings (see the marked area in red in the following screenshot).
● immediate: Identifies if the processor is enabled immediately.

The name used in the annotation for the Processor MUST be the same as configured on the portal for the Endpoint > Pre/Post Processing, as shown in the following screenshot:

Implementing Lifecycle Callback Handling

If you wish to have some initialization work done once and only once for each of the processors, implement the following interface:

package com.mashery.trafficmanager.event.listener;

/**
 * The lifecycle callback which gets called when the processor gets loaded
 * when installed and released.
 */
public interface ListenerLifeCycle {

    /**
     * The method is called once in the life-cycle of the processor before
     * the processor is deemed ready to handle requests. If the processor
     * throws an exception, the activation is assumed to be a failure and
     * the processor will not receive any requests.
     * @throws ListenerLifeCycleException
     */
    public void onLoad(LifeCycleContext ctx) throws ListenerLifeCycleException;

    /**
     * The method is called once in the life-cycle of the processor before
     * the processor is removed. The processor will not receive any requests
     * upon inactivation.
     */
    public void onUnLoad(LifeCycleContext ctx);
}

The onLoad call is made once before the processor handles any requests, and the onUnLoad call is made before the processor is decommissioned and no more requests are routed to it.

The lifecycle listener can be implemented on the Processor class or on a separate class. The annotation needs to add a reference to the lifecycle class if the interface is implemented (see the lifeCycleClass property):

package com.company.extension;

@ProcessorBean(enabled=true, name="com.company.extension.CustomProcessor",
        immediate=true, lifeCycleClass="com.company.extension.CustomProcessor")
public class CustomProcessor implements TrafficEventListener, ListenerLifeCycle {

    public void handleEvent(TrafficEvent event) {
        // write your custom code here
    }

    public void onLoad(LifeCycleContext ctx) throws ListenerLifeCycleException {
    }

    public void onUnLoad(LifeCycleContext ctx) {
    }
}

The lifeCycleClass property should point to the class implementing the ListenerLifeCycle interface. This also allows having a separate lifecycle listener class, as follows (note the different lifeCycleClass name). The following example shows a different class implementing the lifecycle callback:

package com.company.extension;

@ProcessorBean(enabled=true, name="com.company.extension.CustomProcessor",
        immediate=true, lifeCycleClass="com.company.extension.CustomProcessorLifeCycle")
public class CustomProcessor implements TrafficEventListener {

    public void handleEvent(TrafficEvent event) {
        // write your custom code here
    }
}

public class CustomProcessorLifeCycle implements ListenerLifeCycle {

    public void onLoad(LifeCycleContext ctx) throws ListenerLifeCycleException {
    }

    public void onUnLoad(LifeCycleContext ctx) {
    }
}

Adding Libraries to Classpath

If the processor needs third-party libraries, those can be used in development and packaged with the processors, as described in Deploying Processors to Runtime.

Deploying Processors to Runtime

Deploying a custom processor involves the following general steps:

● Packaging the custom processor
● Uploading the custom processor
● Enabling debugging

The following sections describe these steps in more detail.

Packaging the Custom Processor

Once the processor code is written, compile the classes and create a jar file with all the classes. Any third-party libraries used must be specified in the MANIFEST.MF of the jar containing the processor classes. The classpath entries are introduced as follows. For example, if you want to add apache-commons.jar, you would add it as follows to the META-INF/MANIFEST.MF file of the jar:

Class-Path: apache-commons.jar
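As an illustrative sketch of the packaging steps (the directory and file names are placeholders, not prescribed by this guide):

# compile the processor classes against the SDK jars
javac -cp "sdk/*" -d classes src/com/company/extension/CustomProcessor.java

# manifest declaring third-party jars on the Class-Path
printf 'Class-Path: apache-commons.jar\n' > MANIFEST.MF

# jar the classes with the manifest, then zip everything for upload
jar cfm custom-processor.jar MANIFEST.MF -C classes .
zip adapters.zip custom-processor.jar apache-commons.jar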

Uploading the Custom Processor

To upload the custom processor:

Procedure

● Once the jar file is created, add it to a .ZIP file. Upload the .ZIP file to the Mashery Local instance by using the Mashery Local Admin UI as shown.

If the upload is successful, a message appears that the adapters were uploaded successfully.

Enabling Debugging

During development, it is sometimes necessary to enable debugging on the Mashery Local instance.

To enable debugging, click the Enable debugging check box, indicate the port number to which you will connect your debugger, and then click Save:


Caching Content

The custom endpoints can cache content during the call handling. The cache configuration is found in the Manage Custom Content Cache section on the API Settings page.


Manage Custom Content Cache provides the following options:

● Custom TTL: A default TTL provided for the cache.
● Update TTL: Provides the ability to save any TTL changes.
● Update TTL & Flush Cache: Updates the database with the updated TTL and flushes the cache contents.
● Flush Cache: Allows the cache contents to be flushed.

The SDK provides references to a Cache where all this data is stored. The cache interface provided in the callback to the TrafficEventListener is:

package com.mashery.trafficmanager.cache;

/**
 * Cache API which allows extensions to store and retrieve data from cache.
 */
public interface Cache {

    /**
     * Retrieves the value from the cache for the given key.
     * @param key
     * @return the cached value
     * @throws CacheException
     */
    Object get(String key) throws CacheException;

    /**
     * Puts the value against the key in the cache for a given TTL.
     * @param key
     * @param value
     * @param ttl
     * @throws CacheException
     */
    void put(String key, Object value, int ttl) throws CacheException;
}

A reference to the cache can be found on the ProcessorEvent which is reported on the callback. Here is an example of how to access the cache on callback:

package com.company.extension;

@ProcessorBean(enabled=true, name="com.company.extension.CustomProcessor", immediate=true)
public class CustomProcessor implements TrafficEventListener {

    public void handleEvent(TrafficEvent event) {
        ProcessorEvent processorEvent = (ProcessorEvent) event;
        Cache cacheReference = processorEvent.getCache();
        // add data to the cache
        try {
            cacheReference.put("testkey", "testValue", 10);
        } catch (CacheException e) {
            // load data or load default data
        }
        // write your custom processor code here
    }
}

A reference to the cache is also available on the lifecycle callback:

package com.company.extension;

public class CustomProcessorLifeCycle implements ListenerLifeCycle {

    public void onLoad(LifeCycleContext ctx) throws ListenerLifeCycleException {
        Cache cache = ctx.getCache();
        // perform cache operations
    }

    public void onUnLoad(LifeCycleContext ctx) {
    }
}


Configuring Trust Management

The Trust Management page allows the administrator to add or update certificates used by the HTTPS client. The HTTPS client profile references these certificates.

The following table describes the fields in the Trust Management page.


Field or button: Upload Trust
Description: Opens the Upload a Trusted CA Certificate window. To upload a certificate, click Click here to select file, then click Upload.

Field or button: Name
Description: The name of the certificate.

Field or button: Serial Number
Description: The serial number of the certificate.

Field or button: Expiration Date
Description: The date and time the certificate expires.

Field or button: State
Description: Identifies the following information:

● The state of the certificate:
  - Certificate manifest will be synchronized with TIBCO Mashery SaaS
  - Certificate manifest has been synchronized with TIBCO Mashery SaaS
  - Certificate is about to expire (the expiration warning is shown one month before the expiration date)
  - Certificate expired
  - Certificate manifest update will be synchronized with TIBCO Mashery SaaS
  - Certificate in TIBCO Mashery Local is outdated for TIBCO Mashery SaaS
  - Certificate is not present in TIBCO Mashery Local
● The number of profile(s) using the certificate.
● The number of endpoint(s) using the certificate.
● The available action suggested or required for the certificate.


Configuring Identity Management

The Identity Management page allows the administrator to add or update identities used by the HTTPS client. The HTTPS client profile references these identities.

The following fields and buttons appear on the Identity Management page.

● Upload Identity: Opens the Upload a Trusted CA Certificate window. To upload an identity, click Click here to select file, enter the Password, then click Upload.

● Name: The name of the identity.

● Serial Number: The serial number of the identity.

● Expiration Date: The date and time the identity expires.

● State: Identifies the following information:
  - The state of the identity, which can be one of:
    - Certificate manifest will be synchronized with TIBCO Mashery SaaS
    - Certificate manifest has been synchronized with TIBCO Mashery SaaS
    - Certificate is about to expire (the expiration warning is shown one month before the expiration date)
    - Certificate expired
    - Certificate manifest update will be synchronized with TIBCO Mashery SaaS
    - Certificate in TIBCO Mashery Local is outdated for TIBCO Mashery SaaS
    - Certificate is not present in TIBCO Mashery Local
  - The number of profiles using the identity.
  - The number of endpoints using the identity.
  - The available action suggested or required for the identity.


Testing the New Instance

You should test a new instance after installing and creating it.

Testing a New Instance

One approach to testing a new instance is:

Procedure

1. Find the API to test in the API Settings area of the Mashery Admin Dashboard and identify an associated endpoint that is ready for testing.

2. Create a test API key for the API identified in the previous step. You accomplish this in the Users area accessed by clicking the Users tab of the Mashery Admin Dashboard.

3. Perform a manual sync of the Services and Developers in the Cloud Sync page of the Mashery Local Cluster Manager, as described in step 7 of Configuring Slaves to the Master.

4. Construct a test API call for the API you wish to test.

5. Execute the API call against the instance. Unless you have set up a domain name for the instance, your API call will need to be made against the IP address of the instance directly.

Should you use a hostname or an IP address in your test call? When a service is set up in the dashboard, the hostnames (IP addresses can also be used) that will consume the service are defined. When a call is made to the proxy, the hostname used for the call must match one of the hostnames set up in the dashboard for the service; otherwise, the call fails. If you make a call directly to one of the instances using its IP address and that IP address was not configured in the service definition, the proxy returns a 596 error.

If you receive the expected response from the API, then your instance is working properly.
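Step 5 can also be scripted. The following Python sketch sends a test call directly to the instance IP while supplying a Host header that matches a hostname from the service definition; the IP address (10.0.0.15), hostname (api.example.com), endpoint path (/v1/test), and api_key query parameter are hypothetical placeholders to replace with your own values:

import ssl
import urllib.request

# Hypothetical placeholder values; substitute your instance IP, a hostname
# configured in the service definition, your endpoint path, and a test key.
INSTANCE_IP = "10.0.0.15"
SERVICE_HOSTNAME = "api.example.com"
URL = "https://" + INSTANCE_IP + "/v1/test?api_key=YOUR_TEST_KEY"

# The Host header must match a hostname configured for the service;
# otherwise the proxy returns a 596 error.
request = urllib.request.Request(URL, headers={"Host": SERVICE_HOSTNAME})

# The instance may present a self-signed certificate; for a quick test
# only, skip certificate verification.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(request, context=context) as response:
    print(response.getcode())
    print(response.read().decode("utf-8"))

If the hostname mapping and key are correct, the call returns the API response instead of a 596 error.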

Tracking the Database Restore and Replication Status

The TIBCO Mashery Local slave node registration process includes asynchronous steps, such as the database restore, that run in the background with no active feedback on completion or failure. A status endpoint is provided to help track the database restore and replication status.

The status endpoint (registration_status.py) is an experimental API and is subject to change in later implementations.

There is no need to Enable API Access on the node for this endpoint to function.

To obtain the database restore and replication status, type the URL of the endpoint in your browser. For example: https://<Mashery_Local_Slave_IP>:5480/service/mashery/cgi/replication_status.py

The endpoint returns the following JSON:

{
    "replication_status":
    {
        "restore": {
            "error": false,
            "errors": "",
            "log": "1. transfer backup from master\nMon Aug 22 18:38:04 UTC 2016\n\n2. unzip backup\nMon Aug 22 18:38:41 UTC 2016\n\n3. stop slave\nSlave stopped\nMon Aug 22 18:38:41 UTC 2016\n\n4. restore from backup\nMon Aug 22 20:36:18 UTC 2016\n\n5. start slave\nSlave started\nMon Aug 22 20:36:18 UTC 2016\n\n6. done\nMon Aug 22 20:36:18 UTC 2016\n\n",
            "complete": true
        },
        "replication": {
            "last_error": "",
            "seconds_behind_master": "250832\n",
            "slave_io_running": "Yes\n",
            "slave_sql_running": "Yes\n"
        }
    },
    "error": null
}

The following list describes the JSON nodes returned by the status endpoint:

● replication_status.restore.log: Provides the database restore log.

● replication_status.restore.complete: true implies that the database restore step is done; false implies that it is not yet complete.

● replication_status.restore.error: true implies that there were errors during the process; refer to the replication_status.restore.errors node for more details on the errors. false implies that there were no errors during the process.

● replication_status.replication: Provides replication-specific information from "show slave status".

● replication_status.replication.last_error: Provides the details for the errors (if any).

● replication_status.replication.seconds_behind_master: Provides an estimate (time in milliseconds) of how long it takes the slave node to catch up to the master.

● replication_status.replication.slave_io_running and replication_status.replication.slave_sql_running: No implies that replication is not running; last_error provides the details for the errors (if any). Yes implies that replication has started; seconds_behind_master provides an estimate of how long it takes the slave node to catch up to the master.

When the restore step is in process, replication is disabled. Therefore, the value of slave_io_running and slave_sql_running is No.
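The status can also be checked programmatically rather than in a browser. The following Python sketch polls the endpoint (using the example URL above; the Slave IP address 10.0.0.16 is a hypothetical placeholder) until the restore step completes or reports an error:

import json
import ssl
import time
import urllib.request

# Hypothetical placeholder value; substitute your Mashery Local Slave IP.
URL = "https://10.0.0.16:5480/service/mashery/cgi/replication_status.py"

# The node may present a self-signed certificate; for a quick check only,
# skip certificate verification.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

while True:
    with urllib.request.urlopen(URL, context=context) as response:
        status = json.loads(response.read().decode("utf-8"))
    restore = status["replication_status"]["restore"]
    if restore["error"]:
        print("Restore failed:", restore["errors"])
        break
    if restore["complete"]:
        replication = status["replication_status"]["replication"]
        print("Restore complete; seconds behind master:",
              replication["seconds_behind_master"].strip())
        break
    print("Restore still in progress...")
    time.sleep(60)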


Troubleshooting

Mashery Local provides a set of tools that help an administrator debug issues with API calls as they flow through TIBCO Mashery, troubleshoot networking issues with the system, identify issues with cloud synchronization, and collect system logs to help Operations and Support staff identify root causes faster. This section outlines the tools available and their usage scenarios.

Verbose Logs

The Mashery Local administrator can troubleshoot issues related to call payloads or identify any inconsistencies as API call data flows through TIBCO Mashery by enabling verbose logs on API calls.

This feature is not enabled as an “always on” feature because producing verbose logs may impact API call performance. Instead, the Cluster Manager UI provides options to enable and disable verbose logs.

Using the Verbose Logs Feature

To use the Verbose Log feature:

Procedure

1. Specify the Verbose Logs location.

a) Select Use NFS.

TIBCO Mashery highly recommends using an NFS location for verbose logs so that local system usage is not impacted. In addition, all the nodes in a cluster (the Master and Slaves) can write to the same centralized location for easier analysis.

1. Enter the NFS host name.

2. Enter the NFS directory name.

3. Click Save.

The specified NFS directory is mounted onto the local directory /mnt/nfs/home.


b) If you do not select Use NFS (not recommended), verbose logs are saved to a local directory. The default directory, /var/log/mashery/verbose, can be changed.


2. Enable Verbose Logs.

a) Select the duration for capturing the logs (05, 10, 15, or 30 minutes).

b) Click Enable.


After you enable verbose logs, TIBCO Mashery Local writes the call data logs, which include inbound request data, inbound processed data, outbound response data, and outbound processed data. Verbose logging (call data capturing) is disabled after the selected duration expires.

You must set the Verbose Logs location on each node in the cluster, including the Master and all Slaves. Enabling or disabling verbose logs can only occur on the Master node; the Slave nodes inherit the current verbose log enablement status from the Master.

Working with Verbose Logs

A directory is created every minute, named in the format YYYY-MM-DD-HH-MM; all the calls logged in that minute are written to that directory. For each call, a sub-directory is created using the name <timestamp>-<Mashery Message ID>.

Mashery Message ID is a globally unique ID generated for every API call processed by TIBCO Mashery. It gives administrators a way to create a golden thread for debugging issues between TIBCO Mashery, your partners, and your backend system.

To include this GUID in request and response headers, toggle on the Include X-Mashery-Message-ID in Request and Include X-Mashery-Message-ID in Response properties in the Services > Endpoint > Properties page of the TIBCO Mashery Administration Dashboard.

Within each sub-directory, the four log files are InboundRequest.log, InboundProcessed.log, OutboundResponse.log, and OutboundProcessed.log.

● InboundRequest contains the request data on the API call as it is originally received by Mashery Local from the client application.

● InboundProcessed contains the Mashery processed version of the inbound request as sent to the API server (or to cache, if enabled).

● OutboundResponse contains the response data as it is originally received by Mashery from the API server (or from cache, if enabled).

● OutboundProcessed contains the Mashery processed version of the outbound response as sent to the client application.

Each of the four files contains some important metadata written as key-value pairs, one pair per line. After the metadata, a delimiter line is written, followed by the actual message.

The metadata includes:

● Key: The key a developer uses to get access to a Package Plan or API.

● Service ID: A TIBCO Mashery generated unique ID to identify an API.

● Endpoint ID: A TIBCO Mashery generated unique ID to identify an Endpoint.

● Site ID: A TIBCO Mashery generated unique ID to identify your Site within the TIBCO Mashery Network.

● IP address: The IP address of the client application invoking the API call.

● Method (if available): The method that was being accessed in the API call (available if the appropriate Method Configuration settings are specified in the Services > Endpoints > Properties tab of the TIBCO Mashery Administration Dashboard).

● Cache hit: 1 if the cache is enabled and the response is served from the cache; 0 otherwise.

● Error message (if any): The TIBCO Mashery generated error message on that API call (if any).
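Because the directory layout and metadata format are predictable, a call's verbose logs can be located and summarized with a short script. The following Python sketch is illustrative only: the log root and Mashery Message ID are hypothetical placeholders, and it assumes the delimiter before the message body is a blank line (adjust the check to match your actual log files):

import os

# Hypothetical placeholder values; substitute your verbose logs location
# and the Mashery Message ID you are tracing.
LOG_ROOT = "/mnt/nfs/home"
MESSAGE_ID = "YOUR-MASHERY-MESSAGE-ID"

# Per-minute directories are named YYYY-MM-DD-HH-MM; each call gets a
# sub-directory named <timestamp>-<Mashery Message ID>.
for minute_dir in sorted(os.listdir(LOG_ROOT)):
    minute_path = os.path.join(LOG_ROOT, minute_dir)
    if not os.path.isdir(minute_path):
        continue
    for call_dir in os.listdir(minute_path):
        if not call_dir.endswith(MESSAGE_ID):
            continue
        log_file = os.path.join(minute_path, call_dir, "InboundRequest.log")
        metadata = {}
        with open(log_file) as f:
            for line in f:
                line = line.strip()
                if not line:  # assumed delimiter before the message body
                    break
                key, _, value = line.partition(":")
                metadata[key.strip()] = value.strip()
        print(call_dir, metadata)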

Mapping Endpoint IDs

TIBCO Mashery Local provides a script that fetches a list of endpoints with details such as the Endpoint ID and the Endpoint name. The endpoints associated with a service are displayed; the Service ID is the parameter used to fetch the endpoint details.

//Request searching with a particular service id
python getEndpointNames.py --service 95bpf2jv3f8p5x3xqsu657x5

//Response in JSON format

{
    "services":[
        {
            "endpoints":[
                {
                    "id":"7xwgjatahmuwgrz79cgw286a",
                    "name":"CORS-disabled"
                },
                {
                    "id":"2m4zz8nw4n9w36uau7j2bnqb",
                    "name":"Custom CORS(custom rest as the API key source)"
                },
                {
                    "id":"g2qx6vhxubu4d4w66egqnxsh",
                    "name":"CORS-enabled- EIN-112-dontallow"
                },
                {
                    "id":"uavv2nm6yy7j94nhp8zp5kjf",
                    "name":"CORS-enabled-EIN-112"
                },
                {
                    "id":"pgbrzzu89dtyvumqht4ncnt4",
                    "name":"preflight-requestmultipledomainno"
                },
                {
                    "id":"7qcpe6dsss4kxp4u8c6fy5pr",
                    "name":"EIN-222"
                }
            ],
            "id":"95bpf2jv3f8p5x3xqsu657x5"
        }
    ]
}
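When a service has many endpoints, the response lends itself to scripting. As a sketch, assuming the response shown above has been saved to a file named endpoints.json (a hypothetical name), the following Python snippet prints a lookup from Endpoint ID to Endpoint name:

import json

# Assumes the JSON response shown above was saved to this file.
with open("endpoints.json") as f:
    data = json.load(f)

# Print an Endpoint ID to Endpoint name mapping for each service.
for service in data["services"]:
    print("Service:", service["id"])
    for endpoint in service["endpoints"]:
        print(" ", endpoint["id"], "->", endpoint["name"])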

Debugging Utility

A debug utility is provided to capture information about system health, configuration, and connectivity to the cloud for synchronization, as well as to fix Slave registration issues and resolve replication issues between Master and Slave. A command-line console is available to run the various options. This utility is useful for gathering information to assist with troubleshooting common system configuration errors in TIBCO Mashery Local.

Running the Debug Utility

Execute the following command to run the debug utility:

$ python /opt/mashery/utilities/debug_util.py

The following options are available. Some options can be run only on the Master and some only on a Slave.

Select from the following:

1: Collect Logs

2: Test connectivity to Cloud Sync

3: Show Slave Status

4: Check IP address

5: Update record of Master IP address in Master (Master IP address has changed and registration of new Slave with cluster fails)

6: Fix slave corruption (Restart slave at last valid read position)

7: Update record of Master IP address in old Slave node (Master IP address has changed and cluster is not updated)

8: System manager (Remove non-functional or unused slaves from Master)

9: Collect system state (Disk health, process health, time setting, network settings)
menu: Show this menu
exit: Quit

For option 9 (Collect system state), the resulting files are created in the home directory of the login user (root/administrator).

Collect Logs

This tool produces a tar.gz file that collects Traffic Manager component logs, sync status with the cloud, the Slave and Master IP address checks, logs required to check replication issues between Master and Slave, and verbose logs for the day (if any).

This option can be run on Master and Slave nodes.

Test Connectivity to Cloud Sync

This tool helps to determine if there are any errors connecting to the TIBCO Mashery Cloud system for synchronization.

This option can be run on Master and Slaves.

Show Slave Status

This option displays whether a Slave is functioning correctly, including its status, the Master system's IP address, and any replication errors that are present between Master and Slave.

This option can be run on a Slave node.

Check IP Address

This option allows you to check the current IP address of the Master.


This option can be run on a Master node.

Update Record of Master IP Address in Master

If the IP address of a Master node changes, registration of a new Slave with the Master can fail. Running this option fixes the record of the Master IP address in the Master node so that Slave registration succeeds.

This option can be run on a Master node.

Fix Slave Corruption

This option allows you to resolve Master Slave replication issues.

This option can be run on a Slave node.

Update Record of Master IP Address in Old Slave Node

This option updates the record of the Master IP address in the Slave nodes and is useful for resolving Master-Slave replication issues.

System Manager (Remove Non-functional or Unused Slaves from Master)

Sometimes Slave nodes are decommissioned and new Slave nodes are created. This option, run on the Master system, can be used to remove unused Slaves.

System Level Troubleshooting

TIBCO Mashery Local administrators can run the following commands to investigate and troubleshoot network or system-level issues.

● ping
● ping6
● tracepath
● tracepath6
● tcpdump
● traceroute
● sudo arping
● sudo tshark
● sudo route
● sudo ifconfig
● sudo iptables
● sudo dhclient
● sudo pvresize
● sudo resize2fs
● sudo edit for the following files:
  - /etc/resolv.conf
  - /etc/sysconfig/network-scripts/ifcfg-eth0
  - /etc/sysconfig/network-scripts/ifcfg-eth1
  - /etc/rc.local
  - /etc/hosts
  - /etc/security/limits.conf
  - /etc/nsswitch.conf

