FortiController Session-Aware Load Balancing (SLBC) Guide
FortiController-5103B
FortiController-5903C
FortiController-5913C
FORTINET DOCUMENT LIBRARY
http://docs.fortinet.com
FORTINET VIDEO GUIDE
http://video.fortinet.com
FORTINET BLOG
https://blog.fortinet.com
CUSTOMER SERVICE & SUPPORT
https://support.fortinet.com
http://cookbook.fortinet.com/how-to-work-with-fortinet-support/
FORTIGATE COOKBOOK
http://cookbook.fortinet.com
FORTINET TRAINING SERVICES
http://www.fortinet.com/training
FORTIGUARD CENTER
http://www.fortiguard.com
FORTICAST
http://forticast.fortinet.com
CLI REFERENCE
http://cli.fortinet.com
END USER LICENSE AGREEMENT
http://www.fortinet.com/doc/legal/EULA.pdf
FEEDBACK
Email: [email protected]
Wednesday, February 15, 2017
FortiController Session-Aware Load Balancing (SLBC) Guide
10-520-247862-20170215
TABLE OF CONTENTS
Change Log
About Session-Aware Load Balanced Clusters (SLBCs)
About the SLBC FortiController models
About the FortiController-5103B
About the FortiController-5903C
About the FortiController-5913C
Getting started and SLBC Basics
SLBC licensing
FortiOS Carrier licensing
Licenses (Support, FortiGuard, FortiCloud, FortiClient, FortiToken Mobile, VDOMs)
Certificates
Quick Setup
Connecting to the FortiController GUI
Connecting to the FortiController command line interface (CLI)
Factory default settings
Managing the cluster
Managing the workers (including SNMP, FortiManager)
Managing the FortiControllers (including SNMP and FortiManager)
Managing the devices in an SLBC with the External Management IP
Single chassis or chassis 1 special management port numbers
Chassis 2 special management port numbers
Example single chassis management IP addresses and port numbers
Example management IP addresses and port numbers for two chassis and two or four FortiControllers
Monitoring the cluster
Worker communication with FortiGuard
Basic cluster NAT/Route mode configuration
Using the GUI to configure NAT/Route mode
Using the CLI to configure NAT/Route mode
Primary unit (master) selection
Selecting the primary FortiController (and the primary chassis)
Selecting the primary worker
Adding and removing workers
Adding link aggregation (LACP) to an SLBC cluster (FortiController trunks)
To add an aggregated interface
Adding VLANs
To add a VLAN to a FortiController interface
Changing FortiController interface configurations
Changing FortiController-5103B interface speeds
Changing FortiController-5903C and FortiController-5913C interface speeds
Splitting FortiController-5903C and FortiController-5913C front panel network interfaces
FGCP to SLBC migration
Assumptions
Conversion steps
Updating SLBC firmware
Upgrading a single-chassis cluster
Upgrading a two chassis cluster
Verifying the configuration and the status of the units in the cluster
Configuring communication between FortiControllers
Changing the base control subnet and VLAN
Changing the base management subnet and VLAN
Changing the heartbeat VLAN
Using the FortiController-5103B mgmt interface as a heartbeat interface
Changing the heartbeat interface mode
Enabling session sync and configuring the session sync interface
Changing load balancing settings
Tuning TCP load balancing performance (TCP local ingress)
Tuning UDP load balancing (UDP local ingress and UDP remote/local session setup)
Changing the load distribution method
TCP and UDP local ingress session setup and round robin load balancing
Changing how UDP sessions are processed by the cluster
Tuning load balancing performance: fragmented sessions
Changing session timers
Life of a TCP packet
Life of a TCP packet (default configuration: TCP local ingress disabled)
Life of a TCP packet (TCP local ingress enabled)
Life of a UDP packet
Life of a UDP packet (default configuration: UDP local ingress disabled and UDP remote session setup)
Life of a UDP packet (UDP local ingress disabled and UDP local session setup)
Life of a UDP packet (UDP local ingress enabled and UDP remote session setup)
Life of a UDP packet (UDP local ingress enabled and UDP local session setup)
SLBC with one FortiController-5103B
1. Setting up the Hardware
2. Configuring the FortiController
3. Adding the workers
4. Results
Active-Passive SLBC with two FortiController-5103Bs
1. Setting up the Hardware
2. Configuring the FortiControllers
3. Adding the workers to the cluster
4. Results
Dual mode SLBC with two FortiController-5103Bs
1. Setting up the Hardware
2. Configuring the FortiControllers
3. Adding the workers to the cluster
4. Results
Active-passive SLBC with two FortiController-5103Bs and two chassis
1. Setting up the Hardware
2. Configuring the FortiController in Chassis 1
3. Configuring the FortiController in Chassis 2
4. Configuring the cluster
5. Adding the workers to the cluster
6. Managing the cluster
7. Results - Configuring the workers
8. Results - Checking the cluster status
Active-passive SLBC with four FortiController-5103Bs and two chassis
1. Setting up the Hardware
2. Configuring the FortiController in Chassis 1 Slot 1
3. Configuring the FortiController in Chassis 1 Slot 2
4. Configuring the FortiController in Chassis 2 Slot 1
5. Configuring the FortiController in Chassis 2 Slot 2
6. Configuring the cluster
7. Adding the workers to the cluster
8. Managing the cluster
9. Results - Configuring the workers
10. Results - Primary FortiController cluster status
11. Results - Chassis 1 Slot 2 FortiController status
12. Results - Chassis 2 Slot 1 FortiController status
13. Results - Chassis 2 Slot 2 FortiController status
Dual mode SLBC with four FortiController-5103Bs and two chassis
1. Setting up the Hardware
2. Configuring the FortiController in Chassis 1 Slot 1
3. Configuring the FortiController in Chassis 1 Slot 2
4. Configuring the FortiController in Chassis 2 Slot 1
5. Configuring the FortiController in Chassis 2 Slot 2
6. Configuring the cluster
7. Adding the workers to the cluster
8. Managing the cluster
9. Results - Configuring the workers
10. Results - Primary FortiController cluster status
11. Results - Chassis 1 Slot 2 FortiController status
12. Results - Chassis 2 Slot 1 FortiController status
13. Results - Chassis 2 Slot 2 FortiController status
Dual mode SLBC with four FortiController-5903Cs and two chassis
1. Setting up the Hardware
2. Configuring the FortiController in Chassis 1 Slot 1
3. Configuring the FortiController in Chassis 1 Slot 2
4. Configuring the FortiController in Chassis 2 Slot 1
5. Configuring the FortiController in Chassis 2 Slot 2
6. Configuring the cluster
7. Adding the workers to the cluster
8. Managing the cluster
9. Results - Configuring the workers
10. Results - Primary FortiController cluster status
11. Results - Chassis 1 Slot 2 FortiController status
12. Results - Chassis 2 Slot 1 FortiController status
13. Results - Chassis 2 Slot 2 FortiController status
FortiController get and diagnose commands
get load-balance status
diagnose system flash list
diagnose system ha showcsum
diagnose system ha stats
diagnose system ha status
diagnose system ha force-slave-state
diagnose system load-balance worker-blade status
diagnose system load-balance worker-blade session-clear
diagnose switch fabric-channel egress list
diagnose switch base-channel egress list
diagnose switch fabric-channel packet heartbeat-counters list
diagnose switch fabric-channel physical-ports
diagnose switch fabric-channel mac-address list
diagnose switch fabric-channel mac-address filter
diagnose switch fabric-channel trunk list
Change Log

15 February, 2017: New chapter: FortiController get and diagnose commands on page 152.

21 November, 2016: Added information about the MTU size of FortiController data interfaces.

19 July, 2016: Corrected information about FortiToken licensing and SLBC throughout.

2 February, 2016: Added a note about GTP load balancing not being supported by SLBC to About Session-Aware Load Balanced Clusters (SLBCs) on page 8. Additional explanation about dual mode network connections added to the dual mode examples. Clarification of the VLANs used for session sync by the FortiController-5903C and FortiController-5913C added to Configuring communication between FortiControllers on page 42.

6 November, 2015: New section "SLBC licensing" on page 19.

1 November, 2015: Clarification and corrections about how to connect the B1 and B2 interfaces in a FortiController-5903C and FortiController-5913C cluster in the following sections: "Dual mode SLBC with four FortiController-5903Cs and two chassis" on page 131 and Configuring communication between FortiControllers on page 42.

30 October, 2015: Improved the coverage of the FortiController-5903C and FortiController-5913C throughout the document. New sections: About the SLBC FortiController models on page 12, Changing FortiController interface configurations on page 35, and Dual mode SLBC with four FortiController-5903Cs and two chassis on page 131. More FortiController-5903C and FortiController-5913C examples to be added to future versions.

12 July, 2015: New format, contents re-organized. Configuration examples enhanced with content from http://cookbook.fortinet.com. New section: FGCP to SLBC migration on page 36. If you notice problems, send comments to [email protected].
About Session-Aware Load Balanced Clusters (SLBCs)
This FortiController Session-Aware Load Balancing (SLBC) Guide describes how to connect and configure session-aware load balancing (SLBC) clusters consisting of FortiControllers acting as load balancers and FortiGate-5000s operating as workers, all installed in FortiGate-5000 series chassis.
SLBC clusters load balance TCP and UDP sessions. As a session-aware load balancer, the FortiController
includes FortiASIC DP processors that maintain state information for all TCP and UDP sessions. The FortiASIC
DP processors are capable of directing any TCP or UDP session to any worker installed in the same chassis. This
session-awareness means that all TCP and UDP traffic being processed by a specific worker continues to be
processed by the same worker. Session-awareness also means that more complex networking features such as
network address translation (NAT), fragmented packets, complex UDP protocols, and complex protocols such as
SIP that use pinholes, can be load balanced by the cluster.
In an SLBC, when a worker that is processing SIP traffic creates a pinhole, this information is communicated to
the FortiController. The FortiController then knows to distribute the voice and media sessions to this worker.
The SIP protocol uses known SIP ports for control traffic but dynamically uses a wide
range of ports for voice and other media traffic. To successfully pass SIP traffic
through a firewall, the firewall must use a session helper or application gateway to look
inside the SIP control traffic and determine the ports to open for voice and media. To
allow the voice and media traffic, the firewall temporarily opens these ports, creating
what’s known as a pinhole that temporarily allows traffic on a port as determined by
the SIP control traffic. The pinhole is closed when the first voice or media session
packet is received. When this happens the pinhole is converted to a normal session
and the pinhole itself is deleted.
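On the workers, the pinholes described above are created by the FortiOS SIP session helper. As a rough illustration of where that helper is bound, the FortiOS session helper table maps a helper name to a protocol and control port; the entry number shown below is only an example and varies between FortiOS versions, so treat this as a sketch rather than the exact configuration:

```
config system session-helper
edit 13
set name sip
set protocol 17
set port 5060
next
end
```

This binds the SIP helper to UDP (protocol 17) port 5060, the standard SIP control port; the helper then opens the voice and media pinholes dynamically.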
Session-aware load balancing does not support traffic shaping.
IPv4 and IPv6 interface-based (route-based) IPsec VPN sessions are not load balanced but
are all processed by the primary worker. Policy-based IPsec VPNs, manual key IPsec
VPNs, and hub-and-spoke IPsec VPNs are not supported; these IPsec VPN sessions are
dropped. Uni-directional SSL VPN sessions are load balanced to all workers.
You cannot mix ELBC, FGCP and SLBC clusters in the same chassis.
GTP sessions are not load balanced by SLBC. All GTP sessions are processed by the
primary worker.
An SLBC consists of one or two FortiControllers installed in chassis slots 1 and 2 and from one to 12 workers
installed in chassis slots 3 and up. Network traffic is received by the FortiControllers and load balanced to the
workers by the FortiASIC DP processors on the FortiControllers. Networks are connected to the FortiController
front panel interfaces and communication between the FortiControllers and the workers uses the chassis fabric
and base backplanes.
An SLBC with two FortiControllers can operate in active-passive mode or dual mode. In active-passive mode, if the active FortiController fails, traffic is transferred to the backup FortiController. In dual mode, both FortiControllers load balance traffic and twice as many network interfaces are available.
You can also install FortiControllers and workers in a second chassis. The second chassis acts as a backup and keeps operating if the active chassis fails. You can install one or two FortiControllers in each chassis. With one FortiController in each chassis you create an active-passive setup: if the active chassis fails, traffic is processed by the backup chassis. With two FortiControllers in each chassis, the SLBC cluster in each chassis can operate in active-passive mode or dual mode; again, if the active chassis fails, traffic is processed by the backup chassis.
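In the two-chassis examples later in this guide, each FortiController's HA configuration identifies the chassis it is installed in. A minimal sketch, using the same CLI style as the examples in this guide (the group ID value here is a placeholder):

```
config system ha
set mode a-p
set groupid 7
set chassis-id 1
end
```

On the FortiControllers installed in the second chassis, chassis-id would be set to 2 instead.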
The following figure shows a FortiController cluster consisting of one FortiController and three FortiGate-5001Cs.
Example FortiController Session-Aware Load Balanced Cluster (SLBC)

[Figure: a single FortiController in chassis slot 1 connects the internal network (interfaces fctrl/f1 and fctrl/f3) and management (mgmt). Load balanced traffic reaches the workers in slots 3 to 14 over the fabric backplane; management and control traffic uses the base backplane.]
SLBC does not support session sync between workers in the same chassis. The FortiControllers in a cluster keep
track of the status of the workers in their chassis and load balance sessions to the workers. If a worker fails the
FortiController detects the failure and stops load balancing sessions to that worker. The sessions that the worker
is processing when it fails are lost.
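You can confirm how sessions are being distributed and whether the FortiController has detected a failed worker using the commands documented in the FortiController get and diagnose commands chapter, for example:

```
get load-balance status
diagnose system load-balance worker-blade status
```

The first command summarizes the load balancing state of the cluster; the second reports the status of the individual worker blades.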
Most of the examples in this document are based on the FortiController-5103B. However, all configurations should be similar for other FortiControllers; the only differences are details such as the FortiController interface names. Supported FortiControllers include the FortiController-5103B, 5903C, and 5913C. Supported workers include the FortiGate-5001B, 5101C, 5001C, and 5001D.
Before using this document, your chassis should be mounted and connected to your power system. The chassis
should be powered up and the front panel LEDs should indicate that it is functioning normally.
About the SLBC FortiController models
Currently, three FortiController models are available for session-aware load balancing (SLBC).
The FortiController-5902D and the FortiSwitch-5203B are used for content clustering
and are not compatible with SLBC.
Some FortiController hardware and software features that affect SLBC configurations:

Network interfaces
- FortiController-5103B: Eight front panel 10Gbps SFP+ FortiGate interfaces (F1 to F8). Speed can be changed to 1Gbps. MTU size 9000 bytes.
- FortiController-5903C: Four front panel 40Gbps QSFP+ fabric channel interfaces (F1 to F4). Can be split into four 4x10G SFP+ interfaces. MTU size 9000 bytes.
- FortiController-5913C: Two front panel 100Gbps CFP2 fabric channel interfaces (F1 and F2). Can be split into two 10x10G SFP+ interfaces. MTU size 9000 bytes.

Base channel interfaces
- FortiController-5103B: Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2). Speed can be changed to 1Gbps.
- FortiController-5903C: Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2). Speed can be changed to 1Gbps.
- FortiController-5913C: Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2). Speed can be changed to 1Gbps.

Fabric backplane interfaces
- FortiController-5103B: 10Gbps. Speed can be changed to 1Gbps.
- FortiController-5903C: 40Gbps. Speed can be changed to 10Gbps or 1Gbps.
- FortiController-5913C: 40Gbps. Speed can be changed to 10Gbps or 1Gbps.

Base backplane interfaces
- All three models: 1Gbps.

Chassis supported
- FortiController-5103B: FortiGate-5144C (14 slots), FortiGate-5140B (14 slots), FortiGate-5060 (6 slots).
- FortiController-5903C: FortiGate-5144C (14 slots).
- FortiController-5913C: FortiGate-5144C (14 slots).

Heartbeat between FortiControllers
- FortiController-5103B: B1, B2, and mgmt (optional). Default VLAN 999.
- FortiController-5903C: B1 and B2. Default VLAN 999.
- FortiController-5913C: B1 and B2. Default VLAN 999.

Base control between chassis
- FortiController-5103B: B1, B2, and mgmt (optional). Default VLAN 301.
- FortiController-5903C: B1 and B2. Default VLAN 301.
- FortiController-5913C: B1 and B2. Default VLAN 301.

Base management between chassis
- FortiController-5103B: B1, B2, and mgmt (optional). Default VLAN 101.
- FortiController-5903C: B1 and B2. Default VLAN 101.
- FortiController-5913C: B1 and B2. Default VLAN 101.

Session sync
- FortiController-5103B: One of F1 to F8. VLAN 2000 (VLAN cannot be changed).
- FortiController-5903C: B1 and B2. VLANs 1900 and 1901 (cannot be changed).
- FortiController-5913C: B1 and B2. VLANs 1900 and 1901 (cannot be changed).
The remaining sections of this chapter describe each SLBC FortiController in more detail.
About the FortiController-5103B
The FortiController-5103B distributes IPv4 TCP and UDP sessions to multiple FortiGate-5000-series boards
(called workers) over the ATCA chassis fabric backplane. The FortiController-5103B board forms a session-aware
load balanced cluster with up to 12 FortiGate-5000 boards operating as workers and uses FortiASIC DP
processors to load balance millions of sessions to the cluster, providing 10 Gbps of traffic to each cluster
member. Performance of the cluster shows linear improvement if more workers are added.
Clusters can be formed with one or two FortiController-5103B boards and up to 12 workers. All of the workers
must be the same model. Currently FortiGate-5001B, FortiGate-5001C, FortiGate-5101C, and FortiGate-5001D
models are supported.
The FortiController-5103B board can be installed in any ATCA chassis that can provide sufficient power and
cooling. Supported FortiGate chassis include the 14-slot FortiGate-5140B and the 6-slot FortiGate-5060 chassis.
You can also install the FortiController-5103B board in a FortiGate-5144C chassis but this is not recommended
because the 5144C chassis has a 40Gbit fabric backplane while the FortiController-5103B only supports 10Gbit
fabric backplane connections. Older FortiGate-5000 chassis do not supply the power and cooling required for the
FortiController-5103B board.
In all ATCA chassis, FortiController-5103B boards are installed in the first and second hub/switch slots (usually
slots 1 and 2). A single FortiController-5103B board should be installed in slot 1 (but you can install it in slot 2). If
you add a second board it should be installed in slot 2.
FortiController-5103B front panel

[Figure: the FortiController-5103B front panel, showing base and fabric network activity LEDs; the RJ-45 console port; the eight 10Gbps SFP+ network interfaces (1 to 8); the B1 and B2 10Gbps base channel SFP+ interfaces (heartbeat and management); the MGMT 10/100/1000 copper management interface; the OOS, STA, PWR, IPM (board position), and ACC LEDs; and the retention screws and extraction levers.]
The FortiController-5103B board includes the following hardware features:
- One 1Gbps base backplane channel for layer-2 base backplane switching between FortiGate-5000 boards installed in the same chassis as the FortiController-5103B board. This base backplane channel includes 13 1Gbps connections to up to 13 other slots in the chassis (slots 2 to 14).
- One 10Gbps fabric backplane channel for layer-2 fabric backplane switching between FortiGate-5000 boards installed in the same chassis as the FortiController-5103B board. This fabric backplane channel includes 13 10Gbps connections to up to 13 other slots in the chassis (slots 2 to 14). Speed can be changed to 1Gbps.
- Eight front panel 10Gbps SFP+ FortiGate interfaces (1 to 8). In a session-aware load balanced cluster these interfaces are connected to 10Gbps networks to distribute sessions to FortiGate-5000 boards installed in chassis slots 3 to 14. Speed can be changed to 1Gbps. The MTU size of these interfaces is 9000 bytes.
- Two front panel base backplane 10Gbps SFP+ interfaces (B1 and B2) that connect to the base backplane channel. These interfaces are used for heartbeat and management communication between FortiController-5103B boards. Speed can be changed to 1Gbps.
- On-board FortiASIC DP processors to provide high-capacity session-aware load balancing.
- One 1Gbps out of band management ethernet interface (MGMT).
- One RJ-45, RS-232 serial console connection (CONSOLE).
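Several of the interface speeds listed above can be changed from the FortiController CLI (the procedure is covered in Changing FortiController interface configurations). As a rough sketch only: the command path below is an assumption based on the fabric-channel physical-port object that the diagnose commands in this guide reference, and the exact speed keywords depend on your FortiSwitch-ATCA firmware, so verify them against the CLI reference before use:

```
config switch fabric-channel physical-port
edit f1
set speed 1000full
next
end
```

This sketch would reduce the F1 interface from 10Gbps to 1Gbps; repeat the edit for each interface whose speed you want to change.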
About the FortiController-5903C
The FortiController-5903C distributes IPv4 TCP and UDP sessions to multiple FortiGate-5000-series boards
(called workers) over the FortiGate-5144C chassis fabric backplane. The FortiController-5903C includes four front
panel 40Gbps Quad Small form-factor Pluggable + (QSFP+) interfaces (F1 to F4) for connecting to 40Gbps
networks. The FortiController-5903C forms a session-aware load balanced cluster and uses FortiASIC DP
processors to load balance millions of sessions to the cluster, providing up to 40 Gbps of traffic to each cluster
member (each worker). Performance of the cluster shows linear improvement if more workers are added.
Dual dual star clusters can be formed with four FortiController-5903Cs and up to 8 FortiGate-5001D workers.
Each FortiGate-5001D worker can handle up to 40 Gbps of traffic.
Clusters can also be formed with one or two FortiController-5903Cs and up to 12 workers. All of the workers must
be the same model. Currently FortiGate-5001B, FortiGate-5001C, FortiGate-5101C, and FortiGate-5001D
workers are supported. FortiGate-5001C and FortiGate-5001D workers can handle up to 40 Gbps of traffic.
FortiGate-5001B and FortiGate-5101C workers can handle up to 10 Gbps.
The FortiController-5903C can also provide 40Gbps fabric and 1Gbps base backplane channel layer-2 switching
in a dual star architecture.
You should install the FortiController-5903C in a FortiGate-5144C chassis to meet FortiController-5903C power
requirements, to have access to a 40G fabric backplane, for dual dual star architecture and to have enough slots
for the number of workers that the FortiController-5903C can load balance sessions to.
In all ATCA chassis, FortiController-5903Cs are installed in the first and second hub/switch slots (usually slots 1
and 2). A single FortiController-5903C should be installed in slot 1 (but you can install it in slot 2). If you add a
second FortiController-5903C it should be installed in slot 2. In dual dual star mode they are installed in the first
four slots.
FortiController-5903C Front Panel

[Figure: the FortiController-5903C front panel, showing base and fabric network activity LEDs; the RJ-45 console port; the F1 to F4 40Gbps QSFP+ network interfaces; the B1 and B2 10Gbps base channel SFP+ interfaces (heartbeat and management); the MGMT 10/100/1000 copper management interface; the OOS, STA, PWR, and IPM (board position) LEDs; and the retention screws and extraction levers.]
The FortiController-5903C includes the following hardware features:
- One 1Gbps base backplane channel for layer-2 base backplane switching between workers installed in the same chassis as the FortiController-5903C. This base backplane channel includes 13 1Gbps connections to up to 13 other slots in the chassis (slots 2 to 14).
- One 40Gbps fabric backplane channel for layer-2 fabric backplane switching between workers installed in the same chassis as the FortiController-5903C. This fabric backplane channel includes 13 40Gbps connections to up to 13 other slots in the chassis (slots 2 to 14). Speed can be changed to 10Gbps or 1Gbps.
- Four front panel 40Gbps QSFP+ fabric channel interfaces (F1 to F4). In a session-aware load balanced cluster these interfaces are connected to 40Gbps networks to distribute sessions to workers installed in chassis slots 3 to 14. These interfaces can also be split into 4x10G SFP+ interfaces. The MTU size of these interfaces is 9000 bytes.
- Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2) that connect to the base backplane channel. These interfaces are used for heartbeat and management communication between FortiController-5903Cs. Speed can be changed to 1Gbps.
- On-board FortiASIC DP processors to provide high-capacity session-aware load balancing.
- One 1Gbps out of band management ethernet interface (MGMT).
- Internal 64 GByte SSD for storing log messages, DLP archives, the SQL log message database, historic reports, IPS packet archiving, file quarantine, WAN Optimization byte caching, and web caching.
- One RJ-45, RS-232 serial console connection (CONSOLE).
- One front panel USB port.
About the FortiController-5913C
The FortiController-5913C is an Advanced Telecommunications Computing Architecture (ATCA) compliant
session-aware load balancing hub/switch board that distributes IPv4 TCP and UDP sessions to multiple
FortiGate-5000-series boards (called workers) over the FortiGate-5144C chassis fabric backplane. The
FortiController-5913C includes two front panel 100Gbps C form-factor pluggable 2 (CFP2) interfaces (F1 and F2)
for connecting to 100Gbps networks. The FortiController-5913C forms a session-aware load balanced cluster and
uses FortiASIC DP processors to load balance millions of sessions to the cluster, providing up to 40 Gbps of
traffic to each cluster member (each worker). Performance of the cluster shows linear improvement if more
workers are added.
Dual dual star clusters can be formed with four FortiController-5913Cs and up to 8 FortiGate-5001D workers. Each FortiGate-5001D worker can handle up to 40 Gbps of traffic.
Clusters can also be formed with one or two FortiController-5913Cs and up to 12 workers. All of the FortiGate-5000 workers must be the same model. Currently FortiGate-5001B, FortiGate-5001C, FortiGate-5101C, and FortiGate-5001D workers are supported. FortiGate-5001C and FortiGate-5001D workers can handle up to 40 Gbps of traffic. FortiGate-5001B and FortiGate-5101C workers can handle up to 10 Gbps.
The FortiController-5913C can also provide 40Gbps fabric and 1Gbps base backplane channel layer-2 switching
in a dual star architecture.
You should install the FortiController-5913C in a FortiGate-5144C chassis to meet FortiController-5913C power
requirements, to have access to a 40G fabric backplane, for dual dual star architecture and to have enough slots
for the number of workers that the FortiController-5913C can load balance sessions to.
In all ATCA chassis, FortiController-5913Cs are installed in the first and second hub/switch slots (usually slots 1
and 2). A single FortiController-5913C should be installed in slot 1 (but you can install it in slot 2). If you add a
second FortiController-5913C it should be installed in slot 2. In dual dual star mode they are installed in the first
four slots.
FortiController-5913C Front Panel

[Figure: the FortiController-5913C front panel, showing the RJ-45 console port; the USB port; the F1 and F2 100Gbps CFP2 fabric channel network interfaces (data); the B1 and B2 10Gbps base channel SFP+ interfaces (heartbeat and management); the MGMT 10/100/1000 copper management interface; the OOS, STA, PWR, and IPM (board position) LEDs; and the retention screws and extraction levers.]
The FortiController-5913C includes the following hardware features:
- One 1Gbps base backplane channel for layer-2 base backplane switching between workers installed in the same chassis as the FortiController-5913C. This base backplane channel includes 13 1Gbps connections to up to 13 other slots in the chassis (slots 2 to 14).
- One 40Gbps fabric backplane channel for layer-2 fabric backplane switching between workers installed in the same chassis as the FortiController-5913C. This fabric backplane channel includes 13 40Gbps connections to up to 13 other slots in the chassis (slots 2 to 14). Speed can be changed to 10Gbps or 1Gbps.
- Two front panel 100Gbps CFP2 fabric channel interfaces (F1 and F2). In a session-aware load balanced cluster these interfaces are connected to 100Gbps networks to distribute sessions to workers installed in chassis slots 3 to 14. These interfaces can also be split into 10x10G SFP+ interfaces. The MTU size of these interfaces is 9000 bytes.
- Two front panel 10Gbps SFP+ base channel interfaces (B1 and B2) that connect to the base backplane channel. These interfaces are used for heartbeat and management communication between FortiController-5913Cs. Speed can be changed to 1Gbps.
- On-board FortiASIC DP processors to provide high-capacity session-aware load balancing.
- One 1Gbps out of band management ethernet interface (MGMT).
- One RJ-45, RS-232 serial console connection (CONSOLE).
- One front panel USB port.
Getting started and SLBC Basics
This chapter describes connecting and configuring a basic SLBC cluster consisting of one FortiController installed
in chassis slot 1 and three FortiGate workers installed in chassis slots 3, 4, and 5.
Example session-aware load balanced cluster (SLBC)

[Figure: a single FortiController in chassis slot 1 connects the internal network (interfaces fctrl/f1 and fctrl/f3) and management (mgmt). Load balanced traffic reaches the workers in slots 3 to 14 over the fabric backplane; management and control traffic uses the base backplane.]
SLBC licensing
The following sections describe some considerations when licensing an SLBC cluster.
FortiOS Carrier licensing
If the workers in an SLBC cluster will be running FortiOS Carrier, apply the FortiOS Carrier license before
configuring the cluster (and before applying other licenses). Applying the FortiOS Carrier license sets the
configuration to factory defaults, requiring you to repeat steps performed before applying the license.
Licenses (Support, FortiGuard, FortiCloud, FortiClient, FortiToken Mobile, VDOMs)
Register and apply licenses to each worker before adding the worker to the SLBC cluster. This includes
FortiCloud activation, FortiClient licensing, and entering a license key if you purchased more than 10 Virtual
Domains (VDOMS). FortiToken licenses can be added at any time because they are synchronized to all workers.
Certificates
You can also install any third-party certificates on the primary worker before forming the SLBC cluster. Once the cluster is formed, third-party certificates are synchronized to all workers.
Quick Setup
This section contains some high-level steps that guide you through the basics of setting up an example SLBC
cluster consisting of a single FortiController and three workers installed in a FortiGate-5000 chassis.
1. Install the FortiGate-5000 series chassis and connect it to power.
2. Install the FortiController in chassis slot 1.
3. Install the workers in chassis slots 3, 4, and 5.
4. Power on the chassis.
5. Check the chassis, FortiController and worker LEDs to verify that all components are operating normally.
6. Check the FortiSwitch-ATCA release notes and confirm that your FortiController is running the latest supported
firmware. You can download the release notes from the Fortinet Documentation website and the correct firmware
from Fortinet’s Support site (https://support.fortinet.com). Select the FortiSwitch-ATCA product.
7. Log into the CLI of each of the workers and use the following command to set them to FortiController mode:
config system elbc
set mode forticontroller
end
8. From the FortiController GUI Dashboard System Information widget, beside HA Status select Configure.
9. Set Mode to Active-Passive, change the Group ID, and move the b1 and b2 interfaces to the Selected column
and select OK.
Or from the CLI enter the following command:
config system ha
set mode a-p
set groupid 4
set hbdev b1 b2
end
10. You can optionally configure other HA settings.
If you have more than one cluster on the same network, each cluster should have a
different Group ID. Changing the Group ID changes the cluster interface MAC
addresses. It's possible that a Group ID setting will cause a MAC address conflict;
if this happens, select a different Group ID. The default Group ID of 0 is not a good
choice and usually should be changed.
11. Go to Load Balance > Config and add the workers to the cluster by selecting Edit and moving the slots that contain
workers to the Members list.
12. Configure the cluster external management interface so that you can manage the worker configuration. From the
FortiController GUI go to Load Balance > Config and edit the External Management IP/Netmask and
change it to an IP address and netmask for the network that the mgmt interfaces of the FortiController and the
workers are connected to. The External Management IP/Netmask must be on the same subnet as the
FortiController management IP address.
13. Connect FortiController front panel interface 1 (F1 on some models) to the Internet and front panel interface 3 (F3
on some models) to the internal network.
14. The workers see these interfaces as fctrl/f1 and fctrl/f3.
15. Log into the workers using the External Management IP/Netmask and configure the workers to process traffic
between fctrl/f1 and fctrl/f3.
If you need to add a default route to connect to the External Management
IP/Netmask, log into the FortiController CLI and enter the following command:
config route static
edit route 1
set gateway <gateway-ip>
end
Connecting to the FortiController GUI
You can connect to the FortiController GUI by browsing to the FortiController mgmt interface IP address. From
the FortiController GUI you can add workers to the cluster and configure load balancing settings.
By default, you can connect to the FortiController GUI by connecting the mgmt interface to your network and
browsing to https://192.168.1.99.
Connecting to the FortiController command line interface (CLI)
You can connect to the FortiController CLI using the serial cable that came packaged with your FortiController or
an Ethernet connection to the mgmt interface.
To connect to the CLI over an Ethernet network use SSH to connect to the mgmt port (default IP address
192.168.1.99).
To connect to the CLI using a serial console connection
1. Connect the FortiController unit’s Console port to the serial communications (COM) port on your management
computer using a serial cable (or an RS-232 to USB converter).
2. Start the terminal emulation application and configure the following settings:
Bits per second: 9600
Data bits: 8
Parity: None
Stop bits: 1
Flow control: None
3. Press Enter to connect to the CLI.
4. Type a valid administrator account name (such as admin) and press Enter.
5. Type the password for that administrator account and press Enter. (In its default state, there is no password for
the admin account.)
Factory default settings
The FortiController unit ships with a factory default configuration. The default configuration allows you to connect
to and use the FortiController GUI or CLI to configure the FortiController.
To configure the FortiController you should add a password for the admin administrator account, change the
management interface IP address, and, if required, configure the default route for the management interface.
FortiController factory default settings

  Administrator account user name    admin
  Password                           (none)
  MGMT IP/Netmask                    192.168.1.99/24
At any time during the configuration process, if you run into problems, you can reset the FortiController or the
FortiGates to factory default settings and start over. From the CLI enter execute factory-reset.
Managing the cluster
This section describes managing the FortiControllers and workers in an SLBC.
Do not disable or change the configuration of the FortiController or worker
base-mgmt interfaces. These interfaces are used for internal management communication
and other related functions. They are visible from the FortiController and worker GUI
and CLI, but normally you do not need to change them.
Managing the workers (including SNMP, FortiManager)
After the workers have been added to an SLBC you can use the SLBC External Management IP to manage the
primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and
using FortiManager to manage the primary worker. SNMP traps and log messages are also sent from the
primary worker with the External Management IP as their source address. Finally, connections to FortiGuard
for updates, web filtering lookups, and so on all originate from the External Management IP.
Connecting to the external management IP address using a web browser or using other methods like SSH or
telnet connects you to the primary worker (also called the master or the ELBC master). For example, if the
External Management IP address is 10.10.10.1 you can browse to https://10.10.10.1 to connect to the primary
worker GUI. You can connect to the primary worker CLI using ssh [email protected], or telnet
10.10.10.1 and so on as long as allow access settings permit.
Configuration changes made to the primary worker are synchronized to all of the workers in the cluster.
The primary worker SNMP configuration is the same as any FortiGate SNMP configuration. SNMP queries to
the primary worker report on the status of the primary worker only. However, some of the SNMP events (traps)
sent by the primary worker can report HA events, which can indicate when workers enter and leave the cluster.
You can use FortiManager to manage the primary worker and FortiManager does support the primary worker
SLBC configuration. Of course, configuration changes made through FortiManager to the primary worker are
synchronized to the other workers.
You can also manage individual workers, including the primary worker, using the SLBC External Management IP
and a special port number. See Managing the devices in an SLBC with the External Management IP on page 24.

You can also manage any individual worker (including the primary worker) by connecting directly to its mgmt1 or
mgmt2 interface. You can configure these management interfaces when you first configure the worker, before
adding it to the cluster. The mgmt1 and mgmt2 interface settings are not synchronized, so each worker
maintains its own mgmt1 and mgmt2 configuration. You can use the console to configure the mgmt1 and
mgmt2 interfaces after the workers are operating in a cluster.
To get SNMP results from all of the workers in the cluster you can send SNMP queries to each one using their
individual mgmt1 or mgmt2 IP addresses or using the External Management IP address and special port number.
The primary worker SNMP configuration is synchronized to all workers. SNMP traps sent by the primary worker
come from the external management IP address. Individual workers can send traps from their mgmt1 and mgmt2
interfaces.
If you use the External Management IP address for SNMP queries, the FortiController performs network address
translation on the SNMP packets. When the worker sees the SNMP query packets, their source address is set to the
internal management IP: 10.101.10.1 for the FortiController in slot 1 and 10.101.10.16 for the FortiController
in slot 2. You must therefore configure SNMP communities to allow SNMP packets from these source addresses
(or from any source address). For example:
config system snmp community
edit 1
config hosts
edit 1
set ip 10.101.10.1
next
edit 2
set ip 10.101.10.16
end
end
You can manage individual workers using FortiManager, but this is not recommended.
Managing the FortiControllers (including SNMP and FortiManager)
You can manage the primary FortiController using the IP address of its mgmt interface, set up when you first
configured the primary FortiController. You can use this address for GUI access, CLI access, SNMP queries and
FortiManager access.
The only way to remotely manage a backup FortiController is by using the SLBC External Management IP and a
special port number. See Managing the devices in an SLBC with the External Management IP on page 24. You
can also connect to the primary or backup FortiController’s console port.
FortiManager supports managing the primary FortiController. It may take some time after a new FortiController
model is released for FortiManager to support it. Managing backup FortiControllers with FortiManager is not
recommended.
To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your SNMP
manager. You can get this MIB file from the Fortinet support site, in the same location as the current
FortiController firmware (select the FortiSwitch-ATCA product). You also need to configure SNMP settings
(usually on the primary FortiController). The SNMP configuration is synchronized to the backup FortiControllers.
First, enable SNMP access on the mgmt interface. Then from the CLI, configure system information. Make sure
to set status to enable:
config system snmp sysinfo
set contact-info <string>
set description <string>
set location <string>
set status {enable | disable}
set trap-high-cpu-treshold <percentage>
set trap-lowmemory-treshold <percentage>
end
Second, add one or more SNMP communities:
config system snmp community
edit <index_integer>
set events {cpu-high | mem-low | ha-switch | ha-hb-member-up | ha-member-down | hbfail | hbrcv | tkmem-down | tkmem-up}
set name <name_string>
set query-v1-port <port_number>
set query-v1-status {enable | disable}
set query-v2c-port <port_number>
set query-v2c-status {enable | disable}
set status {enable | disable}
set trap-v1-lport <port_number>
set trap-v1-rport <port_number>
set trap-v1-status {enable | disable}
set trap-v2c-lport <port_number>
set trap-v2c-rport <port_number>
set trap-v2c-status {enable | disable}
end
FortiControllers can send SNMP traps for the following events:

- cpu-high: CPU usage too high
- mem-low: available memory too low
- ha-switch: cluster status change
- ha-hb-member-up: FortiController (cluster member) up
- ha-member-down: FortiController (cluster member) down
- hbfail: heartbeat failure
- hbrcv: heartbeat received
- tkmem-down: worker (trunk member) down
- tkmem-up: worker (trunk member) up
Managing the devices in an SLBC with the External Management IP
The External Management IP address can be used to manage each of the individual devices in an SLBC by adding a
special port number. The special port number begins with the standard port number for the protocol you are using,
followed by two digits that identify the chassis number and slot number. The port number can be calculated
using the following formula:
service_port x 100 + (chassis_id - 1) x 20 + slot_id
Where:

- service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, and so on).
- chassis_id is the chassis ID specified as part of the FortiController HA configuration and can be 1 or 2.
- slot_id is the number of the chassis slot.
By default, chassis 1 is the primary chassis and chassis 2 is the backup chassis.
However, the actual primary chassis is the one that contains the primary
FortiController, and the primary FortiController can change independently of the
chassis number. The chassis_id depends on the chassis number, not on whether
the chassis contains the primary FortiController.
Some examples:

- HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302; browse to https://172.20.120.100:44302
- HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024; browse to http://172.20.120.100:8024
- HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310; browse to https://172.20.120.100:44310
- HTTPS, chassis 2, slot 10: 443 x 100 + (2 - 1) x 20 + 10 = 44300 + 20 + 10 = 44330; browse to https://172.20.120.100:44330
- SNMP query port, chassis 1, slot 4: 161 x 100 + (1 - 1) x 20 + 4 = 16100 + 0 + 4 = 16104
- To use telnet to connect to the CLI of the worker in chassis 2, slot 4: telnet 172.20.120.100 2324
- To use SSH to connect to the CLI of the worker in chassis 1, slot 5: ssh [email protected] -p2205
Single chassis or chassis 1 special management port numbers
Slot number   HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
Slot 1        8001        44301         2301          2201       16101
Slot 2        8002        44302         2302          2202       16102
Slot 3        8003        44303         2303          2203       16103
Slot 4        8004        44304         2304          2204       16104
Slot 5        8005        44305         2305          2205       16105
Slot 6        8006        44306         2306          2206       16106
Slot 7        8007        44307         2307          2207       16107
Slot 8        8008        44308         2308          2208       16108
Slot 9        8009        44309         2309          2209       16109
Slot 10       8010        44310         2310          2210       16110
Slot 11       8011        44311         2311          2211       16111
Slot 12       8012        44312         2312          2212       16112
Slot 13       8013        44313         2313          2213       16113
Slot 14       8014        44314         2314          2214       16114
Chassis 2 special management port numbers
Slot number   HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
Slot 1        8021        44321         2321          2221       16121
Slot 2        8022        44322         2322          2222       16122
Slot 3        8023        44323         2323          2223       16123
Slot 4        8024        44324         2324          2224       16124
Slot 5        8025        44325         2325          2225       16125
Slot 6        8026        44326         2326          2226       16126
Slot 7        8027        44327         2327          2227       16127
Slot 8        8028        44328         2328          2228       16128
Slot 9        8029        44329         2329          2229       16129
Slot 10       8030        44330         2330          2230       16130
Slot 11       8031        44331         2331          2231       16131
Slot 12       8032        44332         2332          2232       16132
Slot 13       8033        44333         2333          2233       16133
Slot 14       8034        44334         2334          2234       16134
Example single chassis management IP addresses and port numbers
Use the special port numbers below to manage the devices in an SLBC that includes one or two FortiControllers
and multiple workers installed in one chassis with External Management IP address 10.10.10.1:
- To manage the primary worker using HTTPS, browse to: https://10.10.10.1
- To manage the FortiController in slot 1 (usually the primary FortiController) using HTTPS, browse to: https://10.10.10.1:44301
- To manage the FortiController in slot 2 (usually the backup FortiController) using HTTPS, browse to: https://10.10.10.1:44302. This is the only way to use HTTPS to manage the backup FortiController.
- To manage the worker in slot 4 using HTTPS, browse to: https://10.10.10.1:44304
- To manage the worker in slot 14 using HTTP, browse to: http://10.10.10.1:8014
- To manage the worker in slot 12 using SSH, enter a command similar to: ssh [email protected] -p2212
- To manage the worker in slot 5 using telnet, enter a command similar to: telnet 10.10.10.1 2305
- To use SNMP to query the FortiController in slot 1, use port 16101 in the SNMP query.
- To use SNMP to query the FortiController in slot 2, use port 16102 in the SNMP query.
- To use SNMP to query a worker in slot 7, use port 16107 in the SNMP query.
Example management IP addresses and port numbers for two chassis and two or four
FortiControllers
Use the special port numbers below to manage the devices in an SLBC that includes two or four FortiControllers
and multiple workers installed in two chassis with External Management IP address 10.10.10.1:
- To manage the primary worker using HTTPS, browse to the External Management IP address (this worker may be in chassis 1 or chassis 2): https://10.10.10.1
- To manage the FortiController in chassis 1 slot 1 (usually the primary FortiController) using HTTPS, browse to: https://10.10.10.1:44301
- To manage the FortiController in chassis 1 slot 2 using HTTPS, browse to: https://10.10.10.1:44302
- To manage the FortiController in chassis 2 slot 1 using HTTPS, browse to: https://10.10.10.1:44321
- To manage the FortiController in chassis 2 slot 2 using HTTPS, browse to: https://10.10.10.1:44322
- To manage the worker in chassis 1 slot 10 using HTTPS, browse to: https://10.10.10.1:44310
- To manage the worker in chassis 2 slot 5 using HTTP, browse to: http://10.10.10.1:8025
- To manage the worker in chassis 2 slot 13 using HTTP, browse to: http://10.10.10.1:8033
- To manage the worker in chassis 1 slot 8 using SSH, enter a command similar to: ssh [email protected] -p2208
- To manage the worker in chassis 2 slot 5 using telnet, enter a command similar to: telnet 10.10.10.1 2325
- To use SNMP to query the FortiController in chassis 1 slot 1, use port 16101 in the SNMP query.
- To use SNMP to query the FortiController in chassis 2 slot 2, use port 16122 in the SNMP query.
- To use SNMP to query a worker in chassis 1 slot 7, use port 16107 in the SNMP query.
- To use SNMP to query a worker in chassis 2 slot 11, use port 16131 in the SNMP query.
Monitoring the cluster
From the FortiController GUI you can go to Load Balance > Monitor to view the amount of traffic processed by
each worker according to each worker’s slot number. The traffic log displays the amount of data processed by
each worker and the Session Count displays the current number of half-sessions being processed by each worker.
(The actual number of sessions is half the number of half-sessions.) You can display the total session count or the
pinhole session count.
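The half-session convention is easy to get wrong when reading the monitor. This small helper (illustrative only, not part of any Fortinet API) converts the per-slot half-session readings described above into approximate full-session counts:

```python
# The monitor reports half-sessions per worker slot; the actual number of
# sessions is half the number of half-sessions.

def sessions_from_half_sessions(half_sessions_by_slot: dict[int, int]) -> dict[int, int]:
    """Return the approximate full-session count for each worker slot."""
    return {slot: half // 2 for slot, half in half_sessions_by_slot.items()}

# Hypothetical readings for workers in slots 3, 4, and 5:
print(sessions_from_half_sessions({3: 20000, 4: 18400, 5: 19702}))
# {3: 10000, 4: 9200, 5: 9851}
```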
Worker communication with FortiGuard
Individual workers need to be able to communicate with FortiGuard for antivirus updates, IPS updates,
application control updates, FortiGuard web filtering lookups and other FortiGuard services. You can do this by
adding a default route to the worker elbc-mgmt VDOM that points at the FortiController internal management
interface. This causes each worker to route Internet-bound management traffic over the internal management
network. The FortiController then forwards this traffic to the Internet using its default route.
When you add the default route to the primary worker elbc-mgmt VDOM it is synchronized to all of the workers in
the cluster.
config vdom
edit elbc-mgmt
config router static
edit 1
set device base-mgmt
set gateway 10.101.10.1
next
end
end
The gateway address is on the same subnet as the FortiController internal management network, so the default
gateway address for this route is 10.101.10.1. If you change the FortiController internal management network,
you should also change the gateway for this default route. For example, if you change the internal management
network address to 20.202.20.0, the gateway for this route would be 20.202.20.1.
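The convention above can be sketched with the standard library: the gateway is the first host address of the internal management network. This is my own illustration (the /24 prefix is an assumption; the guide only gives the network addresses):

```python
# Derive the elbc-mgmt default-route gateway from the FortiController
# internal management network, following the guide's examples
# (10.101.10.0 -> 10.101.10.1, 20.202.20.0 -> 20.202.20.1).
import ipaddress

def elbc_mgmt_gateway(network: str) -> str:
    """Return the first usable host address of the given network."""
    return str(next(ipaddress.ip_network(network).hosts()))

print(elbc_mgmt_gateway("10.101.10.0/24"))  # 10.101.10.1
print(elbc_mgmt_gateway("20.202.20.0/24"))  # 20.202.20.1
```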
Basic cluster NAT/Route mode configuration
When all of the devices have been added to the cluster, configuring the cluster is just like configuring a
standalone FortiGate unit operating with multiple VDOMs. When you first log into the primary worker you are
logging into a FortiGate unit in multiple VDOM mode.
You can either log into the FortiController GUI and from there go to Load Balance > Status and connect to the
worker GUI or you can connect directly to the worker primary unit using the External Management IP/Netmask.
No additional changes to the FortiController configuration are required. However, you can tune the FortiController
configuration; see Changing load balancing settings on page 45.
In the load balanced cluster the workers are configured with two VDOMs:
- elbc-mgmt includes the mgmt interface and is used for management traffic. When you connect to the mgmt interface you connect to this VDOM. Normally you do not have to change the configuration of this VDOM.
- root includes the fctrl/f1 to fctrl/f8 interfaces. Configure this VDOM to allow traffic through the cluster and to apply UTM and other FortiOS features to the traffic.
By default the root VDOM operates in NAT/Route mode. You can add more VDOMs that operate in NAT/Route or
Transparent mode. If you add more VDOMs you must add some of the fctrl/f1 to fctrl/f8 interfaces to each
VDOM. You can also add VLAN interfaces and add these interfaces to VDOMs.
FortiController interfaces other than the fctrl/f1 to fctrl/f8 interfaces are visible from the
GUI and CLI. In a session-aware load balanced cluster these interfaces are not used
for network traffic.
Using the GUI to configure NAT/Route mode
To configure the session-aware load balanced cluster DNS settings
1. Log into the FortiController GUI.
2. Go to Load Balance > Status and select the Config Master icon beside the primary worker, which is always the
top entry in the list.
3. Log into the worker GUI.
You can also connect to the worker GUI by browsing directly to the External
Management IP/Netmask.
4. Go to System > Network > DNS and configure DNS settings as required.
To configure an interface
1. Go to Virtual Domains > root.
2. Go to System > Network > Interfaces and Edit an interface (for example, fctrl/f1).
3. Configure the interface as required, for example set the Addressing Mode to Manual and set the IP/Netmask
to 172.20.120.10/255.255.255.0.
4. Select OK.
5. Repeat for all interfaces connected to networks.
To add a default route
1. Go to Router > Static, select Create New, and configure the default route:

  Destination IP/Mask   0.0.0.0/0.0.0.0
  Device                fctrl/f1
  Gateway               172.20.120.2

2. Select OK.
To allow users on the internal network to connect to the Internet
1. Go to Policy > Policy > Policy and select Create New to add the following security policy:

  Policy Type           Firewall
  Policy Subtype        Address
  Incoming Interface    fctrl/f2
  Source Address        all
  Outgoing Interface    fctrl/f1
  Destination Address   all
  Schedule              always
  Service               ALL
  Action                ACCEPT
2. Select Enable NAT and Use Destination Interface Address.
3. Select other security policy options as required (for example, add Security Profiles).
4. Select OK.
Using the CLI to configure NAT/Route mode
To use the CLI to configure NAT/Route mode
1. Connect to the CLI using a serial cable connected to the Console port of the primary worker, which is usually
the worker in slot 3.
2. You can also use SSH or Telnet to connect to the External Management IP/Netmask.
3. Configure the primary and secondary DNS server IP addresses.
config global
config system dns
set primary <dns-server_ip>
set secondary <dns-server_ip>
end
end
4. Connect to the root VDOM.
config vdom
edit root
5. From within the root VDOM, configure the interfaces.
config system interface
edit fctrl/f1
set ip 172.20.120.10 255.255.255.0
next
edit fctrl/f2
set ip 10.31.101.40 255.255.255.0
end
6. Add the default route.
config router static
edit 1
set device fctrl/f1
set gateway 172.20.120.2
end
7. Add a security policy.
config firewall policy
edit 1
set srcintf fctrl/f2
set srcaddr all
set dstintf fctrl/f1
set dstaddr all
set action accept
set schedule always
set service ALL
set nat enable
end
Primary unit (master) selection
SLBC clusters typically consist of multiple FortiControllers and multiple workers. Cluster operation requires the
selection of a primary (or master) FortiController and a primary (or master) worker. SLBC primary unit selection is
similar to FGCP HA primary unit selection. This section provides some notes about how the primary units are
selected.
Selecting the primary FortiController (and the primary chassis)
In a single-chassis cluster the FortiController with the longest uptime becomes the primary unit. If you start
configuring the cluster by configuring the FortiController in chassis slot 1 first, this FortiController would typically
be the primary unit when the cluster is first up and running. If the FortiController in chassis slot 1 is restarted, its
uptime is lower, so the FortiController in chassis slot 2 becomes the primary unit.

If the FortiControllers all have the same uptime, the FortiController with the lowest serial number becomes the
primary unit.
In a two-chassis cluster, the FortiController with the highest uptime also becomes the primary FortiController,
making the chassis it is in the primary chassis.
The other factor that contributes to selecting the primary chassis is comparing the number of active workers. The
chassis with the most active workers becomes the primary chassis and the primary FortiController in that chassis
becomes the primary FortiController for the cluster.
The primary FortiController is the one you log into when you connect to the FortiController management IP
address. Configuration changes made to this FortiController are synchronized to the other FortiController(s) in
the cluster.
You can change the HA Device Priority of individual FortiControllers. The FortiController with the highest device
priority becomes the primary unit. The device priority is not synchronized among all FortiControllers in the cluster
so if you want one FortiController to become the primary unit you can set its device priority higher.
Just changing the device priority will not cause the cluster to select a new primary unit. The FortiController with
the highest device priority will become the primary unit the next time the cluster negotiates to select a new
primary FortiController. You can force the cluster to renegotiate by selecting the Enable Override configuration
option. This will cause the cluster to re-negotiate more often. The Enable Override setting is synchronized to all
FortiControllers in a cluster.
If it doesn’t matter which FortiController becomes the primary unit there is no need to adjust device priorities or
select Enable Override. Selecting Enable Override can cause the cluster to negotiate more often, potentially
disrupting traffic.
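The selection criteria above can be summarized as an ordered comparison. The following is a simplified model of that ordering, not Fortinet code; the exact precedence the firmware applies may differ, and this sketch assumes: more active workers in the chassis wins, then higher device priority (when override is enabled), then longer uptime, then lower serial number:

```python
# Simplified model of SLBC primary FortiController selection.
from dataclasses import dataclass

@dataclass
class FortiController:
    serial: str
    uptime: int          # seconds
    device_priority: int
    active_workers: int  # active workers in this unit's chassis

def select_primary(units: list[FortiController], override: bool = False) -> FortiController:
    def rank(u: FortiController):
        # Device priority only matters when override is enabled (assumption).
        prio = u.device_priority if override else 0
        # Negate the first three keys so higher values win; the serial
        # string breaks remaining ties in ascending order.
        return (-u.active_workers, -prio, -u.uptime, u.serial)
    return min(units, key=rank)

a = FortiController("FT513B3912000051", uptime=90000, device_priority=128, active_workers=3)
b = FortiController("FT513B3912000029", uptime=86000, device_priority=200, active_workers=3)
print(select_primary([a, b]).serial)                 # a wins on uptime
print(select_primary([a, b], override=True).serial)  # b wins on priority
```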
Selecting the primary worker
There are two types of primary worker: the ELBC master and the config master.
- The config master is the worker that you log into using the cluster External Management IP. All configuration changes made to this worker are synchronized to the other workers in the cluster. The configuration of this worker is compared to the configurations of the other workers in the cluster by comparing their configuration file checksums. If the checksums are different, the configuration of the config master is copied to the worker or workers that are out of sync.
- The ELBC master is the worker that is considered the master by the FortiControllers in the cluster.
You can determine which worker is the ELBC master and which is the config master from the FortiController
Load Balance > Status GUI page. Usually the same worker has both primary roles, but not always. The
ELBC master is the active worker in the lowest slot number. The config master is the in-sync worker with
the lowest slot number. If the workers are not in sync, the worker with the highest uptime becomes the
config master.
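The two roles described above can be sketched as follows. This is my own model of the stated rules, not Fortinet code, and the worker fields are illustrative:

```python
# Sketch of primary-worker role selection: the ELBC master is the active
# worker in the lowest slot; the config master is the in-sync worker in
# the lowest slot, falling back to the highest-uptime worker when no
# worker is in sync.
from dataclasses import dataclass

@dataclass
class Worker:
    slot: int
    active: bool
    in_sync: bool
    uptime: int  # seconds

def elbc_master(workers: list[Worker]) -> Worker:
    return min((w for w in workers if w.active), key=lambda w: w.slot)

def config_master(workers: list[Worker]) -> Worker:
    in_sync = [w for w in workers if w.in_sync]
    if in_sync:
        return min(in_sync, key=lambda w: w.slot)
    return max(workers, key=lambda w: w.uptime)

workers = [Worker(3, False, False, 500), Worker(4, True, True, 900), Worker(5, True, True, 800)]
print(elbc_master(workers).slot)    # 4 (slot 3 is not active)
print(config_master(workers).slot)  # 4 (lowest in-sync slot)
```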
Adding and removing workers
You can add a new worker to a functioning cluster at any time. When the worker is added, the FortiController
starts load balancing sessions to it. The new worker must be running the same firmware build as the workers
already in the cluster. However, its configuration does not have to match, because when it is set to forticontroller
mode its configuration is reset to factory defaults. When the worker joins the cluster, its configuration is
synchronized with the cluster configuration.
You can also remove a worker from a cluster at any time simply by powering down and removing the worker from
the chassis. The cluster detects that the worker is removed and will stop sending sessions to it. If a worker fails or
is removed the sessions that the worker is processing are lost. However, the cluster load balances new sessions
to the remaining workers in the cluster.
If you re-install the missing worker the cluster will detect it and start load balancing sessions to it.
If nat-source-port is set to running-slots, sessions may be lost or interrupted
when you remove or add workers. The full command syntax is:
config load-balance settings
set nat-source-port running-slots
end
To add one or more new workers
1. From the FortiController GUI go to Load Balance > Config, edit the membership list and add the slot or slots
that you will install the new workers in to the members list.
2. You can also use the following FortiController CLI command (for example, to add workers to slots 8 and 9):
config load-balance settings
config slots
edit 8
next
edit 9
end
3. Insert the new workers into their slots.
4. Connect to each worker’s CLI and enter the following command:
config system elbc
set mode forticontroller
end
The worker restarts in load balance mode and joins the cluster.
5. To verify that a worker has joined the cluster, from the FortiController GUI go to Load Balance > Status and
verify that the worker appears in the correct chassis slot.
To remove a worker from a cluster
1. From the FortiControllerGUI go to Load Balance > Config, edit the membership list and move the slot or slots
that contain the workers to be removed to the available slots list.
You can also use the following FortiController CLI command (for example, to remove the workers in slots 8
and 9):
config load-balance settings
config slots
delete 8
delete 9
end
2. Remove the workers from their slots.
3. To verify that a worker has been removed from the cluster, from the FortiController GUI go to Load Balance >
Status and verify that the worker does not appear in the chassis and that its chassis slot appears to be empty.
Adding link aggregation (LACP) to an SLBC cluster (FortiController trunks)
Configuring LACP interfaces on an SLBC cluster allows you to increase throughput from a single network by
combining two or more physical FortiController interfaces into a single aggregated interface, called a
FortiController trunk. You configure LACP interfaces from the FortiController CLI or GUI. LACP interfaces appear
on the worker GUI and CLI as single FortiController trunk interfaces, and you can create routes, firewall policies, and
so on for them just like for a normal physical interface.
It is possible to add LACP and other aggregated interfaces from the worker GUI or
CLI. However, you should not do this, because these aggregated interfaces are not
recognized by the FortiController and will not function properly.
After combining two FortiController front panel interfaces into an LACP interface, the
two front panel interfaces may continue to appear on the worker GUI and CLI.
However, you should not configure policies, routes, or other options for these
interfaces.
To add an aggregated interface
1. Log into the primary FortiController GUI or CLI.
2. Go to Switch > Fabric Channel and select New Trunk.
3. Enter a Name for the aggregate interface.
4. Set the mode to LACP Passive or LACP Active.
Do not select FortiTrunk. This option is not supported.
Selecting LACP Passive creates an 802.3ad passive mode aggregate interface. Selecting LACP Active
creates an 802.3ad Active mode aggregate interface. Static mode means the aggregate does not send or
receive LACP control messages.
5. Move the FortiController front panel interfaces that you want to add to the aggregate interface to the Selected
column and select OK to add the interface.
From the FortiController CLI:
config switch fabric-channel trunk
edit f1-f2
set port-select-criteria {dst-ip | src-dst-ip | src-ip}
set mode {static | lacp-active | lacp-passive}
set lacp-speed {fast | slow}
set member-withdrawl-behavior {block | forward}
set min-links <0-8>
set members <members>
end
6. Log into the primary worker GUI or CLI.
7. From the GUI go to Global > Network > Interfaces.
You should see the LACP interface that you added in the interface list. Its Type as listed on the interface
page should be fctrl trunk. When you edit the interface the type is FortiController Trunk. You can configure
the interface as required, adding an IP address and so on.
Aggregate interfaces are added to the root VDOM by default. You can move them to other VDOMs as required.
Once in the correct VDOM, you can create policies and other configuration objects that reference the aggregate
interface.
Adding VLANs
You can add VLANs to FortiController interfaces from the worker GUI. No FortiController configuration changes
are required. Network equipment connected to the physical FortiController interfaces that contain VLANs must be
able to pass and handle VLAN traffic.
To add a VLAN to a FortiController interface
1. Log into the primary worker GUI or CLI.
2. From the GUI go to System > Network > Interface.
3. Select Create New.
4. Add a Name and set the type to VLAN.
5. Set Interface to one of the FortiController front panel interfaces.
6. Add a VLAN ID and configure the rest of the interface settings as required.
7. Select OK to save the VLAN interface.
Changing FortiController interface configurations
This section shows how to change FortiController interface speeds and split FortiController-5903C and
FortiController-5913C ports.
Changing FortiController-5103B interface speeds
To change front panel B1 and B2 interface speeds, from the FortiController-5103B GUI go to Switch > Base
Channel and edit the b1 or b2 interface. Set the Speed to 10Gbps Full-duplex or 1Gbps Full-duplex and select
OK. From the CLI, enter the following command to change the speed of the B1 port.
config switch base-channel physical-port
edit b1
set speed {10000full | 1000full}
end
To change front panel F1 to F8 interface speeds, from the FortiController-5103B GUI go to Switch > Fabric
Channel and edit an f1 to f8 interface. Set the Speed to 10Gbps Full-duplex or 1Gbps Full-duplex and select OK.
From the CLI, enter the following command to change the speed of the F6 port.
config switch fabric-channel physical-port
edit f6
set speed {10000full | 1000full}
end
To change backplane fabric channel interface speeds, from the FortiController-5103B GUI go to Switch >
Fabric Channel and edit a slot-1/2 to slot-14 interface. Set the Speed to 10Gbps Full-duplex or 1Gbps Full-duplex
and select OK. From the CLI, enter the following command to change the speed of the slot-4 port.
config switch fabric-channel physical-port
edit slot-4
set speed {10000full | 1000full}
end
Changing FortiController-5903C and FortiController-5913C interface speeds
To change front panel B1 and B2 interface speeds, from the GUI go to Switch > Base Channel and edit the b1
or b2 interface. Set the Speed to 10Gbps Full-duplex or 1Gbps Full-duplex and select OK. From the CLI, enter the
following command to change the speed of the B1 port.
config switch base-channel physical-port
edit b1
set speed {10000full | 1000full}
end
To change backplane fabric channel interface speeds, from the GUI go to Switch > Fabric Channel and edit a
slot-1/2 to slot-14 interface. Set the Speed to 40Gbps Full-duplex, 10Gbps Full-duplex, or 1Gbps Full-duplex and
select OK. From the CLI, enter the following command to change the speed of the slot-4 port.
config switch fabric-channel physical-port
edit slot-4
set speed {40000full | 10000full | 1000full}
end
Splitting FortiController-5903C and FortiController-5913C front panel network interfaces
You can use the following command to split all of the FortiController-5903C F1 to F4 or the FortiController-5913C
F1 and F2 front panel network interfaces into 10G ports:
config system global
set fabric-front-port-10g-mode enable
end
In split mode, each FortiController-5903C 40Gbps interface is split into four 10Gbps interfaces, for a total of 16
10Gbps interfaces. The interfaces are named fctrl1/f1-1, fctrl1/f1-2, fctrl1/f1-3, fctrl1/f1-4, fctrl1/f2-1, and so
on.
In split mode, each FortiController-5913C 100Gbps interface is split into ten 10Gbps interfaces, for a total of 20
10Gbps interfaces. The interfaces are named fctrl1/f1-1, fctrl1/f1-2, fctrl1/f1-3, fctrl1/f1-4, fctrl1/f1-5, ..., fctrl1/f2-1, fctrl1/f2-2, and so on.
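The split-mode arithmetic and naming pattern above can be sketched in a few lines. This is an illustrative helper, not a Fortinet tool; only the fctrl1/fN-M naming pattern is taken from the text.

```python
# Sketch: enumerate the 10G interface names produced by split mode,
# following the fctrl1/fN-M pattern described above.
def split_interface_names(front_ports, splits_per_port):
    """Return the split 10G interface names for one FortiController."""
    return [
        f"fctrl1/f{port}-{sub}"
        for port in range(1, front_ports + 1)
        for sub in range(1, splits_per_port + 1)
    ]

# FortiController-5903C: 4 x 40Gbps ports, each split into 4 x 10Gbps
names_5903c = split_interface_names(4, 4)    # 16 interfaces
# FortiController-5913C: 2 x 100Gbps ports, each split into 10 x 10Gbps
names_5913c = split_interface_names(2, 10)   # 20 interfaces
```

The totals match the text: 16 interfaces for the FortiController-5903C and 20 for the FortiController-5913C.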
FGCP to SLBC migration
This section describes how to convert a FortiGate Clustering Protocol (FGCP) virtual cluster (with VDOMs) to an
SLBC cluster. The conversion involves replicating the VDOM, interface, and VLAN configuration of the FGCP
cluster on the SLBC cluster primary worker, then backing up the configuration of each FGCP cluster VDOM. Each
of the VDOM configuration files is manually edited to adjust interface names among other things. Then these
modified VDOM configuration files are restored to the corresponding SLBC cluster primary worker VDOMs.
Only VDOM configurations are migrated. You have to manually configure primary worker management and
global settings.
This section describes general conversion steps and does not include configuration specifics, CLI syntax, or GUI
procedures.
Assumptions
- The FGCP cluster and the SLBC workers must be running the same firmware version. If required, you can upgrade the FGCP cluster or SLBC worker firmware before or after the conversion.
- This example assumes that VDOMs are enabled on the FGCP cluster. The FGCP cluster must have VDOMs enabled for this conversion to work because SLBC cluster workers have VDOMs enabled. You can transfer the configuration from one VDOM to another, but you can't transfer the configuration of an FGCP cluster without VDOMs enabled to a VDOM in an SLBC cluster worker. (If your FGCP cluster does not have multiple VDOMs enabled, you could enable VDOMs on the FGCP cluster and add all configuration elements to the root VDOM, but this would essentially mean re-configuring the FGCP cluster, in which case it would make more sense to re-configure the SLBC workers.)
- Both clusters are operating; once the SLBC workers are configured you can divert traffic to the SLBC cluster and then take the FGCP cluster out of service.
- The FGCP cluster units do not have to be the same model as the SLBC cluster workers.
- The SLBC workers have been registered and licensed.
Conversion steps
1. Add VDOM(s) to the SLBC primary worker with names that match those of the FGCP cluster.
2. Map FGCP cluster interface names to SLBC primary worker interface names. For example, you could map the
FGCP cluster port1 and port2 interfaces to the SLBC primary worker fctl/f1 and fctl/f2 interfaces. You can also
include aggregate interfaces in this mapping, and you can map FGCP cluster interfaces to SLBC trunks.
3. Add interfaces to the SLBC primary worker VDOMs according to your mapping. This includes moving SLBC
physical interfaces into the appropriate VDOMs, creating aggregate interfaces, and creating SLBC trunks if
required.
4. Add VLANs to the SLBC primary worker that match VLANs in the FGCP cluster. They should have the same
names as the FGCP VLANs, be added to the corresponding SLBC VDOMs and interfaces, and have the same
VLAN IDs.
5. Add inter-VDOM links to the SLBC primary worker that match the FGCP cluster.
6. Back up the configuration of each FGCP cluster VDOM.
7. Back up the configuration of each SLBC primary worker VDOM.
8. Use a text editor to replace the first four lines of each FGCP cluster VDOM configuration file with the first four
lines of the corresponding SLBC primary worker VDOM configuration file. Here are example lines from an SLBC
primary worker VDOM configuration file:
#config-version=FG-5KB-5.02-FW-build670-150318:opmode=0:vdom=1:user=admin
#conf_file_ver=2306222306838080295
#buildno=0670
#global_vdom=0:vd_name=VDOM1
9. With a text editor, edit each FGCP cluster VDOM configuration file and replace all FGCP cluster interface names
with the corresponding SLBC worker interface names according to the mapping you created in step 2.
10. Set up a console connection to the SLBC primary worker to check for errors during the following steps.
11. From the SLBC primary worker, restore each FGCP cluster VDOM configuration file to each corresponding SLBC
primary worker VDOM.
12. Check the following on the SLBC primary worker:
- Make sure set type fctrl-trunk is enabled for SLBC trunk interfaces.
- Enable the global and management VDOM features that you need, including SNMP, logging, connections to FortiManager, FortiAnalyzer, and so on.
- If there is a FortiController in chassis slot 2, make sure the worker base2 interface status is up.
- Remove snmp-index entries for each interface.
- Since you can manage the workers from the FortiController, you can remove management-related configurations that use the worker mgmt1 and mgmt2 interfaces (logging, SNMP, admin access, and so on) if you are not going to use these interfaces for management.
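The file edits in steps 8 and 9 can be sketched as a short script. This is a hedged illustration, not a Fortinet tool: it assumes each VDOM configuration file is plain text with a four-line header and that interface names appear in double quotes in the body; convert_vdom_config and interface_map are names invented for this sketch.

```python
def convert_vdom_config(fgcp_text, slbc_text, interface_map):
    """Splice the SLBC header onto the FGCP VDOM body and rename interfaces."""
    slbc_header = slbc_text.splitlines()[:4]      # step 8: first four lines
    body = "\n".join(fgcp_text.splitlines()[4:])
    # Step 9: rename interfaces. Replace longer names first so that,
    # for example, "port10" is not partially rewritten as "port1".
    for old in sorted(interface_map, key=len, reverse=True):
        body = body.replace(f'"{old}"', f'"{interface_map[old]}"')
    return "\n".join(slbc_header) + "\n" + body
```

You would run this once per VDOM configuration file, using the mapping created in step 2, and then restore the resulting files as described in step 11.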
Updating SLBC firmware
After you have registered your FortiControllers and workers you can download the most recent FortiController and
FortiOS firmware from the support web site https://support.fortinet.com. Select the FortiSwitch-ATCA product.
You can upgrade the worker firmware from the primary worker GUI or CLI. Upgrading the primary worker firmware
also upgrades the firmware of all of the FortiControllers in the cluster. In a two chassis configuration the process
is a bit more complex. See Upgrading a two chassis cluster on page 39.
You can upgrade the FortiController firmware from the primary FortiController GUI or CLI. Upgrading the primary
FortiController firmware also upgrades the firmware of all of the FortiControllers in the cluster in one operation.
This also works for a two chassis cluster.
When you upgrade workers or FortiControllers, the cluster upgrades the backup workers or FortiControllers first.
Once all of the backup units have been upgraded, the primary worker or FortiController is upgraded.
Worker and FortiController firmware can be upgraded independently as long as the firmware running your
FortiControllers supports the FortiOS firmware that you are upgrading the workers to. If you need to upgrade both
the workers and the FortiControllers you should upgrade the workers first unless stated otherwise in the
FortiSwitch ATCA release notes.
Upgrading FortiController and worker firmware may briefly interrupt network traffic, so
if possible this should be done during a quiet period.
Upgrading a single-chassis cluster
To upgrade single-chassis cluster worker firmware
This procedure upgrades the firmware running on all of the workers in a single operation.
1. Log into the primary worker GUI.
2. From the Global System Information dashboard widget beside Firmware Version select Update.
3. Select the new firmware file and select OK.
The firmware image file is uploaded, verified, and then installed on all of the workers. After a few minutes the
cluster continues operating, with the workers running the new firmware build.
You can confirm that all of the workers are back in the cluster from the FortiController Load Balance > Status
page.
To upgrade single-chassis cluster FortiController firmware
This procedure upgrades the firmware running on all of the FortiControllers in a single operation.
1. Log into the FortiController GUI.
2. From the System Information dashboard widget beside Firmware Version select Update.
3. Select the new firmware file and select OK.
The firmware image file is uploaded, verified, and then installed on all of the FortiControllers. After a few minutes
the cluster continues operating.
You can confirm that all of the FortiControllers and workers are back in the cluster and operating normally
from the Load Balance > Status page of the FortiController GUI.
Upgrading a two chassis cluster
To upgrade two chassis cluster worker firmware
Use the following multi-step process to upgrade worker firmware in a two-chassis cluster.
1. Log into the primary worker GUI.
2. From the Global System Information dashboard widget beside Firmware Version select Update.
3. Select the new firmware image file and select OK.
The firmware image file is uploaded, verified, and then installed on all of the workers in the backup chassis.
The primary chassis continues processing traffic.
From console connections to the workers in the primary chassis you can see messages indicating that they
are waiting for their chassis to become the backup chassis so that they can upgrade their firmware.
From console connections to the workers in the backup chassis you can see them upgrade their firmware and
restart.
4. Once all of the workers in the backup chassis have upgraded their firmware and restarted, log into the primary
FortiController CLI and enter the following command to force the primary chassis to become the backup chassis:
diagnose system ha force-slave-state by-chassis <delay> <chassis-number>
For example, if chassis 1 is the primary chassis, enter the following command:
diagnose system ha force-slave-state by-chassis 10 1
This command waits 10 seconds, then forces chassis 1 to become the backup chassis, resulting in chassis 2
becoming the primary chassis.
The workers in the new primary chassis process all network traffic, while the workers in the new backup chassis
upgrade their firmware.
The workers in the primary chassis can wait up to 20 minutes to become the backup chassis and upgrade
their firmware. If the primary chassis does not become the backup chassis within 20 minutes, all worker
firmware is restored to the original version.
5. After the firmware on all of the workers is upgraded you should clear the force slave state using the following
command:
diagnose system ha force-slave-state by-chassis clear
6. You can confirm that all of the workers are back in the cluster from the FortiController Load Balance > Status
page.
To upgrade two chassis cluster FortiController firmware
This procedure upgrades the firmware running on the FortiControllers in a single operation.
1. Log into the primary FortiController GUI.
2. From the System Information dashboard widget beside Firmware Version select Update.
3. Select the new firmware file and select OK.
The firmware image file is uploaded and verified then installed on all the FortiControllers. After a few minutes
the cluster continues operating.
You can confirm that all of the FortiControllers and workers are back in the cluster and operating normally
from the FortiController Load Balance > Status page.
Verifying the configuration and the status of the units in the cluster
Use the following command from the FortiController CLI to verify that the primary FortiController can
communicate with all of the workers and to show the status of each worker. For example, for a cluster that
includes three workers, the command output would be the following if the cluster is operating properly:
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
  Slot 3: Status:Working Function:Active
    Link:      Base: Up  Fabric: Up
    Heartbeat: Management: Good  Data: Good
    Status Message:"Running"
  Slot 4: Status:Working Function:Active
    Link:      Base: Up  Fabric: Up
    Heartbeat: Management: Good  Data: Good
    Status Message:"Running"
  Slot 5: Status:Working Function:Active
    Link:      Base: Up  Fabric: Up
    Heartbeat: Management: Good  Data: Good
    Status Message:"Running"
The command output provides the same information as the Load Balance > Status page, including the slot
that contains the primary unit (slot 3), the number of workers in the cluster, the slots containing all of the workers
(3, 4, and 5), and the status of each. Status information includes the status of the connection between the
FortiController and the base and fabric backplanes, whether the heartbeat is active, and the status of the
FortiController and the data processed by it. The status message can also indicate if the FortiController is waiting
for a fabric connection or waiting for a base connection.
You can also use the following commands to display detailed session aware load balancing diagnostics:
diagnose SLBC {dp | tcam-rules}
The dp option provides diagnostics for the FortiASIC DP processors and the tcam-rules option provides
diagnostics for content aware routing rules (TCAM).
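If you capture the get load-balance status output over a console or SSH session, a short script can check it automatically. This is a sketch only: the regular expression is an assumption based on the sample output above, and blades_not_working is an illustrative name, not a Fortinet command.

```python
import re

def blades_not_working(output):
    """Return slot numbers whose status is anything other than Working."""
    return [
        int(slot)
        for slot, status in re.findall(r"Slot (\d+): Status:(\w+)", output)
        if status != "Working"
    ]
```

An empty result means every blade in the captured output reported Status:Working; a non-empty result lists the slots to investigate on the Load Balance > Status page.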
Configuring communication between FortiControllers
SLBC clusters consisting of more than one FortiController use the following types of communication between
FortiControllers to operate normally:
- Heartbeat communication allows the FortiControllers in the cluster to find each other and share status information. If a FortiController stops sending heartbeat packets, it is considered down by the other cluster members. By default, heartbeat traffic uses VLAN 999.
- Base control communication between FortiControllers on subnet 10.101.11.0/255.255.255.0 using VLAN 301.
- Base management communication between FortiControllers on subnet 10.101.10.0/255.255.255.0 using VLAN 101.
- Session synchronization between FortiControllers in different chassis, so that if one FortiController fails another can take its place and maintain active communication sessions. FortiController-5103B session sync traffic uses VLAN 2000. FortiController-5903C and FortiController-5913C session sync traffic uses VLAN 1900 between the FortiControllers in slot 1 and VLAN 1901 between the FortiControllers in slot 2. You cannot change these VLANs. Session sync synchronizes sessions between workers that have the same slot ID (for example, chassis 1 slot 3 to chassis 2 slot 3). With session sync enabled, if a failover occurs the SLBC uses a best-effort approach to maintain existing sessions.
If a cluster contains more than one FortiController you must connect their front panel B1 and B2 interfaces
together for heartbeat and base control and management communication. You can also use the front panel
Mgmt interface for this configuration.
A cluster with two chassis must include session synchronization connections among all of the FortiControllers.
- For the FortiController-5103B, you must connect one of the front panel F1 to F8 interfaces of all of the FortiController-5103Bs together. For example, in a FortiController-5103B cluster with two chassis you can connect the F8 interfaces of the FortiControllers in the cluster together.
- For a FortiController-5903C or FortiController-5913C cluster, you use the B1 and B2 interfaces for the session synchronization connections.
See the two-chassis examples in this document for details. The requirements for these session sync connections
depend on the type of cluster.
- In a two-chassis A-P mode cluster with two or four FortiController-5103Bs, the session sync ports of all FortiController-5103Bs (for example, F8) must be connected to the same broadcast domain by connecting all of the F8 interfaces to the same switch.
- In a FortiController-5103B two-chassis dual mode cluster, session sync ports need to be connected 1-to-1 according to chassis slot: F8 from the FortiController-5103B in chassis 1 slot 1 needs to be connected to F8 in chassis 2 slot 1, and F8 in chassis 1 slot 2 needs to be connected to F8 in chassis 2 slot 2. Because these are 1-to-1 connections you can use patch cables to connect them. You can also make these connections through a switch.
- In a two-chassis A-P or dual mode cluster with two or four FortiController-5903Cs or FortiController-5913Cs, all of the B1 interfaces must be connected to the same 10 Gbps switch, and all of the B2 interfaces must be connected to a different 10 Gbps switch. Connecting the B1 and B2 interfaces to the same switch is not recommended because it requires a double-tagging VLAN configuration.
Network equipment carrying this communication must be able to handle the traffic. This traffic uses VLANs and
specific subnets so you may have to configure the network equipment to allow this communication.
Changing the base control subnet and VLAN
You can change the base control subnet and VLAN from the FortiController CLI. For example, to change the base
control subnet to 10.122.11.0/255.255.255.0 and the VLAN ID to 320:
config load-balance setting
set base-ctrl-network 10.122.11.0 255.255.255.0
config base-ctrl-interfaces
edit b1
set vlan-id 320
next
edit b2
set vlan-id 320
end
If required, you can use different VLAN IDs for the B1 and B2 interfaces.
Changing this VLAN only changes the VLAN used for base control traffic between chassis. Within a chassis the
default VLAN is used.
Changing the base management subnet and VLAN
You can change the base management subnet from the FortiController GUI by going to Load Balance >
Config and changing the Internal Management Network.
You can also change the base management subnet and VLAN ID from the FortiController CLI. For example, use
the following command to change the base management subnet to 10.121.10.0 and the VLAN to 131:
config load-balance setting
set base-mgmt-internal-network 10.121.10.0 255.255.255.0
config base-mgt-interfaces
edit b1
set vlan-id 131
next
edit b2
set vlan-id 131
end
If required, you can use different VLAN IDs for the B1 and B2 interfaces.
Changing this VLAN only changes the VLAN used for base management traffic between chassis. Within a chassis
the default VLAN is used.
Changing the heartbeat VLAN
To change the heartbeat VLAN from the FortiController GUI, from the System Information dashboard widget, beside HA
Status, select Configure. Change the VLAN to use for HA heartbeat traffic (1-4094) setting.
You can also change the heartbeat VLAN ID from the FortiController CLI. For example, to change the heartbeat
VLAN ID to 333:
config system ha
set hbdev-vlan-id 333
end
Using the FortiController-5103B mgmt interface as a heartbeat interface
On the FortiController-5103B, you can use the following command to add the mgmt interface to the list of heartbeat
interfaces. This example adds the mgmt interface to the B1 and B2 heartbeat interfaces. The B1 and
B2 ports are recommended because they are 10G ports, while the mgmt interface is a 100Mb interface.
config system ha
set hbdev b1 b2 mgmt
end
Changing the heartbeat interface mode
By default, only the first heartbeat interface (usually B1) is used for heartbeat traffic. If this interface fails on any
of the FortiControllers in a cluster, the second heartbeat interface (B2) is used.
You can use the following command to simultaneously use all heartbeat interfaces for heartbeat traffic:
config load-balance setting
set base-mgmt-interface-mode active-active
end
Enabling session sync and configuring the session sync interface
In a two chassis configuration you can use the following command to enable session synchronization:
config load-balance setting
set session-sync enable
end
Then, for the FortiController-5103B, you need to use the following command to select the interface to use for
session sync traffic. The following example sets the FortiController-5103B session sync interface to F4:
config system ha
set session-sync-port f4
end
The FortiController-5903C and FortiController-5913C use b1 and b2 as the session sync interfaces so no
configuration changes are required.
Changing load balancing settings
The default load balance configuration provides optimal performance for most network traffic and most
requirements. A number of load balance settings are available to change how packets and sessions are
processed by the cluster. These changes may be required for certain traffic types or to optimize performance for
your configuration.
Tuning TCP load balancing performance (TCP local ingress)
TCP packets pass through the FortiController twice: first on ingress when the packet is received from the network
by the FortiController front panel interface and a second time on egress after the packet leaves a worker and
before it exits from a FortiController front panel interface to the network. New TCP sessions can be added to the
FortiASIC DP processor session table on ingress or on egress. By default they are added on egress. Adding
sessions on egress makes more efficient use of FortiASIC DP processor memory because sessions that are
denied by worker firewall policies are not added to the FortiASIC DP processor session table. As a result the
SLBC cluster can support more active sessions.
Adding sessions to the session table on egress has a limitation: round-robin load balancing does not work. If you
need round-robin load balancing you must configure the cluster to add sessions to the FortiASIC DP processor on
ingress by entering the following command:
config load-balance session-setup
set tcp-ingress enable
end
Round-robin load balancing is now supported, but since sessions are added to the FortiASIC DP processor
before filtering by worker firewall policies, some of these sessions may subsequently be denied by those
policies. These denied sessions remain in the FortiASIC session table, taking up memory, until they time out.
In addition, adding sessions on ingress means that the FortiController is potentially open to DDoS attacks that
could be prevented by worker firewall policies.
In most cases, the default configuration of disabling TCP local ingress should be maintained. However, if you
need to use round-robin load balancing you can enable TCP local ingress as long as you are aware of the
limitations of this configuration.
For details about the life of a TCP packet, see Life of a TCP packet on page 50.
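The ingress/egress memory trade-off described above can be illustrated with a toy model. The function name and the numbers are invented for illustration; they are not Fortinet parameters or measured values.

```python
def dp_table_entries(total_sessions, denied_fraction, tcp_ingress):
    """Toy count of FortiASIC DP session-table entries in use."""
    denied = int(total_sessions * denied_fraction)
    if tcp_ingress:
        # Ingress setup: sessions are added before worker policy
        # filtering, so denied sessions occupy entries until they
        # time out (this is the cost of enabling round-robin).
        return total_sessions
    # Egress setup (the default): only sessions accepted by worker
    # firewall policies are added to the table.
    return total_sessions - denied
```

For example, with 1000 new sessions of which 20% are denied by worker policies, egress setup occupies 800 entries while ingress setup occupies all 1000.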
Tuning UDP load balancing (UDP local ingress and UDP remote/local session setup)
Similar to TCP packets, UDP packets also pass through the FortiController twice: first on ingress when the packet
is received from the network by the FortiController front panel interface and a second time on egress after the
packet leaves a worker and before it exits from a FortiController front panel interface to the network.
Just like TCP sessions, by default new UDP sessions are added to the FortiASIC DP processor session table on
egress. You can also enable UDP local ingress to add sessions to the FortiASIC DP processor on ingress using
the following command:
config load-balance session-setup
set udp-ingress enable
end
or from the FortiController GUI by going to Load Balance > Session > Setup > UDP Local Ingress.
On egress, UDP packets are not handled the same way as TCP packets. UDP packets are transmitted directly from
the FortiController fabric backplane interface to the FortiController front panel interface, bypassing the FortiASIC
DP processor. The workers update the FortiASIC DP processor UDP session table by sending worker-to-FortiController remote session setup helper packets.
You can change this egress behavior by adjusting the UDP remote/local session setup. The default setting is
remote. If you change the setting to local, both incoming and outgoing UDP sessions are forwarded by the
FortiASIC DP processor, effectively doubling the number of UDP sessions that the FortiASIC DP processor
handles. Doubling the session load on the FortiASIC DP processor can create a performance bottleneck.
You can switch UDP remote/local session setup to local if you experience errors with UDP traffic. In practice,
however, remote mode provides better performance without causing errors.
You can change UDP remote/local session setup with the following command:
config load-balance session-setup
set udp-session local
end
or from the FortiController GUI by going to Load Balance > Session > Setup > UDP Session Setup:
For details about the life of a UDP packet, see Life of a UDP packet on page 52.
Changing the load distribution method
Go to Load Balance > Session > Setup > Load Distribution to change the load distribution method used by
the cluster. The default load distribution method is src-dst-ip-sport-dport, which means sessions are identified by
their source and destination IP addresses and ports.
You can change the load distribution method using the following CLI command:
config load-balance session-setup
set load-distribution {round-robin | src-ip | dst-ip | src-dst-ip | src-ip-sport | dst-ip-dport | src-dst-ip-sport-dport}
end
The following load balancing schedules are available:
round-robin: Directs new requests to the next slot regardless of response time or number of connections. Round robin is only supported if TCP or UDP local ingress is enabled.
src-ip: The traffic load is distributed across all slots according to source IP address.
dst-ip: The traffic load is statically distributed across all slots according to destination IP address.
src-dst-ip: The traffic load is distributed across all slots according to the source and destination IP addresses.
src-ip-sport: The traffic load is distributed across all slots according to the source IP address and source port.
dst-ip-dport: The traffic load is distributed across all slots according to the destination IP address and destination port.
src-dst-ip-sport-dport: The traffic load is distributed across all slots according to the source and destination IP address, source port, and destination port. This is the default load balance schedule and represents true session-aware load balancing.
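For the hash-based schedules, the idea is that every field named in the schedule feeds a hash whose value selects the worker slot, so all packets sharing those fields land on the same worker. The following is a minimal sketch of that idea; the function name and hash choice are illustrative assumptions, not the FortiASIC DP processor's actual algorithm:

```python
import hashlib

def select_slot(schedule, src_ip, dst_ip, sport, dport, slots):
    """Pick a worker slot for a new session. Illustrative hash-based
    sketch only; the real selection is done in the DP processor."""
    keys = {
        "src-ip": (src_ip,),
        "dst-ip": (dst_ip,),
        "src-dst-ip": (src_ip, dst_ip),
        "src-ip-sport": (src_ip, sport),
        "dst-ip-dport": (dst_ip, dport),
        "src-dst-ip-sport-dport": (src_ip, dst_ip, sport, dport),
    }
    key = "|".join(str(k) for k in keys[schedule])
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return slots[digest % len(slots)]

# All packets of one session map to the same slot under the default schedule.
slots = [3, 4, 5]
a = select_slot("src-dst-ip-sport-dport", "10.0.0.1", "8.8.8.8", 5000, 53, slots)
b = select_slot("src-dst-ip-sport-dport", "10.0.0.1", "8.8.8.8", 5000, 53, slots)
assert a == b and a in slots
```

Because round-robin ignores packet fields entirely, it cannot guarantee that both directions of a session reach the same slot, which is why it requires local ingress session setup.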
TCP and UDP local ingress session setup and round robin load balancing
By default, TCP and UDP local ingress is set to disable and sessions are added to the load balancer memory only
after they are accepted by worker firewall policies. This setting improves performance because fewer sessions are
recorded and managed by the load balancer, leaving it more memory to handle accepted sessions.
Enabling local ingress session setup also makes a cluster more vulnerable to DDoS attacks, because the cluster
processes more sessions and because FortiGate DDoS protection cannot block DDoS attacks before they are
recorded by the load balancer.
However, disabling local ingress session setup means that round-robin load distribution is not supported.
In general, unless you need to use round-robin load distribution, you should leave TCP and UDP local ingress
set to disable.
Changing how UDP sessions are processed by the cluster
Go to Load Balance > Session > Setup > Session Setup to change how the cluster handles UDP sessions.
UDP session setup can be set to remote or local.
On the CLI the configuration is:
config load-balance session-setup
set udp-ingress-setup disable
set udp-session-setup {local | remote}
end
In local mode, UDP sessions are set up locally on the FortiController. All types of UDP traffic are supported by local
mode.
In remote mode, UDP sessions are set up on individual workers in the cluster. Remote mode results in better
performance, but some types of UDP traffic are not supported by remote mode.
Remote mode should work in most configurations, but for some configurations you may have to change the setting to
local. For example, if the load distribution method is set to round-robin, unidirectional UDP packets in a session
may be distributed to different chassis slots. For some UDP protocols this will cause problems.
You should also enable local mode if the cluster processes non-symmetric UDP traffic, no matter what the
distribution method is.
Tuning load balancing performance: fragmented sessions
Go to Load Balance > Session > Setup > Session Performance to control how the cluster handles
fragmented sessions.
From the CLI the configuration is:
config load-balance session-setup
set fragment-sessions enable
end
Changing this setting has no effect. Sending fragmented packets to the FortiASIC DP processors is disabled in
the current release.
Changing session timers
Go to Load Balance > Session > Timer to view and change load balancing session timers. These timers
control how long the FortiController waits before closing a session or performing a similar activity. In most cases
you do not have to adjust these timers, but they are available for performance tuning. The range for each timer is
1 to 15,300 seconds.
Use the following command to change these timers from the CLI:
config load-balance session-age
set fragment 120
set pin-hole 120
set rsync 300
set tcp-half-close 125
set tcp-half-open 125
set tcp-normal 3605
set tcp-timewait 2
set udp 185
end
Four of these FortiController timers have corresponding timers in the FortiGate-5000 configuration. The
FortiController timers must be set to values greater than or equal to the corresponding FortiGate-5000 timers.
The worker timers are (default values shown):
config global
config system global
set tcp-halfclose-timer 120
set tcp-halfopen-timer 120
set tcp-timewait-timer 1
set udp-idle-timer 180
end
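The constraint that each FortiController timer be at least its worker counterpart can be checked mechanically. This sketch hard-codes the default values quoted above; the pairing table and helper function are illustrative, not a Fortinet tool:

```python
# FortiController timer -> (FortiController default, corresponding
# FortiGate-5000 worker timer, worker default), per the four pairings
# described in this guide.
TIMER_PAIRS = {
    "tcp-half-close": (125, "tcp-halfclose-timer", 120),
    "tcp-half-open": (125, "tcp-halfopen-timer", 120),
    "tcp-timewait": (2, "tcp-timewait-timer", 1),
    "udp": (185, "udp-idle-timer", 180),
}

def check_timers(pairs):
    """Return the timers that violate the rule: the FortiController value
    must be greater than or equal to the corresponding worker value."""
    return [name for name, (fc, _, worker) in pairs.items() if fc < worker]

assert check_timers(TIMER_PAIRS) == []  # the defaults satisfy the constraint
```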
The following timers are supported:
age-interval tcp normal: The time to wait without receiving a packet before the session is considered closed. Default is 3605 seconds.
age-interval tcp timewait: The amount of time that the FortiController keeps normal TCP sessions in the TIME_WAIT state. Default is 2 seconds.
age-interval tcp half-open: The amount of time that the FortiController keeps normal TCP sessions in the HALF_OPEN state. Default is 125 seconds.
age-interval tcp half-close: The amount of time that the FortiController keeps normal TCP sessions in the HALF_CLOSE state. Default is 125 seconds.
age-interval udp: The amount of time that the FortiController keeps normal UDP sessions open after a packet is received. Default is 185 seconds.
age-interval pin-hole: The amount of time that the FortiController keeps pinhole sessions open. Default is 120 seconds.
age-interval rsync: When two FortiControllers are operating in HA mode, this timer controls how long a synced session can remain on the subordinate unit due to inactivity. If the session is active on the primary unit, rsync updates the session on the subordinate unit. So a long delay means the session is no longer active and should be removed from the subordinate unit. Default is 300 seconds.
age-interval fragment: To track fragmented frames, the FortiController creates fragmented sessions to track the individual fragments. Idle fragmented sessions are removed when this timer expires. Default is 120 seconds.
Life of a TCP packet
This section describes the life of a TCP packet and related sessions as it passes through an SLBC cluster. The life
of a packet is affected by TCP load balancing settings (see Tuning TCP load balancing performance (TCP local
ingress) on page 45).
Life of a TCP packet (default configuration: TCP local ingress disabled)
Here is what can happen when a TCP packet enters an SLBC cluster with the default load balancing configuration
(TCP local ingress disabled):
1. A TCP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and accepts
the packet. The packet is processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and denies
the session. The packet is dropped.
3. Accepted packets are received by the FortiController backplane interface.
If the packet is part of an established session the FortiASIC DP processor records the packet as part of an
established session.
If the packet is starting a new session, the FortiASIC DP processor adds the new session to its session table.
4. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by worker firewall policies. These
sessions expire and are removed from the table when no new packets have been received for the session within
the TCP session timeout.
Life of a TCP packet (TCP local ingress enabled)
With TCP local ingress enabled the life of a TCP packet looks like this:
1. A TCP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and accepts the packet. The packet is
processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and denies the packet. The packet is
blocked by the worker.
3. Accepted packets are received by the FortiController backplane interface and recorded by FortiASIC DP processor
as part of an established session.
4. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by and denied by worker firewall
policies. These sessions expire and are removed from the table when no new packets have been received for
the session within the TCP session timeout.
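The practical difference between the two variants is when the DP session table learns about a new session: at ingress (before any worker firewall policy runs) or at egress (only for accepted packets). Here is a toy model of that distinction; the class and method names are hypothetical, and the real logic runs in the FortiASIC DP processor:

```python
class DpSessionTable:
    """Toy model of FortiASIC DP session handling for TCP, contrasting
    local-ingress-enabled vs the default egress-time session insertion."""

    def __init__(self, local_ingress):
        self.local_ingress = local_ingress
        self.table = set()

    def on_ingress(self, session_key):
        if session_key in self.table:
            return "forward to owning worker"
        if self.local_ingress:
            # Session recorded before any worker firewall policy runs,
            # so sessions that will be denied also occupy table memory.
            self.table.add(session_key)
        return "forward to worker chosen by load distribution"

    def on_egress(self, session_key, accepted):
        # Default mode: only sessions accepted by worker policies are recorded.
        if accepted and not self.local_ingress:
            self.table.add(session_key)

# Default mode: a denied session never enters the table.
key = ("10.0.0.1", "8.8.8.8", 5000, 80, "tcp")
dp = DpSessionTable(local_ingress=False)
dp.on_ingress(key)
dp.on_egress(key, accepted=False)
assert key not in dp.table
```

This also illustrates why local ingress increases DDoS exposure: the table fills with sessions before any policy can reject them.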
Life of a UDP packet
This section describes four variations on the life of a UDP packet and related sessions as it passes through an
SLBC cluster. The life of a packet is affected by UDP load balancing settings (see Tuning UDP load balancing
(UDP local ingress and UDP remote/local session setup) on page 45).
Life of a UDP packet (default configuration: UDP local ingress disabled and
UDP remote session setup)
Here is what can happen when a UDP packet enters an SLBC cluster with the default load balancing configuration
(UDP local ingress disabled and UDP remote/local session setup set to remote):
1. A UDP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface. The packet is received by the
FortiController fabric backplane interface and then exits the cluster from a FortiController front panel
interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and accepts
the packet. The packet is processed by the worker and exits the worker’s fabric backplane interface. The
packet is received by the FortiController fabric backplane interface and then exits the cluster from a
FortiController front panel interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and denies
the session. The packet is blocked by the worker.
3. Accepted packets are received by the FortiController backplane interface.
4. Using worker-to-FortiController session setup helper packets, the workers send session updates for established
sessions and new sessions to the FortiASIC DP processor.
5. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by worker firewall policies. These
sessions expire and are removed from the table when no new packets have been received for the session within
the UDP session timeout.
Life of a UDP packet (UDP local ingress disabled and UDP local session setup)
Here is what can happen when a UDP packet enters an SLBC cluster with UDP local ingress disabled and UDP
remote/local session setup set to local:
1. A UDP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and accepts
the packet. The packet is processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session it is forwarded to the FortiController fabric backplane interface and
from there to the fabric backplane interface of a worker. The worker is selected by the FortiASIC DP
processor based on the load distribution method. The worker applies FortiGate firewall policies and denies
the session. The packet is blocked by the worker.
3. Accepted packets are received by the FortiController backplane interface.
If the packet is part of an established session the FortiASIC DP processor records the packet as part of an
established session.
If the packet is starting a new session, the FortiASIC DP processor adds the new session to its session table.
4. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by worker firewall policies. These
sessions expire and are removed from the table when no new packets have been received for the session within
the UDP session timeout.
Life of a UDP packet (UDP local ingress enabled and UDP remote session
setup)
With UDP local ingress enabled and UDP session setup set to remote, the life of a UDP packet looks like this:
1. A UDP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and accepts the packet. The packet is
processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and denies the packet. The packet is
blocked by the worker.
3. Accepted packets are received by the FortiController backplane interface.
4. Using worker-to-FortiController heartbeats, the workers send session updates for established sessions and new
sessions to the FortiASIC DP processor.
5. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by and denied by worker firewall
policies. These sessions expire and are removed from the table when no new packets have been received for
the session within the UDP session timeout.
Life of a UDP packet (UDP local ingress enabled and UDP local session setup)
With UDP local ingress enabled and UDP session setup set to local, the life of a UDP packet looks like this:
1. A UDP packet is received by a FortiController front panel interface.
2. The FortiASIC DP processor looks up the packet in its session table and one of the following happens:
If the packet is part of an established session it is forwarded to the FortiController fabric backplane interface
and from there to the fabric backplane interface of the worker that is processing the session. The packet is
then processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and accepts the packet. The packet is
processed by the worker and exits the worker’s fabric backplane interface.
If the packet is starting a new session the new session is added to the FortiASIC DP processor session table.
The packet is forwarded to the FortiController fabric backplane interface and from there to the fabric
backplane interface of a worker. The worker is selected by the FortiASIC DP processor based on the load
distribution method. The worker applies FortiGate firewall policies and denies the packet. The packet is
blocked by the worker.
3. Accepted packets are received by the FortiController backplane interface and recorded by FortiASIC DP processor
as part of an established session.
4. The packets exit the cluster through a FortiController front panel interface.
The FortiASIC DP processor session table contains sessions accepted by and denied by worker firewall
policies. These sessions expire and are removed from the table when no new packets have been received for
the session within the UDP session timeout.
SLBC with one FortiController-5103B
This example describes the basics of setting up a Session-aware Load Balancing Cluster (SLBC) that consists of
one FortiController-5103B, installed in chassis slot 1, and three FortiGate-5001C workers, installed in chassis
slots 3, 4, and 5. This SLBC configuration can have up to eight 10Gbit network connections.
[Network diagram: the Internet and the internal network connect to FortiController front panel interfaces fctl/f1 and fctl/f3. The FortiController in slot 1 load balances traffic over the fabric backplane to the workers (FortiGates) in slots 3, 4, and 5; the mgmt interface connects to the management network.]
1. Setting up the Hardware
Install a FortiGate-5000 series chassis and connect it to power. Install the FortiController in slot 1. Install the
workers in slots 3, 4, and 5. Power on the chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally.
(To check normal operation LED status see the FortiGate-5000 series documents available here.)
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiController
Connect to the FortiController GUI (using HTTPS) or CLI (using SSH) with the default IP address
(http://192.168.1.99) or connect to the FortiController CLI through the console port (Bits per second: 9600,
Data bits: 8, Parity: None, Stop bits: 1, Flow control: None). Log in using the admin administrator account and
no password.
Add a password for the admin administrator account. From the GUI use the Administrators widget, or from the CLI enter this command:
config admin user
edit admin
set password <password>
end
Change the FortiController mgmt interface IP address. From the GUI use the Management Port widget, or from the CLI enter this command:
config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default route for the management IP address, enter this command:
config route static
edit route 1
set gateway 172.20.120.2
end
Set the chassis type that you are using:
config system global
set chassis-type fortigate-5140
end
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list.
The Config page shows the slots in which the cluster expects to find workers. Since the workers have not been configured yet, their status is Down.
Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure them.
You can also enter the following CLI command to add slots 3, 4, and 5 to the cluster:
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter the following CLI command to configure the external management IP/Netmask and
management access to this address:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
3. Adding the workers
Enter this command to reset the workers to factory default settings:
execute factoryreset
If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You should also install any third-party certificates on each worker before forming the cluster. FortiToken licenses can be added at any time because they are synchronized to all workers.
Log into the CLI of each worker and enter this CLI command to set the worker to operate in FortiController mode:
config system elbc
set mode forticontroller
end
The worker restarts and joins the cluster. On the FortiController GUI go to Load Balance > Status. As the workers restart they should appear in their appropriate slots. The worker in the lowest slot number usually becomes the primary unit.
4. Results
You can now manage the workers in the same way as you would manage a standalone FortiGate. You can
connect to the worker GUI or CLI using the External Management IP. If you had configured the worker
mgmt1 or mgmt2 interfaces you can also connect to one of these addresses to manage the cluster.
To operate the cluster, connect networks to the FortiController front panel interfaces and connect to a worker
GUI or CLI to configure the workers to process the traffic they receive. When you connect to the External
Management IP you connect to the primary worker. When you make configuration changes they are
synchronized to all workers in the cluster.
By default on the workers, all FortiController front panel interfaces are in the root VDOM. You can configure
the root VDOM or create additional VDOMs and move interfaces into them.
For example, you could connect the Internet to FortiController front panel interface 4 (fctrl/f4 on the worker GUI and CLI) and an internal network to FortiController front panel interface 2 (fctrl/f2 on the worker GUI and CLI). Then enter the root VDOM and add a policy to allow users on the Internal network to access the Internet.
Active-Passive SLBC with two FortiController-5103Bs
This example describes the basics of setting up an active-passive Session-aware Load Balancing Cluster (SLBC)
that consists of two FortiController-5103Bs, installed in chassis slots 1 and 2, and three FortiGate-5001C
workers, installed in chassis slots 3, 4, and 5. This SLBC configuration can have up to eight redundant 10Gbit
network connections.
[Network diagram: the Internet and the internal network each have redundant connections to front panel interfaces fctl/f1 and fctl/f6 of the FortiControllers in slots 1 and 2. The B1 and B2 interfaces of the two FortiControllers are connected as heartbeat links. Traffic is load balanced over the fabric backplane to the workers (FortiGates) in slots 3, 4, and 5, and the mgmt interfaces connect to the management network.]
The FortiControllers in the same chassis operate in active-passive HA mode for redundancy. The
FortiController in slot 1 becomes the primary unit, actively processing sessions. The FortiController in slot 2
becomes the subordinate unit, sharing the primary unit's session table. If the primary unit fails, the subordinate
unit resumes all active sessions.
All networks have redundant connections to both FortiControllers. You also create heartbeat links between the
FortiControllers and management links from the FortiControllers to an internal network.
1. Setting up the Hardware
Install a FortiGate-5000 series chassis and connect it to power. Install the FortiControllers in slots 1 and 2.
Install the workers in slots 3, 4, and 5. Power on the chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(to check normal operation LED status, see the FortiGate-5000 series documents available here).
Create duplicate connections from the FortiController front panel interfaces to the Internet and to the internal
network.
Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat
link by connecting the FortiController B2 interfaces together. You can directly connect the interfaces with a
patch cable or connect them together through a switch. If you use a switch, it must allow traffic on the
heartbeat VLAN (default 999) and the base control and management VLANs (301 and 101). These
connections establish heartbeat, base control, and base management communication between the
FortiControllers. Only one heartbeat connection is required but redundant connections are recommended.
Connect the mgmt interfaces of both FortiControllers to the internal network or any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiControllers
Connect to the GUI (using HTTPS) or CLI (using SSH) of the FortiController in slot 1 with the default IP
address (http://192.168.1.99) or connect to the FortiController CLI through the console port (Bits per second:
9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: None).
Add a password for the admin administrator account. You can either use the GUI Administrators widget or enter this CLI command:
config admin user
edit admin
set password <password>
end
Change the FortiController mgmt interface IP address. Use the Management Port widget in the GUI or enter this command. Each FortiController should have a different Management IP address.
config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default route for the management IP address, enter this command:
config route static
edit 1
set gateway 172.20.120.2
end
Set the chassis type that you are using:
config system global
set chassis-type fortigate-5140
end
Configure active-passive HA on the FortiController in slot 1. From the FortiController GUI System Information widget, beside HA Status select Configure. Set Mode to Active-Passive, change the Group ID, and move the b1 and b2 interfaces to the Selected column, then select OK.
You can also enter this command:
config system ha
set mode a-p
set groupid 23
set hbdev b1 b2
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
You can also adjust other HA settings. For example, you could increase the Device Priority of the
FortiController that you want to become the primary unit, enable Override to make sure the FortiController
with the highest device priority becomes the primary unit, and change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network.
You would only select Enable chassis redundancy if your cluster has more than one chassis.
Log into the GUI of the FortiController in slot 2 and duplicate the HA configuration of the FortiController in
slot 1, except for the Device Priority and override setting, which can be different on each FortiController.
After a short time, the FortiControllers restart in HA mode and form an active-passive cluster. Both
FortiControllers must have the same HA configuration and at least one heartbeat link must be connected.
Normally the FortiController in slot 1 is the primary unit, and you can log into the cluster using the
management IP address you assigned to this FortiController.
You can confirm that the cluster has been formed by viewing the HA configuration from the FortiController GUI. The display should show both FortiControllers in the cluster.

Since the configuration of all FortiControllers is synchronized, you can complete the configuration of the cluster from the primary FortiController.

You can also go to Load Balance > Status to see the status of the cluster. This page should show both FortiControllers in the cluster. The FortiController in slot 1 is the primary unit (slot icon colored green) and the FortiController in slot 2 is the backup unit (slot icon colored yellow).
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list. The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured yet, their status will be Down.

Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure them.
You can also enter this command to add slots 3, 4, and 5 to the cluster:
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter this command to set the external management IP/Netmask and configure management
access:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management traffic between FortiControllers. The CLI syntax shows setting the default base management VLAN (101). You can also use this command to change the base management VLAN.

config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end

Enable base control traffic between FortiControllers. The CLI syntax shows setting the default base control VLAN (301). You can also use this command to change the base control VLAN.

config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
3. Adding the workers to the cluster

Reset the workers to factory default settings:

execute factoryreset

If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all workers.
Optionally give the mgmt1 and/or mgmt2 interfaces of each worker IP addresses and connect them to your
network. When a cluster is created, the mgmt1 and mgmt2 IP addresses are not synchronized, so you can
connect to and manage each worker separately.
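For example, from a worker's CLI you could configure its mgmt1 interface like this. The IP address and the set of management protocols shown here are placeholders; substitute values appropriate for your network.

config system interface
edit mgmt1
set ip 172.20.120.110/24
set allowaccess https ssh ping
end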
Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to
identify each worker.
Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode.

config system elbc
set mode forticontroller
end
The worker restarts and joins the cluster. On the FortiController GUI go to Load Balance > Status. As the workers restart they should appear in their appropriate slots.
4. Results
You can now connect to the worker GUI or CLI using the External Management IP and manage the
workers in the same way as you would manage a standalone FortiGate. If you configured the worker mgmt1
or mgmt2 interfaces you can also connect to these interfaces to configure the workers. Configuration
changes made to any worker are synchronized to all workers.
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the root VDOM. You can keep them in the root VDOM
or create additional VDOMs and move interfaces into them.
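As a sketch, enabling multiple VDOMs and moving a front panel interface into a new VDOM from a worker's CLI could look like the following. The VDOM name is a placeholder, and the exact command names can vary between FortiOS releases, so verify against your version's CLI reference.

config system global
set vdom-admin enable
end

config vdom
edit Internal-VDOM
end

config global
config system interface
edit "fctrl/f6"
set vdom "Internal-VDOM"
end
end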
For example, if you connect the Internet to FortiController front panel interface 1 (fctrl/f1 on the worker GUI and CLI) and the internal network to FortiController front panel interface 6 (fctrl/f6 on the worker GUI and CLI), you would access the root VDOM and add a security policy that allows users on the internal network to access the Internet.
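A minimal sketch of such a policy from the worker CLI, assuming the default "all" address objects and source NAT; the policy ID and options shown are illustrative only:

config firewall policy
edit 1
set srcintf "fctrl/f6"
set dstintf "fctrl/f1"
set srcaddr "all"
set dstaddr "all"
set action accept
set schedule "always"
set service "ALL"
set nat enable
next
end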
Dual mode SLBC with two FortiController-5103Bs
This example describes the basics of setting up a dual mode Session-aware Load Balancing Cluster (SLBC) that
consists of two FortiController-5103Bs, installed in chassis slots 1 and 2, and three FortiGate-5001C workers,
installed in chassis slots 3, 4, and 5. This SLBC configuration can have up to 16 10Gbit network connections.
[Topology: the Internet connects to fctl1/f2 on the FortiController in slot 1, and the internal network connects to fctl2/f6 on the FortiController in slot 2. Heartbeat links use the B1 and B2 interfaces. Traffic is load balanced over the fabric backplane to the workers (FortiGates) in slots 3, 4, and 5.]
The two FortiControllers in the same chassis operate in dual mode to double the number of network interfaces
available. In dual mode, two FortiControllers load balance traffic to multiple workers. Traffic can be received by
both FortiControllers and load balanced to all of the workers in the chassis. In a dual mode configuration the front
panel interfaces of both FortiControllers are active.
In a dual mode FortiController-5103B cluster up to 16 10Gbit network interfaces are available. The interfaces of
the FortiController in slot 1 are named fctrl1/f1 to fctrl1/f8 and the interfaces of the FortiController in slot 2 are
named fctrl2/f1 to fctrl2/f8.
All networks have single connections to the first or second FortiController. It is a best practice in a dual-mode
configuration to distribute traffic evenly between the FortiControllers. So in this example, ingress traffic from the
Internet is processed by the FortiController in slot 1 and egress traffic for the internal network is processed by the
FortiController in slot 2.
Redundant connections to a network from the FortiControllers in the same chassis are not
supported (unless you configure link aggregation).
One or more heartbeat links are created between the FortiControllers. Redundant heartbeat links are
recommended. The heartbeat links use the front panel B1 and B2 interfaces.
If one of the FortiControllers fails, the remaining FortiController keeps processing traffic received by its front
panel interfaces. Traffic to and from the failed FortiController is lost.
1. Setting up the Hardware
Install a FortiGate-5000 series chassis and connect it to power. Install the FortiControllers in slots 1 and 2.
Install the workers in slots 3, 4, and 5. Power on the chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(to check normal operation LED status, see the FortiGate-5000 series documents available here).
Create connections from the FortiController front panel interfaces to the Internet and to the internal network.
Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat
link by connecting the FortiController B2 interfaces together. You can directly connect the interfaces with a
patch cable or connect them together through a switch. If you use a switch, it must allow traffic on the
heartbeat VLAN (default 999) and the base control and management VLANs (301 and 101). These
connections establish heartbeat, base control, and base management communication between the
FortiControllers. Only one heartbeat connection is required but redundant connections are recommended.
Connect the mgmt interfaces of both FortiControllers to the internal network or to any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiControllers
Connect to the GUI (using HTTPS) or CLI (using SSH) of the FortiController in slot 1 with the default IP
address (https://192.168.1.99) or connect to the FortiController CLI through the console port (Bits per
second: 9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: None).
Add a password for the admin administrator account. You can either use the Administrators widget in the GUI or enter the following command in the CLI.

config admin user
edit admin
set password <password>
end
Change the FortiController mgmt interface IP address. Use the Management Port widget in the GUI or enter the following command in the CLI.

config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default route for the management IP address, enter this command.

config route static
edit 1
set gateway 172.20.120.2
end
Set the chassis type that you are using.

config system global
set chassis-type fortigate-5140
end
Configure dual mode HA on the FortiController in slot 1. From the FortiController GUI System Information widget, beside HA Status select Configure.

Set Mode to Dual Mode, change the Group ID, move the b1 and b2 interfaces to the Selected column, and select OK.

You can also enter this CLI command:

config system ha
set mode dual
set groupid 4
set hbdev b1 b2
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
You can also adjust other HA settings. For example, you could increase the Device Priority of the
FortiController that you want to become the primary unit, enable Override to make sure the FortiController
with the highest device priority becomes the primary unit, and change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network.
You would only select Enable chassis redundancy if your cluster has more than one chassis.
Log into the GUI of the FortiController in slot 2 and duplicate the HA configuration of the FortiController in
slot 1, except for the Device Priority and override setting, which can be different on each FortiController.
After a short time, the FortiControllers restart in HA mode and form a dual mode cluster. Both
FortiControllers must have the same HA configuration and at least one heartbeat link must be connected.
Normally the FortiController in slot 1 is the primary unit, and you can log into the cluster using the
management IP address you assigned to this FortiController.
If the FortiControllers are unable to form a cluster, check to make sure that they both have the same HA
configuration. Also they can't form a cluster if the heartbeat interfaces (B1 and B2) are not connected.
You can confirm that the cluster has been formed by viewing the HA configuration from the FortiController GUI. The display should show both FortiControllers in the cluster.

Since the configuration of the FortiControllers is synchronized, you can complete the configuration of the cluster from the primary FortiController.

You can also go to Load Balance > Status to see the status of the cluster. This page should show both FortiControllers in the cluster. Since both FortiControllers are active, their slot icons are both colored green.
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list. The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured yet, their status will be Down.

Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure them.

You can also enter this command to add slots 3, 4, and 5 to the cluster.
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter this command to configure the external management IP/Netmask and management
access to this address:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management traffic between FortiControllers. The CLI syntax shows setting the default base management VLAN (101). You can also use this command to change the base management VLAN.

config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end

Enable base control traffic between FortiControllers. The CLI syntax shows setting the default base control VLAN (301). You can also use this command to change the base control VLAN.

config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
3. Adding the workers to the cluster

Reset the workers to factory default settings:

execute factoryreset

If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all workers.
Optionally give the mgmt1 and/or mgmt2 interfaces of each worker IP addresses and connect them to your
network. When a cluster is created, the mgmt1 and mgmt2 IP addresses are not synchronized, so you can
connect to and manage each worker separately.
Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to
identify each worker.
Log into the CLI of each worker and enter this command to set the worker to operate in dual FortiController mode.

config system elbc
set mode dual-forticontroller
end
The worker restarts and joins the cluster. On the FortiController GUI go to Load Balance > Status. As the workers restart they should appear in their appropriate slots.
4. Results
You can now connect to the worker GUI or CLI using the External Management IP and manage the
workers in the same way as you would manage a standalone FortiGate. If you configured the worker mgmt1
or mgmt2 interfaces you can also connect to these interfaces to configure the workers. Configuration
changes made to any worker are synchronized to all workers.
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the root VDOM. You can keep them in the root VDOM
or create additional VDOMs and move interfaces into them.
For example, if you connect the Internet to front panel interface 2 of the FortiController in slot 1 (fctrl1/f2 on the worker GUI and CLI) and the internal network to front panel interface 6 of the FortiController in slot 2 (fctrl2/f6 on the worker GUI and CLI), you would access the root VDOM and add a security policy that allows users on the internal network to access the Internet.
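A minimal sketch of such a policy from the worker CLI, assuming the default "all" address objects and source NAT; the policy ID and options shown are illustrative only:

config firewall policy
edit 1
set srcintf "fctrl2/f6"
set dstintf "fctrl1/f2"
set srcaddr "all"
set dstaddr "all"
set action accept
set schedule "always"
set service "ALL"
set nat enable
next
end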
Active-passive SLBC with two FortiController-5103Bs and
two chassis
This example describes how to set up an active-passive session-aware load balancing cluster (SLBC) consisting of
two FortiGate-5000 chassis, two FortiController-5103Bs, and six FortiGate-5001Bs acting as workers, three in
each chassis. This SLBC configuration can have up to seven redundant 10Gbit network connections.
[Topology: the Internet connects to fctl/f2 and the internal network to fctl/f6 on both FortiControllers. The primary FortiController is in chassis 1 slot 1 and the backup FortiController is in chassis 2 slot 1. Heartbeat links use the B1 and B2 interfaces, and a session-sync link connects the F4 interfaces. Traffic is load balanced over the fabric backplane to the workers (FortiGates) in slots 3, 4, and 5 of each chassis.]
The FortiControllers operate in active-passive HA mode for redundancy. The FortiController in chassis 1 slot 1 will
be configured to be the primary unit, actively processing sessions. The FortiController in chassis 2 slot 1 becomes
the subordinate unit. If the primary unit fails the subordinate unit resumes all active sessions.
All networks in this example have redundant connections to both FortiControllers and redundant heartbeat and
base control and management links are created between the FortiControllers using their front panel B1 and B2
interfaces.
This example also includes a FortiController session sync connection between the FortiControllers using the
FortiController F4 front panel interface (resulting in the SLBC having a total of seven redundant 10Gbit network
connections). (You can use any fabric front panel interface.)
Heartbeat and base control and management traffic uses VLANs and specific subnets. So the switches and
network components used must be configured to allow traffic on these VLANs and you should be aware of the
subnets used in case they conflict with any connected networks.
This example sets the device priority of the FortiController in chassis 1 higher than the device priority of the
FortiController in chassis 2 to make sure that the FortiController in chassis 1 becomes the primary FortiController
for the cluster.
1. Setting up the Hardware
Install two FortiGate-5000 series chassis and connect them to power. Ideally each chassis should be
connected to a separate power circuit. Install a FortiController in slot 1 of each chassis. Install the workers in
slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis. Power on
both chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(to check normal operation LED status, see the FortiGate-5000 series documents available here).
Create duplicate connections from both FortiController front panel interfaces to the Internet and to the
internal network.
Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat
link by connecting the FortiController B2 interfaces together. You can directly connect the interfaces with a
patch cable or connect them together through a switch. If you use a switch, it must allow traffic on the
heartbeat VLAN (default 999) and the base control and management VLANs (301 and 101). These
connections establish heartbeat, base control, and base management communication between the
FortiControllers. Only one heartbeat connection is required but redundant connections are recommended.
Create a FortiController session sync connection between the chassis by connecting the FortiController F4
interfaces. If you use a switch it must allow traffic on the FortiController session sync VLAN (2000). You can
use any of the F1 to F8 interfaces. We chose F4 in this example to make the diagram easier to understand.
Connect the mgmt interfaces of both FortiControllers to the internal network or to any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiController in Chassis 1
Connect to the GUI (using HTTPS) or CLI (using SSH) of the FortiController in chassis 1 with the default IP
address (https://192.168.1.99) or connect to the FortiController CLI through the console port (Bits per second:
9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: None).
From the Dashboard System Information widget, set the Host Name to ch1-slot1. Or enter this command.

config system global
set hostname ch1-slot1
end

Add a password for the admin administrator account. You can either use the Administrators widget on the GUI or enter this command.

config admin user
edit admin
set password <password>
end
Change the FortiController mgmt interface IP address. Use the GUI Management Port widget or enter this command.

config system interface
edit mgmt
set ip 172.20.120.151/24
end

If you need to add a default route for the management IP address, enter this command.

config route static
edit 1
set gateway 172.20.120.2
end

Set the chassis type that you are using.

config system global
set chassis-type fortigate-5140
end
Enable FortiController session sync.

config load-balance setting
set session-sync enable
end

Configure active-passive HA. From the FortiController GUI System Information widget, beside HA Status select Configure.

Set Mode to Active-Passive, set the Device Priority to 250, change the Group ID, select Enable Override, enable Chassis Redundancy, set Chassis ID to 1, move the b1 and b2 interfaces to the Selected column, and select OK.

Enter this command to use the FortiController front panel F4 interface for FortiController session sync communication between FortiControllers.

config system ha
set session-sync-port f4
end
You can also enter the complete HA configuration with this command.
config system ha
set mode active-passive
set groupid 5
set priority 250
set override enable
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
set session-sync-port f4
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
Enable Override is selected to make sure the FortiController in chassis 1 always becomes the primary unit.
Enabling override could lead to the cluster renegotiating more often, so once the chassis is operating you can
disable this setting.
You can also adjust other HA settings. For example, you could change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network. You can also adjust the Heartbeat Interval and Number
of Heartbeats lost to adjust how quickly the cluster determines one of the FortiControllers has failed.
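As a sketch, the heartbeat timing might be tuned from the CLI as follows. The option names shown (hb-interval, hb-lost-threshold) follow FortiOS HA naming and are an assumption here, so verify them against the CLI reference for your FortiController release. Failover detection time is roughly the heartbeat interval multiplied by the number of lost heartbeats, so lowering either value makes the cluster react faster at the cost of more sensitivity to transient heartbeat loss.

config system ha
set hb-interval 2
set hb-lost-threshold 6
end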
3. Configuring the FortiController in Chassis 2
Log into the FortiController in chassis 2.

Enter these commands to set the host name to ch2-slot1 and duplicate the HA configuration of the FortiController in chassis 1. Except, do not select Enable Override, set the Device Priority to a lower value (for example, 10), and set the Chassis ID to 2. All other configuration settings are synchronized from the primary FortiController when the cluster forms.

config system global
set hostname ch2-slot1
end

config system ha
set mode active-passive
set groupid 5
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
set session-sync-port f4
end
4. Configuring the cluster
After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. Both
FortiControllers must have the same HA configuration and at least one heartbeat link (the B1 and B2
interfaces) must be connected. If the FortiControllers are unable to form a cluster, check to make sure that
they both have the same HA configuration. Also they can't form a cluster if the heartbeat interfaces (B1 and
B2) are not connected.
With the configuration described in the previous steps, the FortiController in chassis 1 should become the
primary unit and you can log into the cluster using the management IP address that you assigned to the
FortiController in chassis 1.
The FortiController in chassis 2 becomes the backup FortiController. You cannot log into or manage the
backup FortiController until you configure the cluster External Management IP and add workers to the
cluster. Once you do this you can use the External Management IP address and a special port number to
manage the backup FortiController. This is described below. (You can also connect to the backup
FortiController CLI using the console port.)
You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show both FortiControllers in the cluster. (Note that in some of the screen images in this example the host names shown may not match the host names used in the example configuration.)

You can also go to Load Balance > Status to see the status of the primary FortiController (slot icon colored green).
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list. The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured, their status will be Down.

Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.

You can also enter this command to add slots 3, 4, and 5 to the cluster.
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter this command to set the External Management IP and configure management access:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management traffic between FortiControllers. The CLI syntax shows setting the default base management VLAN (101). You can also use this command to change the base management VLAN.

config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end
Enable base control traffic between FortiControllers. The CLI syntax shows setting the default base control VLAN (301). You can also use this command to change the base control VLAN.
config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
5. Adding the workers to the cluster
Reset each worker to factory default settings.
execute factoryreset
If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.
config system interface
edit mgmt1
set ip 172.20.120.120
end
Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to identify each worker.
config system global
set hostname worker-chassis-1-slot-3
end
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs).
You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all workers.
Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.
config system elbc
set mode forticontroller
end
6. Managing the cluster
After the workers have been added to the cluster you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. SNMP traps and log messages are also sent from the primary worker with the External Management IP as their source address. Finally, connections to FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.
You can use the external management IP followed by a special port number to manage individual devices in
the cluster. The special port number identifies the protocol and the chassis and slot number of the device you
want to connect to. In fact this is the only way to manage the backup FortiControllers. The special port
number begins with the standard port number for the protocol you are using and is followed by two digits that
identify the chassis number and slot number. The port number is determined using the following formula:
service_port x 100 + (chassis_id - 1) x 20 + slot_id
service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for
SSH, 23 for Telnet, 161 for SNMP). chassis_id is the Chassis ID part of the FortiController HA configuration
and can be 1 or 2. slot_id is the number of the chassis slot.
Some examples:
HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302, browse to:
https://172.20.120.100:44302
HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024, browse to:
http://172.20.120.100:8024
HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310, browse to:
https://172.20.120.100:44310
HTTPS, chassis 2, slot 10: 443 x 100 + (2 - 1) x 20 + 10 = 44300 + 20 + 10 = 44330, browse to:
https://172.20.120.100:44330
SNMP query port, chassis 1, slot 4: 161 x 100 + (1 - 1) x 20 + 4 = 16100 + 0 + 4 = 16104
Telnet to connect to the CLI of the worker in chassis 2 slot 4: telnet 172.20.120.100 2324
To use SSH to connect to the CLI of the worker in chassis 1 slot 5: ssh [email protected] -p2205
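The special port formula is easy to get wrong by hand, so a small helper script can be used to check your arithmetic. This is an illustrative utility, not part of any Fortinet product; it simply implements the formula service_port x 100 + (chassis_id - 1) x 20 + slot_id from this section.

```python
# Compute the special management port number for a device in an SLBC cluster.
# Formula from the guide: service_port x 100 + (chassis_id - 1) x 20 + slot_id

# Standard management service ports listed in this section.
SERVICE_PORTS = {"http": 80, "https": 443, "ssh": 22, "telnet": 23, "snmp": 161}

def special_port(service: str, chassis_id: int, slot_id: int) -> int:
    """Return the special port for a management protocol, chassis, and slot."""
    return SERVICE_PORTS[service] * 100 + (chassis_id - 1) * 20 + slot_id

# HTTPS to the device in chassis 2, slot 10 -> 44330
print(special_port("https", 2, 10))
```

For example, special_port("https", 1, 2) returns 44302, matching the first example above.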
You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you
first configured the primary FortiController. You can also manage the workers by connecting directly to their
mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup FortiController
is by using its special port number.
To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your SNMP manager. You can get this MIB file from the Fortinet support site, in the same location as the current FortiController firmware (select the FortiSwitch-ATCA product).
On the primary FortiController GUI go to Load Balance > Status. As the workers in chassis 1 restart they should appear in their appropriate slots.
The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController status display includes a Config Master link that you can use to connect to the primary worker.
Log into the backup FortiController GUI (for example by browsing to https://172.20.120.100:44321) and go to Load Balance > Status. As the workers in chassis 2 restart they should appear in their appropriate slots.
The backup FortiController Status page shows the status of the workers in chassis 2 and does not include the Config Master link.
7. Results - Configuring the workers
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root
VDOM or create additional VDOMs and move interfaces into them.
For example, if you connect the Internet to the FortiController front panel 2 interfaces (fctrl/f2 on the worker GUI and CLI) and the internal network to the FortiController front panel 6 interfaces (fctrl/f6), you would access the root VDOM and add this policy to allow users on the internal network to access the Internet.
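As an illustration of such a policy (this CLI block is a sketch, not taken from this guide; the policy ID and the built-in all address and ALL service objects are assumptions), the configuration on the primary worker might look like this:

```
config firewall policy
edit 1
set srcintf "fctrl/f6"
set dstintf "fctrl/f2"
set srcaddr "all"
set dstaddr "all"
set action accept
set schedule "always"
set service "ALL"
set nat enable
next
end
```

Because the workers synchronize their configuration, adding the policy on the primary worker applies it to all workers in the cluster.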
8. Results - Checking the cluster status
You can use the following get and diagnose commands to show the status of the cluster and all of the
devices in it.
Log into the primary FortiController CLI and enter this command to view the system status of the primary FortiController. For example, you can use SSH to log into the primary FortiController CLI using the external management IP:
ssh [email protected] -p2201
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: a-p, master
System time: Sat Sep 13 06:51:53 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the load balance status of the primary FortiController and its workers. The command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Enter this command from the primary FortiController to show the HA status of the primary and backup FortiControllers. The command output shows a lot of information about the cluster including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.41, uptime=62581.81, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=2
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=1),
ip=169.254.128.42, uptime=1644.71, chassis=2(1)
slot: 1
sync: conf_sync=0, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=2
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=66430.35
status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
Log into the backup FortiController CLI and enter this command to view the status of the backup FortiController. To use SSH:
ssh [email protected] -p2221
Enter this command to view the status of the backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
get system status
Version: FortiController-5103B v5.0,build0020,131118
(Patch 3)
Branch Point: 0020
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: a-p, backup
System time: Sat Sep 13 07:29:04 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command from the backup FortiController to show the HA status of the backup and primary FortiControllers. Notice that the backup FortiController is shown first. The command output shows a lot of information about the cluster including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=1),
ip=169.254.128.42, uptime=3795.92, chassis=2(1)
slot: 1
sync: conf_sync=0, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0),
ip=169.254.128.41, uptime=64732.98, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=68534.90 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
Active-passive SLBC with four FortiController-5103Bs and two chassis
This example describes how to set up an active-passive session-aware load balancing cluster (SLBC) consisting of two FortiGate-5000 chassis, four FortiController-5103Bs (two in each chassis), and six FortiGate-5001Bs acting as workers (three in each chassis). This SLBC configuration can have up to seven redundant 10Gbit network connections.
[Network diagram: The Internet connects to the FortiController front panel 2 interfaces (fctl/f2) and the internal network connects to the front panel 6 interfaces (fctl/f6). Chassis 1 holds the primary FortiController in slot 1 and a backup in slot 2; chassis 2 holds backup FortiControllers in slots 1 and 2. Workers (FortiGates) in slots 3, 4, and 5 of each chassis receive traffic load balanced over the fabric backplane. Heartbeat links connect the B1 and B2 interfaces, the session-sync link uses the F4 interfaces, and the mgmt interfaces connect to the management network.]
The FortiControllers operate in active-passive HA mode for redundancy. The FortiController in chassis 1 slot 1 will
be configured to be the primary unit, actively processing sessions. The other FortiControllers become the
subordinate units.
In active-passive HA with two chassis and four FortiControllers, both chassis have two FortiControllers in active-passive HA mode and the same number of workers. Network connections are duplicated to the redundant FortiControllers in each chassis and between chassis for a total of four redundant data connections to each network.
All traffic is processed by the primary unit. If the primary unit fails, all traffic fails over to the chassis with two
functioning FortiControllers and one of these FortiControllers becomes the new primary unit and processes all
traffic. If the primary unit in the second chassis fails as well, one of the remaining FortiControllers becomes the
primary unit and processes all traffic.
Heartbeat and base control and management communication is established between the chassis using the
FortiController B1 and B2 interfaces. Only one heartbeat connection is required but redundant connections are
recommended. Connect all of the B1 and all of the B2 interfaces together using switches. This example shows
using one switch for the B1 connections and another for the B2 connections. You could also use one switch for
both the B1 and B2 connections but using separate switches provides more redundancy.
The following VLAN tags and subnets are used by traffic on the B1 and B2 interfaces:
- Heartbeat traffic uses VLAN 999.
- Base control traffic on the 10.101.11.0/255.255.255.0 subnet uses VLAN 301.
- Base management traffic on the 10.101.10.0/255.255.255.0 subnet uses VLAN 101.
This example also includes a FortiController session sync connection between the FortiControllers using the FortiController F4 front panel interface (resulting in the SLBC having a total of seven redundant 10Gbit network connections). (You can use any fabric front panel interface; F4 is used in this example to make the diagram clearer.) In a two-chassis A-P mode cluster with two or four FortiControllers, the session sync ports of all FortiControllers must be connected to the same broadcast domain. You can do this by connecting all of the F4 interfaces to the same switch.
FortiController-5103B session sync traffic uses VLAN 2000.
This example sets the device priority of the FortiController in chassis 1 slot 1 higher than the device priority of the other FortiControllers to make sure that the FortiController in chassis 1 slot 1 becomes the primary FortiController for the cluster. Override is also enabled on the FortiController in chassis 1 slot 1. Enabling override makes it more likely that the unit you select will actually become the primary unit, but it can also cause the cluster to negotiate more often.
1. Setting up the Hardware
Install two FortiGate-5000 series chassis and connect them to power. Ideally each chassis should be
connected to a separate power circuit. Install FortiControllers in slot 1 and 2 of each chassis. Install the
workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis.
Power on both chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(to check normal operation LED status, see the FortiGate-5000 series documents available here).
Create redundant connections from all four FortiController front panel interfaces to the Internet and to the
internal network.
Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat
link by connecting the FortiController B2 interfaces together.
Create FortiController session sync connections between the chassis by connecting the FortiController F4
interfaces together as described above and shown in the diagram.
Connect the mgmt interfaces of all of the FortiControllers to the internal network or any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiControllers
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiController in Chassis 1 Slot 1
This will become the primary FortiController. To make sure this is the primary FortiController it will be
assigned the highest device priority and override will be enabled. Connect to the GUI (using HTTPS) or CLI
(using SSH) of the FortiController in chassis 1 slot 1 with the default IP address (http://192.168.1.99) or
connect to the FortiController CLI through the console port (Bits per second: 9600, Data bits: 8, Parity: None,
Stop bits: 1, Flow control: None).
From the Dashboard System Information
widget, set the Host Name to ch1-slot1.
Or enter this command.
config system global
set hostname ch1-slot1
end
Add a password for the admin
administrator account. You can either use
the Administrators widget on the GUI or
enter this command.
config admin user
edit admin
set password
end
Change the FortiController mgmt
interface IP address. Use the GUI
Management Port widget or enter this
command.
config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default route for the
management IP address, enter this
command.
config route static
edit 1
set gateway 172.20.120.2
end
Set the chassis type that you are using.
config system global
set chassis-type fortigate-5140
end
Enable FortiController session sync.
config load-balance setting
set session-sync enable
end
Configure Active-Passive HA. From the
FortiController GUI System Information
widget, beside HA Status select
Configure.
Set Mode to Active-Passive, set the
Device Priority to 250, change the
Group ID, select Enable Override,
enable Chassis Redundancy, set
Chassis ID to 1 and move the b1 and b2
interfaces to the Selected column and
select OK.
Enter this command to use the
FortiController front panel F4 interface for
FortiController session sync
communication between FortiControllers.
config system ha
set session-sync-port f4
end
You can also enter the complete HA
configuration with this command.
config system ha
set mode active-passive
set groupid 15
set priority 250
set override enable
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
set session-sync-port f4
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
You can also adjust other HA settings. For example, you could change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network. You can also adjust the Heartbeat Interval and Number
of Heartbeats lost to adjust how quickly the cluster determines one of the FortiControllers has failed.
3. Configuring the FortiController in Chassis 1 Slot 2
Log into the FortiController in chassis 1
slot 2.
Enter these commands to set the host
name to ch1-slot2, to configure the mgmt
interface, and to duplicate the HA
configuration of the FortiController in slot
1. Except, do not select Enable Override
and set the Device Priority to a lower
value (for example, 10).
All other configuration settings are
synchronized from the primary
FortiController when the cluster forms.
config system global
set hostname ch1-slot2
end
config system interface
edit mgmt
set ip 172.20.120.152/24
end
config system ha
set mode active-passive
set groupid 15
set priority 10
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
set session-sync-port f4
end
4. Configuring the FortiController in Chassis 2 Slot 1
Log into the FortiController in chassis 2
slot 1.
Enter these commands to set the host
name to ch2-slot1, to configure the mgmt
interface, and to duplicate the HA
configuration of the FortiController in
chassis 1 slot 1. Except, do not select
Enable Override and set the Device
Priority to a lower value (for example,
10), and set the Chassis ID to 2.
All other configuration settings are
synchronized from the primary
FortiController when the cluster forms.
config system global
set hostname ch2-slot1
end
config system interface
edit mgmt
set ip 172.20.120.251/24
end
config system ha
set mode active-passive
set groupid 15
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
set session-sync-port f4
end
5. Configuring the FortiController in Chassis 2 Slot 2
Log into the FortiController in chassis 2
slot 2.
Enter these commands to set the host
name to ch2-slot2, to configure the mgmt
interface, and to duplicate the HA
configuration of the FortiController in
chassis 1 slot 1. Except, do not select
Enable Override and set the Device
Priority to a lower value (for example,
10), and set the Chassis ID to 2.
All other configuration settings are
synchronized from the primary
FortiController when the cluster forms.
config system global
set hostname ch2-slot2
end
config system interface
edit mgmt
set ip 172.20.120.252/24
end
config system ha
set mode active-passive
set groupid 15
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
set session-sync-port f4
end
6. Configuring the cluster
After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. All of the
FortiControllers must have the same HA configuration and at least one heartbeat link (the B1 and B2
interfaces) must be connected. If the FortiControllers are unable to form a cluster, check to make sure that
they all have the same HA configuration. Also they can't form a cluster if the heartbeat interfaces (B1 and B2)
are not connected.
With the configuration described in the previous steps, the FortiController in chassis 1 slot 1 should become
the primary unit and you can log into the cluster using the management IP address that you assigned to this
FortiController.
The other FortiControllers become backup FortiControllers. You cannot log into or manage the backup
FortiControllers until you configure the cluster External Management IP and add workers to the cluster. Once
you do this you can use the External Management IP address and a special port number to manage the
backup FortiControllers. This is described below. (You can also connect to any backup FortiController CLI using its console port.)
You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show the FortiControllers in the cluster.
You can also go to Load Balance > Status to see the status of the primary FortiController (slot icon colored green).
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list.
The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured for SLBC operation their status will be Down.
Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.
You can also enter this command to add slots 3, 4, and 5 to the cluster.
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter this command to set the External Management IP and configure management access.
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management traffic between FortiControllers. The CLI syntax shows setting the default base management VLAN (101). You can also use this command to change the base management VLAN.
config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end
Enable base control traffic between FortiControllers. The CLI syntax shows setting the default base control VLAN (301). You can also use this command to change the base control VLAN.
config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
7. Adding the workers to the cluster
Reset each worker to factory default settings.
execute factoryreset
If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.
config system interface
edit mgmt1
set ip 172.20.120.120
end
Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to identify each worker.
config system global
set hostname worker-chassis-1-slot-3
end
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs).
You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all workers.
Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.
config system elbc
set mode forticontroller
end
8. Managing the cluster
After the workers have been added to the cluster you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. SNMP traps and log messages are also sent from the primary worker with the External Management IP as their source address. Finally, connections to FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.
You can use the external management IP followed by a special port number to manage individual devices in
the cluster. The special port number identifies the protocol and the chassis and slot number of the device you
want to connect to. In fact this is the only way to manage the backup FortiControllers. The special port
number begins with the standard port number for the protocol you are using and is followed by two digits that
identify the chassis number and slot number. The port number is determined using the following formula:
service_port x 100 + (chassis_id - 1) x 20 + slot_id
service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for
SSH, 23 for Telnet, 161 for SNMP). chassis_id is the Chassis ID part of the FortiController HA configuration
and can be 1 or 2. slot_id is the number of the chassis slot.
Some examples:
HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302, browse to:
https://172.20.120.100:44302
HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024, browse to:
http://172.20.120.100:8024
HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310, browse to:
https://172.20.120.100:44310
HTTPS, chassis 2, slot 10: 443 x 100 + (2 - 1) x 20 + 10 = 44300 + 20 + 10 = 44330, browse to:
https://172.20.120.100:44330
SNMP query port, chassis 1, slot 4: 161 x 100 + (1 - 1) x 20 + 4 = 16100 + 0 + 4 = 16104
Telnet to connect to the CLI of the worker in chassis 2, slot 4: telnet 172.20.120.100 2324
SSH to connect to the CLI of the worker in chassis 1, slot 5: ssh [email protected] -p2205
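The port formula can also be checked with a few lines of Python. This is a hypothetical helper (the function and dictionary names are illustrative, not part of any Fortinet tooling); the example IP 172.20.120.100 is the External Management IP used throughout this guide:

```python
# Hypothetical helper implementing the special management port formula
# from this guide: service_port x 100 + (chassis_id - 1) x 20 + slot_id

SERVICE_PORTS = {"http": 80, "https": 443, "ssh": 22, "telnet": 23, "snmp": 161}

def special_mgmt_port(service: str, chassis_id: int, slot_id: int) -> int:
    """Return the special port for managing the device in a given chassis slot."""
    if chassis_id not in (1, 2):
        raise ValueError("chassis_id must be 1 or 2")
    return SERVICE_PORTS[service] * 100 + (chassis_id - 1) * 20 + slot_id

# Worked examples from this section:
print(special_mgmt_port("https", 1, 2))   # 44302
print(special_mgmt_port("http", 2, 4))    # 8024
print(special_mgmt_port("telnet", 2, 4))  # 2324
print(special_mgmt_port("ssh", 1, 5))     # 2205
```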
You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you
first configured the primary FortiController. You can also manage the workers by connecting directly to their
mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup
FortiControllers is by using their special port numbers (or a serial connection to the Console port).
To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your
SNMP manager. You can get this MIB file from the Fortinet support site, in the same location as the current
FortiController firmware (select the FortiSwitch-ATCA product).
On the primary FortiController GUI go to Load Balance > Status. As the workers in chassis 1 restart they
should appear in their appropriate slots.
The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController
status display includes a Config Master link that you can use to connect to the primary worker.
Log into a backup FortiController GUI (for example, by browsing to https://172.20.120.100:44321 to log into
the FortiController in chassis 2 slot 1) and go to Load Balance > Status. If the workers in chassis 2 are
configured correctly they should appear in their appropriate slots.
The backup FortiController Status page shows the status of the workers in chassis 2 and does not include
the Config Master link.
9. Results - Configuring the workers
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root
VDOM or create additional VDOMs and move interfaces into them.
For example, if you connect the Internet to
FortiController front panel interface 2
(fctrl/f2 on the worker GUI and CLI) and
the internal network to FortiController
front panel interface 6 (fctrl/f6) you can
access the root VDOM and add a policy to
allow users on the Internal network to
access the Internet.
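A minimal sketch of such a policy on the primary worker, using standard FortiOS CLI syntax and the fctrl/f2 (Internet) and fctrl/f6 (internal network) interface names from this example. The policy ID and the NAT setting are illustrative, not prescribed by this guide:

```text
config firewall policy
    edit 1
        set srcintf "fctrl/f6"
        set dstintf "fctrl/f2"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
end
```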
10. Results - Primary FortiController cluster status
Log into the primary FortiController CLI and enter this command to view the system status of the primary
FortiController. For example, you can use SSH to log into the primary FortiController CLI using the external
management IP:
ssh [email protected] -p2201
get system status
Version: FortiController-5103B
v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: a-p, master
System time: Sun Sep 14 08:16:25 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the load balance status of the primary FortiController and its workers. The
command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Enter this command from the primary FortiController to show the HA status of the FortiControllers. The
command output shows a lot of information about the cluster, including the host names and chassis and slot
locations of the FortiControllers, the number of sessions each FortiController is processing (in this case, 0 for
each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of
FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The output
also shows that the B1 interfaces are connected (status=alive) and the B2 interfaces are not
(status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat
interfaces are recommended.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121,
uptime=4416.18, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123,
uptime=1181.62, chassis=2(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4739.97 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124,
uptime=335.79, chassis=2(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4739.93 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122,
uptime=4044.46, chassis=1(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4740.03 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
11. Results - Chassis 1 Slot 2 FortiController status
Log into the chassis 1 slot 2 FortiController CLI and enter this command to view the status of this backup
FortiController. To use SSH:
ssh [email protected] -p2202
get system status
Version: FortiController-5103B
v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3914000006
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch1-slot2
Current HA mode: a-p, backup
System time: Sun Sep 14 12:44:58 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Enter this command from the FortiController in chassis 1 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 1 slot 2 is shown first.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122,
uptime=4292.69, chassis=1(1)
slot: 2
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121,
uptime=4664.49, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4958.88 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123,
uptime=1429.99, chassis=2(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4958.88 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124,
uptime=584.20, chassis=2(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_time=
4958.88 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
12. Results - Chassis 2 Slot 1 FortiController status
Log into the chassis 2 slot 1 FortiController CLI and enter this command to view the status of this backup
FortiController. To use SSH:
ssh [email protected] -p2221
get system status
Version: FortiController-5103B
v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: a-p, backup
System time: Sun Sep 14 12:53:09 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 1 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 1 is shown first.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave
(priority=2), ip=169.254.128.123, uptime=1858.71, chassis=2(1)
slot: 1
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master
(priority=0), ip=169.254.128.121, uptime=5093.30, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 2074.15 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch2-slot2(FT513B3913000168), Slave
(priority=3), ip=169.254.128.124, uptime=1013.01, chassis=2(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 2074.15 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch1-slot2(FT513B3914000006), Slave
(priority=1), ip=169.254.128.122, uptime=4721.60, chassis=1(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 2074.17 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
13. Results - Chassis 2 Slot 2 FortiController status
Log into the chassis 2 slot 2 FortiController CLI and enter this command to view the status of this backup
FortiController. To use SSH:
ssh [email protected] -p2222
get system status
Version: FortiController-5103B
v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3913000168
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch2-slot2
Current HA mode: a-p, backup
System time: Sun Sep 14 12:56:45 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of the backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
Working: 3 [ 3 Active 0 Standby]
Ready: 0 [ 0 Active 0 Standby]
Dead: 0 [ 0 Active 0 Standby]
Total: 3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 4: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 2 is shown first.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch2-slot2(FT513B3913000168), Slave
(priority=3), ip=169.254.128.124, uptime=1276.77, chassis=2(1)
slot: 2
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 best=yes
local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master
(priority=0), ip=169.254.128.121, uptime=5356.98, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 1363.89 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch2-slot1(FT513B3912000051), Slave
(priority=2), ip=169.254.128.123, uptime=2122.58, chassis=2(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 1363.97 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
ch1-slot2(FT513B3914000006), Slave
(priority=1), ip=169.254.128.122, uptime=4985.27, chassis=1(1)
slot: 2
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/3, intf_state=(port up:)=0
force-state(0:none) hbdevs: local_interface= b1 last_hb_
time= 1363.89 status=alive
local_interface= b2 last_hb_time= 0.00 status=dead
Dual mode SLBC with four FortiController-5103Bs and two chassis
This example describes how to set up a dual-mode session-aware load balancing cluster (SLBC) consisting of two
FortiGate-5000 chassis, four FortiController-5103Bs (two in each chassis), and six FortiGate-5001Bs acting as
workers (three in each chassis). This SLBC configuration can have up to 14 redundant 10Gbit network
connections.
[Network diagram: two FortiGate-5000 chassis. Chassis 1 has the primary FortiController in slot 1 and a
backup in slot 2; chassis 2 has backup FortiControllers in slots 1 and 2. Workers (FortiGates) are installed in
slots 3, 4, and 5 of each chassis. The Internet connects to fctl1/f2 and the internal network to fctl2/f6. The B1
and B2 interfaces provide heartbeat links, F4 is the FortiController session-sync link, and traffic is load
balanced to the workers over the fabric backplane.]
In this dual mode configuration, the FortiController in chassis 1 slot 1 is configured to become the primary unit.
Both of the FortiControllers in chassis 1 receive traffic and load balance it to the workers in chassis 1. In dual
mode configuration the front panel interfaces of both FortiControllers are active. All networks have single
connections to the FortiController in slot 1 or the FortiController in slot 2. It is a best practice in a dual-mode
configuration to distribute traffic evenly between the FortiControllers. So in this example, ingress traffic from the
Internet is processed by the FortiController in slot 1 and egress traffic for the internal network is processed by the
FortiController in slot 2.
Redundant connections to a network from the FortiControllers in the same chassis are not
supported (unless you configure link aggregation).
The front panel F1 to F8 interfaces of the FortiController in slot 1 are named fctrl1/f1 to fctrl1/f8 and the front
panel F1 to F8 interfaces of the FortiController in slot 2 are named fctrl2/f1 to fctrl2/f8.
The network connections to the FortiControllers in chassis 1 are duplicated with the FortiControllers in chassis 2.
If one of the FortiControllers in chassis 1 fails, the FortiController in chassis 2 slot 1 becomes the primary
FortiController and all traffic fails over to the FortiControllers in chassis 2. If one of the FortiControllers in chassis
2 fails, the remaining FortiController in chassis 2 keeps processing traffic received by its front panel interfaces.
Traffic to and from the failed FortiController is lost.
Heartbeat and base control and management communication is established between the chassis using the
FortiController B1 and B2 interfaces. Only one heartbeat connection is required but redundant connections are
recommended. Connect all of the B1 and all of the B2 interfaces together using switches. This example shows
using one switch for the B1 connections and another for the B2 connections. You could also use one switch for
both the B1 and B2 connections but using separate switches provides more redundancy.
The following VLAN tags and subnets are used by traffic on the B1 and B2 interfaces:
- Heartbeat traffic uses VLAN 999.
- Base control traffic on the 10.101.11.0/255.255.255.0 subnet uses VLAN 301.
- Base management traffic on the 10.101.10.0/255.255.255.0 subnet uses VLAN 101.
This example also includes a FortiController session sync connection between the FortiControllers using the
FortiController F4 front panel interface (resulting in the SLBC having a total of 14 redundant 10Gbit network
connections). (You can use any fabric front panel interface; F4 is used in this example to make the diagram
clearer.) In a two chassis dual mode cluster, session sync ports need to be connected 1-to-1 according to chassis
slot. So F4 from the FortiController in chassis 1 slot 1 needs to be connected to F4 in chassis 2 slot 1, and F4 in
chassis 1 slot 2 needs to be connected to F4 in chassis 2 slot 2. Because these are 1-to-1 connections you can use
patch cables to connect them. You can also make these connections through a switch.
FortiController-5103B session sync traffic uses VLAN 2000.
This example sets the device priority of the FortiController in chassis 1 slot 1 higher than the device priority of the
other FortiControllers to make sure that it becomes the primary FortiController for the cluster. Override is also
enabled on the FortiController in chassis 1 slot 1. Enabling override makes it more likely that the unit you select
will actually become the primary unit, but it can also cause the cluster to negotiate more often to select the
primary unit.
1. Setting up the Hardware
Install two FortiGate-5000 series chassis and connect them to power. Ideally each chassis should be
connected to a separate power circuit. Install FortiControllers in slots 1 and 2 of each chassis. Install the
workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis.
Power on both chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(for normal operation LED status, see the FortiGate-5000 series documentation).
Create redundant network connections to FortiController front panel interfaces. In this example, a redundant
connection to the Internet is made to the F2 interface of the FortiController in chassis 1 slot 1 and the F2
interface of the FortiController in chassis 2 slot 1. This becomes the fctl1/f2 interface. As well, a redundant
connection to the internal network is made to the F6 interface of the FortiController in chassis 1 slot 2 and the
F6 interface of the FortiController in chassis 2 slot 2. This becomes the fctl2/f6 interface.
Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat
link by connecting the FortiController B2 interfaces together.
Create a FortiController session sync link by connecting the FortiController F4 interfaces as described above
and shown in the diagram.
Connect the mgmt interfaces of all of the FortiControllers to the internal network or any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiControllers
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiController in Chassis 1 Slot 1
This will become the primary FortiController. Connect to the GUI (using HTTPS) or CLI (using SSH) of the
FortiController in chassis 1 slot 1 with the default IP address (https://192.168.1.99) or connect to the
FortiController CLI through the console port (Bits per second: 9600, Data bits: 8, Parity: None, Stop bits: 1,
Flow control: None).
From the Dashboard System
Information widget, set the
Host Name to ch1-slot1. Or
enter this command.
config system global
set hostname ch1-slot1
end
Add a password for the admin
administrator account. You
can either use the
Administrators widget on
the GUI or enter this
command.
config admin user
edit admin
set password <password>
end
Change the FortiController
mgmt interface IP address.
Use the GUI Management
Port widget or enter this
command.
config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default
route for the management IP
address, enter this command.
config route static
edit 1
set gateway 172.20.120.2
end
Set the chassis type that you
are using.
config system global
set chassis-type fortigate-5140
end
Enable FortiController session sync.
config load-balance setting
set session-sync enable
end
Configure Dual mode HA. From the FortiController GUI System Information widget, beside HA Status
select Configure. Set Mode to Dual Mode, set the Device Priority to 250, change the Group ID, select
Enable Override, enable Chassis Redundancy, set Chassis ID to 1, move the b1 and b2 interfaces to the
Selected column, and select OK.
Enter these commands to use
the FortiController front panel
F4 interface for session sync
communication.
config system ha
set session-sync-port f4
end
You can also enter the
complete HA configuration
with this command.
config system ha
set mode dual
set groupid 25
set priority 250
set override enable
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
set session-sync-port f4
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
You can also adjust other HA settings. For example, you could change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network. You can also adjust the Heartbeat Interval and Number
of Heartbeats lost to adjust how quickly the cluster determines one of the FortiControllers has failed.
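For example, the heartbeat timing can be tuned from the CLI. The option names below (hb-interval, hb-lost-threshold) follow the FortiGate HA syntax and are an assumption here; confirm them in your FortiController firmware's CLI help before using them:

```text
config system ha
    # NOTE: option names assumed from FortiGate HA syntax; verify on your FortiController
    set hb-interval 4
    set hb-lost-threshold 12
end
```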
3. Configuring the FortiController in Chassis 1 Slot 2
Log into the FortiController in
chassis 1 slot 2.
Enter these commands to set
the host name to ch1-slot2, to
configure the mgmt interface,
and to duplicate the HA
configuration of the
FortiController in slot 1.
Except, do not select Enable
Override and set the Device
Priority to a lower value (for
example, 10).
All other configuration
settings are synchronized
from the primary
FortiController when the
cluster forms.
config system global
set hostname ch1-slot2
end
config system interface
edit mgmt
set ip 172.20.120.152/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
set session-sync-port f4
end
4. Configuring the FortiController in Chassis 2 Slot 1
Log into the FortiController in
chassis 2 slot 1.
Enter these commands to set
the host name to ch2-slot1, to
configure the mgmt interface,
and to duplicate the HA
configuration of the
FortiController in chassis 1
slot 1. Except, do not select
Enable Override and set the
Device Priority to a lower
value (for example, 10), and
set the Chassis ID to 2.
All other configuration
settings are synchronized
from the primary
FortiController when the
cluster forms.
config system global
set hostname ch2-slot1
end
config system interface
edit mgmt
set ip 172.20.120.251/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
set session-sync-port f4
end
5. Configuring the FortiController in Chassis 2 Slot 2
Log into the FortiController in
chassis 2 slot 2.
Enter these commands to set
the host name to ch2-slot2, to
configure the mgmt interface,
and to duplicate the HA
configuration of the
FortiController in chassis 1
slot 1. Except, do not select
Enable Override and set the
Device Priority to a lower
value (for example, 10), and
set the Chassis ID to 2.
All other configuration
settings are synchronized
from the primary
FortiController when the
cluster forms.
config system global
set hostname ch2-slot2
end
config system interface
edit mgmt
set ip 172.20.120.252/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
set session-sync-port f4
end
6. Configuring the cluster
After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. All of the
FortiControllers must have the same HA configuration and at least one heartbeat link (the B1 and B2
interfaces) must be connected. If the FortiControllers are unable to form a cluster, check to make sure that
they all have the same HA configuration. Also they can't form a cluster if the heartbeat interfaces (B1 and B2)
are not connected.
With the configuration described in the previous steps, the FortiController in chassis 1 slot 1 should become
the primary FortiController and you can log into the cluster using the management IP address that you
assigned to this FortiController.
The other FortiControllers become backup FortiControllers. You cannot log into or manage the backup
FortiControllers until you configure the cluster External Management IP and add workers to the cluster. Once
you do this you can use the External Management IP address and a special port number to manage the
backup FortiControllers. This is described below. (You can also connect to any backup FortiController CLI
using their console port.)
You can confirm that the
cluster has been formed by
viewing the FortiController HA
configuration. The display
should show all four of the
FortiControllers in the cluster.
You can also go to Load
Balance > Status to see the
status of the FortiControllers
(both slot icons should be
green because both
FortiControllers process
traffic).
Go to Load Balance >
Config to add the workers to
the cluster by selecting Edit
and moving the slots that
contain workers to the
Members list.
The Config page shows the
slots in which the cluster
expects to find workers. If the
workers have not been
configured for SLBC operation
their status will be Down.
Configure the External
Management IP/Netmask.
Once you have connected
workers to the cluster, you can
use this IP address to manage
and configure all of the
devices in the cluster.
You can also enter this
command to add slots 3, 4,
and 5 to the cluster.
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
You can also enter this command to set the External Management IP and configure management access:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management
traffic between
FortiControllers. The CLI
syntax shows setting the
default base management
VLAN (101). You can also use
this command to change the
base management VLAN.
config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end
Enable base control traffic
between FortiControllers. The
CLI syntax shows setting the
default base control VLAN
(301). You can also use this
command to change the base
control VLAN.
config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
7. Adding the workers to the cluster
Reset each worker to factory
default settings.
execute factoryreset
If the workers are going to run
FortiOS Carrier, add the
FortiOS Carrier license
instead. This will reset the
worker to factory default
settings.
Give the mgmt1 or mgmt2
interface of each worker an IP
address and connect these
interfaces to your network.
This step is optional but
useful because when the
workers are added to the
cluster, these IP addresses
are not synchronized, so you
can connect to and manage
each worker separately.
config system interface
edit mgmt1
set ip 172.20.120.120/24
end
Optionally give each worker a
different hostname. The
hostname is also not
synchronized and allows you
to identify each worker.
config system global
set hostname worker-chassis-1-slot-3
end
Register each worker and
apply licenses to each worker
before adding the workers to
the cluster. This includes
FortiCloud activation and
FortiClient licensing, and
entering a license key if you
purchased more than 10
Virtual Domains (VDOMs).
You can also install any third-party certificates on the
primary worker before forming
the cluster. Once the cluster is
formed, third-party certificates
are synchronized to all of the
workers. FortiToken licenses
can be added at any time
because they are
synchronized to all workers.
Log into the CLI of each
worker and enter this
command to set the worker to
operate in FortiController
mode. The worker restarts
and joins the cluster.
config system elbc
set mode dual-forticontroller
end
8. Managing the cluster
After the workers have been added to the cluster you can use the External Management IP to manage the
primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary
worker, and using FortiManager to manage the primary worker. SNMP traps and log messages are sent
from the primary worker with the External Management IP as their source address, and connections to
FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.
You can use the External Management IP followed by a special port number to manage individual devices
in the cluster. The special port number identifies the protocol and the chassis and slot number of the
device you want to connect to; it is also the only way to manage the backup FortiControllers. The special
port number begins with the standard port number for the protocol you are using and is followed by two
digits that identify the chassis number and slot number. The port number is
determined using the following formula:
service_port x 100 + (chassis_id - 1) x 20 + slot_id
service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for
SSH, 23 for Telnet, 161 for SNMP). chassis_id is the Chassis ID part of the FortiController HA configuration
and can be 1 or 2. slot_id is the number of the chassis slot.
Some examples:
- HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302; browse to https://172.20.120.100:44302
- HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024; browse to http://172.20.120.100:8024
- HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310; browse to https://172.20.120.100:44310
- HTTPS, chassis 2, slot 10: 443 x 100 + (2 - 1) x 20 + 10 = 44300 + 20 + 10 = 44330; browse to https://172.20.120.100:44330
- SNMP query port, chassis 1, slot 4: 161 x 100 + (1 - 1) x 20 + 4 = 16100 + 0 + 4 = 16104
- Telnet to the CLI of the worker in chassis 2, slot 4: telnet 172.20.120.100 2324
- SSH to the CLI of the worker in chassis 1, slot 5: ssh [email protected] -p2205
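As a quick sanity check, the port formula can be captured in a short script. This is an illustrative sketch only; the function name is not part of the product:

```python
# Special management port for SLBC:
#   service_port x 100 + (chassis_id - 1) x 20 + slot_id
def special_port(service_port: int, chassis_id: int, slot_id: int) -> int:
    return service_port * 100 + (chassis_id - 1) * 20 + slot_id

# HTTPS (443) to the device in chassis 2, slot 10
print(special_port(443, 2, 10))  # 44330
# SSH (22) to the worker in chassis 1, slot 5
print(special_port(22, 1, 5))    # 2205
```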
You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you
first configured the primary FortiController. You can manage the workers by connecting directly to their
mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup
FortiControllers is through their special port numbers (or a serial connection to the Console port).
To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your
SNMP manager. You can get this MIB file from the Fortinet support site, in the same location as the current
FortiController firmware (select the FortiSwitchATCA product).
FortiController Session-Aware Load Balancing (SLBC) Guide
Fortinet Technologies Inc.
120
Life of a UDP packet (UDP local ingress enabled and UDP local
session setup)
Dual mode SLBC with four FortiController-5103Bs
and two chassis
On the primary FortiController
GUI go to Load Balance >
Status. If the workers in
chassis 1 are configured
correctly they should appear
in their appropriate slots.
The primary FortiController
should be the FortiController
in chassis 1 slot 1. The
primary FortiController status
display includes a Config
Master link that you can use
to connect to the primary
worker.
Log into a backup
FortiController GUI (for
example by browsing to
https://172.20.120.100:44321
to log into the FortiController
in chassis 2 slot 1) and go to
Load Balance > Status. If
the workers in chassis 2 are
configured correctly they
should appear in their
appropriate slots.
The backup FortiController
Status page shows the status
of the workers in chassis 2
and does not include the
Config Master link.
9. Results - Configuring the workers
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root
VDOM or create additional VDOMs and move interfaces into them.
For example, if you connect
the Internet to FortiController
front panel interface 2
(fctrl1/f2 on the worker GUI
and CLI) and the internal
network to FortiController
front panel interface 6
(fctrl2/f6) you can access the
root VDOM and add a policy
to allow users on the Internal
network to access the
Internet.
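A minimal policy on the primary worker might look like the following sketch. The interface names match this example, but the all-address, all-service scope and the NAT setting are assumptions to adapt to your own network:

```
config firewall policy
    edit 1
        set srcintf "fctrl2/f6"
        set dstintf "fctrl1/f2"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
end
```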
10. Results - Primary FortiController cluster status
Log into the primary
FortiController CLI and
enter this command to view
the system status of the
primary FortiController.
For example, you can use SSH to log into the primary FortiController CLI
using the external management IP:
ssh [email protected] -p2201
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: dual, master
System time: Mon Sep 15 10:11:48 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the load balance status of the primary FortiController and its workers. The
command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
  Slot 3: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 4: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 5: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
Enter this command from the primary FortiController to show the HA status of the FortiControllers. The
command output shows a lot of information about the cluster including the host names and chassis and slot
locations of the FortiControllers, the number of sessions each FortiController is processing (in this case 0 for
each FortiController) the number of failed workers (0 of 3 for each FortiController), the number of
FortiController front panel interfaces that are connected (2 for each FortiController) and so on. The final two
lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not
(status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat
interfaces are recommended.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1517.38, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 best=yes
            local_interface=b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1490.50, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82192.16 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1476.37, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82192.27 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1504.58, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82192.16 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
11. Results - Chassis 1 Slot 2 FortiController status
Log into the chassis 1 slot 2
FortiController CLI and
enter this command to view
the status of this backup
FortiController.
To use SSH:
ssh [email protected] -p2202
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3914000006
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch1-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:14:53 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
  Slot 3: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 4: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 5: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
Enter this command from the FortiController in chassis 1 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 1 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1647.44, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 best=yes
            local_interface=b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1660.17, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82305.93 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1633.27, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82305.83 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1619.12, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=82305.93 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
12. Results - Chassis 2 Slot 1 FortiController status
Log into the chassis 2 slot 1
FortiController CLI and
enter this command to view
the status of this backup
FortiController.
To use SSH:
ssh [email protected] -p2221
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: dual, backup
System time: Mon Sep 15 10:17:10 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
  Slot 3: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 4: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 5: Status:Working Function:Active
    Link: Base: Up Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 1 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 1 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1785.61, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 best=yes
            local_interface=b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1812.38, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=79145.95 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1771.36, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=79145.99 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1799.56, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=79145.86 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
13. Results - Chassis 2 Slot 2 FortiController status
Log into the chassis 2 slot 2
FortiController CLI and
enter this command to view
the status of this backup
FortiController.
To use SSH:
ssh [email protected] -p2222
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3913000168
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch2-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:20:00 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of the backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
  Slot 3: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 4: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
  Slot 5: Status:Working Function:Active
    Link: Base: Down Fabric: Up
    Heartbeat: Management: Good Data: Good
    Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1874.39, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 best=yes
            local_interface=b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1915.59, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=78273.86 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1888.78, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=78273.85 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1902.72, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=b1 last_hb_time=78273.72 status=alive
            local_interface=b2 last_hb_time=0.00 status=dead
Dual mode SLBC with four FortiController-5903Cs and two
chassis
This example describes how to set up a dual-mode session-aware load balancing cluster (SLBC) consisting of two
FortiGate-5144C chassis, four FortiController-5903Cs (two in each chassis), and six FortiGate-5001Ds acting as
workers (three in each chassis). This SLBC configuration can have up to 8 redundant 40Gbps network
connections. The FortiGate-5144C is required to supply enough power for the FortiController-5903Cs and provide
40Gbps fabric backplane communication.
[Diagram: Two chassis, each with FortiControllers in slots 1 and 2 and workers (FortiGates) in slots 3, 4, and 5. Chassis 1 holds the primary FortiController (slot 1) and a backup (slot 2); chassis 2 holds backup FortiControllers in slots 1 and 2. The Internet connects to fctl1/f1 and the internal network to fctl2/f3; the B1 and B2 interfaces carry the session-sync and heartbeat links; traffic is load balanced to the workers over the fabric backplane.]
In this dual mode configuration, the FortiController in chassis 1 slot 1 is configured to become the primary unit.
Both of the FortiControllers in chassis 1 receive traffic and load balance it to the workers in chassis 1. In dual
mode configuration the front panel interfaces of both FortiControllers are active. All networks have single
connections to the FortiController in slot 1 or the FortiController in slot 2. It is a best practice in a dual-mode
configuration to distribute traffic evenly between the FortiControllers. So in this example, ingress traffic from the
Internet is processed by the FortiController in slot 1 and egress traffic for the internal network is processed by the
FortiController in slot 2.
Redundant connections to a network from the FortiControllers in the same chassis are not
supported (unless you configure link aggregation).
The front panel F1 to F4 interfaces of the FortiController in slot 1 are named fctrl1/f1 to fctrl1/f4 and the front
panel F1 to F4 interfaces of the FortiController in slot 2 are named fctrl2/f1 to fctrl2/f4.
The network connections to the FortiControllers in chassis 1 are duplicated with the FortiControllers in chassis 2.
If one of the FortiControllers in chassis 1 fails, the FortiController in chassis 2 slot 1 becomes the primary
FortiController and all traffic fails over to the FortiControllers in chassis 2. If one of the FortiControllers in chassis
2 fails, the remaining FortiController in chassis 2 keeps processing traffic received by its front panel interfaces.
Traffic to and from the failed FortiController is lost.
Heartbeat, base control, base management, and session sync communication is established between the chassis
using the FortiController B1 and B2 interfaces. Connect all of the B1 interfaces together using a 10 Gbps switch.
Connect all of the B2 interfaces together using another 10 Gbps switch. Using the same switch for the B1 and B2
interfaces is not recommended and requires a double VLAN tagging configuration.
The switches must be configured to support the following VLAN tags and subnets used by the traffic on the B1
and B2 interfaces:
- Heartbeat traffic uses VLAN 999.
- Base control traffic on the 10.101.11.0/255.255.255.0 subnet uses VLAN 301.
- Base management on the 10.101.10.0/255.255.255.0 subnet uses VLAN 101.
- Session sync traffic between the FortiControllers in slot 1 uses VLAN 1900.
- Session sync traffic between the FortiControllers in slot 2 uses VLAN 1901.
This example sets the device priority of the FortiController in chassis 1 slot 1 higher than the device priority of
the other FortiControllers to make sure that the FortiController in chassis 1 slot 1 becomes the primary
FortiController for the cluster. Override is also enabled on the FortiController in chassis 1 slot 1. Enabling
override makes it more likely that the unit you select will actually become the primary unit, but it can also
cause the cluster to negotiate more often.
1. Setting up the Hardware
Install two FortiGate-5144C series chassis and connect them to power. Ideally each chassis should be
connected to a separate power circuit. Install FortiControllers in slot 1 and 2 of each chassis. Install the
workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis.
Power on both chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally
(to check normal operation LED status, see the FortiGate-5000 series documents available here).
Create redundant network connections to FortiController front panel interfaces. In this example, a redundant
connection to the Internet is made to the F1 interface of the FortiController in chassis 1 slot 1 and the F1
interface of the FortiController in chassis 2 slot 1. This becomes the fctl1/f1 interface. As well, a redundant
connection to the internal network is made to the F3 interface of the FortiController in chassis 1 slot 2 and the
F3 interface of the FortiController in chassis 2 slot 2. This becomes the fctl2/f3 interface.
Create the heartbeat links by connecting the FortiController B1 interfaces together and the FortiController B2
interfaces together.
Connect the mgmt interfaces of all of the FortiControllers to the internal network or any network from which
you want to manage the cluster.
Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiControllers
and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA
product.
2. Configuring the FortiController in Chassis 1 Slot 1
This will become the primary FortiController. Connect to the GUI (using HTTPS) or CLI (using SSH) of the
FortiController in chassis 1 slot 1 with the default IP address (https://192.168.1.99) or connect to the
FortiController CLI through the console port (Bits per second: 9600, Data bits: 8, Parity: None, Stop bits: 1,
Flow control: None).
From the Dashboard System
Information widget, set the
Host Name to ch1-slot1. Or
enter this command.
config system global
set hostname ch1-slot1
end
Add a password for the admin
administrator account. You
can either use the
Administrators widget on
the GUI or enter this
command.
config admin user
edit admin
set password <password>
end
Change the FortiController
mgmt interface IP address.
Use the GUI Management
Port widget or enter this
command.
config system interface
edit mgmt
set ip 172.20.120.151/24
end
If you need to add a default
route for the management IP
address, enter this command.
config route static
edit 1
set gateway 172.20.120.2
end
Set the chassis type that you
are using.
config system global
set chassis-type fortigate-5144
end
Enable FortiController session
sync.
config load-balance setting
set session-sync enable
end
Configure Dual mode HA.
From the FortiController GUI
System Information widget,
beside HA Status select
Configure.
Set Mode to Dual Mode, set
the Device Priority to 250,
change the Group ID, select
Enable Override, enable
Chassis Redundancy, set
Chassis ID to 1 and move
the b1 and b2 interfaces to
the Selected column and
select OK.
Enter these commands to use
the FortiController front panel
F4 interface for session sync
communication.
config system ha
set session-sync-port f4
end
You can also enter the
complete HA configuration
with this command.
config system ha
set mode dual
set groupid 25
set priority 250
set override enable
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
end
If you have more than one cluster on the same network, each cluster should have a different Group ID.
Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes
a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice
and normally should be changed.
You can also adjust other HA settings. For example, you could change the VLAN to use for HA heartbeat
traffic if it conflicts with a VLAN on your network. You can also adjust the Heartbeat Interval and Number
of Heartbeats lost to adjust how quickly the cluster determines one of the FortiControllers has failed.
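For reference, on FortiGate units the equivalent timing values are set with the hb-interval and hb-lost-threshold options under config system ha. The option names on the FortiController may differ, so the fragment below is a sketch under that assumption; confirm the names on your unit (for example, by typing set followed by ? in the config system ha shell) before relying on it:
config system ha
set hb-interval 2
set hb-lost-threshold 10
end
In this sketch, hb-interval 2 would send heartbeats more frequently than the default and hb-lost-threshold 10 would declare a FortiController failed after 10 missed heartbeats; both names and units are assumptions to verify against your firmware's CLI reference.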
3. Configuring the FortiController in Chassis 1 Slot 2
Log into the FortiController in chassis 1 slot 2. Enter these commands to set the host name to ch1-slot2, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in slot 1, except do not enable Override and set the Device Priority to a lower value (for example, 10). All other configuration settings are synchronized from the primary FortiController when the cluster forms.
config system global
set hostname ch1-slot2
end
config system interface
edit mgmt
set ip 172.20.120.152/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 1
set hbdev b1 b2
end
4. Configuring the FortiController in Chassis 2 Slot 1
Log into the FortiController in chassis 2 slot 1. Enter these commands to set the host name to ch2-slot1, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in chassis 1 slot 1, except do not enable Override, set the Device Priority to a lower value (for example, 10), and set the Chassis ID to 2. All other configuration settings are synchronized from the primary FortiController when the cluster forms.
config system global
set hostname ch2-slot1
end
config system interface
edit mgmt
set ip 172.20.120.251/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
end
5. Configuring the FortiController in Chassis 2 Slot 2
Log into the FortiController in chassis 2 slot 2. Enter these commands to set the host name to ch2-slot2, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in chassis 1 slot 1, except do not enable Override, set the Device Priority to a lower value (for example, 10), and set the Chassis ID to 2. All other configuration settings are synchronized from the primary FortiController when the cluster forms.
config system global
set hostname ch2-slot2
end
config system interface
edit mgmt
set ip 172.20.120.252/24
end
config system ha
set mode dual
set groupid 25
set priority 10
set chassis-redundancy enable
set chassis-id 2
set hbdev b1 b2
end
6. Configuring the cluster
After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. All of the FortiControllers must have the same HA configuration, and at least one heartbeat link (the B1 and B2 interfaces) must be connected; the cluster cannot form if the heartbeat interfaces are not connected. If the FortiControllers are unable to form a cluster, verify that they all have the same HA configuration and that the heartbeat interfaces are connected.
With the configuration described in the previous steps, the FortiController in chassis 1 slot 1 should become
the primary FortiController and you can log into the cluster using the management IP address that you
assigned to this FortiController.
The other FortiControllers become backup FortiControllers. You cannot log into or manage the backup
FortiControllers until you configure the cluster External Management IP and add workers to the cluster. Once
you do this you can use the External Management IP address and a special port number to manage the
backup FortiControllers. This is described below. (You can also connect to any backup FortiController CLI
using their console port.)
You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show all four of the FortiControllers in the cluster. You can also go to Load Balance > Status to see the status of the FortiControllers (both slot icons should be green because both FortiControllers process traffic).
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list. The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured for SLBC operation their status will be Down.
Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.
You can also enter this command to add slots 3, 4, and 5 to the cluster:
config load-balance setting
config slots
edit 3
next
edit 4
next
edit 5
end
end
Make sure the FortiController fabric backplane ports are set to the correct speed. Since the workers are FortiGate-5001Ds and the cluster is using FortiGate-5144C chassis, the FortiController fabric backplane interface speed should be set to 40Gbps full duplex.
To change backplane fabric channel interface speeds, from the GUI go to Switch > Fabric Channel and edit the slot-3, slot-4, and slot-5 interfaces. For each one, set the Speed to 40Gbps Full-duplex and select OK.
From the CLI enter the following command to change the speed of the slot-3, slot-4, and slot-5 interfaces:
config switch fabric-channel physical-port
edit slot-3
set speed 40000full
next
edit slot-4
set speed 40000full
next
edit slot-5
set speed 40000full
end
end
You can also enter this command to set the External Management IP and configure management access:
config load-balance setting
set base-mgmt-external-ip 172.20.120.100 255.255.255.0
set base-mgmt-allowaccess https ssh ping
end
Enable base management traffic between FortiControllers. The CLI syntax shows setting the default base management VLAN (101). You can also use this command to change the base management VLAN:
config load-balance setting
config base-mgmt-interfaces
edit b1
set vlan-id 101
next
edit b2
set vlan-id 101
end
end
Enable base control traffic between FortiControllers. The CLI syntax shows setting the default base control VLAN (301). You can also use this command to change the base control VLAN:
config load-balance setting
config base-ctrl-interfaces
edit b1
set vlan-id 301
next
edit b2
set vlan-id 301
end
end
7. Adding the workers to the cluster
Reset each worker to factory default settings:
execute factoryreset
If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead. This will reset the worker to factory default settings.
Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.
config system interface
edit mgmt1
set ip 172.20.120.120
end
Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to identify each worker.
config system global
set hostname worker-chassis-1-slot-3
end
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all workers.
Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.
config system elbc
set mode dual-forticontroller
end
Set the backplane communication speed of the workers to 40Gbps to match the FortiController-5903C.
config system interface
edit elbc-ctrl/1
set speed 40000full
next
edit elbc-ctrl/2
set speed 40000full
end
8. Managing the cluster
After the workers have been added to the cluster you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. As well, SNMP traps and log messages are sent from the primary worker with the External Management IP as their source address. Finally, connections to FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.
You can use the external management IP followed by a special port number to manage individual devices in
the cluster. The special port number identifies the protocol and the chassis and slot number of the device you
want to connect to. In fact this is the only way to manage the backup FortiControllers. The special port
number begins with the standard port number for the protocol you are using and is followed by two digits that
identify the chassis number and slot number. The port number is determined using the following formula:
service_port x 100 + (chassis_id - 1) x 20 + slot_id
service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for
SSH, 23 for Telnet, 161 for SNMP). chassis_id is the Chassis ID part of the FortiController HA configuration
and can be 1 or 2. slot_id is the number of the chassis slot.
Some examples:
HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302; browse to https://172.20.120.100:44302
HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024; browse to http://172.20.120.100:8024
HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310; browse to https://172.20.120.100:44310
HTTPS, chassis 2, slot 10: 443 x 100 + (2 - 1) x 20 + 10 = 44300 + 20 + 10 = 44330; browse to https://172.20.120.100:44330
SNMP query port, chassis 1, slot 4: 161 x 100 + (1 - 1) x 20 + 4 = 16100 + 0 + 4 = 16104
Telnet to connect to the CLI of the worker in chassis 2 slot 4: telnet 172.20.120.100 2324
To use SSH to connect to the CLI of the worker in chassis 1 slot 5: ssh [email protected] -p2205
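The special port formula lends itself to a small calculator. The following Python sketch (a hypothetical helper, not part of any Fortinet tooling) implements the formula and reproduces the examples above:

```python
# Hypothetical helper for computing SLBC special management port numbers.
# Formula from this guide: service_port x 100 + (chassis_id - 1) x 20 + slot_id

SERVICE_PORTS = {"http": 80, "https": 443, "ssh": 22, "telnet": 23, "snmp": 161}

def special_port(service: str, chassis_id: int, slot_id: int) -> int:
    """Return the external-management port for a device in the cluster."""
    if chassis_id not in (1, 2):
        raise ValueError("chassis_id must be 1 or 2")
    return SERVICE_PORTS[service] * 100 + (chassis_id - 1) * 20 + slot_id

# Examples from this section:
print(special_port("https", 1, 2))   # 44302
print(special_port("http", 2, 4))    # 8024
print(special_port("snmp", 1, 4))    # 16104
print(special_port("ssh", 1, 5))     # 2205
```

The same arithmetic applies to any management protocol; only the base service port changes.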
You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you first configured the primary FortiController. You can also manage the workers by connecting directly to their mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup FortiControllers is by using their special port numbers (or a serial connection to the Console port).
To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your
SNMP manager. You can get this MIB file from the Fortinet support site, in the same location as the current
FortiController firmware (select the FortiSwitchATCA product).
On the primary FortiController GUI go to Load Balance > Status. If the workers in chassis 1 are configured correctly they should appear in their appropriate slots.
The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController status display includes a Config Master link that you can use to connect to the primary worker.
Log into a backup FortiController GUI (for example, by browsing to https://172.20.120.100:44321 to log into the FortiController in chassis 2 slot 1) and go to Load Balance > Status. If the workers in chassis 2 are configured correctly they should appear in their appropriate slots.
The backup FortiController Status page shows the status of the workers in chassis 2 and does not include the Config Master link.
9. Results - Configuring the workers
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By
default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root
VDOM or create additional VDOMs and move interfaces into them.
For example, if you connect the Internet to FortiController front panel interface 2 (fctrl1/f2 on the worker GUI and CLI) and the internal network to FortiController front panel interface 6 (fctrl2/f6), you can access the root VDOM and add a policy to allow users on the Internal network to access the Internet.
10. Results - Primary FortiController cluster status
Log into the primary FortiController CLI and enter this command to view the system status of the primary FortiController.
For example, you can use SSH to log into the primary FortiController CLI using the external management IP:
ssh [email protected] -p2201
get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: dual, master
System time: Mon Sep 15 10:11:48 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the load balance status of the primary FortiController and its workers. The
command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 4: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 5: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Enter this command from the primary FortiController to show the HA status of the FortiControllers. The command output shows a lot of information about the cluster, including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output for each member also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1517.38, chassis=1(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 best=yes
          local_interface= b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1490.50, chassis=2(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82192.16 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1476.37, chassis=2(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82192.27 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1504.58, chassis=1(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82192.16 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
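If you collect this output regularly, a short script can flag heartbeat interfaces that are reported dead. The following Python sketch (a hypothetical helper, assuming the status=alive/status=dead text shown above) scans the output for dead heartbeat devices:

```python
import re

def dead_hbdevs(output):
    """Return (member, interface) pairs whose heartbeat status is dead.

    Parses 'diagnose system ha status' text where each member line looks
    like 'ch2-slot1(FT...), Slave(...), ...' and hbdev lines contain
    'local_interface= b1 ... status=dead'.
    """
    member = None
    dead = []
    for line in output.splitlines():
        m = re.match(r"(\S+)\(FT\w+\),", line.strip())
        if m:
            member = m.group(1)
        m = re.search(r"local_interface=\s*(\S+).*status=dead", line)
        if m and member:
            dead.append((member, m.group(1)))
    return dead

sample = """\
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1517.38, chassis=1(1)
  hbdevs: local_interface= b1 last_hb_time=82192.16 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
"""
print(dead_hbdevs(sample))  # [('ch1-slot1', 'b2')]
```

A non-empty result would indicate a heartbeat interface (here B2) that needs its cabling or configuration checked.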
11. Results - Chassis 1 Slot 2 FortiController status
Log into the chassis 1 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh [email protected] -p2202
get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3914000006
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch1-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:14:53 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 4: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 5: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 1 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 1 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1647.44, chassis=1(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 best=yes
          local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1660.17, chassis=1(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82305.93 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1633.27, chassis=2(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82305.83 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1619.12, chassis=2(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=82305.93 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
12. Results - Chassis 2 Slot 1 FortiController status
Log into the chassis 2 slot 1 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh [email protected] -p2221
get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: dual, backup
System time: Mon Sep 15 10:17:10 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 4: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 5: Status:Working Function:Active
  Link: Base: Up Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 1 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 1 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1785.61, chassis=2(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 best=yes
          local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1812.38, chassis=1(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=79145.95 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1771.36, chassis=2(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=79145.99 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1799.56, chassis=1(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=79145.86 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
13. Results - Chassis 2 Slot 2 FortiController status
Log into the chassis 2 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh [email protected] -p2222
get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3913000168
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch2-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:20:00 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of the backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working: 3 [ 3 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   3 [ 3 Active 0 Standby]
Slot 3: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 4: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Slot 5: Status:Working Function:Active
  Link: Base: Down Fabric: Up
  Heartbeat: Management: Good Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 2 to show the HA status of the FortiControllers.
Notice that the FortiController in chassis 2 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1874.39, chassis=2(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 best=yes
          local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1915.59, chassis=1(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=78273.86 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1888.78, chassis=2(1)
  slot: 1
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=78273.85 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1902.72, chassis=1(1)
  slot: 2
  sync: conf_sync=1, elbc_sync=1, conn=3(connected)
  session: total=0, session_sync=in sync
  state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
  hbdevs: local_interface= b1 last_hb_time=78273.72 status=alive
          local_interface= b2 last_hb_time=0.00 status=dead
FortiController get and diagnose commands
This chapter introduces some useful FortiController get and diagnose commands. Contact [email protected]
if you have suggestions for additions or examples to add to this chapter.
get load-balance status
Display information about the status of the workers in the cluster. In the example below the cluster includes the
primary worker in slot 5 and one other worker in slot 6. You can also see that both workers are active and
operating correctly.
get load-balance status
ELBC Master Blade: slot-5
Confsync Master Blade: slot-5
Blades:
  Working: 2 [ 2 Active 0 Standby]
  Ready:   0 [ 0 Active 0 Standby]
  Dead:    0 [ 0 Active 0 Standby]
  Total:   2 [ 2 Active 0 Standby]
Slot 5: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
Slot 6: Status:Working Function:Active
Link: Base: Up Fabric: Up
Heartbeat: Management: Good Data: Good
Status Message:"Running"
diagnose system flash list
Displays information about the FortiController flash partitions including the firmware version on each partition.
diagnose system flash list
ImageName  Version                       TotalSize(KB)  Used(KB)  Use%  BootImage  RunningImage
primary    FT513B-5MR0-b0024-140815-P0   253871         27438     11%   Yes        Yes
secondary  FT513B-5MR0-b0024-140827-int  253871         27640     11%   No         No
diagnose system ha showcsum
Check the HA checksums for the cluster.
diagnose system ha showcsum [<level>] [<object>]
diagnose system ha showcsum
debugzone checksum:
06 d3 98 d4 ed 3f 08 1d 39 b3 0e 94 e9 57 98 41
checksum:
06 d3 98 d4 ed 3f 08 1d 39 b3 0e 94 e9 57 98 41
diagnose system ha stats
Display FortiController statistics for the cluster including the number of heartbeats transmitted and received for
each heartbeat device and the number of status changes. Normally the receive and transmitted numbers should
match or be similar and there shouldn’t be any status changes.
diagnose system ha stats
hbdev: b1
Hearbeat RX : 13676
Hearbeat TX : 13676
hbdev: b2
Hearbeat RX : 12385
Hearbeat TX : 13385
HA failover Master->Slave : 0
HA failover Slave->Master : 1
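The "RX and TX should match or be similar" rule can be checked automatically if you capture this output. The following Python sketch (a hypothetical helper, assuming the counter format shown above, including the "Hearbeat" spelling as printed by the CLI) flags heartbeat devices whose counters have drifted apart:

```python
import re

def hb_counter_drift(output, tolerance=0.05):
    """Map each hbdev name to True if its RX and TX heartbeat counts
    differ by more than `tolerance` (as a fraction of the larger count)."""
    drift = {}
    dev = None
    rx = None
    for line in output.splitlines():
        m = re.match(r"hbdev:\s*(\S+)", line)
        if m:
            dev = m.group(1)
            rx = None
        m = re.search(r"Hearbeat RX\s*:\s*(\d+)", line)  # spelling as in the CLI output
        if m:
            rx = int(m.group(1))
        m = re.search(r"Hearbeat TX\s*:\s*(\d+)", line)
        if m and dev is not None and rx is not None:
            tx = int(m.group(1))
            drift[dev] = abs(rx - tx) > tolerance * max(rx, tx, 1)
    return drift

sample = """\
hbdev: b1
Hearbeat RX : 13676
Hearbeat TX : 13676
hbdev: b2
Hearbeat RX : 12385
Hearbeat TX : 13385
"""
print(hb_counter_drift(sample))  # {'b1': False, 'b2': True}
```

In this sample, b2's counters differ by 1000 heartbeats (well over 5% of the larger count), so the helper flags it for investigation.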
diagnose system ha status
Display HA status information for the FortiController HA cluster including the HA mode. As well the command
displays status information for each of the FortiControllers in the cluster including the uptime, heartbeat IP
address, and synchronization status.
diagnose system ha status
mode: a-p
minimize chassis failover: 1
FT513B3913000068(FT513B3913000068), Slave(priority=1), ip=169.254.128.2,
uptime=3535.28, chassis=1(1)
slot: 2
sync: conf_sync=1, elbc_sync=1
session: total=0, session_sync=in sync
state: worker_failure=0/2, intf_state=(port up:)=5
force-state(0:none) hbdevs: local_interface= b1 best=yes
FT513B3913000082(FT513B3913000082), Master(priority=0),
ip=169.254.128.1, uptime=3704.54, chassis=1(1)
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
session: total=0, session_sync=in sync
state: worker_failure=0/2, intf_state=(port up:)=5
force-state(0:none) hbdevs: local_interface= b1
last_hb_time= 3622.36 status=alive
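The synchronization status shown above can be checked programmatically by scanning the conf_sync and elbc_sync flags. A minimal Python sketch (illustrative only, assuming the command output has been captured to text):

```python
# Illustrative sketch: confirm every cluster member in captured
# "diagnose system ha status" output reports conf_sync=1 and elbc_sync=1.

import re

SAMPLE = """\
FT513B3913000068(FT513B3913000068), Slave(priority=1), ip=169.254.128.2,
slot: 2
sync: conf_sync=1, elbc_sync=1
FT513B3913000082(FT513B3913000082), Master(priority=0),
slot: 1
sync: conf_sync=1, elbc_sync=1, conn=3(connected)
"""

def all_in_sync(output):
    """True when every conf_sync/elbc_sync flag in the capture is 1."""
    flags = re.findall(r"(conf_sync|elbc_sync)=(\d)", output)
    return bool(flags) and all(value == "1" for _, value in flags)

print(all_in_sync(SAMPLE))  # True
```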
diagnose system ha force-slave-state
Set an individual FortiController unit or chassis to switch to backup mode (slave state). You can set devices to
operate in backup mode, view which devices have been forced into this mode, and clear the force mode settings.
You must enter this command from the primary FortiController's CLI.
diagnose system ha force-slave-state {by-chassis | by-serial-number | clear | show}
- by-chassis          force a chassis to be slave
- by-serial-number    force up to 3 devices to be slave by serial number
- clear               clear force settings from the cluster
- show                show current force state
The following example shows how to force chassis 1 to operate in passive mode (slave state). The command
delays applying the passive mode state by 20 seconds.
diagnose system ha force-slave-state by-chassis 20 1
diagnose system load-balance worker-blade status
Display the status of the workers in the cluster. You can use this command to see the relative load being placed
on individual workers (CPU and memory usage).
diagnose system load-balance worker-blade status
load balance worker blade in slot 5 (service group 1, chassis 1, snmp blade index 1)
cpu usage 0 % mem usage 0 % uptime 0 seconds
sessions 0 setup rate 0
last successful update was NEVER, last attempt was less than 10 seconds ago
last update error count 1, total 685
load balance worker blade in slot 6 (service group 1, chassis 1, snmp blade index 2)
cpu usage 0 % mem usage 0 % uptime 0 seconds
sessions 0 setup rate 0
last successful update was NEVER, last attempt was less than 10 seconds ago
last update error count 1, total 685
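To compare the relative load on workers, the per-blade lines above can be parsed into a table. A minimal Python sketch (illustrative only; the non-zero usage values in the sample are hypothetical):

```python
# Illustrative sketch: extract per-blade CPU and memory usage from captured
# "diagnose system load-balance worker-blade status" output.
# The usage percentages in SAMPLE are made up for demonstration.

import re

SAMPLE = """\
load balance worker blade in slot 5 (service group 1, chassis 1, snmp blade index 1)
cpu usage 12 % mem usage 30 % uptime 86400 seconds
load balance worker blade in slot 6 (service group 1, chassis 1, snmp blade index 2)
cpu usage 45 % mem usage 35 % uptime 86400 seconds
"""

def blade_usage(output):
    """Return {slot: (cpu%, mem%)} for each worker blade in the capture."""
    usage, slot = {}, None
    for line in output.splitlines():
        m = re.search(r"worker blade in slot (\d+)", line)
        if m:
            slot = int(m.group(1))
        m = re.search(r"cpu usage (\d+) % mem usage (\d+) %", line)
        if m and slot is not None:
            usage[slot] = (int(m.group(1)), int(m.group(2)))
    return usage

print(blade_usage(SAMPLE))  # {5: (12, 30), 6: (45, 35)}
```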
diagnose system load-balance worker-blade session-clear
Clear worker sessions.
diagnose system load-balance worker-blade session-clear
diagnose switch fabric-channel egress list
Display an egress port map for the cluster fabric channel.
diagnose switch fabric-channel egress list
Switch Interface Egress Map, fabric-Channel
Port Map: Name(Id):
f1( 1) f2( 2) f3( 3)
f4( 4) f5( 5) f6( 6)
f7( 7) f8( 8) slot-1/2(15)
slot-3(16) slot-4(17) slot-5(18)
slot-6(19) slot-7(20) slot-8(21)
slot-9(22) slot-10(23) slot-11(24)
slot-12(25) slot-13(26)
slot-14(27) CPU( 0)
Source Interface Destination Ports
________________ ___________________________________
f5 0-27
f6 0-27
f7 0-27
f8 0-27
slot-1/2 0-27
slot-3 0-27
slot-4 0-27
slot-5 0-27
slot-6 0-27
slot-7 0-27
slot-8 0-27
slot-9 0-27
slot-10 0-27
slot-11 0-27
slot-12 0-27
slot-13 0-27
slot-14 0-27
trunk
diagnose switch base-channel egress list
Display an egress port map for the cluster base channel.
diagnose switch base-channel egress list
Switch Interface Egress Map, base-Channel
Port Map: Name(Id):
sh1(10) sh2(11) slot-1/2(12)
slot-3(13) slot-4(14) slot-5(15)
slot-6(16) slot-7( 2) slot-8( 3)
slot-9( 4) slot-10( 5) slot-11( 6)
slot-12( 7) slot-13( 8) slot-14( 9)
base-mgmt(25) b1(29) b2(28) CPU( 0)
Source Interface Destination Ports
________________ ___________________________________
sh1 0,2-29
sh2 0,2-29
slot-1/2 0,2-29
slot-3 0,2-29
slot-4 0,2-29
slot-5 0,2-29
slot-6 0,2-29
slot-7 0,2-29
slot-8 0,2-29
slot-9 0,2-29
slot-10 0,2-29
slot-11 0,2-29
slot-12 0,2-29
slot-13 0,2-29
slot-14 0,2-29
base-mgmt 0,2-29
b1 0,2-16,25-26
b2 0,2-16,25-26
diagnose switch fabric-channel packet heartbeat-counters list
Display details about heartbeat packets on different ports.
diagnose switch fabric-channel packet heartbeat-counters list
fabric-channel CPU counters:
packet receive error : 0
Non-zero port counters:
f1:
LACP packet : 167
STP packet : 2292
unknown bridge packet: 153
f2:
LACP packet : 168
unknown bridge packet: 153
f3:
LACP packet : 167
unknown bridge packet: 153
f4:
LACP packet : 168
unknown bridge packet: 153
slot-5:
elbcv3 heartbeat : 25210
slot-6:
elbcv3 heartbeat : 25209
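The per-port counters above can be collected into a structure for comparison across ports. A minimal Python sketch (illustrative only, assuming the command output has been captured to text):

```python
# Illustrative sketch: collect the non-zero counters per port from captured
# "diagnose switch fabric-channel packet heartbeat-counters list" output.

SAMPLE = """\
f1:
LACP packet : 167
STP packet : 2292
unknown bridge packet: 153
slot-5:
elbcv3 heartbeat : 25210
"""

def port_counters(output):
    """Return {port: {counter name: value}} for each port section."""
    counters, port = {}, None
    for raw in output.splitlines():
        line = raw.strip()
        if line.endswith(":") and ":" not in line[:-1]:
            # A line like "f1:" starts a new port section.
            port = line[:-1]
            counters[port] = {}
        elif ":" in line and port is not None:
            name, value = line.rsplit(":", 1)
            counters[port][name.strip()] = int(value)
    return counters

print(port_counters(SAMPLE)["slot-5"]["elbcv3 heartbeat"])  # 25210
```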
diagnose switch fabric-channel physical-ports
Display information about fabric channel physical ports.
diagnose switch fabric-channel physical-ports {clear-stats | list <port-name> | summary}
- clear-stats         reset counters on one or all ports
- list <port-name>    list details of a physical port's statistics
- summary             list summary information of all physical ports
The following example shows how to list all fabric channel ports, list details for slot-5, clear the counters for slot-5
and list the details for slot-5 again.
diagnose switch fabric-channel physical-ports summary
Portname Status Vlan Duplex Speed Flags
__________ ______ ____ ______ _____ __________
f1         up     100  full   10G   QE,TL,
f2         up     100  full   10G   QE,TL,
f3         up     100  full   10G   QE,TL,
f4         up     100  full   10G   QE,TL,
f5         down   104  full   10G   QE, ,
f6         down   105  full   10G   QE, ,
f7         down   106  full   10G   QE, ,
f8         up     107  full   10G   QE, ,
slot-1/2   down   1    full   10G   QI, ,
slot-3     down   1    full   10G   QI, ,
slot-4     down   1    full   10G   QI, ,
slot-5     up     400  full   10G   QI, ,
slot-6     up     400  full   10G   QI, ,
slot-7     down   1    full   10G   QI, ,
slot-8     down   1    full   10G   QI, ,
slot-9     down   1    full   10G   QI, ,
slot-10    down   1    full   10G   QI, ,
slot-11    down   1    full   10G   QI, ,
slot-12    down   1    full   10G   QI, ,
slot-13    down   1    full   10G   QI, ,
slot-14    down   1    full   10G   QI, ,
Flags: QS(802.1Q) QE(802.1Q-in-Q,external) QI(802.1Q-in-Q,internal)
TS(static trunk) TF(forti trunk) TL(lacp trunk); MD(mirror dst)
MI(mirror ingress) ME(mirror egress) MB(mirror ingress and egress)
diagnose switch fabric-channel physical-ports list slot-5
Port(slot-5) is up, line protocol is up
Interface Type is 10 Gigabit Media Independent Interface (XGMII)
Address is 00:09:0F:62:11:12, loopback is not set
MTU 16356 bytes, Encapsulation IEEE 802.3/Ethernet-II
full-duplex, 10000 b/s, link type is manual
25994 packets input, 2065330 bytes(unicasts 0, multicasts and broadcasts 25994)
33025 packets output, 14442754 bytes(unicasts 2, multicasts and broadcasts 33023)
input: good 25994, drop 0, unknown protocols 0
error 0 (crc 0, undersize 0, frame 0, oversize 0, jabbers 0, collisions 0)
output: good 33025, drop 0, error 0
diagnose switch fabric-channel physical-ports clear-stats
diagnose switch fabric-channel physical-ports list slot-5
Port(slot-5) is up, line protocol is up
Interface Type is 10 Gigabit Media Independent Interface (XGMII)
Address is 00:09:0F:62:11:12, loopback is not set
MTU 16356 bytes, Encapsulation IEEE 802.3/Ethernet-II
full-duplex, 10000 b/s, link type is manual
4 packets input, 272 bytes(unicasts 0, multicasts and broadcasts 4)
5 packets output, 2415 bytes(unicasts 0, multicasts and broadcasts 5)
input: good 4, drop 0, unknown protocols 0
error 0 (crc 0, undersize 0, frame 0, oversize 0, jabbers 0, collisions 0)
output: good 5, drop 0, error 0
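A quick health check on a port is to confirm that every drop and error counter in the output above is zero. A minimal Python sketch (illustrative only, assuming the command output has been captured to text):

```python
# Illustrative sketch: pull the "drop" and "error" totals out of captured
# "diagnose switch fabric-channel physical-ports list <port>" output and
# report whether the port is clean.

import re

SAMPLE = """\
input: good 25994, drop 0, unknown protocols 0
error 0 (crc 0, undersize 0, frame 0, oversize 0, jabbers 0, collisions 0)
output: good 33025, drop 0, error 0
"""

def port_is_clean(output):
    """True when every drop and error counter in the capture is zero."""
    drops = [int(n) for n in re.findall(r"drop (\d+)", output)]
    errors = [int(n) for n in re.findall(r"error (\d+)", output)]
    return all(n == 0 for n in drops + errors)

print(port_is_clean(SAMPLE))  # True
```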
diagnose switch fabric-channel mac-address list
List the MAC addresses of interfaces connected to the fabric channel.
diagnose switch fabric-channel mac-address list
MAC: 64:00:f1:d2:89:78 VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:7b VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:09:0f:74:eb:ed VLAN: 1002 Port: slot-5(port-id 18)
Flags: 0x00000440 [ used ]
MAC: 08:5b:0e:08:94:cc VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:1d:09:f1:14:6b VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:79 VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:7a VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:1d:09:f1:14:6b VLAN: 2005 Trunk: unknown(trunk-id 0)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:09:0f:74:ec:6b VLAN: 1002 Port: slot-6(port-id 19)
Flags: 0x00000440 [ used ]
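Each entry above pairs a MAC address and VLAN with either a trunk or a physical port. A minimal Python sketch (illustrative only) that turns captured output into structured records:

```python
# Illustrative sketch: turn captured "diagnose switch fabric-channel
# mac-address list" output into (mac, vlan, kind, attachment) records.

import re

SAMPLE = """\
MAC: 64:00:f1:d2:89:78 VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:09:0f:74:eb:ed VLAN: 1002 Port: slot-5(port-id 18)
Flags: 0x00000440 [ used ]
"""

def mac_entries(output):
    """Return a list of (mac, vlan, kind, attachment) tuples."""
    pattern = r"MAC: (\S+) VLAN: (\d+) (Trunk|Port): (.+)"
    return [(mac, int(vlan), kind, where.strip())
            for mac, vlan, kind, where in re.findall(pattern, output)]

for entry in mac_entries(SAMPLE):
    print(entry)
```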
diagnose switch fabric-channel mac-address filter
Filter fabric-channel MAC address data to diagnose behavior for link aggregation trunks.
diagnose switch fabric-channel mac-address filter {clear | flags | port-id-map | show |
trunk-id-map | vlan-map}
- clear           clear the MAC address display filter
- flags           enter a bit pattern to match and mask bits that are displayed
- port-id-map     set the port IDs to display
- show            show the current filter settings
- trunk-id-map    set the trunk IDs to display
- vlan-map        set the VLANs to display
The following commands show how to display the MAC address list for link aggregation trunk 1.
diagnose switch fabric-channel mac-address filter trunk-id-map 1
diagnose switch fabric-channel mac-address list
MAC: 64:00:f1:d2:89:78 VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:7b VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 08:5b:0e:08:94:cc VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 00:1d:09:f1:14:6b VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:79 VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
MAC: 64:00:f1:d2:89:7a VLAN: 100 Trunk: trunk(trunk-id 1)
Flags: 0x000004c0 [ used trunk ]
diagnose switch fabric-channel trunk list
List the fabric channel trunks and display status information for each link aggregation and LACP trunk. The
information displayed includes the ports in each link aggregation group and LACP flags.
diagnose switch fabric-channel trunk list
Switch Trunk Information, fabric-Channel
Trunk Name: trunk
Port Selection Algorithm: src-dst-ip
Minimum Links: 0
Active Port   Update Time
___________   ____________________
f1            14:52:36 Aug-27-2014
f2            14:52:38 Aug-27-2014
f3            14:52:38 Aug-27-2014
f4            14:52:38 Aug-27-2014
Non-Active Port Status
_______________ ____________________
LACP flags: (A|P)(S|F)(A|I)(I|O)(E|D)(E|D)
(A|P) - LACP mode is Active or Passive
(S|F) - LACP speed is Slow or Fast
(A|I) - Aggregatable or Individual
(I|O) - Port In sync or Out of sync
(E|D) - Frame collection is Enabled or Disabled
(E|D) - Frame distribution is Enabled or Disabled
status: up
Live links: 4
ports: 4
LACP mode: active
LACP speed: slow
aggregator ID: 1
actor key: 1
actor MAC address: 00:09:0f:62:11:01
partner key: 2
partner MAC address: 00:22:56:ba:5d:00
slave: f1
status: up
link failure count: 0
permanent MAC addr: 00:09:0f:62:11:01
actor state: ASAIEE
partner state: ASAIEE
aggregator ID: 1
slave: f2
status: up
link failure count: 0
permanent MAC addr: 00:09:0f:62:11:02
actor state: ASAIEE
partner state: ASAIEE
aggregator ID: 1
slave: f3
status: up
link failure count: 0
permanent MAC addr: 00:09:0f:62:11:03
actor state: ASAIEE
partner state: ASAIEE
aggregator ID: 1
slave: f4
status: up
link failure count: 0
permanent MAC addr: 00:09:0f:62:11:04
actor state: ASAIEE
partner state: ASAIEE
aggregator ID: 1
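The six-character actor and partner state strings shown above (for example, ASAIEE) can be expanded using the LACP flag legend. A minimal Python sketch (illustrative only):

```python
# Illustrative sketch: expand a six-character LACP state string such as
# "ASAIEE" using the flag legend from the trunk list output.

LACP_FLAGS = [
    {"A": "LACP mode Active", "P": "LACP mode Passive"},
    {"S": "LACP speed Slow", "F": "LACP speed Fast"},
    {"A": "Aggregatable", "I": "Individual"},
    {"I": "In sync", "O": "Out of sync"},
    {"E": "Collection Enabled", "D": "Collection Disabled"},
    {"E": "Distribution Enabled", "D": "Distribution Disabled"},
]

def decode_lacp_state(state):
    """Return the meaning of each position in a six-character state string."""
    if len(state) != 6:
        raise ValueError("expected a six-character LACP state string")
    return [LACP_FLAGS[i][ch] for i, ch in enumerate(state)]

print(decode_lacp_state("ASAIEE"))
```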
Copyright © 2017 Fortinet, Inc. All rights reserved. Fortinet®, FortiGate®, FortiCare® and FortiGuard®, and certain other marks are registered trademarks of Fortinet,
Inc., in the U.S. and other jurisdictions, and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company
names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and
actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein
represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written
contract, signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified
performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For
absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet’s internal lab tests. In no event does Fortinet make any
commitment related to future deliverables, features, or development, and circumstances may change such that any forward-looking statements herein are not accurate.
Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change, modify,
transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.